1. 09 May, 2017 14 commits
    • mm, vmalloc: use __GFP_HIGHMEM implicitly · 19809c2d
      Michal Hocko authored
      __vmalloc* allows users to provide gfp flags for the underlying
      allocation.  This API is quite popular
      
        $ git grep "=[[:space:]]__vmalloc\|return[[:space:]]*__vmalloc" | wc -l
        77
      
      The only problem is that many people are not aware that they should
      pass __GFP_HIGHMEM along with the other flags, because there is really
      no reason to consume precious low memory on CONFIG_HIGHMEM systems for
      pages which are mapped to the kernel vmalloc space.  About half of the
      users don't pass this flag, which signals that the API is unnecessarily
      complex.
      
      This patch simply applies __GFP_HIGHMEM implicitly when allocating
      pages to be mapped to the vmalloc space.  Existing users which add
      __GFP_HIGHMEM themselves are simplified to drop the flag.
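
      For illustration, a minimal sketch of a hypothetical caller before and
      after this change (assuming the three-argument __vmalloc() prototype
      of this kernel version):

        /* Before: the caller had to request highmem explicitly to avoid
         * pinning precious low memory for a vmalloc-mapped buffer. */
        buf = __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);

        /* After: __GFP_HIGHMEM is applied implicitly. */
        buf = __vmalloc(size, GFP_KERNEL, PAGE_KERNEL);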
      
      Link: http://lkml.kernel.org/r/20170307141020.29107-1-mhocko@kernel.org
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      19809c2d
    • mm, swap: use kvzalloc to allocate some swap data structures · 54f180d3
      Huang Ying authored
      The swap code currently uses vzalloc() to allocate various data
      structures, such as the swap cache, the swap slots cache and the
      cluster info, because their size may be too large on some systems for
      a normal kzalloc() to succeed.  But kzalloc() has some advantages, for
      example less memory fragmentation and less TLB pressure.  So change
      the data structure allocation in the swap code to kvzalloc(), which
      tries kzalloc() first and falls back to vzalloc() if kzalloc() fails.
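
      A minimal sketch of the converted allocation pattern (the helper names
      here are illustrative, not the exact swapfile.c code):

        /* kvzalloc() tries kzalloc() first and transparently falls back to
         * vzalloc(); kvfree() releases memory from either path. */
        static struct swap_cluster_info *alloc_cluster_info(unsigned long nr)
        {
                return kvzalloc(nr * sizeof(struct swap_cluster_info),
                                GFP_KERNEL);
        }

        static void free_cluster_info(struct swap_cluster_info *ci)
        {
                kvfree(ci);
        }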
      
      In general, although kmalloc() reduces the number of high-order pages
      only in the short term, vmalloc() causes more memory fragmentation
      pain in the long term, and the swap data structure allocations changed
      in this patch are expected to be long-term allocations.
      
      From Dave Hansen:
       "for example, we have a two-page data structure. vmalloc() takes two
        effectively random order-0 pages, probably from two different 2M pages
        and pins them. That "kills" two 2M pages. kmalloc(), allocating two
        *contiguous* pages, will not cross a 2M boundary. That means it will
        only "kill" the possibility of a single 2M page. More 2M pages == less
        fragmentation."
      
      The allocations changed by this patch occur at swapon time, which is
      usually done during system boot, so there is usually a good chance of
      allocating the contiguous pages successfully.
      
      The allocation for swap_map[] in struct swap_info_struct is not changed,
      because that is usually quite large and vmalloc_to_page() is used for
      it.  That makes it a little harder to change.
      
      Link: http://lkml.kernel.org/r/20170407064911.25447-1-ying.huang@intel.com
      
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Acked-by: Tim Chen <tim.c.chen@intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      54f180d3
    • treewide: use kv[mz]alloc* rather than opencoded variants · 752ade68
      Michal Hocko authored
      There are many code paths open-coding kvmalloc.  Let's use the helper
      instead.  The main difference from kvmalloc is that those open-coded
      users usually do not consider all the aspects of the memory allocator.
      E.g. allocation requests <= 32kB (with 4kB pages) basically never fail
      and will invoke the OOM killer to satisfy the allocation.  This sounds
      too disruptive for something that has a reasonable fallback - vmalloc.
      On the other hand, those requests might fall back to vmalloc even when
      the memory allocator would have succeeded after several more
      reclaim/compaction attempts.  There is no guarantee something like
      that happens, though.
      
      This patch converts many of those places to the kv[mz]alloc* helpers
      because they are more conservative.
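
      A hedged illustration of the kind of open-coded pattern being replaced
      (the variable name and gfp details are hypothetical; individual call
      sites differ):

        /* Open-coded kmalloc-with-vmalloc-fallback: */
        table = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
        if (!table)
                table = vzalloc(size);
        ...
        kvfree(table);

        /* With the helper, which applies the right gfp tweaks internally: */
        table = kvzalloc(size, GFP_KERNEL);
        ...
        kvfree(table);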
      
      Link: http://lkml.kernel.org/r/20170306103327.2766-2-mhocko@kernel.org
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> # Xen bits
      Acked-by: Kees Cook <keescook@chrom...
      752ade68
    • mm: support __GFP_REPEAT in kvmalloc_node for >32kB · 6c5ab651
      Michal Hocko authored
      The vhost code uses __GFP_REPEAT when allocating vhost_virtqueue and
      vhost_vsock because it would really like to prefer kmalloc to the
      vmalloc fallback - see 23cc5a99 ("vhost-net: extend device
      allocation to vmalloc") for more context.  Michael Tsirkin has also
      noted:
      
       "__GFP_REPEAT overhead is during allocation time. Using vmalloc means
        all accesses are slowed down. Allocation is not on data path, accesses
        are."
      
      The same applies to other vhost_kvzalloc users.
      
      Let's teach kvmalloc_node to handle __GFP_REPEAT properly.  There are
      two things to be careful about.  First, we should prevent the OOM
      killer from being invoked, and so have to apply __GFP_NORETRY by
      default; secondly, we have to override __GFP_REPEAT for !costly-order
      requests, because __GFP_REPEAT is ignored for !costly orders anyway.
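
      A simplified sketch of the behaviour described above (hedged; not a
      verbatim copy of the kvmalloc_node() change):

        gfp_t kmalloc_flags = flags;

        /* Keep the OOM killer out of the kmalloc attempt by default; only
         * honour __GFP_REPEAT for costly sizes (> 32kB with 4kB pages),
         * where the page allocator actually interprets it. */
        if (!(flags & __GFP_REPEAT) ||
            size <= PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)
                kmalloc_flags |= __GFP_NORETRY;

        ret = kmalloc_node(size, kmalloc_flags, node);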
      
      Supporting __GFP_REPEAT-like semantics for !costly requests would be
      possible, but it would require changes in the page allocator.  This is
      out of the scope of this patch.
      
      This patch shouldn't introduce any functional change.
      
      Link: http://lkml.kernel.org/r/20170306103032.2540-3-mhocko@kernel.org
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6c5ab651
    • mm, vmalloc: properly track vmalloc users · 1f5307b1
      Michal Hocko authored
      __vmalloc_node_flags used to be static inline, but this was changed by
      "mm: introduce kv[mz]alloc helpers" because kvmalloc_node needs to use
      it as well and that code lives outside of vmalloc proper.  I hadn't
      realized that this change would lead to a subtle bug, though.  The
      function is also responsible for tracking its caller, which is then
      printed by /proc/vmallocinfo.  If __vmalloc_node_flags is not inline,
      we only get the direct users of __vmalloc_node_flags (e.g. v[mz]alloc)
      recorded as callers, which reduces the usefulness of this debugging
      feature considerably.  It simply doesn't help to see that a given
      range belongs to vmalloc as the caller:
      
        0xffffc90002c79000-0xffffc90002c7d000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N0=3
        0xffffc90002c81000-0xffffc90002c85000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
        0xffffc90002c8d000-0xffffc90002c91000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
        0xffffc90002c95000-0xffffc90002c99000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
      
      We really want to catch the _caller_ of the vmalloc function.  Fix this
      issue by making __vmalloc_node_flags static inline again.
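
      The key point is that, as a static inline, __builtin_return_address(0)
      is evaluated in the actual caller.  Roughly (a hedged sketch of the
      helper as it looks within this series; details may differ):

        static inline void *__vmalloc_node_flags(unsigned long size,
                                                 int node, gfp_t flags)
        {
                /* Inlined into its caller, so the recorded caller is the
                 * function that invoked vmalloc()/kvmalloc(), not this
                 * helper itself. */
                return __vmalloc_node(size, 1, flags, PAGE_KERNEL,
                                      node, __builtin_return_address(0));
        }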
      
      Link: http://lkml.kernel.org/r/20170502134657.12381-1-mhocko@kernel.org
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f5307b1
    • mm: introduce kv[mz]alloc helpers · a7c3e901
      Michal Hocko authored
      Patch series "kvmalloc", v5.
      
      There are many open-coded kmalloc-with-vmalloc-fallback instances in
      the tree.  Most of them are not careful enough, or simply do not care,
      about the underlying semantics of the kmalloc/page allocator, which
      means that a) some vmalloc fallbacks are basically unreachable because
      the kmalloc part will keep retrying until it succeeds, and b) the page
      allocator can invoke really disruptive steps like the OOM killer to
      make forward progress, which doesn't sound appropriate when we
      consider that a vmalloc fallback is available.
      
      As can be seen, implementing kvmalloc properly requires quite intimate
      knowledge of the page allocator and the memory reclaim internals,
      which strongly suggests that a helper should be implemented in the
      memory subsystem proper.
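
      For reference, the helper family introduced by the series looks
      roughly like this (a sketch; the zeroing variants simply add
      __GFP_ZERO):

        void *kvmalloc_node(size_t size, gfp_t flags, int node);

        static inline void *kvmalloc(size_t size, gfp_t flags)
        {
                return kvmalloc_node(size, flags, NUMA_NO_NODE);
        }

        static inline void *kvzalloc(size_t size, gfp_t flags)
        {
                return kvmalloc(size, flags | __GFP_ZERO);
        }

        static inline void *kvzalloc_node(size_t size, gfp_t flags, int node)
        {
                return kvmalloc_node(size, flags | __GFP_ZERO, node);
        }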
      
      Most callers I could find have been converted to use the helper
      instead.  This is patch 6.  There are some more relying on
      __GFP_REPEAT in the networking stack which I have converted as well,
      and Eric Dumazet was not...
      a7c3e901
    • mm, compaction: finish whole pageblock to reduce fragmentation · baf6a9a1
      Vlastimil Babka authored
      The main goal of direct compaction is to form a high-order page for
      allocation, but it should also help against long-term fragmentation when
      possible.
      
      Most lower-than-pageblock-order compactions are for non-movable
      allocations, which means that if we compact in a movable pageblock and
      terminate as soon as we create the high-order page, it's unlikely that
      the fallback heuristics will claim the whole block.  Instead there might
      be a single unmovable page in a pageblock full of movable pages, and the
      next unmovable allocation might pick another pageblock and increase
      long-term fragmentation.
      
      To help against such scenarios, this patch changes the termination
      criteria for compaction so that the current pageblock is finished even
      after the high-order page has already been created.  Note that the
      high-order page might have formed elsewhere in the zone due to
      parallel activity, but this patch doesn't try to detect that.
      
      This is only done with sync compaction, because async compaction is
      limited to pageblocks of the same migratetype, where it cannot result
      in a migratetype fallback.  (Async compaction also eagerly skips
      order-aligned blocks where isolation fails, which is against the goal
      of migrating away as much of the pageblock as possible.)
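
      A hedged sketch of the changed termination check in
      __compact_finished(), once a suitable free page has been found
      (simplified; details may differ from the actual patch):

        /* A page of the requested order is now on the free lists. */
        if (cc->mode == MIGRATE_ASYNC ||
            IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages))
                return COMPACT_SUCCESS;

        /* Sync direct compaction: keep migrating until the scanner reaches
         * the end of the current pageblock before declaring success. */
        cc->finishing_block = true;
        return COMPACT_CONTINUE;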
      
      As a result of this patch, long-term memory fragmentation should be
      reduced.
      
      In testing based on 4.9 kernel with stress-highalloc from mmtests
      configured for order-4 GFP_KERNEL allocations, this patch has reduced
      the number of unmovable allocations falling back to movable pageblocks
      by 20%.  The number
      
      Link: http://lkml.kernel.org/r/20170307131545.28577-9-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      baf6a9a1
    • mm, compaction: restrict async compaction to pageblocks of same migratetype · 282722b0
      Vlastimil Babka authored
      The migrate scanner in async compaction is currently limited to
      MIGRATE_MOVABLE pageblocks.  This is a heuristic intended to reduce
      latency, based on the assumption that non-MOVABLE pageblocks are
      unlikely to contain movable pages.
      
      However, with the exception of THPs, most high-order allocations are
      not movable.  Should the async compaction succeed, the non-MOVABLE
      allocation is then likely to fall back to a MOVABLE pageblock, making
      the long-term fragmentation worse.
      
      This patch attempts to help the situation by changing async direct
      compaction so that the migrate scanner only scans pageblocks of the
      requested migratetype.  If it's a non-MOVABLE type and there are such
      pageblocks that do contain movable pages, chances are that the
      allocation can succeed within one of them, removing the need for a
      fallback.  If that fails, the subsequent sync attempt will ignore this
      restriction.
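
      A hedged sketch of what the migrate-scanner suitability check becomes
      under this patch (simplified; helper and field names follow the
      series but may not match the final code exactly):

        static bool suitable_migration_source(struct compact_control *cc,
                                              struct page *page)
        {
                /* Sync or non-direct compaction keeps the old behaviour. */
                if (cc->mode != MIGRATE_ASYNC || !cc->direct_compaction)
                        return true;

                /* Async direct compaction only scans pageblocks matching
                 * the migratetype of the allocation being satisfied. */
                if (cc->migratetype == MIGRATE_MOVABLE)
                        return is_migrate_movable(get_pageblock_migratetype(page));

                return get_pageblock_migratetype(page) == cc->migratetype;
        }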
      
      In testing based on 4.9 kernel with stress-highalloc from mmtests
      configured for order-4 GFP_KERNEL allocations, this patch has reduced
      the number of unmovable allocations falling back to movable pageblocks
      by 30%.  The number of movable allocations falling back is reduced by
      12%.
      
      Link: http://lkml.kernel.org/r/20170307131545.28577-8-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      282722b0
    • mm, compaction: add migratetype to compact_control · d39773a0
      Vlastimil Babka authored
      Preparation patch.  We are going to need migratetype at lower layers
      than compact_zone() and compact_finished().
      
      Link: http://lkml.kernel.org/r/20170307131545.28577-7-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d39773a0
    • mm, compaction: change migrate_async_suitable() to suitable_migration_source() · b682debd
      Vlastimil Babka authored
      Preparation for making the decisions more complex and depending on
      compact_control flags.  No functional change.
      
      Link: http://lkml.kernel.org/r/20170307131545.28577-6-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b682debd
    • mm, page_alloc: count movable pages when stealing from pageblock · 02aa0cdd
      Vlastimil Babka authored
      When stealing pages from a pageblock of a different migratetype, we
      count how many free pages were stolen, and change the pageblock's
      migratetype if more than half of the pageblock was free.  This might
      be too conservative, as there might be other pages that are not free
      but were allocated with the same migratetype as our allocation
      requested.
      
      While we cannot determine the migratetype of allocated pages precisely
      (at least without the page_owner functionality enabled), we can count
      pages that compaction would try to isolate for migration - those are
      either on LRU or __PageMovable().  The rest can be assumed to be
      MIGRATE_RECLAIMABLE or MIGRATE_UNMOVABLE, which we cannot easily
      distinguish.  This counting can be done as part of free page stealing
      with little additional overhead.
      
      The page stealing code is changed so that it considers free pages plus
      pages of the "good" migratetype when deciding whether to change the
      pageblock's migratetype.
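
      A hedged sketch of the counting and the decision, shown for the case
      where the requested migratetype is MOVABLE (simplified; the real
      change threads a counter through move_freepages_block()):

        /* While moving the pageblock's free pages to the new freelist, also
         * count allocated pages that compaction could migrate. */
        if (!PageBuddy(page)) {
                if (PageLRU(page) || __PageMovable(page))
                        movable_pages++;
                page++;
                continue;
        }

        /* ... later, claim the whole pageblock if free plus compatible
         * pages cover at least half of it. */
        if (free_pages + movable_pages >= (1 << (pageblock_order - 1)))
                set_pageblock_migratetype(page, start_type);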
      
      The result should be pageblock migratetypes that more accurately
      reflect the actual pages in the pageblocks when stealing from
      semi-occupied pageblocks.  This should help the efficiency of page
      grouping by mobility.
      
      In testing based on 4.9 kernel with stress-highalloc from mmtests
      configured for order-4 GFP_KERNEL allocations, this patch has reduced
      the number of unmovable allocations falling back to movable pageblocks
      by 47%.  The number of movable allocations falling back to other
      pageblocks are increased by 55%, but these events don't cause permanent
      fragmentation, so the tradeoff should be positive.  Later patches also
      offset the movable fallback increase to some extent.
      
      [akpm@linux-foundation.org: merge fix]
      Link: http://lkml.kernel.org/r/20170307131545.28577-5-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02aa0cdd
    • mm, page_alloc: split smallest stolen page in fallback · 3bc48f96
      Vlastimil Babka authored
      The __rmqueue_fallback() function is called when there's no free page of
      requested migratetype, and we need to steal from a different one.
      
      There are various heuristics to make this event infrequent and to
      reduce permanent fragmentation.  The main one is to try stealing from
      a pageblock that has the most free pages, and possibly steal them all
      at once and convert the whole pageblock.  Precisely searching for such
      a pageblock would be expensive, so instead the heuristic walks the
      free lists from MAX_ORDER down to the requested order and assumes that
      the block with the highest-order free page is likely to also have the
      most free pages in total.
      
      Chances are that together with the highest-order page we also steal
      pages of lower orders from the same block.  But then we still split
      the highest-order page.  This is wasteful and can contribute to
      fragmentation instead of avoiding it.
      
      This patch thus changes __rmqueue_fallback() to just steal the page(s)
      and put them on the freelist of the requested migratetype, and only
      report whether it was successful.  Then we pick (and eventually split)
      the smallest page with __rmqueue_smallest().  This all happens under
      the zone lock, so nobody can steal it from us in the process.  This
      should reduce fragmentation due to fallbacks.  At worst we steal only
      a single highest-order page and waste some cycles by moving it between
      lists and then removing it, but fallback is not exactly a hot path, so
      that should not be a concern.  As a side benefit, the patch removes
      some duplicate code by reusing __rmqueue_smallest().
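
      A hedged sketch of the resulting allocation flow (simplified; the real
      __rmqueue() also tries a CMA fallback before stealing):

        static struct page *__rmqueue(struct zone *zone, unsigned int order,
                                      int migratetype)
        {
                struct page *page;

        retry:
                page = __rmqueue_smallest(zone, order, migratetype);
                if (unlikely(!page)) {
                        /* Steal pages from another migratetype onto our
                         * freelists, then retry the normal smallest-first
                         * path, which performs the eventual split. */
                        if (__rmqueue_fallback(zone, order, migratetype))
                                goto retry;
                }
                return page;
        }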
      
      [vbabka@suse.cz: fix endless loop in the modified __rmqueue()]
        Link: http://lkml.kernel.org/r/59d71b35-d556-4fc9-ee2e-1574259282fd@suse.cz
      Link: http://lkml.kernel.org/r/20170307131545.28577-4-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3bc48f96
    • mm, compaction: remove redundant watermark check in compact_finished() · 228d7e33
      Vlastimil Babka authored
      When detecting whether compaction has succeeded in forming a
      high-order page, __compact_finished() employs a watermark check
      followed by its own search for a suitable page in the freelists.  This
      is not ideal for two reasons:

       - The watermark check also searches the high-order freelists, but
         with less strict criteria wrt fallback.  It's therefore redundant
         and a waste of cycles.  This was different in the past, when the
         high-order watermark check attempted to apply reserves to
         high-order pages.

       - The watermark check might actually fail due to a lack of order-0
         pages.  Compaction can't help with that, so there's no point in
         continuing because of it.  It's possible that a suitable high-order
         page still exists, in which case compaction should just terminate.
      
      This patch therefore removes the watermark check.  This should save some
      cycles and terminate compaction sooner in some cases.
      
      Link: http://lkml.kernel.org/r/20170307131545.28577-3-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      228d7e33
    • mm, compaction: reorder fields in struct compact_control · f25ba6dc
      Vlastimil Babka authored
      Patch series "try to reduce fragmenting fallbacks", v3.
      
      Last year, Johannes Weiner reported a regression in page mobility
      grouping [1] and, while the exact cause was not found, I've come up
      with some ways to improve it by reducing the number of allocations
      falling back to a different migratetype and causing permanent
      fragmentation.
      
      The series was tested with mmtests stress-highalloc modified to do
      GFP_KERNEL order-4 allocations, on 4.9 with "mm, vmscan: fix zone
      balance check in prepare_kswapd_sleep" (without that, kcompactd indeed
      wasn't woken up), on a UMA machine with 4GB of memory.  There were 5
      repeats of each run, as the extfrag stats are quite volatile (note the
      stats below are sums, not averages, as it was less perl hacking for
      me).

      Success rates are the same, already high due to the low allocation
      order used, so I'm not including them.
      
      Compaction stats:
      (the patches are stacked, and I haven't measured the non-functional-changes
      patches separately)
      
                                           patch 1     patch 2     patch 3     patch 4     patch 7     patch 8
        Compaction stalls                    22449       24680       24846       19765       22059       17480
        Compaction success                   12971       14836       14608       10475       11632        8757
        Compaction failures                   9477        9843       10238        9290       10426        8722
        Page migrate success               3109022     3370438     3312164     1695105     1608435     2111379
        Page migrate failure                911588     1149065     1028264     1112675     1077251     1026367
        Compaction pages isolated          7242983     8015530     7782467     4629063     4402787     5377665
        Compaction migrate scanned       980838938   987367943   957690188   917647238   947155598  1018922197
        Compaction free scanned          557926893   598946443   602236894   594024490   541169699   763651731
        Compaction cost                      10243       10578       10304        8286        8398        9440
      
      Compaction stats are mostly within noise until patch 4, which
      decreases the number of compactions and migrations.  Part of that
      could be due to more pageblocks marked as unmovable, and async
      compaction skipping those.  This changes a bit with patch 7, but not
      by much.  Patch 8 increases the free scanner stats and migrations,
      which comes from the changed termination criteria.  Interestingly, the
      number of compactions decreases - probably the fully compacted
      pageblock satisfies multiple subsequent allocations, so it amortizes.
      
      Next comes the extfrag tracepoint, where "fragmenting" means that an
      allocation had to fall back to a pageblock of another migratetype
      which wasn't fully free (which is almost all of the fallbacks).  I
      have locally added another tracepoint for "Page steal" into
      steal_suitable_fallback(), which triggers in situations where we are
      allowed to do move_freepages_block().  If we decide to also do
      set_pageblock_migratetype(), it's "Pages steal with pageblock", with a
      breakdown of which allocation migratetype we are stealing for and
      which fallback migratetype we are stealing from.  The last part, "due
      to counting", comes from patch 4 and counts the events where the
      counting of movable pages allowed us to change the pageblock's
      migratetype, while the number of free pages alone wouldn't have been
      enough to cross the threshold.
      
                                                             patch 1     patch 2     patch 3     patch 4     patch 7     patch 8
        Page alloc extfrag event                            10155066     8522968    10164959    15622080    13727068    13140319
        Extfrag fragmenting                                 10149231     8517025    10159040    15616925    13721391    13134792
        Extfrag fragmenting for unmovable                     159504      168500      184177       97835       70625       56948
        Extfrag fragmenting unmovable placed with movable     153613      163549      172693       91740       64099       50917
        Extfrag fragmenting unmovable placed with reclaim.      5891        4951       11484        6095        6526        6031
        Extfrag fragmenting for reclaimable                     4738        4829        6345        4822        5640        5378
        Extfrag fragmenting reclaimable placed with movable     1836        1902        1851        1579        1739        1760
        Extfrag fragmenting reclaimable placed with unmov.      2902        2927        4494        3243        3901        3618
        Extfrag fragmenting for movable                      9984989     8343696     9968518    15514268    13645126    13072466
        Pages steal                                           179954      192291      210880      123254       94545       81486
        Pages steal with pageblock                             22153       18943       20154       33562       29969       33444
        Pages steal with pageblock for unmovable               14350       12858       13256       20660       19003       20852
        Pages steal with pageblock for unmovable from mov.     12812       11402       11683       19072       17467       19298
        Pages steal with pageblock for unmovable from recl.     1538        1456        1573        1588        1536        1554
        Pages steal with pageblock for movable                  7114        5489        5965       11787       10012       11493
        Pages steal with pageblock for movable from unmov.      6885        5291        5541       11179        9525       10885
        Pages steal with pageblock for movable from recl.        229         198         424         608         487         608
        Pages steal with pageblock for reclaimable               689         596         933        1115         954        1099
        Pages steal with pageblock for reclaimable from unmov.   273         219         537         658         547         667
        Pages steal with pageblock for reclaimable from mov.     416         377         396         457         407         432
        Pages steal with pageblock due to counting                                                 11834       10075        7530
        ... for unmovable                                                                           8993        7381        4616
        ... for movable                                                                             2792        2653        2851
        ... for reclaimable                                                                           49          41          63
      
      What we can see is that "Extfrag fragmenting for unmovable" and "...
      placed with movable" drop with almost every patch, which is good, as
      we are polluting fewer movable pageblocks with unmovable pages.
      
      The most significant change is patch 4 with movable page counting.  On
      the other hand, it increases "Extfrag fragmenting for movable" by 50%.
      "Pages steal" drops though, so these movable allocation fallbacks find
      only small free pages and are not allowed to steal whole pageblocks
      back.  "Pages steal with pageblock" rises, because the patch increases
      the chances of pageblock migratetype changes happening.  This affects
      all migratetypes.
      
      The summary is that patch 4 is not a clear win wrt these stats, but I
      believe that the tradeoff it makes is a good one.  There's less
      pollution of movable pageblocks by unmovable allocations.  There's
      less stealing between pageblocks, and the steals that remain have a
      higher chance of also changing the migratetype of the pageblock
      itself, so it should more faithfully reflect the migratetype of the
      pages within the pageblock.  The increase in movable allocations
      falling back to unmovable pageblocks might look dramatic, but those
      allocations can be migrated by compaction when needed, and other
      patches in the series (7-9) improve that aspect.
      
      Patches 7 and 8 continue the trend of reduced unmovable fallbacks and
      also reduce the impact on movable fallbacks from patch 4.
      
      [1] https://www.spinics.net/lists/linux-mm/msg114237.html
      
      This patch (of 8):
      
      While there are currently (mostly by accident) no holes in struct
      compact_control on x86_64, we are going to add more bool flags, so
      place them all together at the end of the structure.  While at it,
      order all fields from largest to smallest.
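
      An abbreviated, hedged sketch of the intended layout (field names
      roughly as in mm/internal.h of that era; the list is not exhaustive):

        struct compact_control {
                struct list_head freepages;     /* pointers and longs first */
                struct list_head migratepages;
                struct zone *zone;
                unsigned long nr_migratepages;
                unsigned long free_pfn;
                unsigned long migrate_pfn;
                const gfp_t gfp_mask;           /* then ints and enums */
                int order;
                const int classzone_idx;
                enum migrate_mode mode;
                bool ignore_skip_hint;          /* all bools grouped last */
                bool direct_compaction;
                bool whole_zone;
                bool contended;
        };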
      
      Link: http://lkml.kernel.org/r/20170307131545.28577-2-vbabka@suse.cz
      
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f25ba6dc
  2. 03 May, 2017 26 commits