    mm: compaction: handle incorrect MIGRATE_UNMOVABLE type pageblocks · 5ceb9ce6
    When MIGRATE_UNMOVABLE pages are freed from a MIGRATE_UNMOVABLE type
    pageblock (and some MIGRATE_MOVABLE pages are left in it), waiting until an
    allocation takes ownership of the block may take too long.  The type of
    the pageblock remains unchanged, so the pageblock cannot be used as a
    migration target during compaction.
    
    Fix it by:
    
    * Adding enum compact_mode (COMPACT_ASYNC_MOVABLE, COMPACT_ASYNC_UNMOVABLE
      and COMPACT_SYNC) and converting the sync field in struct compact_control
      to use it (see the first sketch after this list).
    
    * Adding a nr_pageblocks_skipped field to struct compact_control and
      tracking how many destination pageblocks were of MIGRATE_UNMOVABLE type.
      If COMPACT_ASYNC_MOVABLE mode compaction ran fully in
      try_to_compact_pages() (COMPACT_COMPLETE), it implies that there is no
      suitable page for the allocation.  In that case, check whether enough
      MIGRATE_UNMOVABLE pageblocks were skipped to make a second pass in
      COMPACT_ASYNC_UNMOVABLE mode worthwhile.
    
    * Scanning the MIGRATE_UNMOVABLE pageblocks (during COMPACT_SYNC and
      COMPACT_ASYNC_UNMOVABLE compaction modes) and counting pages that are
      PageBuddy, have page_count(page) == 0, or are PageLRU.  If every page
      within the MIGRATE_UNMOVABLE pageblock falls into one of those three
      sets, change the whole pageblock's type to MIGRATE_MOVABLE (see the
      second sketch after this list).
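    The first two items above can be pictured roughly as follows.  This is
    only a sketch derived from the description in this log, not the patch
    itself; the enum, the field names and the placement (compact_control
    lives in mm/internal.h) follow the text above, everything else is
    illustrative.
    
    enum compact_mode {
    	COMPACT_ASYNC_MOVABLE,		/* async pass, MIGRATE_MOVABLE targets only */
    	COMPACT_ASYNC_UNMOVABLE,	/* async retry that also considers UNMOVABLE blocks */
    	COMPACT_SYNC,			/* synchronous compaction */
    };
    
    struct compact_control {
    	/* ... existing fields ... */
    	enum compact_mode sync;			/* was: bool sync */
    	unsigned long nr_pageblocks_skipped;	/* MIGRATE_UNMOVABLE target blocks seen */
    };
    
    /*
     * In try_to_compact_pages(): if the COMPACT_ASYNC_MOVABLE pass reaches
     * COMPACT_COMPLETE without yielding a suitable page and
     * nr_pageblocks_skipped shows that enough MIGRATE_UNMOVABLE blocks were
     * passed over, run a second pass in COMPACT_ASYNC_UNMOVABLE mode.
     */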
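    The third item above, retyping an effectively movable MIGRATE_UNMOVABLE
    pageblock, could look roughly like the helpers below.  The function names
    are invented for illustration and locking is omitted; only the
    PageBuddy / page_count() == 0 / PageLRU test and the retype to
    MIGRATE_MOVABLE come from the description above.
    
    #include <linux/mm.h>
    #include <linux/mmzone.h>
    #include <linux/page-flags.h>
    #include <linux/pageblock-flags.h>
    
    static bool pageblock_all_free_or_lru(struct page *page)
    {
    	unsigned long pfn = page_to_pfn(page) & ~(pageblock_nr_pages - 1);
    	unsigned long end_pfn = pfn + pageblock_nr_pages;
    
    	for (; pfn < end_pfn; pfn++) {
    		struct page *p;
    
    		if (!pfn_valid(pfn))
    			return false;
    
    		p = pfn_to_page(pfn);
    
    		/* free page (in the buddy or with no users) or movable LRU page */
    		if (PageBuddy(p) || page_count(p) == 0 || PageLRU(p))
    			continue;
    
    		return false;	/* a pinned, truly unmovable page */
    	}
    	return true;
    }
    
    /* Called while scanning migration targets in COMPACT_SYNC or
     * COMPACT_ASYNC_UNMOVABLE mode. */
    static void maybe_make_pageblock_movable(struct page *page)
    {
    	if (get_pageblock_migratetype(page) == MIGRATE_UNMOVABLE &&
    	    pageblock_all_free_or_lru(page))
    		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
    }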
    
    My particular test case (on an ARM EXYNOS4 device with 512 MiB of RAM,
    which means 131072 standard 4 KiB pages in the 'Normal' zone) is to:
    
    - allocate 120000 pages for kernel's usage
    - free every second page (60000 pages) of memory just allocated
    - allocate and use 60000 pages from user space
    - free remaining 60000 pages of kernel memory
      (now we have fragmented memory occupied mostly by user space pages)
    - try to allocate 100 order-9 (2048 KiB) pages for kernel's usage
      (a rough stub of this final step is sketched after the results)
    
    The results:
    - with compaction disabled I get 11 successful allocations
    - with compaction enabled I get 14 successful allocations
    - with this patch I'm able to get all 100 successful allocations
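    
    The final allocation step could be approximated by a throwaway module
    along these lines (purely illustrative, not the test actually used; the
    fragmentation steps above are assumed to have been performed beforehand):
    
    #include <linux/module.h>
    #include <linux/gfp.h>
    
    static int __init order9_test_init(void)
    {
    	struct page *pages[100];
    	int i, ok = 0;
    
    	/* try 100 order-9 (2 MiB) allocations and count the successes */
    	for (i = 0; i < 100; i++) {
    		pages[i] = alloc_pages(GFP_KERNEL | __GFP_NOWARN, 9);
    		if (pages[i])
    			ok++;
    	}
    	pr_info("order-9 allocations succeeded: %d/100\n", ok);
    
    	for (i = 0; i < 100; i++)
    		if (pages[i])
    			__free_pages(pages[i], 9);
    
    	return 0;
    }
    
    static void __exit order9_test_exit(void) { }
    
    module_init(order9_test_init);
    module_exit(order9_test_exit);
    MODULE_LICENSE("GPL");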
    
    NOTE: If we can make kswapd aware of order-0 requests during compaction, we
    can enhance kswapd by changing its mode to COMPACT_ASYNC_FULL
    (COMPACT_ASYNC_MOVABLE + COMPACT_ASYNC_UNMOVABLE).  Please see the
    following thread:
    
    	http://marc.info/?l=linux-mm&m=133552069417068&w=2
    
    [minchan@kernel.org: minor cleanups]
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>