Commit 47b6a24a authored by David Hildenbrand, committed by Linus Torvalds

mm/page_alloc: place pages to tail in __putback_isolated_page()

__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist.  This is, however, not the case for "order >=
MAX_ORDER - 2" (see buddy_merge_likely()), which is the order range all
existing users operate on.
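
For context, a simplified sketch (not verbatim kernel code; the helper
higher_order_buddy_is_free() below is a hypothetical stand-in for the real
buddy check) of how the head/tail decision looked before this patch:
buddy_merge_likely() bails out for the top orders, so to_tail stays false
and such pages land at the head of the freelist:

	/* Sketch of buddy_merge_likely(), simplified: */
	static inline bool buddy_merge_likely(unsigned long pfn,
			unsigned long buddy_pfn, struct page *page,
			unsigned int order)
	{
		/* Top orders cannot merge further up: never "likely". */
		if (order >= MAX_ORDER - 2)
			return false;

		/* Otherwise: is the buddy of the next-higher order free? */
		return higher_order_buddy_is_free(pfn, buddy_pfn, page, order);
	}

	/* In __free_one_page(), after merging (pre-patch): */
	if (is_shuffle_order(order))
		to_tail = shuffle_pick_tail();
	else
		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);

With this patch, an FPI_TO_TAIL check is evaluated first (see the hunks
below), so callers can force tail placement regardless of the order.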

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).

This behavior is desirable for pages that haven't really been touched
lately - which fits exactly these two users: neither reads or writes page
content, they merely move untouched pages around.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range() in
online_pages().  Right now, we always place them to the head of the
freelist, resulting in undesirable behavior: Assume we add individual
memory chunks via add_memory() and online them right away to the NORMAL
zone.  We create a dependency chain of unmovable allocations e.g., via the
memmap.  The memmap of the next chunk will be placed onto previous chunks
- if the last block cannot get offlined+removed, all dependent ones cannot
get offlined+removed.  While this can already be observed with individual
DIMMs, it's more of an issue for virtio-mem (and I suspect also ppc
DLPAR).

Document that this should only be used for optimizations, and no code
should rely on this behavior for correctness (in case the order of the
freelists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining.  Free page reporting doesn't care about
physically contiguous ranges, and there are already cases where page
isolation will simply move (physically close) free pages to (currently)
the head of the freelists via move_freepages_block() instead of shuffling.
If this ever becomes relevant, we should shuffle the whole zone when
undoing isolation of larger ranges, and after free_contig_range().

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Link: https://lkml.kernel.org/r/20201005121534.15649-3-david@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f04a5d5d
@@ -94,6 +94,18 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ *       to allow for optimizations when handing back either fresh pages
+ *       (memory onlining) or untouched pages (page isolation, free page
+ *       reporting).
+ */
+#define FPI_TO_TAIL		((__force fpi_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1044,7 +1056,9 @@ static inline void __free_one_page(struct page *page,
 done_merging:
 	set_page_order(page, order);
 
-	if (is_shuffle_order(order))
+	if (fpi_flags & FPI_TO_TAIL)
+		to_tail = true;
+	else if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3306,7 +3320,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FPI_SKIP_REPORT_NOTIFY);
+			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
 }
 
 /*
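
For completeness, a rough sketch of what happens further down in
__free_one_page() once to_tail has been decided (the helper and notifier
names below are my recollection of that kernel version; treat the snippet
as an approximation, not the exact code):

	/* Later in __free_one_page(), approximately: */
	if (to_tail)
		add_to_free_list_tail(page, zone, order, migratetype);
	else
		add_to_free_list(page, zone, order, migratetype);

	/*
	 * FPI_SKIP_REPORT_NOTIFY continues to suppress the free page
	 * reporting notification for __putback_isolated_page().
	 */
	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
		page_reporting_notify_free(order);

With FPI_TO_TAIL set, to_tail is forced to true, so pages handed back
through __putback_isolated_page() - which covers both free page reporting
and undoing page isolation - always take the tail branch.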