Commit b1f20206 authored by Yu Zhao, committed by Andrew Morton

mm: remap unused subpages to shared zeropage when splitting isolated thp

Patch series "mm: split underused THPs", v5.

The current upstream default policy for THP is always.  However, Meta uses
madvise in production as the current THP=always policy vastly
overprovisions THPs in sparsely accessed memory areas, resulting in
excessive memory pressure and premature OOM killing.  Using madvise +
relying on khugepaged has certain drawbacks over THP=always.  Using
madvise hints means THPs aren't "transparent" and require userspace
changes.  Waiting for khugepaged to scan memory and collapse pages into
THP can be slow and unpredictable in terms of performance (i.e.  you don't
know when the collapse will happen), while production environments require
predictable performance.  If there is enough memory available, it's better
for both performance and predictability to have a THP from fault time,
i.e.  THP=always rather than wait for khugepaged to collapse it, and deal
with sparsely populated THPs when the system is running out of memory.

This patch series is an attempt to mitigate the issue of running out of
memory when THP is always enabled.  During runtime whenever a THP is being
faulted in or collapsed by khugepaged, the THP is added to a list. 
Whenever memory reclaim happens, the kernel runs the deferred_split
shrinker which goes through the list and checks if the THP was underused,
i.e.  how many of the base 4K pages of the entire THP were zero-filled. 
If this number goes above a certain threshold, the shrinker will attempt
to split that THP.  Then at remap time, the pages that were zero-filled
are mapped to the shared zeropage, hence saving memory.  This method
avoids the downside of wasting memory in areas where THP is sparsely
filled when THP is always enabled, while still providing the upsides of
THPs, like reduced TLB misses, without having to use madvise.
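
As a rough, userspace-only illustration of the "underused" test described
above (this is not the kernel implementation; the helper names, the fixed
2 MiB / 4 KiB geometry, and the way the threshold is passed in are
assumptions of this sketch), the following C program counts how many 4 KiB
subpages of a THP-sized buffer are entirely zero and applies a
max_ptes_none-style threshold:

/*
 * Illustrative sketch only: count zero-filled 4 KiB subpages of a 2 MiB
 * (THP-sized) buffer and decide whether it would count as "underused"
 * for a given max_ptes_none-style threshold.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define SUBPAGE_SIZE	 4096UL
#define THP_SIZE	 (2UL * 1024 * 1024)
#define SUBPAGES_PER_THP (THP_SIZE / SUBPAGE_SIZE)	/* 512 */

/* A subpage is "unused" here if every byte is zero (cf. memchr_inv in the patch below). */
static bool subpage_is_zero(const unsigned char *subpage)
{
	unsigned long i;

	for (i = 0; i < SUBPAGE_SIZE; i++)
		if (subpage[i])
			return false;
	return true;
}

/* Underused if more than max_ptes_none of the 512 subpages are zero-filled. */
static bool thp_is_underused(const unsigned char *thp, unsigned long max_ptes_none)
{
	unsigned long i, nr_zero = 0;

	for (i = 0; i < SUBPAGES_PER_THP; i++)
		if (subpage_is_zero(thp + i * SUBPAGE_SIZE))
			nr_zero++;
	return nr_zero > max_ptes_none;
}

int main(void)
{
	unsigned char *thp = calloc(1, THP_SIZE);
	unsigned long i;

	if (!thp)
		return 1;
	/* Touch ~40 of the 512 subpages, like the stress example further down. */
	for (i = 0; i < 40; i++)
		thp[i * 50 * 1024] = 1;
	/* With max_ptes_none=409 (~80%), this buffer counts as underused. */
	printf("underused: %d\n", thp_is_underused(thp, 409));
	free(thp);
	return 0;
}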

Meta production workloads that were CPU bound (>99% CPU utilization) were
tested with THP shrinker.  The results after 2 hours are as follows:

                            | THP=madvise |  THP=always   | THP=always
                            |             |               | + shrinker series
                            |             |               | + max_ptes_none=409
-----------------------------------------------------------------------------
Performance improvement     |      -      |    +1.8%      |     +1.7%
(over THP=madvise)          |             |               |
-----------------------------------------------------------------------------
Memory usage                |    54.6G    | 58.8G (+7.7%) |   55.9G (+2.4%)
-----------------------------------------------------------------------------
max_ptes_none=409 means that any THP that has more than 409 out of 512
(80%) zero-filled pages will be split.

To test out the patches: without the shrinker, the commands below invoke
the OOM killer immediately and kill stress; with the shrinker, they do not
fail:

echo 450 > /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
mkdir /sys/fs/cgroup/test
echo $$ > /sys/fs/cgroup/test/cgroup.procs
echo 20M > /sys/fs/cgroup/test/memory.max
echo 0 > /sys/fs/cgroup/test/memory.swap.max
# allocate twice memory.max for each stress worker and touch 40/512 of
# each THP, i.e. vm-stride 50K.
# With the shrinker, max_ptes_none of 470 and below won't invoke OOM
# killer.
# Without the shrinker, OOM killer is invoked immediately irrespective
# of max_ptes_none value and kills stress.
stress --vm 1 --vm-bytes 40M --vm-stride 50K
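
As a cross-check on the stride: a 2 MiB THP has 512 base pages, and a 50 KiB
stride places one write roughly every 12.5 base pages, i.e. about
2 MiB / 50 KiB ≈ 40 touched subpages per THP, which is where the 40/512
figure in the comment comes from.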


This patch (of 5):

Here, "unused" means containing only zeros and inaccessible to
userspace.  When splitting an isolated thp under reclaim or migration, the
unused subpages can be mapped to the shared zeropage, hence saving memory.
This is particularly helpful when the internal fragmentation of a thp is
high, i.e.  it has many untouched subpages.

This is also a prerequisite for the THP low-utilization shrinker which will
be introduced in later patches, where underutilized THPs are split, and the
zero-filled pages are freed, saving memory.

Link: https://lkml.kernel.org/r/20240830100438.3623486-1-usamaarif642@gmail.com
Link: https://lkml.kernel.org/r/20240830100438.3623486-3-usamaarif642@gmail.com
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Tested-by: Shuang Zhai <zhais@google.com>
Cc: Alexander Zhu <alexlzhu@fb.com>
Cc: Barry Song <baohua@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kairui Song <ryncsn@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nico Pache <npache@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Shuang Zhai <szhai2@cs.rochester.edu>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 903edea6

--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -745,7 +745,12 @@ int folio_mkclean(struct folio *);
 int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 		      struct vm_area_struct *vma);
 
-void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
+enum rmp_flags {
+	RMP_LOCKED		= 1 << 0,
+	RMP_USE_SHARED_ZEROPAGE	= 1 << 1,
+};
+
+void remove_migration_ptes(struct folio *src, struct folio *dst, int flags);
 
 /*
  * rmap_walk_control: To control rmap traversing for specific needs

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3004,7 +3004,7 @@ bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
 	return false;
 }
 
-static void remap_page(struct folio *folio, unsigned long nr)
+static void remap_page(struct folio *folio, unsigned long nr, int flags)
 {
 	int i = 0;
 
@@ -3012,7 +3012,7 @@ static void remap_page(struct folio *folio, unsigned long nr)
 	if (!folio_test_anon(folio))
 		return;
 	for (;;) {
-		remove_migration_ptes(folio, folio, true);
+		remove_migration_ptes(folio, folio, RMP_LOCKED | flags);
 		i += folio_nr_pages(folio);
 		if (i >= nr)
 			break;
@@ -3222,7 +3222,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	if (nr_dropped)
 		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr);
+	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
 
 	/*
 	 * set page to its compound_head when split to non order-0 pages, so
@@ -3498,7 +3498,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		if (mapping)
 			xas_unlock(&xas);
 		local_irq_enable();
-		remap_page(folio, folio_nr_pages(folio));
+		remap_page(folio, folio_nr_pages(folio), 0);
 		ret = -EAGAIN;
 	}

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -204,13 +204,57 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 	return true;
 }
 
+static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
+					  struct folio *folio,
+					  unsigned long idx)
+{
+	struct page *page = folio_page(folio, idx);
+	bool contains_data;
+	pte_t newpte;
+	void *addr;
+
+	VM_BUG_ON_PAGE(PageCompound(page), page);
+	VM_BUG_ON_PAGE(!PageAnon(page), page);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
+
+	if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) ||
+	    mm_forbids_zeropage(pvmw->vma->vm_mm))
+		return false;
+
+	/*
+	 * The pmd entry mapping the old thp was flushed and the pte mapping
+	 * this subpage has been non present. If the subpage is only zero-filled
+	 * then map it to the shared zeropage.
+	 */
+	addr = kmap_local_page(page);
+	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
+	kunmap_local(addr);
+
+	if (contains_data)
+		return false;
+
+	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
+					pvmw->vma->vm_page_prot));
+	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
+
+	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
+	return true;
+}
+
+struct rmap_walk_arg {
+	struct folio *folio;
+	bool map_unused_to_zeropage;
+};
+
 /*
  * Restore a potential migration pte to a working pte entry
  */
 static bool remove_migration_pte(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *old)
+		struct vm_area_struct *vma, unsigned long addr, void *arg)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
+	struct rmap_walk_arg *rmap_walk_arg = arg;
+	DEFINE_FOLIO_VMA_WALK(pvmw, rmap_walk_arg->folio, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		rmap_t rmap_flags = RMAP_NONE;
@@ -234,6 +278,9 @@ static bool remove_migration_pte(struct folio *folio,
 			continue;
 		}
 #endif
+		if (rmap_walk_arg->map_unused_to_zeropage &&
+		    try_to_map_unused_to_zeropage(&pvmw, folio, idx))
+			continue;
 
 		folio_get(folio);
 		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
@@ -312,14 +359,21 @@ static bool remove_migration_pte(struct folio *folio,
  * Get rid of all migration entries and replace them by
  * references to the indicated page.
  */
-void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked)
+void remove_migration_ptes(struct folio *src, struct folio *dst, int flags)
 {
+	struct rmap_walk_arg rmap_walk_arg = {
+		.folio = src,
+		.map_unused_to_zeropage = flags & RMP_USE_SHARED_ZEROPAGE,
+	};
+
 	struct rmap_walk_control rwc = {
 		.rmap_one = remove_migration_pte,
-		.arg = src,
+		.arg = &rmap_walk_arg,
 	};
 
-	if (locked)
+	VM_BUG_ON_FOLIO((flags & RMP_USE_SHARED_ZEROPAGE) && (src != dst), src);
+
+	if (flags & RMP_LOCKED)
 		rmap_walk_locked(dst, &rwc);
 	else
 		rmap_walk(dst, &rwc);
@@ -934,7 +988,7 @@ static int writeout(struct address_space *mapping, struct folio *folio)
 	 * At this point we know that the migration attempt cannot
	 * be successful.
 	 */
-	remove_migration_ptes(folio, folio, false);
+	remove_migration_ptes(folio, folio, 0);
 
 	rc = mapping->a_ops->writepage(&folio->page, &wbc);
 
@@ -1098,7 +1152,7 @@ static void migrate_folio_undo_src(struct folio *src,
 			       struct list_head *ret)
 {
 	if (page_was_mapped)
-		remove_migration_ptes(src, src, false);
+		remove_migration_ptes(src, src, 0);
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
 		put_anon_vma(anon_vma);
@@ -1336,7 +1390,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 		lru_add_drain();
 
 	if (old_page_state & PAGE_WAS_MAPPED)
-		remove_migration_ptes(src, dst, false);
+		remove_migration_ptes(src, dst, 0);
 
 out_unlock_both:
 	folio_unlock(dst);
@@ -1474,7 +1528,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 
 	if (page_was_mapped)
 		remove_migration_ptes(src,
-			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
+			rc == MIGRATEPAGE_SUCCESS ? dst : src, 0);
 
 unlock_put_anon:
 	folio_unlock(dst);

--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -424,7 +424,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			continue;
 
 		folio = page_folio(page);
-		remove_migration_ptes(folio, folio, false);
+		remove_migration_ptes(folio, folio, 0);
 
 		src_pfns[i] = 0;
 		folio_unlock(folio);
@@ -840,7 +840,7 @@ void migrate_device_finalize(unsigned long *src_pfns,
 			dst = src;
 		}
 
-		remove_migration_ptes(src, dst, false);
+		remove_migration_ptes(src, dst, 0);
 		folio_unlock(src);
 
 		if (folio_is_zone_device(src))