Commit f4981502 authored by Liu Shixin, committed by Andrew Morton

mm/huge_memory: prevent THP_ZERO_PAGE_ALLOC increased twice

A user who reads THP_ZERO_PAGE_ALLOC most likely cares about how many huge
zero pages were actually allocated for THP.  It is misleading to increment
THP_ZERO_PAGE_ALLOC twice when two threads call get_huge_zero_page()
concurrently but only one of the allocated pages ends up being used.  Don't
increment the counter for a huge zero page that loses the race and is never
used.

Update Documentation/admin-guide/mm/transhuge.rst accordingly.

Link: https://lkml.kernel.org/r/20220909021653.3371879-1-liushixin2@huawei.com
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 13cc3784
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -366,10 +366,9 @@ thp_split_pmd
 	page table entry.
 
 thp_zero_page_alloc
-	is incremented every time a huge zero page is
-	successfully allocated. It includes allocations which where
-	dropped due race with other allocation. Note, it doesn't count
-	every map of the huge zero page, only its allocation.
+	is incremented every time a huge zero page used for thp is
+	successfully allocated. Note, it doesn't count every map of
+	the huge zero page, only its allocation.
 
 thp_zero_page_alloc_failed
 	is incremented if kernel fails to allocate
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -163,7 +163,6 @@ static bool get_huge_zero_page(void)
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return false;
 	}
-	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	preempt_disable();
 	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
 		preempt_enable();
@@ -175,6 +174,7 @@ static bool get_huge_zero_page(void)
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
 	preempt_enable();
+	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	return true;
 }
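
For context, below is a sketch of what get_huge_zero_page() looks like with
this patch applied.  It is paraphrased from mm/huge_memory.c of the same era,
so the parts outside the diff (the retry label, the GFP flags, the loser's
free-and-retry path; the huge_zero_pfn bookkeeping is omitted) are assumptions
about the surrounding function rather than part of this change.  Comments mark
where the race happens and where the counter now sits.

static bool get_huge_zero_page(void)
{
	struct page *zero_page;
retry:
	/* Fast path: the huge zero page already exists; just take a reference. */
	if (likely(atomic_inc_not_zero(&huge_zero_refcount)))
		return true;

	/* Slow path: several threads can reach this point and allocate in parallel. */
	zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
			HPAGE_PMD_ORDER);
	if (!zero_page) {
		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
		return false;
	}
	preempt_disable();
	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
		/*
		 * Lost the race: another thread installed its page first, so
		 * this allocation is discarded.  Before the patch,
		 * THP_ZERO_PAGE_ALLOC had already been bumped for this
		 * discarded page as well.
		 */
		preempt_enable();
		__free_pages(zero_page, compound_order(zero_page));
		goto retry;
	}

	/* We take additional reference here. It will be put back by shrinker */
	atomic_set(&huge_zero_refcount, 2);
	preempt_enable();
	/* Count only the page that actually went into use as the huge zero page. */
	count_vm_event(THP_ZERO_PAGE_ALLOC);
	return true;
}

With the increment moved after the successful cmpxchg, thp_zero_page_alloc in
/proc/vmstat reflects the number of huge zero pages actually installed, not
the number of allocation attempts.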