Commit 4d928e20 authored by Miaohe Lin, committed by akpm

mm/khugepaged: stop swapping in page when VM_FAULT_RETRY occurs

When do_swap_page() returns VM_FAULT_RETRY, we do not retry the fault, so
the swap entry will remain in the page table and the collapse will fail
later anyway.  Stop swapping in pages in this case to save CPU cycles.  As
a further optimization, mmap_lock is released when
__collapse_huge_page_swapin() fails, avoiding a needless relock.  And
"swapped_in++" is moved after the error handling so the counter only
reflects pages that were actually swapped in.
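
The failure-path contract is the subtle part: on success the helper
returns with mmap_lock still held, while on failure it returns with the
lock already released, so the caller must bail out without unlocking
again.  Below is a minimal userspace sketch of that contract, not the
kernel code: a pthread rwlock stands in for mmap_lock, and the names
fake_mmap_lock, swapin_range() and fault_says_retry() are hypothetical
stand-ins for mmap_lock, __collapse_huge_page_swapin() and
do_swap_page().

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t fake_mmap_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Stand-in for do_swap_page(): pretend the third fault asks to retry. */
static bool fault_says_retry(int i)
{
	return i == 2;
}

/* Models __collapse_huge_page_swapin(): drops the lock on failure. */
static bool swapin_range(int npages)
{
	for (int i = 0; i < npages; i++) {
		if (fault_says_retry(i)) {
			/* Treat "retry" as an error: give up and unlock here. */
			pthread_rwlock_unlock(&fake_mmap_lock);
			return false;
		}
	}
	return true;	/* success: caller still owns the lock */
}

int main(void)
{
	pthread_rwlock_rdlock(&fake_mmap_lock);

	if (!swapin_range(4)) {
		/* Failure path: helper already unlocked, just bail out. */
		printf("swapin failed, lock already released\n");
		return 0;
	}

	/* ... collapse work under the lock ... */
	pthread_rwlock_unlock(&fake_mmap_lock);
	printf("swapin succeeded\n");
	return 0;
}

Built with "cc sketch.c -lpthread", the failure run bails out without
touching the lock again, which is the same property the new out_nolock
path in collapse_huge_page() relies on.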

Link: https://lkml.kernel.org/r/20220625092816.4856-3-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: NeilBrown <neilb@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent dd5ff79d
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -972,8 +972,8 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
  * Bring missing pages in from swap, to complete THP collapse.
  * Only done if khugepaged_scan_pmd believes it is worthwhile.
  *
- * Called and returns without pte mapped or spinlocks held,
- * but with mmap_lock held to protect against vma changes.
+ * Called and returns without pte mapped or spinlocks held.
+ * Note that if false is returned, mmap_lock will be released.
  */
 static bool __collapse_huge_page_swapin(struct mm_struct *mm,
@@ -1000,27 +1000,24 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 			pte_unmap(vmf.pte);
 			continue;
 		}
-		swapped_in++;
 		ret = do_swap_page(&vmf);
 
-		/* do_swap_page returns VM_FAULT_RETRY with released mmap_lock */
+		/*
+		 * do_swap_page returns VM_FAULT_RETRY with released mmap_lock.
+		 * Note we treat VM_FAULT_RETRY as VM_FAULT_ERROR here because
+		 * we do not retry here and swap entry will remain in pagetable
+		 * resulting in later failure.
+		 */
 		if (ret & VM_FAULT_RETRY) {
-			mmap_read_lock(mm);
-			if (hugepage_vma_revalidate(mm, haddr, &vma)) {
-				/* vma is no longer available, don't continue to swapin */
-				trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-				return false;
-			}
-			/* check if the pmd is still valid */
-			if (mm_find_pmd(mm, haddr) != pmd) {
-				trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-				return false;
-			}
+			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
+			return false;
 		}
 		if (ret & VM_FAULT_ERROR) {
+			mmap_read_unlock(mm);
 			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
 			return false;
 		}
+		swapped_in++;
 	}
 
 	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
@@ -1086,13 +1083,12 @@ static void collapse_huge_page(struct mm_struct *mm,
 	}
 
 	/*
-	 * __collapse_huge_page_swapin always returns with mmap_lock locked.
-	 * If it fails, we release mmap_lock and jump out_nolock.
+	 * __collapse_huge_page_swapin will return with mmap_lock released
+	 * when it fails. So we jump out_nolock directly in that case.
 	 * Continuing to collapse causes inconsistency.
 	 */
 	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
 						pmd, referenced)) {
-		mmap_read_unlock(mm);
 		goto out_nolock;
 	}
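
The "swapped_in++" move in the second hunk is the usual count-on-success
pattern: incrementing before the error checks counts attempts, while
incrementing after counts completions, and only the latter is what the
tracepoint should report.  A toy illustration of the difference, where
op() and N are made up for the example:

#include <stdio.h>
#include <stdbool.h>

#define N 5

static bool op(int i)
{
	return i != 3;	/* pretend the fourth attempt fails */
}

int main(void)
{
	int before = 0, after = 0;

	for (int i = 0; i < N; i++) {
		before++;		/* old placement: counts the attempt */
		if (!op(i))
			break;
		after++;		/* new placement: counts the success */
	}
	printf("attempt-count=%d success-count=%d\n", before, after);
	return 0;
}

This prints attempt-count=4 success-count=3: the old placement charges
the failed attempt to the counter, the new one does not.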