Commit 84d60fdd authored by David Hildenbrand, committed by Linus Torvalds

mm: slightly clarify KSM logic in do_swap_page()

Let's make it clearer that KSM might only have to copy a page in case we
have a page in the swapcache, not if we allocated a fresh page and
bypassed the swapcache.  While at it, add a comment why this is usually
necessary and merge the two swapcache conditions.
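
In code terms, the resulting structure looks roughly like this (a condensed
sketch of the hunk below, not the verbatim kernel source):

	if (swapcache) {
		/* Recheck that the page still belongs to our swap entry. */
		if (unlikely(!PageSwapCache(page) ||
			     page_private(page) != entry.val))
			goto out_page;

		/* KSM may need to hand back a private copy on a read fault. */
		page = ksm_might_need_to_copy(page, vma, vmf->address);
		if (unlikely(!page)) {
			ret = VM_FAULT_OOM;
			page = swapcache;
			goto out_page;
		}
	}
	/* A fresh page that bypassed the swapcache skips all of the above. */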

[akpm@linux-foundation.org: fix comment, per David]

Link: https://lkml.kernel.org/r/20220131162940.210846-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Liang Zhang <zhangliang5@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d4c47097
mm/memory.c
@@ -3607,22 +3607,30 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}
 
-	/*
-	 * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
-	 * release the swapcache from under us.  The page pin, and pte_same
-	 * test below, are not enough to exclude that.  Even if it is still
-	 * swapcache, we need to check that the page's swap has not changed.
-	 */
-	if (unlikely((!PageSwapCache(page) ||
-			page_private(page) != entry.val)) && swapcache)
-		goto out_page;
-
-	page = ksm_might_need_to_copy(page, vma, vmf->address);
-	if (unlikely(!page)) {
-		ret = VM_FAULT_OOM;
-		page = swapcache;
-		goto out_page;
-	}
+	if (swapcache) {
+		/*
+		 * Make sure try_to_free_swap or swapoff did not release the
+		 * swapcache from under us.  The page pin, and pte_same test
+		 * below, are not enough to exclude that.  Even if it is still
+		 * swapcache, we need to check that the page's swap has not
+		 * changed.
+		 */
+		if (unlikely(!PageSwapCache(page) ||
+			     page_private(page) != entry.val))
+			goto out_page;
+
+		/*
+		 * KSM sometimes has to copy on read faults, for example, if
+		 * page->index of !PageKSM() pages would be nonlinear inside the
+		 * anon VMA -- PageKSM() is lost on actual swapout.
+		 */
+		page = ksm_might_need_to_copy(page, vma, vmf->address);
+		if (unlikely(!page)) {
+			ret = VM_FAULT_OOM;
+			page = swapcache;
+			goto out_page;
+		}
+	}
 
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
 