Commit 2ca99358 authored by Peter Xu, committed by Linus Torvalds

mm: clear vmf->pte after pte_unmap_same() returns

pte_unmap_same() always unmaps the pte pointer.  After the unmap,
vmf->pte is no longer valid, so we should clear it.

This was safe only because nothing accesses vmf->pte after
pte_unmap_same() returns: the only caller so far is do_swap_page(),
where vmf->pte is in most cases overwritten again very soon.

Pass vmf directly into pte_unmap_same(), which also avoids the long
parameter list and makes for a nice cleanup.
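
The pattern this enforces is general: once the mapping behind a cached
pointer has been torn down, clearing the pointer turns any later stale
access into an immediate NULL dereference instead of a silent
use-after-unmap.  A minimal userspace sketch of the idea (fault_ctx and
unmap_same() are made-up names for illustration, not kernel code):

	#include <stdio.h>
	#include <stdlib.h>

	struct fault_ctx {
		int *pte;	/* cached pointer into a temporary mapping */
		int orig_pte;	/* value sampled when the fault was taken */
	};

	static int unmap_same(struct fault_ctx *ctx)
	{
		int same = (*ctx->pte == ctx->orig_pte);

		free(ctx->pte);		/* stands in for pte_unmap() */
		ctx->pte = NULL;	/* the fix: invalidate the cached pointer */
		return same;
	}

	int main(void)
	{
		struct fault_ctx ctx = { .pte = malloc(sizeof(int)), .orig_pte = 42 };
		int same;

		*ctx.pte = 42;
		same = unmap_same(&ctx);
		/* ctx.pte is now NULL; a stale dereference would crash loudly */
		printf("same=%d, pte now %p\n", same, (void *)ctx.pte);
		return 0;
	}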

Link: https://lkml.kernel.org/r/20210915181533.11188-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Liam Howlett <liam.howlett@oracle.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9ae0f87d
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2724,19 +2724,19 @@ EXPORT_SYMBOL_GPL(apply_to_existing_page_range);
  * proceeding (but do_wp_page is only called after already making such a check;
  * and do_anonymous_page can safely check later on).
  */
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-				 pte_t *page_table, pte_t orig_pte)
+static inline int pte_unmap_same(struct vm_fault *vmf)
 {
 	int same = 1;
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
-		spinlock_t *ptl = pte_lockptr(mm, pmd);
+		spinlock_t *ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
 		spin_lock(ptl);
-		same = pte_same(*page_table, orig_pte);
+		same = pte_same(*vmf->pte, vmf->orig_pte);
 		spin_unlock(ptl);
 	}
 #endif
-	pte_unmap(page_table);
+	pte_unmap(vmf->pte);
+	vmf->pte = NULL;
 	return same;
 }
@@ -3488,7 +3488,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
-	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
+	if (!pte_unmap_same(vmf))
 		goto out;
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
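
For context (not part of this patch): once vmf->pte is cleared,
do_swap_page() must re-establish the mapping before touching the page
table again, roughly as follows.  This is paraphrased from mm/memory.c
of the same era rather than quoted verbatim:

	/* re-map and lock the pte before installing the swapped-in page */
	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
			vmf->address, &vmf->ptl);
	if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
		/* pte unchanged while we were unmapped: safe to proceed */
	}
	pte_unmap_unlock(vmf->pte, vmf->ptl);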