Commit a1fde08c authored by Linus Torvalds

VM: skip the stack guard page lookup in get_user_pages only for mlock

The logic in __get_user_pages() used to skip the stack guard page lookup
whenever the caller wasn't interested in seeing what the actual page
was.  But Michel Lespinasse points out that there are cases where we
don't care about the physical page itself (so 'pages' may be NULL), but
do want to make sure a page is mapped into the virtual address space.
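
Concretely, the mlock population path (the mm/mlock.c hunk below) is one such NULL-'pages' caller; a simplified sketch of its call into __get_user_pages(), with the flag setup elided:

	/*
	 * Sketch of the mlock case: 'pages' and 'vmas' are NULL because
	 * mlock only needs the range faulted in, not the struct pages.
	 * Other callers can pass NULL here too while still wanting the
	 * guard page mapped, so '!pages' alone cannot mean "skip it".
	 */
	return __get_user_pages(current, mm, addr, nr_pages, gup_flags,
				NULL, NULL, nonblocking);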

So using the existence of the "pages" array as an indication of whether
to look up the guard page or not isn't actually so great, and we really
should just use the FOLL_MLOCK bit.  But because that bit was only set
for the VM_LOCKED case (and not all vmas necessarily have it, even for
mlock()), we couldn't do that originally.

Fix that by moving the VM_LOCKED check deeper into the call-chain, which
actually simplifies many things.  Now mlock() gets simpler, we can check
for FOLL_MLOCK directly in __get_user_pages(), and the code ends up much
more straightforward.

Reported-and-reviewed-by: Michel Lespinasse <walken@google.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 5895198c
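
The net effect, condensed into one hedged sketch (a paraphrase of the hunks below, not the literal source; the body of the follow_page() branch, which does the mlock accounting via mlock_vma_page() and friends, is elided):

	/* mm/mlock.c, __mlock_vma_pages_range(): FOLL_MLOCK is now set unconditionally */
	gup_flags = FOLL_TOUCH | FOLL_MLOCK;

	/* mm/memory.c, __get_user_pages(): FOLL_MLOCK alone decides the guard-page skip */
	if ((gup_flags & FOLL_MLOCK) && stack_guard_page(vma, start))
		goto next_page;

	/* mm/memory.c, follow_page(): the VM_LOCKED test now happens here instead */
	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
		/* ... mlock accounting ... */
	}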
mm/memory.c:

@@ -1359,7 +1359,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 		 */
 		mark_page_accessed(page);
 	}
-	if (flags & FOLL_MLOCK) {
+	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
 		/*
 		 * The preliminary mapping check is mainly to avoid the
 		 * pointless overhead of lock_page on the ZERO_PAGE

@@ -1552,10 +1552,9 @@ int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		}

 		/*
-		 * If we don't actually want the page itself,
-		 * and it's the stack guard page, just skip it.
+		 * For mlock, just skip the stack guard page.
 		 */
-		if (!pages && stack_guard_page(vma, start))
+		if ((gup_flags & FOLL_MLOCK) && stack_guard_page(vma, start))
 			goto next_page;

 		do {

mm/mlock.c:

@@ -162,7 +162,7 @@ static long __mlock_vma_pages_range(struct vm_area_struct *vma,
 	VM_BUG_ON(end > vma->vm_end);
 	VM_BUG_ON(!rwsem_is_locked(&mm->mmap_sem));

-	gup_flags = FOLL_TOUCH;
+	gup_flags = FOLL_TOUCH | FOLL_MLOCK;
 	/*
 	 * We want to touch writable mappings with a write fault in order
 	 * to break COW, except for shared mappings because these don't COW

@@ -178,9 +178,6 @@ static long __mlock_vma_pages_range(struct vm_area_struct *vma,
 	if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
 		gup_flags |= FOLL_FORCE;

-	if (vma->vm_flags & VM_LOCKED)
-		gup_flags |= FOLL_MLOCK;
-
 	return __get_user_pages(current, mm, addr, nr_pages, gup_flags,
 				NULL, NULL, nonblocking);
 }