Commit 6ebb4a1b authored by Kirill A. Shutemov, committed by Linus Torvalds

thp: fix another corner case of munlock() vs. THPs

The following test case triggers BUG() in munlock_vma_pages_range():

	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(int argc, char *argv[])
	{
		int fd;

		system("mount -t tmpfs -o huge=always none /mnt");
		fd = open("/mnt/test", O_CREAT | O_RDWR);
		ftruncate(fd, 4UL << 20);
		mmap(NULL, 4UL << 20, PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_FIXED | MAP_LOCKED, fd, 0);
		mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_LOCKED, fd, 0);
		munlockall();
		return 0;
	}

The second mmap() creates a PTE mapping of the first huge page in the file.
That makes the kernel munlock the page, as we never keep a PTE-mapped page
mlocked.

On munlockall(), when we handle the VMA created by the first mmap(),
munlock_vma_page() returns page_mask == 0, as the page is not mlocked
anymore.  On the next iteration follow_page_mask() returns a tail page,
but page_mask is still HPAGE_NR_PAGES - 1.  That makes us skip to the
first tail page of the next huge page and step on
VM_BUG_ON_PAGE(PageMlocked(page)).

The fix is to not use the page_mask from follow_page_mask() at all.  It
has no use for us.

Link: http://lkml.kernel.org/r/20170302150252.34120-1-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org>    [4.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8346242a
@@ -442,7 +442,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 	while (start < end) {
 		struct page *page;
-		unsigned int page_mask;
+		unsigned int page_mask = 0;
 		unsigned long page_increm;
 		struct pagevec pvec;
 		struct zone *zone;
@@ -456,8 +456,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 		 * suits munlock very well (and if somehow an abnormal page
 		 * has sneaked into the range, we won't oops here: great).
 		 */
-		page = follow_page_mask(vma, start, FOLL_GET | FOLL_DUMP,
-				&page_mask);
+		page = follow_page(vma, start, FOLL_GET | FOLL_DUMP);
 
 		if (page && !IS_ERR(page)) {
 			if (PageTransTail(page)) {
@@ -468,8 +467,8 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 				/*
 				 * Any THP page found by follow_page_mask() may
 				 * have gotten split before reaching
-				 * munlock_vma_page(), so we need to recompute
-				 * the page_mask here.
+				 * munlock_vma_page(), so we need to compute
+				 * the page_mask here instead.
 				 */
 				page_mask = munlock_vma_page(page);
 				unlock_page(page);