Commit 51afb12b authored by Hugh Dickins, committed by Linus Torvalds

mm: page migration fix PageMlocked on migrated pages

Commit e6c509f8 ("mm: use clear_page_mlock() in page_remove_rmap()")
in v3.7 inadvertently made mlock_migrate_page() impotent: page migration
unmaps the page from userspace before migrating, and that commit clears
PageMlocked on the final unmap, leaving mlock_migrate_page() with
nothing to do.  Not a serious bug, the next attempt at reclaiming the
page would fix it up; but a betrayal of page migration's intent - the
new page ought to emerge as PageMlocked.
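
To make the no-op concrete, the order of events on the normal migration
path since e6c509f8 is roughly this (a simplified sketch, not code from
this patch):

	/* migration unmaps the page before copying it; the final unmap
	 * reaches page_remove_rmap(), which since e6c509f8 does: */
	if (unlikely(PageMlocked(page)))
		clear_page_mlock(page);		/* PG_mlocked cleared here */

	/* so when migrate_page_copy() later calls this, the
	 * TestClearPageMlocked(page) test inside is already false,
	 * and nothing is transferred to newpage: */
	mlock_migrate_page(newpage, page);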

I don't see how to fix it for mlock_migrate_page() itself; but easily
fixed in remove_migration_pte(), by calling mlock_vma_page() when the vma
is VM_LOCKED - under pte lock as in try_to_unmap_one().
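
That mirrors what try_to_unmap_one() has done since the parent commit
b87537d9; from memory, the pattern there is approximately:

	if (vma->vm_flags & VM_LOCKED) {
		/* Holding pte lock, we do *not* need mmap_sem here */
		mlock_vma_page(page);
		ret = SWAP_MLOCK;
		goto out_unmap;
	}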

Delete mlock_migrate_page()?  Not quite, it does still serve a purpose for
migrate_misplaced_transhuge_page(): where we could replace it by a test,
clear_page_mlock(), mlock_vma_page() sequence; but would that be an
improvement?  mlock_migrate_page() is fairly lean, and let's make it
leaner by skipping the irq save/restore now clearly not needed.
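
For comparison, the open-coded alternative contemplated above would look
something like this at the migrate_misplaced_transhuge_page() call site
(a hypothetical sketch, not what this patch does):

	/* hypothetical replacement for mlock_migrate_page(new_page, page) */
	if (PageMlocked(page)) {
		clear_page_mlock(page);		/* clear flag, drop old page's NR_MLOCK */
		mlock_vma_page(new_page);	/* set flag, bump new page's NR_MLOCK */
	}

Instead the patch keeps mlock_migrate_page() and, as its new comment
notes, relies on the caller holding the pmd lock with no change from irq
context, which makes the unprotected __mod_zone_page_state() calls safe.
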
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Rik van Riel <riel@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent b87537d9
mm/internal.h
@@ -271,20 +271,19 @@ extern unsigned int munlock_vma_page(struct page *page);
 extern void clear_page_mlock(struct page *page);
 
 /*
- * mlock_migrate_page - called only from migrate_page_copy() to
- * migrate the Mlocked page flag; update statistics.
+ * mlock_migrate_page - called only from migrate_misplaced_transhuge_page()
+ * (because that does not go through the full procedure of migration ptes):
+ * to migrate the Mlocked page flag; update statistics.
  */
 static inline void mlock_migrate_page(struct page *newpage, struct page *page)
 {
 	if (TestClearPageMlocked(page)) {
-		unsigned long flags;
 		int nr_pages = hpage_nr_pages(page);
 
-		local_irq_save(flags);
+		/* Holding pmd lock, no change in irq context: __mod is safe */
 		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
 		SetPageMlocked(newpage);
 		__mod_zone_page_state(page_zone(newpage), NR_MLOCK, nr_pages);
-		local_irq_restore(flags);
 	}
 }
 
mm/migrate.c
@@ -171,6 +171,9 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
 	else
 		page_add_file_rmap(new);
 
+	if (vma->vm_flags & VM_LOCKED)
+		mlock_vma_page(new);
+
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(vma, addr, ptep);
 unlock:
@@ -537,7 +540,6 @@ void migrate_page_copy(struct page *newpage, struct page *page)
 	cpupid = page_cpupid_xchg_last(page, -1);
 	page_cpupid_xchg_last(newpage, cpupid);
 
-	mlock_migrate_page(newpage, page);
 	ksm_migrate_page(newpage, page);
 	/*
 	 * Please do not reorder this without considering how mm/ksm.c's
@@ -1787,7 +1789,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 			SetPageActive(page);
 		if (TestClearPageUnevictable(new_page))
 			SetPageUnevictable(page);
-		mlock_migrate_page(page, new_page);
 
 		unlock_page(new_page);
 		put_page(new_page);		/* Free it */
@@ -1829,6 +1830,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 		goto fail_putback;
 	}
 
+	mlock_migrate_page(new_page, page);
 	mem_cgroup_migrate(page, new_page, false);
 	page_remove_rmap(page);
 