Commit 1e275d40 authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] page migration: Fix MPOL_INTERLEAVE behavior for migration via mbind()

migrate_pages_to() allocates a list of new pages on the intended target
node or with the intended policy, and then uses that list of new pages as
the targets for migrating the out-of-place pages.

When the pages are allocated, it is not yet known which of the out-of-place
pages will be moved to which new page, so we cannot specify an address as
needed by alloc_page_vma().  This causes a problem for MPOL_INTERLEAVE,
which currently allocates all the pages on the first node of the set.  If
mbind() is used on a vma that has an MPOL_INTERLEAVE policy, the
interleaving of pages may be destroyed.
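
To make the failure mode concrete, here is a simplified userspace model of
the node selection.  This is a sketch, not the kernel's actual
offset_il_node() logic; the PAGE_SHIFT value, the node count, and the
modulo mapping are assumptions for illustration only.

#include <stdio.h>

/*
 * Toy model of MPOL_INTERLEAVE node selection: the page index within
 * the vma, reduced modulo the number of allowed nodes, picks the
 * target node.  A fixed address therefore always picks the same node.
 */
#define PAGE_SHIFT	12	/* assume 4K pages for this sketch */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

static int interleave_node(unsigned long addr, unsigned long vm_start,
			   int nr_nodes)
{
	return (int)(((addr - vm_start) >> PAGE_SHIFT) % nr_nodes);
}

int main(void)
{
	unsigned long vm_start = 0x10000000UL;
	int i;

	/* The bug: passing vm_start every time always yields node 0. */
	for (i = 0; i < 4; i++)
		printf("fixed address -> node %d\n",
		       interleave_node(vm_start, vm_start, 4));

	/* The fix: a growing fake offset cycles through the nodes. */
	for (i = 0; i < 4; i++)
		printf("offset page %d -> node %d\n", i,
		       interleave_node(vm_start + i * PAGE_SIZE,
				       vm_start, 4));
	return 0;
}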

This patch fixes that by generating a fake address for each
alloc_page_vma() call, which results in a distribution of pages as
prescribed by MPOL_INTERLEAVE.
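
As a usage illustration, the path exercised by this fix can be reached
from userspace roughly as follows.  This is a hedged sketch: it assumes a
NUMA machine with at least two nodes, a kernel with page migration
enabled, and the libnuma headers (link with -lnuma); the nodemask value
is illustrative and error handling is minimal.

#include <numaif.h>
#include <sys/mman.h>
#include <string.h>

int main(void)
{
	size_t len = 64 * 4096;
	unsigned long nodemask = 0x3;	/* interleave over nodes 0 and 1 */
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 1, len);	/* fault the pages in somewhere first */

	/*
	 * MPOL_MF_MOVE migrates the already-allocated pages; with this
	 * patch the replacement pages are interleaved across the
	 * nodemask instead of all landing on its first node.
	 */
	return mbind(p, len, MPOL_INTERLEAVE, &nodemask,
		     sizeof(nodemask) * 8, MPOL_MF_MOVE) ? 1 : 0;
}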

Lee also noted that the sequence of nodes for the new pages seems to be
inverted, so we also invert the way the lists of pages for migration are
built.
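
The inversion follows from how the list primitives insert: list_add()
pushes at the head of a list, so a sequence of adds is traversed in
reverse, while list_add_tail() appends and preserves order.  A minimal
stand-in (plain C, not the real <linux/list.h>) demonstrates the
difference:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };
struct item { struct list_head lru; int id; };	/* lru is first member */

static void init_list(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{					/* insert right after the head */
	n->next = h->next; n->prev = h;
	h->next->prev = n; h->next = n;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{					/* insert right before the head */
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void dump(const char *tag, struct list_head *h)
{
	struct list_head *p;

	printf("%s:", tag);
	for (p = h->next; p != h; p = p->next)
		printf(" %d", ((struct item *)p)->id);	/* lru is first */
	printf("\n");
}

int main(void)
{
	struct list_head heads, tails;
	struct item a[3] = { {.id = 0}, {.id = 1}, {.id = 2} };
	struct item b[3] = { {.id = 0}, {.id = 1}, {.id = 2} };
	int i;

	init_list(&heads);
	init_list(&tails);
	for (i = 0; i < 3; i++) {
		list_add(&a[i].lru, &heads);
		list_add_tail(&b[i].lru, &tails);
	}
	dump("list_add      (reversed)", &heads);	/* 2 1 0 */
	dump("list_add_tail (in order)", &tails);	/* 0 1 2 */
	return 0;
}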
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Looks-ok-to: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent f68a106f
@@ -552,7 +552,7 @@ static void migrate_page_add(struct page *page, struct list_head *pagelist,
 	 */
 	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(page) == 1) {
 		if (isolate_lru_page(page))
-			list_add(&page->lru, pagelist);
+			list_add_tail(&page->lru, pagelist);
 	}
 }
@@ -569,6 +569,7 @@ static int migrate_pages_to(struct list_head *pagelist,
 	LIST_HEAD(moved);
 	LIST_HEAD(failed);
 	int err = 0;
+	unsigned long offset = 0;
 	int nr_pages;
 	struct page *page;
 	struct list_head *p;
@@ -576,8 +577,21 @@ static int migrate_pages_to(struct list_head *pagelist,
 redo:
 	nr_pages = 0;
 	list_for_each(p, pagelist) {
-		if (vma)
-			page = alloc_page_vma(GFP_HIGHUSER, vma, vma->vm_start);
+		if (vma) {
+			/*
+			 * The address passed to alloc_page_vma is used to
+			 * generate the proper interleave behavior. We fake
+			 * the address here by an increasing offset in order
+			 * to get the proper distribution of pages.
+			 *
+			 * No decision has been made as to which page
+			 * a certain old page is moved to so we cannot
+			 * specify the correct address.
+			 */
+			page = alloc_page_vma(GFP_HIGHUSER, vma,
+					offset + vma->vm_start);
+			offset += PAGE_SIZE;
+		}
 		else
 			page = alloc_pages_node(dest, GFP_HIGHUSER, 0);
@@ -585,7 +599,7 @@ static int migrate_pages_to(struct list_head *pagelist,
 			err = -ENOMEM;
 			goto out;
 		}
-		list_add(&page->lru, &newlist);
+		list_add_tail(&page->lru, &newlist);
 		nr_pages++;
 		if (nr_pages > MIGRATE_CHUNK_SIZE)
 			break;
...