Commit 742755a1 authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] page migration: sys_move_pages(): support moving of individual pages

move_pages() is used to move individual pages of a process. The function can
be used to determine the location of pages and to move them onto the desired
node. move_pages() returns status information for each page.

long move_pages(pid, number_of_pages_to_move,
		addresses_of_pages[],
		nodes[] or NULL,
		status[],
		flags);

addresses_of_pages[] is an array of void * pointers identifying the
pages to be moved.

The nodes array contains the node numbers that the pages should be moved
to. If a NULL is passed instead of an array then no pages are moved but
the status array is updated. The status request may be used to determine
the page state before issuing another move_pages() to move pages.

The status array will contain the state of each individual page migration
attempt when the function terminates. The status array is only valid if
move_pages() completed successfully.

Possible page states in status[]:

0..MAX_NUMNODES	The page is now on the indicated node.

-ENOENT		Page is not present

-EACCES		Page is mapped by multiple processes and can only
		be moved if MPOL_MF_MOVE_ALL is specified.

-EPERM		The page has been mlocked by a process/driver and
		cannot be moved.

-EBUSY		Page is busy and cannot be moved. Try again later.

-EFAULT		Invalid address (no VMA or zero page).

-ENOMEM		Unable to allocate memory on target node.

-EIO		Unable to write back page. The page must be written
		back in order to move it since the page is dirty and the
		filesystem does not provide a migration function that
		would allow the moving of dirty pages.

-EINVAL		A dirty page cannot be moved. The filesystem does not provide
		a migration function and has no ability to write back pages.

The flags parameter indicates what types of pages to move:

MPOL_MF_MOVE	Move pages that are only mapped by the process.

MPOL_MF_MOVE_ALL Also move pages that are mapped by multiple processes.
		Requires sufficient capabilities.

Possible return codes from move_pages():

-ENOENT		No pages found that would require moving. All pages
		are either already on the target node, not present, had an
		invalid address or could not be moved because they were
		mapped by multiple processes.

-EINVAL		Flags other than MPOL_MF_MOVE(_ALL) specified or an attempt
		to migrate pages in a kernel thread.

-EPERM		MPOL_MF_MOVE_ALL specified without sufficient privileges,
		or an attempt to move pages of a process belonging to another user.

-EACCES		One of the target nodes is not allowed by the current cpuset.

-ENODEV		One of the target nodes is not online.

-ESRCH		Process does not exist.

-E2BIG		Too many pages to move.

-ENOMEM		Not enough memory to allocate control array.

-EFAULT		Parameters could not be accessed.

A test program for move_pages() may be found with the patches
on ftp.kernel.org:/pub/linux/kernel/people/christoph/pmig/patches-2.6.17-rc4-mm3

From: Christoph Lameter <clameter@sgi.com>

  Detailed results for sys_move_pages()

  Pass a pointer to an integer to get_new_page() that may be used to
  indicate where the completion status of a migration operation should be
  placed.  This allows sys_move_pages() to report back exactly what happened to
  each page.

  Wish there would be a better way to do this. Looks a bit hacky.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Jes Sorensen <jes@trained-monkey.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Michael Kerrisk <mtk-manpages@gmx.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 95a402c3
Documentation/vm/page_migration
@@ -26,8 +26,13 @@ a process are located. See also the numa_maps manpage in the numactl package.

 Manual migration is useful if for example the scheduler has relocated
 a process to a processor on a distant node. A batch scheduler or an
 administrator may detect the situation and move the pages of the process
-nearer to the new processor. At some point in the future we may have
-some mechanism in the scheduler that will automatically move the pages.
+nearer to the new processor. The kernel itself only provides
+manual page migration support. Automatic page migration may be implemented
+through user space processes that move pages. A special function call
+"move_pages" allows the moving of individual pages within a process.
+A NUMA profiler may, for example, obtain a log showing frequent off-node
+accesses and may use the result to move pages to more advantageous
+locations.

 Larger installations usually partition the system using cpusets into
 sections of nodes. Paul Jackson has equipped cpusets with the ability to
@@ -62,22 +67,14 @@ A. In kernel use of migrate_pages()
    It also prevents the swapper or other scans to encounter
    the page.

-2. Generate a list of newly allocated pages. These pages will contain the
-   contents of the pages from the first list after page migration is
-   complete.
+2. We need to have a function of type new_page_t that can be
+   passed to migrate_pages(). This function should figure out
+   how to allocate the correct new page given the old page.

 3. The migrate_pages() function is called which attempts
-   to do the migration. It returns the moved pages in the
-   list specified as the third parameter and the failed
-   migrations in the fourth parameter. When the function
-   returns the first list will contain the pages that could still be retried.
-
-4. The leftover pages of various types are returned
-   to the LRU using putback_to_lru_pages() or otherwise
-   disposed of. The pages will still have the refcount as
-   increased by isolate_lru_pages() if putback_to_lru_pages() is not
-   used! The kernel may want to handle the various cases of failures in
-   different ways.
+   to do the migration. It will call the function to allocate
+   the new page for each page that is considered for
+   moving.

 B. How migrate_pages() works
 ----------------------------
...
arch/ia64/kernel/entry.S
@@ -1584,7 +1584,7 @@ sys_call_table:
 	data8 sys_keyctl
 	data8 sys_ioprio_set
 	data8 sys_ioprio_get			// 1275
-	data8 sys_ni_syscall
+	data8 sys_move_pages
 	data8 sys_inotify_init
 	data8 sys_inotify_add_watch
 	data8 sys_inotify_rm_watch
...
include/asm-ia64/unistd.h
@@ -265,7 +265,7 @@
 #define __NR_keyctl			1273
 #define __NR_ioprio_set			1274
 #define __NR_ioprio_get			1275
-/* 1276 is available for reuse (was briefly sys_set_zone_reclaim) */
+#define __NR_move_pages			1276
 #define __NR_inotify_init		1277
 #define __NR_inotify_add_watch		1278
 #define __NR_inotify_rm_watch		1279
...
include/linux/migrate.h
@@ -3,7 +3,7 @@
 #include <linux/mm.h>

-typedef struct page *new_page_t(struct page *, unsigned long private);
+typedef struct page *new_page_t(struct page *, unsigned long private, int **);

 #ifdef CONFIG_MIGRATION
 extern int isolate_lru_page(struct page *p, struct list_head *pagelist);
...
include/linux/syscalls.h
@@ -516,6 +516,11 @@ asmlinkage long sys_set_mempolicy(int mode, unsigned long __user *nmask,
 asmlinkage long sys_migrate_pages(pid_t pid, unsigned long maxnode,
 				const unsigned long __user *from,
 				const unsigned long __user *to);
+asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
+				const void __user * __user *pages,
+				const int __user *nodes,
+				int __user *status,
+				int flags);
 asmlinkage long sys_mbind(unsigned long start, unsigned long len,
 				unsigned long mode,
 				unsigned long __user *nmask,
...
kernel/sys_ni.c
@@ -87,6 +87,7 @@ cond_syscall(sys_inotify_init);
 cond_syscall(sys_inotify_add_watch);
 cond_syscall(sys_inotify_rm_watch);
 cond_syscall(sys_migrate_pages);
+cond_syscall(sys_move_pages);
 cond_syscall(sys_chown16);
 cond_syscall(sys_fchown16);
 cond_syscall(sys_getegid16);
...
mm/mempolicy.c
@@ -588,7 +588,7 @@ static void migrate_page_add(struct page *page, struct list_head *pagelist,
 	isolate_lru_page(page, pagelist);
 }

-static struct page *new_node_page(struct page *page, unsigned long node)
+static struct page *new_node_page(struct page *page, unsigned long node, int **x)
 {
 	return alloc_pages_node(node, GFP_HIGHUSER, 0);
 }
@@ -698,7 +698,7 @@ int do_migrate_pages(struct mm_struct *mm,
 }

-static struct page *new_vma_page(struct page *page, unsigned long private)
+static struct page *new_vma_page(struct page *page, unsigned long private, int **x)
 {
 	struct vm_area_struct *vma = (struct vm_area_struct *)private;
...
mm/migrate.c
@@ -25,6 +25,8 @@
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
 #include <linux/writeback.h>
+#include <linux/mempolicy.h>
+#include <linux/vmalloc.h>

 #include "internal.h"
@@ -62,9 +64,8 @@ int isolate_lru_page(struct page *page, struct list_head *pagelist)
 }

 /*
- * migrate_prep() needs to be called after we have compiled the list of pages
- * to be migrated using isolate_lru_page() but before we begin a series of calls
- * to migrate_pages().
+ * migrate_prep() needs to be called before we start compiling a list of pages
+ * to be migrated using isolate_lru_page().
  */
 int migrate_prep(void)
 {
@@ -588,7 +589,8 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 			struct page *page, int force)
 {
 	int rc = 0;
-	struct page *newpage = get_new_page(page, private);
+	int *result = NULL;
+	struct page *newpage = get_new_page(page, private, &result);

 	if (!newpage)
 		return -ENOMEM;
@@ -642,6 +644,12 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	 * then this will free the page.
 	 */
 	move_to_lru(newpage);
+	if (result) {
+		if (rc)
+			*result = rc;
+		else
+			*result = page_to_nid(newpage);
+	}
 	return rc;
 }
@@ -710,3 +718,255 @@ int migrate_pages(struct list_head *from,

 	return nr_failed + retry;
 }
+
+#ifdef CONFIG_NUMA
+/*
+ * Move a list of individual pages
+ */
+struct page_to_node {
+	unsigned long addr;
+	struct page *page;
+	int node;
+	int status;
+};
+
+static struct page *new_page_node(struct page *p, unsigned long private,
+		int **result)
+{
+	struct page_to_node *pm = (struct page_to_node *)private;
+
+	while (pm->node != MAX_NUMNODES && pm->page != p)
+		pm++;
+
+	if (pm->node == MAX_NUMNODES)
+		return NULL;
+
+	*result = &pm->status;
+
+	return alloc_pages_node(pm->node, GFP_HIGHUSER, 0);
+}
+
+/*
+ * Move a set of pages as indicated in the pm array. The addr
+ * field must be set to the virtual address of the page to be moved
+ * and the node number must contain a valid target node.
+ */
+static int do_move_pages(struct mm_struct *mm, struct page_to_node *pm,
+				int migrate_all)
+{
+	int err;
+	struct page_to_node *pp;
+	LIST_HEAD(pagelist);
+
+	down_read(&mm->mmap_sem);
+
+	/*
+	 * Build a list of pages to migrate
+	 */
+	migrate_prep();
+	for (pp = pm; pp->node != MAX_NUMNODES; pp++) {
+		struct vm_area_struct *vma;
+		struct page *page;
+
+		/*
+		 * A valid page pointer that will not match any of the
+		 * pages that will be moved.
+		 */
+		pp->page = ZERO_PAGE(0);
+
+		err = -EFAULT;
+		vma = find_vma(mm, pp->addr);
+		if (!vma)
+			goto set_status;
+
+		page = follow_page(vma, pp->addr, FOLL_GET);
+		err = -ENOENT;
+		if (!page)
+			goto set_status;
+
+		if (PageReserved(page))		/* Check for zero page */
+			goto put_and_set;
+
+		pp->page = page;
+		err = page_to_nid(page);
+
+		if (err == pp->node)
+			/*
+			 * Node already in the right place
+			 */
+			goto put_and_set;
+
+		err = -EACCES;
+		if (page_mapcount(page) > 1 &&
+				!migrate_all)
+			goto put_and_set;
+
+		err = isolate_lru_page(page, &pagelist);
+put_and_set:
+		/*
+		 * Either remove the duplicate refcount from
+		 * isolate_lru_page() or drop the page ref if it was
+		 * not isolated.
+		 */
+		put_page(page);
+set_status:
+		pp->status = err;
+	}
+
+	if (!list_empty(&pagelist))
+		err = migrate_pages(&pagelist, new_page_node,
+				(unsigned long)pm);
+	else
+		err = -ENOENT;
+
+	up_read(&mm->mmap_sem);
+	return err;
+}
+
+/*
+ * Determine the nodes of a list of pages. The addr in the pm array
+ * must have been set to the virtual address of which we want to determine
+ * the node number.
+ */
+static int do_pages_stat(struct mm_struct *mm, struct page_to_node *pm)
+{
+	down_read(&mm->mmap_sem);
+
+	for ( ; pm->node != MAX_NUMNODES; pm++) {
+		struct vm_area_struct *vma;
+		struct page *page;
+		int err;
+
+		err = -EFAULT;
+		vma = find_vma(mm, pm->addr);
+		if (!vma)
+			goto set_status;
+
+		page = follow_page(vma, pm->addr, 0);
+		err = -ENOENT;
+		/* Use PageReserved to check for zero page */
+		if (!page || PageReserved(page))
+			goto set_status;
+
+		err = page_to_nid(page);
+set_status:
+		pm->status = err;
+	}
+
+	up_read(&mm->mmap_sem);
+	return 0;
+}
+
+/*
+ * Move a list of pages in the address space of the currently executing
+ * process.
+ */
+asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
+			const void __user * __user *pages,
+			const int __user *nodes,
+			int __user *status, int flags)
+{
+	int err = 0;
+	int i;
+	struct task_struct *task;
+	nodemask_t task_nodes;
+	struct mm_struct *mm;
+	struct page_to_node *pm = NULL;
+
+	/* Check flags */
+	if (flags & ~(MPOL_MF_MOVE|MPOL_MF_MOVE_ALL))
+		return -EINVAL;
+
+	if ((flags & MPOL_MF_MOVE_ALL) && !capable(CAP_SYS_NICE))
+		return -EPERM;
+
+	/* Find the mm_struct */
+	read_lock(&tasklist_lock);
+	task = pid ? find_task_by_pid(pid) : current;
+	if (!task) {
+		read_unlock(&tasklist_lock);
+		return -ESRCH;
+	}
+	mm = get_task_mm(task);
+	read_unlock(&tasklist_lock);
+
+	if (!mm)
+		return -EINVAL;
+
+	/*
+	 * Check if this process has the right to modify the specified
+	 * process. The right exists if the process has administrative
+	 * capabilities, superuser privileges or the same
+	 * userid as the target process.
+	 */
+	if ((current->euid != task->suid) && (current->euid != task->uid) &&
+	    (current->uid != task->suid) && (current->uid != task->uid) &&
+	    !capable(CAP_SYS_NICE)) {
+		err = -EPERM;
+		goto out2;
+	}
+
+	task_nodes = cpuset_mems_allowed(task);
+
+	/* Limit nr_pages so that the multiplication may not overflow */
+	if (nr_pages >= ULONG_MAX / sizeof(struct page_to_node) - 1) {
+		err = -E2BIG;
+		goto out2;
+	}
+
+	pm = vmalloc((nr_pages + 1) * sizeof(struct page_to_node));
+	if (!pm) {
+		err = -ENOMEM;
+		goto out2;
+	}
+
+	/*
+	 * Get parameters from user space and initialize the pm
+	 * array. Return various errors if the user did something wrong.
+	 */
+	for (i = 0; i < nr_pages; i++) {
+		const void *p;
+
+		err = -EFAULT;
+		if (get_user(p, pages + i))
+			goto out;
+
+		pm[i].addr = (unsigned long)p;
+		if (nodes) {
+			int node;
+
+			if (get_user(node, nodes + i))
+				goto out;
+
+			err = -ENODEV;
+			if (!node_online(node))
+				goto out;
+
+			err = -EACCES;
+			if (!node_isset(node, task_nodes))
+				goto out;
+
+			pm[i].node = node;
+		}
+	}
+	/* End marker */
+	pm[nr_pages].node = MAX_NUMNODES;
+
+	if (nodes)
+		err = do_move_pages(mm, pm, flags & MPOL_MF_MOVE_ALL);
+	else
+		err = do_pages_stat(mm, pm);
+
+	if (err >= 0)
+		/* Return status information */
+		for (i = 0; i < nr_pages; i++)
+			if (put_user(pm[i].status, status + i))
+				err = -EFAULT;
+
+out:
+	vfree(pm);
+out2:
+	mmput(mm);
+	return err;
+}
+#endif