Commit 25f23a0c authored by Jérôme Glisse, committed by Linus Torvalds

mm/hmm: improve and rename hmm_vma_get_pfns() to hmm_range_snapshot()

Rename for consistency between code, comments and documentation.  Also
improve the comments on all the possible return values.  Improve the
function by returning the number of populated entries in the pfns array.

Link: http://lkml.kernel.org/r/20190403193318.16478-5-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9f454612
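For orientation before the hunks, here is a condensed sketch of what the rename means at a call site. It is not code from this commit: the surrounding hmm_range setup and error handling are assumed, and the fragment only contrasts the old and new return conventions.

	/* Before: returns 0 on success, a negative errno on failure. */
	int ret = hmm_vma_get_pfns(&range);
	if (ret)
		return ret;

	/* After: a negative errno on failure, otherwise the number of entries
	 * populated in range.pfns[] starting at range.start (possibly zero). */
	long npages = hmm_range_snapshot(&range);
	if (npages < 0)
		return npages;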
@@ -189,11 +189,7 @@ the driver callback returns.
 When the device driver wants to populate a range of virtual addresses, it can
 use either::
 
-  int hmm_vma_get_pfns(struct vm_area_struct *vma,
-                      struct hmm_range *range,
-                      unsigned long start,
-                      unsigned long end,
-                      hmm_pfn_t *pfns);
+  long hmm_range_snapshot(struct hmm_range *range);
   int hmm_vma_fault(struct vm_area_struct *vma,
                     struct hmm_range *range,
                     unsigned long start,
@@ -202,7 +198,7 @@ use either::
                     bool write,
                     bool block);
 
-The first one (hmm_vma_get_pfns()) will only fetch present CPU page table
+The first one (hmm_range_snapshot()) will only fetch present CPU page table
 entries and will not trigger a page fault on missing or non-present entries.
 The second one does trigger a page fault on missing or read-only entry if the
 write parameter is true. Page faults use the generic mm page fault code path
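This distinction matters to the caller: the snapshot path leaves non-present entries marked with the driver-chosen "none" value instead of faulting them in. As a hedged illustration only (it assumes the HMM_PFN_NONE index into range->values defined in include/linux/hmm.h at this point in the series; dmirror_has_holes() is a hypothetical helper), a driver could scan for such holes before deciding whether to fall back to the faulting path:

/* Illustrative sketch only: returns true if any snapshotted entry is a hole. */
static bool dmirror_has_holes(const struct hmm_range *range, long npages)
{
	long i;

	for (i = 0; i < npages; i++) {
		/* hmm_range_snapshot() fills non-present entries with the
		 * driver's "none" value rather than triggering a page fault. */
		if (range->pfns[i] == range->values[HMM_PFN_NONE])
			return true;
	}
	return false;
}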
@@ -220,19 +216,33 @@ respect in order to keep things properly synchronized. The usage pattern is::
  {
       struct hmm_range range;
       ...
+
+      range.start = ...;
+      range.end = ...;
+      range.pfns = ...;
+      range.flags = ...;
+      range.values = ...;
+      range.pfn_shift = ...;
+
  again:
-      ret = hmm_vma_get_pfns(vma, &range, start, end, pfns);
-      if (ret)
+      down_read(&mm->mmap_sem);
+      range.vma = ...;
+      ret = hmm_range_snapshot(&range);
+      if (ret) {
+          up_read(&mm->mmap_sem);
           return ret;
+      }
       take_lock(driver->update);
       if (!hmm_vma_range_done(vma, &range)) {
           release_lock(driver->update);
+          up_read(&mm->mmap_sem);
           goto again;
       }
 
       // Use pfns array content to update device page table
 
       release_lock(driver->update);
+      up_read(&mm->mmap_sem);
       return 0;
  }
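The "..." placeholders in the documented pattern are intentionally left to the driver. As a hedged illustration of one way to flesh it out (struct dmirror and dmirror_sync() are hypothetical, the mutex stands in for driver->update, the flag/value tables and pfn_shift encodings are driver-specific, and the range must lie within a single vma at this point in the series):

/* Illustrative sketch only: "dmirror" and its members are hypothetical. */
struct dmirror {
	struct mutex update;		/* plays the role of driver->update above */
	const uint64_t *flags;		/* driver's HMM_PFN_FLAG_MAX encoding     */
	const uint64_t *values;		/* driver's HMM_PFN_VALUE_MAX encoding    */
	uint8_t pfn_shift;
};

static int dmirror_sync(struct dmirror *drv, struct mm_struct *mm,
			unsigned long start, unsigned long end, uint64_t *pfns)
{
	struct hmm_range range = {
		.start		= start,
		.end		= end,
		.pfns		= pfns,
		.flags		= drv->flags,
		.values		= drv->values,
		.pfn_shift	= drv->pfn_shift,
	};
	long npages;

again:
	down_read(&mm->mmap_sem);
	range.vma = find_vma(mm, start);
	if (!range.vma || range.vma->vm_start > start) {
		up_read(&mm->mmap_sem);
		return -EFAULT;
	}
	npages = hmm_range_snapshot(&range);
	if (npages < 0) {
		up_read(&mm->mmap_sem);
		return npages;
	}

	mutex_lock(&drv->update);
	if (!hmm_vma_range_done(&range)) {
		/* CPU page table changed under us: throw the snapshot away. */
		mutex_unlock(&drv->update);
		up_read(&mm->mmap_sem);
		goto again;
	}
	/* ... program the device page table from the first npages pfns ... */
	mutex_unlock(&drv->update);
	up_read(&mm->mmap_sem);
	return 0;
}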
...
@@ -365,11 +365,11 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror);
  * table invalidation serializes on it.
  *
  * YOU MUST CALL hmm_vma_range_done() ONCE AND ONLY ONCE EACH TIME YOU CALL
- * hmm_vma_get_pfns() WITHOUT ERROR !
+ * hmm_range_snapshot() WITHOUT ERROR !
  *
  * IF YOU DO NOT FOLLOW THE ABOVE RULE THE SNAPSHOT CONTENT MIGHT BE INVALID !
  */
-int hmm_vma_get_pfns(struct hmm_range *range);
+long hmm_range_snapshot(struct hmm_range *range);
 bool hmm_vma_range_done(struct hmm_range *range);
...
@@ -702,23 +702,25 @@ static void hmm_pfns_special(struct hmm_range *range)
 }
 
 /*
- * hmm_vma_get_pfns() - snapshot CPU page table for a range of virtual addresses
- * @range: range being snapshotted
- * Returns: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid
- * vma permission, 0 success
+ * hmm_range_snapshot() - snapshot CPU page table for a range
+ * @range: range
+ * Returns: number of valid pages in range->pfns[] (from range start
+ * address). This may be zero. If the return value is negative,
+ * then one of the following values may be returned:
+ *
+ * -EINVAL  invalid arguments or mm or virtual address are in an
+ *          invalid vma (ie either hugetlbfs or device file vma).
+ * -EPERM   For example, asking for write, when the range is
+ *          read-only
+ * -EAGAIN  Caller needs to retry
+ * -EFAULT  Either no valid vma exists for this range, or it is
+ *          illegal to access the range
  *
  * This snapshots the CPU page table for a range of virtual addresses. Snapshot
  * validity is tracked by range struct. See hmm_vma_range_done() for further
  * information.
- *
- * The range struct is initialized here. It tracks the CPU page table, but only
- * if the function returns success (0), in which case the caller must then call
- * hmm_vma_range_done() to stop CPU page table update tracking on this range.
- *
- * NOT CALLING hmm_vma_range_done() IF FUNCTION RETURNS 0 WILL LEAD TO SERIOUS
- * MEMORY CORRUPTION ! YOU HAVE BEEN WARNED !
  */
-int hmm_vma_get_pfns(struct hmm_range *range)
+long hmm_range_snapshot(struct hmm_range *range)
 {
 	struct vm_area_struct *vma = range->vma;
 	struct hmm_vma_walk hmm_vma_walk;
@@ -772,6 +774,7 @@ int hmm_vma_get_pfns(struct hmm_range *range)
 	hmm_vma_walk.fault = false;
 	hmm_vma_walk.range = range;
 	mm_walk.private = &hmm_vma_walk;
+	hmm_vma_walk.last = range->start;
 
 	mm_walk.vma = vma;
 	mm_walk.mm = vma->vm_mm;
@@ -788,9 +791,9 @@ int hmm_vma_get_pfns(struct hmm_range *range)
 	 * function return 0).
 	 */
 	range->hmm = hmm;
-	return 0;
+	return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
 }
-EXPORT_SYMBOL(hmm_vma_get_pfns);
+EXPORT_SYMBOL(hmm_range_snapshot);
 
 /*
  * hmm_vma_range_done() - stop tracking change to CPU page table over a range
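As a worked example of the new return value, assuming 4 KiB pages (PAGE_SHIFT == 12): if range->start is 0x100000 and the page table walk stops with hmm_vma_walk.last at 0x104000, the function returns (0x104000 - 0x100000) >> 12 = 4, telling the caller that the first four entries of range->pfns[] hold a valid snapshot.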
...