Commit 31b912de authored by Joao Martins, committed by Linus Torvalds

mm/gup: decrement head page once for group of subpages

Rather than decrementing the head page refcount one subpage at a time,
walk the page array and check which pages belong to the same
compound_head.  Then decrement the calculated number of references with
a single write to the head page.  To that end, switch to
for_each_compound_head(), which does most of the work.
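For illustration only, here is a minimal, self-contained sketch of the
grouping idea in plain userspace C (not the kernel implementation; the
struct page layout, head_of() helper, and refcount field are stand-ins
invented for this sketch): scan a run of consecutive entries sharing one
compound head, then issue a single refcount update for the whole run.

	#include <stdio.h>

	struct page {
		struct page *compound_head;	/* NULL if this page is itself a head */
		int refcount;
	};

	static struct page *head_of(struct page *p)
	{
		return p->compound_head ? p->compound_head : p;
	}

	int main(void)
	{
		/* one 3-subpage compound page followed by a standalone page */
		struct page head = { NULL, 4 };
		struct page tail1 = { &head, 0 }, tail2 = { &head, 0 };
		struct page single = { NULL, 1 };
		struct page *pages[] = { &head, &tail1, &tail2, &single };
		unsigned long npages = sizeof(pages) / sizeof(pages[0]);
		unsigned long i = 0;

		while (i < npages) {
			struct page *h = head_of(pages[i]);
			unsigned int ntails = 1;

			/* count how many consecutive entries share this head */
			while (i + ntails < npages && head_of(pages[i + ntails]) == h)
				ntails++;

			/* one write to the head instead of ntails separate ones */
			h->refcount -= (int)ntails;
			printf("group of %u page(s), head refcount now %d\n",
			       ntails, h->refcount);
			i += ntails;
		}
		return 0;
	}

In the kernel the per-page decrement is an atomic RMW on the head page's
refcount, so collapsing a group into one update is what buys the speedup
reported below.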

set_page_dirty() needs no adjustment as it's a nop for non-dirty head
pages and it doesn't operate on tail pages.

This considerably improves unpinning of pages with THP and hugetlbfs:

 - THP

   gup_test -t -m 16384 -r 10 [-L|-a] -S -n 512 -w
   PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~23.2k us

 - 16G with 1G huge page size

   gup_test -f /mnt/huge/file -m 16384 -r 10 [-L|-a] -S -n 512 -w
   PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~27.5k us

Link: https://lkml.kernel.org/r/20210212130843.13865-3-joao.m.martins@oracle.com
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8745d7f6
mm/gup.c
@@ -265,20 +265,15 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty)
 {
 	unsigned long index;
-
-	/*
-	 * TODO: this can be optimized for huge pages: if a series of pages is
-	 * physically contiguous and part of the same compound page, then a
-	 * single operation to the head page should suffice.
-	 */
+	struct page *head;
+	unsigned int ntails;
 
 	if (!make_dirty) {
 		unpin_user_pages(pages, npages);
 		return;
 	}
 
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
+	for_each_compound_head(index, pages, npages, head, ntails) {
 		/*
 		 * Checking PageDirty at this point may race with
 		 * clear_page_dirty_for_io(), but that's OK. Two key
@@ -299,9 +294,9 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		 * written back, so it gets written back again in the
 		 * next writeback cycle. This is harmless.
 		 */
-		if (!PageDirty(page))
-			set_page_dirty_lock(page);
-		unpin_user_page(page);
+		if (!PageDirty(head))
+			set_page_dirty_lock(head);
+		put_compound_head(head, ntails, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
@@ -318,6 +313,8 @@ EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
 void unpin_user_pages(struct page **pages, unsigned long npages)
 {
 	unsigned long index;
+	struct page *head;
+	unsigned int ntails;
 
 	/*
 	 * If this WARN_ON() fires, then the system *might* be leaking pages (by
@@ -326,13 +323,9 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 	 */
 	if (WARN_ON(IS_ERR_VALUE(npages)))
 		return;
 
-	/*
-	 * TODO: this can be optimized for huge pages: if a series of pages is
-	 * physically contiguous and part of the same compound page, then a
-	 * single operation to the head page should suffice.
-	 */
-	for (index = 0; index < npages; index++)
-		unpin_user_page(pages[index]);
+	for_each_compound_head(index, pages, npages, head, ntails)
+		put_compound_head(head, ntails, FOLL_PIN);
 }
 EXPORT_SYMBOL(unpin_user_pages);
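For context, the for_each_compound_head() iterator used above was added
by the parent commit 8745d7f6 ("mm/gup: add compound page list
iterator"). Paraphrased from that commit rather than quoted verbatim, it
looks roughly like this:

	/*
	 * Find the head of the compound page at list[i] and count how many
	 * of the following entries share it, so the caller can act on the
	 * whole run at once (sketch, paraphrased from the parent commit).
	 */
	static inline void compound_next(unsigned long i, unsigned long npages,
					 struct page **list, struct page **head,
					 unsigned int *ntails)
	{
		struct page *page;
		unsigned int nr;

		if (i >= npages)
			return;

		page = compound_head(list[i]);
		for (nr = i + 1; nr < npages; nr++) {
			if (compound_head(list[nr]) != page)
				break;
		}

		*head = page;
		*ntails = nr - i;
	}

	#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
		for (__i = 0, \
		     compound_next(__i, __npages, __list, __head, __ntails); \
		     __i < __npages; __i += __ntails, \
		     compound_next(__i, __npages, __list, __head, __ntails))

Each iteration hands the caller one (head, ntails) pair, which is why
the loop bodies above can dirty the head once and drop ntails references
in a single put_compound_head() call.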