Commit aa712399 authored by Pingfan Liu, committed by Linus Torvalds

mm/gup: speed up check_and_migrate_cma_pages() on huge page

All subpages of a hugetlb or THP huge page lie on pageblocks of the same
migration type, since the huge page is allocated from a free_list[].  Based
on this fact, it is enough to check a single subpage to decide the
migration type of the whole huge page.  This saves (2M/4K - 1) loop
iterations for a pmd_huge page on x86, with similar savings on other archs.
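
For concreteness, here is a minimal user-space sketch of the step
arithmetic (not kernel code; step_from() is a hypothetical stand-in that
mirrors the patch's `step = (1 << compound_order(head)) - (pages[i] - head)`,
where "order" is the compound page order and "tail_idx" is how far into the
compound page the current subpage sits):

	#include <stdio.h>

	/* From any subpage of a compound page, how many entries can
	 * the scan skip in one step? */
	static unsigned long step_from(unsigned int order, unsigned long tail_idx)
	{
		return (1UL << order) - tail_idx;
	}

	int main(void)
	{
		/* 2M huge page with 4K base pages: order 9, 512 subpages. */
		printf("from the head page: step = %lu\n", step_from(9, 0));   /* 512 */
		printf("from subpage 100:   step = %lu\n", step_from(9, 100)); /* 412 */
		/* Old loop: 512 iterations per 2M page; new loop: 1. */
		return 0;
	}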

Furthermore, when executing isolate_huge_page(), this avoids taking the
global hugetlb_lock repeatedly, and avoids pointless removals from and
re-additions to the local list cma_page_list.
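
As a rough model of that saving (plain counting in user space, assuming one
pinned 2M huge page and the pre-patch loop visiting every subpage):

	#include <stdio.h>

	int main(void)
	{
		unsigned long subpages = 1UL << 9; /* one 2M huge page, 4K base pages */

		/* Before the patch, compound_head() mapped every subpage to the
		 * same head, so isolate_huge_page(head, ...) ran once per subpage,
		 * each call taking hugetlb_lock and pointlessly re-linking head
		 * on cma_page_list. */
		printf("pre-patch:  %lu isolate_huge_page() calls\n", subpages);

		/* After the patch, i += step skips the whole compound page. */
		printf("post-patch: 1 isolate_huge_page() call\n");
		return 0;
	}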

[akpm@linux-foundation.org: make `i' and `step' unsigned]
Link: http://lkml.kernel.org/r/1561612545-28997-1-git-send-email-kernelfans@gmail.com
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 520b4a44
@@ -1449,25 +1449,31 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 					  struct vm_area_struct **vmas,
 					  unsigned int gup_flags)
 {
-	long i;
+	unsigned long i;
+	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 
 check_again:
-	for (i = 0; i < nr_pages; i++) {
+	for (i = 0; i < nr_pages;) {
+
+		struct page *head = compound_head(pages[i]);
+
+		/*
+		 * gup may start from a tail page. Advance step by the left
+		 * part.
+		 */
+		step = (1 << compound_order(head)) - (pages[i] - head);
 		/*
 		 * If we get a page from the CMA zone, since we are going to
 		 * be pinning these entries, we might as well move them out
 		 * of the CMA zone if possible.
 		 */
-		if (is_migrate_cma_page(pages[i])) {
-
-			struct page *head = compound_head(pages[i]);
-
-			if (PageHuge(head)) {
+		if (is_migrate_cma_page(head)) {
+			if (PageHuge(head))
 				isolate_huge_page(head, &cma_page_list);
-			} else {
+			else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
@@ -1482,6 +1488,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 				}
 			}
 		}
+
+		i += step;
 	}
 
 	if (!list_empty(&cma_page_list)) {
...