Commit cf66f070 authored by Mel Gorman, committed by Linus Torvalds

mm, compaction: do not consider a need to reschedule as contention

Scanning on large machines can take a considerable length of time, and
the scanning task eventually needs to reschedule.  This is currently
treated as an abort event, but that is not appropriate: the attempt is
likely to be retried anyway after making numerous checks and taking
another cycle through the page allocator.  This patch checks whether a
reschedule is needed and yields the CPU if so, but continues scanning
rather than aborting.

The main benefit is reduced scanning when compaction is taking a long
time or the machine is over-saturated.  It also avoids an unnecessary
exit of compaction that ends up being retried by the page allocator in
the outer loop.

                                     5.0.0-rc1              5.0.0-rc1
                              synccached-v3r16        noresched-v3r17
Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
Amean     fault-both-3      2958.27 (   0.00%)     2965.68 (  -0.25%)
Amean     fault-both-5      4091.90 (   0.00%)     3995.90 (   2.35%)
Amean     fault-both-7      5803.05 (   0.00%)     5842.12 (  -0.67%)
Amean     fault-both-12     9481.06 (   0.00%)     9550.87 (  -0.74%)
Amean     fault-both-18    14141.51 (   0.00%)    13304.72 (   5.92%)
Amean     fault-both-24    16438.00 (   0.00%)    14618.59 (  11.07%)
Amean     fault-both-30    17531.72 (   0.00%)    16650.96 (   5.02%)
Amean     fault-both-32    17101.96 (   0.00%)    17145.15 (  -0.25%)
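The control-flow change is easiest to see in isolation.  The following
is a minimal userspace C sketch, not kernel code: need_resched() and
cond_resched() are stubbed (the real ones are scheduler hooks), struct
compact_control is pared down to the two fields involved, the stride
argument stands in for SWAP_CLUSTER_MAX * pageblock_nr_pages, and the
removed helper is renamed compact_check_resched_old here purely for
contrast with the new plain cond_resched() call.

/* Userspace sketch of the control-flow change; not kernel code. */
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

static bool need_resched(void) { return false; }  /* stub for the scheduler hook */
static void cond_resched(void) { sched_yield(); } /* stub: just yield the CPU */

enum migrate_mode { MIGRATE_ASYNC, MIGRATE_SYNC };

struct compact_control {
        enum migrate_mode mode;
        bool contended;
};

/*
 * Old behaviour (removed by this patch): a pending reschedule during
 * async compaction set cc->contended, which callers treated as an
 * abort event even though the attempt would likely be retried.
 */
static void compact_check_resched_old(struct compact_control *cc)
{
        if (need_resched()) {
                if (cc->mode == MIGRATE_ASYNC)
                        cc->contended = true;
                cond_resched();
        }
}

/* New behaviour: periodically yield the CPU but keep scanning. */
static void scan_blocks(struct compact_control *cc,
                        unsigned long start_pfn, unsigned long end_pfn,
                        unsigned long stride)
{
        for (unsigned long pfn = start_pfn; pfn < end_pfn; pfn++) {
                if (!(pfn % stride))
                        cond_resched(); /* was compact_check_resched(cc) */
                /* ... examine the page at pfn ... */
        }
}

int main(void)
{
        struct compact_control cc = { .mode = MIGRATE_ASYNC };

        compact_check_resched_old(&cc); /* old path, shown for contrast */
        cc.contended = false;

        scan_blocks(&cc, 0, 1UL << 20, 4096);
        printf("contended after scan: %d\n", cc.contended); /* stays 0 now */
        return 0;
}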

Link: http://lkml.kernel.org/r/20190118175136.31341-18-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent cb810ad2
mm/compaction.c
@@ -404,21 +404,6 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and records async compaction as contended if necessary.
- */
-static inline void compact_check_resched(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
-}
-
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -447,7 +432,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	return false;
 }
@@ -736,7 +721,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1370,7 +1355,7 @@ static void isolate_freepages(struct compact_control *cc)
 		 * suitable migration targets, so periodically check resched.
 		 */
 		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1666,7 +1651,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		 * need to schedule.
 		 */
 		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);