Commit 86a294a8 authored by Michal Hocko, committed by Linus Torvalds

mm, oom, compaction: prevent from should_compact_retry looping for ever for costly orders

"mm: consider compaction feedback also for costly allocation" has
removed the upper bound for the reclaim/compaction retries based on the
number of reclaimed pages for costly orders.  While this is desirable
the patch did miss a mis interaction between reclaim, compaction and the
retry logic.  The direct reclaim tries to get zones over min watermark
while compaction backs off and returns COMPACT_SKIPPED when all zones
are below low watermark + 1<<order gap.  If we are getting really close
to OOM then __compaction_suitable can keep returning COMPACT_SKIPPED a
high order request (e.g.  hugetlb order-9) while the reclaim is not able
to release enough pages to get us over low watermark.  The reclaim is
still able to make some progress (usually trashing over few remaining
pages) so we are not able to break out from the loop.

I have seen this happening with the same test described in "mm: consider
compaction feedback also for costly allocation" on a swapless system.
The original problem got resolved by "vmscan: consider classzone_idx in
compaction_ready" but it shows how things might go wrong when we
approach the OOM event horizon.

The reason why compaction requires being over the low rather than the
min watermark is not clear to me.  This check has been there
essentially since 56de7263 ("mm: compaction: direct compact when a
high-order allocation fails").  It is clearly an implementation detail,
though, and we shouldn't pull it into the generic retry logic; rather,
we should be able to cope with such an eventuality.  The only place in
should_compact_retry where we retry without any upper bound is the
compaction_withdrawn() case.

Introduce a compaction_zonelist_suitable() function which checks the
given zonelist and returns true only if there is at least one zone
which would unblock __compaction_suitable if more memory got reclaimed.
In this implementation it checks __compaction_suitable with
NR_FREE_PAGES plus a part of the reclaimable memory as the target for
the watermark check.  The reclaimable memory is scaled down by the
allocation order.  The idea is that we do not want to reclaim all the
remaining memory for a single allocation request just to unblock
__compaction_suitable, which doesn't guarantee we will make further
progress.

The new helper is then used when compaction_withdrawn() feedback was
provided, so we do not retry if there is no prospect of further
progress.  !costly requests shouldn't be affected much - e.g. an
order-2 request would require at least 64kB on the reclaimable LRUs,
while order-9 would need at least 32M, which should be enough not to
lock up.
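
The 64kB and 32M figures follow from combining the scaled-down
reclaimable target with the compaction watermark gap.  Below is a
minimal userspace sketch of that arithmetic (not part of the patch; it
assumes 4kB pages and a zone with no spare free pages above the low
watermark):

#include <stdio.h>

int main(void)
{
	const unsigned long page_kb = 4;	/* assuming 4kB pages */
	const int orders[] = { 2, 9 };
	int i;

	for (i = 0; i < 2; i++) {
		int order = orders[i];
		/*
		 * __compaction_suitable wants roughly (2 << order) pages
		 * above the low watermark, and only reclaimable/order
		 * counts towards that target here, so we need about
		 * order * (2 << order) pages on the reclaimable LRUs.
		 */
		unsigned long pages = (unsigned long)order * (2UL << order);

		printf("order-%d: >= %lu pages (%lu kB reclaimable)\n",
		       order, pages, pages * page_kb);
	}
	return 0;
}

With those assumptions order-2 comes out at 64kB and order-9 at roughly
36MB, i.e. above the 32M quoted above as a lower bound.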

[vbabka@suse.cz: fix classzone_idx vs. high_zoneidx usage in compaction_zonelist_suitable]
[akpm@linux-foundation.org: fix it for Mel's mm-page_alloc-remove-field-from-alloc_context.patch]
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 7854ea6c
@@ -142,6 +142,10 @@ static inline bool compaction_withdrawn(enum compact_result result)
 	return false;
 }
 
+bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
+					int alloc_flags);
+
 extern int kcompactd_run(int nid);
 extern void kcompactd_stop(int nid);
 extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx);
...
@@ -739,6 +739,9 @@ static inline bool is_dev_zone(const struct zone *zone)
 extern struct mutex zonelists_mutex;
 void build_all_zonelists(pg_data_t *pgdat, struct zone *zone);
 void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx);
+bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
+		int classzone_idx, unsigned int alloc_flags,
+		long free_pages);
 bool zone_watermark_ok(struct zone *z, unsigned int order,
 		unsigned long mark, int classzone_idx,
 		unsigned int alloc_flags);
...
@@ -1318,7 +1318,8 @@ static enum compact_result compact_finished(struct zone *zone,
  */
 static enum compact_result __compaction_suitable(struct zone *zone, int order,
 					unsigned int alloc_flags,
-					int classzone_idx)
+					int classzone_idx,
+					unsigned long wmark_target)
 {
 	int fragindex;
 	unsigned long watermark;
@@ -1341,7 +1342,8 @@ static enum compact_result __compaction_suitable(struct zone *zone, int order,
 	 * allocated and for a short time, the footprint is higher
 	 */
 	watermark += (2UL << order);
-	if (!zone_watermark_ok(zone, 0, watermark, classzone_idx, alloc_flags))
+	if (!__zone_watermark_ok(zone, 0, watermark, classzone_idx,
+				 alloc_flags, wmark_target))
 		return COMPACT_SKIPPED;
 
 	/*
@@ -1368,7 +1370,8 @@ enum compact_result compaction_suitable(struct zone *zone, int order,
 {
 	enum compact_result ret;
 
-	ret = __compaction_suitable(zone, order, alloc_flags, classzone_idx);
+	ret = __compaction_suitable(zone, order, alloc_flags, classzone_idx,
+				    zone_page_state(zone, NR_FREE_PAGES));
 	trace_mm_compaction_suitable(zone, order, ret);
 	if (ret == COMPACT_NOT_SUITABLE_ZONE)
 		ret = COMPACT_SKIPPED;
@@ -1376,6 +1379,39 @@ enum compact_result compaction_suitable(struct zone *zone, int order,
 	return ret;
 }
 
+bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
+		int alloc_flags)
+{
+	struct zone *zone;
+	struct zoneref *z;
+
+	/*
+	 * Make sure at least one zone would pass __compaction_suitable if we continue
+	 * retrying the reclaim.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
+					ac->nodemask) {
+		unsigned long available;
+		enum compact_result compact_result;
+
+		/*
+		 * Do not consider all the reclaimable memory because we do not
+		 * want to trash just for a single high order allocation which
+		 * is even not guaranteed to appear even if __compaction_suitable
+		 * is happy about the watermark check.
+		 */
+		available = zone_reclaimable_pages(zone) / order;
+		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
+		compact_result = __compaction_suitable(zone, order, alloc_flags,
+				ac_classzone_idx(ac), available);
+		if (compact_result != COMPACT_SKIPPED &&
+				compact_result != COMPACT_NOT_SUITABLE_ZONE)
+			return true;
+	}
+
+	return false;
+}
+
 static enum compact_result compact_zone(struct zone *zone, struct compact_control *cc)
 {
 	enum compact_result ret;
...
@@ -2750,9 +2750,8 @@ static inline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
  * one free page of a suitable size. Checking now avoids taking the zone lock
  * to check in the allocation paths if no pages are free.
  */
-static bool __zone_watermark_ok(struct zone *z, unsigned int order,
-			unsigned long mark, int classzone_idx,
-			unsigned int alloc_flags,
+bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
+			 int classzone_idx, unsigned int alloc_flags,
 			long free_pages)
 {
 	long min = mark;
@@ -3256,8 +3255,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 }
 
 static inline bool
-should_compact_retry(unsigned int order, enum compact_result compact_result,
-		     enum migrate_mode *migrate_mode,
+should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
+		     enum compact_result compact_result, enum migrate_mode *migrate_mode,
 		     int compaction_retries)
 {
 	int max_retries = MAX_COMPACT_RETRIES;
@@ -3281,9 +3280,11 @@ should_compact_retry(unsigned int order, enum compact_result compact_result,
 	/*
 	 * make sure the compaction wasn't deferred or didn't bail out early
 	 * due to locks contention before we declare that we should give up.
+	 * But do not retry if the given zonelist is not suitable for
+	 * compaction.
 	 */
 	if (compaction_withdrawn(compact_result))
-		return true;
+		return compaction_zonelist_suitable(ac, order, alloc_flags);
 
 	/*
 	 * !costly requests are much more important than __GFP_REPEAT
@@ -3311,7 +3312,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 }
 
 static inline bool
-should_compact_retry(unsigned int order, enum compact_result compact_result,
+should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
+		     enum compact_result compact_result,
 		     enum migrate_mode *migrate_mode,
 		     int compaction_retries)
 {
@@ -3706,8 +3708,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * of free memory (see __compaction_suitable)
 	 */
 	if (did_some_progress > 0 &&
-			should_compact_retry(order, compact_result,
-				&migration_mode, compaction_retries))
+			should_compact_retry(ac, order, alloc_flags,
+				compact_result, &migration_mode,
+				compaction_retries))
 		goto retry;
 
 	/* Reclaim has failed us, start killing things */
...