Commit e88a72e8 authored by Michal Koutný, committed by Greg Kroah-Hartman

mm/page_counter.c: fix protection usage propagation

commit a6f23d14 upstream.

When a workload runs in cgroups that aren't directly below the root
cgroup and their parent specifies reclaim protection, that protection may
end up ineffective.

The reason is that propagate_protected_usage() is not called all the way
up the hierarchy.  All the protected usage is incorrectly accumulated in
the workload's parent.  This means that siblings_low_usage is
overestimated and the effective protection underestimated.  Even though
this is a transitional phenomenon (the uncharge path does the correct
propagation and fixes the wrong children_low_usage), it can undermine the
intended protection unexpectedly.
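For context, the pre-patch charging loop in mm/page_counter.c has the
following shape (lightly abridged here): the walk visits every ancestor
c, but propagate_protected_usage() is always handed the original leaf
counter:

	void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
	{
		struct page_counter *c;

		for (c = counter; c; c = c->parent) {
			long new;

			new = atomic_long_add_return(nr_pages, &c->usage);
			/* BUG: propagates the leaf's protection into the
			 * leaf's parent on every iteration, so ancestors
			 * above that never see an updated
			 * children_low_usage on charge */
			propagate_protected_usage(counter, new);
			/*
			 * This is indeed racy, but we can live with some
			 * inaccuracy in the watermark.
			 */
			if (new > c->watermark)
				c->watermark = new;
		}
	}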

We noticed this problem when we saw swap-out in a descendant of a
protected memcg (an intermediate node) while the parent was comfortably
under its protection limit and the memory pressure was external to that
hierarchy.  Michal pinpointed this to the wrong siblings_low_usage, which
led to the unwanted reclaim.

The fix is simply to update children_low_usage in the respective
ancestors also on the charging path.
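
To make the effect visible outside the kernel, here is a minimal
self-contained userspace model of the two variants (illustrative only:
the field names mirror struct page_counter, but the arithmetic is
simplified and non-atomic):

	/* model.c: toy model of page_counter protection propagation */
	#include <stdio.h>

	struct page_counter {
		long usage;
		long low;			/* configured protection */
		long low_usage;			/* last propagated min(usage, low) */
		long children_low_usage;	/* sum of children's low_usage */
		struct page_counter *parent;
	};

	/* Simplified stand-in for propagate_protected_usage(): push the
	 * change of this level's protected usage into the parent's
	 * children_low_usage. */
	static void propagate_protected_usage(struct page_counter *c, long usage)
	{
		long protected, delta;

		if (!c->parent)
			return;

		protected = usage < c->low ? usage : c->low;
		delta = protected - c->low_usage;
		c->low_usage = protected;
		c->parent->children_low_usage += delta;
	}

	/* The charging walk; @buggy passes the leaf counter (pre-fix
	 * behaviour) instead of the current level c. */
	static void charge(struct page_counter *counter, long nr_pages, int buggy)
	{
		struct page_counter *c;

		for (c = counter; c; c = c->parent) {
			long new = c->usage += nr_pages;

			propagate_protected_usage(buggy ? counter : c, new);
		}
	}

	int main(void)
	{
		int buggy;

		for (buggy = 1; buggy >= 0; buggy--) {
			/* hierarchy: root <- parent (low=50) <- child (low=50) */
			struct page_counter root   = { 0 };
			struct page_counter parent = { .low = 50, .parent = &root };
			struct page_counter child  = { .low = 50, .parent = &parent };

			charge(&child, 30, buggy);
			printf("%s: parent.children_low_usage=%ld, root.children_low_usage=%ld\n",
			       buggy ? "buggy" : "fixed",
			       parent.children_low_usage, root.children_low_usage);
		}
		return 0;
	}

The buggy walk reports root.children_low_usage=0 while the fixed one
reports 30: with the bug, the protected usage piles up only in the
workload's parent, and the levels above work with stale numbers until the
uncharge path repairs them.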

Fixes: 23067153 ("mm: memory.low hierarchical behavior")
Signed-off-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: <stable@vger.kernel.org>	[4.18+]
Link: http://lkml.kernel.org/r/20200803153231.15477-1-mhocko@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -77,7 +77,7 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
 		long new;
 
 		new = atomic_long_add_return(nr_pages, &c->usage);
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * This is indeed racy, but we can live with some
 		 * inaccuracy in the watermark.
@@ -121,7 +121,7 @@ bool page_counter_try_charge(struct page_counter *counter,
 		new = atomic_long_add_return(nr_pages, &c->usage);
 		if (new > c->max) {
 			atomic_long_sub(nr_pages, &c->usage);
-			propagate_protected_usage(counter, new);
+			propagate_protected_usage(c, new);
 			/*
 			 * This is racy, but we can live with some
 			 * inaccuracy in the failcnt.
@@ -130,7 +130,7 @@ bool page_counter_try_charge(struct page_counter *counter,
 			*fail = c;
 			goto failed;
 		}
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * Just like with failcnt, we can live with some
 		 * inaccuracy in the watermark.