Commit dee2f8aa authored by Vladimir Davydov, committed by Linus Torvalds

slub: fix cpuset check in get_any_partial

If we fail to allocate from the current node's stock, we look for free
objects on other nodes before calling the page allocator (see
get_any_partial).  While checking other nodes we respect cpuset
constraints by calling cpuset_zone_allowed, and we currently enforce the
hardwall check.  As a result, we fall back to the page allocator even
when there are free slab pages cached on other nodes, if those nodes are
not set in the current cpuset.  However, the page allocator uses the
softwall check for kernel allocations, so it may well allocate from one
of those other nodes in this case.

Therefore we should use the softwall cpuset check in get_any_partial to
conform with the cpuset check in the page allocator.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Acked-by: Zefan Li <lizefan@huawei.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 061d7074
@@ -1670,8 +1670,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,

 			n = get_node(s, zone_to_nid(zone));

-			if (n && cpuset_zone_allowed(zone,
-					flags | __GFP_HARDWALL) &&
+			if (n && cpuset_zone_allowed(zone, flags) &&
 					n->nr_partial > s->min_partial) {
 				object = get_partial_node(s, n, c, flags);
 				if (object) {
...
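For illustration, here is a minimal userspace sketch of the hardwall/softwall
distinction the commit message relies on.  The struct, helper, and bitmasks
below are hypothetical simplifications, not the kernel's cpuset code; they
only model the rule "a hardwall check stops at the task's own cpuset, while a
softwall check may also accept nodes of the nearest hardwalled ancestor
cpuset":

/*
 * Simplified model of the two cpuset checks -- an illustration only,
 * not the kernel's actual cpuset_zone_allowed() implementation.
 * Node sets are modeled as bitmasks.
 */
#include <stdbool.h>
#include <stdio.h>

#define GFP_HARDWALL 0x1u	/* stands in for __GFP_HARDWALL */

struct cpuset_model {
	unsigned long mems_allowed;	/* nodes of the task's own cpuset */
	unsigned long ancestor_allowed;	/* nodes of nearest hardwalled ancestor */
};

static bool node_allowed(const struct cpuset_model *cs, int node,
			 unsigned int flags)
{
	if (cs->mems_allowed & (1UL << node))
		return true;	/* node is in our own cpuset: always fine */
	if (flags & GFP_HARDWALL)
		return false;	/* hardwall: do not look past our cpuset */
	/* softwall: kernel allocations may use an enclosing cpuset's nodes */
	return cs->ancestor_allowed & (1UL << node);
}

int main(void)
{
	/* our cpuset has node 0 only; a hardwalled ancestor also has node 1 */
	struct cpuset_model cs = { .mems_allowed = 0x1, .ancestor_allowed = 0x3 };

	printf("hardwall, node 1: %d\n", node_allowed(&cs, 1, GFP_HARDWALL)); /* 0 */
	printf("softwall, node 1: %d\n", node_allowed(&cs, 1, 0));            /* 1 */
	return 0;
}

Under this model, the pre-patch get_any_partial (hardwall) rejects node 1
while the page allocator (softwall) accepts it, which is exactly the
inconsistency the patch removes.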