Commit 59bfbcf0 authored by Tejun Heo, committed by Linus Torvalds

idr: idr_alloc() shouldn't trigger lowmem warning when preloaded

GFP_NOIO is often used for idr_alloc() inside a preloaded section, as the
allocation mask doesn't really matter there.  If the idr tree needs to be
expanded, idr_alloc() first tries to allocate using the specified
allocation mask and, if that fails, falls back to the preloaded buffer.  This
order prevents non-preloading idr_alloc() users from taking advantage of
preloading ones by consuming the preload buffer without refilling it, which
would shift the burden of allocation onto the preloading users.
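
For context, a minimal sketch of the usage pattern described above (the
identifiers my_idr, my_lock and ptr are hypothetical, not part of this
commit): the caller preloads with a permissive mask outside the lock, then
calls idr_alloc() under the lock with a restrictive mask such as GFP_NOIO,
relying on the preloaded buffer when the direct allocation fails.

	idr_preload(GFP_KERNEL);	/* fill the per-cpu preload buffer */
	spin_lock(&my_lock);		/* hypothetical lock protecting my_idr */

	/* mask barely matters here; the preload buffer backs the allocation */
	id = idr_alloc(&my_idr, ptr, 1, 0, GFP_NOIO);

	spin_unlock(&my_lock);
	idr_preload_end();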

Unfortunately, this allowed/expected-to-fail kmem_cache allocation ends up
generating a spurious slab lowmem warning before the request is then
satisfied from the preload buffer.

This patch makes idr_layer_alloc() add __GFP_NOWARN to the first
kmem_cache attempt, and try kmem_cache again w/o __GFP_NOWARN after
allocation from the preload buffer fails, so that the lowmem warning is
still generated unless suppressed by the original @gfp_mask.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: David Teigland <teigland@redhat.com>
Tested-by: David Teigland <teigland@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 415586c9
@@ -106,8 +106,14 @@ static struct idr_layer *idr_layer_alloc(gfp_t gfp_mask, struct idr *layer_idr)
 	if (layer_idr)
 		return get_from_free_list(layer_idr);
 
-	/* try to allocate directly from kmem_cache */
-	new = kmem_cache_zalloc(idr_layer_cache, gfp_mask);
+	/*
+	 * Try to allocate directly from kmem_cache.  We want to try this
+	 * before preload buffer; otherwise, non-preloading idr_alloc()
+	 * users will end up taking advantage of preloading ones.  As the
+	 * following is allowed to fail for preloaded cases, suppress
+	 * warning this time.
+	 */
+	new = kmem_cache_zalloc(idr_layer_cache, gfp_mask | __GFP_NOWARN);
 	if (new)
 		return new;
 
@@ -115,18 +121,24 @@ static struct idr_layer *idr_layer_alloc(gfp_t gfp_mask, struct idr *layer_idr)
 	 * Try to fetch one from the per-cpu preload buffer if in process
 	 * context.  See idr_preload() for details.
	 */
-	if (in_interrupt())
-		return NULL;
-
-	preempt_disable();
-	new = __this_cpu_read(idr_preload_head);
-	if (new) {
-		__this_cpu_write(idr_preload_head, new->ary[0]);
-		__this_cpu_dec(idr_preload_cnt);
-		new->ary[0] = NULL;
+	if (!in_interrupt()) {
+		preempt_disable();
+		new = __this_cpu_read(idr_preload_head);
+		if (new) {
+			__this_cpu_write(idr_preload_head, new->ary[0]);
+			__this_cpu_dec(idr_preload_cnt);
+			new->ary[0] = NULL;
+		}
+		preempt_enable();
+		if (new)
+			return new;
 	}
-	preempt_enable();
-	return new;
+
+	/*
+	 * Both failed.  Try kmem_cache again w/o adding __GFP_NOWARN so
+	 * that memory allocation failure warning is printed as intended.
+	 */
+	return kmem_cache_zalloc(idr_layer_cache, gfp_mask);
 }
 
 static void idr_layer_rcu_free(struct rcu_head *head)