Commit e45cc288 authored by Maurizio Lombardi, committed by Vlastimil Babka

mm: slub: fix flush_cpu_slab()/__free_slab() invocations in task context.

Commit 5a836bf6 ("mm: slub: move flush_cpu_slab() invocations
__free_slab() invocations out of IRQ context") moved all flush_cpu_slab()
invocations to the global workqueue to avoid a problem related to
deactivate_slab()/__free_slab() being called from an IRQ context on
PREEMPT_RT kernels.

When flush_all_cpus_locked() is called from a task context, the caller
may itself be running on a workqueue that has the WQ_MEM_RECLAIM bit
set. That workqueue then ends up flushing the !WQ_MEM_RECLAIM global
workqueue, which is a forbidden dependency:

 workqueue: WQ_MEM_RECLAIM nvme-delete-wq:nvme_delete_ctrl_work [nvme_core]
   is flushing !WQ_MEM_RECLAIM events:flush_cpu_slab
 WARNING: CPU: 37 PID: 410 at kernel/workqueue.c:2637
   check_flush_dependency+0x10a/0x120
 Workqueue: nvme-delete-wq nvme_delete_ctrl_work [nvme_core]
 RIP: 0010:check_flush_dependency+0x10a/0x120
 Call Trace:
 __flush_work.isra.0+0xbf/0x220
 ? __queue_work+0x1dc/0x420
 flush_all_cpus_locked+0xfb/0x120
 __kmem_cache_shutdown+0x2b/0x320
 kmem_cache_destroy+0x49/0x100
 bioset_exit+0x143/0x190
 blk_release_queue+0xb9/0x100
 kobject_cleanup+0x37/0x130
 nvme_fc_ctrl_free+0xc6/0x150 [nvme_fc]
 nvme_free_ctrl+0x1ac/0x2b0 [nvme_core]
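
Reduced to its essentials, the pattern that check_flush_dependency()
flags can be reproduced by a sketch like the one below. This is a
hypothetical out-of-tree module, not code from this patch; every name
in it is made up. A work item running on a WQ_MEM_RECLAIM workqueue
waits on work that was queued on the !WQ_MEM_RECLAIM system workqueue:

  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *reclaim_wq;  /* has WQ_MEM_RECLAIM */
  static struct work_struct inner_work;        /* queued on system_wq */
  static struct work_struct outer_work;        /* queued on reclaim_wq */

  static void inner_fn(struct work_struct *work)
  {
  }

  static void outer_fn(struct work_struct *work)
  {
          /*
           * outer_fn() runs on a WQ_MEM_RECLAIM workqueue.  Waiting here
           * on inner_work, which sits on the !WQ_MEM_RECLAIM system
           * workqueue, is exactly what check_flush_dependency() warns
           * about.
           */
          schedule_work(&inner_work);
          flush_work(&inner_work);
  }

  static int __init wq_dep_demo_init(void)
  {
          reclaim_wq = alloc_workqueue("wq_dep_demo", WQ_MEM_RECLAIM, 0);
          if (!reclaim_wq)
                  return -ENOMEM;

          INIT_WORK(&inner_work, inner_fn);
          INIT_WORK(&outer_work, outer_fn);
          queue_work(reclaim_wq, &outer_work);
          return 0;
  }

  static void __exit wq_dep_demo_exit(void)
  {
          destroy_workqueue(reclaim_wq);  /* drains outer_work first */
  }

  module_init(wq_dep_demo_init);
  module_exit(wq_dep_demo_exit);
  MODULE_LICENSE("GPL");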

Fix this bug by creating a dedicated workqueue with the WQ_MEM_RECLAIM
bit set and using it for the flush operation.
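
In standalone form, the fix boils down to the pattern sketched below;
the real change is the mm/slub.c diff that follows, and the names used
here are only illustrative: allocate a dedicated WQ_MEM_RECLAIM
workqueue once at init time and queue the flush work there instead of
on the system workqueue, so that a caller which itself runs on a
WQ_MEM_RECLAIM workqueue is allowed to flush it.

  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/workqueue.h>

  /* Dedicated flush queue, created once at init time. */
  static struct workqueue_struct *demo_flushwq;

  static int __init demo_flushwq_init(void)
  {
          demo_flushwq = alloc_workqueue("demo_flushwq", WQ_MEM_RECLAIM, 0);
          return demo_flushwq ? 0 : -ENOMEM;
  }

  /*
   * Queue per-CPU flush work on the dedicated WQ_MEM_RECLAIM queue rather
   * than via schedule_work_on()/system_wq, so flushing it from another
   * WQ_MEM_RECLAIM workqueue no longer trips check_flush_dependency().
   */
  static void demo_queue_flush(int cpu, struct work_struct *w)
  {
          queue_work_on(cpu, demo_flushwq, w);
  }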

Fixes: 5a836bf6 ("mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context")
Cc: <stable@vger.kernel.org>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
parent d71608a8
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -310,6 +310,11 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
  */
 static nodemask_t slab_nodes;
 
+/*
+ * Workqueue used for flush_cpu_slab().
+ */
+static struct workqueue_struct *flushwq;
+
 /********************************************************************
  * Core slab cache functions
  *******************************************************************/
@@ -2730,7 +2735,7 @@ static void flush_all_cpus_locked(struct kmem_cache *s)
                 INIT_WORK(&sfw->work, flush_cpu_slab);
                 sfw->skip = false;
                 sfw->s = s;
-                schedule_work_on(cpu, &sfw->work);
+                queue_work_on(cpu, flushwq, &sfw->work);
         }
 
         for_each_online_cpu(cpu) {
@@ -4858,6 +4863,8 @@ void __init kmem_cache_init(void)
 
 void __init kmem_cache_init_late(void)
 {
+        flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
+        WARN_ON(!flushwq);
 }
 
 struct kmem_cache *