Commit 3c8f9da4 authored by Sebastian Andrzej Siewior, committed by Jens Axboe

blk-mq: Don't disable preemption around __blk_mq_run_hw_queue().

__blk_mq_delay_run_hw_queue() disables preemption to get a stable
current CPU number and then invokes __blk_mq_run_hw_queue() if the CPU
number is part of the mask.

__blk_mq_run_hw_queue() acquires a spin_lock_t which is a sleeping lock
on PREEMPT_RT and can't be acquired with disabled preemption.
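To make the conflict concrete, here is a minimal sketch of the pre-patch pattern
(hypothetical example_ctx type and names, not the actual blk-mq code): a cpumask
check done under get_cpu()/put_cpu() with a spinlock_t acquired inside the
preemption-disabled region. On PREEMPT_RT, spinlock_t is backed by an rtmutex and
may sleep, so acquiring it there triggers a "sleeping function called from invalid
context" splat.

#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>

struct example_ctx {
	spinlock_t lock;		/* spinlock_t: a sleeping lock on PREEMPT_RT */
	struct cpumask *cpumask;	/* CPUs this context is mapped to */
};

static void example_run(struct example_ctx *ctx)
{
	int cpu = get_cpu();			/* disables preemption */

	if (cpumask_test_cpu(cpu, ctx->cpumask)) {
		spin_lock(&ctx->lock);		/* invalid with preemption disabled on RT */
		/* ... dispatch work ... */
		spin_unlock(&ctx->lock);
	}
	put_cpu();				/* re-enables preemption */
}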

It is not required for correctness to invoke __blk_mq_run_hw_queue() on
a CPU matching hctx->cpumask. Both async and direct requests can run
on a CPU not matching hctx->cpumask.

Check the CPU mask without disabling preemption and invoke
__blk_mq_run_hw_queue().
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/YrLSEiNvagKJaDs5@linutronix.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 1d87be82
@@ -2085,14 +2085,10 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 		return;
 
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
-		int cpu = get_cpu();
-		if (cpumask_test_cpu(cpu, hctx->cpumask)) {
+		if (cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
 			__blk_mq_run_hw_queue(hctx);
-			put_cpu();
 			return;
 		}
-		put_cpu();
 	}
 
 	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
...
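For context, a sketch of the resulting pattern (same hypothetical example_ctx as
above): raw_smp_processor_id() may be called with preemption enabled and does not
trigger the CONFIG_DEBUG_PREEMPT check that smp_processor_id() would. The task can
migrate right after the check, but since running on a CPU from the mask is only a
fast-path optimization, a stale CPU number at worst means the work is deferred to
the workqueue path instead of being run inline.

static void example_run_fixed(struct example_ctx *ctx)
{
	/* No get_cpu(): the CPU number only gates a fast-path optimization. */
	if (cpumask_test_cpu(raw_smp_processor_id(), ctx->cpumask)) {
		spin_lock(&ctx->lock);		/* now taken in preemptible context */
		/* ... dispatch work ... */
		spin_unlock(&ctx->lock);
		return;
	}
	/* Otherwise defer to a workqueue bound to a CPU in ctx->cpumask. */
}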