Commit fa2563e4 authored by Thomas Tuttle, committed by Linus Torvalds

workqueue: lock cwq access in drain_workqueue

Take cwq->gcwq->lock to avoid a race between drain_workqueue checking
whether the workqueue is empty and cwq_dec_nr_in_flight decrementing and
then re-incrementing nr_active when it activates a delayed work.

We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue in which the remaining work would
always requeue itself on the same workqueue.  We would hit this race
condition and trip the BUG_ON at workqueue.c:3080.
Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent df4e33ad
@@ -2412,8 +2412,13 @@ void drain_workqueue(struct workqueue_struct *wq)
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+		bool drained;
 
-		if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+		spin_lock_irq(&cwq->gcwq->lock);
+		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
+		spin_unlock_irq(&cwq->gcwq->lock);
+
+		if (drained)
 			continue;
 
 		if (++flush_cnt == 10 ||