Commit a1d14934 authored by Peter Zijlstra, committed by Ingo Molnar

workqueue/lockdep: 'Fix' flush_work() annotation

The flush_work() annotation as introduced by commit:

  e159489b ("workqueue: relax lockdep annotation on flush_work()")

hits on the lockdep problem with recursive read locks.

The situation as described is:

Work W1:                Work W2:        Task:

ARR(Q)                  ARR(Q)		flush_workqueue(Q)
A(W1)                   A(W2)             A(Q)
  flush_work(W2)			  R(Q)
    A(W2)
    R(W2)
    if (special)
      A(Q)
    else
      ARR(Q)
    R(Q)

where: A - acquire, ARR - acquire-read-recursive, R - release.

Under 'special' conditions we want to trigger a lock recursion
deadlock report, but otherwise allow the flush_work(). The allowing is
done by using recursive read locks (ARR), but lockdep's handling of
recursive read locks is broken.

However, there appears to be no need to acquire the lock if we're not
'special', so if we remove the 'else' clause things become much
simpler and no longer need the recursion thing at all.
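
For illustration only (this snippet is not part of the patch): a minimal
sketch of the 'special' case the stricter annotation is meant to flag,
i.e. a work item on an ordered (max_active == 1) workqueue flushing
another work item queued behind it on the same workqueue. All names
(example_wq, w1, w2, their handlers) are hypothetical.

  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *example_wq;	/* ordered: max_active == 1 */
  static struct work_struct w1, w2;

  static void w2_fn(struct work_struct *work) { }

  static void w1_fn(struct work_struct *work)
  {
  	/*
  	 * w2 is queued behind w1 on the same single-threaded workqueue,
  	 * so it cannot start until w1_fn() returns: this flush_work()
  	 * deadlocks.  With the stricter annotation, the workqueue's
  	 * lockdep_map is acquired again here while the worker running w1
  	 * already holds it, so lockdep reports the recursion even on runs
  	 * where the deadlock does not actually trigger.
  	 */
  	flush_work(&w2);
  }

  static int __init example_init(void)
  {
  	example_wq = alloc_ordered_workqueue("example_wq", 0);
  	if (!example_wq)
  		return -ENOMEM;

  	INIT_WORK(&w1, w1_fn);
  	INIT_WORK(&w2, w2_fn);

  	queue_work(example_wq, &w1);
  	queue_work(example_wq, &w2);

  	return 0;
  }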

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: byungchul.park@lge.com
Cc: david@fromorbit.com
Cc: johannes@sipsolutions.net
Cc: oleg@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent e9149858
kernel/workqueue.c:

@@ -2091,7 +2091,7 @@ __acquires(&pool->lock)
 
 	spin_unlock_irq(&pool->lock);
 
-	lock_map_acquire_read(&pwq->wq->lockdep_map);
+	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
 	crossrelease_hist_start(XHLOCK_PROC);
 	trace_workqueue_execute_start(work);

@@ -2826,16 +2826,18 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
 	spin_unlock_irq(&pool->lock);
 
 	/*
-	 * If @max_active is 1 or rescuer is in use, flushing another work
-	 * item on the same workqueue may lead to deadlock.  Make sure the
-	 * flusher is not running on the same workqueue by verifying write
-	 * access.
+	 * Force a lock recursion deadlock when using flush_work() inside a
+	 * single-threaded or rescuer equipped workqueue.
+	 *
+	 * For single threaded workqueues the deadlock happens when the work
+	 * is after the work issuing the flush_work(). For rescuer equipped
+	 * workqueues the deadlock happens when the rescuer stalls, blocking
+	 * forward progress.
 	 */
-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)
+	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
 		lock_map_acquire(&pwq->wq->lockdep_map);
-	else
-		lock_map_acquire_read(&pwq->wq->lockdep_map);
-	lock_map_release(&pwq->wq->lockdep_map);
+		lock_map_release(&pwq->wq->lockdep_map);
+	}
 
 	return true;
 already_gone: