Commit ce52a18d authored by Waiman Long, committed by Ingo Molnar

locking/lockdep: Add a faster path in __lock_release()

When __lock_release() is called, the most likely unlock scenario is
on the innermost lock in the chain.  In this case, we can skip some of
the checks and provide a faster path to completion.
Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1538511560-10090-4-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 8ee10862
@@ -3626,6 +3626,13 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
 
+	/*
+	 * The most likely case is when the unlock is on the innermost
+	 * lock. In this case, we are done!
+	 */
+	if (i == depth-1)
+		return 1;
+
 	if (reacquire_held_locks(curr, depth, i + 1))
 		return 0;
 
@@ -3633,10 +3640,14 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
 	 * We had N bottles of beer on the wall, we drank one, but now
 	 * there's not N-1 bottles of beer left on the wall...
 	 */
-	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - 1))
-		return 0;
+	DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth-1);
 
-	return 1;
+	/*
+	 * Since reacquire_held_locks() would have called check_chain_key()
+	 * indirectly via __lock_acquire(), we don't need to do it again
+	 * on return.
+	 */
+	return 0;
 }
 
 static int __lock_is_held(const struct lockdep_map *lock, int read)
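For illustration only, here is a minimal user-space sketch of the fast-path idea the commit message describes: when the lock being released is the innermost (most recently acquired) entry on the held-locks stack, nothing above it has to be put back, so the release can finish immediately; only an out-of-order release has to re-record the locks acquired after it. All names below (toy_acquire, toy_release, MAX_HELD, the return convention) are invented for this sketch and are not lockdep APIs.

/*
 * Toy held-locks stack.  NOT kernel code; a sketch of the fast-path
 * concept only.
 */
#include <stdio.h>

#define MAX_HELD 48

struct toy_held_lock { int id; };

static struct toy_held_lock held[MAX_HELD];
static int depth;			/* number of locks currently held */

static void toy_acquire(int id)
{
	held[depth++].id = id;
}

/* Returns 1 when the fast path was taken, 0 otherwise (toy convention). */
static int toy_release(int id)
{
	int old_depth = depth;
	int i, j;

	/* Find the entry being released, scanning from the innermost lock. */
	for (i = old_depth - 1; i >= 0; i--)
		if (held[i].id == id)
			break;
	if (i < 0)
		return 0;		/* lock was not held at all */

	/* Pop the released entry together with everything acquired after it. */
	depth = i;

	/*
	 * Fast path: the released lock was the innermost one, so there is
	 * nothing above it to put back -- we are done.
	 */
	if (i == old_depth - 1)
		return 1;

	/* Slow path: re-record the locks that were acquired after it. */
	for (j = i + 1; j < old_depth; j++)
		toy_acquire(held[j].id);

	return 0;
}

int main(void)
{
	int fast;

	toy_acquire(1);
	toy_acquire(2);
	toy_acquire(3);

	fast = toy_release(3);		/* innermost lock: fast path */
	printf("release 3: fast path = %d, depth = %d\n", fast, depth);

	fast = toy_release(1);		/* inner lock: rebuild what was above it */
	printf("release 1: fast path = %d, depth = %d\n", fast, depth);
	return 0;
}

In the real function, the slow path goes through reacquire_held_locks() and hence __lock_acquire(), which re-validates the chain key as a side effect; that is why the patched __lock_release() can return 0 on that path, as the added comment in the diff explains.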