Commit 242489cf authored by Davidlohr Bueso, committed by Ingo Molnar

locking/mutexes: Standardize arguments in lock/unlock slowpaths

Just as the locking end does, when unlocking, obtain the proper data
structure (struct mutex) immediately after the preceding (asm-end) call
exits and there are (probably) pending waiters, rather than deferring
the container_of() conversion to the common slowpath. This simplifies
the layering a bit.
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: jason.low2@hp.com
Cc: aswin@hp.com
Cc: mingo@kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1406752916-3341-1-git-send-email-davidlohr@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 2e39465a
@@ -679,9 +679,8 @@ EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
  * Release the lock, slowpath:
  */
 static inline void
-__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
+__mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 {
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
 	unsigned long flags;
 
 	/*
@@ -716,7 +715,9 @@ __mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
 __visible void
 __mutex_unlock_slowpath(atomic_t *lock_count)
 {
-	__mutex_unlock_common_slowpath(lock_count, 1);
+	struct mutex *lock = container_of(lock_count, struct mutex, count);
+
+	__mutex_unlock_common_slowpath(lock, 1);
 }
 
 #ifndef CONFIG_DEBUG_LOCK_ALLOC
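The idiom the patch standardizes on is the container_of() handoff: the asm
fastpath only has a pointer to the embedded count word, and the outer slowpath
recovers the full mutex from it right away so the common slowpath can take a
struct mutex * directly. Below is a minimal, self-contained userspace sketch of
that pattern; the toy_mutex layout, the simplified container_of macro, and the
function names are illustrative stand-ins, not the kernel's actual definitions.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct toy_atomic { int counter; };

struct toy_mutex {
	struct toy_atomic count;	/* word the fastpath hands to the slowpath */
	int		  owner;	/* stand-in for the rest of the mutex state */
};

/* Outer slowpath: receives only the counter pointer, like the asm fastpath. */
static void toy_unlock_slowpath(struct toy_atomic *lock_count)
{
	/* Recover the containing mutex immediately, as the patch does... */
	struct toy_mutex *lock = container_of(lock_count, struct toy_mutex, count);

	/* ...so the common slowpath logic can work on struct toy_mutex * directly. */
	printf("mutex at %p, owner %d\n", (void *)lock, lock->owner);
}

int main(void)
{
	struct toy_mutex m = { .count = { 1 }, .owner = 42 };

	toy_unlock_slowpath(&m.count);
	return 0;
}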