Commit 55f036ca authored by Peter Zijlstra, committed by Thomas Hellstrom

locking: WW mutex cleanup

Make the WW mutex code more readable by adding comments, splitting up
functions and pointing out that we're actually using the Wait-Die
algorithm.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-doc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Co-authored-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
parent eab97669
@@ -32,10 +32,10 @@ the oldest task) wins, and the one with the higher reservation id (i.e. the
 younger task) unlocks all of the buffers that it has already locked, and then
 tries again.
 
-In the RDBMS literature this deadlock handling approach is called wait/wound:
+In the RDBMS literature this deadlock handling approach is called wait/die:
 The older tasks waits until it can acquire the contended lock. The younger tasks
 needs to back off and drop all the locks it is currently holding, i.e. the
-younger task is wounded.
+younger task dies.
 
 Concepts
 --------
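The Wait-Die rule the text now names comes down to a single comparison of transaction stamps. As a minimal illustrative model (standalone C, not the kernel's implementation; the only assumption is that a lower stamp means an older transaction, i.e. an earlier reservation id):

    /* Toy model of the Wait-Die decision for a task blocked on a lock. */
    #include <stdbool.h>

    static bool wait_die_must_back_off(unsigned long waiter_stamp,
                                       unsigned long holder_stamp)
    {
            /* The older waiter (lower stamp) simply sleeps on the lock. */
            if (waiter_stamp < holder_stamp)
                    return false;           /* wait */

            /*
             * The younger waiter dies: it drops every lock it holds and
             * restarts its transaction, which the ww_mutex API reports
             * to callers as -EDEADLK.
             */
            return true;                    /* die */
    }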
@@ -56,9 +56,9 @@ Furthermore there are three different class of w/w lock acquire functions:
 
 * Normal lock acquisition with a context, using ww_mutex_lock.
 
-* Slowpath lock acquisition on the contending lock, used by the wounded task
-  after having dropped all already acquired locks. These functions have the
-  _slow postfix.
+* Slowpath lock acquisition on the contending lock, used by the task that just
+  killed its transaction after having dropped all already acquired locks.
+  These functions have the _slow postfix.
 
 From a simple semantics point-of-view the _slow functions are not strictly
 required, since simply calling the normal ww_mutex_lock functions on the
@@ -220,7 +220,7 @@ mutexes are a natural fit for such a case for two reasons:
 Note that this approach differs in two important ways from the above methods:
 
 - Since the list of objects is dynamically constructed (and might very well be
-  different when retrying due to hitting the -EDEADLK wound condition) there's
+  different when retrying due to hitting the -EDEADLK die condition) there's
   no need to keep any object on a persistent list when it's not locked. We can
   therefore move the list_head into the object itself.
 - On the other hand the dynamic object list construction also means that the -EALREADY return
...
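Before moving on to the header changes, here is how the documented pieces fit together end-to-end: a condensed sketch of the list-locking use case the documentation describes. It uses the real ww_mutex API, but struct obj, struct obj_entry, ww_class and lock_objs() are hypothetical names:

    #include <linux/list.h>
    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(ww_class);

    struct obj {
            struct ww_mutex lock;
    };

    struct obj_entry {
            struct list_head head;
            struct obj *obj;
    };

    static int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
    {
            struct obj_entry *entry, *contended = NULL;
            struct obj *res_obj = NULL;
            int ret;

            ww_acquire_init(ctx, &ww_class);
    retry:
            list_for_each_entry(entry, list, head) {
                    if (entry->obj == res_obj) {
                            /* Already taken via the slowpath below. */
                            res_obj = NULL;
                            continue;
                    }
                    ret = ww_mutex_lock(&entry->obj->lock, ctx);
                    if (ret < 0) {          /* -EDEADLK or -EALREADY */
                            contended = entry;
                            goto err;
                    }
            }
            ww_acquire_done(ctx);
            return 0;

    err:
            /* Back off: drop everything acquired so far... */
            list_for_each_entry_continue_reverse(entry, list, head)
                    ww_mutex_unlock(&entry->obj->lock);
            if (res_obj)
                    ww_mutex_unlock(&res_obj->lock);
            if (ret == -EDEADLK) {
                    /* ...we died; sleep on the contended lock, then retry. */
                    ww_mutex_lock_slow(&contended->obj->lock, ctx);
                    res_obj = contended->obj;
                    goto retry;
            }
            ww_acquire_fini(ctx);
            return ret;
    }

On success the caller runs its transaction, unlocks every object and calls ww_acquire_fini(ctx).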
@@ -6,7 +6,7 @@
  *
  * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
  *
- * Wound/wait implementation:
+ * Wait/Die implementation:
  * Copyright (C) 2013 Canonical Ltd.
  *
  * This file contains the main data structure and API definitions.
@@ -28,9 +28,9 @@ struct ww_class {
 struct ww_acquire_ctx {
         struct task_struct *task;
         unsigned long stamp;
-        unsigned acquired;
+        unsigned int acquired;
 #ifdef CONFIG_DEBUG_MUTEXES
-        unsigned done_acquire;
+        unsigned int done_acquire;
         struct ww_class *ww_class;
         struct ww_mutex *contending_lock;
 #endif
@@ -38,8 +38,8 @@ struct ww_acquire_ctx {
         struct lockdep_map dep_map;
 #endif
 #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
-        unsigned deadlock_inject_interval;
-        unsigned deadlock_inject_countdown;
+        unsigned int deadlock_inject_interval;
+        unsigned int deadlock_inject_countdown;
 #endif
 };
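The debug fields aside, this context follows a strict lifecycle around one acquire phase. A minimal sketch, reusing the hypothetical struct obj and ww_class from the earlier sketch (the -EDEADLK/-EALREADY handling shown there is omitted here for brevity):

    static void demo_transaction(struct obj *obj_a, struct obj *obj_b)
    {
            struct ww_acquire_ctx ctx;

            ww_acquire_init(&ctx, &ww_class);       /* assigns the stamp */

            /* Acquire phase: all locks of the transaction share the ctx. */
            ww_mutex_lock(&obj_a->lock, &ctx);
            ww_mutex_lock(&obj_b->lock, &ctx);

            ww_acquire_done(&ctx);  /* debug aid: no further locks follow */

            /* ... work on both objects ... */

            ww_mutex_unlock(&obj_b->lock);
            ww_mutex_unlock(&obj_a->lock);
            ww_acquire_fini(&ctx);  /* the stamp retires with the context */
    }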
@@ -102,7 +102,7 @@ static inline void ww_mutex_init(struct ww_mutex *lock,
  *
  * Context-based w/w mutex acquiring can be done in any order whatsoever within
  * a given lock class. Deadlocks will be detected and handled with the
- * wait/wound logic.
+ * wait/die logic.
  *
  * Mixing of context-based w/w mutex acquiring and single w/w mutex locking can
  * result in undetected deadlocks and is so forbidden. Mixing different contexts
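Here "a given lock class" is concrete: every mutex initialized against the same ww_class belongs to one deadlock-avoidance domain. A short sketch with hypothetical names:

    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(buf_ww_class);

    struct buffer {
            struct ww_mutex lock;
            /* ... payload ... */
    };

    static void buffer_init(struct buffer *buf)
    {
            /* Joins buf->lock to the buf_ww_class domain. */
            ww_mutex_init(&buf->lock, &buf_ww_class);
    }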
@@ -195,13 +195,13 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
  * Lock the w/w mutex exclusively for this task.
  *
  * Deadlocks within a given w/w class of locks are detected and handled with the
- * wait/wound algorithm. If the lock isn't immediately avaiable this function
+ * wait/die algorithm. If the lock isn't immediately available this function
  * will either sleep until it is (wait case). Or it selects the current context
- * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * for backing off by returning -EDEADLK (die case). Trying to acquire the
  * same lock with the same context twice is also detected and signalled by
  * returning -EALREADY. Returns 0 if the mutex was successfully acquired.
  *
- * In the wound case the caller must release all currently held w/w mutexes for
+ * In the die case the caller must release all currently held w/w mutexes for
  * the given context and then wait for this contending lock to be available by
  * calling ww_mutex_lock_slow. Alternatively callers can opt to not acquire this
  * lock and proceed with trying to acquire further w/w mutexes (e.g. when
@@ -226,14 +226,14 @@ extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acq
  * Lock the w/w mutex exclusively for this task.
  *
  * Deadlocks within a given w/w class of locks are detected and handled with the
- * wait/wound algorithm. If the lock isn't immediately avaiable this function
+ * wait/die algorithm. If the lock isn't immediately available this function
  * will either sleep until it is (wait case). Or it selects the current context
- * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * for backing off by returning -EDEADLK (die case). Trying to acquire the
  * same lock with the same context twice is also detected and signalled by
  * returning -EALREADY. Returns 0 if the mutex was successfully acquired. If a
  * signal arrives while waiting for the lock then this function returns -EINTR.
  *
- * In the wound case the caller must release all currently held w/w mutexes for
+ * In the die case the caller must release all currently held w/w mutexes for
  * the given context and then wait for this contending lock to be available by
  * calling ww_mutex_lock_slow_interruptible. Alternatively callers can opt to
  * not acquire this lock and proceed with trying to acquire further w/w mutexes
@@ -256,7 +256,7 @@ extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
  * @lock: the mutex to be acquired
  * @ctx: w/w acquire context
  *
- * Acquires a w/w mutex with the given context after a wound case. This function
+ * Acquires a w/w mutex with the given context after a die case. This function
  * will sleep until the lock becomes available.
  *
  * The caller must have released all w/w mutexes already acquired with the
@@ -290,7 +290,7 @@ ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
  * @lock: the mutex to be acquired
  * @ctx: w/w acquire context
  *
- * Acquires a w/w mutex with the given context after a wound case. This function
+ * Acquires a w/w mutex with the given context after a die case. This function
  * will sleep until the lock becomes available and returns 0 when the lock has
  * been acquired. If a signal arrives while waiting for the lock then this
  * function returns -EINTR.
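Tying the interruptible entry points together, a two-lock transaction could handle the die case as below. This is a sketch, not kernel code: lock_pair_interruptible() is a hypothetical helper, ww_class is the hypothetical class from the earlier sketch, and swap() is the kernel's swap macro:

    #include <linux/kernel.h>
    #include <linux/ww_mutex.h>

    static int lock_pair_interruptible(struct ww_mutex *m1,
                                       struct ww_mutex *m2,
                                       struct ww_acquire_ctx *ctx)
    {
            int ret;

            ww_acquire_init(ctx, &ww_class);

            ret = ww_mutex_lock_interruptible(m1, ctx);
            if (ret)        /* nothing held yet, just bail out */
                    goto out_fini;

            while ((ret = ww_mutex_lock_interruptible(m2, ctx)) == -EDEADLK) {
                    /* Die case: drop what we hold and sleep on the
                     * contended lock... */
                    ww_mutex_unlock(m1);
                    ret = ww_mutex_lock_slow_interruptible(m2, ctx);
                    if (ret)        /* -EINTR, nothing held */
                            goto out_fini;
                    /* ...then take the other lock on the next pass. */
                    swap(m1, m2);
            }
            if (ret)        /* -EINTR while holding m1 */
                    goto out_unlock;

            ww_acquire_done(ctx);   /* both locks held */
            return 0;

    out_unlock:
            ww_mutex_unlock(m1);
    out_fini:
            ww_acquire_fini(ctx);
            return ret;
    }

On success the caller eventually unlocks both mutexes and calls ww_acquire_fini(ctx), exactly as in the non-interruptible case.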
...
This diff is collapsed.