Commit d7e8af1a authored by Davidlohr Bueso, committed by Linus Torvalds

futex: update documentation for ordering guarantees

Commits 11d4616b ("futex: revert back to the explicit waiter
counting code") and 69cd9eba ("futex: avoid race between requeue and
wake") changed some of the finer details of how we think about futexes.
One was a late fix and the other a consequence of overlooking the whole
requeuing logic.

The first change caused our documentation to be incorrect, and the
second made us aware that we need to explicitly add more details to it.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 454fd351
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -70,7 +70,10 @@
 #include "locking/rtmutex_common.h"
 
 /*
- * Basic futex operation and ordering guarantees:
+ * READ this before attempting to hack on futexes!
+ *
+ * Basic futex operation and ordering guarantees
+ * =============================================
  *
  * The waiter reads the futex value in user space and calls
  * futex_wait(). This function computes the hash bucket and acquires
@@ -119,7 +122,7 @@
  *   sys_futex(WAIT, futex, val);
  *     futex_wait(futex, val);
  *
- *   waiters++;
+ *   waiters++; (a)
  *   mb(); (A) <-- paired with -.
  *                              |
  *   lock(hash_bucket(futex));  |
@@ -135,14 +138,14 @@
  *     unlock(hash_bucket(futex));
  *     schedule();                         if (waiters)
  *                                           lock(hash_bucket(futex));
- *                                           wake_waiters(futex);
- *                                           unlock(hash_bucket(futex));
+ *   else                                    wake_waiters(futex);
+ *     waiters--; (b)                        unlock(hash_bucket(futex));
  *
- * Where (A) orders the waiters increment and the futex value read -- this
- * is guaranteed by the head counter in the hb spinlock; and where (B)
- * orders the write to futex and the waiters read -- this is done by the
- * barriers in get_futex_key_refs(), through either ihold or atomic_inc,
- * depending on the futex type.
+ * Where (A) orders the waiters increment and the futex value read through
+ * atomic operations (see hb_waiters_inc) and where (B) orders the write
+ * to futex and the waiters read -- this is done by the barriers in
+ * get_futex_key_refs(), through either ihold or atomic_inc, depending on the
+ * futex type.
  *
  * This yields the following case (where X:=waiters, Y:=futex):
  *
@@ -155,6 +158,17 @@
  * Which guarantees that x==0 && y==0 is impossible; which translates back into
  * the guarantee that we cannot both miss the futex variable change and the
  * enqueue.
+ *
+ * Note that a new waiter is accounted for in (a) even when it is possible that
+ * the wait call can return error, in which case we backtrack from it in (b).
+ * Refer to the comment in queue_lock().
+ *
+ * Similarly, in order to account for waiters being requeued on another
+ * address we always increment the waiters for the destination bucket before
+ * acquiring the lock. It then decrements them again after releasing it -
+ * the code that actually moves the futex(es) between hash buckets (requeue_futex)
+ * will do the additional required waiter count housekeeping. This is done for
+ * double_lock_hb() and double_unlock_hb(), respectively.
  */
 
 #ifndef CONFIG_HAVE_FUTEX_CMPXCHG
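For readers new to this file, the store-buffering argument in the comment can be reproduced outside the kernel. The following is a minimal user-space C11 sketch of the same protocol; waiters, futex_val and the two helper functions are invented names for illustration, and the seq_cst fences stand in for the kernel's mb() barriers (A) and (B):

/*
 * Illustrative user-space sketch only -- NOT the kernel code.  The
 * variables play the roles from the diagram: waiters is X, futex_val
 * is Y, and the fences are the mb() barriers (A) and (B).
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int waiters;	/* X: number of would-be sleepers */
static atomic_int futex_val;	/* Y: the futex word itself       */

/* Waiter side, modelling futex_wait(futex, val). */
static bool waiter_blocks(int val)
{
	atomic_fetch_add_explicit(&waiters, 1, memory_order_relaxed);	  /* (a) */
	atomic_thread_fence(memory_order_seq_cst);			  /* (A) */

	if (atomic_load_explicit(&futex_val, memory_order_relaxed) != val) {
		/* Value changed under us: undo the accounting and retry. */
		atomic_fetch_sub_explicit(&waiters, 1, memory_order_relaxed); /* (b) */
		return false;
	}
	/* Here the kernel would lock the hash bucket, queue() and schedule(). */
	return true;
}

/* Waker side, modelling *futex = newval; sys_futex(WAKE, futex). */
static bool waker_wakes(int newval)
{
	atomic_store_explicit(&futex_val, newval, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);			  /* (B) */

	/*
	 * The fence pairing makes x==0 && y==0 impossible: if we read
	 * waiters == 0 here, the waiter must observe newval and bail out.
	 */
	return atomic_load_explicit(&waiters, memory_order_relaxed) != 0;
}

At least one of the two relaxed loads must observe the other side's store, so a waker either sees the waiter's increment or the waiter sees the new value; sleeping through a wakeup is therefore impossible.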
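The requeue note is easier to see with the same toy model extended with hash buckets. Below is a hypothetical user-space analogue of the double_lock_hb()/double_unlock_hb() accounting the comment describes; struct bucket and the C11 mutexes are stand-ins for the kernel's hash bucket and its spinlock, not the real types:

#include <stdatomic.h>
#include <stdint.h>
#include <threads.h>

struct bucket {
	atomic_int waiters;	/* what a waker checks before locking */
	mtx_t lock;		/* stand-in for the hb spinlock       */
};

/* Analogue of double_lock_hb(); hb2 is the requeue destination. */
static void double_lock(struct bucket *hb1, struct bucket *hb2)
{
	/*
	 * Pessimistically bump the destination's waiter count *before*
	 * taking the locks, so a concurrent waker on the destination
	 * address cannot read waiters == 0 and skip the wakeup while
	 * waiters are in flight between the buckets.
	 */
	atomic_fetch_add(&hb2->waiters, 1);

	/* Lock in a fixed address order to avoid ABBA deadlock. */
	if ((uintptr_t)hb1 <= (uintptr_t)hb2) {
		mtx_lock(&hb1->lock);
		if (hb1 != hb2)
			mtx_lock(&hb2->lock);
	} else {
		mtx_lock(&hb2->lock);
		mtx_lock(&hb1->lock);
	}
}

/* Analogue of double_unlock_hb(). */
static void double_unlock(struct bucket *hb1, struct bucket *hb2)
{
	mtx_unlock(&hb1->lock);
	if (hb1 != hb2)
		mtx_unlock(&hb2->lock);

	/*
	 * Drop the pessimistic count again; each waiter actually moved
	 * has had its own per-bucket accounting fixed up while the locks
	 * were held (the job requeue_futex() does in the kernel).
	 */
	atomic_fetch_sub(&hb2->waiters, 1);
}

The window this closes is the one where a waiter has been removed from the source bucket but not yet added to the destination; during that window the destination's count is still non-zero thanks to the pessimistic increment, so wakers take the slow path and grab the lock instead of returning early.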