Commit 3d85b270 authored by Andrea Parri, committed by Ingo Molnar

locking/spinlock, sched/core: Clarify requirements for smp_mb__after_spinlock()

There are 11 interpretations of the requirements described in the header
comment for smp_mb__after_spinlock(): one for each LKMM maintainer, and
one currently encoded in the Cat file. Stick to the latter (until a more
satisfactory solution is available).

This also reworks some snippets related to the barrier to illustrate the
requirements and to link them to the idioms which are relied upon at its
call sites.
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: akiyks@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Link: http://lkml.kernel.org/r/20180716180605.16115-11-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 76e079fe
@@ -114,29 +114,48 @@ do { \
 #endif /*arch_spin_is_contended*/
 
 /*
- * This barrier must provide two things:
- *
- *   - it must guarantee a STORE before the spin_lock() is ordered against a
- *     LOAD after it, see the comments at its two usage sites.
- *
- *   - it must ensure the critical section is RCsc.
- *
- * The latter is important for cases where we observe values written by other
- * CPUs in spin-loops, without barriers, while being subject to scheduling.
- *
- * CPU0                 CPU1                    CPU2
- *
- *                      for (;;) {
- *                        if (READ_ONCE(X))
- *                          break;
- *                      }
- * X=1
- *                      <sched-out>
- *                                              <sched-in>
- *                                              r = X;
- *
- * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
- * we get migrated and CPU2 sees X==0.
+ * smp_mb__after_spinlock() provides the equivalent of a full memory barrier
+ * between program-order earlier lock acquisitions and program-order later
+ * memory accesses.
+ *
+ * This guarantees that the following two properties hold:
+ *
+ *   1) Given the snippet:
+ *
+ *        { X = 0;  Y = 0; }
+ *
+ *        CPU0                          CPU1
+ *
+ *        WRITE_ONCE(X, 1);             WRITE_ONCE(Y, 1);
+ *        spin_lock(S);                 smp_mb();
+ *        smp_mb__after_spinlock();     r1 = READ_ONCE(X);
+ *        r0 = READ_ONCE(Y);
+ *        spin_unlock(S);
+ *
+ *      it is forbidden that CPU0 does not observe CPU1's store to Y (r0 = 0)
+ *      and CPU1 does not observe CPU0's store to X (r1 = 0); see the comments
+ *      preceding the call to smp_mb__after_spinlock() in __schedule() and in
+ *      try_to_wake_up().
+ *
+ *   2) Given the snippet:
+ *
+ *        { X = 0;  Y = 0; }
+ *
+ *        CPU0                  CPU1                            CPU2
+ *
+ *        spin_lock(S);         spin_lock(S);                   r1 = READ_ONCE(Y);
+ *        WRITE_ONCE(X, 1);     smp_mb__after_spinlock();       smp_rmb();
+ *        spin_unlock(S);       r0 = READ_ONCE(X);              r2 = READ_ONCE(X);
+ *                              WRITE_ONCE(Y, 1);
+ *                              spin_unlock(S);
+ *
+ *      it is forbidden that CPU0's critical section executes before CPU1's
+ *      critical section (r0 = 1), CPU2 observes CPU1's store to Y (r1 = 1)
+ *      and CPU2 does not observe CPU0's store to X (r2 = 0); see the comments
+ *      preceding the calls to smp_rmb() in try_to_wake_up() for similar
+ *      snippets but "projected" onto two CPUs.
+ *
+ * Property (2) upgrades the lock to an RCsc lock.
  *
  * Since most load-store architectures implement ACQUIRE with an smp_mb() after
  * the LL/SC loop, they need no further barriers. Similarly all our TSO
...
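To make property (1) concrete outside the kernel, here is a minimal userspace analogue (an illustration only, not part of this patch): it assumes C11 atomics and POSIX threads, with pthread_mutex_lock() standing in for spin_lock(S) and atomic_thread_fence(memory_order_seq_cst) standing in for smp_mb__after_spinlock() and smp_mb(). With both fences in place, the store-buffering outcome r0 == 0 && r1 == 0 is forbidden.

/*
 * Userspace analogue of property (1) above -- an illustration only, not
 * kernel code: the mutex stands in for spin_lock(S) and the seq_cst fences
 * stand in for smp_mb__after_spinlock() / smp_mb().
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int X, Y;
static pthread_mutex_t S = PTHREAD_MUTEX_INITIALIZER;
static int r0, r1;

static void *cpu0(void *arg)
{
	(void)arg;
	atomic_store_explicit(&X, 1, memory_order_relaxed);   /* WRITE_ONCE(X, 1)          */
	pthread_mutex_lock(&S);                                /* spin_lock(S)              */
	atomic_thread_fence(memory_order_seq_cst);             /* smp_mb__after_spinlock()  */
	r0 = atomic_load_explicit(&Y, memory_order_relaxed);   /* r0 = READ_ONCE(Y)         */
	pthread_mutex_unlock(&S);                              /* spin_unlock(S)            */
	return NULL;
}

static void *cpu1(void *arg)
{
	(void)arg;
	atomic_store_explicit(&Y, 1, memory_order_relaxed);   /* WRITE_ONCE(Y, 1)  */
	atomic_thread_fence(memory_order_seq_cst);             /* smp_mb()          */
	r1 = atomic_load_explicit(&X, memory_order_relaxed);   /* r1 = READ_ONCE(X) */
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* With both fences in place, "r0 = 0, r1 = 0" must never be printed. */
	printf("r0 = %d, r1 = %d\n", r0, r1);
	return 0;
}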
@@ -1999,20 +1999,19 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 * in smp_cond_load_acquire() below.
 	 *
 	 * sched_ttwu_pending()                 try_to_wake_up()
-	 *   [S] p->on_rq = 1;                  [L] P->state
-	 *       UNLOCK rq->lock  -----.
-	 *                              \
-	 *                               +---   RMB
-	 * schedule()                   /
-	 *       LOCK rq->lock    -----'
-	 *       UNLOCK rq->lock
+	 *   STORE p->on_rq = 1                   LOAD p->state
+	 *   UNLOCK rq->lock
+	 *
+	 * __schedule() (switch to task 'p')
+	 *   LOCK rq->lock                        smp_rmb();
+	 *   smp_mb__after_spinlock();
+	 *   UNLOCK rq->lock
 	 *
 	 * [task p]
-	 *   [S] p->state = UNINTERRUPTIBLE     [L] p->on_rq
+	 *   STORE p->state = UNINTERRUPTIBLE     LOAD p->on_rq
 	 *
-	 * Pairs with the UNLOCK+LOCK on rq->lock from the
-	 * last wakeup of our task and the schedule that got our task
-	 * current.
+	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
+	 * __schedule().  See the comment for smp_mb__after_spinlock().
 	 */
 	smp_rmb();
 	if (p->on_rq && ttwu_remote(p, wake_flags))
@@ -2026,15 +2025,17 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 * One must be running (->on_cpu == 1) in order to remove oneself
 	 * from the runqueue.
 	 *
-	 *  [S] ->on_cpu = 1;   [L] ->on_rq
-	 *      UNLOCK rq->lock
-	 *                      RMB
-	 *                      LOCK   rq->lock
-	 *  [S] ->on_rq = 0;    [L] ->on_cpu
+	 * __schedule() (switch to task 'p')    try_to_wake_up()
+	 *   STORE p->on_cpu = 1                  LOAD p->on_rq
+	 *   UNLOCK rq->lock
+	 *
+	 * __schedule() (put 'p' to sleep)
+	 *   LOCK rq->lock                        smp_rmb();
+	 *   smp_mb__after_spinlock();
+	 *   STORE p->on_rq = 0                   LOAD p->on_cpu
+	 *
+	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
+	 * __schedule().  See the comment for smp_mb__after_spinlock().
-	 *
-	 * Pairs with the full barrier implied in the UNLOCK+LOCK on rq->lock
-	 * from the consecutive calls to schedule(); the first switching to our
-	 * task, the second putting it to sleep.
 	 */
 	smp_rmb();
...
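The two diagrams above are instances of a message-passing pattern: the stores on the left-hand side are kept in order by LOCK + smp_mb__after_spinlock(), and the loads on the right-hand side by smp_rmb(). As a rough userspace sketch of the first diagram (again an illustration only, not kernel code; the writer collapses the two CPUs of the diagram onto one thread, a C11 seq_cst fence stands in for smp_mb__after_spinlock(), and an acquire fence stands in for smp_rmb()):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define TASK_UNINTERRUPTIBLE 2           /* stand-in value, for illustration */

static atomic_int on_rq;                 /* analogue of p->on_rq */
static atomic_int state;                 /* analogue of p->state */
static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

/* sched_ttwu_pending() + __schedule() + task p, collapsed onto one thread. */
static void *writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&on_rq, 1, memory_order_relaxed);   /* STORE p->on_rq = 1       */
	pthread_mutex_lock(&rq_lock);                              /* LOCK rq->lock            */
	atomic_thread_fence(memory_order_seq_cst);                 /* smp_mb__after_spinlock() */
	atomic_store_explicit(&state, TASK_UNINTERRUPTIBLE,
			      memory_order_relaxed);               /* STORE p->state           */
	pthread_mutex_unlock(&rq_lock);                            /* UNLOCK rq->lock          */
	return NULL;
}

/* try_to_wake_up() side. */
static void *reader(void *arg)
{
	int s, rq;

	(void)arg;
	s = atomic_load_explicit(&state, memory_order_relaxed);   /* LOAD p->state */
	atomic_thread_fence(memory_order_acquire);                 /* smp_rmb()     */
	rq = atomic_load_explicit(&on_rq, memory_order_relaxed);   /* LOAD p->on_rq */

	/*
	 * Forbidden outcome: observing the later store to state but not the
	 * earlier store to on_rq, i.e. s == TASK_UNINTERRUPTIBLE && rq == 0.
	 */
	printf("state = %d, on_rq = %d\n", s, rq);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, writer, NULL);
	pthread_create(&t1, NULL, reader, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}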