Commit a14d11a0 authored by Andrea Parri, committed by Palmer Dabbelt

membarrier: Create Documentation/scheduler/membarrier.rst

To gather the architecture requirements of the "private/global
expedited" membarrier commands.  The file will be expanded to
integrate further information about the membarrier syscall (as
needed/desired in the future).  While at it, amend some related
inline comments in the membarrier codebase.
Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/r/20240131144936.29190-3-parri.andrea@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
parent d6cfd177
Documentation/scheduler/index.rst
@@ -7,6 +7,7 @@ Scheduler
     completion
+    membarrier
     sched-arch
     sched-bwc
     sched-deadline
Documentation/scheduler/membarrier.rst (new file)

.. SPDX-License-Identifier: GPL-2.0

========================
membarrier() System Call
========================

MEMBARRIER_CMD_{PRIVATE,GLOBAL}_EXPEDITED - Architecture requirements
=====================================================================

Memory barriers before updating rq->curr
----------------------------------------

The commands MEMBARRIER_CMD_PRIVATE_EXPEDITED and MEMBARRIER_CMD_GLOBAL_EXPEDITED
require each architecture to have a full memory barrier after coming from
user-space, before updating rq->curr. This barrier is implied by the sequence
rq_lock(); smp_mb__after_spinlock() in __schedule(). The barrier matches a full
barrier in the proximity of the membarrier system call exit,
cf. membarrier_{private,global}_expedited().

Memory barriers after updating rq->curr
---------------------------------------

The commands MEMBARRIER_CMD_PRIVATE_EXPEDITED and MEMBARRIER_CMD_GLOBAL_EXPEDITED
require each architecture to have a full memory barrier after updating rq->curr,
before returning to user-space. The schemes providing this barrier on the
various architectures are as follows.

- alpha, arc, arm, hexagon, mips rely on the full barrier implied by
  spin_unlock() in finish_lock_switch().

- arm64 relies on the full barrier implied by switch_to().

- powerpc, riscv, s390, sparc, x86 rely on the full barrier implied by
  switch_mm(), if mm is not NULL; they rely on the full barrier implied
  by mmdrop(), otherwise. On powerpc and riscv, switch_mm() relies on
  membarrier_arch_switch_mm().

The barrier matches a full barrier in the proximity of the membarrier system call
entry, cf. membarrier_{private,global}_expedited().
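
For context (not part of this patch), a minimal user-space sketch of issuing the
private expedited command documented above; it assumes a kernel with membarrier
support and uses the raw syscall interface, since glibc provides no wrapper:

/* Minimal sketch, not from this commit: issue MEMBARRIER_CMD_PRIVATE_EXPEDITED
 * from user-space.  There is no glibc wrapper, so syscall(2) is used directly. */
#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static long membarrier(int cmd, unsigned int flags, int cpu_id)
{
	return syscall(__NR_membarrier, cmd, flags, cpu_id);
}

int main(void)
{
	/* The private expedited command requires prior registration. */
	if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0, 0)) {
		perror("MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED");
		return 1;
	}
	/*
	 * On return, all running threads of this process have executed a
	 * full memory barrier; the rq->curr barriers documented above are
	 * what makes this guarantee hold across context switches.
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0, 0)) {
		perror("MEMBARRIER_CMD_PRIVATE_EXPEDITED");
		return 1;
	}
	return 0;
}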
MAINTAINERS
@@ -14039,6 +14039,7 @@ M:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 M:	"Paul E. McKenney" <paulmck@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
+F:	Documentation/scheduler/membarrier.rst
 F:	arch/*/include/asm/membarrier.h
 F:	include/uapi/linux/membarrier.h
 F:	kernel/sched/membarrier.c
kernel/sched/core.c
@@ -6638,7 +6638,9 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	 * if (signal_pending_state())	    if (p->state & @state)
 	 *
 	 * Also, the membarrier system call requires a full memory barrier
-	 * after coming from user-space, before storing to rq->curr.
+	 * after coming from user-space, before storing to rq->curr; this
+	 * barrier matches a full barrier in the proximity of the membarrier
+	 * system call exit.
 	 */
 	rq_lock(rq, &rf);
 	smp_mb__after_spinlock();
@@ -6716,6 +6718,9 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	 *   architectures where spin_unlock is a full barrier,
 	 * - switch_to() for arm64 (weakly-ordered, spin_unlock
 	 *   is a RELEASE barrier),
+	 *
+	 * The barrier matches a full barrier in the proximity of
+	 * the membarrier system call entry.
 	 */
 	++*switch_count;
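
To make the pairing these comments describe concrete, here is a toy C11 model
(an illustration under simplifying assumptions, not kernel code): the
"scheduler" thread stands in for __schedule() updating rq->curr, the "caller"
thread for a membarrier system call, and seq_cst fences for smp_mb():

/* Toy model -- an illustration, not kernel code. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int curr;		/* models rq->curr */
static atomic_int data;		/* models ordinary user-space memory */

static void *scheduler(void *arg)
{
	(void)arg;
	atomic_store_explicit(&data, 1, memory_order_relaxed);
	/* smp_mb__after_spinlock(): full barrier before the rq->curr update */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&curr, 1, memory_order_relaxed);
	/* switch_mm()/switch_to()/mmdrop(): full barrier after the update */
	atomic_thread_fence(memory_order_seq_cst);
	return NULL;
}

static void *caller(void *arg)
{
	(void)arg;
	/* smp_mb() at entry ("system call entry is not a mb") */
	atomic_thread_fence(memory_order_seq_cst);
	int c = atomic_load_explicit(&curr, memory_order_relaxed);
	/* smp_mb() at exit ("exit from system call is not a mb") */
	atomic_thread_fence(memory_order_seq_cst);
	int d = atomic_load_explicit(&data, memory_order_relaxed);
	/* The fence pairing forbids the outcome c == 1 && d == 0. */
	printf("curr=%d data=%d\n", c, d);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, scheduler, NULL);
	pthread_create(&t2, NULL, caller, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}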
kernel/sched/membarrier.c
@@ -251,7 +251,7 @@ static int membarrier_global_expedited(void)
 		return 0;
 
 	/*
-	 * Matches memory barriers around rq->curr modification in
+	 * Matches memory barriers after rq->curr modification in
 	 * scheduler.
 	 */
 	smp_mb();	/* system call entry is not a mb. */
@@ -300,7 +300,7 @@ static int membarrier_global_expedited(void)
 	/*
 	 * Memory barrier on the caller thread _after_ we finished
-	 * waiting for the last IPI. Matches memory barriers around
+	 * waiting for the last IPI. Matches memory barriers before
 	 * rq->curr modification in scheduler.
 	 */
 	smp_mb();	/* exit from system call is not a mb */
@@ -339,7 +339,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
 		return 0;
 
 	/*
-	 * Matches memory barriers around rq->curr modification in
+	 * Matches memory barriers after rq->curr modification in
 	 * scheduler.
 	 */
 	smp_mb();	/* system call entry is not a mb. */
@@ -415,7 +415,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
 	/*
 	 * Memory barrier on the caller thread _after_ we finished
-	 * waiting for the last IPI. Matches memory barriers around
+	 * waiting for the last IPI. Matches memory barriers before
 	 * rq->curr modification in scheduler.
 	 */
 	smp_mb();	/* exit from system call is not a mb */
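
As a usage-side footnote (not part of this patch): the expedited commands whose
entry/exit barriers are amended above can be probed at run time with
MEMBARRIER_CMD_QUERY, which returns a bitmask of the commands the running
kernel supports. A minimal sketch:

/* Sketch, not from this commit: probe supported membarrier commands. */
#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long mask = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0, 0);

	if (mask < 0) {
		perror("MEMBARRIER_CMD_QUERY");
		return 1;
	}
	printf("GLOBAL_EXPEDITED supported:  %s\n",
	       (mask & MEMBARRIER_CMD_GLOBAL_EXPEDITED) ? "yes" : "no");
	printf("PRIVATE_EXPEDITED supported: %s\n",
	       (mask & MEMBARRIER_CMD_PRIVATE_EXPEDITED) ? "yes" : "no");
	return 0;
}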