Commit d6cfd177 authored by Andrea Parri, committed by Palmer Dabbelt

membarrier: riscv: Add full memory barrier in switch_mm()

The membarrier system call requires a full memory barrier after storing
to rq->curr, before going back to user-space.  The barrier is only
needed when switching between processes: the barrier is implied by
mmdrop() when switching from kernel to userspace, and it's not needed
when switching from userspace to kernel.

Rely on the feature/mechanism ARCH_HAS_MEMBARRIER_CALLBACKS and on the
primitive membarrier_arch_switch_mm(), already adopted by the PowerPC
architecture, to insert the required barrier.

Fixes: fab957c1 ("RISC-V: Atomic and Locking Code")
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/r/20240131144936.29190-2-parri.andrea@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
parent 6613476e
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14039,7 +14039,7 @@ M:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 M:	"Paul E. McKenney" <paulmck@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
-F:	arch/powerpc/include/asm/membarrier.h
+F:	arch/*/include/asm/membarrier.h
 F:	include/uapi/linux/membarrier.h
 F:	kernel/sched/membarrier.c
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -27,6 +27,7 @@ config RISCV
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_HAS_KCOV
+	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_MMIOWB
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PMEM_API
new file: arch/riscv/include/asm/membarrier.h

/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef _ASM_RISCV_MEMBARRIER_H
#define _ASM_RISCV_MEMBARRIER_H
static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	/*
	 * Only need the full barrier when switching between processes.
	 * Barrier when switching from kernel to userspace is not
	 * required here, given that it is implied by mmdrop(). Barrier
	 * when switching from userspace to kernel is not needed after
	 * store to rq->curr.
	 */
	if (IS_ENABLED(CONFIG_SMP) &&
	    likely(!(atomic_read(&next->membarrier_state) &
		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
		return;

	/*
	 * The membarrier system call requires a full memory barrier
	 * after storing to rq->curr, before going back to user-space.
	 * Matches a full barrier in the proximity of the membarrier
	 * system call entry.
	 */
	smp_mb();
}
#endif /* _ASM_RISCV_MEMBARRIER_H */
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -323,6 +323,8 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	if (unlikely(prev == next))
 		return;
 
+	membarrier_arch_switch_mm(prev, next, task);
+
 	/*
 	 * Mark the current MM context as inactive, and the next as
 	 * active. This is at least used by the icache flushing
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6709,8 +6709,9 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	 *
 	 * Here are the schemes providing that barrier on the
 	 * various architectures:
-	 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
-	 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
+	 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
+	 *   RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
+	 *   on PowerPC and on RISC-V.
 	 * - finish_lock_switch() for weakly-ordered
 	 *   architectures where spin_unlock is a full barrier,
 	 * - switch_to() for arm64 (weakly-ordered, spin_unlock