Commit 79364031 authored by Sebastian Andrzej Siewior, committed by Daniel Borkmann

bpf: Make sure bpf_disable_instrumentation() is safe vs preemption.

The initial implementation of migrate_disable() for mainline was a
wrapper around preempt_disable(). RT kernels substituted this with a
real migrate disable implementation.

Later on, mainline gained a real migrate_disable() implementation, but neither
the documentation nor the affected code was updated.

Remove the stale comments which assume that a true migrate_disable() exists only on PREEMPT_RT kernels.

Don't use __this_cpu_inc() in the !PREEMPT_RT path: migrate_disable() no longer
disables preemption there, so the read-modify-write can be preempted in the
middle and the counter can be corrupted.
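
A hedged illustration of that window (a plain C model, not the real per-CPU
code; on architectures without a single-instruction per-CPU increment the
generic fallback is an open-coded load/add/store):

	/* Simplified model only; stands in for this CPU's bpf_prog_active. */
	static unsigned long bpf_prog_active_model;

	static void unsafe_inc(void)		/* models __this_cpu_inc() */
	{
		unsigned long v = bpf_prog_active_model;	/* load */
		/* Preemption is still enabled here: another task scheduled on
		 * this CPU can run bpf_disable_instrumentation() and bump the
		 * counter in this window. */
		bpf_prog_active_model = v + 1;	/* store: overwrites that increment */
	}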

Fixes: 74d862b6 ("sched: Make migrate_disable/enable() independent of RT")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211127163200.10466-3-bigeasy@linutronix.de
parent 6a631c04
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1353,28 +1353,16 @@ extern struct mutex bpf_stats_enabled_mutex;
  * kprobes, tracepoints) to prevent deadlocks on map operations as any of
  * these events can happen inside a region which holds a map bucket lock
  * and can deadlock on it.
- *
- * Use the preemption safe inc/dec variants on RT because migrate disable
- * is preemptible on RT and preemption in the middle of the RMW operation
- * might lead to inconsistent state. Use the raw variants for non RT
- * kernels as migrate_disable() maps to preempt_disable() so the slightly
- * more expensive save operation can be avoided.
  */
 static inline void bpf_disable_instrumentation(void)
 {
 	migrate_disable();
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_inc(bpf_prog_active);
-	else
-		__this_cpu_inc(bpf_prog_active);
+	this_cpu_inc(bpf_prog_active);
 }
 
 static inline void bpf_enable_instrumentation(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_dec(bpf_prog_active);
-	else
-		__this_cpu_dec(bpf_prog_active);
+	this_cpu_dec(bpf_prog_active);
 	migrate_enable();
 }
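
For context, a hedged sketch of how this pair is meant to be used (simplified
and not the exact kernel code; the function name below is illustrative): the
syscall path bumps bpf_prog_active around a map operation, and the tracing
side refuses to run a program while the counter is non-zero, which is what
avoids the bucket-lock deadlock described in the comment above.

	/* Syscall-side map update (sketch). */
	static int map_update_sketch(struct bpf_map *map, void *key, void *value, u64 flags)
	{
		int err;

		bpf_disable_instrumentation();	/* migrate_disable() + this_cpu_inc(bpf_prog_active) */
		err = map->ops->map_update_elem(map, key, value, flags);
		bpf_enable_instrumentation();	/* this_cpu_dec(bpf_prog_active) + migrate_enable() */
		return err;
	}
	/* The kprobe/tracepoint entry path then skips the BPF program when it
	 * finds bpf_prog_active already elevated on this CPU. */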
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -640,9 +640,6 @@ static __always_inline u32 bpf_prog_run(const struct bpf_prog *prog, const void
  * This uses migrate_disable/enable() explicitly to document that the
  * invocation of a BPF program does not require reentrancy protection
  * against a BPF program which is invoked from a preempting task.
- *
- * For non RT enabled kernels migrate_disable/enable() maps to
- * preempt_disable/enable(), i.e. it disables also preemption.
  */
 static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
 					  const void *ctx)
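
For reference, bpf_prog_run_pin_on_cpu() boils down to roughly the following
(a sketch of the shape of the function; the exact body in
include/linux/filter.h may differ in detail). migrate_disable() pins the task
to the current CPU for the duration of the program but, as the comment cleanup
above reflects, it no longer implies disabling preemption.

	static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
						  const void *ctx)
	{
		u32 ret;

		migrate_disable();		/* stay on this CPU; still preemptible */
		ret = bpf_prog_run(prog, ctx);
		migrate_enable();
		return ret;
	}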