Commit 1be5bdf8 authored by Linus Torvalds

Merge tag 'kcsan.2022.01.09a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull KCSAN updates from Paul McKenney:
 "This provides KCSAN fixes and also the ability to take memory barriers
  into account for weakly-ordered systems. This last can increase the
  probability of detecting certain types of data races"

* tag 'kcsan.2022.01.09a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (29 commits)
  kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it
  kcsan: Avoid nested contexts reading inconsistent reorder_access
  kcsan: Turn barrier instrumentation into macros
  kcsan: Make barrier tests compatible with lockdep
  kcsan: Support WEAK_MEMORY with Clang where no objtool support exists
  compiler_attributes.h: Add __disable_sanitizer_instrumentation
  objtool, kcsan: Remove memory barrier instrumentation from noinstr
  objtool, kcsan: Add memory barrier instrumentation to whitelist
  sched, kcsan: Enable memory barrier instrumentation
  mm, kcsan: Enable barrier instrumentation
  x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock()
  x86/barriers, kcsan: Use generic instrumentation for non-smp barriers
  asm-generic/bitops, kcsan: Add instrumentation for barriers
  locking/atomics, kcsan: Add instrumentation for barriers
  locking/barriers, kcsan: Support generic instrumentation
  locking/barriers, kcsan: Add instrumentation for barriers
  kcsan: selftest: Add test case to check memory barrier instrumentation
  kcsan: Ignore GCC 11+ warnings about TSan runtime support
  kcsan: test: Add test cases for memory barrier instrumentation
  kcsan: test: Match reordered or normal accesses
  ...
parents 1c824bf7 b473a389
...@@ -204,17 +204,17 @@ Ultimately this allows to determine the possible executions of concurrent code,
and if that code is free from data races.

KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``,
``atomic_*``, etc.), and a subset of ordering guarantees implied by memory
barriers. With ``CONFIG_KCSAN_WEAK_MEMORY=y``, KCSAN models load or store
buffering, and can detect missing ``smp_mb()``, ``smp_wmb()``, ``smp_rmb()``,
``smp_store_release()``, and all ``atomic_*`` operations with equivalent
implied barriers.

Note that KCSAN will not report all data races due to missing memory ordering,
specifically where a memory barrier would be required to prohibit a subsequent
memory operation from reordering before the barrier. Developers should
therefore carefully consider the memory ordering requirements that remain
unchecked.

Race Detection Beyond Data Races
--------------------------------

...@@ -268,6 +268,56 @@ marked operations, if all accesses to a variable that is accessed concurrently
are properly marked, KCSAN will never trigger a watchpoint and therefore never
report the accesses.
Modeling Weak Memory
~~~~~~~~~~~~~~~~~~~~

KCSAN's approach to detecting data races due to missing memory barriers is
based on modeling access reordering (with ``CONFIG_KCSAN_WEAK_MEMORY=y``).
Each plain memory access for which a watchpoint is set up is also selected for
simulated reordering within the scope of its function (at most 1 in-flight
access).

Once an access has been selected for reordering, it is checked along every
other access until the end of the function scope. If an appropriate memory
barrier is encountered, the access will no longer be considered for simulated
reordering.

When the result of a memory operation should be ordered by a barrier, KCSAN can
then detect data races where the conflict only occurs as a result of a missing
barrier. Consider the example::

    int x, flag;
    void T1(void)
    {
        x = 1;                  // data race!
        WRITE_ONCE(flag, 1);    // correct: smp_store_release(&flag, 1)
    }
    void T2(void)
    {
        while (!READ_ONCE(flag));   // correct: smp_load_acquire(&flag)
        ... = x;                    // data race!
    }

When weak memory modeling is enabled, KCSAN can consider ``x`` in ``T1`` for
simulated reordering. After the write of ``flag``, ``x`` is again checked for
concurrent accesses: because ``T2`` is able to proceed after the write of
``flag``, a data race is detected. With the correct barriers in place, ``x``
would not be considered for reordering after the proper release of ``flag``,
and no data race would be detected.
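
Following the hints in the comments above, a minimal sketch of the corrected
code (an editorial illustration pairing release with acquire, not an excerpt
from the kernel sources) would be::

    int x, flag;
    void T1(void)
    {
        x = 1;
        smp_store_release(&flag, 1);    // orders the store to x before flag
    }
    void T2(void)
    {
        while (!smp_load_acquire(&flag))
            ;                           // orders the load of x after flag
        ... = x;                        // no longer races with T1
    }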

Deliberate trade-offs in complexity, but also practical limitations, mean that
only a subset of data races due to missing memory barriers can be detected.
With currently available compiler support, the implementation is limited to
modeling the effects of "buffering" (delaying accesses), since the runtime
cannot "prefetch" accesses. Also recall that watchpoints are only set up for
plain accesses, which are the only access type for which KCSAN simulates
reordering. This means reordering of marked accesses is not modeled.

A consequence of the above is that acquire operations do not require barrier
instrumentation (no prefetching). Furthermore, marked accesses introducing
address or control dependencies do not require special handling (the marked
access cannot be reordered, and later dependent accesses cannot be prefetched).
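
This is reflected in the generic barrier definitions changed by this series:
release-side primitives gain a ``kcsan_*()`` call, while acquire-side
primitives keep their plain mapping, e.g.::

    #define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
    #define smp_load_acquire(p)     __smp_load_acquire(p)    /* unchanged: no kcsan_*() needed */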
Key Properties
~~~~~~~~~~~~~~

...@@ -290,8 +340,8 @@ Key Properties

4. **Detects Racy Writes from Devices:** Due to checking data values upon
   setting up watchpoints, racy writes from devices can also be detected.

5. **Memory Ordering:** KCSAN is aware of only a subset of LKMM ordering rules;
   this may result in missed data races (false negatives).

6. **Analysis Accuracy:** For observed executions, due to using a sampling
   strategy, the analysis is *unsound* (false negatives possible), but aims to
...
...@@ -19,9 +19,9 @@
#define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "sfence", \
X86_FEATURE_XMM2) ::: "memory", "cc")
#else
#define __mb() asm volatile("mfence":::"memory")
#define __rmb() asm volatile("lfence":::"memory")
#define __wmb() asm volatile("sfence" ::: "memory")
#endif
/**
...@@ -51,8 +51,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
/* Prevent speculative execution past this barrier. */
#define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)
#define __dma_rmb() barrier()
#define __dma_wmb() barrier()
#define __smp_mb() asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
...
...@@ -53,6 +53,7 @@ static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
static inline void queued_spin_unlock(struct qspinlock *lock)
{
kcsan_release();
pv_queued_spin_unlock(lock);
}
...
...@@ -14,12 +14,38 @@
#ifndef __ASSEMBLY__
#include <linux/compiler.h>
#include <linux/kcsan-checks.h>
#include <asm/rwonce.h>
#ifndef nop
#define nop() asm volatile ("nop")
#endif
/*
* Architectures that want generic instrumentation can define __ prefixed
* variants of all barriers.
*/
#ifdef __mb
#define mb() do { kcsan_mb(); __mb(); } while (0)
#endif
#ifdef __rmb
#define rmb() do { kcsan_rmb(); __rmb(); } while (0)
#endif
#ifdef __wmb
#define wmb() do { kcsan_wmb(); __wmb(); } while (0)
#endif
#ifdef __dma_rmb
#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
#endif
#ifdef __dma_wmb
#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0)
#endif
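/*
 * Editorial sketch of the resulting expansion (illustrative only, not part of
 * this patch): on x86-64, with the __-prefixed definitions above,
 *
 *   mb()  ->  do { kcsan_mb(); __mb(); } while (0)
 *         ->  do { kcsan_mb(); asm volatile("mfence":::"memory"); } while (0)
 *
 * With CONFIG_KCSAN_WEAK_MEMORY=n (or no KCSAN at all), kcsan_mb() is an empty
 * statement, so the barrier compiles exactly as before this series.
 */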
/*
 * Force strict CPU ordering. And yes, this is required on UP too when we're
 * talking to devices.
...@@ -62,15 +88,15 @@
#ifdef CONFIG_SMP
#ifndef smp_mb
#define smp_mb() do { kcsan_mb(); __smp_mb(); } while (0)
#endif
#ifndef smp_rmb
#define smp_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0)
#endif
#ifndef smp_wmb
#define smp_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0)
#endif
#else /* !CONFIG_SMP */
...@@ -123,19 +149,19 @@ do { \
#ifdef CONFIG_SMP
#ifndef smp_store_mb
#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0)
#endif
#ifndef smp_mb__before_atomic
#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0)
#endif
#ifndef smp_mb__after_atomic
#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0)
#endif
#ifndef smp_store_release
#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
#endif
#ifndef smp_load_acquire
...@@ -178,13 +204,13 @@ do { \
#endif /* CONFIG_SMP */
/* Barriers for virtual machine guests when talking to an SMP host */
#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0)
#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0)
#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0)
#define virt_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0)
#define virt_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0)
#define virt_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0)
#define virt_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
#define virt_load_acquire(p) __smp_load_acquire(p)
/**
...
...@@ -67,6 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
 */
static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
{
kcsan_mb();
instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_set_bit(nr, addr);
}
...@@ -80,6 +81,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 */
static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
{
kcsan_mb();
instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_clear_bit(nr, addr);
}
...@@ -93,6 +95,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 */
static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
{
kcsan_mb();
instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long));
return arch_test_and_change_bit(nr, addr);
}
...
...@@ -22,6 +22,7 @@
 */
static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
{
kcsan_release();
instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
arch_clear_bit_unlock(nr, addr);
}
...@@ -37,6 +38,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 */
static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
{
kcsan_release();
instrument_write(addr + BIT_WORD(nr), sizeof(long));
arch___clear_bit_unlock(nr, addr);
}
...@@ -71,6 +73,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
static inline bool
clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
{
kcsan_release();
instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
return arch_clear_bit_unlock_is_negative_byte(nr, addr);
}
...
...@@ -308,6 +308,24 @@
# define __compiletime_warning(msg)
#endif
/*
* Optional: only supported since clang >= 14.0
*
* clang: https://clang.llvm.org/docs/AttributeReference.html#disable-sanitizer-instrumentation
*
* disable_sanitizer_instrumentation is not always similar to
* no_sanitize((<sanitizer-name>)): the latter may still let specific sanitizers
* insert code into functions to prevent false positives. Unlike that,
 * disable_sanitizer_instrumentation prevents all kinds of instrumentation in
 * functions with the attribute.
*/
#if __has_attribute(disable_sanitizer_instrumentation)
# define __disable_sanitizer_instrumentation \
__attribute__((disable_sanitizer_instrumentation))
#else
# define __disable_sanitizer_instrumentation
#endif
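/*
 * Editorial usage sketch (hypothetical function, not part of this patch):
 * with the attribute available, no sanitizer may insert any code into the
 * function, whereas __no_sanitize_thread alone could still emit e.g.
 * __tsan_func_entry() calls:
 *
 *   static __disable_sanitizer_instrumentation void noinstr_example(void)
 *   {
 *           ...
 *   }
 */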
/*
 * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-weak-function-attribute
 * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-weak-variable-attribute
...
...@@ -198,9 +198,20 @@ struct ftrace_likely_data {
# define __no_kasan_or_inline __always_inline
#endif
#ifdef __SANITIZE_THREAD__
/*
* Clang still emits instrumentation for __tsan_func_{entry,exit}() and builtin
* atomics even with __no_sanitize_thread (to avoid false positives in userspace
* ThreadSanitizer). The kernel's requirements are stricter and we really do not
* want any instrumentation with __no_kcsan.
*
* Therefore we add __disable_sanitizer_instrumentation where available to
* disable all instrumentation. See Kconfig.kcsan where this is mandatory.
*/
# define __no_kcsan __no_sanitize_thread __disable_sanitizer_instrumentation
# define __no_sanitize_or_inline __no_kcsan notrace __maybe_unused
#else
# define __no_kcsan
#endif
#ifndef __no_sanitize_or_inline
...
...@@ -36,6 +36,36 @@
 */
void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
/*
* See definition of __tsan_atomic_signal_fence() in kernel/kcsan/core.c.
* Note: The mappings are arbitrary, and do not reflect any real mappings of C11
* memory orders to the LKMM memory orders and vice-versa!
*/
#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_mb __ATOMIC_SEQ_CST
#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_wmb __ATOMIC_ACQ_REL
#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_rmb __ATOMIC_ACQUIRE
#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_release __ATOMIC_RELEASE
/**
* __kcsan_mb - full memory barrier instrumentation
*/
void __kcsan_mb(void);
/**
* __kcsan_wmb - write memory barrier instrumentation
*/
void __kcsan_wmb(void);
/**
* __kcsan_rmb - read memory barrier instrumentation
*/
void __kcsan_rmb(void);
/**
* __kcsan_release - release barrier instrumentation
*/
void __kcsan_release(void);
/**
 * kcsan_disable_current - disable KCSAN for the current context
 *
...@@ -99,7 +129,15 @@ void kcsan_set_access_mask(unsigned long mask);
/* Scoped access information. */
struct kcsan_scoped_access {
union {
struct list_head list; /* scoped_accesses list */
/*
* Not an entry in scoped_accesses list; stack depth from where
* the access was initialized.
*/
int stack_depth;
};
/* Access information. */
const volatile void *ptr;
size_t size;
...@@ -151,6 +189,10 @@ void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
static inline void __kcsan_mb(void) { }
static inline void __kcsan_wmb(void) { }
static inline void __kcsan_rmb(void) { }
static inline void __kcsan_release(void) { }
static inline void kcsan_disable_current(void) { }
static inline void kcsan_enable_current(void) { }
static inline void kcsan_enable_current_nowarn(void) { }
...@@ -183,12 +225,47 @@ static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
 */
#define __kcsan_disable_current kcsan_disable_current
#define __kcsan_enable_current kcsan_enable_current_nowarn
#else /* __SANITIZE_THREAD__ */
static inline void kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
static inline void __kcsan_enable_current(void) { }
static inline void __kcsan_disable_current(void) { }
#endif /* __SANITIZE_THREAD__ */
#if defined(CONFIG_KCSAN_WEAK_MEMORY) && defined(__SANITIZE_THREAD__)
/*
* Normal barrier instrumentation is not done via explicit calls, but by mapping
* to a repurposed __atomic_signal_fence(), which normally does not generate any
* real instructions, but is still intercepted by fsanitize=thread. This means,
* like any other compile-time instrumentation, barrier instrumentation can be
* disabled with the __no_kcsan function attribute.
*
* Also see definition of __tsan_atomic_signal_fence() in kernel/kcsan/core.c.
*
* These are all macros, like <asm/barrier.h>, since some architectures use them
* in non-static inline functions.
*/
#define __KCSAN_BARRIER_TO_SIGNAL_FENCE(name) \
do { \
barrier(); \
__atomic_signal_fence(__KCSAN_BARRIER_TO_SIGNAL_FENCE_##name); \
barrier(); \
} while (0)
#define kcsan_mb() __KCSAN_BARRIER_TO_SIGNAL_FENCE(mb)
#define kcsan_wmb() __KCSAN_BARRIER_TO_SIGNAL_FENCE(wmb)
#define kcsan_rmb() __KCSAN_BARRIER_TO_SIGNAL_FENCE(rmb)
#define kcsan_release() __KCSAN_BARRIER_TO_SIGNAL_FENCE(release)
#elif defined(CONFIG_KCSAN_WEAK_MEMORY) && defined(__KCSAN_INSTRUMENT_BARRIERS__)
#define kcsan_mb __kcsan_mb
#define kcsan_wmb __kcsan_wmb
#define kcsan_rmb __kcsan_rmb
#define kcsan_release __kcsan_release
#else /* CONFIG_KCSAN_WEAK_MEMORY && ... */
#define kcsan_mb() do { } while (0)
#define kcsan_wmb() do { } while (0)
#define kcsan_rmb() do { } while (0)
#define kcsan_release() do { } while (0)
#endif /* CONFIG_KCSAN_WEAK_MEMORY && ... */
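/*
 * Editorial sketch (illustrative only; the real implementation is in the
 * collapsed kernel/kcsan/core.c diff): the KCSAN runtime intercepts the
 * repurposed signal fence and maps the C11 "memory order" back to the
 * corresponding barrier event, roughly:
 *
 *	void __tsan_atomic_signal_fence(int memorder)
 *	{
 *		switch (memorder) {
 *		case __KCSAN_BARRIER_TO_SIGNAL_FENCE_mb:      __kcsan_mb();      break;
 *		case __KCSAN_BARRIER_TO_SIGNAL_FENCE_wmb:     __kcsan_wmb();     break;
 *		case __KCSAN_BARRIER_TO_SIGNAL_FENCE_rmb:     __kcsan_rmb();     break;
 *		case __KCSAN_BARRIER_TO_SIGNAL_FENCE_release: __kcsan_release(); break;
 *		default: break;
 *		}
 *	}
 */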
/**
 * __kcsan_check_read - check regular read access for races
...
...@@ -21,6 +21,7 @@
 */
struct kcsan_ctx {
int disable_count; /* disable counter */
int disable_scoped; /* disable scoped access counter */
int atomic_next; /* number of following atomic ops */
/*
...@@ -48,8 +49,16 @@ struct kcsan_ctx {
 */
unsigned long access_mask;
/* List of scoped accesses; likely to be empty. */
struct list_head scoped_accesses;
#ifdef CONFIG_KCSAN_WEAK_MEMORY
/*
* Scoped access for modeling access reordering to detect missing memory
* barriers; only keep 1 to keep fast-path complexity manageable.
*/
struct kcsan_scoped_access reorder_access;
#endif
};
/**
...
...@@ -1339,6 +1339,9 @@ struct task_struct {
#ifdef CONFIG_TRACE_IRQFLAGS
struct irqtrace_events kcsan_save_irqtrace;
#endif
#ifdef CONFIG_KCSAN_WEAK_MEMORY
int kcsan_stack_depth;
#endif
#endif
#if IS_ENABLED(CONFIG_KUNIT)
...
...@@ -171,7 +171,7 @@ do { \
 * Architectures that can implement ACQUIRE better need to take care.
 */
#ifndef smp_mb__after_spinlock
#define smp_mb__after_spinlock() kcsan_mb()
#endif
#ifdef CONFIG_DEBUG_SPINLOCK
...
...@@ -182,11 +182,6 @@ struct task_struct init_task
#endif
#ifdef CONFIG_KCSAN
.kcsan_ctx = {
.scoped_accesses = {LIST_POISON1, NULL},
},
#endif
...
...@@ -12,6 +12,8 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
-fno-stack-protector -DDISABLE_BRANCH_PROFILING
obj-y := core.o debugfs.o report.o
KCSAN_INSTRUMENT_BARRIERS_selftest.o := y
obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
CFLAGS_kcsan_test.o := $(CFLAGS_KCSAN) -g -fno-omit-frame-pointer
...
...@@ -215,9 +215,9 @@ static const char *get_access_type(int type)
if (type & KCSAN_ACCESS_ASSERT) {
if (type & KCSAN_ACCESS_SCOPED) {
if (type & KCSAN_ACCESS_WRITE)
return "assert no accesses (reordered)";
else
return "assert no writes (reordered)";
} else {
if (type & KCSAN_ACCESS_WRITE)
return "assert no accesses";
...@@ -240,17 +240,17 @@ static const char *get_access_type(int type)
case KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
return "read-write (marked)";
case KCSAN_ACCESS_SCOPED:
return "read (reordered)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC:
return "read (marked, reordered)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE:
return "write (reordered)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
return "write (marked, reordered)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE:
return "read-write (reordered)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
return "read-write (marked, reordered)";
default:
BUG();
}
...@@ -308,10 +308,12 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
/*
 * Skips to the first entry that matches the function of @ip, and then replaces
 * that entry with @ip, returning the entries to skip with @replaced containing
 * the replaced entry.
 */
static int
replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip,
unsigned long *replaced)
{
unsigned long symbolsize, offset;
unsigned long target_func;
...@@ -330,6 +332,7 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon
func -= offset;
if (func == target_func) {
*replaced = stack_entries[skip];
stack_entries[skip] = ip;
return skip;
}
...@@ -342,9 +345,10 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon
}
static int
sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip,
unsigned long *replaced)
{
return ip ? replace_stack_entry(stack_entries, num_entries, ip, replaced) :
get_stack_skipnr(stack_entries, num_entries);
}
...@@ -360,6 +364,14 @@ static int sym_strcmp(void *addr1, void *addr2)
return strncmp(buf1, buf2, sizeof(buf1));
}
static void
print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to)
{
stack_trace_print(stack_entries, num_entries, 0);
if (reordered_to)
pr_err(" |\n +-> reordered to: %pS\n", (void *)reordered_to);
}
static void print_verbose_info(struct task_struct *task)
{
if (!task)
...@@ -378,10 +390,12 @@ static void print_report(enum kcsan_value_change value_change,
struct other_info *other_info,
u64 old, u64 new, u64 mask)
{
unsigned long reordered_to = 0;
unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
int num_stack_entries = stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1);
int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip, &reordered_to);
unsigned long this_frame = stack_entries[skipnr];
unsigned long other_reordered_to = 0;
unsigned long other_frame = 0;
int other_skipnr = 0; /* silence uninit warnings */
...@@ -394,7 +408,7 @@ static void print_report(enum kcsan_value_change value_change,
if (other_info) {
other_skipnr = sanitize_stack_entries(other_info->stack_entries,
other_info->num_stack_entries,
other_info->ai.ip, &other_reordered_to);
other_frame = other_info->stack_entries[other_skipnr];
/* @value_change is only known for the other thread */
...@@ -434,10 +448,9 @@ static void print_report(enum kcsan_value_change value_change,
other_info->ai.cpu_id);
/* Print the other thread's stack trace. */
print_stack_trace(other_info->stack_entries + other_skipnr,
other_info->num_stack_entries - other_skipnr,
other_reordered_to);
if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
print_verbose_info(other_info->task);
...@@ -451,9 +464,7 @@ static void print_report(enum kcsan_value_change value_change,
get_thread_desc(ai->task_pid), ai->cpu_id);
}
/* Print stack trace of this thread. */
print_stack_trace(stack_entries + skipnr, num_stack_entries - skipnr, reordered_to);
if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
print_verbose_info(current);
...
...@@ -7,10 +7,15 @@
#define pr_fmt(fmt) "kcsan: " fmt
#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/init.h>
#include <linux/kcsan-checks.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/random.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include "encoding.h"
...@@ -103,6 +108,143 @@ static bool __init test_matching_access(void)
return true;
}
/*
* Correct memory barrier instrumentation is critical to avoiding false
* positives: simple test to check at boot certain barriers are always properly
* instrumented. See kcsan_test for a more complete test.
*/
static DEFINE_SPINLOCK(test_spinlock);
static bool __init test_barrier(void)
{
#ifdef CONFIG_KCSAN_WEAK_MEMORY
struct kcsan_scoped_access *reorder_access = &current->kcsan_ctx.reorder_access;
#else
struct kcsan_scoped_access *reorder_access = NULL;
#endif
bool ret = true;
arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED;
atomic_t dummy;
long test_var;
if (!reorder_access || !IS_ENABLED(CONFIG_SMP))
return true;
#define __KCSAN_CHECK_BARRIER(access_type, barrier, name) \
do { \
reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \
reorder_access->size = 1; \
barrier; \
if (reorder_access->size != 0) { \
pr_err("improperly instrumented type=(" #access_type "): " name "\n"); \
ret = false; \
} \
} while (0)
#define KCSAN_CHECK_READ_BARRIER(b) __KCSAN_CHECK_BARRIER(0, b, #b)
#define KCSAN_CHECK_WRITE_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE, b, #b)
#define KCSAN_CHECK_RW_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND, b, #b)
kcsan_nestable_atomic_begin(); /* No watchpoints in called functions. */
KCSAN_CHECK_READ_BARRIER(mb());
KCSAN_CHECK_READ_BARRIER(rmb());
KCSAN_CHECK_READ_BARRIER(smp_mb());
KCSAN_CHECK_READ_BARRIER(smp_rmb());
KCSAN_CHECK_READ_BARRIER(dma_rmb());
KCSAN_CHECK_READ_BARRIER(smp_mb__before_atomic());
KCSAN_CHECK_READ_BARRIER(smp_mb__after_atomic());
KCSAN_CHECK_READ_BARRIER(smp_mb__after_spinlock());
KCSAN_CHECK_READ_BARRIER(smp_store_mb(test_var, 0));
KCSAN_CHECK_READ_BARRIER(smp_store_release(&test_var, 0));
KCSAN_CHECK_READ_BARRIER(xchg(&test_var, 0));
KCSAN_CHECK_READ_BARRIER(xchg_release(&test_var, 0));
KCSAN_CHECK_READ_BARRIER(cmpxchg(&test_var, 0, 0));
KCSAN_CHECK_READ_BARRIER(cmpxchg_release(&test_var, 0, 0));
KCSAN_CHECK_READ_BARRIER(atomic_set_release(&dummy, 0));
KCSAN_CHECK_READ_BARRIER(atomic_add_return(1, &dummy));
KCSAN_CHECK_READ_BARRIER(atomic_add_return_release(1, &dummy));
KCSAN_CHECK_READ_BARRIER(atomic_fetch_add(1, &dummy));
KCSAN_CHECK_READ_BARRIER(atomic_fetch_add_release(1, &dummy));
KCSAN_CHECK_READ_BARRIER(test_and_set_bit(0, &test_var));
KCSAN_CHECK_READ_BARRIER(test_and_clear_bit(0, &test_var));
KCSAN_CHECK_READ_BARRIER(test_and_change_bit(0, &test_var));
KCSAN_CHECK_READ_BARRIER(clear_bit_unlock(0, &test_var));
KCSAN_CHECK_READ_BARRIER(__clear_bit_unlock(0, &test_var));
arch_spin_lock(&arch_spinlock);
KCSAN_CHECK_READ_BARRIER(arch_spin_unlock(&arch_spinlock));
spin_lock(&test_spinlock);
KCSAN_CHECK_READ_BARRIER(spin_unlock(&test_spinlock));
KCSAN_CHECK_WRITE_BARRIER(mb());
KCSAN_CHECK_WRITE_BARRIER(wmb());
KCSAN_CHECK_WRITE_BARRIER(smp_mb());
KCSAN_CHECK_WRITE_BARRIER(smp_wmb());
KCSAN_CHECK_WRITE_BARRIER(dma_wmb());
KCSAN_CHECK_WRITE_BARRIER(smp_mb__before_atomic());
KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_atomic());
KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_spinlock());
KCSAN_CHECK_WRITE_BARRIER(smp_store_mb(test_var, 0));
KCSAN_CHECK_WRITE_BARRIER(smp_store_release(&test_var, 0));
KCSAN_CHECK_WRITE_BARRIER(xchg(&test_var, 0));
KCSAN_CHECK_WRITE_BARRIER(xchg_release(&test_var, 0));
KCSAN_CHECK_WRITE_BARRIER(cmpxchg(&test_var, 0, 0));
KCSAN_CHECK_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0));
KCSAN_CHECK_WRITE_BARRIER(atomic_set_release(&dummy, 0));
KCSAN_CHECK_WRITE_BARRIER(atomic_add_return(1, &dummy));
KCSAN_CHECK_WRITE_BARRIER(atomic_add_return_release(1, &dummy));
KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add(1, &dummy));
KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy));
KCSAN_CHECK_WRITE_BARRIER(test_and_set_bit(0, &test_var));
KCSAN_CHECK_WRITE_BARRIER(test_and_clear_bit(0, &test_var));
KCSAN_CHECK_WRITE_BARRIER(test_and_change_bit(0, &test_var));
KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock(0, &test_var));
KCSAN_CHECK_WRITE_BARRIER(__clear_bit_unlock(0, &test_var));
arch_spin_lock(&arch_spinlock);
KCSAN_CHECK_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock));
spin_lock(&test_spinlock);
KCSAN_CHECK_WRITE_BARRIER(spin_unlock(&test_spinlock));
KCSAN_CHECK_RW_BARRIER(mb());
KCSAN_CHECK_RW_BARRIER(wmb());
KCSAN_CHECK_RW_BARRIER(rmb());
KCSAN_CHECK_RW_BARRIER(smp_mb());
KCSAN_CHECK_RW_BARRIER(smp_wmb());
KCSAN_CHECK_RW_BARRIER(smp_rmb());
KCSAN_CHECK_RW_BARRIER(dma_wmb());
KCSAN_CHECK_RW_BARRIER(dma_rmb());
KCSAN_CHECK_RW_BARRIER(smp_mb__before_atomic());
KCSAN_CHECK_RW_BARRIER(smp_mb__after_atomic());
KCSAN_CHECK_RW_BARRIER(smp_mb__after_spinlock());
KCSAN_CHECK_RW_BARRIER(smp_store_mb(test_var, 0));
KCSAN_CHECK_RW_BARRIER(smp_store_release(&test_var, 0));
KCSAN_CHECK_RW_BARRIER(xchg(&test_var, 0));
KCSAN_CHECK_RW_BARRIER(xchg_release(&test_var, 0));
KCSAN_CHECK_RW_BARRIER(cmpxchg(&test_var, 0, 0));
KCSAN_CHECK_RW_BARRIER(cmpxchg_release(&test_var, 0, 0));
KCSAN_CHECK_RW_BARRIER(atomic_set_release(&dummy, 0));
KCSAN_CHECK_RW_BARRIER(atomic_add_return(1, &dummy));
KCSAN_CHECK_RW_BARRIER(atomic_add_return_release(1, &dummy));
KCSAN_CHECK_RW_BARRIER(atomic_fetch_add(1, &dummy));
KCSAN_CHECK_RW_BARRIER(atomic_fetch_add_release(1, &dummy));
KCSAN_CHECK_RW_BARRIER(test_and_set_bit(0, &test_var));
KCSAN_CHECK_RW_BARRIER(test_and_clear_bit(0, &test_var));
KCSAN_CHECK_RW_BARRIER(test_and_change_bit(0, &test_var));
KCSAN_CHECK_RW_BARRIER(clear_bit_unlock(0, &test_var));
KCSAN_CHECK_RW_BARRIER(__clear_bit_unlock(0, &test_var));
arch_spin_lock(&arch_spinlock);
KCSAN_CHECK_RW_BARRIER(arch_spin_unlock(&arch_spinlock));
spin_lock(&test_spinlock);
KCSAN_CHECK_RW_BARRIER(spin_unlock(&test_spinlock));
#ifdef clear_bit_unlock_is_negative_byte
KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
#endif
kcsan_nestable_atomic_end();
return ret;
}
static int __init kcsan_selftest(void)
{
int passed = 0;
...@@ -120,6 +262,7 @@ static int __init kcsan_selftest(void)
RUN_TEST(test_requires);
RUN_TEST(test_encode_decode);
RUN_TEST(test_matching_access);
RUN_TEST(test_barrier);
pr_info("selftest: %d/%d tests passed\n", passed, total);
if (passed != total)
...
...@@ -11,11 +11,10 @@ ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
# that is not a function of syscall inputs. E.g. involuntary context switches.
KCOV_INSTRUMENT := n
# Disable KCSAN to avoid excessive noise and performance degradation. To avoid
# false positives, ensure barriers implied by sched functions are instrumented.
KCSAN_SANITIZE := n
KCSAN_INSTRUMENT_BARRIERS := y
ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
# According to Alan Modra <alan@linuxcare.com.au>, the -fno-omit-frame-pointer is
...
...@@ -191,6 +191,26 @@ config KCSAN_STRICT
closely aligns with the rules defined by the Linux-kernel memory
consistency model (LKMM).
config KCSAN_WEAK_MEMORY
bool "Enable weak memory modeling to detect missing memory barriers"
default y
depends on KCSAN_STRICT
# We can either let objtool nop __tsan_func_{entry,exit}() and builtin
# atomics instrumentation in .noinstr.text, or use a compiler that can
# implement __no_kcsan to really remove all instrumentation.
depends on STACK_VALIDATION || CC_IS_GCC || CLANG_VERSION >= 140000
help
Enable support for modeling a subset of weak memory, which allows
detecting a subset of data races due to missing memory barriers.
Depends on KCSAN_STRICT, because the options strengthening certain
plain accesses by default (depending on !KCSAN_STRICT) reduce the
ability to detect any data races involving reordered accesses, in
particular reordered writes.
particular reordered writes.
Weak memory modeling relies on additional instrumentation and may
affect performance.
config KCSAN_REPORT_VALUE_CHANGE_ONLY
bool "Only report races where watcher observed a data value change"
default y
...
...@@ -15,6 +15,8 @@ KCSAN_SANITIZE_slab_common.o := n
KCSAN_SANITIZE_slab.o := n
KCSAN_SANITIZE_slub.o := n
KCSAN_SANITIZE_page_alloc.o := n
# But enable explicit instrumentation for memory barriers.
KCSAN_INSTRUMENT_BARRIERS := y
# These files are disabled because they produce non-interesting and/or
# flaky coverage that is not a function of syscall inputs. E.g. slab is out of
...
...@@ -9,7 +9,18 @@ endif
# Keep most options here optional, to allow enabling more compilers if absence
# of some options does not break KCSAN nor causes false positive reports.
kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \
$(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \
$(call cc-param,tsan-distinguish-volatile=1)
ifdef CONFIG_CC_IS_GCC
# GCC started warning about operations unsupported by the TSan runtime. But
# KCSAN != TSan, so just ignore these warnings.
kcsan-cflags += -Wno-tsan
endif
ifndef CONFIG_KCSAN_WEAK_MEMORY
kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0))
endif
export CFLAGS_KCSAN := $(kcsan-cflags)
...@@ -182,6 +182,11 @@ ifeq ($(CONFIG_KCSAN),y)
_c_flags += $(if $(patsubst n%,, \
$(KCSAN_SANITIZE_$(basetarget).o)$(KCSAN_SANITIZE)y), \
$(CFLAGS_KCSAN))
# Some uninstrumented files provide implied barriers required to avoid false
# positives: set KCSAN_INSTRUMENT_BARRIERS for barrier instrumentation only.
_c_flags += $(if $(patsubst n%,, \
$(KCSAN_INSTRUMENT_BARRIERS_$(basetarget).o)$(KCSAN_INSTRUMENT_BARRIERS)n), \
-D__KCSAN_INSTRUMENT_BARRIERS__)
endif
# $(srctree)/$(src) for including checkin headers from generated source files
...
...@@ -34,6 +34,14 @@ gen_param_check()
gen_params_checks()
{
local meta="$1"; shift
local order="$1"; shift
if [ "${order}" = "_release" ]; then
printf "\tkcsan_release();\n"
elif [ -z "${order}" ] && ! meta_in "$meta" "slv"; then
# RMW with return value is fully ordered
printf "\tkcsan_mb();\n"
fi
while [ "$#" -gt 0 ]; do while [ "$#" -gt 0 ]; do
gen_param_check "$meta" "$1" gen_param_check "$meta" "$1"
...@@ -56,7 +64,7 @@ gen_proto_order_variant() ...@@ -56,7 +64,7 @@ gen_proto_order_variant()
local ret="$(gen_ret_type "${meta}" "${int}")" local ret="$(gen_ret_type "${meta}" "${int}")"
local params="$(gen_params "${int}" "${atomic}" "$@")" local params="$(gen_params "${int}" "${atomic}" "$@")"
local checks="$(gen_params_checks "${meta}" "$@")" local checks="$(gen_params_checks "${meta}" "${order}" "$@")"
local args="$(gen_args "$@")" local args="$(gen_args "$@")"
local retstmt="$(gen_ret_stmt "${meta}")" local retstmt="$(gen_ret_stmt "${meta}")"
...@@ -75,29 +83,44 @@ EOF ...@@ -75,29 +83,44 @@ EOF
gen_xchg() gen_xchg()
{ {
local xchg="$1"; shift local xchg="$1"; shift
local order="$1"; shift
local mult="$1"; shift local mult="$1"; shift
kcsan_barrier=""
if [ "${xchg%_local}" = "${xchg}" ]; then
case "$order" in
_release) kcsan_barrier="kcsan_release()" ;;
"") kcsan_barrier="kcsan_mb()" ;;
esac
fi
if [ "${xchg%${xchg#try_cmpxchg}}" = "try_cmpxchg" ] ; then if [ "${xchg%${xchg#try_cmpxchg}}" = "try_cmpxchg" ] ; then
cat <<EOF cat <<EOF
#define ${xchg}(ptr, oldp, ...) \\ #define ${xchg}${order}(ptr, oldp, ...) \\
({ \\ ({ \\
typeof(ptr) __ai_ptr = (ptr); \\ typeof(ptr) __ai_ptr = (ptr); \\
typeof(oldp) __ai_oldp = (oldp); \\ typeof(oldp) __ai_oldp = (oldp); \\
EOF
[ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
cat <<EOF
instrument_atomic_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
instrument_atomic_write(__ai_oldp, ${mult}sizeof(*__ai_oldp)); \\
arch_${xchg}${order}(__ai_ptr, __ai_oldp, __VA_ARGS__); \\
})
EOF
else
cat <<EOF
#define ${xchg}${order}(ptr, ...) \\
({ \\
typeof(ptr) __ai_ptr = (ptr); \\
EOF
[ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
cat <<EOF
instrument_atomic_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
arch_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
})
EOF
...@@ -145,21 +168,21 @@ done
for xchg in "xchg" "cmpxchg" "cmpxchg64" "try_cmpxchg"; do for xchg in "xchg" "cmpxchg" "cmpxchg64" "try_cmpxchg"; do
for order in "" "_acquire" "_release" "_relaxed"; do for order in "" "_acquire" "_release" "_relaxed"; do
gen_xchg "${xchg}${order}" "" gen_xchg "${xchg}" "${order}" ""
printf "\n" printf "\n"
done done
done done
for xchg in "cmpxchg_local" "cmpxchg64_local" "sync_cmpxchg"; do for xchg in "cmpxchg_local" "cmpxchg64_local" "sync_cmpxchg"; do
gen_xchg "${xchg}" "" gen_xchg "${xchg}" "" ""
printf "\n" printf "\n"
done done
gen_xchg "cmpxchg_double" "2 * " gen_xchg "cmpxchg_double" "" "2 * "
printf "\n\n" printf "\n\n"
gen_xchg "cmpxchg_double_local" "2 * " gen_xchg "cmpxchg_double_local" "" "2 * "
cat <<EOF cat <<EOF
...
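The net effect of the generator changes above: every instrumented xchg()/cmpxchg()
wrapper now emits a KCSAN barrier matching its implied ordering (kcsan_mb() for
fully ordered variants, kcsan_release() for _release variants, and nothing for
_acquire, _relaxed, and the _local variants). A hedged sketch of the generated
wrapper for the fully ordered xchg(), simplified from what the script emits
rather than copied from the generated header:

/* Sketch of generated output (simplified): */
#define xchg(ptr, ...) \
({ \
	typeof(ptr) __ai_ptr = (ptr); \
	kcsan_mb();	/* fully ordered RMW implies a full barrier */ \
	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
	arch_xchg(__ai_ptr, __VA_ARGS__); \
})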
...@@ -849,6 +849,10 @@ static const char *uaccess_safe_builtin[] = {
"__asan_report_store16_noabort",
/* KCSAN */
"__kcsan_check_access",
"__kcsan_mb",
"__kcsan_wmb",
"__kcsan_rmb",
"__kcsan_release",
"kcsan_found_watchpoint",
"kcsan_setup_watchpoint",
"kcsan_check_scoped_accesses",
...@@ -1068,11 +1072,11 @@ static void annotate_call_site(struct objtool_file *file,
}
/*
 * Many compilers cannot disable KCOV or sanitizer calls with a function
 * attribute so they need a little help, NOP out any such calls from
 * noinstr text.
 */
if (insn->sec->noinstr && sym->profiling_func) {
if (reloc) {
reloc->type = R_NONE;
elf_write_reloc(file->elf, reloc);
...@@ -1987,6 +1991,31 @@ static int read_intra_function_calls(struct objtool_file *file)
return 0;
}
/*
* Return true if name matches an instrumentation function, where calls to that
* function from noinstr code can safely be removed, but compilers won't do so.
*/
static bool is_profiling_func(const char *name)
{
/*
* Many compilers cannot disable KCOV with a function attribute.
*/
if (!strncmp(name, "__sanitizer_cov_", 16))
return true;
/*
* Some compilers currently do not remove __tsan_func_entry/exit nor
* __tsan_atomic_signal_fence (used for barrier instrumentation) with
* the __no_sanitize_thread attribute, remove them. Once the kernel's
* minimum Clang version is 14.0, this can be removed.
*/
if (!strncmp(name, "__tsan_func_", 12) ||
!strcmp(name, "__tsan_atomic_signal_fence"))
return true;
return false;
}
static int classify_symbols(struct objtool_file *file)
{
struct section *sec;
...@@ -2007,8 +2036,8 @@ static int classify_symbols(struct objtool_file *file)
if (!strcmp(func->name, "__fentry__"))
func->fentry = true;
if (is_profiling_func(func->name))
func->profiling_func = true;
}
}
...
...@@ -58,7 +58,7 @@ struct symbol {
u8 static_call_tramp : 1;
u8 retpoline_thunk : 1;
u8 fentry : 1;
u8 profiling_func : 1;
struct list_head pv_target;
};
...