1. 10 Dec, 2021 29 commits
    • kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it · b473a389
      Marco Elver authored
      Some architectures do not define clear_bit_unlock_is_negative_byte().
      Only test it when it is actually defined (similar to other usage, such
      as in lib/test_kasan.c).
      
      Link: https://lkml.kernel.org/r/202112050757.x67rHnFU-lkp@intel.com
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Avoid nested contexts reading inconsistent reorder_access · e3d2b72b
      Marco Elver authored
      Nested contexts, such as nested interrupts or scheduler code, share the
      same kcsan_ctx. When such a nested context reads an inconsistent
      reorder_access due to an interrupt during set_reorder_access(), we can
      observe the following warning:
      
       | ------------[ cut here ]------------
       | Cannot find frame for torture_random kernel/torture.c:456 in stack trace
       | WARNING: CPU: 13 PID: 147 at kernel/kcsan/report.c:343 replace_stack_entry kernel/kcsan/report.c:343
       | ...
       | Call Trace:
       |  <TASK>
       |  sanitize_stack_entries kernel/kcsan/report.c:351 [inline]
       |  print_report kernel/kcsan/report.c:409
       |  kcsan_report_known_origin kernel/kcsan/report.c:693
       |  kcsan_setup_watchpoint kernel/kcsan/core.c:658
       |  rcutorture_one_extend kernel/rcu/rcutorture.c:1475
       |  rcutorture_loop_extend kernel/rcu/rcutorture.c:1558 [inline]
       |  ...
       |  </TASK>
       | ---[ end trace ee5299cb933115f5 ]---
       | ==================================================================
       | BUG: KCSAN: data-race in _raw_spin_lock_irqsave / rcutorture_one_extend
       |
       | write (reordered) to 0xffffffff8c93b300 of 8 bytes by task 154 on cpu 12:
       |  queued_spin_lock                include/asm-generic/qspinlock.h:80 [inline]
       |  do_raw_spin_lock                include/linux/spinlock.h:185 [inline]
       |  __raw_spin_lock_irqsave         include/linux/spinlock_api_smp.h:111 [inline]
       |  _raw_spin_lock_irqsave          kernel/locking/spinlock.c:162
       |  try_to_wake_up                  kernel/sched/core.c:4003
       |  sysvec_apic_timer_interrupt     arch/x86/kernel/apic/apic.c:1097
       |  asm_sysvec_apic_timer_interrupt arch/x86/include/asm/idtentry.h:638
       |  set_reorder_access              kernel/kcsan/core.c:416 [inline]    <-- inconsistent reorder_access
       |  kcsan_setup_watchpoint          kernel/kcsan/core.c:693
       |  rcutorture_one_extend           kernel/rcu/rcutorture.c:1475
       |  rcutorture_loop_extend          kernel/rcu/rcutorture.c:1558 [inline]
       |  rcu_torture_one_read            kernel/rcu/rcutorture.c:1600
       |  rcu_torture_reader              kernel/rcu/rcutorture.c:1692
       |  kthread                         kernel/kthread.c:327
       |  ret_from_fork                   arch/x86/entry/entry_64.S:295
       |
       | read to 0xffffffff8c93b300 of 8 bytes by task 147 on cpu 13:
       |  rcutorture_one_extend           kernel/rcu/rcutorture.c:1475
       |  rcutorture_loop_extend          kernel/rcu/rcutorture.c:1558 [inline]
       |  ...
      
      The warning is telling us that there was a data race which KCSAN wants
      to report, but the function where the original access (that is now
      reordered) happened cannot be found in the stack trace, which prevents
      KCSAN from generating the right stack trace. The stack trace of "write
      (reordered)" now only shows where the access was reordered to, but
      should instead show the stack trace of the original write, with a final
      line saying "reordered to".
      
      At the point where set_reorder_access() is interrupted, it just set
      reorder_access->ptr and size, at which point size is non-zero. This is
      sufficient (if ctx->disable_scoped is zero) for further accesses from
      nested contexts to perform checking of this reorder_access.
      
      That then happened in _raw_spin_lock_irqsave(), which is called by
      scheduler code. However, since reorder_access->ip is still stale (ptr
      and size belong to a different ip not yet set) this finally leads to
      replace_stack_entry() not finding the frame in reorder_access->ip and
      generating the above warning.
      
      Fix it by ensuring that a nested context cannot access reorder_access
      while we update it in set_reorder_access(): set ctx->disable_scoped for
      the duration that reorder_access is updated, which effectively locks
      reorder_access and prevents concurrent use by nested contexts. Note,
      set_reorder_access() can do the update only if disable_scoped is zero
      on entry; it therefore sets disable_scoped to non-zero for the
      duration of the update, restoring it afterwards.
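The locking scheme described above can be sketched as a small userspace model (illustrative only, not the kernel's implementation; the struct layout and the return convention are our own):

```c
/*
 * Illustrative model of the fix: treat ctx->disable_scoped as a lock,
 * so a nested context (e.g. an interrupt) never observes a
 * half-updated reorder_access. Names mirror the commit message, but
 * the logic is a simplified sketch.
 */
#include <stddef.h>

struct reorder_access { const void *ptr; size_t size; unsigned long ip; };
struct kcsan_ctx { int disable_scoped; struct reorder_access reorder_access; };

/* Returns 1 if the update was performed, 0 if it had to be skipped. */
static int set_reorder_access(struct kcsan_ctx *ctx, const void *ptr,
                              size_t size, unsigned long ip)
{
    if (ctx->disable_scoped)
        return 0;                    /* reorder_access in use: skip */
    ctx->disable_scoped++;           /* "lock": nested contexts now skip */
    ctx->reorder_access.ptr  = ptr;
    ctx->reorder_access.size = size; /* an interrupt here is now harmless */
    ctx->reorder_access.ip   = ip;
    ctx->disable_scoped--;           /* "unlock" */
    return 1;
}
```

With this, a nested context that runs between the ptr/size and ip updates sees disable_scoped != 0 and never consumes the inconsistent state.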
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Turn barrier instrumentation into macros · 80d7476f
      Marco Elver authored
      Some architectures use barriers in 'extern inline' functions, from which
      we should not refer to static inline functions.
      
      For example, building Alpha with gcc and W=1 shows:
      
      ./include/asm-generic/barrier.h:70:30: warning: 'kcsan_rmb' is static but used in inline function 'pmd_offset' which is not static
         70 | #define smp_rmb()       do { kcsan_rmb(); __smp_rmb(); } while (0)
            |                              ^~~~~~~~~
      ./arch/alpha/include/asm/pgtable.h:293:9: note: in expansion of macro 'smp_rmb'
        293 |         smp_rmb(); /* see above */
            |         ^~~~~~~
      
      Which seems to warn about 6.7.4#3 of the C standard:
        "An inline definition of a function with external linkage shall not
         contain a definition of a modifiable object with static or thread
         storage duration, and shall not contain a reference to an identifier
         with internal linkage."
      
      Fix it by turning barrier instrumentation into macros, which matches
      definitions in <asm/barrier.h>.
      
      Perhaps we can revert this change in future, when there are no more
      'extern inline' users left.
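A minimal sketch of why the macro form avoids the 6.7.4#3 restriction (all names here are ours, not the kernel's): a macro body is expanded at the use site, so the static helper is referenced from the caller rather than from within an 'extern inline' definition.

```c
/*
 * Before: a static inline wrapper referenced from 'extern inline'
 * functions trips the warning. After: a macro, expanded at each use
 * site, so no inline definition with external linkage references an
 * identifier with internal linkage.
 */
static int calls;                                /* instrumentation side effect */
static inline void instr_rmb(void) { calls++; }  /* stands in for kcsan_rmb() */
static inline void arch_rmb(void) { }            /* stands in for __smp_rmb() */

#define my_smp_rmb() do { instr_rmb(); arch_rmb(); } while (0)

int use_barrier(void)
{
    my_smp_rmb();
    return calls;
}
```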
      
      Link: https://lkml.kernel.org/r/202112041334.X44uWZXf-lkp@intel.com
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Make barrier tests compatible with lockdep · a70d36e6
      Marco Elver authored
      The barrier tests in selftest and the kcsan_test module only need the
      spinlock and mutex to test correct barrier instrumentation. Therefore,
      these were initially placed on the stack.
      
      However, lockdep asserts that locks are in static storage, and will
      generate this warning:
      
       | INFO: trying to register non-static key.
       | The code is fine but needs lockdep annotation, or maybe
       | you didn't initialize this object before use?
       | turning off the locking correctness validator.
       | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.16.0-rc1+ #3208
       | Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-1ubuntu1.1 04/01/2014
       | Call Trace:
       |  <TASK>
       |  dump_stack_lvl+0x88/0xd8
       |  dump_stack+0x15/0x1b
       |  register_lock_class+0x6b3/0x840
       |  ...
       |  test_barrier+0x490/0x14c7
       |  kcsan_selftest+0x47/0xa0
       |  ...
      
      To fix, move the test locks into static storage.
      
      Fixing the above also revealed that lock operations are strengthened on
      first use with lockdep enabled, due to lockdep calling out into
      non-instrumented files (recall that kernel/locking/lockdep.c is not
      instrumented with KCSAN).
      
      Only kcsan_test checks for over-instrumentation of *_lock() operations,
      where we can simply "warm up" the test locks to avoid the test case
      failing with lockdep.
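The "warm up" idea can be modeled in a few lines of userspace C (purely illustrative; with lockdep, the first acquisition registers the lock class and performs extra instrumented work, so a test counting instrumented operations must take the lock once before measuring):

```c
/*
 * Toy model of lockdep's "strengthen on first use": the first lock
 * acquisition does extra bookkeeping, so measurements must warm the
 * lock up first. All names here are illustrative.
 */
#include <stdbool.h>

struct test_lock { bool class_registered; int instrumented_ops; };

static void lock(struct test_lock *l)
{
    if (!l->class_registered) {   /* lockdep: register_lock_class() */
        l->class_registered = true;
        l->instrumented_ops += 3; /* extra accesses on first use */
    }
    l->instrumented_ops += 1;     /* the ordinary lock fast path */
}

static int measured_ops(struct test_lock *l)
{
    lock(l);                      /* warm up: strengthen on first use */
    int before = l->instrumented_ops;
    lock(l);                      /* the measured acquisition */
    return l->instrumented_ops - before;
}
```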
      Reported-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Support WEAK_MEMORY with Clang where no objtool support exists · bd3d5bd1
      Marco Elver authored
      Clang and GCC behave a little differently when it comes to the
      __no_sanitize_thread attribute, which has valid reasons, and depending
      on context either one could be right.
      
      Traditionally, user space ThreadSanitizer [1] still expects instrumented
      builtin atomics (to avoid false positives) and __tsan_func_{entry,exit}
      (to generate meaningful stack traces), even if the function has the
      attribute no_sanitize("thread").
      
      [1] https://clang.llvm.org/docs/ThreadSanitizer.html#attribute-no-sanitize-thread
      
      GCC doesn't follow the same policy (for better or worse), and removes
      all kinds of instrumentation if no_sanitize is added. Arguably, since
      this may be a problem for user space ThreadSanitizer, we expect this may
      change in future.
      
      Since KCSAN != ThreadSanitizer, the likelihood of false positives,
      even without barrier instrumentation everywhere, is much lower by
      design.
      
      At least for Clang, however, to fully remove all sanitizer
      instrumentation, we must add the disable_sanitizer_instrumentation
      attribute, which is available since Clang 14.0.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • compiler_attributes.h: Add __disable_sanitizer_instrumentation · a015b708
      Alexander Potapenko authored
      The new attribute maps to
      __attribute__((disable_sanitizer_instrumentation)), which will be
      supported by Clang >= 14.0. Future support in GCC is also possible.
      
      This attribute disables compiler instrumentation for kernel sanitizer
      tools, making it easier to implement noinstr. It is different from the
      existing __no_sanitize* attributes, which may still allow certain types
      of instrumentation to prevent false positives.
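The standard pattern for such an attribute looks roughly as follows (a sketch; the exact kernel definition in compiler_attributes.h may differ, and the fallback macro here is our own):

```c
/*
 * Sketch of the compiler_attributes.h pattern: use the attribute when
 * the compiler supports it (Clang >= 14), otherwise expand to nothing.
 */
#ifndef __has_attribute
#define __has_attribute(x) 0
#endif

#if __has_attribute(disable_sanitizer_instrumentation)
# define __disable_sanitizer_instrumentation \
    __attribute__((disable_sanitizer_instrumentation))
#else
# define __disable_sanitizer_instrumentation
#endif

/* Example use: a function that must receive no sanitizer instrumentation. */
static __disable_sanitizer_instrumentation int uninstrumented_add(int a, int b)
{
    return a + b;
}
```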
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • objtool, kcsan: Remove memory barrier instrumentation from noinstr · 05098119
      Marco Elver authored
      Teach objtool to turn instrumentation required for memory barrier
      modeling into nops in noinstr text.
      
      The __tsan_func_entry/exit calls are still emitted by compilers even
      with the __no_sanitize_thread attribute. The memory barrier
      instrumentation will be inserted explicitly (without compiler help), and
      thus needs to also explicitly be removed.
      Signed-off-by: Marco Elver <elver@google.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • objtool, kcsan: Add memory barrier instrumentation to whitelist · 0525bd82
      Marco Elver authored
      Adds KCSAN's memory barrier instrumentation to objtool's uaccess
      whitelist.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • sched, kcsan: Enable memory barrier instrumentation · 6f3f0c98
      Marco Elver authored
      There's no fundamental reason to disable KCSAN for scheduler code,
      except for excessive noise and performance concerns (instrumenting
      scheduler code is usually a good way to stress test KCSAN itself).
      
      However, several core sched functions imply memory barriers that are
      invisible to KCSAN without instrumentation, but are required to avoid
      false positives. Therefore, unconditionally enable instrumentation of
      memory barriers in scheduler code. Also update the comment to reflect
      this and be a bit more brief.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • mm, kcsan: Enable barrier instrumentation · d37d1fa0
      Marco Elver authored
      Some memory management calls imply memory barriers that are required to
      avoid false positives. For example, without the correct instrumentation,
      we could observe data races of the following variant:
      
                         T0           |           T1
              ------------------------+------------------------
                                      |
               *a = 42;    ---+       |
               kfree(a);      |       |
                              |       | b = kmalloc(..); // b == a
                <reordered> <-+       | *b = 42;         // not a data race!
                                      |
      
      Therefore, instrument memory barriers in all allocator code currently
      not being instrumented in a default build.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock() · d93414e3
      Marco Elver authored
      If CONFIG_PARAVIRT_SPINLOCKS=y, queued_spin_unlock() is implemented
      using pv_queued_spin_unlock() which is entirely inline asm based. As
      such, we do not receive any KCSAN barrier instrumentation via regular
      atomic operations.
      
      Add the missing KCSAN barrier instrumentation for the
      CONFIG_PARAVIRT_SPINLOCKS case.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • x86/barriers, kcsan: Use generic instrumentation for non-smp barriers · cd8730c3
      Marco Elver authored
      Prefix all barriers with __, now that asm-generic/barrier.h supports
      defining the final instrumented version of these barriers. The change
      is limited to barriers used by x86-64.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • asm-generic/bitops, kcsan: Add instrumentation for barriers · 04def1b9
      Marco Elver authored
      Adds the required KCSAN instrumentation for barriers of atomic bitops.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • locking/atomics, kcsan: Add instrumentation for barriers · e87c4f66
      Marco Elver authored
      Adds the required KCSAN instrumentation for barriers of atomics.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • locking/barriers, kcsan: Support generic instrumentation · 2505a51a
      Marco Elver authored
      Thus far only smp_*() barriers had been defined by asm-generic/barrier.h
      based on __smp_*() barriers, because the !SMP case is usually generic.
      
      With the introduction of instrumentation, it also makes sense to have
      asm-generic/barrier.h assist in the definition of instrumented versions
      of mb(), rmb(), wmb(), dma_rmb(), and dma_wmb().
      
      Because there is no requirement to distinguish the !SMP case, the
      definition can be simpler: we can avoid also providing fallbacks for the
      __ prefixed cases, and only check if `defined(__<barrier>)`, to finally
      define the KCSAN-instrumented versions.
      
      This also allows for the compiler to complain if an architecture
      accidentally defines both the normal and __ prefixed variant.
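The definition pattern can be sketched as follows (the `my_` names are placeholders for illustration; the real header operates on mb(), rmb(), etc.):

```c
/*
 * Sketch of the asm-generic pattern: if the architecture provides only
 * the __ prefixed barrier, define the final instrumented version on
 * top of it. An architecture that also defined the unprefixed name
 * would trigger a macro-redefinition error here, as intended.
 */
static int kcsan_mb_calls;
#define kcsan_instr_mb() (kcsan_mb_calls++)  /* stands in for kcsan_mb() */
#define __my_mb() ((void)0)                  /* arch barrier, a no-op here */

#if defined(__my_mb) && !defined(my_mb)
#define my_mb() do { kcsan_instr_mb(); __my_mb(); } while (0)
#endif
```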
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • locking/barriers, kcsan: Add instrumentation for barriers · f948666d
      Marco Elver authored
      Adds the required KCSAN instrumentation for barriers if CONFIG_SMP.
      KCSAN supports modeling the effects of:
      
      	smp_mb()
      	smp_rmb()
      	smp_wmb()
      	smp_store_release()
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: selftest: Add test case to check memory barrier instrumentation · 71b0e3ae
      Marco Elver authored
      Memory barrier instrumentation is crucial to avoid false positives. To
      avoid surprises, run a simple test case in the boot-time selftest to
      ensure memory barriers are still instrumented correctly.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Ignore GCC 11+ warnings about TSan runtime support · 116af35e
      Marco Elver authored
      GCC 11 has introduced a new warning option, -Wtsan [1], to warn about
      unsupported operations in the TSan runtime. But KCSAN != TSan runtime,
      so none of the warnings apply.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Warning-Options.html
      
      Ignore the warnings.
      
      Currently the warning only fires in the test for __atomic_thread_fence():
      
      kernel/kcsan/kcsan_test.c: In function ‘test_atomic_builtins’:
      kernel/kcsan/kcsan_test.c:1234:17: warning: ‘atomic_thread_fence’ is not supported with ‘-fsanitize=thread’ [-Wtsan]
       1234 |                 __atomic_thread_fence(__ATOMIC_SEQ_CST);
            |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      which exists to ensure the KCSAN runtime keeps supporting the builtin
      instrumentation.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: test: Add test cases for memory barrier instrumentation · 8bc32b34
      Marco Elver authored
      Adds test cases to check that memory barriers are instrumented
      correctly, and detection of missing memory barriers is working as
      intended if CONFIG_KCSAN_STRICT=y.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: test: Match reordered or normal accesses · 7310bd1f
      Marco Elver authored
      Due to reordering accesses with weak memory modeling, any access can now
      appear as "(reordered)".
      
      Match any permutation of accesses if CONFIG_KCSAN_WEAK_MEMORY=y, so that
      we effectively match an access if it is denoted "(reordered)" or not.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Document modeling of weak memory · 82eb6911
      Marco Elver authored
      Document how KCSAN models a subset of weak memory and the subset of
      missing memory barriers it can detect as a result.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Show location access was reordered to · be3f6967
      Marco Elver authored
      Also show the location the access was reordered to. An example report:
      
      | ==================================================================
      | BUG: KCSAN: data-race in test_kernel_wrong_memorder / test_kernel_wrong_memorder
      |
      | read-write to 0xffffffffc01e61a8 of 8 bytes by task 2311 on cpu 5:
      |  test_kernel_wrong_memorder+0x57/0x90
      |  access_thread+0x99/0xe0
      |  kthread+0x2ba/0x2f0
      |  ret_from_fork+0x22/0x30
      |
      | read-write (reordered) to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7:
      |  test_kernel_wrong_memorder+0x57/0x90
      |  access_thread+0x99/0xe0
      |  kthread+0x2ba/0x2f0
      |  ret_from_fork+0x22/0x30
      |   |
      |   +-> reordered to: test_kernel_wrong_memorder+0x80/0x90
      |
      | Reported by Kernel Concurrency Sanitizer on:
      | CPU: 7 PID: 2310 Comm: access_thread Not tainted 5.14.0-rc1+ #18
      | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
      | ==================================================================
      Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Call scoped accesses reordered in reports · 3cc21a53
      Marco Elver authored
      The scoping of an access simply denotes the scope in which it may be
      reordered. However, in reports, it is less confusing to say the access
      is "reordered", which is also more accurate when the race occurred.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan, kbuild: Add option for barrier instrumentation only · 48c9e28e
      Marco Elver authored
      Source files that disable KCSAN via KCSAN_SANITIZE := n remove all
      instrumentation, including explicit barrier instrumentation. With
      instrumentation for memory barriers, a few places require enabling
      just the explicit barrier instrumentation to avoid false positives.
      
      Providing the Makefile variable KCSAN_INSTRUMENT_BARRIERS_obj.o or
      KCSAN_INSTRUMENT_BARRIERS (for all files) set to 'y' only enables the
      explicit barrier instrumentation.
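A sketch of the resulting usage (the object file name below is hypothetical; the variable names are from the commit):

```make
# Hypothetical Makefile fragment: disable KCSAN for early_init.o, but
# keep the explicit memory barrier instrumentation so implied barriers
# in this file remain visible to KCSAN and do not cause false positives.
KCSAN_SANITIZE_early_init.o := n
KCSAN_INSTRUMENT_BARRIERS_early_init.o := y
```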
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Add core memory barrier instrumentation functions · 0b8b0830
      Marco Elver authored
      Add the core memory barrier instrumentation functions. These invalidate
      the current in-flight reordered access based on the rules for the
      respective barrier types and in-flight access type.
      
      To obtain barrier instrumentation that can be disabled via __no_kcsan
      with appropriate compiler-support (and not just with objtool help),
      barrier instrumentation repurposes __atomic_signal_fence(), instead of
      inserting explicit calls. Crucially, __atomic_signal_fence() normally
      does not map to any real instructions, but is still intercepted by
      -fsanitize=thread. As a result, like any other instrumentation done by
      the compiler, barrier instrumentation can be disabled with __no_kcsan.
      
      Unfortunately Clang and GCC currently differ in their __no_kcsan aka
      __no_sanitize_thread behaviour with respect to builtin atomics (and
      __tsan_func_{entry,exit}) instrumentation. This is already reflected in
      Kconfig.kcsan's dependencies for KCSAN_WEAK_MEMORY. A later change will
      introduce support for newer versions of Clang that can implement
      __no_kcsan to also remove the additional instrumentation introduced by
      KCSAN_WEAK_MEMORY.
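The trick can be sketched in a few lines (simplified: the kernel encodes the barrier type in the fence argument via internal constants, and the counter below is only an illustrative stand-in for runtime state):

```c
/*
 * __atomic_signal_fence() maps to no machine instructions, but under
 * -fsanitize=thread the compiler emits a call the KCSAN runtime can
 * intercept. Because the compiler emits it, __no_kcsan removes it like
 * any other instrumentation.
 */
static int barriers_observed; /* stand-in for "runtime saw a barrier" */

static void my_kcsan_mb(void)
{
    /* In an instrumented build this line becomes an interceptable
     * call; in an uninstrumented build it is completely free. */
    __atomic_signal_fence(__ATOMIC_SEQ_CST);
    barriers_observed++; /* illustrative side effect only */
}
```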
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Add core support for a subset of weak memory modeling · 69562e49
      Marco Elver authored
      Add support for modeling a subset of weak memory, which will enable
      detection of a subset of data races due to missing memory barriers.
      
      KCSAN's approach to detecting missing memory barriers is based on
      modeling access reordering, and enabled if `CONFIG_KCSAN_WEAK_MEMORY=y`,
      which depends on `CONFIG_KCSAN_STRICT=y`. The feature can be enabled or
      disabled at boot and runtime via the `kcsan.weak_memory` boot parameter.
      
      Each memory access for which a watchpoint is set up is also selected
      for simulated reordering within the scope of its function (at most 1
      in-flight access).
      
      We are limited to modeling the effects of "buffering" (delaying the
      access), since the runtime cannot "prefetch" accesses (therefore no
      acquire modeling). Once an access has been selected for reordering, it
      is checked along every other access until the end of the function scope.
      If an appropriate memory barrier is encountered, the access will no
      longer be considered for reordering.
      
      When the result of a memory operation should be ordered by a barrier,
      KCSAN can then detect data races where the conflict only occurs as a
      result of a missing barrier due to reordering accesses.
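The buffering model described above condenses into a toy sketch (purely illustrative; real KCSAN detects conflicts via watchpoints across threads, whereas the single-context "conflict rule" here only demonstrates the delay-until-barrier bookkeeping):

```c
/*
 * Toy model of KCSAN's buffering approach: at most one access per
 * context may be "delayed"; it is checked against later accesses, and
 * a barrier stops it from being considered further.
 */
#include <stddef.h>

static struct { const void *ptr; size_t size; } reorder; /* in-flight access */
static int races;

static void check_access(const void *ptr, size_t size)
{
    /* Check the delayed access against this one (toy conflict rule). */
    if (reorder.size && reorder.ptr == ptr)
        races++;
    /* Select this access for simulated reordering if none is in flight. */
    if (!reorder.size) {
        reorder.ptr = ptr;
        reorder.size = size;
    }
}

static void barrier_seen(void)
{
    /* A barrier orders the delayed access: stop considering it. */
    reorder.size = 0;
}
```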
      Suggested-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Avoid checking scoped accesses from nested contexts · 9756f64c
      Marco Elver authored
      Avoid checking scoped accesses from nested contexts (such as nested
      interrupts or in scheduler code) which share the same kcsan_ctx.
      
      This is to avoid detecting false positive races of accesses in the same
      thread with currently scoped accesses: consider setting up a watchpoint
      for a non-scoped (normal) access that also "conflicts" with a current
      scoped access. In a nested interrupt (or in the scheduler), which shares
      the same kcsan_ctx, we cannot check scoped accesses set up in the parent
      context -- simply ignore them in this case.
      
      With the introduction of kcsan_ctx::disable_scoped, we can also clean up
      kcsan_check_scoped_accesses()'s recursion guard, and do not need to
      modify the list's prev pointer.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Remove redundant zero-initialization of globals · 71f8de70
      Marco Elver authored
      They are implicitly zero-initialized, so remove the explicit
      initialization. This keeps the upcoming additions to kcsan_ctx
      consistent with the rest.
      
      No functional change intended.
      Signed-off-by: Marco Elver <elver@google.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Refactor reading of instrumented memory · 12305abe
      Marco Elver authored
      Factor out the switch statement reading instrumented memory into a
      helper read_instrumented_memory().
      
      No functional change.
      Signed-off-by: Marco Elver <elver@google.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
  2. 14 Nov, 2021 11 commits