1. 30 Aug, 2019 5 commits
    • arm64: atomics: Use K constraint when toolchain appears to support it · 03adcbd9
      Will Deacon authored
      The 'K' constraint is a documented AArch64 machine constraint supported
      by GCC for matching integer constants that can be used with a 32-bit
      logical instruction. Unfortunately, some released compilers erroneously
      accept the immediate '4294967295' for this constraint, which is later
      refused by GAS at assembly time. This has led us to avoid using the
      'K' constraint altogether.
      
      Instead, detect whether the compiler is up to the job when building the
      kernel and pass the 'K' constraint to our 32-bit atomic macros when it
      appears to be supported (a sketch of the idea follows below).
      Signed-off-by: Will Deacon <will@kernel.org>
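
      A minimal sketch of the idea, not the kernel's actual macros: the build
      system is assumed to define CONFIG_CC_HAS_K_CONSTRAINT only after a
      compile test shows the toolchain handles 'K' correctly; otherwise the
      operand falls back to a register.

          /* Sketch only: constraint selection for 32-bit logical immediates. */
          #ifdef CONFIG_CC_HAS_K_CONSTRAINT
          #define LOGIC32_CONSTRAINT  "K"     /* GCC may emit an immediate form */
          #else
          #define LOGIC32_CONSTRAINT          /* empty: register operand only   */
          #endif

          static inline unsigned int set_flag_bits(unsigned int val)
          {
                  /* 0xff00 is a valid 32-bit logical immediate for ORR */
                  asm("orr    %w0, %w0, %w1"
                      : "+r" (val)
                      : LOGIC32_CONSTRAINT "r" (0xff00u));
                  return val;
          }
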
    • arm64: atomics: Undefine internal macros after use · 5aad6cda
      Will Deacon authored
      We use a bunch of internal macros when constructing our atomic and
      cmpxchg routines in order to save on boilerplate. Avoid exposing these
      directly to users of the header files (the pattern is sketched below).
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
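
      The pattern, sketched with hypothetical names rather than the arm64
      macros themselves: generate the functions, then undefine the helper so
      includers of the header never see it.

          /* foo.h -- illustrative only */
          #define FOO_ATOMIC_OP(op, c_op)                               \
          static inline void foo_##op(int i, int *v)                    \
          {                                                             \
                  *v = *v c_op i; /* stand-in for the real asm */       \
          }

          FOO_ATOMIC_OP(add, +)
          FOO_ATOMIC_OP(sub, -)

          /* Internal helper: undefine it so header users never see it. */
          #undef FOO_ATOMIC_OP
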
    • arm64: lse: Make ARM64_LSE_ATOMICS depend on JUMP_LABEL · b32baf91
      Will Deacon authored
      Support for LSE atomic instructions (CONFIG_ARM64_LSE_ATOMICS) relies on
      a static key to select between the legacy LL/SC implementation which is
      available on all arm64 CPUs and the super-duper LSE implementation which
      is available on CPUs implementing v8.1 and later.
      
      Unfortunately, when building a kernel with CONFIG_JUMP_LABEL disabled
      (e.g. because the toolchain doesn't support 'asm goto'), the static key
      inside the atomics code tries to use atomics itself. This results in a
      mess of circular includes and a build failure:
      
      In file included from ./arch/arm64/include/asm/lse.h:11,
                       from ./arch/arm64/include/asm/atomic.h:16,
                       from ./include/linux/atomic.h:7,
                       from ./include/asm-generic/bitops/atomic.h:5,
                       from ./arch/arm64/include/asm/bitops.h:26,
                       from ./include/linux/bitops.h:19,
                       from ./include/linux/kernel.h:12,
                       from ./include/asm-generic/bug.h:18,
                       from ./arch/arm64/include/asm/bug.h:26,
                       from ./include/linux/bug.h:5,
                       from ./include/linux/page-flags.h:10,
                       from kernel/bounds.c:10:
      ./include/linux/jump_label.h: In function ‘static_key_count’:
      ./include/linux/jump_label.h:254:9: error: implicit declaration of function ‘atomic_read’ [-Werror=implicit-function-declaration]
        return atomic_read(&key->enabled);
               ^~~~~~~~~~~
      
      [ ... more of the same ... ]
      
      Since LSE atomic instructions are not critical to the operation of the
      kernel, make them depend on JUMP_LABEL at compile time (the !JUMP_LABEL
      fallback that pulls the atomics in is sketched below).
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
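
      The shape of the problem is visible in the !CONFIG_JUMP_LABEL fallback,
      simplified below from the definitions the error log points at: with no
      asm-goto patching available, a static key is just an atomic counter, so
      jump_label.h needs the very atomics that asm/atomic.h is still in the
      middle of defining.

          /* Simplified from the !CONFIG_JUMP_LABEL definitions in
           * include/linux/jump_label.h: no patching, just an atomic counter. */
          struct static_key {
                  atomic_t enabled;
          };

          static inline int static_key_count(struct static_key *key)
          {
                  /* the atomic_read() flagged in the build error above */
                  return atomic_read(&key->enabled);
          }
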
    • arm64: asm: Kill 'asm/atomic_arch.h' · 0533f97b
      Will Deacon authored
      The contents of 'asm/atomic_arch.h' can be split across some of our
      other 'asm/' headers. Remove it.
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: lse: Remove unused 'alt_lse' assembly macro · 0ca98b24
      Will Deacon authored
      The 'alt_lse' assembly macro has been unused since 7c8fc35d
      ("locking/atomics/arm64: Replace our atomic/lock bitop implementations
      with asm-generic").
      
      Remove it.
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  2. 29 Aug, 2019 5 commits
    • arm64: atomics: Remove atomic_ll_sc compilation unit · eb3aabbf
      Andrew Murray authored
      We no longer fall back to out-of-line atomics on systems with
      CONFIG_ARM64_LSE_ATOMICS where ARM64_HAS_LSE_ATOMICS is not set.
      
      Remove the unused compilation unit which provided these symbols.
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: avoid using hard-coded registers for LSE atomics · 3337cb5a
      Andrew Murray authored
      Now that we have removed the out-of-line ll/sc atomics, we can give
      the compiler the freedom to choose its own register allocation.
      
      Remove the hard-coded use of x30 (the sketch below contrasts the two
      approaches).
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
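
      A simplified before/after sketch of what this means for an
      add-return-style sequence (not the kernel's exact macros; both variants
      assume an LSE-capable toolchain, e.g. -march=armv8.1-a):

          /* Before: the temporary is hard-coded to w30, so x30 must be
           * clobbered even though any free register would do. */
          static inline int add_return_fixed(int i, int *p)
          {
                  asm volatile(
                  "       ldadd   %w[i], w30, %[v]\n"
                  "       add     %w[i], %w[i], w30"
                  : [i] "+r" (i), [v] "+Q" (*p)
                  :
                  : "x30");
                  return i;
          }

          /* After: the temporary is an ordinary early-clobber operand, so
           * the register allocator picks whatever happens to be free. */
          static inline int add_return_free(int i, int *p)
          {
                  int tmp;

                  asm volatile(
                  "       ldadd   %w[i], %w[tmp], %[v]\n"
                  "       add     %w[i], %w[i], %w[tmp]"
                  : [i] "+r" (i), [v] "+Q" (*p), [tmp] "=&r" (tmp));
                  return i;
          }
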
    • arm64: atomics: avoid out-of-line ll/sc atomics · addfc386
      Andrew Murray authored
      When building for LSE atomics (CONFIG_ARM64_LSE_ATOMICS), the existing
      code falls back to ll/sc atomics if the hardware or toolchain doesn't
      support them. It achieves this by branching from inline assembly to a
      function that is built with special compile flags. Furthermore, this
      clobbers registers even when the fallback isn't used, increasing
      register pressure.
      
      Improve this by providing inline implementations of both LSE and ll/sc
      atomics and using a static key to select between them, which allows the
      compiler to generate better atomics code. Put the LL/SC fallback atomics
      in their own subsection to improve icache performance (a simplified
      sketch of the dispatch follows below).
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
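
      A much-simplified sketch of the scheme (the key name and helpers below
      are illustrative, not the kernel's): both flavours are inline and a
      static key picks between them, so there is no out-of-line call and no
      fixed set of clobbered registers.

          #include <linux/jump_label.h>

          /* Assumed to be enabled from CPU feature detection. */
          DEFINE_STATIC_KEY_FALSE(have_lse_atomics);

          /* LSE flavour: a single instruction (relaxed ordering). */
          static __always_inline int fetch_add_lse(int i, int *p)
          {
                  asm volatile("ldadd  %w[i], %w[i], %[v]"
                               : [i] "+r" (i), [v] "+Q" (*p));
                  return i;
          }

          /* LL/SC flavour: the classic exclusive load/store retry loop. */
          static __always_inline int fetch_add_llsc(int i, int *p)
          {
                  int old, newval;
                  unsigned long tmp;

                  asm volatile(
                  "1:     ldxr    %w[old], %[v]\n"
                  "       add     %w[newval], %w[old], %w[i]\n"
                  "       stxr    %w[tmp], %w[newval], %[v]\n"
                  "       cbnz    %w[tmp], 1b"
                  : [old] "=&r" (old), [newval] "=&r" (newval),
                    [tmp] "=&r" (tmp), [v] "+Q" (*p)
                  : [i] "r" (i));
                  return old;
          }

          static __always_inline int my_fetch_add_relaxed(int i, int *p)
          {
                  if (static_branch_likely(&have_lse_atomics))
                          return fetch_add_lse(i, p);
                  return fetch_add_llsc(i, p);
          }
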
    • arm64: Use correct ll/sc atomic constraints · 580fa1b8
      Andrew Murray authored
      The A64 ISA accepts distinct (but overlapping) ranges of immediates for:
      
       * add arithmetic instructions ('I' machine constraint)
       * sub arithmetic instructions ('J' machine constraint)
       * 32-bit logical instructions ('K' machine constraint)
       * 64-bit logical instructions ('L' machine constraint)
      
      ... but we currently use the 'I' constraint for many atomic operations
      using sub or logical instructions, which is not always valid.
      
      When CONFIG_ARM64_LSE_ATOMICS is not set, this allows invalid immediates
      to be passed to instructions, potentially resulting in a build failure.
      When CONFIG_ARM64_LSE_ATOMICS is selected, the out-of-line ll/sc atomics
      always use a register, as they have no visibility of the value passed by
      the caller.
      
      This patch adds a constraint parameter to the ATOMIC_xx and
      __CMPXCHG_CASE macros so that we can pass appropriate constraints for
      each case, with uses updated accordingly (a simplified sketch follows
      below).
      
      Unfortunately prior to GCC 8.1.0 the 'K' constraint erroneously accepted
      '4294967295', so we must instead force the use of a register.
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
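
      The shape of the change, sketched with hypothetical names rather than
      the real ATOMIC_xx/__CMPXCHG_CASE macros: the constraint becomes a macro
      argument, so each operation advertises the immediate range its
      instruction actually accepts (or forces a register when none is safe).

          #define MY_ATOMIC_OP(op, asm_op, constraint)                  \
          static inline void my_atomic_##op(int i, int *v)              \
          {                                                             \
                  unsigned long tmp;                                    \
                  int result;                                           \
                                                                        \
                  asm volatile(                                         \
                  "1:     ldxr    %w0, %2\n"                            \
                  "       " #asm_op "     %w0, %w0, %w3\n"              \
                  "       stxr    %w1, %w0, %2\n"                       \
                  "       cbnz    %w1, 1b"                              \
                  : "=&r" (result), "=&r" (tmp), "+Q" (*v)              \
                  : constraint "r" (i));                                \
          }

          MY_ATOMIC_OP(add, add, "I")   /* add-range immediates        */
          MY_ATOMIC_OP(and, and, "K")   /* 32-bit logical immediates   */
          /* With pre-8.1.0 GCC, pass "" so only a register is accepted. */

          #undef MY_ATOMIC_OP
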
    • jump_label: Don't warn on __exit jump entries · 8f35eaa5
      Andrew Murray authored
      On architectures that discard .exit.* sections at runtime, a
      warning is printed for each jump label that is used within an
      in-kernel __exit annotated function:
      
      can't patch jump_label at ehci_hcd_cleanup+0x8/0x3c
      WARNING: CPU: 0 PID: 1 at kernel/jump_label.c:410 __jump_label_update+0x12c/0x138
      
      As these functions will never be executed (they are freed along with
      the rest of initmem), we do not need to patch them and should not
      display any warnings.
      
      The warning is displayed because the test required to satisfy
      jump_entry_is_init is based on init_section_contains (__init_begin to
      __init_end), whereas the test in __jump_label_update is based on
      init_kernel_text (_sinittext to _einittext) via kernel_text_address
      (the two ranges are sketched below).
      
      Fixes: 19483677 ("jump_label: Annotate entries that operate on __init code earlier")
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
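
      A rough illustration of the two tests using the kernel's linker symbols
      (the helpers below are simplified stand-ins for init_section_contains()
      and init_kernel_text(), not the real implementations):

          #include <linux/types.h>

          extern char __init_begin[], __init_end[];   /* whole init region */
          extern char _sinittext[], _einittext[];     /* .init.text only   */

          /* roughly the test behind jump_entry_is_init */
          static bool addr_in_init_section(unsigned long addr)
          {
                  return addr >= (unsigned long)__init_begin &&
                         addr <  (unsigned long)__init_end;
          }

          /* roughly the test used on the __jump_label_update path */
          static bool addr_in_init_text(unsigned long addr)
          {
                  return addr >= (unsigned long)_sinittext &&
                         addr <  (unsigned long)_einittext;
          }

          /* An address in a discarded __exit function passes the first test
           * but not the second, so it looks like init code to one check and
           * like unpatchable text to the other, triggering the warning. */
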
  3. 05 Aug, 2019 1 commit
  4. 04 Aug, 2019 10 commits
  5. 03 Aug, 2019 19 commits