  29 Aug, 2019 5 commits
    • arm64: atomics: Remove atomic_ll_sc compilation unit · eb3aabbf
      Andrew Murray authored
      We no longer fall back to out-of-line atomics on systems with
      CONFIG_ARM64_LSE_ATOMICS where ARM64_HAS_LSE_ATOMICS is not set.
      
      Remove the unused compilation unit which provided these symbols.
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: avoid using hard-coded registers for LSE atomics · 3337cb5a
      Andrew Murray authored
      Now that we have removed the out-of-line ll/sc atomics, we can give
      the compiler the freedom to choose its own register allocation.
      
      Remove the hard-coded use of x30.
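      The hard-coded x30 was a leftover of the out-of-line fallback path, whose
      branch-and-link would clobber the link register anyway. A minimal sketch of
      the before/after difference (hypothetical helper names, not the kernel's
      actual macros; assumes a toolchain targeting armv8-a+lse):
      
      /* Before (illustrative): the loaded-back value is pinned to x30. */
      static inline int fetch_add_pinned(int i, int *v)
      {
              register unsigned long x30 asm("x30");
      
              asm volatile("ldaddal %w[i], %w[old], %[v]"
                           : [old] "=&r" (x30), [v] "+Q" (*v)
                           : [i] "r" (i)
                           : "memory");
              return (int)x30;
      }
      
      /* After (illustrative): a plain temporary lets the register allocator
       * pick any free register. */
      static inline int fetch_add_any(int i, int *v)
      {
              int old;
      
              asm volatile("ldaddal %w[i], %w[old], %[v]"
                           : [old] "=&r" (old), [v] "+Q" (*v)
                           : [i] "r" (i)
                           : "memory");
              return old;
      }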
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: atomics: avoid out-of-line ll/sc atomics · addfc386
      Andrew Murray authored
      When building for LSE atomics (CONFIG_ARM64_LSE_ATOMICS), if the hardware
      or toolchain doesn't support it, the existing code falls back to ll/sc
      atomics. It achieves this by branching from inline assembly to a function
      that is built with special compile flags. Further, this results in
      registers being clobbered even when the fallback isn't used, increasing
      register pressure.
      
      Improve this by providing inline implementations of both the LSE and
      ll/sc atomics and using a static key to select between them, which allows
      the compiler to generate better atomics code. Put the LL/SC fallback
      atomics in their own subsection to improve icache performance.
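      A minimal sketch of the static-key dispatch (invented names; the real
      kernel wraps this choice in its atomic macros and keys it off CPU feature
      detection, and the asm needs an LSE-capable assembler):
      
      #include <linux/jump_label.h>
      
      /* Hypothetical key: enabled once LSE atomics are known to be usable. */
      DEFINE_STATIC_KEY_FALSE(have_lse_atomics);
      
      static inline void my_atomic_add(int i, int *v)
      {
              if (static_branch_likely(&have_lse_atomics)) {
                      /* LSE fast path: one store-add instruction, no retry loop. */
                      asm volatile("stadd %w[i], %[v]"
                                   : [v] "+Q" (*v)
                                   : [i] "r" (i));
              } else {
                      /* LL/SC fallback: exclusive load/store retry loop. */
                      unsigned long tmp;
                      int result;
      
                      asm volatile(
                      "1:     ldxr    %w[res], %[v]\n"
                      "       add     %w[res], %w[res], %w[i]\n"
                      "       stxr    %w[tmp], %w[res], %[v]\n"
                      "       cbnz    %w[tmp], 1b"
                      : [res] "=&r" (result), [tmp] "=&r" (tmp), [v] "+Q" (*v)
                      : [i] "r" (i));
              }
      }
      
      With both paths inline, the compiler sees real C-level implementations on
      each side of the branch instead of a call with fixed register clobbers;
      the subsection placement then keeps the rarely taken LL/SC path off the
      hot cache lines.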
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Use correct ll/sc atomic constraints · 580fa1b8
      Andrew Murray authored
      The A64 ISA accepts distinct (but overlapping) ranges of immediates for:
      
       * add arithmetic instructions ('I' machine constraint)
       * sub arithmetic instructions ('J' machine constraint)
       * 32-bit logical instructions ('K' machine constraint)
       * 64-bit logical instructions ('L' machine constraint)
      
      ... but we currently use the 'I' constraint for many atomic operations
      using sub or logical instructions, which is not always valid.
      
      When CONFIG_ARM64_LSE_ATOMICS is not set, this allows invalid immediates
      to be passed to instructions, potentially resulting in a build failure.
      When CONFIG_ARM64_LSE_ATOMICS is selected, the out-of-line ll/sc atomics
      always use a register, as they have no visibility of the value passed by
      the caller.
      
      This patch adds a constraint parameter to the ATOMIC_xx and
      __CMPXCHG_CASE macros so that we can pass appropriate constraints for
      each case, with uses updated accordingly.
      
      Unfortunately prior to GCC 8.1.0 the 'K' constraint erroneously accepted
      '4294967295', so we must instead force the use of a register.
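      The mechanism can be sketched with a hypothetical generator macro
      (invented names, not the kernel's ATOMIC_OP): the operand constraint
      becomes a macro parameter, so each instantiation passes the immediate
      range its instruction can actually encode, or just a plain register where
      that is safer:
      
      /* Hypothetical macro: the last argument is the constraint string used
       * for the 'i' operand. */
      #define GEN_ATOMIC_OP(name, insn, constraint)                      \
      static inline void my_atomic_##name(int i, int *v)                 \
      {                                                                   \
              unsigned long tmp;                                          \
              int result;                                                 \
                                                                          \
              asm volatile(                                               \
              "1:     ldxr    %w[res], %[v]\n"                            \
              "       " insn "        %w[res], %w[res], %w[i]\n"          \
              "       stxr    %w[tmp], %w[res], %[v]\n"                   \
              "       cbnz    %w[tmp], 1b"                                \
              : [res] "=&r" (result), [tmp] "=&r" (tmp), [v] "+Q" (*v)    \
              : [i] constraint (i));                                      \
      }
      
      GEN_ATOMIC_OP(add, "add", "rI") /* 'I': add-range immediates */
      GEN_ATOMIC_OP(sub, "sub", "rJ") /* 'J': sub-range immediates */
      GEN_ATOMIC_OP(or,  "orr", "r")  /* register only: 'K' is unreliable
                                         before GCC 8.1.0 */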
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • jump_label: Don't warn on __exit jump entries · 8f35eaa5
      Andrew Murray authored
      On architectures that discard .exit.* sections at runtime, a
      warning is printed for each jump label that is used within an
      in-kernel __exit annotated function:
      
      can't patch jump_label at ehci_hcd_cleanup+0x8/0x3c
      WARNING: CPU: 0 PID: 1 at kernel/jump_label.c:410 __jump_label_update+0x12c/0x138
      
      As these functions will never be executed (they are freed along with the
      rest of initmem), we do not need to patch them and should not display
      any warnings.
      
      The warning is displayed because the test required to satisfy
      jump_entry_is_init is based on init_section_contains (__init_begin to
      __init_end), whereas the test in __jump_label_update is based on
      init_kernel_text (_sinittext to _einittext), via kernel_text_address().
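      Given that description, the fix amounts to warning only when the
      unpatchable entry is not an init entry. A sketch of the resulting logic
      (kernel-context code, not the verbatim upstream diff; the helper name is
      invented):
      
      /* Roughly what the update path does for one entry after the fix. */
      static void update_one_entry(struct jump_entry *entry,
                                   enum jump_label_type type)
      {
              if (kernel_text_address(jump_entry_code(entry))) {
                      arch_jump_label_transform(entry, type);
              } else if (!jump_entry_is_init(entry)) {
                      /* Not init/exit code, so something really is wrong. */
                      WARN_ONCE(1, "can't patch jump_label at %pS",
                                (void *)jump_entry_code(entry));
              }
              /*
               * Otherwise this is a built-in __exit entry: inside the init
               * region (so jump_entry_is_init() is true) but outside
               * _sinittext.._einittext (so kernel_text_address() fails).
               * It will never run, so it is silently skipped.
               */
      }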
      
      Fixes: 19483677 ("jump_label: Annotate entries that operate on __init code earlier")
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>