1. 26 Aug, 2021 3 commits
    • Merge branch 'for-next/entry' into for-next/core · 1a7f67e6
      Catalin Marinas authored
      * for-next/entry:
        : More entry.S clean-ups and conversion to C.
        arm64: entry: call exit_to_user_mode() from C
        arm64: entry: move bulk of ret_to_user to C
        arm64: entry: clarify entry/exit helpers
        arm64: entry: consolidate entry/exit helpers
      1a7f67e6
    • Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest',... · 622909e5
      Catalin Marinas authored
      Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest', remote-tracking branch 'arm64/for-next/perf' into for-next/core
      
      * arm64/for-next/perf:
        arm64/perf: Replace '0xf' instances with ID_AA64DFR0_PMUVER_IMP_DEF
      
      * for-next/mte:
        : Miscellaneous MTE improvements.
        arm64/cpufeature: Optionally disable MTE via command-line
        arm64: kasan: mte: remove redundant mte_report_once logic
        arm64: kasan: mte: use a constant kernel GCR_EL1 value
        arm64: avoid double ISB on kernel entry
        arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
        Documentation: document the preferred tag checking mode feature
        arm64: mte: introduce a per-CPU tag checking mode preference
        arm64: move preemption disablement to prctl handlers
        arm64: mte: change ASYNC and SYNC TCF settings into bitfields
        arm64: mte: rename gcr_user_excl to mte_ctrl
        arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode
      
      * for-next/misc:
        : Miscellaneous updates.
        arm64: Do not trap PMSNEVFR_EL1
        arm64: mm: fix comment typo of pud_offset_phys()
        arm64: signal32: Drop pointless call to sigdelsetmask()
        arm64/sve: Better handle failure to allocate SVE register storage
        arm64: Document the requirement for SCR_EL3.HCE
        arm64: head: avoid over-mapping in map_memory
        arm64/sve: Add a comment documenting the binutils needed for SVE asm
        arm64/sve: Add some comments for sve_save/load_state()
        arm64: replace in_irq() with in_hardirq()
        arm64: mm: Fix TLBI vs ASID rollover
        arm64: entry: Add SYM_CODE annotation for __bad_stack
        arm64: fix typo in a comment
        arm64: move the (z)install rules to arch/arm64/Makefile
        arm64/sve: Make fpsimd_bind_task_to_cpu() static
        arm64: unnecessary end 'return;' in void functions
        arm64/sme: Document boot requirements for SME
        arm64: use __func__ to get function name in pr_err
        arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE
        arm64: cpufeature: Use defined macro instead of magic numbers
        arm64/kexec: Test page size support with new TGRAN range values
      
      * for-next/kselftest:
        : Kselftest additions for arm64.
        kselftest/arm64: signal: Add a TODO list for signal handling tests
        kselftest/arm64: signal: Add test case for SVE register state in signals
        kselftest/arm64: signal: Verify that signals can't change the SVE vector length
        kselftest/arm64: signal: Check SVE signal frame shows expected vector length
        kselftest/arm64: signal: Support signal frames with SVE register data
        kselftest/arm64: signal: Add SVE to the set of features we can check for
        kselftest/arm64: pac: Fix skipping of tests on systems without PAC
        kselftest/arm64: mte: Fix misleading output when skipping tests
        kselftest/arm64: Add a TODO list for floating point tests
        kselftest/arm64: Add tests for SVE vector configuration
        kselftest/arm64: Validate vector lengths are set in sve-probe-vls
        kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
        kselftest/arm64: Ignore check_gcr_el1_cswitch binary
      622909e5
    • arm64: Do not trap PMSNEVFR_EL1 · 50cb99fa
      Alexandru Elisei authored
      Commit 31c00d2a ("arm64: Disable fine grained traps on boot") zeroed
      the fine grained trap registers to prevent unwanted register traps from
      occurring. However, for the PMSNEVFR_EL1 register, the corresponding
      HDFG{R,W}TR_EL2.nPMSNEVFR_EL1 fields must be 1 to disable trapping. Set
      both fields to 1 if FEAT_SPEv1p2 is detected to disable read and write
      traps.
      
      Fixes: 31c00d2a ("arm64: Disable fine grained traps on boot")
      Cc: <stable@vger.kernel.org> # 5.13.x
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20210824154523.906270-1-alexandru.elisei@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      50cb99fa
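
      For illustration, a minimal C-level sketch of the logic described in the
      commit above. The actual fix lives in the EL2 fine-grained-trap init code;
      the helper name, field shift and bit position below are assumptions made
      for the sketch, not taken from the patch.

          /*
           * Sketch only: PMSNEVFR_EL1 exists from FEAT_SPEv1p2 onwards
           * (ID_AA64DFR0_EL1.PMSVer >= 3), and its fine-grained trap controls
           * are "negative": they must be 1 for accesses *not* to be trapped,
           * so zeroing the whole register (as before the fix) enables the trap.
           */
          #define HDFGxTR_EL2_nPMSNEVFR_EL1   (1UL << 62)   /* assumed bit position */
          #define ID_AA64DFR0_PMSVER_SHIFT    32            /* assumed field shift */

          static unsigned long hdfg_trap_init_value(unsigned long id_aa64dfr0)
          {
                  unsigned long val = 0;  /* 0 disables the ordinary (positive) traps */
                  unsigned long pmsver = (id_aa64dfr0 >> ID_AA64DFR0_PMSVER_SHIFT) & 0xf;

                  if (pmsver >= 3)        /* FEAT_SPEv1p2 or later */
                          val |= HDFGxTR_EL2_nPMSNEVFR_EL1;

                  return val;     /* would be written to both HDFGRTR_EL2 and HDFGWTR_EL2 */
          }
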
  8. 05 Aug, 2021 4 commits
    • arm64: entry: call exit_to_user_mode() from C · e130338e
      Mark Rutland authored
      When handling an exception from EL0, we perform the entry work in that
      exception's C handler, and once the C handler has finished, we return
      to the entry assembly. Subsequently, in the common `ret_to_user`
      assembly we perform the exit work that balances with the entry work.
      This can be somewhat difficult to follow, and makes it hard to rework
      the return paths (e.g. to pass additional context to the exit code, or
      to have exception return logic for specific exceptions).
      
      This patch reworks the entry code such that each EL0 C exception handler
      is responsible for both the entry and exit work. This clearly balances
      the two (and will permit additional variation in future), and avoids an
      unnecessary bounce between assembly and C in the common case, leaving
      `ret_from_fork` as the only place assembly has to call the exit code.
      This means that the exit work is now inlined into the C handler, which
      is already the case for the entry work, and allows the compiler to
      generate better code (e.g. by immediately returning when there is no
      exit work to perform).
      
      To align with other exception entry/exit helpers, enter_from_user_mode()
      is updated to take the EL0 pt_regs as a parameter, though this is
      currently unused.
      
      There should be no functional change as a result of this patch. However,
      this should lead to slightly better backtraces when an error is
      encountered within do_notify_resume(), as the C handler should appear in
      the backtrace, indicating the specific exception that the kernel was
      entered with.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Joey Gouly <joey.gouly@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Joey Gouly <joey.gouly@arm.com>
      Link: https://lore.kernel.org/r/20210802140733.52716-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e130338e
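
      As a reading aid, a simplified sketch of the handler shape this change
      results in; the handler name and body below are illustrative, not copied
      from the patch.

          /* Sketch only (kernel context assumed): each EL0 C exception handler
           * now pairs the entry and exit work itself, so ret_to_user no longer
           * has to bounce back into C. */
          static void noinstr el0_example(struct pt_regs *regs)
          {
                  enter_from_user_mode(regs);     /* irq/context entry work */

                  /* ... exception-specific handling, instrumentable C code ... */

                  exit_to_user_mode(regs);        /* balancing exit work, now inlined here */
          }
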
    • arm64: entry: move bulk of ret_to_user to C · 4d1c2ee2
      Mark Rutland authored
      In `ret_to_user` we perform some conditional work depending on the
      thread flags, then perform some IRQ/context tracking which is intended
      to balance with the IRQ/context tracking performed in the entry C code.
      
      For simplicity and consistency, it would be preferable to move this all
      to C. As a step towards that, this patch moves the conditional work and
      IRQ/context tracking into a C helper function. To aid bisectability,
      this is called from the `ret_to_user` assembly, and a subsequent patch
      will move the call to C code.
      
      As local_daif_mask() handles all necessary tracing and PMR manipulation,
      we no longer need to handle this explicitly. As we call
      exit_to_user_mode() directly, the `user_enter_irqoff` macro is no longer
      used, and can be removed. As enter_from_user_mode() and
      exit_to_user_mode() are no longer called from assembly, these can be
      made static, and as these are typically very small, they are marked
      __always_inline to avoid the overhead of a function call.
      
      For now, enablement of single-step is left in entry.S, and for this we
      still need to read the flags in ret_to_user(). It is safe to read this
      separately as TIF_SINGLESTEP is not part of _TIF_WORK_MASK.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Joey Gouly <joey.gouly@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Joey Gouly <joey.gouly@arm.com>
      Link: https://lore.kernel.org/r/20210802140733.52716-4-mark.rutland@arm.com
      [catalin.marinas@arm.com: removed unused gic_prio_kentry_setup macro]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4d1c2ee2
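
      A rough sketch of the kind of C helper described above, called from the
      `ret_to_user` assembly; the function name is illustrative and kernel
      context is assumed, so this is not the exact code added by the patch.

          /* Sketch only: the conditional work and irq/context tracking that
           * used to live in the ret_to_user assembly, gathered into one C
           * helper that the assembly calls before returning to EL0. */
          asmlinkage void asm_exit_to_user_mode_sketch(struct pt_regs *regs)
          {
                  unsigned long flags;

                  local_daif_mask();      /* handles the necessary tracing/PMR itself */

                  flags = READ_ONCE(current_thread_info()->flags);
                  if (unlikely(flags & _TIF_WORK_MASK))
                          do_notify_resume(regs, flags);  /* signals, reschedule, ... */

                  exit_to_user_mode(regs);        /* balances enter_from_user_mode() */
          }
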
    • arm64: entry: clarify entry/exit helpers · bc29b71f
      Mark Rutland authored
      When entering an exception, we must perform irq/context state management
      before we can use instrumentable C code. Similarly, when exiting an
      exception we cannot use instrumentable C code after we perform
      irq/context state management.
      
      Originally, we'd intended that the enter_from_*() and exit_to_*()
      helpers would enforce this by virtue of being the first and last
      functions called, respectively, in an exception handler. However, as
      they now call instrumentable code themselves, this is not as clearly
      true.
      
      To make this more robust, this patch splits the irq/context state
      management into separate helpers, with all the helpers commented to make
      their intended purpose more obvious.
      
      In exit_to_kernel_mode() we'll now check TFSR_EL1 before we assert that
      IRQs are disabled, but this ordering is not important, and other than
      this there should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Joey Gouly <joey.gouly@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Joey Gouly <joey.gouly@arm.com>
      Link: https://lore.kernel.org/r/20210802140733.52716-3-mark.rutland@arm.com
      [catalin.marinas@arm.com: comment typos fix-up]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bc29b71f
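
      A minimal sketch of the split described above; the names and comments are
      illustrative (kernel context assumed) rather than the exact helpers the
      patch introduces.

          /* Sketch only: the raw irq/context state management is pulled into a
           * double-underscore helper, and the comments spell out where
           * instrumentable C code becomes (or stops being) safe to run. */
          static __always_inline void __enter_from_user_mode(void)
          {
                  /*
                   * Non-instrumentable state management only: lockdep, context
                   * tracking and hardirq tracing. Nothing instrumentable may
                   * run before this has completed.
                   */
          }

          static __always_inline void enter_from_user_mode(struct pt_regs *regs)
          {
                  __enter_from_user_mode();

                  /* Exception-specific, instrumentable work may follow from here. */
          }
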
    • arm64: entry: consolidate entry/exit helpers · 46a2b02d
      Mark Rutland authored
      To make the various entry/exit helpers easier to understand and easier
      to compare, this patch moves all the entry/exit helpers to be adjacent
      at the top of entry-common.c, rather than being spread out throughout
      the file.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Joey Gouly <joey.gouly@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Joey Gouly <joey.gouly@arm.com>
      Link: https://lore.kernel.org/r/20210802140733.52716-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      46a2b02d