1. 29 Oct, 2021 3 commits
    • Merge branch 'for-next/kexec' into for-next/core · d8a2c0fb
      Will Deacon authored
      * for-next/kexec:
        arm64: trans_pgd: remove trans_pgd_map_page()
        arm64: kexec: remove cpu-reset.h
        arm64: kexec: remove the pre-kexec PoC maintenance
        arm64: kexec: keep MMU enabled during kexec relocation
        arm64: kexec: install a copy of the linear-map
        arm64: kexec: use ld script for relocation function
        arm64: kexec: relocate in EL1 mode
        arm64: kexec: configure EL2 vectors for kexec
        arm64: kexec: pass kimage as the only argument to relocation function
        arm64: kexec: Use dcache ops macros instead of open-coding
        arm64: kexec: skip relocation code for inplace kexec
        arm64: kexec: flush image and lists during kexec load time
        arm64: hibernate: abstract ttrb0 setup function
        arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
        arm64: kernel: add helper for booted at EL2 and not VHE
    • Merge branch 'for-next/extable' into for-next/core · 99fe09c8
      Will Deacon authored
      * for-next/extable:
        arm64: vmlinux.lds.S: remove `.fixup` section
        arm64: extable: add load_unaligned_zeropad() handler
        arm64: extable: add a dedicated uaccess handler
        arm64: extable: add `type` and `data` fields
        arm64: extable: use `ex` for `exception_table_entry`
        arm64: extable: make fixup_exception() return bool
        arm64: extable: consolidate definitions
        arm64: gpr-num: support W registers
        arm64: factor out GPR numbering helpers
        arm64: kvm: use kvm_exception_table_entry
        arm64: lib: __arch_copy_to_user(): fold fixups into body
        arm64: lib: __arch_copy_from_user(): fold fixups into body
        arm64: lib: __arch_clear_user(): fold fixups into body
    • Merge branch 'for-next/8.6-timers' into for-next/core · a69483ee
      Will Deacon authored
      * for-next/8.6-timers:
        arm64: Add HWCAP for self-synchronising virtual counter
        arm64: Add handling of CNTVCTSS traps
        arm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0
        arm64: Add a capability for FEAT_ECV
        clocksource/drivers/arch_arm_timer: Move workaround synchronisation around
        clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
        clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
        clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
        clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
        clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
        clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
        clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
        clocksource/drivers/arm_arch_timer: Drop _tval from erratum function names
        clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
        clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
        clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
        clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses
  2. 21 Oct, 2021 13 commits
    • arm64: vmlinux.lds.S: remove `.fixup` section · bf6e667f
      Mark Rutland authored
      We no longer place anything into a `.fixup` section, so we no longer
      need to place those sections into the `.text` section in the main kernel
      Image.
      
      Remove the use of `.fixup`.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-14-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: extable: add load_unaligned_zeropad() handler · 753b3236
      Mark Rutland authored
      For inline assembly, we place exception fixups out-of-line in the
      `.fixup` section such that these are out of the way of the fast path.
      This has a few drawbacks:
      
      * Since the fixup code is anonymous, backtraces will symbolize fixups as
        offsets from the nearest prior symbol, currently
        `__entry_tramp_text_end`. This is confusing, and painful to debug
        without access to the relevant vmlinux.
      
      * Since the exception handler adjusts the PC to execute the fixup, and
        the fixup uses a direct branch back into the function it fixes,
        backtraces of fixups miss the original function. This is confusing,
        and violates requirements for RELIABLE_STACKTRACE (and therefore
        LIVEPATCH).
      
      * Inline assembly and associated fixups are generated from templates,
        and we have many copies of logically identical fixups which only
        differ in which specific registers are written to and which address is
        branched to at the end of the fixup. This is potentially wasteful of
        I-cache resources, and makes it hard to add additional logic to fixups
        without significant bloat.
      
      * In the case of load_unaligned_zeropad(), the logic in the fixup
        requires a temporary register that we must allocate even in the
        fast-path where it will not be used.
      
      This patch addresses all four concerns for load_unaligned_zeropad() fixups
      by adding a dedicated exception handler which performs the fixup logic
      in exception context and subsequently returns to just after the faulting
      instruction. For the moment, the fixup logic is identical to the old
      assembly fixup logic, but in future we could enhance this by taking the
      ESR and FAR into account to constrain the faults we try to fix up, or to
      specialize fixups for MTE tag check faults.
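      
      As a rough sketch (assuming the entry's `data` field encodes the
      address and destination registers, and helpers along the lines of
      pt_regs_read_reg(), pt_regs_write_reg() and get_ex_fixup() from
      elsewhere in this series), the handler could look like:
      
        static bool
        ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
                                          struct pt_regs *regs)
        {
                int dst_reg = FIELD_GET(EX_DATA_REG_DATA, ex->data);
                int addr_reg = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
                unsigned long data, addr, offset;
        
                /* Redo the load from the aligned-down address... */
                addr = pt_regs_read_reg(regs, addr_reg);
                offset = addr & 0x7UL;
                addr &= ~0x7UL;
                data = *(unsigned long *)addr;
        
                /* ...and shift out the bytes that faulted (little-endian;
                 * a big-endian kernel would shift left instead). */
                data >>= 8 * offset;
        
                pt_regs_write_reg(regs, dst_reg, data);
                regs->pc = get_ex_fixup(ex);    /* resume after the faulting insn */
                return true;
        }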
      
      Other than backtracing, there should be no functional change as a result
      of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-13-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: extable: add a dedicated uaccess handler · 2e77a62c
      Mark Rutland authored
      For inline assembly, we place exception fixups out-of-line in the
      `.fixup` section such that these are out of the way of the fast path.
      This has a few drawbacks:
      
      * Since the fixup code is anonymous, backtraces will symbolize fixups as
        offsets from the nearest prior symbol, currently
        `__entry_tramp_text_end`. This is confusing, and painful to debug
        without access to the relevant vmlinux.
      
      * Since the exception handler adjusts the PC to execute the fixup, and
        the fixup uses a direct branch back into the function it fixes,
        backtraces of fixups miss the original function. This is confusing,
        and violates requirements for RELIABLE_STACKTRACE (and therefore
        LIVEPATCH).
      
      * Inline assembly and associated fixups are generated from templates,
        and we have many copies of logically identical fixups which only
        differ in which specific registers are written to and which address is
        branched to at the end of the fixup. This is potentially wasteful of
        I-cache resources, and makes it hard to add additional logic to fixups
        without significant bloat.
      
      This patch addresses all three concerns for inline uaccess fixups by
      adding a dedicated exception handler which updates registers in
      exception context and subsequently returns back into the function which
      faulted, removing the need for fixups specialized to each faulting
      instruction.
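      
      As a minimal sketch (assuming the entry's `data` field records which
      registers receive the error code and the zeroed value, per the
      `type`/`data` patch in this series), such a handler could be:
      
        static bool
        ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
                                    struct pt_regs *regs)
        {
                int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
                int reg_zero = FIELD_GET(EX_DATA_REG_ZERO, ex->data);
        
                /* One shared handler sets -EFAULT and zeroes the value
                 * register, replacing per-instruction fixup stubs. */
                pt_regs_write_reg(regs, reg_err, -EFAULT);
                pt_regs_write_reg(regs, reg_zero, 0);
        
                regs->pc = get_ex_fixup(ex);
                return true;
        }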
      
      Other than backtracing, there should be no functional change as a result
      of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-12-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: extable: add `type` and `data` fields · d6e2cc56
      Mark Rutland authored
      Subsequent patches will add specialized handlers for fixups, in addition
      to the simple PC fixup and BPF handlers we have today. In preparation,
      this patch adds a new `type` field to struct exception_table_entry, and
      uses this to distinguish the fixup and BPF cases. A `data` field is also
      added so that subsequent patches can associate data specific to each
      exception site (e.g. register numbers).
      
      Handlers are named ex_handler_*() for consistency, following the example
      of x86. At the same time, get_ex_fixup() is split out into a helper so
      that it can be used by other ex_handler_*() functions in subsequent
      patches.
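      
      A sketch of the resulting layout and helper (exact kernel definitions
      may differ in detail):
      
        struct exception_table_entry {
                int insn, fixup;        /* 32-bit relative offsets */
                short type, data;       /* handler type + per-site data */
        };
        
        static inline unsigned long
        get_ex_fixup(const struct exception_table_entry *ex)
        {
                return ((unsigned long)&ex->fixup + ex->fixup);
        }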
      
      This patch will increase the size of the exception tables, which will be
      remedied by subsequent patches removing redundant fixup code. There
      should be no functional change as a result of this patch.
      
      Since each entry is now 12 bytes in size, we must reduce the alignment
      of each entry from `.align 3` (i.e. 8 bytes) to `.align 2` (i.e. 4
      bytes), which is the natural alignment of the `insn` and `fixup` fields.
      The current 8-byte alignment is a holdover from when the `insn` and
      `fixup` fields were 8 bytes, and while not harmful has not been necessary
      since commit:
      
        6c94f27a ("arm64: switch to relative exception tables")
      
      Similarly, RO_EXCEPTION_TABLE_ALIGN is dropped to 4 bytes.
      
      Concurrently with this patch, x86's exception table entry format is
      being updated (similarly to a 12-byte format, with 32 bits of absolute
      data). Once both have been merged it should be possible to unify the
      sorttable logic for the two.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Andrii Nakryiko <andrii@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-11-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: extable: use `ex` for `exception_table_entry` · 5d0e7905
      Mark Rutland authored
      Subsequent patches will extend `struct exception_table_entry` with more
      fields, and the distinction between the entry and its `fixup` field will
      become more important.
      
      For clarity, let's consistently use `ex` to refer to an entire
      entry. In subsequent patches we'll use `fixup` to refer to the fixup
      field specifically. This matches the naming convention used today in
      arch/arm64/net/bpf_jit_comp.c.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-10-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: extable: make fixup_exception() return bool · e8c328d7
      Mark Rutland authored
      The return values of fixup_exception() and arm64_bpf_fixup_exception()
      represent a boolean condition rather than an error code, so for clarity
      it would be better to return `bool` rather than `int`.
      
      This patch adjusts the code accordingly. While we're modifying the
      prototype, we also remove the unnecessary `extern` keyword, so that this
      won't look out of place when we make subsequent additions to the header.
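      
      Roughly, the declarations become:
      
        bool fixup_exception(struct pt_regs *regs);
        bool arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
                                       struct pt_regs *regs);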
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Andrii Nakryiko <andrii@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-9-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: extable: consolidate definitions · 819771cc
      Mark Rutland authored
      In subsequent patches we'll alter the structure and usage of struct
      exception_table_entry. For inline assembly, we create these using the
      `_ASM_EXTABLE()` CPP macro defined in <asm/uaccess.h>, and for plain
      assembly code we use the `_asm_extable()` GAS macro defined in
      <asm/assembler.h>, which are largely identical save for different
      escaping and stringification requirements.
      
      This patch moves the common definitions to a new <asm/asm-extable.h>
      header, so that it's easier to keep the two in-sync, and to remove the
      implication that these are only used for uaccess helpers (as e.g.
      load_unaligned_zeropad() is only used on kernel memory, and depends upon
      `_ASM_EXTABLE()`).
      
      At the same time, a few minor modifications are made for clarity and in
      preparation for subsequent patches:
      
      * The structure creation is factored out into an `__ASM_EXTABLE_RAW()`
        macro. This will make it easier to support different fixup variants in
        subsequent patches without needing to update all users of
        `_ASM_EXTABLE()`, and makes it easier to see that the CPP and GAS
        variants of the macros are structurally identical (a sketch follows
        this list).
      
        For the CPP macro, the stringification of fields is left to the
        wrapper macro, `_ASM_EXTABLE()`, as in subsequent patches it will be
        necessary to stringify fields in wrapper macros to safely concatenate
        strings which cannot be token-pasted together in CPP.
      
      * The fields of the structure are created separately on their own lines.
        This will make it easier to add/remove/modify individual fields
        clearly.
      
      * Additional parentheses are added around the use of macro arguments in
        field definitions to avoid any potential problems with evaluation due
        to operator precedence, and to make errors upon misuse clearer.
      
      * USER() is moved into <asm/asm-uaccess.h>, as it is not required by all
        assembly code, and is already referred to by comments in that file.
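      
      A sketch of the common CPP shape after consolidation (entries are
      still two 32-bit relative offsets at this point; the GAS variant is
      structurally identical):
      
        #define __ASM_EXTABLE_RAW(insn, fixup)          \
                ".pushsection   __ex_table, \"a\"\n"    \
                ".align         3\n"                    \
                ".long          ((" insn ") - .)\n"     \
                ".long          ((" fixup ") - .)\n"    \
                ".popsection\n"
        
        #define _ASM_EXTABLE(insn, fixup)               \
                __ASM_EXTABLE_RAW(#insn, #fixup)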
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-8-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: gpr-num: support W registers · 286fba6c
      Mark Rutland authored
      In subsequent patches we'll want to map W registers to their register
      numbers. Update gpr-num.h so that we can do this.
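      
      A sketch of the updated C-side helper, following the existing
      X-register pattern (exact header contents may differ):
      
        #define __DEFINE_ASM_GPR_NUMS                          \
        "       .irp    num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \
        "       .equ    .L__gpr_num_x\\num, \\num\n"            \
        "       .equ    .L__gpr_num_w\\num, \\num\n"            \
        "       .endr\n"                                        \
        "       .equ    .L__gpr_num_xzr, 31\n"                  \
        "       .equ    .L__gpr_num_wzr, 31\n"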
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-7-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: factor out GPR numbering helpers · 8ed1b498
      Mark Rutland authored
      In <asm/sysreg.h> we have macros to convert the names of general purpose
      registers (GPRs) into integer constants, which we use to manually build
      the encoding for `MRS` and `MSR` instructions where we can't rely on the
      assembler to do so for us.
      
      In subsequent patches we'll need to map the same GPR names to integer
      constants so that we can use this to build metadata for exception
      fixups.
      
      So that we can use the mappings elsewhere, factor out the
      definitions into a new <asm/gpr-num.h> header, renaming the definitions
      to align with this "GPR num" naming for clarity.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-6-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: kvm: use kvm_exception_table_entry · ae2b2f33
      Mark Rutland authored
      In subsequent patches we'll alter `struct exception_table_entry`, adding
      fields that are not needed for KVM exception fixups.
      
      In preparation for this, migrate KVM to its own `struct
      kvm_exception_table_entry`, which is identical to the current format of
      `struct exception_table_entry`. Comments are updated accordingly.
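      
      i.e. a private copy of the existing two-field layout, roughly:
      
        /* Identical to the current arm64 exception_table_entry format:
         * two 32-bit offsets relative to the entry itself. */
        struct kvm_exception_table_entry {
                int insn, fixup;
        };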
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Alexandru Elisei <alexandru.elisei@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-5-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: lib: __arch_copy_to_user(): fold fixups into body · 139f9ab7
      Mark Rutland authored
      Like other functions, __arch_copy_to_user() places its exception fixups
      in the `.fixup` section without any clear association with
      __arch_copy_to_user() itself. If we backtrace the fixup code, it will be
      symbolized as an offset from the nearest prior symbol, which happens to
      be `__entry_tramp_text_end`. Further, since the PC adjustment for the
      fixup is akin to a direct branch rather than a function call,
      __arch_copy_to_user() itself will be missing from the backtrace.
      
      This is confusing and hinders debugging. In general this pattern will
      also be problematic for CONFIG_LIVEPATCH, since fixups often return to
      their associated function, but this isn't accurately captured in the
      stacktrace.
      
      To solve these issues for assembly functions, we must move fixups into
      the body of the functions themselves, after the usual fast-path returns.
      This patch does so for __arch_copy_to_user().
      
      Inline assembly will be dealt with in subsequent patches.
      
      Other than the improved backtracing, there should be no functional
      change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-4-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: lib: __arch_copy_from_user(): fold fixups into body · 4012e0e2
      Mark Rutland authored
      Like other functions, __arch_copy_from_user() places its exception
      fixups in the `.fixup` section without any clear association with
      __arch_copy_from_user() itself. If we backtrace the fixup code, it will
      be symbolized as an offset from the nearest prior symbol, which happens
      to be `__entry_tramp_text_end`. Further, since the PC adjustment for the
      fixup is akin to a direct branch rather than a function call,
      __arch_copy_from_user() itself will be missing from the backtrace.
      
      This is confusing and hinders debugging. In general this pattern will
      also be problematic for CONFIG_LIVEPATCH, since fixups often return to
      their associated function, but this isn't accurately captured in the
      stacktrace.
      
      To solve these issues for assembly functions, we must move fixups into
      the body of the functions themselves, after the usual fast-path returns.
      This patch does so for __arch_copy_from_user().
      
      Inline assembly will be dealt with in subsequent patches.
      
      Other than the improved backtracing, there should be no functional
      change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-3-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: lib: __arch_clear_user(): fold fixups into body · 35d67794
      Mark Rutland authored
      Like other functions, __arch_clear_user() places its exception fixups in
      the `.fixup` section without any clear association with
      __arch_clear_user() itself. If we backtrace the fixup code, it will be
      symbolized as an offset from the nearest prior symbol, which happens to
      be `__entry_tramp_text_end`. Further, since the PC adjustment for the
      fixup is akin to a direct branch rather than a function call,
      __arch_clear_user() itself will be missing from the backtrace.
      
      This is confusing and hinders debugging. In general this pattern will
      also be problematic for CONFIG_LIVEPATCH, since fixups often return to
      their associated function, but this isn't accurately captured in the
      stacktrace.
      
      To solve these issues for assembly functions, we must move fixups into
      the body of the functions themselves, after the usual fast-path returns.
      This patch does so for __arch_clear_user().
      
      Inline assembly will be dealt with in subsequent patches.
      
      Other than the improved backtracing, there should be no functional
      change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-2-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  3. 19 Oct, 2021 5 commits
  4. 18 Oct, 2021 2 commits
  5. 17 Oct, 2021 11 commits
  6. 01 Oct, 2021 6 commits