Commit 3cd86a58 authored by Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
 "The bulk is in-kernel pointer authentication, activity monitors and
  lots of asm symbol annotations. I also queued the sys_mremap() patch
  commenting the asymmetry in the address untagging.

  Summary:

   - In-kernel Pointer Authentication support (previously only offered
     to user space).

   - ARM Activity Monitors (AMU) extension support allowing better CPU
     utilisation numbers for the scheduler (frequency invariance).

   - Memory hot-remove support for arm64.

   - Lots of asm annotations (SYM_*) in preparation for the in-kernel
     Branch Target Identification (BTI) support.

   - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the
     PMU init callbacks, support for new DT compatibles.

   - IPv6 header checksum optimisation.

   - Fixes: SDEI (software delegated exception interface) double-lock on
     hibernate with shared events.

   - Minor clean-ups and refactoring: cpu_ops accessor,
     cpu_do_switch_mm() converted to C, cpufeature finalisation helper.

   - sys_mremap() comment explaining the asymmetric address untagging
     behaviour"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
  mm/mremap: Add comment explaining the untagging behaviour of mremap()
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  ...
parents a8222fd5 b2a84de2
=======================================================
Activity Monitors Unit (AMU) extension in AArch64 Linux
=======================================================
Author: Ionela Voinescu <ionela.voinescu@arm.com>
Date: 2019-09-10

This document briefly describes the provision of Activity Monitors Unit
support in AArch64 Linux.
Architecture overview
---------------------

The activity monitors extension is an optional extension introduced by the
ARMv8.4 CPU architecture.

The activity monitors unit, implemented in each CPU, provides performance
counters intended for system management use. The AMU extension provides a
system register interface to the counter registers and also supports an
optional external memory-mapped interface.

Version 1 of the Activity Monitors architecture implements a counter group
of four fixed and architecturally defined 64-bit event counters.

- CPU cycle counter: increments at the frequency of the CPU.
- Constant counter: increments at the fixed frequency of the system
  clock.
- Instructions retired: increments with every architecturally executed
  instruction.
- Memory stall cycles: counts instruction dispatch stall cycles caused by
  misses in the last level cache within the clock domain.

When in WFI or WFE these counters do not increment.

The Activity Monitors architecture provides space for up to 16 architected
event counters. Future versions of the architecture may use this space to
implement additional architected event counters.

Additionally, version 1 implements a counter group of up to 16 auxiliary
64-bit event counters.

On cold reset all counters reset to 0.
Basic support
-------------

The kernel can safely run a mix of CPUs with and without support for the
activity monitors extension. Therefore, when CONFIG_ARM64_AMU_EXTN is
selected we unconditionally enable the capability to allow any late CPU
(secondary or hotplugged) to detect and use the feature.

When the feature is detected on a CPU we flag its availability, but this
only guarantees the presence of the extension, not the correct
functionality of the counters.

Firmware (code running at higher exception levels, e.g. arm-tf) support is
needed to:

- Enable access for lower exception levels (EL2 and EL1) to the AMU
  registers.
- Enable the counters. If not enabled these will read as 0.
- Save/restore the counters before/after the CPU is put into/brought up
  from the 'off' power state.

When using kernels that have this feature enabled but boot with broken
firmware, the user may experience panics or lockups when accessing the
counter registers. Even if these symptoms are not observed, the values
returned by the register reads might not correctly reflect reality. Most
commonly, the counters will read as 0, indicating that they are not
enabled.

If proper support is not provided in firmware it's best to disable
CONFIG_ARM64_AMU_EXTN. Note that, for security reasons, this does not
bypass the setting of AMUSERENR_EL0 to trap accesses from EL0 (userspace)
to EL1 (kernel). Therefore, firmware should still ensure accesses to AMU
registers are not trapped in EL2/EL3.
The fixed counters of AMUv1 are accessible through the following system
register definitions:

- SYS_AMEVCNTR0_CORE_EL0
- SYS_AMEVCNTR0_CONST_EL0
- SYS_AMEVCNTR0_INST_RET_EL0
- SYS_AMEVCNTR0_MEM_STALL_EL0

Auxiliary platform specific counters can be accessed using
SYS_AMEVCNTR1_EL0(n), where n is a value between 0 and 15.

Details can be found in: arch/arm64/include/asm/sysreg.h.
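
For illustration only, the sketch below reads the core and constant
counters with the generic sysreg accessors and combines them into a
frequency-invariance style ratio. This is a minimal sketch of the idea,
not the kernel's actual implementation (which lives in
arch/arm64/kernel/topology.c); the helper name amu_freq_ratio() and the
10-bit fixed-point shift are assumptions made up for the example::

  #include <linux/math64.h>
  #include <asm/sysreg.h>

  /* Previous sample of each fixed counter (per-CPU in real code). */
  static u64 prev_core, prev_const;

  /*
   * Ratio of CPU cycles to constant-rate ticks over the last sampling
   * window, in 10-bit fixed point: roughly current_freq / const_freq.
   */
  static u64 amu_freq_ratio(void)
  {
          u64 core = read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);
          u64 cnst = read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
          u64 dcore = core - prev_core;
          u64 dconst = cnst - prev_const;

          prev_core = core;
          prev_const = cnst;

          if (!dconst)            /* no window elapsed yet */
                  return 0;

          return div64_u64(dcore << 10, dconst);
  }

In practice such a ratio would be sampled per CPU from the scheduler
tick, which is what the arch_scale_freq_tick hook wired up later in this
series is for.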
Userspace access
----------------

Currently, access from userspace to the AMU registers is disabled due to:

- Security reasons: they might expose information about code executed in
  secure mode.
- Purpose: AMU counters are intended for system management use.

Also, the presence of the feature is not visible to userspace.
Virtualization
--------------

Currently, access from userspace (EL0) and kernelspace (EL1) on the KVM
guest side is disabled due to:

- Security reasons: they might expose information about code executed
  by other guests or the host.

Any attempt to access the AMU registers will result in an UNDEFINED
exception being injected into the guest.
...@@ -248,6 +248,20 @@ Before jumping into the kernel, the following conditions must be met: ...@@ -248,6 +248,20 @@ Before jumping into the kernel, the following conditions must be met:
- HCR_EL2.APK (bit 40) must be initialised to 0b1 - HCR_EL2.APK (bit 40) must be initialised to 0b1
- HCR_EL2.API (bit 41) must be initialised to 0b1 - HCR_EL2.API (bit 41) must be initialised to 0b1
For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:

- If EL3 is present:

  CPTR_EL3.TAM (bit 30) must be initialised to 0b0
  CPTR_EL2.TAM (bit 30) must be initialised to 0b0
  AMCNTENSET0_EL0 must be initialised to 0b1111
  AMCNTENSET1_EL0 must be initialised to a platform specific value
  having 0b1 set for the corresponding bit for each of the auxiliary
  counters present.

- If the kernel is entered at EL1:

  AMCNTENSET0_EL0 must be initialised to 0b1111
  AMCNTENSET1_EL0 must be initialised to a platform specific value
  having 0b1 set for the corresponding bit for each of the auxiliary
  counters present.
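
As a rough illustration of the AMCNTENSET writes described above, the
sketch below expresses them with the sysreg helper names added by this
series. These writes are a firmware responsibility (EL3/EL2, e.g. ARM
Trusted Firmware) and are never performed by Linux itself; the function
name and the platform specific auxiliary mask are hypothetical
placeholders::

  #include <linux/bits.h>
  #include <asm/sysreg.h>

  /*
   * Firmware-side sketch only: Linux expects these enables to have been
   * done before it is entered and never performs them itself.
   */
  static void amu_fw_enable_counters(u64 plat_aux_mask) /* hypothetical helper */
  {
          /* 0b1111: enable the four architected (group 0) counters */
          write_sysreg_s(GENMASK(3, 0), SYS_AMCNTENSET0_EL0);
          /* platform specific mask covering the auxiliary counters present */
          write_sysreg_s(plat_aux_mask, SYS_AMCNTENSET1_EL0);
  }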
The requirements described above for CPU mode, caches, MMUs, architected The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level. enter the kernel in the same exception level.
......
...@@ -6,6 +6,7 @@ ARM64 Architecture ...@@ -6,6 +6,7 @@ ARM64 Architecture
:maxdepth: 1 :maxdepth: 1
acpi_object_usage acpi_object_usage
amu
arm-acpi arm-acpi
booting booting
cpu-feature-registers cpu-feature-registers
......
...@@ -117,6 +117,7 @@ config ARM64 ...@@ -117,6 +117,7 @@ config ARM64
select HAVE_ALIGNED_STRUCT_PAGE if SLUB select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_BITREVERSE select HAVE_ARCH_BITREVERSE
select HAVE_ARCH_COMPILER_H
select HAVE_ARCH_HUGE_VMAP select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE select HAVE_ARCH_JUMP_LABEL_RELATIVE
...@@ -280,6 +281,9 @@ config ZONE_DMA32 ...@@ -280,6 +281,9 @@ config ZONE_DMA32
config ARCH_ENABLE_MEMORY_HOTPLUG config ARCH_ENABLE_MEMORY_HOTPLUG
def_bool y def_bool y
config ARCH_ENABLE_MEMORY_HOTREMOVE
def_bool y
config SMP config SMP
def_bool y def_bool y
...@@ -951,11 +955,11 @@ config HOTPLUG_CPU ...@@ -951,11 +955,11 @@ config HOTPLUG_CPU
# Common NUMA Features # Common NUMA Features
config NUMA config NUMA
bool "Numa Memory Allocation and Scheduler Support" bool "NUMA Memory Allocation and Scheduler Support"
select ACPI_NUMA if ACPI select ACPI_NUMA if ACPI
select OF_NUMA select OF_NUMA
help help
Enable NUMA (Non Uniform Memory Access) support. Enable NUMA (Non-Uniform Memory Access) support.
The kernel will try to allocate memory used by a CPU on the The kernel will try to allocate memory used by a CPU on the
local memory of the CPU and add some more local memory of the CPU and add some more
...@@ -1497,6 +1501,9 @@ config ARM64_PTR_AUTH ...@@ -1497,6 +1501,9 @@ config ARM64_PTR_AUTH
bool "Enable support for pointer authentication" bool "Enable support for pointer authentication"
default y default y
depends on !KVM || ARM64_VHE depends on !KVM || ARM64_VHE
depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
depends on CC_IS_GCC || (CC_IS_CLANG && AS_HAS_CFI_NEGATE_RA_STATE)
depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
help help
Pointer authentication (part of the ARMv8.3 Extensions) provides Pointer authentication (part of the ARMv8.3 Extensions) provides
instructions for signing and authenticating pointers against secret instructions for signing and authenticating pointers against secret
...@@ -1504,16 +1511,72 @@ config ARM64_PTR_AUTH ...@@ -1504,16 +1511,72 @@ config ARM64_PTR_AUTH
and other attacks. and other attacks.
This option enables these instructions at EL0 (i.e. for userspace). This option enables these instructions at EL0 (i.e. for userspace).
Choosing this option will cause the kernel to initialise secret keys Choosing this option will cause the kernel to initialise secret keys
for each process at exec() time, with these keys being for each process at exec() time, with these keys being
context-switched along with the process. context-switched along with the process.
If the compiler supports the -mbranch-protection or
-msign-return-address flag (e.g. GCC 7 or later), then this option
will also cause the kernel itself to be compiled with return address
protection. In this case, and if the target hardware is known to
support pointer authentication, then CONFIG_STACKPROTECTOR can be
disabled with minimal loss of protection.
The feature is detected at runtime. If the feature is not present in The feature is detected at runtime. If the feature is not present in
hardware it will not be advertised to userspace/KVM guest nor will it hardware it will not be advertised to userspace/KVM guest nor will it
be enabled. However, KVM guest also require VHE mode and hence be enabled. However, KVM guest also require VHE mode and hence
CONFIG_ARM64_VHE=y option to use this feature. CONFIG_ARM64_VHE=y option to use this feature.
If the feature is present on the boot CPU but not on a late CPU, then
the late CPU will be parked. Also, if the boot CPU does not have
address auth and the late CPU has then the late CPU will still boot
but with the feature disabled. On such a system, this option should
not be selected.
This feature works with FUNCTION_GRAPH_TRACER option only if
DYNAMIC_FTRACE_WITH_REGS is enabled.
config CC_HAS_BRANCH_PROT_PAC_RET
# GCC 9 or later, clang 8 or later
def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)
config CC_HAS_SIGN_RETURN_ADDRESS
# GCC 7, 8
def_bool $(cc-option,-msign-return-address=all)
config AS_HAS_PAC
def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)
config AS_HAS_CFI_NEGATE_RA_STATE
def_bool $(as-instr,.cfi_startproc\n.cfi_negate_ra_state\n.cfi_endproc\n)
endmenu
menu "ARMv8.4 architectural features"
config ARM64_AMU_EXTN
bool "Enable support for the Activity Monitors Unit CPU extension"
default y
help
The activity monitors extension is an optional extension introduced
by the ARMv8.4 CPU architecture. This enables support for version 1
of the activity monitors architecture, AMUv1.
To enable the use of this extension on CPUs that implement it, say Y.
Note that for architectural reasons, firmware _must_ implement AMU
support when running on CPUs that present the activity monitors
extension. The required support is present in:
* Version 1.5 and later of the ARM Trusted Firmware
For kernels that have this configuration enabled but boot with broken
firmware, you may need to say N here until the firmware is fixed.
Otherwise you may experience firmware panics or lockups when
accessing the counter registers. Even if you are not observing these
symptoms, the values returned by the register reads might not
correctly reflect reality. Most commonly, the value read will be 0,
indicating that the counter is not enabled.
endmenu endmenu
menu "ARMv8.5 architectural features" menu "ARMv8.5 architectural features"
......
...@@ -65,6 +65,17 @@ stack_protector_prepare: prepare0 ...@@ -65,6 +65,17 @@ stack_protector_prepare: prepare0
include/generated/asm-offsets.h)) include/generated/asm-offsets.h))
endif endif
ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the
# compiler to generate them and consequently to break the single image contract
# we pass it only to the assembler. This option is utilized only in case of non
# integrated assemblers.
branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a
KBUILD_CFLAGS += $(branch-prot-flags-y)
endif
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__ CHECKFLAGS += -D__AARCH64EB__
......
...@@ -9,8 +9,8 @@ ...@@ -9,8 +9,8 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#define AES_ENTRY(func) SYM_FUNC_START(ce_ ## func) #define AES_FUNC_START(func) SYM_FUNC_START(ce_ ## func)
#define AES_ENDPROC(func) SYM_FUNC_END(ce_ ## func) #define AES_FUNC_END(func) SYM_FUNC_END(ce_ ## func)
.arch armv8-a+crypto .arch armv8-a+crypto
......
...@@ -51,7 +51,7 @@ SYM_FUNC_END(aes_decrypt_block5x) ...@@ -51,7 +51,7 @@ SYM_FUNC_END(aes_decrypt_block5x)
* int blocks) * int blocks)
*/ */
AES_ENTRY(aes_ecb_encrypt) AES_FUNC_START(aes_ecb_encrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -79,10 +79,10 @@ ST5( st1 {v4.16b}, [x0], #16 ) ...@@ -79,10 +79,10 @@ ST5( st1 {v4.16b}, [x0], #16 )
.Lecbencout: .Lecbencout:
ldp x29, x30, [sp], #16 ldp x29, x30, [sp], #16
ret ret
AES_ENDPROC(aes_ecb_encrypt) AES_FUNC_END(aes_ecb_encrypt)
AES_ENTRY(aes_ecb_decrypt) AES_FUNC_START(aes_ecb_decrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -110,7 +110,7 @@ ST5( st1 {v4.16b}, [x0], #16 ) ...@@ -110,7 +110,7 @@ ST5( st1 {v4.16b}, [x0], #16 )
.Lecbdecout: .Lecbdecout:
ldp x29, x30, [sp], #16 ldp x29, x30, [sp], #16
ret ret
AES_ENDPROC(aes_ecb_decrypt) AES_FUNC_END(aes_ecb_decrypt)
/* /*
...@@ -126,7 +126,7 @@ AES_ENDPROC(aes_ecb_decrypt) ...@@ -126,7 +126,7 @@ AES_ENDPROC(aes_ecb_decrypt)
* u32 const rk2[]); * u32 const rk2[]);
*/ */
AES_ENTRY(aes_essiv_cbc_encrypt) AES_FUNC_START(aes_essiv_cbc_encrypt)
ld1 {v4.16b}, [x5] /* get iv */ ld1 {v4.16b}, [x5] /* get iv */
mov w8, #14 /* AES-256: 14 rounds */ mov w8, #14 /* AES-256: 14 rounds */
...@@ -135,7 +135,7 @@ AES_ENTRY(aes_essiv_cbc_encrypt) ...@@ -135,7 +135,7 @@ AES_ENTRY(aes_essiv_cbc_encrypt)
enc_switch_key w3, x2, x6 enc_switch_key w3, x2, x6
b .Lcbcencloop4x b .Lcbcencloop4x
AES_ENTRY(aes_cbc_encrypt) AES_FUNC_START(aes_cbc_encrypt)
ld1 {v4.16b}, [x5] /* get iv */ ld1 {v4.16b}, [x5] /* get iv */
enc_prepare w3, x2, x6 enc_prepare w3, x2, x6
...@@ -167,10 +167,10 @@ AES_ENTRY(aes_cbc_encrypt) ...@@ -167,10 +167,10 @@ AES_ENTRY(aes_cbc_encrypt)
.Lcbcencout: .Lcbcencout:
st1 {v4.16b}, [x5] /* return iv */ st1 {v4.16b}, [x5] /* return iv */
ret ret
AES_ENDPROC(aes_cbc_encrypt) AES_FUNC_END(aes_cbc_encrypt)
AES_ENDPROC(aes_essiv_cbc_encrypt) AES_FUNC_END(aes_essiv_cbc_encrypt)
AES_ENTRY(aes_essiv_cbc_decrypt) AES_FUNC_START(aes_essiv_cbc_decrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -181,7 +181,7 @@ AES_ENTRY(aes_essiv_cbc_decrypt) ...@@ -181,7 +181,7 @@ AES_ENTRY(aes_essiv_cbc_decrypt)
encrypt_block cbciv, w8, x6, x7, w9 encrypt_block cbciv, w8, x6, x7, w9
b .Lessivcbcdecstart b .Lessivcbcdecstart
AES_ENTRY(aes_cbc_decrypt) AES_FUNC_START(aes_cbc_decrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -238,8 +238,8 @@ ST5( st1 {v4.16b}, [x0], #16 ) ...@@ -238,8 +238,8 @@ ST5( st1 {v4.16b}, [x0], #16 )
st1 {cbciv.16b}, [x5] /* return iv */ st1 {cbciv.16b}, [x5] /* return iv */
ldp x29, x30, [sp], #16 ldp x29, x30, [sp], #16
ret ret
AES_ENDPROC(aes_cbc_decrypt) AES_FUNC_END(aes_cbc_decrypt)
AES_ENDPROC(aes_essiv_cbc_decrypt) AES_FUNC_END(aes_essiv_cbc_decrypt)
/* /*
...@@ -249,7 +249,7 @@ AES_ENDPROC(aes_essiv_cbc_decrypt) ...@@ -249,7 +249,7 @@ AES_ENDPROC(aes_essiv_cbc_decrypt)
* int rounds, int bytes, u8 const iv[]) * int rounds, int bytes, u8 const iv[])
*/ */
AES_ENTRY(aes_cbc_cts_encrypt) AES_FUNC_START(aes_cbc_cts_encrypt)
adr_l x8, .Lcts_permute_table adr_l x8, .Lcts_permute_table
sub x4, x4, #16 sub x4, x4, #16
add x9, x8, #32 add x9, x8, #32
...@@ -276,9 +276,9 @@ AES_ENTRY(aes_cbc_cts_encrypt) ...@@ -276,9 +276,9 @@ AES_ENTRY(aes_cbc_cts_encrypt)
st1 {v0.16b}, [x4] /* overlapping stores */ st1 {v0.16b}, [x4] /* overlapping stores */
st1 {v1.16b}, [x0] st1 {v1.16b}, [x0]
ret ret
AES_ENDPROC(aes_cbc_cts_encrypt) AES_FUNC_END(aes_cbc_cts_encrypt)
AES_ENTRY(aes_cbc_cts_decrypt) AES_FUNC_START(aes_cbc_cts_decrypt)
adr_l x8, .Lcts_permute_table adr_l x8, .Lcts_permute_table
sub x4, x4, #16 sub x4, x4, #16
add x9, x8, #32 add x9, x8, #32
...@@ -305,7 +305,7 @@ AES_ENTRY(aes_cbc_cts_decrypt) ...@@ -305,7 +305,7 @@ AES_ENTRY(aes_cbc_cts_decrypt)
st1 {v2.16b}, [x4] /* overlapping stores */ st1 {v2.16b}, [x4] /* overlapping stores */
st1 {v0.16b}, [x0] st1 {v0.16b}, [x0]
ret ret
AES_ENDPROC(aes_cbc_cts_decrypt) AES_FUNC_END(aes_cbc_cts_decrypt)
.section ".rodata", "a" .section ".rodata", "a"
.align 6 .align 6
...@@ -324,7 +324,7 @@ AES_ENDPROC(aes_cbc_cts_decrypt) ...@@ -324,7 +324,7 @@ AES_ENDPROC(aes_cbc_cts_decrypt)
* int blocks, u8 ctr[]) * int blocks, u8 ctr[])
*/ */
AES_ENTRY(aes_ctr_encrypt) AES_FUNC_START(aes_ctr_encrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -409,7 +409,7 @@ ST5( st1 {v4.16b}, [x0], #16 ) ...@@ -409,7 +409,7 @@ ST5( st1 {v4.16b}, [x0], #16 )
rev x7, x7 rev x7, x7
ins vctr.d[0], x7 ins vctr.d[0], x7
b .Lctrcarrydone b .Lctrcarrydone
AES_ENDPROC(aes_ctr_encrypt) AES_FUNC_END(aes_ctr_encrypt)
/* /*
...@@ -433,7 +433,7 @@ AES_ENDPROC(aes_ctr_encrypt) ...@@ -433,7 +433,7 @@ AES_ENDPROC(aes_ctr_encrypt)
uzp1 xtsmask.4s, xtsmask.4s, \tmp\().4s uzp1 xtsmask.4s, xtsmask.4s, \tmp\().4s
.endm .endm
AES_ENTRY(aes_xts_encrypt) AES_FUNC_START(aes_xts_encrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -518,9 +518,9 @@ AES_ENTRY(aes_xts_encrypt) ...@@ -518,9 +518,9 @@ AES_ENTRY(aes_xts_encrypt)
st1 {v2.16b}, [x4] /* overlapping stores */ st1 {v2.16b}, [x4] /* overlapping stores */
mov w4, wzr mov w4, wzr
b .Lxtsencctsout b .Lxtsencctsout
AES_ENDPROC(aes_xts_encrypt) AES_FUNC_END(aes_xts_encrypt)
AES_ENTRY(aes_xts_decrypt) AES_FUNC_START(aes_xts_decrypt)
stp x29, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
...@@ -612,13 +612,13 @@ AES_ENTRY(aes_xts_decrypt) ...@@ -612,13 +612,13 @@ AES_ENTRY(aes_xts_decrypt)
st1 {v2.16b}, [x4] /* overlapping stores */ st1 {v2.16b}, [x4] /* overlapping stores */
mov w4, wzr mov w4, wzr
b .Lxtsdecctsout b .Lxtsdecctsout
AES_ENDPROC(aes_xts_decrypt) AES_FUNC_END(aes_xts_decrypt)
/* /*
* aes_mac_update(u8 const in[], u32 const rk[], int rounds, * aes_mac_update(u8 const in[], u32 const rk[], int rounds,
* int blocks, u8 dg[], int enc_before, int enc_after) * int blocks, u8 dg[], int enc_before, int enc_after)
*/ */
AES_ENTRY(aes_mac_update) AES_FUNC_START(aes_mac_update)
frame_push 6 frame_push 6
mov x19, x0 mov x19, x0
...@@ -676,4 +676,4 @@ AES_ENTRY(aes_mac_update) ...@@ -676,4 +676,4 @@ AES_ENTRY(aes_mac_update)
ld1 {v0.16b}, [x23] /* get dg */ ld1 {v0.16b}, [x23] /* get dg */
enc_prepare w21, x20, x0 enc_prepare w21, x20, x0
b .Lmacloop4x b .Lmacloop4x
AES_ENDPROC(aes_mac_update) AES_FUNC_END(aes_mac_update)
...@@ -8,8 +8,8 @@ ...@@ -8,8 +8,8 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#define AES_ENTRY(func) SYM_FUNC_START(neon_ ## func) #define AES_FUNC_START(func) SYM_FUNC_START(neon_ ## func)
#define AES_ENDPROC(func) SYM_FUNC_END(neon_ ## func) #define AES_FUNC_END(func) SYM_FUNC_END(neon_ ## func)
xtsmask .req v7 xtsmask .req v7
cbciv .req v7 cbciv .req v7
......
...@@ -587,20 +587,20 @@ CPU_LE( rev w8, w8 ) ...@@ -587,20 +587,20 @@ CPU_LE( rev w8, w8 )
* struct ghash_key const *k, u64 dg[], u8 ctr[], * struct ghash_key const *k, u64 dg[], u8 ctr[],
* int rounds, u8 tag) * int rounds, u8 tag)
*/ */
ENTRY(pmull_gcm_encrypt) SYM_FUNC_START(pmull_gcm_encrypt)
pmull_gcm_do_crypt 1 pmull_gcm_do_crypt 1
ENDPROC(pmull_gcm_encrypt) SYM_FUNC_END(pmull_gcm_encrypt)
/* /*
* void pmull_gcm_decrypt(int blocks, u8 dst[], const u8 src[], * void pmull_gcm_decrypt(int blocks, u8 dst[], const u8 src[],
* struct ghash_key const *k, u64 dg[], u8 ctr[], * struct ghash_key const *k, u64 dg[], u8 ctr[],
* int rounds, u8 tag) * int rounds, u8 tag)
*/ */
ENTRY(pmull_gcm_decrypt) SYM_FUNC_START(pmull_gcm_decrypt)
pmull_gcm_do_crypt 0 pmull_gcm_do_crypt 0
ENDPROC(pmull_gcm_decrypt) SYM_FUNC_END(pmull_gcm_decrypt)
pmull_gcm_ghash_4x: SYM_FUNC_START_LOCAL(pmull_gcm_ghash_4x)
movi MASK.16b, #0xe1 movi MASK.16b, #0xe1
shl MASK.2d, MASK.2d, #57 shl MASK.2d, MASK.2d, #57
...@@ -681,9 +681,9 @@ pmull_gcm_ghash_4x: ...@@ -681,9 +681,9 @@ pmull_gcm_ghash_4x:
eor XL.16b, XL.16b, T2.16b eor XL.16b, XL.16b, T2.16b
ret ret
ENDPROC(pmull_gcm_ghash_4x) SYM_FUNC_END(pmull_gcm_ghash_4x)
pmull_gcm_enc_4x: SYM_FUNC_START_LOCAL(pmull_gcm_enc_4x)
ld1 {KS0.16b}, [x5] // load upper counter ld1 {KS0.16b}, [x5] // load upper counter
sub w10, w8, #4 sub w10, w8, #4
sub w11, w8, #3 sub w11, w8, #3
...@@ -746,7 +746,7 @@ pmull_gcm_enc_4x: ...@@ -746,7 +746,7 @@ pmull_gcm_enc_4x:
eor INP3.16b, INP3.16b, KS3.16b eor INP3.16b, INP3.16b, KS3.16b
ret ret
ENDPROC(pmull_gcm_enc_4x) SYM_FUNC_END(pmull_gcm_enc_4x)
.section ".rodata", "a" .section ".rodata", "a"
.align 6 .align 6
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_ASM_POINTER_AUTH_H
#define __ASM_ASM_POINTER_AUTH_H
#include <asm/alternative.h>
#include <asm/asm-offsets.h>
#include <asm/cpufeature.h>
#include <asm/sysreg.h>
#ifdef CONFIG_ARM64_PTR_AUTH
/*
* thread.keys_user.ap* as offset exceeds the #imm offset range
* so use the base value of ldp as thread.keys_user and offset as
* thread.keys_user.ap*.
*/
.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
mov \tmp1, #THREAD_KEYS_USER
add \tmp1, \tsk, \tmp1
alternative_if_not ARM64_HAS_ADDRESS_AUTH
b .Laddr_auth_skip_\@
alternative_else_nop_endif
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
msr_s SYS_APIAKEYLO_EL1, \tmp2
msr_s SYS_APIAKEYHI_EL1, \tmp3
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
msr_s SYS_APIBKEYLO_EL1, \tmp2
msr_s SYS_APIBKEYHI_EL1, \tmp3
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
msr_s SYS_APDAKEYLO_EL1, \tmp2
msr_s SYS_APDAKEYHI_EL1, \tmp3
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
msr_s SYS_APDBKEYLO_EL1, \tmp2
msr_s SYS_APDBKEYHI_EL1, \tmp3
.Laddr_auth_skip_\@:
alternative_if ARM64_HAS_GENERIC_AUTH
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
msr_s SYS_APGAKEYLO_EL1, \tmp2
msr_s SYS_APGAKEYHI_EL1, \tmp3
alternative_else_nop_endif
.endm
.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
alternative_if ARM64_HAS_ADDRESS_AUTH
mov \tmp1, #THREAD_KEYS_KERNEL
add \tmp1, \tsk, \tmp1
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
msr_s SYS_APIAKEYLO_EL1, \tmp2
msr_s SYS_APIAKEYHI_EL1, \tmp3
.if \sync == 1
isb
.endif
alternative_else_nop_endif
.endm
#else /* CONFIG_ARM64_PTR_AUTH */
.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
.endm
.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
.endm
#endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_ASM_POINTER_AUTH_H */
...@@ -256,12 +256,6 @@ alternative_endif ...@@ -256,12 +256,6 @@ alternative_endif
ldr \rd, [\rn, #VMA_VM_MM] ldr \rd, [\rn, #VMA_VM_MM]
.endm .endm
/*
* mmid - get context id from mm pointer (mm->context.id)
*/
.macro mmid, rd, rn
ldr \rd, [\rn, #MM_CONTEXT_ID]
.endm
/* /*
* read_ctr - read CTR_EL0. If the system has mismatched register fields, * read_ctr - read CTR_EL0. If the system has mismatched register fields,
* provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
...@@ -430,6 +424,16 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU ...@@ -430,6 +424,16 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU
9000: 9000:
.endm .endm
/*
* reset_amuserenr_el0 - reset AMUSERENR_EL0 if AMUv1 present
*/
.macro reset_amuserenr_el0, tmpreg
mrs \tmpreg, id_aa64pfr0_el1 // Check ID_AA64PFR0_EL1
ubfx \tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4
cbz \tmpreg, .Lskip_\@ // Skip if no AMU present
msr_s SYS_AMUSERENR_EL0, xzr // Disable AMU access from EL0
.Lskip_\@:
.endm
/* /*
* copy_page - copy src to dest using temp registers t1-t8 * copy_page - copy src to dest using temp registers t1-t8
*/ */
......
...@@ -5,7 +5,12 @@ ...@@ -5,7 +5,12 @@
#ifndef __ASM_CHECKSUM_H #ifndef __ASM_CHECKSUM_H
#define __ASM_CHECKSUM_H #define __ASM_CHECKSUM_H
#include <linux/types.h> #include <linux/in6.h>
#define _HAVE_ARCH_IPV6_CSUM
__sum16 csum_ipv6_magic(const struct in6_addr *saddr,
const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
static inline __sum16 csum_fold(__wsum csum) static inline __sum16 csum_fold(__wsum csum)
{ {
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_COMPILER_H
#define __ASM_COMPILER_H
#if defined(CONFIG_ARM64_PTR_AUTH)
/*
* The EL0/EL1 pointer bits used by a pointer authentication code.
* This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
*/
#define ptrauth_user_pac_mask() GENMASK_ULL(54, vabits_actual)
#define ptrauth_kernel_pac_mask() GENMASK_ULL(63, vabits_actual)
/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
#define ptrauth_clear_pac(ptr) \
((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) : \
(ptr & ~ptrauth_user_pac_mask()))
#define __builtin_return_address(val) \
(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
#endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_COMPILER_H */
...@@ -55,12 +55,12 @@ struct cpu_operations { ...@@ -55,12 +55,12 @@ struct cpu_operations {
#endif #endif
}; };
extern const struct cpu_operations *cpu_ops[NR_CPUS]; int __init init_cpu_ops(int cpu);
int __init cpu_read_ops(int cpu); extern const struct cpu_operations *get_cpu_ops(int cpu);
static inline void __init cpu_read_bootcpu_ops(void) static inline void __init init_bootcpu_ops(void)
{ {
cpu_read_ops(0); init_cpu_ops(0);
} }
#endif /* ifndef __ASM_CPU_OPS_H */ #endif /* ifndef __ASM_CPU_OPS_H */
...@@ -58,7 +58,10 @@ ...@@ -58,7 +58,10 @@
#define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE 48 #define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE 48
#define ARM64_HAS_E0PD 49 #define ARM64_HAS_E0PD 49
#define ARM64_HAS_RNG 50 #define ARM64_HAS_RNG 50
#define ARM64_HAS_AMU_EXTN 51
#define ARM64_HAS_ADDRESS_AUTH 52
#define ARM64_HAS_GENERIC_AUTH 53
#define ARM64_NCAPS 51 #define ARM64_NCAPS 54
#endif /* __ASM_CPUCAPS_H */ #endif /* __ASM_CPUCAPS_H */
...@@ -208,6 +208,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0; ...@@ -208,6 +208,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
* In some non-typical cases either both (a) and (b), or neither, * In some non-typical cases either both (a) and (b), or neither,
* should be permitted. This can be described by including neither * should be permitted. This can be described by including neither
* or both flags in the capability's type field. * or both flags in the capability's type field.
*
* In case of a conflict, the CPU is prevented from booting. If the
* ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
* then a kernel panic is triggered.
*/ */
...@@ -240,6 +244,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0; ...@@ -240,6 +244,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU ((u16)BIT(4)) #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU ((u16)BIT(4))
/* Is it safe for a late CPU to miss this capability when system has it */ /* Is it safe for a late CPU to miss this capability when system has it */
#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU ((u16)BIT(5)) #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU ((u16)BIT(5))
/* Panic when a conflict is detected */
#define ARM64_CPUCAP_PANIC_ON_CONFLICT ((u16)BIT(6))
/* /*
* CPU errata workarounds that need to be enabled at boot time if one or * CPU errata workarounds that need to be enabled at boot time if one or
...@@ -279,9 +285,20 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0; ...@@ -279,9 +285,20 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
/* /*
* CPU feature used early in the boot based on the boot CPU. All secondary * CPU feature used early in the boot based on the boot CPU. All secondary
* CPUs must match the state of the capability as detected by the boot CPU. * CPUs must match the state of the capability as detected by the boot CPU. In
* case of a conflict, a kernel panic is triggered.
*/
#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE \
(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
/*
* CPU feature used early in the boot based on the boot CPU. It is safe for a
* late CPU to have this feature even though the boot CPU hasn't enabled it,
* although the feature will not be used by Linux in this case. If the boot CPU
* has enabled this feature already, then every late CPU must have it.
*/ */
#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU #define ARM64_CPUCAP_BOOT_CPU_FEATURE \
(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
struct arm64_cpu_capabilities { struct arm64_cpu_capabilities {
const char *desc; const char *desc;
...@@ -340,18 +357,6 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap) ...@@ -340,18 +357,6 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
return cap->type & ARM64_CPUCAP_SCOPE_MASK; return cap->type & ARM64_CPUCAP_SCOPE_MASK;
} }
static inline bool
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
{
return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
}
static inline bool
cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
{
return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
}
/* /*
* Generic helper for handling capabilties with multiple (match,enable) pairs * Generic helper for handling capabilties with multiple (match,enable) pairs
* of call backs, sharing the same capability bit. * of call backs, sharing the same capability bit.
...@@ -390,14 +395,16 @@ unsigned long cpu_get_elf_hwcap2(void); ...@@ -390,14 +395,16 @@ unsigned long cpu_get_elf_hwcap2(void);
#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name)) #define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name)) #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
/* System capability check for constant caps */ static __always_inline bool system_capabilities_finalized(void)
static __always_inline bool __cpus_have_const_cap(int num)
{ {
if (num >= ARM64_NCAPS) return static_branch_likely(&arm64_const_caps_ready);
return false;
return static_branch_unlikely(&cpu_hwcap_keys[num]);
} }
/*
* Test for a capability with a runtime check.
*
* Before the capability is detected, this returns false.
*/
static inline bool cpus_have_cap(unsigned int num) static inline bool cpus_have_cap(unsigned int num)
{ {
if (num >= ARM64_NCAPS) if (num >= ARM64_NCAPS)
...@@ -405,14 +412,53 @@ static inline bool cpus_have_cap(unsigned int num) ...@@ -405,14 +412,53 @@ static inline bool cpus_have_cap(unsigned int num)
return test_bit(num, cpu_hwcaps); return test_bit(num, cpu_hwcaps);
} }
/*
* Test for a capability without a runtime check.
*
* Before capabilities are finalized, this returns false.
* After capabilities are finalized, this is patched to avoid a runtime check.
*
* @num must be a compile-time constant.
*/
static __always_inline bool __cpus_have_const_cap(int num)
{
if (num >= ARM64_NCAPS)
return false;
return static_branch_unlikely(&cpu_hwcap_keys[num]);
}
/*
* Test for a capability, possibly with a runtime check.
*
* Before capabilities are finalized, this behaves as cpus_have_cap().
* After capabilities are finalized, this is patched to avoid a runtime check.
*
* @num must be a compile-time constant.
*/
static __always_inline bool cpus_have_const_cap(int num) static __always_inline bool cpus_have_const_cap(int num)
{ {
if (static_branch_likely(&arm64_const_caps_ready)) if (system_capabilities_finalized())
return __cpus_have_const_cap(num); return __cpus_have_const_cap(num);
else else
return cpus_have_cap(num); return cpus_have_cap(num);
} }
/*
* Test for a capability without a runtime check.
*
* Before capabilities are finalized, this will BUG().
* After capabilities are finalized, this is patched to avoid a runtime check.
*
* @num must be a compile-time constant.
*/
static __always_inline bool cpus_have_final_cap(int num)
{
if (system_capabilities_finalized())
return __cpus_have_const_cap(num);
else
BUG();
}
static inline void cpus_set_cap(unsigned int num) static inline void cpus_set_cap(unsigned int num)
{ {
if (num >= ARM64_NCAPS) { if (num >= ARM64_NCAPS) {
...@@ -447,6 +493,29 @@ cpuid_feature_extract_unsigned_field(u64 features, int field) ...@@ -447,6 +493,29 @@ cpuid_feature_extract_unsigned_field(u64 features, int field)
return cpuid_feature_extract_unsigned_field_width(features, field, 4); return cpuid_feature_extract_unsigned_field_width(features, field, 4);
} }
/*
* Fields that identify the version of the Performance Monitors Extension do
* not follow the standard ID scheme. See ARM DDI 0487E.a page D13-2825,
* "Alternative ID scheme used for the Performance Monitors Extension version".
*/
static inline u64 __attribute_const__
cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
{
u64 val = cpuid_feature_extract_unsigned_field(features, field);
u64 mask = GENMASK_ULL(field + 3, field);
/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
if (val == 0xf)
val = 0;
if (val > cap) {
features &= ~mask;
features |= (cap << field) & mask;
}
return features;
}
static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp) static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
{ {
return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift); return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
...@@ -590,15 +659,13 @@ static __always_inline bool system_supports_cnp(void) ...@@ -590,15 +659,13 @@ static __always_inline bool system_supports_cnp(void)
static inline bool system_supports_address_auth(void) static inline bool system_supports_address_auth(void)
{ {
return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) && return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) || cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
} }
static inline bool system_supports_generic_auth(void) static inline bool system_supports_generic_auth(void)
{ {
return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) && return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) || cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
} }
static inline bool system_uses_irq_prio_masking(void) static inline bool system_uses_irq_prio_masking(void)
...@@ -613,11 +680,6 @@ static inline bool system_has_prio_mask_debugging(void) ...@@ -613,11 +680,6 @@ static inline bool system_has_prio_mask_debugging(void)
system_uses_irq_prio_masking(); system_uses_irq_prio_masking();
} }
static inline bool system_capabilities_finalized(void)
{
return static_branch_likely(&arm64_const_caps_ready);
}
#define ARM64_BP_HARDEN_UNKNOWN -1 #define ARM64_BP_HARDEN_UNKNOWN -1
#define ARM64_BP_HARDEN_WA_NEEDED 0 #define ARM64_BP_HARDEN_WA_NEEDED 0
#define ARM64_BP_HARDEN_NOT_REQUIRED 1 #define ARM64_BP_HARDEN_NOT_REQUIRED 1
...@@ -678,6 +740,11 @@ static inline bool cpu_has_hw_af(void) ...@@ -678,6 +740,11 @@ static inline bool cpu_has_hw_af(void)
ID_AA64MMFR1_HADBS_SHIFT); ID_AA64MMFR1_HADBS_SHIFT);
} }
#ifdef CONFIG_ARM64_AMU_EXTN
/* Check whether the cpu supports the Activity Monitors Unit (AMU) */
extern bool cpu_has_amu_feat(int cpu);
#endif
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif #endif
...@@ -60,7 +60,7 @@ ...@@ -60,7 +60,7 @@
#define ESR_ELx_EC_BKPT32 (0x38) #define ESR_ELx_EC_BKPT32 (0x38)
/* Unallocated EC: 0x39 */ /* Unallocated EC: 0x39 */
#define ESR_ELx_EC_VECTOR32 (0x3A) /* EL2 only */ #define ESR_ELx_EC_VECTOR32 (0x3A) /* EL2 only */
/* Unallocted EC: 0x3B */ /* Unallocated EC: 0x3B */
#define ESR_ELx_EC_BRK64 (0x3C) #define ESR_ELx_EC_BRK64 (0x3C)
/* Unallocated EC: 0x3D - 0x3F */ /* Unallocated EC: 0x3D - 0x3F */
#define ESR_ELx_EC_MAX (0x3F) #define ESR_ELx_EC_MAX (0x3F)
......
...@@ -267,6 +267,7 @@ ...@@ -267,6 +267,7 @@
/* Hyp Coprocessor Trap Register */ /* Hyp Coprocessor Trap Register */
#define CPTR_EL2_TCPAC (1 << 31) #define CPTR_EL2_TCPAC (1 << 31)
#define CPTR_EL2_TAM (1 << 30)
#define CPTR_EL2_TTA (1 << 20) #define CPTR_EL2_TTA (1 << 20)
#define CPTR_EL2_TFP (1 << CPTR_EL2_TFP_SHIFT) #define CPTR_EL2_TFP (1 << CPTR_EL2_TFP_SHIFT)
#define CPTR_EL2_TZ (1 << 8) #define CPTR_EL2_TZ (1 << 8)
......
...@@ -36,6 +36,8 @@ ...@@ -36,6 +36,8 @@
*/ */
#define KVM_VECTOR_PREAMBLE (2 * AARCH64_INSN_SIZE) #define KVM_VECTOR_PREAMBLE (2 * AARCH64_INSN_SIZE)
#define __SMCCC_WORKAROUND_1_SMC_SZ 36
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/mm.h> #include <linux/mm.h>
...@@ -75,6 +77,8 @@ extern void __vgic_v3_init_lrs(void); ...@@ -75,6 +77,8 @@ extern void __vgic_v3_init_lrs(void);
extern u32 __kvm_get_mdcr_el2(void); extern u32 __kvm_get_mdcr_el2(void);
extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */ /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
#define __hyp_this_cpu_ptr(sym) \ #define __hyp_this_cpu_ptr(sym) \
({ \ ({ \
......
...@@ -481,7 +481,7 @@ static inline void *kvm_get_hyp_vector(void) ...@@ -481,7 +481,7 @@ static inline void *kvm_get_hyp_vector(void)
int slot = -1; int slot = -1;
if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) { if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs_start)); vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
slot = data->hyp_vectors_slot; slot = data->hyp_vectors_slot;
} }
...@@ -510,14 +510,13 @@ static inline int kvm_map_vectors(void) ...@@ -510,14 +510,13 @@ static inline int kvm_map_vectors(void)
* HBP + HEL2 -> use hardened vertors and use exec mapping * HBP + HEL2 -> use hardened vertors and use exec mapping
*/ */
if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) { if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs_start); __kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base); __kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
} }
if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) { if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs_start); phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
unsigned long size = (__bp_harden_hyp_vecs_end - unsigned long size = __BP_HARDEN_HYP_VECS_SZ;
__bp_harden_hyp_vecs_start);
/* /*
* Always allocate a spare vector slot, as we don't * Always allocate a spare vector slot, as we don't
......
...@@ -54,6 +54,7 @@ ...@@ -54,6 +54,7 @@
#define MODULES_VADDR (BPF_JIT_REGION_END) #define MODULES_VADDR (BPF_JIT_REGION_END)
#define MODULES_VSIZE (SZ_128M) #define MODULES_VSIZE (SZ_128M)
#define VMEMMAP_START (-VMEMMAP_SIZE - SZ_2M) #define VMEMMAP_START (-VMEMMAP_SIZE - SZ_2M)
#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
#define PCI_IO_END (VMEMMAP_START - SZ_2M) #define PCI_IO_END (VMEMMAP_START - SZ_2M)
#define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE) #define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
#define FIXADDR_TOP (PCI_IO_START - SZ_2M) #define FIXADDR_TOP (PCI_IO_START - SZ_2M)
......
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
#define TTBR_ASID_MASK (UL(0xffff) << 48) #define TTBR_ASID_MASK (UL(0xffff) << 48)
#define BP_HARDEN_EL2_SLOTS 4 #define BP_HARDEN_EL2_SLOTS 4
#define __BP_HARDEN_HYP_VECS_SZ (BP_HARDEN_EL2_SLOTS * SZ_2K)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
...@@ -23,9 +24,9 @@ typedef struct { ...@@ -23,9 +24,9 @@ typedef struct {
} mm_context_t; } mm_context_t;
/* /*
* This macro is only used by the TLBI code, which cannot race with an * This macro is only used by the TLBI and low-level switch_mm() code,
* ASID change and therefore doesn't need to reload the counter using * neither of which can race with an ASID change. We therefore don't
* atomic64_read. * need to reload the counter using atomic64_read().
*/ */
#define ASID(mm) ((mm)->context.id.counter & 0xffff) #define ASID(mm) ((mm)->context.id.counter & 0xffff)
...@@ -43,7 +44,8 @@ struct bp_hardening_data { ...@@ -43,7 +44,8 @@ struct bp_hardening_data {
#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \ #if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
defined(CONFIG_HARDEN_EL2_VECTORS)) defined(CONFIG_HARDEN_EL2_VECTORS))
extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
extern char __bp_harden_hyp_vecs[];
extern atomic_t arm64_el2_vector_last_slot; extern atomic_t arm64_el2_vector_last_slot;
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */ #endif /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
......
...@@ -46,6 +46,8 @@ static inline void cpu_set_reserved_ttbr0(void) ...@@ -46,6 +46,8 @@ static inline void cpu_set_reserved_ttbr0(void)
isb(); isb();
} }
void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm) static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
{ {
BUG_ON(pgd == swapper_pg_dir); BUG_ON(pgd == swapper_pg_dir);
......
...@@ -21,6 +21,10 @@ extern void __cpu_copy_user_page(void *to, const void *from, ...@@ -21,6 +21,10 @@ extern void __cpu_copy_user_page(void *to, const void *from,
extern void copy_page(void *to, const void *from); extern void copy_page(void *to, const void *from);
extern void clear_page(void *to); extern void clear_page(void *to);
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define clear_user_page(addr,vaddr,pg) __cpu_clear_user_page(addr, vaddr) #define clear_user_page(addr,vaddr,pg) __cpu_clear_user_page(addr, vaddr)
#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr) #define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
......
...@@ -176,9 +176,10 @@ ...@@ -176,9 +176,10 @@
#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ #define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ #define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
#define ARMV8_PMU_PMCR_N_MASK 0x1f #define ARMV8_PMU_PMCR_N_MASK 0x1f
#define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */ #define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
/* /*
* PMOVSR: counters overflow flag status reg * PMOVSR: counters overflow flag status reg
......
...@@ -22,7 +22,7 @@ struct ptrauth_key { ...@@ -22,7 +22,7 @@ struct ptrauth_key {
* We give each process its own keys, which are shared by all threads. The keys * We give each process its own keys, which are shared by all threads. The keys
* are inherited upon fork(), and reinitialised upon exec*(). * are inherited upon fork(), and reinitialised upon exec*().
*/ */
struct ptrauth_keys { struct ptrauth_keys_user {
struct ptrauth_key apia; struct ptrauth_key apia;
struct ptrauth_key apib; struct ptrauth_key apib;
struct ptrauth_key apda; struct ptrauth_key apda;
...@@ -30,7 +30,11 @@ struct ptrauth_keys { ...@@ -30,7 +30,11 @@ struct ptrauth_keys {
struct ptrauth_key apga; struct ptrauth_key apga;
}; };
static inline void ptrauth_keys_init(struct ptrauth_keys *keys) struct ptrauth_keys_kernel {
struct ptrauth_key apia;
};
static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
{ {
if (system_supports_address_auth()) { if (system_supports_address_auth()) {
get_random_bytes(&keys->apia, sizeof(keys->apia)); get_random_bytes(&keys->apia, sizeof(keys->apia));
...@@ -50,48 +54,38 @@ do { \ ...@@ -50,48 +54,38 @@ do { \
write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
} while (0) } while (0)
static inline void ptrauth_keys_switch(struct ptrauth_keys *keys) static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
{ {
if (system_supports_address_auth()) { if (system_supports_address_auth())
__ptrauth_key_install(APIA, keys->apia); get_random_bytes(&keys->apia, sizeof(keys->apia));
__ptrauth_key_install(APIB, keys->apib); }
__ptrauth_key_install(APDA, keys->apda);
__ptrauth_key_install(APDB, keys->apdb);
}
if (system_supports_generic_auth()) static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
__ptrauth_key_install(APGA, keys->apga); {
if (system_supports_address_auth())
__ptrauth_key_install(APIA, keys->apia);
} }
extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg); extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
/*
* The EL0 pointer bits used by a pointer authentication code.
* This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
*/
#define ptrauth_user_pac_mask() GENMASK(54, vabits_actual)
/* Only valid for EL0 TTBR0 instruction pointers */
static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
{ {
return ptr & ~ptrauth_user_pac_mask(); return ptrauth_clear_pac(ptr);
} }
#define ptrauth_thread_init_user(tsk) \ #define ptrauth_thread_init_user(tsk) \
do { \ ptrauth_keys_init_user(&(tsk)->thread.keys_user)
struct task_struct *__ptiu_tsk = (tsk); \ #define ptrauth_thread_init_kernel(tsk) \
ptrauth_keys_init(&__ptiu_tsk->thread.keys_user); \ ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user); \ #define ptrauth_thread_switch_kernel(tsk) \
} while (0) ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)
#define ptrauth_thread_switch(tsk) \
ptrauth_keys_switch(&(tsk)->thread.keys_user)
#else /* CONFIG_ARM64_PTR_AUTH */ #else /* CONFIG_ARM64_PTR_AUTH */
#define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
#define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_strip_insn_pac(lr) (lr)
#define ptrauth_thread_init_user(tsk) #define ptrauth_thread_init_user(tsk)
#define ptrauth_thread_switch(tsk) #define ptrauth_thread_init_kernel(tsk)
#define ptrauth_thread_switch_kernel(tsk)
#endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_POINTER_AUTH_H */ #endif /* __ASM_POINTER_AUTH_H */
...@@ -13,11 +13,9 @@ ...@@ -13,11 +13,9 @@
#include <asm/page.h> #include <asm/page.h>
struct mm_struct;
struct cpu_suspend_ctx; struct cpu_suspend_ctx;
extern void cpu_do_idle(void); extern void cpu_do_idle(void);
extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr); extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr); extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
......
...@@ -148,7 +148,8 @@ struct thread_struct { ...@@ -148,7 +148,8 @@ struct thread_struct {
unsigned long fault_code; /* ESR_EL1 value */ unsigned long fault_code; /* ESR_EL1 value */
struct debug_info debug; /* debugging */ struct debug_info debug; /* debugging */
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
struct ptrauth_keys keys_user; struct ptrauth_keys_user keys_user;
struct ptrauth_keys_kernel keys_kernel;
#endif #endif
}; };
......
...@@ -23,6 +23,14 @@ ...@@ -23,6 +23,14 @@
#define CPU_STUCK_REASON_52_BIT_VA (UL(1) << CPU_STUCK_REASON_SHIFT) #define CPU_STUCK_REASON_52_BIT_VA (UL(1) << CPU_STUCK_REASON_SHIFT)
#define CPU_STUCK_REASON_NO_GRAN (UL(2) << CPU_STUCK_REASON_SHIFT) #define CPU_STUCK_REASON_NO_GRAN (UL(2) << CPU_STUCK_REASON_SHIFT)
/* Possible options for __cpu_setup */
/* Option to setup primary cpu */
#define ARM64_CPU_BOOT_PRIMARY (1)
/* Option to setup secondary cpus */
#define ARM64_CPU_BOOT_SECONDARY (2)
/* Option to setup cpus for different cpu run time services */
#define ARM64_CPU_RUNTIME (3)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <asm/percpu.h> #include <asm/percpu.h>
...@@ -30,6 +38,7 @@ ...@@ -30,6 +38,7 @@
#include <linux/threads.h> #include <linux/threads.h>
#include <linux/cpumask.h> #include <linux/cpumask.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <asm/pointer_auth.h>
DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number); DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
...@@ -87,6 +96,9 @@ asmlinkage void secondary_start_kernel(void); ...@@ -87,6 +96,9 @@ asmlinkage void secondary_start_kernel(void);
struct secondary_data { struct secondary_data {
void *stack; void *stack;
struct task_struct *task; struct task_struct *task;
#ifdef CONFIG_ARM64_PTR_AUTH
struct ptrauth_keys_kernel ptrauth_key;
#endif
long status; long status;
}; };
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <linux/random.h> #include <linux/random.h>
#include <linux/version.h> #include <linux/version.h>
#include <asm/pointer_auth.h>
extern unsigned long __stack_chk_guard; extern unsigned long __stack_chk_guard;
...@@ -26,6 +27,7 @@ extern unsigned long __stack_chk_guard; ...@@ -26,6 +27,7 @@ extern unsigned long __stack_chk_guard;
*/ */
static __always_inline void boot_init_stack_canary(void) static __always_inline void boot_init_stack_canary(void)
{ {
#if defined(CONFIG_STACKPROTECTOR)
unsigned long canary; unsigned long canary;
/* Try to get a semi random initial value. */ /* Try to get a semi random initial value. */
...@@ -36,6 +38,9 @@ static __always_inline void boot_init_stack_canary(void) ...@@ -36,6 +38,9 @@ static __always_inline void boot_init_stack_canary(void)
current->stack_canary = canary; current->stack_canary = canary;
if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK)) if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
__stack_chk_guard = current->stack_canary; __stack_chk_guard = current->stack_canary;
#endif
ptrauth_thread_init_kernel(current);
ptrauth_thread_switch_kernel(current);
} }
#endif /* _ASM_STACKPROTECTOR_H */ #endif /* _ASM_STACKPROTECTOR_H */
...@@ -386,6 +386,42 @@ ...@@ -386,6 +386,42 @@
#define SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2) #define SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2)
#define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3) #define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3)
/* Definitions for system register interface to AMU for ARMv8.4 onwards */
#define SYS_AM_EL0(crm, op2) sys_reg(3, 3, 13, (crm), (op2))
#define SYS_AMCR_EL0 SYS_AM_EL0(2, 0)
#define SYS_AMCFGR_EL0 SYS_AM_EL0(2, 1)
#define SYS_AMCGCR_EL0 SYS_AM_EL0(2, 2)
#define SYS_AMUSERENR_EL0 SYS_AM_EL0(2, 3)
#define SYS_AMCNTENCLR0_EL0 SYS_AM_EL0(2, 4)
#define SYS_AMCNTENSET0_EL0 SYS_AM_EL0(2, 5)
#define SYS_AMCNTENCLR1_EL0 SYS_AM_EL0(3, 0)
#define SYS_AMCNTENSET1_EL0 SYS_AM_EL0(3, 1)
/*
* Group 0 of activity monitors (architected):
* op0 op1 CRn CRm op2
* Counter: 11 011 1101 010:n<3> n<2:0>
* Type: 11 011 1101 011:n<3> n<2:0>
* n: 0-15
*
* Group 1 of activity monitors (auxiliary):
* op0 op1 CRn CRm op2
* Counter: 11 011 1101 110:n<3> n<2:0>
* Type: 11 011 1101 111:n<3> n<2:0>
* n: 0-15
*/
#define SYS_AMEVCNTR0_EL0(n) SYS_AM_EL0(4 + ((n) >> 3), (n) & 7)
#define SYS_AMEVTYPE0_EL0(n) SYS_AM_EL0(6 + ((n) >> 3), (n) & 7)
#define SYS_AMEVCNTR1_EL0(n) SYS_AM_EL0(12 + ((n) >> 3), (n) & 7)
#define SYS_AMEVTYPE1_EL0(n) SYS_AM_EL0(14 + ((n) >> 3), (n) & 7)
/* AMU v1: Fixed (architecturally defined) activity monitors */
#define SYS_AMEVCNTR0_CORE_EL0 SYS_AMEVCNTR0_EL0(0)
#define SYS_AMEVCNTR0_CONST_EL0 SYS_AMEVCNTR0_EL0(1)
#define SYS_AMEVCNTR0_INST_RET_EL0 SYS_AMEVCNTR0_EL0(2)
#define SYS_AMEVCNTR0_MEM_STALL SYS_AMEVCNTR0_EL0(3)
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0) #define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0) #define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0)
...@@ -598,6 +634,7 @@ ...@@ -598,6 +634,7 @@
#define ID_AA64PFR0_CSV3_SHIFT 60 #define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56 #define ID_AA64PFR0_CSV2_SHIFT 56
#define ID_AA64PFR0_DIT_SHIFT 48 #define ID_AA64PFR0_DIT_SHIFT 48
#define ID_AA64PFR0_AMU_SHIFT 44
#define ID_AA64PFR0_SVE_SHIFT 32 #define ID_AA64PFR0_SVE_SHIFT 32
#define ID_AA64PFR0_RAS_SHIFT 28 #define ID_AA64PFR0_RAS_SHIFT 28
#define ID_AA64PFR0_GIC_SHIFT 24 #define ID_AA64PFR0_GIC_SHIFT 24
...@@ -608,6 +645,7 @@ ...@@ -608,6 +645,7 @@
#define ID_AA64PFR0_EL1_SHIFT 4 #define ID_AA64PFR0_EL1_SHIFT 4
#define ID_AA64PFR0_EL0_SHIFT 0 #define ID_AA64PFR0_EL0_SHIFT 0
#define ID_AA64PFR0_AMU 0x1
#define ID_AA64PFR0_SVE 0x1 #define ID_AA64PFR0_SVE 0x1
#define ID_AA64PFR0_RAS_V1 0x1 #define ID_AA64PFR0_RAS_V1 0x1
#define ID_AA64PFR0_FP_NI 0xf #define ID_AA64PFR0_FP_NI 0xf
...@@ -702,6 +740,16 @@ ...@@ -702,6 +740,16 @@
#define ID_AA64DFR0_TRACEVER_SHIFT 4 #define ID_AA64DFR0_TRACEVER_SHIFT 4
#define ID_AA64DFR0_DEBUGVER_SHIFT 0 #define ID_AA64DFR0_DEBUGVER_SHIFT 0
#define ID_AA64DFR0_PMUVER_8_0 0x1
#define ID_AA64DFR0_PMUVER_8_1 0x4
#define ID_AA64DFR0_PMUVER_8_4 0x5
#define ID_AA64DFR0_PMUVER_8_5 0x6
#define ID_AA64DFR0_PMUVER_IMP_DEF 0xf
#define ID_DFR0_PERFMON_SHIFT 24
#define ID_DFR0_PERFMON_8_1 0x4
#define ID_ISAR5_RDM_SHIFT 24 #define ID_ISAR5_RDM_SHIFT 24
#define ID_ISAR5_CRC32_SHIFT 16 #define ID_ISAR5_CRC32_SHIFT 16
#define ID_ISAR5_SHA2_SHIFT 12 #define ID_ISAR5_SHA2_SHIFT 12
......
...@@ -16,6 +16,15 @@ int pcibus_to_node(struct pci_bus *bus); ...@@ -16,6 +16,15 @@ int pcibus_to_node(struct pci_bus *bus);
#include <linux/arch_topology.h> #include <linux/arch_topology.h>
#ifdef CONFIG_ARM64_AMU_EXTN
/*
* Replace task scheduler's default counter-based
* frequency-invariance scale factor setting.
*/
void topology_scale_freq_tick(void);
#define arch_scale_freq_tick topology_scale_freq_tick
#endif /* CONFIG_ARM64_AMU_EXTN */
/* Replace task scheduler's default frequency-invariant accounting */ /* Replace task scheduler's default frequency-invariant accounting */
#define arch_scale_freq_capacity topology_get_freq_scale #define arch_scale_freq_capacity topology_get_freq_scale
......
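The hook wired up above is what gives the scheduler a counter-based view of the current frequency. As a rough illustration of the principle only (assumed names and fixed-point shift, not the arithmetic the arm64 topology code actually implements): the core-cycle counter advances at the CPU's current frequency while the constant counter advances at a fixed rate, so the ratio of their per-tick deltas estimates how far below maximum frequency the CPU is running.

#include <stdint.h>

#define SCALE_SHIFT	10	/* assumed fixed point: 1024 == running at max */

/*
 * core_delta:  growth of the AMU core-cycle counter over one tick
 * const_delta: growth of the AMU constant counter over the same tick
 * max_freq_hz: the CPU's maximum frequency
 * const_hz:    the fixed rate of the constant counter
 *
 * Returns an estimate of current/maximum frequency in fixed point.
 * Overflow handling is ignored for the sake of brevity.
 */
uint64_t counter_freq_scale(uint64_t core_delta, uint64_t const_delta,
			    uint64_t max_freq_hz, uint64_t const_hz)
{
	if (!const_delta || !max_freq_hz)
		return 1ULL << SCALE_SHIFT;	/* no data: assume max */

	/* f_cur ~= core_delta * const_hz / const_delta */
	return (core_delta * const_hz << SCALE_SHIFT) /
	       (const_delta * max_freq_hz);
}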
...@@ -21,7 +21,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ ...@@ -21,7 +21,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
smp.o smp_spin_table.o topology.o smccc-call.o \ smp.o smp_spin_table.o topology.o smccc-call.o \
syscall.o syscall.o
extra-$(CONFIG_EFI) := efi-entry.o targets += efi-entry.o
OBJCOPYFLAGS := --prefix-symbols=__efistub_ OBJCOPYFLAGS := --prefix-symbols=__efistub_
$(obj)/%.stub.o: $(obj)/%.o FORCE $(obj)/%.stub.o: $(obj)/%.o FORCE
......
...@@ -630,7 +630,7 @@ static int __init armv8_deprecated_init(void) ...@@ -630,7 +630,7 @@ static int __init armv8_deprecated_init(void)
register_insn_emulation(&cp15_barrier_ops); register_insn_emulation(&cp15_barrier_ops);
if (IS_ENABLED(CONFIG_SETEND_EMULATION)) { if (IS_ENABLED(CONFIG_SETEND_EMULATION)) {
if(system_supports_mixed_endian_el0()) if (system_supports_mixed_endian_el0())
register_insn_emulation(&setend_ops); register_insn_emulation(&setend_ops);
else else
pr_info("setend instruction emulation is not supported on this system\n"); pr_info("setend instruction emulation is not supported on this system\n");
......
...@@ -40,6 +40,10 @@ int main(void) ...@@ -40,6 +40,10 @@ int main(void)
#endif #endif
BLANK(); BLANK();
DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context)); DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context));
#ifdef CONFIG_ARM64_PTR_AUTH
DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user));
DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel));
#endif
BLANK(); BLANK();
DEFINE(S_X0, offsetof(struct pt_regs, regs[0])); DEFINE(S_X0, offsetof(struct pt_regs, regs[0]));
DEFINE(S_X2, offsetof(struct pt_regs, regs[2])); DEFINE(S_X2, offsetof(struct pt_regs, regs[2]));
...@@ -88,6 +92,9 @@ int main(void) ...@@ -88,6 +92,9 @@ int main(void)
BLANK(); BLANK();
DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack)); DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack));
DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task)); DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task));
#ifdef CONFIG_ARM64_PTR_AUTH
DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key));
#endif
BLANK(); BLANK();
#ifdef CONFIG_KVM_ARM_HOST #ifdef CONFIG_KVM_ARM_HOST
DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt));
...@@ -127,6 +134,15 @@ int main(void) ...@@ -127,6 +134,15 @@ int main(void)
#ifdef CONFIG_ARM_SDE_INTERFACE #ifdef CONFIG_ARM_SDE_INTERFACE
DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs)); DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs));
DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority));
#endif
#ifdef CONFIG_ARM64_PTR_AUTH
DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia));
DEFINE(PTRAUTH_USER_KEY_APIB, offsetof(struct ptrauth_keys_user, apib));
DEFINE(PTRAUTH_USER_KEY_APDA, offsetof(struct ptrauth_keys_user, apda));
DEFINE(PTRAUTH_USER_KEY_APDB, offsetof(struct ptrauth_keys_user, apdb));
DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga));
DEFINE(PTRAUTH_KERNEL_KEY_APIA, offsetof(struct ptrauth_keys_kernel, apia));
BLANK();
#endif #endif
return 0; return 0;
} }
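The DEFINE() entries added above exist so that the entry and suspend assembly can address the ptrauth key structures with plain numeric offsets. A stand-alone stand-in for the mechanism, using simplified structures whose layout is only assumed here rather than the real thread_struct/secondary_data layout, looks like this:

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins; field names follow the diff, the exact layout is
 * an assumption made for illustration only. */
struct ptrauth_key { unsigned long lo, hi; };

struct ptrauth_keys_user {
	struct ptrauth_key apia, apib, apda, apdb, apga;
};

struct ptrauth_keys_kernel {
	struct ptrauth_key apia;
};

int main(void)
{
	/* asm-offsets.c emits lines of this shape into a generated header,
	 * which assembly then uses as immediate offsets when loading the
	 * keys into the key registers. */
	printf("#define PTRAUTH_USER_KEY_APIA %zu\n",
	       offsetof(struct ptrauth_keys_user, apia));
	printf("#define PTRAUTH_USER_KEY_APGA %zu\n",
	       offsetof(struct ptrauth_keys_user, apga));
	printf("#define PTRAUTH_KERNEL_KEY_APIA %zu\n",
	       offsetof(struct ptrauth_keys_kernel, apia));
	return 0;
}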
...@@ -32,7 +32,7 @@ ...@@ -32,7 +32,7 @@
ENTRY(__cpu_soft_restart) ENTRY(__cpu_soft_restart)
/* Clear sctlr_el1 flags. */ /* Clear sctlr_el1 flags. */
mrs x12, sctlr_el1 mrs x12, sctlr_el1
ldr x13, =SCTLR_ELx_FLAGS mov_q x13, SCTLR_ELx_FLAGS
bic x12, x12, x13 bic x12, x12, x13
pre_disable_mmu_workaround pre_disable_mmu_workaround
msr sctlr_el1, x12 msr sctlr_el1, x12
......
...@@ -11,6 +11,7 @@ ...@@ -11,6 +11,7 @@
#include <asm/cpu.h> #include <asm/cpu.h>
#include <asm/cputype.h> #include <asm/cputype.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/kvm_asm.h>
#include <asm/smp_plat.h> #include <asm/smp_plat.h>
static bool __maybe_unused static bool __maybe_unused
...@@ -113,13 +114,10 @@ atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1); ...@@ -113,13 +114,10 @@ atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data); DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
#ifdef CONFIG_KVM_INDIRECT_VECTORS #ifdef CONFIG_KVM_INDIRECT_VECTORS
extern char __smccc_workaround_1_smc_start[];
extern char __smccc_workaround_1_smc_end[];
static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end) const char *hyp_vecs_end)
{ {
void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K); void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
int i; int i;
for (i = 0; i < SZ_2K; i += 0x80) for (i = 0; i < SZ_2K; i += 0x80)
...@@ -163,9 +161,6 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn, ...@@ -163,9 +161,6 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
raw_spin_unlock(&bp_lock); raw_spin_unlock(&bp_lock);
} }
#else #else
#define __smccc_workaround_1_smc_start NULL
#define __smccc_workaround_1_smc_end NULL
static void install_bp_hardening_cb(bp_hardening_cb_t fn, static void install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start, const char *hyp_vecs_start,
const char *hyp_vecs_end) const char *hyp_vecs_end)
...@@ -176,7 +171,7 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn, ...@@ -176,7 +171,7 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
#include <linux/arm-smccc.h> #include <linux/arm-smccc.h>
static void call_smc_arch_workaround_1(void) static void __maybe_unused call_smc_arch_workaround_1(void)
{ {
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL); arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
} }
...@@ -239,11 +234,14 @@ static int detect_harden_bp_fw(void) ...@@ -239,11 +234,14 @@ static int detect_harden_bp_fw(void)
smccc_end = NULL; smccc_end = NULL;
break; break;
#if IS_ENABLED(CONFIG_KVM_ARM_HOST)
case SMCCC_CONDUIT_SMC: case SMCCC_CONDUIT_SMC:
cb = call_smc_arch_workaround_1; cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc_start; smccc_start = __smccc_workaround_1_smc;
smccc_end = __smccc_workaround_1_smc_end; smccc_end = __smccc_workaround_1_smc +
__SMCCC_WORKAROUND_1_SMC_SZ;
break; break;
#endif
default: default:
return -1; return -1;
......
...@@ -15,10 +15,12 @@ ...@@ -15,10 +15,12 @@
#include <asm/smp_plat.h> #include <asm/smp_plat.h>
extern const struct cpu_operations smp_spin_table_ops; extern const struct cpu_operations smp_spin_table_ops;
#ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
extern const struct cpu_operations acpi_parking_protocol_ops; extern const struct cpu_operations acpi_parking_protocol_ops;
#endif
extern const struct cpu_operations cpu_psci_ops; extern const struct cpu_operations cpu_psci_ops;
const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init; static const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;
static const struct cpu_operations *const dt_supported_cpu_ops[] __initconst = { static const struct cpu_operations *const dt_supported_cpu_ops[] __initconst = {
&smp_spin_table_ops, &smp_spin_table_ops,
...@@ -94,7 +96,7 @@ static const char *__init cpu_read_enable_method(int cpu) ...@@ -94,7 +96,7 @@ static const char *__init cpu_read_enable_method(int cpu)
/* /*
* Read a cpu's enable method and record it in cpu_ops. * Read a cpu's enable method and record it in cpu_ops.
*/ */
int __init cpu_read_ops(int cpu) int __init init_cpu_ops(int cpu)
{ {
const char *enable_method = cpu_read_enable_method(cpu); const char *enable_method = cpu_read_enable_method(cpu);
...@@ -109,3 +111,8 @@ int __init cpu_read_ops(int cpu) ...@@ -109,3 +111,8 @@ int __init cpu_read_ops(int cpu)
return 0; return 0;
} }
const struct cpu_operations *get_cpu_ops(int cpu)
{
return cpu_ops[cpu];
}
...@@ -116,6 +116,8 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused) ...@@ -116,6 +116,8 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap); static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
static bool __system_matches_cap(unsigned int n);
/* /*
* NOTE: Any changes to the visibility of features should be kept in * NOTE: Any changes to the visibility of features should be kept in
* sync with the documentation of the CPU feature register ABI. * sync with the documentation of the CPU feature register ABI.
...@@ -163,6 +165,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { ...@@ -163,6 +165,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE), ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0), FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
...@@ -551,7 +554,7 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new) ...@@ -551,7 +554,7 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
BUG_ON(!reg); BUG_ON(!reg);
for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) { for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
u64 ftr_mask = arm64_ftr_mask(ftrp); u64 ftr_mask = arm64_ftr_mask(ftrp);
s64 ftr_new = arm64_ftr_value(ftrp, new); s64 ftr_new = arm64_ftr_value(ftrp, new);
...@@ -1222,6 +1225,57 @@ static bool has_hw_dbm(const struct arm64_cpu_capabilities *cap, ...@@ -1222,6 +1225,57 @@ static bool has_hw_dbm(const struct arm64_cpu_capabilities *cap,
#endif #endif
#ifdef CONFIG_ARM64_AMU_EXTN
/*
* The "amu_cpus" cpumask only signals that the CPU implementations of the
* flagged CPUs support the Activity Monitors Unit (AMU); it carries no
* information about which events each of them implements. When a CPU's bit
* is set in the cpumask, users of this feature may rely only on the
* presence of the four fixed counters for that CPU, and even that does not
* guarantee the counters are enabled, or that access to them has been
* granted by code executed at higher exception levels (firmware).
*/
static struct cpumask amu_cpus __read_mostly;
bool cpu_has_amu_feat(int cpu)
{
return cpumask_test_cpu(cpu, &amu_cpus);
}
/* Initialize the use of AMU counters for frequency invariance */
extern void init_cpu_freq_invariance_counters(void);
static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
{
if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
smp_processor_id());
cpumask_set_cpu(smp_processor_id(), &amu_cpus);
init_cpu_freq_invariance_counters();
}
}
static bool has_amu(const struct arm64_cpu_capabilities *cap,
int __unused)
{
/*
* The AMU extension is a non-conflicting feature: the kernel can
* safely run a mix of CPUs with and without support for the
* activity monitors extension. Therefore, unconditionally enable
* the capability so that any late CPU can use the feature.
*
* With the capability unconditionally enabled, the cpu_enable
* function is called for all CPUs that match the criteria,
* including secondary and hotplugged CPUs, marking the feature as
* present on each such CPU. The enable function also prints the
* detection message.
return true;
}
#endif
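Putting cpu_has_amu_feat() together with the register names from sysreg.h, a consumer is expected to check the per-CPU flag before touching the fixed counters. The fragment below is a hypothetical kernel-context sketch, not part of this series; it assumes the caller runs with preemption disabled and that firmware has left the counters accessible, which, as the comment above warns, is not guaranteed.

#include <linux/smp.h>
#include <asm/cpufeature.h>
#include <asm/sysreg.h>

/* Return the constant-frequency cycle count on this CPU, or 0 if this CPU
 * has no usable AMU (callers treat 0 as "fall back to another source"). */
static u64 amu_read_const_cycles(void)
{
	if (!cpu_has_amu_feat(smp_processor_id()))
		return 0;

	/* AMU v1 fixed counter 1: increments at the fixed system clock */
	return read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
}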
#ifdef CONFIG_ARM64_VHE #ifdef CONFIG_ARM64_VHE
static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused) static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)
{ {
...@@ -1316,10 +1370,18 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused) ...@@ -1316,10 +1370,18 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
#endif /* CONFIG_ARM64_RAS_EXTN */ #endif /* CONFIG_ARM64_RAS_EXTN */
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap) static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
int __unused)
{ {
sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
SCTLR_ELx_ENDA | SCTLR_ELx_ENDB); __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
}
static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
int __unused)
{
return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
__system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
} }
#endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* CONFIG_ARM64_PTR_AUTH */
...@@ -1347,6 +1409,25 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry, ...@@ -1347,6 +1409,25 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
} }
#endif #endif
/* Internal helper functions to match cpu capability type */
static bool
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
{
return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
}
static bool
cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
{
return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
}
static bool
cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
{
return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
}
static const struct arm64_cpu_capabilities arm64_features[] = { static const struct arm64_cpu_capabilities arm64_features[] = {
{ {
.desc = "GIC system register CPU interface", .desc = "GIC system register CPU interface",
...@@ -1499,6 +1580,24 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -1499,6 +1580,24 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_clear_disr, .cpu_enable = cpu_clear_disr,
}, },
#endif /* CONFIG_ARM64_RAS_EXTN */ #endif /* CONFIG_ARM64_RAS_EXTN */
#ifdef CONFIG_ARM64_AMU_EXTN
{
/*
* The capability is enabled by default if CONFIG_ARM64_AMU_EXTN=y.
* Therefore, no .desc is provided: we don't want a detection
* message printed until at least one CPU is actually detected to
* support the feature; cpu_amu_enable() prints it per CPU instead.
*/
.capability = ARM64_HAS_AMU_EXTN,
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
.matches = has_amu,
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_AMU_SHIFT,
.min_field_value = ID_AA64PFR0_AMU,
.cpu_enable = cpu_amu_enable,
},
#endif /* CONFIG_ARM64_AMU_EXTN */
{ {
.desc = "Data cache clean to the PoU not required for I/D coherence", .desc = "Data cache clean to the PoU not required for I/D coherence",
.capability = ARM64_HAS_CACHE_IDC, .capability = ARM64_HAS_CACHE_IDC,
...@@ -1592,24 +1691,27 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -1592,24 +1691,27 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{ {
.desc = "Address authentication (architected algorithm)", .desc = "Address authentication (architected algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH, .capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
.type = ARM64_CPUCAP_SYSTEM_FEATURE, .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.sys_reg = SYS_ID_AA64ISAR1_EL1, .sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_APA_SHIFT, .field_pos = ID_AA64ISAR1_APA_SHIFT,
.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED, .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
.matches = has_cpuid_feature, .matches = has_cpuid_feature,
.cpu_enable = cpu_enable_address_auth,
}, },
{ {
.desc = "Address authentication (IMP DEF algorithm)", .desc = "Address authentication (IMP DEF algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF, .capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
.type = ARM64_CPUCAP_SYSTEM_FEATURE, .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.sys_reg = SYS_ID_AA64ISAR1_EL1, .sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_API_SHIFT, .field_pos = ID_AA64ISAR1_API_SHIFT,
.min_field_value = ID_AA64ISAR1_API_IMP_DEF, .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
.matches = has_cpuid_feature, .matches = has_cpuid_feature,
.cpu_enable = cpu_enable_address_auth, },
{
.capability = ARM64_HAS_ADDRESS_AUTH,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.matches = has_address_auth,
}, },
{ {
.desc = "Generic authentication (architected algorithm)", .desc = "Generic authentication (architected algorithm)",
...@@ -1631,6 +1733,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -1631,6 +1733,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF, .min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
.matches = has_cpuid_feature, .matches = has_cpuid_feature,
}, },
{
.capability = ARM64_HAS_GENERIC_AUTH,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_generic_auth,
},
#endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* CONFIG_ARM64_PTR_AUTH */
#ifdef CONFIG_ARM64_PSEUDO_NMI #ifdef CONFIG_ARM64_PSEUDO_NMI
{ {
...@@ -1980,10 +2087,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask) ...@@ -1980,10 +2087,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
* Run through the list of capabilities to check for conflicts. * Run through the list of capabilities to check for conflicts.
* If the system has already detected a capability, take necessary * If the system has already detected a capability, take necessary
* action on this CPU. * action on this CPU.
*
* Returns "false" on conflicts.
*/ */
static bool verify_local_cpu_caps(u16 scope_mask) static void verify_local_cpu_caps(u16 scope_mask)
{ {
int i; int i;
bool cpu_has_cap, system_has_cap; bool cpu_has_cap, system_has_cap;
...@@ -2028,10 +2133,12 @@ static bool verify_local_cpu_caps(u16 scope_mask) ...@@ -2028,10 +2133,12 @@ static bool verify_local_cpu_caps(u16 scope_mask)
pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n", pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
smp_processor_id(), caps->capability, smp_processor_id(), caps->capability,
caps->desc, system_has_cap, cpu_has_cap); caps->desc, system_has_cap, cpu_has_cap);
return false;
}
return true; if (cpucap_panic_on_conflict(caps))
cpu_panic_kernel();
else
cpu_die_early();
}
} }
/* /*
...@@ -2041,12 +2148,8 @@ static bool verify_local_cpu_caps(u16 scope_mask) ...@@ -2041,12 +2148,8 @@ static bool verify_local_cpu_caps(u16 scope_mask)
static void check_early_cpu_features(void) static void check_early_cpu_features(void)
{ {
verify_cpu_asid_bits(); verify_cpu_asid_bits();
/*
* Early features are used by the kernel already. If there verify_local_cpu_caps(SCOPE_BOOT_CPU);
* is a conflict, we cannot proceed further.
*/
if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
cpu_panic_kernel();
} }
static void static void
...@@ -2094,8 +2197,7 @@ static void verify_local_cpu_capabilities(void) ...@@ -2094,8 +2197,7 @@ static void verify_local_cpu_capabilities(void)
* check_early_cpu_features(), as they need to be verified * check_early_cpu_features(), as they need to be verified
* on all secondary CPUs. * on all secondary CPUs.
*/ */
if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU)) verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
cpu_die_early();
verify_local_elf_hwcaps(arm64_elf_hwcaps); verify_local_elf_hwcaps(arm64_elf_hwcaps);
...@@ -2146,6 +2248,23 @@ bool this_cpu_has_cap(unsigned int n) ...@@ -2146,6 +2248,23 @@ bool this_cpu_has_cap(unsigned int n)
return false; return false;
} }
/*
* This helper function is used in a narrow window when:
* - the system-wide safe registers have been set up for all SMP CPUs, and
* - the SYSTEM_FEATURE cpu_hwcaps may not yet have been set.
* In all other cases cpus_have_{const_}cap() should be used.
*/
static bool __system_matches_cap(unsigned int n)
{
if (n < ARM64_NCAPS) {
const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];
if (cap)
return cap->matches(cap, SCOPE_SYSTEM);
}
return false;
}
void cpu_set_feature(unsigned int num) void cpu_set_feature(unsigned int num)
{ {
WARN_ON(num >= MAX_CPU_FEATURES); WARN_ON(num >= MAX_CPU_FEATURES);
...@@ -2218,7 +2337,7 @@ void __init setup_cpu_features(void) ...@@ -2218,7 +2337,7 @@ void __init setup_cpu_features(void)
static bool __maybe_unused static bool __maybe_unused
cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused) cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
{ {
return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO)); return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
} }
static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap) static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
......
...@@ -18,11 +18,11 @@ ...@@ -18,11 +18,11 @@
int arm_cpuidle_init(unsigned int cpu) int arm_cpuidle_init(unsigned int cpu)
{ {
const struct cpu_operations *ops = get_cpu_ops(cpu);
int ret = -EOPNOTSUPP; int ret = -EOPNOTSUPP;
if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_suspend && if (ops && ops->cpu_suspend && ops->cpu_init_idle)
cpu_ops[cpu]->cpu_init_idle) ret = ops->cpu_init_idle(cpu);
ret = cpu_ops[cpu]->cpu_init_idle(cpu);
return ret; return ret;
} }
...@@ -37,8 +37,9 @@ int arm_cpuidle_init(unsigned int cpu) ...@@ -37,8 +37,9 @@ int arm_cpuidle_init(unsigned int cpu)
int arm_cpuidle_suspend(int index) int arm_cpuidle_suspend(int index)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
const struct cpu_operations *ops = get_cpu_ops(cpu);
return cpu_ops[cpu]->cpu_suspend(index); return ops->cpu_suspend(index);
} }
#ifdef CONFIG_ACPI #ifdef CONFIG_ACPI
......
...@@ -175,7 +175,7 @@ NOKPROBE_SYMBOL(el0_pc); ...@@ -175,7 +175,7 @@ NOKPROBE_SYMBOL(el0_pc);
static void notrace el0_sp(struct pt_regs *regs, unsigned long esr) static void notrace el0_sp(struct pt_regs *regs, unsigned long esr)
{ {
user_exit_irqoff(); user_exit_irqoff();
local_daif_restore(DAIF_PROCCTX_NOIRQ); local_daif_restore(DAIF_PROCCTX);
do_sp_pc_abort(regs->sp, esr, regs); do_sp_pc_abort(regs->sp, esr, regs);
} }
NOKPROBE_SYMBOL(el0_sp); NOKPROBE_SYMBOL(el0_sp);
......
...@@ -75,27 +75,27 @@ ...@@ -75,27 +75,27 @@
add x29, sp, #S_STACKFRAME add x29, sp, #S_STACKFRAME
.endm .endm
ENTRY(ftrace_regs_caller) SYM_CODE_START(ftrace_regs_caller)
ftrace_regs_entry 1 ftrace_regs_entry 1
b ftrace_common b ftrace_common
ENDPROC(ftrace_regs_caller) SYM_CODE_END(ftrace_regs_caller)
ENTRY(ftrace_caller) SYM_CODE_START(ftrace_caller)
ftrace_regs_entry 0 ftrace_regs_entry 0
b ftrace_common b ftrace_common
ENDPROC(ftrace_caller) SYM_CODE_END(ftrace_caller)
ENTRY(ftrace_common) SYM_CODE_START(ftrace_common)
sub x0, x30, #AARCH64_INSN_SIZE // ip (callsite's BL insn) sub x0, x30, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
mov x1, x9 // parent_ip (callsite's LR) mov x1, x9 // parent_ip (callsite's LR)
ldr_l x2, function_trace_op // op ldr_l x2, function_trace_op // op
mov x3, sp // regs mov x3, sp // regs
GLOBAL(ftrace_call) SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
bl ftrace_stub bl ftrace_stub
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
GLOBAL(ftrace_graph_call) // ftrace_graph_caller(); SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
nop // If enabled, this will be replaced nop // If enabled, this will be replaced
// "b ftrace_graph_caller" // "b ftrace_graph_caller"
#endif #endif
...@@ -122,17 +122,17 @@ ftrace_common_return: ...@@ -122,17 +122,17 @@ ftrace_common_return:
add sp, sp, #S_FRAME_SIZE + 16 add sp, sp, #S_FRAME_SIZE + 16
ret x9 ret x9
ENDPROC(ftrace_common) SYM_CODE_END(ftrace_common)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
ENTRY(ftrace_graph_caller) SYM_CODE_START(ftrace_graph_caller)
ldr x0, [sp, #S_PC] ldr x0, [sp, #S_PC]
sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn) sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
add x1, sp, #S_LR // parent_ip (callsite's LR) add x1, sp, #S_LR // parent_ip (callsite's LR)
ldr x2, [sp, #S_FRAME_SIZE] // parent fp (callsite's FP) ldr x2, [sp, #S_FRAME_SIZE] // parent fp (callsite's FP)
bl prepare_ftrace_return bl prepare_ftrace_return
b ftrace_common_return b ftrace_common_return
ENDPROC(ftrace_graph_caller) SYM_CODE_END(ftrace_graph_caller)
#endif #endif
#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ #else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
...@@ -218,7 +218,7 @@ ENDPROC(ftrace_graph_caller) ...@@ -218,7 +218,7 @@ ENDPROC(ftrace_graph_caller)
* - tracer function to probe instrumented function's entry, * - tracer function to probe instrumented function's entry,
* - ftrace_graph_caller to set up an exit hook * - ftrace_graph_caller to set up an exit hook
*/ */
ENTRY(_mcount) SYM_FUNC_START(_mcount)
mcount_enter mcount_enter
ldr_l x2, ftrace_trace_function ldr_l x2, ftrace_trace_function
...@@ -242,7 +242,7 @@ skip_ftrace_call: // } ...@@ -242,7 +242,7 @@ skip_ftrace_call: // }
b.ne ftrace_graph_caller // ftrace_graph_caller(); b.ne ftrace_graph_caller // ftrace_graph_caller();
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
mcount_exit mcount_exit
ENDPROC(_mcount) SYM_FUNC_END(_mcount)
EXPORT_SYMBOL(_mcount) EXPORT_SYMBOL(_mcount)
NOKPROBE(_mcount) NOKPROBE(_mcount)
...@@ -253,9 +253,9 @@ NOKPROBE(_mcount) ...@@ -253,9 +253,9 @@ NOKPROBE(_mcount)
* and later on, NOP to branch to ftrace_caller() when enabled or branch to * and later on, NOP to branch to ftrace_caller() when enabled or branch to
* NOP when disabled per-function base. * NOP when disabled per-function base.
*/ */
ENTRY(_mcount) SYM_FUNC_START(_mcount)
ret ret
ENDPROC(_mcount) SYM_FUNC_END(_mcount)
EXPORT_SYMBOL(_mcount) EXPORT_SYMBOL(_mcount)
NOKPROBE(_mcount) NOKPROBE(_mcount)
...@@ -268,24 +268,24 @@ NOKPROBE(_mcount) ...@@ -268,24 +268,24 @@ NOKPROBE(_mcount)
* - tracer function to probe instrumented function's entry, * - tracer function to probe instrumented function's entry,
* - ftrace_graph_caller to set up an exit hook * - ftrace_graph_caller to set up an exit hook
*/ */
ENTRY(ftrace_caller) SYM_FUNC_START(ftrace_caller)
mcount_enter mcount_enter
mcount_get_pc0 x0 // function's pc mcount_get_pc0 x0 // function's pc
mcount_get_lr x1 // function's lr mcount_get_lr x1 // function's lr
GLOBAL(ftrace_call) // tracer(pc, lr); SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL) // tracer(pc, lr);
nop // This will be replaced with "bl xxx" nop // This will be replaced with "bl xxx"
// where xxx can be any kind of tracer. // where xxx can be any kind of tracer.
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
GLOBAL(ftrace_graph_call) // ftrace_graph_caller(); SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
nop // If enabled, this will be replaced nop // If enabled, this will be replaced
// "b ftrace_graph_caller" // "b ftrace_graph_caller"
#endif #endif
mcount_exit mcount_exit
ENDPROC(ftrace_caller) SYM_FUNC_END(ftrace_caller)
#endif /* CONFIG_DYNAMIC_FTRACE */ #endif /* CONFIG_DYNAMIC_FTRACE */
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
...@@ -298,20 +298,20 @@ ENDPROC(ftrace_caller) ...@@ -298,20 +298,20 @@ ENDPROC(ftrace_caller)
* the call stack in order to intercept instrumented function's return path * the call stack in order to intercept instrumented function's return path
* and run return_to_handler() later on its exit. * and run return_to_handler() later on its exit.
*/ */
ENTRY(ftrace_graph_caller) SYM_FUNC_START(ftrace_graph_caller)
mcount_get_pc x0 // function's pc mcount_get_pc x0 // function's pc
mcount_get_lr_addr x1 // pointer to function's saved lr mcount_get_lr_addr x1 // pointer to function's saved lr
mcount_get_parent_fp x2 // parent's fp mcount_get_parent_fp x2 // parent's fp
bl prepare_ftrace_return // prepare_ftrace_return(pc, &lr, fp) bl prepare_ftrace_return // prepare_ftrace_return(pc, &lr, fp)
mcount_exit mcount_exit
ENDPROC(ftrace_graph_caller) SYM_FUNC_END(ftrace_graph_caller)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
ENTRY(ftrace_stub) SYM_FUNC_START(ftrace_stub)
ret ret
ENDPROC(ftrace_stub) SYM_FUNC_END(ftrace_stub)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* /*
...@@ -320,7 +320,7 @@ ENDPROC(ftrace_stub) ...@@ -320,7 +320,7 @@ ENDPROC(ftrace_stub)
* Run ftrace_return_to_handler() before going back to parent. * Run ftrace_return_to_handler() before going back to parent.
* @fp is checked against the value passed by ftrace_graph_caller(). * @fp is checked against the value passed by ftrace_graph_caller().
*/ */
ENTRY(return_to_handler) SYM_CODE_START(return_to_handler)
/* save return value regs */ /* save return value regs */
sub sp, sp, #64 sub sp, sp, #64
stp x0, x1, [sp] stp x0, x1, [sp]
...@@ -340,5 +340,5 @@ ENTRY(return_to_handler) ...@@ -340,5 +340,5 @@ ENTRY(return_to_handler)
add sp, sp, #64 add sp, sp, #64
ret ret
END(return_to_handler) SYM_CODE_END(return_to_handler)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/asm_pointer_auth.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/errno.h> #include <asm/errno.h>
#include <asm/esr.h> #include <asm/esr.h>
...@@ -177,6 +178,7 @@ alternative_cb_end ...@@ -177,6 +178,7 @@ alternative_cb_end
apply_ssbd 1, x22, x23 apply_ssbd 1, x22, x23
ptrauth_keys_install_kernel tsk, 1, x20, x22, x23
.else .else
add x21, sp, #S_FRAME_SIZE add x21, sp, #S_FRAME_SIZE
get_current_task tsk get_current_task tsk
...@@ -341,6 +343,9 @@ alternative_else_nop_endif ...@@ -341,6 +343,9 @@ alternative_else_nop_endif
msr cntkctl_el1, x1 msr cntkctl_el1, x1
4: 4:
#endif #endif
/* No kernel C function calls after this as user keys are set. */
ptrauth_keys_install_user tsk, x0, x1, x2
apply_ssbd 0, x0, x1 apply_ssbd 0, x0, x1
.endif .endif
...@@ -465,7 +470,7 @@ alternative_endif ...@@ -465,7 +470,7 @@ alternative_endif
.pushsection ".entry.text", "ax" .pushsection ".entry.text", "ax"
.align 11 .align 11
ENTRY(vectors) SYM_CODE_START(vectors)
kernel_ventry 1, sync_invalid // Synchronous EL1t kernel_ventry 1, sync_invalid // Synchronous EL1t
kernel_ventry 1, irq_invalid // IRQ EL1t kernel_ventry 1, irq_invalid // IRQ EL1t
kernel_ventry 1, fiq_invalid // FIQ EL1t kernel_ventry 1, fiq_invalid // FIQ EL1t
...@@ -492,7 +497,7 @@ ENTRY(vectors) ...@@ -492,7 +497,7 @@ ENTRY(vectors)
kernel_ventry 0, fiq_invalid, 32 // FIQ 32-bit EL0 kernel_ventry 0, fiq_invalid, 32 // FIQ 32-bit EL0
kernel_ventry 0, error_invalid, 32 // Error 32-bit EL0 kernel_ventry 0, error_invalid, 32 // Error 32-bit EL0
#endif #endif
END(vectors) SYM_CODE_END(vectors)
#ifdef CONFIG_VMAP_STACK #ifdef CONFIG_VMAP_STACK
/* /*
...@@ -534,57 +539,57 @@ __bad_stack: ...@@ -534,57 +539,57 @@ __bad_stack:
ASM_BUG() ASM_BUG()
.endm .endm
el0_sync_invalid: SYM_CODE_START_LOCAL(el0_sync_invalid)
inv_entry 0, BAD_SYNC inv_entry 0, BAD_SYNC
ENDPROC(el0_sync_invalid) SYM_CODE_END(el0_sync_invalid)
el0_irq_invalid: SYM_CODE_START_LOCAL(el0_irq_invalid)
inv_entry 0, BAD_IRQ inv_entry 0, BAD_IRQ
ENDPROC(el0_irq_invalid) SYM_CODE_END(el0_irq_invalid)
el0_fiq_invalid: SYM_CODE_START_LOCAL(el0_fiq_invalid)
inv_entry 0, BAD_FIQ inv_entry 0, BAD_FIQ
ENDPROC(el0_fiq_invalid) SYM_CODE_END(el0_fiq_invalid)
el0_error_invalid: SYM_CODE_START_LOCAL(el0_error_invalid)
inv_entry 0, BAD_ERROR inv_entry 0, BAD_ERROR
ENDPROC(el0_error_invalid) SYM_CODE_END(el0_error_invalid)
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
el0_fiq_invalid_compat: SYM_CODE_START_LOCAL(el0_fiq_invalid_compat)
inv_entry 0, BAD_FIQ, 32 inv_entry 0, BAD_FIQ, 32
ENDPROC(el0_fiq_invalid_compat) SYM_CODE_END(el0_fiq_invalid_compat)
#endif #endif
el1_sync_invalid: SYM_CODE_START_LOCAL(el1_sync_invalid)
inv_entry 1, BAD_SYNC inv_entry 1, BAD_SYNC
ENDPROC(el1_sync_invalid) SYM_CODE_END(el1_sync_invalid)
el1_irq_invalid: SYM_CODE_START_LOCAL(el1_irq_invalid)
inv_entry 1, BAD_IRQ inv_entry 1, BAD_IRQ
ENDPROC(el1_irq_invalid) SYM_CODE_END(el1_irq_invalid)
el1_fiq_invalid: SYM_CODE_START_LOCAL(el1_fiq_invalid)
inv_entry 1, BAD_FIQ inv_entry 1, BAD_FIQ
ENDPROC(el1_fiq_invalid) SYM_CODE_END(el1_fiq_invalid)
el1_error_invalid: SYM_CODE_START_LOCAL(el1_error_invalid)
inv_entry 1, BAD_ERROR inv_entry 1, BAD_ERROR
ENDPROC(el1_error_invalid) SYM_CODE_END(el1_error_invalid)
/* /*
* EL1 mode handlers. * EL1 mode handlers.
*/ */
.align 6 .align 6
el1_sync: SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
kernel_entry 1 kernel_entry 1
mov x0, sp mov x0, sp
bl el1_sync_handler bl el1_sync_handler
kernel_exit 1 kernel_exit 1
ENDPROC(el1_sync) SYM_CODE_END(el1_sync)
.align 6 .align 6
el1_irq: SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
kernel_entry 1 kernel_entry 1
gic_prio_irq_setup pmr=x20, tmp=x1 gic_prio_irq_setup pmr=x20, tmp=x1
enable_da_f enable_da_f
...@@ -639,42 +644,42 @@ alternative_else_nop_endif ...@@ -639,42 +644,42 @@ alternative_else_nop_endif
#endif #endif
kernel_exit 1 kernel_exit 1
ENDPROC(el1_irq) SYM_CODE_END(el1_irq)
/* /*
* EL0 mode handlers. * EL0 mode handlers.
*/ */
.align 6 .align 6
el0_sync: SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
kernel_entry 0 kernel_entry 0
mov x0, sp mov x0, sp
bl el0_sync_handler bl el0_sync_handler
b ret_to_user b ret_to_user
ENDPROC(el0_sync) SYM_CODE_END(el0_sync)
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
.align 6 .align 6
el0_sync_compat: SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
kernel_entry 0, 32 kernel_entry 0, 32
mov x0, sp mov x0, sp
bl el0_sync_compat_handler bl el0_sync_compat_handler
b ret_to_user b ret_to_user
ENDPROC(el0_sync_compat) SYM_CODE_END(el0_sync_compat)
.align 6 .align 6
el0_irq_compat: SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
kernel_entry 0, 32 kernel_entry 0, 32
b el0_irq_naked b el0_irq_naked
ENDPROC(el0_irq_compat) SYM_CODE_END(el0_irq_compat)
el0_error_compat: SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
kernel_entry 0, 32 kernel_entry 0, 32
b el0_error_naked b el0_error_naked
ENDPROC(el0_error_compat) SYM_CODE_END(el0_error_compat)
#endif #endif
.align 6 .align 6
el0_irq: SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
kernel_entry 0 kernel_entry 0
el0_irq_naked: el0_irq_naked:
gic_prio_irq_setup pmr=x20, tmp=x0 gic_prio_irq_setup pmr=x20, tmp=x0
...@@ -696,9 +701,9 @@ el0_irq_naked: ...@@ -696,9 +701,9 @@ el0_irq_naked:
bl trace_hardirqs_on bl trace_hardirqs_on
#endif #endif
b ret_to_user b ret_to_user
ENDPROC(el0_irq) SYM_CODE_END(el0_irq)
el1_error: SYM_CODE_START_LOCAL(el1_error)
kernel_entry 1 kernel_entry 1
mrs x1, esr_el1 mrs x1, esr_el1
gic_prio_kentry_setup tmp=x2 gic_prio_kentry_setup tmp=x2
...@@ -706,9 +711,9 @@ el1_error: ...@@ -706,9 +711,9 @@ el1_error:
mov x0, sp mov x0, sp
bl do_serror bl do_serror
kernel_exit 1 kernel_exit 1
ENDPROC(el1_error) SYM_CODE_END(el1_error)
el0_error: SYM_CODE_START_LOCAL(el0_error)
kernel_entry 0 kernel_entry 0
el0_error_naked: el0_error_naked:
mrs x25, esr_el1 mrs x25, esr_el1
...@@ -720,7 +725,7 @@ el0_error_naked: ...@@ -720,7 +725,7 @@ el0_error_naked:
bl do_serror bl do_serror
enable_da_f enable_da_f
b ret_to_user b ret_to_user
ENDPROC(el0_error) SYM_CODE_END(el0_error)
/* /*
* Ok, we need to do extra processing, enter the slow path. * Ok, we need to do extra processing, enter the slow path.
...@@ -832,7 +837,7 @@ alternative_else_nop_endif ...@@ -832,7 +837,7 @@ alternative_else_nop_endif
.endm .endm
.align 11 .align 11
ENTRY(tramp_vectors) SYM_CODE_START_NOALIGN(tramp_vectors)
.space 0x400 .space 0x400
tramp_ventry tramp_ventry
...@@ -844,24 +849,24 @@ ENTRY(tramp_vectors) ...@@ -844,24 +849,24 @@ ENTRY(tramp_vectors)
tramp_ventry 32 tramp_ventry 32
tramp_ventry 32 tramp_ventry 32
tramp_ventry 32 tramp_ventry 32
END(tramp_vectors) SYM_CODE_END(tramp_vectors)
ENTRY(tramp_exit_native) SYM_CODE_START(tramp_exit_native)
tramp_exit tramp_exit
END(tramp_exit_native) SYM_CODE_END(tramp_exit_native)
ENTRY(tramp_exit_compat) SYM_CODE_START(tramp_exit_compat)
tramp_exit 32 tramp_exit 32
END(tramp_exit_compat) SYM_CODE_END(tramp_exit_compat)
.ltorg .ltorg
.popsection // .entry.tramp.text .popsection // .entry.tramp.text
#ifdef CONFIG_RANDOMIZE_BASE #ifdef CONFIG_RANDOMIZE_BASE
.pushsection ".rodata", "a" .pushsection ".rodata", "a"
.align PAGE_SHIFT .align PAGE_SHIFT
.globl __entry_tramp_data_start SYM_DATA_START(__entry_tramp_data_start)
__entry_tramp_data_start:
.quad vectors .quad vectors
SYM_DATA_END(__entry_tramp_data_start)
.popsection // .rodata .popsection // .rodata
#endif /* CONFIG_RANDOMIZE_BASE */ #endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
...@@ -874,7 +879,7 @@ __entry_tramp_data_start: ...@@ -874,7 +879,7 @@ __entry_tramp_data_start:
* Previous and next are guaranteed not to be the same. * Previous and next are guaranteed not to be the same.
* *
*/ */
ENTRY(cpu_switch_to) SYM_FUNC_START(cpu_switch_to)
mov x10, #THREAD_CPU_CONTEXT mov x10, #THREAD_CPU_CONTEXT
add x8, x0, x10 add x8, x0, x10
mov x9, sp mov x9, sp
...@@ -895,21 +900,22 @@ ENTRY(cpu_switch_to) ...@@ -895,21 +900,22 @@ ENTRY(cpu_switch_to)
ldr lr, [x8] ldr lr, [x8]
mov sp, x9 mov sp, x9
msr sp_el0, x1 msr sp_el0, x1
ptrauth_keys_install_kernel x1, 1, x8, x9, x10
ret ret
ENDPROC(cpu_switch_to) SYM_FUNC_END(cpu_switch_to)
NOKPROBE(cpu_switch_to) NOKPROBE(cpu_switch_to)
/* /*
* This is how we return from a fork. * This is how we return from a fork.
*/ */
ENTRY(ret_from_fork) SYM_CODE_START(ret_from_fork)
bl schedule_tail bl schedule_tail
cbz x19, 1f // not a kernel thread cbz x19, 1f // not a kernel thread
mov x0, x20 mov x0, x20
blr x19 blr x19
1: get_current_task tsk 1: get_current_task tsk
b ret_to_user b ret_to_user
ENDPROC(ret_from_fork) SYM_CODE_END(ret_from_fork)
NOKPROBE(ret_from_fork) NOKPROBE(ret_from_fork)
#ifdef CONFIG_ARM_SDE_INTERFACE #ifdef CONFIG_ARM_SDE_INTERFACE
...@@ -938,7 +944,7 @@ NOKPROBE(ret_from_fork) ...@@ -938,7 +944,7 @@ NOKPROBE(ret_from_fork)
*/ */
.ltorg .ltorg
.pushsection ".entry.tramp.text", "ax" .pushsection ".entry.tramp.text", "ax"
ENTRY(__sdei_asm_entry_trampoline) SYM_CODE_START(__sdei_asm_entry_trampoline)
mrs x4, ttbr1_el1 mrs x4, ttbr1_el1
tbz x4, #USER_ASID_BIT, 1f tbz x4, #USER_ASID_BIT, 1f
...@@ -960,7 +966,7 @@ ENTRY(__sdei_asm_entry_trampoline) ...@@ -960,7 +966,7 @@ ENTRY(__sdei_asm_entry_trampoline)
ldr x4, =__sdei_asm_handler ldr x4, =__sdei_asm_handler
#endif #endif
br x4 br x4
ENDPROC(__sdei_asm_entry_trampoline) SYM_CODE_END(__sdei_asm_entry_trampoline)
NOKPROBE(__sdei_asm_entry_trampoline) NOKPROBE(__sdei_asm_entry_trampoline)
/* /*
...@@ -970,21 +976,22 @@ NOKPROBE(__sdei_asm_entry_trampoline) ...@@ -970,21 +976,22 @@ NOKPROBE(__sdei_asm_entry_trampoline)
* x2: exit_mode * x2: exit_mode
* x4: struct sdei_registered_event argument from registration time. * x4: struct sdei_registered_event argument from registration time.
*/ */
ENTRY(__sdei_asm_exit_trampoline) SYM_CODE_START(__sdei_asm_exit_trampoline)
ldr x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)] ldr x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
cbnz x4, 1f cbnz x4, 1f
tramp_unmap_kernel tmp=x4 tramp_unmap_kernel tmp=x4
1: sdei_handler_exit exit_mode=x2 1: sdei_handler_exit exit_mode=x2
ENDPROC(__sdei_asm_exit_trampoline) SYM_CODE_END(__sdei_asm_exit_trampoline)
NOKPROBE(__sdei_asm_exit_trampoline) NOKPROBE(__sdei_asm_exit_trampoline)
.ltorg .ltorg
.popsection // .entry.tramp.text .popsection // .entry.tramp.text
#ifdef CONFIG_RANDOMIZE_BASE #ifdef CONFIG_RANDOMIZE_BASE
.pushsection ".rodata", "a" .pushsection ".rodata", "a"
__sdei_asm_trampoline_next_handler: SYM_DATA_START(__sdei_asm_trampoline_next_handler)
.quad __sdei_asm_handler .quad __sdei_asm_handler
SYM_DATA_END(__sdei_asm_trampoline_next_handler)
.popsection // .rodata .popsection // .rodata
#endif /* CONFIG_RANDOMIZE_BASE */ #endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
...@@ -1002,7 +1009,7 @@ __sdei_asm_trampoline_next_handler: ...@@ -1002,7 +1009,7 @@ __sdei_asm_trampoline_next_handler:
* follow SMC-CC. We save (or retrieve) all the registers as the handler may * follow SMC-CC. We save (or retrieve) all the registers as the handler may
* want them. * want them.
*/ */
ENTRY(__sdei_asm_handler) SYM_CODE_START(__sdei_asm_handler)
stp x2, x3, [x1, #SDEI_EVENT_INTREGS + S_PC] stp x2, x3, [x1, #SDEI_EVENT_INTREGS + S_PC]
stp x4, x5, [x1, #SDEI_EVENT_INTREGS + 16 * 2] stp x4, x5, [x1, #SDEI_EVENT_INTREGS + 16 * 2]
stp x6, x7, [x1, #SDEI_EVENT_INTREGS + 16 * 3] stp x6, x7, [x1, #SDEI_EVENT_INTREGS + 16 * 3]
...@@ -1085,6 +1092,6 @@ alternative_else_nop_endif ...@@ -1085,6 +1092,6 @@ alternative_else_nop_endif
tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline
br x5 br x5
#endif #endif
ENDPROC(__sdei_asm_handler) SYM_CODE_END(__sdei_asm_handler)
NOKPROBE(__sdei_asm_handler) NOKPROBE(__sdei_asm_handler)
#endif /* CONFIG_ARM_SDE_INTERFACE */ #endif /* CONFIG_ARM_SDE_INTERFACE */
...@@ -105,7 +105,7 @@ pe_header: ...@@ -105,7 +105,7 @@ pe_header:
* x24 __primary_switch() .. relocate_kernel() * x24 __primary_switch() .. relocate_kernel()
* current RELR displacement * current RELR displacement
*/ */
ENTRY(stext) SYM_CODE_START(stext)
bl preserve_boot_args bl preserve_boot_args
bl el2_setup // Drop to EL1, w0=cpu_boot_mode bl el2_setup // Drop to EL1, w0=cpu_boot_mode
adrp x23, __PHYS_OFFSET adrp x23, __PHYS_OFFSET
...@@ -118,14 +118,15 @@ ENTRY(stext) ...@@ -118,14 +118,15 @@ ENTRY(stext)
* On return, the CPU will be ready for the MMU to be turned on and * On return, the CPU will be ready for the MMU to be turned on and
* the TCR will have been set. * the TCR will have been set.
*/ */
mov x0, #ARM64_CPU_BOOT_PRIMARY
bl __cpu_setup // initialise processor bl __cpu_setup // initialise processor
b __primary_switch b __primary_switch
ENDPROC(stext) SYM_CODE_END(stext)
/* /*
* Preserve the arguments passed by the bootloader in x0 .. x3 * Preserve the arguments passed by the bootloader in x0 .. x3
*/ */
preserve_boot_args: SYM_CODE_START_LOCAL(preserve_boot_args)
mov x21, x0 // x21=FDT mov x21, x0 // x21=FDT
adr_l x0, boot_args // record the contents of adr_l x0, boot_args // record the contents of
...@@ -137,7 +138,7 @@ preserve_boot_args: ...@@ -137,7 +138,7 @@ preserve_boot_args:
mov x1, #0x20 // 4 x 8 bytes mov x1, #0x20 // 4 x 8 bytes
b __inval_dcache_area // tail call b __inval_dcache_area // tail call
ENDPROC(preserve_boot_args) SYM_CODE_END(preserve_boot_args)
/* /*
* Macro to create a table entry to the next page. * Macro to create a table entry to the next page.
...@@ -275,7 +276,7 @@ ENDPROC(preserve_boot_args) ...@@ -275,7 +276,7 @@ ENDPROC(preserve_boot_args)
* - first few MB of the kernel linear mapping to jump to once the MMU has * - first few MB of the kernel linear mapping to jump to once the MMU has
* been enabled * been enabled
*/ */
__create_page_tables: SYM_FUNC_START_LOCAL(__create_page_tables)
mov x28, lr mov x28, lr
/* /*
...@@ -403,15 +404,14 @@ __create_page_tables: ...@@ -403,15 +404,14 @@ __create_page_tables:
bl __inval_dcache_area bl __inval_dcache_area
ret x28 ret x28
ENDPROC(__create_page_tables) SYM_FUNC_END(__create_page_tables)
.ltorg
/* /*
* The following fragment of code is executed with the MMU enabled. * The following fragment of code is executed with the MMU enabled.
* *
* x0 = __PHYS_OFFSET * x0 = __PHYS_OFFSET
*/ */
__primary_switched: SYM_FUNC_START_LOCAL(__primary_switched)
adrp x4, init_thread_union adrp x4, init_thread_union
add sp, x4, #THREAD_SIZE add sp, x4, #THREAD_SIZE
adr_l x5, init_task adr_l x5, init_task
...@@ -456,7 +456,14 @@ __primary_switched: ...@@ -456,7 +456,14 @@ __primary_switched:
mov x29, #0 mov x29, #0
mov x30, #0 mov x30, #0
b start_kernel b start_kernel
ENDPROC(__primary_switched) SYM_FUNC_END(__primary_switched)
.pushsection ".rodata", "a"
SYM_DATA_START(kimage_vaddr)
.quad _text - TEXT_OFFSET
SYM_DATA_END(kimage_vaddr)
EXPORT_SYMBOL(kimage_vaddr)
.popsection
/* /*
* end early head section, begin head code that is also used for * end early head section, begin head code that is also used for
...@@ -464,10 +471,6 @@ ENDPROC(__primary_switched) ...@@ -464,10 +471,6 @@ ENDPROC(__primary_switched)
*/ */
.section ".idmap.text","awx" .section ".idmap.text","awx"
ENTRY(kimage_vaddr)
.quad _text - TEXT_OFFSET
EXPORT_SYMBOL(kimage_vaddr)
/* /*
* If we're fortunate enough to boot at EL2, ensure that the world is * If we're fortunate enough to boot at EL2, ensure that the world is
* sane before dropping to EL1. * sane before dropping to EL1.
...@@ -475,7 +478,7 @@ EXPORT_SYMBOL(kimage_vaddr) ...@@ -475,7 +478,7 @@ EXPORT_SYMBOL(kimage_vaddr)
* Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if
* booted in EL1 or EL2 respectively. * booted in EL1 or EL2 respectively.
*/ */
ENTRY(el2_setup) SYM_FUNC_START(el2_setup)
msr SPsel, #1 // We want to use SP_EL{1,2} msr SPsel, #1 // We want to use SP_EL{1,2}
mrs x0, CurrentEL mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2 cmp x0, #CurrentEL_EL2
...@@ -599,7 +602,7 @@ set_hcr: ...@@ -599,7 +602,7 @@ set_hcr:
isb isb
ret ret
install_el2_stub: SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
/* /*
* When VHE is not in use, early init of EL2 and EL1 needs to be * When VHE is not in use, early init of EL2 and EL1 needs to be
* done here. * done here.
...@@ -636,13 +639,13 @@ install_el2_stub: ...@@ -636,13 +639,13 @@ install_el2_stub:
msr elr_el2, lr msr elr_el2, lr
mov w0, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2 mov w0, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2
eret eret
ENDPROC(el2_setup) SYM_FUNC_END(el2_setup)
/* /*
* Sets the __boot_cpu_mode flag depending on the CPU boot mode passed * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
* in w0. See arch/arm64/include/asm/virt.h for more info. * in w0. See arch/arm64/include/asm/virt.h for more info.
*/ */
set_cpu_boot_mode_flag: SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
adr_l x1, __boot_cpu_mode adr_l x1, __boot_cpu_mode
cmp w0, #BOOT_CPU_MODE_EL2 cmp w0, #BOOT_CPU_MODE_EL2
b.ne 1f b.ne 1f
...@@ -651,7 +654,7 @@ set_cpu_boot_mode_flag: ...@@ -651,7 +654,7 @@ set_cpu_boot_mode_flag:
dmb sy dmb sy
dc ivac, x1 // Invalidate potentially stale cache line dc ivac, x1 // Invalidate potentially stale cache line
ret ret
ENDPROC(set_cpu_boot_mode_flag) SYM_FUNC_END(set_cpu_boot_mode_flag)
/* /*
* These values are written with the MMU off, but read with the MMU on. * These values are written with the MMU off, but read with the MMU on.
...@@ -667,15 +670,17 @@ ENDPROC(set_cpu_boot_mode_flag) ...@@ -667,15 +670,17 @@ ENDPROC(set_cpu_boot_mode_flag)
* This is not in .bss, because we set it sufficiently early that the boot-time * This is not in .bss, because we set it sufficiently early that the boot-time
* zeroing of .bss would clobber it. * zeroing of .bss would clobber it.
*/ */
ENTRY(__boot_cpu_mode) SYM_DATA_START(__boot_cpu_mode)
.long BOOT_CPU_MODE_EL2 .long BOOT_CPU_MODE_EL2
.long BOOT_CPU_MODE_EL1 .long BOOT_CPU_MODE_EL1
SYM_DATA_END(__boot_cpu_mode)
/* /*
* The booting CPU updates the failed status @__early_cpu_boot_status, * The booting CPU updates the failed status @__early_cpu_boot_status,
* with MMU turned off. * with MMU turned off.
*/ */
ENTRY(__early_cpu_boot_status) SYM_DATA_START(__early_cpu_boot_status)
.quad 0 .quad 0
SYM_DATA_END(__early_cpu_boot_status)
.popsection .popsection
...@@ -683,7 +688,7 @@ ENTRY(__early_cpu_boot_status) ...@@ -683,7 +688,7 @@ ENTRY(__early_cpu_boot_status)
* This provides a "holding pen" in which platforms hold all secondary * This provides a "holding pen" in which platforms hold all secondary
* cores until we're ready for them to initialise. * cores until we're ready for them to initialise.
*/ */
ENTRY(secondary_holding_pen) SYM_FUNC_START(secondary_holding_pen)
bl el2_setup // Drop to EL1, w0=cpu_boot_mode bl el2_setup // Drop to EL1, w0=cpu_boot_mode
bl set_cpu_boot_mode_flag bl set_cpu_boot_mode_flag
mrs x0, mpidr_el1 mrs x0, mpidr_el1
...@@ -695,31 +700,32 @@ pen: ldr x4, [x3] ...@@ -695,31 +700,32 @@ pen: ldr x4, [x3]
b.eq secondary_startup b.eq secondary_startup
wfe wfe
b pen b pen
ENDPROC(secondary_holding_pen) SYM_FUNC_END(secondary_holding_pen)
/* /*
* Secondary entry point that jumps straight into the kernel. Only to * Secondary entry point that jumps straight into the kernel. Only to
* be used where CPUs are brought online dynamically by the kernel. * be used where CPUs are brought online dynamically by the kernel.
*/ */
ENTRY(secondary_entry) SYM_FUNC_START(secondary_entry)
bl el2_setup // Drop to EL1 bl el2_setup // Drop to EL1
bl set_cpu_boot_mode_flag bl set_cpu_boot_mode_flag
b secondary_startup b secondary_startup
ENDPROC(secondary_entry) SYM_FUNC_END(secondary_entry)
secondary_startup: SYM_FUNC_START_LOCAL(secondary_startup)
/* /*
* Common entry point for secondary CPUs. * Common entry point for secondary CPUs.
*/ */
bl __cpu_secondary_check52bitva bl __cpu_secondary_check52bitva
mov x0, #ARM64_CPU_BOOT_SECONDARY
bl __cpu_setup // initialise processor bl __cpu_setup // initialise processor
adrp x1, swapper_pg_dir adrp x1, swapper_pg_dir
bl __enable_mmu bl __enable_mmu
ldr x8, =__secondary_switched ldr x8, =__secondary_switched
br x8 br x8
ENDPROC(secondary_startup) SYM_FUNC_END(secondary_startup)
__secondary_switched: SYM_FUNC_START_LOCAL(__secondary_switched)
adr_l x5, vectors adr_l x5, vectors
msr vbar_el1, x5 msr vbar_el1, x5
isb isb
...@@ -734,13 +740,13 @@ __secondary_switched: ...@@ -734,13 +740,13 @@ __secondary_switched:
mov x29, #0 mov x29, #0
mov x30, #0 mov x30, #0
b secondary_start_kernel b secondary_start_kernel
ENDPROC(__secondary_switched) SYM_FUNC_END(__secondary_switched)
__secondary_too_slow: SYM_FUNC_START_LOCAL(__secondary_too_slow)
wfe wfe
wfi wfi
b __secondary_too_slow b __secondary_too_slow
ENDPROC(__secondary_too_slow) SYM_FUNC_END(__secondary_too_slow)
/* /*
* The booting CPU updates the failed status @__early_cpu_boot_status, * The booting CPU updates the failed status @__early_cpu_boot_status,
...@@ -772,7 +778,7 @@ ENDPROC(__secondary_too_slow) ...@@ -772,7 +778,7 @@ ENDPROC(__secondary_too_slow)
* Checks if the selected granule size is supported by the CPU. * Checks if the selected granule size is supported by the CPU.
* If it isn't, park the CPU * If it isn't, park the CPU
*/ */
ENTRY(__enable_mmu) SYM_FUNC_START(__enable_mmu)
mrs x2, ID_AA64MMFR0_EL1 mrs x2, ID_AA64MMFR0_EL1
ubfx x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4 ubfx x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4
cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED
...@@ -796,9 +802,9 @@ ENTRY(__enable_mmu) ...@@ -796,9 +802,9 @@ ENTRY(__enable_mmu)
dsb nsh dsb nsh
isb isb
ret ret
ENDPROC(__enable_mmu) SYM_FUNC_END(__enable_mmu)
ENTRY(__cpu_secondary_check52bitva) SYM_FUNC_START(__cpu_secondary_check52bitva)
#ifdef CONFIG_ARM64_VA_BITS_52 #ifdef CONFIG_ARM64_VA_BITS_52
ldr_l x0, vabits_actual ldr_l x0, vabits_actual
cmp x0, #52 cmp x0, #52
...@@ -816,9 +822,9 @@ ENTRY(__cpu_secondary_check52bitva) ...@@ -816,9 +822,9 @@ ENTRY(__cpu_secondary_check52bitva)
#endif #endif
2: ret 2: ret
ENDPROC(__cpu_secondary_check52bitva) SYM_FUNC_END(__cpu_secondary_check52bitva)
__no_granule_support: SYM_FUNC_START_LOCAL(__no_granule_support)
/* Indicate that this CPU can't boot and is stuck in the kernel */ /* Indicate that this CPU can't boot and is stuck in the kernel */
update_early_cpu_boot_status \ update_early_cpu_boot_status \
CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2 CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
...@@ -826,10 +832,10 @@ __no_granule_support: ...@@ -826,10 +832,10 @@ __no_granule_support:
wfe wfe
wfi wfi
b 1b b 1b
ENDPROC(__no_granule_support) SYM_FUNC_END(__no_granule_support)
#ifdef CONFIG_RELOCATABLE #ifdef CONFIG_RELOCATABLE
__relocate_kernel: SYM_FUNC_START_LOCAL(__relocate_kernel)
/* /*
* Iterate over each entry in the relocation table, and apply the * Iterate over each entry in the relocation table, and apply the
* relocations in place. * relocations in place.
...@@ -931,10 +937,10 @@ __relocate_kernel: ...@@ -931,10 +937,10 @@ __relocate_kernel:
#endif #endif
ret ret
ENDPROC(__relocate_kernel) SYM_FUNC_END(__relocate_kernel)
#endif #endif
__primary_switch: SYM_FUNC_START_LOCAL(__primary_switch)
#ifdef CONFIG_RANDOMIZE_BASE #ifdef CONFIG_RANDOMIZE_BASE
mov x19, x0 // preserve new SCTLR_EL1 value mov x19, x0 // preserve new SCTLR_EL1 value
mrs x20, sctlr_el1 // preserve old SCTLR_EL1 value mrs x20, sctlr_el1 // preserve old SCTLR_EL1 value
...@@ -977,4 +983,4 @@ __primary_switch: ...@@ -977,4 +983,4 @@ __primary_switch:
ldr x8, =__primary_switched ldr x8, =__primary_switched
adrp x0, __PHYS_OFFSET adrp x0, __PHYS_OFFSET
br x8 br x8
ENDPROC(__primary_switch) SYM_FUNC_END(__primary_switch)
...@@ -110,8 +110,6 @@ ENTRY(swsusp_arch_suspend_exit) ...@@ -110,8 +110,6 @@ ENTRY(swsusp_arch_suspend_exit)
cbz x24, 3f /* Do we need to re-initialise EL2? */ cbz x24, 3f /* Do we need to re-initialise EL2? */
hvc #0 hvc #0
3: ret 3: ret
.ltorg
ENDPROC(swsusp_arch_suspend_exit) ENDPROC(swsusp_arch_suspend_exit)
/* /*
......
...@@ -63,7 +63,7 @@ el1_sync: ...@@ -63,7 +63,7 @@ el1_sync:
beq 9f // Nothing to reset! beq 9f // Nothing to reset!
/* Someone called kvm_call_hyp() against the hyp-stub... */ /* Someone called kvm_call_hyp() against the hyp-stub... */
ldr x0, =HVC_STUB_ERR mov_q x0, HVC_STUB_ERR
eret eret
9: mov x0, xzr 9: mov x0, xzr
......
...@@ -121,7 +121,7 @@ static int setup_dtb(struct kimage *image, ...@@ -121,7 +121,7 @@ static int setup_dtb(struct kimage *image,
/* add kaslr-seed */ /* add kaslr-seed */
ret = fdt_delprop(dtb, off, FDT_PROP_KASLR_SEED); ret = fdt_delprop(dtb, off, FDT_PROP_KASLR_SEED);
if (ret == -FDT_ERR_NOTFOUND) if (ret == -FDT_ERR_NOTFOUND)
ret = 0; ret = 0;
else if (ret) else if (ret)
goto out; goto out;
......
...@@ -9,7 +9,7 @@ ...@@ -9,7 +9,7 @@
int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
{ {
struct ptrauth_keys *keys = &tsk->thread.keys_user; struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY | unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
PR_PAC_APDAKEY | PR_PAC_APDBKEY; PR_PAC_APDAKEY | PR_PAC_APDBKEY;
unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY; unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
...@@ -18,8 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) ...@@ -18,8 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
return -EINVAL; return -EINVAL;
if (!arg) { if (!arg) {
ptrauth_keys_init(keys); ptrauth_keys_init_user(keys);
ptrauth_keys_switch(keys);
return 0; return 0;
} }
...@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) ...@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
if (arg & PR_PAC_APGAKEY) if (arg & PR_PAC_APGAKEY)
get_random_bytes(&keys->apga, sizeof(keys->apga)); get_random_bytes(&keys->apga, sizeof(keys->apga));
ptrauth_keys_switch(keys);
return 0; return 0;
} }
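For completeness, the key-reset path touched above is reachable from user space through prctl(); a minimal caller (illustrative only, error handling kept to a perror()) looks like this:

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	/* Ask the kernel to regenerate this task's APIA and generic (APGA)
	 * pointer authentication keys; passing 0 as the key mask would
	 * reset all of them, as the !arg branch above shows. */
	if (prctl(PR_PAC_RESET_KEYS, PR_PAC_APIAKEY | PR_PAC_APGAKEY, 0, 0, 0))
		perror("PR_PAC_RESET_KEYS");
	return 0;
}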
...@@ -262,7 +262,7 @@ void __show_regs(struct pt_regs *regs) ...@@ -262,7 +262,7 @@ void __show_regs(struct pt_regs *regs)
if (!user_mode(regs)) { if (!user_mode(regs)) {
printk("pc : %pS\n", (void *)regs->pc); printk("pc : %pS\n", (void *)regs->pc);
printk("lr : %pS\n", (void *)lr); printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr));
} else { } else {
printk("pc : %016llx\n", regs->pc); printk("pc : %016llx\n", regs->pc);
printk("lr : %016llx\n", lr); printk("lr : %016llx\n", lr);
...@@ -376,6 +376,8 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start, ...@@ -376,6 +376,8 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start,
*/ */
fpsimd_flush_task_state(p); fpsimd_flush_task_state(p);
ptrauth_thread_init_kernel(p);
if (likely(!(p->flags & PF_KTHREAD))) { if (likely(!(p->flags & PF_KTHREAD))) {
*childregs = *current_pt_regs(); *childregs = *current_pt_regs();
childregs->regs[0] = 0; childregs->regs[0] = 0;
...@@ -512,7 +514,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, ...@@ -512,7 +514,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
contextidr_thread_switch(next); contextidr_thread_switch(next);
entry_task_switch(next); entry_task_switch(next);
uao_thread_switch(next); uao_thread_switch(next);
ptrauth_thread_switch(next);
ssbs_thread_switch(next); ssbs_thread_switch(next);
/* /*
......
...@@ -999,7 +999,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey) ...@@ -999,7 +999,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
} }
static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys, static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
const struct ptrauth_keys *keys) const struct ptrauth_keys_user *keys)
{ {
ukeys->apiakey = pac_key_to_user(&keys->apia); ukeys->apiakey = pac_key_to_user(&keys->apia);
ukeys->apibkey = pac_key_to_user(&keys->apib); ukeys->apibkey = pac_key_to_user(&keys->apib);
...@@ -1007,7 +1007,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys, ...@@ -1007,7 +1007,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
ukeys->apdbkey = pac_key_to_user(&keys->apdb); ukeys->apdbkey = pac_key_to_user(&keys->apdb);
} }
static void pac_address_keys_from_user(struct ptrauth_keys *keys, static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
const struct user_pac_address_keys *ukeys) const struct user_pac_address_keys *ukeys)
{ {
keys->apia = pac_key_from_user(ukeys->apiakey); keys->apia = pac_key_from_user(ukeys->apiakey);
...@@ -1021,7 +1021,7 @@ static int pac_address_keys_get(struct task_struct *target, ...@@ -1021,7 +1021,7 @@ static int pac_address_keys_get(struct task_struct *target,
unsigned int pos, unsigned int count, unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf) void *kbuf, void __user *ubuf)
{ {
struct ptrauth_keys *keys = &target->thread.keys_user; struct ptrauth_keys_user *keys = &target->thread.keys_user;
struct user_pac_address_keys user_keys; struct user_pac_address_keys user_keys;
if (!system_supports_address_auth()) if (!system_supports_address_auth())
...@@ -1038,7 +1038,7 @@ static int pac_address_keys_set(struct task_struct *target, ...@@ -1038,7 +1038,7 @@ static int pac_address_keys_set(struct task_struct *target,
unsigned int pos, unsigned int count, unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf) const void *kbuf, const void __user *ubuf)
{ {
struct ptrauth_keys *keys = &target->thread.keys_user; struct ptrauth_keys_user *keys = &target->thread.keys_user;
struct user_pac_address_keys user_keys; struct user_pac_address_keys user_keys;
int ret; int ret;
...@@ -1056,12 +1056,12 @@ static int pac_address_keys_set(struct task_struct *target, ...@@ -1056,12 +1056,12 @@ static int pac_address_keys_set(struct task_struct *target,
} }
static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys, static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
const struct ptrauth_keys *keys) const struct ptrauth_keys_user *keys)
{ {
ukeys->apgakey = pac_key_to_user(&keys->apga); ukeys->apgakey = pac_key_to_user(&keys->apga);
} }
static void pac_generic_keys_from_user(struct ptrauth_keys *keys, static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
const struct user_pac_generic_keys *ukeys) const struct user_pac_generic_keys *ukeys)
{ {
keys->apga = pac_key_from_user(ukeys->apgakey); keys->apga = pac_key_from_user(ukeys->apgakey);
...@@ -1072,7 +1072,7 @@ static int pac_generic_keys_get(struct task_struct *target, ...@@ -1072,7 +1072,7 @@ static int pac_generic_keys_get(struct task_struct *target,
unsigned int pos, unsigned int count, unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf) void *kbuf, void __user *ubuf)
{ {
struct ptrauth_keys *keys = &target->thread.keys_user; struct ptrauth_keys_user *keys = &target->thread.keys_user;
struct user_pac_generic_keys user_keys; struct user_pac_generic_keys user_keys;
if (!system_supports_generic_auth()) if (!system_supports_generic_auth())
...@@ -1089,7 +1089,7 @@ static int pac_generic_keys_set(struct task_struct *target, ...@@ -1089,7 +1089,7 @@ static int pac_generic_keys_set(struct task_struct *target,
unsigned int pos, unsigned int count, unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf) const void *kbuf, const void __user *ubuf)
{ {
struct ptrauth_keys *keys = &target->thread.keys_user; struct ptrauth_keys_user *keys = &target->thread.keys_user;
struct user_pac_generic_keys user_keys; struct user_pac_generic_keys user_keys;
int ret; int ret;
......
...@@ -41,7 +41,7 @@ ENTRY(arm64_relocate_new_kernel) ...@@ -41,7 +41,7 @@ ENTRY(arm64_relocate_new_kernel)
cmp x0, #CurrentEL_EL2 cmp x0, #CurrentEL_EL2
b.ne 1f b.ne 1f
mrs x0, sctlr_el2 mrs x0, sctlr_el2
ldr x1, =SCTLR_ELx_FLAGS mov_q x1, SCTLR_ELx_FLAGS
bic x0, x0, x1 bic x0, x0, x1
pre_disable_mmu_workaround pre_disable_mmu_workaround
msr sctlr_el2, x0 msr sctlr_el2, x0
...@@ -113,8 +113,6 @@ ENTRY(arm64_relocate_new_kernel) ...@@ -113,8 +113,6 @@ ENTRY(arm64_relocate_new_kernel)
ENDPROC(arm64_relocate_new_kernel) ENDPROC(arm64_relocate_new_kernel)
.ltorg
.align 3 /* To keep the 64-bit values below naturally aligned. */ .align 3 /* To keep the 64-bit values below naturally aligned. */
.Lcopy_end: .Lcopy_end:
......
...@@ -344,7 +344,7 @@ void __init setup_arch(char **cmdline_p) ...@@ -344,7 +344,7 @@ void __init setup_arch(char **cmdline_p)
else else
psci_acpi_init(); psci_acpi_init();
cpu_read_bootcpu_ops(); init_bootcpu_ops();
smp_init_cpus(); smp_init_cpus();
smp_build_mpidr_hash(); smp_build_mpidr_hash();
...@@ -371,8 +371,10 @@ void __init setup_arch(char **cmdline_p) ...@@ -371,8 +371,10 @@ void __init setup_arch(char **cmdline_p)
static inline bool cpu_can_disable(unsigned int cpu) static inline bool cpu_can_disable(unsigned int cpu)
{ {
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_can_disable) const struct cpu_operations *ops = get_cpu_ops(cpu);
return cpu_ops[cpu]->cpu_can_disable(cpu);
if (ops && ops->cpu_can_disable)
return ops->cpu_can_disable(cpu);
#endif #endif
return false; return false;
} }
......
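The setup.c hunk above, and the smp.c changes further down, replace direct cpu_ops[cpu] dereferences with the get_cpu_ops() accessor introduced by the "arm64: Introduce get_cpu_ops() helper function" patch in this series. The helper itself lives in arch/arm64/kernel/cpu_ops.c and is not visible in this excerpt; a minimal compilable sketch of the pattern (with the operations struct trimmed to a few representative hooks) would be:

    /* Sketch only: the real definitions are in asm/cpu_ops.h and
     * arch/arm64/kernel/cpu_ops.c; this struct lists just a few hooks. */
    #define NR_CPUS 256

    struct cpu_operations {
        const char *name;
        int (*cpu_init)(unsigned int cpu);
        int (*cpu_boot)(unsigned int cpu);
        void (*cpu_die)(unsigned int cpu);
    };

    /* The table becomes private to cpu_ops.c ... */
    static const struct cpu_operations *cpu_ops[NR_CPUS];

    /* ... and callers go through a single accessor instead of indexing
     * the array themselves. */
    const struct cpu_operations *get_cpu_ops(int cpu)
    {
        return cpu_ops[cpu];
    }

Callers such as cpu_can_disable() above then only deal with a const pointer, and the storage stays an implementation detail of one file.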
...@@ -3,6 +3,7 @@ ...@@ -3,6 +3,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/smp.h>
.text .text
/* /*
...@@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter) ...@@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter)
.pushsection ".idmap.text", "awx" .pushsection ".idmap.text", "awx"
ENTRY(cpu_resume) ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly bl el2_setup // if in EL2 drop to EL1 cleanly
mov x0, #ARM64_CPU_RUNTIME
bl __cpu_setup bl __cpu_setup
/* enable the MMU early - so we can access sleep_save_stash by va */ /* enable the MMU early - so we can access sleep_save_stash by va */
adrp x1, swapper_pg_dir adrp x1, swapper_pg_dir
......
...@@ -93,8 +93,10 @@ static inline int op_cpu_kill(unsigned int cpu) ...@@ -93,8 +93,10 @@ static inline int op_cpu_kill(unsigned int cpu)
*/ */
static int boot_secondary(unsigned int cpu, struct task_struct *idle) static int boot_secondary(unsigned int cpu, struct task_struct *idle)
{ {
if (cpu_ops[cpu]->cpu_boot) const struct cpu_operations *ops = get_cpu_ops(cpu);
return cpu_ops[cpu]->cpu_boot(cpu);
if (ops->cpu_boot)
return ops->cpu_boot(cpu);
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
...@@ -112,63 +114,66 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) ...@@ -112,63 +114,66 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
*/ */
secondary_data.task = idle; secondary_data.task = idle;
secondary_data.stack = task_stack_page(idle) + THREAD_SIZE; secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
#if defined(CONFIG_ARM64_PTR_AUTH)
secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
#endif
update_cpu_boot_status(CPU_MMU_OFF); update_cpu_boot_status(CPU_MMU_OFF);
__flush_dcache_area(&secondary_data, sizeof(secondary_data)); __flush_dcache_area(&secondary_data, sizeof(secondary_data));
/* /* Now bring the CPU into our world */
* Now bring the CPU into our world.
*/
ret = boot_secondary(cpu, idle); ret = boot_secondary(cpu, idle);
if (ret == 0) { if (ret) {
/*
* CPU was successfully started, wait for it to come online or
* time out.
*/
wait_for_completion_timeout(&cpu_running,
msecs_to_jiffies(5000));
if (!cpu_online(cpu)) {
pr_crit("CPU%u: failed to come online\n", cpu);
ret = -EIO;
}
} else {
pr_err("CPU%u: failed to boot: %d\n", cpu, ret); pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
return ret; return ret;
} }
/*
* CPU was successfully started, wait for it to come online or
* time out.
*/
wait_for_completion_timeout(&cpu_running,
msecs_to_jiffies(5000));
if (cpu_online(cpu))
return 0;
pr_crit("CPU%u: failed to come online\n", cpu);
secondary_data.task = NULL; secondary_data.task = NULL;
secondary_data.stack = NULL; secondary_data.stack = NULL;
#if defined(CONFIG_ARM64_PTR_AUTH)
secondary_data.ptrauth_key.apia.lo = 0;
secondary_data.ptrauth_key.apia.hi = 0;
#endif
__flush_dcache_area(&secondary_data, sizeof(secondary_data)); __flush_dcache_area(&secondary_data, sizeof(secondary_data));
status = READ_ONCE(secondary_data.status); status = READ_ONCE(secondary_data.status);
if (ret && status) { if (status == CPU_MMU_OFF)
status = READ_ONCE(__early_cpu_boot_status);
if (status == CPU_MMU_OFF)
status = READ_ONCE(__early_cpu_boot_status);
switch (status & CPU_BOOT_STATUS_MASK) { switch (status & CPU_BOOT_STATUS_MASK) {
default: default:
pr_err("CPU%u: failed in unknown state : 0x%lx\n", pr_err("CPU%u: failed in unknown state : 0x%lx\n",
cpu, status); cpu, status);
cpus_stuck_in_kernel++; cpus_stuck_in_kernel++;
break; break;
case CPU_KILL_ME: case CPU_KILL_ME:
if (!op_cpu_kill(cpu)) { if (!op_cpu_kill(cpu)) {
pr_crit("CPU%u: died during early boot\n", cpu); pr_crit("CPU%u: died during early boot\n", cpu);
break;
}
pr_crit("CPU%u: may not have shut down cleanly\n", cpu);
/* Fall through */
case CPU_STUCK_IN_KERNEL:
pr_crit("CPU%u: is stuck in kernel\n", cpu);
if (status & CPU_STUCK_REASON_52_BIT_VA)
pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
if (status & CPU_STUCK_REASON_NO_GRAN)
pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K);
cpus_stuck_in_kernel++;
break; break;
case CPU_PANIC_KERNEL:
panic("CPU%u detected unsupported configuration\n", cpu);
} }
pr_crit("CPU%u: may not have shut down cleanly\n", cpu);
/* Fall through */
case CPU_STUCK_IN_KERNEL:
pr_crit("CPU%u: is stuck in kernel\n", cpu);
if (status & CPU_STUCK_REASON_52_BIT_VA)
pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
if (status & CPU_STUCK_REASON_NO_GRAN) {
pr_crit("CPU%u: does not support %luK granule\n",
cpu, PAGE_SIZE / SZ_1K);
}
cpus_stuck_in_kernel++;
break;
case CPU_PANIC_KERNEL:
panic("CPU%u detected unsupported configuration\n", cpu);
} }
return ret; return ret;
...@@ -196,6 +201,7 @@ asmlinkage notrace void secondary_start_kernel(void) ...@@ -196,6 +201,7 @@ asmlinkage notrace void secondary_start_kernel(void)
{ {
u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK; u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
struct mm_struct *mm = &init_mm; struct mm_struct *mm = &init_mm;
const struct cpu_operations *ops;
unsigned int cpu; unsigned int cpu;
cpu = task_cpu(current); cpu = task_cpu(current);
...@@ -227,8 +233,9 @@ asmlinkage notrace void secondary_start_kernel(void) ...@@ -227,8 +233,9 @@ asmlinkage notrace void secondary_start_kernel(void)
*/ */
check_local_cpu_capabilities(); check_local_cpu_capabilities();
if (cpu_ops[cpu]->cpu_postboot) ops = get_cpu_ops(cpu);
cpu_ops[cpu]->cpu_postboot(); if (ops->cpu_postboot)
ops->cpu_postboot();
/* /*
* Log the CPU info before it is marked online and might get read. * Log the CPU info before it is marked online and might get read.
...@@ -266,19 +273,21 @@ asmlinkage notrace void secondary_start_kernel(void) ...@@ -266,19 +273,21 @@ asmlinkage notrace void secondary_start_kernel(void)
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
static int op_cpu_disable(unsigned int cpu) static int op_cpu_disable(unsigned int cpu)
{ {
const struct cpu_operations *ops = get_cpu_ops(cpu);
/* /*
* If we don't have a cpu_die method, abort before we reach the point * If we don't have a cpu_die method, abort before we reach the point
* of no return. CPU0 may not have an cpu_ops, so test for it. * of no return. CPU0 may not have an cpu_ops, so test for it.
*/ */
if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_die) if (!ops || !ops->cpu_die)
return -EOPNOTSUPP; return -EOPNOTSUPP;
/* /*
* We may need to abort a hot unplug for some other mechanism-specific * We may need to abort a hot unplug for some other mechanism-specific
* reason. * reason.
*/ */
if (cpu_ops[cpu]->cpu_disable) if (ops->cpu_disable)
return cpu_ops[cpu]->cpu_disable(cpu); return ops->cpu_disable(cpu);
return 0; return 0;
} }
...@@ -314,15 +323,17 @@ int __cpu_disable(void) ...@@ -314,15 +323,17 @@ int __cpu_disable(void)
static int op_cpu_kill(unsigned int cpu) static int op_cpu_kill(unsigned int cpu)
{ {
const struct cpu_operations *ops = get_cpu_ops(cpu);
/* /*
* If we have no means of synchronising with the dying CPU, then assume * If we have no means of synchronising with the dying CPU, then assume
* that it is really dead. We can only wait for an arbitrary length of * that it is really dead. We can only wait for an arbitrary length of
* time and hope that it's dead, so let's skip the wait and just hope. * time and hope that it's dead, so let's skip the wait and just hope.
*/ */
if (!cpu_ops[cpu]->cpu_kill) if (!ops->cpu_kill)
return 0; return 0;
return cpu_ops[cpu]->cpu_kill(cpu); return ops->cpu_kill(cpu);
} }
/* /*
...@@ -357,6 +368,7 @@ void __cpu_die(unsigned int cpu) ...@@ -357,6 +368,7 @@ void __cpu_die(unsigned int cpu)
void cpu_die(void) void cpu_die(void)
{ {
unsigned int cpu = smp_processor_id(); unsigned int cpu = smp_processor_id();
const struct cpu_operations *ops = get_cpu_ops(cpu);
idle_task_exit(); idle_task_exit();
...@@ -370,12 +382,22 @@ void cpu_die(void) ...@@ -370,12 +382,22 @@ void cpu_die(void)
* mechanism must perform all required cache maintenance to ensure that * mechanism must perform all required cache maintenance to ensure that
* no dirty lines are lost in the process of shutting down the CPU. * no dirty lines are lost in the process of shutting down the CPU.
*/ */
cpu_ops[cpu]->cpu_die(cpu); ops->cpu_die(cpu);
BUG(); BUG();
} }
#endif #endif
static void __cpu_try_die(int cpu)
{
#ifdef CONFIG_HOTPLUG_CPU
const struct cpu_operations *ops = get_cpu_ops(cpu);
if (ops && ops->cpu_die)
ops->cpu_die(cpu);
#endif
}
/* /*
* Kill the calling secondary CPU, early in bringup before it is turned * Kill the calling secondary CPU, early in bringup before it is turned
* online. * online.
...@@ -389,12 +411,11 @@ void cpu_die_early(void) ...@@ -389,12 +411,11 @@ void cpu_die_early(void)
/* Mark this CPU absent */ /* Mark this CPU absent */
set_cpu_present(cpu, 0); set_cpu_present(cpu, 0);
#ifdef CONFIG_HOTPLUG_CPU if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
update_cpu_boot_status(CPU_KILL_ME); update_cpu_boot_status(CPU_KILL_ME);
/* Check if we can park ourselves */ __cpu_try_die(cpu);
if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_die) }
cpu_ops[cpu]->cpu_die(cpu);
#endif
update_cpu_boot_status(CPU_STUCK_IN_KERNEL); update_cpu_boot_status(CPU_STUCK_IN_KERNEL);
cpu_park_loop(); cpu_park_loop();
...@@ -488,10 +509,13 @@ static bool __init is_mpidr_duplicate(unsigned int cpu, u64 hwid) ...@@ -488,10 +509,13 @@ static bool __init is_mpidr_duplicate(unsigned int cpu, u64 hwid)
*/ */
static int __init smp_cpu_setup(int cpu) static int __init smp_cpu_setup(int cpu)
{ {
if (cpu_read_ops(cpu)) const struct cpu_operations *ops;
if (init_cpu_ops(cpu))
return -ENODEV; return -ENODEV;
if (cpu_ops[cpu]->cpu_init(cpu)) ops = get_cpu_ops(cpu);
if (ops->cpu_init(cpu))
return -ENODEV; return -ENODEV;
set_cpu_possible(cpu, true); set_cpu_possible(cpu, true);
...@@ -714,6 +738,7 @@ void __init smp_init_cpus(void) ...@@ -714,6 +738,7 @@ void __init smp_init_cpus(void)
void __init smp_prepare_cpus(unsigned int max_cpus) void __init smp_prepare_cpus(unsigned int max_cpus)
{ {
const struct cpu_operations *ops;
int err; int err;
unsigned int cpu; unsigned int cpu;
unsigned int this_cpu; unsigned int this_cpu;
...@@ -744,10 +769,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus) ...@@ -744,10 +769,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
if (cpu == smp_processor_id()) if (cpu == smp_processor_id())
continue; continue;
if (!cpu_ops[cpu]) ops = get_cpu_ops(cpu);
if (!ops)
continue; continue;
err = cpu_ops[cpu]->cpu_prepare(cpu); err = ops->cpu_prepare(cpu);
if (err) if (err)
continue; continue;
...@@ -863,10 +889,8 @@ static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs) ...@@ -863,10 +889,8 @@ static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs)
local_irq_disable(); local_irq_disable();
sdei_mask_local_cpu(); sdei_mask_local_cpu();
#ifdef CONFIG_HOTPLUG_CPU if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
if (cpu_ops[cpu]->cpu_die) __cpu_try_die(cpu);
cpu_ops[cpu]->cpu_die(cpu);
#endif
/* just in case */ /* just in case */
cpu_park_loop(); cpu_park_loop();
...@@ -1059,8 +1083,9 @@ static bool have_cpu_die(void) ...@@ -1059,8 +1083,9 @@ static bool have_cpu_die(void)
{ {
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
int any_cpu = raw_smp_processor_id(); int any_cpu = raw_smp_processor_id();
const struct cpu_operations *ops = get_cpu_ops(any_cpu);
if (cpu_ops[any_cpu] && cpu_ops[any_cpu]->cpu_die) if (ops && ops->cpu_die)
return true; return true;
#endif #endif
return false; return false;
......
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <linux/stacktrace.h> #include <linux/stacktrace.h>
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/pointer_auth.h>
#include <asm/stack_pointer.h> #include <asm/stack_pointer.h>
#include <asm/stacktrace.h> #include <asm/stacktrace.h>
...@@ -86,7 +87,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) ...@@ -86,7 +87,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
if (tsk->ret_stack && if (tsk->ret_stack &&
(frame->pc == (unsigned long)return_to_handler)) { (ptrauth_strip_insn_pac(frame->pc) == (unsigned long)return_to_handler)) {
struct ftrace_ret_stack *ret_stack; struct ftrace_ret_stack *ret_stack;
/* /*
* This is a case where function graph tracer has * This is a case where function graph tracer has
...@@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) ...@@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
} }
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
frame->pc = ptrauth_strip_insn_pac(frame->pc);
/* /*
* Frames created upon entry from EL0 have NULL FP and PC values, so * Frames created upon entry from EL0 have NULL FP and PC values, so
* don't bother reporting these. Frames created by __noreturn functions * don't bother reporting these. Frames created by __noreturn functions
......
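This unwinder hunk, like the earlier __show_regs change, runs saved return addresses through ptrauth_strip_insn_pac() because with in-kernel pointer authentication the top bits of a stored LR hold a PAC rather than address bits; without stripping, %pS symbolisation and the return_to_handler comparison above would both be defeated. As a simplified illustration only (fixed 48-bit VAs here, whereas the kernel derives its masks from vabits_actual and preserves any top-byte tag on user pointers), stripping amounts to:

    #include <stdint.h>

    #define VA_BITS         48
    /* PAC bits of a kernel (TTBR1) pointer: 63..VA_BITS */
    #define KERNEL_PAC_MASK (~0ULL << VA_BITS)
    /* PAC bits of a user (TTBR0) pointer: 54..VA_BITS; bit 55 selects the
     * TTBR and bits 63:56 may be a TBI tag, so both are left alone. */
    #define USER_PAC_MASK   (((1ULL << 55) - 1) & (~0ULL << VA_BITS))

    static uint64_t strip_pac(uint64_t ptr)
    {
        if (ptr & (1ULL << 55))                 /* kernel pointer */
            return ptr | KERNEL_PAC_MASK;       /* refill the upper ones */
        return ptr & ~USER_PAC_MASK;            /* user pointer: clear PAC */
    }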
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/arch_topology.h> #include <linux/arch_topology.h>
#include <linux/cacheinfo.h> #include <linux/cacheinfo.h>
#include <linux/cpufreq.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/percpu.h> #include <linux/percpu.h>
...@@ -120,4 +121,183 @@ int __init parse_acpi_topology(void) ...@@ -120,4 +121,183 @@ int __init parse_acpi_topology(void)
} }
#endif #endif
#ifdef CONFIG_ARM64_AMU_EXTN
#undef pr_fmt
#define pr_fmt(fmt) "AMU: " fmt
static DEFINE_PER_CPU_READ_MOSTLY(unsigned long, arch_max_freq_scale);
static DEFINE_PER_CPU(u64, arch_const_cycles_prev);
static DEFINE_PER_CPU(u64, arch_core_cycles_prev);
static cpumask_var_t amu_fie_cpus;
/* Initialize counter reference per-cpu variables for the current CPU */
void init_cpu_freq_invariance_counters(void)
{
this_cpu_write(arch_core_cycles_prev,
read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0));
this_cpu_write(arch_const_cycles_prev,
read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0));
}
static int validate_cpu_freq_invariance_counters(int cpu)
{
u64 max_freq_hz, ratio;
if (!cpu_has_amu_feat(cpu)) {
pr_debug("CPU%d: counters are not supported.\n", cpu);
return -EINVAL;
}
if (unlikely(!per_cpu(arch_const_cycles_prev, cpu) ||
!per_cpu(arch_core_cycles_prev, cpu))) {
pr_debug("CPU%d: cycle counters are not enabled.\n", cpu);
return -EINVAL;
}
/* Convert maximum frequency from KHz to Hz and validate */
max_freq_hz = cpufreq_get_hw_max_freq(cpu) * 1000;
if (unlikely(!max_freq_hz)) {
pr_debug("CPU%d: invalid maximum frequency.\n", cpu);
return -EINVAL;
}
/*
* Pre-compute the fixed ratio between the frequency of the constant
* counter and the maximum frequency of the CPU.
*
* const_freq
* arch_max_freq_scale = ---------------- * SCHED_CAPACITY_SCALE²
* cpuinfo_max_freq
*
* We use a factor of 2 * SCHED_CAPACITY_SHIFT -> SCHED_CAPACITY_SCALE²
* in order to ensure a good resolution for arch_max_freq_scale for
* very low arch timer frequencies (down to the KHz range which should
* be unlikely).
*/
ratio = (u64)arch_timer_get_rate() << (2 * SCHED_CAPACITY_SHIFT);
ratio = div64_u64(ratio, max_freq_hz);
if (!ratio) {
WARN_ONCE(1, "System timer frequency too low.\n");
return -EINVAL;
}
per_cpu(arch_max_freq_scale, cpu) = (unsigned long)ratio;
return 0;
}
static inline bool
enable_policy_freq_counters(int cpu, cpumask_var_t valid_cpus)
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
if (!policy) {
pr_debug("CPU%d: No cpufreq policy found.\n", cpu);
return false;
}
if (cpumask_subset(policy->related_cpus, valid_cpus))
cpumask_or(amu_fie_cpus, policy->related_cpus,
amu_fie_cpus);
cpufreq_cpu_put(policy);
return true;
}
static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
#define amu_freq_invariant() static_branch_unlikely(&amu_fie_key)
static int __init init_amu_fie(void)
{
cpumask_var_t valid_cpus;
bool have_policy = false;
int ret = 0;
int cpu;
if (!zalloc_cpumask_var(&valid_cpus, GFP_KERNEL))
return -ENOMEM;
if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) {
ret = -ENOMEM;
goto free_valid_mask;
}
for_each_present_cpu(cpu) {
if (validate_cpu_freq_invariance_counters(cpu))
continue;
cpumask_set_cpu(cpu, valid_cpus);
have_policy |= enable_policy_freq_counters(cpu, valid_cpus);
}
/*
* If we are not restricted by cpufreq policies, we only enable
* the use of the AMU feature for FIE if all CPUs support AMU.
* Otherwise, enable_policy_freq_counters has already enabled
* policy cpus.
*/
if (!have_policy && cpumask_equal(valid_cpus, cpu_present_mask))
cpumask_or(amu_fie_cpus, amu_fie_cpus, valid_cpus);
if (!cpumask_empty(amu_fie_cpus)) {
pr_info("CPUs[%*pbl]: counters will be used for FIE.",
cpumask_pr_args(amu_fie_cpus));
static_branch_enable(&amu_fie_key);
}
free_valid_mask:
free_cpumask_var(valid_cpus);
return ret;
}
late_initcall_sync(init_amu_fie);
bool arch_freq_counters_available(struct cpumask *cpus)
{
return amu_freq_invariant() &&
cpumask_subset(cpus, amu_fie_cpus);
}
void topology_scale_freq_tick(void)
{
u64 prev_core_cnt, prev_const_cnt;
u64 core_cnt, const_cnt, scale;
int cpu = smp_processor_id();
if (!amu_freq_invariant())
return;
if (!cpumask_test_cpu(cpu, amu_fie_cpus))
return;
const_cnt = read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
core_cnt = read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);
prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
prev_core_cnt = this_cpu_read(arch_core_cycles_prev);
if (unlikely(core_cnt <= prev_core_cnt ||
const_cnt <= prev_const_cnt))
goto store_and_exit;
/*
* /\core arch_max_freq_scale
* scale = ------- * --------------------
* /\const SCHED_CAPACITY_SCALE
*
* See validate_cpu_freq_invariance_counters() for details on
* arch_max_freq_scale and the use of SCHED_CAPACITY_SHIFT.
*/
scale = core_cnt - prev_core_cnt;
scale *= this_cpu_read(arch_max_freq_scale);
scale = div64_u64(scale >> SCHED_CAPACITY_SHIFT,
const_cnt - prev_const_cnt);
scale = min_t(unsigned long, scale, SCHED_CAPACITY_SCALE);
this_cpu_write(freq_scale, (unsigned long)scale);
store_and_exit:
this_cpu_write(arch_core_cycles_prev, core_cnt);
this_cpu_write(arch_const_cycles_prev, const_cnt);
}
#endif /* CONFIG_ARM64_AMU_EXTN */
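The two comment blocks above carry the whole of the frequency-invariance arithmetic: a boot-time fixed-point ratio between the constant counter frequency and the CPU's maximum frequency, and a per-tick scale built from the counter deltas. A standalone sketch with invented sample values (25 MHz constant counter, 2.0 GHz cpuinfo_max_freq, and a tick window in which the CPU actually ran at 1.5 GHz) walks the same formulas end to end; only the numbers are made up, the arithmetic mirrors validate_cpu_freq_invariance_counters() and topology_scale_freq_tick():

    #include <stdint.h>
    #include <stdio.h>

    #define SCHED_CAPACITY_SHIFT    10
    #define SCHED_CAPACITY_SCALE    (1ULL << SCHED_CAPACITY_SHIFT)

    int main(void)
    {
        /* Invented sample values. */
        uint64_t const_rate_hz = 25000000;      /* constant counter rate  */
        uint64_t max_freq_hz   = 2000000000;    /* cpuinfo_max_freq in Hz */

        /* Boot time: const_freq / cpuinfo_max_freq * SCHED_CAPACITY_SCALE^2 */
        uint64_t arch_max_freq_scale =
            (const_rate_hz << (2 * SCHED_CAPACITY_SHIFT)) / max_freq_hz;

        /* One tick window: counter deltas. */
        uint64_t d_const = 25000;       /* ~1 ms of the constant counter   */
        uint64_t d_core  = 1500000;     /* CPU cycles => it ran at 1.5 GHz */

        /* Per tick: d_core/d_const * arch_max_freq_scale / 2^10 */
        uint64_t scale = d_core * arch_max_freq_scale;
        scale >>= SCHED_CAPACITY_SHIFT;
        scale /= d_const;
        if (scale > SCHED_CAPACITY_SCALE)
            scale = SCHED_CAPACITY_SCALE;

        /* Prints 13107 and 767, i.e. roughly 1.5/2.0 of the 1024 capacity. */
        printf("arch_max_freq_scale=%llu freq_scale=%llu\n",
               (unsigned long long)arch_max_freq_scale,
               (unsigned long long)scale);
        return 0;
    }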
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
.text .text
nop nop
ENTRY(__kernel_rt_sigreturn) SYM_FUNC_START(__kernel_rt_sigreturn)
.cfi_startproc .cfi_startproc
.cfi_signal_frame .cfi_signal_frame
.cfi_def_cfa x29, 0 .cfi_def_cfa x29, 0
...@@ -23,4 +23,4 @@ ENTRY(__kernel_rt_sigreturn) ...@@ -23,4 +23,4 @@ ENTRY(__kernel_rt_sigreturn)
mov x8, #__NR_rt_sigreturn mov x8, #__NR_rt_sigreturn
svc #0 svc #0
.cfi_endproc .cfi_endproc
ENDPROC(__kernel_rt_sigreturn) SYM_FUNC_END(__kernel_rt_sigreturn)
...@@ -10,13 +10,6 @@ ...@@ -10,13 +10,6 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/unistd.h> #include <asm/unistd.h>
#define ARM_ENTRY(name) \
ENTRY(name)
#define ARM_ENDPROC(name) \
.type name, %function; \
END(name)
.text .text
.arm .arm
...@@ -24,39 +17,39 @@ ...@@ -24,39 +17,39 @@
.save {r0-r15} .save {r0-r15}
.pad #COMPAT_SIGFRAME_REGS_OFFSET .pad #COMPAT_SIGFRAME_REGS_OFFSET
nop nop
ARM_ENTRY(__kernel_sigreturn_arm) SYM_FUNC_START(__kernel_sigreturn_arm)
mov r7, #__NR_compat_sigreturn mov r7, #__NR_compat_sigreturn
svc #0 svc #0
.fnend .fnend
ARM_ENDPROC(__kernel_sigreturn_arm) SYM_FUNC_END(__kernel_sigreturn_arm)
.fnstart .fnstart
.save {r0-r15} .save {r0-r15}
.pad #COMPAT_RT_SIGFRAME_REGS_OFFSET .pad #COMPAT_RT_SIGFRAME_REGS_OFFSET
nop nop
ARM_ENTRY(__kernel_rt_sigreturn_arm) SYM_FUNC_START(__kernel_rt_sigreturn_arm)
mov r7, #__NR_compat_rt_sigreturn mov r7, #__NR_compat_rt_sigreturn
svc #0 svc #0
.fnend .fnend
ARM_ENDPROC(__kernel_rt_sigreturn_arm) SYM_FUNC_END(__kernel_rt_sigreturn_arm)
.thumb .thumb
.fnstart .fnstart
.save {r0-r15} .save {r0-r15}
.pad #COMPAT_SIGFRAME_REGS_OFFSET .pad #COMPAT_SIGFRAME_REGS_OFFSET
nop nop
ARM_ENTRY(__kernel_sigreturn_thumb) SYM_FUNC_START(__kernel_sigreturn_thumb)
mov r7, #__NR_compat_sigreturn mov r7, #__NR_compat_sigreturn
svc #0 svc #0
.fnend .fnend
ARM_ENDPROC(__kernel_sigreturn_thumb) SYM_FUNC_END(__kernel_sigreturn_thumb)
.fnstart .fnstart
.save {r0-r15} .save {r0-r15}
.pad #COMPAT_RT_SIGFRAME_REGS_OFFSET .pad #COMPAT_RT_SIGFRAME_REGS_OFFSET
nop nop
ARM_ENTRY(__kernel_rt_sigreturn_thumb) SYM_FUNC_START(__kernel_rt_sigreturn_thumb)
mov r7, #__NR_compat_rt_sigreturn mov r7, #__NR_compat_rt_sigreturn
svc #0 svc #0
.fnend .fnend
ARM_ENDPROC(__kernel_rt_sigreturn_thumb) SYM_FUNC_END(__kernel_rt_sigreturn_thumb)
...@@ -18,7 +18,7 @@ ...@@ -18,7 +18,7 @@
.align 11 .align 11
ENTRY(__kvm_hyp_init) SYM_CODE_START(__kvm_hyp_init)
ventry __invalid // Synchronous EL2t ventry __invalid // Synchronous EL2t
ventry __invalid // IRQ EL2t ventry __invalid // IRQ EL2t
ventry __invalid // FIQ EL2t ventry __invalid // FIQ EL2t
...@@ -60,7 +60,7 @@ alternative_else_nop_endif ...@@ -60,7 +60,7 @@ alternative_else_nop_endif
msr ttbr0_el2, x4 msr ttbr0_el2, x4
mrs x4, tcr_el1 mrs x4, tcr_el1
ldr x5, =TCR_EL2_MASK mov_q x5, TCR_EL2_MASK
and x4, x4, x5 and x4, x4, x5
mov x5, #TCR_EL2_RES1 mov x5, #TCR_EL2_RES1
orr x4, x4, x5 orr x4, x4, x5
...@@ -102,7 +102,7 @@ alternative_else_nop_endif ...@@ -102,7 +102,7 @@ alternative_else_nop_endif
* as well as the EE bit on BE. Drop the A flag since the compiler * as well as the EE bit on BE. Drop the A flag since the compiler
* is allowed to generate unaligned accesses. * is allowed to generate unaligned accesses.
*/ */
ldr x4, =(SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A)) mov_q x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
CPU_BE( orr x4, x4, #SCTLR_ELx_EE) CPU_BE( orr x4, x4, #SCTLR_ELx_EE)
msr sctlr_el2, x4 msr sctlr_el2, x4
isb isb
...@@ -117,9 +117,9 @@ CPU_BE( orr x4, x4, #SCTLR_ELx_EE) ...@@ -117,9 +117,9 @@ CPU_BE( orr x4, x4, #SCTLR_ELx_EE)
/* Hello, World! */ /* Hello, World! */
eret eret
ENDPROC(__kvm_hyp_init) SYM_CODE_END(__kvm_hyp_init)
ENTRY(__kvm_handle_stub_hvc) SYM_CODE_START(__kvm_handle_stub_hvc)
cmp x0, #HVC_SOFT_RESTART cmp x0, #HVC_SOFT_RESTART
b.ne 1f b.ne 1f
...@@ -142,7 +142,7 @@ reset: ...@@ -142,7 +142,7 @@ reset:
* case we coming via HVC_SOFT_RESTART. * case we coming via HVC_SOFT_RESTART.
*/ */
mrs x5, sctlr_el2 mrs x5, sctlr_el2
ldr x6, =SCTLR_ELx_FLAGS mov_q x6, SCTLR_ELx_FLAGS
bic x5, x5, x6 // Clear SCTL_M and etc bic x5, x5, x6 // Clear SCTL_M and etc
pre_disable_mmu_workaround pre_disable_mmu_workaround
msr sctlr_el2, x5 msr sctlr_el2, x5
...@@ -155,11 +155,9 @@ reset: ...@@ -155,11 +155,9 @@ reset:
eret eret
1: /* Bad stub call */ 1: /* Bad stub call */
ldr x0, =HVC_STUB_ERR mov_q x0, HVC_STUB_ERR
eret eret
ENDPROC(__kvm_handle_stub_hvc) SYM_CODE_END(__kvm_handle_stub_hvc)
.ltorg
.popsection .popsection
...@@ -28,7 +28,7 @@ ...@@ -28,7 +28,7 @@
* and is used to implement hyp stubs in the same way as in * and is used to implement hyp stubs in the same way as in
* arch/arm64/kernel/hyp_stub.S. * arch/arm64/kernel/hyp_stub.S.
*/ */
ENTRY(__kvm_call_hyp) SYM_FUNC_START(__kvm_call_hyp)
hvc #0 hvc #0
ret ret
ENDPROC(__kvm_call_hyp) SYM_FUNC_END(__kvm_call_hyp)
...@@ -11,12 +11,12 @@ ...@@ -11,12 +11,12 @@
.text .text
.pushsection .hyp.text, "ax" .pushsection .hyp.text, "ax"
ENTRY(__fpsimd_save_state) SYM_FUNC_START(__fpsimd_save_state)
fpsimd_save x0, 1 fpsimd_save x0, 1
ret ret
ENDPROC(__fpsimd_save_state) SYM_FUNC_END(__fpsimd_save_state)
ENTRY(__fpsimd_restore_state) SYM_FUNC_START(__fpsimd_restore_state)
fpsimd_restore x0, 1 fpsimd_restore x0, 1
ret ret
ENDPROC(__fpsimd_restore_state) SYM_FUNC_END(__fpsimd_restore_state)
...@@ -180,7 +180,7 @@ el2_error: ...@@ -180,7 +180,7 @@ el2_error:
eret eret
sb sb
ENTRY(__hyp_do_panic) SYM_FUNC_START(__hyp_do_panic)
mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
PSR_MODE_EL1h) PSR_MODE_EL1h)
msr spsr_el2, lr msr spsr_el2, lr
...@@ -188,18 +188,19 @@ ENTRY(__hyp_do_panic) ...@@ -188,18 +188,19 @@ ENTRY(__hyp_do_panic)
msr elr_el2, lr msr elr_el2, lr
eret eret
sb sb
ENDPROC(__hyp_do_panic) SYM_FUNC_END(__hyp_do_panic)
ENTRY(__hyp_panic) SYM_CODE_START(__hyp_panic)
get_host_ctxt x0, x1 get_host_ctxt x0, x1
b hyp_panic b hyp_panic
ENDPROC(__hyp_panic) SYM_CODE_END(__hyp_panic)
.macro invalid_vector label, target = __hyp_panic .macro invalid_vector label, target = __hyp_panic
.align 2 .align 2
SYM_CODE_START(\label)
\label: \label:
b \target b \target
ENDPROC(\label) SYM_CODE_END(\label)
.endm .endm
/* None of these should ever happen */ /* None of these should ever happen */
...@@ -246,7 +247,7 @@ check_preamble_length 661b, 662b ...@@ -246,7 +247,7 @@ check_preamble_length 661b, 662b
check_preamble_length 661b, 662b check_preamble_length 661b, 662b
.endm .endm
ENTRY(__kvm_hyp_vector) SYM_CODE_START(__kvm_hyp_vector)
invalid_vect el2t_sync_invalid // Synchronous EL2t invalid_vect el2t_sync_invalid // Synchronous EL2t
invalid_vect el2t_irq_invalid // IRQ EL2t invalid_vect el2t_irq_invalid // IRQ EL2t
invalid_vect el2t_fiq_invalid // FIQ EL2t invalid_vect el2t_fiq_invalid // FIQ EL2t
...@@ -266,7 +267,7 @@ ENTRY(__kvm_hyp_vector) ...@@ -266,7 +267,7 @@ ENTRY(__kvm_hyp_vector)
valid_vect el1_irq // IRQ 32-bit EL1 valid_vect el1_irq // IRQ 32-bit EL1
invalid_vect el1_fiq_invalid // FIQ 32-bit EL1 invalid_vect el1_fiq_invalid // FIQ 32-bit EL1
valid_vect el1_error // Error 32-bit EL1 valid_vect el1_error // Error 32-bit EL1
ENDPROC(__kvm_hyp_vector) SYM_CODE_END(__kvm_hyp_vector)
#ifdef CONFIG_KVM_INDIRECT_VECTORS #ifdef CONFIG_KVM_INDIRECT_VECTORS
.macro hyp_ventry .macro hyp_ventry
...@@ -311,15 +312,17 @@ alternative_cb_end ...@@ -311,15 +312,17 @@ alternative_cb_end
.endm .endm
.align 11 .align 11
ENTRY(__bp_harden_hyp_vecs_start) SYM_CODE_START(__bp_harden_hyp_vecs)
.rept BP_HARDEN_EL2_SLOTS .rept BP_HARDEN_EL2_SLOTS
generate_vectors generate_vectors
.endr .endr
ENTRY(__bp_harden_hyp_vecs_end) 1: .org __bp_harden_hyp_vecs + __BP_HARDEN_HYP_VECS_SZ
.org 1b
SYM_CODE_END(__bp_harden_hyp_vecs)
.popsection .popsection
ENTRY(__smccc_workaround_1_smc_start) SYM_CODE_START(__smccc_workaround_1_smc)
esb esb
sub sp, sp, #(8 * 4) sub sp, sp, #(8 * 4)
stp x2, x3, [sp, #(8 * 0)] stp x2, x3, [sp, #(8 * 0)]
...@@ -329,5 +332,7 @@ ENTRY(__smccc_workaround_1_smc_start) ...@@ -329,5 +332,7 @@ ENTRY(__smccc_workaround_1_smc_start)
ldp x2, x3, [sp, #(8 * 0)] ldp x2, x3, [sp, #(8 * 0)]
ldp x0, x1, [sp, #(8 * 2)] ldp x0, x1, [sp, #(8 * 2)]
add sp, sp, #(8 * 4) add sp, sp, #(8 * 4)
ENTRY(__smccc_workaround_1_smc_end) 1: .org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
.org 1b
SYM_CODE_END(__smccc_workaround_1_smc)
#endif #endif
...@@ -98,6 +98,18 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu) ...@@ -98,6 +98,18 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
val = read_sysreg(cpacr_el1); val = read_sysreg(cpacr_el1);
val |= CPACR_EL1_TTA; val |= CPACR_EL1_TTA;
val &= ~CPACR_EL1_ZEN; val &= ~CPACR_EL1_ZEN;
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
* CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
* except for some missing controls, such as TAM.
* In this case, CPTR_EL2.TAM has the same position with or without
* VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
* shift value for trapping the AMU accesses.
*/
val |= CPTR_EL2_TAM;
if (update_fp_enabled(vcpu)) { if (update_fp_enabled(vcpu)) {
if (vcpu_has_sve(vcpu)) if (vcpu_has_sve(vcpu))
val |= CPACR_EL1_ZEN; val |= CPACR_EL1_ZEN;
...@@ -119,7 +131,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu) ...@@ -119,7 +131,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
__activate_traps_common(vcpu); __activate_traps_common(vcpu);
val = CPTR_EL2_DEFAULT; val = CPTR_EL2_DEFAULT;
val |= CPTR_EL2_TTA | CPTR_EL2_TZ; val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM;
if (!update_fp_enabled(vcpu)) { if (!update_fp_enabled(vcpu)) {
val |= CPTR_EL2_TFP; val |= CPTR_EL2_TFP;
__activate_traps_fpsimd32(vcpu); __activate_traps_fpsimd32(vcpu);
...@@ -127,7 +139,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu) ...@@ -127,7 +139,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
write_sysreg(val, cptr_el2); write_sysreg(val, cptr_el2);
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
isb(); isb();
...@@ -146,12 +158,12 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) ...@@ -146,12 +158,12 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
{ {
u64 hcr = vcpu->arch.hcr_el2; u64 hcr = vcpu->arch.hcr_el2;
if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
hcr |= HCR_TVM; hcr |= HCR_TVM;
write_sysreg(hcr, hcr_el2); write_sysreg(hcr, hcr_el2);
if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
if (has_vhe()) if (has_vhe())
...@@ -181,7 +193,7 @@ static void __hyp_text __deactivate_traps_nvhe(void) ...@@ -181,7 +193,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
{ {
u64 mdcr_el2 = read_sysreg(mdcr_el2); u64 mdcr_el2 = read_sysreg(mdcr_el2);
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
u64 val; u64 val;
/* /*
...@@ -328,7 +340,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) ...@@ -328,7 +340,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
* resolve the IPA using the AT instruction. * resolve the IPA using the AT instruction.
*/ */
if (!(esr & ESR_ELx_S1PTW) && if (!(esr & ESR_ELx_S1PTW) &&
(cpus_have_const_cap(ARM64_WORKAROUND_834220) || (cpus_have_final_cap(ARM64_WORKAROUND_834220) ||
(esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
if (!__translate_far_to_hpfar(far, &hpfar)) if (!__translate_far_to_hpfar(far, &hpfar))
return false; return false;
...@@ -498,7 +510,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) ...@@ -498,7 +510,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
if (*exit_code != ARM_EXCEPTION_TRAP) if (*exit_code != ARM_EXCEPTION_TRAP)
goto exit; goto exit;
if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
handle_tx2_tvm(vcpu)) handle_tx2_tvm(vcpu))
return true; return true;
...@@ -555,7 +567,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) ...@@ -555,7 +567,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
{ {
if (!cpus_have_const_cap(ARM64_SSBD)) if (!cpus_have_final_cap(ARM64_SSBD))
return false; return false;
return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
......
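This hunk, and the sysreg-sr.c and tlb.c hunks that follow, switch the hyp code from cpus_have_const_cap() to cpus_have_final_cap(). The new helper comes from the cpufeature patches earlier in the series and is not shown here; the intent, sketched loosely below with illustrative names rather than the kernel's exact code, is that code which can only run after CPU features have been finalised may rely unconditionally on the patched fast path and should complain if used any earlier:

    #include <assert.h>
    #include <stdbool.h>

    /* Loose illustration, not the kernel implementation. */
    static bool caps_finalized;     /* set once SMP bring-up is done        */
    static bool cap_enabled[64];    /* kernel: static keys patched at boot  */

    static bool have_const_cap(int num)
    {
        return cap_enabled[num];    /* usable early in boot as well         */
    }

    static bool have_final_cap(int num)
    {
        assert(caps_finalized);     /* kernel: BUG() if used too early      */
        return cap_enabled[num];
    }

KVM initialises well after capability finalisation, so the stricter helper is safe in all of the hyp paths touched here.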
...@@ -71,7 +71,7 @@ static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ct ...@@ -71,7 +71,7 @@ static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ct
ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR);
ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR);
if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2);
} }
...@@ -118,7 +118,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) ...@@ -118,7 +118,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2);
write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1);
if (!cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
} else if (!ctxt->__hyp_running_vcpu) { } else if (!ctxt->__hyp_running_vcpu) {
...@@ -149,7 +149,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) ...@@ -149,7 +149,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1);
write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1);
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) && if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) &&
ctxt->__hyp_running_vcpu) { ctxt->__hyp_running_vcpu) {
/* /*
* Must only be done for host registers, hence the context * Must only be done for host registers, hence the context
...@@ -194,7 +194,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) ...@@ -194,7 +194,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR); write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR);
write_sysreg_el2(pstate, SYS_SPSR); write_sysreg_el2(pstate, SYS_SPSR);
if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2);
} }
......
...@@ -23,7 +23,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, ...@@ -23,7 +23,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
local_irq_save(cxt->flags); local_irq_save(cxt->flags);
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
/* /*
* For CPUs that are affected by ARM errata 1165522 or 1530923, * For CPUs that are affected by ARM errata 1165522 or 1530923,
* we cannot trust stage-1 to be in a correct state at that * we cannot trust stage-1 to be in a correct state at that
...@@ -63,7 +63,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, ...@@ -63,7 +63,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm, static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
struct tlb_inv_context *cxt) struct tlb_inv_context *cxt)
{ {
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
u64 val; u64 val;
/* /*
...@@ -103,7 +103,7 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, ...@@ -103,7 +103,7 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
isb(); isb();
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
/* Restore the registers to what they were */ /* Restore the registers to what they were */
write_sysreg_el1(cxt->tcr, SYS_TCR); write_sysreg_el1(cxt->tcr, SYS_TCR);
write_sysreg_el1(cxt->sctlr, SYS_SCTLR); write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
...@@ -117,7 +117,7 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm, ...@@ -117,7 +117,7 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
{ {
write_sysreg(0, vttbr_el2); write_sysreg(0, vttbr_el2);
if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
/* Ensure write of the host VMID */ /* Ensure write of the host VMID */
isb(); isb();
/* Restore the host's TCR_EL1 */ /* Restore the host's TCR_EL1 */
......
...@@ -1003,6 +1003,20 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, ...@@ -1003,6 +1003,20 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \ { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \
access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), } access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
static bool access_amu(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
kvm_inject_undefined(vcpu);
return false;
}
/* Macro to expand the AMU counter and type registers*/
#define AMU_AMEVCNTR0_EL0(n) { SYS_DESC(SYS_AMEVCNTR0_EL0(n)), access_amu }
#define AMU_AMEVTYPE0_EL0(n) { SYS_DESC(SYS_AMEVTYPE0_EL0(n)), access_amu }
#define AMU_AMEVCNTR1_EL0(n) { SYS_DESC(SYS_AMEVCNTR1_EL0(n)), access_amu }
#define AMU_AMEVTYPE1_EL0(n) { SYS_DESC(SYS_AMEVTYPE1_EL0(n)), access_amu }
static bool trap_ptrauth(struct kvm_vcpu *vcpu, static bool trap_ptrauth(struct kvm_vcpu *vcpu,
struct sys_reg_params *p, struct sys_reg_params *p,
const struct sys_reg_desc *rd) const struct sys_reg_desc *rd)
...@@ -1078,13 +1092,25 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, ...@@ -1078,13 +1092,25 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
(u32)r->CRn, (u32)r->CRm, (u32)r->Op2); (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
u64 val = raz ? 0 : read_sanitised_ftr_reg(id); u64 val = raz ? 0 : read_sanitised_ftr_reg(id);
if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) { if (id == SYS_ID_AA64PFR0_EL1) {
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); if (!vcpu_has_sve(vcpu))
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
val &= ~(0xfUL << ID_AA64PFR0_AMU_SHIFT);
} else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) { } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) | val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
(0xfUL << ID_AA64ISAR1_API_SHIFT) | (0xfUL << ID_AA64ISAR1_API_SHIFT) |
(0xfUL << ID_AA64ISAR1_GPA_SHIFT) | (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
(0xfUL << ID_AA64ISAR1_GPI_SHIFT)); (0xfUL << ID_AA64ISAR1_GPI_SHIFT));
} else if (id == SYS_ID_AA64DFR0_EL1) {
/* Limit guests to PMUv3 for ARMv8.1 */
val = cpuid_feature_cap_perfmon_field(val,
ID_AA64DFR0_PMUVER_SHIFT,
ID_AA64DFR0_PMUVER_8_1);
} else if (id == SYS_ID_DFR0_EL1) {
/* Limit guests to PMUv3 for ARMv8.1 */
val = cpuid_feature_cap_perfmon_field(val,
ID_DFR0_PERFMON_SHIFT,
ID_DFR0_PERFMON_8_1);
} }
return val; return val;
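The new ID_AA64DFR0_EL1 / ID_DFR0_EL1 branches cap the PMU version a guest can see at PMUv3 for ARMv8.1. The cpuid_feature_cap_perfmon_field() helper named here is added elsewhere in the series and is not visible in this excerpt; as a rough, self-contained sketch of what capping a 4-bit ID field involves (the IMPLEMENTATION DEFINED handling and field layout below are assumptions, not the kernel's code):

    #include <stdint.h>

    /* Sketch: clamp one 4-bit unsigned ID register field to 'cap'. */
    static uint64_t cap_id_field(uint64_t reg, unsigned int shift, uint64_t cap)
    {
        uint64_t mask = 0xfULL << shift;
        uint64_t field = (reg >> shift) & 0xf;

        if (field == 0xf)                       /* IMPLEMENTATION DEFINED PMU: */
            return reg & ~mask;                 /* hide it from the guest      */
        if (field > cap)                        /* newer than what is emulated */
            return (reg & ~mask) | (cap << shift);
        return reg;
    }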
...@@ -1565,6 +1591,79 @@ static const struct sys_reg_desc sys_reg_descs[] = { ...@@ -1565,6 +1591,79 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
{ SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },
{ SYS_DESC(SYS_AMCR_EL0), access_amu },
{ SYS_DESC(SYS_AMCFGR_EL0), access_amu },
{ SYS_DESC(SYS_AMCGCR_EL0), access_amu },
{ SYS_DESC(SYS_AMUSERENR_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENCLR0_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENSET0_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENCLR1_EL0), access_amu },
{ SYS_DESC(SYS_AMCNTENSET1_EL0), access_amu },
AMU_AMEVCNTR0_EL0(0),
AMU_AMEVCNTR0_EL0(1),
AMU_AMEVCNTR0_EL0(2),
AMU_AMEVCNTR0_EL0(3),
AMU_AMEVCNTR0_EL0(4),
AMU_AMEVCNTR0_EL0(5),
AMU_AMEVCNTR0_EL0(6),
AMU_AMEVCNTR0_EL0(7),
AMU_AMEVCNTR0_EL0(8),
AMU_AMEVCNTR0_EL0(9),
AMU_AMEVCNTR0_EL0(10),
AMU_AMEVCNTR0_EL0(11),
AMU_AMEVCNTR0_EL0(12),
AMU_AMEVCNTR0_EL0(13),
AMU_AMEVCNTR0_EL0(14),
AMU_AMEVCNTR0_EL0(15),
AMU_AMEVTYPE0_EL0(0),
AMU_AMEVTYPE0_EL0(1),
AMU_AMEVTYPE0_EL0(2),
AMU_AMEVTYPE0_EL0(3),
AMU_AMEVTYPE0_EL0(4),
AMU_AMEVTYPE0_EL0(5),
AMU_AMEVTYPE0_EL0(6),
AMU_AMEVTYPE0_EL0(7),
AMU_AMEVTYPE0_EL0(8),
AMU_AMEVTYPE0_EL0(9),
AMU_AMEVTYPE0_EL0(10),
AMU_AMEVTYPE0_EL0(11),
AMU_AMEVTYPE0_EL0(12),
AMU_AMEVTYPE0_EL0(13),
AMU_AMEVTYPE0_EL0(14),
AMU_AMEVTYPE0_EL0(15),
AMU_AMEVCNTR1_EL0(0),
AMU_AMEVCNTR1_EL0(1),
AMU_AMEVCNTR1_EL0(2),
AMU_AMEVCNTR1_EL0(3),
AMU_AMEVCNTR1_EL0(4),
AMU_AMEVCNTR1_EL0(5),
AMU_AMEVCNTR1_EL0(6),
AMU_AMEVCNTR1_EL0(7),
AMU_AMEVCNTR1_EL0(8),
AMU_AMEVCNTR1_EL0(9),
AMU_AMEVCNTR1_EL0(10),
AMU_AMEVCNTR1_EL0(11),
AMU_AMEVCNTR1_EL0(12),
AMU_AMEVCNTR1_EL0(13),
AMU_AMEVCNTR1_EL0(14),
AMU_AMEVCNTR1_EL0(15),
AMU_AMEVTYPE1_EL0(0),
AMU_AMEVTYPE1_EL0(1),
AMU_AMEVTYPE1_EL0(2),
AMU_AMEVTYPE1_EL0(3),
AMU_AMEVTYPE1_EL0(4),
AMU_AMEVTYPE1_EL0(5),
AMU_AMEVTYPE1_EL0(6),
AMU_AMEVTYPE1_EL0(7),
AMU_AMEVTYPE1_EL0(8),
AMU_AMEVTYPE1_EL0(9),
AMU_AMEVTYPE1_EL0(10),
AMU_AMEVTYPE1_EL0(11),
AMU_AMEVTYPE1_EL0(12),
AMU_AMEVTYPE1_EL0(13),
AMU_AMEVTYPE1_EL0(14),
AMU_AMEVTYPE1_EL0(15),
{ SYS_DESC(SYS_CNTP_TVAL_EL0), access_arch_timer }, { SYS_DESC(SYS_CNTP_TVAL_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer }, { SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer }, { SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer },
......
...@@ -124,3 +124,30 @@ unsigned int do_csum(const unsigned char *buff, int len) ...@@ -124,3 +124,30 @@ unsigned int do_csum(const unsigned char *buff, int len)
return sum >> 16; return sum >> 16;
} }
__sum16 csum_ipv6_magic(const struct in6_addr *saddr,
const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum csum)
{
__uint128_t src, dst;
u64 sum = (__force u64)csum;
src = *(const __uint128_t *)saddr->s6_addr;
dst = *(const __uint128_t *)daddr->s6_addr;
sum += (__force u32)htonl(len);
#ifdef __LITTLE_ENDIAN
sum += (u32)proto << 24;
#else
sum += proto;
#endif
src += (src >> 64) | (src << 64);
dst += (dst >> 64) | (dst << 64);
sum = accumulate(sum, src >> 64);
sum = accumulate(sum, dst >> 64);
sum += ((sum >> 32) | (sum << 32));
return csum_fold((__force __wsum)(sum >> 32));
}
EXPORT_SYMBOL(csum_ipv6_magic);
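The new csum_ipv6_magic() folds the two 128-bit addresses into a 64-bit ones'-complement accumulator and finally hands the top 32 bits to csum_fold(). For reference, the folding step on its own (generic Internet-checksum folding shown standalone, not the kernel's accumulate()/csum_fold() helpers) looks like this:

    #include <stdint.h>

    /* Reduce a 64-bit ones'-complement accumulator to the final 16-bit
     * checksum. Each step adds the carry back in; that end-around carry is
     * what lets the arm64 code sum the IPv6 addresses 64 bits at a time. */
    static uint16_t fold_csum(uint64_t sum)
    {
        sum = (sum & 0xffffffffULL) + (sum >> 32);
        sum = (sum & 0xffffffffULL) + (sum >> 32);  /* absorb carry */
        sum = (sum & 0xffffULL) + (sum >> 16);
        sum = (sum & 0xffffULL) + (sum >> 16);      /* absorb carry */
        return (uint16_t)~sum;
    }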
...@@ -186,7 +186,7 @@ CPU_LE( rev data2, data2 ) ...@@ -186,7 +186,7 @@ CPU_LE( rev data2, data2 )
* as carry-propagation can corrupt the upper bits if the trailing * as carry-propagation can corrupt the upper bits if the trailing
* bytes in the string contain 0x01. * bytes in the string contain 0x01.
* However, if there is no NUL byte in the dword, we can generate * However, if there is no NUL byte in the dword, we can generate
* the result directly. We ca not just subtract the bytes as the * the result directly. We cannot just subtract the bytes as the
* MSB might be significant. * MSB might be significant.
*/ */
CPU_BE( cbnz has_nul, 1f ) CPU_BE( cbnz has_nul, 1f )
......
...@@ -6,6 +6,7 @@ ...@@ -6,6 +6,7 @@
* Copyright (C) 2012 ARM Ltd. * Copyright (C) 2012 ARM Ltd.
*/ */
#include <linux/bitfield.h>
#include <linux/bitops.h> #include <linux/bitops.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/slab.h> #include <linux/slab.h>
...@@ -254,10 +255,37 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) ...@@ -254,10 +255,37 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
/* Errata workaround post TTBRx_EL1 update. */ /* Errata workaround post TTBRx_EL1 update. */
asmlinkage void post_ttbr_update_workaround(void) asmlinkage void post_ttbr_update_workaround(void)
{ {
if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456))
return;
asm(ALTERNATIVE("nop; nop; nop", asm(ALTERNATIVE("nop; nop; nop",
"ic iallu; dsb nsh; isb", "ic iallu; dsb nsh; isb",
ARM64_WORKAROUND_CAVIUM_27456, ARM64_WORKAROUND_CAVIUM_27456));
CONFIG_CAVIUM_ERRATUM_27456)); }
void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
{
unsigned long ttbr1 = read_sysreg(ttbr1_el1);
unsigned long asid = ASID(mm);
unsigned long ttbr0 = phys_to_ttbr(pgd_phys);
/* Skip CNP for the reserved ASID */
if (system_supports_cnp() && asid)
ttbr0 |= TTBR_CNP_BIT;
/* SW PAN needs a copy of the ASID in TTBR0 for entry */
if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN))
ttbr0 |= FIELD_PREP(TTBR_ASID_MASK, asid);
/* Set ASID in TTBR1 since TCR.A1 is set */
ttbr1 &= ~TTBR_ASID_MASK;
ttbr1 |= FIELD_PREP(TTBR_ASID_MASK, asid);
write_sysreg(ttbr1, ttbr1_el1);
isb();
write_sysreg(ttbr0, ttbr0_el1);
isb();
post_ttbr_update_workaround();
} }
static int asids_update_limit(void) static int asids_update_limit(void)
......
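The C version of cpu_do_switch_mm() above writes the ASID with FIELD_PREP(TTBR_ASID_MASK, asid) where the assembly it replaces (removed from proc.S just below) used "bfi ..., #48, #16". Assuming the usual layout with the 16-bit ASID in TTBR bits 63:48, the two are equivalent; a tiny standalone illustration of that FIELD_PREP:

    #include <stdint.h>

    /* Assumed layout: 16-bit ASID in TTBR bits 63:48. */
    #define TTBR_ASID_MASK  (0xffffULL << 48)

    /* FIELD_PREP(mask, val) shifts val up to the mask's lowest set bit,
     * so for this mask it reproduces the old "bfi xN, asid, #48, #16". */
    static uint64_t field_prep_asid(uint64_t asid)
    {
        unsigned int shift = __builtin_ctzll(TTBR_ASID_MASK);

        return (asid << shift) & TTBR_ASID_MASK;
    }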
...@@ -11,11 +11,13 @@ ...@@ -11,11 +11,13 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/asm_pointer_auth.h>
#include <asm/hwcap.h> #include <asm/hwcap.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/smp.h>
#ifdef CONFIG_ARM64_64K_PAGES #ifdef CONFIG_ARM64_64K_PAGES
#define TCR_TG_FLAGS TCR_TG0_64K | TCR_TG1_64K #define TCR_TG_FLAGS TCR_TG0_64K | TCR_TG1_64K
...@@ -131,45 +133,19 @@ alternative_endif ...@@ -131,45 +133,19 @@ alternative_endif
ubfx x11, x11, #1, #1 ubfx x11, x11, #1, #1
msr oslar_el1, x11 msr oslar_el1, x11
reset_pmuserenr_el0 x0 // Disable PMU access from EL0 reset_pmuserenr_el0 x0 // Disable PMU access from EL0
reset_amuserenr_el0 x0 // Disable AMU access from EL0
alternative_if ARM64_HAS_RAS_EXTN alternative_if ARM64_HAS_RAS_EXTN
msr_s SYS_DISR_EL1, xzr msr_s SYS_DISR_EL1, xzr
alternative_else_nop_endif alternative_else_nop_endif
ptrauth_keys_install_kernel x14, 0, x1, x2, x3
isb isb
ret ret
SYM_FUNC_END(cpu_do_resume) SYM_FUNC_END(cpu_do_resume)
.popsection .popsection
#endif #endif
/*
* cpu_do_switch_mm(pgd_phys, tsk)
*
* Set the translation table base pointer to be pgd_phys.
*
* - pgd_phys - physical address of new TTB
*/
SYM_FUNC_START(cpu_do_switch_mm)
mrs x2, ttbr1_el1
mmid x1, x1 // get mm->context.id
phys_to_ttbr x3, x0
alternative_if ARM64_HAS_CNP
cbz x1, 1f // skip CNP for reserved ASID
orr x3, x3, #TTBR_CNP_BIT
1:
alternative_else_nop_endif
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
bfi x3, x1, #48, #16 // set the ASID field in TTBR0
#endif
bfi x2, x1, #48, #16 // set the ASID
msr ttbr1_el1, x2 // in TTBR1 (since TCR.A1 is set)
isb
msr ttbr0_el1, x3 // now update TTBR0
isb
b post_ttbr_update_workaround // Back to C code...
SYM_FUNC_END(cpu_do_switch_mm)
.pushsection ".idmap.text", "awx" .pushsection ".idmap.text", "awx"
.macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2 .macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
...@@ -408,35 +384,37 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings) ...@@ -408,35 +384,37 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
/* /*
* __cpu_setup * __cpu_setup
* *
* Initialise the processor for turning the MMU on. Return in x0 the * Initialise the processor for turning the MMU on.
* value of the SCTLR_EL1 register. *
* Input:
* x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_SECONDARY/ARM64_CPU_RUNTIME.
* Output:
* Return in x0 the value of the SCTLR_EL1 register.
*/ */
.pushsection ".idmap.text", "awx" .pushsection ".idmap.text", "awx"
SYM_FUNC_START(__cpu_setup) SYM_FUNC_START(__cpu_setup)
tlbi vmalle1 // Invalidate local TLB tlbi vmalle1 // Invalidate local TLB
dsb nsh dsb nsh
mov x0, #3 << 20 mov x1, #3 << 20
msr cpacr_el1, x0 // Enable FP/ASIMD msr cpacr_el1, x1 // Enable FP/ASIMD
mov x0, #1 << 12 // Reset mdscr_el1 and disable mov x1, #1 << 12 // Reset mdscr_el1 and disable
msr mdscr_el1, x0 // access to the DCC from EL0 msr mdscr_el1, x1 // access to the DCC from EL0
isb // Unmask debug exceptions now, isb // Unmask debug exceptions now,
enable_dbg // since this is per-cpu enable_dbg // since this is per-cpu
reset_pmuserenr_el0 x0 // Disable PMU access from EL0 reset_pmuserenr_el0 x1 // Disable PMU access from EL0
reset_amuserenr_el0 x1 // Disable AMU access from EL0
/* /*
* Memory region attributes * Memory region attributes
*/ */
mov_q x5, MAIR_EL1_SET mov_q x5, MAIR_EL1_SET
msr mair_el1, x5 msr mair_el1, x5
/*
* Prepare SCTLR
*/
mov_q x0, SCTLR_EL1_SET
/* /*
* Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
* both user and kernel. * both user and kernel.
*/ */
ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \ mov_q x10, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
tcr_clear_errata_bits x10, x9, x5 tcr_clear_errata_bits x10, x9, x5
...@@ -468,5 +446,51 @@ SYM_FUNC_START(__cpu_setup) ...@@ -468,5 +446,51 @@ SYM_FUNC_START(__cpu_setup)
1: 1:
#endif /* CONFIG_ARM64_HW_AFDBM */ #endif /* CONFIG_ARM64_HW_AFDBM */
msr tcr_el1, x10 msr tcr_el1, x10
mov x1, x0
/*
* Prepare SCTLR
*/
mov_q x0, SCTLR_EL1_SET
#ifdef CONFIG_ARM64_PTR_AUTH
/* No ptrauth setup for run time cpus */
cmp x1, #ARM64_CPU_RUNTIME
b.eq 3f
/* Check if the CPU supports ptrauth */
mrs x2, id_aa64isar1_el1
ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
cbz x2, 3f
/*
* The primary cpu keys are reset here and can be
* re-initialised with some proper values later.
*/
msr_s SYS_APIAKEYLO_EL1, xzr
msr_s SYS_APIAKEYHI_EL1, xzr
/* Just enable ptrauth for primary cpu */
cmp x1, #ARM64_CPU_BOOT_PRIMARY
b.eq 2f
/* if !system_supports_address_auth() then skip enable */
alternative_if_not ARM64_HAS_ADDRESS_AUTH
b 3f
alternative_else_nop_endif
/* Install ptrauth key for secondary cpus */
adr_l x2, secondary_data
ldr x3, [x2, #CPU_BOOT_TASK] // get secondary_data.task
cbz x3, 2f // check for slow booting cpus
ldp x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY]
msr_s SYS_APIAKEYLO_EL1, x3
msr_s SYS_APIAKEYHI_EL1, x4
2: /* Enable ptrauth instructions */
ldr x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
orr x0, x0, x2
3:
#endif
	ret					// return to head.S
SYM_FUNC_END(__cpu_setup)
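
The CONFIG_ARM64_PTR_AUTH block above dispatches on the boot stage passed in x0. A C-style restatement of that control flow is sketched below; the register names, the secondary_data layout and system_supports_address_auth() come from this series, while the helpers marked as assumed do not.

/* Illustrative restatement of the ptrauth setup above; not kernel code. */
static u64 cpu_setup_ptrauth_sketch(unsigned long boot_stage, u64 sctlr)
{
	if (boot_stage == ARM64_CPU_RUNTIME)	/* no ptrauth setup for runtime CPUs */
		return sctlr;

	if (!cpu_has_apa_sketch())		/* assumed: ID_AA64ISAR1_EL1 APA/API field != 0 */
		return sctlr;

	/* Reset the primary keys; proper values are installed later. */
	write_sysreg_s(0, SYS_APIAKEYLO_EL1);
	write_sysreg_s(0, SYS_APIAKEYHI_EL1);

	if (boot_stage != ARM64_CPU_BOOT_PRIMARY) {
		if (!system_supports_address_auth())
			return sctlr;
		/* Secondaries pick up the key published in secondary_data,
		 * unless the task pointer is still NULL (slow-booting CPU). */
		if (secondary_data.task)
			install_boot_ptrauth_key_sketch(&secondary_data);	/* assumed helper */
	}

	/* Enable the ptrauth instructions in the SCTLR_EL1 value returned to head.S. */
	return sctlr | SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
		       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB;
}
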
// SPDX-License-Identifier: GPL-2.0
#include <linux/debugfs.h>
#include <linux/memory_hotplug.h>
#include <linux/seq_file.h>
#include <asm/ptdump.h>
@@ -7,7 +8,10 @@
static int ptdump_show(struct seq_file *m, void *v)
{
	struct ptdump_info *info = m->private;

	get_online_mems();
	ptdump_walk(m, info);
	put_online_mems();
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(ptdump);
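
The get_online_mems()/put_online_mems() pair pins memory hotplug state for the duration of the dump, so a concurrent hot-remove cannot tear down page tables while ptdump is walking them. The same bracketing applies to any reader of hotplug-affected state; a minimal sketch with a hypothetical walker:

/* Hypothetical reader that must not race with memory hot-remove. */
static void walk_hotplug_state_sketch(void)
{
	get_online_mems();	/* block memory hot-add/remove while we walk */
	/* ... traverse kernel page tables or other hotplug-affected state ... */
	put_online_mems();
}
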
@@ -21,6 +21,10 @@
#include <linux/sched.h>
#include <linux/smp.h>

__weak bool arch_freq_counters_available(struct cpumask *cpus)
{
	return false;
}

DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;

void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
@@ -29,6 +33,14 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
	unsigned long scale;
	int i;

	/*
	 * If the use of counters for FIE is enabled, just return as we don't
	 * want to update the scale factor with information from CPUFREQ.
	 * Instead the scale factor will be updated from arch_scale_freq_tick.
	 */
	if (arch_freq_counters_available(cpus))
		return;

	scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;

	for_each_cpu(i, cpus)
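
The __weak stub above lets an architecture that can measure delivered frequency directly, for example via the AMU counters added in this series, claim ownership of frequency-invariance scaling, in which case arch_set_freq_scale() returns early as shown. A minimal sketch of such an override; the mask name amu_fie_cpus is an assumption, not taken from this diff:

/* Assumed per-arch mask of CPUs whose counters can drive frequency invariance. */
static struct cpumask amu_fie_cpus;

bool arch_freq_counters_available(struct cpumask *cpus)
{
	/* Use counter-based scaling only if every requested CPU supports it. */
	return cpumask_subset(cpus, &amu_fie_cpus);
}
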
@@ -889,6 +889,17 @@ static int arch_timer_starting_cpu(unsigned int cpu)
	return 0;
}
static int validate_timer_rate(void)
{
if (!arch_timer_rate)
return -EINVAL;
/* Arch timer frequency < 1MHz can cause trouble */
WARN_ON(arch_timer_rate < 1000000);
return 0;
}
/*
 * For historical reasons, when probing with DT we use whichever (non-zero)
 * rate was probed first, and don't verify that others match. If the first node
@@ -904,7 +915,7 @@ static void arch_timer_of_configure_rate(u32 rate, struct device_node *np)
		arch_timer_rate = rate;

	/* Check the timer frequency. */
	if (validate_timer_rate())
		pr_warn("frequency not available\n");
}
@@ -1598,9 +1609,10 @@ static int __init arch_timer_acpi_init(struct acpi_table_header *table)
	 * CNTFRQ value. This *must* be correct.
	 */
	arch_timer_rate = arch_timer_get_cntfrq();
	ret = validate_timer_rate();
	if (ret) {
		pr_err(FW_BUG "frequency not available.\n");
		return ret;
	}

	arch_timer_uses_ppi = arch_timer_select_ppi();
@@ -1733,6 +1733,26 @@ unsigned int cpufreq_quick_get_max(unsigned int cpu)
}
EXPORT_SYMBOL(cpufreq_quick_get_max);
/**
* cpufreq_get_hw_max_freq - get the max hardware frequency of the CPU
* @cpu: CPU number
*
* The default return value is the max_freq field of cpuinfo.
*/
__weak unsigned int cpufreq_get_hw_max_freq(unsigned int cpu)
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
unsigned int ret_freq = 0;
if (policy) {
ret_freq = policy->cpuinfo.max_freq;
cpufreq_cpu_put(policy);
}
return ret_freq;
}
EXPORT_SYMBOL(cpufreq_get_hw_max_freq);
static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
{
	if (unlikely(policy_is_inactive(policy)))
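
Callers that derive their own frequency estimate need a reference maximum to scale against; the __weak helper above supplies cpuinfo.max_freq (in kHz) and can be overridden with a firmware- or hardware-derived value. An illustrative caller, mirroring the scale computation used by arch_set_freq_scale() earlier in this series:

/* Illustrative only: convert a measured average frequency (kHz) into the
 * SCHED_CAPACITY-scaled factor consumed by the scheduler. */
static unsigned long freq_to_scale_sketch(int cpu, unsigned long measured_khz)
{
	unsigned long max_khz = cpufreq_get_hw_max_freq(cpu);

	if (!max_khz)
		return SCHED_CAPACITY_SCALE;	/* no reference: assume full capacity */

	return (measured_khz << SCHED_CAPACITY_SHIFT) / max_khz;
}
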
@@ -378,3 +378,39 @@ void lkdtm_DOUBLE_FAULT(void)
	pr_err("XFAIL: this test is ia32-only\n");
#endif
}
#ifdef CONFIG_ARM64_PTR_AUTH
static noinline void change_pac_parameters(void)
{
/* Reset the keys of current task */
ptrauth_thread_init_kernel(current);
ptrauth_thread_switch_kernel(current);
}
#define CORRUPT_PAC_ITERATE 10
noinline void lkdtm_CORRUPT_PAC(void)
{
int i;
if (!system_supports_address_auth()) {
pr_err("FAIL: arm64 pointer authentication feature not present\n");
return;
}
pr_info("Change the PAC parameters to force function return failure\n");
	/*
	 * The PAC is a hash computed from the keys, the return address and
	 * the stack pointer. Since the PAC has only a limited number of
	 * bits, collisions are possible, so iterate a few times to reduce
	 * the chance that a corrupted return still authenticates by accident.
	 */
for (i = 0; i < CORRUPT_PAC_ITERATE; i++)
change_pac_parameters();
pr_err("FAIL: %s test failed. Kernel may be unstable from here\n", __func__);
}
#else /* !CONFIG_ARM64_PTR_AUTH */
noinline void lkdtm_CORRUPT_PAC(void)
{
pr_err("FAIL: arm64 pointer authentication config disabled\n");
}
#endif
@@ -116,6 +116,7 @@ static const struct crashtype crashtypes[] = {
	CRASHTYPE(STACK_GUARD_PAGE_LEADING),
	CRASHTYPE(STACK_GUARD_PAGE_TRAILING),
	CRASHTYPE(UNSET_SMEP),
	CRASHTYPE(CORRUPT_PAC),
	CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE),
	CRASHTYPE(OVERWRITE_ALLOCATION),
	CRASHTYPE(WRITE_AFTER_FREE),
@@ -31,6 +31,7 @@ void lkdtm_UNSET_SMEP(void);
#ifdef CONFIG_X86_32
void lkdtm_DOUBLE_FAULT(void);
#endif
void lkdtm_CORRUPT_PAC(void);

/* lkdtm_heap.c */
void __init lkdtm_heap_init(void);
@@ -328,15 +328,15 @@ static ssize_t arm_ccn_pmu_event_show(struct device *dev,
			struct arm_ccn_pmu_event, attr);
	ssize_t res;

	res = scnprintf(buf, PAGE_SIZE, "type=0x%x", event->type);
	if (event->event)
		res += scnprintf(buf + res, PAGE_SIZE - res, ",event=0x%x",
				event->event);
	if (event->def)
		res += scnprintf(buf + res, PAGE_SIZE - res, ",%s",
				event->def);
	if (event->mask)
		res += scnprintf(buf + res, PAGE_SIZE - res, ",mask=0x%x",
				event->mask);

	/* Arguments required by an event */
@@ -344,25 +344,25 @@ static ssize_t arm_ccn_pmu_event_show(struct device *dev,
	case CCN_TYPE_CYCLES:
		break;
	case CCN_TYPE_XP:
		res += scnprintf(buf + res, PAGE_SIZE - res,
				",xp=?,vc=?");
		if (event->event == CCN_EVENT_WATCHPOINT)
			res += scnprintf(buf + res, PAGE_SIZE - res,
					",port=?,dir=?,cmp_l=?,cmp_h=?,mask=?");
		else
			res += scnprintf(buf + res, PAGE_SIZE - res,
					",bus=?");
		break;
	case CCN_TYPE_MN:
		res += scnprintf(buf + res, PAGE_SIZE - res, ",node=%d", ccn->mn_id);
		break;
	default:
		res += scnprintf(buf + res, PAGE_SIZE - res, ",node=?");
		break;
	}

	res += scnprintf(buf + res, PAGE_SIZE - res, "\n");

	return res;
}
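
The snprintf() to scnprintf() conversion matters because the return value is reused as the next write offset: snprintf() returns the length the output would have had, so a truncated write can push res past PAGE_SIZE and make buf + res point outside the buffer, whereas scnprintf() returns the number of bytes actually stored. A minimal sketch of the safe accumulation pattern:

/* Sketch: building a sysfs-style string with bounded, chained writes. */
static ssize_t show_two_fields_sketch(char *buf, size_t size, int a, int b)
{
	ssize_t res;

	res = scnprintf(buf, size, "a=0x%x", a);
	/* scnprintf() never reports more than it wrote, so the offset and
	 * the remaining size stay consistent even if output was truncated. */
	res += scnprintf(buf + res, size - res, ",b=0x%x\n", b);

	return res;
}
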
@@ -831,7 +831,7 @@ static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
	 * parts and give userspace a fighting chance of getting some
	 * useful data out of it.
	 */
	if (snapshot && (nr_pages & 1))
		return NULL;

	if (cpu == -1)
@@ -30,6 +30,8 @@ static inline unsigned long topology_get_freq_scale(int cpu)
	return per_cpu(freq_scale, cpu);
}

bool arch_freq_counters_available(struct cpumask *cpus);

DECLARE_PER_CPU(unsigned long, thermal_pressure);

static inline unsigned long topology_get_thermal_pressure(int cpu)
@@ -205,6 +205,7 @@ static inline bool policy_is_shared(struct cpufreq_policy *policy)
unsigned int cpufreq_get(unsigned int cpu);
unsigned int cpufreq_quick_get(unsigned int cpu);
unsigned int cpufreq_quick_get_max(unsigned int cpu);
unsigned int cpufreq_get_hw_max_freq(unsigned int cpu);
void disable_cpufreq(void);

u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy);
@@ -232,6 +233,10 @@ static inline unsigned int cpufreq_quick_get_max(unsigned int cpu)
{
	return 0;
}
static inline unsigned int cpufreq_get_hw_max_freq(unsigned int cpu)
{
	return 0;
}
static inline void disable_cpufreq(void) { }
#endif
@@ -80,6 +80,7 @@ struct arm_pmu {
	struct pmu	pmu;
	cpumask_t	supported_cpus;
	char		*name;
	int		pmuver;
	irqreturn_t	(*handle_irq)(struct arm_pmu *pmu);
	void		(*enable)(struct perf_event *event);
	void		(*disable)(struct perf_event *event);
@@ -6,7 +6,7 @@
#include <linux/sched.h>
#include <linux/random.h>

#if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH)
# include <asm/stackprotector.h>
#else
static inline void boot_init_stack_canary(void)
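
Pulling in asm/stackprotector.h when CONFIG_ARM64_PTR_AUTH is set lets the arm64 header provide boot_init_stack_canary() even on kernels built without the stack protector, presumably so it can double as an early hook for seeding the boot task's in-kernel ptrauth keys. A rough sketch of the shape such a header might take; the ptrauth helper names appear elsewhere in this series, but the body is an assumption:

/* Sketch of an arch boot_init_stack_canary() reached via the new #if above. */
static __always_inline void boot_init_stack_canary(void)
{
#if defined(CONFIG_STACKPROTECTOR)
	/* ... normal stack-canary seeding elided ... */
#endif
#if defined(CONFIG_ARM64_PTR_AUTH)
	/* Assumed hook: initialise and install the boot task's kernel keys. */
	ptrauth_thread_init_kernel(current);
	ptrauth_thread_switch_kernel(current);
#endif
}
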