Commit 57b8b1b4 authored by Will Deacon

Merge branches 'for-next/acpi', 'for-next/boot', 'for-next/bpf', 'for-next/cpuinfo', 'for-next/fpsimd', 'for-next/misc', 'for-next/mm', 'for-next/pci', 'for-next/perf', 'for-next/ptrauth', 'for-next/sdei', 'for-next/selftests', 'for-next/stacktrace', 'for-next/svm', 'for-next/topology', 'for-next/tpyos' and 'for-next/vdso' into for-next/core

Remove unused functions and parameters from ACPI IORT code.
(Zenghui Yu via Lorenzo Pieralisi)
* for-next/acpi:
  ACPI/IORT: Remove the unused inline functions
  ACPI/IORT: Drop the unused @ops of iort_add_device_replay()

Remove redundant code and fix documentation of caching behaviour for the
HVC_SOFT_RESTART hypercall.
(Pingfan Liu)
* for-next/boot:
  Documentation/kvm/arm: improve description of HVC_SOFT_RESTART
  arm64/relocate_kernel: remove redundant code

Improve reporting of unexpected kernel traps due to BPF JIT failure.
(Will Deacon)
* for-next/bpf:
  arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM

Improve robustness of user-visible HWCAP strings and their corresponding
numerical constants.
(Anshuman Khandual)
* for-next/cpuinfo:
  arm64/cpuinfo: Define HWCAP name arrays per their actual bit definitions

Cleanups to handling of SVE and FPSIMD register state in preparation
for potential future optimisation of handling across syscalls.
(Julien Grall)
* for-next/fpsimd:
  arm64/sve: Implement a helper to load SVE registers from FPSIMD state
  arm64/sve: Implement a helper to flush SVE registers
  arm64/fpsimdmacros: Allow the macro "for" to be used in more cases
  arm64/fpsimdmacros: Introduce a macro to update ZCR_EL1.LEN
  arm64/signal: Update the comment in preserve_sve_context
  arm64/fpsimd: Update documentation of do_sve_acc
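
The new SVE helpers are not wired up to an in-kernel user yet. As a rough,
hypothetical sketch of how a caller might use them once such an optimisation
lands (the caller is illustrative; only sve_load_from_fpsimd_state() and the
existing thread_struct fields come from the tree):

/* Hypothetical caller, not part of this series. */
#include <linux/sched.h>
#include <asm/fpsimd.h>

static void load_sve_regs_from_fpsimd(struct task_struct *tsk)
{
        /* Convert the task's vector length to vector quanta (128-bit units) */
        unsigned int vq = sve_vq_from_vl(tsk->thread.sve_vl);

        /*
         * Each Z register gets its low 128 bits from the saved FPSIMD Vn
         * state; the upper bits and the P/FFR registers are zeroed.
         */
        sve_load_from_fpsimd_state(&tsk->thread.uw.fpsimd_state, vq - 1);
}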

Miscellaneous changes.
(Tian Tao and others)
* for-next/misc:
  arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
  arm64: mm: Fix missing-prototypes in pageattr.c
  arm64/fpsimd: Fix missing-prototypes in fpsimd.c
  arm64: hibernate: Remove unused including <linux/version.h>
  arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR()
  arm64: Remove the unused include statements
  arm64: get rid of TEXT_OFFSET
  arm64: traps: Add str of description to panic() in die()

Memory management updates and cleanups.
(Anshuman Khandual and others)
* for-next/mm:
  arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD
  arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
  arm64/mm: Unify CONT_PMD_SHIFT
  arm64/mm: Unify CONT_PTE_SHIFT
  arm64/mm: Remove CONT_RANGE_OFFSET
  arm64/mm: Enable THP migration
  arm64/mm: Change THP helpers to comply with generic MM semantics
  arm64/mm/ptdump: Add address markers for BPF regions

Allow prefetchable PCI BARs to be exposed to userspace using normal
non-cacheable mappings.
(Clint Sbisa)
* for-next/pci:
  arm64: Enable PCI write-combine resources under sysfs
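
For illustration, a minimal user-space sketch of what this enables - mapping a
prefetchable BAR through the resourceN_wc sysfs file (the device address and
mapping length below are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        /* Hypothetical device/BAR; check the BAR size in 'resource' first */
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource0_wc";
        int fd = open(path, O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Length must not exceed the size of the BAR being mapped */
        void *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return 1;
        }

        /* ... write-combined MMIO accesses through 'bar' ... */

        munmap(bar, 4096);
        close(fd);
        return 0;
}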

Perf/PMU driver updates.
(Julien Thierry and others)
* for-next/perf:
  perf: arm-cmn: Fix conversion specifiers for node type
  perf: arm-cmn: Fix unsigned comparison to less than zero
  arm_pmu: arm64: Use NMIs for PMU
  arm_pmu: Introduce pmu_irq_ops
  KVM: arm64: pmu: Make overflow handler NMI safe
  arm64: perf: Defer irq_work to IPI_IRQ_WORK
  arm64: perf: Remove PMU locking
  arm64: perf: Avoid PMXEV* indirection
  arm64: perf: Add missing ISB in armv8pmu_enable_counter()
  perf: Add Arm CMN-600 PMU driver
  perf: Add Arm CMN-600 DT binding
  arm64: perf: Add support caps under sysfs
  drivers/perf: thunderx2_pmu: Fix memory resource error handling
  drivers/perf: xgene_pmu: Fix uninitialized resource struct
  perf: arm_dsu: Support DSU ACPI devices
  arm64: perf: Remove unnecessary event_idx check
  drivers/perf: hisi: Add missing include of linux/module.h
  arm64: perf: Add general hardware LLC events for PMUv3

Support for the Armv8.3 Pointer Authentication enhancements.
(Amit Daniel Kachhap)
* for-next/ptrauth:
  arm64: kprobe: clarify the comment of steppable hint instructions
  arm64: kprobe: disable probe of fault prone ptrauth instruction
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
  arm64: traps: Allow force_signal_inject to pass esr error code
  arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions

Tonnes of cleanup to the SDEI driver.
(Gavin Shan)
* for-next/sdei:
  firmware: arm_sdei: Remove _sdei_event_unregister()
  firmware: arm_sdei: Remove _sdei_event_register()
  firmware: arm_sdei: Introduce sdei_do_local_call()
  firmware: arm_sdei: Cleanup on cross call function
  firmware: arm_sdei: Remove while loop in sdei_event_unregister()
  firmware: arm_sdei: Remove while loop in sdei_event_register()
  firmware: arm_sdei: Remove redundant error message in sdei_probe()
  firmware: arm_sdei: Remove duplicate check in sdei_get_conduit()
  firmware: arm_sdei: Unregister driver on error in sdei_init()
  firmware: arm_sdei: Avoid nested statements in sdei_init()
  firmware: arm_sdei: Retrieve event number from event instance
  firmware: arm_sdei: Common block for failing path in sdei_event_create()
  firmware: arm_sdei: Remove sdei_is_err()

Selftests for Pointer Authentication and FPSIMD/SVE context-switching.
(Mark Brown and Boyan Karatotev)
* for-next/selftests:
  selftests: arm64: Add build and documentation for FP tests
  selftests: arm64: Add wrapper scripts for stress tests
  selftests: arm64: Add utility to set SVE vector lengths
  selftests: arm64: Add stress tests for FPSIMD and SVE context switching
  selftests: arm64: Add test for the SVE ptrace interface
  selftests: arm64: Test case for enumeration of SVE vector lengths
  kselftests/arm64: add PAuth tests for single threaded consistency and differently initialized keys
  kselftests/arm64: add PAuth test for whether exec() changes keys
  kselftests/arm64: add nop checks for PAuth tests
  kselftests/arm64: add a basic Pointer Authentication test

Implementation of ARCH_STACKWALK for unwinding.
(Mark Brown)
* for-next/stacktrace:
  arm64: Move console stack display code to stacktrace.c
  arm64: stacktrace: Convert to ARCH_STACKWALK
  arm64: stacktrace: Make stack walk callback consistent with generic code
  stacktrace: Remove reliable argument from arch_stack_walk() callback
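
For reference, a simplified sketch of the ARCH_STACKWALK contract after the
conversion: the architecture only walks frames and calls a boolean consumer,
roughly as the generic stacktrace code does (the cookie structure and function
names here are illustrative, not taken from the series):

#include <linux/sched.h>
#include <linux/stacktrace.h>

/* Illustrative cookie for collecting return addresses. */
struct trace_cookie {
        unsigned long *entries;
        unsigned int len, max;
};

/* Consumer callback: return false to stop the walk early. */
static bool save_entry(void *cookie, unsigned long pc)
{
        struct trace_cookie *t = cookie;

        if (t->len >= t->max)
                return false;
        t->entries[t->len++] = pc;
        return true;
}

static unsigned int save_current_trace(unsigned long *store, unsigned int max)
{
        struct trace_cookie t = { .entries = store, .max = max };

        /* arch_stack_walk() is the only hook the architecture now provides */
        arch_stack_walk(save_entry, &t, current, NULL);
        return t.len;
}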

Support for ASID pinning, which is required when sharing page-tables with
the SMMU.
(Jean-Philippe Brucker)
* for-next/svm:
  arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
  arm64: mm: Pin down ASIDs for sharing mm with devices
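
As a rough illustration of the new interface (hypothetical driver code, not
from this series; it assumes arm64_mm_context_get() returns 0 when no pinned
ASID can be allocated):

#include <linux/errno.h>
#include <linux/sched/mm.h>
#include <asm/mmu_context.h>

/* Hypothetical SVA-style bond between a device context and an mm. */
struct my_sva_bond {
        struct mm_struct *mm;
        unsigned long asid;
};

static int my_sva_bind(struct my_sva_bond *bond, struct mm_struct *mm)
{
        mmgrab(mm);                             /* keep the mm_struct alive */
        bond->asid = arm64_mm_context_get(mm);  /* pin: survives ASID rollover */
        if (!bond->asid) {
                mmdrop(mm);
                return -ENOSPC;                 /* assumed failure convention */
        }
        bond->mm = mm;
        /* program bond->asid into the device's context descriptor here */
        return 0;
}

static void my_sva_unbind(struct my_sva_bond *bond)
{
        arm64_mm_context_put(bond->mm);         /* unpin the ASID */
        mmdrop(bond->mm);
}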

Rely on firmware tables for establishing CPU topology.
(Valentin Schneider)
* for-next/topology:
  arm64: topology: Stop using MPIDR for topology information

Spelling fixes.
(Xiaoming Ni and Yanfei Xu)
* for-next/tpyos:
  arm64/numa: Fix a typo in comment of arm64_numa_init
  arm64: fix some spelling mistakes in the comments by codespell

vDSO cleanups.
(Will Deacon)
* for-next/vdso:
  arm64: vdso: Fix unusual formatting in *setup_additional_pages()
  arm64: vdso32: Remove a bunch of #ifdef CONFIG_COMPAT_VDSO guards
=============================
Arm Coherent Mesh Network PMU
=============================

CMN-600 is a configurable mesh interconnect consisting of a rectangular
grid of crosspoints (XPs), with each crosspoint supporting up to two
device ports to which various AMBA CHI agents are attached.

CMN implements a distributed PMU design as part of its debug and trace
functionality. This consists of a local monitor (DTM) at every XP, which
counts up to 4 event signals from the connected device nodes and/or the
XP itself. Overflow from these local counters is accumulated in up to 8
global counters implemented by the main controller (DTC), which provides
overall PMU control and interrupts for global counter overflow.

PMU events
----------

The PMU driver registers a single PMU device for the whole interconnect,
see /sys/bus/event_source/devices/arm_cmn. Multi-chip systems may link
more than one CMN together via external CCIX links - in this situation,
each mesh counts its own events entirely independently, and additional
PMU devices will be named arm_cmn_{1..n}.

Most events are specified in a format based directly on the TRM
definitions - "type" selects the respective node type, and "eventid" the
event number. Some events require an additional occupancy ID, which is
specified by "occupid".

* Since RN-D nodes do not have any distinct events from RN-I nodes, they
  are treated as the same type (0xa), and the common event templates are
  named "rnid_*".

* The cycle counter is treated as a synthetic event belonging to the DTC
  node ("type" == 0x3, "eventid" is ignored).

* XP events also encode the port and channel in the "eventid" field, to
  match the underlying pmu_event0_id encoding for the pmu_event_sel
  register. The event templates are named with prefixes to cover all
  permutations.

By default each event provides an aggregate count over all nodes of the
given type. To target a specific node, "bynodeid" must be set to 1 and
"nodeid" to the appropriate value derived from the CMN configuration
(as defined in the "Node ID Mapping" section of the TRM).
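
Events can also be opened directly with perf_event_open(); the sketch below is
a generic illustration rather than part of this documentation - the PMU's
dynamic type is read from sysfs, but the config encoding of "type", "eventid",
"bynodeid" and "nodeid" must be taken from the files under
/sys/bus/event_source/devices/arm_cmn/format/, so the config value shown is a
placeholder.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Placeholder: encode type/eventid/... per .../arm_cmn/format/ */
#define CMN_CONFIG_PLACEHOLDER 0

int main(void)
{
        unsigned int pmu_type;
        FILE *f = fopen("/sys/bus/event_source/devices/arm_cmn/type", "r");

        if (!f || fscanf(f, "%u", &pmu_type) != 1)
                return 1;
        fclose(f);

        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = pmu_type;                   /* the arm_cmn PMU */
        attr.config = CMN_CONFIG_PLACEHOLDER;

        /* Uncore PMU: count system-wide on one CPU (pid = -1, cpu = 0) */
        int fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        sleep(1);

        uint64_t count;
        if (read(fd, &count, sizeof(count)) == sizeof(count))
                printf("count: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
}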
Watchpoints
-----------

The PMU can also count watchpoint events to monitor specific flit
traffic. Watchpoints are treated as a synthetic event type, and like PMU
events can be global or targeted with a particular XP's "nodeid" value.
Since the watchpoint direction is otherwise implicit in the underlying
register selection, separate events are provided for flit uploads and
downloads.

The flit match value and mask are passed in config1 and config2 ("val"
and "mask" respectively). "wp_dev_sel", "wp_chn_sel", "wp_grp" and
"wp_exclusive" are specified per the TRM definitions for dtm_wp_config0.
Where a watchpoint needs to match fields from both match groups on the
REQ or SNP channel, it can be specified as two events - one for each
group - with the same nonzero "combine" value. The count for such a
pair of combined events will be attributed to the primary match.
Watchpoint events with a "combine" value of 0 are considered independent
and will count individually.
@@ -12,6 +12,7 @@ Performance monitor support
 qcom_l2_pmu
 qcom_l3_pmu
 arm-ccn
+arm-cmn
 xgene-pmu
 arm_dsu_pmu
 thunderx2-pmu

# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright 2020 Arm Ltd.
%YAML 1.2
---
$id: http://devicetree.org/schemas/perf/arm,cmn.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Arm CMN (Coherent Mesh Network) Performance Monitors

maintainers:
  - Robin Murphy <robin.murphy@arm.com>

properties:
  compatible:
    const: arm,cmn-600

  reg:
    items:
      - description: Physical address of the base (PERIPHBASE) and
          size (up to 64MB) of the configuration address space.

  interrupts:
    minItems: 1
    maxItems: 4
    items:
      - description: Overflow interrupt for DTC0
      - description: Overflow interrupt for DTC1
      - description: Overflow interrupt for DTC2
      - description: Overflow interrupt for DTC3
    description: One interrupt for each DTC domain implemented must
      be specified, in order. DTC0 is always present.

  arm,root-node:
    $ref: /schemas/types.yaml#/definitions/uint32
    description: Offset from PERIPHBASE of the configuration
      discovery node (see TRM definition of ROOTNODEBASE).

required:
  - compatible
  - reg
  - interrupts
  - arm,root-node

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/interrupt-controller/irq.h>
    pmu@50000000 {
        compatible = "arm,cmn-600";
        reg = <0x50000000 0x4000000>;
        /* 4x2 mesh with one DTC, and CFG node at 0,1,1,0 */
        interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
        arm,root-node = <0x104000>;
    };
...
@@ -54,9 +54,9 @@ these functions (see arch/arm{,64}/include/asm/virt.h):
 x3 = x1's value when entering the next payload (arm64)
 x4 = x2's value when entering the next payload (arm64)
 
-Mask all exceptions, disable the MMU, move the arguments into place
-(arm64 only), and jump to the restart address while at HYP/EL2. This
-hypercall is not expected to return to its caller.
+Mask all exceptions, disable the MMU, clear I+D bits, move the arguments
+into place (arm64 only), and jump to the restart address while at HYP/EL2.
+This hypercall is not expected to return to its caller.
 
 Any other value of r0/x0 triggers a hypervisor-specific handling,
 which is not documented here.

@@ -29,6 +29,7 @@ config ARM64
 select ARCH_HAS_SETUP_DMA_OPS
 select ARCH_HAS_SET_DIRECT_MAP
 select ARCH_HAS_SET_MEMORY
+select ARCH_STACKWALK
 select ARCH_HAS_STRICT_KERNEL_RWX
 select ARCH_HAS_STRICT_MODULE_RWX
 select ARCH_HAS_SYNC_DMA_FOR_DEVICE

@@ -211,12 +212,18 @@ config ARM64_PAGE_SHIFT
 default 14 if ARM64_16K_PAGES
 default 12
 
-config ARM64_CONT_SHIFT
+config ARM64_CONT_PTE_SHIFT
 int
 default 5 if ARM64_64K_PAGES
 default 7 if ARM64_16K_PAGES
 default 4
 
+config ARM64_CONT_PMD_SHIFT
+int
+default 5 if ARM64_64K_PAGES
+default 5 if ARM64_16K_PAGES
+default 4
+
 config ARCH_MMAP_RND_BITS_MIN
 default 14 if ARM64_64K_PAGES
 default 16 if ARM64_16K_PAGES

@@ -1876,6 +1883,10 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION
 def_bool y
 depends on HUGETLB_PAGE && MIGRATION
 
+config ARCH_ENABLE_THP_MIGRATION
+def_bool y
+depends on TRANSPARENT_HUGEPAGE
+
 menu "Power management options"
 
 source "kernel/power/Kconfig"

@@ -11,7 +11,6 @@
 # Copyright (C) 1995-2001 by Russell King
 
 LDFLAGS_vmlinux :=--no-undefined -X
-CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
 
 ifeq ($(CONFIG_RELOCATABLE), y)
 # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour

@@ -132,9 +131,6 @@ endif
 # Default value
 head-y := arch/arm64/kernel/head.o
 
-# The byte offset of the kernel image in RAM from the start of RAM.
-TEXT_OFFSET := 0x0
-
 ifeq ($(CONFIG_KASAN_SW_TAGS), y)
 KASAN_SHADOW_SCALE_SHIFT := 4
 else

@@ -145,8 +141,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-export TEXT_OFFSET
-
 core-y += arch/arm64/
 libs-y := arch/arm64/lib/ $(libs-y)
 libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a

@@ -13,8 +13,7 @@
 #define MAX_FDT_SIZE SZ_2M
 
 /*
-* arm64 requires the kernel image to placed
-* TEXT_OFFSET bytes beyond a 2 MB aligned base
+* arm64 requires the kernel image to placed at a 2 MB aligned base address
 */
 #define MIN_KIMG_ALIGN SZ_2M

@@ -21,7 +21,7 @@
 * mechanism for doing so, tests whether it is possible to boot
 * the given CPU.
 * @cpu_boot: Boots a cpu into the kernel.
-* @cpu_postboot: Optionally, perform any post-boot cleanup or necesary
+* @cpu_postboot: Optionally, perform any post-boot cleanup or necessary
 * synchronisation. Called from the cpu being booted.
 * @cpu_can_disable: Determines whether a CPU can be disabled based on
 * mechanism-specific information.

@@ -358,7 +358,7 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
 }
 
 /*
-* Generic helper for handling capabilties with multiple (match,enable) pairs
+* Generic helper for handling capabilities with multiple (match,enable) pairs
 * of call backs, sharing the same capability bit.
 * Iterate over each entry to see if at least one matches.
 */

@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64 (0x18)
 #define ESR_ELx_EC_SVE (0x19)
 #define ESR_ELx_EC_ERET (0x1a) /* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC (0x1C) /* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */
 #define ESR_ELx_EC_IABT_LOW (0x20)
 #define ESR_ELx_EC_IABT_CUR (0x21)

@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
 #endif /* __ASM_EXCEPTION_H */

@@ -22,6 +22,15 @@ struct exception_table_entry
 #define ARCH_HAS_RELATIVE_EXTABLE
 
+static inline bool in_bpf_jit(struct pt_regs *regs)
+{
+    if (!IS_ENABLED(CONFIG_BPF_JIT))
+        return false;
+
+    return regs->pc >= BPF_JIT_REGION_START &&
+           regs->pc < BPF_JIT_REGION_END;
+}
+
 #ifdef CONFIG_BPF_JIT
 int arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
                               struct pt_regs *regs);

@@ -69,6 +69,9 @@ static inline void *sve_pffr(struct thread_struct *thread)
 extern void sve_save_state(void *state, u32 *pfpsr);
 extern void sve_load_state(void const *state, u32 const *pfpsr,
                            unsigned long vq_minus_1);
+extern void sve_flush_live(void);
+extern void sve_load_from_fpsimd_state(struct user_fpsimd_state const *state,
+                                       unsigned long vq_minus_1);
 extern unsigned int sve_get_vl(void);
 
 struct arm64_cpu_capabilities;

@@ -164,25 +164,59 @@
     | ((\np) << 5)
 .endm
 
+/* PFALSE P\np.B */
+.macro _sve_pfalse np
+    _sve_check_preg \np
+    .inst 0x2518e400 \
+        | (\np)
+.endm
+
 .macro __for from:req, to:req
 .if (\from) == (\to)
-    _for__body \from
+    _for__body %\from
 .else
-    __for \from, (\from) + ((\to) - (\from)) / 2
-    __for (\from) + ((\to) - (\from)) / 2 + 1, \to
+    __for %\from, %((\from) + ((\to) - (\from)) / 2)
+    __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
 .endif
 .endm
 
 .macro _for var:req, from:req, to:req, insn:vararg
 .macro _for__body \var:req
+.noaltmacro
 \insn
+.altmacro
 .endm
 
+.altmacro
 __for \from, \to
+.noaltmacro
 .purgem _for__body
 .endm
 
+/* Update ZCR_EL1.LEN with the new VQ */
+.macro sve_load_vq xvqminus1, xtmp, xtmp2
+    mrs_s \xtmp, SYS_ZCR_EL1
+    bic \xtmp2, \xtmp, ZCR_ELx_LEN_MASK
+    orr \xtmp2, \xtmp2, \xvqminus1
+    cmp \xtmp2, \xtmp
+    b.eq 921f
+    msr_s SYS_ZCR_EL1, \xtmp2 //self-synchronising
+921:
+.endm
+
+/* Preserve the first 128-bits of Znz and zero the rest. */
+.macro _sve_flush_z nz
+    _sve_check_zreg \nz
+    mov v\nz\().16b, v\nz\().16b
+.endm
+
+.macro sve_flush
+    _for n, 0, 31, _sve_flush_z \n
+    _for n, 0, 15, _sve_pfalse \n
+    _sve_wrffr 0
+.endm
+
 .macro sve_save nxbase, xpfpsr, nxtmp
     _for n, 0, 31, _sve_str_v \n, \nxbase, \n - 34
     _for n, 0, 15, _sve_str_p \n, \nxbase, \n - 16

@@ -197,13 +231,7 @@
 .endm
 
 .macro sve_load nxbase, xpfpsr, xvqminus1, nxtmp, xtmp2
-    mrs_s x\nxtmp, SYS_ZCR_EL1
-    bic \xtmp2, x\nxtmp, ZCR_ELx_LEN_MASK
-    orr \xtmp2, \xtmp2, \xvqminus1
-    cmp \xtmp2, x\nxtmp
-    b.eq 921f
-    msr_s SYS_ZCR_EL1, \xtmp2 // self-synchronising
-921:
+    sve_load_vq \xvqminus1, x\nxtmp, \xtmp2
     _for n, 0, 31, _sve_ldr_v \n, \nxbase, \n - 34
     _sve_ldr_p 0, \nxbase
     _sve_wrffr 0

@@ -8,18 +8,27 @@
 #include <uapi/asm/hwcap.h>
 #include <asm/cpufeature.h>
 
+#define COMPAT_HWCAP_SWP (1 << 0)
 #define COMPAT_HWCAP_HALF (1 << 1)
 #define COMPAT_HWCAP_THUMB (1 << 2)
+#define COMPAT_HWCAP_26BIT (1 << 3)
 #define COMPAT_HWCAP_FAST_MULT (1 << 4)
+#define COMPAT_HWCAP_FPA (1 << 5)
 #define COMPAT_HWCAP_VFP (1 << 6)
 #define COMPAT_HWCAP_EDSP (1 << 7)
+#define COMPAT_HWCAP_JAVA (1 << 8)
+#define COMPAT_HWCAP_IWMMXT (1 << 9)
+#define COMPAT_HWCAP_CRUNCH (1 << 10)
+#define COMPAT_HWCAP_THUMBEE (1 << 11)
 #define COMPAT_HWCAP_NEON (1 << 12)
 #define COMPAT_HWCAP_VFPv3 (1 << 13)
+#define COMPAT_HWCAP_VFPV3D16 (1 << 14)
 #define COMPAT_HWCAP_TLS (1 << 15)
 #define COMPAT_HWCAP_VFPv4 (1 << 16)
 #define COMPAT_HWCAP_IDIVA (1 << 17)
 #define COMPAT_HWCAP_IDIVT (1 << 18)
 #define COMPAT_HWCAP_IDIV (COMPAT_HWCAP_IDIVA|COMPAT_HWCAP_IDIVT)
+#define COMPAT_HWCAP_VFPD32 (1 << 19)
 #define COMPAT_HWCAP_LPAE (1 << 20)
 #define COMPAT_HWCAP_EVTSTRM (1 << 21)

@@ -359,9 +359,13 @@ __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000)
 __AARCH64_INSN_FUNCS(exception, 0xFF000000, 0xD4000000)
 __AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F)
 __AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000)
+__AARCH64_INSN_FUNCS(br_auth, 0xFEFFF800, 0xD61F0800)
 __AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000)
+__AARCH64_INSN_FUNCS(blr_auth, 0xFEFFF800, 0xD63F0800)
 __AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000)
+__AARCH64_INSN_FUNCS(ret_auth, 0xFFFFFBFF, 0xD65F0BFF)
 __AARCH64_INSN_FUNCS(eret, 0xFFFFFFFF, 0xD69F03E0)
+__AARCH64_INSN_FUNCS(eret_auth, 0xFFFFFBFF, 0xD69F0BFF)
 __AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000)
 __AARCH64_INSN_FUNCS(msr_imm, 0xFFF8F01F, 0xD500401F)
 __AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000)

@@ -86,7 +86,7 @@
     + EARLY_PGDS((vstart), (vend)) /* each PGDIR needs a next level page table */ \
     + EARLY_PUDS((vstart), (vend)) /* each PUD needs a next level page table */ \
     + EARLY_PMDS((vstart), (vend))) /* each PMD needs a next level page table */
-#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR + TEXT_OFFSET, _end))
+#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end))
 #define IDMAP_DIR_SIZE (IDMAP_PGTABLE_LEVELS * PAGE_SIZE)
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN

@@ -66,7 +66,7 @@
 * TWI: Trap WFI
 * TIDCP: Trap L2CTLR/L2ECTLR
 * BSU_IS: Upgrade barriers to the inner shareable domain
-* FB: Force broadcast of all maintainance operations
+* FB: Force broadcast of all maintenance operations
 * AMO: Override CPSR.A and enable signaling with VA
 * IMO: Override CPSR.I and enable signaling with VI
 * FMO: Override CPSR.F and enable signaling with VF

@@ -169,7 +169,7 @@ extern s64 memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
 
-/* the virtual base of the kernel image (minus TEXT_OFFSET) */
+/* the virtual base of the kernel image */
 extern u64 kimage_vaddr;
 
 /* the offset between the kernel virtual and physical mappings */

@@ -17,11 +17,14 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/refcount.h>
+
 typedef struct {
     atomic64_t id;
 #ifdef CONFIG_COMPAT
     void *sigpage;
 #endif
+    refcount_t pinned;
     void *vdso;
     unsigned long flags;
 } mm_context_t;

@@ -177,7 +177,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 #define destroy_context(mm) do { } while(0)
 void check_and_switch_context(struct mm_struct *mm);
 
-#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int
+init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
+    atomic64_set(&mm->context.id, 0);
+    refcount_set(&mm->context.pinned, 0);
+    return 0;
+}
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 static inline void update_saved_ttbr0(struct task_struct *tsk,

@@ -248,6 +254,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
+unsigned long arm64_mm_context_get(struct mm_struct *mm);
+void arm64_mm_context_put(struct mm_struct *mm);
+
 #endif /* !__ASSEMBLY__ */
 #endif /* !__ASM_MMU_CONTEXT_H */

@@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node);
 /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
 static inline const struct cpumask *cpumask_of_node(int node)
 {
+    if (node == NUMA_NO_NODE)
+        return cpu_all_mask;
+
     return node_to_cpumask_map[node];
 }
 #endif

@@ -11,13 +11,8 @@
 #include <linux/const.h>
 
 /* PAGE_SHIFT determines the page size */
-/* CONT_SHIFT determines the number of pages which can be tracked together */
 #define PAGE_SHIFT CONFIG_ARM64_PAGE_SHIFT
-#define CONT_SHIFT CONFIG_ARM64_CONT_SHIFT
 #define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
 #define PAGE_MASK (~(PAGE_SIZE-1))
 
-#define CONT_SIZE (_AC(1, UL) << (CONT_SHIFT + PAGE_SHIFT))
-#define CONT_MASK (~(CONT_SIZE-1))
-
 #endif /* __ASM_PAGE_DEF_H */

@@ -17,6 +17,7 @@
 #define pcibios_assign_all_busses() \
     (pci_has_flag(PCI_REASSIGN_ALL_BUS))
 
+#define arch_can_pci_mmap_wc() 1
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
 
 extern int isa_dma_bridge_buggy;

@@ -236,6 +236,9 @@
 #define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
 
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK 0xff
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);

@@ -81,25 +81,15 @@
 /*
  * Contiguous page definitions.
  */
-#ifdef CONFIG_ARM64_64K_PAGES
-#define CONT_PTE_SHIFT (5 + PAGE_SHIFT)
-#define CONT_PMD_SHIFT (5 + PMD_SHIFT)
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define CONT_PTE_SHIFT (7 + PAGE_SHIFT)
-#define CONT_PMD_SHIFT (5 + PMD_SHIFT)
-#else
-#define CONT_PTE_SHIFT (4 + PAGE_SHIFT)
-#define CONT_PMD_SHIFT (4 + PMD_SHIFT)
-#endif
-
+#define CONT_PTE_SHIFT (CONFIG_ARM64_CONT_PTE_SHIFT + PAGE_SHIFT)
 #define CONT_PTES (1 << (CONT_PTE_SHIFT - PAGE_SHIFT))
 #define CONT_PTE_SIZE (CONT_PTES * PAGE_SIZE)
 #define CONT_PTE_MASK (~(CONT_PTE_SIZE - 1))
+
+#define CONT_PMD_SHIFT (CONFIG_ARM64_CONT_PMD_SHIFT + PMD_SHIFT)
 #define CONT_PMDS (1 << (CONT_PMD_SHIFT - PMD_SHIFT))
 #define CONT_PMD_SIZE (CONT_PMDS * PMD_SIZE)
 #define CONT_PMD_MASK (~(CONT_PMD_SIZE - 1))
-/* the numerical offset of the PTE within a range of CONT_PTES */
-#define CONT_RANGE_OFFSET(addr) (((addr)>>PAGE_SHIFT)&(CONT_PTES-1))
 
 /*
  * Hardware page table definitions.

@@ -19,6 +19,13 @@
 #define PTE_DEVMAP (_AT(pteval_t, 1) << 57)
 #define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
 
+/*
+ * This bit indicates that the entry is present i.e. pmd_page()
+ * still points to a valid huge page in memory even if the pmd
+ * has been invalidated.
+ */
+#define PMD_PRESENT_INVALID (_AT(pteval_t, 1) << 59) /* only when !PMD_SECT_VALID */
+
 #ifndef __ASSEMBLY__
 
 #include <asm/cpufeature.h>

@@ -35,11 +35,6 @@
 
 extern struct page *vmemmap;
 
-extern void __pte_error(const char *file, int line, unsigned long val);
-extern void __pmd_error(const char *file, int line, unsigned long val);
-extern void __pud_error(const char *file, int line, unsigned long val);
-extern void __pgd_error(const char *file, int line, unsigned long val);
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE

@@ -50,6 +45,14 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
     __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+/*
+ * Outside of a few very special situations (e.g. hibernation), we always
+ * use broadcast TLB invalidation instructions, therefore a spurious page
+ * fault on one CPU which has been handled concurrently by another CPU
+ * does not need to perform additional invalidation.
+ */
+#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
+
 /*
  * ZERO_PAGE is a global shared page that is always zero: used
  * for zero-mapped memory areas etc..

@@ -57,7 +60,8 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
 extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define ZERO_PAGE(vaddr) phys_to_page(__pa_symbol(empty_zero_page))
 
-#define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte))
+#define pte_ERROR(e) \
+    pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))
 
 /*
  * Macros to convert between a physical address and its placement in a

@@ -145,6 +149,18 @@ static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
     return pte;
 }
 
+static inline pmd_t clear_pmd_bit(pmd_t pmd, pgprot_t prot)
+{
+    pmd_val(pmd) &= ~pgprot_val(prot);
+    return pmd;
+}
+
+static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
+{
+    pmd_val(pmd) |= pgprot_val(prot);
+    return pmd;
+}
+
 static inline pte_t pte_wrprotect(pte_t pte)
 {
     pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));

@@ -363,15 +379,24 @@ static inline int pmd_protnone(pmd_t pmd)
 }
 #endif
 
+#define pmd_present_invalid(pmd) (!!(pmd_val(pmd) & PMD_PRESENT_INVALID))
+
+static inline int pmd_present(pmd_t pmd)
+{
+    return pte_present(pmd_pte(pmd)) || pmd_present_invalid(pmd);
+}
+
 /*
  * THP definitions.
  */
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define pmd_trans_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
+static inline int pmd_trans_huge(pmd_t pmd)
+{
+    return pmd_val(pmd) && pmd_present(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-#define pmd_present(pmd) pte_present(pmd_pte(pmd))
 #define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
 #define pmd_young(pmd) pte_young(pmd_pte(pmd))
 #define pmd_valid(pmd) pte_valid(pmd_pte(pmd))

@@ -381,7 +406,14 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
 #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd)))
 #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
-#define pmd_mkinvalid(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID))
+
+static inline pmd_t pmd_mkinvalid(pmd_t pmd)
+{
+    pmd = set_pmd_bit(pmd, __pgprot(PMD_PRESENT_INVALID));
+    pmd = clear_pmd_bit(pmd, __pgprot(PMD_SECT_VALID));
+    return pmd;
+}
 
 #define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))

@@ -541,7 +573,8 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 #if CONFIG_PGTABLE_LEVELS > 2
 
-#define pmd_ERROR(pmd) __pmd_error(__FILE__, __LINE__, pmd_val(pmd))
+#define pmd_ERROR(e) \
+    pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
 
 #define pud_none(pud) (!pud_val(pud))
 #define pud_bad(pud) (!(pud_val(pud) & PUD_TABLE_BIT))

@@ -608,7 +641,8 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
 
 #if CONFIG_PGTABLE_LEVELS > 3
 
-#define pud_ERROR(pud) __pud_error(__FILE__, __LINE__, pud_val(pud))
+#define pud_ERROR(e) \
+    pr_err("%s:%d: bad pud %016llx.\n", __FILE__, __LINE__, pud_val(e))
 
 #define p4d_none(p4d) (!p4d_val(p4d))
 #define p4d_bad(p4d) (!(p4d_val(p4d) & 2))

@@ -667,7 +701,8 @@ static inline unsigned long p4d_page_vaddr(p4d_t p4d)
 
 #endif  /* CONFIG_PGTABLE_LEVELS > 3 */
 
-#define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd))
+#define pgd_ERROR(e) \
+    pr_err("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e))
 
 #define pgd_set_fixmap(addr) ((pgd_t *)set_fixmap_offset(FIX_PGD, addr))
 #define pgd_clear_fixmap() clear_fixmap(FIX_PGD)

@@ -847,6 +882,11 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(swp) ((pte_t) { (swp).val })
 
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val(pmd) })
+#define __swp_entry_to_pmd(swp) __pmd((swp).val)
+#endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
 /*
  * Ensure that there are not more swap files than can be encoded in the kernel
  * PTEs.

@@ -63,7 +63,7 @@ struct stackframe {
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
 extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-                            int (*fn)(struct stackframe *, void *), void *data);
+                            bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
                            const char *loglvl);

@@ -321,6 +321,8 @@
 #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1)
 #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
 
+#define SYS_PMMIR_EL1 sys_reg(3, 0, 9, 14, 6)
+
 #define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0)
 #define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0)

@@ -638,8 +640,16 @@
 #define ID_AA64ISAR1_APA_NI 0x0
 #define ID_AA64ISAR1_APA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC 0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2 0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC 0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB 0x5
 #define ID_AA64ISAR1_API_NI 0x0
 #define ID_AA64ISAR1_API_IMP_DEF 0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC 0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2 0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC 0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB 0x5
 #define ID_AA64ISAR1_GPA_NI 0x0
 #define ID_AA64ISAR1_GPA_ARCHITECTED 0x1
 #define ID_AA64ISAR1_GPI_NI 0x0

@@ -24,7 +24,7 @@ struct undef_hook {
 
 void register_undef_hook(struct undef_hook *hook);
 void unregister_undef_hook(struct undef_hook *hook);
-void force_signal_inject(int signal, int code, unsigned long address);
+void force_signal_inject(int signal, int code, unsigned long address, unsigned int err);
 void arm64_notify_segfault(unsigned long addr);
 void arm64_force_sig_fault(int signo, int code, void __user *addr, const char *str);
 void arm64_force_sig_mceerr(int code, void __user *addr, short lsb, const char *str);

@@ -3,8 +3,6 @@
 # Makefile for the linux kernel.
 #
 
-CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET)
-AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
 CFLAGS_armv8_deprecated.o := -I$(src)
 
 CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)

@@ -35,6 +35,10 @@ SYM_CODE_START(__cpu_soft_restart)
     mov_q x13, SCTLR_ELx_FLAGS
     bic x12, x12, x13
     pre_disable_mmu_workaround
+    /*
+     * either disable EL1&0 translation regime or disable EL2&0 translation
+     * regime if HCR_EL2.E2H == 1
+     */
     msr sctlr_el1, x12
     isb

@@ -169,8 +169,6 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
 }
 #endif /* CONFIG_KVM_INDIRECT_VECTORS */
 
-#include <linux/arm-smccc.h>
-
 static void __maybe_unused call_smc_arch_workaround_1(void)
 {
     arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);

@@ -197,9 +197,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
     ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-                   FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+                   FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
     ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-                   FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+                   FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
     ARM64_FTR_END,
 };

@@ -1111,6 +1111,7 @@ u64 read_sanitised_ftr_reg(u32 id)
         return 0;
     return regp->sys_val;
 }
+EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
 #define read_sysreg_case(r) \
     case r: return read_sysreg_s(r)

@@ -1443,6 +1444,7 @@ static inline void __cpu_enable_hw_dbm(void)
     write_sysreg(tcr, tcr_el1);
     isb();
+    local_flush_tlb_all();
 }
 
 static bool cpu_has_broken_dbm(void)

@@ -1648,11 +1650,37 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
-                             int __unused)
+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
 {
-    return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-           __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+    int boot_val, sec_val;
+
+    /* We don't expect to be called with SCOPE_SYSTEM */
+    WARN_ON(scope == SCOPE_SYSTEM);
+    /*
+     * The ptr-auth feature levels are not intercompatible with lower
+     * levels. Hence we must match ptr-auth feature level of the secondary
+     * CPUs with that of the boot CPU. The level of boot cpu is fetched
+     * from the sanitised register whereas direct register read is done for
+     * the secondary CPUs.
+     * The sanitised feature state is guaranteed to match that of the
+     * boot CPU as a mismatched secondary CPU is parked before it gets
+     * a chance to update the state, with the capability.
+     */
+    boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
+                                           entry->field_pos, entry->sign);
+    if (scope & SCOPE_BOOT_CPU)
+        return boot_val >= entry->min_field_value;
+    /* Now check for the secondary CPUs with SCOPE_LOCAL_CPU scope */
+    sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+                                          entry->field_pos, entry->sign);
+    return sec_val == boot_val;
+}
+
+static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
+                                     int scope)
+{
+    return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
+           has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,

@@ -2021,7 +2049,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
         .sign = FTR_UNSIGNED,
         .field_pos = ID_AA64ISAR1_APA_SHIFT,
         .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
-        .matches = has_cpuid_feature,
+        .matches = has_address_auth_cpucap,
     },
     {
         .desc = "Address authentication (IMP DEF algorithm)",

@@ -2031,12 +2059,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
         .sign = FTR_UNSIGNED,
         .field_pos = ID_AA64ISAR1_API_SHIFT,
         .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
-        .matches = has_cpuid_feature,
+        .matches = has_address_auth_cpucap,
     },
     {
         .capability = ARM64_HAS_ADDRESS_AUTH,
         .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-        .matches = has_address_auth,
+        .matches = has_address_auth_metacap,
     },
     {
         .desc = "Generic authentication (architected algorithm)",

@@ -43,94 +43,92 @@ static const char *icache_policy_str[] = {
 unsigned long __icache_flags;
 
 static const char *const hwcap_str[] = {
-    "fp",
-    "asimd",
-    "evtstrm",
-    "aes",
-    "pmull",
-    "sha1",
-    "sha2",
-    "crc32",
-    "atomics",
-    "fphp",
-    "asimdhp",
-    "cpuid",
-    "asimdrdm",
-    "jscvt",
-    "fcma",
-    "lrcpc",
-    "dcpop",
-    "sha3",
-    "sm3",
-    "sm4",
-    "asimddp",
-    "sha512",
-    "sve",
-    "asimdfhm",
-    "dit",
-    "uscat",
-    "ilrcpc",
-    "flagm",
-    "ssbs",
-    "sb",
-    "paca",
-    "pacg",
-    "dcpodp",
-    "sve2",
-    "sveaes",
-    "svepmull",
-    "svebitperm",
-    "svesha3",
-    "svesm4",
-    "flagm2",
-    "frint",
-    "svei8mm",
-    "svef32mm",
-    "svef64mm",
-    "svebf16",
-    "i8mm",
-    "bf16",
-    "dgh",
-    "rng",
-    "bti",
-    /* reserved for "mte" */
-    NULL
+    [KERNEL_HWCAP_FP]          = "fp",
+    [KERNEL_HWCAP_ASIMD]       = "asimd",
+    [KERNEL_HWCAP_EVTSTRM]     = "evtstrm",
+    [KERNEL_HWCAP_AES]         = "aes",
+    [KERNEL_HWCAP_PMULL]       = "pmull",
+    [KERNEL_HWCAP_SHA1]        = "sha1",
+    [KERNEL_HWCAP_SHA2]        = "sha2",
+    [KERNEL_HWCAP_CRC32]       = "crc32",
+    [KERNEL_HWCAP_ATOMICS]     = "atomics",
+    [KERNEL_HWCAP_FPHP]        = "fphp",
+    [KERNEL_HWCAP_ASIMDHP]     = "asimdhp",
+    [KERNEL_HWCAP_CPUID]       = "cpuid",
+    [KERNEL_HWCAP_ASIMDRDM]    = "asimdrdm",
+    [KERNEL_HWCAP_JSCVT]       = "jscvt",
+    [KERNEL_HWCAP_FCMA]        = "fcma",
+    [KERNEL_HWCAP_LRCPC]       = "lrcpc",
+    [KERNEL_HWCAP_DCPOP]       = "dcpop",
+    [KERNEL_HWCAP_SHA3]        = "sha3",
+    [KERNEL_HWCAP_SM3]         = "sm3",
+    [KERNEL_HWCAP_SM4]         = "sm4",
+    [KERNEL_HWCAP_ASIMDDP]     = "asimddp",
+    [KERNEL_HWCAP_SHA512]      = "sha512",
+    [KERNEL_HWCAP_SVE]         = "sve",
+    [KERNEL_HWCAP_ASIMDFHM]    = "asimdfhm",
+    [KERNEL_HWCAP_DIT]         = "dit",
+    [KERNEL_HWCAP_USCAT]       = "uscat",
+    [KERNEL_HWCAP_ILRCPC]      = "ilrcpc",
+    [KERNEL_HWCAP_FLAGM]       = "flagm",
+    [KERNEL_HWCAP_SSBS]        = "ssbs",
+    [KERNEL_HWCAP_SB]          = "sb",
+    [KERNEL_HWCAP_PACA]        = "paca",
+    [KERNEL_HWCAP_PACG]        = "pacg",
+    [KERNEL_HWCAP_DCPODP]      = "dcpodp",
+    [KERNEL_HWCAP_SVE2]        = "sve2",
+    [KERNEL_HWCAP_SVEAES]      = "sveaes",
+    [KERNEL_HWCAP_SVEPMULL]    = "svepmull",
+    [KERNEL_HWCAP_SVEBITPERM]  = "svebitperm",
+    [KERNEL_HWCAP_SVESHA3]     = "svesha3",
+    [KERNEL_HWCAP_SVESM4]      = "svesm4",
+    [KERNEL_HWCAP_FLAGM2]      = "flagm2",
+    [KERNEL_HWCAP_FRINT]       = "frint",
+    [KERNEL_HWCAP_SVEI8MM]     = "svei8mm",
+    [KERNEL_HWCAP_SVEF32MM]    = "svef32mm",
+    [KERNEL_HWCAP_SVEF64MM]    = "svef64mm",
+    [KERNEL_HWCAP_SVEBF16]     = "svebf16",
+    [KERNEL_HWCAP_I8MM]        = "i8mm",
+    [KERNEL_HWCAP_BF16]        = "bf16",
+    [KERNEL_HWCAP_DGH]         = "dgh",
+    [KERNEL_HWCAP_RNG]         = "rng",
+    [KERNEL_HWCAP_BTI]         = "bti",
 };
 
 #ifdef CONFIG_COMPAT
+#define COMPAT_KERNEL_HWCAP(x) const_ilog2(COMPAT_HWCAP_ ## x)
 static const char *const compat_hwcap_str[] = {
-    "swp",
-    "half",
-    "thumb",
-    "26bit",
-    "fastmult",
-    "fpa",
-    "vfp",
-    "edsp",
-    "java",
-    "iwmmxt",
-    "crunch",
-    "thumbee",
-    "neon",
-    "vfpv3",
-    "vfpv3d16",
-    "tls",
-    "vfpv4",
-    "idiva",
-    "idivt",
-    "vfpd32",
-    "lpae",
-    "evtstrm",
-    NULL
+    [COMPAT_KERNEL_HWCAP(SWP)]       = "swp",
+    [COMPAT_KERNEL_HWCAP(HALF)]      = "half",
+    [COMPAT_KERNEL_HWCAP(THUMB)]     = "thumb",
+    [COMPAT_KERNEL_HWCAP(26BIT)]     = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(FAST_MULT)] = "fastmult",
+    [COMPAT_KERNEL_HWCAP(FPA)]       = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(VFP)]       = "vfp",
+    [COMPAT_KERNEL_HWCAP(EDSP)]      = "edsp",
+    [COMPAT_KERNEL_HWCAP(JAVA)]      = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(IWMMXT)]    = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(CRUNCH)]    = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(THUMBEE)]   = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(NEON)]      = "neon",
+    [COMPAT_KERNEL_HWCAP(VFPv3)]     = "vfpv3",
+    [COMPAT_KERNEL_HWCAP(VFPV3D16)]  = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(TLS)]       = "tls",
+    [COMPAT_KERNEL_HWCAP(VFPv4)]     = "vfpv4",
+    [COMPAT_KERNEL_HWCAP(IDIVA)]     = "idiva",
+    [COMPAT_KERNEL_HWCAP(IDIVT)]     = "idivt",
+    [COMPAT_KERNEL_HWCAP(VFPD32)]    = NULL, /* Not possible on arm64 */
+    [COMPAT_KERNEL_HWCAP(LPAE)]      = "lpae",
+    [COMPAT_KERNEL_HWCAP(EVTSTRM)]   = "evtstrm",
 };
 
+#define COMPAT_KERNEL_HWCAP2(x) const_ilog2(COMPAT_HWCAP2_ ## x)
 static const char *const compat_hwcap2_str[] = {
-    "aes",
-    "pmull",
-    "sha1",
-    "sha2",
-    "crc32",
-    NULL
+    [COMPAT_KERNEL_HWCAP2(AES)]   = "aes",
+    [COMPAT_KERNEL_HWCAP2(PMULL)] = "pmull",
+    [COMPAT_KERNEL_HWCAP2(SHA1)]  = "sha1",
+    [COMPAT_KERNEL_HWCAP2(SHA2)]  = "sha2",
+    [COMPAT_KERNEL_HWCAP2(CRC32)] = "crc32",
 };
 #endif /* CONFIG_COMPAT */

@@ -166,16 +164,25 @@ static int c_show(struct seq_file *m, void *v)
     seq_puts(m, "Features\t:");
     if (compat) {
 #ifdef CONFIG_COMPAT
-        for (j = 0; compat_hwcap_str[j]; j++)
-            if (compat_elf_hwcap & (1 << j))
+        for (j = 0; j < ARRAY_SIZE(compat_hwcap_str); j++) {
+            if (compat_elf_hwcap & (1 << j)) {
+                /*
+                 * Warn once if any feature should not
+                 * have been present on arm64 platform.
+                 */
+                if (WARN_ON_ONCE(!compat_hwcap_str[j]))
+                    continue;
+
                 seq_printf(m, " %s", compat_hwcap_str[j]);
+            }
+        }
 
-        for (j = 0; compat_hwcap2_str[j]; j++)
+        for (j = 0; j < ARRAY_SIZE(compat_hwcap2_str); j++)
             if (compat_elf_hwcap2 & (1 << j))
                 seq_printf(m, " %s", compat_hwcap2_str[j]);
 #endif /* CONFIG_COMPAT */
     } else {
-        for (j = 0; hwcap_str[j]; j++)
+        for (j = 0; j < ARRAY_SIZE(hwcap_str); j++)
             if (cpu_have_feature(j))
                 seq_printf(m, " %s", hwcap_str[j]);
     }

@@ -384,7 +384,7 @@ void __init debug_traps_init(void)
     hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
                           TRAP_TRACE, "single-step handler");
     hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP,
-                          TRAP_BRKPT, "ptrace BRK handler");
+                          TRAP_BRKPT, "BRK handler");
 }
 
 /* Re-enable single step for syscall restarting. */

@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+    local_daif_inherit(regs);
+    do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
     unsigned long esr = read_sysreg(esr_el1);

@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
     case ESR_ELx_EC_BRK64:
         el1_dbg(regs, esr);
         break;
+    case ESR_ELx_EC_FPAC:
+        el1_fpac(regs, esr);
+        break;
     default:
         el1_inv(regs, esr);
     }

@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+    user_exit_irqoff();
+    local_daif_restore(DAIF_PROCCTX);
+    do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
     unsigned long esr = read_sysreg(esr_el1);

@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
     case ESR_ELx_EC_BRK64:
         el0_dbg(regs, esr);
         break;
+    case ESR_ELx_EC_FPAC:
+        el0_fpac(regs, esr);
+        break;
     default:
         el0_inv(regs, esr);
     }

...@@ -32,6 +32,7 @@ SYM_FUNC_START(fpsimd_load_state) ...@@ -32,6 +32,7 @@ SYM_FUNC_START(fpsimd_load_state)
SYM_FUNC_END(fpsimd_load_state) SYM_FUNC_END(fpsimd_load_state)
#ifdef CONFIG_ARM64_SVE #ifdef CONFIG_ARM64_SVE
SYM_FUNC_START(sve_save_state) SYM_FUNC_START(sve_save_state)
sve_save 0, x1, 2 sve_save 0, x1, 2
ret ret
...@@ -46,4 +47,28 @@ SYM_FUNC_START(sve_get_vl) ...@@ -46,4 +47,28 @@ SYM_FUNC_START(sve_get_vl)
_sve_rdvl 0, 1 _sve_rdvl 0, 1
ret ret
SYM_FUNC_END(sve_get_vl) SYM_FUNC_END(sve_get_vl)
/*
* Load SVE state from FPSIMD state.
*
* x0 = pointer to struct fpsimd_state
* x1 = VQ - 1
*
* Each SVE vector will be loaded with the first 128-bits taken from FPSIMD
* and the rest zeroed. All the other SVE registers will be zeroed.
*/
SYM_FUNC_START(sve_load_from_fpsimd_state)
sve_load_vq x1, x2, x3
fpsimd_restore x0, 8
_for n, 0, 15, _sve_pfalse \n
_sve_wrffr 0
ret
SYM_FUNC_END(sve_load_from_fpsimd_state)
/* Zero all SVE registers but the first 128-bits of each vector */
SYM_FUNC_START(sve_flush_live)
sve_flush
ret
SYM_FUNC_END(sve_flush_live)
#endif /* CONFIG_ARM64_SVE */ #endif /* CONFIG_ARM64_SVE */
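The behaviour of sve_load_from_fpsimd_state is summarised by its comment above; as a loose userspace model (assumptions: 128-bit FPSIMD lanes, a vector length of vl bytes, no predicate/FFR state beyond "zeroed"), the operation amounts to zeroing each wide vector and copying the low 16 bytes in:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NR_VREGS     32
#define FPSIMD_BYTES 16		/* low 128 bits shared with FPSIMD */

/* Model: load SVE Z-regs of 'vl' bytes each from 128-bit FPSIMD V-regs. */
static void load_sve_from_fpsimd(uint8_t (*z)[256], size_t vl,
				 const uint8_t (*v)[FPSIMD_BYTES])
{
	for (int i = 0; i < NR_VREGS; i++) {
		memset(z[i], 0, vl);			/* upper bits -> zero */
		memcpy(z[i], v[i], FPSIMD_BYTES);	/* low 128 bits <- V-reg */
	}
	/* The real routine also zeroes the predicate registers and FFR. */
}

int main(void)
{
	static uint8_t z[NR_VREGS][256];
	static uint8_t v[NR_VREGS][FPSIMD_BYTES];

	memset(v, 0xab, sizeof(v));
	load_sve_from_fpsimd(z, 64, (const uint8_t (*)[FPSIMD_BYTES])v);
	printf("z0[0]=%#x z0[16]=%#x\n", z[0][0], z[0][16]);	/* 0xab, 0 */
	return 0;
}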
...@@ -32,9 +32,11 @@ ...@@ -32,9 +32,11 @@
#include <linux/swab.h> #include <linux/swab.h>
#include <asm/esr.h> #include <asm/esr.h>
#include <asm/exception.h>
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/cputype.h> #include <asm/cputype.h>
#include <asm/neon.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/simd.h> #include <asm/simd.h>
#include <asm/sigcontext.h> #include <asm/sigcontext.h>
...@@ -312,7 +314,7 @@ static void fpsimd_save(void) ...@@ -312,7 +314,7 @@ static void fpsimd_save(void)
* re-enter user with corrupt state. * re-enter user with corrupt state.
* There's no way to recover, so kill it: * There's no way to recover, so kill it:
*/ */
force_signal_inject(SIGKILL, SI_KERNEL, 0); force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
return; return;
} }
...@@ -928,7 +930,7 @@ void fpsimd_release_task(struct task_struct *dead_task) ...@@ -928,7 +930,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
* the SVE access trap will be disabled the next time this task * the SVE access trap will be disabled the next time this task
* reaches ret_to_user. * reaches ret_to_user.
* *
* TIF_SVE should be clear on entry: otherwise, task_fpsimd_load() * TIF_SVE should be clear on entry: otherwise, fpsimd_restore_current_state()
* would have disabled the SVE access trap for userspace during * would have disabled the SVE access trap for userspace during
* ret_to_user, making an SVE access trap impossible in that case. * ret_to_user, making an SVE access trap impossible in that case.
*/ */
...@@ -936,7 +938,7 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs) ...@@ -936,7 +938,7 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
{ {
/* Even if we chose not to use SVE, the hardware could still trap: */ /* Even if we chose not to use SVE, the hardware could still trap: */
if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) { if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
return; return;
} }
......
...@@ -36,14 +36,10 @@ ...@@ -36,14 +36,10 @@
#include "efi-header.S" #include "efi-header.S"
#define __PHYS_OFFSET (KERNEL_START - TEXT_OFFSET) #define __PHYS_OFFSET KERNEL_START
#if (TEXT_OFFSET & 0xfff) != 0 #if (PAGE_OFFSET & 0x1fffff) != 0
#error TEXT_OFFSET must be at least 4KB aligned
#elif (PAGE_OFFSET & 0x1fffff) != 0
#error PAGE_OFFSET must be at least 2MB aligned #error PAGE_OFFSET must be at least 2MB aligned
#elif TEXT_OFFSET > 0x1fffff
#error TEXT_OFFSET must be less than 2MB
#endif #endif
/* /*
...@@ -55,7 +51,7 @@ ...@@ -55,7 +51,7 @@
* x0 = physical address to the FDT blob. * x0 = physical address to the FDT blob.
* *
* This code is mostly position independent so you call this at * This code is mostly position independent so you call this at
* __pa(PAGE_OFFSET + TEXT_OFFSET). * __pa(PAGE_OFFSET).
* *
* Note that the callee-saved registers are used for storing variables * Note that the callee-saved registers are used for storing variables
* that are useful before the MMU is enabled. The allocations are described * that are useful before the MMU is enabled. The allocations are described
...@@ -77,7 +73,7 @@ _head: ...@@ -77,7 +73,7 @@ _head:
b primary_entry // branch to kernel start, magic b primary_entry // branch to kernel start, magic
.long 0 // reserved .long 0 // reserved
#endif #endif
le64sym _kernel_offset_le // Image load offset from start of RAM, little-endian .quad 0 // Image load offset from start of RAM, little-endian
le64sym _kernel_size_le // Effective size of kernel image, little-endian le64sym _kernel_size_le // Effective size of kernel image, little-endian
le64sym _kernel_flags_le // Informative flags, little-endian le64sym _kernel_flags_le // Informative flags, little-endian
.quad 0 // reserved .quad 0 // reserved
...@@ -382,7 +378,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables) ...@@ -382,7 +378,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
* Map the kernel image (starting with PHYS_OFFSET). * Map the kernel image (starting with PHYS_OFFSET).
*/ */
adrp x0, init_pg_dir adrp x0, init_pg_dir
mov_q x5, KIMAGE_VADDR + TEXT_OFFSET // compile time __va(_text) mov_q x5, KIMAGE_VADDR // compile time __va(_text)
add x5, x5, x23 // add KASLR displacement add x5, x5, x23 // add KASLR displacement
mov x4, PTRS_PER_PGD mov x4, PTRS_PER_PGD
adrp x6, _end // runtime __pa(_end) adrp x6, _end // runtime __pa(_end)
...@@ -474,7 +470,7 @@ SYM_FUNC_END(__primary_switched) ...@@ -474,7 +470,7 @@ SYM_FUNC_END(__primary_switched)
.pushsection ".rodata", "a" .pushsection ".rodata", "a"
SYM_DATA_START(kimage_vaddr) SYM_DATA_START(kimage_vaddr)
.quad _text - TEXT_OFFSET .quad _text
SYM_DATA_END(kimage_vaddr) SYM_DATA_END(kimage_vaddr)
EXPORT_SYMBOL(kimage_vaddr) EXPORT_SYMBOL(kimage_vaddr)
.popsection .popsection
......
...@@ -21,7 +21,6 @@ ...@@ -21,7 +21,6 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/utsname.h> #include <linux/utsname.h>
#include <linux/version.h>
#include <asm/barrier.h> #include <asm/barrier.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
......
...@@ -62,7 +62,6 @@ ...@@ -62,7 +62,6 @@
*/ */
#define HEAD_SYMBOLS \ #define HEAD_SYMBOLS \
DEFINE_IMAGE_LE64(_kernel_size_le, _end - _text); \ DEFINE_IMAGE_LE64(_kernel_size_le, _end - _text); \
DEFINE_IMAGE_LE64(_kernel_offset_le, TEXT_OFFSET); \
DEFINE_IMAGE_LE64(_kernel_flags_le, __HEAD_FLAGS); DEFINE_IMAGE_LE64(_kernel_flags_le, __HEAD_FLAGS);
#endif /* __ARM64_KERNEL_IMAGE_H */ #endif /* __ARM64_KERNEL_IMAGE_H */
...@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn) ...@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
case AARCH64_INSN_HINT_XPACLRI: case AARCH64_INSN_HINT_XPACLRI:
case AARCH64_INSN_HINT_PACIA_1716: case AARCH64_INSN_HINT_PACIA_1716:
case AARCH64_INSN_HINT_PACIB_1716: case AARCH64_INSN_HINT_PACIB_1716:
case AARCH64_INSN_HINT_AUTIA_1716:
case AARCH64_INSN_HINT_AUTIB_1716:
case AARCH64_INSN_HINT_PACIAZ: case AARCH64_INSN_HINT_PACIAZ:
case AARCH64_INSN_HINT_PACIASP: case AARCH64_INSN_HINT_PACIASP:
case AARCH64_INSN_HINT_PACIBZ: case AARCH64_INSN_HINT_PACIBZ:
case AARCH64_INSN_HINT_PACIBSP: case AARCH64_INSN_HINT_PACIBSP:
case AARCH64_INSN_HINT_AUTIAZ:
case AARCH64_INSN_HINT_AUTIASP:
case AARCH64_INSN_HINT_AUTIBZ:
case AARCH64_INSN_HINT_AUTIBSP:
case AARCH64_INSN_HINT_BTI: case AARCH64_INSN_HINT_BTI:
case AARCH64_INSN_HINT_BTIC: case AARCH64_INSN_HINT_BTIC:
case AARCH64_INSN_HINT_BTIJ: case AARCH64_INSN_HINT_BTIJ:
...@@ -176,7 +170,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn) ...@@ -176,7 +170,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn)
bool __kprobes aarch64_insn_is_branch(u32 insn) bool __kprobes aarch64_insn_is_branch(u32 insn)
{ {
/* b, bl, cb*, tb*, b.cond, br, blr */ /* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */
return aarch64_insn_is_b(insn) || return aarch64_insn_is_b(insn) ||
aarch64_insn_is_bl(insn) || aarch64_insn_is_bl(insn) ||
...@@ -185,8 +179,11 @@ bool __kprobes aarch64_insn_is_branch(u32 insn) ...@@ -185,8 +179,11 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
aarch64_insn_is_tbz(insn) || aarch64_insn_is_tbz(insn) ||
aarch64_insn_is_tbnz(insn) || aarch64_insn_is_tbnz(insn) ||
aarch64_insn_is_ret(insn) || aarch64_insn_is_ret(insn) ||
aarch64_insn_is_ret_auth(insn) ||
aarch64_insn_is_br(insn) || aarch64_insn_is_br(insn) ||
aarch64_insn_is_br_auth(insn) ||
aarch64_insn_is_blr(insn) || aarch64_insn_is_blr(insn) ||
aarch64_insn_is_blr_auth(insn) ||
aarch64_insn_is_bcond(insn); aarch64_insn_is_bcond(insn);
} }
......
@@ -137,11 +137,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 * whist unwinding the stackframe and is like a subroutine return so we use
 * the PC.
 */
-static int callchain_trace(struct stackframe *frame, void *data)
+static bool callchain_trace(void *data, unsigned long pc)
{
	struct perf_callchain_entry_ctx *entry = data;

-	perf_callchain_store(entry, frame->pc);
-	return 0;
+	perf_callchain_store(entry, pc);
+	return true;
}

void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
......
@@ -16,7 +16,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
	/*
	 * Our handling of compat tasks (PERF_SAMPLE_REGS_ABI_32) is weird, but
-	 * we're stuck with it for ABI compatability reasons.
+	 * we're stuck with it for ABI compatibility reasons.
	 *
	 * For a 32-bit consumer inspecting a 32-bit task, then it will look at
	 * the first 16 registers (see arch/arm/include/uapi/asm/perf_regs.h).
......
...@@ -29,7 +29,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn) ...@@ -29,7 +29,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
aarch64_insn_is_msr_imm(insn) || aarch64_insn_is_msr_imm(insn) ||
aarch64_insn_is_msr_reg(insn) || aarch64_insn_is_msr_reg(insn) ||
aarch64_insn_is_exception(insn) || aarch64_insn_is_exception(insn) ||
aarch64_insn_is_eret(insn)) aarch64_insn_is_eret(insn) ||
aarch64_insn_is_eret_auth(insn))
return false; return false;
/* /*
...@@ -42,8 +43,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn) ...@@ -42,8 +43,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
!= AARCH64_INSN_SPCLREG_DAIF; != AARCH64_INSN_SPCLREG_DAIF;
/* /*
* The HINT instruction is is problematic when single-stepping, * The HINT instruction is steppable only if it is in whitelist
* except for the NOP case. * and the rest of other such instructions are blocked for
* single stepping as they may cause exception or other
* unintended behaviour.
*/ */
if (aarch64_insn_is_hint(insn)) if (aarch64_insn_is_hint(insn))
return aarch64_insn_is_steppable_hint(insn); return aarch64_insn_is_steppable_hint(insn);
......
...@@ -36,18 +36,6 @@ SYM_CODE_START(arm64_relocate_new_kernel) ...@@ -36,18 +36,6 @@ SYM_CODE_START(arm64_relocate_new_kernel)
mov x14, xzr /* x14 = entry ptr */ mov x14, xzr /* x14 = entry ptr */
mov x13, xzr /* x13 = copy dest */ mov x13, xzr /* x13 = copy dest */
/* Clear the sctlr_el2 flags. */
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
b.ne 1f
mrs x0, sctlr_el2
mov_q x1, SCTLR_ELx_FLAGS
bic x0, x0, x1
pre_disable_mmu_workaround
msr sctlr_el2, x0
isb
1:
/* Check if the new image needs relocation. */ /* Check if the new image needs relocation. */
tbnz x16, IND_DONE_BIT, .Ldone tbnz x16, IND_DONE_BIT, .Ldone
......
@@ -18,16 +18,16 @@ struct return_address_data {
	void *addr;
};

-static int save_return_addr(struct stackframe *frame, void *d)
+static bool save_return_addr(void *d, unsigned long pc)
{
	struct return_address_data *data = d;

	if (!data->level) {
-		data->addr = (void *)frame->pc;
-		return 1;
+		data->addr = (void *)pc;
+		return false;
	} else {
		--data->level;
-		return 0;
+		return true;
	}
}
NOKPROBE_SYMBOL(save_return_addr);
......
...@@ -244,7 +244,8 @@ static int preserve_sve_context(struct sve_context __user *ctx) ...@@ -244,7 +244,8 @@ static int preserve_sve_context(struct sve_context __user *ctx)
if (vq) { if (vq) {
/* /*
* This assumes that the SVE state has already been saved to * This assumes that the SVE state has already been saved to
* the task struct by calling preserve_fpsimd_context(). * the task struct by calling the function
* fpsimd_signal_preserve_current_state().
*/ */
err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET, err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
current->thread.sve_state, current->thread.sve_state,
......
@@ -83,9 +83,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
	/*
	 * We write the release address as LE regardless of the native
-	 * endianess of the kernel. Therefore, any boot-loaders that
+	 * endianness of the kernel. Therefore, any boot-loaders that
	 * read this address need to convert this address to the
-	 * boot-loader's endianess before jumping. This is mandated by
+	 * boot-loader's endianness before jumping. This is mandated by
	 * the boot protocol.
	 */
	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
......
@@ -118,12 +118,12 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
NOKPROBE_SYMBOL(unwind_frame);

void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			     int (*fn)(struct stackframe *, void *), void *data)
+			     bool (*fn)(void *, unsigned long), void *data)
{
	while (1) {
		int ret;

-		if (fn(frame, data))
+		if (!fn(data, frame->pc))
			break;
		ret = unwind_frame(tsk, frame);
		if (ret < 0)
@@ -132,84 +132,89 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
	}
}
NOKPROBE_SYMBOL(walk_stackframe);
#ifdef CONFIG_STACKTRACE static void dump_backtrace_entry(unsigned long where, const char *loglvl)
struct stack_trace_data {
struct stack_trace *trace;
unsigned int no_sched_functions;
unsigned int skip;
};
static int save_trace(struct stackframe *frame, void *d)
{ {
struct stack_trace_data *data = d; printk("%s %pS\n", loglvl, (void *)where);
struct stack_trace *trace = data->trace;
unsigned long addr = frame->pc;
if (data->no_sched_functions && in_sched_functions(addr))
return 0;
if (data->skip) {
data->skip--;
return 0;
}
trace->entries[trace->nr_entries++] = addr;
return trace->nr_entries >= trace->max_entries;
} }
void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace) void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl)
{ {
struct stack_trace_data data;
struct stackframe frame; struct stackframe frame;
int skip = 0;
data.trace = trace; pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
data.skip = trace->skip;
data.no_sched_functions = 0;
start_backtrace(&frame, regs->regs[29], regs->pc); if (regs) {
walk_stackframe(current, &frame, save_trace, &data); if (user_mode(regs))
} return;
EXPORT_SYMBOL_GPL(save_stack_trace_regs); skip = 1;
}
static noinline void __save_stack_trace(struct task_struct *tsk, if (!tsk)
struct stack_trace *trace, unsigned int nosched) tsk = current;
{
struct stack_trace_data data;
struct stackframe frame;
if (!try_get_task_stack(tsk)) if (!try_get_task_stack(tsk))
return; return;
data.trace = trace; if (tsk == current) {
data.skip = trace->skip;
data.no_sched_functions = nosched;
if (tsk != current) {
start_backtrace(&frame, thread_saved_fp(tsk),
thread_saved_pc(tsk));
} else {
/* We don't want this function nor the caller */
data.skip += 2;
start_backtrace(&frame, start_backtrace(&frame,
(unsigned long)__builtin_frame_address(0), (unsigned long)__builtin_frame_address(0),
(unsigned long)__save_stack_trace); (unsigned long)dump_backtrace);
} else {
/*
* task blocked in __switch_to
*/
start_backtrace(&frame,
thread_saved_fp(tsk),
thread_saved_pc(tsk));
} }
walk_stackframe(tsk, &frame, save_trace, &data); printk("%sCall trace:\n", loglvl);
do {
/* skip until specified stack frame */
if (!skip) {
dump_backtrace_entry(frame.pc, loglvl);
} else if (frame.fp == regs->regs[29]) {
skip = 0;
/*
* Mostly, this is the case where this function is
* called in panic/abort. As exception handler's
* stack frame does not contain the corresponding pc
* at which an exception has taken place, use regs->pc
* instead.
*/
dump_backtrace_entry(regs->pc, loglvl);
}
} while (!unwind_frame(tsk, &frame));
put_task_stack(tsk); put_task_stack(tsk);
} }
void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
{ {
__save_stack_trace(tsk, trace, 1); dump_backtrace(NULL, tsk, loglvl);
barrier();
} }
EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
void save_stack_trace(struct stack_trace *trace) #ifdef CONFIG_STACKTRACE
void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
struct task_struct *task, struct pt_regs *regs)
{ {
__save_stack_trace(current, trace, 0); struct stackframe frame;
if (regs)
start_backtrace(&frame, regs->regs[29], regs->pc);
else if (task == current)
start_backtrace(&frame,
(unsigned long)__builtin_frame_address(0),
(unsigned long)arch_stack_walk);
else
start_backtrace(&frame, thread_saved_fp(task),
thread_saved_pc(task));
walk_stackframe(task, &frame, consume_entry, cookie);
} }
EXPORT_SYMBOL_GPL(save_stack_trace);
#endif #endif
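The stacktrace, return_address and perf callchain hunks all converge on the same callback contract. As a hedged, self-contained userspace sketch of that contract (not the kernel API; the walker here just iterates a pre-recorded list of PCs): the consumer is a bool (*fn)(void *cookie, unsigned long pc) and returning false stops the walk.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef bool (*consume_fn)(void *cookie, unsigned long pc);

/* Stand-in for walk_stackframe(): feed each PC to the consumer. */
static void walk(const unsigned long *pcs, size_t n,
		 consume_fn fn, void *cookie)
{
	for (size_t i = 0; i < n; i++)
		if (!fn(cookie, pcs[i]))	/* false == stop walking */
			break;
}

struct return_address_data { unsigned int level; unsigned long addr; };

/* Mirrors the reworked save_return_addr(): skip 'level' frames, keep one. */
static bool save_return_addr(void *d, unsigned long pc)
{
	struct return_address_data *data = d;

	if (!data->level) {
		data->addr = pc;
		return false;		/* found it, stop */
	}
	data->level--;
	return true;			/* keep walking */
}

int main(void)
{
	unsigned long pcs[] = { 0x1000, 0x2000, 0x3000, 0x4000 };
	struct return_address_data data = { .level = 2 };

	walk(pcs, 4, save_return_addr, &data);
	printf("caller's caller: %#lx\n", data.addr);	/* 0x3000 */
	return 0;
}

Inverting the return value (true means "continue") is what lets the same consumer be passed straight to the generic arch_stack_walk() interface.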
@@ -36,21 +36,23 @@ void store_cpu_topology(unsigned int cpuid)
	if (mpidr & MPIDR_UP_BITMASK)
		return;

-	/* Create cpu topology mapping based on MPIDR. */
-	if (mpidr & MPIDR_MT_BITMASK) {
-		/* Multiprocessor system : Multi-threads per core */
-		cpuid_topo->thread_id  = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-		cpuid_topo->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-		cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 2) |
-					 MPIDR_AFFINITY_LEVEL(mpidr, 3) << 8;
-	} else {
-		/* Multiprocessor system : Single-thread per core */
-		cpuid_topo->thread_id  = -1;
-		cpuid_topo->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-		cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 1) |
-					 MPIDR_AFFINITY_LEVEL(mpidr, 2) << 8 |
-					 MPIDR_AFFINITY_LEVEL(mpidr, 3) << 16;
-	}
+	/*
+	 * This would be the place to create cpu topology based on MPIDR.
+	 *
+	 * However, it cannot be trusted to depict the actual topology; some
+	 * pieces of the architecture enforce an artificial cap on Aff0 values
+	 * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an
+	 * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up
+	 * having absolutely no relationship to the actual underlying system
+	 * topology, and cannot be reasonably used as core / package ID.
+	 *
+	 * If the MT bit is set, Aff0 *could* be used to define a thread ID, but
+	 * we still wouldn't be able to obtain a sane core ID. This means we
+	 * need to entirely ignore MPIDR for any topology deduction.
+	 */
+	cpuid_topo->thread_id  = -1;
+	cpuid_topo->core_id    = cpuid;
+	cpuid_topo->package_id = cpu_to_node(cpuid);

	pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
		 cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
......
...@@ -34,6 +34,7 @@ ...@@ -34,6 +34,7 @@
#include <asm/daifflags.h> #include <asm/daifflags.h>
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/esr.h> #include <asm/esr.h>
#include <asm/extable.h>
#include <asm/insn.h> #include <asm/insn.h>
#include <asm/kprobes.h> #include <asm/kprobes.h>
#include <asm/traps.h> #include <asm/traps.h>
...@@ -53,11 +54,6 @@ static const char *handler[]= { ...@@ -53,11 +54,6 @@ static const char *handler[]= {
int show_unhandled_signals = 0; int show_unhandled_signals = 0;
static void dump_backtrace_entry(unsigned long where, const char *loglvl)
{
printk("%s %pS\n", loglvl, (void *)where);
}
static void dump_kernel_instr(const char *lvl, struct pt_regs *regs) static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
{ {
unsigned long addr = instruction_pointer(regs); unsigned long addr = instruction_pointer(regs);
...@@ -83,66 +79,6 @@ static void dump_kernel_instr(const char *lvl, struct pt_regs *regs) ...@@ -83,66 +79,6 @@ static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
printk("%sCode: %s\n", lvl, str); printk("%sCode: %s\n", lvl, str);
} }
void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl)
{
struct stackframe frame;
int skip = 0;
pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
if (regs) {
if (user_mode(regs))
return;
skip = 1;
}
if (!tsk)
tsk = current;
if (!try_get_task_stack(tsk))
return;
if (tsk == current) {
start_backtrace(&frame,
(unsigned long)__builtin_frame_address(0),
(unsigned long)dump_backtrace);
} else {
/*
* task blocked in __switch_to
*/
start_backtrace(&frame,
thread_saved_fp(tsk),
thread_saved_pc(tsk));
}
printk("%sCall trace:\n", loglvl);
do {
/* skip until specified stack frame */
if (!skip) {
dump_backtrace_entry(frame.pc, loglvl);
} else if (frame.fp == regs->regs[29]) {
skip = 0;
/*
* Mostly, this is the case where this function is
* called in panic/abort. As exception handler's
* stack frame does not contain the corresponding pc
* at which an exception has taken place, use regs->pc
* instead.
*/
dump_backtrace_entry(regs->pc, loglvl);
}
} while (!unwind_frame(tsk, &frame));
put_task_stack(tsk);
}
void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
{
dump_backtrace(NULL, tsk, loglvl);
barrier();
}
#ifdef CONFIG_PREEMPT #ifdef CONFIG_PREEMPT
#define S_PREEMPT " PREEMPT" #define S_PREEMPT " PREEMPT"
#elif defined(CONFIG_PREEMPT_RT) #elif defined(CONFIG_PREEMPT_RT)
@@ -200,9 +136,9 @@ void die(const char *str, struct pt_regs *regs, int err)
	oops_exit();

	if (in_interrupt())
-		panic("Fatal exception in interrupt");
+		panic("%s: Fatal exception in interrupt", str);
	if (panic_on_oops)
-		panic("Fatal exception");
+		panic("%s: Fatal exception", str);
	raw_spin_unlock_irqrestore(&die_lock, flags);
@@ -412,7 +348,7 @@ static int call_undef_hook(struct pt_regs *regs)
return fn ? fn(regs, instr) : 1; return fn ? fn(regs, instr) : 1;
} }
void force_signal_inject(int signal, int code, unsigned long address) void force_signal_inject(int signal, int code, unsigned long address, unsigned int err)
{ {
const char *desc; const char *desc;
struct pt_regs *regs = current_pt_regs(); struct pt_regs *regs = current_pt_regs();
...@@ -438,7 +374,7 @@ void force_signal_inject(int signal, int code, unsigned long address) ...@@ -438,7 +374,7 @@ void force_signal_inject(int signal, int code, unsigned long address)
signal = SIGKILL; signal = SIGKILL;
} }
arm64_notify_die(desc, regs, signal, code, (void __user *)address, 0); arm64_notify_die(desc, regs, signal, code, (void __user *)address, err);
} }
/* /*
...@@ -455,7 +391,7 @@ void arm64_notify_segfault(unsigned long addr) ...@@ -455,7 +391,7 @@ void arm64_notify_segfault(unsigned long addr)
code = SEGV_ACCERR; code = SEGV_ACCERR;
mmap_read_unlock(current->mm); mmap_read_unlock(current->mm);
force_signal_inject(SIGSEGV, code, addr); force_signal_inject(SIGSEGV, code, addr, 0);
} }
void do_undefinstr(struct pt_regs *regs) void do_undefinstr(struct pt_regs *regs)
...@@ -468,17 +404,28 @@ void do_undefinstr(struct pt_regs *regs) ...@@ -468,17 +404,28 @@ void do_undefinstr(struct pt_regs *regs)
return; return;
BUG_ON(!user_mode(regs)); BUG_ON(!user_mode(regs));
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
} }
NOKPROBE_SYMBOL(do_undefinstr); NOKPROBE_SYMBOL(do_undefinstr);
void do_bti(struct pt_regs *regs) void do_bti(struct pt_regs *regs)
{ {
BUG_ON(!user_mode(regs)); BUG_ON(!user_mode(regs));
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
} }
NOKPROBE_SYMBOL(do_bti); NOKPROBE_SYMBOL(do_bti);
void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
{
/*
* Unexpected FPAC exception or pointer authentication failure in
* the kernel: kill the task before it does any more harm.
*/
BUG_ON(!user_mode(regs));
force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
}
NOKPROBE_SYMBOL(do_ptrauth_fault);
#define __user_cache_maint(insn, address, res) \ #define __user_cache_maint(insn, address, res) \
if (address >= user_addr_max()) { \ if (address >= user_addr_max()) { \
res = -EFAULT; \ res = -EFAULT; \
...@@ -528,7 +475,7 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs) ...@@ -528,7 +475,7 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
__user_cache_maint("ic ivau", address, ret); __user_cache_maint("ic ivau", address, ret);
break; break;
default: default:
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
return; return;
} }
...@@ -581,7 +528,7 @@ static void mrs_handler(unsigned int esr, struct pt_regs *regs) ...@@ -581,7 +528,7 @@ static void mrs_handler(unsigned int esr, struct pt_regs *regs)
sysreg = esr_sys64_to_sysreg(esr); sysreg = esr_sys64_to_sysreg(esr);
if (do_emulate_mrs(regs, sysreg, rt) != 0) if (do_emulate_mrs(regs, sysreg, rt) != 0)
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
} }
static void wfi_handler(unsigned int esr, struct pt_regs *regs) static void wfi_handler(unsigned int esr, struct pt_regs *regs)
...@@ -775,6 +722,7 @@ static const char *esr_class_str[] = { ...@@ -775,6 +722,7 @@ static const char *esr_class_str[] = {
[ESR_ELx_EC_SYS64] = "MSR/MRS (AArch64)", [ESR_ELx_EC_SYS64] = "MSR/MRS (AArch64)",
[ESR_ELx_EC_SVE] = "SVE", [ESR_ELx_EC_SVE] = "SVE",
[ESR_ELx_EC_ERET] = "ERET/ERETAA/ERETAB", [ESR_ELx_EC_ERET] = "ERET/ERETAA/ERETAB",
[ESR_ELx_EC_FPAC] = "FPAC",
[ESR_ELx_EC_IMP_DEF] = "EL3 IMP DEF", [ESR_ELx_EC_IMP_DEF] = "EL3 IMP DEF",
[ESR_ELx_EC_IABT_LOW] = "IABT (lower EL)", [ESR_ELx_EC_IABT_LOW] = "IABT (lower EL)",
[ESR_ELx_EC_IABT_CUR] = "IABT (current EL)", [ESR_ELx_EC_IABT_CUR] = "IABT (current EL)",
...@@ -935,26 +883,6 @@ asmlinkage void enter_from_user_mode(void) ...@@ -935,26 +883,6 @@ asmlinkage void enter_from_user_mode(void)
} }
NOKPROBE_SYMBOL(enter_from_user_mode); NOKPROBE_SYMBOL(enter_from_user_mode);
void __pte_error(const char *file, int line, unsigned long val)
{
pr_err("%s:%d: bad pte %016lx.\n", file, line, val);
}
void __pmd_error(const char *file, int line, unsigned long val)
{
pr_err("%s:%d: bad pmd %016lx.\n", file, line, val);
}
void __pud_error(const char *file, int line, unsigned long val)
{
pr_err("%s:%d: bad pud %016lx.\n", file, line, val);
}
void __pgd_error(const char *file, int line, unsigned long val)
{
pr_err("%s:%d: bad pgd %016lx.\n", file, line, val);
}
/* GENERIC_BUG traps */ /* GENERIC_BUG traps */
int is_valid_bugaddr(unsigned long addr) int is_valid_bugaddr(unsigned long addr)
...@@ -994,6 +922,21 @@ static struct break_hook bug_break_hook = { ...@@ -994,6 +922,21 @@ static struct break_hook bug_break_hook = {
.imm = BUG_BRK_IMM, .imm = BUG_BRK_IMM,
}; };
static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
{
pr_err("%s generated an invalid instruction at %pS!\n",
in_bpf_jit(regs) ? "BPF JIT" : "Kernel text patching",
(void *)instruction_pointer(regs));
/* We cannot handle this */
return DBG_HOOK_ERROR;
}
static struct break_hook fault_break_hook = {
.fn = reserved_fault_handler,
.imm = FAULT_BRK_IMM,
};
#ifdef CONFIG_KASAN_SW_TAGS #ifdef CONFIG_KASAN_SW_TAGS
#define KASAN_ESR_RECOVER 0x20 #define KASAN_ESR_RECOVER 0x20
...@@ -1059,6 +1002,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr, ...@@ -1059,6 +1002,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
void __init trap_init(void) void __init trap_init(void)
{ {
register_kernel_break_hook(&bug_break_hook); register_kernel_break_hook(&bug_break_hook);
register_kernel_break_hook(&fault_break_hook);
#ifdef CONFIG_KASAN_SW_TAGS #ifdef CONFIG_KASAN_SW_TAGS
register_kernel_break_hook(&kasan_break_hook); register_kernel_break_hook(&kasan_break_hook);
#endif #endif
......
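The new fault_break_hook follows the same registration-by-immediate pattern as the other BRK hooks. A simplified userspace sketch of that dispatch scheme (hypothetical names and values, not the kernel's implementation):

#include <stdio.h>

#define DBG_HOOK_HANDLED 0
#define DBG_HOOK_ERROR   1

struct break_hook {
	int (*fn)(unsigned int esr);
	unsigned int imm;		/* BRK #imm this hook claims */
	struct break_hook *next;
};

static struct break_hook *hooks;	/* one list; the kernel keeps more */

static void register_break_hook(struct break_hook *h)
{
	h->next = hooks;
	hooks = h;
}

static int dispatch_brk(unsigned int imm, unsigned int esr)
{
	for (struct break_hook *h = hooks; h; h = h->next)
		if (h->imm == imm)
			return h->fn(esr);
	return DBG_HOOK_ERROR;		/* nobody claimed this immediate */
}

static int fault_handler(unsigned int esr)
{
	printf("unexpected BRK hit, esr=%#x\n", esr);
	return DBG_HOOK_ERROR;		/* cannot recover, like the real hook */
}

static struct break_hook fault_hook = { .fn = fault_handler, .imm = 0x100 };

int main(void)
{
	register_break_hook(&fault_hook);
	printf("rc=%d\n", dispatch_brk(0x100, 0xf2000100));
	return 0;
}

Registering a handler for FAULT_BRK_IMM is what turns a trapped "garbage" BRK from the BPF JIT or text patching into a readable diagnostic instead of an unexplained oops.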
...@@ -30,15 +30,11 @@ ...@@ -30,15 +30,11 @@
#include <asm/vdso.h> #include <asm/vdso.h>
extern char vdso_start[], vdso_end[]; extern char vdso_start[], vdso_end[];
#ifdef CONFIG_COMPAT_VDSO
extern char vdso32_start[], vdso32_end[]; extern char vdso32_start[], vdso32_end[];
#endif /* CONFIG_COMPAT_VDSO */
enum vdso_abi { enum vdso_abi {
VDSO_ABI_AA64, VDSO_ABI_AA64,
#ifdef CONFIG_COMPAT_VDSO
VDSO_ABI_AA32, VDSO_ABI_AA32,
#endif /* CONFIG_COMPAT_VDSO */
}; };
enum vvar_pages { enum vvar_pages {
...@@ -284,21 +280,17 @@ static int __setup_additional_pages(enum vdso_abi abi, ...@@ -284,21 +280,17 @@ static int __setup_additional_pages(enum vdso_abi abi,
/* /*
* Create and map the vectors page for AArch32 tasks. * Create and map the vectors page for AArch32 tasks.
*/ */
#ifdef CONFIG_COMPAT_VDSO
static int aarch32_vdso_mremap(const struct vm_special_mapping *sm, static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma) struct vm_area_struct *new_vma)
{ {
return __vdso_remap(VDSO_ABI_AA32, sm, new_vma); return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
} }
#endif /* CONFIG_COMPAT_VDSO */
enum aarch32_map { enum aarch32_map {
AA32_MAP_VECTORS, /* kuser helpers */ AA32_MAP_VECTORS, /* kuser helpers */
#ifdef CONFIG_COMPAT_VDSO AA32_MAP_SIGPAGE,
AA32_MAP_VVAR, AA32_MAP_VVAR,
AA32_MAP_VDSO, AA32_MAP_VDSO,
#endif
AA32_MAP_SIGPAGE
}; };
static struct page *aarch32_vectors_page __ro_after_init; static struct page *aarch32_vectors_page __ro_after_init;
...@@ -309,7 +301,10 @@ static struct vm_special_mapping aarch32_vdso_maps[] = { ...@@ -309,7 +301,10 @@ static struct vm_special_mapping aarch32_vdso_maps[] = {
.name = "[vectors]", /* ABI */ .name = "[vectors]", /* ABI */
.pages = &aarch32_vectors_page, .pages = &aarch32_vectors_page,
}, },
#ifdef CONFIG_COMPAT_VDSO [AA32_MAP_SIGPAGE] = {
.name = "[sigpage]", /* ABI */
.pages = &aarch32_sig_page,
},
[AA32_MAP_VVAR] = { [AA32_MAP_VVAR] = {
.name = "[vvar]", .name = "[vvar]",
.fault = vvar_fault, .fault = vvar_fault,
...@@ -319,11 +314,6 @@ static struct vm_special_mapping aarch32_vdso_maps[] = { ...@@ -319,11 +314,6 @@ static struct vm_special_mapping aarch32_vdso_maps[] = {
.name = "[vdso]", .name = "[vdso]",
.mremap = aarch32_vdso_mremap, .mremap = aarch32_vdso_mremap,
}, },
#endif /* CONFIG_COMPAT_VDSO */
[AA32_MAP_SIGPAGE] = {
.name = "[sigpage]", /* ABI */
.pages = &aarch32_sig_page,
},
}; };
static int aarch32_alloc_kuser_vdso_page(void) static int aarch32_alloc_kuser_vdso_page(void)
...@@ -362,25 +352,25 @@ static int aarch32_alloc_sigpage(void) ...@@ -362,25 +352,25 @@ static int aarch32_alloc_sigpage(void)
return 0; return 0;
} }
#ifdef CONFIG_COMPAT_VDSO
static int __aarch32_alloc_vdso_pages(void) static int __aarch32_alloc_vdso_pages(void)
{ {
if (!IS_ENABLED(CONFIG_COMPAT_VDSO))
return 0;
vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_maps[AA32_MAP_VVAR]; vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_maps[AA32_MAP_VVAR];
vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_maps[AA32_MAP_VDSO]; vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_maps[AA32_MAP_VDSO];
return __vdso_init(VDSO_ABI_AA32); return __vdso_init(VDSO_ABI_AA32);
} }
#endif /* CONFIG_COMPAT_VDSO */
static int __init aarch32_alloc_vdso_pages(void) static int __init aarch32_alloc_vdso_pages(void)
{ {
int ret; int ret;
#ifdef CONFIG_COMPAT_VDSO
ret = __aarch32_alloc_vdso_pages(); ret = __aarch32_alloc_vdso_pages();
if (ret) if (ret)
return ret; return ret;
#endif
ret = aarch32_alloc_sigpage(); ret = aarch32_alloc_sigpage();
if (ret) if (ret)
...@@ -449,14 +439,12 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) ...@@ -449,14 +439,12 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (ret) if (ret)
goto out; goto out;
#ifdef CONFIG_COMPAT_VDSO if (IS_ENABLED(CONFIG_COMPAT_VDSO)) {
ret = __setup_additional_pages(VDSO_ABI_AA32, ret = __setup_additional_pages(VDSO_ABI_AA32, mm, bprm,
mm,
bprm,
uses_interp); uses_interp);
if (ret) if (ret)
goto out; goto out;
#endif /* CONFIG_COMPAT_VDSO */ }
ret = aarch32_sigreturn_setup(mm); ret = aarch32_sigreturn_setup(mm);
out: out:
...@@ -497,8 +485,7 @@ static int __init vdso_init(void) ...@@ -497,8 +485,7 @@ static int __init vdso_init(void)
} }
arch_initcall(vdso_init); arch_initcall(vdso_init);
int arch_setup_additional_pages(struct linux_binprm *bprm, int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
int uses_interp)
{ {
struct mm_struct *mm = current->mm; struct mm_struct *mm = current->mm;
int ret; int ret;
...@@ -506,11 +493,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, ...@@ -506,11 +493,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
if (mmap_write_lock_killable(mm)) if (mmap_write_lock_killable(mm))
return -EINTR; return -EINTR;
ret = __setup_additional_pages(VDSO_ABI_AA64, ret = __setup_additional_pages(VDSO_ABI_AA64, mm, bprm, uses_interp);
mm,
bprm,
uses_interp);
mmap_write_unlock(mm); mmap_write_unlock(mm);
return ret; return ret;
......
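The vdso rework above swaps most #ifdef CONFIG_COMPAT_VDSO blocks for IS_ENABLED() checks so the compiler still parses the compat paths and then discards them. A hedged userspace approximation of that idiom (the real kconfig IS_ENABLED() macro is more involved than this toy):

#include <stdio.h>

/* Toy stand-in for the kconfig macro: 1 if the option is set, else 0. */
#define CONFIG_COMPAT_VDSO_ENABLED 0
#define IS_ENABLED(x) (x)

static int setup_compat_vdso(void)
{
	printf("mapping compat vDSO\n");
	return 0;
}

int main(void)
{
	/*
	 * The condition is a compile-time constant, so the dead branch is
	 * dropped by the optimiser -- but it is still type-checked, which is
	 * the usual reason for preferring this over #ifdef.
	 */
	if (IS_ENABLED(CONFIG_COMPAT_VDSO_ENABLED)) {
		if (setup_compat_vdso())
			return 1;
	}
	printf("done\n");
	return 0;
}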
...@@ -105,7 +105,7 @@ SECTIONS ...@@ -105,7 +105,7 @@ SECTIONS
*(.eh_frame) *(.eh_frame)
} }
. = KIMAGE_VADDR + TEXT_OFFSET; . = KIMAGE_VADDR;
.head.text : { .head.text : {
_text = .; _text = .;
...@@ -274,4 +274,4 @@ ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE, ...@@ -274,4 +274,4 @@ ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
/* /*
* If padding is applied before .head.text, virt<->phys conversions will fail. * If padding is applied before .head.text, virt<->phys conversions will fail.
*/ */
ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned") ASSERT(_text == KIMAGE_VADDR, "HEAD is misaligned")
...@@ -269,6 +269,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) ...@@ -269,6 +269,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
kvm_pmu_release_perf_event(&pmu->pmc[i]); kvm_pmu_release_perf_event(&pmu->pmc[i]);
irq_work_sync(&vcpu->arch.pmu.overflow_work);
} }
u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
...@@ -433,6 +434,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) ...@@ -433,6 +434,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
kvm_pmu_update_state(vcpu); kvm_pmu_update_state(vcpu);
} }
/**
* When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
* to the event.
* This is why we need a callback to do it once outside of the NMI context.
*/
static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
{
struct kvm_vcpu *vcpu;
struct kvm_pmu *pmu;
pmu = container_of(work, struct kvm_pmu, overflow_work);
vcpu = kvm_pmc_to_vcpu(pmu->pmc);
kvm_vcpu_kick(vcpu);
}
/** /**
* When the perf event overflows, set the overflow status and inform the vcpu. * When the perf event overflows, set the overflow status and inform the vcpu.
*/ */
...@@ -465,7 +482,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event, ...@@ -465,7 +482,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
if (kvm_pmu_overflow_status(vcpu)) { if (kvm_pmu_overflow_status(vcpu)) {
kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu); kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
if (!in_nmi())
kvm_vcpu_kick(vcpu); kvm_vcpu_kick(vcpu);
else
irq_work_queue(&vcpu->arch.pmu.overflow_work);
} }
cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD); cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
...@@ -764,6 +785,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) ...@@ -764,6 +785,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
return ret; return ret;
} }
init_irq_work(&vcpu->arch.pmu.overflow_work,
kvm_pmu_perf_overflow_notify_vcpu);
vcpu->arch.pmu.created = true; vcpu->arch.pmu.created = true;
return 0; return 0;
} }
......
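The PMU change boils down to: when the overflow handler runs in NMI context it must not kick the vcpu directly, so it queues an irq_work and the kick happens later from a safe context. A loose userspace model of that deferral (illustrative only; there are no real NMIs or irq_work here, and the flag below stands in for irq_work_queue()):

#include <stdbool.h>
#include <stdio.h>

struct vcpu { int id; bool kick_pending; };

static void kvm_vcpu_kick(struct vcpu *v)
{
	printf("kicking vcpu %d\n", v->id);
}

/* Called from the PMU overflow path; 'in_nmi' models in_nmi(). */
static void pmu_overflow(struct vcpu *v, bool in_nmi)
{
	if (!in_nmi)
		kvm_vcpu_kick(v);	/* safe: ordinary interrupt context */
	else
		v->kick_pending = true;	/* defer: models irq_work_queue() */
}

/* Runs later, outside NMI context; models the irq_work callback. */
static void overflow_work(struct vcpu *v)
{
	if (v->kick_pending) {
		v->kick_pending = false;
		kvm_vcpu_kick(v);
	}
}

int main(void)
{
	struct vcpu v = { .id = 0 };

	pmu_overflow(&v, false);	/* kicked immediately */
	pmu_overflow(&v, true);		/* only queued */
	overflow_work(&v);		/* kicked now, from a safe context */
	return 0;
}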
@@ -1001,8 +1001,8 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
		raw_spin_lock_irqsave(&irq->irq_lock, flags);

		/*
-		 * An access targetting Group0 SGIs can only generate
-		 * those, while an access targetting Group1 SGIs can
+		 * An access targeting Group0 SGIs can only generate
+		 * those, while an access targeting Group1 SGIs can
		 * generate interrupts of either group.
		 */
		if (!irq->group || allow_group1) {
......
...@@ -4,7 +4,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ ...@@ -4,7 +4,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \
ioremap.o mmap.o pgd.o mmu.o \ ioremap.o mmap.o pgd.o mmu.o \
context.o proc.o pageattr.o context.o proc.o pageattr.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_PTDUMP_CORE) += dump.o obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o
obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_NUMA) += numa.o
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
......
...@@ -27,6 +27,10 @@ static DEFINE_PER_CPU(atomic64_t, active_asids); ...@@ -27,6 +27,10 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
static DEFINE_PER_CPU(u64, reserved_asids); static DEFINE_PER_CPU(u64, reserved_asids);
static cpumask_t tlb_flush_pending; static cpumask_t tlb_flush_pending;
static unsigned long max_pinned_asids;
static unsigned long nr_pinned_asids;
static unsigned long *pinned_asid_map;
#define ASID_MASK (~GENMASK(asid_bits - 1, 0)) #define ASID_MASK (~GENMASK(asid_bits - 1, 0))
#define ASID_FIRST_VERSION (1UL << asid_bits) #define ASID_FIRST_VERSION (1UL << asid_bits)
...@@ -72,7 +76,7 @@ void verify_cpu_asid_bits(void) ...@@ -72,7 +76,7 @@ void verify_cpu_asid_bits(void)
} }
} }
static void set_kpti_asid_bits(void) static void set_kpti_asid_bits(unsigned long *map)
{ {
unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long); unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long);
/* /*
...@@ -81,13 +85,15 @@ static void set_kpti_asid_bits(void) ...@@ -81,13 +85,15 @@ static void set_kpti_asid_bits(void)
* is set, then the ASID will map only userspace. Thus * is set, then the ASID will map only userspace. Thus
* mark even as reserved for kernel. * mark even as reserved for kernel.
*/ */
memset(asid_map, 0xaa, len); memset(map, 0xaa, len);
} }
static void set_reserved_asid_bits(void) static void set_reserved_asid_bits(void)
{ {
if (arm64_kernel_unmapped_at_el0()) if (pinned_asid_map)
set_kpti_asid_bits(); bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);
else if (arm64_kernel_unmapped_at_el0())
set_kpti_asid_bits(asid_map);
else else
bitmap_clear(asid_map, 0, NUM_USER_ASIDS); bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
} }
...@@ -165,6 +171,14 @@ static u64 new_context(struct mm_struct *mm) ...@@ -165,6 +171,14 @@ static u64 new_context(struct mm_struct *mm)
if (check_update_reserved_asid(asid, newasid)) if (check_update_reserved_asid(asid, newasid))
return newasid; return newasid;
/*
* If it is pinned, we can keep using it. Note that reserved
* takes priority, because even if it is also pinned, we need to
* update the generation into the reserved_asids.
*/
if (refcount_read(&mm->context.pinned))
return newasid;
/* /*
* We had a valid ASID in a previous life, so try to re-use * We had a valid ASID in a previous life, so try to re-use
* it if possible. * it if possible.
...@@ -256,6 +270,71 @@ void check_and_switch_context(struct mm_struct *mm) ...@@ -256,6 +270,71 @@ void check_and_switch_context(struct mm_struct *mm)
cpu_switch_mm(mm->pgd, mm); cpu_switch_mm(mm->pgd, mm);
} }
unsigned long arm64_mm_context_get(struct mm_struct *mm)
{
unsigned long flags;
u64 asid;
if (!pinned_asid_map)
return 0;
raw_spin_lock_irqsave(&cpu_asid_lock, flags);
asid = atomic64_read(&mm->context.id);
if (refcount_inc_not_zero(&mm->context.pinned))
goto out_unlock;
if (nr_pinned_asids >= max_pinned_asids) {
asid = 0;
goto out_unlock;
}
if (!asid_gen_match(asid)) {
/*
* We went through one or more rollover since that ASID was
* used. Ensure that it is still valid, or generate a new one.
*/
asid = new_context(mm);
atomic64_set(&mm->context.id, asid);
}
nr_pinned_asids++;
__set_bit(asid2idx(asid), pinned_asid_map);
refcount_set(&mm->context.pinned, 1);
out_unlock:
raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
asid &= ~ASID_MASK;
/* Set the equivalent of USER_ASID_BIT */
if (asid && arm64_kernel_unmapped_at_el0())
asid |= 1;
return asid;
}
EXPORT_SYMBOL_GPL(arm64_mm_context_get);
void arm64_mm_context_put(struct mm_struct *mm)
{
unsigned long flags;
u64 asid = atomic64_read(&mm->context.id);
if (!pinned_asid_map)
return;
raw_spin_lock_irqsave(&cpu_asid_lock, flags);
if (refcount_dec_and_test(&mm->context.pinned)) {
__clear_bit(asid2idx(asid), pinned_asid_map);
nr_pinned_asids--;
}
raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
}
EXPORT_SYMBOL_GPL(arm64_mm_context_put);
/* Errata workaround post TTBRx_EL1 update. */ /* Errata workaround post TTBRx_EL1 update. */
asmlinkage void post_ttbr_update_workaround(void) asmlinkage void post_ttbr_update_workaround(void)
{ {
...@@ -296,8 +375,11 @@ static int asids_update_limit(void) ...@@ -296,8 +375,11 @@ static int asids_update_limit(void)
{ {
unsigned long num_available_asids = NUM_USER_ASIDS; unsigned long num_available_asids = NUM_USER_ASIDS;
if (arm64_kernel_unmapped_at_el0()) if (arm64_kernel_unmapped_at_el0()) {
num_available_asids /= 2; num_available_asids /= 2;
if (pinned_asid_map)
set_kpti_asid_bits(pinned_asid_map);
}
/* /*
* Expect allocation after rollover to fail if we don't have at least * Expect allocation after rollover to fail if we don't have at least
* one more ASID than CPUs. ASID #0 is reserved for init_mm. * one more ASID than CPUs. ASID #0 is reserved for init_mm.
...@@ -305,6 +387,13 @@ static int asids_update_limit(void) ...@@ -305,6 +387,13 @@ static int asids_update_limit(void)
WARN_ON(num_available_asids - 1 <= num_possible_cpus()); WARN_ON(num_available_asids - 1 <= num_possible_cpus());
pr_info("ASID allocator initialised with %lu entries\n", pr_info("ASID allocator initialised with %lu entries\n",
num_available_asids); num_available_asids);
/*
* There must always be an ASID available after rollover. Ensure that,
* even if all CPUs have a reserved ASID and the maximum number of ASIDs
* are pinned, there still is at least one empty slot in the ASID map.
*/
max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
return 0; return 0;
} }
arch_initcall(asids_update_limit); arch_initcall(asids_update_limit);
...@@ -319,13 +408,17 @@ static int asids_init(void) ...@@ -319,13 +408,17 @@ static int asids_init(void)
panic("Failed to allocate bitmap for %lu ASIDs\n", panic("Failed to allocate bitmap for %lu ASIDs\n",
NUM_USER_ASIDS); NUM_USER_ASIDS);
pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
sizeof(*pinned_asid_map), GFP_KERNEL);
nr_pinned_asids = 0;
/* /*
* We cannot call set_reserved_asid_bits() here because CPU * We cannot call set_reserved_asid_bits() here because CPU
* caps are not finalized yet, so it is safer to assume KPTI * caps are not finalized yet, so it is safer to assume KPTI
* and reserve kernel ASID's from beginning. * and reserve kernel ASID's from beginning.
*/ */
if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
set_kpti_asid_bits(); set_kpti_asid_bits(asid_map);
return 0; return 0;
} }
early_initcall(asids_init); early_initcall(asids_init);
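As a worked instance of the cap computed above: with 16-bit ASIDs halved by KPTI (32768 usable) and 8 possible CPUs, max_pinned_asids works out to 32768 - 8 - 2 = 32758. The pin/unpin mechanics themselves reduce to a refcount plus a bitmap that survives rollover; a minimal userspace sketch under those assumptions (tiny ASID space, fake CPU count, no locking or generations):

#include <stdbool.h>
#include <stdio.h>

#define NUM_ASIDS 64		/* tiny space to keep the demo short */

static bool pinned_map[NUM_ASIDS];
static unsigned int refcnt[NUM_ASIDS];
static unsigned long nr_pinned, max_pinned = NUM_ASIDS - 4 - 2; /* 4 "CPUs" */

/* Pin an ASID so rollover must preserve it; 0 means "could not pin". */
static unsigned long asid_pin(unsigned long asid)
{
	if (refcnt[asid]++)		/* already pinned: just take a ref */
		return asid;
	if (nr_pinned >= max_pinned) {	/* keep slack for rollover */
		refcnt[asid]--;
		return 0;
	}
	nr_pinned++;
	pinned_map[asid] = true;
	return asid;
}

static void asid_unpin(unsigned long asid)
{
	if (--refcnt[asid] == 0) {
		pinned_map[asid] = false;
		nr_pinned--;
	}
}

/* On rollover, only pinned ASIDs stay marked as in use. */
static void rollover(bool *asid_map)
{
	for (unsigned long i = 0; i < NUM_ASIDS; i++)
		asid_map[i] = pinned_map[i];
}

int main(void)
{
	static bool asid_map[NUM_ASIDS];

	asid_pin(7);
	asid_pin(7);			/* second user of the same mm */
	rollover(asid_map);
	printf("asid 7 survives rollover: %d\n", asid_map[7]);	/* 1 */
	asid_unpin(7);
	asid_unpin(7);
	rollover(asid_map);
	printf("asid 7 survives rollover: %d\n", asid_map[7]);	/* 0 */
	return 0;
}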
...@@ -14,9 +14,7 @@ int fixup_exception(struct pt_regs *regs) ...@@ -14,9 +14,7 @@ int fixup_exception(struct pt_regs *regs)
if (!fixup) if (!fixup)
return 0; return 0;
if (IS_ENABLED(CONFIG_BPF_JIT) && if (in_bpf_jit(regs))
regs->pc >= BPF_JIT_REGION_START &&
regs->pc < BPF_JIT_REGION_END)
return arm64_bpf_fixup_exception(fixup, regs); return arm64_bpf_fixup_exception(fixup, regs);
regs->pc = (unsigned long)&fixup->fixup + fixup->fixup; regs->pc = (unsigned long)&fixup->fixup + fixup->fixup;
......
@@ -218,7 +218,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
	} while (pteval != old_pteval);

-	flush_tlb_fix_spurious_fault(vma, address);
+	/* Invalidate a stale read-only entry */
+	if (dirty)
+		flush_tlb_page(vma, address);
	return 1;
}
......
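The lines of context around that change are the usual compare-and-exchange retry loop: recompute the new PTE value and retry until the stored value has not changed underneath you. A self-contained illustration of that loop shape, with C11 atomics standing in for cmpxchg_relaxed and made-up PTE bits:

#include <stdatomic.h>
#include <stdio.h>

#define PTE_DIRTY  0x1UL
#define PTE_RDONLY 0x2UL

/* Set the dirty bit and drop read-only, retrying if the PTE changed. */
static unsigned long pte_mkdirty_atomic(_Atomic unsigned long *ptep)
{
	unsigned long old = atomic_load_explicit(ptep, memory_order_relaxed);
	unsigned long new;

	do {
		new = (old | PTE_DIRTY) & ~PTE_RDONLY;
	} while (!atomic_compare_exchange_weak_explicit(ptep, &old, new,
							memory_order_relaxed,
							memory_order_relaxed));
	return new;
}

int main(void)
{
	_Atomic unsigned long pte = PTE_RDONLY | 0x1000;

	printf("pte=%#lx\n", pte_mkdirty_atomic(&pte));	/* 0x1001 */
	return 0;
}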
@@ -46,7 +46,11 @@ EXPORT_SYMBOL(node_to_cpumask_map);
 */
const struct cpumask *cpumask_of_node(int node)
{
-	if (WARN_ON(node >= nr_node_ids))
+	if (node == NUMA_NO_NODE)
+		return cpu_all_mask;
+
+	if (WARN_ON(node < 0 || node >= nr_node_ids))
		return cpu_none_mask;

	if (WARN_ON(node_to_cpumask_map[node] == NULL))
@@ -448,7 +452,7 @@ static int __init dummy_numa_init(void)
 * arm64_numa_init() - Initialize NUMA
 *
 * Try each configured NUMA initialization method until one succeeds. The
- * last fallback is dummy single node config encomapssing whole memory.
+ * last fallback is dummy single node config encompassing whole memory.
 */
void __init arm64_numa_init(void)
{
......
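A small hedged sketch of the guard order the reworked cpumask_of_node() uses (userspace stand-ins, not the kernel cpumask API): NUMA_NO_NODE means "no affinity, use every CPU", while an out-of-range node is a caller bug and gets an empty mask.

#include <stdio.h>

#define NUMA_NO_NODE (-1)
#define NR_NODES 4

static const char *node_masks[NR_NODES] = { "0-3", "4-7", NULL, "12-15" };

static const char *cpumask_of_node(int node)
{
	if (node == NUMA_NO_NODE)
		return "all";			/* cpu_all_mask */
	if (node < 0 || node >= NR_NODES) {
		fprintf(stderr, "WARN: bogus node %d\n", node);
		return "none";			/* cpu_none_mask */
	}
	if (!node_masks[node]) {
		fprintf(stderr, "WARN: node %d has no map\n", node);
		return "none";
	}
	return node_masks[node];
}

int main(void)
{
	printf("%s %s %s\n", cpumask_of_node(NUMA_NO_NODE),
	       cpumask_of_node(1), cpumask_of_node(99));
	return 0;
}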
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
#include <asm/cacheflush.h>
#include <asm/set_memory.h> #include <asm/set_memory.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
......
...@@ -41,6 +41,8 @@ static struct addr_marker address_markers[] = { ...@@ -41,6 +41,8 @@ static struct addr_marker address_markers[] = {
{ 0 /* KASAN_SHADOW_START */, "Kasan shadow start" }, { 0 /* KASAN_SHADOW_START */, "Kasan shadow start" },
{ KASAN_SHADOW_END, "Kasan shadow end" }, { KASAN_SHADOW_END, "Kasan shadow end" },
#endif #endif
{ BPF_JIT_REGION_START, "BPF start" },
{ BPF_JIT_REGION_END, "BPF end" },
{ MODULES_VADDR, "Modules start" }, { MODULES_VADDR, "Modules start" },
{ MODULES_END, "Modules end" }, { MODULES_END, "Modules end" },
{ VMALLOC_START, "vmalloc() area" }, { VMALLOC_START, "vmalloc() area" },
......
...@@ -19,7 +19,7 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie, ...@@ -19,7 +19,7 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
unwind_for_each_frame(&state, task, regs, 0) { unwind_for_each_frame(&state, task, regs, 0) {
addr = unwind_get_return_address(&state); addr = unwind_get_return_address(&state);
if (!addr || !consume_entry(cookie, addr, false)) if (!addr || !consume_entry(cookie, addr))
break; break;
} }
} }
...@@ -56,7 +56,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, ...@@ -56,7 +56,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
return -EINVAL; return -EINVAL;
#endif #endif
if (!consume_entry(cookie, addr, false)) if (!consume_entry(cookie, addr))
return -EINVAL; return -EINVAL;
} }
......
...@@ -18,13 +18,13 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie, ...@@ -18,13 +18,13 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
struct unwind_state state; struct unwind_state state;
unsigned long addr; unsigned long addr;
if (regs && !consume_entry(cookie, regs->ip, false)) if (regs && !consume_entry(cookie, regs->ip))
return; return;
for (unwind_start(&state, task, regs, NULL); !unwind_done(&state); for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
unwind_next_frame(&state)) { unwind_next_frame(&state)) {
addr = unwind_get_return_address(&state); addr = unwind_get_return_address(&state);
if (!addr || !consume_entry(cookie, addr, false)) if (!addr || !consume_entry(cookie, addr))
break; break;
} }
} }
...@@ -72,7 +72,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, ...@@ -72,7 +72,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
if (!addr) if (!addr)
return -EINVAL; return -EINVAL;
if (!consume_entry(cookie, addr, false)) if (!consume_entry(cookie, addr))
return -EINVAL; return -EINVAL;
} }
...@@ -114,7 +114,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie, ...@@ -114,7 +114,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
{ {
const void __user *fp = (const void __user *)regs->bp; const void __user *fp = (const void __user *)regs->bp;
if (!consume_entry(cookie, regs->ip, false)) if (!consume_entry(cookie, regs->ip))
return; return;
while (1) { while (1) {
...@@ -128,7 +128,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie, ...@@ -128,7 +128,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
break; break;
if (!frame.ret_addr) if (!frame.ret_addr)
break; break;
if (!consume_entry(cookie, frame.ret_addr, false)) if (!consume_entry(cookie, frame.ret_addr))
break; break;
fp = frame.next_fp; fp = frame.next_fp;
} }
......
...@@ -811,8 +811,7 @@ static inline const struct iommu_ops *iort_fwspec_iommu_ops(struct device *dev) ...@@ -811,8 +811,7 @@ static inline const struct iommu_ops *iort_fwspec_iommu_ops(struct device *dev)
return (fwspec && fwspec->ops) ? fwspec->ops : NULL; return (fwspec && fwspec->ops) ? fwspec->ops : NULL;
} }
static inline int iort_add_device_replay(const struct iommu_ops *ops, static inline int iort_add_device_replay(struct device *dev)
struct device *dev)
{ {
int err = 0; int err = 0;
...@@ -1072,7 +1071,7 @@ const struct iommu_ops *iort_iommu_configure_id(struct device *dev, ...@@ -1072,7 +1071,7 @@ const struct iommu_ops *iort_iommu_configure_id(struct device *dev,
*/ */
if (!err) { if (!err) {
ops = iort_fwspec_iommu_ops(dev); ops = iort_fwspec_iommu_ops(dev);
err = iort_add_device_replay(ops, dev); err = iort_add_device_replay(dev);
} }
/* Ignore all other errors apart from EPROBE_DEFER */ /* Ignore all other errors apart from EPROBE_DEFER */
...@@ -1087,11 +1086,6 @@ const struct iommu_ops *iort_iommu_configure_id(struct device *dev, ...@@ -1087,11 +1086,6 @@ const struct iommu_ops *iort_iommu_configure_id(struct device *dev,
} }
#else #else
static inline const struct iommu_ops *iort_fwspec_iommu_ops(struct device *dev)
{ return NULL; }
static inline int iort_add_device_replay(const struct iommu_ops *ops,
struct device *dev)
{ return 0; }
int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
{ return 0; } { return 0; }
const struct iommu_ops *iort_iommu_configure_id(struct device *dev, const struct iommu_ops *iort_iommu_configure_id(struct device *dev,
......
...@@ -64,7 +64,6 @@ lib-$(CONFIG_ARM) += arm32-stub.o ...@@ -64,7 +64,6 @@ lib-$(CONFIG_ARM) += arm32-stub.o
lib-$(CONFIG_ARM64) += arm64-stub.o lib-$(CONFIG_ARM64) += arm64-stub.o
lib-$(CONFIG_X86) += x86-stub.o lib-$(CONFIG_X86) += x86-stub.o
CFLAGS_arm32-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET) CFLAGS_arm32-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
CFLAGS_arm64-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
# #
# For x86, bootloaders like systemd-boot or grub-efi do not zero-initialize the # For x86, bootloaders like systemd-boot or grub-efi do not zero-initialize the
......
...@@ -77,7 +77,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr, ...@@ -77,7 +77,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
kernel_size = _edata - _text; kernel_size = _edata - _text;
kernel_memsize = kernel_size + (_end - _edata); kernel_memsize = kernel_size + (_end - _edata);
*reserve_size = kernel_memsize + TEXT_OFFSET % min_kimg_align(); *reserve_size = kernel_memsize;
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && phys_seed != 0) { if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && phys_seed != 0) {
/* /*
...@@ -91,7 +91,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr, ...@@ -91,7 +91,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
} }
if (status != EFI_SUCCESS) { if (status != EFI_SUCCESS) {
if (IS_ALIGNED((u64)_text - TEXT_OFFSET, min_kimg_align())) { if (IS_ALIGNED((u64)_text, min_kimg_align())) {
/* /*
* Just execute from wherever we were loaded by the * Just execute from wherever we were loaded by the
* UEFI PE/COFF loader if the alignment is suitable. * UEFI PE/COFF loader if the alignment is suitable.
...@@ -111,7 +111,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr, ...@@ -111,7 +111,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
} }
} }
*image_addr = *reserve_addr + TEXT_OFFSET % min_kimg_align(); *image_addr = *reserve_addr;
memcpy((void *)*image_addr, _text, kernel_size); memcpy((void *)*image_addr, _text, kernel_size);
return EFI_SUCCESS; return EFI_SUCCESS;
......
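With TEXT_OFFSET gone, the stub only has to check that the load address itself is suitably aligned. The check is the standard power-of-two alignment test; roughly, and with an assumed 2 MiB minimum alignment:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if addr is a multiple of align; align must be a power of two. */
static bool is_aligned(uint64_t addr, uint64_t align)
{
	return (addr & (align - 1)) == 0;
}

int main(void)
{
	uint64_t min_kimg_align = 2 * 1024 * 1024;	/* e.g. 2 MiB */

	printf("%d %d\n",
	       is_aligned(0x40200000, min_kimg_align),
	       is_aligned(0x40210000, min_kimg_align));	/* 1 0 */
	return 0;
}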
@@ -41,6 +41,13 @@ config ARM_CCN
 	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
 	  interconnect.
+config ARM_CMN
+	tristate "Arm CMN-600 PMU support"
+	depends on ARM64 || (COMPILE_TEST && 64BIT)
+	help
+	  Support for PMU events monitoring on the Arm CMN-600 Coherent Mesh
+	  Network interconnect.
 config ARM_PMU
 	depends on ARM || ARM64
 	bool "ARM PMU framework"
...
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_ARM_CCI_PMU) += arm-cci.o
 obj-$(CONFIG_ARM_CCN) += arm-ccn.o
+obj-$(CONFIG_ARM_CMN) += arm-cmn.o
 obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o
 obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
 obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
...
@@ -11,6 +11,7 @@
 #define DRVNAME		PMUNAME "_pmu"
 #define pr_fmt(fmt)	DRVNAME ": " fmt
+#include <linux/acpi.h>
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
 #include <linux/bug.h>
@@ -603,18 +604,19 @@ static struct dsu_pmu *dsu_pmu_alloc(struct platform_device *pdev)
 }
 /**
- * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster.
+ * dsu_pmu_dt_get_cpus: Get the list of CPUs in the cluster
+ * from device tree.
  */
-static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
+static int dsu_pmu_dt_get_cpus(struct device *dev, cpumask_t *mask)
 {
 	int i = 0, n, cpu;
 	struct device_node *cpu_node;
-	n = of_count_phandle_with_args(dev, "cpus", NULL);
+	n = of_count_phandle_with_args(dev->of_node, "cpus", NULL);
 	if (n <= 0)
 		return -ENODEV;
 	for (; i < n; i++) {
-		cpu_node = of_parse_phandle(dev, "cpus", i);
+		cpu_node = of_parse_phandle(dev->of_node, "cpus", i);
 		if (!cpu_node)
 			break;
 		cpu = of_cpu_node_to_id(cpu_node);
@@ -631,6 +633,36 @@ static int dsu_pmu_dt_get_cpus(struct device_node *dev, cpumask_t *mask)
 	return 0;
 }
+/**
+ * dsu_pmu_acpi_get_cpus: Get the list of CPUs in the cluster
+ * from ACPI.
+ */
+static int dsu_pmu_acpi_get_cpus(struct device *dev, cpumask_t *mask)
+{
+#ifdef CONFIG_ACPI
+	int cpu;
+	/*
+	 * A dsu pmu node is inside a cluster parent node along with cpu nodes.
+	 * We need to find out all cpus that have the same parent with this pmu.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct acpi_device *acpi_dev;
+		struct device *cpu_dev = get_cpu_device(cpu);
+		if (!cpu_dev)
+			continue;
+		acpi_dev = ACPI_COMPANION(cpu_dev);
+		if (acpi_dev &&
+		    acpi_dev->parent == ACPI_COMPANION(dev)->parent)
+			cpumask_set_cpu(cpu, mask);
+	}
+#endif
+	return 0;
+}
 /*
  * dsu_pmu_probe_pmu: Probe the PMU details on a CPU in the cluster.
  */
@@ -676,6 +708,7 @@ static int dsu_pmu_device_probe(struct platform_device *pdev)
 {
 	int irq, rc;
 	struct dsu_pmu *dsu_pmu;
+	struct fwnode_handle *fwnode = dev_fwnode(&pdev->dev);
 	char *name;
 	static atomic_t pmu_idx = ATOMIC_INIT(-1);
@@ -683,7 +716,16 @@ static int dsu_pmu_device_probe(struct platform_device *pdev)
 	if (IS_ERR(dsu_pmu))
 		return PTR_ERR(dsu_pmu);
-	rc = dsu_pmu_dt_get_cpus(pdev->dev.of_node, &dsu_pmu->associated_cpus);
+	if (IS_ERR_OR_NULL(fwnode))
+		return -ENOENT;
+	if (is_of_node(fwnode))
+		rc = dsu_pmu_dt_get_cpus(&pdev->dev, &dsu_pmu->associated_cpus);
+	else if (is_acpi_device_node(fwnode))
+		rc = dsu_pmu_acpi_get_cpus(&pdev->dev, &dsu_pmu->associated_cpus);
+	else
+		return -ENOENT;
 	if (rc) {
 		dev_warn(&pdev->dev, "Failed to parse the CPUs\n");
 		return rc;
@@ -752,11 +794,21 @@ static const struct of_device_id dsu_pmu_of_match[] = {
 	{ .compatible = "arm,dsu-pmu", },
 	{},
 };
+MODULE_DEVICE_TABLE(of, dsu_pmu_of_match);
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id dsu_pmu_acpi_match[] = {
+	{ "ARMHD500", 0},
+	{},
+};
+MODULE_DEVICE_TABLE(acpi, dsu_pmu_acpi_match);
+#endif
 static struct platform_driver dsu_pmu_driver = {
 	.driver = {
 		.name = DRVNAME,
 		.of_match_table = of_match_ptr(dsu_pmu_of_match),
+		.acpi_match_table = ACPI_PTR(dsu_pmu_acpi_match),
 		.suppress_bind_attrs = true,
 	},
 	.probe = dsu_pmu_device_probe,
@@ -826,7 +878,6 @@ static void __exit dsu_pmu_exit(void)
 module_init(dsu_pmu_init);
 module_exit(dsu_pmu_exit);
-MODULE_DEVICE_TABLE(of, dsu_pmu_of_match);
 MODULE_DESCRIPTION("Perf driver for ARM DynamIQ Shared Unit");
 MODULE_AUTHOR("Suzuki K Poulose <suzuki.poulose@arm.com>");
 MODULE_LICENSE("GPL v2");
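The cluster-membership test added in dsu_pmu_acpi_get_cpus() above boils down to "select every CPU whose ACPI companion has the same parent as the PMU device". The sketch below models that selection in plain userspace C with an invented node structure and data set; none of these types or values exist in the kernel, it only mirrors the comparison logic.

/*
 * Illustration only: a stand-alone model of the "same parent" selection
 * performed by dsu_pmu_acpi_get_cpus(), using made-up node IDs.
 */
#include <stdio.h>

struct fw_node {
	int id;
	int parent;	/* stand-in for the ACPI parent handle */
};

int main(void)
{
	struct fw_node pmu = { .id = 100, .parent = 1 };	/* DSU PMU under cluster 1 */
	struct fw_node cpus[] = {
		{ .id = 0, .parent = 1 },
		{ .id = 1, .parent = 1 },
		{ .id = 2, .parent = 2 },	/* belongs to a different cluster */
		{ .id = 3, .parent = 2 },
	};

	/* Equivalent of cpumask_set_cpu() for CPUs sharing the PMU's parent. */
	for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
		if (cpus[i].parent == pmu.parent)
			printf("cpu%d is associated with the DSU PMU\n", cpus[i].id);

	return 0;
}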