Commit e831101a authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:

 - Kexec support for arm64

 - Kprobes support

 - Expose MIDR_EL1 and REVIDR_EL1 CPU identification registers to sysfs

 - Trapping of user space cache maintenance operations and emulation in
   the kernel (CPU errata workaround)

 - Clean-up of the early page tables creation (kernel linear mapping,
   EFI run-time maps) to avoid splitting larger blocks (e.g.  pmds) into
   smaller ones (e.g.  ptes)

 - VDSO support for CLOCK_MONOTONIC_RAW in clock_gettime()

 - ARCH_HAS_KCOV enabled for arm64

 - Optimise IP checksum helpers

 - SWIOTLB optimisation to only allocate/initialise the buffer if the
   available RAM is beyond the 32-bit mask

 - Properly handle the "nosmp" command line argument

 - Fix for the initialisation of the CPU debug state during early boot

 - vdso-offsets.h build dependency workaround

 - Build fix when RANDOMIZE_BASE is enabled with MODULES off

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (64 commits)
  arm64: arm: Fix-up the removal of the arm64 regs_query_register_name() prototype
  arm64: Only select ARM64_MODULE_PLTS if MODULES=y
  arm64: mm: run pgtable_page_ctor() on non-swapper translation table pages
  arm64: mm: make create_mapping_late() non-allocating
  arm64: Honor nosmp kernel command line option
  arm64: Fix incorrect per-cpu usage for boot CPU
  arm64: kprobes: Add KASAN instrumentation around stack accesses
  arm64: kprobes: Cleanup jprobe_return
  arm64: kprobes: Fix overflow when saving stack
  arm64: kprobes: WARN if attempting to step with PSTATE.D=1
  arm64: debug: remove unused local_dbg_{enable, disable} macros
  arm64: debug: remove redundant spsr manipulation
  arm64: debug: unmask PSTATE.D earlier
  arm64: localise Image objcopy flags
  arm64: ptrace: remove extra define for CPSR's E bit
  kprobes: Add arm64 case in kprobe example module
  arm64: Add kernel return probes support (kretprobes)
  arm64: Add trampoline code for kretprobes
  arm64: kprobes instruction simulation support
  arm64: Treat all entry code as non-kprobe-able
  ...
parents f9abf53a fd6380b7
...@@ -340,3 +340,13 @@ Description: POWERNV CPUFreq driver's frequency throttle stats directory and
'policyX/throttle_stats' directory and all the attributes are same as
the /sys/devices/system/cpu/cpuX/cpufreq/throttle_stats directory and
attributes which give the frequency throttle information of the chip.
What: /sys/devices/system/cpu/cpuX/regs/
/sys/devices/system/cpu/cpuX/regs/identification/
/sys/devices/system/cpu/cpuX/regs/identification/midr_el1
/sys/devices/system/cpu/cpuX/regs/identification/revidr_el1
Date: June 2016
Contact: Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
Description: AArch64 CPU registers
'identification' directory exposes the CPU ID registers for
identifying model and revision of the CPU.
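
Not part of the ABI file itself: a minimal user-space sketch of how the new midr_el1 entry might be consumed. The cpu0 path and the hexadecimal file format are assumptions; the field layout follows the architectural MIDR_EL1 definition (implementer [31:24], variant [23:20], part number [15:4], revision [3:0]).

/* Illustrative only: read and decode MIDR_EL1 from the sysfs file above. */
#include <stdio.h>

int main(void)
{
	unsigned long long midr;
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/regs/identification/midr_el1", "r");

	if (!f || fscanf(f, "%llx", &midr) != 1) {
		perror("midr_el1");
		return 1;
	}
	fclose(f);

	/* Field layout per the ARMv8 MIDR_EL1 register description. */
	printf("implementer=0x%02llx variant=0x%llx partnum=0x%03llx rev=0x%llx\n",
	       (midr >> 24) & 0xff, (midr >> 20) & 0xf,
	       (midr >> 4) & 0xfff, midr & 0xf);
	return 0;
}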
...@@ -34,7 +34,7 @@ of the summary text almost directly, to be honest.
The short form of the rationale for ACPI on ARM is:

-- ACPI’s byte code (AML) allows the platform to encode hardware behavior,
   while DT explicitly does not support this. For hardware vendors, being
   able to encode behavior is a key tool used in supporting operating
   system releases on new hardware.
...@@ -57,11 +57,11 @@ The short form of the rationale for ACPI on ARM is:
-- The new ACPI governance process works well and Linux is now at the same
   table as hardware vendors and other OS vendors. In fact, there is no
   longer any reason to feel that ACPI only belongs to Windows or that
   Linux is in any way secondary to Microsoft in this arena. The move of
   ACPI governance into the UEFI forum has significantly opened up the
   specification development process, and currently, a large portion of the
   changes being made to ACPI are being driven by Linux.

Key to the use of ACPI is the support model. For servers in general, the
responsibility for hardware behaviour cannot solely be the domain of the
...@@ -110,7 +110,7 @@ ACPI support in drivers and subsystems for ARMv8 should never be mutually
exclusive with DT support at compile time.

At boot time the kernel will only use one description method depending on
parameters passed from the boot loader (including kernel bootargs).

Regardless of whether DT or ACPI is used, the kernel must always be capable
of booting with either scheme (in kernels with both schemes enabled at compile
...@@ -159,7 +159,7 @@ Further, the ACPI core will only use the 64-bit address fields in the FADT
(Fixed ACPI Description Table). Any 32-bit address fields in the FADT will
be ignored on arm64.

Hardware reduced mode (see Section 4.1 of the ACPI 6.1 specification) will
be enforced by the ACPI core on arm64. Doing so allows the ACPI core to
run less complex code since it no longer has to provide support for legacy
hardware from other architectures. Any fields that are not to be used for
...@@ -167,7 +167,7 @@ hardware reduced mode must be set to zero.
For the ACPI core to operate properly, and in turn provide the information
the kernel needs to configure devices, it expects to find the following
tables (all section numbers refer to the ACPI 6.1 specification):

    -- RSDP (Root System Description Pointer), section 5.2.5
...@@ -185,9 +185,23 @@ tables (all section numbers refer to the ACPI 5.1 specfication):
    -- If PCI is supported, the MCFG (Memory mapped ConFiGuration
       Table), section 5.2.6, specifically Table 5-31.

    -- If booting without a console=<device> kernel parameter is
       supported, the SPCR (Serial Port Console Redirection table),
       section 5.2.6, specifically Table 5-31.

    -- If necessary to describe the I/O topology, SMMUs and GIC ITSs,
       the IORT (Input Output Remapping Table, section 5.2.6, specifically
       Table 5-31).

    -- If NUMA is supported, the SRAT (System Resource Affinity Table)
       and SLIT (System Locality distance Information Table), sections
       5.2.16 and 5.2.17, respectively.

If the above tables are not all present, the kernel may or may not be
able to boot properly since it may not be able to configure all of the
devices available. This list of tables is not meant to be all inclusive;
in some environments other tables may be needed (e.g., any of the APEI
tables from section 18) to support specific functionality.
ACPI Detection
...@@ -198,7 +212,7 @@ the device structure. This is detailed further in the "Driver
Recommendations" section.

In non-driver code, if the presence of ACPI needs to be detected at
run time, then check the value of acpi_disabled. If CONFIG_ACPI is not
set, acpi_disabled will always be 1.
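
A minimal sketch of the check described above (not from the patch; the initcall name is made up). acpi_disabled is the existing global exposed through <linux/acpi.h>:

#include <linux/acpi.h>
#include <linux/init.h>

static int __init example_acpi_check(void)
{
	/* Non-driver code: detect ACPI at run time via acpi_disabled. */
	if (acpi_disabled)	/* always 1 when CONFIG_ACPI is not set */
		return 0;	/* booted with DT; skip ACPI-only setup */

	/* ... ACPI-specific initialisation would go here ... */
	return 0;
}
early_initcall(example_acpi_check);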
...@@ -233,7 +247,7 @@ that looks like this: Name(KEY0, "value0"). An ACPI device driver would
then retrieve the value of the property by evaluating the KEY0 object.
However, using Name() this way has multiple problems: (1) ACPI limits
names ("KEY0") to four characters unlike DT; (2) there is no industry
wide registry that maintains a list of names, minimizing re-use; (3)
there is also no registry for the definition of property values ("value0"),
again making re-use difficult; and (4) how does one maintain backward
compatibility as new hardware comes out? The _DSD method was created
...@@ -434,7 +448,8 @@ The ACPI specification changes regularly. During the year 2014, for instance,
version 5.1 was released and version 6.0 substantially completed, with most of
the changes being driven by ARM-specific requirements. Proposed changes are
presented and discussed in the ASWG (ACPI Specification Working Group) which
is a part of the UEFI Forum. The current version of the ACPI specification
is 6.1, released in January 2016.

Participation in this group is open to all UEFI members. Please see
http://www.uefi.org/workinggroup for details on group membership.
...@@ -443,7 +458,7 @@ It is the intent of the ARMv8 ACPI kernel code to follow the ACPI specification
as closely as possible, and to only implement functionality that complies with
the released standards from UEFI ASWG. As a practical matter, there will be
vendors that provide bad ACPI tables or violate the standards in some way.
If this is because of errors, quirks and fix-ups may be necessary, but will
be avoided if possible. If there are features missing from ACPI that preclude
it from being used on a platform, ECRs (Engineering Change Requests) should be
submitted to ASWG and go through the normal approval process; for those that
...@@ -480,8 +495,7 @@ References
    Software on ARM Platforms", dated 16 Aug 2014

[2] http://www.secretlab.ca/archives/151, 10 Jan 2015, Copyright (c) 2015,
    Linaro Ltd., written by Grant Likely.

[3] AMD ACPI for Seattle platform documentation:
    http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Seattle_ACPI_Guide.pdf
......
...@@ -39,7 +39,9 @@ Optional properties:
When using a PPI, specifies a list of phandles to CPU
nodes corresponding to the set of CPUs which have
a PMU of this type signalling the PPI listed in the
interrupts property, unless this is already specified
by the PPI interrupt specifier itself (in which case
the interrupt-affinity property shouldn't be present).

This property should be present when there is more than
a single SPI.
......
...@@ -8,6 +8,7 @@ config ARM64 ...@@ -8,6 +8,7 @@ config ARM64
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_GCOV_PROFILE_ALL select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_KCOV
select ARCH_HAS_SG_CHAIN select ARCH_HAS_SG_CHAIN
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_USE_CMPXCHG_LOCKREF select ARCH_USE_CMPXCHG_LOCKREF
...@@ -86,8 +87,11 @@ config ARM64 ...@@ -86,8 +87,11 @@ config ARM64
select HAVE_PERF_EVENTS select HAVE_PERF_EVENTS
select HAVE_PERF_REGS select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RCU_TABLE_FREE select HAVE_RCU_TABLE_FREE
select HAVE_SYSCALL_TRACEPOINTS select HAVE_SYSCALL_TRACEPOINTS
select HAVE_KPROBES
select HAVE_KRETPROBES if HAVE_KPROBES
select IOMMU_DMA if IOMMU_SUPPORT select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN select IRQ_DOMAIN
select IRQ_FORCED_THREADING select IRQ_FORCED_THREADING
...@@ -665,6 +669,16 @@ config PARAVIRT_TIME_ACCOUNTING ...@@ -665,6 +669,16 @@ config PARAVIRT_TIME_ACCOUNTING
If in doubt, say N here. If in doubt, say N here.
config KEXEC
depends on PM_SLEEP_SMP
select KEXEC_CORE
bool "kexec system call"
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
but it is independent of the system firmware. And like a reboot
you can start any kernel with it, not just Linux.
config XEN_DOM0 config XEN_DOM0
def_bool y def_bool y
depends on XEN depends on XEN
...@@ -873,7 +887,7 @@ config RELOCATABLE ...@@ -873,7 +887,7 @@ config RELOCATABLE
config RANDOMIZE_BASE config RANDOMIZE_BASE
bool "Randomize the address of the kernel image" bool "Randomize the address of the kernel image"
select ARM64_MODULE_PLTS select ARM64_MODULE_PLTS if MODULES
select RELOCATABLE select RELOCATABLE
help help
Randomizes the virtual address at which the kernel image is Randomizes the virtual address at which the kernel image is
......
...@@ -12,7 +12,6 @@ ...@@ -12,7 +12,6 @@
LDFLAGS_vmlinux :=-p --no-undefined -X LDFLAGS_vmlinux :=-p --no-undefined -X
CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET) CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
OBJCOPYFLAGS :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
GZFLAGS :=-9 GZFLAGS :=-9
ifneq ($(CONFIG_RELOCATABLE),) ifneq ($(CONFIG_RELOCATABLE),)
...@@ -121,6 +120,16 @@ archclean: ...@@ -121,6 +120,16 @@ archclean:
$(Q)$(MAKE) $(clean)=$(boot) $(Q)$(MAKE) $(clean)=$(boot)
$(Q)$(MAKE) $(clean)=$(boot)/dts $(Q)$(MAKE) $(clean)=$(boot)/dts
# We need to generate vdso-offsets.h before compiling certain files in kernel/.
# In order to do that, we should use the archprepare target, but we can't since
# asm-offsets.h is included in some files used to generate vdso-offsets.h, and
# asm-offsets.h is built in prepare0, for which archprepare is a dependency.
# Therefore we need to generate the header after prepare0 has been made, hence
# this hack.
prepare: vdso_prepare
vdso_prepare: prepare0
$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso include/generated/vdso-offsets.h
define archhelp define archhelp
echo '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)' echo '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
echo ' Image - Uncompressed kernel image (arch/$(ARCH)/boot/Image)' echo ' Image - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
......
...@@ -14,6 +14,8 @@ ...@@ -14,6 +14,8 @@
# Based on the ia64 boot/Makefile. # Based on the ia64 boot/Makefile.
# #
OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
targets := Image Image.gz targets := Image Image.gz
$(obj)/Image: vmlinux FORCE $(obj)/Image: vmlinux FORCE
......
...@@ -70,6 +70,7 @@ CONFIG_KSM=y ...@@ -70,6 +70,7 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CMA=y CONFIG_CMA=y
CONFIG_XEN=y CONFIG_XEN=y
CONFIG_KEXEC=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_COMPAT=y CONFIG_COMPAT=y
CONFIG_CPU_IDLE=y CONFIG_CPU_IDLE=y
......
generic-y += bug.h generic-y += bug.h
generic-y += bugs.h generic-y += bugs.h
generic-y += checksum.h
generic-y += clkdev.h generic-y += clkdev.h
generic-y += cputime.h generic-y += cputime.h
generic-y += current.h generic-y += current.h
......
...@@ -95,13 +95,11 @@ void apply_alternatives(void *start, size_t length); ...@@ -95,13 +95,11 @@ void apply_alternatives(void *start, size_t length);
* The code that follows this macro will be assembled and linked as * The code that follows this macro will be assembled and linked as
* normal. There are no restrictions on this code. * normal. There are no restrictions on this code.
*/ */
.macro alternative_if_not cap, enable = 1 .macro alternative_if_not cap
.if \enable
.pushsection .altinstructions, "a" .pushsection .altinstructions, "a"
altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
.popsection .popsection
661: 661:
.endif
.endm .endm
/* /*
...@@ -118,27 +116,27 @@ void apply_alternatives(void *start, size_t length); ...@@ -118,27 +116,27 @@ void apply_alternatives(void *start, size_t length);
* alternative sequence it is defined in (branches into an * alternative sequence it is defined in (branches into an
* alternative sequence are not fixed up). * alternative sequence are not fixed up).
*/ */
.macro alternative_else, enable = 1 .macro alternative_else
.if \enable
662: .pushsection .altinstr_replacement, "ax" 662: .pushsection .altinstr_replacement, "ax"
663: 663:
.endif
.endm .endm
/* /*
* Complete an alternative code sequence. * Complete an alternative code sequence.
*/ */
.macro alternative_endif, enable = 1 .macro alternative_endif
.if \enable
664: .popsection 664: .popsection
.org . - (664b-663b) + (662b-661b) .org . - (664b-663b) + (662b-661b)
.org . - (662b-661b) + (664b-663b) .org . - (662b-661b) + (664b-663b)
.endif
.endm .endm
#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...) \ #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...) \
alternative_insn insn1, insn2, cap, IS_ENABLED(cfg) alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
.macro user_alt, label, oldinstr, newinstr, cond
9999: alternative_insn "\oldinstr", "\newinstr", \cond
_ASM_EXTABLE 9999b, \label
.endm
/* /*
* Generate the assembly for UAO alternatives with exception table entries. * Generate the assembly for UAO alternatives with exception table entries.
......
...@@ -24,6 +24,7 @@ ...@@ -24,6 +24,7 @@
#define __ASM_ASSEMBLER_H #define __ASM_ASSEMBLER_H
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/cpufeature.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
...@@ -261,7 +262,16 @@ lr .req x30 // link register ...@@ -261,7 +262,16 @@ lr .req x30 // link register
add \size, \kaddr, \size add \size, \kaddr, \size
sub \tmp2, \tmp1, #1 sub \tmp2, \tmp1, #1
bic \kaddr, \kaddr, \tmp2 bic \kaddr, \kaddr, \tmp2
9998: dc \op, \kaddr 9998:
.if (\op == cvau || \op == cvac)
alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
dc \op, \kaddr
alternative_else
dc civac, \kaddr
alternative_endif
.else
dc \op, \kaddr
.endif
add \kaddr, \kaddr, \tmp1 add \kaddr, \kaddr, \tmp1
cmp \kaddr, \size cmp \kaddr, \size
b.lo 9998b b.lo 9998b
......
/*
* Copyright (C) 2016 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_CHECKSUM_H
#define __ASM_CHECKSUM_H
#include <linux/types.h>
static inline __sum16 csum_fold(__wsum csum)
{
u32 sum = (__force u32)csum;
sum += (sum >> 16) | (sum << 16);
return ~(__force __sum16)(sum >> 16);
}
#define csum_fold csum_fold
static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
{
__uint128_t tmp;
u64 sum;
tmp = *(const __uint128_t *)iph;
iph += 16;
ihl -= 4;
tmp += ((tmp >> 64) | (tmp << 64));
sum = tmp >> 64;
do {
sum += *(const u32 *)iph;
iph += 4;
} while (--ihl);
sum += ((sum >> 32) | (sum << 32));
return csum_fold(sum >> 32);
}
#define ip_fast_csum ip_fast_csum
#include <asm-generic/checksum.h>
#endif /* __ASM_CHECKSUM_H */
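
For reference only (not part of the patch): a stand-alone user-space sketch of the rotate-and-add fold used by csum_fold() above, checked against a naive 16-bit one's-complement sum. The sample header bytes are arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Same fold as csum_fold(): add the rotated sum, keep the carried-in top half. */
static uint16_t fold(uint32_t sum)
{
	sum += (sum >> 16) | (sum << 16);
	return (uint16_t)~(sum >> 16);
}

/* Naive reference: 16-bit one's-complement sum with iterative carry fold. */
static uint16_t ref_csum(const uint8_t *p, int len)
{
	uint32_t sum = 0;

	for (int i = 0; i < len; i += 2)
		sum += (uint32_t)((p[i] << 8) | p[i + 1]);
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	/* 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed. */
	const uint8_t iph[20] = { 0x45, 0x00, 0x00, 0x54, 0x00, 0x00, 0x40, 0x00,
				  0x40, 0x01, 0x00, 0x00, 0xc0, 0xa8, 0x00, 0x01,
				  0xc0, 0xa8, 0x00, 0x02 };
	uint32_t sum = 0;

	for (int i = 0; i < 20; i += 2)
		sum += (uint32_t)((iph[i] << 8) | iph[i + 1]);

	/* Both lines should print the same value. */
	printf("fold:      0x%04x\n", fold(sum));
	printf("reference: 0x%04x\n", ref_csum(iph, 20));
	return 0;
}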
...@@ -25,10 +25,12 @@
 */
struct cpuinfo_arm64 {
	struct cpu	cpu;
	struct kobject	kobj;
	u32		reg_ctr;
	u32		reg_cntfrq;
	u32		reg_dczid;
	u32		reg_midr;
	u32		reg_revidr;

	u64		reg_id_aa64dfr0;
	u64		reg_id_aa64dfr1;
......
...@@ -191,7 +191,9 @@ void __init setup_cpu_features(void); ...@@ -191,7 +191,9 @@ void __init setup_cpu_features(void);
void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
const char *info); const char *info);
void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps);
void check_local_cpu_errata(void); void check_local_cpu_errata(void);
void __init enable_errata_workarounds(void);
void verify_local_cpu_errata(void); void verify_local_cpu_errata(void);
void verify_local_cpu_capabilities(void); void verify_local_cpu_capabilities(void);
......
...@@ -66,6 +66,11 @@ ...@@ -66,6 +66,11 @@
#define CACHE_FLUSH_IS_SAFE 1 #define CACHE_FLUSH_IS_SAFE 1
/* kprobes BRK opcodes with ESR encoding */
#define BRK64_ESR_MASK 0xFFFF
#define BRK64_ESR_KPROBES 0x0004
#define BRK64_OPCODE_KPROBES (AARCH64_BREAK_MON | (BRK64_ESR_KPROBES << 5))
/* AArch32 */ /* AArch32 */
#define DBG_ESR_EVT_BKPT 0x4 #define DBG_ESR_EVT_BKPT 0x4
#define DBG_ESR_EVT_VECC 0x5 #define DBG_ESR_EVT_VECC 0x5
......
...@@ -14,8 +14,7 @@ extern void efi_init(void); ...@@ -14,8 +14,7 @@ extern void efi_init(void);
#endif #endif
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md); int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
#define efi_set_mapping_permissions efi_create_mapping
#define arch_efi_call_virt_setup() \ #define arch_efi_call_virt_setup() \
({ \ ({ \
......
...@@ -74,6 +74,7 @@ ...@@ -74,6 +74,7 @@
#define ESR_ELx_EC_SHIFT (26) #define ESR_ELx_EC_SHIFT (26)
#define ESR_ELx_EC_MASK (UL(0x3F) << ESR_ELx_EC_SHIFT) #define ESR_ELx_EC_MASK (UL(0x3F) << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr) (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
#define ESR_ELx_IL (UL(1) << 25) #define ESR_ELx_IL (UL(1) << 25)
#define ESR_ELx_ISS_MASK (ESR_ELx_IL - 1) #define ESR_ELx_ISS_MASK (ESR_ELx_IL - 1)
......
...@@ -120,6 +120,29 @@ enum aarch64_insn_register { ...@@ -120,6 +120,29 @@ enum aarch64_insn_register {
AARCH64_INSN_REG_SP = 31 /* Stack pointer: as load/store base reg */ AARCH64_INSN_REG_SP = 31 /* Stack pointer: as load/store base reg */
}; };
enum aarch64_insn_special_register {
AARCH64_INSN_SPCLREG_SPSR_EL1 = 0xC200,
AARCH64_INSN_SPCLREG_ELR_EL1 = 0xC201,
AARCH64_INSN_SPCLREG_SP_EL0 = 0xC208,
AARCH64_INSN_SPCLREG_SPSEL = 0xC210,
AARCH64_INSN_SPCLREG_CURRENTEL = 0xC212,
AARCH64_INSN_SPCLREG_DAIF = 0xDA11,
AARCH64_INSN_SPCLREG_NZCV = 0xDA10,
AARCH64_INSN_SPCLREG_FPCR = 0xDA20,
AARCH64_INSN_SPCLREG_DSPSR_EL0 = 0xDA28,
AARCH64_INSN_SPCLREG_DLR_EL0 = 0xDA29,
AARCH64_INSN_SPCLREG_SPSR_EL2 = 0xE200,
AARCH64_INSN_SPCLREG_ELR_EL2 = 0xE201,
AARCH64_INSN_SPCLREG_SP_EL1 = 0xE208,
AARCH64_INSN_SPCLREG_SPSR_INQ = 0xE218,
AARCH64_INSN_SPCLREG_SPSR_ABT = 0xE219,
AARCH64_INSN_SPCLREG_SPSR_UND = 0xE21A,
AARCH64_INSN_SPCLREG_SPSR_FIQ = 0xE21B,
AARCH64_INSN_SPCLREG_SPSR_EL3 = 0xF200,
AARCH64_INSN_SPCLREG_ELR_EL3 = 0xF201,
AARCH64_INSN_SPCLREG_SP_EL2 = 0xF210
};
enum aarch64_insn_variant { enum aarch64_insn_variant {
AARCH64_INSN_VARIANT_32BIT, AARCH64_INSN_VARIANT_32BIT,
AARCH64_INSN_VARIANT_64BIT AARCH64_INSN_VARIANT_64BIT
...@@ -223,8 +246,15 @@ static __always_inline bool aarch64_insn_is_##abbr(u32 code) \ ...@@ -223,8 +246,15 @@ static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \ static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \
{ return (val); } { return (val); }
__AARCH64_INSN_FUNCS(adr_adrp, 0x1F000000, 0x10000000)
__AARCH64_INSN_FUNCS(prfm_lit, 0xFF000000, 0xD8000000)
__AARCH64_INSN_FUNCS(str_reg, 0x3FE0EC00, 0x38206800) __AARCH64_INSN_FUNCS(str_reg, 0x3FE0EC00, 0x38206800)
__AARCH64_INSN_FUNCS(ldr_reg, 0x3FE0EC00, 0x38606800) __AARCH64_INSN_FUNCS(ldr_reg, 0x3FE0EC00, 0x38606800)
__AARCH64_INSN_FUNCS(ldr_lit, 0xBF000000, 0x18000000)
__AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
__AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
__AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
__AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
__AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000) __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000)
__AARCH64_INSN_FUNCS(ldp_post, 0x7FC00000, 0x28C00000) __AARCH64_INSN_FUNCS(ldp_post, 0x7FC00000, 0x28C00000)
__AARCH64_INSN_FUNCS(stp_pre, 0x7FC00000, 0x29800000) __AARCH64_INSN_FUNCS(stp_pre, 0x7FC00000, 0x29800000)
...@@ -273,10 +303,15 @@ __AARCH64_INSN_FUNCS(svc, 0xFFE0001F, 0xD4000001) ...@@ -273,10 +303,15 @@ __AARCH64_INSN_FUNCS(svc, 0xFFE0001F, 0xD4000001)
__AARCH64_INSN_FUNCS(hvc, 0xFFE0001F, 0xD4000002) __AARCH64_INSN_FUNCS(hvc, 0xFFE0001F, 0xD4000002)
__AARCH64_INSN_FUNCS(smc, 0xFFE0001F, 0xD4000003) __AARCH64_INSN_FUNCS(smc, 0xFFE0001F, 0xD4000003)
__AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000) __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000)
__AARCH64_INSN_FUNCS(exception, 0xFF000000, 0xD4000000)
__AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F) __AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F)
__AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000) __AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000)
__AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000) __AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000)
__AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000) __AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000)
__AARCH64_INSN_FUNCS(eret, 0xFFFFFFFF, 0xD69F03E0)
__AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000)
__AARCH64_INSN_FUNCS(msr_imm, 0xFFF8F01F, 0xD500401F)
__AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000)
#undef __AARCH64_INSN_FUNCS #undef __AARCH64_INSN_FUNCS
...@@ -286,6 +321,8 @@ bool aarch64_insn_is_branch_imm(u32 insn); ...@@ -286,6 +321,8 @@ bool aarch64_insn_is_branch_imm(u32 insn);
int aarch64_insn_read(void *addr, u32 *insnp); int aarch64_insn_read(void *addr, u32 *insnp);
int aarch64_insn_write(void *addr, u32 insn); int aarch64_insn_write(void *addr, u32 insn);
enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn); enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
bool aarch64_insn_uses_literal(u32 insn);
bool aarch64_insn_is_branch(u32 insn);
u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn); u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type, u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
u32 insn, u64 imm); u32 insn, u64 imm);
...@@ -367,9 +404,13 @@ bool aarch32_insn_is_wide(u32 insn); ...@@ -367,9 +404,13 @@ bool aarch32_insn_is_wide(u32 insn);
#define A32_RT_OFFSET 12 #define A32_RT_OFFSET 12
#define A32_RT2_OFFSET 0 #define A32_RT2_OFFSET 0
u32 aarch64_insn_extract_system_reg(u32 insn);
u32 aarch32_insn_extract_reg_num(u32 insn, int offset); u32 aarch32_insn_extract_reg_num(u32 insn, int offset);
u32 aarch32_insn_mcr_extract_opc2(u32 insn); u32 aarch32_insn_mcr_extract_opc2(u32 insn);
u32 aarch32_insn_mcr_extract_crm(u32 insn); u32 aarch32_insn_mcr_extract_crm(u32 insn);
typedef bool (pstate_check_t)(unsigned long);
extern pstate_check_t * const aarch32_opcode_cond_checks[16];
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /* __ASM_INSN_H */ #endif /* __ASM_INSN_H */
...@@ -110,8 +110,5 @@ static inline int arch_irqs_disabled_flags(unsigned long flags) ...@@ -110,8 +110,5 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
: : "r" (flags) : "memory"); \ : : "r" (flags) : "memory"); \
} while (0) } while (0)
#define local_dbg_enable() asm("msr daifclr, #8" : : : "memory")
#define local_dbg_disable() asm("msr daifset, #8" : : : "memory")
#endif #endif
#endif #endif
/*
* kexec for arm64
*
* Copyright (C) Linaro.
* Copyright (C) Huawei Futurewei Technologies.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ARM64_KEXEC_H
#define _ARM64_KEXEC_H
/* Maximum physical address we can use pages from */
#define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
/* Maximum address we can reach in physical address mode */
#define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
/* Maximum address we can use for the control code buffer */
#define KEXEC_CONTROL_MEMORY_LIMIT (-1UL)
#define KEXEC_CONTROL_PAGE_SIZE 4096
#define KEXEC_ARCH KEXEC_ARCH_AARCH64
#ifndef __ASSEMBLY__
/**
* crash_setup_regs() - save registers for the panic kernel
*
* @newregs: registers are saved here
* @oldregs: registers to be saved (may be %NULL)
*/
static inline void crash_setup_regs(struct pt_regs *newregs,
struct pt_regs *oldregs)
{
/* Empty routine needed to avoid build errors. */
}
#endif /* __ASSEMBLY__ */
#endif
/*
* arch/arm64/include/asm/kprobes.h
*
* Copyright (C) 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _ARM_KPROBES_H
#define _ARM_KPROBES_H
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/percpu.h>
#define __ARCH_WANT_KPROBES_INSN_SLOT
#define MAX_INSN_SIZE 1
#define MAX_STACK_SIZE 128
#define flush_insn_slot(p) do { } while (0)
#define kretprobe_blacklist_size 0
#include <asm/probes.h>
struct prev_kprobe {
struct kprobe *kp;
unsigned int status;
};
/* Single step context for kprobe */
struct kprobe_step_ctx {
unsigned long ss_pending;
unsigned long match_addr;
};
/* per-cpu kprobe control block */
struct kprobe_ctlblk {
unsigned int kprobe_status;
unsigned long saved_irqflag;
struct prev_kprobe prev_kprobe;
struct kprobe_step_ctx ss_ctx;
struct pt_regs jprobe_saved_regs;
char jprobes_stack[MAX_STACK_SIZE];
};
void arch_remove_kprobe(struct kprobe *);
int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data);
int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr);
int kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr);
void kretprobe_trampoline(void);
void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
#endif /* _ARM_KPROBES_H */
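
The handlers declared above are wired into the generic kprobes API; a probe is registered from a module exactly as on other architectures. A minimal sketch modelled on samples/kprobes (the probed symbol is only an example):

#include <linux/kprobes.h>
#include <linux/module.h>

static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at pc=0x%lx\n", instruction_pointer(regs));
	return 0;	/* let the probed instruction execute */
}

static struct kprobe kp = {
	.symbol_name	= "_do_fork",	/* example target symbol */
	.pre_handler	= example_pre,
};

static int __init example_kprobe_init(void)
{
	return register_kprobe(&kp);
}

static void __exit example_kprobe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(example_kprobe_init);
module_exit(example_kprobe_exit);
MODULE_LICENSE("GPL");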
...@@ -210,7 +210,7 @@ static inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu) ...@@ -210,7 +210,7 @@ static inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu)
static inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu) static inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
{ {
return kvm_vcpu_get_hsr(vcpu) >> ESR_ELx_EC_SHIFT; return ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
} }
static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu) static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
......
...@@ -34,7 +34,7 @@ extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt); ...@@ -34,7 +34,7 @@ extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
extern void init_mem_pgprot(void); extern void init_mem_pgprot(void);
extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys, extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
unsigned long virt, phys_addr_t size, unsigned long virt, phys_addr_t size,
pgprot_t prot); pgprot_t prot, bool allow_block_mappings);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys); extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
#endif #endif
/*
* arch/arm64/include/asm/probes.h
*
* Copyright (C) 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _ARM_PROBES_H
#define _ARM_PROBES_H
#include <asm/opcodes.h>
struct kprobe;
struct arch_specific_insn;
typedef u32 kprobe_opcode_t;
typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);
/* architecture specific copy of original instruction */
struct arch_specific_insn {
kprobe_opcode_t *insn;
pstate_check_t *pstate_cc;
kprobes_handler_t *handler;
/* restore address after step xol */
unsigned long restore;
};
#endif
...@@ -192,5 +192,6 @@ static inline void spin_lock_prefetch(const void *ptr) ...@@ -192,5 +192,6 @@ static inline void spin_lock_prefetch(const void *ptr)
void cpu_enable_pan(void *__unused); void cpu_enable_pan(void *__unused);
void cpu_enable_uao(void *__unused); void cpu_enable_uao(void *__unused);
void cpu_enable_cache_maint_trap(void *__unused);
#endif /* __ASM_PROCESSOR_H */ #endif /* __ASM_PROCESSOR_H */
/*
* Copyright (C) 2014 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_PTDUMP_H
#define __ASM_PTDUMP_H
#ifdef CONFIG_ARM64_PTDUMP
#include <linux/mm_types.h>
struct addr_marker {
unsigned long start_address;
char *name;
};
struct ptdump_info {
struct mm_struct *mm;
const struct addr_marker *markers;
unsigned long base_addr;
unsigned long max_addr;
};
int ptdump_register(struct ptdump_info *info, const char *name);
#else
static inline int ptdump_register(struct ptdump_info *info, const char *name)
{
return 0;
}
#endif /* CONFIG_ARM64_PTDUMP */
#endif /* __ASM_PTDUMP_H */
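
A sketch of how a caller might use this interface (the markers, addresses and name below are made up; in this series the real user is the arm64 page-table dump code, which registers the kernel tables this way):

#include <linux/init.h>
#include <linux/mm_types.h>
#include <asm/ptdump.h>

static const struct addr_marker example_markers[] = {
	{ 0xffff000000000000UL,	"Example range start" },
	{ -1,			NULL },		/* end of marker list */
};

static struct ptdump_info example_info = {
	.mm		= &init_mm,
	.markers	= example_markers,
	.base_addr	= 0xffff000000000000UL,
};

static int __init example_ptdump_init(void)
{
	/* Registers a (debugfs) dump of the tables described above. */
	return ptdump_register(&example_info, "example_page_tables");
}
device_initcall(example_ptdump_init);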
...@@ -46,7 +46,6 @@ ...@@ -46,7 +46,6 @@
#define COMPAT_PSR_MODE_UND 0x0000001b #define COMPAT_PSR_MODE_UND 0x0000001b
#define COMPAT_PSR_MODE_SYS 0x0000001f #define COMPAT_PSR_MODE_SYS 0x0000001f
#define COMPAT_PSR_T_BIT 0x00000020 #define COMPAT_PSR_T_BIT 0x00000020
#define COMPAT_PSR_E_BIT 0x00000200
#define COMPAT_PSR_F_BIT 0x00000040 #define COMPAT_PSR_F_BIT 0x00000040
#define COMPAT_PSR_I_BIT 0x00000080 #define COMPAT_PSR_I_BIT 0x00000080
#define COMPAT_PSR_A_BIT 0x00000100 #define COMPAT_PSR_A_BIT 0x00000100
...@@ -74,6 +73,7 @@ ...@@ -74,6 +73,7 @@
#define COMPAT_PT_DATA_ADDR 0x10004 #define COMPAT_PT_DATA_ADDR 0x10004
#define COMPAT_PT_TEXT_END_ADDR 0x10008 #define COMPAT_PT_TEXT_END_ADDR 0x10008
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/bug.h>
/* sizeof(struct user) for AArch32 */ /* sizeof(struct user) for AArch32 */
#define COMPAT_USER_SZ 296 #define COMPAT_USER_SZ 296
...@@ -121,6 +121,8 @@ struct pt_regs {
	u64 unused;	// maintain 16 byte alignment
};

#define MAX_REG_OFFSET offsetof(struct pt_regs, pstate)

#define arch_has_single_step()	(1)

#ifdef CONFIG_COMPAT
...@@ -146,9 +148,58 @@ struct pt_regs {
#define fast_interrupts_enabled(regs) \
	(!((regs)->pstate & PSR_F_BIT))

#define GET_USP(regs) \
	(!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)

#define SET_USP(ptregs, value) \
	(!compat_user_mode(regs) ? ((regs)->sp = value) : ((regs)->compat_sp = value))
extern int regs_query_register_offset(const char *name);
extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
unsigned int n);
/**
* regs_get_register() - get register value from its offset
* @regs: pt_regs from which register value is gotten
* @offset: offset of the register.
*
 * regs_get_register() returns the value of the register found at @offset.
* The @offset is the offset of the register in struct pt_regs.
* If @offset is bigger than MAX_REG_OFFSET, this returns 0.
*/
static inline u64 regs_get_register(struct pt_regs *regs, unsigned int offset)
{
u64 val = 0;
WARN_ON(offset & 7);
offset >>= 3;
switch (offset) {
case 0 ... 30:
val = regs->regs[offset];
break;
case offsetof(struct pt_regs, sp) >> 3:
val = regs->sp;
break;
case offsetof(struct pt_regs, pc) >> 3:
val = regs->pc;
break;
case offsetof(struct pt_regs, pstate) >> 3:
val = regs->pstate;
break;
default:
val = 0;
}
return val;
}
/* Valid only for Kernel mode traps. */
static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
{
return regs->sp;
}
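
A hedged sketch of how the new accessors above (regs_query_register_offset(), regs_get_register(), regs_get_kernel_stack_nth()) are typically used, e.g. from a kprobe pre-handler; the register name and handler are illustrative and assume <linux/kprobes.h> is included:

static int example_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	int off = regs_query_register_offset("x2");	/* offset of x2 in pt_regs */
	u64 x2 = (off >= 0) ? regs_get_register(regs, off) : 0;
	unsigned long slot0 = regs_get_kernel_stack_nth(regs, 0);

	pr_info("x2=0x%llx stack[0]=0x%lx\n", x2, slot0);
	return 0;
}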
static inline unsigned long regs_return_value(struct pt_regs *regs)
{
	return regs->regs[0];
...@@ -158,8 +209,15 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
struct task_struct;
int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);

#define GET_IP(regs)		((unsigned long)(regs)->pc)
#define SET_IP(regs, value)	((regs)->pc = ((u64) (value)))

#define GET_FP(ptregs)		((unsigned long)(ptregs)->regs[29])
#define SET_FP(ptregs, value)	((ptregs)->regs[29] = ((u64) (value)))

#include <asm-generic/ptrace.h>

#undef profile_pc
extern unsigned long profile_pc(struct pt_regs *regs);

#endif /* __ASSEMBLY__ */
......
...@@ -98,11 +98,11 @@ ...@@ -98,11 +98,11 @@
SCTLR_ELx_SA | SCTLR_ELx_I) SCTLR_ELx_SA | SCTLR_ELx_I)
/* SCTLR_EL1 specific flags. */ /* SCTLR_EL1 specific flags. */
#define SCTLR_EL1_UCI (1 << 26)
#define SCTLR_EL1_SPAN (1 << 23) #define SCTLR_EL1_SPAN (1 << 23)
#define SCTLR_EL1_SED (1 << 8) #define SCTLR_EL1_SED (1 << 8)
#define SCTLR_EL1_CP15BEN (1 << 5) #define SCTLR_EL1_CP15BEN (1 << 5)
/* id_aa64isar0 */ /* id_aa64isar0 */
#define ID_AA64ISAR0_RDM_SHIFT 28 #define ID_AA64ISAR0_RDM_SHIFT 28
#define ID_AA64ISAR0_ATOMICS_SHIFT 20 #define ID_AA64ISAR0_ATOMICS_SHIFT 20
......
...@@ -34,6 +34,8 @@ struct undef_hook { ...@@ -34,6 +34,8 @@ struct undef_hook {
void register_undef_hook(struct undef_hook *hook); void register_undef_hook(struct undef_hook *hook);
void unregister_undef_hook(struct undef_hook *hook); void unregister_undef_hook(struct undef_hook *hook);
void arm64_notify_segfault(struct pt_regs *regs, unsigned long addr);
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
static inline int __in_irqentry_text(unsigned long ptr) static inline int __in_irqentry_text(unsigned long ptr)
{ {
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
/* /*
* User space memory access functions * User space memory access functions
*/ */
#include <linux/kasan-checks.h>
#include <linux/string.h> #include <linux/string.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
...@@ -256,15 +257,29 @@ do { \ ...@@ -256,15 +257,29 @@ do { \
-EFAULT; \ -EFAULT; \
}) })
extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n); extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n); extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n); extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n); extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
{
kasan_check_write(to, n);
return __arch_copy_from_user(to, from, n);
}
static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
{
kasan_check_read(from, n);
return __arch_copy_to_user(to, from, n);
}
static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n) static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
{ {
kasan_check_write(to, n);
if (access_ok(VERIFY_READ, from, n)) if (access_ok(VERIFY_READ, from, n))
n = __copy_from_user(to, from, n); n = __arch_copy_from_user(to, from, n);
else /* security hole - plug it */ else /* security hole - plug it */
memset(to, 0, n); memset(to, 0, n);
return n; return n;
...@@ -272,8 +287,10 @@ static inline unsigned long __must_check copy_from_user(void *to, const void __u ...@@ -272,8 +287,10 @@ static inline unsigned long __must_check copy_from_user(void *to, const void __u
static inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n) static inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n)
{ {
kasan_check_read(from, n);
if (access_ok(VERIFY_WRITE, to, n)) if (access_ok(VERIFY_WRITE, to, n))
n = __copy_to_user(to, from, n); n = __arch_copy_to_user(to, from, n);
return n; return n;
} }
......
...@@ -22,6 +22,8 @@
struct vdso_data {
	__u64 cs_cycle_last;	/* Timebase at clocksource init */
	__u64 raw_time_sec;	/* Raw time */
	__u64 raw_time_nsec;
	__u64 xtime_clock_sec;	/* Kernel time */
	__u64 xtime_clock_nsec;
	__u64 xtime_coarse_sec;	/* Coarse time */
...@@ -29,8 +31,10 @@ struct vdso_data {
	__u64 wtm_clock_sec;	/* Wall to monotonic time */
	__u64 wtm_clock_nsec;
	__u32 tb_seq_count;	/* Timebase sequence counter */
	/* cs_* members must be adjacent and in this order (ldp accesses) */
	__u32 cs_mono_mult;	/* NTP-adjusted clocksource multiplier */
	__u32 cs_shift;		/* Clocksource shift (mono = raw) */
	__u32 cs_raw_mult;	/* Raw clocksource multiplier */
	__u32 tz_minuteswest;	/* Whacky timezone stuff */
	__u32 tz_dsttime;
	__u32 use_syscall;
......
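
With the raw_time_sec/raw_time_nsec and cs_raw_mult fields above, clock_gettime(CLOCK_MONOTONIC_RAW) can be satisfied from the arm64 vDSO fast path instead of falling back to the syscall. A trivial user-space check (illustrative only, not part of the patch):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	/* Served from the vDSO on arm64 after this series; no syscall needed. */
	if (clock_gettime(CLOCK_MONOTONIC_RAW, &ts))
		return 1;
	printf("raw monotonic: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
	return 0;
}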
...@@ -34,6 +34,11 @@ ...@@ -34,6 +34,11 @@
*/ */
#define HVC_SET_VECTORS 1 #define HVC_SET_VECTORS 1
/*
* HVC_SOFT_RESTART - CPU soft reset, used by the cpu_soft_restart routine.
*/
#define HVC_SOFT_RESTART 2
#define BOOT_CPU_MODE_EL1 (0xe11) #define BOOT_CPU_MODE_EL1 (0xe11)
#define BOOT_CPU_MODE_EL2 (0xe12) #define BOOT_CPU_MODE_EL2 (0xe12)
......
...@@ -26,8 +26,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE ...@@ -26,8 +26,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
$(call if_changed,objcopy) $(call if_changed,objcopy)
arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \ arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \
sys_compat.o entry32.o \ sys_compat.o entry32.o
../../arm/kernel/opcodes.o
arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o
arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o
arm64-obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o arm64-obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o
...@@ -47,12 +46,10 @@ arm64-obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o ...@@ -47,12 +46,10 @@ arm64-obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o
arm64-obj-$(CONFIG_PARAVIRT) += paravirt.o arm64-obj-$(CONFIG_PARAVIRT) += paravirt.o
arm64-obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o arm64-obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
arm64-obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o arm64-obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o
arm64-obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o \
cpu-reset.o
obj-y += $(arm64-obj-y) vdso/ obj-y += $(arm64-obj-y) vdso/ probes/
obj-m += $(arm64-obj-m) obj-m += $(arm64-obj-m)
head-y := head.o head-y := head.o
extra-y += $(head-y) vmlinux.lds extra-y += $(head-y) vmlinux.lds
# vDSO - this must be built first to generate the symbol offsets
$(call objectify,$(arm64-obj-y)): $(obj)/vdso/vdso-offsets.h
$(obj)/vdso/vdso-offsets.h: $(obj)/vdso
...@@ -27,6 +27,7 @@ ...@@ -27,6 +27,7 @@
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/arm-smccc.h> #include <linux/arm-smccc.h>
#include <linux/kprobes.h>
#include <asm/checksum.h> #include <asm/checksum.h>
...@@ -34,8 +35,8 @@ EXPORT_SYMBOL(copy_page); ...@@ -34,8 +35,8 @@ EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(clear_page); EXPORT_SYMBOL(clear_page);
/* user mem (segment) */ /* user mem (segment) */
EXPORT_SYMBOL(__copy_from_user); EXPORT_SYMBOL(__arch_copy_from_user);
EXPORT_SYMBOL(__copy_to_user); EXPORT_SYMBOL(__arch_copy_to_user);
EXPORT_SYMBOL(__clear_user); EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(__copy_in_user); EXPORT_SYMBOL(__copy_in_user);
...@@ -68,6 +69,7 @@ EXPORT_SYMBOL(test_and_change_bit); ...@@ -68,6 +69,7 @@ EXPORT_SYMBOL(test_and_change_bit);
#ifdef CONFIG_FUNCTION_TRACER #ifdef CONFIG_FUNCTION_TRACER
EXPORT_SYMBOL(_mcount); EXPORT_SYMBOL(_mcount);
NOKPROBE_SYMBOL(_mcount);
#endif #endif
/* arm-smccc */ /* arm-smccc */
......
...@@ -316,28 +316,6 @@ static void __init register_insn_emulation_sysctl(struct ctl_table *table) ...@@ -316,28 +316,6 @@ static void __init register_insn_emulation_sysctl(struct ctl_table *table)
*/ */
#define TYPE_SWPB (1 << 22) #define TYPE_SWPB (1 << 22)
/*
* Set up process info to signal segmentation fault - called on access error.
*/
static void set_segfault(struct pt_regs *regs, unsigned long addr)
{
siginfo_t info;
down_read(&current->mm->mmap_sem);
if (find_vma(current->mm, addr) == NULL)
info.si_code = SEGV_MAPERR;
else
info.si_code = SEGV_ACCERR;
up_read(&current->mm->mmap_sem);
info.si_signo = SIGSEGV;
info.si_errno = 0;
info.si_addr = (void *) instruction_pointer(regs);
pr_debug("SWP{B} emulation: access caused memory abort!\n");
arm64_notify_die("Illegal memory access", regs, &info, 0);
}
static int emulate_swpX(unsigned int address, unsigned int *data, static int emulate_swpX(unsigned int address, unsigned int *data,
unsigned int type) unsigned int type)
{ {
...@@ -366,6 +344,21 @@ static int emulate_swpX(unsigned int address, unsigned int *data, ...@@ -366,6 +344,21 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
return res; return res;
} }
#define ARM_OPCODE_CONDITION_UNCOND 0xf
static unsigned int __kprobes aarch32_check_condition(u32 opcode, u32 psr)
{
u32 cc_bits = opcode >> 28;
if (cc_bits != ARM_OPCODE_CONDITION_UNCOND) {
if ((*aarch32_opcode_cond_checks[cc_bits])(psr))
return ARM_OPCODE_CONDTEST_PASS;
else
return ARM_OPCODE_CONDTEST_FAIL;
}
return ARM_OPCODE_CONDTEST_UNCOND;
}
/* /*
* swp_handler logs the id of calling process, dissects the instruction, sanity * swp_handler logs the id of calling process, dissects the instruction, sanity
* checks the memory location, calls emulate_swpX for the actual operation and * checks the memory location, calls emulate_swpX for the actual operation and
...@@ -380,7 +373,7 @@ static int swp_handler(struct pt_regs *regs, u32 instr) ...@@ -380,7 +373,7 @@ static int swp_handler(struct pt_regs *regs, u32 instr)
type = instr & TYPE_SWPB; type = instr & TYPE_SWPB;
switch (arm_check_condition(instr, regs->pstate)) { switch (aarch32_check_condition(instr, regs->pstate)) {
case ARM_OPCODE_CONDTEST_PASS: case ARM_OPCODE_CONDTEST_PASS:
break; break;
case ARM_OPCODE_CONDTEST_FAIL: case ARM_OPCODE_CONDTEST_FAIL:
...@@ -430,7 +423,8 @@ static int swp_handler(struct pt_regs *regs, u32 instr) ...@@ -430,7 +423,8 @@ static int swp_handler(struct pt_regs *regs, u32 instr)
return 0; return 0;
fault: fault:
set_segfault(regs, address); pr_debug("SWP{B} emulation: access caused memory abort!\n");
arm64_notify_segfault(regs, address);
return 0; return 0;
} }
...@@ -461,7 +455,7 @@ static int cp15barrier_handler(struct pt_regs *regs, u32 instr) ...@@ -461,7 +455,7 @@ static int cp15barrier_handler(struct pt_regs *regs, u32 instr)
{ {
perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->pc); perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->pc);
switch (arm_check_condition(instr, regs->pstate)) { switch (aarch32_check_condition(instr, regs->pstate)) {
case ARM_OPCODE_CONDTEST_PASS: case ARM_OPCODE_CONDTEST_PASS:
break; break;
case ARM_OPCODE_CONDTEST_FAIL: case ARM_OPCODE_CONDTEST_FAIL:
......
...@@ -51,6 +51,17 @@ int main(void) ...@@ -51,6 +51,17 @@ int main(void)
DEFINE(S_X5, offsetof(struct pt_regs, regs[5])); DEFINE(S_X5, offsetof(struct pt_regs, regs[5]));
DEFINE(S_X6, offsetof(struct pt_regs, regs[6])); DEFINE(S_X6, offsetof(struct pt_regs, regs[6]));
DEFINE(S_X7, offsetof(struct pt_regs, regs[7])); DEFINE(S_X7, offsetof(struct pt_regs, regs[7]));
DEFINE(S_X8, offsetof(struct pt_regs, regs[8]));
DEFINE(S_X10, offsetof(struct pt_regs, regs[10]));
DEFINE(S_X12, offsetof(struct pt_regs, regs[12]));
DEFINE(S_X14, offsetof(struct pt_regs, regs[14]));
DEFINE(S_X16, offsetof(struct pt_regs, regs[16]));
DEFINE(S_X18, offsetof(struct pt_regs, regs[18]));
DEFINE(S_X20, offsetof(struct pt_regs, regs[20]));
DEFINE(S_X22, offsetof(struct pt_regs, regs[22]));
DEFINE(S_X24, offsetof(struct pt_regs, regs[24]));
DEFINE(S_X26, offsetof(struct pt_regs, regs[26]));
DEFINE(S_X28, offsetof(struct pt_regs, regs[28]));
DEFINE(S_LR, offsetof(struct pt_regs, regs[30])); DEFINE(S_LR, offsetof(struct pt_regs, regs[30]));
DEFINE(S_SP, offsetof(struct pt_regs, sp)); DEFINE(S_SP, offsetof(struct pt_regs, sp));
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
...@@ -78,6 +89,7 @@ int main(void) ...@@ -78,6 +89,7 @@ int main(void)
BLANK(); BLANK();
DEFINE(CLOCK_REALTIME, CLOCK_REALTIME); DEFINE(CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC); DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC);
DEFINE(CLOCK_MONOTONIC_RAW, CLOCK_MONOTONIC_RAW);
DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC); DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE); DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE); DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
...@@ -85,6 +97,8 @@ int main(void) ...@@ -85,6 +97,8 @@ int main(void)
DEFINE(NSEC_PER_SEC, NSEC_PER_SEC); DEFINE(NSEC_PER_SEC, NSEC_PER_SEC);
BLANK(); BLANK();
DEFINE(VDSO_CS_CYCLE_LAST, offsetof(struct vdso_data, cs_cycle_last)); DEFINE(VDSO_CS_CYCLE_LAST, offsetof(struct vdso_data, cs_cycle_last));
DEFINE(VDSO_RAW_TIME_SEC, offsetof(struct vdso_data, raw_time_sec));
DEFINE(VDSO_RAW_TIME_NSEC, offsetof(struct vdso_data, raw_time_nsec));
DEFINE(VDSO_XTIME_CLK_SEC, offsetof(struct vdso_data, xtime_clock_sec)); DEFINE(VDSO_XTIME_CLK_SEC, offsetof(struct vdso_data, xtime_clock_sec));
DEFINE(VDSO_XTIME_CLK_NSEC, offsetof(struct vdso_data, xtime_clock_nsec)); DEFINE(VDSO_XTIME_CLK_NSEC, offsetof(struct vdso_data, xtime_clock_nsec));
DEFINE(VDSO_XTIME_CRS_SEC, offsetof(struct vdso_data, xtime_coarse_sec)); DEFINE(VDSO_XTIME_CRS_SEC, offsetof(struct vdso_data, xtime_coarse_sec));
...@@ -92,7 +106,8 @@ int main(void) ...@@ -92,7 +106,8 @@ int main(void)
DEFINE(VDSO_WTM_CLK_SEC, offsetof(struct vdso_data, wtm_clock_sec)); DEFINE(VDSO_WTM_CLK_SEC, offsetof(struct vdso_data, wtm_clock_sec));
DEFINE(VDSO_WTM_CLK_NSEC, offsetof(struct vdso_data, wtm_clock_nsec)); DEFINE(VDSO_WTM_CLK_NSEC, offsetof(struct vdso_data, wtm_clock_nsec));
DEFINE(VDSO_TB_SEQ_COUNT, offsetof(struct vdso_data, tb_seq_count)); DEFINE(VDSO_TB_SEQ_COUNT, offsetof(struct vdso_data, tb_seq_count));
DEFINE(VDSO_CS_MULT, offsetof(struct vdso_data, cs_mult)); DEFINE(VDSO_CS_MONO_MULT, offsetof(struct vdso_data, cs_mono_mult));
DEFINE(VDSO_CS_RAW_MULT, offsetof(struct vdso_data, cs_raw_mult));
DEFINE(VDSO_CS_SHIFT, offsetof(struct vdso_data, cs_shift)); DEFINE(VDSO_CS_SHIFT, offsetof(struct vdso_data, cs_shift));
DEFINE(VDSO_TZ_MINWEST, offsetof(struct vdso_data, tz_minuteswest)); DEFINE(VDSO_TZ_MINWEST, offsetof(struct vdso_data, tz_minuteswest));
DEFINE(VDSO_TZ_DSTTIME, offsetof(struct vdso_data, tz_dsttime)); DEFINE(VDSO_TZ_DSTTIME, offsetof(struct vdso_data, tz_dsttime));
......
/*
* CPU reset routines
*
* Copyright (C) 2001 Deep Blue Solutions Ltd.
* Copyright (C) 2012 ARM Ltd.
* Copyright (C) 2015 Huawei Futurewei Technologies.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/sysreg.h>
#include <asm/virt.h>
.text
.pushsection .idmap.text, "ax"
/*
* __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
* cpu_soft_restart.
*
 * @el2_switch: Flag to indicate a switch to EL2 is needed.
* @entry: Location to jump to for soft reset.
* arg0: First argument passed to @entry.
* arg1: Second argument passed to @entry.
* arg2: Third argument passed to @entry.
*
* Put the CPU into the same state as it would be if it had been reset, and
* branch to what would be the reset vector. It must be executed with the
* flat identity mapping.
*/
ENTRY(__cpu_soft_restart)
/* Clear sctlr_el1 flags. */
mrs x12, sctlr_el1
ldr x13, =SCTLR_ELx_FLAGS
bic x12, x12, x13
msr sctlr_el1, x12
isb
cbz x0, 1f // el2_switch?
mov x0, #HVC_SOFT_RESTART
hvc #0 // no return
1: mov x18, x1 // entry
mov x0, x2 // arg0
mov x1, x3 // arg1
mov x2, x4 // arg2
br x18
ENDPROC(__cpu_soft_restart)
.popsection
/*
* CPU reset routines
*
* Copyright (C) 2015 Huawei Futurewei Technologies.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ARM64_CPU_RESET_H
#define _ARM64_CPU_RESET_H
#include <asm/virt.h>
void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
unsigned long arg0, unsigned long arg1, unsigned long arg2);
static inline void __noreturn cpu_soft_restart(unsigned long el2_switch,
unsigned long entry, unsigned long arg0, unsigned long arg1,
unsigned long arg2)
{
typeof(__cpu_soft_restart) *restart;
el2_switch = el2_switch && !is_kernel_in_hyp_mode() &&
is_hyp_mode_available();
restart = (void *)virt_to_phys(__cpu_soft_restart);
cpu_install_idmap();
restart(el2_switch, entry, arg0, arg1, arg2);
unreachable();
}
#endif
...@@ -46,6 +46,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = { ...@@ -46,6 +46,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.desc = "ARM errata 826319, 827319, 824069", .desc = "ARM errata 826319, 827319, 824069",
.capability = ARM64_WORKAROUND_CLEAN_CACHE, .capability = ARM64_WORKAROUND_CLEAN_CACHE,
MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02), MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02),
.enable = cpu_enable_cache_maint_trap,
}, },
#endif #endif
#ifdef CONFIG_ARM64_ERRATUM_819472 #ifdef CONFIG_ARM64_ERRATUM_819472
...@@ -54,6 +55,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = { ...@@ -54,6 +55,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.desc = "ARM errata 819472", .desc = "ARM errata 819472",
.capability = ARM64_WORKAROUND_CLEAN_CACHE, .capability = ARM64_WORKAROUND_CLEAN_CACHE,
MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01), MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01),
.enable = cpu_enable_cache_maint_trap,
}, },
#endif #endif
#ifdef CONFIG_ARM64_ERRATUM_832075 #ifdef CONFIG_ARM64_ERRATUM_832075
...@@ -133,3 +135,8 @@ void check_local_cpu_errata(void) ...@@ -133,3 +135,8 @@ void check_local_cpu_errata(void)
{ {
update_cpu_capabilities(arm64_errata, "enabling workaround for"); update_cpu_capabilities(arm64_errata, "enabling workaround for");
} }
void __init enable_errata_workarounds(void)
{
enable_cpu_capabilities(arm64_errata);
}
...@@ -913,8 +913,7 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, ...@@ -913,8 +913,7 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
* Run through the enabled capabilities and enable() it on all active * Run through the enabled capabilities and enable() it on all active
* CPUs * CPUs
*/ */
static void __init void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
{ {
for (; caps->matches; caps++) for (; caps->matches; caps++)
if (caps->enable && cpus_have_cap(caps->capability)) if (caps->enable && cpus_have_cap(caps->capability))
...@@ -1036,6 +1035,7 @@ void __init setup_cpu_features(void) ...@@ -1036,6 +1035,7 @@ void __init setup_cpu_features(void)
/* Set the CPU feature capabilities */ /* Set the CPU feature capabilities */
setup_feature_capabilities(); setup_feature_capabilities();
enable_errata_workarounds();
setup_elf_hwcaps(arm64_elf_hwcaps); setup_elf_hwcaps(arm64_elf_hwcaps);
if (system_supports_32bit_el0()) if (system_supports_32bit_el0())
......
...@@ -183,6 +183,123 @@ const struct seq_operations cpuinfo_op = { ...@@ -183,6 +183,123 @@ const struct seq_operations cpuinfo_op = {
.show = c_show .show = c_show
}; };
static struct kobj_type cpuregs_kobj_type = {
.sysfs_ops = &kobj_sysfs_ops,
};
/*
* The ARM ARM uses the phrase "32-bit register" to describe a register
* whose upper 32 bits are RES0 (per C5.1.1, ARM DDI 0487A.i), however
* no statement is made as to whether the upper 32 bits will or will not
* be made use of in future, and between ARM DDI 0487A.c and ARM DDI
* 0487A.d CLIDR_EL1 was expanded from 32-bit to 64-bit.
*
* Thus, while both MIDR_EL1 and REVIDR_EL1 are described as 32-bit
* registers, we expose them both as 64 bit values to cater for possible
* future expansion without an ABI break.
*/
#define kobj_to_cpuinfo(kobj) container_of(kobj, struct cpuinfo_arm64, kobj)
#define CPUREGS_ATTR_RO(_name, _field) \
static ssize_t _name##_show(struct kobject *kobj, \
struct kobj_attribute *attr, char *buf) \
{ \
struct cpuinfo_arm64 *info = kobj_to_cpuinfo(kobj); \
\
if (info->reg_midr) \
return sprintf(buf, "0x%016x\n", info->reg_##_field); \
else \
return 0; \
} \
static struct kobj_attribute cpuregs_attr_##_name = __ATTR_RO(_name)
CPUREGS_ATTR_RO(midr_el1, midr);
CPUREGS_ATTR_RO(revidr_el1, revidr);
static struct attribute *cpuregs_id_attrs[] = {
&cpuregs_attr_midr_el1.attr,
&cpuregs_attr_revidr_el1.attr,
NULL
};
static struct attribute_group cpuregs_attr_group = {
.attrs = cpuregs_id_attrs,
.name = "identification"
};
static int cpuid_add_regs(int cpu)
{
int rc;
struct device *dev;
struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
dev = get_cpu_device(cpu);
if (!dev) {
rc = -ENODEV;
goto out;
}
rc = kobject_add(&info->kobj, &dev->kobj, "regs");
if (rc)
goto out;
rc = sysfs_create_group(&info->kobj, &cpuregs_attr_group);
if (rc)
kobject_del(&info->kobj);
out:
return rc;
}
static int cpuid_remove_regs(int cpu)
{
struct device *dev;
struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
dev = get_cpu_device(cpu);
if (!dev)
return -ENODEV;
if (info->kobj.parent) {
sysfs_remove_group(&info->kobj, &cpuregs_attr_group);
kobject_del(&info->kobj);
}
return 0;
}
static int cpuid_callback(struct notifier_block *nb,
unsigned long action, void *hcpu)
{
int rc = 0;
unsigned long cpu = (unsigned long)hcpu;
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
rc = cpuid_add_regs(cpu);
break;
case CPU_DEAD:
rc = cpuid_remove_regs(cpu);
break;
}
return notifier_from_errno(rc);
}
static int __init cpuinfo_regs_init(void)
{
int cpu;
cpu_notifier_register_begin();
for_each_possible_cpu(cpu) {
struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
kobject_init(&info->kobj, &cpuregs_kobj_type);
if (cpu_online(cpu))
cpuid_add_regs(cpu);
}
__hotcpu_notifier(cpuid_callback, 0);
cpu_notifier_register_done();
return 0;
}
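As a rough userspace usage sketch (not part of this patch), reading and decoding cpu0's MIDR_EL1 through the new file; the path follows the sysfs ABI entry added above, and the field layout (implementer [31:24], variant [23:20], part number [15:4], revision [3:0]) is the architected MIDR_EL1 layout:
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/devices/system/cpu/cpu0/regs/identification/midr_el1";
	unsigned long long midr;
	FILE *f = fopen(path, "r");

	if (!f || fscanf(f, "%llx", &midr) != 1)
		return 1;
	fclose(f);

	printf("implementer 0x%02llx part 0x%03llx r%llup%llu\n",
	       (midr >> 24) & 0xff,	/* Implementer */
	       (midr >> 4) & 0xfff,	/* PartNum */
	       (midr >> 20) & 0xf,	/* Variant (major revision) */
	       midr & 0xf);		/* Revision (minor revision) */
	return 0;
}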
static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info) static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
{ {
unsigned int cpu = smp_processor_id(); unsigned int cpu = smp_processor_id();
...@@ -212,6 +329,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) ...@@ -212,6 +329,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_ctr = read_cpuid_cachetype(); info->reg_ctr = read_cpuid_cachetype();
info->reg_dczid = read_cpuid(DCZID_EL0); info->reg_dczid = read_cpuid(DCZID_EL0);
info->reg_midr = read_cpuid_id(); info->reg_midr = read_cpuid_id();
info->reg_revidr = read_cpuid(REVIDR_EL1);
info->reg_id_aa64dfr0 = read_cpuid(ID_AA64DFR0_EL1); info->reg_id_aa64dfr0 = read_cpuid(ID_AA64DFR0_EL1);
info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1); info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
...@@ -264,3 +382,5 @@ void __init cpuinfo_store_boot_cpu(void) ...@@ -264,3 +382,5 @@ void __init cpuinfo_store_boot_cpu(void)
boot_cpu_data = *info; boot_cpu_data = *info;
init_cpu_features(&boot_cpu_data); init_cpu_features(&boot_cpu_data);
} }
device_initcall(cpuinfo_regs_init);
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/kprobes.h>
#include <linux/stat.h> #include <linux/stat.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
...@@ -48,6 +49,7 @@ static void mdscr_write(u32 mdscr) ...@@ -48,6 +49,7 @@ static void mdscr_write(u32 mdscr)
asm volatile("msr mdscr_el1, %0" :: "r" (mdscr)); asm volatile("msr mdscr_el1, %0" :: "r" (mdscr));
local_dbg_restore(flags); local_dbg_restore(flags);
} }
NOKPROBE_SYMBOL(mdscr_write);
static u32 mdscr_read(void) static u32 mdscr_read(void)
{ {
...@@ -55,6 +57,7 @@ static u32 mdscr_read(void) ...@@ -55,6 +57,7 @@ static u32 mdscr_read(void)
asm volatile("mrs %0, mdscr_el1" : "=r" (mdscr)); asm volatile("mrs %0, mdscr_el1" : "=r" (mdscr));
return mdscr; return mdscr;
} }
NOKPROBE_SYMBOL(mdscr_read);
/* /*
* Allow root to disable self-hosted debug from userspace. * Allow root to disable self-hosted debug from userspace.
...@@ -103,6 +106,7 @@ void enable_debug_monitors(enum dbg_active_el el) ...@@ -103,6 +106,7 @@ void enable_debug_monitors(enum dbg_active_el el)
mdscr_write(mdscr); mdscr_write(mdscr);
} }
} }
NOKPROBE_SYMBOL(enable_debug_monitors);
void disable_debug_monitors(enum dbg_active_el el) void disable_debug_monitors(enum dbg_active_el el)
{ {
...@@ -123,6 +127,7 @@ void disable_debug_monitors(enum dbg_active_el el) ...@@ -123,6 +127,7 @@ void disable_debug_monitors(enum dbg_active_el el)
mdscr_write(mdscr); mdscr_write(mdscr);
} }
} }
NOKPROBE_SYMBOL(disable_debug_monitors);
/* /*
* OS lock clearing. * OS lock clearing.
...@@ -151,7 +156,6 @@ static int debug_monitors_init(void) ...@@ -151,7 +156,6 @@ static int debug_monitors_init(void)
/* Clear the OS lock. */ /* Clear the OS lock. */
on_each_cpu(clear_os_lock, NULL, 1); on_each_cpu(clear_os_lock, NULL, 1);
isb(); isb();
local_dbg_enable();
/* Register hotplug handler. */ /* Register hotplug handler. */
__register_cpu_notifier(&os_lock_nb); __register_cpu_notifier(&os_lock_nb);
...@@ -166,22 +170,15 @@ postcore_initcall(debug_monitors_init); ...@@ -166,22 +170,15 @@ postcore_initcall(debug_monitors_init);
*/ */
static void set_regs_spsr_ss(struct pt_regs *regs) static void set_regs_spsr_ss(struct pt_regs *regs)
{ {
unsigned long spsr; regs->pstate |= DBG_SPSR_SS;
spsr = regs->pstate;
spsr &= ~DBG_SPSR_SS;
spsr |= DBG_SPSR_SS;
regs->pstate = spsr;
} }
NOKPROBE_SYMBOL(set_regs_spsr_ss);
static void clear_regs_spsr_ss(struct pt_regs *regs) static void clear_regs_spsr_ss(struct pt_regs *regs)
{ {
unsigned long spsr; regs->pstate &= ~DBG_SPSR_SS;
spsr = regs->pstate;
spsr &= ~DBG_SPSR_SS;
regs->pstate = spsr;
} }
NOKPROBE_SYMBOL(clear_regs_spsr_ss);
/* EL1 Single Step Handler hooks */ /* EL1 Single Step Handler hooks */
static LIST_HEAD(step_hook); static LIST_HEAD(step_hook);
...@@ -225,6 +222,7 @@ static int call_step_hook(struct pt_regs *regs, unsigned int esr) ...@@ -225,6 +222,7 @@ static int call_step_hook(struct pt_regs *regs, unsigned int esr)
return retval; return retval;
} }
NOKPROBE_SYMBOL(call_step_hook);
static void send_user_sigtrap(int si_code) static void send_user_sigtrap(int si_code)
{ {
...@@ -266,6 +264,10 @@ static int single_step_handler(unsigned long addr, unsigned int esr, ...@@ -266,6 +264,10 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
*/ */
user_rewind_single_step(current); user_rewind_single_step(current);
} else { } else {
#ifdef CONFIG_KPROBES
if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
return 0;
#endif
if (call_step_hook(regs, esr) == DBG_HOOK_HANDLED) if (call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
return 0; return 0;
...@@ -279,6 +281,7 @@ static int single_step_handler(unsigned long addr, unsigned int esr, ...@@ -279,6 +281,7 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
return 0; return 0;
} }
NOKPROBE_SYMBOL(single_step_handler);
/* /*
* Breakpoint handler is re-entrant as another breakpoint can * Breakpoint handler is re-entrant as another breakpoint can
...@@ -316,19 +319,28 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr) ...@@ -316,19 +319,28 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr)
return fn ? fn(regs, esr) : DBG_HOOK_ERROR; return fn ? fn(regs, esr) : DBG_HOOK_ERROR;
} }
NOKPROBE_SYMBOL(call_break_hook);
static int brk_handler(unsigned long addr, unsigned int esr, static int brk_handler(unsigned long addr, unsigned int esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
if (user_mode(regs)) { if (user_mode(regs)) {
send_user_sigtrap(TRAP_BRKPT); send_user_sigtrap(TRAP_BRKPT);
} else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) { }
pr_warning("Unexpected kernel BRK exception at EL1\n"); #ifdef CONFIG_KPROBES
else if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
if (kprobe_breakpoint_handler(regs, esr) != DBG_HOOK_HANDLED)
return -EFAULT;
}
#endif
else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
pr_warn("Unexpected kernel BRK exception at EL1\n");
return -EFAULT; return -EFAULT;
} }
return 0; return 0;
} }
NOKPROBE_SYMBOL(brk_handler);
int aarch32_break_handler(struct pt_regs *regs) int aarch32_break_handler(struct pt_regs *regs)
{ {
...@@ -365,6 +377,7 @@ int aarch32_break_handler(struct pt_regs *regs) ...@@ -365,6 +377,7 @@ int aarch32_break_handler(struct pt_regs *regs)
send_user_sigtrap(TRAP_BRKPT); send_user_sigtrap(TRAP_BRKPT);
return 0; return 0;
} }
NOKPROBE_SYMBOL(aarch32_break_handler);
static int __init debug_traps_init(void) static int __init debug_traps_init(void)
{ {
...@@ -386,6 +399,7 @@ void user_rewind_single_step(struct task_struct *task) ...@@ -386,6 +399,7 @@ void user_rewind_single_step(struct task_struct *task)
if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP)) if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
set_regs_spsr_ss(task_pt_regs(task)); set_regs_spsr_ss(task_pt_regs(task));
} }
NOKPROBE_SYMBOL(user_rewind_single_step);
void user_fastforward_single_step(struct task_struct *task) void user_fastforward_single_step(struct task_struct *task)
{ {
...@@ -401,6 +415,7 @@ void kernel_enable_single_step(struct pt_regs *regs) ...@@ -401,6 +415,7 @@ void kernel_enable_single_step(struct pt_regs *regs)
mdscr_write(mdscr_read() | DBG_MDSCR_SS); mdscr_write(mdscr_read() | DBG_MDSCR_SS);
enable_debug_monitors(DBG_ACTIVE_EL1); enable_debug_monitors(DBG_ACTIVE_EL1);
} }
NOKPROBE_SYMBOL(kernel_enable_single_step);
void kernel_disable_single_step(void) void kernel_disable_single_step(void)
{ {
...@@ -408,12 +423,14 @@ void kernel_disable_single_step(void) ...@@ -408,12 +423,14 @@ void kernel_disable_single_step(void)
mdscr_write(mdscr_read() & ~DBG_MDSCR_SS); mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
disable_debug_monitors(DBG_ACTIVE_EL1); disable_debug_monitors(DBG_ACTIVE_EL1);
} }
NOKPROBE_SYMBOL(kernel_disable_single_step);
int kernel_active_single_step(void) int kernel_active_single_step(void)
{ {
WARN_ON(!irqs_disabled()); WARN_ON(!irqs_disabled());
return mdscr_read() & DBG_MDSCR_SS; return mdscr_read() & DBG_MDSCR_SS;
} }
NOKPROBE_SYMBOL(kernel_active_single_step);
/* ptrace API */ /* ptrace API */
void user_enable_single_step(struct task_struct *task) void user_enable_single_step(struct task_struct *task)
...@@ -421,8 +438,10 @@ void user_enable_single_step(struct task_struct *task) ...@@ -421,8 +438,10 @@ void user_enable_single_step(struct task_struct *task)
set_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP); set_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
set_regs_spsr_ss(task_pt_regs(task)); set_regs_spsr_ss(task_pt_regs(task));
} }
NOKPROBE_SYMBOL(user_enable_single_step);
void user_disable_single_step(struct task_struct *task) void user_disable_single_step(struct task_struct *task)
{ {
clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP); clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
} }
NOKPROBE_SYMBOL(user_disable_single_step);
...@@ -62,13 +62,61 @@ struct screen_info screen_info __section(.data); ...@@ -62,13 +62,61 @@ struct screen_info screen_info __section(.data);
int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md) int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
{ {
pteval_t prot_val = create_mapping_protection(md); pteval_t prot_val = create_mapping_protection(md);
bool allow_block_mappings = (md->type != EFI_RUNTIME_SERVICES_CODE &&
md->type != EFI_RUNTIME_SERVICES_DATA);
if (!PAGE_ALIGNED(md->phys_addr) ||
!PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
/*
* If the end address of this region is not aligned to page
* size, the mapping is rounded up, and may end up sharing a
* page frame with the next UEFI memory region. If we create
* a block entry now, we may need to split it again when mapping
* the next region, and support for that is going to be removed
* from the MMU routines. So avoid block mappings altogether in
* that case.
*/
allow_block_mappings = false;
}
create_pgd_mapping(mm, md->phys_addr, md->virt_addr, create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
md->num_pages << EFI_PAGE_SHIFT, md->num_pages << EFI_PAGE_SHIFT,
__pgprot(prot_val | PTE_NG)); __pgprot(prot_val | PTE_NG), allow_block_mappings);
return 0;
}
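For example, with 64 KB kernel pages a UEFI memory map described at 4 KB granularity can yield runtime regions whose start or size is not a multiple of PAGE_SIZE; such regions take the !PAGE_ALIGNED branch in efi_create_mapping() above and are mapped with pages only, so no block entry needs to be split later when an adjacent region is mapped.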
static int __init set_permissions(pte_t *ptep, pgtable_t token,
unsigned long addr, void *data)
{
efi_memory_desc_t *md = data;
pte_t pte = *ptep;
if (md->attribute & EFI_MEMORY_RO)
pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
if (md->attribute & EFI_MEMORY_XP)
pte = set_pte_bit(pte, __pgprot(PTE_PXN));
set_pte(ptep, pte);
return 0; return 0;
} }
int __init efi_set_mapping_permissions(struct mm_struct *mm,
efi_memory_desc_t *md)
{
BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
md->type != EFI_RUNTIME_SERVICES_DATA);
/*
* Calling apply_to_page_range() is only safe on regions that are
* guaranteed to be mapped down to pages. Since we are only called
* for regions that have been mapped using efi_create_mapping() above
* (and this is checked by the generic Memory Attributes table parsing
* routines), there is no need to check that again here.
*/
return apply_to_page_range(mm, md->virt_addr,
md->num_pages << EFI_PAGE_SHIFT,
set_permissions, md);
}
static int __init arm64_dmi_init(void) static int __init arm64_dmi_init(void)
{ {
/* /*
......
...@@ -258,6 +258,7 @@ tsk .req x28 // current thread_info ...@@ -258,6 +258,7 @@ tsk .req x28 // current thread_info
/* /*
* Exception vectors. * Exception vectors.
*/ */
.pushsection ".entry.text", "ax"
.align 11 .align 11
ENTRY(vectors) ENTRY(vectors)
...@@ -466,7 +467,7 @@ el0_sync: ...@@ -466,7 +467,7 @@ el0_sync:
cmp x24, #ESR_ELx_EC_FP_EXC64 // FP/ASIMD exception cmp x24, #ESR_ELx_EC_FP_EXC64 // FP/ASIMD exception
b.eq el0_fpsimd_exc b.eq el0_fpsimd_exc
cmp x24, #ESR_ELx_EC_SYS64 // configurable trap cmp x24, #ESR_ELx_EC_SYS64 // configurable trap
b.eq el0_undef b.eq el0_sys
cmp x24, #ESR_ELx_EC_SP_ALIGN // stack alignment exception cmp x24, #ESR_ELx_EC_SP_ALIGN // stack alignment exception
b.eq el0_sp_pc b.eq el0_sp_pc
cmp x24, #ESR_ELx_EC_PC_ALIGN // pc alignment exception cmp x24, #ESR_ELx_EC_PC_ALIGN // pc alignment exception
...@@ -547,7 +548,7 @@ el0_ia: ...@@ -547,7 +548,7 @@ el0_ia:
enable_dbg_and_irq enable_dbg_and_irq
ct_user_exit ct_user_exit
mov x0, x26 mov x0, x26
orr x1, x25, #1 << 24 // use reserved ISS bit for instruction aborts mov x1, x25
mov x2, sp mov x2, sp
bl do_mem_abort bl do_mem_abort
b ret_to_user b ret_to_user
...@@ -594,6 +595,16 @@ el0_undef: ...@@ -594,6 +595,16 @@ el0_undef:
mov x0, sp mov x0, sp
bl do_undefinstr bl do_undefinstr
b ret_to_user b ret_to_user
el0_sys:
/*
* System instructions, for trapped cache maintenance instructions
*/
enable_dbg_and_irq
ct_user_exit
mov x0, x25
mov x1, sp
bl do_sysinstr
b ret_to_user
el0_dbg: el0_dbg:
/* /*
* Debug exception handling * Debug exception handling
...@@ -789,6 +800,8 @@ __ni_sys_trace: ...@@ -789,6 +800,8 @@ __ni_sys_trace:
bl do_ni_syscall bl do_ni_syscall
b __sys_trace_return b __sys_trace_return
.popsection // .entry.text
/* /*
* Special system call wrappers. * Special system call wrappers.
*/ */
......
...@@ -24,6 +24,7 @@ ...@@ -24,6 +24,7 @@
#include <linux/cpu_pm.h> #include <linux/cpu_pm.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/hw_breakpoint.h> #include <linux/hw_breakpoint.h>
#include <linux/kprobes.h>
#include <linux/perf_event.h> #include <linux/perf_event.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/smp.h> #include <linux/smp.h>
...@@ -127,6 +128,7 @@ static u64 read_wb_reg(int reg, int n) ...@@ -127,6 +128,7 @@ static u64 read_wb_reg(int reg, int n)
return val; return val;
} }
NOKPROBE_SYMBOL(read_wb_reg);
static void write_wb_reg(int reg, int n, u64 val) static void write_wb_reg(int reg, int n, u64 val)
{ {
...@@ -140,6 +142,7 @@ static void write_wb_reg(int reg, int n, u64 val) ...@@ -140,6 +142,7 @@ static void write_wb_reg(int reg, int n, u64 val)
} }
isb(); isb();
} }
NOKPROBE_SYMBOL(write_wb_reg);
/* /*
* Convert a breakpoint privilege level to the corresponding exception * Convert a breakpoint privilege level to the corresponding exception
...@@ -157,6 +160,7 @@ static enum dbg_active_el debug_exception_level(int privilege) ...@@ -157,6 +160,7 @@ static enum dbg_active_el debug_exception_level(int privilege)
return -EINVAL; return -EINVAL;
} }
} }
NOKPROBE_SYMBOL(debug_exception_level);
enum hw_breakpoint_ops { enum hw_breakpoint_ops {
HW_BREAKPOINT_INSTALL, HW_BREAKPOINT_INSTALL,
...@@ -575,6 +579,7 @@ static void toggle_bp_registers(int reg, enum dbg_active_el el, int enable) ...@@ -575,6 +579,7 @@ static void toggle_bp_registers(int reg, enum dbg_active_el el, int enable)
write_wb_reg(reg, i, ctrl); write_wb_reg(reg, i, ctrl);
} }
} }
NOKPROBE_SYMBOL(toggle_bp_registers);
/* /*
* Debug exception handlers. * Debug exception handlers.
...@@ -654,6 +659,7 @@ static int breakpoint_handler(unsigned long unused, unsigned int esr, ...@@ -654,6 +659,7 @@ static int breakpoint_handler(unsigned long unused, unsigned int esr,
return 0; return 0;
} }
NOKPROBE_SYMBOL(breakpoint_handler);
static int watchpoint_handler(unsigned long addr, unsigned int esr, static int watchpoint_handler(unsigned long addr, unsigned int esr,
struct pt_regs *regs) struct pt_regs *regs)
...@@ -756,6 +762,7 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr, ...@@ -756,6 +762,7 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
return 0; return 0;
} }
NOKPROBE_SYMBOL(watchpoint_handler);
/* /*
* Handle single-step exception. * Handle single-step exception.
...@@ -813,6 +820,7 @@ int reinstall_suspended_bps(struct pt_regs *regs) ...@@ -813,6 +820,7 @@ int reinstall_suspended_bps(struct pt_regs *regs)
return !handled_exception; return !handled_exception;
} }
NOKPROBE_SYMBOL(reinstall_suspended_bps);
/* /*
* Context-switcher for restoring suspended breakpoints. * Context-switcher for restoring suspended breakpoints.
......
...@@ -71,8 +71,16 @@ el1_sync: ...@@ -71,8 +71,16 @@ el1_sync:
msr vbar_el2, x1 msr vbar_el2, x1
b 9f b 9f
2: cmp x0, #HVC_SOFT_RESTART
b.ne 3f
mov x0, x2
mov x2, x4
mov x4, x1
mov x1, x3
br x4 // no return
/* Someone called kvm_call_hyp() against the hyp-stub... */ /* Someone called kvm_call_hyp() against the hyp-stub... */
2: mov x0, #ARM_EXCEPTION_HYP_GONE 3: mov x0, #ARM_EXCEPTION_HYP_GONE
9: eret 9: eret
ENDPROC(el1_sync) ENDPROC(el1_sync)
......
...@@ -30,6 +30,7 @@ ...@@ -30,6 +30,7 @@
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/fixmap.h> #include <asm/fixmap.h>
#include <asm/opcodes.h>
#include <asm/insn.h> #include <asm/insn.h>
#define AARCH64_INSN_SF_BIT BIT(31) #define AARCH64_INSN_SF_BIT BIT(31)
...@@ -162,6 +163,32 @@ static bool __kprobes __aarch64_insn_hotpatch_safe(u32 insn) ...@@ -162,6 +163,32 @@ static bool __kprobes __aarch64_insn_hotpatch_safe(u32 insn)
aarch64_insn_is_nop(insn); aarch64_insn_is_nop(insn);
} }
bool __kprobes aarch64_insn_uses_literal(u32 insn)
{
/* ldr/ldrsw (literal), prfm */
return aarch64_insn_is_ldr_lit(insn) ||
aarch64_insn_is_ldrsw_lit(insn) ||
aarch64_insn_is_adr_adrp(insn) ||
aarch64_insn_is_prfm_lit(insn);
}
bool __kprobes aarch64_insn_is_branch(u32 insn)
{
/* b, bl, cb*, tb*, b.cond, br, blr */
return aarch64_insn_is_b(insn) ||
aarch64_insn_is_bl(insn) ||
aarch64_insn_is_cbz(insn) ||
aarch64_insn_is_cbnz(insn) ||
aarch64_insn_is_tbz(insn) ||
aarch64_insn_is_tbnz(insn) ||
aarch64_insn_is_ret(insn) ||
aarch64_insn_is_br(insn) ||
aarch64_insn_is_blr(insn) ||
aarch64_insn_is_bcond(insn);
}
/* /*
* ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a * ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a
* Section B2.6.5 "Concurrent modification and execution of instructions": * Section B2.6.5 "Concurrent modification and execution of instructions":
...@@ -1175,6 +1202,14 @@ u32 aarch64_set_branch_offset(u32 insn, s32 offset) ...@@ -1175,6 +1202,14 @@ u32 aarch64_set_branch_offset(u32 insn, s32 offset)
BUG(); BUG();
} }
/*
* Extract the Op/CR data from a msr/mrs instruction.
*/
u32 aarch64_insn_extract_system_reg(u32 insn)
{
return (insn & 0x1FFFE0) >> 5;
}
bool aarch32_insn_is_wide(u32 insn) bool aarch32_insn_is_wide(u32 insn)
{ {
return insn >= 0xe800; return insn >= 0xe800;
...@@ -1200,3 +1235,101 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn) ...@@ -1200,3 +1235,101 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn)
{ {
return insn & CRM_MASK; return insn & CRM_MASK;
} }
static bool __kprobes __check_eq(unsigned long pstate)
{
return (pstate & PSR_Z_BIT) != 0;
}
static bool __kprobes __check_ne(unsigned long pstate)
{
return (pstate & PSR_Z_BIT) == 0;
}
static bool __kprobes __check_cs(unsigned long pstate)
{
return (pstate & PSR_C_BIT) != 0;
}
static bool __kprobes __check_cc(unsigned long pstate)
{
return (pstate & PSR_C_BIT) == 0;
}
static bool __kprobes __check_mi(unsigned long pstate)
{
return (pstate & PSR_N_BIT) != 0;
}
static bool __kprobes __check_pl(unsigned long pstate)
{
return (pstate & PSR_N_BIT) == 0;
}
static bool __kprobes __check_vs(unsigned long pstate)
{
return (pstate & PSR_V_BIT) != 0;
}
static bool __kprobes __check_vc(unsigned long pstate)
{
return (pstate & PSR_V_BIT) == 0;
}
static bool __kprobes __check_hi(unsigned long pstate)
{
pstate &= ~(pstate >> 1); /* PSR_C_BIT &= ~PSR_Z_BIT */
return (pstate & PSR_C_BIT) != 0;
}
static bool __kprobes __check_ls(unsigned long pstate)
{
pstate &= ~(pstate >> 1); /* PSR_C_BIT &= ~PSR_Z_BIT */
return (pstate & PSR_C_BIT) == 0;
}
static bool __kprobes __check_ge(unsigned long pstate)
{
pstate ^= (pstate << 3); /* PSR_N_BIT ^= PSR_V_BIT */
return (pstate & PSR_N_BIT) == 0;
}
static bool __kprobes __check_lt(unsigned long pstate)
{
pstate ^= (pstate << 3); /* PSR_N_BIT ^= PSR_V_BIT */
return (pstate & PSR_N_BIT) != 0;
}
static bool __kprobes __check_gt(unsigned long pstate)
{
/* PSR_N_BIT ^= PSR_V_BIT */
unsigned long temp = pstate ^ (pstate << 3);
temp |= (pstate << 1); /* PSR_N_BIT |= PSR_Z_BIT */
return (temp & PSR_N_BIT) == 0;
}
static bool __kprobes __check_le(unsigned long pstate)
{
/* PSR_N_BIT ^= PSR_V_BIT */
unsigned long temp = pstate ^ (pstate << 3);
temp |= (pstate << 1); /* PSR_N_BIT |= PSR_Z_BIT */
return (temp & PSR_N_BIT) != 0;
}
static bool __kprobes __check_al(unsigned long pstate)
{
return true;
}
/*
* Note that the ARMv8 ARM calls condition code 0b1111 "nv", but states that
* it behaves identically to 0b1110 ("al").
*/
pstate_check_t * const aarch32_opcode_cond_checks[16] = {
__check_eq, __check_ne, __check_cs, __check_cc,
__check_mi, __check_pl, __check_vs, __check_vc,
__check_hi, __check_ls, __check_ge, __check_lt,
__check_gt, __check_le, __check_al, __check_al
};
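The shift tricks in __check_hi()/__check_ls() and __check_ge()/__check_lt() above rely on the PSTATE flag layout (V, C, Z, N in bits 28-31). A minimal host-side sketch, assuming only the standard PSR_* bit values from asm/ptrace.h, that exhaustively cross-checks two of them against their plain definitions:
#include <assert.h>
#include <stdbool.h>

/* PSTATE flag bits as defined in arch/arm64/include/uapi/asm/ptrace.h */
#define PSR_V_BIT	0x10000000UL
#define PSR_C_BIT	0x20000000UL
#define PSR_Z_BIT	0x40000000UL
#define PSR_N_BIT	0x80000000UL

/* Plain definitions of the HI and GE conditions */
static bool hi_ref(unsigned long p)
{
	return (p & PSR_C_BIT) && !(p & PSR_Z_BIT);
}

static bool ge_ref(unsigned long p)
{
	return !!(p & PSR_N_BIT) == !!(p & PSR_V_BIT);
}

/* The shift tricks used by __check_hi() and __check_ge() above */
static bool hi_trick(unsigned long p)
{
	p &= ~(p >> 1);		/* PSR_C_BIT &= ~PSR_Z_BIT */
	return (p & PSR_C_BIT) != 0;
}

static bool ge_trick(unsigned long p)
{
	p ^= (p << 3);		/* PSR_N_BIT ^= PSR_V_BIT */
	return (p & PSR_N_BIT) == 0;
}

int main(void)
{
	/* Compare over all 16 NZCV combinations */
	for (unsigned int nzcv = 0; nzcv < 16; nzcv++) {
		unsigned long pstate = (unsigned long)nzcv << 28;

		assert(hi_trick(pstate) == hi_ref(pstate));
		assert(ge_trick(pstate) == ge_ref(pstate));
	}
	return 0;
}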
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/irq.h> #include <linux/irq.h>
#include <linux/kdebug.h> #include <linux/kdebug.h>
#include <linux/kgdb.h> #include <linux/kgdb.h>
#include <linux/kprobes.h>
#include <asm/traps.h> #include <asm/traps.h>
struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] = { struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] = {
...@@ -230,6 +231,7 @@ static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr) ...@@ -230,6 +231,7 @@ static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
return 0; return 0;
} }
NOKPROBE_SYMBOL(kgdb_brk_fn)
static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
{ {
...@@ -238,12 +240,14 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) ...@@ -238,12 +240,14 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
return 0; return 0;
} }
NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
{ {
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
return 0; return 0;
} }
NOKPROBE_SYMBOL(kgdb_step_brk_fn);
static struct break_hook kgdb_brkpt_hook = { static struct break_hook kgdb_brkpt_hook = {
.esr_mask = 0xffffffff, .esr_mask = 0xffffffff,
......
/*
* kexec for arm64
*
* Copyright (C) Linaro.
* Copyright (C) Huawei Futurewei Technologies.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kexec.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/cpu_ops.h>
#include <asm/mmu_context.h>
#include "cpu-reset.h"
/* Global variables for the arm64_relocate_new_kernel routine. */
extern const unsigned char arm64_relocate_new_kernel[];
extern const unsigned long arm64_relocate_new_kernel_size;
static unsigned long kimage_start;
/**
* kexec_image_info - For debugging output.
*/
#define kexec_image_info(_i) _kexec_image_info(__func__, __LINE__, _i)
static void _kexec_image_info(const char *func, int line,
const struct kimage *kimage)
{
unsigned long i;
pr_debug("%s:%d:\n", func, line);
pr_debug(" kexec kimage info:\n");
pr_debug(" type: %d\n", kimage->type);
pr_debug(" start: %lx\n", kimage->start);
pr_debug(" head: %lx\n", kimage->head);
pr_debug(" nr_segments: %lu\n", kimage->nr_segments);
for (i = 0; i < kimage->nr_segments; i++) {
pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n",
i,
kimage->segment[i].mem,
kimage->segment[i].mem + kimage->segment[i].memsz,
kimage->segment[i].memsz,
kimage->segment[i].memsz / PAGE_SIZE);
}
}
void machine_kexec_cleanup(struct kimage *kimage)
{
/* Empty routine needed to avoid build errors. */
}
/**
* machine_kexec_prepare - Prepare for a kexec reboot.
*
* Called from the core kexec code when a kernel image is loaded.
* Forbid loading a kexec kernel if we have no way of hotplugging cpus or cpus
* are stuck in the kernel. This avoids a panic once we hit machine_kexec().
*/
int machine_kexec_prepare(struct kimage *kimage)
{
kimage_start = kimage->start;
kexec_image_info(kimage);
if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) {
pr_err("Can't kexec: CPUs are stuck in the kernel.\n");
return -EBUSY;
}
return 0;
}
/**
* kexec_list_flush - Helper to flush the kimage list and source pages to PoC.
*/
static void kexec_list_flush(struct kimage *kimage)
{
kimage_entry_t *entry;
for (entry = &kimage->head; ; entry++) {
unsigned int flag;
void *addr;
/* flush the list entries. */
__flush_dcache_area(entry, sizeof(kimage_entry_t));
flag = *entry & IND_FLAGS;
if (flag == IND_DONE)
break;
addr = phys_to_virt(*entry & PAGE_MASK);
switch (flag) {
case IND_INDIRECTION:
/* Set entry point just before the new list page. */
entry = (kimage_entry_t *)addr - 1;
break;
case IND_SOURCE:
/* flush the source pages. */
__flush_dcache_area(addr, PAGE_SIZE);
break;
case IND_DESTINATION:
break;
default:
BUG();
}
}
}
/**
* kexec_segment_flush - Helper to flush the kimage segments to PoC.
*/
static void kexec_segment_flush(const struct kimage *kimage)
{
unsigned long i;
pr_debug("%s:\n", __func__);
for (i = 0; i < kimage->nr_segments; i++) {
pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n",
i,
kimage->segment[i].mem,
kimage->segment[i].mem + kimage->segment[i].memsz,
kimage->segment[i].memsz,
kimage->segment[i].memsz / PAGE_SIZE);
__flush_dcache_area(phys_to_virt(kimage->segment[i].mem),
kimage->segment[i].memsz);
}
}
/**
* machine_kexec - Do the kexec reboot.
*
* Called from the core kexec code for a sys_reboot with LINUX_REBOOT_CMD_KEXEC.
*/
void machine_kexec(struct kimage *kimage)
{
phys_addr_t reboot_code_buffer_phys;
void *reboot_code_buffer;
/*
* New cpus may have become stuck_in_kernel after we loaded the image.
*/
BUG_ON(cpus_are_stuck_in_kernel() || (num_online_cpus() > 1));
reboot_code_buffer_phys = page_to_phys(kimage->control_code_page);
reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
kexec_image_info(kimage);
pr_debug("%s:%d: control_code_page: %p\n", __func__, __LINE__,
kimage->control_code_page);
pr_debug("%s:%d: reboot_code_buffer_phys: %pa\n", __func__, __LINE__,
&reboot_code_buffer_phys);
pr_debug("%s:%d: reboot_code_buffer: %p\n", __func__, __LINE__,
reboot_code_buffer);
pr_debug("%s:%d: relocate_new_kernel: %p\n", __func__, __LINE__,
arm64_relocate_new_kernel);
pr_debug("%s:%d: relocate_new_kernel_size: 0x%lx(%lu) bytes\n",
__func__, __LINE__, arm64_relocate_new_kernel_size,
arm64_relocate_new_kernel_size);
/*
* Copy arm64_relocate_new_kernel to the reboot_code_buffer for use
* after the kernel is shut down.
*/
memcpy(reboot_code_buffer, arm64_relocate_new_kernel,
arm64_relocate_new_kernel_size);
/* Flush the reboot_code_buffer in preparation for its execution. */
__flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size);
flush_icache_range((uintptr_t)reboot_code_buffer,
(uintptr_t)reboot_code_buffer + arm64_relocate_new_kernel_size);
/* Flush the kimage list and its buffers. */
kexec_list_flush(kimage);
/* Flush the new image if already in place. */
if (kimage->head & IND_DONE)
kexec_segment_flush(kimage);
pr_info("Bye!\n");
/* Disable all DAIF exceptions. */
asm volatile ("msr daifset, #0xf" : : : "memory");
/*
* cpu_soft_restart will shutdown the MMU, disable data caches, then
* transfer control to the reboot_code_buffer which contains a copy of
* the arm64_relocate_new_kernel routine. arm64_relocate_new_kernel
* uses physical addressing to relocate the new image to its final
* position and transfers control to the image entry point when the
* relocation is complete.
*/
cpu_soft_restart(1, reboot_code_buffer_phys, kimage->head,
kimage_start, 0);
BUG(); /* Should never get here. */
}
void machine_crash_shutdown(struct pt_regs *regs)
{
/* Empty routine needed to avoid build errors. */
}
obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o \
kprobes_trampoline.o \
simulate-insn.o
/*
* arch/arm64/kernel/probes/decode-insn.c
*
* Copyright (C) 2013 Linaro Limited.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/module.h>
#include <asm/kprobes.h>
#include <asm/insn.h>
#include <asm/sections.h>
#include "decode-insn.h"
#include "simulate-insn.h"
static bool __kprobes aarch64_insn_is_steppable(u32 insn)
{
/*
* Branch instructions will write a new value into the PC which is
* likely to be relative to the XOL address and therefore invalid.
* Deliberate generation of an exception during stepping is also not
* currently safe. Lastly, MSR instructions can do any number of nasty
* things we can't handle during single-stepping.
*/
if (aarch64_get_insn_class(insn) == AARCH64_INSN_CLS_BR_SYS) {
if (aarch64_insn_is_branch(insn) ||
aarch64_insn_is_msr_imm(insn) ||
aarch64_insn_is_msr_reg(insn) ||
aarch64_insn_is_exception(insn) ||
aarch64_insn_is_eret(insn))
return false;
/*
* The MRS instruction may not return a correct value when
* executing in the single-stepping environment. We do make one
* exception, for reading the DAIF bits.
*/
if (aarch64_insn_is_mrs(insn))
return aarch64_insn_extract_system_reg(insn)
!= AARCH64_INSN_SPCLREG_DAIF;
/*
* The HINT instruction is problematic when single-stepping,
* except for the NOP case.
*/
if (aarch64_insn_is_hint(insn))
return aarch64_insn_is_nop(insn);
return true;
}
/*
* Instructions which load PC relative literals are not going to work
* when executed from an XOL slot. Instructions doing an exclusive
* load/store are not going to complete successfully when single-step
* exception handling happens in the middle of the sequence.
*/
if (aarch64_insn_uses_literal(insn) ||
aarch64_insn_is_exclusive(insn))
return false;
return true;
}
/* Return:
* INSN_REJECTED If the instruction cannot be probed,
* INSN_GOOD If instruction is supported and uses instruction slot,
* INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
*/
static enum kprobe_insn __kprobes
arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
{
/*
* Instructions reading or modifying the PC won't work from the XOL
* slot.
*/
if (aarch64_insn_is_steppable(insn))
return INSN_GOOD;
if (aarch64_insn_is_bcond(insn)) {
asi->handler = simulate_b_cond;
} else if (aarch64_insn_is_cbz(insn) ||
aarch64_insn_is_cbnz(insn)) {
asi->handler = simulate_cbz_cbnz;
} else if (aarch64_insn_is_tbz(insn) ||
aarch64_insn_is_tbnz(insn)) {
asi->handler = simulate_tbz_tbnz;
} else if (aarch64_insn_is_adr_adrp(insn)) {
asi->handler = simulate_adr_adrp;
} else if (aarch64_insn_is_b(insn) ||
aarch64_insn_is_bl(insn)) {
asi->handler = simulate_b_bl;
} else if (aarch64_insn_is_br(insn) ||
aarch64_insn_is_blr(insn) ||
aarch64_insn_is_ret(insn)) {
asi->handler = simulate_br_blr_ret;
} else if (aarch64_insn_is_ldr_lit(insn)) {
asi->handler = simulate_ldr_literal;
} else if (aarch64_insn_is_ldrsw_lit(insn)) {
asi->handler = simulate_ldrsw_literal;
} else {
/*
* Instruction cannot be stepped out-of-line and we don't
* (yet) simulate it.
*/
return INSN_REJECTED;
}
return INSN_GOOD_NO_SLOT;
}
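For example, under this decision logic a plain arithmetic instruction such as add x0, x0, #1 is stepped out of line (INSN_GOOD), a branch such as b <label> is simulated via simulate_b_bl() instead (INSN_GOOD_NO_SLOT), and an exclusive load such as ldxr x0, [x1] is neither steppable nor simulated, so it is rejected (INSN_REJECTED).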
static bool __kprobes
is_probed_address_atomic(kprobe_opcode_t *scan_start, kprobe_opcode_t *scan_end)
{
while (scan_start > scan_end) {
/*
* an atomic region starts with an exclusive load and ends with
* an exclusive store.
*/
if (aarch64_insn_is_store_ex(le32_to_cpu(*scan_start)))
return false;
else if (aarch64_insn_is_load_ex(le32_to_cpu(*scan_start)))
return true;
scan_start--;
}
return false;
}
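To illustrate the kind of sequence this scan protects, a small sketch follows; the assembly shown in the comment is what GCC typically emits for the builtin on ARMv8.0 without LSE atomics, an assumption about the compiler rather than part of this patch:
/*
 * Sketch: a compiler-generated LL/SC sequence, roughly
 *
 *   0:  ldxr  x2, [x0]
 *   4:  add   x2, x2, #1
 *   8:  stxr  w3, x2, [x0]
 *   c:  cbnz  w3, 0b
 *
 * Probing the add or the stxr is refused by the backward scan above,
 * because taking an exception between ldxr and stxr clears the
 * exclusive monitor and the loop might never make progress.
 */
static inline void atomic_inc_sketch(unsigned long *p)
{
	__atomic_fetch_add(p, 1, __ATOMIC_RELAXED);
}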
enum kprobe_insn __kprobes
arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
{
enum kprobe_insn decoded;
kprobe_opcode_t insn = le32_to_cpu(*addr);
kprobe_opcode_t *scan_start = addr - 1;
kprobe_opcode_t *scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
struct module *mod;
#endif
if (addr >= (kprobe_opcode_t *)_text &&
scan_end < (kprobe_opcode_t *)_text)
scan_end = (kprobe_opcode_t *)_text;
#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
else {
preempt_disable();
mod = __module_address((unsigned long)addr);
if (mod && within_module_init((unsigned long)addr, mod) &&
!within_module_init((unsigned long)scan_end, mod))
scan_end = (kprobe_opcode_t *)mod->init_layout.base;
else if (mod && within_module_core((unsigned long)addr, mod) &&
!within_module_core((unsigned long)scan_end, mod))
scan_end = (kprobe_opcode_t *)mod->core_layout.base;
preempt_enable();
}
#endif
decoded = arm_probe_decode_insn(insn, asi);
if (decoded == INSN_REJECTED ||
is_probed_address_atomic(scan_start, scan_end))
return INSN_REJECTED;
return decoded;
}
/*
* arch/arm64/kernel/probes/decode-insn.h
*
* Copyright (C) 2013 Linaro Limited.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _ARM_KERNEL_KPROBES_ARM64_H
#define _ARM_KERNEL_KPROBES_ARM64_H
/*
* ARM strongly recommends a limit of 128 bytes between LoadExcl and
* StoreExcl instructions in a single thread of execution. So keep the
* max atomic context size as 32 (128 bytes / 4 bytes per AArch64 instruction).
*/
#define MAX_ATOMIC_CONTEXT_SIZE (128 / sizeof(kprobe_opcode_t))
enum kprobe_insn {
INSN_REJECTED,
INSN_GOOD_NO_SLOT,
INSN_GOOD,
};
enum kprobe_insn __kprobes
arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi);
#endif /* _ARM_KERNEL_KPROBES_ARM64_H */
/*
* trampoline entry and return code for kretprobes.
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/assembler.h>
.text
.macro save_all_base_regs
stp x0, x1, [sp, #S_X0]
stp x2, x3, [sp, #S_X2]
stp x4, x5, [sp, #S_X4]
stp x6, x7, [sp, #S_X6]
stp x8, x9, [sp, #S_X8]
stp x10, x11, [sp, #S_X10]
stp x12, x13, [sp, #S_X12]
stp x14, x15, [sp, #S_X14]
stp x16, x17, [sp, #S_X16]
stp x18, x19, [sp, #S_X18]
stp x20, x21, [sp, #S_X20]
stp x22, x23, [sp, #S_X22]
stp x24, x25, [sp, #S_X24]
stp x26, x27, [sp, #S_X26]
stp x28, x29, [sp, #S_X28]
add x0, sp, #S_FRAME_SIZE
stp lr, x0, [sp, #S_LR]
/*
* Construct a useful saved PSTATE
*/
mrs x0, nzcv
mrs x1, daif
orr x0, x0, x1
mrs x1, CurrentEL
orr x0, x0, x1
mrs x1, SPSel
orr x0, x0, x1
stp xzr, x0, [sp, #S_PC]
.endm
.macro restore_all_base_regs
ldr x0, [sp, #S_PSTATE]
and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
msr nzcv, x0
ldp x0, x1, [sp, #S_X0]
ldp x2, x3, [sp, #S_X2]
ldp x4, x5, [sp, #S_X4]
ldp x6, x7, [sp, #S_X6]
ldp x8, x9, [sp, #S_X8]
ldp x10, x11, [sp, #S_X10]
ldp x12, x13, [sp, #S_X12]
ldp x14, x15, [sp, #S_X14]
ldp x16, x17, [sp, #S_X16]
ldp x18, x19, [sp, #S_X18]
ldp x20, x21, [sp, #S_X20]
ldp x22, x23, [sp, #S_X22]
ldp x24, x25, [sp, #S_X24]
ldp x26, x27, [sp, #S_X26]
ldp x28, x29, [sp, #S_X28]
.endm
ENTRY(kretprobe_trampoline)
sub sp, sp, #S_FRAME_SIZE
save_all_base_regs
mov x0, sp
bl trampoline_probe_handler
/*
* Replace trampoline address in lr with actual orig_ret_addr return
* address.
*/
mov lr, x0
restore_all_base_regs
add sp, sp, #S_FRAME_SIZE
ret
ENDPROC(kretprobe_trampoline)
/*
* arch/arm64/kernel/probes/simulate-insn.c
*
* Copyright (C) 2013 Linaro Limited.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include "simulate-insn.h"
#define sign_extend(x, signbit) \
((x) | (0 - ((x) & (1 << (signbit)))))
#define bbl_displacement(insn) \
sign_extend(((insn) & 0x3ffffff) << 2, 27)
#define bcond_displacement(insn) \
sign_extend(((insn >> 5) & 0x7ffff) << 2, 20)
#define cbz_displacement(insn) \
sign_extend(((insn >> 5) & 0x7ffff) << 2, 20)
#define tbz_displacement(insn) \
sign_extend(((insn >> 5) & 0x3fff) << 2, 15)
#define ldr_displacement(insn) \
sign_extend(((insn >> 5) & 0x7ffff) << 2, 20)
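A minimal host-side sketch (not kernel code) that checks sign_extend() and bbl_displacement(), copied verbatim from above, against two hand-assembled encodings of the unconditional B instruction; the opcodes used are assumptions stated in the comments:
#include <assert.h>

#define sign_extend(x, signbit)	\
	((x) | (0 - ((x) & (1 << (signbit)))))

#define bbl_displacement(insn)	\
	sign_extend(((insn) & 0x3ffffff) << 2, 27)

int main(void)
{
	/* "b ." : opcode 0x14000000, imm26 == 0 -> displacement 0 */
	assert(bbl_displacement(0x14000000) == 0);

	/* imm26 == 0x3ffffff (i.e. -1) -> branch back one instruction */
	assert(bbl_displacement(0x17ffffff) == -4);

	return 0;
}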
static inline void set_x_reg(struct pt_regs *regs, int reg, u64 val)
{
if (reg < 31)
regs->regs[reg] = val;
}
static inline void set_w_reg(struct pt_regs *regs, int reg, u64 val)
{
if (reg < 31)
regs->regs[reg] = lower_32_bits(val);
}
static inline u64 get_x_reg(struct pt_regs *regs, int reg)
{
if (reg < 31)
return regs->regs[reg];
else
return 0;
}
static inline u32 get_w_reg(struct pt_regs *regs, int reg)
{
if (reg < 31)
return lower_32_bits(regs->regs[reg]);
else
return 0;
}
static bool __kprobes check_cbz(u32 opcode, struct pt_regs *regs)
{
int xn = opcode & 0x1f;
return (opcode & (1 << 31)) ?
(get_x_reg(regs, xn) == 0) : (get_w_reg(regs, xn) == 0);
}
static bool __kprobes check_cbnz(u32 opcode, struct pt_regs *regs)
{
int xn = opcode & 0x1f;
return (opcode & (1 << 31)) ?
(get_x_reg(regs, xn) != 0) : (get_w_reg(regs, xn) != 0);
}
static bool __kprobes check_tbz(u32 opcode, struct pt_regs *regs)
{
int xn = opcode & 0x1f;
int bit_pos = ((opcode & (1 << 31)) >> 26) | ((opcode >> 19) & 0x1f);
return ((get_x_reg(regs, xn) >> bit_pos) & 0x1) == 0;
}
static bool __kprobes check_tbnz(u32 opcode, struct pt_regs *regs)
{
int xn = opcode & 0x1f;
int bit_pos = ((opcode & (1 << 31)) >> 26) | ((opcode >> 19) & 0x1f);
return ((get_x_reg(regs, xn) >> bit_pos) & 0x1) != 0;
}
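The tested bit number in TBZ/TBNZ is b5:b40, with b5 in instruction bit 31 and b40 in bits 23:19; a small host-side sketch checking the extraction used in check_tbz()/check_tbnz() above (the fixed opcode field value is an assumption, only bits 31 and 23:19 matter for the extraction):
#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* "tbz x0, #63, ." : b5 = 1 (bit 31), b40 = 0x1f (bits 23:19) */
	uint32_t opcode = (1u << 31) | (0x1fu << 19) | (27u << 25);

	int bit_pos = ((opcode & (1u << 31)) >> 26) | ((opcode >> 19) & 0x1f);

	assert(bit_pos == 63);	/* 32 | 31 */
	return 0;
}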
/*
* instruction simulation functions
*/
void __kprobes
simulate_adr_adrp(u32 opcode, long addr, struct pt_regs *regs)
{
long imm, xn, val;
xn = opcode & 0x1f;
imm = ((opcode >> 3) & 0x1ffffc) | ((opcode >> 29) & 0x3);
imm = sign_extend(imm, 20);
if (opcode & 0x80000000)
val = (imm<<12) + (addr & 0xfffffffffffff000);
else
val = imm + addr;
set_x_reg(regs, xn, val);
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}
void __kprobes
simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = bbl_displacement(opcode);
/* Link register is x30 */
if (opcode & (1 << 31))
set_x_reg(regs, 30, addr + 4);
instruction_pointer_set(regs, addr + disp);
}
void __kprobes
simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = 4;
if (aarch32_opcode_cond_checks[opcode & 0xf](regs->pstate & 0xffffffff))
disp = bcond_displacement(opcode);
instruction_pointer_set(regs, addr + disp);
}
void __kprobes
simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs)
{
int xn = (opcode >> 5) & 0x1f;
/* update pc first in case we're doing a "blr lr" */
instruction_pointer_set(regs, get_x_reg(regs, xn));
/* Link register is x30 */
if (((opcode >> 21) & 0x3) == 1)
set_x_reg(regs, 30, addr + 4);
}
void __kprobes
simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = 4;
if (opcode & (1 << 24)) {
if (check_cbnz(opcode, regs))
disp = cbz_displacement(opcode);
} else {
if (check_cbz(opcode, regs))
disp = cbz_displacement(opcode);
}
instruction_pointer_set(regs, addr + disp);
}
void __kprobes
simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = 4;
if (opcode & (1 << 24)) {
if (check_tbnz(opcode, regs))
disp = tbz_displacement(opcode);
} else {
if (check_tbz(opcode, regs))
disp = tbz_displacement(opcode);
}
instruction_pointer_set(regs, addr + disp);
}
void __kprobes
simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
{
u64 *load_addr;
int xn = opcode & 0x1f;
int disp;
disp = ldr_displacement(opcode);
load_addr = (u64 *) (addr + disp);
if (opcode & (1 << 30)) /* x0-x30 */
set_x_reg(regs, xn, *load_addr);
else /* w0-w30 */
set_w_reg(regs, xn, *load_addr);
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}
void __kprobes
simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
{
s32 *load_addr;
int xn = opcode & 0x1f;
int disp;
disp = ldr_displacement(opcode);
load_addr = (s32 *) (addr + disp);
set_x_reg(regs, xn, *load_addr);
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}
/*
* arch/arm64/kernel/probes/simulate-insn.h
*
* Copyright (C) 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _ARM_KERNEL_KPROBES_SIMULATE_INSN_H
#define _ARM_KERNEL_KPROBES_SIMULATE_INSN_H
void simulate_adr_adrp(u32 opcode, long addr, struct pt_regs *regs);
void simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs);
void simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs);
void simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs);
void simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs);
void simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs);
void simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs);
void simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs);
#endif /* _ARM_KERNEL_KPROBES_SIMULATE_INSN_H */
...@@ -48,6 +48,107 @@ ...@@ -48,6 +48,107 @@
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
#include <trace/events/syscalls.h> #include <trace/events/syscalls.h>
struct pt_regs_offset {
const char *name;
int offset;
};
#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
#define REG_OFFSET_END {.name = NULL, .offset = 0}
#define GPR_OFFSET_NAME(r) \
{.name = "x" #r, .offset = offsetof(struct pt_regs, regs[r])}
static const struct pt_regs_offset regoffset_table[] = {
GPR_OFFSET_NAME(0),
GPR_OFFSET_NAME(1),
GPR_OFFSET_NAME(2),
GPR_OFFSET_NAME(3),
GPR_OFFSET_NAME(4),
GPR_OFFSET_NAME(5),
GPR_OFFSET_NAME(6),
GPR_OFFSET_NAME(7),
GPR_OFFSET_NAME(8),
GPR_OFFSET_NAME(9),
GPR_OFFSET_NAME(10),
GPR_OFFSET_NAME(11),
GPR_OFFSET_NAME(12),
GPR_OFFSET_NAME(13),
GPR_OFFSET_NAME(14),
GPR_OFFSET_NAME(15),
GPR_OFFSET_NAME(16),
GPR_OFFSET_NAME(17),
GPR_OFFSET_NAME(18),
GPR_OFFSET_NAME(19),
GPR_OFFSET_NAME(20),
GPR_OFFSET_NAME(21),
GPR_OFFSET_NAME(22),
GPR_OFFSET_NAME(23),
GPR_OFFSET_NAME(24),
GPR_OFFSET_NAME(25),
GPR_OFFSET_NAME(26),
GPR_OFFSET_NAME(27),
GPR_OFFSET_NAME(28),
GPR_OFFSET_NAME(29),
GPR_OFFSET_NAME(30),
{.name = "lr", .offset = offsetof(struct pt_regs, regs[30])},
REG_OFFSET_NAME(sp),
REG_OFFSET_NAME(pc),
REG_OFFSET_NAME(pstate),
REG_OFFSET_END,
};
/**
* regs_query_register_offset() - query register offset from its name
* @name: the name of a register
*
* regs_query_register_offset() returns the offset of a register in struct
* pt_regs from its name. If the name is invalid, this returns -EINVAL.
*/
int regs_query_register_offset(const char *name)
{
const struct pt_regs_offset *roff;
for (roff = regoffset_table; roff->name != NULL; roff++)
if (!strcmp(roff->name, name))
return roff->offset;
return -EINVAL;
}
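A hedged usage sketch; regs_get_register() is assumed to be the companion helper added to asm/ptrace.h elsewhere in this series:
/* Sketch: resolve a register by name at probe time and read its value. */
static unsigned long read_named_reg(struct pt_regs *regs, const char *name)
{
	int offset = regs_query_register_offset(name);

	if (offset < 0)
		return 0;
	return regs_get_register(regs, offset);
}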
/**
* regs_within_kernel_stack() - check the address in the stack
* @regs: pt_regs which contains kernel stack pointer.
* @addr: address which is checked.
*
* regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
* If @addr is within the kernel stack, it returns true. If not, returns false.
*/
static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
{
return ((addr & ~(THREAD_SIZE - 1)) ==
(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) ||
on_irq_stack(addr, raw_smp_processor_id());
}
/**
* regs_get_kernel_stack_nth() - get Nth entry of the stack
* @regs: pt_regs which contains kernel stack pointer.
* @n: stack entry number.
*
* regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
* is specified by @regs. If the @n th entry is NOT in the kernel stack,
* this returns 0.
*/
unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
{
unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
addr += n;
if (regs_within_kernel_stack(regs, (unsigned long)addr))
return *addr;
else
return 0;
}
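As a usage sketch: with a probe placed on a function's first instruction, AAPCS64 passes arguments beyond x0-x7 on the stack, so the ninth argument could be fetched as below (the calling-convention detail is an assumption about the probed function, not something this helper enforces):
static unsigned long ninth_arg_sketch(struct pt_regs *regs)
{
	/* First stack slot at function entry under AAPCS64. */
	return regs_get_kernel_stack_nth(regs, 0);
}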
/* /*
* TODO: does not yet catch signals sent when the child dies. * TODO: does not yet catch signals sent when the child dies.
* in exit.c or in signal.c. * in exit.c or in signal.c.
......
/*
* kexec for arm64
*
* Copyright (C) Linaro.
* Copyright (C) Huawei Futurewei Technologies.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kexec.h>
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/kexec.h>
#include <asm/page.h>
#include <asm/sysreg.h>
/*
* arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
*
 * The memory that the old kernel occupies may be overwritten when copying the
 * new image to its final location. To ensure that the
* arm64_relocate_new_kernel routine which does that copy is not overwritten,
* all code and data needed by arm64_relocate_new_kernel must be between the
* symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The
* machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
* control_code_page, a special page which has been set up to be preserved
* during the copy operation.
*/
ENTRY(arm64_relocate_new_kernel)
/* Setup the list loop variables. */
mov x17, x1 /* x17 = kimage_start */
mov x16, x0 /* x16 = kimage_head */
dcache_line_size x15, x0 /* x15 = dcache line size */
mov x14, xzr /* x14 = entry ptr */
mov x13, xzr /* x13 = copy dest */
/* Clear the sctlr_el2 flags. */
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
b.ne 1f
mrs x0, sctlr_el2
ldr x1, =SCTLR_ELx_FLAGS
bic x0, x0, x1
msr sctlr_el2, x0
isb
1:
/* Check if the new image needs relocation. */
tbnz x16, IND_DONE_BIT, .Ldone
.Lloop:
and x12, x16, PAGE_MASK /* x12 = addr */
/* Test the entry flags. */
.Ltest_source:
tbz x16, IND_SOURCE_BIT, .Ltest_indirection
/* Invalidate dest page to PoC. */
mov x0, x13
add x20, x0, #PAGE_SIZE
sub x1, x15, #1
bic x0, x0, x1
2: dc ivac, x0
add x0, x0, x15
cmp x0, x20
b.lo 2b
dsb sy
mov x20, x13
mov x21, x12
copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7
/* dest += PAGE_SIZE */
add x13, x13, PAGE_SIZE
b .Lnext
.Ltest_indirection:
tbz x16, IND_INDIRECTION_BIT, .Ltest_destination
/* ptr = addr */
mov x14, x12
b .Lnext
.Ltest_destination:
tbz x16, IND_DESTINATION_BIT, .Lnext
/* dest = addr */
mov x13, x12
.Lnext:
/* entry = *ptr++ */
ldr x16, [x14], #8
/* while (!(entry & DONE)) */
tbz x16, IND_DONE_BIT, .Lloop
.Ldone:
/* wait for writes from copy_page to finish */
dsb nsh
ic iallu
dsb nsh
isb
/* Start new image. */
mov x0, xzr
mov x1, xzr
mov x2, xzr
mov x3, xzr
br x17
ENDPROC(arm64_relocate_new_kernel)
.ltorg
.align 3 /* To keep the 64-bit values below naturally aligned. */
.Lcopy_end:
.org KEXEC_CONTROL_PAGE_SIZE
/*
* arm64_relocate_new_kernel_size - Number of bytes to copy to the
* control_code_page.
*/
.globl arm64_relocate_new_kernel_size
arm64_relocate_new_kernel_size:
.quad .Lcopy_end - arm64_relocate_new_kernel
...@@ -202,7 +202,7 @@ static void __init request_standard_resources(void) ...@@ -202,7 +202,7 @@ static void __init request_standard_resources(void)
struct resource *res; struct resource *res;
kernel_code.start = virt_to_phys(_text); kernel_code.start = virt_to_phys(_text);
kernel_code.end = virt_to_phys(_etext - 1); kernel_code.end = virt_to_phys(__init_begin - 1);
kernel_data.start = virt_to_phys(_sdata); kernel_data.start = virt_to_phys(_sdata);
kernel_data.end = virt_to_phys(_end - 1); kernel_data.end = virt_to_phys(_end - 1);
......
...@@ -267,7 +267,6 @@ asmlinkage void secondary_start_kernel(void) ...@@ -267,7 +267,6 @@ asmlinkage void secondary_start_kernel(void)
set_cpu_online(cpu, true); set_cpu_online(cpu, true);
complete(&cpu_running); complete(&cpu_running);
local_dbg_enable();
local_irq_enable(); local_irq_enable();
local_async_enable(); local_async_enable();
...@@ -437,9 +436,9 @@ void __init smp_cpus_done(unsigned int max_cpus) ...@@ -437,9 +436,9 @@ void __init smp_cpus_done(unsigned int max_cpus)
void __init smp_prepare_boot_cpu(void) void __init smp_prepare_boot_cpu(void)
{ {
set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
cpuinfo_store_boot_cpu(); cpuinfo_store_boot_cpu();
save_boot_cpu_run_el(); save_boot_cpu_run_el();
set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
} }
static u64 __init of_get_cpu_mpidr(struct device_node *dn) static u64 __init of_get_cpu_mpidr(struct device_node *dn)
...@@ -695,6 +694,13 @@ void __init smp_prepare_cpus(unsigned int max_cpus) ...@@ -695,6 +694,13 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
smp_store_cpu_info(smp_processor_id()); smp_store_cpu_info(smp_processor_id());
/*
* If UP is mandated by "nosmp" (which implies "maxcpus=0"), don't set
* secondary CPUs present.
*/
if (max_cpus == 0)
return;
/* /*
* Initialise the present map (which describes the set of CPUs * Initialise the present map (which describes the set of CPUs
* actually populated at the present time) and release the * actually populated at the present time) and release the
......
...@@ -41,6 +41,7 @@ ...@@ -41,6 +41,7 @@
#include <asm/stacktrace.h> #include <asm/stacktrace.h>
#include <asm/exception.h> #include <asm/exception.h>
#include <asm/system_misc.h> #include <asm/system_misc.h>
#include <asm/sysreg.h>
static const char *handler[]= { static const char *handler[]= {
"Synchronous Abort", "Synchronous Abort",
...@@ -52,15 +53,14 @@ static const char *handler[]= { ...@@ -52,15 +53,14 @@ static const char *handler[]= {
int show_unhandled_signals = 1; int show_unhandled_signals = 1;
/* /*
* Dump out the contents of some memory nicely... * Dump out the contents of some kernel memory nicely...
*/ */
static void dump_mem(const char *lvl, const char *str, unsigned long bottom, static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
unsigned long top, bool compat) unsigned long top)
{ {
unsigned long first; unsigned long first;
mm_segment_t fs; mm_segment_t fs;
int i; int i;
unsigned int width = compat ? 4 : 8;
/* /*
* We need to switch to kernel mode so that we can use __get_user * We need to switch to kernel mode so that we can use __get_user
...@@ -78,22 +78,15 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom, ...@@ -78,22 +78,15 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
memset(str, ' ', sizeof(str)); memset(str, ' ', sizeof(str));
str[sizeof(str) - 1] = '\0'; str[sizeof(str) - 1] = '\0';
for (p = first, i = 0; i < (32 / width) for (p = first, i = 0; i < (32 / 8)
&& p < top; i++, p += width) { && p < top; i++, p += 8) {
if (p >= bottom && p < top) { if (p >= bottom && p < top) {
unsigned long val; unsigned long val;
if (width == 8) {
if (__get_user(val, (unsigned long *)p) == 0) if (__get_user(val, (unsigned long *)p) == 0)
sprintf(str + i * 17, " %016lx", val); sprintf(str + i * 17, " %016lx", val);
else else
sprintf(str + i * 17, " ????????????????"); sprintf(str + i * 17, " ????????????????");
} else {
if (__get_user(val, (unsigned int *)p) == 0)
sprintf(str + i * 9, " %08lx", val);
else
sprintf(str + i * 9, " ????????");
}
} }
} }
printk("%s%04lx:%s\n", lvl, first & 0xffff, str); printk("%s%04lx:%s\n", lvl, first & 0xffff, str);
...@@ -216,7 +209,7 @@ static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk) ...@@ -216,7 +209,7 @@ static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
stack = IRQ_STACK_TO_TASK_STACK(irq_stack_ptr); stack = IRQ_STACK_TO_TASK_STACK(irq_stack_ptr);
dump_mem("", "Exception stack", stack, dump_mem("", "Exception stack", stack,
stack + sizeof(struct pt_regs), false); stack + sizeof(struct pt_regs));
} }
} }
} }
...@@ -254,10 +247,9 @@ static int __die(const char *str, int err, struct thread_info *thread, ...@@ -254,10 +247,9 @@ static int __die(const char *str, int err, struct thread_info *thread,
pr_emerg("Process %.*s (pid: %d, stack limit = 0x%p)\n", pr_emerg("Process %.*s (pid: %d, stack limit = 0x%p)\n",
TASK_COMM_LEN, tsk->comm, task_pid_nr(tsk), thread + 1); TASK_COMM_LEN, tsk->comm, task_pid_nr(tsk), thread + 1);
if (!user_mode(regs) || in_interrupt()) { if (!user_mode(regs)) {
dump_mem(KERN_EMERG, "Stack: ", regs->sp, dump_mem(KERN_EMERG, "Stack: ", regs->sp,
THREAD_SIZE + (unsigned long)task_stack_page(tsk), THREAD_SIZE + (unsigned long)task_stack_page(tsk));
compat_user_mode(regs));
dump_backtrace(regs, tsk); dump_backtrace(regs, tsk);
dump_instr(KERN_EMERG, regs); dump_instr(KERN_EMERG, regs);
} }
...@@ -373,11 +365,59 @@ static int call_undef_hook(struct pt_regs *regs) ...@@ -373,11 +365,59 @@ static int call_undef_hook(struct pt_regs *regs)
return fn ? fn(regs, instr) : 1; return fn ? fn(regs, instr) : 1;
} }
asmlinkage void __exception do_undefinstr(struct pt_regs *regs) static void force_signal_inject(int signal, int code, struct pt_regs *regs,
unsigned long address)
{ {
siginfo_t info; siginfo_t info;
void __user *pc = (void __user *)instruction_pointer(regs); void __user *pc = (void __user *)instruction_pointer(regs);
const char *desc;
switch (signal) {
case SIGILL:
desc = "undefined instruction";
break;
case SIGSEGV:
desc = "illegal memory access";
break;
default:
desc = "bad mode";
break;
}
if (unhandled_signal(current, signal) &&
show_unhandled_signals_ratelimited()) {
pr_info("%s[%d]: %s: pc=%p\n",
current->comm, task_pid_nr(current), desc, pc);
dump_instr(KERN_INFO, regs);
}
info.si_signo = signal;
info.si_errno = 0;
info.si_code = code;
info.si_addr = pc;
arm64_notify_die(desc, regs, &info, 0);
}
/*
* Set up process info to signal segmentation fault - called on access error.
*/
void arm64_notify_segfault(struct pt_regs *regs, unsigned long addr)
{
int code;
down_read(&current->mm->mmap_sem);
if (find_vma(current->mm, addr) == NULL)
code = SEGV_MAPERR;
else
code = SEGV_ACCERR;
up_read(&current->mm->mmap_sem);
force_signal_inject(SIGSEGV, code, regs, addr);
}
asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
{
/* check for AArch32 breakpoint instructions */ /* check for AArch32 breakpoint instructions */
if (!aarch32_break_handler(regs)) if (!aarch32_break_handler(regs))
return; return;
...@@ -385,18 +425,66 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs) ...@@ -385,18 +425,66 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
if (call_undef_hook(regs) == 0) if (call_undef_hook(regs) == 0)
return; return;
if (unhandled_signal(current, SIGILL) && show_unhandled_signals_ratelimited()) { force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
pr_info("%s[%d]: undefined instruction: pc=%p\n", }
current->comm, task_pid_nr(current), pc);
dump_instr(KERN_INFO, regs);
}
info.si_signo = SIGILL; void cpu_enable_cache_maint_trap(void *__unused)
info.si_errno = 0; {
info.si_code = ILL_ILLOPC; config_sctlr_el1(SCTLR_EL1_UCI, 0);
info.si_addr = pc; }
#define __user_cache_maint(insn, address, res) \
asm volatile ( \
"1: " insn ", %1\n" \
" mov %w0, #0\n" \
"2:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .align 2\n" \
"3: mov %w0, %w2\n" \
" b 2b\n" \
" .popsection\n" \
_ASM_EXTABLE(1b, 3b) \
: "=r" (res) \
: "r" (address), "i" (-EFAULT) )
asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
{
unsigned long address;
int ret;
/* if this is a write with: Op0=1, Op2=1, Op1=3, CRn=7 */
if ((esr & 0x01fffc01) == 0x0012dc00) {
int rt = (esr >> 5) & 0x1f;
int crm = (esr >> 1) & 0x0f;
arm64_notify_die("Oops - undefined instruction", regs, &info, 0); address = (rt == 31) ? 0 : regs->regs[rt];
switch (crm) {
case 11: /* DC CVAU, gets promoted */
__user_cache_maint("dc civac", address, ret);
break;
case 10: /* DC CVAC, gets promoted */
__user_cache_maint("dc civac", address, ret);
break;
case 14: /* DC CIVAC */
__user_cache_maint("dc civac", address, ret);
break;
case 5: /* IC IVAU */
__user_cache_maint("ic ivau", address, ret);
break;
default:
force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
return;
}
} else {
force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
return;
}
if (ret)
arm64_notify_segfault(regs, address);
else
regs->pc += 4;
} }
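The magic constants in do_sysinstr() come from the ESR_ELx ISS layout for trapped system instructions (EC 0x18): Op0 sits in bits [21:20], Op2 in [19:17], Op1 in [16:14], CRn in [13:10], Rt in [9:5], CRm in [4:1], and the read/write direction in bit 0. The helper macros below are illustrative only (they are not kernel API) but make the test readable:

#define ESR_SYS_OP0(esr)	(((esr) >> 20) & 0x3)
#define ESR_SYS_OP2(esr)	(((esr) >> 17) & 0x7)
#define ESR_SYS_OP1(esr)	(((esr) >> 14) & 0x7)
#define ESR_SYS_CRN(esr)	(((esr) >> 10) & 0xf)
#define ESR_SYS_RT(esr)		(((esr) >>  5) & 0x1f)
#define ESR_SYS_CRM(esr)	(((esr) >>  1) & 0xf)
#define ESR_SYS_IS_WRITE(esr)	(!((esr) & 1))

/*
 * (esr & 0x01fffc01) == 0x0012dc00 is then equivalent to:
 *	Op0 == 1 && Op2 == 1 && Op1 == 3 && CRn == 7 && write,
 * i.e. the EL0 cache maintenance operations. CRm selects the
 * instruction: DC CVAU (11), DC CVAC (10), DC CIVAC (14), IC IVAU (5);
 * Rt names the register holding the address, with 31 meaning xzr.
 */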
long compat_arm_syscall(struct pt_regs *regs); long compat_arm_syscall(struct pt_regs *regs);
...@@ -465,7 +553,7 @@ static const char *esr_class_str[] = { ...@@ -465,7 +553,7 @@ static const char *esr_class_str[] = {
const char *esr_get_class_string(u32 esr) const char *esr_get_class_string(u32 esr)
{ {
return esr_class_str[esr >> ESR_ELx_EC_SHIFT]; return esr_class_str[ESR_ELx_EC(esr)];
} }
/* /*
......
...@@ -214,10 +214,16 @@ void update_vsyscall(struct timekeeper *tk) ...@@ -214,10 +214,16 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec; vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;
if (!use_syscall) { if (!use_syscall) {
/* tkr_mono.cycle_last == tkr_raw.cycle_last */
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last; vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
vdso_data->raw_time_sec = tk->raw_time.tv_sec;
vdso_data->raw_time_nsec = tk->raw_time.tv_nsec;
vdso_data->xtime_clock_sec = tk->xtime_sec; vdso_data->xtime_clock_sec = tk->xtime_sec;
vdso_data->xtime_clock_nsec = tk->tkr_mono.xtime_nsec; vdso_data->xtime_clock_nsec = tk->tkr_mono.xtime_nsec;
vdso_data->cs_mult = tk->tkr_mono.mult; /* tkr_raw.xtime_nsec == 0 */
vdso_data->cs_mono_mult = tk->tkr_mono.mult;
vdso_data->cs_raw_mult = tk->tkr_raw.mult;
/* tkr_mono.shift == tkr_raw.shift */
vdso_data->cs_shift = tk->tkr_mono.shift; vdso_data->cs_shift = tk->tkr_mono.shift;
} }
......
...@@ -23,7 +23,7 @@ GCOV_PROFILE := n ...@@ -23,7 +23,7 @@ GCOV_PROFILE := n
ccflags-y += -Wl,-shared ccflags-y += -Wl,-shared
obj-y += vdso.o obj-y += vdso.o
extra-y += vdso.lds vdso-offsets.h extra-y += vdso.lds
CPPFLAGS_vdso.lds += -P -C -U$(ARCH) CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
# Force dependency (incbin is bad) # Force dependency (incbin is bad)
...@@ -42,11 +42,10 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE ...@@ -42,11 +42,10 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
gen-vdsosym := $(srctree)/$(src)/gen_vdso_offsets.sh gen-vdsosym := $(srctree)/$(src)/gen_vdso_offsets.sh
quiet_cmd_vdsosym = VDSOSYM $@ quiet_cmd_vdsosym = VDSOSYM $@
define cmd_vdsosym define cmd_vdsosym
$(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@ && \ $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
cp $@ include/generated/
endef endef
$(obj)/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE include/generated/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE
$(call if_changed,vdsosym) $(call if_changed,vdsosym)
# Assembly rules for the .S files # Assembly rules for the .S files
......
...@@ -26,24 +26,109 @@ ...@@ -26,24 +26,109 @@
#define NSEC_PER_SEC_HI16 0x3b9a #define NSEC_PER_SEC_HI16 0x3b9a
vdso_data .req x6 vdso_data .req x6
use_syscall .req w7 seqcnt .req w7
seqcnt .req w8 w_tmp .req w8
x_tmp .req x8
/*
* Conventions for macro arguments:
* - An argument is write-only if its name starts with "res".
* - All other arguments are read-only, unless otherwise specified.
*/
.macro seqcnt_acquire .macro seqcnt_acquire
9999: ldr seqcnt, [vdso_data, #VDSO_TB_SEQ_COUNT] 9999: ldr seqcnt, [vdso_data, #VDSO_TB_SEQ_COUNT]
tbnz seqcnt, #0, 9999b tbnz seqcnt, #0, 9999b
dmb ishld dmb ishld
ldr use_syscall, [vdso_data, #VDSO_USE_SYSCALL]
.endm .endm
.macro seqcnt_read, cnt .macro seqcnt_check fail
dmb ishld dmb ishld
ldr \cnt, [vdso_data, #VDSO_TB_SEQ_COUNT] ldr w_tmp, [vdso_data, #VDSO_TB_SEQ_COUNT]
cmp w_tmp, seqcnt
b.ne \fail
.endm .endm
.macro seqcnt_check, cnt, fail .macro syscall_check fail
cmp \cnt, seqcnt ldr w_tmp, [vdso_data, #VDSO_USE_SYSCALL]
b.ne \fail cbnz w_tmp, \fail
.endm
.macro get_nsec_per_sec res
mov \res, #NSEC_PER_SEC_LO16
movk \res, #NSEC_PER_SEC_HI16, lsl #16
.endm
/*
* Returns the clock delta, in nanoseconds left-shifted by the clock
* shift.
*/
.macro get_clock_shifted_nsec res, cycle_last, mult
/* Read the virtual counter. */
isb
mrs x_tmp, cntvct_el0
/* Calculate cycle delta and convert to ns. */
sub \res, x_tmp, \cycle_last
/* We can only guarantee 56 bits of precision. */
movn x_tmp, #0xff00, lsl #48
and \res, x_tmp, \res
mul \res, \res, \mult
.endm
/*
* Returns in res_{sec,nsec} the REALTIME timespec, based on the
* "wall time" (xtime) and the clock_mono delta.
*/
.macro get_ts_realtime res_sec, res_nsec, \
clock_nsec, xtime_sec, xtime_nsec, nsec_to_sec
add \res_nsec, \clock_nsec, \xtime_nsec
udiv x_tmp, \res_nsec, \nsec_to_sec
add \res_sec, \xtime_sec, x_tmp
msub \res_nsec, x_tmp, \nsec_to_sec, \res_nsec
.endm
/*
* Returns in res_{sec,nsec} the timespec based on the clock_raw delta,
* used for CLOCK_MONOTONIC_RAW.
*/
.macro get_ts_clock_raw res_sec, res_nsec, clock_nsec, nsec_to_sec
udiv \res_sec, \clock_nsec, \nsec_to_sec
msub \res_nsec, \res_sec, \nsec_to_sec, \clock_nsec
.endm
/* sec and nsec are modified in place. */
.macro add_ts sec, nsec, ts_sec, ts_nsec, nsec_to_sec
/* Add timespec. */
add \sec, \sec, \ts_sec
add \nsec, \nsec, \ts_nsec
/* Normalise the new timespec. */
cmp \nsec, \nsec_to_sec
b.lt 9999f
sub \nsec, \nsec, \nsec_to_sec
add \sec, \sec, #1
9999:
cmp \nsec, #0
b.ge 9998f
add \nsec, \nsec, \nsec_to_sec
sub \sec, \sec, #1
9998:
.endm
.macro clock_gettime_return, shift=0
.if \shift == 1
lsr x11, x11, x12
.endif
stp x10, x11, [x1, #TSPEC_TV_SEC]
mov x0, xzr
ret
.endm
.macro jump_slot jumptable, index, label
.if (. - \jumptable) != 4 * (\index)
.error "Jump slot index mismatch"
.endif
b \label
.endm .endm
.text .text
...@@ -51,18 +136,25 @@ seqcnt .req w8 ...@@ -51,18 +136,25 @@ seqcnt .req w8
/* int __kernel_gettimeofday(struct timeval *tv, struct timezone *tz); */ /* int __kernel_gettimeofday(struct timeval *tv, struct timezone *tz); */
ENTRY(__kernel_gettimeofday) ENTRY(__kernel_gettimeofday)
.cfi_startproc .cfi_startproc
mov x2, x30
.cfi_register x30, x2
/* Acquire the sequence counter and get the timespec. */
adr vdso_data, _vdso_data adr vdso_data, _vdso_data
1: seqcnt_acquire
cbnz use_syscall, 4f
/* If tv is NULL, skip to the timezone code. */ /* If tv is NULL, skip to the timezone code. */
cbz x0, 2f cbz x0, 2f
bl __do_get_tspec
seqcnt_check w9, 1b /* Compute the time of day. */
1: seqcnt_acquire
syscall_check fail=4f
ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
/* w11 = cs_mono_mult, w12 = cs_shift */
ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
seqcnt_check fail=1b
get_nsec_per_sec res=x9
lsl x9, x9, x12
get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
get_ts_realtime res_sec=x10, res_nsec=x11, \
clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
/* Convert ns to us. */ /* Convert ns to us. */
mov x13, #1000 mov x13, #1000
...@@ -76,95 +168,126 @@ ENTRY(__kernel_gettimeofday) ...@@ -76,95 +168,126 @@ ENTRY(__kernel_gettimeofday)
stp w4, w5, [x1, #TZ_MINWEST] stp w4, w5, [x1, #TZ_MINWEST]
3: 3:
mov x0, xzr mov x0, xzr
ret x2 ret
4: 4:
/* Syscall fallback. */ /* Syscall fallback. */
mov x8, #__NR_gettimeofday mov x8, #__NR_gettimeofday
svc #0 svc #0
ret x2 ret
.cfi_endproc .cfi_endproc
ENDPROC(__kernel_gettimeofday) ENDPROC(__kernel_gettimeofday)
#define JUMPSLOT_MAX CLOCK_MONOTONIC_COARSE
/* int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp); */ /* int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp); */
ENTRY(__kernel_clock_gettime) ENTRY(__kernel_clock_gettime)
.cfi_startproc .cfi_startproc
cmp w0, #CLOCK_REALTIME cmp w0, #JUMPSLOT_MAX
ccmp w0, #CLOCK_MONOTONIC, #0x4, ne b.hi syscall
b.ne 2f adr vdso_data, _vdso_data
adr x_tmp, jumptable
add x_tmp, x_tmp, w0, uxtw #2
br x_tmp
ALIGN
jumptable:
jump_slot jumptable, CLOCK_REALTIME, realtime
jump_slot jumptable, CLOCK_MONOTONIC, monotonic
b syscall
b syscall
jump_slot jumptable, CLOCK_MONOTONIC_RAW, monotonic_raw
jump_slot jumptable, CLOCK_REALTIME_COARSE, realtime_coarse
jump_slot jumptable, CLOCK_MONOTONIC_COARSE, monotonic_coarse
.if (. - jumptable) != 4 * (JUMPSLOT_MAX + 1)
.error "Wrong jumptable size"
.endif
ALIGN
realtime:
seqcnt_acquire
syscall_check fail=syscall
ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
/* w11 = cs_mono_mult, w12 = cs_shift */
ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
seqcnt_check fail=realtime
mov x2, x30 /* All computations are done with left-shifted nsecs. */
.cfi_register x30, x2 get_nsec_per_sec res=x9
lsl x9, x9, x12
/* Get kernel timespec. */ get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
adr vdso_data, _vdso_data get_ts_realtime res_sec=x10, res_nsec=x11, \
1: seqcnt_acquire clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
cbnz use_syscall, 7f clock_gettime_return, shift=1
bl __do_get_tspec ALIGN
seqcnt_check w9, 1b monotonic:
seqcnt_acquire
syscall_check fail=syscall
ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
/* w11 = cs_mono_mult, w12 = cs_shift */
ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
ldp x3, x4, [vdso_data, #VDSO_WTM_CLK_SEC]
seqcnt_check fail=monotonic
mov x30, x2 /* All computations are done with left-shifted nsecs. */
lsl x4, x4, x12
get_nsec_per_sec res=x9
lsl x9, x9, x12
cmp w0, #CLOCK_MONOTONIC get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
b.ne 6f get_ts_realtime res_sec=x10, res_nsec=x11, \
clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
/* Get wtm timespec. */ add_ts sec=x10, nsec=x11, ts_sec=x3, ts_nsec=x4, nsec_to_sec=x9
ldp x13, x14, [vdso_data, #VDSO_WTM_CLK_SEC] clock_gettime_return, shift=1
/* Check the sequence counter. */ ALIGN
seqcnt_read w9 monotonic_raw:
seqcnt_check w9, 1b seqcnt_acquire
b 4f syscall_check fail=syscall
2: ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
cmp w0, #CLOCK_REALTIME_COARSE /* w11 = cs_raw_mult, w12 = cs_shift */
ccmp w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne ldp w12, w11, [vdso_data, #VDSO_CS_SHIFT]
b.ne 8f ldp x13, x14, [vdso_data, #VDSO_RAW_TIME_SEC]
seqcnt_check fail=monotonic_raw
/* xtime_coarse_nsec is already right-shifted */ /* All computations are done with left-shifted nsecs. */
mov x12, #0 lsl x14, x14, x12
get_nsec_per_sec res=x9
lsl x9, x9, x12
/* Get coarse timespec. */ get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
adr vdso_data, _vdso_data get_ts_clock_raw res_sec=x10, res_nsec=x11, \
3: seqcnt_acquire clock_nsec=x15, nsec_to_sec=x9
ldp x10, x11, [vdso_data, #VDSO_XTIME_CRS_SEC]
/* Get wtm timespec. */ add_ts sec=x10, nsec=x11, ts_sec=x13, ts_nsec=x14, nsec_to_sec=x9
ldp x13, x14, [vdso_data, #VDSO_WTM_CLK_SEC] clock_gettime_return, shift=1
/* Check the sequence counter. */ ALIGN
seqcnt_read w9 realtime_coarse:
seqcnt_check w9, 3b seqcnt_acquire
ldp x10, x11, [vdso_data, #VDSO_XTIME_CRS_SEC]
seqcnt_check fail=realtime_coarse
clock_gettime_return
cmp w0, #CLOCK_MONOTONIC_COARSE ALIGN
b.ne 6f monotonic_coarse:
4: seqcnt_acquire
/* Add on wtm timespec. */ ldp x10, x11, [vdso_data, #VDSO_XTIME_CRS_SEC]
add x10, x10, x13 ldp x13, x14, [vdso_data, #VDSO_WTM_CLK_SEC]
lsl x14, x14, x12 seqcnt_check fail=monotonic_coarse
add x11, x11, x14
/* Normalise the new timespec. */ /* Computations are done in (non-shifted) nsecs. */
mov x15, #NSEC_PER_SEC_LO16 get_nsec_per_sec res=x9
movk x15, #NSEC_PER_SEC_HI16, lsl #16 add_ts sec=x10, nsec=x11, ts_sec=x13, ts_nsec=x14, nsec_to_sec=x9
lsl x15, x15, x12 clock_gettime_return
cmp x11, x15
b.lt 5f
sub x11, x11, x15
add x10, x10, #1
5:
cmp x11, #0
b.ge 6f
add x11, x11, x15
sub x10, x10, #1
6: /* Store to the user timespec. */ ALIGN
lsr x11, x11, x12 syscall: /* Syscall fallback. */
stp x10, x11, [x1, #TSPEC_TV_SEC]
mov x0, xzr
ret
7:
mov x30, x2
8: /* Syscall fallback. */
mov x8, #__NR_clock_gettime mov x8, #__NR_clock_gettime
svc #0 svc #0
ret ret
...@@ -176,6 +299,7 @@ ENTRY(__kernel_clock_getres) ...@@ -176,6 +299,7 @@ ENTRY(__kernel_clock_getres)
.cfi_startproc .cfi_startproc
cmp w0, #CLOCK_REALTIME cmp w0, #CLOCK_REALTIME
ccmp w0, #CLOCK_MONOTONIC, #0x4, ne ccmp w0, #CLOCK_MONOTONIC, #0x4, ne
ccmp w0, #CLOCK_MONOTONIC_RAW, #0x4, ne
b.ne 1f b.ne 1f
ldr x2, 5f ldr x2, 5f
...@@ -203,46 +327,3 @@ ENTRY(__kernel_clock_getres) ...@@ -203,46 +327,3 @@ ENTRY(__kernel_clock_getres)
.quad CLOCK_COARSE_RES .quad CLOCK_COARSE_RES
.cfi_endproc .cfi_endproc
ENDPROC(__kernel_clock_getres) ENDPROC(__kernel_clock_getres)
/*
* Read the current time from the architected counter.
* Expects vdso_data to be initialised.
* Clobbers the temporary registers (x9 - x15).
* Returns:
* - w9 = vDSO sequence counter
* - (x10, x11) = (ts->tv_sec, shifted ts->tv_nsec)
* - w12 = cs_shift
*/
ENTRY(__do_get_tspec)
.cfi_startproc
/* Read from the vDSO data page. */
ldr x10, [vdso_data, #VDSO_CS_CYCLE_LAST]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
ldp w11, w12, [vdso_data, #VDSO_CS_MULT]
seqcnt_read w9
/* Read the virtual counter. */
isb
mrs x15, cntvct_el0
/* Calculate cycle delta and convert to ns. */
sub x10, x15, x10
/* We can only guarantee 56 bits of precision. */
movn x15, #0xff00, lsl #48
and x10, x15, x10
mul x10, x10, x11
/* Use the kernel time to calculate the new timespec. */
mov x11, #NSEC_PER_SEC_LO16
movk x11, #NSEC_PER_SEC_HI16, lsl #16
lsl x11, x11, x12
add x15, x10, x14
udiv x14, x15, x11
add x10, x13, x14
mul x13, x14, x11
sub x11, x15, x13
ret
.cfi_endproc
ENDPROC(__do_get_tspec)
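The net effect of the new macros is easier to follow in C. Below is a rough sketch of the CLOCK_REALTIME fast path, assuming struct fields that mirror the VDSO_* offsets used above and a read_cntvct() stand-in for the isb; mrs cntvct_el0 pair. As in the assembly, all nanosecond values stay left-shifted by cs_shift until the final store, and the whole computation is retried if the sequence counter changed:

static int realtime_sketch(const struct vdso_data *vd, struct timespec *ts)
{
	u64 cycles, ns, nsec_per_sec_shifted;

	cycles  = read_cntvct() - vd->cs_cycle_last;
	cycles &= (1ULL << 56) - 1;		/* only 56 bits are guaranteed */
	ns      = cycles * vd->cs_mono_mult + vd->xtime_clock_nsec;

	nsec_per_sec_shifted = (u64)NSEC_PER_SEC << vd->cs_shift;
	ts->tv_sec  = vd->xtime_clock_sec + ns / nsec_per_sec_shifted;
	ts->tv_nsec = (ns % nsec_per_sec_shifted) >> vd->cs_shift;
	return 0;
}

CLOCK_MONOTONIC adds the wall-to-monotonic offset on top of this, and the new CLOCK_MONOTONIC_RAW path does the same arithmetic with cs_raw_mult and raw_time_{sec,nsec} instead.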
...@@ -118,9 +118,11 @@ SECTIONS ...@@ -118,9 +118,11 @@ SECTIONS
__exception_text_end = .; __exception_text_end = .;
IRQENTRY_TEXT IRQENTRY_TEXT
SOFTIRQENTRY_TEXT SOFTIRQENTRY_TEXT
ENTRY_TEXT
TEXT_TEXT TEXT_TEXT
SCHED_TEXT SCHED_TEXT
LOCK_TEXT LOCK_TEXT
KPROBES_TEXT
HYPERVISOR_TEXT HYPERVISOR_TEXT
IDMAP_TEXT IDMAP_TEXT
HIBERNATE_TEXT HIBERNATE_TEXT
...@@ -131,12 +133,13 @@ SECTIONS ...@@ -131,12 +133,13 @@ SECTIONS
} }
. = ALIGN(SEGMENT_ALIGN); . = ALIGN(SEGMENT_ALIGN);
_etext = .; /* End of text section */
RO_DATA(PAGE_SIZE) /* everything from this point to */ RO_DATA(PAGE_SIZE) /* everything from this point to */
EXCEPTION_TABLE(8) /* _etext will be marked RO NX */ EXCEPTION_TABLE(8) /* __init_begin will be marked RO NX */
NOTES NOTES
. = ALIGN(SEGMENT_ALIGN); . = ALIGN(SEGMENT_ALIGN);
_etext = .; /* End of text and rodata section */
__init_begin = .; __init_begin = .;
INIT_TEXT_SECTION(8) INIT_TEXT_SECTION(8)
......
...@@ -106,7 +106,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run) ...@@ -106,7 +106,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
run->exit_reason = KVM_EXIT_DEBUG; run->exit_reason = KVM_EXIT_DEBUG;
run->debug.arch.hsr = hsr; run->debug.arch.hsr = hsr;
switch (hsr >> ESR_ELx_EC_SHIFT) { switch (ESR_ELx_EC(hsr)) {
case ESR_ELx_EC_WATCHPT_LOW: case ESR_ELx_EC_WATCHPT_LOW:
run->debug.arch.far = vcpu->arch.fault.far_el2; run->debug.arch.far = vcpu->arch.fault.far_el2;
/* fall through */ /* fall through */
...@@ -149,7 +149,7 @@ static exit_handle_fn arm_exit_handlers[] = { ...@@ -149,7 +149,7 @@ static exit_handle_fn arm_exit_handlers[] = {
static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu) static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
{ {
u32 hsr = kvm_vcpu_get_hsr(vcpu); u32 hsr = kvm_vcpu_get_hsr(vcpu);
u8 hsr_ec = hsr >> ESR_ELx_EC_SHIFT; u8 hsr_ec = ESR_ELx_EC(hsr);
if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) || if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
!arm_exit_handlers[hsr_ec]) { !arm_exit_handlers[hsr_ec]) {
......
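The hunks in this area replace open-coded "hsr >> ESR_ELx_EC_SHIFT" shifts with the ESR_ELx_EC() accessor. Assuming the usual definition in arch/arm64/include/asm/esr.h (not shown in this diff), the accessor masks before shifting, making it explicit that only the exception-class bits [31:26] are used:

#define ESR_ELx_EC_SHIFT	(26)
#define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)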
...@@ -17,6 +17,10 @@ obj-$(CONFIG_KVM_ARM_HOST) += tlb.o ...@@ -17,6 +17,10 @@ obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
# KVM code is run at a different exception level with a different memory map, so
# compiler instrumentation that inserts callbacks or checks into the code may
# cause crashes. Just disable it.
GCOV_PROFILE := n GCOV_PROFILE := n
KASAN_SANITIZE := n KASAN_SANITIZE := n
UBSAN_SANITIZE := n UBSAN_SANITIZE := n
KCOV_INSTRUMENT := n
...@@ -198,7 +198,7 @@ static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) ...@@ -198,7 +198,7 @@ static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
{ {
u64 esr = read_sysreg_el2(esr); u64 esr = read_sysreg_el2(esr);
u8 ec = esr >> ESR_ELx_EC_SHIFT; u8 ec = ESR_ELx_EC(esr);
u64 hpfar, far; u64 hpfar, far;
vcpu->arch.fault.esr_el2 = esr; vcpu->arch.fault.esr_el2 = esr;
......
...@@ -66,7 +66,7 @@ ...@@ -66,7 +66,7 @@
.endm .endm
end .req x5 end .req x5
ENTRY(__copy_from_user) ENTRY(__arch_copy_from_user)
ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
CONFIG_ARM64_PAN) CONFIG_ARM64_PAN)
add end, x0, x2 add end, x0, x2
...@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \ ...@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
CONFIG_ARM64_PAN) CONFIG_ARM64_PAN)
mov x0, #0 // Nothing to copy mov x0, #0 // Nothing to copy
ret ret
ENDPROC(__copy_from_user) ENDPROC(__arch_copy_from_user)
.section .fixup,"ax" .section .fixup,"ax"
.align 2 .align 2
......
...@@ -65,7 +65,7 @@ ...@@ -65,7 +65,7 @@
.endm .endm
end .req x5 end .req x5
ENTRY(__copy_to_user) ENTRY(__arch_copy_to_user)
ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
CONFIG_ARM64_PAN) CONFIG_ARM64_PAN)
add end, x0, x2 add end, x0, x2
...@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \ ...@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
CONFIG_ARM64_PAN) CONFIG_ARM64_PAN)
mov x0, #0 mov x0, #0
ret ret
ENDPROC(__copy_to_user) ENDPROC(__arch_copy_to_user)
.section .fixup,"ax" .section .fixup,"ax"
.align 2 .align 2
......
...@@ -52,7 +52,7 @@ ENTRY(__flush_cache_user_range) ...@@ -52,7 +52,7 @@ ENTRY(__flush_cache_user_range)
sub x3, x2, #1 sub x3, x2, #1
bic x4, x0, x3 bic x4, x0, x3
1: 1:
USER(9f, dc cvau, x4 ) // clean D line to PoU user_alt 9f, "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
add x4, x4, x2 add x4, x4, x2
cmp x4, x1 cmp x4, x1
b.lo 1b b.lo 1b
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/bootmem.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/genalloc.h> #include <linux/genalloc.h>
...@@ -29,6 +30,8 @@ ...@@ -29,6 +30,8 @@
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
static int swiotlb __read_mostly;
static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot, static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot,
bool coherent) bool coherent)
{ {
...@@ -341,6 +344,13 @@ static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, ...@@ -341,6 +344,13 @@ static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
return ret; return ret;
} }
static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
{
if (swiotlb)
return swiotlb_dma_supported(hwdev, mask);
return 1;
}
static struct dma_map_ops swiotlb_dma_ops = { static struct dma_map_ops swiotlb_dma_ops = {
.alloc = __dma_alloc, .alloc = __dma_alloc,
.free = __dma_free, .free = __dma_free,
...@@ -354,7 +364,7 @@ static struct dma_map_ops swiotlb_dma_ops = { ...@@ -354,7 +364,7 @@ static struct dma_map_ops swiotlb_dma_ops = {
.sync_single_for_device = __swiotlb_sync_single_for_device, .sync_single_for_device = __swiotlb_sync_single_for_device,
.sync_sg_for_cpu = __swiotlb_sync_sg_for_cpu, .sync_sg_for_cpu = __swiotlb_sync_sg_for_cpu,
.sync_sg_for_device = __swiotlb_sync_sg_for_device, .sync_sg_for_device = __swiotlb_sync_sg_for_device,
.dma_supported = swiotlb_dma_supported, .dma_supported = __swiotlb_dma_supported,
.mapping_error = swiotlb_dma_mapping_error, .mapping_error = swiotlb_dma_mapping_error,
}; };
...@@ -513,6 +523,9 @@ EXPORT_SYMBOL(dummy_dma_ops); ...@@ -513,6 +523,9 @@ EXPORT_SYMBOL(dummy_dma_ops);
static int __init arm64_dma_init(void) static int __init arm64_dma_init(void)
{ {
if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
swiotlb = 1;
return atomic_pool_init(); return atomic_pool_init();
} }
arch_initcall(arm64_dma_init); arch_initcall(arm64_dma_init);
...@@ -848,15 +861,16 @@ static int __iommu_attach_notifier(struct notifier_block *nb, ...@@ -848,15 +861,16 @@ static int __iommu_attach_notifier(struct notifier_block *nb,
{ {
struct iommu_dma_notifier_data *master, *tmp; struct iommu_dma_notifier_data *master, *tmp;
if (action != BUS_NOTIFY_ADD_DEVICE) if (action != BUS_NOTIFY_BIND_DRIVER)
return 0; return 0;
mutex_lock(&iommu_dma_notifier_lock); mutex_lock(&iommu_dma_notifier_lock);
list_for_each_entry_safe(master, tmp, &iommu_dma_masters, list) { list_for_each_entry_safe(master, tmp, &iommu_dma_masters, list) {
if (do_iommu_attach(master->dev, master->ops, if (data == master->dev && do_iommu_attach(master->dev,
master->dma_base, master->size)) { master->ops, master->dma_base, master->size)) {
list_del(&master->list); list_del(&master->list);
kfree(master); kfree(master);
break;
} }
} }
mutex_unlock(&iommu_dma_notifier_lock); mutex_unlock(&iommu_dma_notifier_lock);
...@@ -870,17 +884,8 @@ static int __init register_iommu_dma_ops_notifier(struct bus_type *bus) ...@@ -870,17 +884,8 @@ static int __init register_iommu_dma_ops_notifier(struct bus_type *bus)
if (!nb) if (!nb)
return -ENOMEM; return -ENOMEM;
/*
* The device must be attached to a domain before the driver probe
* routine gets a chance to start allocating DMA buffers. However,
* the IOMMU driver also needs a chance to configure the iommu_group
* via its add_device callback first, so we need to make the attach
* happen between those two points. Since the IOMMU core uses a bus
* notifier with default priority for add_device, do the same but
* with a lower priority to ensure the appropriate ordering.
*/
nb->notifier_call = __iommu_attach_notifier; nb->notifier_call = __iommu_attach_notifier;
nb->priority = -100;
ret = bus_register_notifier(bus, nb); ret = bus_register_notifier(bus, nb);
if (ret) { if (ret) {
...@@ -904,10 +909,6 @@ static int __init __iommu_dma_init(void) ...@@ -904,10 +909,6 @@ static int __init __iommu_dma_init(void)
if (!ret) if (!ret)
ret = register_iommu_dma_ops_notifier(&pci_bus_type); ret = register_iommu_dma_ops_notifier(&pci_bus_type);
#endif #endif
/* handle devices queued before this arch_initcall */
if (!ret)
__iommu_attach_notifier(NULL, BUS_NOTIFY_ADD_DEVICE, NULL);
return ret; return ret;
} }
arch_initcall(__iommu_dma_init); arch_initcall(__iommu_dma_init);
......
...@@ -160,12 +160,10 @@ static void __init arm64_memory_present(void) ...@@ -160,12 +160,10 @@ static void __init arm64_memory_present(void)
static void __init arm64_memory_present(void) static void __init arm64_memory_present(void)
{ {
struct memblock_region *reg; struct memblock_region *reg;
int nid = 0;
for_each_memblock(memory, reg) { for_each_memblock(memory, reg) {
#ifdef CONFIG_NUMA int nid = memblock_get_region_node(reg);
nid = reg->nid;
#endif
memory_present(nid, memblock_region_memory_base_pfn(reg), memory_present(nid, memblock_region_memory_base_pfn(reg),
memblock_region_memory_end_pfn(reg)); memblock_region_memory_end_pfn(reg));
} }
...@@ -403,6 +401,7 @@ static void __init free_unused_memmap(void) ...@@ -403,6 +401,7 @@ static void __init free_unused_memmap(void)
*/ */
void __init mem_init(void) void __init mem_init(void)
{ {
if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
swiotlb_init(1); swiotlb_init(1);
set_max_mapnr(pfn_to_page(max_pfn) - mem_map); set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
...@@ -430,9 +429,9 @@ void __init mem_init(void) ...@@ -430,9 +429,9 @@ void __init mem_init(void)
pr_cont(" vmalloc : 0x%16lx - 0x%16lx (%6ld GB)\n", pr_cont(" vmalloc : 0x%16lx - 0x%16lx (%6ld GB)\n",
MLG(VMALLOC_START, VMALLOC_END)); MLG(VMALLOC_START, VMALLOC_END));
pr_cont(" .text : 0x%p" " - 0x%p" " (%6ld KB)\n", pr_cont(" .text : 0x%p" " - 0x%p" " (%6ld KB)\n",
MLK_ROUNDUP(_text, __start_rodata)); MLK_ROUNDUP(_text, _etext));
pr_cont(" .rodata : 0x%p" " - 0x%p" " (%6ld KB)\n", pr_cont(" .rodata : 0x%p" " - 0x%p" " (%6ld KB)\n",
MLK_ROUNDUP(__start_rodata, _etext)); MLK_ROUNDUP(__start_rodata, __init_begin));
pr_cont(" .init : 0x%p" " - 0x%p" " (%6ld KB)\n", pr_cont(" .init : 0x%p" " - 0x%p" " (%6ld KB)\n",
MLK_ROUNDUP(__init_begin, __init_end)); MLK_ROUNDUP(__init_begin, __init_end));
pr_cont(" .data : 0x%p" " - 0x%p" " (%6ld KB)\n", pr_cont(" .data : 0x%p" " - 0x%p" " (%6ld KB)\n",
......
...@@ -180,6 +180,8 @@ ENTRY(__cpu_setup) ...@@ -180,6 +180,8 @@ ENTRY(__cpu_setup)
msr cpacr_el1, x0 // Enable FP/ASIMD msr cpacr_el1, x0 // Enable FP/ASIMD
mov x0, #1 << 12 // Reset mdscr_el1 and disable mov x0, #1 << 12 // Reset mdscr_el1 and disable
msr mdscr_el1, x0 // access to the DCC from EL0 msr mdscr_el1, x0 // access to the DCC from EL0
isb // Unmask debug exceptions now,
enable_dbg // since this is per-cpu
reset_pmuserenr_el0 x0 // Disable PMU access from EL0 reset_pmuserenr_el0 x0 // Disable PMU access from EL0
/* /*
* Memory region attributes for LPAE: * Memory region attributes for LPAE:
......
...@@ -39,6 +39,7 @@ ...@@ -39,6 +39,7 @@
#define KEXEC_ARCH_SH (42 << 16) #define KEXEC_ARCH_SH (42 << 16)
#define KEXEC_ARCH_MIPS_LE (10 << 16) #define KEXEC_ARCH_MIPS_LE (10 << 16)
#define KEXEC_ARCH_MIPS ( 8 << 16) #define KEXEC_ARCH_MIPS ( 8 << 16)
#define KEXEC_ARCH_AARCH64 (183 << 16)
/* The artificial cap on the number of segments passed to kexec_load. */ /* The artificial cap on the number of segments passed to kexec_load. */
#define KEXEC_SEGMENT_MAX 16 #define KEXEC_SEGMENT_MAX 16
......
...@@ -46,6 +46,11 @@ static int handler_pre(struct kprobe *p, struct pt_regs *regs) ...@@ -46,6 +46,11 @@ static int handler_pre(struct kprobe *p, struct pt_regs *regs)
" ex1 = 0x%lx\n", " ex1 = 0x%lx\n",
p->symbol_name, p->addr, regs->pc, regs->ex1); p->symbol_name, p->addr, regs->pc, regs->ex1);
#endif #endif
#ifdef CONFIG_ARM64
pr_info("<%s> pre_handler: p->addr = 0x%p, pc = 0x%lx,"
" pstate = 0x%lx\n",
p->symbol_name, p->addr, (long)regs->pc, (long)regs->pstate);
#endif
/* A dump_stack() here will give a stack backtrace */ /* A dump_stack() here will give a stack backtrace */
return 0; return 0;
...@@ -71,6 +76,10 @@ static void handler_post(struct kprobe *p, struct pt_regs *regs, ...@@ -71,6 +76,10 @@ static void handler_post(struct kprobe *p, struct pt_regs *regs,
printk(KERN_INFO "<%s> post_handler: p->addr = 0x%p, ex1 = 0x%lx\n", printk(KERN_INFO "<%s> post_handler: p->addr = 0x%p, ex1 = 0x%lx\n",
p->symbol_name, p->addr, regs->ex1); p->symbol_name, p->addr, regs->ex1);
#endif #endif
#ifdef CONFIG_ARM64
pr_info("<%s> post_handler: p->addr = 0x%p, pstate = 0x%lx\n",
p->symbol_name, p->addr, (long)regs->pstate);
#endif
} }
/* /*
......
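For context, the parts of the sample module that do not appear in this hunk register the probe roughly as follows; the symbol name shown is only an illustrative default:

static struct kprobe kp = {
	.symbol_name	= "_do_fork",		/* illustrative default */
	.pre_handler	= handler_pre,
	.post_handler	= handler_post,
};

static int __init kprobe_init(void)
{
	int ret = register_kprobe(&kp);

	if (ret < 0) {
		pr_err("register_kprobe failed, returned %d\n", ret);
		return ret;
	}
	pr_info("Planted kprobe at %p\n", kp.addr);
	return 0;
}

static void __exit kprobe_exit(void)
{
	unregister_kprobe(&kp);
	pr_info("kprobe at %p unregistered\n", kp.addr);
}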