Commit 9b9e2113 authored by Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:

 - KCSAN enabled for arm64.

 - Additional kselftests to exercise the syscall ABI w.r.t. SVE/FPSIMD.

 - Some more SVE clean-ups and refactoring in preparation for SME
   support (scalable matrix extensions).

 - BTI clean-ups (SYM_FUNC macros etc.)

 - arm64 atomics clean-up and codegen improvements.

 - HWCAPs for FEAT_AFP (alternate floating point behaviour) and
   FEAT_RPRES (increased precision of reciprocal estimate and
   reciprocal square root estimate).

 - Use SHA3 instructions to speed-up XOR.

 - arm64 unwind code refactoring/unification.

 - Avoid DC (data cache maintenance) instructions when DCZID_EL0.DZP ==
   1 (potentially set by a hypervisor; user-space already does this).

 - Perf updates for arm64: support for CI-700, HiSilicon PCIe PMU,
   Marvell CN10K LLC-TAD PMU, miscellaneous clean-ups.

 - Other fixes and clean-ups; highlights: fix the handling of erratum
   1418040, correct the calculation of the nomap region boundaries,
   introduce io_stop_wc() mapped to the new DGH instruction (data
   gathering hint).

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
  arm64: Use correct method to calculate nomap region boundaries
  arm64: Drop outdated links in comments
  arm64: perf: Don't register user access sysctl handler multiple times
  drivers: perf: marvell_cn10k: fix an IS_ERR() vs NULL check
  perf/smmuv3: Fix unused variable warning when CONFIG_OF=n
  arm64: errata: Fix exec handling in erratum 1418040 workaround
  arm64: Unhash early pointer print plus improve comment
  asm-generic: introduce io_stop_wc() and add implementation for ARM64
  arm64: Ensure that the 'bti' macro is defined where linkage.h is included
  arm64: remove __dma_*_area() aliases
  docs/arm64: delete a space from tagged-address-abi
  arm64: Enable KCSAN
  kselftest/arm64: Add pidbench for floating point syscall cases
  arm64/fp: Add comments documenting the usage of state restore functions
  kselftest/arm64: Add a test program to exercise the syscall ABI
  kselftest/arm64: Allow signal tests to trigger from a function
  kselftest/arm64: Parameterise ptrace vector length information
  arm64/sve: Minor clarification of ABI documentation
  arm64/sve: Generalise vector length configuration prctl() for SME
  arm64/sve: Make sysctl interface for SVE reusable by SME
  ...
parents a7ac3140 945409a6
================================================
HiSilicon PCIe Performance Monitoring Unit (PMU)
================================================
On Hip09, the HiSilicon PCIe Performance Monitoring Unit (PMU) can monitor
bandwidth, latency, bus utilization and buffer occupancy data of PCIe.

Each PCIe Core has a PMU to monitor the multiple Root Ports of that PCIe Core
and all Endpoints downstream of these Root Ports.
HiSilicon PCIe PMU driver
=========================
The PCIe PMU driver registers a perf PMU with a name derived from its sicl-id
and PCIe Core id::

  /sys/bus/event_source/hisi_pcie<sicl>_<core>

The PMU driver provides descriptions of the available events and filter options
in sysfs, see /sys/bus/event_source/devices/hisi_pcie<sicl>_<core>.
The "format" directory describes all formats of the config (events) and config1
(filter options) fields of the perf_event_attr structure. The "events" directory
describes all documented events shown in perf list.
The "identifier" sysfs file allows users to identify the version of the
PMU hardware device.
The "bus" sysfs file allows users to get the bus number of Root Ports
monitored by PMU.
Example usage of perf::

  $# perf list
  hisi_pcie0_0/rx_mwr_latency/ [kernel PMU event]
  hisi_pcie0_0/rx_mwr_cnt/ [kernel PMU event]
  ------------------------------------------
  $# perf stat -e hisi_pcie0_0/rx_mwr_latency/
  $# perf stat -e hisi_pcie0_0/rx_mwr_cnt/
  $# perf stat -g -e hisi_pcie0_0/rx_mwr_latency/ -e hisi_pcie0_0/rx_mwr_cnt/

The current driver does not support sampling, so "perf record" is unsupported.
Attaching to a task is also unsupported for the PCIe PMU.
Filter options
--------------
1. Target filter

The PMU can only monitor the performance of traffic downstream of the target
Root Ports or of the target Endpoint. The PCIe PMU driver provides "port" and
"bdf" filter interfaces for users; these two interfaces cannot be used at the
same time.

-port

The "port" filter can be used with all PCIe PMU events. The target Root Port is
selected by configuring the 16-bit bitmap "port". Multiple ports can be selected
for AP-layer events, while only one port can be selected for TL/DL-layer events.

For example, if the target Root Port is 0000:00:00.0 (x8 lanes), bit 0 of the
bitmap should be set: port=0x1; if the target Root Port is 0000:00:04.0
(x4 lanes), bit 8 is set: port=0x100; if both Root Ports are monitored,
port=0x101.

Example usage of perf::

  $# perf stat -e hisi_pcie0_0/rx_mwr_latency,port=0x1/ sleep 5
-bdf

The "bdf" filter can only be used with bandwidth events. The target Endpoint is
selected by writing its BDF to "bdf". The counter only counts the bandwidth of
messages requested by the target Endpoint.

For example, "bdf=0x3900" means the BDF of the target Endpoint is 0000:39:00.0.

Example usage of perf::

  $# perf stat -e hisi_pcie0_0/rx_mrd_flux,bdf=0x3900/ sleep 5
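For illustration only (this helper is not part of the driver or its ABI), the
"bdf" value follows the standard PCI bus/device/function encoding, so it can be
built with a small helper such as::

  /* Hypothetical helper: 0000:39:00.0 -> (0x39 << 8) | (0x00 << 3) | 0x0 = 0x3900 */
  static inline unsigned int hisi_pcie_pmu_bdf(unsigned int bus,
                                               unsigned int dev,
                                               unsigned int fn)
  {
          return (bus << 8) | (dev << 3) | fn;
  }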
2. Trigger filter

Event statistics start the first time the TLP length is greater/smaller than
the trigger condition. You can set the trigger condition by writing "trig_len",
and set the trigger mode by writing "trig_mode". This filter can only be used
with bandwidth events.

For example, "trig_len=4" means the trigger condition is 2^4 DW, "trig_mode=0"
means statistics start when the TLP length > trigger condition, and
"trig_mode=1" means they start when the TLP length < trigger condition.

Example usage of perf::

  $# perf stat -e hisi_pcie0_0/rx_mrd_flux,trig_len=0x4,trig_mode=1/ sleep 5
3. Threshold filter

The counter counts when the TLP length is within the specified range. You can
set the threshold by writing "thr_len", and set the threshold mode by writing
"thr_mode". This filter can only be used with bandwidth events.

For example, "thr_len=4" means the threshold is 2^4 DW, "thr_mode=0" means the
counter counts when the TLP length >= threshold, and "thr_mode=1" means it
counts when the TLP length < threshold.

Example usage of perf::

  $# perf stat -e hisi_pcie0_0/rx_mrd_flux,thr_len=0x4,thr_mode=1/ sleep 5
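For reference, "trig_len" and "thr_len" are exponents counted in double words
(one DW is 4 bytes), so a hypothetical helper (name illustrative only) to
convert them to bytes would be::

  /* len = 4 means 2^4 DW = 16 DW = 64 bytes */
  static inline unsigned int hisi_pcie_pmu_len_bytes(unsigned int len)
  {
          return (1U << len) * 4;
  }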
...@@ -905,6 +905,17 @@ enabled, otherwise writing to this file will return ``-EBUSY``. ...@@ -905,6 +905,17 @@ enabled, otherwise writing to this file will return ``-EBUSY``.
The default value is 8. The default value is 8.
perf_user_access (arm64 only)
=================================
Controls user space access for reading perf event counters. When set to 1,
user space can read performance monitor counter registers directly.
The default value is 0 (access disabled).
See Documentation/arm64/perf.rst for more information.
pid_max pid_max
======= =======
......
...@@ -275,6 +275,23 @@ infrastructure: ...@@ -275,6 +275,23 @@ infrastructure:
| SVEVer | [3-0] | y | | SVEVer | [3-0] | y |
+------------------------------+---------+---------+ +------------------------------+---------+---------+
8) ID_AA64MMFR1_EL1 - Memory model feature register 1
+------------------------------+---------+---------+
| Name | bits | visible |
+------------------------------+---------+---------+
| AFP | [47-44] | y |
+------------------------------+---------+---------+
9) ID_AA64ISAR2_EL1 - Instruction set attribute register 2
+------------------------------+---------+---------+
| Name | bits | visible |
+------------------------------+---------+---------+
| RPRES | [7-4] | y |
+------------------------------+---------+---------+
Appendix I: Example Appendix I: Example
------------------- -------------------
......
...@@ -251,6 +251,14 @@ HWCAP2_ECV ...@@ -251,6 +251,14 @@ HWCAP2_ECV
Functionality implied by ID_AA64MMFR0_EL1.ECV == 0b0001. Functionality implied by ID_AA64MMFR0_EL1.ECV == 0b0001.
HWCAP2_AFP
Functionality implied by ID_AA64MMFR1_EL1.AFP == 0b0001.
HWCAP2_RPRES
Functionality implied by ID_AA64ISAR2_EL1.RPRES == 0b0001.
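As an illustration only (not part of the ABI document), userspace would
normally test these bits via the auxiliary vector; a minimal sketch, assuming
the toolchain's <asm/hwcap.h> already defines the new constants::

  #include <sys/auxv.h>
  #include <asm/hwcap.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned long hwcap2 = getauxval(AT_HWCAP2);

          printf("afp:   %d\n", !!(hwcap2 & HWCAP2_AFP));
          printf("rpres: %d\n", !!(hwcap2 & HWCAP2_RPRES));
          return 0;
  }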
4. Unused AT_HWCAP bits 4. Unused AT_HWCAP bits
----------------------- -----------------------
......
...@@ -2,7 +2,10 @@ ...@@ -2,7 +2,10 @@
.. _perf_index: .. _perf_index:
===================== ====
Perf
====
Perf Event Attributes Perf Event Attributes
===================== =====================
...@@ -88,3 +91,76 @@ exclude_host. However when using !exclude_hv there is a small blackout ...@@ -88,3 +91,76 @@ exclude_host. However when using !exclude_hv there is a small blackout
window at the guest entry/exit where host events are not captured. window at the guest entry/exit where host events are not captured.
On VHE systems there are no blackout windows. On VHE systems there are no blackout windows.
Perf Userspace PMU Hardware Counter Access
==========================================
Overview
--------
The perf userspace tool relies on the PMU to monitor events. It offers an
abstraction layer over the hardware counters since the underlying
implementation is CPU-dependent.

Arm64 allows userspace tools to access the registers storing the hardware
counters' values directly.

This specifically targets self-monitoring tasks, which can reduce overhead by
accessing the registers directly rather than going through the kernel.
How-to
------
The focus is on the ARMv8 PMUv3, which ensures that access to the PMU
registers is enabled and that userspace has the relevant information needed
to use them.
To access the hardware counters, the global sysctl kernel/perf_user_access
must first be enabled:

.. code-block:: sh

  echo 1 > /proc/sys/kernel/perf_user_access
The event must be opened via the perf_event interface with the config1:1 attr
bit set: the sys_perf_event_open syscall returns an fd which can subsequently
be used with the mmap syscall in order to retrieve a page of memory containing
information about the event. The PMU driver uses this page to expose the
hardware counter's index and other necessary data to the user. Using this
index enables the user to access the PMU registers using the `mrs` instruction.

Access to the PMU registers is only valid while the sequence lock is unchanged.
In particular, the PMSELR_EL0 register is zeroed each time the sequence lock
changes.
The userspace access is supported in libperf using the perf_evsel__mmap()
and perf_evsel__read() functions. See `tools/lib/perf/tests/test-evsel.c`_ for
an example.
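The sequence above can be sketched in C. The following is a minimal
illustration rather than the reference implementation (libperf's
perf_evsel__mmap()/perf_evsel__read() are that): error handling is omitted,
the config1 bit value follows the description above, and reading the selected
counter through PMSELR_EL0/PMXEVCNTR_EL0 is only one way of using the reported
index (libperf reads the pmevcntr<n>_el0 registers directly).

.. code-block:: c

  #include <linux/perf_event.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /*
   * Read the counter reported via the mmap page index. Only valid while the
   * sequence lock is unchanged and for events placed on a generic event
   * counter (the cycle counter is read through pmccntr_el0 instead).
   */
  static uint64_t read_evcntr(uint32_t idx)
  {
          uint64_t val;

          asm volatile("msr pmselr_el0, %0" : : "r" ((uint64_t)idx));
          asm volatile("isb" : : : "memory");
          asm volatile("mrs %0, pmxevcntr_el0" : "=r" (val));
          return val;
  }

  int main(void)
  {
          struct perf_event_attr attr = {
                  .type = PERF_TYPE_HARDWARE,
                  .config = PERF_COUNT_HW_INSTRUCTIONS,
                  .size = sizeof(attr),
                  .config1 = 0x2,         /* bit 1: request userspace access */
                  .exclude_kernel = 1,
          };
          struct perf_event_mmap_page *pc;
          uint64_t offset, raw = 0;
          uint32_t seq, idx, width;
          int fd;

          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
          pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);

          do {
                  seq = pc->lock;
                  asm volatile("" : : : "memory");
                  idx = pc->index;        /* 0 means no direct access */
                  offset = pc->offset;
                  width = pc->pmc_width;
                  if (pc->cap_user_rdpmc && idx)
                          raw = read_evcntr(idx - 1);
                  asm volatile("" : : : "memory");
          } while (pc->lock != seq);

          if (pc->cap_user_rdpmc && idx) {
                  raw <<= 64 - width;     /* keep only the valid counter bits */
                  raw >>= 64 - width;
                  printf("instructions: %llu\n",
                         (unsigned long long)(offset + raw));
          }

          close(fd);
          return 0;
  }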
About heterogeneous systems
---------------------------
On heterogeneous systems such as big.LITTLE, userspace PMU counter access can
only be enabled when the tasks are pinned to a homogeneous subset of cores and
the corresponding PMU instance is opened by specifying the 'type' attribute.
The use of generic event types is not supported in this case.
Have a look at `tools/perf/arch/arm64/tests/user-events.c`_ for an example. It
can be run using the perf tool to check that the access to the registers works
correctly from userspace:
.. code-block:: sh

  perf test -v user
About chained events and counter sizes
--------------------------------------
The user can request either a 32-bit (config1:0 == 0) or 64-bit (config1:0 == 1)
counter along with userspace access. The sys_perf_event_open syscall will fail
if a 64-bit counter is requested and the hardware doesn't support 64-bit
counters. Chained events are not supported in conjunction with userspace counter
access. If a 32-bit counter is requested on hardware with 64-bit counters, then
userspace must treat the upper 32-bits read from the counter as UNKNOWN. The
'pmc_width' field in the user page will indicate the valid width of the counter
and should be used to mask the upper bits as needed.
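As a sketch of that masking (the helper name and the use of <stdint.h> are
illustrative, not taken from the kernel sources):

.. code-block:: c

  #include <stdint.h>

  /*
   * Difference between two raw reads of the same counter, confined to the
   * valid pmc_width bits so that the UNKNOWN upper 32 bits of a 32-bit
   * counter (and wraparound) are handled correctly.
   */
  static inline uint64_t pmc_delta(uint64_t prev, uint64_t now,
                                   unsigned int pmc_width)
  {
          return (now - prev) & (~0ULL >> (64 - pmc_width));
  }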
.. Links
.. _tools/perf/arch/arm64/tests/user-events.c:
   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/perf/arch/arm64/tests/user-events.c
.. _tools/lib/perf/tests/test-evsel.c:
   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/lib/perf/tests/test-evsel.c
...@@ -255,7 +255,7 @@ prctl(PR_SVE_GET_VL) ...@@ -255,7 +255,7 @@ prctl(PR_SVE_GET_VL)
vector length change (which would only normally be the case between a vector length change (which would only normally be the case between a
fork() or vfork() and the corresponding execve() in typical use). fork() or vfork() and the corresponding execve() in typical use).
To extract the vector length from the result, and it with To extract the vector length from the result, bitwise and it with
PR_SVE_VL_LEN_MASK. PR_SVE_VL_LEN_MASK.
Return value: a nonnegative value on success, or a negative value on error: Return value: a nonnegative value on success, or a negative value on error:
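As an illustrative sketch only (the helper name is made up; PR_SVE_GET_VL and
PR_SVE_VL_LEN_MASK come from <linux/prctl.h>)::

  #include <sys/prctl.h>
  #include <linux/prctl.h>

  /* Return the current SVE vector length in bytes, or -1 on error (errno set). */
  static int get_sve_vl(void)
  {
          int ret = prctl(PR_SVE_GET_VL);

          return (ret < 0) ? ret : (ret & PR_SVE_VL_LEN_MASK);
  }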
......
...@@ -49,7 +49,7 @@ how the user addresses are used by the kernel: ...@@ -49,7 +49,7 @@ how the user addresses are used by the kernel:
- ``brk()``, ``mmap()`` and the ``new_address`` argument to - ``brk()``, ``mmap()`` and the ``new_address`` argument to
``mremap()`` as these have the potential to alias with existing ``mremap()`` as these have the potential to alias with existing
user addresses. user addresses.
NOTE: This behaviour changed in v5.6 and so some earlier kernels may NOTE: This behaviour changed in v5.6 and so some earlier kernels may
incorrectly accept valid tagged pointers for the ``brk()``, incorrectly accept valid tagged pointers for the ``brk()``,
......
...@@ -12,12 +12,14 @@ maintainers: ...@@ -12,12 +12,14 @@ maintainers:
properties: properties:
compatible: compatible:
const: arm,cmn-600 enum:
- arm,cmn-600
- arm,ci-700
reg: reg:
items: items:
- description: Physical address of the base (PERIPHBASE) and - description: Physical address of the base (PERIPHBASE) and
size (up to 64MB) of the configuration address space. size of the configuration address space.
interrupts: interrupts:
minItems: 1 minItems: 1
...@@ -31,14 +33,23 @@ properties: ...@@ -31,14 +33,23 @@ properties:
arm,root-node: arm,root-node:
$ref: /schemas/types.yaml#/definitions/uint32 $ref: /schemas/types.yaml#/definitions/uint32
description: Offset from PERIPHBASE of the configuration description: Offset from PERIPHBASE of CMN-600's configuration
discovery node (see TRM definition of ROOTNODEBASE). discovery node (see TRM definition of ROOTNODEBASE). Not
relevant for newer CMN/CI products.
required: required:
- compatible - compatible
- reg - reg
- interrupts - interrupts
- arm,root-node
if:
properties:
compatible:
contains:
const: arm,cmn-600
then:
required:
- arm,root-node
additionalProperties: false additionalProperties: false
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/perf/arm,smmu-v3-pmcg.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Arm SMMUv3 Performance Monitor Counter Group

maintainers:
  - Will Deacon <will@kernel.org>
  - Robin Murphy <robin.murphy@arm.com>

description: |
  An SMMUv3 may have several Performance Monitor Counter Groups (PMCGs).
  They are standalone performance monitoring units that support both
  architected and IMPLEMENTATION DEFINED event counters.

properties:
  $nodename:
    pattern: "^pmu@[0-9a-f]*"

  compatible:
    oneOf:
      - items:
          - const: arm,mmu-600-pmcg
          - const: arm,smmu-v3-pmcg
      - const: arm,smmu-v3-pmcg

  reg:
    items:
      - description: Register page 0
      - description: Register page 1, if SMMU_PMCG_CFGR.RELOC_CTRS = 1
    minItems: 1

  interrupts:
    maxItems: 1

  msi-parent: true

required:
  - compatible
  - reg

anyOf:
  - required:
      - interrupts
  - required:
      - msi-parent

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/interrupt-controller/irq.h>

    pmu@2b420000 {
        compatible = "arm,smmu-v3-pmcg";
        reg = <0x2b420000 0x1000>,
              <0x2b430000 0x1000>;
        interrupts = <GIC_SPI 80 IRQ_TYPE_EDGE_RISING>;
        msi-parent = <&its 0xff0000>;
    };

    pmu@2b440000 {
        compatible = "arm,smmu-v3-pmcg";
        reg = <0x2b440000 0x1000>,
              <0x2b450000 0x1000>;
        interrupts = <GIC_SPI 81 IRQ_TYPE_EDGE_RISING>;
        msi-parent = <&its 0xff0000>;
    };
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/perf/marvell-cn10k-tad.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Marvell CN10K LLC-TAD performance monitor

maintainers:
  - Bhaskara Budiredla <bbudiredla@marvell.com>

description: |
  The Tag-and-Data units (TADs) maintain coherence and contain the CN10K
  shared on-chip last level cache (LLC). The TAD PMU measures the
  performance of the last-level cache. Each TAD PMU supports up to eight
  counters.

  The DT setup comprises the number of TAD blocks, the sizes of the PMU
  regions and TAD blocks, and the overall base address of the hardware.

properties:
  compatible:
    const: marvell,cn10k-tad-pmu

  reg:
    maxItems: 1

  marvell,tad-cnt:
    description: specifies the number of TADs on the SoC
    $ref: /schemas/types.yaml#/definitions/uint32

  marvell,tad-page-size:
    description: specifies the size of each TAD page
    $ref: /schemas/types.yaml#/definitions/uint32

  marvell,tad-pmu-page-size:
    description: specifies the size of the page that the PMU uses
    $ref: /schemas/types.yaml#/definitions/uint32

required:
  - compatible
  - reg
  - marvell,tad-cnt
  - marvell,tad-page-size
  - marvell,tad-pmu-page-size

additionalProperties: false

examples:
  - |
    tad {
        #address-cells = <2>;
        #size-cells = <2>;

        tad_pmu@80000000 {
            compatible = "marvell,cn10k-tad-pmu";
            reg = <0x87e2 0x80000000 0x0 0x1000>;
            marvell,tad-cnt = <1>;
            marvell,tad-page-size = <0x1000>;
            marvell,tad-pmu-page-size = <0x1000>;
        };
    };
...@@ -1950,6 +1950,14 @@ There are some more advanced barrier functions: ...@@ -1950,6 +1950,14 @@ There are some more advanced barrier functions:
For load from persistent memory, existing read memory barriers are sufficient For load from persistent memory, existing read memory barriers are sufficient
to ensure read ordering. to ensure read ordering.
(*) io_stop_wc();
     For memory accesses with write-combining attributes (e.g. those returned
     by ioremap_wc()), the CPU may wait for prior accesses to be merged with
     subsequent ones. io_stop_wc() can be used to prevent the merging of
     write-combining memory accesses before this macro with those after it
     when such wait has performance implications.
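     For illustration only, a hedged sketch (the DEVICE_DATA offset, the data
     values and the mapping are hypothetical, not taken from this document):

        void __iomem *regs = ioremap_wc(phys_base, size);

        writel_relaxed(data0, regs + DEVICE_DATA);  /* WC write #1 */
        io_stop_wc();     /* don't merge #1 with the next WC write */
        writel_relaxed(data1, regs + DEVICE_DATA);  /* WC write #2 */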
=============================== ===============================
IMPLICIT KERNEL MEMORY BARRIERS IMPLICIT KERNEL MEMORY BARRIERS
=============================== ===============================
......
...@@ -8615,8 +8615,10 @@ F: drivers/misc/hisi_hikey_usb.c ...@@ -8615,8 +8615,10 @@ F: drivers/misc/hisi_hikey_usb.c
HISILICON PMU DRIVER HISILICON PMU DRIVER
M: Shaokun Zhang <zhangshaokun@hisilicon.com> M: Shaokun Zhang <zhangshaokun@hisilicon.com>
M: Qi Liu <liuqi115@huawei.com>
S: Supported S: Supported
W: http://www.hisilicon.com W: http://www.hisilicon.com
F: Documentation/admin-guide/perf/hisi-pcie-pmu.rst
F: Documentation/admin-guide/perf/hisi-pmu.rst F: Documentation/admin-guide/perf/hisi-pmu.rst
F: drivers/perf/hisilicon F: drivers/perf/hisilicon
......
...@@ -150,6 +150,8 @@ config ARM64 ...@@ -150,6 +150,8 @@ config ARM64
select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE) select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
# Some instrumentation may be unsound, hence EXPERT
select HAVE_ARCH_KCSAN if EXPERT
select HAVE_ARCH_KFENCE select HAVE_ARCH_KFENCE
select HAVE_ARCH_KGDB select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS select HAVE_ARCH_MMAP_RND_BITS
...@@ -1545,6 +1547,12 @@ endmenu ...@@ -1545,6 +1547,12 @@ endmenu
menu "ARMv8.2 architectural features" menu "ARMv8.2 architectural features"
config AS_HAS_ARMV8_2
def_bool $(cc-option,-Wa$(comma)-march=armv8.2-a)
config AS_HAS_SHA3
def_bool $(as-instr,.arch armv8.2-a+sha3)
config ARM64_PMEM config ARM64_PMEM
bool "Enable support for persistent memory" bool "Enable support for persistent memory"
select ARCH_HAS_PMEM_API select ARCH_HAS_PMEM_API
......
...@@ -58,6 +58,11 @@ stack_protector_prepare: prepare0 ...@@ -58,6 +58,11 @@ stack_protector_prepare: prepare0
include/generated/asm-offsets.h)) include/generated/asm-offsets.h))
endif endif
ifeq ($(CONFIG_AS_HAS_ARMV8_2), y)
# make sure to pass the newest target architecture to -march.
asm-arch := armv8.2-a
endif
# Ensure that if the compiler supports branch protection we default it # Ensure that if the compiler supports branch protection we default it
# off, this will be overridden if we are using branch protection. # off, this will be overridden if we are using branch protection.
branch-prot-flags-y += $(call cc-option,-mbranch-protection=none) branch-prot-flags-y += $(call cc-option,-mbranch-protection=none)
......
...@@ -363,15 +363,15 @@ ST5( mov v4.16b, vctr.16b ) ...@@ -363,15 +363,15 @@ ST5( mov v4.16b, vctr.16b )
adr x16, 1f adr x16, 1f
sub x16, x16, x12, lsl #3 sub x16, x16, x12, lsl #3
br x16 br x16
hint 34 // bti c bti c
mov v0.d[0], vctr.d[0] mov v0.d[0], vctr.d[0]
hint 34 // bti c bti c
mov v1.d[0], vctr.d[0] mov v1.d[0], vctr.d[0]
hint 34 // bti c bti c
mov v2.d[0], vctr.d[0] mov v2.d[0], vctr.d[0]
hint 34 // bti c bti c
mov v3.d[0], vctr.d[0] mov v3.d[0], vctr.d[0]
ST5( hint 34 ) ST5( bti c )
ST5( mov v4.d[0], vctr.d[0] ) ST5( mov v4.d[0], vctr.d[0] )
1: b 2f 1: b 2f
.previous .previous
......
...@@ -790,6 +790,16 @@ alternative_endif ...@@ -790,6 +790,16 @@ alternative_endif
.Lnoyield_\@: .Lnoyield_\@:
.endm .endm
/*
* Branch Target Identifier (BTI)
*/
.macro bti, targets
.equ .L__bti_targets_c, 34
.equ .L__bti_targets_j, 36
.equ .L__bti_targets_jc,38
hint #.L__bti_targets_\targets
.endm
/* /*
* This macro emits a program property note section identifying * This macro emits a program property note section identifying
* architecture features which require special handling, mainly for * architecture features which require special handling, mainly for
......
...@@ -44,11 +44,11 @@ __ll_sc_atomic_##op(int i, atomic_t *v) \ ...@@ -44,11 +44,11 @@ __ll_sc_atomic_##op(int i, atomic_t *v) \
\ \
asm volatile("// atomic_" #op "\n" \ asm volatile("// atomic_" #op "\n" \
__LL_SC_FALLBACK( \ __LL_SC_FALLBACK( \
" prfm pstl1strm, %2\n" \ " prfm pstl1strm, %2\n" \
"1: ldxr %w0, %2\n" \ "1: ldxr %w0, %2\n" \
" " #asm_op " %w0, %w0, %w3\n" \ " " #asm_op " %w0, %w0, %w3\n" \
" stxr %w1, %w0, %2\n" \ " stxr %w1, %w0, %2\n" \
" cbnz %w1, 1b\n") \ " cbnz %w1, 1b\n") \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: __stringify(constraint) "r" (i)); \ : __stringify(constraint) "r" (i)); \
} }
...@@ -62,12 +62,12 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \ ...@@ -62,12 +62,12 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \
\ \
asm volatile("// atomic_" #op "_return" #name "\n" \ asm volatile("// atomic_" #op "_return" #name "\n" \
__LL_SC_FALLBACK( \ __LL_SC_FALLBACK( \
" prfm pstl1strm, %2\n" \ " prfm pstl1strm, %2\n" \
"1: ld" #acq "xr %w0, %2\n" \ "1: ld" #acq "xr %w0, %2\n" \
" " #asm_op " %w0, %w0, %w3\n" \ " " #asm_op " %w0, %w0, %w3\n" \
" st" #rel "xr %w1, %w0, %2\n" \ " st" #rel "xr %w1, %w0, %2\n" \
" cbnz %w1, 1b\n" \ " cbnz %w1, 1b\n" \
" " #mb ) \ " " #mb ) \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: __stringify(constraint) "r" (i) \ : __stringify(constraint) "r" (i) \
: cl); \ : cl); \
...@@ -84,12 +84,12 @@ __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \ ...@@ -84,12 +84,12 @@ __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \
\ \
asm volatile("// atomic_fetch_" #op #name "\n" \ asm volatile("// atomic_fetch_" #op #name "\n" \
__LL_SC_FALLBACK( \ __LL_SC_FALLBACK( \
" prfm pstl1strm, %3\n" \ " prfm pstl1strm, %3\n" \
"1: ld" #acq "xr %w0, %3\n" \ "1: ld" #acq "xr %w0, %3\n" \
" " #asm_op " %w1, %w0, %w4\n" \ " " #asm_op " %w1, %w0, %w4\n" \
" st" #rel "xr %w2, %w1, %3\n" \ " st" #rel "xr %w2, %w1, %3\n" \
" cbnz %w2, 1b\n" \ " cbnz %w2, 1b\n" \
" " #mb ) \ " " #mb ) \
: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
: __stringify(constraint) "r" (i) \ : __stringify(constraint) "r" (i) \
: cl); \ : cl); \
...@@ -143,11 +143,11 @@ __ll_sc_atomic64_##op(s64 i, atomic64_t *v) \ ...@@ -143,11 +143,11 @@ __ll_sc_atomic64_##op(s64 i, atomic64_t *v) \
\ \
asm volatile("// atomic64_" #op "\n" \ asm volatile("// atomic64_" #op "\n" \
__LL_SC_FALLBACK( \ __LL_SC_FALLBACK( \
" prfm pstl1strm, %2\n" \ " prfm pstl1strm, %2\n" \
"1: ldxr %0, %2\n" \ "1: ldxr %0, %2\n" \
" " #asm_op " %0, %0, %3\n" \ " " #asm_op " %0, %0, %3\n" \
" stxr %w1, %0, %2\n" \ " stxr %w1, %0, %2\n" \
" cbnz %w1, 1b") \ " cbnz %w1, 1b") \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: __stringify(constraint) "r" (i)); \ : __stringify(constraint) "r" (i)); \
} }
...@@ -161,12 +161,12 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \ ...@@ -161,12 +161,12 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
\ \
asm volatile("// atomic64_" #op "_return" #name "\n" \ asm volatile("// atomic64_" #op "_return" #name "\n" \
__LL_SC_FALLBACK( \ __LL_SC_FALLBACK( \
" prfm pstl1strm, %2\n" \ " prfm pstl1strm, %2\n" \
"1: ld" #acq "xr %0, %2\n" \ "1: ld" #acq "xr %0, %2\n" \
" " #asm_op " %0, %0, %3\n" \ " " #asm_op " %0, %0, %3\n" \
" st" #rel "xr %w1, %0, %2\n" \ " st" #rel "xr %w1, %0, %2\n" \
" cbnz %w1, 1b\n" \ " cbnz %w1, 1b\n" \
" " #mb ) \ " " #mb ) \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: __stringify(constraint) "r" (i) \ : __stringify(constraint) "r" (i) \
: cl); \ : cl); \
...@@ -176,19 +176,19 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \ ...@@ -176,19 +176,19 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\ #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
static inline long \ static inline long \
__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ __ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
{ \ { \
s64 result, val; \ s64 result, val; \
unsigned long tmp; \ unsigned long tmp; \
\ \
asm volatile("// atomic64_fetch_" #op #name "\n" \ asm volatile("// atomic64_fetch_" #op #name "\n" \
__LL_SC_FALLBACK( \ __LL_SC_FALLBACK( \
" prfm pstl1strm, %3\n" \ " prfm pstl1strm, %3\n" \
"1: ld" #acq "xr %0, %3\n" \ "1: ld" #acq "xr %0, %3\n" \
" " #asm_op " %1, %0, %4\n" \ " " #asm_op " %1, %0, %4\n" \
" st" #rel "xr %w2, %1, %3\n" \ " st" #rel "xr %w2, %1, %3\n" \
" cbnz %w2, 1b\n" \ " cbnz %w2, 1b\n" \
" " #mb ) \ " " #mb ) \
: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
: __stringify(constraint) "r" (i) \ : __stringify(constraint) "r" (i) \
: cl); \ : cl); \
...@@ -241,14 +241,14 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v) ...@@ -241,14 +241,14 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v)
asm volatile("// atomic64_dec_if_positive\n" asm volatile("// atomic64_dec_if_positive\n"
__LL_SC_FALLBACK( __LL_SC_FALLBACK(
" prfm pstl1strm, %2\n" " prfm pstl1strm, %2\n"
"1: ldxr %0, %2\n" "1: ldxr %0, %2\n"
" subs %0, %0, #1\n" " subs %0, %0, #1\n"
" b.lt 2f\n" " b.lt 2f\n"
" stlxr %w1, %0, %2\n" " stlxr %w1, %0, %2\n"
" cbnz %w1, 1b\n" " cbnz %w1, 1b\n"
" dmb ish\n" " dmb ish\n"
"2:") "2:")
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
: :
: "cc", "memory"); : "cc", "memory");
......
...@@ -11,13 +11,13 @@ ...@@ -11,13 +11,13 @@
#define __ASM_ATOMIC_LSE_H #define __ASM_ATOMIC_LSE_H
#define ATOMIC_OP(op, asm_op) \ #define ATOMIC_OP(op, asm_op) \
static inline void __lse_atomic_##op(int i, atomic_t *v) \ static inline void __lse_atomic_##op(int i, atomic_t *v) \
{ \ { \
asm volatile( \ asm volatile( \
__LSE_PREAMBLE \ __LSE_PREAMBLE \
" " #asm_op " %w[i], %[v]\n" \ " " #asm_op " %w[i], %[v]\n" \
: [i] "+r" (i), [v] "+Q" (v->counter) \ : [v] "+Q" (v->counter) \
: "r" (v)); \ : [i] "r" (i)); \
} }
ATOMIC_OP(andnot, stclr) ATOMIC_OP(andnot, stclr)
...@@ -25,19 +25,27 @@ ATOMIC_OP(or, stset) ...@@ -25,19 +25,27 @@ ATOMIC_OP(or, stset)
ATOMIC_OP(xor, steor) ATOMIC_OP(xor, steor)
ATOMIC_OP(add, stadd) ATOMIC_OP(add, stadd)
static inline void __lse_atomic_sub(int i, atomic_t *v)
{
__lse_atomic_add(-i, v);
}
#undef ATOMIC_OP #undef ATOMIC_OP
#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \ #define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \ static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \
{ \ { \
int old; \
\
asm volatile( \ asm volatile( \
__LSE_PREAMBLE \ __LSE_PREAMBLE \
" " #asm_op #mb " %w[i], %w[i], %[v]" \ " " #asm_op #mb " %w[i], %w[old], %[v]" \
: [i] "+r" (i), [v] "+Q" (v->counter) \ : [v] "+Q" (v->counter), \
: "r" (v) \ [old] "=r" (old) \
: [i] "r" (i) \
: cl); \ : cl); \
\ \
return i; \ return old; \
} }
#define ATOMIC_FETCH_OPS(op, asm_op) \ #define ATOMIC_FETCH_OPS(op, asm_op) \
...@@ -54,51 +62,46 @@ ATOMIC_FETCH_OPS(add, ldadd) ...@@ -54,51 +62,46 @@ ATOMIC_FETCH_OPS(add, ldadd)
#undef ATOMIC_FETCH_OP #undef ATOMIC_FETCH_OP
#undef ATOMIC_FETCH_OPS #undef ATOMIC_FETCH_OPS
#define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \ #define ATOMIC_FETCH_OP_SUB(name) \
static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_add##name(-i, v); \
}
ATOMIC_FETCH_OP_SUB(_relaxed)
ATOMIC_FETCH_OP_SUB(_acquire)
ATOMIC_FETCH_OP_SUB(_release)
ATOMIC_FETCH_OP_SUB( )
#undef ATOMIC_FETCH_OP_SUB
#define ATOMIC_OP_ADD_SUB_RETURN(name) \
static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \ static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \
{ \ { \
u32 tmp; \ return __lse_atomic_fetch_add##name(i, v) + i; \
\ } \
asm volatile( \
__LSE_PREAMBLE \
" ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
" add %w[i], %w[i], %w[tmp]" \
: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
: "r" (v) \
: cl); \
\ \
return i; \ static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_sub(i, v) - i; \
} }
ATOMIC_OP_ADD_RETURN(_relaxed, ) ATOMIC_OP_ADD_SUB_RETURN(_relaxed)
ATOMIC_OP_ADD_RETURN(_acquire, a, "memory") ATOMIC_OP_ADD_SUB_RETURN(_acquire)
ATOMIC_OP_ADD_RETURN(_release, l, "memory") ATOMIC_OP_ADD_SUB_RETURN(_release)
ATOMIC_OP_ADD_RETURN( , al, "memory") ATOMIC_OP_ADD_SUB_RETURN( )
#undef ATOMIC_OP_ADD_RETURN #undef ATOMIC_OP_ADD_SUB_RETURN
static inline void __lse_atomic_and(int i, atomic_t *v) static inline void __lse_atomic_and(int i, atomic_t *v)
{ {
asm volatile( return __lse_atomic_andnot(~i, v);
__LSE_PREAMBLE
" mvn %w[i], %w[i]\n"
" stclr %w[i], %[v]"
: [i] "+&r" (i), [v] "+Q" (v->counter)
: "r" (v));
} }
#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \ #define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \ static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \
{ \ { \
asm volatile( \ return __lse_atomic_fetch_andnot##name(~i, v); \
__LSE_PREAMBLE \
" mvn %w[i], %w[i]\n" \
" ldclr" #mb " %w[i], %w[i], %[v]" \
: [i] "+&r" (i), [v] "+Q" (v->counter) \
: "r" (v) \
: cl); \
\
return i; \
} }
ATOMIC_FETCH_OP_AND(_relaxed, ) ATOMIC_FETCH_OP_AND(_relaxed, )
...@@ -108,69 +111,14 @@ ATOMIC_FETCH_OP_AND( , al, "memory") ...@@ -108,69 +111,14 @@ ATOMIC_FETCH_OP_AND( , al, "memory")
#undef ATOMIC_FETCH_OP_AND #undef ATOMIC_FETCH_OP_AND
static inline void __lse_atomic_sub(int i, atomic_t *v)
{
asm volatile(
__LSE_PREAMBLE
" neg %w[i], %w[i]\n"
" stadd %w[i], %[v]"
: [i] "+&r" (i), [v] "+Q" (v->counter)
: "r" (v));
}
#define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \
static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
{ \
u32 tmp; \
\
asm volatile( \
__LSE_PREAMBLE \
" neg %w[i], %w[i]\n" \
" ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
" add %w[i], %w[i], %w[tmp]" \
: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
: "r" (v) \
: cl); \
\
return i; \
}
ATOMIC_OP_SUB_RETURN(_relaxed, )
ATOMIC_OP_SUB_RETURN(_acquire, a, "memory")
ATOMIC_OP_SUB_RETURN(_release, l, "memory")
ATOMIC_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC_OP_SUB_RETURN
#define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \
static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
{ \
asm volatile( \
__LSE_PREAMBLE \
" neg %w[i], %w[i]\n" \
" ldadd" #mb " %w[i], %w[i], %[v]" \
: [i] "+&r" (i), [v] "+Q" (v->counter) \
: "r" (v) \
: cl); \
\
return i; \
}
ATOMIC_FETCH_OP_SUB(_relaxed, )
ATOMIC_FETCH_OP_SUB(_acquire, a, "memory")
ATOMIC_FETCH_OP_SUB(_release, l, "memory")
ATOMIC_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC_FETCH_OP_SUB
#define ATOMIC64_OP(op, asm_op) \ #define ATOMIC64_OP(op, asm_op) \
static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \ static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \
{ \ { \
asm volatile( \ asm volatile( \
__LSE_PREAMBLE \ __LSE_PREAMBLE \
" " #asm_op " %[i], %[v]\n" \ " " #asm_op " %[i], %[v]\n" \
: [i] "+r" (i), [v] "+Q" (v->counter) \ : [v] "+Q" (v->counter) \
: "r" (v)); \ : [i] "r" (i)); \
} }
ATOMIC64_OP(andnot, stclr) ATOMIC64_OP(andnot, stclr)
...@@ -178,19 +126,27 @@ ATOMIC64_OP(or, stset) ...@@ -178,19 +126,27 @@ ATOMIC64_OP(or, stset)
ATOMIC64_OP(xor, steor) ATOMIC64_OP(xor, steor)
ATOMIC64_OP(add, stadd) ATOMIC64_OP(add, stadd)
static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
{
__lse_atomic64_add(-i, v);
}
#undef ATOMIC64_OP #undef ATOMIC64_OP
#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \ #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\ static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
{ \ { \
s64 old; \
\
asm volatile( \ asm volatile( \
__LSE_PREAMBLE \ __LSE_PREAMBLE \
" " #asm_op #mb " %[i], %[i], %[v]" \ " " #asm_op #mb " %[i], %[old], %[v]" \
: [i] "+r" (i), [v] "+Q" (v->counter) \ : [v] "+Q" (v->counter), \
: "r" (v) \ [old] "=r" (old) \
: [i] "r" (i) \
: cl); \ : cl); \
\ \
return i; \ return old; \
} }
#define ATOMIC64_FETCH_OPS(op, asm_op) \ #define ATOMIC64_FETCH_OPS(op, asm_op) \
...@@ -207,51 +163,46 @@ ATOMIC64_FETCH_OPS(add, ldadd) ...@@ -207,51 +163,46 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OP #undef ATOMIC64_FETCH_OP
#undef ATOMIC64_FETCH_OPS #undef ATOMIC64_FETCH_OPS
#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \ #define ATOMIC64_FETCH_OP_SUB(name) \
static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_add##name(-i, v); \
}
ATOMIC64_FETCH_OP_SUB(_relaxed)
ATOMIC64_FETCH_OP_SUB(_acquire)
ATOMIC64_FETCH_OP_SUB(_release)
ATOMIC64_FETCH_OP_SUB( )
#undef ATOMIC64_FETCH_OP_SUB
#define ATOMIC64_OP_ADD_SUB_RETURN(name) \
static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\ static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
{ \ { \
unsigned long tmp; \ return __lse_atomic64_fetch_add##name(i, v) + i; \
\ } \
asm volatile( \
__LSE_PREAMBLE \
" ldadd" #mb " %[i], %x[tmp], %[v]\n" \
" add %[i], %[i], %x[tmp]" \
: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
: "r" (v) \
: cl); \
\ \
return i; \ static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)\
{ \
return __lse_atomic64_fetch_sub##name(i, v) - i; \
} }
ATOMIC64_OP_ADD_RETURN(_relaxed, ) ATOMIC64_OP_ADD_SUB_RETURN(_relaxed)
ATOMIC64_OP_ADD_RETURN(_acquire, a, "memory") ATOMIC64_OP_ADD_SUB_RETURN(_acquire)
ATOMIC64_OP_ADD_RETURN(_release, l, "memory") ATOMIC64_OP_ADD_SUB_RETURN(_release)
ATOMIC64_OP_ADD_RETURN( , al, "memory") ATOMIC64_OP_ADD_SUB_RETURN( )
#undef ATOMIC64_OP_ADD_RETURN #undef ATOMIC64_OP_ADD_SUB_RETURN
static inline void __lse_atomic64_and(s64 i, atomic64_t *v) static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
{ {
asm volatile( return __lse_atomic64_andnot(~i, v);
__LSE_PREAMBLE
" mvn %[i], %[i]\n"
" stclr %[i], %[v]"
: [i] "+&r" (i), [v] "+Q" (v->counter)
: "r" (v));
} }
#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \ #define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \ static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
{ \ { \
asm volatile( \ return __lse_atomic64_fetch_andnot##name(~i, v); \
__LSE_PREAMBLE \
" mvn %[i], %[i]\n" \
" ldclr" #mb " %[i], %[i], %[v]" \
: [i] "+&r" (i), [v] "+Q" (v->counter) \
: "r" (v) \
: cl); \
\
return i; \
} }
ATOMIC64_FETCH_OP_AND(_relaxed, ) ATOMIC64_FETCH_OP_AND(_relaxed, )
...@@ -261,61 +212,6 @@ ATOMIC64_FETCH_OP_AND( , al, "memory") ...@@ -261,61 +212,6 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")
#undef ATOMIC64_FETCH_OP_AND #undef ATOMIC64_FETCH_OP_AND
static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
{
asm volatile(
__LSE_PREAMBLE
" neg %[i], %[i]\n"
" stadd %[i], %[v]"
: [i] "+&r" (i), [v] "+Q" (v->counter)
: "r" (v));
}
#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
{ \
unsigned long tmp; \
\
asm volatile( \
__LSE_PREAMBLE \
" neg %[i], %[i]\n" \
" ldadd" #mb " %[i], %x[tmp], %[v]\n" \
" add %[i], %[i], %x[tmp]" \
: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
: "r" (v) \
: cl); \
\
return i; \
}
ATOMIC64_OP_SUB_RETURN(_relaxed, )
ATOMIC64_OP_SUB_RETURN(_acquire, a, "memory")
ATOMIC64_OP_SUB_RETURN(_release, l, "memory")
ATOMIC64_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC64_OP_SUB_RETURN
#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
{ \
asm volatile( \
__LSE_PREAMBLE \
" neg %[i], %[i]\n" \
" ldadd" #mb " %[i], %[i], %[v]" \
: [i] "+&r" (i), [v] "+Q" (v->counter) \
: "r" (v) \
: cl); \
\
return i; \
}
ATOMIC64_FETCH_OP_SUB(_relaxed, )
ATOMIC64_FETCH_OP_SUB(_acquire, a, "memory")
ATOMIC64_FETCH_OP_SUB(_release, l, "memory")
ATOMIC64_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC64_FETCH_OP_SUB
static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v) static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
{ {
unsigned long tmp; unsigned long tmp;
......
...@@ -26,6 +26,14 @@ ...@@ -26,6 +26,14 @@
#define __tsb_csync() asm volatile("hint #18" : : : "memory") #define __tsb_csync() asm volatile("hint #18" : : : "memory")
#define csdb() asm volatile("hint #20" : : : "memory") #define csdb() asm volatile("hint #20" : : : "memory")
/*
* Data Gathering Hint:
* This instruction prevents merging memory accesses with Normal-NC or
* Device-GRE attributes before the hint instruction with any memory accesses
* appearing after the hint instruction.
*/
#define dgh() asm volatile("hint #6" : : : "memory")
#ifdef CONFIG_ARM64_PSEUDO_NMI #ifdef CONFIG_ARM64_PSEUDO_NMI
#define pmr_sync() \ #define pmr_sync() \
do { \ do { \
...@@ -46,6 +54,7 @@ ...@@ -46,6 +54,7 @@
#define dma_rmb() dmb(oshld) #define dma_rmb() dmb(oshld)
#define dma_wmb() dmb(oshst) #define dma_wmb() dmb(oshst)
#define io_stop_wc() dgh()
#define tsb_csync() \ #define tsb_csync() \
do { \ do { \
......
...@@ -51,6 +51,7 @@ struct cpuinfo_arm64 { ...@@ -51,6 +51,7 @@ struct cpuinfo_arm64 {
u64 reg_id_aa64dfr1; u64 reg_id_aa64dfr1;
u64 reg_id_aa64isar0; u64 reg_id_aa64isar0;
u64 reg_id_aa64isar1; u64 reg_id_aa64isar1;
u64 reg_id_aa64isar2;
u64 reg_id_aa64mmfr0; u64 reg_id_aa64mmfr0;
u64 reg_id_aa64mmfr1; u64 reg_id_aa64mmfr1;
u64 reg_id_aa64mmfr2; u64 reg_id_aa64mmfr2;
......
...@@ -51,8 +51,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, ...@@ -51,8 +51,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_flush_task_state(struct task_struct *target);
extern void fpsimd_save_and_flush_cpu_state(void); extern void fpsimd_save_and_flush_cpu_state(void);
/* Maximum VL that SVE VL-agnostic software can transparently support */ /* Maximum VL that SVE/SME VL-agnostic software can transparently support */
#define SVE_VL_ARCH_MAX 0x100 #define VL_ARCH_MAX 0x100
/* Offset of FFR in the SVE register dump */ /* Offset of FFR in the SVE register dump */
static inline size_t sve_ffr_offset(int vl) static inline size_t sve_ffr_offset(int vl)
...@@ -122,7 +122,7 @@ extern void fpsimd_sync_to_sve(struct task_struct *task); ...@@ -122,7 +122,7 @@ extern void fpsimd_sync_to_sve(struct task_struct *task);
extern void sve_sync_to_fpsimd(struct task_struct *task); extern void sve_sync_to_fpsimd(struct task_struct *task);
extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task); extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task);
extern int sve_set_vector_length(struct task_struct *task, extern int vec_set_vector_length(struct task_struct *task, enum vec_type type,
unsigned long vl, unsigned long flags); unsigned long vl, unsigned long flags);
extern int sve_set_current_vl(unsigned long arg); extern int sve_set_current_vl(unsigned long arg);
......
...@@ -106,6 +106,8 @@ ...@@ -106,6 +106,8 @@
#define KERNEL_HWCAP_BTI __khwcap2_feature(BTI) #define KERNEL_HWCAP_BTI __khwcap2_feature(BTI)
#define KERNEL_HWCAP_MTE __khwcap2_feature(MTE) #define KERNEL_HWCAP_MTE __khwcap2_feature(MTE)
#define KERNEL_HWCAP_ECV __khwcap2_feature(ECV) #define KERNEL_HWCAP_ECV __khwcap2_feature(ECV)
#define KERNEL_HWCAP_AFP __khwcap2_feature(AFP)
#define KERNEL_HWCAP_RPRES __khwcap2_feature(RPRES)
/* /*
* This yields a mask that user programs can use to figure out what * This yields a mask that user programs can use to figure out what
......
#ifndef __ASM_LINKAGE_H #ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H #define __ASM_LINKAGE_H
#ifdef __ASSEMBLY__
#include <asm/assembler.h>
#endif
#define __ALIGN .align 2 #define __ALIGN .align 2
#define __ALIGN_STR ".align 2" #define __ALIGN_STR ".align 2"
#if defined(CONFIG_ARM64_BTI_KERNEL) && defined(__aarch64__)
/*
* Since current versions of gas reject the BTI instruction unless we
* set the architecture version to v8.5 we use the hint instruction
* instead.
*/
#define BTI_C hint 34 ;
/* /*
* When using in-kernel BTI we need to ensure that PCS-conformant assembly * When using in-kernel BTI we need to ensure that PCS-conformant
* functions have suitable annotations. Override SYM_FUNC_START to insert * assembly functions have suitable annotations. Override
* a BTI landing pad at the start of everything. * SYM_FUNC_START to insert a BTI landing pad at the start of
* everything, the override is done unconditionally so we're more
* likely to notice any drift from the overridden definitions.
*/ */
#define SYM_FUNC_START(name) \ #define SYM_FUNC_START(name) \
SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) \ SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) \
BTI_C bti c ;
#define SYM_FUNC_START_NOALIGN(name) \ #define SYM_FUNC_START_NOALIGN(name) \
SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE) \ SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE) \
BTI_C bti c ;
#define SYM_FUNC_START_LOCAL(name) \ #define SYM_FUNC_START_LOCAL(name) \
SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN) \ SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN) \
BTI_C bti c ;
#define SYM_FUNC_START_LOCAL_NOALIGN(name) \ #define SYM_FUNC_START_LOCAL_NOALIGN(name) \
SYM_START(name, SYM_L_LOCAL, SYM_A_NONE) \ SYM_START(name, SYM_L_LOCAL, SYM_A_NONE) \
BTI_C bti c ;
#define SYM_FUNC_START_WEAK(name) \ #define SYM_FUNC_START_WEAK(name) \
SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN) \ SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN) \
BTI_C bti c ;
#define SYM_FUNC_START_WEAK_NOALIGN(name) \ #define SYM_FUNC_START_WEAK_NOALIGN(name) \
SYM_START(name, SYM_L_WEAK, SYM_A_NONE) \ SYM_START(name, SYM_L_WEAK, SYM_A_NONE) \
BTI_C bti c ;
#endif
/* /*
* Annotate a function as position independent, i.e., safe to be called before * Annotate a function as position independent, i.e., safe to be called before
......
...@@ -84,10 +84,12 @@ static inline void __dc_gzva(u64 p) ...@@ -84,10 +84,12 @@ static inline void __dc_gzva(u64 p)
static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag, static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
bool init) bool init)
{ {
u64 curr, mask, dczid_bs, end1, end2, end3; u64 curr, mask, dczid, dczid_bs, dczid_dzp, end1, end2, end3;
/* Read DC G(Z)VA block size from the system register. */ /* Read DC G(Z)VA block size from the system register. */
dczid_bs = 4ul << (read_cpuid(DCZID_EL0) & 0xf); dczid = read_cpuid(DCZID_EL0);
dczid_bs = 4ul << (dczid & 0xf);
dczid_dzp = (dczid >> 4) & 1;
curr = (u64)__tag_set(addr, tag); curr = (u64)__tag_set(addr, tag);
mask = dczid_bs - 1; mask = dczid_bs - 1;
...@@ -106,7 +108,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag, ...@@ -106,7 +108,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
*/ */
#define SET_MEMTAG_RANGE(stg_post, dc_gva) \ #define SET_MEMTAG_RANGE(stg_post, dc_gva) \
do { \ do { \
if (size >= 2 * dczid_bs) { \ if (!dczid_dzp && size >= 2 * dczid_bs) {\
do { \ do { \
curr = stg_post(curr); \ curr = stg_post(curr); \
} while (curr < end1); \ } while (curr < end1); \
......
...@@ -47,6 +47,10 @@ struct stack_info { ...@@ -47,6 +47,10 @@ struct stack_info {
* @prev_type: The type of stack this frame record was on, or a synthetic * @prev_type: The type of stack this frame record was on, or a synthetic
* value of STACK_TYPE_UNKNOWN. This is used to detect a * value of STACK_TYPE_UNKNOWN. This is used to detect a
* transition from one stack to another. * transition from one stack to another.
*
* @kr_cur: When KRETPROBES is selected, holds the kretprobe instance
* associated with the most recently encountered replacement lr
* value.
*/ */
struct stackframe { struct stackframe {
unsigned long fp; unsigned long fp;
...@@ -59,9 +63,6 @@ struct stackframe { ...@@ -59,9 +63,6 @@ struct stackframe {
#endif #endif
}; };
extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
bool (*fn)(void *, unsigned long), void *data);
extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk, extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl); const char *loglvl);
...@@ -146,7 +147,4 @@ static inline bool on_accessible_stack(const struct task_struct *tsk, ...@@ -146,7 +147,4 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
return false; return false;
} }
void start_backtrace(struct stackframe *frame, unsigned long fp,
unsigned long pc);
#endif /* __ASM_STACKTRACE_H */ #endif /* __ASM_STACKTRACE_H */
...@@ -182,6 +182,7 @@ ...@@ -182,6 +182,7 @@
#define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0) #define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0)
#define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1) #define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1)
#define SYS_ID_AA64ISAR2_EL1 sys_reg(3, 0, 0, 6, 2)
#define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0) #define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0)
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1) #define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
...@@ -771,6 +772,20 @@ ...@@ -771,6 +772,20 @@
#define ID_AA64ISAR1_GPI_NI 0x0 #define ID_AA64ISAR1_GPI_NI 0x0
#define ID_AA64ISAR1_GPI_IMP_DEF 0x1 #define ID_AA64ISAR1_GPI_IMP_DEF 0x1
/* id_aa64isar2 */
#define ID_AA64ISAR2_RPRES_SHIFT 4
#define ID_AA64ISAR2_WFXT_SHIFT 0
#define ID_AA64ISAR2_RPRES_8BIT 0x0
#define ID_AA64ISAR2_RPRES_12BIT 0x1
/*
* Value 0x1 has been removed from the architecture, and is
* reserved, but has not yet been removed from the ARM ARM
* as of ARM DDI 0487G.b.
*/
#define ID_AA64ISAR2_WFXT_NI 0x0
#define ID_AA64ISAR2_WFXT_SUPPORTED 0x2
/* id_aa64pfr0 */ /* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60 #define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56 #define ID_AA64PFR0_CSV2_SHIFT 56
...@@ -889,6 +904,7 @@ ...@@ -889,6 +904,7 @@
#endif #endif
/* id_aa64mmfr1 */ /* id_aa64mmfr1 */
#define ID_AA64MMFR1_AFP_SHIFT 44
#define ID_AA64MMFR1_ETS_SHIFT 36 #define ID_AA64MMFR1_ETS_SHIFT 36
#define ID_AA64MMFR1_TWED_SHIFT 32 #define ID_AA64MMFR1_TWED_SHIFT 32
#define ID_AA64MMFR1_XNX_SHIFT 28 #define ID_AA64MMFR1_XNX_SHIFT 28
......
...@@ -76,5 +76,7 @@ ...@@ -76,5 +76,7 @@
#define HWCAP2_BTI (1 << 17) #define HWCAP2_BTI (1 << 17)
#define HWCAP2_MTE (1 << 18) #define HWCAP2_MTE (1 << 18)
#define HWCAP2_ECV (1 << 19) #define HWCAP2_ECV (1 << 19)
#define HWCAP2_AFP (1 << 20)
#define HWCAP2_RPRES (1 << 21)
#endif /* _UAPI__ASM_HWCAP_H */ #endif /* _UAPI__ASM_HWCAP_H */
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/irq_work.h> #include <linux/irq_work.h>
#include <linux/memblock.h> #include <linux/memblock.h>
#include <linux/of_fdt.h> #include <linux/of_fdt.h>
#include <linux/libfdt.h>
#include <linux/smp.h> #include <linux/smp.h>
#include <linux/serial_core.h> #include <linux/serial_core.h>
#include <linux/pgtable.h> #include <linux/pgtable.h>
...@@ -62,29 +63,22 @@ static int __init parse_acpi(char *arg) ...@@ -62,29 +63,22 @@ static int __init parse_acpi(char *arg)
} }
early_param("acpi", parse_acpi); early_param("acpi", parse_acpi);
static int __init dt_scan_depth1_nodes(unsigned long node, static bool __init dt_is_stub(void)
const char *uname, int depth,
void *data)
{ {
/* int node;
* Ignore anything not directly under the root node; we'll
* catch its parent instead.
*/
if (depth != 1)
return 0;
if (strcmp(uname, "chosen") == 0) fdt_for_each_subnode(node, initial_boot_params, 0) {
return 0; const char *name = fdt_get_name(initial_boot_params, node, NULL);
if (strcmp(name, "chosen") == 0)
continue;
if (strcmp(name, "hypervisor") == 0 &&
of_flat_dt_is_compatible(node, "xen,xen"))
continue;
if (strcmp(uname, "hypervisor") == 0 && return false;
of_flat_dt_is_compatible(node, "xen,xen")) }
return 0;
/* return true;
* This node at depth 1 is neither a chosen node nor a xen node,
* which we do not expect.
*/
return 1;
} }
/* /*
...@@ -205,8 +199,7 @@ void __init acpi_boot_table_init(void) ...@@ -205,8 +199,7 @@ void __init acpi_boot_table_init(void)
* and ACPI has not been [force] enabled (acpi=on|force) * and ACPI has not been [force] enabled (acpi=on|force)
*/ */
if (param_acpi_off || if (param_acpi_off ||
(!param_acpi_on && !param_acpi_force && (!param_acpi_on && !param_acpi_force && !dt_is_stub()))
of_scan_flat_dt(dt_scan_depth1_nodes, NULL)))
goto done; goto done;
/* /*
......
...@@ -225,6 +225,11 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = { ...@@ -225,6 +225,11 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
ARM64_FTR_END, ARM64_FTR_END,
}; };
static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
...@@ -325,6 +330,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = { ...@@ -325,6 +330,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
}; };
static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = { static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
...@@ -637,6 +643,7 @@ static const struct __ftr_reg_entry { ...@@ -637,6 +643,7 @@ static const struct __ftr_reg_entry {
ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0), ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1, ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1,
&id_aa64isar1_override), &id_aa64isar1_override),
ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2),
/* Op1 = 0, CRn = 0, CRm = 7 */ /* Op1 = 0, CRn = 0, CRm = 7 */
ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0), ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
...@@ -933,6 +940,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) ...@@ -933,6 +940,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1); init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0); init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1); init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1, info->reg_id_aa64isar2);
init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0); init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1); init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2); init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
...@@ -1151,6 +1159,8 @@ void update_cpu_features(int cpu, ...@@ -1151,6 +1159,8 @@ void update_cpu_features(int cpu,
info->reg_id_aa64isar0, boot->reg_id_aa64isar0); info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu, taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
info->reg_id_aa64isar1, boot->reg_id_aa64isar1); info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu,
info->reg_id_aa64isar2, boot->reg_id_aa64isar2);
/* /*
* Differing PARange support is fine as long as all peripherals and * Differing PARange support is fine as long as all peripherals and
...@@ -1272,6 +1282,7 @@ u64 __read_sysreg_by_encoding(u32 sys_id) ...@@ -1272,6 +1282,7 @@ u64 __read_sysreg_by_encoding(u32 sys_id)
read_sysreg_case(SYS_ID_AA64MMFR2_EL1); read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
read_sysreg_case(SYS_ID_AA64ISAR0_EL1); read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
read_sysreg_case(SYS_ID_AA64ISAR1_EL1); read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
read_sysreg_case(SYS_ID_AA64ISAR2_EL1);
read_sysreg_case(SYS_CNTFRQ_EL0); read_sysreg_case(SYS_CNTFRQ_EL0);
read_sysreg_case(SYS_CTR_EL0); read_sysreg_case(SYS_CTR_EL0);
...@@ -2476,6 +2487,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { ...@@ -2476,6 +2487,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE), HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
#endif /* CONFIG_ARM64_MTE */ #endif /* CONFIG_ARM64_MTE */
HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV), HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV),
HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP),
HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_RPRES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES),
{}, {},
}; };
......
...@@ -95,6 +95,8 @@ static const char *const hwcap_str[] = { ...@@ -95,6 +95,8 @@ static const char *const hwcap_str[] = {
[KERNEL_HWCAP_BTI] = "bti", [KERNEL_HWCAP_BTI] = "bti",
[KERNEL_HWCAP_MTE] = "mte", [KERNEL_HWCAP_MTE] = "mte",
[KERNEL_HWCAP_ECV] = "ecv", [KERNEL_HWCAP_ECV] = "ecv",
[KERNEL_HWCAP_AFP] = "afp",
[KERNEL_HWCAP_RPRES] = "rpres",
}; };
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
...@@ -391,6 +393,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) ...@@ -391,6 +393,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1); info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1); info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1);
info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1); info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1);
info->reg_id_aa64isar2 = read_cpuid(ID_AA64ISAR2_EL1);
info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1); info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1); info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1); info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1);
......
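The new "afp" and "rpres" strings above are the /proc/cpuinfo side of the story; the same capabilities are also reported to userspace through the ELF auxiliary vector. A minimal detection sketch, assuming the corresponding HWCAP2_AFP/HWCAP2_RPRES bit definitions in <asm/hwcap.h> (not visible in the hunks shown here):

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>   /* assumed: provides HWCAP2_AFP and HWCAP2_RPRES */

int main(void)
{
        unsigned long hwcap2 = getauxval(AT_HWCAP2);

        printf("afp:   %s\n", (hwcap2 & HWCAP2_AFP)   ? "yes" : "no");
        printf("rpres: %s\n", (hwcap2 & HWCAP2_RPRES) ? "yes" : "no");
        return 0;
}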
...@@ -77,17 +77,13 @@ ...@@ -77,17 +77,13 @@
.endm .endm
SYM_CODE_START(ftrace_regs_caller) SYM_CODE_START(ftrace_regs_caller)
#ifdef BTI_C bti c
BTI_C
#endif
ftrace_regs_entry 1 ftrace_regs_entry 1
b ftrace_common b ftrace_common
SYM_CODE_END(ftrace_regs_caller) SYM_CODE_END(ftrace_regs_caller)
SYM_CODE_START(ftrace_caller) SYM_CODE_START(ftrace_caller)
#ifdef BTI_C bti c
BTI_C
#endif
ftrace_regs_entry 0 ftrace_regs_entry 0
b ftrace_common b ftrace_common
SYM_CODE_END(ftrace_caller) SYM_CODE_END(ftrace_caller)
......
...@@ -966,8 +966,10 @@ SYM_CODE_START(__sdei_asm_handler) ...@@ -966,8 +966,10 @@ SYM_CODE_START(__sdei_asm_handler)
mov sp, x1 mov sp, x1
mov x1, x0 // address to complete_and_resume mov x1, x0 // address to complete_and_resume
/* x0 = (x0 <= 1) ? EVENT_COMPLETE:EVENT_COMPLETE_AND_RESUME */ /* x0 = (x0 <= SDEI_EV_FAILED) ?
 * EVENT_COMPLETE:EVENT_COMPLETE_AND_RESUME
 */
cmp x0, #1 cmp x0, #SDEI_EV_FAILED
mov_q x2, SDEI_1_0_FN_SDEI_EVENT_COMPLETE mov_q x2, SDEI_1_0_FN_SDEI_EVENT_COMPLETE
mov_q x3, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME mov_q x3, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME
csel x0, x2, x3, ls csel x0, x2, x3, ls
......
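In C terms the updated selection reads roughly as follows (a sketch of the csel logic only; SDEI_EV_HANDLED is 0 and SDEI_EV_FAILED is 1, anything larger is treated as a resume address):

/* ret is the handler's return value, held in x0 on entry */
fn = (ret <= SDEI_EV_FAILED) ? SDEI_1_0_FN_SDEI_EVENT_COMPLETE
                             : SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME;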
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <linux/compiler.h> #include <linux/compiler.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/cpu_pm.h> #include <linux/cpu_pm.h>
#include <linux/ctype.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/linkage.h> #include <linux/linkage.h>
#include <linux/irqflags.h> #include <linux/irqflags.h>
...@@ -406,12 +407,13 @@ static unsigned int find_supported_vector_length(enum vec_type type, ...@@ -406,12 +407,13 @@ static unsigned int find_supported_vector_length(enum vec_type type,
#if defined(CONFIG_ARM64_SVE) && defined(CONFIG_SYSCTL) #if defined(CONFIG_ARM64_SVE) && defined(CONFIG_SYSCTL)
static int sve_proc_do_default_vl(struct ctl_table *table, int write, static int vec_proc_do_default_vl(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos) void *buffer, size_t *lenp, loff_t *ppos)
{ {
struct vl_info *info = &vl_info[ARM64_VEC_SVE]; struct vl_info *info = table->extra1;
enum vec_type type = info->type;
int ret; int ret;
int vl = get_sve_default_vl(); int vl = get_default_vl(type);
struct ctl_table tmp_table = { struct ctl_table tmp_table = {
.data = &vl, .data = &vl,
.maxlen = sizeof(vl), .maxlen = sizeof(vl),
...@@ -428,7 +430,7 @@ static int sve_proc_do_default_vl(struct ctl_table *table, int write, ...@@ -428,7 +430,7 @@ static int sve_proc_do_default_vl(struct ctl_table *table, int write,
if (!sve_vl_valid(vl)) if (!sve_vl_valid(vl))
return -EINVAL; return -EINVAL;
set_sve_default_vl(find_supported_vector_length(ARM64_VEC_SVE, vl)); set_default_vl(type, find_supported_vector_length(type, vl));
return 0; return 0;
} }
...@@ -436,7 +438,8 @@ static struct ctl_table sve_default_vl_table[] = { ...@@ -436,7 +438,8 @@ static struct ctl_table sve_default_vl_table[] = {
{ {
.procname = "sve_default_vector_length", .procname = "sve_default_vector_length",
.mode = 0644, .mode = 0644,
.proc_handler = sve_proc_do_default_vl, .proc_handler = vec_proc_do_default_vl,
.extra1 = &vl_info[ARM64_VEC_SVE],
}, },
{ } { }
}; };
...@@ -629,7 +632,7 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task) ...@@ -629,7 +632,7 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
__fpsimd_to_sve(sst, fst, vq); __fpsimd_to_sve(sst, fst, vq);
} }
int sve_set_vector_length(struct task_struct *task, int vec_set_vector_length(struct task_struct *task, enum vec_type type,
unsigned long vl, unsigned long flags) unsigned long vl, unsigned long flags)
{ {
if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT | if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
...@@ -640,33 +643,35 @@ int sve_set_vector_length(struct task_struct *task, ...@@ -640,33 +643,35 @@ int sve_set_vector_length(struct task_struct *task,
return -EINVAL; return -EINVAL;
/* /*
* Clamp to the maximum vector length that VL-agnostic SVE code can * Clamp to the maximum vector length that VL-agnostic code
* work with. A flag may be assigned in the future to allow setting * can work with. A flag may be assigned in the future to
* of larger vector lengths without confusing older software. * allow setting of larger vector lengths without confusing
* older software.
*/ */
if (vl > SVE_VL_ARCH_MAX) if (vl > VL_ARCH_MAX)
vl = SVE_VL_ARCH_MAX; vl = VL_ARCH_MAX;
vl = find_supported_vector_length(ARM64_VEC_SVE, vl); vl = find_supported_vector_length(type, vl);
if (flags & (PR_SVE_VL_INHERIT | if (flags & (PR_SVE_VL_INHERIT |
PR_SVE_SET_VL_ONEXEC)) PR_SVE_SET_VL_ONEXEC))
task_set_sve_vl_onexec(task, vl); task_set_vl_onexec(task, type, vl);
else else
/* Reset VL to system default on next exec: */ /* Reset VL to system default on next exec: */
task_set_sve_vl_onexec(task, 0); task_set_vl_onexec(task, type, 0);
/* Only actually set the VL if not deferred: */ /* Only actually set the VL if not deferred: */
if (flags & PR_SVE_SET_VL_ONEXEC) if (flags & PR_SVE_SET_VL_ONEXEC)
goto out; goto out;
if (vl == task_get_sve_vl(task)) if (vl == task_get_vl(task, type))
goto out; goto out;
/* /*
* To ensure the FPSIMD bits of the SVE vector registers are preserved, * To ensure the FPSIMD bits of the SVE vector registers are preserved,
* write any live register state back to task_struct, and convert to a * write any live register state back to task_struct, and convert to a
* non-SVE thread. * regular FPSIMD thread. Since the vector length can only be changed
* with a syscall we can't be in streaming mode while reconfiguring.
*/ */
if (task == current) { if (task == current) {
get_cpu_fpsimd_context(); get_cpu_fpsimd_context();
...@@ -687,10 +692,10 @@ int sve_set_vector_length(struct task_struct *task, ...@@ -687,10 +692,10 @@ int sve_set_vector_length(struct task_struct *task,
*/ */
sve_free(task); sve_free(task);
task_set_sve_vl(task, vl); task_set_vl(task, type, vl);
out: out:
update_tsk_thread_flag(task, TIF_SVE_VL_INHERIT, update_tsk_thread_flag(task, vec_vl_inherit_flag(type),
flags & PR_SVE_VL_INHERIT); flags & PR_SVE_VL_INHERIT);
return 0; return 0;
...@@ -698,20 +703,21 @@ int sve_set_vector_length(struct task_struct *task, ...@@ -698,20 +703,21 @@ int sve_set_vector_length(struct task_struct *task,
/* /*
* Encode the current vector length and flags for return. * Encode the current vector length and flags for return.
* This is only required for prctl(): ptrace has separate fields * This is only required for prctl(): ptrace has separate fields.
* SVE and SME use the same bits for _ONEXEC and _INHERIT.
* *
* flags are as for sve_set_vector_length(). * flags are as for vec_set_vector_length().
*/ */
static int sve_prctl_status(unsigned long flags) static int vec_prctl_status(enum vec_type type, unsigned long flags)
{ {
int ret; int ret;
if (flags & PR_SVE_SET_VL_ONEXEC) if (flags & PR_SVE_SET_VL_ONEXEC)
ret = task_get_sve_vl_onexec(current); ret = task_get_vl_onexec(current, type);
else else
ret = task_get_sve_vl(current); ret = task_get_vl(current, type);
if (test_thread_flag(TIF_SVE_VL_INHERIT)) if (test_thread_flag(vec_vl_inherit_flag(type)))
ret |= PR_SVE_VL_INHERIT; ret |= PR_SVE_VL_INHERIT;
return ret; return ret;
...@@ -729,11 +735,11 @@ int sve_set_current_vl(unsigned long arg) ...@@ -729,11 +735,11 @@ int sve_set_current_vl(unsigned long arg)
if (!system_supports_sve() || is_compat_task()) if (!system_supports_sve() || is_compat_task())
return -EINVAL; return -EINVAL;
ret = sve_set_vector_length(current, vl, flags); ret = vec_set_vector_length(current, ARM64_VEC_SVE, vl, flags);
if (ret) if (ret)
return ret; return ret;
return sve_prctl_status(flags); return vec_prctl_status(ARM64_VEC_SVE, flags);
} }
/* PR_SVE_GET_VL */ /* PR_SVE_GET_VL */
...@@ -742,7 +748,7 @@ int sve_get_current_vl(void) ...@@ -742,7 +748,7 @@ int sve_get_current_vl(void)
if (!system_supports_sve() || is_compat_task()) if (!system_supports_sve() || is_compat_task())
return -EINVAL; return -EINVAL;
return sve_prctl_status(0); return vec_prctl_status(ARM64_VEC_SVE, 0);
} }
static void vec_probe_vqs(struct vl_info *info, static void vec_probe_vqs(struct vl_info *info,
...@@ -1107,7 +1113,7 @@ static void fpsimd_flush_thread_vl(enum vec_type type) ...@@ -1107,7 +1113,7 @@ static void fpsimd_flush_thread_vl(enum vec_type type)
vl = get_default_vl(type); vl = get_default_vl(type);
if (WARN_ON(!sve_vl_valid(vl))) if (WARN_ON(!sve_vl_valid(vl)))
vl = SVE_VL_MIN; vl = vl_info[type].min_vl;
supported_vl = find_supported_vector_length(type, vl); supported_vl = find_supported_vector_length(type, vl);
if (WARN_ON(supported_vl != vl)) if (WARN_ON(supported_vl != vl))
...@@ -1213,7 +1219,8 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, ...@@ -1213,7 +1219,8 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state,
/* /*
* Load the userland FPSIMD state of 'current' from memory, but only if the * Load the userland FPSIMD state of 'current' from memory, but only if the
* FPSIMD state already held in the registers is /not/ the most recent FPSIMD * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
* state of 'current' * state of 'current'. This is called when we are preparing to return to
* userspace to ensure that userspace sees a good register state.
*/ */
void fpsimd_restore_current_state(void) void fpsimd_restore_current_state(void)
{ {
...@@ -1244,7 +1251,9 @@ void fpsimd_restore_current_state(void) ...@@ -1244,7 +1251,9 @@ void fpsimd_restore_current_state(void)
/* /*
* Load an updated userland FPSIMD state for 'current' from memory and set the * Load an updated userland FPSIMD state for 'current' from memory and set the
* flag that indicates that the FPSIMD register contents are the most recent * flag that indicates that the FPSIMD register contents are the most recent
* FPSIMD state of 'current' * FPSIMD state of 'current'. This is used by the signal code to restore the
* register state when returning from a signal handler in FPSIMD only cases,
* any SVE context will be discarded.
*/ */
void fpsimd_update_current_state(struct user_fpsimd_state const *state) void fpsimd_update_current_state(struct user_fpsimd_state const *state)
{ {
......
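Note that sve_set_current_vl()/sve_get_current_vl() still pass ARM64_VEC_SVE into the generic vec_* helpers, so the userspace-visible prctl() ABI is unchanged by this refactoring. A minimal usage sketch of that ABI, assuming a libc that exposes PR_SVE_SET_VL/PR_SVE_GET_VL through <sys/prctl.h>:

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
        /* Request a 256-bit (32-byte) vector length; the kernel rounds
         * down to the nearest supported VL. Flags such as
         * PR_SVE_VL_INHERIT or PR_SVE_SET_VL_ONEXEC can be OR-ed in.
         */
        if (prctl(PR_SVE_SET_VL, 32) < 0) {
                perror("PR_SVE_SET_VL");
                return 1;
        }

        long st = prctl(PR_SVE_GET_VL);
        printf("VL: %ld bytes, inherit: %s\n", st & PR_SVE_VL_LEN_MASK,
               (st & PR_SVE_VL_INHERIT) ? "yes" : "no");
        return 0;
}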
...@@ -7,10 +7,6 @@ ...@@ -7,10 +7,6 @@
* Ubuntu project, hibernation support for mach-dove * Ubuntu project, hibernation support for mach-dove
* Copyright (C) 2010 Nokia Corporation (Hiroshi Doyu) * Copyright (C) 2010 Nokia Corporation (Hiroshi Doyu)
* Copyright (C) 2010 Texas Instruments, Inc. (Teerth Reddy et al.) * Copyright (C) 2010 Texas Instruments, Inc. (Teerth Reddy et al.)
* https://lkml.org/lkml/2010/6/18/4
* https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html
* https://patchwork.kernel.org/patch/96442/
*
* Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl> * Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
*/ */
#define pr_fmt(x) "hibernate: " x #define pr_fmt(x) "hibernate: " x
......
...@@ -104,13 +104,15 @@ static void *kexec_page_alloc(void *arg) ...@@ -104,13 +104,15 @@ static void *kexec_page_alloc(void *arg)
{ {
struct kimage *kimage = (struct kimage *)arg; struct kimage *kimage = (struct kimage *)arg;
struct page *page = kimage_alloc_control_pages(kimage, 0); struct page *page = kimage_alloc_control_pages(kimage, 0);
void *vaddr = NULL;
if (!page) if (!page)
return NULL; return NULL;
memset(page_address(page), 0, PAGE_SIZE); vaddr = page_address(page);
memset(vaddr, 0, PAGE_SIZE);
return page_address(page); return vaddr;
} }
int machine_kexec_post_load(struct kimage *kimage) int machine_kexec_post_load(struct kimage *kimage)
......
...@@ -5,10 +5,10 @@ ...@@ -5,10 +5,10 @@
* Copyright (C) 2015 ARM Limited * Copyright (C) 2015 ARM Limited
*/ */
#include <linux/perf_event.h> #include <linux/perf_event.h>
#include <linux/stacktrace.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <asm/pointer_auth.h> #include <asm/pointer_auth.h>
#include <asm/stacktrace.h>
struct frame_tail { struct frame_tail {
struct frame_tail __user *fp; struct frame_tail __user *fp;
...@@ -132,30 +132,21 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry, ...@@ -132,30 +132,21 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
} }
} }
/*
* Gets called by walk_stackframe() for every stackframe. This will be called
* whist unwinding the stackframe and is like a subroutine return so we use
* the PC.
*/
static bool callchain_trace(void *data, unsigned long pc) static bool callchain_trace(void *data, unsigned long pc)
{ {
struct perf_callchain_entry_ctx *entry = data; struct perf_callchain_entry_ctx *entry = data;
perf_callchain_store(entry, pc); return perf_callchain_store(entry, pc) == 0;
return true;
} }
void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs) struct pt_regs *regs)
{ {
struct stackframe frame;
if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) { if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
/* We don't support guest os callchain now */ /* We don't support guest os callchain now */
return; return;
} }
start_backtrace(&frame, regs->regs[29], regs->pc); arch_stack_walk(callchain_trace, entry, current, regs);
walk_stackframe(current, &frame, callchain_trace, entry);
} }
unsigned long perf_instruction_pointer(struct pt_regs *regs) unsigned long perf_instruction_pointer(struct pt_regs *regs)
......
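perf_callchain_kernel() now goes through the common arch_stack_walk() interface with a stack_trace_consume_fn callback; the callback's boolean return value is what terminates the walk early, here as soon as perf_callchain_store() reports that the entry buffer is full. The generic contract, paraphrased from <linux/stacktrace.h>:

/* typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);
 * Return true to continue unwinding, false to stop at the current frame.
 */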
...@@ -40,6 +40,7 @@ ...@@ -40,6 +40,7 @@
#include <linux/percpu.h> #include <linux/percpu.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <linux/prctl.h> #include <linux/prctl.h>
#include <linux/stacktrace.h>
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/compat.h> #include <asm/compat.h>
...@@ -439,34 +440,26 @@ static void entry_task_switch(struct task_struct *next) ...@@ -439,34 +440,26 @@ static void entry_task_switch(struct task_struct *next)
/* /*
* ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT. * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
* Assuming the virtual counter is enabled at the beginning of times: * Ensure access is disabled when switching to a 32bit task, ensure
* * access is enabled when switching to a 64bit task.
* - disable access when switching from a 64bit task to a 32bit task
* - enable access when switching from a 32bit task to a 64bit task
*/ */
static void erratum_1418040_thread_switch(struct task_struct *prev, static void erratum_1418040_thread_switch(struct task_struct *next)
struct task_struct *next)
{ {
bool prev32, next32; if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) ||
u64 val; !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040))
return;
prev32 = is_compat_thread(task_thread_info(prev));
next32 = is_compat_thread(task_thread_info(next));
if (prev32 == next32 || !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
return; return;
val = read_sysreg(cntkctl_el1); if (is_compat_thread(task_thread_info(next)))
sysreg_clear_set(cntkctl_el1, ARCH_TIMER_USR_VCT_ACCESS_EN, 0);
if (!next32)
val |= ARCH_TIMER_USR_VCT_ACCESS_EN;
else else
val &= ~ARCH_TIMER_USR_VCT_ACCESS_EN; sysreg_clear_set(cntkctl_el1, 0, ARCH_TIMER_USR_VCT_ACCESS_EN);
}
write_sysreg(val, cntkctl_el1); static void erratum_1418040_new_exec(void)
{
preempt_disable();
erratum_1418040_thread_switch(current);
preempt_enable();
} }
/* /*
...@@ -490,7 +483,8 @@ void update_sctlr_el1(u64 sctlr) ...@@ -490,7 +483,8 @@ void update_sctlr_el1(u64 sctlr)
/* /*
* Thread switching. * Thread switching.
*/ */
__notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, __notrace_funcgraph __sched
struct task_struct *__switch_to(struct task_struct *prev,
struct task_struct *next) struct task_struct *next)
{ {
struct task_struct *last; struct task_struct *last;
...@@ -501,7 +495,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, ...@@ -501,7 +495,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
contextidr_thread_switch(next); contextidr_thread_switch(next);
entry_task_switch(next); entry_task_switch(next);
ssbs_thread_switch(next); ssbs_thread_switch(next);
erratum_1418040_thread_switch(prev, next); erratum_1418040_thread_switch(next);
ptrauth_thread_switch_user(next); ptrauth_thread_switch_user(next);
/* /*
...@@ -528,30 +522,37 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, ...@@ -528,30 +522,37 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
return last; return last;
} }
struct wchan_info {
unsigned long pc;
int count;
};
static bool get_wchan_cb(void *arg, unsigned long pc)
{
struct wchan_info *wchan_info = arg;
if (!in_sched_functions(pc)) {
wchan_info->pc = pc;
return false;
}
return wchan_info->count++ < 16;
}
unsigned long __get_wchan(struct task_struct *p) unsigned long __get_wchan(struct task_struct *p)
{ {
struct stackframe frame; struct wchan_info wchan_info = {
unsigned long stack_page, ret = 0; .pc = 0,
int count = 0; .count = 0,
};
stack_page = (unsigned long)try_get_task_stack(p); if (!try_get_task_stack(p))
if (!stack_page)
return 0; return 0;
start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p)); arch_stack_walk(get_wchan_cb, &wchan_info, p, NULL);
do {
if (unwind_frame(p, &frame))
goto out;
if (!in_sched_functions(frame.pc)) {
ret = frame.pc;
goto out;
}
} while (count++ < 16);
out:
put_task_stack(p); put_task_stack(p);
return ret;
return wchan_info.pc;
} }
unsigned long arch_align_stack(unsigned long sp) unsigned long arch_align_stack(unsigned long sp)
...@@ -611,6 +612,7 @@ void arch_setup_new_exec(void) ...@@ -611,6 +612,7 @@ void arch_setup_new_exec(void)
current->mm->context.flags = mmflags; current->mm->context.flags = mmflags;
ptrauth_thread_init_user(); ptrauth_thread_init_user();
mte_thread_init_user(); mte_thread_init_user();
erratum_1418040_new_exec();
if (task_spec_ssb_noexec(current)) { if (task_spec_ssb_noexec(current)) {
arch_prctl_spec_ctrl_set(current, PR_SPEC_STORE_BYPASS, arch_prctl_spec_ctrl_set(current, PR_SPEC_STORE_BYPASS,
......
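The reworked erratum 1418040 handling leans on sysreg_clear_set() being a no-op when the register already holds the desired value, which keeps __switch_to() cheap, and erratum_1418040_new_exec() reuses the same helper for tasks that change between 32-bit and 64-bit at exec time. A behavioural sketch of sysreg_clear_set(cntkctl_el1, clear, set), not the actual macro:

u64 old = read_sysreg(cntkctl_el1);
u64 new = (old & ~clear) | set;

if (new != old)        /* skip the register write when nothing changes */
        write_sysreg(new, cntkctl_el1);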
...@@ -812,9 +812,9 @@ static int sve_set(struct task_struct *target, ...@@ -812,9 +812,9 @@ static int sve_set(struct task_struct *target,
/* /*
* Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are consumed by * Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are consumed by
* sve_set_vector_length(), which will also validate them for us: * vec_set_vector_length(), which will also validate them for us:
*/ */
ret = sve_set_vector_length(target, header.vl, ret = vec_set_vector_length(target, ARM64_VEC_SVE, header.vl,
((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16); ((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16);
if (ret) if (ret)
goto out; goto out;
......
...@@ -9,9 +9,9 @@ ...@@ -9,9 +9,9 @@
#include <linux/export.h> #include <linux/export.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <linux/stacktrace.h>
#include <asm/stack_pointer.h> #include <asm/stack_pointer.h>
#include <asm/stacktrace.h>
struct return_address_data { struct return_address_data {
unsigned int level; unsigned int level;
...@@ -35,15 +35,11 @@ NOKPROBE_SYMBOL(save_return_addr); ...@@ -35,15 +35,11 @@ NOKPROBE_SYMBOL(save_return_addr);
void *return_address(unsigned int level) void *return_address(unsigned int level)
{ {
struct return_address_data data; struct return_address_data data;
struct stackframe frame;
data.level = level + 2; data.level = level + 2;
data.addr = NULL; data.addr = NULL;
start_backtrace(&frame, arch_stack_walk(save_return_addr, &data, current, NULL);
(unsigned long)__builtin_frame_address(0),
(unsigned long)return_address);
walk_stackframe(current, &frame, save_return_addr, &data);
if (!data.level) if (!data.level)
return data.addr; return data.addr;
......
...@@ -189,11 +189,16 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys) ...@@ -189,11 +189,16 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)
if (!dt_virt || !early_init_dt_scan(dt_virt)) { if (!dt_virt || !early_init_dt_scan(dt_virt)) {
pr_crit("\n" pr_crit("\n"
"Error: invalid device tree blob at physical address %pa (virtual address 0x%p)\n" "Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"
"The dtb must be 8-byte aligned and must not exceed 2 MB in size\n" "The dtb must be 8-byte aligned and must not exceed 2 MB in size\n"
"\nPlease check your bootloader.", "\nPlease check your bootloader.",
&dt_phys, dt_virt); &dt_phys, dt_virt);
/*
* Note that in this _really_ early stage we cannot even BUG()
* or oops, so the least terrible thing to do is cpu_relax(),
* or else we could end-up printing non-initialized data, etc.
*/
while (true) while (true)
cpu_relax(); cpu_relax();
} }
...@@ -232,12 +237,14 @@ static void __init request_standard_resources(void) ...@@ -232,12 +237,14 @@ static void __init request_standard_resources(void)
if (memblock_is_nomap(region)) { if (memblock_is_nomap(region)) {
res->name = "reserved"; res->name = "reserved";
res->flags = IORESOURCE_MEM; res->flags = IORESOURCE_MEM;
res->start = __pfn_to_phys(memblock_region_reserved_base_pfn(region));
res->end = __pfn_to_phys(memblock_region_reserved_end_pfn(region)) - 1;
} else { } else {
res->name = "System RAM"; res->name = "System RAM";
res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
} }
res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
request_resource(&iomem_resource, res); request_resource(&iomem_resource, res);
......
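The request_standard_resources() hunk works because the two families of memblock helpers round in opposite directions: the *_memory_* accessors shrink a region to whole pages, while the *_reserved_* accessors expand it, which is what nomap regions need so their edges aren't clipped. Roughly, based on the generic helpers in <linux/memblock.h> (worth double-checking there):

/* memblock_region_memory_base_pfn(r)   ~ PFN_UP(r->base)
 * memblock_region_memory_end_pfn(r)    ~ PFN_DOWN(r->base + r->size)
 * memblock_region_reserved_base_pfn(r) ~ PFN_DOWN(r->base)
 * memblock_region_reserved_end_pfn(r)  ~ PFN_UP(r->base + r->size)
 */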
...@@ -33,8 +33,8 @@ ...@@ -33,8 +33,8 @@
*/ */
void start_backtrace(struct stackframe *frame, unsigned long fp, static void start_backtrace(struct stackframe *frame, unsigned long fp,
unsigned long pc) unsigned long pc)
{ {
frame->fp = fp; frame->fp = fp;
frame->pc = pc; frame->pc = pc;
...@@ -63,7 +63,8 @@ void start_backtrace(struct stackframe *frame, unsigned long fp, ...@@ -63,7 +63,8 @@ void start_backtrace(struct stackframe *frame, unsigned long fp,
* records (e.g. a cycle), determined based on the location and fp value of A * records (e.g. a cycle), determined based on the location and fp value of A
* and the location (but not the fp value) of B. * and the location (but not the fp value) of B.
*/ */
int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) static int notrace unwind_frame(struct task_struct *tsk,
struct stackframe *frame)
{ {
unsigned long fp = frame->fp; unsigned long fp = frame->fp;
struct stack_info info; struct stack_info info;
...@@ -141,8 +142,9 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) ...@@ -141,8 +142,9 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
} }
NOKPROBE_SYMBOL(unwind_frame); NOKPROBE_SYMBOL(unwind_frame);
void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame, static void notrace walk_stackframe(struct task_struct *tsk,
bool (*fn)(void *, unsigned long), void *data) struct stackframe *frame,
bool (*fn)(void *, unsigned long), void *data)
{ {
while (1) { while (1) {
int ret; int ret;
...@@ -156,24 +158,20 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame, ...@@ -156,24 +158,20 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
} }
NOKPROBE_SYMBOL(walk_stackframe); NOKPROBE_SYMBOL(walk_stackframe);
static void dump_backtrace_entry(unsigned long where, const char *loglvl) static bool dump_backtrace_entry(void *arg, unsigned long where)
{ {
char *loglvl = arg;
printk("%s %pSb\n", loglvl, (void *)where); printk("%s %pSb\n", loglvl, (void *)where);
return true;
} }
void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk, void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl) const char *loglvl)
{ {
struct stackframe frame;
int skip = 0;
pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk); pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
if (regs) { if (regs && user_mode(regs))
if (user_mode(regs)) return;
return;
skip = 1;
}
if (!tsk) if (!tsk)
tsk = current; tsk = current;
...@@ -181,36 +179,8 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk, ...@@ -181,36 +179,8 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
if (!try_get_task_stack(tsk)) if (!try_get_task_stack(tsk))
return; return;
if (tsk == current) {
start_backtrace(&frame,
(unsigned long)__builtin_frame_address(0),
(unsigned long)dump_backtrace);
} else {
/*
* task blocked in __switch_to
*/
start_backtrace(&frame,
thread_saved_fp(tsk),
thread_saved_pc(tsk));
}
printk("%sCall trace:\n", loglvl); printk("%sCall trace:\n", loglvl);
do { arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
/* skip until specified stack frame */
if (!skip) {
dump_backtrace_entry(frame.pc, loglvl);
} else if (frame.fp == regs->regs[29]) {
skip = 0;
/*
* Mostly, this is the case where this function is
* called in panic/abort. As exception handler's
* stack frame does not contain the corresponding pc
* at which an exception has taken place, use regs->pc
* instead.
*/
dump_backtrace_entry(regs->pc, loglvl);
}
} while (!unwind_frame(tsk, &frame));
put_task_stack(tsk); put_task_stack(tsk);
} }
...@@ -221,8 +191,6 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl) ...@@ -221,8 +191,6 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
barrier(); barrier();
} }
#ifdef CONFIG_STACKTRACE
noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry, noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
void *cookie, struct task_struct *task, void *cookie, struct task_struct *task,
struct pt_regs *regs) struct pt_regs *regs)
...@@ -241,5 +209,3 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry, ...@@ -241,5 +209,3 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
walk_stackframe(task, &frame, consume_entry, cookie); walk_stackframe(task, &frame, consume_entry, cookie);
} }
#endif
...@@ -18,6 +18,7 @@ ...@@ -18,6 +18,7 @@
#include <linux/timex.h> #include <linux/timex.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/profile.h> #include <linux/profile.h>
#include <linux/stacktrace.h>
#include <linux/syscore_ops.h> #include <linux/syscore_ops.h>
#include <linux/timer.h> #include <linux/timer.h>
#include <linux/irq.h> #include <linux/irq.h>
...@@ -29,25 +30,25 @@ ...@@ -29,25 +30,25 @@
#include <clocksource/arm_arch_timer.h> #include <clocksource/arm_arch_timer.h>
#include <asm/thread_info.h> #include <asm/thread_info.h>
#include <asm/stacktrace.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
unsigned long profile_pc(struct pt_regs *regs) static bool profile_pc_cb(void *arg, unsigned long pc)
{ {
struct stackframe frame; unsigned long *prof_pc = arg;
if (!in_lock_functions(regs->pc)) if (in_lock_functions(pc))
return regs->pc; return true;
*prof_pc = pc;
return false;
}
start_backtrace(&frame, regs->regs[29], regs->pc); unsigned long profile_pc(struct pt_regs *regs)
{
unsigned long prof_pc = 0;
do { arch_stack_walk(profile_pc_cb, &prof_pc, current, regs);
int ret = unwind_frame(NULL, &frame);
if (ret < 0)
return 0;
} while (in_lock_functions(frame.pc));
return frame.pc; return prof_pc;
} }
EXPORT_SYMBOL(profile_pc); EXPORT_SYMBOL(profile_pc);
......
...@@ -32,6 +32,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO ...@@ -32,6 +32,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) \ CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) \
$(CC_FLAGS_LTO) $(CC_FLAGS_LTO)
KASAN_SANITIZE := n KASAN_SANITIZE := n
KCSAN_SANITIZE := n
UBSAN_SANITIZE := n UBSAN_SANITIZE := n
OBJECT_FILES_NON_STANDARD := y OBJECT_FILES_NON_STANDARD := y
KCOV_INSTRUMENT := n KCOV_INSTRUMENT := n
......
...@@ -140,9 +140,12 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu) ...@@ -140,9 +140,12 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
return 1; return 1;
} }
/*
* Guest access to SVE registers should be routed to this handler only
* when the system doesn't support SVE.
*/
static int handle_sve(struct kvm_vcpu *vcpu) static int handle_sve(struct kvm_vcpu *vcpu)
{ {
/* Until SVE is supported for guests: */
kvm_inject_undefined(vcpu); kvm_inject_undefined(vcpu);
return 1; return 1;
} }
......
...@@ -89,6 +89,7 @@ KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI) ...@@ -89,6 +89,7 @@ KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI)
# cause crashes. Just disable it. # cause crashes. Just disable it.
GCOV_PROFILE := n GCOV_PROFILE := n
KASAN_SANITIZE := n KASAN_SANITIZE := n
KCSAN_SANITIZE := n
UBSAN_SANITIZE := n UBSAN_SANITIZE := n
KCOV_INSTRUMENT := n KCOV_INSTRUMENT := n
......
...@@ -52,10 +52,10 @@ int kvm_arm_init_sve(void) ...@@ -52,10 +52,10 @@ int kvm_arm_init_sve(void)
* The get_sve_reg()/set_sve_reg() ioctl interface will need * The get_sve_reg()/set_sve_reg() ioctl interface will need
* to be extended with multiple register slice support in * to be extended with multiple register slice support in
* order to support vector lengths greater than * order to support vector lengths greater than
* SVE_VL_ARCH_MAX: * VL_ARCH_MAX:
*/ */
if (WARN_ON(kvm_sve_max_vl > SVE_VL_ARCH_MAX)) if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX))
kvm_sve_max_vl = SVE_VL_ARCH_MAX; kvm_sve_max_vl = VL_ARCH_MAX;
/* /*
* Don't even try to make use of vector lengths that * Don't even try to make use of vector lengths that
...@@ -103,7 +103,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) ...@@ -103,7 +103,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
* set_sve_vls(). Double-check here just to be sure: * set_sve_vls(). Double-check here just to be sure:
*/ */
if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() || if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() ||
vl > SVE_VL_ARCH_MAX)) vl > VL_ARCH_MAX))
return -EIO; return -EIO;
buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL_ACCOUNT); buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL_ACCOUNT);
......
...@@ -1525,7 +1525,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { ...@@ -1525,7 +1525,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* CRm=6 */ /* CRm=6 */
ID_SANITISED(ID_AA64ISAR0_EL1), ID_SANITISED(ID_AA64ISAR0_EL1),
ID_SANITISED(ID_AA64ISAR1_EL1), ID_SANITISED(ID_AA64ISAR1_EL1),
ID_UNALLOCATED(6,2), ID_SANITISED(ID_AA64ISAR2_EL1),
ID_UNALLOCATED(6,3), ID_UNALLOCATED(6,3),
ID_UNALLOCATED(6,4), ID_UNALLOCATED(6,4),
ID_UNALLOCATED(6,5), ID_UNALLOCATED(6,5),
......
...@@ -16,6 +16,7 @@ ...@@ -16,6 +16,7 @@
*/ */
SYM_FUNC_START_PI(clear_page) SYM_FUNC_START_PI(clear_page)
mrs x1, dczid_el0 mrs x1, dczid_el0
tbnz x1, #4, 2f /* Branch if DC ZVA is prohibited */
and w1, w1, #0xf and w1, w1, #0xf
mov x2, #4 mov x2, #4
lsl x1, x2, x1 lsl x1, x2, x1
...@@ -25,5 +26,14 @@ SYM_FUNC_START_PI(clear_page) ...@@ -25,5 +26,14 @@ SYM_FUNC_START_PI(clear_page)
tst x0, #(PAGE_SIZE - 1) tst x0, #(PAGE_SIZE - 1)
b.ne 1b b.ne 1b
ret ret
2: stnp xzr, xzr, [x0]
stnp xzr, xzr, [x0, #16]
stnp xzr, xzr, [x0, #32]
stnp xzr, xzr, [x0, #48]
add x0, x0, #64
tst x0, #(PAGE_SIZE - 1)
b.ne 2b
ret
SYM_FUNC_END_PI(clear_page) SYM_FUNC_END_PI(clear_page)
EXPORT_SYMBOL(clear_page) EXPORT_SYMBOL(clear_page)
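The new early branch keys off DCZID_EL0.DZP: when DC ZVA is prohibited, the page is cleared with plain STNP stores instead. Decoding the register in C terms, per the architected field layout rather than anything in this patch:

u64 dczid = read_sysreg(dczid_el0);
bool zva_prohibited = dczid & BIT(4);             /* DCZID_EL0.DZP */
size_t zva_block_bytes = 4UL << (dczid & 0xf);    /* BS: log2 of the block size in words */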
...@@ -38,9 +38,7 @@ ...@@ -38,9 +38,7 @@
* incremented by 256 prior to return). * incremented by 256 prior to return).
*/ */
SYM_CODE_START(__hwasan_tag_mismatch) SYM_CODE_START(__hwasan_tag_mismatch)
#ifdef BTI_C bti c
BTI_C
#endif
add x29, sp, #232 add x29, sp, #232
stp x2, x3, [sp, #8 * 2] stp x2, x3, [sp, #8 * 2]
stp x4, x5, [sp, #8 * 4] stp x4, x5, [sp, #8 * 4]
......
...@@ -43,17 +43,23 @@ SYM_FUNC_END(mte_clear_page_tags) ...@@ -43,17 +43,23 @@ SYM_FUNC_END(mte_clear_page_tags)
* x0 - address to the beginning of the page * x0 - address to the beginning of the page
*/ */
SYM_FUNC_START(mte_zero_clear_page_tags) SYM_FUNC_START(mte_zero_clear_page_tags)
and x0, x0, #(1 << MTE_TAG_SHIFT) - 1 // clear the tag
mrs x1, dczid_el0 mrs x1, dczid_el0
tbnz x1, #4, 2f // Branch if DC GZVA is prohibited
and w1, w1, #0xf and w1, w1, #0xf
mov x2, #4 mov x2, #4
lsl x1, x2, x1 lsl x1, x2, x1
and x0, x0, #(1 << MTE_TAG_SHIFT) - 1 // clear the tag
1: dc gzva, x0 1: dc gzva, x0
add x0, x0, x1 add x0, x0, x1
tst x0, #(PAGE_SIZE - 1) tst x0, #(PAGE_SIZE - 1)
b.ne 1b b.ne 1b
ret ret
2: stz2g x0, [x0], #(MTE_GRANULE_SIZE * 2)
tst x0, #(PAGE_SIZE - 1)
b.ne 2b
ret
SYM_FUNC_END(mte_zero_clear_page_tags) SYM_FUNC_END(mte_zero_clear_page_tags)
/* /*
......
...@@ -167,7 +167,7 @@ void xor_arm64_neon_5(unsigned long bytes, unsigned long *p1, ...@@ -167,7 +167,7 @@ void xor_arm64_neon_5(unsigned long bytes, unsigned long *p1,
} while (--lines > 0); } while (--lines > 0);
} }
struct xor_block_template const xor_block_inner_neon = { struct xor_block_template xor_block_inner_neon __ro_after_init = {
.name = "__inner_neon__", .name = "__inner_neon__",
.do_2 = xor_arm64_neon_2, .do_2 = xor_arm64_neon_2,
.do_3 = xor_arm64_neon_3, .do_3 = xor_arm64_neon_3,
...@@ -176,6 +176,151 @@ struct xor_block_template const xor_block_inner_neon = { ...@@ -176,6 +176,151 @@ struct xor_block_template const xor_block_inner_neon = {
}; };
EXPORT_SYMBOL(xor_block_inner_neon); EXPORT_SYMBOL(xor_block_inner_neon);
static inline uint64x2_t eor3(uint64x2_t p, uint64x2_t q, uint64x2_t r)
{
uint64x2_t res;
asm(ARM64_ASM_PREAMBLE ".arch_extension sha3\n"
"eor3 %0.16b, %1.16b, %2.16b, %3.16b"
: "=w"(res) : "w"(p), "w"(q), "w"(r));
return res;
}
static void xor_arm64_eor3_3(unsigned long bytes, unsigned long *p1,
unsigned long *p2, unsigned long *p3)
{
uint64_t *dp1 = (uint64_t *)p1;
uint64_t *dp2 = (uint64_t *)p2;
uint64_t *dp3 = (uint64_t *)p3;
register uint64x2_t v0, v1, v2, v3;
long lines = bytes / (sizeof(uint64x2_t) * 4);
do {
/* p1 ^= p2 ^ p3 */
v0 = eor3(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0),
vld1q_u64(dp3 + 0));
v1 = eor3(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2),
vld1q_u64(dp3 + 2));
v2 = eor3(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4),
vld1q_u64(dp3 + 4));
v3 = eor3(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6),
vld1q_u64(dp3 + 6));
/* store */
vst1q_u64(dp1 + 0, v0);
vst1q_u64(dp1 + 2, v1);
vst1q_u64(dp1 + 4, v2);
vst1q_u64(dp1 + 6, v3);
dp1 += 8;
dp2 += 8;
dp3 += 8;
} while (--lines > 0);
}
static void xor_arm64_eor3_4(unsigned long bytes, unsigned long *p1,
unsigned long *p2, unsigned long *p3,
unsigned long *p4)
{
uint64_t *dp1 = (uint64_t *)p1;
uint64_t *dp2 = (uint64_t *)p2;
uint64_t *dp3 = (uint64_t *)p3;
uint64_t *dp4 = (uint64_t *)p4;
register uint64x2_t v0, v1, v2, v3;
long lines = bytes / (sizeof(uint64x2_t) * 4);
do {
/* p1 ^= p2 ^ p3 */
v0 = eor3(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0),
vld1q_u64(dp3 + 0));
v1 = eor3(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2),
vld1q_u64(dp3 + 2));
v2 = eor3(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4),
vld1q_u64(dp3 + 4));
v3 = eor3(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6),
vld1q_u64(dp3 + 6));
/* p1 ^= p4 */
v0 = veorq_u64(v0, vld1q_u64(dp4 + 0));
v1 = veorq_u64(v1, vld1q_u64(dp4 + 2));
v2 = veorq_u64(v2, vld1q_u64(dp4 + 4));
v3 = veorq_u64(v3, vld1q_u64(dp4 + 6));
/* store */
vst1q_u64(dp1 + 0, v0);
vst1q_u64(dp1 + 2, v1);
vst1q_u64(dp1 + 4, v2);
vst1q_u64(dp1 + 6, v3);
dp1 += 8;
dp2 += 8;
dp3 += 8;
dp4 += 8;
} while (--lines > 0);
}
static void xor_arm64_eor3_5(unsigned long bytes, unsigned long *p1,
unsigned long *p2, unsigned long *p3,
unsigned long *p4, unsigned long *p5)
{
uint64_t *dp1 = (uint64_t *)p1;
uint64_t *dp2 = (uint64_t *)p2;
uint64_t *dp3 = (uint64_t *)p3;
uint64_t *dp4 = (uint64_t *)p4;
uint64_t *dp5 = (uint64_t *)p5;
register uint64x2_t v0, v1, v2, v3;
long lines = bytes / (sizeof(uint64x2_t) * 4);
do {
/* p1 ^= p2 ^ p3 */
v0 = eor3(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0),
vld1q_u64(dp3 + 0));
v1 = eor3(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2),
vld1q_u64(dp3 + 2));
v2 = eor3(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4),
vld1q_u64(dp3 + 4));
v3 = eor3(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6),
vld1q_u64(dp3 + 6));
/* p1 ^= p4 ^ p5 */
v0 = eor3(v0, vld1q_u64(dp4 + 0), vld1q_u64(dp5 + 0));
v1 = eor3(v1, vld1q_u64(dp4 + 2), vld1q_u64(dp5 + 2));
v2 = eor3(v2, vld1q_u64(dp4 + 4), vld1q_u64(dp5 + 4));
v3 = eor3(v3, vld1q_u64(dp4 + 6), vld1q_u64(dp5 + 6));
/* store */
vst1q_u64(dp1 + 0, v0);
vst1q_u64(dp1 + 2, v1);
vst1q_u64(dp1 + 4, v2);
vst1q_u64(dp1 + 6, v3);
dp1 += 8;
dp2 += 8;
dp3 += 8;
dp4 += 8;
dp5 += 8;
} while (--lines > 0);
}
static int __init xor_neon_init(void)
{
if (IS_ENABLED(CONFIG_AS_HAS_SHA3) && cpu_have_named_feature(SHA3)) {
xor_block_inner_neon.do_3 = xor_arm64_eor3_3;
xor_block_inner_neon.do_4 = xor_arm64_eor3_4;
xor_block_inner_neon.do_5 = xor_arm64_eor3_5;
}
return 0;
}
module_init(xor_neon_init);
static void __exit xor_neon_exit(void)
{
}
module_exit(xor_neon_exit);
MODULE_AUTHOR("Jackie Liu <liuyun01@kylinos.cn>"); MODULE_AUTHOR("Jackie Liu <liuyun01@kylinos.cn>");
MODULE_DESCRIPTION("ARMv8 XOR Extensions"); MODULE_DESCRIPTION("ARMv8 XOR Extensions");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
...@@ -140,15 +140,7 @@ SYM_FUNC_END(dcache_clean_pou) ...@@ -140,15 +140,7 @@ SYM_FUNC_END(dcache_clean_pou)
* - start - kernel start address of region * - start - kernel start address of region
* - end - kernel end address of region * - end - kernel end address of region
*/ */
SYM_FUNC_START_LOCAL(__dma_inv_area)
SYM_FUNC_START_PI(dcache_inval_poc) SYM_FUNC_START_PI(dcache_inval_poc)
/* FALLTHROUGH */
/*
* __dma_inv_area(start, end)
* - start - virtual start address of region
* - end - virtual end address of region
*/
dcache_line_size x2, x3 dcache_line_size x2, x3
sub x3, x2, #1 sub x3, x2, #1
tst x1, x3 // end cache line aligned? tst x1, x3 // end cache line aligned?
...@@ -167,7 +159,6 @@ SYM_FUNC_START_PI(dcache_inval_poc) ...@@ -167,7 +159,6 @@ SYM_FUNC_START_PI(dcache_inval_poc)
dsb sy dsb sy
ret ret
SYM_FUNC_END_PI(dcache_inval_poc) SYM_FUNC_END_PI(dcache_inval_poc)
SYM_FUNC_END(__dma_inv_area)
/* /*
* dcache_clean_poc(start, end) * dcache_clean_poc(start, end)
...@@ -178,19 +169,10 @@ SYM_FUNC_END(__dma_inv_area) ...@@ -178,19 +169,10 @@ SYM_FUNC_END(__dma_inv_area)
* - start - virtual start address of region * - start - virtual start address of region
* - end - virtual end address of region * - end - virtual end address of region
*/ */
SYM_FUNC_START_LOCAL(__dma_clean_area)
SYM_FUNC_START_PI(dcache_clean_poc) SYM_FUNC_START_PI(dcache_clean_poc)
/* FALLTHROUGH */
/*
* __dma_clean_area(start, end)
* - start - virtual start address of region
* - end - virtual end address of region
*/
dcache_by_line_op cvac, sy, x0, x1, x2, x3 dcache_by_line_op cvac, sy, x0, x1, x2, x3
ret ret
SYM_FUNC_END_PI(dcache_clean_poc) SYM_FUNC_END_PI(dcache_clean_poc)
SYM_FUNC_END(__dma_clean_area)
/* /*
* dcache_clean_pop(start, end) * dcache_clean_pop(start, end)
...@@ -232,8 +214,8 @@ SYM_FUNC_END_PI(__dma_flush_area) ...@@ -232,8 +214,8 @@ SYM_FUNC_END_PI(__dma_flush_area)
SYM_FUNC_START_PI(__dma_map_area) SYM_FUNC_START_PI(__dma_map_area)
add x1, x0, x1 add x1, x0, x1
cmp w2, #DMA_FROM_DEVICE cmp w2, #DMA_FROM_DEVICE
b.eq __dma_inv_area b.eq __pi_dcache_inval_poc
b __dma_clean_area b __pi_dcache_clean_poc
SYM_FUNC_END_PI(__dma_map_area) SYM_FUNC_END_PI(__dma_map_area)
/* /*
...@@ -245,6 +227,6 @@ SYM_FUNC_END_PI(__dma_map_area) ...@@ -245,6 +227,6 @@ SYM_FUNC_END_PI(__dma_map_area)
SYM_FUNC_START_PI(__dma_unmap_area) SYM_FUNC_START_PI(__dma_unmap_area)
add x1, x0, x1 add x1, x0, x1
cmp w2, #DMA_TO_DEVICE cmp w2, #DMA_TO_DEVICE
b.ne __dma_inv_area b.ne __pi_dcache_inval_poc
ret ret
SYM_FUNC_END_PI(__dma_unmap_area) SYM_FUNC_END_PI(__dma_unmap_area)
...@@ -35,8 +35,8 @@ static unsigned long *pinned_asid_map; ...@@ -35,8 +35,8 @@ static unsigned long *pinned_asid_map;
#define ASID_FIRST_VERSION (1UL << asid_bits) #define ASID_FIRST_VERSION (1UL << asid_bits)
#define NUM_USER_ASIDS ASID_FIRST_VERSION #define NUM_USER_ASIDS ASID_FIRST_VERSION
#define asid2idx(asid) ((asid) & ~ASID_MASK) #define ctxid2asid(asid) ((asid) & ~ASID_MASK)
#define idx2asid(idx) asid2idx(idx) #define asid2ctxid(asid, genid) ((asid) | (genid))
/* Get the ASIDBits supported by the current CPU */ /* Get the ASIDBits supported by the current CPU */
static u32 get_cpu_asid_bits(void) static u32 get_cpu_asid_bits(void)
...@@ -50,10 +50,10 @@ static u32 get_cpu_asid_bits(void) ...@@ -50,10 +50,10 @@ static u32 get_cpu_asid_bits(void)
pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n", pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n",
smp_processor_id(), fld); smp_processor_id(), fld);
fallthrough; fallthrough;
case 0: case ID_AA64MMFR0_ASID_8:
asid = 8; asid = 8;
break; break;
case 2: case ID_AA64MMFR0_ASID_16:
asid = 16; asid = 16;
} }
...@@ -120,7 +120,7 @@ static void flush_context(void) ...@@ -120,7 +120,7 @@ static void flush_context(void)
*/ */
if (asid == 0) if (asid == 0)
asid = per_cpu(reserved_asids, i); asid = per_cpu(reserved_asids, i);
__set_bit(asid2idx(asid), asid_map); __set_bit(ctxid2asid(asid), asid_map);
per_cpu(reserved_asids, i) = asid; per_cpu(reserved_asids, i) = asid;
} }
...@@ -162,7 +162,7 @@ static u64 new_context(struct mm_struct *mm) ...@@ -162,7 +162,7 @@ static u64 new_context(struct mm_struct *mm)
u64 generation = atomic64_read(&asid_generation); u64 generation = atomic64_read(&asid_generation);
if (asid != 0) { if (asid != 0) {
u64 newasid = generation | (asid & ~ASID_MASK); u64 newasid = asid2ctxid(ctxid2asid(asid), generation);
/* /*
* If our current ASID was active during a rollover, we * If our current ASID was active during a rollover, we
...@@ -183,7 +183,7 @@ static u64 new_context(struct mm_struct *mm) ...@@ -183,7 +183,7 @@ static u64 new_context(struct mm_struct *mm)
* We had a valid ASID in a previous life, so try to re-use * We had a valid ASID in a previous life, so try to re-use
* it if possible. * it if possible.
*/ */
if (!__test_and_set_bit(asid2idx(asid), asid_map)) if (!__test_and_set_bit(ctxid2asid(asid), asid_map))
return newasid; return newasid;
} }
...@@ -209,7 +209,7 @@ static u64 new_context(struct mm_struct *mm) ...@@ -209,7 +209,7 @@ static u64 new_context(struct mm_struct *mm)
set_asid: set_asid:
__set_bit(asid, asid_map); __set_bit(asid, asid_map);
cur_idx = asid; cur_idx = asid;
return idx2asid(asid) | generation; return asid2ctxid(asid, generation);
} }
void check_and_switch_context(struct mm_struct *mm) void check_and_switch_context(struct mm_struct *mm)
...@@ -300,13 +300,13 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) ...@@ -300,13 +300,13 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
} }
nr_pinned_asids++; nr_pinned_asids++;
__set_bit(asid2idx(asid), pinned_asid_map); __set_bit(ctxid2asid(asid), pinned_asid_map);
refcount_set(&mm->context.pinned, 1); refcount_set(&mm->context.pinned, 1);
out_unlock: out_unlock:
raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
asid &= ~ASID_MASK; asid = ctxid2asid(asid);
/* Set the equivalent of USER_ASID_BIT */ /* Set the equivalent of USER_ASID_BIT */
if (asid && arm64_kernel_unmapped_at_el0()) if (asid && arm64_kernel_unmapped_at_el0())
...@@ -327,7 +327,7 @@ void arm64_mm_context_put(struct mm_struct *mm) ...@@ -327,7 +327,7 @@ void arm64_mm_context_put(struct mm_struct *mm)
raw_spin_lock_irqsave(&cpu_asid_lock, flags); raw_spin_lock_irqsave(&cpu_asid_lock, flags);
if (refcount_dec_and_test(&mm->context.pinned)) { if (refcount_dec_and_test(&mm->context.pinned)) {
__clear_bit(asid2idx(asid), pinned_asid_map); __clear_bit(ctxid2asid(asid), pinned_asid_map);
nr_pinned_asids--; nr_pinned_asids--;
} }
......
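The renamed macros make the packing explicit: a context ID is the rollover generation in the upper bits with the hardware ASID in the low asid_bits. In other words (sketch):

/* ctxid = generation | asid              (asid2ctxid)
 * asid  = ctxid & ~ASID_MASK             (ctxid2asid)
 * where ASID_MASK == ~(ASID_FIRST_VERSION - 1), i.e. everything above asid_bits
 */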
...@@ -10,9 +10,6 @@ ...@@ -10,9 +10,6 @@
#include <asm/asm-extable.h> #include <asm/asm-extable.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
typedef bool (*ex_handler_t)(const struct exception_table_entry *,
struct pt_regs *);
static inline unsigned long static inline unsigned long
get_ex_fixup(const struct exception_table_entry *ex) get_ex_fixup(const struct exception_table_entry *ex)
{ {
......
...@@ -297,6 +297,8 @@ static void die_kernel_fault(const char *msg, unsigned long addr, ...@@ -297,6 +297,8 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
pr_alert("Unable to handle kernel %s at virtual address %016lx\n", msg, pr_alert("Unable to handle kernel %s at virtual address %016lx\n", msg,
addr); addr);
kasan_non_canonical_hook(addr);
mem_abort_decode(esr); mem_abort_decode(esr);
show_pte(addr); show_pte(addr);
...@@ -813,11 +815,8 @@ void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs) ...@@ -813,11 +815,8 @@ void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs)
if (!inf->fn(far, esr, regs)) if (!inf->fn(far, esr, regs))
return; return;
if (!user_mode(regs)) { if (!user_mode(regs))
pr_alert("Unhandled fault at 0x%016lx\n", addr); die_kernel_fault(inf->name, addr, esr, regs);
mem_abort_decode(esr);
show_pte(addr);
}
/* /*
* At this point we have an unrecognized fault type whose tag bits may * At this point we have an unrecognized fault type whose tag bits may
......
...@@ -47,7 +47,7 @@ obj-y := cputable.o syscalls.o \ ...@@ -47,7 +47,7 @@ obj-y := cputable.o syscalls.o \
udbg.o misc.o io.o misc_$(BITS).o \ udbg.o misc.o io.o misc_$(BITS).o \
of_platform.o prom_parse.o firmware.o \ of_platform.o prom_parse.o firmware.o \
hw_breakpoint_constraints.o interrupt.o \ hw_breakpoint_constraints.o interrupt.o \
kdebugfs.o kdebugfs.o stacktrace.o
obj-y += ptrace/ obj-y += ptrace/
obj-$(CONFIG_PPC64) += setup_64.o \ obj-$(CONFIG_PPC64) += setup_64.o \
paca.o nvram_64.o note.o paca.o nvram_64.o note.o
...@@ -116,7 +116,6 @@ obj-$(CONFIG_OPTPROBES) += optprobes.o optprobes_head.o ...@@ -116,7 +116,6 @@ obj-$(CONFIG_OPTPROBES) += optprobes.o optprobes_head.o
obj-$(CONFIG_KPROBES_ON_FTRACE) += kprobes-ftrace.o obj-$(CONFIG_KPROBES_ON_FTRACE) += kprobes-ftrace.o
obj-$(CONFIG_UPROBES) += uprobes.o obj-$(CONFIG_UPROBES) += uprobes.o
obj-$(CONFIG_PPC_UDBG_16550) += legacy_serial.o udbg_16550.o obj-$(CONFIG_PPC_UDBG_16550) += legacy_serial.o udbg_16550.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_SWIOTLB) += dma-swiotlb.o obj-$(CONFIG_SWIOTLB) += dma-swiotlb.o
obj-$(CONFIG_ARCH_HAS_DMA_SET_MASK) += dma-mask.o obj-$(CONFIG_ARCH_HAS_DMA_SET_MASK) += dma-mask.o
......
...@@ -139,12 +139,8 @@ unsigned long __get_wchan(struct task_struct *task) ...@@ -139,12 +139,8 @@ unsigned long __get_wchan(struct task_struct *task)
return pc; return pc;
} }
#ifdef CONFIG_STACKTRACE
noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie, noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
struct task_struct *task, struct pt_regs *regs) struct task_struct *task, struct pt_regs *regs)
{ {
walk_stackframe(task, regs, consume_entry, cookie); walk_stackframe(task, regs, consume_entry, cookie);
} }
#endif /* CONFIG_STACKTRACE */
...@@ -40,7 +40,7 @@ obj-y += sysinfo.o lgr.o os_info.o machine_kexec.o ...@@ -40,7 +40,7 @@ obj-y += sysinfo.o lgr.o os_info.o machine_kexec.o
obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
obj-y += nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o obj-y += nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o
obj-y += smp.o text_amode31.o obj-y += smp.o text_amode31.o stacktrace.o
extra-y += head64.o vmlinux.lds extra-y += head64.o vmlinux.lds
...@@ -55,7 +55,6 @@ compat-obj-$(CONFIG_AUDIT) += compat_audit.o ...@@ -55,7 +55,6 @@ compat-obj-$(CONFIG_AUDIT) += compat_audit.o
obj-$(CONFIG_COMPAT) += compat_linux.o compat_signal.o obj-$(CONFIG_COMPAT) += compat_linux.o compat_signal.o
obj-$(CONFIG_COMPAT) += $(compat-obj-y) obj-$(CONFIG_COMPAT) += $(compat-obj-y)
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_KPROBES) += kprobes.o obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_KPROBES) += kprobes_insn_page.o obj-$(CONFIG_KPROBES) += kprobes_insn_page.o
obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
......
...@@ -2476,7 +2476,7 @@ static int x86_pmu_event_init(struct perf_event *event) ...@@ -2476,7 +2476,7 @@ static int x86_pmu_event_init(struct perf_event *event)
if (READ_ONCE(x86_pmu.attr_rdpmc) && if (READ_ONCE(x86_pmu.attr_rdpmc) &&
!(event->hw.flags & PERF_X86_EVENT_LARGE_PEBS)) !(event->hw.flags & PERF_X86_EVENT_LARGE_PEBS))
event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED; event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
return err; return err;
} }
...@@ -2510,7 +2510,7 @@ void perf_clear_dirty_counters(void) ...@@ -2510,7 +2510,7 @@ void perf_clear_dirty_counters(void)
static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm) static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
{ {
if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED)) if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
return; return;
/* /*
...@@ -2531,7 +2531,7 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm) ...@@ -2531,7 +2531,7 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm) static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
{ {
if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED)) if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
return; return;
if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed)) if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
...@@ -2542,7 +2542,7 @@ static int x86_pmu_event_idx(struct perf_event *event) ...@@ -2542,7 +2542,7 @@ static int x86_pmu_event_idx(struct perf_event *event)
{ {
struct hw_perf_event *hwc = &event->hw; struct hw_perf_event *hwc = &event->hw;
if (!(hwc->flags & PERF_X86_EVENT_RDPMC_ALLOWED)) if (!(hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
return 0; return 0;
if (is_metric_idx(hwc->idx)) if (is_metric_idx(hwc->idx))
...@@ -2725,7 +2725,7 @@ void arch_perf_update_userpage(struct perf_event *event, ...@@ -2725,7 +2725,7 @@ void arch_perf_update_userpage(struct perf_event *event,
userpg->cap_user_time = 0; userpg->cap_user_time = 0;
userpg->cap_user_time_zero = 0; userpg->cap_user_time_zero = 0;
userpg->cap_user_rdpmc = userpg->cap_user_rdpmc =
!!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED); !!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT);
userpg->pmc_width = x86_pmu.cntval_bits; userpg->pmc_width = x86_pmu.cntval_bits;
if (!using_native_sched_clock() || !sched_clock_stable()) if (!using_native_sched_clock() || !sched_clock_stable())
......
...@@ -74,7 +74,7 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode) ...@@ -74,7 +74,7 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
#define PERF_X86_EVENT_PEBS_NA_HSW 0x0010 /* haswell style datala, unknown */ #define PERF_X86_EVENT_PEBS_NA_HSW 0x0010 /* haswell style datala, unknown */
#define PERF_X86_EVENT_EXCL 0x0020 /* HT exclusivity on counter */ #define PERF_X86_EVENT_EXCL 0x0020 /* HT exclusivity on counter */
#define PERF_X86_EVENT_DYNAMIC 0x0040 /* dynamic alloc'd constraint */ #define PERF_X86_EVENT_DYNAMIC 0x0040 /* dynamic alloc'd constraint */
#define PERF_X86_EVENT_RDPMC_ALLOWED 0x0080 /* grant rdpmc permission */
#define PERF_X86_EVENT_EXCL_ACCT 0x0100 /* accounted EXCL event */ #define PERF_X86_EVENT_EXCL_ACCT 0x0100 /* accounted EXCL event */
#define PERF_X86_EVENT_AUTO_RELOAD 0x0200 /* use PEBS auto-reload */ #define PERF_X86_EVENT_AUTO_RELOAD 0x0200 /* use PEBS auto-reload */
#define PERF_X86_EVENT_LARGE_PEBS 0x0400 /* use large PEBS */ #define PERF_X86_EVENT_LARGE_PEBS 0x0400 /* use large PEBS */
......
...@@ -84,7 +84,7 @@ obj-$(CONFIG_IA32_EMULATION) += tls.o ...@@ -84,7 +84,7 @@ obj-$(CONFIG_IA32_EMULATION) += tls.o
obj-y += step.o obj-y += step.o
obj-$(CONFIG_INTEL_TXT) += tboot.o obj-$(CONFIG_INTEL_TXT) += tboot.o
obj-$(CONFIG_ISA_DMA_API) += i8237.o obj-$(CONFIG_ISA_DMA_API) += i8237.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o obj-y += stacktrace.o
obj-y += cpu/ obj-y += cpu/
obj-y += acpi/ obj-y += acpi/
obj-y += reboot.o obj-y += reboot.o
......
...@@ -43,7 +43,7 @@ config ARM_CCN ...@@ -43,7 +43,7 @@ config ARM_CCN
config ARM_CMN config ARM_CMN
tristate "Arm CMN-600 PMU support" tristate "Arm CMN-600 PMU support"
depends on ARM64 || (COMPILE_TEST && 64BIT) depends on ARM64 || COMPILE_TEST
help help
Support for PMU events monitoring on the Arm CMN-600 Coherent Mesh Support for PMU events monitoring on the Arm CMN-600 Coherent Mesh
Network interconnect. Network interconnect.
...@@ -139,6 +139,13 @@ config ARM_DMC620_PMU ...@@ -139,6 +139,13 @@ config ARM_DMC620_PMU
Support for PMU events monitoring on the ARM DMC-620 memory Support for PMU events monitoring on the ARM DMC-620 memory
controller. controller.
config MARVELL_CN10K_TAD_PMU
tristate "Marvell CN10K LLC-TAD PMU"
depends on ARM64 || (COMPILE_TEST && 64BIT)
help
Provides support for Last-Level cache Tag-and-data Units (LLC-TAD)
performance monitors on CN10K family silicons.
source "drivers/perf/hisilicon/Kconfig" source "drivers/perf/hisilicon/Kconfig"
endmenu endmenu
...@@ -14,3 +14,4 @@ obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o ...@@ -14,3 +14,4 @@ obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o
obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o
obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
...@@ -47,6 +47,7 @@ ...@@ -47,6 +47,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/list.h> #include <linux/list.h>
#include <linux/msi.h> #include <linux/msi.h>
#include <linux/of.h>
#include <linux/perf_event.h> #include <linux/perf_event.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/smp.h> #include <linux/smp.h>
...@@ -75,6 +76,10 @@ ...@@ -75,6 +76,10 @@
#define SMMU_PMCG_CR 0xE04 #define SMMU_PMCG_CR 0xE04
#define SMMU_PMCG_CR_ENABLE BIT(0) #define SMMU_PMCG_CR_ENABLE BIT(0)
#define SMMU_PMCG_IIDR 0xE08 #define SMMU_PMCG_IIDR 0xE08
#define SMMU_PMCG_IIDR_PRODUCTID GENMASK(31, 20)
#define SMMU_PMCG_IIDR_VARIANT GENMASK(19, 16)
#define SMMU_PMCG_IIDR_REVISION GENMASK(15, 12)
#define SMMU_PMCG_IIDR_IMPLEMENTER GENMASK(11, 0)
#define SMMU_PMCG_CEID0 0xE20 #define SMMU_PMCG_CEID0 0xE20
#define SMMU_PMCG_CEID1 0xE28 #define SMMU_PMCG_CEID1 0xE28
#define SMMU_PMCG_IRQ_CTRL 0xE50 #define SMMU_PMCG_IRQ_CTRL 0xE50
...@@ -83,6 +88,20 @@ ...@@ -83,6 +88,20 @@
#define SMMU_PMCG_IRQ_CFG1 0xE60 #define SMMU_PMCG_IRQ_CFG1 0xE60
#define SMMU_PMCG_IRQ_CFG2 0xE64 #define SMMU_PMCG_IRQ_CFG2 0xE64
/* IMP-DEF ID registers */
#define SMMU_PMCG_PIDR0 0xFE0
#define SMMU_PMCG_PIDR0_PART_0 GENMASK(7, 0)
#define SMMU_PMCG_PIDR1 0xFE4
#define SMMU_PMCG_PIDR1_DES_0 GENMASK(7, 4)
#define SMMU_PMCG_PIDR1_PART_1 GENMASK(3, 0)
#define SMMU_PMCG_PIDR2 0xFE8
#define SMMU_PMCG_PIDR2_REVISION GENMASK(7, 4)
#define SMMU_PMCG_PIDR2_DES_1 GENMASK(2, 0)
#define SMMU_PMCG_PIDR3 0xFEC
#define SMMU_PMCG_PIDR3_REVAND GENMASK(7, 4)
#define SMMU_PMCG_PIDR4 0xFD0
#define SMMU_PMCG_PIDR4_DES_2 GENMASK(3, 0)
/* MSI config fields */
#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
#define MSI_CFG2_MEMATTR_DEVICE_nGnRE 0x1
@@ -754,6 +773,41 @@ static void smmu_pmu_get_acpi_options(struct smmu_pmu *smmu_pmu)
dev_notice(smmu_pmu->dev, "option mask 0x%x\n", smmu_pmu->options);
}
static bool smmu_pmu_coresight_id_regs(struct smmu_pmu *smmu_pmu)
{
return of_device_is_compatible(smmu_pmu->dev->of_node,
"arm,mmu-600-pmcg");
}
static void smmu_pmu_get_iidr(struct smmu_pmu *smmu_pmu)
{
u32 iidr = readl_relaxed(smmu_pmu->reg_base + SMMU_PMCG_IIDR);
if (!iidr && smmu_pmu_coresight_id_regs(smmu_pmu)) {
u32 pidr0 = readl(smmu_pmu->reg_base + SMMU_PMCG_PIDR0);
u32 pidr1 = readl(smmu_pmu->reg_base + SMMU_PMCG_PIDR1);
u32 pidr2 = readl(smmu_pmu->reg_base + SMMU_PMCG_PIDR2);
u32 pidr3 = readl(smmu_pmu->reg_base + SMMU_PMCG_PIDR3);
u32 pidr4 = readl(smmu_pmu->reg_base + SMMU_PMCG_PIDR4);
u32 productid = FIELD_GET(SMMU_PMCG_PIDR0_PART_0, pidr0) |
(FIELD_GET(SMMU_PMCG_PIDR1_PART_1, pidr1) << 8);
u32 variant = FIELD_GET(SMMU_PMCG_PIDR2_REVISION, pidr2);
u32 revision = FIELD_GET(SMMU_PMCG_PIDR3_REVAND, pidr3);
u32 implementer =
FIELD_GET(SMMU_PMCG_PIDR1_DES_0, pidr1) |
(FIELD_GET(SMMU_PMCG_PIDR2_DES_1, pidr2) << 4) |
(FIELD_GET(SMMU_PMCG_PIDR4_DES_2, pidr4) << 8);
iidr = FIELD_PREP(SMMU_PMCG_IIDR_PRODUCTID, productid) |
FIELD_PREP(SMMU_PMCG_IIDR_VARIANT, variant) |
FIELD_PREP(SMMU_PMCG_IIDR_REVISION, revision) |
FIELD_PREP(SMMU_PMCG_IIDR_IMPLEMENTER, implementer);
}
smmu_pmu->iidr = iidr;
}
static int smmu_pmu_probe(struct platform_device *pdev)
{
struct smmu_pmu *smmu_pmu;
@@ -825,7 +879,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
return err;
}
-smmu_pmu->iidr = readl_relaxed(smmu_pmu->reg_base + SMMU_PMCG_IIDR);
+smmu_pmu_get_iidr(smmu_pmu);
name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "smmuv3_pmcg_%llx",
(res_0->start) >> SMMU_PMCG_PA_SHIFT);
@@ -834,7 +888,8 @@ static int smmu_pmu_probe(struct platform_device *pdev)
return -EINVAL;
}
-smmu_pmu_get_acpi_options(smmu_pmu);
+if (!dev->of_node)
+	smmu_pmu_get_acpi_options(smmu_pmu);
/* Pick one CPU to be the preferred one to use */
smmu_pmu->on_cpu = raw_smp_processor_id();
@@ -884,9 +939,18 @@ static void smmu_pmu_shutdown(struct platform_device *pdev)
smmu_pmu_disable(&smmu_pmu->pmu);
}
#ifdef CONFIG_OF
static const struct of_device_id smmu_pmu_of_match[] = {
{ .compatible = "arm,smmu-v3-pmcg" },
{}
};
MODULE_DEVICE_TABLE(of, smmu_pmu_of_match);
#endif
static struct platform_driver smmu_pmu_driver = {
.driver = {
.name = "arm-smmu-v3-pmcg",
.of_match_table = of_match_ptr(smmu_pmu_of_match),
.suppress_bind_attrs = true,
},
.probe = smmu_pmu_probe,
...
@@ -5,3 +5,12 @@ config HISI_PMU
help
Support for HiSilicon SoC L3 Cache performance monitor, Hydra Home
Agent performance monitor and DDR Controller performance monitor.
config HISI_PCIE_PMU
tristate "HiSilicon PCIE PERF PMU"
depends on PCI && ARM64
help
Provide support for HiSilicon PCIe performance monitoring unit (PMU)
RCiEP devices.
Adds the PCIe PMU into perf events system for monitoring latency,
bandwidth etc.
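For orientation (this is not part of the patch set itself): once a driver like this registers its PMU, user space usually consumes it through perf_event_open() with the dynamic PMU type read from sysfs. Below is a minimal, hedged C sketch; the PMU directory name "hisi_pcie0_0", the event encoding 0x10 and the choice of CPU 0 are placeholder assumptions, and a real program should pick a CPU from the PMU's cpumask and an event listed in its "events" directory.

/* Hedged user-space sketch: count one event on a dynamically registered PMU. */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	FILE *f;
	int type, fd;

	/* Dynamic PMU type is exported by the driver in sysfs. */
	f = fopen("/sys/bus/event_source/devices/hisi_pcie0_0/type", "r");
	if (!f || fscanf(f, "%d", &type) != 1)
		return 1;
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;	/* PMU type from sysfs */
	attr.config = 0x10;	/* placeholder event encoding */

	/* Uncore PMUs count system-wide: pid = -1, cpu = <cpu from cpumask>. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0)
		return 1;

	sleep(1);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("count: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}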
@@ -2,3 +2,5 @@
obj-$(CONFIG_HISI_PMU) += hisi_uncore_pmu.o hisi_uncore_l3c_pmu.o \
hisi_uncore_hha_pmu.o hisi_uncore_ddrc_pmu.o hisi_uncore_sllc_pmu.o \
hisi_uncore_pa_pmu.o
obj-$(CONFIG_HISI_PCIE_PMU) += hisi_pcie_pmu.o
@@ -251,5 +251,16 @@ do { \
#define pmem_wmb() wmb()
#endif
/*
* ioremap_wc() maps I/O memory as memory with write-combining attributes. For
* this kind of memory access, the CPU may wait for prior accesses to be
* merged with subsequent ones. In some situations, such waiting is bad for
* performance. io_stop_wc() can be used to prevent the merging of
* write-combining memory accesses before this macro with those after it.
*/
#ifndef io_stop_wc
#define io_stop_wc() do { } while (0)
#endif
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_GENERIC_BARRIER_H */
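As a rough illustration of the comment introduced above, the sketch below shows how a driver might place io_stop_wc() between two batches of stores to ioremap_wc() memory. The function name, offsets and values are hypothetical and not taken from this series.

/* Hypothetical driver snippet, for illustration only. */
#include <linux/io.h>

static void example_push_two_batches(void __iomem *wc_buf)
{
	/* First batch of stores to write-combining memory. */
	writeq_relaxed(0x1, wc_buf + 0x00);
	writeq_relaxed(0x2, wc_buf + 0x08);

	/*
	 * Prevent the stores above from being buffered and merged with the
	 * independent stores below.
	 */
	io_stop_wc();

	/* Second batch. */
	writeq_relaxed(0x3, wc_buf + 0x10);
}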
@@ -225,6 +225,7 @@ enum cpuhp_state {
CPUHP_AP_PERF_ARM_HISI_L3_ONLINE,
CPUHP_AP_PERF_ARM_HISI_PA_ONLINE,
CPUHP_AP_PERF_ARM_HISI_SLLC_ONLINE,
CPUHP_AP_PERF_ARM_HISI_PCIE_PMU_ONLINE,
CPUHP_AP_PERF_ARM_L2X0_ONLINE,
CPUHP_AP_PERF_ARM_QCOM_L2_ONLINE,
CPUHP_AP_PERF_ARM_QCOM_L3_ONLINE,
...
@@ -129,6 +129,15 @@ struct hw_perf_event_extra {
int idx; /* index in shared_regs->regs[] */
};
/**
* hw_perf_event::flag values
*
* PERF_EVENT_FLAG_ARCH bits are reserved for architecture-specific
* usage.
*/
#define PERF_EVENT_FLAG_ARCH 0x0000ffff
#define PERF_EVENT_FLAG_USER_READ_CNT 0x80000000
/**
* struct hw_perf_event - performance event hardware details:
*/
@@ -822,6 +831,7 @@ struct perf_event_context {
int nr_events;
int nr_active;
int nr_user;
int is_active;
int nr_stat;
int nr_freq;
...
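A hedged sketch of how the PERF_EVENT_FLAG_USER_READ_CNT flag added above is meant to be used: a PMU driver tags events whose counters user space may read directly (the arm64 PMU does this for its user-access support), and the new ctx->nr_user counter maintained in kernel/events/core.c below tracks how many such events a context carries. The driver and its policy check here are hypothetical.

/* Hypothetical PMU driver snippet, for illustration only. */
#include <linux/perf_event.h>

static bool example_pmu_want_user_access(struct perf_event *event)
{
	return !!(event->attr.config1 & 0x1);	/* placeholder policy bit */
}

static int example_pmu_event_init(struct perf_event *event)
{
	/*
	 * Mark the event so the core accounts it in ctx->nr_user and user
	 * space may read its counter directly (self-monitoring via mmap).
	 */
	if (example_pmu_want_user_access(event))
		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;

	return 0;
}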
@@ -8,22 +8,6 @@
struct task_struct;
struct pt_regs;
-#ifdef CONFIG_STACKTRACE
-void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
-	int spaces);
-int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
-	unsigned int nr_entries, int spaces);
-unsigned int stack_trace_save(unsigned long *store, unsigned int size,
-	unsigned int skipnr);
-unsigned int stack_trace_save_tsk(struct task_struct *task,
-	unsigned long *store, unsigned int size,
-	unsigned int skipnr);
-unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
-	unsigned int size, unsigned int skipnr);
-unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
-unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
-/* Internal interfaces. Do not use in generic code */
#ifdef CONFIG_ARCH_STACKWALK
/**
@@ -76,8 +60,25 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
	const struct pt_regs *regs);
+#endif /* CONFIG_ARCH_STACKWALK */
-#else /* CONFIG_ARCH_STACKWALK */
+#ifdef CONFIG_STACKTRACE
+void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
+	int spaces);
+int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
+	unsigned int nr_entries, int spaces);
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+	unsigned int skipnr);
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+	unsigned long *store, unsigned int size,
+	unsigned int skipnr);
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+	unsigned int size, unsigned int skipnr);
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
+#ifndef CONFIG_ARCH_STACKWALK
+/* Internal interfaces. Do not use in generic code */
struct stack_trace {
unsigned int nr_entries, max_entries;
unsigned long *entries;
...
@@ -1808,6 +1808,8 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx)
list_add_rcu(&event->event_entry, &ctx->event_list);
ctx->nr_events++;
if (event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT)
ctx->nr_user++;
if (event->attr.inherit_stat)
ctx->nr_stat++;
@@ -1999,6 +2001,8 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx)
event->attach_state &= ~PERF_ATTACH_CONTEXT;
ctx->nr_events--;
if (event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT)
ctx->nr_user--;
if (event->attr.inherit_stat)
ctx->nr_stat--;
...
@@ -8,6 +8,7 @@ CFLAGS_REMOVE_debugfs.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_report.o = $(CC_FLAGS_FTRACE)
CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
$(call cc-option,-mno-outline-atomics) \
	-fno-stack-protector -DDISABLE_BRANCH_PROFILING
obj-y := core.o debugfs.o report.o
...
@@ -4,7 +4,7 @@
ARCH ?= $(shell uname -m 2>/dev/null || echo not)
ifneq (,$(filter $(ARCH),aarch64 arm64))
-ARM64_SUBTARGETS ?= tags signal pauth fp mte bti
+ARM64_SUBTARGETS ?= tags signal pauth fp mte bti abi
else
ARM64_SUBTARGETS :=
endif
...
# SPDX-License-Identifier: GPL-2.0
# Copyright (C) 2021 ARM Limited
TEST_GEN_PROGS := syscall-abi
include ../../lib.mk
$(OUTPUT)/syscall-abi: syscall-abi.c syscall-abi-asm.S
fp-pidbench
fpsimd-test
rdvl-sve
sve-probe-vls
...
@@ -2,13 +2,15 @@
CFLAGS += -I../../../../../usr/include/
TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg
-TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \
+TEST_PROGS_EXTENDED := fp-pidbench fpsimd-test fpsimd-stress \
rdvl-sve \
sve-test sve-stress \
vlset
all: $(TEST_GEN_PROGS) $(TEST_PROGS_EXTENDED)
fp-pidbench: fp-pidbench.S asm-utils.o
$(CC) -nostdlib $^ -o $@
fpsimd-test: fpsimd-test.o asm-utils.o
$(CC) -nostdlib $^ -o $@
rdvl-sve: rdvl-sve.o rdvl.o
...
// SPDX-License-Identifier: GPL-2.0-only
// Copyright (C) 2021 ARM Limited.
// Original author: Mark Brown <broonie@kernel.org>
//
// Trivial syscall overhead benchmark.
//
// This is implemented in asm to ensure that we don't have any issues with
// system libraries using instructions that disrupt the test.
#include <asm/unistd.h>
#include "assembler.h"
.arch_extension sve
.macro test_loop per_loop
mov x10, x20
mov x8, #__NR_getpid
mrs x11, CNTVCT_EL0
1:
\per_loop
svc #0
sub x10, x10, #1
cbnz x10, 1b
mrs x12, CNTVCT_EL0
sub x0, x12, x11
bl putdec
puts "\n"
.endm
// Main program entry point
.globl _start
function _start
_start:
puts "Iterations per test: "
mov x20, #10000
lsl x20, x20, #8
mov x0, x20
bl putdec
puts "\n"
// Test having never used SVE
puts "No SVE: "
test_loop
// Check for SVE support - should use hwcap but that's hard in asm
mrs x0, ID_AA64PFR0_EL1
ubfx x0, x0, #32, #4
cbnz x0, 1f
puts "System does not support SVE\n"
b out
1:
// Execute a SVE instruction
puts "SVE VL: "
rdvl x0, #8
bl putdec
puts "\n"
puts "SVE used once: "
test_loop
// Use SVE per syscall
puts "SVE used per syscall: "
test_loop "rdvl x0, #8"
// And we're done
out:
mov x0, #0
mov x8, #__NR_exit
svc #0
@@ -310,14 +310,12 @@ int test_setup(struct tdescr *td)
int test_run(struct tdescr *td)
{
-	if (td->sig_trig) {
-		if (td->trigger)
-			return td->trigger(td);
-		else
-			return default_trigger(td);
-	} else {
+	if (td->trigger)
+		return td->trigger(td);
+	else if (td->sig_trig)
+		return default_trigger(td);
+	else
		return td->run(td, NULL, NULL);
-	}
}

void test_result(struct tdescr *td)
...