- 13 Apr, 2021 2 commits
-
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 11 Apr, 2021 2 commits
-
-
Alexandru Elisei authored
Even though KVM sets up MDCR_EL2 to trap accesses to the SPE buffer and sampling control registers and to inject an undefined exception, the presence of FEAT_SPE is still advertised in the ID_AA64DFR0_EL1 register, if the hardware supports it. Getting an undefined exception when accessing a register usually happens for a hardware feature which is not implemented, and indeed this is how PMU emulation is handled when the virtual machine has been created without the KVM_ARM_VCPU_PMU_V3 feature. Let's be consistent and never advertise FEAT_SPE, because KVM doesn't have support for emulating it yet. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210409152154.198566-3-alexandru.elisei@arm.com
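A minimal, self-contained C sketch of the ID-register masking idea described above; it is not KVM's actual read_id_reg() code, and only the field position (ID_AA64DFR0_EL1.PMSVer, bits [35:32]) is taken from the architecture:

#include <stdint.h>
#include <stdio.h>

#define PMSVER_SHIFT 32                       /* ID_AA64DFR0_EL1.PMSVer, bits [35:32] */
#define PMSVER_MASK  (0xfULL << PMSVER_SHIFT)

/* Return the ID register value the guest is allowed to see. */
static uint64_t mask_spe(uint64_t id_aa64dfr0)
{
        return id_aa64dfr0 & ~PMSVER_MASK;    /* PMSVer == 0: SPE not implemented */
}

int main(void)
{
        uint64_t hw = 1ULL << PMSVER_SHIFT;   /* hardware implements SPEv1 */
        printf("host: %#llx guest view: %#llx\n",
               (unsigned long long)hw, (unsigned long long)mask_spe(hw));
        return 0;
}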
-
Alexandru Elisei authored
KVM sets up MDCR_EL2 to trap accesses to the SPE buffer and sampling control registers and it relies on the fact that KVM injects an undefined exception for unknown registers. This mechanism of injecting undefined exceptions also prints a warning message for the host kernel; for example, when a guest tries to access PMSIDR_EL1: [ 2.691830] kvm [142]: Unsupported guest sys_reg access at: 80009e78 [800003c5] [ 2.691830] { Op0( 3), Op1( 0), CRn( 9), CRm( 9), Op2( 7), func_read }, This is unnecessary, because KVM has explicitly configured trapping of those registers and is well aware of their existence. Prevent the warning by adding the SPE registers to the list of registers that KVM emulates. The access function will inject the undefined exception. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210409152154.198566-2-alexandru.elisei@arm.com
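A simplified, user-space model of the mechanism described here: a table of known-but-unemulated registers whose access handler deliberately reports failure so the caller injects an undefined exception. The structures are illustrative stand-ins, not the kernel's struct sys_reg_desc; only the PMSIDR_EL1 encoding is taken from the warning quoted above:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's sys_reg encoding and descriptor. */
struct sys_reg_params { uint8_t Op0, Op1, CRn, CRm, Op2; bool is_write; };
struct sys_reg_desc {
        struct sys_reg_params enc;
        bool (*access)(struct sys_reg_params *p); /* false: inject UNDEF */
};

static bool access_spe_undef(struct sys_reg_params *p)
{
        (void)p;
        return false; /* known register, deliberately emulated as UNDEFINED */
}

/* Registers KVM knows about but refuses to emulate, e.g. PMSIDR_EL1, whose
 * encoding (Op0 3, Op1 0, CRn 9, CRm 9, Op2 7) appears in the warning above. */
static const struct sys_reg_desc spe_regs[] = {
        { { 3, 0, 9, 9, 7, false }, access_spe_undef },
};

int main(void)
{
        struct sys_reg_params p = { 3, 0, 9, 9, 7, false };
        printf("emulated as undef: %s\n", spe_regs[0].access(&p) ? "no" : "yes");
        return 0;
}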
-
- 08 Apr, 2021 1 commit
-
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 07 Apr, 2021 2 commits
-
-
Alexandru Elisei authored
When a VCPU is created, the kvm_vcpu struct is initialized to zero in kvm_vm_ioctl_create_vcpu(). On VHE systems, the first time vcpu.arch.mdcr_el2 is loaded on hardware is in vcpu_load(), before it is set to a sensible value in kvm_arm_setup_debug() later in the run loop. The result is that KVM executes for a short time with MDCR_EL2 set to zero. This has several unintended consequences: * Setting MDCR_EL2.HPMN to 0 is constrained unpredictable according to ARM DDI 0487G.a, page D13-3820. The behavior specified by the architecture in this case is for the PE to behave as if MDCR_EL2.HPMN is set to a value less than or equal to PMCR_EL0.N, which means that an unknown number of counters are now disabled by MDCR_EL2.HPME, which is zero. * The host configuration for the other debug features controlled by MDCR_EL2 is temporarily lost. This has been harmless so far, as Linux doesn't use the other fields, but that might change in the future. Let's avoid both issues by initializing the VCPU's mdcr_el2 field in kvm_vcpu_first_run_init(), thus making sure that the MDCR_EL2 register has a consistent value after each vcpu_load(). Fixes: d5a21bcc ("KVM: arm64: Move common VHE/non-VHE trap config in separate functions") Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-3-alexandru.elisei@arm.com
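A toy model of the ordering fix, under the assumption that the exact MDCR_EL2 bits do not matter for the illustration (MDCR_EL2_HPMN_HOST is a made-up value); the point is that the field is given a sane value before the first vcpu_load():

#include <stdint.h>
#include <stdio.h>

#define MDCR_EL2_HPMN_HOST 0x1fULL /* hypothetical host value, sketch only */

struct vcpu { uint64_t mdcr_el2; };

/* Give mdcr_el2 a sensible value before it is ever loaded on hardware, so
 * vcpu_load() never programs MDCR_EL2 with the zero-initialised field
 * (MDCR_EL2.HPMN == 0 is CONSTRAINED UNPREDICTABLE). */
static void first_run_init(struct vcpu *v)
{
        v->mdcr_el2 = MDCR_EL2_HPMN_HOST;
}

static void vcpu_load(const struct vcpu *v)
{
        printf("MDCR_EL2 <- %#llx\n", (unsigned long long)v->mdcr_el2);
}

int main(void)
{
        struct vcpu v = { 0 };  /* zeroed by kvm_vm_ioctl_create_vcpu() */
        first_run_init(&v);     /* the fix: initialise before the first load */
        vcpu_load(&v);
        return 0;
}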
-
Alexandru Elisei authored
Commit 21b6f32f ("KVM: arm64: guest debug, define API headers") added the arm64 KVM_GUESTDBG_USE_HW flag for the KVM_SET_GUEST_DEBUG ioctl and commit 834bf887 ("KVM: arm64: enable KVM_CAP_SET_GUEST_DEBUG") documented and implemented the flag functionality. Since its introduction, at no point was the flag known by any name other than KVM_GUESTDBG_USE_HW for the arm64 architecture, so refer to it as such in the documentation. CC: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-2-alexandru.elisei@arm.com
-
- 06 Apr, 2021 16 commits
-
-
Suzuki K Poulose authored
Document the device tree bindings for Trace Buffer Extension (TRBE). Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Rob Herring <robh@kernel.org> Cc: devicetree@vger.kernel.org Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-21-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Anshuman Khandual authored
Add documentation for the TRBE under trace/coresight. Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-doc@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Split from the TRBE driver patch ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-20-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Anshuman Khandual authored
Add sysfs ABI documentation for the TRBE devices. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-doc@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Split from the TRBE driver patch ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-19-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Anshuman Khandual authored
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes, including CPU virtual address, and buffer modes, including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But access to the trace buffer could be prohibited by a higher exception level (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Mark the buffer truncated on WRAP event, error code cleanup ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-18-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
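An illustrative sketch of how the three pointers bound the buffer; the names, addresses and space computation are simplified and are not the TRBE driver's code:

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the three TRBE pointers described above:
 *   TRBBASER_EL1  - base of the buffer
 *   TRBPTR_EL1    - current write pointer
 *   TRBLIMITR_EL1 - limit (end) of the buffer
 * In circular-buffer mode tracing wraps from the limit back to the base. */
struct trbe_buf { uint64_t base, write, limit; };

static uint64_t trbe_space_left(const struct trbe_buf *b)
{
        return b->limit - b->write; /* bytes left before the "buffer full" IRQ */
}

int main(void)
{
        struct trbe_buf b = { 0x80000000ULL, 0x80001000ULL, 0x80010000ULL };
        printf("space left: %#llx bytes\n",
               (unsigned long long)trbe_space_left(&b));
        return 0;
}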
-
Anshuman Khandual authored
Add support for dedicated sinks that are bound to individual CPUs (e.g., TRBE). To allow quicker access to the sink for a given CPU-bound source, keep a percpu array of the sink devices. Also, add support for building a path to the CPU-local sink from the ETM. This adds a new percpu sink type CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM. This new sink type is exclusively available and can only work with the percpu source type device CORESIGHT_DEV_SUBTYPE_SOURCE_PROC. This defines a percpu structure that accommodates a single coresight_device, which can be used to store an initialized instance from a sink driver. As these sinks are exclusively linked to and dependent on the corresponding percpu source devices, they should also be the default sink device during a perf session. Outward device connections are scanned while establishing paths between a source and a sink device. But such connections are not present for certain percpu source and sink devices which are exclusively linked and dependent. Build the path directly and skip connection scanning for such devices. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [Moved the set/get percpu sink APIs from the TRBE patch to here; fixed build break on arm32] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-17-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
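A rough sketch of the per-CPU sink bookkeeping described above, with a plain array standing in for the kernel's per-cpu variables; the helper names mirror the ones mentioned in the patch, but the bodies are illustrative only:

#include <stdio.h>

#define NR_CPUS 4

struct coresight_device { const char *name; };

/* Stand-in for the driver's per-cpu sink pointer; plain array indexing is
 * used here purely for illustration. */
static struct coresight_device *percpu_sink[NR_CPUS];

static void coresight_set_percpu_sink(int cpu, struct coresight_device *csdev)
{
        percpu_sink[cpu] = csdev;
}

/* A CPU-local source (ETM) reaches its sink directly: no connection walk. */
static struct coresight_device *coresight_get_percpu_sink(int cpu)
{
        return percpu_sink[cpu];
}

int main(void)
{
        static struct coresight_device trbe0 = { "trbe0" };
        coresight_set_percpu_sink(0, &trbe0);
        printf("cpu0 sink: %s\n", coresight_get_percpu_sink(0)->name);
        return 0;
}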
-
Suzuki K Poulose authored
The context associated with an ETM for a given perf event includes: - handle -> the perf output handle for the AUX buffer. - the path for the trace components - the buffer config for the sink. The path and the buffer config are part of the "aux_priv" data (etm_event_data) set up by the setup_aux() callback, and made available via perf_get_aux(handle). Now, with a sink supporting an IRQ, the sink could "end" an output handle when the buffer reaches the programmed limit and would try to restart a handle. This could fail if there is not enough space left in the AUX buffer (e.g., the userspace has not consumed the data). This leaves the "handle" disconnected from the "event" and also leaves "perf_get_aux()" cleared. This all happens within the sink driver, without the etm_perf driver being aware. Now, when the event is actually stopped, etm_event_stop() will need to access the "event_data". But since the handle is not valid anymore, we lose the information needed to stop the "trace" path. So, we need a reliable way to access the etm_event_data even when the handle may not be active. This patch replaces the per_cpu handle array with a per_cpu context for the ETM, which tracks the "handle" as well as the "etm_event_data". The context notes the etm_event_data at etm_event_start() and clears it at etm_event_stop(). This makes sure that we don't access a stale "etm_event_data", as we are guaranteed that it is not freed by free_aux() as long as the event is active and tracing; it also provides us with access to the critical information needed to wind up a session even in the absence of an active output_handle. This is not an issue for the legacy sinks, as none of them supports an IRQ and they are handled centrally by etm-perf. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-16-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
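A stripped-down model of the per-CPU context idea: record both the handle and the event data at start, and rely on the recorded event data at stop even if the sink driver has already ended the handle. The types and the single global context are simplifications, not the etm-perf code:

#include <stddef.h>
#include <stdio.h>

struct perf_output_handle { int active; };
struct etm_event_data     { int path_id; };

/* Per-CPU context: the etm_event_data is noted at start and cleared at stop,
 * so etm_event_stop() can still tear down the path even if the sink driver
 * already closed or restarted the AUX handle. */
struct etm_ctxt {
        struct perf_output_handle *handle;
        struct etm_event_data     *event_data;
};

static struct etm_ctxt ctxt; /* one of these per CPU in the real driver */

static void etm_start(struct perf_output_handle *h, struct etm_event_data *d)
{
        ctxt.handle = h;
        ctxt.event_data = d;
}

static void etm_stop(void)
{
        struct etm_event_data *d = ctxt.event_data; /* valid even if the
                                                     * handle went away */
        printf("winding up path %d\n", d ? d->path_id : -1);
        ctxt.handle = NULL;
        ctxt.event_data = NULL;
}

int main(void)
{
        struct perf_output_handle h = { 1 };
        struct etm_event_data d = { 42 };
        etm_start(&h, &d);
        ctxt.handle = NULL; /* sink IRQ path ended the handle early */
        etm_stop();
        return 0;
}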
-
Suzuki K Poulose authored
Document the device tree bindings for Embedded Trace Extensions. ETE can be connected to legacy coresight components and thus could optionally contain a connection graph as described by the CoreSight bindings. Cc: devicetree@vger.kernel.org Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-15-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
Add ETE as one of the device types supported by the ETM4x driver. The devices are named following the existing convention as ete<N>. ETE mandates that the trace resource status register is programmed before tracing is turned on. For the moment, simply write to it indicating TraceActive. Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-14-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
Add support for handling the system registers for Embedded Trace Extensions (ETE). ETE shares most of the registers with ETMv4, except for a few, and also adds some new registers. Re-arrange the ETMv4x list to share the common definitions and add the ETE sysreg support. Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-13-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
ETE may not implement the OS lock and instead could rely on the PE OS Lock for the trace unit access. This is indicated by TRCOLSR.OSM == 0b100. Add support for handling the PE OS lock. Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-12-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
If a graph node is not found for a given node, of_get_next_endpoint() will emit the following error message: OF: graph: no port node found in /<node_name> If the given component doesn't have any explicit connections (e.g., ETE), we can simply skip the graph parsing. For any legacy component where the connection graph is mandatory, the device will remain unusable, just as before this patch. Updating the DT bindings to YAML and enabling the schema checks can detect such issues with the DT. Cc: Mike Leach <mike.leach@linaro.org> Cc: Leo Yan <leo.yan@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-11-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
When a sink is not specified by the user, the etm perf driver finds a suitable sink automatically, based on the first ETM where this event could be scheduled. Then we allocate the sink buffer based on the selected sink. This is fine for a CPU-bound event, as the "sink" is always guaranteed to be reachable from the ETM (as this is the only ETM where the event is going to be scheduled). However, if we have a thread-bound event, the event could be scheduled on any of the ETMs on the system. In this case, currently we automatically select a sink and exclude any ETMs that cannot reach the selected sink. This is problematic especially for 1x1 configurations. We end up tracing the event only on the "first" ETM, as the default sink is local to the first ETM and unreachable from the rest. However, we could allow the other ETMs to trace if they all have a sink that is compatible with the "selected" sink and can use the sink buffer. This can be easily done by verifying that they are all driven by the same driver and match the same subtype. Please note that at any time there can be only one ETM tracing the event. Adding support for different types of sinks for a single event is complex and is not something that we expect on a sane configuration. Cc: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Tested-by: Linu Cherian <lcherian@marvell.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-10-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
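A minimal sketch of the compatibility rule stated above (same driver, same subtype); the types and the way a driver identity is represented are assumptions made for illustration:

#include <stdbool.h>
#include <stdio.h>

enum sink_subtype { SINK_ETR, SINK_PERCPU_SYSMEM };

struct sink {
        enum sink_subtype subtype;
        const void *driver_ops; /* identity of the driver, e.g. its ops table */
};

/* Two sinks are "compatible" for one perf session if the same driver produced
 * them and they are of the same subtype; only then can the buffer allocated
 * against the selected sink be reused by another ETM's local sink. */
static bool sinks_compatible(const struct sink *a, const struct sink *b)
{
        return a->driver_ops == b->driver_ops && a->subtype == b->subtype;
}

int main(void)
{
        static const int trbe_ops; /* address used purely as an identity */
        struct sink s0 = { SINK_PERCPU_SYSMEM, &trbe_ops };
        struct sink s1 = { SINK_PERCPU_SYSMEM, &trbe_ops };
        printf("compatible: %d\n", sinks_compatible(&s0, &s1));
        return 0;
}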
-
Suzuki K Poulose authored
If the CPU implements the Armv8.4 Trace Filter controls (FEAT_TRF), move the ETM into the trace prohibited region using TRFCR while disabling it. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-9-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
For an nVHE host, the EL2 must allow the EL1&0 translation regime for the Trace Buffer (MDCR_EL2.E2TB == 0b11). This must be saved/restored over a trip to the guest. Also, before entering the guest, we must flush any trace data if the TRBE was enabled, and we must prohibit the generation of trace while we are in EL1 by clearing TRFCR_EL1. For VHE, the EL2 must prevent EL1 access to the Trace Buffer. The MDCR_EL2 bit definitions for TRBE are available here: https://developer.arm.com/documentation/ddi0601/2020-12/AArch64-Registers/ Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405164307.1720226-8-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
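A sketch of the save/clear/restore flow only, with plain variables standing in for TRFCR_EL1 and the host debug state; the real code runs at EL2 and also manipulates MDCR_EL2.E2TB, which is not modelled here:

#include <stdint.h>
#include <stdio.h>

static uint64_t trfcr_el1 = 0x3; /* pretend host value: tracing allowed */

struct host_debug_state { uint64_t trfcr_el1; };

static void debug_save_host_trace(struct host_debug_state *s)
{
        s->trfcr_el1 = trfcr_el1;
        trfcr_el1 = 0; /* prohibit trace generation while running at EL1 on
                        * behalf of the guest; any TRBE data was flushed first */
        printf("entering guest, TRFCR_EL1=%#llx\n", (unsigned long long)trfcr_el1);
}

static void debug_restore_host_trace(const struct host_debug_state *s)
{
        trfcr_el1 = s->trfcr_el1; /* host tracing resumes on guest exit */
        printf("back in host,  TRFCR_EL1=%#llx\n", (unsigned long long)trfcr_el1);
}

int main(void)
{
        struct host_debug_state s;
        debug_save_host_trace(&s);
        /* ... run the guest ... */
        debug_restore_host_trace(&s);
        return 0;
}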
-
Suzuki K Poulose authored
At the moment, we check the availability of SPE on the given CPU (i.e., SPE is implemented and is allowed at the host) during every guest entry. This can be optimized a bit by moving the check to vcpu_load time and recording the availability of the feature on the current CPU via a new flag. This will also be useful for adding the TRBE support. Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Alexandru Elisei <Alexandru.Elisei@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405164307.1720226-7-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
Rather than falling back to an "unhandled access", inject an explicit "undefined access" for TRFCR_EL1 accesses from the guest. Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405164307.1720226-6-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
- 05 Apr, 2021 4 commits
-
-
Anshuman Khandual authored
This adds TRBE-related registers and corresponding feature macros. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-5-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
TSB CSYNC synchronizes the trace operations of instructions. The instruction is a NOP when FEAT_TRF is not implemented. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-4-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
Suzuki K Poulose authored
The CoreSight PMU supports an AUX buffer for ETM tracing. The trace generated by the ETM (associated with individual CPUs, like Intel PT) is captured by a separate IP (CoreSight TMC-ETR/ETF until now). The TMC-ETR applies formatting to the raw ETM trace data, as it can collect traces from multiple ETMs, with a TraceID to indicate the source of a given trace packet. The Arm Trace Buffer Extension is a new "sink" IP, attached to individual CPUs, and thus does not apply additional formatting, unlike the TMC-ETR. Additionally, a system could have both TRBE *and* TMC-ETR for trace collection; e.g., the TMC-ETR could be used as a single trace buffer to collect data from multiple ETMs to correlate the traces from different CPUs. It is possible to have a perf session where some events end up collecting the trace in the TMC-ETR while others use the TRBE. Thus we need a way to identify the type of the trace for each AUX record. Define the trace formats exported by the CoreSight PMU. We don't define flags describing the "ETM", as this information is available to the user when setting up the session. What is missing is the additional formatting applied by the "sink", which is decided at runtime and which the user may not have control over. So we define: - CORESIGHT format (indicates the frame format) - RAW format (indicates the format of the source) The default value is the CORESIGHT format for all records (i.e., == 0). Add the RAW format for the others that use a raw format. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-3-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
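A small sketch of how a consumer might classify an AUX record using the two formats defined here; the flag names and values follow the description above (a PMU-specific byte of the AUX flags, CORESIGHT == 0 as the default, RAW in the next byte), but treat the exact constants as assumptions to be checked against the perf UAPI headers:

#include <stdint.h>
#include <stdio.h>

#define PERF_AUX_FLAG_PMU_FORMAT_TYPE_MASK       0xff00
#define PERF_AUX_FLAG_CORESIGHT_FORMAT_CORESIGHT 0x0000 /* frame formatted */
#define PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW       0x0100 /* raw source data */

static const char *aux_format(uint64_t flags)
{
        switch (flags & PERF_AUX_FLAG_PMU_FORMAT_TYPE_MASK) {
        case PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW:
                return "raw (e.g. TRBE, no framing)";
        default:
                return "CoreSight frame formatted (e.g. TMC-ETR/ETF)";
        }
}

int main(void)
{
        printf("%s\n", aux_format(PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW));
        printf("%s\n", aux_format(0)); /* default == CORESIGHT format */
        return 0;
}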
-
Suzuki K Poulose authored
Allocate a byte for advertising the PMU-specific format type of the given AUX record. A PMU could end up providing hardware trace data in multiple formats in a single session. For example, the format of the hardware buffer produced by the CoreSight ETM PMU depends on the type of the "sink" device used for collection for an event (traditional TMC-ETR/ETBs with formatting, or TRBEs without any formatting). # Boring story of why this is needed. Goto The_End_of_Story for skipping. CoreSight ETM trace allows instruction-level tracing of Arm CPUs. The ETM generates the CPU execution trace and pumps it into the CoreSight AMBA Trace Bus, and it is collected by a different CoreSight component (traditionally CoreSight TMC-ETR/ETB/ETF), called a "sink". It is important to note that there is no guarantee that every CPU has a dedicated sink. Thus multiple ETMs could pump trace data into the same "sink", and such sinks apply additional formatting to the trace data so that the user can decode it properly and attribute the trace data to the corresponding ETM. However, with the introduction of the Arm Trace Buffer Extension (TRBE), we now have a dedicated per-CPU architected sink for collecting the trace. Since the TRBE is always per-CPU, it doesn't apply any formatting to the trace. The support for this driver is under review [1]. Now a system could have a per-CPU TRBE and one or more shared TMC-ETRs. A user could choose a "specific" sink for a perf session (e.g., a TMC-ETR), or the driver could automatically select the nearest sink for a given ETM. It is possible that some ETMs could end up using the TMC-ETR (e.g., if the TRBE is not usable on the CPU) while the others use the TRBE in a single perf session. Thus we now have "formatted" trace collected from the TMC-ETR and "unformatted" trace collected from the TRBE. However, we don't get into a situation where a single event could end up using both TMC-ETR and TRBE; i.e., any AUX buffer is guaranteed to be either RAW or FORMATTED, but not a mix of both. As for perf decoding, we need to know the type of the data in the individual AUX buffers, so that it can set up the "OpenCSD" (library for decoding CoreSight trace) decoder instance appropriately. Thus the perf.data file must contain the hints for the tool to decode the data correctly. Since this is a runtime variable, and the perf tool doesn't have control over which sink gets used (in case of automatic sink selection), we need this information made available from the PMU driver for each AUX record. # The_End_of_Story Cc: Peter Zijlstra <peterz@infradead.org> Cc: alexander.shishkin@linux.intel.com Cc: mingo@redhat.com Cc: will@kernel.org Cc: mark.rutland@arm.com Cc: mike.leach@linaro.org Cc: acme@kernel.org Cc: jolsa@redhat.com Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-2-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
- 31 Mar, 2021 1 commit
-
-
Xu Jia authored
The sparse tool complains as follows: arch/arm64/kvm/arm.c:1900:6: warning: symbol '_kvm_host_prot_finalize' was not declared. Should it be static? This symbol is not used outside of arm.c, so this commit marks it static. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Xu Jia <xujia39@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1617176179-31931-1-git-send-email-xujia39@huawei.com
-
- 25 Mar, 2021 2 commits
-
-
Marc Zyngier authored
Now that the read_ctr macro has been specialised for nVHE, the whole CPU_FTR_REG_HYP_COPY infrastructure looks completely overengineered. Simplify it by populating the two u64 quantities (MMFR0 and 1) that the hypervisor needs. Reviewed-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
In protected mode, late CPUs are not allowed to boot (enforced by the PSCI relay). We can thus specialise the read_ctr macro to always return a pre-computed, sanitised value. Special care is taken to prevent the use of this custom version outside of protected mode. Reviewed-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 24 Mar, 2021 2 commits
-
-
Suzuki K Poulose authored
Disable guest access to the Trace Filter control registers. We already do not advertise the Trace Filter feature to the guest (ID_AA64DFR0_EL1.TRACE_FILT is cleared), but the guest can still access TRFCR_EL1 unless we trap it. This will also make sure that the guest cannot fiddle with the filtering controls set by an nVHE host. Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210323120647.454211-3-suzuki.poulose@arm.com
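A hedged sketch of the trapping idea: OR the trace-filter trap bit into the guest's MDCR_EL2 value. MDCR_EL2_TTRF is assumed here to be bit 19 per the FEAT_TRF description; verify against the real sysreg definitions before relying on it:

#include <stdint.h>
#include <stdio.h>

#define MDCR_EL2_TTRF (1ULL << 19) /* assumed bit position, check the Arm ARM */

static uint64_t kvm_compute_mdcr(uint64_t host_mdcr)
{
        /* Always trap trace-filter control accesses from the guest: the
         * feature is hidden in ID_AA64DFR0_EL1, so the access turns into an
         * undefined exception rather than touching the host's settings. */
        return host_mdcr | MDCR_EL2_TTRF;
}

int main(void)
{
        printf("mdcr_el2 = %#llx\n", (unsigned long long)kvm_compute_mdcr(0));
        return 0;
}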
-
Suzuki K Poulose authored
Currently we advertise ID_AA64DFR0_EL1.TRACEVER to the guest, even though the trace register accesses are trapped (CPTR_EL2.TTA == 1). So the guest will get an undefined instruction exception if it trusts the ID registers and accesses one of the trace registers. Let's be nice to the guest and hide the feature to avoid unexpected behavior. Even though this can be done at the KVM sysreg emulation layer, we do this by removing TRACEVER from the sanitised feature register field. This is fine as long as the ETM drivers can handle the individual trace units separately, even when there are differences among the CPUs. Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210323120647.454211-2-suzuki.poulose@arm.com
-
- 19 Mar, 2021 8 commits
-
-
Quentin Perret authored
When KVM runs in nVHE protected mode, use the host stage 2 to unmap the hypervisor sections by marking them as owned by the hypervisor itself. The long-term goal is to ensure the EL2 code can remain robust regardless of the host's state, so this starts by making sure the host cannot e.g. write to the .hyp sections directly. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-39-qperret@google.com
-
Quentin Perret authored
The host currently writes directly in EL2 per-CPU data sections from the PMU code when running in nVHE. In preparation for unmapping the EL2 sections from the host stage 2, disable PMU support in protected mode as we currently do not have a use-case for it. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-38-qperret@google.com
-
Quentin Perret authored
We will soon unmap the .hyp sections from the host stage 2 in Protected nVHE mode, which obviously works with at least page granularity, so make sure to align them correctly. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-37-qperret@google.com
-
Quentin Perret authored
When KVM runs in protected nVHE mode, make use of a stage 2 page-table to give the hypervisor some control over the host memory accesses. The host stage 2 is created lazily using large block mappings if possible, and will default to page mappings in absence of a better solution. From this point on, memory accesses from the host to protected memory regions (e.g. not 'owned' by the host) are fatal and lead to hyp_panic(). Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-36-qperret@google.com
-
Quentin Perret authored
We will need to read sanitized values of mmfr{0,1}_el1 at EL2 soon, so add them to the list of copied variables. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-35-qperret@google.com
-
Quentin Perret authored
Introduce a new stage 2 configuration flag to specify that all mappings in a given page-table will be identity-mapped, as will be the case for the host. This allows us to introduce sanity checks in the map path and avoid programming errors. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-34-qperret@google.com
-
Quentin Perret authored
In order to further configure stage 2 page-tables, pass flags to the init function using a new enum. The first of these flags allows FWB to be disabled even if the hardware supports it, as we will need to do so for the host stage 2. Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-33-qperret@google.com
-
Quentin Perret authored
Since the host stage 2 will be identity mapped, and since it will own most of memory, it would be preferable for performance to try and use large block mappings whenever possible. To ease this, introduce a new helper in the KVM page-table code which allows searching for large ranges of available IPA space. This will be used in the host memory abort path to greedily idmap large portions of the PA space. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-32-qperret@google.com
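An illustrative sketch of the greedy block-mapping idea: align the faulting address to a candidate block size to get the largest range a single mapping could cover, assuming the whole range is available. The helper and the sizes are made up for the example and are not the kernel's kvm_pgtable API:

#include <stdint.h>
#include <stdio.h>

#define SZ_2M (2ULL << 20)
#define SZ_1G (1ULL << 30)

struct range { uint64_t start, end; };

/* For an identity-mapped host stage 2, the IPA equals the PA, so a block
 * mapping simply covers the block-aligned range around the faulting address. */
static struct range block_range(uint64_t addr, uint64_t block_sz)
{
        struct range r;
        r.start = addr & ~(block_sz - 1);
        r.end   = r.start + block_sz;
        return r;
}

int main(void)
{
        uint64_t fault = 0x40123456ULL;
        struct range r2m = block_range(fault, SZ_2M);
        struct range r1g = block_range(fault, SZ_1G);
        printf("2M block: [%#llx, %#llx)\n",
               (unsigned long long)r2m.start, (unsigned long long)r2m.end);
        printf("1G block: [%#llx, %#llx)\n",
               (unsigned long long)r1g.start, (unsigned long long)r1g.end);
        return 0;
}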
-