- 04 Dec, 2020 14 commits
-
-
David Brazdil authored
When compiling with __KVM_NVHE_HYPERVISOR__, redefine per_cpu_offset() to __hyp_per_cpu_offset() which looks up the base of the nVHE per-CPU region of the given cpu and computes its offset from the .hyp.data..percpu section. This enables use of per_cpu_ptr() helpers in nVHE hyp code. Until now only this_cpu_ptr() was supported by setting TPIDR_EL2. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-14-dbrazdil@google.com
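Roughly, the redefinition described above amounts to the following sketch (illustrative only; the exact arm64 declarations may differ slightly):

    #ifdef __KVM_NVHE_HYPERVISOR__
    /* Returns the offset of the given CPU's copy of the hyp per-CPU data,
     * i.e. the base of its nVHE per-CPU region minus the start of the
     * .hyp.data..percpu section. */
    unsigned long __hyp_per_cpu_offset(unsigned int cpu);

    #define per_cpu_offset(cpu) __hyp_per_cpu_offset((cpu))
    #endif /* __KVM_NVHE_HYPERVISOR__ */

With per_cpu_offset() available, per_cpu_ptr() works in hyp code alongside the existing this_cpu_ptr(), which continues to rely on TPIDR_EL2.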
-
David Brazdil authored
Add rules for renaming the .data..ro_after_init ELF section in KVM nVHE object files to .hyp.data..ro_after_init, linking it into the kernel and mapping it in hyp at runtime. The section is RW to the host, then mapped RO in hyp. The expectation is that the host populates the variables in the section and they are never changed by hyp afterwards. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-13-dbrazdil@google.com
-
David Brazdil authored
MAIR_EL2 and TCR_EL2 are currently initialized from their _EL1 values. This will not work once KVM starts intercepting PSCI ON/SUSPEND SMCs and initializing EL2 state before EL1 state. Obtain the EL1 values during KVM init and store them in the init params struct. The struct will stay in memory and can be used when booting new cores. Take the opportunity to move copying the T0SZ value from idmap_t0sz in KVM init rather than in .hyp.idmap.text. This avoids the need for the idmap_t0sz symbol alias. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-12-dbrazdil@google.com
-
David Brazdil authored
Once we start initializing KVM on newly booted cores before the rest of the kernel, parameters to __do_hyp_init will need to be provided by EL2 rather than EL1. At that point it will not be possible to pass its three arguments directly because PSCI_CPU_ON only supports one context argument. Refactor __do_hyp_init to accept its parameters in a struct. This prepares the code for KVM booting cores as well as removes any limits on the number of __do_hyp_init arguments. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-11-dbrazdil@google.com
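A minimal sketch of the refactor, assuming illustrative field names (the real struct layout in the kernel may differ): the boot path fills in one such struct per CPU and passes its address as the single PSCI_CPU_ON context argument.

    struct kvm_nvhe_init_params {
        unsigned long mair_el2;     /* copied from the EL1 value at KVM init */
        unsigned long tcr_el2;      /* likewise, with T0SZ taken from idmap_t0sz */
        unsigned long tpidr_el2;    /* this CPU's hyp per-CPU offset */
        unsigned long stack_hyp_va; /* hyp VA of the EL2 stack */
        phys_addr_t   pgd_pa;       /* physical address of the hyp page tables */
    };

    /* __do_hyp_init then receives a single pointer to such a struct rather
     * than three separate register arguments. */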
-
David Brazdil authored
KVM precomputes the hyp VA of __kvm_hyp_host_vector, essentially a constant (minus ASLR), before passing it to __kvm_hyp_init. Now that we have alternatives for converting kimg VA to hyp VA, replace this with computing the constant inside __kvm_hyp_init, thus removing the need for an argument. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-10-dbrazdil@google.com
-
David Brazdil authored
When a CPU is booted in EL2, the kernel checks for VHE support and initializes the CPU core accordingly. For nVHE it also installs the stub vectors and drops down to EL1. Once KVM gains the ability to boot cores without going through the kernel entry point, it will need to initialize the CPU the same way. Extract the relevant bits of el2_setup into an init_el2_state macro with an argument specifying whether to initialize for VHE or nVHE. The following ifdefs are removed:

* CONFIG_ARM_GIC_V3 - always selected on arm64
* CONFIG_COMPAT - hstr_el2 can be set even without 32-bit support

No functional change intended. Size of el2_setup increased by 148 bytes due to duplication. Signed-off-by: David Brazdil <dbrazdil@google.com> [maz: reworked to fit the new PSTATE initial setup code] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-9-dbrazdil@google.com
-
David Brazdil authored
CPU index should never be negative. Change the signature of (set_)cpu_logical_map to take an unsigned int. This still works even if the users treat the CPU index as an int, and will allow the hypervisor's implementation to check that the index is valid with a single upper-bound check. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-8-dbrazdil@google.com
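The single-comparison point is easy to see in a tiny, hypothetical sketch (not kernel code):

    /* With a signed index a robust check needs "cpu >= 0 && cpu < NR_CPUS";
     * with an unsigned index a negative value wraps to a large number and the
     * single upper-bound comparison below rejects it. */
    static inline bool cpu_index_valid(unsigned int cpu)
    {
        return cpu < NR_CPUS;
    }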
-
David Brazdil authored
Make it possible to retrieve a copy of the psci_0_1_function_ids struct. This is useful for KVM if it is configured to intercept host's PSCI SMCs. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201202184122.26046-7-dbrazdil@google.com
-
David Brazdil authored
Small refactor that replaces array of v0.1 function IDs indexed by an enum of function-name constants with a struct of function IDs "indexed" by field names. This is done in preparation for exposing the IDs to other parts of the kernel. Exposing a struct avoids the need for bounds checking. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201202184122.26046-6-dbrazdil@google.com
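As a sketch of the before/after shape (field names chosen to mirror the four v0.1 functions; treat the details as assumptions):

    /* Before: IDs in an array indexed by an enum, requiring bounds checks. */
    static u32 psci_function_id[PSCI_FN_MAX];

    /* After: a struct "indexed" by field name, so there is nothing to bound. */
    struct psci_0_1_function_ids {
        u32 cpu_suspend;
        u32 cpu_on;
        u32 cpu_off;
        u32 migrate;
    };

    static struct psci_0_1_function_ids psci_0_1_function_ids;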
-
David Brazdil authored
Refactor implementation of v0.1+ functions (CPU_SUSPEND, CPU_OFF, CPU_ON, MIGRATE) to have two functions psci_0_1_foo / psci_0_2_foo that select the function ID and call a common helper __psci_foo. This is a small cleanup so that the function ID array is only used for v0.1 configurations. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201202184122.26046-5-dbrazdil@google.com
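Using CPU_OFF as an example, the resulting pattern reads roughly as follows (a sketch; the exact signatures in drivers/firmware/psci may differ):

    static int __psci_cpu_off(u32 fn, u32 state)
    {
        return psci_to_linux_errno(invoke_psci_fn(fn, state, 0, 0));
    }

    /* v0.1: the function ID comes from the firmware-provided table. */
    static int psci_0_1_cpu_off(u32 state)
    {
        return __psci_cpu_off(psci_0_1_function_ids.cpu_off, state);
    }

    /* v0.2+: the function ID is fixed by the specification. */
    static int psci_0_2_cpu_off(u32 state)
    {
        return __psci_cpu_off(PSCI_0_2_FN_CPU_OFF, state);
    }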
-
David Brazdil authored
KVM's host PSCI SMC filter needs to be aware of the PSCI version of the system but currently it is impossible to distinguish between v0.1 and PSCI disabled because both have get_version == NULL. Populate get_version for v0.1 with a function that returns a constant. psci_ops.get_version is currently unused so this has no effect on existing functionality. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201202184122.26046-4-dbrazdil@google.com
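The populated hook can be as small as the following sketch, using the PSCI_VERSION() helper from the uapi header to encode the constant:

    static u32 psci_0_1_get_version(void)
    {
        /* v0.1 firmware has no PSCI_VERSION call; report the constant. */
        return PSCI_VERSION(0, 1);
    }

    /* During v0.1 probing: psci_ops.get_version = psci_0_1_get_version; */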
-
David Brazdil authored
Expose the boolean value whether the system is running with KVM in protected mode (nVHE + kernel param). CPU capability was selected over a global variable to allow use in alternatives. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-3-dbrazdil@google.com
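A hedged sketch of how the capability is consumed from C (the capability and helper names are taken from the description and should be treated as assumptions):

    /* Resolves to a cheap test once system capabilities are finalized, and the
     * same capability can gate asm alternatives. */
    static inline bool is_protected_kvm_enabled(void)
    {
        return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
    }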
-
David Brazdil authored
Add an early parameter that allows users to select the mode of operation for KVM/arm64. For now, the only supported value is "protected". By passing this flag users opt into the hypervisor placing additional restrictions on the host kernel. These allow the hypervisor to spawn guests whose state is kept private from the host. Restrictions will include stage-2 address translation to prevent the host from accessing guest memory, filtering its SMC calls, etc. Without this parameter, the default behaviour remains selecting VHE/nVHE based on hardware support and CONFIG_ARM64_VHE. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-2-dbrazdil@google.com
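A simplified sketch of such an early_param handler (the option string matches the description; the enum and variable names are illustrative assumptions):

    enum kvm_mode { KVM_MODE_DEFAULT, KVM_MODE_PROTECTED };

    static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;

    static int __init early_kvm_mode_cfg(char *arg)
    {
        if (!arg)
            return -EINVAL;

        if (strcmp(arg, "protected") == 0) {
            kvm_mode = KVM_MODE_PROTECTED;
            return 0;
        }

        return -EINVAL;
    }
    early_param("kvm-arm.mode", early_kvm_mode_cfg);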
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 03 Dec, 2020 1 commit
-
-
Mark Rutland authored
Now that the PAN toggling has been removed, the only user of __system_matches_cap() is has_generic_auth(), which is only built when CONFIG_ARM64_PTR_AUTH is selected, and Qian reports that this results in a build-time warning when CONFIG_ARM64_PTR_AUTH is not selected:

| arch/arm64/kernel/cpufeature.c:2649:13: warning: '__system_matches_cap' defined but not used [-Wunused-function]
| static bool __system_matches_cap(unsigned int n)
|             ^~~~~~~~~~~~~~~~~~~~

It's tricky to restructure things to prevent this, so let's mark __system_matches_cap() as __maybe_unused, as we used to do for the other user of __system_matches_cap() which we just removed. Reported-by: Qian Cai <qcai@redhat.com> Suggested-by: Qian Cai <qcai@redhat.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201203152403.26100-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 02 Dec, 2020 16 commits
-
-
Mark Rutland authored
Now that arm64 no longer uses UAO, remove the vestigial feature detection code and Kconfig text. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Some code (e.g. futex) needs to make privileged accesses to userspace memory, and uses uaccess_{enable,disable}_privileged() in order to permit this. All other uaccess primitives use LDTR/STTR, and never need to toggle PAN. Remove the redundant PAN toggling. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Now that set_fs() is gone, addr_limit_user_check() is redundant. Remove the checks and associated thread flag. To ensure that _TIF_WORK_MASK can be used as an immediate value in an AND instruction (as it is in `ret_to_user`), TIF_MTE_ASYNC_FAULT is renumbered to keep the constituent bits of _TIF_WORK_MASK contiguous. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-11-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Now that the uaccess primitives don't take addr_limit into account, we have no need to manipulate this via set_fs() and get_fs(). Remove support for these, along with some infrastructure this renders redundant. We no longer need to flip UAO to access kernel memory under KERNEL_DS, and head.S unconditionally clears UAO for all kernel configurations via an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO, nor do we need to context-switch it. However, we still need to adjust PAN during SDEI entry. Masking of __user pointers no longer needs to use the dynamic value of addr_limit, and can use a constant derived from the maximum possible userspace task size. A new TASK_SIZE_MAX constant is introduced for this, which is also used by core code. In configurations supporting 52-bit VAs, this may include a region of unusable VA space above a 48-bit TTBR0 limit, but never includes any portion of TTBR1. Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and KERNEL_DS were inclusive limits, and is converted to a mask by subtracting one. As the SDEI entry code repurposes the otherwise unnecessary pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted context, for now we rename that to pt_regs::sdei_ttbr1. In future we can consider factoring that out. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: James Morse <james.morse@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Now the uaccess primitives use LDTR/STTR unconditionally, the uao_{ldp,stp,user_alternative} asm macros are misnamed, and have a redundant argument. Let's remove the redundant argument and rename these to user_{ldp,stp,ldst} respectively to clean this up. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-9-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
This patch separates arm64's user and kernel memory access primitives into distinct routines, adding new __{get,put}_kernel_nofault() helpers to access kernel memory, upon which core code builds larger copy routines. The kernel access routines (using LDR/STR) are not affected by PAN (when legitimately accessing kernel memory), nor are they affected by UAO. Switching to KERNEL_DS may set UAO, but this does not adversely affect the kernel access routines. The user access routines (using LDTR/STTR) are not affected by PAN (when legitimately accessing user memory), but are affected by UAO. As these are only legitimate to use under USER_DS with UAO clear, this should not be problematic. Routines performing atomics to user memory (futex and deprecated instruction emulation) still need to transiently clear PAN, and these are left as-is. These are never used on kernel memory. Subsequent patches will refactor the uaccess helpers to remove redundant code, and will also remove the redundant PAN/UAO manipulation. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-8-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
As a step towards implementing __{get,put}_kernel_nofault(), this patch splits most user-memory specific logic out of __{get,put}_user(), with the memory access and fault handling in new __{raw_get,put}_mem() helpers. For now the LDR/LDTR patching is left within the *get_mem() helpers, and will be removed in a subsequent patch. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-7-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and doesn't use uaccess_mask_ptr() on the user address. Let's have it call raw_copy_from_user(), which is both a simplification and ensures that user pointers are masked under speculation. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
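A sketch of the simplified routine after the change (the cache-maintenance helper named here follows contemporaneous arm64 code and is an assumption):

    unsigned long __copy_user_flushcache(void *to, const void __user *from,
                                         unsigned long n)
    {
        unsigned long rc = raw_copy_from_user(to, from, n);

        /* Only clean the destination lines for the bytes actually copied. */
        __clean_dcache_area_pop(to, n - rc);

        return rc;
    }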
-
Mark Rutland authored
We currently have many uaccess_*{enable,disable}*() variants, which subsequent patches will cut down as part of removing set_fs() and friends. Once this simplification is made, most uaccess routines will only need to ensure that the user page tables are mapped in TTBR0, as is currently dealt with by uaccess_ttbr0_{enable,disable}(). The existing uaccess_{enable,disable}() routines ensure that user page tables are mapped in TTBR0, and also disable PAN protections, which is necessary to be able to use atomics on user memory, but also permit unrelated privileged accesses to access user memory. As a preparatory step, let's rename uaccess_{enable,disable}() to uaccess_{enable,disable}_privileged(), highlighting this caveat and discouraging wider misuse. Subsequent patches can reuse the uaccess_{enable,disable}() naming for the common case of ensuring the user page tables are mapped in TTBR0. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
In preparation for removing addr_limit and set_fs() we must decouple the SDEI PAN/UAO manipulation from the uaccess code, and explicitly reinitialize these as required. SDEI enters the kernel with a non-architectural exception, and prior to the most recent revision of the specification (ARM DEN 0054B), PSTATE bits (e.g. PAN, UAO) are not manipulated in the same way as for architectural exceptions. Notably, older versions of the spec can be read ambiguously as to whether PSTATE bits are inherited unchanged from the interrupted context or whether they are generated from scratch, with TF-A doing the latter. We have three cases to consider:

1) The existing TF-A implementation of SDEI will clear PAN and clear UAO (along with other bits in PSTATE) when delivering an SDEI exception.
2) In theory, implementations of SDEI prior to revision B could inherit PAN and UAO (along with other bits in PSTATE) unchanged from the interrupted context. However, in practice such implementations do not exist.
3) Going forward, new implementations of SDEI must clear UAO, and depending on SCTLR_ELx.SPAN must either inherit or set PAN.

As we can ignore (2) we can assume that upon SDEI entry, UAO is always clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN. Therefore, we must explicitly initialize PAN, but do not need to do anything for UAO. Considering what we need to do:

* When set_fs() is removed, force_uaccess_begin() will have no HW side-effects. As this only clears UAO, which we can assume has already been cleared upon entry, this is not a problem. We do not need to add code to manipulate UAO explicitly.
* PAN may be cleared upon entry (in case 1 above), so where a kernel is built to use PAN and this is supported by all CPUs, the kernel must set PAN upon entry to ensure expected behaviour.
* PAN may be inherited from the interrupted context (in case 3 above), and so where a kernel is not built to use PAN or where PAN support is not uniform across CPUs, the kernel must clear PAN to ensure expected behaviour.

This patch reworks the SDEI code accordingly, explicitly setting PAN to the expected state in all cases. To cater for the cases where the kernel does not use PAN or this is not uniformly supported by hardware we add a new cpu_has_pan() helper which can be used regardless of whether the kernel is built to use PAN. The existing system_uses_ttbr0_pan() is redefined in terms of system_uses_hw_pan() both for clarity and as a minor optimization when HW PAN is not selected. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
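One plausible shape for the new helper, reading the PAN field of ID_AA64MMFR1_EL1 directly so it can be used whether or not the kernel is built to use PAN (a sketch; the in-tree version may differ):

    static inline bool cpu_has_pan(void)
    {
        u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);

        return cpuid_feature_extract_unsigned_field(mmfr1,
                                                    ID_AA64MMFR1_PAN_SHIFT);
    }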
-
Mark Rutland authored
The SDEI support code is split across arch/arm64/ and drivers/firmware/; largely this is split so that the arch-specific portions are under arch/arm64, and the management logic is under drivers/firmware/. However, exception entry fixups are currently under drivers/firmware. Let's move the exception entry fixups under arch/arm64/. This de-clutters the management logic, and puts all the arch-specific portions in one place. Doing this also allows the fixups to be applied earlier, so things like PAN and UAO will be in a known good state before we run other logic. This will also make subsequent refactoring easier. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
As with SCTLR_ELx and other control registers, some PSTATE bits are UNKNOWN out-of-reset, and we may not be able to rely on hardware or firmware to initialize them to our liking prior to entry to the kernel, e.g. in the primary/secondary boot paths and return from idle/suspend. It would be more robust (and easier to reason about) if we consistently initialized PSTATE to a default value, as we do with control registers. This will ensure that the kernel is not adversely affected by bits it is not aware of, e.g. when support for a feature such as PAN/UAO is disabled. This patch ensures that PSTATE is consistently initialized at boot time via an ERET. This is not intended to relax the existing requirements (e.g. DAIF bits must still be set prior to entering the kernel). For features detected dynamically (which may require system-wide support), it is still necessary to subsequently modify PSTATE. As ERET is not always a Context Synchronization Event, an ISB is placed before each exception return to ensure updates to control registers have taken effect. This handles the kernel being entered with SCTLR_ELx.EOS clear (or any future control bits being in an UNKNOWN state). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Let's make SCTLR_ELx initialization a bit clearer by using meaningful names for the initialization values, following the same scheme for SCTLR_EL1 and SCTLR_EL2. These definitions will be used more widely in subsequent patches. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
For a while now el2_setup has performed some basic initialization of EL1 even when the kernel is booted at EL1, so the name is a little misleading. Further, some comments are stale as with VHE it doesn't drop the CPU to EL1. To clarify things, rename el2_setup to init_kernel_el, and update comments to be clearer as to the function's purpose. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
To make callsites easier to read, add trivial C wrappers for the SET_PSTATE_*() helpers, and convert trivial uses over to these. The new wrappers will be used further in subsequent patches. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
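The wrappers are deliberately thin; a sketch of the shape (kept as macros because SET_PSTATE_*() expects a compile-time constant operand):

    #define set_pstate_pan(x)   asm volatile(SET_PSTATE_PAN(x))
    #define set_pstate_uao(x)   asm volatile(SET_PSTATE_UAO(x))
    #define set_pstate_ssbs(x)  asm volatile(SET_PSTATE_SSBS(x))

Call sites then read, for example, set_pstate_pan(1) instead of open-coded inline assembly.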
-
Mark Rutland authored
For consistency, all tasks have a pt_regs reserved at the highest portion of their task stack. Among other things, this ensures that a task's SP is always pointing within its stack rather than pointing immediately past the end. While it is never legitimate to ERET from a kthread, we take pains to initialize pt_regs for kthreads as if this were legitimate. As this is never legitimate, the effects of an erroneous return are rarely tested. Let's simplify things by initializing a kthread's pt_regs such that an ERET is caught as an illegal exception return, and removing the explicit initialization of other exception context. Note that as spectre_v4_enable_task_mitigation() only manipulates the PSTATE within the unused regs this is safe to remove. As user tasks will have their exception context initialized via start_thread() or start_compat_thread(), this should only impact cases where something has gone very wrong and we'd like that to be clearly indicated. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 Nov, 2020 3 commits
-
-
Marc Zyngier authored
Registers x0/x1 get repeatedly pushed and popped during a host HVC call. Instead, leave the registers on the stack, trading a store instruction on the fast path for an add on the slow path. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Move the setting of SSBS directly into the HVC handler, using the C helpers rather than the inline assembly code. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Directly using the kimage_voffset variable is fine for now, but will become more problematic as we start distrusting EL1. Instead, patch the kimage_voffset into the HYP text, ensuring we don't have to load an untrusted value later on. Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 09 Nov, 2020 6 commits
-
-
Will Deacon authored
When building with LTO, there is an increased risk of the compiler converting an address dependency headed by a READ_ONCE() invocation into a control dependency and consequently allowing for harmful reordering by the CPU. Ensure that such transformations are harmless by overriding the generic READ_ONCE() definition with one that provides acquire semantics when building with LTO. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
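The idea can be illustrated with a minimal, non-kernel sketch: promote the dependent load to an acquire so ordering no longer relies on an address dependency the compiler may have broken (the real kernel override is a macro covering all scalar sizes):

    /* Illustration only: with LTO, a consumer such as
     *     p = READ_ONCE(head); val = p->data;
     * is only ordered if the address dependency survives compilation. An
     * acquire load removes that reliance entirely. */
    static inline void *read_pointer_acquire(void * const *slot)
    {
        return smp_load_acquire(slot);
    }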
-
Will Deacon authored
In preparation for patching the internals of READ_ONCE() itself, replace its usage on the alternatives patching path with a volatile variable instead. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
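A sketch of the substitution described above (the variable name is an assumption based on the alternatives code of the time):

    /* Previously: while (!READ_ONCE(all_alternatives_applied)) ... */
    static volatile int all_alternatives_applied;

    /* Secondary CPUs spin until the boot CPU has finished patching. */
    static void wait_for_patching(void)
    {
        while (!all_alternatives_applied)
            cpu_relax();
    }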
-
Will Deacon authored
Armv8.3 introduced the LDAPR instruction, which provides weaker memory ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an RCsc implementation when implementing the Linux memory model, but LDAPR can be used as a useful alternative to dependency ordering, particularly when the compiler is capable of breaking the dependencies. Since LDAPR is not available on all CPUs, add a cpufeature to detect it at runtime and allow the instruction to be used with alternative code patching. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
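A hedged sketch of what such a cpufeature table entry looks like, keyed off the LRCPC field of ID_AA64ISAR1_EL1 (the capability and field names are assumptions based on the description):

    {
        .desc            = "RCpc load-acquire (LDAPR)",
        .capability      = ARM64_HAS_LDAPR,
        .type            = ARM64_CPUCAP_SYSTEM_FEATURE,
        .matches         = has_cpuid_feature,
        .sys_reg         = SYS_ID_AA64ISAR1_EL1,
        .sign            = FTR_UNSIGNED,
        .field_pos       = ID_AA64ISAR1_LRCPC_SHIFT,
        .min_field_value = 1,
    },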
-
Will Deacon authored
asm/alternative.h contains both the macros needed to use alternatives, as well as the type definitions and function prototypes for applying them. Split the header in two, so that alternatives can be used from core header files such as linux/compiler.h without the risk of circular includes. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
The uao_* alternative asm macros are only used by the uaccess assembly routines in arch/arm64/lib/, where they are included indirectly via asm-uaccess.h. Since they're specific to the uaccess assembly (and will lose the alternatives in subsequent patches), let's move them into asm-uaccess.h. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> [will: update #include in mte.S to pull in uao asm macros] Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Now that we can use function pointers, use a dispatch table to call the individual HVC handlers, leading to more maintainable code. Further improvements include helpers to declare the mapping of local variables to values passed in the host context. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
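A sketch of the dispatch-table shape (handler and index names are illustrative; the real table lives in the nVHE hyp code and uses helper macros):

    typedef void (*hcall_t)(struct kvm_cpu_context *host_ctxt);

    static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
    {
        __kvm_flush_vm_context();
    }

    static const hcall_t host_hcall[] = {
        [__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context] = handle___kvm_flush_vm_context,
        /* ... one entry per host hypercall ... */
    };

    static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
    {
        unsigned long id = host_ctxt->regs.regs[0];

        if (id < ARRAY_SIZE(host_hcall) && host_hcall[id])
            host_hcall[id](host_ctxt);
        /* else: report SMCCC_RET_NOT_SUPPORTED back to the host */
    }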
-