- 30 Oct, 2023 11 commits
-
-
Oliver Upton authored
* kvm-arm64/sgi-injection:
  : vSGI injection improvements + fixes, courtesy Marc Zyngier
  :
  : Avoid linearly searching for vSGI targets using a compressed MPIDR
  : to index a cache. While at it, fix some egregious bugs in KVM's
  : mishandling of vcpuid (user-controlled value) and vcpu_idx.
  KVM: arm64: Clarify the ordering requirements for vcpu/RD creation
  KVM: arm64: vgic-v3: Optimize affinity-based SGI injection
  KVM: arm64: Fast-track kvm_mpidr_to_vcpu() when mpidr_data is available
  KVM: arm64: Build MPIDR to vcpu index cache at runtime
  KVM: arm64: Simplify kvm_vcpu_get_mpidr_aff()
  KVM: arm64: Use vcpu_idx for invalidation tracking
  KVM: arm64: vgic: Use vcpu_idx for the debug information
  KVM: arm64: vgic-v2: Use cpuid from userspace as vcpu_id
  KVM: arm64: vgic-v3: Refactor GICv3 SGI generation
  KVM: arm64: vgic-its: Treat the collection target address as a vcpu_id
  KVM: arm64: vgic: Make kvm_vgic_inject_irq() take a vcpu pointer

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
* kvm-arm64/stage2-vhe-load:
  : Setup stage-2 MMU from vcpu_load() for VHE
  :
  : Unlike nVHE, there is no need to switch the stage-2 MMU around on guest
  : entry/exit in VHE mode as the host is running at EL2. Despite this KVM
  : reloads the stage-2 on every guest entry, which is needless.
  :
  : This series moves the setup of the stage-2 MMU context to vcpu_load()
  : when running in VHE mode. This is likely to be a win across the board,
  : but also allows us to remove an ISB on the guest entry path for systems
  : with one of the speculative AT errata.
  KVM: arm64: Move VTCR_EL2 into struct s2_mmu
  KVM: arm64: Load the stage-2 MMU context in kvm_vcpu_load_vhe()
  KVM: arm64: Rename helpers for VHE vCPU load/put
  KVM: arm64: Reload stage-2 for VMID change on VHE
  KVM: arm64: Restore the stage-2 context in VHE's __tlb_switch_to_host()
  KVM: arm64: Don't zero VTTBR in __tlb_switch_to_host()

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
* kvm-arm64/nv-trap-fixes:
  : NV trap forwarding fixes, courtesy Miguel Luis and Marc Zyngier
  :
  : - Explicitly define the effects of HCR_EL2.NV on EL2 sysregs in the
  :   NV trap encoding
  :
  : - Make EL2 registers that access AArch32 guest state UNDEF or RAZ/WI
  :   where appropriate for NV guests
  KVM: arm64: Handle AArch32 SPSR_{irq,abt,und,fiq} as RAZ/WI
  KVM: arm64: Do not let a L1 hypervisor access the *32_EL2 sysregs
  KVM: arm64: Refine _EL2 system register list that require trap reinjection
  arm64: Add missing _EL2 encodings
  arm64: Add missing _EL12 encodings

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
* kvm-arm64/smccc-filter-cleanups:
  : Cleanup the management of KVM's SMCCC maple tree
  :
  : Avoid the cost of maintaining the SMCCC filter maple tree if userspace
  : hasn't written a rule to the filter. While at it, rip out the now
  : unnecessary VM flag to indicate whether or not the SMCCC filter was
  : configured.
  KVM: arm64: Use mtree_empty() to determine if SMCCC filter configured
  KVM: arm64: Only insert reserved ranges when SMCCC filter is used
  KVM: arm64: Add a predicate for testing if SMCCC filter is configured

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
* kvm-arm64/pmevtyper-filter:
  : Fixes to KVM's handling of the PMUv3 exception level filtering bits
  :
  : - NSH (count at EL2) and M (count at EL3) should be stateful when the
  :   respective EL is advertised in the ID registers but have no effect
  :   on event counting.
  :
  : - NSU and NSK modify the event filtering of EL0 and EL1, respectively.
  :   Though the kernel may not use these bits, other KVM guests might.
  :   Implement these bits exactly as written in the pseudocode if EL3 is
  :   advertised.
  KVM: arm64: Add PMU event filter bits required if EL3 is implemented
  KVM: arm64: Make PMEVTYPER<n>_EL0.NSH RES0 if EL2 isn't advertised

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
* kvm-arm64/feature-flag-refactor:
  : vCPU feature flag cleanup
  :
  : Clean up KVM's handling of vCPU feature flags to get rid of the
  : vCPU-scoped bitmaps and remove failure paths from kvm_reset_vcpu().
  KVM: arm64: Get rid of vCPU-scoped feature bitmap
  KVM: arm64: Remove unused return value from kvm_reset_vcpu()
  KVM: arm64: Hoist NV+SVE check into KVM_ARM_VCPU_INIT ioctl handler
  KVM: arm64: Prevent NV feature flag on systems w/o nested virt
  KVM: arm64: Hoist PAuth checks into KVM_ARM_VCPU_INIT ioctl
  KVM: arm64: Hoist SVE check into KVM_ARM_VCPU_INIT ioctl handler
  KVM: arm64: Hoist PMUv3 check into KVM_ARM_VCPU_INIT ioctl handler
  KVM: arm64: Add generic check for system-supported vCPU features

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
* kvm-arm64/misc:
  : Miscellaneous updates
  :
  : - Put an upper bound on the number of I-cache invalidations by
  :   cacheline to avoid soft lockups
  :
  : - Get rid of bogus reference count transfer for THP mappings
  :
  : - Do a local TLB invalidation on permission fault race
  :
  : - Fixes for page_fault_test KVM selftest
  :
  : - Add a tracepoint for detecting MMIO instructions unsupported by KVM
  KVM: arm64: Add tracepoint for MMIO accesses where ISV==0
  KVM: arm64: selftest: Perform ISB before reading PAR_EL1
  KVM: arm64: selftest: Add the missing .guest_prepare()
  KVM: arm64: Always invalidate TLB for stage-2 permission faults
  KVM: arm64: Do not transfer page refcount for THP adjustment
  KVM: arm64: Avoid soft lockups due to I-cache maintenance
  arm64: tlbflush: Rename MAX_TLBI_OPS
  KVM: arm64: Don't use kerneldoc comment for arm64_check_features()

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
It is a pretty well known fact that KVM does not support MMIO emulation
without valid instruction syndrome information (ESR_EL2.ISV == 0). The
current kvm_pr_unimpl() is pretty useless, as it contains zero context
to relate the event to a vCPU. Replace it with a precise tracepoint that
dumps the relevant context so the user can make sense of what the guest
is doing.

Acked-by: Zenghui Yu <yuzenghui@huawei.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231026205306.3045075-1-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Zenghui Yu authored
It looks like a mistake to issue ISB *after* reading PAR_EL1; we should
instead perform it between the AT instruction and the reads of PAR_EL1.
According to DDI0487J.a IJTYVP, "When an address translation instruction
is executed, explicit synchronization is required to guarantee the
result is visible to subsequent direct reads of PAR_EL1." Otherwise all
guest_at testcases fail on my box with

  ==== Test Assertion Failure ====
  aarch64/page_fault_test.c:142: par & 1 == 0
  pid=1355864 tid=1355864 errno=4 - Interrupted system call
     1  0x0000000000402853: vcpu_run_loop at page_fault_test.c:681
     2  0x0000000000402cdb: run_test at page_fault_test.c:730
     3  0x0000000000403897: for_each_guest_mode at guest_modes.c:100
     4  0x00000000004019f3: for_each_test_and_guest_mode at page_fault_test.c:1105
     5   (inlined by) main at page_fault_test.c:1131
     6  0x0000ffffb153c03b: ?? ??:0
     7  0x0000ffffb153c113: ?? ??:0
     8  0x0000000000401aaf: _start at ??:?
  0x1 != 0x0 (par & 1 != 0)

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231007124043.626-2-yuzenghui@huawei.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Zenghui Yu authored
Running page_fault_test on a Cortex A72 fails with

  Test: ro_memslot_no_syndrome_guest_cas
  Testing guest mode: PA-bits:40, VA-bits:48, 4K pages
  Testing memory backing src type: anonymous

  ==== Test Assertion Failure ====
  aarch64/page_fault_test.c:117: guest_check_lse()
  pid=1944087 tid=1944087 errno=4 - Interrupted system call
     1  0x00000000004028b3: vcpu_run_loop at page_fault_test.c:682
     2  0x0000000000402d93: run_test at page_fault_test.c:731
     3  0x0000000000403957: for_each_guest_mode at guest_modes.c:100
     4  0x00000000004019f3: for_each_test_and_guest_mode at page_fault_test.c:1108
     5   (inlined by) main at page_fault_test.c:1134
     6  0x0000ffff868e503b: ?? ??:0
     7  0x0000ffff868e5113: ?? ??:0
     8  0x0000000000401aaf: _start at ??:?
  guest_check_lse()

because we don't have a guest_prepare stage to check the presence of
FEAT_LSE and skip the related guest_cas testing, and we end up failing
in GUEST_ASSERT(guest_check_lse()).

Add the missing .guest_prepare() where it's indeed required.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231007124043.626-1-yuzenghui@huawei.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
It is possible for multiple vCPUs to fault on the same IPA and attempt
to resolve the fault. One of the page table walks will actually update
the PTE and the rest will return -EAGAIN per our race detection scheme.
KVM elides the TLB invalidation on the racing threads as the return
value is nonzero.

Before commit a12ab137 ("KVM: arm64: Use local TLBI on permission
relaxation") KVM always used broadcast TLB invalidations when handling
permission faults, which had the convenient property of making the
stage-2 updates visible to all CPUs in the system. However now we do a
local invalidation, and TLBI elision leads to the vCPU thread faulting
again on the stale entry. Remember that the architecture permits the TLB
to cache translations that precipitate a permission fault.

Invalidate the TLB entry responsible for the permission fault if the
stage-2 descriptor has been relaxed, regardless of which thread actually
did the job.

Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230922223229.1608155-1-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
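A compile-only sketch of the fix described above (the type and helper
names are stand-ins, not KVM's actual functions): the invalidation is
keyed off whether the descriptor ended up relaxed, not off which thread
won the race.

  #include <errno.h>

  struct stage2_mmu;                                             /* stand-in type */
  int relax_stage2_perms(struct stage2_mmu *mmu, unsigned long ipa);    /* hypothetical */
  void local_invalidate_ipa(struct stage2_mmu *mmu, unsigned long ipa); /* hypothetical */

  /*
   * The descriptor may have been relaxed by this thread (ret == 0) or by
   * a racing thread (ret == -EAGAIN). Either way the TLB may still hold
   * the stale, more restrictive entry, so invalidate it before retrying.
   */
  int handle_stage2_perm_fault(struct stage2_mmu *mmu, unsigned long ipa)
  {
      int ret = relax_stage2_perms(mmu, ipa);

      if (ret == 0 || ret == -EAGAIN)
          local_invalidate_ipa(mmu, ipa);

      return ret == -EAGAIN ? 0 : ret;
  }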
-
- 25 Oct, 2023 5 commits
-
-
Marc Zyngier authored
When trapping accesses from a NV guest that tries to access
SPSR_{irq,abt,und,fiq}, make sure we handle them as RAZ/WI, as if
AArch32 wasn't implemented. This involves a bit of repainting to make
the visibility handler more generic.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231023095444.1587322-6-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
DBGVCR32_EL2, DACR32_EL2, IFSR32_EL2 and FPEXC32_EL2 are required to
UNDEF when AArch32 isn't implemented, which is definitely the case when
running NV. Given that this is the only case where these registers can
trap, unconditionally inject an UNDEF exception.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20231023095444.1587322-5-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Miguel Luis authored
Implement a fine-grained approach in the _EL2 sysreg range instead of
the current broad catch-all trap. This ensures that we don't mistakenly
inject the wrong exception into the guest.

[maz: commit message massaging, dropped secure and AArch32 registers
 from the list]

Fixes: d0fc0a25 ("KVM: arm64: nv: Add trap forwarding for HCR_EL2")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Miguel Luis <miguel.luis@oracle.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231023095444.1587322-4-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Miguel Luis authored
Some _EL2 encodings are missing. Add them.

Signed-off-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
[maz: dropped secure encodings]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231023095444.1587322-3-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Miguel Luis authored
Some _EL12 encodings are missing. Add them.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Miguel Luis <miguel.luis@oracle.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231023095444.1587322-2-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
- 24 Oct, 2023 2 commits
-
-
Oliver Upton authored
Suzuki noticed that KVM's PMU emulation is oblivious to the NSU and NSK
event filter bits. On systems that have EL3 these bits modify the filter
behavior in non-secure EL0 and EL1, respectively. Even though the kernel
doesn't use these bits, it is entirely possible some other guest OS
does. Additionally, it would appear that these and the M bit are
required by the architecture if EL3 is implemented.

Allow the EL3 event filter bits to be set if EL3 is advertised in the
guest's ID register. Implement the behavior of NSU and NSK according to
the pseudocode, and entirely ignore the M bit for perf event creation.

Reported-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20231019185618.3442949-3-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
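For reference, here is a small self-contained sketch of those pseudocode
rules (the macro names and the helper are illustrative, not KVM's code):
in Non-secure state, NSU and NSK invert the meaning of U and P
respectively, and NSH alone gates counting at EL2. The M bit is omitted
because KVM ignores it when creating the backing perf event.

  #include <stdbool.h>
  #include <stdint.h>

  /* PMEVTYPER<n>_EL0 filter bits (architectural bit positions) */
  #define PMEVTYPER_P     (1u << 31)  /* exclude EL1 */
  #define PMEVTYPER_U     (1u << 30)  /* exclude EL0 */
  #define PMEVTYPER_NSK   (1u << 29)  /* Non-secure EL1 filter, EL3 only */
  #define PMEVTYPER_NSU   (1u << 28)  /* Non-secure EL0 filter, EL3 only */
  #define PMEVTYPER_NSH   (1u << 27)  /* count at EL2, EL2 only */

  /* NSK/NSU only take effect if EL3 is implemented, NSH only if EL2 is. */
  static bool event_counted_at(int el, uint32_t evtyper,
                               bool have_el2, bool have_el3)
  {
      bool p   = evtyper & PMEVTYPER_P;
      bool u   = evtyper & PMEVTYPER_U;
      bool nsk = have_el3 && (evtyper & PMEVTYPER_NSK);
      bool nsu = have_el3 && (evtyper & PMEVTYPER_NSU);
      bool nsh = have_el2 && (evtyper & PMEVTYPER_NSH);

      switch (el) {
      case 0: return u == nsu;   /* NS EL0: NSU flips the meaning of U */
      case 1: return p == nsk;   /* NS EL1: NSK flips the meaning of P */
      case 2: return nsh;        /* EL2: counted only when NSH is set */
      default: return false;
      }
  }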
-
Oliver Upton authored
The NSH bit, which filters event counting at EL2, is required by the
architecture if an implementation has EL2. Even though KVM doesn't
support nested virt yet, it makes no effort to hide the existence of EL2
from the ID registers. Userspace can, however, change the value of PFR0
to hide EL2. Align KVM's sysreg emulation with the architecture and make
NSH RES0 if EL2 isn't advertised. Keep in mind the bit is ignored when
constructing the backing perf event.

While at it, build the event type mask using explicit field definitions
instead of relying on ARMV8_PMU_EVTYPE_MASK. KVM probably should've been
doing this in the first place, as it avoids changes to the
aforementioned mask affecting sysreg emulation.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20231019185618.3442949-2-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
- 23 Oct, 2023 1 commit
-
-
Marc Zyngier authored
We currently have a global VTCR_EL2 value for each guest, even if the
guest uses NV. This implies that the guest's own S2 must fit in the
host's. This is odd, for multiple reasons:

- the PARange values and the number of IPA bits don't necessarily match:
  you can have 33 bits of IPA space, and yet you can only describe 32 or
  36 bits of PARange

- When userspace sets the IPA space, it creates a contract with the
  kernel saying "this is the IPA space I'm prepared to handle". At no
  point does it constrain the guest's own IPA space as long as the guest
  doesn't try to use a [I]PA outside of the IPA space set by userspace

- We don't even try to hide the value of ID_AA64MMFR0_EL1.PARange.

And then there is the consequence of the above: if a guest tries to
create a S2 that has for input address something that is larger than the
IPA space defined by the host, we inject a fatal exception.

This is no good. For all intents and purposes, a guest should be able to
have the S2 it really wants, as long as the *output* address of that S2
isn't outside of the IPA space. For that, we need to have a per-s2_mmu
VTCR_EL2 setting, which allows us to represent the full PARange. Move
the vtcr field into the s2_mmu structure, which has no impact
whatsoever, except for NV.

Note that once we are able to override ID_AA64MMFR0_EL1.PARange from
userspace, we'll also be able to restrict the size of the shadow S2 that
NV uses.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231012205108.3937270-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
- 20 Oct, 2023 5 commits
-
-
Oliver Upton authored
To date the VHE code has aggressively reloaded the stage-2 MMU context
on every guest entry, despite the fact that this isn't necessary. This
was probably done for consistency with the nVHE code, which needs to
switch in/out the stage-2 MMU context as both the host and guest run at
EL1.

Hoist __load_stage2() into kvm_vcpu_load_vhe(), thus avoiding a reload
on every guest entry/exit. This is likely to be beneficial to systems
with one of the speculative AT errata, as there is now one fewer context
synchronization event on the guest entry path. Additionally, it is
possible that implementations have hitched correctness mitigations on
writes to VTTBR_EL2, which are now elided on guest re-entry.

Note that __tlb_switch_to_guest() is deliberately left untouched as it
can be called outside the context of a running vCPU.

Link: https://lore.kernel.org/r/20231018233212.2888027-6-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
The names for the helpers we expose to the 'generic' KVM code are a bit
imprecise; we switch the EL0 + EL1 sysreg context and set up trap
controls that do not need to change for every guest entry/exit. Rename +
shuffle things around a bit in preparation for loading the stage-2 MMU
context on vcpu_load().

Link: https://lore.kernel.org/r/20231018233212.2888027-5-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
Naturally, a change to the VMID for an MMU implies a new value for
VTTBR. Reload on VMID change in anticipation of loading stage-2 on
vcpu_load() instead of every guest entry.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231018233212.2888027-4-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
An MMU notifier could cause us to clobber the stage-2 context loaded on
a CPU when we switch to another VM's context to invalidate. This isn't
an issue right now as the stage-2 context gets reloaded on every guest
entry, but is disastrous when moving __load_stage2() into the
vcpu_load() path.

Restore the previous stage-2 context on the way out of a TLB
invalidation if we installed something else. Deliberately do this after
TGE=1 is synchronized to keep things safe in light of the speculative AT
errata.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231018233212.2888027-3-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
HCR_EL2.TGE=1 is sufficient to disable stage-2 translation, so there's
no need to explicitly zero VTTBR_EL2.

Link: https://lore.kernel.org/r/20231018233212.2888027-2-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
- 05 Oct, 2023 3 commits
-
-
Oliver Upton authored
The smccc_filter maple tree is only populated if userspace attempted to
configure it. Use the state of the maple tree to determine if the filter
has been configured, eliminating the VM flag.

Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231004234947.207507-4-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
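A minimal sketch of the resulting predicate, assuming the smccc_filter
maple tree sits in kvm->arch as it does in arm64 KVM (mtree_empty() is
the stock maple tree helper; the function name here is illustrative):

  #include <linux/kvm_host.h>
  #include <linux/maple_tree.h>

  /* The tree itself is the source of truth: empty == never configured. */
  static inline bool kvm_smccc_filter_configured(struct kvm *kvm)
  {
      return !mtree_empty(&kvm->arch.smccc_filter);
  }

This only works because the reserved ranges are no longer pre-populated;
they are inserted when userspace first writes to the filter, as the next
entry in this series describes.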
-
Oliver Upton authored
The reserved ranges are only useful for preventing userspace from adding
a rule that intersects with functions we must handle in KVM. If
userspace never writes to the SMCCC filter then this is all just wasted
work/memory. Insert reserved ranges on the first call to
KVM_ARM_VM_SMCCC_FILTER.

Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231004234947.207507-3-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Oliver Upton authored
Eventually we can drop the VM flag, move around the existing
implementation for now.

Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231004234947.207507-2-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
- 30 Sep, 2023 12 commits
-
-
Marc Zyngier authored
It goes without saying, but it is probably better to spell it out:

If userspace tries to restore a VM, but creates vcpus and/or RDs in a
different order, the vcpu/RD mapping will be different. Yes, our API is
an ugly piece of crap and I can't believe that we missed this.

If we want to relax the above, we'll need to define a new userspace API
that allows the mapping to be specified, rather than relying on the
kernel to perform the mapping on its own.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-12-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
Our affinity-based SGI injection code is a bit daft. We iterate over all
the CPUs trying to match the set of affinities that the guest is trying
to reach, leading to some very bad behaviours if the selected targets
are at a high vcpu index.

Instead, we can now use the fact that we have an optimised MPIDR to vcpu
mapping, and only look at the relevant values. This results in a much
faster injection for large VMs, and in a near constant time,
irrespective of the position in the vcpu index space.

As a bonus, this is mostly deleting a lot of hard-to-read code. Nobody
will complain about that.

Suggested-by: Xu Zhao <zhaoxu.35@bytedance.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-11-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
If our fancy little table is present when calling kvm_mpidr_to_vcpu(),
use it to recover the corresponding vcpu.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-10-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
The MPIDR_EL1 register contains a unique value that identifies the CPU.
The only problem with it is that it is stupidly large (32 bits, once the
useless stuff is removed).

Trying to obtain a vcpu from an MPIDR value is a fairly common, yet
costly operation: we iterate over all the vcpus until we find the
correct one. While this is cheap for small VMs, it is pretty expensive
on large ones, especially if you are trying to get to the one that's at
the end of the list...

In order to help with this, it is important to realise that the MPIDR
values are actually structured, and that implementations tend to use a
small number of significant bits in the 32bit space.

We can use this fact to our advantage by computing a small hash table
that uses the "compression" of the significant MPIDR bits as an index,
giving us the vcpu index as a result.

Given that the MPIDR values can be supplied by userspace, and that an
evil VMM could decide to make *all* bits significant, resulting in a
4G-entry table, we only use this method if the resulting table fits in a
single page. Otherwise, we fall back to the good old iterative method.

Nothing uses that table just yet, but keep your eyes peeled.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-9-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
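To illustrate the "compression", here is a self-contained userspace
sketch (made-up helper names, not the kernel implementation): compute a
mask of the affinity bits that actually vary across vCPUs, then gather
each MPIDR's masked bits into a dense table index.

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Gather the bits of @mpidr selected by @mask into a dense index,
   * in the spirit of a PEXT/BEXT instruction.
   */
  static uint16_t mpidr_index(uint64_t mpidr, uint64_t mask)
  {
      uint16_t index = 0;
      int out = 0;

      for (int bit = 0; bit < 64; bit++) {
          if (!(mask & (1ULL << bit)))
              continue;
          if (mpidr & (1ULL << bit))
              index |= 1u << out;
          out++;
      }
      return index;
  }

  int main(void)
  {
      /* Toy topology: Aff1 selects the cluster, Aff0 the CPU within it. */
      uint64_t mpidrs[] = { 0x0000, 0x0001, 0x0100, 0x0101 };
      uint64_t mask = 0;

      /* Significant bits = bits that differ from the first MPIDR. */
      for (int i = 1; i < 4; i++)
          mask |= mpidrs[0] ^ mpidrs[i];

      /* 2 significant bits -> a 4-entry table instead of a linear scan. */
      for (int i = 0; i < 4; i++)
          printf("mpidr %#06llx -> index %u\n",
                 (unsigned long long)mpidrs[i], mpidr_index(mpidrs[i], mask));
      return 0;
  }

With only a couple of significant bits the table stays tiny; as noted
above, the kernel falls back to the iterative lookup whenever the table
would exceed a single page.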
-
Marc Zyngier authored
By definition, MPIDR_EL1 cannot be modified by the guest. This means it
is pointless to check whether this is loaded on the CPU. Simplify the
kvm_vcpu_get_mpidr_aff() helper to directly access the in-memory value.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-8-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
While vcpu_id isn't necessarily a bad choice as an identifier for the
currently running vcpu, it is provided by userspace, and there is close
to no guarantee that it would be unique. Switch it to vcpu_idx instead,
for which we have much stronger guarantees.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-7-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
When dumping the debug information, use vcpu_idx instead of vcpu_id, as
this is independent of any userspace influence.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-6-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
When parsing a GICv2 attribute that contains a cpuid, handle this as the
vcpu_id, not a vcpu_idx, as userspace cannot really know the mapping
between the two. For this, use kvm_get_vcpu_by_id() instead of
kvm_get_vcpu().

Take this opportunity to get rid of the pointless check against
online_vcpus, which doesn't make much sense either, and switch to
FIELD_GET as a way to extract the vcpu_id.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-5-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
As we're about to change the way SGIs are sent, start by splitting out
some of the basic functionality: instead of intermingling the broadcast
and non-broadcast cases with the actual SGI generation, perform the
following cleanups:

- move the SGI queuing into its own helper
- split the broadcast code from the affinity-driven code
- replace the mask/shift combinations with FIELD_GET()
- fix the confusion between vcpu_id and vcpu when handling the broadcast
  case

The result is much more readable, and paves the way for further
optimisations.

Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-4-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
Since our emulated ITS advertises GITS_TYPER.PTA=0, the target address
associated with a collection is a PE number and not an address. So far,
so good. However, the PE number is what userspace has given us (aka the
vcpu_id), and not the internal vcpu index.

Make sure we consistently retrieve the vcpu by ID rather than by index,
adding a helper that deals with most of the cases. We also get rid of
the pointless (and bogus) comparisons to online_vcpus, which don't
really make sense.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-3-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
Marc Zyngier authored
Passing a vcpu_id to kvm_vgic_inject_irq() is silly for two reasons:

- we often confuse vcpu_id and vcpu_idx
- we eventually have to convert it back to a vcpu
- we can't count

Instead, pass a vcpu pointer, which is unambiguous. A NULL vcpu is also
allowed for interrupts that are not private to a vcpu (such as SPIs).

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230927090911.3355209-2-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
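A sketch of the interface change from memory (parameter names are
approximate; include/kvm/arm_vgic.h has the authoritative prototype):
the vcpu_id parameter is replaced by a vcpu pointer that callers resolve
themselves, with NULL reserved for shared interrupts.

  #include <stdbool.h>   /* in the kernel, bool comes from <linux/types.h> */

  struct kvm;
  struct kvm_vcpu;

  /*
   * Old shape (approximate): the vgic received a userspace-controlled
   * vcpu_id and had to translate it back into a vcpu internally:
   *
   *   int kvm_vgic_inject_irq(struct kvm *kvm, int vcpu_id,
   *                           unsigned int intid, bool level, void *owner);
   */

  /* New shape: the caller passes the vcpu directly (NULL for SPIs). */
  int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
                          unsigned int intid, bool level, void *owner);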
-
Vincent Donnefort authored
GUP affects a refcount common to all pages forming the THP. There is
therefore no need to move the refcount from a tail to the head page.
Under the hood it decrements and increments the same counter.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230928173205.2826598-2-vdonnefort@google.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
-
- 24 Sep, 2023 1 commit
-
-
Linus Torvalds authored
-