- 13 Oct, 2022 1 commit
-
Gavin Shan authored
vm_userspace_mem_region_add() requires the memory size to be aligned to the host page size. However, memslot_modification_stress_test provides only one guest page, which triggers a failure in the scenario of a 64KB-page-size host and a 4KB-page-size guest, as the following messages indicate:

  # ./memslot_modification_stress_test
  Testing guest mode: PA-bits:40, VA-bits:48, 4K pages
  guest physical test memory: [0xffbfff0000, 0xffffff0000)
  Finished creating vCPUs
  Started all vCPUs
  ==== Test Assertion Failure ====
  lib/kvm_util.c:824: vm_adjust_num_guest_pages(vm->mode, npages) == npages
  pid=5712 tid=5712 errno=0 - Success
     1 0x0000000000404eeb: vm_userspace_mem_region_add at kvm_util.c:822
     2 0x0000000000401a5b: add_remove_memslot at memslot_modification_stress_test.c:82
     3  (inlined by) run_test at memslot_modification_stress_test.c:110
     4 0x0000000000402417: for_each_guest_mode at guest_modes.c:100
     5 0x00000000004016a7: main at memslot_modification_stress_test.c:187
     6 0x0000ffffb8cd4383: ?? ??:0
     7 0x0000000000401827: _start at :?
  Number of guest pages is not compatible with the host. Try npages=16

Fix the issue by providing 16 guest pages to the memory slot for this particular combination of a 64KB-page-size host and a 4KB-page-size guest on aarch64.

Fixes: ef4c9f4f ("KVM: selftests: Fix 32-bit truncation of vm_get_max_gfn()")
Signed-off-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221013063020.201856-1-gshan@redhat.com
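For illustration, a minimal standalone sketch of the alignment rule behind the assertion; min_guest_pages_per_slot() is a hypothetical name, not a helper from the selftests library:

```c
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical helper mirroring the constraint behind the assertion:
 * a memslot must cover a whole number of host pages, so the guest-page
 * count must be a multiple of (host page size / guest page size). */
static uint64_t min_guest_pages_per_slot(uint64_t guest_page_size,
					 uint64_t host_page_size)
{
	if (host_page_size <= guest_page_size)
		return 1;
	return host_page_size / guest_page_size;
}

int main(void)
{
	/* 64K host pages, 4K guest pages -> 16, matching "Try npages=16". */
	printf("%" PRIu64 "\n", min_guest_pages_per_slot(4096, 65536));
	return 0;
}
```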
-
- 10 Oct, 2022 1 commit
-
Zenghui Yu authored
Commit 98f94ce4 ("KVM: selftests: Move KVM_CREATE_DEVICE_TEST code to separate helper") wrongly converted a "real" GIC device creation to __kvm_test_create_device() and caused the test failure on my D05 (which supports v2 emulation). Fix it.

Fixes: 98f94ce4 ("KVM: selftests: Move KVM_CREATE_DEVICE_TEST code to separate helper")
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221009033131.365-1-yuzenghui@huawei.com
-
- 09 Oct, 2022 3 commits
-
Vincent Donnefort authored
For historical reasons, the VHE code inherited the build configuration from nVHE. Now that those two parts have their own folder and makefile, we can enable stack protection and branch profiling for VHE.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221004154216.2833636-1-vdonnefort@google.com
-
Oliver Upton authored
Presently stage2_apply_range() works on a batch of memory addressed by a stage 2 root table entry for the VM. Depending on the IPA limit of the VM and PAGE_SIZE of the host, this could address a massive range of memory. Some examples:

  4 level, 4K paging  -> 512 GB batch size
  3 level, 64K paging -> 4 TB batch size

Unsurprisingly, working on such a large range of memory can lead to soft lockups. When running dirty_log_perf_test:

  ./dirty_log_perf_test -m -2 -s anonymous_thp -b 4G -v 48

  watchdog: BUG: soft lockup - CPU#0 stuck for 45s! [dirty_log_perf_:16703]
  Modules linked in: vfat fat cdc_ether usbnet mii xhci_pci xhci_hcd sha3_generic gq(O)
  CPU: 0 PID: 16703 Comm: dirty_log_perf_ Tainted: G O 6.0.0-smp-DEV #1
  pstate: 80400009 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc : dcache_clean_inval_poc+0x24/0x38
  lr : clean_dcache_guest_page+0x28/0x4c
  sp : ffff800021763990 pmr_save: 000000e0
  x29: ffff800021763990 x28: 0000000000000005 x27: 0000000000000de0
  x26: 0000000000000001 x25: 00400830b13bc77f x24: ffffad4f91ead9c0
  x23: 0000000000000000 x22: ffff8000082ad9c8 x21: 0000fffafa7bc000
  x20: ffffad4f9066ce50 x19: 0000000000000003 x18: ffffad4f92402000
  x17: 000000000000011b x16: 000000000000011b x15: 0000000000000124
  x14: ffff07ff8301d280 x13: 0000000000000000 x12: 00000000ffffffff
  x11: 0000000000010001 x10: fffffc0000000000 x9 : ffffad4f9069e580
  x8 : 000000000000000c x7 : 0000000000000000 x6 : 000000000000003f
  x5 : ffff07ffa2076980 x4 : 0000000000000001 x3 : 000000000000003f
  x2 : 0000000000000040 x1 : ffff0830313bd000 x0 : ffff0830313bcc40
  Call trace:
   dcache_clean_inval_poc+0x24/0x38
   stage2_unmap_walker+0x138/0x1ec
   __kvm_pgtable_walk+0x130/0x1d4
   __kvm_pgtable_walk+0x170/0x1d4
   __kvm_pgtable_walk+0x170/0x1d4
   __kvm_pgtable_walk+0x170/0x1d4
   kvm_pgtable_stage2_unmap+0xc4/0xf8
   kvm_arch_flush_shadow_memslot+0xa4/0x10c
   kvm_set_memslot+0xb8/0x454
   __kvm_set_memory_region+0x194/0x244
   kvm_vm_ioctl_set_memory_region+0x58/0x7c
   kvm_vm_ioctl+0x49c/0x560
   __arm64_sys_ioctl+0x9c/0xd4
   invoke_syscall+0x4c/0x124
   el0_svc_common+0xc8/0x194
   do_el0_svc+0x38/0xc0
   el0_svc+0x2c/0xa4
   el0t_64_sync_handler+0x84/0xf0
   el0t_64_sync+0x1a0/0x1a4

Use the largest supported block mapping for the configured page size as the batch granularity. In so doing the walker is guaranteed to visit a leaf only once.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221007234151.461779-3-oliver.upton@linux.dev
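As a rough standalone sketch of the batching idea (hypothetical names; the real logic lives in the arm64 KVM page-table code), the walk advances in steps that never span more than one largest-supported block mapping, with a rescheduling point between batches:

```c
#include <stdint.h>

typedef uint64_t u64;

/* Hypothetical sketch of the batching idea; not the actual kernel code. */
static u64 range_batch_end(u64 addr, u64 end, u64 block_size)
{
	/* Round up to the next block boundary, so no batch spans more
	 * than one largest-supported block mapping. */
	u64 boundary = (addr + block_size) & ~(block_size - 1);

	return boundary < end ? boundary : end;
}

static void apply_range_batched(u64 start, u64 end, u64 block_size,
				void (*fn)(u64 addr, u64 size))
{
	for (u64 addr = start; addr < end;) {
		u64 next = range_batch_end(addr, end, block_size);

		fn(addr, next - addr);
		/* In the kernel, a cond_resched()-style point goes here,
		 * which is what prevents the soft lockup shown above. */
		addr = next;
	}
}
```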
-
Oliver Upton authored
Work out the minimum page table level where KVM supports block mappings at compile time. While at it, rewrite the comment around supported block mappings to directly describe what KVM supports instead of phrasing it in terms of what it does not.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221007234151.461779-2-oliver.upton@linux.dev
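A compile-time sketch of the idea, under the assumption (true for the stage-2 format without LPA2) that a 4K granule supports level-1 1GB block mappings while 16K and 64K granules only support level-2 blocks; MIN_BLOCK_LEVEL is an illustrative name:

```c
#define PAGE_SHIFT	16	/* example: a 64K-page host */

/* Illustrative compile-time pick of the lowest-numbered page table
 * level at which block mappings exist for the configured granule:
 * level 1 with 4K pages, level 2 with 16K or 64K pages. */
#define MIN_BLOCK_LEVEL	((PAGE_SHIFT == 12) ? 1 : 2)
```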
-
- 01 Oct, 2022 3 commits
-
Marc Zyngier authored
* kvm-arm64/misc-6.1:
  : .
  : Misc KVM/arm64 fixes and improvements for v6.1:
  :
  : - Simplify the affinity check when moving a GICv3 collection
  :
  : - Tone down the shouting when kvm-arm.mode=protected is passed
  :   to a guest
  :
  : - Fix various comments
  :
  : - Advertise the new kvmarm@lists.linux.dev and deprecate the
  :   old Columbia list
  : .
  KVM: arm64: Advertise new kvmarm mailing list
  KVM: arm64: Fix comment typo in nvhe/switch.c
  KVM: selftests: Update top-of-file comment in psci_test
  KVM: arm64: Ignore kvm-arm.mode if !is_hyp_mode_available()
  KVM: arm64: vgic: Remove duplicate check in update_affinity_collection()

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
* kvm-arm64/dirty-log-ordered:
  : .
  : Retrofit some ordering into the existing dirty-ring API by:
  :
  : - relying on acquire/release semantics, which are the default on x86
  :   but need to be explicit on arm64
  :
  : - adding a new capability that indicates which flavour is supported,
  :   either with explicit ordering (arm64) or both implicit and explicit
  :   (x86), as suggested by Paolo at KVM Forum
  :
  : - documenting the requirements for this new capability on weakly
  :   ordered architectures
  :
  : - updating the selftests to do the right thing
  : .
  KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ACQ_REL if available
  KVM: selftests: dirty-log: Upgrade flag accesses to acquire/release semantics
  KVM: Document weakly ordered architecture requirements for dirty ring
  KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL
  KVM: Add KVM_CAP_DIRTY_LOG_RING_ACQ_REL capability and config option
  KVM: Use acquire/release semantics when accessing dirty ring GFN state

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
As announced on the kvmarm list, we're moving the mailing list over to kvmarm@lists.linux.dev:

<quote>
As you probably all know, the kvmarm mailing list has been hosted on Columbia's machines for as long as the project has existed (over 13 years).

After all this time, the university has decided to retire the list infrastructure and asked us to find a new hosting.

A new mailing list has been created on lists.linux.dev[1], and I'm kindly asking everyone interested in following the KVM/arm64 developments to start subscribing to it (and start posting your patches there).

I hope that people will move over to it quickly enough that we can soon give Columbia the green light to turn their systems off.

Note that the new list will only get archived automatically once we fully switch over, but I'll make sure we fill any gap and not lose any message.

In the meantime, please Cc both lists.

[...]

[1] https://subspace.kernel.org/lists.linux.dev.html
</quote>

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221001091245.3900668-1-maz@kernel.org
-
- 29 Sep, 2022 7 commits
-
Marc Zyngier authored
Pick KVM_CAP_DIRTY_LOG_RING_ACQ_REL if exposed by the kernel.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-7-maz@kernel.org
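The userspace side of such a probe can be sketched with the standard KVM_CHECK_EXTENSION ioctl, assuming headers new enough to define KVM_CAP_DIRTY_LOG_RING_ACQ_REL:

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: prefer the ACQ_REL flavour of the dirty ring when the kernel
 * offers it, falling back to the original capability otherwise. */
static int pick_dirty_ring_cap(int kvm_fd)
{
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_DIRTY_LOG_RING_ACQ_REL) > 0)
		return KVM_CAP_DIRTY_LOG_RING_ACQ_REL;
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_DIRTY_LOG_RING) > 0)
		return KVM_CAP_DIRTY_LOG_RING;
	return -1;	/* no dirty ring support */
}
```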
-
Marc Zyngier authored
In order to preserve ordering, make sure that the flag accesses in the dirty log are done using acquire/release accessors.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-6-maz@kernel.org
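In userspace, the upgrade amounts to replacing plain loads and stores of the gfn flags with acquire/release accesses. A standalone sketch using compiler builtins (the selftests use their own smp_load_acquire()/smp_store_release() wrappers; the struct and flag values below mirror the UAPI in <linux/kvm.h>):

```c
#include <stdint.h>

#define KVM_DIRTY_GFN_F_DIRTY	(1 << 0)	/* mirrors <linux/kvm.h> */
#define KVM_DIRTY_GFN_F_RESET	(1 << 1)

struct kvm_dirty_gfn {		/* mirrors the UAPI layout */
	uint32_t flags;
	uint32_t slot;
	uint64_t offset;
};

/* Acquire: reads of slot/offset cannot hoist above the flag check. */
static int dirty_gfn_is_dirtied(struct kvm_dirty_gfn *gfn)
{
	return __atomic_load_n(&gfn->flags, __ATOMIC_ACQUIRE) & KVM_DIRTY_GFN_F_DIRTY;
}

/* Release: prior reads of the entry complete before handing it back. */
static void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
{
	__atomic_store_n(&gfn->flags, KVM_DIRTY_GFN_F_RESET, __ATOMIC_RELEASE);
}
```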
-
Marc Zyngier authored
Now that the kernel can expose to userspace that its dirty ring management relies on explicit ordering, document these new requirements for VMMs to do the right thing.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-5-maz@kernel.org
-
Marc Zyngier authored
Since x86 is TSO (give or take), allow it to advertise the new ACQ_REL version of the dirty ring capability. No other change is required for it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-4-maz@kernel.org
-
Marc Zyngier authored
In order to differentiate between architectures that require no extra synchronisation when accessing the dirty ring and those that do, add a new capability (KVM_CAP_DIRTY_LOG_RING_ACQ_REL) that identifies the latter sort. TSO architectures can obviously advertise both, while relaxed architectures must only advertise the ACQ_REL version.

This requires some configuration symbol rejigging, with HAVE_KVM_DIRTY_RING being only indirectly selected by two top-level config symbols:

- HAVE_KVM_DIRTY_RING_TSO for strongly ordered architectures (x86)
- HAVE_KVM_DIRTY_RING_ACQ_REL for weakly ordered architectures (arm64)

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-3-maz@kernel.org
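The resulting Kconfig shape is roughly the following (a sketch of the structure described above, not the verbatim hunk):

```
config HAVE_KVM_DIRTY_RING
       bool

# x86-style: implicit TSO ordering suffices, ACQ_REL also works
config HAVE_KVM_DIRTY_RING_TSO
       bool
       select HAVE_KVM_DIRTY_RING

# arm64-style: only the explicit acquire/release flavour is safe
config HAVE_KVM_DIRTY_RING_ACQ_REL
       bool
       select HAVE_KVM_DIRTY_RING
```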
-
Marc Zyngier authored
The current implementation of the dirty ring has an implicit requirement that stores to the dirty ring from userspace must:

- be ordered with one another

- be visible from another CPU executing a ring reset

While these implicit requirements work well for x86 (and any other TSO-like architecture), they do not work for more relaxed architectures such as arm64, where stores to different addresses can be freely reordered, and loads from these addresses may not observe writes from another CPU unless the required barriers (or acquire/release semantics) are used.

In order to start fixing this, upgrade the ring reset accesses:

- the kvm_dirty_gfn_harvested() helper now uses acquire semantics so it is ordered after all previous writes, including those from userspace

- the kvm_dirty_gfn_set_invalid() helper now uses release semantics so that the next_slot and next_offset reads don't drift past the entry invalidation

This is only a partial fix, as the userspace side also needs upgrading.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-2-maz@kernel.org
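The kernel-side helpers described above end up shaped roughly like this (a kernel-context sketch following the commit's description, not the verbatim diff):

```c
/* Acquire: ordered after all previous writes to the entry, including
 * the userspace store that flagged it as harvested. */
static bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
{
	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
}

/* Release: reads of next_slot/next_offset cannot drift past the
 * invalidation of the entry. */
static void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
{
	smp_store_release(&gfn->flags, 0);
}
```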
-
Wei-Lin Chang authored
Fix the comment above __hyp_vgic_restore_state() to say VHE instead of VEH, and change the underscore to a dash to match the comment above __hyp_vgic_save_state().

Signed-off-by: Wei-Lin Chang <r09922117@csie.ntu.edu.tw>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220929042839.24277-1-r09922117@csie.ntu.edu.tw
-
- 28 Sep, 2022 1 commit
-
Oliver Upton authored
Fix the comment to accurately describe the test and the recently added SYSTEM_SUSPEND test case. What was once psci_cpu_on_test was renamed and extended to squeeze in a test case for PSCI SYSTEM_SUSPEND. Nonetheless, the author of those changes (whoever they may be...) failed to update the file comment to reflect what had changed.

Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220819162100.213854-1-oliver.upton@linux.dev
-
- 26 Sep, 2022 2 commits
-
Elliot Berman authored
Ignore kvm-arm.mode if !is_hyp_mode_available(). Specifically, we want to avoid switching kvm_mode to KVM_MODE_PROTECTED if hypervisor mode is not available. This prevents the "Protected KVM" CPU capability from being reported when Linux is booting in EL1 and would not have KVM enabled.

Reasonably though, we should warn if the command line requests a KVM mode at all when KVM isn't actually available. Allow "kvm-arm.mode=none" to skip the warning, since this would disable KVM anyway.

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220920190658.2880184-1-quic_eberman@quicinc.com
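The resulting control flow can be sketched as follows (a kernel-context sketch; the ordering of checks follows the description above rather than the exact diff, and the warning text is illustrative):

```c
static int early_kvm_mode_cfg(char *arg)
{
	if (!arg)
		return -EINVAL;

	/* "none" disables KVM anyway, so no warning is warranted. */
	if (strcmp(arg, "none") == 0) {
		kvm_mode = KVM_MODE_NONE;
		return 0;
	}

	/* Booted at EL1: any other requested mode is meaningless. */
	if (!is_hyp_mode_available()) {
		pr_warn_once("KVM is not available. Ignoring kvm-arm.mode\n");
		return 0;
	}

	if (strcmp(arg, "protected") == 0) {
		kvm_mode = KVM_MODE_PROTECTED;
		return 0;
	}

	return -EINVAL;	/* other modes elided in this sketch */
}
```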
-
Gavin Shan authored
The 'coll' parameter to update_affinity_collection() is never NULL, so comparing it with 'ite->collection' is enough to cover both the NULL case and the "another collection" case. Remove the duplicate check in update_affinity_collection().

Signed-off-by: Gavin Shan <gshan@redhat.com>
[maz: repainted commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220923065447.323445-1-gshan@redhat.com
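In other words (a rough before/after sketch; the surrounding loop is elided):

```c
/* Before: */
if (!ite->collection || coll != ite->collection)
	continue;

/* After: 'coll' is never NULL, so a NULL ite->collection can never
 * compare equal to it, making the explicit NULL test redundant. */
if (coll != ite->collection)
	continue;
```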
-
- 19 Sep, 2022 6 commits
-
Marc Zyngier authored
* kvm-arm64/single-step-async-exception:
  : .
  : Single-step fixes from Reiji Watanabe:
  :
  : "This series fixes two bugs of single-step execution enabled by
  : userspace, and add a test case for KVM_GUESTDBG_SINGLESTEP to
  : the debug-exception test to verify the single-step behavior."
  : .
  KVM: arm64: selftests: Add a test case for KVM_GUESTDBG_SINGLESTEP
  KVM: arm64: selftests: Refactor debug-exceptions to make it amenable to new test cases
  KVM: arm64: Clear PSTATE.SS when the Software Step state was Active-pending
  KVM: arm64: Preserve PSTATE.SS for the guest while single-step is enabled

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Reiji Watanabe authored
Add a test case for KVM_GUESTDBG_SINGLESTEP to the debug-exceptions test. The test enables single-step execution from userspace and checks that an exit to userspace occurs for each instruction that is stepped. The default number of test iterations is set high enough to reliably reproduce the problem fixed by the previous patch on an Ampere Altra machine.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220917010600.532642-5-reijiw@google.com
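The userspace half of such a test can be sketched with the plain KVM ioctl API (vcpu_fd and the mmap'ed kvm_run structure are assumed to be set up already; the selftests wrap these ioctls in their own helpers):

```c
#include <assert.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: enable single-step, run the vCPU, and expect a debug exit
 * back to userspace after exactly one stepped instruction. */
static void step_one_insn(int vcpu_fd, struct kvm_run *run)
{
	struct kvm_guest_debug debug = {
		.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP,
	};

	assert(ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &debug) == 0);
	assert(ioctl(vcpu_fd, KVM_RUN, 0) == 0);
	assert(run->exit_reason == KVM_EXIT_DEBUG);
}
```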
-
Reiji Watanabe authored
Split up the current test into a helper, but leave the debug version checking in main(), to make it convenient to add a new debug exception test case in a subsequent patch.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220917010600.532642-4-reijiw@google.com
-
Reiji Watanabe authored
While userspace enables single-step, if the Software Step state at the last guest exit was "Active-pending", clear PSTATE.SS on guest entry to restore the state.

Currently, KVM sets PSTATE.SS to 1 on every guest entry while userspace enables single-step for the vCPU (with KVM_GUESTDBG_SINGLESTEP). This means KVM always makes the vCPU's Software Step state "Active-not-pending" on guest entry, which lets the vCPU perform a single step (after which a Software Step exception is taken). This could cause an extra single step (without returning to userspace) if the Software Step state at the last guest exit was "Active-pending" (i.e. the last exit was triggered by an asynchronous exception after the single step was performed, but before the Software Step exception was taken; see "Figure D2-3 Software step state machine" and "D2.12.7 Behavior in the active-pending state" in ARM DDI 0487I.a for more information about this behavior).

Fix this by clearing PSTATE.SS on guest entry if the Software Step state at the last exit was "Active-pending", so that KVM restores the state (and the exception is taken before any further single step is performed).

Fixes: 337b99bf ("KVM: arm64: guest debug, add support for single-step")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220917010600.532642-3-reijiw@google.com
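The entry-side fix has roughly this shape (a kernel-context sketch; the flag name follows the commit's description and may not match the diff exactly):

```c
if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
	if (vcpu_get_flag(vcpu, DBG_SS_ACTIVE_PENDING))
		/* Active-pending: keep PSTATE.SS clear so the pending
		 * Software Step exception is taken first. */
		*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
	else
		/* Active-not-pending: let the vCPU step one instruction. */
		*vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
}
```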
-
Reiji Watanabe authored
Preserve the PSTATE.SS value for the guest while userspace enables single-step (i.e. while KVM manipulates PSTATE.SS) for the vCPU.

Currently, while userspace enables single-step for the vCPU (with KVM_GUESTDBG_SINGLESTEP), KVM sets PSTATE.SS to 1 on every guest entry without saving its original value. When userspace disables single-step, KVM doesn't restore the original value for the subsequent guest entry (it uses the current value instead). Exception return instructions copy PSTATE.SS from SPSR_ELx.SS only in certain cases when single-step is enabled (and set it to 0 in other cases), so in practice the value matters only when the guest enables single-step (and when the guest's Software Step state isn't affected by single-step enabled by userspace).

Fix this by preserving the original PSTATE.SS value while userspace enables single-step, and restoring that value once it is disabled.

This fix modifies the behavior of GET_ONE_REG/SET_ONE_REG for PSTATE.SS while single-step is enabled by userspace. Presently, GET_ONE_REG/SET_ONE_REG gets/sets the current PSTATE.SS value, which KVM will override on the next guest entry (i.e. the value userspace gets/sets is not used for the next guest entry). With this patch, GET_ONE_REG/SET_ONE_REG will get/set the guest's preserved value, which KVM will preserve and try to restore after single-step is disabled.

Fixes: 337b99bf ("KVM: arm64: guest debug, add support for single-step")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220917010600.532642-2-reijiw@google.com
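The save/restore idea reduces to something like the following (a kernel-context sketch; the debug_state.pstate_ss field name is illustrative, not the exact one from the patch):

```c
static void save_guest_pstate_ss(struct kvm_vcpu *vcpu)
{
	/* On enabling KVM_GUESTDBG_SINGLESTEP: remember the guest's bit
	 * before KVM starts forcing it on every entry. */
	vcpu->arch.debug_state.pstate_ss = *vcpu_cpsr(vcpu) & DBG_SPSR_SS;
}

static void restore_guest_pstate_ss(struct kvm_vcpu *vcpu)
{
	/* On disabling it: put the preserved bit back instead of
	 * keeping whatever value KVM last forced. */
	*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
	*vcpu_cpsr(vcpu) |= vcpu->arch.debug_state.pstate_ss;
}
```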
-
Marc Zyngier authored
Merge arm64/for-next/sysreg in order to avoid upstream conflicts due to the never-ending sysreg repainting...

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 16 Sep, 2022 7 commits
-
Mark Brown authored
Convert ID_AA64AFRn_EL1 to automatic generation as per DDI0487I.a, no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-7-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Convert ID_AA64DFR1_EL1 to automatic generation as per DDI0487I.a, no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-6-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Convert ID_AA64DFR0_EL1 to automatic generation as per DDI0487I.a, no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Currently the kernel refers to the versions of the PMU and SPE features by the version of the architecture where those features were updated, but the architecture refers to them using the FEAT_ names for the features. To improve consistency and help with updating for newer features, and since v9 will make our current naming scheme a bit more confusing, update the macros identifying features to use the FEAT_-based scheme.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Normally we include the full register name in the defines for fields within registers, but this has not been followed for ID registers. In preparation for automatic generation of defines, add the _EL1s into the defines for ID_AA64DFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1 does not align well with kernel conventions, using as it does a lot of MixedCase in various arrangements. In preparation for automatically generating the defines for this register, rename the defines used to match what is in the architecture.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
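Taken together with the previous patch, the renames look roughly like this for one field (illustrative; the exact names in the final patch may differ):

```c
/* old: kernel-style ALLCAPS, no register suffix */
#define ID_AA64DFR0_PMUVER_SHIFT	28

/* new: full register name plus the architecture's MixedCase field name */
#define ID_AA64DFR0_EL1_PMUVer_SHIFT	28
```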
-
Marc Zyngier authored
* kvm-arm64/aarch32-raz-idregs:
  : .
  : Rework AArch32 ID registers exposed by KVM on AArch64-only
  : systems by treating them as RAZ/WI instead of UNKNOWN as
  : architected, which allows them to be trivially migrated
  : between different systems.
  :
  : Patches courtesy of Oliver Upton.
  : .
  KVM: selftests: Add test for AArch32 ID registers
  KVM: arm64: Treat 32bit ID registers as RAZ/WI on 64bit-only system
  KVM: arm64: Add a visibility bit to ignore user writes
  KVM: arm64: Spin off helper for calling visibility hook
  KVM: arm64: Drop raz parameter from read_id_reg()
  KVM: arm64: Remove internal accessor helpers for id regs
  KVM: arm64: Use visibility hook to treat ID regs as RAZ

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 14 Sep, 2022 7 commits
-
Oliver Upton authored
Add a test to assert that KVM handles the AArch64 views of the AArch32 ID registers as RAZ/WI (writable only from userspace). For registers that were already hidden or unallocated, expect RAZ + invariant behavior.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-8-oliver.upton@linux.dev
-
Oliver Upton authored
One of the oddities of the architecture is that the AArch64 views of the AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any EL. Nonetheless, KVM exposes these registers to userspace for the sake of save/restore. It is possible that the UNKNOWN value could differ between systems, leading to a rejected write from userspace. Avoid the issue altogether by handling the AArch32 ID registers as RAZ/WI when on an AArch64-only system.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-7-oliver.upton@linux.dev
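The shape of the resulting visibility hook is roughly the following (a kernel-context sketch; REG_USER_WI is the write-ignore bit added by a companion patch in this series, and the helper names are close to, but not guaranteed to match, the actual code):

```c
static unsigned int aarch32_id_visibility(const struct kvm_vcpu *vcpu,
					  const struct sys_reg_desc *r)
{
	/* RAZ for the guest and write-ignore for userspace, so an
	 * UNKNOWN value saved on another host restores cleanly here. */
	if (!kvm_supports_32bit_el0())
		return REG_RAZ | REG_USER_WI;

	return id_visibility(vcpu, r);
}
```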
-
Oliver Upton authored
We're about to ignore writes to AArch32 ID registers on AArch64-only systems. Add a bit to indicate a register is handled as write-ignore when accessed from userspace.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-6-oliver.upton@linux.dev
-
Oliver Upton authored
No functional change intended.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-5-oliver.upton@linux.dev
-
Oliver Upton authored
There is no longer a need for caller-specified RAZ visibility. Hoist the call to sysreg_visible_as_raz() into read_id_reg() and drop the parameter. No functional change intended.

Suggested-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-4-oliver.upton@linux.dev
-
Oliver Upton authored
The internal accessors are only ever called once. Dump out their contents in the caller. No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-3-oliver.upton@linux.dev
-
Oliver Upton authored
The generic id reg accessors already handle RAZ registers by way of the visibility hook. Add a visibility hook that returns REG_RAZ unconditionally and throw out the RAZ-specific accessors.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-2-oliver.upton@linux.dev
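With the generic accessors consulting the visibility hook, making a register RAZ needs nothing more than this (a sketch mirroring the description above):

```c
static unsigned int raz_visibility(const struct kvm_vcpu *vcpu,
				   const struct sys_reg_desc *r)
{
	return REG_RAZ;
}
```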
-
- 09 Sep, 2022 2 commits
-
Mark Brown authored
The FEAT_NMI extension adds a new system register ALLINT for controlling NMI-related interrupt masking; add a definition of this register as per DDI0487H.a.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-29-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Convert SCXTNUM_EL1 to automatic generation as per DDI0487H.a, no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-28-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-