  2. 13 Oct, 2022 1 commit
    • KVM: selftests: Fix number of pages for memory slot in memslot_modification_stress_test · 05c2224d
      Gavin Shan authored
      vm_userspace_mem_region_add() requires the memory size to be aligned
      to the host page size. However, memslot_modification_stress_test
      provides only one guest page. This triggers a failure when the host
      uses 64KB pages and the guest uses 4KB pages, as the following
      messages indicate.
      
       # ./memslot_modification_stress_test
       Testing guest mode: PA-bits:40,  VA-bits:48,  4K pages
       guest physical test memory: [0xffbfff0000, 0xffffff0000)
       Finished creating vCPUs
       Started all vCPUs
       ==== Test Assertion Failure ====
         lib/kvm_util.c:824: vm_adjust_num_guest_pages(vm->mode, npages) == npages
         pid=5712 tid=5712 errno=0 - Success
            1	0x0000000000404eeb: vm_userspace_mem_region_add at kvm_util.c:822
            2	0x0000000000401a5b: add_remove_memslot at memslot_modification_stress_test.c:82
            3	 (inlined by) run_test at memslot_modification_stress_test.c:110
            4	0x0000000000402417: for_each_guest_mode at guest_modes.c:100
            5	0x00000000004016a7: main at memslot_modification_stress_test.c:187
            6	0x0000ffffb8cd4383: ?? ??:0
            7	0x0000000000401827: _start at :?
         Number of guest pages is not compatible with the host. Try npages=16
      
      Fix the issue by providing 16 guest pages to the memory slot for this
      particular combination of 64KB-page-size-host and 4KB-page-size-guest
      on aarch64.
      
      Fixes: ef4c9f4f ("KVM: selftests: Fix 32-bit truncation of vm_get_max_gfn()")
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221013063020.201856-1-gshan@redhat.com
  4. 09 Oct, 2022 3 commits
    • KVM: arm64: Enable stack protection and branch profiling for VHE · 837d632a
      Vincent Donnefort authored
      For historical reasons, the VHE code inherited its build configuration
      from nVHE. Now that those two parts have their own folders and
      makefiles, we can enable stack protection and branch profiling for VHE.
      Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
      Reviewed-by: Quentin Perret <qperret@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221004154216.2833636-1-vdonnefort@google.com
    • KVM: arm64: Limit stage2_apply_range() batch size to largest block · 5994bc9e
      Oliver Upton authored
      Presently stage2_apply_range() works on a batch of memory addressed by a
      stage 2 root table entry for the VM. Depending on the IPA limit of the
      VM and PAGE_SIZE of the host, this could address a massive range of
      memory. Some examples:
      
        4 level, 4K paging  -> 512GB batch size

        3 level, 64K paging -> 4TB batch size
      
      Unsurprisingly, working on such a large range of memory can lead to soft
      lockups. When running dirty_log_perf_test:
      
        ./dirty_log_perf_test -m -2 -s anonymous_thp -b 4G -v 48
      
        watchdog: BUG: soft lockup - CPU#0 stuck for 45s! [dirty_log_perf_:16703]
        Modules linked in: vfat fat cdc_ether usbnet mii xhci_pci xhci_hcd sha3_generic gq(O)
        CPU: 0 PID: 16703 Comm: dirty_log_perf_ Tainted: G           O       6.0.0-smp-DEV #1
        pstate: 80400009 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
        pc : dcache_clean_inval_poc+0x24/0x38
        lr : clean_dcache_guest_page+0x28/0x4c
        sp : ffff800021763990
        pmr_save: 000000e0
        x29: ffff800021763990 x28: 0000000000000005 x27: 0000000000000de0
        x26: 0000000000000001 x25: 00400830b13bc77f x24: ffffad4f91ead9c0
        x23: 0000000000000000 x22: ffff8000082ad9c8 x21: 0000fffafa7bc000
        x20: ffffad4f9066ce50 x19: 0000000000000003 x18: ffffad4f92402000
        x17: 000000000000011b x16: 000000000000011b x15: 0000000000000124
        x14: ffff07ff8301d280 x13: 0000000000000000 x12: 00000000ffffffff
        x11: 0000000000010001 x10: fffffc0000000000 x9 : ffffad4f9069e580
        x8 : 000000000000000c x7 : 0000000000000000 x6 : 000000000000003f
        x5 : ffff07ffa2076980 x4 : 0000000000000001 x3 : 000000000000003f
        x2 : 0000000000000040 x1 : ffff0830313bd000 x0 : ffff0830313bcc40
        Call trace:
         dcache_clean_inval_poc+0x24/0x38
         stage2_unmap_walker+0x138/0x1ec
         __kvm_pgtable_walk+0x130/0x1d4
         __kvm_pgtable_walk+0x170/0x1d4
         __kvm_pgtable_walk+0x170/0x1d4
         __kvm_pgtable_walk+0x170/0x1d4
         kvm_pgtable_stage2_unmap+0xc4/0xf8
         kvm_arch_flush_shadow_memslot+0xa4/0x10c
         kvm_set_memslot+0xb8/0x454
         __kvm_set_memory_region+0x194/0x244
         kvm_vm_ioctl_set_memory_region+0x58/0x7c
         kvm_vm_ioctl+0x49c/0x560
         __arm64_sys_ioctl+0x9c/0xd4
         invoke_syscall+0x4c/0x124
         el0_svc_common+0xc8/0x194
         do_el0_svc+0x38/0xc0
         el0_svc+0x2c/0xa4
         el0t_64_sync_handler+0x84/0xf0
         el0t_64_sync+0x1a0/0x1a4
      
      Use the largest supported block mapping for the configured page size as
      the batch granularity. In so doing the walker is guaranteed to visit a
      leaf only once.
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221007234151.461779-3-oliver.upton@linux.dev
    • KVM: arm64: Work out supported block level at compile time · 3b5c082b
      Oliver Upton authored
      Work out the minimum page table level where KVM supports block mappings
      at compile time. While at it, rewrite the comment around supported block
      mappings to directly describe what KVM supports instead of phrasing in
      terms of what it does not.
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221007234151.461779-2-oliver.upton@linux.dev
  5. 01 Oct, 2022 3 commits
    • Merge branch kvm-arm64/misc-6.1 into kvmarm-master/next · b302ca52
      Marc Zyngier authored
      * kvm-arm64/misc-6.1:
        : .
        : Misc KVM/arm64 fixes and improvement for v6.1
        :
        : - Simplify the affinity check when moving a GICv3 collection
        :
        : - Tone down the shouting when kvm-arm.mode=protected is passed
        :   to a guest
        :
        : - Fix various comments
        :
        : - Advertise the new kvmarm@lists.linux.dev and deprecate the
        :   old Columbia list
        : .
        KVM: arm64: Advertise new kvmarm mailing list
        KVM: arm64: Fix comment typo in nvhe/switch.c
        KVM: selftests: Update top-of-file comment in psci_test
        KVM: arm64: Ignore kvm-arm.mode if !is_hyp_mode_available()
        KVM: arm64: vgic: Remove duplicate check in update_affinity_collection()
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/dirty-log-ordered into kvmarm-master/next · 250012dd
      Marc Zyngier authored
      * kvm-arm64/dirty-log-ordered:
        : .
        : Retrofit some ordering into the existing dirty-ring API by:
        :
        : - relying on acquire/release semantics, which are the default on x86
        :   but need to be explicit on arm64
        :
        : - adding a new capability that indicates which flavor is supported,
        :   either explicit ordering only (arm64) or both implicit and explicit
        :   (x86), as suggested by Paolo at KVM Forum
        :
        : - documenting the requirements for this new capability on weakly ordered
        :   architectures
        :
        : - updating the selftests to do the right thing
        : .
        KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ACQ_REL if available
        KVM: selftests: dirty-log: Upgrade flag accesses to acquire/release semantics
        KVM: Document weakly ordered architecture requirements for dirty ring
        KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL
        KVM: Add KVM_CAP_DIRTY_LOG_RING_ACQ_REL capability and config option
        KVM: Use acquire/release semantics when accessing dirty ring GFN state
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • KVM: arm64: Advertise new kvmarm mailing list · ac107abe
      Marc Zyngier authored
      As announced on the kvmarm list, we're moving the mailing list over
      to kvmarm@lists.linux.dev:
      
      <quote>
      As you probably all know, the kvmarm mailing list has been hosted on
      Columbia's machines for as long as the project has existed (over 13
      years). After all this time, the university has decided to retire the
      list infrastructure and asked us to find new hosting.
      
      A new mailing list has been created on lists.linux.dev[1], and I'm
      kindly asking everyone interested in following the KVM/arm64
      developments to start subscribing to it (and start posting your
      patches there). I hope that people will move over to it quickly enough
      that we can soon give Columbia the green light to turn their systems
      off.
      
      Note that the new list will only get archived automatically once we
      fully switch over, but I'll make sure we fill any gap and not lose any
      message. In the meantime, please Cc both lists.
      
      [...]
      
      [1] https://subspace.kernel.org/lists.linux.dev.html
      </quote>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221001091245.3900668-1-maz@kernel.org
  9. 19 Sep, 2022 6 commits
    • Merge branch kvm-arm64/single-step-async-exception into kvmarm-master/next · bb0cca24
      Marc Zyngier authored
      * kvm-arm64/single-step-async-exception:
        : .
        : Single-step fixes from Reiji Watanabe:
        :
        : "This series fixes two bugs of single-step execution enabled by
        : userspace, and add a test case for KVM_GUESTDBG_SINGLESTEP to
        : the debug-exception test to verify the single-step behavior."
        : .
        KVM: arm64: selftests: Add a test case for KVM_GUESTDBG_SINGLESTEP
        KVM: arm64: selftests: Refactor debug-exceptions to make it amenable to new test cases
        KVM: arm64: Clear PSTATE.SS when the Software Step state was Active-pending
        KVM: arm64: Preserve PSTATE.SS for the guest while single-step is enabled
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • KVM: arm64: selftests: Add a test case for KVM_GUESTDBG_SINGLESTEP · b18e4d4a
      Reiji Watanabe authored
      Add a test case for KVM_GUESTDBG_SINGLESTEP to the debug-exceptions test.
      The test enables single-step execution from userspace and checks whether
      an exit to userspace occurs for each instruction that is stepped.
      Set the default number of test iterations to a value sufficient to
      reliably reproduce, on an Ampere Altra machine, the problem that the
      previous patch fixes.
      Signed-off-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220917010600.532642-5-reijiw@google.com
    • KVM: arm64: selftests: Refactor debug-exceptions to make it amenable to new test cases · ff00e737
      Reiji Watanabe authored
      Split up the current test into a helper, but leave the debug version
      checking in main(), to make it convenient to add a new debug exception
      test case in a subsequent patch.
      Signed-off-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220917010600.532642-4-reijiw@google.com
    • KVM: arm64: Clear PSTATE.SS when the Software Step state was Active-pending · 370531d1
      Reiji Watanabe authored
      While userspace has single-step enabled, if the Software Step state at
      the last guest exit was "Active-pending", clear PSTATE.SS on guest
      entry to restore that state.

      Currently, KVM sets PSTATE.SS to 1 on every guest entry while userspace
      has single-step enabled for the vCPU (with KVM_GUESTDBG_SINGLESTEP).
      This means KVM always puts the vCPU's Software Step state into
      "Active-not-pending" on guest entry, which lets the vCPU perform a
      single step (after which the Software Step exception is taken). This
      could cause an extra single step (without returning to userspace) if
      the Software Step state at the last guest exit was "Active-pending"
      (i.e. the last exit was triggered by an asynchronous exception after
      the single step was performed, but before the Software Step exception
      was taken. See "Figure D2-3 Software step state machine" and "D2.12.7
      Behavior in the active-pending state" in ARM DDI 0487I.a for more
      information about this behavior).

      Fix this by clearing PSTATE.SS on guest entry if the Software Step
      state at the last exit was "Active-pending", so that KVM restores that
      state (and the exception is taken before any further single step is
      performed).
      
      Fixes: 337b99bf ("KVM: arm64: guest debug, add support for single-step")
      Signed-off-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220917010600.532642-3-reijiw@google.com
    • KVM: arm64: Preserve PSTATE.SS for the guest while single-step is enabled · 34fbdee0
      Reiji Watanabe authored
      Preserve the guest's PSTATE.SS value while userspace has single-step
      enabled (i.e. while KVM manipulates PSTATE.SS) for the vCPU.

      Currently, while userspace has single-step enabled for the vCPU
      (with KVM_GUESTDBG_SINGLESTEP), KVM sets PSTATE.SS to 1 on every
      guest entry without saving its original value.
      When userspace disables single-step, KVM doesn't restore the original
      value for the subsequent guest entry (it uses the current value
      instead). Exception return instructions copy PSTATE.SS from
      SPSR_ELx.SS only in certain cases when single-step is enabled (and
      set it to 0 in the other cases), so in practice the value only
      matters when the guest itself enables single-step (and when the
      guest's Software Step state isn't affected by the single-step
      enabled by userspace).
      
      Fix this by preserving the original PSTATE.SS value while userspace
      enables single-step, and restoring the value once it is disabled.
      
      This fix modifies the behavior of GET_ONE_REG/SET_ONE_REG for the
      PSTATE.SS while single-step is enabled by userspace.
      Presently, GET_ONE_REG/SET_ONE_REG gets/sets the current PSTATE.SS
      value, which KVM will override on the next guest entry (i.e. the
      value userspace gets/sets is not used for the next guest entry).
      With this patch, GET_ONE_REG/SET_ONE_REG will get/set the guest's
      preserved value, which KVM will preserve and try to restore after
      single-step is disabled.
      
      Fixes: 337b99bf ("KVM: arm64: guest debug, add support for single-step")
      Signed-off-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220917010600.532642-2-reijiw@google.com
    • Merge remote-tracking branch 'arm64/for-next/sysreg' into kvmarm-master/next · b04b3315
      Marc Zyngier authored
      Merge arm64/for-next/sysreg in order to avoid upstream conflicts
      due to the never-ending sysreg repainting...
      Signed-off-by: Marc Zyngier <maz@kernel.org>