  1. 12 Jun, 2023 2 commits
  2. 20 Apr, 2023 2 commits
    • KVM: arm64: Ensure CPU PMU probes before pKVM host de-privilege · 87727ba2
      Will Deacon authored
      Although pKVM supports CPU PMU emulation for non-protected guests since
      722625c6 ("KVM: arm64: Reenable pmu in Protected Mode"), this relies
      on the PMU driver probing before the host has de-privileged so that the
      'kvm_arm_pmu_available' static key can still be enabled by patching the
      hypervisor text.
      
      As it happens, both of these events hang off device_initcall() but the
      PMU consistently won the race until 7755cec6 ("arm64: perf: Move
      PMUv3 driver to drivers/perf"). Since then, the host will fail to boot
      when pKVM is enabled:
      
        | hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
        | kvm [1]: nVHE hyp BUG at: [<ffff8000090366e0>] __kvm_nvhe_handle_host_mem_abort+0x270/0x284!
        | kvm [1]: Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE
        | kvm [1]: Hyp Offset: 0xfffea41fbdf70000
        | Kernel panic - not syncing: HYP panic:
        | PS:a00003c9 PC:0000dbe04b0c66e0 ESR:00000000f2000800
        | FAR:fffffbfffddfcf00 HPFAR:00000000010b0bf0 PAR:0000000000000000
        | VCPU:0000000000000000
        | CPU: 2 PID: 1 Comm: swapper/0 Not tainted 6.3.0-rc7-00083-g0bce6746d154 #1
        | Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
        | Call trace:
        |  dump_backtrace+0xec/0x108
        |  show_stack+0x18/0x2c
        |  dump_stack_lvl+0x50/0x68
        |  dump_stack+0x18/0x24
        |  panic+0x13c/0x33c
        |  nvhe_hyp_panic_handler+0x10c/0x190
        |  aarch64_insn_patch_text_nosync+0x64/0xc8
        |  arch_jump_label_transform+0x4c/0x5c
        |  __jump_label_update+0x84/0xfc
        |  jump_label_update+0x100/0x134
        |  static_key_enable_cpuslocked+0x68/0xac
        |  static_key_enable+0x20/0x34
        |  kvm_host_pmu_init+0x88/0xa4
        |  armpmu_register+0xf0/0xf4
        |  arm_pmu_acpi_probe+0x2ec/0x368
        |  armv8_pmu_driver_init+0x38/0x44
        |  do_one_initcall+0xcc/0x240
      
      Fix the race properly by deferring the de-privilege step to
      device_initcall_sync(). This will also be needed in future when probing
      IOMMU devices and allows us to separate the pKVM de-privilege logic from
      the core hypervisor initialisation path.
      
      Cc: Oliver Upton <oliver.upton@linux.dev>
      Cc: Fuad Tabba <tabba@google.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Fixes: 7755cec6 ("arm64: perf: Move PMUv3 driver to drivers/perf")
      Tested-by: Fuad Tabba <tabba@google.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20230420123356.2708-1-will@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      87727ba2
    • KVM: arm64: Acquire mp_state_lock in kvm_arch_vcpu_ioctl_vcpu_init() · 4ff910be
      Reiji Watanabe authored
      kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
      when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
      code to acquire the lock.
      Signed-off-by: Reiji Watanabe <reijiw@google.com>
      [maz: minor refactor]
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20230419021852.2981107-2-reijiw@google.com
      4ff910be
  3. 05 Apr, 2023 3 commits
  4. 04 Apr, 2023 1 commit
  5. 31 Mar, 2023 1 commit
  6. 30 Mar, 2023 3 commits
  7. 29 Mar, 2023 3 commits
    • KVM: arm64: Use config_lock to protect data ordered against KVM_RUN · 4bba7f7d
      Oliver Upton authored
      There are various bits of VM-scoped data that can only be configured
      before the first call to KVM_RUN, such as the hypercall bitmaps and
      the PMU. As these fields are protected by the kvm->lock and accessed
      while holding vcpu->mutex, this is yet another example of lock
      inversion.
      
      Change out the kvm->lock for kvm->arch.config_lock in all of these
      instances. Opportunistically simplify the locking mechanics of the
      PMU configuration by holding the config_lock for the entirety of
      kvm_arm_pmu_v3_set_attr().
      
      Note that this also addresses a couple of bugs. There is an unguarded
      read of the PMU version in KVM_ARM_VCPU_PMU_V3_FILTER which could race
      with KVM_ARM_VCPU_PMU_V3_SET_PMU. Additionally, until now writes to the
      per-vCPU vPMU irq were not serialized VM-wide, meaning concurrent calls
      to KVM_ARM_VCPU_PMU_V3_IRQ could lead to a false positive in
      pmu_irq_is_valid().
      
      Cc: stable@vger.kernel.org
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20230327164747.2466958-4-oliver.upton@linux.dev
      4bba7f7d
    • KVM: arm64: Avoid lock inversion when setting the VM register width · c43120af
      Oliver Upton authored
      kvm->lock must be taken outside of the vcpu->mutex. Of course, the
      locking documentation for KVM makes this abundantly clear. Nonetheless,
      the locking order in KVM/arm64 has been wrong for quite a while; we
      acquire the kvm->lock while holding the vcpu->mutex all over the shop.
      
      All was seemingly fine until commit 42a90008 ("KVM: Ensure lockdep
      knows about kvm->lock vs. vcpu->mutex ordering rule") caught us with our
      pants down, leading to lockdep barfing:
      
       ======================================================
       WARNING: possible circular locking dependency detected
       6.2.0-rc7+ #19 Not tainted
       ------------------------------------------------------
       qemu-system-aar/859 is trying to acquire lock:
       ffff5aa69269eba0 (&host_kvm->lock){+.+.}-{3:3}, at: kvm_reset_vcpu+0x34/0x274
      
       but task is already holding lock:
       ffff5aa68768c0b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x8c/0xba0
      
       which lock already depends on the new lock.
      
      Add a dedicated lock to serialize writes to VM-scoped configuration from
      the context of a vCPU. Protect the register width flags with the new
      lock, thus avoiding the need to grab the kvm->lock while holding
      vcpu->mutex in kvm_reset_vcpu().
      
      Cc: stable@vger.kernel.org
      Reported-by: Jeremy Linton <jeremy.linton@arm.com>
      Link: https://lore.kernel.org/kvmarm/f6452cdd-65ff-34b8-bab0-5c06416da5f6@arm.com/
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20230327164747.2466958-3-oliver.upton@linux.dev
      c43120af
    • KVM: arm64: Avoid vcpu->mutex v. kvm->lock inversion in CPU_ON · 0acc7239
      Oliver Upton authored
      KVM/arm64 had the lock ordering backwards on vcpu->mutex and kvm->lock
      from the very beginning. One such example is the way vCPU resets are
      handled: the kvm->lock is acquired while handling a guest CPU_ON PSCI
      call.
      
      Add a dedicated lock to serialize writes to kvm_vcpu_arch::{mp_state,
      reset_state}. Promote all accessors of mp_state to {READ,WRITE}_ONCE()
      as readers do not acquire the mp_state_lock. While at it, plug yet
      another race by taking the mp_state_lock in the KVM_SET_MP_STATE ioctl
      handler.
      
      As changes to MP state are now guarded with a dedicated lock, drop the
      kvm->lock acquisition from the PSCI CPU_ON path. Similarly, move the
      reader of reset_state outside of the kvm->lock and instead protect it
      with the mp_state_lock. Note that writes to reset_state::reset have been
      demoted to regular stores as both readers and writers acquire the
      mp_state_lock.
      
      While the kvm->lock inversion still exists in kvm_reset_vcpu(), at least
      now PSCI CPU_ON no longer depends on it for serializing vCPU reset.
      
      Cc: stable@vger.kernel.org
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20230327164747.2466958-2-oliver.upton@linux.dev
      0acc7239
  8. 16 Mar, 2023 1 commit
  9. 11 Feb, 2023 1 commit
  10. 07 Feb, 2023 2 commits
  11. 02 Feb, 2023 1 commit
  12. 29 Dec, 2022 8 commits
  13. 19 Nov, 2022 1 commit
  14. 11 Nov, 2022 6 commits
  15. 10 Nov, 2022 2 commits
  16. 09 Nov, 2022 1 commit
    • KVM: replace direct irq.h inclusion · d663b8a2
      Paolo Bonzini authored
      virt/kvm/irqchip.c is including "irq.h" from the arch-specific KVM source
      directory (i.e. not from arch/*/include) for the sole purpose of retrieving
      irqchip_in_kernel.
      
      Making the function inline in a header that is already included,
      such as asm/kvm_host.h, is not possible because it needs to look at
      struct kvm which is defined after asm/kvm_host.h is included.  So add a
      kvm_arch_irqchip_in_kernel non-inline function; irqchip_in_kernel() is
      only performance critical on arm64 and x86, and the non-inline function
      is enough on all other architectures.
      
      irq.h can then be deleted from all architectures except x86.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d663b8a2
  17. 26 Sep, 2022 2 commits