- 18 Mar, 2020 2 commits
-
-
Zhenyu Wang authored
On Tigerlake, the new AVX512 VP2INTERSECT feature is available. Allow exposing it via KVM_GET_SUPPORTED_CPUID. Cc: "Zhong, Yang" <yang.zhong@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
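As an illustration, a minimal sketch of how such a feature bit is typically advertised in KVM's supported-CPUID table (the surrounding mask in arch/x86/kvm/cpuid.c and the neighbouring flags shown are assumptions; AVX512_VP2INTERSECT is CPUID.7.0:EDX bit 8):

```c
/* Sketch only: OR the new flag into the CPUID.7.0:EDX feature mask that
 * KVM_GET_SUPPORTED_CPUID reports. */
const u32 kvm_cpuid_7_0_edx_x86_features =
	F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
	F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
	F(MD_CLEAR) | F(AVX512_VP2INTERSECT);
```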
-
Paolo Bonzini authored
The name of nested_vmx_exit_reflected suggests that it's purely a test, but it actually marks VMCS12 pages as dirty. Move this to vmx_handle_exit, observing that the initial nested_run_pending check in nested_vmx_exit_reflected is pointless---nested_run_pending has just been cleared in vmx_vcpu_run and won't be set until handle_vmlaunch or handle_vmresume. Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
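A rough sketch of the resulting control flow in vmx_handle_exit; the helper names nested_mark_vmcs12_pages_dirty() and nested_vmx_reflect_vmexit() are assumptions, not quoted from the message:

```c
	/* Sketch: mark vmcs12 pages dirty on every exit while in guest mode,
	 * then decide separately whether the exit is reflected to L1. */
	if (is_guest_mode(vcpu)) {
		nested_mark_vmcs12_pages_dirty(vcpu);
		if (nested_vmx_exit_reflected(vcpu, exit_reason))
			return nested_vmx_reflect_vmexit(vcpu, exit_reason);
	}
```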
-
- 17 Mar, 2020 1 commit
-
-
Uros Bizjak authored
Registers in "regs" array are indexed as rax/rcx/rdx/.../rsi/rdi/r8/... Reorder access to "regs" array in vmenter.S to follow its natural order. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 16 Mar, 2020 37 commits
-
-
Paolo Bonzini authored
Merge tag 'kvm-s390-next-5.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD KVM: s390: Features and Enhancements for 5.7 part1 1. Allow disabling the GISA 2. Protected virtual machines. Protected VMs (PVMs) are KVM VMs where KVM can no longer access the VM's state, such as guest memory and guest registers. Instead, the PVMs are mostly managed by a new entity called the Ultravisor (UV), which provides an API so that KVM and the PVM can request management actions. PVMs are encrypted at rest and protected from hypervisor access while running. They switch from normal operation into protected mode, so we can still use the standard boot process to load an encrypted blob and then move it into protected mode. Rebooting is only possible by passing through unprotected/normal mode and switching to protected mode again. One mm-related patch will go via Andrew's mm tree (mm/gup/writeback: add callbacks for inaccessible pages)
-
Vitaly Kuznetsov authored
Check that the guest doesn't hang when an invalid eVMCS GPA is specified. Testing that #UD is injected would probably be better, but the selftests currently lack the infrastructure for that. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Vitaly Kuznetsov authored
Check that VMfailInvalid happens when the eVMCS revision is invalid. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Vitaly Kuznetsov authored
KVM allows using the revision_id from MSR_IA32_VMX_BASIC as the eVMCS revision_id to work around a bug in genuine Hyper-V (see the comment in nested_vmx_handle_enlightened_vmptrld()), but this shouldn't be used by default. Switch to using KVM_EVMCS_VERSION (1). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Vitaly Kuznetsov authored
nested_vmx_handle_enlightened_vmptrld() fails in two cases: when we fail to kvm_vcpu_map() the supplied GPA, and when revision_id is incorrect. Genuine Hyper-V raises #UD in the former case (at least with *some* incorrect GPAs) and does VMfailInvalid() in the latter; KVM doesn't do anything, so L1 just gets stuck retrying the same faulty VMLAUNCH. nested_vmx_handle_enlightened_vmptrld() has two call sites: nested_vmx_run() and nested_get_vmcs12_pages(). The former needs to mimic Hyper-V and queue the appropriate failure; the latter doesn't need to do much: the failure there happens after migration while L2 was running (and L1 did something weird like writing to the VP assist page from a different vCPU), so just kill L1 with KVM_EXIT_INTERNAL_ERROR. Reported-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> [Squash kbuild autopatch. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Vitaly Kuznetsov authored
When vmx_set_nested_state() happens, we may not have all the required data to map the enlightened VMCS: e.g. the HV_X64_MSR_VP_ASSIST_PAGE MSR may not yet be restored, so we need a postponed action. Currently, we (ab)use need_vmcs12_to_shadow_sync/nested_sync_vmcs12_to_shadow() for that, but this is not ideal: we may not need to sync anything if L2 is running, and it is hard to propagate errors from nested_sync_vmcs12_to_shadow() because we call it from vmx_prepare_switch_to_guest(), which happens just before VMLAUNCH and where the code is not ready to handle errors. Move the eVMCS mapping to nested_get_vmcs12_pages() and request KVM_REQ_GET_VMCS12_PAGES; this seems to be less abusive in nature. It would probably be possible to introduce a specialized KVM_REQ_EVMCS_MAP, but it is undesirable to propagate eVMCS specifics all the way up to x86.c. Note, we don't need to request KVM_REQ_GET_VMCS12_PAGES from vmx_set_nested_state() directly as nested_vmx_enter_non_root_mode() already does that; requesting it there is done to document the (non-obvious) side effect and to be future proof. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
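A sketch of the idea under stated assumptions (the enlightened_vmcs_enabled field, the from_launch argument and the bool return used here are assumptions, not quoted from the patch):

```c
	/* Sketch: map the eVMCS when KVM_REQ_GET_VMCS12_PAGES is processed,
	 * i.e. from nested_get_vmcs12_pages(), where an error can be reported
	 * cleanly, instead of from the shadow-sync path that runs right
	 * before VMLAUNCH. */
	if (vmx->nested.enlightened_vmcs_enabled &&
	    !nested_vmx_handle_enlightened_vmptrld(vcpu, false))
		return false;	/* caller can turn this into a proper error */
```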
-
Paolo Bonzini authored
-
Wainer dos Santos Moschetta authored
Changed all tests and utilities to use TEST_FAIL macro instead of TEST_ASSERT(false,...). Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Wainer dos Santos Moschetta authored
Some tests/utilities use the TEST_ASSERT(false, ...) pattern to indicate a failure and stop execution. This change introduces the TEST_FAIL macro, which is a wrapper around TEST_ASSERT(false, ...) and so provides a direct alternative for failing a test. Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
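The macro is essentially a thin wrapper; a sketch of what it looks like (exact formatting assumed):

```c
/* Unconditionally fail the current test with a printf-style message. */
#define TEST_FAIL(fmt, ...) \
	TEST_ASSERT(false, fmt, ##__VA_ARGS__)
```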
-
Christian Borntraeger authored
Normal reset and initial CPU reset do not clear all registers. Add a test that those registers are NOT changed. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Christian Borntraeger authored
We should not only test the onereg and get_(x)regs interfaces but also sync_regs, which is usually the canonical place for register content. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Christian Borntraeger authored
The guest crashes very early due to changes in the control registers used by dynamic address translation. Let us use different registers that will not crash the guest. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Andrew Jones authored
The steal-time test confirms that what is reported to the guest as stolen time is consistent with the run_delay reported for the VCPU thread on the host. Both x86_64 and AArch64 have the concept of steal/stolen time, so this test is introduced for both architectures. While adding the test we ensure .gitignore has all tests listed (it was missing s390x/resets) and that the Makefile has all tests listed in alphabetical order (not really necessary, but it almost was already...). We also extend the common API with a new num-guest-pages call and a new timespec call. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
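For reference, a minimal sketch of how the host-side run_delay can be read; this is an illustration of the mechanism, not the selftest source, and the helper name is made up. The second field of /proc/<pid>/task/<tid>/schedstat is the time the thread spent waiting on a runqueue, in nanoseconds.

```c
#include <inttypes.h>
#include <stdio.h>
#include <sys/types.h>

/* Read the run_delay (ns) of one thread from its schedstat file. */
static uint64_t get_run_delay(pid_t pid, pid_t tid)
{
	char path[80];
	uint64_t on_cpu = 0, run_delay = 0;
	unsigned long timeslices = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/task/%d/schedstat",
		 (int)pid, (int)tid);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%" SCNu64 " %" SCNu64 " %lu",
		   &on_cpu, &run_delay, &timeslices) != 3)
		run_delay = 0;
	fclose(f);
	return run_delay;	/* the test compares its growth to steal time */
}
```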
-
Andrew Jones authored
Also correct the comment and prototype for vm_create_default(), as it takes a number of pages, not a size. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Andrew Jones authored
Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Andrew Jones authored
Use the format attribute to enable printf format warnings, and then fix them all. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
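A small, self-contained illustration of the mechanism (the function name here is made up; the selftests apply the attribute to their own helpers):

```c
#include <stdarg.h>
#include <stdio.h>

/* The format attribute tells the compiler that argument 1 is a printf-style
 * format string consumed by arguments 2 and up, so -Wformat flags mismatched
 * format/argument pairs at every call site. */
static void test_msg(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

static void test_msg(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}

int main(void)
{
	test_msg("value: %u\n", 42u);		/* OK */
	/* test_msg("value: %lu\n", 42u); */	/* would now warn under -Wformat */
	return 0;
}
```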
-
Christian Borntraeger authored
acrs are 32 bit and not 64 bit. Reported-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Christian Borntraeger authored
value is u64 and not string. Reported-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Andrew Jones authored
Move function documentation comment blocks to the header files in order to avoid duplicating them for each architecture. While at it clean up and fix up the comment blocks. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Andrew Jones authored
Add svm_vmcall_test to the .gitignore list and re-alphabetize it. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Miaohe Lin authored
The function does not return bool anymore. Signed-off-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
After test_and_set_bit() on kvm->arch.apicv_inhibit_reasons, calling kvm_apicv_activated() will always return false, because apicv_inhibit_reasons is then guaranteed to be non-zero. What the code actually wants to do is check whether APICv was *already* active and, if so, skip the costly request; we can do this using cmpxchg. Reported-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
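A sketch of the cmpxchg-based check (the field name comes from the message; the loop shape, the activate/bit parameters and the surrounding code are assumptions):

```c
	/* Sketch: update one inhibit bit and use the old value returned by
	 * cmpxchg to tell whether APICv activation actually changed. */
	unsigned long old, new, expected;

	old = READ_ONCE(kvm->arch.apicv_inhibit_reasons);
	do {
		expected = old;
		new = activate ? (old & ~BIT(bit)) : (old | BIT(bit));
		old = cmpxchg(&kvm->arch.apicv_inhibit_reasons, expected, new);
	} while (old != expected);

	if (!!old == !!new)
		return;		/* already (in)active: skip the costly request */
```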
-
Oliver Upton authored
commit 5ef8acbd ("KVM: nVMX: Emulate MTF when performing instruction emulation") introduced a helper to check the MTF VM-execution control in vmcs12. Change pre-existing check in nested_vmx_exit_reflected() to instead use the helper. Signed-off-by: Oliver Upton <oupton@google.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Wanpeng Li authored
The PMU is not exposed to the guest by most cloud providers' products because of the poor performance of PMU emulation and security concerns. However, KVM calls perf_guest_switch_get_msrs() and clear_atomic_switch_msr() unconditionally before each vmentry, even when the PMU is not exposed to the guest. A ~2% reduction in vmexit time can be observed with kvm-unit-tests/vmexit.flat on my SKX server. Before patch: vmcall 1559; after patch: vmcall 1529. Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
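The shape of the optimization, as a hedged sketch (the vcpu_to_pmu() check and the exact call site are assumptions; the point is only to guard the per-vmentry perf MSR work behind a "is a PMU exposed?" test):

```c
	/* Sketch: skip the perf MSR switch bookkeeping on vmentry when no
	 * PMU is exposed to the guest (pmu->version == 0). */
	if (vcpu_to_pmu(&vmx->vcpu)->version)
		atomic_switch_perf_msrs(vmx);
```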
-
Andrew Jones authored
s390 requires 1M aligned guest sizes. Embedding the rounding in vm_adjust_num_guest_pages() allows us to remove it from a few other places. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Suravee Suthikulpanit authored
The GA Log tracepoint is useful when debugging AVIC performance issues, as it can be used with perf to count the number of times the IOMMU AVIC injects interrupts through the slow path instead of injecting them directly into the target vCPU. Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Peter Xu authored
Clarify locking.rst to mention early on that we're not enabling fast page fault for indirect sps. The previous wording is confusing, in that it makes it seem the proposed solution has already been implemented, but it has not. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
This patch reproduces for nSVM the change that was made for nVMX in commit b5861e5c ("KVM: nVMX: Fix loss of pending IRQ/NMI before entering L2"). While I do not have a test that breaks without it, I cannot see why it would not be necessary since all events are unblocked by VMRUN's setting of GIF back to 1. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
The current implementation of physical interrupt delivery to a nested guest is quite broken. It relies on svm_interrupt_allowed returning false if VINTR=1 so that the interrupt can be injected from enable_irq_window, but this does not work for guests that do not intercept HLT or that rely on clearing the host IF to block physical interrupts while L2 runs. This patch can be split in two logical parts, but including only one breaks tests so I am combining both changes together. The first and easiest is simply to return true for svm_interrupt_allowed if HF_VINTR_MASK is set and HIF is set. This way the semantics of svm_interrupt_allowed are respected: svm_interrupt_allowed being false does not mean "call enable_irq_window", it means "interrupts cannot be injected now". After doing this, however, we need another place to inject the interrupt, and fortunately we already have one, check_nested_events, which nested SVM does not implement but which is meant exactly for this purpose. It is called before interrupts are injected, and it can therefore do the L2->L1 switch while leaving inject_pending_event none the wiser. This patch was developed together with Cathy Avery, who wrote the test and did a lot of the initial debugging. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
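A sketch of the first half, the svm_interrupt_allowed change; the exact code is an assumption, with HF_VINTR_MASK/HF_HIF_MASK being the existing hflags bits:

```c
	/* Sketch: with L2 running under V_INTR_MASKING (HF_VINTR_MASK set),
	 * physical interrupt delivery is gated by the saved host IF (HIF),
	 * not by the VINTR window; the L2->L1 switch itself then happens in
	 * check_nested_events rather than in enable_irq_window. */
	if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK))
		return !!(svm->vcpu.arch.hflags & HF_HIF_MASK);
```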
-
Paolo Bonzini authored
If a nested VM is started while an IRQ was pending and with V_INTR_MASKING=1, the behavior of the guest depends on host IF. If it is 1, the VM should exit immediately, before executing the first instruction of the guest, because VMRUN sets GIF back to 1. If it is 0 and the host has VGIF, however, at the time of the VMRUN instruction L0 is running the guest with a pending interrupt window request. This interrupt window request is completely irrelevant to L2, since IF only controls virtual interrupts, so this patch drops INTERCEPT_VINTR from the VMCB while running L2 under these circumstances. To simplify the code, both steps of enabling the interrupt window (setting the VINTR intercept and requesting a fake virtual interrupt in svm_inject_irq) are grouped in the svm_set_vintr function, and likewise for dismissing the interrupt window request in svm_clear_vintr. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
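A sketch of the grouping described above (field and macro names are assumptions based on the existing SVM code, not quotes from the patch):

```c
static void svm_set_vintr(struct vcpu_svm *svm)
{
	/* Open the interrupt window in one place: enable the VINTR intercept
	 * and request a fake virtual interrupt so the guest exits as soon as
	 * interrupts can be delivered. */
	set_intercept(svm, INTERCEPT_VINTR);
	svm->vmcb->control.int_ctl &= ~V_INTR_PRIO_MASK;
	svm->vmcb->control.int_ctl |= V_IRQ_MASK |
		(0x0f << V_INTR_PRIO_SHIFT);
	mark_dirty(svm->vmcb, VMCB_INTR);
}

static void svm_clear_vintr(struct vcpu_svm *svm)
{
	/* Dismiss the window request: drop the intercept and the fake IRQ. */
	clr_intercept(svm, INTERCEPT_VINTR);
	svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
	mark_dirty(svm->vmcb, VMCB_INTR);
}
```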
-
Paolo Bonzini authored
Instead of touching the host intercepts so that the bitwise OR in recalc_intercepts just works, mask away uninteresting intercepts directly in recalc_intercepts. This is cleaner and keeps the logic in one place even for intercepts that can change even while L2 is running. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
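A sketch of masking directly in recalc_intercepts (structure and field names, including where int_ctl lives, are assumptions based on the nested SVM code of that era):

```c
static void recalc_intercepts(struct vcpu_svm *svm)
{
	struct vmcb_control_area *c = &svm->vmcb->control;
	struct vmcb_control_area *h = &svm->nested.hsave->control;
	struct nested_state *g = &svm->nested;

	/* Start from the plain bitwise OR of host and guest intercepts... */
	c->intercept_cr = h->intercept_cr | g->intercept_cr;
	c->intercept_dr = h->intercept_dr | g->intercept_dr;
	c->intercept_exceptions = h->intercept_exceptions | g->intercept_exceptions;
	c->intercept = h->intercept | g->intercept;

	/* ...then mask away intercepts that only matter to the host.  With
	 * V_INTR_MASKING, CR8 accesses and interrupt-window exits are
	 * irrelevant to L0 while L2 runs. */
	if (g->int_ctl & V_INTR_MASKING_MASK) {
		c->intercept_cr &= ~((1U << INTERCEPT_CR8_READ) |
				     (1U << INTERCEPT_CR8_WRITE));
		c->intercept &= ~(1ULL << INTERCEPT_VINTR);
	}
}
```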
-
Paolo Bonzini authored
The set_cr3 callback is not setting the guest CR3, it is setting the root of the guest page tables, either shadow or two-dimensional. To make this clearer as well as to indicate that the MMU calls it via kvm_mmu_load_cr3, rename it to load_mmu_pgd. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Similar to what kvm-intel.ko is doing, provide a single callback that merges svm_set_cr3, set_tdp_cr3 and nested_svm_set_tdp_cr3. This lets us unify the set_cr3 and set_tdp_cr3 entries in kvm_x86_ops. I'm doing that in this same patch because splitting it adds quite a bit of churn due to the need for forward declarations. For the same reason the assignment to vcpu->arch.mmu->set_cr3 is moved to kvm_init_shadow_mmu from init_kvm_softmmu and nested_svm_init_mmu_context. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Invert and rename the kvm_cpuid() param that controls out-of-range logic to better reflect the semantics of the affected callers, i.e. callers that bypass the out-of-range logic do so because they are looking up an exact guest CPUID entry, e.g. to query the maxphyaddr. Similarly, rename kvm_cpuid()'s internal "found" to "exact" to clarify that it tracks whether or not the exact requested leaf was found, as opposed to any usable leaf being found. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Move all of the out-of-range logic into a single helper, get_out_of_range_cpuid_entry(), to avoid an extra lookup of CPUID.0.0 and to provide a single location for documenting the out-of-range behavior. No functional change intended. Cc: Jim Mattson <jmattson@google.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Rework the masking in the out-of-range CPUID logic to handle the Hypervisor sub-classes, as well as the Centaur class if the guest virtual CPU vendor is Centaur. Masking against 0x80000000 only handles basic and extended leafs, which results in Hypervisor range checks being performed against the basic CPUID class, and Centaur range checks being performed against the Extended class. E.g. if CPUID.0x40000000.EAX returns 0x4000000A and there is no entry for CPUID.0x40000006, then function 0x40000006 would be incorrectly reported as out of bounds. While there is no official definition of what constitutes a class, the convention established for Hypervisor classes effectively uses bits 31:8 as the mask by virtue of checking for different bases in increments of 0x100, e.g. KVM advertises its CPUID functions starting at 0x40000100 when HyperV features are advertised at the default base of 0x40000000. The bad range check doesn't cause functional problems for any known VMM because out-of-range semantics only come into play if the exact entry isn't found, and VMMs either support a very limited Hypervisor range, e.g. the official KVM range is 0x40000000-0x40000001 (effectively no room for undefined leafs) or explicitly defines gaps to be zero, e.g. Qemu explicitly creates zeroed entries up to the Centaur and Hypervisor limits (the latter comes into play when providing HyperV features). The bad behavior can be visually confirmed by dumping CPUID output in the guest when running Qemu with a stable TSC, as Qemu extends the limit of range 0x40000000 to 0x40000010 to advertise VMware's cpuid_freq, without defining zeroed entries for 0x40000002 - 0x4000000f. Note, documentation of Centaur/VIA CPUs is hard to come by. Designating 0xc0000000 - 0xcfffffff as the Centaur class is a best guess as to the behavior of a real Centaur/VIA CPU. Fixes: 43561123 ("kvm: x86: Improve emulation of CPUID leaves 0BH and 1FH") Cc: Jim Mattson <jmattson@google.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
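A self-contained illustration of the class-base computation described above; the function name is made up and the values are taken from the message (bits 31:8 for Hypervisor classes, 0xc0000000-0xcfffffff as the Centaur class, basic vs extended otherwise):

```c
#include <inttypes.h>
#include <stdio.h>

/* Compute the CPUID "class" base used for out-of-range checks. */
static uint32_t cpuid_class_base(uint32_t function, int vendor_is_centaur)
{
	if (function >= 0x40000000 && function <= 0x4fffffff)
		return function & 0xffffff00;	/* per-hypervisor 0x100 blocks */
	if (vendor_is_centaur && function >= 0xc0000000 && function <= 0xcfffffff)
		return 0xc0000000;		/* Centaur class */
	return function & 0x80000000;		/* basic vs extended */
}

int main(void)
{
	/* 0x40000006 is now checked against 0x40000000, not against 0x0. */
	printf("0x40000006 -> 0x%" PRIx32 "\n", cpuid_class_base(0x40000006, 0));
	printf("0xc0000002 -> 0x%" PRIx32 "\n", cpuid_class_base(0xc0000002, 1));
	return 0;
}
```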
-
Sean Christopherson authored
Extend guest_cpuid_is_amd() to cover Hygon virtual CPUs and rename it accordingly. Hygon CPUs use an AMD-based core and so have the same basic behavior as AMD CPUs. Fixes: b8f4abb6 ("x86/kvm: Add Hygon Dhyana support to KVM") Cc: Pu Wen <puwen@hygon.cn> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
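A sketch of the renamed helper (the vendor-check helpers and exact body shown here are assumptions):

```c
static inline bool guest_cpuid_is_amd_or_hygon(struct kvm_vcpu *vcpu)
{
	struct kvm_cpuid_entry2 *best = kvm_find_cpuid_entry(vcpu, 0, 0);

	/* Hygon Dhyana uses an AMD-based core, so treat both vendors alike. */
	return best &&
	       (is_guest_vendor_amd(best->ebx, best->ecx, best->edx) ||
		is_guest_vendor_hygon(best->ebx, best->ecx, best->edx));
}
```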
-