- 04 Apr, 2018 10 commits
-
-
Paolo Bonzini authored
Testsuite contributed by Google and cleaned up by myself for inclusion in Linux. Signed-off-by: Ken Hofsass <hofsass@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Peng Hao authored
fix a "warning: no previous prototype". Cc: stable@vger.kernel.org Signed-off-by: Peng Hao <peng.hao2@zte.com.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Wanpeng Li authored
There is no easy way to force KVM to run an instruction through the emulator (by design, as that would expose the x86 emulator as a significant attack surface). However, we do wish to expose the x86 emulator when we are testing it (e.g. via kvm-unit-tests). Therefore, this patch adds a "force emulation prefix" that is designed to raise #UD, which KVM will trap; its #UD exit handler then matches the "force emulation prefix" and runs the instruction following the prefix through the x86 emulator. To not expose the x86 emulator by default, we add a module parameter that is off by default. A simple testcase:

    #include <stdio.h>
    #include <string.h>

    #define HYPERVISOR_INFO 0x40000000

    #define CPUID(idx, eax, ebx, ecx, edx) \
        asm volatile (\
        "ud2a; .ascii \"kvm\"; cpuid" \
        :"=b" (*ebx), "=a" (*eax), "=c" (*ecx), "=d" (*edx) \
        :"0"(idx) );

    void main()
    {
        unsigned int eax, ebx, ecx, edx;
        char string[13];

        CPUID(HYPERVISOR_INFO, &eax, &ebx, &ecx, &edx);
        *(unsigned int *)(string + 0) = ebx;
        *(unsigned int *)(string + 4) = ecx;
        *(unsigned int *)(string + 8) = edx;

        string[12] = 0;
        if (strncmp(string, "KVMKVMKVM\0\0\0", 12) == 0)
            printf("kvm guest\n");
        else
            printf("bare hardware\n");
    }

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Liran Alon <liran.alon@oracle.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> [Correctly handle usermode exits. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Wanpeng Li authored
Introduce handle_ud() to handle invalid opcodes; this function will be used by later patches. Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Liran Alon <liran.alon@oracle.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
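A minimal sketch of what handle_ud() amounts to (simplified; helper names assumed from arch/x86/kvm/x86.c of this era, details may differ):

    int handle_ud(struct kvm_vcpu *vcpu)
    {
        enum emulation_result er;

        /* Hand the invalid opcode to the x86 emulator. */
        er = emulate_instruction(vcpu, EMULTYPE_TRAP_UD);
        if (er == EMULATE_USER_EXIT)
            return 0;
        /* Emulation did not succeed: queue #UD for the guest. */
        if (er != EMULATE_DONE)
            kvm_queue_exception(vcpu, UD_VECTOR);
        return 1;
    }

-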
Paolo Bonzini authored
vmx_save_host_state has multiple ifdefs for CONFIG_X86_64 that have no other code between them. Simplify by reducing them to a single conditional. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
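Illustratively (placeholder statements, not the actual diff), the cleanup collapses a pattern like:

    #ifdef CONFIG_X86_64
        /* some 64-bit-only host state handling */
    #endif
    #ifdef CONFIG_X86_64
        /* more 64-bit-only host state handling */
    #endif

into a single conditional:

    #ifdef CONFIG_X86_64
        /* some 64-bit-only host state handling */
        /* more 64-bit-only host state handling */
    #endif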
-
Arnd Bergmann authored
The local variable was newly introduced but is only accessed in one place on x86_64, not on 32-bit:

    arch/x86/kvm/vmx.c: In function 'vmx_save_host_state':
    arch/x86/kvm/vmx.c:2175:6: error: unused variable 'cpu' [-Werror=unused-variable]

This puts it into another #ifdef. Fixes: 35060ed6 ("x86/kvm/vmx: avoid expensive rdmsr for MSR_GS_BASE") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Remove the WARN_ON in handle_ept_misconfig() as it is unnecessary and causes false positives. Return the unmodified result of kvm_mmu_page_fault() instead of converting a system error code to KVM_EXIT_UNKNOWN so that userspace sees the error code of the actual failure, not a generic "we don't know what went wrong". * kvm_mmu_page_fault() will WARN if reserved bits are set in the SPTEs, i.e. it covers the case where an EPT misconfig occurred because of a KVM bug. * The WARN_ON will fire on any system error code that is hit while handling the fault, e.g. -ENOMEM from mmu_topup_memory_caches() while handling a legitimate MMIO EPT misconfig or -EFAULT from kvm_handle_bad_page() if the corresponding HVA is invalid. In either case, userspace should receive the original error code, and firing a warning is incorrect behavior as KVM is operating as designed. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
The bug that led to commit 95e057e2 was a benign warning (no adverse effects other than the warning itself) that was detected by syzkaller. Further inspection shows that the WARN_ON in question, in handle_ept_misconfig(), is unnecessary and flawed (this was also briefly discussed in the original patch: https://patchwork.kernel.org/patch/10204649). * The WARN_ON is unnecessary as kvm_mmu_page_fault() will WARN if reserved bits are set in the SPTEs, i.e. it covers the case where an EPT misconfig occurred because of a KVM bug. * The WARN_ON is flawed because it will fire on any system error code that is hit while handling the fault, e.g. -ENOMEM can be returned by mmu_topup_memory_caches() while handling a legitimate MMIO EPT misconfig. The original behavior of returning -EFAULT when userspace munmaps an HVA without first removing the memslot is correct and desirable, i.e. KVM is letting userspace know it has generated a bad address. Returning RET_PF_EMULATE masks the WARN_ON in the EPT misconfig path, but does not fix the underlying bug, i.e. the WARN_ON is bogus. Furthermore, returning RET_PF_EMULATE has the unwanted side effect of causing KVM to attempt to emulate an instruction on any page fault with an invalid HVA translation, e.g. a not-present EPT violation on a VM_PFNMAP VMA whose fault handler failed to insert a PFN. * There is no guarantee that the fault is directly related to the instruction, i.e. the fault could have been triggered by a side effect memory access in the guest, e.g. while vectoring a #DB or writing a tracing record. This could cause KVM to effectively mask the fault if KVM doesn't model the behavior leading to the fault, i.e. emulation could succeed and resume the guest. * If emulation does fail, KVM will return EMULATION_FAILED instead of -EFAULT, which is a red herring as the user will either debug a bogus emulation attempt or scratch their head wondering why we were attempting emulation in the first place. TL;DR: revert to returning -EFAULT and remove the bogus WARN_ON in handle_ept_misconfig in a future patch. This reverts commit 95e057e2. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Stefan Fritsch authored
This is very similar to the aligned versions movaps/movapd. We have seen the corresponding emulation failures with openbsd as guest and with Windows 10 with intel HD graphics pass through. Signed-off-by: Christian Ehrhardt <christian_ehrhardt@genua.de> Signed-off-by: Stefan Fritsch <sf@sfritsch.de> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Exit to userspace with KVM_INTERNAL_ERROR_EMULATION if we encounter an exception in Protected Mode while emulating the guest due to invalid guest state. Unlike Big RM, KVM doesn't support emulating exceptions in PM, i.e. PM exceptions are always injected via the VMCS. Because we will never do VMRESUME due to emulation_required, the exception is never realized and we'll keep emulating the faulting instruction over and over until we receive a signal. Exit to userspace iff there is a pending exception, i.e. don't exit simply on a requested event. The purpose of this check and exit is to aid in debugging a guest that is in all likelihood already doomed. Invalid guest state in PM is extremely limited in normal operation, e.g. it generally only occurs for a few instructions early in BIOS, and any exception at this time is all but guaranteed to be fatal. Non-vectored interrupts, e.g. INIT, SIPI and SMI, can be cleanly handled/emulated, while checking for vectored interrupts, e.g. INTR and NMI, without hitting false positives would add a fair amount of complexity for almost no benefit (getting hit by lightning seems more likely than encountering this specific scenario). Add a WARN_ON_ONCE to vmx_queue_exception() if we try to inject an exception via the VMCS and emulation_required is true. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
- 29 Mar, 2018 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
Radim Krčmář authored
KVM PPC update for 4.17 - Improvements for the radix page fault handler for HV KVM on POWER9.
-
- 28 Mar, 2018 26 commits
-
-
Liran Alon authored
When a vCPU runs L2 and there is a pending event that requires an exit from L2 to L1 and nested_run_pending=1, vcpu_enter_guest() will request an immediate exit from L2 (see req_immediate_exit). Since handling of req_immediate_exit now also makes sure to set KVM_REQ_EVENT, there is no need to also set it in vmx_vcpu_run() when nested_run_pending=1. This optimizes the case where VMRESUME was executed by L1 to enter L2 and there are no pending events that require an exit from L2 to L1. Previously, this would have set KVM_REQ_EVENT unnecessarily. Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
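A sketch of the relevant vcpu_enter_guest() logic after this change (condensed from arch/x86/kvm/x86.c of this period; surrounding code omitted):

    if (req_immediate_exit) {
        /* Re-evaluate pending events on the very next entry... */
        kvm_make_request(KVM_REQ_EVENT, vcpu);
        /* ...and force an immediate exit via a self-IPI. */
        smp_send_reschedule(vcpu->cpu);
    }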
-
Liran Alon authored
In case L2 VMExits to L0 during event delivery, VMCS02 is filled with IDT-vectoring info, which vmx_complete_interrupts() makes sure to reinject before the next resume of L2. While handling the VMExit in L0, an IPI could be sent by another L1 vCPU to the L1 vCPU which currently runs L2 and exited to L0. When L0 reaches vcpu_enter_guest() and calls inject_pending_event(), it will note that a previous event was re-injected to L2 (by IDT-vectoring info) and therefore won't check if there are pending L1 events which require an exit from L2 to L1. Thus, L0 enters L2 without an immediate VMExit even though there are pending L1 events! This commit fixes the issue by making sure to check for L1 pending events even if a previous event was reinjected to L2, and by bailing out from inject_pending_event() before evaluating a new pending event in case an event was already reinjected. The bug was observed with the following setup: * L0 is a 64-CPU machine which runs KVM. * L1 is a 16-CPU machine which runs KVM. * L0 & L1 run with APICv disabled. (Also reproduced with APICv enabled, but the info below is easier to analyze with APICv disabled.) * L1 runs a 16-CPU L2 Windows Server 2012 R2 guest. During L2 boot, L1 hangs completely, and analyzing the hang reveals that one L1 vCPU is holding KVM's mmu_lock and is waiting forever on an IPI that it has sent to another L1 vCPU, while all other L1 vCPUs are attempting to grab mmu_lock. Therefore, all L1 vCPUs are stuck forever (as L1 runs with kernel preemption disabled). Observing /sys/kernel/debug/tracing/trace_pipe reveals the following series of events:

    (1) qemu-system-x86-19066 [030] kvm_nested_vmexit: rip: 0xfffff802c5dca82f reason: EPT_VIOLATION ext_inf1: 0x0000000000000182 ext_inf2: 0x00000000800000d2 ext_int: 0x00000000 ext_int_err: 0x00000000
    (2) qemu-system-x86-19054 [028] kvm_apic_accept_irq: apicid f vec 252 (Fixed|edge)
    (3) qemu-system-x86-19066 [030] kvm_inj_virq: irq 210
    (4) qemu-system-x86-19066 [030] kvm_entry: vcpu 15
    (5) qemu-system-x86-19066 [030] kvm_exit: reason EPT_VIOLATION rip 0xffffe00069202690 info 83 0
    (6) qemu-system-x86-19066 [030] kvm_nested_vmexit: rip: 0xffffe00069202690 reason: EPT_VIOLATION ext_inf1: 0x0000000000000083 ext_inf2: 0x0000000000000000 ext_int: 0x00000000 ext_int_err: 0x00000000
    (7) qemu-system-x86-19066 [030] kvm_nested_vmexit_inject: reason: EPT_VIOLATION ext_inf1: 0x0000000000000083 ext_inf2: 0x0000000000000000 ext_int: 0x00000000 ext_int_err: 0x00000000
    (8) qemu-system-x86-19066 [030] kvm_entry: vcpu 15

Which can be analyzed as follows: (1) L2 VMExits to L0 on EPT_VIOLATION during delivery of vector 0xd2. Therefore, vmx_complete_interrupts() will set KVM_REQ_EVENT and reinject the pending interrupt 0xd2. (2) L1 sends an IPI of vector 0xfc (CALL_FUNCTION_VECTOR) to destination vCPU 15. This sets the relevant bit in the LAPIC's IRR and sets KVM_REQ_EVENT. (3) L0 reaches vcpu_enter_guest(), which calls inject_pending_event(); it notes that interrupt 0xd2 was reinjected and therefore calls vmx_inject_irq() and returns, without checking for pending L1 events! Note that at this point KVM_REQ_EVENT was already cleared by vcpu_enter_guest() before calling inject_pending_event(). (4) L0 resumes L2 without an immediate exit even though there is a pending L1 event (the IPI pending in the LAPIC's IRR). We have already reached the buggy scenario, but the events can be analyzed further: (5+6) L2 VMExits to L0 on EPT_VIOLATION, this time not during event delivery. (7) L0 decides to forward the VMExit to L1 for further handling. (8) L0 resumes into L1. Note that because KVM_REQ_EVENT is cleared, the LAPIC's IRR is not examined and therefore the IPI is still not delivered into L1! Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
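A hedged sketch of the resulting inject_pending_event() flow (simplified; field names as after the interrupt.pending to interrupt.injected rename elsewhere in this series):

    /* First, re-inject any event left over from the previous run. */
    if (vcpu->arch.exception.injected)
        kvm_x86_ops->queue_exception(vcpu);
    else if (vcpu->arch.nmi_injected)
        kvm_x86_ops->set_nmi(vcpu);
    else if (vcpu->arch.interrupt.injected)
        kvm_x86_ops->set_irq(vcpu);

    /*
     * Even if an event was re-injected, check for pending L1 events so
     * the caller can request an immediate exit from L2 when needed.
     */
    if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
        r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
        if (r != 0)
            return r;
    }

    /* Bail out before evaluating any *new* pending event. */
    if (vcpu->arch.exception.injected || vcpu->arch.nmi_injected ||
        vcpu->arch.interrupt.injected)
        return 0;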
-
Liran Alon authored
The reason that exception.pending should block re-injection of an NMI/interrupt is not described correctly in the comment in the code. Instead, it describes why a pending exception should be injected before a pending NMI/interrupt. Therefore, move the currently present comment to the code block evaluating a new pending event, which explains why exception.pending is evaluated first. In addition, create a new comment describing that exception.pending blocks re-injection of an NMI/interrupt because the exception was queued by handling a vmexit which was due to NMI/interrupt delivery. Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@orcle.com> [Used a comment from Sean J <sean.j.christopherson@intel.com>. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Liran Alon authored
For exception & NMI events, KVM code uses the following coding convention: *) "pending" represents an event that should be injected into the guest at some point but whose side effects have not yet occurred. *) "injected" represents an event whose side effects have already occurred. However, interrupts don't conform to this coding convention. All current code flows mark interrupt.pending when its side effects have already taken place (for example, the bit moved from LAPIC IRR to ISR). Therefore, it makes sense to just rename interrupt.pending to interrupt.injected. This change follows the logic of previous commit 664f8e26 ("KVM: X86: Fix loss of exception which has not yet been injected") which changed exceptions to follow this coding convention as well. It is important to note that in case !lapic_in_kernel(vcpu), interrupt.pending usage was and still is incorrect. In this case, interrupt.pending can only be set using one of the following ioctls: KVM_INTERRUPT, KVM_SET_VCPU_EVENTS and KVM_SET_SREGS. Looking at how QEMU uses these ioctls, one can see that QEMU uses them either to re-set an "interrupt.pending" state it has received from KVM (via KVM_GET_VCPU_EVENTS interrupt.pending or via KVM_GET_SREGS interrupt_bitmap) or to dispatch a new interrupt from QEMU's emulated LAPIC, which resets the bit in IRR and sets the bit in ISR before sending the ioctl to KVM. So it seems that "interrupt.pending" in this case is indeed also supposed to represent "interrupt.injected". However, kvm_cpu_has_interrupt() & kvm_cpu_has_injectable_intr() misuse the (now named) interrupt.injected in order to return whether there is a pending interrupt. This leads to nVMX/nSVM not being able to distinguish whether it should exit from L2 to L1 on EXTERNAL_INTERRUPT for a pending interrupt or re-inject an injected interrupt. Therefore, add a FIXME to these functions for handling this issue. This patch introduces no semantic change. Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
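As a rough sketch, the rename amounts to the following in the vCPU state (simplified from the kvm_vcpu_arch layout of this era; exact fields may differ):

    struct kvm_queued_interrupt {
        bool injected;  /* was "pending": side effects already took place */
        bool soft;      /* soft interrupt (INTn) rather than external */
        u8 nr;          /* vector number */
    } interrupt;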
-
Liran Alon authored
kvm_inject_realmode_interrupt() is called from one of the injection functions which write event injection to the VMCS: vmx_queue_exception(), vmx_inject_irq() and vmx_inject_nmi(). All these functions are called just to cause an event injection into the guest; they are not responsible for manipulating the event-pending flag. The only purpose of kvm_inject_realmode_interrupt() should be to emulate real-mode interrupt injection. This was also incorrect when called from vmx_queue_exception(). Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Vitaly Kuznetsov authored
Enlightened VMCS is just a structure in memory; the main benefit besides avoiding the somewhat slower VMREAD/VMWRITE is the clean field mask: we tell the underlying hypervisor which fields were modified since VMEXIT so there's no need to inspect them all. A tight CPUID loop test shows a significant speedup: Before: 18890 cycles After: 8304 cycles. A static key is used to avoid a performance penalty for non-Hyper-V deployments. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
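Conceptually, a field write under eVMCS becomes a plain memory store plus a clean-field invalidation instead of a VMWRITE. A rough sketch (helper and field names assumed from the later KVM eVMCS code, simplified):

    static inline void evmcs_write64(unsigned long field, u64 value)
    {
        u16 clean_field;
        int offset = get_evmcs_offset(field, &clean_field);

        if (offset < 0)
            return;

        /* Plain memory write into the enlightened VMCS... */
        *(u64 *)((char *)current_evmcs + offset) = value;
        /* ...and tell the hypervisor this field group is dirty. */
        current_evmcs->hv_clean_fields &= ~clean_field;
    }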
-
Vitaly Kuznetsov authored
TLFS 5.0 says: "Support for an enlightened VMCS interface is reported with CPUID leaf 0x40000004. If an enlightened VMCS interface is supported, additional nested enlightenments may be discovered by reading the CPUID leaf 0x4000000A (see 2.4.11)." Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Vitaly Kuznetsov authored
The definitions are according to the Hyper-V TLFS v5.0. KVM on Hyper-V will use these. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Vitaly Kuznetsov authored
Virtual Processor Assist Pages usage allows us to do optimized EOI processing for APIC, enable Enlightened VMCS support in KVM and more. struct hv_vp_assist_page is defined according to the Hyper-V TLFS v5.0b. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Ladi Prosek authored
The assist page has been used only for the paravirtual EOI so far, hence the "APIC" in the MSR name. Renaming to match the Hyper-V TLFS where it's called "Virtual VP Assist MSR". Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Vitaly Kuznetsov authored
mshyperv.h now only contains functions/variables we define in the kernel; all definitions from the TLFS should go to hyperv-tlfs.h. 'enum hv_cpuid_function' is removed as we already have this info in hyperv-tlfs.h; code in mshyperv.c is adjusted accordingly. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Vitaly Kuznetsov authored
hyperv.h is not part of uapi; there are no (known) users outside of the kernel. We are making changes to this file to match the current Hyper-V Hypervisor Top-Level Functional Specification (TLFS, see: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs), and we don't want to maintain backwards compatibility. Move the file, renaming it to hyperv-tlfs.h, to avoid confusing it with mshyperv.h. In the future, all definitions from the TLFS should go to it, and all kernel objects should go to mshyperv.h or include/linux/hyperv.h. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Andrew Jones authored
Add missing entries to the index and ensure the entries are in alphabetical order. Also amd-memory-encryption.rst is an .rst not a .txt. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Wanpeng Li authored
static_key_disable_cpuslocked(): static key 'virt_spin_lock_key+0x0/0x20' used before call to jump_label_init()

    WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:161 static_key_disable_cpuslocked+0x61/0x80
    RIP: 0010:static_key_disable_cpuslocked+0x61/0x80
    Call Trace:
     static_key_disable+0x16/0x20
     start_kernel+0x192/0x4b3
     secondary_startup_64+0xa5/0xb0

The qspinlock is chosen when dedicated pCPUs are available; however, the static key virt_spin_lock_key is set in kvm_spinlock_init() before jump_label_init() has been called, which results in a WARN(). This patch fixes it by delaying the virt_spin_lock_key setup to .smp_prepare_cpus(). Reported-by: Davidlohr Bueso <dbueso@suse.de> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Fixes: b2798ba0 ("KVM: X86: Choose qspinlock when dedicated physical CPUs are available") Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Cole Robinson authored
Unused since it was added in 18e8f410. Signed-off-by: Cole Robinson <crobinso@redhat.com> Reviewed-and-tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Cole Robinson authored
    $ python3 tools/kvm/kvm_stat/kvm_stat
    Traceback (most recent call last):
      File "tools/kvm/kvm_stat/kvm_stat", line 1668, in <module>
        main()
      File "tools/kvm/kvm_stat/kvm_stat", line 1639, in main
        assign_globals()
      File "tools/kvm/kvm_stat/kvm_stat", line 1618, in assign_globals
        for line in file('/proc/mounts'):
    NameError: name 'file' is not defined

open() is the python3 way, and works on python2.6+. Signed-off-by: Cole Robinson <crobinso@redhat.com> Reviewed-and-tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Cole Robinson authored
    $ python3 tools/kvm/kvm_stat/kvm_stat
      File "tools/kvm/kvm_stat/kvm_stat", line 1137
        def sortkey((_k, v)):
                    ^
    SyntaxError: invalid syntax

Fix it in a way that's compatible with python2 and python3. Signed-off-by: Cole Robinson <crobinso@redhat.com> Tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Babu Moger authored
Bring the PLE (pause loop exit) logic to the AMD svm driver. While testing, we found this helping in situations where numerous pauses are generated. Without these patches we could see continuous VMEXITs due to pause interceptions. Tested on an AMD EPYC server with boot parameter idle=poll on a VM with 32 vcpus to simulate extensive pause behaviour. Here are the VMEXITs in a 10-second interval (without / with the patches):

    Pauses    810199    504
    Total     882184    325415

Signed-off-by: Babu Moger <babu.moger@amd.com> [Prevented the window from dropping below the initial value. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Babu Moger authored
This patch adds the support for pause filtering threshold. This feature support is indicated by CPUID Fn8000_000A_EDX. See AMD APM Vol 2 Section 15.14.4 Pause Intercept Filtering for more details. In this mode, a 16-bit pause filter threshold field is added in VMCB. The threshold value is a cycle count that is used to reset the pause counter. As with simple pause filtering, VMRUN loads the pause count value from VMCB into an internal counter. Then, on each pause instruction the hardware checks the elapsed number of cycles since the most recent pause instruction against the pause Filter Threshold. If the elapsed cycle count is greater than the pause filter threshold, then the internal pause count is reloaded from VMCB and execution continues. If the elapsed cycle count is less than the pause filter threshold, then the internal pause count is decremented. If the count value is less than zero and pause intercept is enabled, a #VMEXIT is triggered. If advanced pause filtering is supported and pause filter threshold field is set to zero, the filter will operate in the simpler, count only mode. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
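As an illustrative model of the behaviour described above (paraphrasing APM Vol 2, Section 15.14.4; this is not kernel code, and the names are mine):

    /* State the hardware keeps for advanced pause filtering. */
    struct pf_state {
        int count;               /* internal pause counter, loaded by VMRUN */
        unsigned int threshold;  /* VMCB pause filter threshold (cycles) */
        unsigned int reload;     /* VMCB pause filter count */
    };

    /* Returns 1 if this PAUSE would trigger a #VMEXIT. */
    static int on_guest_pause(struct pf_state *s, unsigned int elapsed_cycles,
                              int intercept_enabled)
    {
        if (s->threshold && elapsed_cycles > s->threshold) {
            s->count = s->reload;  /* spinning restarted: reload the counter */
            return 0;
        }
        /* Below the threshold, or count-only mode (threshold == 0). */
        if (--s->count < 0 && intercept_enabled)
            return 1;              /* #VMEXIT */
        return 0;
    }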
-
Babu Moger authored
This patch brings some of the code from vmx to the x86.h header file. Now we can share this code between vmx and svm. Modified a couple of functions to make them common. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Babu Moger authored
Get rid of ple_window_actual_max, because its benefits are really minuscule and the logic is complicated. The overflows (and underflow) are controlled in __ple_window_grow and _ple_window_shrink respectively. Suggested-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Babu Moger <babu.moger@amd.com> [Fixed potential wraparound and change the max to UINT_MAX. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
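A hedged sketch of the clamped grow path (modeled on the kvm helpers named in the message; the exact signature is assumed):

    static unsigned int __ple_window_grow(unsigned int val, unsigned int base,
                                          unsigned int modifier, unsigned int max)
    {
        u64 ret = val;  /* widen so the multiply/add cannot wrap */

        if (modifier < 1)
            return base;

        if (modifier < base)
            ret *= modifier;
        else
            ret += modifier;

        /* Clamp to max (UINT_MAX) before narrowing back to 32 bits. */
        return min(ret, (u64)max);
    }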
-
Babu Moger authored
The vmx module parameters are supposed to be unsigned variants. Also fix checkpatch warnings like the one below:

    WARNING: Symbolic permissions 'S_IRUGO' are not preferred. Consider using octal permissions '0444'.
    +module_param(ple_gap, uint, S_IRUGO);

Signed-off-by: Babu Moger <babu.moger@amd.com> [Expanded uint to unsigned int in code. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
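The fix itself is mechanical; for the parameter named in the warning it becomes roughly (default-value macro assumed):

    static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;
    module_param(ple_gap, uint, 0444);  /* 0444 == S_IRUGO: world-readable */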
-
Andi Kleen authored
KVM and perf have a special backdoor mechanism to report the IP for interrupts re-executed after vm exit. This works for the NMIs that perf normally uses. However, when perf is in timer mode it doesn't work, because the timer interrupt doesn't get this special treatment. This is common when KVM is running nested in another hypervisor which may not implement the PMU, so only timer mode is available. Call the functions to set up the backdoor IP also for non-NMI interrupts. I renamed the functions to set up the backdoor IP reporting to be more appropriate for their new use. The SVM change is only compile tested. v2: Moved the functions inline. For the normal interrupt case the before/after functions are now called from x86.c, not arch specific code. For the NMI case we still need to call it in the architecture specific code, because it's already needed in the low level *_run functions. Signed-off-by: Andi Kleen <ak@linux.intel.com> [Removed unnecessary calls from arch handle_external_intr. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
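The renamed helpers are tiny; a sketch of what the inlined versions look like (names assumed from the kvm x86 code of this period):

    static inline void kvm_before_interrupt(struct kvm_vcpu *vcpu)
    {
        /* Tell perf which vCPU's guest IP to report for this interrupt. */
        __this_cpu_write(current_vcpu, vcpu);
    }

    static inline void kvm_after_interrupt(struct kvm_vcpu *vcpu)
    {
        __this_cpu_write(current_vcpu, NULL);
    }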
-
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm
Radim Krčmář authored
KVM/ARM updates for v4.17 - VHE optimizations - EL2 address space randomization - Variant 3a mitigation for Cortex-A57 and A72 - The usual vgic fixes - Various minor tidying-up
-
Marc Zyngier authored
MIDR_ALL_VERSIONS is changing, and won't have the same meaning in 4.17, and the right thing to use will be ERRATA_MIDR_ALL_VERSIONS. In order to cope with the merge window, let's add a compatibility macro that will allow a relatively smooth transition, and that can be removed post 4.17-rc1. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
Creates far too many conflicts with arm64/for-next/core, to be resent post -rc1. This reverts commit f9f5dc19. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 27 Mar, 2018 1 commit
-
-
Paul Mackerras authored
This changes the hypervisor page fault handler for radix guests to use the generic KVM __gfn_to_pfn_memslot() function instead of using get_user_pages_fast() and then handling the case of VM_PFNMAP vmas specially. The old code missed the case of VM_IO vmas; with this change, VM_IO vmas will now be handled correctly by code within __gfn_to_pfn_memslot. Currently, __gfn_to_pfn_memslot calls hva_to_pfn, which only uses __get_user_pages_fast for the initial lookup in the cases where either atomic or async is set. Since we are not setting either atomic or async, we do our own __get_user_pages_fast first, for now. This also adds code to check for the KVM_MEM_READONLY flag on the memslot. If it is set and this is a write access, we synthesize a data storage interrupt for the guest. In the case where the page is not normal RAM (i.e. page == NULL in kvmppc_book3s_radix_page_fault()), we read the PTE from the Linux page tables because we need the mapping attribute bits as well as the PFN. (The mapping attribute bits indicate whether accesses have to be non-cacheable and/or guarded.) Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
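A hedged sketch of the read-only memslot check described above (identifier names approximate, not the exact diff):

    if (memslot->flags & KVM_MEM_READONLY) {
        if (writing) {
            /* Synthesize a data storage interrupt for the guest. */
            kvmppc_core_queue_data_storage(vcpu, ea,
                    DSISR_ISSTORE | DSISR_PROTFAULT);
            return RESUME_GUEST;
        }
    }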
-
- 26 Mar, 2018 2 commits
-
-
Marc Zyngier authored
vgic_copy_lpi_list() parses the LPI list and picks LPIs targeting a given vcpu. We allocate the array containing the intids before taking the lpi_list_lock, which means we can have an array size that is not equal to the number of LPIs. This is particularly obvious when looking at the path coming from vgic_enable_lpis, which is not a command, and thus can run in parallel with commands:

    vcpu 0:                                  vcpu 1:
    vgic_enable_lpis
      its_sync_lpi_pending_table
        vgic_copy_lpi_list
          intids = kmalloc_array(irq_count)
                                             MAPI(lpi targeting vcpu 0)
          list_for_each_entry(lpi_list_head)
            intids[i++] = irq->intid;

At that stage, we will happily overrun the intids array. Boo. An easy fix is to break once the array is full. The MAPI command will update the config anyway, and we won't miss a thing. We also make sure that lpi_list_count is read exactly once, so that further updates of that value will not affect the array bound check. Cc: stable@vger.kernel.org Fixes: ccb1d791 ("KVM: arm64: vgic-its: Fix pending table sync") Reviewed-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
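A simplified rendering of the fixed loop (names follow the vgic code; not the exact diff):

    irq_count = READ_ONCE(dist->lpi_list_count);  /* read exactly once */
    intids = kmalloc_array(irq_count, sizeof(*intids), GFP_KERNEL);
    if (!intids)
        return -ENOMEM;

    spin_lock(&dist->lpi_list_lock);
    list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
        /* A concurrent MAPI may have grown the list; don't overrun. */
        if (i == irq_count)
            break;
        if (irq->target_vcpu != vcpu)
            continue;
        intids[i++] = irq->intid;
    }
    spin_unlock(&dist->lpi_list_lock);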
-
Marc Zyngier authored
It was recently reported that VFIO mediated devices, and anything that VFIO exposes as level interrupts, do not strictly follow the expected logic of such interrupts, as it only lowers the input line when the guest has EOId the interrupt at the GIC level, rather than when it Acked the interrupt at the device level. The GIC's Active+Pending state is fundamentally incompatible with this behaviour, as it prevents KVM from observing the EOI, and in turn results in VFIO never dropping the line. This results in an interrupt storm in the guest, which it really never expected. As we cannot really change VFIO to follow the strict rules of level signalling, let's forbid the A+P state altogether, as it is in the end only an optimization. It ensures that we will transition via an invalid state, which we can use to notify VFIO of the EOI. Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-