- 01 Mar, 2010 11 commits
-
Avi Kivity authored
Now that we can allow the guest to play with cr0 when the fpu is loaded, we can enable lazy fpu when npt is in use.

Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
If two conditions apply:

- no bits outside TS and EM differ between the host and guest cr0
- the fpu is active

then we can activate the selective cr0 write intercept and drop the unconditional cr0 read and write intercept, and allow the guest to run with the host fpu state. This reduces cr0 exits due to guest fpu management while the guest fpu is loaded.

Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
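The two conditions boil down to a single mask check; a minimal sketch, with illustrative names rather than the exact svm.c ones:

```c
#define X86_CR0_EM (1UL << 2)
#define X86_CR0_TS (1UL << 3)

static int can_drop_cr0_intercepts(unsigned long host_cr0,
				   unsigned long guest_cr0,
				   int fpu_active)
{
	/* No bits outside TS and EM may differ between host and guest cr0. */
	if ((host_cr0 ^ guest_cr0) & ~(X86_CR0_TS | X86_CR0_EM))
		return 0;
	/* And the guest must currently run with the host fpu state. */
	return fpu_active;
}
```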
-
Avi Kivity authored
Currently we don't intercept cr0 at all when npt is enabled. This improves performance but requires us to activate the fpu at all times. Remove this behaviour in preparation for adding selective cr0 intercepts.

Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
init_vmcb() sets up the intercepts as if the fpu is active, so initialize it there. This avoids an INIT from setting up intercepts inconsistent with fpu_active.

Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Defer fpu deactivation as much as possible: if the guest fpu is loaded, keep it loaded until the next heavyweight exit (where we are forced to unload it). This reduces unnecessary exits.

We also defer fpu activation on clts; while clts signals the intent to use the fpu, we can't be sure the guest will actually use it.

Signed-off-by: Avi Kivity <avi@redhat.com>
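A minimal sketch of the deferral policy, with hypothetical names (the real logic is spread across kvm's fpu handling):

```c
struct fpu_lazy {
	int guest_fpu_loaded;
	int deactivate_pending;
};

static void fpu_deactivate(struct fpu_lazy *f)
{
	f->deactivate_pending = 1;		/* defer the unload */
}

static void heavyweight_exit(struct fpu_lazy *f)
{
	if (f->deactivate_pending && f->guest_fpu_loaded)
		f->guest_fpu_loaded = 0;	/* forced unload point */
	f->deactivate_pending = 0;
}
```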
-
Avi Kivity authored
We will use this later to give the guest ownership of cr0.ts.

Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Since we'd like to allow the guest to own a few bits of cr0 at times, we need to know when we access those bits.

Signed-off-by: Avi Kivity <avi@redhat.com>
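The shape of such an accessor, as a hedged sketch (the mask-based form is what lets guest-owned bits be refreshed from hardware later; names are illustrative):

```c
struct vcpu_arch { unsigned long cr0; };

static unsigned long read_cr0_bits(struct vcpu_arch *arch, unsigned long mask)
{
	/* Later: bits the guest owns can be resynced from the VMCB/VMCS
	 * here before being returned. */
	return arch->cr0 & mask;
}
```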
-
Sheng Yang authored
Then the callback can provide the maximum supported large page level, which is more flexible. Also move the gb page support into x86_64-specific code.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
The tsc_offset adjustment in svm_vcpu_load is executed unconditionally, even if Linux considers the host tsc stable. This causes a Linux guest to detect an unstable tsc in any case. This patch removes the tsc_offset adjustment if the host tsc is stable. The guest now gets the benefit of a stable tsc too.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
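In sketch form, the load path only rebases tsc_offset when the host tsc is unstable; types and names are simplified stand-ins for the kvm ones:

```c
struct vmcb_control { unsigned long long tsc_offset; };

static void vcpu_load_adjust_tsc(struct vmcb_control *ctl,
				 unsigned long long tsc_at_last_unload,
				 unsigned long long tsc_now,
				 int host_tsc_unstable)
{
	if (host_tsc_unstable)		/* skip entirely on a stable host tsc */
		ctl->tsc_offset += tsc_at_last_unload - tsc_now;
}
```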
-
Sheng Yang authored
Before enabling, execution of "rdtscp" in the guest would result in #UD.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
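On VMX this is governed by the "enable RDTSCP" secondary execution control; a sketch (the constant matches the SDM bit position, the helper name is hypothetical):

```c
#define SECONDARY_EXEC_RDTSCP 0x00000008	/* "enable RDTSCP" control bit */

static unsigned int adjust_secondary_exec_ctl(unsigned int ctl,
					      int expose_rdtscp)
{
	if (!expose_rdtscp)
		ctl &= ~SECONDARY_EXEC_RDTSCP;	/* guest rdtscp raises #UD */
	return ctl;
}
```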
-
Sheng Yang authored
Sometimes we need to adjust some state in order to reflect the guest CPUID setting, e.g. if we don't expose rdtscp to the guest, we don't want to enable it on hardware. cpuid_update() is introduced for this purpose. Also export kvm_find_cpuid_entry() for later use.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
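A sketch of the hook's intent, with simplified types (the rdtscp CPUID bit really is 0x80000001 EDX bit 27; the structures and names here are illustrative, not the kvm_x86_ops layout):

```c
struct cpuid_entry { unsigned int function, index, eax, ebx, ecx, edx; };

static const struct cpuid_entry *
find_cpuid_entry(const struct cpuid_entry *e, int n, unsigned int function)
{
	for (int i = 0; i < n; i++)
		if (e[i].function == function)
			return &e[i];
	return 0;
}

/* Called after userspace sets the guest CPUID. */
static void cpuid_update(const struct cpuid_entry *e, int n,
			 unsigned int *secondary_exec_ctl)
{
	const struct cpuid_entry *ext = find_cpuid_entry(e, n, 0x80000001);

	if (!ext || !(ext->edx & (1u << 27)))	/* rdtscp not exposed */
		*secondary_exec_ctl &= ~0x00000008u;
}
```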
-
- 03 Dec, 2009 23 commits
-
Jan Kiszka authored
This new IOCTL exports all yet user-invisible states related to exceptions, interrupts, and NMIs. Together with appropriate user space changes, this fixes sporadic problems of vmsave/restore, live migration and system reset.

[avi: future-proof abi by adding a flags field]

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
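Roughly the shape of the state such an ioctl has to carry — a sketch reconstructed from the description above, not the authoritative uapi definition:

```c
#include <stdint.h>

struct vcpu_events {
	struct {
		uint8_t injected, nr, has_error_code, pad;
		uint32_t error_code;
	} exception;
	struct { uint8_t injected, nr, soft, pad; } interrupt;
	struct { uint8_t injected, pending, masked, pad; } nmi;
	uint32_t sipi_vector;
	uint32_t flags;		/* the future-proofing field noted above */
};
```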
-
Eduardo Habkost authored
The svm_set_cr0() call will initialize save->cr0 properly even when npt is enabled, clearing the NW and CD bits as expected, so we don't need to initialize it manually for npt_enabled anymore.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eduardo Habkost authored
svm_vcpu_reset() was not properly resetting the contents of the guest-visible cr0 register, causing the following issue: https://bugzilla.redhat.com/show_bug.cgi?id=525699

Without resetting cr0 properly, the vcpu was running the SIPI bootstrap routine with paging enabled, making the vcpu get a pagefault exception while trying to run it.

Instead of setting vmcb->save.cr0 directly, the new code just resets kvm->arch.cr0 and calls kvm_set_cr0(). The bits that were set/cleared on vmcb->save.cr0 (PG, WP, !CD, !NW) will be set properly by svm_set_cr0(). kvm_set_cr0() is used instead of calling svm_set_cr0() directly to make sure kvm_mmu_reset_context() is called to reset the mmu to nonpaging mode.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Jan Kiszka authored
Push the NMI-related singlestep variable into vcpu_svm. It's dealing with an AMD-specific deficit, nothing generic for x86.

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>

 arch/x86/include/asm/kvm_host.h |    1 -
 arch/x86/kvm/svm.c              |   12 +++++++-----
 2 files changed, 7 insertions(+), 6 deletions(-)

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Mark Langsdorf authored
New AMD processors (Family 0x10 models 8+) support the Pause Filter Feature. This feature creates a new field in the VMCB called Pause Filter Count. If Pause Filter Count is greater than 0 and intercepting PAUSEs is enabled, the processor will increment an internal counter when a PAUSE instruction occurs instead of intercepting. When the internal counter reaches the Pause Filter Count value, a PAUSE intercept will occur.

This feature can be used to detect contended spinlocks, especially when the lock-holding VCPU is not scheduled. Rescheduling another VCPU prevents the VCPU seeking the lock from wasting its quantum by spinning idly. Experimental results show that most spinlocks are held for less than 1000 PAUSE cycles or more than a few thousand. Default the Pause Filter Count to 3000 to detect the contended spinlocks. Processor support for this feature is indicated by a CPUID bit.

On a 24 core system running 4 guests each with 16 VCPUs, this patch improved overall performance of each guest's 32 job kernbench by approximately 3-5% when combined with a scheduler algorithm that caused the VCPU to sleep for a brief period. Further performance improvement may be possible with a more sophisticated yield algorithm.

Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
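A sketch of the setup described above, with simplified vmcb field names (the default of 3000 comes straight from the message; the intercept bit position is illustrative):

```c
#define PAUSE_FILTER_DEFAULT	3000
#define INTERCEPT_PAUSE		(1u << 0)	/* illustrative bit */

struct vmcb_ctl {
	unsigned short pause_filter_count;
	unsigned int intercepts;
};

static void init_pause_filter(struct vmcb_ctl *c, int cpu_has_pause_filter)
{
	if (cpu_has_pause_filter) {	/* reported via a CPUID bit */
		c->pause_filter_count = PAUSE_FILTER_DEFAULT;
		c->intercepts |= INTERCEPT_PAUSE;
	}
}
```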
-
Joerg Roedel authored
With all important information now delivered through tracepoints we can safely remove the nsvm_printk debugging code for nested svm.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch adds a tracepoint for the event that the guest executed the SKINIT instruction. This information is important because SKINIT is an SVM extension not yet implemented by nested SVM and we may need this information for debugging hypervisors that do not yet run on nested SVM.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch adds a tracepoint for the event that the guest executed the INVLPGA instruction.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch adds a special tracepoint for the event that a nested #vmexit is injected because kvm wants to inject an interrupt into the guest.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch adds a tracepoint for a nested #vmexit that gets re-injected to the guest.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch adds a tracepoint for every #vmexit we get from a nested guest.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch adds a dedicated kvm tracepoint for a nested vmrun.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
The nested SVM code emulates a #vmexit caused by a request to open the irq window right in the request function. This is a bug because the request function runs with preemption and interrupts disabled, but the #vmexit emulation might sleep. This can cause a schedule()-while-atomic bug and is fixed with this patch.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
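The fix's shape, as a hedged sketch with hypothetical names: the atomic request path only records that a #vmexit is needed, and the emulation runs later from a context that may sleep:

```c
struct nested_state { int exit_required; };

void emulate_nested_vmexit(struct nested_state *n);	/* may sleep */

static void enable_irq_window(struct nested_state *n, int nested_active)
{
	if (nested_active)
		n->exit_required = 1;	/* defer: preemption/irqs are off here */
}

static void vcpu_run_prepare(struct nested_state *n)
{
	if (n->exit_required) {		/* preemptible context: safe to emulate */
		emulate_nested_vmexit(n);
		n->exit_required = 0;
	}
}
```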
-
Alexander Graf authored
If event_inj is valid on a #vmexit, the host CPU writes its contents to exit_int_info, so the hypervisor knows that the event wasn't injected. We don't do this in nested SVM yet, which is a bug fixed by this patch.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
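A sketch of the propagation, with a simplified vmcb layout (the valid flag really is bit 31 of EVENTINJ; the rest is illustrative):

```c
#define EVENTINJ_VALID	(1u << 31)

struct vmcb_ctl {
	unsigned int event_inj, event_inj_err;
	unsigned int exit_int_info, exit_int_info_err;
};

/* On an emulated nested #vmexit, mirror what hardware does. */
static void sync_pending_event(struct vmcb_ctl *nested_vmcb,
			       const struct vmcb_ctl *vmcb)
{
	if (vmcb->event_inj & EVENTINJ_VALID) {
		nested_vmcb->exit_int_info     = vmcb->event_inj;
		nested_vmcb->exit_int_info_err = vmcb->event_inj_err;
	}
}
```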
-
Jan Kiszka authored
Much of the so-far vendor-specific code for setting up guest debug can actually be handled by the generic code. This also fixes a minor deficit in the SVM part with respect to processing KVM_GUESTDBG_ENABLE.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Zachary Amsden authored
Both VMX and SVM require per-cpu memory allocation, which is done at module init time, for only online cpus. The backend was not allocating enough structures for all possible CPUs, so new CPUs coming online could not be hardware enabled.

Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
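The bug class in sketch form: per-cpu backing store must be sized for every cpu that could ever come online, not just those online at module init. The loop macros are the real kernel ones; svm_cpu_init() stands in for the backend allocator:

```c
int cpu, r;

/* Before (broken for hotplug): allocate only for cpus online right now. */
for_each_online_cpu(cpu) {
	r = svm_cpu_init(cpu);
	if (r)
		goto err;
}

/* After: allocate for every possible cpu. */
for_each_possible_cpu(cpu) {
	r = svm_cpu_init(cpu);
	if (r)
		goto err;
}
```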
-
Zachary Amsden authored
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch replaces them with native_read_tsc(), which can also be used in expressions and saves a variable on the stack in this case.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
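The practical difference, sketched — assuming the replaced calls are the rdtscll() macro, as the replacement suggests: rdtscll() is statement-like and writes through an lvalue, while native_read_tsc() returns a value and composes directly in expressions:

```c
u64 t, delta;

/* Before: a temporary variable is required. */
rdtscll(t);
delta = t - svm->vmcb->control.tsc_offset;

/* After: inline in the expression, no temporary. */
delta = native_read_tsc() - svm->vmcb->control.tsc_offset;
```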
-
Joerg Roedel authored
The exit_int_info field is only written by the hardware and never read. So it does not need to be copied on a vmrun emulation.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
This patch reorganizes the logic in svm_interrupt_allowed to make it easier to read. This is important because the logic is a lot more complicated with Nested SVM.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Alexander Graf authored
X86 CPUs need to have some magic happening to enable the virtualization extensions on them. This magic can result in unpleasant results for users, like blocking other VMMs from working (vmx) or using invalid TLB entries (svm).

Currently KVM activates virtualization when the respective kernel module is loaded. This blocks us from autoloading KVM modules without breaking other VMMs.

To circumvent this problem at least a bit, this patch introduces on-demand activation of virtualization. This means that virtualization is instead enabled on creation of the first virtual machine and disabled on destruction of the last one. Using this, KVM can be easily autoloaded while keeping other hypervisors usable.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
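The mechanism reduces to a usage count; a minimal sketch with hypothetical helper names (the real code additionally handles locking, cpu hotplug, and per-cpu state):

```c
static int vm_usage_count;	/* guarded by a lock in the real code */

static int hardware_enable_all(void)	/* called on VM creation */
{
	if (vm_usage_count++ == 0)
		enable_virt_on_each_cpu();	/* hypothetical: VMXON / EFER.SVME */
	return 0;
}

static void hardware_disable_all(void)	/* called on VM destruction */
{
	if (--vm_usage_count == 0)
		disable_virt_on_each_cpu();	/* hypothetical */
}
```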
-
Marcelo Tosatti authored
nested_svm_map unnecessarily takes mmap_sem around gfn_to_page, since gfn_to_page / get_user_pages are responsible for it.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
They're just copies of vcpu->run, which is readily accessible.

Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 29 Oct, 2009 1 commit
-
Tejun Heo authored
This patch updates percpu related symbols in x86 such that percpu symbols are unique and don't clash with local symbols. This serves two purposes of decreasing the possibility of global percpu symbol collision and allowing dropping per_cpu__ prefix from percpu symbols.

- arch/x86/kernel/cpu/common.c: rename local variable to avoid collision
- arch/x86/kvm/svm.c: s/svm_data/sd/ for local variables to avoid collision
- arch/x86/kernel/cpu/cpu_debug.c: s/cpu_arr/cpud_arr/, s/priv_arr/cpud_priv_arr/, s/cpu_priv_count/cpud_priv_count/
- arch/x86/kernel/cpu/intel_cacheinfo.c: s/cpuid4_info/ici_cpuid4_info/, s/cache_kobject/ici_cache_kobject/, s/index_kobject/ici_index_kobject/
- arch/x86/kernel/ds.c: s/cpu_context/cpu_ds_context/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars which cause name clashes" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: (kvm) Avi Kivity <avi@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: x86@kernel.org
-
- 04 Oct, 2009 2 commits
-
Joerg Roedel authored
When running nested we need to touch the l1 guest's tsc_offset. Otherwise changes will be lost or a wrong value will be read.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Joerg Roedel authored
When svm_vcpu_load is called while the vcpu is running in guest mode, the tsc adjustment made there is lost on the next emulated #vmexit. This causes the tsc to run backwards in the guest. This patch fixes the issue by also adjusting the tsc_offset in the emulated hsave area so that it will not get lost.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
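In sketch form, the same delta gets applied to the saved l1 state whenever the vcpu is currently running the l2 guest; field names only approximate the svm code:

```c
/* Apply the cpu-migration tsc delta to both the active vmcb and, when
 * nested, the emulated hsave area, so a later #vmexit restores a
 * consistent l1 tsc_offset. */
svm->vmcb->control.tsc_offset += delta;
if (is_nested(svm))
	svm->nested.hsave->control.tsc_offset += delta;
```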
-
- 10 Sep, 2009 3 commits
-
Avi Kivity authored
It is no longer possible to reproduce the problem, so presumably it has been fixed.

Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
Nested SVM is (in my experience) stable enough to be enabled by default. So omit the requirement to pass a module parameter.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
Not checking for this flag breaks any nested hypervisor that does not set VINTR. So fix it with this patch.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-