- 01 Aug, 2010 40 commits
-
Chris Lalancette authored
We really want to call kvm_set_irq() during the hrtimer callback, but that is risky because it runs in interrupt context. Instead, offload the work to a workqueue, which is a bit safer and should provide most of the same functionality. Signed-off-by: Chris Lalancette <clalance@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
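A minimal sketch of the deferral pattern (the struct and function names here are illustrative, not the actual kvm i8254 code; INIT_WORK() is assumed to have been run at timer setup):

    #include <linux/kernel.h>
    #include <linux/hrtimer.h>
    #include <linux/workqueue.h>

    struct pit_state_example {
        struct hrtimer timer;
        struct work_struct expired;     /* handler runs in process context */
    };

    static void pit_do_work_example(struct work_struct *work)
    {
        struct pit_state_example *ps =
            container_of(work, struct pit_state_example, expired);

        /* Safe place to call kvm_set_irq() and other sleepable work. */
        (void)ps;
    }

    static enum hrtimer_restart pit_timer_fn_example(struct hrtimer *t)
    {
        struct pit_state_example *ps =
            container_of(t, struct pit_state_example, timer);

        schedule_work(&ps->expired);    /* defer out of hard-irq context */
        return HRTIMER_RESTART;
    }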
-
Wei Yongjun authored
The pusha emulation only writes back the last register (EDI); the other registers that need to be written back are ignored. This patch fixes it. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
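A toy model of the pusha semantics the emulator has to honour (not the emulator code itself): all eight GPRs are stored in architectural order, with the original SP value in the SP slot, so every store needs writeback, not only the final EDI one.

    enum { AX, CX, DX, BX, SP, BP, SI, DI, NGPR };

    /* Fills out[0..7] with the values pusha stores, oldest push first. */
    static void pusha_model(const unsigned long regs[NGPR],
                            unsigned long out[NGPR])
    {
        unsigned long old_sp = regs[SP];
        int i;

        for (i = AX; i < NGPR; i++)     /* every register is written, not just DI */
            out[i] = (i == SP) ? old_sp : regs[i];
    }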
-
Zachary Amsden authored
Fix a slight error with an assertion in the local APIC code. Signed-off-by: Zachary Amsden <zamsden@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
When marking the parent's unsync_child_bitmap, if the parent is already unsynced there is no need to walk its own parents; this avoids some unnecessary work. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
In the current code, some pages' unsync_child_bitmap is not cleared completely in mmu_sync_children(); for example, if two PDPEs share one PDT, one of the PDPEs' unsync_child_bitmap is not cleared. Currently this does no harm beyond a little extra overhead, but it is preparatory work for a later patch. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
Decrease sp->unsync_children after clearing a bit in unsync_child_bitmap. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
If the sync-sp is only being synced transiently, don't mark its ptes notrap. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
The sync page is already write-protected in mmu_sync_children(); don't write-protect it again. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
Use a wrapper function to clean up the page-dirty check. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Xiao Guangrong authored
Rename 'page' and 'shadow_page' to 'sp' to better fit the context Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Sheng Yang authored
This patch enables save/restore of xsave state. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
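A minimal userspace sketch of the save/restore flow this enables, assuming the ioctl pair is KVM_GET_XSAVE/KVM_SET_XSAVE from <linux/kvm.h> and that the arguments are vcpu file descriptors:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int migrate_xsave_example(int src_vcpu_fd, int dst_vcpu_fd)
    {
        struct kvm_xsave xsave;         /* 4KB xsave region */

        if (ioctl(src_vcpu_fd, KVM_GET_XSAVE, &xsave) < 0)
            return -1;                  /* read the source vcpu's xsave state */
        if (ioctl(dst_vcpu_fd, KVM_SET_XSAVE, &xsave) < 0)
            return -1;                  /* load it into the destination vcpu */
        return 0;
    }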
-
Denis Kirjanov authored
Fix compile warning:

  CC [M]  arch/powerpc/kvm/powerpc.o
arch/powerpc/kvm/powerpc.c: In function 'kvm_arch_vcpu_ioctl_run':
arch/powerpc/kvm/powerpc.c:290: warning: 'gpr' may be used uninitialized in this function
arch/powerpc/kvm/powerpc.c:290: note: 'gpr' was declared here

Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Avi Kivity authored
On Intel, we call skip_emulated_instruction() even if we injected a #GP, resulting in the #GP pointing at the wrong address. Fix by injecting the exception and skipping the instruction at the same place, so we can do just one or the other. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
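A sketch of the shape the fix moves toward (the handler name is hypothetical; kvm_set_msr(), kvm_inject_gp() and skip_emulated_instruction() are the existing kvm helpers): either inject the fault or advance rip, never both.

    static int handle_wrmsr_example(struct kvm_vcpu *vcpu, u32 msr, u64 data)
    {
        if (kvm_set_msr(vcpu, msr, data) != 0) {
            kvm_inject_gp(vcpu, 0);             /* rip stays on the faulting wrmsr */
            return 1;
        }
        skip_emulated_instruction(vcpu);        /* only advance rip on success */
        return 1;
    }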
-
Avi Kivity authored
On Intel, we call skip_emulated_instruction() even if we injected a #GP, resulting in the #GP pointing at the wrong address. Fix by injecting the exception and skipping the instruction at the same place, so we can do just one or the other. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Avi Kivity authored
On Intel, we call skip_emulated_instruction() even if we injected a #GP, resulting in the #GP pointing at the wrong address. Fix by injecting the exception and skipping the instruction at the same place, so we can do just one or the other. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Dexuan Cui authored
This patch enables the guest to use the XSAVE/XRSTOR instructions. We assume that host_xcr0 uses all the bits the host OS supports, and we load xcr0 the same way we handle the fpu: as late as we can. Signed-off-by: Dexuan Cui <dexuan.cui@intel.com> Signed-off-by: Sheng Yang <sheng@linux.intel.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
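A minimal sketch of the late xcr0 load, assuming an xsetbv() helper wrapping the XSETBV instruction, a vcpu->arch.xcr0 field holding the guest value, and host_xcr0 holding the host value (XCR_XFEATURE_ENABLED_MASK selects XCR0):

    static void load_guest_xcr0_example(struct kvm_vcpu *vcpu)
    {
        if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
            vcpu->arch.xcr0 != host_xcr0)
            xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
    }

    /* The mirror image restores host_xcr0 when the vcpu is put, just like the fpu. */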
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Andi Kleen authored
No real bugs in this one. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Andi Kleen authored
When the user passes in a NULL mask, pass this on from the ioctl handler. Found by gcc 4.6's new warnings. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
Delay the local tlb flush until we enter guest mode; this reduces the vpid flush frequency and reduces remote tlb flush IPIs (if the KVM_REQ_TLB_FLUSH bit is already set, no IPI is sent). Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
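A sketch of the request-bit pattern (the helper names are illustrative; KVM_REQ_TLB_FLUSH, vcpu->requests and kvm_x86_ops->tlb_flush are the existing kvm pieces):

    /* Record the flush instead of performing it immediately. */
    static void queue_local_tlb_flush_example(struct kvm_vcpu *vcpu)
    {
        set_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests);
    }

    /* Just before entering guest mode, service at most one flush. */
    static void service_tlb_flush_example(struct kvm_vcpu *vcpu)
    {
        if (test_and_clear_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
            kvm_x86_ops->tlb_flush(vcpu);
    }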
-
Xiao Guangrong authored
Use kvm_mmu_flush_tlb() function instead of calling kvm_x86_ops->tlb_flush(vcpu) directly. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
This remote tlb flush is not necessary since we already synced when the sp was zapped. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
fix:

[ INFO: suspicious rcu_dereference_check() usage. ]
---------------------------------------------------
include/linux/kvm_host.h:258 invoked rcu_dereference_check() without protection!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 1
1 lock held by qemu-system-x86/3796:
 #0:  (&vcpu->mutex){+.+.+.}, at: [<ffffffffa0217fd8>] vcpu_load+0x1a/0x66 [kvm]

stack backtrace:
Pid: 3796, comm: qemu-system-x86 Not tainted 2.6.34 #25
Call Trace:
 [<ffffffff81070ed1>] lockdep_rcu_dereference+0x9d/0xa5
 [<ffffffffa0214fdf>] gfn_to_memslot_unaliased+0x65/0xa0 [kvm]
 [<ffffffffa0216139>] gfn_to_hva+0x22/0x4c [kvm]
 [<ffffffffa0216217>] kvm_write_guest_page+0x2a/0x7f [kvm]
 [<ffffffffa0216286>] kvm_clear_guest_page+0x1a/0x1c [kvm]
 [<ffffffffa0278239>] init_rmode+0x3b/0x180 [kvm_intel]
 [<ffffffffa02786ce>] vmx_set_cr0+0x350/0x4d3 [kvm_intel]
 [<ffffffffa02274ff>] kvm_arch_vcpu_ioctl_set_sregs+0x122/0x31a [kvm]
 [<ffffffffa021859c>] kvm_vcpu_ioctl+0x578/0xa3d [kvm]
 [<ffffffff8106624c>] ? cpu_clock+0x2d/0x40
 [<ffffffff810f7d86>] ? fget_light+0x244/0x28e
 [<ffffffff810709b9>] ? trace_hardirqs_off_caller+0x1f/0x10e
 [<ffffffff8110501b>] vfs_ioctl+0x32/0xa6
 [<ffffffff81105597>] do_vfs_ioctl+0x47f/0x4b8
 [<ffffffff813ae654>] ? sub_preempt_count+0xa3/0xb7
 [<ffffffff810f7da8>] ? fget_light+0x266/0x28e
 [<ffffffff810f7c53>] ? fget_light+0x111/0x28e
 [<ffffffff81105617>] sys_ioctl+0x47/0x6a
 [<ffffffff81002c1b>] system_call_fastpath+0x16/0x1b

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Gui Jianfeng authored
The name "pid_sync_vcpu_all" isn't appropriate since it just affect a single vpid, so rename it to vpid_sync_vcpu_single(). Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Gui Jianfeng authored
Add all-context INVVPID type support. Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
Collect remote tlb flushes in the kvm_mmu_pte_write() path. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
Now we can safely traverse the sp hlist. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
Use kvm_mmu_prepare_zap_page() and kvm_mmu_commit_zap_page() instead of kvm_mmu_zap_page(); this reduces remote tlb flush IPIs. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
In a later patch, we will change the way sps are zapped to:

  kvm_mmu_prepare_zap_page A
  kvm_mmu_prepare_zap_page B
  kvm_mmu_prepare_zap_page C
  ...
  kvm_mmu_commit_zap_page

[ multiple zapped sps only need one kvm_mmu_commit_zap_page call ]

In __kvm_mmu_free_some_pages(), the number of free pages is read from 'vcpu->kvm->arch.n_free_mmu_pages' inside the loop, which hinders applying kvm_mmu_prepare_zap_page() and kvm_mmu_commit_zap_page() there, since kvm_mmu_prepare_zap_page() does not free the sp. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Xiao Guangrong authored
Split kvm_mmu_zap_page() into kvm_mmu_prepare_zap_page() and kvm_mmu_commit_zap_page(), so that we can:
- traverse the hlist safely
- easily gather the remote tlb flushes that occur while pages are zapped
These features are used in the later patches; see the sketch below. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
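A sketch of the two-phase zap this split enables (the invalid_list parameter and the sp_a/sp_b/sp_c pages are illustrative):

    LIST_HEAD(invalid_list);

    /* Phase 1: unlink each sp and queue it; no TLB flush or freeing yet. */
    kvm_mmu_prepare_zap_page(kvm, sp_a, &invalid_list);
    kvm_mmu_prepare_zap_page(kvm, sp_b, &invalid_list);
    kvm_mmu_prepare_zap_page(kvm, sp_c, &invalid_list);

    /* Phase 2: one remote TLB flush for the whole batch, then free the pages. */
    kvm_mmu_commit_zap_page(kvm, &invalid_list);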
-
Xiao Guangrong authored
Introduce for_each_gfn_sp() and for_each_gfn_indirect_valid_sp() to clean up hlist traversal. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
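One plausible shape for such a helper (illustrative only; it assumes the existing mmu_page_hash table, kvm_page_table_hashfn(), and the four-argument hlist_for_each_entry() of this era - the real macros may differ):

    #define for_each_gfn_sp(kvm, sp, gfn, pos)                              \
        hlist_for_each_entry(sp, pos,                                       \
                &(kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)],     \
                hash_link)                                                  \
            if ((sp)->gfn != (gfn)) {} else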
-
Xiao Guangrong authored
In kvm_mmu_unprotect_page(), the invalid sp can be skipped Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Gui Jianfeng authored
According to the SDM, we need to check whether the single-context INVVPID type is supported before issuing the invvpid instruction. Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com> Reviewed-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Lai Jiangshan authored
Should use linux/uaccess.h instead of asm/uaccess.h Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Lai Jiangshan authored
The type of '*new.rmap' is not 'struct page *', fix it Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Sheng Yang authored
We only support 4-level EPT page tables for now. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Glauber Costa authored
This patch adds a file that documents the usage of KVM-specific MSRs. Signed-off-by: Glauber Costa <glommer@redhat.com> Reviewed-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Andreas Schwab authored
Instead of instantiating a whole thread_struct on the stack use only the required parts of it. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Tested-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Mohammed Gamal authored
The vmexit handler returns KVM_EXIT_UNKNOWN since there is no handler for vmentry failures. Intercept vmentry failures and return KVM_EXIT_FAIL_ENTRY to userspace instead. Signed-off-by: Mohammed Gamal <m.gamal005@gmail.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
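A userspace sketch of how a VMM could consume the new exit reason after KVM_RUN (the fields come from <linux/kvm.h>):

    #include <linux/kvm.h>
    #include <stdio.h>

    static void report_fail_entry_example(const struct kvm_run *run)
    {
        if (run->exit_reason == KVM_EXIT_FAIL_ENTRY)
            fprintf(stderr, "vmentry failed, hardware reason 0x%llx\n",
                    (unsigned long long)run->fail_entry.hardware_entry_failure_reason);
    }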
-
Gui Jianfeng authored
There's no need to calculate quadrant if tdp is enabled. Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-