- 30 Jan, 2008 40 commits
-
Avi Kivity authored
Instead of fetching one byte at a time, prefetch 15 bytes (or until the next page boundary) to avoid guest page table walks. Signed-off-by: Avi Kivity <avi@qumranet.com>
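A minimal, self-contained sketch of the sizing rule described above; the prefetch_len() helper is hypothetical, not the emulator's actual code:

/* Fetch up to 15 bytes at once, but never past the end of the current
 * page, so a single guest page-table walk covers the whole fetch. */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long prefetch_len(unsigned long rip)
{
        unsigned long to_page_end = PAGE_SIZE - (rip & (PAGE_SIZE - 1));

        return to_page_end < 15 ? to_page_end : 15;
}

int main(void)
{
        printf("%lu\n", prefetch_len(0x1000));  /* 15 - far from the boundary */
        printf("%lu\n", prefetch_len(0x1ffa));  /* 6  - stops at the boundary  */
        return 0;
}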
-
Avi Kivity authored
Theoretically used to access memory known to be ordinary RAM, it was never implemented. It is questionable whether it is possible to implement it correctly. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Izik Eidus authored
Improve dirty bit setting for pages that kvm releases: until now, every page we released was marked dirty; from now on, we mark dirty only pages that could potentially have been dirtied. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
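A hedged, self-contained sketch of the policy change; the struct and helper are made up for illustration and are not the kernel's symbols:

/* Pages that were only ever mapped read-only cannot have been dirtied,
 * so release them clean instead of unconditionally marking them dirty. */
#include <stdbool.h>

struct guest_page {
        bool writable;   /* was the mapping writable? */
        bool dirty;      /* reported to dirty logging on release */
};

static void release_page(struct guest_page *page)
{
        /* old behaviour: page->dirty = true; unconditionally */
        if (page->writable)
                page->dirty = true;   /* only potentially-dirty pages */
}

int main(void)
{
        struct guest_page ro = { .writable = false }, rw = { .writable = true };

        release_page(&ro);
        release_page(&rw);
        return (!ro.dirty && rw.dirty) ? 0 : 1;
}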
-
Izik Eidus authored
When we map a page, we check whether some other vcpu mapped it for us and if so, bail out. But we should decrease the refcount on the page as we do so. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
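The rule in a hedged, self-contained illustration; the get/put helpers here only mirror the kernel's refcounting convention, they are not the actual API:

/* Every reference taken while resolving the fault must be dropped again,
 * including on the early-exit path where another vcpu already installed
 * the mapping for us. */
#include <assert.h>
#include <stdbool.h>

struct page { int refcount; };

static void get_page(struct page *p) { p->refcount++; }
static void put_page(struct page *p) { p->refcount--; }

static void map_page(struct page *p, bool already_mapped_by_other_vcpu)
{
        get_page(p);                 /* taken while resolving the fault */
        if (already_mapped_by_other_vcpu) {
                put_page(p);         /* the fix: drop it before bailing out */
                return;
        }
        /* ... install the mapping; the reference is kept until unmap ... */
}

int main(void)
{
        struct page p = { .refcount = 0 };

        map_page(&p, true);
        assert(p.refcount == 0);     /* no leaked reference */
        return 0;
}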
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structures kvm_cpuid_entry and kvm_cpuid from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
Move the structures kvm_sregs, kvm_msr_entry, kvm_msrs, and kvm_msr_list from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structures kvm_segment and kvm_dtable from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the lapic_state structure from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves structure kvm_regs to include/asm-x86/kvm.h. Each architecture will need to create its own version of this structure. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structures kvm_pic_state and kvm_ioapic_state to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves struct kvm_memory_alias from include/linux/kvm.h to include/asm-x86/kvm.h. It also makes include/linux/kvm.h include <asm/kvm.h>. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
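A hedged sketch of the resulting header arrangement described by this series; fragments only, and the body of the moved structure is omitted:

/* include/asm-x86/kvm.h (illustrative fragment): arch-specific ABI
 * structures such as kvm_memory_alias now live in the per-arch header. */
struct kvm_memory_alias;        /* full definition moved here */

/* include/linux/kvm.h (illustrative fragment): the generic header now
 * pulls in whichever per-arch header <asm/kvm.h> resolves to. */
#include <asm/kvm.h>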
-
Hollis Blanchard authored
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Hollis Blanchard authored
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Hollis Blanchard authored
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Izik Eidus authored
Use kvm_write_guest_page() with empty_zero_page, instead of doing kmap and memset. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
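The shape of the change as a hedged fragment; the call-site context and exact argument order are assumptions, not the actual hunk:

/* Before (conceptually): kmap the guest page and memset() part of it.
 * After: one call to the generic accessor, sourcing zeros from the
 * kernel's empty_zero_page -- no mapping or memset at the call site. */
kvm_write_guest_page(vcpu->kvm, gfn, empty_zero_page, offset, len);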
-
Izik Eidus authored
Things are simpler and more regular this way. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jan Kiszka authored
Ensure that segment.base == segment.selector << 4 when entering real mode on Intel so that the CPU will not bark at us. This fixes an old protected-mode demo from http://www.x86.org/articles/pmbasics/tspec_a1_doc.htm. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
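A self-contained illustration of the invariant; the struct here is a stand-in, not the kernel's kvm_segment: in real mode a segment's linear base is its selector shifted left by four, and VMX refuses to enter a guest whose hidden segment state violates that.

#include <assert.h>
#include <stdint.h>

struct seg {                    /* stand-in for the segment cache fields */
        uint16_t selector;
        uint32_t base;
};

static void fix_rmode_seg(struct seg *s)
{
        s->base = (uint32_t)s->selector << 4;   /* enforce base == selector << 4 */
}

int main(void)
{
        struct seg cs = { .selector = 0xf000, .base = 0 };

        fix_rmode_seg(&cs);
        assert(cs.base == 0xf0000);
        return 0;
}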
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Move the kvm_mmu init and exit functionality out of kvm_main.c. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Meanwhile, keep the interface common, and leave as much logic in common code as possible. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Amit Shah authored
Instead of having each architecture do it individually, we do this in the arch-independent code (just x86 as of now). [avi: add svm to the mix, which was added to mainline during the 2.6.24-rc process] Signed-off-by: Amit Shah <amit.shah@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
This is in addition to the current virtual cpu statistics. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
-
Avi Kivity authored
Measure the number of times we switch the fpu state. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
This is a little more accurate (since it counts actual reloads, not potential reloads), and reverses the sense of the statistic to measure a bad event like most of the other stats (e.g. we want to minimize all counters). Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Add two arch hooks to handle kvm_create_vm and kvm_destroy_vm. For now, just put the io_bus init and destroy in common code. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
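A hedged sketch of the split; the hook and io_bus names follow the commit's wording and KVM conventions of that era, but the exact symbols and call sequence should be treated as assumptions:

/* Generic VM creation (sketch): the arch hook sets up the arch-specific
 * parts, then common code initializes the shared io buses. */
static struct kvm *kvm_create_vm(void)
{
        struct kvm *kvm = kvm_arch_create_vm();   /* arch hook #1 */

        if (IS_ERR(kvm))
                return kvm;

        kvm_io_bus_init(&kvm->pio_bus);           /* kept in common code */
        kvm_io_bus_init(&kvm->mmio_bus);
        /* ... */
        return kvm;
}
/* kvm_destroy_vm() mirrors this: destroy the io buses in common code,
 * then call the kvm_arch_destroy_vm() hook. */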
-
Zhang Xiantao authored
Since their callers are not declared with __init. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Joe Perches authored
Fix sparse warnings "Using plain integer as NULL pointer" Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
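The class of fix, shown as a minimal compilable example; the lookup function is made up for illustration:

#include <stddef.h>

struct foo { int x; };

/* sparse emits "Using plain integer as NULL pointer" when 0 is returned or
 * assigned where a pointer is expected; the fix is to spell it NULL. */
static struct foo *find_foo(int key)
{
        if (key < 0)
                return NULL;   /* was: return 0; */
        return NULL;           /* lookup elided in this illustration */
}

int main(void)
{
        return find_foo(-1) == NULL ? 0 : 1;
}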
-
Zhang Xiantao authored
Move kvm_vcpu_ioctl_translate to arch code, since the mmu will be moved under arch. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
We pass vcpu, vmx->fail, and vmx->launched to assembly code, but all three are fields within vmx. Consolidate by only passing in vmx and offsets for the rest. Signed-off-by: Avi Kivity <avi@qumranet.com>
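A self-contained illustration of the consolidation technique; the struct and field names stand in for vmx, fail, and launched, and this is not the vmx.c hunk: pass one base pointer and let compile-time offsets locate the fields.

#include <assert.h>
#include <stddef.h>

struct vcpu_demo {              /* stand-in for the vmx structure */
        int  launched;
        int  fail;
        long regs[16];
};

int main(void)
{
        struct vcpu_demo v = { .launched = 1, .fail = 0 };
        char *base = (char *)&v;

        /* Instead of passing &v.launched and &v.fail separately, pass only
         * 'base' and reach the fields through their constant offsets -- the
         * same trick assembly can use with offsetof()-generated constants. */
        int *launched = (int *)(base + offsetof(struct vcpu_demo, launched));
        int *fail     = (int *)(base + offsetof(struct vcpu_demo, fail));

        assert(*launched == 1 && *fail == 0);
        return 0;
}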
-
Zhang Xiantao authored
Turn the KVM_CHECK_EXTENSION code into a function so that each arch can define its capabilities independently. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
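A hedged sketch of the shape this takes; the capability constants are real KVM capability names of that era, but the exact set each arch reports, and the precise function signature, are assumptions:

/* Per-arch capability check (sketch): the generic KVM_CHECK_EXTENSION ioctl
 * handler calls this instead of open-coding one big switch. */
long kvm_dev_ioctl_check_extension(long ext)
{
        switch (ext) {
        case KVM_CAP_IRQCHIP:           /* capabilities this arch supports */
        case KVM_CAP_HLT:
                return 1;
        default:
                return 0;
        }
}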
-
Sheng Yang authored
The current 'lods' and 'stos' emulation depends on the incoming CR2 value rather than decoding the memory address from the registers. Signed-off-by: Sheng Yang <sheng.yang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
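A self-contained illustration of the principle for the real-mode case; the helper is hypothetical: a string op's operand address is derived from the architectural registers (ES:DI for 'stos', DS:SI for 'lods'), not from whatever CR2 happened to hold when the instruction faulted.

#include <stdint.h>
#include <stdio.h>

static uint32_t real_mode_linear(uint16_t seg, uint16_t off)
{
        return ((uint32_t)seg << 4) + off;   /* segment * 16 + offset */
}

int main(void)
{
        uint16_t es = 0xb800, di = 0x00a0;   /* example register state */

        printf("stos writes at 0x%x\n", real_mode_linear(es, di));
        return 0;
}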
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-