Commit c812a51d authored by Linus Torvalds

Merge branch 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (145 commits)
  KVM: x86: Add KVM_CAP_X86_ROBUST_SINGLESTEP
  KVM: VMX: Update instruction length on intercepted BP
  KVM: Fix emulate_sys[call, enter, exit]()'s fault handling
  KVM: Fix segment descriptor loading
  KVM: Fix load_guest_segment_descriptor() to inject page fault
  KVM: x86 emulator: Forbid modifying CS segment register by mov instruction
  KVM: Convert kvm->requests_lock to raw_spinlock_t
  KVM: Convert i8254/i8259 locks to raw_spinlocks
  KVM: x86 emulator: disallow opcode 82 in 64-bit mode
  KVM: x86 emulator: code style cleanup
  KVM: Plan obsolescence of kernel allocated slots, paravirt mmu
  KVM: x86 emulator: Add LOCK prefix validity checking
  KVM: x86 emulator: Check CPL level during privilege instruction emulation
  KVM: x86 emulator: Fix popf emulation
  KVM: x86 emulator: Check IOPL level during io instruction emulation
  KVM: x86 emulator: fix memory access during x86 emulation
  KVM: x86 emulator: Add Virtual-8086 mode of emulation
  KVM: x86 emulator: Add group9 instruction decoding
  KVM: x86 emulator: Add group8 instruction decoding
  KVM: do not store wqh in irqfd
  ...

Trivial conflicts in Documentation/feature-removal-schedule.txt
parents 9467c4fd d2be1651
@@ -556,3 +556,35 @@ Why: udev fully replaces this special file system that only contains CAPI
NCCI TTY device nodes. User space (pppdcapiplugin) works without
noticing the difference.
Who: Jan Kiszka <jan.kiszka@web.de>
----------------------------
What: KVM memory aliases support
When: July 2010
Why: Memory aliasing support is used for speeding up guest vga access
through the vga windows.
Modern userspace no longer uses this feature, so it's just bitrotted
code and can be removed with no impact.
Who: Avi Kivity <avi@redhat.com>
----------------------------
What: KVM kernel-allocated memory slots
When: July 2010
Why: Since 2.6.25, kvm supports user-allocated memory slots, which are
much more flexible than kernel-allocated slots. All current userspace
supports the newer interface and this code can be removed with no
impact.
Who: Avi Kivity <avi@redhat.com>
----------------------------
What: KVM paravirt mmu host support
When: January 2011
Why: The paravirt mmu host support is slower than non-paravirt mmu, both
on newer and older hardware. It is already not exposed to the guest,
and kept only for live migration purposes.
Who: Avi Kivity <avi@redhat.com>
----------------------------
@@ -23,12 +23,12 @@ of a virtual machine. The ioctls belong to three classes
Only run vcpu ioctls from the same thread that was used to create the
vcpu.
-2. File descritpors
+2. File descriptors
The kvm API is centered around file descriptors. An initial
open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
can be used to issue system ioctls. A KVM_CREATE_VM ioctl on this
-handle will create a VM file descripror which can be used to issue VM
+handle will create a VM file descriptor which can be used to issue VM
ioctls. A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
and return a file descriptor pointing to it. Finally, ioctls on a vcpu
fd can be used to control the vcpu, including the important task of
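A minimal userspace sketch of the call sequence described above (not part of this patch; error handling, the KVM_GET_API_VERSION check and the kvm_run mmap are omitted, and vcpu id 0 is assumed):

    /* Minimal sketch: error handling omitted, vcpu id 0 assumed. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int create_vcpu0(int *vm_fd)
    {
            int kvm  = open("/dev/kvm", O_RDWR);       /* system fd: system ioctls */
            int vm   = ioctl(kvm, KVM_CREATE_VM, 0);   /* VM fd: VM ioctls */
            int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /* vcpu fd: vcpu ioctls */

            *vm_fd = vm;
            return vcpu;  /* vcpu ioctls (KVM_RUN, ...) must come from this thread */
    }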
@@ -643,7 +643,7 @@ Type: vm ioctl
Parameters: struct kvm_clock_data (in)
Returns: 0 on success, -1 on error
-Sets the current timestamp of kvmclock to the valued specific in its parameter.
+Sets the current timestamp of kvmclock to the value specified in its parameter.
In conjunction with KVM_GET_CLOCK, it is used to ensure monotonicity on scenarios
such as migration.
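A minimal sketch of how the two ioctls pair up across a migration (not part of this patch; both VM fds are assumed to already exist and error handling is omitted):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: carry kvmclock from the source VM to the destination VM. */
    static void migrate_kvmclock(int src_vm_fd, int dst_vm_fd)
    {
            struct kvm_clock_data data;

            ioctl(src_vm_fd, KVM_GET_CLOCK, &data);  /* read current kvmclock value */
            ioctl(dst_vm_fd, KVM_SET_CLOCK, &data);  /* restore it, keeping guest time monotonic */
    }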
@@ -795,11 +795,11 @@ Unused.
__u64 data_offset; /* relative to kvm_run start */
} io;
-If exit_reason is KVM_EXIT_IO_IN or KVM_EXIT_IO_OUT, then the vcpu has
+If exit_reason is KVM_EXIT_IO, then the vcpu has
executed a port I/O instruction which could not be satisfied by kvm.
data_offset describes where the data is located (KVM_EXIT_IO_OUT) or
where kvm expects application code to place the data for the next
-KVM_RUN invocation (KVM_EXIT_IO_IN). Data format is a patcked array.
+KVM_RUN invocation (KVM_EXIT_IO_IN). Data format is a packed array.
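A minimal sketch of how application code might consume such an exit (assumes 'run' points at the vcpu's mmap'ed struct kvm_run; only a single-element OUT access is handled and error handling is omitted):

    #include <stdio.h>
    #include <linux/kvm.h>

    /* Sketch: consume a single-element port-write exit after KVM_RUN returns. */
    static void handle_io_exit(struct kvm_run *run)
    {
            if (run->exit_reason == KVM_EXIT_IO &&
                run->io.direction == KVM_EXIT_IO_OUT) {
                    __u8 *data = (__u8 *)run + run->io.data_offset;

                    /* run->io.size bytes per element, run->io.count elements */
                    printf("out to port 0x%x: 0x%x\n", run->io.port, data[0]);
            }
    }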
struct {
struct kvm_debug_exit_arch arch;
@@ -815,7 +815,7 @@ Unused.
__u8 is_write;
} mmio;
-If exit_reason is KVM_EXIT_MMIO or KVM_EXIT_IO_OUT, then the vcpu has
+If exit_reason is KVM_EXIT_MMIO, then the vcpu has
executed a memory-mapped I/O instruction which could not be satisfied
by kvm. The 'data' member contains the written data if 'is_write' is
true, and should be filled by application code otherwise.
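A comparable sketch for MMIO exits (the mmio_read/mmio_write device-model helpers are hypothetical and not part of the KVM API):

    #include <linux/kvm.h>

    /* Hypothetical device-model callbacks; not part of the KVM API. */
    extern void mmio_write(__u64 addr, const void *buf, __u32 len);
    extern void mmio_read(__u64 addr, void *buf, __u32 len);

    static void handle_mmio_exit(struct kvm_run *run)
    {
            if (run->exit_reason != KVM_EXIT_MMIO)
                    return;

            if (run->mmio.is_write)  /* guest wrote run->mmio.data */
                    mmio_write(run->mmio.phys_addr, run->mmio.data, run->mmio.len);
            else                     /* fill run->mmio.data before the next KVM_RUN */
                    mmio_read(run->mmio.phys_addr, run->mmio.data, run->mmio.len);
    }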
......
@@ -3173,7 +3173,7 @@ F: arch/x86/include/asm/svm.h
F: arch/x86/kvm/svm.c
KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
-M: Hollis Blanchard <hollisb@us.ibm.com>
+M: Alexander Graf <agraf@suse.de>
L: kvm-ppc@vger.kernel.org
W: http://kvm.qumranet.com
S: Supported
......
@@ -26,6 +26,7 @@ config KVM
select ANON_INODES
select HAVE_KVM_IRQCHIP
select KVM_APIC_ARCHITECTURE
+select KVM_MMIO
---help---
Support hosting fully virtualized guest machines using hardware
virtualization extensions. You will need a fairly recent
......
...@@ -241,10 +241,10 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -241,10 +241,10 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return 0; return 0;
mmio: mmio:
if (p->dir) if (p->dir)
r = kvm_io_bus_read(&vcpu->kvm->mmio_bus, p->addr, r = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, p->addr,
p->size, &p->data); p->size, &p->data);
else else
r = kvm_io_bus_write(&vcpu->kvm->mmio_bus, p->addr, r = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, p->addr,
p->size, &p->data); p->size, &p->data);
if (r) if (r)
printk(KERN_ERR"kvm: No iodevice found! addr:%lx\n", p->addr); printk(KERN_ERR"kvm: No iodevice found! addr:%lx\n", p->addr);
...@@ -636,12 +636,9 @@ static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu) ...@@ -636,12 +636,9 @@ static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu)
static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{ {
union context *host_ctx, *guest_ctx; union context *host_ctx, *guest_ctx;
int r; int r, idx;
/* idx = srcu_read_lock(&vcpu->kvm->srcu);
* down_read() may sleep and return with interrupts enabled
*/
down_read(&vcpu->kvm->slots_lock);
again: again:
if (signal_pending(current)) { if (signal_pending(current)) {
...@@ -663,7 +660,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -663,7 +660,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (r < 0) if (r < 0)
goto vcpu_run_fail; goto vcpu_run_fail;
up_read(&vcpu->kvm->slots_lock); srcu_read_unlock(&vcpu->kvm->srcu, idx);
kvm_guest_enter(); kvm_guest_enter();
/* /*
...@@ -687,7 +684,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -687,7 +684,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
kvm_guest_exit(); kvm_guest_exit();
preempt_enable(); preempt_enable();
down_read(&vcpu->kvm->slots_lock); idx = srcu_read_lock(&vcpu->kvm->srcu);
r = kvm_handle_exit(kvm_run, vcpu); r = kvm_handle_exit(kvm_run, vcpu);
...@@ -697,10 +694,10 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -697,10 +694,10 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
} }
out: out:
up_read(&vcpu->kvm->slots_lock); srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (r > 0) { if (r > 0) {
kvm_resched(vcpu); kvm_resched(vcpu);
down_read(&vcpu->kvm->slots_lock); idx = srcu_read_lock(&vcpu->kvm->srcu);
goto again; goto again;
} }
...@@ -971,7 +968,7 @@ long kvm_arch_vm_ioctl(struct file *filp, ...@@ -971,7 +968,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
goto out; goto out;
r = kvm_setup_default_irq_routing(kvm); r = kvm_setup_default_irq_routing(kvm);
if (r) { if (r) {
kfree(kvm->arch.vioapic); kvm_ioapic_destroy(kvm);
goto out; goto out;
} }
break; break;
...@@ -1377,12 +1374,14 @@ static void free_kvm(struct kvm *kvm) ...@@ -1377,12 +1374,14 @@ static void free_kvm(struct kvm *kvm)
static void kvm_release_vm_pages(struct kvm *kvm) static void kvm_release_vm_pages(struct kvm *kvm)
{ {
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot; struct kvm_memory_slot *memslot;
int i, j; int i, j;
unsigned long base_gfn; unsigned long base_gfn;
for (i = 0; i < kvm->nmemslots; i++) { slots = rcu_dereference(kvm->memslots);
memslot = &kvm->memslots[i]; for (i = 0; i < slots->nmemslots; i++) {
memslot = &slots->memslots[i];
base_gfn = memslot->base_gfn; base_gfn = memslot->base_gfn;
for (j = 0; j < memslot->npages; j++) { for (j = 0; j < memslot->npages; j++) {
...@@ -1405,6 +1404,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm) ...@@ -1405,6 +1404,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kfree(kvm->arch.vioapic); kfree(kvm->arch.vioapic);
kvm_release_vm_pages(kvm); kvm_release_vm_pages(kvm);
kvm_free_physmem(kvm); kvm_free_physmem(kvm);
cleanup_srcu_struct(&kvm->srcu);
free_kvm(kvm); free_kvm(kvm);
} }
...@@ -1576,15 +1576,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp, ...@@ -1576,15 +1576,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r; return r;
} }
int kvm_arch_set_memory_region(struct kvm *kvm, int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem, struct kvm_memory_slot *memslot,
struct kvm_memory_slot old, struct kvm_memory_slot old,
struct kvm_userspace_memory_region *mem,
int user_alloc) int user_alloc)
{ {
unsigned long i; unsigned long i;
unsigned long pfn; unsigned long pfn;
int npages = mem->memory_size >> PAGE_SHIFT; int npages = memslot->npages;
struct kvm_memory_slot *memslot = &kvm->memslots[mem->slot];
unsigned long base_gfn = memslot->base_gfn; unsigned long base_gfn = memslot->base_gfn;
if (base_gfn + npages > (KVM_MAX_MEM_SIZE >> PAGE_SHIFT)) if (base_gfn + npages > (KVM_MAX_MEM_SIZE >> PAGE_SHIFT))
...@@ -1608,6 +1608,14 @@ int kvm_arch_set_memory_region(struct kvm *kvm, ...@@ -1608,6 +1608,14 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
return 0; return 0;
} }
void kvm_arch_commit_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem,
struct kvm_memory_slot old,
int user_alloc)
{
return;
}
void kvm_arch_flush_shadow(struct kvm *kvm) void kvm_arch_flush_shadow(struct kvm *kvm)
{ {
kvm_flush_remote_tlbs(kvm); kvm_flush_remote_tlbs(kvm);
...@@ -1802,7 +1810,7 @@ static int kvm_ia64_sync_dirty_log(struct kvm *kvm, ...@@ -1802,7 +1810,7 @@ static int kvm_ia64_sync_dirty_log(struct kvm *kvm,
if (log->slot >= KVM_MEMORY_SLOTS) if (log->slot >= KVM_MEMORY_SLOTS)
goto out; goto out;
memslot = &kvm->memslots[log->slot]; memslot = &kvm->memslots->memslots[log->slot];
r = -ENOENT; r = -ENOENT;
if (!memslot->dirty_bitmap) if (!memslot->dirty_bitmap)
goto out; goto out;
...@@ -1827,6 +1835,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, ...@@ -1827,6 +1835,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
struct kvm_memory_slot *memslot; struct kvm_memory_slot *memslot;
int is_dirty = 0; int is_dirty = 0;
mutex_lock(&kvm->slots_lock);
spin_lock(&kvm->arch.dirty_log_lock); spin_lock(&kvm->arch.dirty_log_lock);
r = kvm_ia64_sync_dirty_log(kvm, log); r = kvm_ia64_sync_dirty_log(kvm, log);
...@@ -1840,12 +1849,13 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, ...@@ -1840,12 +1849,13 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
/* If nothing is dirty, don't bother messing with page tables. */ /* If nothing is dirty, don't bother messing with page tables. */
if (is_dirty) { if (is_dirty) {
kvm_flush_remote_tlbs(kvm); kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots[log->slot]; memslot = &kvm->memslots->memslots[log->slot];
n = ALIGN(memslot->npages, BITS_PER_LONG) / 8; n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
memset(memslot->dirty_bitmap, 0, n); memset(memslot->dirty_bitmap, 0, n);
} }
r = 0; r = 0;
out: out:
mutex_unlock(&kvm->slots_lock);
spin_unlock(&kvm->arch.dirty_log_lock); spin_unlock(&kvm->arch.dirty_log_lock);
return r; return r;
} }
......
...@@ -75,7 +75,7 @@ static void set_pal_result(struct kvm_vcpu *vcpu, ...@@ -75,7 +75,7 @@ static void set_pal_result(struct kvm_vcpu *vcpu,
struct exit_ctl_data *p; struct exit_ctl_data *p;
p = kvm_get_exit_data(vcpu); p = kvm_get_exit_data(vcpu);
if (p && p->exit_reason == EXIT_REASON_PAL_CALL) { if (p->exit_reason == EXIT_REASON_PAL_CALL) {
p->u.pal_data.ret = result; p->u.pal_data.ret = result;
return ; return ;
} }
...@@ -87,7 +87,7 @@ static void set_sal_result(struct kvm_vcpu *vcpu, ...@@ -87,7 +87,7 @@ static void set_sal_result(struct kvm_vcpu *vcpu,
struct exit_ctl_data *p; struct exit_ctl_data *p;
p = kvm_get_exit_data(vcpu); p = kvm_get_exit_data(vcpu);
if (p && p->exit_reason == EXIT_REASON_SAL_CALL) { if (p->exit_reason == EXIT_REASON_SAL_CALL) {
p->u.sal_data.ret = result; p->u.sal_data.ret = result;
return ; return ;
} }
...@@ -322,7 +322,7 @@ static u64 kvm_get_pal_call_index(struct kvm_vcpu *vcpu) ...@@ -322,7 +322,7 @@ static u64 kvm_get_pal_call_index(struct kvm_vcpu *vcpu)
struct exit_ctl_data *p; struct exit_ctl_data *p;
p = kvm_get_exit_data(vcpu); p = kvm_get_exit_data(vcpu);
if (p && (p->exit_reason == EXIT_REASON_PAL_CALL)) if (p->exit_reason == EXIT_REASON_PAL_CALL)
index = p->u.pal_data.gr28; index = p->u.pal_data.gr28;
return index; return index;
...@@ -646,18 +646,16 @@ static void kvm_get_sal_call_data(struct kvm_vcpu *vcpu, u64 *in0, u64 *in1, ...@@ -646,18 +646,16 @@ static void kvm_get_sal_call_data(struct kvm_vcpu *vcpu, u64 *in0, u64 *in1,
p = kvm_get_exit_data(vcpu); p = kvm_get_exit_data(vcpu);
if (p) { if (p->exit_reason == EXIT_REASON_SAL_CALL) {
if (p->exit_reason == EXIT_REASON_SAL_CALL) { *in0 = p->u.sal_data.in0;
*in0 = p->u.sal_data.in0; *in1 = p->u.sal_data.in1;
*in1 = p->u.sal_data.in1; *in2 = p->u.sal_data.in2;
*in2 = p->u.sal_data.in2; *in3 = p->u.sal_data.in3;
*in3 = p->u.sal_data.in3; *in4 = p->u.sal_data.in4;
*in4 = p->u.sal_data.in4; *in5 = p->u.sal_data.in5;
*in5 = p->u.sal_data.in5; *in6 = p->u.sal_data.in6;
*in6 = p->u.sal_data.in6; *in7 = p->u.sal_data.in7;
*in7 = p->u.sal_data.in7; return ;
return ;
}
} }
*in0 = 0; *in0 = 0;
} }
......
...@@ -316,8 +316,8 @@ void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma) ...@@ -316,8 +316,8 @@ void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma)
return; return;
} else { } else {
inst_type = -1; inst_type = -1;
panic_vm(vcpu, "Unsupported MMIO access instruction! \ panic_vm(vcpu, "Unsupported MMIO access instruction! "
Bunld[0]=0x%lx, Bundle[1]=0x%lx\n", "Bunld[0]=0x%lx, Bundle[1]=0x%lx\n",
bundle.i64[0], bundle.i64[1]); bundle.i64[0], bundle.i64[1]);
} }
......
...@@ -1639,8 +1639,8 @@ void vcpu_set_psr(struct kvm_vcpu *vcpu, unsigned long val) ...@@ -1639,8 +1639,8 @@ void vcpu_set_psr(struct kvm_vcpu *vcpu, unsigned long val)
* Otherwise panic * Otherwise panic
*/ */
if (val & (IA64_PSR_PK | IA64_PSR_IS | IA64_PSR_VM)) if (val & (IA64_PSR_PK | IA64_PSR_IS | IA64_PSR_VM))
panic_vm(vcpu, "Only support guests with vpsr.pk =0 \ panic_vm(vcpu, "Only support guests with vpsr.pk =0 "
& vpsr.is=0\n"); "& vpsr.is=0\n");
/* /*
* For those IA64_PSR bits: id/da/dd/ss/ed/ia * For those IA64_PSR bits: id/da/dd/ss/ed/ia
......
...@@ -97,4 +97,10 @@ ...@@ -97,4 +97,10 @@
#define RESUME_HOST RESUME_FLAG_HOST #define RESUME_HOST RESUME_FLAG_HOST
#define RESUME_HOST_NV (RESUME_FLAG_HOST|RESUME_FLAG_NV) #define RESUME_HOST_NV (RESUME_FLAG_HOST|RESUME_FLAG_NV)
#define KVM_GUEST_MODE_NONE 0
#define KVM_GUEST_MODE_GUEST 1
#define KVM_GUEST_MODE_SKIP 2
#define KVM_INST_FETCH_FAILED -1
#endif /* __POWERPC_KVM_ASM_H__ */ #endif /* __POWERPC_KVM_ASM_H__ */
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <asm/kvm_ppc.h> #include <asm/kvm_book3s_64_asm.h>
struct kvmppc_slb { struct kvmppc_slb {
u64 esid; u64 esid;
...@@ -33,7 +33,8 @@ struct kvmppc_slb { ...@@ -33,7 +33,8 @@ struct kvmppc_slb {
bool Ks; bool Ks;
bool Kp; bool Kp;
bool nx; bool nx;
bool large; bool large; /* PTEs are 16MB */
bool tb; /* 1TB segment */
bool class; bool class;
}; };
...@@ -69,6 +70,7 @@ struct kvmppc_sid_map { ...@@ -69,6 +70,7 @@ struct kvmppc_sid_map {
struct kvmppc_vcpu_book3s { struct kvmppc_vcpu_book3s {
struct kvm_vcpu vcpu; struct kvm_vcpu vcpu;
struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
struct kvmppc_sid_map sid_map[SID_MAP_NUM]; struct kvmppc_sid_map sid_map[SID_MAP_NUM];
struct kvmppc_slb slb[64]; struct kvmppc_slb slb[64];
struct { struct {
...@@ -89,6 +91,7 @@ struct kvmppc_vcpu_book3s { ...@@ -89,6 +91,7 @@ struct kvmppc_vcpu_book3s {
u64 vsid_next; u64 vsid_next;
u64 vsid_max; u64 vsid_max;
int context_id; int context_id;
ulong prog_flags; /* flags to inject when giving a 700 trap */
}; };
#define CONTEXT_HOST 0 #define CONTEXT_HOST 0
...@@ -119,6 +122,10 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, ...@@ -119,6 +122,10 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
extern u32 kvmppc_trampoline_lowmem; extern u32 kvmppc_trampoline_lowmem;
extern u32 kvmppc_trampoline_enter; extern u32 kvmppc_trampoline_enter;
extern void kvmppc_rmcall(ulong srr0, ulong srr1);
extern void kvmppc_load_up_fpu(void);
extern void kvmppc_load_up_altivec(void);
extern void kvmppc_load_up_vsx(void);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{ {
......
...@@ -20,6 +20,8 @@ ...@@ -20,6 +20,8 @@
#ifndef __ASM_KVM_BOOK3S_ASM_H__ #ifndef __ASM_KVM_BOOK3S_ASM_H__
#define __ASM_KVM_BOOK3S_ASM_H__ #define __ASM_KVM_BOOK3S_ASM_H__
#ifdef __ASSEMBLY__
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include <asm/kvm_asm.h> #include <asm/kvm_asm.h>
...@@ -55,4 +57,20 @@ kvmppc_resume_\intno: ...@@ -55,4 +57,20 @@ kvmppc_resume_\intno:
#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */ #endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
#else /*__ASSEMBLY__ */
struct kvmppc_book3s_shadow_vcpu {
ulong gpr[14];
u32 cr;
u32 xer;
ulong host_r1;
ulong host_r2;
ulong handler;
ulong scratch0;
ulong scratch1;
ulong vmhandler;
};
#endif /*__ASSEMBLY__ */
#endif /* __ASM_KVM_BOOK3S_ASM_H__ */ #endif /* __ASM_KVM_BOOK3S_ASM_H__ */
...@@ -52,9 +52,12 @@ struct kvmppc_vcpu_e500 { ...@@ -52,9 +52,12 @@ struct kvmppc_vcpu_e500 {
u32 mas5; u32 mas5;
u32 mas6; u32 mas6;
u32 mas7; u32 mas7;
u32 l1csr0;
u32 l1csr1; u32 l1csr1;
u32 hid0; u32 hid0;
u32 hid1; u32 hid1;
u32 tlb0cfg;
u32 tlb1cfg;
struct kvm_vcpu vcpu; struct kvm_vcpu vcpu;
}; };
......
...@@ -167,23 +167,40 @@ struct kvm_vcpu_arch { ...@@ -167,23 +167,40 @@ struct kvm_vcpu_arch {
ulong trampoline_lowmem; ulong trampoline_lowmem;
ulong trampoline_enter; ulong trampoline_enter;
ulong highmem_handler; ulong highmem_handler;
ulong rmcall;
ulong host_paca_phys; ulong host_paca_phys;
struct kvmppc_mmu mmu; struct kvmppc_mmu mmu;
#endif #endif
u64 fpr[32];
ulong gpr[32]; ulong gpr[32];
u64 fpr[32];
u32 fpscr;
#ifdef CONFIG_ALTIVEC
vector128 vr[32];
vector128 vscr;
#endif
#ifdef CONFIG_VSX
u64 vsr[32];
#endif
ulong pc; ulong pc;
u32 cr;
ulong ctr; ulong ctr;
ulong lr; ulong lr;
#ifdef CONFIG_BOOKE
ulong xer; ulong xer;
u32 cr;
#endif
ulong msr; ulong msr;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC64
ulong shadow_msr; ulong shadow_msr;
ulong shadow_srr1;
ulong hflags; ulong hflags;
ulong guest_owned_ext;
#endif #endif
u32 mmucr; u32 mmucr;
ulong sprg0; ulong sprg0;
...@@ -242,6 +259,8 @@ struct kvm_vcpu_arch { ...@@ -242,6 +259,8 @@ struct kvm_vcpu_arch {
#endif #endif
ulong fault_dear; ulong fault_dear;
ulong fault_esr; ulong fault_esr;
ulong queued_dear;
ulong queued_esr;
gpa_t paddr_accessed; gpa_t paddr_accessed;
u8 io_gpr; /* GPR used as IO source/target */ u8 io_gpr; /* GPR used as IO source/target */
......
...@@ -28,6 +28,9 @@ ...@@ -28,6 +28,9 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/kvm_types.h> #include <linux/kvm_types.h>
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/kvm_book3s.h>
#endif
enum emulation_result { enum emulation_result {
EMULATE_DONE, /* no further processing */ EMULATE_DONE, /* no further processing */
...@@ -80,8 +83,9 @@ extern void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu); ...@@ -80,8 +83,9 @@ extern void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu);
extern void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu); extern void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu);
extern int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu); extern int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_program(struct kvm_vcpu *vcpu); extern void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong flags);
extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu); extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu, extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq); struct kvm_interrupt *irq);
...@@ -95,4 +99,81 @@ extern void kvmppc_booke_exit(void); ...@@ -95,4 +99,81 @@ extern void kvmppc_booke_exit(void);
extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu); extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu);
#ifdef CONFIG_PPC_BOOK3S
/* We assume we're always acting on the current vcpu */
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
if ( num < 14 ) {
get_paca()->shadow_vcpu.gpr[num] = val;
to_book3s(vcpu)->shadow_vcpu.gpr[num] = val;
} else
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
if ( num < 14 )
return get_paca()->shadow_vcpu.gpr[num];
else
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.cr = val;
to_book3s(vcpu)->shadow_vcpu.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return get_paca()->shadow_vcpu.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.xer = val;
to_book3s(vcpu)->shadow_vcpu.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return get_paca()->shadow_vcpu.xer;
}
#else
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return vcpu->arch.xer;
}
#endif
#endif /* __POWERPC_KVM_PPC_H__ */ #endif /* __POWERPC_KVM_PPC_H__ */
...@@ -19,6 +19,9 @@ ...@@ -19,6 +19,9 @@
#include <asm/mmu.h> #include <asm/mmu.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/exception-64e.h> #include <asm/exception-64e.h>
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include <asm/kvm_book3s_64_asm.h>
#endif
register struct paca_struct *local_paca asm("r13"); register struct paca_struct *local_paca asm("r13");
...@@ -135,6 +138,8 @@ struct paca_struct { ...@@ -135,6 +138,8 @@ struct paca_struct {
u64 esid; u64 esid;
u64 vsid; u64 vsid;
} kvm_slb[64]; /* guest SLB */ } kvm_slb[64]; /* guest SLB */
/* We use this to store guest state in */
struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
u8 kvm_slb_max; /* highest used guest slb entry */ u8 kvm_slb_max; /* highest used guest slb entry */
u8 kvm_in_guest; /* are we inside the guest? */ u8 kvm_in_guest; /* are we inside the guest? */
#endif #endif
......
...@@ -426,6 +426,10 @@ ...@@ -426,6 +426,10 @@
#define SRR1_WAKEMT 0x00280000 /* mtctrl */ #define SRR1_WAKEMT 0x00280000 /* mtctrl */
#define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */ #define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */
#define SRR1_WAKETHERM 0x00100000 /* Thermal management interrupt */ #define SRR1_WAKETHERM 0x00100000 /* Thermal management interrupt */
#define SRR1_PROGFPE 0x00100000 /* Floating Point Enabled */
#define SRR1_PROGPRIV 0x00040000 /* Privileged instruction */
#define SRR1_PROGTRAP 0x00020000 /* Trap */
#define SRR1_PROGADDR 0x00010000 /* SRR0 contains subsequent addr */
#define SPRN_HSRR0 0x13A /* Save/Restore Register 0 */ #define SPRN_HSRR0 0x13A /* Save/Restore Register 0 */
#define SPRN_HSRR1 0x13B /* Save/Restore Register 1 */ #define SPRN_HSRR1 0x13B /* Save/Restore Register 1 */
......
...@@ -194,6 +194,30 @@ int main(void) ...@@ -194,6 +194,30 @@ int main(void)
DEFINE(PACA_KVM_IN_GUEST, offsetof(struct paca_struct, kvm_in_guest)); DEFINE(PACA_KVM_IN_GUEST, offsetof(struct paca_struct, kvm_in_guest));
DEFINE(PACA_KVM_SLB, offsetof(struct paca_struct, kvm_slb)); DEFINE(PACA_KVM_SLB, offsetof(struct paca_struct, kvm_slb));
DEFINE(PACA_KVM_SLB_MAX, offsetof(struct paca_struct, kvm_slb_max)); DEFINE(PACA_KVM_SLB_MAX, offsetof(struct paca_struct, kvm_slb_max));
DEFINE(PACA_KVM_CR, offsetof(struct paca_struct, shadow_vcpu.cr));
DEFINE(PACA_KVM_XER, offsetof(struct paca_struct, shadow_vcpu.xer));
DEFINE(PACA_KVM_R0, offsetof(struct paca_struct, shadow_vcpu.gpr[0]));
DEFINE(PACA_KVM_R1, offsetof(struct paca_struct, shadow_vcpu.gpr[1]));
DEFINE(PACA_KVM_R2, offsetof(struct paca_struct, shadow_vcpu.gpr[2]));
DEFINE(PACA_KVM_R3, offsetof(struct paca_struct, shadow_vcpu.gpr[3]));
DEFINE(PACA_KVM_R4, offsetof(struct paca_struct, shadow_vcpu.gpr[4]));
DEFINE(PACA_KVM_R5, offsetof(struct paca_struct, shadow_vcpu.gpr[5]));
DEFINE(PACA_KVM_R6, offsetof(struct paca_struct, shadow_vcpu.gpr[6]));
DEFINE(PACA_KVM_R7, offsetof(struct paca_struct, shadow_vcpu.gpr[7]));
DEFINE(PACA_KVM_R8, offsetof(struct paca_struct, shadow_vcpu.gpr[8]));
DEFINE(PACA_KVM_R9, offsetof(struct paca_struct, shadow_vcpu.gpr[9]));
DEFINE(PACA_KVM_R10, offsetof(struct paca_struct, shadow_vcpu.gpr[10]));
DEFINE(PACA_KVM_R11, offsetof(struct paca_struct, shadow_vcpu.gpr[11]));
DEFINE(PACA_KVM_R12, offsetof(struct paca_struct, shadow_vcpu.gpr[12]));
DEFINE(PACA_KVM_R13, offsetof(struct paca_struct, shadow_vcpu.gpr[13]));
DEFINE(PACA_KVM_HOST_R1, offsetof(struct paca_struct, shadow_vcpu.host_r1));
DEFINE(PACA_KVM_HOST_R2, offsetof(struct paca_struct, shadow_vcpu.host_r2));
DEFINE(PACA_KVM_VMHANDLER, offsetof(struct paca_struct,
shadow_vcpu.vmhandler));
DEFINE(PACA_KVM_SCRATCH0, offsetof(struct paca_struct,
shadow_vcpu.scratch0));
DEFINE(PACA_KVM_SCRATCH1, offsetof(struct paca_struct,
shadow_vcpu.scratch1));
#endif #endif
#endif /* CONFIG_PPC64 */ #endif /* CONFIG_PPC64 */
...@@ -389,8 +413,6 @@ int main(void) ...@@ -389,8 +413,6 @@ int main(void)
DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid)); DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid));
DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr)); DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr));
DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr)); DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr)); DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc)); DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr)); DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr));
...@@ -411,11 +433,16 @@ int main(void) ...@@ -411,11 +433,16 @@ int main(void)
DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2)); DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2));
DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr)); DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr));
DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr)); DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem)); DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem));
DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter)); DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter));
DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler)); DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler));
DEFINE(VCPU_RMCALL, offsetof(struct kvm_vcpu, arch.rmcall));
DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags)); DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags));
#endif #else
DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
#endif /* CONFIG_PPC64 */
#endif #endif
#ifdef CONFIG_44x #ifdef CONFIG_44x
DEFINE(PGD_T_LOG2, PGD_T_LOG2); DEFINE(PGD_T_LOG2, PGD_T_LOG2);
......
...@@ -107,6 +107,7 @@ EXPORT_SYMBOL(giveup_altivec); ...@@ -107,6 +107,7 @@ EXPORT_SYMBOL(giveup_altivec);
#endif /* CONFIG_ALTIVEC */ #endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX #ifdef CONFIG_VSX
EXPORT_SYMBOL(giveup_vsx); EXPORT_SYMBOL(giveup_vsx);
EXPORT_SYMBOL_GPL(__giveup_vsx);
#endif /* CONFIG_VSX */ #endif /* CONFIG_VSX */
#ifdef CONFIG_SPE #ifdef CONFIG_SPE
EXPORT_SYMBOL(giveup_spe); EXPORT_SYMBOL(giveup_spe);
......
...@@ -65,13 +65,14 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -65,13 +65,14 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
*/ */
switch (dcrn) { switch (dcrn) {
case DCRN_CPR0_CONFIG_ADDR: case DCRN_CPR0_CONFIG_ADDR:
vcpu->arch.gpr[rt] = vcpu->arch.cpr0_cfgaddr; kvmppc_set_gpr(vcpu, rt, vcpu->arch.cpr0_cfgaddr);
break; break;
case DCRN_CPR0_CONFIG_DATA: case DCRN_CPR0_CONFIG_DATA:
local_irq_disable(); local_irq_disable();
mtdcr(DCRN_CPR0_CONFIG_ADDR, mtdcr(DCRN_CPR0_CONFIG_ADDR,
vcpu->arch.cpr0_cfgaddr); vcpu->arch.cpr0_cfgaddr);
vcpu->arch.gpr[rt] = mfdcr(DCRN_CPR0_CONFIG_DATA); kvmppc_set_gpr(vcpu, rt,
mfdcr(DCRN_CPR0_CONFIG_DATA));
local_irq_enable(); local_irq_enable();
break; break;
default: default:
...@@ -93,11 +94,11 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -93,11 +94,11 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
/* emulate some access in kernel */ /* emulate some access in kernel */
switch (dcrn) { switch (dcrn) {
case DCRN_CPR0_CONFIG_ADDR: case DCRN_CPR0_CONFIG_ADDR:
vcpu->arch.cpr0_cfgaddr = vcpu->arch.gpr[rs]; vcpu->arch.cpr0_cfgaddr = kvmppc_get_gpr(vcpu, rs);
break; break;
default: default:
run->dcr.dcrn = dcrn; run->dcr.dcrn = dcrn;
run->dcr.data = vcpu->arch.gpr[rs]; run->dcr.data = kvmppc_get_gpr(vcpu, rs);
run->dcr.is_write = 1; run->dcr.is_write = 1;
vcpu->arch.dcr_needed = 1; vcpu->arch.dcr_needed = 1;
kvmppc_account_exit(vcpu, DCR_EXITS); kvmppc_account_exit(vcpu, DCR_EXITS);
...@@ -146,13 +147,13 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs) ...@@ -146,13 +147,13 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
switch (sprn) { switch (sprn) {
case SPRN_PID: case SPRN_PID:
kvmppc_set_pid(vcpu, vcpu->arch.gpr[rs]); break; kvmppc_set_pid(vcpu, kvmppc_get_gpr(vcpu, rs)); break;
case SPRN_MMUCR: case SPRN_MMUCR:
vcpu->arch.mmucr = vcpu->arch.gpr[rs]; break; vcpu->arch.mmucr = kvmppc_get_gpr(vcpu, rs); break;
case SPRN_CCR0: case SPRN_CCR0:
vcpu->arch.ccr0 = vcpu->arch.gpr[rs]; break; vcpu->arch.ccr0 = kvmppc_get_gpr(vcpu, rs); break;
case SPRN_CCR1: case SPRN_CCR1:
vcpu->arch.ccr1 = vcpu->arch.gpr[rs]; break; vcpu->arch.ccr1 = kvmppc_get_gpr(vcpu, rs); break;
default: default:
emulated = kvmppc_booke_emulate_mtspr(vcpu, sprn, rs); emulated = kvmppc_booke_emulate_mtspr(vcpu, sprn, rs);
} }
...@@ -167,13 +168,13 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt) ...@@ -167,13 +168,13 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
switch (sprn) { switch (sprn) {
case SPRN_PID: case SPRN_PID:
vcpu->arch.gpr[rt] = vcpu->arch.pid; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.pid); break;
case SPRN_MMUCR: case SPRN_MMUCR:
vcpu->arch.gpr[rt] = vcpu->arch.mmucr; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.mmucr); break;
case SPRN_CCR0: case SPRN_CCR0:
vcpu->arch.gpr[rt] = vcpu->arch.ccr0; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ccr0); break;
case SPRN_CCR1: case SPRN_CCR1:
vcpu->arch.gpr[rt] = vcpu->arch.ccr1; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ccr1); break;
default: default:
emulated = kvmppc_booke_emulate_mfspr(vcpu, sprn, rt); emulated = kvmppc_booke_emulate_mfspr(vcpu, sprn, rt);
} }
......
...@@ -439,7 +439,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws) ...@@ -439,7 +439,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
struct kvmppc_44x_tlbe *tlbe; struct kvmppc_44x_tlbe *tlbe;
unsigned int gtlb_index; unsigned int gtlb_index;
gtlb_index = vcpu->arch.gpr[ra]; gtlb_index = kvmppc_get_gpr(vcpu, ra);
if (gtlb_index > KVM44x_GUEST_TLB_SIZE) { if (gtlb_index > KVM44x_GUEST_TLB_SIZE) {
printk("%s: index %d\n", __func__, gtlb_index); printk("%s: index %d\n", __func__, gtlb_index);
kvmppc_dump_vcpu(vcpu); kvmppc_dump_vcpu(vcpu);
...@@ -455,15 +455,15 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws) ...@@ -455,15 +455,15 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
switch (ws) { switch (ws) {
case PPC44x_TLB_PAGEID: case PPC44x_TLB_PAGEID:
tlbe->tid = get_mmucr_stid(vcpu); tlbe->tid = get_mmucr_stid(vcpu);
tlbe->word0 = vcpu->arch.gpr[rs]; tlbe->word0 = kvmppc_get_gpr(vcpu, rs);
break; break;
case PPC44x_TLB_XLAT: case PPC44x_TLB_XLAT:
tlbe->word1 = vcpu->arch.gpr[rs]; tlbe->word1 = kvmppc_get_gpr(vcpu, rs);
break; break;
case PPC44x_TLB_ATTRIB: case PPC44x_TLB_ATTRIB:
tlbe->word2 = vcpu->arch.gpr[rs]; tlbe->word2 = kvmppc_get_gpr(vcpu, rs);
break; break;
default: default:
...@@ -500,18 +500,20 @@ int kvmppc_44x_emul_tlbsx(struct kvm_vcpu *vcpu, u8 rt, u8 ra, u8 rb, u8 rc) ...@@ -500,18 +500,20 @@ int kvmppc_44x_emul_tlbsx(struct kvm_vcpu *vcpu, u8 rt, u8 ra, u8 rb, u8 rc)
unsigned int as = get_mmucr_sts(vcpu); unsigned int as = get_mmucr_sts(vcpu);
unsigned int pid = get_mmucr_stid(vcpu); unsigned int pid = get_mmucr_stid(vcpu);
ea = vcpu->arch.gpr[rb]; ea = kvmppc_get_gpr(vcpu, rb);
if (ra) if (ra)
ea += vcpu->arch.gpr[ra]; ea += kvmppc_get_gpr(vcpu, ra);
gtlb_index = kvmppc_44x_tlb_index(vcpu, ea, pid, as); gtlb_index = kvmppc_44x_tlb_index(vcpu, ea, pid, as);
if (rc) { if (rc) {
u32 cr = kvmppc_get_cr(vcpu);
if (gtlb_index < 0) if (gtlb_index < 0)
vcpu->arch.cr &= ~0x20000000; kvmppc_set_cr(vcpu, cr & ~0x20000000);
else else
vcpu->arch.cr |= 0x20000000; kvmppc_set_cr(vcpu, cr | 0x20000000);
} }
vcpu->arch.gpr[rt] = gtlb_index; kvmppc_set_gpr(vcpu, rt, gtlb_index);
kvmppc_set_exit_type(vcpu, EMULATED_TLBSX_EXITS); kvmppc_set_exit_type(vcpu, EMULATED_TLBSX_EXITS);
return EMULATE_DONE; return EMULATE_DONE;
......
@@ -20,6 +20,7 @@ config KVM
bool
select PREEMPT_NOTIFIERS
select ANON_INODES
+select KVM_MMIO
config KVM_BOOK3S_64_HANDLER
bool
......
...@@ -65,11 +65,11 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -65,11 +65,11 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
case 31: case 31:
switch (get_xop(inst)) { switch (get_xop(inst)) {
case OP_31_XOP_MFMSR: case OP_31_XOP_MFMSR:
vcpu->arch.gpr[get_rt(inst)] = vcpu->arch.msr; kvmppc_set_gpr(vcpu, get_rt(inst), vcpu->arch.msr);
break; break;
case OP_31_XOP_MTMSRD: case OP_31_XOP_MTMSRD:
{ {
ulong rs = vcpu->arch.gpr[get_rs(inst)]; ulong rs = kvmppc_get_gpr(vcpu, get_rs(inst));
if (inst & 0x10000) { if (inst & 0x10000) {
vcpu->arch.msr &= ~(MSR_RI | MSR_EE); vcpu->arch.msr &= ~(MSR_RI | MSR_EE);
vcpu->arch.msr |= rs & (MSR_RI | MSR_EE); vcpu->arch.msr |= rs & (MSR_RI | MSR_EE);
...@@ -78,30 +78,30 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -78,30 +78,30 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
break; break;
} }
case OP_31_XOP_MTMSR: case OP_31_XOP_MTMSR:
kvmppc_set_msr(vcpu, vcpu->arch.gpr[get_rs(inst)]); kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
break; break;
case OP_31_XOP_MFSRIN: case OP_31_XOP_MFSRIN:
{ {
int srnum; int srnum;
srnum = (vcpu->arch.gpr[get_rb(inst)] >> 28) & 0xf; srnum = (kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf;
if (vcpu->arch.mmu.mfsrin) { if (vcpu->arch.mmu.mfsrin) {
u32 sr; u32 sr;
sr = vcpu->arch.mmu.mfsrin(vcpu, srnum); sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
vcpu->arch.gpr[get_rt(inst)] = sr; kvmppc_set_gpr(vcpu, get_rt(inst), sr);
} }
break; break;
} }
case OP_31_XOP_MTSRIN: case OP_31_XOP_MTSRIN:
vcpu->arch.mmu.mtsrin(vcpu, vcpu->arch.mmu.mtsrin(vcpu,
(vcpu->arch.gpr[get_rb(inst)] >> 28) & 0xf, (kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf,
vcpu->arch.gpr[get_rs(inst)]); kvmppc_get_gpr(vcpu, get_rs(inst)));
break; break;
case OP_31_XOP_TLBIE: case OP_31_XOP_TLBIE:
case OP_31_XOP_TLBIEL: case OP_31_XOP_TLBIEL:
{ {
bool large = (inst & 0x00200000) ? true : false; bool large = (inst & 0x00200000) ? true : false;
ulong addr = vcpu->arch.gpr[get_rb(inst)]; ulong addr = kvmppc_get_gpr(vcpu, get_rb(inst));
vcpu->arch.mmu.tlbie(vcpu, addr, large); vcpu->arch.mmu.tlbie(vcpu, addr, large);
break; break;
} }
...@@ -111,14 +111,16 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -111,14 +111,16 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (!vcpu->arch.mmu.slbmte) if (!vcpu->arch.mmu.slbmte)
return EMULATE_FAIL; return EMULATE_FAIL;
vcpu->arch.mmu.slbmte(vcpu, vcpu->arch.gpr[get_rs(inst)], vcpu->arch.mmu.slbmte(vcpu,
vcpu->arch.gpr[get_rb(inst)]); kvmppc_get_gpr(vcpu, get_rs(inst)),
kvmppc_get_gpr(vcpu, get_rb(inst)));
break; break;
case OP_31_XOP_SLBIE: case OP_31_XOP_SLBIE:
if (!vcpu->arch.mmu.slbie) if (!vcpu->arch.mmu.slbie)
return EMULATE_FAIL; return EMULATE_FAIL;
vcpu->arch.mmu.slbie(vcpu, vcpu->arch.gpr[get_rb(inst)]); vcpu->arch.mmu.slbie(vcpu,
kvmppc_get_gpr(vcpu, get_rb(inst)));
break; break;
case OP_31_XOP_SLBIA: case OP_31_XOP_SLBIA:
if (!vcpu->arch.mmu.slbia) if (!vcpu->arch.mmu.slbia)
...@@ -132,9 +134,9 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -132,9 +134,9 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
} else { } else {
ulong t, rb; ulong t, rb;
rb = vcpu->arch.gpr[get_rb(inst)]; rb = kvmppc_get_gpr(vcpu, get_rb(inst));
t = vcpu->arch.mmu.slbmfee(vcpu, rb); t = vcpu->arch.mmu.slbmfee(vcpu, rb);
vcpu->arch.gpr[get_rt(inst)] = t; kvmppc_set_gpr(vcpu, get_rt(inst), t);
} }
break; break;
case OP_31_XOP_SLBMFEV: case OP_31_XOP_SLBMFEV:
...@@ -143,20 +145,20 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -143,20 +145,20 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
} else { } else {
ulong t, rb; ulong t, rb;
rb = vcpu->arch.gpr[get_rb(inst)]; rb = kvmppc_get_gpr(vcpu, get_rb(inst));
t = vcpu->arch.mmu.slbmfev(vcpu, rb); t = vcpu->arch.mmu.slbmfev(vcpu, rb);
vcpu->arch.gpr[get_rt(inst)] = t; kvmppc_set_gpr(vcpu, get_rt(inst), t);
} }
break; break;
case OP_31_XOP_DCBZ: case OP_31_XOP_DCBZ:
{ {
ulong rb = vcpu->arch.gpr[get_rb(inst)]; ulong rb = kvmppc_get_gpr(vcpu, get_rb(inst));
ulong ra = 0; ulong ra = 0;
ulong addr; ulong addr;
u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 }; u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
if (get_ra(inst)) if (get_ra(inst))
ra = vcpu->arch.gpr[get_ra(inst)]; ra = kvmppc_get_gpr(vcpu, get_ra(inst));
addr = (ra + rb) & ~31ULL; addr = (ra + rb) & ~31ULL;
if (!(vcpu->arch.msr & MSR_SF)) if (!(vcpu->arch.msr & MSR_SF))
...@@ -233,43 +235,44 @@ static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val) ...@@ -233,43 +235,44 @@ static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs) int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
{ {
int emulated = EMULATE_DONE; int emulated = EMULATE_DONE;
ulong spr_val = kvmppc_get_gpr(vcpu, rs);
switch (sprn) { switch (sprn) {
case SPRN_SDR1: case SPRN_SDR1:
to_book3s(vcpu)->sdr1 = vcpu->arch.gpr[rs]; to_book3s(vcpu)->sdr1 = spr_val;
break; break;
case SPRN_DSISR: case SPRN_DSISR:
to_book3s(vcpu)->dsisr = vcpu->arch.gpr[rs]; to_book3s(vcpu)->dsisr = spr_val;
break; break;
case SPRN_DAR: case SPRN_DAR:
vcpu->arch.dear = vcpu->arch.gpr[rs]; vcpu->arch.dear = spr_val;
break; break;
case SPRN_HIOR: case SPRN_HIOR:
to_book3s(vcpu)->hior = vcpu->arch.gpr[rs]; to_book3s(vcpu)->hior = spr_val;
break; break;
case SPRN_IBAT0U ... SPRN_IBAT3L: case SPRN_IBAT0U ... SPRN_IBAT3L:
case SPRN_IBAT4U ... SPRN_IBAT7L: case SPRN_IBAT4U ... SPRN_IBAT7L:
case SPRN_DBAT0U ... SPRN_DBAT3L: case SPRN_DBAT0U ... SPRN_DBAT3L:
case SPRN_DBAT4U ... SPRN_DBAT7L: case SPRN_DBAT4U ... SPRN_DBAT7L:
kvmppc_write_bat(vcpu, sprn, (u32)vcpu->arch.gpr[rs]); kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
/* BAT writes happen so rarely that we're ok to flush /* BAT writes happen so rarely that we're ok to flush
* everything here */ * everything here */
kvmppc_mmu_pte_flush(vcpu, 0, 0); kvmppc_mmu_pte_flush(vcpu, 0, 0);
break; break;
case SPRN_HID0: case SPRN_HID0:
to_book3s(vcpu)->hid[0] = vcpu->arch.gpr[rs]; to_book3s(vcpu)->hid[0] = spr_val;
break; break;
case SPRN_HID1: case SPRN_HID1:
to_book3s(vcpu)->hid[1] = vcpu->arch.gpr[rs]; to_book3s(vcpu)->hid[1] = spr_val;
break; break;
case SPRN_HID2: case SPRN_HID2:
to_book3s(vcpu)->hid[2] = vcpu->arch.gpr[rs]; to_book3s(vcpu)->hid[2] = spr_val;
break; break;
case SPRN_HID4: case SPRN_HID4:
to_book3s(vcpu)->hid[4] = vcpu->arch.gpr[rs]; to_book3s(vcpu)->hid[4] = spr_val;
break; break;
case SPRN_HID5: case SPRN_HID5:
to_book3s(vcpu)->hid[5] = vcpu->arch.gpr[rs]; to_book3s(vcpu)->hid[5] = spr_val;
/* guest HID5 set can change is_dcbz32 */ /* guest HID5 set can change is_dcbz32 */
if (vcpu->arch.mmu.is_dcbz32(vcpu) && if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
(mfmsr() & MSR_HV)) (mfmsr() & MSR_HV))
...@@ -299,38 +302,38 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt) ...@@ -299,38 +302,38 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
switch (sprn) { switch (sprn) {
case SPRN_SDR1: case SPRN_SDR1:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->sdr1; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
break; break;
case SPRN_DSISR: case SPRN_DSISR:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->dsisr; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->dsisr);
break; break;
case SPRN_DAR: case SPRN_DAR:
vcpu->arch.gpr[rt] = vcpu->arch.dear; kvmppc_set_gpr(vcpu, rt, vcpu->arch.dear);
break; break;
case SPRN_HIOR: case SPRN_HIOR:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->hior; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hior);
break; break;
case SPRN_HID0: case SPRN_HID0:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->hid[0]; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[0]);
break; break;
case SPRN_HID1: case SPRN_HID1:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->hid[1]; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[1]);
break; break;
case SPRN_HID2: case SPRN_HID2:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->hid[2]; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[2]);
break; break;
case SPRN_HID4: case SPRN_HID4:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->hid[4]; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[4]);
break; break;
case SPRN_HID5: case SPRN_HID5:
vcpu->arch.gpr[rt] = to_book3s(vcpu)->hid[5]; kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[5]);
break; break;
case SPRN_THRM1: case SPRN_THRM1:
case SPRN_THRM2: case SPRN_THRM2:
case SPRN_THRM3: case SPRN_THRM3:
case SPRN_CTRLF: case SPRN_CTRLF:
case SPRN_CTRLT: case SPRN_CTRLT:
vcpu->arch.gpr[rt] = 0; kvmppc_set_gpr(vcpu, rt, 0);
break; break;
default: default:
printk(KERN_INFO "KVM: invalid SPR read: %d\n", sprn); printk(KERN_INFO "KVM: invalid SPR read: %d\n", sprn);
......
...@@ -22,3 +22,11 @@ ...@@ -22,3 +22,11 @@
EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter); EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter);
EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem); EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem);
EXPORT_SYMBOL_GPL(kvmppc_rmcall);
EXPORT_SYMBOL_GPL(kvmppc_load_up_fpu);
#ifdef CONFIG_ALTIVEC
EXPORT_SYMBOL_GPL(kvmppc_load_up_altivec);
#endif
#ifdef CONFIG_VSX
EXPORT_SYMBOL_GPL(kvmppc_load_up_vsx);
#endif
...@@ -54,7 +54,7 @@ static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe( ...@@ -54,7 +54,7 @@ static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe(
if (!vcpu_book3s->slb[i].valid) if (!vcpu_book3s->slb[i].valid)
continue; continue;
if (vcpu_book3s->slb[i].large) if (vcpu_book3s->slb[i].tb)
cmp_esid = esid_1t; cmp_esid = esid_1t;
if (vcpu_book3s->slb[i].esid == cmp_esid) if (vcpu_book3s->slb[i].esid == cmp_esid)
...@@ -65,9 +65,10 @@ static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe( ...@@ -65,9 +65,10 @@ static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe(
eaddr, esid, esid_1t); eaddr, esid, esid_1t);
for (i = 0; i < vcpu_book3s->slb_nr; i++) { for (i = 0; i < vcpu_book3s->slb_nr; i++) {
if (vcpu_book3s->slb[i].vsid) if (vcpu_book3s->slb[i].vsid)
dprintk(" %d: %c%c %llx %llx\n", i, dprintk(" %d: %c%c%c %llx %llx\n", i,
vcpu_book3s->slb[i].valid ? 'v' : ' ', vcpu_book3s->slb[i].valid ? 'v' : ' ',
vcpu_book3s->slb[i].large ? 'l' : ' ', vcpu_book3s->slb[i].large ? 'l' : ' ',
vcpu_book3s->slb[i].tb ? 't' : ' ',
vcpu_book3s->slb[i].esid, vcpu_book3s->slb[i].esid,
vcpu_book3s->slb[i].vsid); vcpu_book3s->slb[i].vsid);
} }
...@@ -84,7 +85,7 @@ static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr, ...@@ -84,7 +85,7 @@ static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
if (!slb) if (!slb)
return 0; return 0;
if (slb->large) if (slb->tb)
return (((u64)eaddr >> 12) & 0xfffffff) | return (((u64)eaddr >> 12) & 0xfffffff) |
(((u64)slb->vsid) << 28); (((u64)slb->vsid) << 28);
...@@ -309,7 +310,8 @@ static void kvmppc_mmu_book3s_64_slbmte(struct kvm_vcpu *vcpu, u64 rs, u64 rb) ...@@ -309,7 +310,8 @@ static void kvmppc_mmu_book3s_64_slbmte(struct kvm_vcpu *vcpu, u64 rs, u64 rb)
slbe = &vcpu_book3s->slb[slb_nr]; slbe = &vcpu_book3s->slb[slb_nr];
slbe->large = (rs & SLB_VSID_L) ? 1 : 0; slbe->large = (rs & SLB_VSID_L) ? 1 : 0;
slbe->esid = slbe->large ? esid_1t : esid; slbe->tb = (rs & SLB_VSID_B_1T) ? 1 : 0;
slbe->esid = slbe->tb ? esid_1t : esid;
slbe->vsid = rs >> 12; slbe->vsid = rs >> 12;
slbe->valid = (rb & SLB_ESID_V) ? 1 : 0; slbe->valid = (rb & SLB_ESID_V) ? 1 : 0;
slbe->Ks = (rs & SLB_VSID_KS) ? 1 : 0; slbe->Ks = (rs & SLB_VSID_KS) ? 1 : 0;
......
...@@ -45,36 +45,25 @@ kvmppc_trampoline_\intno: ...@@ -45,36 +45,25 @@ kvmppc_trampoline_\intno:
* To distinguish, we check a magic byte in the PACA * To distinguish, we check a magic byte in the PACA
*/ */
mfspr r13, SPRN_SPRG_PACA /* r13 = PACA */ mfspr r13, SPRN_SPRG_PACA /* r13 = PACA */
std r12, (PACA_EXMC + EX_R12)(r13) std r12, PACA_KVM_SCRATCH0(r13)
mfcr r12 mfcr r12
stw r12, (PACA_EXMC + EX_CCR)(r13) stw r12, PACA_KVM_SCRATCH1(r13)
lbz r12, PACA_KVM_IN_GUEST(r13) lbz r12, PACA_KVM_IN_GUEST(r13)
cmpwi r12, 0 cmpwi r12, KVM_GUEST_MODE_NONE
bne ..kvmppc_handler_hasmagic_\intno bne ..kvmppc_handler_hasmagic_\intno
/* No KVM guest? Then jump back to the Linux handler! */ /* No KVM guest? Then jump back to the Linux handler! */
lwz r12, (PACA_EXMC + EX_CCR)(r13) lwz r12, PACA_KVM_SCRATCH1(r13)
mtcr r12 mtcr r12
ld r12, (PACA_EXMC + EX_R12)(r13) ld r12, PACA_KVM_SCRATCH0(r13)
mfspr r13, SPRN_SPRG_SCRATCH0 /* r13 = original r13 */ mfspr r13, SPRN_SPRG_SCRATCH0 /* r13 = original r13 */
b kvmppc_resume_\intno /* Get back original handler */ b kvmppc_resume_\intno /* Get back original handler */
/* Now we know we're handling a KVM guest */ /* Now we know we're handling a KVM guest */
..kvmppc_handler_hasmagic_\intno: ..kvmppc_handler_hasmagic_\intno:
/* Unset guest state */
li r12, 0
stb r12, PACA_KVM_IN_GUEST(r13)
std r1, (PACA_EXMC+EX_R9)(r13) /* Should we just skip the faulting instruction? */
std r10, (PACA_EXMC+EX_R10)(r13) cmpwi r12, KVM_GUEST_MODE_SKIP
std r11, (PACA_EXMC+EX_R11)(r13) beq kvmppc_handler_skip_ins
std r2, (PACA_EXMC+EX_R13)(r13)
mfsrr0 r10
mfsrr1 r11
/* Restore R1/R2 so we can handle faults */
ld r1, PACAR1(r13)
ld r2, (PACA_EXMC+EX_SRR0)(r13)
/* Let's store which interrupt we're handling */ /* Let's store which interrupt we're handling */
li r12, \intno li r12, \intno
...@@ -101,24 +90,108 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PERFMON ...@@ -101,24 +90,108 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PERFMON
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALTIVEC INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALTIVEC
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX
/*
* Bring us back to the faulting code, but skip the
* faulting instruction.
*
* This is a generic exit path from the interrupt
* trampolines above.
*
* Input Registers:
*
* R12 = free
* R13 = PACA
* PACA.KVM.SCRATCH0 = guest R12
* PACA.KVM.SCRATCH1 = guest CR
* SPRG_SCRATCH0 = guest R13
*
*/
kvmppc_handler_skip_ins:
/* Patch the IP to the next instruction */
mfsrr0 r12
addi r12, r12, 4
mtsrr0 r12
/* Clean up all state */
lwz r12, PACA_KVM_SCRATCH1(r13)
mtcr r12
ld r12, PACA_KVM_SCRATCH0(r13)
mfspr r13, SPRN_SPRG_SCRATCH0
/* And get back into the code */
RFI
/* /*
* This trampoline brings us back to a real mode handler * This trampoline brings us back to a real mode handler
* *
* Input Registers: * Input Registers:
* *
* R6 = SRR0 * R5 = SRR0
* R7 = SRR1 * R6 = SRR1
* LR = real-mode IP * LR = real-mode IP
* *
*/ */
.global kvmppc_handler_lowmem_trampoline .global kvmppc_handler_lowmem_trampoline
kvmppc_handler_lowmem_trampoline: kvmppc_handler_lowmem_trampoline:
mtsrr0 r6 mtsrr0 r5
mtsrr1 r7 mtsrr1 r6
blr blr
kvmppc_handler_lowmem_trampoline_end: kvmppc_handler_lowmem_trampoline_end:
/*
* Call a function in real mode
*
* Input Registers:
*
* R3 = function
* R4 = MSR
* R5 = CTR
*
*/
_GLOBAL(kvmppc_rmcall)
mtmsr r4 /* Disable relocation, so mtsrr
doesn't get interrupted */
mtctr r5
mtsrr0 r3
mtsrr1 r4
RFI
/*
* Activate current's external feature (FPU/Altivec/VSX)
*/
#define define_load_up(what) \
\
_GLOBAL(kvmppc_load_up_ ## what); \
subi r1, r1, INT_FRAME_SIZE; \
mflr r3; \
std r3, _LINK(r1); \
mfmsr r4; \
std r31, GPR3(r1); \
mr r31, r4; \
li r5, MSR_DR; \
oris r5, r5, MSR_EE@h; \
andc r4, r4, r5; \
mtmsr r4; \
\
bl .load_up_ ## what; \
\
mtmsr r31; \
ld r3, _LINK(r1); \
ld r31, GPR3(r1); \
addi r1, r1, INT_FRAME_SIZE; \
mtlr r3; \
blr
define_load_up(fpu)
#ifdef CONFIG_ALTIVEC
define_load_up(altivec)
#endif
#ifdef CONFIG_VSX
define_load_up(vsx)
#endif
.global kvmppc_trampoline_lowmem .global kvmppc_trampoline_lowmem
kvmppc_trampoline_lowmem: kvmppc_trampoline_lowmem:
.long kvmppc_handler_lowmem_trampoline - _stext .long kvmppc_handler_lowmem_trampoline - _stext
......
...@@ -31,7 +31,7 @@ ...@@ -31,7 +31,7 @@
#define REBOLT_SLB_ENTRY(num) \ #define REBOLT_SLB_ENTRY(num) \
ld r10, SHADOW_SLB_ESID(num)(r11); \ ld r10, SHADOW_SLB_ESID(num)(r11); \
cmpdi r10, 0; \ cmpdi r10, 0; \
beq slb_exit_skip_1; \ beq slb_exit_skip_ ## num; \
oris r10, r10, SLB_ESID_V@h; \ oris r10, r10, SLB_ESID_V@h; \
ld r9, SHADOW_SLB_VSID(num)(r11); \ ld r9, SHADOW_SLB_VSID(num)(r11); \
slbmte r9, r10; \ slbmte r9, r10; \
...@@ -51,23 +51,21 @@ kvmppc_handler_trampoline_enter: ...@@ -51,23 +51,21 @@ kvmppc_handler_trampoline_enter:
* *
* MSR = ~IR|DR * MSR = ~IR|DR
* R13 = PACA * R13 = PACA
* R1 = host R1
* R2 = host R2
* R9 = guest IP * R9 = guest IP
* R10 = guest MSR * R10 = guest MSR
* R11 = free * all other GPRS = free
* R12 = free * PACA[KVM_CR] = guest CR
* PACA[PACA_EXMC + EX_R9] = guest R9 * PACA[KVM_XER] = guest XER
* PACA[PACA_EXMC + EX_R10] = guest R10
* PACA[PACA_EXMC + EX_R11] = guest R11
* PACA[PACA_EXMC + EX_R12] = guest R12
* PACA[PACA_EXMC + EX_R13] = guest R13
* PACA[PACA_EXMC + EX_CCR] = guest CR
* PACA[PACA_EXMC + EX_R3] = guest XER
*/ */
mtsrr0 r9 mtsrr0 r9
mtsrr1 r10 mtsrr1 r10
mtspr SPRN_SPRG_SCRATCH0, r0 /* Activate guest mode, so faults get handled by KVM */
li r11, KVM_GUEST_MODE_GUEST
stb r11, PACA_KVM_IN_GUEST(r13)
/* Remove LPAR shadow entries */ /* Remove LPAR shadow entries */
...@@ -131,20 +129,27 @@ slb_do_enter: ...@@ -131,20 +129,27 @@ slb_do_enter:
/* Enter guest */ /* Enter guest */
mfspr r0, SPRN_SPRG_SCRATCH0 ld r0, (PACA_KVM_R0)(r13)
ld r1, (PACA_KVM_R1)(r13)
ld r9, (PACA_EXMC+EX_R9)(r13) ld r2, (PACA_KVM_R2)(r13)
ld r10, (PACA_EXMC+EX_R10)(r13) ld r3, (PACA_KVM_R3)(r13)
ld r12, (PACA_EXMC+EX_R12)(r13) ld r4, (PACA_KVM_R4)(r13)
ld r5, (PACA_KVM_R5)(r13)
lwz r11, (PACA_EXMC+EX_CCR)(r13) ld r6, (PACA_KVM_R6)(r13)
ld r7, (PACA_KVM_R7)(r13)
ld r8, (PACA_KVM_R8)(r13)
ld r9, (PACA_KVM_R9)(r13)
ld r10, (PACA_KVM_R10)(r13)
ld r12, (PACA_KVM_R12)(r13)
lwz r11, (PACA_KVM_CR)(r13)
mtcr r11 mtcr r11
ld r11, (PACA_EXMC+EX_R3)(r13) ld r11, (PACA_KVM_XER)(r13)
mtxer r11 mtxer r11
ld r11, (PACA_EXMC+EX_R11)(r13) ld r11, (PACA_KVM_R11)(r13)
ld r13, (PACA_EXMC+EX_R13)(r13) ld r13, (PACA_KVM_R13)(r13)
RFI RFI
kvmppc_handler_trampoline_enter_end: kvmppc_handler_trampoline_enter_end:
...@@ -162,28 +167,54 @@ kvmppc_handler_trampoline_exit: ...@@ -162,28 +167,54 @@ kvmppc_handler_trampoline_exit:
/* Register usage at this point: /* Register usage at this point:
* *
* SPRG_SCRATCH0 = guest R13 * SPRG_SCRATCH0 = guest R13
* R01 = host R1 * R12 = exit handler id
* R02 = host R2 * R13 = PACA
* R10 = guest PC * PACA.KVM.SCRATCH0 = guest R12
* R11 = guest MSR * PACA.KVM.SCRATCH1 = guest CR
* R12 = exit handler id
* R13 = PACA
* PACA.exmc.CCR = guest CR
* PACA.exmc.R9 = guest R1
* PACA.exmc.R10 = guest R10
* PACA.exmc.R11 = guest R11
* PACA.exmc.R12 = guest R12
* PACA.exmc.R13 = guest R2
* *
*/ */
/* Save registers */ /* Save registers */
std r0, (PACA_EXMC+EX_SRR0)(r13) std r0, PACA_KVM_R0(r13)
std r9, (PACA_EXMC+EX_R3)(r13) std r1, PACA_KVM_R1(r13)
std r10, (PACA_EXMC+EX_LR)(r13) std r2, PACA_KVM_R2(r13)
std r11, (PACA_EXMC+EX_DAR)(r13) std r3, PACA_KVM_R3(r13)
std r4, PACA_KVM_R4(r13)
std r5, PACA_KVM_R5(r13)
std r6, PACA_KVM_R6(r13)
std r7, PACA_KVM_R7(r13)
std r8, PACA_KVM_R8(r13)
std r9, PACA_KVM_R9(r13)
std r10, PACA_KVM_R10(r13)
std r11, PACA_KVM_R11(r13)
/* Restore R1/R2 so we can handle faults */
ld r1, PACA_KVM_HOST_R1(r13)
ld r2, PACA_KVM_HOST_R2(r13)
/* Save guest PC and MSR in GPRs */
mfsrr0 r3
mfsrr1 r4
/* Get scratch'ed off registers */
mfspr r9, SPRN_SPRG_SCRATCH0
std r9, PACA_KVM_R13(r13)
ld r8, PACA_KVM_SCRATCH0(r13)
std r8, PACA_KVM_R12(r13)
lwz r7, PACA_KVM_SCRATCH1(r13)
stw r7, PACA_KVM_CR(r13)
/* Save more register state */
mfxer r6
stw r6, PACA_KVM_XER(r13)
mfdar r5
mfdsisr r6
/* /*
* In order for us to easily get the last instruction, * In order for us to easily get the last instruction,
...@@ -202,17 +233,28 @@ kvmppc_handler_trampoline_exit: ...@@ -202,17 +233,28 @@ kvmppc_handler_trampoline_exit:
ld_last_inst: ld_last_inst:
/* Save off the guest instruction we're at */ /* Save off the guest instruction we're at */
/* Set guest mode to 'jump over instruction' so if lwz faults
* we'll just continue at the next IP. */
li r9, KVM_GUEST_MODE_SKIP
stb r9, PACA_KVM_IN_GUEST(r13)
/* 1) enable paging for data */ /* 1) enable paging for data */
mfmsr r9 mfmsr r9
ori r11, r9, MSR_DR /* Enable paging for data */ ori r11, r9, MSR_DR /* Enable paging for data */
mtmsr r11 mtmsr r11
/* 2) fetch the instruction */ /* 2) fetch the instruction */
lwz r0, 0(r10) li r0, KVM_INST_FETCH_FAILED /* In case lwz faults */
lwz r0, 0(r3)
/* 3) disable paging again */ /* 3) disable paging again */
mtmsr r9 mtmsr r9
no_ld_last_inst: no_ld_last_inst:
/* Unset guest mode */
li r9, KVM_GUEST_MODE_NONE
stb r9, PACA_KVM_IN_GUEST(r13)
/* Restore bolted entries from the shadow and fix it along the way */ /* Restore bolted entries from the shadow and fix it along the way */
/* We don't store anything in entry 0, so we don't need to take care of it */ /* We don't store anything in entry 0, so we don't need to take care of it */
...@@ -233,29 +275,27 @@ no_ld_last_inst: ...@@ -233,29 +275,27 @@ no_ld_last_inst:
slb_do_exit: slb_do_exit:
/* Restore registers */ /* Register usage at this point:
*
ld r11, (PACA_EXMC+EX_DAR)(r13) * R0 = guest last inst
ld r10, (PACA_EXMC+EX_LR)(r13) * R1 = host R1
ld r9, (PACA_EXMC+EX_R3)(r13) * R2 = host R2
* R3 = guest PC
/* Save last inst */ * R4 = guest MSR
stw r0, (PACA_EXMC+EX_LR)(r13) * R5 = guest DAR
* R6 = guest DSISR
/* Save DAR and DSISR before going to paged mode */ * R12 = exit handler id
mfdar r0 * R13 = PACA
std r0, (PACA_EXMC+EX_DAR)(r13) * PACA.KVM.* = guest *
mfdsisr r0 *
stw r0, (PACA_EXMC+EX_DSISR)(r13) */
/* RFI into the highmem handler */ /* RFI into the highmem handler */
mfmsr r0 mfmsr r7
ori r0, r0, MSR_IR|MSR_DR|MSR_RI /* Enable paging */ ori r7, r7, MSR_IR|MSR_DR|MSR_RI /* Enable paging */
mtsrr1 r0 mtsrr1 r7
ld r0, PACASAVEDMSR(r13) /* Highmem handler address */ ld r8, PACA_KVM_VMHANDLER(r13) /* Highmem handler address */
mtsrr0 r0 mtsrr0 r8
mfspr r0, SPRN_SPRG_SCRATCH0
RFI RFI
kvmppc_handler_trampoline_exit_end: kvmppc_handler_trampoline_exit_end:
......
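For readers following the trampoline rework above: the old entry/exit code parked guest state in the PACA's machine-check exception frame (PACA_EXMC + EX_*), while the new code gives KVM a dedicated save area addressed by the PACA_KVM_* offsets. A rough C view of that area, with field names and roles inferred only from the offsets used in the assembly above (a sketch, not the kernel's actual layout or field widths):

    /* Sketch only: per-CPU KVM save area implied by the PACA_KVM_* offsets. */
    struct kvmppc_paca_save_area {
            unsigned long  host_r1;      /* PACA_KVM_HOST_R1: host stack pointer      */
            unsigned long  host_r2;      /* PACA_KVM_HOST_R2: host TOC                */
            unsigned long  vmhandler;    /* PACA_KVM_VMHANDLER: highmem exit handler  */
            unsigned long  scratch0;     /* PACA_KVM_SCRATCH0: holds guest r12 on exit */
            unsigned int   scratch1;     /* PACA_KVM_SCRATCH1: holds guest cr on exit  */
            unsigned char  in_guest;     /* PACA_KVM_IN_GUEST: KVM_GUEST_MODE_*        */
            unsigned long  gpr[14];      /* PACA_KVM_R0 .. PACA_KVM_R13               */
            unsigned int   cr;           /* PACA_KVM_CR                               */
            unsigned int   xer;          /* PACA_KVM_XER                              */
    };

Keeping all 14 volatile-path GPRs here (rather than only r9-r13 in the EXMC frame) is what lets the new entry path advertise "all other GPRS = free" and lets the exit path restore r1/r2 early so faults taken during the exit (e.g. the guarded instruction fetch) can be handled normally.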
...@@ -69,10 +69,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu) ...@@ -69,10 +69,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
for (i = 0; i < 32; i += 4) { for (i = 0; i < 32; i += 4) {
printk("gpr%02d: %08lx %08lx %08lx %08lx\n", i, printk("gpr%02d: %08lx %08lx %08lx %08lx\n", i,
vcpu->arch.gpr[i], kvmppc_get_gpr(vcpu, i),
vcpu->arch.gpr[i+1], kvmppc_get_gpr(vcpu, i+1),
vcpu->arch.gpr[i+2], kvmppc_get_gpr(vcpu, i+2),
vcpu->arch.gpr[i+3]); kvmppc_get_gpr(vcpu, i+3));
} }
} }
...@@ -82,8 +82,32 @@ static void kvmppc_booke_queue_irqprio(struct kvm_vcpu *vcpu, ...@@ -82,8 +82,32 @@ static void kvmppc_booke_queue_irqprio(struct kvm_vcpu *vcpu,
set_bit(priority, &vcpu->arch.pending_exceptions); set_bit(priority, &vcpu->arch.pending_exceptions);
} }
void kvmppc_core_queue_program(struct kvm_vcpu *vcpu) static void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu,
ulong dear_flags, ulong esr_flags)
{ {
vcpu->arch.queued_dear = dear_flags;
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DTLB_MISS);
}
static void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu,
ulong dear_flags, ulong esr_flags)
{
vcpu->arch.queued_dear = dear_flags;
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DATA_STORAGE);
}
static void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu,
ulong esr_flags)
{
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_INST_STORAGE);
}
void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong esr_flags)
{
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_PROGRAM); kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_PROGRAM);
} }
...@@ -97,6 +121,11 @@ int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu) ...@@ -97,6 +121,11 @@ int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu)
return test_bit(BOOKE_IRQPRIO_DECREMENTER, &vcpu->arch.pending_exceptions); return test_bit(BOOKE_IRQPRIO_DECREMENTER, &vcpu->arch.pending_exceptions);
} }
void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu)
{
clear_bit(BOOKE_IRQPRIO_DECREMENTER, &vcpu->arch.pending_exceptions);
}
void kvmppc_core_queue_external(struct kvm_vcpu *vcpu, void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq) struct kvm_interrupt *irq)
{ {
...@@ -109,14 +138,19 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, ...@@ -109,14 +138,19 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
{ {
int allowed = 0; int allowed = 0;
ulong msr_mask; ulong msr_mask;
bool update_esr = false, update_dear = false;
switch (priority) { switch (priority) {
case BOOKE_IRQPRIO_PROGRAM:
case BOOKE_IRQPRIO_DTLB_MISS: case BOOKE_IRQPRIO_DTLB_MISS:
case BOOKE_IRQPRIO_ITLB_MISS:
case BOOKE_IRQPRIO_SYSCALL:
case BOOKE_IRQPRIO_DATA_STORAGE: case BOOKE_IRQPRIO_DATA_STORAGE:
update_dear = true;
/* fall through */
case BOOKE_IRQPRIO_INST_STORAGE: case BOOKE_IRQPRIO_INST_STORAGE:
case BOOKE_IRQPRIO_PROGRAM:
update_esr = true;
/* fall through */
case BOOKE_IRQPRIO_ITLB_MISS:
case BOOKE_IRQPRIO_SYSCALL:
case BOOKE_IRQPRIO_FP_UNAVAIL: case BOOKE_IRQPRIO_FP_UNAVAIL:
case BOOKE_IRQPRIO_SPE_UNAVAIL: case BOOKE_IRQPRIO_SPE_UNAVAIL:
case BOOKE_IRQPRIO_SPE_FP_DATA: case BOOKE_IRQPRIO_SPE_FP_DATA:
...@@ -151,6 +185,10 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, ...@@ -151,6 +185,10 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
vcpu->arch.srr0 = vcpu->arch.pc; vcpu->arch.srr0 = vcpu->arch.pc;
vcpu->arch.srr1 = vcpu->arch.msr; vcpu->arch.srr1 = vcpu->arch.msr;
vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority]; vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority];
if (update_esr == true)
vcpu->arch.esr = vcpu->arch.queued_esr;
if (update_dear == true)
vcpu->arch.dear = vcpu->arch.queued_dear;
kvmppc_set_msr(vcpu, vcpu->arch.msr & msr_mask); kvmppc_set_msr(vcpu, vcpu->arch.msr & msr_mask);
clear_bit(priority, &vcpu->arch.pending_exceptions); clear_bit(priority, &vcpu->arch.pending_exceptions);
...@@ -223,8 +261,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -223,8 +261,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (vcpu->arch.msr & MSR_PR) { if (vcpu->arch.msr & MSR_PR) {
/* Program traps generated by user-level software must be handled /* Program traps generated by user-level software must be handled
* by the guest kernel. */ * by the guest kernel. */
vcpu->arch.esr = vcpu->arch.fault_esr; kvmppc_core_queue_program(vcpu, vcpu->arch.fault_esr);
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_PROGRAM);
r = RESUME_GUEST; r = RESUME_GUEST;
kvmppc_account_exit(vcpu, USR_PR_INST); kvmppc_account_exit(vcpu, USR_PR_INST);
break; break;
...@@ -280,16 +317,14 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -280,16 +317,14 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
break; break;
case BOOKE_INTERRUPT_DATA_STORAGE: case BOOKE_INTERRUPT_DATA_STORAGE:
vcpu->arch.dear = vcpu->arch.fault_dear; kvmppc_core_queue_data_storage(vcpu, vcpu->arch.fault_dear,
vcpu->arch.esr = vcpu->arch.fault_esr; vcpu->arch.fault_esr);
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DATA_STORAGE);
kvmppc_account_exit(vcpu, DSI_EXITS); kvmppc_account_exit(vcpu, DSI_EXITS);
r = RESUME_GUEST; r = RESUME_GUEST;
break; break;
case BOOKE_INTERRUPT_INST_STORAGE: case BOOKE_INTERRUPT_INST_STORAGE:
vcpu->arch.esr = vcpu->arch.fault_esr; kvmppc_core_queue_inst_storage(vcpu, vcpu->arch.fault_esr);
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_INST_STORAGE);
kvmppc_account_exit(vcpu, ISI_EXITS); kvmppc_account_exit(vcpu, ISI_EXITS);
r = RESUME_GUEST; r = RESUME_GUEST;
break; break;
...@@ -310,9 +345,9 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -310,9 +345,9 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
gtlb_index = kvmppc_mmu_dtlb_index(vcpu, eaddr); gtlb_index = kvmppc_mmu_dtlb_index(vcpu, eaddr);
if (gtlb_index < 0) { if (gtlb_index < 0) {
/* The guest didn't have a mapping for it. */ /* The guest didn't have a mapping for it. */
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DTLB_MISS); kvmppc_core_queue_dtlb_miss(vcpu,
vcpu->arch.dear = vcpu->arch.fault_dear; vcpu->arch.fault_dear,
vcpu->arch.esr = vcpu->arch.fault_esr; vcpu->arch.fault_esr);
kvmppc_mmu_dtlb_miss(vcpu); kvmppc_mmu_dtlb_miss(vcpu);
kvmppc_account_exit(vcpu, DTLB_REAL_MISS_EXITS); kvmppc_account_exit(vcpu, DTLB_REAL_MISS_EXITS);
r = RESUME_GUEST; r = RESUME_GUEST;
...@@ -426,7 +461,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) ...@@ -426,7 +461,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{ {
vcpu->arch.pc = 0; vcpu->arch.pc = 0;
vcpu->arch.msr = 0; vcpu->arch.msr = 0;
vcpu->arch.gpr[1] = (16<<20) - 8; /* -8 for the callee-save LR slot */ kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
vcpu->arch.shadow_pid = 1; vcpu->arch.shadow_pid = 1;
...@@ -444,10 +479,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -444,10 +479,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
int i; int i;
regs->pc = vcpu->arch.pc; regs->pc = vcpu->arch.pc;
regs->cr = vcpu->arch.cr; regs->cr = kvmppc_get_cr(vcpu);
regs->ctr = vcpu->arch.ctr; regs->ctr = vcpu->arch.ctr;
regs->lr = vcpu->arch.lr; regs->lr = vcpu->arch.lr;
regs->xer = vcpu->arch.xer; regs->xer = kvmppc_get_xer(vcpu);
regs->msr = vcpu->arch.msr; regs->msr = vcpu->arch.msr;
regs->srr0 = vcpu->arch.srr0; regs->srr0 = vcpu->arch.srr0;
regs->srr1 = vcpu->arch.srr1; regs->srr1 = vcpu->arch.srr1;
...@@ -461,7 +496,7 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -461,7 +496,7 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
regs->sprg7 = vcpu->arch.sprg6; regs->sprg7 = vcpu->arch.sprg6;
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
regs->gpr[i] = vcpu->arch.gpr[i]; regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
return 0; return 0;
} }
...@@ -471,10 +506,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -471,10 +506,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
int i; int i;
vcpu->arch.pc = regs->pc; vcpu->arch.pc = regs->pc;
vcpu->arch.cr = regs->cr; kvmppc_set_cr(vcpu, regs->cr);
vcpu->arch.ctr = regs->ctr; vcpu->arch.ctr = regs->ctr;
vcpu->arch.lr = regs->lr; vcpu->arch.lr = regs->lr;
vcpu->arch.xer = regs->xer; kvmppc_set_xer(vcpu, regs->xer);
kvmppc_set_msr(vcpu, regs->msr); kvmppc_set_msr(vcpu, regs->msr);
vcpu->arch.srr0 = regs->srr0; vcpu->arch.srr0 = regs->srr0;
vcpu->arch.srr1 = regs->srr1; vcpu->arch.srr1 = regs->srr1;
...@@ -486,8 +521,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -486,8 +521,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
vcpu->arch.sprg6 = regs->sprg5; vcpu->arch.sprg6 = regs->sprg5;
vcpu->arch.sprg7 = regs->sprg6; vcpu->arch.sprg7 = regs->sprg6;
for (i = 0; i < ARRAY_SIZE(vcpu->arch.gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
vcpu->arch.gpr[i] = regs->gpr[i]; kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
return 0; return 0;
} }
......
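The booke.c changes above stop writing fault details straight into the architected DEAR/ESR at exit time; instead the exit handlers record them in per-vcpu queued_dear/queued_esr fields, and the delivery path copies them into DEAR/ESR only when the corresponding exception is actually injected. A condensed sketch of that pattern, using the field names from the diff (illustrative, not the full functions):

    /* Queue side: remember which exception is pending and its payload. */
    static void queue_dtlb_miss(struct kvm_vcpu *vcpu, unsigned long dear,
                                unsigned long esr)
    {
            vcpu->arch.queued_dear = dear;
            vcpu->arch.queued_esr  = esr;
            set_bit(BOOKE_IRQPRIO_DTLB_MISS, &vcpu->arch.pending_exceptions);
    }

    /* Delivery side: DEAR/ESR become guest-visible only now. */
    static void deliver(struct kvm_vcpu *vcpu, int priority,
                        bool update_dear, bool update_esr)
    {
            vcpu->arch.srr0 = vcpu->arch.pc;
            vcpu->arch.srr1 = vcpu->arch.msr;
            vcpu->arch.pc   = vcpu->arch.ivpr | vcpu->arch.ivor[priority];
            if (update_dear)
                    vcpu->arch.dear = vcpu->arch.queued_dear;
            if (update_esr)
                    vcpu->arch.esr  = vcpu->arch.queued_esr;
            clear_bit(priority, &vcpu->arch.pending_exceptions);
    }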
...@@ -62,20 +62,20 @@ int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -62,20 +62,20 @@ int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
case OP_31_XOP_MFMSR: case OP_31_XOP_MFMSR:
rt = get_rt(inst); rt = get_rt(inst);
vcpu->arch.gpr[rt] = vcpu->arch.msr; kvmppc_set_gpr(vcpu, rt, vcpu->arch.msr);
kvmppc_set_exit_type(vcpu, EMULATED_MFMSR_EXITS); kvmppc_set_exit_type(vcpu, EMULATED_MFMSR_EXITS);
break; break;
case OP_31_XOP_MTMSR: case OP_31_XOP_MTMSR:
rs = get_rs(inst); rs = get_rs(inst);
kvmppc_set_exit_type(vcpu, EMULATED_MTMSR_EXITS); kvmppc_set_exit_type(vcpu, EMULATED_MTMSR_EXITS);
kvmppc_set_msr(vcpu, vcpu->arch.gpr[rs]); kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, rs));
break; break;
case OP_31_XOP_WRTEE: case OP_31_XOP_WRTEE:
rs = get_rs(inst); rs = get_rs(inst);
vcpu->arch.msr = (vcpu->arch.msr & ~MSR_EE) vcpu->arch.msr = (vcpu->arch.msr & ~MSR_EE)
| (vcpu->arch.gpr[rs] & MSR_EE); | (kvmppc_get_gpr(vcpu, rs) & MSR_EE);
kvmppc_set_exit_type(vcpu, EMULATED_WRTEE_EXITS); kvmppc_set_exit_type(vcpu, EMULATED_WRTEE_EXITS);
break; break;
...@@ -101,22 +101,23 @@ int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -101,22 +101,23 @@ int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs) int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
{ {
int emulated = EMULATE_DONE; int emulated = EMULATE_DONE;
ulong spr_val = kvmppc_get_gpr(vcpu, rs);
switch (sprn) { switch (sprn) {
case SPRN_DEAR: case SPRN_DEAR:
vcpu->arch.dear = vcpu->arch.gpr[rs]; break; vcpu->arch.dear = spr_val; break;
case SPRN_ESR: case SPRN_ESR:
vcpu->arch.esr = vcpu->arch.gpr[rs]; break; vcpu->arch.esr = spr_val; break;
case SPRN_DBCR0: case SPRN_DBCR0:
vcpu->arch.dbcr0 = vcpu->arch.gpr[rs]; break; vcpu->arch.dbcr0 = spr_val; break;
case SPRN_DBCR1: case SPRN_DBCR1:
vcpu->arch.dbcr1 = vcpu->arch.gpr[rs]; break; vcpu->arch.dbcr1 = spr_val; break;
case SPRN_DBSR: case SPRN_DBSR:
vcpu->arch.dbsr &= ~vcpu->arch.gpr[rs]; break; vcpu->arch.dbsr &= ~spr_val; break;
case SPRN_TSR: case SPRN_TSR:
vcpu->arch.tsr &= ~vcpu->arch.gpr[rs]; break; vcpu->arch.tsr &= ~spr_val; break;
case SPRN_TCR: case SPRN_TCR:
vcpu->arch.tcr = vcpu->arch.gpr[rs]; vcpu->arch.tcr = spr_val;
kvmppc_emulate_dec(vcpu); kvmppc_emulate_dec(vcpu);
break; break;
...@@ -124,64 +125,64 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs) ...@@ -124,64 +125,64 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
* loaded into the real SPRGs when resuming the * loaded into the real SPRGs when resuming the
* guest. */ * guest. */
case SPRN_SPRG4: case SPRN_SPRG4:
vcpu->arch.sprg4 = vcpu->arch.gpr[rs]; break; vcpu->arch.sprg4 = spr_val; break;
case SPRN_SPRG5: case SPRN_SPRG5:
vcpu->arch.sprg5 = vcpu->arch.gpr[rs]; break; vcpu->arch.sprg5 = spr_val; break;
case SPRN_SPRG6: case SPRN_SPRG6:
vcpu->arch.sprg6 = vcpu->arch.gpr[rs]; break; vcpu->arch.sprg6 = spr_val; break;
case SPRN_SPRG7: case SPRN_SPRG7:
vcpu->arch.sprg7 = vcpu->arch.gpr[rs]; break; vcpu->arch.sprg7 = spr_val; break;
case SPRN_IVPR: case SPRN_IVPR:
vcpu->arch.ivpr = vcpu->arch.gpr[rs]; vcpu->arch.ivpr = spr_val;
break; break;
case SPRN_IVOR0: case SPRN_IVOR0:
vcpu->arch.ivor[BOOKE_IRQPRIO_CRITICAL] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_CRITICAL] = spr_val;
break; break;
case SPRN_IVOR1: case SPRN_IVOR1:
vcpu->arch.ivor[BOOKE_IRQPRIO_MACHINE_CHECK] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_MACHINE_CHECK] = spr_val;
break; break;
case SPRN_IVOR2: case SPRN_IVOR2:
vcpu->arch.ivor[BOOKE_IRQPRIO_DATA_STORAGE] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_DATA_STORAGE] = spr_val;
break; break;
case SPRN_IVOR3: case SPRN_IVOR3:
vcpu->arch.ivor[BOOKE_IRQPRIO_INST_STORAGE] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_INST_STORAGE] = spr_val;
break; break;
case SPRN_IVOR4: case SPRN_IVOR4:
vcpu->arch.ivor[BOOKE_IRQPRIO_EXTERNAL] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_EXTERNAL] = spr_val;
break; break;
case SPRN_IVOR5: case SPRN_IVOR5:
vcpu->arch.ivor[BOOKE_IRQPRIO_ALIGNMENT] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_ALIGNMENT] = spr_val;
break; break;
case SPRN_IVOR6: case SPRN_IVOR6:
vcpu->arch.ivor[BOOKE_IRQPRIO_PROGRAM] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_PROGRAM] = spr_val;
break; break;
case SPRN_IVOR7: case SPRN_IVOR7:
vcpu->arch.ivor[BOOKE_IRQPRIO_FP_UNAVAIL] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_FP_UNAVAIL] = spr_val;
break; break;
case SPRN_IVOR8: case SPRN_IVOR8:
vcpu->arch.ivor[BOOKE_IRQPRIO_SYSCALL] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_SYSCALL] = spr_val;
break; break;
case SPRN_IVOR9: case SPRN_IVOR9:
vcpu->arch.ivor[BOOKE_IRQPRIO_AP_UNAVAIL] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_AP_UNAVAIL] = spr_val;
break; break;
case SPRN_IVOR10: case SPRN_IVOR10:
vcpu->arch.ivor[BOOKE_IRQPRIO_DECREMENTER] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_DECREMENTER] = spr_val;
break; break;
case SPRN_IVOR11: case SPRN_IVOR11:
vcpu->arch.ivor[BOOKE_IRQPRIO_FIT] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_FIT] = spr_val;
break; break;
case SPRN_IVOR12: case SPRN_IVOR12:
vcpu->arch.ivor[BOOKE_IRQPRIO_WATCHDOG] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_WATCHDOG] = spr_val;
break; break;
case SPRN_IVOR13: case SPRN_IVOR13:
vcpu->arch.ivor[BOOKE_IRQPRIO_DTLB_MISS] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_DTLB_MISS] = spr_val;
break; break;
case SPRN_IVOR14: case SPRN_IVOR14:
vcpu->arch.ivor[BOOKE_IRQPRIO_ITLB_MISS] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_ITLB_MISS] = spr_val;
break; break;
case SPRN_IVOR15: case SPRN_IVOR15:
vcpu->arch.ivor[BOOKE_IRQPRIO_DEBUG] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_DEBUG] = spr_val;
break; break;
default: default:
...@@ -197,65 +198,65 @@ int kvmppc_booke_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt) ...@@ -197,65 +198,65 @@ int kvmppc_booke_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
switch (sprn) { switch (sprn) {
case SPRN_IVPR: case SPRN_IVPR:
vcpu->arch.gpr[rt] = vcpu->arch.ivpr; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivpr); break;
case SPRN_DEAR: case SPRN_DEAR:
vcpu->arch.gpr[rt] = vcpu->arch.dear; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.dear); break;
case SPRN_ESR: case SPRN_ESR:
vcpu->arch.gpr[rt] = vcpu->arch.esr; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.esr); break;
case SPRN_DBCR0: case SPRN_DBCR0:
vcpu->arch.gpr[rt] = vcpu->arch.dbcr0; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.dbcr0); break;
case SPRN_DBCR1: case SPRN_DBCR1:
vcpu->arch.gpr[rt] = vcpu->arch.dbcr1; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.dbcr1); break;
case SPRN_DBSR: case SPRN_DBSR:
vcpu->arch.gpr[rt] = vcpu->arch.dbsr; break; kvmppc_set_gpr(vcpu, rt, vcpu->arch.dbsr); break;
case SPRN_IVOR0: case SPRN_IVOR0:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_CRITICAL]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_CRITICAL]);
break; break;
case SPRN_IVOR1: case SPRN_IVOR1:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_MACHINE_CHECK]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_MACHINE_CHECK]);
break; break;
case SPRN_IVOR2: case SPRN_IVOR2:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_DATA_STORAGE]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_DATA_STORAGE]);
break; break;
case SPRN_IVOR3: case SPRN_IVOR3:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_INST_STORAGE]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_INST_STORAGE]);
break; break;
case SPRN_IVOR4: case SPRN_IVOR4:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_EXTERNAL]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_EXTERNAL]);
break; break;
case SPRN_IVOR5: case SPRN_IVOR5:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_ALIGNMENT]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_ALIGNMENT]);
break; break;
case SPRN_IVOR6: case SPRN_IVOR6:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_PROGRAM]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_PROGRAM]);
break; break;
case SPRN_IVOR7: case SPRN_IVOR7:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_FP_UNAVAIL]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_FP_UNAVAIL]);
break; break;
case SPRN_IVOR8: case SPRN_IVOR8:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_SYSCALL]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_SYSCALL]);
break; break;
case SPRN_IVOR9: case SPRN_IVOR9:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_AP_UNAVAIL]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_AP_UNAVAIL]);
break; break;
case SPRN_IVOR10: case SPRN_IVOR10:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_DECREMENTER]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_DECREMENTER]);
break; break;
case SPRN_IVOR11: case SPRN_IVOR11:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_FIT]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_FIT]);
break; break;
case SPRN_IVOR12: case SPRN_IVOR12:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_WATCHDOG]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_WATCHDOG]);
break; break;
case SPRN_IVOR13: case SPRN_IVOR13:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_DTLB_MISS]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_DTLB_MISS]);
break; break;
case SPRN_IVOR14: case SPRN_IVOR14:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_ITLB_MISS]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_ITLB_MISS]);
break; break;
case SPRN_IVOR15: case SPRN_IVOR15:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_DEBUG]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_DEBUG]);
break; break;
default: default:
......
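Throughout booke.c and booke_emulate.c the direct vcpu->arch.gpr[] accesses are converted to kvmppc_get_gpr()/kvmppc_set_gpr(). The helpers themselves are defined in a header that is not part of the hunks shown here; on booke they are presumably just thin wrappers, along the lines of:

    static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num,
                                      unsigned long val)
    {
            vcpu->arch.gpr[num] = val;
    }

    static inline unsigned long kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
    {
            return vcpu->arch.gpr[num];
    }

Funneling every access through one pair of accessors means an architecture that keeps (or caches) guest registers somewhere other than vcpu->arch.gpr[] can do so without touching every emulation site, which appears to be the motivation for the churn in this series.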
...@@ -60,6 +60,12 @@ int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu) ...@@ -60,6 +60,12 @@ int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu)
kvmppc_e500_tlb_setup(vcpu_e500); kvmppc_e500_tlb_setup(vcpu_e500);
/* Registers init */
vcpu->arch.pvr = mfspr(SPRN_PVR);
/* Since booke kvm only support one core, update all vcpus' PIR to 0 */
vcpu->vcpu_id = 0;
return 0; return 0;
} }
......
...@@ -74,54 +74,59 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs) ...@@ -74,54 +74,59 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
{ {
struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu); struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
int emulated = EMULATE_DONE; int emulated = EMULATE_DONE;
ulong spr_val = kvmppc_get_gpr(vcpu, rs);
switch (sprn) { switch (sprn) {
case SPRN_PID: case SPRN_PID:
vcpu_e500->pid[0] = vcpu->arch.shadow_pid = vcpu_e500->pid[0] = vcpu->arch.shadow_pid =
vcpu->arch.pid = vcpu->arch.gpr[rs]; vcpu->arch.pid = spr_val;
break; break;
case SPRN_PID1: case SPRN_PID1:
vcpu_e500->pid[1] = vcpu->arch.gpr[rs]; break; vcpu_e500->pid[1] = spr_val; break;
case SPRN_PID2: case SPRN_PID2:
vcpu_e500->pid[2] = vcpu->arch.gpr[rs]; break; vcpu_e500->pid[2] = spr_val; break;
case SPRN_MAS0: case SPRN_MAS0:
vcpu_e500->mas0 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas0 = spr_val; break;
case SPRN_MAS1: case SPRN_MAS1:
vcpu_e500->mas1 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas1 = spr_val; break;
case SPRN_MAS2: case SPRN_MAS2:
vcpu_e500->mas2 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas2 = spr_val; break;
case SPRN_MAS3: case SPRN_MAS3:
vcpu_e500->mas3 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas3 = spr_val; break;
case SPRN_MAS4: case SPRN_MAS4:
vcpu_e500->mas4 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas4 = spr_val; break;
case SPRN_MAS6: case SPRN_MAS6:
vcpu_e500->mas6 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas6 = spr_val; break;
case SPRN_MAS7: case SPRN_MAS7:
vcpu_e500->mas7 = vcpu->arch.gpr[rs]; break; vcpu_e500->mas7 = spr_val; break;
case SPRN_L1CSR0:
vcpu_e500->l1csr0 = spr_val;
vcpu_e500->l1csr0 &= ~(L1CSR0_DCFI | L1CSR0_CLFC);
break;
case SPRN_L1CSR1: case SPRN_L1CSR1:
vcpu_e500->l1csr1 = vcpu->arch.gpr[rs]; break; vcpu_e500->l1csr1 = spr_val; break;
case SPRN_HID0: case SPRN_HID0:
vcpu_e500->hid0 = vcpu->arch.gpr[rs]; break; vcpu_e500->hid0 = spr_val; break;
case SPRN_HID1: case SPRN_HID1:
vcpu_e500->hid1 = vcpu->arch.gpr[rs]; break; vcpu_e500->hid1 = spr_val; break;
case SPRN_MMUCSR0: case SPRN_MMUCSR0:
emulated = kvmppc_e500_emul_mt_mmucsr0(vcpu_e500, emulated = kvmppc_e500_emul_mt_mmucsr0(vcpu_e500,
vcpu->arch.gpr[rs]); spr_val);
break; break;
/* extra exceptions */ /* extra exceptions */
case SPRN_IVOR32: case SPRN_IVOR32:
vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_UNAVAIL] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_UNAVAIL] = spr_val;
break; break;
case SPRN_IVOR33: case SPRN_IVOR33:
vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_DATA] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_DATA] = spr_val;
break; break;
case SPRN_IVOR34: case SPRN_IVOR34:
vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_ROUND] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_ROUND] = spr_val;
break; break;
case SPRN_IVOR35: case SPRN_IVOR35:
vcpu->arch.ivor[BOOKE_IRQPRIO_PERFORMANCE_MONITOR] = vcpu->arch.gpr[rs]; vcpu->arch.ivor[BOOKE_IRQPRIO_PERFORMANCE_MONITOR] = spr_val;
break; break;
default: default:
...@@ -138,63 +143,57 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt) ...@@ -138,63 +143,57 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
switch (sprn) { switch (sprn) {
case SPRN_PID: case SPRN_PID:
vcpu->arch.gpr[rt] = vcpu_e500->pid[0]; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->pid[0]); break;
case SPRN_PID1: case SPRN_PID1:
vcpu->arch.gpr[rt] = vcpu_e500->pid[1]; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->pid[1]); break;
case SPRN_PID2: case SPRN_PID2:
vcpu->arch.gpr[rt] = vcpu_e500->pid[2]; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->pid[2]); break;
case SPRN_MAS0: case SPRN_MAS0:
vcpu->arch.gpr[rt] = vcpu_e500->mas0; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas0); break;
case SPRN_MAS1: case SPRN_MAS1:
vcpu->arch.gpr[rt] = vcpu_e500->mas1; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas1); break;
case SPRN_MAS2: case SPRN_MAS2:
vcpu->arch.gpr[rt] = vcpu_e500->mas2; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas2); break;
case SPRN_MAS3: case SPRN_MAS3:
vcpu->arch.gpr[rt] = vcpu_e500->mas3; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas3); break;
case SPRN_MAS4: case SPRN_MAS4:
vcpu->arch.gpr[rt] = vcpu_e500->mas4; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas4); break;
case SPRN_MAS6: case SPRN_MAS6:
vcpu->arch.gpr[rt] = vcpu_e500->mas6; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas6); break;
case SPRN_MAS7: case SPRN_MAS7:
vcpu->arch.gpr[rt] = vcpu_e500->mas7; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->mas7); break;
case SPRN_TLB0CFG: case SPRN_TLB0CFG:
vcpu->arch.gpr[rt] = mfspr(SPRN_TLB0CFG); kvmppc_set_gpr(vcpu, rt, vcpu_e500->tlb0cfg); break;
vcpu->arch.gpr[rt] &= ~0xfffUL;
vcpu->arch.gpr[rt] |= vcpu_e500->guest_tlb_size[0];
break;
case SPRN_TLB1CFG: case SPRN_TLB1CFG:
vcpu->arch.gpr[rt] = mfspr(SPRN_TLB1CFG); kvmppc_set_gpr(vcpu, rt, vcpu_e500->tlb1cfg); break;
vcpu->arch.gpr[rt] &= ~0xfffUL; case SPRN_L1CSR0:
vcpu->arch.gpr[rt] |= vcpu_e500->guest_tlb_size[1]; kvmppc_set_gpr(vcpu, rt, vcpu_e500->l1csr0); break;
break;
case SPRN_L1CSR1: case SPRN_L1CSR1:
vcpu->arch.gpr[rt] = vcpu_e500->l1csr1; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->l1csr1); break;
case SPRN_HID0: case SPRN_HID0:
vcpu->arch.gpr[rt] = vcpu_e500->hid0; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->hid0); break;
case SPRN_HID1: case SPRN_HID1:
vcpu->arch.gpr[rt] = vcpu_e500->hid1; break; kvmppc_set_gpr(vcpu, rt, vcpu_e500->hid1); break;
case SPRN_MMUCSR0: case SPRN_MMUCSR0:
vcpu->arch.gpr[rt] = 0; break; kvmppc_set_gpr(vcpu, rt, 0); break;
case SPRN_MMUCFG: case SPRN_MMUCFG:
vcpu->arch.gpr[rt] = mfspr(SPRN_MMUCFG); break; kvmppc_set_gpr(vcpu, rt, mfspr(SPRN_MMUCFG)); break;
/* extra exceptions */ /* extra exceptions */
case SPRN_IVOR32: case SPRN_IVOR32:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_UNAVAIL]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_UNAVAIL]);
break; break;
case SPRN_IVOR33: case SPRN_IVOR33:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_DATA]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_DATA]);
break; break;
case SPRN_IVOR34: case SPRN_IVOR34:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_ROUND]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_FP_ROUND]);
break; break;
case SPRN_IVOR35: case SPRN_IVOR35:
vcpu->arch.gpr[rt] = vcpu->arch.ivor[BOOKE_IRQPRIO_PERFORMANCE_MONITOR]; kvmppc_set_gpr(vcpu, rt, vcpu->arch.ivor[BOOKE_IRQPRIO_PERFORMANCE_MONITOR]);
break; break;
default: default:
emulated = kvmppc_booke_emulate_mfspr(vcpu, sprn, rt); emulated = kvmppc_booke_emulate_mfspr(vcpu, sprn, rt);
......
...@@ -417,7 +417,7 @@ int kvmppc_e500_emul_tlbivax(struct kvm_vcpu *vcpu, int ra, int rb) ...@@ -417,7 +417,7 @@ int kvmppc_e500_emul_tlbivax(struct kvm_vcpu *vcpu, int ra, int rb)
int esel, tlbsel; int esel, tlbsel;
gva_t ea; gva_t ea;
ea = ((ra) ? vcpu->arch.gpr[ra] : 0) + vcpu->arch.gpr[rb]; ea = ((ra) ? kvmppc_get_gpr(vcpu, ra) : 0) + kvmppc_get_gpr(vcpu, rb);
ia = (ea >> 2) & 0x1; ia = (ea >> 2) & 0x1;
...@@ -470,7 +470,7 @@ int kvmppc_e500_emul_tlbsx(struct kvm_vcpu *vcpu, int rb) ...@@ -470,7 +470,7 @@ int kvmppc_e500_emul_tlbsx(struct kvm_vcpu *vcpu, int rb)
struct tlbe *gtlbe = NULL; struct tlbe *gtlbe = NULL;
gva_t ea; gva_t ea;
ea = vcpu->arch.gpr[rb]; ea = kvmppc_get_gpr(vcpu, rb);
for (tlbsel = 0; tlbsel < 2; tlbsel++) { for (tlbsel = 0; tlbsel < 2; tlbsel++) {
esel = kvmppc_e500_tlb_index(vcpu_e500, ea, tlbsel, pid, as); esel = kvmppc_e500_tlb_index(vcpu_e500, ea, tlbsel, pid, as);
...@@ -728,6 +728,12 @@ int kvmppc_e500_tlb_init(struct kvmppc_vcpu_e500 *vcpu_e500) ...@@ -728,6 +728,12 @@ int kvmppc_e500_tlb_init(struct kvmppc_vcpu_e500 *vcpu_e500)
if (vcpu_e500->shadow_pages[1] == NULL) if (vcpu_e500->shadow_pages[1] == NULL)
goto err_out_page0; goto err_out_page0;
/* Init TLB configuration register */
vcpu_e500->tlb0cfg = mfspr(SPRN_TLB0CFG) & ~0xfffUL;
vcpu_e500->tlb0cfg |= vcpu_e500->guest_tlb_size[0];
vcpu_e500->tlb1cfg = mfspr(SPRN_TLB1CFG) & ~0xfffUL;
vcpu_e500->tlb1cfg |= vcpu_e500->guest_tlb_size[1];
return 0; return 0;
err_out_page0: err_out_page0:
......
...@@ -137,6 +137,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm) ...@@ -137,6 +137,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
{ {
kvmppc_free_vcpus(kvm); kvmppc_free_vcpus(kvm);
kvm_free_physmem(kvm); kvm_free_physmem(kvm);
cleanup_srcu_struct(&kvm->srcu);
kfree(kvm); kfree(kvm);
} }
...@@ -165,14 +166,24 @@ long kvm_arch_dev_ioctl(struct file *filp, ...@@ -165,14 +166,24 @@ long kvm_arch_dev_ioctl(struct file *filp,
return -EINVAL; return -EINVAL;
} }
int kvm_arch_set_memory_region(struct kvm *kvm, int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem, struct kvm_memory_slot *memslot,
struct kvm_memory_slot old, struct kvm_memory_slot old,
int user_alloc) struct kvm_userspace_memory_region *mem,
int user_alloc)
{ {
return 0; return 0;
} }
void kvm_arch_commit_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem,
struct kvm_memory_slot old,
int user_alloc)
{
return;
}
void kvm_arch_flush_shadow(struct kvm *kvm) void kvm_arch_flush_shadow(struct kvm *kvm)
{ {
} }
...@@ -260,34 +271,35 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, ...@@ -260,34 +271,35 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
static void kvmppc_complete_dcr_load(struct kvm_vcpu *vcpu, static void kvmppc_complete_dcr_load(struct kvm_vcpu *vcpu,
struct kvm_run *run) struct kvm_run *run)
{ {
ulong *gpr = &vcpu->arch.gpr[vcpu->arch.io_gpr]; kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, run->dcr.data);
*gpr = run->dcr.data;
} }
static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
struct kvm_run *run) struct kvm_run *run)
{ {
ulong *gpr = &vcpu->arch.gpr[vcpu->arch.io_gpr]; ulong gpr;
if (run->mmio.len > sizeof(*gpr)) { if (run->mmio.len > sizeof(gpr)) {
printk(KERN_ERR "bad MMIO length: %d\n", run->mmio.len); printk(KERN_ERR "bad MMIO length: %d\n", run->mmio.len);
return; return;
} }
if (vcpu->arch.mmio_is_bigendian) { if (vcpu->arch.mmio_is_bigendian) {
switch (run->mmio.len) { switch (run->mmio.len) {
case 4: *gpr = *(u32 *)run->mmio.data; break; case 4: gpr = *(u32 *)run->mmio.data; break;
case 2: *gpr = *(u16 *)run->mmio.data; break; case 2: gpr = *(u16 *)run->mmio.data; break;
case 1: *gpr = *(u8 *)run->mmio.data; break; case 1: gpr = *(u8 *)run->mmio.data; break;
} }
} else { } else {
/* Convert BE data from userland back to LE. */ /* Convert BE data from userland back to LE. */
switch (run->mmio.len) { switch (run->mmio.len) {
case 4: *gpr = ld_le32((u32 *)run->mmio.data); break; case 4: gpr = ld_le32((u32 *)run->mmio.data); break;
case 2: *gpr = ld_le16((u16 *)run->mmio.data); break; case 2: gpr = ld_le16((u16 *)run->mmio.data); break;
case 1: *gpr = *(u8 *)run->mmio.data; break; case 1: gpr = *(u8 *)run->mmio.data; break;
} }
} }
kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
} }
int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu, int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
......
...@@ -242,6 +242,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm) ...@@ -242,6 +242,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kvm_free_physmem(kvm); kvm_free_physmem(kvm);
free_page((unsigned long)(kvm->arch.sca)); free_page((unsigned long)(kvm->arch.sca));
debug_unregister(kvm->arch.dbf); debug_unregister(kvm->arch.dbf);
cleanup_srcu_struct(&kvm->srcu);
kfree(kvm); kfree(kvm);
} }
...@@ -690,14 +691,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp, ...@@ -690,14 +691,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
} }
/* Section: memory related */ /* Section: memory related */
int kvm_arch_set_memory_region(struct kvm *kvm, int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem, struct kvm_memory_slot *memslot,
struct kvm_memory_slot old, struct kvm_memory_slot old,
int user_alloc) struct kvm_userspace_memory_region *mem,
int user_alloc)
{ {
int i;
struct kvm_vcpu *vcpu;
/* A few sanity checks. We can have exactly one memory slot which has /* A few sanity checks. We can have exactly one memory slot which has
to start at guest virtual zero and which has to be located at a to start at guest virtual zero and which has to be located at a
page boundary in userland and which has to end at a page boundary. page boundary in userland and which has to end at a page boundary.
...@@ -720,14 +719,23 @@ int kvm_arch_set_memory_region(struct kvm *kvm, ...@@ -720,14 +719,23 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
if (!user_alloc) if (!user_alloc)
return -EINVAL; return -EINVAL;
return 0;
}
void kvm_arch_commit_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem,
struct kvm_memory_slot old,
int user_alloc)
{
int i;
struct kvm_vcpu *vcpu;
/* request update of sie control block for all available vcpus */ /* request update of sie control block for all available vcpus */
kvm_for_each_vcpu(i, vcpu, kvm) { kvm_for_each_vcpu(i, vcpu, kvm) {
if (test_and_set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests)) if (test_and_set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
continue; continue;
kvm_s390_inject_sigp_stop(vcpu, ACTION_RELOADVCPU_ON_STOP); kvm_s390_inject_sigp_stop(vcpu, ACTION_RELOADVCPU_ON_STOP);
} }
return 0;
} }
void kvm_arch_flush_shadow(struct kvm *kvm) void kvm_arch_flush_shadow(struct kvm *kvm)
......
...@@ -67,10 +67,14 @@ static inline long kvm_s390_vcpu_get_memsize(struct kvm_vcpu *vcpu) ...@@ -67,10 +67,14 @@ static inline long kvm_s390_vcpu_get_memsize(struct kvm_vcpu *vcpu)
static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu) static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu)
{ {
int idx;
struct kvm_memory_slot *mem; struct kvm_memory_slot *mem;
struct kvm_memslots *memslots;
down_read(&vcpu->kvm->slots_lock); idx = srcu_read_lock(&vcpu->kvm->srcu);
mem = &vcpu->kvm->memslots[0]; memslots = rcu_dereference(vcpu->kvm->memslots);
mem = &memslots->memslots[0];
vcpu->arch.sie_block->gmsor = mem->userspace_addr; vcpu->arch.sie_block->gmsor = mem->userspace_addr;
vcpu->arch.sie_block->gmslm = vcpu->arch.sie_block->gmslm =
...@@ -78,7 +82,7 @@ static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu) ...@@ -78,7 +82,7 @@ static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu)
(mem->npages << PAGE_SHIFT) + (mem->npages << PAGE_SHIFT) +
VIRTIODESCSPACE - 1ul; VIRTIODESCSPACE - 1ul;
up_read(&vcpu->kvm->slots_lock); srcu_read_unlock(&vcpu->kvm->srcu, idx);
} }
/* implemented in priv.c */ /* implemented in priv.c */
......
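The s390 hunk above replaces the slots_lock rwsem with the SRCU-protected memslot access used elsewhere in this series. A minimal sketch of the read-side pattern for any consumer of the memslot array, following the calls in the diff (illustrative helper, error handling omitted):

    static unsigned long slot0_userspace_base(struct kvm *kvm)
    {
            struct kvm_memslots *slots;
            unsigned long base;
            int idx;

            idx = srcu_read_lock(&kvm->srcu);        /* enter SRCU read side            */
            slots = rcu_dereference(kvm->memslots);  /* snapshot of the current array   */
            base = slots->memslots[0].userspace_addr;
            srcu_read_unlock(&kvm->srcu, idx);       /* old arrays may be freed after this */
            return base;
    }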
...@@ -11,6 +11,7 @@ header-y += sigcontext32.h ...@@ -11,6 +11,7 @@ header-y += sigcontext32.h
header-y += ucontext.h header-y += ucontext.h
header-y += processor-flags.h header-y += processor-flags.h
header-y += hw_breakpoint.h header-y += hw_breakpoint.h
header-y += hyperv.h
unifdef-y += e820.h unifdef-y += e820.h
unifdef-y += ist.h unifdef-y += ist.h
......
#ifndef _ASM_X86_KVM_HYPERV_H
#define _ASM_X86_KVM_HYPERV_H
#include <linux/types.h>
/*
* The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
* is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
*/
#define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS 0x40000000
#define HYPERV_CPUID_INTERFACE 0x40000001
#define HYPERV_CPUID_VERSION 0x40000002
#define HYPERV_CPUID_FEATURES 0x40000003
#define HYPERV_CPUID_ENLIGHTMENT_INFO 0x40000004
#define HYPERV_CPUID_IMPLEMENT_LIMITS 0x40000005
/*
* Feature identification. EAX indicates which features are available
* to the partition based upon the current partition privileges.
*/
/* VP Runtime (HV_X64_MSR_VP_RUNTIME) available */
#define HV_X64_MSR_VP_RUNTIME_AVAILABLE (1 << 0)
/* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/
#define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1)
/*
* Basic SynIC MSRs (HV_X64_MSR_SCONTROL through HV_X64_MSR_EOM
* and HV_X64_MSR_SINT0 through HV_X64_MSR_SINT15) available
*/
#define HV_X64_MSR_SYNIC_AVAILABLE (1 << 2)
/*
* Synthetic Timer MSRs (HV_X64_MSR_STIMER0_CONFIG through
* HV_X64_MSR_STIMER3_COUNT) available
*/
#define HV_X64_MSR_SYNTIMER_AVAILABLE (1 << 3)
/*
* APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR)
* are available
*/
#define HV_X64_MSR_APIC_ACCESS_AVAILABLE (1 << 4)
/* Hypercall MSRs (HV_X64_MSR_GUEST_OS_ID and HV_X64_MSR_HYPERCALL) available*/
#define HV_X64_MSR_HYPERCALL_AVAILABLE (1 << 5)
/* Access virtual processor index MSR (HV_X64_MSR_VP_INDEX) available*/
#define HV_X64_MSR_VP_INDEX_AVAILABLE (1 << 6)
/* Virtual system reset MSR (HV_X64_MSR_RESET) is available*/
#define HV_X64_MSR_RESET_AVAILABLE (1 << 7)
/*
* Access statistics pages MSRs (HV_X64_MSR_STATS_PARTITION_RETAIL_PAGE,
* HV_X64_MSR_STATS_PARTITION_INTERNAL_PAGE, HV_X64_MSR_STATS_VP_RETAIL_PAGE,
* HV_X64_MSR_STATS_VP_INTERNAL_PAGE) available
*/
#define HV_X64_MSR_STAT_PAGES_AVAILABLE (1 << 8)
/*
* Feature identification: EBX indicates which flags were specified at
* partition creation. The format is the same as the partition creation
* flag structure defined in section Partition Creation Flags.
*/
#define HV_X64_CREATE_PARTITIONS (1 << 0)
#define HV_X64_ACCESS_PARTITION_ID (1 << 1)
#define HV_X64_ACCESS_MEMORY_POOL (1 << 2)
#define HV_X64_ADJUST_MESSAGE_BUFFERS (1 << 3)
#define HV_X64_POST_MESSAGES (1 << 4)
#define HV_X64_SIGNAL_EVENTS (1 << 5)
#define HV_X64_CREATE_PORT (1 << 6)
#define HV_X64_CONNECT_PORT (1 << 7)
#define HV_X64_ACCESS_STATS (1 << 8)
#define HV_X64_DEBUGGING (1 << 11)
#define HV_X64_CPU_POWER_MANAGEMENT (1 << 12)
#define HV_X64_CONFIGURE_PROFILER (1 << 13)
/*
* Feature identification. EDX indicates which miscellaneous features
* are available to the partition.
*/
/* The MWAIT instruction is available (per section MONITOR / MWAIT) */
#define HV_X64_MWAIT_AVAILABLE (1 << 0)
/* Guest debugging support is available */
#define HV_X64_GUEST_DEBUGGING_AVAILABLE (1 << 1)
/* Performance Monitor support is available*/
#define HV_X64_PERF_MONITOR_AVAILABLE (1 << 2)
/* Support for physical CPU dynamic partitioning events is available*/
#define HV_X64_CPU_DYNAMIC_PARTITIONING_AVAILABLE (1 << 3)
/*
* Support for passing hypercall input parameter block via XMM
* registers is available
*/
#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE (1 << 4)
/* Support for a virtual guest idle state is available */
#define HV_X64_GUEST_IDLE_STATE_AVAILABLE (1 << 5)
/*
* Implementation recommendations. Indicates which behaviors the hypervisor
* recommends the OS implement for optimal performance.
*/
/*
* Recommend using hypercall for address space switches rather
* than MOV to CR3 instruction
*/
#define HV_X64_MWAIT_RECOMMENDED (1 << 0)
/* Recommend using hypercall for local TLB flushes rather
* than INVLPG or MOV to CR3 instructions */
#define HV_X64_LOCAL_TLB_FLUSH_RECOMMENDED (1 << 1)
/*
* Recommend using hypercall for remote TLB flushes rather
* than inter-processor interrupts
*/
#define HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED (1 << 2)
/*
* Recommend using MSRs for accessing APIC registers
* EOI, ICR and TPR rather than their memory-mapped counterparts
*/
#define HV_X64_APIC_ACCESS_RECOMMENDED (1 << 3)
/* Recommend using the hypervisor-provided MSR to initiate a system RESET */
#define HV_X64_SYSTEM_RESET_RECOMMENDED (1 << 4)
/*
* Recommend using relaxed timing for this partition. If used,
* the VM should disable any watchdog timeouts that rely on the
* timely delivery of external interrupts
*/
#define HV_X64_RELAXED_TIMING_RECOMMENDED (1 << 5)
/* MSR used to identify the guest OS. */
#define HV_X64_MSR_GUEST_OS_ID 0x40000000
/* MSR used to setup pages used to communicate with the hypervisor. */
#define HV_X64_MSR_HYPERCALL 0x40000001
/* MSR used to provide vcpu index */
#define HV_X64_MSR_VP_INDEX 0x40000002
/* Define the virtual APIC registers */
#define HV_X64_MSR_EOI 0x40000070
#define HV_X64_MSR_ICR 0x40000071
#define HV_X64_MSR_TPR 0x40000072
#define HV_X64_MSR_APIC_ASSIST_PAGE 0x40000073
/* Define synthetic interrupt controller model specific registers. */
#define HV_X64_MSR_SCONTROL 0x40000080
#define HV_X64_MSR_SVERSION 0x40000081
#define HV_X64_MSR_SIEFP 0x40000082
#define HV_X64_MSR_SIMP 0x40000083
#define HV_X64_MSR_EOM 0x40000084
#define HV_X64_MSR_SINT0 0x40000090
#define HV_X64_MSR_SINT1 0x40000091
#define HV_X64_MSR_SINT2 0x40000092
#define HV_X64_MSR_SINT3 0x40000093
#define HV_X64_MSR_SINT4 0x40000094
#define HV_X64_MSR_SINT5 0x40000095
#define HV_X64_MSR_SINT6 0x40000096
#define HV_X64_MSR_SINT7 0x40000097
#define HV_X64_MSR_SINT8 0x40000098
#define HV_X64_MSR_SINT9 0x40000099
#define HV_X64_MSR_SINT10 0x4000009A
#define HV_X64_MSR_SINT11 0x4000009B
#define HV_X64_MSR_SINT12 0x4000009C
#define HV_X64_MSR_SINT13 0x4000009D
#define HV_X64_MSR_SINT14 0x4000009E
#define HV_X64_MSR_SINT15 0x4000009F
#define HV_X64_MSR_HYPERCALL_ENABLE 0x00000001
#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT 12
#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_MASK \
(~((1ull << HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT) - 1))
/* Declare the various hypercall operations. */
#define HV_X64_HV_NOTIFY_LONG_SPIN_WAIT 0x0008
#define HV_X64_MSR_APIC_ASSIST_PAGE_ENABLE 0x00000001
#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT 12
#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_MASK \
(~((1ull << HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
#define HV_PROCESSOR_POWER_STATE_C0 0
#define HV_PROCESSOR_POWER_STATE_C1 1
#define HV_PROCESSOR_POWER_STATE_C2 2
#define HV_PROCESSOR_POWER_STATE_C3 3
/* hypercall status code */
#define HV_STATUS_SUCCESS 0
#define HV_STATUS_INVALID_HYPERCALL_CODE 2
#define HV_STATUS_INVALID_HYPERCALL_INPUT 3
#define HV_STATUS_INVALID_ALIGNMENT 4
#endif
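The new hyperv.h above only defines constants. As an illustration of how the hypercall MSR is meant to be consumed (a sketch of the guest/host contract, not code from this commit): the guest writes the guest-physical address of a hypercall page, plus the enable bit, into HV_X64_MSR_HYPERCALL, and the host recovers the page address with the shift/mask pair defined above.

    /* Illustrative only: deriving the hypercall page GPA from the MSR value. */
    static int hv_hypercall_page_gpa(unsigned long long msr_val,
                                     unsigned long long *gpa)
    {
            if (!(msr_val & HV_X64_MSR_HYPERCALL_ENABLE))
                    return -1;                               /* hypercall page disabled */
            *gpa = msr_val & HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_MASK;
            return 0;
    }

The same shift/mask idiom applies to HV_X64_MSR_APIC_ASSIST_PAGE via the _APIC_ASSIST_PAGE_* constants.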
...@@ -54,13 +54,23 @@ struct x86_emulate_ctxt; ...@@ -54,13 +54,23 @@ struct x86_emulate_ctxt;
struct x86_emulate_ops { struct x86_emulate_ops {
/* /*
* read_std: Read bytes of standard (non-emulated/special) memory. * read_std: Read bytes of standard (non-emulated/special) memory.
* Used for instruction fetch, stack operations, and others. * Used for descriptor reading.
* @addr: [IN ] Linear address from which to read. * @addr: [IN ] Linear address from which to read.
* @val: [OUT] Value read from memory, zero-extended to 'u_long'. * @val: [OUT] Value read from memory, zero-extended to 'u_long'.
* @bytes: [IN ] Number of bytes to read from memory. * @bytes: [IN ] Number of bytes to read from memory.
*/ */
int (*read_std)(unsigned long addr, void *val, int (*read_std)(unsigned long addr, void *val,
unsigned int bytes, struct kvm_vcpu *vcpu); unsigned int bytes, struct kvm_vcpu *vcpu, u32 *error);
/*
* fetch: Read bytes of standard (non-emulated/special) memory.
* Used for instruction fetch.
* @addr: [IN ] Linear address from which to read.
* @val: [OUT] Value read from memory, zero-extended to 'u_long'.
* @bytes: [IN ] Number of bytes to read from memory.
*/
int (*fetch)(unsigned long addr, void *val,
unsigned int bytes, struct kvm_vcpu *vcpu, u32 *error);
/* /*
* read_emulated: Read bytes from emulated/special memory area. * read_emulated: Read bytes from emulated/special memory area.
...@@ -74,7 +84,7 @@ struct x86_emulate_ops { ...@@ -74,7 +84,7 @@ struct x86_emulate_ops {
struct kvm_vcpu *vcpu); struct kvm_vcpu *vcpu);
/* /*
* write_emulated: Read bytes from emulated/special memory area. * write_emulated: Write bytes to emulated/special memory area.
* @addr: [IN ] Linear address to which to write. * @addr: [IN ] Linear address to which to write.
* @val: [IN ] Value to write to memory (low-order bytes used as * @val: [IN ] Value to write to memory (low-order bytes used as
* required). * required).
...@@ -168,6 +178,7 @@ struct x86_emulate_ctxt { ...@@ -168,6 +178,7 @@ struct x86_emulate_ctxt {
/* Execution mode, passed to the emulator. */ /* Execution mode, passed to the emulator. */
#define X86EMUL_MODE_REAL 0 /* Real mode. */ #define X86EMUL_MODE_REAL 0 /* Real mode. */
#define X86EMUL_MODE_VM86 1 /* Virtual 8086 mode. */
#define X86EMUL_MODE_PROT16 2 /* 16-bit protected mode. */ #define X86EMUL_MODE_PROT16 2 /* 16-bit protected mode. */
#define X86EMUL_MODE_PROT32 4 /* 32-bit protected mode. */ #define X86EMUL_MODE_PROT32 4 /* 32-bit protected mode. */
#define X86EMUL_MODE_PROT64 8 /* 64-bit (long) mode. */ #define X86EMUL_MODE_PROT64 8 /* 64-bit (long) mode. */
......
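The kvm_emulate.h change splits the old catch-all read_std callback: descriptor reads keep read_std, instruction bytes go through the new fetch hook, and both now report a fault code through the u32 *error out-parameter instead of failing silently. A hedged sketch of how a backend might wire these up; the backend_copy_from_guest() helper and the my_* names are invented for illustration:

    static int my_read_std(unsigned long addr, void *val, unsigned int bytes,
                           struct kvm_vcpu *vcpu, u32 *error)
    {
            /* translate with "read" access rights, report #PF code via *error */
            return backend_copy_from_guest(vcpu, addr, val, bytes, false, error);
    }

    static int my_fetch(unsigned long addr, void *val, unsigned int bytes,
                        struct kvm_vcpu *vcpu, u32 *error)
    {
            /* same copy, but translated with "fetch" access rights */
            return backend_copy_from_guest(vcpu, addr, val, bytes, true, error);
    }

    static struct x86_emulate_ops my_ops = {
            .read_std = my_read_std,
            .fetch    = my_fetch,
            /* .read_emulated, .write_emulated, ... as before */
    };

Separating fetch from data reads matters once translations honor access rights: an execute permission fault and a read fault must be distinguishable to the MMU.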
...@@ -25,7 +25,7 @@ ...@@ -25,7 +25,7 @@
#include <asm/mtrr.h> #include <asm/mtrr.h>
#include <asm/msr-index.h> #include <asm/msr-index.h>
#define KVM_MAX_VCPUS 16 #define KVM_MAX_VCPUS 64
#define KVM_MEMORY_SLOTS 32 #define KVM_MEMORY_SLOTS 32
/* memory slots that does not exposed to userspace */ /* memory slots that does not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 4 #define KVM_PRIVATE_MEM_SLOTS 4
...@@ -38,19 +38,6 @@ ...@@ -38,19 +38,6 @@
#define CR3_L_MODE_RESERVED_BITS (CR3_NONPAE_RESERVED_BITS | \ #define CR3_L_MODE_RESERVED_BITS (CR3_NONPAE_RESERVED_BITS | \
0xFFFFFF0000000000ULL) 0xFFFFFF0000000000ULL)
#define KVM_GUEST_CR0_MASK_UNRESTRICTED_GUEST \
(X86_CR0_WP | X86_CR0_NE | X86_CR0_NW | X86_CR0_CD)
#define KVM_GUEST_CR0_MASK \
(KVM_GUEST_CR0_MASK_UNRESTRICTED_GUEST | X86_CR0_PG | X86_CR0_PE)
#define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST \
(X86_CR0_WP | X86_CR0_NE | X86_CR0_TS | X86_CR0_MP)
#define KVM_VM_CR0_ALWAYS_ON \
(KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST | X86_CR0_PG | X86_CR0_PE)
#define KVM_GUEST_CR4_MASK \
(X86_CR4_VME | X86_CR4_PSE | X86_CR4_PAE | X86_CR4_PGE | X86_CR4_VMXE)
#define KVM_PMODE_VM_CR4_ALWAYS_ON (X86_CR4_PAE | X86_CR4_VMXE)
#define KVM_RMODE_VM_CR4_ALWAYS_ON (X86_CR4_VME | X86_CR4_PAE | X86_CR4_VMXE)
#define INVALID_PAGE (~(hpa_t)0) #define INVALID_PAGE (~(hpa_t)0)
#define UNMAPPED_GVA (~(gpa_t)0) #define UNMAPPED_GVA (~(gpa_t)0)
...@@ -256,7 +243,8 @@ struct kvm_mmu { ...@@ -256,7 +243,8 @@ struct kvm_mmu {
void (*new_cr3)(struct kvm_vcpu *vcpu); void (*new_cr3)(struct kvm_vcpu *vcpu);
int (*page_fault)(struct kvm_vcpu *vcpu, gva_t gva, u32 err); int (*page_fault)(struct kvm_vcpu *vcpu, gva_t gva, u32 err);
void (*free)(struct kvm_vcpu *vcpu); void (*free)(struct kvm_vcpu *vcpu);
gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t gva); gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t gva, u32 access,
u32 *error);
void (*prefetch_page)(struct kvm_vcpu *vcpu, void (*prefetch_page)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *page); struct kvm_mmu_page *page);
int (*sync_page)(struct kvm_vcpu *vcpu, int (*sync_page)(struct kvm_vcpu *vcpu,
...@@ -282,13 +270,15 @@ struct kvm_vcpu_arch { ...@@ -282,13 +270,15 @@ struct kvm_vcpu_arch {
u32 regs_dirty; u32 regs_dirty;
unsigned long cr0; unsigned long cr0;
unsigned long cr0_guest_owned_bits;
unsigned long cr2; unsigned long cr2;
unsigned long cr3; unsigned long cr3;
unsigned long cr4; unsigned long cr4;
unsigned long cr4_guest_owned_bits;
unsigned long cr8; unsigned long cr8;
u32 hflags; u32 hflags;
u64 pdptrs[4]; /* pae */ u64 pdptrs[4]; /* pae */
u64 shadow_efer; u64 efer;
u64 apic_base; u64 apic_base;
struct kvm_lapic *apic; /* kernel irqchip context */ struct kvm_lapic *apic; /* kernel irqchip context */
int32_t apic_arb_prio; int32_t apic_arb_prio;
...@@ -374,17 +364,27 @@ struct kvm_vcpu_arch { ...@@ -374,17 +364,27 @@ struct kvm_vcpu_arch {
/* used for guest single stepping over the given code position */ /* used for guest single stepping over the given code position */
u16 singlestep_cs; u16 singlestep_cs;
unsigned long singlestep_rip; unsigned long singlestep_rip;
/* fields used by HYPER-V emulation */
u64 hv_vapic;
}; };
struct kvm_mem_alias { struct kvm_mem_alias {
gfn_t base_gfn; gfn_t base_gfn;
unsigned long npages; unsigned long npages;
gfn_t target_gfn; gfn_t target_gfn;
#define KVM_ALIAS_INVALID 1UL
unsigned long flags;
}; };
struct kvm_arch{ #define KVM_ARCH_HAS_UNALIAS_INSTANTIATION
int naliases;
struct kvm_mem_aliases {
struct kvm_mem_alias aliases[KVM_ALIAS_SLOTS]; struct kvm_mem_alias aliases[KVM_ALIAS_SLOTS];
int naliases;
};
struct kvm_arch {
struct kvm_mem_aliases *aliases;
unsigned int n_free_mmu_pages; unsigned int n_free_mmu_pages;
unsigned int n_requested_mmu_pages; unsigned int n_requested_mmu_pages;
...@@ -416,6 +416,10 @@ struct kvm_arch{ ...@@ -416,6 +416,10 @@ struct kvm_arch{
s64 kvmclock_offset; s64 kvmclock_offset;
struct kvm_xen_hvm_config xen_hvm_config; struct kvm_xen_hvm_config xen_hvm_config;
/* fields used by HYPER-V emulation */
u64 hv_guest_os_id;
u64 hv_hypercall;
}; };
struct kvm_vm_stat { struct kvm_vm_stat {
...@@ -471,6 +475,7 @@ struct kvm_x86_ops { ...@@ -471,6 +475,7 @@ struct kvm_x86_ops {
int (*hardware_setup)(void); /* __init */ int (*hardware_setup)(void); /* __init */
void (*hardware_unsetup)(void); /* __exit */ void (*hardware_unsetup)(void); /* __exit */
bool (*cpu_has_accelerated_tpr)(void); bool (*cpu_has_accelerated_tpr)(void);
void (*cpuid_update)(struct kvm_vcpu *vcpu);
/* Create, but do not attach this VCPU */ /* Create, but do not attach this VCPU */
struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned id); struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned id);
...@@ -492,6 +497,7 @@ struct kvm_x86_ops { ...@@ -492,6 +497,7 @@ struct kvm_x86_ops {
void (*set_segment)(struct kvm_vcpu *vcpu, void (*set_segment)(struct kvm_vcpu *vcpu,
struct kvm_segment *var, int seg); struct kvm_segment *var, int seg);
void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l); void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l);
void (*decache_cr0_guest_bits)(struct kvm_vcpu *vcpu);
void (*decache_cr4_guest_bits)(struct kvm_vcpu *vcpu); void (*decache_cr4_guest_bits)(struct kvm_vcpu *vcpu);
void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0); void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
void (*set_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3); void (*set_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3);
...@@ -501,12 +507,13 @@ struct kvm_x86_ops { ...@@ -501,12 +507,13 @@ struct kvm_x86_ops {
void (*set_idt)(struct kvm_vcpu *vcpu, struct descriptor_table *dt); void (*set_idt)(struct kvm_vcpu *vcpu, struct descriptor_table *dt);
void (*get_gdt)(struct kvm_vcpu *vcpu, struct descriptor_table *dt); void (*get_gdt)(struct kvm_vcpu *vcpu, struct descriptor_table *dt);
void (*set_gdt)(struct kvm_vcpu *vcpu, struct descriptor_table *dt); void (*set_gdt)(struct kvm_vcpu *vcpu, struct descriptor_table *dt);
unsigned long (*get_dr)(struct kvm_vcpu *vcpu, int dr); int (*get_dr)(struct kvm_vcpu *vcpu, int dr, unsigned long *dest);
void (*set_dr)(struct kvm_vcpu *vcpu, int dr, unsigned long value, int (*set_dr)(struct kvm_vcpu *vcpu, int dr, unsigned long value);
int *exception);
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg); void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
unsigned long (*get_rflags)(struct kvm_vcpu *vcpu); unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags); void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
void (*fpu_activate)(struct kvm_vcpu *vcpu);
void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
void (*tlb_flush)(struct kvm_vcpu *vcpu); void (*tlb_flush)(struct kvm_vcpu *vcpu);
...@@ -531,7 +538,8 @@ struct kvm_x86_ops { ...@@ -531,7 +538,8 @@ struct kvm_x86_ops {
int (*set_tss_addr)(struct kvm *kvm, unsigned int addr); int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
int (*get_tdp_level)(void); int (*get_tdp_level)(void);
u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio); u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
bool (*gb_page_enable)(void); int (*get_lpage_level)(void);
bool (*rdtscp_supported)(void);
const struct trace_print_flags *exit_reasons_str; const struct trace_print_flags *exit_reasons_str;
}; };
...@@ -606,8 +614,7 @@ int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr, ...@@ -606,8 +614,7 @@ int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr,
unsigned long value); unsigned long value);
void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg); void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
int type_bits, int seg);
int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason); int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason);
...@@ -653,6 +660,10 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu); ...@@ -653,6 +660,10 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu);
int kvm_mmu_load(struct kvm_vcpu *vcpu); int kvm_mmu_load(struct kvm_vcpu *vcpu);
void kvm_mmu_unload(struct kvm_vcpu *vcpu); void kvm_mmu_unload(struct kvm_vcpu *vcpu);
void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu); void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu);
gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
int kvm_emulate_hypercall(struct kvm_vcpu *vcpu); int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
...@@ -666,6 +677,7 @@ void kvm_disable_tdp(void); ...@@ -666,6 +677,7 @@ void kvm_disable_tdp(void);
int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3); int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
int complete_pio(struct kvm_vcpu *vcpu); int complete_pio(struct kvm_vcpu *vcpu);
bool kvm_check_iopl(struct kvm_vcpu *vcpu);
struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn); struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn);
......
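The four kvm_mmu_gva_to_gpa_{read,fetch,write,system} helpers declared above fold the access rights into the translation instead of making every caller pass raw flags. A sketch of the intended calling pattern; kvm_inject_page_fault() and kvm_read_guest() are existing KVM helpers referenced here for illustration, not part of this hunk:

    static int emulator_read_guest(struct kvm_vcpu *vcpu, unsigned long gva,
                                   void *data, unsigned int bytes)
    {
            u32 error;
            gpa_t gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, &error);

            if (gpa == UNMAPPED_GVA) {
                    /* translation failed: reflect a page fault into the guest */
                    kvm_inject_page_fault(vcpu, gva, error);
                    return -EFAULT;
            }
            return kvm_read_guest(vcpu->kvm, gpa, data, bytes);
    }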
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
#define _ASM_X86_KVM_PARA_H #define _ASM_X86_KVM_PARA_H
#include <linux/types.h> #include <linux/types.h>
#include <asm/hyperv.h>
/* This CPUID returns the signature 'KVMKVMKVM' in ebx, ecx, and edx. It /* This CPUID returns the signature 'KVMKVMKVM' in ebx, ecx, and edx. It
* should be used to determine that a VM is running under KVM. * should be used to determine that a VM is running under KVM.
......
...@@ -313,7 +313,7 @@ struct __attribute__ ((__packed__)) vmcb { ...@@ -313,7 +313,7 @@ struct __attribute__ ((__packed__)) vmcb {
#define SVM_EXIT_ERR -1 #define SVM_EXIT_ERR -1
#define SVM_CR0_SELECTIVE_MASK (1 << 3 | 1) /* TS and MP */ #define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)
#define SVM_VMLOAD ".byte 0x0f, 0x01, 0xda" #define SVM_VMLOAD ".byte 0x0f, 0x01, 0xda"
#define SVM_VMRUN ".byte 0x0f, 0x01, 0xd8" #define SVM_VMRUN ".byte 0x0f, 0x01, 0xd8"
......
...@@ -53,6 +53,7 @@ ...@@ -53,6 +53,7 @@
*/ */
#define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES 0x00000001 #define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES 0x00000001
#define SECONDARY_EXEC_ENABLE_EPT 0x00000002 #define SECONDARY_EXEC_ENABLE_EPT 0x00000002
#define SECONDARY_EXEC_RDTSCP 0x00000008
#define SECONDARY_EXEC_ENABLE_VPID 0x00000020 #define SECONDARY_EXEC_ENABLE_VPID 0x00000020
#define SECONDARY_EXEC_WBINVD_EXITING 0x00000040 #define SECONDARY_EXEC_WBINVD_EXITING 0x00000040
#define SECONDARY_EXEC_UNRESTRICTED_GUEST 0x00000080 #define SECONDARY_EXEC_UNRESTRICTED_GUEST 0x00000080
...@@ -251,6 +252,7 @@ enum vmcs_field { ...@@ -251,6 +252,7 @@ enum vmcs_field {
#define EXIT_REASON_MSR_READ 31 #define EXIT_REASON_MSR_READ 31
#define EXIT_REASON_MSR_WRITE 32 #define EXIT_REASON_MSR_WRITE 32
#define EXIT_REASON_MWAIT_INSTRUCTION 36 #define EXIT_REASON_MWAIT_INSTRUCTION 36
#define EXIT_REASON_MONITOR_INSTRUCTION 39
#define EXIT_REASON_PAUSE_INSTRUCTION 40 #define EXIT_REASON_PAUSE_INSTRUCTION 40
#define EXIT_REASON_MCE_DURING_VMENTRY 41 #define EXIT_REASON_MCE_DURING_VMENTRY 41
#define EXIT_REASON_TPR_BELOW_THRESHOLD 43 #define EXIT_REASON_TPR_BELOW_THRESHOLD 43
...@@ -362,6 +364,7 @@ enum vmcs_field { ...@@ -362,6 +364,7 @@ enum vmcs_field {
#define VMX_EPTP_UC_BIT (1ull << 8) #define VMX_EPTP_UC_BIT (1ull << 8)
#define VMX_EPTP_WB_BIT (1ull << 14) #define VMX_EPTP_WB_BIT (1ull << 14)
#define VMX_EPT_2MB_PAGE_BIT (1ull << 16) #define VMX_EPT_2MB_PAGE_BIT (1ull << 16)
#define VMX_EPT_1GB_PAGE_BIT (1ull << 17)
#define VMX_EPT_EXTENT_INDIVIDUAL_BIT (1ull << 24) #define VMX_EPT_EXTENT_INDIVIDUAL_BIT (1ull << 24)
#define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25) #define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25)
#define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26) #define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26)
...@@ -374,7 +377,7 @@ enum vmcs_field { ...@@ -374,7 +377,7 @@ enum vmcs_field {
#define VMX_EPT_READABLE_MASK 0x1ull #define VMX_EPT_READABLE_MASK 0x1ull
#define VMX_EPT_WRITABLE_MASK 0x2ull #define VMX_EPT_WRITABLE_MASK 0x2ull
#define VMX_EPT_EXECUTABLE_MASK 0x4ull #define VMX_EPT_EXECUTABLE_MASK 0x4ull
#define VMX_EPT_IGMT_BIT (1ull << 6) #define VMX_EPT_IPAT_BIT (1ull << 6)
#define VMX_EPT_IDENTITY_PAGETABLE_ADDR 0xfffbc000ul #define VMX_EPT_IDENTITY_PAGETABLE_ADDR 0xfffbc000ul
......
...@@ -301,7 +301,8 @@ static int __init vsyscall_init(void) ...@@ -301,7 +301,8 @@ static int __init vsyscall_init(void)
register_sysctl_table(kernel_root_table2); register_sysctl_table(kernel_root_table2);
#endif #endif
on_each_cpu(cpu_vsyscall_init, NULL, 1); on_each_cpu(cpu_vsyscall_init, NULL, 1);
hotcpu_notifier(cpu_vsyscall_notifier, 0); /* notifier priority > KVM */
hotcpu_notifier(cpu_vsyscall_notifier, 30);
return 0; return 0;
} }
......
...@@ -29,6 +29,7 @@ config KVM ...@@ -29,6 +29,7 @@ config KVM
select HAVE_KVM_EVENTFD select HAVE_KVM_EVENTFD
select KVM_APIC_ARCHITECTURE select KVM_APIC_ARCHITECTURE
select USER_RETURN_NOTIFIER select USER_RETURN_NOTIFIER
select KVM_MMIO
---help--- ---help---
Support hosting fully virtualized guest machines using hardware Support hosting fully virtualized guest machines using hardware
virtualization extensions. You will need a fairly recent virtualization extensions. You will need a fairly recent
......
[One file diff is collapsed in the web view and not shown here.]
...@@ -242,11 +242,11 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian) ...@@ -242,11 +242,11 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
{ {
struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state, struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state,
irq_ack_notifier); irq_ack_notifier);
spin_lock(&ps->inject_lock); raw_spin_lock(&ps->inject_lock);
if (atomic_dec_return(&ps->pit_timer.pending) < 0) if (atomic_dec_return(&ps->pit_timer.pending) < 0)
atomic_inc(&ps->pit_timer.pending); atomic_inc(&ps->pit_timer.pending);
ps->irq_ack = 1; ps->irq_ack = 1;
spin_unlock(&ps->inject_lock); raw_spin_unlock(&ps->inject_lock);
} }
void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu) void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
...@@ -605,7 +605,7 @@ static const struct kvm_io_device_ops speaker_dev_ops = { ...@@ -605,7 +605,7 @@ static const struct kvm_io_device_ops speaker_dev_ops = {
.write = speaker_ioport_write, .write = speaker_ioport_write,
}; };
/* Caller must have writers lock on slots_lock */ /* Caller must hold slots_lock */
struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags) struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
{ {
struct kvm_pit *pit; struct kvm_pit *pit;
...@@ -624,7 +624,7 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags) ...@@ -624,7 +624,7 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
mutex_init(&pit->pit_state.lock); mutex_init(&pit->pit_state.lock);
mutex_lock(&pit->pit_state.lock); mutex_lock(&pit->pit_state.lock);
spin_lock_init(&pit->pit_state.inject_lock); raw_spin_lock_init(&pit->pit_state.inject_lock);
kvm->arch.vpit = pit; kvm->arch.vpit = pit;
pit->kvm = kvm; pit->kvm = kvm;
...@@ -645,13 +645,13 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags) ...@@ -645,13 +645,13 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
kvm_register_irq_mask_notifier(kvm, 0, &pit->mask_notifier); kvm_register_irq_mask_notifier(kvm, 0, &pit->mask_notifier);
kvm_iodevice_init(&pit->dev, &pit_dev_ops); kvm_iodevice_init(&pit->dev, &pit_dev_ops);
ret = __kvm_io_bus_register_dev(&kvm->pio_bus, &pit->dev); ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, &pit->dev);
if (ret < 0) if (ret < 0)
goto fail; goto fail;
if (flags & KVM_PIT_SPEAKER_DUMMY) { if (flags & KVM_PIT_SPEAKER_DUMMY) {
kvm_iodevice_init(&pit->speaker_dev, &speaker_dev_ops); kvm_iodevice_init(&pit->speaker_dev, &speaker_dev_ops);
ret = __kvm_io_bus_register_dev(&kvm->pio_bus, ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS,
&pit->speaker_dev); &pit->speaker_dev);
if (ret < 0) if (ret < 0)
goto fail_unregister; goto fail_unregister;
...@@ -660,11 +660,12 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags) ...@@ -660,11 +660,12 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
return pit; return pit;
fail_unregister: fail_unregister:
__kvm_io_bus_unregister_dev(&kvm->pio_bus, &pit->dev); kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->dev);
fail: fail:
if (pit->irq_source_id >= 0) kvm_unregister_irq_mask_notifier(kvm, 0, &pit->mask_notifier);
kvm_free_irq_source_id(kvm, pit->irq_source_id); kvm_unregister_irq_ack_notifier(kvm, &pit_state->irq_ack_notifier);
kvm_free_irq_source_id(kvm, pit->irq_source_id);
kfree(pit); kfree(pit);
return NULL; return NULL;
...@@ -723,12 +724,12 @@ void kvm_inject_pit_timer_irqs(struct kvm_vcpu *vcpu) ...@@ -723,12 +724,12 @@ void kvm_inject_pit_timer_irqs(struct kvm_vcpu *vcpu)
/* Try to inject pending interrupts when /* Try to inject pending interrupts when
* last one has been acked. * last one has been acked.
*/ */
spin_lock(&ps->inject_lock); raw_spin_lock(&ps->inject_lock);
if (atomic_read(&ps->pit_timer.pending) && ps->irq_ack) { if (atomic_read(&ps->pit_timer.pending) && ps->irq_ack) {
ps->irq_ack = 0; ps->irq_ack = 0;
inject = 1; inject = 1;
} }
spin_unlock(&ps->inject_lock); raw_spin_unlock(&ps->inject_lock);
if (inject) if (inject)
__inject_pit_timer_intr(kvm); __inject_pit_timer_intr(kvm);
} }
......
...@@ -27,7 +27,7 @@ struct kvm_kpit_state { ...@@ -27,7 +27,7 @@ struct kvm_kpit_state {
u32 speaker_data_on; u32 speaker_data_on;
struct mutex lock; struct mutex lock;
struct kvm_pit *pit; struct kvm_pit *pit;
spinlock_t inject_lock; raw_spinlock_t inject_lock;
unsigned long irq_ack; unsigned long irq_ack;
struct kvm_irq_ack_notifier irq_ack_notifier; struct kvm_irq_ack_notifier irq_ack_notifier;
}; };
......
...@@ -44,18 +44,19 @@ static void pic_clear_isr(struct kvm_kpic_state *s, int irq) ...@@ -44,18 +44,19 @@ static void pic_clear_isr(struct kvm_kpic_state *s, int irq)
* Other interrupt may be delivered to PIC while lock is dropped but * Other interrupt may be delivered to PIC while lock is dropped but
* it should be safe since PIC state is already updated at this stage. * it should be safe since PIC state is already updated at this stage.
*/ */
spin_unlock(&s->pics_state->lock); raw_spin_unlock(&s->pics_state->lock);
kvm_notify_acked_irq(s->pics_state->kvm, SELECT_PIC(irq), irq); kvm_notify_acked_irq(s->pics_state->kvm, SELECT_PIC(irq), irq);
spin_lock(&s->pics_state->lock); raw_spin_lock(&s->pics_state->lock);
} }
void kvm_pic_clear_isr_ack(struct kvm *kvm) void kvm_pic_clear_isr_ack(struct kvm *kvm)
{ {
struct kvm_pic *s = pic_irqchip(kvm); struct kvm_pic *s = pic_irqchip(kvm);
spin_lock(&s->lock);
raw_spin_lock(&s->lock);
s->pics[0].isr_ack = 0xff; s->pics[0].isr_ack = 0xff;
s->pics[1].isr_ack = 0xff; s->pics[1].isr_ack = 0xff;
spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
} }
/* /*
...@@ -156,9 +157,9 @@ static void pic_update_irq(struct kvm_pic *s) ...@@ -156,9 +157,9 @@ static void pic_update_irq(struct kvm_pic *s)
void kvm_pic_update_irq(struct kvm_pic *s) void kvm_pic_update_irq(struct kvm_pic *s)
{ {
spin_lock(&s->lock); raw_spin_lock(&s->lock);
pic_update_irq(s); pic_update_irq(s);
spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
} }
int kvm_pic_set_irq(void *opaque, int irq, int level) int kvm_pic_set_irq(void *opaque, int irq, int level)
...@@ -166,14 +167,14 @@ int kvm_pic_set_irq(void *opaque, int irq, int level) ...@@ -166,14 +167,14 @@ int kvm_pic_set_irq(void *opaque, int irq, int level)
struct kvm_pic *s = opaque; struct kvm_pic *s = opaque;
int ret = -1; int ret = -1;
spin_lock(&s->lock); raw_spin_lock(&s->lock);
if (irq >= 0 && irq < PIC_NUM_PINS) { if (irq >= 0 && irq < PIC_NUM_PINS) {
ret = pic_set_irq1(&s->pics[irq >> 3], irq & 7, level); ret = pic_set_irq1(&s->pics[irq >> 3], irq & 7, level);
pic_update_irq(s); pic_update_irq(s);
trace_kvm_pic_set_irq(irq >> 3, irq & 7, s->pics[irq >> 3].elcr, trace_kvm_pic_set_irq(irq >> 3, irq & 7, s->pics[irq >> 3].elcr,
s->pics[irq >> 3].imr, ret == 0); s->pics[irq >> 3].imr, ret == 0);
} }
spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
return ret; return ret;
} }
...@@ -203,7 +204,7 @@ int kvm_pic_read_irq(struct kvm *kvm) ...@@ -203,7 +204,7 @@ int kvm_pic_read_irq(struct kvm *kvm)
int irq, irq2, intno; int irq, irq2, intno;
struct kvm_pic *s = pic_irqchip(kvm); struct kvm_pic *s = pic_irqchip(kvm);
spin_lock(&s->lock); raw_spin_lock(&s->lock);
irq = pic_get_irq(&s->pics[0]); irq = pic_get_irq(&s->pics[0]);
if (irq >= 0) { if (irq >= 0) {
pic_intack(&s->pics[0], irq); pic_intack(&s->pics[0], irq);
...@@ -228,7 +229,7 @@ int kvm_pic_read_irq(struct kvm *kvm) ...@@ -228,7 +229,7 @@ int kvm_pic_read_irq(struct kvm *kvm)
intno = s->pics[0].irq_base + irq; intno = s->pics[0].irq_base + irq;
} }
pic_update_irq(s); pic_update_irq(s);
spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
return intno; return intno;
} }
...@@ -442,7 +443,7 @@ static int picdev_write(struct kvm_io_device *this, ...@@ -442,7 +443,7 @@ static int picdev_write(struct kvm_io_device *this,
printk(KERN_ERR "PIC: non byte write\n"); printk(KERN_ERR "PIC: non byte write\n");
return 0; return 0;
} }
spin_lock(&s->lock); raw_spin_lock(&s->lock);
switch (addr) { switch (addr) {
case 0x20: case 0x20:
case 0x21: case 0x21:
...@@ -455,7 +456,7 @@ static int picdev_write(struct kvm_io_device *this, ...@@ -455,7 +456,7 @@ static int picdev_write(struct kvm_io_device *this,
elcr_ioport_write(&s->pics[addr & 1], addr, data); elcr_ioport_write(&s->pics[addr & 1], addr, data);
break; break;
} }
spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
return 0; return 0;
} }
...@@ -472,7 +473,7 @@ static int picdev_read(struct kvm_io_device *this, ...@@ -472,7 +473,7 @@ static int picdev_read(struct kvm_io_device *this,
printk(KERN_ERR "PIC: non byte read\n"); printk(KERN_ERR "PIC: non byte read\n");
return 0; return 0;
} }
spin_lock(&s->lock); raw_spin_lock(&s->lock);
switch (addr) { switch (addr) {
case 0x20: case 0x20:
case 0x21: case 0x21:
...@@ -486,7 +487,7 @@ static int picdev_read(struct kvm_io_device *this, ...@@ -486,7 +487,7 @@ static int picdev_read(struct kvm_io_device *this,
break; break;
} }
*(unsigned char *)val = data; *(unsigned char *)val = data;
spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
return 0; return 0;
} }
...@@ -520,7 +521,7 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm) ...@@ -520,7 +521,7 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm)
s = kzalloc(sizeof(struct kvm_pic), GFP_KERNEL); s = kzalloc(sizeof(struct kvm_pic), GFP_KERNEL);
if (!s) if (!s)
return NULL; return NULL;
spin_lock_init(&s->lock); raw_spin_lock_init(&s->lock);
s->kvm = kvm; s->kvm = kvm;
s->pics[0].elcr_mask = 0xf8; s->pics[0].elcr_mask = 0xf8;
s->pics[1].elcr_mask = 0xde; s->pics[1].elcr_mask = 0xde;
...@@ -533,7 +534,9 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm) ...@@ -533,7 +534,9 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm)
* Initialize PIO device * Initialize PIO device
*/ */
kvm_iodevice_init(&s->dev, &picdev_ops); kvm_iodevice_init(&s->dev, &picdev_ops);
ret = kvm_io_bus_register_dev(kvm, &kvm->pio_bus, &s->dev); mutex_lock(&kvm->slots_lock);
ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, &s->dev);
mutex_unlock(&kvm->slots_lock);
if (ret < 0) { if (ret < 0) {
kfree(s); kfree(s);
return NULL; return NULL;
...@@ -541,3 +544,14 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm) ...@@ -541,3 +544,14 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm)
return s; return s;
} }
void kvm_destroy_pic(struct kvm *kvm)
{
struct kvm_pic *vpic = kvm->arch.vpic;
if (vpic) {
kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &vpic->dev);
kvm->arch.vpic = NULL;
kfree(vpic);
}
}
...@@ -62,7 +62,7 @@ struct kvm_kpic_state { ...@@ -62,7 +62,7 @@ struct kvm_kpic_state {
}; };
struct kvm_pic { struct kvm_pic {
spinlock_t lock; raw_spinlock_t lock;
unsigned pending_acks; unsigned pending_acks;
struct kvm *kvm; struct kvm *kvm;
struct kvm_kpic_state pics[2]; /* 0 is master pic, 1 is slave pic */ struct kvm_kpic_state pics[2]; /* 0 is master pic, 1 is slave pic */
...@@ -75,6 +75,7 @@ struct kvm_pic { ...@@ -75,6 +75,7 @@ struct kvm_pic {
}; };
struct kvm_pic *kvm_create_pic(struct kvm *kvm); struct kvm_pic *kvm_create_pic(struct kvm *kvm);
void kvm_destroy_pic(struct kvm *kvm);
int kvm_pic_read_irq(struct kvm *kvm); int kvm_pic_read_irq(struct kvm *kvm);
void kvm_pic_update_irq(struct kvm_pic *s); void kvm_pic_update_irq(struct kvm_pic *s);
void kvm_pic_clear_isr_ack(struct kvm *kvm); void kvm_pic_clear_isr_ack(struct kvm *kvm);
......
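The PIT and PIC locks in the hunks above move from spinlock_t to raw_spinlock_t, with the matching raw_spin_lock_init()/raw_spin_lock()/raw_spin_unlock() calls. These locks are taken on interrupt-injection paths that must never sleep; a raw_spinlock_t keeps plain busy-wait semantics even on PREEMPT_RT, where ordinary spinlocks become sleeping locks. A minimal, self-contained sketch of the same conversion pattern (the structure and functions are made up for the example):

#include <linux/spinlock.h>

struct example_irq_state {
        raw_spinlock_t lock;            /* was: spinlock_t lock; */
        int pending;
};

static void example_irq_state_init(struct example_irq_state *s)
{
        raw_spin_lock_init(&s->lock);   /* was: spin_lock_init() */
        s->pending = 0;
}

static void example_irq_ack(struct example_irq_state *s)
{
        raw_spin_lock(&s->lock);        /* stays a spinning lock on -rt */
        if (s->pending > 0)
                s->pending--;
        raw_spin_unlock(&s->lock);
}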
#ifndef ASM_KVM_CACHE_REGS_H #ifndef ASM_KVM_CACHE_REGS_H
#define ASM_KVM_CACHE_REGS_H #define ASM_KVM_CACHE_REGS_H
#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
#define KVM_POSSIBLE_CR4_GUEST_BITS \
(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
| X86_CR4_OSXMMEXCPT | X86_CR4_PGE)
static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
enum kvm_reg reg) enum kvm_reg reg)
{ {
...@@ -38,4 +43,30 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index) ...@@ -38,4 +43,30 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
return vcpu->arch.pdptrs[index]; return vcpu->arch.pdptrs[index];
} }
static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
{
ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
if (tmask & vcpu->arch.cr0_guest_owned_bits)
kvm_x86_ops->decache_cr0_guest_bits(vcpu);
return vcpu->arch.cr0 & mask;
}
static inline ulong kvm_read_cr0(struct kvm_vcpu *vcpu)
{
return kvm_read_cr0_bits(vcpu, ~0UL);
}
static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
{
ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
if (tmask & vcpu->arch.cr4_guest_owned_bits)
kvm_x86_ops->decache_cr4_guest_bits(vcpu);
return vcpu->arch.cr4 & mask;
}
static inline ulong kvm_read_cr4(struct kvm_vcpu *vcpu)
{
return kvm_read_cr4_bits(vcpu, ~0UL);
}
#endif #endif
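The accessors added above let common KVM code read CR0/CR4 without caring which bits the vendor module currently leaves in the guest's hands: kvm_read_cr0_bits() invokes the decache_cr0_guest_bits() hook only when one of the requested bits is in cr0_guest_owned_bits, and otherwise answers from the cached vcpu->arch.cr0. A hypothetical caller (the function name is invented; X86_CR0_TS comes from <asm/processor-flags.h>):

/* Illustration only: is CR0.TS currently set from the guest's viewpoint? */
static bool example_guest_cr0_ts(struct kvm_vcpu *vcpu)
{
        /*
         * Only TS is requested, so the vendor decache callback runs only
         * if TS happens to be guest-owned; all other cached bits are
         * returned untouched.
         */
        return kvm_read_cr0_bits(vcpu, X86_CR0_TS) != 0;
}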
...@@ -1246,3 +1246,34 @@ int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data) ...@@ -1246,3 +1246,34 @@ int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
return 0; return 0;
} }
int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 reg, u64 data)
{
struct kvm_lapic *apic = vcpu->arch.apic;
if (!irqchip_in_kernel(vcpu->kvm))
return 1;
/* if this is ICR write vector before command */
if (reg == APIC_ICR)
apic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
return apic_reg_write(apic, reg, (u32)data);
}
int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
{
struct kvm_lapic *apic = vcpu->arch.apic;
u32 low, high = 0;
if (!irqchip_in_kernel(vcpu->kvm))
return 1;
if (apic_reg_read(apic, reg, 4, &low))
return 1;
if (reg == APIC_ICR)
apic_reg_read(apic, APIC_ICR2, 4, &high);
*data = (((u64)high) << 32) | low;
return 0;
}
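kvm_hv_vapic_msr_write() and kvm_hv_vapic_msr_read() above back the Hyper-V vAPIC-assist MSRs: a 64-bit write aimed at the ICR first stores the destination half into APIC_ICR2, so the subsequent low-half write to APIC_ICR fires the IPI with the full destination already in place. The actual MSR dispatch lives in the collapsed x86.c diff; the routine below is only an illustrative sketch of that wiring, assuming the HV_X64_MSR_EOI/ICR/TPR constants from the newly included <asm/hyperv.h>:

/* Illustration only; not the handler from the collapsed x86.c diff. */
static int example_hv_vapic_wrmsr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
{
        switch (msr) {
        case HV_X64_MSR_EOI:
                return kvm_hv_vapic_msr_write(vcpu, APIC_EOI, data);
        case HV_X64_MSR_ICR:
                return kvm_hv_vapic_msr_write(vcpu, APIC_ICR, data);
        case HV_X64_MSR_TPR:
                return kvm_hv_vapic_msr_write(vcpu, APIC_TASKPRI, data);
        default:
                return 1;       /* not a vAPIC-assist MSR */
        }
}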
[21 further file diffs are collapsed in the web view and not shown here.]