Commit e7517324 authored by Wanpeng Li, committed by Paolo Bonzini

KVM: X86: Fix fpu state crash in kvm guest

The idea before commit 240c35a3 (which has just been reverted)
was that we have the following FPU states:

               userspace (QEMU)             guest
---------------------------------------------------------------------------
               processor                    vcpu->arch.guest_fpu
>>> KVM_RUN: kvm_load_guest_fpu
               vcpu->arch.user_fpu          processor
>>> preempt out
               vcpu->arch.user_fpu          current->thread.fpu
>>> preempt in
               vcpu->arch.user_fpu          processor
>>> back to userspace
>>> kvm_put_guest_fpu
               processor                    vcpu->arch.guest_fpu
---------------------------------------------------------------------------

With the new lazy model we want to get the state back to the processor
when scheduling in, restoring it from current->thread.fpu.
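
For reference, the swap described by the table above is implemented by the
kvm_load_guest_fpu()/kvm_put_guest_fpu() pair in arch/x86/kvm/x86.c. Below is
a minimal sketch of that pair, assuming the 5.2-era FPU helpers
copy_fpregs_to_fpstate() and copy_kernel_to_fpregs(); the real functions
additionally hold fpregs_lock(), restore PKRU separately, call
fpregs_mark_activate(), and emit tracepoints:

	#include <linux/kvm_host.h>
	#include <asm/fpu/internal.h>

	/* KVM_RUN: save QEMU's user state, put the guest state on the CPU. */
	static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
	{
		copy_fpregs_to_fpstate(vcpu->arch.user_fpu);
		copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state);
	}

	/* Back to userspace: save the guest state, restore QEMU's state. */
	static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
	{
		copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
		copy_kernel_to_fpregs(&vcpu->arch.user_fpu->state);
	}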
Reported-by: Thomas Lambertz <mail@thomaslambertz.de>
Reported-by: anthony <antdev66@gmail.com>
Tested-by: anthony <antdev66@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Thomas Lambertz <mail@thomaslambertz.de>
Cc: anthony <antdev66@gmail.com>
Cc: stable@vger.kernel.org
Fixes: 5f409e20 ("x86/fpu: Defer FPU state load until return to userspace")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Add a comment in front of the warning. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent ec269475
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3306,6 +3306,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_x86_ops->vcpu_load(vcpu, cpu);
 
+	fpregs_assert_state_consistent();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+
 	/* Apply any externally detected TSC adjustments (due to suspend) */
 	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
 		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
@@ -7990,9 +7994,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	trace_kvm_entry(vcpu->vcpu_id);
 	guest_enter_irqoff();
 
-	fpregs_assert_state_consistent();
-	if (test_thread_flag(TIF_NEED_FPU_LOAD))
-		switch_fpu_return();
+	/* The preempt notifier should have taken care of the FPU already. */
+	WARN_ON_ONCE(test_thread_flag(TIF_NEED_FPU_LOAD));
 
 	if (unlikely(vcpu->arch.switch_db_regs)) {
 		set_debugreg(0, 7);
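
Why the WARN_ON_ONCE() in vcpu_enter_guest() is now safe: when the vCPU task
is scheduled back in, KVM's preempt notifier calls kvm_arch_vcpu_load(), which
with this patch performs the deferred reload itself. A hedged sketch of that
path, simplified from kvm_sched_in() in virt/kvm/kvm_main.c (the real function
also clears vcpu->preempted and calls kvm_arch_sched_in()):

	static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
	{
		struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

		/*
		 * kvm_arch_vcpu_load() now runs the hunk added above: if
		 * TIF_NEED_FPU_LOAD is set, switch_fpu_return() reloads the
		 * state from current->thread.fpu, so the flag is already
		 * clear when vcpu_enter_guest() checks it with IRQs off.
		 */
		kvm_arch_vcpu_load(vcpu, cpu);
	}

Doing the reload at sched-in keeps the irqs-disabled entry path free of the
FPU reload and turns the old in-loop fixup into a pure consistency check.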