1. 30 Apr, 2019 1 commit
    • KVM: x86: Whitelist port 0x7e for pre-incrementing %rip · 8764ed55
      Sean Christopherson authored
      KVM's recent bug fix to update %rip after emulating I/O broke userspace
      that relied on the previous behavior of incrementing %rip prior to
      exiting to userspace.  When running a Windows XP guest on AMD hardware,
      Qemu may patch "OUT 0x7E" instructions in reaction to the OUT itself.
      Because KVM's old behavior was to increment %rip before exiting to
      userspace to handle the I/O, Qemu manually adjusted %rip to account for
      the OUT instruction.
      
      Arguably this is a userspace bug as KVM requires userspace to re-enter
      the kernel to complete instruction emulation before taking any other
      actions.  That being said, this is a bit of a grey area and breaking
      userspace that has worked for many years is bad.
      
      Pre-increment %rip on OUT to port 0x7e before exiting to userspace to
      hack around the issue.
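
      A toy model of the %rip bookkeeping involved (plain C with a made-up
      address and an assumed rewind-by-length adjustment in userspace; only
      the two-byte "out 0x7e, al" encoding is architectural), showing why the
      old and new conventions disagree:

        #include <stdio.h>
        #include <stdint.h>

        #define OUT_IMM8_LEN 2  /* "out 0x7e, al" encodes as E6 7E: two bytes */

        int main(void)
        {
            uint64_t rip_of_out = 0x1000;   /* guest %rip of the OUT instruction */

            /* Old KVM: %rip already points past the OUT when userspace sees the
             * exit, so rewinding by the instruction length finds the OUT. */
            uint64_t old_exit_rip = rip_of_out + OUT_IMM8_LEN;
            printf("old behavior: patch target %#llx\n",
                   (unsigned long long)(old_exit_rip - OUT_IMM8_LEN));

            /* After commit 45def77e, %rip is not advanced until userspace
             * re-enters KVM, so the same rewind lands before the OUT. */
            uint64_t new_exit_rip = rip_of_out;
            printf("new behavior: patch target %#llx (wrong)\n",
                   (unsigned long long)(new_exit_rip - OUT_IMM8_LEN));
            return 0;
        }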
      
      Fixes: 45def77e ("KVM: x86: update %rip after emulating IO")
      Reported-by: Simon Becherer <simon@becherer.de>
      Reported-and-tested-by: Iakov Karpov <srid@rkmail.ru>
      Reported-by: Gabriele Balducci <balducci@units.it>
      Reported-by: Antti Antinoja <reader@fennosys.fi>
      Cc: stable@vger.kernel.org
      Cc: Takashi Iwai <tiwai@suse.com>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8764ed55
  2. 29 Apr, 2019 1 commit
  3. 27 Apr, 2019 1 commit
    • KVM: VMX: Move RSB stuffing to before the first RET after VM-Exit · f2fde6a5
      Rick Edgecombe authored
      The not-so-recent change to move VMX's VM-Exit handling to a dedicated
      "function" unintentionally exposed KVM to a speculative attack from the
      guest by executing a RET prior to stuffing the RSB.  Make RSB stuffing
      happen immediately after VM-Exit, before any unpaired returns.
      
      Alternatively, the VM-Exit path could postpone full RSB stuffing until
      its current location by stuffing the RSB only as needed, or by avoiding
      returns in the VM-Exit path entirely, but both alternatives are beyond
      ugly since vmx_vmexit() has multiple indirect callers (by way of
      vmx_vmenter()).  And putting the RSB stuffing immediately after VM-Exit
      makes it much less likely to be re-broken in the future.
      
      Note, the cost of PUSH/POP could be avoided in the normal flow by
      pairing the PUSH RAX with the POP RAX in __vmx_vcpu_run() and adding a
      POP to nested_vmx_check_vmentry_hw(), but such a weird/subtle
      dependency is likely to cause problems in the long run, and PUSH/POP
      will take all of a few cycles, which is peanuts compared to the number
      of cycles required to fill the RSB.
      
      Fixes: 453eafbe ("KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines")
      Reported-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f2fde6a5
  4. 18 Apr, 2019 7 commits
    • KVM: lapic: Convert guest TSC to host time domain if necessary · b6aa57c6
      Sean Christopherson authored
      To minimize the latency of timer interrupts as observed by the guest,
      KVM adjusts the values it programs into the host timers to account for
      the host's overhead of programming and handling the timer event.  In
      the event that the adjustments are too aggressive, i.e. the timer fires
      earlier than the guest expects, KVM busy waits immediately prior to
      entering the guest.
      
      Currently, KVM manually converts the delay from nanoseconds to clock
      cycles.  But, the conversion is done in the guest's time domain, while
      the delay occurs in the host's time domain.  This is perfectly ok when
      the guest and host are using the same TSC ratio, but if the guest is
      using a different ratio then the delay may not be accurate and could
      wait too little or too long.
      
      When the guest is not using the host's ratio, convert the delay from
      guest clock cycles to host nanoseconds and use ndelay() instead of
      __delay() to provide more accurate timing.  Because converting to
      nanoseconds is relatively expensive, e.g. requires division and more
      multiplication ops, continue using __delay() directly when guest and
      host TSCs are running at the same ratio.
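
      Toy arithmetic (made-up frequencies, not KVM code) showing how a delay
      converted in the guest's time domain skews the busy wait when the guest
      TSC is scaled relative to the host:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint64_t delay_ns      = 1000;     /* intended busy-wait, in host ns */
            uint64_t guest_tsc_khz = 4000000;  /* guest sees a scaled 4 GHz TSC */
            uint64_t host_tsc_khz  = 2000000;  /* host TSC actually runs at 2 GHz */

            /* Convert ns -> cycles with the guest's ratio, then spin on the host
             * TSC: 4000 "guest" cycles take 2000 ns of real (host) time. */
            uint64_t cycles       = delay_ns * guest_tsc_khz / 1000000;
            uint64_t real_wait_ns = cycles * 1000000 / host_tsc_khz;

            printf("wanted %llu ns, waited %llu ns\n",
                   (unsigned long long)delay_ns, (unsigned long long)real_wait_ns);
            return 0;
        }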
      
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Cc: stable@vger.kernel.org
      Fixes: 3b8a5df6 ("KVM: LAPIC: Tune lapic_timer_advance_ns automatically")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b6aa57c6
    • KVM: lapic: Allow user to disable adaptive tuning of timer advancement · c3941d9e
      Sean Christopherson authored
      The introduction of adaptive tuning of lapic timer advancement did not
      allow for the scenario where userspace would want to disable adaptive
      tuning but still employ timer advancement, e.g. for testing purposes or
      to handle a use case where adaptive tuning is unable to settle on a
      suitable time.  This is especially pertinent now that KVM places a hard
      threshold on the maximum advancement time.
      
      Rework the timer semantics to accept signed values, with a value of '-1'
      being interpreted as "use adaptive tuning with KVM's internal default",
      and any other value being used as an explicit advancement time, e.g. a
      time of '0' effectively disables advancement.
      
      Note, this does not completely restore the original behavior of
      lapic_timer_advance_ns.  Prior to tracking the advancement per vCPU,
      which is necessary to support autotuning, userspace could adjust
      lapic_timer_advance_ns for a *running* vCPU.  With per-vCPU tracking, the
      module params are snapshotted at vCPU creation, i.e. applying a new
      advancement effectively requires restarting a VM.
      
      Dynamically updating a running vCPU is possible, e.g. a helper could be
      added to retrieve the desired delay, choosing between the global module
      param and the per-vCPU value depending on whether or not auto-tuning is
      (globally) enabled, but introduces a great deal of complexity.  The
      wrapper itself is not complex, but understanding and documenting the
      effects of dynamically toggling auto-tuning and/or adjusting the timer
      advancement is nigh impossible since the behavior would be dependent on
      KVM's implementation as well as compiler optimizations.  In other words,
      providing stable behavior would require extremely careful consideration
      now and in the future.
      
      Given that the expected use of a manually-tuned timer advancement is to
      "tune once, run many", use the vastly simpler approach of recognizing
      changes to the module params only when creating a new vCPU.
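
      A minimal sketch of the parameter semantics described above, applied once
      at vCPU creation; the names and the default value are illustrative, not
      the upstream code:

        #include <stdbool.h>
        #include <stdint.h>

        #define LAPIC_TIMER_ADVANCE_ADAPTIVE (-1)
        #define LAPIC_TIMER_ADVANCE_DEFAULT  1000   /* ns; illustrative starting point */

        struct timer_advance {
            uint32_t ns;        /* snapshot taken at vCPU creation */
            bool     adaptive;  /* whether autotuning may adjust it later */
        };

        static struct timer_advance timer_advance_from_param(int64_t param)
        {
            struct timer_advance ta;

            if (param == LAPIC_TIMER_ADVANCE_ADAPTIVE) {
                /* -1: adaptive tuning with KVM's internal default. */
                ta.ns = LAPIC_TIMER_ADVANCE_DEFAULT;
                ta.adaptive = true;
            } else {
                /* Any other value is used as-is; 0 disables advancement. */
                ta.ns = (uint32_t)param;
                ta.adaptive = false;
            }
            return ta;
        }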
      
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Reviewed-by: Liran Alon <liran.alon@oracle.com>
      Cc: stable@vger.kernel.org
      Fixes: 3b8a5df6 ("KVM: LAPIC: Tune lapic_timer_advance_ns automatically")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c3941d9e
    • KVM: lapic: Track lapic timer advance per vCPU · 39497d76
      Sean Christopherson authored
      Automatically adjusting the globally-shared timer advancement could
      corrupt the timer, e.g. if multiple vCPUs are concurrently adjusting
      the advancement value.  That could be partially fixed by using a local
      variable for the arithmetic, but it would still be susceptible to a
      race when setting timer_advance_adjust_done.
      
      And because virtual_tsc_khz and tsc_scaling_ratio are per-vCPU, the
      correct calibration for a given vCPU may not apply to all vCPUs.
      
      Furthermore, lapic_timer_advance_ns is marked __read_mostly, which is
      effectively violated when finding a stable advancement takes an extended
      amount of time.
      
      Opportunistically change the definition of lapic_timer_advance_ns to
      a u32 so that it matches the style of struct kvm_timer.  Explicitly
      pass the param to kvm_create_lapic() so that it doesn't have to be
      exposed to lapic.c, thus reducing the probability of unintentionally
      using the global value instead of the per-vCPU value.
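
      A minimal sketch of the per-vCPU snapshot; the structure and field names
      are illustrative stand-ins for KVM's, not the actual definitions:

        #include <stdbool.h>
        #include <stdint.h>

        /* Global module parameter, read once per vCPU at creation time. */
        static uint32_t lapic_timer_advance_ns = 1000;

        struct kvm_timer_sketch {
            uint32_t timer_advance_ns;           /* private, tuned per vCPU */
            bool     timer_advance_adjust_done;  /* private autotuning state */
        };

        static void lapic_timer_init(struct kvm_timer_sketch *t)
        {
            /* Snapshot the global; later autotuning only touches this copy,
             * so vCPUs can no longer race on a shared value. */
            t->timer_advance_ns = lapic_timer_advance_ns;
            t->timer_advance_adjust_done = false;
        }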
      
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Reviewed-by: Liran Alon <liran.alon@oracle.com>
      Cc: stable@vger.kernel.org
      Fixes: 3b8a5df6 ("KVM: LAPIC: Tune lapic_timer_advance_ns automatically")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      39497d76
    • KVM: lapic: Disable timer advancement if adaptive tuning goes haywire · 57bf67e7
      Sean Christopherson authored
      To minimize the latency of timer interrupts as observed by the guest,
      KVM adjusts the values it programs into the host timers to account for
      the host's overhead of programming and handling the timer event.  Now
      that the timer advancement is automatically tuned during runtime, it's
      effectively unbounded by default, e.g. if KVM is running as L1 the
      advancement can measure in hundreds of milliseconds.
      
      Disable timer advancement if adaptive tuning yields an advancement of
      more than 5000ns, as large advancements can break reasonable assumptions
      of the guest, e.g. that a timer configured to fire after 1ms won't
      arrive on the next instruction.  Although KVM busy waits to mitigate the
      case of a timer event arriving too early, complications can arise when
      shifting the interrupt too far, e.g. kvm-unit-test's vmx.interrupt test
      will fail when its "host" exits on interrupts as KVM may inject the INTR
      before the guest executes STI+HLT.  Arguably the unit test is "broken"
      in the sense that delaying a timer interrupt by 1ms doesn't technically
      guarantee the interrupt will arrive after STI+HLT, but it's a reasonable
      assumption that KVM should support.
      
      Furthermore, an unbounded advancement also effectively unbounds the time
      spent busy waiting, e.g. if the guest programs a timer with a very large
      delay.
      
      5000ns is a somewhat arbitrary threshold.  When running on bare metal,
      which is the intended use case, timer advancement is expected to be in
      the general vicinity of 1000ns.  5000ns is high enough that false
      positives are unlikely, while not being so high as to negatively affect
      the host's performance/stability.
      
      Note, a future patch will enable userspace to disable KVM's adaptive
      tuning, which will allow privileged userspace to specify an advancement
      value in excess of this arbitrary threshold in order to satisfy an
      abnormal use case.
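
      A minimal sketch of the safety valve, with an illustrative helper name
      and the 5000ns threshold taken from this commit message:

        #include <stdint.h>

        #define LAPIC_TIMER_ADVANCE_ADJUST_MAX 5000u   /* ns */

        static uint32_t sanitize_timer_advance(uint32_t tuned_ns)
        {
            /* An advancement this large means calibration went haywire
             * (e.g. running as L1); disable advancement entirely rather
             * than shift interrupts unreasonably far forward. */
            return tuned_ns > LAPIC_TIMER_ADVANCE_ADJUST_MAX ? 0 : tuned_ns;
        }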
      
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Cc: stable@vger.kernel.org
      Fixes: 3b8a5df6 ("KVM: LAPIC: Tune lapic_timer_advance_ns automatically")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      57bf67e7
    • x86: kvm: hyper-v: deal with buggy TLB flush requests from WS2012 · da66761c
      Vitaly Kuznetsov authored
      It was reported that with some special Multi Processor Group configuration,
      e.g.:
       bcdedit.exe /set groupsize 1
       bcdedit.exe /set maxgroup on
       bcdedit.exe /set groupaware on
      a 16-vCPU WS2012 guest shows a BSOD on boot when the PV TLB flush mechanism
      is in use.
      
      Tracing kvm_hv_flush_tlb immediately reveals the issue:
      
       kvm_hv_flush_tlb: processor_mask 0x0 address_space 0x0 flags 0x2
      
      The only flag set in this request is HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES,
      however, processor_mask is 0x0 and no HV_FLUSH_ALL_PROCESSORS is specified.
      We don't flush anything and apparently it's not what Windows expects.
      
      The TLFS doesn't say anything about such requests and newer Windows versions
      seem to be unaffected.  This all feels like a WS2012 bug, which is, however,
      easy to work around in KVM: let's flush everything when we see an empty
      flush request; over-flushing doesn't hurt.
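
      A minimal sketch of the workaround; the flag value follows the Hyper-V
      TLFS, while the helper itself is illustrative rather than KVM's actual
      code:

        #include <stdbool.h>
        #include <stdint.h>

        #define HV_FLUSH_ALL_PROCESSORS 0x1ull

        static bool hv_flush_all_cpus(uint64_t processor_mask, uint64_t flags)
        {
            if (flags & HV_FLUSH_ALL_PROCESSORS)
                return true;
            /* WS2012 quirk: an empty mask still means "everything";
             * over-flushing is harmless, under-flushing is not. */
            return processor_mask == 0;
        }
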
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      da66761c
    • KVM: x86: Consider LAPIC TSC-Deadline timer expired if deadline too short · c09d65d9
      Liran Alon authored
      If the guest sets MSR_IA32_TSCDEADLINE to a value such that, in the host
      time domain, the deadline is less than lapic_timer_advance_ns away, we
      can end up calling hrtimer_start() with an expiration time in the past.
      
      Because lapic_timer.timer is initialized with HRTIMER_MODE_ABS_PINNED, it
      is not allowed to run in softirq context and therefore will never expire.
      
      To avoid such a scenario, verify that the deadline expiration time, in the
      host time domain, is further out than (now + lapic_timer_advance_ns).
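
      A minimal sketch of the check, with an illustrative helper name; KVM's
      actual code operates on its own timer structures:

        #include <stdbool.h>
        #include <stdint.h>

        static bool tscdeadline_already_due(uint64_t deadline_host_ns,
                                            uint64_t now_host_ns,
                                            uint64_t timer_advance_ns)
        {
            /* Anything closer than the advancement window would be armed
             * in the past once the advancement is subtracted; treat it as
             * already expired instead of starting the hrtimer. */
            return deadline_host_ns <= now_host_ns + timer_advance_ns;
        }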
      
      A future patch could also add a min_timer_deadline_ns module parameter,
      similar to min_timer_period_us, to close the remaining race where the
      time it takes to run this logic still results in hrtimer_start() being
      called with an expiration time in the past.
      Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c09d65d9
    • Merge tag 'kvm-ppc-fixes-5.1-1' of... · 78671ab4
      Paolo Bonzini authored
      Merge tag 'kvm-ppc-fixes-5.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD
      
      KVM/PPC fixes for 5.1
      
      - Fix host hang in the HTM assist code for POWER9
      - Take srcu read lock around memslot lookup
      78671ab4
  5. 16 Apr, 2019 20 commits
    • KVM: x86: avoid misreporting level-triggered irqs as edge-triggered in tracing · 7a223e06
      Vitaly Kuznetsov authored
      In the __apic_accept_irq() interface trig_mode is an int, and on some code
      paths it is set to a value that does not fit in a u8:
      
      kvm_apic_set_irq() extracts it from 'struct kvm_lapic_irq' where trig_mode
      is u16. This is done on purpose as e.g. kvm_set_msi_irq() sets it to
      (1 << 15) & e->msi.data
      
      kvm_apic_local_deliver sets it to reg & (1 << 15).
      
      Fix the immediate issue by making 'tm' a u16.  We may also want to adjust
      the __apic_accept_irq() interface and use proper sizes for vector, level,
      and trig_mode, but this is not urgent.
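
      A self-contained illustration of the truncation: the level-trigger bit is
      bit 15, so an 8-bit field silently drops it while a u16 preserves it:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint32_t msi_data  = 1u << 15;            /* level-triggered MSI */
            int      trig_mode = msi_data & (1u << 15);

            uint8_t  tm_u8  = (uint8_t)trig_mode;     /* old field: always 0 */
            uint16_t tm_u16 = (uint16_t)trig_mode;    /* fixed field: keeps bit 15 */

            printf("u8 = %u (looks edge-triggered), u16 = %u (level)\n",
                   tm_u8, tm_u16);
            return 0;
        }
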
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7a223e06
    • KVM: fix spectrev1 gadgets · 1d487e9b
      Paolo Bonzini authored
      These were found with smatch, and then generalized when applicable.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1d487e9b
    • KVM: x86: fix warning Using plain integer as NULL pointer · be43c440
      Hariprasad Kelam authored
      Change the argument passed from 0 to NULL, which resolves the sparse warning below:
      
      arch/x86/kvm/x86.c:3096:61: warning: Using plain integer as NULL pointer
      Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      be43c440
    • selftests: kvm: add a selftest for SMM · 79904c9d
      Vitaly Kuznetsov authored
      Add a simple test for SMM, based on VMX.  The test implements its own
      sync between the guest and the host, as using our ucall library would be
      too cumbersome: the SMI handler runs in real-address mode.
      
      This patch also fixes KVM_SET_NESTED_STATE to happen after
      KVM_SET_VCPU_EVENTS; in fact it places it last.  This is because
      KVM needs to know whether the processor is in SMM or not.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      79904c9d
    • selftests: kvm: fix for compilers that do not support -no-pie · c2390f16
      Paolo Bonzini authored
      -no-pie was added to GCC at the same time as their configuration option
      --enable-default-pie.  Compilers built before that do not have -no-pie,
      but they also do not need it.  Detect the option at build time.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c2390f16
    • selftests: kvm/evmcs_test: complete I/O before migrating guest state · c68c21ca
      Paolo Bonzini authored
      Starting state migration after an IO exit without first completing IO
      may result in test failures.  We already have two tests that need this
      (this patch in fact fixes evmcs_test, similar to what was fixed for
      state_test in commit 0f73bbc8, "KVM: selftests: complete IO before
      migrating guest state", 2019-03-13) and a third is coming.  So, move the
      code to vcpu_save_state, and while at it do not access register state
      until after I/O is complete.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c68c21ca
    • KVM: x86: Always use 32-bit SMRAM save state for 32-bit kernels · b68f3cc7
      Sean Christopherson authored
      Invoking the 64-bit variation on a 32-bit kernel will crash the guest,
      trigger a WARN, and/or lead to a buffer overrun in the host, e.g.
      rsm_load_state_64() writes r8-r15 unconditionally, but enum kvm_reg and
      thus x86_emulate_ctxt._regs only define r8-r15 for CONFIG_X86_64.
      
      KVM allows userspace to report long mode support via CPUID, even though
      the guest is all but guaranteed to crash if it actually tries to enable
      long mode.  But, a pure 32-bit guest that is ignorant of long mode will
      happily plod along.
      
      SMM complicates things as 64-bit CPUs use a different SMRAM save state
      area.  KVM handles this correctly for 64-bit kernels, e.g. uses the
      legacy save state map if userspace has hidden long mode from the guest,
      but doesn't fare well when userspace reports long mode support on a
      32-bit host kernel (32-bit KVM doesn't support 64-bit guests).
      
      Since the alternative is to crash the guest, e.g. by not loading state
      or explicitly requesting shutdown, unconditionally use the legacy SMRAM
      save state map for 32-bit KVM.  If a guest has managed to get far enough
      to handle SMIs when running under a weird/buggy userspace hypervisor,
      then don't deliberately crash the guest since there are no downsides
      (from KVM's perspective) to allowing it to continue running.
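
      A minimal sketch of the resulting decision; the structure and helper are
      illustrative stand-ins, not the upstream diff:

        #include <stdbool.h>

        /* Illustrative stand-in for KVM internals. */
        struct vcpu_sketch { bool guest_cpuid_has_lm; };

        static bool use_64bit_smram_map(const struct vcpu_sketch *vcpu)
        {
        #ifdef CONFIG_X86_64
            /* 64-bit KVM: honor what userspace advertised via CPUID. */
            return vcpu->guest_cpuid_has_lm;
        #else
            /* 32-bit KVM cannot run 64-bit guests; always take the legacy
             * save state map instead of writing r8-r15 that do not exist. */
            (void)vcpu;
            return false;
        #endif
        }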
      
      Fixes: 660a5d51 ("KVM: x86: save/load state on SMM switch")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b68f3cc7
    • KVM: x86: Don't clear EFER during SMM transitions for 32-bit vCPU · 8f4dc2e7
      Sean Christopherson authored
      Neither AMD nor Intel CPUs have an EFER field in the legacy SMRAM save
      state area, i.e. don't save/restore EFER across SMM transitions.  KVM
      somewhat models this, e.g. doesn't clear EFER on entry to SMM if the
      guest doesn't support long mode.  But during RSM, KVM unconditionally
      clears EFER so that it can get back to pure 32-bit mode in order to
      start loading CRs with their actual non-SMM values.
      
      Clear EFER only when it will be written when loading the non-SMM state
      so as to preserve bits that can theoretically be set on 32-bit vCPUs,
      e.g. KVM always emulates EFER_SCE.
      
      And because CR4.PAE is cleared only to play nice with EFER, wrap that
      code in the long mode check as well.  Note, this may result in a
      compiler warning about cr4 being consumed uninitialized.  Re-read CR4
      even though it's technically unnecessary, as doing so allows for more
      readable code and RSM emulation is not a performance critical path.
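
      A minimal sketch of the reordered logic with illustrative types and
      names; only the X86_CR4_PAE and EFER_SCE bit positions are architectural:

        #include <stdbool.h>
        #include <stdint.h>

        #define X86_CR4_PAE  (1u << 5)
        #define EFER_SCE     (1ull << 0)

        struct rsm_regs { uint64_t efer; uint32_t cr4; };

        static void rsm_prepare_nonsmm_load(struct rsm_regs *r, bool has_long_mode)
        {
            if (has_long_mode) {
                /* Drop PAE and EFER so CRs can be loaded from 32-bit mode
                 * first; the real EFER comes from the SMRAM image later. */
                r->cr4 &= ~X86_CR4_PAE;
                r->efer = 0;
            }
            /* 32-bit vCPU: leave EFER (e.g. EFER_SCE) untouched. */
        }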
      
      Fixes: 660a5d51 ("KVM: x86: save/load state on SMM switch")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8f4dc2e7
    • KVM: x86: clear SMM flags before loading state while leaving SMM · 9ec19493
      Sean Christopherson authored
      RSM emulation is currently broken on VMX when the interrupted guest has
      CR4.VMXE=1.  Stop dancing around the issue of HF_SMM_MASK being set when
      loading SMSTATE into architectural state, e.g. by toggling it for
      problematic flows, and simply clear HF_SMM_MASK prior to loading
      architectural state (from SMRAM save state area).
      Reported-by: Jon Doron <arilou@gmail.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Fixes: 5bea5123 ("KVM: VMX: check nested state and CR4.VMXE against SMM")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9ec19493
    • KVM: x86: Open code kvm_set_hflags · c5833c7a
      Sean Christopherson authored
      Prepare for clearing HF_SMM_MASK prior to loading state from the SMRAM
      save state map, i.e. kvm_smm_changed() needs to be called after state
      has been loaded and so cannot be done automatically when setting
      hflags from RSM.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c5833c7a
    • KVM: x86: Load SMRAM in a single shot when leaving SMM · ed19321f
      Sean Christopherson authored
      RSM emulation is currently broken on VMX when the interrupted guest has
      CR4.VMXE=1.  Rather than dance around the issue of HF_SMM_MASK being set
      when loading SMSTATE into architectural state, ideally RSM emulation
      itself would be reworked to clear HF_SMM_MASK prior to loading non-SMM
      architectural state.
      
      Ostensibly, the only motivation for having HF_SMM_MASK set throughout
      the loading of state from the SMRAM save state area is so that the
      memory accesses from GET_SMSTATE() are tagged with role.smm.  Load
      all of the SMRAM save state area from guest memory at the beginning of
      RSM emulation, and load state from the buffer instead of reading guest
      memory one-by-one.
      
      This paves the way for clearing HF_SMM_MASK prior to loading state,
      and also aligns RSM with the enter_smm() behavior, which fills a
      buffer and writes SMRAM save state in a single go.
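
      A minimal sketch of the single-shot load, assuming the standard layout in
      which the save state area occupies the 512 bytes at smbase + 0xfe00; the
      read primitive is an illustrative stand-in, not KVM's emulator hook:

        #include <stdint.h>

        #define SMRAM_SAVE_STATE_SIZE 512   /* bytes at smbase + 0xfe00 */

        /* Illustrative stand-in for a guest-memory read primitive. */
        typedef int (*read_guest_fn)(uint64_t gpa, void *dst, unsigned long len);

        static int load_smram_save_state(read_guest_fn read_guest, uint64_t smbase,
                                         uint8_t buf[SMRAM_SAVE_STATE_SIZE])
        {
            /* One bulk read replaces the per-field GET_SMSTATE() accesses;
             * later parsing works purely on the local buffer. */
            return read_guest(smbase + 0xfe00, buf, SMRAM_SAVE_STATE_SIZE);
        }
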
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ed19321f
    • KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU · e51bfdb6
      Liran Alon authored
      Issue was discovered when running kvm-unit-tests on KVM running as L1 on
      top of Hyper-V.
      
      When the vmx_instruction_intercept unit-test attempts to run RDPMC to test
      RDPMC-exiting, it is intercepted by L1 KVM, whose EXIT_REASON_RDPMC handler
      raises #GP because the vCPU exposed by Hyper-V doesn't support a PMU,
      instead of the exit being reflected to the unit-test as EXIT_REASON_RDPMC
      as it expects.
      
      The unit-test attempts to run RDPMC even though Hyper-V doesn't support a
      PMU because L1 exposes RDPMC-exiting support to L2, and it is reasonable
      to assume that RDPMC-exiting is supported only when the CPU supports a PMU
      to begin with.
      
      The above issue can easily be reproduced by modifying the
      vmx_instruction_intercept config in x86/unittests.cfg to run QEMU with
      "-cpu host,+vmx,-pmu" and running the unit-test.
      
      To handle the issue, change KVM to expose RDPMC-exiting only when the
      guest supports a PMU.
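
      A minimal sketch of the idea; CPU_BASED_RDPMC_EXITING is the architectural
      control bit, while the helper is illustrative and not the upstream diff:

        #include <stdbool.h>
        #include <stdint.h>

        #define CPU_BASED_RDPMC_EXITING 0x00000800u  /* primary proc-based control, bit 11 */

        /* Compute the RDPMC-exiting bit advertised to L1 in the nested VMX
         * capability MSRs. */
        static uint32_t nested_rdpmc_exiting_bit(bool vcpu_has_pmu)
        {
            /* Only advertise RDPMC-exiting if the vCPU actually has a PMU,
             * matching the assumption made by guests (and kvm-unit-tests). */
            return vcpu_has_pmu ? CPU_BASED_RDPMC_EXITING : 0;
        }
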
      Reported-by: Saar Amar <saaramar@microsoft.com>
      Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e51bfdb6
    • KVM: x86: Raise #GP when guest vCPU do not support PMU · 672ff6cf
      Liran Alon authored
      Before this change, reading a VMware pseudo PMC would succeed even when a
      PMU is not supported by the guest.  This can easily be seen by running the
      kvm-unit-test vmware_backdoors with the "-cpu host,-pmu" option.
      Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      672ff6cf
    • x86/kvm: move kvm_load/put_guest_xcr0 into atomic context · 1811d979
      WANG Chao authored
      Guest xcr0 can leak into the host when an MCE happens in guest mode,
      because do_machine_check() can schedule out at a few places.
      
      For example:
      
      kvm_load_guest_xcr0
      ...
      kvm_x86_ops->run(vcpu) {
        vmx_vcpu_run
          vmx_complete_atomic_exit
            kvm_machine_check
              do_machine_check
                do_memory_failure
                  memory_failure
                    lock_page
      
      In this case, host_xcr0 is 0x2ff and the guest vcpu xcr0 is 0xff.  After
      scheduling out, the host CPU has the guest xcr0 loaded (0xff).
      
      In __switch_to {
           switch_fpu_finish
             copy_kernel_to_fpregs
               XRSTORS
      
      If any bit i has XSTATE_BV[i] == 1 and xcr0[i] == 0, XRSTORS will
      generate #GP (in this case, bit 9).  Then ex_handler_fprestore kicks in
      and tries to reinitialize the FPU by restoring the init FPU state.  Same
      story as the last #GP, except we get a DOUBLE FAULT this time.
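
      A self-contained illustration of the fault condition using the values
      from the report above (bit 9 is the one the guest never enabled):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint64_t host_xcr0  = 0x2ff;       /* host enables bit 9 as well */
            uint64_t guest_xcr0 = 0x0ff;       /* guest never enabled bit 9 */
            uint64_t xstate_bv  = host_xcr0;   /* host FPU image tracks host features */

            /* XRSTORS of the host image while the guest xcr0 is still live: */
            uint64_t faulting_bits = xstate_bv & ~guest_xcr0;
            printf("XSTATE_BV bits not enabled in XCR0: %#llx -> #GP\n",
                   (unsigned long long)faulting_bits);
            return 0;
        }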
      
      Cc: stable@vger.kernel.org
      Signed-off-by: WANG Chao <chao.wang@ucloud.cn>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1811d979
    • KVM: x86: svm: make sure NMI is injected after nmi_singlestep · 99c22179
      Vitaly Kuznetsov authored
      I noticed that the apic test from kvm-unit-tests always hangs on my EPYC
      7401P; the hanging subtest, nmi-after-sti, tries to deliver 30000 NMIs, and
      tracing shows that we're sometimes able to deliver a few but never all.
      
      When we're trying to inject an NMI, we may fail to do so immediately for
      various reasons; however, we still need to inject it, so enable_nmi_window()
      arms nmi_singlestep mode.  #DB occurs as expected, but we're not checking
      for pending NMIs before entering the guest and unless there's a different
      event to process, the NMI will never get delivered.
      
      Make KVM_REQ_EVENT request on the vCPU from db_interception() to make sure
      pending NMIs are checked and possibly injected.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      99c22179
    • svm/avic: Fix invalidate logical APIC id entry · e44e3eac
      Suthikulpanit, Suravee authored
      Only clear the valid bit when invalidating a logical APIC id entry.
      The current logic clears the valid bit, but also sets the rest of
      the bits (including reserved bits) to 1.
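
      A self-contained illustration of the difference; the valid-bit position
      matches the AVIC logical APIC ID entry definition, while the backing ID
      value is made up:

        #include <stdio.h>
        #include <stdint.h>

        #define AVIC_LOGICAL_ID_ENTRY_VALID_MASK (1u << 31)

        int main(void)
        {
            uint32_t entry = AVIC_LOGICAL_ID_ENTRY_VALID_MASK | 0x23;  /* valid + ID bits */

            uint32_t buggy = ~AVIC_LOGICAL_ID_ENTRY_VALID_MASK;         /* every other bit set */
            uint32_t fixed = entry & ~AVIC_LOGICAL_ID_ENTRY_VALID_MASK; /* only valid bit cleared */

            printf("buggy invalidate: %#x, fixed invalidate: %#x\n", buggy, fixed);
            return 0;
        }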
      
      Fixes: 98d90582 ('svm: Fix AVIC DFR and LDR handling')
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e44e3eac
    • Revert "svm: Fix AVIC incomplete IPI emulation" · 4a58038b
      Suthikulpanit, Suravee authored
      This reverts commit bb218fbc.
      
      As Oren Twaig pointed out in the old discussion:
      
        https://patchwork.kernel.org/patch/8292231/
      
      that the change could potentially cause an extra IPI to be sent to the
      destination vcpu, because the AVIC hardware already set the IRR bit before
      the incomplete-IPI #VMEXIT with id=1 (target vcpu is not running), and
      writing to ICR and ICR2 will also set the IRR.  If something triggers the
      destination vcpu to get scheduled before the emulation finishes, this
      could result in an additional IPI.
      
      Also, the issue mentioned in commit bb218fbc was misdiagnosed.
      
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Reported-by: Oren Twaig <oren@scalemp.com>
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4a58038b
    • kvm: mmu: Fix overflow on kvm mmu page limit calculation · bc8a3d89
      Ben Gardon authored
      KVM bases its memory usage limits on the total number of guest pages
      across all memslots. However, those limits, and the calculations to
      produce them, use 32 bit unsigned integers. This can result in overflow
      if a VM has more guest pages than can be represented by a u32. As a
      result of this overflow, KVM can use a low limit on the number of MMU
      pages it will allocate. This makes KVM unable to map all of guest memory
      at once, prompting spurious faults.
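
      A self-contained illustration of the overflow; KVM_PERMILLE_MMU_PAGES
      mirrors the constant KVM uses to derive the MMU page limit, while the
      guest size is made up:

        #include <stdio.h>
        #include <stdint.h>

        #define KVM_PERMILLE_MMU_PAGES 20   /* limit = pages * 20 / 1000 */

        int main(void)
        {
            uint64_t guest_bytes = 17ull << 40;        /* 17 TiB of guest memory */
            uint64_t pages64     = guest_bytes >> 12;  /* 4 KiB pages: > 2^32 of them */
            uint32_t pages32     = (uint32_t)pages64;  /* what a u32 count retains */

            printf("u64 page count %llu -> mmu page limit %llu\n",
                   (unsigned long long)pages64,
                   (unsigned long long)(pages64 * KVM_PERMILLE_MMU_PAGES / 1000));
            printf("u32 page count %u -> mmu page limit %llu (far too low)\n",
                   pages32,
                   (unsigned long long)((uint64_t)pages32 * KVM_PERMILLE_MMU_PAGES / 1000));
            return 0;
        }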
      
      Tested: Ran all kvm-unit-tests on an Intel Haswell machine. This patch
      	introduced no new failures.
      Signed-off-by: Ben Gardon <bgardon@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      bc8a3d89
    • KVM: nVMX: always use early vmcs check when EPT is disabled · 2b27924b
      Paolo Bonzini authored
      The remaining failures of vmx.flat when EPT is disabled are caused by
      incorrectly reflecting VMfails to the L1 hypervisor.  What happens is
      that nested_vmx_restore_host_state corrupts the guest CR3, reloading it
      with the host's shadow CR3 instead, because it blindly loads GUEST_CR3
      from the vmcs01.
      
      For simplicity let's just always use hardware VMCS checks when EPT is
      disabled.  This way, nested_vmx_restore_host_state is not reached at
      all (or at least shouldn't be reached).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2b27924b
    • KVM: nVMX: allow tests to use bad virtual-APIC page address · 69090810
      Paolo Bonzini authored
      As mentioned in the comment, there are some special cases where we can simply
      clear the TPR shadow bit from the CPU-based execution controls in the vmcs02.
      Handle them so that we can remove some XFAILs from vmx.flat.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      69090810
  6. 15 Apr, 2019 1 commit
    • KVM: x86/mmu: Fix an inverted list_empty() check when zapping sptes · cfd32acf
      Sean Christopherson authored
      A recently introduced helper for handling zap vs. remote flush
      incorrectly bails early, effectively leaking defunct shadow pages.
      Manifests as a slab BUG when exiting KVM due to the shadow pages
      being alive when their associated cache is destroyed.
      
      ==========================================================================
      BUG kvm_mmu_page_header: Objects remaining in kvm_mmu_page_header on ...
      --------------------------------------------------------------------------
      Disabling lock debugging due to kernel taint
      INFO: Slab 0x00000000fc436387 objects=26 used=23 fp=0x00000000d023caee ...
      CPU: 6 PID: 4315 Comm: rmmod Tainted: G    B             5.1.0-rc2+ #19
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
      Call Trace:
       dump_stack+0x46/0x5b
       slab_err+0xad/0xd0
       ? on_each_cpu_mask+0x3c/0x50
       ? ksm_migrate_page+0x60/0x60
       ? on_each_cpu_cond_mask+0x7c/0xa0
       ? __kmalloc+0x1ca/0x1e0
       __kmem_cache_shutdown+0x13a/0x310
       shutdown_cache+0xf/0x130
       kmem_cache_destroy+0x1d5/0x200
       kvm_mmu_module_exit+0xa/0x30 [kvm]
       kvm_arch_exit+0x45/0x60 [kvm]
       kvm_exit+0x6f/0x80 [kvm]
       vmx_exit+0x1a/0x50 [kvm_intel]
       __x64_sys_delete_module+0x153/0x1f0
       ? exit_to_usermode_loop+0x88/0xc0
       do_syscall_64+0x4f/0x100
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Fixes: a2113634 ("KVM: x86/mmu: Split remote_flush+zap case out of kvm_mmu_flush_or_zap()")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cfd32acf
  7. 10 Apr, 2019 3 commits
  8. 09 Apr, 2019 3 commits
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · 869e3305
      Linus Torvalds authored
      Pull networking fixes from David Miller:
      
       1) Off by one and bounds checking fixes in NFC, from Dan Carpenter.
      
       2) There have been many weird regressions in r8169 since we turned ASPM
          support on, some are still not understood nor completely resolved.
          Let's turn this back off for now. From Heiner Kallweit.
      
       3) Signedness fixes for ethtool speed value handling, from Michael
          Zhivich.
      
       4) Handle timestamps properly in macb driver, from Paul Thomas.
      
       5) Two erspan fixes, it's the usual "skb ->data potentially reallocated
          and we're holding a stale protocol header pointer". From Lorenzo
          Bianconi.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
        bnxt_en: Reset device on RX buffer errors.
        bnxt_en: Improve RX consumer index validity check.
        net: macb driver, check for SKBTX_HW_TSTAMP
        qlogic: qlcnic: fix use of SPEED_UNKNOWN ethtool constant
        broadcom: tg3: fix use of SPEED_UNKNOWN ethtool constant
        ethtool: avoid signed-unsigned comparison in ethtool_validate_speed()
        net: ip6_gre: fix possible use-after-free in ip6erspan_rcv
        net: ip_gre: fix possible use-after-free in erspan_rcv
        r8169: disable ASPM again
        MAINTAINERS: ieee802154: update documentation file pattern
        net: vrf: Fix ping failed when vrf mtu is set to 0
        selftests: add a tc matchall test case
        nfc: nci: Potential off by one in ->pipes[] array
        NFC: nci: Add some bounds checking in nci_hci_cmd_received()
      869e3305
    • Merge branch 'fixes-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security · a556810d
      Linus Torvalds authored
      Pull TPM fixes from James Morris:
       "From Jarkko: These are critical fixes for v5.1. Contains also couple
        of new selftests for v5.1 features (partial reads in /dev/tpm0)"
      
      * 'fixes-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
        selftests/tpm2: Open tpm dev in unbuffered mode
        selftests/tpm2: Extend tests to cover partial reads
        KEYS: trusted: fix -Wvarags warning
        tpm: Fix the type of the return value in calc_tpm2_event_size()
        KEYS: trusted: allow trusted.ko to initialize w/o a TPM
        tpm: fix an invalid condition in tpm_common_poll
        tpm: turn on TPM on suspend for TPM 1.x
      a556810d
    • Merge tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa · 10d43397
      Linus Torvalds authored
      Pull xtensa fixes from Max Filippov:
      
       - fix syscall number passed to trace_sys_exit
      
       - fix syscall number initialization in start_thread
      
       - fix level interpretation in the return_address
      
       - fix format string warning in init_pmd
      
      * tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa:
        xtensa: fix format string warning in init_pmd
        xtensa: fix return_address
        xtensa: fix initialization of pt_regs::syscall in start_thread
        xtensa: use actual syscall number in do_syscall_trace_leave
      10d43397
  9. 08 Apr, 2019 3 commits