1. 09 Feb, 2015 1 commit
  2. 06 Feb, 2015 1 commit
      kvm: add halt_poll_ns module parameter · f7819512
      Paolo Bonzini authored
      This patch introduces a new module parameter for the KVM module; when it
      is present, KVM attempts a bit of polling on every HLT before scheduling
      itself out via kvm_vcpu_block.
      
      This parameter helps a lot for latency-bound workloads---in particular
      I tested it with O_DSYNC writes with a battery-backed disk in the host.
      In this case, writes are fast (because the data doesn't have to go all
      the way to the platters) but they cannot be merged by either the host or
      the guest.  KVM's performance here is usually around 30% of bare metal,
      or 50% if you use cache=directsync or cache=writethrough (these
      settings prevent the guest from sending pointless flush requests,
      while remaining fast thanks to the battery-backed cache).
      The bad performance happens because on every halt the host CPU decides
      to halt itself too.  When the interrupt comes, the vCPU thread is then
      migrated to a new physical CPU, and in general the latency is horrible
      because the vCPU thread has to be scheduled back in.
      
      With this patch performance reaches 60-65% of bare metal and, more
      importantly, 99% of what you get if you use idle=poll in the guest.  This
      means that the tunable gets rid of this particular bottleneck, and more
      work can be done to improve performance in the kernel or QEMU.
      
      Of course there is a price to pay; every time an otherwise idle vCPU
      is woken by an interrupt, it will have polled unnecessarily and thus
      imposed a small load on the host.  The above results were obtained
      with a mostly arbitrary value of the parameter (500000), and the load
      was around 1.5-2.5% CPU usage on one of the host's cores for each
      idle guest vCPU.
      
      The patch also adds a new stat, /sys/kernel/debug/kvm/halt_successful_poll,
      that can be used to tune the parameter.  It counts how many HLT
      instructions received an interrupt during the polling period; each
      successful poll avoids having Linux schedule the vCPU thread out and
      back in, and may also avoid a likely trip to C1 and back for the
      physical CPU.
      
      While idle, a 4-vCPU Linux VM halts around 10 times per second.
      Of these halts, almost all are failed polls.  During the benchmark,
      by contrast, basically all halts end within the polling period, except
      for a roughly constant stream of 50 per second coming from vCPUs that
      are not running the benchmark.  The wasted time is thus very low.
      Things may be slightly different for Windows VMs, which have a ~10 ms
      timer tick.
      
      The effect is also visible on Marcelo's recently-introduced latency
      test for the TSC deadline timer.  Though of course a non-RT kernel has
      awful latency bounds, the latency of the timer is around 8000-10000 clock
      cycles compared to 20000-120000 without setting halt_poll_ns.  For the TSC
      deadline timer, thus, the effect is both a smaller average latency and
      a smaller variance.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 05 Feb, 2015 1 commit
  4. 04 Feb, 2015 2 commits
      KVM: MIPS: Don't leak FPU/DSP to guest · f798217d
      James Hogan authored
      The FPU and DSP are enabled via the CP0 Status CU1 and MX bits by
      kvm_mips_set_c0_status() on a guest exit, presumably in case there is
      active state that needs saving if pre-emption occurs. However neither of
      these bits are cleared again when returning to the guest.
      
      This effectively gives the guest access to the FPU/DSP hardware after
      the first guest exit even though it is not aware of its presence,
      allowing FP instructions in guest user code to intermittently execute
      for real instead of trapping into the guest OS for emulation. It will
      then read and manipulate the hardware FP registers which technically
      belong to the user process (e.g. QEMU), or are stale from another user
      process. It can also crash the guest OS by causing an FP exception, for
      which a guest exception handler won't have been registered.
      
      First, save and disable the FPU (and MSA) state with lose_fpu(1)
      before entering the guest. This simplifies the problem, especially
      for when guest FPU/MSA support is added in the future, and prevents
      FR=1 FPU state from being live when the FR bit gets cleared for the
      guest, which according to the architecture causes the contents of the
      FPU and vector registers to become UNPREDICTABLE.
      
      We can then safely remove the enabling of the FPU in
      kvm_mips_set_c0_status(), since there should never be any active FPU or
      MSA state to save at pre-emption, which should plug the FPU leak.
      
      DSP state is always live rather than being lazily restored, so for that
      it is simpler to just clear the MX bit again when re-entering the guest.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Sanjay Lal <sanjayl@kymasys.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+: 044f0f03: MIPS: KVM: Deliver guest interrupts
      Cc: <stable@vger.kernel.org> # v3.10+
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: MIPS: Disable HTW while in guest · c4c6f2ca
      James Hogan authored
      Ensure any hardware page table walker (HTW) is disabled while in KVM
      guest mode, as KVM doesn't yet set up hardware page table walking for
      guest mappings so the wrong mappings would get loaded, resulting in the
      guest hanging or crashing once it reaches userland.
      
      The HTW is disabled and re-enabled around the call to
      __kvm_mips_vcpu_run() which does the initial switch into guest mode and
      the final switch out of guest context. Additionally it is enabled for
      the duration of guest exits (i.e. kvm_mips_handle_exit()), getting
      disabled again before returning to the guest or host.
      
      In all cases the HTW is only disabled in normal kernel mode while
      interrupts are disabled, so that the HTW doesn't get left disabled if
      the process is preempted.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.17+
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 03 Feb, 2015 6 commits
      KVM: nVMX: Enable nested posted interrupt processing · 705699a1
      Wincy Van authored
      If a vCPU has a pending interrupt while in VMX non-root mode, injecting
      that interrupt normally requires a vmexit.  With posted interrupt
      processing, the vmexit is not needed, and interrupts are fully taken
      care of by hardware.  In nested VMX, this feature avoids far more
      vmexits than non-nested VMX does.
      
      When L1 asks L0 to deliver L1's posted interrupt vector, and the target
      vCPU is in non-root mode, we use a physical IPI to deliver POSTED_INTR_NV
      to the target vCPU.  Using POSTED_INTR_NV avoids unexpected interrupts
      if a concurrent vmexit happens and L1's vector differs from L0's.
      The IPI triggers posted interrupt processing in the target physical CPU.
      
      In case the target vCPU was not in guest mode, complete the posted
      interrupt delivery on the next entry to L2.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: nVMX: Enable nested virtual interrupt delivery · 608406e2
      Wincy Van authored
      With virtual interrupt delivery, the hardware lets KVM use a more
      efficient mechanism for interrupt injection. This is an important feature
      for nested VMX, because it reduces vmexits substantially, and vmexits
      are much more expensive with nested virtualization.  This is especially
      important for throughput-bound scenarios.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: nVMX: Enable nested apic register virtualization · 82f0dd4b
      Wincy Van authored
      We can reduce APIC register virtualization cost with this feature;
      it is also a prerequisite for virtual interrupt delivery and posted
      interrupt processing.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: nVMX: Make nested control MSRs per-cpu · b9c237bb
      Wincy Van authored
      To enable nested apicv support, we need per-cpu vmx
      control MSRs:
        1. If the in-kernel irqchip is enabled, we can enable nested
           posted interrupts, so we set the posted-interrupt bit in
           nested_vmx_pinbased_ctls_high.
        2. If the in-kernel irqchip is disabled, we cannot enable
           nested posted interrupts, so the posted-interrupt bit
           in nested_vmx_pinbased_ctls_high is cleared.
      
      Since different VMs may have different in-kernel irqchip
      settings, different nested control MSRs are needed.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: nVMX: Enable nested virtualize x2apic mode · f2b93280
      Wincy Van authored
      When L2 is using x2apic, we can use the "virtualize x2apic mode"
      feature to gain higher performance, especially in the apicv case.
      
      This patch also introduces nested_vmx_check_apicv_controls
      for the nested apicv patches.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: nVMX: Prepare for using hardware MSR bitmap · 3af18d9c
      Wincy Van authored
      Currently, if L1 enables MSR_BITMAP, we emulate this feature: all
      of L2's MSR accesses are intercepted by L0.  Features like "virtualize
      x2apic mode" require the MSR bitmap to be enabled, or the hardware
      will exit and for example not virtualize the x2apic MSRs.  In order to
      let L1 use these features, we need to build a merged bitmap that only
      avoids a VMEXIT if 1) L1 does not require one and 2) the bit is not
      required by the processor for APIC virtualization.
      
      For now the guests are still run with MSR bitmap disabled, but this
      patch already introduces nested_vmx_merge_msr_bitmap for future use.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 02 Feb, 2015 2 commits
  7. 30 Jan, 2015 6 commits
  8. 29 Jan, 2015 5 commits
  9. 27 Jan, 2015 2 commits
  10. 26 Jan, 2015 7 commits
  11. 23 Jan, 2015 7 commits