1. 14 Nov, 2014 2 commits
    • kvm: memslots: replace heap sort with an insertion sort pass · 063584d4
      Igor Mammedov authored
      memslots is a sorted array.  When a slot is changed, heapsort (lib/sort.c)
      would take O(n log n) time to update it; an optimized insertion sort will
      only cost O(n) on an array with just one item out of order.
      
      Replace sort() with a custom sort that takes advantage of the memslots
      usage pattern and the known position of the changed slot.
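
      A minimal, self-contained sketch of the idea (the struct and function
      names are illustrative, not the kernel's actual update_memslots code):

        /* Slots stay sorted by npages in descending order.  Only
         * slots[i] may be out of place, so one insertion pass restores
         * order in O(n) instead of re-sorting in O(n log n). */
        struct slot { unsigned long npages; };

        static void reinsert_slot(struct slot *slots, int nslots, int i)
        {
                struct slot tmp = slots[i];

                /* Slot shrank: move it back, past larger neighbors. */
                while (i < nslots - 1 && tmp.npages < slots[i + 1].npages) {
                        slots[i] = slots[i + 1];
                        i++;
                }
                /* Slot grew: move it forward, past smaller neighbors. */
                while (i > 0 && tmp.npages > slots[i - 1].npages) {
                        slots[i] = slots[i - 1];
                        i--;
                }
                slots[i] = tmp;
        }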
      
      Performance change for 128 memslot insertions with gradually increasing
      size (the worst case):
      
            heap sort   custom sort
      max:  249747      2500 cycles
      
      with the custom sort algorithm taking ~98% less update time
      than the original.
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: x86: increase user memory slots to 509 · 1d4e7e3c
      Igor Mammedov authored
      With the 3 private slots, this gives us 512 slots total.
      The motivation is to support, in addition to assigned devices,
      more memory hotplug slots, where each hotplugged memory stick
      uses one slot.  This allows up to 256 hotplug memory slots
      while leaving 253 slots for assigned devices and other users.
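
      The change itself is a one-constant bump; a sketch of the relevant
      definitions in arch/x86/include/asm/kvm_host.h (the old value of
      125 is an assumption of this sketch):

        -#define KVM_USER_MEM_SLOTS 125
        +#define KVM_USER_MEM_SLOTS 509
         /* memory slots that are not exposed to userspace */
         #define KVM_PRIVATE_MEM_SLOTS 3
         #define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)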
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 13 Nov, 2014 1 commit
    • kvm: svm: move WARN_ON in svm_adjust_tsc_offset · d913b904
      Chris J Arges authored
      When running the tsc_adjust kvm-unit-test on an AMD processor with the
      IA32_TSC_ADJUST feature enabled, the WARN_ON in svm_adjust_tsc_offset can be
      triggered. This WARN_ON checks for a negative adjustment in case __scale_tsc
      is called; however, it can also fire when __scale_tsc will not be called,
      producing unnecessary warnings.
      
      This patch moves the WARN_ON to trigger only if __scale_tsc will actually be
      called from svm_adjust_tsc_offset. In addition, make adj in kvm_set_msr_common
      an s64, since it can hold signed values.
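
      The shape of the fix, sketched (simplified from svm.c; the
      tsc_ratio guard is an assumption of this sketch):

        static void svm_adjust_tsc_offset(struct kvm_vcpu *vcpu,
                                          s64 adjustment, bool host)
        {
                struct vcpu_svm *svm = to_svm(vcpu);

                if (host) {
                        /* Warn only when the value will really be scaled:
                         * scaling a negative adjustment as u64 is wrong. */
                        if (svm->tsc_ratio != TSC_RATIO_DEFAULT)
                                WARN_ON(adjustment < 0);
                        adjustment = svm_scale_tsc(vcpu, (u64)adjustment);
                }
                /* ... apply adjustment to the vmcb tsc_offset ... */
        }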
      Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 12 Nov, 2014 2 commits
    • x86, kvm, vmx: Don't set LOAD_IA32_EFER when host and guest match · 54b98bff
      Andy Lutomirski authored
      There's nothing to switch if the host and guest values are the same.
      I am unable to find evidence that this makes any difference
      whatsoever.
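
      A simplified sketch of the idea (the vcpu struct and helper are
      illustrative, not the actual vmx.c code; the VMX control and
      VMCS field names are real):

        static void update_efer_controls(struct my_vcpu *v,
                                         u64 guest_efer, u64 host_efer)
        {
                if (guest_efer == host_efer) {
                        /* Nothing to switch: drop the atomic EFER loads. */
                        v->entry_ctls &= ~VM_ENTRY_LOAD_IA32_EFER;
                        v->exit_ctls  &= ~VM_EXIT_LOAD_IA32_EFER;
                } else {
                        v->entry_ctls |= VM_ENTRY_LOAD_IA32_EFER;
                        v->exit_ctls  |= VM_EXIT_LOAD_IA32_EFER;
                        vmcs_write64(GUEST_IA32_EFER, guest_efer);
                        vmcs_write64(HOST_IA32_EFER, host_efer);
                }
        }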
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      [I could see a difference on Nehalem.  From 5 runs:
      
       userspace exit, guest!=host   12200 11772 12130 12164 12327
       userspace exit, guest=host    11983 11780 11920 11919 12040
       lightweight exit, guest!=host  3214  3220  3238  3218  3337
       lightweight exit, guest=host   3178  3193  3193  3187  3220
      
       This passes the t-test with 99% confidence for userspace exit,
       98.5% confidence for lightweight exit. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86, kvm, vmx: Always use LOAD_IA32_EFER if available · f6577a5f
      Andy Lutomirski authored
      At least on Sandy Bridge, letting the CPU switch IA32_EFER is much
      faster than switching it manually.
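
      A sketch of the two strategies (cpu_has_load_ia32_efer() and
      add_shared_msr() are assumed names for this sketch; the TLB-flush
      cost is explained in the bracketed note below):

        if (cpu_has_load_ia32_efer()) {
                /* Fast: the CPU loads GUEST_IA32_EFER at VM entry and
                 * HOST_IA32_EFER at VM exit, with no extra TLB flush. */
                vmcs_write64(GUEST_IA32_EFER, guest_efer);
                vmcs_write64(HOST_IA32_EFER, host_efer);
        } else {
                /* Slow: a WRMSR to EFER flushes the TLB even when only
                 * SCE changes, so manual switching costs far more. */
                add_shared_msr(MSR_EFER, guest_efer, host_efer);
        }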
      
      I benchmarked this using the vmexit kvm-unit-test (single run, but
      GOAL multiplied by 5 to do more iterations):
      
      Test                                  Before      After    Change
      cpuid                                   2000       1932    -3.40%
      vmcall                                  1914       1817    -5.07%
      mov_from_cr8                              13         13     0.00%
      mov_to_cr8                                19         19     0.00%
      inl_from_pmtimer                       19164      10619   -44.59%
      inl_from_qemu                          15662      10302   -34.22%
      inl_from_kernel                         3916       3802    -2.91%
      outl_to_kernel                          2230       2194    -1.61%
      mov_dr                                   172        176     2.33%
      ipi                                (skipped)  (skipped)
      ipi+halt                           (skipped)  (skipped)
      ple-round-robin                           13         13     0.00%
      wr_tsc_adjust_msr                       1920       1845    -3.91%
      rd_tsc_adjust_msr                       1892       1814    -4.12%
      mmio-no-eventfd:pci-mem                16394      11165   -31.90%
      mmio-wildcard-eventfd:pci-mem           4607       4645     0.82%
      mmio-datamatch-eventfd:pci-mem          4601       4610     0.20%
      portio-no-eventfd:pci-io               11507       7942   -30.98%
      portio-wildcard-eventfd:pci-io          2239       2225    -0.63%
      portio-datamatch-eventfd:pci-io         2250       2234    -0.71%
      
      I haven't explicitly computed the significance of these numbers,
      but this isn't subtle.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      [The results were reproducible on all of Nehalem, Sandy Bridge and
       Ivy Bridge.  The slowness of manual switching is because writing
       to EFER with WRMSR triggers a TLB flush, even if the only bit you're
       touching is SCE (so the page table format is not affected).  Doing
       the write as part of vmentry/vmexit, instead, does not flush the TLB,
       probably because all processors that have EPT also have VPID. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 10 Nov, 2014 1 commit
  5. 08 Nov, 2014 7 commits
  6. 07 Nov, 2014 23 commits
  7. 03 Nov, 2014 4 commits
    • kvm: kvmclock: use get_cpu() and put_cpu() · c6338ce4
      Tiejun Chen authored
      We can use get_cpu() and put_cpu() to replace the
      preempt_disable()/cpu = smp_processor_id()/preempt_enable()
      sequence, for slightly cleaner code.
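
      The transformation, as a before/after sketch (use_cpu() stands in
      for the per-cpu work; get_cpu() is preempt_disable() plus
      smp_processor_id()):

        /* Before: */
        preempt_disable();
        cpu = smp_processor_id();
        use_cpu(cpu);
        preempt_enable();

        /* After: equivalent, one call each way. */
        cpu = get_cpu();
        use_cpu(cpu);
        put_cpu();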
      Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: optimize some accesses to LVTT and SPIV · f30ebc31
      Radim Krčmář authored
      We mirror a subset of these registers in separate variables.
      Using them directly should be faster.
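
      The pattern, sketched on the periodic-timer check (close to the
      lapic.c helpers, but simplified):

        /* Before: parse the register on every access. */
        static bool lapic_timer_is_periodic(struct kvm_lapic *apic)
        {
                return (kvm_apic_get_reg(apic, APIC_LVTT) &
                        apic->lapic_timer.timer_mode_mask)
                       == APIC_LVT_TIMER_PERIODIC;
        }

        /* After: read the mirror kept up to date on writes. */
        static bool lapic_timer_is_periodic(struct kvm_lapic *apic)
        {
                return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_PERIODIC;
        }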
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: detect LVTT changes under APICv · a323b409
      Radim Krčmář authored
      APIC-write VM exits are "trap-like": they save CS:RIP values for the
      instruction after the write, and more importantly, the handler will
      already see the new value in the virtual-APIC page.  This means that
      apic_reg_write cannot use kvm_apic_get_reg to omit timer cancelation
      when mode changes.
      
      timer_mode_mask shouldn't be changing as it depends on cpuid.
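
      A sketch of the resulting APIC-write handling (simplified; the
      mirrored timer_mode doubles as the "old value" that the
      virtual-APIC page can no longer provide):

        case APIC_LVTT: {
                u32 timer_mode = val & apic->lapic_timer.timer_mode_mask;

                /* The register already holds the new value, so compare
                 * against the shadow copy instead. */
                if (apic->lapic_timer.timer_mode != timer_mode) {
                        apic->lapic_timer.timer_mode = timer_mode;
                        hrtimer_cancel(&apic->lapic_timer.timer);
                }
                break;
        }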
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: detect SPIV changes under APICv · e462755c
      Radim Krčmář authored
      APIC-write VM exits are "trap-like": they save CS:RIP values for the
      instruction after the write, and more importantly, the handler will
      already see the new value in the virtual-APIC page.
      
      This caused a bug if you used KVM_SET_IRQCHIP to set the SW-enabled bit
      in the SPIV register.  The chain of events is as follows:
      
      * When the irqchip is added to the destination VM, the apic_sw_disabled
      static key is incremented (1)
      
      * When the KVM_SET_IRQCHIP ioctl is invoked, it is decremented (0)
      
      * When the guest disables the bit in the SPIV register, e.g. as part of
      shutdown, apic_set_spiv does not notice the change and the static key is
      _not_ incremented.
      
      * When the guest is destroyed, the static key is decremented (-1),
      resulting in this trace:
      
        WARNING: at kernel/jump_label.c:81 __static_key_slow_dec+0xa6/0xb0()
        jump label: negative count!
      
        [<ffffffff816bf898>] dump_stack+0x19/0x1b
        [<ffffffff8107c6f1>] warn_slowpath_common+0x61/0x80
        [<ffffffff8107c76c>] warn_slowpath_fmt+0x5c/0x80
        [<ffffffff811931e6>] __static_key_slow_dec+0xa6/0xb0
        [<ffffffff81193226>] static_key_slow_dec_deferred+0x16/0x20
        [<ffffffffa0637698>] kvm_free_lapic+0x88/0xa0 [kvm]
        [<ffffffffa061c63e>] kvm_arch_vcpu_uninit+0x2e/0xe0 [kvm]
        [<ffffffffa05ff301>] kvm_vcpu_uninit+0x21/0x40 [kvm]
        [<ffffffffa067cec7>] vmx_free_vcpu+0x47/0x70 [kvm_intel]
        [<ffffffffa061bc50>] kvm_arch_vcpu_free+0x50/0x60 [kvm]
        [<ffffffffa061ca22>] kvm_arch_destroy_vm+0x102/0x260 [kvm]
        [<ffffffff810b68fd>] ? synchronize_srcu+0x1d/0x20
        [<ffffffffa06030d1>] kvm_put_kvm+0xe1/0x1c0 [kvm]
        [<ffffffffa06036f8>] kvm_vcpu_release+0x18/0x20 [kvm]
        [<ffffffff81215c62>] __fput+0x102/0x310
        [<ffffffff81215f4e>] ____fput+0xe/0x10
        [<ffffffff810ab664>] task_work_run+0xb4/0xe0
        [<ffffffff81083944>] do_exit+0x304/0xc60
        [<ffffffff816c8dfc>] ? _raw_spin_unlock_irq+0x2c/0x50
        [<ffffffff810fd22d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
        [<ffffffff8108432c>] do_group_exit+0x4c/0xc0
        [<ffffffff810843b4>] SyS_exit_group+0x14/0x20
        [<ffffffff816d33a9>] system_call_fastpath+0x16/0x1b
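
      The fix follows the same shadow-value pattern; a sketch of
      apic_set_spiv keeping the static key balanced (simplified, with
      sw_enabled as the assumed mirror field):

        static void apic_set_spiv(struct kvm_lapic *apic, u32 val)
        {
                bool enabled = val & APIC_SPIV_APIC_ENABLED;

                apic_set_reg(apic, APIC_SPIV, val);

                /* Adjust the key once per real transition, based on the
                 * mirrored state rather than the (already new) register. */
                if (enabled != apic->sw_enabled) {
                        apic->sw_enabled = enabled;
                        if (enabled)
                                static_key_slow_dec_deferred(&apic_sw_disabled);
                        else
                                static_key_slow_inc(&apic_sw_disabled.key);
                }
        }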
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>