1. 22 Oct, 2013 1 commit
  2. 18 Oct, 2013 4 commits
  3. 17 Oct, 2013 2 commits
    • KVM: ARM: Update comments for kvm_handle_wfi · 86ed81aa
      Christoffer Dall authored
      Update comments to reflect what is really going on and add the TWE bit
      to the comments in kvm_arm.h.
      
      Also rename the function to kvm_handle_wfx, as is done on arm64, for
      consistency and uber-correctness.
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
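      For reference, a minimal sketch of the two HCR trap bits this comment
      update concerns, with bit positions taken from the ARMv7 architecture
      (the macro names mirror those used in kvm_arm.h):

        /* Hyp Configuration Register (HCR) trap bits, ARMv7 */
        #define HCR_TWI  (1 << 13)  /* trap guest WFI to Hyp mode */
        #define HCR_TWE  (1 << 14)  /* trap guest WFE to Hyp mode */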
    • ARM: KVM: Yield CPU when vcpu executes a WFE · 58d5ec8f
      Marc Zyngier authored
      On an (even slightly) oversubscribed system, spinlocks are quickly
      becoming a bottleneck, as some vcpus are spinning, waiting for a
      lock to be released, while the vcpu holding the lock may not be
      running at all.
      
      This creates contention, and the observed slowdown is 40x for
      hackbench. No, this isn't a typo.
      
      The solution is to trap blocking WFEs and tell KVM that we're
      now spinning. This ensures that other vcpus will get a scheduling
      boost, allowing the lock to be released more quickly. Also, using
      CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT slightly improves the performance
      when the VM is severely overcommitted.
      
      Quick test to estimate the performance: hackbench 1 process 1000
      
      2xA15 host (baseline):	1.843s
      
      2xA15 guest w/o patch:	2.083s
      4xA15 guest w/o patch:	80.212s
      8xA15 guest w/o patch:	Could not be bothered to find out
      
      2xA15 guest w/ patch:	2.102s
      4xA15 guest w/ patch:	3.205s
      8xA15 guest w/ patch:	6.887s
      
      So we go from a 40x degradation to 1.5x in the 2x overcommit case,
      which is vaguely more acceptable.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
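      A minimal sketch of the handler this patch describes, assuming the
      HSR_WFI_IS_WFE bit name from the ARM HSR decoding; kvm_vcpu_on_spin()
      and kvm_vcpu_block() are the generic KVM primitives:

        /* Sketch: a trapped WFE means the vcpu is probably spinning on a
         * lock, so ask the scheduler to boost another vcpu instead; a
         * trapped WFI means it is genuinely idle, so block it. */
        static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
        {
                if (kvm_vcpu_get_hsr(vcpu) & HSR_WFI_IS_WFE)
                        kvm_vcpu_on_spin(vcpu);  /* directed yield */
                else
                        kvm_vcpu_block(vcpu);    /* wait for an interrupt */

                return 1;
        }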
  4. 16 Oct, 2013 4 commits
  5. 15 Oct, 2013 1 commit
  6. 14 Oct, 2013 7 commits
  7. 13 Oct, 2013 3 commits
  8. 10 Oct, 2013 1 commit
  9. 03 Oct, 2013 8 commits
  10. 02 Oct, 2013 5 commits
  11. 30 Sep, 2013 4 commits
    • KVM: Convert kvm_lock back to non-raw spinlock · 2f303b74
      Paolo Bonzini authored
      In commit e935b837 ("KVM: Convert kvm_lock to raw_spinlock"),
      the kvm_lock was made a raw lock.  However, the kvm mmu_shrink()
      function tries to grab the (non-raw) mmu_lock while the raw kvm_lock
      is held.  On PREEMPT_RT, where a non-raw spinlock is a sleeping
      rt_mutex, this leads to the following:
      
      BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
      in_atomic(): 1, irqs_disabled(): 0, pid: 55, name: kswapd0
      Preemption disabled at:[<ffffffffa0376eac>] mmu_shrink+0x5c/0x1b0 [kvm]
      
      Pid: 55, comm: kswapd0 Not tainted 3.4.34_preempt-rt
      Call Trace:
       [<ffffffff8106f2ad>] __might_sleep+0xfd/0x160
       [<ffffffff817d8d64>] rt_spin_lock+0x24/0x50
       [<ffffffffa0376f3c>] mmu_shrink+0xec/0x1b0 [kvm]
       [<ffffffff8111455d>] shrink_slab+0x17d/0x3a0
       [<ffffffff81151f00>] ? mem_cgroup_iter+0x130/0x260
       [<ffffffff8111824a>] balance_pgdat+0x54a/0x730
       [<ffffffff8111fe47>] ? set_pgdat_percpu_threshold+0xa7/0xd0
       [<ffffffff811185bf>] kswapd+0x18f/0x490
       [<ffffffff81070961>] ? get_parent_ip+0x11/0x50
       [<ffffffff81061970>] ? __init_waitqueue_head+0x50/0x50
       [<ffffffff81118430>] ? balance_pgdat+0x730/0x730
       [<ffffffff81060d2b>] kthread+0xdb/0xe0
       [<ffffffff8106e122>] ? finish_task_switch+0x52/0x100
       [<ffffffff817e1e94>] kernel_thread_helper+0x4/0x10
       [<ffffffff81060c50>] ? __init_kthread_worker+0x
      
      After the previous patch, kvm_lock need not be a raw spinlock anymore,
      so change it back.
      Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: kvm@vger.kernel.org
      Cc: gleb@redhat.com
      Cc: jan.kiszka@siemens.com
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
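      A simplified sketch of the nesting that triggers the BUG above (the
      shrinker walks the VM list under the raw kvm_lock and then takes each
      VM's non-raw mmu_lock, which is a sleeping rt_mutex on PREEMPT_RT):

        /* Sketch of mmu_shrink()'s locking before this patch */
        static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
        {
                struct kvm *kvm;

                raw_spin_lock(&kvm_lock);              /* preemption off */
                list_for_each_entry(kvm, &vm_list, vm_list) {
                        spin_lock(&kvm->mmu_lock);     /* may sleep on -rt: BUG */
                        /* ... free shadow pages ... */
                        spin_unlock(&kvm->mmu_lock);
                }
                raw_spin_unlock(&kvm_lock);
                return 0;
        }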
    • KVM: protect kvm_usage_count with its own spinlock · 4a937f96
      Paolo Bonzini authored
      The VM list need not be protected by a raw spinlock.  Separate the
      two users of kvm_lock (the VM list and kvm_usage_count) so that
      kvm_lock can be made non-raw.
      
      Cc: kvm@vger.kernel.org
      Cc: gleb@redhat.com
      Cc: jan.kiszka@siemens.com
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
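      A sketch of the split, with kvm_count_lock as an assumed name for the
      new lock; the usage count keeps a small raw lock, which is what lets
      kvm_lock (guarding the VM list) be converted back to a non-raw
      spinlock in the commit above:

        static DEFINE_RAW_SPINLOCK(kvm_count_lock);  /* assumed name */
        static int kvm_usage_count;

        static int hardware_enable_all(void)
        {
                int r = 0;

                raw_spin_lock(&kvm_count_lock);
                kvm_usage_count++;
                if (kvm_usage_count == 1) {
                        /* first VM: enable virtualization on all CPUs */
                }
                raw_spin_unlock(&kvm_count_lock);

                return r;
        }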
    • KVM: cleanup (physical) CPU hotplug · 4fa92fb2
      Paolo Bonzini authored
      Remove the useless argument, and do not do anything if there are no
      VMs running at the time of the hotplug.
      
      Cc: kvm@vger.kernel.org
      Cc: gleb@redhat.com
      Cc: jan.kiszka@siemens.com
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
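      Roughly what the cleaned-up hotplug callback looks like, as a sketch
      (hardware_enable_nolock() is the per-CPU enable helper; the guard on
      kvm_usage_count is the "do nothing if no VMs are running" part):

        /* Sketch: the hotplug path now takes no argument and bails out
         * early when no VM exists, so there is nothing to enable. */
        static void hardware_enable(void)
        {
                raw_spin_lock(&kvm_count_lock);
                if (kvm_usage_count)
                        hardware_enable_nolock(NULL);
                raw_spin_unlock(&kvm_count_lock);
        }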
    • KVM: nVMX: Do not generate #DF if #PF happens during exception delivery into L2 · feaf0c7d
      Gleb Natapov authored
      If a #PF happens during delivery of an exception into L2 and L1 also
      does not have the page mapped in its shadow page table, then L0 needs
      to generate a vmexit to L1 with the original event in
      IDT_VECTORING_INFO, but the current code combines both exceptions and
      generates a #DF instead. Fix that by providing an nVMX-specific
      function for handling page faults during the page table walk that
      handles this case correctly.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
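      A sketch of the kind of nVMX-specific callback described, following
      vmx.c naming conventions (treat the exact names as assumptions): if L1
      intercepts #PF, reflect a vmexit with the original event preserved
      rather than merging the two faults into a #DF:

        static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
                                                 struct x86_exception *fault)
        {
                struct vmcs12 *vmcs12 = get_vmcs12(vcpu);

                WARN_ON(!is_guest_mode(vcpu));

                /* L1 intercepts #PF: exit to L1, keeping the original
                 * event in IDT_VECTORING_INFO */
                if (vmcs12->exception_bitmap & (1u << PF_VECTOR))
                        nested_vmx_vmexit(vcpu);
                else
                        kvm_inject_page_fault(vcpu, fault);  /* deliver to L2 */
        }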