1. 20 Apr, 2020 2 commits
    • KVM: x86: cleanup kvm_inject_emulated_page_fault · 0cd665bd
      Paolo Bonzini authored
      To reconstruct the kvm_mmu to be used for page fault injection, we
      can simply use fault->nested_page_fault.  This matches how
      fault->nested_page_fault is assigned in the first place by
      FNAME(walk_addr_generic).
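      For illustration, a minimal sketch of the resulting selection (field names
      such as vcpu->arch.walk_mmu follow the x86 KVM code of that era and are
      assumptions here, not the verbatim patch):

          bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
                                              struct x86_exception *fault)
          {
                  struct kvm_mmu *fault_mmu;

                  /*
                   * fault->nested_page_fault was set by FNAME(walk_addr_generic),
                   * so it already tells us which MMU took the fault.
                   */
                  fault_mmu = fault->nested_page_fault ? vcpu->arch.mmu
                                                       : vcpu->arch.walk_mmu;

                  fault_mmu->inject_page_fault(vcpu, fault);
                  return fault->nested_page_fault;
          }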
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: introduce kvm_mmu_invalidate_gva · 5efac074
      Paolo Bonzini authored
      Wrap the combination of mmu->invlpg and kvm_x86_ops->tlb_flush_gva
      into a new function.  This function also lets us specify the host PGD to
      invalidate, as well as the MMU to operate on; both will be useful in
      fixing and simplifying kvm_inject_emulated_page_fault.
      
      A nested guest's MMU, however, has g_context->invlpg == NULL.  Instead of
      setting it to nonpaging_invlpg, make kvm_mmu_invalidate_gva the only
      entry point to mmu->invlpg and treat a NULL invlpg pointer as equivalent
      to nonpaging_invlpg, saving a retpoline.
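      For illustration, a rough sketch of the described behaviour (the signature,
      including the root_hpa parameter for the host PGD, is an assumption based
      on this description rather than the verbatim patch):

          void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
                                      gva_t gva, hpa_t root_hpa)
          {
                  /*
                   * A NULL ->invlpg is equivalent to nonpaging_invlpg: there are
                   * no shadow entries to zap, and skipping the indirect call
                   * avoids a retpoline.
                   */
                  if (mmu->invlpg)
                          mmu->invlpg(vcpu, gva, root_hpa);

                  kvm_x86_ops->tlb_flush_gva(vcpu, gva);
          }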
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 15 Apr, 2020 24 commits
  3. 14 Apr, 2020 5 commits
  4. 07 Apr, 2020 8 commits
  5. 03 Apr, 2020 1 commit
    • KVM: SVM: Split svm_vcpu_run inline assembly to separate file · 199cd1d7
      Uros Bizjak authored
      The compiler (GCC) does not like a situation where an inline assembly
      block that clobbers all available machine registers sits in the middle of
      a function.  This situation occurs in svm_vcpu_run in kvm/svm.c and
      results in many register spills and fills to/from the stack frame.
      
      This patch fixes the issue with the same approach that was used for VMX
      some time ago: the big inline assembly block is moved to a separate
      assembly .S file, taking all ABI requirements into account.
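
      For illustration, a minimal sketch of the shape such a split takes (the
      name __svm_vcpu_run and its exact signature are assumptions based on this
      description, not the verbatim patch):

          /* Implemented in the new .S file, following the C calling convention. */
          void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);

          static void svm_vcpu_run(struct kvm_vcpu *vcpu)
          {
                  struct vcpu_svm *svm = to_svm(vcpu);

                  /* ... pre-run work ... */

                  /*
                   * The former giant asm block becomes a plain call; the guest
                   * register save array is passed by pointer, so the callee can
                   * reach each slot with a short 8-bit displacement.
                   */
                  __svm_vcpu_run(svm->vmcb_pa, svm->vcpu.arch.regs);

                  /* ... post-run work ... */
          }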
      
      There are two main benefits of the above approach:
      
      * elimination of several register spills and fills to/from the stack
      frame, and consequently a smaller function .text size.  The binary size
      of svm_vcpu_run is lowered from 2019 to 1626 bytes.
      
      * more efficient access to the register save array.  Currently, the
      register save array is accessed as:
      
          7b00:    48 8b 98 28 02 00 00     mov    0x228(%rax),%rbx
          7b07:    48 8b 88 18 02 00 00     mov    0x218(%rax),%rcx
          7b0e:    48 8b 90 20 02 00 00     mov    0x220(%rax),%rdx
      
      whereas when a pointer to the register array is passed as an argument to
      the function, one gets:
      
        12:    48 8b 48 08              mov    0x8(%rax),%rcx
        16:    48 8b 50 10              mov    0x10(%rax),%rdx
        1a:    48 8b 58 18              mov    0x18(%rax),%rbx
      
      As a result, and considering that the new function is 229 bytes, the
      total size is lowered by 164 bytes.
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>