1. 19 Nov, 2014 4 commits
      KVM: x86: Perform limit checks when assigning EIP · d50eaa18
      Nadav Amit authored
      If a branch (e.g., jmp, ret) causes a limit violation because the target IP
      exceeds the limit, the #GP exception occurs before the branch.  In other
      words, the RIP pushed on the stack should be that of the branch and not that
      of the target.
      
      To do so, we can call __linearize with the new EIP, which also saves us the
      code that performs the canonical address checks. In the case of assigning an
      EIP >= 2^32 (when switching cs.l), we are also safe, as __linearize checks
      that the new EIP does not exceed the limit and triggers #GP(0) otherwise.
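The idea can be sketched with a simplified, hypothetical model (the real code calls __linearize() on the emulator context; the names and types below are stand-ins): the limit check runs against the new EIP before RIP is committed, so a faulting branch leaves RIP pointing at the branch itself.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model: check the branch target against the code-segment
 * limit (as __linearize does) before committing it to RIP, so #GP is
 * raised while RIP still points at the branch instruction. */
struct seg  { uint32_t limit; };
struct vcpu { uint64_t rip; struct seg cs; bool gp_fault; };

static bool assign_eip(struct vcpu *v, uint64_t new_eip)
{
	if (new_eip > v->cs.limit) {	/* target beyond the segment */
		v->gp_fault = true;	/* #GP(0); RIP left unchanged */
		return false;
	}
	v->rip = new_eip;
	return true;
}
```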
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: x86: Emulator performs privilege checks on __linearize · a7315d2f
      Nadav Amit authored
      When a segment is accessed, real hardware does not perform any privilege
      level checks.  In contrast, the KVM emulator does. This causes some
      discrepancies from real hardware. For instance, reading from a readable code
      segment may fail due to incorrect segment checks. In addition, it introduces
      unnecessary overhead.
      
      To quote Intel SDM 5.5 ("Privilege Levels"): "Privilege levels are checked
      when the segment selector of a segment descriptor is loaded into a segment
      register." The SDM never mentions privilege level checks during memory
      access, except for loading far pointers in section 5.10 ("Pointer
      Validation"). Those are actually segment selector loads and are emulated
      similarly (i.e., regardless of the __linearize checks).
      
      This behavior was also verified using sysexit: a data segment with DPL=0 was
      loaded, and after sysexit (CPL=3) it was still accessible.
      
      Therefore, all the privilege level checks in __linearize are removed.
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: x86: Stack size is overridden by __linearize · 1c1c35ae
      Nadav Amit authored
      When performing a segmented read/write in the emulator for stack operations,
      the emulator ignores the stack size and uses ad_bytes as the indication of
      the pointer size. As a result, a wrong address may be accessed.
      
      To fix this behavior, we can remove the masking of the address in
      __linearize and perform it beforehand.  This is already done for the
      operands (so currently it is inefficiently done twice). It is missing in two
      cases:
      1. When using rip_relative addressing.
      2. In fetch_bit_operand, which changes the address.
      
      This patch masks the address on these two occasions, and removes the masking
      from __linearize.
      
      Note that EIP is not masked during fetch. In protected/legacy mode, a code
      fetch with RIP >= 2^32 should result in #GP rather than a wrap-around. Since
      we perform limit checks within __linearize, this is the expected behavior.
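The masking rule can be illustrated with a minimal sketch (hypothetical helper name; in the emulator the width comes from ad_bytes, or from the stack-segment size for stack operations):

```c
#include <assert.h>
#include <stdint.h>

/* Mask an effective address to the active address width before
 * linearization; 64-bit addresses are not masked. */
static uint64_t mask_ea(uint64_t ea, unsigned int ad_bytes)
{
	switch (ad_bytes) {
	case 2:  return ea & 0xffffu;       /* 16-bit address size */
	case 4:  return ea & 0xffffffffu;   /* 32-bit address size */
	default: return ea;                 /* 64-bit: no masking  */
	}
}
```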
      
      Partial revert of commit 518547b3 (KVM: x86: Emulator does not
      calculate address correctly, 2014-09-30).
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: x86: Revert NoBigReal patch in the emulator · 7d882ffa
      Nadav Amit authored
      Commit 10e38fc7cab6 ("KVM: x86: Emulator flag for instruction that only
      support 16-bit addresses in real mode") introduced NoBigReal for
      instructions such as MONITOR. Apparently, the Intel SDM description that led
      to this patch is misleading.  Since no instruction is using NoBigReal, it is
      safe to remove it until we fully understand what the SDM means.
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 18 Nov, 2014 2 commits
  3. 17 Nov, 2014 5 commits
      KVM: x86: Fix lost interrupt on irr_pending race · f210f757
      Nadav Amit authored
      apic_find_highest_irr assumes irr_pending is set if any vector in APIC_IRR is
      set.  If this assumption is broken and apicv is disabled, the injection of
      interrupts may be deferred until another interrupt is delivered to the guest.
      Ultimately, if no other interrupt should be injected to that vCPU, the pending
      interrupt may be lost.
      
      Commit 56cc2406 ("KVM: nVMX: fix "acknowledge interrupt on exit" when APICv
      is in use") changed the behavior of apic_clear_irr so that irr_pending is
      recomputed after the APIC_IRR vector is cleared. After this commit, if
      apic_set_irr and apic_clear_irr run simultaneously, a race may occur,
      leaving an APIC_IRR vector set while irr_pending is cleared. In the
      following example, assume a single vector is set in IRR prior to calling
      apic_clear_irr:
      
      apic_set_irr				apic_clear_irr
      ------------				--------------
      apic->irr_pending = true;
      					apic_clear_vector(...);
      					vec = apic_search_irr(apic);
      					// => vec == -1
      apic_set_vector(...);
      					apic->irr_pending = (vec != -1);
      					// => apic->irr_pending == false
      
      Nonetheless, it appears the race might even occur prior to this commit:
      
      apic_set_irr				apic_clear_irr
      ------------				--------------
      apic->irr_pending = true;
      					apic->irr_pending = false;
      					apic_clear_vector(...);
      					if (apic_search_irr(apic) != -1)
      						apic->irr_pending = true;
      					// => apic->irr_pending == false
      apic_set_vector(...);
      
      Fix this issue by:
      1. Restoring the previous behavior of apic_clear_irr: clear irr_pending,
         call apic_clear_vector, and then set irr_pending if APIC_IRR is
         non-zero.
      2. In apic_set_irr: first call apic_set_vector, then set irr_pending.
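The two orderings can be modeled with a toy version (a 32-bit word stands in for the 256-bit IRR; names are simplified from the lapic code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct apic { uint32_t irr; bool irr_pending; };

/* Set path: make the vector visible in IRR first, then publish the
 * irr_pending hint, so a concurrent clear cannot miss it. */
static void apic_set_irr(struct apic *a, int vec)
{
	a->irr |= 1u << vec;		/* apic_set_vector(...) */
	a->irr_pending = true;
}

/* Clear path (restored behavior): conservatively drop the hint,
 * clear the vector, then re-set the hint if IRR is still non-empty. */
static void apic_clear_irr(struct apic *a, int vec)
{
	a->irr_pending = false;
	a->irr &= ~(1u << vec);		/* apic_clear_vector(...) */
	if (a->irr)			/* apic_search_irr(...) != -1 */
		a->irr_pending = true;
}
```

With this ordering, a concurrent apic_set_irr that lands between the steps of apic_clear_irr leaves irr_pending set, so the new vector is never lost.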
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: compute correct map even if all APICs are software disabled · a3e339e1
      Paolo Bonzini authored
      Logical destination mode can be used to send NMI IPIs even when all
      APICs are software disabled, so if all APICs are software disabled we
      should still look at the DFRs.
      
      So the DFRs should all be the same, even if some or all APICs are
      software disabled.  However, the SDM does not say this, so tweak
      the logic as follows:
      
      - if one APIC is enabled and has LDR != 0, use that one to build the map.
      This picks the right DFR in case an OS is only setting it for the
      software-enabled APICs, or in case an OS is using logical addressing
      on some APICs while leaving the rest in reset state (using LDR was
      suggested by Radim).
      
      - if all APICs are disabled, pick a random one to build the map.
      We use the last one with LDR != 0 for simplicity.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: x86: Software disabled APIC should still deliver NMIs · 173beedc
      Nadav Amit authored
      Currently, the APIC logical map does not consider VCPUs whose local APIC is
      software-disabled.  However, NMIs, INIT, etc. should still be delivered to
      such VCPUs.
      
      To address this, first determine the APIC mode, considering all VCPUs, and
      only then construct the logical map.
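The two-pass construction can be sketched roughly as follows (hypothetical simplified types; the real code builds kvm_apic_map from the DFR/LDR of each local APIC):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum apic_mode { APIC_FLAT, APIC_CLUSTER };

struct lapic { bool sw_enabled; enum apic_mode mode; };

/* Pass 1: determine the mode by scanning every APIC, including
 * software-disabled ones (which must still receive NMIs/INIT). */
static enum apic_mode find_mode(const struct lapic *apics, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (apics[i].mode == APIC_CLUSTER)
			return APIC_CLUSTER;
	return APIC_FLAT;
}
/* Pass 2 (not shown): build the logical map for all VCPUs using the
 * mode chosen above. */
```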
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      kvm: simplify update_memslots invocation · 5cc15027
      Paolo Bonzini authored
      The update_memslots invocation is only needed in one case.  Make
      the code clearer by moving it to __kvm_set_memory_region, and
      removing the wrapper around insert_memslot.
      Reviewed-by: Igor Mammedov <imammedo@redhat.com>
      Reviewed-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      kvm: commonize allocation of the new memory slots · f2a81036
      Paolo Bonzini authored
      The two kmemdup invocations can be unified.  I find that the new
      placement of the comment makes it easier to see what happens.
      Reviewed-by: Igor Mammedov <imammedo@redhat.com>
      Reviewed-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 14 Nov, 2014 3 commits
  5. 13 Nov, 2014 1 commit
      kvm: svm: move WARN_ON in svm_adjust_tsc_offset · d913b904
      Chris J Arges authored
      When running the tsc_adjust kvm-unit-test on an AMD processor with the
      IA32_TSC_ADJUST feature enabled, the WARN_ON in svm_adjust_tsc_offset can be
      triggered. This WARN_ON checks for a negative adjustment in case __scale_tsc
      is called; however, it may trigger unnecessary warnings.
      
      This patch moves the WARN_ON so it triggers only if __scale_tsc will
      actually be called from svm_adjust_tsc_offset. In addition, make adj in
      kvm_set_msr_common an s64, since it can hold signed values.
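The relocated condition amounts to a small predicate (a hypothetical helper; the real code keys the warning off the path where __scale_tsc() actually runs):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* After the move, the WARN_ON fires only on the path where
 * __scale_tsc() will actually be called with the adjustment. */
static bool tsc_adjust_should_warn(int64_t adj, bool scaling_active)
{
	return scaling_active && adj < 0;
}
```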
      Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 12 Nov, 2014 2 commits
      x86, kvm, vmx: Don't set LOAD_IA32_EFER when host and guest match · 54b98bff
      Andy Lutomirski authored
      There's nothing to switch if the host and guest values are the same.
      I am unable to find evidence that this makes any difference
      whatsoever.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      [I could see a difference on Nehalem.  From 5 runs:
      
       userspace exit, guest!=host   12200 11772 12130 12164 12327
       userspace exit, guest=host    11983 11780 11920 11919 12040
       lightweight exit, guest!=host  3214  3220  3238  3218  3337
       lightweight exit, guest=host   3178  3193  3193  3187  3220
      
       This passes the t-test with 99% confidence for userspace exit,
       98.5% confidence for lightweight exit. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      x86, kvm, vmx: Always use LOAD_IA32_EFER if available · f6577a5f
      Andy Lutomirski authored
      At least on Sandy Bridge, letting the CPU switch IA32_EFER is much
      faster than switching it manually.
      
      I benchmarked this using the vmexit kvm-unit-test (single run, but
      GOAL multiplied by 5 to do more iterations):
      
      Test                                  Before      After    Change
      cpuid                                   2000       1932    -3.40%
      vmcall                                  1914       1817    -5.07%
      mov_from_cr8                              13         13     0.00%
      mov_to_cr8                                19         19     0.00%
      inl_from_pmtimer                       19164      10619   -44.59%
      inl_from_qemu                          15662      10302   -34.22%
      inl_from_kernel                         3916       3802    -2.91%
      outl_to_kernel                          2230       2194    -1.61%
      mov_dr                                   172        176     2.33%
      ipi                                (skipped)  (skipped)
      ipi+halt                           (skipped)  (skipped)
      ple-round-robin                           13         13     0.00%
      wr_tsc_adjust_msr                       1920       1845    -3.91%
      rd_tsc_adjust_msr                       1892       1814    -4.12%
      mmio-no-eventfd:pci-mem                16394      11165   -31.90%
      mmio-wildcard-eventfd:pci-mem           4607       4645     0.82%
      mmio-datamatch-eventfd:pci-mem          4601       4610     0.20%
      portio-no-eventfd:pci-io               11507       7942   -30.98%
      portio-wildcard-eventfd:pci-io          2239       2225    -0.63%
      portio-datamatch-eventfd:pci-io         2250       2234    -0.71%
      
      I haven't explicitly computed the significance of these numbers,
      but this isn't subtle.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      [The results were reproducible on all of Nehalem, Sandy Bridge and
       Ivy Bridge.  The slowness of manual switching is because writing
       to EFER with WRMSR triggers a TLB flush, even if the only bit you're
       touching is SCE (so the page table format is not affected).  Doing
       the write as part of vmentry/vmexit, instead, does not flush the TLB,
       probably because all processors that have EPT also have VPID. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  7. 10 Nov, 2014 1 commit
  8. 08 Nov, 2014 7 commits
  9. 07 Nov, 2014 15 commits