1. 21 Feb, 2017 6 commits
    • x86/kvm/vmx: Defer TR reload after VM exit · b7ffc44d
      Andy Lutomirski authored
      Intel's VMX is daft and resets the hidden TSS limit register to 0x67
      on VMX reload, and the 0x67 is not configurable.  KVM currently
      reloads TR using the LTR instruction on every exit, but this is quite
      slow because LTR is serializing.
      
      The 0x67 limit is entirely harmless unless ioperm() is in use, so
      defer the reload until a task using ioperm() is actually running.
      
      Here's some poorly done benchmarking using kvm-unit-tests:
      
      Before:
      
      cpuid 1313
      vmcall 1195
      mov_from_cr8 11
      mov_to_cr8 17
      inl_from_pmtimer 6770
      inl_from_qemu 6856
      inl_from_kernel 2435
      outl_to_kernel 1402
      
      After:
      
      cpuid 1291
      vmcall 1181
      mov_from_cr8 11
      mov_to_cr8 16
      inl_from_pmtimer 6457
      inl_from_qemu 6209
      inl_from_kernel 2339
      outl_to_kernel 1391

      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      [Force-reload TR in invalidate_tss_limit. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
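      The commit above can be illustrated with a toy model (all names here are illustrative stand-ins, not the actual KVM code): mark the clobbered limit invalid on VM exit, and pay for the serializing LTR only when a task that used ioperm() actually runs.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* VMX resets the hidden TSS limit to 0x67 on VM exit.  Rather than
       * execute the serializing LTR on every exit, record that the limit
       * is stale and reload lazily. */

      #define VMX_RESET_LIMIT  0x67u    /* fixed limit VMX restores */
      #define FULL_TSS_LIMIT   0x2087u  /* example limit covering the io bitmap */

      struct cpu_model {
          unsigned int tss_limit;  /* hidden TSS limit register */
          bool limit_invalid;      /* deferred-reload flag */
          unsigned int ltr_count;  /* how many (expensive) LTRs were executed */
      };

      static void vm_exit(struct cpu_model *c)
      {
          c->tss_limit = VMX_RESET_LIMIT;  /* hardware clobbers the limit... */
          c->limit_invalid = true;         /* ...so note it instead of fixing it now */
      }

      static void maybe_refresh_tss_limit(struct cpu_model *c, bool task_uses_ioperm)
      {
          if (c->limit_invalid && task_uses_ioperm) {
              c->tss_limit = FULL_TSS_LIMIT;  /* the deferred LTR, only when needed */
              c->ltr_count++;
              c->limit_invalid = false;
          }
      }
      ```

      In this model, a workload that never calls ioperm() takes zero LTR executions no matter how many VM exits occur, which is exactly where the benchmark wins above come from.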
    • x86/asm/64: Drop __cacheline_aligned from struct x86_hw_tss · d3273dea
      Andy Lutomirski authored
      Historically, the entire TSS + io bitmap structure was cacheline
      aligned, but commit ca241c75 ("x86: unify tss_struct") changed it
      (presumably inadvertently) so that the fixed-layout hardware part is
      cacheline-aligned and the io bitmap is after the padding.  This wastes
      24 bytes (the hardware part should be 104 bytes, but this pads it to
      128 bytes), serves no purpose, and causes sizeof(struct
      x86_hw_tss) to have a confusing value.
      
      Drop the pointless alignment.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
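      The 24 wasted bytes can be demonstrated with a standalone struct that mirrors the 104-byte hardware TSS layout (an illustration, not the kernel's actual definition): forcing cacheline alignment on the fixed-layout part rounds it up to 128 bytes.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* The 64-bit hardware TSS is exactly 104 bytes. */
      struct hw_tss_plain {
          unsigned int       reserved1;
          unsigned long long sp[3];        /* sp0..sp2 */
          unsigned long long reserved2;
          unsigned long long ist[7];       /* IST1..IST7 */
          unsigned long long reserved3;
          unsigned short     reserved4;
          unsigned short     io_bitmap_base;
      } __attribute__((packed));

      /* Cacheline-aligning the inner struct pads it to 128 bytes, pushing
       * anything placed after it (the io bitmap) past 24 bytes of padding. */
      struct hw_tss_aligned {
          struct hw_tss_plain t;
      } __attribute__((aligned(128)));
      ```

      Dropping the alignment attribute makes sizeof() report the architectural 104 bytes again and removes the gap before the io bitmap.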
    • x86/kvm/vmx: Simplify segment_base() · 8c2e41f7
      Andy Lutomirski authored
      Use actual pointer types for pointers (instead of unsigned long) and
      replace hardcoded constants with the appropriate self-documenting
      macros.
      
      The function is still a bit messy, but this seems a lot better than
      before to me.
      
      This is mostly borrowed from a patch by Thomas Garnier.
      
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
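      The kind of work segment_base() does can be sketched with typed pointers and a descriptor struct following the architectural 8-byte GDT entry layout (a standalone illustration under that assumption, not the KVM code itself):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Architectural layout of an 8-byte segment descriptor: the base is
       * scattered across three fields. */
      struct desc_struct {
          uint16_t limit0;
          uint16_t base0;        /* base bits 0..15  */
          uint8_t  base1;        /* base bits 16..23 */
          uint8_t  type_attrs;
          uint8_t  limit1_flags;
          uint8_t  base2;        /* base bits 24..31 */
      } __attribute__((packed));

      /* Reassemble the 32-bit base from the descriptor fields. */
      static unsigned long get_desc_base(const struct desc_struct *d)
      {
          return d->base0 |
                 ((unsigned long)d->base1 << 16) |
                 ((unsigned long)d->base2 << 24);
      }
      ```

      Passing `const struct desc_struct *` instead of an `unsigned long` address makes the field accesses self-documenting, which is the point of the cleanup.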
    • x86/kvm/vmx: Get rid of segment_base() on 64-bit kernels · e28baead
      Andy Lutomirski authored
      It was a bit buggy (it didn't list all segment types that needed
      64-bit fixups), but the bug was irrelevant because it wasn't called
      in any interesting context on 64-bit kernels and was only used for
      data segments on 32-bit kernels.
      
      To avoid confusion, make it explicitly 32-bit only.
      
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86/kvm/vmx: Don't fetch the TSS base from the GDT · e0c23063
      Andy Lutomirski authored
      The current CPU's TSS base is a foregone conclusion, so there's no need
      to parse it out of the segment tables.  This should save a couple of cycles
      (as STR is surely microcoded and poorly optimized) but, more importantly,
      it's a cleanup and it means that segment_base() will never be called on
      64-bit kernels.
      
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86/asm: Define the kernel TSS limit in a macro · 4f53ab14
      Andy Lutomirski authored
      Rather than open-coding the kernel TSS limit in set_tss_desc(), make
      it a real macro near the TSS layout definition.
      
      This is purely a cleanup.
      
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Jim Mattson <jmattson@google.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
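      The shape of the cleanup, with stand-in types rather than the real tss_struct: define the limit with a named macro next to the layout, instead of open-coding sizeof(...) - 1 at the set_tss_desc() call site.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Stand-in for the kernel TSS: fixed hardware part followed by the
       * io bitmap (plus its terminator word). */
      struct fake_tss {
          char          hw[104];         /* fixed-layout hardware part */
          unsigned long io_bitmap[1025]; /* io bitmap + all-ones terminator */
      };

      /* Segment limits are inclusive, hence the -1.  Keeping this next to
       * the struct means a layout change can't silently desynchronize the
       * limit used at the call site. */
      #define FAKE_TSS_LIMIT  (sizeof(struct fake_tss) - 1)
      ```

      A caller then writes `set_desc(..., FAKE_TSS_LIMIT)` instead of repeating the sizeof arithmetic inline.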
  2. 20 Feb, 2017 2 commits
  3. 18 Feb, 2017 1 commit
  4. 17 Feb, 2017 7 commits
  5. 16 Feb, 2017 6 commits
  6. 15 Feb, 2017 15 commits
  7. 09 Feb, 2017 3 commits