1. 13 Feb, 2018 17 commits
  2. 11 Feb, 2018 1 commit
    • x86/speculation: Update Speculation Control microcode blacklist · 17513420
      David Woodhouse authored
      Intel have retroactively blessed the 0xc2 microcode on Skylake mobile
      and desktop parts, and the Gemini Lake 0x22 microcode is apparently fine
      too. We blacklisted the latter purely because it was present with all
      the other problematic ones in the 2018-01-08 release, but now it's
      explicitly listed as OK.
      
      We still list 0x84 for the various Kaby Lake / Coffee Lake parts, as
      that appeared in one version of the blacklist and then reverted to
      0x80 again. We can change it if 0x84 is actually announced to be safe.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arjan.van.de.ven@intel.com
      Cc: jmattson@google.com
      Cc: karahmed@amazon.de
      Cc: kvm@vger.kernel.org
      Cc: pbonzini@redhat.com
      Cc: rkrcmar@redhat.com
      Cc: sironi@amazon.de
      Link: http://lkml.kernel.org/r/1518305967-31356-2-git-send-email-dwmw@amazon.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      17513420
  3. 10 Feb, 2018 1 commit
    • x86/mm/pti: Fix PTI comment in entry_SYSCALL_64() · 14b1fcc6
      Nadav Amit authored
      The comment is confusing since the path is taken when
      CONFIG_PAGE_TABLE_ISOLATION is disabled (while the comment says it is
      not taken).
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: nadav.amit@gmail.com
      Link: http://lkml.kernel.org/r/20180209170638.15161-1-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      14b1fcc6
  4. 09 Feb, 2018 2 commits
  5. 06 Feb, 2018 3 commits
  6. 03 Feb, 2018 6 commits
  7. 02 Feb, 2018 4 commits
  8. 01 Feb, 2018 1 commit
  9. 31 Jan, 2018 2 commits
    • KVM: VMX: make MSR bitmaps per-VCPU · 904e14fb
      Paolo Bonzini authored
      Place the MSR bitmap in struct loaded_vmcs, and update it in place
      every time the x2apic or APICv state can change.  This is rare and
      the loop can handle 64 MSRs per iteration, in a similar fashion as
      nested_vmx_prepare_msr_bitmap.
      
      This prepares for choosing, on a per-VM basis, whether to intercept
      the SPEC_CTRL and PRED_CMD MSRs.
      
      Cc: stable@vger.kernel.org       # prereq for Spectre mitigation
      Suggested-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      904e14fb
    • x86/paravirt: Remove 'noreplace-paravirt' cmdline option · 12c69f1e
      Josh Poimboeuf authored
      The 'noreplace-paravirt' option disables paravirt patching, leaving the
      original pv indirect calls in place.
      
      That's highly incompatible with retpolines, unless we want to uglify
      paravirt even further and convert the paravirt calls to retpolines.
      
      As far as I can tell, the option doesn't seem to be useful for much
      other than introducing surprising corner cases and making the kernel
      vulnerable to Spectre v2.  It was probably a debug option from the early
      paravirt days.  So just remove it.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Jun Nakajima <jun.nakajima@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Link: https://lkml.kernel.org/r/20180131041333.2x6blhxirc2kclrq@treble
      12c69f1e
  10. 30 Jan, 2018 3 commits
    • x86/speculation: Use Indirect Branch Prediction Barrier in context switch · 18bf3c3e
      Tim Chen authored
      Flush indirect branches when switching into a process that marked itself
      non-dumpable. This protects high-value processes like gpg better,
      without incurring too high a performance overhead.
      
      If done naïvely, we could switch to a kernel idle thread and then back
      to the original process, such as:
      
          process A -> idle -> process A
      
      In such scenario, we do not have to do IBPB here even though the process
      is non-dumpable, as we are switching back to the same process after a
      hiatus.
      
      To avoid the redundant IBPB, which is expensive, we track the last mm
      user context ID. The cost is to have an extra u64 mm context id to track
      the last mm we were using before switching to the init_mm used by idle.
      Avoiding the extra IBPB is probably worth the extra memory for this
      common scenario.
      
      For those cases where tlb_defer_switch_to_init_mm() returns true (non
      PCID), lazy tlb will defer switch to init_mm, so we will not be changing
      the mm for the process A -> idle -> process A switch. So IBPB will be
      skipped for this case.
      
      Thanks to the reviewers and Andy Lutomirski for the suggestion of
      using ctx_id which got rid of the problem of mm pointer recycling.
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: ak@linux.intel.com
      Cc: karahmed@amazon.de
      Cc: arjan@linux.intel.com
      Cc: torvalds@linux-foundation.org
      Cc: linux@dominikbrodowski.net
      Cc: peterz@infradead.org
      Cc: bp@alien8.de
      Cc: luto@kernel.org
      Cc: pbonzini@redhat.com
      Cc: gregkh@linux-foundation.org
      Link: https://lkml.kernel.org/r/1517263487-3708-1-git-send-email-dwmw@amazon.co.uk
      18bf3c3e
    • x86/cpuid: Fix up "virtual" IBRS/IBPB/STIBP feature bits on Intel · 7fcae111
      David Woodhouse authored
      Despite the fact that all the other code there seems to be doing it, just
      using set_cpu_cap() in early_intel_init() doesn't actually work.
      
      For CPUs with PKU support, setup_pku() calls get_cpu_cap() after
      c->c_init() has set those feature bits. That resets those bits back to what
      was queried from the hardware.
      
      Turning the bits off for bad microcode is easy to fix. That can just use
      setup_clear_cpu_cap() to force them off for all CPUs.
      
      I was less keen on forcing the feature bits *on* that way, just in case
      of inconsistencies. I appreciate that the kernel is going to get this
      utterly wrong if CPU features are not consistent, because it has already
      applied alternatives by the time secondary CPUs are brought up.
      
      But at least if setup_force_cpu_cap() isn't being used, we might have a
      chance of *detecting* the lack of the corresponding bit and either
      panicking or refusing to bring the offending CPU online.
      
      So ensure that the appropriate feature bits are set within get_cpu_cap()
      regardless of how many extra times it's called.
      
      Fixes: 2961298e ("x86/cpufeatures: Clean up Spectre v2 related CPUID flags")
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: karahmed@amazon.de
      Cc: peterz@infradead.org
      Cc: bp@alien8.de
      Link: https://lkml.kernel.org/r/1517322623-15261-1-git-send-email-dwmw@amazon.co.uk
      7fcae111
    • x86/spectre: Fix spelling mistake: "vunerable" -> "vulnerable" · e698dcdf
      Colin Ian King authored
      Trivial fix to spelling mistake in pr_err error message text.
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: kernel-janitors@vger.kernel.org
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20180130193218.9271-1-colin.king@canonical.com
      e698dcdf