- 21 Jul, 2023 3 commits
-
-
Daniel Sneddon authored
Gather Data Sampling (GDS) is a transient execution attack using gather instructions from the AVX2 and AVX512 extensions. This attack allows malicious code to infer data that was previously stored in vector registers. Systems that are not vulnerable to GDS will set the GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM guests that may think they are on vulnerable systems that are, in fact, not affected. Guests that are running on affected hosts where the mitigation is enabled are protected as if they were running on an unaffected system. On all hosts that are not affected or that are mitigated, set the GDS_NO bit. Signed-off-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Acked-by:
Josh Poimboeuf <jpoimboe@kernel.org>
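As a rough illustration of the host-side logic described above, here is a minimal sketch assuming kernel-internal helpers; gds_mitigation_enabled() is a hypothetical name and the GDS_NO bit position is an assumption, not taken from the commit:

  /* Sketch: advertise GDS_NO to guests when the host is unaffected or mitigated. */
  #define ARCH_CAP_GDS_NO  BIT_ULL(26)     /* assumed bit position */

  static u64 host_arch_capabilities(void)
  {
      u64 caps = 0;

      if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
          rdmsrl(MSR_IA32_ARCH_CAPABILITIES, caps);

      /* Not affected, or the microcode mitigation is active: guests may rely on GDS_NO. */
      if (!boot_cpu_has_bug(X86_BUG_GDS) || gds_mitigation_enabled())
          caps |= ARCH_CAP_GDS_NO;

      return caps;
  }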
-
Daniel Sneddon authored
Gather Data Sampling (GDS) is mitigated in microcode. However, on systems that haven't received the updated microcode, disabling AVX can act as a mitigation. Add a Kconfig option that uses the microcode mitigation if available and disables AVX otherwise. Setting this option has no effect on systems not affected by GDS. This is the equivalent of setting gather_data_sampling=force. Signed-off-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Acked-by:
Josh Poimboeuf <jpoimboe@kernel.org>
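A sketch of how such a Kconfig knob could feed the default mitigation selection in bugs.c; CONFIG_GDS_FORCE_MITIGATION and the enum names are assumed for illustration:

  enum gds_mitigations {
      GDS_MITIGATION_OFF,
      GDS_MITIGATION_FULL,    /* microcode mitigation */
      GDS_MITIGATION_FORCE,   /* disable AVX when no microcode mitigation exists */
  };

  #if IS_ENABLED(CONFIG_GDS_FORCE_MITIGATION)
  static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE;
  #else
  static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
  #endif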
-
Daniel Sneddon authored
The Gather Data Sampling (GDS) vulnerability allows malicious software to infer stale data previously stored in vector registers. This may include sensitive data such as cryptographic keys. GDS is mitigated in microcode, and systems with up-to-date microcode are protected by default. However, any affected system that is running with older microcode will still be vulnerable to GDS attacks.

Since the gather instructions used by the attacker are part of the AVX2 and AVX512 extensions, disabling these extensions prevents gather instructions from being executed, thereby mitigating GDS. Disabling AVX2 alone would be sufficient, but there is no way to do so with that granularity: clearing XCR0[2] disables all of AVX, with no option to disable just AVX2.

Add a kernel parameter gather_data_sampling=force that enables the microcode mitigation if available and otherwise disables AVX on affected systems. This option is ignored if mitigations=off is given on the command line.

This is a *big* hammer. It is known to break buggy userspace that uses incomplete, buggy AVX enumeration. Unfortunately, such userspace does exist in the wild: https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html

[ dhansen: add some more ominous warnings about disabling AVX ] Signed-off-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Acked-by:
Josh Poimboeuf <jpoimboe@kernel.org>
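A minimal sketch of the fallback path, assuming kernel-internal helpers; gds_microcode_mitigation_available() is hypothetical:

  static void __init gds_force_avx_off(void)
  {
      if (!boot_cpu_has_bug(X86_BUG_GDS) || cpu_mitigations_off())
          return;                         /* unaffected, or mitigations=off */

      if (!gds_microcode_mitigation_available()) {
          /* Hide AVX so XCR0[2] is never set and gather insns #UD in userspace. */
          setup_clear_cpu_cap(X86_FEATURE_AVX);
          pr_warn("GDS: microcode update needed, disabling AVX as mitigation\n");
      }
  }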
-
- 19 Jul, 2023 1 commit
-
-
Daniel Sneddon authored
Gather Data Sampling (GDS) is a hardware vulnerability which allows unprivileged speculative access to data which was previously stored in vector registers.

Intel processors that support AVX2 and AVX512 have gather instructions that fetch non-contiguous data elements from memory. On vulnerable hardware, when a gather instruction is transiently executed and encounters a fault, stale data from architectural or internal vector registers may get transiently stored to the destination vector register allowing an attacker to infer the stale data using typical side channel techniques like cache timing attacks.

This mitigation is different from many earlier ones for two reasons. First, it is enabled by default and a bit must be set to *DISABLE* it. This is the opposite of normal mitigation polarity. This means GDS can be mitigated simply by updating microcode and leaving the new control bit alone.

Second, GDS has a "lock" bit. This lock bit is there because the mitigation affects the hardware security features KeyLocker and SGX. It needs to be enabled and *STAY* enabled for these features to be mitigated against GDS.

The mitigation is enabled in the microcode by default. Disable it by setting gather_data_sampling=off or by disabling all mitigations with mitigations=off. The mitigation status can be checked by reading:

  /sys/devices/system/cpu/vulnerabilities/gather_data_sampling

Signed-off-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Acked-by:
Josh Poimboeuf <jpoimboe@kernel.org>
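For completeness, a tiny userspace reader for the sysfs file mentioned above; this is just an illustration, not part of the patch:

  #include <stdio.h>

  int main(void)
  {
      char line[256];
      FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/gather_data_sampling", "r");

      if (!f) {
          perror("gather_data_sampling");  /* file absent on kernels without the patch */
          return 1;
      }
      if (fgets(line, sizeof(line), f))
          printf("GDS status: %s", line);
      fclose(f);
      return 0;
  }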
-
- 16 Jun, 2023 1 commit
-
-
Thomas Gleixner authored
check_bugs() is a dumping ground for finalizing the CPU bringup. Only parts of it have to do with actual CPU bugs. Split it apart into arch_cpu_finalize_init() and cpu_select_mitigations(). Fix up the bogus 32-bit comments while at it. No functional change. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de
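The shape of the split, as a sketch rather than the literal upstream code:

  void __init arch_cpu_finalize_init(void)
  {
      /* CPU bringup finalization that used to hide inside check_bugs() ... */
      identify_boot_cpu();

      /* ... and the part that is actually about CPU bugs. */
      cpu_select_mitigations();
  }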
-
- 16 Mar, 2023 1 commit
-
-
Borislav Petkov (AMD) authored
The AutoIBRS bit gets set only on the BSP as part of determining which mitigation to enable on AMD. Setting it on the APs relies on the circumstance that the APs get booted through the trampoline and EFER - the MSR which contains that bit - gets replicated on every AP from the BSP. However, this can change in the future; considering the security implications of this bit not being set on every CPU, make sure it is set by verifying EFER later in the boot process and on every AP. Reported-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Acked-by:
Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20230224185257.o3mcmloei5zqu7wa@treble
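Roughly what "verify EFER on every AP" means in code; a sketch assuming the standard MSR helpers, not the exact patch:

  static void reassert_autoibrs(void)
  {
      u64 efer;

      if (!cpu_feature_enabled(X86_FEATURE_AUTOIBRS))
          return;

      rdmsrl(MSR_EFER, efer);
      if (!(efer & EFER_AUTOIBRS))                 /* trampoline replication failed? */
          wrmsrl(MSR_EFER, efer | EFER_AUTOIBRS);  /* re-assert on this CPU */
  }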
-
- 27 Feb, 2023 1 commit
-
-
KP Singh authored
When plain IBRS is enabled (not enhanced IBRS), the logic in spectre_v2_user_select_mitigation() determines that STIBP is not needed. The IBRS bit implicitly protects against cross-thread branch target injection. However, with legacy IBRS, the IBRS bit is cleared on returning to userspace for performance reasons which leaves userspace threads vulnerable to cross-thread branch target injection against which STIBP protects. Exclude IBRS from the spectre_v2_in_ibrs_mode() check to allow for enabling STIBP (through seccomp/prctl() by default or always-on, if selected by spectre_v2_user kernel cmdline parameter). [ bp: Massage. ] Fixes: 7c693f54 ("x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS") Reported-by:
José Oliveira <joseloliveira11@gmail.com> Reported-by:
Rodrigo Branco <rodrigo@kernelhacking.com> Signed-off-by:
KP Singh <kpsingh@kernel.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230220120127.1975241-1-kpsingh@kernel.org Link: https://lore.kernel.org/r/20230221184908.2349578-1-kpsingh@kernel.org
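The gist of the change, sketched: only the "always on" eIBRS flavors imply cross-thread protection for userspace, so the STIBP decision must not treat legacy IBRS as one of them. The names follow the commit's description but the body is illustrative:

  static bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
  {
      /* Legacy SPECTRE_V2_IBRS is deliberately excluded here. */
      return mode == SPECTRE_V2_EIBRS ||
             mode == SPECTRE_V2_EIBRS_RETPOLINE ||
             mode == SPECTRE_V2_EIBRS_LFENCE;
  }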
-
- 25 Jan, 2023 1 commit
-
-
Kim Phillips authored
The AMD Zen4 core supports a new feature called Automatic IBRS. It is a "set-and-forget" feature, meaning that, like Intel's Enhanced IBRS, hardware manages its IBRS mitigation resources automatically across CPL transitions. The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is enabled by setting MSR C000_0080 (EFER) bit 21. Enable Automatic IBRS by default if the CPU feature is present. It typically provides greater performance than the incumbent generic retpolines mitigation. Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum, since AMD Automatic IBRS and Intel Enhanced IBRS have similar enablement. Add NO_EIBRS_PBRSB to cpu_vuln_whitelist, since AMD Automatic IBRS isn't affected by PBRSB-eIBRS. The kernel command line option spectre_v2=eibrs is used to select AMD Automatic IBRS, if available. Signed-off-by:
Kim Phillips <kim.phillips@amd.com> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Acked-by:
Sean Christopherson <seanjc@google.com> Acked-by:
Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20230124163319.2277355-8-kim.phillips@amd.com
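A small userspace probe for the CPUID bit named above (CPUID_Fn80000021_EAX[8]); purely illustrative and independent of the kernel patch:

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

      if (__get_cpuid_count(0x80000021, 0, &eax, &ebx, &ecx, &edx))
          printf("Automatic IBRS %ssupported\n", (eax & (1u << 8)) ? "" : "not ");
      else
          printf("CPUID leaf 0x80000021 not available\n");
      return 0;
  }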
-
- 13 Jan, 2023 1 commit
-
-
Peter Zijlstra authored
objtool to the rescue:

  vmlinux.o: warning: objtool: intel_idle_ibrs+0x17: call to spec_ctrl_current() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_ibrs+0x27: call to wrmsrl.constprop.0() leaves .noinstr.text section

Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Tested-by:
Tony Lindgren <tony@atomide.com> Tested-by:
Ulf Hansson <ulf.hansson@linaro.org> Acked-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by:
Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.556912863@infradead.org
-
- 12 Jan, 2023 1 commit
-
-
Breno Leitao authored
Currently, x86_spec_ctrl_base is read at boot time and speculative bits are set if Kconfig items are enabled. For example, IBRS is enabled if CONFIG_CPU_IBRS_ENTRY is configured, etc. These MSR bits are not cleared if the mitigations are disabled. This is a problem when kexec-ing a kernel that has the mitigation disabled from a kernel that has the mitigation enabled: in this case, the MSR bits are not cleared during the new kernel boot. As a result, this can cause performance degradation that is hard to pinpoint. This problem does not happen if the machine is (hard) rebooted, because the bit will be cleared by default. [ bp: Massage. ] Suggested-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Breno Leitao <leitao@debian.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20221128153148.1129350-1-leitao@debian.org
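The intent of the fix, sketched with an illustrative helper name (not the actual patch):

  static void update_spec_ctrl_base_bit(u64 bit, bool enable)
  {
      if (enable)
          x86_spec_ctrl_base |= bit;
      else
          x86_spec_ctrl_base &= ~bit;   /* previously left set, e.g. across kexec */

      wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
  }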
-
- 10 Jan, 2023 1 commit
-
-
Ashok Raj authored
The prototype for the x86_read_arch_cap_msr() function has moved to arch/x86/include/asm/cpu.h - kill the redundant definition in arch/x86/kernel/cpu.h and include the header. Signed-off-by:
Ashok Raj <ashok.raj@intel.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Link: https://lore.kernel.org/r/20221128172451.792595-1-ashok.raj@intel.com
-
- 04 Jan, 2023 1 commit
-
-
Rodrigo Branco authored
We missed the window between the TIF flag update and the next reschedule. Signed-off-by:
Rodrigo Branco <bsdaemon@google.com> Reviewed-by:
Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: <stable@vger.kernel.org>
-
- 02 Dec, 2022 1 commit
-
-
Pawan Gupta authored
The "force" argument to write_spec_ctrl_current() is currently ambiguous as it does not guarantee the MSR write. This is due to the optimization that writes to the MSR happen only when the new value differs from the cached value. This is fine in most cases, but breaks for S3 resume when the cached MSR value gets out of sync with the hardware MSR value due to S3 resetting it. When x86_spec_ctrl_current is same as x86_spec_ctrl_base, the MSR write is skipped. Which results in SPEC_CTRL mitigations not getting restored. Move the MSR write from write_spec_ctrl_current() to a new function that unconditionally writes to the MSR. Update the callers accordingly and rename functions. [ bp: Rework a bit. ] Fixes: caa0ff24 ("x86/bugs: Keep a per-CPU IA32_SPEC_CTRL value") Suggested-by:
Borislav Petkov <bp@alien8.de> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/806d39b0bfec2fe8f50dc5446dff20f5bb24a959.1669821572.git.pawan.kumar.gupta@linux.intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
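The shape of the split, with illustrative function names (the commit renames the real ones):

  static void spec_ctrl_update_cond(u64 val)
  {
      if (this_cpu_read(x86_spec_ctrl_current) == val)
          return;                          /* hot path: skip redundant MSR writes */

      this_cpu_write(x86_spec_ctrl_current, val);
      wrmsrl(MSR_IA32_SPEC_CTRL, val);
  }

  static void spec_ctrl_write_always(u64 val)  /* e.g. S3 resume: the MSR was reset */
  {
      this_cpu_write(x86_spec_ctrl_current, val);
      wrmsrl(MSR_IA32_SPEC_CTRL, val);
  }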
-
- 22 Nov, 2022 1 commit
-
-
Juergen Gross authored
Convert the remaining cases of static_cpu_has(X86_FEATURE_XENPV) and boot_cpu_has(X86_FEATURE_XENPV) to use cpu_feature_enabled(), allowing more efficient code in case the kernel is configured without CONFIG_XEN_PV. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Acked-by:
Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20221104072701.20283-6-jgross@suse.com
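Why the conversion produces more efficient code, in one line: cpu_feature_enabled() collapses to a compile-time constant for features disabled in Kconfig, whereas boot_cpu_has() is always a runtime test. Illustrative only:

  static bool running_on_xen_pv(void)
  {
      /* Compiles to "return false" when CONFIG_XEN_PV is not set. */
      return cpu_feature_enabled(X86_FEATURE_XENPV);
  }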
-
- 09 Nov, 2022 2 commits
-
-
Paolo Bonzini authored
x86_virt_spec_ctrl only deals with the paravirtualized MSR_IA32_VIRT_SPEC_CTRL now and does not handle MSR_IA32_SPEC_CTRL anymore; remove the corresponding, unused argument. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Restoration of the host IA32_SPEC_CTRL value is probably too late with respect to the return thunk training sequence. With respect to the user/kernel boundary, AMD says, "If software chooses to toggle STIBP (e.g., set STIBP on kernel entry, and clear it on kernel exit), software should set STIBP to 1 before executing the return thunk training sequence." I assume the same requirements apply to the guest/host boundary. The return thunk training sequence is in vmenter.S, quite close to the VM-exit. On hosts without V_SPEC_CTRL, however, the host's IA32_SPEC_CTRL value is not restored until much later. To avoid this, move the restoration of host SPEC_CTRL to assembly and, for consistency, move the restoration of the guest SPEC_CTRL as well. This is not particularly difficult, apart from some care to cover both 32- and 64-bit, and to share code between SEV-ES and normal vmentry. Cc: stable@vger.kernel.org Fixes: a149180f ("x86: Add magic AMD return-thunk") Suggested-by:
Jim Mattson <jmattson@google.com> Reviewed-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
- 17 Oct, 2022 3 commits
-
-
Peter Zijlstra (Intel) authored
Debug aid, allows running retbleed=force,stuff on non-affected uarchs Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org>
-
Thomas Gleixner authored
The fully secure mitigation for RSB underflow on Intel SKL CPUs is IBRS, which inflicts up to 30% penalty for pathological syscall heavy work loads. Software based call depth tracking and RSB refill is not perfect, but reduces the attack surface massively. The penalty for the pathological case is about 8% which is still annoying but definitely more palatable than IBRS. Add a retbleed=stuff command line option to enable the call depth tracking and software refill of the RSB. This gives admins a choice. IBeeRS are safe and cause headaches, call depth tracking is considered to be s(t)ufficiently safe. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220915111149.029587352@infradead.org
-
Borislav Petkov authored
Those mitigations are very talkative; use the printing helper which pays attention to the buffer size. Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220809153419.10182-1-bp@alien8.de
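The commit does not name the helper, but this is presumably sysfs_emit(), which knows a sysfs buffer is a single page and clamps output to it; an illustrative converted show function, not the actual patch:

  static ssize_t example_show_state(char *buf)
  {
      /* sysfs_emit() refuses to write past PAGE_SIZE, unlike raw sprintf(). */
      return sysfs_emit(buf, "Mitigation: %s%s\n", "IBRS", ", STIBP: forced");
  }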
-
- 18 Aug, 2022 1 commit
-
-
Pawan Gupta authored
Older Intel CPUs that are not in the affected processor list for MMIO Stale Data vulnerabilities currently report "Not affected" in sysfs, which may not be correct: the vulnerability status for these older CPUs is unknown. Add known-not-affected CPUs to the whitelist. Report an "unknown" mitigation status for CPUs that are in neither the blacklist nor the whitelist and that also don't enumerate the MSR ARCH_CAPABILITIES bits which reflect hardware immunity to MMIO Stale Data vulnerabilities. Mitigation is not deployed when the status is unknown. [ bp: Massage, fixup. ] Fixes: 8d50cdf8 ("x86/speculation/mmio: Add sysfs reporting for Processor MMIO Stale Data") Suggested-by:
Andrew Cooper <andrew.cooper3@citrix.com> Suggested-by:
Tony Luck <tony.luck@intel.com> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/a932c154772f2121794a5f2eded1a11013114711.1657846269.git.pawan.kumar.gupta@linux.intel.com
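A sketch of the resulting sysfs reporting; the bug flag name for the unknown state follows the commit's description but the body is illustrative:

  static ssize_t mmio_stale_data_show_state(char *buf)
  {
      /* Neither whitelisted nor in the affected list: status is simply unknown. */
      if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
          return sysfs_emit(buf, "Unknown: No mitigations\n");

      return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]);
  }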
-
- 08 Aug, 2022 1 commit
-
-
Kim Phillips authored
AMD's "Technical Guidance for Mitigating Branch Type Confusion, Rev. 1.0 2022-07-12" whitepaper, under section 6.1.2 "IBPB On Privileged Mode Entry / SMT Safety" says: Similar to the Jmp2Ret mitigation, if the code on the sibling thread cannot be trusted, software should set STIBP to 1 or disable SMT to ensure SMT safety when using this mitigation. So, like already being done for retbleed=unret, and now also for retbleed=ibpb, force STIBP on machines that have it, and report its SMT vulnerability status accordingly. [ bp: Remove the "we" and remove "[AMD]" applicability parameter which doesn't work here. ] Fixes: 3ebc1700 ("x86/bugs: Add retbleed=ibpb") Signed-off-by:
Kim Phillips <kim.phillips@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # 5.10, 5.15, 5.19 Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537 Link: https://lore.kernel.org/r/20220804192201.439596-1-kim.phillips@amd.com
-
- 03 Aug, 2022 1 commit
-
-
Daniel Sneddon authored
tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as documented for RET instructions after VM exits. Mitigate it with a new one-entry RSB stuffing mechanism and a new LFENCE.

== Background ==

Indirect Branch Restricted Speculation (IBRS) was designed to help mitigate Branch Target Injection and Speculative Store Bypass, i.e. Spectre, attacks. IBRS prevents software run in less privileged modes from affecting branch prediction in more privileged modes. IBRS requires the MSR to be written on every privilege level change. To overcome some of the performance issues of IBRS, Enhanced IBRS was introduced. eIBRS is an "always on" IBRS, in other words, just turn it on once instead of writing the MSR on every privilege level change. When eIBRS is enabled, more privileged modes should be protected from less privileged modes, including protecting VMMs from guests.

== Problem ==

Here's a simplification of how guests are run on Linux' KVM:

  void run_kvm_guest(void)
  {
      // Prepare to run guest
      VMRESUME();
      // Clean up after guest runs
  }

The execution flow for that would look something like this to the processor:

  1. Host-side: call run_kvm_guest()
  2. Host-side: VMRESUME
  3. Guest runs, does "CALL guest_function"
  4. VM exit, host runs again
  5. Host might make some "cleanup" function calls
  6. Host-side: RET from run_kvm_guest()

Now, when back on the host, there are a couple of possible scenarios of post-guest activity the host needs to do before executing host code:

  * on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not touched and Linux has to do a 32-entry stuffing.
  * on eIBRS hardware, VM exit with IBRS enabled, or restoring the host IBRS=1 shortly after VM exit, has a documented side effect of flushing the RSB except in this PBRSB situation where the software needs to stuff the last RSB entry "by hand".

IOW, with eIBRS supported, host RET instructions should no longer be influenced by guest behavior after the host retires a single CALL instruction. However, if the RET instructions are "unbalanced" with CALLs after a VM exit as is the RET in #6, it might speculatively use the address for the instruction after the CALL in #3 as an RSB prediction. This is a problem since the (untrusted) guest controls this address. Balanced CALL/RET instruction pairs such as in step #5 are not affected.

== Solution ==

The PBRSB issue affects a wide variety of Intel processors which support eIBRS. But not all of them need mitigation. Today, X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e., eIBRS systems which enable legacy IBRS explicitly.

However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT and most of them need a new mitigation. Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT.

The lighter-weight mitigation performs a CALL instruction which is immediately followed by a speculative execution barrier (INT3). This steers speculative execution to the barrier -- just like a retpoline -- which ensures that speculation can never reach an unbalanced RET. Then, ensure this CALL is retired before continuing execution with an LFENCE.

In other words, the window of exposure is opened at VM exit where RET behavior is troublesome. While the window is open, force RSB predictions sampling for RET targets to a dead end at the INT3. Close the window with the LFENCE.
There is a subset of eIBRS systems which are not vulnerable to PBRSB. Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO. [ bp: Massage, incorporate review comments from Andy Cooper. ] Signed-off-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Co-developed-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de>
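Purely illustrative 64-bit inline-asm rendering of the lighter-weight sequence described above; the real thing lives in the kernel's vmexit assembly and alternatives macros:

  static __always_inline void pbrsb_single_fill(void)
  {
      asm volatile("call 1f\n\t"
                   "int3\n\t"              /* speculation dead end for the stale RSB entry */
                   "1: add $8, %%rsp\n\t"  /* undo the CALL's push */
                   "lfence"                /* ensure the CALL retired before going on */
                   ::: "memory");
  }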
-
- 29 Jul, 2022 1 commit
-
-
Thadeu Lima de Souza Cascardo authored
Some cloud hypervisors do not provide IBPB on very recent CPU processors, including AMD processors affected by Retbleed. Using IBPB before firmware calls on such systems would cause a GPF at boot like the one below. Do not enable such calls when IBPB support is not present.

  EFI Variables Facility v0.08 2004-May-17
  general protection fault, maybe for address 0x1: 0000 [#1] PREEMPT SMP NOPTI
  CPU: 0 PID: 24 Comm: kworker/u2:1 Not tainted 5.19.0-rc8+ #7
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
  Workqueue: efi_rts_wq efi_call_rts
  RIP: 0010:efi_call_rts
  Code: e8 37 33 58 ff 41 bf 48 00 00 00 49 89 c0 44 89 f9 48 83 c8 01 4c 89 c2 48 c1 ea 20 66 90 b9 49 00 00 00 b8 01 00 00 00 31 d2 <0f> 30 e8 7b 9f 5d ff e8 f6 f8 ff ff 4c 89 f1 4c 89 ea 4c 89 e6 48
  RSP: 0018:ffffb373800d7e38 EFLAGS: 00010246
  RAX: 0000000000000001 RBX: 0000000000000006 RCX: 0000000000000049
  RDX: 0000000000000000 RSI: ffff94fbc19d8fe0 RDI: ffff94fbc1b2b300
  RBP: ffffb373800d7e70 R08: 0000000000000000 R09: 0000000000000000
  R10: 000000000000000b R11: 000000000000000b R12: ffffb3738001fd78
  R13: ffff94fbc2fcfc00 R14: ffffb3738001fd80 R15: 0000000000000048
  FS:  0000000000000000(0000) GS:ffff94fc3da00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: ffff94fc30201000 CR3: 000000006f610000 CR4: 00000000000406f0
  Call Trace:
   <TASK>
   ? __wake_up
   process_one_work
   worker_thread
   ? rescuer_thread
   kthread
   ? kthread_complete_and_exit
   ret_from_fork
   </TASK>
  Modules linked in:

Fixes: 28a99e95 ("x86/amd: Use IBPB for firmware calls") Reported-by:
Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20220728122602.2500509-1-cascardo@canonical.com
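The gist of the fix as a sketch; the wrapper function is hypothetical, the feature flags are the existing ones:

  static void __init maybe_enable_ibpb_for_firmware(void)
  {
      if (boot_cpu_has(X86_FEATURE_IBPB))
          setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
      else
          pr_info("Firmware-call IBPB not enabled: CPU lacks IBPB\n");
  }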
-
- 20 Jul, 2022 1 commit
-
-
Pawan Gupta authored
IBRS mitigation for spectre_v2 forces write to MSR_IA32_SPEC_CTRL at every kernel entry/exit. On Enhanced IBRS parts setting MSR_IA32_SPEC_CTRL[IBRS] only once at boot is sufficient. MSR writes at every kernel entry/exit incur unnecessary performance loss. When Enhanced IBRS feature is present, print a warning about this unnecessary performance loss. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/2a5eaf54583c2bfe0edc4fea64006656256cca17.1657814857.git.pawan.kumar.gupta@linux.intel.com
-
- 18 Jul, 2022 1 commit
-
-
Peter Zijlstra authored
On AMD IBRS does not prevent Retbleed; as such use IBPB before a firmware call to flush the branch history state. And because in order to do an EFI call, the kernel maps a whole lot of the kernel page table into the EFI page table, do an IBPB just in case in order to prevent the scenario of poisoning the BTB and causing an EFI call using the unprotected RET there. [ bp: Massage. ] Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220715194550.793957-1-cascardo@canonical.com
-
- 16 Jul, 2022 1 commit
-
-
Kim Phillips authored
Remove a superfluous ' in the mitigation string. Fixes: e8ec1b6e ("x86/bugs: Enable STIBP for JMP2RET") Signed-off-by:
Kim Phillips <kim.phillips@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
- 14 Jul, 2022 1 commit
-
-
Jiapeng Chong authored
This symbol is not used outside of bugs.c, so mark it static. Reported-by:
Abaci Robot <abaci@linux.alibaba.com> Signed-off-by:
Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220714072939.71162-1-jiapeng.chong@linux.alibaba.com
-
- 09 Jul, 2022 1 commit
-
-
Pawan Gupta authored
Some Intel processors may use alternate predictors for RETs on RSB-underflow. This condition may be vulnerable to Branch History Injection (BHI) and intramode-BTI. The kernel earlier added spectre_v2 mitigation modes (eIBRS+Retpolines, eIBRS+LFENCE, Retpolines) which protect indirect CALLs and JMPs against such attacks. However, on RSB-underflow, RET target prediction may fall back to alternate predictors. As a result, RET's predicted target may get influenced by branch history. A new MSR_IA32_SPEC_CTRL bit (RRSBA_DIS_S) controls this fallback behavior when in kernel mode. When set, RETs will not take predictions from alternate predictors, hence mitigating RETs as well. Support for this is enumerated by CPUID.7.2.EDX[RRSBA_CTRL] (bit 2). For spectre v2 mitigation, when a user selects a mitigation that protects indirect CALLs and JMPs against BHI and intramode-BTI, set RRSBA_DIS_S also to protect RETs for the RSB-underflow case. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de>
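A simplified sketch of the enabling step; the function name matches the commit's intent but the body omits the additional enumeration checks a real implementation would do:

  static void __init spec_ctrl_disable_kernel_rrsba(void)
  {
      if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL))
          return;

      /* RETs no longer fall back to the alternate (history-based) predictors. */
      x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S;
      wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
  }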
-
- 08 Jul, 2022 1 commit
-
-
Thadeu Lima de Souza Cascardo authored
There are some VM configurations which have Skylake model but do not support IBPB. In those cases, when using retbleed=ibpb, userspace is going to be killed and kernel is going to panic. If the CPU does not support IBPB, warn and proceed with the auto option. Also, do not fallback to IBPB on AMD/Hygon systems if it is not supported. Fixes: 3ebc1700 ("x86/bugs: Add retbleed=ibpb") Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
- 29 Jun, 2022 1 commit
-
-
Peter Zijlstra authored
Do fine-grained Kconfig for all the various retbleed parts. NOTE: if your compiler doesn't support return thunks, this will silently 'upgrade' your mitigation to IBPB; you might not like this. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
- 27 Jun, 2022 10 commits
-
-
Josh Poimboeuf authored
On VMX, there are some balanced returns between the time the guest's SPEC_CTRL value is written, and the vmenter. Balanced returns (matched by a preceding call) are usually ok, but it's at least theoretically possible an NMI with a deep call stack could empty the RSB before one of the returns. For maximum paranoia, don't allow *any* returns (balanced or otherwise) between the SPEC_CTRL write and the vmenter. [ bp: Fix 32-bit build. ] Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Josh Poimboeuf authored
Prevent RSB underflow/poisoning attacks with RSB filling. While at it, add a bunch of comments to attempt to document the current state of tribal knowledge about RSB attacks and what exactly is being mitigated. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Josh Poimboeuf authored
On eIBRS systems, the returns in the vmexit return path from __vmx_vcpu_run() to vmx_vcpu_run() are exposed to RSB poisoning attacks. Fix that by moving the post-vmexit spec_ctrl handling to immediately after the vmexit. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Josh Poimboeuf authored
This mask has been made redundant by kvm_spec_ctrl_test_value(). And it doesn't even work when MSR interception is disabled, as the guest can just write to SPEC_CTRL directly. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Josh Poimboeuf authored
There's no need to recalculate the host value for every entry/exit. Just use the cached value in spec_ctrl_current(). Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Josh Poimboeuf authored
If the SMT state changes, SSBD might get accidentally disabled. Fix that. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Josh Poimboeuf authored
When booting with retbleed=auto, if the kernel wasn't built with CONFIG_CC_HAS_RETURN_THUNK, the mitigation falls back to IBPB. Make sure a warning is printed in that case. The IBPB fallback check is done twice, but it really only needs to be done once. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-
Peter Zijlstra authored
jmp2ret mitigates the easy-to-attack case at relatively low overhead. It mitigates the long speculation windows after a mispredicted RET, but it does not mitigate the short speculation window from arbitrary instruction boundaries. On Zen2, there is a chicken bit which needs setting, which mitigates "arbitrary instruction boundaries" down to just "basic block boundaries". But there is no fix for the short speculation window on basic block boundaries, other than to flush the entire BTB to evict all attacker predictions.

On the spectrum of "fast & blurry" -> "safe", there is (on top of STIBP or no-SMT):

  1) Nothing              System wide open
  2) jmp2ret              May stop a script kiddy
  3) jmp2ret+chickenbit   Raises the bar rather further
  4) IBPB                 Only thing which can count as "safe".

Tentative numbers put IBPB-on-entry at a 2.5x hit on Zen2, and a 10x hit on Zen1 according to lmbench.

[ bp: Fixup feature bit comments, document option, 32-bit build fix. ] Suggested-by: Andrew Cooper ...
-
Peter Zijlstra authored
Having IBRS enabled while the SMT sibling is idle unnecessarily slows down the running sibling. OTOH, disabling IBRS around idle takes two MSR writes, which will increase the idle latency. Therefore, only disable IBRS around deeper idle states. Shallow idle states are bounded by the tick in duration, since NOHZ is not allowed for them by virtue of their short target residency. Only do this for mwait-driven idle, since that keeps interrupts disabled across idle, which makes disabling IBRS vs IRQ-entry a non-issue. Note: C6 is a random threshold, most importantly C1 probably shouldn't disable IBRS, benchmarking needed. Suggested-by:
Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
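Very roughly, the idle-side idea under the stated assumptions (mwait-based idle, per-CPU cached SPEC_CTRL value); an illustrative sketch, not the intel_idle patch itself:

  static void mwait_idle_ibrs_off(unsigned long mwait_hint)
  {
      u64 spec_ctrl = this_cpu_read(x86_spec_ctrl_current);
      bool ibrs_on = spec_ctrl & SPEC_CTRL_IBRS;

      if (ibrs_on)
          native_wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl & ~SPEC_CTRL_IBRS);

      mwait_idle_with_hints(mwait_hint, 0);     /* deep C-state: latency is already long */

      if (ibrs_on)
          native_wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl);
  }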
-
Peter Zijlstra authored
Skylake suffers from RSB underflow speculation issues; report this vulnerability and its mitigation (spectre_v2=ibrs). [jpoimboe: cleanups, eibrs] Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Borislav Petkov <bp@suse.de>
-