Commit 963d0d60 authored by Linus Torvalds

Merge tag 'x86_bugs_for_v6.12_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 hw mitigation updates from Borislav Petkov:

 - Add CONFIG_ option for every hw CPU mitigation. The intent is to
   support configurations and scenarios where the mitigations code is
   irrelevant

 - Other small fixlets and improvements

* tag 'x86_bugs_for_v6.12_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/bugs: Fix handling when SRSO mitigation is disabled
  x86/bugs: Add missing NO_SSB flag
  Documentation/srso: Document a method for checking safe RET operates properly
  x86/bugs: Add a separate config for GDS
  x86/bugs: Remove GDS Force Kconfig option
  x86/bugs: Add a separate config for SSB
  x86/bugs: Add a separate config for Spectre V2
  x86/bugs: Add a separate config for SRBDS
  x86/bugs: Add a separate config for Spectre v1
  x86/bugs: Add a separate config for RETBLEED
  x86/bugs: Add a separate config for L1TF
  x86/bugs: Add a separate config for MMIO Stale Data
  x86/bugs: Add a separate config for TAA
  x86/bugs: Add a separate config for MDS
parents d580d74e 1dbb6b14
@@ -158,3 +158,72 @@ poisoned BTB entry and using that safe one for all function returns.
In older Zen1 and Zen2, this is accomplished using a reinterpretation
technique similar to the Retbleed one: srso_untrain_ret() and
srso_safe_ret().

Checking the safe RET mitigation actually works
-----------------------------------------------

In case one wants to validate whether the SRSO safe RET mitigation works
on a kernel, one could use two performance counters

* PMC_0xc8 - Count of RET/RET lw retired
* PMC_0xc9 - Count of RET/RET lw retired mispredicted

and compare the number of RETs retired properly vs those retired
mispredicted, in kernel mode. Another way of specifying those events
is::

  # perf list ex_ret_near_ret

  List of pre-defined events (to be used in -e or -M):

  core:
    ex_ret_near_ret
      [Retired Near Returns]
    ex_ret_near_ret_mispred
      [Retired Near Returns Mispredicted]

Either the command using the event mnemonics::

  # perf stat -e ex_ret_near_ret:k -e ex_ret_near_ret_mispred:k sleep 10s

or using the raw PMC numbers::

  # perf stat -e cpu/event=0xc8,umask=0/k -e cpu/event=0xc9,umask=0/k sleep 10s

should give the same counts, i.e., every RET retired should be
mispredicted::

  [root@brent: ~/kernel/linux/tools/perf> ./perf stat -e cpu/event=0xc8,umask=0/k -e cpu/event=0xc9,umask=0/k sleep 10s

   Performance counter stats for 'sleep 10s':

           137,167      cpu/event=0xc8,umask=0/k
           137,173      cpu/event=0xc9,umask=0/k

      10.004110303 seconds time elapsed

       0.000000000 seconds user
       0.004462000 seconds sys

vs the case when the mitigation is disabled (spec_rstack_overflow=off)
or not functioning properly, which usually shows a much smaller number
of mispredicted retired RETs vs the overall count of retired RETs during
a workload::

  [root@brent: ~/kernel/linux/tools/perf> ./perf stat -e cpu/event=0xc8,umask=0/k -e cpu/event=0xc9,umask=0/k sleep 10s

   Performance counter stats for 'sleep 10s':

           201,627      cpu/event=0xc8,umask=0/k
             4,074      cpu/event=0xc9,umask=0/k

      10.003267252 seconds time elapsed

       0.002729000 seconds user
       0.000000000 seconds sys

Also, there is a selftest which performs the above. Go to
tools/testing/selftests/x86/ and do::

  make srso
  ./srso
@@ -2610,24 +2610,15 @@ config MITIGATION_SLS
	  against straight line speculation. The kernel image might be slightly
	  larger.

config MITIGATION_GDS_FORCE
	bool "Force GDS Mitigation"
config MITIGATION_GDS
	bool "Mitigate Gather Data Sampling"
	depends on CPU_SUP_INTEL
	default n
	default y
	help
	  Gather Data Sampling (GDS) is a hardware vulnerability which allows
	  unprivileged speculative access to data which was previously stored in
	  vector registers.

	  This option is equivalent to setting gather_data_sampling=force on the
	  command line. The microcode mitigation is used if present, otherwise
	  AVX is disabled as a mitigation. On affected systems that are missing
	  the microcode any userspace code that unconditionally uses AVX will
	  break with this option set.

	  Setting this option on systems not vulnerable to GDS has no effect.

	  If in doubt, say N.
	  Enable mitigation for Gather Data Sampling (GDS). GDS is a hardware
	  vulnerability which allows unprivileged speculative access to data
	  which was previously stored in vector registers. The attacker uses gather
	  instructions to infer the stale vector register data.
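Purely as an illustration (not part of this patch; the function name below is hypothetical), a "gather instruction" of the kind GDS concerns is an AVX2 vector load such as vpgatherdd, which collects several elements through an index vector; on affected CPUs such loads could speculatively observe stale vector-register contents:

	#include <immintrin.h>

	/* Illustrative only: an AVX2 gather load (vpgatherdd). Build with -mavx2. */
	__m256i gather_eight_ints(const int *base, __m256i indices)
	{
		/* Loads eight 32-bit elements from base[indices[i]], scale 4 bytes. */
		return _mm256_i32gather_epi32(base, indices, 4);
	}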
config MITIGATION_RFDS
	bool "RFDS Mitigation"
@@ -2650,6 +2641,107 @@ config MITIGATION_SPECTRE_BHI
	  indirect branches.
	  See <file:Documentation/admin-guide/hw-vuln/spectre.rst>

config MITIGATION_MDS
	bool "Mitigate Microarchitectural Data Sampling (MDS) hardware bug"
	depends on CPU_SUP_INTEL
	default y
	help
	  Enable mitigation for Microarchitectural Data Sampling (MDS). MDS is
	  a hardware vulnerability which allows unprivileged speculative access
	  to data which is available in various CPU internal buffers.

	  See also <file:Documentation/admin-guide/hw-vuln/mds.rst>

config MITIGATION_TAA
	bool "Mitigate TSX Asynchronous Abort (TAA) hardware bug"
	depends on CPU_SUP_INTEL
	default y
	help
	  Enable mitigation for TSX Asynchronous Abort (TAA). TAA is a hardware
	  vulnerability that allows unprivileged speculative access to data
	  which is available in various CPU internal buffers by using
	  asynchronous aborts within an Intel TSX transactional region.

	  See also <file:Documentation/admin-guide/hw-vuln/tsx_async_abort.rst>

config MITIGATION_MMIO_STALE_DATA
	bool "Mitigate MMIO Stale Data hardware bug"
	depends on CPU_SUP_INTEL
	default y
	help
	  Enable mitigation for MMIO Stale Data hardware bugs. Processor MMIO
	  Stale Data Vulnerabilities are a class of memory-mapped I/O (MMIO)
	  vulnerabilities that can expose data. The vulnerabilities require the
	  attacker to have access to MMIO.

	  See also
	  <file:Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst>

config MITIGATION_L1TF
	bool "Mitigate L1 Terminal Fault (L1TF) hardware bug"
	depends on CPU_SUP_INTEL
	default y
	help
	  Mitigate L1 Terminal Fault (L1TF) hardware bug. L1 Terminal Fault is a
	  hardware vulnerability which allows unprivileged speculative access to data
	  available in the Level 1 Data Cache.

	  See <file:Documentation/admin-guide/hw-vuln/l1tf.rst>

config MITIGATION_RETBLEED
	bool "Mitigate RETBleed hardware bug"
	depends on (CPU_SUP_INTEL && MITIGATION_SPECTRE_V2) || MITIGATION_UNRET_ENTRY || MITIGATION_IBPB_ENTRY
	default y
	help
	  Enable mitigation for RETBleed (Arbitrary Speculative Code Execution
	  with Return Instructions) vulnerability. RETBleed is a speculative
	  execution attack which takes advantage of microarchitectural behavior
	  in many modern microprocessors, similar to Spectre v2. An
	  unprivileged attacker can use these flaws to bypass conventional
	  memory security restrictions to gain read access to privileged memory
	  that would otherwise be inaccessible.

config MITIGATION_SPECTRE_V1
	bool "Mitigate SPECTRE V1 hardware bug"
	default y
	help
	  Enable mitigation for Spectre V1 (Bounds Check Bypass). Spectre V1 is a
	  class of side channel attacks that takes advantage of speculative
	  execution that bypasses conditional branch instructions used for
	  memory access bounds check.

	  See also <file:Documentation/admin-guide/hw-vuln/spectre.rst>
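Purely as an illustration (not part of this patch; the array and function names are hypothetical), the bounds-check-bypass pattern this option mitigates has the following shape in C; in the kernel, such attacker-reachable indices are clamped with array_index_nospec():

	#include <stddef.h>
	#include <stdint.h>

	uint8_t array1[16];
	uint8_t array2[256 * 512];
	size_t array1_size = 16;

	/* Classic Spectre v1 gadget shape: with an attacker-controlled idx, the
	 * CPU may mispredict the bounds check and speculatively perform the
	 * dependent array2 access with an out-of-bounds index, leaving a
	 * secret-dependent cache footprint.
	 */
	uint8_t victim_load(size_t idx)
	{
		uint8_t value = 0;

		if (idx < array1_size)
			value = array2[array1[idx] * 512];

		return value;
	}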
config MITIGATION_SPECTRE_V2
	bool "Mitigate SPECTRE V2 hardware bug"
	default y
	help
	  Enable mitigation for Spectre V2 (Branch Target Injection). Spectre
	  V2 is a class of side channel attacks that takes advantage of
	  indirect branch predictors inside the processor. In Spectre variant 2
	  attacks, the attacker can steer speculative indirect branches in the
	  victim to gadget code by poisoning the branch target buffer of a CPU
	  used for predicting indirect branch addresses.

	  See also <file:Documentation/admin-guide/hw-vuln/spectre.rst>

config MITIGATION_SRBDS
	bool "Mitigate Special Register Buffer Data Sampling (SRBDS) hardware bug"
	depends on CPU_SUP_INTEL
	default y
	help
	  Enable mitigation for Special Register Buffer Data Sampling (SRBDS).
	  SRBDS is a hardware vulnerability that allows Microarchitectural Data
	  Sampling (MDS) techniques to infer values returned from special
	  register accesses. An unprivileged user can extract values returned
	  from RDRAND and RDSEED executed on another core or sibling thread
	  using MDS techniques.

	  See also
	  <file:Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst>

config MITIGATION_SSB
	bool "Mitigate Speculative Store Bypass (SSB) hardware bug"
	default y
	help
	  Enable mitigation for Speculative Store Bypass (SSB). SSB is a
	  hardware security vulnerability and its exploitation takes advantage
	  of speculative execution in a similar way to the Meltdown and Spectre
	  security vulnerabilities.
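Purely as an illustration (not part of this patch; all names are hypothetical), the Spectre v4 / SSB pattern is a load that speculatively bypasses an older, not-yet-resolved store and briefly uses the stale value:

	#include <stdint.h>

	uint8_t public_data, secret_data;
	uint8_t probe[256 * 512];
	uint8_t *ptr = &secret_data;

	/* The store retargeting "ptr" may still be unresolved when the dependent
	 * load executes, so the load can speculatively dereference the stale
	 * pointer and leave a secret-dependent cache footprint.
	 */
	uint8_t victim(void)
	{
		ptr = &public_data;
		return probe[*ptr * 512];
	}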
endif
config ARCH_HAS_ADD_PAGES
@@ -233,7 +233,8 @@ static void x86_amd_ssb_disable(void)
#define pr_fmt(fmt) "MDS: " fmt
/* Default mitigation for MDS-affected CPUs */
static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
static enum mds_mitigations mds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_FULL : MDS_MITIGATION_OFF;
static bool mds_nosmt __ro_after_init = false;
static const char * const mds_strings[] = {
@@ -293,7 +294,8 @@ enum taa_mitigations {
};
/* Default mitigation for TAA-affected CPUs */
static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
static enum taa_mitigations taa_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
static bool taa_nosmt __ro_after_init;
static const char * const taa_strings[] = {
@@ -391,7 +393,8 @@ enum mmio_mitigations {
};
/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
static enum mmio_mitigations mmio_mitigation __ro_after_init = MMIO_MITIGATION_VERW;
static enum mmio_mitigations mmio_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
static bool mmio_nosmt __ro_after_init = false;
static const char * const mmio_strings[] = {
@@ -605,7 +608,8 @@ enum srbds_mitigations {
SRBDS_MITIGATION_HYPERVISOR,
};
static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
static enum srbds_mitigations srbds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;
static const char * const srbds_strings[] = {
[SRBDS_MITIGATION_OFF] = "Vulnerable",
@@ -731,11 +735,8 @@ enum gds_mitigations {
GDS_MITIGATION_HYPERVISOR,
};
#if IS_ENABLED(CONFIG_MITIGATION_GDS_FORCE)
static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE;
#else
static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
#endif
static enum gds_mitigations gds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
static const char * const gds_strings[] = {
[GDS_MITIGATION_OFF] = "Vulnerable",
@@ -871,7 +872,8 @@ enum spectre_v1_mitigation {
};
static enum spectre_v1_mitigation spectre_v1_mitigation __ro_after_init =
SPECTRE_V1_MITIGATION_AUTO;
IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V1) ?
SPECTRE_V1_MITIGATION_AUTO : SPECTRE_V1_MITIGATION_NONE;
static const char * const spectre_v1_strings[] = {
[SPECTRE_V1_MITIGATION_NONE] = "Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers",
@@ -986,7 +988,7 @@ static const char * const retbleed_strings[] = {
static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
RETBLEED_MITIGATION_NONE;
static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
RETBLEED_CMD_AUTO;
IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;
static int __ro_after_init retbleed_nosmt = false;
@@ -1447,17 +1449,18 @@ static void __init spec_v2_print_cond(const char *reason, bool secure)
static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
{
enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
enum spectre_v2_mitigation_cmd cmd;
char arg[20];
int ret, i;
cmd = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ? SPECTRE_V2_CMD_AUTO : SPECTRE_V2_CMD_NONE;
if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") ||
cpu_mitigations_off())
return SPECTRE_V2_CMD_NONE;
ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
if (ret < 0)
return SPECTRE_V2_CMD_AUTO;
return cmd;
for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
if (!match_option(arg, ret, mitigation_options[i].option))
@@ -1467,8 +1470,8 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
}
if (i >= ARRAY_SIZE(mitigation_options)) {
pr_err("unknown option (%s). Switching to AUTO select\n", arg);
return SPECTRE_V2_CMD_AUTO;
pr_err("unknown option (%s). Switching to default mode\n", arg);
return cmd;
}
if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
@@ -2021,10 +2024,12 @@ static const struct {
static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
{
enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
enum ssb_mitigation_cmd cmd;
char arg[20];
int ret, i;
cmd = IS_ENABLED(CONFIG_MITIGATION_SSB) ?
SPEC_STORE_BYPASS_CMD_AUTO : SPEC_STORE_BYPASS_CMD_NONE;
if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable") ||
cpu_mitigations_off()) {
return SPEC_STORE_BYPASS_CMD_NONE;
@@ -2032,7 +2037,7 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
arg, sizeof(arg));
if (ret < 0)
return SPEC_STORE_BYPASS_CMD_AUTO;
return cmd;
for (i = 0; i < ARRAY_SIZE(ssb_mitigation_options); i++) {
if (!match_option(arg, ret, ssb_mitigation_options[i].option))
@@ -2043,8 +2048,8 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
}
if (i >= ARRAY_SIZE(ssb_mitigation_options)) {
pr_err("unknown option (%s). Switching to AUTO select\n", arg);
return SPEC_STORE_BYPASS_CMD_AUTO;
pr_err("unknown option (%s). Switching to default mode\n", arg);
return cmd;
}
}
@@ -2371,7 +2376,8 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
#define pr_fmt(fmt) "L1TF: " fmt
/* Default mitigation for L1TF-affected CPUs */
enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
enum l1tf_mitigations l1tf_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
#if IS_ENABLED(CONFIG_KVM_INTEL)
EXPORT_SYMBOL_GPL(l1tf_mitigation);
#endif
@@ -2551,10 +2557,9 @@ static void __init srso_select_mitigation(void)
{
bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
if (cpu_mitigations_off())
return;
if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
cpu_mitigations_off() ||
srso_cmd == SRSO_CMD_OFF) {
if (boot_cpu_has(X86_FEATURE_SBPB))
x86_pred_cmd = PRED_CMD_SBPB;
return;
@@ -2585,11 +2590,6 @@ static void __init srso_select_mitigation(void)
}
switch (srso_cmd) {
case SRSO_CMD_OFF:
if (boot_cpu_has(X86_FEATURE_SBPB))
x86_pred_cmd = PRED_CMD_SBPB;
return;
case SRSO_CMD_MICROCODE:
if (has_microcode) {
srso_mitigation = SRSO_MITIGATION_MICROCODE;
@@ -2643,6 +2643,8 @@ static void __init srso_select_mitigation(void)
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
}
break;
default:
break;
}
out:
@@ -1165,8 +1165,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
VULNWL_INTEL(INTEL_CORE_YONAH, NO_SSB),
VULNWL_INTEL(INTEL_ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
VULNWL_INTEL(INTEL_ATOM_AIRMONT_NP, NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
VULNWL_INTEL(INTEL_ATOM_AIRMONT_MID, NO_SSB | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | MSBDS_ONLY),
VULNWL_INTEL(INTEL_ATOM_AIRMONT_NP, NO_SSB | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
VULNWL_INTEL(INTEL_ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
VULNWL_INTEL(INTEL_ATOM_GOLDMONT_D, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
@@ -77,7 +77,7 @@ all_32: $(BINARIES_32)
all_64: $(BINARIES_64)
EXTRA_CLEAN := $(BINARIES_32) $(BINARIES_64)
EXTRA_CLEAN := $(BINARIES_32) $(BINARIES_64) srso
$(BINARIES_32): $(OUTPUT)/%_32: %.c helpers.h
$(CC) -m32 -o $@ $(CFLAGS) $(EXTRA_CFLAGS) $< $(EXTRA_FILES) -lrt -ldl -lm
// SPDX-License-Identifier: GPL-2.0
#include <linux/perf_event.h>
#include <cpuid.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr ret_attr, mret_attr;
	long long count_rets, count_rets_mispred;
	int rrets_fd, mrrets_fd;
	unsigned int cpuid1_eax, b, c, d;

	__cpuid(1, cpuid1_eax, b, c, d);

	/* The raw PMC events used below are defined for AMD Zen1-Zen4 parts only. */
	if (cpuid1_eax < 0x00800f00 ||
	    cpuid1_eax > 0x00afffff) {
		fprintf(stderr, "This needs to run on a Zen[1-4] machine (CPUID(1).EAX: 0x%x). Exiting...\n", cpuid1_eax);
		exit(EXIT_FAILURE);
	}

	memset(&ret_attr, 0, sizeof(struct perf_event_attr));
	memset(&mret_attr, 0, sizeof(struct perf_event_attr));

	/* PMC 0xc8: retired near returns, PMC 0xc9: retired near returns mispredicted,
	 * both counted in kernel mode only. */
	ret_attr.type = mret_attr.type = PERF_TYPE_RAW;
	ret_attr.size = mret_attr.size = sizeof(struct perf_event_attr);
	ret_attr.config = 0xc8;
	mret_attr.config = 0xc9;
	ret_attr.disabled = mret_attr.disabled = 1;
	ret_attr.exclude_user = mret_attr.exclude_user = 1;
	ret_attr.exclude_hv = mret_attr.exclude_hv = 1;

	rrets_fd = syscall(SYS_perf_event_open, &ret_attr, 0, -1, -1, 0);
	if (rrets_fd == -1) {
		perror("opening retired RETs fd");
		exit(EXIT_FAILURE);
	}

	mrrets_fd = syscall(SYS_perf_event_open, &mret_attr, 0, -1, -1, 0);
	if (mrrets_fd == -1) {
		perror("opening retired mispredicted RETs fd");
		exit(EXIT_FAILURE);
	}

	ioctl(rrets_fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(mrrets_fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(rrets_fd, PERF_EVENT_IOC_ENABLE, 0);
	ioctl(mrrets_fd, PERF_EVENT_IOC_ENABLE, 0);

	printf("Sleeping for 10 seconds\n");
	sleep(10);

	ioctl(rrets_fd, PERF_EVENT_IOC_DISABLE, 0);
	ioctl(mrrets_fd, PERF_EVENT_IOC_DISABLE, 0);

	read(rrets_fd, &count_rets, sizeof(long long));
	read(mrrets_fd, &count_rets_mispred, sizeof(long long));

	printf("RETs: (%lld retired <-> %lld mispredicted)\n",
		count_rets, count_rets_mispred);
	printf("SRSO Safe-RET mitigation works correctly if both counts are almost equal.\n");

	return 0;
}