Commit 40548c6b authored by Linus Torvalds

Merge branch 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 pti updates from Thomas Gleixner:
 "This contains:

   - a PTI bugfix to avoid setting reserved CR3 bits when PCID is
     disabled. This seems to cause issues on a virtual machine at least
     and is incorrect according to the AMD manual.

   - a PTI bugfix which disables the perf BTS facility if PTI is
     enabled. The BTS AUX buffer is not globally visible and causes the
     CPU to fault when the mapping disappears on switching CR3 to user
     space. A full fix which restores BTS on PTI is non-trivial and will
     be worked on.

   - PTI bugfixes for EFI and trusted boot which make sure that the user
     space visible page table entries have the NX bit cleared

   - removal of dead code in the PTI pagetable setup functions

   - add PTI documentation

   - add a selftest for vsyscall to verify that the kernel actually
     implements what it advertises.

   - a sysfs interface to expose vulnerability and mitigation
     information so there is a coherent way for users to retrieve the
     status.

   - the initial spectre_v2 mitigations, aka retpoline:

      + The necessary ASM thunk and compiler support

      + The ASM variants of retpoline and the conversion of affected ASM
        code

      + Make LFENCE serializing on AMD so it can be used as a
        speculation trap

      + The RSB fill after vmexit

   - initial objtool support for retpoline

  As I said in the status mail, this is most of the set of patches
  that should go into 4.15, except for two straightforward patches
  still on hold:

   - the retpoline add-on of LFENCE, which is waiting for ACKs

   - the RSB fill after context switch

  Both should be ready to go early next week and with that we'll have
  covered the major holes of spectre_v2 and go back to normality"

* 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (28 commits)
  x86,perf: Disable intel_bts when PTI
  security/Kconfig: Correct the Documentation reference for PTI
  x86/pti: Fix !PCID and sanitize defines
  selftests/x86: Add test_vsyscall
  x86/retpoline: Fill return stack buffer on vmexit
  x86/retpoline/irq32: Convert assembler indirect jumps
  x86/retpoline/checksum32: Convert assembler indirect jumps
  x86/retpoline/xen: Convert Xen hypercall indirect jumps
  x86/retpoline/hyperv: Convert assembler indirect jumps
  x86/retpoline/ftrace: Convert ftrace assembler indirect jumps
  x86/retpoline/entry: Convert entry assembler indirect jumps
  x86/retpoline/crypto: Convert crypto assembler indirect jumps
  x86/spectre: Add boot time option to select Spectre v2 mitigation
  x86/retpoline: Add initial retpoline support
  objtool: Allow alternatives to be ignored
  objtool: Detect jumps to retpoline thunks
  x86/pti: Make unpoison of pgd for trusted boot work for real
  x86/alternatives: Fix optimize_nops() checking
  sysfs/cpu: Fix typos in vulnerability documentation
  x86/cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC
  ...
parents 2c1cfa49 99a9dc98
@@ -375,3 +375,19 @@ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	information about CPUs heterogeneity.
		cpu_capacity: capacity of cpu#.
What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/meltdown
/sys/devices/system/cpu/vulnerabilities/spectre_v1
/sys/devices/system/cpu/vulnerabilities/spectre_v2
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities
The files are named after the code names of CPU
vulnerabilities. The output of those files reflects the
state of the CPUs in the system. Possible output values:
"Not affected" CPU is not affected by the vulnerability
"Vulnerable" CPU is affected and no mitigation in effect
"Mitigation: $M" CPU is affected and mitigation $M is in effect
@@ -2623,6 +2623,11 @@
	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
			Equivalent to smt=1.
nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2
(indirect branch prediction) vulnerability. System may
allow data leaks with this option, which is equivalent
to spectre_v2=off.
	noxsave		[BUGS=X86] Disables x86 extended register state save
			and restore using xsave. The kernel will fallback to
			enabling legacy floating-point and sse state.
@@ -2709,8 +2714,6 @@
			steal time is computed, but won't influence scheduler
			behaviour
nopti [X86-64] Disable kernel page table isolation
	nolapic		[X86-32,APIC] Do not enable or use the local APIC.
	nolapic_timer	[X86-32,APIC] Do not use the local APIC timer.
@@ -3291,11 +3294,20 @@
	pt.		[PARIDE]
			See Documentation/blockdev/paride.txt.
	pti=		[X86_64] Control Page Table Isolation of user and
			kernel address spaces.  Disabling this feature
			removes hardening, but improves performance of
			system calls and interrupts.

			on   - unconditionally enable
			off  - unconditionally disable
			auto - kernel detects whether your CPU model is
			       vulnerable to issues that PTI mitigates

			Not specifying this option is equivalent to pti=auto.

	nopti		[X86_64]
			Equivalent to pti=off
	pty.legacy_count=
			[KNL] Number of legacy pty's. Overwrites compiled-in
@@ -3946,6 +3958,29 @@
	sonypi.*=	[HW] Sony Programmable I/O Control Device driver
			See Documentation/laptops/sonypi.txt
spectre_v2= [X86] Control mitigation of Spectre variant 2
(indirect branch speculation) vulnerability.
on - unconditionally enable
off - unconditionally disable
auto - kernel detects whether your CPU model is
vulnerable
Selecting 'on' will, and 'auto' may, choose a
mitigation method at run time according to the
CPU, the available microcode, the setting of the
CONFIG_RETPOLINE configuration option, and the
compiler with which the kernel was built.
Specific mitigations can also be selected manually:
retpoline - replace indirect branches
			retpoline,generic - Google's original retpoline
retpoline,amd - AMD-specific minimal thunk
Not specifying this option is equivalent to
spectre_v2=auto.
	spia_io_base=	[HW,MTD]
	spia_fio_base=
	spia_pedr=
......
Overview
========
Page Table Isolation (pti, previously known as KAISER[1]) is a
countermeasure against attacks on the shared user/kernel address
space such as the "Meltdown" approach[2].
To mitigate this class of attacks, we create an independent set of
page tables for use only when running userspace applications. When
the kernel is entered via syscalls, interrupts or exceptions, the
page tables are switched to the full "kernel" copy. When the system
switches back to user mode, the user copy is used again.
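To make this switching concrete, the sketch below shows, in plain C, the
bit manipulation involved in moving between the two CR3 values. It is
purely illustrative and assumes the layout used by this series (bit 12
selects the user half of the 8k PGD, bit 11 selects the user PCID); the
real switch is done in assembly by the SWITCH_TO_KERNEL_CR3 and
SWITCH_TO_USER_CR3 macros in arch/x86/entry/calling.h.

  /* Illustrative only -- not the kernel implementation. */
  #define PTI_USER_PGTABLE_BIT	12	/* which 4k half of the 8k PGD */
  #define PTI_USER_PCID_BIT	11	/* user vs. kernel ASID */
  #define PTI_USER_SWITCH_MASK	((1UL << PTI_USER_PGTABLE_BIT) | \
				 (1UL << PTI_USER_PCID_BIT))

  static inline unsigned long kernel_cr3(unsigned long cr3)
  {
	/* Clear both bits: kernel half of the PGD, kernel PCID. */
	return cr3 & ~PTI_USER_SWITCH_MASK;
  }

  static inline unsigned long user_cr3(unsigned long cr3)
  {
	/* Set both bits: user half of the PGD, user PCID. */
	return cr3 | PTI_USER_SWITCH_MASK;
  }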
The userspace page tables contain only a minimal amount of kernel
data: only what is needed to enter/exit the kernel such as the
entry/exit functions themselves and the interrupt descriptor table
(IDT). There are a few strictly unnecessary things that get mapped
such as the first C function when entering an interrupt (see
comments in pti.c).
This approach helps to ensure that side-channel attacks leveraging
the paging structures do not function when PTI is enabled. It can be
enabled by setting CONFIG_PAGE_TABLE_ISOLATION=y at compile time.
Once enabled at compile-time, it can be disabled at boot with the
'nopti' or 'pti=' kernel parameters (see kernel-parameters.txt).
Page Table Management
=====================
When PTI is enabled, the kernel manages two sets of page tables.
The first set is very similar to the single set which is present in
kernels without PTI. This includes a complete mapping of userspace
that the kernel can use for things like copy_to_user().
Although _complete_, the user portion of the kernel page tables is
crippled by setting the NX bit in the top level. This ensures
that any missed kernel->user CR3 switch will immediately crash
userspace upon executing its first instruction.
The userspace page tables map only the kernel data needed to enter
and exit the kernel. This data is entirely contained in the 'struct
cpu_entry_area' structure which is placed in the fixmap which gives
each CPU's copy of the area a compile-time-fixed virtual address.
For new userspace mappings, the kernel makes the entries in its
page tables like normal. The only difference is when the kernel
makes entries in the top (PGD) level. In addition to setting the
entry in the main kernel PGD, a copy of the entry is made in the
userspace page tables' PGD.
This sharing at the PGD level also inherently shares all the lower
layers of the page tables. This leaves a single, shared set of
userspace page tables to manage. One PTE to lock, one set of
accessed bits, dirty bits, etc...
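A minimal sketch of that PGD mirroring, with illustrative names (the
real logic lives in arch/x86/mm/pti.c, see pti_set_user_pgd()):

  /*
   * Illustrative sketch, not the kernel code.  kernel_to_user_pgdp()
   * stands for "the same slot in the user half of the 8k PGD".
   */
  static pgd_t pti_install_user_pgd(pgd_t *kernel_pgdp, pgd_t pgd)
  {
	/* Mirror the new entry into the userspace copy of the PGD. */
	kernel_to_user_pgdp(kernel_pgdp)->pgd = pgd.pgd;

	/*
	 * Cripple the userspace-address portion of the kernel copy
	 * with NX, so a missed CR3 switch crashes userspace on its
	 * first instruction fetch.
	 */
	pgd.pgd |= _PAGE_NX;

	/* The caller stores the returned value in the kernel PGD. */
	return pgd;
  }

This NX poisoning is also why trusted boot and EFI need fixups in this
series: they really do execute from such low addresses and must have
the poison cleared again.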
Overhead
========
Protection against side-channel attacks is important. But,
this protection comes at a cost:
1. Increased Memory Use
a. Each process now needs an order-1 PGD instead of order-0.
(Consumes an additional 4k per process).
b. The 'cpu_entry_area' structure must be 2MB in size and 2MB
aligned so that it can be mapped by setting a single PMD
entry. This consumes nearly 2MB of RAM once the kernel
is decompressed, but no space in the kernel image itself.
2. Runtime Cost
a. CR3 manipulation to switch between the page table copies
must be done at interrupt, syscall, and exception entry
and exit (it can be skipped when the kernel is interrupted,
though.) Moves to CR3 are on the order of a hundred
cycles, and are required at every entry and exit.
b. A "trampoline" must be used for SYSCALL entry. This
trampoline depends on a smaller set of resources than the
non-PTI SYSCALL entry code, so requires mapping fewer
things into the userspace page tables. The downside is
that stacks must be switched at entry time.
c. Global pages are disabled for all kernel structures not
mapped into both kernel and userspace page tables. This
feature of the MMU allows different processes to share TLB
entries mapping the kernel. Losing the feature means more
TLB misses after a context switch. The actual loss of
performance is very small, however, never exceeding 1%.
d. Process Context IDentifiers (PCID) is a CPU feature that
allows us to skip flushing the entire TLB when switching page
tables by setting a special bit in CR3 when the page tables
are changed. This makes switching the page tables (at context
switch, or kernel entry/exit) cheaper. But, on systems with
PCID support, the context switch code must flush both the user
and kernel entries out of the TLB. The user PCID TLB flush is
deferred until the exit to userspace, minimizing the cost; a sketch
of this deferral follows this list.  See intel.com/sdm for the gory
PCID/INVPCID details.
e. The userspace page tables must be populated for each new
process. Even without PTI, the shared kernel mappings
are created by copying top-level (PGD) entries into each
new process. But, with PTI, there are now *two* kernel
mappings: one in the kernel page tables that maps everything
and one for the entry/exit structures. At fork(), we need to
copy both.
f. In addition to the fork()-time copying, there must also
be an update to the userspace PGD any time a set_pgd() is done
on a PGD used to map userspace. This ensures that the kernel
and userspace copies always map the same userspace
memory.
g. On systems without PCID support, each CR3 write flushes
the entire TLB. That means that each syscall, interrupt
or exception flushes the TLB.
h. INVPCID is a TLB-flushing instruction which allows flushing
of TLB entries for non-current PCIDs. Some systems support
PCIDs, but do not support INVPCID. On these systems, addresses
can only be flushed from the TLB for the current PCID. When
flushing a kernel address, we need to flush all PCIDs, so a
single kernel address flush will require a TLB-flushing CR3
write upon the next use of every PCID.
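The deferred user flush mentioned in item c, sketched below. This is a
simplification under stated assumptions: the real code keeps a per-CPU
flush mask (THIS_CPU_user_pcid_flush_mask) that the exit assembly in
arch/x86/entry/calling.h tests before writing CR3.

  /*
   * Illustrative sketch, not the kernel code.  One bit per user ASID
   * that still needs a TLB flush on this CPU.
   */
  #define CR3_NOFLUSH	(1UL << 63)	/* "do not flush" bit in CR3 */

  static unsigned short user_pcid_flush_mask;	/* per-CPU in reality */

  static void invalidate_user_asid(unsigned int asid)
  {
	/* Kernel-side invalidation: just remember, flush on exit. */
	user_pcid_flush_mask |= 1U << asid;
  }

  static unsigned long build_user_cr3(unsigned long cr3, unsigned int asid)
  {
	if (user_pcid_flush_mask & (1U << asid))
		user_pcid_flush_mask &= ~(1U << asid);	/* plain write: flush */
	else
		cr3 |= CR3_NOFLUSH;			/* keep the user TLB */
	return cr3;
  }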
Possible Future Work
====================
1. We can be more careful about not actually writing to CR3
unless its value is actually changed.
2. Allow PTI to be enabled/disabled at runtime in addition to the
boot-time switching.
Testing
========
To test stability of PTI, the following test procedure is recommended,
ideally doing all of these in parallel:
1. Set CONFIG_DEBUG_ENTRY=y
2. Run several copies of all of the tools/testing/selftests/x86/ tests
(excluding MPX and protection_keys) in a loop on multiple CPUs for
several minutes. These tests frequently uncover corner cases in the
kernel entry code. In general, old kernels might cause these tests
themselves to crash, but they should never crash the kernel.
3. Run the 'perf' tool in a mode (top or record) that generates many
frequent performance monitoring non-maskable interrupts (see "NMI"
in /proc/interrupts). This exercises the NMI entry/exit code which
is known to trigger bugs in code paths that did not expect to be
interrupted, including nested NMIs. Using "-c" boosts the rate of
NMIs, and using two -c with separate counters encourages nested NMIs
and less deterministic behavior.
while true; do perf record -c 10000 -e instructions,cycles -a sleep 10; done
4. Launch a KVM virtual machine.
5. Run 32-bit binaries on systems supporting the SYSCALL instruction.
This has been a lightly-tested code path and needs extra scrutiny.
Debugging
=========
Bugs in PTI cause a few different signatures of crashes
that are worth noting here.
* Failures of the selftests/x86 code. Usually a bug in one of the
more obscure corners of entry_64.S
* Crashes in early boot, especially around CPU bringup. Bugs
in the trampoline code or mappings cause these.
* Crashes at the first interrupt. Caused by bugs in entry_64.S,
like screwing up a page table switch. Also caused by
incorrectly mapping the IRQ handler entry code.
* Crashes at the first NMI. The NMI code is separate from main
interrupt handlers and can have bugs that do not affect
normal interrupts. Also caused by incorrectly mapping NMI
code. NMIs that interrupt the entry code must be very
careful and can be the cause of crashes that show up when
running perf.
* Kernel crashes at the first exit to userspace. entry_64.S
bugs, or failing to map some of the exit code.
* Crashes at first interrupt that interrupts userspace. The paths
in entry_64.S that return to userspace are sometimes separate
from the ones that return to the kernel.
* Double faults: overflowing the kernel stack because of page
faults upon page faults. Caused by touching non-pti-mapped
data in the entry code, or forgetting to switch to kernel
CR3 before calling into C functions which are not pti-mapped.
* Userspace segfaults early in boot, sometimes manifesting
as mount(8) failing to mount the rootfs. These have
tended to be TLB invalidation issues. Usually invalidating
the wrong PCID, or otherwise missing an invalidation.
1. https://gruss.cc/files/kaiser.pdf
2. https://meltdownattack.com/meltdown.pdf
...@@ -88,6 +88,7 @@ config X86 ...@@ -88,6 +88,7 @@ config X86
select GENERIC_CLOCKEVENTS_MIN_ADJUST select GENERIC_CLOCKEVENTS_MIN_ADJUST
select GENERIC_CMOS_UPDATE select GENERIC_CMOS_UPDATE
select GENERIC_CPU_AUTOPROBE select GENERIC_CPU_AUTOPROBE
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP select GENERIC_EARLY_IOREMAP
select GENERIC_FIND_FIRST_BIT select GENERIC_FIND_FIRST_BIT
select GENERIC_IOMAP select GENERIC_IOMAP
...@@ -428,6 +429,19 @@ config GOLDFISH ...@@ -428,6 +429,19 @@ config GOLDFISH
def_bool y def_bool y
depends on X86_GOLDFISH depends on X86_GOLDFISH
config RETPOLINE
bool "Avoid speculative indirect branches in kernel"
default y
help
Compile kernel with the retpoline compiler options to guard against
kernel-to-user data leaks by avoiding speculative indirect
branches. Requires a compiler with -mindirect-branch=thunk-extern
support for full protection. The kernel may run slower.
Without compiler support, at least indirect branches in assembler
code are eliminated. Since this includes the syscall entry path,
it is not entirely pointless.
config INTEL_RDT config INTEL_RDT
bool "Intel Resource Director Technology support" bool "Intel Resource Director Technology support"
default n default n
......
...@@ -230,6 +230,16 @@ KBUILD_CFLAGS += -Wno-sign-compare ...@@ -230,6 +230,16 @@ KBUILD_CFLAGS += -Wno-sign-compare
# #
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
# Avoid indirect branches in kernel to deal with Spectre
ifdef CONFIG_RETPOLINE
RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register)
ifneq ($(RETPOLINE_CFLAGS),)
KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE
else
$(warning CONFIG_RETPOLINE=y, but not supported by the compiler. Toolchain update recommended.)
endif
endif
archscripts: scripts_basic archscripts: scripts_basic
$(Q)$(MAKE) $(build)=arch/x86/tools relocs $(Q)$(MAKE) $(build)=arch/x86/tools relocs
......
...@@ -32,6 +32,7 @@ ...@@ -32,6 +32,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/inst.h> #include <asm/inst.h>
#include <asm/frame.h> #include <asm/frame.h>
#include <asm/nospec-branch.h>
/* /*
* The following macros are used to move an (un)aligned 16 byte value to/from * The following macros are used to move an (un)aligned 16 byte value to/from
...@@ -2884,7 +2885,7 @@ ENTRY(aesni_xts_crypt8) ...@@ -2884,7 +2885,7 @@ ENTRY(aesni_xts_crypt8)
pxor INC, STATE4 pxor INC, STATE4
movdqu IV, 0x30(OUTP) movdqu IV, 0x30(OUTP)
call *%r11 CALL_NOSPEC %r11
movdqu 0x00(OUTP), INC movdqu 0x00(OUTP), INC
pxor INC, STATE1 pxor INC, STATE1
...@@ -2929,7 +2930,7 @@ ENTRY(aesni_xts_crypt8) ...@@ -2929,7 +2930,7 @@ ENTRY(aesni_xts_crypt8)
_aesni_gf128mul_x_ble() _aesni_gf128mul_x_ble()
movups IV, (IVP) movups IV, (IVP)
call *%r11 CALL_NOSPEC %r11
movdqu 0x40(OUTP), INC movdqu 0x40(OUTP), INC
pxor INC, STATE1 pxor INC, STATE1
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/frame.h> #include <asm/frame.h>
#include <asm/nospec-branch.h>
#define CAMELLIA_TABLE_BYTE_LEN 272 #define CAMELLIA_TABLE_BYTE_LEN 272
...@@ -1227,7 +1228,7 @@ camellia_xts_crypt_16way: ...@@ -1227,7 +1228,7 @@ camellia_xts_crypt_16way:
vpxor 14 * 16(%rax), %xmm15, %xmm14; vpxor 14 * 16(%rax), %xmm15, %xmm14;
vpxor 15 * 16(%rax), %xmm15, %xmm15; vpxor 15 * 16(%rax), %xmm15, %xmm15;
call *%r9; CALL_NOSPEC %r9;
addq $(16 * 16), %rsp; addq $(16 * 16), %rsp;
......
...@@ -12,6 +12,7 @@ ...@@ -12,6 +12,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/frame.h> #include <asm/frame.h>
#include <asm/nospec-branch.h>
#define CAMELLIA_TABLE_BYTE_LEN 272 #define CAMELLIA_TABLE_BYTE_LEN 272
...@@ -1343,7 +1344,7 @@ camellia_xts_crypt_32way: ...@@ -1343,7 +1344,7 @@ camellia_xts_crypt_32way:
vpxor 14 * 32(%rax), %ymm15, %ymm14; vpxor 14 * 32(%rax), %ymm15, %ymm14;
vpxor 15 * 32(%rax), %ymm15, %ymm15; vpxor 15 * 32(%rax), %ymm15, %ymm15;
call *%r9; CALL_NOSPEC %r9;
addq $(16 * 32), %rsp; addq $(16 * 32), %rsp;
......
...@@ -45,6 +45,7 @@ ...@@ -45,6 +45,7 @@
#include <asm/inst.h> #include <asm/inst.h>
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/nospec-branch.h>
## ISCSI CRC 32 Implementation with crc32 and pclmulqdq Instruction ## ISCSI CRC 32 Implementation with crc32 and pclmulqdq Instruction
...@@ -172,7 +173,7 @@ continue_block: ...@@ -172,7 +173,7 @@ continue_block:
movzxw (bufp, %rax, 2), len movzxw (bufp, %rax, 2), len
lea crc_array(%rip), bufp lea crc_array(%rip), bufp
lea (bufp, len, 1), bufp lea (bufp, len, 1), bufp
jmp *bufp JMP_NOSPEC bufp
################################################################ ################################################################
## 2a) PROCESS FULL BLOCKS: ## 2a) PROCESS FULL BLOCKS:
......
...@@ -198,8 +198,11 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -198,8 +198,11 @@ For 32-bit we have the following conventions - kernel is built with
* PAGE_TABLE_ISOLATION PGDs are 8k. Flip bit 12 to switch between the two * PAGE_TABLE_ISOLATION PGDs are 8k. Flip bit 12 to switch between the two
* halves: * halves:
*/ */
#define PTI_SWITCH_PGTABLES_MASK (1<<PAGE_SHIFT) #define PTI_USER_PGTABLE_BIT PAGE_SHIFT
#define PTI_SWITCH_MASK (PTI_SWITCH_PGTABLES_MASK|(1<<X86_CR3_PTI_SWITCH_BIT)) #define PTI_USER_PGTABLE_MASK (1 << PTI_USER_PGTABLE_BIT)
#define PTI_USER_PCID_BIT X86_CR3_PTI_PCID_USER_BIT
#define PTI_USER_PCID_MASK (1 << PTI_USER_PCID_BIT)
#define PTI_USER_PGTABLE_AND_PCID_MASK (PTI_USER_PCID_MASK | PTI_USER_PGTABLE_MASK)
.macro SET_NOFLUSH_BIT reg:req .macro SET_NOFLUSH_BIT reg:req
bts $X86_CR3_PCID_NOFLUSH_BIT, \reg bts $X86_CR3_PCID_NOFLUSH_BIT, \reg
...@@ -208,7 +211,7 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -208,7 +211,7 @@ For 32-bit we have the following conventions - kernel is built with
.macro ADJUST_KERNEL_CR3 reg:req .macro ADJUST_KERNEL_CR3 reg:req
ALTERNATIVE "", "SET_NOFLUSH_BIT \reg", X86_FEATURE_PCID ALTERNATIVE "", "SET_NOFLUSH_BIT \reg", X86_FEATURE_PCID
/* Clear PCID and "PAGE_TABLE_ISOLATION bit", point CR3 at kernel pagetables: */ /* Clear PCID and "PAGE_TABLE_ISOLATION bit", point CR3 at kernel pagetables: */
andq $(~PTI_SWITCH_MASK), \reg andq $(~PTI_USER_PGTABLE_AND_PCID_MASK), \reg
.endm .endm
.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req .macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
...@@ -239,15 +242,19 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -239,15 +242,19 @@ For 32-bit we have the following conventions - kernel is built with
/* Flush needed, clear the bit */ /* Flush needed, clear the bit */
btr \scratch_reg, THIS_CPU_user_pcid_flush_mask btr \scratch_reg, THIS_CPU_user_pcid_flush_mask
movq \scratch_reg2, \scratch_reg movq \scratch_reg2, \scratch_reg
jmp .Lwrcr3_\@ jmp .Lwrcr3_pcid_\@
.Lnoflush_\@: .Lnoflush_\@:
movq \scratch_reg2, \scratch_reg movq \scratch_reg2, \scratch_reg
SET_NOFLUSH_BIT \scratch_reg SET_NOFLUSH_BIT \scratch_reg
.Lwrcr3_pcid_\@:
/* Flip the ASID to the user version */
orq $(PTI_USER_PCID_MASK), \scratch_reg
.Lwrcr3_\@: .Lwrcr3_\@:
/* Flip the PGD and ASID to the user version */ /* Flip the PGD to the user version */
orq $(PTI_SWITCH_MASK), \scratch_reg orq $(PTI_USER_PGTABLE_MASK), \scratch_reg
mov \scratch_reg, %cr3 mov \scratch_reg, %cr3
.Lend_\@: .Lend_\@:
.endm .endm
...@@ -263,17 +270,12 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -263,17 +270,12 @@ For 32-bit we have the following conventions - kernel is built with
movq %cr3, \scratch_reg movq %cr3, \scratch_reg
movq \scratch_reg, \save_reg movq \scratch_reg, \save_reg
/* /*
* Is the "switch mask" all zero? That means that both of * Test the user pagetable bit. If set, then the user page tables
* these are zero: * are active. If clear CR3 already has the kernel page table
* * active.
* 1. The user/kernel PCID bit, and
* 2. The user/kernel "bit" that points CR3 to the
* bottom half of the 8k PGD
*
* That indicates a kernel CR3 value, not a user CR3.
*/ */
testq $(PTI_SWITCH_MASK), \scratch_reg bt $PTI_USER_PGTABLE_BIT, \scratch_reg
jz .Ldone_\@ jnc .Ldone_\@
ADJUST_KERNEL_CR3 \scratch_reg ADJUST_KERNEL_CR3 \scratch_reg
movq \scratch_reg, %cr3 movq \scratch_reg, %cr3
...@@ -290,7 +292,7 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -290,7 +292,7 @@ For 32-bit we have the following conventions - kernel is built with
* KERNEL pages can always resume with NOFLUSH as we do * KERNEL pages can always resume with NOFLUSH as we do
* explicit flushes. * explicit flushes.
*/ */
bt $X86_CR3_PTI_SWITCH_BIT, \save_reg bt $PTI_USER_PGTABLE_BIT, \save_reg
jnc .Lnoflush_\@ jnc .Lnoflush_\@
/* /*
......
...@@ -44,6 +44,7 @@ ...@@ -44,6 +44,7 @@
#include <asm/asm.h> #include <asm/asm.h>
#include <asm/smap.h> #include <asm/smap.h>
#include <asm/frame.h> #include <asm/frame.h>
#include <asm/nospec-branch.h>
.section .entry.text, "ax" .section .entry.text, "ax"
...@@ -290,7 +291,7 @@ ENTRY(ret_from_fork) ...@@ -290,7 +291,7 @@ ENTRY(ret_from_fork)
/* kernel thread */ /* kernel thread */
1: movl %edi, %eax 1: movl %edi, %eax
call *%ebx CALL_NOSPEC %ebx
/* /*
* A kernel thread is allowed to return here after successfully * A kernel thread is allowed to return here after successfully
* calling do_execve(). Exit to userspace to complete the execve() * calling do_execve(). Exit to userspace to complete the execve()
...@@ -919,7 +920,7 @@ common_exception: ...@@ -919,7 +920,7 @@ common_exception:
movl %ecx, %es movl %ecx, %es
TRACE_IRQS_OFF TRACE_IRQS_OFF
movl %esp, %eax # pt_regs pointer movl %esp, %eax # pt_regs pointer
call *%edi CALL_NOSPEC %edi
jmp ret_from_exception jmp ret_from_exception
END(common_exception) END(common_exception)
......
...@@ -37,6 +37,7 @@ ...@@ -37,6 +37,7 @@
#include <asm/pgtable_types.h> #include <asm/pgtable_types.h>
#include <asm/export.h> #include <asm/export.h>
#include <asm/frame.h> #include <asm/frame.h>
#include <asm/nospec-branch.h>
#include <linux/err.h> #include <linux/err.h>
#include "calling.h" #include "calling.h"
...@@ -191,7 +192,7 @@ ENTRY(entry_SYSCALL_64_trampoline) ...@@ -191,7 +192,7 @@ ENTRY(entry_SYSCALL_64_trampoline)
*/ */
pushq %rdi pushq %rdi
movq $entry_SYSCALL_64_stage2, %rdi movq $entry_SYSCALL_64_stage2, %rdi
jmp *%rdi JMP_NOSPEC %rdi
END(entry_SYSCALL_64_trampoline) END(entry_SYSCALL_64_trampoline)
.popsection .popsection
...@@ -270,7 +271,12 @@ entry_SYSCALL_64_fastpath: ...@@ -270,7 +271,12 @@ entry_SYSCALL_64_fastpath:
* It might end up jumping to the slow path. If it jumps, RAX * It might end up jumping to the slow path. If it jumps, RAX
* and all argument registers are clobbered. * and all argument registers are clobbered.
*/ */
#ifdef CONFIG_RETPOLINE
movq sys_call_table(, %rax, 8), %rax
call __x86_indirect_thunk_rax
#else
call *sys_call_table(, %rax, 8) call *sys_call_table(, %rax, 8)
#endif
.Lentry_SYSCALL_64_after_fastpath_call: .Lentry_SYSCALL_64_after_fastpath_call:
movq %rax, RAX(%rsp) movq %rax, RAX(%rsp)
...@@ -442,7 +448,7 @@ ENTRY(stub_ptregs_64) ...@@ -442,7 +448,7 @@ ENTRY(stub_ptregs_64)
jmp entry_SYSCALL64_slow_path jmp entry_SYSCALL64_slow_path
1: 1:
jmp *%rax /* Called from C */ JMP_NOSPEC %rax /* Called from C */
END(stub_ptregs_64) END(stub_ptregs_64)
.macro ptregs_stub func .macro ptregs_stub func
...@@ -521,7 +527,7 @@ ENTRY(ret_from_fork) ...@@ -521,7 +527,7 @@ ENTRY(ret_from_fork)
1: 1:
/* kernel thread */ /* kernel thread */
movq %r12, %rdi movq %r12, %rdi
call *%rbx CALL_NOSPEC %rbx
/* /*
* A kernel thread is allowed to return here after successfully * A kernel thread is allowed to return here after successfully
* calling do_execve(). Exit to userspace to complete the execve() * calling do_execve(). Exit to userspace to complete the execve()
......
...@@ -582,6 +582,24 @@ static __init int bts_init(void) ...@@ -582,6 +582,24 @@ static __init int bts_init(void)
if (!boot_cpu_has(X86_FEATURE_DTES64) || !x86_pmu.bts) if (!boot_cpu_has(X86_FEATURE_DTES64) || !x86_pmu.bts)
return -ENODEV; return -ENODEV;
if (boot_cpu_has(X86_FEATURE_PTI)) {
/*
* BTS hardware writes through a virtual memory map; we must
* either use the kernel physical map or the user mapping of
* the AUX buffer.
*
* However, since this driver supports per-CPU and per-task inherit,
* we cannot use the user mapping since it will not be available
* if we're not running the owning process.
*
* With PTI we can't use the kernel map either, because it's not
* there when we run userspace.
*
* For now, disable this driver when using PTI.
*/
return -ENODEV;
}
bts_pmu.capabilities = PERF_PMU_CAP_AUX_NO_SG | PERF_PMU_CAP_ITRACE | bts_pmu.capabilities = PERF_PMU_CAP_AUX_NO_SG | PERF_PMU_CAP_ITRACE |
PERF_PMU_CAP_EXCLUSIVE; PERF_PMU_CAP_EXCLUSIVE;
bts_pmu.task_ctx_nr = perf_sw_context; bts_pmu.task_ctx_nr = perf_sw_context;
......
...@@ -11,7 +11,32 @@ ...@@ -11,7 +11,32 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/special_insns.h> #include <asm/special_insns.h>
#include <asm/preempt.h> #include <asm/preempt.h>
#include <asm/asm.h>
#ifndef CONFIG_X86_CMPXCHG64 #ifndef CONFIG_X86_CMPXCHG64
extern void cmpxchg8b_emu(void); extern void cmpxchg8b_emu(void);
#endif #endif
#ifdef CONFIG_RETPOLINE
#ifdef CONFIG_X86_32
#define INDIRECT_THUNK(reg) extern asmlinkage void __x86_indirect_thunk_e ## reg(void);
#else
#define INDIRECT_THUNK(reg) extern asmlinkage void __x86_indirect_thunk_r ## reg(void);
INDIRECT_THUNK(8)
INDIRECT_THUNK(9)
INDIRECT_THUNK(10)
INDIRECT_THUNK(11)
INDIRECT_THUNK(12)
INDIRECT_THUNK(13)
INDIRECT_THUNK(14)
INDIRECT_THUNK(15)
#endif
INDIRECT_THUNK(ax)
INDIRECT_THUNK(bx)
INDIRECT_THUNK(cx)
INDIRECT_THUNK(dx)
INDIRECT_THUNK(si)
INDIRECT_THUNK(di)
INDIRECT_THUNK(bp)
INDIRECT_THUNK(sp)
#endif /* CONFIG_RETPOLINE */
...@@ -203,6 +203,8 @@ ...@@ -203,6 +203,8 @@
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
#define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
#define X86_FEATURE_RETPOLINE ( 7*32+12) /* Generic Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* AMD Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */ #define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */
#define X86_FEATURE_AVX512_4VNNIW ( 7*32+16) /* AVX-512 Neural Network Instructions */ #define X86_FEATURE_AVX512_4VNNIW ( 7*32+16) /* AVX-512 Neural Network Instructions */
...@@ -342,5 +344,7 @@ ...@@ -342,5 +344,7 @@
#define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */ #define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */
#define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */ #define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */
#define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */ #define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */
#define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
#define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
#endif /* _ASM_X86_CPUFEATURES_H */ #endif /* _ASM_X86_CPUFEATURES_H */
...@@ -7,6 +7,7 @@ ...@@ -7,6 +7,7 @@
#include <linux/nmi.h> #include <linux/nmi.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/hyperv.h> #include <asm/hyperv.h>
#include <asm/nospec-branch.h>
/* /*
* The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
...@@ -186,10 +187,11 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output) ...@@ -186,10 +187,11 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output)
return U64_MAX; return U64_MAX;
__asm__ __volatile__("mov %4, %%r8\n" __asm__ __volatile__("mov %4, %%r8\n"
"call *%5" CALL_NOSPEC
: "=a" (hv_status), ASM_CALL_CONSTRAINT, : "=a" (hv_status), ASM_CALL_CONSTRAINT,
"+c" (control), "+d" (input_address) "+c" (control), "+d" (input_address)
: "r" (output_address), "m" (hv_hypercall_pg) : "r" (output_address),
THUNK_TARGET(hv_hypercall_pg)
: "cc", "memory", "r8", "r9", "r10", "r11"); : "cc", "memory", "r8", "r9", "r10", "r11");
#else #else
u32 input_address_hi = upper_32_bits(input_address); u32 input_address_hi = upper_32_bits(input_address);
...@@ -200,13 +202,13 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output) ...@@ -200,13 +202,13 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output)
if (!hv_hypercall_pg) if (!hv_hypercall_pg)
return U64_MAX; return U64_MAX;
__asm__ __volatile__("call *%7" __asm__ __volatile__(CALL_NOSPEC
: "=A" (hv_status), : "=A" (hv_status),
"+c" (input_address_lo), ASM_CALL_CONSTRAINT "+c" (input_address_lo), ASM_CALL_CONSTRAINT
: "A" (control), : "A" (control),
"b" (input_address_hi), "b" (input_address_hi),
"D"(output_address_hi), "S"(output_address_lo), "D"(output_address_hi), "S"(output_address_lo),
"m" (hv_hypercall_pg) THUNK_TARGET(hv_hypercall_pg)
: "cc", "memory"); : "cc", "memory");
#endif /* !x86_64 */ #endif /* !x86_64 */
return hv_status; return hv_status;
...@@ -227,10 +229,10 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1) ...@@ -227,10 +229,10 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1)
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
{ {
__asm__ __volatile__("call *%4" __asm__ __volatile__(CALL_NOSPEC
: "=a" (hv_status), ASM_CALL_CONSTRAINT, : "=a" (hv_status), ASM_CALL_CONSTRAINT,
"+c" (control), "+d" (input1) "+c" (control), "+d" (input1)
: "m" (hv_hypercall_pg) : THUNK_TARGET(hv_hypercall_pg)
: "cc", "r8", "r9", "r10", "r11"); : "cc", "r8", "r9", "r10", "r11");
} }
#else #else
...@@ -238,13 +240,13 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1) ...@@ -238,13 +240,13 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1)
u32 input1_hi = upper_32_bits(input1); u32 input1_hi = upper_32_bits(input1);
u32 input1_lo = lower_32_bits(input1); u32 input1_lo = lower_32_bits(input1);
__asm__ __volatile__ ("call *%5" __asm__ __volatile__ (CALL_NOSPEC
: "=A"(hv_status), : "=A"(hv_status),
"+c"(input1_lo), "+c"(input1_lo),
ASM_CALL_CONSTRAINT ASM_CALL_CONSTRAINT
: "A" (control), : "A" (control),
"b" (input1_hi), "b" (input1_hi),
"m" (hv_hypercall_pg) THUNK_TARGET(hv_hypercall_pg)
: "cc", "edi", "esi"); : "cc", "edi", "esi");
} }
#endif #endif
......
...@@ -355,6 +355,9 @@ ...@@ -355,6 +355,9 @@
#define FAM10H_MMIO_CONF_BASE_MASK 0xfffffffULL #define FAM10H_MMIO_CONF_BASE_MASK 0xfffffffULL
#define FAM10H_MMIO_CONF_BASE_SHIFT 20 #define FAM10H_MMIO_CONF_BASE_SHIFT 20
#define MSR_FAM10H_NODE_ID 0xc001100c #define MSR_FAM10H_NODE_ID 0xc001100c
#define MSR_F10H_DECFG 0xc0011029
#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT 1
#define MSR_F10H_DECFG_LFENCE_SERIALIZE BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
/* K8 MSRs */ /* K8 MSRs */
#define MSR_K8_TOP_MEM1 0xc001001a #define MSR_K8_TOP_MEM1 0xc001001a
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __NOSPEC_BRANCH_H__
#define __NOSPEC_BRANCH_H__
#include <asm/alternative.h>
#include <asm/alternative-asm.h>
#include <asm/cpufeatures.h>
/*
* Fill the CPU return stack buffer.
*
* Each entry in the RSB, if used for a speculative 'ret', contains an
* infinite 'pause; jmp' loop to capture speculative execution.
*
* This is required in various cases for retpoline and IBRS-based
* mitigations for the Spectre variant 2 vulnerability. Sometimes to
* eliminate potentially bogus entries from the RSB, and sometimes
* purely to ensure that it doesn't get empty, which on some CPUs would
* allow predictions from other (unwanted!) sources to be used.
*
* We define a CPP macro such that it can be used from both .S files and
* inline assembly. It's possible to do a .macro and then include that
* from C via asm(".include <asm/nospec-branch.h>") but let's not go there.
*/
#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
#define RSB_FILL_LOOPS 16 /* To avoid underflow */
/*
* Google experimented with loop-unrolling and this turned out to be
* the optimal version — two calls, each with their own speculation
* trap should their return address end up getting used, in a loop.
*/
#define __FILL_RETURN_BUFFER(reg, nr, sp) \
mov $(nr/2), reg; \
771: \
call 772f; \
773: /* speculation trap */ \
pause; \
jmp 773b; \
772: \
call 774f; \
775: /* speculation trap */ \
pause; \
jmp 775b; \
774: \
dec reg; \
jnz 771b; \
add $(BITS_PER_LONG/8) * nr, sp;
#ifdef __ASSEMBLY__
/*
* This should be used immediately before a retpoline alternative. It tells
* objtool where the retpolines are so that it can make sense of the control
* flow by just reading the original instruction(s) and ignoring the
* alternatives.
*/
.macro ANNOTATE_NOSPEC_ALTERNATIVE
.Lannotate_\@:
.pushsection .discard.nospec
.long .Lannotate_\@ - .
.popsection
.endm
/*
* These are the bare retpoline primitives for indirect jmp and call.
* Do not use these directly; they only exist to make the ALTERNATIVE
* invocation below less ugly.
*/
.macro RETPOLINE_JMP reg:req
call .Ldo_rop_\@
.Lspec_trap_\@:
pause
jmp .Lspec_trap_\@
.Ldo_rop_\@:
mov \reg, (%_ASM_SP)
ret
.endm
/*
* This is a wrapper around RETPOLINE_JMP so the called function in reg
* returns to the instruction after the macro.
*/
.macro RETPOLINE_CALL reg:req
jmp .Ldo_call_\@
.Ldo_retpoline_jmp_\@:
RETPOLINE_JMP \reg
.Ldo_call_\@:
call .Ldo_retpoline_jmp_\@
.endm
/*
* JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple
* indirect jmp/call which may be susceptible to the Spectre variant 2
* attack.
*/
.macro JMP_NOSPEC reg:req
#ifdef CONFIG_RETPOLINE
ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE_2 __stringify(jmp *\reg), \
__stringify(RETPOLINE_JMP \reg), X86_FEATURE_RETPOLINE, \
__stringify(lfence; jmp *\reg), X86_FEATURE_RETPOLINE_AMD
#else
jmp *\reg
#endif
.endm
.macro CALL_NOSPEC reg:req
#ifdef CONFIG_RETPOLINE
ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE_2 __stringify(call *\reg), \
__stringify(RETPOLINE_CALL \reg), X86_FEATURE_RETPOLINE,\
__stringify(lfence; call *\reg), X86_FEATURE_RETPOLINE_AMD
#else
call *\reg
#endif
.endm
/*
* A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
* monstrosity above, manually.
*/
.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
#ifdef CONFIG_RETPOLINE
ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE "jmp .Lskip_rsb_\@", \
__stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)) \
\ftr
.Lskip_rsb_\@:
#endif
.endm
#else /* __ASSEMBLY__ */
#define ANNOTATE_NOSPEC_ALTERNATIVE \
"999:\n\t" \
".pushsection .discard.nospec\n\t" \
".long 999b - .\n\t" \
".popsection\n\t"
#if defined(CONFIG_X86_64) && defined(RETPOLINE)
/*
* Since the inline asm uses the %V modifier which is only in newer GCC,
* the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE.
*/
# define CALL_NOSPEC \
ANNOTATE_NOSPEC_ALTERNATIVE \
ALTERNATIVE( \
"call *%[thunk_target]\n", \
"call __x86_indirect_thunk_%V[thunk_target]\n", \
X86_FEATURE_RETPOLINE)
# define THUNK_TARGET(addr) [thunk_target] "r" (addr)
#elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE)
/*
* For i386 we use the original ret-equivalent retpoline, because
* otherwise we'll run out of registers. We don't care about CET
* here, anyway.
*/
# define CALL_NOSPEC ALTERNATIVE("call *%[thunk_target]\n", \
" jmp 904f;\n" \
" .align 16\n" \
"901: call 903f;\n" \
"902: pause;\n" \
" jmp 902b;\n" \
" .align 16\n" \
"903: addl $4, %%esp;\n" \
" pushl %[thunk_target];\n" \
" ret;\n" \
" .align 16\n" \
"904: call 901b;\n", \
X86_FEATURE_RETPOLINE)
# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
#else /* No retpoline for C / inline asm */
# define CALL_NOSPEC "call *%[thunk_target]\n"
# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
#endif
/* The Spectre V2 mitigation variants */
enum spectre_v2_mitigation {
SPECTRE_V2_NONE,
SPECTRE_V2_RETPOLINE_MINIMAL,
SPECTRE_V2_RETPOLINE_MINIMAL_AMD,
SPECTRE_V2_RETPOLINE_GENERIC,
SPECTRE_V2_RETPOLINE_AMD,
SPECTRE_V2_IBRS,
};
/*
* On VMEXIT we must ensure that no RSB predictions learned in the guest
* can be followed in the host, by overwriting the RSB completely. Both
* retpoline and IBRS mitigations for Spectre v2 need this; only on future
* CPUs with IBRS_ATT *might* it be avoided.
*/
static inline void vmexit_fill_RSB(void)
{
#ifdef CONFIG_RETPOLINE
unsigned long loops = RSB_CLEAR_LOOPS / 2;
asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE("jmp 910f",
__stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
X86_FEATURE_RETPOLINE)
"910:"
: "=&r" (loops), ASM_CALL_CONSTRAINT
: "r" (loops) : "memory" );
#endif
}
#endif /* __ASSEMBLY__ */
#endif /* __NOSPEC_BRANCH_H__ */
...@@ -40,7 +40,7 @@ ...@@ -40,7 +40,7 @@
#define CR3_NOFLUSH BIT_ULL(63) #define CR3_NOFLUSH BIT_ULL(63)
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_PAGE_TABLE_ISOLATION
# define X86_CR3_PTI_SWITCH_BIT 11 # define X86_CR3_PTI_PCID_USER_BIT 11
#endif #endif
#else #else
......
...@@ -81,13 +81,13 @@ static inline u16 kern_pcid(u16 asid) ...@@ -81,13 +81,13 @@ static inline u16 kern_pcid(u16 asid)
* Make sure that the dynamic ASID space does not confict with the * Make sure that the dynamic ASID space does not confict with the
* bit we are using to switch between user and kernel ASIDs. * bit we are using to switch between user and kernel ASIDs.
*/ */
BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1 << X86_CR3_PTI_SWITCH_BIT)); BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1 << X86_CR3_PTI_PCID_USER_BIT));
/* /*
* The ASID being passed in here should have respected the * The ASID being passed in here should have respected the
* MAX_ASID_AVAILABLE and thus never have the switch bit set. * MAX_ASID_AVAILABLE and thus never have the switch bit set.
*/ */
VM_WARN_ON_ONCE(asid & (1 << X86_CR3_PTI_SWITCH_BIT)); VM_WARN_ON_ONCE(asid & (1 << X86_CR3_PTI_PCID_USER_BIT));
#endif #endif
/* /*
* The dynamically-assigned ASIDs that get passed in are small * The dynamically-assigned ASIDs that get passed in are small
...@@ -112,7 +112,7 @@ static inline u16 user_pcid(u16 asid) ...@@ -112,7 +112,7 @@ static inline u16 user_pcid(u16 asid)
{ {
u16 ret = kern_pcid(asid); u16 ret = kern_pcid(asid);
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_PAGE_TABLE_ISOLATION
ret |= 1 << X86_CR3_PTI_SWITCH_BIT; ret |= 1 << X86_CR3_PTI_PCID_USER_BIT;
#endif #endif
return ret; return ret;
} }
......
...@@ -44,6 +44,7 @@ ...@@ -44,6 +44,7 @@
#include <asm/page.h> #include <asm/page.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/smap.h> #include <asm/smap.h>
#include <asm/nospec-branch.h>
#include <xen/interface/xen.h> #include <xen/interface/xen.h>
#include <xen/interface/sched.h> #include <xen/interface/sched.h>
...@@ -217,9 +218,9 @@ privcmd_call(unsigned call, ...@@ -217,9 +218,9 @@ privcmd_call(unsigned call,
__HYPERCALL_5ARG(a1, a2, a3, a4, a5); __HYPERCALL_5ARG(a1, a2, a3, a4, a5);
stac(); stac();
asm volatile("call *%[call]" asm volatile(CALL_NOSPEC
: __HYPERCALL_5PARAM : __HYPERCALL_5PARAM
: [call] "a" (&hypercall_page[call]) : [thunk_target] "a" (&hypercall_page[call])
: __HYPERCALL_CLOBBER5); : __HYPERCALL_CLOBBER5);
clac(); clac();
......
...@@ -344,9 +344,12 @@ recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insnbuf) ...@@ -344,9 +344,12 @@ recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insnbuf)
static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr) static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr)
{ {
unsigned long flags; unsigned long flags;
int i;
if (instr[0] != 0x90) for (i = 0; i < a->padlen; i++) {
if (instr[i] != 0x90)
return; return;
}
local_irq_save(flags); local_irq_save(flags);
add_nops(instr + (a->instrlen - a->padlen), a->padlen); add_nops(instr + (a->instrlen - a->padlen), a->padlen);
......
...@@ -829,9 +829,33 @@ static void init_amd(struct cpuinfo_x86 *c) ...@@ -829,9 +829,33 @@ static void init_amd(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_K8); set_cpu_cap(c, X86_FEATURE_K8);
if (cpu_has(c, X86_FEATURE_XMM2)) { if (cpu_has(c, X86_FEATURE_XMM2)) {
unsigned long long val;
int ret;
/*
* A serializing LFENCE has less overhead than MFENCE, so
* use it for execution serialization. On families which
* don't have that MSR, LFENCE is already serializing.
* msr_set_bit() uses the safe accessors, too, even if the MSR
* is not present.
*/
msr_set_bit(MSR_F10H_DECFG,
MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
/*
* Verify that the MSR write was successful (could be running
* under a hypervisor) and only then assume that LFENCE is
* serializing.
*/
ret = rdmsrl_safe(MSR_F10H_DECFG, &val);
if (!ret && (val & MSR_F10H_DECFG_LFENCE_SERIALIZE)) {
/* A serializing LFENCE stops RDTSC speculation */
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
} else {
/* MFENCE stops RDTSC speculation */ /* MFENCE stops RDTSC speculation */
set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC); set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
} }
}
/* /*
* Family 0x12 and above processors have APIC timer * Family 0x12 and above processors have APIC timer
......
...@@ -10,6 +10,10 @@ ...@@ -10,6 +10,10 @@
*/ */
#include <linux/init.h> #include <linux/init.h>
#include <linux/utsname.h> #include <linux/utsname.h>
#include <linux/cpu.h>
#include <asm/nospec-branch.h>
#include <asm/cmdline.h>
#include <asm/bugs.h> #include <asm/bugs.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/processor-flags.h> #include <asm/processor-flags.h>
...@@ -20,6 +24,8 @@ ...@@ -20,6 +24,8 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/set_memory.h> #include <asm/set_memory.h>
static void __init spectre_v2_select_mitigation(void);
void __init check_bugs(void) void __init check_bugs(void)
{ {
identify_boot_cpu(); identify_boot_cpu();
...@@ -29,6 +35,9 @@ void __init check_bugs(void) ...@@ -29,6 +35,9 @@ void __init check_bugs(void)
print_cpu_info(&boot_cpu_data); print_cpu_info(&boot_cpu_data);
} }
/* Select the proper spectre mitigation before patching alternatives */
spectre_v2_select_mitigation();
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
/* /*
* Check whether we are able to run this kernel safely on SMP. * Check whether we are able to run this kernel safely on SMP.
...@@ -60,3 +69,179 @@ void __init check_bugs(void) ...@@ -60,3 +69,179 @@ void __init check_bugs(void)
set_memory_4k((unsigned long)__va(0), 1); set_memory_4k((unsigned long)__va(0), 1);
#endif #endif
} }
/* The kernel command line selection */
enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_NONE,
SPECTRE_V2_CMD_AUTO,
SPECTRE_V2_CMD_FORCE,
SPECTRE_V2_CMD_RETPOLINE,
SPECTRE_V2_CMD_RETPOLINE_GENERIC,
SPECTRE_V2_CMD_RETPOLINE_AMD,
};
static const char *spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline",
[SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline",
[SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",
[SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",
};
#undef pr_fmt
#define pr_fmt(fmt) "Spectre V2 mitigation: " fmt
static enum spectre_v2_mitigation spectre_v2_enabled = SPECTRE_V2_NONE;
static void __init spec2_print_if_insecure(const char *reason)
{
if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
pr_info("%s\n", reason);
}
static void __init spec2_print_if_secure(const char *reason)
{
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
pr_info("%s\n", reason);
}
static inline bool retp_compiler(void)
{
return __is_defined(RETPOLINE);
}
static inline bool match_option(const char *arg, int arglen, const char *opt)
{
int len = strlen(opt);
return len == arglen && !strncmp(arg, opt, len);
}
static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
{
char arg[20];
int ret;
ret = cmdline_find_option(boot_command_line, "spectre_v2", arg,
sizeof(arg));
if (ret > 0) {
if (match_option(arg, ret, "off")) {
goto disable;
} else if (match_option(arg, ret, "on")) {
spec2_print_if_secure("force enabled on command line.");
return SPECTRE_V2_CMD_FORCE;
} else if (match_option(arg, ret, "retpoline")) {
spec2_print_if_insecure("retpoline selected on command line.");
return SPECTRE_V2_CMD_RETPOLINE;
} else if (match_option(arg, ret, "retpoline,amd")) {
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
return SPECTRE_V2_CMD_AUTO;
}
spec2_print_if_insecure("AMD retpoline selected on command line.");
return SPECTRE_V2_CMD_RETPOLINE_AMD;
} else if (match_option(arg, ret, "retpoline,generic")) {
spec2_print_if_insecure("generic retpoline selected on command line.");
return SPECTRE_V2_CMD_RETPOLINE_GENERIC;
} else if (match_option(arg, ret, "auto")) {
return SPECTRE_V2_CMD_AUTO;
}
}
if (!cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
return SPECTRE_V2_CMD_AUTO;
disable:
spec2_print_if_insecure("disabled on command line.");
return SPECTRE_V2_CMD_NONE;
}
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
/*
* If the CPU is not affected and the command line mode is NONE or AUTO
* then nothing to do.
*/
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
(cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
return;
switch (cmd) {
case SPECTRE_V2_CMD_NONE:
return;
case SPECTRE_V2_CMD_FORCE:
/* FALLTHRU */
case SPECTRE_V2_CMD_AUTO:
goto retpoline_auto;
case SPECTRE_V2_CMD_RETPOLINE_AMD:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_amd;
break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_generic;
break;
case SPECTRE_V2_CMD_RETPOLINE:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
break;
}
pr_err("kernel not compiled with retpoline; no mitigation available!");
return;
retpoline_auto:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
retpoline_amd:
if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
pr_err("LFENCE not serializing. Switching to generic retpoline\n");
goto retpoline_generic;
}
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :
SPECTRE_V2_RETPOLINE_MINIMAL_AMD;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
} else {
retpoline_generic:
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC :
SPECTRE_V2_RETPOLINE_MINIMAL;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
}
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
}
#undef pr_fmt
#ifdef CONFIG_SYSFS
ssize_t cpu_show_meltdown(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
return sprintf(buf, "Not affected\n");
if (boot_cpu_has(X86_FEATURE_PTI))
return sprintf(buf, "Mitigation: PTI\n");
return sprintf(buf, "Vulnerable\n");
}
ssize_t cpu_show_spectre_v1(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
return sprintf(buf, "Not affected\n");
return sprintf(buf, "Vulnerable\n");
}
ssize_t cpu_show_spectre_v2(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
return sprintf(buf, "Not affected\n");
return sprintf(buf, "%s\n", spectre_v2_strings[spectre_v2_enabled]);
}
#endif
...@@ -926,6 +926,9 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c) ...@@ -926,6 +926,9 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
if (c->x86_vendor != X86_VENDOR_AMD) if (c->x86_vendor != X86_VENDOR_AMD)
setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN); setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
fpu__init_system(c); fpu__init_system(c);
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
......
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <asm/segment.h> #include <asm/segment.h>
#include <asm/export.h> #include <asm/export.h>
#include <asm/ftrace.h> #include <asm/ftrace.h>
#include <asm/nospec-branch.h>
#ifdef CC_USING_FENTRY #ifdef CC_USING_FENTRY
# define function_hook __fentry__ # define function_hook __fentry__
...@@ -197,7 +198,8 @@ ftrace_stub: ...@@ -197,7 +198,8 @@ ftrace_stub:
movl 0x4(%ebp), %edx movl 0x4(%ebp), %edx
subl $MCOUNT_INSN_SIZE, %eax subl $MCOUNT_INSN_SIZE, %eax
call *ftrace_trace_function movl ftrace_trace_function, %ecx
CALL_NOSPEC %ecx
popl %edx popl %edx
popl %ecx popl %ecx
...@@ -241,5 +243,5 @@ return_to_handler: ...@@ -241,5 +243,5 @@ return_to_handler:
movl %eax, %ecx movl %eax, %ecx
popl %edx popl %edx
popl %eax popl %eax
jmp *%ecx JMP_NOSPEC %ecx
#endif #endif
...@@ -7,7 +7,7 @@ ...@@ -7,7 +7,7 @@
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/ftrace.h> #include <asm/ftrace.h>
#include <asm/export.h> #include <asm/export.h>
#include <asm/nospec-branch.h>
.code64 .code64
.section .entry.text, "ax" .section .entry.text, "ax"
...@@ -286,8 +286,8 @@ trace: ...@@ -286,8 +286,8 @@ trace:
* ip and parent ip are used and the list function is called when * ip and parent ip are used and the list function is called when
* function tracing is enabled. * function tracing is enabled.
*/ */
call *ftrace_trace_function movq ftrace_trace_function, %r8
CALL_NOSPEC %r8
restore_mcount_regs restore_mcount_regs
jmp fgraph_trace jmp fgraph_trace
...@@ -329,5 +329,5 @@ GLOBAL(return_to_handler) ...@@ -329,5 +329,5 @@ GLOBAL(return_to_handler)
movq 8(%rsp), %rdx movq 8(%rsp), %rdx
movq (%rsp), %rax movq (%rsp), %rax
addq $24, %rsp addq $24, %rsp
jmp *%rdi JMP_NOSPEC %rdi
#endif #endif
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <asm/apic.h> #include <asm/apic.h>
#include <asm/nospec-branch.h>
#ifdef CONFIG_DEBUG_STACKOVERFLOW #ifdef CONFIG_DEBUG_STACKOVERFLOW
...@@ -55,11 +56,11 @@ DEFINE_PER_CPU(struct irq_stack *, softirq_stack); ...@@ -55,11 +56,11 @@ DEFINE_PER_CPU(struct irq_stack *, softirq_stack);
static void call_on_stack(void *func, void *stack) static void call_on_stack(void *func, void *stack)
{ {
asm volatile("xchgl %%ebx,%%esp \n" asm volatile("xchgl %%ebx,%%esp \n"
"call *%%edi \n" CALL_NOSPEC
"movl %%ebx,%%esp \n" "movl %%ebx,%%esp \n"
: "=b" (stack) : "=b" (stack)
: "0" (stack), : "0" (stack),
"D"(func) [thunk_target] "D"(func)
: "memory", "cc", "edx", "ecx", "eax"); : "memory", "cc", "edx", "ecx", "eax");
} }
...@@ -95,11 +96,11 @@ static inline int execute_on_irq_stack(int overflow, struct irq_desc *desc) ...@@ -95,11 +96,11 @@ static inline int execute_on_irq_stack(int overflow, struct irq_desc *desc)
call_on_stack(print_stack_overflow, isp); call_on_stack(print_stack_overflow, isp);
asm volatile("xchgl %%ebx,%%esp \n" asm volatile("xchgl %%ebx,%%esp \n"
"call *%%edi \n" CALL_NOSPEC
"movl %%ebx,%%esp \n" "movl %%ebx,%%esp \n"
: "=a" (arg1), "=b" (isp) : "=a" (arg1), "=b" (isp)
: "0" (desc), "1" (isp), : "0" (desc), "1" (isp),
"D" (desc->handle_irq) [thunk_target] "D" (desc->handle_irq)
: "memory", "cc", "ecx"); : "memory", "cc", "ecx");
return 1; return 1;
} }
......
...@@ -138,6 +138,17 @@ static int map_tboot_page(unsigned long vaddr, unsigned long pfn, ...@@ -138,6 +138,17 @@ static int map_tboot_page(unsigned long vaddr, unsigned long pfn,
return -1; return -1;
set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot)); set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
pte_unmap(pte); pte_unmap(pte);
/*
* PTI poisons low addresses in the kernel page tables in the
* name of making them unusable for userspace. To execute
* code at such a low address, the poison must be cleared.
*
* Note: 'pgd' actually gets set in p4d_alloc() _or_
* pud_alloc() depending on 4/5-level paging.
*/
pgd->pgd &= ~_PAGE_NX;
return 0; return 0;
} }
......
...@@ -45,6 +45,7 @@ ...@@ -45,6 +45,7 @@
#include <asm/debugreg.h> #include <asm/debugreg.h>
#include <asm/kvm_para.h> #include <asm/kvm_para.h>
#include <asm/irq_remapping.h> #include <asm/irq_remapping.h>
#include <asm/nospec-branch.h>
#include <asm/virtext.h> #include <asm/virtext.h>
#include "trace.h" #include "trace.h"
...@@ -5027,6 +5028,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu) ...@@ -5027,6 +5028,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
#endif #endif
); );
/* Eliminate branch target predictions from guest mode */
vmexit_fill_RSB();
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
wrmsrl(MSR_GS_BASE, svm->host.gs_base); wrmsrl(MSR_GS_BASE, svm->host.gs_base);
#else #else
......
...@@ -50,6 +50,7 @@ ...@@ -50,6 +50,7 @@
#include <asm/apic.h> #include <asm/apic.h>
#include <asm/irq_remapping.h> #include <asm/irq_remapping.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/nospec-branch.h>
#include "trace.h" #include "trace.h"
#include "pmu.h" #include "pmu.h"
...@@ -9490,6 +9491,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) ...@@ -9490,6 +9491,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
#endif #endif
); );
/* Eliminate branch target predictions from guest mode */
vmexit_fill_RSB();
/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */ /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
if (debugctlmsr) if (debugctlmsr)
update_debugctlmsr(debugctlmsr); update_debugctlmsr(debugctlmsr);
......
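Both the SVM and VMX exit paths above now call vmexit_fill_RSB() so that return-address predictions trained by the guest cannot steer the host's subsequent RET instructions. The idea: issue a burst of CALLs whose return addresses point at speculation traps, then discard those return addresses from the stack, leaving the return stack buffer full of benign entries. A two-entry, hand-rolled sketch of that idea (the real helper stuffs 32 entries and is patched in via alternatives; x86-64 only, assumes a kernel-style -mno-red-zone build):

static inline void fill_rsb_sketch(void)
{
	asm volatile("call 771f\n"
		     "772:\n\t"
		     "pause; lfence; jmp 772b\n"	/* speculation trap #1 */
		     "771:\n\t"
		     "call 773f\n"
		     "774:\n\t"
		     "pause; lfence; jmp 774b\n"	/* speculation trap #2 */
		     "773:\n\t"
		     "addq $16, %%rsp\n"		/* drop both return addresses */
		     ::: "memory", "cc");
}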
...@@ -26,6 +26,7 @@ lib-y += memcpy_$(BITS).o ...@@ -26,6 +26,7 @@ lib-y += memcpy_$(BITS).o
lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o
lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
lib-$(CONFIG_RETPOLINE) += retpoline.o
obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o
......
...@@ -29,6 +29,7 @@ ...@@ -29,6 +29,7 @@
#include <asm/errno.h> #include <asm/errno.h>
#include <asm/asm.h> #include <asm/asm.h>
#include <asm/export.h> #include <asm/export.h>
#include <asm/nospec-branch.h>
/* /*
* computes a partial checksum, e.g. for TCP/UDP fragments * computes a partial checksum, e.g. for TCP/UDP fragments
...@@ -156,7 +157,7 @@ ENTRY(csum_partial) ...@@ -156,7 +157,7 @@ ENTRY(csum_partial)
negl %ebx negl %ebx
lea 45f(%ebx,%ebx,2), %ebx lea 45f(%ebx,%ebx,2), %ebx
testl %esi, %esi testl %esi, %esi
jmp *%ebx JMP_NOSPEC %ebx
# Handle 2-byte-aligned regions # Handle 2-byte-aligned regions
20: addw (%esi), %ax 20: addw (%esi), %ax
...@@ -439,7 +440,7 @@ ENTRY(csum_partial_copy_generic) ...@@ -439,7 +440,7 @@ ENTRY(csum_partial_copy_generic)
andl $-32,%edx andl $-32,%edx
lea 3f(%ebx,%ebx), %ebx lea 3f(%ebx,%ebx), %ebx
testl %esi, %esi testl %esi, %esi
jmp *%ebx JMP_NOSPEC %ebx
1: addl $64,%esi 1: addl $64,%esi
addl $64,%edi addl $64,%edi
SRC(movb -32(%edx),%bl) ; SRC(movb (%edx),%bl) SRC(movb -32(%edx),%bl) ; SRC(movb (%edx),%bl)
......
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/stringify.h>
#include <linux/linkage.h>
#include <asm/dwarf2.h>
#include <asm/cpufeatures.h>
#include <asm/alternative-asm.h>
#include <asm/export.h>
#include <asm/nospec-branch.h>
.macro THUNK reg
.section .text.__x86.indirect_thunk.\reg
ENTRY(__x86_indirect_thunk_\reg)
CFI_STARTPROC
JMP_NOSPEC %\reg
CFI_ENDPROC
ENDPROC(__x86_indirect_thunk_\reg)
.endm
/*
* Despite being an assembler file we can't just use .irp here
* because __KSYM_DEPS__ only uses the C preprocessor and would
* only see one instance of "__x86_indirect_thunk_\reg" rather
* than one per register with the correct names. So we do it
* the simple and nasty way...
*/
#define EXPORT_THUNK(reg) EXPORT_SYMBOL(__x86_indirect_thunk_ ## reg)
#define GENERATE_THUNK(reg) THUNK reg ; EXPORT_THUNK(reg)
GENERATE_THUNK(_ASM_AX)
GENERATE_THUNK(_ASM_BX)
GENERATE_THUNK(_ASM_CX)
GENERATE_THUNK(_ASM_DX)
GENERATE_THUNK(_ASM_SI)
GENERATE_THUNK(_ASM_DI)
GENERATE_THUNK(_ASM_BP)
GENERATE_THUNK(_ASM_SP)
#ifdef CONFIG_64BIT
GENERATE_THUNK(r8)
GENERATE_THUNK(r9)
GENERATE_THUNK(r10)
GENERATE_THUNK(r11)
GENERATE_THUNK(r12)
GENERATE_THUNK(r13)
GENERATE_THUNK(r14)
GENERATE_THUNK(r15)
#endif
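Each generated thunk above is just JMP_NOSPEC %reg, i.e. the retpoline construct: a CALL whose pushed return address is overwritten with the real branch target, while the RET's return-stack prediction is captured in a pause/lfence loop. A hand-rolled sketch of the call flavour of that construct, written so it can sit in an ordinary C function; illustration only (the real thunks are per-register assembly selected by alternatives; x86-64, kernel-style build assumed):

static void retpoline_call_sketch(void (*target)(void))
{
	asm volatile("jmp  903f\n"
		     "901:\n\t"
		     "call 902f\n"
		     "904:\n\t"
		     "pause; lfence; jmp 904b\n"	/* speculation lands here */
		     "902:\n\t"
		     "mov  %[target], (%%rsp)\n\t"	/* redirect the RET ...   */
		     "ret\n"				/* ... to the real target */
		     "903:\n\t"
		     "call 901b\n"			/* architectural entry    */
		     :
		     : [target] "r" (target)
		     : "rax", "rcx", "rdx", "rsi", "rdi",
		       "r8", "r9", "r10", "r11", "memory", "cc");
}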
...@@ -149,7 +149,7 @@ pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd) ...@@ -149,7 +149,7 @@ pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd)
* *
* Returns a pointer to a P4D on success, or NULL on failure. * Returns a pointer to a P4D on success, or NULL on failure.
*/ */
static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address) static __init p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
{ {
pgd_t *pgd = kernel_to_user_pgdp(pgd_offset_k(address)); pgd_t *pgd = kernel_to_user_pgdp(pgd_offset_k(address));
gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO); gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
...@@ -164,12 +164,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address) ...@@ -164,12 +164,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
if (!new_p4d_page) if (!new_p4d_page)
return NULL; return NULL;
if (pgd_none(*pgd)) {
set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page))); set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
new_p4d_page = 0;
}
if (new_p4d_page)
free_page(new_p4d_page);
} }
BUILD_BUG_ON(pgd_large(*pgd) != 0); BUILD_BUG_ON(pgd_large(*pgd) != 0);
...@@ -182,7 +177,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address) ...@@ -182,7 +177,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
* *
* Returns a pointer to a PMD on success, or NULL on failure. * Returns a pointer to a PMD on success, or NULL on failure.
*/ */
static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) static __init pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
{ {
gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO); gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
p4d_t *p4d = pti_user_pagetable_walk_p4d(address); p4d_t *p4d = pti_user_pagetable_walk_p4d(address);
...@@ -194,12 +189,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) ...@@ -194,12 +189,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
if (!new_pud_page) if (!new_pud_page)
return NULL; return NULL;
if (p4d_none(*p4d)) {
set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page))); set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
new_pud_page = 0;
}
if (new_pud_page)
free_page(new_pud_page);
} }
pud = pud_offset(p4d, address); pud = pud_offset(p4d, address);
...@@ -213,12 +203,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address) ...@@ -213,12 +203,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
if (!new_pmd_page) if (!new_pmd_page)
return NULL; return NULL;
if (pud_none(*pud)) {
set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page))); set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
new_pmd_page = 0;
}
if (new_pmd_page)
free_page(new_pmd_page);
} }
return pmd_offset(pud, address); return pmd_offset(pud, address);
...@@ -251,12 +236,7 @@ static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address) ...@@ -251,12 +236,7 @@ static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
if (!new_pte_page) if (!new_pte_page)
return NULL; return NULL;
if (pmd_none(*pmd)) {
set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page))); set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
new_pte_page = 0;
}
if (new_pte_page)
free_page(new_pte_page);
} }
pte = pte_offset_kernel(pmd, address); pte = pte_offset_kernel(pmd, address);
......
...@@ -135,7 +135,9 @@ pgd_t * __init efi_call_phys_prolog(void) ...@@ -135,7 +135,9 @@ pgd_t * __init efi_call_phys_prolog(void)
pud[j] = *pud_offset(p4d_k, vaddr); pud[j] = *pud_offset(p4d_k, vaddr);
} }
} }
pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
} }
out: out:
__flush_tlb_all(); __flush_tlb_all();
......
...@@ -236,6 +236,9 @@ config GENERIC_CPU_DEVICES ...@@ -236,6 +236,9 @@ config GENERIC_CPU_DEVICES
config GENERIC_CPU_AUTOPROBE config GENERIC_CPU_AUTOPROBE
bool bool
config GENERIC_CPU_VULNERABILITIES
bool
config SOC_BUS config SOC_BUS
bool bool
select GLOB select GLOB
......
...@@ -511,10 +511,58 @@ static void __init cpu_dev_register_generic(void) ...@@ -511,10 +511,58 @@ static void __init cpu_dev_register_generic(void)
#endif #endif
} }
#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
ssize_t __weak cpu_show_meltdown(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "Not affected\n");
}
ssize_t __weak cpu_show_spectre_v1(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "Not affected\n");
}
ssize_t __weak cpu_show_spectre_v2(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "Not affected\n");
}
static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_meltdown.attr,
&dev_attr_spectre_v1.attr,
&dev_attr_spectre_v2.attr,
NULL
};
static const struct attribute_group cpu_root_vulnerabilities_group = {
.name = "vulnerabilities",
.attrs = cpu_root_vulnerabilities_attrs,
};
static void __init cpu_register_vulnerabilities(void)
{
if (sysfs_create_group(&cpu_subsys.dev_root->kobj,
&cpu_root_vulnerabilities_group))
pr_err("Unable to register CPU vulnerabilities\n");
}
#else
static inline void cpu_register_vulnerabilities(void) { }
#endif
void __init cpu_dev_init(void) void __init cpu_dev_init(void)
{ {
if (subsys_system_register(&cpu_subsys, cpu_root_attr_groups)) if (subsys_system_register(&cpu_subsys, cpu_root_attr_groups))
panic("Failed to register CPU subsystem"); panic("Failed to register CPU subsystem");
cpu_dev_register_generic(); cpu_dev_register_generic();
cpu_register_vulnerabilities();
} }
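The cpu_show_*() handlers above are __weak defaults that report "Not affected"; an architecture that is affected overrides them with its real status. A hedged sketch of such an override: the arch_*() helpers are hypothetical stand-ins for the architecture's actual bug/feature checks, and the strings only follow the style used by the x86 implementation:

#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/kernel.h>

/* Hypothetical stand-ins for the architecture's real bug/feature checks. */
static bool arch_has_meltdown(void)	   { return true; }
static bool arch_meltdown_mitigated(void)  { return true; }

ssize_t cpu_show_meltdown(struct device *dev,
			  struct device_attribute *attr, char *buf)
{
	if (!arch_has_meltdown())
		return sprintf(buf, "Not affected\n");
	if (arch_meltdown_mitigated())
		return sprintf(buf, "Mitigation: PTI\n");
	return sprintf(buf, "Vulnerable\n");
}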
...@@ -47,6 +47,13 @@ extern void cpu_remove_dev_attr(struct device_attribute *attr); ...@@ -47,6 +47,13 @@ extern void cpu_remove_dev_attr(struct device_attribute *attr);
extern int cpu_add_dev_attr_group(struct attribute_group *attrs); extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
extern void cpu_remove_dev_attr_group(struct attribute_group *attrs); extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);
extern ssize_t cpu_show_meltdown(struct device *dev,
struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spectre_v1(struct device *dev,
struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spectre_v2(struct device *dev,
struct device_attribute *attr, char *buf);
extern __printf(4, 5) extern __printf(4, 5)
struct device *cpu_device_create(struct device *parent, void *drvdata, struct device *cpu_device_create(struct device *parent, void *drvdata,
const struct attribute_group **groups, const struct attribute_group **groups,
......
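Once cpu_register_vulnerabilities() has run, the status strings are readable from /sys/devices/system/cpu/vulnerabilities/{meltdown,spectre_v1,spectre_v2}. A small userspace sketch that dumps them (the paths match the ABI added here; the program itself is just illustrative):

#include <stdio.h>

int main(void)
{
	static const char *files[] = { "meltdown", "spectre_v1", "spectre_v2" };
	char path[128], status[128];

	for (unsigned i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/vulnerabilities/%s", files[i]);

		FILE *f = fopen(path, "r");
		if (!f) {
			printf("%-10s: <not exposed by this kernel>\n", files[i]);
			continue;
		}
		if (fgets(status, sizeof(status), f))
			printf("%-10s: %s", files[i], status); /* status ends in '\n' */
		fclose(f);
	}
	return 0;
}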
...@@ -63,7 +63,7 @@ config PAGE_TABLE_ISOLATION ...@@ -63,7 +63,7 @@ config PAGE_TABLE_ISOLATION
ensuring that the majority of kernel addresses are not mapped ensuring that the majority of kernel addresses are not mapped
into userspace. into userspace.
See Documentation/x86/pagetable-isolation.txt for more details. See Documentation/x86/pti.txt for more details.
config SECURITY_INFINIBAND config SECURITY_INFINIBAND
bool "Infiniband Security Hooks" bool "Infiniband Security Hooks"
......
...@@ -427,6 +427,40 @@ static void add_ignores(struct objtool_file *file) ...@@ -427,6 +427,40 @@ static void add_ignores(struct objtool_file *file)
} }
} }
/*
* FIXME: For now, just ignore any alternatives which add retpolines. This is
* a temporary hack, as it doesn't allow ORC to unwind from inside a retpoline.
* But it at least allows objtool to understand the control flow *around* the
* retpoline.
*/
static int add_nospec_ignores(struct objtool_file *file)
{
struct section *sec;
struct rela *rela;
struct instruction *insn;
sec = find_section_by_name(file->elf, ".rela.discard.nospec");
if (!sec)
return 0;
list_for_each_entry(rela, &sec->rela_list, list) {
if (rela->sym->type != STT_SECTION) {
WARN("unexpected relocation symbol type in %s", sec->name);
return -1;
}
insn = find_insn(file, rela->sym->sec, rela->addend);
if (!insn) {
WARN("bad .discard.nospec entry");
return -1;
}
insn->ignore_alts = true;
}
return 0;
}
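add_nospec_ignores() above walks .rela.discard.nospec, so something on the kernel side has to record, next to every retpoline alternative, the address of the instruction whose alternatives objtool should ignore. A hedged sketch of an annotation macro of that shape, storing the PC-relative offset of a local label in the discarded section (the real kernel macro may differ in detail):

/*
 * Pasted into an asm() statement right before the alternative: the local
 * label 999 marks the instruction, and its offset ends up as a relocation
 * in .rela.discard.nospec, which add_nospec_ignores() resolves back to the
 * instruction via find_insn().
 */
#define ANNOTATE_NOSPEC_ALTERNATIVE			\
	"999:\n\t"					\
	".pushsection .discard.nospec\n\t"		\
	".long 999b - .\n\t"				\
	".popsection\n\t"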
/* /*
* Find the destination instructions for all jumps. * Find the destination instructions for all jumps.
*/ */
...@@ -456,6 +490,13 @@ static int add_jump_destinations(struct objtool_file *file) ...@@ -456,6 +490,13 @@ static int add_jump_destinations(struct objtool_file *file)
} else if (rela->sym->sec->idx) { } else if (rela->sym->sec->idx) {
dest_sec = rela->sym->sec; dest_sec = rela->sym->sec;
dest_off = rela->sym->sym.st_value + rela->addend + 4; dest_off = rela->sym->sym.st_value + rela->addend + 4;
} else if (strstr(rela->sym->name, "_indirect_thunk_")) {
/*
* Retpoline jumps are really dynamic jumps in
* disguise, so convert them accordingly.
*/
insn->type = INSN_JUMP_DYNAMIC;
continue;
} else { } else {
/* sibling call */ /* sibling call */
insn->jump_dest = 0; insn->jump_dest = 0;
...@@ -502,11 +543,18 @@ static int add_call_destinations(struct objtool_file *file) ...@@ -502,11 +543,18 @@ static int add_call_destinations(struct objtool_file *file)
dest_off = insn->offset + insn->len + insn->immediate; dest_off = insn->offset + insn->len + insn->immediate;
insn->call_dest = find_symbol_by_offset(insn->sec, insn->call_dest = find_symbol_by_offset(insn->sec,
dest_off); dest_off);
/*
* FIXME: Thanks to retpolines, it's now considered
* normal for a function to call within itself. So
* disable this warning for now.
*/
#if 0
if (!insn->call_dest) { if (!insn->call_dest) {
WARN_FUNC("can't find call dest symbol at offset 0x%lx", WARN_FUNC("can't find call dest symbol at offset 0x%lx",
insn->sec, insn->offset, dest_off); insn->sec, insn->offset, dest_off);
return -1; return -1;
} }
#endif
} else if (rela->sym->type == STT_SECTION) { } else if (rela->sym->type == STT_SECTION) {
insn->call_dest = find_symbol_by_offset(rela->sym->sec, insn->call_dest = find_symbol_by_offset(rela->sym->sec,
rela->addend+4); rela->addend+4);
...@@ -671,12 +719,6 @@ static int add_special_section_alts(struct objtool_file *file) ...@@ -671,12 +719,6 @@ static int add_special_section_alts(struct objtool_file *file)
return ret; return ret;
list_for_each_entry_safe(special_alt, tmp, &special_alts, list) { list_for_each_entry_safe(special_alt, tmp, &special_alts, list) {
alt = malloc(sizeof(*alt));
if (!alt) {
WARN("malloc failed");
ret = -1;
goto out;
}
orig_insn = find_insn(file, special_alt->orig_sec, orig_insn = find_insn(file, special_alt->orig_sec,
special_alt->orig_off); special_alt->orig_off);
...@@ -687,6 +729,10 @@ static int add_special_section_alts(struct objtool_file *file) ...@@ -687,6 +729,10 @@ static int add_special_section_alts(struct objtool_file *file)
goto out; goto out;
} }
/* Ignore retpoline alternatives. */
if (orig_insn->ignore_alts)
continue;
new_insn = NULL; new_insn = NULL;
if (!special_alt->group || special_alt->new_len) { if (!special_alt->group || special_alt->new_len) {
new_insn = find_insn(file, special_alt->new_sec, new_insn = find_insn(file, special_alt->new_sec,
...@@ -712,6 +758,13 @@ static int add_special_section_alts(struct objtool_file *file) ...@@ -712,6 +758,13 @@ static int add_special_section_alts(struct objtool_file *file)
goto out; goto out;
} }
alt = malloc(sizeof(*alt));
if (!alt) {
WARN("malloc failed");
ret = -1;
goto out;
}
alt->insn = new_insn; alt->insn = new_insn;
list_add_tail(&alt->list, &orig_insn->alts); list_add_tail(&alt->list, &orig_insn->alts);
...@@ -1028,6 +1081,10 @@ static int decode_sections(struct objtool_file *file) ...@@ -1028,6 +1081,10 @@ static int decode_sections(struct objtool_file *file)
add_ignores(file); add_ignores(file);
ret = add_nospec_ignores(file);
if (ret)
return ret;
ret = add_jump_destinations(file); ret = add_jump_destinations(file);
if (ret) if (ret)
return ret; return ret;
......
...@@ -44,7 +44,7 @@ struct instruction { ...@@ -44,7 +44,7 @@ struct instruction {
unsigned int len; unsigned int len;
unsigned char type; unsigned char type;
unsigned long immediate; unsigned long immediate;
bool alt_group, visited, dead_end, ignore, hint, save, restore; bool alt_group, visited, dead_end, ignore, hint, save, restore, ignore_alts;
struct symbol *call_dest; struct symbol *call_dest;
struct instruction *jump_dest; struct instruction *jump_dest;
struct list_head alts; struct list_head alts;
......
...@@ -7,7 +7,7 @@ include ../lib.mk ...@@ -7,7 +7,7 @@ include ../lib.mk
TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt ptrace_syscall test_mremap_vdso \ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt ptrace_syscall test_mremap_vdso \
check_initial_reg_state sigreturn ldt_gdt iopl mpx-mini-test ioperm \ check_initial_reg_state sigreturn ldt_gdt iopl mpx-mini-test ioperm \
protection_keys test_vdso protection_keys test_vdso test_vsyscall
TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \ TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \
test_FCMOV test_FCOMI test_FISTTP \ test_FCMOV test_FCOMI test_FISTTP \
vdso_restorer vdso_restorer
......
/* SPDX-License-Identifier: GPL-2.0 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <dlfcn.h>
#include <string.h>
#include <inttypes.h>
#include <signal.h>
#include <sys/ucontext.h>
#include <errno.h>
#include <err.h>
#include <sched.h>
#include <stdbool.h>
#include <setjmp.h>
#ifdef __x86_64__
# define VSYS(x) (x)
#else
# define VSYS(x) 0
#endif
#ifndef SYS_getcpu
# ifdef __x86_64__
# define SYS_getcpu 309
# else
# define SYS_getcpu 318
# endif
#endif
static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *),
int flags)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = handler;
sa.sa_flags = SA_SIGINFO | flags;
sigemptyset(&sa.sa_mask);
if (sigaction(sig, &sa, 0))
err(1, "sigaction");
}
/* vsyscalls and vDSO */
bool should_read_vsyscall = false;
typedef long (*gtod_t)(struct timeval *tv, struct timezone *tz);
gtod_t vgtod = (gtod_t)VSYS(0xffffffffff600000);
gtod_t vdso_gtod;
typedef int (*vgettime_t)(clockid_t, struct timespec *);
vgettime_t vdso_gettime;
typedef long (*time_func_t)(time_t *t);
time_func_t vtime = (time_func_t)VSYS(0xffffffffff600400);
time_func_t vdso_time;
typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
getcpu_t vgetcpu = (getcpu_t)VSYS(0xffffffffff600800);
getcpu_t vdso_getcpu;
static void init_vdso(void)
{
void *vdso = dlopen("linux-vdso.so.1", RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
if (!vdso)
vdso = dlopen("linux-gate.so.1", RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
if (!vdso) {
printf("[WARN]\tfailed to find vDSO\n");
return;
}
vdso_gtod = (gtod_t)dlsym(vdso, "__vdso_gettimeofday");
if (!vdso_gtod)
printf("[WARN]\tfailed to find gettimeofday in vDSO\n");
vdso_gettime = (vgettime_t)dlsym(vdso, "__vdso_clock_gettime");
if (!vdso_gettime)
printf("[WARN]\tfailed to find clock_gettime in vDSO\n");
vdso_time = (time_func_t)dlsym(vdso, "__vdso_time");
if (!vdso_time)
printf("[WARN]\tfailed to find time in vDSO\n");
vdso_getcpu = (getcpu_t)dlsym(vdso, "__vdso_getcpu");
if (!vdso_getcpu) {
/* getcpu() was never wired up in the 32-bit vDSO. */
printf("[%s]\tfailed to find getcpu in vDSO\n",
sizeof(long) == 8 ? "WARN" : "NOTE");
}
}
static int init_vsys(void)
{
#ifdef __x86_64__
int nerrs = 0;
FILE *maps;
char line[128];
bool found = false;
maps = fopen("/proc/self/maps", "r");
if (!maps) {
printf("[WARN]\tCould not open /proc/self/maps -- assuming vsyscall is r-x\n");
should_read_vsyscall = true;
return 0;
}
while (fgets(line, sizeof(line), maps)) {
char r, x;
void *start, *end;
char name[128];
if (sscanf(line, "%p-%p %c-%cp %*x %*x:%*x %*u %s",
&start, &end, &r, &x, name) != 5)
continue;
if (strcmp(name, "[vsyscall]"))
continue;
printf("\tvsyscall map: %s", line);
if (start != (void *)0xffffffffff600000 ||
end != (void *)0xffffffffff601000) {
printf("[FAIL]\taddress range is nonsense\n");
nerrs++;
}
printf("\tvsyscall permissions are %c-%c\n", r, x);
should_read_vsyscall = (r == 'r');
if (x != 'x') {
vgtod = NULL;
vtime = NULL;
vgetcpu = NULL;
}
found = true;
break;
}
fclose(maps);
if (!found) {
printf("\tno vsyscall map in /proc/self/maps\n");
should_read_vsyscall = false;
vgtod = NULL;
vtime = NULL;
vgetcpu = NULL;
}
return nerrs;
#else
return 0;
#endif
}
/* syscalls */
static inline long sys_gtod(struct timeval *tv, struct timezone *tz)
{
return syscall(SYS_gettimeofday, tv, tz);
}
static inline int sys_clock_gettime(clockid_t id, struct timespec *ts)
{
return syscall(SYS_clock_gettime, id, ts);
}
static inline long sys_time(time_t *t)
{
return syscall(SYS_time, t);
}
static inline long sys_getcpu(unsigned * cpu, unsigned * node,
void* cache)
{
return syscall(SYS_getcpu, cpu, node, cache);
}
static jmp_buf jmpbuf;
static void sigsegv(int sig, siginfo_t *info, void *ctx_void)
{
siglongjmp(jmpbuf, 1);
}
static double tv_diff(const struct timeval *a, const struct timeval *b)
{
return (double)(a->tv_sec - b->tv_sec) +
(double)((int)a->tv_usec - (int)b->tv_usec) * 1e-6;
}
static int check_gtod(const struct timeval *tv_sys1,
const struct timeval *tv_sys2,
const struct timezone *tz_sys,
const char *which,
const struct timeval *tv_other,
const struct timezone *tz_other)
{
int nerrs = 0;
double d1, d2;
if (tz_other && (tz_sys->tz_minuteswest != tz_other->tz_minuteswest || tz_sys->tz_dsttime != tz_other->tz_dsttime)) {
printf("[FAIL] %s tz mismatch\n", which);
nerrs++;
}
d1 = tv_diff(tv_other, tv_sys1);
d2 = tv_diff(tv_sys2, tv_other);
printf("\t%s time offsets: %lf %lf\n", which, d1, d2);
if (d1 < 0 || d2 < 0) {
printf("[FAIL]\t%s time was inconsistent with the syscall\n", which);
nerrs++;
} else {
printf("[OK]\t%s gettimeofday()'s timeval was okay\n", which);
}
return nerrs;
}
static int test_gtod(void)
{
struct timeval tv_sys1, tv_sys2, tv_vdso, tv_vsys;
struct timezone tz_sys, tz_vdso, tz_vsys;
long ret_vdso = -1;
long ret_vsys = -1;
int nerrs = 0;
printf("[RUN]\ttest gettimeofday()\n");
if (sys_gtod(&tv_sys1, &tz_sys) != 0)
err(1, "syscall gettimeofday");
if (vdso_gtod)
ret_vdso = vdso_gtod(&tv_vdso, &tz_vdso);
if (vgtod)
ret_vsys = vgtod(&tv_vsys, &tz_vsys);
if (sys_gtod(&tv_sys2, &tz_sys) != 0)
err(1, "syscall gettimeofday");
if (vdso_gtod) {
if (ret_vdso == 0) {
nerrs += check_gtod(&tv_sys1, &tv_sys2, &tz_sys, "vDSO", &tv_vdso, &tz_vdso);
} else {
printf("[FAIL]\tvDSO gettimeofday() failed: %ld\n", ret_vdso);
nerrs++;
}
}
if (vgtod) {
if (ret_vsys == 0) {
nerrs += check_gtod(&tv_sys1, &tv_sys2, &tz_sys, "vsyscall", &tv_vsys, &tz_vsys);
} else {
printf("[FAIL]\tvsys gettimeofday() failed: %ld\n", ret_vsys);
nerrs++;
}
}
return nerrs;
}
static int test_time(void) {
int nerrs = 0;
printf("[RUN]\ttest time()\n");
long t_sys1, t_sys2, t_vdso = 0, t_vsys = 0;
long t2_sys1 = -1, t2_sys2 = -1, t2_vdso = -1, t2_vsys = -1;
t_sys1 = sys_time(&t2_sys1);
if (vdso_time)
t_vdso = vdso_time(&t2_vdso);
if (vtime)
t_vsys = vtime(&t2_vsys);
t_sys2 = sys_time(&t2_sys2);
if (t_sys1 < 0 || t_sys1 != t2_sys1 || t_sys2 < 0 || t_sys2 != t2_sys2) {
printf("[FAIL]\tsyscall failed (ret1:%ld output1:%ld ret2:%ld output2:%ld)\n", t_sys1, t2_sys1, t_sys2, t2_sys2);
nerrs++;
return nerrs;
}
if (vdso_time) {
if (t_vdso < 0 || t_vdso != t2_vdso) {
printf("[FAIL]\tvDSO failed (ret:%ld output:%ld)\n", t_vdso, t2_vdso);
nerrs++;
} else if (t_vdso < t_sys1 || t_vdso > t_sys2) {
printf("[FAIL]\tvDSO returned the wrong time (%ld %ld %ld)\n", t_sys1, t_vdso, t_sys2);
nerrs++;
} else {
printf("[OK]\tvDSO time() is okay\n");
}
}
if (vtime) {
if (t_vsys < 0 || t_vsys != t2_vsys) {
printf("[FAIL]\tvsyscall failed (ret:%ld output:%ld)\n", t_vsys, t2_vsys);
nerrs++;
} else if (t_vsys < t_sys1 || t_vsys > t_sys2) {
printf("[FAIL]\tvsyscall returned the wrong time (%ld %ld %ld)\n", t_sys1, t_vsys, t_sys2);
nerrs++;
} else {
printf("[OK]\tvsyscall time() is okay\n");
}
}
return nerrs;
}
static int test_getcpu(int cpu)
{
int nerrs = 0;
long ret_sys, ret_vdso = -1, ret_vsys = -1;
printf("[RUN]\tgetcpu() on CPU %d\n", cpu);
cpu_set_t cpuset;
CPU_ZERO(&cpuset);
CPU_SET(cpu, &cpuset);
if (sched_setaffinity(0, sizeof(cpuset), &cpuset) != 0) {
printf("[SKIP]\tfailed to force CPU %d\n", cpu);
return nerrs;
}
unsigned cpu_sys, cpu_vdso, cpu_vsys, node_sys, node_vdso, node_vsys;
unsigned node = 0;
bool have_node = false;
ret_sys = sys_getcpu(&cpu_sys, &node_sys, 0);
if (vdso_getcpu)
ret_vdso = vdso_getcpu(&cpu_vdso, &node_vdso, 0);
if (vgetcpu)
ret_vsys = vgetcpu(&cpu_vsys, &node_vsys, 0);
if (ret_sys == 0) {
if (cpu_sys != cpu) {
printf("[FAIL]\tsyscall reported CPU %hu but should be %d\n", cpu_sys, cpu);
nerrs++;
}
have_node = true;
node = node_sys;
}
if (vdso_getcpu) {
if (ret_vdso) {
printf("[FAIL]\tvDSO getcpu() failed\n");
nerrs++;
} else {
if (!have_node) {
have_node = true;
node = node_vdso;
}
if (cpu_vdso != cpu) {
printf("[FAIL]\tvDSO reported CPU %hu but should be %d\n", cpu_vdso, cpu);
nerrs++;
} else {
printf("[OK]\tvDSO reported correct CPU\n");
}
if (node_vdso != node) {
printf("[FAIL]\tvDSO reported node %hu but should be %hu\n", node_vdso, node);
nerrs++;
} else {
printf("[OK]\tvDSO reported correct node\n");
}
}
}
if (vgetcpu) {
if (ret_vsys) {
printf("[FAIL]\tvsyscall getcpu() failed\n");
nerrs++;
} else {
if (!have_node) {
have_node = true;
node = node_vsys;
}
if (cpu_vsys != cpu) {
printf("[FAIL]\tvsyscall reported CPU %hu but should be %d\n", cpu_vsys, cpu);
nerrs++;
} else {
printf("[OK]\tvsyscall reported correct CPU\n");
}
if (node_vsys != node) {
printf("[FAIL]\tvsyscall reported node %hu but should be %hu\n", node_vsys, node);
nerrs++;
} else {
printf("[OK]\tvsyscall reported correct node\n");
}
}
}
return nerrs;
}
static int test_vsys_r(void)
{
#ifdef __x86_64__
printf("[RUN]\tChecking read access to the vsyscall page\n");
bool can_read;
if (sigsetjmp(jmpbuf, 1) == 0) {
*(volatile int *)0xffffffffff600000;
can_read = true;
} else {
can_read = false;
}
if (can_read && !should_read_vsyscall) {
printf("[FAIL]\tWe have read access, but we shouldn't\n");
return 1;
} else if (!can_read && should_read_vsyscall) {
printf("[FAIL]\tWe don't have read access, but we should\n");
return 1;
} else {
printf("[OK]\tgot expected result\n");
}
#endif
return 0;
}
#ifdef __x86_64__
#define X86_EFLAGS_TF (1UL << 8)
static volatile sig_atomic_t num_vsyscall_traps;
static unsigned long get_eflags(void)
{
unsigned long eflags;
asm volatile ("pushfq\n\tpopq %0" : "=rm" (eflags));
return eflags;
}
static void set_eflags(unsigned long eflags)
{
asm volatile ("pushq %0\n\tpopfq" : : "rm" (eflags) : "flags");
}
static void sigtrap(int sig, siginfo_t *info, void *ctx_void)
{
ucontext_t *ctx = (ucontext_t *)ctx_void;
unsigned long ip = ctx->uc_mcontext.gregs[REG_RIP];
if (((ip ^ 0xffffffffff600000UL) & ~0xfffUL) == 0)
num_vsyscall_traps++;
}
static int test_native_vsyscall(void)
{
time_t tmp;
bool is_native;
if (!vtime)
return 0;
printf("[RUN]\tchecking for native vsyscall\n");
sethandler(SIGTRAP, sigtrap, 0);
set_eflags(get_eflags() | X86_EFLAGS_TF);
vtime(&tmp);
set_eflags(get_eflags() & ~X86_EFLAGS_TF);
/*
* If vsyscalls are emulated, we expect a single trap in the
* vsyscall page -- the call instruction will trap with RIP
* pointing to the entry point before emulation takes over.
* In native mode, we expect two traps, since whatever code
* the vsyscall page contains will be more than just a ret
* instruction.
*/
is_native = (num_vsyscall_traps > 1);
printf("\tvsyscalls are %s (%d instructions in vsyscall page)\n",
(is_native ? "native" : "emulated"),
(int)num_vsyscall_traps);
return 0;
}
#endif
int main(int argc, char **argv)
{
int nerrs = 0;
init_vdso();
nerrs += init_vsys();
nerrs += test_gtod();
nerrs += test_time();
nerrs += test_getcpu(0);
nerrs += test_getcpu(1);
sethandler(SIGSEGV, sigsegv, 0);
nerrs += test_vsys_r();
#ifdef __x86_64__
nerrs += test_native_vsyscall();
#endif
return nerrs ? 1 : 0;
}