Commit 87f0df78 authored by Rick Edgecombe, committed by Dave Hansen

x86/shstk: Move arch detail comment out of core mm

The comment around VM_SHADOW_STACK in mm.h refers to a lot of x86
specific details that don't belong in a cross arch file. Remove them
from core mm and leave just the non-arch details.

Since the comment includes some useful details that would be good to
retain in the source somewhere, put the arch specific parts in
arch/x86/kernel/shstk.c near alloc_shstk(), where memory of this type is
allocated. Include a reference to the existence of the x86 details near
the VM_SHADOW_STACK definition in mm.h.

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/all/20230706233248.445713-1-rick.p.edgecombe%40intel.com
parent 67840ad0
@@ -72,6 +72,31 @@ static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
 	return 0;
 }
 
+/*
+ * VM_SHADOW_STACK will have a guard page. This helps userspace protect
+ * itself from attacks. The reasoning is as follows:
+ *
+ * The shadow stack pointer(SSP) is moved by CALL, RET, and INCSSPQ. The
+ * INCSSP instruction can increment the shadow stack pointer. It is the
+ * shadow stack analog of an instruction like:
+ *
+ *	addq $0x80, %rsp
+ *
+ * However, there is one important difference between an ADD on %rsp
+ * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
+ * memory of the first and last elements that were "popped". It can be
+ * thought of as acting like this:
+ *
+ *	READ_ONCE(ssp);       // read+discard top element on stack
+ *	ssp += nr_to_pop * 8; // move the shadow stack
+ *	READ_ONCE(ssp-8);     // read+discard last popped stack element
+ *
+ * The maximum distance INCSSP can move the SSP is 2040 bytes, before
+ * it would read the memory. Therefore a single page gap will be enough
+ * to prevent any operation from shifting the SSP to an adjacent stack,
+ * since it would have to land in the gap at least once, causing a
+ * fault.
+ */
 static unsigned long alloc_shstk(unsigned long addr, unsigned long size,
 				 unsigned long token_offset, bool set_res_tok)
 {
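To make the guard-page arithmetic in the comment above concrete, here is a minimal user-space C sketch (not kernel code, and not part of this commit). It models the described INCSSP behaviour: the pointer move plus the reads of the first and last "popped" slots. The helper name incssp_model, the 4096-byte page size, and the guard-gap addresses are illustrative assumptions; the 255-element ceiling follows from INCSSPQ taking an 8-bit element count, which is where the 2040-byte figure in the comment comes from.

#include <assert.h>
#include <stdio.h>

#define SHSTK_PAGE_SIZE 4096UL  /* assumed guard page size */
#define INCSSP_MAX_POP  255UL   /* INCSSPQ uses an 8-bit element count */

/* Model of the reads described above; returns the last address touched. */
static unsigned long incssp_model(unsigned long ssp, unsigned long nr_to_pop)
{
        unsigned long first_read = ssp;   /* READ_ONCE(ssp): top element */

        ssp += nr_to_pop * 8;             /* move the shadow stack pointer */

        (void)first_read;
        return ssp - 8;                   /* READ_ONCE(ssp - 8): last popped */
}

int main(void)
{
        unsigned long guard_start = 0x100000UL;  /* hypothetical gap location */
        unsigned long guard_end = guard_start + SHSTK_PAGE_SIZE;

        /*
         * Try the largest allowed INCSSP from every slot in the page just
         * below the gap: whenever the move reaches past the start of the
         * gap, the trailing read still lands inside it (and would fault).
         */
        for (unsigned long ssp = guard_start - SHSTK_PAGE_SIZE;
             ssp < guard_start; ssp += 8) {
                unsigned long last_read = incssp_model(ssp, INCSSP_MAX_POP);

                if (last_read >= guard_start)
                        assert(last_read < guard_end);
        }

        printf("max INCSSP move: %lu bytes < page size %lu bytes\n",
               INCSSP_MAX_POP * 8, SHSTK_PAGE_SIZE);
        return 0;
}

Since 2040 bytes is strictly smaller than the assumed 4096-byte page, a single INCSSP can never carry the SSP across the gap without one of its reads landing inside it, which is exactly the property the guard page relies on.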
@@ -343,33 +343,13 @@ extern unsigned int kobjsize(const void *objp);
 #ifdef CONFIG_X86_USER_SHADOW_STACK
 /*
- * This flag should not be set with VM_SHARED because of lack of support
- * core mm. It will also get a guard page. This helps userspace protect
- * itself from attacks. The reasoning is as follows:
+ * VM_SHADOW_STACK should not be set with VM_SHARED because of lack of
+ * support core mm.
  *
- * The shadow stack pointer(SSP) is moved by CALL, RET, and INCSSPQ. The
- * INCSSP instruction can increment the shadow stack pointer. It is the
- * shadow stack analog of an instruction like:
- *
- *	addq $0x80, %rsp
- *
- * However, there is one important difference between an ADD on %rsp
- * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
- * memory of the first and last elements that were "popped". It can be
- * thought of as acting like this:
- *
- *	READ_ONCE(ssp);       // read+discard top element on stack
- *	ssp += nr_to_pop * 8; // move the shadow stack
- *	READ_ONCE(ssp-8);     // read+discard last popped stack element
- *
- * The maximum distance INCSSP can move the SSP is 2040 bytes, before
- * it would read the memory. Therefore a single page gap will be enough
- * to prevent any operation from shifting the SSP to an adjacent stack,
- * since it would have to land in the gap at least once, causing a
- * fault.
- *
- * Prevent using INCSSP to move the SSP between shadow stacks by
- * having a PAGE_SIZE guard gap.
+ * These VMAs will get a single end guard page. This helps userspace protect
+ * itself from attacks. A single page is enough for current shadow stack archs
+ * (x86). See the comments near alloc_shstk() in arch/x86/kernel/shstk.c
+ * for more details on the guard size.
  */
 # define VM_SHADOW_STACK VM_HIGH_ARCH_5
 #else