Commit c9cf9833 authored by Davidlohr Bueso, committed by Sasha Levin

ipc/shm: Fix shmat mmap nil-page protection

[ Upstream commit 95e91b83 ]

The issue is described here, with a nice testcase:

    https://bugzilla.kernel.org/show_bug.cgi?id=192931

The problem is that shmat() calls do_mmap_pgoff() with MAP_FIXED and
with an address that SHM_RND has rounded down to 0.  For the regular
mmap case, the protection mentioned above is that the kernel gets to
generate the address -- arch_get_unmapped_area() will always check for
MAP_FIXED and return that address.  So by the time we do
security_mmap_addr(0), things get funky for shmat().
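
For reference, a minimal userspace sketch of the triggering call
pattern (this is not the bugzilla testcase itself; the one-page segment
size and the 0x100 attach address are illustrative assumptions): attach
at an address below SHMLBA with SHM_RND set, so the request is rounded
down to the nil page.

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
            /* One-page private segment, owner-only access. */
            int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
            if (id < 0) {
                    perror("shmget");
                    return 1;
            }

            /*
             * Any address in (0, SHMLBA) rounds down to 0 with SHM_RND,
             * turning this into a MAP_FIXED mapping at the nil page.
             */
            void *p = shmat(id, (void *)0x100, SHM_RND);
            if (p == (void *)-1)
                    perror("shmat");               /* with this patch: refused (EINVAL here) */
            else
                    printf("attached at %p\n", p); /* pre-patch, root gets the nil page */

            shmctl(id, IPC_RMID, NULL);
            return 0;
    }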

The testcase itself shows that while a regular user crashes, root will
not have a problem attaching a nil-page.  There are two possible fixes
for this.  The first, which this patch takes, is to simply allow root
to crash as well -- this is also regular mmap behavior, i.e. what
happens when hacking up the testcase and adding mmap(...  |MAP_FIXED).
While this approach is the safer option, the second alternative is to
ignore SHM_RND if the rounded address is 0, thus only passing the
MAP_SHARED flags.  This would make the behavior of shmat() identical to
the mmap() case.  The downside of this is obviously user visible, but
it does make sense in that it keeps the semantics of a rounded-down 0
address consistent with mmap.
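
For comparison, the "regular mmap behavior" referred to above can be
seen with a sketch like the one below (again only an illustration, not
part of the patch or the LTP runs): an explicit MAP_FIXED request at
address 0, which unprivileged callers are normally denied by the
mmap_min_addr check.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Ask for the nil page explicitly, as the hacked-up testcase does. */
            void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
            if (p == MAP_FAILED)
                    perror("mmap");    /* typically EPERM for non-root (vm.mmap_min_addr) */
            else
                    printf("mapped the nil page at %p\n", p);
            return 0;
    }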

Passes the shm-related LTP tests.

Link: http://lkml.kernel.org/r/1486050195-18629-1-git-send-email-dave@stgolabs.net
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Reported-by: Gareth Evans <gareth.evans@contextis.co.uk>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
parent 1827f7e6
ipc/shm.c
@@ -1083,8 +1083,8 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int, cmd, struct shmid_ds __user *, buf)
  * "raddr" thing points to kernel space, and there has to be a wrapper around
  * this.
  */
-long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
-              unsigned long shmlba)
+long do_shmat(int shmid, char __user *shmaddr, int shmflg,
+              ulong *raddr, unsigned long shmlba)
 {
         struct shmid_kernel *shp;
         unsigned long addr;
@@ -1105,8 +1105,13 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
                 goto out;
         else if ((addr = (ulong)shmaddr)) {
                 if (addr & (shmlba - 1)) {
-                        if (shmflg & SHM_RND)
-                                addr &= ~(shmlba - 1);  /* round down */
+                        /*
+                         * Round down to the nearest multiple of shmlba.
+                         * For sane do_mmap_pgoff() parameters, avoid
+                         * round downs that trigger nil-page and MAP_FIXED.
+                         */
+                        if ((shmflg & SHM_RND) && addr >= shmlba)
+                                addr &= ~(shmlba - 1);
                         else
 #ifndef __ARCH_FORCE_SHMLBA
                                 if (addr & ~PAGE_MASK)