1. 31 Jul, 2017 2 commits
    • arm64: ensure extension of smp_store_release value · 43cd60ae
      Mark Rutland authored
      [ Upstream commit 994870be ]
      
      When an inline assembly operand's type is narrower than the register it
      is allocated to, the least significant bits of the register (up to the
      operand type's width) are valid, and any other bits are permitted to
      contain any arbitrary value. This aligns with the AAPCS64 parameter
      passing rules.
      
      Our __smp_store_release() implementation does not account for this, and
      implicitly assumes that operands have been zero-extended to the width of
      the type being stored to. Thus, we may store unknown values to memory
      when the value type is narrower than the pointer type (e.g. when storing
      a char to a long).
      
      This patch fixes the issue by casting the value operand to the same
      width as the pointer operand in all cases, which ensures that the value
      is zero-extended as we expect. We use the same union trickery as
      __smp_load_acquire() and {READ,WRITE}_ONCE() to avoid GCC complaining
      that pointers are potentially cast to narrower-width integers in
      unreachable paths.
      
      A whitespace issue at the top of __smp_store_release() is also
      corrected.
      
      No changes are necessary for __smp_load_acquire(). Load instructions
      implicitly clear any upper bits of the register, and the compiler will
      only consider the least significant bits of the register as valid
      regardless.
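      
      As an illustration of the pattern, a minimal sketch (simplified,
      not the exact kernel macro: store_release is a placeholder name,
      only the 1- and 8-byte cases are shown, arm64 GCC/Clang only):
      
        #define store_release(p, v)                                   \
        do {                                                          \
                /* Convert v to typeof(*p) first: a narrow value is */\
                /* extended at the C level before the asm runs.     */\
                union { typeof(*(p)) __val;                           \
                        char __c[sizeof(*(p))]; } __u =               \
                        { .__val = (typeof(*(p)))(v) };               \
                switch (sizeof(*(p))) {                               \
                case 1: /* stlrb only uses the low 8 bits */          \
                        asm volatile ("stlrb %w1, %0"                 \
                                : "=Q" (*(p))                         \
                                : "r" (*(unsigned char *)__u.__c)     \
                                : "memory");                          \
                        break;                                        \
                case 8: /* stlr stores all 64 register bits */        \
                        asm volatile ("stlr %1, %0"                   \
                                : "=Q" (*(p))                         \
                                : "r" (*(unsigned long long *)__u.__c)\
                                : "memory");                          \
                        break;                                        \
                }                                                     \
        } while (0)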
      
      Fixes: 47933ad4 ("arch: Introduce smp_load_acquire(), smp_store_release()")
      Fixes: 878a84d5 ("arm64: add missing data types in smp_load_acquire/smp_store_release")
      Cc: <stable@vger.kernel.org> # 3.14.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
    • arm64: armv8_deprecated: ensure extension of addr · 13e28a1b
      Mark Rutland authored
      [ Upstream commit 55de49f9 ]
      
      Our compat swp emulation holds the compat user address in an unsigned
      int, which it passes to __user_swpX_asm(). When a 32-bit value is passed
      in a register, the upper 32 bits of the register are unknown, and we
      must extend the value to 64 bits before we can use it as a base address.
      
      This patch casts the address to unsigned long to ensure it has been
      suitably extended, avoiding the potential issue, and silencing a related
      warning from clang.
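      
      The underlying rule can be seen in a standalone sketch (illustrative
      only, not the kernel's __user_swpX_asm() helper; load_word is a
      hypothetical name):
      
        #include <stdint.h>
      
        /* When a 32-bit variable is used as a 64-bit base address in
         * inline asm, cast it to unsigned long so the compiler emits
         * the extension; per the AAPCS64, the upper 32 bits of the
         * register are otherwise unknown. */
        static inline uint32_t load_word(uint32_t compat_addr)
        {
                uint32_t val;
      
                asm volatile ("ldr %w0, [%1]"
                              : "=r" (val)
                              : "r" ((unsigned long)compat_addr)
                              : "memory");
                return val;
        }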
      
      Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
      Cc: <stable@vger.kernel.org> # 3.19.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
  2. 26 Jul, 2017 11 commits
  3. 29 Jun, 2017 1 commit
  4. 28 Jun, 2017 2 commits
    • mm: fix new crash in unmapped_area_topdown() · dcda279d
      Hugh Dickins authored
      [ Upstream commit f4cb767d ]
      
      Trinity gets kernel BUG at mm/mmap.c:1963! in about 3 minutes of
      mmap testing.  That's the VM_BUG_ON(gap_end < gap_start) at the
      end of unmapped_area_topdown().  Linus points out how MAP_FIXED
      (which does not have to respect our stack guard gap intentions)
      could result in gap_end below gap_start there.  Fix that, and
      the similar case in its alternative, unmapped_area().
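      
      The shape of the fix, as a hedged sketch (not the exact kernel
      diff; gap_fits is a hypothetical helper): since a MAP_FIXED
      mapping may sit inside another vma's guard gap, gap_end can fall
      below gap_start, so test for that before the unsigned subtraction
      rather than tripping the VM_BUG_ON:
      
        /* A gap [gap_start, gap_end) is usable only if it is
         * non-empty and long enough; gap_end > gap_start guards
         * the unsigned underflow in the subtraction. */
        static int gap_fits(unsigned long gap_start,
                            unsigned long gap_end,
                            unsigned long length)
        {
                return gap_end > gap_start &&
                       gap_end - gap_start >= length;
        }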
      
      Cc: stable@vger.kernel.org
      Fixes: 1be7107f ("mm: larger stack guard gap, between vmas")
      Reported-by: Dave Jones <davej@codemonkey.org.uk>
      Debugged-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
    • mm: larger stack guard gap, between vmas · 8b18c6b2
      Sasha Levin authored
      [ Upstream commit 1be7107f ]
      
      The stack guard page is a useful feature that reduces the risk of
      the stack smashing into a different mapping. We have been using a
      single-page gap, which is sufficient to prevent the stack from
      being adjacent to a different mapping. But this seems insufficient
      in light of actual stack usage in userspace: e.g. glibc uses
      alloca() allocations as large as 64kB in many commonly used
      functions, and other code uses constructs like
      gid_t buffer[NGROUPS_MAX] (256kB) or on-stack strings sized by
      MAX_ARG_STRLEN.
      
      This is especially dangerous for suid binaries running with the
      default unlimited stack size limit, because such applications can
      be tricked into consuming a large portion of the stack, after
      which a single glibc call can jump clean over the guard page.
      These attacks are not theoretical, unfortunately.
      
      Make those attacks less likely by increasing the stack guard gap
      to 1MB (on systems with 4k pages; the gap scales with the page
      size, since systems with larger base pages may cap stack
      allocations in PAGE_SIZE units), which should cover larger
      alloca() and VLA stack allocations. This is obviously not a
      complete fix, because the problem is somewhat inherent, but it
      shrinks the attack surface considerably.
      
      One could argue that the gap size should be configurable from
      userspace, but that can be added later, once somebody finds the
      new 1MB default wrong for some special-case application. For now,
      add a kernel command line option (stack_guard_gap) to specify the
      stack gap size in page units.
      
      Implementation-wise, first delete all the old stack guard page
      code: although we could get away with accounting one extra page in
      a stack vma, accounting a larger gap can break userspace. Case in
      point: a program run with "ulimit -S -v 20000" failed when the 1MB
      gap was counted towards RLIMIT_AS; similar problems would arise
      with RLIMIT_MLOCK and strict non-overcommit mode.
      
      Instead of keeping the gap inside the stack vma, maintain it as a
      gap between vmas: use vm_start_gap() in place of vm_start (or
      vm_end_gap() in place of vm_end for VM_GROWSUP) in just those few
      places that need to respect the gap, mainly
      arch_get_unmapped_area() and the vma tree's subtree_gap support
      for it; see the sketch below.
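      
      For illustration, a minimal standalone sketch of the
      vm_start_gap() idea (field names, the VM_GROWSDOWN flag value and
      the 1MB default mirror the patch, but this simplified model is
      not the kernel code):
      
        struct vma { unsigned long vm_start, vm_end, vm_flags; };
        #define VM_GROWSDOWN 0x0100UL
      
        /* Default gap: 256 pages, i.e. 1MB with 4k pages;
         * overridable via the stack_guard_gap= command line option. */
        static unsigned long stack_guard_gap = 256UL << 12;
      
        /* Lowest address the stack vma effectively occupies once its
         * guard gap below is included; clamps to 0 on underflow. */
        static unsigned long vma_start_gap(const struct vma *vma)
        {
                unsigned long start = vma->vm_start;
      
                if (vma->vm_flags & VM_GROWSDOWN) {
                        start -= stack_guard_gap;
                        if (start > vma->vm_start) /* wrapped */
                                start = 0;
                }
                return start;
        }
      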
      Original-patch-by: Oleg Nesterov <oleg@redhat.com>
      Original-patch-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Helge Deller <deller@gmx.de> # parisc
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
  5. 26 Jun, 2017 24 commits