1. 21 Nov, 2007 2 commits
    • Fix compat futex hangs. · 52c7418e
      David Miller authored
      [FUTEX]: Fix address computation in compat code.
      
      [ Upstream commit: 3c5fd9c7 ]
      
      compat_exit_robust_list() computes a pointer to the
      futex entry in userspace as follows:
      
      	(void __user *)entry + futex_offset
      
      'entry' is a 'struct robust_list __user *', and
      'futex_offset' is a 'compat_long_t' (typically a 's32').
      
      Things explode if the 32-bit sign bit is set in futex_offset.
      
      Type promotion sign extends futex_offset to a 64-bit value before
      adding it to 'entry'.
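
      The effect is easy to reproduce with plain userspace arithmetic.  The
      sketch below is only illustrative (the values are chosen to land on the
      addresses quoted further down, and this is not the actual kernel patch);
      the second computation does the addition in 32 bits and zero-extends the
      result, which is what a compat_ptr()-style fix amounts to:

      	#include <stdio.h>
      	#include <stdint.h>

      	int main(void)
      	{
      		uint64_t entry = 0x10;	/* illustrative pointer value, upper bits clear */
      		int32_t futex_offset = (int32_t)0xf7f16bc8;	/* s32 with the sign bit set */

      		/* Buggy: the s32 is sign-extended to 64 bits before the add. */
      		uint64_t bad = entry + (int64_t)futex_offset;

      		/* Safe: add in 32 bits, then zero-extend the result. */
      		uint64_t good = (uint32_t)entry + (uint32_t)futex_offset;

      		printf("bad  = 0x%016llx\n", (unsigned long long)bad);	/* 0xfffffffff7f16bd8 */
      		printf("good = 0x%016llx\n", (unsigned long long)good);	/* 0x00000000f7f16bd8 */
      		return 0;
      	}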
      
      This triggered a problem on sparc64 running 32-bit applications which
      would lock up a cpu looping forever in the fault handling for the
      userspace load in handle_futex_death().
      
      Compat userspace runs with address masking (wherein the cpu zeros out
      the top 32-bits of every effective address given to a memory operation
      instruction) so the sparc64 fault handler accounts for this by
      zeroing out the top 32-bits of the fault address too.
      
      Since the kernel properly uses the compat_uptr interfaces, kernel-side
      accesses to compat userspace work too, as they only use addresses with
      the top 32 bits clear.
      
      Because of this compat futex layer bug we get into the following loop
      when executing the get_user() load near the top of handle_futex_death():
      
      1) load from address '0xfffffffff7f16bd8', FAULT
      2) fault handler clears upper 32-bits, processes fault
         for address '0xf7f16bd8' which succeeds
      3) goto #1
      
      I want to thank Bernd Zeimetz, Josip Rodin, and Fabio Massimo Di Nitto
      for their tireless efforts helping me track down this bug.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    • SLUB: Fix memory leak by not reusing cpu_slab · 06283bea
      Christoph Lameter authored
      backport of 05aa3450 from Linus's tree.
      
      SLUB: Fix memory leak by not reusing cpu_slab
      
      Fix the memory leak that may occur when we attempt to reuse a cpu_slab
      that was allocated while we reenabled interrupts in order to be able to
      grow a slab cache. The per cpu freelist may contain objects and in that
      situation we may overwrite the per cpu freelist pointer, losing objects.
      This only occurs if we find that the concurrently allocated slab fits
      our allocation needs.
      
      If we simply always deactivate the slab then the freelist will be properly
      reintegrated and the memory leak will go away.
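
      As a toy illustration (this is not the real SLUB code; every name below
      is made up for the example), the difference between overwriting the per
      cpu freelist pointer and deactivating it first can be modelled with two
      tiny freelists:

      	#include <stdio.h>

      	struct object { struct object *next; };
      	struct slab_page { struct object *freelist; };	/* free objects on the page */
      	struct cpu_cache { struct object *freelist; };	/* per-CPU fast freelist */

      	/* Count the objects still reachable from a freelist. */
      	static int count(struct object *o)
      	{
      		int n = 0;
      		for (; o; o = o->next)
      			n++;
      		return n;
      	}

      	/* "Deactivate": push the per-CPU freelist back onto its slab page. */
      	static void deactivate(struct cpu_cache *c, struct slab_page *page)
      	{
      		while (c->freelist) {
      			struct object *o = c->freelist;
      			c->freelist = o->next;
      			o->next = page->freelist;
      			page->freelist = o;
      		}
      	}

      	int main(void)
      	{
      		struct object objs[4] = { { 0 } };
      		struct slab_page old_page = { 0 }, new_page = { 0 };
      		struct cpu_cache c = { 0 };

      		/* Two free objects currently on the per-CPU freelist (from old_page). */
      		objs[1].next = &objs[0];
      		c.freelist = &objs[1];

      		/* Two free objects on the slab allocated while interrupts were on. */
      		objs[3].next = &objs[2];
      		new_page.freelist = &objs[3];

      		/* Buggy reuse would just do  c.freelist = new_page.freelist;  and
      		 * the two objects chained from the old freelist would become
      		 * unreachable -- the leak.  The fix deactivates first: */
      		deactivate(&c, &old_page);
      		c.freelist = new_page.freelist;

      		printf("reachable: cpu=%d, old page=%d\n",
      		       count(c.freelist), count(old_page.freelist));	/* 2 and 2, nothing lost */
      		return 0;
      	}
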
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>