27 Apr, 2014 (2 commits)
    • word-at-a-time: avoid undefined behaviour in zero_bytemask macro · ec6931b2
      Will Deacon authored
      The asm-generic, big-endian version of zero_bytemask creates a mask of
      bytes preceding the first zero-byte by left shifting ~0ul based on the
      position of the first zero byte.
      
      Unfortunately, if the first (top) byte is zero, the output of
      prep_zero_mask has only the top bit set, resulting in undefined C
      behaviour as we shift left by an amount equal to the width of the type.
      As it happens, GCC doesn't manage to spot this through the call to fls(),
      but the issue remains if architectures choose to implement their shift
      instructions differently.
      
      An example would be arch/arm/ (AArch32), where LSL Rd, Rn, #32 results
      in Rd == 0x0, whilst on arch/arm64 (AArch64) LSL Xd, Xn, #64 results in
      Xd == Xn.
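
      As a minimal userspace illustration of the shift counts involved
      (plain C, with GCC-builtin stand-ins for the kernel's fls()/__fls()
      helpers; not kernel code):

        #include <stdio.h>
        #include <limits.h>

        /* Stand-ins for the kernel helpers: fls(x) is the 1-based index
         * of the highest set bit, __fls(x) the 0-based index.  Both are
         * undefined for x == 0, which never happens here. */
        static int fls_ul(unsigned long x)
        {
                return (int)(sizeof(x) * CHAR_BIT) - __builtin_clzl(x);
        }

        static int __fls_ul(unsigned long x)
        {
                return fls_ul(x) - 1;
        }

        int main(void)
        {
                /* If the first (top) byte is zero, prep_zero_mask leaves
                 * only the top bit of the word set. */
                unsigned long mask = 1ul << (sizeof(mask) * CHAR_BIT - 1);

                /* fls(mask) equals the width of the type here, so
                 * '~0ul << fls(mask)' would be a full-width shift:
                 * undefined behaviour in C. */
                printf("fls(mask)   = %d (full-width shift: undefined)\n",
                       fls_ul(mask));

                /* __fls(mask) is one less, so '~1ul << __fls(mask)' is a
                 * defined shift and still yields the empty mask. */
                printf("__fls(mask) = %d, ~1ul << __fls(mask) = %#lx\n",
                       __fls_ul(mask), ~1ul << __fls_ul(mask));
                return 0;
        }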
      
      Rather than check explicitly for the problematic shift, this patch adds
      an extra shift by 1, replacing fls with __fls. Since zero_bytemask is
      never called with a zero argument (has_zero() is used to check the data
      first), we don't need to worry about calling __fls(0), which is
      undefined.
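
      In terms of the asm-generic macro, the change described amounts to
      something along these lines (a sketch reconstructed from the
      description above; the authoritative diff is commit ec6931b2):

        /* Before: fls() returns 1..BITS_PER_LONG, so the shift count can
         * equal the full width of the type when the top byte is zero. */
        #define zero_bytemask(mask) (~0ul << fls(mask))

        /* After: __fls() returns fls() - 1, and the extra shift by 1 is
         * folded into the ~1ul constant, so the variable shift count
         * never exceeds BITS_PER_LONG - 1. */
        #define zero_bytemask(mask) (~1ul << __fls(mask))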
      
      Cc: <stable@vger.kernel.org>
      Cc: Victor Kamensky <victor.kamensky@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'safe-dirty-tlb-flush' · ac6c9e2b
      Linus Torvalds authored
      This merges the patch to fix a possible loss of the dirty bit on
      munmap() or madvise(MADV_DONTNEED).  If there are concurrent writers
      on other CPUs that have the unmapped/unneeded page in their TLBs,
      their writes to the page could be lost if a third CPU raced with the
      TLB flush and did a page_mkclean() before the page was fully written.
      
      Admittedly, if you munmap() or madvise(MADV_DONTNEED) an area _while_
      another thread is still busy writing to it, you deserve all the lost
      writes you could get.  But we kernel people hold ourselves to higher
      quality standards than "crazy people deserve to lose", because, well,
      we've seen people do all kinds of crazy things.
      
      So let's get it right, just because we can, and we don't have to worry
      about it.
      
      * safe-dirty-tlb-flush:
        mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts
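
      The split reorders things so that the TLB flush always completes
      before the batched pages are freed.  A minimal, self-contained sketch
      of that shape (stubbed types and illustrative helper names;
      tlb_flush() and free_batched_pages() here are stand-ins, not the
      kernel's real internals):

        struct mmu_gather {
                int need_flush;
                /* ... batched pages awaiting freeing ... */
        };

        static void tlb_flush(struct mmu_gather *tlb)
        {
                /* stand-in: IPI the other CPUs and invalidate their
                 * TLB entries for the unmapped range */
                (void)tlb;
        }

        static void free_batched_pages(struct mmu_gather *tlb)
        {
                /* stand-in: hand the unmapped pages back to the
                 * page allocator */
                (void)tlb;
        }

        /* Flush-only half: after this returns, no CPU can still write
         * to the unmapped pages through a stale TLB entry. */
        static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
        {
                if (!tlb->need_flush)
                        return;
                tlb->need_flush = 0;
                tlb_flush(tlb);
        }

        /* Freeing half: only safe once the flush has happened, or a
         * racing writer's dirty data could be lost to page_mkclean(). */
        static void tlb_flush_mmu_free(struct mmu_gather *tlb)
        {
                free_batched_pages(tlb);
        }

        static void tlb_flush_mmu(struct mmu_gather *tlb)
        {
                tlb_flush_mmu_tlbonly(tlb);
                tlb_flush_mmu_free(tlb);
        }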