1. 22 Sep, 2014 13 commits
  2. 16 Sep, 2014 3 commits
  3. 15 Sep, 2014 8 commits
    • Merge tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap · 2324067f
      Linus Torvalds authored
      Pull regmap fix from Mark Brown:
       "Fix registers file in debugfs
      
        Ensure that the mode reported for the registers file in debugfs is
        accurate by marking it as read only when the define to enable writes
        has not been set.  This is on the edge of being a bug fix but it's
        debugfs and it makes it much easier for users to spot what's going
        wrong when they forget to enable writeability"
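
      As a rough sketch of the idea (illustrative, not the actual regmap
      source; the helper below is made up, and the define name follows
      regmap's REGMAP_ALLOW_WRITE_DEBUGFS switch): the mode passed to
      debugfs_create_file() simply has to follow the compile-time define
      that enables writes.

        #include <linux/debugfs.h>

        #ifdef REGMAP_ALLOW_WRITE_DEBUGFS
        #define REGISTERS_FILE_MODE 0600   /* writes compiled in: report rw */
        #else
        #define REGISTERS_FILE_MODE 0400   /* writes compiled out: read only */
        #endif

        /* Hypothetical helper: create the debugfs "registers" file with a
         * mode that matches what the fops will actually accept. */
        static void create_registers_file(struct dentry *parent, void *data,
                                          const struct file_operations *fops)
        {
                debugfs_create_file("registers", REGISTERS_FILE_MODE,
                                    parent, data, fops);
        }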
      
      * tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
        regmap: Fix debugfs-file 'registers' mode
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input · b9217862
      Linus Torvalds authored
      Pull input updates from Dmitry Torokhov:
       "A few quirks for i8042/AT keyboards and a small device tree doc fix
        for Atmel Touchscreens"
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
        Input: atmel_mxt_ts - fix merge in DT documentation
        Input: i8042 - also set the firmware id for MUXed ports
        Input: i8042 - add nomux quirk for Avatar AVIU-145A6
        Input: i8042 - add Fujitsu U574 to no_timeout dmi table
        Input: atkbd - do not try 'deactivate' keyboard on any LG laptops
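
      Several of the i8042 entries above key their quirks off DMI data. As a
      hedged sketch of what such a table entry generally looks like (the
      vendor and product strings here are invented, not the ones these
      commits actually add):

        #include <linux/dmi.h>

        /* Hypothetical quirk table: machines matching these DMI strings get
         * the "nomux" treatment.  Checked once with dmi_check_system(). */
        static const struct dmi_system_id example_nomux_table[] = {
                {
                        .ident = "Example Vendor Example Model",
                        .matches = {
                                DMI_MATCH(DMI_SYS_VENDOR, "Example Vendor"),
                                DMI_MATCH(DMI_PRODUCT_NAME, "Example Model"),
                        },
                },
                { }     /* terminator */
        };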
    • ia64: Fix syscall number for memfd_create · f3b59331
      Tony Luck authored
      Cut & paste typo from the line above.
      Reported-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vfs: simplify and shrink stack frame of link_path_walk() · d6bb3e90
      Linus Torvalds authored
      Commit 9226b5b4 ("vfs: avoid non-forwarding large load after small
      store in path lookup") made link_path_walk() always access the
      "hash_len" field as a single 64-bit entity, in order to avoid mixed size
      accesses to the members.
      
      However, what I didn't notice was that this effectively makes the whole
      "struct qstr this" basically redundant.  We already
      explicitly track the "const char *name", and if we just use "u64
      hash_len" instead of "long len", there is nothing else left of the
      "struct qstr".
      
      We do end up wanting the "struct qstr" if we have a filesystem with a
      "d_hash()" function, but that's a rare case, and we might as well then
      just squirrel away the name and hash_len at that point.
      
      End result: fewer live variables in the loop, a smaller stack frame, and
      better code generation.  And we don't need to pass in pointer variables
      to helper functions any more, because the return value contains all the
      relevant information.  So this removes more lines than it adds, and the
      source code is clearer too.
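
      As a small illustration of the packing described above (a sketch in the
      spirit of the kernel's hashlen helpers, not the exact fs/namei.c code):

        #include <linux/types.h>

        /* Keep the component length and hash in one 64-bit value so the walk
         * loop has a single live variable, and unpack only on demand. */
        static inline u64 make_hash_len(u32 hash, u32 len)
        {
                return ((u64)len << 32) | hash;
        }

        static inline u32 hash_of(u64 hash_len) { return (u32)hash_len; }
        static inline u32 len_of(u64 hash_len)  { return (u32)(hash_len >> 32); }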
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 · 3630056d
      Linus Torvalds authored
      Pull crypto fixes from Herbert Xu:
       "This fixes the newly added drbg generator so that it actually works on
        32-bit machines.  Previously the code was only tested on 64-bit and on
        32-bit it overflowed and simply doesn't work"
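
      As a generic illustration of that kind of 32-bit breakage (not the drbg
      code itself): a maximum-value limit that only fits in 64 bits silently
      collapses when held in a native unsigned long on a 32-bit build.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* Intended limit: 2^35.  An explicit 64-bit type behaves the
                 * same on every architecture. */
                uint64_t limit = (uint64_t)1 << 35;

                /* The same limit held in unsigned long truncates to 0 on a
                 * 32-bit build, so a range check against it misbehaves. */
                unsigned long truncated = (unsigned long)limit;

                size_t len = 4096;
                printf("limit=%llu truncated=%lu check fires=%d\n",
                       (unsigned long long)limit, truncated,
                       (int)(len > truncated));
                return 0;
        }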
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
        crypto: drbg - remove check for uninitialized DRBG handle
        crypto: drbg - backport "fix maximum value checks on 32 bit systems"
    • Linux 3.17-rc5 · 9e82bf01
      Linus Torvalds authored
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs · 83373f70
      Linus Torvalds authored
      Pull vfs fixes from Al Viro:
       "double iput() on failure exit in lustre, racy removal of spliced
        dentries from ->s_anon in __d_materialise_dentry() plus a bunch of
        assorted RCU pathwalk fixes"
      
      The RCU pathwalk fixes end up fixing a couple of cases where we
      incorrectly dropped out of RCU walking, due to incorrect initialization
      and testing of the sequence locks in some corner cases.  Since dropping
      out of RCU walk mode forces the slow locked accesses, those corner cases
      slowed down quite dramatically.
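
      For reference, the read side these fixes deal with follows the generic
      seqlock retry pattern sketched below (this is the pattern, not the
      namei code); comparing against a wrong or uninitialised sequence value
      makes the retry check misfire and forces a drop out of RCU mode.

        #include <linux/seqlock.h>

        /* Sample the sequence count, do the lockless reads, then retry if a
         * writer ran in between. */
        static unsigned long read_value(seqlock_t *lock, unsigned long *shared)
        {
                unsigned long val;
                unsigned int seq;

                do {
                        seq = read_seqbegin(lock);
                        val = *shared;
                } while (read_seqretry(lock, seq));

                return val;
        }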
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
        be careful with nd->inode in path_init() and follow_dotdot_rcu()
        don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu()
        fix bogus read_seqretry() checks introduced in b37199e6
        move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon)
        [fix] lustre: d_make_root() does iput() on dentry allocation failure
    • vfs: avoid non-forwarding large load after small store in path lookup · 9226b5b4
      Linus Torvalds authored
      The performance regression that Josef Bacik reported in the pathname
      lookup (see commit 99d263d4 "vfs: fix bad hashing of dentries") made
      me look at performance stability of the dcache code, just to verify that
      the problem was actually fixed.  That turned up a few other problems in
      this area.
      
      There are a few cases where we exit RCU lookup mode and go to the slow
      serializing case when we shouldn't; Al has fixed those and they'll come
      in with the next VFS pull.
      
      But my performance verification also shows that link_path_walk() turns
      out to have a very unfortunate 32-bit store of the length and hash of
      the name we look up, followed by a 64-bit read of the combined hash_len
      field.  That screws up the processor store to load forwarding, causing
      an unnecessary hiccup in this critical routine.
      
      It's caused by the ugly calling convention for the "hash_name()"
      function, and easily fixed by just making hash_name() fill in the whole
      'struct qstr' rather than passing it a pointer to just the hash value.
      
      With that, the profile for this function looks much smoother.
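
      A minimal sketch of the before/after store pattern (the field names
      follow struct qstr, but this is an illustration rather than the exact
      namei code):

        #include <linux/types.h>

        struct qstr_like {
                union {
                        struct { u32 hash; u32 len; };
                        u64 hash_len;
                };
                const char *name;
        };

        /* Bad: two 32-bit stores followed by a 64-bit load of the same bytes.
         * The store buffer cannot forward the narrow stores into the wide
         * load, so the load stalls until the stores drain. */
        static u64 slow(struct qstr_like *q, u32 hash, u32 len)
        {
                q->hash = hash;
                q->len = len;
                return q->hash_len;
        }

        /* Good: store and load the field with the same 64-bit width. */
        static u64 fast(struct qstr_like *q, u32 hash, u32 len)
        {
                q->hash_len = ((u64)len << 32) | hash;
                return q->hash_len;
        }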
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 14 Sep, 2014 12 commits
  5. 13 Sep, 2014 4 commits
    • Merge branches 'locking-urgent-for-linus' and 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 1536340e
      Linus Torvalds authored
      
      Pull futex and timer fixes from Thomas Gleixner:
       "A oneliner bugfix for the jinxed futex code:
      
         - Drop hash bucket lock in the error exit path.  I really could slap
           myself for introducing that bug while fixing all the other horror
           in that code three months ago ...
      
        and the timer department is not too proud about the following fixes:
      
         - Deal with a long-standing rounding bug in the timeval to jiffies
           conversion.  It's a real issue and this fix fell through the cracks
           for quite some time.
      
         - Another round of alarmtimer fixes.  Finally this code gets used
           more widely and the subtle issues hidden for quite some time are
           noticed and fixed.  Nothing really exciting, just the itty bitty
           details which bite the serious users here and there"
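
      On the timeval-to-jiffies item: the conversion is easy to get subtly
      wrong, and the usual rule is that a requested timeout is rounded up so
      a sleep never returns early. A simplified userspace illustration (HZ is
      just an example value and these helpers are not kernel API):

        #include <stdio.h>

        #define HZ 100                        /* example: one jiffy = 10 ms */
        #define USEC_PER_SEC 1000000L
        #define USEC_PER_JIFFY (USEC_PER_SEC / HZ)

        /* Truncating: a 15 ms request becomes 1 jiffy (10 ms), i.e. the
         * caller can wake up earlier than asked for. */
        static long usecs_to_jiffies_truncating(long usec)
        {
                return usec / USEC_PER_JIFFY;
        }

        /* Rounding up keeps the timeout at least as long as requested. */
        static long usecs_to_jiffies_rounding_up(long usec)
        {
                return (usec + USEC_PER_JIFFY - 1) / USEC_PER_JIFFY;
        }

        int main(void)
        {
                long usec = 15000;
                printf("truncating: %ld jiffies, rounding up: %ld jiffies\n",
                       usecs_to_jiffies_truncating(usec),
                       usecs_to_jiffies_rounding_up(usec));
                return 0;
        }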
      
      * 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        futex: Unlock hb->lock in futex_wait_requeue_pi() error path
      
      * 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        alarmtimer: Lock k_itimer during timer callback
        alarmtimer: Do not signal SIGEV_NONE timers
        alarmtimer: Return relative times in timer_gettime
        jiffies: Fix timeval conversion to jiffies
    • parisc: Implement new LWS CAS supporting 64 bit operations. · 89206491
      Guy Martin authored
      The current LWS CAS only works correctly for 32-bit operands. The new LWS allows
      for CAS operations of variable size.
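
      In rough terms the new helper dispatches on the operand size instead of
      assuming 32 bits. A C-level sketch of that shape (the real thing is
      hand-written assembly in the parisc syscall gateway, so treat this as
      an illustration only):

        #include <stdint.h>

        /* Hypothetical interface: compare-and-swap whose operand width is
         * chosen by the caller.  Returns non-zero when the swap happened. */
        static int cas_var_size(void *ptr, uint64_t old, uint64_t new, int size)
        {
                switch (size) {
                case 1:
                        return __sync_bool_compare_and_swap((uint8_t *)ptr,
                                        (uint8_t)old, (uint8_t)new);
                case 2:
                        return __sync_bool_compare_and_swap((uint16_t *)ptr,
                                        (uint16_t)old, (uint16_t)new);
                case 4:
                        return __sync_bool_compare_and_swap((uint32_t *)ptr,
                                        (uint32_t)old, (uint32_t)new);
                case 8:
                        return __sync_bool_compare_and_swap((uint64_t *)ptr,
                                        old, new);
                default:
                        return 0;
                }
        }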
      Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
      Cc: <stable@vger.kernel.org> # 3.13+
      Signed-off-by: Helge Deller <deller@gmx.de>
    • vfs: fix bad hashing of dentries · 99d263d4
      Linus Torvalds authored
      Josef Bacik found a performance regression between 3.2 and 3.10 and
      narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long'
      accesses for dcache name comparison and hashing"). He reports:
      
       "The test case is essentially
      
            for (i = 0; i < 1000000; i++)
                    mkdir("a$i");
      
        On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
        dir/sec with 3.10.  This is because we spend waaaaay more time in
        __d_lookup on 3.10 than in 3.2.
      
        The new hashing function for strings is suboptimal for <
        sizeof(unsigned long) string names (and hell even > sizeof(unsigned
        long) string names that I've tested).  I broke out the old hashing
        function and the new one into a userspace helper to get real numbers
        and this is what I'm getting:
      
            Old hash table had 1000000 entries, 0 dupes, 0 max dupes
            New hash table had 12628 entries, 987372 dupes, 900 max dupes
            We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
      
        My test does the hash, and then does the d_hash into an integer pointer
        array the same size as the dentry hash table on my system, and then
        just increments the value at the address we got to see how many
        entries we overlap with.
      
        As you can see the old hash function ended up with all 1 million
        entries in their own bucket, whereas the new one they are only
        distributed among ~12.5k buckets, which is why we're using so much
        more CPU in __d_lookup".
      
      The reason for this hash regression is two-fold:
      
       - On 64-bit architectures the down-mixing of the original 64-bit
         word-at-a-time hash into the final 32-bit hash value is very
         simplistic and suboptimal, and just adds the two 32-bit parts
         together.
      
         In particular, because there is no bit shuffling and the mixing
         boundary is also a byte boundary, similar character patterns in the
         low and high word easily end up just canceling each other out.
      
       - the old byte-at-a-time hash mixed each byte into the final hash as it
         hashed the path component name, resulting in the low bits of the hash
         generally being a good source of hash data.  That is not true for the
         word-at-a-time case, and the hash data is distributed among all the
         bits.
      
      The fix is the same in both cases: do a better job of mixing the bits up
      and using as much of the hash data as possible.  We already have the
      "hash_32|64()" functions to do that.
      Reported-by: Josef Bacik <jbacik@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: linux-fsdevel@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Make hash_64() use a 64-bit multiply when appropriate · 23d0db76
      Linus Torvalds authored
      The hash_64() function historically does the multiply by the
      GOLDEN_RATIO_PRIME_64 number with explicit shifts and adds, because
      unlike the 32-bit case, gcc seems unable to turn the constant multiply
      into the more appropriate shift and adds when required.
      
      However, that means that we generate those shifts and adds even when the
      architecture has a fast multiplier, and could just do it better in
      hardware.
      
      Use the now-cleaned-up CONFIG_ARCH_HAS_FAST_MULTIPLIER (together with
      "is it a 64-bit architecture") to decide whether to use an integer
      multiply or the explicit sequence of shift/add instructions.
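
      The resulting shape of the function is roughly the following (a sketch,
      not a verbatim copy of include/linux/hash.h):

        #include <linux/hash.h>         /* GOLDEN_RATIO_PRIME_64 */

        static inline u64 hash_64_sketch(u64 val, unsigned int bits)
        {
                u64 hash = val;

        #if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
                /* One multiply where the hardware multiplier is fast. */
                hash *= GOLDEN_RATIO_PRIME_64;
        #else
                /* Equivalent shift-and-add sequence for everyone else; gcc
                 * does not generate this on its own for the 64-bit constant. */
                u64 n = hash;
                n <<= 18; hash -= n;
                n <<= 33; hash -= n;
                n <<= 3;  hash += n;
                n <<= 3;  hash -= n;
                n <<= 4;  hash += n;
                n <<= 2;  hash += n;
        #endif

                /* The high bits are the best mixed, so use them. */
                return hash >> (64 - bits);
        }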
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>