1. 11 Sep, 2015 1 commit
  2. 08 Sep, 2015 1 commit
  3. 13 Aug, 2015 1 commit
  4. 12 Aug, 2015 10 commits
  5. 05 Aug, 2015 1 commit
  6. 03 Aug, 2015 22 commits
    • jump label, locking/static_keys: Update docs · 412758cb
      Jason Baron authored
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: benh@kernel.crashing.org
      Cc: bp@alien8.de
      Cc: davem@davemloft.net
      Cc: ddaney@caviumnetworks.com
      Cc: heiko.carstens@de.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: liuj97@gmail.com
      Cc: luto@amacapital.net
      Cc: michael@ellerman.id.au
      Cc: rabin@rab.in
      Cc: ralf@linux-mips.org
      Cc: rostedt@goodmis.org
      Cc: vbabka@suse.cz
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/6b50f2f6423a2244f37f4b1d2d6c211b9dcdf4f8.1438227999.git.jbaron@akamai.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/static_keys: Provide a selftest · 2bf9e0ab
      Ingo Molnar authored
      The 'jump label' self-test is in reality testing static keys - rename things
      accordingly.
      
      Also prettify the code in various places while at it.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: benh@kernel.crashing.org
      Cc: bp@alien8.de
      Cc: davem@davemloft.net
      Cc: ddaney@caviumnetworks.com
      Cc: heiko.carstens@de.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: liuj97@gmail.com
      Cc: luto@amacapital.net
      Cc: michael@ellerman.id.au
      Cc: rabin@rab.in
      Cc: ralf@linux-mips.org
      Cc: rostedt@goodmis.org
      Cc: vbabka@suse.cz
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/0c091ecebd78a879ed8a71835d205a691a75ab4e.1438227999.git.jbaron@akamai.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label: Provide a self-test · 579e1acb
      Jason Baron authored
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: benh@kernel.crashing.org
      Cc: bp@alien8.de
      Cc: davem@davemloft.net
      Cc: ddaney@caviumnetworks.com
      Cc: heiko.carstens@de.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: liuj97@gmail.com
      Cc: luto@amacapital.net
      Cc: michael@ellerman.id.au
      Cc: rabin@rab.in
      Cc: ralf@linux-mips.org
      Cc: rostedt@goodmis.org
      Cc: shuahkh@osg.samsung.com
      Cc: vbabka@suse.cz
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/0c091ecebd78a879ed8a71835d205a691a75ab4e.1438227999.git.jbaron@akamai.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • s390/uaccess, locking/static_keys: employ static_branch_likely() · ed79e946
      Heiko Carstens authored
      Use the new static_branch_likely() primitive to make sure that the
      most likely case is executed without taking an unconditional branch.
      This wasn't possible with the old jump label primitives.
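
      For illustration, a minimal sketch of the pattern this change adopts
      (a generic have_feature key, not the literal s390 code):

        /* A key that starts out false but, once enabled at boot, guards
         * the common case.  The old static_key_false() could only treat
         * the enabled side as the out-of-line, unlikely path. */
        static DEFINE_STATIC_KEY_FALSE(have_feature);

        if (static_branch_likely(&have_feature))
                fast_path();    /* falls through a NOP once enabled */
        else
                slow_path();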
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150729064600.GB3953@osiris
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86, tsc, locking/static_keys: Employ static_branch_likely() · 3bbfafb7
      Peter Zijlstra authored
      Because of the static_key restrictions we had to take an unconditional
      jump for the most likely case, causing $I bloat.
      
      Rewrite to use the new primitives.
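
      A rough before/after sketch (assuming the __use_tsc key from the x86
      TSC code; condensed, not the literal diff):

        /* Before: once enabled, the NOP is patched to a JMP and the
         * likely body lives out of line -- hence the $I bloat. */
        if (static_key_false(&__use_tsc))
                return rdtsc();

        /* After: once enabled, the likely body sits inline behind a NOP. */
        if (static_branch_likely(&__use_tsc))
                return rdtsc();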
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/static_keys: Add selftest · 1987c947
      Peter Zijlstra authored
      Add a little selftest that validates all combinations.
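
      A condensed sketch of what such a selftest asserts (hypothetical key
      names; the real test is guarded by a config option):

        static DEFINE_STATIC_KEY_TRUE(sk_true);
        static DEFINE_STATIC_KEY_FALSE(sk_false);

        /* Initial state, through both accessors. */
        WARN_ON(!static_branch_likely(&sk_true));
        WARN_ON(!static_branch_unlikely(&sk_true));
        WARN_ON(static_branch_likely(&sk_false));

        /* Flip the keys and re-check. */
        static_branch_disable(&sk_true);
        WARN_ON(static_branch_likely(&sk_true));
        static_branch_enable(&sk_false);
        WARN_ON(!static_branch_likely(&sk_false));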
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/static_keys: Add a new static_key interface · 11276d53
      Peter Zijlstra authored
      There are various problems and short-comings with the current
      static_key interface:
      
       - static_key_{true,false}() read like a branch depending on the key
         value, instead of the actual likely/unlikely branch depending on
         init value.
      
       - static_key_{true,false}() are, as stated above, tied to the
         static_key init values STATIC_KEY_INIT_{TRUE,FALSE}.
      
       - we're limited to the 2 (out of 4) possible options that compile to
         a default NOP because that's what our arch_static_branch() assembly
         emits.
      
      So provide a new static_key interface:
      
        DEFINE_STATIC_KEY_TRUE(name);
        DEFINE_STATIC_KEY_FALSE(name);
      
      These define keys of two different types, with an initial true/false
      value.
      
      Then allow:
      
         static_branch_likely()
         static_branch_unlikely()
      
      to take a key of either type and emit the right instruction for the
      case.
      
      This means adding a second arch_static_branch_jump() assembly helper
      which emits a JMP by default.
      
      In order to determine the right instruction for the right state,
      encode the branch type in the LSB of jump_entry::key.
      
      This is the final step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      
      ... but it also allows new static key combinations that will give us
      performance enhancements in the subsequent patches.
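
      Taken together, the four combinations look as follows (illustrative
      sketch; the comment names the instruction emitted for the initial
      key state):

        DEFINE_STATIC_KEY_TRUE(key_true);
        DEFINE_STATIC_KEY_FALSE(key_false);

        if (static_branch_likely(&key_true))    { }  /* NOP */
        if (static_branch_unlikely(&key_true))  { }  /* JMP */
        if (static_branch_likely(&key_false))   { }  /* JMP */
        if (static_branch_unlikely(&key_false)) { }  /* NOP */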
      
      Tested-by: Rabin Vincent <rabin@rab.in> # arm
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> # ppc
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # s390
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/static_keys: Rework update logic · 706249c2
      Peter Zijlstra authored
      Instead of spreading the branch_default logic all over the place,
      concentrate it into the one jump_label_type() function.
      
      This does mean we need to actually increment/decrement the enabled
      count _before_ calling the update path, otherwise jump_label_type()
      will not see the right state.
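
      Condensed, the resulting ordering looks like this (a sketch with the
      locking elided, not the literal code):

        void static_key_slow_inc(struct static_key *key)
        {
                /* Count first, then update, so that jump_label_type()
                 * computes the branch from the new state. */
                if (atomic_inc_return(&key->enabled) == 1)
                        jump_label_update(key);
        }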
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/static_keys: Add static_key_{en,dis}able() helpers · e33886b3
      Peter Zijlstra authored
      Add two helpers to make it easier to treat the refcount as boolean.
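
      Roughly (a sketch close to the helpers this patch adds; the WARN
      catches use of the boolean interface on a key used as a counter):

        static inline void static_key_enable(struct static_key *key)
        {
                int count = static_key_count(key);

                WARN_ON_ONCE(count < 0 || count > 1);

                if (!count)
                        static_key_slow_inc(key);
        }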
      Suggested-by: Jason Baron <jasonbaron0@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label: Add jump_entry_key() helper · 7dcfd915
      Peter Zijlstra authored
      Avoid some casting with a helper; this also prepares the way for
      overloading the LSB of jump_entry::key.
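
      Roughly (a sketch; a later patch masks the LSB off here once it
      carries the branch type):

        static inline struct static_key *jump_entry_key(struct jump_entry *entry)
        {
                return (struct static_key *)(unsigned long)entry->key;
        }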
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label, locking/static_keys: Rename JUMP_LABEL_TYPE_* and related helpers to the static_key* pattern · a1efb01f
      Peter Zijlstra authored
      
      Rename the JUMP_LABEL_TYPE_* macros to be JUMP_TYPE_* and move the
      inline helpers into kernel/jump_label.c, since that's the only place
      they're ever used.
      
      Also rename the helpers where it's all about static keys.
      
      This is the second step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label: Rename JUMP_LABEL_{EN,DIS}ABLE to JUMP_LABEL_{JMP,NOP} · 76b235c6
      Peter Zijlstra authored
      Since, with the branch_default bits, we have already stepped away
      from 'ENABLE is a JMP and DISABLE is a NOP', and are going to stray
      further from that, rename them to make it all clearer.
      
      This way we don't mix multiple levels of logic attributes, but have a
      plain 'physical' name for what the current instruction patching status
      of a jump label is.
      
      This is a first step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      [ Beefed up the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Merge branch 'x86/asm' into locking/core · f320ead7
      Ingo Molnar authored
      Upcoming changes to static keys are interacting/conflicting with the following
      pending TSC commits in tip:x86/asm:
      
        4ea1636b x86/asm/tsc: Rename native_read_tsc() to rdtsc()
        ...
      
      So merge it into the locking tree to have a smoother resolution.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking, arch: use WRITE_ONCE()/READ_ONCE() in smp_store_release()/smp_load_acquire() · 76695af2
      Andrey Konovalov authored
      Replace ACCESS_ONCE() macro in smp_store_release() and smp_load_acquire()
      with WRITE_ONCE() and READ_ONCE() on x86, arm, arm64, ia64, metag, mips,
      powerpc, s390, sparc and asm-generic since ACCESS_ONCE() does not work
      reliably on non-scalar types.
      
      WRITE_ONCE() and READ_ONCE() were introduced in the following commits:
      
        230fa253 ("kernel: Provide READ_ONCE and ASSIGN_ONCE")
        43239cbe ("kernel: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)")
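
      The asm-generic flavour then looks roughly like this (condensed):

        #define smp_store_release(p, v)                         \
        do {                                                    \
                compiletime_assert_atomic_type(*p);             \
                smp_mb();                                       \
                WRITE_ONCE(*p, v);                              \
        } while (0)

        #define smp_load_acquire(p)                             \
        ({                                                      \
                typeof(*p) ___p1 = READ_ONCE(*p);               \
                compiletime_assert_atomic_type(*p);             \
                smp_mb();                                       \
                ___p1;                                          \
        })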
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arch@vger.kernel.org
      Link: http://lkml.kernel.org/r/1438528264-714-1-git-send-email-andreyknvl@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/pvqspinlock: Only kick CPU at unlock time · 75d22702
      Waiman Long authored
      For an over-committed guest with more vCPUs than physical CPUs
      available, it is possible that a vCPU may be kicked twice before
      getting the lock - once before it becomes queue head and once again
      before it gets the lock. All this CPU kicking and halting (VMEXIT)
      can be expensive and slow down system performance.
      
      This patch adds a new vCPU state (vcpu_hashed) which enables the code
      to delay CPU kicking until unlock time. Once this state is set,
      the new lock holder will set _Q_SLOW_VAL and fill in the hash table
      on behalf of the halted queue head vCPU. The original vcpu_halted
      state will be used by pv_wait_node() only to differentiate other
      queue nodes from the queue head.
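
      The vCPU state space then looks roughly like (a sketch of the enum):

        enum vcpu_state {
                vcpu_running = 0,
                vcpu_halted,    /* pv_wait_node(): non-head queue nodes */
                vcpu_hashed,    /* queue head: hashed, kick deferred to unlock */
        };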
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1436647018-49734-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qrwlock: Reduce reader/writer to reader lock transfer latency · ffffeaf3
      Waiman Long authored
      Currently, a reader will check first to make sure that the writer mode
      byte is cleared before incrementing the reader count. That waiting is
      not really necessary. It increases the latency of the reader/writer
      to reader transition and reduces reader performance.
      
      This patch eliminates that waiting. It also has the side effect
      of reducing the chance of writer lock stealing and improving the
      fairness of the lock. Using a locking microbenchmark, a 10-thread,
      5M-iteration locking loop of mostly readers (R/W ratio = 10,000:1)
      shows the following performance numbers on a Haswell-EX box:
      
              Kernel          Locking Rate (Kops/s)
              ------          ---------------------
              4.1.1               15,063,081
              4.1.1+patch         17,241,552  (+14.4%)
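
      In the reader slowpath this corresponds roughly to the following
      change (an abstract sketch using the qrwlock count layout, not the
      literal diff):

        /* Before: wait for the writer byte to clear, then add a reader. */
        while (atomic_read(&lock->cnts) & _QW_WMASK)
                cpu_relax();
        atomic_add(_QR_BIAS, &lock->cnts);

        /* After: add the reader count immediately; only spin while a
         * writer actually holds (rather than merely waits for) the lock. */
        cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
        while ((cnts & _QW_WMASK) == _QW_LOCKED)
                cnts = atomic_read(&lock->cnts);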
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/1436459543-29126-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/pvqspinlock: Order pv_unhash() after cmpxchg() on unlock slowpath · 3b3fdf10
      Will Deacon authored
      When we unlock in __pv_queued_spin_unlock(), a failed cmpxchg() on the lock
      value indicates that we need to take the slow-path and unhash the
      corresponding node blocked on the lock.
      
      Since a failed cmpxchg() does not provide any memory-ordering guarantees,
      it is possible that the node data could be read before the cmpxchg() on
      weakly-ordered architectures and therefore return a stale value, leading
      to hash corruption and/or a BUG().
      
      This patch adds an smp_rmb() following the failed cmpxchg() operation, so
      that the unhashing is ordered after the lock has been checked.
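
      Condensed, the fix sits here in __pv_queued_spin_unlock() (a sketch,
      not the literal code):

        locked = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
        if (likely(locked == _Q_LOCKED_VAL))
                return;         /* uncontended fast path */

        /* The failed cmpxchg() above implies no barrier; order the
         * hash-table reads in pv_unhash() after the lock-word load. */
        smp_rmb();

        node = pv_unhash(lock);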
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      [ Added more comments. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Waiman Long <Waiman.Long@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steve Capper <Steve.Capper@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150713155830.GL2632@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/Documentation: Clarify failed cmpxchg() memory ordering semantics · ed2de9f7
      Will Deacon authored
      A failed cmpxchg does not provide any memory ordering guarantees, a
      property that is used to optimise the cmpxchg implementations on Alpha,
      PowerPC and arm64.
      
      This patch updates atomic_ops.txt and memory-barriers.txt to reflect
      this.
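
      The rule being documented, as an illustrative sketch (the failure
      handler is hypothetical):

        if (cmpxchg(&x, 0, 1) != 0) {
                /* The failed cmpxchg() implies no barrier: if later
                 * accesses must be ordered against the read of x, an
                 * explicit barrier is needed. */
                smp_mb();
                handle_contention();    /* hypothetical */
        }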
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <waiman.long@hp.com>
      Link: http://lkml.kernel.org/r/20150716151006.GH26390@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking: Clean up pvqspinlock warning · 0b792bf5
      Peter Zijlstra authored
       - Rename the on-stack variable to match the data structure variable,
      
       - place the cmpxchg back under the comment that explains it,
      
       - clean up the WARN() statement to avoid superfluous conditionals
         and line-breaks.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Merge branch 'locking/urgent', tag 'v4.2-rc5' into locking/core, to pick up fixes before applying new changes · 3a7651e6
      Ingo Molnar authored
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Linux 4.2-rc5 · 74d33293
      Linus Torvalds authored
    • Merge tag 'powerpc-4.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux · d08c3181
      Linus Torvalds authored
      Pull powerpc fixes from Michael Ellerman:
       - TCE table memory calculation fix from Alexey
       - Build fix for ans-lcd from Luis
       - Unbalanced IRQ warning fix from Alistair
      
      * tag 'powerpc-4.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
        powerpc/eeh-powernv: Fix unbalanced IRQ warning
        macintosh/ans-lcd: fix build failure after module_init/exit relocation
        powerpc/powernv/ioda2: Fix calculation for memory allocated for TCE table
  7. 02 Aug, 2015 4 commits