    locking/rwsem: Optimize write lock by reducing operations in slowpath · c0fcb6c2
    Jason Low authored
    When acquiring the rwsem write lock in the slowpath, we first try
    to cmpxchg the count from RWSEM_WAITING_BIAS to
    RWSEM_ACTIVE_WRITE_BIAS. When that succeeds, we then atomically
    add RWSEM_WAITING_BIAS back in cases where there are other tasks
    on the wait list. This causes write lock operations to often
    issue multiple atomic operations.
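    The pre-patch logic in rwsem_try_write_lock() was roughly the
    following (a simplified sketch of the old sequence, not the
    verbatim kernel source):

        static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
        {
                /* 1st atomic op: try to take the lock for writing */
                if (count == RWSEM_WAITING_BIAS &&
                    cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
                                    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
                        /*
                         * 2nd atomic op: put the waiting bias back when other
                         * tasks are still queued on the wait list.
                         */
                        if (!list_is_singular(&sem->wait_list))
                                rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
                        rwsem_set_owner(sem);
                        return true;
                }

                return false;
        }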
    
    We can instead perform the list_is_singular() check first, and
    then set the count accordingly, so that we issue at most one
    atomic operation when acquiring the write lock and reduce
    unnecessary cacheline contention.
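    After the change, the target count is chosen up front and a
    single cmpxchg installs it; roughly (again a sketch of the idea
    rather than the exact patch hunk):

        static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
        {
                if (count != RWSEM_WAITING_BIAS)
                        return false;

                /*
                 * Decide the final count up front: keep the waiting bias only
                 * when other tasks remain on the wait list.
                 */
                count = list_is_singular(&sem->wait_list) ?
                                RWSEM_ACTIVE_WRITE_BIAS :
                                RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;

                /* Single atomic operation acquires the lock with the right count */
                if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count)
                                                == RWSEM_WAITING_BIAS) {
                        rwsem_set_owner(sem);
                        return true;
                }

                return false;
        }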
    Signed-off-by: Jason Low <jason.low2@hpe.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Waiman Long <Waiman.Long@hpe.com>
    Acked-by: Davidlohr Bueso <dave@stgolabs.net>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Jason Low <jason.low2@hp.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Hurley <peter@hurleysoftware.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Terry Rudd <terry.rudd@hpe.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tim Chen <tim.c.chen@linux.intel.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Link: http://lkml.kernel.org/r/1463445486-16078-2-git-send-email-jason.low2@hpe.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>