    locking/rwsem: Support optimistic spinning · 4fc828e2
    Davidlohr Bueso authored
    We have reached the point where our mutexes are quite fine tuned
    for a number of situations. This includes the use of heuristics
    and optimistic spinning, based on MCS locking techniques.
    
    Exclusive ownership of a read-write semaphore is, conceptually,
    just about the same as a mutex, making them close cousins. To
    this end we need to make them both perform similarly, and
    right now, rwsems are simply not up to it. This was discovered
    both by reverting commit 4fc3f1d6 (mm/rmap, migration: Make
    rmap_walk_anon() and try_to_unmap_anon() more scalable) and,
    similarly, by converting some other mutexes (i.e. i_mmap_mutex) to
    rwsems. This creates a situation where users have to choose
    between a rwsem and a mutex, taking into account this important
    performance difference. Specifically, the biggest difference between
    the two locks is that when we fail to acquire a mutex in the fastpath,
    optimistic spinning comes into play and we can avoid a large
    amount of unnecessary sleeping and the overhead of moving tasks in
    and out of the wait queue. Rwsems have no such logic.
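    
    A minimal sketch of that heuristic, using placeholder names
    (struct lock, lock->owner, owner_on_cpu(), try_acquire()) rather
    than the actual mutex code: a contending task keeps spinning only
    while the current owner is running on a CPU, and gives up and goes
    to sleep otherwise.
    
        /*
         * Illustrative only: a simplified optimistic-spinning loop, not the
         * kernel's mutex implementation. All helpers here are placeholders.
         */
        static bool optimistic_spin(struct lock *lock)
        {
                struct task_struct *owner;
    
                for (;;) {
                        owner = READ_ONCE(lock->owner);
                        if (!owner && try_acquire(lock))
                                return true;      /* lock was free and we got it */
                        if (owner && !owner_on_cpu(owner))
                                return false;     /* owner is off-CPU: sleep instead */
                        cpu_relax();              /* owner still runs: keep spinning */
                }
        }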
    
    This patch, based on work from Tim Chen and me, adds support
    for write-side optimistic spinning when the lock is contended.
    It also includes support for the recently added cancelable MCS
    locking for adaptive spinning. Note that this is only applicable
    to the xadd method; the spinlock-based rwsem variant remains intact.
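    
    Roughly, the write-side slowpath then spins as sketched below before
    the writer is queued. osq_lock()/osq_unlock() are the cancelable MCS
    lock, while owner_running(), try_write_lock() and the exact layout of
    the osq field are simplified stand-ins, not the precise helpers added
    by this patch.
    
        /*
         * Rough sketch of write-side optimistic spinning; helper names are
         * simplified stand-ins, not the exact functions added by this patch.
         */
        static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
        {
                bool taken = false;
    
                preempt_disable();
                if (!osq_lock(&sem->osq))         /* cancelable MCS: one spinner per lock */
                        goto out;
    
                for (;;) {
                        if (try_write_lock(sem)) {  /* retry the xadd acquisition */
                                taken = true;
                                break;
                        }
                        if (!owner_running(sem))    /* owner off-CPU: stop spinning */
                                break;
                        cpu_relax();
                }
                osq_unlock(&sem->osq);
        out:
                preempt_enable();
                return taken;                     /* false: fall back to the wait queue */
        }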
    
    Allowing optimistic spinning before putting the writer on the wait
    queue reduces wait queue contention and provides a greater chance
    for the rwsem to get acquired. With these changes, rwsem is on par
    with mutex. The performance benefits can be seen on a number of
    workloads. For instance, on an 8-socket, 80-core, 64-bit Westmere box,
    aim7 shows the following improvements in throughput:
    
     +--------------+---------------------+-----------------+
     |   Workload   | throughput-increase | number of users |
     +--------------+---------------------+-----------------+
     | alltests     | 20%                 | >1000           |
     | custom       | 27%, 60%            | 10-100, >1000   |
     | high_systime | 36%, 30%            | >100, >1000     |
     | shared       | 58%, 29%            | 10-100, >1000   |
     +--------------+---------------------+-----------------+
    
    There was also improvement on smaller systems, such as a quad-core
    x86-64 laptop running a 30 GB PostgreSQL (pgbench) workload, with up
    to a 60% increase in throughput for over 50 clients. Additionally,
    benefits were also noticed in exim (mail server) workloads.
    Furthermore, no performance regressions have been seen at all.
    
    Based-on-work-from: Tim Chen <tim.c.chen@linux.intel.com>
    Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
    [peterz: rej fixup due to comment patches, sched/rt.h header]
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: Alex Shi <alex.shi@linaro.org>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Michel Lespinasse <walken@google.com>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Peter Hurley <peter@hurleysoftware.com>
    Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
    Cc: Jason Low <jason.low2@hp.com>
    Cc: Aswin Chandramouleeswaran <aswin@hp.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: "Scott J Norton" <scott.norton@hp.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Chris Mason <clm@fb.com>
    Cc: Josef Bacik <jbacik@fusionio.com>
    Link: http://lkml.kernel.org/r/1399055055.6275.15.camel@buesod1.americas.hpqcorp.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>