Commit 40a35503 authored by Michael Kerrisk, committed by Thomas Gleixner

doc: Fix misnamed FUTEX_CMP_REQUEUE_PI op constants

FUTEX_CMP_REQUEUE_PI was misnamed in two different ways:
FUTEX_REQUEUE_CMP_PI and FUTEX_REQUEUE_PI. The existence of two
different misnamings leaves the reader wondering if we are talking
about two different operations. Furthermore, the misnamings mean
that grepping the source for the correct name (which doesn't
appear at all) won't find this documentation file.
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Reviewed-by: Darren Hart <darren@dvhart.com>
Link: http://lkml.kernel.org/r/54B9663D.9070000@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent a97af339
@@ -98,7 +98,7 @@ rt_mutex_start_proxy_lock() and rt_mutex_finish_proxy_lock(), which
 allow the requeue code to acquire an uncontended rt_mutex on behalf
 of the waiter and to enqueue the waiter on a contended rt_mutex.
 Two new system calls provide the kernel<->user interface to
-requeue_pi: FUTEX_WAIT_REQUEUE_PI and FUTEX_REQUEUE_CMP_PI.
+requeue_pi: FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI.
 
 FUTEX_WAIT_REQUEUE_PI is called by the waiter (pthread_cond_wait()
 and pthread_cond_timedwait()) to block on the initial futex and wait
@@ -107,7 +107,7 @@ result of a high-speed collision between futex_wait() and
 futex_lock_pi(), with some extra logic to check for the additional
 wake-up scenarios.
 
-FUTEX_REQUEUE_CMP_PI is called by the waker
+FUTEX_CMP_REQUEUE_PI is called by the waker
 (pthread_cond_broadcast() and pthread_cond_signal()) to requeue and
 possibly wake the waiting tasks. Internally, this system call is
 still handled by futex_requeue (by passing requeue_pi=1). Before
@@ -120,12 +120,12 @@ task as a waiter on the underlying rt_mutex. It is possible that
 the lock can be acquired at this stage as well, if so, the next
 waiter is woken to finish the acquisition of the lock.
 
-FUTEX_REQUEUE_PI accepts nr_wake and nr_requeue as arguments, but
+FUTEX_CMP_REQUEUE_PI accepts nr_wake and nr_requeue as arguments, but
 their sum is all that really matters. futex_requeue() will wake or
 requeue up to nr_wake + nr_requeue tasks. It will wake only as many
 tasks as it can acquire the lock for, which in the majority of cases
 should be 0 as good programming practice dictates that the caller of
 either pthread_cond_broadcast() or pthread_cond_signal() acquire the
-mutex prior to making the call. FUTEX_REQUEUE_PI requires that
+mutex prior to making the call. FUTEX_CMP_REQUEUE_PI requires that
 nr_wake=1. nr_requeue should be INT_MAX for broadcast and 0 for
 signal.
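
For context (not part of this commit, just an illustrative sketch): the two ops map
onto the raw futex() system call roughly as below. The shared-word names cond_futex
and mutex_futex and the seq parameter are invented for illustration; only the op
constants, the argument slots, and the nr_wake=1 rule come from the documentation
changed above.

#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <limits.h>
#include <stdint.h>
#include <stddef.h>

/* Shared words; in a real condition variable these would live in
 * (possibly process-shared) memory. Names are illustrative only. */
static uint32_t cond_futex;   /* the initial (wait-queue) futex       */
static uint32_t mutex_futex;  /* the PI-aware futex backing the mutex */

/* Waiter side (roughly what pthread_cond_wait() would do): block on
 * cond_futex while it still holds 'seq', and wait to be requeued onto
 * the PI futex by a later FUTEX_CMP_REQUEUE_PI. */
static long wait_requeue_pi(uint32_t seq)
{
        return syscall(SYS_futex, &cond_futex, FUTEX_WAIT_REQUEUE_PI,
                       seq, NULL /* no timeout */, &mutex_futex, 0);
}

/* Waker side (roughly what pthread_cond_broadcast() would do): wake at
 * most one waiter (nr_wake must be 1 for this op) and requeue up to
 * INT_MAX others onto the PI futex, provided cond_futex still equals
 * 'seq'. For requeue-style ops the fourth syscall slot carries
 * nr_requeue rather than a timeout; a signal-style waker would pass 0
 * there instead of INT_MAX. */
static long cmp_requeue_pi_broadcast(uint32_t seq)
{
        return syscall(SYS_futex, &cond_futex, FUTEX_CMP_REQUEUE_PI,
                       1 /* nr_wake */,
                       (unsigned long)INT_MAX /* nr_requeue */,
                       &mutex_futex, seq /* expected *cond_futex */);
}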