Commit 3d3a0d1b authored by Paul E. McKenney

rcu: Point to documentation of ordering guarantees

Add comments to synchronize_rcu() and friends that point to
Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
parent 2f20de99
@@ -1000,6 +1000,9 @@ EXPORT_SYMBOL_GPL(synchronize_srcu_expedited);
  * synchronize_srcu(), srcu_read_lock(), and srcu_read_unlock() are
  * passed the same srcu_struct structure.
  *
+ * Implementation of these memory-ordering guarantees is similar to
+ * that of synchronize_rcu().
+ *
  * If SRCU is likely idle, expedite the first request. This semantic
  * was provided by Classic SRCU, and is relied upon by its users, so TREE
  * SRCU must also provide it. Note that detecting idleness is heuristic
...
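As a minimal sketch of the update-side pattern that relies on the synchronize_srcu() guarantees referenced above (the struct foo, gp, reader, updater, and my_srcu names here are hypothetical, not from this commit): a reader that sees the new pointer also sees the fully initialized object, and the updater may free the old object only once synchronize_srcu() returns.

#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/srcu.h>

struct foo {
	int a;
};

static struct foo __rcu *gp;		/* hypothetical SRCU-protected pointer */
DEFINE_SRCU(my_srcu);

/* Reader: the srcu_read_lock()/srcu_read_unlock() pair marks the
 * critical section that synchronize_srcu() must wait for. */
static void reader(void)
{
	int idx = srcu_read_lock(&my_srcu);
	struct foo *p = srcu_dereference(gp, &my_srcu);

	if (p)
		pr_info("a=%d\n", p->a);	/* sees a fully initialized object */
	srcu_read_unlock(&my_srcu, idx);
}

/* Updater: publish the new version, wait out pre-existing readers,
 * and only then free the old version. */
static void updater(struct foo *newp)
{
	struct foo *oldp = rcu_dereference_protected(gp, 1);

	rcu_assign_pointer(gp, newp);
	synchronize_srcu(&my_srcu);	/* ordering guarantees noted above */
	kfree(oldp);			/* no reader can still hold oldp */
}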
@@ -3084,6 +3084,9 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
  * between the call to call_rcu() and the invocation of "func()" -- even
  * if CPU A and CPU B are the same CPU (but again only if the system has
  * more than one CPU).
+ *
+ * Implementation of these memory-ordering guarantees is described here:
+ * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
  */
 void call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
@@ -3751,6 +3754,9 @@ static int rcu_blocking_is_gp(void)
  * to have executed a full memory barrier during the execution of
  * synchronize_rcu() -- even if CPU A and CPU B are the same CPU (but
  * again only if the system has more than one CPU).
+ *
+ * Implementation of these memory-ordering guarantees is described here:
+ * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
  */
 void synchronize_rcu(void)
 {
@@ -3821,7 +3827,7 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu);
 /**
  * poll_state_synchronize_rcu - Conditionally wait for an RCU grace period
  *
- * @oldstate: return from call to get_state_synchronize_rcu() or start_poll_synchronize_rcu()
+ * @oldstate: value from get_state_synchronize_rcu() or start_poll_synchronize_rcu()
  *
  * If a full RCU grace period has elapsed since the earlier call from
  * which oldstate was obtained, return @true, otherwise return @false.
@@ -3837,6 +3843,11 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu);
  * (many hours even on 32-bit systems) should check them occasionally
  * and either refresh them or set a flag indicating that the grace period
  * has completed.
+ *
+ * This function provides the same memory-ordering guarantees that
+ * would be provided by a synchronize_rcu() that was invoked at the call
+ * to the function that provided @oldstate, and that returned at the end
+ * of this function.
  */
 bool poll_state_synchronize_rcu(unsigned long oldstate)
 {
@@ -3851,7 +3862,7 @@ EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu);
 /**
  * cond_synchronize_rcu - Conditionally wait for an RCU grace period
  *
- * @oldstate: return value from earlier call to get_state_synchronize_rcu()
+ * @oldstate: value from get_state_synchronize_rcu() or start_poll_synchronize_rcu()
  *
  * If a full RCU grace period has elapsed since the earlier call to
  * get_state_synchronize_rcu() or start_poll_synchronize_rcu(), just return.
@@ -3861,6 +3872,11 @@ EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu);
  * counter wrap is harmless. If the counter wraps, we have waited for
  * more than 2 billion grace periods (and way more on a 64-bit system!),
  * so waiting for one additional grace period should be just fine.
+ *
+ * This function provides the same memory-ordering guarantees that
+ * would be provided by a synchronize_rcu() that was invoked at the call
+ * to the function that provided @oldstate, and that returned at the end
+ * of this function.
  */
 void cond_synchronize_rcu(unsigned long oldstate)
 {
...
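And a hypothetical use of cond_synchronize_rcu() (the retire_foo_lazily and do_unrelated_work names are invented): take the cookie early, do unrelated work that may span a grace period, and pay the cost of a full synchronize_rcu() only if one has not already elapsed.

#include <linux/rcupdate.h>
#include <linux/slab.h>

extern void do_unrelated_work(void);	/* hypothetical placeholder */

static void retire_foo_lazily(void *oldp)
{
	unsigned long cookie = get_state_synchronize_rcu();

	do_unrelated_work();	/* time during which a grace period may pass */

	cond_synchronize_rcu(cookie);	/* returns at once if one already did */
	kfree(oldp);
}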