Commit 3e7768b7 authored by Paul E. McKenney

doc: Update checklist.txt

This commit updates checklist.txt to reflect RCU additions and changes
over the past few years.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
parent ef2555cf
@@ -32,8 +32,8 @@ over a rather long period of time, but improvements are always welcome!
 for lockless updates.  This does result in the mildly
 counter-intuitive situation where rcu_read_lock() and
 rcu_read_unlock() are used to protect updates, however, this
-approach provides the same potential simplifications that garbage
-collectors do.
+approach can provide the same simplifications to certain types
+of lockless algorithms that garbage collectors do.

 1. Does the update code have proper mutual exclusion?
@@ -49,12 +49,12 @@ over a rather long period of time, but improvements are always welcome!
 them -- even x86 allows later loads to be reordered to precede
 earlier stores), and be prepared to explain why this added
 complexity is worthwhile.  If you choose #c, be prepared to
-explain how this single task does not become a major bottleneck on
-big multiprocessor machines (for example, if the task is updating
-information relating to itself that other tasks can read, there
-by definition can be no bottleneck).  Note that the definition
-of "large" has changed significantly: Eight CPUs was "large"
-in the year 2000, but a hundred CPUs was unremarkable in 2017.
+explain how this single task does not become a major bottleneck
+on large systems (for example, if the task is updating information
+relating to itself that other tasks can read, there by definition
+can be no bottleneck).  Note that the definition of "large" has
+changed significantly: Eight CPUs was "large" in the year 2000,
+but a hundred CPUs was unremarkable in 2017.

 2. Do the RCU read-side critical sections make proper use of
 rcu_read_lock() and friends?  These primitives are needed
@@ -97,33 +97,38 @@ over a rather long period of time, but improvements are always welcome!
 b. Proceed as in (a) above, but also maintain per-element
 locks (that are acquired by both readers and writers)
-that guard per-element state.  Of course, fields that
-the readers refrain from accessing can be guarded by
-some other lock acquired only by updaters, if desired.
+that guard per-element state.  Fields that the readers
+refrain from accessing can be guarded by some other lock
+acquired only by updaters, if desired.

-This works quite well, also.
+This also works quite well.

 c. Make updates appear atomic to readers.  For example,
 pointer updates to properly aligned fields will
 appear atomic, as will individual atomic primitives.
 Sequences of operations performed under a lock will *not*
 appear to be atomic to RCU readers, nor will sequences
-of multiple atomic primitives.
+of multiple atomic primitives.  One alternative is to
+move multiple individual fields to a separate structure,
+thus solving the multiple-field problem by imposing an
+additional level of indirection.

 This can work, but is starting to get a bit tricky.

-d. Carefully order the updates and the reads so that
-readers see valid data at all phases of the update.
-This is often more difficult than it sounds, especially
-given modern CPUs' tendency to reorder memory references.
-One must usually liberally sprinkle memory barriers
-(smp_wmb(), smp_rmb(), smp_mb()) through the code,
-making it difficult to understand and to test.
-
-It is usually better to group the changing data into
-a separate structure, so that the change may be made
-to appear atomic by updating a pointer to reference
-a new structure containing updated values.
+d. Carefully order the updates and the reads so that readers
+see valid data at all phases of the update.  This is often
+more difficult than it sounds, especially given modern
+CPUs' tendency to reorder memory references.  One must
+usually liberally sprinkle memory-ordering operations
+through the code, making it difficult to understand and
+to test.  Where it works, it is better to use things
+like smp_store_release() and smp_load_acquire(), but in
+some cases the smp_mb() full memory barrier is required.
+
+As noted earlier, it is usually better to group the
+changing data into a separate structure, so that the
+change may be made to appear atomic by updating a pointer
+to reference a new structure containing updated values.

 4. Weakly ordered CPUs pose special challenges.  Almost all CPUs
 are weakly ordered -- even x86 CPUs allow later loads to be
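As an aside (not part of the patch), the grouped-structure pattern that items 3(c) and 3(d) recommend might look like the following minimal sketch: the fields that must change together live in one structure, and the update is made to appear atomic by publishing a pointer to a new copy.  All names here (struct foo, foo_lock, foo_read_sum(), foo_update()) are hypothetical.

    #include <linux/errno.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct foo {
        int a;
        int b;
    };

    static struct foo __rcu *global_foo;
    static DEFINE_SPINLOCK(foo_lock);

    /* Reader: sees either the old or the new structure, never a mix. */
    static int foo_read_sum(void)
    {
        struct foo *p;
        int sum = 0;

        rcu_read_lock();
        p = rcu_dereference(global_foo);
        if (p)
            sum = p->a + p->b;
        rcu_read_unlock();
        return sum;
    }

    /* Updater: build a new structure, then publish it with one pointer store. */
    static int foo_update(int a, int b)
    {
        struct foo *newp, *oldp;

        newp = kmalloc(sizeof(*newp), GFP_KERNEL);
        if (!newp)
            return -ENOMEM;
        newp->a = a;
        newp->b = b;

        spin_lock(&foo_lock);
        oldp = rcu_dereference_protected(global_foo,
                                         lockdep_is_held(&foo_lock));
        rcu_assign_pointer(global_foo, newp);
        spin_unlock(&foo_lock);

        synchronize_rcu();      /* Wait for pre-existing readers to finish. */
        kfree(oldp);
        return 0;
    }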
@@ -188,26 +193,29 @@ over a rather long period of time, but improvements are always welcome!
 when publicizing a pointer to a structure that can
 be traversed by an RCU read-side critical section.

-5. If call_rcu() or call_srcu() is used, the callback function will
-be called from softirq context.  In particular, it cannot block.
-If you need the callback to block, run that code in a workqueue
-handler scheduled from the callback.  The queue_rcu_work()
-function does this for you in the case of call_rcu().
+5. If any of call_rcu(), call_srcu(), call_rcu_tasks(),
+call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
+the callback function may be invoked from softirq context,
+and in any case with bottom halves disabled.  In particular,
+this callback function cannot block.  If you need the callback
+to block, run that code in a workqueue handler scheduled from
+the callback.  The queue_rcu_work() function does this for you
+in the case of call_rcu().

 6. Since synchronize_rcu() can block, it cannot be called
 from any sort of irq context.  The same rule applies
-for synchronize_srcu(), synchronize_rcu_expedited(), and
-synchronize_srcu_expedited().
+for synchronize_srcu(), synchronize_rcu_expedited(),
+synchronize_srcu_expedited(), synchronize_rcu_tasks(),
+synchronize_rcu_tasks_rude(), and synchronize_rcu_tasks_trace().

 The expedited forms of these primitives have the same semantics
-as the non-expedited forms, but expediting is both expensive and
-(with the exception of synchronize_srcu_expedited()) unfriendly
-to real-time workloads.  Use of the expedited primitives should
-be restricted to rare configuration-change operations that would
-not normally be undertaken while a real-time workload is running.
-However, real-time workloads can use rcupdate.rcu_normal kernel
-boot parameter to completely disable expedited grace periods,
-though this might have performance implications.
+as the non-expedited forms, but expediting is more CPU intensive.
+Use of the expedited primitives should be restricted to rare
+configuration-change operations that would not normally be
+undertaken while a real-time workload is running.  Note that
+IPI-sensitive real-time workloads can use the rcupdate.rcu_normal
+kernel boot parameter to completely disable expedited grace
+periods, though this might have performance implications.

 In particular, if you find yourself invoking one of the expedited
 primitives repeatedly in a loop, please do everyone a favor:
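The queue_rcu_work() approach mentioned in item 5 could look roughly like the sketch below (not part of the patch): the blocking cleanup runs from a workqueue handler after a grace period has elapsed.  struct bar and the helper names are hypothetical.

    #include <linux/slab.h>
    #include <linux/workqueue.h>

    /* Hypothetical structure whose cleanup needs to block. */
    struct bar {
        struct rcu_work rwork;
        /* ... payload requiring blocking cleanup ... */
    };

    static void bar_reclaim_workfn(struct work_struct *work)
    {
        struct bar *p = container_of(to_rcu_work(work), struct bar, rwork);

        /* Runs from a workqueue handler, so blocking is permitted here. */
        kfree(p);
    }

    /* Queue p for cleanup after a normal RCU grace period. */
    static void bar_free_after_grace_period(struct bar *p)
    {
        INIT_RCU_WORK(&p->rwork, bar_reclaim_workfn);
        queue_rcu_work(system_wq, &p->rwork);
    }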
@@ -215,8 +223,9 @@ over a rather long period of time, but improvements are always welcome!
 a single non-expedited primitive to cover the entire batch.
 This will very likely be faster than the loop containing the
 expedited primitive, and will be much much easier on the rest
-of the system, especially to real-time workloads running on
-the rest of the system.
+of the system, especially to real-time workloads running on the
+rest of the system.  Alternatively, instead use asynchronous
+primitives such as call_rcu().

 7. As of v4.20, a given kernel implements only one RCU flavor, which
 is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
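As a sketch of the "use asynchronous primitives such as call_rcu()" alternative (not part of the patch), each removed element can carry its own rcu_head and be freed from a callback, so no loop iteration ever waits for a grace period.  struct elem and the helpers are hypothetical.

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    /* Hypothetical element of an RCU-protected structure. */
    struct elem {
        struct rcu_head rh;
        /* ... */
    };

    static void elem_free_rcu(struct rcu_head *rhp)
    {
        kfree(container_of(rhp, struct elem, rh));
    }

    /* Caller has already unlinked p from the RCU-protected structure. */
    static void elem_retire(struct elem *p)
    {
        call_rcu(&p->rh, elem_free_rcu);    /* No grace-period wait here. */
    }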
@@ -239,7 +248,8 @@ over a rather long period of time, but improvements are always welcome!
 the corresponding readers must use rcu_read_lock_trace() and
 rcu_read_unlock_trace().  If an updater uses call_rcu_tasks_rude()
 or synchronize_rcu_tasks_rude(), then the corresponding readers
-must use anything that disables interrupts.
+must use anything that disables preemption, for example,
+preempt_disable() and preempt_enable().

 Mixing things up will result in confusion and broken kernels, and
 has even resulted in an exploitable security issue.  Therefore,
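To illustrate the reader/updater matching that this item requires, here is a sketch (not part of the patch) using the RCU Tasks Trace flavor: readers use rcu_read_lock_trace()/rcu_read_unlock_trace(), and the updater uses the matching call_rcu_tasks_trace().  Keep in mind the later caution that this flavor is specialized.  struct baz and the helpers are hypothetical.

    #include <linux/rcupdate_trace.h>
    #include <linux/slab.h>

    /* Hypothetical structure protected by RCU Tasks Trace. */
    struct baz {
        struct rcu_head rh;
        int value;
    };

    static struct baz __rcu *global_baz;

    /* Reader side: must use the Tasks Trace markers. */
    static int baz_read(void)
    {
        struct baz *p;
        int v = -1;

        rcu_read_lock_trace();
        p = rcu_dereference_check(global_baz, rcu_read_lock_trace_held());
        if (p)
            v = p->value;
        rcu_read_unlock_trace();
        return v;
    }

    static void baz_free_cb(struct rcu_head *rhp)
    {
        kfree(container_of(rhp, struct baz, rh));
    }

    /* Update side: must use the matching Tasks Trace grace period. */
    static void baz_retire(struct baz *old)
    {
        call_rcu_tasks_trace(&old->rh, baz_free_cb);
    }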
@@ -253,15 +263,16 @@ over a rather long period of time, but improvements are always welcome!
 that this usage is safe is that readers can use anything that
 disables BH when updaters use call_rcu() or synchronize_rcu().

-8. Although synchronize_rcu() is slower than is call_rcu(), it
-usually results in simpler code.  So, unless update performance is
-critically important, the updaters cannot block, or the latency of
-synchronize_rcu() is visible from userspace, synchronize_rcu()
-should be used in preference to call_rcu().  Furthermore,
-kfree_rcu() usually results in even simpler code than does
-synchronize_rcu() without synchronize_rcu()'s multi-millisecond
-latency.  So please take advantage of kfree_rcu()'s "fire and
-forget" memory-freeing capabilities where it applies.
+8. Although synchronize_rcu() is slower than is call_rcu(),
+it usually results in simpler code.  So, unless update
+performance is critically important, the updaters cannot block,
+or the latency of synchronize_rcu() is visible from userspace,
+synchronize_rcu() should be used in preference to call_rcu().
+Furthermore, kfree_rcu() and kvfree_rcu() usually result
+in even simpler code than does synchronize_rcu() without
+synchronize_rcu()'s multi-millisecond latency.  So please take
+advantage of kfree_rcu()'s and kvfree_rcu()'s "fire and forget"
+memory-freeing capabilities where it applies.

 An especially important property of the synchronize_rcu()
 primitive is that it automatically self-limits: if grace periods
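The kfree_rcu() "fire and forget" idiom from item 8 can be as simple as the following sketch (not part of the patch); the second argument names the rcu_head field embedded in the hypothetical structure.

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct qux {
        struct rcu_head rcu;    /* Named by the second kfree_rcu() argument. */
        long data;
    };

    /* Caller has already unlinked old from the RCU-protected structure. */
    static void qux_retire(struct qux *old)
    {
        kfree_rcu(old, rcu);    /* No callback to write, nothing to wait for. */
    }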
@@ -271,8 +282,8 @@ over a rather long period of time, but improvements are always welcome!
 cases where grace periods are delayed, as failing to do so can
 result in excessive realtime latencies or even OOM conditions.

-Ways of gaining this self-limiting property when using call_rcu()
-include:
+Ways of gaining this self-limiting property when using call_rcu(),
+kfree_rcu(), or kvfree_rcu() include:

 a. Keeping a count of the number of data-structure elements
 used by the RCU-protected data structure, including
@@ -304,18 +315,21 @@ over a rather long period of time, but improvements are always welcome!
 here is that superuser already has lots of ways to crash
 the machine.

-d. Periodically invoke synchronize_rcu(), permitting a limited
-number of updates per grace period.  Better yet, periodically
-invoke rcu_barrier() to wait for all outstanding callbacks.
+d. Periodically invoke rcu_barrier(), permitting a limited
+number of updates per grace period.

-The same cautions apply to call_srcu() and kfree_rcu().
+The same cautions apply to call_srcu(), call_rcu_tasks(),
+call_rcu_tasks_rude(), and call_rcu_tasks_trace().  This is
+why there are srcu_barrier(), rcu_barrier_tasks(),
+rcu_barrier_tasks_rude(), and rcu_barrier_tasks_trace(),
+respectively.

-Note that although these primitives do take action to avoid memory
-exhaustion when any given CPU has too many callbacks, a determined
-user could still exhaust memory.  This is especially the case
-if a system with a large number of CPUs has been configured to
-offload all of its RCU callbacks onto a single CPU, or if the
-system has relatively little free memory.
+Note that although these primitives do take action to avoid
+memory exhaustion when any given CPU has too many callbacks,
+a determined user or administrator can still exhaust memory.
+This is especially the case if a system with a large number of
+CPUs has been configured to offload all of its RCU callbacks onto
+a single CPU, or if the system has relatively little free memory.

 9. All RCU list-traversal primitives, which include
 rcu_dereference(), list_for_each_entry_rcu(), and
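One possible shape for item 8(d)'s rcu_barrier()-based throttling is sketched below (not part of the patch): the caller counts outstanding callbacks and drains them once a limit is exceeded.  The counter, the limit, and all names are hypothetical, and blob_retire() must run in a context that is permitted to block.

    #include <linux/atomic.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct blob {
        struct rcu_head rh;
        /* ... */
    };

    #define BLOB_CB_LIMIT 1000  /* Hypothetical bound on outstanding callbacks. */

    static atomic_t blob_cbs_outstanding = ATOMIC_INIT(0);

    static void blob_free_cb(struct rcu_head *rhp)
    {
        atomic_dec(&blob_cbs_outstanding);
        kfree(container_of(rhp, struct blob, rh));
    }

    /* Must be called from a context that is permitted to block. */
    static void blob_retire(struct blob *p)
    {
        if (atomic_inc_return(&blob_cbs_outstanding) > BLOB_CB_LIMIT)
            rcu_barrier();      /* Drain all previously queued callbacks. */
        call_rcu(&p->rh, blob_free_cb);
    }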
@@ -344,14 +358,14 @@ over a rather long period of time, but improvements are always welcome!
 and you don't hold the appropriate update-side lock, you *must*
 use the "_rcu()" variants of the list macros.  Failing to do so
 will break Alpha, cause aggressive compilers to generate bad code,
-and confuse people trying to read your code.
+and confuse people trying to understand your code.

 11. Any lock acquired by an RCU callback must be acquired elsewhere
-with softirq disabled, e.g., via spin_lock_irqsave(),
-spin_lock_bh(), etc.  Failing to disable softirq on a given
-acquisition of that lock will result in deadlock as soon as
-the RCU softirq handler happens to run your RCU callback while
-interrupting that acquisition's critical section.
+with softirq disabled, e.g., via spin_lock_bh().  Failing to
+disable softirq on a given acquisition of that lock will result
+in deadlock as soon as the RCU softirq handler happens to run
+your RCU callback while interrupting that acquisition's critical
+section.

 12. RCU callbacks can be and are executed in parallel.  In many cases,
 the callback code simply wrappers around kfree(), so that this
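Item 11's rule can be illustrated with the following sketch (not part of the patch): the RCU callback takes the lock with plain spin_lock() because it already runs in softirq context, so every other acquisition of that lock must disable bottom halves via spin_lock_bh().  All names are hypothetical.

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct tally {
        struct rcu_head rh;
    };

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long stats_freed;

    /* Invoked from softirq context, so a plain spin_lock() suffices here. */
    static void tally_free_cb(struct rcu_head *rhp)
    {
        spin_lock(&stats_lock);
        stats_freed++;
        spin_unlock(&stats_lock);
        kfree(container_of(rhp, struct tally, rh));
    }

    /* Every process-context acquisition of stats_lock must disable softirq. */
    static unsigned long stats_read(void)
    {
        unsigned long v;

        spin_lock_bh(&stats_lock);
        v = stats_freed;
        spin_unlock_bh(&stats_lock);
        return v;
    }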
@@ -372,7 +386,17 @@ over a rather long period of time, but improvements are always welcome!
 for some real-time workloads, this is the whole point of using
 the rcu_nocbs= kernel boot parameter.

-13. Unlike other forms of RCU, it *is* permissible to block in an
+In addition, do not assume that callbacks queued in a given order
+will be invoked in that order, even if they all are queued on the
+same CPU.  Furthermore, do not assume that same-CPU callbacks will
+be invoked serially.  For example, in recent kernels, CPUs can be
+switched between offloaded and de-offloaded callback invocation,
+and while a given CPU is undergoing such a switch, its callbacks
+might be concurrently invoked by that CPU's softirq handler and
+that CPU's rcuo kthread.  At such times, that CPU's callbacks
+might be executed both concurrently and out of order.
+
+13. Unlike most flavors of RCU, it *is* permissible to block in an
 SRCU read-side critical section (demarked by srcu_read_lock()
 and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
 Please note that if you don't need to sleep in read-side critical
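As a sketch of item 13 (not part of the patch), an SRCU read-side critical section may sleep, and synchronize_srcu() waits only for readers of that particular srcu_struct.  The srcu_struct name and the helpers are hypothetical.

    #include <linux/mutex.h>
    #include <linux/srcu.h>

    DEFINE_SRCU(my_srcu);

    /* An SRCU reader is permitted to block, here on a mutex. */
    static void my_srcu_reader(struct mutex *m)
    {
        int idx;

        idx = srcu_read_lock(&my_srcu);
        mutex_lock(m);
        /* ... access my_srcu-protected data ... */
        mutex_unlock(m);
        srcu_read_unlock(&my_srcu, idx);
    }

    /* Waits only for readers of my_srcu, not for RCU readers in general. */
    static void my_srcu_updater_wait(void)
    {
        synchronize_srcu(&my_srcu);
    }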
@@ -412,6 +436,12 @@ over a rather long period of time, but improvements are always welcome!
 never sends IPIs to other CPUs, so it is easier on
 real-time workloads than is synchronize_rcu_expedited().

+It is also permissible to sleep in RCU Tasks Trace read-side
+critical sections, which are delimited by rcu_read_lock_trace() and
+rcu_read_unlock_trace().  However, this is a specialized flavor
+of RCU, and you should not use it without first checking with
+its current users.  In most cases, you should instead use SRCU.
+
 Note that rcu_assign_pointer() relates to SRCU just as it does to
 other forms of RCU, but instead of rcu_dereference() you should
 use srcu_dereference() in order to avoid lockdep splats.
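The rcu_assign_pointer()/srcu_dereference() pairing mentioned above might look like this sketch (not part of the patch); struct cfg, cfg_srcu, and cfg_lock are hypothetical.

    #include <linux/mutex.h>
    #include <linux/slab.h>
    #include <linux/srcu.h>

    struct cfg {
        int setting;
    };

    static struct cfg __rcu *cur_cfg;
    static DEFINE_MUTEX(cfg_lock);
    DEFINE_SRCU(cfg_srcu);

    /* Reader: srcu_dereference() rather than rcu_dereference(). */
    static int cfg_get_setting(void)
    {
        struct cfg *p;
        int idx, ret = 0;

        idx = srcu_read_lock(&cfg_srcu);
        p = srcu_dereference(cur_cfg, &cfg_srcu);
        if (p)
            ret = p->setting;
        srcu_read_unlock(&cfg_srcu, idx);
        return ret;
    }

    /* Updater: rcu_assign_pointer() works for SRCU just as for RCU. */
    static void cfg_replace(struct cfg *newcfg)
    {
        struct cfg *old;

        mutex_lock(&cfg_lock);
        old = rcu_dereference_protected(cur_cfg, lockdep_is_held(&cfg_lock));
        rcu_assign_pointer(cur_cfg, newcfg);
        mutex_unlock(&cfg_lock);

        synchronize_srcu(&cfg_srcu);
        kfree(old);
    }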
@@ -442,50 +472,62 @@ over a rather long period of time, but improvements are always welcome!
 find problems as follows:

 CONFIG_PROVE_LOCKING:
-check that accesses to RCU-protected data
-structures are carried out under the proper RCU
-read-side critical section, while holding the right
-combination of locks, or whatever other conditions
-are appropriate.
+check that accesses to RCU-protected data structures
+are carried out under the proper RCU read-side critical
+section, while holding the right combination of locks,
+or whatever other conditions are appropriate.

 CONFIG_DEBUG_OBJECTS_RCU_HEAD:
-check that you don't pass the
-same object to call_rcu() (or friends) before an RCU
-grace period has elapsed since the last time that you
-passed that same object to call_rcu() (or friends).
+check that you don't pass the same object to call_rcu()
+(or friends) before an RCU grace period has elapsed
+since the last time that you passed that same object to
+call_rcu() (or friends).

 __rcu sparse checks:
-tag the pointer to the RCU-protected data
-structure with __rcu, and sparse will warn you if you
-access that pointer without the services of one of the
-variants of rcu_dereference().
+tag the pointer to the RCU-protected data structure
+with __rcu, and sparse will warn you if you access that
+pointer without the services of one of the variants
+of rcu_dereference().

 These debugging aids can help you find problems that are
 otherwise extremely difficult to spot.

-17. If you register a callback using call_rcu() or call_srcu(), and
-pass in a function defined within a loadable module, then it in
-necessary to wait for all pending callbacks to be invoked after
-the last invocation and before unloading that module.  Note that
-it is absolutely *not* sufficient to wait for a grace period!
-The current (say) synchronize_rcu() implementation is *not*
-guaranteed to wait for callbacks registered on other CPUs.
-Or even on the current CPU if that CPU recently went offline
-and came back online.
+17. If you pass a callback function defined within a module to one of
+call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
+or call_rcu_tasks_trace(), then it is necessary to wait for all
+pending callbacks to be invoked before unloading that module.
+Note that it is absolutely *not* sufficient to wait for a grace
+period!  For example, the synchronize_rcu() implementation is *not*
+guaranteed to wait for callbacks registered on other CPUs via
+call_rcu().  Or even on the current CPU if that CPU recently
+went offline and came back online.

 You instead need to use one of the barrier functions:

 - call_rcu() -> rcu_barrier()
 - call_srcu() -> srcu_barrier()
+- call_rcu_tasks() -> rcu_barrier_tasks()
+- call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
+- call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()

 However, these barrier functions are absolutely *not* guaranteed
-to wait for a grace period.  In fact, if there are no call_rcu()
-callbacks waiting anywhere in the system, rcu_barrier() is within
-its rights to return immediately.
+to wait for a grace period.  For example, if there are no
+call_rcu() callbacks queued anywhere in the system, rcu_barrier()
+can and will return immediately.

-So if you need to wait for both an RCU grace period and for
-all pre-existing call_rcu() callbacks, you will need to execute
-both rcu_barrier() and synchronize_rcu(), if necessary, using
-something like workqueues to execute them concurrently.
+So if you need to wait for both a grace period and for all
+pre-existing callbacks, you will need to invoke both functions,
+with the pair depending on the flavor of RCU:
+
+- Either synchronize_rcu() or synchronize_rcu_expedited(),
+  together with rcu_barrier()
+- Either synchronize_srcu() or synchronize_srcu_expedited(),
+  together with srcu_barrier()
+- synchronize_rcu_tasks() and rcu_barrier_tasks()
+- synchronize_rcu_tasks_rude() and rcu_barrier_tasks_rude()
+- synchronize_rcu_tasks_trace() and rcu_barrier_tasks_trace()
+
+If necessary, you can use something like workqueues to execute
+the requisite pair of functions concurrently.

 See rcubarrier.rst for more information.
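Finally, a sketch of item 17's module-unload requirement (not part of the patch): because the callback function lives in module text, the exit handler must invoke rcu_barrier() (not merely synchronize_rcu()) after ensuring that no further callbacks can be queued.  The module and structure names are hypothetical.

    #include <linux/module.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct mymod_node {
        struct rcu_head rh;
        /* ... */
    };

    /* This callback lives in module text, so it must not outlive the module. */
    static void mymod_node_free_cb(struct rcu_head *rhp)
    {
        kfree(container_of(rhp, struct mymod_node, rh));
    }

    static int __init mymod_init(void)
    {
        return 0;
    }
    module_init(mymod_init);

    static void __exit mymod_exit(void)
    {
        /* First ensure no further call_rcu(..., mymod_node_free_cb) can occur. */
        /* ... tear down the data structure ... */

        /*
         * Then wait for the already-queued callbacks.  A grace-period
         * wait such as synchronize_rcu() is *not* sufficient here.
         */
        rcu_barrier();
    }
    module_exit(mymod_exit);

    MODULE_LICENSE("GPL");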