Commit bdd4431c authored by Ingo Molnar


Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

The major features of this series are:

 - making RCU more aggressive about entering dyntick-idle mode in order to
   improve energy efficiency

 - converting a few more call_rcu()s to kfree_rcu()s

 - applying a number of rcutree fixes and cleanups to rcutiny

 - removing CONFIG_SMP #ifdefs from treercu

 - allowing RCU CPU stall times to be set via sysfs

 - adding CPU-stall capability to rcutorture

 - adding more RCU-abuse diagnostics

 - updating documentation

 - fixing yet more issues located by the still-ongoing top-to-bottom
   inspection of RCU, this time with a special focus on the
   CPU-hotplug code path.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parents 586c6e70 1cc85961
@@ -180,6 +180,20 @@ over a rather long period of time, but improvements are always welcome!
	operations that would not normally be undertaken while a real-time
	workload is running.
In particular, if you find yourself invoking one of the expedited
primitives repeatedly in a loop, please do everyone a favor:
Restructure your code so that it batches the updates, allowing
a single non-expedited primitive to cover the entire batch.
This will very likely be faster than the loop containing the
expedited primitive, and will be much easier on the rest of
the system, especially on any real-time workloads running on
the rest of the system.
In addition, it is illegal to call the expedited forms from
a CPU-hotplug notifier, or while holding a lock that is acquired
by a CPU-hotplug notifier. Failing to observe this restriction
will result in deadlock.
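As a concrete illustration of the batching advice above, here is a minimal sketch (not from this patch; struct foo, its list, and the update-side locking are hypothetical) of turning a per-element expedited wait into a single batched grace period:

	#include <linux/list.h>
	#include <linux/rcupdate.h>
	#include <linux/rculist.h>
	#include <linux/slab.h>

	struct foo {
		struct list_head node;	/* linked into an RCU-protected list */
		struct list_head gc;	/* private link used only for batched freeing */
	};

	/* Anti-pattern: one expedited grace period per removed element. */
	static void remove_all_expedited(struct list_head *head)
	{
		struct foo *p, *n;

		list_for_each_entry_safe(p, n, head, node) {
			list_del_rcu(&p->node);
			synchronize_rcu_expedited();	/* big hammer, once per element */
			kfree(p);
		}
	}

	/* Preferred: batch the removals, then wait for a single grace period. */
	static void remove_all_batched(struct list_head *head)
	{
		struct foo *p, *n;
		LIST_HEAD(tofree);

		/* Caller holds the update-side lock protecting "head". */
		list_for_each_entry_safe(p, n, head, node) {
			list_del_rcu(&p->node);		/* readers may still reference p */
			list_add(&p->gc, &tofree);	/* ->gc is never touched by readers */
		}
		synchronize_rcu();			/* one non-expedited wait covers the batch */
		list_for_each_entry_safe(p, n, &tofree, gc)
			kfree(p);
	}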
7. If the updater uses call_rcu() or synchronize_rcu(), then the
	corresponding readers must use rcu_read_lock() and
	rcu_read_unlock(). If the updater uses call_rcu_bh() or
...
@@ -12,14 +12,38 @@ CONFIG_RCU_CPU_STALL_TIMEOUT
	This kernel configuration parameter defines the period of time
	that RCU will wait from the beginning of a grace period until it
	issues an RCU CPU stall warning. This time period is normally
	sixty seconds.

	This configuration parameter may be changed at runtime via the
/sys/module/rcutree/parameters/rcu_cpu_stall_timeout, however
this parameter is checked only at the beginning of a cycle.
So if you are 30 seconds into a 70-second stall, setting this
sysfs parameter to (say) five will shorten the timeout for the
-next- stall, or the following warning for the current stall
(assuming the stall lasts long enough). It will not affect the
timing of the next warning for the current stall.
Stall-warning messages may be enabled and disabled completely via
/sys/module/rcutree/parameters/rcu_cpu_stall_suppress.

RCU_SECONDS_TILL_STALL_RECHECK

	This macro defines the period of time that RCU will wait after
	issuing a stall warning until it issues another stall warning
	for the same stall. This time period is normally set to three
	times the check interval plus thirty seconds.

CONFIG_RCU_CPU_STALL_VERBOSE
This kernel configuration parameter causes the stall warning to
also dump the stacks of any tasks that are blocking the current
RCU-preempt grace period.
RCU_CPU_STALL_INFO
This kernel configuration parameter causes the stall warning to
print out additional per-CPU diagnostic information, including
information on scheduling-clock ticks and RCU's idle-CPU tracking.
RCU_STALL_DELAY_DELTA
Although the lockdep facility is extremely useful, it does add
some overhead. Therefore, under CONFIG_PROVE_RCU, the
RCU_STALL_DELAY_DELTA macro allows five extra seconds before
giving an RCU CPU stall warning message.
RCU_STALL_RAT_DELAY
@@ -64,6 +88,54 @@ INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffi
This is rare, but does happen from time to time in real life.
If the CONFIG_RCU_CPU_STALL_INFO kernel configuration parameter is set,
more information is printed with the stall-warning message, for example:
INFO: rcu_preempt detected stall on CPU
0: (63959 ticks this GP) idle=241/3fffffffffffffff/0
(t=65000 jiffies)
In kernels with CONFIG_RCU_FAST_NO_HZ, even more information is
printed:
INFO: rcu_preempt detected stall on CPU
0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 drain=0 . timer=-1
(t=65000 jiffies)
The "(64628 ticks this GP)" indicates that this CPU has taken more
than 64,000 scheduling-clock interrupts during the current stalled
grace period. If the CPU was not yet aware of the current grace
period (for example, if it was offline), then this part of the message
indicates how many grace periods behind the CPU is.
The "idle=" portion of the message prints the dyntick-idle state.
The hex number before the first "/" is the low-order 12 bits of the
dynticks counter, which will have an even-numbered value if the CPU is
in dyntick-idle mode and an odd-numbered value otherwise. The hex
number between the two "/"s is the value of the nesting, which will
be a small positive number if in the idle loop and a very large positive
number (as shown above) otherwise.
For CONFIG_RCU_FAST_NO_HZ kernels, the "drain=0" indicates that the
CPU is not in the process of trying to force itself into dyntick-idle
state, the "." indicates that the CPU has not given up forcing RCU
into dyntick-idle mode (it would be "H" otherwise), and the "timer=-1"
indicates that the CPU has not recently forced RCU into dyntick-idle
mode (it would otherwise indicate the number of microseconds remaining
in this forced state).
Multiple Warnings From One Stall
If a stall lasts long enough, multiple stall-warning messages will be
printed for it. The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message.
What Causes RCU CPU Stall Warnings?
So your kernel printed an RCU CPU stall warning. The next question is
"What caused it?" The following problems can result in RCU CPU stall
warnings:
@@ -128,4 +200,5 @@ is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.
RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
and with RCU's event tracing.
@@ -69,6 +69,13 @@ onoff_interval
		CPU-hotplug operations regardless of what value is
		specified for onoff_interval.
onoff_holdoff The number of seconds to wait until starting CPU-hotplug
operations. This would normally only be used when
rcutorture was built into the kernel and started
automatically at boot time, in which case it is useful
in order to avoid confusing boot-time code with CPUs
coming and going.
shuffle_interval
		The number of seconds to keep the test threads affinitied
		to a particular subset of the CPUs, defaults to 3 seconds.
@@ -79,6 +86,24 @@ shutdown_secs The number of seconds to run the test before terminating
		zero, which disables test termination and system shutdown.
		This capability is useful for automated testing.
stall_cpu The number of seconds that a CPU should be stalled while
within both an rcu_read_lock() and a preempt_disable().
This stall happens only once per rcutorture run.
If you need multiple stalls, use modprobe and rmmod to
repeatedly run rcutorture. The default for stall_cpu
is zero, which prevents rcutorture from stalling a CPU.
Note that attempts to rmmod rcutorture while the stall
is ongoing will hang, so be careful what value you
choose for this module parameter! In addition, too-large
values for stall_cpu might well induce failures and
warnings in other parts of the kernel. You have been
warned!
stall_cpu_holdoff
The number of seconds to wait after rcutorture starts
before stalling a CPU. Defaults to 10 seconds.
stat_interval	The number of seconds between output of torture
		statistics (via printk()). Regardless of the interval,
		statistics are printed when the module is unloaded.
@@ -271,11 +296,13 @@ The following script may be used to torture RCU:
#!/bin/sh
modprobe rcutorture
sleep 3600
rmmod rcutorture
dmesg | grep torture:
The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors. The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.
@@ -33,23 +33,23 @@ rcu/rcuboost:
The output of "cat rcu/rcudata" looks as follows:
rcu_sched:
  0 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=545/1/0 df=50 of=0 ql=163 qs=NRW. kt=0/W/0 ktl=ebc3 b=10 ci=153737 co=0 ca=0
  1 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=967/1/0 df=58 of=0 ql=634 qs=NRW. kt=0/W/1 ktl=58c b=10 ci=191037 co=0 ca=0
  2 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=1081/1/0 df=175 of=0 ql=74 qs=N.W. kt=0/W/2 ktl=da94 b=10 ci=75991 co=0 ca=0
  3 c=20942 g=20943 pq=1 pgp=20942 qp=1 dt=1846/0/0 df=404 of=0 ql=0 qs=.... kt=0/W/3 ktl=d1cd b=10 ci=72261 co=0 ca=0
  4 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=369/1/0 df=83 of=0 ql=48 qs=N.W. kt=0/W/4 ktl=e0e7 b=10 ci=128365 co=0 ca=0
  5 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=381/1/0 df=64 of=0 ql=169 qs=NRW. kt=0/W/5 ktl=fb2f b=10 ci=164360 co=0 ca=0
  6 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=1037/1/0 df=183 of=0 ql=62 qs=N.W. kt=0/W/6 ktl=d2ad b=10 ci=65663 co=0 ca=0
  7 c=20897 g=20897 pq=1 pgp=20896 qp=0 dt=1572/0/0 df=382 of=0 ql=0 qs=.... kt=0/W/7 ktl=cf15 b=10 ci=75006 co=0 ca=0
rcu_bh:
  0 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=545/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/0 ktl=ebc3 b=10 ci=0 co=0 ca=0
  1 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=967/1/0 df=3 of=0 ql=0 qs=.... kt=0/W/1 ktl=58c b=10 ci=151 co=0 ca=0
  2 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1081/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/2 ktl=da94 b=10 ci=0 co=0 ca=0
  3 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1846/0/0 df=8 of=0 ql=0 qs=.... kt=0/W/3 ktl=d1cd b=10 ci=0 co=0 ca=0
  4 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=369/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/4 ktl=e0e7 b=10 ci=0 co=0 ca=0
  5 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=381/1/0 df=4 of=0 ql=0 qs=.... kt=0/W/5 ktl=fb2f b=10 ci=0 co=0 ca=0
  6 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1037/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/6 ktl=d2ad b=10 ci=0 co=0 ca=0
  7 c=1474 g=1474 pq=1 pgp=1473 qp=0 dt=1572/0/0 df=8 of=0 ql=0 qs=.... kt=0/W/7 ktl=cf15 b=10 ci=0 co=0 ca=0
The first section lists the rcu_data structures for rcu_sched, the second
for rcu_bh. Note that CONFIG_TREE_PREEMPT_RCU kernels will have an
@@ -119,10 +119,6 @@ o "of" is the number of times that some other CPU has forced a
	CPU is offline when it is really alive and kicking) is a fatal
	error, so it makes sense to err conservatively.
o "ri" is the number of times that RCU has seen fit to send a
reschedule IPI to this CPU in order to get it to report a
quiescent state.
o "ql" is the number of RCU callbacks currently residing on o "ql" is the number of RCU callbacks currently residing on
this CPU. This is the total number of callbacks, regardless this CPU. This is the total number of callbacks, regardless
of what state they are in (new, waiting for grace period to of what state they are in (new, waiting for grace period to
...
@@ -165,13 +165,6 @@ static inline int ext_hash(u16 code)
	return (code + (code >> 9)) & 0xff;
}
static void ext_int_hash_update(struct rcu_head *head)
{
struct ext_int_info *p = container_of(head, struct ext_int_info, rcu);
kfree(p);
}
int register_external_interrupt(u16 code, ext_int_handler_t handler)
{
	struct ext_int_info *p;
@@ -202,7 +195,7 @@ int unregister_external_interrupt(u16 code, ext_int_handler_t handler)
	list_for_each_entry_rcu(p, &ext_int_hash[index], entry)
		if (p->code == code && p->handler == handler) {
			list_del_rcu(&p->entry);
			kfree_rcu(p, rcu);
		}
	spin_unlock_irqrestore(&ext_int_hash_lock, flags);
	return 0;
...
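Both this conversion and the tcm_fc one that follows apply the same transformation: when an RCU callback does nothing but kfree() its enclosing structure, the callback function can be dropped and kfree_rcu() used directly. A minimal before/after sketch, with a hypothetical struct foo standing in for the real structures:

	#include <linux/rcupdate.h>
	#include <linux/rculist.h>
	#include <linux/slab.h>

	struct foo {
		struct list_head entry;
		struct rcu_head rcu;
	};

	/* Before: an explicit RCU callback whose only job is to free the object. */
	static void foo_rcu_free(struct rcu_head *head)
	{
		struct foo *p = container_of(head, struct foo, rcu);

		kfree(p);
	}

	static void foo_del_before(struct foo *p)
	{
		list_del_rcu(&p->entry);
		call_rcu(&p->rcu, foo_rcu_free);
	}

	/* After: kfree_rcu() takes the pointer and the name of the rcu_head field. */
	static void foo_del_after(struct foo *p)
	{
		list_del_rcu(&p->entry);
		kfree_rcu(p, rcu);
	}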
@@ -85,16 +85,6 @@ static struct ft_tport *ft_tport_create(struct fc_lport *lport)
	return tport;
}
/*
* Free tport via RCU.
*/
static void ft_tport_rcu_free(struct rcu_head *rcu)
{
struct ft_tport *tport = container_of(rcu, struct ft_tport, rcu);
kfree(tport);
}
/*
 * Delete a target local port.
 * Caller holds ft_lport_lock.
@@ -114,7 +104,7 @@ static void ft_tport_delete(struct ft_tport *tport)
		tpg->tport = NULL;
		tport->tpg = NULL;
	}
	kfree_rcu(tport, rcu);
}
/*
...
@@ -190,6 +190,33 @@ extern void rcu_idle_exit(void);
extern void rcu_irq_enter(void);
extern void rcu_irq_exit(void);
/**
* RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
* @a: Code that RCU needs to pay attention to.
*
* RCU, RCU-bh, and RCU-sched read-side critical sections are forbidden
* in the inner idle loop, that is, between the rcu_idle_enter() and
* the rcu_idle_exit() -- RCU will happily ignore any such read-side
* critical sections. However, things like powertop need tracepoints
* in the inner idle loop.
*
* This macro provides the way out: RCU_NONIDLE(do_something_with_RCU())
* will tell RCU that it needs to pay attention, invoke its argument
* (in this example, a call to the do_something_with_RCU() function),
* and then tell RCU to go back to ignoring this CPU. It is permissible
* to nest RCU_NONIDLE() wrappers, but the nesting level is currently
* quite limited. If deeper nesting is required, it will be necessary
* to adjust DYNTICK_TASK_NESTING_VALUE accordingly.
*
* This macro may be used from process-level code only.
*/
#define RCU_NONIDLE(a) \
do { \
rcu_idle_exit(); \
do { a; } while (0); \
rcu_idle_enter(); \
} while (0)
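For illustration, a hedged usage sketch (the tracepoint-like hook is hypothetical, not part of this patch): code deep in the idle loop can wrap its lone RCU user in RCU_NONIDLE() instead of exiting idle around the whole region:

	/* Hypothetical hook; stands in for any tracepoint or other RCU user. */
	extern void trace_my_idle_event(int state);

	/* Called from deep within the idle loop, after rcu_idle_enter(). */
	static void example_idle_trace(int state)
	{
		/* Momentarily tell RCU to pay attention, then return to idle. */
		RCU_NONIDLE(trace_my_idle_event(state));
	}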
/*
 * Infrastructure to implement the synchronize_() primitives in
 * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
@@ -226,6 +253,15 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head)
}
#endif /* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU)
bool rcu_lockdep_current_cpu_online(void);
#else /* #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */
static inline bool rcu_lockdep_current_cpu_online(void)
{
return 1;
}
#endif /* #else #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
#ifdef CONFIG_PROVE_RCU
@@ -239,13 +275,11 @@ static inline int rcu_is_cpu_idle(void)
static inline void rcu_lock_acquire(struct lockdep_map *map)
{
WARN_ON_ONCE(rcu_is_cpu_idle());
	lock_acquire(map, 0, 0, 2, 1, NULL, _THIS_IP_);
}
static inline void rcu_lock_release(struct lockdep_map *map)
{
WARN_ON_ONCE(rcu_is_cpu_idle());
	lock_release(map, 1, _THIS_IP_);
}
@@ -270,6 +304,9 @@ extern int debug_lockdep_rcu_enabled(void);
 * occur in the same context, for example, it is illegal to invoke
 * rcu_read_unlock() in process context if the matching rcu_read_lock()
 * was invoked from within an irq handler.
*
* Note that rcu_read_lock() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
 */
static inline int rcu_read_lock_held(void)
{
@@ -277,6 +314,8 @@ static inline int rcu_read_lock_held(void)
		return 1;
	if (rcu_is_cpu_idle())
		return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
	return lock_is_held(&rcu_lock_map);
}
@@ -313,6 +352,9 @@ extern int rcu_read_lock_bh_held(void);
 * notice an extended quiescent state to other CPUs that started a grace
 * period. Otherwise we would delay any grace period as long as we run in
 * the idle task.
*
* Similarly, we avoid claiming an SRCU read lock held if the current
* CPU is offline.
 */
#ifdef CONFIG_PREEMPT_COUNT
static inline int rcu_read_lock_sched_held(void)
@@ -323,6 +365,8 @@ static inline int rcu_read_lock_sched_held(void)
		return 1;
	if (rcu_is_cpu_idle())
		return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
	if (debug_locks)
		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
	return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
@@ -381,8 +425,22 @@ extern int rcu_my_thread_group_empty(void);
	} \
} while (0)
#if defined(CONFIG_PROVE_RCU) && !defined(CONFIG_PREEMPT_RCU)
static inline void rcu_preempt_sleep_check(void)
{
rcu_lockdep_assert(!lock_is_held(&rcu_lock_map),
"Illegal context switch in RCU read-side "
"critical section");
}
#else /* #ifdef CONFIG_PROVE_RCU */
static inline void rcu_preempt_sleep_check(void)
{
}
#endif /* #else #ifdef CONFIG_PROVE_RCU */
#define rcu_sleep_check() \
	do { \
rcu_preempt_sleep_check(); \
		rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map), \
				   "Illegal context switch in RCU-bh" \
				   " read-side critical section"); \
@@ -470,6 +528,13 @@ extern int rcu_my_thread_group_empty(void);
 * NULL. Although rcu_access_pointer() may also be used in cases where
 * update-side locks prevent the value of the pointer from changing, you
 * should instead use rcu_dereference_protected() for this use case.
*
* It is also permissible to use rcu_access_pointer() when read-side
* access to the pointer was removed at least one grace period ago, as
* is the case in the context of the RCU callback that is freeing up
* the data, or after a synchronize_rcu() returns. This can be useful
* when tearing down multi-linked structures after a grace period
* has elapsed.
 */
#define rcu_access_pointer(p) __rcu_access_pointer((p), __rcu)
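A hedged sketch of the teardown case described above (struct node and free_chain_rcu() are hypothetical): once a grace period has elapsed, the RCU callback may walk the now-unreachable structure with rcu_access_pointer() even though no locks are held:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct node {
		struct node __rcu *next;
		struct rcu_head rcu;
	};

	/* Invoked one grace period after the chain head was unlinked. */
	static void free_chain_rcu(struct rcu_head *head)
	{
		struct node *p = container_of(head, struct node, rcu);

		while (p) {
			/* Readers can no longer reach p, so no rcu_dereference() needed. */
			struct node *next = rcu_access_pointer(p->next);

			kfree(p);
			p = next;
		}
	}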
@@ -659,6 +724,8 @@ static inline void rcu_read_lock(void)
	__rcu_read_lock();
	__acquire(RCU);
	rcu_lock_acquire(&rcu_lock_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_lock() used illegally while idle");
}
/*
@@ -678,6 +745,8 @@ static inline void rcu_read_lock(void)
 */
static inline void rcu_read_unlock(void)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_unlock() used illegally while idle");
	rcu_lock_release(&rcu_lock_map);
	__release(RCU);
	__rcu_read_unlock();
@@ -705,6 +774,8 @@ static inline void rcu_read_lock_bh(void)
	local_bh_disable();
	__acquire(RCU_BH);
	rcu_lock_acquire(&rcu_bh_lock_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_lock_bh() used illegally while idle");
}
/*
@@ -714,6 +785,8 @@ static inline void rcu_read_lock_bh(void)
 */
static inline void rcu_read_unlock_bh(void)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_unlock_bh() used illegally while idle");
	rcu_lock_release(&rcu_bh_lock_map);
	__release(RCU_BH);
	local_bh_enable();
@@ -737,6 +810,8 @@ static inline void rcu_read_lock_sched(void)
	preempt_disable();
	__acquire(RCU_SCHED);
	rcu_lock_acquire(&rcu_sched_lock_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_lock_sched() used illegally while idle");
}
/* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
@@ -753,6 +828,8 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
 */
static inline void rcu_read_unlock_sched(void)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_unlock_sched() used illegally while idle");
	rcu_lock_release(&rcu_sched_lock_map);
	__release(RCU_SCHED);
	preempt_enable();
@@ -841,7 +918,7 @@ void __kfree_rcu(struct rcu_head *head, unsigned long offset)
	/* See the kfree_rcu() header comment. */
	BUILD_BUG_ON(!__is_kfree_rcu_offset(offset));
	kfree_call_rcu(head, (rcu_callback)offset);
}
/**
...
@@ -27,13 +27,9 @@
#include <linux/cache.h>
#ifdef CONFIG_RCU_BOOST
static inline void rcu_init(void)
{
}
#else /* #ifdef CONFIG_RCU_BOOST */
void rcu_init(void);
#endif /* #else #ifdef CONFIG_RCU_BOOST */
static inline void rcu_barrier_bh(void)
{
@@ -83,6 +79,12 @@ static inline void synchronize_sched_expedited(void)
	synchronize_sched();
}
static inline void kfree_call_rcu(struct rcu_head *head,
void (*func)(struct rcu_head *rcu))
{
call_rcu(head, func);
}
#ifdef CONFIG_TINY_RCU
static inline void rcu_preempt_note_context_switch(void)
...
@@ -61,6 +61,24 @@ extern void synchronize_rcu_bh(void);
extern void synchronize_sched_expedited(void);
extern void synchronize_rcu_expedited(void);
void kfree_call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
/**
* synchronize_rcu_bh_expedited - Brute-force RCU-bh grace period
*
* Wait for an RCU-bh grace period to elapse, but use a "big hammer"
* approach to force the grace period to end quickly. This consumes
* significant time on all CPUs and is unfriendly to real-time workloads,
* so is thus not recommended for any sort of common-case code. In fact,
* if you are using synchronize_rcu_bh_expedited() in a loop, please
* restructure your code to batch your updates, and then use a single
* synchronize_rcu_bh() instead.
*
* Note that it is illegal to call this function while holding any lock
* that is acquired by a CPU-hotplug notifier. And yes, it is also illegal
* to call this function from a CPU-hotplug notifier. Failing to observe
* these restrictions will result in deadlock.
*/
static inline void synchronize_rcu_bh_expedited(void)
{
	synchronize_sched_expedited();
@@ -83,6 +101,7 @@ extern void rcu_sched_force_quiescent_state(void);
/* A context switch is a grace period for RCU-sched and RCU-bh. */
static inline int rcu_blocking_is_gp(void)
{
might_sleep(); /* Check for RCU read-side critical section. */
	return num_online_cpus() == 1;
}
...
@@ -1864,8 +1864,7 @@ extern void task_clear_jobctl_pending(struct task_struct *task,
#ifdef CONFIG_PREEMPT_RCU
#define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */
#define RCU_READ_UNLOCK_NEED_QS (1 << 1) /* RCU core needs CPU response. */
static inline void rcu_copy_process(struct task_struct *p)
{
...
@@ -99,15 +99,18 @@ long srcu_batches_completed(struct srcu_struct *sp);
 * power mode. This way we can notice an extended quiescent state to
 * other CPUs that started a grace period. Otherwise we would delay any
 * grace period as long as we run in the idle task.
*
* Similarly, we avoid claiming an SRCU read lock held if the current
* CPU is offline.
 */
static inline int srcu_read_lock_held(struct srcu_struct *sp)
{
if (rcu_is_cpu_idle())
return 0;
	if (!debug_lockdep_rcu_enabled())
		return 1;
if (rcu_is_cpu_idle())
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
	return lock_is_held(&sp->dep_map);
}
@@ -169,6 +172,8 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)
	int retval = __srcu_read_lock(sp);
	rcu_lock_acquire(&(sp)->dep_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"srcu_read_lock() used illegally while idle");
	return retval;
}
@@ -182,6 +187,8 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)
static inline void srcu_read_unlock(struct srcu_struct *sp, int idx)
	__releases(sp)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"srcu_read_unlock() used illegally while idle");
	rcu_lock_release(&(sp)->dep_map);
	__srcu_read_unlock(sp, idx);
}
...
@@ -313,19 +313,22 @@ TRACE_EVENT(rcu_prep_idle,
/*
 * Tracepoint for the registration of a single RCU callback function.
 * The first argument is the type of RCU, the second argument is
 * a pointer to the RCU callback itself, the third element is the
 * number of lazy callbacks queued, and the fourth element is the
 * total number of callbacks queued.
 */
TRACE_EVENT(rcu_callback,
	TP_PROTO(char *rcuname, struct rcu_head *rhp, long qlen_lazy,
		 long qlen),
	TP_ARGS(rcuname, rhp, qlen_lazy, qlen),
	TP_STRUCT__entry(
		__field(char *, rcuname)
		__field(void *, rhp)
		__field(void *, func)
		__field(long, qlen_lazy)
		__field(long, qlen)
	),
@@ -333,11 +336,13 @@ TRACE_EVENT(rcu_callback,
		__entry->rcuname = rcuname;
		__entry->rhp = rhp;
		__entry->func = rhp->func;
		__entry->qlen_lazy = qlen_lazy;
		__entry->qlen = qlen;
	),
	TP_printk("%s rhp=%p func=%pf %ld/%ld",
		  __entry->rcuname, __entry->rhp, __entry->func,
		  __entry->qlen_lazy, __entry->qlen)
);
/*
@@ -345,20 +350,21 @@ TRACE_EVENT(rcu_callback,
 * kfree() form. The first argument is the RCU type, the second argument
 * is a pointer to the RCU callback, the third argument is the offset
 * of the callback within the enclosing RCU-protected data structure,
 * the fourth argument is the number of lazy callbacks queued, and the
 * fifth argument is the total number of callbacks queued.
 */
TRACE_EVENT(rcu_kfree_callback,
	TP_PROTO(char *rcuname, struct rcu_head *rhp, unsigned long offset,
		 long qlen_lazy, long qlen),
	TP_ARGS(rcuname, rhp, offset, qlen_lazy, qlen),
	TP_STRUCT__entry(
		__field(char *, rcuname)
		__field(void *, rhp)
		__field(unsigned long, offset)
		__field(long, qlen_lazy)
		__field(long, qlen)
	),
@@ -366,41 +372,45 @@ TRACE_EVENT(rcu_kfree_callback,
		__entry->rcuname = rcuname;
		__entry->rhp = rhp;
		__entry->offset = offset;
		__entry->qlen_lazy = qlen_lazy;
		__entry->qlen = qlen;
	),
	TP_printk("%s rhp=%p func=%ld %ld/%ld",
		  __entry->rcuname, __entry->rhp, __entry->offset,
		  __entry->qlen_lazy, __entry->qlen)
);
/*
 * Tracepoint for marking the beginning rcu_do_batch, performed to start
 * RCU callback invocation. The first argument is the RCU flavor,
 * the second is the number of lazy callbacks queued, the third is
 * the total number of callbacks queued, and the fourth argument is
 * the current RCU-callback batch limit.
 */
TRACE_EVENT(rcu_batch_start,
	TP_PROTO(char *rcuname, long qlen_lazy, long qlen, int blimit),
	TP_ARGS(rcuname, qlen_lazy, qlen, blimit),
	TP_STRUCT__entry(
		__field(char *, rcuname)
		__field(long, qlen_lazy)
		__field(long, qlen)
		__field(int, blimit)
	),
	TP_fast_assign(
		__entry->rcuname = rcuname;
		__entry->qlen_lazy = qlen_lazy;
		__entry->qlen = qlen;
		__entry->blimit = blimit;
	),
	TP_printk("%s CBs=%ld/%ld bl=%d",
		  __entry->rcuname, __entry->qlen_lazy, __entry->qlen,
		  __entry->blimit)
);
/*
@@ -531,16 +541,21 @@ TRACE_EVENT(rcu_torture_read,
#else /* #ifdef CONFIG_RCU_TRACE */
#define trace_rcu_grace_period(rcuname, gpnum, gpevent) do { } while (0)
#define trace_rcu_grace_period_init(rcuname, gpnum, level, grplo, grphi, \
				    qsmask) do { } while (0)
#define trace_rcu_preempt_task(rcuname, pid, gpnum) do { } while (0)
#define trace_rcu_unlock_preempted_task(rcuname, gpnum, pid) do { } while (0)
#define trace_rcu_quiescent_state_report(rcuname, gpnum, mask, qsmask, level, \
					 grplo, grphi, gp_tasks) do { } \
	while (0)
#define trace_rcu_fqs(rcuname, gpnum, cpu, qsevent) do { } while (0)
#define trace_rcu_dyntick(polarity, oldnesting, newnesting) do { } while (0)
#define trace_rcu_prep_idle(reason) do { } while (0)
#define trace_rcu_callback(rcuname, rhp, qlen_lazy, qlen) do { } while (0)
#define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen_lazy, qlen) \
	do { } while (0)
#define trace_rcu_batch_start(rcuname, qlen_lazy, qlen, blimit) \
do { } while (0)
#define trace_rcu_invoke_callback(rcuname, rhp) do { } while (0)
#define trace_rcu_invoke_kfree_callback(rcuname, rhp, offset) do { } while (0)
#define trace_rcu_batch_end(rcuname, callbacks_invoked, cb, nr, iit, risk) \
...
@@ -438,15 +438,6 @@ config PREEMPT_RCU
	  This option enables preemptible-RCU code that is common between
	  the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
config RCU_TRACE
bool "Enable tracing for RCU"
help
This option provides tracing in RCU which presents stats
in debugfs for debugging RCU implementation.
Say Y here if you want to enable RCU tracing
Say N if you are unsure.
config RCU_FANOUT
	int "Tree-based hierarchical RCU fanout value"
	range 2 64 if 64BIT
...
@@ -4176,7 +4176,13 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
	printk("-------------------------------\n");
	printk("%s:%d %s!\n", file, line, s);
	printk("\nother info that might help us debug this:\n\n");
	printk("\n%srcu_scheduler_active = %d, debug_locks = %d\n",
!rcu_lockdep_current_cpu_online()
? "RCU used illegally from offline CPU!\n"
: rcu_is_cpu_idle()
? "RCU used illegally from idle CPU!\n"
: "",
rcu_scheduler_active, debug_locks);
/*
 * If a CPU is in the RCU-free window in idle (ie: in the section
...
@@ -33,8 +33,27 @@
 * Process-level increment to ->dynticks_nesting field. This allows for
 * architectures that use half-interrupts and half-exceptions from
 * process context.
*
* DYNTICK_TASK_NEST_MASK defines a field of width DYNTICK_TASK_NEST_WIDTH
* that counts the number of process-based reasons why RCU cannot
* consider the corresponding CPU to be idle, and DYNTICK_TASK_NEST_VALUE
* is the value used to increment or decrement this field.
*
* The rest of the bits could in principle be used to count interrupts,
* but this would mean that a negative-one value in the interrupt
* field could incorrectly zero out the DYNTICK_TASK_NEST_MASK field.
* We therefore provide a two-bit guard field defined by DYNTICK_TASK_MASK
* that is set to DYNTICK_TASK_FLAG upon initial exit from idle.
* The DYNTICK_TASK_EXIT_IDLE value is thus the combined value used upon
* initial exit from idle.
 */
#define DYNTICK_TASK_NEST_WIDTH 7
#define DYNTICK_TASK_NEST_VALUE ((LLONG_MAX >> DYNTICK_TASK_NEST_WIDTH) + 1)
#define DYNTICK_TASK_NEST_MASK (LLONG_MAX - DYNTICK_TASK_NEST_VALUE + 1)
#define DYNTICK_TASK_FLAG ((DYNTICK_TASK_NEST_VALUE / 8) * 2)
#define DYNTICK_TASK_MASK ((DYNTICK_TASK_NEST_VALUE / 8) * 3)
#define DYNTICK_TASK_EXIT_IDLE (DYNTICK_TASK_NEST_VALUE + \
DYNTICK_TASK_FLAG)
/*
 * debug_rcu_head_queue()/debug_rcu_head_unqueue() are used internally
@@ -50,7 +69,6 @@ extern struct debug_obj_descr rcuhead_debug_descr;
static inline void debug_rcu_head_queue(struct rcu_head *head)
{
WARN_ON_ONCE((unsigned long)head & 0x3);
	debug_object_activate(head, &rcuhead_debug_descr);
	debug_object_active_state(head, &rcuhead_debug_descr,
				  STATE_RCU_HEAD_READY,
@@ -76,16 +94,18 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
extern void kfree(const void *);
static inline bool __rcu_reclaim(char *rn, struct rcu_head *head)
{
	unsigned long offset = (unsigned long)head->func;
	if (__is_kfree_rcu_offset(offset)) {
		RCU_TRACE(trace_rcu_invoke_kfree_callback(rn, head, offset));
		kfree((void *)head - offset);
return 1;
	} else {
		RCU_TRACE(trace_rcu_invoke_callback(rn, head));
		head->func(head);
return 0;
	}
}
...
@@ -88,6 +88,9 @@ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
 * section.
 *
 * Check debug_lockdep_rcu_enabled() to prevent false positives during boot.
*
* Note that rcu_read_lock() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
 */
int rcu_read_lock_bh_held(void)
{
@@ -95,6 +98,8 @@ int rcu_read_lock_bh_held(void)
		return 1;
	if (rcu_is_cpu_idle())
		return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
	return in_softirq() || irqs_disabled();
}
EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
...
@@ -53,7 +53,7 @@ static void __call_rcu(struct rcu_head *head,
#include "rcutiny_plugin.h"
static long long rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
/* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */
static void rcu_idle_enter_common(long long oldval)
@@ -88,10 +88,16 @@ void rcu_idle_enter(void)
	local_irq_save(flags);
	oldval = rcu_dynticks_nesting;
WARN_ON_ONCE((rcu_dynticks_nesting & DYNTICK_TASK_NEST_MASK) == 0);
if ((rcu_dynticks_nesting & DYNTICK_TASK_NEST_MASK) ==
DYNTICK_TASK_NEST_VALUE)
		rcu_dynticks_nesting = 0;
else
rcu_dynticks_nesting -= DYNTICK_TASK_NEST_VALUE;
	rcu_idle_enter_common(oldval);
	local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(rcu_idle_enter);
/*
 * Exit an interrupt handler towards idle.
@@ -140,11 +146,15 @@ void rcu_idle_exit(void)
	local_irq_save(flags);
	oldval = rcu_dynticks_nesting;
	WARN_ON_ONCE(rcu_dynticks_nesting < 0);
	if (rcu_dynticks_nesting & DYNTICK_TASK_NEST_MASK)
rcu_dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
else
rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
	rcu_idle_exit_common(oldval);
	local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(rcu_idle_exit);
/*
 * Enter an interrupt handler, moving away from idle.
@@ -258,7 +268,7 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
	/* If no RCU callbacks ready to invoke, just return. */
	if (&rcp->rcucblist == rcp->donetail) {
		RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, 0, -1));
		RCU_TRACE(trace_rcu_batch_end(rcp->name, 0,
					      ACCESS_ONCE(rcp->rcucblist),
					      need_resched(),
@@ -269,7 +279,7 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
	/* Move the ready-to-invoke callbacks to a local list. */
	local_irq_save(flags);
	RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
	list = rcp->rcucblist;
	rcp->rcucblist = *rcp->donetail;
	*rcp->donetail = NULL;
@@ -319,6 +329,10 @@ static void rcu_process_callbacks(struct softirq_action *unused)
 */
void synchronize_sched(void)
{
rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
!lock_is_held(&rcu_lock_map) &&
!lock_is_held(&rcu_sched_lock_map),
"Illegal synchronize_sched() in RCU read-side critical section");
	cond_resched();
}
EXPORT_SYMBOL_GPL(synchronize_sched);
...
@@ -132,6 +132,7 @@ static struct rcu_preempt_ctrlblk rcu_preempt_ctrlblk = {
	RCU_TRACE(.rcb.name = "rcu_preempt")
};
static void rcu_read_unlock_special(struct task_struct *t);
static int rcu_preempted_readers_exp(void);
static void rcu_report_exp_done(void);
@@ -146,6 +147,16 @@ static int rcu_cpu_blocking_cur_gp(void)
/*
 * Check for a running RCU reader. Because there is only one CPU,
 * there can be but one running RCU reader at a time. ;-)
 *
 * Returns zero if there are no running readers. Returns a positive
 * number if there is at least one reader within its RCU read-side
 * critical section. Returns a negative number if an outermost reader
 * is in the midst of exiting from its RCU read-side critical section.
 */
static int rcu_preempt_running_reader(void)
{
@@ -307,7 +318,6 @@ static int rcu_boost(void)
	t = container_of(tb, struct task_struct, rcu_node_entry);
	rt_mutex_init_proxy_locked(&mtx, t);
	t->rcu_boost_mutex = &mtx;
t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BOOSTED;
	raw_local_irq_restore(flags);
	rt_mutex_lock(&mtx);
	rt_mutex_unlock(&mtx); /* Keep lockdep happy. */
@@ -475,7 +485,7 @@ void rcu_preempt_note_context_switch(void)
	unsigned long flags;
	local_irq_save(flags); /* must exclude scheduler_tick(). */
	if (rcu_preempt_running_reader() > 0 &&
	    (t->rcu_read_unlock_special & RCU_READ_UNLOCK_BLOCKED) == 0) {
		/* Possibly blocking in an RCU read-side critical section. */
@@ -494,6 +504,13 @@ void rcu_preempt_note_context_switch(void)
		list_add(&t->rcu_node_entry, &rcu_preempt_ctrlblk.blkd_tasks);
		if (rcu_cpu_blocking_cur_gp())
			rcu_preempt_ctrlblk.gp_tasks = &t->rcu_node_entry;
} else if (rcu_preempt_running_reader() < 0 &&
t->rcu_read_unlock_special) {
/*
* Complete exit from RCU read-side critical section on
* behalf of preempted instance of __rcu_read_unlock().
*/
rcu_read_unlock_special(t);
	}
/*
@@ -526,12 +543,15 @@ EXPORT_SYMBOL_GPL(__rcu_read_lock);
 * notify RCU core processing or task having blocked during the RCU
 * read-side critical section.
 */
static noinline void rcu_read_unlock_special(struct task_struct *t)
{
	int empty;
	int empty_exp;
	unsigned long flags;
	struct list_head *np;
#ifdef CONFIG_RCU_BOOST
struct rt_mutex *rbmp = NULL;
#endif /* #ifdef CONFIG_RCU_BOOST */
	int special;
	/*
@@ -552,7 +572,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
		rcu_preempt_cpu_qs();
	/* Hardware IRQ handlers cannot block. */
	if (in_irq() || in_serving_softirq()) {
		local_irq_restore(flags);
		return;
	}
@@ -597,10 +617,10 @@ static void rcu_read_unlock_special(struct task_struct *t)
	}
#ifdef CONFIG_RCU_BOOST
	/* Unboost self if was boosted. */
	if (t->rcu_boost_mutex != NULL) {
		rbmp = t->rcu_boost_mutex;
rt_mutex_unlock(t->rcu_boost_mutex);
		t->rcu_boost_mutex = NULL;
rt_mutex_unlock(rbmp);
	}
#endif /* #ifdef CONFIG_RCU_BOOST */
	local_irq_restore(flags);
@@ -618,13 +638,22 @@ void __rcu_read_unlock(void)
	struct task_struct *t = current;
	barrier(); /* needed if we ever invoke rcu_read_unlock in rcutiny.c */
if (t->rcu_read_lock_nesting != 1)
		--t->rcu_read_lock_nesting;
	else {
		t->rcu_read_lock_nesting = INT_MIN;
		barrier(); /* assign before ->rcu_read_unlock_special load */
if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
			rcu_read_unlock_special(t);
barrier(); /* ->rcu_read_unlock_special load before assign */
t->rcu_read_lock_nesting = 0;
}
#ifdef CONFIG_PROVE_LOCKING #ifdef CONFIG_PROVE_LOCKING
WARN_ON_ONCE(t->rcu_read_lock_nesting < 0); {
int rrln = ACCESS_ONCE(t->rcu_read_lock_nesting);
WARN_ON_ONCE(rrln < 0 && rrln > INT_MIN / 2);
}
#endif /* #ifdef CONFIG_PROVE_LOCKING */ #endif /* #ifdef CONFIG_PROVE_LOCKING */
} }
EXPORT_SYMBOL_GPL(__rcu_read_unlock); EXPORT_SYMBOL_GPL(__rcu_read_unlock);
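Because the side-by-side hunk above is hard to read once collapsed into single lines, here is a sketch of what the reworked __rcu_read_unlock() amounts to, reconstructed from the hunk (not copied verbatim from the tree, and omitting the CONFIG_PROVE_LOCKING check). The apparent intent is that the nesting counter is parked at INT_MIN while rcu_read_unlock_special() runs, so that code interrupting or preempting the outermost unlock sees a negative nesting count and does not re-enter the special-processing path; this pairs with the new rcu_preempt_running_reader() < 0 branch added to rcu_preempt_note_context_switch() earlier in this diff.

        void __rcu_read_unlock(void)
        {
                struct task_struct *t = current;

                barrier();      /* needed if we ever invoke rcu_read_unlock in rcutiny.c */
                if (t->rcu_read_lock_nesting != 1) {
                        --t->rcu_read_lock_nesting;             /* still nested: just decrement */
                } else {
                        t->rcu_read_lock_nesting = INT_MIN;     /* mark "outermost unlock in progress" */
                        barrier();      /* assign before ->rcu_read_unlock_special load */
                        if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
                                rcu_read_unlock_special(t);     /* report QS, dequeue, unboost */
                        barrier();      /* ->rcu_read_unlock_special load before assign */
                        t->rcu_read_lock_nesting = 0;           /* fully outside the critical section */
                }
        }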
...@@ -649,7 +678,7 @@ static void rcu_preempt_check_callbacks(void) ...@@ -649,7 +678,7 @@ static void rcu_preempt_check_callbacks(void)
invoke_rcu_callbacks(); invoke_rcu_callbacks();
if (rcu_preempt_gp_in_progress() && if (rcu_preempt_gp_in_progress() &&
rcu_cpu_blocking_cur_gp() && rcu_cpu_blocking_cur_gp() &&
rcu_preempt_running_reader()) rcu_preempt_running_reader() > 0)
t->rcu_read_unlock_special |= RCU_READ_UNLOCK_NEED_QS; t->rcu_read_unlock_special |= RCU_READ_UNLOCK_NEED_QS;
} }
...@@ -706,6 +735,11 @@ EXPORT_SYMBOL_GPL(call_rcu); ...@@ -706,6 +735,11 @@ EXPORT_SYMBOL_GPL(call_rcu);
*/ */
void synchronize_rcu(void) void synchronize_rcu(void)
{ {
rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
!lock_is_held(&rcu_lock_map) &&
!lock_is_held(&rcu_sched_lock_map),
"Illegal synchronize_rcu() in RCU read-side critical section");
#ifdef CONFIG_DEBUG_LOCK_ALLOC #ifdef CONFIG_DEBUG_LOCK_ALLOC
if (!rcu_scheduler_active) if (!rcu_scheduler_active)
return; return;
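For reference, the kind of abuse the rcu_lockdep_assert() added to synchronize_rcu() above is meant to catch is a grace-period wait issued from inside a read-side critical section, which can never complete. A minimal, deliberately broken illustration:

        rcu_read_lock();
        synchronize_rcu();      /* illegal: waits for a grace period that cannot end while
                                 * this task is still inside a read-side critical section;
                                 * lockdep-RCU now splats, and without the check this
                                 * would simply deadlock */
        rcu_read_unlock();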
...@@ -882,6 +916,7 @@ static void rcu_preempt_process_callbacks(void) ...@@ -882,6 +916,7 @@ static void rcu_preempt_process_callbacks(void)
static void invoke_rcu_callbacks(void) static void invoke_rcu_callbacks(void)
{ {
have_rcu_kthread_work = 1; have_rcu_kthread_work = 1;
if (rcu_kthread_task != NULL)
wake_up(&rcu_kthread_wq); wake_up(&rcu_kthread_wq);
} }
...@@ -943,11 +978,15 @@ early_initcall(rcu_spawn_kthreads); ...@@ -943,11 +978,15 @@ early_initcall(rcu_spawn_kthreads);
#else /* #ifdef CONFIG_RCU_BOOST */ #else /* #ifdef CONFIG_RCU_BOOST */
/* Hold off callback invocation until early_initcall() time. */
static int rcu_scheduler_fully_active __read_mostly;
/* /*
* Start up softirq processing of callbacks. * Start up softirq processing of callbacks.
*/ */
void invoke_rcu_callbacks(void) void invoke_rcu_callbacks(void)
{ {
if (rcu_scheduler_fully_active)
raise_softirq(RCU_SOFTIRQ); raise_softirq(RCU_SOFTIRQ);
} }
...@@ -963,10 +1002,14 @@ static bool rcu_is_callbacks_kthread(void) ...@@ -963,10 +1002,14 @@ static bool rcu_is_callbacks_kthread(void)
#endif /* #ifdef CONFIG_RCU_TRACE */ #endif /* #ifdef CONFIG_RCU_TRACE */
void rcu_init(void) static int __init rcu_scheduler_really_started(void)
{ {
rcu_scheduler_fully_active = 1;
open_softirq(RCU_SOFTIRQ, rcu_process_callbacks); open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
raise_softirq(RCU_SOFTIRQ); /* Invoke any callbacks from early boot. */
return 0;
} }
early_initcall(rcu_scheduler_really_started);
#endif /* #else #ifdef CONFIG_RCU_BOOST */ #endif /* #else #ifdef CONFIG_RCU_BOOST */
......
...@@ -65,7 +65,10 @@ static int fqs_duration; /* Duration of bursts (us), 0 to disable. */ ...@@ -65,7 +65,10 @@ static int fqs_duration; /* Duration of bursts (us), 0 to disable. */
static int fqs_holdoff; /* Hold time within burst (us). */ static int fqs_holdoff; /* Hold time within burst (us). */
static int fqs_stutter = 3; /* Wait time between bursts (s). */ static int fqs_stutter = 3; /* Wait time between bursts (s). */
static int onoff_interval; /* Wait time between CPU hotplugs, 0=disable. */ static int onoff_interval; /* Wait time between CPU hotplugs, 0=disable. */
static int onoff_holdoff; /* Seconds after boot before CPU hotplugs. */
static int shutdown_secs; /* Shutdown time (s). <=0 for no shutdown. */ static int shutdown_secs; /* Shutdown time (s). <=0 for no shutdown. */
static int stall_cpu; /* CPU-stall duration (s). 0 for no stall. */
static int stall_cpu_holdoff = 10; /* Time to wait until stall (s). */
static int test_boost = 1; /* Test RCU prio boost: 0=no, 1=maybe, 2=yes. */ static int test_boost = 1; /* Test RCU prio boost: 0=no, 1=maybe, 2=yes. */
static int test_boost_interval = 7; /* Interval between boost tests, seconds. */ static int test_boost_interval = 7; /* Interval between boost tests, seconds. */
static int test_boost_duration = 4; /* Duration of each boost test, seconds. */ static int test_boost_duration = 4; /* Duration of each boost test, seconds. */
...@@ -95,8 +98,14 @@ module_param(fqs_stutter, int, 0444); ...@@ -95,8 +98,14 @@ module_param(fqs_stutter, int, 0444);
MODULE_PARM_DESC(fqs_stutter, "Wait time between fqs bursts (s)"); MODULE_PARM_DESC(fqs_stutter, "Wait time between fqs bursts (s)");
module_param(onoff_interval, int, 0444); module_param(onoff_interval, int, 0444);
MODULE_PARM_DESC(onoff_interval, "Time between CPU hotplugs (s), 0=disable"); MODULE_PARM_DESC(onoff_interval, "Time between CPU hotplugs (s), 0=disable");
module_param(onoff_holdoff, int, 0444);
MODULE_PARM_DESC(onoff_holdoff, "Time after boot before CPU hotplugs (s)");
module_param(shutdown_secs, int, 0444); module_param(shutdown_secs, int, 0444);
MODULE_PARM_DESC(shutdown_secs, "Shutdown time (s), zero to disable."); MODULE_PARM_DESC(shutdown_secs, "Shutdown time (s), zero to disable.");
module_param(stall_cpu, int, 0444);
MODULE_PARM_DESC(stall_cpu, "Stall duration (s), zero to disable.");
module_param(stall_cpu_holdoff, int, 0444);
MODULE_PARM_DESC(stall_cpu_holdoff, "Time to wait before starting stall (s).");
module_param(test_boost, int, 0444); module_param(test_boost, int, 0444);
MODULE_PARM_DESC(test_boost, "Test RCU prio boost: 0=no, 1=maybe, 2=yes."); MODULE_PARM_DESC(test_boost, "Test RCU prio boost: 0=no, 1=maybe, 2=yes.");
module_param(test_boost_interval, int, 0444); module_param(test_boost_interval, int, 0444);
...@@ -129,6 +138,7 @@ static struct task_struct *shutdown_task; ...@@ -129,6 +138,7 @@ static struct task_struct *shutdown_task;
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
static struct task_struct *onoff_task; static struct task_struct *onoff_task;
#endif /* #ifdef CONFIG_HOTPLUG_CPU */ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
static struct task_struct *stall_task;
#define RCU_TORTURE_PIPE_LEN 10 #define RCU_TORTURE_PIPE_LEN 10
...@@ -990,12 +1000,12 @@ static void rcu_torture_timer(unsigned long unused) ...@@ -990,12 +1000,12 @@ static void rcu_torture_timer(unsigned long unused)
rcu_read_lock_bh_held() || rcu_read_lock_bh_held() ||
rcu_read_lock_sched_held() || rcu_read_lock_sched_held() ||
srcu_read_lock_held(&srcu_ctl)); srcu_read_lock_held(&srcu_ctl));
do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p == NULL) { if (p == NULL) {
/* Leave because rcu_torture_writer is not yet underway */ /* Leave because rcu_torture_writer is not yet underway */
cur_ops->readunlock(idx); cur_ops->readunlock(idx);
return; return;
} }
do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p->rtort_mbtest == 0) if (p->rtort_mbtest == 0)
atomic_inc(&n_rcu_torture_mberror); atomic_inc(&n_rcu_torture_mberror);
spin_lock(&rand_lock); spin_lock(&rand_lock);
...@@ -1053,13 +1063,13 @@ rcu_torture_reader(void *arg) ...@@ -1053,13 +1063,13 @@ rcu_torture_reader(void *arg)
rcu_read_lock_bh_held() || rcu_read_lock_bh_held() ||
rcu_read_lock_sched_held() || rcu_read_lock_sched_held() ||
srcu_read_lock_held(&srcu_ctl)); srcu_read_lock_held(&srcu_ctl));
do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p == NULL) { if (p == NULL) {
/* Wait for rcu_torture_writer to get underway */ /* Wait for rcu_torture_writer to get underway */
cur_ops->readunlock(idx); cur_ops->readunlock(idx);
schedule_timeout_interruptible(HZ); schedule_timeout_interruptible(HZ);
continue; continue;
} }
do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p->rtort_mbtest == 0) if (p->rtort_mbtest == 0)
atomic_inc(&n_rcu_torture_mberror); atomic_inc(&n_rcu_torture_mberror);
cur_ops->read_delay(&rand); cur_ops->read_delay(&rand);
...@@ -1300,13 +1310,13 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, char *tag) ...@@ -1300,13 +1310,13 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, char *tag)
"fqs_duration=%d fqs_holdoff=%d fqs_stutter=%d " "fqs_duration=%d fqs_holdoff=%d fqs_stutter=%d "
"test_boost=%d/%d test_boost_interval=%d " "test_boost=%d/%d test_boost_interval=%d "
"test_boost_duration=%d shutdown_secs=%d " "test_boost_duration=%d shutdown_secs=%d "
"onoff_interval=%d\n", "onoff_interval=%d onoff_holdoff=%d\n",
torture_type, tag, nrealreaders, nfakewriters, torture_type, tag, nrealreaders, nfakewriters,
stat_interval, verbose, test_no_idle_hz, shuffle_interval, stat_interval, verbose, test_no_idle_hz, shuffle_interval,
stutter, irqreader, fqs_duration, fqs_holdoff, fqs_stutter, stutter, irqreader, fqs_duration, fqs_holdoff, fqs_stutter,
test_boost, cur_ops->can_boost, test_boost, cur_ops->can_boost,
test_boost_interval, test_boost_duration, shutdown_secs, test_boost_interval, test_boost_duration, shutdown_secs,
onoff_interval); onoff_interval, onoff_holdoff);
} }
static struct notifier_block rcutorture_shutdown_nb = { static struct notifier_block rcutorture_shutdown_nb = {
...@@ -1410,6 +1420,11 @@ rcu_torture_onoff(void *arg) ...@@ -1410,6 +1420,11 @@ rcu_torture_onoff(void *arg)
for_each_online_cpu(cpu) for_each_online_cpu(cpu)
maxcpu = cpu; maxcpu = cpu;
WARN_ON(maxcpu < 0); WARN_ON(maxcpu < 0);
if (onoff_holdoff > 0) {
VERBOSE_PRINTK_STRING("rcu_torture_onoff begin holdoff");
schedule_timeout_interruptible(onoff_holdoff * HZ);
VERBOSE_PRINTK_STRING("rcu_torture_onoff end holdoff");
}
while (!kthread_should_stop()) { while (!kthread_should_stop()) {
cpu = (rcu_random(&rand) >> 4) % (maxcpu + 1); cpu = (rcu_random(&rand) >> 4) % (maxcpu + 1);
if (cpu_online(cpu) && cpu_is_hotpluggable(cpu)) { if (cpu_online(cpu) && cpu_is_hotpluggable(cpu)) {
...@@ -1450,12 +1465,15 @@ rcu_torture_onoff(void *arg) ...@@ -1450,12 +1465,15 @@ rcu_torture_onoff(void *arg)
static int __cpuinit static int __cpuinit
rcu_torture_onoff_init(void) rcu_torture_onoff_init(void)
{ {
int ret;
if (onoff_interval <= 0) if (onoff_interval <= 0)
return 0; return 0;
onoff_task = kthread_run(rcu_torture_onoff, NULL, "rcu_torture_onoff"); onoff_task = kthread_run(rcu_torture_onoff, NULL, "rcu_torture_onoff");
if (IS_ERR(onoff_task)) { if (IS_ERR(onoff_task)) {
ret = PTR_ERR(onoff_task);
onoff_task = NULL; onoff_task = NULL;
return PTR_ERR(onoff_task); return ret;
} }
return 0; return 0;
} }
...@@ -1481,6 +1499,63 @@ static void rcu_torture_onoff_cleanup(void) ...@@ -1481,6 +1499,63 @@ static void rcu_torture_onoff_cleanup(void)
#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */ #endif /* #else #ifdef CONFIG_HOTPLUG_CPU */
/*
* CPU-stall kthread. It waits as specified by stall_cpu_holdoff, then
* induces a CPU stall for the time specified by stall_cpu.
*/
static int __cpuinit rcu_torture_stall(void *args)
{
unsigned long stop_at;
VERBOSE_PRINTK_STRING("rcu_torture_stall task started");
if (stall_cpu_holdoff > 0) {
VERBOSE_PRINTK_STRING("rcu_torture_stall begin holdoff");
schedule_timeout_interruptible(stall_cpu_holdoff * HZ);
VERBOSE_PRINTK_STRING("rcu_torture_stall end holdoff");
}
if (!kthread_should_stop()) {
stop_at = get_seconds() + stall_cpu;
/* RCU CPU stall is expected behavior in following code. */
printk(KERN_ALERT "rcu_torture_stall start.\n");
rcu_read_lock();
preempt_disable();
while (ULONG_CMP_LT(get_seconds(), stop_at))
continue; /* Induce RCU CPU stall warning. */
preempt_enable();
rcu_read_unlock();
printk(KERN_ALERT "rcu_torture_stall end.\n");
}
rcutorture_shutdown_absorb("rcu_torture_stall");
while (!kthread_should_stop())
schedule_timeout_interruptible(10 * HZ);
return 0;
}
/* Spawn CPU-stall kthread, if stall_cpu specified. */
static int __init rcu_torture_stall_init(void)
{
int ret;
if (stall_cpu <= 0)
return 0;
stall_task = kthread_run(rcu_torture_stall, NULL, "rcu_torture_stall");
if (IS_ERR(stall_task)) {
ret = PTR_ERR(stall_task);
stall_task = NULL;
return ret;
}
return 0;
}
/* Clean up after the CPU-stall kthread, if one was spawned. */
static void rcu_torture_stall_cleanup(void)
{
if (stall_task == NULL)
return;
VERBOSE_PRINTK_STRING("Stopping rcu_torture_stall_task.");
kthread_stop(stall_task);
}
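The stall-test machinery above is driven entirely by the two new module parameters. Assuming rcutorture is built as a module, something along the lines of "modprobe rcutorture stall_cpu=70 stall_cpu_holdoff=30" would wait 30 seconds after load, then spin with preemption disabled inside an RCU read-side critical section for 70 seconds; choosing a stall_cpu value comfortably larger than the configured CONFIG_RCU_CPU_STALL_TIMEOUT is what actually provokes the stall warning being exercised (the exact values here are illustrative, not taken from the patch).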
static int rcutorture_cpu_notify(struct notifier_block *self, static int rcutorture_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu) unsigned long action, void *hcpu)
{ {
...@@ -1523,6 +1598,7 @@ rcu_torture_cleanup(void) ...@@ -1523,6 +1598,7 @@ rcu_torture_cleanup(void)
fullstop = FULLSTOP_RMMOD; fullstop = FULLSTOP_RMMOD;
mutex_unlock(&fullstop_mutex); mutex_unlock(&fullstop_mutex);
unregister_reboot_notifier(&rcutorture_shutdown_nb); unregister_reboot_notifier(&rcutorture_shutdown_nb);
rcu_torture_stall_cleanup();
if (stutter_task) { if (stutter_task) {
VERBOSE_PRINTK_STRING("Stopping rcu_torture_stutter task"); VERBOSE_PRINTK_STRING("Stopping rcu_torture_stutter task");
kthread_stop(stutter_task); kthread_stop(stutter_task);
...@@ -1602,6 +1678,10 @@ rcu_torture_cleanup(void) ...@@ -1602,6 +1678,10 @@ rcu_torture_cleanup(void)
cur_ops->cleanup(); cur_ops->cleanup();
if (atomic_read(&n_rcu_torture_error)) if (atomic_read(&n_rcu_torture_error))
rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE"); rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE");
else if (n_online_successes != n_online_attempts ||
n_offline_successes != n_offline_attempts)
rcu_torture_print_module_parms(cur_ops,
"End of test: RCU_HOTPLUG");
else else
rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS"); rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS");
} }
...@@ -1819,6 +1899,7 @@ rcu_torture_init(void) ...@@ -1819,6 +1899,7 @@ rcu_torture_init(void)
} }
rcu_torture_onoff_init(); rcu_torture_onoff_init();
register_reboot_notifier(&rcutorture_shutdown_nb); register_reboot_notifier(&rcutorture_shutdown_nb);
rcu_torture_stall_init();
rcutorture_record_test_transition(); rcutorture_record_test_transition();
mutex_unlock(&fullstop_mutex); mutex_unlock(&fullstop_mutex);
return 0; return 0;
......
This diff is collapsed.
...@@ -239,6 +239,12 @@ struct rcu_data { ...@@ -239,6 +239,12 @@ struct rcu_data {
bool preemptible; /* Preemptible RCU? */ bool preemptible; /* Preemptible RCU? */
struct rcu_node *mynode; /* This CPU's leaf of hierarchy */ struct rcu_node *mynode; /* This CPU's leaf of hierarchy */
unsigned long grpmask; /* Mask to apply to leaf qsmask. */ unsigned long grpmask; /* Mask to apply to leaf qsmask. */
#ifdef CONFIG_RCU_CPU_STALL_INFO
unsigned long ticks_this_gp; /* The number of scheduling-clock */
/* ticks this CPU has handled */
/* during and after the last grace */
/* period it is aware of. */
#endif /* #ifdef CONFIG_RCU_CPU_STALL_INFO */
/* 2) batch handling */ /* 2) batch handling */
/* /*
...@@ -265,7 +271,8 @@ struct rcu_data { ...@@ -265,7 +271,8 @@ struct rcu_data {
*/ */
struct rcu_head *nxtlist; struct rcu_head *nxtlist;
struct rcu_head **nxttail[RCU_NEXT_SIZE]; struct rcu_head **nxttail[RCU_NEXT_SIZE];
long qlen; /* # of queued callbacks */ long qlen_lazy; /* # of lazy queued callbacks */
long qlen; /* # of queued callbacks, incl lazy */
long qlen_last_fqs_check; long qlen_last_fqs_check;
/* qlen at last check for QS forcing */ /* qlen at last check for QS forcing */
unsigned long n_cbs_invoked; /* count of RCU cbs invoked. */ unsigned long n_cbs_invoked; /* count of RCU cbs invoked. */
...@@ -282,7 +289,6 @@ struct rcu_data { ...@@ -282,7 +289,6 @@ struct rcu_data {
/* 4) reasons this CPU needed to be kicked by force_quiescent_state */ /* 4) reasons this CPU needed to be kicked by force_quiescent_state */
unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */ unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */
unsigned long offline_fqs; /* Kicked due to being offline. */ unsigned long offline_fqs; /* Kicked due to being offline. */
unsigned long resched_ipi; /* Sent a resched IPI. */
/* 5) __rcu_pending() statistics. */ /* 5) __rcu_pending() statistics. */
unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */ unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */
...@@ -313,12 +319,6 @@ struct rcu_data { ...@@ -313,12 +319,6 @@ struct rcu_data {
#else #else
#define RCU_STALL_DELAY_DELTA 0 #define RCU_STALL_DELAY_DELTA 0
#endif #endif
#define RCU_SECONDS_TILL_STALL_CHECK (CONFIG_RCU_CPU_STALL_TIMEOUT * HZ + \
RCU_STALL_DELAY_DELTA)
/* for rsp->jiffies_stall */
#define RCU_SECONDS_TILL_STALL_RECHECK (3 * RCU_SECONDS_TILL_STALL_CHECK + 30)
/* for rsp->jiffies_stall */
#define RCU_STALL_RAT_DELAY 2 /* Allow other CPUs time */ #define RCU_STALL_RAT_DELAY 2 /* Allow other CPUs time */
/* to take at least one */ /* to take at least one */
/* scheduling clock irq */ /* scheduling clock irq */
...@@ -438,8 +438,8 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp); ...@@ -438,8 +438,8 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
static int rcu_preempt_offline_tasks(struct rcu_state *rsp, static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
struct rcu_node *rnp, struct rcu_node *rnp,
struct rcu_data *rdp); struct rcu_data *rdp);
static void rcu_preempt_offline_cpu(int cpu);
#endif /* #ifdef CONFIG_HOTPLUG_CPU */ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
static void rcu_preempt_cleanup_dead_cpu(int cpu);
static void rcu_preempt_check_callbacks(int cpu); static void rcu_preempt_check_callbacks(int cpu);
static void rcu_preempt_process_callbacks(void); static void rcu_preempt_process_callbacks(void);
void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)); void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
...@@ -448,9 +448,9 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp, ...@@ -448,9 +448,9 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
bool wake); bool wake);
#endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */ #endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */
static int rcu_preempt_pending(int cpu); static int rcu_preempt_pending(int cpu);
static int rcu_preempt_needs_cpu(int cpu); static int rcu_preempt_cpu_has_callbacks(int cpu);
static void __cpuinit rcu_preempt_init_percpu_data(int cpu); static void __cpuinit rcu_preempt_init_percpu_data(int cpu);
static void rcu_preempt_send_cbs_to_online(void); static void rcu_preempt_cleanup_dying_cpu(void);
static void __init __rcu_init_preempt(void); static void __init __rcu_init_preempt(void);
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags); static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp); static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
...@@ -471,5 +471,10 @@ static void __cpuinit rcu_prepare_kthreads(int cpu); ...@@ -471,5 +471,10 @@ static void __cpuinit rcu_prepare_kthreads(int cpu);
static void rcu_prepare_for_idle_init(int cpu); static void rcu_prepare_for_idle_init(int cpu);
static void rcu_cleanup_after_idle(int cpu); static void rcu_cleanup_after_idle(int cpu);
static void rcu_prepare_for_idle(int cpu); static void rcu_prepare_for_idle(int cpu);
static void print_cpu_stall_info_begin(void);
static void print_cpu_stall_info(struct rcu_state *rsp, int cpu);
static void print_cpu_stall_info_end(void);
static void zero_cpu_stall_ticks(struct rcu_data *rdp);
static void increment_cpu_stall_ticks(void);
#endif /* #ifndef RCU_TREE_NONCORE */ #endif /* #ifndef RCU_TREE_NONCORE */
This diff is collapsed.
...@@ -72,9 +72,9 @@ static void print_one_rcu_data(struct seq_file *m, struct rcu_data *rdp) ...@@ -72,9 +72,9 @@ static void print_one_rcu_data(struct seq_file *m, struct rcu_data *rdp)
rdp->dynticks->dynticks_nesting, rdp->dynticks->dynticks_nesting,
rdp->dynticks->dynticks_nmi_nesting, rdp->dynticks->dynticks_nmi_nesting,
rdp->dynticks_fqs); rdp->dynticks_fqs);
seq_printf(m, " of=%lu ri=%lu", rdp->offline_fqs, rdp->resched_ipi); seq_printf(m, " of=%lu", rdp->offline_fqs);
seq_printf(m, " ql=%ld qs=%c%c%c%c", seq_printf(m, " ql=%ld/%ld qs=%c%c%c%c",
rdp->qlen, rdp->qlen_lazy, rdp->qlen,
".N"[rdp->nxttail[RCU_NEXT_READY_TAIL] != ".N"[rdp->nxttail[RCU_NEXT_READY_TAIL] !=
rdp->nxttail[RCU_NEXT_TAIL]], rdp->nxttail[RCU_NEXT_TAIL]],
".R"[rdp->nxttail[RCU_WAIT_TAIL] != ".R"[rdp->nxttail[RCU_WAIT_TAIL] !=
...@@ -144,8 +144,8 @@ static void print_one_rcu_data_csv(struct seq_file *m, struct rcu_data *rdp) ...@@ -144,8 +144,8 @@ static void print_one_rcu_data_csv(struct seq_file *m, struct rcu_data *rdp)
rdp->dynticks->dynticks_nesting, rdp->dynticks->dynticks_nesting,
rdp->dynticks->dynticks_nmi_nesting, rdp->dynticks->dynticks_nmi_nesting,
rdp->dynticks_fqs); rdp->dynticks_fqs);
seq_printf(m, ",%lu,%lu", rdp->offline_fqs, rdp->resched_ipi); seq_printf(m, ",%lu", rdp->offline_fqs);
seq_printf(m, ",%ld,\"%c%c%c%c\"", rdp->qlen, seq_printf(m, ",%ld,%ld,\"%c%c%c%c\"", rdp->qlen_lazy, rdp->qlen,
".N"[rdp->nxttail[RCU_NEXT_READY_TAIL] != ".N"[rdp->nxttail[RCU_NEXT_READY_TAIL] !=
rdp->nxttail[RCU_NEXT_TAIL]], rdp->nxttail[RCU_NEXT_TAIL]],
".R"[rdp->nxttail[RCU_WAIT_TAIL] != ".R"[rdp->nxttail[RCU_WAIT_TAIL] !=
...@@ -168,7 +168,7 @@ static int show_rcudata_csv(struct seq_file *m, void *unused) ...@@ -168,7 +168,7 @@ static int show_rcudata_csv(struct seq_file *m, void *unused)
{ {
seq_puts(m, "\"CPU\",\"Online?\",\"c\",\"g\",\"pq\",\"pgp\",\"pq\","); seq_puts(m, "\"CPU\",\"Online?\",\"c\",\"g\",\"pq\",\"pgp\",\"pq\",");
seq_puts(m, "\"dt\",\"dt nesting\",\"dt NMI nesting\",\"df\","); seq_puts(m, "\"dt\",\"dt nesting\",\"dt NMI nesting\",\"df\",");
seq_puts(m, "\"of\",\"ri\",\"ql\",\"qs\""); seq_puts(m, "\"of\",\"qll\",\"ql\",\"qs\"");
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
seq_puts(m, "\"kt\",\"ktl\""); seq_puts(m, "\"kt\",\"ktl\"");
#endif /* #ifdef CONFIG_RCU_BOOST */ #endif /* #ifdef CONFIG_RCU_BOOST */
......
...@@ -172,6 +172,12 @@ static void __synchronize_srcu(struct srcu_struct *sp, void (*sync_func)(void)) ...@@ -172,6 +172,12 @@ static void __synchronize_srcu(struct srcu_struct *sp, void (*sync_func)(void))
{ {
int idx; int idx;
rcu_lockdep_assert(!lock_is_held(&sp->dep_map) &&
!lock_is_held(&rcu_bh_lock_map) &&
!lock_is_held(&rcu_lock_map) &&
!lock_is_held(&rcu_sched_lock_map),
"Illegal synchronize_srcu() in same-type SRCU (or RCU) read-side critical section");
idx = sp->completed; idx = sp->completed;
mutex_lock(&sp->mutex); mutex_lock(&sp->mutex);
...@@ -280,19 +286,26 @@ void synchronize_srcu(struct srcu_struct *sp) ...@@ -280,19 +286,26 @@ void synchronize_srcu(struct srcu_struct *sp)
EXPORT_SYMBOL_GPL(synchronize_srcu); EXPORT_SYMBOL_GPL(synchronize_srcu);
/** /**
* synchronize_srcu_expedited - like synchronize_srcu, but less patient * synchronize_srcu_expedited - Brute-force SRCU grace period
* @sp: srcu_struct with which to synchronize. * @sp: srcu_struct with which to synchronize.
* *
* Flip the completed counter, and wait for the old count to drain to zero. * Wait for an SRCU grace period to elapse, but use a "big hammer"
* As with classic RCU, the updater must use some separate means of * approach to force the grace period to end quickly. This consumes
* synchronizing concurrent updates. Can block; must be called from * significant time on all CPUs and is unfriendly to real-time workloads,
* process context. * so is thus not recommended for any sort of common-case code. In fact,
* if you are using synchronize_srcu_expedited() in a loop, please
* restructure your code to batch your updates, and then use a single
* synchronize_srcu() instead.
* *
* Note that it is illegal to call synchronize_srcu_expedited() * Note that it is illegal to call this function while holding any lock
* from the corresponding SRCU read-side critical section; doing so * that is acquired by a CPU-hotplug notifier. And yes, it is also illegal
* will result in deadlock. However, it is perfectly legal to call * to call this function from a CPU-hotplug notifier. Failing to observe
* synchronize_srcu_expedited() on one srcu_struct from some other * these restriction will result in deadlock. It is also illegal to call
* srcu_struct's read-side critical section. * synchronize_srcu_expedited() from the corresponding SRCU read-side
* critical section; doing so will result in deadlock. However, it is
* perfectly legal to call synchronize_srcu_expedited() on one srcu_struct
* from some other srcu_struct's read-side critical section, as long as
* the resulting graph of srcu_structs is acyclic.
*/ */
void synchronize_srcu_expedited(struct srcu_struct *sp) void synchronize_srcu_expedited(struct srcu_struct *sp)
{ {
......
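As a concrete illustration of the batching advice in the comment above: rather than paying for one expedited grace period per update, perform all the updates and then wait once. The update_one() helper and my_srcu domain below are hypothetical, and this assumes the individual updates do not each require their own grace period before the next may proceed.

        /* Heavy-handed: one expedited grace period per update. */
        for (i = 0; i < n; i++) {
                update_one(i);
                synchronize_srcu_expedited(&my_srcu);   /* disturbs every CPU, n times */
        }

        /* Friendlier: batch the updates, then wait once. */
        for (i = 0; i < n; i++)
                update_one(i);
        synchronize_srcu(&my_srcu);                     /* a single normal grace period */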
...@@ -927,6 +927,30 @@ config RCU_CPU_STALL_VERBOSE ...@@ -927,6 +927,30 @@ config RCU_CPU_STALL_VERBOSE
Say Y if you want to enable such checks. Say Y if you want to enable such checks.
config RCU_CPU_STALL_INFO
bool "Print additional diagnostics on RCU CPU stall"
depends on (TREE_RCU || TREE_PREEMPT_RCU) && DEBUG_KERNEL
default n
help
For each stalled CPU that is aware of the current RCU grace
period, print out additional per-CPU diagnostic information
regarding scheduling-clock ticks, idle state, and,
for RCU_FAST_NO_HZ kernels, idle-entry state.
Say N if you are unsure.
Say Y if you want to enable such diagnostics.
config RCU_TRACE
bool "Enable tracing for RCU"
depends on DEBUG_KERNEL
help
This option provides tracing in RCU which presents stats
in debugfs for debugging RCU implementation.
Say Y here if you want to enable RCU tracing
Say N if you are unsure.
config KPROBES_SANITY_TEST config KPROBES_SANITY_TEST
bool "Kprobes sanity tests" bool "Kprobes sanity tests"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
......
...@@ -1857,11 +1857,6 @@ static int cipso_v4_genopt(unsigned char *buf, u32 buf_len, ...@@ -1857,11 +1857,6 @@ static int cipso_v4_genopt(unsigned char *buf, u32 buf_len,
return CIPSO_V4_HDR_LEN + ret_val; return CIPSO_V4_HDR_LEN + ret_val;
} }
static void opt_kfree_rcu(struct rcu_head *head)
{
kfree(container_of(head, struct ip_options_rcu, rcu));
}
/** /**
* cipso_v4_sock_setattr - Add a CIPSO option to a socket * cipso_v4_sock_setattr - Add a CIPSO option to a socket
* @sk: the socket * @sk: the socket
...@@ -1938,7 +1933,7 @@ int cipso_v4_sock_setattr(struct sock *sk, ...@@ -1938,7 +1933,7 @@ int cipso_v4_sock_setattr(struct sock *sk,
} }
rcu_assign_pointer(sk_inet->inet_opt, opt); rcu_assign_pointer(sk_inet->inet_opt, opt);
if (old) if (old)
call_rcu(&old->rcu, opt_kfree_rcu); kfree_rcu(old, rcu);
return 0; return 0;
...@@ -2005,7 +2000,7 @@ int cipso_v4_req_setattr(struct request_sock *req, ...@@ -2005,7 +2000,7 @@ int cipso_v4_req_setattr(struct request_sock *req,
req_inet = inet_rsk(req); req_inet = inet_rsk(req);
opt = xchg(&req_inet->opt, opt); opt = xchg(&req_inet->opt, opt);
if (opt) if (opt)
call_rcu(&opt->rcu, opt_kfree_rcu); kfree_rcu(opt, rcu);
return 0; return 0;
...@@ -2075,7 +2070,7 @@ static int cipso_v4_delopt(struct ip_options_rcu **opt_ptr) ...@@ -2075,7 +2070,7 @@ static int cipso_v4_delopt(struct ip_options_rcu **opt_ptr)
* remove the entire option struct */ * remove the entire option struct */
*opt_ptr = NULL; *opt_ptr = NULL;
hdr_delta = opt->opt.optlen; hdr_delta = opt->opt.optlen;
call_rcu(&opt->rcu, opt_kfree_rcu); kfree_rcu(opt, rcu);
} }
return hdr_delta; return hdr_delta;
......
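The kfree_rcu() conversions here and in the two networking files that follow all apply the same mechanical transformation: a call_rcu() callback whose only job was kfree(container_of(...)) is dropped, and the call site instead passes the object pointer plus the name of its embedded rcu_head to kfree_rcu(). A minimal sketch of the pattern, using made-up names (struct foo, foo_replace):

        struct foo {
                int data;
                struct rcu_head rcu;
        };

        /* Old style: a callback that exists only to free the enclosing object. */
        static void foo_free_rcu(struct rcu_head *head)
        {
                kfree(container_of(head, struct foo, rcu));
        }
        /*      ...     call_rcu(&old->rcu, foo_free_rcu);     */

        /* New style: let the common code do the freeing. */
        static void foo_replace(struct foo __rcu **slot, struct foo *newp)
        {
                struct foo *old = rcu_dereference_protected(*slot, 1);

                rcu_assign_pointer(*slot, newp);
                if (old)
                        kfree_rcu(old, rcu);    /* freed after a grace period elapses */
        }

Besides being shorter, kfree_rcu() lets RCU treat such callbacks as "lazy", which ties into the qlen_lazy counter added to struct rcu_data elsewhere in this diff.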
...@@ -445,11 +445,6 @@ int ip_recv_error(struct sock *sk, struct msghdr *msg, int len) ...@@ -445,11 +445,6 @@ int ip_recv_error(struct sock *sk, struct msghdr *msg, int len)
} }
static void opt_kfree_rcu(struct rcu_head *head)
{
kfree(container_of(head, struct ip_options_rcu, rcu));
}
/* /*
* Socket option code for IP. This is the end of the line after any * Socket option code for IP. This is the end of the line after any
* TCP,UDP etc options on an IP socket. * TCP,UDP etc options on an IP socket.
...@@ -525,7 +520,7 @@ static int do_ip_setsockopt(struct sock *sk, int level, ...@@ -525,7 +520,7 @@ static int do_ip_setsockopt(struct sock *sk, int level,
} }
rcu_assign_pointer(inet->inet_opt, opt); rcu_assign_pointer(inet->inet_opt, opt);
if (old) if (old)
call_rcu(&old->rcu, opt_kfree_rcu); kfree_rcu(old, rcu);
break; break;
} }
case IP_PKTINFO: case IP_PKTINFO:
......
...@@ -413,12 +413,6 @@ struct mesh_path *mesh_path_lookup_by_idx(int idx, struct ieee80211_sub_if_data ...@@ -413,12 +413,6 @@ struct mesh_path *mesh_path_lookup_by_idx(int idx, struct ieee80211_sub_if_data
return NULL; return NULL;
} }
static void mesh_gate_node_reclaim(struct rcu_head *rp)
{
struct mpath_node *node = container_of(rp, struct mpath_node, rcu);
kfree(node);
}
/** /**
* mesh_path_add_gate - add the given mpath to a mesh gate to our path table * mesh_path_add_gate - add the given mpath to a mesh gate to our path table
* @mpath: gate path to add to table * @mpath: gate path to add to table
...@@ -479,7 +473,7 @@ static int mesh_gate_del(struct mesh_table *tbl, struct mesh_path *mpath) ...@@ -479,7 +473,7 @@ static int mesh_gate_del(struct mesh_table *tbl, struct mesh_path *mpath)
if (gate->mpath == mpath) { if (gate->mpath == mpath) {
spin_lock_bh(&tbl->gates_lock); spin_lock_bh(&tbl->gates_lock);
hlist_del_rcu(&gate->list); hlist_del_rcu(&gate->list);
call_rcu(&gate->rcu, mesh_gate_node_reclaim); kfree_rcu(gate, rcu);
spin_unlock_bh(&tbl->gates_lock); spin_unlock_bh(&tbl->gates_lock);
mpath->sdata->u.mesh.num_gates--; mpath->sdata->u.mesh.num_gates--;
mpath->is_gate = false; mpath->is_gate = false;
......