Commit f80fe66c authored by Paul E. McKenney

Merge branches 'doc.2021.11.30c', 'exp.2021.12.07a', 'fastnohz.2021.11.30c', 'fixes.2021.11.30c', 'nocb.2021.12.09a', 'nolibc.2021.11.30c', 'tasks.2021.12.09a', 'torture.2021.12.07a' and 'torturescript.2021.11.30c' into HEAD

doc.2021.11.30c: Documentation updates.
exp.2021.12.07a: Expedited-grace-period fixes.
fastnohz.2021.11.30c: Remove CONFIG_RCU_FAST_NO_HZ.
fixes.2021.11.30c: Miscellaneous fixes.
nocb.2021.12.09a: No-CB CPU updates.
nolibc.2021.11.30c: Tiny in-kernel library updates.
tasks.2021.12.09a: RCU-tasks updates, including update-side scalability.
torture.2021.12.07a: Torture-test in-kernel module updates.
torturescript.2021.11.30c: Torture-test scripting updates.
@@ -254,17 +254,6 @@ period (in this case 2603), the grace-period sequence number (7075), and
 an estimate of the total number of RCU callbacks queued across all CPUs
 (625 in this case).
 
-In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed
-for each CPU::
-
-	0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 dyntick_enabled: 1
-
-The "last_accelerate:" prints the low-order 16 bits (in hex) of the
-jiffies counter when this CPU last invoked rcu_try_advance_all_cbs()
-from rcu_needs_cpu() or last invoked rcu_accelerate_cbs() from
-rcu_prepare_for_idle().  "dyntick_enabled: 1" indicates that dyntick-idle
-processing is enabled.
-
 If the grace period ends just as the stall warning starts printing,
 there will be a spurious stall-warning message, which will include
 the following::
...
@@ -4343,19 +4343,30 @@
 			Disable the Correctable Errors Collector,
 			see CONFIG_RAS_CEC help text.
 
-	rcu_nocbs=	[KNL]
-			The argument is a cpu list, as described above.
-
-			In kernels built with CONFIG_RCU_NOCB_CPU=y, set
-			the specified list of CPUs to be no-callback CPUs.
-			Invocation of these CPUs' RCU callbacks will be
-			offloaded to "rcuox/N" kthreads created for that
-			purpose, where "x" is "p" for RCU-preempt, and
-			"s" for RCU-sched, and "N" is the CPU number.
-			This reduces OS jitter on the offloaded CPUs,
-			which can be useful for HPC and real-time
-			workloads.  It can also improve energy efficiency
-			for asymmetric multiprocessors.
+	rcu_nocbs[=cpu-list]
+			[KNL] The optional argument is a cpu list,
+			as described above.
+
+			In kernels built with CONFIG_RCU_NOCB_CPU=y,
+			enable the no-callback CPU mode, which prevents
+			such CPUs' callbacks from being invoked in
+			softirq context.  Invocation of such CPUs' RCU
+			callbacks will instead be offloaded to "rcuox/N"
+			kthreads created for that purpose, where "x" is
+			"p" for RCU-preempt, "s" for RCU-sched, and "g"
+			for the kthreads that mediate grace periods; and
+			"N" is the CPU number.  This reduces OS jitter on
+			the offloaded CPUs, which can be useful for HPC
+			and real-time workloads.  It can also improve
+			energy efficiency for asymmetric multiprocessors.
+
+			If a cpulist is passed as an argument, the specified
+			list of CPUs is set to no-callback mode from boot.
+
+			Otherwise, if the '=' sign and the cpulist
+			arguments are omitted, no CPU will be set to
+			no-callback mode from boot but the mode may be
+			toggled at runtime via cpusets.
 
 	rcu_nocb_poll	[KNL]
 			Rather than requiring that offloaded CPUs
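
The runtime toggling mentioned above is also reachable from kernel code. A minimal sketch, assuming CONFIG_RCU_NOCB_CPU=y and a CPU that was made offloadable at boot; rcu_nocb_cpu_offload() and rcu_nocb_cpu_deoffload() are the entry points added by this line of work, while the wrapper itself is hypothetical:

#include <linux/printk.h>
#include <linux/rcupdate.h>

/* Hypothetical helper: flip a boot-time-offloadable CPU between modes. */
static int toggle_rcu_nocb(int cpu, bool offload)
{
	/* Both calls return 0 on success or a negative errno. */
	int ret = offload ? rcu_nocb_cpu_offload(cpu)
			  : rcu_nocb_cpu_deoffload(cpu);

	if (ret)
		pr_warn("CPU %d: NOCB (de-)offload failed: %d\n", cpu, ret);
	return ret;
}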
@@ -4489,10 +4500,6 @@
 			on rcutree.qhimark at boot time and to zero to
 			disable more aggressive help enlistment.
 
-	rcutree.rcu_idle_gp_delay= [KNL]
-			Set wakeup interval for idle CPUs that have
-			RCU callbacks (RCU_FAST_NO_HZ=y).
-
 	rcutree.rcu_kick_kthreads= [KNL]
 			Cause the grace-period kthread to get an extra
 			wake_up() if it sleeps three times longer than
@@ -4603,8 +4610,12 @@
 			in seconds.
 
 	rcutorture.fwd_progress= [KNL]
-			Enable RCU grace-period forward-progress testing
+			Specifies the number of kthreads to be used
+			for RCU grace-period forward-progress testing
 			for the types of RCU supporting this notion.
+			Defaults to 1 kthread, values less than zero or
+			greater than the number of CPUs cause the number
+			of CPUs to be used.
 
 	rcutorture.fwd_progress_div= [KNL]
 			Specify the fraction of a CPU-stall-warning
@@ -4805,6 +4816,29 @@
 			period to instead use normal non-expedited
 			grace-period processing.
 
+	rcupdate.rcu_task_collapse_lim= [KNL]
+			Set the maximum number of callbacks present
+			at the beginning of a grace period that allows
+			the RCU Tasks flavors to collapse back to using
+			a single callback queue.  This switching only
+			occurs when rcupdate.rcu_task_enqueue_lim is
+			set to the default value of -1.
+
+	rcupdate.rcu_task_contend_lim= [KNL]
+			Set the minimum number of callback-queuing-time
+			lock-contention events per jiffy required to
+			cause the RCU Tasks flavors to switch to per-CPU
+			callback queuing.  This switching only occurs
+			when rcupdate.rcu_task_enqueue_lim is set to
+			the default value of -1.
+
+	rcupdate.rcu_task_enqueue_lim= [KNL]
+			Set the number of callback queues to use for the
+			RCU Tasks family of RCU flavors.  The default
+			of -1 allows this to be automatically (and
+			dynamically) adjusted.  This parameter is intended
+			for use in testing.
+
 	rcupdate.rcu_task_ipi_delay= [KNL]
 			Set time in jiffies during which RCU tasks will
 			avoid sending IPIs, starting with the beginning
...
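
For context on the three rcupdate.rcu_task_*_lim knobs above: they tune how the RCU Tasks update side queues callbacks. A hedged sketch of the API they govern; call_rcu_tasks() is the real interface (CONFIG_TASKS_RCU), while the structure and free routine are made up for illustration:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_obj {			/* hypothetical update-side structure */
	struct rcu_head rh;
	int data;
};

static void my_obj_free(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct my_obj, rh));
}

static void my_obj_retire(struct my_obj *p)
{
	/*
	 * Lands on the single queue, or on a per-CPU queue once
	 * contention pushes past rcupdate.rcu_task_contend_lim
	 * (with the default rcu_task_enqueue_lim=-1 auto mode).
	 */
	call_rcu_tasks(&p->rh, my_obj_free);
}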
@@ -184,16 +184,12 @@ There are situations in which idle CPUs cannot be permitted to
 enter either dyntick-idle mode or adaptive-tick mode, the most
 common being when that CPU has RCU callbacks pending.
 
-The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such CPUs
-to enter dyntick-idle mode or adaptive-tick mode anyway.  In this case,
-a timer will awaken these CPUs every four jiffies in order to ensure
-that the RCU callbacks are processed in a timely fashion.
-
-Another approach is to offload RCU callback processing to "rcuo" kthreads
+Avoid this by offloading RCU callback processing to "rcuo" kthreads
 using the CONFIG_RCU_NOCB_CPU=y Kconfig option.  The specific CPUs to
 offload may be selected using The "rcu_nocbs=" kernel boot parameter,
 which takes a comma-separated list of CPUs and CPU ranges, for example,
-"1,3-5" selects CPUs 1, 3, 4, and 5.
+"1,3-5" selects CPUs 1, 3, 4, and 5.  Note that CPUs specified by
+the "nohz_full" kernel boot parameter are also offloaded.
 
 The offloaded CPUs will never queue RCU callbacks, and therefore RCU
 never prevents offloaded CPUs from entering either dyntick-idle mode
...
@@ -69,7 +69,7 @@ struct rcu_cblist {
  *
  *
  * ----------------------------------------------------------------------------
- * |                          SEGCBLIST_SOFTIRQ_ONLY                          |
+ * |                            SEGCBLIST_RCU_CORE                            |
  * |                                                                          |
  * |  Callbacks processed by rcu_core() from softirqs or local                |
  * |  rcuc kthread, without holding nocb_lock.                                |
@@ -77,7 +77,7 @@ struct rcu_cblist {
  *                                       |
  *                                       v
  * ----------------------------------------------------------------------------
- * |                            SEGCBLIST_OFFLOADED                           |
+ * |       SEGCBLIST_RCU_CORE | SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED       |
  * |                                                                          |
  * |  Callbacks processed by rcu_core() from softirqs or local                |
  * |  rcuc kthread, while holding nocb_lock. Waking up CB and GP kthreads,    |
@@ -89,6 +89,8 @@ struct rcu_cblist {
  *                        |                                  |
  *                        v                                  v
  * ---------------------------------------  ----------------------------------|
+ * |  SEGCBLIST_RCU_CORE   |             |  |  SEGCBLIST_RCU_CORE   |         |
+ * |  SEGCBLIST_LOCKING    |             |  |  SEGCBLIST_LOCKING    |         |
  * |  SEGCBLIST_OFFLOADED  |             |  |  SEGCBLIST_OFFLOADED  |         |
  * |  SEGCBLIST_KTHREAD_CB               |  |  SEGCBLIST_KTHREAD_GP           |
  * |                                     |  |                                 |
@@ -104,9 +106,10 @@ struct rcu_cblist {
  *                                       |
  *                                       v
  * |--------------------------------------------------------------------------|
+ * |                            SEGCBLIST_LOCKING |                           |
  * |                          SEGCBLIST_OFFLOADED |                           |
- * |                         SEGCBLIST_KTHREAD_CB |                           |
- * |                          SEGCBLIST_KTHREAD_GP                            |
+ * |                         SEGCBLIST_KTHREAD_GP |                           |
+ * |                          SEGCBLIST_KTHREAD_CB                            |
  * |                                                                          |
  * |  Kthreads handle callbacks holding nocb_lock, local rcu_core() stops     |
  * |  handling callbacks. Enable bypass queueing.                             |
@@ -120,6 +123,7 @@ struct rcu_cblist {
  *
  *
  * |--------------------------------------------------------------------------|
+ * |                            SEGCBLIST_LOCKING |                           |
  * |                          SEGCBLIST_OFFLOADED |                           |
  * |                         SEGCBLIST_KTHREAD_CB |                           |
  * |                          SEGCBLIST_KTHREAD_GP                            |
@@ -130,6 +134,22 @@ struct rcu_cblist {
  *                                       |
  *                                       v
  * |--------------------------------------------------------------------------|
+ * |                           SEGCBLIST_RCU_CORE |                           |
+ * |                            SEGCBLIST_LOCKING |                           |
+ * |                          SEGCBLIST_OFFLOADED |                           |
+ * |                         SEGCBLIST_KTHREAD_CB |                           |
+ * |                          SEGCBLIST_KTHREAD_GP                            |
+ * |                                                                          |
+ * |  CB/GP kthreads handle callbacks holding nocb_lock, local rcu_core()     |
+ * |  handles callbacks concurrently. Bypass enqueue is enabled.              |
+ * |  Invoke RCU core so we make sure not to preempt it in the middle with    |
+ * |  leaving some urgent work unattended within a jiffy.                     |
+ * ----------------------------------------------------------------------------
+ *                                       |
+ *                                       v
+ * |--------------------------------------------------------------------------|
+ * |                           SEGCBLIST_RCU_CORE |                           |
+ * |                            SEGCBLIST_LOCKING |                           |
  * |                         SEGCBLIST_KTHREAD_CB |                           |
  * |                          SEGCBLIST_KTHREAD_GP                            |
  * |                                                                          |
@@ -143,7 +163,9 @@ struct rcu_cblist {
  *                        |                                  |
  *                        v                                  v
  * ---------------------------------------------------------------------------|
- * |                                     |                                    |
+ * |                        |            |  |                        |        |
+ * |  SEGCBLIST_RCU_CORE    |            |  |  SEGCBLIST_RCU_CORE    |        |
+ * |  SEGCBLIST_LOCKING     |            |  |  SEGCBLIST_LOCKING     |        |
  * |  SEGCBLIST_KTHREAD_CB               |  |  SEGCBLIST_KTHREAD_GP           |
  * |                                     |  |                                 |
  * |  GP kthread woke up and             |  |  CB kthread woke up and         |
@@ -159,7 +181,7 @@ struct rcu_cblist {
  *                                       |
  *                                       v
  * ----------------------------------------------------------------------------
- * |                                    0                                     |
+ * |                  SEGCBLIST_RCU_CORE | SEGCBLIST_LOCKING                  |
  * |                                                                          |
  * |  Callbacks processed by rcu_core() from softirqs or local                |
  * |  rcuc kthread, while holding nocb_lock. Forbid nocb_timer to be armed.   |
@@ -168,17 +190,18 @@ struct rcu_cblist {
  *                                       |
  *                                       v
  * ----------------------------------------------------------------------------
- * |                          SEGCBLIST_SOFTIRQ_ONLY                          |
+ * |                            SEGCBLIST_RCU_CORE                            |
  * |                                                                          |
  * |  Callbacks processed by rcu_core() from softirqs or local                |
  * |  rcuc kthread, without holding nocb_lock.                                |
 * ----------------------------------------------------------------------------
  */
 #define SEGCBLIST_ENABLED	BIT(0)
-#define SEGCBLIST_SOFTIRQ_ONLY	BIT(1)
-#define SEGCBLIST_KTHREAD_CB	BIT(2)
-#define SEGCBLIST_KTHREAD_GP	BIT(3)
-#define SEGCBLIST_OFFLOADED	BIT(4)
+#define SEGCBLIST_RCU_CORE	BIT(1)
+#define SEGCBLIST_LOCKING	BIT(2)
+#define SEGCBLIST_KTHREAD_CB	BIT(3)
+#define SEGCBLIST_KTHREAD_GP	BIT(4)
+#define SEGCBLIST_OFFLOADED	BIT(5)
 
 struct rcu_segcblist {
 	struct rcu_head *head;
...
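
The state machine above is encoded as bits in the structure's ->flags field, driven by small helpers along these lines (a sketch paraphrasing kernel/rcu/rcu_segcblist.h; treat the exact bodies as an approximation, not the verbatim source):

static inline void rcu_segcblist_set_flags(struct rcu_segcblist *rsclp,
					   int flags)
{
	rsclp->flags |= flags;	/* e.g. SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED */
}

static inline void rcu_segcblist_clear_flags(struct rcu_segcblist *rsclp,
					     int flags)
{
	rsclp->flags &= ~flags;
}

static inline bool rcu_segcblist_test_flags(struct rcu_segcblist *rsclp,
					    int flags)
{
	return READ_ONCE(rsclp->flags) & flags;
}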
@@ -364,6 +364,12 @@ static inline void rcu_preempt_sleep_check(void) { }
 #define rcu_check_sparse(p, space)
 #endif /* #else #ifdef __CHECKER__ */
 
+#define __unrcu_pointer(p, local)					\
+({									\
+	typeof(*p) *local = (typeof(*p) *__force)(p);			\
+	rcu_check_sparse(p, __rcu);					\
+	((typeof(*p) __force __kernel *)(local));			\
+})
 /**
  * unrcu_pointer - mark a pointer as not being RCU protected
  * @p: pointer needing to lose its __rcu property
@@ -371,39 +377,35 @@ static inline void rcu_preempt_sleep_check(void) { }
  * Converts @p from an __rcu pointer to a __kernel pointer.
  * This allows an __rcu pointer to be used with xchg() and friends.
  */
-#define unrcu_pointer(p)						\
-({									\
-	typeof(*p) *_________p1 = (typeof(*p) *__force)(p);		\
-	rcu_check_sparse(p, __rcu);					\
-	((typeof(*p) __force __kernel *)(_________p1));			\
-})
+#define unrcu_pointer(p) __unrcu_pointer(p, __UNIQUE_ID(rcu))
 
-#define __rcu_access_pointer(p, space) \
+#define __rcu_access_pointer(p, local, space) \
 ({ \
-	typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
+	typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
 	rcu_check_sparse(p, space); \
-	((typeof(*p) __force __kernel *)(_________p1)); \
+	((typeof(*p) __force __kernel *)(local)); \
 })
-#define __rcu_dereference_check(p, c, space) \
+#define __rcu_dereference_check(p, local, c, space) \
 ({ \
 	/* Dependency order vs. p above. */ \
-	typeof(*p) *________p1 = (typeof(*p) *__force)READ_ONCE(p); \
+	typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
 	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
 	rcu_check_sparse(p, space); \
-	((typeof(*p) __force __kernel *)(________p1)); \
+	((typeof(*p) __force __kernel *)(local)); \
 })
-#define __rcu_dereference_protected(p, c, space) \
+#define __rcu_dereference_protected(p, local, c, space) \
 ({ \
 	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_protected() usage"); \
 	rcu_check_sparse(p, space); \
 	((typeof(*p) __force __kernel *)(p)); \
 })
-#define rcu_dereference_raw(p) \
+#define __rcu_dereference_raw(p, local) \
 ({ \
 	/* Dependency order vs. p above. */ \
-	typeof(p) ________p1 = READ_ONCE(p); \
-	((typeof(*p) __force __kernel *)(________p1)); \
+	typeof(p) local = READ_ONCE(p); \
+	((typeof(*p) __force __kernel *)(local)); \
 })
+#define rcu_dereference_raw(p) __rcu_dereference_raw(p, __UNIQUE_ID(rcu))
 
 /**
  * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
@@ -490,7 +492,7 @@ do { \
  * when tearing down multi-linked structures after a grace period
  * has elapsed.
  */
-#define rcu_access_pointer(p) __rcu_access_pointer((p), __rcu)
+#define rcu_access_pointer(p) __rcu_access_pointer((p), __UNIQUE_ID(rcu), __rcu)
 
 /**
  * rcu_dereference_check() - rcu_dereference with debug checking
@@ -526,7 +528,8 @@ do { \
  * annotated as __rcu.
  */
 #define rcu_dereference_check(p, c) \
-	__rcu_dereference_check((p), (c) || rcu_read_lock_held(), __rcu)
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_held(), __rcu)
 
 /**
  * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
@@ -541,7 +544,8 @@ do { \
  * rcu_read_lock() but also rcu_read_lock_bh() into account.
  */
 #define rcu_dereference_bh_check(p, c) \
-	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_bh_held(), __rcu)
 
 /**
  * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
@@ -556,7 +560,8 @@ do { \
  * only rcu_read_lock() but also rcu_read_lock_sched() into account.
  */
 #define rcu_dereference_sched_check(p, c) \
-	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_sched_held(), \
 				__rcu)
 
 /*
@@ -566,7 +571,8 @@ do { \
  * The no-tracing version of rcu_dereference_raw() must not call
  * rcu_read_lock_held().
  */
-#define rcu_dereference_raw_check(p) __rcu_dereference_check((p), 1, __rcu)
+#define rcu_dereference_raw_check(p) \
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), 1, __rcu)
 
 /**
  * rcu_dereference_protected() - fetch RCU pointer when updates prevented
@@ -585,7 +591,7 @@ do { \
  * but very ugly failures.
  */
 #define rcu_dereference_protected(p, c) \
-	__rcu_dereference_protected((p), (c), __rcu)
+	__rcu_dereference_protected((p), __UNIQUE_ID(rcu), (c), __rcu)
 
 /**
...
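
Why thread a caller-supplied local through these macros? With a fixed temporary name such as _________p1, nesting one dereference inside another redeclares the same identifier in nested scopes and trips -Wshadow; __UNIQUE_ID(rcu) gives each expansion its own hidden variable. A sketch of the nesting this makes warning-free (struct foo/bar and the global are hypothetical):

#include <linux/rcupdate.h>

struct bar { int x; };
struct foo { struct bar __rcu *b; };
static struct foo __rcu *gf;	/* hypothetical RCU-protected global */

static int read_x(void)
{
	int x;

	rcu_read_lock();
	/* Two macro expansions, two distinct hidden temporaries. */
	x = rcu_dereference(rcu_dereference(gf)->b)->x;
	rcu_read_unlock();
	return x;
}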
@@ -85,7 +85,7 @@ static inline void rcu_irq_enter_irqson(void) { }
 static inline void rcu_irq_exit(void) { }
 static inline void rcu_irq_exit_check_preempt(void) { }
 #define rcu_is_idle_cpu(cpu) \
-	(is_idle_task(current) && !in_nmi() && !in_irq() && !in_serving_softirq())
+	(is_idle_task(current) && !in_nmi() && !in_hardirq() && !in_serving_softirq())
 static inline void exit_rcu(void) { }
 static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 {
...
@@ -117,7 +117,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * lockdep_is_held() calls.
  */
 #define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), (c) || srcu_read_lock_held(ssp), __rcu)
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || srcu_read_lock_held(ssp), __rcu)
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
...
@@ -38,13 +38,8 @@ do { \
 		pr_alert("%s" TORTURE_FLAG " %s\n", torture_type, s); \
 	} \
 } while (0)
-#define VERBOSE_TOROUT_ERRSTRING(s) \
-do { \
-	if (verbose) { \
-		verbose_torout_sleep(); \
-		pr_alert("%s" TORTURE_FLAG "!!! %s\n", torture_type, s); \
-	} \
-} while (0)
+#define TOROUT_ERRSTRING(s) \
+	pr_alert("%s" TORTURE_FLAG "!!! %s\n", torture_type, s)
 void verbose_torout_sleep(void);
 
 #define torture_init_error(firsterr) \
...
@@ -1047,7 +1047,7 @@ static int __init lock_torture_init(void)
 					  sizeof(writer_tasks[0]),
 					  GFP_KERNEL);
 		if (writer_tasks == NULL) {
-			VERBOSE_TOROUT_ERRSTRING("writer_tasks: Out of memory");
+			TOROUT_ERRSTRING("writer_tasks: Out of memory");
 			firsterr = -ENOMEM;
 			goto unwind;
 		}
@@ -1058,7 +1058,7 @@ static int __init lock_torture_init(void)
 					  sizeof(reader_tasks[0]),
 					  GFP_KERNEL);
 		if (reader_tasks == NULL) {
-			VERBOSE_TOROUT_ERRSTRING("reader_tasks: Out of memory");
+			TOROUT_ERRSTRING("reader_tasks: Out of memory");
 			kfree(writer_tasks);
 			writer_tasks = NULL;
 			firsterr = -ENOMEM;
...
@@ -112,7 +112,7 @@ config RCU_STALL_COMMON
 	  making these warnings mandatory for the tree variants.
 
 config RCU_NEED_SEGCBLIST
-	def_bool ( TREE_RCU || TREE_SRCU )
+	def_bool ( TREE_RCU || TREE_SRCU || TASKS_RCU_GENERIC )
 
 config RCU_FANOUT
 	int "Tree-based hierarchical RCU fanout value"
@@ -169,24 +169,6 @@ config RCU_FANOUT_LEAF
 	  Take the default if unsure.
 
-config RCU_FAST_NO_HZ
-	bool "Accelerate last non-dyntick-idle CPU's grace periods"
-	depends on NO_HZ_COMMON && SMP && RCU_EXPERT
-	default n
-	help
-	  This option permits CPUs to enter dynticks-idle state even if
-	  they have RCU callbacks queued, and prevents RCU from waking
-	  these CPUs up more than roughly once every four jiffies (by
-	  default, you can adjust this using the rcutree.rcu_idle_gp_delay
-	  parameter), thus improving energy efficiency.  On the other
-	  hand, this option increases the duration of RCU grace periods,
-	  for example, slowing down synchronize_rcu().
-
-	  Say Y if energy efficiency is critically important, and you
-	  don't care about increased grace-period durations.
-
-	  Say N if you are unsure.
-
 config RCU_BOOST
 	bool "Enable RCU priority boosting"
 	depends on (RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT) || PREEMPT_RT
...
@@ -261,16 +261,14 @@ void rcu_segcblist_disable(struct rcu_segcblist *rsclp)
 }
 
 /*
- * Mark the specified rcu_segcblist structure as offloaded.
+ * Mark the specified rcu_segcblist structure as offloaded (or not)
  */
 void rcu_segcblist_offload(struct rcu_segcblist *rsclp, bool offload)
 {
-	if (offload) {
-		rcu_segcblist_clear_flags(rsclp, SEGCBLIST_SOFTIRQ_ONLY);
-		rcu_segcblist_set_flags(rsclp, SEGCBLIST_OFFLOADED);
-	} else {
+	if (offload)
+		rcu_segcblist_set_flags(rsclp, SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED);
+	else
 		rcu_segcblist_clear_flags(rsclp, SEGCBLIST_OFFLOADED);
-	}
 }
 
 /*
...
@@ -80,11 +80,14 @@ static inline bool rcu_segcblist_is_enabled(struct rcu_segcblist *rsclp)
 	return rcu_segcblist_test_flags(rsclp, SEGCBLIST_ENABLED);
 }
 
-/* Is the specified rcu_segcblist offloaded, or is SEGCBLIST_SOFTIRQ_ONLY set? */
+/*
+ * Is the specified rcu_segcblist NOCB offloaded (or in the middle of the
+ * [de]offloading process)?
+ */
 static inline bool rcu_segcblist_is_offloaded(struct rcu_segcblist *rsclp)
 {
 	if (IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
-	    !rcu_segcblist_test_flags(rsclp, SEGCBLIST_SOFTIRQ_ONLY))
+	    rcu_segcblist_test_flags(rsclp, SEGCBLIST_LOCKING))
 		return true;
 
 	return false;
@@ -92,9 +95,8 @@ static inline bool rcu_segcblist_is_offloaded(struct rcu_segcblist *rsclp)
 
 static inline bool rcu_segcblist_completely_offloaded(struct rcu_segcblist *rsclp)
 {
-	int flags = SEGCBLIST_KTHREAD_CB | SEGCBLIST_KTHREAD_GP | SEGCBLIST_OFFLOADED;
-
-	if (IS_ENABLED(CONFIG_RCU_NOCB_CPU) && (rsclp->flags & flags) == flags)
+	if (IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
+	    !rcu_segcblist_test_flags(rsclp, SEGCBLIST_RCU_CORE))
 		return true;
 
 	return false;
...
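
A hedged sketch of how callers are expected to read the two predicates after this change (the helper below is hypothetical; compare rcu_core() in kernel/rcu/tree.c):

/* Hypothetical: should softirq-time rcu_core() touch this CPU's list? */
static bool core_handles_cbs(struct rcu_data *rdp)
{
	/* SEGCBLIST_RCU_CORE clear: rcuo kthreads own every callback. */
	if (rcu_segcblist_completely_offloaded(&rdp->cblist))
		return false;
	/*
	 * SEGCBLIST_LOCKING set: offloaded, or (de-)offloading in
	 * progress, so any access must hold nocb_lock.
	 */
	return true;
}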
@@ -50,8 +50,8 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");
 	pr_alert("%s" SCALE_FLAG " %s\n", scale_type, s)
 #define VERBOSE_SCALEOUT_STRING(s) \
 	do { if (verbose) pr_alert("%s" SCALE_FLAG " %s\n", scale_type, s); } while (0)
-#define VERBOSE_SCALEOUT_ERRSTRING(s) \
-	do { if (verbose) pr_alert("%s" SCALE_FLAG "!!! %s\n", scale_type, s); } while (0)
+#define SCALEOUT_ERRSTRING(s) \
+	pr_alert("%s" SCALE_FLAG "!!! %s\n", scale_type, s)
 
 /*
  * The intended use cases for the nreaders and nwriters module parameters
@@ -514,11 +514,11 @@ rcu_scale_cleanup(void)
 	 * during the mid-boot phase, so have to wait till the end.
 	 */
 	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
-		VERBOSE_SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
+		SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
 	if (rcu_gp_is_normal() && gp_exp)
-		VERBOSE_SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
+		SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
 	if (gp_exp && gp_async)
-		VERBOSE_SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
+		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
 
 	if (torture_cleanup_begin())
 		return;
@@ -845,7 +845,7 @@ rcu_scale_init(void)
 		reader_tasks = kcalloc(nrealreaders, sizeof(reader_tasks[0]),
 				       GFP_KERNEL);
 		if (reader_tasks == NULL) {
-			VERBOSE_SCALEOUT_ERRSTRING("out of memory");
+			SCALEOUT_ERRSTRING("out of memory");
 			firsterr = -ENOMEM;
 			goto unwind;
 		}
@@ -865,7 +865,7 @@ rcu_scale_init(void)
 			kcalloc(nrealwriters, sizeof(*writer_n_durations),
 				GFP_KERNEL);
 		if (!writer_tasks || !writer_durations || !writer_n_durations) {
-			VERBOSE_SCALEOUT_ERRSTRING("out of memory");
+			SCALEOUT_ERRSTRING("out of memory");
 			firsterr = -ENOMEM;
 			goto unwind;
 		}
...
@@ -44,7 +44,10 @@
 	pr_alert("%s" SCALE_FLAG s, scale_type, ## x)
 
 #define VERBOSE_SCALEOUT(s, x...) \
-	do { if (verbose) pr_alert("%s" SCALE_FLAG s, scale_type, ## x); } while (0)
+	do { \
+		if (verbose) \
+			pr_alert("%s" SCALE_FLAG s "\n", scale_type, ## x); \
+	} while (0)
 
 static atomic_t verbose_batch_ctr;
 
@@ -54,12 +57,11 @@ do { \
 	    (verbose_batched <= 0 || \
 	     !(atomic_inc_return(&verbose_batch_ctr) % verbose_batched))) { \
 		schedule_timeout_uninterruptible(1); \
-		pr_alert("%s" SCALE_FLAG s, scale_type, ## x); \
+		pr_alert("%s" SCALE_FLAG s "\n", scale_type, ## x); \
 	} \
 } while (0)
 
-#define VERBOSE_SCALEOUT_ERRSTRING(s, x...) \
-	do { if (verbose) pr_alert("%s" SCALE_FLAG "!!! " s, scale_type, ## x); } while (0)
+#define SCALEOUT_ERRSTRING(s, x...) pr_alert("%s" SCALE_FLAG "!!! " s "\n", scale_type, ## x)
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Joel Fernandes (Google) <joel@joelfernandes.org>");
@@ -604,7 +606,7 @@ static u64 process_durations(int n)
 	char *buf;
 	u64 sum = 0;
 
-	buf = kmalloc(128 + nreaders * 32, GFP_KERNEL);
+	buf = kmalloc(800 + 64, GFP_KERNEL);
 	if (!buf)
 		return 0;
 	buf[0] = 0;
@@ -617,13 +619,15 @@ static u64 process_durations(int n)
 		if (i % 5 == 0)
 			strcat(buf, "\n");
 
+		if (strlen(buf) >= 800) {
+			pr_alert("%s", buf);
+			buf[0] = 0;
+		}
 		strcat(buf, buf1);
 
 		sum += rt->last_duration_ns;
 	}
-	strcat(buf, "\n");
-
-	SCALEOUT("%s\n", buf);
+	pr_alert("%s\n", buf);
 
 	kfree(buf);
 	return sum;
@@ -637,7 +641,6 @@ static u64 process_durations(int n)
 // point all the timestamps are printed.
 static int main_func(void *arg)
 {
-	bool errexit = false;
 	int exp, r;
 	char buf1[64];
 	char *buf;
@@ -648,10 +651,10 @@ static int main_func(void *arg)
 	VERBOSE_SCALEOUT("main_func task started");
 	result_avg = kzalloc(nruns * sizeof(*result_avg), GFP_KERNEL);
-	buf = kzalloc(64 + nruns * 32, GFP_KERNEL);
+	buf = kzalloc(800 + 64, GFP_KERNEL);
 	if (!result_avg || !buf) {
-		VERBOSE_SCALEOUT_ERRSTRING("out of memory");
-		errexit = true;
+		SCALEOUT_ERRSTRING("out of memory");
+		goto oom_exit;
 	}
 	if (holdoff)
 		schedule_timeout_interruptible(holdoff * HZ);
@@ -663,8 +666,6 @@ static int main_func(void *arg)
 
 	// Start exp readers up per experiment
 	for (exp = 0; exp < nruns && !torture_must_stop(); exp++) {
-		if (errexit)
-			break;
 		if (torture_must_stop())
 			goto end;
 
@@ -698,26 +699,23 @@ static int main_func(void *arg)
 
 	// Print the average of all experiments
 	SCALEOUT("END OF TEST. Calculating average duration per loop (nanoseconds)...\n");
 
-	if (!errexit) {
-		buf[0] = 0;
-		strcat(buf, "\n");
-		strcat(buf, "Runs\tTime(ns)\n");
-	}
+	pr_alert("Runs\tTime(ns)\n");
 
 	for (exp = 0; exp < nruns; exp++) {
 		u64 avg;
 		u32 rem;
 
-		if (errexit)
-			break;
 		avg = div_u64_rem(result_avg[exp], 1000, &rem);
 		sprintf(buf1, "%d\t%llu.%03u\n", exp + 1, avg, rem);
 		strcat(buf, buf1);
+		if (strlen(buf) >= 800) {
+			pr_alert("%s", buf);
+			buf[0] = 0;
+		}
 	}
 
-	if (!errexit)
-		SCALEOUT("%s", buf);
+	pr_alert("%s", buf);
 
+oom_exit:
 	// This will shutdown everything including us.
 	if (shutdown) {
 		shutdown_start = 1;
@@ -841,12 +839,12 @@ ref_scale_init(void)
 		reader_tasks = kcalloc(nreaders, sizeof(reader_tasks[0]),
 				       GFP_KERNEL);
 		if (!reader_tasks) {
-			VERBOSE_SCALEOUT_ERRSTRING("out of memory");
+			SCALEOUT_ERRSTRING("out of memory");
 			firsterr = -ENOMEM;
 			goto unwind;
 		}
 
-	VERBOSE_SCALEOUT("Starting %d reader threads\n", nreaders);
+	VERBOSE_SCALEOUT("Starting %d reader threads", nreaders);
 
 	for (i = 0; i < nreaders; i++) {
 		firsterr = torture_create_kthread(ref_scale_reader, (void *)i,
...
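
The change above swaps "size the buffer to the worst case" for "flush whenever the buffer passes a watermark", bounding the allocation no matter how many readers or runs there are. A self-contained sketch of that pattern under the same assumptions (emit_stat() is a hypothetical per-item statistic):

#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/string.h>

extern int emit_stat(int i);	/* hypothetical per-item statistic */

#define BUFMARK 800		/* flush watermark, mirroring the patch */

static void emit_stats(int n)
{
	char item[64];
	char *buf = kzalloc(BUFMARK + sizeof(item), GFP_KERNEL);
	int i;

	if (!buf)
		return;
	for (i = 0; i < n; i++) {
		snprintf(item, sizeof(item), "%d\t%d\n", i, emit_stat(i));
		if (strlen(buf) >= BUFMARK) {	/* flush before it can overflow */
			pr_alert("%s", buf);
			buf[0] = '\0';
		}
		strcat(buf, item);
	}
	pr_alert("%s", buf);	/* flush the tail */
	kfree(buf);
}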
@@ -99,7 +99,7 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 	int newval = READ_ONCE(ssp->srcu_lock_nesting[idx]) - 1;
 
 	WRITE_ONCE(ssp->srcu_lock_nesting[idx], newval);
-	if (!newval && READ_ONCE(ssp->srcu_gp_waiting))
+	if (!newval && READ_ONCE(ssp->srcu_gp_waiting) && in_task())
 		swake_up_one(&ssp->srcu_wq);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
...
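
The in_task() guard above keeps swake_up_one() out of interrupt-entered unlocks; the expectation, hedged, is that on Tiny SRCU the unlock that actually ends the last read-side critical section runs in task context and still performs the wakeup. A reader-side sketch using the standard API (my_srcu and my_ptr are hypothetical):

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(my_srcu);
static int __rcu *my_ptr;	/* hypothetical SRCU-protected pointer */

static int read_val(void)
{
	int idx, val = 0;
	int *p;

	idx = srcu_read_lock(&my_srcu);
	p = srcu_dereference(my_ptr, &my_srcu);
	if (p)
		val = *p;
	/* With the fix, this wakes the GP waiter only from task context. */
	srcu_read_unlock(&my_srcu, idx);
	return val;
}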
@@ -157,7 +157,6 @@ struct rcu_data {
 	bool		core_needs_qs;	/* Core waits for quiescent state. */
 	bool		beenonline;	/* CPU online at least once. */
 	bool		gpwrap;		/* Possible ->gp_seq wrap. */
-	bool		exp_deferred_qs; /* This CPU awaiting a deferred QS? */
 	bool		cpu_started;	/* RCU watching this onlining CPU. */
 	struct rcu_node *mynode;	/* This CPU's leaf of hierarchy */
 	unsigned long grpmask;		/* Mask to apply to leaf qsmask. */
@@ -189,11 +188,6 @@ struct rcu_data {
 	bool rcu_urgent_qs;		/* GP old need light quiescent state. */
 	bool rcu_forced_tick;		/* Forced tick to provide QS. */
 	bool rcu_forced_tick_exp;	/*   ... provide QS to expedited GP. */
-#ifdef CONFIG_RCU_FAST_NO_HZ
-	unsigned long last_accelerate;	/* Last jiffy CBs were accelerated. */
-	unsigned long last_advance_all;	/* Last jiffy CBs were all advanced. */
-	int tick_nohz_enabled_snap;	/* Previously seen value from sysfs. */
-#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
 
 	/* 4) rcu_barrier(), OOM callbacks, and expediting. */
 	struct rcu_head barrier_head;
@@ -227,8 +221,11 @@ struct rcu_data {
 	struct swait_queue_head nocb_gp_wq; /* For nocb kthreads to sleep on. */
 	bool nocb_cb_sleep;		/* Is the nocb CB thread asleep? */
 	struct task_struct *nocb_cb_kthread;
-	struct rcu_data *nocb_next_cb_rdp;
-					/* Next rcu_data in wakeup chain. */
+	struct list_head nocb_head_rdp; /*
+					 * Head of rcu_data list in wakeup chain,
+					 * if rdp_gp.
+					 */
+	struct list_head nocb_entry_rdp; /* rcu_data node in wakeup chain. */
 
 	/* The following fields are used by CB kthread, hence new cacheline. */
 	struct rcu_data *nocb_gp_rdp ____cacheline_internodealigned_in_smp;
@@ -419,8 +416,6 @@ static bool rcu_is_callbacks_kthread(void);
 static void rcu_cpu_kthread_setup(unsigned int cpu);
 static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp);
 static void __init rcu_spawn_boost_kthreads(void);
-static void rcu_cleanup_after_idle(void);
-static void rcu_prepare_for_idle(void);
 static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
 static bool rcu_preempt_need_deferred_qs(struct task_struct *t);
 static void rcu_preempt_deferred_qs(struct task_struct *t);
@@ -447,12 +442,16 @@ static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
 static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp);
 #ifdef CONFIG_RCU_NOCB_CPU
 static void __init rcu_organize_nocb_kthreads(void);
+
+/*
+ * Disable IRQs before checking offloaded state so that local
+ * locking is safe against concurrent de-offloading.
+ */
 #define rcu_nocb_lock_irqsave(rdp, flags) \
 do { \
-	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist)) \
 		local_irq_save(flags); \
-	else \
-		raw_spin_lock_irqsave(&(rdp)->nocb_lock, (flags)); \
+	if (rcu_segcblist_is_offloaded(&(rdp)->cblist)) \
+		raw_spin_lock(&(rdp)->nocb_lock); \
 } while (0)
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 #define rcu_nocb_lock_irqsave(rdp, flags) local_irq_save(flags)
...
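
A hedged usage sketch of the reworked macro and its unlock counterpart (rcu_nocb_unlock_irqrestore() is declared in the context above; the wrapper itself is hypothetical). IRQs are now disabled before the offloaded check, so a concurrent de-offload cannot invalidate the locking decision:

static void my_enqueue_cb(struct rcu_data *rdp, struct rcu_head *rhp)
{
	unsigned long flags;

	rcu_nocb_lock_irqsave(rdp, flags);
	/*
	 * ->cblist is stable here: IRQs are off, and nocb_lock is held
	 * whenever the segcblist is (being) offloaded.
	 */
	rcu_segcblist_enqueue(&rdp->cblist, rhp);
	rcu_nocb_unlock_irqrestore(rdp, flags);
}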
@@ -255,7 +255,7 @@ static void rcu_report_exp_cpu_mult(struct rcu_node *rnp,
  */
 static void rcu_report_exp_rdp(struct rcu_data *rdp)
 {
-	WRITE_ONCE(rdp->exp_deferred_qs, false);
+	WRITE_ONCE(rdp->cpu_no_qs.b.exp, false);
 	rcu_report_exp_cpu_mult(rdp->mynode, rdp->grpmask, true);
 }
 
@@ -387,6 +387,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
 			continue;
 		}
 		if (get_cpu() == cpu) {
+			mask_ofl_test |= mask;
 			put_cpu();
 			continue;
 		}
@@ -506,7 +507,10 @@ static void synchronize_rcu_expedited_wait(void)
 				if (rdp->rcu_forced_tick_exp)
 					continue;
 				rdp->rcu_forced_tick_exp = true;
+				preempt_disable();
+				if (cpu_online(cpu))
 					tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
+				preempt_enable();
 			}
 		}
 		j = READ_ONCE(jiffies_till_first_fqs);
@@ -655,7 +659,7 @@ static void rcu_exp_handler(void *unused)
 	    rcu_dynticks_curr_cpu_in_eqs()) {
 		rcu_report_exp_rdp(rdp);
 	} else {
-		rdp->exp_deferred_qs = true;
+		WRITE_ONCE(rdp->cpu_no_qs.b.exp, true);
 		set_tsk_need_resched(t);
 		set_preempt_need_resched();
 	}
@@ -677,7 +681,7 @@ static void rcu_exp_handler(void *unused)
 	if (depth > 0) {
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 		if (rnp->expmask & rdp->grpmask) {
-			rdp->exp_deferred_qs = true;
+			WRITE_ONCE(rdp->cpu_no_qs.b.exp, true);
 			t->rcu_read_unlock_special.b.exp_hint = true;
 		}
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
@@ -759,7 +763,7 @@ static void sync_sched_exp_online_cleanup(int cpu)
 	my_cpu = get_cpu();
 	/* Quiescent state either not needed or already requested, leave. */
 	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
-	    rdp->cpu_no_qs.b.exp) {
+	    READ_ONCE(rdp->cpu_no_qs.b.exp)) {
 		put_cpu();
 		return;
 	}
...
@@ -347,26 +347,6 @@ static void rcu_dump_cpu_stacks(void)
 	}
 }
 
-#ifdef CONFIG_RCU_FAST_NO_HZ
-
-static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
-{
-	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-
-	sprintf(cp, "last_accelerate: %04lx/%04lx dyntick_enabled: %d",
-		rdp->last_accelerate & 0xffff, jiffies & 0xffff,
-		!!rdp->tick_nohz_enabled_snap);
-}
-
-#else /* #ifdef CONFIG_RCU_FAST_NO_HZ */
-
-static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
-{
-	*cp = '\0';
-}
-
-#endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */
-
 static const char * const gp_state_names[] = {
 	[RCU_GP_IDLE] = "RCU_GP_IDLE",
 	[RCU_GP_WAIT_GPS] = "RCU_GP_WAIT_GPS",
@@ -408,13 +388,12 @@ static bool rcu_is_gp_kthread_starving(unsigned long *jp)
  * of RCU grace periods that this CPU is ignorant of, for example, "1"
  * if the CPU was aware of the previous grace period.
  *
- * Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info.
+ * Also print out idle info.
  */
 static void print_cpu_stall_info(int cpu)
 {
 	unsigned long delta;
 	bool falsepositive;
-	char fast_no_hz[72];
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	char *ticks_title;
 	unsigned long ticks_value;
@@ -432,11 +411,10 @@ static void print_cpu_stall_info(int cpu)
 		ticks_title = "ticks this GP";
 		ticks_value = rdp->ticks_this_gp;
 	}
-	print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
 	delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
 	falsepositive = rcu_is_gp_kthread_starving(NULL) &&
 			rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp));
-	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s%s\n",
+	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
 	       cpu,
 	       "O."[!!cpu_online(cpu)],
 	       "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
@@ -449,7 +427,6 @@ static void print_cpu_stall_info(int cpu)
 	       rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
 	       rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
 	       data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
-	       fast_no_hz,
 	       falsepositive ? " (false positive?)" : "");
 }
...
@@ -38,14 +38,10 @@
 #define SCFTORT_STRING "scftorture"
 #define SCFTORT_FLAG SCFTORT_STRING ": "
 
-#define SCFTORTOUT(s, x...) \
-	pr_alert(SCFTORT_FLAG s, ## x)
-
 #define VERBOSE_SCFTORTOUT(s, x...) \
-	do { if (verbose) pr_alert(SCFTORT_FLAG s, ## x); } while (0)
+	do { if (verbose) pr_alert(SCFTORT_FLAG s "\n", ## x); } while (0)
 
-#define VERBOSE_SCFTORTOUT_ERRSTRING(s, x...) \
-	do { if (verbose) pr_alert(SCFTORT_FLAG "!!! " s, ## x); } while (0)
+#define SCFTORTOUT_ERRSTRING(s, x...) pr_alert(SCFTORT_FLAG "!!! " s "\n", ## x)
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Paul E. McKenney <paulmck@kernel.org>");
@@ -587,14 +583,14 @@ static int __init scf_torture_init(void)
 	if (weight_resched1 == 0 && weight_single1 == 0 && weight_single_rpc1 == 0 &&
 	    weight_single_wait1 == 0 && weight_many1 == 0 && weight_many_wait1 == 0 &&
 	    weight_all1 == 0 && weight_all_wait1 == 0) {
-		VERBOSE_SCFTORTOUT_ERRSTRING("all zero weights makes no sense");
+		SCFTORTOUT_ERRSTRING("all zero weights makes no sense");
 		firsterr = -EINVAL;
 		goto unwind;
 	}
 	if (IS_BUILTIN(CONFIG_SCF_TORTURE_TEST))
 		scf_sel_add(weight_resched1, SCF_PRIM_RESCHED, false);
 	else if (weight_resched1)
-		VERBOSE_SCFTORTOUT_ERRSTRING("built as module, weight_resched ignored");
+		SCFTORTOUT_ERRSTRING("built as module, weight_resched ignored");
 	scf_sel_add(weight_single1, SCF_PRIM_SINGLE, false);
 	scf_sel_add(weight_single_rpc1, SCF_PRIM_SINGLE_RPC, true);
 	scf_sel_add(weight_single_wait1, SCF_PRIM_SINGLE, true);
@@ -625,12 +621,12 @@ static int __init scf_torture_init(void)
 		nthreads = num_online_cpus();
 	scf_stats_p = kcalloc(nthreads, sizeof(scf_stats_p[0]), GFP_KERNEL);
 	if (!scf_stats_p) {
-		VERBOSE_SCFTORTOUT_ERRSTRING("out of memory");
+		SCFTORTOUT_ERRSTRING("out of memory");
 		firsterr = -ENOMEM;
 		goto unwind;
 	}
 
-	VERBOSE_SCFTORTOUT("Starting %d smp_call_function() threads\n", nthreads);
+	VERBOSE_SCFTORTOUT("Starting %d smp_call_function() threads", nthreads);
 
 	atomic_set(&n_started, nthreads);
 	for (i = 0; i < nthreads; i++) {
...
@@ -570,7 +570,7 @@ int torture_shuffle_init(long shuffint)
 
 	shuffle_idle_cpu = -1;
 	if (!alloc_cpumask_var(&shuffle_tmp_mask, GFP_KERNEL)) {
-		VERBOSE_TOROUT_ERRSTRING("Failed to alloc mask");
+		TOROUT_ERRSTRING("Failed to alloc mask");
 		return -ENOMEM;
 	}
 
@@ -934,7 +934,7 @@ int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
 	*tp = kthread_run(fn, arg, "%s", s);
 	if (IS_ERR(*tp)) {
 		ret = PTR_ERR(*tp);
-		VERBOSE_TOROUT_ERRSTRING(f);
+		TOROUT_ERRSTRING(f);
 		*tp = NULL;
 	}
 	torture_shuffle_task_register(*tp);
...
@@ -30,9 +30,9 @@ editor=${EDITOR-vi}
 files=
 for i in ${rundir}/*/Make.out
 do
-	if egrep -q "error:|warning:" < $i
+	if egrep -q "error:|warning:|^ld: .*undefined reference to" < $i
 	then
-		egrep "error:|warning:" < $i > $i.diags
+		egrep "error:|warning:|^ld: .*undefined reference to" < $i > $i.diags
 		files="$files $i.diags $i"
 	fi
 done
...
@@ -25,7 +25,7 @@ stopstate="`grep 'End-test grace-period state: g' $i/console.log 2> /dev/null |
 	    tail -1 | sed -e 's/^\[[ 0-9.]*] //' |
 	    awk '{ print \"[\" $1 \" \" $5 \" \" $6 \" \" $7 \"]\"; }' |
 	    tr -d '\012\015'`"
-fwdprog="`grep 'rcu_torture_fwd_prog_cr Duration' $i/console.log 2> /dev/null | sed -e 's/^\[[^]]*] //' | sort -k15nr | head -1 | awk '{ print $14 " " $15 }'`"
+fwdprog="`grep 'rcu_torture_fwd_prog n_max_cbs: ' $i/console.log 2> /dev/null | sed -e 's/^\[[^]]*] //' | sort -k3nr | head -1 | awk '{ print $2 " " $3 }'`"
 if test -z "$ngps"
 then
 	echo "$configfile ------- " $stopstate
...
@@ -144,7 +144,7 @@ do
 	if test "$ret" -ne 0
 	then
 		echo System $i unreachable, giving up. | tee -a "$oldrun/remote-log"
-		exit 4 | tee -a "$oldrun/remote-log"
+		exit 4
 	fi
 done
 
@@ -156,9 +156,16 @@ do
 	cat $T/binres.tgz | ssh $i "cd /tmp; tar -xzf -"
 	ret=$?
 	if test "$ret" -ne 0
+	then
+		echo Unable to download $T/binres.tgz to system $i, waiting and then retrying. | tee -a "$oldrun/remote-log"
+		sleep 60
+		cat $T/binres.tgz | ssh $i "cd /tmp; tar -xzf -"
+		ret=$?
+		if test "$ret" -ne 0
 		then
 			echo Unable to download $T/binres.tgz to system $i, giving up. | tee -a "$oldrun/remote-log"
-			exit 10 | tee -a "$oldrun/remote-log"
+			exit 10
+		fi
 	fi
 done
 
@@ -177,16 +184,16 @@ checkremotefile () {
 		ret=$?
 		if test "$ret" -eq 255
 		then
-			echo " ---" ssh failure to $1 checking for file $2, retry after $sleeptime seconds. `date`
+			echo " ---" ssh failure to $1 checking for file $2, retry after $sleeptime seconds. `date` | tee -a "$oldrun/remote-log"
		elif test "$ret" -eq 0
 		then
 			return 0
 		elif test "$ret" -eq 1
 		then
-			echo " ---" File \"$2\" not found: ssh $1 test -f \"$2\"
+			echo " ---" File \"$2\" not found: ssh $1 test -f \"$2\" | tee -a "$oldrun/remote-log"
 			return 1
 		else
-			echo " ---" Exit code $ret: ssh $1 test -f \"$2\", retry after $sleeptime seconds. `date`
+			echo " ---" Exit code $ret: ssh $1 test -f \"$2\", retry after $sleeptime seconds. `date` | tee -a "$oldrun/remote-log"
 			return $ret
 		fi
 		sleep $sleeptime
@@ -245,7 +252,7 @@ do
 		sleep 30
 	fi
 done
-echo All batches started. `date`
+echo All batches started. `date` | tee -a "$oldrun/remote-log"
 
 # Wait for all remaining scenarios to complete and collect results.
 for i in $systems
@@ -254,7 +261,7 @@ do
 	do
 		sleep 30
 	done
-	echo " ---" Collecting results from $i `date`
+	echo " ---" Collecting results from $i `date` | tee -a "$oldrun/remote-log"
 	( cd "$oldrun"; ssh $i "cd $rundir; tar -czf - kvm-remote-*.sh.out */console.log */kvm-test-1-run*.sh.out */qemu[_-]pid */qemu-retval */qemu-affinity; rm -rf $T > /dev/null 2>&1" | tar -xzf - )
 done
...
...@@ -74,7 +74,9 @@ usage () { ...@@ -74,7 +74,9 @@ usage () {
echo " --help" echo " --help"
echo " --interactive" echo " --interactive"
echo " --jitter N [ maxsleep (us) [ maxspin (us) ] ]" echo " --jitter N [ maxsleep (us) [ maxspin (us) ] ]"
echo " --kasan"
echo " --kconfig Kconfig-options" echo " --kconfig Kconfig-options"
echo " --kcsan"
echo " --kmake-arg kernel-make-arguments" echo " --kmake-arg kernel-make-arguments"
echo " --mac nn:nn:nn:nn:nn:nn" echo " --mac nn:nn:nn:nn:nn:nn"
echo " --memory megabytes|nnnG" echo " --memory megabytes|nnnG"
...@@ -83,6 +85,7 @@ usage () { ...@@ -83,6 +85,7 @@ usage () {
echo " --qemu-cmd qemu-system-..." echo " --qemu-cmd qemu-system-..."
echo " --remote" echo " --remote"
echo " --results absolute-pathname" echo " --results absolute-pathname"
echo " --shutdown-grace seconds"
echo " --torture lock|rcu|rcuscale|refscale|scf" echo " --torture lock|rcu|rcuscale|refscale|scf"
echo " --trust-make" echo " --trust-make"
exit 1 exit 1
@@ -175,14 +178,14 @@ do
 		jitter="$2"
 		shift
 		;;
+	--kasan)
+		TORTURE_KCONFIG_KASAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
+		;;
 	--kconfig|--kconfigs)
 		checkarg --kconfig "(Kconfig options)" $# "$2" '^CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\)\( CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\)\)*$' '^error$'
 		TORTURE_KCONFIG_ARG="`echo "$TORTURE_KCONFIG_ARG $2" | sed -e 's/^ *//' -e 's/ *$//'`"
 		shift
 		;;
-	--kasan)
-		TORTURE_KCONFIG_KASAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
-		;;
 	--kcsan)
 		TORTURE_KCONFIG_KCSAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KCSAN=y CONFIG_KCSAN_STRICT=y CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"; export TORTURE_KCONFIG_KCSAN_ARG
 		;;
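As the hunks above show, --kasan and --kcsan are shorthands that pre-load TORTURE_KCONFIG_KASAN_ARG and TORTURE_KCONFIG_KCSAN_ARG with the corresponding debug Kconfig options (the move of --kasan simply restores alphabetical order). An illustrative invocation, with scenario selection left at kvm.sh defaults and an arbitrary extra Kconfig option chosen only for the example; a --kcsan run would normally be done separately, since each flag implies its own debug configuration::

	tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcu \
		--kasan --kconfig "CONFIG_RCU_EQS_DEBUG=y" --trust-make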
...
@@ -39,7 +39,8 @@ fi
 grep warning: < $F > $T/warnings
 grep "include/linux/*rcu*\.h:" $T/warnings > $T/hwarnings
 grep "kernel/rcu/[^/]*:" $T/warnings > $T/cwarnings
-cat $T/hwarnings $T/cwarnings > $T/rcuwarnings
+grep "^ld: .*undefined reference to" $T/warnings | head -1 > $T/ldwarnings
+cat $T/hwarnings $T/cwarnings $T/ldwarnings > $T/rcuwarnings
 if test -s $T/rcuwarnings
 then
 	print_warning $title build errors:
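The new grep means that a build whose only failure happens at link time is no longer reported as warning-free: the first "undefined reference" line is folded into $T/rcuwarnings. A self-contained sketch with fabricated input lines::

	#!/bin/sh
	T=$(mktemp -d)
	# Two hypothetical linker complaints; only the first survives "head -1".
	echo "ld: update.o: undefined reference to \`hypothetical_func'" > $T/warnings
	echo "ld: tree.o: undefined reference to \`hypothetical_func'" >> $T/warnings
	grep "^ld: .*undefined reference to" $T/warnings | head -1
	rm -rf $T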
...
 rcutorture.torture_type=tasks
 rcutree.use_softirq=0
+rcupdate.rcu_task_enqueue_lim=4
...
 rcutorture.torture_type=tasks-tracing
+rcupdate.rcu_task_enqueue_lim=2
...
 rcutorture.torture_type=tasks-tracing
+rcutorture.fwd_progress=2
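These .boot files are plain whitespace-separated kernel command-line fragments, one per scenario. Here rcupdate.rcu_task_enqueue_lim pins the number of RCU Tasks callback queues (exercising this series' update-side scalability work), and rcutorture.fwd_progress now takes a count of forward-progress kthreads rather than acting as a simple on/off switch. The values above are scenario-specific; a hand-assembled command line combining them would read, illustratively::

	rcutorture.torture_type=tasks-tracing rcupdate.rcu_task_enqueue_lim=2 rcutorture.fwd_progress=2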
@@ -6,7 +6,6 @@ CONFIG_PREEMPT=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=y
 CONFIG_RCU_TRACE=y
 CONFIG_HOTPLUG_CPU=y
 CONFIG_MAXSMP=y
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_TRACE=n
 CONFIG_RCU_FANOUT=3
 CONFIG_RCU_FANOUT_LEAF=3
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=n
 CONFIG_NO_HZ_FULL=y
-CONFIG_RCU_FAST_NO_HZ=y
 CONFIG_RCU_TRACE=y
 CONFIG_RCU_FANOUT=4
 CONFIG_RCU_FANOUT_LEAF=3
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_TRACE=n
 CONFIG_HOTPLUG_CPU=y
 CONFIG_RCU_FANOUT=6
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_TRACE=n
 CONFIG_RCU_FANOUT=6
 CONFIG_RCU_FANOUT_LEAF=6
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=n
 CONFIG_NO_HZ_FULL=y
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_TRACE=y
 CONFIG_HOTPLUG_CPU=y
 CONFIG_RCU_FANOUT=2
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_TRACE=n
 CONFIG_RCU_FANOUT=3
 CONFIG_RCU_FANOUT_LEAF=2
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_TRACE=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT_DYNAMIC=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_PROVE_LOCKING=n
...
@@ -5,7 +5,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_PROVE_LOCKING=n
...
@@ -6,7 +6,6 @@ CONFIG_PREEMPT=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_HOTPLUG_CPU=y
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
...
@@ -7,7 +7,6 @@ CONFIG_PREEMPT=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_HOTPLUG_CPU=y
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
...
@@ -6,7 +6,6 @@ CONFIG_PREEMPT=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_HOTPLUG_CPU=y
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
...
@@ -6,7 +6,6 @@ CONFIG_PREEMPT=y
 CONFIG_HZ_PERIODIC=n
 CONFIG_NO_HZ_IDLE=y
 CONFIG_NO_HZ_FULL=n
-CONFIG_RCU_FAST_NO_HZ=n
 CONFIG_HOTPLUG_CPU=y
 CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
...
@@ -15,7 +15,6 @@ CONFIG_PROVE_RCU -- Hardwired to CONFIG_PROVE_LOCKING.
 CONFIG_RCU_BOOST -- one of PREEMPT_RCU.
 CONFIG_RCU_FANOUT -- Cover hierarchy, but overlap with others.
 CONFIG_RCU_FANOUT_LEAF -- Do one non-default.
-CONFIG_RCU_FAST_NO_HZ -- Do one, but not with all nohz_full CPUs.
 CONFIG_RCU_NOCB_CPU -- Do three, one with no rcu_nocbs CPUs, one with
 	rcu_nocbs=0, and one with all rcu_nocbs CPUs.
 CONFIG_RCU_TRACE -- Do half.
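With CONFIG_RCU_FAST_NO_HZ removed from the kernel, the fragments above drop every reference to it, and its line leaves this test-coverage list as well. A quick tree-wide check (illustrative only) confirms that no scenario still selects the removed option::

	# Expect no output once all fragments are updated.
	grep -r RCU_FAST_NO_HZ tools/testing/selftests/rcutorture/configs/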