Commit b36c830f authored by Ingo Molnar

Merge branch 'for-mingo' of...

Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull v5.10 RCU changes from Paul E. McKenney:

- Debugging for smp_call_function().

- Strict grace periods for KASAN.  The point of this series is to find
  RCU-usage bugs, so the corresponding new RCU_STRICT_GRACE_PERIOD
  Kconfig option depends on both DEBUG_KERNEL and RCU_EXPERT, and is
  further disabled by default.  Finally, the help text includes
  a goodly list of scary caveats.

- New smp_call_function() torture test.

- Torture-test updates.

- Documentation updates.

- Miscellaneous fixes.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 583090b1 6fe208f6
@@ -963,7 +963,7 @@ exit and perhaps also vice versa. Therefore, whenever the
``->dynticks_nesting`` field is incremented up from zero, the
``->dynticks_nmi_nesting`` field is set to a large positive number, and
whenever the ``->dynticks_nesting`` field is decremented down to zero,
-the the ``->dynticks_nmi_nesting`` field is set to zero. Assuming that
+the ``->dynticks_nmi_nesting`` field is set to zero. Assuming that
the number of misnested interrupts is not sufficient to overflow the
counter, this approach corrects the ``->dynticks_nmi_nesting`` field
every time the corresponding CPU enters the idle loop from process
......
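The correction rule described in this hunk can be sketched in a few lines of C (a hypothetical illustration only, not the kernel's actual implementation; the struct and function names are made up, and DYNTICK_IRQ_NONIDLE stands in for the "large positive number"):

#include <limits.h>

struct sketch_rcu_data {
	long dynticks_nesting;		/* Process-level nesting count. */
	long dynticks_nmi_nesting;	/* Irq/NMI nesting, self-correcting. */
};

#define DYNTICK_IRQ_NONIDLE	((LONG_MAX / 2) + 1)

/* Leaving idle: process-level count rises from zero. */
static void sketch_eqs_exit(struct sketch_rcu_data *rdp)
{
	if (rdp->dynticks_nesting++ == 0)
		rdp->dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE;
}

/* Entering idle: process-level count falls to zero. */
static void sketch_eqs_enter(struct sketch_rcu_data *rdp)
{
	if (--rdp->dynticks_nesting == 0)
		rdp->dynticks_nmi_nesting = 0;
}

Because the reset happens every time the CPU enters the idle loop from process context, any damage done to ->dynticks_nmi_nesting by misnested interrupts is bounded rather than cumulative.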
@@ -2162,7 +2162,7 @@ scheduling-clock interrupt be enabled when RCU needs it to be:
this sort of thing.
#. If a CPU is in a portion of the kernel that is absolutely positively
no-joking guaranteed to never execute any RCU read-side critical
-sections, and RCU believes this CPU to to be idle, no problem. This
+sections, and RCU believes this CPU to be idle, no problem. This
sort of thing is used by some architectures for light-weight
exception handlers, which can then avoid the overhead of
``rcu_irq_enter()`` and ``rcu_irq_exit()`` at exception entry and
@@ -2431,7 +2431,7 @@ However, there are legitimate preemptible-RCU implementations that do
not have this property, given that any point in the code outside of an
RCU read-side critical section can be a quiescent state. Therefore,
*RCU-sched* was created, which follows “classic” RCU in that an
-RCU-sched grace period waits for for pre-existing interrupt and NMI
+RCU-sched grace period waits for pre-existing interrupt and NMI
handlers. In kernels built with ``CONFIG_PREEMPT=n``, the RCU and
RCU-sched APIs have identical implementations, while kernels built with
``CONFIG_PREEMPT=y`` provide a separate implementation for each.
......
@@ -360,7 +360,7 @@ order to amortize their overhead over many uses of the corresponding APIs.
There are at least three flavors of RCU usage in the Linux kernel. The diagram
above shows the most common one. On the updater side, the rcu_assign_pointer(),
-sychronize_rcu() and call_rcu() primitives used are the same for all three
+synchronize_rcu() and call_rcu() primitives used are the same for all three
flavors. However for protection (on the reader side), the primitives used vary
depending on the flavor:
......
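As context for the updater-side primitives named in the hunk above, the canonical update pattern they support looks roughly like this (a hedged sketch; struct foo, foo_mutex, gbl_foo, and update_foo() are invented names, not part of this patch):

#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int a;
};

static DEFINE_MUTEX(foo_mutex);		/* Serializes updaters. */
static struct foo __rcu *gbl_foo;

/* Updater: publish a new version, wait out old readers, then free. */
static int update_foo(int newval)
{
	struct foo *newp, *oldp;

	newp = kmalloc(sizeof(*newp), GFP_KERNEL);
	if (!newp)
		return -ENOMEM;
	newp->a = newval;

	mutex_lock(&foo_mutex);
	oldp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
	rcu_assign_pointer(gbl_foo, newp);	/* Publish the new version. */
	mutex_unlock(&foo_mutex);

	synchronize_rcu();			/* Wait for pre-existing readers. */
	kfree(oldp);				/* Now safe to reclaim. */
	return 0;
}

Only the reader-side markers (rcu_read_lock() versus rcu_read_lock_bh() versus rcu_read_lock_sched()) vary across the three flavors; updater code of this shape is common to all of them.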
@@ -3070,6 +3070,10 @@
and gids from such clients. This is intended to ease
migration from NFSv2/v3.
+nmi_backtrace.backtrace_idle [KNL]
+Dump stacks even of idle CPUs in response to an
+NMI stack-backtrace request.
nmi_debug= [KNL,SH] Specify one or more actions to take
when a NMI is triggered.
Format: [state][,regs][,debounce][,die]
@@ -4149,46 +4153,55 @@
This wake_up() will be accompanied by a
WARN_ONCE() splat and an ftrace_dump().
+rcutree.rcu_unlock_delay= [KNL]
+In CONFIG_RCU_STRICT_GRACE_PERIOD=y kernels,
+this specifies an rcu_read_unlock()-time delay
+in microseconds. This defaults to zero.
+Larger delays increase the probability of
+catching RCU pointer leaks, that is, buggy use
+of RCU-protected pointers after the relevant
+rcu_read_unlock() has completed.
rcutree.sysrq_rcu= [KNL]
Commandeer a sysrq key to dump out Tree RCU's
rcu_node tree with an eye towards determining
why a new grace period has not yet started.
-rcuperf.gp_async= [KNL]
+rcuscale.gp_async= [KNL]
Measure performance of asynchronous
grace-period primitives such as call_rcu().
-rcuperf.gp_async_max= [KNL]
+rcuscale.gp_async_max= [KNL]
Specify the maximum number of outstanding
callbacks per writer thread. When a writer
thread exceeds this limit, it invokes the
corresponding flavor of rcu_barrier() to allow
previously posted callbacks to drain.
-rcuperf.gp_exp= [KNL]
+rcuscale.gp_exp= [KNL]
Measure performance of expedited synchronous
grace-period primitives.
-rcuperf.holdoff= [KNL]
+rcuscale.holdoff= [KNL]
Set test-start holdoff period. The purpose of
this parameter is to delay the start of the
test until boot completes in order to avoid
interference.
-rcuperf.kfree_rcu_test= [KNL]
+rcuscale.kfree_rcu_test= [KNL]
Set to measure performance of kfree_rcu() flooding.
-rcuperf.kfree_nthreads= [KNL]
+rcuscale.kfree_nthreads= [KNL]
The number of threads running loops of kfree_rcu().
-rcuperf.kfree_alloc_num= [KNL]
+rcuscale.kfree_alloc_num= [KNL]
Number of allocations and frees done in an iteration.
-rcuperf.kfree_loops= [KNL]
+rcuscale.kfree_loops= [KNL]
-Number of loops doing rcuperf.kfree_alloc_num number
+Number of loops doing rcuscale.kfree_alloc_num number
of allocations and frees.
-rcuperf.nreaders= [KNL]
+rcuscale.nreaders= [KNL]
Set number of RCU readers. The value -1 selects
N, where N is the number of CPUs. A value
"n" less than -1 selects N-n+1, where N is again
@@ -4197,23 +4210,23 @@
A value of "n" less than or equal to -N selects
a single reader.
-rcuperf.nwriters= [KNL]
+rcuscale.nwriters= [KNL]
Set number of RCU writers. The values operate
-the same as for rcuperf.nreaders.
+the same as for rcuscale.nreaders.
N, where N is the number of CPUs
-rcuperf.perf_type= [KNL]
+rcuscale.perf_type= [KNL]
Specify the RCU implementation to test.
-rcuperf.shutdown= [KNL]
+rcuscale.shutdown= [KNL]
Shut the system down after performance tests
complete. This is useful for hands-off automated
testing.
-rcuperf.verbose= [KNL]
+rcuscale.verbose= [KNL]
Enable additional printk() statements.
-rcuperf.writer_holdoff= [KNL]
+rcuscale.writer_holdoff= [KNL]
Write-side holdoff between grace periods,
in microseconds. The default of zero says
no holdoff.
@@ -4266,6 +4279,18 @@
are zero, rcutorture acts as if is interpreted
they are all non-zero.
+rcutorture.irqreader= [KNL]
+Run RCU readers from irq handlers, or, more
+accurately, from a timer handler. Not all RCU
+flavors take kindly to this sort of thing.
+rcutorture.leakpointer= [KNL]
+Leak an RCU-protected pointer out of the reader.
+This can of course result in splats, and is
+intended to test the ability of things like
+CONFIG_RCU_STRICT_GRACE_PERIOD=y to detect
+such leaks.
rcutorture.n_barrier_cbs= [KNL]
Set callbacks/threads for rcu_barrier() testing.
@@ -4487,8 +4512,8 @@
refscale.shutdown= [KNL]
Shut down the system at the end of the performance
test. This defaults to 1 (shut it down) when
-rcuperf is built into the kernel and to 0 (leave
-it running) when rcuperf is built as a module.
+refscale is built into the kernel and to 0 (leave
+it running) when refscale is built as a module.
refscale.verbose= [KNL]
Enable additional printk() statements.
@@ -4634,6 +4659,98 @@
Format: integer between 0 and 10
Default is 0.
scftorture.holdoff= [KNL]
Number of seconds to hold off before starting
test. Defaults to zero for module insertion and
to 10 seconds for built-in smp_call_function()
tests.
scftorture.longwait= [KNL]
Request ridiculously long waits randomly selected
up to the chosen limit in seconds. Zero (the
default) disables this feature. Please note
that requesting even small non-zero numbers of
seconds can result in RCU CPU stall warnings,
softlockup complaints, and so on.
scftorture.nthreads= [KNL]
Number of kthreads to spawn to invoke the
smp_call_function() family of functions.
The default of -1 specifies a number of kthreads
equal to the number of CPUs.
scftorture.onoff_holdoff= [KNL]
Number of seconds to wait after the start of the
test before initiating CPU-hotplug operations.
scftorture.onoff_interval= [KNL]
Number of seconds to wait between successive
CPU-hotplug operations. Specifying zero (which
is the default) disables CPU-hotplug operations.
scftorture.shutdown_secs= [KNL]
The number of seconds following the start of the
test after which to shut down the system. The
default of zero avoids shutting down the system.
Non-zero values are useful for automated tests.
scftorture.stat_interval= [KNL]
The number of seconds between outputting the
current test statistics to the console. A value
of zero disables statistics output.
scftorture.stutter_cpus= [KNL]
The number of jiffies to wait between each change
to the set of CPUs under test.
scftorture.use_cpus_read_lock= [KNL]
Use use_cpus_read_lock() instead of the default
preempt_disable() to disable CPU hotplug
while invoking one of the smp_call_function*()
functions.
scftorture.verbose= [KNL]
Enable additional printk() statements.
scftorture.weight_single= [KNL]
The probability weighting to use for the
smp_call_function_single() function with a zero
"wait" parameter. A value of -1 selects the
default if all other weights are -1. However,
if at least one weight has some other value, a
value of -1 will instead select a weight of zero.
scftorture.weight_single_wait= [KNL]
The probability weighting to use for the
smp_call_function_single() function with a
non-zero "wait" parameter. See weight_single.
scftorture.weight_many= [KNL]
The probability weighting to use for the
smp_call_function_many() function with a zero
"wait" parameter. See weight_single.
Note well that setting a high probability for
this weighting can place serious IPI load
on the system.
scftorture.weight_many_wait= [KNL]
The probability weighting to use for the
smp_call_function_many() function with a
non-zero "wait" parameter. See weight_single
and weight_many.
scftorture.weight_all= [KNL]
The probability weighting to use for the
smp_call_function_all() function with a zero
"wait" parameter. See weight_single and
weight_many.
scftorture.weight_all_wait= [KNL]
The probability weighting to use for the
smp_call_function_all() function with a
non-zero "wait" parameter. See weight_single
and weight_many.
skew_tick= [KNL] Offset the periodic timer tick per cpu to mitigate
xtime_lock contention on larger systems, and/or RCU lock
contention on all systems with CONFIG_MAXSMP set.
......
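A worked example of the scftorture.weight_* semantics described above (hypothetical values, shown only to illustrate the -1 rule): booting with

	scftorture.weight_single=1 scftorture.weight_many=3

gives smp_call_function_many() three times the selection probability of smp_call_function_single(), and because at least one weight was set explicitly, every weight still at -1 (weight_single_wait, weight_many_wait, weight_all, weight_all_wait) is treated as zero, so those variants are never selected.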
@@ -17547,8 +17547,9 @@ S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
F: Documentation/RCU/torture.rst
F: kernel/locking/locktorture.c
-F: kernel/rcu/rcuperf.c
+F: kernel/rcu/rcuscale.c
F: kernel/rcu/rcutorture.c
+F: kernel/rcu/refscale.c
F: kernel/torture.c

TOSHIBA ACPI EXTRAS DRIVER
......
@@ -229,7 +229,8 @@ void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
return;

idx = srcu_read_lock(&head->track_srcu);
-hlist_for_each_entry_rcu(n, &head->track_notifier_list, node)
+hlist_for_each_entry_srcu(n, &head->track_notifier_list, node,
+srcu_read_lock_held(&head->track_srcu))
if (n->track_write)
n->track_write(vcpu, gpa, new, bytes, n);
srcu_read_unlock(&head->track_srcu, idx);
@@ -254,7 +255,8 @@ void kvm_page_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
return;

idx = srcu_read_lock(&head->track_srcu);
-hlist_for_each_entry_rcu(n, &head->track_notifier_list, node)
+hlist_for_each_entry_srcu(n, &head->track_notifier_list, node,
+srcu_read_lock_held(&head->track_srcu))
if (n->track_flush_slot)
n->track_flush_slot(kvm, slot, n);
srcu_read_unlock(&head->track_srcu, idx);
......
@@ -63,9 +63,17 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
RCU_LOCKDEP_WARN(!(cond) && !rcu_read_lock_any_held(), \
"RCU-list traversed in non-reader section!"); \
})
+#define __list_check_srcu(cond) \
+({ \
+RCU_LOCKDEP_WARN(!(cond), \
+"RCU-list traversed without holding the required lock!");\
+})
#else
#define __list_check_rcu(dummy, cond, extra...) \
({ check_arg_count_one(extra); })
+#define __list_check_srcu(cond) ({ })
#endif

/*
@@ -385,6 +393,25 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
&pos->member != (head); \
pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
/**
* list_for_each_entry_srcu - iterate over rcu list of given type
* @pos: the type * to use as a loop cursor.
* @head: the head for your list.
* @member: the name of the list_head within the struct.
* @cond: lockdep expression for the lock required to traverse the list.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as list_add_rcu()
* as long as the traversal is guarded by srcu_read_lock().
* The lockdep expression srcu_read_lock_held() can be passed as the
* cond argument from read side.
*/
#define list_for_each_entry_srcu(pos, head, member, cond) \
for (__list_check_srcu(cond), \
pos = list_entry_rcu((head)->next, typeof(*pos), member); \
&pos->member != (head); \
pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
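A sketch of how the new macro is intended to be used (hypothetical names; my_srcu, my_list, struct myobj, and sum_objects() are not from this patch):

#include <linux/rculist.h>
#include <linux/srcu.h>

struct myobj {
	int val;
	struct list_head node;
};

static LIST_HEAD(my_list);
DEFINE_STATIC_SRCU(my_srcu);

static int sum_objects(void)
{
	struct myobj *obj;
	int idx, sum = 0;

	idx = srcu_read_lock(&my_srcu);		/* Guards the traversal. */
	list_for_each_entry_srcu(obj, &my_list, node,
				 srcu_read_lock_held(&my_srcu))
		sum += obj->val;
	srcu_read_unlock(&my_srcu, idx);
	return sum;
}

Note that the cond expression does no locking itself; it is only a lockdep assertion, so getting it wrong produces a warning rather than any actual protection.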
/**
* list_entry_lockless - get the struct for this entry
* @ptr: the &struct list_head pointer.
@@ -683,6 +710,27 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
&(pos)->member)), typeof(*(pos)), member))
/**
* hlist_for_each_entry_srcu - iterate over rcu list of given type
* @pos: the type * to use as a loop cursor.
* @head: the head for your list.
* @member: the name of the hlist_node within the struct.
* @cond: lockdep expression for the lock required to traverse the list.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as hlist_add_head_rcu()
* as long as the traversal is guarded by srcu_read_lock().
* The lockdep expression srcu_read_lock_held() can be passed as the
* cond argument from read side.
*/
#define hlist_for_each_entry_srcu(pos, head, member, cond) \
for (__list_check_srcu(cond), \
pos = hlist_entry_safe(rcu_dereference_raw(hlist_first_rcu(head)),\
typeof(*(pos)), member); \
pos; \
pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
&(pos)->member)), typeof(*(pos)), member))
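The hlist variant is used the same way; the kvm_page_track changes earlier in this diff are a real in-tree example. A minimal sketch (hypothetical names, assuming the same SRCU setup as the list example above):

struct myhobj {
	int val;
	struct hlist_node node;
};

static int count_nodes(struct hlist_head *head, struct srcu_struct *ssp)
{
	struct myhobj *obj;
	int idx, n = 0;

	idx = srcu_read_lock(ssp);
	hlist_for_each_entry_srcu(obj, head, node,
				  srcu_read_lock_held(ssp))
		n++;				/* Reader-side work goes here. */
	srcu_read_unlock(ssp, idx);
	return n;
}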
/**
* hlist_for_each_entry_rcu_notrace - iterate over rcu list of given type (for tracing)
* @pos: the type * to use as a loop cursor.
......
@@ -55,6 +55,12 @@ void __rcu_read_unlock(void);
#else /* #ifdef CONFIG_PREEMPT_RCU */
+#ifdef CONFIG_TINY_RCU
+#define rcu_read_unlock_strict() do { } while (0)
+#else
+void rcu_read_unlock_strict(void);
+#endif
static inline void __rcu_read_lock(void)
{
preempt_disable();
@@ -63,6 +69,7 @@ static inline void __rcu_read_lock(void)
static inline void __rcu_read_unlock(void)
{
preempt_enable();
+rcu_read_unlock_strict();
}

static inline int rcu_preempt_depth(void)
@@ -709,8 +716,8 @@ static inline void rcu_read_lock_bh(void)
"rcu_read_lock_bh() used illegally while idle");
}

-/*
- * rcu_read_unlock_bh - marks the end of a softirq-only RCU critical section
+/**
+ * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
*
* See rcu_read_lock_bh() for more information.
*/
@@ -751,10 +758,10 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
__acquire(RCU_SCHED);
}

-/*
- * rcu_read_unlock_sched - marks the end of a RCU-classic critical section
+/**
+ * rcu_read_unlock_sched() - marks the end of a RCU-classic critical section
*
- * See rcu_read_lock_sched for more information.
+ * See rcu_read_lock_sched() for more information.
*/
static inline void rcu_read_unlock_sched(void)
{
@@ -945,7 +952,7 @@ static inline void rcu_head_init(struct rcu_head *rhp)
}

/**
- * rcu_head_after_call_rcu - Has this rcu_head been passed to call_rcu()?
+ * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
* @rhp: The rcu_head structure to test.
* @f: The function passed to call_rcu() along with @rhp.
*
......
@@ -103,7 +103,6 @@ static inline void rcu_scheduler_starting(void) { }
static inline void rcu_end_inkernel_boot(void) { }
static inline bool rcu_inkernel_boot_has_ended(void) { return true; }
static inline bool rcu_is_watching(void) { return true; }
-static inline bool __rcu_is_watching(void) { return true; }
static inline void rcu_momentary_dyntick_idle(void) { }
static inline void kfree_rcu_scheduler_running(void) { }
static inline bool rcu_gp_might_be_stalled(void) { return false; }
......
@@ -64,7 +64,6 @@ extern int rcu_scheduler_active __read_mostly;
void rcu_end_inkernel_boot(void);
bool rcu_inkernel_boot_has_ended(void);
bool rcu_is_watching(void);
-bool __rcu_is_watching(void);
#ifndef CONFIG_PREEMPTION
void rcu_all_qs(void);
#endif
......
@@ -26,6 +26,9 @@ struct __call_single_data {
struct {
struct llist_node llist;
unsigned int flags;
+#ifdef CONFIG_64BIT
+u16 src, dst;
+#endif
};
};
smp_call_func_t func;
......
@@ -61,6 +61,9 @@ struct __call_single_node {
unsigned int u_flags;
atomic_t a_flags;
};
+#ifdef CONFIG_64BIT
+u16 src, dst;
+#endif
};

#endif /* __LINUX_SMP_TYPES_H */
@@ -74,17 +74,17 @@ TRACE_EVENT_RCU(rcu_grace_period,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
+__field(long, gp_seq)
__field(const char *, gpevent)
),

TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
+__entry->gp_seq = (long)gp_seq;
__entry->gpevent = gpevent;
),

-TP_printk("%s %lu %s",
+TP_printk("%s %ld %s",
__entry->rcuname, __entry->gp_seq, __entry->gpevent)
);

@@ -114,8 +114,8 @@ TRACE_EVENT_RCU(rcu_future_grace_period,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
-__field(unsigned long, gp_seq_req)
+__field(long, gp_seq)
+__field(long, gp_seq_req)
__field(u8, level)
__field(int, grplo)
__field(int, grphi)
@@ -124,16 +124,16 @@ TRACE_EVENT_RCU(rcu_future_grace_period,
TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
-__entry->gp_seq_req = gp_seq_req;
+__entry->gp_seq = (long)gp_seq;
+__entry->gp_seq_req = (long)gp_seq_req;
__entry->level = level;
__entry->grplo = grplo;
__entry->grphi = grphi;
__entry->gpevent = gpevent;
),

-TP_printk("%s %lu %lu %u %d %d %s",
-__entry->rcuname, __entry->gp_seq, __entry->gp_seq_req, __entry->level,
+TP_printk("%s %ld %ld %u %d %d %s",
+__entry->rcuname, (long)__entry->gp_seq, (long)__entry->gp_seq_req, __entry->level,
__entry->grplo, __entry->grphi, __entry->gpevent)
);

@@ -153,7 +153,7 @@ TRACE_EVENT_RCU(rcu_grace_period_init,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
+__field(long, gp_seq)
__field(u8, level)
__field(int, grplo)
__field(int, grphi)
@@ -162,14 +162,14 @@ TRACE_EVENT_RCU(rcu_grace_period_init,
TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
+__entry->gp_seq = (long)gp_seq;
__entry->level = level;
__entry->grplo = grplo;
__entry->grphi = grphi;
__entry->qsmask = qsmask;
),

-TP_printk("%s %lu %u %d %d %lx",
+TP_printk("%s %ld %u %d %d %lx",
__entry->rcuname, __entry->gp_seq, __entry->level,
__entry->grplo, __entry->grphi, __entry->qsmask)
);

@@ -197,17 +197,17 @@ TRACE_EVENT_RCU(rcu_exp_grace_period,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gpseq)
+__field(long, gpseq)
__field(const char *, gpevent)
),

TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gpseq = gpseq;
+__entry->gpseq = (long)gpseq;
__entry->gpevent = gpevent;
),

-TP_printk("%s %lu %s",
+TP_printk("%s %ld %s",
__entry->rcuname, __entry->gpseq, __entry->gpevent)
);

@@ -316,17 +316,17 @@ TRACE_EVENT_RCU(rcu_preempt_task,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
+__field(long, gp_seq)
__field(int, pid)
),

TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
+__entry->gp_seq = (long)gp_seq;
__entry->pid = pid;
),

-TP_printk("%s %lu %d",
+TP_printk("%s %ld %d",
__entry->rcuname, __entry->gp_seq, __entry->pid)
);

@@ -343,17 +343,17 @@ TRACE_EVENT_RCU(rcu_unlock_preempted_task,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
+__field(long, gp_seq)
__field(int, pid)
),

TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
+__entry->gp_seq = (long)gp_seq;
__entry->pid = pid;
),

-TP_printk("%s %lu %d", __entry->rcuname, __entry->gp_seq, __entry->pid)
+TP_printk("%s %ld %d", __entry->rcuname, __entry->gp_seq, __entry->pid)
);

/*
@@ -374,7 +374,7 @@ TRACE_EVENT_RCU(rcu_quiescent_state_report,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
+__field(long, gp_seq)
__field(unsigned long, mask)
__field(unsigned long, qsmask)
__field(u8, level)
@@ -385,7 +385,7 @@ TRACE_EVENT_RCU(rcu_quiescent_state_report,
TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
+__entry->gp_seq = (long)gp_seq;
__entry->mask = mask;
__entry->qsmask = qsmask;
__entry->level = level;
@@ -394,7 +394,7 @@ TRACE_EVENT_RCU(rcu_quiescent_state_report,
__entry->gp_tasks = gp_tasks;
),

-TP_printk("%s %lu %lx>%lx %u %d %d %u",
+TP_printk("%s %ld %lx>%lx %u %d %d %u",
__entry->rcuname, __entry->gp_seq,
__entry->mask, __entry->qsmask, __entry->level,
__entry->grplo, __entry->grphi, __entry->gp_tasks)
@@ -415,19 +415,19 @@ TRACE_EVENT_RCU(rcu_fqs,
TP_STRUCT__entry(
__field(const char *, rcuname)
-__field(unsigned long, gp_seq)
+__field(long, gp_seq)
__field(int, cpu)
__field(const char *, qsevent)
),

TP_fast_assign(
__entry->rcuname = rcuname;
-__entry->gp_seq = gp_seq;
+__entry->gp_seq = (long)gp_seq;
__entry->cpu = cpu;
__entry->qsevent = qsevent;
),

-TP_printk("%s %lu %d %s",
+TP_printk("%s %ld %d %s",
__entry->rcuname, __entry->gp_seq,
__entry->cpu, __entry->qsevent)
);
......
@@ -133,6 +133,8 @@ KASAN_SANITIZE_stackleak.o := n
KCSAN_SANITIZE_stackleak.o := n
KCOV_INSTRUMENT_stackleak.o := n

+obj-$(CONFIG_SCF_TORTURE_TEST) += scftorture.o

$(obj)/configs.o: $(obj)/config_data.gz
targets += config_data.gz
......
@@ -304,7 +304,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
* terminate a grace period, if and only if the timer interrupt is
* not nested into another interrupt.
*
- * Checking for __rcu_is_watching() here would prevent the nesting
+ * Checking for rcu_is_watching() here would prevent the nesting
* interrupt to invoke rcu_irq_enter(). If that nested interrupt is
* the tick then rcu_flavor_sched_clock_irq() would wrongfully
* assume that it is the first interupt and eventually claim
......
@@ -566,7 +566,7 @@ static struct lock_torture_ops rwsem_lock_ops = {
#include <linux/percpu-rwsem.h>
static struct percpu_rw_semaphore pcpu_rwsem;

-void torture_percpu_rwsem_init(void)
+static void torture_percpu_rwsem_init(void)
{
BUG_ON(percpu_init_rwsem(&pcpu_rwsem));
}
......
@@ -135,10 +135,12 @@ config RCU_FANOUT
config RCU_FANOUT_LEAF
int "Tree-based hierarchical RCU leaf-level fanout value"
-range 2 64 if 64BIT
-range 2 32 if !64BIT
+range 2 64 if 64BIT && !RCU_STRICT_GRACE_PERIOD
+range 2 32 if !64BIT && !RCU_STRICT_GRACE_PERIOD
+range 2 3 if RCU_STRICT_GRACE_PERIOD
depends on TREE_RCU && RCU_EXPERT
-default 16
+default 16 if !RCU_STRICT_GRACE_PERIOD
+default 2 if RCU_STRICT_GRACE_PERIOD
help
This option controls the leaf-level fanout of hierarchical
implementations of RCU, and allows trading off cache misses
......
@@ -23,7 +23,7 @@ config TORTURE_TEST
tristate
default n

-config RCU_PERF_TEST
+config RCU_SCALE_TEST
tristate "performance tests for RCU"
depends on DEBUG_KERNEL
select TORTURE_TEST
@@ -114,4 +114,19 @@ config RCU_EQS_DEBUG
Say N here if you need ultimate kernel/user switch latencies
Say Y if you are unsure
config RCU_STRICT_GRACE_PERIOD
bool "Provide debug RCU implementation with short grace periods"
depends on DEBUG_KERNEL && RCU_EXPERT
default n
select PREEMPT_COUNT if PREEMPT=n
help
Select this option to build an RCU variant that is strict about
grace periods, making them as short as it can. This limits
scalability, destroys real-time response, degrades battery
lifetime and kills performance. Don't try this on large
machines, as in systems with more than about 10 or 20 CPUs.
But in conjunction with tools like KASAN, it can be helpful
when looking for certain types of RCU usage bugs, for example,
too-short RCU read-side critical sections.
endmenu # "RCU Debugging"
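For concreteness, a debug kernel aimed at the bug classes mentioned in the help text above might combine the new option with KASAN along these lines (an illustrative .config fragment assembled by this editor, not taken from the patch itself):

CONFIG_DEBUG_KERNEL=y
CONFIG_RCU_EXPERT=y
CONFIG_RCU_STRICT_GRACE_PERIOD=y
CONFIG_KASAN=y

As the help text warns, such a configuration is suitable only for small test machines, not for production or large systems.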
@@ -11,7 +11,7 @@ obj-y += update.o sync.o
obj-$(CONFIG_TREE_SRCU) += srcutree.o
obj-$(CONFIG_TINY_SRCU) += srcutiny.o
obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
-obj-$(CONFIG_RCU_PERF_TEST) += rcuperf.o
+obj-$(CONFIG_RCU_SCALE_TEST) += rcuscale.o
obj-$(CONFIG_RCU_REF_SCALE_TEST) += refscale.o
obj-$(CONFIG_TREE_RCU) += tree.o
obj-$(CONFIG_TINY_RCU) += tiny.o
......
@@ -475,8 +475,16 @@ bool rcu_segcblist_accelerate(struct rcu_segcblist *rsclp, unsigned long seq)
* Also advance to the oldest segment of callbacks whose
* ->gp_seq[] completion is at or after that passed in via "seq",
* skipping any empty segments.
+*
+* Note that segment "i" (and any lower-numbered segments
+* containing older callbacks) will be unaffected, and their
+* grace-period numbers remain unchanged. For example, if i ==
+* WAIT_TAIL, then neither WAIT_TAIL nor DONE_TAIL will be touched.
+* Instead, the CBs in NEXT_TAIL will be merged with those in
+* NEXT_READY_TAIL and the grace-period number of NEXT_READY_TAIL
+* would be updated. NEXT_TAIL would then be empty.
*/
-if (++i >= RCU_NEXT_TAIL)
+if (rcu_segcblist_restempty(rsclp, i) || ++i >= RCU_NEXT_TAIL)
return false;

/*
......
@@ -52,19 +52,6 @@
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com> and Josh Triplett <josh@joshtriplett.org>");

-#ifndef data_race
-#define data_race(expr) \
-({ \
-expr; \
-})
-#endif
-#ifndef ASSERT_EXCLUSIVE_WRITER
-#define ASSERT_EXCLUSIVE_WRITER(var) do { } while (0)
-#endif
-#ifndef ASSERT_EXCLUSIVE_ACCESS
-#define ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
-#endif

/* Bits for ->extendables field, extendables param, and related definitions. */
#define RCUTORTURE_RDR_SHIFT 8 /* Put SRCU index in upper bits. */
#define RCUTORTURE_RDR_MASK ((1 << RCUTORTURE_RDR_SHIFT) - 1)
@@ -100,6 +87,7 @@ torture_param(bool, gp_normal, false,
"Use normal (non-expedited) GP wait primitives");
torture_param(bool, gp_sync, false, "Use synchronous GP wait primitives");
torture_param(int, irqreader, 1, "Allow RCU readers from irq handlers");
+torture_param(int, leakpointer, 0, "Leak pointer dereferences from readers");
torture_param(int, n_barrier_cbs, 0,
"# of callbacks/kthreads for barrier testing");
torture_param(int, nfakewriters, 4, "Number of RCU fake writer threads");
@@ -185,6 +173,7 @@ static long n_barrier_successes; /* did rcu_barrier test succeed? */
static unsigned long n_read_exits;
static struct list_head rcu_torture_removed;
static unsigned long shutdown_jiffies;
+static unsigned long start_gp_seq;

static int rcu_torture_writer_state;
#define RTWS_FIXED_DELAY 0
@@ -1413,6 +1402,9 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp)
preempt_enable();
rcutorture_one_extend(&readstate, 0, trsp, rtrsp);
WARN_ON_ONCE(readstate & RCUTORTURE_RDR_MASK);
+// This next splat is expected behavior if leakpointer, especially
+// for CONFIG_RCU_STRICT_GRACE_PERIOD=y kernels.
+WARN_ON_ONCE(leakpointer && READ_ONCE(p->rtort_pipe_count) > 1);

/* If error or close call, record the sequence of reader protections. */
if ((pipe_count > 1 || completed > 1) && !xchg(&err_segs_recorded, 1)) {
@@ -1808,6 +1800,7 @@ struct rcu_fwd {
unsigned long rcu_launder_gp_seq_start;
};

+static DEFINE_MUTEX(rcu_fwd_mutex);
static struct rcu_fwd *rcu_fwds;
static bool rcu_fwd_emergency_stop;
@@ -2074,8 +2067,14 @@ static void rcu_torture_fwd_prog_cr(struct rcu_fwd *rfp)
static int rcutorture_oom_notify(struct notifier_block *self,
unsigned long notused, void *nfreed)
{
-struct rcu_fwd *rfp = rcu_fwds;
+struct rcu_fwd *rfp;

+mutex_lock(&rcu_fwd_mutex);
+rfp = rcu_fwds;
+if (!rfp) {
+mutex_unlock(&rcu_fwd_mutex);
+return NOTIFY_OK;
+}
WARN(1, "%s invoked upon OOM during forward-progress testing.\n",
__func__);
rcu_torture_fwd_cb_hist(rfp);
@@ -2093,6 +2092,7 @@ static int rcutorture_oom_notify(struct notifier_block *self,
smp_mb(); /* Frees before return to avoid redoing OOM. */
(*(unsigned long *)nfreed)++; /* Forward progress CBs freed! */
pr_info("%s returning after OOM processing.\n", __func__);
+mutex_unlock(&rcu_fwd_mutex);
return NOTIFY_OK;
}
@@ -2114,13 +2114,11 @@ static int rcu_torture_fwd_prog(void *args)
do {
schedule_timeout_interruptible(fwd_progress_holdoff * HZ);
WRITE_ONCE(rcu_fwd_emergency_stop, false);
-register_oom_notifier(&rcutorture_oom_nb);
if (!IS_ENABLED(CONFIG_TINY_RCU) ||
rcu_inkernel_boot_has_ended())
rcu_torture_fwd_prog_nr(rfp, &tested, &tested_tries);
if (rcu_inkernel_boot_has_ended())
rcu_torture_fwd_prog_cr(rfp);
-unregister_oom_notifier(&rcutorture_oom_nb);

/* Avoid slow periods, better to test when busy. */
stutter_wait("rcu_torture_fwd_prog");
@@ -2160,9 +2158,26 @@ static int __init rcu_torture_fwd_prog_init(void)
return -ENOMEM;
spin_lock_init(&rfp->rcu_fwd_lock);
rfp->rcu_fwd_cb_tail = &rfp->rcu_fwd_cb_head;
+mutex_lock(&rcu_fwd_mutex);
+rcu_fwds = rfp;
+mutex_unlock(&rcu_fwd_mutex);
+register_oom_notifier(&rcutorture_oom_nb);
return torture_create_kthread(rcu_torture_fwd_prog, rfp, fwd_prog_task);
}
static void rcu_torture_fwd_prog_cleanup(void)
{
struct rcu_fwd *rfp;
torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
rfp = rcu_fwds;
mutex_lock(&rcu_fwd_mutex);
rcu_fwds = NULL;
mutex_unlock(&rcu_fwd_mutex);
unregister_oom_notifier(&rcutorture_oom_nb);
kfree(rfp);
}
/* Callback function for RCU barrier testing. */
static void rcu_torture_barrier_cbf(struct rcu_head *rcu)
{
@@ -2460,7 +2475,7 @@ rcu_torture_cleanup(void)
show_rcu_gp_kthreads();
rcu_torture_read_exit_cleanup();
rcu_torture_barrier_cleanup();
-torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
+rcu_torture_fwd_prog_cleanup();
torture_stop_kthread(rcu_torture_stall, stall_task);
torture_stop_kthread(rcu_torture_writer, writer_task);
@@ -2482,8 +2497,9 @@
rcutorture_get_gp_data(cur_ops->ttype, &flags, &gp_seq);
srcutorture_get_gp_data(cur_ops->ttype, srcu_ctlp, &flags, &gp_seq);
-pr_alert("%s: End-test grace-period state: g%lu f%#x\n",
-cur_ops->name, gp_seq, flags);
+pr_alert("%s: End-test grace-period state: g%ld f%#x total-gps=%ld\n",
+cur_ops->name, (long)gp_seq, flags,
+rcutorture_seq_diff(gp_seq, start_gp_seq));
torture_stop_kthread(rcu_torture_stats, stats_task);
torture_stop_kthread(rcu_torture_fqs, fqs_task);
if (rcu_torture_can_boost())
@@ -2607,6 +2623,8 @@ rcu_torture_init(void)
long i;
int cpu;
int firsterr = 0;
+int flags = 0;
+unsigned long gp_seq = 0;
static struct rcu_torture_ops *torture_ops[] = {
&rcu_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
&busted_srcud_ops, &tasks_ops, &tasks_rude_ops,
@@ -2649,6 +2667,11 @@
nrealreaders = 1;
}
rcu_torture_print_module_parms(cur_ops, "Start of test");
+rcutorture_get_gp_data(cur_ops->ttype, &flags, &gp_seq);
+srcutorture_get_gp_data(cur_ops->ttype, srcu_ctlp, &flags, &gp_seq);
+start_gp_seq = gp_seq;
+pr_alert("%s: Start-test grace-period state: g%ld f%#x\n",
+cur_ops->name, (long)gp_seq, flags);

/* Set up the freelist. */
......
@@ -546,9 +546,11 @@ static int main_func(void *arg)
// Print the average of all experiments
SCALEOUT("END OF TEST. Calculating average duration per loop (nanoseconds)...\n");

-buf[0] = 0;
-strcat(buf, "\n");
-strcat(buf, "Runs\tTime(ns)\n");
+if (!errexit) {
+buf[0] = 0;
+strcat(buf, "\n");
+strcat(buf, "Runs\tTime(ns)\n");
+}

for (exp = 0; exp < nruns; exp++) {
u64 avg;
......
@@ -29,19 +29,6 @@
#include "rcu.h"
#include "rcu_segcblist.h"

-#ifndef data_race
-#define data_race(expr) \
-({ \
-expr; \
-})
-#endif
-#ifndef ASSERT_EXCLUSIVE_WRITER
-#define ASSERT_EXCLUSIVE_WRITER(var) do { } while (0)
-#endif
-#ifndef ASSERT_EXCLUSIVE_ACCESS
-#define ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
-#endif

/* Holdoff in nanoseconds for auto-expediting. */
#define DEFAULT_SRCU_EXP_HOLDOFF (25 * 1000)
static ulong exp_holdoff = DEFAULT_SRCU_EXP_HOLDOFF;
......
This diff is collapsed.
@@ -156,6 +156,7 @@ struct rcu_data {
bool beenonline; /* CPU online at least once. */
bool gpwrap; /* Possible ->gp_seq wrap. */
bool exp_deferred_qs; /* This CPU awaiting a deferred QS? */
+bool cpu_started; /* RCU watching this onlining CPU. */
struct rcu_node *mynode; /* This CPU's leaf of hierarchy */
unsigned long grpmask; /* Mask to apply to leaf qsmask. */
unsigned long ticks_this_gp; /* The number of scheduling-clock */
@@ -164,6 +165,7 @@ struct rcu_data {
/* period it is aware of. */
struct irq_work defer_qs_iw; /* Obtain later scheduler attention. */
bool defer_qs_iw_pending; /* Scheduler attention pending? */
+struct work_struct strict_work; /* Schedule readers for strict GPs. */

/* 2) batch handling */
struct rcu_segcblist cblist; /* Segmented callback list, with */
......
@@ -732,11 +732,9 @@ static void rcu_exp_need_qs(void)
/* Invoked on each online non-idle CPU for expedited quiescent state. */
static void rcu_exp_handler(void *unused)
{
-struct rcu_data *rdp;
-struct rcu_node *rnp;
+struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+struct rcu_node *rnp = rdp->mynode;

-rdp = this_cpu_ptr(&rcu_data);
-rnp = rdp->mynode;
if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
__this_cpu_read(rcu_data.cpu_no_qs.b.exp))
return;
......
@@ -36,6 +36,8 @@ static void __init rcu_bootup_announce_oddness(void)
pr_info("\tRCU dyntick-idle grace-period acceleration is enabled.\n");
if (IS_ENABLED(CONFIG_PROVE_RCU))
pr_info("\tRCU lockdep checking is enabled.\n");
+if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
+pr_info("\tRCU strict (and thus non-scalable) grace periods enabled.\n");
if (RCU_NUM_LVLS >= 4)
pr_info("\tFour(or more)-level hierarchy is enabled.\n");
if (RCU_FANOUT_LEAF != 16)
@@ -374,6 +376,8 @@ void __rcu_read_lock(void)
rcu_preempt_read_enter();
if (IS_ENABLED(CONFIG_PROVE_LOCKING))
WARN_ON_ONCE(rcu_preempt_depth() > RCU_NEST_PMAX);
+if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) && rcu_state.gp_kthread)
+WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
barrier(); /* critical section after entry code. */
}
EXPORT_SYMBOL_GPL(__rcu_read_lock);
@@ -455,8 +459,14 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
return;
}
t->rcu_read_unlock_special.s = 0;
-if (special.b.need_qs)
-rcu_qs();
+if (special.b.need_qs) {
+if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD)) {
+rcu_report_qs_rdp(rdp);
+udelay(rcu_unlock_delay);
+} else {
+rcu_qs();
+}
+}
/*
* Respond to a request by an expedited grace period for a
@@ -768,6 +778,24 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
#else /* #ifdef CONFIG_PREEMPT_RCU */
/*
* If strict grace periods are enabled, and if the calling
* __rcu_read_unlock() marks the beginning of a quiescent state, immediately
* report that quiescent state and, if requested, spin for a bit.
*/
void rcu_read_unlock_strict(void)
{
struct rcu_data *rdp;
if (!IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) ||
irqs_disabled() || preempt_count() || !rcu_state.gp_kthread)
return;
rdp = this_cpu_ptr(&rcu_data);
rcu_report_qs_rdp(rdp);
udelay(rcu_unlock_delay);
}
EXPORT_SYMBOL_GPL(rcu_read_unlock_strict);
/*
* Tell them what RCU they are running.
*/
@@ -1926,6 +1954,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
* nearest grace period (if any) to wait for next. The CB kthreads
* and the global grace-period kthread are awakened if needed.
*/
+WARN_ON_ONCE(my_rdp->nocb_gp_rdp != my_rdp);
for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_cb_rdp) {
trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("Check"));
rcu_nocb_lock_irqsave(rdp, flags);
@@ -2411,13 +2440,12 @@ static void show_rcu_nocb_state(struct rcu_data *rdp)
return;

waslocked = raw_spin_is_locked(&rdp->nocb_gp_lock);
-wastimer = timer_pending(&rdp->nocb_timer);
+wastimer = timer_pending(&rdp->nocb_bypass_timer);
wassleep = swait_active(&rdp->nocb_gp_wq);
-if (!rdp->nocb_defer_wakeup && !rdp->nocb_gp_sleep &&
-!waslocked && !wastimer && !wassleep)
+if (!rdp->nocb_gp_sleep && !waslocked && !wastimer && !wassleep)
return; /* Nothing untowards. */

-pr_info(" !!! %c%c%c%c %c\n",
+pr_info(" nocb GP activity on CB-only CPU!!! %c%c%c%c %c\n",
"lL"[waslocked],
"dD"[!!rdp->nocb_defer_wakeup],
"tT"[wastimer],
......
@@ -158,7 +158,7 @@ static void rcu_stall_kick_kthreads(void)
{
unsigned long j;

-if (!rcu_kick_kthreads)
+if (!READ_ONCE(rcu_kick_kthreads))
return;
j = READ_ONCE(rcu_state.jiffies_kick_kthreads);
if (time_after(jiffies, j) && rcu_state.gp_kthread &&
@@ -580,7 +580,7 @@ static void check_cpu_stall(struct rcu_data *rdp)
unsigned long js;
struct rcu_node *rnp;

-if ((rcu_stall_is_suppressed() && !rcu_kick_kthreads) ||
+if ((rcu_stall_is_suppressed() && !READ_ONCE(rcu_kick_kthreads)) ||
!rcu_gp_in_progress())
return;
rcu_stall_kick_kthreads();
@@ -623,7 +623,7 @@ static void check_cpu_stall(struct rcu_data *rdp)
/* We haven't checked in, so go dump stack. */
print_cpu_stall(gps);
-if (rcu_cpu_stall_ftrace_dump)
+if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
rcu_ftrace_dump(DUMP_ALL);

} else if (rcu_gp_in_progress() &&
@@ -632,7 +632,7 @@ static void check_cpu_stall(struct rcu_data *rdp)
/* They had a few time units to dump stack, so complain. */
print_other_cpu_stall(gs2, gps);
-if (rcu_cpu_stall_ftrace_dump)
+if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
rcu_ftrace_dump(DUMP_ALL);
}
}
......
...@@ -53,19 +53,6 @@
#endif
#define MODULE_PARAM_PREFIX "rcupdate."
#ifndef data_race
#define data_race(expr) \
({ \
expr; \
})
#endif
#ifndef ASSERT_EXCLUSIVE_WRITER
#define ASSERT_EXCLUSIVE_WRITER(var) do { } while (0)
#endif
#ifndef ASSERT_EXCLUSIVE_ACCESS
#define ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
#endif
#ifndef CONFIG_TINY_RCU
module_param(rcu_expedited, int, 0);
module_param(rcu_normal, int, 0);
...
...@@ -20,6 +20,9 @@
#include <linux/sched.h>
#include <linux/sched/idle.h>
#include <linux/hypervisor.h>
#include <linux/sched/clock.h>
#include <linux/nmi.h>
#include <linux/sched/debug.h>
#include "smpboot.h" #include "smpboot.h"
#include "sched/smp.h" #include "sched/smp.h"
...@@ -96,6 +99,103 @@ void __init call_function_init(void) ...@@ -96,6 +99,103 @@ void __init call_function_init(void)
smpcfd_prepare_cpu(smp_processor_id()); smpcfd_prepare_cpu(smp_processor_id());
} }
#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
static DEFINE_PER_CPU(call_single_data_t *, cur_csd);
static DEFINE_PER_CPU(smp_call_func_t, cur_csd_func);
static DEFINE_PER_CPU(void *, cur_csd_info);
#define CSD_LOCK_TIMEOUT (5ULL * NSEC_PER_SEC)
static atomic_t csd_bug_count = ATOMIC_INIT(0);
/* Record current CSD work for current CPU, NULL to erase. */
static void csd_lock_record(call_single_data_t *csd)
{
if (!csd) {
smp_mb(); /* NULL cur_csd after unlock. */
__this_cpu_write(cur_csd, NULL);
return;
}
__this_cpu_write(cur_csd_func, csd->func);
__this_cpu_write(cur_csd_info, csd->info);
smp_wmb(); /* func and info before csd. */
__this_cpu_write(cur_csd, csd);
smp_mb(); /* Update cur_csd before function call. */
/* Or before unlock, as the case may be. */
}
static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
{
unsigned int csd_type;
csd_type = CSD_TYPE(csd);
if (csd_type == CSD_TYPE_ASYNC || csd_type == CSD_TYPE_SYNC)
return csd->dst; /* Other CSD_TYPE_ values might not have ->dst. */
return -1;
}
/*
* Complain if too much time spent waiting. Note that only
* the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
* so waiting on other types gets much less information.
*/
static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
{
int cpu = -1;
int cpux;
bool firsttime;
u64 ts2, ts_delta;
call_single_data_t *cpu_cur_csd;
unsigned int flags = READ_ONCE(csd->flags);
if (!(flags & CSD_FLAG_LOCK)) {
if (!unlikely(*bug_id))
return true;
cpu = csd_lock_wait_getcpu(csd);
pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
*bug_id, raw_smp_processor_id(), cpu);
return true;
}
ts2 = sched_clock();
ts_delta = ts2 - *ts1;
if (likely(ts_delta <= CSD_LOCK_TIMEOUT))
return false;
firsttime = !*bug_id;
if (firsttime)
*bug_id = atomic_inc_return(&csd_bug_count);
cpu = csd_lock_wait_getcpu(csd);
if (WARN_ONCE(cpu < 0 || cpu >= nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
cpux = 0;
else
cpux = cpu;
cpu_cur_csd = smp_load_acquire(&per_cpu(cur_csd, cpux)); /* Before func and info. */
pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %llu ns for CPU#%02d %pS(%ps).\n",
firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), ts2 - ts0,
cpu, csd->func, csd->info);
if (cpu_cur_csd && csd != cpu_cur_csd) {
pr_alert("\tcsd: CSD lock (#%d) handling prior %pS(%ps) request.\n",
*bug_id, READ_ONCE(per_cpu(cur_csd_func, cpux)),
READ_ONCE(per_cpu(cur_csd_info, cpux)));
} else {
pr_alert("\tcsd: CSD lock (#%d) %s.\n",
*bug_id, !cpu_cur_csd ? "unresponsive" : "handling this request");
}
if (cpu >= 0) {
if (!trigger_single_cpu_backtrace(cpu))
dump_cpu_task(cpu);
if (!cpu_cur_csd) {
pr_alert("csd: Re-sending CSD lock (#%d) IPI from CPU#%02d to CPU#%02d\n", *bug_id, raw_smp_processor_id(), cpu);
arch_send_call_function_single_ipi(cpu);
}
}
dump_stack();
*ts1 = ts2;
return false;
}
/*
* csd_lock/csd_unlock used to serialize access to per-cpu csd resources
*
...@@ -103,10 +203,30 @@ void __init call_function_init(void)
* previous function call. For multi-cpu calls its even more interesting
* as we'll have to ensure no other cpu is observing our csd.
*/
static __always_inline void csd_lock_wait(call_single_data_t *csd)
{
int bug_id = 0;
u64 ts0, ts1;
ts1 = ts0 = sched_clock();
for (;;) {
if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id))
break;
cpu_relax();
}
smp_acquire__after_ctrl_dep();
}
#else
static void csd_lock_record(call_single_data_t *csd)
{
}
static __always_inline void csd_lock_wait(call_single_data_t *csd)
{
smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
}
#endif
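For reference, when csd_lock_wait_toolong() fires, the pr_alert() format strings above produce console lines of roughly the following shape; the bug ID, CPU numbers, wait time, and the function/info symbol names here are invented for illustration:

csd: Detected non-responsive CSD lock (#1) on CPU#0, waiting 5000000000 ns for CPU#01 example_smp_func+0x0/0x40(example_info).
	csd: CSD lock (#1) unresponsive.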
static __always_inline void csd_lock(call_single_data_t *csd)
{
...@@ -166,9 +286,11 @@ static int generic_exec_single(int cpu, call_single_data_t *csd)
* We can unlock early even for the synchronous on-stack case,
* since we're doing this from the same CPU..
*/
csd_lock_record(csd);
csd_unlock(csd);
local_irq_save(flags);
func(info);
csd_lock_record(NULL);
local_irq_restore(flags);
return 0;
}
...@@ -268,8 +390,10 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
entry = &csd_next->llist;
}
csd_lock_record(csd);
func(info);
csd_unlock(csd);
csd_lock_record(NULL);
} else {
prev = &csd->llist;
}
...@@ -296,8 +420,10 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
smp_call_func_t func = csd->func;
void *info = csd->info;
csd_lock_record(csd);
csd_unlock(csd);
func(info);
csd_lock_record(NULL);
} else if (type == CSD_TYPE_IRQ_WORK) {
irq_work_single(csd);
}
...@@ -375,6 +501,10 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
csd->func = func;
csd->info = info;
#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
csd->src = smp_processor_id();
csd->dst = cpu;
#endif
err = generic_exec_single(cpu, csd);
...@@ -540,6 +670,10 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
csd->flags |= CSD_TYPE_SYNC;
csd->func = func;
csd->info = info;
#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
csd->src = smp_processor_id();
csd->dst = cpu;
#endif
if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
__cpumask_set_cpu(cpu, cfd->cpumask_ipi);
}
...
...@@ -927,7 +927,7 @@ static bool can_stop_idle_tick(int cpu, struct tick_sched *ts)
if (ratelimit < 10 &&
(local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
pr_warn("NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #%02x!!!\n",
(unsigned int) local_softirq_pending());
ratelimit++;
}
...
...@@ -1367,6 +1367,27 @@ config WW_MUTEX_SELFTEST
Say M if you want these self tests to build as a module.
Say N if you are unsure.
config SCF_TORTURE_TEST
tristate "torture tests for smp_call_function*()"
depends on DEBUG_KERNEL
select TORTURE_TEST
help
This option provides a kernel module that runs torture tests
on the smp_call_function() family of primitives. The kernel
module may be built after the fact on the running kernel to
be tested, if desired.
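For example, with this option built as a module, a short manual run might be started as below; the parameter values are illustrative, though the parameter names match those used by the scf torture scripting added elsewhere in this merge:

$ modprobe scftorture stat_interval=15 shutdown_secs=600 verbose=1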
config CSD_LOCK_WAIT_DEBUG
bool "Debugging for csd_lock_wait(), called from smp_call_function*()"
depends on DEBUG_KERNEL
depends on 64BIT
default n
help
This option enables debug prints when CPUs are slow to respond
to the smp_call_function*() IPI wrappers. These debug prints
include the IPI handler function currently executing (if any)
and relevant stack traces.
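Taken together with the dependencies just listed, a minimal illustrative .config fragment for turning this on would be:

CONFIG_DEBUG_KERNEL=y
CONFIG_64BIT=y
CONFIG_CSD_LOCK_WAIT_DEBUG=y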
endmenu # lock debugging
config TRACE_IRQFLAGS
...
...@@ -85,12 +85,16 @@ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
put_cpu();
}
// Dump stacks even for idle CPUs.
static bool backtrace_idle;
module_param(backtrace_idle, bool, 0644);
bool nmi_cpu_backtrace(struct pt_regs *regs)
{
int cpu = smp_processor_id();
if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
if (!READ_ONCE(backtrace_idle) && regs && cpu_in_idle(instruction_pointer(regs))) {
pr_warn("NMI backtrace for cpu %d skipped: idling at %pS\n",
cpu, (void *)instruction_pointer(regs));
} else {
...
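Because backtrace_idle is a module_param with mode 0644, it should also be togglable at runtime through sysfs; the path below assumes the usual module-parameter layout for code built into lib/nmi_backtrace.c and is an unverified sketch:

$ echo 1 > /sys/module/nmi_backtrace/parameters/backtrace_idle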
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
#
# Analyze a given results directory for rcuscale performance measurements,
# looking for ftrace data. Exits with 0 if data was found, analyzed, and
# printed. Intended to be invoked from kvm-recheck-rcuscale.sh after
# argument checking.
#
# Usage: kvm-recheck-rcuscale-ftrace.sh resdir
#
# Copyright (C) IBM Corporation, 2016
#
...
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
#
# Analyze a given results directory for rcuscale scalability measurements.
#
# Usage: kvm-recheck-rcuscale.sh resdir
#
# Copyright (C) IBM Corporation, 2016
#
...@@ -20,7 +20,7 @@ fi
PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
. functions.sh
if kvm-recheck-rcuscale-ftrace.sh $i
then
# ftrace data was successfully analyzed, call it good!
exit 0
...@@ -30,12 +30,12 @@ configfile=`echo $i | sed -e 's/^.*\///'`
sed -e 's/^\[[^]]*]//' < $i/console.log |
awk '
/-scale: .* gps: .* batches:/ {
ngps = $9;
nbatches = $11;
}
/-scale: .*writer-duration/ {
gptimes[++n] = $5 / 1000.;
sum += $5 / 1000.;
}
...@@ -43,7 +43,7 @@ awk '
END {
newNR = asort(gptimes);
if (newNR <= 0) {
print "No rcuscale records found???"
exit;
}
pct50 = int(newNR * 50 / 100);
...@@ -79,5 +79,5 @@ END {
print "99th percentile grace-period duration: " gptimes[pct99];
print "Maximum grace-period duration: " gptimes[newNR];
print "Grace periods: " ngps + 0 " Batches: " nbatches + 0 " Ratio: " ngps / nbatches;
print "Computed from rcuscale printk output.";
}'
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
#
# Analyze a given results directory for rcutorture progress.
#
# Usage: kvm-recheck-rcu.sh resdir
#
# Copyright (C) Facebook, 2020
#
# Authors: Paul E. McKenney <paulmck@kernel.org>
i="$1"
if test -d "$i" -a -r "$i"
then
:
else
echo Unreadable results directory: $i
exit 1
fi
. functions.sh
configfile=`echo $i | sed -e 's/^.*\///'`
nscfs="`grep 'scf_invoked_count ver:' $i/console.log 2> /dev/null | tail -1 | sed -e 's/^.* scf_invoked_count ver: //' -e 's/ .*$//' | tr -d '\015'`"
if test -z "$nscfs"
then
echo "$configfile ------- "
else
dur="`sed -e 's/^.* scftorture.shutdown_secs=//' -e 's/ .*$//' < $i/qemu-cmd 2> /dev/null`"
if test -z "$dur"
then
rate=""
else
nscfss=`awk -v nscfs=$nscfs -v dur=$dur '
BEGIN { print nscfs / dur }' < /dev/null`
rate=" ($nscfss/s)"
fi
echo "${configfile} ------- ${nscfs} SCF handler invocations$rate"
fi
...@@ -66,6 +66,7 @@ config_override_param () {
echo > $T/KcList
config_override_param "$config_dir/CFcommon" KcList "`cat $config_dir/CFcommon 2> /dev/null`"
config_override_param "$config_template" KcList "`cat $config_template 2> /dev/null`"
config_override_param "--gdb options" KcList "$TORTURE_KCONFIG_GDB_ARG"
config_override_param "--kasan options" KcList "$TORTURE_KCONFIG_KASAN_ARG" config_override_param "--kasan options" KcList "$TORTURE_KCONFIG_KASAN_ARG"
config_override_param "--kcsan options" KcList "$TORTURE_KCONFIG_KCSAN_ARG" config_override_param "--kcsan options" KcList "$TORTURE_KCONFIG_KCSAN_ARG"
config_override_param "--kconfig argument" KcList "$TORTURE_KCONFIG_ARG" config_override_param "--kconfig argument" KcList "$TORTURE_KCONFIG_ARG"
...@@ -152,7 +153,11 @@ qemu_append="`identify_qemu_append "$QEMU"`" ...@@ -152,7 +153,11 @@ qemu_append="`identify_qemu_append "$QEMU"`"
boot_args="`configfrag_boot_params "$boot_args" "$config_template"`" boot_args="`configfrag_boot_params "$boot_args" "$config_template"`"
# Generate kernel-version-specific boot parameters # Generate kernel-version-specific boot parameters
boot_args="`per_version_boot_params "$boot_args" $resdir/.config $seconds`" boot_args="`per_version_boot_params "$boot_args" $resdir/.config $seconds`"
if test -n "$TORTURE_BOOT_GDB_ARG"
then
boot_args="$boot_args $TORTURE_BOOT_GDB_ARG"
fi
echo $QEMU $qemu_args -m $TORTURE_QEMU_MEM -kernel $KERNEL -append \"$qemu_append $boot_args\" $TORTURE_QEMU_GDB_ARG > $resdir/qemu-cmd
if test -n "$TORTURE_BUILDONLY" if test -n "$TORTURE_BUILDONLY"
then then
...@@ -171,14 +176,26 @@ echo "NOTE: $QEMU either did not run or was interactive" > $resdir/console.log ...@@ -171,14 +176,26 @@ echo "NOTE: $QEMU either did not run or was interactive" > $resdir/console.log
# Attempt to run qemu # Attempt to run qemu
( . $T/qemu-cmd; wait `cat $resdir/qemu_pid`; echo $? > $resdir/qemu-retval ) & ( . $T/qemu-cmd; wait `cat $resdir/qemu_pid`; echo $? > $resdir/qemu-retval ) &
commandcompleted=0 commandcompleted=0
if test -z "$TORTURE_KCONFIG_GDB_ARG"
then
sleep 10 # Give qemu's pid a chance to reach the file
if test -s "$resdir/qemu_pid"
then
qemu_pid=`cat "$resdir/qemu_pid"`
echo Monitoring qemu job at pid $qemu_pid
else
qemu_pid=""
echo Monitoring qemu job at yet-as-unknown pid
fi
fi
if test -n "$TORTURE_KCONFIG_GDB_ARG"
then
echo Waiting for you to attach a debug session, for example: > /dev/tty
echo " gdb $base_resdir/vmlinux" > /dev/tty
echo 'After symbols load and the "(gdb)" prompt appears:' > /dev/tty
echo " target remote :1234" > /dev/tty
echo " continue" > /dev/tty
kstarttime=`gawk 'BEGIN { print systime() }' < /dev/null`
fi
while :
do
...
...@@ -31,6 +31,9 @@ TORTURE_DEFCONFIG=defconfig
TORTURE_BOOT_IMAGE=""
TORTURE_INITRD="$KVM/initrd"; export TORTURE_INITRD
TORTURE_KCONFIG_ARG=""
TORTURE_KCONFIG_GDB_ARG=""
TORTURE_BOOT_GDB_ARG=""
TORTURE_QEMU_GDB_ARG=""
TORTURE_KCONFIG_KASAN_ARG="" TORTURE_KCONFIG_KASAN_ARG=""
TORTURE_KCONFIG_KCSAN_ARG="" TORTURE_KCONFIG_KCSAN_ARG=""
TORTURE_KMAKE_ARG="" TORTURE_KMAKE_ARG=""
...@@ -46,6 +49,7 @@ jitter="-1" ...@@ -46,6 +49,7 @@ jitter="-1"
usage () { usage () {
echo "Usage: $scriptname optional arguments:" echo "Usage: $scriptname optional arguments:"
echo " --allcpus"
echo " --bootargs kernel-boot-arguments" echo " --bootargs kernel-boot-arguments"
echo " --bootimage relative-path-to-kernel-boot-image" echo " --bootimage relative-path-to-kernel-boot-image"
echo " --buildonly" echo " --buildonly"
...@@ -55,17 +59,19 @@ usage () { ...@@ -55,17 +59,19 @@ usage () {
echo " --defconfig string" echo " --defconfig string"
echo " --dryrun sched|script" echo " --dryrun sched|script"
echo " --duration minutes" echo " --duration minutes"
echo " --gdb"
echo " --help"
echo " --interactive" echo " --interactive"
echo " --jitter N [ maxsleep (us) [ maxspin (us) ] ]" echo " --jitter N [ maxsleep (us) [ maxspin (us) ] ]"
echo " --kconfig Kconfig-options" echo " --kconfig Kconfig-options"
echo " --kmake-arg kernel-make-arguments" echo " --kmake-arg kernel-make-arguments"
echo " --mac nn:nn:nn:nn:nn:nn" echo " --mac nn:nn:nn:nn:nn:nn"
echo " --memory megabytes | nnnG" echo " --memory megabytes|nnnG"
echo " --no-initrd" echo " --no-initrd"
echo " --qemu-args qemu-arguments" echo " --qemu-args qemu-arguments"
echo " --qemu-cmd qemu-system-..." echo " --qemu-cmd qemu-system-..."
echo " --results absolute-pathname" echo " --results absolute-pathname"
echo " --torture rcu" echo " --torture lock|rcu|rcuscale|refscale|scf"
echo " --trust-make" echo " --trust-make"
exit 1 exit 1
} }
...@@ -126,6 +132,14 @@ do ...@@ -126,6 +132,14 @@ do
dur=$(($2*60)) dur=$(($2*60))
shift shift
;; ;;
--gdb)
TORTURE_KCONFIG_GDB_ARG="CONFIG_DEBUG_INFO=y"; export TORTURE_KCONFIG_GDB_ARG
TORTURE_BOOT_GDB_ARG="nokaslr"; export TORTURE_BOOT_GDB_ARG
TORTURE_QEMU_GDB_ARG="-s -S"; export TORTURE_QEMU_GDB_ARG
;;
--help|-h)
usage
;;
--interactive)
TORTURE_QEMU_INTERACTIVE=1; export TORTURE_QEMU_INTERACTIVE
;;
...@@ -184,13 +198,13 @@ do
shift
;;
--torture)
checkarg --torture "(suite name)" "$#" "$2" '^\(lock\|rcu\|rcuscale\|refscale\|scf\)$' '^--'
TORTURE_SUITE=$2
shift
if test "$TORTURE_SUITE" = rcuscale || test "$TORTURE_SUITE" = refscale
then
# If you really want jitter for refscale or
# rcuscale, specify it after specifying the rcuscale
# or the refscale. (But why jitter in these cases?)
jitter=0
fi
...@@ -248,6 +262,15 @@ do
done
touch $T/cfgcpu
configs_derep="`echo $configs_derep | sed -e "s/\<CFLIST\>/$defaultconfigs/g"`"
if test -n "$TORTURE_KCONFIG_GDB_ARG"
then
if test "`echo $configs_derep | wc -w`" -gt 1
then
echo "The --config list is: $configs_derep."
echo "Only one --config permitted with --gdb, terminating."
exit 1
fi
fi
for CF1 in $configs_derep
do
if test -f "$CONFIGFRAG/$CF1"
...@@ -323,6 +346,9 @@ TORTURE_BUILDONLY="$TORTURE_BUILDONLY"; export TORTURE_BUILDONLY
TORTURE_DEFCONFIG="$TORTURE_DEFCONFIG"; export TORTURE_DEFCONFIG
TORTURE_INITRD="$TORTURE_INITRD"; export TORTURE_INITRD
TORTURE_KCONFIG_ARG="$TORTURE_KCONFIG_ARG"; export TORTURE_KCONFIG_ARG
TORTURE_KCONFIG_GDB_ARG="$TORTURE_KCONFIG_GDB_ARG"; export TORTURE_KCONFIG_GDB_ARG
TORTURE_BOOT_GDB_ARG="$TORTURE_BOOT_GDB_ARG"; export TORTURE_BOOT_GDB_ARG
TORTURE_QEMU_GDB_ARG="$TORTURE_QEMU_GDB_ARG"; export TORTURE_QEMU_GDB_ARG
TORTURE_KCONFIG_KASAN_ARG="$TORTURE_KCONFIG_KASAN_ARG"; export TORTURE_KCONFIG_KASAN_ARG
TORTURE_KCONFIG_KCSAN_ARG="$TORTURE_KCONFIG_KCSAN_ARG"; export TORTURE_KCONFIG_KCSAN_ARG
TORTURE_KMAKE_ARG="$TORTURE_KMAKE_ARG"; export TORTURE_KMAKE_ARG
...
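Putting the new flag together, a debugging session might be launched as below; the scenario name and duration are illustrative, and note from the configs_derep check above that --gdb tolerates only a single --configs scenario:

$ tools/testing/selftests/rcutorture/bin/kvm.sh --gdb --configs TREE03 --duration 5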
...@@ -33,8 +33,8 @@ then
fi
cat /dev/null > $file.diags
# Check for proper termination, except for rcuscale and refscale.
if test "$TORTURE_SUITE" != rcuscale && test "$TORTURE_SUITE" != refscale
then
# check for abject failure
...@@ -67,6 +67,7 @@ then
grep --binary-files=text 'torture:.*ver:' $file |
egrep --binary-files=text -v '\(null\)|rtc: 000000000* ' |
sed -e 's/^(initramfs)[^]]*] //' -e 's/^\[[^]]*] //' |
sed -e 's/^.*ver: //' |
awk '
BEGIN {
ver = 0;
...@@ -74,13 +75,13 @@ then
}
{
if (!badseq && ($1 + 0 != $1 || $1 <= ver)) {
badseqno1 = ver;
badseqno2 = $1;
badseqnr = NR;
badseq = 1;
}
ver = $1
}
END {
...
...@@ -16,5 +16,6 @@ CONFIG_RCU_NOCB_CPU=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
#CHECK#CONFIG_PROVE_RCU=y
CONFIG_PROVE_RCU_LIST=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
...@@ -11,6 +11,6 @@
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 rcuscale.shutdown=1 \
rcuscale.verbose=1
}
CONFIG_SCF_TORTURE_TEST=y
CONFIG_PRINTK_TIME=y
CONFIG_SMP=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=y
CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_PROVE_LOCKING=n
CONFIG_SMP=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
#
# Torture-suite-dependent shell functions for the rest of the scripts.
#
# Copyright (C) Facebook, 2020
#
# Authors: Paul E. McKenney <paulmck@kernel.org>
# scftorture_param_onoff bootparam-string config-file
#
# Adds onoff scftorture module parameters to kernels having it.
scftorture_param_onoff () {
if ! bootparam_hotplug_cpu "$1" && configfrag_hotplug_cpu "$2"
then
echo CPU-hotplug kernel, adding scftorture onoff. 1>&2
echo scftorture.onoff_interval=1000 scftorture.onoff_holdoff=30
fi
}
# per_version_boot_params bootparam-string config-file seconds
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 `scftorture_param_onoff "$1" "$2"` \
scftorture.stat_interval=15 \
scftorture.shutdown_secs=$3 \
scftorture.verbose=1 \
scf
}
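On a CPU-hotplug-capable kernel, the two functions above would therefore emit a boot-parameter string along the following lines; the shutdown_secs value comes from a hypothetical 1800-second run duration:

scftorture.onoff_interval=1000 scftorture.onoff_holdoff=30 scftorture.stat_interval=15 scftorture.shutdown_secs=1800 scftorture.verbose=1 scf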
The rcutorture scripting tools automatically create an initrd containing
a single statically linked binary named "init" that loops over a
very long sleep() call. This creation is done by
tools/testing/selftests/rcutorture/bin/mkinitrd.sh.

However, if you don't like the notion of statically linked bare-bones
userspace environments, you might wish to press an existing initrd
into service:
------------------------------------------------------------------------
cd tools/testing/selftests/rcutorture
...@@ -15,24 +14,3 @@ mkdir initrd
cd initrd
cpio -id < /tmp/initrd.img.zcat
# Manually verify that initrd contains needed binaries and libraries.
------------------------------------------------------------------------
Interestingly enough, if you are running rcutorture, you don't really
need userspace in many cases. Running without userspace has the
advantage of allowing you to test your kernel independently of the
distro in place, the root-filesystem layout, and so on. To make this
happen, put the following script in the initrd's tree's "/init" file,
with 0755 mode.
------------------------------------------------------------------------
#!/bin/sh
while :
do
sleep 10
done
------------------------------------------------------------------------
This approach also allows most of the binaries and libraries in the
initrd filesystem to be dispensed with, which can save significant
space in rcutorture's "res" directory.
Normally, a minimal initrd is created automatically by the rcutorture
scripting. But minimal really does mean "minimal", namely just a single
root directory with a single statically linked executable named "init":

$ size tools/testing/selftests/rcutorture/initrd/init
text data bss dec hex filename
328 0 8 336 150 tools/testing/selftests/rcutorture/initrd/init
Suppose you need to run some scripts, perhaps to monitor or control
some aspect of the rcutorture testing. This will require a more fully
filled-out userspace, perhaps containing libraries, executables for
the shell and other utilities, and so forth. In that case, place your
desired filesystem here:
tools/testing/selftests/rcutorture/initrd
For example, your tools/testing/selftests/rcutorture/initrd/init might
be a script that does any needed mount operations and starts whatever
scripts need starting to properly monitor or control your testing.
The next rcutorture build will then incorporate this filesystem into
the kernel image that is passed to qemu.
Or maybe you need a real root filesystem for some reason, in which case
please read on!
The remainder of this document describes one way to create the
rcu-test-image file that contains the filesystem used by the guest-OS
kernel. There are probably much better ways of doing this, and this
filesystem could no doubt be smaller. It is probably also possible to
simply download an appropriate image from any number of places.
That said, here are the commands:
...@@ -36,7 +61,7 @@ References:
https://help.ubuntu.com/community/JeOSVMBuilder
http://wiki.libvirt.org/page/UbuntuKVMWalkthrough
http://www.moe.co.uk/2011/01/07/pci_add_option_rom-failed-to-find-romfile-pxe-rtl8139-bin/ -- "apt-get install kvm-pxe"
https://www.landley.net/writing/rootfs-howto.html
https://en.wikipedia.org/wiki/Initrd
https://en.wikipedia.org/wiki/Cpio
http://wiki.libvirt.org/page/UbuntuKVMWalkthrough