Commit 630e1e0b authored by Ingo Molnar

Merge branch 'rcu/next' of...

Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Conflicts:
	arch/x86/kernel/ptrace.c

Pull the latest RCU tree from Paul E. McKenney:

"       The major features of this series are:

  1.	A first version of no-callbacks CPUs.  This version prohibits
  	offlining CPU 0, but only when enabled via CONFIG_RCU_NOCB_CPU=y.
  	Relaxing this constraint is in progress, but not yet ready
  	for prime time.  These commits were posted to LKML at
  	https://lkml.org/lkml/2012/10/30/724, and are at branch rcu/nocb.

  2.	Changes to SRCU that allow statically initialized srcu_struct
  	structures.  These commits were posted to LKML at
  	https://lkml.org/lkml/2012/10/30/296, and are at branch rcu/srcu.

  3.	Restructuring of RCU's debugfs output.  These commits were posted
  	to LKML at https://lkml.org/lkml/2012/10/30/341, and are at
  	branch rcu/tracing.

  4.	Additional CPU-hotplug/RCU improvements, posted to LKML at
  	https://lkml.org/lkml/2012/10/30/327, and are at branch rcu/hotplug.
  	Note that the commit eliminating __stop_machine() was judged to
  	be too high a risk, so it is deferred to 3.9.

  5.	Changes to RCU's idle interface, most notably a new module
  	parameter that redirects normal grace-period operations to
  	their expedited equivalents.  These were posted to LKML at
  	https://lkml.org/lkml/2012/10/30/739, and are at branch rcu/idle.

  6.	Additional diagnostics for RCU's CPU stall warning facility,
  	posted to LKML at https://lkml.org/lkml/2012/10/30/315, and
  	are at branch rcu/stall.  The most notable change reduces the
  	default RCU CPU stall-warning time from 60 seconds to 21 seconds,
  	so that it once again happens sooner than the softlockup timeout.

  7.	Documentation updates, which were posted to LKML at
  	https://lkml.org/lkml/2012/10/30/280, and are at branch rcu/doc.
  	A couple of late-breaking changes were posted at
  	https://lkml.org/lkml/2012/11/16/634 and
  	https://lkml.org/lkml/2012/11/16/547.

  8.	Miscellaneous fixes, which were posted to LKML at
  	https://lkml.org/lkml/2012/10/30/309, along with a late-breaking
  	change posted at Fri, 16 Nov 2012 11:26:25 -0800 with message-ID
  	<20121116192625.GA447@linux.vnet.ibm.com>, but which lkml.org
  	seems to have missed.  These are at branch rcu/fixes.

  9.	Finally, a fix for a lockdep-RCU splat was posted to LKML
  	at https://lkml.org/lkml/2012/11/7/486.  This is at rcu/next. "
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 7e5530af 91d1aa43
...@@ -186,7 +186,7 @@ Bibtex Entries ...@@ -186,7 +186,7 @@ Bibtex Entries
@article{Kung80 @article{Kung80
,author="H. T. Kung and Q. Lehman" ,author="H. T. Kung and Q. Lehman"
,title="Concurrent Maintenance of Binary Search Trees" ,title="Concurrent Manipulation of Binary Search Trees"
,Year="1980" ,Year="1980"
,Month="September" ,Month="September"
,journal="ACM Transactions on Database Systems" ,journal="ACM Transactions on Database Systems"
......
...@@ -271,15 +271,14 @@ over a rather long period of time, but improvements are always welcome! ...@@ -271,15 +271,14 @@ over a rather long period of time, but improvements are always welcome!
The same cautions apply to call_rcu_bh() and call_rcu_sched(). The same cautions apply to call_rcu_bh() and call_rcu_sched().
9. All RCU list-traversal primitives, which include 9. All RCU list-traversal primitives, which include
rcu_dereference(), list_for_each_entry_rcu(), rcu_dereference(), list_for_each_entry_rcu(), and
list_for_each_continue_rcu(), and list_for_each_safe_rcu(), list_for_each_safe_rcu(), must be either within an RCU read-side
must be either within an RCU read-side critical section or critical section or must be protected by appropriate update-side
must be protected by appropriate update-side locks. RCU locks. RCU read-side critical sections are delimited by
read-side critical sections are delimited by rcu_read_lock() rcu_read_lock() and rcu_read_unlock(), or by similar primitives
and rcu_read_unlock(), or by similar primitives such as such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
rcu_read_lock_bh() and rcu_read_unlock_bh(), in which case case the matching rcu_dereference() primitive must be used in
the matching rcu_dereference() primitive must be used in order order to keep lockdep happy, in this case, rcu_dereference_bh().
to keep lockdep happy, in this case, rcu_dereference_bh().
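As an illustrative sketch (not taken from this patch; "gp" and do_something_with() are invented placeholders), a _bh read side that keeps lockdep happy pairs the _bh lock primitives with rcu_dereference_bh():

	rcu_read_lock_bh();
	p = rcu_dereference_bh(gp);	/* fetch the RCU-protected pointer */
	if (p)
		do_something_with(p);	/* placeholder reader work */
	rcu_read_unlock_bh();

A plain rcu_read_lock()/rcu_read_unlock() pair would instead use rcu_dereference().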
The reason that it is permissible to use RCU list-traversal The reason that it is permissible to use RCU list-traversal
primitives when the update-side lock is held is that doing so primitives when the update-side lock is held is that doing so
......
...@@ -205,7 +205,7 @@ RCU ("read-copy update") its name. The RCU code is as follows: ...@@ -205,7 +205,7 @@ RCU ("read-copy update") its name. The RCU code is as follows:
audit_copy_rule(&ne->rule, &e->rule); audit_copy_rule(&ne->rule, &e->rule);
ne->rule.action = newaction; ne->rule.action = newaction;
ne->rule.file_count = newfield_count; ne->rule.file_count = newfield_count;
list_replace_rcu(e, ne); list_replace_rcu(&e->list, &ne->list);
call_rcu(&e->rcu, audit_free_rule); call_rcu(&e->rcu, audit_free_rule);
return 0; return 0;
} }
......
...@@ -20,7 +20,7 @@ release_referenced() delete() ...@@ -20,7 +20,7 @@ release_referenced() delete()
{ { { {
... write_lock(&list_lock); ... write_lock(&list_lock);
atomic_dec(&el->rc, relfunc) ... atomic_dec(&el->rc, relfunc) ...
... delete_element ... remove_element
} write_unlock(&list_lock); } write_unlock(&list_lock);
... ...
if (atomic_dec_and_test(&el->rc)) if (atomic_dec_and_test(&el->rc))
...@@ -52,7 +52,7 @@ release_referenced() delete() ...@@ -52,7 +52,7 @@ release_referenced() delete()
{ { { {
... spin_lock(&list_lock); ... spin_lock(&list_lock);
if (atomic_dec_and_test(&el->rc)) ... if (atomic_dec_and_test(&el->rc)) ...
call_rcu(&el->head, el_free); delete_element call_rcu(&el->head, el_free); remove_element
... spin_unlock(&list_lock); ... spin_unlock(&list_lock);
} ... } ...
if (atomic_dec_and_test(&el->rc)) if (atomic_dec_and_test(&el->rc))
...@@ -64,3 +64,60 @@ Sometimes, a reference to the element needs to be obtained in the ...@@ -64,3 +64,60 @@ Sometimes, a reference to the element needs to be obtained in the
update (write) stream. In such cases, atomic_inc_not_zero() might be update (write) stream. In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock. One might instead overkill, since we hold the update-side spinlock. One might instead
use atomic_inc() in such cases. use atomic_inc() in such cases.
It is not always convenient to deal with "FAIL" in the
search_and_reference() code path. In such cases, the
atomic_dec_and_test() may be moved from delete() to el_free()
as follows:
1. 2.
add() search_and_reference()
{ {
alloc_object rcu_read_lock();
... search_for_element
atomic_set(&el->rc, 1); atomic_inc(&el->rc);
spin_lock(&list_lock); ...
add_element rcu_read_unlock();
... }
spin_unlock(&list_lock); 4.
} delete()
3. {
release_referenced() spin_lock(&list_lock);
{ ...
... remove_element
if (atomic_dec_and_test(&el->rc)) spin_unlock(&list_lock);
kfree(el); ...
... call_rcu(&el->head, el_free);
} ...
5. }
void el_free(struct rcu_head *rhp)
{
release_referenced();
}
The key point is that the initial reference added by add() is not removed
until after a grace period has elapsed following removal. This means that
search_and_reference() cannot find this element, which means that the value
of el->rc cannot increase. Thus, once it reaches zero, there are no
readers that can or ever will be able to reference the element. The
element can therefore safely be freed. This in turn guarantees that if
any reader finds the element, that reader may safely acquire a reference
without checking the value of the reference counter.
In cases where delete() can sleep, synchronize_rcu() can be called from
delete(), so that el_free() can be subsumed into delete as follows:
4.
delete()
{
spin_lock(&list_lock);
...
remove_element
spin_unlock(&list_lock);
...
synchronize_rcu();
if (atomic_dec_and_test(&el->rc))
kfree(el);
...
}
...@@ -499,6 +499,8 @@ The foo_reclaim() function might appear as follows: ...@@ -499,6 +499,8 @@ The foo_reclaim() function might appear as follows:
{ {
struct foo *fp = container_of(rp, struct foo, rcu); struct foo *fp = container_of(rp, struct foo, rcu);
foo_cleanup(fp->a);
kfree(fp); kfree(fp);
} }
...@@ -521,6 +523,12 @@ o Use call_rcu() -after- removing a data element from an ...@@ -521,6 +523,12 @@ o Use call_rcu() -after- removing a data element from an
read-side critical sections that might be referencing that read-side critical sections that might be referencing that
data item. data item.
If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:
kfree_rcu(old_fp, rcu);
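As a minimal sketch (reusing the struct foo example from earlier in this document; "old_fp" is the element already removed from the list), the second argument names the rcu_head field inside the structure, which lets kfree_rcu() locate and free the enclosing object after a grace period:

	struct foo {
		int a;
		struct rcu_head rcu;
	};

	/* Callback-based form: */
	static void foo_reclaim(struct rcu_head *rp)
	{
		kfree(container_of(rp, struct foo, rcu));
	}
	...
	call_rcu(&old_fp->rcu, foo_reclaim);

	/* Equivalent shorthand when the callback would only call kfree(): */
	kfree_rcu(old_fp, rcu);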
Again, see checklist.txt for additional rules governing the use of RCU. Again, see checklist.txt for additional rules governing the use of RCU.
...@@ -773,8 +781,8 @@ a single atomic update, converting to RCU will require special care. ...@@ -773,8 +781,8 @@ a single atomic update, converting to RCU will require special care.
Also, the presence of synchronize_rcu() means that the RCU version of Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block. If this is a problem, there is a callback-based delete() can now block. If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu(), that can be used in mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
place of synchronize_rcu(). be used in place of synchronize_rcu().
7. FULL LIST OF RCU APIs 7. FULL LIST OF RCU APIs
...@@ -789,9 +797,7 @@ RCU list traversal: ...@@ -789,9 +797,7 @@ RCU list traversal:
list_for_each_entry_rcu list_for_each_entry_rcu
hlist_for_each_entry_rcu hlist_for_each_entry_rcu
hlist_nulls_for_each_entry_rcu hlist_nulls_for_each_entry_rcu
list_for_each_entry_continue_rcu
list_for_each_continue_rcu (to be deprecated in favor of new
list_for_each_entry_continue_rcu)
RCU pointer/list update: RCU pointer/list update:
...@@ -813,6 +819,7 @@ RCU: Critical sections Grace period Barrier ...@@ -813,6 +819,7 @@ RCU: Critical sections Grace period Barrier
rcu_read_unlock synchronize_rcu rcu_read_unlock synchronize_rcu
rcu_dereference synchronize_rcu_expedited rcu_dereference synchronize_rcu_expedited
call_rcu call_rcu
kfree_rcu
bh: Critical sections Grace period Barrier bh: Critical sections Grace period Barrier
......
...@@ -2394,6 +2394,27 @@ bytes respectively. Such letter suffixes can also be entirely omitted. ...@@ -2394,6 +2394,27 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
ramdisk_size= [RAM] Sizes of RAM disks in kilobytes ramdisk_size= [RAM] Sizes of RAM disks in kilobytes
See Documentation/blockdev/ramdisk.txt. See Documentation/blockdev/ramdisk.txt.
rcu_nocbs= [KNL,BOOT]
In kernels built with CONFIG_RCU_NOCB_CPU=y, set
the specified list of CPUs to be no-callback CPUs.
Invocation of these CPUs' RCU callbacks will
be offloaded to "rcuoN" kthreads created for
that purpose. This reduces OS jitter on the
offloaded CPUs, which can be useful for HPC and
real-time workloads. It can also improve energy
efficiency for asymmetric multiprocessors.
rcu_nocbs_poll [KNL,BOOT]
Rather than requiring that offloaded CPUs
(specified by rcu_nocbs= above) explicitly
awaken the corresponding "rcuoN" kthreads,
make these kthreads poll for callbacks.
This improves the real-time response for the
offloaded CPUs by relieving them of the need to
wake up the corresponding kthread, but degrades
energy efficiency by requiring that the kthreads
periodically wake up to do the polling.
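For example (an illustrative command-line fragment; the CPU list is arbitrary), offloading callbacks from CPUs 1-7 of an 8-CPU system and making the "rcuoN" kthreads poll rather than wait to be awakened:

	rcu_nocbs=1-7 rcu_nocbs_poll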
rcutree.blimit= [KNL,BOOT] rcutree.blimit= [KNL,BOOT]
Set maximum number of finished RCU callbacks to process Set maximum number of finished RCU callbacks to process
in one batch. in one batch.
......
...@@ -251,12 +251,13 @@ And there are a number of things that _must_ or _must_not_ be assumed: ...@@ -251,12 +251,13 @@ And there are a number of things that _must_ or _must_not_ be assumed:
And for: And for:
*A = X; Y = *A; *A = X; *(A + 4) = Y;
we may get either of: we may get any of:
STORE *A = X; Y = LOAD *A; STORE *A = X; STORE *(A + 4) = Y;
STORE *A = Y = X; STORE *(A + 4) = Y; STORE *A = X;
STORE {*A, *(A + 4) } = {X, Y};
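If the two stores must stay distinct and in program order, the usual remedy in kernels of this vintage is ACCESS_ONCE(), whose volatile casts keep the compiler from merging or reordering the accesses; ordering as seen by other CPUs additionally requires a barrier such as smp_wmb(). A minimal sketch:

	ACCESS_ONCE(*A) = X;
	smp_wmb();			/* order the two stores for other CPUs */
	ACCESS_ONCE(*(A + 4)) = Y;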
========================= =========================
......
...@@ -300,15 +300,16 @@ config SECCOMP_FILTER ...@@ -300,15 +300,16 @@ config SECCOMP_FILTER
See Documentation/prctl/seccomp_filter.txt for details. See Documentation/prctl/seccomp_filter.txt for details.
config HAVE_RCU_USER_QS config HAVE_CONTEXT_TRACKING
bool bool
help help
Provide kernel entry/exit hooks necessary for userspace Provide kernel/user boundaries probes necessary for subsystems
RCU extended quiescent state. Syscalls need to be wrapped inside that need it, such as userspace RCU extended quiescent state.
rcu_user_exit()-rcu_user_enter() through the slow path using Syscalls need to be wrapped inside user_exit()-user_enter() through
TIF_NOHZ flag. Exceptions handlers must be wrapped as well. Irqs the slow path using TIF_NOHZ flag. Exceptions handlers must be
are already protected inside rcu_irq_enter/rcu_irq_exit() but wrapped as well. Irqs are already protected inside
preemption or signal handling on irq exit still need to be protected. rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
irq exit still need to be protected.
config HAVE_VIRT_CPU_ACCOUNTING config HAVE_VIRT_CPU_ACCOUNTING
bool bool
......
...@@ -648,7 +648,7 @@ static void stack_proc(void *arg) ...@@ -648,7 +648,7 @@ static void stack_proc(void *arg)
struct task_struct *from = current, *to = arg; struct task_struct *from = current, *to = arg;
to->thread.saved_task = from; to->thread.saved_task = from;
rcu_switch(from, to); rcu_user_hooks_switch(from, to);
switch_to(from, to, from); switch_to(from, to, from);
} }
......
...@@ -106,7 +106,7 @@ config X86 ...@@ -106,7 +106,7 @@ config X86
select KTIME_SCALAR if X86_32 select KTIME_SCALAR if X86_32
select GENERIC_STRNCPY_FROM_USER select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER select GENERIC_STRNLEN_USER
select HAVE_RCU_USER_QS if X86_64 select HAVE_CONTEXT_TRACKING if X86_64
select HAVE_IRQ_TIME_ACCOUNTING select HAVE_IRQ_TIME_ACCOUNTING
select GENERIC_KERNEL_THREAD select GENERIC_KERNEL_THREAD
select GENERIC_KERNEL_EXECVE select GENERIC_KERNEL_EXECVE
......
#ifndef _ASM_X86_RCU_H #ifndef _ASM_X86_CONTEXT_TRACKING_H
#define _ASM_X86_RCU_H #define _ASM_X86_CONTEXT_TRACKING_H
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/context_tracking.h>
#include <linux/rcupdate.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
static inline void exception_enter(struct pt_regs *regs) static inline void exception_enter(struct pt_regs *regs)
{ {
rcu_user_exit(); user_exit();
} }
static inline void exception_exit(struct pt_regs *regs) static inline void exception_exit(struct pt_regs *regs)
{ {
#ifdef CONFIG_RCU_USER_QS #ifdef CONFIG_CONTEXT_TRACKING
if (user_mode(regs)) if (user_mode(regs))
rcu_user_enter(); user_enter();
#endif #endif
} }
#else /* __ASSEMBLY__ */ #else /* __ASSEMBLY__ */
#ifdef CONFIG_RCU_USER_QS #ifdef CONFIG_CONTEXT_TRACKING
# define SCHEDULE_USER call schedule_user # define SCHEDULE_USER call schedule_user
#else #else
# define SCHEDULE_USER call schedule # define SCHEDULE_USER call schedule
......
...@@ -56,7 +56,7 @@ ...@@ -56,7 +56,7 @@
#include <asm/ftrace.h> #include <asm/ftrace.h>
#include <asm/percpu.h> #include <asm/percpu.h>
#include <asm/asm.h> #include <asm/asm.h>
#include <asm/rcu.h> #include <asm/context_tracking.h>
#include <asm/smap.h> #include <asm/smap.h>
#include <linux/err.h> #include <linux/err.h>
......
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <linux/hw_breakpoint.h> #include <linux/hw_breakpoint.h>
#include <linux/rcupdate.h> #include <linux/rcupdate.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/context_tracking.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
...@@ -1491,7 +1492,7 @@ long syscall_trace_enter(struct pt_regs *regs) ...@@ -1491,7 +1492,7 @@ long syscall_trace_enter(struct pt_regs *regs)
{ {
long ret = 0; long ret = 0;
rcu_user_exit(); user_exit();
/* /*
* If we stepped into a sysenter/syscall insn, it trapped in * If we stepped into a sysenter/syscall insn, it trapped in
...@@ -1546,7 +1547,7 @@ void syscall_trace_leave(struct pt_regs *regs) ...@@ -1546,7 +1547,7 @@ void syscall_trace_leave(struct pt_regs *regs)
* or do_notify_resume(), in which case we can be in RCU * or do_notify_resume(), in which case we can be in RCU
* user mode. * user mode.
*/ */
rcu_user_exit(); user_exit();
audit_syscall_exit(regs); audit_syscall_exit(regs);
...@@ -1564,5 +1565,5 @@ void syscall_trace_leave(struct pt_regs *regs) ...@@ -1564,5 +1565,5 @@ void syscall_trace_leave(struct pt_regs *regs)
if (step || test_thread_flag(TIF_SYSCALL_TRACE)) if (step || test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall_exit(regs, step); tracehook_report_syscall_exit(regs, step);
rcu_user_enter(); user_enter();
} }
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <linux/user-return-notifier.h> #include <linux/user-return-notifier.h>
#include <linux/uprobes.h> #include <linux/uprobes.h>
#include <linux/context_tracking.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/ucontext.h> #include <asm/ucontext.h>
...@@ -816,7 +817,7 @@ static void do_signal(struct pt_regs *regs) ...@@ -816,7 +817,7 @@ static void do_signal(struct pt_regs *regs)
void void
do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags) do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
{ {
rcu_user_exit(); user_exit();
#ifdef CONFIG_X86_MCE #ifdef CONFIG_X86_MCE
/* notify userspace of pending MCEs */ /* notify userspace of pending MCEs */
...@@ -838,7 +839,7 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags) ...@@ -838,7 +839,7 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
if (thread_info_flags & _TIF_USER_RETURN_NOTIFY) if (thread_info_flags & _TIF_USER_RETURN_NOTIFY)
fire_user_return_notifiers(); fire_user_return_notifiers();
rcu_user_enter(); user_enter();
} }
void signal_fault(struct pt_regs *regs, void __user *frame, char *where) void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
......
...@@ -55,7 +55,7 @@ ...@@ -55,7 +55,7 @@
#include <asm/i387.h> #include <asm/i387.h>
#include <asm/fpu-internal.h> #include <asm/fpu-internal.h>
#include <asm/mce.h> #include <asm/mce.h>
#include <asm/rcu.h> #include <asm/context_tracking.h>
#include <asm/mach_traps.h> #include <asm/mach_traps.h>
......
...@@ -18,7 +18,7 @@ ...@@ -18,7 +18,7 @@
#include <asm/pgalloc.h> /* pgd_*(), ... */ #include <asm/pgalloc.h> /* pgd_*(), ... */
#include <asm/kmemcheck.h> /* kmemcheck_*(), ... */ #include <asm/kmemcheck.h> /* kmemcheck_*(), ... */
#include <asm/fixmap.h> /* VSYSCALL_START */ #include <asm/fixmap.h> /* VSYSCALL_START */
#include <asm/rcu.h> /* exception_enter(), ... */ #include <asm/context_tracking.h> /* exception_enter(), ... */
/* /*
* Page fault error code bits: * Page fault error code bits:
......
#ifndef _LINUX_CONTEXT_TRACKING_H
#define _LINUX_CONTEXT_TRACKING_H
#ifdef CONFIG_CONTEXT_TRACKING
#include <linux/sched.h>
extern void user_enter(void);
extern void user_exit(void);
extern void context_tracking_task_switch(struct task_struct *prev,
struct task_struct *next);
#else
static inline void user_enter(void) { }
static inline void user_exit(void) { }
static inline void context_tracking_task_switch(struct task_struct *prev,
struct task_struct *next) { }
#endif /* !CONFIG_CONTEXT_TRACKING */
#endif
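A rough sketch of the intended usage (the function names here are invented for illustration, not part of this patch): architectures call user_exit() on every entry from userspace and user_enter() on the way back out, and the empty stubs above make the calls harmless when CONFIG_CONTEXT_TRACKING is disabled:

	/* Hypothetical arch exception path. */
	void arch_handle_exception(struct pt_regs *regs)
	{
		user_exit();			/* userspace -> kernel: exit RCU extended QS */
		handle_exception(regs);		/* placeholder for the real handler */
		if (user_mode(regs))
			user_enter();		/* kernel -> userspace: re-enter extended QS */
	}

The x86 exception_enter()/exception_exit() helpers shown earlier in this merge follow this pattern.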
...@@ -286,23 +286,6 @@ static inline void list_splice_init_rcu(struct list_head *list, ...@@ -286,23 +286,6 @@ static inline void list_splice_init_rcu(struct list_head *list,
&pos->member != (head); \ &pos->member != (head); \
pos = list_entry_rcu(pos->member.next, typeof(*pos), member)) pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
/**
* list_for_each_continue_rcu
* @pos: the &struct list_head to use as a loop cursor.
* @head: the head for your list.
*
* Iterate over an rcu-protected list, continuing after current point.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as list_add_rcu()
* as long as the traversal is guarded by rcu_read_lock().
*/
#define list_for_each_continue_rcu(pos, head) \
for ((pos) = rcu_dereference_raw(list_next_rcu(pos)); \
(pos) != (head); \
(pos) = rcu_dereference_raw(list_next_rcu(pos)))
/** /**
* list_for_each_entry_continue_rcu - continue iteration over list of given type * list_for_each_entry_continue_rcu - continue iteration over list of given type
* @pos: the type * to use as a loop cursor. * @pos: the type * to use as a loop cursor.
......
...@@ -90,6 +90,25 @@ extern void do_trace_rcu_torture_read(char *rcutorturename, ...@@ -90,6 +90,25 @@ extern void do_trace_rcu_torture_read(char *rcutorturename,
* that started after call_rcu() was invoked. RCU read-side critical * that started after call_rcu() was invoked. RCU read-side critical
* sections are delimited by rcu_read_lock() and rcu_read_unlock(), * sections are delimited by rcu_read_lock() and rcu_read_unlock(),
* and may be nested. * and may be nested.
*
* Note that all CPUs must agree that the grace period extended beyond
* all pre-existing RCU read-side critical sections. On systems with more
* than one CPU, this means that when "func()" is invoked, each CPU is
* guaranteed to have executed a full memory barrier since the end of its
* last RCU read-side critical section whose beginning preceded the call
* to call_rcu(). It also means that each CPU executing an RCU read-side
* critical section that continues beyond the start of "func()" must have
* executed a memory barrier after the call_rcu() but before the beginning
* of that RCU read-side critical section. Note that these guarantees
* include CPUs that are offline, idle, or executing in user mode, as
* well as CPUs that are executing in the kernel.
*
* Furthermore, if CPU A invoked call_rcu() and CPU B invoked the
* resulting RCU callback function "func()", then both CPU A and CPU B are
* guaranteed to execute a full memory barrier during the time interval
* between the call to call_rcu() and the invocation of "func()" -- even
* if CPU A and CPU B are the same CPU (but again only if the system has
* more than one CPU).
*/ */
extern void call_rcu(struct rcu_head *head, extern void call_rcu(struct rcu_head *head,
void (*func)(struct rcu_head *head)); void (*func)(struct rcu_head *head));
...@@ -118,6 +137,9 @@ extern void call_rcu(struct rcu_head *head, ...@@ -118,6 +137,9 @@ extern void call_rcu(struct rcu_head *head,
* OR * OR
* - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context. * - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context.
* These may be nested. * These may be nested.
*
* See the description of call_rcu() for more detailed information on
* memory ordering guarantees.
*/ */
extern void call_rcu_bh(struct rcu_head *head, extern void call_rcu_bh(struct rcu_head *head,
void (*func)(struct rcu_head *head)); void (*func)(struct rcu_head *head));
...@@ -137,6 +159,9 @@ extern void call_rcu_bh(struct rcu_head *head, ...@@ -137,6 +159,9 @@ extern void call_rcu_bh(struct rcu_head *head,
* OR * OR
* anything that disables preemption. * anything that disables preemption.
* These may be nested. * These may be nested.
*
* See the description of call_rcu() for more detailed information on
* memory ordering guarantees.
*/ */
extern void call_rcu_sched(struct rcu_head *head, extern void call_rcu_sched(struct rcu_head *head,
void (*func)(struct rcu_head *rcu)); void (*func)(struct rcu_head *rcu));
...@@ -197,13 +222,13 @@ extern void rcu_user_enter(void); ...@@ -197,13 +222,13 @@ extern void rcu_user_enter(void);
extern void rcu_user_exit(void); extern void rcu_user_exit(void);
extern void rcu_user_enter_after_irq(void); extern void rcu_user_enter_after_irq(void);
extern void rcu_user_exit_after_irq(void); extern void rcu_user_exit_after_irq(void);
extern void rcu_user_hooks_switch(struct task_struct *prev,
struct task_struct *next);
#else #else
static inline void rcu_user_enter(void) { } static inline void rcu_user_enter(void) { }
static inline void rcu_user_exit(void) { } static inline void rcu_user_exit(void) { }
static inline void rcu_user_enter_after_irq(void) { } static inline void rcu_user_enter_after_irq(void) { }
static inline void rcu_user_exit_after_irq(void) { } static inline void rcu_user_exit_after_irq(void) { }
static inline void rcu_user_hooks_switch(struct task_struct *prev,
struct task_struct *next) { }
#endif /* CONFIG_RCU_USER_QS */ #endif /* CONFIG_RCU_USER_QS */
extern void exit_rcu(void); extern void exit_rcu(void);
......
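To make the call_rcu() ordering guarantees documented above concrete, a small sketch (the struct and function names are invented; the updater is assumed to hold its own lock): any store the updater performs before call_rcu() is guaranteed to be visible to the callback, even when the callback runs on a different CPU:

	struct foo {
		int being_freed;		/* set by the updater before call_rcu() */
		struct rcu_head rcu;
	};

	static void foo_free(struct rcu_head *rhp)
	{
		struct foo *fp = container_of(rhp, struct foo, rcu);

		WARN_ON(!fp->being_freed);	/* guaranteed to observe the earlier store */
		kfree(fp);
	}

	static void foo_retire(struct foo *fp)
	{
		fp->being_freed = 1;
		call_rcu(&fp->rcu, foo_free);
	}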
...@@ -109,6 +109,8 @@ extern void update_cpu_load_nohz(void); ...@@ -109,6 +109,8 @@ extern void update_cpu_load_nohz(void);
extern unsigned long get_parent_ip(unsigned long addr); extern unsigned long get_parent_ip(unsigned long addr);
extern void dump_cpu_task(int cpu);
struct seq_file; struct seq_file;
struct cfs_rq; struct cfs_rq;
struct task_group; struct task_group;
...@@ -1844,14 +1846,6 @@ static inline void rcu_copy_process(struct task_struct *p) ...@@ -1844,14 +1846,6 @@ static inline void rcu_copy_process(struct task_struct *p)
#endif #endif
static inline void rcu_switch(struct task_struct *prev,
struct task_struct *next)
{
#ifdef CONFIG_RCU_USER_QS
rcu_user_hooks_switch(prev, next);
#endif
}
static inline void tsk_restore_flags(struct task_struct *task, static inline void tsk_restore_flags(struct task_struct *task,
unsigned long orig_flags, unsigned long flags) unsigned long orig_flags, unsigned long flags)
{ {
......
...@@ -16,8 +16,10 @@ ...@@ -16,8 +16,10 @@
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
* *
* Copyright (C) IBM Corporation, 2006 * Copyright (C) IBM Corporation, 2006
* Copyright (C) Fujitsu, 2012
* *
* Author: Paul McKenney <paulmck@us.ibm.com> * Author: Paul McKenney <paulmck@us.ibm.com>
* Lai Jiangshan <laijs@cn.fujitsu.com>
* *
* For detailed explanation of Read-Copy Update mechanism see - * For detailed explanation of Read-Copy Update mechanism see -
* Documentation/RCU/ *.txt * Documentation/RCU/ *.txt
...@@ -40,6 +42,8 @@ struct rcu_batch { ...@@ -40,6 +42,8 @@ struct rcu_batch {
struct rcu_head *head, **tail; struct rcu_head *head, **tail;
}; };
#define RCU_BATCH_INIT(name) { NULL, &(name.head) }
struct srcu_struct { struct srcu_struct {
unsigned completed; unsigned completed;
struct srcu_struct_array __percpu *per_cpu_ref; struct srcu_struct_array __percpu *per_cpu_ref;
...@@ -70,12 +74,42 @@ int __init_srcu_struct(struct srcu_struct *sp, const char *name, ...@@ -70,12 +74,42 @@ int __init_srcu_struct(struct srcu_struct *sp, const char *name,
__init_srcu_struct((sp), #sp, &__srcu_key); \ __init_srcu_struct((sp), #sp, &__srcu_key); \
}) })
#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name },
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
int init_srcu_struct(struct srcu_struct *sp); int init_srcu_struct(struct srcu_struct *sp);
#define __SRCU_DEP_MAP_INIT(srcu_name)
#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */ #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
void process_srcu(struct work_struct *work);
#define __SRCU_STRUCT_INIT(name) \
{ \
.completed = -300, \
.per_cpu_ref = &name##_srcu_array, \
.queue_lock = __SPIN_LOCK_UNLOCKED(name.queue_lock), \
.running = false, \
.batch_queue = RCU_BATCH_INIT(name.batch_queue), \
.batch_check0 = RCU_BATCH_INIT(name.batch_check0), \
.batch_check1 = RCU_BATCH_INIT(name.batch_check1), \
.batch_done = RCU_BATCH_INIT(name.batch_done), \
.work = __DELAYED_WORK_INITIALIZER(name.work, process_srcu, 0),\
__SRCU_DEP_MAP_INIT(name) \
}
/*
* define and init a srcu struct at build time.
* don't call init_srcu_struct() nor cleanup_srcu_struct() on it.
*/
#define DEFINE_SRCU(name) \
static DEFINE_PER_CPU(struct srcu_struct_array, name##_srcu_array);\
struct srcu_struct name = __SRCU_STRUCT_INIT(name);
#define DEFINE_STATIC_SRCU(name) \
static DEFINE_PER_CPU(struct srcu_struct_array, name##_srcu_array);\
static struct srcu_struct name = __SRCU_STRUCT_INIT(name);
/** /**
* call_srcu() - Queue a callback for invocation after an SRCU grace period * call_srcu() - Queue a callback for invocation after an SRCU grace period
* @sp: srcu_struct in which to queue the callback
......
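As an illustrative use of the new static initializers (the srcu_struct name and helper functions are invented for this sketch), a statically defined SRCU domain needs no init_srcu_struct()/cleanup_srcu_struct() calls at run time:

	DEFINE_STATIC_SRCU(my_srcu);

	int my_reader(void)
	{
		int idx, val;

		idx = srcu_read_lock(&my_srcu);
		val = read_protected_data();	/* placeholder reader work */
		srcu_read_unlock(&my_srcu, idx);
		return val;
	}

	void my_updater(void)
	{
		remove_old_element();		/* placeholder update step */
		synchronize_srcu(&my_srcu);	/* wait for pre-existing readers */
		free_old_element();
	}

The rcutorture changes later in this merge use DEFINE_STATIC_SRCU(srcu_ctl) in exactly this way.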
...@@ -549,6 +549,7 @@ TRACE_EVENT(rcu_torture_read, ...@@ -549,6 +549,7 @@ TRACE_EVENT(rcu_torture_read,
* "EarlyExit": rcu_barrier_callback() piggybacked, thus early exit. * "EarlyExit": rcu_barrier_callback() piggybacked, thus early exit.
* "Inc1": rcu_barrier_callback() piggyback check counter incremented. * "Inc1": rcu_barrier_callback() piggyback check counter incremented.
* "Offline": rcu_barrier_callback() found offline CPU * "Offline": rcu_barrier_callback() found offline CPU
* "OnlineNoCB": rcu_barrier_callback() found online no-CBs CPU.
* "OnlineQ": rcu_barrier_callback() found online CPU with callbacks. * "OnlineQ": rcu_barrier_callback() found online CPU with callbacks.
* "OnlineNQ": rcu_barrier_callback() found online CPU, no callbacks. * "OnlineNQ": rcu_barrier_callback() found online CPU, no callbacks.
* "IRQ": An rcu_barrier_callback() callback posted on remote CPU. * "IRQ": An rcu_barrier_callback() callback posted on remote CPU.
......
...@@ -486,35 +486,35 @@ config PREEMPT_RCU ...@@ -486,35 +486,35 @@ config PREEMPT_RCU
This option enables preemptible-RCU code that is common between This option enables preemptible-RCU code that is common between
the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations. the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
config CONTEXT_TRACKING
bool
config RCU_USER_QS config RCU_USER_QS
bool "Consider userspace as in RCU extended quiescent state" bool "Consider userspace as in RCU extended quiescent state"
depends on HAVE_RCU_USER_QS && SMP depends on HAVE_CONTEXT_TRACKING && SMP
select CONTEXT_TRACKING
help help
This option sets hooks on kernel / userspace boundaries and This option sets hooks on kernel / userspace boundaries and
puts RCU in extended quiescent state when the CPU runs in puts RCU in extended quiescent state when the CPU runs in
userspace. It means that when a CPU runs in userspace, it is userspace. It means that when a CPU runs in userspace, it is
excluded from the global RCU state machine and thus doesn't excluded from the global RCU state machine and thus doesn't
to keep the timer tick on for RCU. try to keep the timer tick on for RCU.
Unless you want to hack and help the development of the full Unless you want to hack and help the development of the full
tickless feature, you shouldn't enable this option. It adds dynticks mode, you shouldn't enable this option. It also
unnecessary overhead. adds unnecessary overhead.
If unsure say N If unsure say N
config RCU_USER_QS_FORCE config CONTEXT_TRACKING_FORCE
bool "Force userspace extended QS by default" bool "Force context tracking"
depends on RCU_USER_QS depends on CONTEXT_TRACKING
help help
Set the hooks in user/kernel boundaries by default in order to Probe on user/kernel boundaries by default in order to
test this feature that treats userspace as an extended quiescent test the features that rely on it such as userspace RCU extended
state until we have a real user like a full adaptive nohz option. quiescent states.
This test is there for debugging until we have a real user like the
Unless you want to hack and help the development of the full full dynticks mode.
tickless feature, you shouldn't enable this option. It adds
unnecessary overhead.
If unsure say N
config RCU_FANOUT config RCU_FANOUT
int "Tree-based hierarchical RCU fanout value" int "Tree-based hierarchical RCU fanout value"
...@@ -582,14 +582,13 @@ config RCU_FAST_NO_HZ ...@@ -582,14 +582,13 @@ config RCU_FAST_NO_HZ
depends on NO_HZ && SMP depends on NO_HZ && SMP
default n default n
help help
This option causes RCU to attempt to accelerate grace periods This option causes RCU to attempt to accelerate grace periods in
in order to allow CPUs to enter dynticks-idle state more order to allow CPUs to enter dynticks-idle state more quickly.
quickly. On the other hand, this option increases the overhead On the other hand, this option increases the overhead of the
of the dynticks-idle checking, particularly on systems with dynticks-idle checking, thus degrading scheduling latency.
large numbers of CPUs.
Say Y if energy efficiency is critically important, particularly Say Y if energy efficiency is critically important, and you don't
if you have relatively few CPUs. care about real-time response.
Say N if you are unsure. Say N if you are unsure.
...@@ -655,6 +654,28 @@ config RCU_BOOST_DELAY ...@@ -655,6 +654,28 @@ config RCU_BOOST_DELAY
Accept the default if unsure. Accept the default if unsure.
config RCU_NOCB_CPU
bool "Offload RCU callback processing from boot-selected CPUs"
depends on TREE_RCU || TREE_PREEMPT_RCU
default n
help
Use this option to reduce OS jitter for aggressive HPC or
real-time workloads. It can also be used to offload RCU
callback invocation to energy-efficient CPUs in battery-powered
asymmetric multiprocessors.
This option offloads callback invocation from the set of
CPUs specified at boot time by the rcu_nocbs parameter.
For each such CPU, a kthread ("rcuoN") will be created to
invoke callbacks, where the "N" is the CPU being offloaded.
Nothing prevents this kthread from running on the specified
CPUs, but (1) the kthreads may be preempted between each
callback, and (2) affinity or cgroups can be used to force
the kthreads to run on whatever set of CPUs is desired.
Say Y here if you want reduced OS jitter on selected CPUs.
Say N here if you are unsure.
endmenu # "RCU Subsystem" endmenu # "RCU Subsystem"
config IKCONFIG config IKCONFIG
......
...@@ -110,6 +110,7 @@ obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o ...@@ -110,6 +110,7 @@ obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o
obj-$(CONFIG_PADATA) += padata.o obj-$(CONFIG_PADATA) += padata.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_JUMP_LABEL) += jump_label.o obj-$(CONFIG_JUMP_LABEL) += jump_label.o
obj-$(CONFIG_CONTEXT_TRACKING) += context_tracking.o
$(obj)/configs.o: $(obj)/config_data.h $(obj)/configs.o: $(obj)/config_data.h
......
#include <linux/context_tracking.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/percpu.h>
#include <linux/hardirq.h>
struct context_tracking {
/*
* When active is false, hooks are not set to
* minimize overhead: TIF flags are cleared
* and calls to user_enter/exit are ignored. This
* may be further optimized using static keys.
*/
bool active;
enum {
IN_KERNEL = 0,
IN_USER,
} state;
};
static DEFINE_PER_CPU(struct context_tracking, context_tracking) = {
#ifdef CONFIG_CONTEXT_TRACKING_FORCE
.active = true,
#endif
};
void user_enter(void)
{
unsigned long flags;
/*
* Some contexts may involve an exception occurring in an irq,
* leading to that nesting:
* rcu_irq_enter() rcu_user_exit() rcu_user_exit() rcu_irq_exit()
* This would mess up the dyntick_nesting count though. And rcu_irq_*()
* helpers are enough to protect RCU uses inside the exception. So
* just return immediately if we detect we are in an IRQ.
*/
if (in_interrupt())
return;
WARN_ON_ONCE(!current->mm);
local_irq_save(flags);
if (__this_cpu_read(context_tracking.active) &&
__this_cpu_read(context_tracking.state) != IN_USER) {
__this_cpu_write(context_tracking.state, IN_USER);
rcu_user_enter();
}
local_irq_restore(flags);
}
void user_exit(void)
{
unsigned long flags;
/*
* Some contexts may involve an exception occurring in an irq,
* leading to that nesting:
* rcu_irq_enter() rcu_user_exit() rcu_user_exit() rcu_irq_exit()
* This would mess up the dyntick_nesting count though. And rcu_irq_*()
* helpers are enough to protect RCU uses inside the exception. So
* just return immediately if we detect we are in an IRQ.
*/
if (in_interrupt())
return;
local_irq_save(flags);
if (__this_cpu_read(context_tracking.state) == IN_USER) {
__this_cpu_write(context_tracking.state, IN_KERNEL);
rcu_user_exit();
}
local_irq_restore(flags);
}
void context_tracking_task_switch(struct task_struct *prev,
struct task_struct *next)
{
if (__this_cpu_read(context_tracking.active)) {
clear_tsk_thread_flag(prev, TIF_NOHZ);
set_tsk_thread_flag(next, TIF_NOHZ);
}
}
...@@ -141,6 +141,23 @@ static ssize_t fscaps_show(struct kobject *kobj, ...@@ -141,6 +141,23 @@ static ssize_t fscaps_show(struct kobject *kobj,
} }
KERNEL_ATTR_RO(fscaps); KERNEL_ATTR_RO(fscaps);
int rcu_expedited;
static ssize_t rcu_expedited_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sprintf(buf, "%d\n", rcu_expedited);
}
static ssize_t rcu_expedited_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
if (kstrtoint(buf, 0, &rcu_expedited))
return -EINVAL;
return count;
}
KERNEL_ATTR_RW(rcu_expedited);
/* /*
* Make /sys/kernel/notes give the raw contents of our kernel .notes section. * Make /sys/kernel/notes give the raw contents of our kernel .notes section.
*/ */
...@@ -182,6 +199,7 @@ static struct attribute * kernel_attrs[] = { ...@@ -182,6 +199,7 @@ static struct attribute * kernel_attrs[] = {
&kexec_crash_size_attr.attr, &kexec_crash_size_attr.attr,
&vmcoreinfo_attr.attr, &vmcoreinfo_attr.attr,
#endif #endif
&rcu_expedited_attr.attr,
NULL NULL
}; };
......
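The net effect of the attribute above is a writable /sys/kernel/rcu_expedited file, and the module_param() added to kernel/rcupdate.c below exposes the same knob on the kernel command line (the built-in parameter name follows the usual file-name prefixing). For instance (illustrative values):

	echo 1 > /sys/kernel/rcu_expedited	# switch synchronize_rcu() et al. to the expedited path
	rcupdate.rcu_expedited=1		# the same setting at boot time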
...@@ -109,4 +109,6 @@ static inline bool __rcu_reclaim(char *rn, struct rcu_head *head) ...@@ -109,4 +109,6 @@ static inline bool __rcu_reclaim(char *rn, struct rcu_head *head)
} }
} }
extern int rcu_expedited;
#endif /* __LINUX_RCU_H */ #endif /* __LINUX_RCU_H */
...@@ -46,12 +46,15 @@ ...@@ -46,12 +46,15 @@
#include <linux/export.h> #include <linux/export.h>
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/module.h>
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
#include <trace/events/rcu.h> #include <trace/events/rcu.h>
#include "rcu.h" #include "rcu.h"
module_param(rcu_expedited, int, 0);
#ifdef CONFIG_PREEMPT_RCU #ifdef CONFIG_PREEMPT_RCU
/* /*
......
...@@ -195,7 +195,7 @@ EXPORT_SYMBOL(rcu_is_cpu_idle); ...@@ -195,7 +195,7 @@ EXPORT_SYMBOL(rcu_is_cpu_idle);
*/ */
int rcu_is_cpu_rrupt_from_idle(void) int rcu_is_cpu_rrupt_from_idle(void)
{ {
return rcu_dynticks_nesting <= 0; return rcu_dynticks_nesting <= 1;
} }
/* /*
......
...@@ -706,7 +706,10 @@ void synchronize_rcu(void) ...@@ -706,7 +706,10 @@ void synchronize_rcu(void)
return; return;
/* Once we get past the fastpath checks, same code as rcu_barrier(). */ /* Once we get past the fastpath checks, same code as rcu_barrier(). */
rcu_barrier(); if (rcu_expedited)
synchronize_rcu_expedited();
else
rcu_barrier();
} }
EXPORT_SYMBOL_GPL(synchronize_rcu); EXPORT_SYMBOL_GPL(synchronize_rcu);
......
...@@ -339,7 +339,6 @@ rcu_stutter_wait(char *title) ...@@ -339,7 +339,6 @@ rcu_stutter_wait(char *title)
struct rcu_torture_ops { struct rcu_torture_ops {
void (*init)(void); void (*init)(void);
void (*cleanup)(void);
int (*readlock)(void); int (*readlock)(void);
void (*read_delay)(struct rcu_random_state *rrsp); void (*read_delay)(struct rcu_random_state *rrsp);
void (*readunlock)(int idx); void (*readunlock)(int idx);
...@@ -431,7 +430,6 @@ static void rcu_torture_deferred_free(struct rcu_torture *p) ...@@ -431,7 +430,6 @@ static void rcu_torture_deferred_free(struct rcu_torture *p)
static struct rcu_torture_ops rcu_ops = { static struct rcu_torture_ops rcu_ops = {
.init = NULL, .init = NULL,
.cleanup = NULL,
.readlock = rcu_torture_read_lock, .readlock = rcu_torture_read_lock,
.read_delay = rcu_read_delay, .read_delay = rcu_read_delay,
.readunlock = rcu_torture_read_unlock, .readunlock = rcu_torture_read_unlock,
...@@ -475,7 +473,6 @@ static void rcu_sync_torture_init(void) ...@@ -475,7 +473,6 @@ static void rcu_sync_torture_init(void)
static struct rcu_torture_ops rcu_sync_ops = { static struct rcu_torture_ops rcu_sync_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = rcu_torture_read_lock, .readlock = rcu_torture_read_lock,
.read_delay = rcu_read_delay, .read_delay = rcu_read_delay,
.readunlock = rcu_torture_read_unlock, .readunlock = rcu_torture_read_unlock,
...@@ -493,7 +490,6 @@ static struct rcu_torture_ops rcu_sync_ops = { ...@@ -493,7 +490,6 @@ static struct rcu_torture_ops rcu_sync_ops = {
static struct rcu_torture_ops rcu_expedited_ops = { static struct rcu_torture_ops rcu_expedited_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = rcu_torture_read_lock, .readlock = rcu_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = rcu_torture_read_unlock, .readunlock = rcu_torture_read_unlock,
...@@ -536,7 +532,6 @@ static void rcu_bh_torture_deferred_free(struct rcu_torture *p) ...@@ -536,7 +532,6 @@ static void rcu_bh_torture_deferred_free(struct rcu_torture *p)
static struct rcu_torture_ops rcu_bh_ops = { static struct rcu_torture_ops rcu_bh_ops = {
.init = NULL, .init = NULL,
.cleanup = NULL,
.readlock = rcu_bh_torture_read_lock, .readlock = rcu_bh_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = rcu_bh_torture_read_unlock, .readunlock = rcu_bh_torture_read_unlock,
...@@ -553,7 +548,6 @@ static struct rcu_torture_ops rcu_bh_ops = { ...@@ -553,7 +548,6 @@ static struct rcu_torture_ops rcu_bh_ops = {
static struct rcu_torture_ops rcu_bh_sync_ops = { static struct rcu_torture_ops rcu_bh_sync_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = rcu_bh_torture_read_lock, .readlock = rcu_bh_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = rcu_bh_torture_read_unlock, .readunlock = rcu_bh_torture_read_unlock,
...@@ -570,7 +564,6 @@ static struct rcu_torture_ops rcu_bh_sync_ops = { ...@@ -570,7 +564,6 @@ static struct rcu_torture_ops rcu_bh_sync_ops = {
static struct rcu_torture_ops rcu_bh_expedited_ops = { static struct rcu_torture_ops rcu_bh_expedited_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = rcu_bh_torture_read_lock, .readlock = rcu_bh_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = rcu_bh_torture_read_unlock, .readunlock = rcu_bh_torture_read_unlock,
...@@ -589,19 +582,7 @@ static struct rcu_torture_ops rcu_bh_expedited_ops = { ...@@ -589,19 +582,7 @@ static struct rcu_torture_ops rcu_bh_expedited_ops = {
* Definitions for srcu torture testing. * Definitions for srcu torture testing.
*/ */
static struct srcu_struct srcu_ctl; DEFINE_STATIC_SRCU(srcu_ctl);
static void srcu_torture_init(void)
{
init_srcu_struct(&srcu_ctl);
rcu_sync_torture_init();
}
static void srcu_torture_cleanup(void)
{
synchronize_srcu(&srcu_ctl);
cleanup_srcu_struct(&srcu_ctl);
}
static int srcu_torture_read_lock(void) __acquires(&srcu_ctl) static int srcu_torture_read_lock(void) __acquires(&srcu_ctl)
{ {
...@@ -672,8 +653,7 @@ static int srcu_torture_stats(char *page) ...@@ -672,8 +653,7 @@ static int srcu_torture_stats(char *page)
} }
static struct rcu_torture_ops srcu_ops = { static struct rcu_torture_ops srcu_ops = {
.init = srcu_torture_init, .init = rcu_sync_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock, .readlock = srcu_torture_read_lock,
.read_delay = srcu_read_delay, .read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock, .readunlock = srcu_torture_read_unlock,
...@@ -687,8 +667,7 @@ static struct rcu_torture_ops srcu_ops = { ...@@ -687,8 +667,7 @@ static struct rcu_torture_ops srcu_ops = {
}; };
static struct rcu_torture_ops srcu_sync_ops = { static struct rcu_torture_ops srcu_sync_ops = {
.init = srcu_torture_init, .init = rcu_sync_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock, .readlock = srcu_torture_read_lock,
.read_delay = srcu_read_delay, .read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock, .readunlock = srcu_torture_read_unlock,
...@@ -712,8 +691,7 @@ static void srcu_torture_read_unlock_raw(int idx) __releases(&srcu_ctl) ...@@ -712,8 +691,7 @@ static void srcu_torture_read_unlock_raw(int idx) __releases(&srcu_ctl)
} }
static struct rcu_torture_ops srcu_raw_ops = { static struct rcu_torture_ops srcu_raw_ops = {
.init = srcu_torture_init, .init = rcu_sync_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock_raw, .readlock = srcu_torture_read_lock_raw,
.read_delay = srcu_read_delay, .read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock_raw, .readunlock = srcu_torture_read_unlock_raw,
...@@ -727,8 +705,7 @@ static struct rcu_torture_ops srcu_raw_ops = { ...@@ -727,8 +705,7 @@ static struct rcu_torture_ops srcu_raw_ops = {
}; };
static struct rcu_torture_ops srcu_raw_sync_ops = { static struct rcu_torture_ops srcu_raw_sync_ops = {
.init = srcu_torture_init, .init = rcu_sync_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock_raw, .readlock = srcu_torture_read_lock_raw,
.read_delay = srcu_read_delay, .read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock_raw, .readunlock = srcu_torture_read_unlock_raw,
...@@ -747,8 +724,7 @@ static void srcu_torture_synchronize_expedited(void) ...@@ -747,8 +724,7 @@ static void srcu_torture_synchronize_expedited(void)
} }
static struct rcu_torture_ops srcu_expedited_ops = { static struct rcu_torture_ops srcu_expedited_ops = {
.init = srcu_torture_init, .init = rcu_sync_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock, .readlock = srcu_torture_read_lock,
.read_delay = srcu_read_delay, .read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock, .readunlock = srcu_torture_read_unlock,
...@@ -783,7 +759,6 @@ static void rcu_sched_torture_deferred_free(struct rcu_torture *p) ...@@ -783,7 +759,6 @@ static void rcu_sched_torture_deferred_free(struct rcu_torture *p)
static struct rcu_torture_ops sched_ops = { static struct rcu_torture_ops sched_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = sched_torture_read_lock, .readlock = sched_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = sched_torture_read_unlock, .readunlock = sched_torture_read_unlock,
...@@ -799,7 +774,6 @@ static struct rcu_torture_ops sched_ops = { ...@@ -799,7 +774,6 @@ static struct rcu_torture_ops sched_ops = {
static struct rcu_torture_ops sched_sync_ops = { static struct rcu_torture_ops sched_sync_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = sched_torture_read_lock, .readlock = sched_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = sched_torture_read_unlock, .readunlock = sched_torture_read_unlock,
...@@ -814,7 +788,6 @@ static struct rcu_torture_ops sched_sync_ops = { ...@@ -814,7 +788,6 @@ static struct rcu_torture_ops sched_sync_ops = {
static struct rcu_torture_ops sched_expedited_ops = { static struct rcu_torture_ops sched_expedited_ops = {
.init = rcu_sync_torture_init, .init = rcu_sync_torture_init,
.cleanup = NULL,
.readlock = sched_torture_read_lock, .readlock = sched_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */ .read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = sched_torture_read_unlock, .readunlock = sched_torture_read_unlock,
...@@ -1396,12 +1369,16 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, char *tag) ...@@ -1396,12 +1369,16 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, char *tag)
"fqs_duration=%d fqs_holdoff=%d fqs_stutter=%d " "fqs_duration=%d fqs_holdoff=%d fqs_stutter=%d "
"test_boost=%d/%d test_boost_interval=%d " "test_boost=%d/%d test_boost_interval=%d "
"test_boost_duration=%d shutdown_secs=%d " "test_boost_duration=%d shutdown_secs=%d "
"stall_cpu=%d stall_cpu_holdoff=%d "
"n_barrier_cbs=%d "
"onoff_interval=%d onoff_holdoff=%d\n", "onoff_interval=%d onoff_holdoff=%d\n",
torture_type, tag, nrealreaders, nfakewriters, torture_type, tag, nrealreaders, nfakewriters,
stat_interval, verbose, test_no_idle_hz, shuffle_interval, stat_interval, verbose, test_no_idle_hz, shuffle_interval,
stutter, irqreader, fqs_duration, fqs_holdoff, fqs_stutter, stutter, irqreader, fqs_duration, fqs_holdoff, fqs_stutter,
test_boost, cur_ops->can_boost, test_boost, cur_ops->can_boost,
test_boost_interval, test_boost_duration, shutdown_secs, test_boost_interval, test_boost_duration, shutdown_secs,
stall_cpu, stall_cpu_holdoff,
n_barrier_cbs,
onoff_interval, onoff_holdoff); onoff_interval, onoff_holdoff);
} }
...@@ -1502,6 +1479,7 @@ rcu_torture_onoff(void *arg) ...@@ -1502,6 +1479,7 @@ rcu_torture_onoff(void *arg)
unsigned long delta; unsigned long delta;
int maxcpu = -1; int maxcpu = -1;
DEFINE_RCU_RANDOM(rand); DEFINE_RCU_RANDOM(rand);
int ret;
unsigned long starttime; unsigned long starttime;
VERBOSE_PRINTK_STRING("rcu_torture_onoff task started"); VERBOSE_PRINTK_STRING("rcu_torture_onoff task started");
...@@ -1522,7 +1500,13 @@ rcu_torture_onoff(void *arg) ...@@ -1522,7 +1500,13 @@ rcu_torture_onoff(void *arg)
torture_type, cpu); torture_type, cpu);
starttime = jiffies; starttime = jiffies;
n_offline_attempts++; n_offline_attempts++;
if (cpu_down(cpu) == 0) { ret = cpu_down(cpu);
if (ret) {
if (verbose)
pr_alert("%s" TORTURE_FLAG
"rcu_torture_onoff task: offline %d failed: errno %d\n",
torture_type, cpu, ret);
} else {
if (verbose) if (verbose)
pr_alert("%s" TORTURE_FLAG pr_alert("%s" TORTURE_FLAG
"rcu_torture_onoff task: offlined %d\n", "rcu_torture_onoff task: offlined %d\n",
...@@ -1936,8 +1920,6 @@ rcu_torture_cleanup(void) ...@@ -1936,8 +1920,6 @@ rcu_torture_cleanup(void)
rcu_torture_stats_print(); /* -After- the stats thread is stopped! */ rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
if (cur_ops->cleanup)
cur_ops->cleanup();
if (atomic_read(&n_rcu_torture_error) || n_rcu_torture_barrier_error) if (atomic_read(&n_rcu_torture_error) || n_rcu_torture_barrier_error)
rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE"); rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE");
else if (n_online_successes != n_online_attempts || else if (n_online_successes != n_online_attempts ||
......
...@@ -287,6 +287,7 @@ struct rcu_data { ...@@ -287,6 +287,7 @@ struct rcu_data {
long qlen_last_fqs_check; long qlen_last_fqs_check;
/* qlen at last check for QS forcing */ /* qlen at last check for QS forcing */
unsigned long n_cbs_invoked; /* count of RCU cbs invoked. */ unsigned long n_cbs_invoked; /* count of RCU cbs invoked. */
unsigned long n_nocbs_invoked; /* count of no-CBs RCU cbs invoked. */
unsigned long n_cbs_orphaned; /* RCU cbs orphaned by dying CPU */ unsigned long n_cbs_orphaned; /* RCU cbs orphaned by dying CPU */
unsigned long n_cbs_adopted; /* RCU cbs adopted from dying CPU */ unsigned long n_cbs_adopted; /* RCU cbs adopted from dying CPU */
unsigned long n_force_qs_snap; unsigned long n_force_qs_snap;
...@@ -317,6 +318,18 @@ struct rcu_data { ...@@ -317,6 +318,18 @@ struct rcu_data {
struct rcu_head oom_head; struct rcu_head oom_head;
#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */ #endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
/* 7) Callback offloading. */
#ifdef CONFIG_RCU_NOCB_CPU
struct rcu_head *nocb_head; /* CBs waiting for kthread. */
struct rcu_head **nocb_tail;
atomic_long_t nocb_q_count; /* # CBs waiting for kthread */
atomic_long_t nocb_q_count_lazy; /* (approximate). */
int nocb_p_count; /* # CBs being invoked by kthread */
int nocb_p_count_lazy; /* (approximate). */
wait_queue_head_t nocb_wq; /* For nocb kthreads to sleep on. */
struct task_struct *nocb_kthread;
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
int cpu; int cpu;
struct rcu_state *rsp; struct rcu_state *rsp;
}; };
...@@ -369,6 +382,12 @@ struct rcu_state { ...@@ -369,6 +382,12 @@ struct rcu_state {
struct rcu_data __percpu *rda; /* pointer of percu rcu_data. */ struct rcu_data __percpu *rda; /* pointer of percu rcu_data. */
void (*call)(struct rcu_head *head, /* call_rcu() flavor. */ void (*call)(struct rcu_head *head, /* call_rcu() flavor. */
void (*func)(struct rcu_head *head)); void (*func)(struct rcu_head *head));
#ifdef CONFIG_RCU_NOCB_CPU
void (*call_remote)(struct rcu_head *head,
void (*func)(struct rcu_head *head));
/* call_rcu() flavor, but for */
/* placing on remote CPU. */
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
/* The following fields are guarded by the root rcu_node's lock. */ /* The following fields are guarded by the root rcu_node's lock. */
...@@ -383,9 +402,8 @@ struct rcu_state { ...@@ -383,9 +402,8 @@ struct rcu_state {
/* End of fields guarded by root rcu_node's lock. */ /* End of fields guarded by root rcu_node's lock. */
raw_spinlock_t onofflock ____cacheline_internodealigned_in_smp; raw_spinlock_t orphan_lock ____cacheline_internodealigned_in_smp;
/* exclude on/offline and */ /* Protect following fields. */
/* starting new GP. */
struct rcu_head *orphan_nxtlist; /* Orphaned callbacks that */ struct rcu_head *orphan_nxtlist; /* Orphaned callbacks that */
/* need a grace period. */ /* need a grace period. */
struct rcu_head **orphan_nxttail; /* Tail of above. */ struct rcu_head **orphan_nxttail; /* Tail of above. */
...@@ -394,7 +412,7 @@ struct rcu_state { ...@@ -394,7 +412,7 @@ struct rcu_state {
struct rcu_head **orphan_donetail; /* Tail of above. */ struct rcu_head **orphan_donetail; /* Tail of above. */
long qlen_lazy; /* Number of lazy callbacks. */ long qlen_lazy; /* Number of lazy callbacks. */
long qlen; /* Total number of callbacks. */ long qlen; /* Total number of callbacks. */
/* End of fields guarded by onofflock. */ /* End of fields guarded by orphan_lock. */
struct mutex onoff_mutex; /* Coordinate hotplug & GPs. */ struct mutex onoff_mutex; /* Coordinate hotplug & GPs. */
...@@ -405,6 +423,18 @@ struct rcu_state { ...@@ -405,6 +423,18 @@ struct rcu_state {
/* _rcu_barrier(). */ /* _rcu_barrier(). */
/* End of fields guarded by barrier_mutex. */ /* End of fields guarded by barrier_mutex. */
atomic_long_t expedited_start; /* Starting ticket. */
atomic_long_t expedited_done; /* Done ticket. */
atomic_long_t expedited_wrap; /* # near-wrap incidents. */
atomic_long_t expedited_tryfail; /* # acquisition failures. */
atomic_long_t expedited_workdone1; /* # done by others #1. */
atomic_long_t expedited_workdone2; /* # done by others #2. */
atomic_long_t expedited_normal; /* # fallbacks to normal. */
atomic_long_t expedited_stoppedcpus; /* # successful stop_cpus. */
atomic_long_t expedited_done_tries; /* # tries to update _done. */
atomic_long_t expedited_done_lost; /* # times beaten to _done. */
atomic_long_t expedited_done_exit; /* # times exited _done loop. */
unsigned long jiffies_force_qs; /* Time at which to invoke */ unsigned long jiffies_force_qs; /* Time at which to invoke */
/* force_quiescent_state(). */ /* force_quiescent_state(). */
unsigned long n_force_qs; /* Number of calls to */ unsigned long n_force_qs; /* Number of calls to */
...@@ -428,6 +458,8 @@ struct rcu_state {
#define RCU_GP_FLAG_FQS 0x2 /* Need grace-period quiescent-state forcing. */
extern struct list_head rcu_struct_flavors;
/* Sequence through rcu_state structures for each RCU flavor. */
#define for_each_rcu_flavor(rsp) \
list_for_each_entry((rsp), &rcu_struct_flavors, flavors)
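For reference, a small illustrative use of the macro (the helper itself is hypothetical): walk every registered RCU flavor and report its grace-period count.

	static void show_rcu_flavors(void)
	{
		struct rcu_state *rsp;

		for_each_rcu_flavor(rsp)
			pr_info("%s: completed=%lu\n", rsp->name, rsp->completed);
	}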
...@@ -504,5 +536,32 @@ static void print_cpu_stall_info(struct rcu_state *rsp, int cpu);
static void print_cpu_stall_info_end(void);
static void zero_cpu_stall_ticks(struct rcu_data *rdp);
static void increment_cpu_stall_ticks(void);
static bool is_nocb_cpu(int cpu);
static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp,
bool lazy);
static bool rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp,
struct rcu_data *rdp);
static bool nocb_cpu_expendable(int cpu);
static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);
static void init_nocb_callback_list(struct rcu_data *rdp);
static void __init rcu_init_nocb(void);
#endif /* #ifndef RCU_TREE_NONCORE */
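The declarations above are the hooks by which callbacks are diverted to the per-CPU rcuo kthreads on no-CBs CPUs. A hedged sketch of how a caller might use them (simplified, not the actual call_rcu() path; the function name is invented):

	static void enqueue_callback(struct rcu_data *rdp, struct rcu_head *rhp, bool lazy)
	{
		if (is_nocb_cpu(rdp->cpu) && __call_rcu_nocb(rdp, rhp, lazy))
			return;		/* handed off to the no-CBs kthread */
		/* ...otherwise queue on the normal per-CPU callback lists... */
	}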
#ifdef CONFIG_RCU_TRACE
#ifdef CONFIG_RCU_NOCB_CPU
/* Sum up queue lengths for tracing. */
static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
{
*ql = atomic_long_read(&rdp->nocb_q_count) + rdp->nocb_p_count;
*qll = atomic_long_read(&rdp->nocb_q_count_lazy) + rdp->nocb_p_count_lazy;
}
#else /* #ifdef CONFIG_RCU_NOCB_CPU */
static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
{
*ql = 0;
*qll = 0;
}
#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
#endif /* #ifdef CONFIG_RCU_TRACE */
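The two rcu_nocb_q_lengths() variants let tracing code fold the no-CBs queues into the per-CPU callback counts without scattering #ifdefs. A hypothetical consumer, assuming the standard qlen fields of struct rcu_data:

	static void count_cbs(struct rcu_data *rdp, long *total, long *total_lazy)
	{
		long ql, qll;

		rcu_nocb_q_lengths(rdp, &ql, &qll);
		*total = rdp->qlen + ql;
		*total_lazy = rdp->qlen_lazy + qll;
	}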
...@@ -72,6 +72,7 @@
#include <linux/slab.h>
#include <linux/init_task.h>
#include <linux/binfmts.h>
#include <linux/context_tracking.h>
#include <asm/switch_to.h>
#include <asm/tlb.h>
...@@ -1886,8 +1887,8 @@ context_switch(struct rq *rq, struct task_struct *prev,
spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
#endif
context_tracking_task_switch(prev, next);
/* Here we just switch the register state and the stack. */
rcu_switch(prev, next);
switch_to(prev, next, prev);
barrier();
...@@ -2911,7 +2912,7 @@ asmlinkage void __sched schedule(void)
}
EXPORT_SYMBOL(schedule);
#ifdef CONFIG_RCU_USER_QS #ifdef CONFIG_CONTEXT_TRACKING
asmlinkage void __sched schedule_user(void)
{
/*
...@@ -2920,9 +2921,9 @@ asmlinkage void __sched schedule_user(void)
* we haven't yet exited the RCU idle mode. Do it here manually until
* we find a better solution.
*/
rcu_user_exit(); user_exit();
schedule();
rcu_user_enter(); user_enter();
}
#endif
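The rcu_user_exit()/rcu_user_enter() calls become the context-tracking wrappers user_exit()/user_enter(). The bracketing pattern they implement, sketched in isolation (the helper is hypothetical):

	#include <linux/context_tracking.h>

	static void run_in_kernel_mode(void (*work)(void))
	{
		user_exit();	/* leave RCU-idle "user" mode so RCU watches this CPU */
		work();
		user_enter();	/* re-enter user mode before returning to userspace */
	}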
...@@ -3027,7 +3028,7 @@ asmlinkage void __sched preempt_schedule_irq(void)
/* Catch callers which need to be fixed */
BUG_ON(ti->preempt_count || !irqs_disabled());
rcu_user_exit(); user_exit();
do {
add_preempt_count(PREEMPT_ACTIVE);
local_irq_enable();
...@@ -4474,6 +4475,7 @@ static const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
void sched_show_task(struct task_struct *p)
{
unsigned long free = 0;
int ppid;
unsigned state;
state = p->state ? __ffs(p->state) + 1 : 0;
...@@ -4493,8 +4495,11 @@ void sched_show_task(struct task_struct *p)
#ifdef CONFIG_DEBUG_STACK_USAGE
free = stack_not_used(p);
#endif
rcu_read_lock();
ppid = task_pid_nr(rcu_dereference(p->real_parent));
rcu_read_unlock();
printk(KERN_CONT "%5lu %5d %6d 0x%08lx\n", free,
task_pid_nr(p), task_pid_nr(rcu_dereference(p->real_parent)), task_pid_nr(p), ppid,
(unsigned long)task_thread_info(p)->flags);
show_stack(p, NULL);
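The change above hoists the rcu_dereference() of p->real_parent inside an explicit rcu_read_lock()/rcu_read_unlock() pair, which is what cures the lockdep-RCU splat mentioned in the merge description. The general pattern, shown for an arbitrary hypothetical RCU-protected pointer:

	#include <linux/rcupdate.h>

	struct foo { int a; };
	static struct foo __rcu *gp;	/* hypothetical RCU-protected pointer */

	static int read_foo_a(void)
	{
		int a;

		rcu_read_lock();
		a = rcu_dereference(gp)->a;	/* dereference and use inside the critical section */
		rcu_read_unlock();
		return a;
	}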
...@@ -8076,3 +8081,9 @@ struct cgroup_subsys cpuacct_subsys = {
.base_cftypes = files,
};
#endif /* CONFIG_CGROUP_CPUACCT */
void dump_cpu_task(int cpu)
{
pr_info("Task dump for CPU %d:\n", cpu);
sched_show_task(cpu_curr(cpu));
}
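dump_cpu_task() gives diagnostic code, such as the enhanced RCU CPU stall warnings in this series, a way to show what a suspect CPU is currently running. An illustrative caller (the reporting function is hypothetical):

	static void report_suspect_cpu(int cpu)
	{
		pr_err("CPU %d looks wedged:\n", cpu);
		dump_cpu_task(cpu);
	}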
...@@ -16,8 +16,10 @@
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* Copyright (C) IBM Corporation, 2006
* Copyright (C) Fujitsu, 2012
*
* Author: Paul McKenney <paulmck@us.ibm.com>
* Lai Jiangshan <laijs@cn.fujitsu.com>
*
* For detailed explanation of Read-Copy Update mechanism see -
* Documentation/RCU/ *.txt
...@@ -34,6 +36,10 @@
#include <linux/delay.h>
#include <linux/srcu.h>
#include <trace/events/rcu.h>
#include "rcu.h"
/*
* Initialize an rcu_batch structure to empty.
*/
...@@ -92,9 +98,6 @@ static inline void rcu_batch_move(struct rcu_batch *to, struct rcu_batch *from)
}
}
/* single-thread state-machine */
static void process_srcu(struct work_struct *work);
static int init_srcu_struct_fields(struct srcu_struct *sp)
{
sp->completed = 0;
...@@ -464,7 +467,9 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
*/
void synchronize_srcu(struct srcu_struct *sp)
{
__synchronize_srcu(sp, SYNCHRONIZE_SRCU_TRYCOUNT); __synchronize_srcu(sp, rcu_expedited
? SYNCHRONIZE_SRCU_EXP_TRYCOUNT
: SYNCHRONIZE_SRCU_TRYCOUNT);
}
EXPORT_SYMBOL_GPL(synchronize_srcu);
...@@ -637,7 +642,7 @@ static void srcu_reschedule(struct srcu_struct *sp)
/*
* This is the work-queue function that handles SRCU grace periods.
*/
static void process_srcu(struct work_struct *work) void process_srcu(struct work_struct *work)
{
struct srcu_struct *sp;
...@@ -648,3 +653,4 @@ static void process_srcu(struct work_struct *work)
srcu_invoke_callbacks(sp);
srcu_reschedule(sp);
}
EXPORT_SYMBOL_GPL(process_srcu);
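Making process_srcu() non-static and exporting it is what allows srcu_struct structures to be initialized at compile time, since the DEFINE_SRCU()/DEFINE_STATIC_SRCU() helpers added elsewhere in this series reference it from their initializer. A minimal sketch of the resulting usage, with hypothetical names:

	#include <linux/srcu.h>

	DEFINE_STATIC_SRCU(my_srcu);

	static int my_reader(int *p)
	{
		int idx, val;

		idx = srcu_read_lock(&my_srcu);
		val = *p;			/* access SRCU-protected data */
		srcu_read_unlock(&my_srcu, idx);
		return val;
	}

	static void my_updater(void)
	{
		/* ...make the old data unreachable to new readers... */
		synchronize_srcu(&my_srcu);	/* expedited when rcu_expedited is set */
		/* ...now safe to free the old data... */
	}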
...@@ -972,7 +972,7 @@ config RCU_CPU_STALL_TIMEOUT
int "RCU CPU stall timeout in seconds"
depends on TREE_RCU || TREE_PREEMPT_RCU
range 3 300
default 60 default 21
help
If a given RCU grace period extends more than the specified
number of seconds, a CPU stall warning is printed. If the