Commit 2600a46e authored by Linus Torvalds

Merge tag 'trace-v4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "This includes two new updates for the ftrace infrastructure.

   - With the change of the event pid filtering code from a list of
     pids to a bitmask, we can now easily implement following forks.
     With the new tracing option "event-fork" set, a task whose pid is
     in set_event_pid will, when it forks, have its child's pid added
     to set_event_pid, so the child is traced as well.

     Note, if "event-fork" is set and a task with its pid in
     set_event_pid exits, its pid will be removed from set_event_pid

   - The addition of Tom Zanussi's hist triggers.  This includes very
     thorough documentation on how to use the hist triggers with events,
     and introduces a quick and easy way to get histogram data from
     events and their fields.

  Some other cleanups and updates were added as well.  For example,
  Masami Hiramatsu added test cases for the event trigger and hist
  triggers, and I added a speed-up of event filtering by using a temp
  buffer when filters are set"
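To make the "event-fork" behavior concrete, here is a minimal usage sketch against the tracefs interface (the mount point, the PID 1234, and the sched events are arbitrary examples, not part of this pull):

cd /sys/kernel/tracing          # tracefs is commonly mounted here; the path may differ
echo 1234 > set_event_pid       # 1234 is a placeholder PID of the task to trace
echo 1 > options/event-fork     # children forked by that task are added to set_event_pid,
                                # and PIDs are removed again when the tasks exit
echo 1 > events/sched/enable    # enable some events so the PID filtering is visible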

* tag 'trace-v4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (45 commits)
  tracing: Use temp buffer when filtering events
  tracing: Remove TRACE_EVENT_FL_USE_CALL_FILTER logic
  tracing: Remove unused function trace_current_buffer_lock_reserve()
  tracing: Remove one use of trace_current_buffer_lock_reserve()
  tracing: Have trace_buffer_unlock_commit() call the _regs version with NULL
  tracing: Remove unused function trace_current_buffer_discard_commit()
  tracing: Move trace_buffer_unlock_commit{_regs}() to local header
  tracing: Fold filter_check_discard() into its only user
  tracing: Make filter_check_discard() local
  tracing: Move event_trigger_unlock_commit{_regs}() to local header
  tracing: Don't use the address of the buffer array name in copy_from_user
  tracing: Handle tracing_map_alloc_elts() error path correctly
  tracing: Add check for NULL event field when creating hist field
  tracing: checking for NULL instead of IS_ERR()
  tracing: Do not inherit event-fork option for instances
  tracing: Fix unsigned comparison to zero in hist trigger code
  kselftests/ftrace: Add a test for log2 modifier of hist trigger
  tracing: Add hist trigger 'log2' modifier
  kselftests/ftrace: Add hist trigger testcases
  kselftests/ftrace : Add event trigger testcases
  ...
parents 03e1aa1c 0fc1b09f
...@@ -210,6 +210,11 @@ of ftrace. Here is a list of some of the key files:
Note, sched_switch and sched_wake_up will also trace events
listed in this file.
To have the PIDs of children of tasks with their PID in this file
added on fork, enable the "event-fork" option. That option will also
cause the PIDs of tasks to be removed from this file when the task
exits.
set_graph_function:
Set a "trigger" function where tracing should start
...@@ -725,16 +730,14 @@ noraw
nohex
nobin
noblock
nostacktrace
trace_printk
noftrace_preempt
nobranch
annotate
nouserstacktrace
nosym-userobj
noprintk-msg-only
context-info
latency-format
nolatency-format
sleep-time
graph-time
record-cmd
...@@ -742,7 +745,10 @@ overwrite
nodisable_on_free
irq-info
markers
noevent-fork
function-trace
nodisplay-graph
nostacktrace
To disable one of the options, echo in the option prepended with
"no".
...@@ -796,11 +802,6 @@ Here are the available options: ...@@ -796,11 +802,6 @@ Here are the available options:
block - When set, reading trace_pipe will not block when polled. block - When set, reading trace_pipe will not block when polled.
stacktrace - This is one of the options that changes the trace
itself. When a trace is recorded, so is the stack
of functions. This allows for back traces of
trace sites.
trace_printk - Can disable trace_printk() from writing into the buffer.
branch - Enable branch tracing with the tracer.
...@@ -897,6 +898,10 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
When disabled, the trace_marker will error with EINVAL
on write.
event-fork - When set, tasks with PIDs listed in set_event_pid will have
the PIDs of their children added to set_event_pid when those
tasks fork. Also, when tasks with PIDs in set_event_pid exit,
their PIDs will be removed from the file.
function-trace - The latency tracers will enable function tracing
if this option is enabled (default it is). When
...@@ -904,8 +909,17 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
functions. This keeps the overhead of the tracer down
when performing latency tests.
Note: Some tracers have their own options. They only appear
when the tracer is active.
display-graph - When set, the latency tracers (irqsoff, wakeup, etc) will
use function graph tracing instead of function tracing.
stacktrace - This is one of the options that changes the trace
itself. When a trace is recorded, so is the stack
of functions. This allows for back traces of
trace sites.
Note: Some tracers have their own options. They only appear in this
file when the tracer is active. They always appear in the
options directory.
...
...@@ -154,21 +154,6 @@ trace_event_buffer_lock_reserve(struct ring_buffer **current_buffer,
struct trace_event_file *trace_file,
int type, unsigned long len,
unsigned long flags, int pc);
struct ring_buffer_event *
trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer,
int type, unsigned long len,
unsigned long flags, int pc);
void trace_buffer_unlock_commit(struct trace_array *tr,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
unsigned long flags, int pc);
void trace_buffer_unlock_commit_regs(struct trace_array *tr,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
unsigned long flags, int pc,
struct pt_regs *regs);
void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
struct ring_buffer_event *event);
void tracing_record_cmdline(struct task_struct *tsk);
...@@ -229,7 +214,6 @@ enum {
TRACE_EVENT_FL_NO_SET_FILTER_BIT,
TRACE_EVENT_FL_IGNORE_ENABLE_BIT,
TRACE_EVENT_FL_WAS_ENABLED_BIT,
TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
TRACE_EVENT_FL_TRACEPOINT_BIT,
TRACE_EVENT_FL_KPROBE_BIT,
TRACE_EVENT_FL_UPROBE_BIT,
...@@ -244,7 +228,6 @@ enum {
* WAS_ENABLED - Set and stays set when an event was ever enabled
* (used for module unloading, if a module event is enabled,
* it is best to clear the buffers that used it).
* USE_CALL_FILTER - For trace internal events, don't use file filter
* TRACEPOINT - Event is a tracepoint
* KPROBE - Event is a kprobe
* UPROBE - Event is a uprobe
...@@ -255,7 +238,6 @@ enum {
TRACE_EVENT_FL_NO_SET_FILTER = (1 << TRACE_EVENT_FL_NO_SET_FILTER_BIT),
TRACE_EVENT_FL_IGNORE_ENABLE = (1 << TRACE_EVENT_FL_IGNORE_ENABLE_BIT),
TRACE_EVENT_FL_WAS_ENABLED = (1 << TRACE_EVENT_FL_WAS_ENABLED_BIT),
TRACE_EVENT_FL_USE_CALL_FILTER = (1 << TRACE_EVENT_FL_USE_CALL_FILTER_BIT),
TRACE_EVENT_FL_TRACEPOINT = (1 << TRACE_EVENT_FL_TRACEPOINT_BIT),
TRACE_EVENT_FL_KPROBE = (1 << TRACE_EVENT_FL_KPROBE_BIT),
TRACE_EVENT_FL_UPROBE = (1 << TRACE_EVENT_FL_UPROBE_BIT),
...@@ -407,16 +389,12 @@ enum event_trigger_type {
ETT_SNAPSHOT = (1 << 1),
ETT_STACKTRACE = (1 << 2),
ETT_EVENT_ENABLE = (1 << 3),
ETT_EVENT_HIST = (1 << 4),
ETT_HIST_ENABLE = (1 << 5),
};
extern int filter_match_preds(struct event_filter *filter, void *rec);
extern int filter_check_discard(struct trace_event_file *file, void *rec,
struct ring_buffer *buffer,
struct ring_buffer_event *event);
extern int call_filter_check_discard(struct trace_event_call *call, void *rec,
struct ring_buffer *buffer,
struct ring_buffer_event *event);
extern enum event_trigger_type event_triggers_call(struct trace_event_file *file,
void *rec);
extern void event_triggers_post_call(struct trace_event_file *file,
...@@ -450,100 +428,6 @@ trace_trigger_soft_disabled(struct trace_event_file *file)
return false;
}
/*
* Helper function for event_trigger_unlock_commit{_regs}().
* If there are event triggers attached to this event that requires
* filtering against its fields, then they wil be called as the
* entry already holds the field information of the current event.
*
* It also checks if the event should be discarded or not.
* It is to be discarded if the event is soft disabled and the
* event was only recorded to process triggers, or if the event
* filter is active and this event did not match the filters.
*
* Returns true if the event is discarded, false otherwise.
*/
static inline bool
__event_trigger_test_discard(struct trace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry,
enum event_trigger_type *tt)
{
unsigned long eflags = file->flags;
if (eflags & EVENT_FILE_FL_TRIGGER_COND)
*tt = event_triggers_call(file, entry);
if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags))
ring_buffer_discard_commit(buffer, event);
else if (!filter_check_discard(file, entry, buffer, event))
return false;
return true;
}
/**
* event_trigger_unlock_commit - handle triggers and finish event commit
* @file: The file pointer assoctiated to the event
* @buffer: The ring buffer that the event is being written to
* @event: The event meta data in the ring buffer
* @entry: The event itself
* @irq_flags: The state of the interrupts at the start of the event
* @pc: The state of the preempt count at the start of the event.
*
* This is a helper function to handle triggers that require data
* from the event itself. It also tests the event against filters and
* if the event is soft disabled and should be discarded.
*/
static inline void
event_trigger_unlock_commit(struct trace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc)
{
enum event_trigger_type tt = ETT_NONE;
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);
if (tt)
event_triggers_post_call(file, tt, entry);
}
/**
* event_trigger_unlock_commit_regs - handle triggers and finish event commit
* @file: The file pointer assoctiated to the event
* @buffer: The ring buffer that the event is being written to
* @event: The event meta data in the ring buffer
* @entry: The event itself
* @irq_flags: The state of the interrupts at the start of the event
* @pc: The state of the preempt count at the start of the event.
*
* This is a helper function to handle triggers that require data
* from the event itself. It also tests the event against filters and
* if the event is soft disabled and should be discarded.
*
* Same as event_trigger_unlock_commit() but calls
* trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
*/
static inline void
event_trigger_unlock_commit_regs(struct trace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc,
struct pt_regs *regs)
{
enum event_trigger_type tt = ETT_NONE;
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
trace_buffer_unlock_commit_regs(file->tr, buffer, event,
irq_flags, pc, regs);
if (tt)
event_triggers_post_call(file, tt, entry);
}
#ifdef CONFIG_BPF_EVENTS
unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx);
#else
...
...@@ -528,6 +528,32 @@ config MMIOTRACE
See Documentation/trace/mmiotrace.txt.
If you are not helping to develop drivers, say N.
config TRACING_MAP
bool
depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
help
tracing_map is a special-purpose lock-free map for tracing,
separated out as a stand-alone facility in order to allow it
to be shared between multiple tracers. It isn't meant to be
generally used outside of that context, and is normally
selected by tracers that use it.
config HIST_TRIGGERS
bool "Histogram triggers"
depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
select TRACING_MAP
default n
help
Hist triggers allow one or more arbitrary trace event fields
to be aggregated into hash tables and dumped to stdout by
reading a debugfs/tracefs file. They're useful for
gathering quick and dirty (though precise) summaries of
event activity as an initial guide for further investigation
using more advanced tools.
See Documentation/trace/events.txt.
If in doubt, say N.
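As a rough sketch of the interface this enables (the event and key chosen here are arbitrary examples; Documentation/trace/events.txt describes the full syntax):

cd /sys/kernel/tracing                                                           # or wherever tracefs is mounted
echo 'hist:keys=common_pid:vals=hitcount' > events/sched/sched_switch/trigger   # aggregate hits per pid
cat events/sched/sched_switch/hist                                              # read back the histogram
echo '!hist:keys=common_pid:vals=hitcount' > events/sched/sched_switch/trigger  # remove the trigger again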
config MMIOTRACE_TEST
tristate "Test module for mmiotrace"
depends on MMIOTRACE && m
...
...@@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
obj-$(CONFIG_TRACING) += trace_seq.o
obj-$(CONFIG_TRACING) += trace_stat.o
obj-$(CONFIG_TRACING) += trace_printk.o
obj-$(CONFIG_TRACING_MAP) += tracing_map.o
obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
...@@ -53,6 +54,7 @@ obj-$(CONFIG_EVENT_TRACING) += trace_event_perf.o
endif
obj-$(CONFIG_EVENT_TRACING) += trace_events_filter.o
obj-$(CONFIG_EVENT_TRACING) += trace_events_trigger.o
obj-$(CONFIG_HIST_TRIGGERS) += trace_events_hist.o
obj-$(CONFIG_BPF_EVENTS) += bpf_trace.o
obj-$(CONFIG_KPROBE_EVENT) += trace_kprobe.o
obj-$(CONFIG_TRACEPOINTS) += power-traces.o
...
...@@ -177,9 +177,8 @@ struct trace_options {
};
struct trace_pid_list {
unsigned int nr_pids;
int order;
pid_t *pids;
int pid_max;
unsigned long *pids;
};
/*
...@@ -656,6 +655,7 @@ static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
extern cycle_t ftrace_now(int cpu);
extern void trace_find_cmdline(int pid, char comm[]);
extern void trace_event_follow_fork(struct trace_array *tr, bool enable);
#ifdef CONFIG_DYNAMIC_FTRACE
extern unsigned long ftrace_update_tot_cnt;
...@@ -967,6 +967,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
C(STOP_ON_FREE, "disable_on_free"), \
C(IRQ_INFO, "irq-info"), \
C(MARKERS, "markers"), \
C(EVENT_FORK, "event-fork"), \
FUNCTION_FLAGS \
FGRAPH_FLAGS \
STACK_FLAGS \
...@@ -1064,6 +1065,137 @@ struct trace_subsystem_dir {
int nr_events;
};
extern int call_filter_check_discard(struct trace_event_call *call, void *rec,
struct ring_buffer *buffer,
struct ring_buffer_event *event);
void trace_buffer_unlock_commit_regs(struct trace_array *tr,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
unsigned long flags, int pc,
struct pt_regs *regs);
static inline void trace_buffer_unlock_commit(struct trace_array *tr,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
unsigned long flags, int pc)
{
trace_buffer_unlock_commit_regs(tr, buffer, event, flags, pc, NULL);
}
DECLARE_PER_CPU(struct ring_buffer_event *, trace_buffered_event);
DECLARE_PER_CPU(int, trace_buffered_event_cnt);
void trace_buffered_event_disable(void);
void trace_buffered_event_enable(void);
static inline void
__trace_event_discard_commit(struct ring_buffer *buffer,
struct ring_buffer_event *event)
{
if (this_cpu_read(trace_buffered_event) == event) {
/* Simply release the temp buffer */
this_cpu_dec(trace_buffered_event_cnt);
return;
}
ring_buffer_discard_commit(buffer, event);
}
/*
* Helper function for event_trigger_unlock_commit{_regs}().
* If there are event triggers attached to this event that requires
* filtering against its fields, then they wil be called as the
* entry already holds the field information of the current event.
*
* It also checks if the event should be discarded or not.
* It is to be discarded if the event is soft disabled and the
* event was only recorded to process triggers, or if the event
* filter is active and this event did not match the filters.
*
* Returns true if the event is discarded, false otherwise.
*/
static inline bool
__event_trigger_test_discard(struct trace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry,
enum event_trigger_type *tt)
{
unsigned long eflags = file->flags;
if (eflags & EVENT_FILE_FL_TRIGGER_COND)
*tt = event_triggers_call(file, entry);
if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
(unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
!filter_match_preds(file->filter, entry))) {
__trace_event_discard_commit(buffer, event);
return true;
}
return false;
}
/**
* event_trigger_unlock_commit - handle triggers and finish event commit
* @file: The file pointer assoctiated to the event
* @buffer: The ring buffer that the event is being written to
* @event: The event meta data in the ring buffer
* @entry: The event itself
* @irq_flags: The state of the interrupts at the start of the event
* @pc: The state of the preempt count at the start of the event.
*
* This is a helper function to handle triggers that require data
* from the event itself. It also tests the event against filters and
* if the event is soft disabled and should be discarded.
*/
static inline void
event_trigger_unlock_commit(struct trace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc)
{
enum event_trigger_type tt = ETT_NONE;
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);
if (tt)
event_triggers_post_call(file, tt, entry);
}
/**
* event_trigger_unlock_commit_regs - handle triggers and finish event commit
* @file: The file pointer assoctiated to the event
* @buffer: The ring buffer that the event is being written to
* @event: The event meta data in the ring buffer
* @entry: The event itself
* @irq_flags: The state of the interrupts at the start of the event
* @pc: The state of the preempt count at the start of the event.
*
* This is a helper function to handle triggers that require data
* from the event itself. It also tests the event against filters and
* if the event is soft disabled and should be discarded.
*
* Same as event_trigger_unlock_commit() but calls
* trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
*/
static inline void
event_trigger_unlock_commit_regs(struct trace_event_file *file,
struct ring_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc,
struct pt_regs *regs)
{
enum event_trigger_type tt = ETT_NONE;
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
trace_buffer_unlock_commit_regs(file->tr, buffer, event,
irq_flags, pc, regs);
if (tt)
event_triggers_post_call(file, tt, entry);
}
#define FILTER_PRED_INVALID ((unsigned short)-1)
#define FILTER_PRED_IS_RIGHT (1 << 15)
#define FILTER_PRED_FOLD (1 << 15)
...@@ -1161,6 +1293,15 @@ extern struct mutex event_mutex;
extern struct list_head ftrace_events;
extern const struct file_operations event_trigger_fops;
extern const struct file_operations event_hist_fops;
#ifdef CONFIG_HIST_TRIGGERS
extern int register_trigger_hist_cmd(void);
extern int register_trigger_hist_enable_disable_cmds(void);
#else
static inline int register_trigger_hist_cmd(void) { return 0; }
static inline int register_trigger_hist_enable_disable_cmds(void) { return 0; }
#endif
extern int register_trigger_cmds(void);
extern void clear_event_triggers(struct trace_array *tr);
...@@ -1174,9 +1315,41 @@ struct event_trigger_data {
char *filter_str;
void *private_data;
bool paused;
bool paused_tmp;
struct list_head list;
char *name;
struct list_head named_list;
struct event_trigger_data *named_data;
};
/* Avoid typos */
#define ENABLE_EVENT_STR "enable_event"
#define DISABLE_EVENT_STR "disable_event"
#define ENABLE_HIST_STR "enable_hist"
#define DISABLE_HIST_STR "disable_hist"
struct enable_trigger_data {
struct trace_event_file *file;
bool enable;
bool hist;
};
extern int event_enable_trigger_print(struct seq_file *m,
struct event_trigger_ops *ops,
struct event_trigger_data *data);
extern void event_enable_trigger_free(struct event_trigger_ops *ops,
struct event_trigger_data *data);
extern int event_enable_trigger_func(struct event_command *cmd_ops,
struct trace_event_file *file,
char *glob, char *cmd, char *param);
extern int event_enable_register_trigger(char *glob,
struct event_trigger_ops *ops,
struct event_trigger_data *data,
struct trace_event_file *file);
extern void event_enable_unregister_trigger(char *glob,
struct event_trigger_ops *ops,
struct event_trigger_data *test,
struct trace_event_file *file);
extern void trigger_data_free(struct event_trigger_data *data);
extern int event_trigger_init(struct event_trigger_ops *ops,
struct event_trigger_data *data);
...@@ -1189,7 +1362,18 @@ extern void unregister_trigger(char *glob, struct event_trigger_ops *ops,
extern int set_trigger_filter(char *filter_str,
struct event_trigger_data *trigger_data,
struct trace_event_file *file);
extern struct event_trigger_data *find_named_trigger(const char *name);
extern bool is_named_trigger(struct event_trigger_data *test);
extern int save_named_trigger(const char *name,
struct event_trigger_data *data);
extern void del_named_trigger(struct event_trigger_data *data);
extern void pause_named_trigger(struct event_trigger_data *data);
extern void unpause_named_trigger(struct event_trigger_data *data);
extern void set_named_trigger_data(struct event_trigger_data *data,
struct event_trigger_data *named_data);
extern int register_event_command(struct event_command *cmd);
extern int unregister_event_command(struct event_command *cmd);
extern int register_trigger_hist_enable_disable_cmds(void);
/**
* struct event_trigger_ops - callbacks for trace event triggers
...
...@@ -689,9 +689,6 @@ static void append_filter_err(struct filter_parse_state *ps,
static inline struct event_filter *event_filter(struct trace_event_file *file)
{
if (file->event_call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
return file->event_call->filter;
else
return file->filter;
}
...@@ -826,12 +823,12 @@ static void __free_preds(struct event_filter *filter)
static void filter_disable(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
unsigned long old_flags = file->flags;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
call->flags &= ~TRACE_EVENT_FL_FILTERED;
else
file->flags &= ~EVENT_FILE_FL_FILTERED;
if (old_flags != file->flags)
trace_buffered_event_disable();
}
static void __free_filter(struct event_filter *filter)
...@@ -883,12 +880,7 @@ static int __alloc_preds(struct event_filter *filter, int n_preds)
static inline void __remove_filter(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
filter_disable(file);
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
remove_filter_string(call->filter);
else
remove_filter_string(file->filter);
}
...@@ -906,15 +898,8 @@ static void filter_free_subsystem_preds(struct trace_subsystem_dir *dir,
static inline void __free_subsystem_filter(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) {
__free_filter(call->filter);
call->filter = NULL;
} else {
__free_filter(file->filter);
file->filter = NULL;
}
}
static void filter_free_subsystem_filters(struct trace_subsystem_dir *dir,
...@@ -1718,69 +1703,43 @@ static int replace_preds(struct trace_event_call *call,
static inline void event_set_filtered_flag(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
unsigned long old_flags = file->flags;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
call->flags |= TRACE_EVENT_FL_FILTERED;
else
file->flags |= EVENT_FILE_FL_FILTERED;
if (old_flags != file->flags)
trace_buffered_event_enable();
}
static inline void event_set_filter(struct trace_event_file *file,
struct event_filter *filter)
{
struct trace_event_call *call = file->event_call;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
rcu_assign_pointer(call->filter, filter);
else
rcu_assign_pointer(file->filter, filter);
}
static inline void event_clear_filter(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
RCU_INIT_POINTER(call->filter, NULL);
else
RCU_INIT_POINTER(file->filter, NULL);
}
static inline void
event_set_no_set_filter_flag(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
call->flags |= TRACE_EVENT_FL_NO_SET_FILTER;
else
file->flags |= EVENT_FILE_FL_NO_SET_FILTER;
}
static inline void
event_clear_no_set_filter_flag(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
call->flags &= ~TRACE_EVENT_FL_NO_SET_FILTER;
else
file->flags &= ~EVENT_FILE_FL_NO_SET_FILTER;
}
static inline bool
event_no_set_filter_flag(struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
if (file->flags & EVENT_FILE_FL_NO_SET_FILTER)
return true;
if ((call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) &&
(call->flags & TRACE_EVENT_FL_NO_SET_FILTER))
return true;
return false;
}
...
...@@ -14,3 +14,12 @@ enable_tracing() { # start trace recording
reset_tracer() { # reset the current tracer
echo nop > current_tracer
}
reset_trigger() { # reset all current setting triggers
grep -v ^# events/*/*/trigger |
while read line; do
cmd=`echo $line | cut -f2- -d: | cut -f1 -d" "`
echo "!$cmd" > `echo $line | cut -f1 -d:`
done
}
#!/bin/sh
# description: event trigger - test event enable/disable trigger
do_reset() {
reset_trigger
echo > set_event
clear_trace
}
fail() { #msg
do_reset
echo $1
exit $FAIL
}
if [ ! -f set_event -o ! -d events/sched ]; then
echo "event tracing is not supported"
exit_unsupported
fi
if [ ! -f events/sched/sched_process_fork/trigger ]; then
echo "event trigger is not supported"
exit_unsupported
fi
reset_tracer
do_reset
FEATURE=`grep enable_event events/sched/sched_process_fork/trigger`
if [ -z "$FEATURE" ]; then
echo "event enable/disable trigger is not supported"
exit_unsupported
fi
echo "Test enable_event trigger"
echo 0 > events/sched/sched_switch/enable
echo 'enable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
( echo "forked")
if [ `cat events/sched/sched_switch/enable` != '1*' ]; then
fail "enable_event trigger on sched_process_fork did not work"
fi
reset_trigger
echo "Test disable_event trigger"
echo 1 > events/sched/sched_switch/enable
echo 'disable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
( echo "forked")
if [ `cat events/sched/sched_switch/enable` != '0*' ]; then
fail "disable_event trigger on sched_process_fork did not work"
fi
reset_trigger
echo "Test semantic error for event enable/disable trigger"
! echo 'enable_event:nogroup:noevent' > events/sched/sched_process_fork/trigger
! echo 'disable_event+1' > events/sched/sched_process_fork/trigger
echo 'enable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
! echo 'enable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
! echo 'disable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
do_reset
exit 0
#!/bin/sh
# description: event trigger - test stacktrace-trigger
do_reset() {
reset_trigger
echo > set_event
clear_trace
}
fail() { #msg
do_reset
echo $1
exit $FAIL
}
if [ ! -f set_event -o ! -d events/sched ]; then
echo "event tracing is not supported"
exit_unsupported
fi
if [ ! -f events/sched/sched_process_fork/trigger ]; then
echo "event trigger is not supported"
exit_unsupported
fi
reset_tracer
do_reset
FEATURE=`grep stacktrace events/sched/sched_process_fork/trigger`
if [ -z "$FEATURE" ]; then
echo "stacktrace trigger is not supported"
exit_unsupported
fi
echo "Test stacktrace tigger"
echo 0 > trace
echo 0 > options/stacktrace
echo 'stacktrace' > events/sched/sched_process_fork/trigger
( echo "forked")
grep "<stack trace>" trace > /dev/null || \
fail "stacktrace trigger on sched_process_fork did not work"
reset_trigger
echo "Test stacktrace semantic errors"
! echo "stacktrace:foo" > events/sched/sched_process_fork/trigger
echo "stacktrace" > events/sched/sched_process_fork/trigger
! echo "stacktrace" > events/sched/sched_process_fork/trigger
do_reset
exit 0