1. 10 Jun, 2009 2 commits
  2. 09 Jun, 2009 5 commits
• tracing: add protection around module events unload · 110bf2b7
      Steven Rostedt authored
When reading the trace buffer, there is a race: when a module
is unloaded, it removes events that are still referenced in the buffers.
This patch adds protection around the unloading of module events
and the reading of the trace buffers.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: add trace_seq_vprintf interface · 725c624a
      Steven Rostedt authored
      The code to update the print formats for events requires a vprintf
      format in the trace_seq. This patch adds that interface.
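A minimal sketch of what such an interface looks like, assuming the trace_seq keeps a page-sized buffer and a len field; the exact bounds handling and return convention in the real patch may differ:

        int trace_seq_vprintf(struct trace_seq *s, const char *fmt, va_list args)
        {
                int len = (PAGE_SIZE - 1) - s->len;     /* space left in the page-sized buffer */
                int ret;

                if (!len)
                        return 0;

                ret = vsnprintf(s->buffer + s->len, len, fmt, args);

                /* if the output was truncated, write nothing at all */
                if (ret >= len)
                        return 0;

                s->len += ret;
                return ret;
        }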
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: fix the block trace points print size · 6556d1df
      Steven Rostedt authored
      The sector field is either u64 or unsigned long depending on
      the arch. This patch casts the sector to unsigned long long to
      prevent the printf warnings.
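An illustration of the kind of cast involved; the field names follow the block events shown in the next commit, but treat this as a sketch rather than the exact hunk:

        /* sector_t is u64 or unsigned long depending on the arch/config,
         * so cast it explicitly before handing it to a %llu conversion */
        TP_printk("%d,%d %s %llu + %u [%s]",
                  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->rwbs,
                  (unsigned long long)__entry->sector,
                  __entry->nr_sector, __entry->comm)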
      
      [ Impact: remove compile warnings ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/events: convert block trace points to TRACE_EVENT() · 55782138
      Li Zefan authored
TRACE_EVENT is a more generic way to define tracepoints. Doing so adds
these new capabilities to these tracepoints:
      
        - zero-copy and per-cpu splice() tracing
        - binary tracing without printf overhead
        - structured logging records exposed under /debug/tracing/events
        - trace events embedded in function tracer output and other plugins
        - user-defined, per tracepoint filter expressions
        ...
      
      Cons:
      
        - no dev_t info for the output of plug, unplug_timer and unplug_io events.
          no dev_t info for getrq and sleeprq events if bio == NULL.
          no dev_t info for rq_abort,...,rq_requeue events if rq->rq_disk == NULL.
      
This is mainly because we can't get the device from a request queue.
          But this may change in the future.
      
  - A packet command is converted to a string in TP_fast_assign(), not TP_printk(),
    while blktrace does the conversion just before output.
      
          Since pc requests should be rather rare, this is not a big issue.
      
  - In blktrace, an event can have 2 different print formats, but a TRACE_EVENT
    has a single fixed format, which means we have some unused data in a trace entry.
      
          The overhead is minimized by using __dynamic_array() instead of __array().
      
      I've benchmarked the ioctl blktrace vs the splice based TRACE_EVENT tracing:
      
            dd                   dd + ioctl blktrace       dd + TRACE_EVENT (splice)
      1     7.36s, 42.7 MB/s     7.50s, 42.0 MB/s          7.41s, 42.5 MB/s
      2     7.43s, 42.3 MB/s     7.48s, 42.1 MB/s          7.43s, 42.4 MB/s
      3     7.38s, 42.6 MB/s     7.45s, 42.2 MB/s          7.41s, 42.5 MB/s
      
So the overhead of tracing is very small, and there is no regression when using
these trace events vs blktrace.
      
      And the binary output of TRACE_EVENT is much smaller than blktrace:
      
       # ls -l -h
       -rw-r--r-- 1 root root 8.8M 06-09 13:24 sda.blktrace.0
       -rw-r--r-- 1 root root 195K 06-09 13:24 sda.blktrace.1
       -rw-r--r-- 1 root root 2.7M 06-09 13:25 trace_splice.out
      
      Following are some comparisons between TRACE_EVENT and blktrace:
      
      plug:
        kjournald-480   [000]   303.084981: block_plug: [kjournald]
        kjournald-480   [000]   303.084981:   8,0    P   N [kjournald]
      
      unplug_io:
        kblockd/0-118   [000]   300.052973: block_unplug_io: [kblockd/0] 1
        kblockd/0-118   [000]   300.052974:   8,0    U   N [kblockd/0] 1
      
      remap:
        kjournald-480   [000]   303.085042: block_remap: 8,0 W 102736992 + 8 <- (8,8) 33384
        kjournald-480   [000]   303.085043:   8,0    A   W 102736992 + 8 <- (8,8) 33384
      
      bio_backmerge:
        kjournald-480   [000]   303.085086: block_bio_backmerge: 8,0 W 102737032 + 8 [kjournald]
        kjournald-480   [000]   303.085086:   8,0    M   W 102737032 + 8 [kjournald]
      
      getrq:
        kjournald-480   [000]   303.084974: block_getrq: 8,0 W 102736984 + 8 [kjournald]
        kjournald-480   [000]   303.084975:   8,0    G   W 102736984 + 8 [kjournald]
      
        bash-2066  [001]  1072.953770:   8,0    G   N [bash]
        bash-2066  [001]  1072.953773: block_getrq: 0,0 N 0 + 0 [bash]
      
      rq_complete:
        konsole-2065  [001]   300.053184: block_rq_complete: 8,0 W () 103669040 + 16 [0]
        konsole-2065  [001]   300.053191:   8,0    C   W 103669040 + 16 [0]
      
        ksoftirqd/1-7   [001]  1072.953811:   8,0    C   N (5a 00 08 00 00 00 00 00 24 00) [0]
        ksoftirqd/1-7   [001]  1072.953813: block_rq_complete: 0,0 N (5a 00 08 00 00 00 00 00 24 00) 0 + 0 [0]
      
      rq_insert:
        kjournald-480   [000]   303.084985: block_rq_insert: 8,0 W 0 () 102736984 + 8 [kjournald]
        kjournald-480   [000]   303.084986:   8,0    I   W 102736984 + 8 [kjournald]
      
      Changelog from v2 -> v3:
      
      - use the newly introduced __dynamic_array().
      
      Changelog from v1 -> v2:
      
      - use __string() instead of __array() to minimize the memory required
        to store hex dump of rq->cmd().
      
      - support large pc requests.
      
      - add missing blk_fill_rwbs_rq() in block_rq_requeue TRACE_EVENT.
      
      - some cleanups.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4A2DF669.5070905@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ring-buffer: fix ret in rb_add_time_stamp · f57a8a19
      Steven Rostedt authored
      The update of ret got mistakenly added to the if statement of
      rb_try_to_discard. The variable ret should be 1 on commit and zero
      otherwise.
      
      [ Impact: fix compiler warning and real bug ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 08 Jun, 2009 1 commit
• ring-buffer: pass in lockdep class key for reader_lock · 1f8a6a10
      Peter Zijlstra authored
      On Sun, 7 Jun 2009, Ingo Molnar wrote:
      > Testing tracer sched_switch: <6>Starting ring buffer hammer
      > PASSED
      > Testing tracer sysprof: PASSED
      > Testing tracer function: PASSED
      > Testing tracer irqsoff:
      > =============================================
      > PASSED
      > Testing tracer preemptoff: PASSED
      > Testing tracer preemptirqsoff: [ INFO: possible recursive locking detected ]
      > PASSED
      > Testing tracer branch: 2.6.30-rc8-tip-01972-ge5b9078-dirty #5760
      > ---------------------------------------------
      > rb_consumer/431 is trying to acquire lock:
      >  (&cpu_buffer->reader_lock){......}, at: [<c109eef7>] ring_buffer_reset_cpu+0x37/0x70
      >
      > but task is already holding lock:
      >  (&cpu_buffer->reader_lock){......}, at: [<c10a019e>] ring_buffer_consume+0x7e/0xc0
      >
      > other info that might help us debug this:
      > 1 lock held by rb_consumer/431:
      >  #0:  (&cpu_buffer->reader_lock){......}, at: [<c10a019e>] ring_buffer_consume+0x7e/0xc0
      
      The ring buffer is a generic structure, and can be used outside of
      ftrace. If ftrace traces within the use of the ring buffer, it can produce
      false positives with lockdep.
      
      This patch passes in a static lock key into the allocation of the ring
      buffer, so that different ring buffers will have their own lock class.
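A sketch of the usual kernel pattern for this, with the key stored in the ring buffer descriptor as the bracketed note below mentions; the exact macro in the patch may differ slightly:

        /* allocate a distinct lock class key at every ring_buffer_alloc() call site */
        #define ring_buffer_alloc(size, flags)                          \
        ({                                                              \
                static struct lock_class_key __key;                     \
                __ring_buffer_alloc((size), (flags), &__key);           \
        })

        /* ... and in the per-cpu buffer setup: */
        lockdep_set_class(&cpu_buffer->reader_lock, buffer->reader_lock_key);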
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1244477919.13761.9042.camel@twins>
      
      [ store key in ring buffer descriptor ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 05 Jun, 2009 1 commit
  5. 04 Jun, 2009 1 commit
  6. 03 Jun, 2009 8 commits
• tracing: add annotation to what type of stack trace is recorded · 563af16c
      Steven Rostedt authored
      The current method of printing out a stack trace is to add a new line
      and print out the trace:
      
          yum-updatesd-3120  [002]   573.691303:
       => do_softirq
       => irq_exit
       => smp_apic_timer_interrupt
       => apic_timer_interrupt
      
      This looks a bit awkward, and if we have both stack and user stack traces
      running, it would be nice to have a title to tell them apart, although
      it is easy to tell by the output.
      
      This patch adds an annotation to the start of the stack traces:
      
                  init-1     [003]   929.304979: <stack trace>
       => user_path_at
       => vfs_fstatat
       => vfs_stat
       => sys_newstat
       => system_call_fastpath
      
                   cat-3459  [002]  1016.824040: <user stack trace>
       =>  <0000003aae6c0250>
       =>  <00007ffff4b06ae4>
       =>  <69636172742f6775>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: fix multiple use of __print_flags and __print_symbolic · 56d8bd3f
      Steven Whitehouse authored
Here is an updated patch that includes the extra call to
trace_seq_init() as requested. This is against the latest
-tip tree and fixes the use of multiple __print_flags
and __print_symbolic in a single tracer. Also tested
to ensure it's working now:
      
      mount.gfs2-2534  [000]   235.850587: gfs2_glock_queue: 8.7 glock 1:2 dequeue PR
      mount.gfs2-2534  [000]   235.850591: gfs2_demote_rq: 8.7 glock 1:0 demote EX to NL flags:DI
      mount.gfs2-2534  [000]   235.850591: gfs2_glock_queue: 8.7 glock 1:0 dequeue EX
      glock_workqueue-2529  [000]   235.850666: gfs2_glock_state_change: 8.7 glock 1:0 state EX => NL tgt:NL dmt:NL flags:lDpI
      glock_workqueue-2529  [000]   235.850672: gfs2_glock_put: 8.7 glock 1:0 state NL => IV flags:I
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
LKML-Reference: <1244037123.29604.603.camel@localhost.localdomain>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/events: fix output format of user stack · 048dc50c
      walimis authored
      According to "events/ftrace/user_stack/format", fix the output of
      user stack.
      
      before fix:
      
        sh-1073  [000]    31.137561:  <b7f274fe> <-  <0804e33c> <-  <080835c1>
      
      after fix:
      
        sh-1072  [000]    37.039329:
       =>  <b7f8a4fe>
       =>  <0804e33c>
       =>  <080835c1>
Signed-off-by: walimis <walimisdev@gmail.com>
LKML-Reference: <1244016090-7814-3-git-send-email-walimisdev@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/events: fix output format of kernel stack · f11b3f4e
      walimis authored
      According to "events/ftrace/kernel_stack/format", output format of
      kernel stack should use "=>" instead of "<=".
      
The second problem is that we shouldn't skip the first entry in the stack:
although it seems to be duplicated when used in the "function" tracer,
events also use it. If we skip the first one, we will drop the topmost
entry of the stack.

The last problem is that if the last entry is ULONG_MAX (0xffffffff), we should
drop it, otherwise it will print a NULL name line.
      
      before fix:
      
            sh-1072  [000]   26.957239: sched_process_fork: parent sh:1072 child sh:1073
            sh-1072  [000]   26.957262:
       <= syscall_call
       <=
            sh-1072  [000]   26.957744: sched_switch: task sh:1072 [120] (R) ==> sh:1073 [120]
            sh-1072  [000]   26.957752:
       <= preempt_schedule
       <= wake_up_new_task
       <= do_fork
       <= sys_clone
       <= syscall_call
       <=
      
      After fix:
      
            sh-1075  [000]    39.791848: sched_process_fork: parent sh:1075  child sh:1076
            sh-1075  [000]    39.791871:
       => sys_clone
       => syscall_call
            sh-1075  [000]    39.792713: sched_switch: task sh:1075 [120] (R) ==> sh:1076 [120]
            sh-1075  [000]    39.792722:
       => schedule
       => preempt_schedule
       => wake_up_new_task
       => do_fork
       => sys_clone
       => syscall_call
Signed-off-by: walimis <walimisdev@gmail.com>
LKML-Reference: <1244016090-7814-2-git-send-email-walimisdev@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/trace_stack: fix the number of entries in the header · 083a63b4
      walimis authored
      The last entry in the stack_dump_trace is ULONG_MAX, which is not
      a valid entry, but max_stack_trace.nr_entries has accounted for it.
      So when printing the header, we should decrease it by one.
      Before fix, print as following, for example:
      
      	Depth    Size   Location    (53 entries)	<--- should be 52
      	-----    ----   --------
        0)     3264     108   update_wall_time+0x4d5/0x9a0
        ...
       51)       80      80   syscall_call+0x7/0xb
       ^^^
         it's correct.
Signed-off-by: walimis <walimisdev@gmail.com>
LKML-Reference: <1244016090-7814-1-git-send-email-walimisdev@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ring-buffer: discard timestamps that are at the start of the buffer · ea05b57c
      Steven Rostedt authored
      Every buffer page in the ring buffer includes its own time stamp.
      When an event is recorded to the ring buffer with a delta time greater
      than what can be held in the event header, a time stamp event is created.
      
If the created time stamp falls over to the next buffer page, it is
redundant because the buffer page holds a full time stamp. This patch
will try to discard the time stamp when it falls to the start of the
next page.
      
This change also fixes an issue with discarding events. If most events are
discarded, timestamps start to creep into the ring buffer. If we
do not discard the timestamps, they can fill up the ring buffer over
time and waste space.
      
      This change will keep time stamps from filling up over another page. If
      something is recorded in the buffer page, and the rest is filtered, then
      the time stamps can only fill up to the end of the page.
      
      [ Impact: prevent time stamps from filling ring buffer ]
Reported-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ring-buffer: try to discard unneeded timestamps · edd813bf
      Steven Rostedt authored
There are times when a race causes us to add a timestamp in a
nested write. This timestamp would just contain a zero delta and serves
no purpose.
      
      Now that we have a way to discard events, this patch will try to discard
      the timestamp instead of just wasting the space in the ring buffer.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ring-buffer: fix bug in ring_buffer_discard_commit · a2023556
      Tim Bird authored
      There's a bug in ring_buffer_discard_commit.  The wrong
      pointer is being compared in order to check if the event
      can be freed from the buffer rather than discarded
      (i.e. marked as PAD).
      
      I noticed this when I was working on duration filtering.
      The bug is not deadly - it just results in lots of wasted
      space in the buffer.  All filtered events are left in
      the buffer and marked as discarded, rather than being
      removed from the buffer to make space for other events.
      
      Unfortunately, when I fixed this bug, I got errors doing a
      filtered function trace.  Multiple TIME_EXTEND
      events pile up in the buffer, and trigger the
      following loop overage warning in rb_iter_peek():
      
      again:
      	...
      	if (RB_WARN_ON(cpu_buffer, ++nr_loops > 10))
      		return NULL;
      
I'm not sure what the best way is to fix this. I don't
know if I should extend the loop threshold, or if I should
make the test more complex (ignore TIME_EXTEND
events), or just get rid of this loop check completely.
      
      Note that if I implement a workaround for this, then I
      see another problem from rb_advance_iter().  I haven't
      tracked that one down yet.
      
      In general, it seems like the case of removing filtered
      events has not been working properly, and so some assumptions
      about buffer invariant conditions need to be revisited.
      
      Here's the patch for the simple fix:
      
      Compare correct pointer for checking if an event can be
      freed rather than left as discarded in the buffer.
Signed-off-by: Tim Bird <tim.bird@am.sony.com>
LKML-Reference: <4A25BE9E.5090909@am.sony.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 02 Jun, 2009 10 commits
• ftrace: do not profile functions when disabled · 0f6ce3de
      Steven Rostedt authored
      A race was found that if one were to enable and disable the function
      profiler repeatedly, then the system can panic. This was because a profiled
      function may be preempted just before disabling interrupts. While
      the profiler is disabled and then reenabled, the preempted function
      could start again, and access the hash as it is being initialized.
      
This patch adds a check in the irq-disabled section: if the profiler
is not enabled, it simply exits.

When profiling is disabled, the profile_enabled variable is cleared
before the function profiler is unregistered. The unregistering calls
stop_machine(), which also acts as a synchronize-sched barrier.
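A sketch of the check described above, assuming a profile_enabled-style flag as mentioned; the function and variable names here are illustrative:

        static void function_profile_call(unsigned long ip, unsigned long parent_ip)
        {
                unsigned long flags;

                local_irq_save(flags);
                /* bail out if the profiler was disabled while this callback was
                 * preempted, so we never touch a hash being (re)initialized */
                if (!ftrace_profile_enabled)
                        goto out;

                /* ... look up and update the profile hash entry for ip ... */
         out:
                local_irq_restore(flags);
        }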
      
      [ Impact: fix panic in enabling/disabling function profiler ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: make trace pipe recognize latency format flag · 112f38a7
      Steven Rostedt authored
The trace_pipe did not recognize the latency format flag and would produce
different output than the trace file. The problem was partly that the
trace flags in the iterator were not set, and partly that trace_pipe zeros
out part of the iterator (including the flags) in order to use the same
routines as the trace file. The iterator's trace_flags should not cause
any problems when trace_pipe does not zero them out.
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: remove redundant SOFTIRQ from softirq event traces · 1d080d6c
      Steven Rostedt authored
Converting the softirq tracer to use the flags options caused a
regression in the name output. Since the flag was used directly,
it was printed out in full (i.e. HRTIMER_SOFTIRQ).
      
      This patch only shows the softirq name without the SOFTIRQ part.
      
      [ Impact: fix regression of output from softirq events ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: add exports to use __print_symbolic and __print_flags from a module · ec081ddc
      Steven Whitehouse authored
      A patch to allow the use of __print_symbolic and __print_flags
      from a module. This allows the current GFS2 tracing patch to
      build.
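The exports presumably look something like the following, assuming __print_flags and __print_symbolic expand to the ftrace_print_*_seq() helpers; treat the exact symbol names as an assumption:

        EXPORT_SYMBOL_GPL(ftrace_print_flags_seq);
        EXPORT_SYMBOL_GPL(ftrace_print_symbols_seq);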
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
LKML-Reference: <1243868015.29604.542.camel@localhost.localdomain>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/events: introduce __dynamic_array() · 7fcb7c47
      Li Zefan authored
__string() is limited:

  - it's a char array, but we may want to define arrays of other types
  - a source string must be available, but we may only know the string size

We introduce __dynamic_array() to remove those limitations, and __string()
becomes a wrapper around it. As a side effect, __get_str() can now be used
in TP_fast_assign, not only in TP_printk.
      
      Take XFS for example, we have the string length in the dirent, but the
      string itself is not NULL-terminated, so __dynamic_array() can be used:
      
      TRACE_EVENT(xfs_dir2,
      	TP_PROTO(struct xfs_da_args *args),
      	TP_ARGS(args),
      
      	TP_STRUCT__entry(
      		__field(int, namelen)
      		__dynamic_array(char, name, args->namelen + 1)
      		...
      	),
      
      	TP_fast_assign(
      		char *name = __get_str(name);
      
      		if (args->namelen)
      			memcpy(name, args->name, args->namelen);
      		name[args->namelen] = '\0';
      
      		__entry->namelen = args->namelen;
      	),
      
      	TP_printk("name %.*s namelen %d",
      		  __entry->namelen ? __get_str(name) : NULL
      		  __entry->namelen)
      );
      
      [ Impact: allow defining dynamic size arrays ]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4A2384D2.3080403@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/events: put TP_fast_assign into braces · a9c1c3ab
      Li Zefan authored
      Currently TP_fast_assign has a limitation that we can't define local
      variables in it.
      
      Here's one use case when we introduce __dynamic_array():
      
      TP_fast_assign(
      	type *p = __get_dynamic_array(item);
      
      	foo(p);
      	bar(p);
      ),
      
      [ Impact: allow defining local variables in TP_fast_assign ]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4A2384B1.90100@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing/events: fix a typo in __string() format output · 6e25db44
      Li Zefan authored
      "tsize" should be "\tsize". Also remove the space before "__str_loc".
      
      Before:
       # cat tracing/events/irq/irq_handler_entry/format
              ...
              field:int irq;  offset:12;      size:4;
              field: __str_loc name;  offset:16;tsize:2;
              ...
      
      After:
       # cat tracing/events/irq/irq_handler_entry/format
      	...
              field:int irq;  offset:12;      size:4;
              field:__str_loc name;   offset:16;      size:2;
      	...
      
      [ Impact: standardize __string field description in events format file ]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: combine the default tracers into one config · 897f17a6
      Steven Rostedt authored
Both the event tracer and the sched switch plugin are selected by default
by all generic tracers. But if no generic tracer is enabled, their options
appear; and since either one of them will select the other, it only makes
sense to have the default tracers selected by one option.
      
      [ Impact: clean up kconfig menu ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• tracing: fix config options to not show when automatically selected · 5e0a0939
      Steven Rostedt authored
      There are two options that are selected by all tracers, but we want
      to have those options available when no tracer is selected. These are
      
       The event tracer and sched switch tracer.
      
They are enabled by all tracers, but if no tracer is selected we want
the options to appear. All tracers, including these two, select TRACING.
      Thus what we would like to do is:
      
        config EVENT_TRACER
      	bool "prompt"
      	depends on TRACING
      	select TRACING
      
      But that gives us a bug in the kbuild system since we just created a
      circular dependency. We only want the prompt to show when TRACING is off.
      
This patch adds GENERIC_TRACER, which all tracers will select instead of
TRACING. The two options (sched switch and event tracer) will select
TRACING directly and depend on !GENERIC_TRACER. This solves the circular
dependency.
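In the same Kconfig style as the snippet above, the resulting structure would look roughly like this (the prompt text and option names other than GENERIC_TRACER are illustrative):

	config GENERIC_TRACER
		bool
		select TRACING

	# real tracers now "select GENERIC_TRACER" instead of "select TRACING"

	config EVENT_TRACER
		bool "Trace events"
		depends on !GENERIC_TRACER
		select TRACING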
      
      [ Impact: hide options that are selected by default ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
• ftrace: add kernel command line function filtering · 2af15d6a
      Steven Rostedt authored
      When using ftrace=function on the command line to trace functions
      on boot up, one can not filter out functions that are commonly called.
      
      This patch adds two new ftrace command line commands.
      
        ftrace_notrace=function-list
        ftrace_filter=function-list
      
      Where function-list is a comma separated list of functions to filter.
      The ftrace_notrace will make the functions listed not be included
      in the function tracing, and ftrace_filter will only trace the functions
      listed.
      
      These two act the same as the debugfs/tracing/set_ftrace_notrace and
      debugfs/tracing/set_ftrace_filter respectively.
      
      The simple glob expressions that are allowed by the filter files can also
      be used by the command line interface.
      
      	ftrace_notrace=rcu*,*lock,*spin*
      
      Will not trace any function that starts with rcu, ends with lock, or has
      the word spin in it.
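For example, a boot command line that enables the function tracer but traces only a handful of functions might look like this (illustrative):

	ftrace=function ftrace_filter=do_fork,sched*,*switch*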
      
      Note, if the self tests are enabled, they may interfere with the filtering
      set by the command lines.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 01 Jun, 2009 10 commits
• tracing/stat: remove inappropriate safe walk on list · 43bd1236
      Frederic Weisbecker authored
register_stat_tracer() uses list_for_each_entry_safe()
to check whether a tracer is already present in the list.
But we don't delete anything from the list here, so
we don't need the safe version.

[ Impact: clean up list use in stat tracing ]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• tracing/stat: do some cleanups · dbd3fbdf
      Li Zefan authored
      - remove duplicate code in stat_seq_init()
      - update comments to reflect the change from stat list to stat rbtree
      
      [ Impact: clean up ]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• tracing/stat: remember to free root node · e1622806
      Li Zefan authored
When closing a trace_stat file, we destroy the rbtree constructed during
file open, but there is a memory leak: the root node is not freed.
      
      [ Impact: fix memory leak when closing a trace_stat file ]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• tracing/stat: change dummy_cmp() to return -1 · b3dd7ba7
      Li Zefan authored
      Currently the output of trace_stat/workqueues is totally reversed:
      
       # cat /debug/tracing/trace_stat/workqueues
          ...
          1       17       17      210       37   `-blk_unplug_work+0x0/0x57
          1     3779     3779      181       11   |-cfq_kick_queue+0x0/0x2f
          1     3796     3796                     kblockd/1:120
          ...
      
      The correct output should be:
      
          1     3796     3796                     kblockd/1:120
          1     3779     3779      181       11   |-cfq_kick_queue+0x0/0x2f
          1       17       17      210       37   `-blk_unplug_work+0x0/0x57
      
      It's caused by "tracing/stat: replace linked list by an rbtree for
      sorting"
      (53059c9b67a62a3dc8c80204d3da42b9267ea5a0).
      
dummy_cmp() should return -1, so that each rb_node is always inserted as
the right-most node in the rbtree; thus we sort the output in ascending
order.
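A sketch of the fix, assuming the stat framework's two-pointer comparator signature (treat the exact prototype as an assumption):

        /* always report "less than": every new node is inserted as the
         * right-most node, so the output keeps its insertion order */
        static int dummy_cmp(void *p1, void *p2)
        {
                return -1;
        }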
      
      [ Impact: fix the output of trace_stat/workqueues ]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• tracing/stat: replace linked list by an rbtree for sorting · 8f184f27
      Frederic Weisbecker authored
      When the stat tracing framework prepares the entries from a tracer
      to output them to the user, it starts by computing a linear sort
      through a linked list to give the entries ordered by relevance
      to the user.
      
      This is quite ugly and causes a small latency when we begin to
      read the file.
      
This patch changes that by turning the linked list into a red-black
tree. Although the whole iteration using the start and next tracer
callbacks while opening the file remains the same, it is now much
faster and more scalable.

The rbtree guarantees O(log(n)) insertions whereas a linked
list with linear sorting brought us O(n) despair. Now the
(visible) latency has disappeared.
      
      [ Impact: kill the latency while starting to read a stat tracer file ]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• tracing/stat: replace trace_stat_session by stat_session · 0d64f834
      Frederic Weisbecker authored
      The "trace" prefix in struct trace_stat_session type is annoying while
      reading the trace_stat.c file. It makes the lines longer, and
      is not that much useful to explain the sense of this type.
      
      Just keep "struct stat_session" for this type.
      
      [ Impact: make the code a bit more readable ]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• trace_workqueue: remove blank line between each cpu · f3c4ae26
      Zhaolei authored
The blank line between each cpu's workqueue stats is not necessary, because
the cpu number is enough to tell them apart by eye.
The old style also produced a blank line below the headline, and made the
code more complex by using a lock, disabling irqs and getting the per-cpu var.
      
      Old style:
       # CPU  INSERTED  EXECUTED   NAME
       # |      |         |          |
      
         0   8644       8644       events/0
         0      0          0       cpuset
         ...
         0      1          1       kdmflush
      
         1  35365      35365       events/1
         ...
      
      New style:
       # CPU  INSERTED  EXECUTED   NAME
       # |      |         |          |
      
         0   8644       8644       events/0
         0      0          0       cpuset
         ...
         0      1          1       kdmflush
         1  35365      35365       events/1
         ...
      
      [ Impact: provide more readable code ]
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• trace_workqueue: remove cpu_workqueue_stats->first_entry · b8867164
      Zhaolei authored
      cpu_workqueue_stats->first_entry is useless because we can retrieve the
      header of a cpu workqueue using:
      if (&cpu_workqueue_stats->list == workqueue_cpu_stat(cpu)->list.next)
      
      [ Impact: cleanup ]
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• trace_workqueue: use list_for_each_entry() instead of list_for_each_entry_safe() · 1fdfca9c
      Zhaolei authored
      No need to use list_for_each_entry_safe() in iteration without deleting
      any node, we can use list_for_each_entry() instead.
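A minimal sketch of the pattern; the struct, list head and predicate are illustrative rather than the actual trace_workqueue code:

        struct cpu_workqueue_stats *node;

        /* no deletion happens inside this walk, so the plain iterator is enough;
         * list_for_each_entry_safe() is only needed when the current node may
         * be removed while iterating */
        list_for_each_entry(node, &workqueue_cpu_stat(cpu)->list, list) {
                if (node_matches(node))         /* illustrative predicate */
                        return node;
        }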
      
      [ Impact: cleanup ]
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• ftrace, workqueuetrace: make workqueue tracepoints use TRACE_EVENT macro · fb39125f
      Zhaolei authored
      v3: zhaolei@cn.fujitsu.com: Change TRACE_EVENT definition to new format
          introduced by Steven Rostedt: consolidate trace and trace_event headers
      v2: kosaki@jp.fujitsu.com: print the function names instead of addr, and zap
          the work addr
      v1: zhaolei@cn.fujitsu.com: Make workqueue tracepoints use TRACE_EVENT macro
      
      TRACE_EVENT is a more generic way to define tracepoints.
      Doing so adds these new capabilities to the tracepoints:
      
        - zero-copy and per-cpu splice() tracing
        - binary tracing without printf overhead
        - structured logging records exposed under /debug/tracing/events
        - trace events embedded in function tracer output and other plugins
        - user-defined, per tracepoint filter expressions
      
      Then, this patch converts DEFINE_TRACE to TRACE_EVENT in workqueue related
      tracepoints.
      
      [ Impact: expand workqueue tracer to events tracing ]
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  9. 27 May, 2009 2 commits
• ftrace: don't convert function's local variable name in macro · f2aebaee
      Zhaolei authored
      "call" is an argument of macro, but it is also used as a local
      variable name of function in macro.
      We should keep this local variable name distinct from any
      CPP macro parameter name if both are in the same macro scope,
      although it hasn't caused any problem yet.
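An illustrative example (not the actual ftrace macro) of the hazard being avoided:

        /* CPP substitutes the argument text for every occurrence of 'call',
         * including the identifier in the local declaration below, so the
         * "local variable" silently becomes whatever the caller passed in */
        #define REGISTER_EVENT(call)                                    \
        do {                                                            \
                struct ftrace_event_call *call = find_event(#call);     \
                register_event(call);                                   \
        } while (0)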
      
      [ Impact: robustify macro ]
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
• trace: disable preemption before taking raw spinlocks · 5b6045a9
      Heiko Carstens authored
s390 code uses smp_processor_id() in the __raw_spin_lock() code, which
reveals that a (raw) spinlock is taken without preemption disabled.
This can potentially deadlock.

To fix this, explicitly disable and enable preemption.
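A sketch of the fix in trace_find_cmdline(), assuming the cmdline map is protected by a raw trace_cmdline_lock; the lookup helper is illustrative:

        void trace_find_cmdline(int pid, char comm[])
        {
                /* taking a raw spinlock with preemption enabled trips the
                 * smp_processor_id() debug check on s390 and can deadlock,
                 * so pin the task to this CPU around the lock */
                preempt_disable();
                __raw_spin_lock(&trace_cmdline_lock);
                __trace_find_cmdline(pid, comm);        /* illustrative lookup helper */
                __raw_spin_unlock(&trace_cmdline_lock);
                preempt_enable();
        }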
      
      BUG: using smp_processor_id() in preemptible [00000000] code: cat/2278
      caller is trace_find_cmdline+0x40/0xfc
      CPU: 0 Not tainted 2.6.30-rc7-dirty #39
      Process cat (pid: 2278, task: 000000003faedb68, ksp: 000000003b33b988)
      000000003b33b988 000000003b33bae0 0000000000000002 0000000000000000
             000000003b33bb80 000000003b33baf8 000000003b33baf8 00000000000175d6
             0000000000000001 000000003b33b988 000000003f9b0000 000000000000000b
             000000000000000c 000000003b33bb40 000000003b33bae0 0000000000000000
             0000000000000000 00000000000175d6 000000003b33bae0 000000003b33bb28
      Call Trace:
      ([<00000000000174b2>] show_trace+0x112/0x170)
       [<0000000000017582>] show_stack+0x72/0x100
       [<0000000000441538>] dump_stack+0xc8/0xd8
       [<000000000025c350>] debug_smp_processor_id+0x114/0x130
       [<00000000000bf0e4>] trace_find_cmdline+0x40/0xfc
       [<00000000000c35d4>] trace_print_context+0x58/0xac
       [<00000000000bb676>] print_trace_line+0x416/0x470
       [<00000000000bc8fe>] s_show+0x4e/0x428
       [<000000000013834e>] seq_read+0x36a/0x5d4
       [<0000000000112a78>] vfs_read+0xc8/0x174
       [<0000000000112c58>] SyS_read+0x74/0xc4
       [<000000000002c7ae>] sysc_noemu+0x10/0x16
       [<000002000012436c>] 0x2000012436c
      1 lock held by cat/2278:
       #0:  (&p->lock){+.+.+.}, at: [<0000000000138056>] seq_read+0x72/0x5d4
      
      [ Impact: fix preempt-unsafe raw spinlock ]
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>