1. 10 Sep, 2009 4 commits
    • tracing: format clean ups · bd9cfca9
      Li Zefan authored
      Fix white-space formatting.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4AA8579B.4020706@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: remove dead code · e0ab5f2d
      Li Zefan authored
      Removes unreachable code.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4AA8579B.4020706@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: do not grab lock in wakeup latency function tracing · 478142c3
      Steven Rostedt authored
      The wakeup tracer, when enabled, has its own function tracer.
      It traces only the functions on the CPU where the task it is following
      resides. If that task is woken on one CPU but migrates to another CPU
      before it actually runs, the latency tracer starts tracing functions
      on the new CPU.
      
      To find which CPU the task is on, the wakeup function tracer calls
      task_cpu(wakeup_task). But to make sure the task does not disappear
      while being examined, it grabs the wakeup_lock, which is also taken
      when the task wakes up. Holding this lock means the function tracer
      does not need to worry about the task being freed while it checks
      the task's CPU.
      
      Jan Blunck found a problem with this approach on his 32-CPU box. When
      a task is being traced by the wakeup tracer, every function call on
      all 32 CPUs takes this one lock to see if the task is on that CPU.
      The lock thus serialized all function calls across all 32 CPUs.
      Needless to say, this caused major issues on that box; it would even
      lock up.
      
      This patch changes the wakeup latency tracer to insert a probe on the
      migrate-task tracepoint. When a task changes the CPU it will run on,
      the probe takes note. The wakeup function tracer then no longer needs
      to take the lock: it only compares the current CPU with a variable that
      holds the CPU the task is currently on. Races are not a concern, since
      it is OK to add or miss a single function trace.
      Reported-by: Jan Blunck <jblunck@suse.de>
      Tested-by: Jan Blunck <jblunck@suse.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
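The lockless scheme described in this commit can be sketched in plain C. All names below (wakeup_task_cpu, probe_migrate_task, wakeup_trace_allowed) are illustrative stand-ins, not the kernel's actual identifiers:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Last CPU the followed task was migrated to; -1 means unknown.
 * Written only by the tracepoint probe, read locklessly by tracers.
 * (Illustrative sketch, not the kernel's real implementation.) */
static atomic_int wakeup_task_cpu = -1;

/* Probe hooked to the migrate-task tracepoint: remember the CPU the
 * followed task will run on next. */
void probe_migrate_task(int dest_cpu)
{
    atomic_store(&wakeup_task_cpu, dest_cpu);
}

/* Function-trace hook: instead of taking a lock on every function call,
 * compare the executing CPU with the recorded one.  A stale read merely
 * adds or drops one function trace, which the commit says is acceptable. */
bool wakeup_trace_allowed(int current_cpu)
{
    return atomic_load(&wakeup_task_cpu) == current_cpu;
}
```

The key design point is that the probe runs only on migration, so the per-function fast path shrinks to a single atomic load and compare.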
    • ring-buffer: consolidate interface of rb_buffer_peek() · d8eeb2d3
      Robert Richter authored
      rb_buffer_peek() operates only on a struct ring_buffer_per_cpu
      *cpu_buffer. Thus, instead of passing the variables buffer and cpu,
      it is better to pass cpu_buffer directly. This also reduces the risk
      of races, since cpu_buffer is not calculated twice.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <1249045084-3028-1-git-send-email-robert.richter@amd.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
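The interface change can be illustrated with a simplified sketch. The struct layouts below are stand-ins, not the real definitions from kernel/trace/ring_buffer.c:

```c
#include <stddef.h>

/* Simplified stand-ins for the real ring-buffer structures. */
struct ring_buffer_per_cpu {
    int head_event;                          /* illustrative payload */
};

struct ring_buffer {
    struct ring_buffer_per_cpu *buffers[4];  /* one per CPU */
};

/* Old style: pass the global buffer plus a cpu number; the function must
 * look cpu_buffer up itself, and repeating that lookup risks racing with
 * a concurrent buffer swap. */
int rb_buffer_peek_old(struct ring_buffer *buffer, int cpu)
{
    struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
    return cpu_buffer->head_event;
}

/* New style: the caller resolves cpu_buffer once and passes it directly,
 * so the lookup happens exactly once. */
int rb_buffer_peek_new(struct ring_buffer_per_cpu *cpu_buffer)
{
    return cpu_buffer->head_event;
}
```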
  2. 06 Sep, 2009 2 commits
  3. 05 Sep, 2009 31 commits
  4. 04 Sep, 2009 3 commits
    • ring-buffer: only enable ring_buffer_swap_cpu when needed · 85bac32c
      Steven Rostedt authored
      Since the ability to swap the cpu buffers adds a small overhead to
      the recording of a trace, we only want to add it when needed.
      
      Only the irqsoff and preemptoff tracers use this feature, and neither
      is recommended for production kernels. This patch disables the swap
      support when neither irqsoff nor preemptoff is configured.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
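The compile-time gating can be sketched as follows. ALLOW_SWAP is an illustrative macro; in the kernel, the condition would come from a Kconfig symbol selected by the irqsoff and preemptoff tracers, and the real ring_buffer_swap_cpu() body is elided here:

```c
#include <errno.h>

/* Illustrative stand-in for a Kconfig symbol selected by the irqsoff and
 * preemptoff tracers.  Set to 0 to compile the swap support out. */
#define ALLOW_SWAP 1

#if ALLOW_SWAP
/* Swap support built in (real body elided in this sketch). */
int ring_buffer_swap_cpu(int cpu)
{
    (void)cpu;
    return 0;                 /* swap performed */
}
#else
/* Swap support compiled out: callers get a clean failure and the record
 * path pays no swap-related overhead. */
int ring_buffer_swap_cpu(int cpu)
{
    (void)cpu;
    return -ENODEV;
}
#endif
```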
    • ring-buffer: check for swapped buffers in start of committing · 62f0b3eb
      Steven Rostedt authored
      Because the irqsoff tracer can swap an internal CPU buffer, it is
      possible that a swap happens between the start of a write and the
      point where the committing bit is set (the committing bit disables
      swapping).
      
      This patch adds a check for this and will fail the write if it detects it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
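A hedged sketch of that check, with names and struct layouts simplified (the body below is illustrative, not the real kernel code):

```c
#include <stdbool.h>

/* Minimal stand-ins for the real structures. */
struct ring_buffer_per_cpu {
    int committing;                          /* set while a commit is active */
};

struct ring_buffer {
    struct ring_buffer_per_cpu *buffers[2];  /* one per CPU */
};

/* Start of a commit: once committing is set, swaps are blocked.  So if a
 * swap slipped in before that point, the cpu_buffer we hold is no longer
 * the one installed in the global buffer, and the write must fail. */
bool rb_start_commit(struct ring_buffer *buffer, int cpu,
                     struct ring_buffer_per_cpu *cpu_buffer)
{
    cpu_buffer->committing = 1;              /* blocks further swaps */
    if (buffer->buffers[cpu] != cpu_buffer) {
        cpu_buffer->committing = 0;          /* back out: we lost the race */
        return false;                        /* caller fails the write */
    }
    return true;
}
```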
    • tracing: report error in trace if we fail to swap latency buffer · e8165dbb
      Steven Rostedt authored
      The irqsoff tracer will fail to swap the CPU buffer with the max
      buffer if the swap preempts a commit in progress. Instead of silently
      ignoring this, this patch makes the tracer report when the last
      max-latency recording failed because it preempted a commit.
      
      The output of the latency tracer will look like this:
      
       # tracer: irqsoff
       #
       # irqsoff latency trace v1.1.5 on 2.6.31-rc5
       # --------------------------------------------------------------------
       # latency: 112 us, #1/1, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: -4281 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: save_args
       #  => ended at:   __do_softirq
       #
       #
       #                  _------=> CPU#
       #                 / _-----=> irqs-off
       #                | / _----=> need-resched
       #                || / _---=> hardirq/softirq
       #                ||| / _--=> preempt-depth
       #                |||| /
       #                |||||     delay
       #  cmd     pid   ||||| time  |   caller
       #     \   /      |||||   \   |   /
          bash-4281    1d.s6  265us : update_max_tr_single: Failed to swap buffers due to commit in progress
      
      Note that the latency time and the functions that disabled IRQs or
      preemption are still listed.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>