1. 09 May, 2012 8 commits
  2. 08 May, 2012 1 commit
    • Merge branch 'perf/annotate' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core · 149936a0
      Ingo Molnar authored
      
      Perf annotate browser improvements:
      
       - Get back the line separating the overheads from the disassembly,
         requested by Peter Zijlstra. Linus agreed now that it is a solid line
         and more column real estate was harvested. It also has the jump->arrow
         lines separated from it by the address/jump target column.
      
       - Don't change asm line color when toggling source code view. Requested by
         Peter Zijlstra.
      
      Current snapshot:
      
       avtab_search_node
              │      push   %rbp
              │      mov    %rsp,%rbp
              │    → callq  mcount
              │      movzwl 0x6(%rsi),%edx
              │      and    $0x7fff,%dx
              │      test   %rdi,%rdi
              │    ↓ jne    20
         0.42 │17:┌─→xor    %eax,%eax
              │19:│  leaveq
         0.42 │   │← retq
              │   │  nopl   0x0(%rax,%rax,1)
              │20:│  mov    (%rdi),%rax
         0.08 │   │  test   %rax,%rax
              │   └──je     17
              │      movzwl (%rsi),%ecx
              │      movzwl 0x2(%rsi),%r9d
              │      movzwl 0x4(%rsi),%r8d
              │      movzwl %cx,%esi
              │      movzwl %r9w,%r10d
              │      shl    $0x9,%esi
              │      lea    (%rsi,%r10,4),%esi
              │      lea    (%r8,%rsi,1),%esi
              │      and    0x10(%rdi),%si
              │      movzwl %si,%esi
              │      mov    (%rax,%rsi,8),%rax
         1.01 │      test   %rax,%rax
              │    ↑ je     19
              │      nopw   0x0(%rax,%rax,1)
         3.19 │60:   cmp    %cx,(%rax)
              │    ↓ jne    7e
         0.08 │      cmp    %r9w,0x2(%rax)
              │    ↓ jne    7e
              │      cmp    %r8w,0x4(%rax)
              │    ↓ jne    79
              │      test   %dx,0x6(%rax)
              │    ↑ jne    19
              │79:   cmp    %r8w,0x4(%rax)
        83.45 │7e: ↑ ja     17
         3.36 │      mov    0x10(%rax),%rax
         7.98 │      test   %rax,%rax
              │    ↑ jne    60
              │      leaveq
              │    ← retq
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 07 May, 2012 3 commits
  4. 04 May, 2012 1 commit
  5. 03 May, 2012 3 commits
  6. 28 Apr, 2012 2 commits
    • ftrace/x86: Remove the complex ftrace NMI handling code · 4a6d70c9
      Steven Rostedt authored
      Because ftrace function tracing modifies code that can be executed
      in NMI context, which stop_machine() does not stop, ftrace had to
      use a complex algorithm with various stages of setup and memory
      barriers to make it work.
      
      With the new breakpoint method, this is no longer required. The changes
      to the code can be done without any problem in NMI context, as well as
      without stop machine altogether. Remove the complex code as it is
      no longer needed.
      
      Also, a lot of the notrace annotations could be removed from the
      NMI code, as it is now safe to trace it. The exception is do_nmi()
      itself, which does some special work to handle running in the
      debug stack: the breakpoint method can cause NMIs to double-nest
      the debug stack if it is not set up properly, and that setup is
      done in do_nmi(), thus that function must not be traced.

      (Note: the sh architecture may want to do the same.)
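
      For reference, a minimal sketch of the annotation involved; the
      notrace macro below matches the kernel's definition, but the
      do_nmi() signature and body here are placeholders:

        /* Functions marked notrace get no mcount call, so the function
         * tracer can never land inside them. */
        #define notrace __attribute__((no_instrument_function))

        /* Placeholder: the real do_nmi() takes pt_regs and an error code. */
        notrace void do_nmi(void)
        {
                /* handle the NMI; set up/adjust the debug stack */
        }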
      
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine · 08d636b6
      Steven Rostedt authored
      This method changes x86 to add a breakpoint to the mcount locations
      instead of calling stop_machine().
      
      Now that iret can be handled by NMIs, we perform the following steps
      to update code (a sketch follows the bracketed note below):
      
      1) Add a breakpoint to all locations that will be modified
      
      2) Sync all cores
      
      3) Update all locations to be either a nop or call, leaving the
         breakpoint opcode byte in place
      
      4) Sync all cores
      
      5) Remove the breakpoint with the new code.
      
      6) Sync all cores
      
      [
        Added updates that Masami suggested:
         Use unlikely(modifying_ftrace_code) in int3 trap to keep kprobes efficient.
         Don't use NOTIFY_* in ftrace handler in int3 as it is not a notifier.
      ]
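
      A minimal standalone sketch of steps 1-6, including Masami's
      modifying_ftrace_code check (all names here are illustrative, not
      the kernel's exact API; real patching goes through text_poke()-style
      helpers and IPI-driven sync_core() calls, not a plain memcpy):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define INT3             0xCC  /* x86 breakpoint opcode */
        #define MCOUNT_INSN_SIZE 5     /* size of a call/nop at an mcount site */

        /* The int3 trap tests unlikely(modifying_ftrace_code) first, so
         * kprobes stay fast when no ftrace update is in flight. */
        static int modifying_ftrace_code;

        /* Stub: really an IPI to every core running sync_core(). */
        static void sync_all_cores(void) { }

        /* Stub: really a safe kernel-text write. */
        static void text_write(uint8_t *ip, const uint8_t *src, size_t len)
        {
                memcpy(ip, src, len);
        }

        static void update_site(uint8_t *ip, const uint8_t *new_insn)
        {
                uint8_t bp = INT3;

                modifying_ftrace_code = 1;
                text_write(ip, &bp, 1);      /* 1) cores hitting it get skipped */
                sync_all_cores();            /* 2) all see the breakpoint       */
                text_write(ip + 1, new_insn + 1, MCOUNT_INSN_SIZE - 1); /* 3)   */
                sync_all_cores();            /* 4) all see the new tail bytes   */
                text_write(ip, new_insn, 1); /* 5) breakpoint -> new opcode     */
                sync_all_cores();            /* 6) all see the new instruction  */
                modifying_ftrace_code = 0;
        }

        int main(void)
        {
                /* a fake mcount site: 5-byte nop -> call rel32 */
                uint8_t site[MCOUNT_INSN_SIZE] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };
                const uint8_t call[MCOUNT_INSN_SIZE] = { 0xe8, 0, 0, 0, 0 };

                update_site(site, call);
                printf("first byte is now %#x\n", site[0]);
                return 0;
        }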
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 27 Apr, 2012 6 commits
  8. 26 Apr, 2012 4 commits
  9. 25 Apr, 2012 6 commits
  10. 24 Apr, 2012 4 commits
    • perf annotate browser: Initial loop detection · a3f895be
      Arnaldo Carvalho de Melo authored
      Simple algorithm: just look for the next backward jump whose target
      lies before the cursor.

      Then draw an arrow connecting the jump to its target.

      Do this as you move the cursor, entering/exiting possible loops
      (see the sketch after the example below).
      
      Ex (graph chars replaced to avoid mail encoding woes):
      
      avc_has_perm_flags
          0.00 |         nopl   0x0(%rax)
          5.36 |+-> 68:  mov    (%rax),%rax
          5.15 ||        test   %rax,%rax
          0.00 ||      v je     130
          2.96 ||   74:  cmp    -0x20(%rax),%ebx
         47.38 ||        lea    -0x20(%rax),%rcx
          0.28 ||      ^ jne    68
          3.16 ||        cmp    -0x18(%rax),%dx
          0.00 |+------^ jne    68
          4.92 |         cmp    0x4(%rcx),%r13d
          0.00 |       v jne    68
          1.15 |         test   %rcx,%rcx
          0.00 |       v je     130
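
      A hedged sketch of that search (the struct and field names are
      illustrative, not perf's actual annotate-browser types):

        #include <stdbool.h>
        #include <stddef.h>

        /* Illustrative disassembly-line record. */
        struct disasm_line {
                unsigned long offset;   /* offset of this instruction */
                bool          is_jump;  /* is it a jump instruction?  */
                unsigned long target;   /* jump destination offset    */
        };

        /*
         * Scan forward from the cursor for the first backward jump whose
         * target lies at or before the cursor: that jump and its target
         * bracket the loop the cursor currently sits in.
         */
        static int find_loop(const struct disasm_line *lines, size_t nr,
                             size_t cursor, size_t *start, size_t *end)
        {
                for (size_t i = cursor; i < nr; i++) {
                        if (!lines[i].is_jump || lines[i].target >= lines[i].offset)
                                continue;       /* not a backward jump     */
                        if (lines[i].target > lines[cursor].offset)
                                continue;       /* loop doesn't enclose us */
                        for (size_t j = 0; j <= i; j++) {
                                if (lines[j].offset == lines[i].target) {
                                        *start = j;     /* jump target line */
                                        *end   = i;     /* the jump itself  */
                                        return 0;
                                }
                        }
                }
                return -1;      /* cursor is not inside a detected loop */
        }

      The browser would re-run this each time the cursor moves and draw
      the connecting arrow between the two lines it returns.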
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-5gairf6or7dazlx3ocxwvftm@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • ring-buffer: Add per_cpu ring buffer control files · 438ced17
      Vaibhav Nagarnaik authored
      Add a debugfs entry under per_cpu/ folder for each cpu called
      buffer_size_kb to control the ring buffer size for each CPU
      independently.
      
      If the global file buffer_size_kb is used to set the size, all the
      individual ring buffers are adjusted to the given size, and the
      buffer_size_kb file reports the common size to maintain backward
      compatibility.
      
      If the buffer_size_kb file under the per_cpu/ directory is used to
      change buffer size for a specific CPU, only the size of the respective
      ring buffer is updated. When tracing/buffer_size_kb is read, it reports
      'X' to indicate that sizes of per_cpu ring buffers are not equivalent.
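
      A small userspace usage example (assuming debugfs is mounted at
      /sys/kernel/debug): resize CPU 0's buffer, then read the global
      file back.

        #include <stdio.h>

        int main(void)
        {
                const char *percpu =
                        "/sys/kernel/debug/tracing/per_cpu/cpu0/buffer_size_kb";
                const char *global =
                        "/sys/kernel/debug/tracing/buffer_size_kb";
                char line[64];
                FILE *f;

                f = fopen(percpu, "w");
                if (!f) { perror(percpu); return 1; }
                fprintf(f, "2048\n");   /* 2 MB, for CPU 0 only */
                fclose(f);

                f = fopen(global, "r");
                if (!f) { perror(global); return 1; }
                if (fgets(line, sizeof(line), f))
                        printf("global: %s", line); /* prints "X" if unequal */
                fclose(f);
                return 0;
        }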
      
      Link: http://lkml.kernel.org/r/1328212844-11889-1-git-send-email-vnagarnaik@google.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Michael Rubin <mrubin@google.com>
      Cc: David Sharp <dhsharp@google.com>
      Cc: Justin Teravest <teravest@google.com>
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove an unneeded check in trace_seq_buffer() · 5a26c8f0
      Dan Carpenter authored
      memcpy() returns a pointer to "buf".  Hopefully, it's not NULL here or
      we would already have Oopsed.
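
      In miniature, the pattern being removed (illustrative, not the
      exact tracing code): memcpy() always returns its first argument,
      so testing the return value for NULL is dead code.

        #include <string.h>

        static void copy_out(void *buf, const void *src, size_t len)
        {
                /* Before: can never fail, memcpy() just returns 'buf':
                 *     if (!memcpy(buf, src, len))
                 *             return;
                 */
                memcpy(buf, src, len);  /* After: call it and move on */
        }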
      
      Link: http://lkml.kernel.org/r/20120420063145.GA22649@elgon.mountain
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add percpu buffers for trace_printk() · 07d777fe
      Steven Rostedt authored
      Currently, trace_printk() writes into a single buffer to calculate
      the size and format needed to save the trace. To do this safely in
      an SMP environment, a spin_lock() is taken so that only one writer
      at a time can use the buffer. But this can perturb what is being
      traced, adding synchronization that would not be there otherwise.
      
      Per-cpu buffers would be ideal, but since trace_printk() is only
      used in development, having per-cpu buffers for something never
      used is a waste of space. Thus, the trace_bprintk() format section
      is changed to hold static fmts as well as dynamic ones. Then at
      boot up, we can check whether the section that holds the
      trace_printk() formats is non-empty; if it contains anything, we
      know a trace_printk() has been added to the kernel, and at that
      point the trace_printk() per-cpu buffers are allocated. The same
      check is done at module load time, in case a module containing a
      trace_printk() is added.
      
      Once the buffers are allocated, they are never freed. If you use
      a trace_printk() then you should know what you are doing.
      
      A buffer is made for each type of context:
      
        normal
        softirq
        irq
        nmi
      
      The context is checked and the appropriate buffer is used. This
      allows for totally lockless usage of trace_printk(), which no
      longer even disables interrupts (sketched below).
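
      A hedged sketch of the buffer selection, shown for one CPU (the
      context tests mirror the kernel's usual in_nmi()/in_irq()/
      in_softirq() helpers; the names and sizes are illustrative):

        #define TRACE_BUF_SIZE 1024

        enum trace_ctx { CTX_NORMAL, CTX_SOFTIRQ, CTX_IRQ, CTX_NMI, CTX_MAX };

        /* Stubs standing in for the kernel's context tests. */
        static int in_nmi(void)     { return 0; }
        static int in_irq(void)     { return 0; }
        static int in_softirq(void) { return 0; }

        /* Contexts only nest upward (normal -> softirq -> irq -> nmi),
         * so a writer can never be interrupted by another writer that
         * would use the same buffer; that is what makes it lockless. */
        static char trace_buf[CTX_MAX][TRACE_BUF_SIZE];

        static char *get_trace_buf(void)
        {
                if (in_nmi())     return trace_buf[CTX_NMI];
                if (in_irq())     return trace_buf[CTX_IRQ];
                if (in_softirq()) return trace_buf[CTX_SOFTIRQ];
                return trace_buf[CTX_NORMAL];
        }
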
      Requested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 21 Apr, 2012 2 commits