1. 29 Mar, 2018 1 commit
  2. 27 Mar, 2018 9 commits
  3. 25 Mar, 2018 1 commit
    • Merge tag 'perf-core-for-mingo-4.17-20180323' of... · a0ac7b3c
      Ingo Molnar authored
      Merge tag 'perf-core-for-mingo-4.17-20180323' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
      - Move non-TUI specific annotation routines out of the TUI browser so
        that they can be used in other UIs, and, to demonstrate that, introduce
        a 'perf annotate --stdio2' option that will apply those formatting
        routines to provide a non-interactive annotation mode (Arnaldo Carvalho de Melo)
      
      - Add 'P' hotkey to the annotation TUI to dump the currently annotated
        symbol to a file, easing reporting via e-mail by getting rid of the
        spaces + right hand side scrollbar chars (Arnaldo Carvalho de Melo)
      
      - Support --ignore-vmlinux in 'perf report' and 'perf annotate', as
        already present in 'perf top', to use /proc/{kcore,kallsyms},
        allowing one to see what is in fact running (patched stuff, alternatives,
        ftrace, etc.), not the initial state of the kernel (vmlinux) (Arnaldo Carvalho de Melo)
      
      - Support 'jump' instructions to a different function, treating them
        as 'call' instructions (Arnaldo Carvalho de Melo)
      
      - Fix some jump artifacts when using vmlinux + ASM functions, where
        the ELF symtab for entry_SYSCALL_64, for instance, includes that
        function and what comes after the 'syscall_return_via_sysret' label,
        but objdump -dS prints the jump targets + offsets using the
        syscall_return_via_sysret address, which was confusing 'perf annotate'.
        See the cset comments for further info (Arnaldo Carvalho de Melo)
      
      - Report error from dwfl_attach_state() in the unwind code (Martin Vuille)
      
      - Reference Py_None before returning it in the python extension (Petr Machata)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 24 Mar, 2018 2 commits
  5. 23 Mar, 2018 5 commits
    • perf annotate: Use absolute addresses to calculate jump target offsets · 980b68ec
      Arnaldo Carvalho de Melo authored
      These types of jumps were confusing the annotate browser:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
        Percent│ffffffff81a00020:   swapgs
        <SNIP>
               │ffffffff81a00128: ↓ jae    ffffffff81a00139 <syscall_return_via_sysret+0x53>
        <SNIP>
               │ffffffff81a00155: → jmpq   *0x825d2d(%rip)   # ffffffff82225e88 <pv_cpu_ops+0xe8>
      
      I.e. the syscall_return_via_sysret function is actually "inside" the
      entry_SYSCALL_64 function, and the offsets in jumps like these (+0x53)
      are relative to syscall_return_via_sysret, not to entry_SYSCALL_64.
      
      Or this may be some artifact in how the assembler marks the start and
      end of a function and how this ends up in the ELF symtab for vmlinux,
      i.e. syscall_return_via_sysret() isn't "inside" entry_SYSCALL_64, but
      just right after it.
      
      From readelf -sw vmlinux:
      
       80267: ffffffff81a00020   315 NOTYPE  GLOBAL DEFAULT    1 entry_SYSCALL_64
         316: ffffffff81a000e6     0 NOTYPE  LOCAL  DEFAULT    1 syscall_return_via_sysret
      
       0xffffffff81a00020 + 315 > 0xffffffff81a000e6
      
      So instead of looking for offsets after that last '+' sign, calculate
      offsets for jump target addresses that are inside the function being
      disassembled from the absolute address, 0xffffffff81a00139 in this case,
      by subtracting from it the objdump address of the start of the function
      being disassembled, entry_SYSCALL_64() in this case.
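
      A minimal sketch of that calculation (hypothetical names and types,
      not the actual perf code), assuming we have the objdump start address
      and ELF size of the function being disassembled:

        /* Hedged sketch: derive a jump target's offset from its absolute
         * address and the start of the function being disassembled,
         * instead of trusting objdump's "+0x53" suffix, which is relative
         * to the nearest preceding label. */
        #include <stdbool.h>
        #include <stdint.h>

        struct dis_func {
                uint64_t start; /* e.g. 0xffffffff81a00020 for entry_SYSCALL_64 */
                uint64_t size;  /* e.g. 315, from the ELF symtab */
        };

        static bool jump_target_offset(const struct dis_func *f,
                                       uint64_t target, uint64_t *offset)
        {
                if (target < f->start || target >= f->start + f->size)
                        return false;        /* target is in another function */
                *offset = target - f->start; /* 0xffffffff81a00139 -> 0x119 */
                return true;
        }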
      
      So, before this patch:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
      Percent│       pop    %r10
             │       pop    %r9
             │       pop    %r8
             │       pop    %rax
             │       pop    %rsi
             │       pop    %rdx
             │       pop    %rsi
             │       mov    %rsp,%rdi
             │       mov    %gs:0x5004,%rsp
             │       pushq  0x28(%rdi)
             │       pushq  (%rdi)
             │       push   %rax
             │     ↑ jmp    6c
             │       mov    %cr3,%rdi
             │     ↑ jmp    62
             │       mov    %rdi,%rax
             │       and    $0x7ff,%rdi
             │       bt     %rdi,%gs:0x2219a
             │     ↑ jae    53
             │       btr    %rdi,%gs:0x2219a
             │       mov    %rax,%rdi
             │     ↑ jmp    5b
      
      After:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
        0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode
             │       pop    %r10
             │       pop    %r9
             │       pop    %r8
             │       pop    %rax
             │       pop    %rsi
             │       pop    %rdx
             │       pop    %rsi
             │       mov    %rsp,%rdi
             │       mov    %gs:0x5004,%rsp
             │       pushq  0x28(%rdi)
             │       pushq  (%rdi)
             │       push   %rax
             │     ↓ jmp    132
             │       mov    %cr3,%rdi
             │    ┌──jmp    128
             │    │  mov    %rdi,%rax
             │    │  and    $0x7ff,%rdi
             │    │  bt     %rdi,%gs:0x2219a
             │    │↓ jae    119
             │    │  btr    %rdi,%gs:0x2219a
             │    │  mov    %rax,%rdi
             │    │↓ jmp    121
             │119:│  mov    %rax,%rdi
             │    │  bts    $0x3f,%rdi
             │121:│  or     $0x800,%rdi
             │128:└─→or     $0x1000,%rdi
             │       mov    %rdi,%cr3
             │132:   pop    %rax
             │       pop    %rdi
             │       pop    %rsp
             │     → jmpq   *0x825d2d(%rip)        # ffffffff82225e88 <pv_cpu_ops+0xe8>
      
      With those at least navigating to the right destination, an improvement
      for these cases seems to be to somehow mark those inner functions,
      which in this case could be:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
             │syscall_return_via_sysret:
             │       pop    %r15
             │       pop    %r14
             │       pop    %r13
             │       pop    %r12
             │       pop    %rbp
             │       pop    %rbx
             │       pop    %rsi
             │       pop    %r10
             │       pop    %r9
             │       pop    %r8
             │       pop    %rax
             │       pop    %rsi
             │       pop    %rdx
             │       pop    %rsi
             │       mov    %rsp,%rdi
             │       mov    %gs:0x5004,%rsp
             │       pushq  0x28(%rdi)
             │       pushq  (%rdi)
             │       push   %rax
             │     ↓ jmp    132
             │       mov    %cr3,%rdi
             │    ┌──jmp    128
             │    │  mov    %rdi,%rax
             │    │  and    $0x7ff,%rdi
             │    │  bt     %rdi,%gs:0x2219a
             │    │↓ jae    119
             │    │  btr    %rdi,%gs:0x2219a
             │    │  mov    %rax,%rdi
             │    │↓ jmp    121
             │119:│  mov    %rax,%rdi
             │    │  bts    $0x3f,%rdi
             │121:│  or     $0x800,%rdi
             │128:└─→or     $0x1000,%rdi
             │       mov    %rdi,%cr3
             │132:   pop    %rax
             │       pop    %rdi
             │       pop    %rsp
             │     → jmpq   *0x825d2d(%rip)        # ffffffff82225e88 <pv_cpu_ops+0xe8>
      
      This all gets much better viewed if one uses 'perf report --ignore-vmlinux'
      forcing the usage of /proc/kcore + /proc/kallsyms, when the above
      actually gets down to:
      
        # perf report --ignore-vmlinux
        ## typing '/64' will show the function names containing '64';
        ## navigate to entry_SYSCALL_64_after_hwframe,
        ## press 'A' to annotate, then 'P' to print that annotation
        ## to a file
        ## From another xterm (or see on screen; this 'P' thing is for
        ## getting rid of those right side scroll bars/spaces):
        # cat /entry_SYSCALL_64_after_hwframe.annotation
        entry_SYSCALL_64_after_hwframe() /proc/kcore
        Event: cycles:ppp
      
        Percent
                    Disassembly of section load0:
      
                    ffffffff9aa00044 <load0>:
         11.97        push   %rax
          4.85        push   %rdi
                      push   %rsi
          2.59        push   %rdx
          2.27        push   %rcx
          0.32        pushq  $0xffffffffffffffda
          1.29        push   %r8
                      xor    %r8d,%r8d
          1.62        push   %r9
          0.65        xor    %r9d,%r9d
          1.62        push   %r10
                      xor    %r10d,%r10d
          5.50        push   %r11
                      xor    %r11d,%r11d
          3.56        push   %rbx
                      xor    %ebx,%ebx
          4.21        push   %rbp
                      xor    %ebp,%ebp
          2.59        push   %r12
          0.97        xor    %r12d,%r12d
          3.24        push   %r13
                      xor    %r13d,%r13d
          2.27        push   %r14
                      xor    %r14d,%r14d
          4.21        push   %r15
                      xor    %r15d,%r15d
          0.97        mov    %rsp,%rdi
          5.50      → callq  do_syscall_64
         14.56        mov    0x58(%rsp),%rcx
          7.44        mov    0x80(%rsp),%r11
          0.32        cmp    %rcx,%r11
                    → jne    swapgs_restore_regs_and_return_to_usermode
          0.32        shl    $0x10,%rcx
          0.32        sar    $0x10,%rcx
          3.24        cmp    %rcx,%r11
                    → jne    swapgs_restore_regs_and_return_to_usermode
          2.27        cmpq   $0x33,0x88(%rsp)
          1.29      → jne    swapgs_restore_regs_and_return_to_usermode
                      mov    0x30(%rsp),%r11
          8.74        cmp    %r11,0x90(%rsp)
                    → jne    swapgs_restore_regs_and_return_to_usermode
          0.32        test   $0x10100,%r11
                    → jne    swapgs_restore_regs_and_return_to_usermode
          0.32        cmpq   $0x2b,0xa0(%rsp)
          0.65      → jne    swapgs_restore_regs_and_return_to_usermode
      
      I.e. using kallsyms makes the function start/end determination work
      differently than using what is in the vmlinux ELF symtab, and the hits
      actually go to entry_SYSCALL_64_after_hwframe, which is a GLOBAL() label
      after the start of entry_SYSCALL_64:
      
        ENTRY(entry_SYSCALL_64)
                UNWIND_HINT_EMPTY
        <SNIP>
                pushq   $__USER_CS                      /* pt_regs->cs */
                pushq   %rcx                            /* pt_regs->ip */
        GLOBAL(entry_SYSCALL_64_after_hwframe)
                pushq   %rax                            /* pt_regs->orig_ax */
      
                PUSH_AND_CLEAR_REGS rax=$-ENOSYS
      
      And it goes and ends at:
      
                cmpq    $__USER_DS, SS(%rsp)            /* SS must match SYSRET */
                jne     swapgs_restore_regs_and_return_to_usermode
      
                /*
                 * We win! This label is here just for ease of understanding
                 * perf profiles. Nothing jumps here.
                 */
        syscall_return_via_sysret:
                /* rcx and r11 are already restored (see code above) */
                UNWIND_HINT_EMPTY
                POP_REGS pop_rdi=0 skip_r11rcx=1
      
      So perhaps some people should really just play with '--ignore-vmlinux'
      to force /proc/kcore + kallsyms.
      
      One idea is to do both, i.e. have a vmlinux annotation and a
      kcore+kallsyms one, when possible, and even show the patched location,
      etc.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-r11knxv8voesav31xokjiuo6@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Defer searching for comma in raw line till it is needed · c448234c
      Arnaldo Carvalho de Melo authored
      That strchr() in jump__scnprintf() needs to be nuked somehow, as, IIRC,
      it is already done in jump__parse() and, if needed at scnprintf() time,
      the result should be stashed in the struct filled in at parse() time.

      For now just defer it to just before where it is used.
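
      A minimal sketch of the deferral (simplified, hypothetical signature,
      not the actual jump__scnprintf() code):

        /* Hedged sketch: the comma search moves from the top of the
         * formatting routine to just before the only spot that consumes
         * its result, so paths that bail out early never pay for it. */
        #include <stdio.h>
        #include <string.h>

        static int scnprintf_target_sketch(char *bf, size_t size, const char *raw)
        {
                if (raw == NULL || *raw == '\0')
                        return 0;                  /* early out: no strchr() */

                const char *c = strchr(raw, ','); /* deferred to the point of use */
                return snprintf(bf, size, "%s", c ? c + 1 : raw);
        }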
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-j0t5hagnphoz9xw07bh3ha3g@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Support jumping from one function to another · e4cc91b8
      Arnaldo Carvalho de Melo authored
      For instance:
      
        entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
          5.50 │     → callq  do_syscall_64
         14.56 │       mov    0x58(%rsp),%rcx
          7.44 │       mov    0x80(%rsp),%r11
          0.32 │       cmp    %rcx,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       shl    $0x10,%rcx
          0.32 │       sar    $0x10,%rcx
          3.24 │       cmp    %rcx,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          2.27 │       cmpq   $0x33,0x88(%rsp)
          1.29 │     → jne    swapgs_restore_regs_and_return_to_usermode
               │       mov    0x30(%rsp),%r11
          8.74 │       cmp    %r11,0x90(%rsp)
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       test   $0x10100,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       cmpq   $0x2b,0xa0(%rsp)
          0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode
      
      It'll behave just like a "call" instruction, i.e. press Enter or the
      right arrow over one such line and the browser will navigate to the
      annotated disassembly of that function, which, when exited via the left
      arrow or Esc, will come back to the calling function.

      Now to support jumps to an offset in a different function...
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-78o508mqvr8inhj63ddtw7mo@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Add "_local" to jump/offset validation routines · 2eff0611
      Arnaldo Carvalho de Melo authored
      Because they all really check if we can access data structures/visual
      constructs where a "jump" instruction targets code in the same function,
      i.e. things like:
      
        __pthread_mutex_lock  /usr/lib64/libpthread-2.26.so
        1.95 │       mov    __pthread_force_elision,%ecx
             │    ┌──test   %ecx,%ecx
        0.07 │    ├──je     60
             │    │  test   $0x300,%esi
             │    │↓ jne    60
             │    │  or     $0x100,%esi
             │    │  mov    %esi,0x10(%rdi)
             │ 42:│  mov    %esi,%edx
             │    │  lea    0x16(%r8),%rsi
             │    │  mov    %r8,%rdi
             │    │  and    $0x80,%edx
             │    │  add    $0x8,%rsp
             │    │→ jmpq   __lll_lock_elision
             │    │  nop
        0.29 │ 60:└─→and    $0x80,%esi
        0.07 │       mov    $0x1,%edi
        0.29 │       xor    %eax,%eax
        2.53 │       lock   cmpxchg %edi,(%r8)
      
      And not things like that "jmpq __lll_lock_elision", which instead should
      behave like a "call" instruction and "jump" to the disassembly of
      "__lll_lock_elision".
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-3cwx39u3h66dfw9xjrlt7ca2@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf python: Reference Py_None before returning it · 83428f2f
      Petr Machata authored
      Python None objects are handled just like all the other objects with
      respect to their reference counting. Before returning Py_None, its
      reference count thus needs to be bumped.
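
      A minimal sketch of the pattern (standard CPython API; the function
      name is illustrative):

        /* None is refcounted like any other object, so take a reference
         * before handing it out; the Py_RETURN_NONE macro expands to
         * exactly this. */
        #include <Python.h>

        static PyObject *noop(PyObject *self, PyObject *args)
        {
                Py_INCREF(Py_None);
                return Py_None; /* or simply: Py_RETURN_NONE; */
        }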
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Machata <petrm@mellanox.com>
      Link: http://lkml.kernel.org/r/b1e565ecccf68064d8d54f37db5d028dda8fa522.1521675563.git.petrm@mellanox.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  6. 21 Mar, 2018 10 commits
    • perf annotate: Mark jumps to other functions with the call arrow · 751b1783
      Arnaldo Carvalho de Melo authored
      Things like this in _cpp_lex_token (gcc's cc1 program):
      
           cpp_named_operator2name@@Base+0xa72
      
      point to a place that is after the cpp_named_operator2name boundaries,
      i.e. in the ELF symbol table for cc1, cpp_named_operator2name is marked
      as being 32 bytes long, but it is in fact much larger than that, so we
      seem to need a symbols__find() routine that looks for >= current->start
      and < next_symbol->start, possibly just for C++ objects?
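
      A sketch of such a lookup (hypothetical, not the perf implementation):

        /* Hedged sketch: ignore the possibly-bogus ELF st_size and treat
         * each symbol as extending to the start of the next one in
         * address order. */
        #include <stddef.h>
        #include <stdint.h>

        struct sym_ent { uint64_t start; const char *name; };

        /* syms[] must be sorted by start address. */
        static const struct sym_ent *find_by_next_start(const struct sym_ent *syms,
                                                        size_t n, uint64_t addr)
        {
                for (size_t i = 0; i < n; i++) {
                        uint64_t end = (i + 1 < n) ? syms[i + 1].start : UINT64_MAX;
                        if (addr >= syms[i].start && addr < end)
                                return &syms[i];
                }
                return NULL;
        }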
      
      For now let's just make some progress by marking jumps that go outside
      the current function as call-like.
      
      Actual navigation will come next, with further understanding of how the
      symbol searching and disassembly should be done.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-aiys0a0bsgm3e00hbi6fg7yy@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Pass function descriptor to its instruction parsing routines · 85a84e4f
      Arnaldo Carvalho de Melo authored
      We need that to figure out if jumps have targets in a different
      function.
      
      E.g. _cpp_lex_token(), in /usr/libexec/gcc/x86_64-redhat-linux/5.3.1/cc1
      has a line like this:
      
        jne    c469be <cpp_named_operator2name@@Base+0xa72>
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-ris0ioziyp469pofpzix2atb@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: No need to calculate notes->start twice · 425859ff
      Arnaldo Carvalho de Melo authored
      Since we already set notes->start to map__rip_2objdump(map, sym->start)
      in symbol__annotate2(), no need to calculate that address again in
      symbol__calc_lines(), just use notes->start.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-ycxlg8mm5ueuj21w6gi62l7g@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate browser: Add 'P' hotkey to dump annotation to file · d9bd7665
      Arnaldo Carvalho de Melo authored
      Just like we have in the histograms browser used as the main screen for
      'perf top --tui' and 'perf report --tui', print the current annotation
      to a file with a name composed of the symbol name and the ".annotation"
      suffix.

      Here is one example of pressing 'A' in 'perf top' to live annotate a
      kernel function and then pressing 'P' to dump that annotation; the
      resulting file:
      
        # cat _raw_spin_lock_irqsave.annotation
        _raw_spin_lock_irqsave() /proc/kcore
        Event: cycles:ppp
      
          7.14        nop
         21.43        push   %rbx
          7.14        pushfq
                      pop    %rax
                      nop
                      mov    %rax,%rbx
                      cli
                      nop
                      xor    %eax,%eax
                      mov    $0x1,%edx
         64.29        lock   cmpxchg %edx,(%rdi)
                      test   %eax,%eax
                    ↓ jne    2b
                      mov    %rbx,%rax
                      pop    %rbx
                    ← retq
                2b:   mov    %eax,%esi
                    → callq  queued_spin_lock_slowpath
                      mov    %rbx,%rax
                      pop    %rbx
                    ← retq
        #
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-zzmnrwugb5vtk7bvg0rbx150@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf report: Introduce --ignore-vmlinux command line option · 91340c51
      Arnaldo Carvalho de Melo authored
      We've had this in 'perf top' for quite a while, useful if one wishes
      to force using /proc/kcore to do annotation using the patched kernel
      instead of the ELF image it started from, aka vmlinux.
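
      For instance (the workload is illustrative):

        # perf record -a sleep 5
        # perf report --ignore-vmlinux
        ## annotation now comes from /proc/kcore + /proc/kallsyms instead
        ## of the vmlinux image the kernel was booted from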
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-ircpvox4wzsv7gasrpb28fw9@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Introduce --ignore-vmlinux command line option · be316409
      Arnaldo Carvalho de Melo authored
      This is already present in 'perf top', albeit undocumented (will fix),
      and is useful for using /proc/kcore instead of vmlinux, to see what is
      really in place, not what the kernel started with, before alternatives,
      ftrace .text patching, etc. See the differences:
      
        # perf annotate --stdio2 _raw_spin_lock_irqsave
        _raw_spin_lock_irqsave() /lib/modules/4.16.0-rc4/build/vmlinux
        Event: anon group { cycles, instructions }
      
          0.00   3.17      → callq  __fentry__
          0.00   7.94        push   %rbx
          7.69  36.51      → callq  __page_file_index
                             mov    %rax,%rbx
          7.69   3.17      → callq  *ffffffff82225cd0
                             xor    %eax,%eax
                             mov    $0x1,%edx
         80.77  49.21        lock   cmpxchg %edx,(%rdi)
                             test   %eax,%eax
                           ↓ jne    2b
          3.85   0.00        mov    %rbx,%rax
                             pop    %rbx
                           ← retq
                       2b:   mov    %eax,%esi
                           → callq  queued_spin_lock_slowpath
                             mov    %rbx,%rax
                             pop    %rbx
                           ← retq
        [root@jouet ~]# perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
        _raw_spin_lock_irqsave() /proc/kcore
        Event: anon group { cycles, instructions }
      
          0.00   3.17        nop
          0.00   7.94        push   %rbx
          0.00  23.81        pushfq
          7.69  12.70        pop    %rax
                             nop
                             mov    %rax,%rbx
          7.69   3.17        cli
                             nop
                             xor    %eax,%eax
                             mov    $0x1,%edx
         80.77  49.21        lock   cmpxchg %edx,(%rdi)
                             test   %eax,%eax
                           ↓ jne    2b
          3.85   0.00        mov    %rbx,%rax
                             pop    %rbx
                           ← retq
                       2b:   mov    %eax,%esi
                           → callq  *ffffffff820e96b0
                             mov    %rbx,%rax
                             pop    %rbx
                           ← retq
        #
      
      Diff of the output of those commands:
      
        # perf annotate --stdio2 _raw_spin_lock_irqsave > /tmp/vmlinux
        # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave > /tmp/kcore
        # diff -y /tmp/vmlinux /tmp/kcore
        _raw_spin_lock_irqsave() vmlinux             | _raw_spin_lock_irqsave() /proc/kcore
        Event: anon group { cycles, instructions }     Event: anon group { cycles, instructions }
      
         0.00  3.17  → callq __fentry__              |  0.00  3.17     nop
         0.00  7.94    push  %rbx                       0.00  7.94     push  %rbx
         7.69 36.51  → callq __page_file_index       |  0.00 23.81     pushfq
                                                     >  7.69 12.70     pop   %rax
                                                     >                 nop
                       mov   %rax,%rbx                                 mov   %rax,%rbx
         7.69  3.17  → callq *ffffffff82225cd0       |  7.69  3.17     cli
                                                     >                 nop
                       xor   %eax,%eax                                 xor   %eax,%eax
                       mov   $0x1,%edx                                 mov   $0x1,%edx
        80.77 49.21    lock  cmpxchg %edx,(%rdi)       80.77 49.21     lock  cmpxchg %edx,(%rdi)
                       test  %eax,%eax                                 test  %eax,%eax
                     ↓ jne   2b                                      ↓ jne   2b
         3.85  0.00    mov   %rbx,%rax                  3.85  0.00     mov   %rbx,%rax
                       pop   %rbx                                      pop   %rbx
                     ← retq                                          ← retq
                  2b:  mov   %eax,%esi                            2b:  mov   %eax,%esi
                     → callq queued_spin_lock_slowpath|              → callq *ffffffff820e96b0
                       mov   %rbx,%rax                                 mov   %rbx,%rax
                       pop   %rbx                                      pop   %rbx
                     ← retq                                          ← retq
        #
      
      This should be further streamlined by doing both annotations and
      allowing the TUI to toggle initial/current, and show the patched
      instructions in a slightly different color.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-wz8d269hxkcwaczr0r4rhyjg@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Add function header to --stdio2 · 864298f2
      Arnaldo Carvalho de Melo authored
        # perf annotate --stdio2 _raw_spin_lock_irqsave
        _raw_spin_lock_irqsave() /lib/modules/4.16.0-rc4/build/vmlinux
        Event: anon group { cycles, instructions }
      
          0.00   3.17      → callq  __fentry__
          0.00   7.94        push   %rbx
          7.69  36.51      → callq  __page_file_index
                             mov    %rax,%rbx
          7.69   3.17      → callq  *ffffffff82225cd0
                             xor    %eax,%eax
                             mov    $0x1,%edx
         80.77  49.21        lock   cmpxchg %edx,(%rdi)
                             test   %eax,%eax
                           ↓ jne    2b
          3.85   0.00        mov    %rbx,%rax
                             pop    %rbx
                           ← retq
                       2b:   mov    %eax,%esi
                           → callq  queued_spin_lock_slowpath
                             mov    %rbx,%rax
                             pop    %rbx
                           ← retq
        #
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-i86yfyzl8m194ioxgj1jo32f@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Use the default annotation options for --stdio2 · 35632892
      Arnaldo Carvalho de Melo authored
      With an empty '[annotate]' section in ~/.perfconfig:
      
        # perf record -a --all-kernel -e '{cycles,instructions}:P' sleep 5
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 2.243 MB perf.data (5513 samples) ]
        # perf annotate --stdio2 _raw_spin_lock | head -20
      
                           Disassembly of section .text:
      
                           ffffffff81868790 <_raw_spin_lock>:
                           _raw_spin_lock():
                           EXPORT_SYMBOL(_raw_spin_trylock_bh);
                           #endif
      
                           #ifndef CONFIG_INLINE_SPIN_LOCK
                           void __lockfunc _raw_spin_lock(raw_spinlock_t *lock)
                           {
                           → callq  __fentry__
                           atomic_cmpxchg():
                                   return xadd(&v->counter, -i);
                           }
      
                           static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
                           {
        # perf annotate --stdio2 _raw_spin_lock | head -20
                           → callq  __fentry__
                             xor    %eax,%eax
                             mov    $0x1,%edx
         87.50 100.00        lock   cmpxchg %edx,(%rdi)
          6.25   0.00        test   %eax,%eax
                           ↓ jne    16
          6.25   0.00        repz   retq
                       16:   mov    %eax,%esi
                           ↑ jmpq   ffffffff810e96b0 <queued_spin_lock_slowpath>
        #
        # cat ~/.perfconfig
        [annotate]
      
          hide_src_code = false
          show_linenr = true
        # perf annotate --stdio2 _raw_spin_lock | head -20
      
                       3   Disassembly of section .text:
      
                       5   ffffffff81868790 <_raw_spin_lock>:
                       6   _raw_spin_lock():
                       143 EXPORT_SYMBOL(_raw_spin_trylock_bh);
                       144 #endif
      
                       146 #ifndef CONFIG_INLINE_SPIN_LOCK
                       147 void __lockfunc _raw_spin_lock(raw_spinlock_t *lock)
                       148 {
                           → callq  __fentry__
                       150 atomic_cmpxchg():
                       187         return xadd(&v->counter, -i);
                       188 }
      
                       190 static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
                       191 {
        #
        # cat ~/.perfconfig
        [annotate]
      
          hide_src_code = true
          show_total_period = true
        # perf annotate --stdio2 _raw_spin_lock | head -20
                                     → callq  __fentry__
                                       xor    %eax,%eax
                                       mov    $0x1,%edx
            1411316      152339        lock   cmpxchg %edx,(%rdi)
             344694           0        test   %eax,%eax
                                     ↓ jne    16
              80806           0        repz   retq
                                 16:   mov    %eax,%esi
                                     ↑ jmpq   ffffffff810e96b0 <queued_spin_lock_slowpath>
        #
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-nu4rxg5zkdtgs1b2gc40p7v7@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Move the default annotate options to the library · 7f0b6fde
      Arnaldo Carvalho de Melo authored
      One more thing that moves out of the TUI code to be used more widely;
      for instance, it'll affect the default options used by:
      
        perf annotate --stdio2
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-0nsz0dm0akdbo30vgja2a10e@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf annotate: Introduce the --stdio2 output mode · befd2a38
      Arnaldo Carvalho de Melo authored
      This uses the TUI augmented formatting routines, modulo interactivity.
      
        # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
        _raw_spin_lock_irqsave() /proc/kcore
        Event: cycles:ppp
      
        Percent
      
                    Disassembly of section load0:
      
                    ffffffff9a8734b0 <load0>:
                      nop
                      push   %rbx
         50.00        pushfq
                      pop    %rax
                      nop
                      mov    %rax,%rbx
                      cli
                      nop
                      xor    %eax,%eax
                      mov    $0x1,%edx
         50.00        lock   cmpxchg %edx,(%rdi)
                      test   %eax,%eax
                    ↓ jne    2b
                      mov    %rbx,%rax
                      pop    %rbx
                    ← retq
                2b:   mov    %eax,%esi
                    → callq  queued_spin_lock_slowpath
                      mov    %rbx,%rax
                      pop    %rbx
                    ← retq
      Tested-by: Jin Yao <yao.jin@linux.intel.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-6cte5o8z84mbivbvqlg14uh1@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  7. 20 Mar, 2018 12 commits