1. 16 Jun, 2015 5 commits
  2. 12 Jun, 2015 4 commits
    • perf probe: Cut off the gcc optimization postfixes from function name · 35a23ff9
      Masami Hiramatsu authored
      Cut off the postfixes which gcc adds to optimized routines (e.g. ".isra.N")
      from the event name automatically generated from the symbol name, since
      *probe-events does not accept them.  Those symbols are used when we don't
      use debuginfo to find target functions.
      
      E.g. without this fix;
        -----
        # perf probe -va alloc_buf.isra.23
        probe-definition(0): alloc_buf.isra.23
        symbol:alloc_buf.isra.23 file:(null) line:0 offset:0 return:0 lazy:(null)
        [...]
        Opening /sys/kernel/debug/tracing/kprobe_events write=1
        Added new event:
        Writing event: p:probe/alloc_buf.isra.23 _text+4869328
        Failed to write event: Invalid argument
          Error: Failed to add events. Reason: Invalid argument (Code: -22)
        -----
      With this fix;
        -----
        perf probe -va alloc_buf.isra.23
        probe-definition(0): alloc_buf.isra.23
        symbol:alloc_buf.isra.23 file:(null) line:0 offset:0 return:0 lazy:(null)
        [...]
        Opening /sys/kernel/debug/tracing/kprobe_events write=1
        Added new event:
        Writing event: p:probe/alloc_buf _text+4869328
          probe:alloc_buf      (on alloc_buf.isra.23)
      
        You can now use it in all perf tools, such as:
      
        	perf record -e probe:alloc_buf -aR sleep 1
      
        -----
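
      A minimal user-space sketch of the idea (illustrative, not the actual
      tools/perf code): truncate the auto-generated event name at the first
      '.', so a symbol like "alloc_buf.isra.23" yields the event "alloc_buf":

        #include <stdio.h>
        #include <string.h>

        /* Drop gcc optimization postfixes (".isra.N", ".constprop.N", ...)
         * from an event name derived from a symbol name, since *probe-events
         * rejects '.' in event names. */
        static void cut_gcc_postfix(char *event)
        {
        	char *p = strchr(event, '.');

        	if (p)
        		*p = '\0';
        }

        int main(void)
        {
        	char name[] = "alloc_buf.isra.23";

        	cut_gcc_postfix(name);
        	printf("%s\n", name);	/* prints "alloc_buf" */
        	return 0;
        }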
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naohiro Aota <naota@elisp.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150612050820.20548.41625.stgit@localhost.localdomain
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      35a23ff9
    • Merge tag 'perf-core-for-mingo' of... · 61d67d56
      Ingo Molnar authored
      Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes:
      
      User visible changes:
      
        - Beautify the perf_event_open() syscall in 'perf trace'. (Arnaldo Carvalho de Melo)
      
        - Error out unsupported group leader immediately in 'perf stat'. (Kan Liang)
      
        - Amend some 'perf record' option summaries (period, etc). (Peter Zijlstra)
      
        - Avoid possible race condition in copyfile() in 'perf buildid-cache'. (Milos Vyletel)
      
      Infrastructure changes:
      
        - Display 0x for hex values when printing the attribute. (Adrian Hunter)
      
        - Update MANIFEST per files removed from kernel. (David Ahern)
      
      Build fixes:
      
        - Fix PRIu64 printf related failure on 32-bit arch. (He Kuang)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      61d67d56
    • perf tools: Update MANIFEST per files removed from kernel · c8ad7063
      David Ahern authored
      Building perf out of the kernel tree is currently broken because the
      MANIFEST file refers to kernel files that have been removed. With this
      patch, 'make perf-targz-src-pkg' succeeds, as does building perf from
      the generated tarfile.
      Signed-off-by: David Ahern <david.ahern@oracle.com>
      Link: http://lkml.kernel.org/r/1433526173-172332-1-git-send-email-david.ahern@oracle.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      c8ad7063
    • trace: Beautify perf_event_open syscall · a1c2552d
      Arnaldo Carvalho de Melo authored
      System-wide tracing and then running 'stat' and 'trace':
      
       $ perf trace -e perf_event_open
       1034.649 (0.019 ms): perf/6133 perf_event_open(attr_uptr: 0x36f0360, pid: 16134, cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = -1 EINVAL Invalid argument
       1034.670 (0.008 ms): perf/6133 perf_event_open(attr_uptr: 0x36f0360, pid: 16134, cpu: -1, group_fd: -1) = -1 EINVAL Invalid argument
       1034.681 (0.007 ms): perf/6133 perf_event_open(attr_uptr: 0x36f0360, pid: 16134, cpu: -1, group_fd: -1) = -1 EINVAL Invalid argument
       1034.692 (0.007 ms): perf/6133 perf_event_open(attr_uptr: 0x36f0360, pid: 16134, cpu: -1, group_fd: -1) = -1 EINVAL Invalid argument
       9986.983 (0.014 ms): trace/6139 perf_event_open(attr_uptr: 0x7ffd9c629320, pid: -1, group_fd: -1, flags: FD_CLOEXEC) = 3
       9987.026 (0.016 ms): trace/6139 perf_event_open(attr_uptr: 0x37c7e70, pid: -1, group_fd: -1, flags: FD_CLOEXEC) = 3
       9987.041 (0.008 ms): trace/6139 perf_event_open(attr_uptr: 0x37c7e70, pid: -1, group_fd: -1, flags: FD_CLOEXEC) = 3
       9987.489 (0.092 ms): trace/6139 perf_event_open(attr_uptr: 0x3795ee0, pid: 16140, group_fd: -1, flags: FD_CLOEXEC) = 3
       9987.536 (0.044 ms): trace/6139 perf_event_open(attr_uptr: 0x3795ee0, pid: 16140, cpu: 1, group_fd: -1, flags: FD_CLOEXEC) = 4
       9987.580 (0.041 ms): trace/6139 perf_event_open(attr_uptr: 0x3795ee0, pid: 16140, cpu: 2, group_fd: -1, flags: FD_CLOEXEC) = 5
       9987.620 (0.037 ms): trace/6139 perf_event_open(attr_uptr: 0x3795ee0, pid: 16140, cpu: 3, group_fd: -1, flags: FD_CLOEXEC) = 7
       9987.659 (0.035 ms): trace/6139 perf_event_open(attr_uptr: 0x37975d0, pid: 16140, group_fd: -1, flags: FD_CLOEXEC) = 8
       9987.692 (0.031 ms): trace/6139 perf_event_open(attr_uptr: 0x37975d0, pid: 16140, cpu: 1, group_fd: -1, flags: FD_CLOEXEC) = 9
       9987.727 (0.032 ms): trace/6139 perf_event_open(attr_uptr: 0x37975d0, pid: 16140, cpu: 2, group_fd: -1, flags: FD_CLOEXEC) = 10
       9987.761 (0.031 ms): trace/6139 perf_event_open(attr_uptr: 0x37975d0, pid: 16140, cpu: 3, group_fd: -1, flags: FD_CLOEXEC) = 11
      
      Need to intercept perf_copy_attr() with a kprobe or with eBPF...
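
      A sketch of the kind of formatting involved (illustrative table and
      helper names, not the actual tools/perf beautifier; the flag values are
      those documented in uapi/linux/perf_event.h):

        #include <stdio.h>

        #define PERF_FLAG_FD_NO_GROUP	(1UL << 0)
        #define PERF_FLAG_FD_OUTPUT	(1UL << 1)
        #define PERF_FLAG_PID_CGROUP	(1UL << 2)
        #define PERF_FLAG_FD_CLOEXEC	(1UL << 3)

        /* Render perf_event_open() 'flags' the way the output above shows
         * them, e.g. "FD_CLOEXEC" or "PID_CGROUP|FD_CLOEXEC". */
        static void flags__scnprintf(unsigned long flags, char *buf, size_t size)
        {
        	static const struct { unsigned long bit; const char *name; } tbl[] = {
        		{ PERF_FLAG_FD_NO_GROUP, "FD_NO_GROUP" },
        		{ PERF_FLAG_FD_OUTPUT,   "FD_OUTPUT"   },
        		{ PERF_FLAG_PID_CGROUP,  "PID_CGROUP"  },
        		{ PERF_FLAG_FD_CLOEXEC,  "FD_CLOEXEC"  },
        	};
        	size_t i, n = 0;

        	buf[0] = '\0';
        	for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
        		if (flags & tbl[i].bit)
        			n += snprintf(buf + n, size - n, "%s%s",
        				      n ? "|" : "", tbl[i].name);
        	}
        }

        int main(void)
        {
        	char buf[64];

        	flags__scnprintf(PERF_FLAG_FD_CLOEXEC, buf, sizeof(buf));
        	printf("flags: %s\n", buf);	/* flags: FD_CLOEXEC */
        	return 0;
        }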
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Link: http://lkml.kernel.org/n/tip-njb105hab2i3t5dexym9lskl@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      a1c2552d
  3. 11 Jun, 2015 3 commits
  4. 10 Jun, 2015 2 commits
  5. 09 Jun, 2015 1 commit
  6. 08 Jun, 2015 15 commits
  7. 07 Jun, 2015 10 commits
    • perf/x86/intel/pebs: Add PEBSv3 decoding · a3d86542
      Peter Zijlstra authored
      PEBSv3, as present on Skylake, fixed the long-standing issue with the
      status bits: they now really reflect the events that generated the
      record.
      Tested-by: Andi Kleen <ak@linux.intel.com>
      Tested-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a3d86542
    • perf tools: handle PERF_RECORD_LOST_SAMPLES · c4937a91
      Kan Liang authored
      This patch modifies the perf tool to handle the new RECORD type,
      PERF_RECORD_LOST_SAMPLES.
      
      The number of lost-sample events is stored in
      .nr_events[PERF_RECORD_LOST_SAMPLES]. The exact number of samples
      which the kernel dropped is stored in total_lost_samples.
      
      When the percentage of dropped samples is greater than 5%, a warning
      is printed.
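
      A minimal sketch of that check (illustrative names, not the actual
      tools/perf code):

        #include <stdio.h>

        /* Warn when more than 5% of the samples were dropped by the kernel.
         * 'nr_samples' is the number of delivered SAMPLE events and
         * 'total_lost_samples' the sum carried by LOST_SAMPLES records. */
        static void warn_lost_samples(unsigned long long nr_samples,
        			      unsigned long long total_lost_samples)
        {
        	double drop_rate = (double)total_lost_samples /
        			   (double)(nr_samples + total_lost_samples);

        	if (drop_rate > 0.05)
        		fprintf(stderr,
        			"Warning: Processed %llu samples and lost %3.2f%% samples!\n",
        			nr_samples + total_lost_samples, drop_rate * 100.0);
        }

        int main(void)
        {
        	warn_lost_samples(1629, 598963);	/* illustrative numbers */
        	return 0;
        }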
      
      Here are some examples:
      
      E.g. 1: Recording different frequently-occurring events is safe with the
              patch. Only a very low drop rate is associated with such actions.
      
      $ perf record -e '{cycles:p,instructions:p}' -c 20003 --no-time ~/tchain ~/tchain
      
      $ perf report -D | tail
                SAMPLE events:     120243
                 MMAP2 events:          5
          LOST_SAMPLES events:         24
        FINISHED_ROUND events:         15
      cycles:p stats:
                 TOTAL events:      59348
                SAMPLE events:      59348
      instructions:p stats:
                 TOTAL events:      60895
                SAMPLE events:      60895
      
      $ perf report --stdio --group
       # To display the perf.data header info, please use --header/--header-only options.
       #
       #
       # Total Lost Samples: 24
       #
       # Samples: 120K of event 'anon group { cycles:p, instructions:p }'
       # Event count (approx.): 24048600000
       #
       #         Overhead  Command      Shared Object     Symbol
       # ................  ...........  ................  ..................................
       #
          99.74%  99.86%  tchain_edit  tchain_edit       [.] f3
           0.09%   0.02%  tchain_edit  tchain_edit       [.] f2
           0.04%   0.00%  tchain_edit  [kernel.vmlinux]  [k] ixgbe_read_reg
      
      E.g. 2: Recording the same thing multiple times can lead to a high drop
              rate, but it is not a useful configuration.
      
      $ perf record -e '{cycles:p,cycles:p}' -c 20003 --no-time ~/tchain
      Warning: Processed 600592 samples and lost 99.73% samples!
      [perf record: Woken up 148 times to write data]
      [perf record: Captured and wrote 36.922 MB perf.data (1206322 samples)]
      [perf record: Woken up 1 times to write data]
      [perf record: Captured and wrote 0.121 MB perf.data (1629 samples)]
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1431285195-14269-9-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4937a91
    • perf/x86/intel: Introduce PERF_RECORD_LOST_SAMPLES · f38b0dbb
      Kan Liang authored
      After enlarging the PEBS interrupt threshold, there may be some mixed-up
      PEBS samples which are discarded by the kernel.

      This patch makes the kernel emit a PERF_RECORD_LOST_SAMPLES record with
      the number of possibly discarded records when it is impossible to demux
      the samples.
      
      It makes sure the user is not left in the dark about such discards.
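
      The rough shape of the new record, as described in the perf_event.h
      documentation (the sample_id fields follow when sample_id_all is set):

        #include <stdint.h>
        #include <linux/perf_event.h>

        struct lost_samples_event {
        	struct perf_event_header header;	/* .type = PERF_RECORD_LOST_SAMPLES */
        	uint64_t		 lost;		/* number of discarded samples */
        	/* struct sample_id follows */
        };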
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1431285195-14269-8-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f38b0dbb
    • perf/intel/x86: Enlarge the PEBS buffer · 15617499
      Yan, Zheng authored
      Currently the PEBS buffer size is 4k; it can only hold about 21
      PEBS records. This patch enlarges the PEBS buffer size to 64k
      (the same as the BTS buffer).
      
      64k memory can hold about 330 PEBS records. This will significantly
      reduce the number of PMIs when batched PEBS interrupts are enabled.
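
      As a rough sanity check of those figures (assuming a PEBS record of
      roughly 190-200 bytes; the exact size depends on the PEBS format
      version):

        4 KiB:   4096 / ~192 bytes per record ≈  21 records
        64 KiB: 65536 / ~192 bytes per record ≈ 330-340 records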
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1430940834-8964-7-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      15617499
    • perf/x86/intel: Drain the PEBS buffer during context switches · 9c964efa
      Yan, Zheng authored
      Flush the PEBS buffer during context switches if the PEBS interrupt
      threshold is larger than one. This allows perf to supply the TID for
      sample outputs.
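
      A minimal sketch of the idea (illustrative names and state, not the
      actual arch/x86 code): when a task schedules out and records are allowed
      to pile up in the buffer, drain it so the samples get the right TID.

        #include <stdbool.h>
        #include <stdio.h>

        static int pebs_records_buffered = 3;	/* pretend some records piled up */

        static void drain_pebs_buffer(void)
        {
        	printf("flushing %d buffered PEBS records\n", pebs_records_buffered);
        	pebs_records_buffered = 0;
        }

        /* Called on context switches; only needed when the interrupt
         * threshold lets records accumulate between PMIs. */
        static void pebs_sched_task(bool sched_in, int pebs_interrupt_threshold)
        {
        	if (!sched_in && pebs_interrupt_threshold > 1 && pebs_records_buffered)
        		drain_pebs_buffer();
        }

        int main(void)
        {
        	pebs_sched_task(false, 64);	/* task scheduling out */
        	return 0;
        }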
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1430940834-8964-6-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9c964efa
    • perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold) · 3569c0d7
      Yan, Zheng authored
      PEBS always had the capability to log samples to its buffers without
      an interrupt. Traditionally perf has not used this but always set the
      PEBS threshold to one.
      
      For frequently occurring events (like cycles or branches or load/stores)
      this in turn requires using a relatively high sampling period to avoid
      overloading the system with PMI processing. This in turn increases
      sampling error.
      
      For the common cases we still need to use the PMI because the PEBS
      hardware has various limitations. The biggest one is that it cannot
      supply a callgraph. It also requires setting a fixed period, as the
      hardware does not support an adaptive period. Another issue is that it
      cannot supply a time stamp and some other options. To supply a TID it
      requires flushing on context switch. It can however supply the IP, the
      load/store address, TSX information, registers, and some other things.
      
      So we can make PEBS work for some specific cases: basically, as long as
      you can do without a callgraph and can set the period, you can use this
      new PEBS mode.
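
      ( A sketch of that rule, with an illustrative helper rather than the
        actual arch/x86 code: nothing in the requested sample may require
        PMI-time processing. )

        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>
        #include <linux/perf_event.h>

        static bool can_use_large_pebs_threshold(const struct perf_event_attr *attr)
        {
        	if (attr->freq)					/* needs a fixed period */
        		return false;
        	if (attr->sample_type & PERF_SAMPLE_CALLCHAIN)	/* callgraph needs the PMI */
        		return false;
        	if (attr->sample_type & PERF_SAMPLE_TIME)	/* no time stamp in PEBS */
        		return false;
        	return true;
        }

        int main(void)
        {
        	struct perf_event_attr attr;

        	memset(&attr, 0, sizeof(attr));
        	attr.sample_period = 20003;
        	printf("large threshold usable: %d\n", can_use_large_pebs_threshold(&attr));
        	return 0;
        }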
      
      The main benefit is the ability to support much lower sampling periods
      (down to -c 1000) without excessive overhead.
      
      One use case, for example, is to increase the resolution of the c2c tool.
      Another is double-checking when you suspect the standard sampling has
      too much sampling error.
      
      Some numbers on the overhead, using cycle soak, comparing the elapsed
      time from "kernbench -M -H" between plain (threshold set to one) and
      multi (large threshold).
      
      The test command for plain:
        "perf record --time -e cycles:p -c $period -- kernbench -M -H"
      
      The test command for multi:
        "perf record --no-time -e cycles:p -c $period -- kernbench -M -H"
      
      ( The only difference between the multi and plain test commands is the
        time stamp option. Since time stamps are not supported with a large
        PEBS threshold, the option can be used as a flag to indicate whether
        the large threshold was enabled during the test. )
      
      	period    plain(Sec)  multi(Sec)  Delta
      	10003     32.7        16.5        16.2
      	20003     30.2        16.2        14.0
      	40003     18.6        14.1        4.5
      	80003     16.8        14.6        2.2
      	100003    16.9        14.1        2.8
      	800003    15.4        15.7        -0.3
      	1000003   15.3        15.2        0.2
      	2000003   15.3        15.1        0.1
      
      With periods below 100003, plain (threshold one) causes much more
      overhead. With a 10003 sampling period, multi is even 2X faster than
      plain.
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1430940834-8964-5-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3569c0d7
    • perf/x86/intel: Handle multiple records in the PEBS buffer · 21509084
      Yan, Zheng authored
      When the PEBS interrupt threshold is larger than one record and the
      machine supports multiple PEBS events, the records of these events are
      mixed up and we need to demultiplex them.
      
      Demuxing the records is hard because the hardware is deficient. The
      hardware has two issues that, when combined, create scenarios that are
      impossible to demux.
      
      The first issue is that the 'status' field of the PEBS record is a copy
      of the GLOBAL_STATUS MSR at PEBS assist time. To see why this is a
      problem let us first describe the regular PEBS cycle:
      
      A) the CTRn value reaches 0:
        - the corresponding bit in GLOBAL_STATUS gets set
        - we start arming the hardware assist
        < some unspecified amount of time later -- this could cover multiple
          events of interest >
      
      B) the hardware assist is armed, any next event will trigger it
      
      C) a matching event happens:
        - the hardware assist triggers and generates a PEBS record
          this includes a copy of GLOBAL_STATUS at this moment
        - if we auto-reload we (re)set CTRn
        - we clear the relevant bit in GLOBAL_STATUS
      
      Now consider the following chain of events:
      
        A0, B0, A1, C0
      
      The record generated for counter 0 will include a status with counter 1
      set, even though it's not at all related to the record. A similar thing
      can happen with a !PEBS event if it just happens to overflow at the
      right moment.
      
      The second issue is that the hardware will only emit one record for two
      or more counters if the events that trigger the assists are 'close'.
      'Close' can be several cycles; in some cases it can even span the
      complete assist, if the event is something that doesn't need retirement.
      
      For instance, consider this chain of events:
      
        A0, B0, A1, B1, C01
      
      Where C01 is an event that triggers both hardware assists, we will
      generate but a single record, but again with both counters listed in the
      status field.
      
      This time the record pertains to both events.
      
      Note that these two cases are different but indistinguishable with the
      data as generated. Therefore demuxing records with multiple PEBS bits
      (we can safely ignore status bits for !PEBS counters) is impossible.
      
      Furthermore we cannot emit the record to both events because that might
      cause a data leak -- the events might not have the same privileges -- so
      what this patch does is discard such events.
      
      The assumption/hope is that such discards will be rare.
      
      Here are some possible ways you may get a high discard rate:
      
        - when you count the same thing multiple times. But it is not a useful
          configuration.
        - you can be unfortunate if you measure with a userspace-only PEBS
          event along with either a kernel or unrestricted PEBS event. Imagine
          the event triggering and setting the overflow flag right before
          entering the kernel. Then all kernel-side events will end up with
          multiple bits set.
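
      A minimal sketch of the resulting demux rule (illustrative, not the
      actual arch/x86 code): a record whose status has more than one PEBS
      counter bit set cannot be attributed safely and is dropped.

        #include <stdint.h>
        #include <stdio.h>

        static int records_dropped;	/* later reported via PERF_RECORD_LOST_SAMPLES */

        static void demux_pebs_record(uint64_t status, uint64_t pebs_enabled_mask)
        {
        	uint64_t bits = status & pebs_enabled_mask;	/* ignore !PEBS bits */

        	if (!bits)
        		return;
        	if (bits & (bits - 1)) {	/* more than one bit set: ambiguous */
        		records_dropped++;
        		return;
        	}
        	printf("deliver record to counter %d\n", __builtin_ctzll(bits));
        }

        int main(void)
        {
        	demux_pebs_record(0x3, 0x3);	/* A0, B0, A1, C0 style: dropped */
        	demux_pebs_record(0x2, 0x3);	/* unambiguous: delivered */
        	printf("dropped: %d\n", records_dropped);
        	return 0;
        }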
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      [ Changelog improvements. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1430940834-8964-4-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      21509084
    • perf/x86/intel: Introduce setup_pebs_sample_data() · 43cf7631
      Yan, Zheng authored
      Move code that sets up the PEBS sample data to a separate function.
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1430940834-8964-3-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      43cf7631
    • perf/x86/intel: Use the PEBS auto reload mechanism when possible · 851559e3
      Yan, Zheng authored
      When a fixed period is specified, this patch makes perf use the PEBS
      auto reload mechanism. This makes normal profiling faster, because
      it avoids one costly MSR write in the PMI handler.
      
      However, the reset value will be loaded by the hardware assist. There is
      a small delay compared to the previous non-auto-reload mechanism. The
      delay time is arbitrary, but very small. The assist cost is 400-800
      cycles, assuming common cases with everything cached. The minimum period
      the patch currently uses is 10000. In that extreme case the overhead can
      be ~10% if cycles are used.
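
      A minimal sketch of the idea (illustrative structures, not the actual
      arch/x86 code): program the negated period once as the reset value and
      let the assist reload the counter instead of the PMI handler.

        #include <stdint.h>
        #include <stdio.h>

        struct ds_area  { uint64_t pebs_event_reset[4]; };
        struct hw_event { int auto_reload; int idx; uint64_t sample_period; };

        static void pebs_setup_auto_reload(struct ds_area *ds, const struct hw_event *hwc)
        {
        	if (!hwc->auto_reload)
        		return;
        	/* Counters count up towards overflow: program the negated period. */
        	ds->pebs_event_reset[hwc->idx] = (uint64_t)(-(int64_t)hwc->sample_period);
        }

        int main(void)
        {
        	struct ds_area ds = { { 0 } };
        	struct hw_event hwc = { 1, 0, 20003 };

        	pebs_setup_auto_reload(&ds, &hwc);
        	printf("reset value: 0x%llx\n", (unsigned long long)ds.pebs_event_reset[0]);
        	return 0;
        }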
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1430940834-8964-2-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      851559e3
    • perf record: Add support for sampling indirect jumps · 5b68164d
      Stephane Eranian authored
      This patch adds support for a new branch sampling type for indirect jumps:
      
        perf record -j ind_jmp .......
      
      It enables analysis of indirect jump targets. It requires kernel and
      possibly hardware support to operate correctly.
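
      A sketch of the option mapping (illustrative table, not the actual
      tools/perf parsing code; PERF_SAMPLE_BRANCH_IND_JUMP is the bit added
      by the matching kernel-side patch in this series):

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>
        #include <linux/perf_event.h>

        struct branch_mode { const char *name; uint64_t mode; };

        static const struct branch_mode branch_modes[] = {
        	{ "any",      PERF_SAMPLE_BRANCH_ANY      },
        	{ "any_call", PERF_SAMPLE_BRANCH_ANY_CALL },
        	{ "ind_call", PERF_SAMPLE_BRANCH_IND_CALL },
        	{ "ind_jmp",  PERF_SAMPLE_BRANCH_IND_JUMP },	/* new with this patch */
        	{ NULL, 0 },
        };

        static uint64_t parse_branch_mode(const char *str)
        {
        	const struct branch_mode *bm;

        	for (bm = branch_modes; bm->name; bm++)
        		if (!strcmp(bm->name, str))
        			return bm->mode;
        	return 0;	/* unknown mode */
        }

        int main(void)
        {
        	printf("branch_sample_type bit for ind_jmp: 0x%llx\n",
        	       (unsigned long long)parse_branch_mode("ind_jmp"));
        	return 0;
        }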
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ Fixup against: f00898f4 (perf tools: Move branch option parsing to own file) ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@redhat.com
      Cc: dsahern@gmail.com
      Cc: jolsa@redhat.com
      Cc: kan.liang@intel.com
      Cc: namhyung@kernel.org
      Link: http://lkml.kernel.org/r/1431637800-31061-4-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5b68164d