1. 01 Sep, 2017 7 commits
  2. 30 Aug, 2017 1 commit
    • perf report: Calculate the average cycles of iterations · c4ee0625
      Jin Yao authored
      The branch history code has a loop detection function. With this, we can
      get the number of iterations by counting the removed loops.

      It would also be useful to know the average number of cycles per
      iteration. This patch adds up the cycles in the branch entries of the
      removed loops and saves the result to the next branch entry (e.g.
      branch entry A).

      Finally, it displays the iteration count and average cycles at the
      "from" of branch entry A.
      
      For example:
      perf record -g -j any,save_type ./div
      perf report --branch-history --no-children --stdio
      
      --22.63%--main div.c:42 (RET CROSS_2M)
                compute_flag div.c:28 (cycles:2 iter:173115 avg_cycles:2)
                |
                 --10.73%--compute_flag div.c:27 (RET CROSS_2M)
                           rand rand.c:28 (cycles:1)
                           rand rand.c:28 (RET CROSS_2M)
                           __random random.c:298 (cycles:1)
                           __random random.c:297 (COND_BWD CROSS_2M)
                           __random random.c:295 (cycles:1)
                           __random random.c:295 (COND_BWD CROSS_2M)
                           __random random.c:295 (cycles:1)
                           __random random.c:295 (RET CROSS_2M)
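
      A minimal sketch of the accounting described above (field and function
      names are illustrative, not the exact tools/perf structures): cycles from
      the removed loop entries are accumulated into the surviving entry and
      divided by the detected iteration count.

          struct iter_info {
                  unsigned int  iter_count;   /* iterations found by loop removal        */
                  unsigned long iter_cycles;  /* cycles summed from the removed entries  */
          };

          static unsigned long avg_iter_cycles(const struct iter_info *it)
          {
                  /* e.g. "iter:173115 avg_cycles:2" in the report output above */
                  return it->iter_count ? it->iter_cycles / it->iter_count : 0;
          }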
      Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1502111115-18305-1-git-send-email-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. 29 Aug, 2017 9 commits
    • Merge tag 'perf-core-for-mingo-4.14-20170829' of... · 1b2f76d7
      Ingo Molnar authored
      Merge tag 'perf-core-for-mingo-4.14-20170829' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
       - Fix remote HITM detection for Skylake in 'perf c2c' (Jiri Olsa)
      
       - Fixes for the handling of PERF_RECORD_READ records (Jiri Olsa)
      
       - Fix kprobes blacklist symbol lookup in 'perf probe' (Li Bin)
      
       - The PLT header and entry sizes are not the same in !x86, fix it for ARM and
         AARCH64 (Li Bin)
      
       - Beautify pkey_{alloc,free,mprotect} arguments in 'perf trace' (Arnaldo Carvalho de Melo)
      
       - Fix CC, AR, LD external definition, allow flex and bison to be
         externally defined and other related Makefile fixes (David Carrillo-Cisneros)
      
       - Sync CPU features kernel ABI headers with tooling headers (Arnaldo Carvalho de Melo)
      
       - Fix path to PMU formats in 'perf stat' documentation (Jack Henschel)
      
       - Fix static build with newer toolchains (Jiri Olsa)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf symbols: Fix plt entry calculation for ARM and AARCH64 · b2f76050
      Li Bin authored
      On x86, the plt header size is the same as the plt entry size, and both
      can be identified from the sh_entsize of the plt section header.

      But we can't assume that the sh_entsize of the plt shdr is always the
      plt entry size on every architecture, and the plt header size may
      differ from the plt entry size on some architectures.

      On ARM, the plt header size is 20 bytes and the plt entry size is 12
      bytes (ignoring the FOUR_WORD_PLT case), per the binutils
      implementation. The plt section is as follows:
      
      Disassembly of section .plt:
      000004a0 <__cxa_finalize@plt-0x14>:
       4a0:   e52de004        push    {lr}            ; (str lr, [sp, #-4]!)
       4a4:   e59fe004        ldr     lr, [pc, #4]    ; 4b0 <_init+0x1c>
       4a8:   e08fe00e        add     lr, pc, lr
       4ac:   e5bef008        ldr     pc, [lr, #8]!
       4b0:   00008424        .word   0x00008424
      
      000004b4 <__cxa_finalize@plt>:
       4b4:   e28fc600        add     ip, pc, #0, 12
       4b8:   e28cca08        add     ip, ip, #8, 20  ; 0x8000
       4bc:   e5bcf424        ldr     pc, [ip, #1060]!        ; 0x424
      
      000004c0 <printf@plt>:
       4c0:   e28fc600        add     ip, pc, #0, 12
       4c4:   e28cca08        add     ip, ip, #8, 20  ; 0x8000
       4c8:   e5bcf41c        ldr     pc, [ip, #1052]!        ; 0x41c
      
      On AARCH64, the plt header size is 32 bytes and the plt entry size is 16
      bytes.  The plt section is as follows:
      
      Disassembly of section .plt:
      0000000000000560 <__cxa_finalize@plt-0x20>:
       560:   a9bf7bf0        stp     x16, x30, [sp,#-16]!
       564:   90000090        adrp    x16, 10000 <__FRAME_END__+0xf8a8>
       568:   f944be11        ldr     x17, [x16,#2424]
       56c:   9125e210        add     x16, x16, #0x978
       570:   d61f0220        br      x17
       574:   d503201f        nop
       578:   d503201f        nop
       57c:   d503201f        nop
      
      0000000000000580 <__cxa_finalize@plt>:
       580:   90000090        adrp    x16, 10000 <__FRAME_END__+0xf8a8>
       584:   f944c211        ldr     x17, [x16,#2432]
       588:   91260210        add     x16, x16, #0x980
       58c:   d61f0220        br      x17
      
      0000000000000590 <__gmon_start__@plt>:
       590:   90000090        adrp    x16, 10000 <__FRAME_END__+0xf8a8>
       594:   f944c611        ldr     x17, [x16,#2440]
       598:   91262210        add     x16, x16, #0x988
       59c:   d61f0220        br      x17
      
      NOTES:
      
      In addition to ARM and AARCH64, other architectures, such as
      s390/alpha/mips/parisc/powerpc/sh/sparc/xtensa, also need to consider
      this issue.
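
      A minimal sketch, assuming the per-arch sizes quoted above, of how the
      number of plt entries could be derived once header and entry sizes are
      known instead of trusting sh_entsize everywhere (names are illustrative,
      not the exact tools/perf code):

          #include <stdint.h>
          #include <string.h>

          struct plt_layout {
                  uint64_t header_size;
                  uint64_t entry_size;
          };

          static struct plt_layout plt_layout_for(const char *arch, uint64_t sh_entsize)
          {
                  /* x86 default: header size == entry size == sh_entsize */
                  struct plt_layout l = { sh_entsize, sh_entsize };

                  if (!strcmp(arch, "arm")) {             /* 20-byte header, 12-byte entries */
                          l.header_size = 20;
                          l.entry_size  = 12;
                  } else if (!strcmp(arch, "aarch64")) {  /* 32-byte header, 16-byte entries */
                          l.header_size = 32;
                          l.entry_size  = 16;
                  }
                  return l;
          }

          static uint64_t plt_entry_count(uint64_t plt_section_size, struct plt_layout l)
          {
                  return (plt_section_size - l.header_size) / l.entry_size;
          }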
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Acked-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexis Berlemont <alexis.berlemont@gmail.com>
      Cc: David Tolnay <dtolnay@gmail.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: zhangmengting@huawei.com
      Link: http://lkml.kernel.org/r/1496622849-21877-1-git-send-email-huawei.libin@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf probe: Fix kprobe blacklist checking condition · 2c29461e
      Li Bin authored
      Since commit 9aaf5a5f ("perf probe: Check kprobes blacklist when
      adding new events"), 'perf probe' checks the blacklist of the
      functions which cannot be probed.  But the checking condition is wrong:
      the end_addr of a symbol, which is the start_addr of the next symbol,
      must not be included in the range.
      
      Committer notes:
      
      IOW make it match its kernel counterpart in kernel/kprobes.c:
      
        bool within_kprobe_blacklist(unsigned long addr)
      
      Each entry has as its end address not its last address, but the first
      address _outside_ that symbol, which, for adjacent functions, is the
      first address of the next symbol, like these from kernel/trace/trace_probe.c:
      
      0xffffffffbd198df0-0xffffffffbd198e40	print_type_u8
      0xffffffffbd198e40-0xffffffffbd198e90	print_type_u16
      0xffffffffbd198e90-0xffffffffbd198ee0	print_type_u32
      0xffffffffbd198ee0-0xffffffffbd198f30	print_type_u64
      0xffffffffbd198f30-0xffffffffbd198f80	print_type_s8
      0xffffffffbd198f80-0xffffffffbd198fd0	print_type_s16
      0xffffffffbd198fd0-0xffffffffbd199020	print_type_s32
      0xffffffffbd199020-0xffffffffbd199070	print_type_s64
      0xffffffffbd199070-0xffffffffbd1990c0	print_type_x8
      0xffffffffbd1990c0-0xffffffffbd199110	print_type_x16
      0xffffffffbd199110-0xffffffffbd199160	print_type_x32
      0xffffffffbd199160-0xffffffffbd1991b0	print_type_x64
      
      But not always:
      
      0xffffffffbd1997b0-0xffffffffbd1997c0	fetch_kernel_stack_address (kernel/trace/trace_probe.c)
      0xffffffffbd1c57f0-0xffffffffbd1c58b0	__context_tracking_enter   (kernel/context_tracking.c)
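
      A minimal sketch of the half-open interval check implied above (mirroring
      the idea of within_kprobe_blacklist() rather than copying it verbatim):
      because end_addr is the first address outside the symbol, the comparison
      must be 'addr < end', never '<=':

          #include <stdbool.h>

          struct blacklist_range {
                  unsigned long start_addr;
                  unsigned long end_addr;   /* first address after the symbol */
          };

          static bool addr_in_blacklist_range(unsigned long addr,
                                              const struct blacklist_range *r)
          {
                  return addr >= r->start_addr && addr < r->end_addr;
          }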
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: zhangmengting@huawei.com
      Fixes: 9aaf5a5f ("perf probe: Check kprobes blacklist when adding new events")
      Link: http://lkml.kernel.org/r/1504011443-7269-1-git-send-email-huawei.libin@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf/x86: Fix caps/ for !Intel · 5da382eb
      Peter Zijlstra authored
      Move the 'max_precise' capability into generic x86 code where it
      belongs. This fixes a sysfs splat on !Intel systems where we fail to set
      x86_pmu_caps_group.attrs.
      Reported-and-tested-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Fixes: 22688d1c20f5 ("x86/perf: Export some PMU attributes in caps/ directory")
      Link: http://lkml.kernel.org/r/20170828104650.2u3rsim4jafyjzv2@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core, x86: Add PERF_SAMPLE_PHYS_ADDR · fc7ce9c7
      Kan Liang authored
      For understanding how the workload maps to memory channels and hardware
      behavior, it's very important to collect address maps with physical
      addresses. For example, 3D XPoint access can only be found by filtering
      the physical address.
      
      Add a new sample type for physical address.
      
      perf already has a facility to collect data virtual addresses. This patch
      introduces a function to convert a virtual address to a physical address.
      The function is quite generic and can be extended to any architecture as
      long as a virtual address is provided.
      
       - For kernel direct mapping addresses, virt_to_phys is used to convert
         the virtual addresses to physical addresses.

       - For user virtual addresses, __get_user_pages_fast is used to walk the
         page tables and obtain the user physical address.
      
       - This does not work for vmalloc addresses right now. These are not
         resolved, but code to do that could be added.
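
      A minimal kernel-context sketch of the translation split described in the
      list above (simplified; helpers such as virt_addr_valid, virt_to_phys and
      __get_user_pages_fast are real kernel APIs, but the surrounding checks
      are illustrative, not the exact implementation):

          static u64 sample_phys_addr(u64 virt)
          {
                  u64 phys = 0;

                  if (virt >= TASK_SIZE) {
                          /* Kernel direct-map addresses translate directly;
                           * vmalloc addresses stay unresolved (phys == 0). */
                          if (virt_addr_valid((void *)(uintptr_t)virt))
                                  phys = (u64)virt_to_phys((void *)(uintptr_t)virt);
                  } else {
                          /* User addresses: walk the page tables without sleeping. */
                          struct page *p;

                          if (__get_user_pages_fast(virt, 1, 0, &p) == 1) {
                                  phys = page_to_phys(p) + (virt & ~PAGE_MASK);
                                  put_page(p);
                          }
                  }
                  return phys;
          }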
      
      The new sample type requires collecting the virtual address. The
      virtual address will not be output unless SAMPLE_ADDR is applied.
      
      For security reasons, the physical address can only be exposed to root
      or a privileged user.
      Tested-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: mpe@ellerman.id.au
      Link: http://lkml.kernel.org/r/1503967969-48278-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core, pt, bts: Get rid of itrace_started · 8d4e6c4c
      Alexander Shishkin authored
      I just noticed that hw.itrace_started and hw.config are aliased to the
      same location. Now, the PT driver happens to use both, which works out
      fine by sheer luck:
      
       - STORE(hw.itrace_started) is ordered before STORE(hw.config) in
         program order, although there are no compiler barriers to ensure that,

       - to perf_log_itrace_start(), hw.itrace_started appears to be set at the
         time it is intended to be set, because both stores happen in the same
         path,
      
       - hw.config is never reset to zero in the PT driver.
      
      Now, the use of hw.config by the PT driver makes more sense (it being a
      HW PMU) than messing around with itrace_started, which is an awkward API
      to begin with.
      
      This patch replaces hw.itrace_started with an attach_state bit and an
      API call for the PMU drivers to use to communicate the condition.
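
      A minimal sketch of the shape of the replacement described above (the
      attach_state bit and helper names follow the commit text, but the exact
      definitions are illustrative, not the final perf_event.h code):

          /* illustrative bit value; the real flag lives in perf_event.h */
          #define PERF_ATTACH_ITRACE      0x10

          static inline void perf_event_itrace_started(struct perf_event *event)
          {
                  event->attach_state |= PERF_ATTACH_ITRACE;
          }

          /* perf_log_itrace_start() can then test event->attach_state instead
           * of the driver-owned, aliased hw.itrace_started field. */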
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20170330153956.25994-1-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Ingo Molnar · e0563e04
    • perf/ftrace: Fix double traces of perf on ftrace:function · 75e83876
      Zhou Chengming authored
      When running perf on the ftrace:function tracepoint, there is a bug
      which can be reproduced by:
      
        perf record -e ftrace:function -a sleep 20 &
        perf record -e ftrace:function ls
        perf script
      
              ls 10304 [005]   171.853235: ftrace:function: perf_output_begin
              ls 10304 [005]   171.853237: ftrace:function: perf_output_begin
              ls 10304 [005]   171.853239: ftrace:function: task_tgid_nr_ns
              ls 10304 [005]   171.853240: ftrace:function: task_tgid_nr_ns
              ls 10304 [005]   171.853242: ftrace:function: __task_pid_nr_ns
              ls 10304 [005]   171.853244: ftrace:function: __task_pid_nr_ns
      
      We can see that all the function traces are doubled.
      
      The problem is caused by an inconsistency between the register
      function perf_ftrace_event_register() and the probe function
      perf_ftrace_function_call(): the former registers one probe for every
      perf_event, while the latter handles all perf_events on the current
      CPU. So when there are two perf_events on the current CPU, their traces
      are doubled.
      
      So this patch adds an extra "event" parameter to perf_tp_event(), and
      sample data is sent only to that event when it is not NULL.
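
      A minimal sketch of the dispatch change described above (simplified, not
      the exact kernel code; perf_swevent_event() and the hlist names are
      borrowed from kernel/events/core.c):

          static void tp_event_deliver(struct hlist_head *head, u64 count,
                                       struct perf_sample_data *data,
                                       struct pt_regs *regs,
                                       struct perf_event *event)
          {
                  if (event) {
                          /* A specific event was passed: deliver only to it. */
                          perf_swevent_event(event, count, data, regs);
                          return;
                  }

                  /* Legacy path: deliver to every event hashed on this CPU. */
                  hlist_for_each_entry_rcu(event, head, hlist_entry)
                          perf_swevent_event(event, count, data, regs);
          }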
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: huawei.libin@huawei.com
      Link: http://lkml.kernel.org/r/1503668977-12526-1-git-send-email-zhouchengming1@huawei.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Fix potential double-fetch bug · f12f42ac
      Meng Xu authored
      While examining the kernel source code, I found a dangerous operation that
      could turn into a double-fetch situation (a race condition bug) where the
      same userspace memory region is fetched twice into the kernel, with sanity
      checks after the first fetch but no checks after the second fetch.
      
        1. The first fetch happens in line 9573 get_user(size, &uattr->size).
      
        2. Subsequently the 'size' variable undergoes a few sanity checks and
           transformations (line 9577 to 9584).
      
        3. The second fetch happens in line 9610 copy_from_user(attr, uattr, size)
      
        4. Given that 'uattr' can be fully controlled in userspace, an attacker can
           race to override 'uattr->size' with an arbitrary value (say, 0xFFFFFFFF)
           after the first fetch but before the second fetch. The changed value will
           be copied to 'attr->size'.
      
        5. There are no further checks on 'attr->size' until the end of this function,
           and once the function returns, we lose the context to verify that 'attr->size'
           conforms to the sanity checks performed in step 2 (line 9577 to 9584).
      
        6. My manual analysis shows that 'attr->size' is not used elsewhere later,
           so there is no working exploit against it right now. However, this could
           easily turn into an exploitable one if careless developers start to use
           'attr->size' later.
      
      To fix this, override 'attr->size' after the second fetch with the value from
      the first fetch, regardless of what is actually copied in.
      
      In this way, it is assured that 'attr->size' is consistent with the checks
      performed after the first fetch.
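
      A minimal sketch of the pattern described above (not the exact
      perf_copy_attr() code; the checks shown are illustrative): the size
      validated after the first fetch is written back over whatever the second
      fetch copied in:

          static int copy_attr_checked(struct perf_event_attr __user *uattr,
                                       struct perf_event_attr *attr)
          {
                  u32 size;

                  if (get_user(size, &uattr->size))          /* first fetch  */
                          return -EFAULT;

                  if (size > PAGE_SIZE || size < PERF_ATTR_SIZE_VER0)
                          return -E2BIG;                     /* sanity checks */

                  if (copy_from_user(attr, uattr, size))     /* second fetch */
                          return -EFAULT;

                  attr->size = size;  /* ignore a racing update of uattr->size */
                  return 0;
          }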
      Signed-off-by: Meng Xu <mengxu.gatech@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: meng.xu@gatech.edu
      Cc: sanidhya@gatech.edu
      Cc: taesoo@gatech.edu
      Link: http://lkml.kernel.org/r/1503522470-35531-1-git-send-email-meng.xu@gatech.edu
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 28 Aug, 2017 23 commits