- 12 Aug, 2024 6 commits
-
-
Ian Rogers authored
Cache home agent (CHA) events were setting the low rather than the high config1 bits. SNR was using CLX CHA events, but its CHA is similar to ICX's, so remove those events. Incorporate the updates in:
https://github.com/intel/perfmon/pull/215
https://github.com/intel/perfmon/pull/216
Fixes: 4cc49942 ("perf vendor events: Update cascadelakex events/metrics")
Closes: https://lore.kernel.org/linux-perf-users/CAPhsuW4nem9XZP+b=sJJ7kqXG-cafz0djZf51HsgjCiwkGBA+A@mail.gmail.com/
Reported-by: Song Liu <song@kernel.org> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Co-authored-by: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240811042004.421869-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
The bpf_get_stackid() helper returns a signed type so the caller can check whether getting a stacktrace failed, but the result was saved in a u32 and then tested for being negative:

  376  if (needs_callstack) {
  377          pelem->stack_id = bpf_get_stackid(ctx, &stacks,
  378                                            BPF_F_FAST_STACK_CMP | stack_skip);
  --> 379      if (pelem->stack_id < 0)

./tools/perf/util/bpf_skel/lock_contention.bpf.c:379 contention_begin() warn: unsigned 'pelem->stack_id' is never less than zero.

Let's change the type to s32 instead.
Fixes: 6d499a6b ("perf lock: Print the number of lost entries for BPF")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240812172533.2015291-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
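A self-contained toy illustration of why the unsigned check could never fire (this is not the perf BPF code; the actual fix just changes the field's type from u32 to s32):
```
#include <stdio.h>

/* Toy stand-in for bpf_get_stackid() failing with a negative errno. */
static long get_stackid(void) { return -14; }

int main(void)
{
	unsigned int bad = get_stackid();	/* negative value wraps around */
	int good = get_stackid();

	/* Prints "0 1": the unsigned comparison is always false. */
	printf("%d %d\n", bad < 0, good < 0);
	return 0;
}
```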
-
Namhyung Kim authored
In get_member_overhead(), k is updated when the member has an entry in the histogram. But the entry->hists[] array is allocated with the number of evsels in the group, so k should be reset when iterating the events with for_each_group_evsel(); otherwise it would crash due to a buffer overflow.
Fixes: cb1898f5 ("perf annotate-data: Support --skip-empty option")
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240810191502.1947959-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
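A hedged sketch of the indexing pattern after the fix (all names here are made up, not the real annotate-data code): the per-field hists[] array is sized by the number of events in the group, so the index into it must restart at zero for every field instead of accumulating across the outer loop.
```
#define NR_EVENTS 4

struct member_hist { unsigned long hists[NR_EVENTS]; };

static void fill_overheads(struct member_hist *fields, int nr_fields,
			   const unsigned long periods[][NR_EVENTS])
{
	for (int f = 0; f < nr_fields; f++) {
		int k = 0;	/* the fix: reset per field, not once up front */

		for (int ev = 0; ev < NR_EVENTS; ev++) {
			if (periods[f][ev] == 0)
				continue;	/* empty event, skipped */
			fields[f].hists[k++] = periods[f][ev];
		}
	}
}
```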
-
Leo Yan authored
The current description of the AUX trace buffer size is misleading. When a user specifies the option '-m,512M', it means a size in bytes (512MiB), not 512M pages (which would be 512M x 4KiB with a 4KiB page size). Make the documentation clear that the normal buffer and the AUX tracing buffer share the same semantics, and sync the documents so the text is consistent.
Reviewed-by: James Clark <james.clark@linaro.org> Signed-off-by: Leo Yan <leo.yan@arm.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240812093459.2575278-1-leo.yan@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Martin Liška authored
Similarly to other subcommands (like report and top), it is handy to be able to provide a path to the addr2line command.
Signed-off-by: Martin Liska <martin.liska@hey.com> Cc: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/eadc3e36-029d-4848-9d69-272fe5a83a26@foxlink.cz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
Instead of explicitly initializing just the .name and .alias_name members, use designated (named) struct member initialization of only the non-null .name field; the compiler will then initialize all other, not explicitly initialized, fields to NULL. This makes the code more robust, avoiding the error recently fixed where .alias_name was used while containing a random value.
Reviewed-by: Veronika Molnarova <vmolnaro@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Radostin Stoyanov <rstoyano@redhat.com>
Link: https://lore.kernel.org/lkml/e26941f9-f86c-4f2e-b812-20c49fb2c0d3@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
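A quick illustration of why a designated initializer is enough here (plain C semantics, with a hypothetical stand-in type, not the actual perf hunk): every member not named in the initializer is zero-initialized, so pointer fields such as .alias_name can never hold stack garbage.
```
struct pmu_like {		/* hypothetical stand-in for struct perf_pmu */
	const char *name;
	const char *alias_name;
};

/* Members that are not listed are zeroed, so .alias_name ends up NULL. */
static struct pmu_like test_pmu = {
	.name = "pmu_name",
};
```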
-
- 09 Aug, 2024 5 commits
-
-
Namhyung Kim authored
The --skip-empty option is to hide dummy events in a group. Like other output mode in 'perf report' and 'perf annotate', the data-type profiling output should support the option. Committer testing: With dummy: root@number:~# perf annotate --stdio --group --data-type --skip-empty | head -24 Annotate type: 'pthread_mutex_t' in /usr/lib64/libc.so.6 (50 samples): event[0] = cpu_atom/mem-loads,ldlat=30/P event[1] = cpu_atom/mem-stores/P event[2] = dummy:u ============================================================================ Percent offset size field 100.00 100.00 0.00 0 40 pthread_mutex_t { 100.00 100.00 0.00 0 40 struct __pthread_mutex_s __data { 45.21 84.54 0.00 0 4 int __lock; 0.00 0.00 0.00 4 4 unsigned int __count; 0.00 1.83 0.00 8 4 int __owner; 5.19 10.65 0.00 12 4 unsigned int __nusers; 49.61 2.97 0.00 16 4 int __kind; 0.00 0.00 0.00 20 2 short int __spins; 0.00 0.00 0.00 22 2 short int __elision; 0.00 0.00 0.00 24 16 __pthread_list_t __list { 0.00 0.00 0.00 24 8 struct __pthread_internal_list* __prev; 0.00 0.00 0.00 32 8 struct __pthread_internal_list* __next; }; }; 0.00 0.00 0.00 0 0 char[] __size; 45.21 84.54 0.00 0 8 long int __align; }; Skipping it: root@number:~# perf annotate --stdio --group --data-type --skip-empty | head -24 Annotate type: 'pthread_mutex_t' in /usr/lib64/libc.so.6 (50 samples): event[0] = cpu_atom/mem-loads,ldlat=30/P event[1] = cpu_atom/mem-stores/P ============================================================================ Percent offset size field 100.00 100.00 0 40 pthread_mutex_t { 100.00 100.00 0 40 struct __pthread_mutex_s __data { 45.21 84.54 0 4 int __lock; 0.00 0.00 4 4 unsigned int __count; 0.00 1.83 8 4 int __owner; 5.19 10.65 12 4 unsigned int __nusers; 49.61 2.97 16 4 int __kind; 0.00 0.00 20 2 short int __spins; 0.00 0.00 22 2 short int __elision; 0.00 0.00 24 16 __pthread_list_t __list { 0.00 0.00 24 8 struct __pthread_internal_list* __prev; 0.00 0.00 32 8 struct __pthread_internal_list* __next; }; }; 0.00 0.00 0 0 char[] __size; 45.21 84.54 0 8 long int __align; }; Annotate type: 'pthread_mutexattr_t' in /usr/lib64/libc.so.6 (1 samples): root@number:~# Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240807061713.1642924-1-namhyung@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
When --group option is used, it should display all events together. But the current logic only checks if the first (leader) event has samples or not. Let's check the member events as well. Also it missed to put the linked samples from member evsels to the output RB-tree so that it can be displayed in the output. For example, take a look at this example. $ ./perf evlist cpu/mem-loads,ldlat=30/P cpu/mem-stores/P dummy:u It has three events but 'path_put' function has samples only for mem-stores (second) event. $ sudo ./perf annotate --stdio -f path_put Percent | Source code & Disassembly of kcore for cpu/mem-stores/P (2 samples, percent: local period) ---------------------------------------------------------------------------------------------------------- : 0 0xffffffffae600020 <path_put>: 0.00 : ffffffffae600020: endbr64 0.00 : ffffffffae600024: nopl (%rax, %rax) 91.22 : ffffffffae600029: pushq %rbx 0.00 : ffffffffae60002a: movq %rdi, %rbx 0.00 : ffffffffae60002d: movq 8(%rdi), %rdi 8.78 : ffffffffae600031: callq 0xffffffffae614aa0 0.00 : ffffffffae600036: movq (%rbx), %rdi 0.00 : ffffffffae600039: popq %rbx 0.00 : ffffffffae60003a: jmp 0xffffffffae620670 0.00 : ffffffffae60003f: nop Therefore, it didn't show up when --group option is used since the leader ("mem-loads") event has no samples. But now it checks both events. Before: $ sudo ./perf annotate --stdio -f --group path_put (no output) After: $ sudo ./perf annotate --stdio -f --group path_put Percent | Source code & Disassembly of kcore for cpu/mem-loads,ldlat=30/P, cpu/mem-stores/P, dummy:u (0 samples, percent: local period) ------------------------------------------------------------------------------------------------------------------------------------------------------------- : 0 0xffffffffae600020 <path_put>: 0.00 0.00 0.00 : ffffffffae600020: endbr64 0.00 0.00 0.00 : ffffffffae600024: nopl (%rax, %rax) 0.00 91.22 0.00 : ffffffffae600029: pushq %rbx 0.00 0.00 0.00 : ffffffffae60002a: movq %rdi, %rbx 0.00 0.00 0.00 : ffffffffae60002d: movq 8(%rdi), %rdi 0.00 8.78 0.00 : ffffffffae600031: callq 0xffffffffae614aa0 0.00 0.00 0.00 : ffffffffae600036: movq (%rbx), %rdi 0.00 0.00 0.00 : ffffffffae600039: popq %rbx 0.00 0.00 0.00 : ffffffffae60003a: jmp 0xffffffffae620670 0.00 0.00 0.00 : ffffffffae60003f: nop Committer testing: Before: root@number:~# perf annotate --group --stdio2 clear_page_erms root@number:~# After: root@number:~# perf annotate --group --stdio2 clear_page_erms Samples: 125 of events 'cpu_atom/mem-loads,ldlat=30/P, cpu_atom/mem-stores/P, dummy:u', 4000 Hz, Event count (approx.): 13198416, [percent: local period] clear_page_erms() /proc/kcore Percent 0xffffffff990c6cc0 <clear_page_erms>: endbr64 movl $0x1000,%ecx xorl %eax,%eax 0.00 100.00 0.00 rep stosb %al, (%rdi) ← retq int3 int3 int3 int3 nop nop root@number:~# Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20240807061555.1642669-1-namhyung@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Andi Kleen authored
Create a 'source' symlink to the original source in the objdir. This is similar to what the main kernel build script does. Committer testing:

  ⬢[acme@toolbox perf-tools-next]$ make O=/tmp/build/$(basename $PWD)/ -C tools/perf install-bin
  <SNIP>
  ⬢[acme@toolbox perf-tools-next]$ ls -la /tmp/build/perf-tools-next/source
  lrwxrwxrwx. 1 acme acme 41 Aug 9 16:26 /tmp/build/perf-tools-next/source -> /home/acme/git/perf-tools-next/tools/perf
  ⬢[acme@toolbox perf-tools-next]$

Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20240807231823.898979-1-ak@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
In that case we have a set of placeholder functions, one of which uses the 'Dwarf_Addr' type that is not available, as it is defined in the missing DWARF libraries, so provide a placeholder typedef for it as well. The build error before this patch:

  In file included from util/annotate.c:28:
  util/debuginfo.h:44:46: error: unknown type name ‘Dwarf_Addr’
     44 |                              Dwarf_Addr *offs __maybe_unused,
        |                              ^~~~~~~~~~
  make[6]: *** [/home/acme/git/perf-tools-next/tools/build/Makefile.build:106: util/annotate.o] Error 1
  make[6]: *** Waiting for unfinished jobs....

Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com>
Link: https://lore.kernel.org/lkml/CAM9d7ciushSwEfj7yW4rtDEJBTcCB991V4cswwFEL+cv6QF2pg@mail.gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
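A sketch of the kind of fallback this describes for builds without the DWARF libraries (the guard macro and the exact integer width are assumptions; libdw defines Dwarf_Addr as a 64-bit address type):
```
#ifndef HAVE_DWARF_SUPPORT
/* Dummy stand-in so the placeholder prototypes in debuginfo.h still parse. */
typedef unsigned long long Dwarf_Addr;
#endif
```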
-
Zixian Cai authored
For example, when using the Alder Lake PMU memory load event, the instruction latency is stored in 'ins_lat', while the cache latency is stored in 'weight'. This patch reports the 'ins_lat' field for Python scripting. Committer testing: On a Rocket Lake Refresh Intel machine (14th gen): root@number:~# grep -m1 'model name' /proc/cpuinfo model name : Intel(R) Core(TM) i7-14700K root@number:~# perf mem record -a sleep 5 Memory events are enabled on a subset of CPUs: 16-27 [ perf record: Woken up 85 times to write data ] [ perf record: Captured and wrote 41.236 MB perf.data (191390 samples) ] root@number:~# perf evlist -v cpu_atom/mem-loads,ldlat=30/P: type: 10 (cpu_atom), size: 136, config: 0x5d0 (mem-loads), { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|ADDR|CPU|PERIOD|IDENTIFIER|DATA_SRC|WEIGHT_STRUCT, read_format: ID|LOST, disabled: 1, inherit: 1, freq: 1, precise_ip: 3, sample_id_all: 1, { bp_addr, config1 }: 0x1f cpu_atom/mem-stores/P: type: 10 (cpu_atom), size: 136, config: 0x6d0 (mem-stores), { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|ADDR|CPU|PERIOD|IDENTIFIER|DATA_SRC|WEIGHT_STRUCT, read_format: ID|LOST, disabled: 1, inherit: 1, freq: 1, precise_ip: 3, sample_id_all: 1 dummy:u: type: 1 (software), size: 136, config: 0x9 (PERF_COUNT_SW_DUMMY), { sample_period, sample_freq }: 1, sample_type: IP|TID|TIME|ADDR|CPU|IDENTIFIER|DATA_SRC|WEIGHT_STRUCT, read_format: ID|LOST, inherit: 1, exclude_kernel: 1, exclude_hv: 1, mmap: 1, comm: 1, task: 1, mmap_data: 1, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1 root@number:~# Now generate a python script to then dump the dictionary that now needs to have that 'ins_lat' field: root@number:~# perf script --gen python generated Python script: perf-script.py root@number:~# vim perf-script.py root@number:~# perf script -s perf-script.py | head -40 in trace_begin in trace_end root@number:~# vim perf-script.py Reviewed-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Zixian Cai <fzczx123@gmail.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ben Gainey <ben.gainey@arm.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paran Lee <p4ranlee@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240809080137.3590148-1-fzczx123@gmail.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 08 Aug, 2024 9 commits
-
-
Arnaldo Carvalho de Melo authored
Running on a: root@x1:~# grep 'model name' -m1 /proc/cpuinfo model name : 13th Gen Intel(R) Core(TM) i7-1365U root@x1:~# It skips all the tests with: root@x1:~# perf test -vvvv LBR 97: perf record LBR tests: --- start --- test child forked, pid 2033388 Skip: only x86 CPUs support LBR ---- end(-2) ---- 97: perf record LBR tests : Skip root@x1:~# Because the test checks for the /sys/devices/cpu/caps/branches file, that isn't present as we have instead: root@x1:~# ls -la /sys/devices/cpu*/caps/branches -r--r--r--. 1 root root 4096 Aug 8 11:22 /sys/devices/cpu_atom/caps/branches -r--r--r--. 1 root root 4096 Aug 8 11:21 /sys/devices/cpu_core/caps/branches root@x1:~# If we check as well for one of those, /sys/devices/cpu_core/caps/branches, then we don't skip the tests and all are run on these x86 Intel Hybrid systems as well, passing all of them: root@x1:~# perf test -vvvv LBR 97: perf record LBR tests: --- start --- test child forked, pid 2034956 LBR callgraph [ perf record: Woken up 5 times to write data ] [ perf record: Captured and wrote 1.812 MB /tmp/__perf_test.perf.data.B2HvQ (8114 samples) ] LBR callgraph [Success] LBR any branch test [ perf record: Woken up 25 times to write data ] [ perf record: Captured and wrote 6.382 MB /tmp/__perf_test.perf.data.B2HvQ (8071 samples) ] LBR any branch test: 8071 samples LBR any branch test [Success] LBR any call test [ perf record: Woken up 23 times to write data ] [ perf record: Captured and wrote 6.208 MB /tmp/__perf_test.perf.data.B2HvQ (8092 samples) ] LBR any call test: 8092 samples LBR any call test [Success] LBR any ret test [ perf record: Woken up 24 times to write data ] [ perf record: Captured and wrote 6.396 MB /tmp/__perf_test.perf.data.B2HvQ (8093 samples) ] LBR any ret test: 8093 samples LBR any ret test [Success] LBR any indirect call test [ perf record: Woken up 25 times to write data ] [ perf record: Captured and wrote 6.344 MB /tmp/__perf_test.perf.data.B2HvQ (8067 samples) ] LBR any indirect call test: 8067 samples LBR any indirect call test [Success] LBR any indirect jump test [ perf record: Woken up 12 times to write data ] [ perf record: Captured and wrote 3.073 MB /tmp/__perf_test.perf.data.B2HvQ (8061 samples) ] LBR any indirect jump test: 8061 samples LBR any indirect jump test [Success] LBR direct calls test [ perf record: Woken up 25 times to write data ] [ perf record: Captured and wrote 6.380 MB /tmp/__perf_test.perf.data.B2HvQ (8076 samples) ] LBR direct calls test: 8076 samples LBR direct calls test [Success] LBR any indirect user call test [ perf record: Woken up 5 times to write data ] [ perf record: Captured and wrote 1.597 MB /tmp/__perf_test.perf.data.B2HvQ (8079 samples) ] LBR any indirect user call test: 8079 samples LBR any indirect user call test [Success] LBR system wide any branch test [ perf record: Woken up 26 times to write data ] [ perf record: Captured and wrote 9.088 MB /tmp/__perf_test.perf.data.B2HvQ (9209 samples) ] LBR system wide any branch test: 9209 samples LBR system wide any branch test [Success] LBR system wide any call test [ perf record: Woken up 25 times to write data ] [ perf record: Captured and wrote 8.945 MB /tmp/__perf_test.perf.data.B2HvQ (9333 samples) ] LBR system wide any call test: 9333 samples LBR system wide any call test [Success] LBR parallel any branch test LBR parallel any call test LBR parallel any ret test LBR parallel any indirect call test LBR parallel any indirect jump test LBR parallel direct calls test LBR parallel system wide any branch test LBR parallel any 
indirect user call test LBR parallel system wide any call test [ perf record: Woken up 9 times to write data ] [ perf record: Woken up 51 times to write data ] [ perf record: Woken up 1 times to write data ] [ perf record: Woken up 5 times to write data ] [ perf record: Woken up 559 times to write data ] [ perf record: Woken up 14 times to write data ] [ perf record: Woken up 17 times to write data ] [ perf record: Woken up 1 times to write data ] [ perf record: Woken up 11 times to write data ] [ perf record: Captured and wrote 0.150 MB /tmp/__perf_test.perf.data.lANpR (1909 samples) ] [ perf record: Captured and wrote 2.371 MB /tmp/__perf_test.perf.data.Olum8 (3033 samples) ] [ perf record: Captured and wrote 1.230 MB /tmp/__perf_test.perf.data.njfJ8 (1742 samples) ] [ perf record: Captured and wrote 5.554 MB /tmp/__perf_test.perf.data.4ZTrj (29662 samples) ] [ perf record: Captured and wrote 19.906 MB /tmp/__perf_test.perf.data.dlGQt (29576 samples) ] [ perf record: Captured and wrote 0.289 MB /tmp/__perf_test.perf.data.CAT7y (4311 samples) ] [ perf record: Captured and wrote 3.129 MB /tmp/__perf_test.perf.data.diuKG (3971 samples) ] LBR parallel any indirect user call test: 1909 samples [ perf record: Captured and wrote 4.858 MB /tmp/__perf_test.perf.data.sVjtN (6130 samples) ] LBR parallel any indirect user call test [Success] [ perf record: Captured and wrote 3.669 MB /tmp/__perf_test.perf.data.AJtNI (4827 samples) ] LBR parallel any indirect jump test: 4311 samples LBR parallel any indirect jump test [Success] LBR parallel direct calls test: 3033 samples LBR parallel direct calls test [Success] LBR parallel any indirect call test: 1742 samples LBR parallel any indirect call test [Success] LBR parallel any call test: 4827 samples LBR parallel any call test [Success] LBR parallel any branch test: 6130 samples LBR parallel any branch test [Success] LBR parallel system wide any branch test: 29662 samples LBR parallel any ret test: 3971 samples LBR parallel any ret test [Success] LBR parallel system wide any branch test [Success] LBR parallel system wide any call test: 29576 samples LBR parallel system wide any call test [Success] ---- end(0) ---- 97: perf record LBR tests : Ok root@x1:~# Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/lkml/ZrTXftup0H46R8WK@x1Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ian Rogers authored
Adds coverage for LBR operations and LBR callgraph. Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anne Macedo <retpolanne@posteo.net> Cc: Changbin Du <changbin.du@huawei.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240808054644.1286065-2-irogers@google.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ian Rogers authored
The 'struct callchain_cursor_node' has a 'struct map_symbol' whose maps and map members are reference counted. Ensure these values use a _get routine to increment the reference counts and use map_symbol__exit() to release the reference counts. Do similar for 'struct thread's prev_lbr_cursor, but save the size of the prev_lbr_cursor array so that it may be iterated. Ensure that when stitch_nodes are placed on the free list the map_symbols are exited. Fix resolve_lbr_callchain_sample() by replacing list_replace_init() with list_splice_init(), so the whole list is moved and nodes aren't leaked. A reproduction of the memory leaks is possible with a leak sanitizer build in the perf report command of:
```
$ perf record -e cycles --call-graph lbr perf test -w thloop
$ perf report --stitch-lbr
```
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Fixes: ff165628 ("perf callchain: Stitch LBR call stack")
Signed-off-by: Ian Rogers <irogers@google.com> [ Basic tests after applying the patch, repeating the example above ] Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anne Macedo <retpolanne@posteo.net> Cc: Changbin Du <changbin.du@huawei.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240808054644.1286065-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
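On the list_replace_init() vs list_splice_init() point: list_replace_init(old, new) only swaps a single list_head in place, while list_splice_init(list, head) moves every node from 'list' onto 'head' and reinitializes 'list'. A fragment showing the splice the fix wants (variable and function names are illustrative; the helpers come from tools/include/linux/list.h in the perf tree):
```
#include <linux/list.h>

/* Move all saved stitch nodes onto the free list; nothing is left behind
 * on 'stitch_nodes', so no cursor nodes (and their map references) leak. */
static void stitch_nodes_to_free_list(struct list_head *stitch_nodes,
				      struct list_head *free_list)
{
	list_splice_init(stitch_nodes, free_list);
}
```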
-
Veronika Molnarova authored
Commit 3e0bf9fd ("perf pmu: Restore full PMU name wildcard support") adds a test case "PMU cmdline match" that covers PMU name wildcard support provided by function perf_pmu__match(). The test works with a wide range of supported combinations of PMU name matching but omits the case that if the perf_pmu__match() cannot match the PMU name to the wildcard, it tries to match its alias. However, this variable is not set up, causing the test case to fail when run with subprocesses or to segfault if run as a single process. ./perf test -vv 9 9: Sysfs PMU tests : 9.1: Parsing with PMU format directory : Ok 9.2: Parsing with PMU event : Ok 9.3: PMU event names : Ok 9.4: PMU name combining : Ok 9.5: PMU name comparison : Ok 9.6: PMU cmdline match : FAILED! ./perf test -F 9 9.1: Parsing with PMU format directory : Ok 9.2: Parsing with PMU event : Ok 9.3: PMU event names : Ok 9.4: PMU name combining : Ok 9.5: PMU name comparison : Ok Segmentation fault (core dumped) Initialize the PMU alias to null for all tests of perf_pmu__match() as this functionality is not being tested and the alias matching works exactly the same as the matching of the PMU name. ./perf test -F 9 9.1: Parsing with PMU format directory : Ok 9.2: Parsing with PMU event : Ok 9.3: PMU event names : Ok 9.4: PMU name combining : Ok 9.5: PMU name comparison : Ok 9.6: PMU cmdline match : Ok Fixes: 3e0bf9fd ("perf pmu: Restore full PMU name wildcard support") Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Cc: James Clark <james.clark@linaro.org> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Radostin Stoyanov <rstoyano@redhat.com> Link: https://lore.kernel.org/r/20240808103749.9356-1-vmolnaro@redhat.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
In 'perf ftrace profile sleep 0.1' we know that there will be a specific kernel function that takes a bit more than 0.1 seconds and that it takes place just one time, so we can add a check for that and validate more than just the presence of some functions in the profile.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lore.kernel.org/lkml/ZrTBo7KACZeuCyLj@x1
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
$ sudo ./perf test ftrace -vv 86: perf ftrace tests: --- start --- test child forked, pid 1772223 perf ftrace list test syscalls for sleep: __x64_sys_nanosleep __ia32_sys_nanosleep __x64_sys_clock_nanosleep __ia32_sys_clock_nanosleep perf ftrace list test [Success] perf ftrace trace test # tracer: function_graph # # CPU DURATION FUNCTION CALLS # | | | | | | | 0) | __x64_sys_clock_nanosleep() { 0) | common_nsleep() { 0) | hrtimer_nanosleep() { 0) | do_nanosleep() { perf ftrace trace test [Success] perf ftrace latency test target function: __x64_sys_clock_nanosleep # DURATION | COUNT | GRAPH | 32 - 64 ms | 1 | ############################################## | perf ftrace latency test [Success] perf ftrace profile test # Total (us) Avg (us) Max (us) Count Function 100136.400 100136.400 100136.400 1 __x64_sys_clock_nanosleep 100135.200 100135.200 100135.200 1 common_nsleep 100134.700 100134.700 100134.700 1 hrtimer_nanosleep 100133.700 100133.700 100133.700 1 do_nanosleep 100130.600 100130.600 100130.600 1 schedule 166.868 55.623 80.299 3 scheduler_tick 5.926 5.926 5.926 1 native_smp_send_reschedule 301.941 301.941 301.941 1 __x64_sys_execve 295.786 295.786 295.786 1 do_execveat_common.isra.0 71.397 35.699 46.403 2 bprm_execve 2.519 1.260 1.547 2 sched_mm_cid_before_execve 1.098 0.549 0.686 2 sched_mm_cid_after_execve perf ftrace profile test [Success] ---- end(0) ---- 86: perf ftrace tests : Ok Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/20240808044954.1775333-1-namhyung@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
The die_get_typename() would resolve typedef and get to the original type. But sometimes the original type is a struct without name and it makes the output confusing and hard to read. This is a diff of perf report -s type before and after the change. New types such as atomic{,64}_t and sigset_t appeared and the portion of unnamed struct was reduced. Also u32, u64 and size_t were splitted from the base types. --- b 2024-08-01 17:02:34.307809952 -0700 +++ a 2024-08-07 14:17:05.245853999 -0700 - 2.40% long unsigned int + 2.26% long unsigned int - 1.56% unsigned int + 1.27% unsigned int - 0.98% struct - 0.79% long long unsigned int + 0.58% long long unsigned int + 0.36% struct + 0.27% atomic64_t + 0.22% u32 + 0.21% u64 + 0.19% atomic_t + 0.13% size_t - 0.08% struct seqcount_spinlock + 0.08% seqcount_spinlock_t + 0.08% sigset_t + 0.08% __poll_t Let's use the typedef name directly and the resolved to get the size of the type. Committer testing: root@x1:~# diff -u before after | head -30 --- before 2024-08-08 09:35:13.917325041 -0300 +++ after 2024-08-08 09:37:35.312257905 -0300 @@ -10,25 +10,27 @@ # ........ ......... # 79.40% (unknown) - 2.28% union 1.96% (stack operation) - 1.24% struct + 1.87% pthread_mutex_t 0.99% u32[] - 0.92% unsigned int 0.77% struct task_struct + 0.75% U32 0.75% struct pcpu_hot 0.63% struct qspinlock + 0.61% atomic_t 0.59% struct list_head - 0.58% int 0.53% struct cfs_rq 0.51% BYTE* - 0.48% unsigned char + 0.48% BYTE 0.48% long unsigned int 0.46% struct rq 0.41% struct worker 0.41% struct memcg_vmstats_percpu + 0.41% pthread_cond_t 0.37% _Bool + 0.36% int root@x1:~# Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240807223129.1738004-1-namhyung@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
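A hedged sketch of the idea using libdw (die_get_real_type() is perf's dwarf-aux helper, so this fragment assumes perf's internal headers; the actual patch lives in the data-type resolution code and may differ): keep the typedef's own name for display, but follow the chain to the underlying type only when computing the size.
```
#include <dwarf.h>
#include <elfutils/libdw.h>
/* die_get_real_type() is declared in tools/perf/util/dwarf-aux.h */

static void get_type_name_and_size(Dwarf_Die *die, const char **name, int *size)
{
	Dwarf_Die real;

	*name = dwarf_diename(die);			/* e.g. "atomic_t", "sigset_t" */
	if (dwarf_tag(die) == DW_TAG_typedef)
		die = die_get_real_type(die, &real);	/* resolve only for sizing */
	*size = die ? dwarf_bytesize(die) : 0;
}
```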
-
Namhyung Kim authored
In find_data_type(), it creates and deletes a debug info whenver it tries to find data type for a sample. This is inefficient and it most likely accesses the same binary again and again. Let's add a single entry cache the debug info structure for the last DSO. Depending on sample data, it usually gives me 2~3x (and sometimes more) speed ups. Note that this will introduce a little difference in the output due to the order of checking stack operations. It used to check the stack ops before checking the availability of debug info but I moved it after the symbol check. So it'll report stack operations in DSOs without debug info as unknown. But I think it's ok and better to have the checking near the caching logic. Committer testing: root@x1:~# perf mem record -a sleep 5s root@x1:~# perf evlist cpu_atom/mem-loads,ldlat=30/P cpu_atom/mem-stores/P dummy:u root@x1:~# diff -u before after --- before 2024-08-08 09:33:53.880780784 -0300 +++ after 2024-08-08 09:35:13.917325041 -0300 @@ -81,8 +81,8 @@ # Overhead Data Type # ........ ......... # - 55.43% (unknown) - 11.61% (stack operation) + 55.56% (unknown) + 11.48% (stack operation) 4.93% struct pcpu_hot 3.26% unsigned int 2.48% struct Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240805234648.1453689-1-namhyung@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
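A minimal sketch of a one-entry cache keyed by the last DSO (function names and placement are assumptions; debuginfo__new()/debuginfo__delete() follow perf's util/debuginfo.h API, and the real cache also has to cope with the DSO going away):
```
static struct dso *cached_dso;
static struct debuginfo *cached_di;

static struct debuginfo *get_cached_debuginfo(struct dso *dso, const char *filename)
{
	if (dso == cached_dso)
		return cached_di;		/* hit: reuse the open debug info */

	if (cached_di)
		debuginfo__delete(cached_di);	/* drop the previous entry */
	cached_di = debuginfo__new(filename);	/* may be NULL, cached anyway */
	cached_dso = dso;
	return cached_di;
}
```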
-
Ian Rogers authored
iter_finish_branch_entry() doesn't put the branch_info from/to map elements creating memory leaks. This can be seen with: ``` $ perf record -e cycles -b perf test -w noploop $ perf report -D ... Direct leak of 984344 byte(s) in 123043 object(s) allocated from: #0 0x7fb2654f3bd7 in malloc libsanitizer/asan/asan_malloc_linux.cpp:69 #1 0x564d3400d10b in map__get util/map.h:186 #2 0x564d3400d10b in ip__resolve_ams util/machine.c:1981 #3 0x564d34014d81 in sample__resolve_bstack util/machine.c:2151 #4 0x564d34094790 in iter_prepare_branch_entry util/hist.c:898 #5 0x564d34098fa4 in hist_entry_iter__add util/hist.c:1238 #6 0x564d33d1f0c7 in process_sample_event tools/perf/builtin-report.c:334 #7 0x564d34031eb7 in perf_session__deliver_event util/session.c:1655 #8 0x564d3403ba52 in do_flush util/ordered-events.c:245 #9 0x564d3403ba52 in __ordered_events__flush util/ordered-events.c:324 #10 0x564d3402d32e in perf_session__process_user_event util/session.c:1708 #11 0x564d34032480 in perf_session__process_event util/session.c:1877 #12 0x564d340336ad in reader__read_event util/session.c:2399 #13 0x564d34033fdc in reader__process_events util/session.c:2448 #14 0x564d34033fdc in __perf_session__process_events util/session.c:2495 #15 0x564d34033fdc in perf_session__process_events util/session.c:2661 #16 0x564d33d27113 in __cmd_report tools/perf/builtin-report.c:1065 #17 0x564d33d27113 in cmd_report tools/perf/builtin-report.c:1805 #18 0x564d33e0ccb7 in run_builtin tools/perf/perf.c:350 #19 0x564d33e0d45e in handle_internal_command tools/perf/perf.c:403 #20 0x564d33cdd827 in run_argv tools/perf/perf.c:447 #21 0x564d33cdd827 in main tools/perf/perf.c:561 ... ``` Clearing up the map_symbols properly creates maps reference count issues so resolve those. Resolving this issue doesn't improve peak heap consumption for the test above. Committer testing: $ sudo dnf install libasan $ make -k CORESIGHT=1 EXTRA_CFLAGS="-fsanitize=address" CC=clang O=/tmp/build/$(basename $PWD)/ -C tools/perf install-bin Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sun Haiyong <sunhaiyong@loongson.cn> Cc: Yanteng Si <siyanteng@loongson.cn> Link: https://lore.kernel.org/r/20240807065136.1039977-1-irogers@google.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
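A hedged sketch of the missing release (member names follow perf's struct branch_info and struct addr_map_symbol, but the exact hunk in iter_finish_branch_entry() may differ; the fragment assumes perf's internal headers): each branch entry holds two map_symbols whose maps/map pointers are reference counted, and both must be dropped when the iteration finishes.
```
static void branch_info__exit(struct branch_info *bi)
{
	map_symbol__exit(&bi->from.ms);
	map_symbol__exit(&bi->to.ms);
}
```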
-
- 06 Aug, 2024 6 commits
-
-
Arnaldo Carvalho de Melo authored
To pick up a patch that, albeit for the tools/perf/ directory, went through a different tree and ended up breaking some recent tests introduced in the perf-tools-next tree to validate duplicate events in the JSON performance event files.
Link: https://lore.kernel.org/lkml/ZrIqDMg7cBVhstYU@x1
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Linus Torvalds authored
Merge tag 'platform-drivers-x86-v6.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86 Pull x86 platform driver fixes from Ilpo Järvinen: "Fixes: - Fix ACPI notifier racing with itself (intel-vbtn) - Initialize local variable to cover a timeout corner case (intel/ifs) - WMI docs spelling New device IDs: - amd/{pmc,pmf}: AMD 1Ah model 60h series. - amd/pmf: SPS quirk support for ASUS ROG Ally X" * tag 'platform-drivers-x86-v6.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: platform/x86/intel/ifs: Initialize union ifs_status to zero platform/x86: msi-wmi-platform: Fix spelling mistakes platform/x86/amd/pmf: Add new ACPI ID AMDI0107 platform/x86/amd/pmc: Send OS_HINT command for new AMD platform platform/x86/amd: pmf: Add quirk for ROG Ally X platform/x86: intel-vbtn: Protect ACPI notify handler against recursion
-
Ian Rogers authored
Duplicate event names break invariants in 'perf list'. Assert that an event name isn't duplicated so that broken JSON won't build. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Atish Patra <atishp@rivosinc.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Charles Ci-Jyun Wu <dminus@andestech.com> Cc: Eric Lin <eric.lin@sifive.com> Cc: Greentime Hu <greentime.hu@sifive.com> Cc: Guilherme Amadio <amadio@gentoo.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Inochi Amaoto <inochiama@outlook.com> Cc: James Clark <james.clark@linaro.org> Cc: Ji Sheng Teoh <jisheng.teoh@starfivetech.com> Cc: Jing Zhang <renyu.zj@linux.alibaba.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.g.garry@oracle.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linux.dev> Cc: Locus Wei-Han Chen <locus84@andestech.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Sandipan Das <sandipan.das@amd.com> Cc: Vincent Chen <vincent.chen@sifive.com> Cc: Will Deacon <will@kernel.org> Cc: Xu Yang <xu.yang_2@nxp.com> Link: https://lore.kernel.org/r/20240805194424.597244-5-irogers@google.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ian Rogers authored
OP_SPEC is repeated twice in the file which will break invariants in 'perf list' as discussed in this thread: https://lore.kernel.org/linux-perf-users/20240719081651.24853-1-eric.lin@sifive.com/Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Atish Patra <atishp@rivosinc.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Charles Ci-Jyun Wu <dminus@andestech.com> Cc: Eric Lin <eric.lin@sifive.com> Cc: Greentime Hu <greentime.hu@sifive.com> Cc: Guilherme Amadio <amadio@gentoo.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Inochi Amaoto <inochiama@outlook.com> Cc: James Clark <james.clark@linaro.org> Cc: Ji Sheng Teoh <jisheng.teoh@starfivetech.com> Cc: Jing Zhang <renyu.zj@linux.alibaba.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.g.garry@oracle.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linux.dev> Cc: Locus Wei-Han Chen <locus84@andestech.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Sandipan Das <sandipan.das@amd.com> Cc: Vincent Chen <vincent.chen@sifive.com> Cc: Will Deacon <will@kernel.org> Cc: Xu Yang <xu.yang_2@nxp.com> Link: https://lore.kernel.org/r/20240805194424.597244-3-irogers@google.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ian Rogers authored
Switch from $? (all the prerequisites that are newer than the target) to $^ (all the prerequisites) as touching jevents.py will mean that empty-pmu-events.c won't be passed to the diff command breaking the build. Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Atish Patra <atishp@rivosinc.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Charles Ci-Jyun Wu <dminus@andestech.com> Cc: Eric Lin <eric.lin@sifive.com> Cc: Greentime Hu <greentime.hu@sifive.com> Cc: Guilherme Amadio <amadio@gentoo.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Inochi Amaoto <inochiama@outlook.com> Cc: James Clark <james.clark@linaro.org> Cc: Ji Sheng Teoh <jisheng.teoh@starfivetech.com> Cc: Jing Zhang <renyu.zj@linux.alibaba.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.g.garry@oracle.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linux.dev> Cc: Locus Wei-Han Chen <locus84@andestech.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Sandipan Das <sandipan.das@amd.com> Cc: Vincent Chen <vincent.chen@sifive.com> Cc: Will Deacon <will@kernel.org> Cc: Xu Yang <xu.yang_2@nxp.com> Link: https://lore.kernel.org/r/20240805194424.597244-2-irogers@google.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ian Rogers authored
Building with JEVENTS_ARCH=all builds all CPU types and allows things like assertions to check the validity of the input JSON. Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Atish Patra <atishp@rivosinc.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Charles Ci-Jyun Wu <dminus@andestech.com> Cc: Eric Lin <eric.lin@sifive.com> Cc: Greentime Hu <greentime.hu@sifive.com> Cc: Guilherme Amadio <amadio@gentoo.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Inochi Amaoto <inochiama@outlook.com> Cc: James Clark <james.clark@linaro.org> Cc: Ji Sheng Teoh <jisheng.teoh@starfivetech.com> Cc: Jing Zhang <renyu.zj@linux.alibaba.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.g.garry@oracle.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linux.dev> Cc: Locus Wei-Han Chen <locus84@andestech.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Sandipan Das <sandipan.das@amd.com> Cc: Vincent Chen <vincent.chen@sifive.com> Cc: Will Deacon <will@kernel.org> Cc: Xu Yang <xu.yang_2@nxp.com> Link: https://lore.kernel.org/r/20240805194424.597244-1-irogers@google.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 05 Aug, 2024 11 commits
-
-
Linus Torvalds authored
Merge tag 'linux_kselftest-fixes-6.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest Pull kselftest fix from Shuah Khan: "A single fix to the conditional in ksft.py script which incorrectly flags a test suite failed when there are skipped tests in the mix. The logic is fixed to take skipped tests into account and report the test as passed" * tag 'linux_kselftest-fixes-6.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: selftests: ksft: Fix finished() helper exit code on skipped tests
-
Namhyung Kim authored
Like in 'perf report', we want to hide empty events in the 'perf annotate' output. This is consistent when the option is set in perf report. For example, the following command would use 3 events including dummy. $ perf mem record -a -- perf test -w noploop $ perf evlist cpu/mem-loads,ldlat=30/P cpu/mem-stores/P dummy:u Just using perf annotate with --group will show the all 3 events. $ perf annotate --group --stdio | head Percent | Source code & Disassembly of ... -------------------------------------------------------------- : 0 0xe060 <_dl_relocate_object>: 0.00 0.00 0.00 : e060: pushq %rbp 0.00 0.00 0.00 : e061: movq %rsp, %rbp 0.00 0.00 0.00 : e064: pushq %r15 0.00 0.00 0.00 : e066: movq %rdi, %r15 0.00 0.00 0.00 : e069: pushq %r14 0.00 0.00 0.00 : e06b: pushq %r13 0.00 0.00 0.00 : e06d: movl %edx, %r13d Now with --skip-empty, it'll hide the last dummy event. $ perf annotate --group --stdio --skip-empty | head Percent | Source code & Disassembly of ... ------------------------------------------------------ : 0 0xe060 <_dl_relocate_object>: 0.00 0.00 : e060: pushq %rbp 0.00 0.00 : e061: movq %rsp, %rbp 0.00 0.00 : e064: pushq %r15 0.00 0.00 : e066: movq %rdi, %r15 0.00 0.00 : e069: pushq %r14 0.00 0.00 : e06b: pushq %r13 0.00 0.00 : e06d: movl %edx, %r13d Committer testing: root@x1:~# perf evlist cpu_atom/mem-loads,ldlat=30/P cpu_atom/mem-stores/P dummy:u root@x1:~# Before: root@x1:~# perf annotate --group --stdio2 do_lookup_x | head -25 Samples: 20 of events 'cpu_atom/mem-loads,ldlat=30/P, cpu_atom/mem-stores/P, dummy:u', 4000 Hz, Event count (approx.): 769079, [percent: local period] do_lookup_x() /usr/lib64/ld-linux-x86-64.so.2 Percent 0x9900 <do_lookup_x>: pushq %rbp movq %rsp,%rbp pushq %r15 pushq %r14 pushq %r13 pushq %r12 pushq %rbx subq $0x88,%rsp movq %rdi,-0x50(%rbp) movl 8(%r9),%edi movq 0x10(%rbp),%r12 movq 0x28(%rbp),%r10 movq %rdx,-0x70(%rbp) movq %rcx,-0x58(%rbp) movq %rdi,%r11 0.00 5.73 0.00 movq %r8,-0x68(%rbp) movq (%r9),%r8 movl %esi,%eax 8.30 0.00 0.00 movl 0x30(%rbp),%r9d movl %esi,%r15d shrl $6, %eax movq %r8,%r13 root@x1:~# After: root@x1:~# perf annotate --group --skip-empty --stdio2 do_lookup_x | head -25 Samples: 20 of events 'cpu_atom/mem-loads,ldlat=30/P, cpu_atom/mem-stores/P', 4000 Hz, Event count (approx.): 769079, [percent: local period] do_lookup_x() /usr/lib64/ld-linux-x86-64.so.2 Percent 0x9900 <do_lookup_x>: pushq %rbp movq %rsp,%rbp pushq %r15 pushq %r14 pushq %r13 pushq %r12 pushq %rbx subq $0x88,%rsp movq %rdi,-0x50(%rbp) movl 8(%r9),%edi movq 0x10(%rbp),%r12 movq 0x28(%rbp),%r10 movq %rdx,-0x70(%rbp) movq %rcx,-0x58(%rbp) movq %rdi,%r11 0.00 5.73 movq %r8,-0x68(%rbp) movq (%r9),%r8 movl %esi,%eax 8.30 0.00 movl 0x30(%rbp),%r9d movl %esi,%r15d shrl $6, %eax movq %r8,%r13 root@x1:~# Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240803211332.1107222-6-namhyung@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
This is a preparation to support skipping empty events.
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240803211332.1107222-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
annotation__pcnt_width() calculates the screen width of the overhead (percent) area, properly taking event groups into account. Use this function consistently so that the output is similar across the different modes. There is a difference between the stdio and TUI output, though: stdio uses 8 columns per percent value while TUI uses 7. Settle on 8 and adjust the print width in __annotation_line__write() accordingly.
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240803211332.1107222-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
We want to use it in different places, so make sure it is set properly in symbol__annotate() before creating the disasm lines.
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240803211332.1107222-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
The data_nr field keeps the number of entries in al->data[], so use it when iterating the array. notes->src->nr_events should hold the same number, but it is more natural to use al->data_nr here.
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240803211332.1107222-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Linus Torvalds authored
Merge tag 'slab-fixes-for-6.11-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab fix from Vlastimil Babka: "Since v6.8 we've had a subtle breakage in SLUB with KFENCE enabled, that can cause a crash. It hasn't been found earlier due to quite specific conditions necessary (OOM during kmem_cache_alloc_bulk())" * tag 'slab-fixes-for-6.11-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab: mm, slub: do not call do_slab_free for kfence object
-
Brian Norris authored
The dependencies in tools/lib/bpf/Makefile are incorrect. Before we recurse to build $(BPF_IN_STATIC), we need to build its 'fixdep' executable. I can't use the usual shortcut from Makefile.include: <target>: <sources> fixdep because its 'fixdep' target relies on $(OUTPUT), and $(OUTPUT) differs in the parent 'make' versus the child 'make' -- so I imitate it via open-coding. I tweak a few $(MAKE) invocations while I'm at it, because 1. I'm adding a new recursive make; and 2. these recursive 'make's print spurious lines about files that are "up to date" (which isn't normally a feature in Kbuild subtargets) or "jobserver not available" (see [1]) I also need to tweak the assignment of the OUTPUT variable, so that relative path builds work. For example, for 'make tools/lib/bpf', OUTPUT is unset, and is usually treated as "cwd" -- but recursive make will change cwd and so OUTPUT has a new meaning. For consistency, I ensure OUTPUT is always an absolute path. And $(Q) gets a backup definition in tools/build/Makefile.include, because Makefile.include is sometimes included without tools/build/Makefile, so the "quiet command" stuff doesn't actually work consistently without it. After this change, top-level builds result in an empty grep result from: $ grep 'cannot find fixdep' $(find tools/ -name '*.cmd') [1] https://www.gnu.org/software/make/manual/html_node/MAKE-Variable.html If we're not using $(MAKE) directly, then we need to use more '+'. Signed-off-by: Brian Norris <briannorris@chromium.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20240715203325.3832977-4-briannorris@chromium.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Brian Norris authored
The 'fixdep' tool is used to post-process dependency files for various reasons, and it runs after every object file generation command. This even includes 'fixdep' itself. In Kbuild, this isn't actually a problem, because it uses a single command to generate fixdep (a compile-and-link command on fixdep.c), and afterward runs the fixdep command on the accompanying .fixdep.cmd file. In tools/ builds (which notably is maintained separately from Kbuild), fixdep is generated in several phases: 1. fixdep.c -> fixdep-in.o 2. fixdep-in.o -> fixdep Thus, fixdep is not available in the post-processing for step 1, and instead, we generate .cmd files that look like: ## from tools/objtool/libsubcmd/.fixdep.o.cmd # cannot find fixdep (/path/to/linux/tools/objtool/libsubcmd//fixdep) [...] These invalid .cmd files are benign in some respects, but cause problems in others (such as the linked reports). Because the tools/ build system is rather complicated in its own right (and pointedly different than Kbuild), I choose to simply open-code the rule for building fixdep, and avoid the recursive-make indirection that produces the problem in the first place. Signed-off-by: Brian Norris <briannorris@chromium.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/all/Zk-C5Eg84yt6_nml@google.com/ Link: https://lore.kernel.org/r/20240715203325.3832977-3-briannorris@chromium.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Brian Norris authored
All built targets need fixdep to be built first, before handling object dependencies [1]. We're missing one such dependency before the libsubcmd target. This resolves .cmd file generation issues such that the following sequence produces many fewer results: $ git clean -xfd tools/ $ make tools/objtool $ grep "cannot find fixdep" $(find tools/objtool -name '*.cmd') In particular, only a buggy tools/objtool/libsubcmd/.fixdep.o.cmd remains, due to circular dependencies of fixdep on itself. Such incomplete .cmd files don't usually cause a direct problem, since they're designed to fail "open", but they can cause some subtle problems that would otherwise be handled by proper fixdep'd dependency files. [2] [1] This problem is better described in commit abb26210 ("perf tools: Force fixdep compilation at the start of the build"). I don't apply its solution here, because additional recursive make can be a bit of overkill. [2] Example failure case: cp -arl linux-src linux-src2 cd linux-src2 make O=/path/to/out cd ../linux-src rm -rf ../linux-src2 make O=/path/to/out Previously, we'd see errors like: make[6]: *** No rule to make target '/path/to/linux-src2/tools/include/linux/compiler.h', needed by '/path/to/out/tools/bpf/resolve_btfids/libsubcmd/exec-cmd.o'. Stop. Now, the properly-fixdep'd .cmd files will ignore a missing /path/to/linux-src2/... Signed-off-by: Brian Norris <briannorris@chromium.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/all/ZGVi9HbI43R5trN8@bhelgaas/ Link: https://lore.kernel.org/all/Zk-C5Eg84yt6_nml@google.com/ Link: https://lore.kernel.org/r/20240715203325.3832977-2-briannorris@chromium.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
Add a common options section and move some items there. Also add descriptions of the new options to the report options.
Suggested-by: Ian Rogers <irogers@google.com> Reviewed-by: Ian Rogers <irogers@google.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com>
Link: https://lore.kernel.org/lkml/20240802180913.1023886-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 04 Aug, 2024 3 commits
-
-
Linus Torvalds authored
-
Tetsuo Handa authored
The kernel sleep profile is no longer working due to a recursive locking bug introduced by commit 42a20f86 ("sched: Add wrapper for get_wchan() to keep task blocked") Booting with the 'profile=sleep' kernel command line option added or executing # echo -n sleep > /sys/kernel/profiling after boot causes the system to lock up. Lockdep reports kthreadd/3 is trying to acquire lock: ffff93ac82e08d58 (&p->pi_lock){....}-{2:2}, at: get_wchan+0x32/0x70 but task is already holding lock: ffff93ac82e08d58 (&p->pi_lock){....}-{2:2}, at: try_to_wake_up+0x53/0x370 with the call trace being lock_acquire+0xc8/0x2f0 get_wchan+0x32/0x70 __update_stats_enqueue_sleeper+0x151/0x430 enqueue_entity+0x4b0/0x520 enqueue_task_fair+0x92/0x6b0 ttwu_do_activate+0x73/0x140 try_to_wake_up+0x213/0x370 swake_up_locked+0x20/0x50 complete+0x2f/0x40 kthread+0xfb/0x180 However, since nobody noticed this regression for more than two years, let's remove 'profile=sleep' support based on the assumption that nobody needs this functionality. Fixes: 42a20f86 ("sched: Add wrapper for get_wchan() to keep task blocked") Cc: stable@vger.kernel.org # v5.16+ Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Merge tag 'x86-urgent-2024-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner: - Prevent a deadlock on cpu_hotplug_lock in the aperf/mperf driver. A recent change in the ACPI code which consolidated code pathes moved the invocation of init_freq_invariance_cppc() to be moved to a CPU hotplug handler. The first invocation on AMD CPUs ends up enabling a static branch which dead locks because the static branch enable tries to acquire cpu_hotplug_lock but that lock is already held write by the hotplug machinery. Use static_branch_enable_cpuslocked() instead and take the hotplug lock read for the Intel code path which is invoked from the architecture code outside of the CPU hotplug operations. - Fix the number of reserved bits in the sev_config structure bit field so that the bitfield does not exceed 64 bit. - Add missing Zen5 model numbers - Fix the alignment assumptions of pti_clone_pgtable() and clone_entry_text() on 32-bit: The code assumes PMD aligned code sections, but on 32-bit the kernel entry text is not PMD aligned. So depending on the code size and location, which is configuration and compiler dependent, entry text can cross a PMD boundary. As the start is not PMD aligned adding PMD size to the start address is larger than the end address which results in partially mapped entry code for user space. That causes endless recursion on the first entry from userspace (usually #PF). Cure this by aligning the start address in the addition so it ends up at the next PMD start address. clone_entry_text() enforces PMD mapping, but on 32-bit the tail might eventually be PTE mapped, which causes a map fail because the PMD for the tail is not a large page mapping. Use PTI_LEVEL_KERNEL_IMAGE for the clone() invocation which resolves to PTE on 32-bit and PMD on 64-bit. - Zero the 8-byte case for get_user() on range check failure on 32-bit The recend consolidation of the 8-byte get_user() case broke the zeroing in the failure case again. Establish it by clearing ECX before the range check and not afterwards as that obvioulsy can't be reached when the range check fails * tag 'x86-urgent-2024-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/uaccess: Zero the 8-byte get_range case on failure on 32-bit x86/mm: Fix pti_clone_entry_text() for i386 x86/mm: Fix pti_clone_pgtable() alignment assumption x86/setup: Parse the builtin command line before merging x86/CPU/AMD: Add models 0x60-0x6f to the Zen5 range x86/sev: Fix __reserved field in sev_config x86/aperfmperf: Fix deadlock on cpu_hotplug_lock
-