- 16 Feb, 2024 19 commits
-
-
Ian Rogers authored
Update sierraforest events to v1.01 released in: https://github.com/intel/perfmon/commit/582bca24aa0d742306cd4697c5bd1b1b529aa3ce Adds the majority of core and uncore events. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-11-irogers@google.com
-
Ian Rogers authored
Update alderlake events to v1.02 released in: https://github.com/intel/perfmon/commit/4931178d1ede1099a3e4ac7e04ed9f073e03d219 Improves documentation and removes TOPDOWN.BR_MISPREDICT_SLOTS. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-10-irogers@google.com
-
Ian Rogers authored
Update meteorlake events to v1.07 released in: https://github.com/intel/perfmon/commit/62517223080e46bfa9a905a1195c7febae7fdb3e Umask changed on atom mem_bound events. Adds atom events ARITH.FPDIV_ACTIVE, FP_FLOPS_RETIRED.ALL, FP_FLOPS_RETIRED.DP, FP_FLOPS_RETIRED.FP32, ARITH.DIV_ACTIVE, BR_INST_RETIRED.COND, BR_INST_RETIRED.COND_TAKEN, BR_INST_RETIRED.INDIRECT, BR_INST_RETIRED.INDIRECT_CALL, BR_INST_RETIRED.IND_CALL, BR_INST_RETIRED.NEAR_RETURN, DTLB_LOAD_MISSES.WALK_COMPLETED_4K, DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M, DTLB_STORE_MISSES.WALK_COMPLETED_4K, ITLB_MISSES.WALK_COMPLETED_4K, and alias events. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-9-irogers@google.com
-
Ian Rogers authored
Update icelake events to v1.21 released in: https://github.com/intel/perfmon/commit/54f1246b0496112c1d2b2a49e4859c85caa3dbf4 Improves descriptions, removes TOPDOWN.BR_MISPREDICT_SLOTS. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-8-irogers@google.com
-
Ian Rogers authored
Update haswell events to v35 released in: https://github.com/intel/perfmon/commit/c0f9b34d421941bc3e13c6ca5554e6a54e8bd574 Updates "must be precise" on RTM_RETIRED.ABORTED. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Cc: linux-perf-users@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-7-irogers@google.com
-
Ian Rogers authored
Update grandridge events to v1.01 released in: https://github.com/intel/perfmon/commit/211d60716509d8248e57450e434de98cc6e511d8 Adds the majority of core and uncore events. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-6-irogers@google.com
-
Ian Rogers authored
Update emeraldrapids events to v1.03 released in: https://github.com/intel/perfmon/commit/c7c6f72dae07fee35d5982232829c0cd37f9e28e Adds uncore CHA events. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-5-irogers@google.com
-
Ian Rogers authored
Update broadwell events to v29 released in: https://github.com/intel/perfmon/commit/47117146c6b9e38811618beca31eba4e41c3d874 Updates "must be precise" on RTM_RETIRED.ABORTED. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-4-irogers@google.com
-
Ian Rogers authored
Update alderlaken events to v1.24 released in: https://github.com/intel/perfmon/commit/e627dd8d89e2d2110f1d499608dd6f37aae37a8c Adds LBR_INSERTS.ANY/MISC_RETIRED.LBR_INSERTS event. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-3-irogers@google.com
-
Ian Rogers authored
Update alderlake events to v1.24 released in: https://github.com/intel/perfmon/commit/e627dd8d89e2d2110f1d499608dd6f37aae37a8c Adds aliased events, improves documentation and fixes some event fields. Event json automatically generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: Edward Baker <edward.baker@intel.com> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Samantha Alt <samantha.alt@intel.com> Cc: Weilin Wang <weilin.wang@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214011820.644458-2-irogers@google.com
-
Arnaldo Carvalho de Melo authored
If we instead decide to generate vmlinux.h from BTF info, it will be there: $ pahole timespec64 struct timespec64 { time64_t tv_sec; /* 0 8 */ long int tv_nsec; /* 8 8 */ /* size: 16, cachelines: 1, members: 2 */ /* last cacheline: 16 bytes */ }; $ pahole manages to find it from /sys/kernel/btf/vmlinux, which is generated from the kernel types. With this, linux/bpf.h doesn't need to be included: what we need is either already in the minimalistic tools/perf/util/bpf_skel/vmlinux/vmlinux.h file, or it comes when generating a vmlinux.h file from BTF info, i.e. when using GEN_VMLINUX_H=1, as noticed by Namhyung in a build break before removing linux/bpf.h. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/Zc_fp6CgDClPhS_O@x1
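For reference, a minimal hand-written vmlinux.h entry matching the pahole layout shown above could look like the sketch below; this is an illustration only, and the time64_t typedef is an assumption added just to keep the snippet self-contained.

```c
/* Sketch of a minimal vmlinux.h-style definition for timespec64, mirroring
 * the pahole output above: an 8-byte tv_sec at offset 0 and an 8-byte
 * tv_nsec at offset 8, 16 bytes total. */
typedef long long time64_t;   /* assumed typedef for self-containment */

struct timespec64 {
	time64_t tv_sec;      /* seconds */
	long int tv_nsec;     /* nanoseconds */
};
```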
-
Michael Petlan authored
Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-8-mpetlan@redhat.com
-
Veronika Molnarova authored
Test perf interface to kprobes: listing, adding and removing probes. It is run as a part of perftool-testsuite_probe test case. Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-7-mpetlan@redhat.com
-
Veronika Molnarova authored
As a form of validation, it is common practice to check whether command output contains expected patterns or matches a certain regex. Add helpers for verifying that all regexes are found in the output, that all lines match a pattern from a set, and that a certain expression is not present in the output. In verbose mode these helpers log mismatches for easier failure investigation. Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-6-mpetlan@redhat.com
-
Veronika Molnarova authored
Add a new perf probe test case that acts as an entry element in the perf test list. It runs multiple subtests from the "base_probe" directory, which will be added in upcoming patches and can be expanded without further editing. Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-5-mpetlan@redhat.com
-
Veronika Molnarova authored
Initialize reporting and logging functions that unify the formatting of test output used by shell tests. Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-4-mpetlan@redhat.com
-
Veronika Molnarova authored
Add settings defining sample commands later shared by shell tests. This adds the possibility to globally adjust the default values for the whole testsuite. Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-3-mpetlan@redhat.com
-
Veronika Molnarova authored
Unify the perf regexes used for checking test output into a single file to reduce duplication and prevent errors when editing. This will be used by shell tests in upcoming patches. Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> Cc: kjain@linux.ibm.com Cc: atrajeev@linux.vnet.ibm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240215110231.15385-2-mpetlan@redhat.com
-
Adrian Hunter authored
The test needs a struct machine and creates one for the current host, but a side-effect is that struct machine has set up kernel maps including module maps. If the 'Symbols' test --dso option specifies a current kernel module, it will already be present as a kernel dso, and a map with kmaps needs to be used otherwise there will be a segfault - see below. For that case, find the existing map and use that. In that case also, the dso is split by section into multiple dsos, so test those dsos also. That in turn, shows up that those dsos have not had overlapping symbols removed, so the test fails. Example: Before: $ perf test -F -v Symbols --dso /lib/modules/$(uname -r)/kernel/arch/x86/kvm/kvm-intel.ko 70: Symbols : --- start --- Testing /lib/modules/6.7.2-local/kernel/arch/x86/kvm/kvm-intel.ko Segmentation fault (core dumped) After: $ perf test -F -v Symbols --dso /lib/modules/$(uname -r)/kernel/arch/x86/kvm/kvm-intel.ko 70: Symbols : --- start --- Testing /lib/modules/6.7.2-local/kernel/arch/x86/kvm/kvm-intel.ko Overlapping symbols: 41d30-41fbb l vmx_init 41d30-41fbb g init_module ---- end ---- Symbols: FAILED! Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Ian Rogers <irogers@google.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240131192416.16387-1-adrian.hunter@intel.com
-
- 15 Feb, 2024 4 commits
-
-
Leo Yan authored
The goal is to allow the tool to always enable the perf register feature for both native parsing and cross parsing, and the current code doesn't depend on the macro 'HAVE_PERF_REGS_SUPPORT'. This patch removes the variable 'NO_PERF_REGS' and the macro definition 'HAVE_PERF_REGS_SUPPORT' from the Makefile. Signed-off-by: Leo Yan <leo.yan@linux.dev> Reviewed-by: Ian Rogers <irogers@google.com> Cc: James Clark <james.clark@arm.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Guo Ren <guoren@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Ming Wang <wangming01@loongson.cn> Cc: John Garry <john.g.garry@oracle.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: linux-csky@vger.kernel.org Cc: linux-riscv@lists.infradead.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214113947.240957-5-leo.yan@linux.dev
-
Leo Yan authored
Every architecture can provide a register list for sampling. If an architecture doesn't support register sampling, it won't define the data structure 'sample_reg_masks'. Consequently, any code using this structure must be protected by the macro 'HAVE_PERF_REGS_SUPPORT'. This patch defines a weak function, arch__sample_reg_masks(), which will be replaced by an architecture-defined function that returns the architecture's register list. With this refactoring the function always exists, so the conditional check on 'HAVE_PERF_REGS_SUPPORT' is no longer needed; remove it. Signed-off-by: Leo Yan <leo.yan@linux.dev> Reviewed-by: Ian Rogers <irogers@google.com> Cc: James Clark <james.clark@arm.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Guo Ren <guoren@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Ming Wang <wangming01@loongson.cn> Cc: John Garry <john.g.garry@oracle.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: linux-csky@vger.kernel.org Cc: linux-riscv@lists.infradead.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214113947.240957-4-leo.yan@linux.dev
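A generic sketch of the weak-symbol technique being described follows; the struct layout and the exact signature of arch__sample_reg_masks() here are assumptions for illustration and may differ from the real perf definitions.

```c
/* Assumed shape of a register-mask entry; illustration only. */
struct sample_reg {
	const char *name;
	unsigned long long mask;
};

/* Weak default: architectures without register sampling get an empty,
 * NULL-terminated list, so callers never need an
 * #ifdef HAVE_PERF_REGS_SUPPORT guard around the call. */
__attribute__((weak)) const struct sample_reg *arch__sample_reg_masks(void)
{
	static const struct sample_reg empty = { .name = 0, .mask = 0 };
	return &empty;
}

/* An architecture overrides the default simply by providing a strong
 * definition with the same name in its arch-specific perf_regs.c file. */
```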
-
Leo Yan authored
Currently, the macro HAVE_PERF_REGS_SUPPORT is used as a switch to turn on or turn off the code of perf registers. If any architecture cannot support perf register, it disables the perf register parsing, for both the native parsing and cross parsing for other architectures. To support both the native parsing and cross parsing, the tool should always build the perf regs functions. Thus, this patch removes HAVE_PERF_REGS_SUPPORT from the perf regs files. Signed-off-by: Leo Yan <leo.yan@linux.dev> Reviewed-by: Ian Rogers <irogers@google.com> Cc: James Clark <james.clark@arm.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Guo Ren <guoren@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Ming Wang <wangming01@loongson.cn> Cc: John Garry <john.g.garry@oracle.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: linux-csky@vger.kernel.org Cc: linux-riscv@lists.infradead.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214113947.240957-3-leo.yan@linux.dev
-
Leo Yan authored
CONFIG_PERF_REGS is not used, remove it. Signed-off-by: Leo Yan <leo.yan@linux.dev> Reviewed-by: Ian Rogers <irogers@google.com> Cc: James Clark <james.clark@arm.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Guo Ren <guoren@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Ming Wang <wangming01@loongson.cn> Cc: John Garry <john.g.garry@oracle.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: linux-csky@vger.kernel.org Cc: linux-riscv@lists.infradead.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240214113947.240957-2-leo.yan@linux.dev
-
- 13 Feb, 2024 4 commits
-
-
Ian Rogers authored
Counts were switched from the scaled saved value form to the aggregated count to avoid double accounting. When this happened, the code that removed scaling from a count should also have been removed; however, it wasn't, and this went unnoticed because it normally doesn't matter, as a counter's scale is usually 1. A problem was observed with RAPL events, which are scaled. Fixes: 37cc8ad7 ("perf metric: Directly use counts rather than saved_value") Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Kaige Ye <ye@kaige.org> Cc: John Garry <john.g.garry@oracle.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240209204947.3873294-5-irogers@google.com
-
Ian Rogers authored
Cycles is recognized as part of a hard-coded metric in stat-shadow.c; it may call print_metric_only with a NULL fmt string, leading to a segfault. Handle the NULL fmt explicitly. Fixes: 088519f3 ("perf stat: Move the display functions to stat-display.c") Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Kaige Ye <ye@kaige.org> Cc: John Garry <john.g.garry@oracle.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240209204947.3873294-4-irogers@google.com
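A minimal sketch of handling a NULL fmt explicitly; the function shape and the fallback format string below are illustrative assumptions, not the actual stat-display.c code.

```c
#include <stdio.h>

/* Print a metric value; tolerate a NULL format instead of crashing. */
static void print_metric_value(const char *fmt, const char *unit, double val)
{
	if (fmt == NULL)
		fmt = "%8.2f";          /* assumed fallback format */
	printf(fmt, val);               /* safe: fmt is never NULL here */
	printf(" %s\n", unit ? unit : "");
}
```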
-
Ian Rogers authored
Events in metrics cannot use '/' as a separator because it would be recognized as a divide, so they use '@' instead. The '@' is recognized in the metricgroups code and changed to '/'; do the same in the has_event function so that parsing is only attempted without the '@'s. Fixes: 4a4a9bf9 ("perf expr: Add has_event function") Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Kaige Ye <ye@kaige.org> Cc: John Garry <john.g.garry@oracle.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240209204947.3873294-3-irogers@google.com
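The rewrite itself is simple string processing; here is a self-contained sketch of the idea with a hypothetical helper name, not the actual has_event() code.

```c
#include <stdlib.h>
#include <string.h>

/* Return a copy of an event name with '@' separators rewritten to '/',
 * so it can be handed to the regular event parser. Caller frees it. */
static char *event_name_without_ats(const char *name)
{
	char *copy = strdup(name);

	if (copy) {
		for (char *p = copy; *p; p++) {
			if (*p == '@')
				*p = '/';
		}
	}
	return copy;
}
```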
-
Ian Rogers authored
Currently only floating point numbers can be parsed; add a special case for NaN. Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Kaige Ye <ye@kaige.org> Cc: John Garry <john.g.garry@oracle.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240209204947.3873294-2-irogers@google.com
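Conceptually the special case could look like the sketch below; this is illustrative only, the real change lives in the perf expr lexer/parser.

```c
#include <math.h>
#include <stdlib.h>
#include <strings.h>

/* Parse a metric literal: accept "NaN" (any case) in addition to
 * ordinary floating point numbers. */
static double parse_number(const char *tok)
{
	if (strncasecmp(tok, "NaN", 3) == 0)
		return NAN;
	return strtod(tok, NULL);
}
```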
-
- 12 Feb, 2024 7 commits
-
-
Ian Rogers authored
After this change maps__nr_maps is only used by tests, existing users are migrated to maps__empty. Compute maps__empty under the read lock. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Song Liu <song@kernel.org> Cc: Leo Yan <leo.yan@linux.dev> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Artem Savkov <asavkov@redhat.com> Cc: bpf@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240210031746.4057262-7-irogers@google.com
-
Ian Rogers authored
Move the struct into the C file. Add maps__equal to work around exposing the struct for reference count checking. Add accessors for the unwind_libunwind_ops. Move maps_list_node to its only use in symbol.c. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Song Liu <song@kernel.org> Cc: Leo Yan <leo.yan@linux.dev> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Artem Savkov <asavkov@redhat.com> Cc: bpf@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240210031746.4057262-6-irogers@google.com
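Hiding a struct behind its C file follows the usual opaque-pointer idiom; a generic sketch is shown below, where the member shown and the accessor body are illustrative rather than the exact maps API.

```c
/* In the header: only a forward declaration and accessors are visible,
 * so other compilation units cannot poke at the layout directly. */
struct maps;                                  /* layout is private to maps.c */
int maps__nr_maps(const struct maps *maps);   /* accessor, assumed signature */

/* In the C file: the full definition plus the accessor implementation. */
struct maps {
	int nr_maps;
	/* ... other private members ... */
};

int maps__nr_maps(const struct maps *maps)
{
	return maps->nr_maps;
}
```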
-
Ian Rogers authored
Finding a map is done under a lock, returning the map without a reference count means it can be removed without notice and causing uses after free. Grab a reference count to the map within the lock region and return this. Fix up locations that need a map__put following this. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Song Liu <song@kernel.org> Cc: Leo Yan <leo.yan@linux.dev> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Artem Savkov <asavkov@redhat.com> Cc: bpf@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240210031746.4057262-5-irogers@google.com
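The pattern described is: take the reference while still holding the lock, return the referenced object, and make the caller responsible for the put. Below is a self-contained sketch using generic names, not perf's actual maps/map API.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct object {
	atomic_int refcnt;
	unsigned long long start, end;
	struct object *next;
};

struct collection {
	pthread_rwlock_t lock;
	struct object *head;
};

/* Look up an object by address and return it with a reference already
 * held, so it cannot be freed between the lookup and the caller's use. */
static struct object *collection_find(struct collection *c, unsigned long long addr)
{
	struct object *found = NULL;

	pthread_rwlock_rdlock(&c->lock);
	for (struct object *o = c->head; o; o = o->next) {
		if (addr >= o->start && addr < o->end) {
			atomic_fetch_add(&o->refcnt, 1); /* ref taken under the lock */
			found = o;
			break;
		}
	}
	pthread_rwlock_unlock(&c->lock);
	return found; /* caller must drop the reference (the "put") when done */
}
```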
-
Ian Rogers authored
Finding a map is done under a lock, returning the map without a reference count means it can be removed without notice and causing uses after free. Grab a reference count to the map within the lock region and return this. Fix up locations that need a map__put following this. Also fix some reference counted pointer comparisons. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Song Liu <song@kernel.org> Cc: Leo Yan <leo.yan@linux.dev> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Artem Savkov <asavkov@redhat.com> Cc: bpf@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240210031746.4057262-4-irogers@google.com
-
Ian Rogers authored
Finding a map is done under a lock, returning the map without a reference count means it can be removed without notice and causing uses after free. Grab a reference count to the map within the lock region and return this. Fix up locations that need a map__put following this. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Song Liu <song@kernel.org> Cc: Leo Yan <leo.yan@linux.dev> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Artem Savkov <asavkov@redhat.com> Cc: bpf@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240210031746.4057262-3-irogers@google.com
-
Ian Rogers authored
Maps is a collection of maps primarily sorted by the starting address of the map. Prior to this change the maps were held in an rbtree requiring 4 pointers per node. Prior to reference count checking, the rbnode was embedded in the map so 3 pointers per node were necessary. This change switches the rbtree to an array lazily sorted by address, much like the array that sorts nodes by name. 1 pointer is needed per node, but to avoid excessive resizing the backing array may be twice the number of used elements. This means the memory overhead is roughly half that of the rbtree. For a perf record with "--no-bpf-event -g -a" of true, the memory overhead of perf inject is reduced from 3.3MB to 3MB, so 10% or 300KB is saved. Map inserts always happen at the end of the array. The code tracks whether the insertion violates the sorting property. O(log n) rb-tree complexity is switched to O(1). A remove slides the array, so O(log n) rb-tree complexity is degraded to O(n). A find may need to sort the array using qsort, which is O(n*log n), but in general the maps should be sorted and so average performance should be O(log n) as with the rbtree. An rbtree node consumes a cache line, but with the array 4 nodes fit on a cache line. Iteration is simplified to scanning an array rather than pointer chasing. Overall it is expected that the performance after the change should be comparable to before, but with half of the memory consumed. To avoid a list and repeated logic around splitting maps, maps__merge_in is rewritten in terms of maps__fixup_overlap_and_insert. maps__merge_in splits the given mapping, inserting the remaining gaps. maps__fixup_overlap_and_insert splits the existing mappings, then adds the incoming mapping. By adding the new mapping first and then re-inserting the existing mappings, the splitting behavior matches. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Changbin Du <changbin.du@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Song Liu <song@kernel.org> Cc: Leo Yan <leo.yan@linux.dev> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Artem Savkov <asavkov@redhat.com> Cc: bpf@vger.kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240210031746.4057262-2-irogers@google.com
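A compact, generic illustration of the lazily-sorted-array idea (not the actual maps implementation): inserts append in amortized O(1) and may clear a sorted flag, and a find sorts on demand before binary searching.

```c
#include <stdbool.h>
#include <stdlib.h>

struct entry { unsigned long long start; /* ... payload ... */ };

struct table {
	struct entry **entries; /* one pointer per node, versus 3-4 for an rbtree */
	size_t nr, cap;
	bool sorted;
};

static int cmp_entry(const void *a, const void *b)
{
	const struct entry *ea = *(const struct entry * const *)a;
	const struct entry *eb = *(const struct entry * const *)b;
	return (ea->start > eb->start) - (ea->start < eb->start);
}

/* Amortized O(1) insert: append at the end, note if ordering was broken. */
static bool table_insert(struct table *t, struct entry *e)
{
	if (t->nr == t->cap) {
		size_t new_cap = t->cap ? t->cap * 2 : 4;
		struct entry **tmp = realloc(t->entries, new_cap * sizeof(*tmp));
		if (!tmp)
			return false;
		t->entries = tmp;
		t->cap = new_cap;
	}
	if (t->nr && t->entries[t->nr - 1]->start > e->start)
		t->sorted = false;
	t->entries[t->nr++] = e;
	return true;
}

/* Sort lazily (O(n log n)) only when needed, then find in O(log n). */
static struct entry *table_find(struct table *t, unsigned long long start)
{
	struct entry key = { .start = start }, *keyp = &key;
	struct entry **res;

	if (!t->sorted) {
		qsort(t->entries, t->nr, sizeof(*t->entries), cmp_entry);
		t->sorted = true;
	}
	res = bsearch(&keyp, t->entries, t->nr, sizeof(*t->entries), cmp_entry);
	return res ? *res : NULL;
}
```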
-
Namhyung Kim authored
To get some fixes in the perf test and JSON metrics into the development branch. Signed-off-by: Namhyung Kim <namhyung@kernel.org>
-
- 10 Feb, 2024 1 commit
-
-
Ian Rogers authored
The child_process for addr2line sets in and out to -1 so that pipes get created. It is the caller's responsibility to close the pipes; finish_command doesn't do it. Add the missing closes. Fixes: b3801e79 ("perf srcline: Simplify addr2line subprocess") Signed-off-by: Ian Rogers <irogers@google.com> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: James Clark <james.clark@arm.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: John Garry <john.g.garry@oracle.com> Cc: Tom Rix <trix@redhat.com> Cc: llvm@lists.linux.dev Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240201001504.1348511-8-irogers@google.com
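The underlying hazard is generic: waiting for a child does not close the parent's ends of pipes the parent created. A self-contained sketch of the pattern follows (plain fork/pipe code, not the perf child_process/finish_command API).

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a filter-style child with a pipe to its stdin and from its stdout.
 * Waiting for the child does not release the parent's descriptors; the
 * parent must close them itself or it leaks two fds per subprocess. */
static int run_filter_child(char *const argv[])
{
	int to_child[2], from_child[2], status;
	pid_t pid;

	if (pipe(to_child) < 0 || pipe(from_child) < 0)
		return -1;

	pid = fork();
	if (pid < 0)
		return -1;
	if (pid == 0) {
		dup2(to_child[0], STDIN_FILENO);
		dup2(from_child[1], STDOUT_FILENO);
		close(to_child[0]); close(to_child[1]);
		close(from_child[0]); close(from_child[1]);
		execvp(argv[0], argv);
		_exit(127);
	}

	/* Parent keeps to_child[1] / from_child[0]; close the child's ends. */
	close(to_child[0]);
	close(from_child[1]);

	/* ... write requests to to_child[1], read replies from from_child[0] ... */

	waitpid(pid, &status, 0);

	/* The easily-missed step: close the parent's pipe ends as well. */
	close(to_child[1]);
	close(from_child[0]);
	return status;
}
```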
-
- 09 Feb, 2024 5 commits
-
-
Yicong Yang authored
Some platforms have a 'cluster' topology and CPUs in the cluster will share resources like the L3 Cache Tag (for HiSilicon Kunpeng SoC) or the L2 cache (for Intel Jacobsville). Parsing and building the cluster topology has been supported since [1]. perf stat already supports aggregation for other topologies like die or socket, etc. It'll be useful to aggregate per-cluster to find problems like L3T bandwidth contention. This patch adds support for a "--per-cluster" option for per-cluster aggregation. Also update the docs and related test. The output will be like: [root@localhost tmp]# perf stat -a -e LLC-load --per-cluster -- sleep 5 Performance counter stats for 'system wide': S56-D0-CLS158 4 1,321,521,570 LLC-load S56-D0-CLS594 4 794,211,453 LLC-load S56-D0-CLS1030 4 41,623 LLC-load S56-D0-CLS1466 4 41,646 LLC-load S56-D0-CLS1902 4 16,863 LLC-load S56-D0-CLS2338 4 15,721 LLC-load S56-D0-CLS2774 4 22,671 LLC-load [...] On a legacy system without clusters or cluster support, the output will look like: [root@localhost perf]# perf stat -a -e cycles --per-cluster -- sleep 1 Performance counter stats for 'system wide': S56-D0-CLS0 64 18,011,485 cycles S7182-D0-CLS0 64 16,548,835 cycles Note that this patch doesn't mix the cluster information into the outputs of --per-core to avoid breaking any tools/scripts using it. Note that perf recently supports "--per-cache" aggregation, but it's not the same as the cluster, although cluster CPUs may share some cache resources. For example, on my machine all clusters within a die share the same L3 cache: $ cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list 0-31 $ cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list 0-3 [1] commit c5e22fef ("topology: Represent clusters of CPUs within a die") Tested-by: Jie Zhan <zhanjie9@hisilicon.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Reviewed-by: Ian Rogers <irogers@google.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Cc: james.clark@arm.com Cc: 21cnbao@gmail.com Cc: prime.zeng@hisilicon.com Cc: Jonathan.Cameron@huawei.com Cc: fanghao11@huawei.com Cc: linuxarm@huawei.com Cc: tim.c.chen@intel.com Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240208024026.2691-1-yangyicong@huawei.com
-
Namhyung Kim authored
When it converts a sample IP to or from an objdump-capable one, there's a comment saying that kernel modules have DSO_SPACE__USER. But commit 02213cec ("perf maps: Mark module DSOs with kernel type") changed that, making the comment confusing. Let's get rid of it. Signed-off-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20240208181025.1329645-1-namhyung@kernel.org
-
Yang Jihong authored
slist needs to be freed on both the error path and the normal path in thread_map__new_by_tid_str(). Fixes: b52956c9 ("perf tools: Allow multiple threads or processes in record, stat, top") Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240206083228.172607-6-yangjihong1@huawei.com
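A single exit label is the usual way to guarantee a temporary list is freed on every path; here is a generic sketch of that pattern (hypothetical helper, not the actual thread_map__new_by_tid_str() code).

```c
#include <stdlib.h>
#include <string.h>

/* Build an int array from a comma-separated tid string; the temporary
 * split buffer is freed on both the success and the error path via one
 * exit label. Returns NULL on failure. */
static int *tids_from_str(const char *str, int *nr_out)
{
	char *copy = strdup(str);
	int *tids = NULL, nr = 0;

	if (!copy)
		return NULL;

	for (char *tok = strtok(copy, ","); tok; tok = strtok(NULL, ",")) {
		int *tmp = realloc(tids, (nr + 1) * sizeof(*tids));
		if (!tmp) {
			free(tids);
			tids = NULL;   /* error path */
			goto out;
		}
		tids = tmp;
		tids[nr++] = atoi(tok);
	}
	*nr_out = nr;                  /* normal path */
out:
	free(copy);                    /* freed on every path */
	return tids;
}
```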
-
Yang Jihong authored
The curr_pid and cpu_last_switched are used only for the 'perf sched replay/latency/map'. Put their initialization in perf_sched__{lat|map|replay () to reduce unnecessary actions in other commands. Simple functional testing: # perf sched record perf bench sched messaging # Running 'sched/messaging' benchmark: # 20 sender and receiver processes per group # 10 groups == 400 processes run Total time: 0.209 [sec] [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 16.456 MB perf.data (147907 samples) ] # perf sched lat ------------------------------------------------------------------------------------------------------------------------------------------- Task | Runtime ms | Switches | Avg delay ms | Max delay ms | Max delay start | Max delay end | ------------------------------------------------------------------------------------------------------------------------------------------- sched-messaging:(401) | 2990.699 ms | 38705 | avg: 0.661 ms | max: 67.046 ms | max start: 456532.624830 s | max end: 456532.691876 s qemu-system-x86:(7) | 179.764 ms | 2191 | avg: 0.152 ms | max: 21.857 ms | max start: 456532.576434 s | max end: 456532.598291 s sshd:48125 | 0.522 ms | 2 | avg: 0.037 ms | max: 0.046 ms | max start: 456532.514610 s | max end: 456532.514656 s <SNIP> ksoftirqd/11:82 | 0.063 ms | 1 | avg: 0.005 ms | max: 0.005 ms | max start: 456532.769366 s | max end: 456532.769371 s kworker/9:0-mm_:34624 | 0.233 ms | 20 | avg: 0.004 ms | max: 0.007 ms | max start: 456532.690804 s | max end: 456532.690812 s migration/13:93 | 0.000 ms | 1 | avg: 0.004 ms | max: 0.004 ms | max start: 456532.512669 s | max end: 456532.512674 s ----------------------------------------------------------------------------------------------------------------- TOTAL: | 3180.750 ms | 41368 | --------------------------------------------------- # echo $? 0 # perf sched map *A0 456532.510141 secs A0 => migration/0:15 *. 456532.510171 secs . => swapper:0 . *B0 456532.510261 secs B0 => migration/1:21 . *. 456532.510279 secs <SNIP> L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 *L7 . . . . 456532.785979 secs L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 *L7 . . . 456532.786054 secs L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 *L7 . . 456532.786127 secs L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 *L7 . 456532.786197 secs L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 L7 *L7 456532.786270 secs # echo $? 
0 # perf sched replay run measurement overhead: 108 nsecs sleep measurement overhead: 66473 nsecs the run test took 1000002 nsecs the sleep test took 1082686 nsecs nr_run_events: 49334 nr_sleep_events: 50054 nr_wakeup_events: 34701 target-less wakeups: 165 multi-target wakeups: 766 task 0 ( swapper: 0), nr_events: 15419 task 1 ( swapper: 1), nr_events: 1 task 2 ( swapper: 2), nr_events: 1 <SNIP> task 715 ( sched-messaging: 110248), nr_events: 1438 task 716 ( sched-messaging: 110249), nr_events: 512 task 717 ( sched-messaging: 110250), nr_events: 500 task 718 ( sched-messaging: 110251), nr_events: 537 task 719 ( sched-messaging: 110252), nr_events: 823 ------------------------------------------------------------ #1 : 1325.288, ravg: 1325.29, cpu: 7823.35 / 7823.35 #2 : 1363.606, ravg: 1329.12, cpu: 7655.53 / 7806.56 #3 : 1349.494, ravg: 1331.16, cpu: 7544.80 / 7780.39 #4 : 1311.488, ravg: 1329.19, cpu: 7495.13 / 7751.86 #5 : 1309.902, ravg: 1327.26, cpu: 7266.65 / 7703.34 #6 : 1309.535, ravg: 1325.49, cpu: 7843.86 / 7717.39 #7 : 1316.482, ravg: 1324.59, cpu: 7854.41 / 7731.09 #8 : 1366.604, ravg: 1328.79, cpu: 7955.81 / 7753.57 #9 : 1326.286, ravg: 1328.54, cpu: 7466.86 / 7724.90 #10 : 1356.653, ravg: 1331.35, cpu: 7566.60 / 7709.07 # echo $? 0 Signed-off-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240206083228.172607-5-yangjihong1@huawei.com
-
Yang Jihong authored
The curr_thread is used only for the 'perf sched map'. Put initialization in perf_sched__map() to reduce unnecessary actions in other commands. Simple functional testing: # perf sched record perf bench sched messaging # Running 'sched/messaging' benchmark: # 20 sender and receiver processes per group # 10 groups == 400 processes run Total time: 0.197 [sec] [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 15.526 MB perf.data (140095 samples) ] # perf sched map *A0 451264.532445 secs A0 => migration/0:15 *. 451264.532468 secs . => swapper:0 . *B0 451264.532537 secs B0 => migration/1:21 . *. 451264.532560 secs . . *C0 451264.532644 secs C0 => migration/2:27 . . *. 451264.532668 secs . . . *D0 451264.532753 secs D0 => migration/3:33 . . . *. 451264.532778 secs . . . . *E0 451264.532861 secs E0 => migration/4:39 . . . . *. 451264.532886 secs . . . . . *F0 451264.532973 secs F0 => migration/5:45 <SNIP> A7 A7 A7 A7 A7 *A7 . . . . . . . . . . 451264.790785 secs A7 A7 A7 A7 A7 A7 *A7 . . . . . . . . . 451264.790858 secs A7 A7 A7 A7 A7 A7 A7 *A7 . . . . . . . . 451264.790934 secs A7 A7 A7 A7 A7 A7 A7 A7 *A7 . . . . . . . 451264.791004 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 . . . . . . 451264.791075 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 . . . . . 451264.791143 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 . . . . 451264.791232 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 . . . 451264.791336 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 . . 451264.791407 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 . 451264.791484 secs A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 A7 *A7 451264.791553 secs # echo $? 0 Signed-off-by: Yang Jihong <yangjihong1@huawei.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20240206083228.172607-4-yangjihong1@huawei.com
-