Commit 56b2147f authored by Ingo Molnar

Merge tag 'perf-core-for-mingo-5.5-20191107' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

perf report:

  Jin Yao:

  - Introduce --total-cycles, for basic block profiling, making further use of
    the data obtained from LBR; an example should suffice:

      # perf record -b
      ^C[ perf record: Woken up 595 times to write data ]
      [ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]

      # perf evlist -v
      cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY

      # perf report --total-cycles --stdio
      # To display the perf.data header info, please use --header/--header-only options.
      #
      # Total Lost Samples: 0
      #
      # Samples: 6M of event 'cycles'
      # Event count (approx.): 6299936
      #
      # Sampled  Sampled   Avg     Avg
      # Cycles%  Cycles  Cycles%  Cycles                 [Program Block Range]     Shared Object
      # .......  ......  .......  .....   ....................................  ................
      #
         2.17%     1.7M   0.08%     607       [compiler.h:199 -> common.c:221]  [kernel.vmlinux]
         0.72%   544.5K   0.03%     230     [entry_64.S:657 -> entry_64.S:662]  [kernel.vmlinux]
         0.56%   541.8K   0.09%     672       [compiler.h:199 -> common.c:300]  [kernel.vmlinux]
         0.39%   293.2K   0.01%     104   [list_debug.c:43 -> list_debug.c:61]  [kernel.vmlinux]
         0.36%   278.6K   0.03%     272   [entry_64.S:1289 -> entry_64.S:1308]  [kernel.vmlinux]
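
      Per the new perf-report documentation further down, 'Sampled Cycles%'
      is the block's aggregated sampled cycles over all sampled cycles, and
      'Avg Cycles' its average sampled cycles. So the first row reads: the
      [compiler.h:199 -> common.c:221] block is the globally hottest one,
      with ~1.7M sampled cycles (2.17% of the total) and an average of 607
      cycles per occurrence of the block.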

perf record:

  Adrian Hunter:

  - Allow storing perf.data in a directory together with a copy of /proc/kcore.
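
    E.g. (the resulting layout is documented in the new perf.data directory
    format text further down):

      # perf record --kcore uname
      $ tree perf.data
      perf.data
      ├── data
      └── kcore_dir
          ├── kallsyms
          ├── kcore
          └── modules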

  Jiwei Sun:

  - Add support for limiting the perf output file size, e.g.:

    # perf record --all-cpus -F 10000 --max-size=4M sleep 10h
    [ perf record: perf size limit reached (4097 KB), stopping session ]
    [ perf record: Woken up 6 times to write data ]
    [ perf record: Captured and wrote 4.048 MB perf.data (54094 samples) ]
    Terminated
    # ls -lah perf.data
    -rw-------. 1 root root 4.1M Nov  7 15:27 perf.data
    #

perf stat:

  Jiri Olsa:

  - Add --per-node aggregation support:

    In live mode:

      # perf stat  -a -I 1000 -e cycles --per-node
      #           time node   cpus             counts unit events
           1.000542550 N0       20          6,202,097      cycles
           1.000542550 N1       20            639,559      cycles
           2.002040063 N0       20          7,412,495      cycles
           2.002040063 N1       20          2,185,577      cycles
           3.003451699 N0       20          6,508,917      cycles
           3.003451699 N1       20            765,607      cycles
      ...

    Or in the record/report stat session:

      # perf stat record -a -I 1000 -e cycles
      #           time             counts unit events
           1.000536937         10,008,468      cycles
           2.002090152          9,578,539      cycles
           3.003625233          7,647,869      cycles
           4.005135036          7,032,086      cycles
      ^C     4.340902364          3,923,893      cycles

      # perf stat report --per-node
      #           time node   cpus             counts unit events
           1.000536937 N0       20          9,355,086      cycles
           1.000536937 N1       20            653,382      cycles
           2.002090152 N0       20          7,712,838      cycles
           2.002090152 N1       20          1,865,701      cycles
       ...

perf probe:

  Masami Hiramatsu:

  Various fixes related to recent additions to the DWARF format:

  - Fix to find range-only function instance

  - Walk function lines in lexical blocks

  - Fix to show function entry line as probe-able

  - Fix wrong address verification

  - Fix to probe a function which has no entry pc

  - Fix to probe an inline function which has no entry pc

  - Fix to list probe event with correct line number

  - Fix to show inlined function callsite without entry_pc

  - Fix to show ranges of variables in functions without entry_pc

  - Return a better scope DIE if there is no best scope

  - Skip end-of-sequence and non statement lines

  - Filter out instances except for inlined subroutine and subprogram

  - Fix to show calling lines of inlined functions

  - Skip overlapped location on searching variables
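
  (These DWARF paths are exercised by the usual perf probe workflows, e.g.:

     # perf probe -L vfs_read      # list probe-able lines
     # perf probe -V vfs_read      # show variables accessible at the probe point
     # perf probe --add vfs_read

   with vfs_read just an example function.)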

perf inject:

  Adrian Hunter:

  - Do not strip evsels with --strip, as they are needed for create_gcov
    (see the autofdo example in tools/perf/Documentation/intel-pt.txt).
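
    That flow is roughly (per that document; the program name is illustrative):

      $ perf record -e intel_pt//u ./program
      $ perf inject -i perf.data -o inj --itrace=i100usle --strip
      $ create_gcov --binary=./program --profile=inj \
            --gcov=program.gcov -gcov_version=1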

Intel PT:

  Adrian Hunter:

  - Intel PT uses an auxtrace_cache to store the results of code-walking, to avoid
    repeated decoding. Add an auxtrace_cache__remove to handle text poke events.
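
    A minimal sketch of the intended use (the key derivation here is
    illustrative, not the actual Intel PT code):

      static void invalidate_cached_decode(struct auxtrace_cache *c, u64 offset)
      {
              u32 key = offset >> 6;  /* illustrative cache key */

              /* evict the stale entry so the code gets re-decoded */
              auxtrace_cache__remove(c, key);
      }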

core:

  Andi Kleen:

  - Always preserve errno while cleaning up perf_event_open failures.
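
    I.e., the usual save/restore pattern (a sketch, not the exact code):

      old_errno = errno;      /* sys_perf_event_open() just failed */
      /* error paths may call close(2) etc., clobbering errno */
      perf_evsel__close(evsel);
      errno = old_errno;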

llvm:

  Arnaldo Carvalho de Melo:

  - No need to report that the request for saving a .o file for BPF events,
    as expressed in ~/.perfconfig, was satisfied; make that a debug message.

perf vendor events:

Intel:

  Haiyan Song:

  - Update CascadelakeX events to v1.05.

  - Update all the Intel JSON metrics from TMAM 3.6.

Treewide:

  Ian Rogers:

  - Improve error paths, plugging leaks found using LLVM tools
    such as libFuzzer.

jevents:

  Yunfeng Ye:

  - Fix resource leak in process_mapfile() and main()

perf kvm:

  Igor Lubashev:

  - Use evlist layer api when possible.

libsubcmd:

  James Clark:

  - Move EXTRA_FLAGS to the end to allow overriding existing flags.

  - Use -O0 with DEBUG=1
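
    E.g., with EXTRA_CFLAGS now appended last, user flags can override the
    defaults (a hypothetical invocation):

      $ make -C tools/lib/subcmd DEBUG=1 EXTRA_CFLAGS="-Og"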

perf diff:

  Jin Yao:

  - Don't use hack to skip column length calculation

CoreSight ETM:

  Leo Yan:

  - Fix definition of macro TO_CS_QUEUE_NR

ARM64:

  John Garry:

  - Do not try to include libelf header files when the libelf feature
    detection fails, fixing the cross build for ARM64.

perf tests:

  Leo Yan:

  - Fix out of bounds memory access in the backward ring buffer test.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 8f05c1ff 7fa46cbf
......@@ -19,8 +19,7 @@ MAKEFLAGS += --no-print-directory
LIBFILE = $(OUTPUT)libsubcmd.a
CFLAGS := $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
CFLAGS += -ggdb3 -Wall -Wextra -std=gnu99 -fPIC
CFLAGS := -ggdb3 -Wall -Wextra -std=gnu99 -fPIC
ifeq ($(DEBUG),0)
ifeq ($(feature-fortify-source), 1)
......@@ -28,7 +27,9 @@ ifeq ($(DEBUG),0)
endif
endif
ifeq ($(CC_NO_CLANG), 0)
ifeq ($(DEBUG),1)
CFLAGS += -O0
else ifeq ($(CC_NO_CLANG), 0)
CFLAGS += -O3
else
CFLAGS += -O6
......@@ -43,6 +44,8 @@ CFLAGS += -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
CFLAGS += -I$(srctree)/tools/include/
CFLAGS += $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
SUBCMD_IN := $(OUTPUT)libsubcmd-in.o
all:
......
......@@ -571,6 +571,13 @@ config terms. For example: 'cycles/overwrite/' and 'instructions/no-overwrite/'.
Implies --tail-synthesize.
--kcore::
Make a copy of /proc/kcore and place it into a directory with the perf data file.
--max-size=<size>::
Limit the sample data max size. <size> is expected to be a number with
an appended unit character - B/K/M/G
SEE ALSO
--------
linkperf:perf-stat[1], linkperf:perf-list[1]
......@@ -525,6 +525,17 @@ include::itrace.txt[]
Configure time quantum for time sort key. Default 100ms.
Accepts s, us, ms, ns units.
--total-cycles::
When --total-cycles is specified, it supports sorting for all blocks by
'Sampled Cycles%'. This is useful to concentrate on the globally hottest
blocks. In output, there are some new columns:
'Sampled Cycles%' - block sampled cycles aggregation / total sampled cycles
'Sampled Cycles' - block sampled cycles aggregation
'Avg Cycles%' - block average sampled cycles / sum of total block average
sampled cycles
'Avg Cycles' - block average sampled cycles
include::callchain-overhead-calculation.txt[]
SEE ALSO
......
......@@ -217,6 +217,11 @@ core number and the number of online logical processors on that physical process
Aggregate counts per monitored threads, when monitoring threads (-t option)
or processes (-p option).
--per-node::
Aggregate counts per NUMA node for system-wide mode measurements. This
mode is useful for detecting imbalance between NUMA nodes. To enable
it, use --per-node in addition to -a (system-wide).
-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring. This is useful to
......
perf.data directory format
DISCLAIMER This is not ABI yet and is subject to possible change
in following versions of perf. We will remove this
disclaimer once the directory format soaks in.
This document describes the on-disk perf.data directory format.
The layout is described by HEADER_DIR_FORMAT feature.
Currently it holds only version number (0):
HEADER_DIR_FORMAT = 24
struct {
uint64_t version;
}
Currently, the only version value (0) means that:
- there is a single perf.data file named 'data' within the directory.
e.g.
$ tree -ps perf.data
perf.data
└── [-rw------- 25912] data
Future versions are expected to describe different data file layouts
according to special needs.
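
A consumer would check the version before assuming the layout, e.g.
(a sketch; only HEADER_DIR_FORMAT and the version field come from above):

  struct {
          uint64_t version;
  } dir_format;

  /* ... read the HEADER_DIR_FORMAT feature section into dir_format ... */

  if (dir_format.version != 0)
          return -1;      /* unknown perf.data directory layout */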
Currently the only 'perf record' option to output to a directory is
the --kcore option which puts a copy of /proc/kcore into the directory.
e.g.
$ sudo perf record --kcore uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.015 MB perf.data (9 samples) ]
$ sudo tree -ps perf.data
perf.data
├── [-rw------- 23744] data
└── [drwx------ 4096] kcore_dir
├── [-r-------- 6731125] kallsyms
├── [-r-------- 40230912] kcore
└── [-r-------- 5419] modules
1 directory, 4 files
$ sudo perf script -v
build id event received for vmlinux: 1eaa285996affce2d74d8e66dcea09a80c9941de
build id event received for [vdso]: 8bbaf5dc62a9b644b4d4e4539737e104e4a84541
build id event received for /lib/x86_64-linux-gnu/libc-2.28.so: 5b157f49586a3ca84d55837f97ff466767dd3445
Samples for 'cycles' event do not have CPU attribute set. Skipping 'cpu' field.
Using CPUID GenuineIntel-6-8E-A
Using perf.data/kcore_dir/kcore for kernel data
Using perf.data/kcore_dir/kallsyms for symbols
perf 15316 2060795.480902: 1 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux)
perf 15316 2060795.480906: 1 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux)
perf 15316 2060795.480908: 7 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux)
perf 15316 2060795.480910: 119 cycles: ffffffffa2caa54a native_write_msr+0xa (vmlinux)
perf 15316 2060795.480912: 2109 cycles: ffffffffa2c9b7b0 native_apic_msr_write+0x0 (vmlinux)
perf 15316 2060795.480914: 37606 cycles: ffffffffa2f121fe perf_event_addr_filters_exec+0x2e (vmlinux)
uname 15316 2060795.480924: 588287 cycles: ffffffffa303a56d page_counter_try_charge+0x6d (vmlinux)
uname 15316 2060795.481067: 2261945 cycles: ffffffffa301438f kmem_cache_free+0x4f (vmlinux)
uname 15316 2060795.481643: 2172167 cycles: 7f1a48c393c0 _IO_un_link+0x0 (/lib/x86_64-linux-gnu/libc-2.28.so)
......@@ -6,9 +6,10 @@
#include "symbol.h" // for the elf__needs_adjust_symbols() prototype
#include <stdbool.h>
#include <gelf.h>
#ifdef HAVE_LIBELF_SUPPORT
#include <gelf.h>
bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)
{
return ehdr.e_type == ET_EXEC ||
......
......@@ -29,7 +29,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
return -1;
}
for (pos = maps__first(maps); pos; pos = map__next(pos)) {
maps__for_each_entry(maps, pos) {
struct kmap *kmap;
size_t size;
......
......@@ -201,7 +201,7 @@ static int process_branch_callback(struct evsel *evsel,
if (a.map != NULL)
a.map->dso->hit = 1;
hist__account_cycles(sample->branch_stack, al, sample, false);
hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
ret = hist_entry_iter__add(&iter, &a, PERF_MAX_STACK_DEPTH, ann);
return ret;
......
......@@ -24,6 +24,7 @@
#include "util/annotate.h"
#include "util/map.h"
#include "util/spark.h"
#include "util/block-info.h"
#include <linux/err.h>
#include <linux/zalloc.h>
#include <subcmd/pager.h>
......@@ -98,8 +99,6 @@ static s64 compute_wdiff_w2;
static const char *cpu_list;
static DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
static struct addr_location dummy_al;
enum {
COMPUTE_DELTA,
COMPUTE_RATIO,
......@@ -427,7 +426,8 @@ static int diff__process_sample_event(struct perf_tool *tool,
goto out_put;
}
hist__account_cycles(sample->branch_stack, &al, sample, false);
hist__account_cycles(sample->branch_stack, &al, sample, false,
NULL);
}
/*
......@@ -537,41 +537,6 @@ static void hists__baseline_only(struct hists *hists)
}
}
static int64_t block_cmp(struct perf_hpp_fmt *fmt __maybe_unused,
struct hist_entry *left, struct hist_entry *right)
{
struct block_info *bi_l = left->block_info;
struct block_info *bi_r = right->block_info;
int cmp;
if (!bi_l->sym || !bi_r->sym) {
if (!bi_l->sym && !bi_r->sym)
return 0;
else if (!bi_l->sym)
return -1;
else
return 1;
}
if (bi_l->sym == bi_r->sym) {
if (bi_l->start == bi_r->start) {
if (bi_l->end == bi_r->end)
return 0;
else
return (int64_t)(bi_r->end - bi_l->end);
} else
return (int64_t)(bi_r->start - bi_l->start);
} else {
cmp = strcmp(bi_l->sym->name, bi_r->sym->name);
return cmp;
}
if (bi_l->sym->start != bi_r->sym->start)
return (int64_t)(bi_r->sym->start - bi_l->sym->start);
return (int64_t)(bi_r->sym->end - bi_l->sym->end);
}
static int64_t block_cycles_diff_cmp(struct hist_entry *left,
struct hist_entry *right)
{
......@@ -600,67 +565,13 @@ static void init_block_hist(struct block_hist *bh)
INIT_LIST_HEAD(&bh->block_fmt.list);
INIT_LIST_HEAD(&bh->block_fmt.sort_list);
bh->block_fmt.cmp = block_cmp;
bh->block_fmt.cmp = block_info__cmp;
bh->block_fmt.sort = block_sort;
perf_hpp_list__register_sort_field(&bh->block_list,
&bh->block_fmt);
bh->valid = true;
}
static void init_block_info(struct block_info *bi, struct symbol *sym,
struct cyc_hist *ch, int offset)
{
bi->sym = sym;
bi->start = ch->start;
bi->end = offset;
bi->cycles = ch->cycles;
bi->cycles_aggr = ch->cycles_aggr;
bi->num = ch->num;
bi->num_aggr = ch->num_aggr;
memcpy(bi->cycles_spark, ch->cycles_spark,
NUM_SPARKS * sizeof(u64));
}
static int process_block_per_sym(struct hist_entry *he)
{
struct annotation *notes;
struct cyc_hist *ch;
struct block_hist *bh;
if (!he->ms.map || !he->ms.sym)
return 0;
notes = symbol__annotation(he->ms.sym);
if (!notes || !notes->src || !notes->src->cycles_hist)
return 0;
bh = container_of(he, struct block_hist, he);
init_block_hist(bh);
ch = notes->src->cycles_hist;
for (unsigned int i = 0; i < symbol__size(he->ms.sym); i++) {
if (ch[i].num_aggr) {
struct block_info *bi;
struct hist_entry *he_block;
bi = block_info__new();
if (!bi)
return -1;
init_block_info(bi, he->ms.sym, &ch[i], i);
he_block = hists__add_entry_block(&bh->block_hists,
&dummy_al, bi);
if (!he_block) {
block_info__put(bi);
return -1;
}
}
}
return 0;
}
static int block_pair_cmp(struct hist_entry *a, struct hist_entry *b)
{
struct block_info *bi_a = a->block_info;
......@@ -765,13 +676,6 @@ static void block_hists_match(struct hists *hists_base,
}
}
static int filter_cb(struct hist_entry *he, void *arg __maybe_unused)
{
/* Skip the calculation of column length in output_resort */
he->filtered = true;
return 0;
}
static void hists__precompute(struct hists *hists)
{
struct rb_root_cached *root;
......@@ -792,8 +696,11 @@ static void hists__precompute(struct hists *hists)
he = rb_entry(next, struct hist_entry, rb_node_in);
next = rb_next(&he->rb_node_in);
if (compute == COMPUTE_CYCLES)
process_block_per_sym(he);
if (compute == COMPUTE_CYCLES) {
bh = container_of(he, struct block_hist, he);
init_block_hist(bh);
block_info__process_sym(he, bh, NULL, 0);
}
data__for_each_file_new(i, d) {
pair = get_pair_data(he, d);
......@@ -812,16 +719,18 @@ static void hists__precompute(struct hists *hists)
compute_wdiff(he, pair);
break;
case COMPUTE_CYCLES:
process_block_per_sym(pair);
bh = container_of(he, struct block_hist, he);
pair_bh = container_of(pair, struct block_hist,
he);
init_block_hist(pair_bh);
block_info__process_sym(pair, pair_bh, NULL, 0);
bh = container_of(he, struct block_hist, he);
if (bh->valid && pair_bh->valid) {
block_hists_match(&bh->block_hists,
&pair_bh->block_hists);
hists__output_resort_cb(&pair_bh->block_hists,
NULL, filter_cb);
hists__output_resort(&pair_bh->block_hists,
NULL);
}
break;
default:
......
......@@ -578,58 +578,6 @@ static void strip_init(struct perf_inject *inject)
evsel->handler = drop_sample;
}
static bool has_tracking(struct evsel *evsel)
{
return evsel->core.attr.mmap || evsel->core.attr.mmap2 || evsel->core.attr.comm ||
evsel->core.attr.task;
}
#define COMPAT_MASK (PERF_SAMPLE_ID | PERF_SAMPLE_TID | PERF_SAMPLE_TIME | \
PERF_SAMPLE_ID | PERF_SAMPLE_CPU | PERF_SAMPLE_IDENTIFIER)
/*
* In order that the perf.data file is parsable, tracking events like MMAP need
* their selected event to exist, except if there is only 1 selected event left
* and it has a compatible sample type.
*/
static bool ok_to_remove(struct evlist *evlist,
struct evsel *evsel_to_remove)
{
struct evsel *evsel;
int cnt = 0;
bool ok = false;
if (!has_tracking(evsel_to_remove))
return true;
evlist__for_each_entry(evlist, evsel) {
if (evsel->handler != drop_sample) {
cnt += 1;
if ((evsel->core.attr.sample_type & COMPAT_MASK) ==
(evsel_to_remove->core.attr.sample_type & COMPAT_MASK))
ok = true;
}
}
return ok && cnt == 1;
}
static void strip_fini(struct perf_inject *inject)
{
struct evlist *evlist = inject->session->evlist;
struct evsel *evsel, *tmp;
/* Remove non-synthesized evsels if possible */
evlist__for_each_entry_safe(evlist, tmp, evsel) {
if (evsel->handler == drop_sample &&
ok_to_remove(evlist, evsel)) {
pr_debug("Deleting %s\n", perf_evsel__name(evsel));
evlist__remove(evlist, evsel);
evsel__delete(evsel);
}
}
}
static int __cmd_inject(struct perf_inject *inject)
{
int ret = -EINVAL;
......@@ -729,8 +677,6 @@ static int __cmd_inject(struct perf_inject *inject)
evlist__remove(session->evlist, evsel);
evsel__delete(evsel);
}
if (inject->strip)
strip_fini(inject);
}
session->header.data_offset = output_data_offset;
session->header.data_size = inject->bytes_written;
......
......@@ -998,7 +998,7 @@ static int kvm_events_live_report(struct perf_kvm_stat *kvm)
done = perf_kvm__handle_stdin();
if (!rc && !done)
err = fdarray__poll(fda, 100);
err = evlist__poll(kvm->evlist, 100);
}
evlist__disable(kvm->evlist);
......
......@@ -55,6 +55,9 @@
#include <signal.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <linux/err.h>
#include <linux/string.h>
#include <linux/time64.h>
......@@ -91,8 +94,11 @@ struct record {
struct switch_output switch_output;
unsigned long long samples;
cpu_set_t affinity_mask;
unsigned long output_max_size; /* = 0: unlimited */
};
static volatile int done;
static volatile int auxtrace_record__snapshot_started;
static DEFINE_TRIGGER(auxtrace_snapshot_trigger);
static DEFINE_TRIGGER(switch_output_trigger);
......@@ -120,6 +126,12 @@ static bool switch_output_time(struct record *rec)
trigger_is_ready(&switch_output_trigger);
}
static bool record__output_max_size_exceeded(struct record *rec)
{
return rec->output_max_size &&
(rec->bytes_written >= rec->output_max_size);
}
static int record__write(struct record *rec, struct mmap *map __maybe_unused,
void *bf, size_t size)
{
......@@ -132,6 +144,13 @@ static int record__write(struct record *rec, struct mmap *map __maybe_unused,
rec->bytes_written += size;
if (record__output_max_size_exceeded(rec) && !done) {
fprintf(stderr, "[ perf record: perf size limit reached (%" PRIu64 " KB),"
" stopping session ]\n",
rec->bytes_written >> 10);
done = 1;
}
if (switch_output_size(rec))
trigger_hit(&switch_output_trigger);
......@@ -496,7 +515,6 @@ static int record__pushfn(struct mmap *map, void *to, void *bf, size_t size)
return record__write(rec, map, bf, size);
}
static volatile int done;
static volatile int signr = -1;
static volatile int child_finished;
......@@ -537,7 +555,7 @@ static int record__process_auxtrace(struct perf_tool *tool,
size_t padding;
u8 pad[8] = {0};
if (!perf_data__is_pipe(data) && !perf_data__is_dir(data)) {
if (!perf_data__is_pipe(data) && perf_data__is_single_file(data)) {
off_t file_offset;
int fd = perf_data__fd(data);
int err;
......@@ -699,6 +717,37 @@ static int record__auxtrace_init(struct record *rec __maybe_unused)
#endif
static bool record__kcore_readable(struct machine *machine)
{
char kcore[PATH_MAX];
int fd;
scnprintf(kcore, sizeof(kcore), "%s/proc/kcore", machine->root_dir);
fd = open(kcore, O_RDONLY);
if (fd < 0)
return false;
close(fd);
return true;
}
static int record__kcore_copy(struct machine *machine, struct perf_data *data)
{
char from_dir[PATH_MAX];
char kcore_dir[PATH_MAX];
int ret;
snprintf(from_dir, sizeof(from_dir), "%s/proc", machine->root_dir);
ret = perf_data__make_kcore_dir(data, kcore_dir, sizeof(kcore_dir));
if (ret)
return ret;
return kcore_copy(from_dir, kcore_dir);
}
static int record__mmap_evlist(struct record *rec,
struct evlist *evlist)
{
......@@ -1383,6 +1432,12 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
session->header.env.comp_type = PERF_COMP_ZSTD;
session->header.env.comp_level = rec->opts.comp_level;
if (rec->opts.kcore &&
!record__kcore_readable(&session->machines.host)) {
pr_err("ERROR: kcore is not readable.\n");
return -1;
}
record__init_features(rec);
if (rec->opts.use_clockid && rec->opts.clockid_res_ns)
......@@ -1414,6 +1469,14 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
session->header.env.comp_mmap_len = session->evlist->core.mmap_len;
if (rec->opts.kcore) {
err = record__kcore_copy(&session->machines.host, data);
if (err) {
pr_err("ERROR: Failed to copy kcore\n");
goto out_child;
}
}
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
......@@ -1936,6 +1999,33 @@ static int record__parse_affinity(const struct option *opt, const char *str, int
return 0;
}
static int parse_output_max_size(const struct option *opt,
const char *str, int unset)
{
unsigned long *s = (unsigned long *)opt->value;
static struct parse_tag tags_size[] = {
{ .tag = 'B', .mult = 1 },
{ .tag = 'K', .mult = 1 << 10 },
{ .tag = 'M', .mult = 1 << 20 },
{ .tag = 'G', .mult = 1 << 30 },
{ .tag = 0 },
};
unsigned long val;
if (unset) {
*s = 0;
return 0;
}
val = parse_tag_value(str, tags_size);
if (val != (unsigned long) -1) {
*s = val;
return 0;
}
return -1;
}
static int record__parse_mmap_pages(const struct option *opt,
const char *str,
int unset __maybe_unused)
......@@ -2184,6 +2274,7 @@ static struct option __record_options[] = {
parse_cgroups),
OPT_UINTEGER('D', "delay", &record.opts.initial_delay,
"ms to wait before starting measurement after program start"),
OPT_BOOLEAN(0, "kcore", &record.opts.kcore, "copy /proc/kcore"),
OPT_STRING('u', "uid", &record.opts.target.uid_str, "user",
"user to profile"),
......@@ -2262,6 +2353,8 @@ static struct option __record_options[] = {
"n", "Compressed records using specified level (default: 1 - fastest compression, 22 - greatest compression)",
record__parse_comp_level),
#endif
OPT_CALLBACK(0, "max-size", &record.output_max_size,
"size", "Limit the maximum size of the output file", parse_output_max_size),
OPT_END()
};
......@@ -2322,6 +2415,9 @@ int cmd_record(int argc, const char **argv)
}
if (rec->opts.kcore)
rec->data.is_dir = true;
if (rec->opts.comp_level != 0) {
pr_debug("Compression enabled, disabling build id collection at the end of the session.\n");
rec->no_buildid = true;
......
......@@ -51,6 +51,7 @@
#include "util/util.h" // perf_tip()
#include "ui/ui.h"
#include "ui/progress.h"
#include "util/block-info.h"
#include <dlfcn.h>
#include <errno.h>
......@@ -96,10 +97,13 @@ struct report {
float min_percent;
u64 nr_entries;
u64 queue_size;
u64 total_cycles;
int socket_filter;
DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
struct branch_type_stat brtype_stat;
bool symbol_ipc;
bool total_cycles_mode;
struct block_report *block_reports;
};
static int report__config(const char *var, const char *value, void *cb)
......@@ -290,9 +294,10 @@ static int process_sample_event(struct perf_tool *tool,
if (al.map != NULL)
al.map->dso->hit = 1;
if (ui__has_annotation() || rep->symbol_ipc) {
if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
hist__account_cycles(sample->branch_stack, &al, sample,
rep->nonany_branch_mode);
rep->nonany_branch_mode,
&rep->total_cycles);
}
ret = hist_entry_iter__add(&iter, &al, rep->max_stack, rep);
......@@ -480,11 +485,28 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
return ret + fprintf(fp, "\n#\n");
}
static int perf_evlist__tui_block_hists_browse(struct evlist *evlist,
struct report *rep)
{
struct evsel *pos;
int i = 0, ret;
evlist__for_each_entry(evlist, pos) {
ret = report__browse_block_hists(&rep->block_reports[i++].hist,
rep->min_percent, pos);
if (ret != 0)
return ret;
}
return 0;
}
static int perf_evlist__tty_browse_hists(struct evlist *evlist,
struct report *rep,
const char *help)
{
struct evsel *pos;
int i = 0;
if (!quiet) {
fprintf(stdout, "#\n# Total Lost Samples: %" PRIu64 "\n#\n",
......@@ -500,6 +522,13 @@ static int perf_evlist__tty_browse_hists(struct evlist *evlist,
continue;
hists__fprintf_nr_sample_events(hists, rep, evname, stdout);
if (rep->total_cycles_mode) {
report__browse_block_hists(&rep->block_reports[i++].hist,
rep->min_percent, pos);
continue;
}
hists__fprintf(hists, !quiet, 0, 0, rep->min_percent, stdout,
!(symbol_conf.use_callchain ||
symbol_conf.show_branchflag_count));
......@@ -582,6 +611,11 @@ static int report__browse_hists(struct report *rep)
switch (use_browser) {
case 1:
if (rep->total_cycles_mode) {
ret = perf_evlist__tui_block_hists_browse(evlist, rep);
break;
}
ret = perf_evlist__tui_browse_hists(evlist, help, NULL,
rep->min_percent,
&session->header.env,
......@@ -727,11 +761,9 @@ static struct task *tasks_list(struct task *task, struct machine *machine)
static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
{
size_t printed = 0;
struct rb_node *nd;
for (nd = rb_first(&maps->entries); nd; nd = rb_next(nd)) {
struct map *map = rb_entry(nd, struct map, rb_node);
struct map *map;
maps__for_each_entry(maps, map) {
printed += fprintf(fp, "%*s %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
indent, "", map->start, map->end,
map->prot & PROT_READ ? 'r' : '-',
......@@ -927,6 +959,13 @@ static int __cmd_report(struct report *rep)
report__output_resort(rep);
if (rep->total_cycles_mode) {
rep->block_reports = block_info__create_report(session->evlist,
rep->total_cycles);
if (!rep->block_reports)
return -1;
}
return report__browse_hists(rep);
}
......@@ -1211,6 +1250,8 @@ int cmd_report(int argc, const char **argv)
"Set time quantum for time sort key (default 100ms)",
parse_time_quantum),
OPTS_EVSWITCH(&report.evswitch),
OPT_BOOLEAN(0, "total-cycles", &report.total_cycles_mode,
"Sort all blocks by 'Sampled Cycles%'"),
OPT_END()
};
struct perf_data data = {
......@@ -1373,6 +1414,13 @@ int cmd_report(int argc, const char **argv)
goto error;
}
if (report.total_cycles_mode) {
if (sort__mode != SORT_MODE__BRANCH)
report.total_cycles_mode = false;
else
sort_order = "sym";
}
if (strcmp(input_name, "-") != 0)
setup_browser(true);
else
......@@ -1423,7 +1471,8 @@ int cmd_report(int argc, const char **argv)
* so don't allocate extra space that won't be used in the stdio
* implementation.
*/
if (ui__has_annotation() || report.symbol_ipc) {
if (ui__has_annotation() || report.symbol_ipc ||
report.total_cycles_mode) {
ret = symbol__annotation_init();
if (ret < 0)
goto error;
......@@ -1484,6 +1533,10 @@ int cmd_report(int argc, const char **argv)
itrace_synth_opts__clear_time_range(&itrace_synth_opts);
zfree(&report.ptime_range);
}
if (report.block_reports)
zfree(&report.block_reports);
zstd_fini(&(session->zstd_data));
perf_session__delete(session);
return ret;
......
......@@ -792,6 +792,8 @@ static struct option stat_options[] = {
"aggregate counts per physical processor core", AGGR_CORE),
OPT_SET_UINT(0, "per-thread", &stat_config.aggr_mode,
"aggregate counts per thread", AGGR_THREAD),
OPT_SET_UINT(0, "per-node", &stat_config.aggr_mode,
"aggregate counts per numa node", AGGR_NODE),
OPT_UINTEGER('D', "delay", &stat_config.initial_delay,
"ms to wait before starting measurement after program start"),
OPT_CALLBACK_NOOPT(0, "metric-only", &stat_config.metric_only, NULL,
......@@ -830,6 +832,12 @@ static int perf_stat__get_core(struct perf_stat_config *config __maybe_unused,
return cpu_map__get_core(map, cpu, NULL);
}
static int perf_stat__get_node(struct perf_stat_config *config __maybe_unused,
struct perf_cpu_map *map, int cpu)
{
return cpu_map__get_node(map, cpu, NULL);
}
static int perf_stat__get_aggr(struct perf_stat_config *config,
aggr_get_id_t get_id, struct perf_cpu_map *map, int idx)
{
......@@ -864,6 +872,12 @@ static int perf_stat__get_core_cached(struct perf_stat_config *config,
return perf_stat__get_aggr(config, perf_stat__get_core, map, idx);
}
static int perf_stat__get_node_cached(struct perf_stat_config *config,
struct perf_cpu_map *map, int idx)
{
return perf_stat__get_aggr(config, perf_stat__get_node, map, idx);
}
static bool term_percore_set(void)
{
struct evsel *counter;
......@@ -902,6 +916,13 @@ static int perf_stat_init_aggr_mode(void)
}
stat_config.aggr_get_id = perf_stat__get_core_cached;
break;
case AGGR_NODE:
if (cpu_map__build_node_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build core map");
return -1;
}
stat_config.aggr_get_id = perf_stat__get_node_cached;
break;
case AGGR_NONE:
if (term_percore_set()) {
if (cpu_map__build_core_map(evsel_list->core.cpus,
......@@ -1014,6 +1035,13 @@ static int perf_env__get_core(struct perf_cpu_map *map, int idx, void *data)
return core;
}
static int perf_env__get_node(struct perf_cpu_map *map, int idx, void *data)
{
int cpu = perf_env__get_cpu(data, map, idx);
return perf_env__numa_node(data, cpu);
}
static int perf_env__build_socket_map(struct perf_env *env, struct perf_cpu_map *cpus,
struct perf_cpu_map **sockp)
{
......@@ -1032,6 +1060,12 @@ static int perf_env__build_core_map(struct perf_env *env, struct perf_cpu_map *c
return cpu_map__build_map(cpus, corep, perf_env__get_core, env);
}
static int perf_env__build_node_map(struct perf_env *env, struct perf_cpu_map *cpus,
struct perf_cpu_map **nodep)
{
return cpu_map__build_map(cpus, nodep, perf_env__get_node, env);
}
static int perf_stat__get_socket_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu_map *map, int idx)
{
......@@ -1049,6 +1083,12 @@ static int perf_stat__get_core_file(struct perf_stat_config *config __maybe_unus
return perf_env__get_core(map, idx, &perf_stat.session->header.env);
}
static int perf_stat__get_node_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu_map *map, int idx)
{
return perf_env__get_node(map, idx, &perf_stat.session->header.env);
}
static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
{
struct perf_env *env = &st->session->header.env;
......@@ -1075,6 +1115,13 @@ static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
}
stat_config.aggr_get_id = perf_stat__get_core_file;
break;
case AGGR_NODE:
if (perf_env__build_node_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build core map");
return -1;
}
stat_config.aggr_get_id = perf_stat__get_node_file;
break;
case AGGR_NONE:
case AGGR_GLOBAL:
case AGGR_THREAD:
......@@ -1622,6 +1669,8 @@ static int __cmd_report(int argc, const char **argv)
"aggregate counts per processor die", AGGR_DIE),
OPT_SET_UINT(0, "per-core", &perf_stat.aggr_mode,
"aggregate counts per physical processor core", AGGR_CORE),
OPT_SET_UINT(0, "per-node", &perf_stat.aggr_mode,
"aggregate counts per numa node", AGGR_NODE),
OPT_SET_UINT('A', "no-aggr", &perf_stat.aggr_mode,
"disable CPU count aggregation", AGGR_NONE),
OPT_END()
......@@ -1896,6 +1945,9 @@ int cmd_stat(int argc, const char **argv)
}
}
if (stat_config.aggr_mode == AGGR_NODE)
cpu__setup_cpunode_map();
if (stat_config.times && interval)
interval_count = true;
else if (stat_config.times && !interval) {
......
......@@ -725,7 +725,8 @@ static int hist_iter__top_callback(struct hist_entry_iter *iter,
perf_top__record_precise_ip(top, he, iter->sample, evsel, al->addr);
hist__account_cycles(iter->sample->branch_stack, al, iter->sample,
!(top->record_opts.branch_stack & PERF_SAMPLE_BRANCH_ANY));
!(top->record_opts.branch_stack & PERF_SAMPLE_BRANCH_ANY),
NULL);
return 0;
}
......
......@@ -120,6 +120,7 @@ void perf_evsel__close_fd(struct perf_evsel *evsel)
for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
if (FD(evsel, cpu, thread) >= 0)
close(FD(evsel, cpu, thread));
FD(evsel, cpu, thread) = -1;
}
......
......@@ -772,6 +772,7 @@ static int process_mapfile(FILE *outfp, char *fpath)
char *line, *p;
int line_num;
char *tblname;
int ret = 0;
pr_info("%s: Processing mapfile %s\n", prog, fpath);
......@@ -783,6 +784,7 @@ static int process_mapfile(FILE *outfp, char *fpath)
if (!mapfp) {
pr_info("%s: Error %s opening %s\n", prog, strerror(errno),
fpath);
free(line);
return -1;
}
......@@ -809,7 +811,8 @@ static int process_mapfile(FILE *outfp, char *fpath)
/* TODO Deal with lines longer than 16K */
pr_info("%s: Mapfile %s: line %d too long, aborting\n",
prog, fpath, line_num);
return -1;
ret = -1;
goto out;
}
line[strlen(line)-1] = '\0';
......@@ -839,7 +842,9 @@ static int process_mapfile(FILE *outfp, char *fpath)
out:
print_mapping_table_suffix(outfp);
return 0;
fclose(mapfp);
free(line);
return ret;
}
/*
......@@ -1136,6 +1141,7 @@ int main(int argc, char *argv[])
goto empty_map;
} else if (rc < 0) {
/* Make build fail */
fclose(eventsfp);
free_arch_std_events();
return 1;
} else if (rc) {
......@@ -1148,6 +1154,7 @@ int main(int argc, char *argv[])
goto empty_map;
} else if (rc < 0) {
/* Make build fail */
fclose(eventsfp);
free_arch_std_events();
return 1;
} else if (rc) {
......@@ -1165,6 +1172,8 @@ int main(int argc, char *argv[])
if (process_mapfile(eventsfp, mapfile)) {
pr_info("%s: Error processing mapfile %s\n", prog, mapfile);
/* Make build fail */
fclose(eventsfp);
free_arch_std_events();
return 1;
}
......
......@@ -148,6 +148,15 @@ int test__backward_ring_buffer(struct test *test __maybe_unused, int subtest __m
goto out_delete_evlist;
}
evlist__close(evlist);
err = evlist__open(evlist);
if (err < 0) {
pr_debug("perf_evlist__open: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
goto out_delete_evlist;
}
err = do_test(evlist, 1, &sample_count, &comm_count);
if (err != TEST_OK)
goto out_delete_evlist;
......
......@@ -295,7 +295,7 @@ bool test__bp_signal_is_supported(void)
* breakpointed instruction.
*
* Since arm64 has the same issue with arm for the single-step
* handling, this case also gets suck on the breakpointed
* handling, this case also gets stuck on the breakpointed
* instruction.
*
* Just disable the test for these architectures until these
......
......@@ -18,17 +18,16 @@ static int check_maps(struct map_def *merged, unsigned int size, struct map_grou
struct map *map;
unsigned int i = 0;
map = map_groups__first(mg);
while (map) {
map_groups__for_each_entry(mg, map) {
if (i > 0)
TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
TEST_ASSERT_VAL("wrong map start", map->start == merged[i].start);
TEST_ASSERT_VAL("wrong map end", map->end == merged[i].end);
TEST_ASSERT_VAL("wrong map name", !strcmp(map->dso->name, merged[i].name));
TEST_ASSERT_VAL("wrong map refcnt", refcount_read(&map->refcnt) == 2);
i++;
map = map_groups__next(map);
TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
}
return TEST_OK;
......
......@@ -182,7 +182,7 @@ int test__vmlinux_matches_kallsyms(struct test *test __maybe_unused, int subtest
header_printed = false;
for (map = maps__first(maps); map; map = map__next(map)) {
maps__for_each_entry(maps, map) {
struct map *pair;
/*
* If it is the kernel, kallsyms is always "[kernel.kallsyms]", while
......@@ -207,7 +207,7 @@ int test__vmlinux_matches_kallsyms(struct test *test __maybe_unused, int subtest
header_printed = false;
for (map = maps__first(maps); map; map = map__next(map)) {
maps__for_each_entry(maps, map) {
struct map *pair;
mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
......@@ -237,7 +237,7 @@ int test__vmlinux_matches_kallsyms(struct test *test __maybe_unused, int subtest
maps = machine__kernel_maps(&kallsyms);
for (map = maps__first(maps); map; map = map__next(map)) {
maps__for_each_entry(maps, map) {
if (!map->priv) {
if (!header_printed) {
pr_info("WARN: Maps only in kallsyms:\n");
......
......@@ -26,6 +26,7 @@
#include "../../util/sort.h"
#include "../../util/top.h"
#include "../../util/thread.h"
#include "../../util/block-info.h"
#include "../../arch/common.h"
#include "../../perf.h"
......@@ -1783,7 +1784,11 @@ static unsigned int hist_browser__refresh(struct ui_browser *browser)
continue;
}
if (symbol_conf.report_individual_block)
percent = block_info__total_cycles_percent(h);
else
percent = hist_entry__get_percent_limit(h);
if (percent < hb->min_pcnt)
continue;
......
......@@ -5,6 +5,7 @@
#include "ui/browser.h"
struct annotation_options;
struct evsel;
struct hist_browser {
struct ui_browser b;
......@@ -15,6 +16,7 @@ struct hist_browser {
struct pstack *pstack;
struct perf_env *env;
struct annotation_options *annotation_opts;
struct evsel *block_evsel;
int print_seq;
bool show_dso;
bool show_headers;
......
perf-y += annotate.o
perf-y += block-info.o
perf-y += block-range.o
perf-y += build-id.o
perf-y += cacheline.o
......
......@@ -1892,7 +1892,7 @@ static char *expand_tabs(char *line, char **storage, size_t *storage_len)
}
/* Expand the last region. */
len = line_len + 1 - src;
len = line_len - src;
memcpy(&new_line[dst], &line[src], len);
dst += len;
new_line[dst] = '\0';
......
......@@ -489,6 +489,7 @@ void *auxtrace_cache__alloc_entry(struct auxtrace_cache *c);
void auxtrace_cache__free_entry(struct auxtrace_cache *c, void *entry);
int auxtrace_cache__add(struct auxtrace_cache *c, u32 key,
struct auxtrace_cache_entry *entry);
void auxtrace_cache__remove(struct auxtrace_cache *c, u32 key);
void *auxtrace_cache__lookup(struct auxtrace_cache *c, u32 key);
struct auxtrace_record *auxtrace_record__init(struct evlist *evlist,
......
......@@ -20,9 +20,12 @@ int cpu_map__get_die_id(int cpu);
int cpu_map__get_die(struct perf_cpu_map *map, int idx, void *data);
int cpu_map__get_core_id(int cpu);
int cpu_map__get_core(struct perf_cpu_map *map, int idx, void *data);
int cpu_map__get_node_id(int cpu);
int cpu_map__get_node(struct perf_cpu_map *map, int idx, void *data);
int cpu_map__build_socket_map(struct perf_cpu_map *cpus, struct perf_cpu_map **sockp);
int cpu_map__build_die_map(struct perf_cpu_map *cpus, struct perf_cpu_map **diep);
int cpu_map__build_core_map(struct perf_cpu_map *cpus, struct perf_cpu_map **corep);
int cpu_map__build_node_map(struct perf_cpu_map *cpus, struct perf_cpu_map **nodep);
const struct perf_cpu_map *cpu_map__online(void); /* thread unsafe */
static inline int cpu_map__socket(struct perf_cpu_map *sock, int s)
......