Commit 3d6dfae8 authored by Ian Rogers, committed by Arnaldo Carvalho de Melo

perf parse-events: Remove BPF event support

New features like the BPF --filter support in perf record have made the
BPF event functionality somewhat redundant. As shown by commit
fcb027c1a4f6 ("perf tools: Revert enable indices setting syntax for BPF
map") and commit 14e4b9f4 ("perf trace: Raw augmented syscalls fix
libbpf 1.0+ compatibility") the BPF event support hasn't been well
maintained, and it adds considerable complexity in areas such as event
parsing, not least because '/' serves both as the separator for event
modifiers and as a path separator.
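
For instance, the parser had to decide whether a string containing '/'
was a PMU event with config terms and modifiers or a path to a BPF
source or object file. An illustrative comparison (the 'cpu' PMU and
the .c path below are placeholders, not part of this patch):

  # '/' delimiting PMU config terms, with a trailing modifier:
  perf record -e cpu/event=0x3c,period=100000/u -- sleep 1
  # '/' as a plain path separator in the now-removed BPF event syntax:
  perf record -e /path/to/prog.c -- sleep 1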

This patch removes support for BPF events from the event parser and then
removes the associated functions, which leads to the removal of whole
source files such as bpf-loader.c. Removing this support breaks augmented
syscalls in perf trace; that will be fixed in a later commit that adds
support using BPF skeletons.

The removal of BPF events causes an unused-label warning in the
flex-generated code, so the build is updated to ignore it:

  util/parse-events-flex.c:2704:1: error: label ‘find_rule’ defined but not used [-Werror=unused-label]
   2704 | find_rule: /* we branch to this label when backing up */
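
  The fix amounts to appending that flag for the flex-generated parser
  object in tools/perf/util/Build, as shown later in this diff:

    CFLAGS_parse-events-flex.o += $(flex_flags) -Wno-unused-label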

Committer notes:

Extracted from a larger patch that was also removing support for linking
with libllvm and libclang, which were an alternative to invoking an
external clang executable to compile the .c event source code into BPF
bytecode.

Testing it:

  # perf trace -e /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c
  event syntax error: '/home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c'
                        \___ Bad event or PMU

  Unabled to find PMU or event on a PMU of 'home'

  Initial error:
  event syntax error: '/home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c'
                        \___ Cannot find PMU `home'. Missing kernel support?
  Run 'perf list' for a list of valid events

   Usage: perf trace [<options>] [<command>]
      or: perf trace [<options>] -- <command> [<options>]
      or: perf trace record [<options>] [<command>]
      or: perf trace record [<options>] -- <command> [<options>]

      -e, --event <event>   event/syscall selector. use 'perf list' to list available events
  #
Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Carsten Haitzler <carsten.haitzler@arm.com>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Fangrui Song <maskray@google.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: Tom Rix <trix@redhat.com>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Wang ShaoBo <bobo.shaobowang@huawei.com>
Cc: Yang Jihong <yangjihong1@huawei.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Cc: bpf@vger.kernel.org
Cc: llvm@lists.linux.dev
Link: https://lore.kernel.org/r/20230810184853.2860737-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
parent 56b11a21
@@ -125,9 +125,6 @@ Given a $HOME/.perfconfig like this:
group = true
skip-empty = true
[llvm]
dump-obj = true
clang-opt = -g
You can hide source code of annotate feature setting the config to false with
@@ -657,36 +654,6 @@ ftrace.*::
-F option is not specified. Possible values are 'function' and
'function_graph'.
llvm.*::
llvm.clang-path::
Path to clang. If omit, search it from $PATH.
llvm.clang-bpf-cmd-template::
Cmdline template. Below lines show its default value. Environment
variable is used to pass options.
"$CLANG_EXEC -D__KERNEL__ -D__NR_CPUS__=$NR_CPUS "\
"-DLINUX_VERSION_CODE=$LINUX_VERSION_CODE " \
"$CLANG_OPTIONS $PERF_BPF_INC_OPTIONS $KERNEL_INC_OPTIONS " \
"-Wno-unused-value -Wno-pointer-sign " \
"-working-directory $WORKING_DIR " \
"-c \"$CLANG_SOURCE\" --target=bpf $CLANG_EMIT_LLVM -O2 -o - $LLVM_OPTIONS_PIPE"
llvm.clang-opt::
Options passed to clang.
llvm.kbuild-dir::
kbuild directory. If not set, use /lib/modules/`uname -r`/build.
If set to "" deliberately, skip kernel header auto-detector.
llvm.kbuild-opts::
Options passed to 'make' when detecting kernel header options.
llvm.dump-obj::
Enable perf dump BPF object files compiled by LLVM.
llvm.opts::
Options passed to llc.
samples.*::
samples.context::
...
@@ -99,20 +99,6 @@ OPTIONS
If you want to profile write accesses in [0x1000~1008), just set
'mem:0x1000/8:w'.
- a BPF source file (ending in .c) or a precompiled object file (ending
in .o) selects one or more BPF events.
The BPF program can attach to various perf events based on the ELF section
names.
When processing a '.c' file, perf searches an installed LLVM to compile it
into an object file first. Optional clang options can be passed via the
'--clang-opt' command line option, e.g.:
perf record --clang-opt "-DLINUX_VERSION_CODE=0x50000" \
-e tests/bpf-script-example.c
Note: '--clang-opt' must be placed before '--event/-e'.
- a group of events surrounded by a pair of brace ("{event1,event2,...}").
Each event is separated by commas and the group should be quoted to
prevent the shell interpretation. You also need to use --group on
@@ -547,14 +533,6 @@ PERF_RECORD_SWITCH_CPU_WIDE. In some cases (e.g. Intel PT, CoreSight or Arm SPE)
switch events will be enabled automatically, which can be suppressed by
by the option --no-switch-events.
--clang-path=PATH::
Path to clang binary to use for compiling BPF scriptlets.
(enabled when BPF support is on)
--clang-opt=OPTIONS::
Options passed to clang when compiling BPF scriptlets.
(enabled when BPF support is on)
--vmlinux=PATH::
Specify vmlinux path which has debuginfo.
(enabled when BPF prologue is on)
...
@@ -589,18 +589,6 @@ ifndef NO_LIBELF
LIBBPF_STATIC := 1
endif
endif
ifndef NO_DWARF
ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
CFLAGS += -DHAVE_BPF_PROLOGUE
$(call detected,CONFIG_BPF_PROLOGUE)
else
msg := $(warning BPF prologue is not supported by architecture $(SRCARCH), missing regs_query_register_offset());
endif
else
msg := $(warning DWARF support is off, BPF prologue is disabled);
endif
endif # NO_LIBBPF
endif # NO_LIBELF
...
@@ -37,8 +37,6 @@
#include "util/parse-branch-options.h"
#include "util/parse-regs-options.h"
#include "util/perf_api_probe.h"
#include "util/llvm-utils.h"
#include "util/bpf-loader.h"
#include "util/trigger.h" #include "util/trigger.h"
#include "util/perf-hooks.h" #include "util/perf-hooks.h"
#include "util/cpu-set-sched.h" #include "util/cpu-set-sched.h"
...@@ -2465,16 +2463,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv) ...@@ -2465,16 +2463,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
} }
} }
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
goto out_free_threads;
}
/*
* Normally perf_session__new would do this, but it doesn't have the
* evlist.
@@ -3486,10 +3474,6 @@ static struct option __record_options[] = {
"collect kernel callchains"),
OPT_BOOLEAN(0, "user-callchains", &record.opts.user_callchains,
"collect user callchains"),
OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path",
"clang binary to use for compiling BPF scriptlets"),
OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
"options passed to clang when compiling BPF scriptlets"),
OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name, OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name,
"file", "vmlinux pathname"), "file", "vmlinux pathname"),
OPT_BOOLEAN(0, "buildid-all", &record.buildid_all, OPT_BOOLEAN(0, "buildid-all", &record.buildid_all,
...@@ -3967,27 +3951,6 @@ int cmd_record(int argc, const char **argv) ...@@ -3967,27 +3951,6 @@ int cmd_record(int argc, const char **argv)
setlocale(LC_ALL, ""); setlocale(LC_ALL, "");
#ifndef HAVE_LIBBPF_SUPPORT
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, "NO_LIBBPF=1", c)
set_nobuild('\0', "clang-path", true);
set_nobuild('\0', "clang-opt", true);
# undef set_nobuild
#endif
#ifndef HAVE_BPF_PROLOGUE
# if !defined (HAVE_DWARF_SUPPORT)
# define REASON "NO_DWARF=1"
# elif !defined (HAVE_LIBBPF_SUPPORT)
# define REASON "NO_LIBBPF=1"
# else
# define REASON "this architecture doesn't support BPF prologue"
# endif
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, REASON, c)
set_nobuild('\0', "vmlinux", true);
# undef set_nobuild
# undef REASON
#endif
#ifndef HAVE_BPF_SKEL
# define set_nobuild(s, l, m, c) set_option_nobuild(record_options, s, l, m, c)
set_nobuild('\0', "off-cpu", "no BUILD_BPF_SKEL=1", true);
@@ -4116,14 +4079,6 @@ int cmd_record(int argc, const char **argv)
if (dry_run)
goto out;
err = bpf__setup_stdout(rec->evlist);
if (err) {
bpf__strerror_setup_stdout(rec->evlist, err, errbuf, sizeof(errbuf));
pr_err("ERROR: Setup BPF stdout failed: %s\n",
errbuf);
goto out;
}
err = -ENOMEM;
if (rec->no_buildid_cache || rec->no_buildid) {
...
@@ -18,6 +18,7 @@
#include <api/fs/tracing_path.h>
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#endif
#include "util/bpf_map.h"
#include "util/rlimit.h"
@@ -53,7 +54,6 @@
#include "trace/beauty/beauty.h"
#include "trace-event.h"
#include "util/parse-events.h"
#include "util/bpf-loader.h"
#include "util/tracepoint.h" #include "util/tracepoint.h"
#include "callchain.h" #include "callchain.h"
#include "print_binary.h" #include "print_binary.h"
...@@ -3287,17 +3287,6 @@ static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace, const ch ...@@ -3287,17 +3287,6 @@ static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace, const ch
return bpf_object__find_map_by_name(trace->bpf_obj, name); return bpf_object__find_map_by_name(trace->bpf_obj, name);
} }
static void trace__set_bpf_map_filtered_pids(struct trace *trace)
{
trace->filter_pids.map = trace__find_bpf_map_by_name(trace, "pids_filtered");
}
static void trace__set_bpf_map_syscalls(struct trace *trace)
{
trace->syscalls.prog_array.sys_enter = trace__find_bpf_map_by_name(trace, "syscalls_sys_enter");
trace->syscalls.prog_array.sys_exit = trace__find_bpf_map_by_name(trace, "syscalls_sys_exit");
}
static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace, const char *name)
{
struct bpf_program *pos, *prog = NULL;
@@ -3553,25 +3542,6 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
return err;
}
static void trace__delete_augmented_syscalls(struct trace *trace)
{
struct evsel *evsel, *tmp;
evlist__remove(trace->evlist, trace->syscalls.events.augmented);
evsel__delete(trace->syscalls.events.augmented);
trace->syscalls.events.augmented = NULL;
evlist__for_each_entry_safe(trace->evlist, tmp, evsel) {
if (evsel->bpf_obj == trace->bpf_obj) {
evlist__remove(trace->evlist, evsel);
evsel__delete(evsel);
}
}
bpf_object__close(trace->bpf_obj);
trace->bpf_obj = NULL;
}
#else // HAVE_LIBBPF_SUPPORT
static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace __maybe_unused,
const char *name __maybe_unused)
@@ -3579,45 +3549,12 @@ static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace __maybe_u
return NULL;
}
static void trace__set_bpf_map_filtered_pids(struct trace *trace __maybe_unused)
{
}
static void trace__set_bpf_map_syscalls(struct trace *trace __maybe_unused)
{
}
static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace __maybe_unused,
const char *name __maybe_unused)
{
return NULL;
}
static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace __maybe_unused)
{
return 0;
}
static void trace__delete_augmented_syscalls(struct trace *trace __maybe_unused)
{
}
#endif // HAVE_LIBBPF_SUPPORT
static bool trace__only_augmented_syscalls_evsels(struct trace *trace)
{
struct evsel *evsel;
evlist__for_each_entry(trace->evlist, evsel) {
if (evsel == trace->syscalls.events.augmented ||
evsel->bpf_obj == trace->bpf_obj)
continue;
return false;
}
return true;
}
static int trace__set_ev_qualifier_filter(struct trace *trace)
{
if (trace->syscalls.events.sys_enter)
@@ -3981,16 +3918,6 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
if (err < 0)
goto out_error_open;
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
goto out_error_open;
}
err = trace__set_filter_pids(trace);
if (err < 0)
goto out_error_mem;
@@ -4922,77 +4849,6 @@ int cmd_trace(int argc, const char **argv)
"cgroup monitoring only available in system-wide mode");
}
evsel = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__");
if (IS_ERR(evsel)) {
bpf__strerror_setup_output_event(trace.evlist, PTR_ERR(evsel), bf, sizeof(bf));
pr_err("ERROR: Setup trace syscalls enter failed: %s\n", bf);
goto out;
}
if (evsel) {
trace.syscalls.events.augmented = evsel;
evsel = evlist__find_tracepoint_by_name(trace.evlist, "raw_syscalls:sys_enter");
if (evsel == NULL) {
pr_err("ERROR: raw_syscalls:sys_enter not found in the augmented BPF object\n");
goto out;
}
if (evsel->bpf_obj == NULL) {
pr_err("ERROR: raw_syscalls:sys_enter not associated to a BPF object\n");
goto out;
}
trace.bpf_obj = evsel->bpf_obj;
/*
* If we have _just_ the augmenter event but don't have a
* explicit --syscalls, then assume we want all strace-like
* syscalls:
*/
if (!trace.trace_syscalls && trace__only_augmented_syscalls_evsels(&trace))
trace.trace_syscalls = true;
/*
* So, if we have a syscall augmenter, but trace_syscalls, aka
* strace-like syscall tracing is not set, then we need to trow
* away the augmenter, i.e. all the events that were created
* from that BPF object file.
*
* This is more to fix the current .perfconfig trace.add_events
* style of setting up the strace-like eBPF based syscall point
* payload augmenter.
*
* All this complexity will be avoided by adding an alternative
* to trace.add_events in the form of
* trace.bpf_augmented_syscalls, that will be only parsed if we
* need it.
*
* .perfconfig trace.add_events is still useful if we want, for
* instance, have msr_write.msr in some .perfconfig profile based
* 'perf trace --config determinism.profile' mode, where for some
* particular goal/workload type we want a set of events and
* output mode (with timings, etc) instead of having to add
* all via the command line.
*
* Also --config to specify an alternate .perfconfig file needs
* to be implemented.
*/
if (!trace.trace_syscalls) {
trace__delete_augmented_syscalls(&trace);
} else {
trace__set_bpf_map_filtered_pids(&trace);
trace__set_bpf_map_syscalls(&trace);
trace.syscalls.unaugmented_prog = trace__find_bpf_program_by_title(&trace, "!raw_syscalls:unaugmented");
}
}
err = bpf__setup_stdout(trace.evlist);
if (err) {
bpf__strerror_setup_stdout(trace.evlist, err, bf, sizeof(bf));
pr_err("ERROR: Setup BPF stdout failed: %s\n", bf);
goto out;
}
err = -1;
if (map_dump_str) {
...
@@ -18,7 +18,6 @@
#include <subcmd/run-command.h>
#include "util/parse-events.h"
#include <subcmd/parse-options.h>
#include "util/bpf-loader.h"
#include "util/debug.h" #include "util/debug.h"
#include "util/event.h" #include "util/event.h"
#include "util/util.h" // usage() #include "util/util.h" // usage()
...@@ -324,7 +323,6 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv) ...@@ -324,7 +323,6 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv)
perf_config__exit(); perf_config__exit();
exit_browser(status); exit_browser(status);
perf_env__exit(&perf_env); perf_env__exit(&perf_env);
bpf__clear();
if (status)
return status & 0xff;
...
# SPDX-License-Identifier: GPL-2.0-only
llvm-src-base.c
llvm-src-kbuild.c
llvm-src-prologue.c
llvm-src-relocation.c
@@ -37,8 +37,6 @@ perf-y += sample-parsing.o
perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
perf-y += llvm.o llvm-src-base.o llvm-src-kbuild.o llvm-src-prologue.o llvm-src-relocation.o
perf-y += bpf.o
perf-y += topology.o
perf-y += mem.o
perf-y += cpumap.o
@@ -69,34 +67,6 @@ perf-y += sigtrap.o
perf-y += event_groups.o
perf-y += symbols.o
$(OUTPUT)tests/llvm-src-base.c: tests/bpf-script-example.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_base_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
$(OUTPUT)tests/llvm-src-kbuild.c: tests/bpf-script-test-kbuild.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_test_kbuild_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
$(OUTPUT)tests/llvm-src-prologue.c: tests/bpf-script-test-prologue.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_test_prologue_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
$(OUTPUT)tests/llvm-src-relocation.c: tests/bpf-script-test-relocation.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_test_relocation[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 arm arm64 powerpc))
perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
endif
...
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-example.c
* Test basic LLVM building
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define BPF_ANY 0
#define BPF_MAP_TYPE_ARRAY 2
#define BPF_FUNC_map_lookup_elem 1
#define BPF_FUNC_map_update_elem 2
static void *(*bpf_map_lookup_elem)(void *map, void *key) =
(void *) BPF_FUNC_map_lookup_elem;
static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) =
(void *) BPF_FUNC_map_update_elem;
/*
* Following macros are taken from tools/lib/bpf/bpf_helpers.h,
* and are used to create BTF defined maps. It is easier to take
* 2 simple macros, than being able to include above header in
* runtime.
*
* __uint - defines integer attribute of BTF map definition,
* Such attributes are represented using a pointer to an array,
* in which dimensionality of array encodes specified integer
* value.
*
* __type - defines pointer variable with typeof(val) type for
* attributes like key or value, which will be defined by the
* size of the type.
*/
#define __uint(name, val) int (*name)[val]
#define __type(name, val) typeof(val) *name
#define SEC(NAME) __attribute__((section(NAME), used))
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__uint(max_entries, 1);
__type(key, int);
__type(value, int);
} flip_table SEC(".maps");
SEC("syscalls:sys_enter_epoll_pwait")
int bpf_func__SyS_epoll_pwait(void *ctx)
{
int ind =0;
int *flag = bpf_map_lookup_elem(&flip_table, &ind);
int new_flag;
if (!flag)
return 0;
/* flip flag and store back */
new_flag = !*flag;
bpf_map_update_elem(&flip_table, &ind, &new_flag, BPF_ANY);
return new_flag;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-test-kbuild.c
* Test include from kernel header
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define SEC(NAME) __attribute__((section(NAME), used))
#include <uapi/linux/fs.h>
SEC("func=vfs_llseek")
int bpf_func__vfs_llseek(void *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-test-prologue.c
* Test BPF prologue
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define SEC(NAME) __attribute__((section(NAME), used))
#include <uapi/linux/fs.h>
/*
* If CONFIG_PROFILE_ALL_BRANCHES is selected,
* 'if' is redefined after include kernel header.
* Recover 'if' for BPF object code.
*/
#ifdef if
# undef if
#endif
typedef unsigned int __bitwise fmode_t;
#define FMODE_READ 0x1
#define FMODE_WRITE 0x2
static void (*bpf_trace_printk)(const char *fmt, int fmt_size, ...) =
(void *) 6;
SEC("func=null_lseek file->f_mode offset orig")
int bpf_func__null_lseek(void *ctx, int err, unsigned long _f_mode,
unsigned long offset, unsigned long orig)
{
fmode_t f_mode = (fmode_t)_f_mode;
if (err)
return 0;
if (f_mode & FMODE_WRITE)
return 0;
if (offset & 1)
return 0;
if (orig == SEEK_CUR)
return 0;
return 1;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-test-relocation.c
* Test BPF loader checking relocation
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define BPF_ANY 0
#define BPF_MAP_TYPE_ARRAY 2
#define BPF_FUNC_map_lookup_elem 1
#define BPF_FUNC_map_update_elem 2
static void *(*bpf_map_lookup_elem)(void *map, void *key) =
(void *) BPF_FUNC_map_lookup_elem;
static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) =
(void *) BPF_FUNC_map_update_elem;
struct bpf_map_def {
unsigned int type;
unsigned int key_size;
unsigned int value_size;
unsigned int max_entries;
};
#define SEC(NAME) __attribute__((section(NAME), used))
struct bpf_map_def SEC("maps") my_table = {
.type = BPF_MAP_TYPE_ARRAY,
.key_size = sizeof(int),
.value_size = sizeof(int),
.max_entries = 1,
};
int this_is_a_global_val;
SEC("func=sys_write")
int bpf_func__sys_write(void *ctx)
{
int key = 0;
int value = 0;
/*
* Incorrect relocation. Should not allow this program be
* loaded into kernel.
*/
bpf_map_update_elem(&this_is_a_global_val, &key, &value, 0);
return 0;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;
// SPDX-License-Identifier: GPL-2.0
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <util/record.h>
#include <util/util.h>
#include <util/bpf-loader.h>
#include <util/evlist.h>
#include <linux/filter.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <api/fs/fs.h>
#include <perf/mmap.h>
#include "tests.h"
#include "llvm.h"
#include "debug.h"
#include "parse-events.h"
#include "util/mmap.h"
#define NR_ITERS 111
#define PERF_TEST_BPF_PATH "/sys/fs/bpf/perf_test"
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
#include <linux/bpf.h>
#include <bpf/bpf.h>
static int epoll_pwait_loop(void)
{
struct epoll_event events;
int i;
/* Should fail NR_ITERS times */
for (i = 0; i < NR_ITERS; i++)
epoll_pwait(-(i + 1), &events, 0, 0, NULL);
return 0;
}
#ifdef HAVE_BPF_PROLOGUE
static int llseek_loop(void)
{
int fds[2], i;
fds[0] = open("/dev/null", O_RDONLY);
fds[1] = open("/dev/null", O_RDWR);
if (fds[0] < 0 || fds[1] < 0)
return -1;
for (i = 0; i < NR_ITERS; i++) {
lseek(fds[i % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET);
lseek(fds[(i + 1) % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET);
}
close(fds[0]);
close(fds[1]);
return 0;
}
#endif
static struct {
enum test_llvm__testcase prog_id;
const char *name;
const char *msg_compile_fail;
const char *msg_load_fail;
int (*target_func)(void);
int expect_result;
bool pin;
} bpf_testcase_table[] = {
{
.prog_id = LLVM_TESTCASE_BASE,
.name = "[basic_bpf_test]",
.msg_compile_fail = "fix 'perf test LLVM' first",
.msg_load_fail = "load bpf object failed",
.target_func = &epoll_pwait_loop,
.expect_result = (NR_ITERS + 1) / 2,
},
{
.prog_id = LLVM_TESTCASE_BASE,
.name = "[bpf_pinning]",
.msg_compile_fail = "fix kbuild first",
.msg_load_fail = "check your vmlinux setting?",
.target_func = &epoll_pwait_loop,
.expect_result = (NR_ITERS + 1) / 2,
.pin = true,
},
#ifdef HAVE_BPF_PROLOGUE
{
.prog_id = LLVM_TESTCASE_BPF_PROLOGUE,
.name = "[bpf_prologue_test]",
.msg_compile_fail = "fix kbuild first",
.msg_load_fail = "check your vmlinux setting?",
.target_func = &llseek_loop,
.expect_result = (NR_ITERS + 1) / 4,
},
#endif
};
static int do_test(struct bpf_object *obj, int (*func)(void),
int expect)
{
struct record_opts opts = {
.target = {
.uid = UINT_MAX,
.uses_mmap = true,
},
.freq = 0,
.mmap_pages = 256,
.default_interval = 1,
};
char pid[16];
char sbuf[STRERR_BUFSIZE];
struct evlist *evlist;
int i, ret = TEST_FAIL, err = 0, count = 0;
struct parse_events_state parse_state;
struct parse_events_error parse_error;
parse_events_error__init(&parse_error);
bzero(&parse_state, sizeof(parse_state));
parse_state.error = &parse_error;
INIT_LIST_HEAD(&parse_state.list);
err = parse_events_load_bpf_obj(&parse_state, &parse_state.list, obj, NULL, NULL);
parse_events_error__exit(&parse_error);
if (err == -ENODATA) {
pr_debug("Failed to add events selected by BPF, debuginfo package not installed\n");
return TEST_SKIP;
}
if (err || list_empty(&parse_state.list)) {
pr_debug("Failed to add events selected by BPF\n");
return TEST_FAIL;
}
snprintf(pid, sizeof(pid), "%d", getpid());
pid[sizeof(pid) - 1] = '\0';
opts.target.tid = opts.target.pid = pid;
/* Instead of evlist__new_default, don't add default events */
evlist = evlist__new();
if (!evlist) {
pr_debug("Not enough memory to create evlist\n");
return TEST_FAIL;
}
err = evlist__create_maps(evlist, &opts.target);
if (err < 0) {
pr_debug("Not enough memory to create thread/cpu maps\n");
goto out_delete_evlist;
}
evlist__splice_list_tail(evlist, &parse_state.list);
evlist__config(evlist, &opts, NULL);
err = evlist__open(evlist);
if (err < 0) {
pr_debug("perf_evlist__open: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
goto out_delete_evlist;
}
err = evlist__mmap(evlist, opts.mmap_pages);
if (err < 0) {
pr_debug("evlist__mmap: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
goto out_delete_evlist;
}
evlist__enable(evlist);
(*func)();
evlist__disable(evlist);
for (i = 0; i < evlist->core.nr_mmaps; i++) {
union perf_event *event;
struct mmap *md;
md = &evlist->mmap[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
while ((event = perf_mmap__read_event(&md->core)) != NULL) {
const u32 type = event->header.type;
if (type == PERF_RECORD_SAMPLE)
count ++;
}
perf_mmap__read_done(&md->core);
}
if (count != expect * evlist->core.nr_entries) {
pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect * evlist->core.nr_entries, count);
goto out_delete_evlist;
}
ret = TEST_OK;
out_delete_evlist:
evlist__delete(evlist);
return ret;
}
static struct bpf_object *
prepare_bpf(void *obj_buf, size_t obj_buf_sz, const char *name)
{
struct bpf_object *obj;
obj = bpf__prepare_load_buffer(obj_buf, obj_buf_sz, name);
if (IS_ERR(obj)) {
pr_debug("Compile BPF program failed.\n");
return NULL;
}
return obj;
}
static int __test__bpf(int idx)
{
int ret;
void *obj_buf;
size_t obj_buf_sz;
struct bpf_object *obj;
ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz,
bpf_testcase_table[idx].prog_id,
false, NULL);
if (ret != TEST_OK || !obj_buf || !obj_buf_sz) {
pr_debug("Unable to get BPF object, %s\n",
bpf_testcase_table[idx].msg_compile_fail);
if ((idx == 0) || (ret == TEST_SKIP))
return TEST_SKIP;
else
return TEST_FAIL;
}
obj = prepare_bpf(obj_buf, obj_buf_sz,
bpf_testcase_table[idx].name);
if ((!!bpf_testcase_table[idx].target_func) != (!!obj)) {
if (!obj)
pr_debug("Fail to load BPF object: %s\n",
bpf_testcase_table[idx].msg_load_fail);
else
pr_debug("Success unexpectedly: %s\n",
bpf_testcase_table[idx].msg_load_fail);
ret = TEST_FAIL;
goto out;
}
if (obj) {
ret = do_test(obj,
bpf_testcase_table[idx].target_func,
bpf_testcase_table[idx].expect_result);
if (ret != TEST_OK)
goto out;
if (bpf_testcase_table[idx].pin) {
int err;
if (!bpf_fs__mount()) {
pr_debug("BPF filesystem not mounted\n");
ret = TEST_FAIL;
goto out;
}
err = mkdir(PERF_TEST_BPF_PATH, 0777);
if (err && errno != EEXIST) {
pr_debug("Failed to make perf_test dir: %s\n",
strerror(errno));
ret = TEST_FAIL;
goto out;
}
if (bpf_object__pin(obj, PERF_TEST_BPF_PATH))
ret = TEST_FAIL;
if (rm_rf(PERF_TEST_BPF_PATH))
ret = TEST_FAIL;
}
}
out:
free(obj_buf);
bpf__clear();
return ret;
}
static int check_env(void)
{
LIBBPF_OPTS(bpf_prog_load_opts, opts);
int err;
char license[] = "GPL";
struct bpf_insn insns[] = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_EXIT_INSN(),
};
err = fetch_kernel_version(&opts.kern_version, NULL, 0);
if (err) {
pr_debug("Unable to get kernel version\n");
return err;
}
err = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, license, insns,
ARRAY_SIZE(insns), &opts);
if (err < 0) {
pr_err("Missing basic BPF support, skip this test: %s\n",
strerror(errno));
return err;
}
close(err);
return 0;
}
static int test__bpf(int i)
{
int err;
if (i < 0 || i >= (int)ARRAY_SIZE(bpf_testcase_table))
return TEST_FAIL;
if (geteuid() != 0) {
pr_debug("Only root can run BPF test\n");
return TEST_SKIP;
}
if (check_env())
return TEST_SKIP;
err = __test__bpf(i);
return err;
}
#endif
static int test__basic_bpf_test(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
return test__bpf(0);
#else
pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__bpf_pinning(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
return test__bpf(1);
#else
pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__bpf_prologue_test(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_BPF_PROLOGUE) && defined(HAVE_LIBTRACEEVENT)
return test__bpf(2);
#else
pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n");
return TEST_SKIP;
#endif
}
static struct test_case bpf_tests[] = {
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
TEST_CASE("Basic BPF filtering", basic_bpf_test),
TEST_CASE_REASON("BPF pinning", bpf_pinning,
"clang isn't installed or environment missing BPF support"),
#ifdef HAVE_BPF_PROLOGUE
TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test,
"clang/debuginfo isn't installed or environment missing BPF support"),
#else
TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in"),
#endif
#else
TEST_CASE_REASON("Basic BPF filtering", basic_bpf_test, "not compiled in or missing libtraceevent support"),
TEST_CASE_REASON("BPF pinning", bpf_pinning, "not compiled in or missing libtraceevent support"),
TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in or missing libtraceevent support"),
#endif
{ .name = NULL, }
};
struct test_suite suite__bpf = {
.desc = "BPF filter",
.test_cases = bpf_tests,
};
@@ -92,9 +92,7 @@ static struct test_suite *generic_tests[] = {
&suite__fdarray__add,
&suite__kmod_path__parse,
&suite__thread_map,
&suite__llvm,
&suite__session_topology,
&suite__bpf,
&suite__thread_map_synthesize,
&suite__thread_map_remove,
&suite__cpu_map,
...
// SPDX-License-Identifier: GPL-2.0
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "tests.h"
#include "debug.h"
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/libbpf.h>
#include <util/llvm-utils.h>
#include "llvm.h"
static int test__bpf_parsing(void *obj_buf, size_t obj_buf_sz)
{
struct bpf_object *obj;
obj = bpf_object__open_mem(obj_buf, obj_buf_sz, NULL);
if (libbpf_get_error(obj))
return TEST_FAIL;
bpf_object__close(obj);
return TEST_OK;
}
static struct {
const char *source;
const char *desc;
bool should_load_fail;
} bpf_source_table[__LLVM_TESTCASE_MAX] = {
[LLVM_TESTCASE_BASE] = {
.source = test_llvm__bpf_base_prog,
.desc = "Basic BPF llvm compile",
},
[LLVM_TESTCASE_KBUILD] = {
.source = test_llvm__bpf_test_kbuild_prog,
.desc = "kbuild searching",
},
[LLVM_TESTCASE_BPF_PROLOGUE] = {
.source = test_llvm__bpf_test_prologue_prog,
.desc = "Compile source for BPF prologue generation",
},
[LLVM_TESTCASE_BPF_RELOCATION] = {
.source = test_llvm__bpf_test_relocation,
.desc = "Compile source for BPF relocation",
.should_load_fail = true,
},
};
int
test_llvm__fetch_bpf_obj(void **p_obj_buf,
size_t *p_obj_buf_sz,
enum test_llvm__testcase idx,
bool force,
bool *should_load_fail)
{
const char *source;
const char *desc;
const char *tmpl_old, *clang_opt_old;
char *tmpl_new = NULL, *clang_opt_new = NULL;
int err, old_verbose, ret = TEST_FAIL;
if (idx >= __LLVM_TESTCASE_MAX)
return TEST_FAIL;
source = bpf_source_table[idx].source;
desc = bpf_source_table[idx].desc;
if (should_load_fail)
*should_load_fail = bpf_source_table[idx].should_load_fail;
/*
* Skip this test if user's .perfconfig doesn't set [llvm] section
* and clang is not found in $PATH
*/
if (!force && (!llvm_param.user_set_param &&
llvm__search_clang())) {
pr_debug("No clang, skip this test\n");
return TEST_SKIP;
}
/*
* llvm is verbosity when error. Suppress all error output if
* not 'perf test -v'.
*/
old_verbose = verbose;
if (verbose == 0)
verbose = -1;
*p_obj_buf = NULL;
*p_obj_buf_sz = 0;
if (!llvm_param.clang_bpf_cmd_template)
goto out;
if (!llvm_param.clang_opt)
llvm_param.clang_opt = strdup("");
err = asprintf(&tmpl_new, "echo '%s' | %s%s", source,
llvm_param.clang_bpf_cmd_template,
old_verbose ? "" : " 2>/dev/null");
if (err < 0)
goto out;
err = asprintf(&clang_opt_new, "-xc %s", llvm_param.clang_opt);
if (err < 0)
goto out;
tmpl_old = llvm_param.clang_bpf_cmd_template;
llvm_param.clang_bpf_cmd_template = tmpl_new;
clang_opt_old = llvm_param.clang_opt;
llvm_param.clang_opt = clang_opt_new;
err = llvm__compile_bpf("-", p_obj_buf, p_obj_buf_sz);
llvm_param.clang_bpf_cmd_template = tmpl_old;
llvm_param.clang_opt = clang_opt_old;
verbose = old_verbose;
if (err)
goto out;
ret = TEST_OK;
out:
free(tmpl_new);
free(clang_opt_new);
if (ret != TEST_OK)
pr_debug("Failed to compile test case: '%s'\n", desc);
return ret;
}
static int test__llvm(int subtest)
{
int ret;
void *obj_buf = NULL;
size_t obj_buf_sz = 0;
bool should_load_fail = false;
if ((subtest < 0) || (subtest >= __LLVM_TESTCASE_MAX))
return TEST_FAIL;
ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz,
subtest, false, &should_load_fail);
if (ret == TEST_OK && !should_load_fail) {
ret = test__bpf_parsing(obj_buf, obj_buf_sz);
if (ret != TEST_OK) {
pr_debug("Failed to parse test case '%s'\n",
bpf_source_table[subtest].desc);
}
}
free(obj_buf);
return ret;
}
#endif //HAVE_LIBBPF_SUPPORT
static int test__llvm__bpf_base_prog(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_BASE);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__llvm__bpf_test_kbuild_prog(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_KBUILD);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__llvm__bpf_test_prologue_prog(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_BPF_PROLOGUE);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__llvm__bpf_test_relocation(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_BPF_RELOCATION);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static struct test_case llvm_tests[] = {
#ifdef HAVE_LIBBPF_SUPPORT
TEST_CASE("Basic BPF llvm compile", llvm__bpf_base_prog),
TEST_CASE("kbuild searching", llvm__bpf_test_kbuild_prog),
TEST_CASE("Compile source for BPF prologue generation",
llvm__bpf_test_prologue_prog),
TEST_CASE("Compile source for BPF relocation", llvm__bpf_test_relocation),
#else
TEST_CASE_REASON("Basic BPF llvm compile", llvm__bpf_base_prog, "not compiled in"),
TEST_CASE_REASON("kbuild searching", llvm__bpf_test_kbuild_prog, "not compiled in"),
TEST_CASE_REASON("Compile source for BPF prologue generation",
llvm__bpf_test_prologue_prog, "not compiled in"),
TEST_CASE_REASON("Compile source for BPF relocation",
llvm__bpf_test_relocation, "not compiled in"),
#endif
{ .name = NULL, }
};
struct test_suite suite__llvm = {
.desc = "LLVM search and compile",
.test_cases = llvm_tests,
};
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef PERF_TEST_LLVM_H
#define PERF_TEST_LLVM_H
#ifdef __cplusplus
extern "C" {
#endif
#include <stddef.h> /* for size_t */
#include <stdbool.h> /* for bool */
extern const char test_llvm__bpf_base_prog[];
extern const char test_llvm__bpf_test_kbuild_prog[];
extern const char test_llvm__bpf_test_prologue_prog[];
extern const char test_llvm__bpf_test_relocation[];
enum test_llvm__testcase {
LLVM_TESTCASE_BASE,
LLVM_TESTCASE_KBUILD,
LLVM_TESTCASE_BPF_PROLOGUE,
LLVM_TESTCASE_BPF_RELOCATION,
__LLVM_TESTCASE_MAX,
};
int test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz,
enum test_llvm__testcase index, bool force,
bool *should_load_fail);
#ifdef __cplusplus
}
#endif
#endif
@@ -113,7 +113,6 @@ DECLARE_SUITE(fdarray__filter);
DECLARE_SUITE(fdarray__add);
DECLARE_SUITE(kmod_path__parse);
DECLARE_SUITE(thread_map);
DECLARE_SUITE(llvm);
DECLARE_SUITE(bpf);
DECLARE_SUITE(session_topology);
DECLARE_SUITE(thread_map_synthesize);
@@ -129,7 +128,6 @@ DECLARE_SUITE(sdt_event);
DECLARE_SUITE(is_printable_array);
DECLARE_SUITE(bitmap_print);
DECLARE_SUITE(perf_hooks);
DECLARE_SUITE(clang);
DECLARE_SUITE(unit_number__scnprint);
DECLARE_SUITE(mem2node);
DECLARE_SUITE(maps__merge_in);
...
@@ -23,7 +23,6 @@ perf-y += evswitch.o
perf-y += find_bit.o
perf-y += get_current_dir_name.o
perf-y += levenshtein.o
perf-y += llvm-utils.o
perf-y += mmap.o
perf-y += memswap.o
perf-y += parse-events.o
@@ -150,7 +149,6 @@ perf-y += list_sort.o
perf-y += mutex.o
perf-y += sharded_mutex.o
perf-$(CONFIG_LIBBPF) += bpf-loader.o
perf-$(CONFIG_LIBBPF) += bpf_map.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter_cgroup.o
@@ -168,7 +166,6 @@ ifeq ($(CONFIG_LIBTRACEEVENT),y)
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_kwork.o
endif
perf-$(CONFIG_BPF_PROLOGUE) += bpf-prologue.o
perf-$(CONFIG_LIBELF) += symbol-elf.o
perf-$(CONFIG_LIBELF) += probe-file.o
perf-$(CONFIG_LIBELF) += probe-event.o
@@ -235,7 +232,6 @@ perf-$(CONFIG_LIBBPF) += bpf-utils.o
perf-$(CONFIG_LIBPFM4) += pfm.o
CFLAGS_config.o += -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))"
CFLAGS_llvm-utils.o += -DLIBBPF_INCLUDE_DIR="BUILD_STR($(libbpf_include_dir_SQ))"
# avoid compiler warnings in 32-bit mode
CFLAGS_genelf_debug.o += -Wno-packed
@@ -327,7 +323,7 @@ ifeq ($(BISON_LT_381),1)
bison_flags += -DYYNOMEM=YYABORT
endif
CFLAGS_parse-events-flex.o += $(flex_flags) -Wno-unused-label
CFLAGS_pmu-flex.o += $(flex_flags)
CFLAGS_expr-flex.o += $(flex_flags)
CFLAGS_bpf-filter-flex.o += $(flex_flags)
...
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015, Wang Nan <wangnan0@huawei.com>
* Copyright (C) 2015, Huawei Inc.
*/
#ifndef __BPF_LOADER_H
#define __BPF_LOADER_H
#include <linux/compiler.h>
#include <linux/err.h>
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/libbpf.h>
enum bpf_loader_errno {
__BPF_LOADER_ERRNO__START = __LIBBPF_ERRNO__START - 100,
/* Invalid config string */
BPF_LOADER_ERRNO__CONFIG = __BPF_LOADER_ERRNO__START,
BPF_LOADER_ERRNO__GROUP, /* Invalid group name */
BPF_LOADER_ERRNO__EVENTNAME, /* Event name is missing */
BPF_LOADER_ERRNO__INTERNAL, /* BPF loader internal error */
BPF_LOADER_ERRNO__COMPILE, /* Error when compiling BPF scriptlet */
BPF_LOADER_ERRNO__PROGCONF_TERM,/* Invalid program config term in config string */
BPF_LOADER_ERRNO__PROLOGUE, /* Failed to generate prologue */
BPF_LOADER_ERRNO__PROLOGUE2BIG, /* Prologue too big for program */
BPF_LOADER_ERRNO__PROLOGUEOOB, /* Offset out of bound for prologue */
BPF_LOADER_ERRNO__OBJCONF_OPT, /* Invalid object config option */
BPF_LOADER_ERRNO__OBJCONF_CONF, /* Config value not set (lost '=')) */
BPF_LOADER_ERRNO__OBJCONF_MAP_OPT, /* Invalid object map config option */
BPF_LOADER_ERRNO__OBJCONF_MAP_NOTEXIST, /* Target map not exist */
BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE, /* Incorrect value type for map */
BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE, /* Incorrect map type */
BPF_LOADER_ERRNO__OBJCONF_MAP_KEYSIZE, /* Incorrect map key size */
BPF_LOADER_ERRNO__OBJCONF_MAP_VALUESIZE,/* Incorrect map value size */
BPF_LOADER_ERRNO__OBJCONF_MAP_NOEVT, /* Event not found for map setting */
BPF_LOADER_ERRNO__OBJCONF_MAP_MAPSIZE, /* Invalid map size for event setting */
BPF_LOADER_ERRNO__OBJCONF_MAP_EVTDIM, /* Event dimension too large */
BPF_LOADER_ERRNO__OBJCONF_MAP_EVTINH, /* Doesn't support inherit event */
BPF_LOADER_ERRNO__OBJCONF_MAP_EVTTYPE, /* Wrong event type for map */
BPF_LOADER_ERRNO__OBJCONF_MAP_IDX2BIG, /* Index too large */
__BPF_LOADER_ERRNO__END,
};
#endif // HAVE_LIBBPF_SUPPORT
struct evsel;
struct evlist;
struct bpf_object;
struct parse_events_term;
#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
typedef int (*bpf_prog_iter_callback_t)(const char *group, const char *event,
int fd, struct bpf_object *obj, void *arg);
#ifdef HAVE_LIBBPF_SUPPORT
struct bpf_object *bpf__prepare_load(const char *filename, bool source);
int bpf__strerror_prepare_load(const char *filename, bool source,
int err, char *buf, size_t size);
struct bpf_object *bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz,
const char *name);
void bpf__clear(void);
int bpf__probe(struct bpf_object *obj);
int bpf__unprobe(struct bpf_object *obj);
int bpf__strerror_probe(struct bpf_object *obj, int err,
char *buf, size_t size);
int bpf__load(struct bpf_object *obj);
int bpf__strerror_load(struct bpf_object *obj, int err,
char *buf, size_t size);
int bpf__foreach_event(struct bpf_object *obj,
bpf_prog_iter_callback_t func, void *arg);
int bpf__config_obj(struct bpf_object *obj, struct parse_events_term *term,
struct evlist *evlist, int *error_pos);
int bpf__strerror_config_obj(struct bpf_object *obj,
struct parse_events_term *term,
struct evlist *evlist,
int *error_pos, int err, char *buf,
size_t size);
int bpf__apply_obj_config(void);
int bpf__strerror_apply_obj_config(int err, char *buf, size_t size);
int bpf__setup_stdout(struct evlist *evlist);
struct evsel *bpf__setup_output_event(struct evlist *evlist, const char *name);
int bpf__strerror_setup_output_event(struct evlist *evlist, int err, char *buf, size_t size);
#else
#include <errno.h>
#include <string.h>
#include "debug.h"
static inline struct bpf_object *
bpf__prepare_load(const char *filename __maybe_unused,
bool source __maybe_unused)
{
pr_debug("ERROR: eBPF object loading is disabled during compiling.\n");
return ERR_PTR(-ENOTSUP);
}
static inline struct bpf_object *
bpf__prepare_load_buffer(void *obj_buf __maybe_unused,
size_t obj_buf_sz __maybe_unused)
{
return ERR_PTR(-ENOTSUP);
}
static inline void bpf__clear(void) { }
static inline int bpf__probe(struct bpf_object *obj __maybe_unused) { return 0;}
static inline int bpf__unprobe(struct bpf_object *obj __maybe_unused) { return 0;}
static inline int bpf__load(struct bpf_object *obj __maybe_unused) { return 0; }
static inline int
bpf__foreach_event(struct bpf_object *obj __maybe_unused,
bpf_prog_iter_callback_t func __maybe_unused,
void *arg __maybe_unused)
{
return 0;
}
static inline int
bpf__config_obj(struct bpf_object *obj __maybe_unused,
struct parse_events_term *term __maybe_unused,
struct evlist *evlist __maybe_unused,
int *error_pos __maybe_unused)
{
return 0;
}
static inline int
bpf__apply_obj_config(void)
{
return 0;
}
static inline int
bpf__setup_stdout(struct evlist *evlist __maybe_unused)
{
return 0;
}
static inline struct evsel *
bpf__setup_output_event(struct evlist *evlist __maybe_unused, const char *name __maybe_unused)
{
return NULL;
}
static inline int
__bpf_strerror(char *buf, size_t size)
{
if (!size)
return 0;
strncpy(buf,
"ERROR: eBPF object loading is disabled during compiling.\n",
size);
buf[size - 1] = '\0';
return 0;
}
static inline
int bpf__strerror_prepare_load(const char *filename __maybe_unused,
bool source __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_probe(struct bpf_object *obj __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int bpf__strerror_load(struct bpf_object *obj __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_config_obj(struct bpf_object *obj __maybe_unused,
struct parse_events_term *term __maybe_unused,
struct evlist *evlist __maybe_unused,
int *error_pos __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_apply_obj_config(int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_setup_output_event(struct evlist *evlist __maybe_unused,
int err __maybe_unused, char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
#endif
static inline int bpf__strerror_setup_stdout(struct evlist *evlist, int err, char *buf, size_t size)
{
return bpf__strerror_setup_output_event(evlist, err, buf, size);
}
#endif
@@ -16,7 +16,6 @@
#include <subcmd/exec-cmd.h>
#include "util/event.h" /* proc_map_timeout */
#include "util/hist.h" /* perf_hist_config */
#include "util/llvm-utils.h" /* perf_llvm_config */
#include "util/stat.h" /* perf_stat__set_big_num */ #include "util/stat.h" /* perf_stat__set_big_num */
#include "util/evsel.h" /* evsel__hw_names, evsel__use_bpf_counters */ #include "util/evsel.h" /* evsel__hw_names, evsel__use_bpf_counters */
#include "util/srcline.h" /* addr2line_timeout_ms */ #include "util/srcline.h" /* addr2line_timeout_ms */
...@@ -486,9 +485,6 @@ int perf_default_config(const char *var, const char *value, ...@@ -486,9 +485,6 @@ int perf_default_config(const char *var, const char *value,
if (strstarts(var, "call-graph.")) if (strstarts(var, "call-graph."))
return perf_callchain_config(var, value); return perf_callchain_config(var, value);
if (strstarts(var, "llvm."))
return perf_llvm_config(var, value);
if (strstarts(var, "buildid.")) if (strstarts(var, "buildid."))
return perf_buildid_config(var, value); return perf_buildid_config(var, value);
......
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015, Wang Nan <wangnan0@huawei.com>
* Copyright (C) 2015, Huawei Inc.
*/
#ifndef __LLVM_UTILS_H
#define __LLVM_UTILS_H
#include <stdbool.h>
struct llvm_param {
/* Path of clang executable */
const char *clang_path;
/* Path of llc executable */
const char *llc_path;
/*
* Template of clang bpf compiling. 5 env variables
* can be used:
* $CLANG_EXEC: Path to clang.
* $CLANG_OPTIONS: Extra options to clang.
* $KERNEL_INC_OPTIONS: Kernel include directories.
* $WORKING_DIR: Kernel source directory.
* $CLANG_SOURCE: Source file to be compiled.
*/
const char *clang_bpf_cmd_template;
/* Will be filled in $CLANG_OPTIONS */
const char *clang_opt;
/*
* If present it'll add -emit-llvm to $CLANG_OPTIONS to pipe
* the clang output to llc, useful for new llvm options not
* yet selectable via 'clang -mllvm option', such as -mattr=dwarfris
* in clang 6.0/llvm 7
*/
const char *opts;
/* Where to find kbuild system */
const char *kbuild_dir;
/*
* Arguments passed to make, like 'ARCH=arm' if doing cross
* compiling. Should not be used for dynamic compiling.
*/
const char *kbuild_opts;
/*
* Default is false. If set to true, write compiling result
* to object file.
*/
bool dump_obj;
/*
* Default is false. If one of the above fields is set by user
* explicitly then user_set_llvm is set to true. This is used
* for perf test. If user doesn't set anything in .perfconfig
* and clang is not found, don't trigger llvm test.
*/
bool user_set_param;
};
extern struct llvm_param llvm_param;
int perf_llvm_config(const char *var, const char *value);
int llvm__compile_bpf(const char *path, void **p_obj_buf, size_t *p_obj_buf_sz);
/* This function is for test__llvm() use only */
int llvm__search_clang(void);
/* Following functions are reused by builtin clang support */
void llvm__get_kbuild_opts(char **kbuild_dir, char **kbuild_include_opts);
int llvm__get_nr_cpus(void);
void llvm__dump_obj(const char *path, void *obj_buf, size_t size);
#endif
@@ -14,7 +14,6 @@
#include "parse-events.h"
#include "string2.h"
#include "strlist.h"
#include "bpf-loader.h"
#include "debug.h" #include "debug.h"
#include <api/fs/tracing_path.h> #include <api/fs/tracing_path.h>
#include <perf/cpumap.h> #include <perf/cpumap.h>
...@@ -648,272 +647,6 @@ static int add_tracepoint_multi_sys(struct list_head *list, int *idx, ...@@ -648,272 +647,6 @@ static int add_tracepoint_multi_sys(struct list_head *list, int *idx,
} }
#endif /* HAVE_LIBTRACEEVENT */ #endif /* HAVE_LIBTRACEEVENT */
#ifdef HAVE_LIBBPF_SUPPORT
struct __add_bpf_event_param {
	struct parse_events_state *parse_state;
	struct list_head *list;
	struct list_head *head_config;
	YYLTYPE *loc;
};

static int add_bpf_event(const char *group, const char *event, int fd, struct bpf_object *obj,
			 void *_param)
{
	LIST_HEAD(new_evsels);
	struct __add_bpf_event_param *param = _param;
	struct parse_events_state *parse_state = param->parse_state;
	struct list_head *list = param->list;
	struct evsel *pos;
	int err;

	/*
	 * Check if we should add the event, i.e. if it is a TP but starts with a '!',
	 * then don't add the tracepoint, this will be used for something else, like
	 * adding to a BPF_MAP_TYPE_PROG_ARRAY.
	 *
	 * See tools/perf/examples/bpf/augmented_raw_syscalls.c
	 */
	if (group[0] == '!')
		return 0;

	pr_debug("add bpf event %s:%s and attach bpf program %d\n",
		 group, event, fd);

	err = parse_events_add_tracepoint(&new_evsels, &parse_state->idx, group,
					  event, parse_state->error,
					  param->head_config, param->loc);
	if (err) {
		struct evsel *evsel, *tmp;

		pr_debug("Failed to add BPF event %s:%s\n",
			 group, event);
		list_for_each_entry_safe(evsel, tmp, &new_evsels, core.node) {
			list_del_init(&evsel->core.node);
			evsel__delete(evsel);
		}
		return err;
	}
	pr_debug("adding %s:%s\n", group, event);

	list_for_each_entry(pos, &new_evsels, core.node) {
		pr_debug("adding %s:%s to %p\n",
			 group, event, pos);
		pos->bpf_fd = fd;
		pos->bpf_obj = obj;
	}
	list_splice(&new_evsels, list);
	return 0;
}
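
The '!' naming convention that add_bpf_event() checks for above is easiest to see from the program side. Here is a hedged sketch of what such a BPF program could look like; the section name, context type and body are illustrative assumptions, not the actual contents of augmented_raw_syscalls.c.

```c
#include <bpf/bpf_helpers.h>

/*
 * Because the section ("group") name starts with '!', the loader above did
 * not create a tracepoint evsel for this program; it was instead intended to
 * be dropped into a BPF_MAP_TYPE_PROG_ARRAY and reached via tail calls.
 */
SEC("!syscalls:unaugmented")
int unaugmented(void *ctx)
{
	return 1;
}

char _license[] SEC("license") = "GPL";
```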
int parse_events_load_bpf_obj(struct parse_events_state *parse_state,
			      struct list_head *list,
			      struct bpf_object *obj,
			      struct list_head *head_config,
			      void *loc)
{
	int err;
	char errbuf[BUFSIZ];
	struct __add_bpf_event_param param = {parse_state, list, head_config, loc};
	static bool registered_unprobe_atexit = false;
	YYLTYPE test_loc = {.first_column = -1};

	if (IS_ERR(obj) || !obj) {
		snprintf(errbuf, sizeof(errbuf),
			 "Internal error: load bpf obj with NULL");
		err = -EINVAL;
		goto errout;
	}

	/*
	 * Register atexit handler before calling bpf__probe() so
	 * bpf__probe() don't need to unprobe probe points its already
	 * created when failure.
	 */
	if (!registered_unprobe_atexit) {
		atexit(bpf__clear);
		registered_unprobe_atexit = true;
	}

	err = bpf__probe(obj);
	if (err) {
		bpf__strerror_probe(obj, err, errbuf, sizeof(errbuf));
		goto errout;
	}

	err = bpf__load(obj);
	if (err) {
		bpf__strerror_load(obj, err, errbuf, sizeof(errbuf));
		goto errout;
	}

	if (!param.loc)
		param.loc = &test_loc;
	err = bpf__foreach_event(obj, add_bpf_event, &param);
	if (err) {
		snprintf(errbuf, sizeof(errbuf),
			 "Attach events in BPF object failed");
		goto errout;
	}

	return 0;

errout:
	parse_events_error__handle(parse_state->error, param.loc ? param.loc->first_column : 0,
				   strdup(errbuf), strdup("(add -v to see detail)"));
	return err;
}

static int
parse_events_config_bpf(struct parse_events_state *parse_state,
			struct bpf_object *obj,
			struct list_head *head_config)
{
	struct parse_events_term *term;
	int error_pos = 0;

	if (!head_config || list_empty(head_config))
		return 0;

	list_for_each_entry(term, head_config, list) {
		int err;

		if (term->type_term != PARSE_EVENTS__TERM_TYPE_USER) {
			parse_events_error__handle(parse_state->error, term->err_term,
						   strdup("Invalid config term for BPF object"),
						   NULL);
			return -EINVAL;
		}

		err = bpf__config_obj(obj, term, parse_state->evlist, &error_pos);
		if (err) {
			char errbuf[BUFSIZ];
			int idx;

			bpf__strerror_config_obj(obj, term, parse_state->evlist,
						 &error_pos, err, errbuf,
						 sizeof(errbuf));
			if (err == -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE)
				idx = term->err_val;
			else
				idx = term->err_term + error_pos;
			parse_events_error__handle(parse_state->error, idx,
						   strdup(errbuf),
						   NULL);
			return err;
		}
	}
	return 0;
}

/*
 * Split config terms:
 * perf record -e bpf.c/call-graph=fp,map:array.value[0]=1/ ...
 *
 *  'call-graph=fp' is 'evt config', should be applied to each
 *  events in bpf.c.
 *  'map:array.value[0]=1' is 'obj config', should be processed
 *  with parse_events_config_bpf.
 *
 * Move object config terms from the first list to obj_head_config.
 */
static void
split_bpf_config_terms(struct list_head *evt_head_config,
		       struct list_head *obj_head_config)
{
	struct parse_events_term *term, *temp;

	/*
	 * Currently, all possible user config term
	 * belong to bpf object. parse_events__is_hardcoded_term()
	 * happens to be a good flag.
	 *
	 * See parse_events_config_bpf() and
	 * config_term_tracepoint().
	 */
	list_for_each_entry_safe(term, temp, evt_head_config, list)
		if (!parse_events__is_hardcoded_term(term))
			list_move_tail(&term->list, obj_head_config);
}

int parse_events_load_bpf(struct parse_events_state *parse_state,
			  struct list_head *list,
			  char *bpf_file_name,
			  bool source,
			  struct list_head *head_config,
			  void *loc_)
{
	int err;
	struct bpf_object *obj;
	LIST_HEAD(obj_head_config);
	YYLTYPE *loc = loc_;

	if (head_config)
		split_bpf_config_terms(head_config, &obj_head_config);

	obj = bpf__prepare_load(bpf_file_name, source);
	if (IS_ERR(obj)) {
		char errbuf[BUFSIZ];

		err = PTR_ERR(obj);

		if (err == -ENOTSUP)
			snprintf(errbuf, sizeof(errbuf),
				 "BPF support is not compiled");
		else
			bpf__strerror_prepare_load(bpf_file_name,
						   source,
						   -err, errbuf,
						   sizeof(errbuf));

		parse_events_error__handle(parse_state->error, loc->first_column,
					   strdup(errbuf), strdup("(add -v to see detail)"));
		return err;
	}

	err = parse_events_load_bpf_obj(parse_state, list, obj, head_config, loc);
	if (err)
		return err;
	err = parse_events_config_bpf(parse_state, obj, &obj_head_config);

	/*
	 * Caller doesn't know anything about obj_head_config,
	 * so combine them together again before returning.
	 */
	if (head_config)
		list_splice_tail(&obj_head_config, head_config);
	return err;
}
#else // HAVE_LIBBPF_SUPPORT
int parse_events_load_bpf_obj(struct parse_events_state *parse_state,
			      struct list_head *list __maybe_unused,
			      struct bpf_object *obj __maybe_unused,
			      struct list_head *head_config __maybe_unused,
			      void *loc_)
{
	YYLTYPE *loc = loc_;

	parse_events_error__handle(parse_state->error, loc->first_column,
				   strdup("BPF support is not compiled"),
				   strdup("Make sure libbpf-devel is available at build time."));
	return -ENOTSUP;
}

int parse_events_load_bpf(struct parse_events_state *parse_state,
			  struct list_head *list __maybe_unused,
			  char *bpf_file_name __maybe_unused,
			  bool source __maybe_unused,
			  struct list_head *head_config __maybe_unused,
			  void *loc_)
{
	YYLTYPE *loc = loc_;

	parse_events_error__handle(parse_state->error, loc->first_column,
				   strdup("BPF support is not compiled"),
				   strdup("Make sure libbpf-devel is available at build time."));
	return -ENOTSUP;
}
#endif // HAVE_LIBBPF_SUPPORT

static int
parse_breakpoint_type(const char *type, struct perf_event_attr *attr)
{

@@ -2274,7 +2007,6 @@ int __parse_events(struct evlist *evlist, const char *str, const char *pmu_filte
		.list = LIST_HEAD_INIT(parse_state.list),
		.idx = evlist->core.nr_entries,
		.error = err,
		.evlist = evlist,
		.stoken = PE_START_EVENTS,
		.fake_pmu = fake_pmu,
		.pmu_filter = pmu_filter,
...
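
The term-splitting rule described in the split_bpf_config_terms() comment above (per-event terms such as 'call-graph=fp' versus object-level terms such as 'map:array.value[0]=1') can be illustrated with a small standalone sketch. The is_hardcoded_term() stand-in and its list of known terms below are simplified assumptions, not perf's parse_events__is_hardcoded_term().

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Simplified stand-in for parse_events__is_hardcoded_term(): treat a few
 * well-known per-event terms as "hardcoded"; anything else was assumed to
 * be BPF object configuration and routed to parse_events_config_bpf().
 */
static bool is_hardcoded_term(const char *term)
{
	static const char * const known[] = { "call-graph", "period", "freq" };

	for (size_t i = 0; i < sizeof(known) / sizeof(known[0]); i++) {
		size_t n = strlen(known[i]);

		if (!strncmp(term, known[i], n) && (term[n] == '=' || term[n] == '\0'))
			return true;
	}
	return false;
}

int main(void)
{
	/* Terms from the example string in the comment above:
	 * perf record -e bpf.c/call-graph=fp,map:array.value[0]=1/
	 */
	const char *terms[] = { "call-graph=fp", "map:array.value[0]=1" };

	for (size_t i = 0; i < sizeof(terms) / sizeof(terms[0]); i++)
		printf("%-24s -> %s\n", terms[i],
		       is_hardcoded_term(terms[i]) ? "per-event config" : "BPF object config");
	return 0;
}
```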
@@ -118,8 +118,6 @@ struct parse_events_state {
	int idx;
	/* Error information. */
	struct parse_events_error *error;
	/* Used by BPF event creation. */
	struct evlist *evlist;
	/* Holds returned terms for term parsing. */
	struct list_head *terms;
	/* Start token. */

@@ -160,19 +158,6 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
				const char *sys, const char *event,
				struct parse_events_error *error,
				struct list_head *head_config, void *loc);

int parse_events_load_bpf(struct parse_events_state *parse_state,
			  struct list_head *list,
			  char *bpf_file_name,
			  bool source,
			  struct list_head *head_config,
			  void *loc);

/* Provide this function for perf test */
struct bpf_object;
int parse_events_load_bpf_obj(struct parse_events_state *parse_state,
			      struct list_head *list,
			      struct bpf_object *obj,
			      struct list_head *head_config,
			      void *loc);

int parse_events_add_numeric(struct parse_events_state *parse_state,
			     struct list_head *list,
			     u32 type, u64 config,
...
@@ -68,31 +68,6 @@ static int lc_str(yyscan_t scanner, const struct parse_events_state *state)
	return str(scanner, state->match_legacy_cache_terms ? PE_LEGACY_CACHE : PE_NAME);
}

static bool isbpf_suffix(char *text)
{
	int len = strlen(text);

	if (len < 2)
		return false;
	if ((text[len - 1] == 'c' || text[len - 1] == 'o') &&
	    text[len - 2] == '.')
		return true;
	if (len > 4 && !strcmp(text + len - 4, ".obj"))
		return true;
	return false;
}

static bool isbpf(yyscan_t scanner)
{
	char *text = parse_events_get_text(scanner);
	struct stat st;

	if (!isbpf_suffix(text))
		return false;

	return stat(text, &st) == 0;
}
/*
 * This function is called when the parser gets two kind of input:
 *

@@ -179,8 +154,6 @@ do { \

group		[^,{}/]*[{][^}]*[}][^,{}/]*
event_pmu	[^,{}/]+[/][^/]*[/][^,{}/]*
event		[^,{}/]+

bpf_object	[^,{}]+\.(o|bpf)[a-zA-Z0-9._]*
bpf_source	[^,{}]+\.c[a-zA-Z0-9._]*

num_dec		[0-9]+
num_hex		0x[a-fA-F0-9]+

@@ -233,8 +206,6 @@ non_digit	[^0-9]
		}
{event_pmu}	|
{bpf_object}	|
{bpf_source}	|
{event}		{
			BEGIN(INITIAL);
			REWIND(1);

@@ -363,8 +334,6 @@ r{num_raw_hex}	{ return str(yyscanner, PE_RAW); }
{num_hex}	{ return value(yyscanner, 16); }

{modifier_event}	{ return str(yyscanner, PE_MODIFIER_EVENT); }
{bpf_object}		{ if (!isbpf(yyscanner)) { USER_REJECT }; return str(yyscanner, PE_BPF_OBJECT); }
{bpf_source}		{ if (!isbpf(yyscanner)) { USER_REJECT }; return str(yyscanner, PE_BPF_SOURCE); }
{name}			{ return str(yyscanner, PE_NAME); }
{name_tag}		{ return str(yyscanner, PE_NAME); }
"/"			{ BEGIN(config); return '/'; }
...
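
For reference, the file-name heuristic behind the removed {bpf_object}/{bpf_source} rules (isbpf_suffix() plus a stat() existence check, shown earlier in this hunk) accepted names ending in '.c', '.o' or '.obj'. Below is a standalone sketch of the suffix part only; the file names in main() are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Copy of the suffix test from the removed isbpf_suffix(); the real scanner
 * additionally required the file to exist (stat() == 0). */
static bool isbpf_suffix(const char *text)
{
	int len = strlen(text);

	if (len < 2)
		return false;
	if ((text[len - 1] == 'c' || text[len - 1] == 'o') && text[len - 2] == '.')
		return true;
	if (len > 4 && !strcmp(text + len - 4, ".obj"))
		return true;
	return false;
}

int main(void)
{
	assert(isbpf_suffix("augmented_raw_syscalls.c")); /* BPF source   */
	assert(isbpf_suffix("prog.o"));                   /* BPF object   */
	assert(isbpf_suffix("prog.bpf.obj"));             /* .obj variant */
	assert(!isbpf_suffix("cycles"));                  /* plain event  */
	return 0;
}
```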
@@ -60,7 +60,6 @@ static void free_list_evsel(struct list_head* list_evsel)
%token PE_VALUE_SYM_TOOL
%token PE_EVENT_NAME
%token PE_RAW PE_NAME
%token PE_BPF_OBJECT PE_BPF_SOURCE
%token PE_MODIFIER_EVENT PE_MODIFIER_BP PE_BP_COLON PE_BP_SLASH
%token PE_LEGACY_CACHE
%token PE_PREFIX_MEM

@@ -75,8 +74,6 @@ static void free_list_evsel(struct list_head* list_evsel)
%type <num> value_sym
%type <str> PE_RAW
%type <str> PE_NAME
%type <str> PE_BPF_OBJECT
%type <str> PE_BPF_SOURCE
%type <str> PE_LEGACY_CACHE
%type <str> PE_MODIFIER_EVENT
%type <str> PE_MODIFIER_BP

@@ -97,7 +94,6 @@ static void free_list_evsel(struct list_head* list_evsel)
%type <list_evsel> event_legacy_tracepoint
%type <list_evsel> event_legacy_numeric
%type <list_evsel> event_legacy_raw
%type <list_evsel> event_bpf_file
%type <list_evsel> event_def
%type <list_evsel> event_mod
%type <list_evsel> event_name

@@ -271,8 +267,7 @@ event_def: event_pmu |
	   event_legacy_mem sep_dc |
	   event_legacy_tracepoint sep_dc |
	   event_legacy_numeric sep_dc |
	   event_legacy_raw sep_dc |
	   event_bpf_file

event_pmu:
PE_NAME opt_pmu_config

@@ -620,43 +615,6 @@ PE_RAW opt_event_config
	$$ = list;
}

event_bpf_file:
PE_BPF_OBJECT opt_event_config
{
	struct parse_events_state *parse_state = _parse_state;
	struct list_head *list;
	int err;

	list = alloc_list();
	if (!list)
		YYNOMEM;
	err = parse_events_load_bpf(parse_state, list, $1, false, $2, &@1);
	parse_events_terms__delete($2);
	free($1);
	if (err) {
		free(list);
		PE_ABORT(err);
	}
	$$ = list;
}
|
PE_BPF_SOURCE opt_event_config
{
	struct list_head *list;
	int err;

	list = alloc_list();
	if (!list)
		YYNOMEM;
	err = parse_events_load_bpf(_parse_state, list, $1, true, $2, &@1);
	parse_events_terms__delete($2);
	if (err) {
		free(list);
		PE_ABORT(err);
	}
	$$ = list;
}

opt_event_config:
'/' event_config '/'
{
...