- 06 Jul, 2022 4 commits
-
-
Chuang Wang authored
When an error is returned after add_uprobe_event_legacy() in perf_event_uprobe_open_legacy(), or when bpf_program__attach_perf_event_opts() in bpf_program__attach_uprobe_opts() returns an error, the previously created uprobe_event is never cleaned up. Fix this by calling remove_uprobe_event_legacy() on those error paths.

Signed-off-by: Chuang Wang <nashuiliang@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220629151848.65587-4-nashuiliang@gmail.com
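For illustration, the cleanup-on-error pattern this fix applies looks roughly like the following standalone sketch (function names and the err_clean_legacy label are illustrative, not the exact libbpf diff):

  #include <stdio.h>
  #include <stdlib.h>

  /* stand-ins for the tracefs create/remove and perf_event_open steps */
  static int create_legacy_event(void)  { puts("uprobe event created"); return 0; }
  static void remove_legacy_event(void) { puts("uprobe event removed"); }
  static int open_perf_event(void)      { return -1; /* simulated failure */ }

  static int attach(void)
  {
          int err = create_legacy_event();

          if (err)
                  return err;

          err = open_perf_event();
          if (err)
                  goto err_clean_legacy;
          return 0;

  err_clean_legacy:
          /* the step the patch adds: undo the tracefs event on failure */
          remove_legacy_event();
          return err;
  }

  int main(void)
  {
          return attach() ? EXIT_FAILURE : EXIT_SUCCESS;
  }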
-
Chuang Wang authored
Use "type" as opposed to "err" in pr_warn() after determine_uprobe_perf_type_legacy() returns an error. Signed-off-by:
Chuang Wang <nashuiliang@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220629151848.65587-3-nashuiliang@gmail.com
-
Chuang Wang authored
Before commit 0bc11ed5 ("kprobes: Allow kprobes coexist with livepatch") was merged, in a scenario where livepatch and a kprobe coexist on the same function entry, creating a kprobe_event using add_kprobe_event_legacy() succeeds and a trace event (e.g. /debugfs/tracing/events/kprobe/XXX) exists, but perf_event_open() returns an error because both livepatch and kprobe use FTRACE_OPS_FL_IPMODIFY. To reproduce:

1) add a livepatch
   $ insmod livepatch-XXX.ko
2) add a kprobe using the tracefs API (i.e. add_kprobe_event_legacy)
   $ echo 'p:mykprobe XXX' > /sys/kernel/debug/tracing/kprobe_events
3) enable this kprobe (i.e. sys_perf_event_open), which fails with -EBUSY

As Andrii Nakryiko pointed out, a few error paths in bpf_program__attach_kprobe_opts() need to call remove_kprobe_event_legacy(). With this patch, whenever an error is returned after add_kprobe_event_legacy() or bpf_program__attach_perf_event_opts(), the created kprobe_event is cleaned up.

Signed-off-by: Chuang Wang <nashuiliang@gmail.com>
Signed-off-by: Jingren Zhou <zhoujingren@didiglobal.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220629151848.65587-2-nashuiliang@gmail.com
-
Daniel Müller authored
This patch adds support for the proposed type match relation to relo_core, where it is shared between userspace and kernel. It plumbs through both kernel-side and libbpf-side support. The matching relation is defined as follows (copy from source):

- modifiers and typedefs are stripped (and, hence, effectively ignored)
- generally speaking types need to be of same kind (struct vs. struct, union vs. union, etc.)
  - exceptions are struct/union behind a pointer which could also match a forward declaration of a struct or union, respectively, and enum vs. enum64 (see below)

Then, depending on type:
- integers:
  - match if size and signedness match
- arrays & pointers:
  - target types are recursively matched
- structs & unions:
  - local members need to exist in target with the same name
  - for each member we recursively check match unless it is already behind a pointer, in which case we only check matching names and compatible kind
- enums:
  - local variants have to have a match in target by symbolic name (but not numeric value)
  - size has to match (but enum may match enum64 and vice versa)
- function pointers:
  - number and position of arguments in local type has to match target
  - for each argument and the return value we recursively check match

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220628160127.607834-5-deso@posteo.net
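For a sense of how the relation is consumed from BPF code, a hedged sketch using the bpf_core_type_matches() convenience macro (assumed here to come from bpf_core_read.h as part of this series; the flavor name is illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  /* local candidate type; the ___flavor suffix is ignored when matching */
  struct task_struct___with_state {
          unsigned int __state;
  } __attribute__((preserve_access_index));

  SEC("tp_btf/sched_switch")
  int probe(void *ctx)
  {
          /* true only if the kernel's task_struct matches the local
           * definition according to the rules listed above */
          if (bpf_core_type_matches(struct task_struct___with_state))
                  bpf_printk("task_struct has unsigned int __state");
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";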
-
- 29 Jun, 2022 1 commit
-
-
Stanislav Fomichev authored
lsm_cgroup/ is the SEC() prefix for BPF_LSM_CGROUP programs.

Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20220628174314.1216643-9-sdf@google.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
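A hedged sketch of a program using the new prefix (hook name and the return-1-to-allow convention follow the BPF LSM selftests; treat both as assumptions):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("lsm_cgroup/socket_bind")
  int BPF_PROG(bind_guard, struct socket *sock, struct sockaddr *address,
               int addrlen)
  {
          return 1; /* assumed convention: 1 = allow, 0 = reject */
  }

  char LICENSE[] SEC("license") = "GPL";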
-
- 28 Jun, 2022 9 commits
-
-
Andrii Nakryiko authored
Remove support for legacy features and behaviors that previously had to be disabled by calling libbpf_set_strict_mode():
- legacy BPF map definitions are not supported now;
- RLIMIT_MEMLOCK auto-setting, if necessary, is always on (but see libbpf_set_memlock_rlim());
- program name is used for program pinning (instead of section name);
- cleaned up error returning logic;
- entry BPF programs should have SEC() always.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-15-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
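Since the RLIMIT_MEMLOCK bump is now always on, a caller that prefers to manage the limit itself can opt out; a minimal sketch, assuming libbpf_set_memlock_rlim(0) disables the automatic bump:

  #include <bpf/libbpf.h>

  int main(void)
  {
          /* assumption: passing 0 turns off libbpf's automatic
           * RLIMIT_MEMLOCK bump; the caller then owns the limit */
          libbpf_set_memlock_rlim(0);

          /* ... bpf_object__open()/load() as usual ... */
          return 0;
  }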
-
Andrii Nakryiko authored
Get rid of sloppy prefix logic and remove deprecated xdp_{devmap,cpumap} sections.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-13-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Clean up internals that had to deal with the possibility of multi-instance bpf_programs. Libbpf 1.0 doesn't support this, so all this is not necessary now and can be simplified.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-12-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Remove all the public APIs that are related to creating multi-instance bpf_programs through custom preprocessing callback and generally working with them. Also remove all the bpf_{object,map,program}__[set_]priv() APIs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-10-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Remove a bunch of high-level bpf_object/bpf_map/bpf_program related APIs. All the APIs related to private per-object/map/prog state, program preprocessing callback, and generally everything multi-instance related is removed in a separate patch.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-9-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Remove prog_info_linear-related APIs previously used by perf.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-8-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Remove deprecated perfbuf APIs and clean up opts structs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-7-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Get rid of deprecated BTF-related APIs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-6-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Drop low-level APIs as well as the high-level (and very confusingly named) BPF object loading bpf_prog_load_xattr() and bpf_prog_load_deprecated() APIs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 24 Jun, 2022 1 commit
-
-
Daniel Müller authored
BPF type compatibility checks (bpf_core_types_are_compat()) are currently duplicated between kernel and user space. That's a historical artifact more than intentional doing and can lead to subtle bugs where one implementation is adjusted but another is forgotten. That happened with the enum64 work, for example, where the libbpf side was changed (commit 23b2a3a8 ("libbpf: Add enum64 relocation support")) to use the btf_kind_core_compat() helper function but the kernel side was not (commit 6089fb32 ("bpf: Add btf enum64 support")). This patch addresses both the duplication issue, by merging both implementations and moving them into relo_core.c, and fixes the alluded-to kind check (by giving preference to libbpf's already adjusted logic). For discussion of the topic, please refer to: https://lore.kernel.org/bpf/CAADnVQKbWR7oarBdewgOBZUPzryhRYvEbkhyPJQHHuxq=0K1gw@mail.gmail.com/T/#mcc99f4a33ad9a322afaf1b9276fb1f0b7add9665

Changelog:
v1 -> v2:
- limited libbpf recursion limit to 32
- changed name to __bpf_core_types_are_compat
- included warning previously present in libbpf version
- merged kernel and user space changes into a single patch

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220623182934.2582827-1-deso@posteo.net
-
- 17 Jun, 2022 1 commit
-
-
Delyan Kratunov authored
Add section mappings for u(ret)probe.s programs.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Link: https://lore.kernel.org/r/aedbc3b74f3523f00010a7b0df8f3388cca59f16.1655248076.git.delyank@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
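A sleepable uprobe under the new section names might be declared as follows (the binary:function auto-attach spec and the target function are assumptions; the ".s" suffix marks the program sleepable):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  /* sleepable, so helpers like bpf_copy_from_user() become usable */
  SEC("uprobe.s//proc/self/exe:target_func")
  int BPF_KPROBE(handle_target)
  {
          bpf_printk("target_func hit");
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";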
-
- 14 Jun, 2022 1 commit
-
-
Yonghong Song authored
Andrii reported a bug with the following information:

  2859   if (enum64_placeholder_id == 0) {
  2860           enum64_placeholder_id = btf__add_int(btf, "enum64_placeholder", 1, 0);
  >>>  CID 394804: Control flow issues (NO_EFFECT)
  >>>  This less-than-zero comparison of an unsigned value is never true. "enum64_placeholder_id < 0U".
  2861           if (enum64_placeholder_id < 0)
  2862                   return enum64_placeholder_id;
  2863   ...

Here enum64_placeholder_id is declared as '__u32', so enum64_placeholder_id < 0 is always false. Declare enum64_placeholder_id as 'int' in order to capture the potential error properly.

Fixes: f2a62588 ("libbpf: Add enum64 sanitization")
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220613054314.1251905-1-yhs@fb.com
-
- 09 Jun, 2022 1 commit
-
-
Andrii Nakryiko authored
Fix libbpf's bpf_program__attach_uprobe() logic for determining a function's *file offset* (which is what the kernel actually expects) when attaching a uprobe/uretprobe by function name. Previously the calculation determined the virtual address offset relative to the base load address, which is not always the same as the file offset (though it frequently is, which is why this went unnoticed for a while).

Fixes: 433966e3 ("libbpf: Support function name-based attach uprobes")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Riham Selim <rihams@fb.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20220606220143.3796908-1-andrii@kernel.org
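The underlying translation can be sketched as: find the PT_LOAD segment containing the symbol's virtual address, then rebase onto that segment's file offset. An illustrative libelf version (not the exact libbpf code):

  #include <gelf.h>

  /* illustrative: translate a symbol's virtual address into the file
   * offset the kernel expects for uprobe attachment */
  long vaddr_to_file_off(Elf *elf, unsigned long vaddr)
  {
          GElf_Phdr phdr;
          size_t n, i;

          if (elf_getphdrnum(elf, &n))
                  return -1;
          for (i = 0; i < n; i++) {
                  if (!gelf_getphdr(elf, i, &phdr) || phdr.p_type != PT_LOAD)
                          continue;
                  if (vaddr >= phdr.p_vaddr && vaddr < phdr.p_vaddr + phdr.p_memsz)
                          return vaddr - phdr.p_vaddr + phdr.p_offset;
          }
          return -1; /* no containing PT_LOAD segment found */
  }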
-
- 07 Jun, 2022 2 commits
-
-
Yonghong Song authored
Add enum64 relocation support. The bpf local type can be either enum or enum64, and the remote type can be either enum or enum64 as well; all combinations of local enum/enum64 and remote enum/enum64 are supported.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220607062647.3721719-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
When an old kernel does not support enum64 but user-space BTF contains a non-zero enum kflag or enum64, libbpf needs to do proper sanitization so the modified BTF can be accepted by the kernel. Sanitization for the enum kflag can be achieved by clearing the kflag bit. For enum64, the type is replaced with a union of integer member types, where the integer member size must be smaller than the enum64 size. If such an integer type cannot be found, a new type is created and used for the union members.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220607062636.3721375-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 03 Jun, 2022 1 commit
-
-
Yuze Chi authored
Move the correct definition from linker.c into libbpf_internal.h.

Fixes: 0087a681 ("libbpf: Automatically fix up BPF_MAP_TYPE_RINGBUF size, if necessary")
Reported-by: Yuze Chi <chiyuze@google.com>
Signed-off-by: Yuze Chi <chiyuze@google.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220603055156.2830463-1-irogers@google.com
-
- 02 Jun, 2022 4 commits
-
-
Daniel Müller authored
This change introduces a new function, libbpf_bpf_link_type_str, to the public libbpf API. The function allows users to get a string representation for a bpf_link_type enum variant.

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220523230428.3077108-11-deso@posteo.net
-
Daniel Müller authored
This change introduces a new function, libbpf_bpf_attach_type_str, to the public libbpf API. The function allows users to get a string representation for a bpf_attach_type variant.

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220523230428.3077108-8-deso@posteo.net
-
Daniel Müller authored
This change introduces a new function, libbpf_bpf_map_type_str, to the public libbpf API. The function allows users to get a string representation for a bpf_map_type enum variant.

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220523230428.3077108-5-deso@posteo.net
-
Daniel Müller authored
This change introduces a new function, libbpf_bpf_prog_type_str, to the public libbpf API. The function allows users to get a string representation for a bpf_prog_type variant.

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220523230428.3077108-2-deso@posteo.net
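Taken together, the four helpers added in this series share one shape; a usage sketch (assuming each returns NULL for unknown variants):

  #include <stdio.h>
  #include <bpf/libbpf.h>

  int main(void)
  {
          /* each helper maps an enum variant to its canonical name */
          printf("prog:   %s\n", libbpf_bpf_prog_type_str(BPF_PROG_TYPE_KPROBE));
          printf("map:    %s\n", libbpf_bpf_map_type_str(BPF_MAP_TYPE_RINGBUF));
          printf("attach: %s\n", libbpf_bpf_attach_type_str(BPF_TRACE_FENTRY));
          printf("link:   %s\n", libbpf_bpf_link_type_str(BPF_LINK_TYPE_TRACING));
          return 0;
  }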
-
- 23 May, 2022 1 commit
-
-
Julia Lawall authored
Spelling mistake (triple letters) in comment. Detected with the help of Coccinelle.

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Daniel Müller <deso@posteo.net>
Link: https://lore.kernel.org/bpf/20220521111145.81697-71-Julia.Lawall@inria.fr
-
- 16 May, 2022 1 commit
-
-
Andrii Nakryiko authored
Fix sec_name memory leak if user defines target-less SEC("tp").

Fixes: 9af8efc4 ("libbpf: Allow "incomplete" basic tracing SEC() definitions")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20220516184547.3204674-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 13 May, 2022 1 commit
-
-
Andrii Nakryiko authored
Add high-level API wrappers for the most common and typical BPF map operations that work directly on instances of struct bpf_map * (so you don't have to call bpf_map__fd()) and validate key/value size expectations. These helpers require users to specify key (and value, where appropriate) sizes when performing lookup/update/delete/etc. This forces users to actually think about and validate those sizes for themselves. This is a good thing, because the kernel expects the user to implicitly provide correct key/value buffer sizes and will just read/write the necessary amount of data. If the user doesn't set up buffers correctly (which bit people for per-CPU maps especially), the kernel either randomly overwrites stack data or returns -EFAULT, depending on the user's luck and circumstances. These high-level APIs are meant to prevent such unpleasant and hard-to-debug bugs.

This patch also adds the bpf_map_delete_elem_flags() low-level API and requires passing flags to the bpf_map__delete_elem() API for consistency across all similar APIs, even though the kernel currently doesn't expect any extra flags for the BPF_MAP_DELETE_ELEM operation.

List of map operations that get these high-level APIs:
- bpf_map_lookup_elem;
- bpf_map_update_elem;
- bpf_map_delete_elem;
- bpf_map_lookup_and_delete_elem;
- bpf_map_get_next_key.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220512220713.2617964-1-andrii@kernel.org
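A short sketch of how the new typed wrappers read in practice (map, key, and value shapes are illustrative):

  #include <errno.h>
  #include <bpf/libbpf.h>

  /* bump a per-key counter in an already-loaded map; the explicit
   * sizes let libbpf validate them against the map definition */
  static int bump_counter(struct bpf_map *map, __u32 key)
  {
          __u64 value = 0;
          int err;

          err = bpf_map__lookup_elem(map, &key, sizeof(key),
                                     &value, sizeof(value), 0);
          if (err && err != -ENOENT)
                  return err;

          value += 1;
          return bpf_map__update_elem(map, &key, sizeof(key),
                                      &value, sizeof(value), BPF_ANY);
  }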
-
- 11 May, 2022 3 commits
-
-
Jiri Olsa authored
Add bpf_program__set_insns(), which allows setting new instructions for a BPF program. This is a very advanced libbpf API and users need to know what they are doing. It should be used from the prog_prepare_load_fn callback only: instructions may have changed after that callback runs, so libbpf reloads them. One of the users of this new API will be perf's internal BPF prologue generation.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510074659.2557731-2-jolsa@kernel.org
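A minimal sketch of calling the new API (instructions hand-assembled from uapi macros; replacing a program body with "return 0" is purely illustrative):

  #include <linux/bpf.h>
  #include <bpf/libbpf.h>

  /* hand-assembled "r0 = 0; exit" */
  static struct bpf_insn ret0_insns[] = {
          { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
          { .code = BPF_JMP | BPF_EXIT },
  };

  /* meant to run from a prog_prepare_load_fn handler, per the text above */
  static int replace_body(struct bpf_program *prog)
  {
          return bpf_program__set_insns(prog, ret0_insns,
                                        sizeof(ret0_insns) / sizeof(ret0_insns[0]));
  }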
-
Andrii Nakryiko authored
Drop unused iteration variable, move overflow prevention check into the for loop.

Fixes: 0087a681 ("libbpf: Automatically fix up BPF_MAP_TYPE_RINGBUF size, if necessary")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220510185159.754299-1-andrii@kernel.org
-
Kui-Feng Lee authored
Add a cookie field to the attributes of bpf_link_create(). Add bpf_program__attach_trace_opts() to attach a cookie to a link.

Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510205923.3206889-5-kuifeng@fb.com
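Attaching with a cookie through the new opts variant might look like this (the cookie value is illustrative; reading it back on the BPF side via bpf_get_attach_cookie() is an assumption):

  #include <bpf/libbpf.h>

  static struct bpf_link *attach_with_cookie(struct bpf_program *prog)
  {
          /* the cookie travels with the link and can identify which
           * attachment fired when one program is attached many times */
          LIBBPF_OPTS(bpf_trace_opts, opts, .cookie = 0x1234);

          return bpf_program__attach_trace_opts(prog, &opts);
  }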
-
- 09 May, 2022 1 commit
-
-
Andrii Nakryiko authored
The kernel imposes a pretty particular restriction on ringbuf map size: it has to be a power-of-2 multiple of page size. While this is generally not hard for a user to satisfy, sometimes it's impossible to do declaratively in BPF source code, or just plain inconvenient to do at runtime. One such example might be BPF libraries that are supposed to work on different architectures, which might not agree on what the common page size is. Let libbpf find the right size for the user instead, if the specified size turns out not to satisfy kernel requirements. If the user didn't set a size at all, that's most probably a mistake, so don't upsize such a zero size to one full page. We also need to be careful not to overflow __u32 max_entries.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-9-andrii@kernel.org
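A map definition that leans on this fixup could look like the following; libbpf would round the size up at load time:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          /* not a valid ringbuf size on its own; libbpf now rounds it
           * up to a power-of-2 multiple of the page size at load time */
          __uint(max_entries, 1000);
  } rb SEC(".maps");

  char LICENSE[] SEC("license") = "GPL";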
-
- 29 Apr, 2022 3 commits
-
-
Andrii Nakryiko authored
Add a bpf_map__set_autocreate() API that allows the user to opt out from libbpf automatically creating a BPF map during BPF object load. This is a useful feature when building a CO-RE-enabled BPF application that takes advantage of some new-ish BPF map type (e.g., socket-local storage) if the kernel supports it, but otherwise uses some alternative way (e.g., an extra HASH map). In such a case, being able to disable the creation of a map the kernel doesn't support allows the BPF object file, with all its other maps and programs, to be successfully created and loaded. It's still up to the user to make sure that no "live" code in any of their BPF programs references such a map instance, which can be achieved by guarding such code with a CO-RE relocation check or by using .rodata global variables. If the user fails to properly guard such code to turn it into "dead code", libbpf will helpfully post-process the BPF verifier log and provide a more meaningful error with the map name that needs to be guarded properly. As such, instead of:

  ; value = bpf_map_lookup_elem(&missing_map, &zero);
  4: (85) call unknown#2001000000
  invalid func unknown#2001000000

... the user will see:

  ; value = bpf_map_lookup_elem(&missing_map, &zero);
  4: <invalid BPF map reference>
  BPF map 'missing_map' is referenced but wasn't created

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-4-andrii@kernel.org
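A user-space sketch of the opt-out (the map name and the feature probe are hypothetical):

  #include <stdbool.h>
  #include <bpf/libbpf.h>

  bool fancy_map_supported(void); /* hypothetical feature probe */

  int load_with_fallback(struct bpf_object *obj)
  {
          struct bpf_map *map = bpf_object__find_map_by_name(obj, "fancy_map");

          /* must run after bpf_object__open() but before load */
          if (map && !fancy_map_supported())
                  bpf_map__set_autocreate(map, false);

          return bpf_object__load(obj);
  }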
-
Andrii Nakryiko authored
Reuse libbpf_mem_ensure() when adding a new map to the list of maps inside bpf_object. It takes care of properly resizing and reallocating the maps array and zeroing out newly allocated memory.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-3-andrii@kernel.org
-
Andrii Nakryiko authored
Detect CO-RE spec truncation and append "..." to make the user aware that there was supposed to be more of the spec there.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-2-andrii@kernel.org
-
- 28 Apr, 2022 2 commits
-
-
Andrii Nakryiko authored
Similar to the previous patch, support target-less definitions like SEC("fentry"), SEC("freplace"), etc. For such BTF-backed program types it is expected that the user will specify the BTF target programmatically at runtime using bpf_program__set_attach_target() *before* the load phase. If not, libbpf will report this as an error. Also use the SEC_ATTACH_BTF flag instead of explicitly listing a set of types that are expected to require attach_btf_id. This was an accidental omission during the custom SEC() support refactoring.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220428185349.3799599-3-andrii@kernel.org
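The runtime flow for a target-less SEC("fentry") program might look like this (the kernel function name is a placeholder):

  #include <bpf/libbpf.h>

  /* prog carries SEC("fentry") with no target in BPF source */
  int load_with_runtime_target(struct bpf_object *obj, struct bpf_program *prog)
  {
          int err;

          /* after open, before load; attach_prog_fd == 0 means the
           * target is a kernel function, not another BPF program */
          err = bpf_program__set_attach_target(prog, 0, "tcp_v4_connect");
          if (err)
                  return err;

          return bpf_object__load(obj);
  }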
-
Andrii Nakryiko authored
In a lot of cases the target of a kprobe/kretprobe, tracepoint, raw tracepoint, etc. BPF program might not be known at compilation time and will be discovered at runtime. This was always a case supported by libbpf, with APIs like bpf_program__attach_{kprobe,tracepoint,etc}() accepting a full target definition, regardless of what was defined in the SEC() definition in BPF source code. Unfortunately, up till now libbpf still forced users to specify at least something for the fake target, e.g., SEC("kprobe/whatever"), which is cumbersome and somewhat misleading. This patch allows target-less SEC() definitions for basic tracing BPF program types:

- kprobe/kretprobe;
- multi-kprobe/multi-kretprobe;
- tracepoints;
- raw tracepoints.

Such target-less SEC() definitions are meant to declaratively specify the proper BPF program type only. Their attachment will have to be handled programmatically using the correct APIs. As such, skeleton auto-attachment of such BPF programs is skipped, and generic bpf_program__attach() will fail, if attempted, due to the lack of enough target information. An illustration follows after this entry.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220428185349.3799599-2-andrii@kernel.org
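For illustration, the two halves might look like this (the kernel function name is a placeholder):

  /* BPF object file: the section name declares only the program type */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("kprobe")
  int BPF_KPROBE(on_unlink)
  {
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";

  /* user space (separate file): the real target is supplied at attach time:
   *
   *     link = bpf_program__attach_kprobe(prog, false, "do_unlinkat");
   */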
-
- 26 Apr, 2022 3 commits
-
-
Andrii Nakryiko authored
Teach libbpf to post-process the BPF verifier log on BPF program load failure and detect known error patterns to provide the user with more context. Currently there is one such common situation: an "unguarded" failed BPF CO-RE relocation. While a failing CO-RE relocation is expected, it is expected to be properly guarded in BPF code such that the BPF verifier always eliminates the BPF instructions corresponding to such failed CO-RE relos as dead code. In cases when the user failed to take such precautions, the BPF verifier provides the best log it can:

  123: (85) call unknown#195896080
  invalid func unknown#195896080

Such an incomprehensible error is due to libbpf "poisoning" the BPF instruction that corresponds to the failed CO-RE relocation by replacing it with an invalid `call 0xbad2310` instruction (195896080 == 0xbad2310 reads "bad relo" if you squint hard enough). Luckily, libbpf has all the necessary information to look up the CO-RE relocation that failed and provide a more human-readable description of what's going on:

  5: <invalid CO-RE relocation>
  failed to resolve CO-RE relocation <byte_off> [6] struct task_struct___bad.fake_field_subprog (0:2 @ offset 8)

This hopefully makes it much easier to understand what's wrong with the user's BPF program without googling magic constants. This BPF verifier log fixup is set up to be extensible and is going to be used for at least one other upcoming libbpf feature in follow-up patches. Libbpf parses lines of the BPF verifier log starting from the very end. Currently it processes up to 10 lines of code looking for familiar patterns. This avoids wasting lots of CPU processing huge verifier logs (especially at log_level=2 verbosity). The actual verification error should normally be found in the last few lines, so this should work reliably. If libbpf needs to expand the log beyond the available log_buf_size, it truncates the end of the verifier log. Given the verifier log normally ends with something like:

  processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0

... truncating this on program load error isn't too bad (the end user can always increase the log size if they need the complete log).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-10-andrii@kernel.org
-
Andrii Nakryiko authored
Previously, libbpf recorded CO-RE relocations with insn_idx resolved according to finalized subprog locations (subprogs are appended at the end of the entry BPF program) to simplify the job of the light skeleton generator. This is necessary because once subprogs' instructions are appended to the main entry BPF program, all the subprog instruction indices are shifted, and that shift is different for each entry (main) BPF program, so it's generally impossible to map the final absolute insn_idx of the finalized BPF program back to its original location inside a subprogram. This information is now going to be used not only during light skeleton generation, but also to map an absolute instruction index to the subprog's instruction and its corresponding CO-RE relocation. So start recording these relocations always, not just when obj->gen_loader is set. This information is going to be freed at the end of the bpf_object__load() step, as before (but this can change in the future if there is a need for it post-load).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-7-andrii@kernel.org
-
Andrii Nakryiko authored
Instead of using ELF section names as a joining key between .BTF.ext and the corresponding BPF programs, pre-build a .BTF.ext section number to ELF section index mapping during bpf_object__open() and use it later to match .BTF.ext information (func/line info or CO-RE relocations) to the respective BPF programs and subprograms. This simplifies the corresponding joining logic and lets libbpf do manipulations with a BPF program's ELF sections, like dropping the leading '?' character for non-autoloaded programs. The original joining logic in bpf_object__relocate_core() (see the relevant comment that's now removed) was never elegant, so this is a good improvement regardless. But it also avoids unnecessary internal assumptions about preserving the original ELF section name as the BPF program's section name (which was broken when SEC("?abc") support was added).

Fixes: a3820c48 ("libbpf: Support opting out from autoloading BPF programs declaratively")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-5-andrii@kernel.org
-