- 26 Oct, 2021 12 commits
-
-
Ilya Leoshkevich authored
__BYTE_ORDER is supposed to be defined by libc, and __BYTE_ORDER__ by the compiler. bpf_core_read.h checks __BYTE_ORDER == __LITTLE_ENDIAN, which is true even when neither macro is defined (the preprocessor evaluates undefined identifiers as 0), leading to incorrect behavior on big-endian hosts if libc headers are not included, which is often the case.

Fixes: ee26dade ("libbpf: Add support for relocatable bitfields")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-2-iii@linux.ibm.com
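For illustration, a minimal preprocessor sketch of the failure mode and the fix (the guarded macro is hypothetical, not the actual bpf_core_read.h contents):

  /* Buggy form: if <endian.h> was never included, both identifiers are
   * undefined, the preprocessor evaluates them as 0, and the check is
   * always true, even on big-endian targets.
   */
  #if __BYTE_ORDER == __LITTLE_ENDIAN
  #define TARGET_IS_LITTLE_ENDIAN 1
  #endif

  /* Fixed form: __BYTE_ORDER__ and __ORDER_LITTLE_ENDIAN__ are provided
   * by the compiler itself, so no libc header is required.
   */
  #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
  #define TARGET_IS_LITTLE_ENDIAN 1
  #endif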
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================
Add libbpf APIs to access BPF program instructions, both before and after libbpf processing (i.e., before and after bpf_object__load()). This allows inspecting what's going on with BPF program assembly instructions as libbpf performs its processing magic.

In more practical terms, this allows no-brainer BPF program cloning, which is something you need when working with fentry/fexit BPF programs to attach the same BPF program code to multiple kernel functions: currently, the kernel needs multiple copies of a BPF program, each loaded with its own target BTF ID. retsnoop is one such example that previously had to rely on the bpf_program__set_prep() API to hijack program instructions ([0] for before and after).

Speaking of the bpf_program__set_prep() API and the whole concept of multiple-instance BPF programs in libbpf: all of that is scheduled for deprecation starting in v0.7. It doesn't work well, it's cumbersome, and it will become more broken as libbpf adds more functionality, so it will be removed in libbpf 1.0. It doesn't seem to be used by anyone anyway (except for that retsnoop hack, which is now much cleaner with the new APIs, as can be seen in [0]).

[0] https://github.com/anakryiko/retsnoop/pull/1
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
The name of the API doesn't convey clearly that this size is in number of bytes (there needed to be a separate comment in libbpf.h to make this clear). Further, measuring the size of a BPF program in bytes is not exactly the best fit, because BPF programs always consist of 8-byte instructions. As such, bpf_program__insn_cnt() is a better alternative in pretty much any imaginable case. So schedule bpf_program__size() for deprecation starting from v0.7; it will be removed in libbpf 1.0.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-5-andrii@kernel.org
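A minimal sketch of the migration, assuming a libbpf that already has bpf_program__insn_cnt() (prog is an opened struct bpf_program *):

  #include <linux/bpf.h>
  #include <bpf/libbpf.h>

  size_t bytes = bpf_program__size(prog);        /* deprecated: size in bytes */
  size_t cnt   = bpf_program__insn_cnt(prog);    /* preferred: instruction count */

  /* BPF instructions are always 8 bytes, so the byte size is trivially
   * recoverable on the rare occasion it is actually needed:
   */
  size_t bytes2 = cnt * sizeof(struct bpf_insn); /* == bytes */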
-
Andrii Nakryiko authored
Schedule deprecation of a set of APIs that are related to multi-instance bpf_programs:
  - bpf_program__set_prep() ([0]);
  - bpf_program__{set,unset}_instance() ([1]);
  - bpf_program__nth_fd().

These APIs are obscure, very niche, and don't seem to be used much in practice. bpf_program__set_prep() is pretty useless for anything but the simplest BPF programs, as it doesn't allow adjusting BPF program load attributes, among other things. In short, it has already bitrotted and will bitrot some more if not removed.

With the bpf_program__insns() API, which gives access to post-processed BPF program instructions of any given entry-point BPF program, it's now possible to do whatever adjustments were possible with the set_prep() API before, and more. Given any such use case is automatically an advanced one, requiring users to stick to low-level bpf_prog_load() APIs and manage their own prog FDs is reasonable.

[0] Closes: https://github.com/libbpf/libbpf/issues/299
[1] Closes: https://github.com/libbpf/libbpf/issues/300

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-4-andrii@kernel.org
-
Andrii Nakryiko authored
Add APIs providing read-only access to bpf_program BPF instructions ([0]). This is useful for diagnostic purposes, but it also allows cleaner support for cloning BPF programs after libbpf has done all the FD resolution, CO-RE relocations, subprog instruction appending, etc. Currently, cloning a BPF program is possible only by hijacking the half-broken bpf_program__set_prep() API, which doesn't really work well for anything but the most primitive programs. For instance, the set_prep() API doesn't allow adjusting BPF program load parameters, which is necessary for loading fentry/fexit BPF programs (the case where BPF program cloning is a necessity for any sort of mass-attachment functionality). Given the bpf_program__set_prep() API is set to be deprecated, having a cleaner alternative is a must.

libbpf internally already keeps track of a linear array of struct bpf_insn, so it's not hard to expose it. The only gotcha is that libbpf previously freed the instructions array at bpf_object load time, which would make this API much less useful overall, because a lot of changes to the instructions are done by libbpf between bpf_object__open() and bpf_object__load(). So this patch makes libbpf hold onto the prog->insns array even after BPF program loading. I think this is a small price for added functionality and improved introspection of BPF program code. See the retsnoop PR ([1]) for how it can be used in practice and the code savings compared to relying on bpf_program__set_prep().

[0] Closes: https://github.com/libbpf/libbpf/issues/298
[1] https://github.com/anakryiko/retsnoop/pull/1

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org
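A minimal sketch of what the new accessors enable, e.g. dumping a program's post-processing instruction stream (bpf_program__insns() and bpf_program__insn_cnt() are the APIs added here; the dumping itself is illustrative):

  #include <stdio.h>
  #include <linux/bpf.h>
  #include <bpf/libbpf.h>

  /* Valid after bpf_object__open(); after bpf_object__load() it reflects
   * CO-RE relocations, appended subprogs, resolved map FDs, etc.
   */
  static void dump_insns(const struct bpf_program *prog)
  {
          const struct bpf_insn *insns = bpf_program__insns(prog);
          size_t i, cnt = bpf_program__insn_cnt(prog);

          for (i = 0; i < cnt; i++)
                  printf("%zu: code=0x%02x dst=%d src=%d off=%d imm=%d\n",
                         i, insns[i].code, insns[i].dst_reg, insns[i].src_reg,
                         insns[i].off, insns[i].imm);
  }

For cloning, the same array can be fed to a low-level program load with a different attach target for each kernel function of interest.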
-
Andrii Nakryiko authored
Fix an instruction index validity check which had an off-by-one error.

Fixes: 3ee4f533 ("libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-2-andrii@kernel.org
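The shape of the fix, reconstructed as a sketch (names hypothetical, not the actual diff):

  #include <errno.h>

  /* Valid instruction indices are 0 .. insn_cnt - 1, so the check must use
   * >=; the buggy form (insn_idx > insn_cnt) let the one-past-the-end
   * index through.
   */
  static int check_insn_idx(size_t insn_idx, size_t insn_cnt)
  {
          if (insn_idx >= insn_cnt)
                  return -EINVAL;
          return 0;
  }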
-
Andrii Nakryiko authored
Quentin Monnet says:

====================
When listing BPF objects, bpftool can print a number of properties about items holding references to these objects. For example, it can show pinned paths for BPF programs, maps, and links; or programs and maps using a given BTF object; or the names and PIDs of processes referencing BPF objects. To collect this information, bpftool uses hash maps (to be clear: the data structures inside bpftool; we are not talking about BPF maps). It uses the implementation available from the kernel and picks it up from tools/include/linux/hashtable.h.

This patchset converts bpftool's hash maps to a distinct implementation instead, the one coming with libbpf. The main motivation for this change is that it should ease the path towards a potential out-of-tree mirror for bpftool, like the one libbpf already has. Although it's not perfect to depend on libbpf's internal components, bpftool is intimately tied to the library anyway, and this looks better than depending too much on (non-UAPI) kernel headers.

The first two patches contain preparatory work on the Makefile and on the initialisation of the hash maps for collecting pinned paths for objects. Then the transition is split into several steps, one for each kind of property for which the collection is backed by hash maps.

v2:
- Move hashmap cleanup for pinned paths for links from do_detach() to do_show().
- Handle errors on hashmap__append() (in three of the patches).
- Rename bpftool_hash_fn() and bpftool_equal_fn() as hash_fn_for_key_id() and equal_fn_for_key_id(), respectively.
- Add curly braces for hashmap__for_each_key_entry() { } in show_btf_plain() and show_btf_json(), where the flow was difficult to read.
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
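A condensed sketch of the libbpf hashmap pattern bpftool adopts (paraphrased, not bpftool's actual code; helper names mirror the ones mentioned above; error handling elided, though note hashmap__new() returns an ERR_PTR on allocation failure):

  #include <stdio.h>
  #include <string.h>
  #include "bpf/hashmap.h" /* libbpf's internal hashmap, vendored by bpftool */

  /* bpftool stores the u32 object ID directly in the key pointer */
  static size_t hash_fn_for_key_id(const void *key, void *ctx)
  {
          return (size_t)key;
  }

  static bool equal_fn_for_key_id(const void *k1, const void *k2, void *ctx)
  {
          return k1 == k2;
  }

  static void demo(unsigned int prog_id)
  {
          struct hashmap *map;
          struct hashmap_entry *entry;

          map = hashmap__new(hash_fn_for_key_id, equal_fn_for_key_id, NULL);
          hashmap__append(map, (void *)(size_t)prog_id, strdup("/sys/fs/bpf/foo"));
          hashmap__for_each_key_entry(map, entry, (void *)(size_t)prog_id)
                  printf("pinned: %s\n", (const char *)entry->value);
          hashmap__free(map);
  }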
-
Quentin Monnet authored
In order to show PIDs and names for processes holding references to BPF programs, maps, links, or BTF objects, bpftool creates hash maps to store all the relevant information. This commit is part of a set that transitions from the kernel's hash map implementation to the one coming with libbpf. The motivation is to make bpftool less dependent on kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has.

This is the third and final step of the transition, in which we convert the hash maps used for storing the information about the processes holding references to BPF objects (programs, maps, links, BTF), and at last drop the inclusion of tools/include/linux/hashtable.h.

Note: checkpatch complains about the use of __weak declarations and the missing empty lines after the bunch of empty function declarations when compiling without the BPF skeletons (none of these were introduced in this patch). We want to keep things as they are, and the reports should be safe to ignore.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-6-quentin@isovalent.com
-
Quentin Monnet authored
In order to show BPF programs and maps using BTF objects when the latter are being listed, bpftool creates hash maps to store all the relevant items. This commit is part of a set that transitions from the kernel's hash map implementation to the one coming with libbpf. The motivation is to make bpftool less dependent on kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has.

This commit focuses on the two hash maps used by bpftool when listing BTF objects to store references to programs and maps, and converts them to libbpf's implementation.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-5-quentin@isovalent.com
-
Quentin Monnet authored
In order to show pinned paths for BPF programs, maps, or links when listing them with the "-f" option, bpftool creates hash maps to store all the relevant paths under the bpffs. So far, it has relied on the kernel implementation (from tools/include/linux/hashtable.h). We can make bpftool rely on libbpf's implementation instead. The motivation is to make bpftool less dependent on kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has.

This commit is the first step of the conversion: the hash maps for pinned paths for programs, maps, and links are converted to libbpf's hashmap.{c,h}. The other hash maps, used for the PIDs of processes holding references to BPF objects, are left unchanged for now. On the build side, this requires adding a dependency on a second header internal to libbpf, and making it a dependency for the bootstrap bpftool version as well. The rest of the changes are a rather straightforward conversion.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-4-quentin@isovalent.com
-
Quentin Monnet authored
BPF programs, maps, and links can all be listed with their pinned paths by bpftool when the "-f" option is provided. To do so, bpftool builds hash maps containing all pinned paths for each kind of object. These three hash maps are always initialised in main.c and exposed through main.h. There appears to be no particular reason to do so: we can just as well make them static to the files that need them (prog.c, map.c, and link.c, respectively), and initialise them only when we want to show objects and the "-f" switch is provided. This may prevent unnecessary memory allocations if the implementation of the hash maps were to change in the future.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-3-quentin@isovalent.com
-
Quentin Monnet authored
The dependency is only useful to make sure that the $(LIBBPF_HDRS_DIR) directory is created before we try to install the required libbpf internal header locally. Let's create this directory properly instead. This is in preparation for making $(LIBBPF_INTERNAL_HDRS) a dependency of the bootstrap bpftool version, in which case we want no dependency on $(LIBBPF).

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-2-quentin@isovalent.com
-
- 25 Oct, 2021 5 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================
Reduce the amount of waiting time when running test_progs in parallel mode (-j) by splitting the bpf_verif_scale selftests into multiple tests. Previously they were structured as one test with multiple subtests, but subtests are not easily parallelizable with test_progs' infra. In practice, each scale subtest is really an independent test with nothing shared across the subtests anyway.

This patch set changes how test_progs test discovery works: it is now possible to define multiple tests within a single source code file. One of the patches also marks the tc_redirect selftests as serial, because they are extremely harmful to the test system when run in parallel mode.
====================

Acked-by: Yucong Sun <sunyucong@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Instead of using subtests in the bpf_verif_scale selftest, turn each scale subtest into its own test. Each subtest is completely independent and just reuses a bit of common test running logic, so the conversion is trivial. For convenience, keep all of the BPF verifier scale tests in one file.

This conversion shaves off a significant amount of time when running test_progs in parallel mode. E.g., just running the scale tests (-t verif_scale):

BEFORE
======
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED

real    0m22.894s
user    0m0.012s
sys     0m22.797s

AFTER
=====
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED

real    0m12.044s
user    0m0.024s
sys     0m27.869s

A ten-second saving right there. test_progs -j is not yet ready to be turned on by default, unfortunately, and some tests fail almost every time, but this is a good improvement nevertheless. Ignoring a few failures, here are the sequential vs parallel run times when running all tests now:

SEQUENTIAL
==========
Summary: 206/953 PASSED, 4 SKIPPED, 0 FAILED

real    1m5.625s
user    0m4.211s
sys     0m31.650s

PARALLEL
========
Summary: 204/952 PASSED, 4 SKIPPED, 2 FAILED

real    0m35.550s
user    0m4.998s
sys     0m39.890s

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-5-andrii@kernel.org
-
Andrii Nakryiko authored
The tc_redirect selftest seems to cause a lot of harm to the kprobe/tracepoint selftests. Yucong mentioned before that it manipulates sysfs, which might be the reason. So let's mark it as serial, though ideally it would be less intrusive on the system under test.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-4-andrii@kernel.org
-
Andrii Nakryiko authored
Revamp how test discovery works for test_progs and allow multiple test entries per file. Any global void function with no arguments and a "serial_test_" or "test_" prefix is considered a test.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-3-andrii@kernel.org
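A sketch of the new convention (test names illustrative): several tests can now live in one prog_tests/*.c file, and discovery picks them up by symbol name alone.

  /* eligible for parallel (-j) execution */
  void test_verif_scale_one(void)
  {
          /* ... regular test_progs assertions ... */
  }

  /* the serial_test_ prefix forces serial execution */
  void serial_test_sysfs_heavy(void)
  {
  }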
-
Andrii Nakryiko authored
Ensure that all test entry points are global void functions with no input arguments. Mark a few subtest entry points as static.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-2-andrii@kernel.org
-
- 23 Oct, 2021 8 commits
-
-
Andrii Nakryiko authored
The original code assumed a fixed and correct BTF header length. That's not always the case, though, so fix this bug with a proper additional check. And use the actual header length, instead of sizeof(struct btf_header), in sanity checks.

Fixes: 8a138aed ("bpf: btf: Add BTF support to libbpf")
Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211023003157.726961-2-andrii@kernel.org
-
Andrii Nakryiko authored
btf_header's str_off + str_len or type_off + type_len can overflow, as they are u32s. This leads to bypassing the sanity checks during BTF parsing, resulting in crashes afterwards. Fix by using 64-bit signed integers for the comparison.

Fixes: d8123624 ("libbpf: Fix BTF data layout checks and allow empty BTF")
Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211023003157.726961-1-andrii@kernel.org
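A sketch of the wrap-around and the fix (not the exact libbpf diff):

  #include <errno.h>
  #include <linux/types.h>

  static int check_str_section(__u32 str_off, __u32 str_len, __u32 data_size)
  {
          /* Buggy form: if (str_off + str_len > data_size). With u32
           * arithmetic, 0xfffffff0 + 0x20 wraps to 0x10 and slips past
           * the check.
           */
          if ((long long)str_off + str_len > data_size)
                  return -EINVAL; /* widened: 0x100000010 > data_size is caught */
          return 0;
  }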
-
Alexei Starovoitov authored
Yonghong Song says:

====================
The latest upstream llvm-project added support for btf_decl_tag attributes on typedef declarations ([1], [2]). Similar to the other btf_decl_tag cases (func/func_param/global_var/struct/union/field), a btf_decl_tag on a typedef declaration can carry information from kernel source to the clang compiler, and then to dwarf/BTF, for bpf verification or other use cases.

This patch set adds kernel support for BTF_KIND_DECL_TAG on typedef declarations (Patch 1). Additional selftests are added to cover unit testing, dedup, and bpf program usage of btf_decl_tag with typedefs (Patches 2, 3 and 4). The BTF documentation is updated to include BTF_KIND_DECL_TAG typedefs (Patch 5).

[1] https://reviews.llvm.org/D110127
[2] https://reviews.llvm.org/D112259
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Add BTF_KIND_DECL_TAG typedef support in btf.rst.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195649.4020514-1-yhs@fb.com
-
Yonghong Song authored
Change the value type in progs/tag.c to a typedef with a btf_decl_tag. With `bpftool btf dump file tag.o`, we have:

  ...
  [14] TYPEDEF 'value_t' type_id=17
  [15] DECL_TAG 'tag1' type_id=14 component_idx=-1
  [16] DECL_TAG 'tag2' type_id=14 component_idx=-1
  [17] STRUCT '(anon)' size=8 vlen=2
          'a' type_id=2 bits_offset=0
          'b' type_id=2 bits_offset=32
  ...

The btf_tag selftest also succeeded:

  $ ./test_progs -t tag
  #21 btf_tag:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195643.4020315-1-yhs@fb.com
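For reference, a sketch of the typedef that produces the dump above (macro names approximate those used in the selftests):

  #define __tag1 __attribute__((btf_decl_tag("tag1")))
  #define __tag2 __attribute__((btf_decl_tag("tag2")))

  typedef struct {
          int a;
          int b;
  } value_t __tag1 __tag2;

The two DECL_TAG entries point at the TYPEDEF (type_id=14), with component_idx=-1 because the tags apply to the typedef as a whole rather than to a member.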
-
Yonghong Song authored
Add unit tests for deduplication of BTF_KIND_DECL_TAG attached to typedef types. Also change a few comments from "tag" to "decl_tag" to match the BTF_KIND_DECL_TAG enum value name.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195638.4019770-1-yhs@fb.com
-
Yonghong Song authored
Test good and bad variants of typedef BTF_KIND_DECL_TAG encoding.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195633.4019472-1-yhs@fb.com
-
Yonghong Song authored
The llvm patches ([1], [2]) added support for attaching btf_decl_tag attributes to typedef declarations. This patch adds the corresponding support to the kernel.

[1] https://reviews.llvm.org/D110127
[2] https://reviews.llvm.org/D112259

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195628.4018847-1-yhs@fb.com
-
- 22 Oct, 2021 15 commits
-
-
Andrii Nakryiko authored
Stanislav Fomichev says:

====================
Commit 15669e1d ("selftests/bpf: Normalize all the rest SEC() uses") broke the flow dissector tests. With the strict section names, bpftool isn't able to pin all programs of the objects (all section names are the same now). To bring it back to life, let's do the following:

- teach libbpf to pin by function name with LIBBPF_STRICT_SEC_NAME
- enable strict mode in bpftool (breaking CLI change)
- fix the custom flow_dissector loader to use strict mode
- fix the flow_dissector tests to use the new pin names (func vs sec)

v5:
- get rid of error when retrying with '/' (Quentin Monnet)

v4:
- fix comment spelling (Quentin Monnet)
- retry progtype without / (Quentin Monnet)

v3:
- clarify program pinning in LIBBPF_STRICT_SEC_NAME, for real this time (Andrii Nakryiko)
- fix possible segfault in __bpf_program__pin_name (Andrii Nakryiko)

v2:
- add github issue (Andrii Nakryiko)
- remove sec_name from bpf_program.pin_name comment (Andrii Nakryiko)
- add cover letter (Andrii Nakryiko)
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Stanislav Fomichev authored
- update custom loader to search by name, not section name
- update bpftool commands to use proper pin path

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211021214814.1236114-4-sdf@google.com
-
Stanislav Fomichev authored
We can't use section names anymore because they are not unique, and pinning objects with multiple programs with the same progtype/secname will fail.

[0] Closes: https://github.com/libbpf/libbpf/issues/273

Fixes: 33a2c75c ("libbpf: add internal pin_name")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20211021214814.1236114-2-sdf@google.com
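A minimal usage sketch, assuming a libbpf with this patch applied (object name and paths illustrative):

  #include <bpf/libbpf.h>
  #include <bpf/libbpf_legacy.h>

  int pin_all(const char *path)
  {
          struct bpf_object *obj;

          /* opt in before opening the object: pin names are then derived
           * from the unique C function names, not the shared SEC() names
           */
          libbpf_set_strict_mode(LIBBPF_STRICT_SEC_NAME);

          obj = bpf_object__open_file("flow_dissector.o", NULL);
          if (libbpf_get_error(obj))
                  return -1;
          if (bpf_object__load(obj))
                  return -1;
          return bpf_object__pin_programs(obj, path); /* e.g. /sys/fs/bpf/flow */
  }

With two programs both annotated SEC("flow_dissector"), pinning now produces one bpffs entry per function name instead of failing on a duplicate path.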
-
Quentin Monnet authored
Bpftool creates a new JSON writer object for printing program metadata in plain-text mode, regardless of whether any metadata is present. The writer is then freed if metadata has been found and printed, but it leaks otherwise. We cannot destroy the object unconditionally, because the destructor prints an undesirable line break. Instead, make sure the writer is created only after we have found program metadata to print.

Found with valgrind.

Fixes: aff52e68 ("bpftool: Support dumping metadata")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022094743.11052-1-quentin@isovalent.com
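The fix pattern, paraphrased as a sketch (using bpftool's json_writer API; the metadata handling is simplified):

  #include <stdio.h>
  #include "json_writer.h"

  static void print_metadata_plain(const char * const *values, int n)
  {
          json_writer_t *w = NULL;
          int i;

          for (i = 0; i < n; i++) {
                  if (!w)
                          w = jsonw_new(stdout); /* created on first hit only */
                  jsonw_string(w, values[i]);
          }
          if (w)
                  jsonw_destroy(&w); /* nothing to destroy (or leak) otherwise */
  }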
-
Andrii Nakryiko authored
Hengqi Chen says:

====================
Add btf__type_cnt() and btf__raw_data() APIs and deprecate btf__get_nr_types() and btf__get_raw_data(), since the old APIs don't follow the libbpf naming convention. Also update tools/selftests to use the new APIs. This is part of the effort towards libbpf v1.0.

v1->v2:
- Update commit message, deprecate the old APIs in libbpf v0.7 (Andrii)
- Separate changes in tools/ into individual patches (Andrii)
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Hengqi Chen authored
Replace the calls to btf__get_nr_types/btf__get_raw_data in selftests with the new APIs btf__type_cnt/btf__raw_data. The old APIs will be deprecated in libbpf v0.7+.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-6-hengqi.chen@gmail.com
-
Hengqi Chen authored
Replace the call to btf__get_nr_types with the new API btf__type_cnt. The old API will be deprecated in libbpf v0.7+. No functionality change.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-5-hengqi.chen@gmail.com
-
Hengqi Chen authored
Replace the call to btf__get_nr_types with the new API btf__type_cnt. The old API will be deprecated in libbpf v0.7+. No functionality change.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-4-hengqi.chen@gmail.com
-
Hengqi Chen authored
Replace the call to btf__get_raw_data with the new API btf__raw_data. The old API will be deprecated in libbpf v0.7+. No functionality change.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-3-hengqi.chen@gmail.com
-
Hengqi Chen authored
Add btf__type_cnt() and btf__raw_data() APIs and deprecate btf__get_nr_types() and btf__get_raw_data(), since the old APIs don't follow the libbpf naming convention for getters, which omit 'get' in the name (see [0]). btf__raw_data() is just an alias to the existing btf__get_raw_data(). btf__type_cnt() now returns the number of all types of the BTF object, including 'void'.

[0] Closes: https://github.com/libbpf/libbpf/issues/279

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-2-hengqi.chen@gmail.com
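A sketch of the iteration-pattern change (since btf__type_cnt() counts the implicit 'void' at ID 0, valid IDs are 1 .. cnt - 1):

  #include <bpf/btf.h>

  static void walk_types(const struct btf *btf)
  {
          __u32 id, n = btf__type_cnt(btf);

          for (id = 1; id < n; id++) {
                  const struct btf_type *t = btf__type_by_id(btf, id);
                  /* ... inspect t ... */
          }
          /* old equivalent: n = btf__get_nr_types(btf); for (id = 1; id <= n; id++) */
  }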
-
Mauricio Vásquez authored
Free the btf_dedup struct if btf_ensure_modifiable() returns an error.

Fixes: 919d2b1d ("libbpf: Allow modification of BTF and add btf__add_str API")
Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022202035.48868-1-mauricio@kinvolk.io
-
Andrii Nakryiko authored
A recent change to use tp/syscalls/sys_enter_nanosleep for the perf_buffer selftests causes them to fail on 4.9 kernels in libbpf CI ([0]):

  libbpf: prog 'handle_sys_enter': failed to attach to perf_event FD 6: Invalid argument
  libbpf: prog 'handle_sys_enter': failed to attach to tracepoint 'syscalls/sys_enter_nanosleep': Invalid argument

It's not exactly clear why, because the perf_event itself is created for this tracepoint, but I can't even compile a 4.9 kernel locally, so it's hard to figure this out. If anyone has better luck and would like to help investigate, I'd really appreciate it.

For now, unblock CI by switching back to raw_syscalls/sys_enter, but reduce the amount of unnecessary samples emitted by filtering by process ID. Use an explicit ARRAY map for that to make it work on 4.9 as well, because global data isn't yet supported there.

Fixes: aa274f98 ("selftests/bpf: Fix possible/online index mismatch in perf_buffer test")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022201342.3490692-1-andrii@kernel.org
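A sketch of the BPF-side filtering approach described above (map and program names approximate the selftest):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);   /* works on 4.9, unlike global data */
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, int);
  } my_pid_map SEC(".maps");

  SEC("tp/raw_syscalls/sys_enter")
  int handle_sys_enter(void *ctx)
  {
          int zero = 0, pid = bpf_get_current_pid_tgid() >> 32;
          int *exp_pid = bpf_map_lookup_elem(&my_pid_map, &zero);

          if (!exp_pid || *exp_pid != pid)
                  return 0; /* drop samples from unrelated processes */
          /* ... emit the perf buffer sample ... */
          return 0;
  }

  char _license[] SEC("license") = "GPL";

The user-space side of the test would write its own PID into slot 0 before attaching.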
-
Andrii Nakryiko authored
When building libbpf sources out of the kernel tree (in the Github repo), we run into a compilation error due to an unknown __aligned attribute. It must be coming from some kernel header which is not available to the Github sources. Use an explicit __attribute__((aligned(16))) instead.

Fixes: 961632d5 ("libbpf: Fix dumping non-aligned __int128")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022192502.2975553-1-andrii@kernel.org
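The fix, in sketch form:

  /* __aligned(16) is a kernel-header shorthand that isn't visible when
   * building from the mirrored sources; the raw attribute always is.
   */
  static __int128 x __attribute__((aligned(16)));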
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================
This patch set refactors the internals of libbpf to enable support for multiple custom .rodata.* and .data.* sections. Each such section is backed by its own BPF_MAP_TYPE_ARRAY, memory-mappable just like .rodata/.data. This is not extended to .bss because .bss is not a great name; it is generated by the compiler with a name that reflects completely irrelevant historical implementation details. Given that users have to annotate their variables with SEC(".data.my_sec") explicitly, standardizing on the .rodata. and .data. prefixes makes more sense and keeps things simpler.

Additionally, this patch set makes it simpler to work with those special internal maps by allowing them to be looked up by their full ELF section name.

Patch #1 is a preparatory patch that deprecates one libbpf API and moves custom logic into libbpf.c, where it's used. This code is later refactored with the rest of the libbpf.c logic to support multiple data section maps. See the individual patches for all the details.

For the new custom "dot maps", their full ELF section names are used as the names that are sent into the kernel. The object name isn't prepended as it is for .data/.rodata/.bss. The reason is that with longer custom names, there isn't much space left for the object name anyway. Also, if BTF is supported, btf_value_type_id points to a DATASEC BTF type, which contains the full original ELF name of the section, so tools like bpftool could use that to recover the full name. This patch set doesn't add that logic yet; it is left for follow-up patches.

One interesting possibility that these changes open up is that it's now possible to do:

  bpf_trace_printk("My fmt %s", sizeof("My fmt %s"), "blah");

and it will work as expected, because the string literal lands in a .rodata.* section that is now backed by its own map. I haven't updated the libbpf-provided helpers in bpf_helpers.h for snprintf, seq_printf, and printk, because the `static const char ___fmt[] = fmt;` trick is still efficient and doesn't fill out the buffer at runtime (no copying). But we might consider updating them in the future, especially with the array check that Kumar proposed (see [0]).

[0] https://lore.kernel.org/bpf/20211012041524.udytbr2xs5wid6x2@apollo.localdomain/

v1->v2:
- don't prepend object name for new dot maps;
- add __read_mostly example in selftests (Daniel).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Utilize libbpf's new feature of allowing lookup of internal maps by their ELF section names. There is no need to guess or calculate the exact truncated prefix taken from the object name.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-11-andrii@kernel.org
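A minimal sketch of the lookup (section name illustrative):

  #include <bpf/libbpf.h>

  /* backing map for variables declared with SEC(".data.my_sec") */
  static struct bpf_map *find_custom_data_map(struct bpf_object *obj)
  {
          return bpf_object__find_map_by_name(obj, ".data.my_sec");
  }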
-