- 20 Aug, 2020 8 commits
-
-
Andrii Nakryiko authored
GCC compilers older than version 5 don't support __builtin_mul_overflow yet. Given GCC 4.9 is the minimal supported compiler for building the kernel, and libbpf is a dependency of resolve_btfids, which in turn is a dependency of CONFIG_DEBUG_INFO_BTF=y, this needs to be handled. This patch fixes the issue by falling back to a slower detection of integer overflow in such cases. Fixes: 029258d7 ("libbpf: Remove any use of reallocarray() in libbpf") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-2-andriin@fb.com
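A hedged illustration of the fallback idea (not the actual libbpf code): dispatch on compiler support for the builtin, otherwise use a slower division-based overflow check.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* GCC >= 5 and clang provide __builtin_mul_overflow; otherwise fall
 * back to a manual (slower) overflow check. */
#if defined(__clang__) || (defined(__GNUC__) && __GNUC__ >= 5)
static inline bool mul_overflows(size_t a, size_t b, size_t *res)
{
	return __builtin_mul_overflow(a, b, res);
}
#else
static inline bool mul_overflows(size_t a, size_t b, size_t *res)
{
	/* a * b overflows iff a != 0 and b > SIZE_MAX / a */
	if (a != 0 && b > SIZE_MAX / a)
		return true;
	*res = a * b;
	return false;
}
#endif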
-
Andrii Nakryiko authored
BPF_CALL | BPF_JMP32 is explicitly not allowed by the verifier for BPF helper calls, so don't detect it as a valid call. Also drop the check on the func_id pointer, as it's currently always non-NULL. Fixes: 109cea5a ("libbpf: Sanitize BPF program code for bpf_probe_read_{kernel, user}[_str]") Reported-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-1-andriin@fb.com
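For reference, a hedged sketch of the instruction-level criterion implied here; it is not the libbpf function itself. Helper calls use the BPF_JMP (not BPF_JMP32) class with src_reg == 0, while bpf-to-bpf calls are marked with BPF_PSEUDO_CALL.

#include <linux/bpf.h>
#include <stdbool.h>

static bool insn_is_helper_call(const struct bpf_insn *insn)
{
	/* BPF_JMP32 | BPF_CALL is rejected by the verifier, so only
	 * BPF_JMP | BPF_CALL counts; non-zero src_reg (BPF_PSEUDO_CALL)
	 * marks bpf-to-bpf calls, not helper calls. */
	return insn->code == (BPF_JMP | BPF_CALL) &&
	       insn->src_reg == 0;
}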
-
Daniel Borkmann authored
Alexei Starovoitov says:

====================
This patch set is the first real user of the user mode driver facility. The general use case for a user mode driver is to ship vmlinux with preloaded BPF programs. In this particular case the user mode driver populates a bpffs instance with two BPF iterators. In several months the BPF_LSM project would need to preload the kernel with its own set of BPF programs and attach them to LSM hooks instead of bpffs. BPF iterators and BPF_LSM are unstable from a uapi perspective. They are tracing based and peek into arbitrary kernel data structures.

One can question why a kernel module cannot embed BPF programs inside. The reason is that libbpf is necessary to load them. First libbpf loads the BPF Type Format, then creates BPF maps and populates them. Then it relocates code sections inside BPF programs, loads BPF programs, and finally attaches them to events. Theoretically libbpf could be rewritten to work in the kernel, but that is a massive undertaking. The maintenance of an in-kernel libbpf and a user space libbpf would be another challenge.

Another obstacle to embedding BPF programs into a kernel module is the sys_bpf api. Loading of programs, BTF, and maps goes through the verifier. It validates and optimizes the code. It's possible to provide an in-kernel api to all of the sys_bpf commands (load progs, create maps, update maps, load BTF, etc), but that is a huge amount of work and a forever maintenance headache. Hence the decision is to ship vmlinux with user mode drivers that load BPF programs. Just like kernel modules extend vmlinux, BPF programs are safe extensions of the kernel and some of them need to ship with vmlinux.

This patch set adds a kernel module with a user mode driver that populates bpffs with two BPF iterators.

$ mount bpffs /my/bpffs/ -t bpf
$ ls -la /my/bpffs/
total 4
drwxrwxrwt 2 root root 0 Jul 2 00:27 .
drwxr-xr-x 19 root root 4096 Jul 2 00:09 ..
-rw------- 1 root root 0 Jul 2 00:27 maps.debug
-rw------- 1 root root 0 Jul 2 00:27 progs.debug

The user mode driver will load BPF Type Formats, create BPF maps, populate BPF maps, load two BPF programs, attach them to BPF iterators, and finally send two bpf_link IDs back to the kernel. The kernel will pin the two bpf_links into the newly mounted bpffs instance under the names "progs.debug" and "maps.debug". These two files become human readable.

$ cat /my/bpffs/progs.debug
  id name                attached
  11 dump_bpf_map        bpf_iter_bpf_map
  12 dump_bpf_prog       bpf_iter_bpf_prog
  27 test_pkt_access
  32 test_main           test_pkt_access test_pkt_access
  33 test_subprog1       test_pkt_access_subprog1 test_pkt_access
  34 test_subprog2       test_pkt_access_subprog2 test_pkt_access
  35 test_subprog3       test_pkt_access_subprog3 test_pkt_access
  36 new_get_skb_len     get_skb_len test_pkt_access
  37 new_get_skb_ifindex get_skb_ifindex test_pkt_access
  38 new_get_constant    get_constant test_pkt_access

The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about all BPF programs currently loaded in the system. This information is unstable and will change from kernel to kernel. In some sense this output is similar to 'bpftool prog show', which uses the stable api to retrieve information about BPF programs.

The BPF subsystem grows quickly and there is always demand to show as much info about BPF things as possible. But we cannot expose all that info via the stable uapi of the bpf syscall, since the details change so much. Right now a BPF program can be attached to only one other BPF program. Folks are working on patches to enable multi-attach, but for debugging it's necessary to see the current state. There is no uapi for that, but the above output shows it:

37 new_get_skb_ifindex get_skb_ifindex test_pkt_access
38 new_get_constant    get_constant    test_pkt_access
   [1]                 [2]             [3]

[1] is the full name of the BPF prog from BTF.
[2] is the name of the function inside the target BPF prog.
[3] is the name of the target BPF prog.

[2] and [3] are not exposed via uapi, since they will change from single to multi soon. There are many other cases where bpf internals are useful for debugging, but shouldn't be exposed via uapi due to the high rate of changes.

systemd mounts /sys/fs/bpf at the start, so this kernel module with user mode driver needs to be available early. BPF_LSM most likely would need to preload BPF programs even earlier.

Few interesting observations:
- though bpffs comes with two human readable files "progs.debug" and "maps.debug", they can be removed. 'rm -f /sys/fs/bpf/progs.debug' will remove the bpf_link and the kernel will automatically unload the corresponding BPF progs, maps, BTFs. In the future '-o remount' will be able to restore them. This is not implemented yet.
- 'ps aux|grep bpf_preload' shows nothing. The user mode driver loaded the BPF iterators and exited. Nothing is lingering in user space at this point.
- We can consider giving 0644 permissions to "progs.debug" and "maps.debug" to allow unprivileged users to see BPF things loaded in the system. We cannot do so with "bpftool prog show", since it's using cap_sys_admin parts of the bpf syscall.
- The functionality split between core kernel, bpf_preload kernel module and user mode driver is very similar to the bpfilter style of interaction.
- Similar BPF iterators can be used as unstable extensions to /proc. Like mounting /proc could prepopulate some subdirectory in there with a BPF iterator that will print QUIC sockets instead of tcp and udp.

Changelog:
v5->v6:
- refactored Makefiles with Andrii's help
- switched to explicit $(MAKE) style
- switched to userldlibs instead of userldflags
- fixed build issue with libbpf Makefile due to invocation from kbuild
- fixed menuconfig order as spotted by Daniel
- introduced CONFIG_USERMODE_DRIVER bool that is selected by bpfilter and bpf_preload

v4->v5:
- addressed Song and Andrii feedback. s/pages/max_entries/

v3->v4:
- took THIS_MODULE in patch 3 as suggested by Daniel to simplify the code.
- converted the BPF iterator to use BTF (when available) to print the full BPF program name instead of the 16-byte truncated version. This is something I've been using drgn scripts for. Take a look at get_name() in iterators.bpf.c to see how short it is compared to what user space bpftool would have to do to print the same full name:
  . get prog info via obj_info_by_fd
  . do get_fd_by_id from info->btf_id
  . fetch potentially large BTF of the program from the kernel
  . parse that BTF in user space to figure out all type boundaries and string section
  . read info->func_info to get btf_id of func_proto from there
  . find that btf_id in the parsed BTF
  That's quite a bit of work for bpftool compared to a few lines in get_name(). I guess it would be good to make bpftool do this info extraction anyway. While doing this BTF reading in the kernel I realized that the verifier is not smart enough to follow double pointers (added to my todo list), otherwise get_name() would have been even shorter.

v2->v3:
- fixed module unload race (Daniel)
- added selftest (Daniel)
- fixed build bot warning

v1->v2:
- changed names to 'progs.debug' and 'maps.debug' to hopefully better indicate the instability of the text output. Having a dot in the name also guarantees that these special files will not conflict with normal bpf objects pinned in bpffs, since a dot is disallowed for normal pins.
- instead of hard coding link_name in the core bpf, moved it into the UMD.
- cleaned up error handling.
- addressed review comments from Yonghong and Andrii.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Add a test that mounts two bpffs instances and checks progs.debug and maps.debug for sanity data. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200819042759.51280-5-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Add a kernel module with a user mode driver that populates bpffs with BPF iterators.

$ mount bpffs /my/bpffs/ -t bpf
$ ls -la /my/bpffs/
total 4
drwxrwxrwt 2 root root 0 Jul 2 00:27 .
drwxr-xr-x 19 root root 4096 Jul 2 00:09 ..
-rw------- 1 root root 0 Jul 2 00:27 maps.debug
-rw------- 1 root root 0 Jul 2 00:27 progs.debug

The user mode driver will load BPF Type Formats, create BPF maps, populate BPF maps, load two BPF programs, attach them to BPF iterators, and finally send two bpf_link IDs back to the kernel. The kernel will pin the two bpf_links into the newly mounted bpffs instance under the names "progs.debug" and "maps.debug". These two files become human readable.

$ cat /my/bpffs/progs.debug
  id name                attached
  11 dump_bpf_map        bpf_iter_bpf_map
  12 dump_bpf_prog       bpf_iter_bpf_prog
  27 test_pkt_access
  32 test_main           test_pkt_access test_pkt_access
  33 test_subprog1       test_pkt_access_subprog1 test_pkt_access
  34 test_subprog2       test_pkt_access_subprog2 test_pkt_access
  35 test_subprog3       test_pkt_access_subprog3 test_pkt_access
  36 new_get_skb_len     get_skb_len test_pkt_access
  37 new_get_skb_ifindex get_skb_ifindex test_pkt_access
  38 new_get_constant    get_constant test_pkt_access

The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about all BPF programs currently loaded in the system. This information is unstable and will change from kernel to kernel, as the ".debug" suffix conveys. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200819042759.51280-4-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
The program and map iterators work similarly to seq_files. Once a program is pinned in bpffs it can be read with the "cat" tool to print human readable output, in this case about BPF programs and maps. For example:

$ cat /sys/fs/bpf/progs.debug
  id name            attached
   5 dump_bpf_map    bpf_iter_bpf_map
   6 dump_bpf_prog   bpf_iter_bpf_prog
$ cat /sys/fs/bpf/maps.debug
  id name            max_entries
   3 iterator.rodata 1

To avoid a kernel build dependency on clang 10, separate the bpf skeleton generation into a manual "make" step and instead check the generated .skel.h into git. Unlike 'bpftool prog show', the in-kernel BTF name is used (when available) to print the full name of a BPF program instead of its 16-byte truncated name. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200819042759.51280-3-alexei.starovoitov@gmail.com
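For flavor, a hedged sketch of roughly what such an iterator program looks like on the BPF side; it is modeled on the description above, not copied from iterators.bpf.c, and assumes vmlinux.h plus a BPF_SEQ_PRINTF helper macro (as provided by libbpf's headers or defined locally).

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("iter/bpf_map")
int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct bpf_map *map = ctx->map;

	if (!map)
		return 0;

	/* print the header once, on the first element */
	if (ctx->meta->seq_num == 0)
		BPF_SEQ_PRINTF(seq, "  id name             max_entries\n");

	BPF_SEQ_PRINTF(seq, "%4u %-16s %10d\n",
		       map->id, map->name, map->max_entries);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";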
-
Alexei Starovoitov authored
Refactor the code a bit to extract bpf_link_by_id() helper. It's similar to existing bpf_prog_by_id(). Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200819042759.51280-2-alexei.starovoitov@gmail.com
-
Xu Wang authored
Simplify the return expression. Signed-off-by: Xu Wang <vulab@iscas.ac.cn> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819025324.14680-1-vulab@iscas.ac.cn
-
- 19 Aug, 2020 28 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================
This patch set adds libbpf support for two new classes of CO-RE relocations: type-based (TYPE_EXISTS/TYPE_SIZE/TYPE_ID_LOCAL/TYPE_ID_TARGET) and enum value-based (ENUMVAL_EXISTS/ENUMVAL_VALUE):
- TYPE_EXISTS allows to detect the presence in kernel BTF of a locally-recorded BTF type. Useful for feature detection (new functionality often comes with new internal kernel types), as well as handling type renames and bigger refactorings.
- TYPE_SIZE allows to get the real size (in bytes) of a specified kernel type. Useful for dumping an internal structure as-is through perfbuf or ringbuf.
- TYPE_ID_LOCAL/TYPE_ID_TARGET allow to capture the BTF type ID of a BTF type in the program's BTF or kernel BTF, respectively. These could be used for high-performance and space-efficient generic data dumping/logging by relying on a small and cheap BTF type ID as a data layout descriptor, for post-processing on the user-space side.
- ENUMVAL_EXISTS can be used for detecting the presence of an enumerator value in the kernel's enum type. The most direct application is to detect BPF helper support in the kernel.
- ENUMVAL_VALUE allows to relocate the real integer value of a kernel enumerator value, which is subject to change (e.g., always a potential issue for internal, non-UAPI, kernel enums).

I've indicated potential applications for these relocations, but the relocations themselves are generic and unassuming and are designed to work correctly even in unintended applications. Furthermore, relocated values become constants, known to the verifier, and could and would be used for dead branch code detection and elimination. This makes them ideal for all sorts of feature detection and for guarding functionality that's not available on some older (but still supported by the BPF program) kernels, while having to compile and maintain one unified source code.

Selftests are added for all the new features. Selftests utilizing new Clang built-ins are designed such that they will compile with older Clangs and will be skipped during test runs. So this shouldn't cause any build and test failures on systems with slightly outdated Clang compilers.

LLVM patches adding these relocations in Clang:
- __builtin_btf_type_id() ([0], [1], [2]);
- __builtin_preserve_type_info(), __builtin_preserve_enum_value() ([3], [4]).

[0] https://reviews.llvm.org/D74572
[1] https://reviews.llvm.org/D74668
[2] https://reviews.llvm.org/D85174
[3] https://reviews.llvm.org/D83878
[4] https://reviews.llvm.org/D83242

v2->v3:
- fix feature detection for __builtin_btf_type_id() test (Yonghong);
- fix extra empty lines at the end of files (Yonghong);

v1->v2:
- selftests detect built-in support and are skipped if not found (Alexei).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add tests validating existence and value relocations for enum value-based relocations. If __builtin_preserve_enum_value() built-in is not supported, skip tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-6-andriin@fb.com
-
Andrii Nakryiko authored
Implement two relocations of a new enumerator value-based CO-RE relocation kind: ENUMVAL_EXISTS and ENUMVAL_VALUE. The first, ENUMVAL_EXISTS, allows to detect the presence of a named enumerator value in the target (kernel) BTF. This is useful for doing BPF helper/map/program type support detection from the BPF program side. The bpf_core_enum_value_exists() macro helper is provided to simplify built-in usage. The second, ENUMVAL_VALUE, allows to capture the enumerator's integer value and relocate it according to the target BTF, if it changes. This is useful to have a guarantee against intentional or accidental re-ordering/re-numbering of some of the internal (non-UAPI) enumerations, where kernel developers don't care about UAPI backwards compatibility concerns. bpf_core_enum_value() allows to capture this succinctly and use correct enum values in code. LLVM uses the ldimm64 instruction to capture enumerator value-based relocations, so add support for ldimm64 instruction patching as well. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-5-andriin@fb.com
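A hedged usage sketch of the two macros named above, from the BPF program side; the enum and enumerator chosen here are only illustrative, not taken from the patch.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

SEC("raw_tp/sys_enter")
int enumval_demo(void *ctx)
{
	/* feature detection: does the running kernel know this helper? */
	if (bpf_core_enum_value_exists(enum bpf_func_id, BPF_FUNC_ringbuf_output))
		bpf_printk("ringbuf helpers are available");

	/* relocated integer value of the enumerator on this kernel */
	__u64 val = bpf_core_enum_value(enum bpf_func_id, BPF_FUNC_ringbuf_output);

	bpf_printk("BPF_FUNC_ringbuf_output = %llu", val);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";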
-
Andrii Nakryiko authored
Add tests for BTF type ID relocations. To allow testing this, enhance core_relo.c test runner to allow dynamic initialization of test inputs. If Clang doesn't have necessary support for new functionality, test is skipped. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-4-andriin@fb.com
-
Andrii Nakryiko authored
Add selftests for TYPE_EXISTS and TYPE_SIZE relocations, testing correctness of relocations and handling of type compatibility/incompatibility. If __builtin_preserve_type_info() is not supported by the compiler, skip tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-3-andriin@fb.com
-
Andrii Nakryiko authored
Implement support for TYPE_EXISTS/TYPE_SIZE/TYPE_ID_LOCAL/TYPE_ID_TARGET relocations. These are examples of type-based relocations, as opposed to the field-based relocations supported already. The difference is that they calculate relocation values based on the type itself, not a field within a struct/union. Type-based relos have slightly different semantics when matching local types to kernel target types, see the comments in bpf_core_types_are_compat() for details. Their behavior on failure to find the target type in kernel BTF also differs. Instead of "poisoning" the relocatable instruction and subsequently failing the load in the kernel, they return 0 (which is rarely a valid return result, so user BPF code can use that to detect success/failure of the relocation and deal with it without extra "guarding" relocations). Also, it's always possible to check existence of the type in the target kernel with a TYPE_EXISTS relocation, similarly to the field-based FIELD_EXISTS. The TYPE_ID_LOCAL relocation is a bit special in that it always succeeds (barring any libbpf/Clang bugs) and is resolved to a BTF ID using **local** BTF info of the BPF program itself. Tests in subsequent patches demonstrate the usage and semantics of the new relocations. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-2-andriin@fb.com
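A hedged sketch of how the type-based helpers from bpf_core_read.h might be used in BPF program code; the structs picked here are illustrative only.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

SEC("raw_tp/sys_enter")
int typeinfo_demo(void *ctx)
{
	/* new functionality often comes with new internal kernel types */
	if (bpf_core_type_exists(struct bpf_ringbuf))
		bpf_printk("kernel has the BPF ringbuf implementation");

	/* real size of the type in the running kernel's BTF */
	bpf_printk("task_struct is %u bytes",
		   bpf_core_type_size(struct task_struct));

	/* type IDs in the program's own BTF vs. the kernel BTF */
	bpf_printk("task_struct btf ids: local %u kernel %u",
		   bpf_core_type_id_local(struct task_struct),
		   bpf_core_type_id_kernel(struct task_struct));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";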
-
Maciej Żenczykowski authored
This reduces likelihood of incorrect use. Test: builds Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819020027.4072288-1-zenczykowski@gmail.com
-
Maciej Żenczykowski authored
This provides a minor performance boost by virtue of inlining instead of cross module function calls. Test: builds Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819010710.3959310-2-zenczykowski@gmail.com
-
Maciej Żenczykowski authored
This reduces likelihood of incorrect use. Test: builds Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819010710.3959310-1-zenczykowski@gmail.com
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================
Get rid of two feature detectors: reallocarray and libelf-mmap. Optional feature detections complicate the libbpf Makefile and cause more trouble for various applications that want to integrate libbpf as part of their build.

Patch #1 replaces all reallocarray() uses with a libbpf-internal reallocarray() implementation. Patches #2 and #3 make sure we won't re-introduce reallocarray() accidentally. Patch #2 also removes the last use of the libbpf_internal.h header inside bpftool. There is still nlattr.h that's used by both libbpf and bpftool, but that's left for a follow up patch to split. Patch #4 removes the libelf-mmap feature detector and all its uses, as it's trivial to handle missing mmap support in libbpf, the way objtool has been doing it for a while.

v1->v2 and v2->v3:
- rebase to latest bpf-next (Alexei).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
It's trivial to handle missing ELF_C_READ_MMAP support in libelf the way objtool solved it in commit 774bec3f ("objtool: Add fallback from ELF_C_READ_MMAP to ELF_C_READ"). So instead of having an entire feature detector for that, just do in perf and libbpf what objtool does, and keep their Makefiles a bit simpler. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-5-andriin@fb.com
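A hedged sketch of the fallback idea described above; the HAVE_ELF_C_READ_MMAP guard is illustrative and not the actual macro used by objtool or libbpf.

#include <libelf.h>

/* elfutils' libelf offers ELF_C_READ_MMAP; older/alternative libelf
 * implementations only have ELF_C_READ, so fall back to that. */
#ifdef HAVE_ELF_C_READ_MMAP
#define READ_CMD ELF_C_READ_MMAP
#else
#define READ_CMD ELF_C_READ
#endif

static Elf *open_elf(int fd)
{
	return elf_begin(fd, READ_CMD, NULL);
}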
-
Andrii Nakryiko authored
Most of libbpf's source files already include libbpf_internal.h, so it's a good place to centralize identifier poisoning. Move the kernel integer type poisoning there, and also add reallocarray to the poison list to prevent accidental use of it; libbpf_reallocarray() should be used universally instead. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-4-andriin@fb.com
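A short hedged illustration of the poisoning technique, roughly as it could appear in an internal header; the identifier lists are illustrative.

/* Any use of these identifiers after this point is a compile-time error,
 * both in GCC and in clang. */
#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
#pragma GCC poison reallocarray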
-
Andrii Nakryiko authored
Most netlink-related functions were unique to bpftool usage, so I moved them into net.c. A few functions are still used by both bpftool and libbpf itself internally, so I've copy-pasted them (libbpf_nl_get_link, libbpf_netlink_open). It's a bit of code duplication, but it gives better separation between libbpf as a library with a public API, and bpftool, which was relying on unexposed functions in libbpf. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-3-andriin@fb.com
-
Andrii Nakryiko authored
Re-implement glibc's reallocarray() for libbpf internal-only use. reallocarray(), unfortunately, is not available in all versions of glibc, so it requires extra feature detection and use of the reallocarray() stub from <tools/libc_compat.h> and COMPAT_NEED_REALLOCARRAY. All this complicates the build of libbpf unnecessarily and is just a maintenance burden. Instead, it's trivial to implement a libbpf-specific internal version and use it throughout libbpf. Which is what this patch does, along with converting some realloc() uses that should really have been reallocarray() in the first place. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-2-andriin@fb.com
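A minimal hedged sketch of what such an internal reallocarray() equivalent can look like; the name xreallocarray is illustrative, not libbpf's.

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* realloc() guarded by an explicit multiplication-overflow check */
static void *xreallocarray(void *ptr, size_t nmemb, size_t size)
{
	if (nmemb != 0 && size > SIZE_MAX / nmemb) {
		errno = ENOMEM;
		return NULL;
	}
	return realloc(ptr, nmemb * size);
}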
-
Andrii Nakryiko authored
Add test simulating ambiguous field size relocation, while fields themselves are at the exact same offset. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-5-andriin@fb.com
-
Andrii Nakryiko authored
Split the instruction patching logic into relocation value calculation and application of relocation to instruction. Using this, evaluate relocation against each matching candidate and validate that all candidates agree on relocated value. If not, report ambiguity and fail load. This logic is necessary to avoid dangerous (however unlikely) accidental match against two incompatible candidate types. Without this change, libbpf will pick a random type as *the* candidate and apply potentially invalid relocation. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-4-andriin@fb.com
-
Andrii Nakryiko authored
Add logging of local/target type kind (struct/union/typedef/etc). Preserve unresolved root type ID (for cases of typedef). Improve the format of CO-RE reloc spec output format to contain only relevant and succinct info. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-3-andriin@fb.com
-
Andrii Nakryiko authored
Instead of printing out integer value of BTF kind, print out a string representation of a kind. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-2-andriin@fb.com
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================
This patch set refactors libbpf feature probing to be done lazily on an as-needed basis, instead of proactively testing all possible features libbpf knows about. This allows to scale such detections and mitigations better, without issuing unnecessary syscalls on each bpf_object__load() call. The result is also now memoized globally, instead of per-bpf_object.

Building on that, libbpf will now detect availability of the bpf_probe_read_kernel() helper (which means also the -user and -str variants), and will sanitize BPF program code by replacing such references with the generic variants (bpf_probe_read[_str]()). This allows to migrate all BPF programs to the proper -kernel/-user probing helpers, without the fear of breaking them for old kernels.

With that, update BPF_CORE_READ() and related macros to use bpf_probe_read_kernel(), as it doesn't make much sense to do CO-RE relocations against user-space types. And the only class of cases in which a BPF program might read a kernel type from user-space are UAPI data structures, which by definition are fixed in their memory layout and don't need relocating. This is exemplified by the test_vmlinux test, which is fixed as part of this patch set as well. BPF_CORE_READ() is useful for chaining bpf_probe_read_{kernel,user}() calls together even without relocation, so we might add user-space variants, if there is a need.

While making libbpf more useful for older kernels, also improve handling of a complete lack of BTF support in the kernel by not even attempting to load BTF info into the kernel. This eliminates the annoying warning about lack of BTF support in the kernel and the map creation retry without BTF. If a user is relying on features that require kernel BTF support, it will still fail, of course.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Detect whether a kernel supports any BTF at all, and if not, don't even attempt loading BTF to avoid unnecessary log messages like:

  libbpf: Error loading BTF: Invalid argument(22)
  libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.

Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-8-andriin@fb.com
-
Andrii Nakryiko authored
Now that libbpf can automatically fall back to bpf_probe_read() on old kernels not yet supporting bpf_probe_read_kernel(), switch libbpf's BPF-side helper macros to use the appropriate BPF helper for reading kernel data. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Cc: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/bpf/20200818213356.2629020-7-andriin@fb.com
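A hedged usage sketch of what this means on the BPF side: BPF_CORE_READ() chains what would otherwise be several bpf_probe_read_kernel() calls. The field chain follows the usual task_struct layout and is only illustrative.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

SEC("kprobe/do_exit")
int probe_exit(void *ctx)
{
	struct task_struct *task = (void *)bpf_get_current_task();

	/* reads task->real_parent->tgid via two bpf_probe_read_kernel() calls */
	pid_t ppid = BPF_CORE_READ(task, real_parent, tgid);

	bpf_printk("exiting task's parent tgid: %d", ppid);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";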
-
Andrii Nakryiko authored
The test is reading UAPI kernel structure from user-space. So it doesn't need CO-RE relocations and has to use bpf_probe_read_user(). Fixes: acbd0620 ("selftests/bpf: Add vmlinux.h selftest exercising tracing of syscalls") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-6-andriin@fb.com
-
Andrii Nakryiko authored
Add BPF program code sanitization pass, replacing calls to BPF bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if libbpf detects that kernel doesn't support new variants. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-5-andriin@fb.com
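A hedged sketch of the sanitization idea in terms of the UAPI instruction encoding; the function is illustrative, not the actual libbpf code.

#include <linux/bpf.h>

/* Helper calls are BPF_JMP|BPF_CALL with src_reg == 0 and the helper ID in
 * insn->imm; rewrite the new probe_read variants to the generic ones when
 * the kernel lacks bpf_probe_read_{kernel,user}[_str](). */
static void sanitize_probe_read(struct bpf_insn *insn)
{
	if (insn->code != (BPF_JMP | BPF_CALL) || insn->src_reg != 0)
		return;

	switch (insn->imm) {
	case BPF_FUNC_probe_read_kernel:
	case BPF_FUNC_probe_read_user:
		insn->imm = BPF_FUNC_probe_read;
		break;
	case BPF_FUNC_probe_read_kernel_str:
	case BPF_FUNC_probe_read_user_str:
		insn->imm = BPF_FUNC_probe_read_str;
		break;
	}
}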
-
Andrii Nakryiko authored
Factor out common piece of logic that detects support for a feature based on successfully created FD. Also take care of closing FD, if it was created. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-4-andriin@fb.com
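The factored-out pattern amounts to something like the following hedged sketch: a feature counts as supported if the probing syscall handed back a valid FD, which then just needs to be closed.

#include <stdbool.h>
#include <unistd.h>

static bool probe_fd(int fd)
{
	if (fd >= 0)
		close(fd);	/* don't leak the FD the probe created */
	return fd >= 0;
}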
-
Andrii Nakryiko authored
Turn libbpf's kernel feature probing into lazily-performed checks. This allows skipping unnecessary feature checks if a given BPF application doesn't rely on a particular kernel feature. As the number of feature probes grows, libbpf will perform fewer unnecessary syscalls and scale better with the number of feature probes long-term. By decoupling feature checks from bpf_object, it's also possible to perform feature probing from libbpf static helpers and low-level APIs, if necessary. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-3-andriin@fb.com
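A compact hedged sketch of the lazily-memoized probing pattern described; the names are illustrative, not libbpf's internal ones.

#include <stdbool.h>

enum probe_result { PROBE_UNKNOWN = 0, PROBE_SUPPORTED, PROBE_MISSING };

struct kernel_feature {
	bool (*probe)(void);		/* issues the syscall-based check */
	enum probe_result result;	/* memoized globally, not per bpf_object */
};

/* Each feature is probed at most once, and only when something asks for it. */
static bool kernel_supports(struct kernel_feature *feat)
{
	if (feat->result == PROBE_UNKNOWN)
		feat->result = feat->probe() ? PROBE_SUPPORTED : PROBE_MISSING;
	return feat->result == PROBE_SUPPORTED;
}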
-
Andrii Nakryiko authored
That compilation warning is more annoying than helpful. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-2-andriin@fb.com
-
Xu Wang authored
Replace a comma between expression statements by a semicolon. Signed-off-by: Xu Wang <vulab@iscas.ac.cn> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200818071611.21923-1-vulab@iscas.ac.cn
-
Daniel T. Lee authored
From commit f1394b79 ("block: mark blk_account_io_completion static"), the symbol blk_account_io_completion() has been marked as static, which makes it no longer possible to attach a kprobe to this event. Currently, there are broken samples due to this. As a solution, attach the kprobe events to blk_account_io_done() instead, so the samples behave the same as before. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200818051641.21724-1-danieltimlee@gmail.com
-
- 18 Aug, 2020 4 commits
-
-
Miaohe Lin authored
The frags of the data's skb_shared_info are assigned in the following loop, so the memcpy of the frags here is meaningless. Signed-off-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Dewar authored
Remove a couple of unused #defines in cs89x0.h. Signed-off-by: Alex Dewar <alex.dewar90@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Miaohe Lin authored
Convert the uses of fallthrough comments to fallthrough macro. Signed-off-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Johannes Berg says:

====================
netlink: allow NLA_BINARY length range validation

In quite a few places (perhaps particularly in wireless) we need to validate an NLA_BINARY attribute with both a minimum and a maximum length. Currently, we can do either of the two, but not both, given that we have NLA_MIN_LEN (minimum length) and NLA_BINARY (maximum).

Extend the range mechanisms that we use for integer validation to apply to NLA_BINARY as well. After converting everything to use NLA_POLICY_MIN_LEN() we can thus get rid of the NLA_MIN_LEN type, since that's now a special case of NLA_BINARY with a minimum length validation. Similarly, NLA_EXACT_LEN can be specified using NLA_POLICY_EXACT_LEN() and also maps to the new NLA_BINARY validation (min == max == desired length). Finally, NLA_POLICY_EXACT_LEN_WARN() also gets to be a somewhat special case of this.

I haven't included the patch here that converts nl80211 to use this because it doesn't apply without another cleanup patch, but we can remove a number of hand-coded min/max length checks and get better error messages from the general validation code while doing that.

As I had originally built the netlink policy export to userspace in a way that has min/max length for NLA_BINARY (for the types that we used to call NLA_MIN_LEN, NLA_BINARY and NLA_EXACT_LEN) anyway, it doesn't really change anything there, except that now there's a chance that userspace sees min length < max length, which previously wasn't possible.

v2:
- fix the min<max comment to correctly say min<=max
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
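For illustration, a hedged sketch of what a policy using these validation forms could look like once the series is in place; the attribute names here are made up.

#include <net/netlink.h>
#include <linux/if_ether.h>

enum {
	MY_ATTR_UNSPEC,
	MY_ATTR_COOKIE,		/* opaque blob, 8..64 bytes */
	MY_ATTR_NAME,		/* at least 1 byte */
	MY_ATTR_MAC,		/* exactly ETH_ALEN bytes */
	__MY_ATTR_MAX,
};

static const struct nla_policy my_policy[__MY_ATTR_MAX] = {
	/* NLA_BINARY with both a minimum and a maximum length */
	[MY_ATTR_COOKIE] = NLA_POLICY_RANGE(NLA_BINARY, 8, 64),
	/* minimum length only (replaces the old NLA_MIN_LEN type) */
	[MY_ATTR_NAME]   = NLA_POLICY_MIN_LEN(1),
	/* exact length, i.e. min == max == ETH_ALEN */
	[MY_ATTR_MAC]    = NLA_POLICY_EXACT_LEN(ETH_ALEN),
};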
-