- 05 Dec, 2021 1 commit
-
-
Alexei Starovoitov authored
Make the -d flag functional for gen_loader-style program loading. For example:

  $ bpftool prog load -L -d test_d_path.o
  ... // will print:
  libbpf: loading ./test_d_path.o
  libbpf: elf: section(3) fentry/security_inode_getattr, size 280, link 0, flags 6, type=1
  ...
  libbpf: prog 'prog_close': found data map 0 (test_d_p.bss, sec 7, off 0) for insn 30
  libbpf: gen: load_btf: size 5376
  libbpf: gen: map_create: test_d_p.bss idx 0 type 2 value_type_id 118
  libbpf: map 'test_d_p.bss': created successfully, fd=0
  libbpf: gen: map_update_elem: idx 0
  libbpf: sec 'fentry/filp_close': found 1 CO-RE relocations
  libbpf: record_relo_core: prog 1 insn[15] struct file 0:1 final insn_idx 15
  libbpf: gen: prog_load: type 26 insns_cnt 35 progi_idx 0
  libbpf: gen: find_attach_tgt security_inode_getattr 12
  libbpf: gen: prog_load: type 26 insns_cnt 37 progi_idx 1
  libbpf: gen: find_attach_tgt filp_close 12
  libbpf: gen: finish 0
  ... // at this point libbpf finished generating loader program
  0: (bf) r6 = r1
  1: (bf) r1 = r10
  2: (07) r1 += -136
  3: (b7) r2 = 136
  4: (b7) r3 = 0
  5: (85) call bpf_probe_read_kernel#113
  6: (05) goto pc+104
  ... // this is the assembly dump of the loader program
  390: (63) *(u32 *)(r6 +44) = r0
  391: (18) r1 = map[idx:0]+5584
  393: (61) r0 = *(u32 *)(r1 +0)
  394: (63) *(u32 *)(r6 +24) = r0
  395: (b7) r0 = 0
  396: (95) exit
  err 0 // the loader program was loaded and executed successfully
  (null) func#0 @0
  ... // CO-RE in the kernel logs:
  CO-RE relocating STRUCT file: found target candidate [500]
  prog '': relo #0: kind <byte_off> (0), spec is [8] STRUCT file.f_path (0:1 @ offset 16)
  prog '': relo #0: matching candidate #0 [500] STRUCT file.f_path (0:1 @ offset 16)
  prog '': relo #0: patched insn #15 (ALU/ALU64) imm 16 -> 16
  vmlinux_cand_cache:[11]file(500), module_cand_cache:
  ... // verifier logs when it was checking test_d_path.o program:
  R1 type=ctx expected=fp
  0: R1=ctx(id=0,off=0,imm=0) R10=fp0
  ; int BPF_PROG(prog_close, struct file *file, void *id)
  0: (79) r6 = *(u64 *)(r1 +0)
  func 'filp_close' arg0 has btf_id 500 type STRUCT 'file'
  1: R1=ctx(id=0,off=0,imm=0) R6_w=ptr_file(id=0,off=0,imm=0) R10=fp0
  ; pid_t pid = bpf_get_current_pid_tgid() >> 32;
  1: (85) call bpf_get_current_pid_tgid#14
  ...
  // if there are multiple programs being loaded by the loader program,
  // only the last program in the elf file will be printed, since
  // the same verifier log_buf is used for all PROG_LOAD commands.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211204194623.27779-1-alexei.starovoitov@gmail.com
-
- 04 Dec, 2021 1 commit
-
-
Hou Tao authored
BPF_LOG_KERNEL is only used internally, so disallow bpf_btf_load() from setting the log level to BPF_LOG_KERNEL. The same check is already done in bpf_check(), so factor out a helper to validate the log attributes and use it in both places. Fixes: 8580ac94 ("bpf: Process in-kernel BTF") Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20211203053001.740945-1-houtao1@huawei.com
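A minimal sketch of what such a shared validation helper can look like (the helper name and exact bounds are assumptions for illustration, not necessarily the patch's code):

  /* Sketch: validate user-supplied log attributes once, for both the
   * BPF_BTF_LOAD and BPF_PROG_LOAD paths. Rejects BPF_LOG_KERNEL (or any
   * other bit outside BPF_LOG_MASK) coming from userspace. */
  static bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
  {
      return log->len_total >= 128 &&
             log->len_total <= UINT_MAX >> 2 &&
             log->level && log->ubuf &&
             !(log->level & ~BPF_LOG_MASK);
  }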
-
- 03 Dec, 2021 3 commits
-
-
Maxim Mikityanskiy authored
The test for bpf_iter_task_vma assumes that the output will be longer than 1 kB, as the comment above the loop says. Due to this assumption, the loop becomes infinite if the output turns out to be shorter than 1 kB: read_fd_into_buffer() returns 0 when the end of file is reached, so len stops increasing. This commit adds a break on EOF to handle short output correctly. For reference, this is the content I get when running test_progs under vmtest.sh, and it's shorter than 1 kB:

  00400000-00401000 r--p 00000000 fe:00 25867 /root/bpf/test_progs
  00401000-00674000 r-xp 00001000 fe:00 25867 /root/bpf/test_progs
  00674000-0095f000 r--p 00274000 fe:00 25867 /root/bpf/test_progs
  0095f000-00983000 r--p 0055e000 fe:00 25867 /root/bpf/test_progs
  00983000-00a8a000 rw-p 00582000 fe:00 25867 /root/bpf/test_progs
  00a8a000-0484e000 rw-p 00000000 00:00 0
  7f6c64000000-7f6c64021000 rw-p 00000000 00:00 0
  7f6c64021000-7f6c68000000 ---p 00000000 00:00 0
  7f6c6ac8f000-7f6c6ac90000 r--s 00000000 00:0d 8032 anon_inode:bpf-map
  7f6c6ac90000-7f6c6ac91000 ---p 00000000 00:00 0
  7f6c6ac91000-7f6c6b491000 rw-p 00000000 00:00 0
  7f6c6b491000-7f6c6b492000 r--s 00000000 00:0d 8032 anon_inode:bpf-map
  7f6c6b492000-7f6c6b493000 rw-s 00000000 00:0d 8032 anon_inode:bpf-map
  7ffc1e23d000-7ffc1e25e000 rw-p 00000000 00:00 0
  7ffc1e3b8000-7ffc1e3bc000 r--p 00000000 00:00 0
  7ffc1e3bc000-7ffc1e3bd000 r-xp 00000000 00:00 0
  7fffffffe000-7ffffffff000 --xp 00000000 00:00 0

Fixes: e8168840 ("selftests/bpf: Add test for bpf_iter_task_vma")
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211130181811.594220-1-maximmi@nvidia.com
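The fixed loop is roughly of this shape (a sketch assuming read_fd_into_buffer() follows read(2)-style return conventions; the error handling is illustrative):

  /* Read the iterator output until the buffer is full or EOF. */
  len = 0;
  while (len < CMP_BUFFER_SIZE) {
      err = read_fd_into_buffer(iter_fd, task_vma_output + len,
                                CMP_BUFFER_SIZE - len);
      if (!err) /* EOF: output may legitimately be shorter than 1 kB */
          break;
      if (err < 0)
          goto out;
      len += err;
  }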
-
Alexei Starovoitov authored
Reduce bpf_core_apply_relo_insn() stack usage and bump BPF_CORE_SPEC_MAX_LEN limit back to 64. Fixes: 29db4bea ("bpf: Prepare relo_core.c for kernel duty.") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211203182836.16646-1-alexei.starovoitov@gmail.com
-
Andrii Nakryiko authored
Libbpf's development version was bumped to 0.7 in c93faaaf ("libbpf: Deprecate bpf_prog_load_xattr() API"), activating a bunch of previously scheduled deprecations. Most APIs are pretty straightforward to replace with newer APIs, but perf has a complicated mixed setup, with libbpf used in both static and shared configurations, which makes it non-trivial to migrate the APIs. Further, bpf_program__set_prep() needs more involved refactoring, which will require help from Arnaldo and/or Jiri. So for now, mute the deprecation warnings and work on migrating perf off of the deprecated APIs separately, with input from the owners of the perf tool. Fixes: c93faaaf ("libbpf: Deprecate bpf_prog_load_xattr() API") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211203004640.2455717-1-andrii@kernel.org
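One common way to mute such warnings around a few remaining call sites is a diagnostic pragma; a sketch of the general technique (the exact placement in perf's sources may differ, and the surrounding variables are illustrative):

  /* Silence -Wdeprecated-declarations around a not-yet-migrated call. */
  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
      err = bpf_prog_load_xattr(&attr, &obj, &prog_fd);
  #pragma GCC diagnostic pop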
-
- 02 Dec, 2021 30 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== The few lines in the last patch that mark bpf_prog_load_xattr() deprecated required a decent amount of clean-up in all the other patches; samples/bpf is a big part of that clean-up. This patch set also bumps the libbpf version to 0.7, as the libbpf v0.6 release will be cut shortly. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
bpf_prog_load_xattr() is a high-level API that is named like the low-level BPF_PROG_LOAD wrapper APIs, but it actually operates on struct bpf_object. It is confusingly misnamed, as it loads all the progs inside the bpf_object, returning the prog_fd of the very first BPF program. It also has a bunch of ad-hoc features like log_level override, map_ifindex auto-setting, etc. All of this can be expressed more explicitly and cleanly through existing libbpf APIs. This patch marks bpf_prog_load_xattr() for deprecation in libbpf v0.8 ([0]).

[0] Closes: https://github.com/libbpf/libbpf/issues/308

Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-10-andrii@kernel.org
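For most callers, the explicit replacement flow looks roughly like this (a sketch; the object path and error handling are illustrative):

  #include <bpf/libbpf.h>

  /* Equivalent of bpf_prog_load_xattr(): open and load the object,
   * then fetch the fd of its first program explicitly. */
  struct bpf_object *obj = bpf_object__open_file("prog.o", NULL);
  if (libbpf_get_error(obj))
      return -1;
  if (bpf_object__load(obj))
      return -1;

  struct bpf_program *prog = bpf_object__next_program(obj, NULL);
  int prog_fd = bpf_program__fd(prog);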
-
Andrii Nakryiko authored
Replace deprecated APIs with new ones. Also mute deprecation warnings in the code using the deprecated AF_XDP APIs (xsk.h). Figuring out what to do with all the AF_XDP stuff is a separate problem that should be solved with its own set of changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-9-andrii@kernel.org
-
Andrii Nakryiko authored
Remove the xdp_samples_user.o rule redefinition, which generates a Makefile warning, and instead override TPROGS_CFLAGS. This seems to work fine when building inside selftests/bpf. That was one big head-scratcher before I found that the generic Makefile.target hid this surprising specialization for xdp_samples_user.o. The main change is to use the actual locally installed libbpf headers. Also drop the printk macro re-definition (it's not even used!). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-8-andrii@kernel.org
-
Andrii Nakryiko authored
Migrate all the selftests that were still using bpf_prog_load_xattr(). A few are converted to skeletons; the others now use the bpf_object__open_file() API. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-7-andrii@kernel.org
-
Andrii Nakryiko authored
xdpxceiver.c is using AF_XDP APIs that are deprecated starting from libbpf 0.7. Until we migrate the test to libxdp or solve this issue in some other way, mute deprecation warnings within xdpxceiver.c. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-6-andrii@kernel.org
-
Andrii Nakryiko authored
We've added one extra patch that added back the use of the legacy btf__dedup() variant. Clean that up. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-5-andrii@kernel.org
-
Andrii Nakryiko authored
Switch to the bpf_map_create() API instead. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-4-andrii@kernel.org
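For reference, a minimal call to the new API (the map name, key/value sizes, and max_entries here are illustrative):

  #include <bpf/bpf.h>

  /* Create a hash map directly via the new low-level API;
   * returns a map fd on success or a negative error. */
  int map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "example_map",
                              sizeof(__u32), sizeof(__u64),
                              1024 /* max_entries */, NULL /* opts */);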
-
Andrii Nakryiko authored
Add bpf_program__set_log_level() and bpf_program__log_level() to adjust and fetch the log_level sent during the BPF_PROG_LOAD command. This allows selectively requesting more or less verbose output in the BPF verifier log. Also bump the libbpf version to 0.7 and make these APIs the first in v0.7. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-3-andrii@kernel.org
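Illustrative usage (the object path is a placeholder; log_level 2 requests the most verbose verifier output):

  #include <stdio.h>
  #include <bpf/libbpf.h>

  struct bpf_object *obj = bpf_object__open_file("prog.o", NULL);
  struct bpf_program *prog = bpf_object__next_program(obj, NULL);

  /* Ask the verifier for full instruction-level logging for this prog. */
  bpf_program__set_log_level(prog, 2);
  bpf_object__load(obj);
  printf("effective log_level: %u\n", bpf_program__log_level(prog));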
-
Andrii Nakryiko authored
The corresponding Linux UAPI struct uses __u32, not int, so keep it consistent. Fixes: 992c4225 ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-2-andrii@kernel.org
-
Paul E. McKenney authored
The test_cmpxchg() and test_xchg() functions say "test_run add". Therefore, make them say "test_run cmpxchg" and "test_run xchg", respectively. Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201005030.GA3071525@paulmck-ThinkPad-P17-Gen-1
-
Jean-Philippe Brucker authored
Add the $(OUTPUT) prefix to testing_helpers.o so it can be built out of tree when necessary. At the moment, in addition to being built in-tree even when an out-of-tree build is required, testing_helpers.o is not built with the right recipe when cross-building. For consistency, the other helpers, cgroup_helpers and trace_helpers, can also be passed as objects instead of source files. Use *_HELPERS variables to keep the Makefile readable. Fixes: f87c1930 ("selftests/bpf: Merge test_stub.c into testing_helpers.c") Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201145101.823159-1-jean-philippe@linaro.org
-
Andrii Nakryiko authored
Alexei Starovoitov says:

====================

From: Alexei Starovoitov <ast@kernel.org>

v4->v5:
. Reduce the number of memory allocations in the candidate cache logic
. Fix a couple of UAF issues
. Add Andrii's patch to clean up struct bpf_core_cand
. More thorough tests
. Planned follow-ups:
  - support -v in lskel
  - move struct bpf_core_spec out of bpf_core_apply_relo_insn to reduce stack usage
  - implement bpf_core_types_are_compat

v3->v4:
. Complete refactor of the find-candidates logic. Now it has a small permanent cache.
. Fix a bug in gen_loader related to attach_kind.
. Fix the BTF log size limit.
. More tests.

v2->v3:
. Addressed Andrii's feedback in every patch. The new field in union bpf_attr changed from "core_relo" to "core_relos".
. Added one more test and checkpatch.pl-ed the set.

v1->v2:
. Refactor uapi to pass 'struct bpf_core_relo' from LLVM into libbpf and further into the kernel, instead of a bpf_core_apply_relo() bpf helper. Because of this change the CO-RE algorithm can log error and debug events through the standard bpf verifier log mechanism, which was not possible with the helper approach.
. The #define RELO_CORE macro was removed and replaced with the btf_member_bit_offset() patch.

This set introduces CO-RE support in the kernel. There are several reasons to add such support:
1. It's a step toward signed BPF programs.
2. It allows golang-like languages that struggle to adopt libbpf to take advantage of CO-RE powers.
3. Currently the field accessed by an 'ldx [R1 + 10]' insn is recognized by the verifier purely based on the +10 offset. If R1 points to a union, the verifier picks one of the fields at this offset. With CO-RE the kernel can disambiguate the field access (see the illustrative example after this cover letter).

Alexei Starovoitov (16):
  libbpf: Replace btf__type_by_id() with btf_type_by_id().
  bpf: Rename btf_member accessors.
  bpf: Prepare relo_core.c for kernel duty.
  bpf: Define enum bpf_core_relo_kind as uapi.
  bpf: Pass a set of bpf_core_relo-s to prog_load command.
  bpf: Adjust BTF log size limit.
  bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().
  libbpf: Use CO-RE in the kernel in light skeleton.
  libbpf: Support init of inner maps in light skeleton.
  libbpf: Clean gen_loader's attach kind.
  selftests/bpf: Add lskel version of kfunc test.
  selftests/bpf: Improve inner_map test coverage.
  selftests/bpf: Convert map_ptr_kern test to use light skeleton.
  selftests/bpf: Additional test for CO-RE in the kernel.
  selftests/bpf: Revert CO-RE removal in test_ksyms_weak.
  selftests/bpf: Add CO-RE relocations to verifier scale test.
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
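An illustrative BPF-side CO-RE field access of the kind these relocations describe (this example is mine, not from the set; it assumes a vmlinux.h generated by bpftool):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>
  #include <bpf/bpf_core_read.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("fentry/filp_close")
  int BPF_PROG(trace_close, struct file *file)
  {
      /* Each field offset below is recorded as a CO-RE relocation, so
       * the loader (libbpf, or the kernel itself with this set plus a
       * light skeleton) patches the real offsets against the running
       * kernel's BTF instead of trusting compile-time constants. */
      const unsigned char *name = BPF_CORE_READ(file, f_path.dentry,
                                                d_name.name);

      bpf_printk("close: %s", name);
      return 0;
  }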
-
Alexei Starovoitov authored
Add 182 CO-RE relocations to verifier scale test. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-18-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
The commit 087cba79 ("selftests/bpf: Add weak/typeless ksym test for light skeleton") added test_ksyms_weak to light skeleton testing, but removed the CO-RE access. Revert that part of the commit, since the light skeleton can now use CO-RE in the kernel. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-17-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Add a test where the randmap() function is appended to three different bpf programs. That exercises the struct bpf_core_relo replication logic and the offset adjustment in the gen_loader part of libbpf. A fourth bpf program has 360 CO-RE relocations from vmlinux, bpf_testmod, and a non-existing type. It tests the candidate cache logic. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-16-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
To exercise CO-RE in the kernel further, convert the map_ptr_kern test to use a light skeleton. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-15-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Check that hash and array inner maps are properly initialized. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-14-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Add light skeleton version of kfunc_call_test_subprog test. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-13-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
The gen_loader has to clear attach_kind, otherwise programs without attach_btf_id will fail to load if they follow programs with attach_btf_id. Fixes: 67234743 ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-12-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Add ability to initialize inner maps in light skeleton. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-11-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Without lskel, the CO-RE relocations are processed by libbpf before any other work is done. Instead, when lskel is needed, remember each relocation as a RELO_CORE kind. Then, when the loader prog is generated for a given bpf program, pass the CO-RE relos of that program to the gen loader via bpf_gen__record_relo_core(). The gen loader will remember them as-is and later pass them as-is into the kernel. The normal libbpf flow is to process CO-RE relocations early, before call relos happen. In the case of gen_loader, the CO-RE relos have to be added to the other relos so they are copied together when a static bpf function is appended in different places to other main bpf progs. During the copy, append_subprog_relos() adjusts insn_idx for normal relos and for the RELO_CORE kind too. When that is done, each struct reloc_desc holds correct relos for its specific main prog. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-10-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Given a BPF program's BTF root type name, perform the following steps (a sketch of this flow follows the list):
. search in the vmlinux candidate cache.
. if (present in cache and candidate list >= 1) return candidate list.
. do a linear search through kernel BTFs for possible candidates.
. regardless of the number of candidates found, populate the vmlinux cache.
. if (candidate list >= 1) return candidate list.
. search in the module candidate cache.
. if (present in cache) return candidate list (even if the list is empty).
. do a linear search through the BTFs of all kernel modules, collecting candidates from all of them.
. regardless of the number of candidates found, populate the module cache.
. return candidate list.

Then wire the result into bpf_core_apply_relo_insn(). When a BPF program is trying to CO-RE relocate a type that doesn't exist in either the vmlinux BTF or in module BTFs, these steps will perform only 2 cache lookups once the caches are populated. Note the cache doesn't prevent abuse by a program that has lots of relocations that cannot be resolved; hence the cond_resched(). CO-RE in the kernel requires CAP_BPF, since BTF loading requires it.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-9-alexei.starovoitov@gmail.com
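The control flow above as a compact sketch (all helper names and types here are hypothetical, purely to make the two-level cache order concrete):

  /* Hypothetical sketch of the vmlinux-then-modules candidate lookup. */
  static struct core_cands *find_cands(const char *root_name)
  {
      struct core_cands *cands;

      cands = cache_lookup(&vmlinux_cand_cache, root_name);
      if (cands && cands->cnt >= 1)
          return cands;

      cands = scan_btf(vmlinux_btf, root_name);
      cache_populate(&vmlinux_cand_cache, root_name, cands);
      if (cands->cnt >= 1)
          return cands;

      cands = cache_lookup(&module_cand_cache, root_name);
      if (cands)                    /* a cached empty list is also final */
          return cands;

      for_each_module_btf(btf)      /* hypothetical iterator */
          scan_btf_into(btf, root_name, &cands);
      cache_populate(&module_cand_cache, root_name, cands);
      return cands;
  }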
-
Andrii Nakryiko authored
Remove two redundant fields from struct bpf_core_cand. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-8-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Make the BTF log size limit the same as the verifier log size limit. Otherwise, tools that progressively increase the log size and use the same log for BTF loading and program loading will hit a hard-to-debug EINVAL. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-7-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
struct bpf_core_relo is generated by LLVM and processed by libbpf. It's a de-facto uapi. With CO-RE in the kernel, struct bpf_core_relo becomes uapi de-jure. Add the ability to pass a set of 'struct bpf_core_relo' to the prog_load command and let the kernel perform CO-RE relocations. Note that struct bpf_line_info and struct bpf_func_info have the same layout when passed from LLVM to libbpf and from libbpf to the kernel, except that their "insn_off" field means "byte offset" when LLVM generates it; libbpf then converts it to an "insn index" before passing it to the kernel. The struct bpf_core_relo's "insn_off" field is always a "byte offset". Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-6-alexei.starovoitov@gmail.com
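For reference, the record has this shape in the UAPI header after this set (field comments paraphrased; include/uapi/linux/bpf.h is authoritative):

  struct bpf_core_relo {
      __u32 insn_off;        /* offset, in bytes, of the insn to patch */
      __u32 type_id;         /* BTF type id of the "root" (local) type */
      __u32 access_str_off;  /* offset of the access spec string, e.g. "0:1:0:5" */
      enum bpf_core_relo_kind kind;  /* what to relocate: field offset, size, ... */
  };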
-
Alexei Starovoitov authored
enum bpf_core_relo_kind is generated by LLVM and processed by libbpf. It's a de-facto uapi. With CO-RE in the kernel, the bpf_core_relo_kind values become uapi de-jure. Also rename them with a BPF_CORE_ prefix to distinguish them from conflicting names in bpf_core_read.h. The enums bpf_field_info_kind, bpf_type_id_kind, bpf_type_info_kind, and bpf_enum_value_kind pass different values from the bpf program into llvm. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-5-alexei.starovoitov@gmail.com
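The renamed kinds, as I understand the resulting UAPI enum (values reproduced as an assumption; the header is authoritative):

  enum bpf_core_relo_kind {
      BPF_CORE_FIELD_BYTE_OFFSET = 0,  /* field byte offset */
      BPF_CORE_FIELD_BYTE_SIZE = 1,    /* field size in bytes */
      BPF_CORE_FIELD_EXISTS = 2,       /* field existence in target kernel */
      BPF_CORE_FIELD_SIGNED = 3,       /* field signedness */
      BPF_CORE_FIELD_LSHIFT_U64 = 4,   /* bitfield-specific left bitshift */
      BPF_CORE_FIELD_RSHIFT_U64 = 5,   /* bitfield-specific right bitshift */
      BPF_CORE_TYPE_ID_LOCAL = 6,      /* type ID in local BPF object */
      BPF_CORE_TYPE_ID_TARGET = 7,     /* type ID in target kernel */
      BPF_CORE_TYPE_EXISTS = 8,        /* type existence in target kernel */
      BPF_CORE_TYPE_SIZE = 9,          /* type size in bytes */
      BPF_CORE_ENUMVAL_EXISTS = 10,    /* enum value existence in target */
      BPF_CORE_ENUMVAL_VALUE = 11,     /* enum value integer value */
  };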
-
Alexei Starovoitov authored
Make relo_core.c compile for the kernel and for user space libbpf. Note the patch reduces BPF_CORE_SPEC_MAX_LEN from 64 to 32. This is the maximum number of nested structs and arrays. For example:

  struct sample {
      int a;
      struct {
          int b[10];
      };
  };

  struct sample *s = ...;
  int *y = &s->b[5];

This field access is encoded as "0:1:0:5" (deref s, member 1 is the anonymous struct, its member 0 is b, array index 5), so the spec len is 4. A follow-up patch might bump it back to 64.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-4-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Rename btf_member_bit_offset() and btf_member_bitfield_size() to avoid conflicts with similarly named helpers in libbpf's btf.h. Rename the kernel helpers, since libbpf helpers are part of uapi. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-3-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
To prepare relo_core.c to be compiled in the kernel and in user space, replace btf__type_by_id() with btf_type_by_id(). In libbpf, btf__type_by_id() and btf_type_by_id() behave differently: bpf_core_apply_relo_insn() needs the behavior of the uapi btf__type_by_id() rather than the internal btf_type_by_id(), but since the type_id range check is already done in bpf_core_apply_relo(), it's safe to replace it everywhere. The kernel's btf_type_by_id() does the check anyway; it doesn't hurt. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-2-alexei.starovoitov@gmail.com
-
- 01 Dec, 2021 2 commits
-
-
Alexander Lobakin authored
Fix the following samples/bpf build error, which appeared after the introduction of bpf_map_create() in libbpf:

  CC  samples/bpf/fds_example.o
  samples/bpf/fds_example.c:49:12: error: static declaration of 'bpf_map_create' follows non-static declaration
  static int bpf_map_create(void)
             ^
  samples/bpf/libbpf/include/bpf/bpf.h:55:16: note: previous declaration is here
  LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
                 ^
  samples/bpf/fds_example.c:82:23: error: too few arguments to function call, expected 6, have 0
          fd = bpf_map_create();
               ~~~~~~~~~~~~~~ ^
  samples/bpf/libbpf/include/bpf/bpf.h:55:16: note: 'bpf_map_create' declared here
  LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
                 ^
  2 errors generated.

fds_example by accident has a static function with the same name. It's not worth it to separate a single call into its own function, so just embed it.

Fixes: 992c4225 ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()")
Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20211201164931.47357-1-alexandr.lobakin@intel.com
-
Hou Tao authored
An extra newline is output by bpf_log() with the BPF_LOG_KERNEL level, as shown below:

  [   52.095704] BPF:The function test_3 has 12 arguments. Too many.
  [   52.095704]
  [   52.096896] Error in parsing func ptr test_3 in struct bpf_dummy_ops

All bpf_log() messages already end with a newline, but not all btf_verifier_log() messages do, so check whether the log message has a trailing newline and add one if not. Also there is no need to calculate the remaining userspace buffer size for kernel log output, nor to truncate the output with '\0' (vscnprintf() has already done that), so only do these for userspace log output.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20211201073458.2731595-2-houtao1@huawei.com
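The fix is, in essence, of this shape in bpf_log() (a sketch of the described logic, not necessarily the exact hunk; n is vscnprintf()'s return value):

  if (log->level == BPF_LOG_KERNEL) {
      bool newline = n > 0 && log->kbuf[n - 1] == '\n';

      /* vscnprintf() already NUL-terminated kbuf; just make sure the
       * printk output ends with exactly one newline. */
      pr_err("BPF:%s%s", log->kbuf, newline ? "" : "\n");
      return;
  }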
-
- 30 Nov, 2021 3 commits
-
-
Andrii Nakryiko authored
Kumar Kartikeya says: ==================== Three commits addressing comments for the typeless/weak ksym set. No functional change intended. Hopefully this is simpler to read for kfunc as well. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Kumar Kartikeya Dwivedi authored
Alexei pointed out that we can use BPF_REG_0, which already contains imm from the move_blob2blob computation. Note that we now compare the second insn's imm, but this should not matter, since both will be zeroed out in the error case for the insn populated earlier. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211122235733.634914-4-memxor@gmail.com
-
Kumar Kartikeya Dwivedi authored
Instead, jump directly to the success-case stores when ret >= 0; otherwise do the default 0 value store and jump over the success case. This is better in terms of readability. Readjust the code for the kfunc relocation as well to follow a similar pattern; this also leads to easier-to-follow code. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211122235733.634914-3-memxor@gmail.com
-