- 10 Dec, 2021 15 commits
-
-
Shuyi Cheng authored
Fix error: "failed to pin map: Bad file descriptor, path: /sys/fs/bpf/_rodata_str1_1." In the old kernel, the global data map will not be created, see [0]. So we should skip the pinning of the global data map to avoid bpf_object__pin_maps returning error. Therefore, when the map is not created, we mark “map->skipped" as true and then check during relocation and during pinning. Fixes: 16e0c35c ("libbpf: Load global data maps lazily on legacy kernels") Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Vincent Minet authored
The btf__dedup_deprecated name was misspelled in the definition of the compat symbol for btf__dedup, which led to it being missing from the shared library. This fixes it. Fixes: 957d350a ("libbpf: Turn btf_dedup_opts into OPTS-based struct") Signed-off-by: Vincent Minet <vincent@vincent-minet.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211210063112.80047-1-vincent@vincent-minet.net
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Add new open options and per-program setters to control BTF and program loading log verboseness and allow providing custom log buffers to capture logs of interest. Note how custom log_buf and log_level are orthogonal, which matches previous (alas less customizable) behavior of libbpf, even though it sort of worked by accident: if someone specified log_level = 1 in bpf_object__load_xattr(), the first attempt to load any BPF program resulted in a wasted bpf() syscall with -EINVAL due to !!log_buf != !!log_level. Then on retry libbpf would allocate a log buffer and try again, after which prog loading would succeed and libbpf would print the verbose program loading log through its print callback. This behavior is now documented and made more efficient, avoiding the wasted syscall. Additionally, log_level can be controlled at the bpf_object level through bpf_object_open_opts, as well as on a per-program basis with the bpf_program__set_log_buf() and bpf_program__set_log_level() APIs. Now that we have a more future-proof way to set log_level, deprecate bpf_object__load_xattr(). v2->v3: - added log_buf selftests for bpf_prog_load() and bpf_btf_load(); - fix !log_buf in bpf_prog_load (John); - fix log_level==0 in bpf_btf_load (thanks selftest!); v1->v2: - fix log_level == 0 handling of bpf_prog_load, add as patch #1 (Alexei); - add comments explaining log_buf_size overflow prevention (Alexei). ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Andrii Nakryiko authored
Switch all the uses of to-be-deprecated bpf_object__load_xattr() into a simple bpf_object__load() calls with optional log_level passed through open_opts.kernel_log_level, if -d option is specified. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-13-andrii@kernel.org
-
Andrii Nakryiko authored
Switch from bpf_object__load_xattr() to bpf_object__load() and kernel_log_level in bpf_object_open_opts. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-12-andrii@kernel.org
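A hedged sketch of what this migration looks like in a caller: instead of bpf_object__load_xattr() with a .log_level field, the log level is passed at open time and plain bpf_object__load() is used (the file name and the 'verbose' flag are illustrative):

	LIBBPF_OPTS(bpf_object_open_opts, open_opts,
		    .kernel_log_level = verbose ? 1 : 0);
	struct bpf_object *obj;
	int err;

	obj = bpf_object__open_file("prog.bpf.o", &open_opts);
	err = libbpf_get_error(obj);
	if (err)
		return err;

	err = bpf_object__load(obj);	/* verifier log is emitted via libbpf's print callback */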
-
Andrii Nakryiko authored
Add a selftest that validates that per-program and per-object log_buf overrides work as expected. Also test same logic for low-level bpf_prog_load() and bpf_btf_load() APIs. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-11-andrii@kernel.org
-
Andrii Nakryiko authored
Switch all selftests uses of to-be-deprecated bpf_load_btf() with equivalent bpf_btf_load() calls. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-10-andrii@kernel.org
-
Andrii Nakryiko authored
Deprecate non-extensible bpf_object__load_xattr() in v0.8 ([0]). With log_level control through bpf_object_open_opts or bpf_program__set_log_level(), we are finally at the point where bpf_object__load_xattr() doesn't provide any functionality that can't be accessed through other (better) ways. The other feature, target_btf_path, is also controllable through bpf_object_open_opts. [0] Closes: https://github.com/libbpf/libbpf/issues/289 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-9-andrii@kernel.org
-
Andrii Nakryiko authored
Allow setting a user-provided log buffer on a per-program basis ([0]). This gives a great deal of flexibility in terms of which programs are loaded with logging enabled and where the corresponding logs go. A log buffer set with bpf_program__set_log_buf() overrides the kernel_log_buf and kernel_log_size settings set at bpf_object open time through bpf_object_open_opts, if any. Adjust bpf_object_load_prog_instance() logic to not perform its own log buf allocation and load retry if a custom log buffer is provided by the user. [0] Closes: https://github.com/libbpf/libbpf/issues/418 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-8-andrii@kernel.org
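A hedged usage sketch of the per-program buffer; the buffer size and program name are illustrative, and obj is assumed to be an already-opened bpf_object:

	static char verifier_log[256 * 1024];
	struct bpf_program *prog;

	prog = bpf_object__find_program_by_name(obj, "handler");
	bpf_program__set_log_buf(prog, verifier_log, sizeof(verifier_log));

	if (bpf_object__load(obj))
		fprintf(stderr, "verifier log for 'handler':\n%s\n", verifier_log);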
-
Andrii Nakryiko authored
Instead of rewriting the error code returned by the kernel during prog load with libbpf-specific variants, pass through the original error. There is now also no need to have a backup generic -LIBBPF_ERRNO__LOAD fallback error, as bpf_prog_load() guarantees that errno will be properly set no matter what. Also drop the completely outdated and pretty useless BPF_PROG_TYPE_KPROBE guess logic. It's not necessary, nor is it helpful in modern BPF applications. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-7-andrii@kernel.org
-
Andrii Nakryiko authored
Add missing "prog '%s': " prefixes in few places and use consistently markers for beginning and end of program load logs. Here's an example of log output: libbpf: prog 'handler': BPF program load failed: Permission denied libbpf: -- BEGIN PROG LOAD LOG --- arg#0 reference type('UNKNOWN ') size cannot be determined: -22 ; out1 = in1; 0: (18) r1 = 0xffffc9000cdcc000 2: (61) r1 = *(u32 *)(r1 +0) ... 81: (63) *(u32 *)(r4 +0) = r5 R1_w=map_value(id=0,off=16,ks=4,vs=20,imm=0) R4=map_value(id=0,off=400,ks=4,vs=16,imm=0) invalid access to map value, value_size=16 off=400 size=4 R4 min value is outside of the allowed memory range processed 63 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0 -- END PROG LOAD LOG -- libbpf: failed to load program 'handler' libbpf: failed to load object 'test_skeleton' The entire verifier log, including BEGIN and END markers are now always youtput during a single print callback call. This should make it much easier to post-process or parse it, if necessary. It's not an explicit API guarantee, but it can be reasonably expected to stay like that. Also __bpf_object__open is renamed to bpf_object_open() as it's always an adventure to find the exact function that implements bpf_object's open phase, so drop the double underscored and use internal libbpf naming convention. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-6-andrii@kernel.org
-
Andrii Nakryiko authored
Allow users to provide their own custom log_buf, log_size, and log_level at the bpf_object level through bpf_object_open_opts. This log_buf will be used during BTF loading. A subsequent patch will use the same log_buf during BPF program loading, unless overridden at the per-bpf_program level. When such a custom log_buf is provided, libbpf won't attempt to retry BTF loading with its own internally allocated log buffer to capture the kernel's error log output. The user is responsible for providing a big enough buffer; otherwise they run the risk of getting an -ENOSPC error from the bpf() syscall. See also the comments in bpf_object_open_opts regarding log_level and log_buf interactions. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-5-andrii@kernel.org
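A hedged sketch of providing an object-level log buffer through the new open options (the buffer size is arbitrary; as noted above, it must be big enough to avoid -ENOSPC):

	static char kernel_log[1024 * 1024];

	LIBBPF_OPTS(bpf_object_open_opts, opts,
		    .kernel_log_buf = kernel_log,
		    .kernel_log_size = sizeof(kernel_log),
		    .kernel_log_level = 0);	/* log collected only when loading fails */

	struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);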
-
Andrii Nakryiko authored
Add a libbpf-internal btf_load_into_kernel() that allows a preallocated log_buf and a custom log_level to be passed into the kernel during the BPF_BTF_LOAD call. When a custom log_buf is provided, btf_load_into_kernel() won't attempt a retry with an automatically allocated internal temporary buffer to capture the BTF validation log. It's important to note the relation between log_buf and log_level, which slightly deviates from the stricter kernel logic. From the kernel's POV, if log_buf is specified, log_level has to be > 0, and vice versa. While the kernel has good reasons to request such sanity, this, in practice, is a bit inconvenient and restrictive for libbpf's high-level bpf_object APIs. So libbpf allows setting a non-NULL log_buf with log_level == 0. This is fine and means: attempt to load BTF without requesting logging, but if that fails, retry the load with the custom log_buf and log_level 1. Similar logic will be implemented for program loading. In practice this means that users can provide a custom log buffer just in case an error happens, without requesting slower verbose logging all the time. This is also consistent with libbpf behavior when a custom log_buf is not set: libbpf first tries to load everything with log_level=0, and only if an error happens allocates an internal log buffer and retries with log_level=1. Also, while at it, make the BTF validation log more obvious and follow the log pattern libbpf is using for dumping the BPF verifier log during BPF_PROG_LOAD. BTF loading resulting in an error will look like this: libbpf: BTF loading error: -22 libbpf: -- BEGIN BTF LOAD LOG --- magic: 0xeb9f version: 1 flags: 0x0 hdr_len: 24 type_off: 0 type_len: 1040 str_off: 1040 str_len: 2063598257 btf_total_size: 1753 Total section length too long -- END BTF LOAD LOG -- libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring. This makes it much easier to find the relevant parts in libbpf log output. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-4-andrii@kernel.org
-
Andrii Nakryiko authored
Similar to the previous bpf_prog_load() and bpf_map_create() APIs, add a bpf_btf_load() API which takes an optional OPTS struct. Schedule bpf_load_btf() for deprecation in v0.8 ([0]). This makes naming consistent with the BPF_BTF_LOAD command, sets up an API for extensibility in the future, moves options parameters (log-related fields) into optional options, and also allows passing log_level directly. It also removes the log buffer auto-allocation logic from the low-level API (consistent with bpf_prog_load() behavior), but preserves the special treatment of log_level == 0 with a non-NULL log_buf, which matches low-level bpf_prog_load() and high-level libbpf APIs for BTF and program loading behaviors. [0] Closes: https://github.com/libbpf/libbpf/issues/419 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-3-andrii@kernel.org
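A hedged example of the new low-level API; raw_btf_data and raw_btf_size stand in for an already-built BTF blob prepared elsewhere:

	char btf_log[64 * 1024];
	LIBBPF_OPTS(bpf_btf_load_opts, opts,
		    .log_buf = btf_log,
		    .log_size = sizeof(btf_log),
		    .log_level = 1);	/* request the validation log up front */

	int btf_fd = bpf_btf_load(raw_btf_data, raw_btf_size, &opts);
	if (btf_fd < 0)
		fprintf(stderr, "BTF load failed: %d\n%s\n", -errno, btf_log);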
-
Andrii Nakryiko authored
To unify libbpf APIs behavior w.r.t. log_buf and log_level, fix bpf_prog_load() to follow the same logic that bpf_btf_load() uses and that the high-level bpf_object__load() API will follow in the subsequent patches: - if log_level is 0 and a non-NULL log_buf is provided by the user, attempt the load operation initially with no log_buf and log_level set; - if successful, we are done, return the new FD; - on error, retry the load operation with log_level bumped to 1 and log_buf set; this way verbose logging is requested only when we are sure that there is a failure, while the common/expected success case stays fast. Of course, the user can still specify log_level > 0 from the very beginning to force log collection. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-2-andrii@kernel.org
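A hedged illustration of calling the fixed low-level API with log_level 0 and a user log buffer (insns and insn_cnt are assumed to be prepared elsewhere); on failure, the retry described above fills log_buf:

	char log_buf[64 * 1024];
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		    .log_buf = log_buf,
		    .log_size = sizeof(log_buf),
		    .log_level = 0);	/* fast path first, verbose log only on failure */

	int prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, "my_prog", "GPL",
				    insns, insn_cnt, &opts);
	if (prog_fd < 0)
		fprintf(stderr, "prog load failed:\n%s\n", log_buf);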
-
- 09 Dec, 2021 3 commits
-
-
Minghao Chi authored
Return the value directly instead of storing it in a redundant intermediate variable first. Reported-by: Zeal Robot <zealci@zte.com.cm> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211209080051.421844-1-chi.minghao@zte.com.cn
-
Colin Ian King authored
The pointer t is being initialized with a value that is never read. The pointer is re-assigned a value a little later on, hence the initialization is redundant and can be removed. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211207224718.59593-1-colin.i.king@gmail.com
-
Yonghong Song authored
The following warning is triggered when I used the clang compiler to build the selftest. /.../prog_tests/btf_dedup_split.c:368:6: warning: variable 'btf2' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (!ASSERT_OK(err, "btf_dedup")) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ /.../prog_tests/btf_dedup_split.c:424:12: note: uninitialized use occurs here btf__free(btf2); ^~~~ /.../prog_tests/btf_dedup_split.c:368:2: note: remove the 'if' if its condition is always false if (!ASSERT_OK(err, "btf_dedup")) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /.../prog_tests/btf_dedup_split.c:343:25: note: initialize the variable 'btf2' to silence this warning struct btf *btf1, *btf2; ^ = NULL Initialize the local variable btf2 to NULL and the warning is gone. Fixes: 9a49afe6 ("selftests/bpf: Add btf_dedup case with duplicated structs within CU") Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211209050403.1770836-1-yhs@fb.com
-
- 08 Dec, 2021 1 commit
-
-
Song Liu authored
bpf_create_map is deprecated. Replace it with bpf_map_create. Also add a __weak bpf_map_create() so that when older version of libbpf is linked as a shared library, it falls back to bpf_create_map(). Fixes: 992c4225 ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()") Signed-off-by: Song Liu <song@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211207232340.2561471-1-song@kernel.org
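A hedged before/after sketch of the replacement mentioned above (map type and sizes are illustrative):

	/* deprecated */
	map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(__u32), sizeof(__u64), 1024, 0);

	/* replacement */
	map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, NULL /* name */,
				sizeof(__u32), sizeof(__u64), 1024, NULL /* opts */);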
-
- 07 Dec, 2021 4 commits
-
-
Andrii Nakryiko authored
Alexander Lobakin says: ==================== Samples, at least the XDP ones, can be built only with the compiler used to build the kernel itself. However, the XDP sample infra introduced in Aug'21 was probably tested with GCC/Binutils only, as it's currently not compilable with Clang/LLVM. These two are trivial fixes addressing this. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Alexander Lobakin authored
Clang doesn't have a 'stringop-truncation' warning group like GCC does, and complains about it when building samples which use the xdp_sample_user infra: samples/bpf/xdp_sample_user.h:48:32: warning: unknown warning group '-Wstringop-truncation', ignored [-Wunknown-warning-option] #pragma GCC diagnostic ignored "-Wstringop-truncation" ^ [ repeat ] Those are harmless, but avoidable by guarding the pragma with an ifdef. I could guard push/pop as well, but this would require one more piece of ifdef cruft around a single line, which I don't think is reasonable. Fixes: 156f886c ("samples: bpf: Add basic infrastructure for XDP samples") Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20211203195004.5803-3-alexandr.lobakin@intel.com
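A hedged sketch of the guard described above: only the GCC-specific 'ignored' pragma is wrapped, while push/pop stay unconditional (dst and src are placeholders):

	#pragma GCC diagnostic push
	#ifndef __clang__
	#pragma GCC diagnostic ignored "-Wstringop-truncation"
	#endif
		strncpy(dst, src, sizeof(dst) - 1);
	#pragma GCC diagnostic pop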
-
Alexander Lobakin authored
Clang (13) doesn't get the jokes about specifying libraries to link in ccflags of individual .o objects: clang-13: warning: -lm: 'linker' input unused [-Wunused-command-line-argument] [ ... ] LD samples/bpf/xdp_redirect_cpu LD samples/bpf/xdp_redirect_map_multi LD samples/bpf/xdp_redirect_map LD samples/bpf/xdp_redirect LD samples/bpf/xdp_monitor /usr/bin/ld: samples/bpf/xdp_sample_user.o: in function `sample_summary_print': xdp_sample_user.c:(.text+0x84c): undefined reference to `floor' /usr/bin/ld: xdp_sample_user.c:(.text+0x870): undefined reference to `ceil' /usr/bin/ld: xdp_sample_user.c:(.text+0x8cf): undefined reference to `floor' /usr/bin/ld: xdp_sample_user.c:(.text+0x8f3): undefined reference to `ceil' [ more ] Specify '-lm' as ldflags for all xdp_sample_user.o users in the main Makefile and remove it from ccflags of ^ in Makefile.target -- just like it's done for all other samples. This works with all compilers. Fixes: 6e1051a5 ("samples: bpf: Convert xdp_monitor to XDP samples helper") Fixes: b926c55d ("samples: bpf: Convert xdp_redirect to XDP samples helper") Fixes: e531a220 ("samples: bpf: Convert xdp_redirect_cpu to XDP samples helper") Fixes: bbe65865 ("samples: bpf: Convert xdp_redirect_map to XDP samples helper") Fixes: 594a116b ("samples: bpf: Convert xdp_redirect_map_multi to XDP samples helper") Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20211203195004.5803-2-alexandr.lobakin@intel.com
-
Alexei Starovoitov authored
When CONFIG_DEBUG_INFO_BTF_MODULES is not set the following warning can be seen: kernel/bpf/btf.c:6588:13: warning: 'purge_cand_cache' defined but not used [-Wunused-function] Fix it. Fixes: 1e89106d ("bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211207014839.6976-1-alexei.starovoitov@gmail.com
-
- 06 Dec, 2021 3 commits
-
-
Grant Seltzer authored
This adds comments above functions in libbpf.h which document their uses. These comments are of a format that doxygen and sphinx can pick up and render. These are rendered by libbpf.readthedocs.org These doc comments are for: - bpf_object__open_file() - bpf_object__open_mem() - bpf_program__attach_uprobe() - bpf_program__attach_uprobe_opts() Signed-off-by: Grant Seltzer <grantseltzer@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211206203709.332530-1-grantseltzer@gmail.com
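A hedged illustration of the doc-comment style these patches use (the exact wording in libbpf.h may differ); doxygen and sphinx pick up the @brief/@param/@return markers:

	/**
	 * @brief **bpf_object__open_file()** creates a bpf_object by opening
	 * the BPF ELF object file pointed to by the passed path and loading it
	 * into memory.
	 * @param path BPF object file path
	 * @param opts options for how to load the bpf object, this parameter is
	 * optional and can be set to NULL
	 * @return pointer to the new bpf_object; or NULL is returned on error,
	 * error code is stored in errno
	 */
	LIBBPF_API struct bpf_object *
	bpf_object__open_file(const char *path, const struct bpf_object_open_opts *opts);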
-
huangxuesen authored
Fix typo in comment from 'bpf_skeleton_map' to 'bpf_map_skeleton' and from 'bpf_skeleton_prog' to 'bpf_prog_skeleton'. Signed-off-by: huangxuesen <huangxuesen@kuaishou.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1638755236-3851199-1-git-send-email-hxseverything@gmail.com
-
Kajol Jain authored
Branch data available to BPF programs can be very useful to get stack traces out of a userspace application. Commit fff7b643 ("bpf: Add bpf_read_branch_records() helper") added BPF support to capture branch records on x86. Enable this feature for other architectures as well by removing the checks specific to x86. If an architecture doesn't support branch records, bpf_read_branch_records() still has appropriate checks and it will return -EINVAL in that scenario. Based on the UAPI helper doc in include/uapi/linux/bpf.h, unsupported architectures should return -ENOENT in such a case. Hence, update the appropriate check to return -ENOENT instead. Selftest 'perf_branches' result on a power9 machine which has branch stack support: - Before this patch: [command]# ./test_progs -t perf_branches #88/1 perf_branches/perf_branches_hw:FAIL #88/2 perf_branches/perf_branches_no_hw:OK #88 perf_branches:FAIL Summary: 0/1 PASSED, 0 SKIPPED, 1 FAILED - After this patch: [command]# ./test_progs -t perf_branches #88/1 perf_branches/perf_branches_hw:OK #88/2 perf_branches/perf_branches_no_hw:OK #88 perf_branches:OK Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED Selftest 'perf_branches' result on a power9 machine which doesn't have branch stack support: - After this patch: [command]# ./test_progs -t perf_branches #88/1 perf_branches/perf_branches_hw:SKIP #88/2 perf_branches/perf_branches_no_hw:OK #88 perf_branches:OK Summary: 1/1 PASSED, 1 SKIPPED, 0 FAILED Fixes: fff7b643 ("bpf: Add bpf_read_branch_records() helper") Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211206073315.77432-1-kjain@linux.ibm.com
-
- 05 Dec, 2021 1 commit
-
-
Alexei Starovoitov authored
Make -d flag functional for gen_loader style program loading. For example: $ bpftool prog load -L -d test_d_path.o ... // will print: libbpf: loading ./test_d_path.o libbpf: elf: section(3) fentry/security_inode_getattr, size 280, link 0, flags 6, type=1 ... libbpf: prog 'prog_close': found data map 0 (test_d_p.bss, sec 7, off 0) for insn 30 libbpf: gen: load_btf: size 5376 libbpf: gen: map_create: test_d_p.bss idx 0 type 2 value_type_id 118 libbpf: map 'test_d_p.bss': created successfully, fd=0 libbpf: gen: map_update_elem: idx 0 libbpf: sec 'fentry/filp_close': found 1 CO-RE relocations libbpf: record_relo_core: prog 1 insn[15] struct file 0:1 final insn_idx 15 libbpf: gen: prog_load: type 26 insns_cnt 35 progi_idx 0 libbpf: gen: find_attach_tgt security_inode_getattr 12 libbpf: gen: prog_load: type 26 insns_cnt 37 progi_idx 1 libbpf: gen: find_attach_tgt filp_close 12 libbpf: gen: finish 0 ... // at this point libbpf finished generating loader program 0: (bf) r6 = r1 1: (bf) r1 = r10 2: (07) r1 += -136 3: (b7) r2 = 136 4: (b7) r3 = 0 5: (85) call bpf_probe_read_kernel#113 6: (05) goto pc+104 ... // this is the assembly dump of the loader program 390: (63) *(u32 *)(r6 +44) = r0 391: (18) r1 = map[idx:0]+5584 393: (61) r0 = *(u32 *)(r1 +0) 394: (63) *(u32 *)(r6 +24) = r0 395: (b7) r0 = 0 396: (95) exit err 0 // the loader program was loaded and executed successfully (null) func#0 @0 ... // CO-RE in the kernel logs: CO-RE relocating STRUCT file: found target candidate [500] prog '': relo #0: kind <byte_off> (0), spec is [8] STRUCT file.f_path (0:1 @ offset 16) prog '': relo #0: matching candidate #0 [500] STRUCT file.f_path (0:1 @ offset 16) prog '': relo #0: patched insn #15 (ALU/ALU64) imm 16 -> 16 vmlinux_cand_cache:[11]file(500), module_cand_cache: ... // verifier logs when it was checking test_d_path.o program: R1 type=ctx expected=fp 0: R1=ctx(id=0,off=0,imm=0) R10=fp0 ; int BPF_PROG(prog_close, struct file *file, void *id) 0: (79) r6 = *(u64 *)(r1 +0) func 'filp_close' arg0 has btf_id 500 type STRUCT 'file' 1: R1=ctx(id=0,off=0,imm=0) R6_w=ptr_file(id=0,off=0,imm=0) R10=fp0 ; pid_t pid = bpf_get_current_pid_tgid() >> 32; 1: (85) call bpf_get_current_pid_tgid#14 ... // if there are multiple programs being loaded by the loader program ... // only the last program in the elf file will be printed, since ... // the same verifier log_buf is used for all PROG_LOAD commands. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211204194623.27779-1-alexei.starovoitov@gmail.com
-
- 04 Dec, 2021 1 commit
-
-
Hou Tao authored
BPF_LOG_KERNEL is only used internally, so disallow bpf_btf_load() from setting the log level to BPF_LOG_KERNEL. The same check has already been done in bpf_check(), so factor out a helper to check the validity of the log attributes and use it in both places. Fixes: 8580ac94 ("bpf: Process in-kernel BTF") Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20211203053001.740945-1-houtao1@huawei.com
-
- 03 Dec, 2021 3 commits
-
-
Maxim Mikityanskiy authored
The test for bpf_iter_task_vma assumes that the output will be longer than 1 kB, as the comment above the loop says. Due to this assumption, the loop becomes infinite if the output turns out to be shorter than 1 kB. The return value of read_fd_into_buffer() is 0 when the end of file is reached, and len isn't increased any more. This commit adds a break on EOF to handle short output correctly. For reference, these are the contents I get when running test_progs under vmtest.sh, and they are shorter than 1 kB: 00400000-00401000 r--p 00000000 fe:00 25867 /root/bpf/test_progs 00401000-00674000 r-xp 00001000 fe:00 25867 /root/bpf/test_progs 00674000-0095f000 r--p 00274000 fe:00 25867 /root/bpf/test_progs 0095f000-00983000 r--p 0055e000 fe:00 25867 /root/bpf/test_progs 00983000-00a8a000 rw-p 00582000 fe:00 25867 /root/bpf/test_progs 00a8a000-0484e000 rw-p 00000000 00:00 0 7f6c64000000-7f6c64021000 rw-p 00000000 00:00 0 7f6c64021000-7f6c68000000 ---p 00000000 00:00 0 7f6c6ac8f000-7f6c6ac90000 r--s 00000000 00:0d 8032 anon_inode:bpf-map 7f6c6ac90000-7f6c6ac91000 ---p 00000000 00:00 0 7f6c6ac91000-7f6c6b491000 rw-p 00000000 00:00 0 7f6c6b491000-7f6c6b492000 r--s 00000000 00:0d 8032 anon_inode:bpf-map 7f6c6b492000-7f6c6b493000 rw-s 00000000 00:0d 8032 anon_inode:bpf-map 7ffc1e23d000-7ffc1e25e000 rw-p 00000000 00:00 0 7ffc1e3b8000-7ffc1e3bc000 r--p 00000000 00:00 0 7ffc1e3bc000-7ffc1e3bd000 r-xp 00000000 00:00 0 7fffffffe000-7ffffffff000 --xp 00000000 00:00 0 Fixes: e8168840 ("selftests/bpf: Add test for bpf_iter_task_vma") Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211130181811.594220-1-maximmi@nvidia.com
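A hedged sketch of the fixed read loop; the buffer and helper names follow the selftest as described above but may not match it exactly. A read_fd_into_buffer() return of 0 signals EOF, so the loop must break instead of spinning:

	len = 0;
	while (len < CMP_BUFFER_SIZE) {
		err = read_fd_into_buffer(iter_fd, task_vma_output + len,
					  CMP_BUFFER_SIZE - len);
		if (!err)	/* EOF: iterator output was shorter than the buffer */
			break;
		if (err < 0)
			goto out;
		len += err;
	}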
-
Alexei Starovoitov authored
Reduce bpf_core_apply_relo_insn() stack usage and bump BPF_CORE_SPEC_MAX_LEN limit back to 64. Fixes: 29db4bea ("bpf: Prepare relo_core.c for kernel duty.") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211203182836.16646-1-alexei.starovoitov@gmail.com
-
Andrii Nakryiko authored
Libbpf development version was bumped to 0.7 in c93faaaf ("libbpf: Deprecate bpf_prog_load_xattr() API"), activating a bunch of previously scheduled deprecations. Most APIs are pretty straightforward to replace with newer APIs, but perf has a complicated mixed setup with libbpf used in both static and shared configurations, which makes it non-trivial to migrate the APIs. Further, bpf_program__set_prep() needs more involved refactoring, which will require help from Arnaldo and/or Jiri. So for now, mute deprecation warnings and work on migrating perf off of deprecated APIs separately with input from the owners of the perf tool. Fixes: c93faaaf ("libbpf: Deprecate bpf_prog_load_xattr() API") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211203004640.2455717-1-andrii@kernel.org
-
- 02 Dec, 2021 9 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== A few lines in the last patch marking bpf_prog_load_xattr() deprecated required a decent amount of cleanups in all the other patches. samples/bpf is a big part of the cleanup. This patch set also bumps the libbpf version to 0.7, as the libbpf v0.6 release will be cut shortly. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
bpf_prog_load_xattr() is a high-level API that's named like the low-level BPF_PROG_LOAD wrapper APIs, but it actually operates on struct bpf_object. It's badly and confusingly misnamed, as it will load all the progs inside the bpf_object, returning the prog_fd of the very first BPF program. It also has a bunch of ad-hoc things like log_level override, map_ifindex auto-setting, etc. All this can be expressed more explicitly and cleanly through existing libbpf APIs. This patch marks bpf_prog_load_xattr() for deprecation in libbpf v0.8 ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/308 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-10-andrii@kernel.org
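A hedged sketch of the explicit replacement: open and load the object, then fetch the fd of the first program, which is roughly what bpf_prog_load_xattr() used to return (the file name is illustrative):

	struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", NULL);
	struct bpf_program *prog;
	int prog_fd;

	if (libbpf_get_error(obj))
		return -1;
	if (bpf_object__load(obj))
		return -1;

	prog = bpf_object__next_program(obj, NULL);	/* first program in the object */
	prog_fd = bpf_program__fd(prog);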
-
Andrii Nakryiko authored
Replace deprecated APIs with new ones. Also mute deprecation warnings in source code using the deprecated AF_XDP APIs (xsk.h). Figuring out what to do with all the AF_XDP stuff is a separate problem that should be solved with its own set of changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-9-andrii@kernel.org
-
Andrii Nakryiko authored
Remove the xdp_samples_user.o rule redefinition which generates a Makefile warning and instead override TPROGS_CFLAGS. This seems to work fine when building inside selftests/bpf. That was one big head-scratcher before I found that the generic Makefile.target hid this surprising specialization for xdp_samples_user.o. The main change is to use the actual locally installed libbpf headers. Also drop the printk macro re-definition (not even used!). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-8-andrii@kernel.org
-
Andrii Nakryiko authored
Migrate all the selftests that were still using bpf_prog_load_xattr(). A few are converted to skeletons, others use the bpf_object__open_file() API. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-7-andrii@kernel.org
-
Andrii Nakryiko authored
xdpxceiver.c is using AF_XDP APIs that are deprecated starting from libbpf 0.7. Until we migrate the test to libxdp or solve this issue in some other way, mute deprecation warnings within xdpxceiver.c. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-6-andrii@kernel.org
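One common way to mute such deprecation warnings locally, shown as a hedged sketch; the exact mechanism used in xdpxceiver.c may differ:

	#pragma GCC diagnostic push
	#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
	#include <bpf/xsk.h>
	#pragma GCC diagnostic pop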
-
Andrii Nakryiko authored
We've added one extra patch that added back the use of legacy btf__dedup() variant. Clean that up. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-5-andrii@kernel.org
-
Andrii Nakryiko authored
Switch to bpf_map_create() API instead. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-4-andrii@kernel.org
-
Andrii Nakryiko authored
Add bpf_program__set_log_level() and bpf_program__log_level() to fetch and adjust log_level sent during BPF_PROG_LOAD command. This allows to selectively request more or less verbose output in BPF verifier log. Also bump libbpf version to 0.7 and make these APIs the first in v0.7. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-3-andrii@kernel.org
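A hedged example of the getter/setter pair added here, bumping verbosity for a single program (prog) before load:

	__u32 lvl = bpf_program__log_level(prog);

	bpf_program__set_log_level(prog, lvl | 2);	/* request full instruction-level verifier log */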
-