- 12 Dec, 2023 9 commits
-
-
YiFei Zhu authored
We're observing test flakiness on an arm64 platform which might not have timestamps as precise as x86. The test log looks like:

  test_time_tai:PASS:tai_open 0 nsec
  test_time_tai:PASS:test_run 0 nsec
  test_time_tai:PASS:tai_ts1 0 nsec
  test_time_tai:PASS:tai_ts2 0 nsec
  test_time_tai:FAIL:tai_forward unexpected tai_forward: actual 1702348135471494160 <= expected 1702348135471494160
  test_time_tai:PASS:tai_gettime 0 nsec
  test_time_tai:PASS:tai_future_ts1 0 nsec
  test_time_tai:PASS:tai_future_ts2 0 nsec
  test_time_tai:PASS:tai_range_ts1 0 nsec
  test_time_tai:PASS:tai_range_ts2 0 nsec
  #199 time_tai:FAIL

This patch changes ASSERT_GT to ASSERT_GE in the tai_forward assertion so that equal timestamps are permitted. Fixes: 64e15820 ("selftests/bpf: Add BPF-helper test for CLOCK_TAI access") Signed-off-by: YiFei Zhu <zhuyifei@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231212182911.3784108-1-zhuyifei@google.com
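For illustration, a minimal sketch of the relaxed check, assuming the selftest harness's ASSERT_GE() macro from test_progs.h and hypothetical ts1/ts2 variables holding the two TAI timestamps:

  #include "test_progs.h"   /* selftest harness; provides ASSERT_GE() */

  /* Hypothetical helper: ts1 and ts2 are CLOCK_TAI timestamps captured via
   * bpf_ktime_get_tai_ns() at two points in the BPF program. */
  static void check_tai_forward(__u64 ts1, __u64 ts2)
  {
      /* ASSERT_GT() demanded ts2 > ts1; ASSERT_GE() also tolerates equal
       * timestamps on clocks with coarse resolution. */
      ASSERT_GE(ts2, ts1, "tai_forward");
  }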
-
Andrei Matei authored
This patch adds a comment to check_mem_size_reg -- a function whose meaning is not very transparent. The function implicitly deals with two registers connected by convention, which is not obvious. Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231210225149.67639-1-andreimatei1@gmail.com
-
Yang Li authored
The functions are defined in the verifier.c file, but not called elsewhere, so delete the unused functions.

  kernel/bpf/verifier.c:3448:20: warning: unused function 'bt_set_slot'
  kernel/bpf/verifier.c:3453:20: warning: unused function 'bt_clear_slot'
  kernel/bpf/verifier.c:3488:20: warning: unused function 'bt_is_slot_set'

Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20231212005436.103829-1-yang.lee@linux.alibaba.com Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=7714
-
Manu Bretelle authored
`fs_kfuncs.c`'s `test_xattr` would fail the test even when the filesystem did not support xattr, for instance when /tmp is mounted as tmpfs. This change checks errno when setxattr fails. If the failure is due to the operation being unsupported, we will skip the test (just like we would if verity was not enabled on the FS). Before the change, the fs_kfuncs test would fail in test_xattr:

  $ vmtest -k $(make -s image_name) './tools/testing/selftests/bpf/test_progs -a fs_kfuncs'
  => bzImage
  ===> Booting
  [ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=128 to nr_cpu_
  ===> Setting up VM
  ===> Running command
  [ 4.157491] bpf_testmod: loading out-of-tree module taints kernel.
  [ 4.161515] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel
  test_xattr:PASS:create_file 0 nsec
  test_xattr:FAIL:setxattr unexpected error: -1 (errno 95)
  #90/1 fs_kfuncs/xattr:FAIL
  #90/2 fs_kfuncs/fsverity:SKIP
  #90 fs_kfuncs:FAIL

  All error logs:
  test_xattr:PASS:create_file 0 nsec
  test_xattr:FAIL:setxattr unexpected error: -1 (errno 95)
  #90/1 fs_kfuncs/xattr:FAIL
  #90 fs_kfuncs:FAIL
  Summary: 0/0 PASSED, 1 SKIPPED, 1 FAILED

Test plan:

  $ touch tmpfs_file && truncate -s 1G tmpfs_file && mkfs.ext4 tmpfs_file

  # /tmp mounted as tmpfs
  $ vmtest -k $(make -s image_name) './tools/testing/selftests/bpf/test_progs -a fs_kfuncs'
  => bzImage
  ===> Booting
  ===> Setting up VM
  ===> Running command
  WARNING! Selftests relying on bpf_testmod.ko will be skipped.
  Can't find bpf_testmod.ko kernel module: -2
  #90/1 fs_kfuncs/xattr:SKIP
  #90/2 fs_kfuncs/fsverity:SKIP
  #90 fs_kfuncs:SKIP
  Summary: 1/0 PASSED, 2 SKIPPED, 0 FAILED

  # /tmp mounted as ext4 with xattr enabled but not verity
  $ vmtest -k $(make -s image_name) 'mount -o loop tmpfs_file /tmp && \
    /tools/testing/selftests/bpf/test_progs -a fs_kfuncs'
  => bzImage
  ===> Booting
  ===> Setting up VM
  ===> Running command
  [ 4.067071] loop0: detected capacity change from 0 to 2097152
  [ 4.191882] EXT4-fs (loop0): mounted filesystem 407ffa36-4553-4c8c-8c78-134443630f69 r/w with ordered data mode. Quota mode: none.
  WARNING! Selftests relying on bpf_testmod.ko will be skipped.
  Can't find bpf_testmod.ko kernel module: -2
  #90/1 fs_kfuncs/xattr:OK
  #90/2 fs_kfuncs/fsverity:SKIP
  #90 fs_kfuncs:OK (SKIP: 1/2)
  Summary: 1/1 PASSED, 1 SKIPPED, 0 FAILED

  $ tune2fs -O verity tmpfs_file

  # /tmp as ext4 with both xattr and verity enabled
  $ vmtest -k $(make -s image_name) 'mount -o loop tmpfs_file /tmp && \
    ./tools/testing/selftests/bpf/test_progs -a fs_kfuncs'
  => bzImage
  ===> Booting
  ===> Setting up VM
  ===> Running command
  [ 4.291434] loop0: detected capacity change from 0 to 2097152
  [ 4.460828] EXT4-fs (loop0): recovery complete
  [ 4.468631] EXT4-fs (loop0): mounted filesystem 7b4a7b7f-c442-4b06-9ede-254e63cceb52 r/w with ordered data mode. Quota mode: none.
  [ 4.988074] fs-verity: sha256 using implementation "sha256-generic"
  WARNING! Selftests relying on bpf_testmod.ko will be skipped.
  Can't find bpf_testmod.ko kernel module: -2
  #90/1 fs_kfuncs/xattr:OK
  #90/2 fs_kfuncs/fsverity:OK
  #90 fs_kfuncs:OK
  Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED

Fixes: 341f06fd ("selftests/bpf: Add tests for filesystem kfuncs") Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20231211180733.763025-1-chantr4@gmail.com
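A sketch of the errno handling this change describes, with a hypothetical xattr name and helper function; it assumes the selftest harness's test__skip() and the standard setxattr(2) from <sys/xattr.h>:

  #include <errno.h>
  #include <stdio.h>
  #include <sys/xattr.h>
  #include "test_progs.h"   /* assumed harness; provides test__skip() */

  /* Returns 0 when the xattr was set or when the FS simply lacks xattr
   * support (in which case the test is skipped instead of failed). */
  static int setxattr_or_skip(const char *path)
  {
      /* "user.test" is an illustrative attribute name, not the one the
       * real fs_kfuncs test uses. */
      if (setxattr(path, "user.test", "hello", 5, 0) == 0)
          return 0;

      if (errno == EOPNOTSUPP) {   /* errno 95: FS without xattr, e.g. tmpfs */
          printf("setxattr unsupported, skipping test\n");
          test__skip();
          return 0;
      }
      return -errno;
  }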
-
Andrii Nakryiko authored
We have a bunch of bool flags for each subprog. Instead of wasting bytes for them, use bitfields instead. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20231204233931.49758-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
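As a rough illustration of the change (the field names below are hypothetical, not the actual bpf_subprog_info members):

  #include <stdbool.h>

  /* Before: every flag costs a full byte (plus padding). */
  struct subprog_flags_before {
      bool is_func;
      bool is_async_cb;
      bool is_exc_cb;
  };

  /* After: the flags are packed into single-bit bitfields sharing one byte. */
  struct subprog_flags_after {
      bool is_func: 1;
      bool is_async_cb: 1;
      bool is_exc_cb: 1;
  };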
-
Andrii Nakryiko authored
Use the fact that we are passing subprog index around and have a corresponding struct bpf_subprog_info in bpf_verifier_env for each subprogram. We don't need to separately pass around a flag whether subprog is an exception callback or not; each relevant verifier function can determine this using the provided subprog index if we maintain bpf_subprog_info properly. Also move out exception callback-specific logic from btf_prepare_func_args(), keeping it generic. We can enforce all these restrictions right before the exception callback verification pass. We add an out parameter, arg_cnt, for now, but this will be unnecessary with subsequent refactoring and will be removed. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20231204233931.49758-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Emit dynptr type for CONST_PTR_TO_DYNPTR register. Also emit id, ref_obj_id, and dynptr_id fields for STACK_DYNPTR stack slots. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20231204233931.49758-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Emit valid memory size addressable through PTR_TO_MEM register. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20231204233931.49758-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add selftest that establishes dead code-eliminated valid global subprog (global_dead) and makes sure that it's not possible to freplace it, as it's effectively not there. This test will fail with unexpected success before 2afae08c ("bpf: Validate global subprogs lazily").

v2->v3:
- add missing err assignment (Alan);
- undo unnecessary signature changes in verifier_global_subprogs.c (Eduard);

v1->v2:
- don't rely on assembly output in verifier log, which changes between compiler versions (CI).

Acked-by: Eduard Zingerman <eddyz87@gmail.com> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/r/20231211174131.2324306-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 11 Dec, 2023 2 commits
-
-
Aleksander Lobakin authored
32 bytes may not be enough for some custom metadata. Relax the restriction: allow metadata larger than 32 bytes and make __skb_metadata_differs() work with bigger lengths. Now the size of metadata is only limited by the fact it is stored as a u8 in skb_shared_info, so the maximum possible value is 255. Size still has to be aligned to 4, so the actual upper limit becomes 252. Most driver implementations will offer less, none can offer more. Other important conditions, such as having enough space for xdp_frame building, are already checked in bpf_xdp_adjust_meta(). Signed-off-by: Aleksander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/eb87653c-8ff8-447d-a7a1-25961f60518a@kernel.org Link: https://lore.kernel.org/bpf/20231206205919.404415-3-larysa.zaremba@intel.com
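For reference, a sketch of an XDP program reserving a metadata area larger than the old 32-byte cap; the 64-byte size, program name, and zero-fill are illustrative only:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  #define META_SIZE 64    /* > 32 bytes, multiple of 4 */

  SEC("xdp")
  int reserve_big_meta(struct xdp_md *ctx)
  {
      void *data, *meta;

      /* Grow the metadata area in front of the packet data. */
      if (bpf_xdp_adjust_meta(ctx, -META_SIZE))
          return XDP_PASS;    /* not enough headroom */

      meta = (void *)(long)ctx->data_meta;
      data = (void *)(long)ctx->data;
      if (meta + META_SIZE > data)    /* bounds check for the verifier */
          return XDP_PASS;

      __builtin_memset(meta, 0, META_SIZE);
      return XDP_PASS;
  }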
-
Larysa Zaremba authored
The changed check expects the passed data meta to be deemed invalid. After loosening the requirement, a size of 36 bytes becomes valid. Therefore, increase the tested meta size to 256, so we do not get an unexpected success. Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20231206205919.404415-2-larysa.zaremba@intel.com
-
- 10 Dec, 2023 12 commits
-
-
Alexei Starovoitov authored
David Vernet says: ==================== Add new bpf_cpumask_weight() kfunc It can be useful to query how many bits are set in a cpumask. For example, if you want to perform special logic for the last remaining core that's set in a mask. This logic is already exposed through the main kernel's cpumask header as cpumask_weight(), so it would be useful to add a new bpf_cpumask_weight() kfunc which wraps it and does the same. This patch series was built and tested on top of commit 2146f7fe ("Merge branch 'allocate-bpf-trampoline-on-bpf_prog_pack'"). ==================== Link: https://lore.kernel.org/r/20231207210843.168466-1-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
The new bpf_cpumask_weight() kfunc can be used to count the number of bits that are set in a struct cpumask* kptr. Let's add a selftest to verify its behavior. Signed-off-by: David Vernet <void@manifault.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231207210843.168466-3-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
It can be useful to query how many bits are set in a cpumask. For example, if you want to perform special logic for the last remaining core that's set in a mask. Let's therefore add a new bpf_cpumask_weight() kfunc which checks how many bits are set in a mask. Signed-off-by: David Vernet <void@manifault.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231207210843.168466-2-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
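A usage sketch under stated assumptions: the kfunc declarations below are paraphrased from the commit description (the canonical ones live in the selftests' cpumask headers), and the tracepoint, program name, and expected weight are illustrative:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  /* Sketch-only kfunc declarations. */
  struct bpf_cpumask *bpf_cpumask_create(void) __ksym;
  void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;
  void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
  u32 bpf_cpumask_weight(const struct cpumask *cpumask) __ksym;

  SEC("tp_btf/task_newtask")
  int BPF_PROG(count_set_bits, struct task_struct *task, u64 clone_flags)
  {
      struct bpf_cpumask *mask = bpf_cpumask_create();
      u32 weight;

      if (!mask)
          return 0;

      bpf_cpumask_set_cpu(0, mask);
      /* weight == number of set bits; expected to be 1 here */
      weight = bpf_cpumask_weight((const struct cpumask *)mask);
      bpf_printk("cpumask weight: %u", weight);

      bpf_cpumask_release(mask);
      return 0;
  }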
-
Tiezhu Yang authored
Currently, there are two test cases with the same name "ALU64_SMOD_X: -7 % 2 = -1"; the first one is right, the second one should be ALU64_SMOD_K because its code is BPF_ALU64 | BPF_MOD | BPF_K.

Before:
  test_bpf: #170 ALU64_SMOD_X: -7 % 2 = -1 jited:1 4 PASS
  test_bpf: #171 ALU64_SMOD_X: -7 % 2 = -1 jited:1 4 PASS

After:
  test_bpf: #170 ALU64_SMOD_X: -7 % 2 = -1 jited:1 4 PASS
  test_bpf: #171 ALU64_SMOD_K: -7 % 2 = -1 jited:1 4 PASS

Fixes: daabb2b0 ("bpf/tests: add tests for cpuv4 instructions") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231207040851.19730-1-yangtiezhu@loongson.cn Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add two tests validating that verifier's precision backtracking logic handles BPF_ST_MEM instructions that produce a fake register spill into a stack slot. This is happening when a non-zero constant is written directly to a slot, e.g., *(u64 *)(r10 -8) = 123. Add both full 64-bit register spill, as well as 32-bit "sub-spill". Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20231209010958.66758-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
When verifier validates a BPF_ST_MEM instruction that stores a known constant to stack (e.g., *(u64 *)(r10 - 8) = 123), it effectively spills a fake register with a constant (but initially imprecise) value to a stack slot. Because read-side logic treats it as a proper register fill from stack slot, we need to mark such stack slot initialization with the INSN_F_STACK_ACCESS instruction flag to stop precision backtracking from missing it. Fixes: 41f6f64e ("bpf: support non-r10 register spill/fill to/from stack in precision tracking") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20231209010958.66758-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Hou Tao says: ==================== The patch set aims to fix the problems found when inspecting the code related to maybe_wait_bpf_programs().

Patch #1 removes unnecessary invocation of maybe_wait_bpf_programs().
Patch #2 calls maybe_wait_bpf_programs() only once for batched update.
Patch #3 adds the missed waiting when doing batched lookup_deletion on htab of maps.
Patch #4 does wait only if the update or deletion operation succeeds.
Patch #5 fixes the value of batch.count when memory allocation fails.

==================== Link: https://lore.kernel.org/r/20231208102355.2628918-1-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
generic_map_{delete,update}_batch() doesn't set uattr->batch.count as zero before it tries to allocate memory for the key. If the memory allocation fails, the value of uattr->batch.count will be incorrect. Fix it by setting uattr->batch.count as zero before batched update or deletion. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231208102355.2628918-6-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
There is no need to call maybe_wait_bpf_programs() if update or deletion operation fails. So only call maybe_wait_bpf_programs() if update or deletion operation succeeds. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231208102355.2628918-5-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
When doing batched lookup and deletion operations on an htab of maps, maybe_wait_bpf_programs() is needed to ensure all programs don't use the inner map after the bpf syscall returns. Instead of adding the wait in __htab_map_lookup_and_delete_batch(), add it in bpf_map_do_batch() and remove the calls to maybe_wait_bpf_programs() from generic_map_{delete,update}_batch(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231208102355.2628918-4-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
Just like commit 9087c6ff ("bpf: Call maybe_wait_bpf_programs() only once from generic_map_delete_batch()"), there is also no need to call maybe_wait_bpf_programs() for each update in batched update, so only call it once in generic_map_update_batch(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231208102355.2628918-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
Both map_lookup_elem() and generic_map_lookup_batch() use bpf_map_copy_value() to lookup and copy the value, and there is no update operation in bpf_map_copy_value(), so just remove the invocation of maybe_wait_bpf_programs() from it. Fixes: 15c14a3d ("bpf: Add bpf_map_{value_size, update_value, map_copy_value} functions") Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231208102355.2628918-2-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 09 Dec, 2023 6 commits
-
-
Sergei Trofimovich authored
Before the change on `i686-linux` `systemd` build failed as:

  $ bpftool gen object src/core/bpf/socket_bind/socket-bind.bpf.o src/core/bpf/socket_bind/socket-bind.bpf.unstripped.o
  Error: failed to link 'src/core/bpf/socket_bind/socket-bind.bpf.unstripped.o': Invalid argument (22)

After the change it fails as:

  $ bpftool gen object src/core/bpf/socket_bind/socket-bind.bpf.o src/core/bpf/socket_bind/socket-bind.bpf.unstripped.o
  libbpf: ELF section #9 has inconsistent alignment addr=8 != d=4 in src/core/bpf/socket_bind/socket-bind.bpf.unstripped.o
  Error: failed to link 'src/core/bpf/socket_bind/socket-bind.bpf.unstripped.o': Invalid argument (22)

Now it's slightly easier to figure out what is wrong with an ELF file. Signed-off-by: Sergei Trofimovich <slyich@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20231208215100.435876-1-slyich@gmail.com
-
Martin KaFai Lau authored
Yafang Shao says: ==================== In the current cgroup1 environment, associating operations between a cgroup and applications in a BPF program requires storing a mapping of cgroup_id to application either in a hash map or maintaining it in userspace. However, by enabling bpf_cgrp_storage for cgroup1, it becomes possible to conveniently store application-specific information in cgroup-local storage and utilize it within BPF programs. Furthermore, enabling this feature for cgroup1 involves minor modifications for the non-attach case, streamlining the process. However, when it comes to enabling this functionality for the cgroup1 attach case, it presents challenges. Therefore, the decision is to focus on enabling it solely for the cgroup1 non-attach case at present. If attempting to attach to a cgroup1 fd, the operation will simply fail with the error code -EBADF.

Changes:
- RFC -> v1:
  - Collect acked-by
  - Avoid unnecessary is_cgroup1 check (Yonghong)
  - Keep the code patterns consistent (Yonghong)

==================== Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Yafang Shao authored
Expanding the test coverage from cgroup2 to include cgroup1. The results are as follows:

Already existing test cases for cgroup2:
  #48/1 cgrp_local_storage/tp_btf:OK
  #48/2 cgrp_local_storage/attach_cgroup:OK
  #48/3 cgrp_local_storage/recursion:OK
  #48/4 cgrp_local_storage/negative:OK
  #48/5 cgrp_local_storage/cgroup_iter_sleepable:OK
  #48/6 cgrp_local_storage/yes_rcu_lock:OK
  #48/7 cgrp_local_storage/no_rcu_lock:OK

Expanded test cases for cgroup1:
  #48/8 cgrp_local_storage/cgrp1_tp_btf:OK
  #48/9 cgrp_local_storage/cgrp1_recursion:OK
  #48/10 cgrp_local_storage/cgrp1_negative:OK
  #48/11 cgrp_local_storage/cgrp1_iter_sleepable:OK
  #48/12 cgrp_local_storage/cgrp1_yes_rcu_lock:OK
  #48/13 cgrp_local_storage/cgrp1_no_rcu_lock:OK

Summary:
  #48 cgrp_local_storage:OK
  Summary: 1/13 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231206115326.4295-4-laoar.shao@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Yafang Shao authored
This new helper allows us to obtain the fd of a net_cls cgroup, which will be utilized in the subsequent patch. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231206115326.4295-3-laoar.shao@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Yafang Shao authored
In the current cgroup1 environment, associating operations between cgroups and applications in a BPF program requires storing a mapping of cgroup_id to application either in a hash map or maintaining it in userspace. However, by enabling bpf_cgrp_storage for cgroup1, it becomes possible to conveniently store application-specific information in cgroup-local storage and utilize it within BPF programs. Furthermore, enabling this feature for cgroup1 involves minor modifications for the non-attach case, streamlining the process. However, when it comes to enabling this functionality for the cgroup1 attach case, it presents challenges. Therefore, the decision is to focus on enabling it solely for the cgroup1 non-attach case at present. If attempting to attach to a cgroup1 fd, the operation will simply fail with the error code -EBADF. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231206115326.4295-2-laoar.shao@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
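As a sketch of the kind of cgroup-local storage use the commit describes, the following counts events per cgroup with the bpf_cgrp_storage_get() helper; the map/program names and tracepoint are illustrative, and the cgroup pointer is taken from the task's default hierarchy for simplicity (a cgroup1-specific pointer would be obtained differently):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  struct {
      __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
      __uint(map_flags, BPF_F_NO_PREALLOC);
      __type(key, int);
      __type(value, long);
  } cgrp_events SEC(".maps");

  SEC("tp_btf/task_newtask")
  int BPF_PROG(on_newtask, struct task_struct *task, u64 clone_flags)
  {
      struct cgroup *cgrp = task->cgroups->dfl_cgrp;  /* illustrative */
      long *cnt;

      cnt = bpf_cgrp_storage_get(&cgrp_events, cgrp, 0,
                                 BPF_LOCAL_STORAGE_GET_F_CREATE);
      if (cnt)
          __sync_fetch_and_add(cnt, 1);
      return 0;
  }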
-
Andrii Nakryiko authored
Because test_bad_ret main program is not written in assembly, we don't control instruction indices in timer_cb_ret_bad() subprog. This bites us in timer/test_bad_ret subtest, where we see difference between cpuv4 and other flavors. For now, make __msg() expectations not rely on instruction indices by anchoring them around bpf_get_prandom_u32 call. Once we have regex/glob support for __msg(), this can be expressed a bit more nicely, but for now just mitigating the problem with available means. Fixes: e02dea15 ("selftests/bpf: validate async callback return value check correctness") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231208233028.3412690-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 08 Dec, 2023 5 commits
-
-
Andrii Nakryiko authored
Andrei Matei says: ==================== bpf: fix accesses to uninit stack slots

Fix two related issues around verifying stack accesses:
1. accesses to uninitialized stack memory were allowed inconsistently
2. the maximum stack depth needed for a program was not always maintained correctly

The two issues are fixed together in one commit because the code for one affects the other.

V4 to V5:
- target bpf-next (Alexei)

V3 to V4:
- minor fixup to comment in patch 1 (Eduard)
- C89-style in patch 3 (Andrii)

V2 to V3:
- address review comments from Andrii and Eduard
- drop new verifier tests in favor of editing existing tests to check for stack depth
- append a patch with a bit of cleanup coming out of the previous review

==================== Link: https://lore.kernel.org/r/20231208032519.260451-1-andreimatei1@gmail.com Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Andrei Matei authored
Push the rounding up of stack offsets into the function responsible for growing the stack, rather than relying on all the callers to do it. Uncertainty about whether the callers did it or not tripped up people in a previous review. Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20231208032519.260451-4-andreimatei1@gmail.com
-
Andrei Matei authored
Privileged programs are supposed to be able to read uninitialized stack memory (ever since 6715df8d) but, before this patch, these accesses were permitted inconsistently. In particular, accesses were permitted above state->allocated_stack, but not below it. In other words, if the stack was already "large enough", the access was permitted, but otherwise the access was rejected instead of being allowed to "grow the stack". This undesired rejection was happening in two places:
- in check_stack_slot_within_bounds()
- in check_stack_range_initialized()

This patch arranges for these accesses to be permitted. A bunch of tests that were relying on the old rejection had to change; all of them were changed to also run unprivileged, in which case the old behavior persists. One test couldn't be updated - global_func16 - because it can't run unprivileged for other reasons.

This patch also fixes the tracking of the stack size for variable-offset reads. This second fix is bundled in the same commit as the first one because they're inter-related. Before this patch, writes to the stack using registers containing a variable offset (as opposed to registers with fixed, known values) were not properly contributing to the function's needed stack size. As a result, it was possible for a program to verify, but then to attempt to read out-of-bounds data at runtime because a too small stack had been allocated for it.

Each function tracks the size of the stack it needs in bpf_subprog_info.stack_depth, which is maintained by update_stack_depth(). For regular memory accesses, check_mem_access() was calling update_stack_depth() but it was passing in only the fixed part of the offset register, ignoring the variable offset. This was incorrect; the minimum possible value of that register should be used instead. This tracking is now fixed by centralizing the tracking of stack size in grow_stack_state(), and by lifting the calls to grow_stack_state() to check_stack_access_within_bounds() as suggested by Andrii. The code is now simpler and more convincingly tracks the correct maximum stack size. check_stack_range_initialized() can now rely on enough stack having been allocated for the access; this helps with the fix for the first issue.

A few tests were changed to also check the stack depth computation. The one that fails without this patch is verifier_var_off:stack_write_priv_vs_unpriv.

Fixes: 01f810ac ("bpf: Allow variable-offset stack access") Reported-by: Hao Sun <sunhao.th@gmail.com> Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231208032519.260451-3-andreimatei1@gmail.com Closes: https://lore.kernel.org/bpf/CABWLsev9g8UP_c3a=1qbuZUi20tGoUXoU07FPf-5FLvhOKOY+Q@mail.gmail.com/
-
Andrei Matei authored
Add comments to the data structure tracking the stack state, as the mapping between each stack slot and where its state is stored is not entirely obvious. Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20231208032519.260451-2-andreimatei1@gmail.com
-
David Vernet authored
In libbpf, when determining whether we need to load vmlinux btf, we're currently (among other things) checking whether there is any struct_ops program present in the object. This works for most realistic struct_ops maps, as a struct_ops map is of course typically composed of one or more struct_ops programs. However, that technically need not be the case. A struct_ops interface could be defined which allows a map to be specified with one or more non-prog fields, and which provides default behavior if no struct_ops prog is actually provided otherwise. For sched_ext, for example, you technically only need to specify the name of the scheduler in the struct_ops map, with the core scheduler logic providing default behavior if no prog is actually specified. If we were to define and try to load such a struct_ops map, we would crash in libbpf when initializing it as obj->btf_vmlinux will be NULL:

  Reading symbols from minimal...
  (gdb) r
  Starting program: minimal_example
  [Thread debugging using libthread_db enabled]
  Using host libthread_db library "/usr/lib/libthread_db.so.1".

  Program received signal SIGSEGV, Segmentation fault.
  0x000055555558308c in btf__type_cnt (btf=0x0) at btf.c:612
  612             return btf->start_id + btf->nr_types;
  (gdb) bt
  type_name=0x5555555d99e3 "sched_ext_ops", kind=4) at btf.c:914
  kind=4) at btf.c:942
  type=0x7fffffffe558, type_id=0x7fffffffe548, ...
  data_member=0x7fffffffe568) at libbpf.c:948
  kern_btf=0x0) at libbpf.c:1017
  at libbpf.c:8059

So as to account for such bare-bones struct_ops maps, let's update obj_needs_vmlinux_btf() to also iterate over an obj's maps and check whether any of them are struct_ops maps. Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/20231208061704.400463-1-void@manifault.com
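A sketch of the kind of "bare-bones" struct_ops map described above; sched_ext_ops is the out-of-tree example from the commit message and is assumed to be present in the kernel's BTF (any struct_ops type with non-prog fields would illustrate the same point):

  #include "vmlinux.h"            /* assumed to declare struct sched_ext_ops */
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  /* No BPF programs at all, only a data member: libbpf still needs vmlinux
   * BTF to resolve the struct_ops type when initializing this map. */
  SEC(".struct_ops")
  struct sched_ext_ops minimal_ops = {
      .name = "minimal",
  };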
-
- 07 Dec, 2023 6 commits
-
-
Andrii Nakryiko authored
Andrei Matei says: ==================== bpf: fix verification of indirect var-off stack access

V4 to V5:
- split the test into a separate patch

V3 to V4:
- include a test per Eduard's request
- target bpf-next per Alexei's request (patches didn't change)

V2 to V3:
- simplify checks for max_off (don't call check_stack_slot_within_bounds for it)
- append a commit to protect against overflow in the addition of the register and the offset

V1 to V2:
- fix max_off calculation for access size = 0

==================== Link: https://lore.kernel.org/r/20231207041150.229139-1-andreimatei1@gmail.com Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Andrei Matei authored
This patch promotes the arithmetic around checking stack bounds to be done in the 64-bit domain, instead of the current 32-bit one. The arithmetic involves adding together a 64-bit register with an int offset. The register was checked to be below 1<<29 when it was variable, but not when it was fixed. The offset either comes from an instruction (in which case it is 16 bit), from another register (in which case the caller checked it to be below 1<<29 [1]), or from the size of an argument to a kfunc (in which case it can be a u32 [2]). Between the register being inconsistently checked to be below 1<<29, and the offset being up to a u32, it appears that we were open to overflowing the `int`s which were currently used for arithmetic.

[1] https://github.com/torvalds/linux/blob/815fb87b753055df2d9e50f6cd80eb10235fe3e9/kernel/bpf/verifier.c#L7494-L7498
[2] https://github.com/torvalds/linux/blob/815fb87b753055df2d9e50f6cd80eb10235fe3e9/kernel/bpf/verifier.c#L11904

Reported-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231207041150.229139-4-andreimatei1@gmail.com
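A small stand-alone illustration (not the verifier code itself, values chosen for demonstration) of why the sum has to be formed in 64 bits rather than in int:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int32_t reg = 1 << 29;           /* variable-offset bound (~536M) */
      int32_t off = INT32_MAX - 100;   /* large offset, e.g. a u32-sized kfunc arg */

      /* 32-bit addition wraps (unsigned casts keep the behavior well defined). */
      int32_t sum32 = (int32_t)((uint32_t)reg + (uint32_t)off);
      /* 64-bit addition keeps the true value. */
      int64_t sum64 = (int64_t)reg + (int64_t)off;

      printf("32-bit: %d, 64-bit: %lld\n", sum32, (long long)sum64);
      return 0;
  }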
-
Andrei Matei authored
Add a regression test for var-off zero-sized reads. Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20231207041150.229139-3-andreimatei1@gmail.com
-
Andrei Matei authored
This patch fixes a bug around the verification of possibly-zero-sized stack accesses. When the access was done through a var-offset stack pointer, check_stack_access_within_bounds was incorrectly computing the maximum-offset of a zero-sized read to be the same as the register's min offset. Instead, we have to take into account the register's maximum possible value. The patch also simplifies how the max offset is checked; the check is now simpler than for min offset.

The bug was allowing accesses to erroneously pass the check_stack_access_within_bounds() checks, only to later crash in check_stack_range_initialized() when all the possibly-affected stack slots are iterated (this time with a correct max offset). check_stack_range_initialized() is relying on check_stack_access_within_bounds() for its accesses to the stack-tracking vector to be within bounds; in the case of zero-sized accesses, we were essentially only verifying that the lowest possible slot was within bounds. We would crash when the max-offset of the stack pointer was >= 0 (which shouldn't pass verification, and hopefully is not something anyone's code attempts to do in practice).

Thanks Hao for reporting!

Fixes: 01f810ac ("bpf: Allow variable-offset stack access") Reported-by: Hao Sun <sunhao.th@gmail.com> Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231207041150.229139-2-andreimatei1@gmail.com Closes: https://lore.kernel.org/bpf/CACkBjsZGEUaRCHsmaX=h-efVogsRfK1FPxmkgb0Os_frnHiNdw@mail.gmail.com/
-
Alexei Starovoitov authored
Song Liu says: ==================== Allocate bpf trampoline on bpf_prog_pack

This set enables allocating bpf trampoline from bpf_prog_pack on x86. The majority of this work, however, is the refactoring of trampoline code. This is needed because we need to handle 4 archs and 2 users (trampoline and struct_ops). 1/7 through 6/7 refactors trampoline code. A few helpers are added. 7/7 finally let bpf trampoline on x86 use bpf_prog_pack.

Changes in v7:
1. Use kvmalloc for rw_image in x86/arch_prepare_bpf_trampoline. (Alexei)
2. Add comment to explain why we cannot use kvmalloc in x86/arch_bpf_trampoline_size. (Alexei)

Changes in v6:
1. Rebase.
2. Add Acked-by and Tested-by from Jiri Olsa and Björn Töpel.

Changes in v5:
1. Adjust size of trampoline ksym. (Jiri)
2. Use "unsigned int size" arg in image management helpers. (Daniel)

Changes in v4:
1. Dropped 1/8 in v3, which is already merged in bpf-next.
2. Add Reviewed-by from Björn Töpel.

Changes in v3:
1. Fix bug in s390. (Thanks to Ilya Leoshkevich).
2. Fix build error in riscv. (kernel test robot).

Changes in v2:
1. Add missing changes in net/bpf/bpf_dummy_struct_ops.c.
2. Reduce one dry run in arch_prepare_bpf_trampoline. (Xu Kuohai)
3. Other small fixes.

==================== Link: https://lore.kernel.org/r/20231206224054.492250-1-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Song Liu authored
There are three major changes here:

1. Add arch_[alloc|free]_bpf_trampoline based on bpf_prog_pack;
2. Let arch_prepare_bpf_trampoline handle ROX input image, this requires arch_prepare_bpf_trampoline allocating a temporary RW buffer;
3. Update __arch_prepare_bpf_trampoline() to handle a RW buffer (rw_image) and a ROX buffer (image). This part is similar to the image/rw_image logic in bpf_int_jit_compile().

Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20231206224054.492250-8-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-