- 19 Apr, 2022 8 commits
-
-
Kumar Kartikeya Dwivedi authored
Add a few test cases that ensure we catch cases of badly ordered type tags in modifier chains. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220419164608.1990559-3-memxor@gmail.com
-
Kumar Kartikeya Dwivedi authored
It is guaranteed that for modifiers, clang always places type tags before other modifiers, and then the base type. We would like to rely on this guarantee inside the kernel to make it simple to parse type tags from BTF. However, a user would be allowed to construct a BTF without such guarantees. Hence, add a pass to check that in modifier chains, type tags only occur at the head of the chain and never later: a chain may begin with one or more type tags, but once any other modifier appears, no further type tags may follow. Instead of having to re-walk chains we verified previously, we can remember the last good modifier type ID that headed a good chain; at that point, we must have verified all other chains headed by type IDs less than it. This makes the verification process less costly, turning it into a simple O(n) pass. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220419164608.1990559-2-memxor@gmail.com
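A condensed sketch of that pass (helper names approximate the kernel's BTF accessors; the in-kernel function differs in details such as error reporting and resched points):

  static int check_type_tag_order(const struct btf *btf)
  {
      u32 i, n = btf_nr_types(btf), good_id = 0;

      for (i = 1; i < n; i++) {
          const struct btf_type *t = btf_type_by_id(btf, i);
          int chain_limit = 32;
          u32 cur_id = i;
          bool in_tags;

          if (!btf_type_is_modifier(t))
              continue;

          in_tags = btf_type_is_type_tag(t);
          while (btf_type_is_modifier(t)) {
              if (!chain_limit--)
                  return -ELOOP;          /* malformed, looping chain */
              if (btf_type_is_type_tag(t)) {
                  if (!in_tags)
                      return -EINVAL;     /* tag after another modifier */
              } else {
                  in_tags = false;
              }
              if (cur_id <= good_id)
                  break;                  /* rest of this chain verified earlier */
              cur_id = t->type;
              t = btf_type_by_id(btf, cur_id);
          }
          /* every modifier chain headed by an id <= i is now known good */
          good_id = i;
      }
      return 0;
  }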
-
Andrii Nakryiko authored
Take advantage of the new libbpf feature for declarative non-autoloaded BPF program SEC() definitions in a few tests that test a single program at a time out of the many programs available within a single BPF object. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220419002452.632125-2-andrii@kernel.org
-
Andrii Nakryiko authored
Establish SEC("?abc") naming convention (i.e., adding question mark in front of otherwise normal section name) that allows to set corresponding program's autoload property to false. This is effectively just a declarative way to do bpf_program__set_autoload(prog, false). Having a way to do this declaratively in BPF code itself is useful and convenient for various scenarios. E.g., for testing, when BPF object consists of multiple independent BPF programs that each needs to be tested separately. Opting out all of them by default and then setting autoload to true for just one of them at a time simplifies testing code (see next patch for few conversions in BPF selftests taking advantage of this new feature). Another real-world use case is in libbpf-tools for cases when different BPF programs have to be picked depending on particulars of the host kernel due to various incompatible changes (like kernel function renames or signature change, or to pick kprobe vs fentry depending on corresponding kernel support for the latter). Marking all the different BPF program candidates as non-autoloaded declaratively makes this more obvious in BPF source code and allows simpler code in user-space code. When BPF program marked as SEC("?abc") it is otherwise treated just like SEC("abc") and bpf_program__section_name() reported will be "abc". Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220419002452.632125-1-andrii@kernel.org
-
Yonghong Song authored
The llvm patch [1] enabled opaque pointers, which caused a selftest 'exhandler' failure:

  ...
  ; work = task->task_works;
  7: (79) r1 = *(u64 *)(r6 +2120)      ; R1_w=ptr_callback_head(off=0,imm=0) R6_w=ptr_task_struct(off=0,imm=0)
  ; func = work->func;
  8: (79) r2 = *(u64 *)(r1 +8)         ; R1_w=ptr_callback_head(off=0,imm=0) R2_w=scalar()
  ; if (!work && !func)
  9: (4f) r1 |= r2
  math between ptr_ pointer and register with unbounded min value is not allowed

below is insn 10 and 11:

  10: (55) if r1 != 0 goto +5
  11: (18) r1 = 0 ll
  ...

In llvm, the code generation of 'r1 |= r2' happened in the codegen selectiondag phase due to the difference between opaque and non-opaque pointers. Without [1], the related code looks like:

  r2 = *(u64 *)(r6 + 2120)
  r1 = *(u64 *)(r2 + 8)
  if r2 != 0 goto +6 <LBB0_4>
  if r1 != 0 goto +5 <LBB0_4>
  r1 = 0 ll
  ...

I haven't found a good way in llvm to fix this issue, so let us work around the problem first so bpf CI won't be blocked.

[1] https://reviews.llvm.org/D123300

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220419050900.3136024-1-yhs@fb.com
-
Yonghong Song authored
LLVM commit [1] changed loop pragma behavior such that a full loop unroll is always honored with a user pragma. Previously, the unroll count also depended on the unrolled code size. For pyperf600, without [1], the loop unroll count is 150. With [1], the loop unroll count is 600. The unroll count of 600 brought the program size close to 298k and caused the following code to be generated:

   0: 7b 1a 00 ff 00 00 00 00  *(u64 *)(r10 - 256) = r1
  ; uint64_t pid_tgid = bpf_get_current_pid_tgid();
   1: 85 00 00 00 0e 00 00 00  call 14
   2: bf 06 00 00 00 00 00 00  r6 = r0
  ; pid_t pid = (pid_t)(pid_tgid >> 32);
   3: bf 61 00 00 00 00 00 00  r1 = r6
   4: 77 01 00 00 20 00 00 00  r1 >>= 32
   5: 63 1a fc ff 00 00 00 00  *(u32 *)(r10 - 4) = r1
   6: bf a2 00 00 00 00 00 00  r2 = r10
   7: 07 02 00 00 fc ff ff ff  r2 += -4
  ; PidData* pidData = bpf_map_lookup_elem(&pidmap, &pid);
   8: 18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00  r1 = 0 ll
  10: 85 00 00 00 01 00 00 00  call 1
  11: bf 08 00 00 00 00 00 00  r8 = r0
  ; if (!pidData)
  12: 15 08 15 e8 00 00 00 00  if r8 == 0 goto -6123 <LBB0_27588+0xffffffffffdae100>

Note that insn 12 has a branch offset of -6123, which is clearly illegal and will be rejected by the verifier. The negative offset is due to the branch range being greater than INT16_MAX. This patch changes the unroll count to 150 to avoid the above out-of-range branch target. llvm is also enhanced ([2]) to assert if the branch target insn is out of INT16 range.

[1] https://reviews.llvm.org/D119148
[2] https://reviews.llvm.org/D123877

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220419043230.2928530-1-yhs@fb.com
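For reference, the usual way to bound unrolling at the source level is a clang loop pragma; a minimal sketch (the loop body, bound, and macro name are placeholders, and the selftest's actual macro plumbing may differ):

  #define UNROLL_COUNT 150   /* illustrative cap */

  /* cap the unroll factor instead of requesting a full 600-iteration
   * unroll, keeping generated branch offsets within BPF's 16-bit range
   */
  #pragma clang loop unroll_count(UNROLL_COUNT)
  for (int i = 0; i < 600; i++) {
      if (!process_one_frame(i))   /* placeholder for the real loop body */
          break;
  }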
-
Stanislav Fomichev authored
Commit 7d08c2c9 ("bpf: Refactor BPF_PROG_RUN_ARRAY family of macros into functions") switched a bunch of BPF_PROG_RUN macros to inline routines. This changed the semantic a bit. Due to arguments expansion of macros, it used to be: rcu_read_lock(); array = rcu_dereference(cgrp->bpf.effective[atype]); ... Now, with with inline routines, we have: array_rcu = rcu_dereference(cgrp->bpf.effective[atype]); /* array_rcu can be kfree'd here */ rcu_read_lock(); array = rcu_dereference(array_rcu); I'm assuming in practice rcu subsystem isn't fast enough to trigger this but let's use rcu API properly. Also, rename to lower caps to not confuse with macros. Additionally, drop and expand BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY. See [1] for more context. [1] https://lore.kernel.org/bpf/CAKH8qBs60fOinFdxiiQikK_q0EcVxGvNTQoWvHLEUGbgcj1UYg@mail.gmail.com/T/#u v2 - keep rcu locks inside by passing cgroup_bpf Fixes: 7d08c2c9 ("bpf: Refactor BPF_PROG_RUN_ARRAY family of macros into functions") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220414161233.170780-1-sdf@google.com
-
Mykola Lysenko authored
This is a pre-req for adding separate logging for each subtest in test_progs. Move all the mutable test data to the test_result struct. Move per-test init/de-init into the run_one_test function. Consolidate data aggregation and final log output in the calculate_and_print_summary function. As a side effect, this patch fixes double counting of errors for subtests and possible duplicate output of the subtest log on failures. Also, add the prog_tests_framework.c test to verify some of the counting logic. As part of verification, confirmed that the number of reported tests is the same before and after the change for both parallel and sequential test execution. Signed-off-by: Mykola Lysenko <mykolal@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220418222507.1726259-1-mykolal@fb.com
-
- 15 Apr, 2022 14 commits
-
-
Maciej Fijalkowski authored
Simplify the mentioned helper function by removing the ternary operator; the expression there already yields the boolean value by itself. This helper might be used in the hot path, so this simplification can also be considered a micro-optimization. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-15-maciej.fijalkowski@intel.com
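The shape of this kind of change, shown on a made-up helper (the real function touched by the patch is not named here):

  /* before: the comparison already produces a bool, the ?: adds nothing */
  static bool ring_has_room(u32 used, u32 limit)
  {
      return used < limit ? true : false;
  }

  /* after */
  static bool ring_has_room(u32 used, u32 limit)
  {
      return used < limit;
  }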
-
Maciej Fijalkowski authored
Call the alloc Rx routine for the ZC driver only when the number of unallocated descriptors exceeds a given threshold. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-14-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO return code with the case that the XSK is not in the bound state. Returning the same code from ndo_xsk_wakeup() can be misleading and simply makes it harder to follow what is going on. Change the ENXIOs in stmmac's ndo_xsk_wakeup() implementation to EINVALs, so that when probing, it is clear that something is wrong on the driver side, not in xsk_{recv,send}msg. There is a -ENETDOWN that can happen from both the kernel and driver sides, but I don't have a correct replacement for it on one of the sides, so let's keep it that way. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-13-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO return code with the case that the XSK is not in the bound state. Returning the same code from ndo_xsk_wakeup() can be misleading and simply makes it harder to follow what is going on. Change the ENXIO in mlx5's ndo_xsk_wakeup() implementation to EINVAL, so that when probing, it is clear that something is wrong on the driver side, not in xsk_{recv,send}msg. There is a -ENETDOWN that can happen from both the kernel and driver sides, but I don't have a correct replacement for it on one of the sides, so let's keep it that way. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-12-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO return code with the case that the XSK is not in the bound state. Returning the same code from ndo_xsk_wakeup() can be misleading and simply makes it harder to follow what is going on. Change the ENXIOs in ixgbe's ndo_xsk_wakeup() implementation to EINVALs, so that when probing, it is clear that something is wrong on the driver side, not in xsk_{recv,send}msg. There is a -ENETDOWN that can happen from both the kernel and driver sides, but I don't have a correct replacement for it on one of the sides, so let's keep it that way. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-11-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO return code with the case that the XSK is not in the bound state. Returning the same code from ndo_xsk_wakeup() can be misleading and simply makes it harder to follow what is going on. Change the ENXIOs in i40e's ndo_xsk_wakeup() implementation to EINVALs, so that when probing, it is clear that something is wrong on the driver side, not in xsk_{recv,send}msg. There is a -ENETDOWN that can happen from both the kernel and driver sides, but I don't have a correct replacement for it on one of the sides, so let's keep it that way. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-10-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO return code with the case that the XSK is not in the bound state. Returning the same code from ndo_xsk_wakeup() can be misleading and simply makes it harder to follow what is going on. Change the ENXIOs in ice's ndo_xsk_wakeup() implementation to EINVALs, so that when probing, it is clear that something is wrong on the driver side, not in xsk_{recv,send}msg. There is a -ENETDOWN that can happen from both the kernel and driver sides, but I don't have a correct replacement for it on one of the sides, so let's keep it that way. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-9-maciej.fijalkowski@intel.com
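The change has the same shape in each of the drivers above; a simplified, driver-agnostic sketch (all example_* names are invented, only the error-code swap comes from the patches):

  static int example_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
  {
      struct example_priv *priv = netdev_priv(dev);

      if (!netif_running(dev))
          return -ENETDOWN;       /* can also come from the XSK core */
      if (!example_xdp_is_enabled(priv))
          return -EINVAL;         /* was -ENXIO */
      if (qid >= priv->num_xdp_queues)
          return -EINVAL;         /* was -ENXIO */
      if (!example_xsk_pool(priv, qid))
          return -EINVAL;         /* was -ENXIO */

      example_trigger_napi(priv, qid);
      return 0;
  }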
-
Maciej Fijalkowski authored
When the XSK pool uses the need_wakeup feature, correlate the -ENOBUFS returned from xdp_do_redirect() with the XSK Rx queue being full. In such a case, terminate the Rx processing on the current HW Rx ring and let user space consume descriptors from the XSK Rx queue, so that there is room the driver can use later on. Introduce a new internal return code IXGBE_XDP_EXIT that indicates the case described above. Note that it affects neither Tx processing that is bound to the same NAPI context nor the other Rx rings. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-8-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
When the XSK pool uses the need_wakeup feature, correlate the -ENOBUFS returned from xdp_do_redirect() with the XSK Rx queue being full. In such a case, terminate the Rx processing on the current HW Rx ring and let user space consume descriptors from the XSK Rx queue, so that there is room the driver can use later on. Introduce a new internal return code I40E_XDP_EXIT that indicates the case described above. Note that it affects neither Tx processing that is bound to the same NAPI context nor the other Rx rings. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-7-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
When the XSK pool uses the need_wakeup feature, correlate the -ENOBUFS returned from xdp_do_redirect() with the XSK Rx queue being full. In such a case, terminate the Rx processing on the current HW Rx ring and let user space consume descriptors from the XSK Rx queue, so that there is room the driver can use later on. Introduce a new internal return code ICE_XDP_EXIT that indicates the case described above. Note that it affects neither Tx processing that is bound to the same NAPI context nor the other Rx rings. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-6-maciej.fijalkowski@intel.com
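A driver-agnostic sketch of the new handling in the ZC Rx path; example_*/EXAMPLE_* names are invented and only the XDP_REDIRECT leg is shown:

  static int example_run_xdp_zc(struct net_device *netdev, struct xdp_buff *xdp,
                                struct bpf_prog *xdp_prog)
  {
      int err;

      err = xdp_do_redirect(netdev, xdp, xdp_prog);
      if (!err)
          return EXAMPLE_XDP_REDIR;
      if (err == -ENOBUFS)
          /* XSK Rx queue full: stop cleaning this HW ring, let user
           * space drain the queue, retry on the next NAPI poll
           */
          return EXAMPLE_XDP_EXIT;
      return EXAMPLE_XDP_CONSUMED;
  }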
-
Maciej Fijalkowski authored
ixgbe_run_xdp_zc() suggests to the compiler that XDP_REDIRECT is the most probable action returned from the BPF program that AF_XDP has in its pipeline. Let's also bring this suggestion up to the callsite of ixgbe_run_xdp_zc() so that the compiler will be able to generate more optimized code, which in turn will make the branch predictor happy. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-5-maciej.fijalkowski@intel.com
-
Maciej Fijalkowski authored
ice_run_xdp_zc() suggests to the compiler that XDP_REDIRECT is the most probable action returned from the BPF program that AF_XDP has in its pipeline. Let's also bring this suggestion up to the callsite of ice_run_xdp_zc() so that the compiler will be able to generate more optimized code, which in turn will make the branch predictor happy. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413153015.453864-4-maciej.fijalkowski@intel.com
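The callsite hint added by these two patches has roughly this shape; this is a fragment from inside an Rx clean loop, reusing the invented names from the sketch earlier in this log:

  xdp_res = example_run_xdp_zc(netdev, xdp, xdp_prog);
  if (likely(xdp_res == EXAMPLE_XDP_REDIR)) {
      xdp_redirect = true;
  } else if (xdp_res == EXAMPLE_XDP_EXIT) {
      failure = true;     /* XSK Rx queue full, leave the Rx loop */
      break;
  } else {
      /* consumed/dropped: recycle the buffer */
  }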
-
Maciej Fijalkowski authored
Inspired by the patch that made xdp_do_redirect() return values for XSKMAP more meaningful, return -ENXIO instead of -EINVAL for a socket being unbound in xsk_rcv_check(), as this is the usual value returned for such an event. In turn, it is now possible to easily distinguish what went wrong, which was harder when -EINVAL was returned for both checked cases. Return codes can be counted in a nice way via the bpftrace one-liner that Jesper has shown:

  bpftrace -e 'tracepoint:xdp:xdp_redirect* {@err[-args->err] = count();}'

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20220413153015.453864-3-maciej.fijalkowski@intel.com
-
Björn Töpel authored
The error codes returned by xdp_do_redirect() when redirecting a frame to an AF_XDP socket have not been very useful. A driver could not distinguish between different errors. Prior to this change, the following codes were used:

  Socket not bound or incorrect queue/netdev: EINVAL
  XDP frame/AF_XDP buffer size mismatch: ENOSPC
  Could not allocate buffer (copy mode): ENOSPC
  AF_XDP Rx buffer full: ENOSPC

After this change:

  Socket not bound or incorrect queue/netdev: EINVAL
  XDP frame/AF_XDP buffer size mismatch: ENOSPC
  Could not allocate buffer (copy mode): ENOMEM
  AF_XDP Rx buffer full: ENOBUFS

An AF_XDP zero-copy driver can now potentially determine whether the failure was due to a full Rx buffer and, if so, stop processing more frames, yielding to the userland AF_XDP application. Signed-off-by: Björn Töpel <bjorn@kernel.org> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20220413153015.453864-2-maciej.fijalkowski@intel.com
-
- 13 Apr, 2022 3 commits
-
-
Yu Zhe authored
Remove/clean up unnecessary void * type casts. Signed-off-by: Yu Zhe <yuzhe@nfschina.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220413015048.12319-1-yuzhe@nfschina.com
-
Daniel Borkmann authored
Pull the migration of the BPF sysctl handling into BPF core. This work is needed in both sysctl-next and bpf-next tree. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Luis Chamberlain <mcgrof@kernel.org> Link: https://lore.kernel.org/bpf/b615bd44-6bd1-a958-7e3f-dd2ff58931a1@iogearbox.net
-
Yan Zhu authored
We're moving sysctls out of kernel/sysctl.c as it is a mess. We have already moved all filesystem sysctls out, and with time the goal is to move all sysctls out to their own subsystem/actual user. kernel/sysctl.c has grown into an insane mess and it's easy to run into conflicts with it. The effort to move them out into various subsystems is part of this. Signed-off-by: Yan Zhu <zhuyan34@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Luis Chamberlain <mcgrof@kernel.org> Link: https://lore.kernel.org/bpf/20220407070759.29506-1-zhuyan34@huawei.com
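For context, the usual pattern when a subsystem takes ownership of its sysctls is to declare its own table and register it at init time; a sketch with an invented knob (the actual bpf table entries are not reproduced here):

  static int example_bpf_knob;

  static struct ctl_table example_bpf_table[] = {
      {
          .procname     = "example_bpf_knob",
          .data         = &example_bpf_knob,
          .maxlen       = sizeof(int),
          .mode         = 0644,
          .proc_handler = proc_dointvec_minmax,
          .extra1       = SYSCTL_ZERO,
          .extra2       = SYSCTL_ONE,
      },
      { }
  };

  static int __init example_bpf_sysctl_init(void)
  {
      /* registers under /proc/sys/kernel/, owned by this subsystem */
      register_sysctl_init("kernel", example_bpf_table);
      return 0;
  }
  late_initcall(example_bpf_sysctl_init);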
-
- 11 Apr, 2022 12 commits
-
-
Alan Maguire authored
Parsing of USDT arguments is architecture-specific. On aarch64 it is relatively easy since the registers used are x[0-31] and sp. The format is slightly different from x86_64. Possible forms are:

  - "size@[reg[,offset]]" for dereferences, e.g. "-8@[sp,76]" and "-4@[sp]";
  - "size@reg" for register values, e.g. "-4@x0";
  - "size@value" for raw values, e.g. "-8@1".

Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1649690496-1902-2-git-send-email-alan.maguire@oracle.com
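An illustrative stand-alone parser for the three forms above (this is not the libbpf implementation; the struct and function names are invented, and trailing-garbage checks are omitted for brevity):

  #include <stdio.h>
  #include <string.h>

  struct usdt_arg {
      enum { ARG_DEREF, ARG_REG, ARG_CONST } kind;
      int  sz;              /* signed size, e.g. -8, -4, 4, 8 */
      char reg[16];         /* "x0".."x31" or "sp"; unused for constants */
      long off_or_val;      /* offset for deref form, value for constants */
  };

  static int parse_aarch64_usdt_arg(const char *spec, struct usdt_arg *a)
  {
      memset(a, 0, sizeof(*a));
      /* "-8@[sp,76]" : dereference with offset */
      if (sscanf(spec, " %d @ [ %15[a-z0-9] , %ld ]",
                 &a->sz, a->reg, &a->off_or_val) == 3) {
          a->kind = ARG_DEREF;
          return 0;
      }
      /* "-4@[sp]" : dereference without offset */
      if (sscanf(spec, " %d @ [ %15[a-z0-9] ]", &a->sz, a->reg) == 2) {
          a->kind = ARG_DEREF;
          return 0;
      }
      /* "-8@1" : raw constant value (checked before the register form so
       * a bare number is not mistaken for a register name)
       */
      if (sscanf(spec, " %d @ %ld", &a->sz, &a->off_or_val) == 2) {
          a->kind = ARG_CONST;
          return 0;
      }
      /* "-4@x0" : register value */
      if (sscanf(spec, " %d @ %15[a-z0-9]", &a->sz, a->reg) == 2) {
          a->kind = ARG_REG;
          return 0;
      }
      return -1;
  }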
-
Yuntao Wang authored
The seq argument is assigned to meta.seq twice, the second one is redundant, remove it. This patch also removes a redundant space in bpf_iter_link_attach(). Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220410060020.307283-1-ytcoode@gmail.com
-
Geliang Tang authored
Drop the duplicate min() and MAX() macro definitions in prog_tests and use MIN() or MAX() from sys/param.h instead. Signed-off-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/1ae276da9925c2de59b5bdc93b693b4c243e692e.1649462033.git.geliang.tang@suse.com
-
Pu Lehui authored
This patch implements more BPF atomic operations for RV64. The newly added operations are shown below:

  atomic[64]_[fetch_]add
  atomic[64]_[fetch_]and
  atomic[64]_[fetch_]or
  atomic[64]_xchg
  atomic[64]_cmpxchg

Since the riscv specification does not provide an AMO instruction for the CAS operation, we use lr/sc instructions for the cmpxchg operation, and AMO instructions for the rest of the ops. Tests "test_bpf.ko" and "test_progs -t atomic" have passed, as well as "test_verifier" with no new failure cases. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20220410101246.232875-1-pulehui@huawei.com
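On the BPF C side, these operations typically come from the __sync_* builtins; a small illustrative program (the section choice and globals are arbitrary; assumed to be built with clang -target bpf -mcpu=v3):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  long counter64;
  int counter32;

  SEC("xdp")
  int exercise_atomics(struct xdp_md *ctx)
  {
      long old;

      __sync_fetch_and_add(&counter64, 1);                  /* atomic64_fetch_add */
      __sync_fetch_and_and(&counter32, 0x0f);               /* atomic_fetch_and   */
      __sync_fetch_and_or(&counter64, 0x10);                /* atomic64_fetch_or  */
      old = __sync_lock_test_and_set(&counter64, 7);        /* atomic64_xchg      */
      old = __sync_val_compare_and_swap(&counter64, 7, 8);  /* atomic64_cmpxchg, lr/sc on RV64 */
      return old == 7 ? XDP_PASS : XDP_DROP;
  }

  char _license[] SEC("license") = "GPL";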
-
Andrii Nakryiko authored
Yafang Shao says: ==================== We have switched to memcg-based memory accounting and thus the rlimit is not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in libbpf for backward compatibility, so we can use it instead now. This patchset cleans up the usage of RLIMIT_MEMLOCK in tools/bpf/, tools/testing/selftests/bpf and samples/bpf. The file tools/testing/selftests/bpf/bpf_rlimit.h is removed. The included header sys/resource.h is removed from many files as it is useless in these files.

- v4: Squash patches and use customary subject prefixes. (Andrii)
- v3: Get rid of bpf_rlimit.h and fix some typos. (Andrii)
- v2: Use libbpf_set_strict_mode instead. (Andrii)
- v1: https://lore.kernel.org/bpf/20220320060815.7716-2-laoar.shao@gmail.com/

==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Yafang Shao authored
Explicitly set the libbpf 1.0 API mode, so that we can avoid using the deprecated RLIMIT_MEMLOCK. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220409125958.92629-5-laoar.shao@gmail.com
-
Yafang Shao authored
We have switched to memcg-based memory accounting and thus the rlimit is not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in libbpf for backward compatibility, so we can use it instead now. libbpf_set_strict_mode() always returns 0, so we don't need to check whether the return value is 0 or not. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220409125958.92629-4-laoar.shao@gmail.com
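The before/after shape of these conversions, in a minimal illustrative program (skeleton handling omitted):

  #include <bpf/libbpf.h>

  int main(int argc, char **argv)
  {
      /* before: setrlimit(RLIMIT_MEMLOCK, &(struct rlimit){RLIM_INFINITY, RLIM_INFINITY});
       * after: opt into libbpf 1.0 behavior, which includes
       * LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK; no return-value check needed
       */
      libbpf_set_strict_mode(LIBBPF_STRICT_ALL);

      /* ... open/load/attach the BPF skeleton as before ... */
      return 0;
  }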
-
Yafang Shao authored
We have switched to memcg-based memory accounting and thus the rlimit is not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in libbpf for backward compatibility, so we can use it instead now. After this change, the header tools/testing/selftests/bpf/bpf_rlimit.h can be removed. This patch also removes the now-useless header sys/resource.h from many files in tools/testing/selftests/bpf/. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220409125958.92629-3-laoar.shao@gmail.com
-
Yafang Shao authored
We have switched to memcg-based memory accounting and thus the rlimit is not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in libbpf for backward compatibility, so we can use it instead now. This patch also removes the now-useless header sys/resource.h from many files in samples/bpf. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220409125958.92629-2-laoar.shao@gmail.com
-
Runqing Yang authored
Background: Libbpf automatically replaces calls to the BPF bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str]() if it detects that the kernel doesn't support the new APIs. Specifically, libbpf invokes the probe_kern_probe_read_kernel function to load a small eBPF program into the kernel in which the bpf_probe_read_kernel API is invoked, and lets the kernel check whether the new API is valid. If the load fails, libbpf considers the new API invalid and replaces it with the old API.

  static int probe_kern_probe_read_kernel(void)
  {
      struct bpf_insn insns[] = {
          BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),   /* r1 = r10 (fp) */
          BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),  /* r1 += -8 */
          BPF_MOV64_IMM(BPF_REG_2, 8),            /* r2 = 8 */
          BPF_MOV64_IMM(BPF_REG_3, 0),            /* r3 = 0 */
          BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
          BPF_EXIT_INSN(),
      };
      int fd, insn_cnt = ARRAY_SIZE(insns);

      fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL", insns, insn_cnt, NULL);
      return probe_fd(fd);
  }

Bug: On older kernel versions [0], the kernel checks whether the version number provided in the bpf syscall matches LINUX_VERSION_CODE. If it doesn't match, the bpf syscall fails. However, the probe_kern_probe_read_kernel code does not set the kernel version number provided to the bpf syscall, which causes the probe load to always fail on those old versions. That means libbpf replaces the new API with the old one even when the kernel supports the new one.

Solution: After a discussion in [1], the solution is to use the BPF_PROG_TYPE_TRACEPOINT program type instead of BPF_PROG_TYPE_KPROBE, because the kernel does not enforce the version check for tracepoint programs. I tested the patch on old kernels (4.18 and 4.19) and it works well.

[0] https://elixir.bootlin.com/linux/v4.19/source/kernel/bpf/syscall.c#L1360
[1] Closes: https://github.com/libbpf/libbpf/issues/473

Signed-off-by: Runqing Yang <rainkin1993@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220409144928.27499-1-rainkin1993@gmail.com
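With the fix described above, the probe differs only in the program type passed to bpf_prog_load() (sketch):

  fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);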
-
Mykola Lysenko authored
Improve the subtest selection logic when using the -t/-a/-d parameters. In particular, more than one subtest can be specified, or a combination of tests / subtests:

  -a send_signal -d send_signal/send_signal_nmi*
      runs the send_signal test without the nmi subtests
  -a send_signal/send_signal_nmi*,find_vma
      runs two send_signal subtests and the find_vma test
  -a 'send_signal*' -a find_vma -d send_signal/send_signal_nmi*
      runs two send_signal tests and the find_vma test, and disables the two send_signal nmi subtests
  -t send_signal -t find_vma
      runs two *send_signal* tests and one *find_vma* test

This will allow us to have granular control over which subtests to disable in the CI system instead of disabling whole tests. Also, add a new selftest to avoid possible regressions when changing the prog_tests test name selection logic. Signed-off-by: Mykola Lysenko <mykolal@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220409001750.529930-1-mykolal@fb.com
-
Vladimir Isaev authored
Add PT_REGS macros suitable for ARCompact and ARCv2. Signed-off-by: Vladimir Isaev <isaev@synopsys.com> Signed-off-by: Sergey Matyukevich <geomatsi@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220408224442.599566-1-geomatsi@gmail.com
-
- 09 Apr, 2022 1 commit
-
-
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Jakub Kicinski authored
Daniel Borkmann says: ==================== pull-request: bpf-next 2022-04-09

We've added 63 non-merge commits during the last 9 day(s) which contain a total of 68 files changed, 4852 insertions(+), 619 deletions(-).

The main changes are:

1) Add libbpf support for USDT (User Statically-Defined Tracing) probes. USDTs are an abstraction built on top of uprobes, critical for tracing and BPF, and widely used in production applications, from Andrii Nakryiko.

2) While Andrii was adding support for the x86{-64}-specific logic of parsing USDT argument specifications, Ilya followed up with USDT support for the s390 architecture, from Ilya Leoshkevich.

3) Support name-based attaching for uprobe BPF programs in libbpf. The format supported is `u[ret]probe/binary_path:[raw_offset|function[+offset]]`, e.g. attaching to libc malloc can now be done in BPF via SEC("uprobe/libc.so.6:malloc"), from Alan Maguire.

4) Various load/store optimizations for the arm64 JIT to shrink the image size by using arm64 str/ldr immediate instructions. Also enable pointer authentication to verify the return address for JITed code, from Xu Kuohai.

5) BPF verifier fixes for write access checks to helper functions, e.g. rd-only memory from bpf_*_cpu_ptr() must not be passed to helpers that write into passed buffers, from Kumar Kartikeya Dwivedi.

6) Fix overly excessive stack map allocation for its base map structure and buckets, which slipped in from cleanups during the rlimit accounting removal back then, from Yuntao Wang.

7) Extend the unstable CT lookup helpers for XDP and tc/BPF to report the netfilter connection tracking tuple direction, from Lorenzo Bianconi.

8) Improve bpftool dump to show BPF program/link type names, from Milan Landaverde.

9) Minor cleanups all over the place from various others.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (63 commits)
  bpf: Fix excessive memory allocation in stack_map_alloc()
  selftests/bpf: Fix return value checks in perf_event_stackmap test
  selftests/bpf: Add CO-RE relos into linked_funcs selftests
  libbpf: Use weak hidden modifier for USDT BPF-side API functions
  libbpf: Don't error out on CO-RE relos for overriden weak subprogs
  samples, bpf: Move routes monitor in xdp_router_ipv4 in a dedicated thread
  libbpf: Allow WEAK and GLOBAL bindings during BTF fixup
  libbpf: Use strlcpy() in path resolution fallback logic
  libbpf: Add s390-specific USDT arg spec parsing logic
  libbpf: Make BPF-side of USDT support work on big-endian machines
  libbpf: Minor style improvements in USDT code
  libbpf: Fix use #ifdef instead of #if to avoid compiler warning
  libbpf: Potential NULL dereference in usdt_manager_attach_usdt()
  selftests/bpf: Uprobe tests should verify param/return values
  libbpf: Improve string parsing for uprobe auto-attach
  libbpf: Improve library identification for uprobe binary path resolution
  selftests/bpf: Test for writes to map key from BPF helpers
  selftests/bpf: Test passing rdonly mem to global func
  bpf: Reject writes for PTR_TO_MAP_KEY in check_helper_mem_access
  bpf: Check PTR_TO_MEM | MEM_RDONLY in check_helper_mem_access
  ...
====================

Link: https://lore.kernel.org/r/20220408231741.19116-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
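As a tiny illustration of item 3's name-based uprobe attach format (the binary path and function come straight from the text above; the program name is arbitrary):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* auto-attaches to malloc() in libc when the skeleton's attach step runs */
  SEC("uprobe/libc.so.6:malloc")
  int handle_malloc_entry(struct pt_regs *ctx)
  {
      bpf_printk("malloc entered");
      return 0;
  }

  char LICENSE[] SEC("license") = "GPL";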
-
- 08 Apr, 2022 2 commits
-
-
Yuntao Wang authored
The 'n_buckets * (value_size + sizeof(struct stack_map_bucket))' part of the memory allocated for 'smap' has been unused since the memlock accounting was removed, thus get rid of it. [ Note, Daniel: Commit b936ca64 ("bpf: rework memlock-based memory accounting for maps") moved `cost += n_buckets * (value_size + sizeof(struct stack_map_bucket))` up and therefore before the bpf_map_area_alloc() allocation, sigh. In a later step, commit c85d6913 ("bpf: move memory size checks to bpf_map_charge_init()") moved the overflow checks of `cost >= U32_MAX - PAGE_SIZE` into bpf_map_charge_init(). And then 37086810 ("bpf: Eliminate rlimit-based memory accounting for stackmap maps") finally removed the bpf_map_charge_init(). Anyway, the original code did the allocation the same way as /after/ this fix. ] Fixes: b936ca64 ("bpf: rework memlock-based memory accounting for maps") Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220407130423.798386-1-ytcoode@gmail.com
-
Bert Kenward authored
The 8000 series and newer NICs all get hardware timestamps from the MAC and can provide timestamps on a normal TX queue, rather than via a slow path through the MC. As such we can use this path for any packet where a hardware timestamp is requested. This also enables support for PTP over transports other than IPv4+UDP. Signed-off-by: Bert Kenward <bkenward@solarflare.com> Signed-off-by: Edward Cree <ecree@xilinx.com> Link: https://lore.kernel.org/r/510652dc-54b4-0e11-657e-e37ee3ca26a9@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-