- 08 Jul, 2024 10 commits
-
-
Puranjay Mohan authored
The fexit_sleep test now runs successfully on the BPF CI, so remove it from the deny list. ftrace direct calls were blocking tracing programs on arm64, but this has since been resolved. For more details, see the discussion in [*]. Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240705145009.32340-1-puranjay@kernel.org [*]
-
Alexei Starovoitov authored
Benjamin Tissoires says: ==================== Small API fix for bpf_wq I realized this while having a map containing both a struct bpf_timer and a struct bpf_wq: the third argument provided to the bpf_wq callback is not the struct bpf_wq pointer itself, but the pointer to the value in the map. This means that users need to double-cast the provided "value", as it is not a struct bpf_wq *. This is an API change, but there don't seem to be many users of bpf_wq right now, so we should be able to make this change now. Signed-off-by: Benjamin Tissoires <bentiss@kernel.org> --- Changes in v2: - amended the selftests to retrieve something from the third argument of the callback - Link to v1: https://lore.kernel.org/r/20240705-fix-wq-v1-0-91b4d82cd825@kernel.org --- ==================== Link: https://lore.kernel.org/r/20240708-fix-wq-v2-0-667e5c9fbd99@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Benjamin Tissoires authored
See the previous patch: the API was wrong, we were provided the pointer to the value, not the actual struct bpf_wq *. Signed-off-by: Benjamin Tissoires <bentiss@kernel.org> Link: https://lore.kernel.org/r/20240708-fix-wq-v2-2-667e5c9fbd99@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Benjamin Tissoires authored
I realized this while having a map containing both a struct bpf_timer and a struct bpf_wq: the third argument provided to the bpf_wq callback is not the struct bpf_wq pointer itself, but the pointer to the value in the map. This means that users need to double-cast the provided "value", as it is not a struct bpf_wq *. This is an API change, but there don't seem to be many users of bpf_wq right now, so we should be able to make this change now. Fixes: 81f1d7a5 ("bpf: wq: add bpf_wq_set_callback_impl") Signed-off-by: Benjamin Tissoires <bentiss@kernel.org> Link: https://lore.kernel.org/r/20240708-fix-wq-v2-1-667e5c9fbd99@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
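A minimal sketch of the callback shape implied by this change (the map value type and function names are illustrative, not the exact selftest code):

  struct elem { struct bpf_wq wq; int data; };  /* hypothetical map value */

  /* before: the verifier typed the third argument as struct bpf_wq *,
   * even though it actually pointed at the map value, forcing a cast */
  static int wq_cb_old(void *map, int *key, struct bpf_wq *value)
  {
          struct elem *e = (struct elem *)value;  /* the double cast */
          return e->data;
  }

  /* after: the third argument is typed as the map value itself */
  static int wq_cb_new(void *map, int *key, void *value)
  {
          struct elem *e = value;
          return e->data;
  }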
-
Andreas Ziegler authored
In the current state, an erroneous call to bpf_object__find_map_by_name(NULL, ...) leads to a segmentation fault through the following call chain: bpf_object__find_map_by_name(obj = NULL, ...) -> bpf_object__for_each_map(pos, obj = NULL) -> bpf_object__next_map((obj = NULL), NULL) -> return (obj = NULL)->maps While calling bpf_object__find_map_by_name with obj = NULL is obviously incorrect, this should not lead to a segmentation fault but rather be handled gracefully. As __bpf_map__iter already handles this situation correctly, we can delegate the check for the regular case there and only add a check in case the prev or next parameter is NULL. Signed-off-by: Andreas Ziegler <ziegler.andreas@siemens.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240703083436.505124-1-ziegler.andreas@siemens.com
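A minimal sketch of the guard described above (assuming, as the text states, that __bpf_map__iter already handles a NULL obj for the regular iteration case):

  struct bpf_map *bpf_object__next_map(const struct bpf_object *obj,
                                       const struct bpf_map *prev)
  {
          if (prev == NULL) {
                  if (obj == NULL)  /* avoid dereferencing (NULL)->maps */
                          return NULL;
                  return obj->maps;
          }
          return __bpf_map__iter(prev, obj, 1);
  }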
-
Ilya Leoshkevich authored
Now that the s390x JIT supports exceptions, remove the respective tests from the denylist. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240703005047.40915-4-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Implement the following three pieces required from the JIT: - A "top-level" BPF prog (exception_boundary) must save all non-volatile registers, and not only the ones that it clobbers. - A "handler" BPF prog (exception_cb) must switch stack to that of exception_boundary, and restore the registers that exception_boundary saved. - arch_bpf_stack_walk() must unwind the stack and provide the results in a way that satisfies both bpf_throw() and exception_cb. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240703005047.40915-3-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Using a mask instead of an array saves a small amount of memory and allows marking multiple registers as seen with a simple "or". Another positive side-effect is that it speeds up verification with jitterbug. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240703005047.40915-2-iii@linux.ibm.com
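Illustratively (struct and macro names assumed, not the exact s390x JIT code; kernel types):

  #define REG_SEEN(r) (1U << (r))

  struct bpf_jit {
          u16 seen_regs;  /* bitmask replacing a bool-per-register array */
  };

  static void mark_seen(struct bpf_jit *jit, int r1, int r2)
  {
          /* several registers marked as seen with a single "or" */
          jit->seen_regs |= REG_SEEN(r1) | REG_SEEN(r2);
  }

  static bool was_seen(const struct bpf_jit *jit, int r)
  {
          return jit->seen_regs & REG_SEEN(r);
  }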
-
Dan Carpenter authored
After commit 0ede61d8 ("file: convert to SLAB_TYPESAFE_BY_RCU") this loop always iterates exactly one time. Delete the for statement and reduce the body's indentation by one tab. Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Christian Brauner <brauner@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/bpf/ZoWJF51D4zWb6f5t@stanley.mountain
-
Puranjay Mohan authored
When BPF_TRAMP_F_CALL_ORIG is not set, stack space for passing arguments on stack doesn't need to be reserved because the original function is not called. Only reserve space for stacked arguments when BPF_TRAMP_F_CALL_ORIG is set. Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Pu Lehui <pulehui@huawei.com> Link: https://lore.kernel.org/bpf/20240708114758.64414-1-puranjay@kernel.org
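A condensed sketch of the logic (function and variable names assumed; the actual change lives in the arm64 trampoline JIT; BPF_TRAMP_F_CALL_ORIG comes from include/linux/bpf.h):

  static int trampoline_stack_size(int base_size, int nstack_args, u64 flags)
  {
          int size = base_size;

          /* reserve room for arguments passed on the stack only when
           * the original function will actually be called */
          if (flags & BPF_TRAMP_F_CALL_ORIG)
                  size += nstack_args * 8;  /* 8 bytes per stacked arg */
          return size;
  }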
-
- 02 Jul, 2024 19 commits
-
-
Florian Lehner authored
Use the .map_alloc_check callback to perform allocation checks before allocating memory for the devmap. Signed-off-by: Florian Lehner <dev@der-flo.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240615101158.57889-1-dev@der-flo.net
-
Ilya Leoshkevich authored
Now that the s390x JIT supports arena, remove the respective tests from the denylist. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-13-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Check that __sync_*() functions don't cause kernel panics when handling freed arena pages. x86_64 does not support some arena atomics yet, and aarch64 may or may not support them, based on the availability of LSE atomics at run time. Do not enable this test for these architectures for simplicity. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-12-iii@linux.ibm.com
-
Ilya Leoshkevich authored
While clang uses __attribute__((address_space(1))) both for defining arena pointers and arena globals, GCC requires different syntax for each. __arena already covers the first use case; introduce __arena_global to cover the second one, as sketched below. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-11-iii@linux.ibm.com
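A hedged sketch of how the two macros might look in the selftests arena header; the GCC spelling here is an assumption based on the description, not the exact patch:

  #ifdef __clang__
  #define __arena        __attribute__((address_space(1)))
  #define __arena_global __attribute__((address_space(1)))
  #else
  #define __arena
  #define __arena_global SEC(".addr_space.1")  /* assumed GCC spelling */
  #endif

  int __arena *shared_ptr;          /* arena pointer: covered by __arena */
  int __arena_global shared_value;  /* arena global: the new use case */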
-
Ilya Leoshkevich authored
s390x supports most BPF atomics using single instructions, which makes implementing arena support a matter of adding the arena address to the base register (unfortunately atomics do not support index registers) and wrapping the respective native instruction in probing sequences. An exception is BPF_XCHG, which is implemented using two different memory accesses and a loop. Make sure there are enough extable entries for both instructions. Compute the base address once for both memory accesses. Since on exception we need to land after the loop, emit the nops manually. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-10-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Now that BPF_PROBE_MEM32 and address space cast instructions are implemented, tell the verifier that the JIT supports arena. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-9-iii@linux.ibm.com
-
Ilya Leoshkevich authored
The new address cast instruction translates arena offsets to userspace addresses. NULL pointers must not be translated. The common code sets up the mappings in such a way that it is enough to replace the upper 32 bits to achieve the desired result. s390x has an instruction for exactly this: INSERT IMMEDIATE. Implement the sequence using 3 instructions: LOAD AND TEST, BRANCH RELATIVE ON CONDITION and INSERT IMMEDIATE. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-8-iii@linux.ibm.com
-
Ilya Leoshkevich authored
BPF_PROBE_MEM32 is a new mode for LDX, ST and STX instructions. The JIT is supposed to add the start address of the kernel arena mapping to the %dst register, and use a probing variant of the respective memory access. Reuse the existing probing infrastructure for that. Put the arena address into the literal pool, load it into %r1 and use that as an index register. Do not clear any registers in ex_handler_bpf() for failing ST and STX instructions. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-7-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Currently we land on the nop, which is unnecessary: we can just as well begin executing the next instruction. Furthermore, the upcoming arena support for the loop-based BPF_XCHG implementation will require landing on an instruction that comes after the loop. So land on the next JITed instruction, which covers both cases. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-6-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Currently probe insns are handled by two "if" statements at the beginning and at the end of bpf_jit_insn(). The first one needs to be in sync with the huge switch statement on insn->code that follows it, which was not a problem so far, since the check is small. The introduction of arena will make it significantly larger, and it will no longer be obvious whether it is in sync with the opcode switch. Move these statements into the new bpf_jit_probe_load_pre() and bpf_jit_probe_post() functions, and call them only from the cases that need them. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-5-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Commit 7fc8c362 ("s390/bpf: encode register within extable entry") introduced explicit passing of the number of the register to be cleared to ex_handler_bpf(), which replaced deducing it from the respective native load instruction using get_probe_mem_regno(). Replace the second and last usage in the same manner, and remove this function. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-4-iii@linux.ibm.com
-
Ilya Leoshkevich authored
The upcoming arena support for the loop-based BPF_XCHG implementation requires emitting nop and extable entries separately. Move nop handling into a separate function, and keep track of the nop offset. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-3-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Zero-extending results of atomic probe operations fails with: "verifier bug. zext_dst is set, but no reg is defined". The problem is that insn_def_regno() handles BPF_ATOMICs, but not BPF_PROBE_ATOMICs. Fix by adding the missing condition, as sketched below. Fixes: d503a04f ("bpf: Add support for certain atomics in bpf_arena to x86 JIT") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240701234304.14336-2-iii@linux.ibm.com
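A condensed sketch of insn_def_regno() after the fix, per the description above (other instruction classes elided):

  static int insn_def_regno(const struct bpf_insn *insn)
  {
          switch (BPF_CLASS(insn->code)) {
          case BPF_STX:
                  /* BPF_PROBE_ATOMIC is now treated like BPF_ATOMIC */
                  if ((BPF_MODE(insn->code) == BPF_ATOMIC ||
                       BPF_MODE(insn->code) == BPF_PROBE_ATOMIC) &&
                      (insn->imm & BPF_FETCH)) {
                          if (insn->imm == BPF_CMPXCHG)
                                  return BPF_REG_0;
                          return insn->src_reg;
                  }
                  return -1;
          /* ... other classes ... */
          default:
                  return insn->dst_reg;
          }
  }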
-
Tao Chen authored
As Quentin said [0], BPF map pinning will fail if the pinmaps path is not under the bpffs, like: "libbpf: specified path /home/ubuntu/test/sock_ops_map is not on BPF FS" followed by "Error: failed to pin all maps". [0] https://github.com/libbpf/bpftool/issues/146 Fixes: 3767a94b ("bpftool: add pinmaps argument to the load/loadall") Signed-off-by: Tao Chen <chen.dylane@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Quentin Monnet <qmo@kernel.org> Reviewed-by: Quentin Monnet <qmo@kernel.org> Link: https://lore.kernel.org/bpf/20240702131150.15622-1-chen.dylane@gmail.com
-
Pu Lehui authored
Add a testcase where the 7th argument is a struct for architectures with 8 argument registers, and increase the complexity of the struct. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20240702121944.1091530-4-pulehui@huaweicloud.com
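A hypothetical shape of such a testcase (not the exact bpf_testmod code): with 8 argument registers, six scalars occupy the first six, and the struct as the 7th argument exercises the remaining register pair:

  struct arg_struct { long x; long y; };  /* fits in two registers */

  __attribute__((noinline))
  int test_struct_arg_7(long a, long b, long c, long d, long e, long f,
                        struct arg_struct g)
  {
          return a + b + c + d + e + f + g.x + g.y;
  }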
-
Pu Lehui authored
Factor out the many-args tests from tracing_struct and rename some functions so the names make more sense. Meanwhile, remove the unnecessary skeleton detach operation, as it is covered by the skeleton destroy operation. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20240702121944.1091530-3-pulehui@huaweicloud.com
-
Pu Lehui authored
This patch adds support for up to 12 function arguments to the riscv64 bpf trampoline. The current bpf trampoline supports <= sizeof(u64) bytes scalar arguments [0] and <= 16 bytes struct arguments [1]. Therefore, we focus on the situation where scalars are at most XLEN bits and aggregates whose total size does not exceed 2×XLEN bits, per the riscv calling convention [2]. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Acked-by: Puranjay Mohan <puranjay@kernel.org> Link: https://elixir.bootlin.com/linux/v6.8/source/kernel/bpf/btf.c#L6184 [0] Link: https://elixir.bootlin.com/linux/v6.8/source/kernel/bpf/btf.c#L6769 [1] Link: https://github.com/riscv-non-isa/riscv-elf-psabi-doc/releases/download/draft-20230929-e5c800e661a53efe3c2678d71a306323b60eb13b/riscv-abi.pdf [2] Link: https://lore.kernel.org/bpf/20240702121944.1091530-2-pulehui@huaweicloud.com
-
Tushar Vyavahare authored
Introduce dynamic adjustment capabilities for fill_size and comp_size parameters to support larger batch sizes beyond the previous 2K limit. Update HW_SW_MAX_RING_SIZE test cases to evaluate AF_XDP's robustness by pushing hardware and software ring sizes to their limits. This test ensures AF_XDP's reliability amidst potential producer/consumer throttling due to maximum ring utilization. Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20240702055916.48071-3-tushar.vyavahare@intel.com
-
Tushar Vyavahare authored
Previously, the HW_SW_MIN_RING_SIZE and HW_SW_MAX_RING_SIZE test cases were not validating Tx/Rx traffic at all due to an early return after changing the HW ring size in testapp_validate_traffic(). Fix the flow by checking the return value of set_ring_size() and acting upon it rather than terminating the test case there. Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20240702055916.48071-2-tushar.vyavahare@intel.com
-
- 01 Jul, 2024 8 commits
-
-
Zhu Jun authored
Delete extra blank lines inside of test_selftest(). Signed-off-by: Zhu Jun <zhujun2@cmss.chinamobile.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240627031905.7133-1-zhujun2@cmss.chinamobile.com
-
Pu Lehui authored
We use bpf_prog_pack to aggregate bpf programs into huge pages to relieve iTLB pressure on the system. We can apply it to the bpf trampoline as well, as Song has already implemented it in the core and on x86 [0]. This patch applies bpf_prog_pack to the RV64 bpf trampoline. Since Song and Puranjay have done a lot of work on bpf_prog_pack for RV64, implementing this is straightforward. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> #riscv Link: https://lore.kernel.org/all/20231206224054.492250-1-song@kernel.org [0] Link: https://lore.kernel.org/bpf/20240622030437.3973492-4-pulehui@huaweicloud.com
-
Pu Lehui authored
We get the size of the trampoline image during the dry run phase and allocate memory based on that size. The allocated image will then be populated with instructions during the real patch phase. But after commit 26ef208c ("bpf: Use arch_bpf_trampoline_size"), the 'im' argument is inconsistent between the dry run and real patch phases. This may cause emit_imm on RV64 to generate a different number of instructions when generating the 'im' address, potentially causing out-of-bounds issues. Let's emit the maximum number of instructions for the 'im' address during the dry run to fix this problem. Fixes: 26ef208c ("bpf: Use arch_bpf_trampoline_size") Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240622030437.3973492-3-pulehui@huaweicloud.com
-
Pu Lehui authored
For trampolines using bpf_prog_pack, we need to generate a rw_image buffer with a size of (image_end - image). For a regular trampoline, we use the precise image size generated by arch_bpf_trampoline_size to allocate rw_image. But for the struct_ops trampoline, we allocate rw_image directly with a size close to PAGE_SIZE. We do not need to allocate that much, as the needed size is usually much smaller than PAGE_SIZE. Let's use the precise image size for it too. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> #riscv Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/bpf/20240622030437.3973492-2-pulehui@huaweicloud.com
-
Alan Maguire authored
Coverity points out that after calling btf__new_empty_split() the wrong value is checked for error. Fixes: 58e185a0 ("libbpf: Add btf__distill_base() creating split BTF with distilled base BTF") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240629100058.2866763-1-alan.maguire@oracle.com
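A sketch of the pattern being fixed (names illustrative, not the actual libbpf diff): the pointer returned by btf__new_empty_split() must itself be the one checked for error:

  static struct btf *distill(struct btf *src_btf)
  {
          struct btf *new_base = btf__new_empty_split(src_btf);

          if (!new_base)  /* check the value just assigned, not another one */
                  return NULL;
          /* ... build the distilled base BTF ... */
          return new_base;
  }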
-
Lorenzo Bianconi authored
Introduce e2e selftest for bpf_xdp_flow_lookup kfunc through xdp_flowtable utility. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/b74393fb4539aecbbd5ac7883605f86a95fb0b6b.1719698275.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
Introduce the bpf_xdp_flow_lookup kfunc in order to perform the lookup of a given flowtable entry based on a fib tuple of incoming traffic. bpf_xdp_flow_lookup can be used as a building block to offload sw flowtable processing to XDP when a hw flowtable is not available. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Pablo Neira Ayuso <pablo@netfilter.org> Link: https://lore.kernel.org/bpf/55d38a4e5856f6d1509d823ff4e98aaa6d356097.1719698275.git.lorenzo@kernel.org
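A hedged usage sketch from an XDP program; the exact kfunc signature, struct layouts and tuple fields are assumptions based on the description:

  /* kfunc declaration (assumed signature) */
  extern struct flow_offload_tuple_rhash *
  bpf_xdp_flow_lookup(struct xdp_md *ctx, struct bpf_fib_lookup *fib_tuple,
                      struct bpf_flowtable_opts *opts, u32 opts_len) __ksym;

  SEC("xdp")
  int xdp_flowtable(struct xdp_md *ctx)
  {
          struct bpf_flowtable_opts opts = {};
          struct bpf_fib_lookup tuple = {
                  .family  = AF_INET,
                  .ifindex = ctx->ingress_ifindex,
                  /* ... l3/l4 fields parsed from the packet ... */
          };
          struct flow_offload_tuple_rhash *fh;

          fh = bpf_xdp_flow_lookup(ctx, &tuple, &opts, sizeof(opts));
          if (!fh)
                  return XDP_PASS;  /* no sw flowtable entry: slow path */
          /* ... update counters, rewrite headers, redirect ... */
          return XDP_PASS;
  }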
-
Florian Westphal authored
This adds a small internal mapping table so that a new bpf (xdp) kfunc can perform lookups in a flowtable. As-is, an xdp program has access to the device pointer but no way to do a lookup in a flowtable -- there is no way to obtain the needed struct without questionable stunts. The mapping table allows obtaining an nf_flowtable pointer given a net_device structure. To keep backward compatibility, the infrastructure allows the user to add a given device to multiple flowtables, but lookups will always return the first added mapping, since the assumed correct configuration is a 1:1 mapping between flowtables and net_devices. Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Pablo Neira Ayuso <pablo@netfilter.org> Link: https://lore.kernel.org/bpf/9f20e2c36f494b3bf177328718367f636bb0b2ab.1719698275.git.lorenzo@kernel.org
-
- 27 Jun, 2024 1 commit
-
-
Jiri Olsa authored
ARRAY_SIZE is used in multiple places; move its definition into the bpf_misc.h header. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/20240626134719.3893748-1-jolsa@kernel.org
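The definition in question is the classic idiom, now shared from bpf_misc.h so individual selftests stop redefining it:

  #ifndef ARRAY_SIZE
  #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
  #endif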
-
- 26 Jun, 2024 2 commits
-
-
Alan Maguire authored
When building with clang for ARCH=i386, the following errors are observed:

  CC      kernel/bpf/btf_relocate.o
  ./tools/lib/bpf/btf_relocate.c:206:23: error: implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Werror,-Wsingle-bit-bitfield-constant-conversion]
    206 |                 info[id].needs_size = true;
        |                                     ^ ~
  ./tools/lib/bpf/btf_relocate.c:256:25: error: implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Werror,-Wsingle-bit-bitfield-constant-conversion]
    256 |                 base_info.needs_size = true;
        |                                       ^ ~
  2 errors generated.

The problem is we use 1-bit, 31-bit bitfields in a signed int. Changing to

  bool needs_size: 1;
  unsigned int size: 31;

...resolves the error and pahole reports that 4 bytes are used for the underlying representation:

  $ pahole btf_name_info tools/lib/bpf/btf_relocate.o
  struct btf_name_info {
          const char *  name;          /*  0     8 */
          unsigned int  needs_size:1;  /*  8: 0  4 */
          unsigned int  size:31;       /*  8: 1  4 */
          __u32         id;            /* 12     4 */

          /* size: 16, cachelines: 1, members: 4 */
          /* last cacheline: 16 bytes */
  };

Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240624192903.854261-1-alan.maguire@oracle.com
-
Ma Ke authored
Guard close() with extra link_fd[i] > 0 and fexit_fd[i] > 0 checks to prevent close(-1). Signed-off-by: Ma Ke <make24@iscas.ac.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240623131753.2133829-1-make24@iscas.ac.cn
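A minimal sketch of the guard (array names from the text; the loop bound is assumed):

  for (i = 0; i < prog_cnt; i++) {
          if (link_fd[i] > 0)  /* skip fds that were never opened */
                  close(link_fd[i]);
          if (fexit_fd[i] > 0)
                  close(fexit_fd[i]);
  }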
-