- 14 Sep, 2021 4 commits
-
-
Andrii Nakryiko authored
Remove the need to explicitly pass bpf_sec_def for auto-attachable BPF programs, as it is already recorded at bpf_object__open() time for all recognized types of BPF programs. This further reduces the number of explicit calls to find_sec_def(), simplifying further refactorings. No functional changes are done by this patch. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210914014733.2768-4-andrii@kernel.org
-
Andrii Nakryiko authored
Refactor bpf_object__open() sequencing to perform BPF program type detection based on SEC() definitions before we get to relocation collection. This makes more information about a BPF program available by the time we get to, say, struct_ops relocation gathering. This, subsequently, simplifies struct_ops logic and removes the need to perform extra find_sec_def() resolution. With this patch libbpf will require all struct_ops BPF programs to be marked with SEC("struct_ops") or SEC("struct_ops/xxx") annotations. Real-world applications are already doing that through something like selftests' BPF_STRUCT_OPS() macro. This change streamlines libbpf's internal handling of SEC() definitions and is in the spirit of upcoming libbpf-1.0 section strictness changes ([0]). [0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#stricter-and-more-uniform-bpf-program-section-name-sec-handling Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210914014733.2768-3-andrii@kernel.org
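For illustration, a minimal sketch of the section annotation libbpf now requires for struct_ops programs, roughly what selftests' BPF_STRUCT_OPS() macro expands to. The function, struct, and ops names below are placeholders, and a real congestion-control ops would also need all of the kernel's mandatory callbacks.

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* SEC("struct_ops/<name>") is what lets libbpf set the program type at open time */
SEC("struct_ops/sample_init")
void BPF_PROG(sample_init, struct sock *sk)
{
}

/* the ".struct_ops" map ties the program into the kernel-side ops struct */
SEC(".struct_ops")
struct tcp_congestion_ops sample_ops = {
	.init = (void *)sample_init,
	.name = "bpf_sample",
};
```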
-
Andrii Nakryiko authored
Update struct_ops selftests to always specify the "struct_ops" section prefix. Libbpf will require a proper BPF program type to be set in the next patch, so this prevents the tests from breaking. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210914014733.2768-2-andrii@kernel.org
-
Rafael David Tinoco authored
Allow kprobe tracepoint event creation through the legacy interface, as kprobe dynamic PMU support, which is used by default, was only added in v4.17. Store the legacy kprobe name in struct bpf_perf_link instead of creating a new "subclass" off of bpf_perf_link. This is OK as it is just two new fields, which are also going to be reused for legacy uprobe support in follow-up patches. Signed-off-by: Rafael David Tinoco <rafaeldtinoco@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210912064844.3181742-1-rafaeldtinoco@gmail.com
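For context, a hedged sketch of the legacy attach flow this enables: register the probe through tracefs, look up its event id, then open it as a tracepoint perf event. The probe name, tracefs path, and error handling here are illustrative and do not mirror libbpf's internal implementation.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_legacy_kprobe(const char *func)
{
	struct perf_event_attr attr = {};
	FILE *f;
	int id;

	/* 1. register the kprobe with the legacy tracefs interface */
	f = fopen("/sys/kernel/debug/tracing/kprobe_events", "a");
	if (!f)
		return -1;
	fprintf(f, "p:kprobes/my_probe %s\n", func);
	fclose(f);

	/* 2. read back the tracepoint id the kernel assigned to it */
	f = fopen("/sys/kernel/debug/tracing/events/kprobes/my_probe/id", "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &id) != 1)
		id = -1;
	fclose(f);
	if (id < 0)
		return -1;

	/* 3. open it as a regular tracepoint perf event; a BPF program is
	 *    then attached via PERF_EVENT_IOC_SET_BPF as before */
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_TRACEPOINT;
	attr.config = id;
	return syscall(__NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
		       -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
}
```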
-
- 13 Sep, 2021 6 commits
-
-
Andrii Nakryiko authored
Turn the previously auto-generated libbpf_version.h header into a normal header file. This prevents various tricky Makefile integration issues, simplifies the overall build process, and also allows further extending it with more versioning-related APIs in the future. To prevent accidental out-of-sync versions between libbpf.map and libbpf_version.h, the Makefile checks their consistency at build time. Simultaneously with this change, bump libbpf.map to v0.6. Also undo adding libbpf's output directory to the include path for kernel/bpf/preload, bpftool, and resolve_btfids, which is no longer necessary because libbpf_version.h is just a normal header like any other. Fixes: 0b46b755 ("libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210913222309.3220849-1-andrii@kernel.org
-
Daniel Borkmann authored
The tailcall_3 test program uses bpf_tail_call_static() where the JIT would patch a direct jump. Add a new tailcall_6 test program replicating exactly the same test, just ensuring that bpf_tail_call() uses a map index about which the verifier cannot make assumptions this time. In other words, this now covers both cases on the x86-64 JIT, meaning JIT images with emit_bpf_tail_call_direct() emission as well as JIT images with emit_bpf_tail_call_indirect() emission.

  # echo 1 > /proc/sys/net/core/bpf_jit_enable
  # ./test_progs -t tailcalls
  #136/1 tailcalls/tailcall_1:OK
  #136/2 tailcalls/tailcall_2:OK
  #136/3 tailcalls/tailcall_3:OK
  #136/4 tailcalls/tailcall_4:OK
  #136/5 tailcalls/tailcall_5:OK
  #136/6 tailcalls/tailcall_6:OK
  #136/7 tailcalls/tailcall_bpf2bpf_1:OK
  #136/8 tailcalls/tailcall_bpf2bpf_2:OK
  #136/9 tailcalls/tailcall_bpf2bpf_3:OK
  #136/10 tailcalls/tailcall_bpf2bpf_4:OK
  #136/11 tailcalls/tailcall_bpf2bpf_5:OK
  #136 tailcalls:OK
  Summary: 1/11 PASSED, 0 SKIPPED, 0 FAILED

  # echo 0 > /proc/sys/net/core/bpf_jit_enable
  # ./test_progs -t tailcalls
  #136/1 tailcalls/tailcall_1:OK
  #136/2 tailcalls/tailcall_2:OK
  #136/3 tailcalls/tailcall_3:OK
  #136/4 tailcalls/tailcall_4:OK
  #136/5 tailcalls/tailcall_5:OK
  #136/6 tailcalls/tailcall_6:OK
  [...]

For the interpreter, the tailcall_1-6 tests pass as well. The later tailcall_bpf2bpf_* tests fail due to the lack of bpf2bpf + tailcall support in the interpreter, so this is expected. Also, manual inspection shows that the loaded programs from both the tailcall_3 and tailcall_6 test cases emit the expected opcodes:

* tailcall_3 disasm, emit_bpf_tail_call_direct():

  [...]
   b: push   %rax
   c: push   %rbx
   d: push   %r13
   f: mov    %rdi,%rbx
  12: movabs $0xffff8d3f5afb0200,%r13
  1c: mov    %rbx,%rdi
  1f: mov    %r13,%rsi
  22: xor    %edx,%edx                 _
  24: mov    -0x4(%rbp),%eax          |  limit check
  2a: cmp    $0x20,%eax               |
  2d: ja     0x0000000000000046       |
  2f: add    $0x1,%eax                |
  32: mov    %eax,-0x4(%rbp)          |_
  38: nopl   0x0(%rax,%rax,1)
  3d: pop    %r13
  3f: pop    %rbx
  40: pop    %rax
  41: jmpq   0xffffffffffffe377
  [...]

* tailcall_6 disasm, emit_bpf_tail_call_indirect():

  [...]
  47: movabs $0xffff8d3f59143a00,%rsi
  51: mov    %edx,%edx
  53: cmp    %edx,0x24(%rsi)
  56: jbe    0x0000000000000093        _
  58: mov    -0x4(%rbp),%eax          |  limit check
  5e: cmp    $0x20,%eax               |
  61: ja     0x0000000000000093       |
  63: add    $0x1,%eax                |
  66: mov    %eax,-0x4(%rbp)          |_
  6c: mov    0x110(%rsi,%rdx,8),%rcx
  74: test   %rcx,%rcx
  77: je     0x0000000000000093
  79: pop    %rax
  7a: mov    0x30(%rcx),%rcx
  7e: add    $0xb,%rcx
  82: callq  0x000000000000008e
  87: pause
  89: lfence
  8c: jmp    0x0000000000000087
  8e: mov    %rcx,(%rsp)
  92: retq
  [...]

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Tested-by: Tiezhu Yang <yangtiezhu@loongson.cn> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com> Acked-by: Paul Chaignon <paul@cilium.io> Link: https://lore.kernel.org/bpf/CAM1=_QRyRVCODcXo_Y6qOm1iT163HoiSj8U2pZ8Rj3hzMTT=HQ@mail.gmail.com Link: https://lore.kernel.org/bpf/20210910091900.16119-1-daniel@iogearbox.net
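A hedged sketch of the distinction the new test exercises: bpf_tail_call_static() requires a compile-time constant index, letting the x86-64 JIT patch a direct jump, while plain bpf_tail_call() with a non-constant index forces the indirect tail-call path. Program, map, and variable names here are illustrative, not the actual selftest source.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

int count = 0;

SEC("tc")
int classifier_direct(struct __sk_buff *skb)
{
	/* constant index -> JIT can emit a patched direct jump */
	bpf_tail_call_static(skb, &jmp_table, 0);
	return 0;
}

SEC("tc")
int classifier_indirect(struct __sk_buff *skb)
{
	int idx = count;	/* not a verifier-known constant */

	/* runtime index -> emit_bpf_tail_call_indirect() path */
	bpf_tail_call(skb, &jmp_table, idx);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```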
-
Alexei Starovoitov authored
Song Liu says: ====================

Changes v6 => v7:
1. Improve/fix intel_pmu_snapshot_branch_stack() logic. (Peter)

Changes v5 => v6:
1. Add local_irq_save/restore to intel_pmu_snapshot_branch_stack. (Peter)
2. Remove buf and size check in bpf_get_branch_snapshot, move the flags check to later in the function. (Peter, Andrii)
3. Revise comments for bpf_get_branch_snapshot in bpf.h (Andrii)

Changes v4 => v5:
1. Modify perf_snapshot_branch_stack_t to save some memcpy. (Andrii)
2. Minor fixes in selftests. (Andrii)

Changes v3 => v4:
1. Do not reshuffle intel_pmu_disable_all(). Use some inline to save LBR entries. (Peter)
2. Move static_call(perf_snapshot_branch_stack) to the helper. (Alexei)
3. Add argument flags to bpf_get_branch_snapshot. (Andrii)
4. Make MAX_BRANCH_SNAPSHOT an enum (Andrii), and rename it as PERF_MAX_BRANCH_SNAPSHOT.
5. Make bpf_get_branch_snapshot similar to bpf_read_branch_records. (Andrii)
6. Move the test target function to bpf_testmod. Updated kallsyms_find_next to work properly with modules. (Andrii)

Changes v2 => v3:
1. Fix the use of static_call. (Peter)
2. Limit the use to perfmon version >= 2. (Peter)
3. Modify intel_pmu_snapshot_branch_stack() to use intel_pmu_disable_all() and intel_pmu_enable_all().

Changes v1 => v2:
1. Rename the helper as bpf_get_branch_snapshot;
2. Fix/simplify the use of static_call;
3. Instead of percpu variables, let intel_pmu_snapshot_branch_stack output branch records to an output argument of type perf_branch_snapshot.

The branch stack can be very useful in understanding software events. For example, when a long function, e.g. sys_perf_event_open, returns an errno, it is not obvious why the function failed. The branch stack can provide very helpful information in this type of scenario.

This set adds support to read the branch stack with a new BPF helper, bpf_get_branch_snapshot(). Currently, this is only supported on Intel systems. It is also possible to support the same feature for PowerPC.

The hardware that records the branch stack is not stopped automatically on software events, so it is necessary to stop it in software soon after the event triggers; otherwise, the hardware buffers/registers will be flushed. One of the key design considerations in this set is to minimize the number of branch record entries between when the event triggers and when the hardware recorder is stopped. Based on this goal, the current design differs from the discussions in the original RFC [1]: 1) a static call is used when supported, to save a function pointer dereference; 2) intel_pmu_lbr_disable_all is used instead of perf_pmu_disable(), because the latter uses about 10 entries before stopping the LBR. With the current code, on Intel CPUs, the LBR is stopped 7 branch entries after the fexit event triggers:

  ID: 0 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
  ID: 1 from __brk_limit+477143934 to bpf_get_branch_snapshot+0
  ID: 2 from __brk_limit+477192263 to __brk_limit+477143880 # trampoline
  ID: 3 from __bpf_prog_enter+34 to __brk_limit+477192251
  ID: 4 from migrate_disable+60 to __bpf_prog_enter+9
  ID: 5 from __bpf_prog_enter+4 to migrate_disable+0
  ID: 6 from bpf_testmod_loop_test+20 to __bpf_prog_enter+0
  ID: 7 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
  ID: 8 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
  ...

[1] https://lore.kernel.org/bpf/20210818012937.2522409-1-songliubraving@fb.com/
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Song Liu authored
This test uses bpf_get_branch_snapshot from a fexit program. The test uses a target function (bpf_testmod_loop_test) and compares the records against kallsyms. If there aren't enough records matching kallsyms, the test fails. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210910183352.3151445-4-songliubraving@fb.com
-
Song Liu authored
Introduce bpf_get_branch_snapshot(), which allows a tracing program to get the branch trace from hardware (e.g. Intel LBR). To use the feature, the user needs to create a perf_event with proper branch_record filtering on each CPU, and then call bpf_get_branch_snapshot() in the BPF program. On Intel CPUs, the VLBR event (raw event 0x1b00) can be used for this. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210910183352.3151445-3-songliubraving@fb.com
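A hedged sketch of calling the new helper from an fexit program, assuming a vmlinux.h and libbpf recent enough to know the helper; the attach target, buffer size, and program name are illustrative. The helper fills the buffer with struct perf_branch_entry records and returns the number of bytes written (or a negative error).

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define MAX_LBR 32

struct perf_branch_entry entries[MAX_LBR] = {};
int entry_cnt = 0;

SEC("fexit/bpf_testmod_loop_test")
int BPF_PROG(on_exit)
{
	long ret;

	/* flags is reserved and must be 0 for now */
	ret = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
	if (ret > 0)
		entry_cnt = ret / sizeof(struct perf_branch_entry);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```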
-
Song Liu authored
The typical way to access branch records (e.g. Intel LBR) is via a hardware perf_event. For CPUs with FREEZE_LBRS_ON_PMI support, the PMI can capture reliable LBR data. On the other hand, LBR can also be useful in non-PMI scenarios. For example, in a kretprobe or BPF fexit program, LBR can provide a lot of information on what happened within the function. Add an API to use branch records for software use. Note that when the software event triggers, it is necessary to stop the branch record hardware as soon as possible. Therefore, static_call is used to remove some branch instructions in this process. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/bpf/20210910183352.3151445-2-songliubraving@fb.com
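For illustration, a hedged sketch of how a kernel-side consumer would invoke the new interface through the perf_snapshot_branch_stack static call mentioned in this series; the surrounding function is illustrative and the callback signature follows the series description rather than being quoted from the final header.

```c
#include <linux/perf_event.h>
#include <linux/static_call.h>

/* caller provides storage for up to max_cnt entries (e.g. a per-CPU buffer) */
static int grab_branch_snapshot(struct perf_branch_entry *entries,
				unsigned int max_cnt)
{
	/* the static call stops the branch recording hardware as early as
	 * possible, copies out up to max_cnt records, and returns how many
	 * entries were actually captured */
	return static_call(perf_snapshot_branch_stack)(entries, max_cnt);
}
```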
-
- 10 Sep, 2021 23 commits
-
-
Vadim Fedorenko authored
Analogous to the gso_segs selftests introduced in commit d9ff286a ("bpf: allow BPF programs access skb_shared_info->gso_segs field"). Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210909220409.8804-3-vfedorenko@novek.ru
-
Vadim Fedorenko authored
BPF programs may want to know hardware timestamps if the NIC supports such timestamping. Expose this data as the hwtstamp field of __sk_buff, the same way as gso_segs/gso_size. This field can be accessed from the same program types as the tstamp field, but it is read-only. An explicit check denying access to the padding is added to bpf_skb_is_valid_access. Also update the BPF_PROG_TEST_RUN tests of the feature. Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210909220409.8804-2-vfedorenko@novek.ru
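A minimal sketch of reading the new field from a TC program; the program name and what is done with the value are illustrative.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int read_hwtstamp(struct __sk_buff *skb)
{
	/* hardware RX timestamp in ns, 0 if the NIC did not timestamp the packet */
	__u64 hw = skb->hwtstamp;

	if (hw)
		bpf_printk("hw timestamp: %llu", hw);
	return 0; /* TC_ACT_OK */
}

char LICENSE[] SEC("license") = "GPL";
```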
-
Daniel Borkmann authored
Magnus Karlsson says: ==================== This patch set facilitates adding new tests as well as describing existing ones in the xsk selftests suite, and adds 3 new test suites at the end. The idea is to isolate the run-time that executes the test from the actual implementation of the test. Today, implementing a test amounts to adding test specific if-statements all around the run-time, which is not scalable or amenable to reuse. This patch set instead introduces a test specification that is the only thing a test fills in. The run-time then gets this specification and acts upon it completely unaware of what test it is executing. This way, we can get rid of all test specific if-statements from the run-time and the implementation of a test can be contained in a single function. This hopefully makes it easier to add tests and for users to understand what the test accomplishes.

As a recap of what the run-time does: each test is based on the run-time launching two threads and connecting a veth link between the two threads. Each thread opens an AF_XDP socket on that veth interface and one of them sends traffic that the other one receives and validates. Each thread has its own umem. Note that this behavior is not changed by this patch set.

A test specification consists of several items. Most importantly:

* Two packet streams. One for the Tx thread that specifies what traffic to send and one for the Rx thread that specifies what that thread should receive. If it receives exactly what is specified, the test passes, otherwise it fails. A packet stream can also specify what buffers in the umem should be used by the Rx and Tx threads.
* What kind of AF_XDP sockets it should create and bind to what interfaces
* How many times it should repeat the socket creation and destruction
* The name of the test

The interface for the test spec is the following:

  void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
                      struct ifobject *ifobj_rx, enum test_mode mode);
  /* Reset everything but the interface specifications and the mode */
  void test_spec_reset(struct test_spec *test);
  void test_spec_set_name(struct test_spec *test, const char *name);

Packet streams have the following interfaces:

  struct pkt *pkt_stream_get_pkt(struct pkt_stream *pkt_stream, u32 pkt_nb)
  struct pkt *pkt_stream_get_next_rx_pkt(struct pkt_stream *pkt_stream)
  struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts,
                                         u32 pkt_len);
  void pkt_stream_delete(struct pkt_stream *pkt_stream);
  struct pkt_stream *pkt_stream_clone(struct xsk_umem_info *umem,
                                      struct pkt_stream *pkt_stream);
  /* Replaces all packets in the stream */
  void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len);
  /* Replaces every other packet in the stream */
  void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, u32 offset);
  /* For creating custom made packet streams */
  void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts,
                                  u32 nb_pkts);
  /* Restores the default packet stream */
  void pkt_stream_restore_default(struct test_spec *test);

A test can then, in the most basic case, be described like this (provided the test specification has been created before calling the function):

  static bool testapp_aligned(struct test_spec *test)
  {
          test_spec_set_name(test, "RUN_TO_COMPLETION");
          testapp_validate_traffic(test);
          return true;
  }

Running the same test in unaligned mode would then look like this:

  static bool testapp_unaligned(struct test_spec *test)
  {
          if (!hugepages_present(test->ifobj_tx)) {
                  ksft_test_result_skip("No 2M huge pages present.\n");
                  return false;
          }
          test_spec_set_name(test, "UNALIGNED_MODE");
          test->ifobj_tx->umem->unaligned_mode = true;
          test->ifobj_rx->umem->unaligned_mode = true;
          /* Let half of the packets straddle a buffer boundary */
          pkt_stream_replace_half(test, PKT_SIZE, XSK_UMEM__DEFAULT_FRAME_SIZE - 32);
          /* Populate fill ring with addresses in the packet stream */
          test->ifobj_rx->pkt_stream->use_addr_for_fill = true;
          testapp_validate_traffic(test);
          pkt_stream_restore_default(test);
          return true;
  }

3 of the last 4 patches in the set add 3 new test suites: one for unaligned mode, one for testing the rejection of tricky invalid descriptors plus the acceptance of some valid ones in the Tx ring, and one for testing 2K frame sizes (the default is 4K).

What is left to do for follow-up patches:

* Convert the statistics tests to the new framework.
* Implement a way of registering new tests without having the enum test_type. Once this has been done (together with the previous bullet), all the test types can be dropped from the header file. This means that we should be able to add tests by just writing a single function with a new test specification, which is one of the goals.
* Introduce functions for manipulating parts of the test or interface spec instead of direct manipulations such as test->ifobj_rx->pkt_stream->use_addr_for_fill = true; which is kind of awkward.
* Move the run-time and its interface to its own .c and .h files. Then we can have all the tests in a separate file.
* Better error reporting if a test fails. Today it does not state which test fails and might not continue executing the rest of the tests due to this failure. Failures are not propagated upwards through the functions, so a failed test will also be counted as a passed test, which messes up the stats counting. This needs to be changed.
* Add an option to run a specific test instead of all of them.
* Introduce pacing of sent packets so that they are never dropped by the receiver even if it is stalled for some reason. If you run the current tests on a heavily loaded system, they might fail in SKB mode due to packets being dropped by the driver on Tx. Though I have never seen it, it might happen.

v1 -> v2:
* Fixed a number of spelling errors [Maciej]
* Fixed use after free bug in pkt_stream_replace() [Maciej]
* pkt_stream_set -> pkt_stream_generate_custom [Maciej]
* Fixed formatting problem in testapp_invalid_desc() [Maciej]
==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Magnus Karlsson authored
Add tests for 2K frame size. Both a standard send and receive test and one testing for invalid descriptors when the frame size is 2K. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-21-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Add tests for invalid xsk descriptors in the Tx ring. A number of handcrafted nasty invalid descriptors are created and submitted to the tx ring to check that they are validated correctly. Corner case valid ones are also sent. The tests are run for both aligned and unaligned mode. pkt_stream_set() is introduced to be able to create a hand-crafted packet stream where every single packet is specified in detail. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-20-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Eliminate a test specific if-statement for the RX_FILL_EMPTY stats test that is present in the test runner. We can do this as we now have the use_addr_for_fill option. Just create an empty Rx packet stream and indicate that the test runner should use the addresses in it to populate the fill ring. As there are no packets in the stream, the fill ring will be empty and we will get the error stats that we want to test. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-19-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Add a test for unaligned mode in which packet buffers can be placed anywhere within the umem. Some packets are made to straddle page boundaries in order to check for correctness. On the Tx side, buffers are now allocated according to the addresses found in the packet stream. Thus, the placement of buffers can be controlled with the boolean use_addr_for_fill in the packet stream. One new pkt_stream interface is introduced: pkt_stream_replace_half() that replaces every other packet in the default packet stream with the specified new packet. The constant DEFAULT_OFFSET is also introduced. It specifies at what offset from the start of a chunk a Tx packet is placed by the sending thread. This is just to be able to test that it is possible to send packets at an offset not equal to zero. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-18-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Introduce the concept of a default packet stream that is the set of packets sent by most tests. Then add the ability to replace it, for a test that would like to send or receive something else, through the use of the function pkt_stream_replace(), and then restore it with pkt_stream_restore_default(). These are then used to convert the STAT_TEST_TX_INVALID test to these new APIs. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-17-magnus.karlsson@gmail.com
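As an illustration of the intended replace/restore pattern, a hedged sketch built from the interfaces listed in the cover letter above; the constants and testapp_validate_traffic() are assumed for the example, not quoted from the patch.

```c
static bool testapp_stats_tx_invalid(struct test_spec *test)
{
	test_spec_set_name(test, "STAT_TEST_TX_INVALID");

	/* temporarily swap in a stream of over-sized (invalid) packets */
	pkt_stream_replace(test, DEFAULT_PKT_CNT, XSK_UMEM__INVALID_FRAME_SIZE);
	testapp_validate_traffic(test);

	/* put the default stream back so later tests see the usual traffic */
	pkt_stream_restore_default(test);
	return true;
}
```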
-
Magnus Karlsson authored
Allow for invalid packets to be sent. These are verified by the Rx thread not to be received. Or put another way, if they are received, the test will fail. This feature will be used to eliminate an if-statement for a stats test and will also be used by other tests in later patches. The previous code could only deal with valid packets. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-16-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Remove the MAX_SOCKS define as it will always be one for the foreseeable future and the code does not work for any other case anyway. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-15-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Make the pthread_t variables local scope instead of global. No reason for them to be global. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-14-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Make xdp_flags and bind_flags local instead of global by moving them into the interface object. These flags decide if the socket should be created in SKB mode or in DRV mode and therefore they are sticky and will survive a test_spec_reset. Since every test is first run in SKB mode and then in DRV mode, this change only happens once. With this change, the configured_mode global variable can also be eradicated. The first test_spec_init() also becomes superfluous and can be eliminated. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-13-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Add the ability in the test specification to specify the number of sockets to create. The default is one socket. This is then used to remove test specific if-statements around the bpf_res tests. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-12-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Replace the second_step global variable with a test specification variable called total_steps that a test can set to indicate how many times the packet stream should be sent without reinitializing any sockets. This eliminates test specific code in the test runner around the bidirectional test. The total_steps variable is 1 by default as most tests only need a single round of packets. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-11-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Introduce rx_on and tx_on in the ifobject so that we can describe if the thread should create a socket with only tx, rx, or both. This eliminates some test specific if statements from the code. We can also eliminate the flow vector structure now as this is fully specified by the tx_on and rx_on variables. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-10-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Add a use_poll option to the ifobject so that we do not need to use a test specific if-statement in the test runner. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-9-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Introduce the test name in the test specification. This is so we can set the name locally in the test function and simplify the logic for printing out test results. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-8-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Make the frame size configurable instead of it being hard coded to a default. This is a property of the umem and will make it possible to implement tests for different umem frame sizes in a later patch. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-7-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Move the global variable rxqsize to struct xsk_socket_info as it describes the size of a ring in that struct. By default, it is set to the size dictated by libbpf. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-6-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Move the global variables num_frames and frame_headroom to struct xsk_umem_info. They describe properties of the umem so no reason for them to be global. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-5-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Introduce a test specification to be able to concisely describe a test. Currently, a test is implemented by sprinkling test specific if statements here and there, which is not scalable or easy to understand. The end goal with this patch set is to come to the point at which a test is completely specified by a test specification that can easily be constructed in a single function, so that new tests can be added without too much trouble. This test specification will be run by a test runner that has no idea about tests. It just executes what the test specification states. This patch introduces the test specification and, as a start, puts the two interface objects in there, one containing the packet stream to be sent and the other one the packet stream that is supposed to be received for a test to pass. The global variables containing these can then be eliminated. The following patches will convert each existing test into a test specification, add the needed fields into it, and add the functionality in the test runner that acts on the test specification. At the end, the test runner should contain no test specific code and each test should be described in a single simple function. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-4-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Introduce a typedef of the thread function so this can be passed to init_iface() in order to simplify that function. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-3-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Simplify the xsk_info and umem_info allocation by allocating them upfront in an array, instead of allocating an array of pointers to future creations of these. Allocating them upfront also has the advantage that configuration information can be stored in these structures instead of relying on global variables. With the previous structure, xsk_info and umem_info were created too late to be able to store most configuration information. This will be used to eliminate most global variables in later patches in this series. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210907071928.9750-2-magnus.karlsson@gmail.com
-
- 09 Sep, 2021 1 commit
-
-
Quentin Monnet authored
Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to prepare the deprecation of two API functions. This macro marks functions as deprecated when libbpf's version reaches the values passed as arguments. As part of this change, the libbpf_version.h header is added with recorded major (LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf version macros. They are now part of the libbpf public API and can be relied upon by user code. libbpf_version.h is installed system-wide along with the other libbpf public headers. Due to this new build-time auto-generated header, in-kernel applications relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to include libbpf's output directory as part of their include search paths. A better fix would be to use libbpf's make_install target to install the public API headers, but that cleanup is left as a future improvement. The build changes were tested by building the kernel (with KBUILD_OUTPUT and O= specified explicitly), bpftool, libbpf, selftests/bpf, and resolve_btfids. No problems were detected. Note that because of the constraints of the C preprocessor we have to write a few lines of macro magic for each version used to prepare deprecation (0.6 for now). Also, use LIBBPF_DEPRECATED_SINCE() to schedule deprecation of btf__get_from_id() and btf__load(), which are replaced by btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively, starting from the future libbpf v0.6. This is part of the libbpf 1.0 effort ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/278 Co-developed-by: Quentin Monnet <quentin@isovalent.com> Co-developed-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210908213226.1871016-1-andrii@kernel.org
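A hedged sketch, not libbpf's actual declarations, of how a library header would mark a function for deprecation at v0.6 and how user code can key off the new version macros; my_old_api()/my_new_api() are placeholder names, and the example assumes libbpf.h transitively pulls in the version and deprecation macros.

```c
#include <bpf/libbpf.h>	/* pulls in LIBBPF_MAJOR/MINOR_VERSION and LIBBPF_DEPRECATED_SINCE() */

/* in a public header: emits a deprecation warning once libbpf reaches v0.6 */
LIBBPF_DEPRECATED_SINCE(0, 6, "use my_new_api() instead")
int my_old_api(int fd);

/* in user code: adapt to the libbpf version being compiled against */
#if LIBBPF_MAJOR_VERSION == 0 && LIBBPF_MINOR_VERSION < 6
/* fall back to my_old_api() here */
#endif
```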
-
- 08 Sep, 2021 6 commits
-
-
Andrii Nakryiko authored
After updating to binutils 2.35, the build began to fail with an assembler error. A bug was opened on the Red Hat Bugzilla a few days later for the same issue. Work around the problem by using the new `symver` attribute (introduced in GCC 10) as needed instead of assembler directives. This addresses Red Hat ([0]) and OpenSUSE ([1]) bug reports, as well as libbpf issue ([2]). [0]: https://bugzilla.redhat.com/show_bug.cgi?id=1863059 [1]: https://bugzilla.opensuse.org/show_bug.cgi?id=1188749 [2]: Closes: https://github.com/libbpf/libbpf/issues/338 Co-developed-by: Patrick McCarty <patrick.mccarty@intel.com> Co-developed-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Patrick McCarty <patrick.mccarty@intel.com> Signed-off-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210907221023.2660953-1-andrii@kernel.org
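A hedged sketch of the mechanism change, not libbpf's actual versioning macros: both branches bind my_func_v1 to version MYLIB_0.1 of the public symbol my_func. The function and version-node names are illustrative, and a real build also needs a matching linker version script.

```c
#if defined(__GNUC__) && __GNUC__ >= 10 && !defined(__clang__)
/* GCC 10+: the symver attribute, which keeps working with LTO and newer binutils */
__attribute__((symver("my_func@@MYLIB_0.1")))
int my_func_v1(int x)
{
	return x * 2;
}
#else
/* older toolchains: fall back to the traditional assembler directive */
int my_func_v1(int x)
{
	return x * 2;
}
asm(".symver my_func_v1, my_func@@MYLIB_0.1");
#endif
```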
-
Andrii Nakryiko authored
Matt Smith says: ==================== This patch series changes the type of bpf_object_skeleton->data to const void * and provides a helper method X__elf_bytes(size_t *sz) for accessing the raw binary data of the compiled, embedded BPF object. The type change enforces the previously implied immutability of this field, while casting it to (void *) before assignment allows compiling with previous versions of the libbpf headers without compiler warnings. The helper method allows easier access to the BPF binary object data and is leveraged to populate the skeleton field. The inclusion of this helper method will allow users to get access to the data without needing to populate an entire skeleton first. Checks are added in the third patch to validate the behavior of the added method. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Matt Smith authored
This patch adds two checks for the X__elf_bytes BPF skeleton helper method. The first asserts that the pointer returned from the helper method is valid, and the second asserts that the provided size pointer is set. Signed-off-by: Matt Smith <alastorze@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210901194439.3853238-4-alastorze@fb.com
-
Matt Smith authored
This adds a skeleton method X__elf_bytes() which returns the binary data of the compiled and embedded BPF object file. It additionally stores the size of the returned data in the provided size_t pointer argument. The assignment to s->data is cast to void * to ensure no warning is issued if compiled with a previous version of libbpf where the bpf_object_skeleton field is void * instead of const void *. Signed-off-by: Matt Smith <alastorze@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210901194439.3853238-3-alastorze@fb.com
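A hedged sketch of using the generated helper from application code; the skeleton header and function names depend on the name of the object file passed to `bpftool gen skeleton` and are illustrative here.

```c
#include <stdio.h>
#include "my_prog.skel.h"	/* hypothetical generated skeleton header */

int main(void)
{
	size_t sz = 0;
	const void *elf = my_prog__elf_bytes(&sz);

	/* e.g. write the embedded object out, or hand it to another loader */
	printf("embedded BPF object is %zu bytes at %p\n", sz, elf);
	return 0;
}
```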
-
Matt Smith authored
This change was necessary to enforce the implied contract that bpf_object_skeleton->data should not be mutated. The data is cast to `void *` during assignment to avoid a compiler warning about `const void *` data being assigned to a plain `void *` field when a user compiles against older libbpf headers. Signed-off-by: Matt Smith <alastorze@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210901194439.3853238-2-alastorze@fb.com
-
Toke Høiland-Jørgensen authored
If libbpf encounters an ELF file that has been stripped of its symbol table, it will crash in bpf_object__add_programs() when trying to dereference the obj->efile.symbols pointer. Fix this by erroring out of bpf_object__elf_collect() if it is not able to find the symbol table. v2: - Move check into bpf_object__elf_collect() and add nice error message Fixes: 6245947c ("libbpf: Allow gaps in BPF program sections to support overriden weak functions") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210901114812.204720-1-toke@redhat.com
-