- 26 Feb, 2021 40 commits
-
-
Ian Denhardt authored
Per discussion at [0], this was originally introduced as a warning due to concerns about breaking existing code, but a hard error probably makes more sense, especially given that concerns about breakage were only speculation.

[0] https://lore.kernel.org/bpf/c964892195a6b91d20a67691448567ef528ffa6d.camel@linux.ibm.com/T/#t

Signed-off-by: Ian Denhardt <ian@zenhack.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/bpf/a6b6c7516f5d559049d669968e953b4a8d7adea3.1614201868.git.ian@zenhack.net
-
Alexei Starovoitov authored
Yonghong Song says:

====================

This patch set introduced the bpf_for_each_map_elem() helper. The helper permits a bpf program to iterate through all elements of a particular map. The work was originally inspired by an internal discussion where firewall rules are kept in a map and a bpf prog wants to check packet 5-tuples against all rules in the map. A bounded loop can be used, but it has a few drawbacks. As the number of loop iterations goes up, verification time goes up too. For really large maps, verification may fail. A helper which abstracts out the loop itself will not have the verification time issue. A recent discussion in [1] involves iterating over all hash map elements in a bpf program. Currently, iterating over all hashmap elements in a bpf program is not easy if the key space is really big. Having a helper to abstract out the loop itself is even more meaningful there.

The proposed helper signature looks like:
  long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)
where callback_fn is a static function and callback_ctx is a piece of data allocated on the caller stack which can be accessed by the callback_fn. The callback_fn signature might be different for different maps. For example, for hash/array maps, the signature is
  long callback_fn(map, key, val, callback_ctx)

In the rest of the series, Patches 1/2/3/4 did some refactoring. Patch 5 implemented core kernel support for the helper. Patches 6 and 7 added hashmap and arraymap support. Patches 8/9 added libbpf support. Patch 10 added bpftool support. Patches 11 and 12 added selftests for hashmap and arraymap.

[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/

Changelogs:
v4 -> v5:
  - rebase on top of bpf-next.
v3 -> v4:
  - better refactoring of check_func_call(), calculate subprogno outside of __check_func_call() helper. (Andrii)
  - better documentation (like the list of supported maps and their callback signatures) in uapi header. (Andrii)
  - implement and use ASSERT_LT in selftests. (Andrii)
  - a few other minor changes.
v2 -> v3:
  - add comments in retrieve_ptr_limit(), which is in sanitize_ptr_alu(), to clarify the code is not executed for PTR_TO_MAP_KEY handling, but code is manually tested. (Alexei)
  - require BTF for callback function. (Alexei)
  - simplify hashmap/arraymap callback return handling as return value [0, 1] has been enforced by the verifier. (Alexei)
  - also mark global subprog (if used in ld_imm64) as RELO_SUBPROG_ADDR. (Andrii)
  - handle the condition to mark RELO_SUBPROG_ADDR properly. (Andrii)
  - make bpftool subprog insn offset dumping consistent with pcrel calls. (Andrii)
v1 -> v2:
  - setup callee frame in check_helper_call() and then proceed to verify helper return value as normal. (Alexei)
  - use meta data to keep track of map/func pointer to avoid hard coding the register number. (Alexei)
  - verify callback_fn return value range [0, 1]. (Alexei)
  - add migrate_{disable, enable} to ensure the percpu value is the one the bpf program expects to see. (Alexei)
  - change bpf_for_each_map_elem() return value to the number of iterated elements. (Andrii)
  - change libbpf pseudo_func relo name to RELO_SUBPROG_ADDR and use more rigid checking for the relocation. (Andrii)
  - better format to print out subprog address with bpftool. (Andrii)
  - use bpf_prog_test_run to trigger bpf run, instead of bpf_iter. (Andrii)
  - other misc changes.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
A test is added for arraymap and percpu arraymap. The test also exercises the early return for the helper which does not traverse all elements.

  $ ./test_progs -n 45
  #45/1 hash_map:OK
  #45/2 array_map:OK
  #45 for_each:OK
  Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204934.3885756-1-yhs@fb.com
-
Yonghong Song authored
A test case is added for hashmap and percpu hashmap. The test also exercises nested bpf_for_each_map_elem() calls like

  bpf_prog:
    bpf_for_each_map_elem(func1)
  func1:
    bpf_for_each_map_elem(func2)
  func2:

  $ ./test_progs -n 45
  #45/1 hash_map:OK
  #45 for_each:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204933.3885657-1-yhs@fb.com
-
Yonghong Song authored
With the later hashmap example, the bpftool xlated output may look like:

  int dump_task(struct bpf_iter__task * ctx):
  ; struct task_struct *task = ctx->task;
     0: (79) r2 = *(u64 *)(r1 +8)
  ; if (task == (void *)0 || called > 0)
  ...
    19: (18) r2 = subprog[+17]
    30: (18) r2 = subprog[+25]
  ...
    36: (95) exit
  __u64 check_hash_elem(struct bpf_map * map, __u32 * key, __u64 * val, struct callback_ctx * data):
  ; struct bpf_iter__task *ctx = data->ctx;
    37: (79) r5 = *(u64 *)(r4 +0)
  ...
    55: (95) exit
  __u64 check_percpu_elem(struct bpf_map * map, __u32 * key, __u64 * val, void * unused):
  ; check_percpu_elem(struct bpf_map *map, __u32 *key, __u64 *val, void *unused)
    56: (bf) r6 = r3
  ...
    83: (18) r2 = subprog[-47]

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204931.3885458-1-yhs@fb.com
-
Yonghong Song authored
A new relocation RELO_SUBPROG_ADDR is added to capture subprog addresses loaded with ld_imm64 insns. Such ld_imm64 insns are marked with BPF_PSEUDO_FUNC and will be passed to the kernel. For the bpf_for_each_map_elem() case, the kernel will check that the to-be-used subprog address corresponds to a static function and replace it with the proper actual jited func address. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204930.3885367-1-yhs@fb.com
-
Yonghong Song authored
Move function is_ldimm64() close to the beginning of libbpf.c, so it can be reused by later code and the next patch as well. There is no functionality change. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204929.3885295-1-yhs@fb.com
-
Yonghong Song authored
This patch added support for arraymap and percpu arraymap. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204928.3885192-1-yhs@fb.com
-
Yonghong Song authored
This patch added support for hashmap, percpu hashmap, lru hashmap and percpu lru hashmap. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204927.3885020-1-yhs@fb.com
-
Yonghong Song authored
The bpf_for_each_map_elem() helper is introduced which iterates over all map elements with a callback function. The helper signature looks like

  long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)

and for each map element, the callback_fn will be called. For example, for a hashmap, the callback signature may look like

  long callback_fn(map, key, val, callback_ctx)

There are two known use cases for this. One is from upstream ([1]) where a for_each_map_elem helper may help implement a timeout mechanism in a more generic way. Another is from our internal discussion of a firewall use case where a map contains all the rules. The packet data can be compared to all these rules to decide whether to allow or deny the packet. For array maps, users can already use a bounded loop to traverse elements. Using this helper avoids the bounded loop. For other types of maps (e.g., hash maps) where a bounded loop is hard or impossible to use, this helper provides a convenient way to operate on all elements. For callback_fn, besides the map and the map element, a callback_ctx, allocated on the caller stack, is also passed to the callback function. This callback_ctx argument can provide additional input and allows writing to the caller stack for output. If the callback_fn returns 0, the helper will iterate through the next element if available. If the callback_fn returns 1, the helper will stop iterating and return to the bpf program. Other return values are not used for now. Currently, this helper is only available with jit. It is possible to make it work with the interpreter with some effort, but I leave it as future work.

[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204925.3884923-1-yhs@fb.com
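A minimal sketch of how a bpf program might use the new helper, loosely modeled on the selftests added later in this series (the map, callback, and program names here are illustrative, not the exact selftest sources):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_HASH);
  	__uint(max_entries, 64);
  	__type(key, __u32);
  	__type(value, __u64);
  } hashmap SEC(".maps");

  struct callback_ctx {
  	__u64 sum;	/* output written back to the caller's stack */
  };

  /* Called once per element; returning 0 continues, 1 stops iterating early. */
  static __u64 sum_elem(struct bpf_map *map, __u32 *key, __u64 *val,
  		      struct callback_ctx *data)
  {
  	data->sum += *val;
  	return 0;
  }

  SEC("tc")
  int sum_map(struct __sk_buff *skb)
  {
  	struct callback_ctx data = {};

  	/* Returns the number of elements iterated, or a negative error. */
  	bpf_for_each_map_elem(&hashmap, sum_elem, &data, 0);
  	return data.sum > 0;
  }

  char _license[] SEC("license") = "GPL";

Per the notes above, the callback must be a static function with BTF available, and the program must be jited for the helper to be usable.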
-
Yonghong Song authored
Currently, verifier function add_subprog() returns 0 for success and negative value for failure. Change the return value to be the subprog number for success. This functionality will be used in the next patch to save a call to find_subprog(). Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204924.3884848-1-yhs@fb.com
-
Yonghong Song authored
The later proposed bpf_for_each_map_elem() helper has a callback function as one of its arguments. This patch refactored check_func_call() to permit a callback function which sets callee state. Different callback functions may have different callee states. There is no functionality change in this patch. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204923.3884627-1-yhs@fb.com
-
Yonghong Song authored
Factor out the function verbose_invalid_scalar() to verbose print if a scalar is not in a tnum range. There is no functionality change, and the function will be used by a later patch which introduces bpf_for_each_map_elem(). Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204922.3884375-1-yhs@fb.com
-
Yonghong Song authored
During verifier check_cfg(), all instructions are visited to ensure the verifier can handle program control flows. This patch factored out the function visit_func_call_insn() so it can be reused in a later patch to visit callback function calls. There is no functionality change in this patch. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210226204920.3884136-1-yhs@fb.com
-
Ilya Leoshkevich authored
Building selftests in a separate directory like this:

  make O="$BUILD" -C tools/testing/selftests/bpf

and then running:

  cd "$BUILD" && ./test_progs -t btf

causes all the non-flavored btf_dump_test_case_*.c tests to fail, because these files are not copied to where test_progs expects to find them. Fix by not skipping EXT-COPY when the original $(OUTPUT) is not empty (lib.mk sets it to $(shell pwd) in that case) and using rsync instead of cp: cp fails because e.g. urandom_read is being copied into itself, and rsync simply skips such cases. rsync is already used by kselftests and therefore is not a new dependency. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210224111445.102342-1-iii@linux.ibm.com
-
KP Singh authored
When vmtest.sh ran a command in a VM, it did not record or propagate the error code of the command. This made the script less "script-able". The script now saves the error code of the said command in a file in the VM, copies the file back to the host and (when available) uses this error code instead of its own. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210225161947.1778590-1-kpsingh@kernel.org
-
Alexei Starovoitov authored
Cong Wang says:

====================

From: Cong Wang <cong.wang@bytedance.com>

This patchset is the first series of patches separated out from the original large patchset, to make reviews easier. This patchset does not add any new feature or change any functionality but merely cleans up the existing sockmap and skmsg code and refactors it, to prepare for the follow-up patches. This passed all BPF selftests.

To see the big picture, the original whole patchset is available on github: https://github.com/congwang/linux/tree/sockmap and this patchset is also available on github: https://github.com/congwang/linux/tree/sockmap1

---
v7: add 1 trivial cleanup patch
    define a mask for sk_redir
    fix CONFIG_BPF_SYSCALL in include/net/udp.h
    make sk_psock_done_strp() static
    move skb_bpf_redirect_clear() to sk_psock_backlog()
v6: fix !CONFIG_INET case
v5: improve CONFIG_BPF_SYSCALL dependency
    add 3 trivial cleanup patches
v4: reuse skb dst instead of skb ext
    fix another Kconfig error
v3: fix a few Kconfig compile errors
    remove an unused variable
    add a comment for bpf_convert_data_end_access()
v2: split the original patchset
    compute data_end with bpf_convert_data_end_access()
    get rid of psock->bpf_running
    reduce the scope of CONFIG_BPF_STREAM_PARSER
    do not add CONFIG_BPF_SOCK_MAP
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Cong Wang authored
It is not defined or used anywhere. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-10-xiyou.wangcong@gmail.com
-
Cong Wang authored
It is now nearly identical to bpf_prog_run_pin_on_cpu() and it has an unused parameter 'psock', so we can just get rid of it and call bpf_prog_run_pin_on_cpu() directly. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-9-xiyou.wangcong@gmail.com
-
Cong Wang authored
It is only used within skmsg.c so can become static. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-8-xiyou.wangcong@gmail.com
-
Cong Wang authored
It is only used within sock_map.c so can become static. Suggested-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-7-xiyou.wangcong@gmail.com
-
Cong Wang authored
These two eBPF programs are tied to BPF_SK_SKB_STREAM_PARSER and BPF_SK_SKB_STREAM_VERDICT, rename them to reflect the fact they are only used for TCP. And save the name 'skb_verdict' for general use later. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-6-xiyou.wangcong@gmail.com
-
Cong Wang authored
Currently TCP_SKB_CB() is hard-coded in skmsg code; it certainly does not work for any other non-TCP protocols. We can move them to skb ext, but it introduces a memory allocation on the fast path. Fortunately, we only need a word-size field to store all the information, because the flags actually only contain 1 bit, so they can be packed into the lowest bit of the "pointer", which is stored as an unsigned long. Inside struct sk_buff, '_skb_refdst' can be reused because skb dst is no longer needed after ->sk_data_ready(), so we can just drop it. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-5-xiyou.wangcong@gmail.com
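A standalone, user-space illustration of the pointer-packing idea (generic sketch of the technique, not the actual skmsg helpers introduced by this patch):

  #include <assert.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct sock_stub { int dummy; };	/* stand-in for the real socket pointer */

  #define FLAG_INGRESS	0x1UL		/* the single flag bit that needs storing */

  /* The pointer is at least word-aligned, so bit 0 is always free for the flag. */
  static unsigned long pack(struct sock_stub *sk, bool ingress)
  {
  	return (unsigned long)sk | (ingress ? FLAG_INGRESS : 0);
  }

  static struct sock_stub *unpack_ptr(unsigned long v)
  {
  	return (struct sock_stub *)(v & ~FLAG_INGRESS);
  }

  static bool unpack_ingress(unsigned long v)
  {
  	return v & FLAG_INGRESS;
  }

  int main(void)
  {
  	struct sock_stub sk;
  	unsigned long slot = pack(&sk, true);	/* e.g. kept in an unsigned long like skb->_skb_refdst */

  	assert(unpack_ptr(slot) == &sk);
  	assert(unpack_ingress(slot));
  	printf("pointer and flag recovered from one unsigned long\n");
  	return 0;
  }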
-
Cong Wang authored
Currently, we compute ->data_end with a compile-time constant offset of skb. But as Jakub pointed out, we can actually compute it in eBPF JIT code at run-time, so that we can completely get rid of ->data_end. This is similar to the skb_shinfo(skb) computation in bpf_convert_shinfo_access(). Suggested-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-4-xiyou.wangcong@gmail.com
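A rough sketch of how such a ctx-access rewrite can be expressed with the insn macros from include/linux/filter.h, under the assumption that data_end = skb->data + skb->len - skb->data_len (the end of the linear data); this is kernel-context code and not necessarily the exact instruction sequence in this patch:

  #include <linux/filter.h>
  #include <linux/skbuff.h>

  static struct bpf_insn *convert_data_end_access(const struct bpf_insn *si,
  						struct bpf_insn *insn)
  {
  	/* dst_reg = skb->data */
  	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, data),
  			      si->dst_reg, si->src_reg,
  			      offsetof(struct sk_buff, data));
  	/* AX = skb->len; dst_reg += AX */
  	*insn++ = BPF_LDX_MEM(BPF_W, BPF_REG_AX, si->src_reg,
  			      offsetof(struct sk_buff, len));
  	*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_AX);
  	/* AX = skb->data_len; dst_reg -= AX, i.e. dst_reg = data + headlen */
  	*insn++ = BPF_LDX_MEM(BPF_W, BPF_REG_AX, si->src_reg,
  			      offsetof(struct sk_buff, data_len));
  	*insn++ = BPF_ALU64_REG(BPF_SUB, si->dst_reg, BPF_REG_AX);

  	return insn;
  }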
-
Cong Wang authored
struct sk_psock_parser is embedded in sk_psock; it is unnecessary, as skb verdict also uses ->saved_data_ready. We can simply fold these fields into sk_psock and get rid of ->enabled. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-3-xiyou.wangcong@gmail.com
-
Cong Wang authored
As suggested by John, clean up sockmap related Kconfigs: Reduce the scope of CONFIG_BPF_STREAM_PARSER down to TCP stream parser, to reflect its name. Make the rest sockmap code simply depend on CONFIG_BPF_SYSCALL and CONFIG_INET, the latter is still needed at this point because of TCP/UDP proto update. And leave CONFIG_NET_SOCK_MSG untouched, as it is used by non-sockmap cases. Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20210223184934.6054-2-xiyou.wangcong@gmail.com
-
Hangbin Liu authored
Commit 34b2021c ("bpf: Add BPF-helper for MTU checking") added an extra blank line in a bpf helper description. This will make bpf_helpers_doc.py stop building bpf_helper_defs.h immediately after bpf_check_mtu(), which will affect functions added after it. Fixes: 34b2021c ("bpf: Add BPF-helper for MTU checking") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20210223131457.1378978-1-liuhangbin@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Ciara Loftus says:

====================

This series attempts to improve the xsk selftest framework by:
1. making the default output less verbose
2. adding an optional verbose flag to both the test_xsk.sh script and the xdpxceiver app.
3. renaming the debug option in the app to 'dump-pkts' and adding a flag to the test_xsk.sh script which enables the flag in the app.
4. changing how tests are launched - now they are launched from the xdpxceiver app instead of the script.

Once the improvements are made, a new set of tests is added which test the xsk statistics.

The output of the test script now looks like:

  ./test_xsk.sh
  PREREQUISITES: [ PASS ]
  1..10
  ok 1 PASS: SKB NOPOLL
  ok 2 PASS: SKB POLL
  ok 3 PASS: SKB NOPOLL Socket Teardown
  ok 4 PASS: SKB NOPOLL Bi-directional Sockets
  ok 5 PASS: SKB NOPOLL Stats
  ok 6 PASS: DRV NOPOLL
  ok 7 PASS: DRV POLL
  ok 8 PASS: DRV NOPOLL Socket Teardown
  ok 9 PASS: DRV NOPOLL Bi-directional Sockets
  ok 10 PASS: DRV NOPOLL Stats
  # Totals: pass:10 fail:0 xfail:0 xpass:0 skip:0 error:0
  XSK KSELFTESTS: [ PASS ]

v2->v3:
  * Rename dump-pkts to dump_pkts in test_xsk.sh
  * Add examples of flag usage to test_xsk.sh

v1->v2:
  * Changed '-d' flag in the shell script to '-D' to be consistent with the xdpxceiver app.
  * Renamed debug mode to 'dump-pkts' which better reflects the behaviour.
  * Use libbpf APIs instead of calls to ss for configuring xdp on the links
  * Remove mutex init & destroy for each stats test
  * Added a description for each of the new statistics tests
  * Distinguish between exiting due to initialisation failure vs test failure

This series applies on commit d310ec03

Ciara Loftus (3):
  selftests/bpf: expose and rename debug argument
  selftests/bpf: restructure xsk selftests
  selftests/bpf: introduce xsk statistics tests

====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Ciara Loftus authored
This commit introduces a range of tests to the xsk testsuite for validating xsk statistics. A new test type called 'stats' is added. Within it there are four sub-tests. Each test configures a scenario which should trigger the given error statistic. The test passes if the statistic is successfully incremented. The four statistics for which tests have been created are:

1. rx dropped: increase the UMEM frame headroom to a value which results in insufficient space in the rx buffer for both the packet and the headroom.
2. tx invalid: set the 'len' field of tx descriptors to an invalid value (umem frame size + 1).
3. rx ring full: reduce the size of the RX ring to a fraction of the fill ring size.
4. fill queue empty: do not populate the fill queue and then try to receive pkts.

A sketch of reading these counters follows below.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210223162304.7450-5-ciara.loftus@intel.com
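For reference, these counters are the per-socket error statistics exposed via the XDP_STATISTICS socket option; a minimal sketch of reading them (field names as in the kernel's if_xdp.h around this series, error handling trimmed):

  #include <linux/if_xdp.h>
  #include <sys/socket.h>
  #include <stdio.h>

  #ifndef SOL_XDP
  #define SOL_XDP 283
  #endif

  /* Dump the error counters that these tests assert on for a given XSK fd. */
  static int dump_xsk_stats(int xsk_fd)
  {
  	struct xdp_statistics stats;
  	socklen_t optlen = sizeof(stats);

  	if (getsockopt(xsk_fd, SOL_XDP, XDP_STATISTICS, &stats, &optlen))
  		return -1;

  	printf("rx_dropped:               %llu\n", (unsigned long long)stats.rx_dropped);
  	printf("tx_invalid_descs:         %llu\n", (unsigned long long)stats.tx_invalid_descs);
  	printf("rx_ring_full:             %llu\n", (unsigned long long)stats.rx_ring_full);
  	printf("rx_fill_ring_empty_descs: %llu\n", (unsigned long long)stats.rx_fill_ring_empty_descs);
  	return 0;
  }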
-
Ciara Loftus authored
Prior to this commit individual xsk tests were launched from the shell script 'test_xsk.sh'. When adding a new test type, two new test configurations had to be added to this file - one for each of the supported XDP 'modes' (skb or drv). Should zero copy support be added to the xsk selftest framework in the future, three new test configurations would need to be added for each new test type. Each new test type also typically requires new CLI arguments for the xdpxceiver program. This commit aims to reduce the overhead of adding new tests, by launching the test configurations from within the xdpxceiver program itself, using simple loops. Every test is run every time the C program is executed. Many of the CLI arguments can be removed as a result. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210223162304.7450-4-ciara.loftus@intel.com
-
Ciara Loftus authored
Launching xdpxceiver with -D enables what was formerly known as 'debug' mode. Rename this mode to 'dump-pkts' as it better describes the behavior enabled by the option. New usage:

  ./xdpxceiver .. -D
or
  ./xdpxceiver .. --dump-pkts

Also make it possible to pass this flag to the app via the test_xsk.sh shell script like so:

  ./test_xsk.sh -D

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210223162304.7450-3-ciara.loftus@intel.com
-
Magnus Karlsson authored
Make the xsk tests less verbose by only printing the essentials. Currently, it is hard to see if the tests passed or not due to all the printouts. Move the extra printouts to a verbose option, if further debugging is needed when a problem arises. To run the xsk tests with verbose output: ./test_xsk.sh -v Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/bpf/20210223162304.7450-2-ciara.loftus@intel.com
-
Brendan Jackman authored
This function has become overloaded; it actually does lots of diverse things in a single pass. Rename it to avoid confusion, and add some concise commentary. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210217104509.2423183-1-jackmanb@google.com
-
Dmitrii Banshchikov authored
Instead of using an integer literal here and there, use the macro name for better context. Signed-off-by: Dmitrii Banshchikov <me@ubique.spb.ru> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210225202629.585485-1-me@ubique.spb.ru
-
Alexei Starovoitov authored
Song Liu says:

====================

This set enables task local storage for non-BPF_LSM programs. It is common for tracing BPF programs to access per-task data. Currently, these data are stored in hash tables with pid as the key. In bcc/libbpftools [1], 9 out of 23 tools use such hash tables. However, a hash table is not ideal for many use cases. Task local storage provides better usability and performance for BPF programs. Please refer to 6/6 for some performance comparison of task local storage vs. hash table.

Changes v5 => v6:
1. Add inc/dec bpf_task_storage_busy in bpf_local_storage_map_free().

Changes v4 => v5:
1. Fix build w/o CONFIG_NET. (kernel test robot)
2. Remove unnecessary check for !task_storage_ptr(). (Martin)
3. Small changes in commit logs.

Changes v3 => v4:
1. Prevent deadlock from recursive calls of bpf_task_storage_[get|delete]. (2/6 checks potential deadlock and fails over, 4/6 adds a selftest).

Changes v2 => v3:
1. Make the selftest more robust. (Andrii)
2. Small changes with runqslower. (Andrii)
3. Shorten CC list to make it easy for vger.

Changes v1 => v2:
1. Do not allocate task local storage when the task is being freed.
2. Revise the selftest and added a new test for a task being freed.
3. Minor changes in runqslower.
====================

Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Song Liu authored
Replace hashtab with task local storage in runqslower. This improves the performance of these BPF programs. The following table summarizes average runtime of these programs, in nanoseconds:

                              task-local   hash-prealloc   hash-no-prealloc
  handle__sched_wakeup             125           340             3124
  handle__sched_wakeup_new        2812          1510             2998
  handle__sched_switch             151           208              991

Note that, task local storage gives better performance than hashtab for handle__sched_wakeup and handle__sched_switch. On the other hand, for handle__sched_wakeup_new, task local storage is slower than hashtab with prealloc. This is because handle__sched_wakeup_new accesses the data for the first time, so it has to allocate the data for task local storage. Once the initial allocation is done, subsequent accesses, as those in handle__sched_wakeup, are much faster with task local storage. If we disable hashtab prealloc, task local storage is much faster for all 3 functions. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210225234319.336131-7-songliubraving@fb.com
-
Song Liu authored
Update the Makefile to prefer using $(O)/vmlinux, $(KBUILD_OUTPUT)/vmlinux (for selftests) or ../../../vmlinux. These files should have the latest definitions for vmlinux.h. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210225234319.336131-6-songliubraving@fb.com
-
Song Liu authored
Add a test with recursive bpf_task_storage_[get|delete] from fentry programs on bpf_local_storage_lookup and bpf_local_storage_update. Without a proper deadlock prevention mechanism, this test would cause a deadlock. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210225234319.336131-5-songliubraving@fb.com
-
Song Liu authored
Task local storage is enabled for tracing programs. Add two tests for task local storage without CONFIG_BPF_LSM. The first test stores a value in sys_enter and reads it back in sys_exit. The second test checks whether the kernel allows allocating task local storage in exit_creds() (which it should not). Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210225234319.336131-4-songliubraving@fb.com
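A minimal sketch of the pattern the first test exercises (store at sys_enter, read back at sys_exit); the section, map, and program names here are illustrative rather than the exact selftest sources:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
  	__uint(map_flags, BPF_F_NO_PREALLOC);
  	__type(key, int);
  	__type(value, long);
  } enter_id SEC(".maps");

  SEC("tp_btf/sys_enter")
  int BPF_PROG(on_enter, struct pt_regs *regs, long id)
  {
  	long *ptr;

  	/* Create this task's storage on first access and remember the syscall id. */
  	ptr = bpf_task_storage_get(&enter_id, bpf_get_current_task_btf(), 0,
  				   BPF_LOCAL_STORAGE_GET_F_CREATE);
  	if (ptr)
  		*ptr = id;
  	return 0;
  }

  SEC("tp_btf/sys_exit")
  int BPF_PROG(on_exit, struct pt_regs *regs, long ret)
  {
  	long *ptr;

  	/* Read back the value stored at sys_enter for the same task. */
  	ptr = bpf_task_storage_get(&enter_id, bpf_get_current_task_btf(), 0, 0);
  	if (ptr)
  		bpf_printk("syscall id seen at enter: %ld", *ptr);
  	return 0;
  }

  char _license[] SEC("license") = "GPL";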
-
Song Liu authored
BPF helpers bpf_task_storage_[get|delete] could hold two locks: bpf_local_storage_map_bucket->lock and bpf_local_storage->lock. Calling these helpers from fentry/fexit programs on functions in bpf_*_storage.c may cause deadlock on either lock. Prevent such a deadlock with a per-cpu counter, bpf_task_storage_busy. We need this counter to be global, because the two locks here belong to two different objects: bpf_local_storage_map and bpf_local_storage. If we pick one of them as the owner of the counter, it is still possible to trigger deadlock on the other lock. For example, if bpf_local_storage_map owns the counters, it cannot prevent deadlock on bpf_local_storage->lock when two maps are used. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210225234319.336131-3-songliubraving@fb.com
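A rough, simplified sketch of the fail-over idea (the real helpers do more work under the counter and also take the storage locks; the counter name follows the commit message, but the code below is illustrative, not the exact patch):

  static DEFINE_PER_CPU(int, bpf_task_storage_busy);

  /* Returns true if it is safe to proceed; false means a helper is already
   * active on this CPU (possible recursion), so fail over instead of risking
   * a deadlock on the storage locks.
   */
  static bool task_storage_trylock(void)
  {
  	migrate_disable();
  	if (unlikely(this_cpu_inc_return(bpf_task_storage_busy) != 1)) {
  		this_cpu_dec(bpf_task_storage_busy);
  		migrate_enable();
  		return false;
  	}
  	return true;
  }

  static void task_storage_unlock(void)
  {
  	this_cpu_dec(bpf_task_storage_busy);
  	migrate_enable();
  }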
-