- 01 Aug, 2019 1 commit
-
-
Stanislav Fomichev authored
Since we always allocate memory, allocate just a little bit more for the BPF program in case it needs to override user input with a bigger value. The canonical example is TCP_CONGESTION, where the input string might be too small to override (nv -> bbr or cubic). 16 bytes are chosen to match TCP_CA_NAME_MAX and can be extended in the future if needed. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
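For illustration (not part of the patch), here is a minimal sketch of a cgroup setsockopt program that overrides a short TCP_CONGESTION value with a longer one, relying on the extra headroom described above. The program and constant names are made up; the struct bpf_sockopt fields and section name follow the cgroup sockopt hook API.

  /* Hypothetical sketch: rewrite TCP_CONGESTION from a cgroup/setsockopt
   * program. Assumes the kernel over-allocates the optval buffer (as this
   * commit describes) so the longer string fits. */
  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  #define SOL_TCP         6
  #define TCP_CONGESTION  13

  SEC("cgroup/setsockopt")
  int override_cong(struct bpf_sockopt *ctx)
  {
      char cubic[] = "cubic";
      char *optval = ctx->optval;
      char *optval_end = ctx->optval_end;

      if (ctx->level != SOL_TCP || ctx->optname != TCP_CONGESTION)
          return 1; /* not ours, let the kernel handle it untouched */

      /* bounds check keeps the verifier happy; the over-allocated buffer
       * gives us room even if userspace passed a short string like "nv" */
      if (optval + sizeof(cubic) > optval_end)
          return 0; /* reject rather than write out of bounds */

      __builtin_memcpy(optval, cubic, sizeof(cubic));
      ctx->optlen = sizeof(cubic);

      return 1; /* proceed with the (rewritten) setsockopt */
  }

  char _license[] SEC("license") = "GPL";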
-
- 31 Jul, 2019 9 commits
-
-
Jakub Kicinski authored
Takshak said in the original submission:

With different bpf attach_flags available to attach bpf programs, especially with BPF_F_ALLOW_OVERRIDE and BPF_F_ALLOW_MULTI, the list of effective bpf-programs available to any sub-cgroup really needs to be available for easy debugging. Using the BPF_F_QUERY_EFFECTIVE flag, one can get not only the bpf-programs attached to a cgroup but also the ones inherited from parent cgroups. So a new option is introduced that uses the BPF_F_QUERY_EFFECTIVE query flag to list all the effective bpf-programs available for execution at a specified cgroup.

Reused the modified test program test_cgroup_attach from tools/testing/selftests/bpf:

  # ./test_cgroup_attach

With old bpftool:

  # bpftool cgroup show /sys/fs/cgroup/cgroup-test-work-dir/cg1/
  ID       AttachType      AttachFlags     Name
  271      egress          multi           pkt_cntr_1
  272      egress          multi           pkt_cntr_2

Attaching a new program pkt_cntr_4 in cg2 gives the following:

  # bpftool cgroup show /sys/fs/cgroup/cgroup-test-work-dir/cg1/cg2
  ID       AttachType      AttachFlags     Name
  273      egress          override        pkt_cntr_4

And with the new "effective" option it shows all effective programs for cg2:

  # bpftool cgroup show /sys/fs/cgroup/cgroup-test-work-dir/cg1/cg2 effective
  ID       AttachType      AttachFlags     Name
  273      egress          override        pkt_cntr_4
  271      egress          override        pkt_cntr_1
  272      egress          override        pkt_cntr_2

Compared to the original submission, use a local flag instead of a global option. We need to clear query_flags on every command, in case batch mode wants to use varying settings.

v2: (Takshak)
- forbid duplicated flags;
- fix cgroup path freeing.

Signed-off-by: Takshak Chahande <ctakshak@fb.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Takshak Chahande <ctakshak@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Clear buffered output once a test or subtest finishes, even if it was successful. Not doing this leads to accumulation of output from previous tests, and on the first failed test lots of irrelevant output will be dumped, greatly confusing things. v1->v2: fix Fixes tag, add more context to patch Fixes: 3a516a0a ("selftests/bpf: add sub-tests support for test_progs") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Petar Penkov says:

====================

This patch series introduces a BPF helper function that allows generating SYN cookies from BPF. Currently, this helper is enabled at both the TC hook and the XDP hook.

The first two patches in the series add/modify several TCP helper functions to allow for SKB-less operation, as is the case at the XDP hook. The third patch introduces the bpf_tcp_gen_syncookie helper function which generates a SYN cookie for either XDP or TC programs. The return value of this function contains both the MSS value, encoded in the cookie, and the cookie itself. The last three patches sync tools/ and add a test.

Performance evaluation:
I sent 10Mpps to a fixed port on a host with 2 10G bonded Mellanox 4 NICs from random IPv6 source addresses. Without XDP I observed 7.2Mpps (syn-acks) being sent out if the IPv6 packets carry 20 bytes of TCP options, or 7.6Mpps if they carry no options. If I attached a simple program that checks if a packet is IPv6/TCP/SYN, looks up the socket, issues a cookie, and sends it back out after swapping src/dest, recomputing the checksum, and setting the ACK flag, I observed 10Mpps being sent back out.

Changes since v1:
1/ Added performance numbers to the cover letter
2/ Patch 2: Refactored a bit to fix compilation issues
3/ Patch 3: Changed ENOTSUPP to EOPNOTSUPP at Toke's suggestion

Changes since RFC:
1/ Cookie is returned in host order at Alexei's suggestion
2/ If cookies are not enabled via a sysctl, the helper function returns -ENOENT instead of -EINVAL at Lorenz's suggestion
3/ Fixed documentation to properly reflect that MSS is 16 bits at Lorenz's suggestion
4/ BPF helper requires TCP length to match ->doff field, rather than to simply be no more than 20 bytes at Eric and Alexei's suggestion
5/ Packet type is looked up from the packet version field, rather than from the socket. v4 packets are rejected on v6-only sockets but should work with dual stack listeners at Eric's suggestion
6/ Removed unnecessary `net` argument from helper function in patch 2 at Lorenz's suggestion
7/ Changed test to only pass MSS option so we can convince the verifier that the memory access is not out of bounds

Note that, as 7/ illustrates, the verifier might need to be extended to allow passing a variable tcph->doff length to the helper function, as in:

  __u32 thlen = tcph->doff * 4;
  if (thlen < sizeof(*tcph))
      return;
  __s64 cookie = bpf_tcp_gen_syncookie(sk, ipv4h, 20, tcph, thlen);

====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Petar Penkov authored
Modify the existing bpf_tcp_check_syncookie test to also generate a SYN cookie, pass the packet to the kernel, and verify that the two cookies are the same (and both valid). Since cloned SKBs are skipped during generic XDP, this test does not issue a SYN cookie when run in XDP mode. We therefore only check that a valid SYN cookie was issued at the TC hook. Additionally, verify that the MSS for that SYN cookie is within expected range. Signed-off-by: Petar Penkov <ppenkov@google.com> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Petar Penkov authored
Expose bpf_tcp_gen_syncookie to selftests. Signed-off-by: Petar Penkov <ppenkov@google.com> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Petar Penkov authored
Sync updated documentation for bpf_redirect_map. Sync the bpf_tcp_gen_syncookie helper function definition with the one in tools/uapi. Signed-off-by: Petar Penkov <ppenkov@google.com> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Petar Penkov authored
This helper function allows BPF programs to try to generate SYN cookies, given a reference to a listener socket. The function works from XDP and with an skb context since bpf_skc_lookup_tcp can lookup a socket in both cases. Signed-off-by: Petar Penkov <ppenkov@google.com> Suggested-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
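As a rough XDP-side sketch of how the helper might be used (simplified, IPv4-only, not the actual selftest; building the SYN-ACK reply is left out), assuming the 2019-era selftest headers bpf_helpers.h/bpf_endian.h:

  /* Hedged sketch: issue a SYN cookie for an incoming IPv4 SYN at XDP. */
  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/tcp.h>
  #include <linux/in.h>
  #include "bpf_helpers.h"
  #include "bpf_endian.h"

  SEC("xdp")
  int xdp_syncookie(struct xdp_md *ctx)
  {
      void *data = (void *)(long)ctx->data;
      void *data_end = (void *)(long)ctx->data_end;
      struct ethhdr *eth = data;
      struct iphdr *iph = (void *)(eth + 1);
      struct tcphdr *th = (void *)(iph + 1);
      struct bpf_sock_tuple tup = {};
      struct bpf_sock *sk;
      __s64 res;

      if ((void *)(th + 1) > data_end)
          return XDP_PASS;
      if (eth->h_proto != bpf_htons(ETH_P_IP))
          return XDP_PASS;
      if (iph->protocol != IPPROTO_TCP || iph->ihl != 5)
          return XDP_PASS;
      /* the helper requires th_len to match the actual header length (doff) */
      if (!th->syn || th->ack || th->doff != sizeof(*th) / 4)
          return XDP_PASS;

      tup.ipv4.saddr = iph->saddr;
      tup.ipv4.daddr = iph->daddr;
      tup.ipv4.sport = th->source;
      tup.ipv4.dport = th->dest;

      sk = bpf_skc_lookup_tcp(ctx, &tup, sizeof(tup.ipv4),
                              BPF_F_CURRENT_NETNS, 0);
      if (!sk)
          return XDP_PASS;

      /* lower 32 bits: the cookie; next 16 bits: the MSS encoded in it */
      res = bpf_tcp_gen_syncookie(sk, iph, sizeof(*iph), th, sizeof(*th));
      bpf_sk_release(sk);
      if (res < 0)
          return XDP_PASS;

      /* ... rewrite the headers into a SYN-ACK carrying (__u32)res ... */
      return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";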
-
Petar Penkov authored
This patch allows generation of a SYN cookie before an SKB has been allocated, as is the case at XDP. Signed-off-by: Petar Penkov <ppenkov@google.com> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Petar Penkov authored
This allows us to call this function before an SKB has been allocated. Signed-off-by: Petar Penkov <ppenkov@google.com> Reviewed-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 29 Jul, 2019 7 commits
-
-
Alexei Starovoitov authored
Toke Høiland-Jørgensen says:

====================

This series adds a new map type, devmap_hash, that works like the existing devmap type, but using a hash-based indexing scheme. This is useful for the use case where a devmap is indexed by ifindex (for instance for use with the routing table lookup helper). For this use case, the regular devmap needs to be sized after the maximum ifindex number, not the number of devices in it. A hash-based indexing scheme makes it possible to size the map after the number of devices it should contain instead.

This was previously part of my patch series that also turned the regular bpf_redirect() helper into a map-based one; for this series I just pulled out the patches that introduced the new map type.

Changelog:

v5:
- Dynamically set the number of hash buckets by rounding up max_entries to the nearest power of two (mirroring the regular hashmap), as suggested by Jesper.

v4:
- Remove check_memlock parameter that was left over from an earlier patch series.
- Reorder struct members to avoid holes.

v3:
- Rework the split into different patches
- Use spin_lock_irqsave()
- Also add documentation and bash completion definitions for bpftool

v2:
- Split commit adding the new map type so uapi and tools changes are separate.

Changes to these patches since the previous series:
- Rebase on top of the other devmap changes (makes this one simpler!)
- Don't enforce key==val, but allow arbitrary indexes.
- Rename the type to devmap_hash to reflect the fact that it's just a hashmap now.

====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Toke Høiland-Jørgensen authored
This adds selftest and bpftool updates for the devmap_hash map type. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Toke Høiland-Jørgensen authored
This adds the definition for BPF_MAP_TYPE_DEVMAP_HASH to libbpf_probes.c in tools/lib/bpf. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Toke Høiland-Jørgensen authored
This adds the devmap_hash BPF map type to the uapi headers in tools/. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Toke Høiland-Jørgensen authored
A common pattern when using xdp_redirect_map() is to create a device map where the lookup key is simply the ifindex. Because device maps are arrays, this leaves holes in the map, and the map has to be sized to fit the largest ifindex, regardless of how many devices are actually needed in the map. This patch adds a second type of device map where the key is looked up using a hashmap, instead of being used as an array index. This allows maps to be densely packed, so they can be smaller. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
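As a rough illustration (not taken from the patch), a devmap_hash keyed by ifindex might be defined and used from XDP like this; the map name, sizes, and the way the egress ifindex is obtained are all placeholders.

  /* Hypothetical sketch: a hash-based device map keyed by ifindex,
   * sized by the number of devices rather than by the largest ifindex. */
  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  struct bpf_map_def SEC("maps") tx_devices = {
      .type = BPF_MAP_TYPE_DEVMAP_HASH,
      .key_size = sizeof(__u32),   /* ifindex */
      .value_size = sizeof(__u32), /* target ifindex */
      .max_entries = 64,           /* number of devices, not max ifindex */
  };

  SEC("xdp")
  int xdp_redirect_by_ifindex(struct xdp_md *ctx)
  {
      /* placeholder key; a routing use case would take the egress
       * ifindex from bpf_fib_lookup() instead */
      __u32 egress_ifindex = ctx->ingress_ifindex;

      /* with DEVMAP_HASH the ifindex is a hash key, so the map only
       * needs to hold the devices actually inserted by userspace */
      return bpf_redirect_map(&tx_devices, egress_ifindex, 0);
  }

  char _license[] SEC("license") = "GPL";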
-
Toke Høiland-Jørgensen authored
The subsequent patch to add a new devmap sub-type can re-use much of the initialisation and allocation code, so refactor it into separate functions. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Toke Høiland-Jørgensen authored
When we changed the device and CPU maps to use linked lists instead of bitmaps, we also removed the need for the map_insert_ctx() helpers to keep track of the bitmaps inside each map. However, it seems I forgot to remove the function definition stubs, so remove those here. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 28 Jul, 2019 10 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================

This patch set makes a number of changes to the test_progs selftest, which is a collection of many other tests (and sometimes sub-tests as well), to provide a better testing experience and to allow converting many individual test programs under selftests/bpf into a single, convenient test runner.

Patch #1 fixes an issue with the Makefile which makes prog_tests/test.h be compiled as C code. This fix allows changing how test.h is generated, providing the ability to have more control over what and how tests are run.

Patch #2 changes how test.h is auto-generated, which allows having test definitions instead of just running test functions. This gives the ability to implement more complicated test run policies.

Patch #3 adds `-t <test-name>` and `-n <test-num>` selectors to run only a subset of tests.

Patch #4 changes libbpf_set_print() to return the previously set print callback, allowing to temporarily replace the current print callback and then set it back. This is necessary for some tests that want more control over libbpf logging.

Patch #5 sets up and takes over libbpf logging from individual tests to the test_progs runner, adding -vv verbosity to capture debug output from libbpf. This is useful when debugging failing tests.

Patch #6 furthers test output management and buffers it by default, emitting log output only if a test fails. This gives succinct and clean default test output. It's possible to bypass this behavior with the -v flag, which turns off test output buffering.

Patch #7 adds support for sub-tests. It also enhances the -t and -n selectors to support sub-test selectors, and enhances the number selector to accept sets of tests instead of just individual test numbers.

Patch #8 converts the bpf_verif_scale.c test to use the sub-test APIs.

Patch #9 converts the send_signal.c tests to use the sub-test APIs.

v2->v3:
- fix buffered output rare uninitialized value bug (Alexei);
- fix buffered output va_list reuse bug (Alexei);
- fix buffered output truncation due to interleaving zero terminators;

v1->v2:
- drop libbpf_swap_print, instead return previous function from libbpf_set_print (Stanislav);

====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Convert send_signal set of tests to be exposed as three sub-tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Expose each BPF verifier scale test as individual sub-test to allow independent results output and test selection. Test run results now look like this:

  $ sudo ./test_progs -t verif/
  #3/1 loop3.o:OK
  #3/2 test_verif_scale1.o:OK
  #3/3 test_verif_scale2.o:OK
  #3/4 test_verif_scale3.o:OK
  #3/5 pyperf50.o:OK
  #3/6 pyperf100.o:OK
  #3/7 pyperf180.o:OK
  #3/8 pyperf600.o:OK
  #3/9 pyperf600_nounroll.o:OK
  #3/10 loop1.o:OK
  #3/11 loop2.o:OK
  #3/12 strobemeta.o:OK
  #3/13 strobemeta_nounroll1.o:OK
  #3/14 strobemeta_nounroll2.o:OK
  #3/15 test_sysctl_loop1.o:OK
  #3/16 test_sysctl_loop2.o:OK
  #3/17 test_xdp_loop.o:OK
  #3/18 test_seg6_loop.o:OK
  #3 bpf_verif_scale:OK

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Allow tests to have their own set of sub-tests. Also add the ability to do test/subtest selection using `-t <test-name>/<subtest-name>` and `-n <test-nums-set>/<subtest-nums-set>`, as an extension of the existing -t/-n selector options.

The <test-num-set> format is a comma-separated list of either individual test numbers (1-based) or ranges of test numbers. E.g., all of the following are valid sets of test numbers:
- 10
- 1,2,3
- 1-3
- 5-10,1,3-4

The '/<subtest>' part is optional, but has the same format. E.g., to select test #3 and its sub-tests #10 through #15, use: -n 3/10-15. Similarly, to select tests by name, use `-t verif/strobe`:

  $ sudo ./test_progs -t verif/strobe
  #3/12 strobemeta.o:OK
  #3/13 strobemeta_nounroll1.o:OK
  #3/14 strobemeta_nounroll2.o:OK
  #3 bpf_verif_scale:OK
  Summary: 1/3 PASSED, 0 FAILED

An example of using the subtest API is in the next patch, converting the bpf_verif_scale.c tests to use sub-tests.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
This patch changes how test output is printed out. By default, if a test had no errors, the only output is a single line with the test number, name, and verdict at the end, e.g.:

  #31 xdp:OK

If a test had any errors, all log output captured during test execution is printed after the test completes. It's possible to force output of the log with the `-v` (`--verbose`) option, in which case output won't be buffered and will be printed immediately. To support this, individual tests are required to use the helper methods `test__printf()` and `test__vprintf()` for logging.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Make the test_progs test runner own libbpf logging. Also introduce two levels of verbosity: -v and -vv. The first one will be used in subsequent patches to always enable test log output. The second one increases the verbosity of libbpf logging further to include debug output as well. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
By returning the previously set print callback from libbpf_set_print, it's possible to restore it later. This is useful when running many independent tests with one default print function, while overriding log verbosity for a particular subset of tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
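A small usage sketch of the changed API (the callback body and function names here are only illustrative):

  /* Sketch: temporarily raise libbpf verbosity for one test, then
   * restore whatever print callback was installed before. */
  #include <stdio.h>
  #include <stdarg.h>
  #include <bpf/libbpf.h>

  static int verbose_print(enum libbpf_print_level level,
                           const char *format, va_list args)
  {
      /* print everything, including LIBBPF_DEBUG messages */
      return vfprintf(stderr, format, args);
  }

  void run_one_test_with_debug_logs(void (*test_fn)(void))
  {
      libbpf_print_fn_t old_print = libbpf_set_print(verbose_print);

      test_fn();

      libbpf_set_print(old_print); /* restore the previous callback */
  }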
-
Andrii Nakryiko authored
Add the ability to specify either a test number or a test name substring to narrow down the set of tests to run.

Usage:
  sudo ./test_progs -n 1
  sudo ./test_progs -t attach_probe

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Refactor test_progs to allow better control over what's being run. Also use argp to do argument parsing, so that it's easier to keep adding more options. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Apparently, listing a header as a normal dependency for a binary output makes it go through compilation as if it were C code. This currently works without a problem, but in subsequent commits it causes problems for the differently generated test.h for test_progs. Marking those headers as an order-only dependency solves the issue. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 26 Jul, 2019 10 commits
-
-
Alexei Starovoitov authored
Stanislav Fomichev says:

====================

The C flow dissector supports input flags that tell it to customize parsing by either stopping early or trying to parse as deep as possible. The BPF flow dissector always parses as deep as possible, which is sub-optimal. Pass the input flags to the BPF flow dissector as well so it can make the same decisions.

Series outline:
* remove unused FLOW_DISSECTOR_F_STOP_AT_L3 flag
* export FLOW_DISSECTOR_F_XXX flags as uapi and pass them to the BPF flow dissector
* add documentation for the exported flags
* support input flags in BPF_PROG_TEST_RUN via ctx_{in,out}
* sync uapi to tools
* support FLOW_DISSECTOR_F_PARSE_1ST_FRAG in selftest
* support FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL in kernel and selftest
* support FLOW_DISSECTOR_F_STOP_AT_ENCAP in selftest

Pros:
* makes the BPF flow dissector faster by avoiding burning extra cycles
* existing BPF progs continue to work by ignoring the flags and always parsing as deep as possible

Cons:
* new UAPI which we need to support (OTOH, if we need to deprecate some flags, we can just stop setting them upon calling BPF programs)

Some numbers (with .repeat = 4000000 in test_flow_dissector):

  test_flow_dissector:PASS:ipv4-frag 35 nsec
  test_flow_dissector:PASS:ipv4-frag 35 nsec
  test_flow_dissector:PASS:ipv4-no-frag 32 nsec
  test_flow_dissector:PASS:ipv4-no-frag 32 nsec
  test_flow_dissector:PASS:ipv6-frag 39 nsec
  test_flow_dissector:PASS:ipv6-frag 39 nsec
  test_flow_dissector:PASS:ipv6-no-frag 36 nsec
  test_flow_dissector:PASS:ipv6-no-frag 36 nsec
  test_flow_dissector:PASS:ipv6-flow-label 36 nsec
  test_flow_dissector:PASS:ipv6-flow-label 36 nsec
  test_flow_dissector:PASS:ipv6-no-flow-label 33 nsec
  test_flow_dissector:PASS:ipv6-no-flow-label 33 nsec
  test_flow_dissector:PASS:ipip-encap 38 nsec
  test_flow_dissector:PASS:ipip-encap 38 nsec
  test_flow_dissector:PASS:ipip-no-encap 32 nsec
  test_flow_dissector:PASS:ipip-no-encap 32 nsec

The improvement is around 10%, but it's in a tight cache-hot BPF_PROG_TEST_RUN loop.

====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
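As a rough sketch of what this enables on the BPF side (simplified, not the actual bpf_flow.c logic; header parsing is elided), a flow dissector program can now consult skb->flow_keys->flags and bail out early:

  /* Simplified sketch: honor two of the new input flags in a BPF flow
   * dissector program. */
  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  SEC("flow_dissector")
  int flow_dissect(struct __sk_buff *skb)
  {
      struct bpf_flow_keys *keys = skb->flow_keys;

      /* ... parse the network header, fill keys->addr_proto etc. ... */

      /* fragments: only the first fragment carries the L4 header, and we
       * only parse it when the caller asked for it (e.g. eth_get_headlen) */
      if (keys->is_frag &&
          !((keys->flags & BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG) &&
            keys->is_first_frag))
          return BPF_OK;

      /* the caller is satisfied with the IPv6 flow label, stop here */
      if ((keys->flags & BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL) &&
          keys->flow_label)
          return BPF_OK;

      /* ... otherwise keep parsing ports / encapsulation ... */
      return BPF_OK;
  }

  char _license[] SEC("license") = "GPL";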
-
Stanislav Fomichev authored
Exit as soon as we find that the packet is encapped when BPF_FLOW_DISSECTOR_F_STOP_AT_ENCAP is passed. Add appropriate selftest cases. v2: * Subtract sizeof(struct iphdr) from .iph_inner.tot_len (Willem de Bruijn) Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
Add support for exporting ipv6 flow label via bpf_flow_keys. Export flow label from bpf_flow.c and also return early when BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL is passed. Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
bpf_flow.c: exit early unless BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG is passed in flags. Also, set ip_proto earlier; this makes sure we have the correct value for fragmented packets. Add selftest cases to test ipv4/ipv6 fragments and skip eth_get_headlen tests that don't have the BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG flag. eth_get_headlen calls the flow dissector with the BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG flag, so we can't run tests that have a different set of input flags against it. v2: * sefltests -> selftests (Willem de Bruijn) * Reword a comment about eth_get_headlen flags (Song Liu) Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
Export bpf_flow_keys flags to tools/libbpf/selftests. Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
This will allow us to write tests for those flags. v2: * Swap kfree(data) and kfree(user_ctx) (Song Liu) Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
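A hedged sketch of how a test might feed flags in through the test-run context, using the libbpf bpf_prog_test_run_xattr() API of that time; the function name is made up, and prog_fd plus the packet buffer are assumed to be set up elsewhere:

  /* Sketch: run a flow dissector program via BPF_PROG_TEST_RUN and pass
   * input flags through ctx_in, reading the dissected keys via ctx_out. */
  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  int run_dissector_with_flags(int prog_fd, void *pkt, __u32 pkt_len)
  {
      struct bpf_flow_keys flow_keys_in = {
          .flags = BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG,
      };
      struct bpf_flow_keys flow_keys_out;
      struct bpf_prog_test_run_attr tattr = {
          .prog_fd = prog_fd,
          .data_in = pkt,
          .data_size_in = pkt_len,
          .ctx_in = &flow_keys_in,
          .ctx_size_in = sizeof(flow_keys_in),
          .ctx_out = &flow_keys_out,
          .ctx_size_out = sizeof(flow_keys_out),
      };
      int err = bpf_prog_test_run_xattr(&tattr);

      if (err || tattr.retval != BPF_OK)
          return -1;

      /* flow_keys_out now holds what the dissector extracted */
      return 0;
  }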
-
Stanislav Fomichev authored
Describe what each input flag does and who uses it. Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
C flow dissector supports input flags that tell it to customize parsing by either stopping early or trying to parse as deep as possible. Pass those flags to the BPF flow dissector so it can make the same decisions. In the next commits I'll add support for those flags to our reference bpf_flow.c v3: * Export copy of flow dissector flags instead of moving (Alexei Starovoitov) Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Allan Zhang authored
Software event output is only enabled for a few prog types. This test ensures that all supported types can use bpf_perf_event_output successfully. Signed-off-by: Allan Zhang <allanzhang@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Allan Zhang authored
Software event output is only enabled for a few prog types right now (TC, LWT out, XDP, sockops). Many other skb-based prog types need bpf_skb_event_output to produce software events. Add the socket_filter, cg_skb, and sk_skb prog types to the set that can generate software events.

The test BPF code is generated from this code snippet:

  struct TMP {
      uint64_t tmp;
  } tt;
  tt.tmp = 5;
  bpf_perf_event_output(skb, &connection_tracking_event_map, 0, &tt, sizeof(tt));
  return 1;

The BPF assembly from llvm is:

  0: b7 02 00 00 05 00 00 00                          r2 = 5
  1: 7b 2a f8 ff 00 00 00 00                          *(u64 *)(r10 - 8) = r2
  2: bf a4 00 00 00 00 00 00                          r4 = r10
  3: 07 04 00 00 f8 ff ff ff                          r4 += -8
  4: 18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00  r2 = 0ll
  6: b7 03 00 00 00 00 00 00                          r3 = 0
  7: b7 05 00 00 08 00 00 00                          r5 = 8
  8: 85 00 00 00 19 00 00 00                          call 25
  9: b7 00 00 00 01 00 00 00                          r0 = 1
  10: 95 00 00 00 00 00 00 00                         exit

Signed-off-by: Allan Zhang <allanzhang@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
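For context, a self-contained sketch of the pattern the snippet above exercises, here as a socket_filter program (the map and struct names are made up, and BPF_F_CURRENT_CPU is used instead of the literal index 0 from the snippet):

  /* Hypothetical sketch: emit a software event from a socket_filter
   * program, one of the prog types enabled by this change. */
  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  struct bpf_map_def SEC("maps") events = {
      .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
      .key_size = sizeof(int),
      .value_size = sizeof(__u32),
      .max_entries = 64, /* >= number of possible CPUs */
  };

  SEC("socket")
  int sock_emit_event(struct __sk_buff *skb)
  {
      struct {
          __u64 tmp;
      } tt = { .tmp = 5 };

      /* BPF_F_CURRENT_CPU picks the per-CPU ring for the current CPU */
      bpf_perf_event_output(skb, &events, BPF_F_CURRENT_CPU, &tt, sizeof(tt));
      return 1; /* keep the packet */
  }

  char _license[] SEC("license") = "GPL";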
-
- 23 Jul, 2019 3 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says:

====================

There were a few more tests and samples that were using custom perf buffer setup code from trace_helpers.h. This patch set gets rid of all of those usages and removes the helpers themselves. Libbpf provides a nicer, but equally powerful, set of APIs to work with perf ring buffers, so let's have all the samples use it instead.

v1->v2:
- make logging message one long line instead of two (Song).

====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
libbpf's perf_buffer API supersedes trace_helpers.h's helpers. Remove those helpers now that all existing users have been moved to the perf_buffer API. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
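For reference, a minimal usage sketch of the 2019-era libbpf perf_buffer API that replaces those helpers (the map fd, function names, and callback body are placeholders):

  /* Sketch: poll a BPF_MAP_TYPE_PERF_EVENT_ARRAY with libbpf's
   * perf_buffer API instead of the removed trace_helpers.h code. */
  #include <stdio.h>
  #include <bpf/libbpf.h>

  static void handle_sample(void *ctx, int cpu, void *data, __u32 size)
  {
      printf("got %u bytes on cpu %d\n", size, cpu);
  }

  int poll_events(int perf_map_fd)
  {
      struct perf_buffer_opts pb_opts = {
          .sample_cb = handle_sample,
      };
      struct perf_buffer *pb;
      int err;

      /* 8 pages per CPU for the ring buffers */
      pb = perf_buffer__new(perf_map_fd, 8, &pb_opts);
      if (libbpf_get_error(pb))
          return -1;

      while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
          ; /* handle_sample() is invoked for every record */

      perf_buffer__free(pb);
      return err;
  }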
-
Andrii Nakryiko authored
Convert trace_output sample to libbpf's perf_buffer API. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-