- 29 Jan, 2019 3 commits
-
-
Stanislav Fomichev authored
Use the existing pkt_v4 and pkt_v6 to make sure flow_keys are what we want. Also, add a new bpf_flow_load routine (and a flow_dissector_load.h header) that loads the bpf_flow.o program and does all the required setup. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
The input is packet data; the output is struct bpf_flow_keys. This should make it easy to test flow dissector programs without an elaborate setup. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
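For illustration, a minimal sketch of how a test could exercise a flow dissector program through this interface, using the libbpf bpf_prog_test_run() wrapper of that era; the helper name run_flow_dissector() and the printed fields are illustrative, and prog_fd is assumed to be an already loaded BPF_PROG_TYPE_FLOW_DISSECTOR program:

  #include <bpf/bpf.h>          /* bpf_prog_test_run() */
  #include <linux/bpf.h>        /* struct bpf_flow_keys, BPF_OK */
  #include <stdio.h>
  #include <string.h>

  /* Run the flow dissector prog on a raw Ethernet frame (e.g. the
   * selftests' pkt_v4) and dump a few of the dissected keys.
   */
  static int run_flow_dissector(int prog_fd, void *pkt, __u32 pkt_len)
  {
      struct bpf_flow_keys keys;
      __u32 keys_len = sizeof(keys);
      __u32 retval, duration;
      int err;

      memset(&keys, 0, sizeof(keys));
      err = bpf_prog_test_run(prog_fd, 1 /* repeat */, pkt, pkt_len,
                              &keys, &keys_len, &retval, &duration);
      if (err || retval != BPF_OK)
          return -1;

      printf("nhoff=%u thoff=%u is_frag=%u\n",
             (unsigned int)keys.nhoff, (unsigned int)keys.thoff,
             (unsigned int)keys.is_frag);
      return 0;
  }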
-
Stanislav Fomichev authored
This way, we can reuse it for flow dissector in BPF_PROG_TEST_RUN. No functional changes. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 28 Jan, 2019 6 commits
-
-
Jakub Kicinski authored
When a prog array is updated with bpftool, users often refer to the map via its ID. Unfortunately, that's likely to lead to confusion because prog arrays get flushed when the last user reference is gone. If there is no other reference, bpftool will create one, perform the update successfully, only to close the map again and have it flushed. Warn about this case in non-JSON mode. If the problem continues to cause confusion, we can remove support for referring to a map by ID for prog array updates completely. For now it seems like the potential inconvenience to users who know what they're doing outweighs the benefit. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
YueHaibing authored
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Jakub Kicinski says:

====================
The tools/testing/selftests/bpf/test_verifier.c file is way too large, and since most people add their tests at the end of the list it is very prone to conflicts. Break it up in the simplest possible way: slice the array up into smaller C files and include them in the right spot.

Tested:

  $ make -C tools/testing/selftests/bpf/
  $ cd tools/testing/selftests/bpf/ ; make

v2: the indentation is reduced further as discussed and long lines are folded. The conversion was scripted and double-checked by hand.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
Break up the rest of test_verifier tests into separate files. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
Break the first 10 kLoC of test_verifier test cases out into smaller files. It looks like git's line counting gets a little flimsy above 16-bit integers, so we need two commits to break up test_verifier. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
test_verifier.c has grown to be very long (almost 16 kLoC), and it is very conflict prone since we always add tests at the end. Try to break it apart a little bit. Allow test snippets to be defined in separate files and include them automatically into the huge test array. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
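For illustration, the include-into-an-array idea looks roughly like the sketch below; the struct fields and fragment file names are placeholders, not the actual selftests layout:

  /* test_verifier.c (sketch): the test array is stitched together from
   * fragment files that contain nothing but '{ ... },' initializer entries.
   */
  struct bpf_test {
      const char *descr;
      unsigned int insn_cnt;
      int expected_result;
  };

  static struct bpf_test tests[] = {
  #include "verifier/basic.c"    /* hypothetical fragment file */
  #include "verifier/jumps.c"    /* hypothetical fragment file */
  };

  /* verifier/basic.c (sketch) would then contain entries such as:
   *
   *    {
   *        "trivial: mov and exit",
   *        .insn_cnt = 2,
   *        .expected_result = 0,
   *    },
   */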
-
- 26 Jan, 2019 17 commits
-
-
Alexei Starovoitov authored
Jiong Wang says:

====================
v3 -> v4:
 - Fixed rebase issue. JMP32 checks were missing in two new functions:
   + kernel/bpf/verifier.c:insn_is_cond_jump
   + drivers/net/ethernet/netronome/nfp/bpf/main.h:is_mbpf_cond_jump
   (Daniel)
 - Further rebased on top of the latest llvm-readelf change.

v2 -> v3:
 - Added a missed check on JMP32 inside bpf_jit_build_body. (Sandipan)
 - Wrapped ?: statements in the s390 port with braces; they are used by macros which don't guard the operand with braces.
 - Fixed the ',' issues in the test_verifier change.
 - Reordered the two selftests patches to be near each other.
 - Rebased on top of the latest bpf-next.

v1 -> v2:
 - Updated encoding: use the reserved insn class 0x6 instead of packing into the existing BPF_JMP class. (Alexei)
 - Updated code comments in the s390 port. (Martin)
 - Separate JIT function for jeq32_imm in the NFP port. (Jakub)
 - Re-implemented auto-testing support. (Jakub)
 - Moved testcases to test_verifier.c, plus more unit tests. (Jakub)
 - Fixed JEQ/JNE range deduction. (Jakub)
 - Also supported JSET in this patch set.
 - Fixed/improved range deduction for all the other operations; all C programs under the bpf selftests now pass verification.
 - Improved the min/max code implementation.
 - Fixed bpftool/disassembler.

The current eBPF ISA has 32-bit sub-registers and defines a set of ALU32 instructions, but it has no JMP32 instructions. The consequence is that code-gen for 32-bit sub-registers is not efficient; for example, an explicit sign extension from 32 to 64 bits is needed for signed comparisons. Adding JMP32 instructions therefore completes the eBPF ISA's 32-bit sub-register support. It also matches the 32-bit jump instructions in most JIT backends, for example x86-64 and AArch64, onto which the new eBPF JMP32 instructions can map one-to-one.

A few verifier ALU32-related bugs have been fixed recently, and the JMP32 instructions introduced by this set further improve the BPF sub-register ecosystem. Once this lands, BPF programs using the 32-bit sub-register ISA get reasonably good support from the verifier and JIT compilers. Users can then compare the runtime efficiency of a BPF program under both modes and use whichever the benchmarks show to be better.

Benchmark results on some Cilium BPF programs show that, for 64-bit arches, after JMP32 is introduced, programs compiled with -mattr=+alu32 (i.e. with sub-register usage enabled) are smaller in code size and generally smaller in verifier-processed insn count.

Benchmark results
===

Text size in bytes (generated by "size")
---
LLVM code-gen option   default    alu32   alu32/jmp32   change vs. alu32   change vs. default
bpf_lb-DLB_L3.o:          6456     6280          6160             -1.91%               -4.58%
bpf_lb-DLB_L4.o:          7848     7664          7136             -6.89%               -9.07%
bpf_lb-DUNKNOWN.o:        2680     2664          2568             -3.60%               -4.18%
bpf_lxc.o:              104824   104744         97360             -7.05%               -7.12%
bpf_netdev.o:            23456    23576         21632             -8.25%               -7.78%
bpf_overlay.o:           16184    16304         14648            -10.16%               -9.49%

Processed instruction number
---
LLVM code-gen option   default    alu32   alu32/jmp32   change vs. alu32   change vs. default
bpf_lb-DLB_L3.o:          1579     1281          1295             +1.09%              -17.99%
bpf_lb-DLB_L4.o:          2045     1663          1556             -6.43%              -23.91%
bpf_lb-DUNKNOWN.o:         606      513           501             -2.34%              -17.33%
bpf_lxc.o:               85381   103218         94435             -8.51%              +10.60%
bpf_netdev.o:             5246     5809          5200            -10.48%               -0.08%
bpf_overlay.o:            2443     2705          2456             -9.02%               -0.53%

It is even better for 32-bit arches like x32, arm32 and nfp, as some conditional jumps become JMP32 and no longer require code-gen for the high 32-bit comparison.

Encoding
===
The new JMP32 instructions use the new BPF_JMP32 class, which takes the reserved eBPF class number 0x6. BPF_JA/CALL/EXIT exist only for BPF_JMP; their opcodes are reserved under BPF_JMP32.

LLVM support
===
A couple of unit tests have been added and are included in this set. LLVM code-gen for JMP32 has also been added, so any BPF C program can be compiled with both -mcpu=probe and -mattr=+alu32 specified. When compiling on a machine whose kernel is patched with this set, LLVM selects the ISA automatically based on the host probe results; otherwise, specifying -mcpu=v3 and -mattr=+alu32 also forces use of the JMP32 ISA. The LLVM support can be found at:

  https://github.com/Netronome/llvm/tree/jmp32-v2

(The clang driver has also been taught about the new "v3" processor; merge requests for both clang and llvm will be sent out once the kernel set has landed.)

JIT backends support
===
All JIT backends except SPARC and MIPS are supported in this set. That shouldn't be a big issue for those two ports, as LLVM won't generate JMP32 insns by default; it only generates them when the host machine is probed to support them.

Thanks.
====================

Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
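To make the code-size argument concrete, a hedged sketch of what a signed 32-bit comparison looks like as raw insns, with and without JMP32, built with the kernel's insn builder macros; the exact sequences LLVM emits will differ, and the jump offsets here are placeholders:

  #include <linux/filter.h>    /* kernel-side insn macros; samples use bpf_insn.h */

  /* Signed "w1 < 2" before JMP32: sign-extend the low 32 bits of r1 into a
   * scratch register, then use a 64-bit conditional jump.
   */
  static const struct bpf_insn cmp32_old[] = {
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
      BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
      BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 32),
      BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 2, /* off */ 1),
  };

  /* The same test with JMP32: a single insn comparing the sub-registers. */
  static const struct bpf_insn cmp32_new[] = {
      BPF_JMP32_IMM(BPF_JSLT, BPF_REG_1, 2, /* off */ 1),
  };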
-
Jiong Wang authored
This patch enables testing some eBPF programs under sub-register compilation mode. Only enable this when there is BPF_JMP32 support in both LLVM and the kernel, because only with BPF_JMP32 is the code-gen for complex programs under sub-register mode clean enough to pass verification. This patch splits TEST_GEN_FILES into BPF_OBJ_FILES and BPF_OBJ_FILES_DUAL_COMPILE. The latter are the objects we would like to compile for both default and sub-register mode; they are also the objects used by "test_progs". Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch adds unit tests for the new JMP32 instructions. It also adds the new BPF_JMP32_REG and BPF_JMP32_IMM macros to samples/bpf/bpf_insn.h so that JMP32 insn builders are available to tests under the 'samples' directory. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
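A sketch of what these builders look like, modeled on the existing BPF_JMP_REG/BPF_JMP_IMM macros; struct bpf_insn and the BPF_* constants come from linux/bpf.h, and the definitions in samples/bpf/bpf_insn.h are authoritative:

  /* Conditional jump against a register, comparing the low 32 bits only. */
  #define BPF_JMP32_REG(OP, DST, SRC, OFF)                \
      ((struct bpf_insn) {                                \
          .code  = BPF_JMP32 | BPF_OP(OP) | BPF_X,        \
          .dst_reg = DST,                                 \
          .src_reg = SRC,                                 \
          .off   = OFF,                                   \
          .imm   = 0 })

  /* Conditional jump against an immediate, comparing the low 32 bits only. */
  #define BPF_JMP32_IMM(OP, DST, IMM, OFF)                \
      ((struct bpf_insn) {                                \
          .code  = BPF_JMP32 | BPF_OP(OP) | BPF_K,        \
          .dst_reg = DST,                                 \
          .src_reg = 0,                                   \
          .off   = OFF,                                   \
          .imm   = IMM })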
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on NFP. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on s390. Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on ppc. For JMP32 | JSET, an instruction encoding for PPC_RLWINM_DOT is added to check the result of ANDing the low 32 bits of the operands. Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com> Cc: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on arm. For JSET, "ands" (AND with flags updated) is used, so a corresponding encoding helper is added. Cc: Shubham Bansal <illusionist.neo@gmail.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on arm64. Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Zi Shen Lim <zlim.lnx@gmail.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on x32. Also fix several reverse-xmas-tree coding style issues while at it. Cc: Wang YanQing <udknight@gmail.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements code-gen for new JMP32 instructions on x86_64. Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch adds JIT blinding support for JMP32. Like BPF_JMP_REG/IMM, JMP32 versions are needed for building raw BPF insns; they are added to both include/linux/filter.h and tools/include/linux/filter.h. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch implements interpreting new JMP32 instructions. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
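Conceptually, the interpreter change boils down to comparing only the low 32 bits of the operands; a stand-alone sketch of that idea, not the actual ___bpf_prog_run() code (which generates its jump cases via macros):

  #include <stdbool.h>
  #include <stdint.h>

  /* Decide whether a conditional branch is taken. 'jmp32' mirrors the
   * BPF_JMP32 class: identical semantics to BPF_JMP, but only the low
   * 32 bits of both operands participate in the comparison.
   */
  bool branch_taken_jeq(uint64_t dst, uint64_t src, bool jmp32)
  {
      if (jmp32)
          return (uint32_t)dst == (uint32_t)src;
      return dst == src;
  }

  bool branch_taken_jsgt(uint64_t dst, uint64_t src, bool jmp32)
  {
      if (jmp32)
          return (int32_t)dst > (int32_t)src;
      return (int64_t)dst > (int64_t)src;
  }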
-
Jiong Wang authored
The cfg code needs to be aware of the new JMP32 instruction class so it can partition functions correctly. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch teaches the disassembler about JMP32. There are two places to update: class 0x6 is now used by BPF_JMP32 rather than being "unused", and BPF_JMP32 needs to show its comparison operands properly. The disassembly format adds an extra "(32)" before the operands when the comparison is on sub-registers. A better disassembly format for both JMP32 and ALU32 would simply show the register prefix as "w" instead of "r", which is the format used by the LLVM assembler. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch teaches the verifier about the new BPF_JMP32 instruction class. The verifier needs to treat it similarly to the existing BPF_JMP class: a BPF_JMP32 insn has to go through all the checks that are done on BPF_JMP. Also, the verifier performs optimizations based on the extra information a conditional jump instruction can offer, especially when the comparison is between a constant and a register, in which case the value range of the register can be improved based on the comparison result. This code is updated accordingly. Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
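As a hypothetical example of why this range tracking matters, a bounds check performed on a 32-bit sub-register must still convince the verifier that a later access is in range; with -mattr=+alu32, a check like the one below may compile down to a JMP32 comparison against a constant (plain C sketch, names are illustrative):

  #define MAX_SLOTS 16

  struct ctx { unsigned int idx; };

  /* The bounds check lets the verifier conclude that 'i' is in [0, 15];
   * without that deduction the array access would be rejected.
   */
  int lookup(const struct ctx *c, const int *slots /* MAX_SLOTS entries */)
  {
      unsigned int i = c->idx;   /* 32-bit sub-register */

      if (i >= MAX_SLOTS)        /* JMP32 JGE against a constant */
          return 0;
      return slots[i];
  }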
-
Jiong Wang authored
The current min/max code does both signed and unsigned comparisons against the input argument "val", which is a u64, with explicit type casting when the comparison is signed. As we will need slightly more complex type casting once JMP32 is introduced, it is better to hoist the signed type casting. This makes the code cleaner, with negligible runtime overhead. Also, the code for J*GE/GT/LT/LE and JEQ/JNE is very similar, so this patch combines them. The main purpose of this refactor is to make sure the min/max code remains readable, with minimal code duplication, after JMP32 is introduced. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
The new eBPF instruction class JMP32 uses the reserved class number 0x6. The kernel BPF ISA documentation is updated accordingly. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 25 Jan, 2019 5 commits
-
-
Daniel Borkmann authored
Björn Töpel says:

====================
This series adds an AF_XDP sock_diag interface for querying sockets from user space. Tools like iproute2 ss(8) can use this interface to list open AF_XDP sockets. The diagnostics provide information about the Rx/Tx/fill/completion rings, the umem, memory usage and such. For a complete list, please refer to the xsk_diag.c file.

The AF_XDP sock_diag interface is optional and can be built as a module. A separate patch series, adding ss(8) iproute2 support, will follow.

v1->v2:
 * Removed extra newline
 * Zero out all user-space facing structures prior to setting the members
 * Added explicit "pad" member in the _msg struct
 * Removed unused variable "req" in xsk_diag_handler_dump()

Thanks to Daniel for reviewing the series!
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Björn Töpel authored
This patch adds the sock_diag interface for querying sockets from user space. Tools like iproute2 ss(8) can use this interface to list open AF_XDP sockets. The user-space ABI is defined in linux/xdp_diag.h and includes netlink request and response structs. The request can query sockets and the response contains socket information about the rings, umems, inode and more. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
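For orientation, a hedged sketch of what such a netlink request struct looks like; the field names below are approximate and the struct is deliberately named *_sketch, since linux/xdp_diag.h is the authoritative ABI definition:

  #include <linux/types.h>

  /* Approximation of a sock_diag dump request for AF_XDP sockets. It is
   * sent over NETLINK_SOCK_DIAG with the AF_XDP family; xdiag_show is a
   * bitmask selecting which attribute groups (basic info, ring config,
   * umem, memory info, ...) the kernel should include in the reply.
   */
  struct xdp_diag_req_sketch {
      __u8    sdiag_family;
      __u8    sdiag_protocol;
      __u16   pad;
      __u32   xdiag_ino;        /* filter by inode, 0 for all sockets */
      __u32   xdiag_show;       /* which attribute groups to report */
      __u32   xdiag_cookie[2];
  };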
-
Björn Töpel authored
This commit adds an id to the umem structure. The id uniquely identifies a umem instance, and will be exposed to user-space via the socket monitoring interface. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Björn Töpel authored
Track each AF_XDP socket in a per-netns list. This will be used later by the sock_diag interface for querying sockets from userspace. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
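A minimal sketch of the per-netns bookkeeping this implies; the struct and function names are illustrative, not the exact xsk code:

  #include <linux/list.h>
  #include <linux/mutex.h>

  /* Per-netns state: a mutex-protected list of all AF_XDP sockets in the
   * namespace, so a sock_diag dump can simply walk the list.
   */
  struct xsk_netns_state {
      struct mutex      lock;
      struct hlist_head list;
  };

  static void xsk_track(struct xsk_netns_state *ns_state,
                        struct hlist_node *sk_node)
  {
      mutex_lock(&ns_state->lock);
      hlist_add_head(sk_node, &ns_state->list);
      mutex_unlock(&ns_state->lock);
  }

  static void xsk_untrack(struct xsk_netns_state *ns_state,
                          struct hlist_node *sk_node)
  {
      mutex_lock(&ns_state->lock);
      hlist_del(sk_node);
      mutex_unlock(&ns_state->lock);
  }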
-
Stanislav Fomichev authored
Before:

  $ make -s -C tools/testing/selftests/bpf
  readelf: Error: Missing knowledge of 32-bit reloc types used in DWARF sections of machine number 247
  readelf: Warning: unable to apply unsupported reloc type 10 to section .debug_info
  readelf: Warning: unable to apply unsupported reloc type 1 to section .debug_info
  readelf: Warning: unable to apply unsupported reloc type 10 to section .debug_info

After:

  $ make -s -C tools/testing/selftests/bpf

v2:
 * use llvm-readelf instead of redirecting binutils' readelf stderr to /dev/null

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 24 Jan, 2019 9 commits
-
-
Eric Dumazet authored
This adds the ability to read gso_segs from a BPF program. v3: use BPF_REG_AX instead of BPF_REG_TMP for the temporary register, as suggested by Martin. v2: refined Eddie Hao's patch to address Alexei's feedback. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Eddie Hao <eddieh@google.com> Cc: Martin KaFai Lau <kafai@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
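A hedged example of what this enables on the BPF side, assuming a tc/clsact attachment on a kernel with this change; the program name and the drop threshold are purely illustrative:

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>

  /* Illustrative policy: drop skbs aggregated from more than 32 segments.
   * gso_segs is the __sk_buff field this commit exposes.
   */
  __attribute__((section("classifier"), used))
  int gso_segs_example(struct __sk_buff *skb)
  {
      if (skb->gso_segs > 32)
          return TC_ACT_SHOT;
      return TC_ACT_OK;
  }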
-
Prashant Bhole authored
When 'bpftool feature' is executed it shows an incorrect help string:

  test# bpftool feature
  Usage: bpftool bpftool probe [COMPONENT] [macros [prefix PREFIX]]
         bpftool bpftool help

         COMPONENT := { kernel | dev NAME }

Instead of fixing the help text by tweaking argv[] indices, this patch changes the default action to 'probe'. That makes the behavior consistent with other subcommands, where the first subcommand without an extra parameter results in the 'show' action.

Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Jakub Kicinski says:

====================
This set adds support for complete removal of dead code. Patch 3 contains all the code removal logic; patches 2 and 4 additionally optimize branches around and to dead code. Patches 6 and 7 allow offload JITs to take advantage of the optimization. After a few small clean-ups (8, 9, 10), nfp support is added (11, 12).

Removing code directly in the verifier makes it easy to adjust the relevant metadata (line info, subprogram info). JITs for code-store constrained architectures would have a hard time performing such adjustments at the JIT level. Removing subprograms or line info is very hard once BPF core has finished the verification. For user space to perform dead code removal it would have to perform execution simulation/analysis similar to what the verifier does.

v3:
 - fix uninitialized var warning in GCC 6 (buildbot).
v4:
 - simplify the linfo-keeping logic (Yonghong). Instead of trying to figure out that we are removing the first instruction of a subprogram, just always keep the last dead line info if the first live instruction doesn't have one.
v5:
 - improve comments (Martin Lau).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
Add a verifier callback to the nfp JIT to remove the instructions the verifier deemed to be dead. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
The verifier will now optimize out branches to dead code; implement the replace_insn callback to take advantage of that optimization. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
Instead of passing env->prog->len around and trying to adjust for optimized-out instructions, just save the initial number of instructions in struct nfp_prog. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
We fail program loading if a jump lands on a skipped instruction. This is for historical reasons: it used to be that we only skipped instructions optimized out based on prior context, so the optimization would be buggy if we jumped directly to such an instruction (because the context would be skipped by the jump). There are cases where instructions can be skipped without any context; for example, there is no point in generating code for "r0 |= 0". We will also soon support dropping dead code, so make the skip logic differentiate between "optimized with preceding context" and other skip types. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
The instruction number is meaningless at the code-gen phase. The target of the instruction is overwritten by nfp_fixup_branches(). The convention is to put the raw offset in the target address as a placeholder; see the cmp_* functions. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Kicinski authored
Let offload JITs know when instructions are replaced and optimized out, so they can update their state appropriately. The optimizations are best effort: if the JIT returns an error from any callback, the verifier will stop notifying it, as its state may now be out of sync, but the verifier itself continues making progress. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
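In callback form, the notification surface looks roughly like the sketch below; the struct name is a placeholder, and struct bpf_prog_offload_ops in include/linux/bpf.h carries the authoritative signatures:

  #include <linux/types.h>

  struct bpf_verifier_env;
  struct bpf_insn;

  /* Sketch: the verifier tells an offload JIT that the insn at 'off' was
   * rewritten in place, or that 'cnt' insns starting at 'off' were removed
   * (with the following insns shifting up). Either callback may return an
   * error, after which the verifier stops sending further notifications to
   * this JIT but keeps verifying.
   */
  struct offload_opt_ops_sketch {
      int (*replace_insn)(struct bpf_verifier_env *env, u32 off,
                          struct bpf_insn *insn);
      int (*remove_insns)(struct bpf_verifier_env *env, u32 off, u32 cnt);
  };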
-