- 29 May, 2019 8 commits
-
-
Andrii Nakryiko authored
pr_warning ultimately may call into user-provided callback function, which can clobber errno value, so we need to save it before that. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
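For illustration, a minimal sketch of the pattern this fix establishes (function and variable names are illustrative, not the exact libbpf code; assumes the usual <errno.h>, <string.h> and <sys/vfs.h> includes):

    int check_path_sketch(const char *path)
    {
            struct statfs st_fs;
            int err = 0;

            if (statfs(path, &st_fs)) {
                    /* save errno before pr_warning(): the user-supplied print
                     * callback it may invoke can clobber errno */
                    err = -errno;
                    pr_warning("failed to statfs %s: %s\n", path, strerror(-err));
            }
            return err;
    }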
-
Andrii Nakryiko authored
Ensure that size of a section w/ BPF instruction is exactly a multiple of BPF instruction size. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
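A sketch of the kind of check described, assuming the ELF section size is available as `size` (names and the exact warning text are illustrative):

    /* a BPF program section must hold a whole number of struct bpf_insn */
    if (size % sizeof(struct bpf_insn) != 0) {
            pr_warning("section '%s': size %zu is not a multiple of %zu\n",
                       sec_name, size, sizeof(struct bpf_insn));
            return -EINVAL;
    }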
-
Quentin Monnet authored
There are two functions in libbpf that support passing a log_level parameter for the verifier for loading programs: bpf_object__load_xattr() and bpf_prog_load_xattr(). Both accept an attribute object containing the log_level, and apply it to the programs to load. It turns out that to effectively load the programs, the latter function eventually relies on the former. This was not taken into account when adding support for log_level in bpf_object__load_xattr(), and the log_level passed to bpf_prog_load_xattr() later gets overwritten with a zero value, thus disabling verifier logs for the program in all cases:

    bpf_prog_load_xattr()             // prog->log_level = attr1->log_level;
    -> bpf_object__load()             // attr2->log_level = 0;
       -> bpf_object__load_xattr()    // <pass prog and attr2>
          -> bpf_object__load_progs() // prog->log_level = attr2->log_level;

Fix this by OR-ing the log_level in bpf_object__load_progs(), instead of overwriting it.

v2: Fix commit log description (confusion on function names in v1).

Fixes: 60276f98 ("libbpf: add bpf_object__load_xattr() API function to pass log_level")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
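The fix amounts to OR-ing instead of assigning in bpf_object__load_progs(), roughly (variable names as described above):

    - prog->log_level = attr->log_level;
    + prog->log_level |= attr->log_level;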
-
Stanislav Fomichev authored
Now that we don't have __rcu markers on the bpf_prog_array helpers, let's use proper rcu_dereference_protected to obtain array pointer under mutex. Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
Now that we don't have __rcu markers on the bpf_prog_array helpers, let's use proper rcu_dereference_protected to obtain array pointer under mutex.

We also don't need __rcu annotations on cgroup_bpf.inactive since it's not read/updated concurrently.

v4:
* drop cgroup_rcu_xyz wrappers and use rcu APIs directly; presumably should be more clear to understand which mutex/refcount protects each particular place

v3:
* amend cgroup_rcu_dereference to include percpu_ref_is_dying; cgroup_bpf is now reference counted and we don't hold cgroup_mutex anymore in cgroup_bpf_release

v2:
* replace xchg with rcu_swap_protected

Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
Now that we don't have __rcu markers on the bpf_prog_array helpers, let's use proper rcu_dereference_protected to obtain array pointer under mutex. Cc: linux-media@vger.kernel.org Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Sean Young <sean@mess.org> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
Drop __rcu annotations and rcu read sections from bpf_prog_array helper functions. They are not needed since all existing callers call those helpers from the rcu update side while holding a mutex. This guarantees that use-after-free could not happen.

In the next patches I'll fix the callers with missing rcu_dereference_protected to make sparse/lockdep happy. The proper way to use these helpers is:

    struct bpf_prog_array __rcu *progs = ...;
    struct bpf_prog_array *p;

    mutex_lock(&mtx);
    p = rcu_dereference_protected(progs, lockdep_is_held(&mtx));
    bpf_prog_array_length(p);
    bpf_prog_array_copy_to_user(p, ...);
    bpf_prog_array_delete_safe(p, ...);
    bpf_prog_array_copy_info(p, ...);
    bpf_prog_array_copy(p, ...);
    bpf_prog_array_free(p);
    mutex_unlock(&mtx);

No functional changes! rcu_dereference_protected with lockdep_is_held should catch any cases where we update prog array without a mutex (I've looked at existing call sites and I think we hold a mutex everywhere).

Motivation is to fix sparse warnings:

    kernel/bpf/core.c:1803:9: warning: incorrect type in argument 1 (different address spaces)
    kernel/bpf/core.c:1803:9:    expected struct callback_head *head
    kernel/bpf/core.c:1803:9:    got struct callback_head [noderef] <asn:4> *
    kernel/bpf/core.c:1877:44: warning: incorrect type in initializer (different address spaces)
    kernel/bpf/core.c:1877:44:    expected struct bpf_prog_array_item *item
    kernel/bpf/core.c:1877:44:    got struct bpf_prog_array_item [noderef] <asn:4> *
    kernel/bpf/core.c:1901:26: warning: incorrect type in assignment (different address spaces)
    kernel/bpf/core.c:1901:26:    expected struct bpf_prog_array_item *existing
    kernel/bpf/core.c:1901:26:    got struct bpf_prog_array_item [noderef] <asn:4> *
    kernel/bpf/core.c:1935:26: warning: incorrect type in assignment (different address spaces)
    kernel/bpf/core.c:1935:26:    expected struct bpf_prog_array_item *[assigned] existing
    kernel/bpf/core.c:1935:26:    got struct bpf_prog_array_item [noderef] <asn:4> *

v2:
* remove comment about potential race; that can't happen because all callers are in rcu-update section

Cc: Roman Gushchin <guro@fb.com>
Acked-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alan Maguire authored
When building the tools/testing/selftest/bpf subdirectory (running both a local directory "make" and a "make -C tools/testing/selftests/bpf") I keep hitting the following compilation error:

    prog_tests/flow_dissector.c: In function ‘create_tap’:
    prog_tests/flow_dissector.c:150:38: error: ‘IFF_NAPI’ undeclared (first use in this function)
       .ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI | IFF_NAPI_FRAGS,
                                          ^
    prog_tests/flow_dissector.c:150:38: note: each undeclared identifier is reported only once for each function it appears in
    prog_tests/flow_dissector.c:150:49: error: ‘IFF_NAPI_FRAGS’ undeclared

Adding include/uapi/linux/if_tun.h to tools/include/uapi/linux resolves the problem and ensures the compilation of the file does not depend on having up-to-date kernel headers locally.

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 28 May, 2019 15 commits
-
-
Alexei Starovoitov authored
Roman Gushchin says:

====================
This patchset implements a cgroup bpf auto-detachment functionality: bpf programs are detached as soon as possible after removal of the cgroup, without waiting for the release of all associated resources.

Patches 2 and 3 are required to implement a corresponding kselftest in patch 4.

v5:
  1) rebase

v4:
  1) release cgroup bpf data using a workqueue
  2) add test_cgroup_attach to .gitignore

v3:
  1) some minor changes and typo fixes

v2:
  1) removed a bogus check in patch 4
  2) moved buf[len] = 0 in patch 2
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Roman Gushchin authored
Add a kselftest to cover bpf auto-detachment functionality. The test creates a cgroup, associates some resources with it, attaches a couple of bpf programs and deletes the cgroup. Then it checks that bpf programs are going away in 5 seconds.

Expected output:
  $ ./test_cgroup_attach
  #override:PASS
  #multi:PASS
  #autodetach:PASS
  test_cgroup_attach:PASS

On a kernel without auto-detaching:
  $ ./test_cgroup_attach
  #override:PASS
  #multi:PASS
  #autodetach:FAIL
  test_cgroup_attach:FAIL

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Roman Gushchin authored
Enable all available cgroup v2 controllers when setting up the environment for the bpf kselftests. It's required to properly test the bpf prog auto-detach feature. Also it will generally increase the code coverage. Signed-off-by: Roman Gushchin <guro@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Roman Gushchin authored
Convert the test_cgrp2_attach2 example into a proper test_cgroup_attach kselftest. It's better because we do run kselftests on a regular basis, so there are better chances to spot a potential regression. Also make it slightly less verbose to conform to the kselftest output style.

Output example:
  $ ./test_cgroup_attach
  #override:PASS
  #multi:PASS
  test_cgroup_attach:PASS

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Roman Gushchin authored
Currently the lifetime of bpf programs attached to a cgroup is bound to the lifetime of the cgroup itself. It means that if a user forgets (or intentionally avoids) to detach a bpf program before removing the cgroup, it will stay attached up to the release of the cgroup. Since the cgroup can stay in the dying state (the state between being rmdir()'ed and being released) for a very long time, it leads to a waste of memory. Also, it blocks a possibility to implement the memcg-based memory accounting for bpf objects, because a circular reference dependency will occur: charged memory pages are pinning the corresponding memory cgroup, and if the memory cgroup is pinning the attached bpf program, nothing will ever be released.

A dying cgroup can not contain any processes, so the only chance for an attached bpf program to be executed is a live socket associated with the cgroup. So in order to release all bpf data early, let's count associated sockets using a new percpu refcounter. On cgroup removal the counter is transitioned to the atomic mode, and as soon as it reaches 0, all bpf programs are detached.

Because cgroup_bpf_release() can block, it can't be called from the percpu ref counter callback directly, so instead an asynchronous work is scheduled.

The reference counter is not socket specific, and can be used for any other types of programs, which can be executed from a cgroup-bpf hook outside of the process context, should such a need arise in the future.

Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: jolsa@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
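A heavily simplified sketch of the release path described above (identifier names follow the description but should be treated as illustrative): the percpu_ref kill callback must not block, so it only schedules the work that detaches the programs.

    /* called when the socket count of a dying cgroup drops to zero */
    static void cgroup_bpf_release_fn(struct percpu_ref *ref)
    {
            struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt);

            /* cgroup_bpf_release(struct work_struct *) may block (it takes
             * mutexes and puts programs), so defer it to a workqueue */
            INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release);
            queue_work(system_wq, &cgrp->bpf.release_work);
    }

    /* on cgroup removal: switch the refcounter to atomic mode and drop the
     * base reference, e.g. percpu_ref_kill(&cgrp->bpf.refcnt); */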
-
Daniel T. Lee authored
This commit fixes a few style problems in samples/bpf/bpf_load.c: - Magic string use of 'DEBUGFS' - Useless zero initialization of a global variable - Minor style fix with whitespace Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
Right now test_tunnel.sh always exits with success even if some of the subtests fail. Since the output is very verbose, it's hard to spot the issues with subtests. Let's fail the script if any subtest fails. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Daniel Borkmann authored
Quentin Monnet says:

====================
This series adds an option to bpftool to make it print additional information via libbpf and the kernel verifier when attempting to load programs.

A new API function is added to libbpf in order to pass the log_level from bpftool with the bpf_object__* part of the API.

Options for a finer control over the log levels to use for libbpf and the verifier could be added in the future, if desired.

v3:
- Fix and clarify commit logs.

v2:
- Do not add distinct options for libbpf and verifier logs, just keep the one that sets all log levels to their maximum. Rename the option.
- Do not offer a way to pick desired log levels. The choice is "use the option to print all logs" or "stick with the defaults".
- Do not export BPF_LOG_* flags to user header.
- Update all man pages (most bpftool operations use libbpf and may print libbpf logs). Verifier logs are only used when attempting to load programs for now, so bpftool-prog.rst and bpftool.rst remain the only pages updated in that regard.

Previous discussion available at:
https://lore.kernel.org/bpf/20190523105426.3938-1-quentin.monnet@netronome.com/
https://lore.kernel.org/bpf/20190429095227.9745-1-quentin.monnet@netronome.com/
====================

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
The "-d" option is used to require all logs available for bpftool. So far it meant telling libbpf to print even debug-level information. But there is another source of info that can be made more verbose: when we attemt to load programs with bpftool, we can pass a log_level parameter to the verifier in order to control the amount of information that is printed to the console. Reuse the "-d" option to print all information the verifier can tell. At this time, this means logs related to BPF_LOG_LEVEL1, BPF_LOG_LEVEL2 and BPF_LOG_STATS. As mentioned in the discussion on the first version of this set, these macros are internal to the kernel (include/linux/bpf_verifier.h) and are not meant to be part of the stable user API, therefore we simply use the related constants to print whatever we can at this time, without trying to tell users what is log_level1 or what is statistics. Verifier logs are only used when loading programs for now (In the future: for loading BTF objects with bpftool? Although libbpf does not currently offer to print verifier info at debug level if no error occurred when loading BTF objects), so bpftool.rst and bpftool-prog.rst are the only man pages to get the update. v3: - Add details on log level and BTF loading at the end of commit log. v2: - Remove the possibility to select the log levels to use (v1 offered a combination of "log_level1", "log_level2" and "stats"). - The macros from kernel header bpf_verifier.h are not used (and therefore not moved to UAPI header). - In v1 this was a distinct option, but is now merged in the only "-d" switch to activate libbpf and verifier debug-level logs all at the same time. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
libbpf was recently made aware of the log_level attribute for programs, used to specify the level of information expected to be dumped by the verifier. Function bpf_prog_load_xattr() got support for this log_level parameter.

But some applications using libbpf rely on another function to load programs, bpf_object__load(), which does not accept any parameter for the log level. Create an API function based on bpf_object__load(), but accepting an "attr" object as a parameter. Then add a log_level field to that object, so that applications calling the new bpf_object__load_xattr() can pick the desired log level.

v3:
- Rewrite commit log.

v2:
- We are in a new cycle, bump libbpf extraversion number.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
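A usage sketch of the new function (struct and field names as added to libbpf by this patch; treat the details as an approximation):

    struct bpf_object_load_attr load_attr = {
            .obj       = obj,   /* struct bpf_object * from bpf_object__open() */
            .log_level = 1,     /* verifier log level applied to each program */
    };

    if (bpf_object__load_xattr(&load_attr))
            fprintf(stderr, "failed to load object\n");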
-
Quentin Monnet authored
libbpf has three levels of priority for output messages: warn, info, debug. By default, debug output is not printed to the console.

Add a new "--debug" (short name: "-d") option to bpftool to print libbpf logs for all three levels. Internally, we simply use the function provided by libbpf to replace the default printing function by one that prints logs regardless of their level.

v2:
- Remove the possibility to select the log-levels to use (v1 offered a combination of "warn", "info" and "debug").
- Rename option and offer a short name: -d|--debug.
- Add option description to all bpftool manual pages (instead of bpftool-prog.rst only), as all commands use libbpf.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
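A sketch of the print callback bpftool installs when "-d" is given, using the libbpf_set_print() API (the callback body is a close approximation, not necessarily the exact bpftool code):

    #include <stdarg.h>
    #include <stdio.h>
    #include <bpf/libbpf.h>

    /* forward every message, including LIBBPF_DEBUG ones, to stderr */
    static int print_all_levels(enum libbpf_print_level level,
                                const char *format, va_list args)
    {
            return vfprintf(stderr, format, args);
    }

    /* with -d/--debug: libbpf_set_print(print_all_levels); */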
-
Hariprasad Kelam authored
Fix below warning reported by coccicheck: /tools/lib/bpf/libbpf.c:3461:1-3: WARNING: PTR_ERR_OR_ZERO can be used Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
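The coccinelle warning points at the usual simplification (a generic before/after sketch, not the exact libbpf hunk):

    /* before */
    if (IS_ERR(ptr))
            return PTR_ERR(ptr);
    return 0;

    /* after */
    return PTR_ERR_OR_ZERO(ptr);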
-
Chang-Hsien Tsai authored
Use fgets() as the while loop condition. Signed-off-by: Chang-Hsien Tsai <luke.tw@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
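A generic sketch of the cleanup (not the exact code being changed): reading and testing in one place avoids a duplicated fgets() call before and inside the loop.

    char line[256];

    while (fgets(line, sizeof(line), fp)) {
            /* process one line; the loop ends cleanly on EOF or error */
    }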
-
Yonghong Song authored
Commit 8b401f9e ("bpf: implement bpf_send_signal() helper") introduced the bpf_send_signal() helper. If the context is nmi, the signal-sending work needs to be deferred to irq_work. If the signal is invalid, the error will appear in irq_work and it won't be propagated to the user. This patch adds an early check in the helper itself to notify the user of an invalid signal, as suggested by Daniel.

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
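A sketch of the early check inside the helper; the exact condition used by the patch is an assumption based on the kernel's usual signal validation.

    /* in BPF_CALL_1(bpf_send_signal, u32, sig), before any irq_work deferral:
     * reject invalid signal numbers right away so the error reaches the user */
    if (unlikely(!valid_signal(sig)))
            return -EINVAL;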
-
Andrii Nakryiko authored
Auto-complete BTF IDs for `btf dump id` sub-command. List of possible BTF IDs is scavenged from loaded BPF programs that have associated BTFs, as there is currently no API in libbpf to fetch list of all BTFs in the system. Suggested-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 25 May, 2019 17 commits
-
-
Matteo Croce authored
This commit adds ibumad to .gitignore, which is currently omitted from the ignore file. Signed-off-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Jiong Wang says:

====================
v9:
 - Split patch 5 in v8; make the bpf uapi header file sync a separate patch. (Alexei)

v8:
 - For stack slot reads, mark them as REG_LIVE_READ64. (Alexei)
 - Change DEF_NOT_SUBREG from -1 to 0. (Alexei)
 - Rebased on top of latest bpf-next.

v7:
 - Drop the first patch in v6, the one adding 32-bit return value and argument type. (Alexei)
 - Rename bpf_jit_hardware_zext to bpf_jit_needs_zext. (Alexei)
 - Use mov32 with imm == 1 to indicate it is zext. (Alexei)
 - JIT back-ends peephole the next insn to optimize out unnecessary zext inserted by the verifier. (Alexei)
 - Testing:
   + patch set tested (bpf selftest) on x64 host with llvm 9.0,
     no regression observed on both JIT and interpreter modes.
   + patch set tested (bpf selftest) on x32 host, by Yanqing Wang, thanks!
     no regression observed on both JIT and interpreter modes.
   + patch set tested (bpf selftest) on RV64 host with llvm 9.0, by Björn Töpel, thanks!
     no regression observed before and after this set with JIT_ALWAYS_ON.
     test_progs_32 also enabled as LLVM 9.0 is used by Björn.
   + cross compiled the other affected targets: arm, PowerPC, SPARC, S390.

v6:
 - Fixed s390 kbuild test robot error. (kbuild)
 - Make comment style in back-end patches more consistent.

v5:
 - Adjusted several test_verifier helpers to make them work on hosts with and without hardware zext. (Naveen)
 - Make sure the zext flag is not set when the verifier is bypassed, for example, libtest_bpf.ko. (Naveen)
 - Conservatively mark bpf main return value as 64-bit. (Alexei)
 - Make sure the read flag is either READ64 or READ32, not a mix of both. (Alexei)
 - Merged patches 1 and 2 of v4. (Alexei)
 - Fixed kbuild test robot warning on NFP. (kbuild)
 - Proposed new BPF_ZEXT insn to have optimal code-gen for various JIT back-ends.
 - Conservatively set zext flags for patched insns.
 - Fixed return value zext for helper function calls.
 - Also adjusted the test_verifier scalability unit test to avoid triggering too many insn patches, which would hang the computer.
 - Re-tested on x86 host with llvm 9.0, no regression on test_verifier, test_progs, test_progs_32.
 - Re-tested offload target (nfp), no regression on local testsuite.

v4:
 - Added the two missing fixes which address two of Jakub's reviews in v3.
 - Rebase on top of bpf-next.

v3:
 - Remove redundant check in "propagate_liveness_reg". (Jakub)
 - Add extra check in "mark_reg_read" to prune more of the search. (Jakub)
 - Re-implemented the "prog_flags" passing mechanism, removed use of a global switch inside libbpf.
 - Enabled high 32-bit randomization beyond "test_verifier" and "test_progs". Now it should be enabled for all possible tests. Re-ran all tests, haven't noticed regressions.
 - Remove RFC tag.

v2:
 - Rebased on top of bpf-next master.
 - Added comments for what the sub-register def index is. (Edward, Alexei)
 - Removed patch 1 which turned the bit mask from enum to macro. (Alexei)
 - Removed sysctl/bpf_jit_32bit_opt. (Alexei)
 - Merged the sub-register def insn index into reg state. (Alexei)
 - Changed test methodology (Alexei):
   + Instead of simple unit tests on x86_64, for which this optimization isn't enabled due to hardware support, poison the high 32-bit of those defs identified as safe to do so. This lets the correctness of this patch set be checked when the daily bpf selftests run, which deliver a very stressful test on host machines like x86_64.
   + hi32 poisoning is gated by a new BPF_F_TEST_RND_HI32 prog flag.
   + BPF_F_TEST_RND_HI32 is enabled for all tests of "test_progs" and "test_verifier"; the latter needs a minor tweak on two unit tests, please see the patch for the change.
   + Introduced a new global variable "libbpf_test_mode" into libbpf. Once it is set to true, it will set BPF_F_TEST_RND_HI32 for all later PROG_LOAD syscalls; the goal is to ease enabling hi32 poisoning on the existing testsuite. We could also introduce new APIs, for example "bpf_prog_test_load", then use -Dbpf_prog_load=bpf_prog_test_load to migrate tests under test_progs, but there are several load APIs, and such a new API would need some changes on structures like "struct bpf_prog_load_attr".
   + Removed the old unit tests. They were based on insn scanning and required quite a few test_verifier generic code changes. Given hi32 randomization offers good test coverage, the unit tests don't add much extra test value.
 - Enhanced the register width check ("is_reg64") when recording sub-register writes; now it returns a more accurate width.
 - Re-ran all tests under "test_progs" and "test_verifier" on x86_64, no regression. Fixed a couple of bugs exposed:
   1. ctx field size transformation was not taken into account.
   2. insn patching could cause loss of original aux data which is important for ctx field conversion.
   3. return value for propagate_liveness was wrong and caused regression on processed insn number.
   4. helper call args weren't handled properly, so path pruning could cause 64-bit read info in a pruned path to be lost.
 - Re-ran Cilium bpf prog for processed-insn-number benchmarking, no regression.

v1:
 - Fixed the missing handling of callee-saved registers for bpf-to-bpf calls, sub-register defs therefore moved to frame state. (Jakub Kicinski)
 - Removed redundant "cross_reg". (Jakub Kicinski)
 - Various coding style & grammar fixes. (Jakub Kicinski, Quentin Monnet)

The eBPF ISA specification requires the high 32-bit to be cleared when the low 32-bit sub-register is written. This applies to the destination register of ALU32 etc. JIT back-ends must guarantee this semantic when doing code-gen. The x86_64 and AArch64 ISAs have the same semantics, so the corresponding JIT back-ends don't need to do extra work. However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches (PowerPC, SPARC etc.) need to do explicit zero extension to meet this requirement, otherwise code like the following will fail:

  u64_value = (u64) u32_value
  ... other uses of u64_value

This is because the compiler could exploit the semantic described above and save those zero extensions when extending u32_value to u64_value. These JIT back-ends are expected to guarantee this through inserting extra zero extensions, which however could be a significant increase in code size. Some benchmarks show there could be ~40% sub-register writes out of total insns, meaning at least ~40% extra code-gen.

One observation is that these extra zero extensions are not always necessary. Take the above code snippet for example: it is possible that u32_value will never be cast into a u64, in which case the value of the high 32-bit of u32_value could be ignored and the extra zero extension could be eliminated.

This patch set implements this idea: insns defining sub-registers will be marked when the high 32-bit of the defined sub-register matters. For those unmarked insns, it is safe to eliminate the high 32-bit clearance for them.

Algo
====
We could use insn-scan based static analysis to tell whether a sub-register def doesn't need zero extension. However, using such static analysis, we must make conservative assumptions at branching points where multiple uses could be introduced. So, for any sub-register def that is active at a branching point, we need to mark it as needing zero extension. This could introduce quite a few false alarms, for example ~25% on Cilium bpf_lxc.

It will be far better to use dynamic data-flow tracing, which the verifier fortunately already has and which could easily be extended to serve the purpose of this patch set:

 - Split read flags into READ32 and READ64.
 - Record the index of the insn that does a sub-register write. Keep the index inside the reg state and update it during verifier insn walking.
 - A full register read on a sub-register marks its definition insn as needing zero extension on the dst register. A new sub-register write overrides the old one.
 - When propagating read64 during path pruning, also mark any insn defining a sub-register that is read in the pruned path as full-register.

Benchmark
=========
 - I estimate the JITed image could be 10% ~ 30% smaller on the affected arches (nfp, arm, x32, riscv, ppc, sparc, s390), depending on the prog.
 - For Cilium bpf_lxc, there are ~11500 insns in the compiled binary (using the latest LLVM snapshot, with -mcpu=v3 -mattr=+alu32 enabled), 4460 of which have sub-register writes (~40%). Calculated by:

     cat dump | grep -P "\tw" | wc -l        (ALU32)
     cat dump | grep -P "r.*=.*u32" | wc -l  (READ_W)
     cat dump | grep -P "r.*=.*u16" | wc -l  (READ_H)
     cat dump | grep -P "r.*=.*u8" | wc -l   (READ_B)

   With this patch set enabled, > 25% of those 4460 could be identified as not needing zero extension on the destination, and the percentage could go further up to more than 50% with some follow-up optimizations based on the infrastructure offered by this set. This leads to significant savings in JITed image size.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch eliminates zero extension code-gen for instructions, including both alu and load/store. The only exception is ctx loads: the offload target doesn't go through the host ctx conversion logic, so we do a customized load and ignore the zext flag set by the verifier. Cc: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Cc: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Tested-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Cc: Wang YanQing <udknight@gmail.com> Tested-by: Wang YanQing <udknight@gmail.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com> Cc: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Cc: Shubham Bansal <illusionist.neo@gmail.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
The previous libbpf patch allows the user to specify "prog_flags" to the bpf program load APIs. To enable high 32-bit randomization for a test, we need to set BPF_F_TEST_RND_HI32 in "prog_flags".

To enable such randomization for all tests, we need to make sure all places are passing BPF_F_TEST_RND_HI32. Changing them one by one is not convenient; also, it would be better if a test could be switched to "normal" running mode without code change.

Given that the program load APIs used across bpf selftests are mostly:
  bpf_prog_load:     load from file
  bpf_load_program:  load from raw insns

a test_stub.c is implemented for bpf selftests; it offers two functions for testing purposes:
  bpf_prog_test_load
  bpf_test_load_program

These are the same as "bpf_prog_load" and "bpf_load_program", except they also set BPF_F_TEST_RND_HI32. Given the *_xattr functions are the APIs to customize any "prog_flags", it makes little sense to put these two functions into libbpf.

Then, the following CFLAGS are passed to compilations for host programs:
  -Dbpf_prog_load=bpf_prog_test_load
  -Dbpf_load_program=bpf_test_load_program

They migrate the used load APIs to the test version, hence enabling high 32-bit randomization for these tests without changing source code.

Besides all these, several testcases use "bpf_prog_load_attr" directly; their call sites are updated to pass BPF_F_TEST_RND_HI32.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
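A simplified sketch of one of the test_stub.c wrappers (the real helper forwards a few more attributes through the _xattr API):

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    /* bpf_prog_load() calls in the tests land here via -Dbpf_prog_load=bpf_prog_test_load */
    int bpf_prog_test_load(const char *file, enum bpf_prog_type type,
                           struct bpf_object **pobj, int *prog_fd)
    {
            struct bpf_prog_load_attr attr = {
                    .file       = file,
                    .prog_type  = type,
                    .prog_flags = BPF_F_TEST_RND_HI32, /* enable hi32 randomization */
            };

            return bpf_prog_load_xattr(&attr, pobj, prog_fd);
    }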
-
Jiong Wang authored
- bpf_fill_ld_abs_vlan_push_pop:
  Prevent zext from happening inside the PUSH_CNT loop. This could happen because of BPF_LD_ABS (32-bit def) + BPF_JMP (64-bit use), or BPF_LD_ABS + EXIT (64-bit use of R0). So, change BPF_JMP to BPF_JMP32 and redefine R0 at the exit path to cut off the data-flow from inside the loop.

- bpf_fill_jump_around_ld_abs:
  Jump range is limited to 16 bit. Every ld_abs is replaced by 6 insns, but on arches like arm, ppc etc, there will be one BPF_ZEXT inserted to extend the error value of the inlined ld_abs sequence, which then contains 7 insns. So, set the dividend to 7 so the testcase could work on all arches.

- bpf_fill_scale1/bpf_fill_scale2:
  Both contain ~1M BPF_ALU32_IMM, which will trigger ~1M insn patcher calls because of hi32 randomization later when BPF_F_TEST_RND_HI32 is set for bpf selftests. The insn patcher is not efficient, so ~1M calls to it will hang the computer. So, change to BPF_ALU64_IMM to avoid hi32 randomization.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
libbpf doesn't allow passing "prog_flags" during bpf program load in a couple of load related APIs, "bpf_load_program_xattr", "load_program" and "bpf_prog_load_xattr". It makes sense to allow passing "prog_flags" which is useful for customizing program loading. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
This patch randomizes high 32-bit of a definition when BPF_F_TEST_RND_HI32 is set. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Sync new bpf prog load flag "BPF_F_TEST_RND_HI32" to tools/. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
x86_64 and AArch64 are perhaps the two arches that run the bpf testsuite most frequently, however the zero extension insertion pass is not enabled for them because of their hardware support. It is critical to guarantee the correctness of this pass, as it is supposed to be enabled by default for a couple of other arches, for example PowerPC, SPARC, arm, NFP etc. Therefore, it would be very useful if there were a way to test this pass on, for example, x86_64.

The test methodology employed by this set is "poisoning" useless bits. The high 32-bit of a definition is randomized if it is identified as not used by any later insn. Such randomization is only enabled under testing mode, which is gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".

Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
After previous patches, the verifier will mark an insn if it really needs zero extension on dst_reg. It is then up to the back-ends to decide how to use such information to eliminate unnecessary zero extension code-gen during JIT compilation.

One approach is for the verifier to insert explicit zero extension, in a generic way, for those insns that need it, and for JIT back-ends to then not generate zero extension for sub-register writes by default. However, only those back-ends which do not have hardware zero extension want this optimization. Back-ends like x86_64 and AArch64 have hardware zero extension support, so the insertion should be disabled for them.

This patch introduces a new target hook "bpf_jit_needs_zext" which returns false by default, meaning verifier zero extension insertion is disabled by default. A back-end could override this hook to return true if it doesn't have hardware support and wants the verifier to insert zero extension explicitly. Offload targets do not use this native target hook; instead, they could get the optimization results using bpf_prog_offload_ops.finalize.

NOTE: arches could have diversified features; it is possible for one arch to have hardware zero extension support for some sub-register write insns but not for all. For example, PowerPC and SPARC have zero extended loads, but not for alu32. So when verifier zero extension insertion is enabled, these JIT back-ends need to peephole insns to remove those zero extensions inserted for insns that actually have hardware zero extension support. The peephole could be as simple as looking at the next insn: if it is a special zero extension insn, then it is safe to eliminate it if the current insn has hardware zero extension support.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
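The hook itself is small; roughly, a weak default in the core and a per-arch override for back-ends without hardware zero extension (a sketch, not necessarily the exact patch hunks):

    /* kernel/bpf/core.c: default, assumes the JIT's ISA zero extends for free */
    bool __weak bpf_jit_needs_zext(void)
    {
            return false;
    }

    /* in an arch JIT that lacks hardware zero extension */
    bool bpf_jit_needs_zext(void)
    {
            return true;
    }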
-
Jiong Wang authored
The encoding for this new variant is based on the BPF_X format. The "imm" field could previously only be 0; now it can also be 1, which means doing zero extension unconditionally:

  .code = BPF_ALU | BPF_MOV | BPF_X
  .dst_reg = DST
  .src_reg = SRC
  .imm = 1

We use this new form for doing zero extension, for which the verifier will guarantee SRC == DST.

Implications on JIT back-ends when doing code-gen for BPF_ALU | BPF_MOV | BPF_X:

1. No change if hardware already does zero extension unconditionally for sub-register writes.
2. Otherwise, when seeing imm == 1, just generate insns to clear the high 32-bit. No need to generate insns for the move, because when imm == 1, dst_reg is the same as src_reg at the moment.

The interpreter doesn't need changes either: it already does unconditional zero extension for mov32.

One helper macro BPF_ZEXT_REG is added to help create a zero extension insn using this new mov32 variant. One helper function insn_is_zext is added for checking whether an insn is a zero extension on dst. This will be widely used by a few JIT back-ends in later patches in this set.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
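A sketch of the two helpers named above, following the encoding given in the commit message (treat the exact definitions as an approximation of the patch):

    /* build the special mov32: dst_reg == src_reg, imm == 1 means "zero-extend DST" */
    #define BPF_ZEXT_REG(DST)                                \
            ((struct bpf_insn) {                             \
                    .code    = BPF_ALU | BPF_MOV | BPF_X,    \
                    .dst_reg = DST,                          \
                    .src_reg = DST,                          \
                    .off     = 0,                            \
                    .imm     = 1 })

    static inline bool insn_is_zext(const struct bpf_insn *insn)
    {
            return insn->code == (BPF_ALU | BPF_MOV | BPF_X) && insn->imm == 1;
    }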
-