- 30 Aug, 2019 23 commits
-
-
Kevin Laatz authored
With the addition of unaligned chunks mode, the documentation needs to be updated to indicate that the incoming addr to the fill ring will only be masked if the user application is run in the aligned chunk mode. This patch also adds a line to explicitly indicate that the incoming addr will not be masked if running the user application in the unaligned chunk mode. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
This patch modifies xdpsock to use mmap instead of posix_memalign. With this change, we can use hugepages when running the application in unaligned chunks mode. Using hugepages makes it more likely that we have physically contiguous memory, which supports the unaligned chunk mode better. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
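As a rough, self-contained illustration of this change (not the actual xdpsock code; the frame count and frame size below are placeholders), reserving the umem area with mmap() allows opting into hugepages and falling back to regular pages:

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  #define NUM_FRAMES 4096
  #define FRAME_SIZE 2048   /* placeholder chunk size */

  int main(void)
  {
      size_t len = (size_t)NUM_FRAMES * FRAME_SIZE;
      int prot = PROT_READ | PROT_WRITE;
      int flags = MAP_PRIVATE | MAP_ANONYMOUS;
      void *bufs;

      /* With unaligned chunks, hugepages make physically contiguous
       * backing more likely; fall back to regular pages if unavailable. */
      bufs = mmap(NULL, len, prot, flags | MAP_HUGETLB, -1, 0);
      if (bufs == MAP_FAILED)
          bufs = mmap(NULL, len, prot, flags, -1, 0);
      if (bufs == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      printf("umem area reserved at %p (%zu bytes)\n", bufs, len);
      munmap(bufs, len);
      return 0;
  }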
-
Kevin Laatz authored
This patch adds buffer recycling support for unaligned buffers. Since we don't mask the addr to 2k at umem_reg in unaligned mode, we need to make sure we give back the correct (original) addr to the fill queue. We achieve this using the new descriptor format and associated masks. The new format uses the upper 16 bits for the offset and the lower 48 bits for the addr. Since we have a field for the offset, we no longer need to modify the actual address. As such, all we have to do to get back the original address is mask for the lower 48 bits (i.e. strip the offset and we get the address on its own). Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
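For illustration, a minimal sketch of that masking (the two macro names mirror the UAPI added by this series, but are redefined locally here so the snippet stands alone):

  #include <stdint.h>
  #include <stdio.h>

  /* New descriptor layout: upper 16 bits = offset, lower 48 bits = addr. */
  #define XSK_UNALIGNED_BUF_OFFSET_SHIFT 48
  #define XSK_UNALIGNED_BUF_ADDR_MASK \
      ((1ULL << XSK_UNALIGNED_BUF_OFFSET_SHIFT) - 1)

  /* Strip the offset: this is the original address that goes back to
   * the fill queue. */
  static uint64_t extract_addr(uint64_t addr)
  {
      return addr & XSK_UNALIGNED_BUF_ADDR_MASK;
  }

  static uint64_t extract_offset(uint64_t addr)
  {
      return addr >> XSK_UNALIGNED_BUF_OFFSET_SHIFT;
  }

  int main(void)
  {
      uint64_t desc = ((uint64_t)256 << XSK_UNALIGNED_BUF_OFFSET_SHIFT) | 0x3100;

      printf("addr=%#llx offset=%llu\n",
             (unsigned long long)extract_addr(desc),
             (unsigned long long)extract_offset(desc));
      return 0;
  }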
-
Kevin Laatz authored
This patch adds support for the unaligned chunks mode. The addition of the unaligned chunks option will allow users to run the application with more relaxed chunk placement in the XDP umem. Unaligned chunks mode can be used with the '-u' or '--unaligned' command line options. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
This patch adds a 'flags' field to the umem_config and umem_reg structs. This will allow for more options to be added for configuring umems. The first use for the flags field is to add a flag for unaligned chunks mode. These flags can either be user-provided or filled with a default. Since we change the size of the xsk_umem_config struct, we need to version the ABI. This patch includes the ABI versioning for xsk_umem__create. The Makefile was also updated to handle multiple function versions in check-abi. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
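A hedged sketch of how an application might use the new field (the flag and config-field names follow this series; buffer allocation and error handling are elided):

  #include <linux/if_xdp.h>
  #include <bpf/xsk.h>

  /* Request unaligned chunk mode through the new 'flags' member; with
   * flags left at 0 the previous (aligned) behaviour is kept. */
  static int create_unaligned_umem(void *bufs, __u64 size,
                                   struct xsk_umem **umem,
                                   struct xsk_ring_prod *fill,
                                   struct xsk_ring_cons *comp)
  {
      struct xsk_umem_config cfg = {
          .fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
          .comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
          .frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
          .frame_headroom = XSK_UMEM__DEFAULT_FRAME_HEADROOM,
          .flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG,
      };

      return xsk_umem__create(umem, bufs, size, fill, comp, &cfg);
  }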
-
Maxim Mikityanskiy authored
Relax the requirements on the XSK frame size to allow it to be smaller than a page and not necessarily a power of two. The current implementation can work in this mode, both with Striding RQ and without it. The code that checks `mtu + headroom <= XSK frame size` is modified accordingly. Any frame size between 2048 and PAGE_SIZE is accepted. Functions that worked with pages only now work with XSK frames, even if their size is different from PAGE_SIZE. With XSK queues, regardless of the frame size, Striding RQ uses the stride size of PAGE_SIZE, and UMR MTTs are posted using starting addresses of frames, but PAGE_SIZE as page size. MTU guarantees that no packet data will overlap with other frames. UMR MTT size is made equal to the stride size of the RQ, because UMEM frames may come in random order, and we need to handle them one by one. PAGE_SIZE is just a power of two that is bigger than any allowed XSK frame size, and it also doesn't require making additional changes to the code. Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Saeed Mahameed <saeedm@mellanox.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
With the addition of the unaligned chunks option, we need to make sure we handle the offsets accordingly based on the mode we are currently running in. This patch modifies the driver to appropriately mask the address for each case. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
With the addition of the unaligned chunks option, we need to make sure we handle the offsets accordingly based on the mode we are currently running in. This patch modifies the driver to appropriately mask the address for each case. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
With the addition of the unaligned chunks option, we need to make sure we handle the offsets accordingly based on the mode we are currently running in. This patch modifies the driver to appropriately mask the address for each case. Signed-off-by: Bruce Richardson <bruce.richardson@intel.com> Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
Currently, addresses are chunk size aligned. This means we are very restricted in terms of where we can place chunks within the umem. For example, if we have a chunk size of 2k, then our chunks can only be placed at 0, 2k, 4k, 6k, 8k and so on (i.e. every 2k starting from 0). This patch introduces the ability to use unaligned chunks. With these changes, we are no longer bound to placing chunks at 2k (or whatever the chunk size is) intervals. Since we are no longer dealing with aligned chunks, they can now cross page boundaries. Checks for page contiguity have been added in order to keep track of which pages are followed by a physically contiguous page. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
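A small standalone sketch of the difference (names and sizes are illustrative, not the kernel code): in aligned mode an address is always rounded down to a chunk boundary, while in unaligned mode it is taken as-is and a chunk may straddle a page:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define CHUNK_SIZE   2048ULL
  #define PAGE_SIZE_4K 4096ULL

  /* Aligned mode: chunks can only start at 0, 2k, 4k, ... */
  static uint64_t aligned_chunk_start(uint64_t addr)
  {
      return addr & ~(CHUNK_SIZE - 1);
  }

  /* Unaligned mode: a chunk may cross a page boundary, which is why the
   * kernel now tracks whether the next page is physically contiguous. */
  static bool chunk_crosses_page(uint64_t addr)
  {
      return (addr & (PAGE_SIZE_4K - 1)) + CHUNK_SIZE > PAGE_SIZE_4K;
  }

  int main(void)
  {
      uint64_t addr = 3000;

      printf("aligned start: %llu, crosses page: %d\n",
             (unsigned long long)aligned_chunk_start(addr),
             chunk_crosses_page(addr));
      return 0;
  }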
-
Kevin Laatz authored
Currently, the dma, addr and handle are modified when we reuse Rx buffers in zero-copy mode. However, this is not required as the inputs to the function are copies, not the original values themselves. As we use the copies within the function, we can use the original 'obi' values directly without having to mask and add the headroom. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Kevin Laatz authored
Currently, the dma, addr and handle are modified when we reuse Rx buffers in zero-copy mode. However, this is not required as the inputs to the function are copies, not the original values themselves. As we use the copies within the function, we can use the original 'old_bi' values directly without having to mask and add the headroom. Signed-off-by: Kevin Laatz <kevin.laatz@intel.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Masanari Iida authored
This patch fixes a spelling typo in test_offload.py. Signed-off-by: Masanari Iida <standby24x7@gmail.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Petar Penkov authored
If a SYN cookie is not issued by tcp_v#_gen_syncookie, then the return value will be exactly 0, rather than <= 0. Let's change the check to reflect that, especially since mss is an unsigned value and cannot be negative. Fixes: 70d66244 ("bpf: add bpf_tcp_gen_syncookie helper") Reported-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Petar Penkov <ppenkov@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
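A tiny standalone sketch of the reasoning (the function below is a stand-in, not the kernel helper): for an unsigned mss, "<= 0" can only ever mean "== 0", so the explicit zero check states the intent:

  #include <stdio.h>

  /* Stand-in for tcp_v4_gen_syncookie()/tcp_v6_gen_syncookie(): returns
   * the MSS on success and exactly 0 when no cookie was issued. */
  static unsigned short fake_gen_syncookie(int issue)
  {
      return issue ? 1460 : 0;
  }

  int main(void)
  {
      unsigned short mss = fake_gen_syncookie(0);

      /* "mss <= 0" compiles, but for an unsigned value it is only a
       * misleading spelling of the check below. */
      if (mss == 0)
          printf("no SYN cookie issued\n");
      return 0;
  }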
-
Daniel Borkmann authored
Jakub Kicinski says: ==================== This set adds a small batching and cache mechanism to the driver. Map dumps require two operations per element - get next, and lookup. Each of those needs a round trip to the device, and on a loaded system scheduling out and in of the dumping process. This set makes the driver request a number of entries at the same time, and if no operation which would modify the map happens from the host side, those entries are used to serve lookup requests for up to 250us, at which point they are considered stale. This set has been measured to provide almost 4x dumping speed improvement. Jaco says:

OLD dump times
  500 000 elements: 26.1s
1 000 000 elements: 54.5s

NEW dump times
  500 000 elements:  7.6s
1 000 000 elements: 16.5s

==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Each get_next and lookup call requires a round trip to the device. However, the device is capable of giving us a few entries back, instead of just one. In this patch we ask for a small yet reasonable number of entries (4) on every get_next call, and on subsequent get_next/lookup calls check this little cache for a hit. The cache is only kept for 250us, and is invalidated on every operation which may modify the map (e.g. delete or update call). Note that operations may be performed simultaneously, so we have to keep track of operations in flight. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
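The idea generalizes beyond the nfp driver; below is a minimal user-space sketch (the 4-entry batch and 250us lifetime come from the description above, everything else is illustrative): a batch of entries serves lookups until it goes stale or a modifying operation invalidates it.

  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>
  #include <time.h>

  #define CACHE_ENTRIES 4          /* entries requested per get_next */
  #define CACHE_TTL_NS  250000     /* 250us before the batch is stale */

  struct cache_entry {
      uint64_t key;
      uint64_t value;
      bool valid;
  };

  struct map_cache {
      struct cache_entry entry[CACHE_ENTRIES];
      uint64_t filled_at_ns;       /* when the batch was fetched */
      unsigned int ops_in_flight;  /* modifying ops currently running */
  };

  static uint64_t now_ns(void)
  {
      struct timespec ts;

      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
  }

  /* Called on update/delete: drop everything we cached. */
  static void cache_invalidate(struct map_cache *c)
  {
      memset(c->entry, 0, sizeof(c->entry));
  }

  static bool cache_lookup(struct map_cache *c, uint64_t key, uint64_t *value)
  {
      int i;

      if (c->ops_in_flight || now_ns() - c->filled_at_ns > CACHE_TTL_NS)
          return false;            /* being modified or stale */

      for (i = 0; i < CACHE_ENTRIES; i++) {
          if (c->entry[i].valid && c->entry[i].key == key) {
              *value = c->entry[i].value;
              return true;
          }
      }
      return false;
  }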
-
Jakub Kicinski authored
If the control channel MTU is too low to support map operations, a warning will be printed. This is not enough; we want to make sure the probe fails in such a scenario, as this would clearly be a faulty configuration. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Daniel Borkmann authored
Quentin Monnet says: ==================== This set attempts to make it easier to build bpftool, in particular when passing a specific output directory. This is a follow-up to the conversation held last month by Lorenz, Ilya and Jakub [0]. The first patch is a minor fix to bpftool's Makefile, regarding the retrieval of the kernel version (which currently prints a non-relevant make warning on some invocations). The second patch improves the Makefile commands to support more "make" invocations, or to fix building with a custom output directory. On Jakub's suggestion, a script is also added to the BPF selftests in order to keep track of the supported build variants. Building bpftool with "make tools/bpf" from the top of the repository generates files in "libbpf/" and "feature/" directories under tools/bpf/ and tools/bpf/bpftool/. The third patch ensures such directories are taken care of on "make clean", and adds them to the relevant .gitignore files. Lastly, the fourth patch is a slightly modified version of Ilya's fix regarding libbpf.a appearing twice on the linking command for bpftool.

[0] https://lore.kernel.org/bpf/CACAyw9-CWRHVH3TJ=Tke2x8YiLsH47sLCijdp=V+5M836R9aAA@mail.gmail.com/

v2:
- Return an error from the check script if one of the make invocations returns non-zero (even if the binary is successfully produced).
- Run "make clean" from bpf/ and not only bpf/bpftool/ in that same script, when relevant.
- Add a patch to clean up the generated "feature/" and "libbpf/" directories.

==================== Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> Cc: Lorenz Bauer <lmb@cloudflare.com> Cc: Ilya Leoshkevich <iii@linux.ibm.com> Cc: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
In bpftool's Makefile, $(LIBS) includes $(LIBBPF), therefore the library is used twice in the linking command. There is no need to have $(LIBBPF) (from $^) on that command; let's link with "$(OBJS) $(LIBS)" instead (but move $(LIBBPF) _before_ the -l flags in $(LIBS)). Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
When building "tools/bpf" from the top of the Linux repository, the build system passes a value for the $(OUTPUT) Makefile variable to tools/bpf/Makefile and tools/bpf/bpftool/Makefile, which results in generating "libbpf/" (for bpftool) and "feature/" (bpf and bpftool) directories inside the tree. This commit adds such directories to the relevant .gitignore files, and edits the Makefiles to ensure they are removed on "make clean". The use of "rm" is also made consistent throughout those Makefiles (relies on the $(RM) variable, use "--" to prevent interpreting $(OUTPUT)/$(DESTDIR) as options. v2: - New patch. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
There are a number of alternative "make" invocations that can be used to compile bpftool. The following invocations are expected to work:

- through the kbuild system, from the top of the repository (make tools/bpf)
- by telling make to change to the bpftool directory (make -C tools/bpf/bpftool)
- by building the BPF tools from tools/ (cd tools && make bpf)
- by running make from the bpftool directory (cd tools/bpf/bpftool && make)

Additionally, setting the O or OUTPUT variables should tell the build system to use a custom output path, for each of these alternatives. This patch fixes the following invocations:

$ make tools/bpf
$ make tools/bpf O=<dir>
$ make -C tools/bpf/bpftool OUTPUT=<dir>
$ make -C tools/bpf/bpftool O=<dir>
$ cd tools/ && make bpf O=<dir>
$ cd tools/bpf/bpftool && make OUTPUT=<dir>
$ cd tools/bpf/bpftool && make O=<dir>

After this commit, the build still fails for two variants when passing the OUTPUT variable:

$ make tools/bpf OUTPUT=<dir>
$ cd tools/ && make bpf OUTPUT=<dir>

In order to remember and check which make invocations are supposed to work, and to document the ones which do not, a new script is added to the BPF selftests. Note that some invocations require the kernel to be configured, so the script skips them if no .config file is found.

v2:
- In make_and_clean(), set $ERROR to 1 when "make" returns non-zero, even if the binary was produced.
- Run "make clean" from the correct directory (bpf/ instead of bpftool/, when relevant).

Reported-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
Bpftool calls the toplevel Makefile to get the kernel version for the sources it is built from. But when the utility is built from the top of the kernel repository, it may dump the following error message for certain architectures (including x86):

$ make tools/bpf
[...]
make[3]: *** [checkbin] Error 1
[...]

This does not prevent bpftool compilation, but may feel disconcerting. The "checkbin" arch-dependent target is not supposed to be called for target "kernelversion", which is a simple "echo" of the version number. It turns out this is caused by the make invocation in tools/bpf/bpftool, which attempts to find implicit rules to apply. Extract from debug output:

Reading makefiles...
Reading makefile 'Makefile'...
Reading makefile 'scripts/Kbuild.include' (search path) (no ~ expansion)...
Reading makefile 'scripts/subarch.include' (search path) (no ~ expansion)...
Reading makefile 'arch/x86/Makefile' (search path) (no ~ expansion)...
Reading makefile 'scripts/Makefile.kcov' (search path) (no ~ expansion)...
Reading makefile 'scripts/Makefile.gcc-plugins' (search path) (no ~ expansion)...
Reading makefile 'scripts/Makefile.kasan' (search path) (no ~ expansion)...
Reading makefile 'scripts/Makefile.extrawarn' (search path) (no ~ expansion)...
Reading makefile 'scripts/Makefile.ubsan' (search path) (no ~ expansion)...
Updating makefiles....
Considering target file 'scripts/Makefile.ubsan'.
Looking for an implicit rule for 'scripts/Makefile.ubsan'.
Trying pattern rule with stem 'Makefile.ubsan'.
[...]
Trying pattern rule with stem 'Makefile.ubsan'.
Trying implicit prerequisite 'scripts/Makefile.ubsan.o'.
Looking for a rule with intermediate file 'scripts/Makefile.ubsan.o'.
Avoiding implicit rule recursion.
Trying pattern rule with stem 'Makefile.ubsan'.
Trying rule prerequisite 'prepare'.
Trying rule prerequisite 'FORCE'.
Found an implicit rule for 'scripts/Makefile.ubsan'.
Considering target file 'prepare'.
File 'prepare' does not exist.
Considering target file 'prepare0'.
File 'prepare0' does not exist.
Considering target file 'archprepare'.
File 'archprepare' does not exist.
Considering target file 'archheaders'.
File 'archheaders' does not exist.
Finished prerequisites of target file 'archheaders'.
Must remake target 'archheaders'.
Putting child 0x55976f4f6980 (archheaders) PID 31743 on the chain.

To avoid that, pass the -r and -R flags to eliminate the use of make built-in rules (and while at it, built-in variables) when running command "make kernelversion" from bpftool's Makefile. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Yauheni Kaliuta authored
This adds support for bpf-to-bpf function calls in the s390 JIT compiler. The JIT compiler converts the bpf call instructions to native branch instructions. After a round of the usual passes, the start addresses of the JITed images for the callee functions are known. Finally, to fix up the branch target addresses, we need to perform an extra pass. Because of the address range in which JITed images are allocated on s390, the offsets of the start addresses of these images from __bpf_call_base are as large as 64 bits. So, for a function call, the imm field of the instruction cannot be used to determine the callee's address. Use the bpf_jit_get_func_addr() helper instead. The patch borrows a lot from:

commit 8c11ea5c ("bpf, arm64: fix getting subprog addr from aux for calls")
commit e2c95a61 ("bpf, ppc64: generalize fetching subprog into bpf_jit_get_func_addr")
commit 8484ce83 ("bpf: powerpc64: add JIT support for multi-function programs")

(including the commit message).

test_verifier (5.3-rc6 with CONFIG_BPF_JIT_ALWAYS_ON=y):

without patch:
Summary: 1501 PASSED, 0 SKIPPED, 47 FAILED

with patch:
Summary: 1540 PASSED, 0 SKIPPED, 8 FAILED

Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 27 Aug, 2019 11 commits
-
-
Stanislav Fomichev authored
.nhoff = 0 is (correctly) reset to ETH_HLEN on the next line so let's drop it. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Daniel Borkmann authored
Stanislav Fomichev says: ====================
* add test__skip to indicate skipped tests
* remove global success/error counts (use environment)
* remove asserts from the tests
* remove unused ret from send_signal test

v3:
* QCHECK -> CHECK_FAIL (Daniel Borkmann)

v2:
* drop patch that changes output to keep consistent with test_verifier (Alexei Starovoitov)
* QCHECK instead of test__fail (Andrii Nakryiko)
* test__skip count number of subtests (Andrii Nakryiko)
==================== Cc: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
The send_signal test returns static codes from its subtests which nobody looks at; let's rely on the CHECK macros instead. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
Remove asserts from the subtests; otherwise they can bring the whole process down. Cc: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
Now that we have a global per-test/per-environment state, there is no longer a need to have global fail/success counters (and there is no need to save/get the diff before/after the test). Introduce the CHECK_FAIL macro (suggested by Andrii) and convert existing tests to it. CHECK_FAIL uses the new test__fail() to record the failure. Cc: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
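A minimal sketch of the shape of CHECK_FAIL, assuming the real definition lives in the selftests' test_progs.h (the exact body there may differ):

  /* Sketch only: records the failure via test__fail() instead of bumping
   * per-test error counters, and evaluates to the (boolean) condition so
   * callers can bail out early. */
  #define CHECK_FAIL(condition) ({                                    \
      int __ret = !!(condition);                                      \
      if (__ret) {                                                    \
          test__fail();                                               \
          fprintf(stderr, "%s:FAIL:%d\n", __func__, __LINE__);        \
      }                                                               \
      __ret;                                                          \
  })

A converted test then reads roughly: if (CHECK_FAIL(map_fd < 0)) return;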
-
Stanislav Fomichev authored
Export test__skip() to indicate skipped tests and use it in test_send_signal_nmi(). Cc: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Daniel Borkmann authored
Alexei Starovoitov says: ==================== Add a few additional tests for precision tracking in the verifier. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Copy-paste of the existing test "calls: cross frame pruning - liveness propagation", but run with a different parentage chain heuristic which stresses a different path in the precision tracking logic. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Use BPF_F_TEST_STATE_FREQ flag to check that precision tracking works as expected by comparing every step it takes. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
sync bpf.h from kernel/ to tools/ Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Introduce BPF_F_TEST_STATE_FREQ flag to stress test parentage chain and state pruning. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
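For illustration, a hedged user-space sketch of setting the flag at load time via the raw bpf(2) syscall (the trivial two-instruction program and the program type are placeholders; using the flag is expected to require privilege and a kernel with this change):

  #include <linux/bpf.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
      /* r0 = 0; exit */
      struct bpf_insn prog[] = {
          { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = 0, .imm = 0 },
          { .code = BPF_JMP | BPF_EXIT },
      };
      union bpf_attr attr;
      int fd;

      memset(&attr, 0, sizeof(attr));
      attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
      attr.insns = (__u64)(unsigned long)prog;
      attr.insn_cnt = 2;
      attr.license = (__u64)(unsigned long)"GPL";
      attr.prog_flags = BPF_F_TEST_STATE_FREQ;  /* save state at every insn */

      fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
      if (fd < 0)
          perror("BPF_PROG_LOAD");
      else
          close(fd);
      return 0;
  }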
-
- 21 Aug, 2019 6 commits
-
-
Quentin Monnet authored
Add a new subcommand to freeze maps from user space. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
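Under the hood this wraps the kernel's BPF_MAP_FREEZE command; a minimal sketch of that operation (obtaining map_fd, e.g. from a pinned path, is elided):

  #include <linux/bpf.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Freeze a map: after this, writes from user space are rejected while
   * BPF programs can still update it. */
  static int map_freeze(int map_fd)
  {
      union bpf_attr attr;

      memset(&attr, 0, sizeof(attr));
      attr.map_fd = map_fd;

      return syscall(__NR_bpf, BPF_MAP_FREEZE, &attr, sizeof(attr));
  }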
-
Quentin Monnet authored
When listing maps, read their "frozen" status from procfs, and tell if maps are frozen. As the commit log for the map freezing command mentions that the feature might be extended with flags (e.g. for write-only instead of read-only) in the future, use an integer and not a boolean for the JSON output. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
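As a hedged illustration of the procfs side (the exact "frozen:" fdinfo field is assumed from the description above; the parsing here is not bpftool's code):

  #include <stdio.h>

  /* Return the frozen count reported for a map fd, 0 if absent,
   * -1 on error. Kept as an integer, matching the JSON output. */
  static int map_frozen_status(int map_fd)
  {
      char path[64], line[128];
      int frozen = 0;
      FILE *f;

      snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", map_fd);
      f = fopen(path, "r");
      if (!f)
          return -1;

      while (fgets(line, sizeof(line), f))
          if (sscanf(line, "frozen: %d", &frozen) == 1)
              break;

      fclose(f);
      return frozen;
  }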
-
Peter Wu authored
Fix a 'struct pt_reg' typo and clarify when bpf_trace_printk discards lines. Affects documentation only. Signed-off-by: Peter Wu <peter@lekensteyn.nl> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Peter Wu authored
I opened /sys/kernel/tracing/trace once and kept reading from it. bpf_trace_printk somehow did not seem to work; no entries were appended to that trace file. It turns out that tracing is disabled when that file is open. Save the next person some time and document this. The trace file is described in Documentation/trace/ftrace.rst, however the implication "tracing is disabled" did not immediately translate to "bpf_trace_printk silently discards entries". Signed-off-by: Peter Wu <peter@lekensteyn.nl> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Peter Wu authored
There is no 'struct pt_reg'. Signed-off-by: Peter Wu <peter@lekensteyn.nl> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Peter Wu authored
PERF_EVENT_IOC_SET_BPF supports uprobes since v4.3 via commit 04a22fae ("tracing, perf: Implement BPF programs attached to uprobes"), and tracepoints since v4.7 via commit 98b5c2c6 ("perf, bpf: allow bpf programs attach to tracepoints"). Signed-off-by: Peter Wu <peter@lekensteyn.nl> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
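For reference, a hedged sketch of the attach path the documentation now describes (the tracepoint id and prog_fd are assumed to have been obtained elsewhere, e.g. from the tracefs "id" file and BPF_PROG_LOAD):

  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Open a tracepoint perf event and attach an already-loaded BPF
   * program to it with PERF_EVENT_IOC_SET_BPF. */
  static int attach_bpf_to_tracepoint(int tracepoint_id, int prog_fd)
  {
      struct perf_event_attr attr;
      int perf_fd;

      memset(&attr, 0, sizeof(attr));
      attr.type = PERF_TYPE_TRACEPOINT;
      attr.size = sizeof(attr);
      attr.config = tracepoint_id;

      perf_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
                        0 /* cpu */, -1 /* group_fd */, 0 /* flags */);
      if (perf_fd < 0)
          return -1;

      if (ioctl(perf_fd, PERF_EVENT_IOC_SET_BPF, prog_fd) ||
          ioctl(perf_fd, PERF_EVENT_IOC_ENABLE, 0)) {
          close(perf_fd);
          return -1;
      }
      return perf_fd;
  }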
-