- 18 Apr, 2019 6 commits
-
-
Yonghong Song authored
I hit the following compilation error with gcc 4.8.5.

  prog_tests/flow_dissector.c: In function ‘test_flow_dissector’:
  prog_tests/flow_dissector.c:155:2: error: ‘for’ loop initial declarations are only allowed in C99 mode
    for (int i = 0; i < ARRAY_SIZE(tests); i++) {
    ^
  prog_tests/flow_dissector.c:155:2: note: use option -std=c99 or -std=gnu99 to compile your code

Let us fix the issue by avoiding this particular c99 feature.

Fixes: a5cb3346 ("selftests/bpf: make flow dissector tests more extensible")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
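A minimal sketch of the kind of change that avoids the C99-only feature (surrounding test code assumed, names taken from the error above):

    int i;

    /* gcc 4.8.5 defaults to a pre-C99 dialect, so the loop counter
     * must be declared before the for statement rather than in it. */
    for (i = 0; i < ARRAY_SIZE(tests); i++) {
            /* ... run tests[i] ... */
    }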
-
Alexei Starovoitov authored
Jesper Dangaard Brouer says:

====================
This patchset utilizes a number of different kernel bulk APIs for optimizing the performance of the XDP cpumap redirect feature. Benchmark details are available here:
https://github.com/xdp-project/xdp-project/blob/master/areas/cpumap/cpumap03-optimizations.org

The performance measurements can be considered micro benchmarks, as they measure dropping packets at different stages in the network stack. Summary based on the above:

Baseline benchmarks
- baseline-redirect: UdpNoPorts: 3,180,074
- baseline-redirect: iptables-raw drop: 6,193,534

Patch1: bpf: cpumap use ptr_ring_consume_batched
- redirect: UdpNoPorts: 3,327,729
- redirect: iptables-raw drop: 6,321,540

Patch2: net: core: introduce build_skb_around
- redirect: UdpNoPorts: 3,221,303
- redirect: iptables-raw drop: 6,320,066

Patch3: bpf: cpumap do bulk allocation of SKBs
- redirect: UdpNoPorts: 3,290,563
- redirect: iptables-raw drop: 6,650,112

Patch4: bpf: cpumap memory prefetchw optimizations for struct page
- redirect: UdpNoPorts: 3,520,250
- redirect: iptables-raw drop: 7,649,604

In this V2 submission I have chosen to drop the SKB-list patch using netif_receive_skb_list(), as it was not showing a performance improvement for these micro benchmarks.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jesper Dangaard Brouer authored
A lot of the performance gain comes from this patch. While analysing the performance overhead, it was found that the largest CPU stalls were caused when touching the struct page area. It is first read with READ_ONCE from build_skb_around() via page_is_pfmemalloc(), and, when freed, written by the page_frag_free() call. Measurements show that the prefetchw (write-intent) variant of the operation is needed to achieve the performance gain. We believe this optimization is two-fold: first, the W-variant saves one step in the cache-coherency protocol; second, it helps us avoid the non-temporal prefetch HW optimizations and brings this into all cache levels. It might be worth investigating if a prefetch into L2 would have the same benefit. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
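As a rough illustration of the technique (a hedged sketch, not the exact cpumap hunk), the write-intent prefetch is issued on each frame's struct page before the SKB-build loop touches it:

    #include <linux/prefetch.h>
    #include <linux/mm.h>

    /* Ask for each struct page cacheline in an exclusive state up
     * front: it will be read via page_is_pfmemalloc() in
     * build_skb_around() and later written by page_frag_free(). */
    for (i = 0; i < n; i++) {
            struct xdp_frame *xdpf = frames[i];
            struct page *page = virt_to_page(xdpf->data);

            prefetchw(page);
    }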
-
Jesper Dangaard Brouer authored
As cpumap now batch-consumes xdp_frame's from the ptr_ring, it knows how many SKBs it needs to allocate. Thus, let us bulk-allocate these SKBs via the kmem_cache_alloc_bulk() API and use the previously introduced function build_skb_around(). Notice that the flag __GFP_ZERO asks the slab/slub allocator to clear the memory for us. This clears a larger area than needed, but my micro benchmarks on Intel CPUs show that it is slightly faster, because a cacheline-aligned area is cleared for the SKBs. (For the SLUB allocator, there is future optimization potential, because SKBs will with high probability originate from the same page. If we can find/identify continuous memory areas, then the Intel CPU memset rep stos will yield a real performance gain.) Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
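A condensed sketch of the allocation step described above (a batch gathered from the ptr_ring is assumed; skbuff_head_cache and the variable names are illustrative, not verbatim code):

    void *skbs[CPUMAP_BATCH];
    int m;

    /* __GFP_ZERO makes slab/slub clear the objects for us; clearing
     * the whole cacheline-aligned object measured slightly faster
     * than a partial clear on the tested Intel CPUs. */
    m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp | __GFP_ZERO,
                              n, skbs);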
-
Jesper Dangaard Brouer authored
The function build_skb() also has the responsibility to allocate and clear the SKB structure. Introduce a new function, build_skb_around(), that moves the responsibility for allocation and clearing to the caller. This allows the caller to use the kmem_cache (slab/slub) bulk allocation API. The next patch uses this function combined with kmem_cache_alloc_bulk(). Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
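The split in responsibility can be summarized as follows (a sketch; the exact prototypes should be checked against the patch):

    /* build_skb(): allocates and clears the SKB itself. */
    skb = build_skb(data, frag_size);

    /* build_skb_around(): the caller allocates (and clears) the SKB,
     * which is what makes bulk allocation possible. */
    skb = kmem_cache_alloc(skbuff_head_cache,
                           GFP_ATOMIC | __GFP_ZERO);
    skb = build_skb_around(skb, data, frag_size);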
-
Jesper Dangaard Brouer authored
Move the ptr_ring dequeue outside the loop that allocates SKBs and calls into the network stack, as these operations can take some time. The ptr_ring is a communication channel between CPUs, where we want to reduce/limit any cacheline bouncing. Do a concentrated bulk dequeue via ptr_ring_consume_batched() to shorten the period during which, and the number of times, the remote cacheline in the ptr_ring is read. Batch size 8 is chosen both to (1) limit the BH-disable period and (2) fit in one cacheline on 64-bit archs. After reducing the BH-disable section further we can consider changing this, while still keeping the L1 cacheline size in mind. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
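A sketch of the resulting structure (illustrative only; the batch size of 8 matches the reasoning above):

    #define CPUMAP_BATCH 8 /* 8 pointers == one 64B cacheline */

    void *frames[CPUMAP_BATCH];
    int i, n;

    /* One concentrated bulk dequeue shortens the window in which the
     * remote cacheline backing the ptr_ring is read. */
    n = ptr_ring_consume_batched(ring, frames, CPUMAP_BATCH);
    for (i = 0; i < n; i++) {
            struct xdp_frame *xdpf = frames[i];
            /* ... allocate SKB, pass to the network stack ... */
    }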
-
- 17 Apr, 2019 13 commits
-
-
Alexei Starovoitov authored
Magnus Karlsson says:

====================
This patch set fixes one bug and removes two dependencies on Linux kernel headers from the XDP socket code in libbpf. A number of people have pointed out that these two dependencies make it hard to build the XDP socket part of libbpf without any kernel header dependencies. The two removed dependencies are:

* Remove the usage of likely and unlikely (compiler.h) in xsk.h. It has been reported that the use of these actually decreases the performance of the ring access code due to an increase in instruction cache misses, so let us just remove them.

* Remove the dependency on barrier.h, as it brings in a lot of kernel headers. As the XDP socket code only uses two simple functions from it, we can reimplement these. As a bonus, the new implementation is faster, as it uses the same barrier primitives as the kernel does when the same code is compiled there. Without this patch, the userland code uses lfence and sfence on x86, which are unnecessarily harsh/thorough.

In the process of removing these dependencies, a missing barrier function for at least PPC64 was discovered. For a full explanation of the missing barrier, please refer to patch 1. The patch set therefore now starts with two patches fixing this. I have also added a patch at the end removing this full memory barrier for x86 only, as it is not needed there.

Structure of the patch set:

Patch 1-2: Adds the missing barrier function in kernel and user space.
Patch 3-4: Removes the dependencies.
Patch 5: Optimizes the added barrier from patch 2 so that it does not do unnecessary work on x86.

v2 -> v3:
* Added missing memory barrier in ring code
* Added an explanation on the three barriers we use in the code
* Moved barrier functions from xsk.h to libbpf_util.h
* Added comment on why we have these functions in libbpf_util.h
* Added a new barrier function in user space that makes it possible to remove the full memory barrier on x86

v1 -> v2:
* Added comment about validity of ARM 32-bit barriers. Only armv7 and above.

/Magnus
====================

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Magnus Karlsson authored
The full memory barrier in the XDP socket rings on the consumer side, between the load of the data and the store of the consumer ring, is there to protect the store from being executed before the load of the data. If this were allowed to happen, the producer might overwrite the data field with a new entry before the consumer got the chance to read it. On x86, stores are guaranteed not to be reordered with older loads, so a full memory barrier is not needed here; a compile-time barrier would be enough. This patch introduces a new primitive in libbpf_util.h that implements a new barrier type (libbpf_smp_rwmb) preventing stores from being reordered with older loads. It is then used in the XDP socket ring access code in libbpf to improve performance. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
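The idea behind the primitive, sketched from the description above (not the verbatim libbpf_util.h contents):

    #if defined(__x86_64__) || defined(__i386__)
    /* x86 never reorders a store with an older load, so stopping the
     * compiler from moving the store is sufficient. */
    # define libbpf_smp_rwmb() asm volatile("" ::: "memory")
    #else
    /* Conservative fallback: a full memory barrier also orders
     * store-after-load. */
    # define libbpf_smp_rwmb() __sync_synchronize()
    #endif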
-
Magnus Karlsson authored
The use of smp_rmb() and smp_wmb() creates a Linux header dependency on barrier.h that is unnecessary in most parts. This patch implements the two small defines that are needed from barrier.h. As a bonus, the new implementations are faster than the defaults, which resolve to sfence and lfence on x86, while we only need a compiler barrier in our case, just as when the same ring access code is compiled in the kernel. Fixes: 1cad0788 ("libbpf: add support for using AF_XDP sockets") Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Magnus Karlsson authored
This patch removes the use of likely and unlikely in xsk.h since they create a dependency on Linux headers as reported by several users. There have also been reports that the use of these decreases performance as the compiler puts the code on two different cache lines instead of on a single one. All in all, I think we are better off without them. Fixes: 1cad0788 ("libbpf: add support for using AF_XDP sockets") Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Magnus Karlsson authored
The ring buffer code of XDP sockets is missing a memory barrier on the consumer side between the load of the data and the write that signals that it is OK for the producer to put new data into the buffer. On architectures that do not guarantee that stores are not reordered with older loads, the producer might put data into the ring before the consumer has had a chance to read it. As IA does guarantee this ordering, it would only need a compiler barrier here, but there are no primitives in barrier.h for this specific case (preventing writes from being reordered before older reads), so I had to add an smp_mb() here, which will translate into a run-time synch operation on IA. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Magnus Karlsson authored
The ring buffer code of XDP sockets is missing a memory barrier on the consumer side between the load of the data and the write that signals that it is OK for the producer to put new data into the buffer. On architectures that do not guarantee that stores are not reordered with older loads, the producer might put data into the ring before the consumer has had a chance to read it. As IA does guarantee this ordering, it would only need a compiler barrier here, but there are no primitives in Linux for this specific case (preventing writes from being reordered before older reads), so I had to add an smp_mb() here, which will translate into a run-time synch operation on IA. Added a longish comment in the code explaining what each barrier in the ring implementation accomplishes and what would happen if we removed one of them. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
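The ordering problem can be sketched as follows (hypothetical names; the real code lives in the xsk queue implementation):

    /* Consumer side of the ring. */
    entry = ring[consumer & mask];  /* 1: load the data entry        */
    smp_mb();                       /* 2: order 1 before 3           */
    q->consumer = consumer + 1;     /* 3: hand the slot back         */

    /* Without the barrier at 2, a weakly ordered CPU may make the
     * store at 3 visible before the load at 1 completes, letting the
     * producer overwrite the slot before it has been read. */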
-
Prashant Bhole authored
Let's print the btf id of a map, similar to the way we print it for programs. Sample output:

  user@test# bpftool map -f
  61: lpm_trie  flags 0x1
          key 20B  value 8B  max_entries 1  memlock 4096B
  133: array  name test_btf_id  flags 0x0
          key 4B  value 4B  max_entries 4  memlock 4096B
          pinned /sys/fs/bpf/test100
          btf_id 174
  170: array  name test_btf_id  flags 0x0
          key 4B  value 4B  max_entries 4  memlock 4096B
          btf_id 240

Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Prashant Bhole authored
Let's move the final newline printing in show_map_close_plain() to the end of the function, for correctness and consistency with prog.c. Also make the related change to the line that prints the pinned file name. Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrey Ignatov authored
Add support for the recently added BPF_PROG_TYPE_CGROUP_SYSCTL program type and BPF_CGROUP_SYSCTL attach type. Example of bpftool output with a sysctl program from selftests:

  # bpftool p load ./test_sysctl_prog.o /mnt/bpf/sysctl_prog type cgroup/sysctl
  # bpftool p l
  9: cgroup_sysctl  name sysctl_tcp_mem  tag 0dd05f81a8d0d52e  gpl
          loaded_at 2019-04-16T12:57:27-0700  uid 0
          xlated 1008B  jited 623B  memlock 4096B
  # bpftool c a /mnt/cgroup2/bla sysctl id 9
  # bpftool c t
  CgroupPath        ID  AttachType  AttachFlags  Name
  /mnt/cgroup2/bla  9   sysctl                   sysctl_tcp_mem
  # bpftool c d /mnt/cgroup2/bla sysctl id 9
  # bpftool c t
  CgroupPath        ID  AttachType  AttachFlags  Name

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Using %ld for printing out a value of ptrdiff_t type is not portable between 32-bit and 64-bit archs. This is causing compilation errors for libbpf on 32-bit platforms (discovered as part of an effort to integrate libbpf into systemd [0]). The proper formatter is %td, which is what this patch uses.

v1 -> v2:
- add Reported-by
- provide more context on how this issue was discovered

[0] https://github.com/systemd/systemd/pull/12151

Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
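A self-contained illustration of the portability point (plain userspace C):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
            char buf[64];
            ptrdiff_t off = &buf[16] - &buf[0];

            /* %ld assumes ptrdiff_t is long, which is false on many
             * 32-bit targets; %td is defined for ptrdiff_t. */
            printf("offset: %td\n", off);
            return 0;
    }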
-
Prashant Bhole authored
verifier.c uses BPF_CAST_CALL for casting BPF calls, except in one place in jit_subprogs(). Let's use the macro there as well, for consistency. Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
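For reference, the macro in question (quoted from memory from include/linux/filter.h of that era; verify against the tree):

    #define BPF_CAST_CALL(x) \
            ((u64 (*)(u64, u64, u64, u64, u64))(x))

The jit_subprogs() site presumably becomes something along these lines:

    insn->imm = BPF_CAST_CALL(func[subprog]->bpf_func) -
                __bpf_call_base;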
-
Viet Hoang Tran authored
The helper function bpf_sock_ops_cb_flags_set() can be used to both set and clear the sock_ops callback flags. However, its current behavior is not consistent: a BPF program may clear a flag if more than one was set, or replace a flag with another one, but it cannot clear all flags. This patch also updates the documentation to clarify the helper's ability to clear flags. Signed-off-by: Hoang Tran <hoang.tran@uclouvain.be> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
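A minimal sockops sketch of setting and then clearing the callback flags (the flag names are the standard ones from the bpf uapi; the program itself is illustrative and assumes the usual libbpf helper headers):

    SEC("sockops")
    int cb_flags_example(struct bpf_sock_ops *skops)
    {
            /* Subscribe to RTO and retransmit callbacks. */
            bpf_sock_ops_cb_flags_set(skops,
                                      BPF_SOCK_OPS_RTO_CB_FLAG |
                                      BPF_SOCK_OPS_RETRANS_CB_FLAG);

            /* With consistent semantics, passing 0 clears all flags. */
            bpf_sock_ops_cb_flags_set(skops, 0);
            return 1;
    }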
-
Peter Oskolkov authored
This patch adds tests validating that VRF and BPF-LWT encap work together well, as requested by David Ahern. Signed-off-by: Peter Oskolkov <posk@google.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 16 Apr, 2019 16 commits
-
-
Alban Crequy authored
commit f1a2e44a ("bpf: add queue and stack maps") introduced new BPF helper functions:

- BPF_FUNC_map_push_elem
- BPF_FUNC_map_pop_elem
- BPF_FUNC_map_peek_elem

but they were made available only for network BPF programs. This patch makes them available for tracepoint, cgroup and lirc programs. Signed-off-by: Alban Crequy <alban@kinvolk.io> Cc: Mauricio Vasquez B <mauricio.vasquez@polito.it> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
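A hedged sketch of what this enables: a tracepoint program pushing into a queue map (the map and section names are illustrative, in the bpf_map_def style of that era):

    struct bpf_map_def SEC("maps") event_queue = {
            .type        = BPF_MAP_TYPE_QUEUE,
            .key_size    = 0,              /* queues have no key */
            .value_size  = sizeof(__u64),
            .max_entries = 128,
    };

    SEC("tracepoint/syscalls/sys_enter_write")
    int on_write(void *ctx)
    {
            __u64 ts = bpf_ktime_get_ns();

            /* Rejected by the verifier for tracepoint programs
             * before this patch. */
            bpf_map_push_elem(&event_queue, &ts, 0);
            return 0;
    }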
-
Stanislav Fomichev authored
Rewrite the selftest to iterate over an array of input packets and expected flow_keys. This should make it easier to extend this test with additional cases without too much boilerplate. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Add two tests to check that a sequence of 1024 jumps is verifiable. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Benjamin Poirier authored
Avoid outputting a series of "value: No space left on device" lines. The value itself is not wrong, but bpf_fd_reuseport_array_lookup_elem() can only return it if the map was created with value_size = 8, and there's nothing bpftool can do about it. Instead of repeating this error for every key in the map, print an explanatory warning and a specialized error.

Example before:

  key: 00 00 00 00  value: No space left on device
  key: 01 00 00 00  value: No space left on device
  key: 02 00 00 00  value: No space left on device
  Found 0 elements

Example after:

  Warning: cannot read values from reuseport_sockarray map with value_size != 8
  key: 00 00 00 00  value: <cannot read>
  key: 01 00 00 00  value: <cannot read>
  key: 02 00 00 00  value: <cannot read>
  Found 0 elements

Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Benjamin Poirier authored
Commit bf598a8f ("bpftool: Improve handling of ENOENT on map dumps") used print_entry_plain() in case of ENOENT. However, that commit introduced dead code. Per-cpu maps are zero-filled; when reading them, it's all or nothing. There will never be a case where some cpus have an entry and others don't. The truth is that ENOENT is an error case. Use print_entry_error() to output the desired message. That function's "value" parameter is also renamed to indicate that we never use it for an actual map value. The output format is unchanged. Signed-off-by: Benjamin Poirier <bpoirier@suse.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
Linux kernel now supports statistics for BPF programs, and bpftool is able to dump them. However, these statistics are not enabled by default, and administrators may not know how to access them. Add a paragraph in bpftool documentation, under the description of the "bpftool prog show" command, to explain that such statistics are available and that their collection is controlled via a dedicated sysctl knob. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
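The sysctl knob in question is kernel.bpf_stats_enabled (off by default). A minimal sketch in C of flipping it programmatically, equivalent to sysctl -w kernel.bpf_stats_enabled=1:

    #include <fcntl.h>
    #include <unistd.h>

    /* Enable run_time_ns/run_cnt collection for all BPF programs. */
    static int enable_bpf_stats(void)
    {
            int fd = open("/proc/sys/kernel/bpf_stats_enabled",
                          O_WRONLY);

            if (fd < 0)
                    return -1;
            if (write(fd, "1", 1) != 1) {
                    close(fd);
                    return -1;
            }
            close(fd);
            return 0;
    }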
-
Quentin Monnet authored
The manual pages claim that option "-v" (lower case) prints the version number for bpftool. This is wrong: the short name of the option is "-V" (upper case). Fix the documentation accordingly. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
The "pinmaps" keyword is present in the man page, in the verbose description of the "bpftool prog load" command. However, it is missing from the summary of available commands at the beginning of the file. Add it there as well. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
When trying to dump the tree of all cgroups under a given root node, bpftool attempts to query programs of all available attach types. Some of those attach types do not support queries, so several of the calls are actually expected to fail. Those calls set errno to EINVAL, which has no consequence for dumping the rest of the tree. It does have consequences, however, if errno is inspected at a later time. For example, bpftool batch mode relies on errno to determine whether a command has succeeded and whether it should carry on with the next command. Setting errno to EINVAL when everything worked as expected would therefore make such a command fail:

  # echo 'cgroup tree \n net show' | \
        bpftool batch file -

To improve this, reset errno when its value is EINVAL after attempting to show programs for all existing attach types in do_show_tree_fn(). Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
Commit 569b0c77 ("tools/bpftool: show btf id in program information") made bpftool print an empty line after each program entry when listing the BPF programs loaded on the system (plain output). This is especially confusing when some programs have an associated BTF id, and others don't. Let's remove the blank line. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Willem de Bruijn authored
The ENCAP flags in bpf_skb_adjust_room are ignored on decap with bpf_skb_net_shrink. Reserve these bits for future use. Fixes: 868d5235 ("bpf: add bpf_skb_adjust_room encap flags") Signed-off-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alan Maguire authored
Replace the tab after #define with a space, in line with the rest of the definitions. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Stanislav Fomichev authored
It was removed in commit 166b5a7f ("selftests_bpf: extend test_tc_tunnel for UDP encap") without any explanation. Otherwise I see:

  progs/test_tc_tunnel.c:160:17: warning: taking address of packed member 'ip' of class or structure 'v4hdr' may result in an unaligned pointer value [-Waddress-of-packed-member]
          set_ipv4_csum(&h_outer.ip);
                         ^~~~~~~~~~
  1 warning generated.

Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Fixes: 166b5a7f ("selftests_bpf: extend test_tc_tunnel for UDP encap")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Andrii Nakryiko authored
Add a test case verifying that dedup happens (INTs are deduped in this case) and that VAR/DATASEC types are not deduped but have their referenced type IDs adjusted correctly. Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Yonghong Song <yhs@fb.com> Cc: Alexei Starovoitov <ast@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Andrii Nakryiko authored
This patch adds support for VAR and DATASEC in btf_dedup(). VAR/DATASEC are never deduplicated, but they need to be processed anyway as types they refer to might need to be remapped due to deduplication and compaction. Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Yonghong Song <yhs@fb.com> Cc: Alexei Starovoitov <ast@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Andrii Nakryiko authored
When CONFIG_DEBUG_INFO_BTF is enabled but the available version of pahole is too old to support BTF generation, the build script is supposed to emit a warning and proceed with the build. Due to using exit instead of return from a bash function, the existing handling code prematurely exits with exit code 0, without completing some of the build steps. This patch fixes the issue by correctly returning from the gen_btf() function only. Fixes: e83b9f55 ("kbuild: add ability to generate BTF type info for vmlinux") Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Alexei Starovoitov <ast@fb.com> Cc: Yonghong Song <yhs@fb.com> Cc: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 13 Apr, 2019 4 commits
-
-
Jiong Wang authored
There are a few "regs[regno]" accesses here and there across "check_reg_arg"; this patch factors them out into a simple "reg" pointer. The intention is to simplify code indentation and make the later patches in this set look cleaner. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
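The shape of the refactor (an illustrative before/after, not the exact hunk):

    /* before: repeated long dereferences */
    if (regs[regno].type == SCALAR_VALUE)
            mark_reg_read(env, &regs[regno], regs[regno].parent);

    /* after: a local pointer shortens the expressions */
    struct bpf_reg_state *reg = &regs[regno];

    if (reg->type == SCALAR_VALUE)
            mark_reg_read(env, reg, reg->parent);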
-
Jiong Wang authored
After the code refactor in the previous patches, it becomes clear that the propagation logic inside the for loop in "propagate_liveness" is self-contained enough to be factored out into a common function, "propagate_liveness_reg". Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Accesses to the register states were not factored out; the consequence was long dereferencing code that made the indentation hard to read. This patch factors out this code so the core logic in the loop is easier to follow. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiong Wang authored
Propagation for registers and stack slots was done in separate for loops, though they fit naturally into a single loop. Merging them also lets them share some common variables in later patches. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 12 Apr, 2019 1 commit
-
-
Andrey Ignatov authored
Fix a new warning reported by kbuild for make ARCH=i386:

  In file included from kernel/bpf/cgroup.c:11:0:
  kernel/bpf/cgroup.c: In function '__cgroup_bpf_run_filter_sysctl':
  include/linux/kernel.h:827:29: warning: comparison of distinct pointer types lacks a cast
    (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
                               ^
  include/linux/kernel.h:841:4: note: in expansion of macro '__typecheck'
     (__typecheck(x, y) && __no_side_effects(x, y))
      ^~~~~~~~~~~
  include/linux/kernel.h:851:24: note: in expansion of macro '__safe_cmp'
    __builtin_choose_expr(__safe_cmp(x, y), \
                          ^~~~~~~~~~
  include/linux/kernel.h:860:19: note: in expansion of macro '__careful_cmp'
   #define min(x, y) __careful_cmp(x, y, <)
                     ^~~~~~~~~~~~~
  >> kernel/bpf/cgroup.c:837:17: note: in expansion of macro 'min'
     ctx.new_len = min(PAGE_SIZE, *pcount);
                   ^~~

Fixes: 4e63acdf ("bpf: Introduce bpf_sysctl_{get,set}_new_value helpers")
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
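min() type-checks its arguments, and on i386 PAGE_SIZE (unsigned long) and *pcount (size_t) are distinct types, which is what trips the check. A typical remedy, and presumably the shape of the fix here (a sketch, not the verified upstream hunk), is min_t(), which casts both sides to one type:

    /* cast both operands to size_t before comparing */
    ctx.new_len = min_t(size_t, PAGE_SIZE, *pcount);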
-