Merge branch 'bpf-inline-lookups'
Alexei Starovoitov says:
====================
bpf: inline bpf_map_lookup_elem()
bpf_map_lookup_elem() is one of the most frequently used helper functions.
Improve JITed program performance by inlining this helper.
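For context, here is a minimal sketch, in the samples/bpf style of this era, of the kind of call site that benefits. The map name, probe point, and section names are illustrative, not taken from this series:

#include <uapi/linux/bpf.h>
#include <linux/ptrace.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") counters = {
	.type = BPF_MAP_TYPE_ARRAY,
	.key_size = sizeof(__u32),
	.value_size = sizeof(long),
	.max_entries = 1,
};

SEC("kprobe/sys_getuid")
int count_calls(struct pt_regs *ctx)
{
	__u32 key = 0;
	long *value;

	/* Without inlining this compiles to a CALL into the
	 * bpf_map_lookup_elem() helper; with inlining the verifier
	 * can rewrite the call into direct BPF instructions.
	 */
	value = bpf_map_lookup_elem(&counters, &key);
	if (value)
		__sync_fetch_and_add(value, 1);
	return 0;
}

char _license[] SEC("license") = "GPL";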
bpf_map_type    before    after
hash            58M       74M
array           174M      280M
The values are the number of lookups per second under ideal conditions,
as measured by the micro-benchmark in patch 6.
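The benchmark drives the lookups through a kprobe'd syscall, so each syscall costs one map lookup. A rough user-space sketch of the measurement idea (illustrative names and loop count, not the actual patch 6 code):

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define LOOPS 10000000UL

static unsigned long long time_get_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
	unsigned long long start, delta;
	unsigned long i;

	/* assumes a BPF program doing one bpf_map_lookup_elem()
	 * per call is already attached to a kprobe on this syscall
	 */
	start = time_get_ns();
	for (i = 0; i < LOOPS; i++)
		syscall(SYS_getuid);
	delta = time_get_ns() - start;

	printf("%lu lookups in %llu ns -> %.0fM lookups/sec\n",
	       LOOPS, delta, LOOPS * 1000.0 / delta);
	return 0;
}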
The 'perf report' for HASH map type:
before:
54.23% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
14.24% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
8.84% map_perf_test [kernel.kallsyms] [k] htab_map_lookup_elem
5.93% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
2.30% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
1.49% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
after:
60.03% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
18.07% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
2.91% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
1.94% map_perf_test [kernel.kallsyms] [k] _einittext
1.90% map_perf_test [kernel.kallsyms] [k] __audit_syscall_exit
1.72% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
So the cost of htab_map_lookup_elem() and bpf_map_lookup_elem()
is gone after inlining.
'per-cpu' and 'lru' map types can be optimized similarly in the future.
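To illustrate the mechanism on the array map, here is a sketch of a verifier callback that emits the lookup as native BPF instructions in place of the helper call. It follows the shape of a map_gen_lookup-style hook; details may differ from the actual patches:

static u32 array_map_gen_lookup(struct bpf_map *map,
				struct bpf_insn *insn_buf)
{
	struct bpf_insn *insn = insn_buf;
	u32 elem_size = round_up(map->value_size, 8);
	const int ret = BPF_REG_0;
	const int map_ptr = BPF_REG_1;
	const int index = BPF_REG_2;

	/* R1 = &array->value[0]; R0 = *(u32 *)key */
	*insn++ = BPF_ALU64_IMM(BPF_ADD, map_ptr,
				offsetof(struct bpf_array, value));
	*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
	/* out-of-bounds index => return NULL */
	*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 3);
	/* R0 = &array->value[index * elem_size] */
	*insn++ = BPF_ALU64_IMM(BPF_MUL, ret, elem_size);
	*insn++ = BPF_ALU64_REG(BPF_ADD, ret, map_ptr);
	*insn++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
	*insn++ = BPF_MOV64_IMM(ret, 0);
	return insn - insn_buf;
}

The JIT then compiles these instructions like any others, which is why
the helper symbols disappear from the 'after' profile.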
Note that sparse will complain that bpf is addictive ;)
kernel/bpf/hashtab.c:438:19: sparse: subtraction of functions? Share your drugs
kernel/bpf/verifier.c:3342:38: sparse: subtraction of functions? Share your drugs
It's not a new warning, just in new places.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>