- 12 Aug, 2015 10 commits
-
Arnaldo Carvalho de Melo authored
This time using 'trinity' to test these: fchmodat, futimesat, llistxattr, lremovexattr, lstat, mknodat, mq_unlink, stat and vmsplice.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-a1uqu249nwwh0ixrhm80k4a4@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ingo Molnar authored
Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:

- Introduce 'srcfile' sort key: (Andi Kleen)

    # perf record -F 10000 usleep 1
    # perf report --stdio --dsos '[kernel.vmlinux]' -s srcfile
    <SNIP>
    # Overhead  Source File
        26.49%  copy_page_64.S
         5.49%  signal.c
         0.51%  msr.h
    #

  It can be combined with other fields, for instance, experiment with '-s srcfile,symbol'.

  There are some oddities in some distros and with some specific DSOs, being investigated, so your mileage may vary.

- Update the column width for the "srcline" sort key (Arnaldo Carvalho de Melo)

- Support per-event 'freq' term: (Namhyung Kim)

    $ perf record -e 'cpu/instructions,freq=1234/',cycles -c 1000 sleep 1
    $ perf evlist -F
    cpu/instructions,freq=1234/: sample_freq=1234
    cycles: sample_period=1000
    $

Infrastructure changes:

- Move perf_counts struct and functions into separate object (Jiri Olsa)

- Unset perf_event_attr::freq when period term is set (Jiri Olsa)

- Move callchain option parsing code to util.c (Kan Liang)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Takao Indoh authored
This patch just cleans up some Intel Processor Trace files; it does not change behavior. It removes unused definitions and replaces a constant value with a macro.

Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1438681015-5124-1-git-send-email-indou.takao@jp.fujitsu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Alexander Shishkin authored
A question [1] was raised about the use of page::private in AUX buffer allocations, so let's add a clarification about its intended use. The private field and flag are used by perf's rb_alloc_aux() path to tell the pmu driver the size of each high-order allocation, so that the driver can program those appropriately into its hardware. This only matters for PMUs that don't support hardware scatter tables. Otherwise, every page in the buffer is just a page.

This patch adds a comment about the private field to the AUX buffer allocation path.

[1] http://marc.info/?l=linux-kernel&m=143803696607968

Reported-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1438063204-665-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
Currently we only update the sysfs event files per available MSR; we don't actually disallow creating unlisted events. Rework things such that the detection, sysfs listing and event creation are better coordinated. Sadly it appears it's impossible to probe R/O MSRs under virt. This means we have to do the full model table to avoid listing all MSRs all the time.

Tested-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Matt Fleming authored
Tony reports that booting his 144-cpu machine with maxcpus=10 triggers the following WARN_ON():

  [ 21.045727] WARNING: CPU: 8 PID: 647 at arch/x86/kernel/cpu/perf_event_intel_cqm.c:1267 intel_cqm_cpu_prepare+0x75/0x90()
  [ 21.045744] CPU: 8 PID: 647 Comm: systemd-udevd Not tainted 4.2.0-rc4 #1
  [ 21.045745] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0066.R00.1506021730 06/02/2015
  [ 21.045747] 0000000000000000 0000000082771b09 ffff880856333ba8 ffffffff81669b67
  [ 21.045748] 0000000000000000 0000000000000000 ffff880856333be8 ffffffff8107b02a
  [ 21.045750] ffff88085b789800 ffff88085f68a020 ffffffff819e2470 000000000000000a
  [ 21.045750] Call Trace:
  [ 21.045757] [<ffffffff81669b67>] dump_stack+0x45/0x57
  [ 21.045759] [<ffffffff8107b02a>] warn_slowpath_common+0x8a/0xc0
  [ 21.045761] [<ffffffff8107b15a>] warn_slowpath_null+0x1a/0x20
  [ 21.045762] [<ffffffff81036725>] intel_cqm_cpu_prepare+0x75/0x90
  [ 21.045764] [<ffffffff81036872>] intel_cqm_cpu_notifier+0x42/0x160
  [ 21.045767] [<ffffffff8109a33d>] notifier_call_chain+0x4d/0x80
  [ 21.045769] [<ffffffff8109a44e>] __raw_notifier_call_chain+0xe/0x10
  [ 21.045770] [<ffffffff8107b538>] _cpu_up+0xe8/0x190
  [ 21.045771] [<ffffffff8107b65a>] cpu_up+0x7a/0xa0
  [ 21.045774] [<ffffffff8165e920>] cpu_subsys_online+0x40/0x90
  [ 21.045777] [<ffffffff81433b37>] device_online+0x67/0x90
  [ 21.045778] [<ffffffff81433bea>] online_store+0x8a/0xa0
  [ 21.045782] [<ffffffff81430e78>] dev_attr_store+0x18/0x30
  [ 21.045785] [<ffffffff8126b6ba>] sysfs_kf_write+0x3a/0x50
  [ 21.045786] [<ffffffff8126ad40>] kernfs_fop_write+0x120/0x170
  [ 21.045789] [<ffffffff811f0b77>] __vfs_write+0x37/0x100
  [ 21.045791] [<ffffffff811f38b8>] ? __sb_start_write+0x58/0x110
  [ 21.045795] [<ffffffff81296d2d>] ? security_file_permission+0x3d/0xc0
  [ 21.045796] [<ffffffff811f1279>] vfs_write+0xa9/0x190
  [ 21.045797] [<ffffffff811f2075>] SyS_write+0x55/0xc0
  [ 21.045800] [<ffffffff81067300>] ? do_page_fault+0x30/0x80
  [ 21.045804] [<ffffffff816709ae>] entry_SYSCALL_64_fastpath+0x12/0x71
  [ 21.045805] ---[ end trace fe228b836d8af405 ]---

The root cause is that CPU_UP_PREPARE is completely the wrong notifier action from which to access cpu_data(), because smp_store_cpu_info() won't have been executed by the target CPU at that point, which in turn means that ->x86_cache_max_rmid and ->x86_cache_occ_scale haven't been filled out.

Instead let's invoke our handler from CPU_STARTING and rename it appropriately.

Reported-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vikas Shivappa <vikas.shivappa@intel.com>
Link: http://lkml.kernel.org/r/1438863163-14083-1-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
We fail to free the shared_regs allocation if the constraint_list allocation fails. Cure this and be more consistent in NULL-ing the pointers after free.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
I ran the perf fuzzer, which triggered some WARN()s caused by trying to stop/restart an event on the wrong CPU. Use the normal IPI pattern to ensure we run the code on the correct CPU.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: bad7192b ("perf: Fix PERF_EVENT_IOC_PERIOD to force-reset the period")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Ben Hutchings authored
If rb->aux_refcount is decremented to zero before rb->refcount, __rb_free_aux() may be called twice, resulting in a double free of rb->aux_pages. Fix this by adding a check to __rb_free_aux().

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: 57ffc5ca ("perf: Fix AUX buffer refcounting")
Link: http://lkml.kernel.org/r/1437953468.12842.17.camel@decadent.org.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
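The actual guard is in the kernel's ring-buffer code and is not shown here; as a rough userspace illustration of the idempotent-free pattern the fix relies on (hypothetical struct and function names), the free path checks whether the pages are still allocated and clears the bookkeeping so a second call is a no-op:

    #include <stdlib.h>

    struct aux_buf {
            void **pages;
            int    nr_pages;        /* > 0 while the pages are still allocated */
    };

    /* Free the pages at most once, no matter how often this is called. */
    static void aux_buf_free(struct aux_buf *buf)
    {
            int i;

            if (!buf->nr_pages)     /* already freed (or never allocated) */
                    return;

            for (i = 0; i < buf->nr_pages; i++)
                    free(buf->pages[i]);
            free(buf->pages);
            buf->pages = NULL;
            buf->nr_pages = 0;      /* mark as freed so a later call does nothing */
    }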
-
- 10 Aug, 2015 7 commits
-
Namhyung Kim authored
Currently perf evlist -F shows the number as if it's always a sampling frequency. But we now support per-event freq/period settings, so it'd be better to show more detailed info about whether it's a freq or a period.

  $ perf record -e 'cpu/config=1/,cpu/config=2,period=300000/' sleep 1
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.017 MB perf.data ]

  $ perf evlist -F
  cpu/config=1/: sample_freq=4000
  cpu/config=2,period=300000/: sample_period=300000

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1439102724-14079-2-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Namhyung Kim authored
Now perf can set per-event values for time and (sampling) period. But I guess most users like me just want to set a frequency rather than a period, so add the 'freq' term to the event parser.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1439102724-14079-1-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Andi Kleen authored
In some cases it's useful to characterize samples by file. This is useful to get a higher level categorization, for example to map cost to subsystems. Add a srcfile sort key to perf report. It builds on top of the existing srcline support.

Committer notes:

E.g.:

  # perf record -F 10000 usleep 1
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.016 MB perf.data (13 samples) ]
  [root@zoo ~]# perf report -s srcfile --stdio
  # Total Lost Samples: 0
  #
  # Samples: 13 of event 'cycles'
  # Event count (approx.): 869878
  #
  # Overhead  Source File
  # ........  ...........
      60.99%  .
      20.62%  paravirt.h
      14.23%  rmap.c
       4.04%  signal.c
       0.11%  msr.h
  #

The first line collects all the samples whose source file could not be resolved:

  # perf report -s srcfile,dso --stdio
  # Total Lost Samples: 0
  #
  # Samples: 13 of event 'cycles'
  # Event count (approx.): 869878
  #
  # Overhead  Source File  Shared Object
  # ........  ...........  ................
      40.97%  .            ld-2.20.so
      20.62%  paravirt.h   [kernel.vmlinux]
      20.02%  .            libc-2.20.so
      14.23%  rmap.c       [kernel.vmlinux]
       4.04%  signal.c     [kernel.vmlinux]
       0.11%  msr.h        [kernel.vmlinux]
  #

XXX: Investigate why that is not resolving on Fedora 21, Andi says he hasn't seen this on Fedora 22.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438988064-21834-1-git-send-email-andi@firstfloor.org
[ Added column length update, from 0e65bdb3f90f ('perf hists: Update the column width for the "srcline" sort key') ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
When we introduce a new sort key, we need to update the hists__calc_col_len() function accordingly, otherwise the width will be limited to strlen(header). We can't update it when obtaining a line value for a column (for instance, in sort__srcline_cmp()), because we reset it all when doing a resort (see hists__output_recalc_col_len()), so we need to set each of the column widths from what is in the hist_entry fields.

Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Fixes: 409a8be6 ("perf tools: Add sort by src line/number")
Link: http://lkml.kernel.org/n/tip-jgbe0yx8v1gs89cslr93pvz2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
The iter_add_next_cumulative_entry() function calls hist_entry__cmp(), which may want to access the hists where this hist_entry is stored; initialize it to let that happen and avoid segfaults.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-iqg98sfn4fvwcxp0pdvqauie@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
We need to unset the 'perf_event_attr::freq' bit (default 1) when the 'period' term is specified within an event definition like:

  -e 'cpu/cpu-cycles,call-graph=fp,time,period=100000'

otherwise the period value will be handled as a frequency (and fail if it crosses the maximum allowed frequency value).

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/20150808171210.GC17040@krava.brq.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
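For illustration, here is a minimal userspace sketch of what the fix amounts to, using only the standard perf_event_open() interface rather than perf's own code (the function name is made up): when a fixed period is requested, attr.freq must be 0 so the kernel interprets the sample_period/sample_freq union as a period rather than a frequency.

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Open a hardware-cycles event with either a fixed period or a frequency. */
    static int open_cycles_event(pid_t pid, int cpu, unsigned long value, int use_freq)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CPU_CYCLES;

            if (use_freq) {
                    attr.freq = 1;                  /* value is samples per second */
                    attr.sample_freq = value;
            } else {
                    attr.freq = 0;                  /* value is a sample period; leaving
                                                       freq set would make the kernel
                                                       treat it as a frequency */
                    attr.sample_period = value;
            }

            return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
    }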
-
Andi Kleen authored
For perf report/script srcline, currently only the base file name of the source file is printed. This is a good default because it usually fits on the screen. But in some cases we want to know the full file name, for example to aggregate hits per file. In the latter case we need more than the base file name to resolve file naming collisions: for example the kernel source has ~70 files named "core.c". It's also useful as input to post-processing tools which want to point to the right file.

Add a flag to allow full file name output. Add an option to perf report/script to enable this option.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438986245-15191-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 08 Aug, 2015 3 commits
-
Kan Liang authored
Move the callchain option parsing code to util.c, to avoid dragging more object files into the python binding.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438890294-33409-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Move 'struct perf_counts' and associated functions into a separate object, so we can remove the stat.c object dependency from the python build. This makes the python code build properly, because it fails to load due to a missing stat-shadow.c object dependency if some patches from Kan Liang are applied. So apply this one, then Kan's.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/20150807105103.GB8624@krava.brq.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ingo Molnar authored
Merge tag 'perf-ebpf-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/ebpf library + llvm/clang infrastructure changes from Arnaldo Carvalho de Melo:

Infrastructure changes:

- Add library for interfacing with the kernel eBPF infrastructure, with tools/perf/ targeted as a first user. (Wang Nan)

- Add llvm/clang infrastructure for building BPF object files from C source code. (Wang Nan)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 07 Aug, 2015 20 commits
-
Wang Nan authored
Previous patches introduced llvm__compile_bpf() to compile a source file into an eBPF object. This patch adds a test case for it. It also tests libbpf by opening the generated object, after the next patch, which introduces the HAVE_LIBBPF_SUPPORT option, is applied.

Since llvm__compile_bpf() prints long messages that users who don't explicitly test llvm don't care about, this patch sets verbose to -1 to suppress all debug, warning and error messages, and hints the user to use 'perf test -v' to see the full output. For the same reason, if clang is not found in PATH and there's no [llvm] section in .perfconfig, this test is skipped.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-17-git-send-email-wangnan0@huawei.com
[ Add tools/lib/bpf/ to tools/perf/MANIFEST, so that the tarball targets build ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
To help users find the correct kernel include options, this patch extracts them from the kbuild system with an embedded script, kinc_fetch_script, which creates a temporary directory, generates a Makefile and an empty dummy.o, and then uses the Makefile to fetch the $(NOSTDINC_FLAGS), $(LINUXINCLUDE) and $(EXTRA_CFLAGS) options. The result is passed to the compiler script using the 'KERNEL_INC_OPTIONS' environment variable.

Because the options from kbuild contain relative paths like '-Iinclude/generated/uapi', the working directory must be changed. This is done by the previous patch.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1436445342-1402-16-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This patch detects the kernel build directory by checking for the existence of include/generated/autoconf.h. The clang working directory is changed to the kbuild directory if it is found, to help users use relative include paths. A following patch will detect the kernel include directories, which contain relative include paths, so this working-directory change is needed.

Users are allowed to set 'kbuild-dir = ""' manually to disable this check.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/tip-owyfwfbemrjn0tlj6tgk2nf5@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This is the core patch for supporting eBPF on-the-fly compiling. It does the following work:

 1. Search for the clang compiler using search_program().

 2. Run the command template defined in the llvm-bpf-cmd-template option in the [llvm] config section using read_from_pipe(). The path of clang and the source code path are injected into the shell command through environment variables using force_set_env().

Committer notice:

When building with DEBUG=1 we get a compiler error that gets fixed with the same approach described in commit b2365122 ("perf kmem: Fix compiler warning about may be accessing uninitialized variable"): the last argument to strtok_r doesn't need to be initialized, it's just a placeholder to make this routine reentrant, but gcc doesn't know about that and complains, breaking the build; fix it by setting it to NULL.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-14-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This patch introduces the [llvm] config section with 5 options. Following patches will use them to configure llvm dynamic compiling. 'llvm-utils.[ch]' is introduced in this patch to hold all llvm/clang related stuff.

Example:

  [llvm]
	# Path to clang. If omitted, search for it in $PATH.
	clang-path = "/path/to/clang"

	# Cmdline template. The following line shows its default value.
	# Environment variables are used to pass options.
	#
	# *NOTE*: -D__KERNEL__ MUST appear before $CLANG_OPTIONS,
	# so the user has a chance to use -U__KERNEL__ in $CLANG_OPTIONS
	# to cancel it.
	clang-bpf-cmd-template = "$CLANG_EXEC -D__KERNEL__ $CLANG_OPTIONS \
				  $KERNEL_INC_OPTIONS -Wno-unused-value \
				  -Wno-pointer-sign -working-directory \
				  $WORKING_DIR -c $CLANG_SOURCE -target \
				  bpf -O2 -o -"

	# Options passed to clang, will be passed to the cmdline by
	# $CLANG_OPTIONS.
	clang-opt = "-Wno-unused-value -Wno-pointer-sign"

	# kbuild directory. If not set, use /lib/modules/`uname -r`/build.
	# If set to "" deliberately, skip the kernel header auto-detector.
	kbuild-dir = "/path/to/kernel/build"

	# Options passed to 'make' when detecting kernel header options.
	kbuild-opts = "ARCH=x86_64"

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1437477214-149684-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
To allow enumeration of all bpf_objects, keep them in a list (hidden from the caller). bpf_object__for_each_safe() is introduced to do this iteration. It is safe even if the user closes the object during iteration.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-23-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
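The libbpf macro itself is not reproduced here; as a rough, self-contained sketch of the "safe" iteration idea (hypothetical struct and macro names), the next pointer is saved before the loop body runs, so the current node may be freed inside the loop:

    #include <stdlib.h>

    struct obj {
            struct obj *next;       /* singly linked list of all open objects */
    };

    /* Safe iteration: 'tmp' holds the next node before 'pos' may be freed. */
    #define obj__for_each_safe(pos, tmp, head)                              \
            for ((pos) = (head), (tmp) = (pos) ? (pos)->next : NULL;        \
                 (pos) != NULL;                                             \
                 (pos) = (tmp), (tmp) = (pos) ? (pos)->next : NULL)

    static void close_all(struct obj *head)
    {
            struct obj *pos, *tmp;

            obj__for_each_safe(pos, tmp, head)
                    free(pos);      /* freeing 'pos' is fine, 'tmp' was saved */
    }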
-
Wang Nan authored
This patch introduces accessors for users of libbpf to retrieve the section name and fd of an opened/loaded eBPF program. 'struct bpf_prog_handler' is used for that purpose. Accessors for a program's section name and file descriptor are provided. Setting/getting private data is also implemented.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1435716878-189507-21-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This patch uses the previously introduced bpf_load_program() to load the programs in the ELF file into the kernel. The result is stored in the 'fd' field of 'struct bpf_program'. During loading, it allocates a log buffer and frees it before returning. Note that the buffer is not passed to bpf_load_program() if the first loading attempt is successful. A statically allocated log buffer is not used, to avoid potential multi-thread problems.

Instructions collected during opening are cleared after loading.

load_program() is created to load a 'struct bpf_insn' array into the kernel, and bpf_program__load() calls it. With this design we have a single function that loads instructions into the kernel. It will be used by further patches, which create different instances from a program and load them into the kernel.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-20-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
bpf_load_program() can be used to load a bpf program into the kernel. To make loading faster, first try to load without a logbuf; try again with the logbuf if the first attempt failed.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-19-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
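As a rough sketch of that retry pattern using the raw bpf(2) syscall rather than the libbpf helper itself (the function name and log size handling here are illustrative, not the patch's code):

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int load_prog(enum bpf_prog_type type, const struct bpf_insn *insns,
                         unsigned int insn_cnt, const char *license,
                         unsigned int kern_version, char *log, size_t log_sz)
    {
            union bpf_attr attr;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.prog_type = type;
            attr.insns = (unsigned long)insns;
            attr.insn_cnt = insn_cnt;
            attr.license = (unsigned long)license;
            attr.kern_version = kern_version;

            /* First attempt: no log buffer, the fast path. */
            fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
            if (fd >= 0 || !log)
                    return fd;

            /* Retry with the verifier log enabled so the failure can be reported. */
            attr.log_buf = (unsigned long)log;
            attr.log_size = log_sz;
            attr.log_level = 1;
            return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    }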
-
Wang Nan authored
If an eBPF program accesses a map, LLVM generates a load instruction which loads an absolute address into a register, like this:

  ld_64 r1, <MCOperand Expr:(mymap)>
  ...
  call 2

That ld_64 instruction is recorded in the relocation section. To enable the use of that map, relocation must be done by replacing the immediate value with the real map file descriptor, so the map can be found by the eBPF map functions.

This patch does the relocation work based on information collected by the patches 'bpf tools: Collect symbol table from SHT_SYMTAB section', 'bpf tools: Collect relocation sections from SHT_REL sections' and 'bpf tools: Record map accessing instructions for each program'. For each instruction which needs relocation, it injects the corresponding file descriptor into the imm field. As part of the protocol, src_reg is set to BPF_PSEUDO_MAP_FD to notify the kernel that this is a map loading instruction.

This is the final part of the map relocation work. The principle of map relocation is described in the commit message of 'bpf tools: Collect symbol table from SHT_SYMTAB section'.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-18-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
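A minimal sketch of what such a relocation step amounts to (the struct reloc_desc layout and function name here are hypothetical; only struct bpf_insn and BPF_PSEUDO_MAP_FD come from the kernel UAPI headers):

    #include <linux/bpf.h>

    struct reloc_desc {
            int insn_idx;   /* index of the ld_64 instruction to patch */
            int map_idx;    /* which map in the maps section it refers to */
    };

    /* Patch every recorded ld_64 so its imm field holds the created map's fd. */
    static void relocate_maps(struct bpf_insn *insns,
                              const struct reloc_desc *relocs, int nr_relocs,
                              const int *map_fds)
    {
            int i;

            for (i = 0; i < nr_relocs; i++) {
                    struct bpf_insn *insn = &insns[relocs[i].insn_idx];

                    insn->imm = map_fds[relocs[i].map_idx];
                    insn->src_reg = BPF_PSEUDO_MAP_FD;  /* tells the verifier imm is a map fd */
            }
    }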
-
Wang Nan authored
This patch creates maps based on the 'map' section in the object file using bpf_create_map(), and stores the fds into an array in 'struct bpf_object'.

The previous patches parse the ELF object file and collect the required data, but don't interact with the kernel; they belong to the 'opening' phase. This patch is the first patch of the 'loading' phase. The 'loaded' field is introduced in 'struct bpf_object' to avoid loading an object twice, because the loading phase clears resources collected during opening, which become useless after loading. In this patch, maps_buf is cleared.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-17-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This patch introduces bpf.c and bpf.h, which hold common functions for issuing bpf syscalls. The goal of these two files is to hide the syscall completely from the user. Note that bpf.c and bpf.h deal with the kernel interface only; things like the structure of the 'map' section in the ELF object are not the concern of bpf.[ch].

We first introduce bpf_create_map(). Note that, since the functions in bpf.[ch] are wrappers of sys_bpf, they don't use OO-style naming.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-16-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
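For orientation, such a wrapper boils down to filling a union bpf_attr and issuing the bpf(2) syscall; this is a sketch against the kernel UAPI, not the libbpf source (the function name is made up):

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Thin wrapper around the bpf(2) syscall for BPF_MAP_CREATE. */
    static int create_map(enum bpf_map_type type, unsigned int key_size,
                          unsigned int value_size, unsigned int max_entries)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.map_type = type;
            attr.key_size = key_size;
            attr.value_size = value_size;
            attr.max_entries = max_entries;

            /* Returns a new map fd on success, -1 with errno set on failure. */
            return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
    }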
-
Wang Nan authored
This patch records the indices of instructions which need to be relocated. That information is saved in the 'reloc_desc' field of 'struct bpf_program'. In the loading phase (this patch takes effect in the opening phase), the collected instructions will be replaced by map loading instructions.

Since we are going to close the ELF file and clear all data at the end of the 'opening' phase, the ELF information will no longer be valid in the 'loading' phase. We have to locate the instructions before maps are loaded, instead of directly modifying the instructions.

'struct bpf_map_def' is introduced in this patch to let us know how many maps are defined in the object.

This is the third part of the map relocation work. The principle of map relocation is described in the commit message of 'bpf tools: Collect symbol table from SHT_SYMTAB section'.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-15-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
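The commit message does not spell out the layout of 'struct bpf_map_def'; a plausible minimal definition, matching the fields used in the SEC("maps") example further down this log, would be:

    /* One map definition as laid out in the object file's maps section. */
    struct bpf_map_def {
            unsigned int type;              /* e.g. BPF_MAP_TYPE_HASH */
            unsigned int key_size;
            unsigned int value_size;
            unsigned int max_entries;
    };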
-
Wang Nan authored
This patch collects relocation sections into 'struct bpf_object'. Such sections are used for connecting maps to bpf programs. The 'reloc' field in 'struct bpf_object' is introduced for storing this information. This patch simply stores the data into the 'reloc' field; a following patch will parse it to find the exact instructions which need to be relocated. Note that the collected data will become invalid after the ELF object file is closed.

This is the second patch related to map relocation. The first one is 'bpf tools: Collect symbol table from SHT_SYMTAB section'. The principle of map relocation is described in its commit message.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-14-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This patch collects all programs in an object file into an array of 'struct bpf_program' for further processing. That structure represents each eBPF program. 'bpf_prog' would be a better name, but it is already used by linux/filter.h. Although that is a kernel-space name, I still prefer to call this one 'bpf_program' to prevent possible confusion.

bpf_object__add_program() creates a new 'struct bpf_program' object. It first initializes a variable on the stack using bpf_program__init(), then, on success, enlarges the obj->programs array and copies the new object in.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-13-git-send-email-wangnan0@huawei.com
[ Made bpf_object__add_program() propagate the error (-EINVAL or -ENOMEM) ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
This patch collects the symbol table section. This section is useful when linking BPF maps.

What the 'bpf_map_xxx()' functions actually require are map file descriptors (the in-kernel verifier converts fds into pointers to 'struct bpf_map'), which we don't know when compiling. Therefore, we should make the compiler generate a 'ldr_64 r1, <imm>' instruction, and fill the 'imm' field with the actual file descriptor when loading in libbpf.

BPF programs should be written in this way:

  struct bpf_map_def SEC("maps") my_map = {
	.type = BPF_MAP_TYPE_HASH,
	.key_size = sizeof(unsigned long),
	.value_size = sizeof(unsigned long),
	.max_entries = 1000000,
  };

  SEC("my_func=sys_write")
  int my_func(void *ctx)
  {
	...
	bpf_map_update_elem(&my_map, &key, &value, BPF_ANY);
	...
  }

The compiler should convert '&my_map' into a 'ldr_64, r1, <imm>' instruction, where imm should be the address of 'my_map'. From that address, libbpf knows which map is actually referenced, and then fills the imm field with the 'fd' of the map it created. However, since we never really 'link' the object file, the imm field is only a record in the relocation section. Therefore libbpf should do the relocation:

 1. In the relocation section (type == SHT_REL), the position of each such 'ldr_64' instruction is recorded with a reference to an entry in the symbol table (SHT_SYMTAB);

 2. From the records in the symbol table we can find the indices of the map variables.

Libbpf first records SHT_SYMTAB and the position of each instruction which requires such an operation, then creates the file descriptors. Finally, after map creation completes, it replaces the imm field.

This is the first patch of the BPF map related work. It records SHT_SYMTAB into the object's efile field for further use.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-12-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
If maps are used by eBPF programs, the corresponding object file(s) should contain a section named 'map', which contains the map definitions. This patch copies the data of the whole section. Map data parsing will happen just before map loading.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-11-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
Expand bpf_obj_elf_collect() to collect the license and kernel version information in the eBPF object file. An eBPF object file should have a section named 'license', which contains a string. It should also have a section named 'version', which contains a u32 LINUX_VERSION_CODE.

bpf_obj_validate() is introduced to validate the object file after it is loaded. Currently it only checks for the existence of the 'version' section.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-10-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
bpf_obj_elf_collect() is introduced to iterate over each ELF section to collect information from eBPF object files. This function will be further enhanced to collect license, kernel version, programs, configs and map information.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-9-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
Check endianness according to the EHDR. The code is taken from tools/perf/util/symbol-elf.c.

Libbpf doesn't magically convert mismatched endianness. Even if we swapped the eBPF instructions to the correct byte order, we would be unable to deal with endianness in the code logic generated by LLVM. Therefore, libbpf should simply reject a mismatched ELF object, and let LLVM create correct code.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-8-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
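The check itself is small; as a rough, self-contained sketch of comparing the ELF header's data encoding against the host byte order (illustrative function name, not the libbpf code):

    #include <elf.h>
    #include <endian.h>

    /*
     * Return 1 if the ELF header's data encoding matches the host byte order,
     * 0 otherwise (a mismatched object should simply be rejected).
     */
    static int elf_endianness_matches(const unsigned char e_ident[EI_NIDENT])
    {
    #if __BYTE_ORDER == __LITTLE_ENDIAN
            return e_ident[EI_DATA] == ELFDATA2LSB;
    #elif __BYTE_ORDER == __BIG_ENDIAN
            return e_ident[EI_DATA] == ELFDATA2MSB;
    #else
            return 0;
    #endif
    }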
-