1. 29 May, 2019 10 commits
    • libbpf: check map name retrieved from ELF · c51829bb
      Andrii Nakryiko authored
      Validate that there was no error retrieving the symbol name corresponding
      to a BPF map.
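      A minimal sketch of the kind of check this adds (field names, message and
      error code are illustrative, not the exact diff):

        map_name = elf_strptr(obj->efile.elf, obj->efile.strtabidx, sym.st_name);
        if (!map_name) {  /* elf_strptr() returns NULL on error */
                pr_warning("failed to get map #%d name sym string for obj %s\n",
                           i, obj->path);
                return -LIBBPF_ERRNO__FORMAT;
        }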
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • libbpf: simplify endianness check · 12ef5634
      Andrii Nakryiko authored
      Rewrite the endianness check to use a "more canonical" way, using
      compiler-defined macros, similar to a few other places in libbpf. It is
      also more obvious and shorter.
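      A rough sketch of the compiler-macro approach (illustrative of the idea
      rather than the exact committed code):

        #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
                if (obj->efile.ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
                        return 0;
        #elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
                if (obj->efile.ehdr.e_ident[EI_DATA] == ELFDATA2MSB)
                        return 0;
        #else
        # error "Unrecognized __BYTE_ORDER__"
        #endif
                pr_warning("endianness mismatch.\n");
                return -LIBBPF_ERRNO__ENDIAN;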
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • libbpf: preserve errno before calling into user callback · be5c5d4e
      Andrii Nakryiko authored
      pr_warning may ultimately call into a user-provided callback function,
      which can clobber the errno value, so we need to save errno before that.
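      A minimal sketch of the idea as applied to libbpf's print path (assuming
      libbpf's internal __libbpf_pr callback pointer; illustrative, not the
      verbatim diff):

        void libbpf_print(enum libbpf_print_level level, const char *format, ...)
        {
                va_list args;
                int old_errno;

                if (!__libbpf_pr)
                        return;

                old_errno = errno;      /* the user callback may clobber errno */

                va_start(args, format);
                __libbpf_pr(level, format, args);
                va_end(args);

                errno = old_errno;      /* restore it for our caller */
        }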
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • libbpf: fix detection of corrupted BPF instructions section · 8ca990ce
      Andrii Nakryiko authored
      Ensure that the size of a section with BPF instructions is exactly a
      multiple of the BPF instruction size.
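      A minimal sketch of such a check (message and error code are illustrative):

        if (size == 0 || size % sizeof(struct bpf_insn)) {
                pr_warning("corrupted section '%s', size: %zu\n",
                           section_name, size);
                return -EINVAL;
        }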
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • libbpf: prevent overwriting of log_level in bpf_object__load_progs() · 501b125a
      Quentin Monnet authored
      There are two functions in libbpf that support passing a log_level
      parameter for the verifier for loading programs:
      bpf_object__load_xattr() and bpf_prog_load_xattr(). Both accept an
      attribute object containing the log_level, and apply it to the programs
      to load.
      
      It turns out that to effectively load the programs, the latter function
      eventually relies on the former. This was not taken into account when
      adding support for log_level in bpf_object__load_xattr(), and the
      log_level passed to bpf_prog_load_xattr() later gets overwritten with a
      zero value, thus disabling verifier logs for the program in all cases:
      
      bpf_prog_load_xattr()             // prog->log_level = attr1->log_level;
      -> bpf_object__load()             // attr2->log_level = 0;
         -> bpf_object__load_xattr()    // <pass prog and attr2>
            -> bpf_object__load_progs() // prog->log_level = attr2->log_level;
      
      Fix this by OR-ing the log_level in bpf_object__load_progs(), instead of
      overwriting it.
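      In code the fix is essentially the following one-liner (a sketch; the
      parameter name is illustrative):

        /* in bpf_object__load_progs() */
        prog->log_level |= log_level;   /* was: prog->log_level = log_level; */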
      
      v2: Fix commit log description (confusion on function names in v1).
      
      Fixes: 60276f98 ("libbpf: add bpf_object__load_xattr() API function to pass log_level")
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: tracing: properly use bpf_prog_array api · e672db03
      Stanislav Fomichev authored
      Now that we don't have __rcu markers on the bpf_prog_array helpers,
      let's use proper rcu_dereference_protected to obtain array pointer
      under mutex.
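      A sketch of the pattern as it applies to the tracing code, which holds
      bpf_event_mutex while manipulating the per-event prog array (field and
      variable names illustrative):

        mutex_lock(&bpf_event_mutex);
        old_array = rcu_dereference_protected(event->tp_event->prog_array,
                                              lockdep_is_held(&bpf_event_mutex));
        ret = bpf_prog_array_copy(old_array, NULL, prog, &new_array);
        ...
        mutex_unlock(&bpf_event_mutex);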
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: cgroup: properly use bpf_prog_array api · dbcc1ba2
      Stanislav Fomichev authored
      Now that we don't have __rcu markers on the bpf_prog_array helpers,
      let's use proper rcu_dereference_protected to obtain array pointer
      under mutex.
      
      We also don't need __rcu annotations on cgroup_bpf.inactive since
      it's not read/updated concurrently.
      
      v4:
       * drop cgroup_rcu_xyz wrappers and use rcu APIs directly; this should
         make it clearer which mutex/refcount protects each particular place
      
      v3:
      * amend cgroup_rcu_dereference to include percpu_ref_is_dying;
        cgroup_bpf is now reference counted and we don't hold cgroup_mutex
        anymore in cgroup_bpf_release
      
      v2:
      * replace xchg with rcu_swap_protected
      
      Cc: Roman Gushchin <guro@fb.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Acked-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: media: properly use bpf_prog_array api · 02205d2e
      Stanislav Fomichev authored
      Now that we don't have __rcu markers on the bpf_prog_array helpers,
      let's use proper rcu_dereference_protected to obtain array pointer
      under mutex.
      
      Cc: linux-media@vger.kernel.org
      Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
      Cc: Sean Young <sean@mess.org>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: remove __rcu annotations from bpf_prog_array · 54e9c9d4
      Stanislav Fomichev authored
      Drop __rcu annotations and rcu read sections from bpf_prog_array
      helper functions. They are not needed since all existing callers
      call those helpers from the rcu update side while holding a mutex.
      This guarantees that use-after-free could not happen.
      
      In the next patches I'll fix the callers with missing
      rcu_dereference_protected to make sparse/lockdep happy; the proper
      way to use these helpers is:
      
      	struct bpf_prog_array __rcu *progs = ...;
      	struct bpf_prog_array *p;
      
      	mutex_lock(&mtx);
      	p = rcu_dereference_protected(progs, lockdep_is_held(&mtx));
      	bpf_prog_array_length(p);
      	bpf_prog_array_copy_to_user(p, ...);
      	bpf_prog_array_delete_safe(p, ...);
      	bpf_prog_array_copy_info(p, ...);
      	bpf_prog_array_copy(p, ...);
      	bpf_prog_array_free(p);
      	mutex_unlock(&mtx);
      
      No functional changes! rcu_dereference_protected with lockdep_is_held
      should catch any cases where we update prog array without a mutex
      (I've looked at existing call sites and I think we hold a mutex
      everywhere).
      
      Motivation is to fix sparse warnings:
      kernel/bpf/core.c:1803:9: warning: incorrect type in argument 1 (different address spaces)
      kernel/bpf/core.c:1803:9:    expected struct callback_head *head
      kernel/bpf/core.c:1803:9:    got struct callback_head [noderef] <asn:4> *
      kernel/bpf/core.c:1877:44: warning: incorrect type in initializer (different address spaces)
      kernel/bpf/core.c:1877:44:    expected struct bpf_prog_array_item *item
      kernel/bpf/core.c:1877:44:    got struct bpf_prog_array_item [noderef] <asn:4> *
      kernel/bpf/core.c:1901:26: warning: incorrect type in assignment (different address spaces)
      kernel/bpf/core.c:1901:26:    expected struct bpf_prog_array_item *existing
      kernel/bpf/core.c:1901:26:    got struct bpf_prog_array_item [noderef] <asn:4> *
      kernel/bpf/core.c:1935:26: warning: incorrect type in assignment (different address spaces)
      kernel/bpf/core.c:1935:26:    expected struct bpf_prog_array_item *[assigned] existing
      kernel/bpf/core.c:1935:26:    got struct bpf_prog_array_item [noderef] <asn:4> *
      
      v2:
      * remove comment about potential race; that can't happen
        because all callers are in rcu-update section
      
      Cc: Roman Gushchin <guro@fb.com>
      Acked-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • selftests/bpf: fix compilation error for flow_dissector.c · fe937ea1
      Alan Maguire authored
      When building the tools/testing/selftests/bpf subdirectory
      (running both a local directory "make" and a
      "make -C tools/testing/selftests/bpf"), I keep hitting the
      following compilation error:
      
      prog_tests/flow_dissector.c: In function ‘create_tap’:
      prog_tests/flow_dissector.c:150:38: error: ‘IFF_NAPI’ undeclared (first
      use in this function)
         .ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI | IFF_NAPI_FRAGS,
                                            ^
      prog_tests/flow_dissector.c:150:38: note: each undeclared identifier is
      reported only once for each function it appears in
      prog_tests/flow_dissector.c:150:49: error: ‘IFF_NAPI_FRAGS’ undeclared
      
      Adding include/uapi/linux/if_tun.h to tools/include/uapi/linux
      resolves the problem and ensures the compilation of the file
      does not depend on having up-to-date kernel headers locally.
      Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  2. 28 May, 2019 15 commits
  3. 25 May, 2019 15 commits
    • samples: bpf: add ibumad sample to .gitignore · d9a6f413
      Matteo Croce authored
      This commit adds ibumad, which is currently omitted from the ignore file,
      to .gitignore.
      Signed-off-by: Matteo Croce <mcroce@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'optimize-zext' · 198ae936
      Alexei Starovoitov authored
      Jiong Wang says:
      
      ====================
      v9:
        - Split patch 5 in v8.
          make bpf uapi header file sync a separate patch. (Alexei)
      
      v8:
        - For stack slot read, mark them as REG_LIVE_READ64. (Alexei)
        - Change DEF_NOT_SUBREG from -1 to 0. (Alexei)
        - Rebased on top of latest bpf-next.
      
      v7:
        - Drop the first patch in v6, the one adding 32-bit return value and
          argument type. (Alexei)
        - Rename bpf_jit_hardware_zext to bpf_jit_needs_zext. (Alexei)
        - Use mov32 with imm == 1 to indicate it is zext. (Alexei)
        - JIT back-ends peephole next insn to optimize out unnecessary zext
          inserted by verifier. (Alexei)
        - Testing:
          + patch set tested (bpf selftest) on x64 host with llvm 9.0
            no regression observed on both JIT and interpreter modes.
          + patch set tested (bpf selftest) on x32 host.
            By Yanqing Wang, thanks!
            no regression observed on both JIT and interpreter modes.
          + patch set tested (bpf selftest) on RV64 host with llvm 9.0,
            By Björn Töpel, thanks!
            no regression observed before and after this set with JIT_ALWAYS_ON.
            test_progs_32 also enabled as LLVM 9.0 is used by Björn.
          + cross compiled the other affected targets, arm, PowerPC, SPARC, S390.
      
      v6:
        - Fixed s390 kbuild test robot error. (kbuild)
        - Make comment style in backends patches more consistent.
      
      v5:
        - Adjusted several test_verifier helpers to make them work on hosts
          with and without hardware zext. (Naveen)
        - Make sure the zext flag is not set when the verifier is bypassed, for
          example, libtest_bpf.ko. (Naveen)
        - Conservatively mark bpf main return value as 64-bit. (Alexei)
        - Make sure read flag is either READ64 or READ32, not the mix of both.
          (Alexei)
        - Merged patch 1 and 2 in v4. (Alexei)
        - Fixed kbuild test robot warning on NFP. (kbuild)
        - Proposed new BPF_ZEXT insn to have optimal code-gen for various JIT
          back-ends.
        - Conservatively set zext flags for patched insns.
        - Fixed return value zext for helper function calls.
        - Also adjusted the test_verifier scalability unit test to avoid
          triggering too many insn patches, which would hang the computer.
        - re-tested on x86 host with llvm 9.0, no regression on test_verifier,
          test_progs, test_progs_32.
        - re-tested offload target (nfp), no regression on local testsuite.
      
      v4:
        - added the two missing fixes which address two of Jakub's reviews in v3.
        - rebase on top of bpf-next.
      
      v3:
        - remove redundant check in "propagate_liveness_reg". (Jakub)
        - add extra check in "mark_reg_read" to prune more search. (Jakub)
        - re-implemented "prog_flags" passing mechanism, removed use of
          global switch inside libbpf.
        - enabled high 32-bit randomization beyond "test_verifier" and
          "test_progs". Now it should have been enabled for all possible
          tests. Re-run all tests, haven't noticed regression.
        - remove RFC tag.
      
      v2:
        - rebased on top of bpf-next master.
        - added comments for what is sub-register def index. (Edward, Alexei)
        - removed patch 1 which turns bit mask from enum to macro. (Alexei)
        - removed sysctl/bpf_jit_32bit_opt. (Alexei)
        - merged sub-register def insn index into reg state. (Alexei)
        - change test methodology (Alexei):
            + instead of simple unit tests on x86_64, for which this optimization
              isn't enabled because of hardware support, poison the high 32 bits
              of definitions identified as safe to do so. This lets the
              correctness of this patch set be checked whenever the daily bpf
              selftests run, which is a very stressful test on a host machine
              like x86_64.
            + hi32 poisoning is gated by a new BPF_F_TEST_RND_HI32 prog flag.
            + BPF_F_TEST_RND_HI32 is enabled for all tests of "test_progs" and
              "test_verifier"; the latter needs a minor tweak on two unit tests,
              please see the patch for the change.
            + introduced a new global variable "libbpf_test_mode" into libbpf.
              Once it is set to true, it will set BPF_F_TEST_RND_HI32 for all
              later PROG_LOAD syscalls; the goal is to ease enabling hi32
              poisoning on the existing testsuite.
              We could also introduce new APIs, for example "bpf_prog_test_load",
              then use -Dbpf_prog_load=bpf_prog_test_load to migrate tests under
              test_progs, but there are several load APIs, and such new APIs need
              some changes to structures like "struct bpf_prog_load_attr".
            + removed old unit tests. They are based on insn scanning and require
              quite a few generic test_verifier code changes. Given hi32
              randomization offers good test coverage, the unit tests don't add
              much extra test value.
        - enhanced the register width check ("is_reg64") when recording
          sub-register writes; it now returns a more accurate width.
        - Re-run all tests under "test_progs" and "test_verifier" on x86_64, no
          regression. Fixed a couple of bugs exposed:
            1. ctx field size transformation was not taken into account.
            2. insn patching could cause loss of original aux data, which is
               important for ctx field conversion.
            3. the return value of propagate_liveness was wrong and caused a
               regression in the number of processed insns.
            4. helper call args weren't handled properly, so path pruning could
               cause 64-bit read info in the pruned path to be lost.
        - Re-run Cilium bpf prog for processed-insn-number benchmarking, no
          regression.
      
      v1:
        - Fixed the missing handling on callee-saved for bpf-to-bpf call,
          sub-register defs therefore moved to frame state. (Jakub Kicinski)
        - Removed redundant "cross_reg". (Jakub Kicinski)
        - Various coding styles & grammar fixes. (Jakub Kicinski, Quentin Monnet)
      
      The eBPF ISA specification requires the high 32 bits to be cleared when a
      low 32-bit sub-register is written. This applies to the destination
      register of ALU32 etc. JIT back-ends must guarantee this semantic when
      doing code-gen. The x86_64 and AArch64 ISAs have the same semantics, so
      the corresponding JIT back-ends don't need to do extra work.
      
      However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches
      (PowerPC, SPARC etc) need to do explicit zero extension to meet this
      requirement, otherwise code like the following will fail.
      
        u64_value = (u64) u32_value
        ... other uses of u64_value
      
      This is because the compiler can exploit the semantics described above and
      omit the zero extensions when extending u32_value to u64_value, so these
      JIT back-ends are expected to guarantee the semantics by inserting extra
      zero extensions, which however can be a significant increase in code size.
      Some benchmarks show there can be ~40% sub-register writes out of total
      insns, meaning at least ~40% extra code-gen.
      
      One observation is that these extra zero extensions are not always
      necessary. Take the above code snippet for example: it is possible
      u32_value will never be cast into a u64; the value of the high 32 bits of
      u32_value can then be ignored and the extra zero extension eliminated.
      
      This patch set implements this idea: insns defining sub-registers are
      marked when the high 32 bits of the defined sub-register matter. For
      unmarked insns, it is safe to eliminate the high 32-bit clearance for
      them.
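      For example (an illustrative snippet in the same spirit as the one above,
      not taken from the patch set):

        u32_a = u32_b + 1      // ALU32 def of a sub-register
        u32_c = u32_a * 2      // only 32-bit uses of u32_a follow, so the zero
                               // extension after its def can be eliminated
        u64_d = (u64) u32_c    // 64-bit use: the insn defining u32_c is marked
                               // as needing zero extension on the destination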
      
      Algo
      ====
      We could use insn-scan-based static analysis to tell whether a
      sub-register def doesn't need zero extension. However, with such static
      analysis we must make conservative assumptions at branching points, where
      multiple uses could be introduced. So, any sub-register def that is active
      at a branching point needs to be marked as needing zero extension. This
      could introduce quite a few false alarms, for example ~25% on Cilium
      bpf_lxc.
      
      It is far better to use the dynamic data-flow tracking which the verifier
      fortunately already has, and which can easily be extended to serve the
      purpose of this patch set.
      
       - Split read flags into READ32 and READ64.
      
       - Record index of insn that does sub-register write. Keep the index inside
         reg state and update it during verifier insn walking.
      
       - A full register read on a sub-register marks its definition insn as
         needing zero extension on dst register.
      
         A new sub-register write overrides the old one.
      
       - When propagating read64 during path pruning, also mark any insn defining
         a sub-register that is read in the pruned path as full-register.
      
      Benchmark
      =========
       - I estimate the JITed image could be 10% ~ 30% smaller on these affected
         arches (nfp, arm, x32, riscv, ppc, sparc, s390), depending on the prog.
      
       - For Cilium bpf_lxc, there are ~11500 insns in the compiled binary (using
         the latest LLVM snapshot, with -mcpu=v3 -mattr=+alu32 enabled), 4460 of
         which have sub-register writes (~40%). Calculated by:
      
          cat dump | grep -P "\tw" | wc -l       (ALU32)
          cat dump | grep -P "r.*=.*u32" | wc -l (READ_W)
          cat dump | grep -P "r.*=.*u16" | wc -l (READ_H)
          cat dump | grep -P "r.*=.*u8" | wc -l  (READ_B)
      
         After this patch set is enabled, > 25% of those 4460 can be identified
         as not needing zero extension on the destination, and the percentage
         can go further up to more than 50% with some follow-up optimizations
         based on the infrastructure offered by this set. This leads to
         significant savings in JITed image size.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • nfp: bpf: eliminate zero extension code-gen · 0b4de1ff
      Jiong Wang authored
      This patch eliminates zero extension code-gen for instructions including
      both alu and load/store. The only exception is ctx load, because the
      offload target doesn't go through the host ctx conversion logic, so we do
      a customized load and ignore the zext flag set by the verifier.
      
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • riscv: bpf: eliminate zero extension code-gen · 66d0d5a8
      Jiong Wang authored
      Cc: Björn Töpel <bjorn.topel@gmail.com>
      Acked-by: Björn Töpel <bjorn.topel@gmail.com>
      Tested-by: Björn Töpel <bjorn.topel@gmail.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • x32: bpf: eliminate zero extension code-gen · 836256bf
      Jiong Wang authored
      Cc: Wang YanQing <udknight@gmail.com>
      Tested-by: Wang YanQing <udknight@gmail.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • sparc: bpf: eliminate zero extension code-gen · 3e2a33cf
      Jiong Wang authored
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • s390: bpf: eliminate zero extension code-gen · 591006b9
      Jiong Wang authored
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • powerpc: bpf: eliminate zero extension code-gen · a4c92773
      Jiong Wang authored
      Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • arm: bpf: eliminate zero extension code-gen · 163541e6
      Jiong Wang authored
      Cc: Shubham Bansal <illusionist.neo@gmail.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests: bpf: enable hi32 randomization for all tests · 9d120b41
      Jiong Wang authored
      The previous libbpf patch allows user to specify "prog_flags" to bpf
      program load APIs. To enable high 32-bit randomization for a test, we need
      to set BPF_F_TEST_RND_HI32 in "prog_flags".
      
      To enable such randomization for all tests, we need to make sure all places
      are passing BPF_F_TEST_RND_HI32. Changing them one by one is not
      convenient; also, it would be better if a test could be switched to
      "normal" running mode without code changes.
      
      Given the program load APIs used across bpf selftests are mostly:
        bpf_prog_load:      load from file
        bpf_load_program:   load from raw insns
      
      A test_stub.c is implemented for bpf selftests; it offers two functions
      for testing purposes:
      
        bpf_prog_test_load
        bpf_test_load_program
      
      They are the same as "bpf_prog_load" and "bpf_load_program", except they
      also set BPF_F_TEST_RND_HI32. Given *_xattr functions are the APIs to
      customize any "prog_flags", it makes little sense to put these two
      functions into libbpf.
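      A minimal sketch of one such wrapper, assuming it simply fills a
      bpf_prog_load_attr and adds the flag (the exact implementation may differ):

        int bpf_prog_test_load(const char *file, enum bpf_prog_type type,
                               struct bpf_object **pobj, int *prog_fd)
        {
                struct bpf_prog_load_attr attr = {
                        .file = file,
                        .prog_type = type,
                        .prog_flags = BPF_F_TEST_RND_HI32, /* enable hi32 poisoning */
                };

                return bpf_prog_load_xattr(&attr, pobj, prog_fd);
        }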
      
      Then, the following CFLAGS are passed to compilations for host programs:
        -Dbpf_prog_load=bpf_prog_test_load
        -Dbpf_load_program=bpf_test_load_program
      
      They migrate the used load APIs to the test version, hence enable high
      32-bit randomization for these tests without changing source code.
      
      Besides all these, several testcases use "bpf_prog_load_attr" directly;
      their call sites are updated to pass BPF_F_TEST_RND_HI32.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests: bpf: adjust several test_verifier helpers for insn insertion · f3b55abb
      Jiong Wang authored
      - bpf_fill_ld_abs_vlan_push_pop:
          Prevent zext from happening inside the PUSH_CNT loop. This could happen because
          of BPF_LD_ABS (32-bit def) + BPF_JMP (64-bit use), or BPF_LD_ABS +
          EXIT (64-bit use of R0). So, change BPF_JMP to BPF_JMP32 and redefine
          R0 at exit path to cut off the data-flow from inside the loop.
      
        - bpf_fill_jump_around_ld_abs:
          Jump range is limited to 16 bits. Every ld_abs is replaced by 6 insns,
          but on arches like arm, ppc etc., there will be one BPF_ZEXT inserted
          to extend the error value of the inlined ld_abs sequence, which then
          contains 7 insns. So, set the dividend to 7 so the testcase can
          work on all arches.
      
        - bpf_fill_scale1/bpf_fill_scale2:
          Both contain ~1M BPF_ALU32_IMM insns, which would trigger ~1M insn
          patcher calls because of hi32 randomization later when
          BPF_F_TEST_RND_HI32 is set for bpf selftests. The insn patcher is not
          efficient; 1M calls to it would hang the computer. So, change to
          BPF_ALU64_IMM to avoid hi32 randomization.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attr · 04656198
      Jiong Wang authored
      libbpf doesn't allow passing "prog_flags" during bpf program load in a
      couple of load related APIs, "bpf_load_program_xattr", "load_program" and
      "bpf_prog_load_xattr".
      
      It makes sense to allow passing "prog_flags" which is useful for
      customizing program loading.
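      With this in place, a caller can set the flag via the xattr variants; a
      minimal usage sketch (values and surrounding code are illustrative):

        struct bpf_load_program_attr load_attr = {
                .prog_type  = BPF_PROG_TYPE_SOCKET_FILTER,
                .insns      = insns,
                .insns_cnt  = insns_cnt,
                .license    = "GPL",
                .prog_flags = BPF_F_TEST_RND_HI32,
        };

        fd = bpf_load_program_xattr(&load_attr, log_buf, sizeof(log_buf));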
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: verifier: randomize high 32-bit when BPF_F_TEST_RND_HI32 is set · d6c2308c
      Jiong Wang authored
      This patch randomizes the high 32 bits of a definition when
      BPF_F_TEST_RND_HI32 is set.
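      Conceptually, for a 32-bit def whose high 32 bits were identified as
      unused (so no zero extension is needed), the verifier can patch in insns
      that poison those bits; if the analysis were wrong, the poisoned value
      would make tests fail. An illustrative (not verbatim) sequence using the
      verifier's auxiliary register:

        BPF_MOV32_IMM(BPF_REG_AX, imm_rnd),               /* random 32-bit value */
        BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32),           /* shift to high half  */
        BPF_ALU64_REG(BPF_OR, insn->dst_reg, BPF_REG_AX), /* poison high 32 bits */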
      Suggested-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • tools: bpf: sync uapi header bpf.h · 9ce33e33
      Jiong Wang authored
      Sync new bpf prog load flag "BPF_F_TEST_RND_HI32" to tools/.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: introduce new bpf prog load flags "BPF_F_TEST_RND_HI32" · c240eff6
      Jiong Wang authored
      x86_64 and AArch64 are perhaps the two arches that run the bpf testsuite
      most frequently; however, the zero extension insertion pass is not enabled
      for them because of their hardware support.
      
      It is critical to guarantee the pass's correctness, as it is supposed to
      be enabled by default for a couple of other arches, for example PowerPC,
      SPARC, arm, NFP etc. Therefore, it would be very useful if there were a
      way to test this pass on, for example, x86_64.
      
      The test methodology employed by this set is "poisoning" useless bits. The
      high 32 bits of a definition are randomized if they are identified as not
      used by any later insn. Such randomization is only enabled in testing
      mode, which is gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".
      Suggested-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>