1. 26 Oct, 2021 12 commits
    • libbpf: Fix endianness detection in BPF_CORE_READ_BITFIELD_PROBED() · 45f2bebc
      Ilya Leoshkevich authored
      __BYTE_ORDER is supposed to be defined by libc, and __BYTE_ORDER__ by
      the compiler. bpf_core_read.h checks __BYTE_ORDER == __LITTLE_ENDIAN,
      which is true if neither is defined (undefined identifiers evaluate to
      0 in preprocessor conditionals), leading to incorrect behavior on
      big-endian hosts if libc headers are not included, which is often the
      case.
      
      Fixes: ee26dade ("libbpf: Add support for relocatable bitfields")
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211026010831.748682-2-iii@linux.ibm.com
    • Merge branch 'libbpf: add bpf_program__insns() accessor' · 124c6003
      Alexei Starovoitov authored
      Andrii Nakryiko says:
      
      ====================
      
      Add libbpf APIs to access BPF program instructions, both before and
      after libbpf processing (i.e., before and after bpf_object__load()).
      This allows inspecting what happens to BPF program assembly
      instructions as libbpf performs its processing magic.
      
      In more practical terms, this allows no-brainer BPF program cloning,
      which is needed when working with fentry/fexit BPF programs to attach
      the same BPF program code to multiple kernel functions. Currently, the
      kernel needs multiple copies of a BPF program, each loaded with its
      own target BTF ID. retsnoop is one such example that previously had to
      rely on the bpf_program__set_prep() API to hijack program
      instructions ([0] for before and after).
      
      Speaking of the bpf_program__set_prep() API and the whole concept of
      multiple-instance BPF programs in libbpf, all of that is scheduled for
      deprecation in v0.7 and removal in libbpf 1.0. It doesn't work well,
      it's cumbersome, and it will become more broken as libbpf adds more
      functionality. It doesn't seem to be used by anyone anyway (except for
      that retsnoop hack, which is now much cleaner with the new APIs, as
      can be seen in [0]).
      
        [0] https://github.com/anakryiko/retsnoop/pull/1
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: Deprecate ambiguously-named bpf_program__size() API · c4813e96
      Andrii Nakryiko authored
      The name of the API doesn't convey clearly that this size is in number
      of bytes (a separate comment was needed to make this clear in
      libbpf.h). Further, measuring the size of a BPF program in bytes is
      not exactly the best fit, because BPF programs always consist of
      8-byte instructions. As such, bpf_program__insn_cnt() is a better
      alternative in pretty much any imaginable case.

      So schedule bpf_program__size() for deprecation starting from v0.7; it
      will be removed in libbpf 1.0.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211025224531.1088894-5-andrii@kernel.org
    • libbpf: Deprecate multi-instance bpf_program APIs · e21d585c
      Andrii Nakryiko authored
      Schedule deprecation of a set of APIs that are related to multi-instance
      bpf_programs:
        - bpf_program__set_prep() ([0]);
        - bpf_program__{set,unset}_instance() ([1]);
        - bpf_program__nth_fd().
      
      These APIs are obscure, very niche, and don't seem to be used much in
      practice. bpf_program__set_prep() is pretty useless for anything but
      the simplest BPF programs, as it doesn't allow adjusting BPF program
      load attributes, among other things. In short, it has already
      bitrotted and will bitrot some more if not removed.
      
      With the bpf_program__insns() API, which gives access to
      post-processed BPF program instructions of any given entry-point BPF
      program, it's now possible to make whatever adjustments were possible
      with the set_prep() API before, and more. Given any such use case is
      automatically an advanced one, requiring users to stick to low-level
      bpf_prog_load() APIs and manage their own prog FDs is reasonable.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/299
        [1] Closes: https://github.com/libbpf/libbpf/issues/300

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211025224531.1088894-4-andrii@kernel.org
    • libbpf: Add ability to fetch bpf_program's underlying instructions · 65a7fa2e
      Andrii Nakryiko authored
      Add APIs providing read-only access to bpf_program BPF instructions
      ([0]). This is useful for diagnostic purposes, but it also allows
      cleaner support for cloning BPF programs after libbpf has done all the
      FD resolution, CO-RE relocations, subprog instruction appending, etc.
      Currently, cloning a BPF program is possible only by hijacking the
      half-broken bpf_program__set_prep() API, which doesn't really work
      well for anything but the most primitive programs. For instance, the
      set_prep() API doesn't allow adjusting BPF program load parameters,
      which is necessary for loading fentry/fexit BPF programs (the case
      where BPF program cloning is a necessity when doing some sort of
      mass-attachment functionality).
      
      Given the bpf_program__set_prep() API is set to be deprecated, having
      a cleaner alternative is a must. libbpf internally already keeps track
      of a linear array of struct bpf_insn, so it's not hard to expose it.
      The only gotcha is that libbpf previously freed the instructions array
      at bpf_object load time, which would make this API much less useful,
      because a lot of changes to the instructions are made by libbpf
      between bpf_object__open() and bpf_object__load().
      
      So this patch makes libbpf hold onto prog->insns array even after BPF
      program loading. I think this is a small price for added functionality
      and improved introspection of BPF program code.
      
      See retsnoop PR ([1]) for how it can be used in practice and code
      savings compared to relying on bpf_program__set_prep().
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/298
        [1] https://github.com/anakryiko/retsnoop/pull/1

      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org
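A hedged sketch (not taken from the patch) of the diagnostic use case: open and load an object, then walk the post-processing instructions of each program. bpf_program__insns() and bpf_program__insn_cnt() are the accessors this commit adds; the object file name "prog.bpf.o" is hypothetical, and the snippet assumes linking against a libbpf with these APIs, so it is not runnable standalone.

```c
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	/* "prog.bpf.o" is a placeholder; real code would take a path. */
	struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", NULL);
	const struct bpf_insn *insns;
	struct bpf_program *prog;
	size_t i, cnt;

	if (libbpf_get_error(obj))
		return 1;
	if (bpf_object__load(obj))
		return 1;

	bpf_object__for_each_program(prog, obj) {
		/* Post-load view: CO-RE relocations applied, subprogs
		 * appended, map FDs resolved - exactly what this commit
		 * preserves past bpf_object__load(). */
		insns = bpf_program__insns(prog);
		cnt = bpf_program__insn_cnt(prog);
		printf("%s: %zu insns\n", bpf_program__name(prog), cnt);
		for (i = 0; i < cnt; i++)
			printf("  insn %zu: code 0x%02x\n", i, insns[i].code);
	}
	bpf_object__close(obj);
	return 0;
}
```

The same pair of accessors is what makes the cloning workflow possible: the returned instruction array can be handed to a low-level program-load call with a different target BTF ID per kernel function.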
    • libbpf: Fix off-by-one bug in bpf_core_apply_relo() · de5d0dce
      Andrii Nakryiko authored
      Fix the instruction index validity check, which had an off-by-one
      error.
      
      Fixes: 3ee4f533 ("libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211025224531.1088894-2-andrii@kernel.org
    • Merge branch 'bpftool: Switch to libbpf's hashmap for referencing BPF objects' · 9327acd0
      Andrii Nakryiko authored
      Quentin Monnet says:
      
      ====================
      
      When listing BPF objects, bpftool can print a number of properties of
      items holding references to these objects. For example, it can show
      pinned paths for BPF programs, maps, and links; or programs and maps
      using a given BTF object; or the names and PIDs of processes
      referencing BPF objects. To collect this information, bpftool uses
      hash maps (to be clear: the data structures inside bpftool; we are not
      talking about BPF maps). It uses the implementation available from the
      kernel, picked up from tools/include/linux/hashtable.h.
      
      This patchset converts bpftool's hash maps to a distinct implementation
      instead, the one coming with libbpf. The main motivation for this change is
      that it should ease the path towards a potential out-of-tree mirror for
      bpftool, like the one libbpf already has. Although it's not perfect to
      depend on libbpf's internal components, bpftool is intimately tied with the
      library anyway, and this looks better than depending too much on (non-UAPI)
      kernel headers.
      
      The first two patches contain preparatory work on the Makefile and on
      the initialisation of the hash maps for collecting pinned paths for
      objects. Then the transition is split into several steps, one for each
      kind of property whose collection is backed by hash maps.
      
      v2:
        - Move hashmap cleanup for pinned paths for links from do_detach() to
          do_show().
        - Handle errors on hashmap__append() (in three of the patches).
        - Rename bpftool_hash_fn() and bpftool_equal_fn() as hash_fn_for_key_id()
          and equal_fn_for_key_id(), respectively.
        - Add curly braces for hashmap__for_each_key_entry() { } in
          show_btf_plain() and show_btf_json(), where the flow was difficult to
          read.
      ====================
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
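A hedged sketch of the libbpf hashmap pattern this set converts bpftool to, modeled on the hash_fn_for_key_id()/equal_fn_for_key_id() helpers named in the v2 notes. It assumes libbpf's internal hashmap.h of this era (void-pointer keys and values in hashmap__new(), hashmap__append(), and hashmap__for_each_key_entry()); the pinned paths and the u32_as_hash_field() helper are illustrative, and the snippet needs libbpf's internal sources to build, so it is not runnable standalone.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include "hashmap.h"	/* libbpf's internal tools/lib/bpf/hashmap.h */

/* u32 object IDs are stashed directly in the pointer-sized key slot,
 * so hashing and comparison operate on the key value itself. */
static size_t hash_fn_for_key_id(const void *key, void *ctx)
{
	return (size_t)key;
}

static bool equal_fn_for_key_id(const void *k1, const void *k2, void *ctx)
{
	return k1 == k2;
}

static void *u32_as_hash_field(unsigned int x)
{
	return (void *)(size_t)x;
}

int main(void)
{
	struct hashmap *map = hashmap__new(hash_fn_for_key_id,
					   equal_fn_for_key_id, NULL);
	struct hashmap_entry *entry;

	if (!map)
		return 1;

	/* append allows duplicate keys: one BPF object ID can map to
	 * several pinned paths (paths below are made up). */
	if (hashmap__append(map, u32_as_hash_field(42),
			    (void *)"/sys/fs/bpf/prog_a") ||
	    hashmap__append(map, u32_as_hash_field(42),
			    (void *)"/sys/fs/bpf/prog_b"))
		return 1;

	hashmap__for_each_key_entry(map, entry, u32_as_hash_field(42))
		printf("id 42 -> %s\n", (const char *)entry->value);

	hashmap__free(map);
	return 0;
}
```

The error checks on hashmap__append() mirror the v2 changelog item above about handling its failures.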
    • bpftool: Switch to libbpf's hashmap for PIDs/names references · d6699f8e
      Quentin Monnet authored
      In order to show PIDs and names for processes holding references to BPF
      programs, maps, links, or BTF objects, bpftool creates hash maps to
      store all relevant information. This commit is part of a set that
      transitions from the kernel's hash map implementation to the one coming
      with libbpf.
      
      The motivation is to make bpftool less dependent on kernel headers, to
      ease the path to a potential out-of-tree mirror, like the one libbpf
      has.
      
      This is the third and final step of the transition, in which we convert
      the hash maps used for storing the information about the processes
      holding references to BPF objects (programs, maps, links, BTF), and at
      last we drop the inclusion of tools/include/linux/hashtable.h.
      
      Note: Checkpatch complains about the use of __weak declarations, and
      about missing empty lines after the bunch of empty function
      declarations used when compiling without the BPF skeletons (none of
      these were introduced in this patch). We want to keep things as they
      are, and the reports should be safe to ignore.
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211023205154.6710-6-quentin@isovalent.com
    • bpftool: Switch to libbpf's hashmap for programs/maps in BTF listing · 2828d0d7
      Quentin Monnet authored
      In order to show BPF programs and maps using BTF objects when the latter
      are being listed, bpftool creates hash maps to store all relevant items.
      This commit is part of a set that transitions from the kernel's hash map
      implementation to the one coming with libbpf.
      
      The motivation is to make bpftool less dependent on kernel headers, to
      ease the path to a potential out-of-tree mirror, like the one libbpf
      has.
      
      This commit focuses on the two hash maps used by bpftool when listing
      BTF objects to store references to programs and maps, converting them
      to libbpf's implementation.
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211023205154.6710-5-quentin@isovalent.com
    • bpftool: Switch to libbpf's hashmap for pinned paths of BPF objects · 8f184732
      Quentin Monnet authored
      In order to show pinned paths for BPF programs, maps, or links when
      listing them with the "-f" option, bpftool creates hash maps to store
      all relevant paths under the bpffs. So far, it would rely on the
      kernel implementation (from tools/include/linux/hashtable.h).
      
      We can make bpftool rely on libbpf's implementation instead. The
      motivation is to make bpftool less dependent on kernel headers, to
      ease the path to a potential out-of-tree mirror, like the one libbpf
      has.
      
      This commit is the first step of the conversion: the hash maps for
      pinned paths for programs, maps, and links are converted to libbpf's
      hashmap.{c,h}. The other hash maps, used for the PIDs of processes
      holding references to BPF objects, are left unchanged for now. On the
      build side, this requires adding a dependency on a second header
      internal to libbpf, and making it a dependency for the bootstrap
      bpftool version as well. The rest of the changes are a rather
      straightforward conversion.
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211023205154.6710-4-quentin@isovalent.com
    • bpftool: Do not expose and init hash maps for pinned path in main.c · 46241271
      Quentin Monnet authored
      BPF programs, maps, and links can all be listed with their pinned
      paths by bpftool when the "-f" option is provided. To do so, bpftool
      builds hash maps containing all pinned paths for each kind of object.
      
      These three hash maps are always initialised in main.c and exposed
      through main.h. There appears to be no particular reason to do so: we
      can just as well make them static to the files that need them (prog.c,
      map.c, and link.c, respectively), and initialise them only when we
      want to show objects and the "-f" switch is provided.

      This may prevent unnecessary memory allocations if the implementation
      of the hash maps were to change in the future.
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211023205154.6710-3-quentin@isovalent.com
    • bpftool: Remove Makefile dep. on $(LIBBPF) for $(LIBBPF_INTERNAL_HDRS) · 8b6c4624
      Quentin Monnet authored
      The dependency is only useful to make sure that the $(LIBBPF_HDRS_DIR)
      directory is created before we try to install the required libbpf
      internal header locally. Let's create this directory properly instead.

      This is in preparation for making $(LIBBPF_INTERNAL_HDRS) a dependency
      of the bootstrap bpftool version, in which case we want no dependency
      on $(LIBBPF).
      Signed-off-by: Quentin Monnet <quentin@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211023205154.6710-2-quentin@isovalent.com
  2. 25 Oct, 2021 5 commits
  3. 23 Oct, 2021 8 commits
  4. 22 Oct, 2021 15 commits