  1. 06 Jul, 2022 4 commits
  2. 29 Jun, 2022 1 commit
  3. 28 Jun, 2022 9 commits
  4. 24 Jun, 2022 1 commit
  5. 17 Jun, 2022 1 commit
  6. 14 Jun, 2022 1 commit
  7. 09 Jun, 2022 1 commit
  8. 07 Jun, 2022 2 commits
  9. 03 Jun, 2022 1 commit
  10. 02 Jun, 2022 4 commits
  11. 23 May, 2022 1 commit
  12. 16 May, 2022 1 commit
  13. 13 May, 2022 1 commit
      libbpf: Add safer high-level wrappers for map operations · 737d0646
      Andrii Nakryiko authored
      Add high-level API wrappers for the most common and typical BPF map
      operations that work directly on instances of struct bpf_map * (so
      you don't have to call bpf_map__fd()) and validate key/value size
      expectations.
      
      These helpers require users to specify key (and value, where
      appropriate) sizes when performing lookup/update/delete/etc. This
      forces the user to actually think about and validate those sizes. This
      is a good thing, as the kernel expects the user to implicitly provide
      correctly sized key/value buffers and will just read/write the
      necessary amount of data. If the user doesn't set up buffers correctly
      (which has bitten people especially with per-CPU maps), the kernel
      either randomly overwrites stack data or returns -EFAULT, depending on
      the user's luck and circumstances. These high-level APIs are meant to
      prevent such unpleasant and hard-to-debug bugs.
      
      This patch also adds the bpf_map_delete_elem_flags() low-level API and
      requires passing flags to the bpf_map__delete_elem() API for
      consistency across all similar APIs, even though the kernel currently
      doesn't expect any extra flags for the BPF_MAP_DELETE_ELEM operation.
      
      List of map operations that get these high-level APIs:
      
        - bpf_map_lookup_elem;
        - bpf_map_update_elem;
        - bpf_map_delete_elem;
        - bpf_map_lookup_and_delete_elem;
        - bpf_map_get_next_key.
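
      For illustration, a minimal sketch of how the new bpf_map__*()
      wrappers might be used (assuming a map with 4-byte keys and 8-byte
      values; bump_counter() is a hypothetical helper, not part of this
      patch):

        #include <bpf/libbpf.h>

        /* Sketch: on a key/value size mismatch the wrappers fail with
         * -EINVAL instead of silently corrupting memory.
         */
        static int bump_counter(struct bpf_map *map)
        {
                __u32 key = 0;
                __u64 value;
                int err;

                err = bpf_map__lookup_elem(map, &key, sizeof(key),
                                           &value, sizeof(value), 0);
                if (err)
                        return err;

                value += 1;
                err = bpf_map__update_elem(map, &key, sizeof(key),
                                           &value, sizeof(value), BPF_ANY);
                if (err)
                        return err;

                /* flags are required here for API consistency, even though
                 * the kernel currently expects no extra flags for
                 * BPF_MAP_DELETE_ELEM
                 */
                return bpf_map__delete_elem(map, &key, sizeof(key), 0);
        }
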
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220512220713.2617964-1-andrii@kernel.org
  14. 11 May, 2022 3 commits
  15. 09 May, 2022 1 commit
  16. 29 Apr, 2022 3 commits
  17. 28 Apr, 2022 2 commits
  18. 26 Apr, 2022 3 commits
      libbpf: Fix up verifier log for unguarded failed CO-RE relos · 9fdc4273
      Andrii Nakryiko authored
      Teach libbpf to post-process BPF verifier log on BPF program load
      failure and detect known error patterns to provide user with more
      context.
      
      Currently there is one such common situation: an "unguarded" failed BPF
      CO-RE relocation. While a failing CO-RE relocation is expected, it is
      expected to be properly guarded in BPF code such that the BPF verifier
      always eliminates the BPF instructions corresponding to such failed
      CO-RE relos as dead code. In cases where the user failed to take such
      precautions, the BPF verifier provides the best log it can:
      
        123: (85) call unknown#195896080
        invalid func unknown#195896080
      
      Such an incomprehensible error is due to libbpf "poisoning" the BPF
      instruction that corresponds to the failed CO-RE relocation by
      replacing it with an invalid `call 0xbad2310` instruction (195896080 ==
      0xbad2310 reads "bad relo" if you squint hard enough).
      
      Luckily, libbpf has all the necessary information to look up the CO-RE
      relocation that failed and provide a more human-readable description of
      what's going on:
      
        5: <invalid CO-RE relocation>
        failed to resolve CO-RE relocation <byte_off> [6] struct task_struct___bad.fake_field_subprog (0:2 @ offset 8)
      
      This hopefully makes it much easier to understand what's wrong with
      the user's BPF program without googling magic constants.
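
      For reference, properly guarding a CO-RE relocation on the BPF side
      might look like the sketch below (assuming a vmlinux.h-based build;
      struct task_struct___old and its field are hypothetical):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_core_read.h>

        /* The ___old suffix marks a hypothetical CO-RE "flavor" of struct
         * task_struct with a field that may not exist on running kernels.
         */
        struct task_struct___old {
                int legacy_field;
        } __attribute__((preserve_access_index));

        SEC("tp/syscalls/sys_enter_nanosleep")
        int probe(void *ctx)
        {
                struct task_struct___old *t;
                int val = 0;

                t = (void *)bpf_get_current_task();
                /* guard: if the relocation failed, the verifier proves the
                 * branch below dead and never reaches the poisoned insn
                 */
                if (bpf_core_field_exists(t->legacy_field))
                        val = BPF_CORE_READ(t, legacy_field);
                return val;
        }

        char LICENSE[] SEC("license") = "GPL";
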
      
      This BPF verifier log fixup is set up to be extensible and is going to
      be used for at least one other upcoming libbpf feature in follow-up
      patches. Libbpf parses the lines of the BPF verifier log starting from
      the very end, currently processing up to the last 10 lines while
      looking for familiar patterns. This avoids wasting lots of CPU time
      processing huge verifier logs (especially at the log_level=2 verbosity
      level). The actual verification error should normally be found in the
      last few lines, so this should work reliably.
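
      A rough sketch of that scan-from-the-end idea (a hypothetical
      standalone helper, not libbpf's actual implementation):

        #include <stdbool.h>
        #include <string.h>

        /* Visit up to max_lines lines of a NUL-terminated log, last line
         * first; cb() gets a pointer to the start of each line and returns
         * true once it recognizes a known pattern.
         */
        static void scan_log_tail(const char *log, int max_lines,
                                  bool (*cb)(const char *line))
        {
                const char *cur = log + strlen(log);

                while (max_lines-- > 0 && cur > log) {
                        const char *line = cur - 1;

                        /* step over the newline ending the previous line */
                        if (line > log && *line == '\n')
                                line--;
                        /* rewind to the first character of this line */
                        while (line > log && line[-1] != '\n')
                                line--;
                        if (cb(line))
                                break;
                        cur = line;
                }
        }
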
      
      If libbpf needs to expand the log beyond the available log_buf_size,
      it truncates the end of the verifier log. Given that the verifier log
      normally ends with something like:
      
        processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
      
      ... truncating this on program load error isn't too bad (the end user
      can always increase the log size if they need the complete log).
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220426004511.2691730-10-andrii@kernel.org
      libbpf: Record subprog-resolved CO-RE relocations unconditionally · 185cfe83
      Andrii Nakryiko authored
      Previously, libbpf recorded CO-RE relocations with insn_idx resolved
      according to finalized subprog locations (subprogs are appended at the
      end of the entry BPF program) to simplify the job of the light skeleton
      generator.
      
      This is necessary because once subprogs' instructions are appended to
      the main (entry) BPF program, all subprog instruction indices are
      shifted, and that shift is different for each entry BPF program, so
      it's generally impossible to map a final absolute insn_idx in the
      finalized BPF program back to its original location inside a
      subprogram.
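
      For example (with hypothetical sizes): a subprog's instruction #3 lands
      at absolute index 13 when the subprog is appended to a 10-instruction
      entry program, but at absolute index 23 when appended to a
      20-instruction one, so an absolute index alone cannot identify the
      original subprog instruction.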
      
      This information is now going to be used not only during light skeleton
      generation, but also to map an absolute instruction index to a
      subprog's instruction and its corresponding CO-RE relocation. So start
      recording these relocations always, not just when obj->gen_loader is
      set.
      
      This information is going to be freed at the end of the
      bpf_object__load() step, as before (but this can change in the future
      if there is a need for this information after the load step).
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220426004511.2691730-7-andrii@kernel.org
      libbpf: Avoid joining .BTF.ext data with BPF programs by section name · 11d5daa8
      Andrii Nakryiko authored
      Instead of using ELF section names as a joining key between .BTF.ext
      and the corresponding BPF programs, pre-build a .BTF.ext section number
      to ELF section index mapping during bpf_object__open() and use it later
      for matching .BTF.ext information (func/line info or CO-RE relocations)
      to its respective BPF programs and subprograms.
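
      A rough sketch of the pre-built mapping idea (hypothetical standalone
      types and names, not libbpf's actual code):

        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical stand-in for a per-section .BTF.ext record. */
        struct btf_ext_sec {
                const char *sec_name;
        };

        /* Build, once at open time, an array mapping each .BTF.ext section
         * record to its ELF section index (-1 if no match); later joins use
         * these indices instead of (possibly rewritten) section names.
         */
        static int *build_sec_idx_map(const struct btf_ext_sec *secs,
                                      int n_secs,
                                      const char * const *elf_sec_names,
                                      int n_elf)
        {
                int *map = calloc(n_secs, sizeof(*map));

                if (!map)
                        return NULL;

                for (int i = 0; i < n_secs; i++) {
                        map[i] = -1;
                        for (int j = 0; j < n_elf; j++) {
                                if (strcmp(secs[i].sec_name,
                                           elf_sec_names[j]) == 0) {
                                        map[i] = j;
                                        break;
                                }
                        }
                }
                return map;
        }
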
      
      This simplifies the corresponding joining logic and lets libbpf do
      manipulations with a BPF program's ELF sections, like dropping the
      leading '?' character for non-autoloaded programs. The original joining
      logic in bpf_object__relocate_core() (see the relevant comment that's
      now removed) was never elegant, so this is a good improvement
      regardless. But it also avoids unnecessary internal assumptions about
      preserving the original ELF section name as the BPF program's section
      name (an assumption that was broken when SEC("?abc") support was
      added).
      
      Fixes: a3820c48 ("libbpf: Support opting out from autoloading BPF programs declaratively")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20220426004511.2691730-5-andrii@kernel.org