1. 10 Dec, 2021 6 commits
    • libbpf: Preserve kernel error code and remove kprobe prog type guessing · 2eda2145
      Andrii Nakryiko authored
      Instead of rewriting the error code returned by the kernel during prog
      load with libbpf-specific variants, pass through the original error.
      
      There is now also no need for a generic -LIBBPF_ERRNO__LOAD fallback
      error, as bpf_prog_load() guarantees that errno will be properly set no
      matter what.
      
      Also drop the completely outdated and pretty useless
      BPF_PROG_TYPE_KPROBE guessing logic. It's not necessary, nor is it
      helpful in modern BPF applications.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211209193840.1248570-7-andrii@kernel.org
      2eda2145
    • libbpf: Improve logging around BPF program loading · ad9a7f96
      Andrii Nakryiko authored
      Add missing "prog '%s': " prefixes in a few places and consistently use
      markers for the beginning and end of program load logs. Here's an
      example of the log output:
      
      libbpf: prog 'handler': BPF program load failed: Permission denied
      libbpf: -- BEGIN PROG LOAD LOG ---
      arg#0 reference type('UNKNOWN ') size cannot be determined: -22
      ; out1 = in1;
      0: (18) r1 = 0xffffc9000cdcc000
      2: (61) r1 = *(u32 *)(r1 +0)
      
      ...
      
      81: (63) *(u32 *)(r4 +0) = r5
       R1_w=map_value(id=0,off=16,ks=4,vs=20,imm=0) R4=map_value(id=0,off=400,ks=4,vs=16,imm=0)
      invalid access to map value, value_size=16 off=400 size=4
      R4 min value is outside of the allowed memory range
      processed 63 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
       -- END PROG LOAD LOG --
      libbpf: failed to load program 'handler'
      libbpf: failed to load object 'test_skeleton'
      
      The entire verifier log, including the BEGIN and END markers, is now
      always output during a single print callback call. This should make it
      much easier to post-process or parse, if necessary. It's not an
      explicit API guarantee, but it can reasonably be expected to stay that
      way.
      
      Also, __bpf_object__open is renamed to bpf_object_open(), as it's
      always an adventure to find the exact function that implements
      bpf_object's open phase; so drop the double underscores and use the
      internal libbpf naming convention.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211209193840.1248570-6-andrii@kernel.org
      ad9a7f96
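      Since the markers now bracket the whole verifier log in a single print
      callback call, extracting just the verifier portion becomes a simple
      substring scan. A minimal standalone sketch (the helper name and marker
      handling are hypothetical, not a libbpf API; in a real program this
      would be driven from a callback installed with libbpf_set_print()):

      ```c
      #include <stdio.h>
      #include <string.h>

      #define BEGIN_MARK "-- BEGIN PROG LOAD LOG ---"
      #define END_MARK   "-- END PROG LOAD LOG --"

      /* Copy the text between the BEGIN/END markers in `msg` into `out`
       * (of size `out_sz`).  Returns the number of bytes copied, or -1 if
       * either marker is missing. */
      static int extract_verifier_log(const char *msg, char *out, size_t out_sz)
      {
          const char *beg = strstr(msg, BEGIN_MARK);
          const char *end = strstr(msg, END_MARK);

          if (!beg || !end || end < beg)
              return -1;
          beg += strlen(BEGIN_MARK);

          size_t len = (size_t)(end - beg);
          if (len >= out_sz)
              len = out_sz - 1; /* truncate to fit the caller's buffer */
          memcpy(out, beg, len);
          out[len] = '\0';
          return (int)len;
      }
      ```

      Because the markers arrive in one callback call, no cross-call state
      machine is needed; a stateful line-by-line variant would be required if
      the log were split across calls.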
    • libbpf: Allow passing user log setting through bpf_object_open_opts · e0e3ea88
      Andrii Nakryiko authored
      Allow users to provide their own custom log_buf, log_size, and
      log_level at the bpf_object level through bpf_object_open_opts. This
      log_buf will be used during BTF loading. A subsequent patch will use
      the same log_buf during BPF program loading, unless overridden at the
      per-bpf_program level.
      
      When such a custom log_buf is provided, libbpf won't attempt to retry
      loading BTF with its own log buffer to capture the kernel's error log
      output. The user is responsible for providing a big enough buffer;
      otherwise they run the risk of getting an -ENOSPC error from the bpf()
      syscall.
      
      See also comments in bpf_object_open_opts regarding log_level and
      log_buf interactions.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211209193840.1248570-5-andrii@kernel.org
      e0e3ea88
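      A caller that hits the -ENOSPC case described above can simply grow its
      buffer and retry. A standalone sketch of that pattern, where
      `fake_btf_load` is a purely hypothetical stand-in for the BPF_BTF_LOAD
      syscall (it fails with -ENOSPC unless the buffer can hold its
      fixed-size validation log):

      ```c
      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define LOG_NEEDED 64 /* bytes the simulated kernel wants to write */

      static int fake_btf_load(char *log_buf, size_t log_size)
      {
          if (log_size < LOG_NEEDED)
              return -ENOSPC;
          snprintf(log_buf, log_size,
                   "magic: 0xeb9f ... Total section length too long");
          return 0;
      }

      /* Retry with a doubled buffer until the validation log fits. */
      static int load_with_growing_buf(char **log_buf, size_t *log_size)
      {
          for (;;) {
              int err = fake_btf_load(*log_buf, *log_size);

              if (err != -ENOSPC)
                  return err;
              *log_size *= 2;
              *log_buf = realloc(*log_buf, *log_size);
              if (!*log_buf)
                  return -ENOMEM;
          }
      }
      ```

      This is caller-side logic only: per the commit, libbpf itself will not
      retry on the user's behalf once a custom log_buf is supplied.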
    • libbpf: Allow passing preallocated log_buf when loading BTF into kernel · 1a190d1e
      Andrii Nakryiko authored
      Add a libbpf-internal btf_load_into_kernel() that allows a preallocated
      log_buf and a custom log_level to be passed to the kernel during the
      BPF_BTF_LOAD call. When a custom log_buf is provided,
      btf_load_into_kernel() won't attempt a retry with an automatically
      allocated internal temporary buffer to capture the BTF validation log.
      
      It's important to note the relation between log_buf and log_level,
      which slightly deviates from the stricter kernel logic. From the
      kernel's POV, if log_buf is specified, log_level has to be > 0, and
      vice versa. While the kernel has good reasons to request such sanity
      checks, in practice this is a bit inconvenient and restrictive for
      libbpf's high-level bpf_object APIs.
      
      So libbpf will allow setting a non-NULL log_buf with log_level == 0.
      This is fine and means attempting to load BTF without logging
      requested, but if it fails, retrying the load with the custom log_buf
      and log_level 1. Similar logic will be implemented for program loading.
      In practice this means that users can provide a custom log buffer just
      in case an error happens, without requesting slower verbose logging all
      the time. This is also consistent with libbpf's behavior when a custom
      log_buf is not set: libbpf first tries to load everything with
      log_level=0, and only if an error happens does it allocate an internal
      log buffer and retry with log_level=1.
      
      Also, while at it, make the BTF validation log more obvious and follow
      the log pattern libbpf uses for dumping the BPF verifier log during
      BPF_PROG_LOAD. BTF loading resulting in an error will look like this:
      
      libbpf: BTF loading error: -22
      libbpf: -- BEGIN BTF LOAD LOG ---
      magic: 0xeb9f
      version: 1
      flags: 0x0
      hdr_len: 24
      type_off: 0
      type_len: 1040
      str_off: 1040
      str_len: 2063598257
      btf_total_size: 1753
      Total section length too long
      -- END BTF LOAD LOG --
      libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.
      
      This makes it much easier to find relevant parts in libbpf log output.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211209193840.1248570-4-andrii@kernel.org
      1a190d1e
    • libbpf: Add OPTS-based bpf_btf_load() API · 0ed08d67
      Andrii Nakryiko authored
      Similar to the previous bpf_prog_load() and bpf_map_create() APIs, add
      a bpf_btf_load() API which takes an optional OPTS struct. Schedule
      bpf_load_btf() for deprecation in v0.8 ([0]).
      
      This makes the naming consistent with the BPF_BTF_LOAD command, sets up
      the API for future extensibility, moves the option parameters
      (log-related fields) into the optional OPTS struct, and also allows
      passing log_level directly.
      
      It also removes log buffer auto-allocation logic from low-level API
      (consistent with bpf_prog_load() behavior), but preserves a special
      treatment of log_level == 0 with non-NULL log_buf, which matches
      low-level bpf_prog_load() and high-level libbpf APIs for BTF and program
      loading behaviors.
      
        [0] Closes: https://github.com/libbpf/libbpf/issues/419
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211209193840.1248570-3-andrii@kernel.org
      0ed08d67
    • libbpf: Fix bpf_prog_load() log_buf logic for log_level 0 · 4cf23a3c
      Andrii Nakryiko authored
      To unify libbpf APIs behavior w.r.t. log_buf and log_level, fix
      bpf_prog_load() to follow the same logic as bpf_btf_load() and
      high-level bpf_object__load() API will follow in the subsequent patches:
        - if log_level is 0 and non-NULL log_buf is provided by a user, attempt
          load operation initially with no log_buf and log_level set;
        - if successful, we are done, return new FD;
        - on error, retry the load operation with log_level bumped to 1 and
          log_buf set; this way verbose logging will be requested only when we
          are sure that there is a failure, but will be fast in the
          common/expected success case.
      
      Of course, the user can still specify log_level > 0 from the very
      beginning to force log collection.
      Suggested-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20211209193840.1248570-2-andrii@kernel.org
      4cf23a3c
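      The retry flow described in this commit can be sketched as standalone
      logic. Here `fake_prog_load` is a hypothetical stand-in for the
      BPF_PROG_LOAD syscall (not a libbpf function): it always fails, and
      writes a verifier log only when log_level > 0, mirroring the fast-path
      behavior the commit relies on:

      ```c
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      static int fake_prog_load(int log_level, char *log_buf, size_t log_size)
      {
          if (log_level > 0 && log_buf)
              snprintf(log_buf, log_size,
                       "R4 min value is outside of the allowed memory range");
          return -EACCES; /* this simulated program never verifies */
      }

      /* If the user passed a log_buf with log_level == 0, try the fast path
       * (no logging) first, and retry with log_level = 1 only on failure. */
      static int prog_load_with_lazy_log(int log_level, char *log_buf,
                                         size_t log_size)
      {
          int fd;

          if (!(log_level == 0 && log_buf))
              return fake_prog_load(log_level, log_buf, log_size);

          fd = fake_prog_load(0, NULL, 0);  /* fast path: no log requested */
          if (fd >= 0)
              return fd;                    /* success, no log needed */
          /* Failure: retry once with logging on to capture the reason. */
          return fake_prog_load(1, log_buf, log_size);
      }
      ```

      The design point is the same as in the commit: the common success case
      pays no verbose-logging cost, and the second (logged) load only runs
      when there is already a known failure to explain.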
  2. 09 Dec, 2021 3 commits
  3. 08 Dec, 2021 1 commit
  4. 07 Dec, 2021 4 commits
  5. 06 Dec, 2021 3 commits
  6. 05 Dec, 2021 1 commit
    • bpftool: Add debug mode for gen_loader. · 942df4dc
      Alexei Starovoitov authored
      Make the -d flag functional for gen_loader-style program loading.
      
      For example:
      $ bpftool prog load -L -d test_d_path.o
      ... // will print:
      libbpf: loading ./test_d_path.o
      libbpf: elf: section(3) fentry/security_inode_getattr, size 280, link 0, flags 6, type=1
      ...
      libbpf: prog 'prog_close': found data map 0 (test_d_p.bss, sec 7, off 0) for insn 30
      libbpf: gen: load_btf: size 5376
      libbpf: gen: map_create: test_d_p.bss idx 0 type 2 value_type_id 118
      libbpf: map 'test_d_p.bss': created successfully, fd=0
      libbpf: gen: map_update_elem: idx 0
      libbpf: sec 'fentry/filp_close': found 1 CO-RE relocations
      libbpf: record_relo_core: prog 1 insn[15] struct file 0:1 final insn_idx 15
      libbpf: gen: prog_load: type 26 insns_cnt 35 progi_idx 0
      libbpf: gen: find_attach_tgt security_inode_getattr 12
      libbpf: gen: prog_load: type 26 insns_cnt 37 progi_idx 1
      libbpf: gen: find_attach_tgt filp_close 12
      libbpf: gen: finish 0
      ... // at this point libbpf finished generating loader program
         0: (bf) r6 = r1
         1: (bf) r1 = r10
         2: (07) r1 += -136
         3: (b7) r2 = 136
         4: (b7) r3 = 0
         5: (85) call bpf_probe_read_kernel#113
         6: (05) goto pc+104
      ... // this is the assembly dump of the loader program
       390: (63) *(u32 *)(r6 +44) = r0
       391: (18) r1 = map[idx:0]+5584
       393: (61) r0 = *(u32 *)(r1 +0)
       394: (63) *(u32 *)(r6 +24) = r0
       395: (b7) r0 = 0
       396: (95) exit
      err 0  // the loader program was loaded and executed successfully
      (null)
      func#0 @0
      ...  // CO-RE in the kernel logs:
      CO-RE relocating STRUCT file: found target candidate [500]
      prog '': relo #0: kind <byte_off> (0), spec is [8] STRUCT file.f_path (0:1 @ offset 16)
      prog '': relo #0: matching candidate #0 [500] STRUCT file.f_path (0:1 @ offset 16)
      prog '': relo #0: patched insn #15 (ALU/ALU64) imm 16 -> 16
      vmlinux_cand_cache:[11]file(500),
      module_cand_cache:
      ... // verifier logs when it was checking test_d_path.o program:
      R1 type=ctx expected=fp
      0: R1=ctx(id=0,off=0,imm=0) R10=fp0
      ; int BPF_PROG(prog_close, struct file *file, void *id)
      0: (79) r6 = *(u64 *)(r1 +0)
      func 'filp_close' arg0 has btf_id 500 type STRUCT 'file'
      1: R1=ctx(id=0,off=0,imm=0) R6_w=ptr_file(id=0,off=0,imm=0) R10=fp0
      ; pid_t pid = bpf_get_current_pid_tgid() >> 32;
      1: (85) call bpf_get_current_pid_tgid#14
      
      ... // if there are multiple programs being loaded by the loader program
      ... // only the last program in the elf file will be printed, since
      ... // the same verifier log_buf is used for all PROG_LOAD commands.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20211204194623.27779-1-alexei.starovoitov@gmail.com
      942df4dc
  7. 04 Dec, 2021 1 commit
  8. 03 Dec, 2021 3 commits
  9. 02 Dec, 2021 18 commits