1. 21 Sep, 2022 6 commits
    • bpf: Add libbpf logic for user-space ring buffer · b66ccae0
      David Vernet authored
      Now that all of the logic is in place in the kernel to support user-space
      produced ring buffers, we can add the user-space logic to libbpf. This
      patch therefore adds the following public symbols to libbpf:
      
      struct user_ring_buffer *
      user_ring_buffer__new(int map_fd,
      		      const struct user_ring_buffer_opts *opts);
      void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
      void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb,
                                               __u32 size, int timeout_ms);
      void user_ring_buffer__submit(struct user_ring_buffer *rb, void *sample);
      void user_ring_buffer__discard(struct user_ring_buffer *rb, void *sample);
      void user_ring_buffer__free(struct user_ring_buffer *rb);
      
      A user-space producer must first create a struct user_ring_buffer * object
      with user_ring_buffer__new(), and can then reserve samples in the
      ring buffer using one of the following two symbols:
      
      void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
      void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb,
                                               __u32 size, int timeout_ms);
      
      With user_ring_buffer__reserve(), a pointer to a 'size'-byte region of the
      ring buffer will be returned if sufficient space is available in the buffer.
      user_ring_buffer__reserve_blocking() provides similar semantics, but will
      block for up to 'timeout_ms' in epoll_wait if there is insufficient space
      in the buffer. This function has the guarantee from the kernel that it will
      receive at least one event notification per invocation of
      bpf_user_ringbuf_drain(), provided that at least one sample is drained and
      the BPF program did not pass the BPF_RB_NO_WAKEUP flag to
      bpf_user_ringbuf_drain().
      
      Once a sample is reserved, it must either be committed to the ring buffer
      with user_ring_buffer__submit(), or discarded with
      user_ring_buffer__discard().
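
      As a rough illustration of how these symbols fit together, below is a
      minimal user-space producer sketch showing the reserve/submit flow. The
      object file name, map name ("user_ringbuf"), sample layout, and timeout
      are assumptions made for the example, not part of this patch:

      #include <string.h>
      #include <bpf/libbpf.h>

      /* Hypothetical sample layout; must match what the BPF program expects. */
      struct msg {
              int id;
              char payload[64];
      };

      int main(void)
      {
              struct user_ring_buffer *rb;
              struct bpf_object *obj;
              struct msg *m;
              int map_fd;

              /* "consumer.bpf.o" and "user_ringbuf" are illustrative names. */
              obj = bpf_object__open_file("consumer.bpf.o", NULL);
              if (!obj || bpf_object__load(obj))
                      return 1;

              map_fd = bpf_object__find_map_fd_by_name(obj, "user_ringbuf");
              if (map_fd < 0)
                      return 1;

              rb = user_ring_buffer__new(map_fd, NULL);
              if (!rb)
                      return 1;

              /* Block for up to 1000ms waiting for free space in the buffer. */
              m = user_ring_buffer__reserve_blocking(rb, sizeof(*m), 1000);
              if (m) {
                      m->id = 1;
                      strcpy(m->payload, "hello");
                      /* Publish the sample so the kernel-side consumer sees it. */
                      user_ring_buffer__submit(rb, m);
              }

              user_ring_buffer__free(rb);
              bpf_object__close(obj);
              return 0;
      }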
      Signed-off-by: David Vernet <void@manifault.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20220920000100.477320-4-void@manifault.com
    • bpf: Add bpf_user_ringbuf_drain() helper · 20571567
      David Vernet authored
      In a prior change, we added a new BPF_MAP_TYPE_USER_RINGBUF map type which
      will allow user-space applications to publish messages to a ring buffer
      that is consumed by a BPF program in kernel-space. In order for this
      map-type to be useful, it will require a BPF helper function that BPF
      programs can invoke to drain samples from the ring buffer, and invoke
      callbacks on those samples. This change adds that capability via a new BPF
      helper function:
      
      bpf_user_ringbuf_drain(struct bpf_map *map, void *callback_fn, void *ctx,
                             u64 flags)
      
      BPF programs may invoke this function to run callback_fn() on a series of
      samples in the ring buffer. callback_fn() has the following signature:
      
      long callback_fn(struct bpf_dynptr *dynptr, void *context);
      
      Samples are provided to the callback in the form of struct bpf_dynptr *'s,
      which the program can read using BPF helper functions for querying
      struct bpf_dynptr's.
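
      As a hedged sketch of the BPF-side usage, the program below declares a
      user ring buffer map and drains it with bpf_user_ringbuf_drain(); the map
      name, sample layout, and fentry attach point are assumptions for the
      example only:

      #include "vmlinux.h"
      #include <bpf/bpf_helpers.h>

      struct msg {
              int id;
              char payload[64];
      };

      struct {
              __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
              __uint(max_entries, 256 * 1024);
      } user_ringbuf SEC(".maps");

      static long handle_sample(struct bpf_dynptr *dynptr, void *ctx)
      {
              struct msg m;

              /* Copy the sample out of the dynptr before using it. */
              if (bpf_dynptr_read(&m, sizeof(m), dynptr, 0, 0))
                      return 0;

              bpf_printk("got sample id=%d", m.id);
              return 0; /* returning 0 continues draining further samples */
      }

      SEC("fentry/do_nanosleep")
      int drain_samples(void *ctx)
      {
              /* Run handle_sample() on queued user-space samples, with wakeups enabled. */
              bpf_user_ringbuf_drain(&user_ringbuf, handle_sample, NULL, 0);
              return 0;
      }

      char LICENSE[] SEC("license") = "GPL";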
      
      In order to support bpf_user_ringbuf_drain(), a new PTR_TO_DYNPTR register
      type is added to the verifier to reflect a dynptr that was allocated by
      a helper function and passed to a BPF program. Unlike PTR_TO_STACK
      dynptrs which are allocated on the stack by a BPF program, PTR_TO_DYNPTR
      dynptrs need not use reference tracking, as the BPF helper is trusted to
      properly free the dynptr before returning. The verifier currently only
      supports PTR_TO_DYNPTR registers that are also DYNPTR_TYPE_LOCAL.
      
      Note that while the corresponding user-space libbpf logic will be added
      in a subsequent patch, this patch does contain an implementation of the
      .map_poll() callback for BPF_MAP_TYPE_USER_RINGBUF maps. This
      .map_poll() callback guarantees that an epoll-waiting user-space
      producer will receive at least one event notification whenever at least
      one sample is drained in an invocation of bpf_user_ringbuf_drain(),
      provided that the function is not invoked with the BPF_RB_NO_WAKEUP
      flag. If the BPF_RB_FORCE_WAKEUP flag is provided, a wakeup
      notification is sent even if no sample was drained.
      Signed-off-by: David Vernet <void@manifault.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20220920000100.477320-3-void@manifault.com
    • bpf: Define new BPF_MAP_TYPE_USER_RINGBUF map type · 583c1f42
      David Vernet authored
      We want to support a ringbuf map type where samples are published from
      user-space, to be consumed by BPF programs. BPF currently supports a
      kernel -> user-space circular ring buffer via the BPF_MAP_TYPE_RINGBUF
      map type.  We'll need to define a new map type for user-space -> kernel,
      as none of the helpers exported for BPF_MAP_TYPE_RINGBUF will apply
      to a user-space producer ring buffer, and we'll want to add one or
      more helper functions that would not apply for a kernel-producer
      ring buffer.
      
      This patch therefore adds a new BPF_MAP_TYPE_USER_RINGBUF map type
      definition. The map type is useless in its current form, as there is no
      way to access or use it for anything until we add one or more BPF helpers. A
      follow-on patch will therefore add a new helper function that allows BPF
      programs to run callbacks on samples that are published to the ring
      buffer.
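
      For illustration, once the follow-on helper and libbpf patches land, a
      BPF program would be expected to declare such a map roughly as follows
      (the map name and size are assumptions; __uint() and SEC() come from
      bpf_helpers.h):

      struct {
              __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
              /* Size must be a power of two and a multiple of the page size,
               * as with BPF_MAP_TYPE_RINGBUF. */
              __uint(max_entries, 256 * 1024);
      } user_ringbuf SEC(".maps");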
      Signed-off-by: David Vernet <void@manifault.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20220920000100.477320-2-void@manifault.com
    • bpf: simplify code in btf_parse_hdr · 3a74904c
      William Dean authored
      btf_parse_hdr() can simply return the result of btf_check_sec_info()
      directly, which simplifies the code.
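
      A schematic before/after of the pattern (not the literal kernel diff; the
      local variable name is illustrative):

      /* before */
      err = btf_check_sec_info(env, btf_data_size);
      if (err)
              return err;
      return 0;

      /* after */
      return btf_check_sec_info(env, btf_data_size);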
      Signed-off-by: William Dean <williamsukatube@163.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/r/20220917084248.3649-1-williamsukatube@163.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • libbpf: Fix NULL pointer exception in API btf_dump__dump_type_data · 7620bffb
      Xin Liu authored
      We found that the function btf_dump__dump_type_data() can be called by the
      user as an API, but within this function the `opts` parameter may be a NULL
      pointer. This causes the access to `opts->indent_str` to trigger a NULL
      pointer exception.
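
      A minimal sketch of the NULL-safe pattern for optional opts (illustrating
      the idea; the helper name is made up and this is not the exact libbpf fix):

      #include <bpf/btf.h>

      static const char *get_indent_str(const struct btf_dump_type_data_opts *opts)
      {
              /* Fall back to a tab when opts or opts->indent_str was not provided. */
              if (!opts || !opts->indent_str)
                      return "\t";
              return opts->indent_str;
      }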
      
      Fixes: 2ce8450e ("libbpf: add bpf_object__open_{file, mem} w/ extensible opts")
      Signed-off-by: Xin Liu <liuxin350@huawei.com>
      Signed-off-by: Weibin Kong <kongweibin2@huawei.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20220917084809.30770-1-liuxin350@huawei.com
    • samples/bpf: Replace blk_account_io_done() with __blk_account_io_done() · bc069da6
      Rong Tao authored
      Since commit be6bfe36 ("block: inline hot paths of blk_account_io_*()"),
      the blk_account_io_*() helpers have become inline functions, so their
      symbols are no longer available as kprobe attach points; the samples
      therefore probe __blk_account_io_done() instead.
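
      A hedged sketch of the resulting attach-point change in a sample program
      (the program body and names are illustrative):

      #include <linux/ptrace.h>
      #include <bpf/bpf_helpers.h>

      SEC("kprobe/__blk_account_io_done") /* previously: kprobe/blk_account_io_done */
      int trace_io_done(struct pt_regs *ctx)
      {
              /* ...existing probe logic is unchanged... */
              return 0;
      }

      char LICENSE[] SEC("license") = "GPL";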
      Signed-off-by: Rong Tao <rtoax@foxmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/tencent_1CC476835C219FACD84B6715F0D785517E07@qq.com
  2. 20 Sep, 2022 5 commits
  3. 19 Sep, 2022 1 commit
  4. 17 Sep, 2022 1 commit
  5. 16 Sep, 2022 8 commits
  6. 15 Sep, 2022 1 commit
    • bpf: Add verifier check for BPF_PTR_POISON retval and arg · 47e34cb7
      Dave Marchevsky authored
      BPF_PTR_POISON was added in commit c0a5a21c ("bpf: Allow storing
      referenced kptr in map") to denote a bpf_func_proto btf_id which the
      verifier will replace with a dynamically-determined btf_id at verification
      time.
      
      This patch adds verifier 'poison' functionality to BPF_PTR_POISON in
      order to prepare for expanded use of the value to poison ret- and
      arg-btf_id in ongoing work, namely rbtree and linked list patchsets
      [0, 1]. Specifically, when the verifier checks helper calls, it assumes
      that BPF_PTR_POISON'ed ret type will be replaced with a valid type before
      - or in lieu of - the default ret_btf_id logic. Similarly for arg btf_id.
      
      If a poisoned btf_id reaches the default handling block for either, this is
      considered a verifier internal error and verification fails. Otherwise, a
      helper with a poisoned btf_id but no verifier logic replacing the type would
      cause a crash when the invalid pointer is dereferenced.
      
      Also move BPF_PTR_POISON to the existing include/linux/poison.h header and
      remove the unnecessary shift.
      
        [0]: lore.kernel.org/bpf/20220830172759.4069786-1-davemarchevsky@fb.com
        [1]: lore.kernel.org/bpf/20220904204145.3089-1-memxor@gmail.com
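
      A schematic of the added check (simplified and hedged; not the literal
      verifier code, and the surrounding function is assumed to be the helper-call
      checking path):

      /* In the default ret_btf_id handling path of check_helper_call(): */
      if (fn->ret_btf_id == BPF_PTR_POISON) {
              /* No dynamic replacement happened; treat as a verifier bug. */
              verbose(env, "verifier internal error: unexpected poisoned ret_btf_id\n");
              return -EINVAL;
      }
      ret_btf_id = *fn->ret_btf_id;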
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20220912154544.1398199-1-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  7. 11 Sep, 2022 9 commits
  8. 10 Sep, 2022 2 commits
  9. 09 Sep, 2022 5 commits
  10. 07 Sep, 2022 2 commits