1. 21 Apr, 2023 12 commits
  2. 20 Apr, 2023 9 commits
  3. 19 Apr, 2023 1 commit
  4. 18 Apr, 2023 7 commits
  5. 17 Apr, 2023 4 commits
    • selftests/bpf: Add a selftest for checking subreg equality · 49859de9
      Yonghong Song authored
      Add a selftest to ensure subreg equality is tracked when the source
      register's upper 32 bits are 0. Without the previous patch, the test
      fails verification.
      Acked-by: Eduard Zingerman <eddyz87@gmail.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/r/20230417222139.360607-1-yhs@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Improve verifier u32 scalar equality checking · 3be49f79
      Yonghong Song authored
      In [1], I tried to remove the bpf-specific code that prevents certain
      llvm optimizations and to add llvm TTI (target transform info) hooks
      to prevent those optimizations instead. During this process, I found
      that if I enable the llvm SimplifyCFG shouldFoldTwoEntryPHINode
      transformation, I hit the following verification failure with selftests:
      
        ...
        8: (18) r1 = 0xffffc900001b2230       ; R1_w=map_value(off=560,ks=4,vs=564,imm=0)
        10: (61) r1 = *(u32 *)(r1 +0)         ; R1_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff))
        ; if (skb->tstamp == EGRESS_ENDHOST_MAGIC)
        11: (79) r2 = *(u64 *)(r6 +152)       ; R2_w=scalar() R6=ctx(off=0,imm=0)
        ; if (skb->tstamp == EGRESS_ENDHOST_MAGIC)
        12: (55) if r2 != 0xb9fbeef goto pc+10        ; R2_w=195018479
        13: (bc) w2 = w1                      ; R1_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff))
        ; if (test < __NR_TESTS)
        14: (a6) if w1 < 0x9 goto pc+1 16: R0=2 R1_w=scalar(umax=8,var_off=(0x0; 0xf)) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R6=ctx(off=0,imm=0) R10=fp0
        ;
        16: (27) r2 *= 28                     ; R2_w=scalar(umax=120259084260,var_off=(0x0; 0x1ffffffffc),s32_max=2147483644,u32_max=-4)
        17: (18) r3 = 0xffffc900001b2118      ; R3_w=map_value(off=280,ks=4,vs=564,imm=0)
        19: (0f) r3 += r2                     ; R2_w=scalar(umax=120259084260,var_off=(0x0; 0x1ffffffffc),s32_max=2147483644,u32_max=-4) R3_w=map_value(off=280,ks=4,vs=564,umax=120259084260,var_off=(0x0; 0x1ffffffffc),s32_max=2147483644,u32_max=-4)
        20: (61) r2 = *(u32 *)(r3 +0)
        R3 unbounded memory access, make sure to bounds check any such access
        processed 97 insns (limit 1000000) max_states_per_insn 1 total_states 10 peak_states 10 mark_read 6
        -- END PROG LOAD LOG --
        libbpf: prog 'ingress_fwdns_prio100': failed to load: -13
        libbpf: failed to load object 'test_tc_dtime'
        libbpf: failed to load BPF skeleton 'test_tc_dtime': -13
        ...
      
      At insn 14, with the condition 'w1 < 9', register r1 is changed from an
      arbitrary u32 value to `scalar(umax=8,var_off=(0x0; 0xf))`. Register r2,
      however, remains an arbitrary u32 value. The current verifier won't claim
      r1/r2 equality if the previous mov is alu32 ('w2 = w1').
      
      If the upper 32 bits of r1 are not 0, we indeed cannot claim r1/r2
      equality after 'w2 = w1'. But in this particular case, we know the upper
      32 bits of r1 are 0, so it is safe to claim r1/r2 equality. This patch
      does exactly that: for a 32-bit subreg mov, if the upper 32 bits of the
      src register are 0, it is okay to claim equality between the src and dst
      registers.
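
      The C pattern behind the log above is roughly the following (a simplified
      sketch, not the actual test_tc_dtime.c source; the names here are
      illustrative). The bounds check is done on the 32-bit subreg while the
      array index goes through the alu32 copy:

      #define __NR_TESTS 9

      struct result {
      	unsigned int vals[7];	/* 28 bytes, matching the 'r2 *= 28' scaling above */
      };

      struct result results[__NR_TESTS];

      static int handle(unsigned int test)	/* u32 value loaded from a map */
      {
      	/* The compare is done on the subreg (w1), the index uses the
      	 * alu32 copy (w2 = w1); with this patch the copy inherits w1's bounds.
      	 */
      	if (test < __NR_TESTS)
      		results[test].vals[0]++;
      	return 0;
      }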
      
      With this patch, the above verification sequence becomes
      
        ...
        8: (18) r1 = 0xffffc9000048e230       ; R1_w=map_value(off=560,ks=4,vs=564,imm=0)
        10: (61) r1 = *(u32 *)(r1 +0)         ; R1_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff))
        ; if (skb->tstamp == EGRESS_ENDHOST_MAGIC)
        11: (79) r2 = *(u64 *)(r6 +152)       ; R2_w=scalar() R6=ctx(off=0,imm=0)
        ; if (skb->tstamp == EGRESS_ENDHOST_MAGIC)
        12: (55) if r2 != 0xb9fbeef goto pc+10        ; R2_w=195018479
        13: (bc) w2 = w1                      ; R1_w=scalar(id=6,umax=4294967295,var_off=(0x0; 0xffffffff)) R2_w=scalar(id=6,umax=4294967295,var_off=(0x0; 0xffffffff))
        ; if (test < __NR_TESTS)
        14: (a6) if w1 < 0x9 goto pc+1        ; R1_w=scalar(id=6,umin=9,umax=4294967295,var_off=(0x0; 0xffffffff))
        ...
        from 14 to 16: R0=2 R1_w=scalar(id=6,umax=8,var_off=(0x0; 0xf)) R2_w=scalar(id=6,umax=8,var_off=(0x0; 0xf)) R6=ctx(off=0,imm=0) R10=fp0
        16: (27) r2 *= 28                     ; R2_w=scalar(umax=224,var_off=(0x0; 0xfc))
        17: (18) r3 = 0xffffc9000048e118      ; R3_w=map_value(off=280,ks=4,vs=564,imm=0)
        19: (0f) r3 += r2
        20: (61) r2 = *(u32 *)(r3 +0)         ; R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R3_w=map_value(off=280,ks=4,vs=564,umax=224,var_off=(0x0; 0xfc),s32_max=252,u32_max=252)
        ...
      
      and eventually the bpf program can be verified successfully.
      
        [1] https://reviews.llvm.org/D147968
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/r/20230417222134.359714-1-yhs@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: lirc program type should not require SYS_CAP_ADMIN · 69a8c792
      Sean Young authored
      Make it possible to load the lirc program type with just CAP_BPF. There
      is nothing exceptional about lirc programs that means they require
      CAP_SYS_ADMIN.
      
      In order to attach or detach a lirc program type you need permission to
      open /dev/lirc0; if you have permission to do that, you can alter all
      sorts of lirc receiving options. Changing the IR protocol decoder is no
      different.
      
      Right now on a typical distribution /dev/lirc devices are only
      read/write by root. Ideally we would make them group read/write like
      other devices so that local users can use them without becoming root.
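
      For reference, here is a rough sketch (not from this patch) of how a
      BPF_PROG_TYPE_LIRC_MODE2 program is loaded and attached with libbpf;
      loading now needs only CAP_BPF, and attaching is gated by permission to
      open the lirc chardev. The object path and program layout are assumed:

      #include <errno.h>
      #include <fcntl.h>
      #include <bpf/bpf.h>
      #include <bpf/libbpf.h>

      int attach_lirc_prog(const char *obj_path)	/* hypothetical lirc BPF object */
      {
      	struct bpf_object *obj;
      	struct bpf_program *prog;
      	int lirc_fd, err;

      	lirc_fd = open("/dev/lirc0", O_RDWR);	/* attach permission == chardev permission */
      	if (lirc_fd < 0)
      		return -errno;

      	obj = bpf_object__open_file(obj_path, NULL);
      	if (!obj)
      		return -EINVAL;

      	err = bpf_object__load(obj);		/* with this patch, CAP_BPF is enough */
      	if (err)
      		return err;

      	prog = bpf_object__next_program(obj, NULL);
      	if (!prog)
      		return -ENOENT;

      	return bpf_prog_attach(bpf_program__fd(prog), lirc_fd, BPF_LIRC_MODE2, 0);
      }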
      Signed-off-by: Sean Young <sean@mess.org>
      Link: https://lore.kernel.org/r/ZD0ArKpwnDBJZsrE@gofer.mess.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Set skb redirect and from_ingress info in __bpf_tx_skb · 59e498a3
      Daniel Borkmann authored
      There are some use cases where it is desirable to use bpf_redirect() in
      combination with an ifb device, which is currently not supported; for
      example, filtering inbound traffic with BPF and then pushing it to an
      ifb device which holds the qdisc for shaping, in contrast to doing the
      shaping on the egress device.
      
      Toke mentions the following case related to OpenWrt:
      
         Because there's not always a single egress on the other side. These are
         mainly home routers, which tend to have one or more WiFi devices bridged
         to one or more ethernet ports on the LAN side, and a single upstream WAN
         port. And the objective is to control the total amount of traffic going
         over the WAN link (in both directions), to deal with bufferbloat in the
         ISP network (which is sadly still all too prevalent).
      
         In this setup, the traffic can be split arbitrarily between the links
         on the LAN side, and the only "single bottleneck" is the WAN link. So we
         install both egress and ingress shapers on this, configured to something
         like 95-98% of the true link bandwidth, thus moving the queues into the
         qdisc layer in the router. It's usually necessary to set the ingress
         bandwidth shaper a bit lower than the egress due to being "downstream"
         of the bottleneck link, but it does work surprisingly well.
      
         We usually use something like a matchall filter to put all ingress
         traffic on the ifb, so doing the redirect from BPF has not been an
         immediate requirement thus far. However, it does seem a bit odd that
         this is not possible, and we do have a BPF-based filter that layers on
         top of this kind of setup, which currently uses u32 as the ingress
         filter and so it could presumably be improved to use BPF instead if
         that was available.
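
      As a rough illustration of what this change enables (a sketch, not part
      of the patch; the ifindex value is a placeholder), a tc ingress program
      on the WAN device can now hand traffic to the ifb device holding the
      shaper:

      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>

      #define IFB_IFINDEX 4	/* hypothetical ifindex of the ifb device */

      SEC("tc")
      int redirect_to_ifb(struct __sk_buff *skb)
      {
      	/* Any BPF-based classification would go here; redirect to the ifb
      	 * device's egress path, where its qdisc does the shaping.
      	 */
      	return bpf_redirect(IFB_IFINDEX, 0);
      }

      char _license[] SEC("license") = "GPL";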
      Reported-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Reported-by: Yafang Shao <laoar.shao@gmail.com>
      Reported-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Yafang Shao <laoar.shao@gmail.com>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://git.openwrt.org/?p=project/qosify.git;a=blob;f=README
      Link: https://lore.kernel.org/bpf/875y9yzbuy.fsf@toke.dk
      Link: https://lore.kernel.org/r/8cebc8b2b6e967e10cbafe2ffd6795050e74accd.1681739137.git.daniel@iogearbox.net
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  6. 16 Apr, 2023 7 commits
    • Merge branch 'Remove KF_KPTR_GET kfunc flag' · d40f4f68
      Alexei Starovoitov authored
      David Vernet says:
      
      ====================
      
      We've managed to improve the UX for kptrs significantly over the last 9
      months. All of the existing use cases which previously had KF_KPTR_GET
      kfuncs (struct bpf_cpumask *, struct task_struct *, and struct cgroup *)
      have been updated to be synchronized using RCU. In other words,
      their KF_KPTR_GET kfuncs have been removed in favor of KF_RCU |
      KF_ACQUIRE kfuncs, with the pointers themselves also being readable from
      maps in an RCU read region thanks to the types being RCU safe.
      
      While KF_KPTR_GET was a logical starting point for kptrs, it's become
      clear that they're not the correct abstraction. KF_KPTR_GET is a flag
      that essentially does nothing other than enforcing that the argument to
      a function is a pointer to a referenced kptr map value. At first glance,
      that's a useful thing to guarantee to a kfunc. It gives kfuncs the
      ability to try and acquire a reference on that kptr without requiring
      the BPF prog to do something like this:
      
      struct kptr_type *in_map, *new = NULL;
      
      in_map = bpf_kptr_xchg(&map->value, NULL);
      if (in_map) {
      	new = bpf_kptr_type_acquire(in_map);
      	in_map = bpf_kptr_xchg(&map->value, in_map);
      	if (in_map)
      		bpf_kptr_type_release(in_map);
      }
      
      That's clearly a pretty ugly (and racy) UX, and if using KF_KPTR_GET is
      the only alternative, it's better than nothing. However, the problem
      with any KF_KPTR_GET kfunc lies in the fact that it always requires some
      kind of synchronization in order to safely do an opportunistic acquire
      of the kptr in the map. This is because a BPF program running on another
      CPU could do a bpf_kptr_xchg() on that map value, and free the kptr
      after it's been read by the KF_KPTR_GET kfunc. For example, the
      now-removed bpf_task_kptr_get() kfunc did the following:
      
      struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
      {
      	struct task_struct *p;
      
      	rcu_read_lock();
      	p = READ_ONCE(*pp);
      	/* If p is non-NULL, it could still be freed by another CPU,
       	 * so we have to do an opportunistic refcount_inc_not_zero()
      	 * and return NULL if the task will be freed after the
      	 * current RCU read region.
      	 */
      	if (p && !refcount_inc_not_zero(&p->rcu_users))
      		p = NULL;
      	rcu_read_unlock();
      
      	return p;
      }
      
      In other words, the kfunc uses RCU to ensure that the task remains valid
      after it's been peeked from the map. However, this is completely
      redundant with just defining a KF_RCU kfunc that itself does a
      refcount_inc_not_zero(), which is exactly what bpf_task_acquire() now
      does.
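
      From a BPF program's point of view, the replacement pattern looks
      roughly like this (a sketch, not taken from the series; the map and
      field names are made up):

      #include "vmlinux.h"
      #include <bpf/bpf_helpers.h>
      #include <bpf/bpf_tracing.h>

      void bpf_rcu_read_lock(void) __ksym;
      void bpf_rcu_read_unlock(void) __ksym;
      struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
      void bpf_task_release(struct task_struct *p) __ksym;

      struct map_value {
      	struct task_struct __kptr *task;
      };

      struct {
      	__uint(type, BPF_MAP_TYPE_ARRAY);
      	__uint(max_entries, 1);
      	__type(key, int);
      	__type(value, struct map_value);
      } task_map SEC(".maps");

      SEC("tp_btf/task_newtask")
      int BPF_PROG(read_stashed_task, struct task_struct *new, u64 clone_flags)
      {
      	struct task_struct *t, *acquired;
      	struct map_value *v;
      	int zero = 0;

      	v = bpf_map_lookup_elem(&task_map, &zero);
      	if (!v)
      		return 0;

      	/* The kptr is readable directly under RCU; bpf_task_acquire() is
      	 * KF_ACQUIRE | KF_RCU and does refcount_inc_not_zero() itself.
      	 */
      	bpf_rcu_read_lock();
      	t = v->task;
      	if (!t) {
      		bpf_rcu_read_unlock();
      		return 0;
      	}
      	acquired = bpf_task_acquire(t);
      	bpf_rcu_read_unlock();

      	if (acquired)
      		bpf_task_release(acquired);
      	return 0;
      }

      char _license[] SEC("license") = "GPL";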
      
      So, the question of whether KF_KPTR_GET is useful is actually, "Are
      there any synchronization mechanisms / safety flags that are required by
      certain kptrs, but which are not provided by the verifier to kfuncs?"
      The answer to that question today is "No", because every kptr we
      currently care about is RCU protected.
      
      Even if the answer ever became "yes", the proper way to support that
      referenced kptr type would be to add support for whatever
      synchronization mechanism it requires in the verifier, rather than
      giving kfuncs a flag that says, "Here's a pointer to a referenced kptr
      in a map, do whatever you need to do."
      
      With all that said -- so as to allow us to consolidate the kfunc API,
      and simplify the verifier, this patchset removes the KF_KPTR_GET kfunc
      flag.
      ---
      
      This is v2 of this patchset
      
      v1: https://lore.kernel.org/all/20230415103231.236063-1-void@manifault.com/
      
      Changelog:
      ----------
      
      v1 -> v2:
      - Fix KF_RU -> KF_RCU typo in commit summary for patch 2/3, and in cover
        letter (Alexei)
      - In order to reduce churn, don't shift all KF_* flags down by 1. We'll
        just fill the now-empty slot the next time we add a flag (Alexei)
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf,docs: Remove KF_KPTR_GET from documentation · 530474e6
      David Vernet authored
      A prior patch removed KF_KPTR_GET from the kernel. Now that it's no
      longer accessible to kfunc authors, this patch removes it from the BPF
      kfunc documentation.
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20230416084928.326135-4-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Remove KF_KPTR_GET kfunc flag · 7b4ddf39
      David Vernet authored
      We've managed to improve the UX for kptrs significantly over the last 9
      months. All of the existing use cases which previously had KF_KPTR_GET
      kfuncs (struct bpf_cpumask *, struct task_struct *, and struct cgroup *)
      have been updated to be synchronized using RCU. In other words,
      their KF_KPTR_GET kfuncs have been removed in favor of KF_RCU |
      KF_ACQUIRE kfuncs, with the pointers themselves also being readable from
      maps in an RCU read region thanks to the types being RCU safe.
      
      While KF_KPTR_GET was a logical starting point for kptrs, it's become
      clear that they're not the correct abstraction. KF_KPTR_GET is a flag
      that essentially does nothing other than enforcing that the argument to
      a function is a pointer to a referenced kptr map value. At first glance,
      that's a useful thing to guarantee to a kfunc. It gives kfuncs the
      ability to try and acquire a reference on that kptr without requiring
      the BPF prog to do something like this:
      
      struct kptr_type *in_map, *new = NULL;
      
      in_map = bpf_kptr_xchg(&map->value, NULL);
      if (in_map) {
              new = bpf_kptr_type_acquire(in_map);
              in_map = bpf_kptr_xchg(&map->value, in_map);
              if (in_map)
                      bpf_kptr_type_release(in_map);
      }
      
      That's clearly a pretty ugly (and racy) UX, and if using KF_KPTR_GET is
      the only alternative, it's better than nothing. However, the problem
      with any KF_KPTR_GET kfunc lies in the fact that it always requires some
      kind of synchronization in order to safely do an opportunistic acquire
      of the kptr in the map. This is because a BPF program running on another
      CPU could do a bpf_kptr_xchg() on that map value, and free the kptr
      after it's been read by the KF_KPTR_GET kfunc. For example, the
      now-removed bpf_task_kptr_get() kfunc did the following:
      
      struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
      {
              struct task_struct *p;
      
              rcu_read_lock();
              p = READ_ONCE(*pp);
              /* If p is non-NULL, it could still be freed by another CPU,
               * so we have to do an opportunistic refcount_inc_not_zero()
               * and return NULL if the task will be freed after the
               * current RCU read region.
               */
              if (p && !refcount_inc_not_zero(&p->rcu_users))
                      p = NULL;
              rcu_read_unlock();
      
              return p;
      }
      
      In other words, the kfunc uses RCU to ensure that the task remains valid
      after it's been peeked from the map. However, this is completely
      redundant with just defining a KF_RCU kfunc that itself does a
      refcount_inc_not_zero(), which is exactly what bpf_task_acquire() now
      does.
      
      So, the question of whether KF_KPTR_GET is useful is actually, "Are
      there any synchronization mechanisms / safety flags that are required by
      certain kptrs, but which are not provided by the verifier to kfuncs?"
      The answer to that question today is "No", because every kptr we
      currently care about is RCU protected.
      
      Even if the answer ever became "yes", the proper way to support that
      referenced kptr type would be to add support for whatever
      synchronization mechanism it requires in the verifier, rather than
      giving kfuncs a flag that says, "Here's a pointer to a referenced kptr
      in a map, do whatever you need to do."
      
      With all that said -- so as to allow us to consolidate the kfunc API,
      and simplify the verifier a bit, this patch removes KF_KPTR_GET, and all
      relevant logic from the verifier.
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20230416084928.326135-3-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Remove bpf_kfunc_call_test_kptr_get() test kfunc · 09b501d9
      David Vernet authored
      We've managed to improve the UX for kptrs significantly over the last 9
      months. All of the prior main use cases, struct bpf_cpumask *, struct
      task_struct *, and struct cgroup *, have been updated to be
      synchronized mainly using RCU. In other words, their KF_ACQUIRE kfunc
      calls are all KF_RCU, and the pointers themselves are MEM_RCU and can be
      accessed in an RCU read region in BPF.
      
      In a follow-on change, we'll be removing the KF_KPTR_GET kfunc flag.
      This patch prepares for that by removing the
      bpf_kfunc_call_test_kptr_get() kfunc, and all associated selftests.
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20230416084928.326135-2-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'Shared ownership for local kptrs' · 7a0788fe
      Alexei Starovoitov authored
      Dave Marchevsky says:
      
      ====================
      
      This series adds support for refcounted local kptrs to the verifier. A local
      kptr is 'refcounted' if its type contains a struct bpf_refcount field:
      
        struct refcounted_node {
          long data;
          struct bpf_list_node ll;
          struct bpf_refcount ref;
        };
      
      bpf_refcount is used to implement shared ownership for local kptrs.
      
      Motivating usecase
      ==================
      
      If a struct has two collection node fields, e.g.:
      
        struct node {
          long key;
          long val;
          struct bpf_rb_node rb;
          struct bpf_list_node ll;
        };
      
      It's not currently possible to add a node to both the list and rbtree:
      
        long bpf_prog(void *ctx)
        {
          struct node *n = bpf_obj_new(typeof(*n));
          if (!n) { /* ... */ }
      
          bpf_spin_lock(&lock);
      
          bpf_list_push_back(&head, &n->ll);
          bpf_rbtree_add(&root, &n->rb, less); /* Assume a reasonable less() */
          bpf_spin_unlock(&lock);
        }
      
      The above program will fail verification due to current owning / non-owning ref
      logic: after bpf_list_push_back, n is a non-owning reference and thus cannot be
      passed to bpf_rbtree_add. The only way to get an owning reference for the node
      that was added is to bpf_list_pop_{front,back} it.
      
      More generally, verifier ownership semantics expect that a node has one
      owner (program, collection, or stashed in map) with exclusive ownership
      of the node's lifetime. The owner frees the node's underlying memory when it
      itself goes away.
      
      Without a shared ownership concept it's impossible to express many real-world
      usecases such that they pass verification.
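
      With bpf_refcount and the bpf_refcount_acquire kfunc added later in this
      series, the motivating program can be expressed roughly as follows (a
      sketch; lock, head, root, and less() are assumed from the snippet above):

        struct node {
          long key;
          long val;
          struct bpf_rb_node rb;
          struct bpf_list_node ll;
          struct bpf_refcount ref;
        };

        long bpf_prog(void *ctx)
        {
          struct node *n = bpf_obj_new(typeof(*n)), *m;

          if (!n)
            return 0;

          /* Bump the refcount and take a second owning reference, so that
           * ownership can be handed to both collections.
           */
          m = bpf_refcount_acquire(n);

          bpf_spin_lock(&lock);
          bpf_list_push_back(&head, &n->ll);   /* ownership of n passes to the list */
          bpf_rbtree_add(&root, &m->rb, less); /* ownership of m passes to the rbtree */
          bpf_spin_unlock(&lock);
          return 0;
        }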
      
      Semantic Changes
      ================
      
      Before this series, the verifier could make this statement: "whoever has the
      owning reference has exclusive ownership of the referent's lifetime". As
      demonstrated in the previous section, this implies that a BPF program can't
      have an owning reference to some node if that node is in a collection. If
      such a state were possible, the node would have multiple owners, each thinking
      they have exclusive ownership. In order to support shared ownership it's
      necessary to modify the exclusive ownership semantic.
      
      After this series' changes, an owning reference has ownership of the referent's
      lifetime, but it's not necessarily exclusive. The referent's underlying memory
      is guaranteed to be valid (i.e. not free'd) until the reference is dropped or
      used for collection insert.
      
      This change doesn't affect UX of owning or non-owning references much:
      
        * insert kfuncs (bpf_rbtree_add, bpf_list_push_{front,back}) still require
          an owning reference arg, as ownership still must be passed to the
          collection in a shared-ownership world.
      
        * non-owning references still refer to valid memory without claiming
          any ownership.
      
      One important conclusion that followed from the "exclusive ownership" statement
      is no longer valid, though. In exclusive-ownership world, if a BPF prog has
      an owning reference to a node, the verifier can conclude that no collection has
      ownership of it. This conclusion was used to avoid runtime checking in the
      implementations of insert and remove operations ("has the node already been
      {inserted, removed}?").
      
      In a shared-ownership world the aforementioned conclusion is no longer valid,
      which necessitates doing runtime checking in insert and remove operation
      kfuncs, and those functions possibly failing to insert or remove anything.
      
      Luckily the verifier changes necessary to go from exclusive to shared ownership
      were fairly minimal. Patches in this series which do change verifier semantics
      generally have some summary dedicated to explaining why certain usecases
      Just Work for shared ownership without verifier changes.
      
      Implementation
      ==============
      
      The changes in this series can be categorized as follows:
      
        * struct bpf_refcount opaque field + plumbing
        * support for refcounted kptrs in bpf_obj_new and bpf_obj_drop
        * bpf_refcount_acquire kfunc
          * enables shared ownership by bumping refcount + acquiring owning ref
        * support for possibly-failing collection insertion and removal
          * insertion changes are more complex
      
      If a patch's changes have some nuance to their effect - or lack of effect - on
      verifier behavior, the patch summary talks about it at length.
      
      Patch contents:
        * Patch 1 removes btf_field_offs struct
        * Patch 2 adds struct bpf_refcount and associated plumbing
        * Patch 3 modifies semantics of bpf_obj_drop and bpf_obj_new to handle
          refcounted kptrs
        * Patch 4 adds bpf_refcount_acquire
        * Patches 5-7 add support for possibly-failing collection insert and remove
        * Patch 8 centralizes constructor-like functionality for local kptr types
        * Patch 9 adds tests for new functionality
      
      base-commit: 4a1e885c
      
      Changelog:
      
      v1 -> v2: lore.kernel.org/bpf/20230410190753.2012798-1-davemarchevsky@fb.com
      
      Patch #s used below refer to the patch's position in v1 unless otherwise
      specified.
      
        * General
          * Rebase onto latest bpf-next (base-commit updated above)
      
        * Patch 4 - "bpf: Add bpf_refcount_acquire kfunc"
          * Fix typo in summary (Alexei)
        * Patch 7 - "Migrate bpf_rbtree_remove to possibly fail"
          * Modify a paragraph in patch summary to more clearly state that only
            bpf_rbtree_remove's non-owning ref clobbering behavior is changed by the
            patch (Alexei)
          * refcount_off == -1 -> refcount_off < 0  in "node type w/ both list
            and rb_node fields" check, since any negative value means "no
            bpf_refcount field found", and furthermore refcount_off is never
            explicitly set to -1, but rather -EINVAL. (Alexei)
          * Instead of just changing "btf: list_node and rb_node in same struct" test
            expectation to pass instead of fail, do some refactoring to test both
            "list_node, rb_node, and bpf_refcount" (success) and "list_node, rb_node,
            _no_ bpf_refcount" (failure) cases. This ensures that logic change in
            previous bullet point is correct.
            * v1's "btf: list_node and rb_node in same struct" test changes didn't
              add bpf_refcount, so the fact that btf load succeeded w/ list and
              rb_nodes but no bpf_refcount field is further proof that this logic
              was incorrect in v1.
        * Patch 8 - "bpf: Centralize btf_field-specific initialization logic"
          * Instead of doing __init_field_infer_size in kfuncs when taking
            bpf_list_head type input which might've been 0-initialized in map, go
            back to simple oneliner initialization. Add short comment explaining why
            this is necessary. (Alexei)
        * Patch 9 - "selftests/bpf: Add refcounted_kptr tests"
          * Don't __always_inline helper fns in progs/refcounted_kptr.c (Alexei)
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Add refcounted_kptr tests · 6147f151
      Dave Marchevsky authored
      Test refcounted local kptr functionality added in previous patches in
      the series.
      
      Usecases which pass verification:
      
      * Add refcounted local kptr to both tree and list. Then, read and -
        possibly, depending on test variant - delete from tree, then list.
        * Also test doing read-and-maybe-delete in opposite order
      * Stash a refcounted local kptr in a map_value, then add it to a
        rbtree. Read from both, possibly deleting after tree read.
      * Add refcounted local kptr to both tree and list. Then, try reading and
        deleting twice from one of the collections.
      * bpf_refcount_acquire of just-added non-owning ref should work, as
        should bpf_refcount_acquire of owning ref just out of bpf_obj_new
      
      Usecases which fail verification:
      
      * The simple successful bpf_refcount_acquire cases from above should
        both fail to verify if the newly-acquired owning ref is not dropped
        (a rough sketch of this shape follows below)
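
      A sketch of that failing shape, reusing the refcounted_node struct from
      the cover letter (this is not the actual selftest; in selftests the
      kfunc declarations come from bpf_experimental.h):

      #include "vmlinux.h"
      #include <bpf/bpf_helpers.h>
      #include "bpf_experimental.h"

      struct refcounted_node {
      	long data;
      	struct bpf_list_node ll;
      	struct bpf_refcount ref;
      };

      SEC("?tc")
      long acquire_leak(void *ctx)
      {
      	struct refcounted_node *n, *m;

      	n = bpf_obj_new(typeof(*n));
      	if (!n)
      		return 0;

      	m = bpf_refcount_acquire(n);	/* owning ref that is never released */
      	bpf_obj_drop(n);
      	return 0;			/* rejected: unreleased reference to m */
      }

      char _license[] SEC("license") = "GPL";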
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230415201811.343116-10-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Centralize btf_field-specific initialization logic · 3e81740a
      Dave Marchevsky authored
      All btf_fields in an object are 0-initialized by memset in
      bpf_obj_init. This might not be a valid initial state for some field
      types, in which case kfuncs that use the type will properly initialize
      their input if it's been 0-initialized. Some BPF graph collection types
      and kfuncs do this: bpf_list_{head,node} and bpf_rb_node.
      
      An earlier patch in this series added the bpf_refcount field, for which
      the 0 state indicates that the refcounted object should be free'd.
      bpf_obj_init treats this field specially, setting refcount to 1 instead
      of relying on scattered "refcount is 0? Must have just been initialized,
      let's set to 1" logic in kfuncs.
      
      This patch extends this treatment to list and rbtree field types,
      allowing most scattered initialization logic in kfuncs to be removed.
      
      Note that bpf_{list_head,rb_root} may be inside a BPF map, in which case
      they'll be 0-initialized without passing through the newly-added logic,
      so scattered initialization logic must remain for these collection root
      types.
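
      A sketch of the shape of such a centralized helper (illustrative only,
      not the code added by this patch):

      static void bpf_obj_init_field_sketch(const struct btf_field *field, void *addr)
      {
      	switch (field->type) {
      	case BPF_REFCOUNT:
      		/* 0 means "to be freed"; a fresh object starts at 1 */
      		refcount_set((refcount_t *)addr, 1);
      		break;
      	case BPF_LIST_HEAD:
      	case BPF_LIST_NODE:
      		INIT_LIST_HEAD((struct list_head *)addr);
      		break;
      	case BPF_RB_NODE:
      		RB_CLEAR_NODE((struct rb_node *)addr);
      		break;
      	default:
      		/* the remaining field types are valid when zeroed */
      		break;
      	}
      }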
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230415201811.343116-9-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>