1. 21 Feb, 2020 18 commits
    • Merge branch 'bpf-sockmap-listen' · eb1e1478
      Daniel Borkmann authored
      Jakub Sitnicki says:
      
      ====================
      This patch set turns SOCK{MAP,HASH} into generic collections for TCP
      sockets, both listening and established. Adding support for listening
      sockets enables us to use these BPF map types with reuseport BPF programs.
      
      Why? SOCKMAP and SOCKHASH, in comparison to REUSEPORT_SOCKARRAY, allow
      the socket to be in more than one map at the same time.
      
      Having a BPF map type that can hold listening sockets, and gracefully
      co-exist with reuseport BPF is important if, in the future, we want
      BPF programs that run at socket lookup time [0]. Cover letter for v1 of
      this series tells the full story of how we got here [1].
      
      Although SOCK{MAP,HASH} are not a drop-in replacement for SOCKARRAY just
      yet, because UDP support is lacking, it's a step in this direction. We're
      working with Lorenz on extending SOCK{MAP,HASH} to hold UDP sockets, and
      expect to post RFC series for sockmap + UDP in the near future.
      
      I've dropped Acks from all patches that have been touched since v6.
      
      The audit for missing READ_ONCE annotations for access to sk_prot is
      ongoing. Thus far I've found one location specific to TCP listening sockets
      that needed annotating. This got fixed in this iteration. I wonder if
      the sparse checker could be put to work to identify places where we have
      sk_prot access while not holding sk_lock...
      
      The patch series depends on another one, posted earlier [2], that has
      been split out of it.
      
      v6 -> v7:
      
      - Extended the series to cover SOCKHASH. (patches 4-8, 10-11) (John)
      
      - Rebased onto recent bpf-next. Resolved conflicts in recent fixes to
        sk_state checks on sockmap/sockhash update path. (patch 4)
      
      - Added missing READ_ONCE annotation in sock_copy. (patch 1)
      
      - Split out patches that simplify sk_psock_restore_proto [2].
      
      v5 -> v6:
      
      - Added a fix-up for patch 1 which I forgot to commit in v5. Sigh.
      
      v4 -> v5:
      
      - Rebase onto recent bpf-next to resolve conflicts. (Daniel)
      
      v3 -> v4:
      
      - Make tcp_bpf_clone parameter names consistent across function declaration
        and definition. (Martin)
      
      - Use sock_map_redirect_okay helper everywhere we need to take a different
        action for listening sockets. (Lorenz)
      
      - Expand comment explaining the need for a callback from reuseport to
        sockarray code in reuseport_detach_sock. (Martin)
      
      - Mention the possibility of using a u64 counter for reuseport IDs in the
        future in the description for patch 10. (Martin)
      
      v2 -> v3:
      
      - Generate reuseport ID when group is created. Please see patch 10
        description for details. (Martin)
      
      - Fix the build when CONFIG_NET_SOCK_MSG is not selected by either
        CONFIG_BPF_STREAM_PARSER or CONFIG_TLS. (kbuild bot & John)
      
      - Allow updating sockmap from BPF on BPF_SOCK_OPS_TCP_LISTEN_CB callback. An
        oversight in previous iterations. Users may want to populate the sockmap with
        listening sockets from BPF as well.
      
      - Removed RCU read lock assertion in sock_map_lookup_sys. (Martin)
      
      - Get rid of a warning when child socket was cloned with parent's psock
        state. (John)
      
      - Check for tcp_bpf_unhash rather than tcp_bpf_recvmsg when deciding if
        sk_proto needs restoring on clone. Check for recvmsg in the context of
        listening socket cloning was confusing. (Martin)
      
      - Consolidate sock_map_sk_is_suitable with sock_map_update_okay. This led
        to adding dedicated predicates for sockhash. Update self-tests
        accordingly. (John)
      
      - Annotate unlikely branch in bpf_{sk,msg}_redirect_map when socket isn't
        in a map, or isn't a valid redirect target. (John)
      
      - Document paired READ/WRITE_ONCE annotations and cover shared access in
        more detail in patch 2 description. (John)
      
      - Correct a couple of log messages in sockmap_listen self-tests so the
        message reflects the actual failure.
      
      - Rework reuseport tests from sockmap_listen suite so that ENOENT error
        from bpf_sk_select_reuseport handler does not happen on happy path.
      
      v1 -> v2:
      
      - af_ops->syn_recv_sock callback is no longer overridden and burdened with
        restoring sk_prot and clearing sk_user_data in the child socket. As the
        child socket is already hashed when syn_recv_sock returns, it is too late to
        put it in the right state. Instead patches 3 & 4 address restoring
        sk_prot and clearing sk_user_data before we hash the child socket.
        (Pointed out by Martin Lau)
      
      - Annotate shared access to sk->sk_prot with READ_ONCE/WRITE_ONCE macros as
        we write to it from sk_msg while socket might be getting cloned on
        another CPU. (Suggested by John Fastabend)
      
      - Convert tests for SOCKMAP holding listening sockets to return-on-error
        style, and hook them up to test_progs. Also use BPF skeleton for setup.
        Add new tests to cover the race scenario discovered during v1 review.
      
      RFC -> v1:
      
      - Switch from overriding proto->accept to af_ops->syn_recv_sock, which
        happens earlier. Clearing the psock state after accept() does not work
        for child sockets that become orphaned (never got accepted). v4-mapped
        sockets need special care.
      
      - Return the socket cookie on SOCKMAP lookup from syscall to be on par with
        REUSEPORT_SOCKARRAY. Requires SOCKMAP to take u64 on lookup/update from
        syscall.
      
      - Make bpf_sk_redirect_map (ingress) and bpf_msg_redirect_map (egress)
        SOCKMAP helpers fail when target socket is a listening one.
      
      - Make bpf_sk_select_reuseport helper fail when target is a TCP established
        socket.
      
      - Teach libbpf to recognize SK_REUSEPORT program type from section name.
      
      - Add a dedicated set of tests for SOCKMAP holding listening sockets,
        covering map operations, overridden socket callbacks, and BPF helpers.
      
      [0] https://lore.kernel.org/bpf/20190828072250.29828-1-jakub@cloudflare.com/
      [1] https://lore.kernel.org/bpf/20191123110751.6729-1-jakub@cloudflare.com/
      [2] https://lore.kernel.org/bpf/20200217121530.754315-1-jakub@cloudflare.com/
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      eb1e1478
    • selftests/bpf: Tests for sockmap/sockhash holding listening sockets · 44d28be2
      Jakub Sitnicki authored
      Now that SOCKMAP and SOCKHASH map types can store listening sockets,
      the user-space and BPF APIs are open to a new set of potential pitfalls.
      
      Exercise the map operations, with extra attention to code paths susceptible
      to races between map ops and socket cloning, and BPF helpers that work with
      SOCKMAP/SOCKHASH to gain confidence that all works as expected.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-12-jakub@cloudflare.com
      44d28be2
    • selftests/bpf: Extend SK_REUSEPORT tests to cover SOCKMAP/SOCKHASH · 11318ba8
      Jakub Sitnicki authored
      Parametrize the SK_REUSEPORT tests so that the map type for storing sockets
      is not hard-coded in the test setup routine.
      
      This, together with careful state cleaning after the tests, lets us run the
      test cases for REUSEPORT_ARRAY, SOCKMAP, and SOCKHASH to have test coverage
      for all supported map types. The last two support only TCP sockets at the
      moment.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-11-jakub@cloudflare.com
      11318ba8
    • net: Generate reuseport group ID on group creation · 035ff358
      Jakub Sitnicki authored
      Commit 736b4602 ("net: Add ID (if needed) to sock_reuseport and expose
      reuseport_lock") has introduced lazy generation of reuseport group IDs that
      survive group resize.
      
      By comparing the identifier, we check that a BPF reuseport program is not
      trying to select a socket from a BPF map that belongs to a different
      reuseport group than the one the packet is for.
      
      Because SOCKARRAY used to be the only BPF map type that could be used with
      reuseport BPF, it was possible to delay the generation of the reuseport group
      ID until a socket from the group was inserted into a BPF map for the first
      time.
      
      Now that SOCK{MAP,HASH} can be used with reuseport BPF, we have two options:
      either generate the reuseport ID on map update, like SOCKARRAY does, or
      allocate an ID from the start when reuseport group gets created.
      
      This patch takes the latter approach to keep sockmap free of calls into
      reuseport code. This streamlines the reuseport_id access as its lifetime
      now matches the longevity of the reuseport object.
      
      The cost of this simplification, however, is that we allocate reuseport IDs
      for all SO_REUSEPORT users, even those that don't use SOCKARRAY in their
      setups. With the way identifiers are currently generated, we can have at
      most S32_MAX reuseport groups, which hopefully is sufficient. If we ever
      get close to the limit, we can switch to a u64 counter like sk_cookie.
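
      As a rough sketch of the eager allocation (the function name, IDA instance,
      and exact allocator calls here are illustrative; the patch may structure
      this differently):

        #include <linux/idr.h>
        #include <linux/slab.h>
        #include <net/sock_reuseport.h>

        static DEFINE_IDA(example_reuseport_ida);

        static struct sock_reuseport *reuseport_alloc_sketch(unsigned int max_socks)
        {
                struct sock_reuseport *reuse;
                int id;

                reuse = kzalloc(struct_size(reuse, socks, max_socks), GFP_ATOMIC);
                if (!reuse)
                        return NULL;

                /* assign the ID up front instead of lazily on first map insert */
                id = ida_alloc(&example_reuseport_ida, GFP_ATOMIC);
                if (id < 0) {
                        kfree(reuse);
                        return NULL;
                }
                reuse->reuseport_id = id;
                return reuse;
        }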
      
      Another change is that we now always call into SOCKARRAY logic to unlink
      the socket from the map when unhashing or closing the socket. Previously we
      did it only when at least one socket from the group was in a BPF map.
      
      It is worth noting that this doesn't conflict with sockmap tear-down in
      case a socket is in a SOCK{MAP,HASH} and belongs to a reuseport
      group. sockmap tear-down happens first:
      
        prot->unhash
        `- tcp_bpf_unhash
           |- tcp_bpf_remove
           |  `- while (sk_psock_link_pop(psock))
           |     `- sk_psock_unlink
           |        `- sock_map_delete_from_link
           |           `- __sock_map_delete
           |              `- sock_map_unref
           |                 `- sk_psock_put
           |                    `- sk_psock_drop
           |                       `- rcu_assign_sk_user_data(sk, NULL)
           `- inet_unhash
              `- reuseport_detach_sock
                 `- bpf_sk_reuseport_detach
                    `- WRITE_ONCE(sk->sk_user_data, NULL)
      Suggested-by: Martin Lau <kafai@fb.com>
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-10-jakub@cloudflare.com
      035ff358
    • bpf: Allow selecting reuseport socket from a SOCKMAP/SOCKHASH · 9fed9000
      Jakub Sitnicki authored
      SOCKMAP & SOCKHASH now support storing references to listening
      sockets. Nothing keeps us from using these map types as a collection of
      sockets to select from in BPF reuseport programs. Whitelist the map types
      with the bpf_sk_select_reuseport helper.
      
      The restriction that the socket has to be a member of a reuseport group
      still applies. Sockets in SOCKMAP/SOCKHASH that don't have sk_reuseport_cb
      set are not a valid target and we signal it with -EINVAL.
      
      The main benefit from this change is that, in contrast to
      REUSEPORT_SOCKARRAY, SOCK{MAP,HASH} don't impose a restriction that a
      listening socket can be in just one BPF map at a time.
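
      As an illustrative sketch (the map name, key layout, and selection policy
      are assumptions, not part of this patch), a reuseport program selecting
      from a SOCKMAP could look roughly like this:

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        struct {
                __uint(type, BPF_MAP_TYPE_SOCKMAP);
                __uint(max_entries, 1);
                __type(key, __u32);
                __type(value, __u64);   /* 64-bit values for socket cookies */
        } redir_map SEC(".maps");

        SEC("sk_reuseport")
        int select_from_sockmap(struct sk_reuseport_md *reuse)
        {
                __u32 key = 0;

                /* fails with -EINVAL if the stored socket has no sk_reuseport_cb */
                if (bpf_sk_select_reuseport(reuse, &redir_map, &key, 0))
                        return SK_DROP;
                return SK_PASS;
        }

        char _license[] SEC("license") = "GPL";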
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-9-jakub@cloudflare.com
      9fed9000
    • bpf, sockmap: Let all kernel-land lookup values in SOCKMAP/SOCKHASH · 1d59f3bc
      Jakub Sitnicki authored
      Don't require the kernel code, like BPF helpers, that needs access to
      SOCK{MAP,HASH} map contents to live in net/core/sock_map.c. Expose the
      lookup operation to all kernel-land.
      
      Lookup from BPF context is not whitelisted yet, while syscalls have a
      dedicated lookup handler.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-8-jakub@cloudflare.com
      1d59f3bc
    • bpf, sockmap: Return socket cookie on lookup from syscall · c1cdf65d
      Jakub Sitnicki authored
      Tooling that populates the SOCK{MAP,HASH} with sockets from user-space
      needs a way to inspect its contents. Returning the struct sock * that the
      map holds to user-space is neither safe nor useful. An approach established
      by REUSEPORT_SOCKARRAY is to return a socket cookie (a unique identifier)
      instead.
      
      Since socket cookies are u64 values, SOCK{MAP,HASH} need to support such a
      value size for lookup to be possible. This requires special handling on
      update, though. Attempts to do a lookup on a map holding u32 values will be
      met with an ENOSPC error.
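
      As a rough user-space sketch (the map fd and key are placeholders, not
      from the patch), reading a cookie back via libbpf could look like this:

        #include <bpf/bpf.h>
        #include <stdint.h>
        #include <stdio.h>

        static void dump_cookie(int map_fd, uint32_t key)
        {
                uint64_t cookie = 0;

                /* only works if the map was created with value_size == 8;
                 * looking up a map holding u32 values fails with ENOSPC */
                if (!bpf_map_lookup_elem(map_fd, &key, &cookie))
                        printf("slot %u -> socket cookie %llu\n", key,
                               (unsigned long long)cookie);
        }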
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-7-jakub@cloudflare.com
      c1cdf65d
    • bpf, sockmap: Don't set up upcalls and progs for listening sockets · 6e830c2f
      Jakub Sitnicki authored
      Now that sockmap/sockhash can hold listening sockets, when setting up the
      psock we will (1) grab references to verdict/parser progs, and (2) override
      socket upcalls sk_data_ready and sk_write_space.

      However, since we cannot redirect to listening sockets, we don't need to
      link the socket to the BPF progs. More importantly, we don't want the
      listening socket to have overridden upcalls, because they would get
      inherited by child sockets cloned from it.
      
      Introduce a separate initialization path for listening sockets that does
      not change the upcalls and ignores the BPF progs.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-6-jakub@cloudflare.com
      6e830c2f
    • bpf, sockmap: Allow inserting listening TCP sockets into sockmap · 8ca30379
      Jakub Sitnicki authored
      In order for sockmap/sockhash types to become generic collections for
      storing TCP sockets we need to loosen the checks during map update, while
      tightening the checks in redirect helpers.
      
      Currently sock{map,hash} require the TCP socket to be in established state,
      which prevents inserting listening sockets.
      
      Change the update pre-checks so the socket can also be in listening state.
      
      Since it doesn't make sense to redirect with sock{map,hash} to listening
      sockets, add appropriate socket state checks to BPF redirect helpers too.
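
      The gist of the two checks, as a hedged sketch with illustrative helper
      names (the actual predicates in the patch may be structured differently):

        #include <net/sock.h>
        #include <net/tcp_states.h>

        /* map update: accept established *and* listening TCP sockets */
        static bool update_state_ok_sketch(const struct sock *sk)
        {
                return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
        }

        /* redirect helpers: never pick a listening socket as the target */
        static bool redirect_ok_sketch(const struct sock *sk)
        {
                return sk->sk_state != TCP_LISTEN;
        }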
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-5-jakub@cloudflare.com
      8ca30379
    • tcp_bpf: Don't let child socket inherit parent protocol ops on copy · e8025155
      Jakub Sitnicki authored
      Prepare for cloning listening sockets that have their protocol callbacks
      overridden by sk_msg. Child sockets must not inherit parent callbacks that
      access state stored in sk_user_data owned by the parent.
      
      Restore the child socket protocol callbacks before it gets hashed and any
      of the callbacks can get invoked.
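
      A hedged sketch of the clone-time restore (the function name and the exact
      condition are illustrative; per the cover letter, the actual patch keys the
      check off the sockmap-specific tcp_bpf_unhash callback):

        #include <net/sock.h>

        void tcp_bpf_clone_sketch(const struct sock *sk, struct sock *newsk)
        {
                /* the child copied the parent's sk_msg-overridden callbacks;
                 * put the original proto back before the child gets hashed and
                 * any of those callbacks can fire */
                if (READ_ONCE(newsk->sk_prot) != sk->sk_prot_creator)
                        newsk->sk_prot = sk->sk_prot_creator;
        }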
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-4-jakub@cloudflare.com
      e8025155
    • net, sk_msg: Clear sk_user_data pointer on clone if tagged · f1ff5ce2
      Jakub Sitnicki authored
      sk_user_data can hold a pointer to an object that is not intended to be
      shared between the parent socket and the child that gets a pointer copy on
      clone. This is the case when sk_user_data points at reference-counted
      object, like struct sk_psock.
      
      One way to resolve it is to tag the pointer with a no-copy flag by
      repurposing its lowest bit. Based on the bit-flag value we clear the child
      sk_user_data pointer after cloning the parent socket.
      
      The no-copy flag is stored in the pointer itself as opposed to externally,
      say in socket flags, to guarantee that the pointer and the flag are copied
      from parent to child socket in an atomic fashion. The parent socket state is
      subject to change while copying, as we don't hold any locks at that time.
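
      In rough terms the tagging scheme amounts to the following (the macro and
      helper names here are illustrative, not necessarily the final API):

        #include <linux/types.h>
        #include <net/sock.h>

        #define EXAMPLE_NOCOPY          1UL
        #define EXAMPLE_PTRMASK         (~EXAMPLE_NOCOPY)

        /* readers mask the flag bit off before dereferencing */
        static inline void *example_user_data_ptr(const struct sock *sk)
        {
                return (void *)((uintptr_t)sk->sk_user_data & EXAMPLE_PTRMASK);
        }

        /* clone path: drop the inherited pointer if the parent tagged it */
        static inline void example_user_data_clone(struct sock *newsk)
        {
                if ((uintptr_t)newsk->sk_user_data & EXAMPLE_NOCOPY)
                        newsk->sk_user_data = NULL;
        }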
      
      This approach relies on an assumption that sk_user_data holds a pointer to
      an object aligned to at least 2 bytes. A manual audit of existing users of
      rcu_dereference_sk_user_data helper confirms our assumption.
      
      Also, an RCU-protected sk_user_data is not likely to hold a pointer to a
      char value or a pathological case of "struct { char c; }". To be safe, warn
      if the flag bit is set in the pointer being assigned to sk_user_data, to
      catch any future misuses.
      
      It is worth considering why clearing sk_user_data unconditionally is not an
      option. There exist users, DRBD, NVMe, and Xen drivers being among them,
      that rely on the pointer being copied when cloning the listening socket.
      
      Potentially we could distinguish these users by checking if the listening
      socket has been created in kernel-space via sock_create_kern, and hence has
      sk_kern_sock flag set. However, this is not the case for NVMe and Xen
      drivers, which create sockets without marking them as belonging to the
      kernel.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-3-jakub@cloudflare.com
      f1ff5ce2
    • net, sk_msg: Annotate lockless access to sk_prot on clone · b8e202d1
      Jakub Sitnicki authored
      sk_msg and ULP frameworks override the protocol callbacks pointer in
      sk->sk_prot, while tcp accesses it locklessly when cloning the listening
      socket, that is with neither sk_lock nor sk_callback_lock held.
      
      Once we enable use of listening sockets with sockmap (and hence sk_msg),
      there will be shared access to sk->sk_prot if the socket is getting cloned
      while being inserted into or deleted from the sockmap on another CPU:
      
      Read side:
      
      tcp_v4_rcv
        sk = __inet_lookup_skb(...)
        tcp_check_req(sk)
          inet_csk(sk)->icsk_af_ops->syn_recv_sock
            tcp_v4_syn_recv_sock
              tcp_create_openreq_child
                inet_csk_clone_lock
                  sk_clone_lock
                    READ_ONCE(sk->sk_prot)
      
      Write side:
      
      sock_map_ops->map_update_elem
        sock_map_update_elem
          sock_map_update_common
            sock_map_link_no_progs
              tcp_bpf_init
                tcp_bpf_update_sk_prot
                  sk_psock_update_proto
                    WRITE_ONCE(sk->sk_prot, ops)
      
      sock_map_ops->map_delete_elem
        sock_map_delete_elem
          __sock_map_delete
           sock_map_unref
             sk_psock_put
               sk_psock_drop
                 sk_psock_restore_proto
                   tcp_update_ulp
                     WRITE_ONCE(sk->sk_prot, proto)
      
      Mark the shared access with READ_ONCE/WRITE_ONCE annotations.
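
      A minimal sketch of the annotation pattern (function names are
      illustrative, not the actual hunks):

        #include <net/sock.h>

        /* reader: the clone path copies sk_prot without sk_lock held */
        static void clone_side_sketch(const struct sock *osk, struct sock *nsk)
        {
                nsk->sk_prot = READ_ONCE(osk->sk_prot);
        }

        /* writer: sockmap update/delete swaps the callback table */
        static void update_side_sketch(struct sock *sk, struct proto *ops)
        {
                WRITE_ONCE(sk->sk_prot, ops);
        }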
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200218171023.844439-2-jakub@cloudflare.com
      b8e202d1
    • docs/bpf: Update bpf development Q/A file · e42da4c6
      Yonghong Song authored
      bpf now has its own mailing list bpf@vger.kernel.org.
      Update the bpf_devel_QA.rst file to reflect this.
      
      Also, llvm has switched to github, with llvm and clang
      in the same repo https://github.com/llvm/llvm-project.git.
      Update the Q/A file with newer build instructions.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/bpf/20200221004354.930952-1-yhs@fb.com
      e42da4c6
    • selftests/bpf: Fix trampoline_count clean up logic · 006ed53e
      Andrii Nakryiko authored
      Libbpf's Travis CI tests caught this issue. Ensure bpf_link and bpf_object
      clean up is performed correctly.
      
      Fixes: d633d579 ("selftest/bpf: Add test for allowed trampolines count")
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Link: https://lore.kernel.org/bpf/20200220230546.769250-1-andriin@fb.com
      006ed53e
    • Merge branch 'set_attach_target' · 2c3a3681
      Alexei Starovoitov authored
      Eelco Chaudron says:
      
      ====================
      Currently, when you want to attach a trace program to a bpf program,
      the section name needs to match the tracepoint/function semantics.
      
      However the addition of the bpf_program__set_attach_target() API
      allows you to specify the tracepoint/function dynamically.
      
      The call flow would look something like this:
      
        xdp_fd = bpf_prog_get_fd_by_id(id);
        trace_obj = bpf_object__open_file("func.o", NULL);
        prog = bpf_object__find_program_by_title(trace_obj,
                                                 "fentry/myfunc");
        bpf_program__set_expected_attach_type(prog, BPF_TRACE_FENTRY);
        bpf_program__set_attach_target(prog, xdp_fd,
                                       "xdpfilt_blk_all");
        bpf_object__load(trace_obj)
      
      v1 -> v2: Remove requirement for attach type hint in API
      v2 -> v3: Moved common warning to __find_vmlinux_btf_id, requested by Andrii
                Updated the xdp_bpf2bpf test to use this new API
      v3 -> v4: Split up patch, update libbpf.map version
      v4 -> v5: Fix return code, and prog assignment in test case
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      2c3a3681
    • selftests/bpf: Update xdp_bpf2bpf test to use new set_attach_target API · 933ce62d
      Eelco Chaudron authored
      Use the new bpf_program__set_attach_target() API in the xdp_bpf2bpf
      selftest so it can be referenced as an example on how to use it.
      Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/158220520562.127661.14289388017034825841.stgit@xdp-tutorial
      933ce62d
    • libbpf: Add support for dynamic program attach target · ff26ce5c
      Eelco Chaudron authored
      Currently, when you want to attach a trace program to a bpf program,
      the section name needs to match the tracepoint/function semantics.
      
      However the addition of the bpf_program__set_attach_target() API
      allows you to specify the tracepoint/function dynamically.
      
      The call flow would look something like this:
      
        xdp_fd = bpf_prog_get_fd_by_id(id);
        trace_obj = bpf_object__open_file("func.o", NULL);
        prog = bpf_object__find_program_by_title(trace_obj,
                                                 "fentry/myfunc");
        bpf_program__set_expected_attach_type(prog, BPF_TRACE_FENTRY);
        bpf_program__set_attach_target(prog, xdp_fd,
                                       "xdpfilt_blk_all");
        bpf_object__load(trace_obj)
      Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/158220519486.127661.7964708960649051384.stgit@xdp-tutorial
      ff26ce5c
  2. 20 Feb, 2020 2 commits
  3. 19 Feb, 2020 8 commits
    • selftests/bpf: Change llvm flag -mcpu=probe to -mcpu=v3 · 83250f2b
      Yonghong Song authored
      The latest llvm supports cpu version v3, which is cpu version v1
      plus some additional 64bit jmp insns and 32bit jmp insn support.
      
      In the selftests/bpf Makefile, the llvm flag -mcpu=probe does a runtime
      probe of the host system. Depending on the compilation environment, it is
      possible that the runtime probe fails, e.g., due to a memlock issue, in
      which case code is generated for cpu version v1. This may cause confusion,
      as the same compiler and the same C code generate different byte code in
      different environments.
      
      Let us change the llvm flag -mcpu=probe to -mcpu=v3 so the
      generated code will be the same regardless of the compilation
      environment.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200219004236.2291125-1-yhs@fb.com
      83250f2b
    • Merge branch 'bpf_read_branch_records' · 03aa3955
      Alexei Starovoitov authored
      Daniel Xu says:
      
      ====================
      Branch records are a CPU feature that can be configured to record
      certain branches that are taken during code execution. This data is
      particularly interesting for profile guided optimizations. perf has had
      branch record support for a while but the data collection can be a bit
      coarse grained.
      
      We (Facebook) have seen in experiments that associating metadata with
      branch records can improve results (after postprocessing). We generally
      use bpf_probe_read_*() to get metadata out of userspace. That's why bpf
      support for branch records is useful.
      
      Aside from this particular use case, having branch data available to bpf
      progs can be useful to get stack traces out of userspace applications
      that omit frame pointers.
      
      Changes in v8:
      - Use globals instead of perf buffer
      - Call test_perf_branches__detach() before destroying skeleton
      - Fix typo in docs
      
      Changes in v7:
      - Const-ify and static-ify local var
      - Documentation formatting
      
      Changes in v6:
      - Move #ifdef a little to avoid unused variable warnings on !x86
      - Test negative condition in selftest (-EINVAL on improperly configured
        perf event)
      - Skip positive condition selftest on setups that don't support branch
        records
      
      Changes in v5:
      - Rename bpf_perf_prog_read_branches() -> bpf_read_branch_records()
      - Rename BPF_F_GET_BR_SIZE -> BPF_F_GET_BRANCH_RECORDS_SIZE
      - Squash tools/ bpf.h sync into selftest commit
      
      Changes in v4:
      - Add BPF_F_GET_BR_SIZE flag
      - Return -ENOENT on unsupported architectures
      - Only accept initialized memory in helper
      - Check buffer size is multiple of sizeof(struct perf_branch_entry)
      - Use bpf skeleton in selftest
      - Add commit messages
      - Spelling and formatting
      
      Changes in v3:
      - Document filling unused buffer with zero
      - Formatting fixes
      - Rebase
      
      Changes in v2:
      - Change to a bpf helper instead of context access
      - Avoid mentioning Intel specific things
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      03aa3955
    • selftests/bpf: Add bpf_read_branch_records() selftest · 67306f84
      Daniel Xu authored
      Add a selftest to test:
      
      * default bpf_read_branch_records() behavior
      * BPF_F_GET_BRANCH_RECORDS_SIZE flag behavior
      * error path on non branch record perf events
      * using helper to write to stack
      * using helper to write to global
      
      On host with hardware counter support:
      
          # ./test_progs -t perf_branches
          #27/1 perf_branches_hw:OK
          #27/2 perf_branches_no_hw:OK
          #27 perf_branches:OK
          Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED
      
      On host without hardware counter support (VM):
      
          # ./test_progs -t perf_branches
          #27/1 perf_branches_hw:OK
          #27/2 perf_branches_no_hw:OK
          #27 perf_branches:OK
          Summary: 1/2 PASSED, 1 SKIPPED, 0 FAILED
      
      Also sync tools/include/uapi/linux/bpf.h.
      Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200218030432.4600-3-dxu@dxuuu.xyz
      67306f84
    • bpf: Add bpf_read_branch_records() helper · fff7b643
      Daniel Xu authored
      Branch records are a CPU feature that can be configured to record
      certain branches that are taken during code execution. This data is
      particularly interesting for profile guided optimizations. perf has had
      branch record support for a while but the data collection can be a bit
      coarse grained.
      
      We (Facebook) have seen in experiments that associating metadata with
      branch records can improve results (after postprocessing). We generally
      use bpf_probe_read_*() to get metadata out of userspace. That's why bpf
      support for branch records is useful.
      
      Aside from this particular use case, having branch data available to bpf
      progs can be useful to get stack traces out of userspace applications
      that omit frame pointers.
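
      A hedged usage sketch of the helper from a perf_event BPF program (the
      buffer size and program name are assumptions; the attached perf event must
      be opened with branch-stack sampling for the copy to succeed):

        #include <linux/ptrace.h>
        #include <linux/bpf.h>
        #include <linux/perf_event.h>
        #include <linux/bpf_perf_event.h>
        #include <bpf/bpf_helpers.h>

        #define MAX_BRANCHES 16

        /* globals keep the (zero-initialized) buffer off the small BPF stack */
        struct perf_branch_entry entries[MAX_BRANCHES] = {};
        long total_size = 0;
        long written = 0;

        SEC("perf_event")
        int read_branches(struct bpf_perf_event_data *ctx)
        {
                /* ask only for the size of the available branch records */
                total_size = bpf_read_branch_records(ctx, NULL, 0,
                                                     BPF_F_GET_BRANCH_RECORDS_SIZE);

                /* copy up to sizeof(entries) bytes; negative on error, e.g.
                 * -EINVAL if the event was not configured for branch records */
                written = bpf_read_branch_records(ctx, entries,
                                                  sizeof(entries), 0);
                return 0;
        }

        char _license[] SEC("license") = "GPL";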
      Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200218030432.4600-2-dxu@dxuuu.xyz
      fff7b643
    • Merge branch 'bpf-skmsg-simplify-restore' · 2f14b2d9
      Daniel Borkmann authored
      Jakub Sitnicki says:
      
      ====================
      This series has been split out from "Extend SOCKMAP to store listening
      sockets" [0]. I think it stands on its own, and makes the latter series
      smaller, which will make the review easier, hopefully.
      
      The essence is that we don't need to do a complicated dance in
      sk_psock_restore_proto, if we agree that the contract with tcp_update_ulp
      is to restore callbacks even when the socket doesn't use ULP. This is what
      tcp_update_ulp currently does, and we just make use of it.
      
      Series is accompanied by a test for a particularly tricky case of restoring
      callbacks when we have both sockmap and tls callbacks configured in
      sk->sk_prot.
      
      [0] https://lore.kernel.org/bpf/20200127131057.150941-1-jakub@cloudflare.com/
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      2f14b2d9
    • selftests/bpf: Test unhashing kTLS socket after removing from map · d1ba1204
      Jakub Sitnicki authored
      When a TCP socket gets inserted into a sockmap, its sk_prot callbacks get
      replaced with tcp_bpf callbacks built from regular tcp callbacks. If TLS
      gets enabled on the same socket, sk_prot callbacks get replaced once again,
      this time with kTLS callbacks built from tcp_bpf callbacks.
      
      Now, we allow removing a socket from a sockmap while it has kTLS
      enabled. After removal, the socket remains with kTLS configured. This is
      where things get tricky.
      
      Since the socket has a set of sk_prot callbacks that are a mix of kTLS and
      tcp_bpf callbacks, we need to restore just the tcp_bpf callbacks to the
      original ones. At the moment, it comes down to the unhash operation.
      
      We had a regression recently because tcp_bpf callbacks were not cleared in
      this particular scenario of removing a kTLS socket from a sockmap. It got
      fixed in commit 4da6a196 ("bpf: Sockmap/tls, during free we may call
      tcp_bpf_unhash() in loop").
      
      Add a test that triggers the regression so that we don't reintroduce it in
      the future.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200217121530.754315-4-jakub@cloudflare.com
      d1ba1204
    • bpf, sk_msg: Don't clear saved sock proto on restore · a178b458
      Jakub Sitnicki authored
      There is no need to clear psock->sk_proto when restoring socket protocol
      callbacks in sk->sk_prot. The psock is about to get detached from the sock
      and eventually destroyed. At worst we will restore the protocol callbacks
      and the write callback twice.
      
      This makes reasoning about psock state easier. Once psock is initialized,
      we can count on psock->sk_proto always being set.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200217121530.754315-3-jakub@cloudflare.com
      a178b458
    • bpf, sk_msg: Let ULP restore sk_proto and write_space callback · a4393861
      Jakub Sitnicki authored
      We don't need a fallback for when the socket is not using ULP.
      tcp_update_ulp handles this case exactly the same as we do in
      sk_psock_restore_proto. Get rid of the duplicated code.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200217121530.754315-2-jakub@cloudflare.com
      a4393861
  4. 18 Feb, 2020 7 commits
  5. 17 Feb, 2020 5 commits