1. 09 Nov, 2018 5 commits
    • selftests/bpf: add a test case for sock_ops perf-event notification · 435f90a3
      Sowmini Varadhan authored
      This patch provides a tcp_bpf based eBPF sample. The test
      
      - uses ncat(1) as the TCP client program to connect() to a port
        with the intention of triggering SYN retransmissions: we
        first install an iptables DROP rule to make sure the ncat SYNs are
        resent (instead of aborting instantly after a TCP RST)
      
      - has an eBPF program that sends a perf-event notification for
        each TCP retransmit, and also tracks the number of such
        notifications sent in the global_map
      
      The test passes when the number of event notifications intercepted
      in user-space matches the value in the global_map.
      Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: add perf event notification support for sock_ops · a5a3a828
      Sowmini Varadhan authored
      This patch allows eBPF programs that use sock_ops to send perf
      based event notifications using bpf_perf_event_output(). Our main
      use case for this is the following:
      
        We would like to monitor some subset of TCP sockets in user-space
        (the monitoring application would define the 4-tuples it wants to
        monitor), using TCP_INFO stats to analyze reported problems. The idea
        is to use those stats to see where the bottlenecks are likely to be
        ("is it application-limited?" or "is there evidence of bufferbloat
        in the path?" etc).
      
        Today we can do this by periodically polling for tcp_info, but this
        could be made more efficient if the kernel would asynchronously
        notify the application via tcp_info when some "interesting"
        thresholds (e.g., "RTT variance > X", or "total_retrans > Y" etc)
        are reached. And to make this effective, it is better if
        we could apply the threshold check *before* constructing the
        tcp_info netlink notification, so that we don't waste resources
        constructing notifications that will be discarded by the filter.
      
      This work solves the problem by adding perf event based notification
      support for sock_ops. The eBPF program can thus be designed to apply
      any desired filters to the bpf_sock_ops and trigger a perf event
      notification based on the evaluation from the filter. The user space
      component can use these perf event notifications to either read any
      state managed by the eBPF program, or issue a TCP_INFO netlink call
      if desired.
      Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • Merge branch 'bpf-max-pkt-offset' · 185067a8
      Daniel Borkmann authored
      Jiong Wang says:
      
      ====================
      The maximum packet offset accessed by a BPF program is useful
      information.
      
      Sometimes a packet may be split, and for various reasons (for example,
      performance) we may want to reject a BPF program if its maximum packet
      access would trigger such a split. Normally the MTU is treated as the
      maximum packet size, but a BPF program does not always access the whole
      packet; it may touch only the head portion of the data.
      
      We can let the verifier calculate the maximum packet offset ever used
      and record it inside the prog auxiliary information structure as a new
      field, "max_pkt_offset".
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • nfp: bpf: relax prog rejection through max_pkt_offset · cf599f50
      Jiong Wang authored
      NFP refuses to offload programs whenever the MTU is set to a value
      larger than the maximum packet bytes that fit in NFP Cluster Target
      Memory (CTM). However, an eBPF program doesn't always need to access
      the whole packet data.
      
      The verifier calculates the maximum direct packet access (DPA) offset
      and keeps it in max_pkt_offset inside the prog auxiliary information.
      This patch relaxes prog rejection based on max_pkt_offset.
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: let the verifier calculate and record max_pkt_offset · e647815a
      Jiong Wang authored
      In check_packet_access, update max_pkt_offset after the offset has passed
      __check_packet_access.
      
      It should be safe to use u32 for max_pkt_offset as explained in code
      comment.
      
      Also, when there is a tail call, the max_pkt_offset of the called
      program is unknown, so max_pkt_offset is conservatively set to
      MAX_PACKET_OFF in that case.
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  2. 07 Nov, 2018 25 commits
  3. 06 Nov, 2018 10 commits