1. 19 Mar, 2018 5 commits
    • bpf: create tcp_bpf_ulp allowing BPF to monitor socket TX/RX data · 4f738adb
      John Fastabend authored
      This implements a BPF ULP layer to allow policy enforcement and
      monitoring at the socket layer. In order to support this, a new
      program type BPF_PROG_TYPE_SK_MSG is used to run the policy at
      the sendmsg/sendpage hook. To attach the policy to sockets, a
      sockmap is used with a new program attach type BPF_SK_MSG_VERDICT.
      
      As with previous sockmap usage, when a sock is added to a sockmap
      via a map update and the map has a BPF_SK_MSG_VERDICT program
      attached, the BPF ULP layer is created on the socket and the
      attached BPF_PROG_TYPE_SK_MSG program is run for every msg in the
      sendmsg case and for every page/offset in the sendpage case.
      
      BPF_PROG_TYPE_SK_MSG Semantics/API:
      
      BPF_PROG_TYPE_SK_MSG supports only two return codes, SK_PASS and
      SK_DROP. Returning SK_DROP frees the copied data in the sendmsg
      case and leaves the data untouched in the sendpage case; both
      cases return -EACCES to the user. Returning SK_PASS allows the
      msg to be sent.
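
      For illustration only (a sketch, not code from this patch; the
      section name and helper header follow the usual samples/selftests
      conventions), a minimal SK_MSG verdict program that passes
      everything looks like:

        #include <linux/bpf.h>
        #include "bpf_helpers.h"   /* assumed samples/selftests helper header */

        /* Minimal verdict program: allow every msg. Returning SK_DROP here
         * would instead drop the data and surface -EACCES to the sender.
         */
        SEC("sk_msg")
        int msg_pass_all(struct sk_msg_md *msg)
        {
                return SK_PASS;
        }

        char _license[] SEC("license") = "GPL";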
      
      In the sendmsg case, data is copied into kernel space buffers
      before running the BPF program. The kernel space buffers are
      stored in a scatterlist object where each element is a kernel
      memory buffer. Some effort is made to coalesce data from the
      sendmsg call here. For example, a sendmsg call with many one-byte
      iov entries will likely be coalesced into a single entry. The BPF
      program is run with data pointers (start/end) pointing to the
      first sg element.
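
      As a sketch of what that means for a program (illustrative, not
      from this series, reusing the skeleton from the sketch above), any
      access to the data must be bounds-checked against data_end, and
      only the first sg element is visible:

        SEC("sk_msg")
        int msg_inspect_first_byte(struct sk_msg_md *msg)
        {
                unsigned char *data = msg->data;
                unsigned char *data_end = msg->data_end;

                if (data + 1 > data_end)        /* nothing readable in first sg entry */
                        return SK_PASS;

                if (data[0] == 0xff)            /* illustrative policy check */
                        return SK_DROP;

                return SK_PASS;
        }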
      
      In the sendpage case data is not copied. We opt not to copy the
      data by default here, because the BPF infrastructure does not
      know what bytes will be needed nor when they will be needed, so
      copying all bytes may be wasteful. Because of this, the initial
      start/end data pointers are (0,0), meaning no data can be read
      or written. This avoids reading data that may be modified by the
      user. A new helper is added later in this series if reading and
      writing the data is needed. The helper call will do a copy by
      default so that the page is exclusively owned by the BPF call.
      
      The verdict from the BPF_PROG_TYPE_SK_MSG program applies to the
      entire msg in the sendmsg() case and to the entire page/offset in
      the sendpage case. This avoids ambiguity on how to handle mixed
      return codes in the sendmsg case. Again, a helper is added later
      in the series for cases where a verdict needs to apply to multiple
      system calls and/or to only a subset of the message currently
      being processed.
      
      The helper msg_redirect_map() can be used to select the socket to
      send the data on. It is used similarly to existing redirect use
      cases and allows a policy to redirect msgs.
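
      A hedged sketch of such a policy, building on the same includes as
      the sketches above (the bpf_msg_redirect_map() C wrapper and the
      map definition follow the common selftests style; the map name and
      key are assumptions):

        struct bpf_map_def SEC("maps") sock_map_tx = {
                .type        = BPF_MAP_TYPE_SOCKMAP,
                .key_size    = sizeof(int),
                .value_size  = sizeof(int),
                .max_entries = 20,
        };

        SEC("sk_msg")
        int msg_redirect(struct sk_msg_md *msg)
        {
                /* Redirect this msg to the socket stored at key 0. */
                return bpf_msg_redirect_map(msg, &sock_map_tx, 0, 0);
        }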
      
      Pseudo code simple example:
      
      The basic logic to attach a program to a socket is as follows,
      
        // load the programs
        bpf_prog_load(SOCKMAP_TCP_MSG_PROG, BPF_PROG_TYPE_SK_MSG,
      		&obj, &msg_prog);
      
        // lookup the sockmap
        bpf_map_msg = bpf_object__find_map_by_name(obj, "my_sock_map");
      
        // get fd for sockmap
        map_fd_msg = bpf_map__fd(bpf_map_msg);
      
        // attach program to sockmap
        bpf_prog_attach(msg_prog, map_fd_msg, BPF_SK_MSG_VERDICT, 0);
      
      Adding sockets to the map is done in the normal way,
      
        // Add a socket 'fd' to sockmap at location 'i'
        bpf_map_update_elem(map_fd_msg, &i, &fd, BPF_ANY);
      
      After the above, any socket added to "my_sock_map", in this case
      'fd', will run the BPF msg verdict program (msg_prog) on every
      sendmsg and sendpage system call.
      
      For a complete example see BPF selftests or sockmap samples.
      
      Implementation notes:
      
      It seemed simplest, to me at least, to use a refcnt to ensure the
      psock is not lost across the sendmsg copy into the sg, the BPF
      program running on the data in sg_data, and the final pass to the
      TCP stack. Some performance testing may show a better method to do
      this and avoid the refcnt cost, but for now use the simpler method.
      
      Another item that will come after basic support is in place is
      support for the MSG_MORE flag. At the moment we call sendpages
      even if the MSG_MORE flag is set. An enhancement would be to
      collect the pages into a larger scatterlist and pass it down the
      stack. Notice that bpf_tcp_sendmsg() could support this with some
      additional state saved across sendmsg calls. I built the code to
      support this without having to do refactoring work. Other features
      TBD include ZEROCOPY and TCP_RECV_QUEUE/TCP_NO_QUEUE support.
      These will follow the initial series shortly.
      
      Future work could improve the size limits on the scatterlist rings
      used here. Currently, we use MAX_SKB_FRAGS simply because it was
      already being used in the TLS case. Future work could extend the
      kernel sk APIs to tune this depending on workload. This is a
      trade-off between memory usage and throughput performance.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • net: generalize sk_alloc_sg to work with scatterlist rings · 8c05dbf0
      John Fastabend authored
      The current implementation of sk_alloc_sg expects the scatterlist
      to always start at entry 0 and complete at entry MAX_SKB_FRAGS.
      
      Future patches will want to support starting at an arbitrary
      offset into the scatterlist, so add an additional sg_start
      parameter and default to the current values in the TLS code paths.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • net: do_tcp_sendpages flag to avoid SKBTX_SHARED_FRAG · 312fc2b4
      John Fastabend authored
      When do_tcp_sendpages() is called from within the kernel and we
      know the data has no references from the user side, we can omit
      the SKBTX_SHARED_FRAG flag. This patch adds an internal flag,
      NO_SKBTX_SHARED_FRAG, that can be used to omit setting
      SKBTX_SHARED_FRAG.
      
      The flag is not exposed to userspace because the sendpage call from
      the splice logic masks out all bits except MSG_MORE.
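
      A minimal sketch of the intended in-kernel call site (the flag
      name is taken from this log and the wrapper function is purely
      illustrative):

        /* Kernel-internal caller that owns the page outright, so it asks
         * do_tcp_sendpages() not to mark the fragment as shared.
         */
        static ssize_t push_owned_page(struct sock *sk, struct page *page,
                                       int offset, size_t size)
        {
                int flags = MSG_DONTWAIT | NO_SKBTX_SHARED_FRAG;

                return do_tcp_sendpages(sk, page, offset, size, flags);
        }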
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • sockmap: convert refcnt to an atomic refcnt · ffa35660
      John Fastabend authored
      Up until now the sockmap refcnt has been protected by the
      sk_callback_lock(), so it has not actually needed any locking of
      its own. The counter itself tracks the lifetime of the psock
      object. Sockets in a sockmap have a lifetime that is independent
      of the map they are part of. This is possible because a single
      socket may be in multiple maps. When this happens we can only
      release the psock data associated with the socket when the refcnt
      reaches zero. There are three paths that decrement a sock's
      reference: first, the normal sockmap path, where the user deletes
      the socket from the map; second, the map itself is removed and all
      sockets in the map are removed, a delete path similar to case 1;
      third, an asynchronous socket event such as the socket closing.
      The last case handles removing sockets that are no longer
      available. For completeness, although inc does not pose any
      problems in this patch series, the inc case only happens when a
      psock is added to a map.
      
      Next we plan to add another socket prog type to handle policy and
      monitoring on the TX path. When we do this, however, we will need
      to hold a reference across the sendmsg/sendpage call, and holding
      the sk_callback_lock() there (on every send) seems less than
      ideal; it may also sleep in cases where we hit memory pressure.
      Instead of dealing with these issues in some clever way, simply
      make the reference count a refcount_t type and do proper atomic
      ops.
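
      A minimal sketch of that pattern with the kernel's refcount_t API
      (the struct, field, and function names here are illustrative, not
      the patch's exact code):

        #include <linux/refcount.h>
        #include <linux/slab.h>

        struct psock_sketch {
                refcount_t refcnt;                      /* psock lifetime */
        };

        static void psock_hold(struct psock_sketch *psock)
        {
                /* no sk_callback_lock() needed around the send path */
                refcount_inc(&psock->refcnt);
        }

        static void psock_put(struct psock_sketch *psock)
        {
                if (refcount_dec_and_test(&psock->refcnt))
                        kfree(psock);                   /* last reference gone */
        }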
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • sock: make static tls function alloc_sg generic sock helper · 2c3682f0
      John Fastabend authored
      The TLS ULP module builds scatterlists from a sock using
      page_frag_refill(). This is going to be useful for other ULPs, so
      move it into the sock file for more general use.
      
      In the process, remove a useless goto at the end of the while loop.
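
      For context, a hedged sketch of the building block being
      generalized: filling one scatterlist entry from the sock's
      page-frag allocator (illustrative only, not the moved function
      itself):

        #include <linux/scatterlist.h>
        #include <net/sock.h>

        static int fill_one_sg_entry(struct sock *sk, struct scatterlist *sg,
                                     unsigned int len)
        {
                struct page_frag *pfrag = sk_page_frag(sk);
                unsigned int use;

                if (!sk_page_frag_refill(sk, pfrag))
                        return -ENOMEM;

                use = min(len, pfrag->size - pfrag->offset);
                sg_set_page(sg, pfrag->page, use, pfrag->offset);
                get_page(pfrag->page);          /* the sg entry holds a page ref */
                pfrag->offset += use;

                return use;
        }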
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  2. 16 Mar, 2018 5 commits
  3. 15 Mar, 2018 3 commits
    • Merge branch 'bpf-stackmap-build-id' · 68de5ef4
      Daniel Borkmann authored
      Song Liu says:
      
      ====================
      This work follows up discussion at Plumbers'17 on improving addr->sym
      resolution of user stack traces. The following links have more information
      of the discussion:
      
      http://www.linuxplumbersconf.org/2017/ocw/proposals/4764
      https://lwn.net/Articles/734453/     Section "Stack traces and kprobes"
      
      Currently, bpf stackmap stores the address for each entry in the
      call trace. To map these addresses to user space files, it is
      necessary to maintain the mapping from these virtual addresses to
      symbols in the binary. Usually, the user space profiler (such as
      perf) has to scan /proc/pid/maps at the beginning of profiling,
      and monitor mmap2() calls afterwards. Given the cost of maintaining
      the address map, this solution is not practical for system-wide
      profiling that is always on.
      
      This patch tries to address this with a variation of stackmap.
      Instead of storing addresses, the variation stores ELF file
      build_id + offset. After profiling, a user space tool will look up
      these functions with the build_id (to find the binary or shared
      library) and the offset.
      
      I also updated bcc/cc library for the stackmap (no python/lua support yet).
      You can find the work at:
      
        https://github.com/liu-song-6/bcc/commits/bpf_get_stackid_v02
      
      Changes v5 -> v6:
      
      1. When kernel stack is added to stackmap with build_id, use fallback
         mechanism to store ip (status == BPF_STACK_BUILD_ID_IP).
      
      Changes v4 -> v5:
      
      1. Only allow build_id lookup in non-nmi context. Added comment and
         commit message to highlight this limitation.
      2. Minor fix reported by kbuild test robot.
      
      Changes v3 -> v4:
      
      1. Add fallback when build_id lookup failed. In this case, status is set
         to BPF_STACK_BUILD_ID_IP, and ip of this entry is saved.
      2. Handle cases where vma is only part of the file (vma->vm_pgoff != 0).
         Thanks to Teng for helping me identify this issue!
      3. Address feedback on previous versions.
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: add selftest for stackmap with BPF_F_STACK_BUILD_ID · 81f77fd0
      Song Liu authored
      test_stacktrace_build_id() is added. It triggers the urandom_read
      tracepoint with "dd" and "urandom_read" and gathers stack traces.
      Then it reads the stack traces from the stackmap.
      
      urandom_read is a statically linked binary that reads from
      /dev/urandom. test_stacktrace_build_id() calls readelf to read the
      build ID of urandom_read and compares it with the build ID from
      the stackmap.
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: extend stackmap to save binary_build_id+offset instead of address · 615755a7
      Song Liu authored
      Currently, bpf stackmap stores the address for each entry in the
      call trace. To map these addresses to user space files, it is
      necessary to maintain the mapping from these virtual addresses to
      symbols in the binary. Usually, the user space profiler (such as
      perf) has to scan /proc/pid/maps at the beginning of profiling,
      and monitor mmap2() calls afterwards. Given the cost of maintaining
      the address map, this solution is not practical for system-wide
      profiling that is always on.
      
      This patch tries to solve this problem with a variation of stackmap. This
      variation is enabled by flag BPF_F_STACK_BUILD_ID. Instead of storing
      addresses, the variation stores ELF file build_id + offset.
      
      Build ID is a 20-byte unique identifier for ELF files. The following
      command shows the Build ID of /bin/bash:
      
        [user@]$ readelf -n /bin/bash
        ...
          Build ID: XXXXXXXXXX
        ...
      
      With BPF_F_STACK_BUILD_ID, bpf_get_stackid() tries to parse Build ID
      for each entry in the call trace, and translate it into the following
      struct:
      
        struct bpf_stack_build_id_offset {
                __s32           status;
                unsigned char   build_id[BPF_BUILD_ID_SIZE];
                union {
                        __u64   offset;
                        __u64   ip;
                };
        };
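
      For illustration, a user-space sketch that creates such a
      stackmap: the value is an array of these structs, one per frame
      (the header location, max_depth, and max_entries below are just
      placeholder assumptions):

        #include <bpf/bpf.h>            /* assumed location of bpf_create_map() */

        int create_build_id_stackmap(void)
        {
                int max_depth = 127;    /* placeholder stack depth */

                return bpf_create_map(BPF_MAP_TYPE_STACK_TRACE,
                                      sizeof(__u32),    /* key: stack id */
                                      sizeof(struct bpf_stack_build_id_offset) * max_depth,
                                      10000,            /* max_entries, placeholder */
                                      BPF_F_STACK_BUILD_ID);
        }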
      
      The search for the build_id is limited to the first page of the
      file, and this page should be in the page cache. Otherwise, we fall
      back to storing the ip for this entry (the ip field in struct
      bpf_stack_build_id_offset). This requires the build_id to be stored
      in the first page. A quick survey of binaries and dynamic libraries
      on a few different systems shows that almost all of them have the
      build_id in the first page.
      
      Build_id is only meaningful for user stacks. If a kernel stack is
      added to a stackmap with BPF_F_STACK_BUILD_ID, it will
      automatically fall back to storing only the ip
      (status == BPF_STACK_BUILD_ID_IP). Similarly, if the build_id
      lookup fails for some reason, it will also fall back to storing
      the ip.
      
      User space can access struct bpf_stack_build_id_offset with the
      bpf syscall BPF_MAP_LOOKUP_ELEM. It is necessary for user space to
      maintain the mapping from build_id to binary files. This mostly
      static mapping is much easier to maintain than per-process address
      maps.
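
      A sketch of that lookup (stackmap_fd, stack_id, and the depth of
      127 are assumptions; the status constants come from the kernel
      uapi):

        struct bpf_stack_build_id_offset frames[127] = {};
        int i;

        if (bpf_map_lookup_elem(stackmap_fd, &stack_id, frames) == 0) {
                for (i = 0; i < 127; i++) {
                        if (frames[i].status == BPF_STACK_BUILD_ID_VALID)
                                printf("build_id frame, offset 0x%llx\n",
                                       (unsigned long long)frames[i].offset);
                        else if (frames[i].status == BPF_STACK_BUILD_ID_IP)
                                printf("fallback ip 0x%llx\n",
                                       (unsigned long long)frames[i].ip);
                }
        }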
      
      Note: Stackmap with build_id only works in non-nmi context at this time.
      This is because we need to take mm->mmap_sem for find_vma(). If this
      changes, we would like to allow build_id lookup in nmi context.
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  4. 09 Mar, 2018 9 commits
  5. 08 Mar, 2018 3 commits
    • Merge branch 'bpf-perf-sample-addr' · 12ef9bda
      Daniel Borkmann authored
      Teng Qin says:
      
      ====================
      These patches add support for bpf programs attached to perf events
      to read the address values recorded with those events. These
      values are requested by specifying sample_type with
      PERF_SAMPLE_ADDR when calling perf_event_open().
      
      The main motivation for these changes is to support building memory or lock
      access profiling and tracing tools. For example on Intel CPUs, the recorded
      address values for supported memory or lock access perf events would be
      the access or lock target addresses from PEBS buffer. Such information would
      be very valuable for building tools that help understand memory access or
      lock acquire pattern.
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • samples/bpf: add example to test reading address · 12fe1225
      Teng Qin authored
      This commit adds an additional test to the trace_event example. It
      attaches the bpf program to the MEM_UOPS_RETIRED.LOCK_LOADS event
      with PERF_SAMPLE_ADDR requested, and prints the lock address value
      read by the bpf program to trace_pipe.
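
      A hedged sketch of the perf_event_open() side (the raw config
      value for MEM_UOPS_RETIRED.LOCK_LOADS is CPU-specific and only a
      placeholder here, as is the sample period):

        #include <linux/perf_event.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        static int open_lock_loads_event(int cpu)
        {
                struct perf_event_attr attr = {
                        .type          = PERF_TYPE_RAW,
                        .size          = sizeof(struct perf_event_attr),
                        .config        = 0x21d0,        /* placeholder raw event code */
                        .sample_period = 10000,
                        .sample_type   = PERF_SAMPLE_ADDR,
                        .precise_ip    = 2,             /* PEBS, so addresses are recorded */
                };

                /* any process, one cpu, no group, no flags */
                return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
        }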
      Signed-off-by: Teng Qin <qinteng@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: add support to read sample address in bpf program · 95da0cdb
      Teng Qin authored
      This commit adds a new field, "addr", to bpf_perf_event_data that
      can be read and used by bpf programs attached to perf events. The
      value of the field is copied from bpf_perf_event_data_kern.addr
      and contains the address value recorded by specifying sample_type
      with PERF_SAMPLE_ADDR when calling perf_event_open.
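
      For illustration, a perf_event BPF program reading the new field
      might look like this (sketch only; the helper header follows the
      samples convention):

        #include <linux/bpf.h>
        #include <linux/bpf_perf_event.h>
        #include "bpf_helpers.h"        /* assumed samples/selftests helper header */

        SEC("perf_event")
        int on_sample(struct bpf_perf_event_data *ctx)
        {
                char fmt[] = "sampled addr 0x%llx\n";

                /* ctx->addr holds the PERF_SAMPLE_ADDR value for this sample */
                bpf_trace_printk(fmt, sizeof(fmt), ctx->addr);
                return 0;
        }

        char _license[] SEC("license") = "GPL";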
      Signed-off-by: Teng Qin <qinteng@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  6. 07 Mar, 2018 15 commits