29 Sep, 2023 2 commits
      bpf, sockmap: Do not inc copied_seq when PEEK flag set · da9e915e
      John Fastabend authored
      When data is peek'd off the receive queue we shouldn't consider it
      copied on the tcp_sock side. If we increment copied_seq here it will
      confuse tcp_data_ready(), because copied_seq can be arbitrarily
      increased. From the application side it results in poll() operations
      not waking up when expected.
      
      Notice the TCP stack without BPF recvmsg programs also does not
      increment copied_seq on a peek.
      
      We broke this when we moved the copied_seq update into recvmsg so it
      only happens when an actual copy takes place. But it wasn't working
      correctly before that either, because tcp_data_ready() tried to use
      the copied_seq value to see whether data had been read by the user
      yet. See the Fixes tags.
      
      Fixes: e5c6de5f ("bpf, sockmap: Incorrectly handling copied_seq")
      Fixes: 04919bed ("tcp: Introduce tcp_read_skb()")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20230926035300.135096-3-john.fastabend@gmail.com
      bpf: tcp_read_skb needs to pop skb regardless of seq · 9b7177b1
      John Fastabend authored
      Before fix e5c6de5f, tcp_read_skb() would increment the
      tp->copied_seq value. As described in that commit, this caused
      errors for applications: once copied_seq is incremented the
      application may believe there is no data to be read, so some apps
      would stall or abort believing no data was available.
      
      However, the fix is incomplete because it introduces another issue
      in the skb dequeue. The code calls tcp_recv_skb() in a while loop to
      consume as many skbs as possible. The problem is the call is ...
      
        tcp_recv_skb(sk, seq, &offset)
      
      ... where 'seq' is:
      
        u32 seq = tp->copied_seq;
      
      Now we can hit a case where we've not yet incremented copied_seq
      from the BPF side, but then tcp_recv_skb() fails this test ...
      
       if (offset < skb->len || (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN))
      
      ... so instead of returning the skb we call tcp_eat_recv_skb(),
      which frees the skb. This is because the routine believes the skb
      has been collapsed, per this comment:
      
       /* This looks weird, but this can happen if TCP collapsing
        * splitted a fat GRO packet, while we released socket lock
        * in skb_splice_bits()
        */
      
      This can't happen here: we've unlinked the full skb and orphaned it.
      Anyway, it would confuse any BPF programs if the data were suddenly
      moved underneath them.
      
      To fix this situation, do the simpler operation and just skb_peek()
      the head of the queue, followed by the unlink. We shouldn't need to
      check this condition, and tcp_read_skb() reads entire skbs, so there
      is no need to handle the 'offset != 0' case as we would in
      tcp_read_sock().
      
      Fixes: e5c6de5f ("bpf, sockmap: Incorrectly handling copied_seq")
      Fixes: 04919bed ("tcp: Introduce tcp_read_skb()")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20230926035300.135096-2-john.fastabend@gmail.com