1. 08 Sep, 2024 1 commit
  2. 02 Sep, 2024 2 commits
  3. 30 Aug, 2024 1 commit
  4. 29 Aug, 2024 5 commits
    • io_uring/kbuf: add support for incremental buffer consumption · ae98dbf4
      Jens Axboe authored
      By default, any recv/read operation that uses provided buffers will
      consume at least 1 buffer fully (and maybe more, in case of bundles).
      This adds support for incremental consumption, meaning that an
      application may add large buffers, and each read/recv will just consume
      the part of the buffer that it needs.
      
      For example, let's say an application registers 1MB buffers in a
      provided buffer ring, for streaming receives. If it gets a short recv,
      then the full 1MB buffer will be consumed and passed back to the
      application. With incremental consumption, only the part that was
      actually used is consumed, and the buffer remains the current one.
      
      This means that both the application and the kernel need to keep track
      of what the current receive point is. Each recv still passes back a
      buffer ID and the size consumed; the only difference is that before,
      the next receive would always be from the next buffer in the ring. Now
      the same buffer ID may return multiple receives, each at an offset into
      that buffer from where the previous receive left off. Example:
      
      Application registers a provided buffer ring, and adds two 32K buffers
      to the ring.
      
      Buffer1 address: 0x1000000 (buffer ID 0)
      Buffer2 address: 0x2000000 (buffer ID 1)
      
      A recv completion is received with the following values:
      
      cqe->res	0x1000	(4k bytes received)
      cqe->flags	0x11	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 0)
      
      and the application now knows that 4096b of data is available at
      0x1000000, the start of that buffer, and that more data from this buffer
      will be coming. Now the next receive comes in:
      
      cqe->res	0x2000	(8k bytes received)
      cqe->flags	0x11	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 0)
      
      which tells the application that 8k is available where the last
      completion left off, at 0x1001000. Next completion is:
      
      cqe->res	0x5000	(20k bytes received)
      cqe->flags	0x1	(CQE_F_BUFFER set, buffer ID 0)
      
      and the application now knows that 20k of data is available at
      0x1003000, which is where the previous receive ended. CQE_F_BUF_MORE
      isn't set, as no more data is available in this buffer ID. The next
      completion is then:
      
      cqe->res	0x1000	(4k bytes received)
      cqe->flags	0x10011	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 1)
      
      which tells the application that buffer ID 1 is now the current one,
      hence there's 4k of valid data at 0x2000000. 0x2001000 will be the next
      receive point for this buffer ID.
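
      As an illustration (not part of the patch), an application might track
      the receive point per buffer ID along these lines. The buf_base and
      buf_offset bookkeeping and the consume() callback are hypothetical;
      the flag and shift names are the uapi ones used above:

      	#include <stddef.h>
      	#include <linux/io_uring.h>

      	static void *buf_base[2];	/* 0x1000000 and 0x2000000 above */
      	static size_t buf_offset[2];	/* next receive point per buffer ID */

      	/* hypothetical application callback that handles received data */
      	extern void consume(void *data, size_t len);

      	static void handle_recv_cqe(const struct io_uring_cqe *cqe)
      	{
      		unsigned int bid;
      		void *data;

      		if (cqe->res <= 0 || !(cqe->flags & IORING_CQE_F_BUFFER))
      			return;			/* error, or no buffer selected */

      		bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
      		data = (char *) buf_base[bid] + buf_offset[bid];
      		consume(data, (size_t) cqe->res);

      		if (cqe->flags & IORING_CQE_F_BUF_MORE)
      			buf_offset[bid] += cqe->res;	/* same buffer continues */
      		else
      			buf_offset[bid] = 0;		/* buffer done, head advanced */
      	}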
      
      When a buffer will be reused for future CQE completions,
      IORING_CQE_F_BUF_MORE will be set in cqe->flags. This tells the
      application that the kernel isn't done with the buffer yet, and that it
      should expect more completions for this buffer ID. It will only be set
      by provided buffer rings set up with IOU_PBUF_RING_INC, as that's the
      only type of buffer that will see multiple consecutive completions for
      the same buffer ID. For any other provided buffer type, any completion
      that passes back a buffer to the application is final.
      
      Once a buffer has been fully consumed, the buffer ring head is
      incremented and the next receive will indicate the next buffer ID in the
      CQE cflags.
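
      Once a completion for a buffer ID arrives without CQE_F_BUF_MORE set,
      the application is free to recycle that buffer. A liburing-style
      sketch, assuming br points at the mapped buffer ring, ENTRIES is its
      size, and bid/buf_base are the bookkeeping from the sketch above:

      	/* hand the 32K buffer back to the kernel under the same buffer ID */
      	io_uring_buf_ring_add(br, buf_base[bid], 32 * 1024, bid,
      			      io_uring_buf_ring_mask(ENTRIES), 0);
      	io_uring_buf_ring_advance(br, 1);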
      
      On the send side, the application can manage how much data is sent from
      an existing buffer by setting sqe->len to the desired send length.
      
      An application can request incremental consumption by setting
      IOU_PBUF_RING_INC in the provided buffer ring registration. Outside of
      that, provided buffer ring setup and buffer additions are done as
      before, no changes there. The only change is that an application may
      see multiple completions for the same buffer ID, hence needing to know
      where the next receive will happen.
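
      For example, registering the two 32K buffers above with incremental
      consumption enabled could look roughly like this with liburing. The
      ring instance, the page-aligned ring memory in br, and buf1/buf2 (the
      0x1000000/0x2000000 buffers) are assumptions of the sketch:

      	struct io_uring_buf_reg reg = {
      		.ring_addr	= (unsigned long) br,
      		.ring_entries	= 2,
      		.bgid		= 0,
      		.flags		= IOU_PBUF_RING_INC,
      	};

      	/* error handling omitted for brevity */
      	io_uring_buf_ring_init(br);
      	io_uring_register_buf_ring(&ring, &reg, 0);

      	io_uring_buf_ring_add(br, buf1, 32 * 1024, 0, io_uring_buf_ring_mask(2), 0);
      	io_uring_buf_ring_add(br, buf2, 32 * 1024, 1, io_uring_buf_ring_mask(2), 1);
      	io_uring_buf_ring_advance(br, 2);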
      
      Note that, like existing provided buffer rings, this should not be used
      with IOSQE_ASYNC, as both really require the ring to remain locked over
      the duration of the buffer selection and the operation completion.
      Otherwise a buffer will be consumed in full regardless of the size of
      the IO done.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring/kbuf: pass in 'len' argument for buffer commit · 6733e678
      Jens Axboe authored
      In preparation for needing the consumed length, pass in the length being
      completed. Unused right now, but will be used when it is possible to
      partially consume a buffer.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • Revert "io_uring: Require zeroed sqe->len on provided-buffers send" · 641a6816
      Jens Axboe authored
      This reverts commit 79996b45.
      
      Revert the change that required sqe->len to be zero for a provided-buffer
      send, which forced a send to always consume the whole buffer. Allowing a
      non-zero length is strictly needed for partial consumption, as the send
      may very well cover just a subset of the current buffer. In fact, that's
      the intended use case.
      
      For non-incremental provided buffer rings, an application should set
      sqe->len carefully to avoid the potential issue described in the
      reverted commit. It is recommended that len still be set to '0' in that
      case if the application insists on keeping more than 1 send inflight for
      the same socket, which is somewhat of a nonsensical thing to do anyway.
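
      For illustration, a provided-buffer send that caps how much of the
      current buffer goes out could be prepared roughly like this with
      liburing; sockfd, send_len and buffer group 0 are assumptions of the
      sketch:

      	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

      	/* no address/iov: the data comes from the provided buffer ring */
      	io_uring_prep_send(sqe, sockfd, NULL, send_len, 0);
      	sqe->flags |= IOSQE_BUFFER_SELECT;
      	sqe->buf_group = 0;	/* group the buffer ring was registered with */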
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring/kbuf: move io_ring_head_to_buf() to kbuf.h · 2c8fa70b
      Jens Axboe authored
      In preparation for using this helper in kbuf.h as well, move it there and
      turn it into a macro.
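
      The helper just indexes the ring by head, masked to the ring size; as
      a macro, roughly (a sketch, not necessarily the verbatim kernel
      definition):

      	#define io_ring_head_to_buf(br, head, mask)	&(br)->bufs[(head) & (mask)]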
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring/kbuf: add io_kbuf_commit() helper · ecd5c9b2
      Jens Axboe authored
      Committing the selected ring buffer is currently done in three different
      spots; combine it into a helper and just call that.
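
      Conceptually, committing means advancing the buffer ring head past the
      entries the request consumed; a rough sketch of the shape such a helper
      takes (illustrative, details are not the verbatim kernel code):

      	static inline void io_kbuf_commit(struct io_kiocb *req,
      					  struct io_buffer_list *bl, int nr)
      	{
      		if (!(req->flags & REQ_F_BUFFERS_COMMIT))
      			return;
      		bl->head += nr;			/* consume 'nr' ring entries */
      		req->flags &= ~REQ_F_BUFFERS_COMMIT;
      	}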
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 25 Aug, 2024 22 commits
  6. 24 Aug, 2024 9 commits