    io_uring/kbuf: add support for incremental buffer consumption
    By default, any recv/read operation that uses provided buffers will
    consume at least 1 buffer fully (and maybe more, in case of bundles).
    This adds support for incremental consumption, meaning that an
    application may add large buffers, and each read/recv will just consume
    the part of the buffer that it needs.
    
    For example, let's say an application registers 1MB buffers in a
    provided buffer ring, for streaming receives. If it gets a short recv,
    then the full 1MB buffer will be consumed and passed back to the
    application. With incremental consumption, only the part that was
    actually used is consumed, and the buffer remains the current one.
    
    This means that both the application and the kernel need to keep track
    of what the current receive point is. Each recv will still pass back a
    buffer ID and the size consumed; the only difference is that before,
    the next receive would always come from the next buffer in the ring.
    Now the same buffer ID may be returned for multiple receives, each at
    an offset into that buffer from where the previous receive left off.
    Example:
    
    Application registers a provided buffer ring, and adds two 32K buffers
    to the ring.
    
    Buffer1 address: 0x1000000 (buffer ID 0)
    Buffer2 address: 0x2000000 (buffer ID 1)
    
    A recv completion is received with the following values:
    
    cqe->res	0x1000	(4k bytes received)
    cqe->flags	0x11	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 0)
    
    and the application now knows that 4096 bytes of data are available at
    0x1000000, the start of that buffer, and that more data from this buffer
    will be coming. Now the next receive comes in:
    
    cqe->res	0x2000	(8k bytes received)
    cqe->flags	0x11	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 0)
    
    which tells the application that 8k is available where the last
    completion left off, at 0x1001000. Next completion is:
    
    cqe->res	0x5000	(20k bytes received)
    cqe->flags	0x1	(CQE_F_BUFFER set, buffer ID 0)
    
    and the application now knows that 20k of data is available at
    0x1003000, which is where the previous receive ended. CQE_F_BUF_MORE
    isn't set, as no more data is available in this buffer ID. The next
    completion is then:
    
    cqe->res	0x1000	(4k bytes received)
    cqe->flags	0x10011	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 1)
    
    which tells the application that buffer ID 1 is now the current one,
    hence there's 4k of valid data at 0x2000000. 0x2001000 will be the next
    receive point for this buffer ID.
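    
    As a rough sketch, the receive side of the above walkthrough could be
    handled like this (liburing-style C; buf_base[], offsets[], and
    consume() are hypothetical application state, not kernel API):
    
        static unsigned char *buf_base[2];  /* 0x1000000, 0x2000000 above */
        static size_t offsets[2];           /* next receive point per ID */
    
        static void handle_recv_cqe(struct io_uring_cqe *cqe)
        {
                unsigned bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
                unsigned char *data = buf_base[bid] + offsets[bid];
    
                if (cqe->res <= 0)
                        return;
    
                consume(data, cqe->res);    /* hypothetical data handler */
    
                if (cqe->flags & IORING_CQE_F_BUF_MORE) {
                        /* kernel isn't done with this buffer yet */
                        offsets[bid] += cqe->res;
                } else {
                        /* buffer is done, next completion starts fresh */
                        offsets[bid] = 0;
                }
        }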
    
    When a buffer will be reused by future CQE completions,
    IORING_CQE_F_BUF_MORE will be set in cqe->flags. This tells the
    application that the kernel isn't done with the buffer yet, and that it
    should expect more completions for this buffer ID. It will only be set
    for provided buffer rings set up with IOU_PBUF_RING_INC, as that's the
    only type of buffer that will see multiple consecutive completions for
    the same buffer ID. For any other provided buffer type, any completion
    that passes back a buffer to the application is final.
    
    Once a buffer has been fully consumed, the buffer ring head is
    incremented and the next receive will indicate the next buffer ID in the
    CQE cflags.
    
    On the send side, the application can manage how much data is sent from
    an existing buffer by setting sqe->len to the desired send length.
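    
    For example, a send capped at 4k from buffer group bgid could be
    prepared like this (a sketch; sockfd and bgid are assumed):
    
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    
        /* sqe->len bounds how much of the selected buffer is sent */
        io_uring_prep_send(sqe, sockfd, NULL, 4096, 0);
        sqe->flags |= IOSQE_BUFFER_SELECT;
        sqe->buf_group = bgid;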
    
    An application can request incremental consumption by setting
    IOU_PBUF_RING_INC in the provided buffer ring registration. Outside of
    that, provided buffer ring setup and buffer additions are done exactly
    as before. The only change is that an application may now see multiple
    completions for the same buffer ID, and hence needs to know where the
    next receive in that buffer will happen.
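    
    A minimal registration sketch, assuming liburing's
    io_uring_setup_buf_ring() passes its flags argument through to the
    registration (buffer sizes and counts are illustrative):
    
        #include <liburing.h>
        #include <stdlib.h>
    
        #define NR_BUFS   2
        #define BUF_SIZE  (32 * 1024)
        #define BGID      0
    
        static struct io_uring_buf_ring *setup_inc_ring(struct io_uring *ring)
        {
                struct io_uring_buf_ring *br;
                int i, err;
    
                /* IOU_PBUF_RING_INC requests incremental consumption */
                br = io_uring_setup_buf_ring(ring, NR_BUFS, BGID,
                                             IOU_PBUF_RING_INC, &err);
                if (!br)
                        return NULL;
    
                for (i = 0; i < NR_BUFS; i++) {
                        void *buf = malloc(BUF_SIZE);
    
                        io_uring_buf_ring_add(br, buf, BUF_SIZE, i,
                                              io_uring_buf_ring_mask(NR_BUFS), i);
                }
                io_uring_buf_ring_advance(br, NR_BUFS);
                return br;
        }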
    
    Note that, like existing provided buffer rings, this should not be used
    with IOSQE_ASYNC, as both really require the ring to remain locked over
    the duration of the buffer selection and the operation completion.
    Otherwise, a full buffer will be consumed regardless of the size of the
    IO done.
    Signed-off-by: Jens Axboe <axboe@kernel.dk>