Commit abdad709 authored by Jens Axboe

io_uring: recycle provided before arming poll

We currently have a race where we recycle the selected buffer if poll
returns IO_APOLL_OK. But that's too late, as the poll could already be
triggering or have triggered. If that race happens, then we're putting a
buffer that's already being used.

Fix this by recycling before we arm poll. This does mean that we'll
sometimes almost instantly re-select the same buffer, but that case is
rare enough in testing that it should not pose a performance issue.

Fixes: b1c62645 ("io_uring: recycle provided buffers if request goes async")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 5e929367
@@ -6240,6 +6240,8 @@ static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
 	req->flags |= REQ_F_POLLED;
 	ipt.pt._qproc = io_async_queue_proc;
 
+	io_kbuf_recycle(req);
+
 	ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask);
 	if (ret || ipt.error)
 		return ret ? IO_APOLL_READY : IO_APOLL_ABORTED;
@@ -7491,7 +7493,6 @@ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
 		io_queue_async_work(req, NULL);
 		break;
 	case IO_APOLL_OK:
-		io_kbuf_recycle(req);
 		break;
 	}