Commit ef5c600a authored by Dylan Yudaken, committed by Jens Axboe

io_uring: always prep_async for drain requests

Drain requests all go through io_drain_req, which has a quick exit in
case there is nothing pending (i.e. the drain is not useful). In that
case it can issue the request immediately.

However, for safety, it queues the request through task work.
The problem is that in this case the request is run asynchronously, but
the async work has not been prepared through io_req_prep_async.
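
For reference, a condensed sketch of that fast path (paraphrased from
io_drain_req() in io_uring/io_uring.c, not the verbatim source):

static __cold void io_drain_req(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	u32 seq = io_get_sequence(req);

	spin_lock(&ctx->completion_lock);
	if (!req_need_defer(req, seq) &&
	    list_empty_careful(&ctx->defer_list)) {
		spin_unlock(&ctx->completion_lock);
		/* quick exit: nothing to drain behind, so hand the
		 * request straight to task work. Note that
		 * io_req_prep_async() has NOT run on this path. */
		ctx->drain_active = false;
		io_req_task_queue(req);
		return;
	}
	spin_unlock(&ctx->completion_lock);

	/* ... otherwise prep the request and park it on the defer list ... */
}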

This has not been a problem up to now, as the task work would always
run before returning to userspace, and so the user would not have a
chance to race with it.

However, with IORING_SETUP_DEFER_TASKRUN, this is no longer the case:
the work might be deferred, giving userspace a chance to change data
being referred to in the request.
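
To make the race concrete, consider a sequence like the following
(hypothetical reproducer sketch using the public liburing API; the
ring setup and the writev request are illustrative, not taken from
this commit):

#include <liburing.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_params p = { 0 };
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[] = "hello\n";
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) - 1 };

	/* DEFER_TASKRUN requires SINGLE_ISSUER (kernel 6.1+) */
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
	if (io_uring_queue_init_params(8, &ring, &p))
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_writev(sqe, STDOUT_FILENO, &iov, 1, 0);
	io_uring_sqe_set_flags(sqe, IOSQE_IO_DRAIN);	/* drained request */
	io_uring_submit(&ring);

	/* Task work has not necessarily run yet: with DEFER_TASKRUN it
	 * only runs when this task next enters the kernel. Before the
	 * fix, the drain fast path queued the request without
	 * io_req_prep_async(), so the iovec was still read later. */
	memset(&iov, 0, sizeof(iov));	/* racy modification */

	io_uring_wait_cqe(&ring, &cqe);	/* enter the kernel; task work runs */
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}

Without the fix, the memset() races with the kernel dereferencing the
iovec when the deferred task work finally issues the request; with it,
the data has already been copied at submission time.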

Instead _always_ prep_async for drain requests, which is simpler anyway
and removes this issue.
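
The reason this closes the window is that io_req_prep_async() gives
each opcode a chance to copy SQE-referenced user memory (iovecs,
msghdrs, ...) into kernel-owned async data before the request runs out
of line. Roughly (simplified sketch, omitting the file-assignment and
manual-alloc details of the real helper):

int io_req_prep_async(struct io_kiocb *req)
{
	const struct io_op_def *def = &io_op_defs[req->opcode];

	/* opcodes without a prep_async hook need no copied state */
	if (!def->prep_async)
		return 0;
	if (io_alloc_async_data(req))
		return -EAGAIN;
	/* e.g. io_writev_prep_async() imports the user iovec */
	return def->prep_async(req);
}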

Cc: stable@vger.kernel.org
Fixes: c0e0d6ba ("io_uring: add IORING_SETUP_DEFER_TASKRUN")
Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20230127105911.2420061-1-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent b00c51ef
io_uring/io_uring.c

@@ -1765,17 +1765,12 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	}
 	spin_unlock(&ctx->completion_lock);
 
-	ret = io_req_prep_async(req);
-	if (ret) {
-fail:
-		io_req_defer_failed(req, ret);
-		return;
-	}
 	io_prep_async_link(req);
 	de = kmalloc(sizeof(*de), GFP_KERNEL);
 	if (!de) {
 		ret = -ENOMEM;
-		goto fail;
+		io_req_defer_failed(req, ret);
+		return;
 	}
 
 	spin_lock(&ctx->completion_lock);
@@ -2048,13 +2043,16 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
 		io_req_defer_failed(req, req->cqe.res);
-	} else if (unlikely(req->ctx->drain_active)) {
-		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);
 
-		if (unlikely(ret))
+		if (unlikely(ret)) {
 			io_req_defer_failed(req, ret);
+			return;
+		}
+
+		if (unlikely(req->ctx->drain_active))
+			io_drain_req(req);
 		else
 			io_queue_iowq(req, NULL);
 	}