Commit 7335e3bf authored by Pavel Begunkov, committed by Jens Axboe

io_uring: don't forget to adjust io_size

In io_read(), the amount we are trying to read is tracked in two places: the
iov_iter and the io_size variable. The latter drives the decision about
whether to do read retries. However, io_size is adjusted only after the first
read attempt, so if we happen to go for a third retry within a single call to
io_read(), io_size ends up greater than the count left in the iterator, which
may lead to various side effects, up to live-locking.

Modify io_size each time.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 6bf985dc
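
To see why the adjustment has to happen on every pass, here is a minimal
user-space C sketch (editor's illustration only, not the io_uring code; the
counter names and the 16-byte step are made up) of a read that completes in
several short attempts while two counters track the remaining bytes:

#include <stdio.h>

/* Pretend each attempt transfers at most 16 bytes of whatever is left. */
static long do_partial_read(long remaining)
{
	return remaining < 16 ? remaining : 16;
}

int main(void)
{
	long io_size = 40;	/* bytes the request asked for */
	long iter_count = 40;	/* bytes the iterator still expects */
	long ret;

	while (iter_count > 0) {
		ret = do_partial_read(iter_count);
		iter_count -= ret;	/* the iterator always advances */
		io_size -= ret;		/* the fix: keep io_size in step with it */

		/*
		 * The retry decision compares ret with io_size.  If io_size
		 * were only reduced after the first attempt, the two counters
		 * would drift apart from the third attempt onwards and
		 * "ret == io_size" could stop being reachable.
		 */
		printf("read %ld, io_size=%ld, iter=%ld\n",
		       ret, io_size, iter_count);
	}
	return 0;
}

Without the per-iteration decrement, io_size stays at the original request
size while the iterator shrinks, so the "did we read everything?" comparison
can stop matching and retries keep firing, which is the kind of live-lock the
commit message warns about.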
@@ -3548,16 +3548,11 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
 		/* some cases will consume bytes even on error returns */
 		iov_iter_revert(iter, io_size - iov_iter_count(iter));
 		ret = 0;
-	} else if (ret <= 0 || ret == io_size) {
-		/* make sure -ERESTARTSYS -> -EINTR is done */
+	} else if (ret <= 0 || ret == io_size || !force_nonblock ||
+		   (req->file->f_flags & O_NONBLOCK) || !(req->flags & REQ_F_ISREG)) {
+		/* read all, failed, already did sync or don't want to retry */
 		goto done;
-	} else {
-		/* we did blocking attempt. no retry. */
-		if (!force_nonblock || (req->file->f_flags & O_NONBLOCK) ||
-		    !(req->flags & REQ_F_ISREG))
-			goto done;
-		io_size -= ret;
 	}
 	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
@@ -3570,6 +3565,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
 	/* now use our persistent iterator, if we aren't already */
 	iter = &rw->iter;
 retry:
+	io_size -= ret;
 	rw->bytes_done += ret;
 	/* if we can retry, do so with the callbacks armed */
 	if (!io_rw_should_retry(req)) {