Commit 2f389343 authored by Pavel Begunkov, committed by Jens Axboe

io_uring: cmpxchg for poll arm refs release

Replace atomically subtracting the ownership reference at the end of
arming a poll with a cmpxchg. We try to release ownership by setting 0,
assuming that poll_refs didn't change while we were arming. If it did
change, we keep the ownership and use it to queue a tw, which is fully
capable of processing all events (and even tolerates spurious wake ups).

It's a bit more elegant: with this release we reduce races between
setting the cancellation flag and grabbing refs, and we no longer have
to worry about any kind of underflow. The performance difference
between cmpxchg and atomic dec is usually negligible, and this is not
a hot path for polling anyway.

Cc: stable@vger.kernel.org
Fixes: aa43477b ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0c95251624397ea6def568ff040cad2d7926fd51.1668963050.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 7fdbc5f0
@@ -518,7 +518,6 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 				 unsigned issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	int v;
 
 	INIT_HLIST_NODE(&req->hash_node);
 	req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
@@ -586,11 +585,10 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 
 	if (ipt->owning) {
 		/*
-		 * Release ownership. If someone tried to queue a tw while it was
-		 * locked, kick it off for them.
+		 * Try to release ownership. If we see a change of state, e.g.
+		 * poll was waken up, queue up a tw, it'll deal with it.
 		 */
-		v = atomic_dec_return(&req->poll_refs);
-		if (unlikely(v & IO_POLL_REF_MASK))
+		if (atomic_cmpxchg(&req->poll_refs, 1, 0) != 1)
 			__io_poll_execute(req, 0);
 	}
 	return 0;
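
For illustration, below is a minimal userspace sketch of the same release
pattern using C11 atomics. The names poll_refs and queue_task_work() are
stand-ins invented for this sketch, not the kernel's internals; only the
cmpxchg idea mirrors the change above.

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for req->poll_refs: a value of 1 means we hold the sole
 * arming reference; a wakeup racing with us bumps it above 1. */
static atomic_int poll_refs = 1;

/* Hypothetical stand-in for __io_poll_execute(): queue task work that
 * can process any pending events and tolerates spurious wake ups. */
static void queue_task_work(void)
{
	puts("refs changed while arming; punting to task work");
}

/* Try to release ownership the way the patch does: if poll_refs is
 * still exactly 1 (nothing raced with us), swap it to 0 and we are
 * done. If the compare-exchange fails, we keep ownership and hand it
 * to task work instead of blindly decrementing. */
static void release_ownership(void)
{
	int expected = 1;

	if (!atomic_compare_exchange_strong(&poll_refs, &expected, 0))
		queue_task_work();
}

int main(void)
{
	release_ownership();	/* uncontended case: refs go 1 -> 0 */
	return 0;
}

Unlike an unconditional atomic decrement, a failed cmpxchg leaves the
reference count intact, so it can never underflow even when a
concurrent wakeup or cancellation races with the release.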