1. 23 Mar, 2022 1 commit
    • io_uring: bump poll refs to full 31-bits · e2c0cb7c
      Jens Axboe authored
      The previous commit:
      
      61bc84c40088 ("io_uring: remove poll entry from list when canceling all")
      
      removed a potential overflow condition for the poll references. They
      are currently limited to 20-bits, even if we have 31-bits available. The
      upper bit is used to mark for cancelation.
      
      Bump the poll ref space to 31-bits, making that kind of situation much
      harder to trigger in general. We'll separately add overflow checking
      and handling.
      
      Fixes: aa43477b ("io_uring: poll rework")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
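
      As a rough sketch of the layout described above (macro names as used in
      that era's fs/io_uring.c poll code; treat this as an illustration, not
      the exact diff): bit 31 stays reserved for the cancelation mark, and the
      reference mask grows from 20 bits to the full remaining 31 bits.

      #include <linux/bits.h>

      /* Upper bit marks the request for cancelation ... */
      #define IO_POLL_CANCEL_FLAG     BIT(31)
      /* ... and the rest hold the poll references: previously GENMASK(19, 0),
       * i.e. 20 bits, now all 31 bits below the cancel flag. */
      #define IO_POLL_REF_MASK        GENMASK(30, 0)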
  2. 22 Mar, 2022 1 commit
    • io_uring: remove poll entry from list when canceling all · 61bc84c4
      Jens Axboe authored
      When the ring is exiting, as part of the shutdown, poll requests are
      removed. But io_poll_remove_all() does not remove entries when finding
      them, and since completions are done out-of-band, we can find and remove
      the same entry multiple times.
      
      We do guard the poll execution by poll ownership, but that does not
      exclude us from reissuing a new one once the previous removal ownership
      goes away.
      
      This can race with poll execution as well, where we then end up seeing
      req->apoll be NULL because a previous task_work requeue finished the
      request.
      
      Remove the poll entry when we find it and get ownership of it. This
      prevents multiple invocations from finding it.
      
      Fixes: aa43477b ("io_uring: poll rework")
      Reported-by: Dylan Yudaken <dylany@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
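
      A hedged sketch of the per-bucket walk after this fix (helper names from
      that era's fs/io_uring.c, details approximate): the entry is unlinked
      from the cancel hash as soon as it is found, so a later pass over the
      same bucket can no longer see and cancel it again.

      static bool remove_all_bucket_sketch(struct hlist_head *bucket,
                                           struct task_struct *tsk,
                                           bool cancel_all)
      {
              struct hlist_node *tmp;
              struct io_kiocb *req;
              bool found = false;

              hlist_for_each_entry_safe(req, tmp, bucket, hash_node) {
                      if (!io_match_task_safe(req, tsk, cancel_all))
                              continue;
                      /* the fix: remove on find, before kicking off cancelation */
                      hlist_del_init(&req->hash_node);
                      io_poll_cancel_req(req);
                      found = true;
              }
              return found;
      }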
  3. 21 Mar, 2022 1 commit
  4. 20 Mar, 2022 2 commits
    • io_uring: ensure that fsnotify is always called · f63cf519
      Jens Axboe authored
      Ensure that we call fsnotify_modify() if we write a file, and that we
      do fsnotify_access() if we read it. This enables anyone using inotify
      on the file to get notified.
      
      Ditto for fallocate, ensure that fsnotify_modify() is called.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
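
      For illustration, a hedged sketch of the idea (the helper below is
      hypothetical; the real change hooks the existing read/write/fallocate
      completion paths): on success, fire the same fsnotify hooks the
      synchronous VFS paths use, so inotify watchers see the event.

      #include <linux/fsnotify.h>

      /* Hypothetical helper: called after an io_uring read or write completes. */
      static void sketch_notify_rw(struct kiocb *kiocb, long res, bool is_write)
      {
              if (res <= 0)
                      return;
              if (is_write)
                      fsnotify_modify(kiocb->ki_filp);        /* IN_MODIFY */
              else
                      fsnotify_access(kiocb->ki_filp);        /* IN_ACCESS */
      }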
    • io_uring: recycle provided before arming poll · abdad709
      Jens Axboe authored
      We currently have a race where we recycle the selected buffer if poll
      returns IO_APOLL_OK. But that's too late, as the poll could already be
      triggering or have triggered. If that race happens, then we're putting a
      buffer that's already being used.
      
      Fix this by recycling before we arm poll. This does mean that we'll
      sometimes almost instantly re-select the buffer, but it's rare enough in
      testing that it should not pose a performance issue.
      
      Fixes: b1c62645 ("io_uring: recycle provided buffers if request goes async")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
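
      A hedged sketch of the ordering change (helper names from that era's
      io_uring code, argument lists trimmed): the selected buffer is recycled
      before io_arm_poll_handler() runs, rather than only after it has
      returned IO_APOLL_OK.

      static void sketch_queue_async(struct io_kiocb *req)
      {
              /* The fix: give the provided buffer back *before* poll can
               * trigger and the request can select/use a buffer again. */
              io_kbuf_recycle(req);

              switch (io_arm_poll_handler(req)) {
              case IO_APOLL_OK:
                      break;          /* poll armed; buffer already recycled */
              case IO_APOLL_READY:
              case IO_APOLL_ABORTED:
                      /* re-issue or punt to io-wq, as before this change */
                      break;
              }
      }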
  5. 18 Mar, 2022 2 commits
  6. 17 Mar, 2022 9 commits
  7. 16 Mar, 2022 4 commits
    • io_uring: cache poll/double-poll state with a request flag · 91eac1c6
      Jens Axboe authored
      With commit "io_uring: cache req->apoll->events in req->cflags" applied,
      we now have just io_poll_remove_entries() dipping into req->apoll when
      it isn't strictly necessary.
      
      Mark poll and double-poll with a flag, so we know if we need to look
      at apoll->double_poll. This avoids pulling in those cachelines if we
      don't need them. The common case is that the poll wake handler already
      removed these entries while hot off the completion path.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
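
      A hedged sketch of what the flag check buys (close to, but not exactly,
      the resulting io_poll_remove_entries()): the common path returns before
      ever touching req->apoll or its double-poll entry.

      static void sketch_poll_remove_entries(struct io_kiocb *req)
      {
              /* Common case: nothing armed, or the wake handler already
               * removed the entries -> skip the cold cachelines entirely. */
              if (!(req->flags & (REQ_F_SINGLE_POLL | REQ_F_DOUBLE_POLL)))
                      return;

              rcu_read_lock();
              if (req->flags & REQ_F_SINGLE_POLL)
                      io_poll_remove_entry(io_poll_get_single(req));
              if (req->flags & REQ_F_DOUBLE_POLL)
                      io_poll_remove_entry(io_poll_get_double(req));
              rcu_read_unlock();
      }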
    • io_uring: cache req->apoll->events in req->cflags · 81459350
      Jens Axboe authored
      When we arm poll on behalf of a different type of request, like a network
      receive, then we allocate req->apoll as our poll entry. Running network
      workloads shows io_poll_check_events() as the most expensive part of
      io_uring, and it's all due to having to pull in req->apoll instead of
      just the request which we have hot already.
      
      Cache poll->events in req->cflags, which isn't used until the request
      completes anyway. This isn't strictly needed for regular poll, where
      req->poll.events is used and thus already hot, but for the sake of
      unification we do it all around.
      
      This saves 3-4% of overhead in certain request workloads.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
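
      A hedged sketch of the caching idea (types and helpers simplified): the
      poll mask is mirrored into req->cflags when apoll is armed, and the hot
      wake/check path reads that copy instead of dereferencing req->apoll.

      /* At arm time: req->cflags is unused until completion, so stash the
       * events there alongside the authoritative copy in the apoll entry. */
      static void sketch_cache_apoll_events(struct io_kiocb *req,
                                            struct async_poll *apoll, u32 events)
      {
              apoll->poll.events = events;
              req->cflags = events;           /* lives in an already-hot cacheline */
      }

      /* On the check path: no req->apoll pull-in needed just for the events. */
      static u32 sketch_cached_events(struct io_kiocb *req)
      {
              return req->cflags;
      }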
    • io_uring: move req->poll_refs into previous struct hole · 521d61fc
      Jens Axboe authored
      This serves two purposes:
      
      - We now have the last cacheline mostly unused for generic workloads,
        instead of having to pull in the poll refs explicitly for workloads
        that rely on poll arming.
      
      - It shrinks the io_kiocb from 232 to 224 bytes.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
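
      As a rough illustration of the packing idea (field names here are
      illustrative, not the real io_kiocb layout): a 4-byte atomic_t placed
      next to another 4-byte member fills an alignment hole instead of
      starting a new trailing cacheline, which is where the 232 -> 224 byte
      shrink comes from. A tool such as pahole makes these holes visible.

      struct layout_sketch {
              void            *a;             /* 8 bytes                        */
              u32             b;              /* 4 bytes ...                    */
              atomic_t        poll_refs;      /* ... 4 bytes: fills the hole
                                                 that padding used to occupy    */
              void            *c;             /* 8 bytes, still 8-byte aligned  */
      };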
    • io_uring: make tracing format consistent · 052ebf1f
      Dylan Yudaken authored
      Make the tracing formatting for user_data and flags consistent.
      
      Having consistent formatting allows one for example to grep for a specific
      user_data/flags and be able to trace a single sqe through easily.
      
      Change user_data to 0x%llx and flags to 0x%x everywhere. The '0x' is
      useful to disambiguate for example "user_data 100".
      
      Additionally remove the '=' for flags in io_uring_req_failed, again for consistency.
      Signed-off-by: Dylan Yudaken <dylany@fb.com>
      Link: https://lore.kernel.org/r/20220316095204.2191498-1-dylany@fb.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
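
      A hedged sketch of the resulting convention (not a complete tracepoint;
      the surrounding TRACE_EVENT() fields and most arguments are omitted):
      user_data is printed as 0x%llx and flags as 0x%x, with no '=' separators.

      /* Inside a TRACE_EVENT(io_uring_...) definition: */
      TP_printk("ring %p, req %p, user_data 0x%llx, flags 0x%x",
                __entry->ctx, __entry->req,
                (unsigned long long) __entry->user_data, __entry->flags)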
  8. 15 Mar, 2022 1 commit
    • io_uring: recycle apoll_poll entries · 4d9237e3
      Jens Axboe authored
      Particularly for networked workloads, io_uring intensively uses its
      poll based backend to get a notification when data/space is available.
      Profiling workloads, we see 3-4% of alloc+free that is directly attributed
      to just the apoll allocation and free (and the rest being skb alloc+free).
      
      For the fast path, we have ctx->uring_lock held already for both issue
      and the inline completions, and we can utilize that to avoid any extra
      locking needed to have a basic recycling cache for the apoll entries on
      both the alloc and free side.
      
      Double poll still requires an allocation. But those are rare and not
      a fast path item.
      
      With the simple cache in place, we see a 3-4% reduction in overhead for
      the workload.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
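
      A hedged sketch of the allocation side of that cache (details
      approximate): with ctx->uring_lock already held on the issue path, a
      plain list of free entries can be checked before falling back to
      kmalloc(); freed entries go back onto the same list under the same lock.

      static struct async_poll *sketch_apoll_get(struct io_ring_ctx *ctx,
                                                 unsigned int issue_flags)
      {
              struct async_poll *apoll;

              if (!(issue_flags & IO_URING_F_UNLOCKED) &&
                  !list_empty(&ctx->apoll_cache)) {
                      /* uring_lock held: pop a recycled entry, no extra locking */
                      apoll = list_first_entry(&ctx->apoll_cache,
                                               struct async_poll,
                                               poll.wait.entry);
                      list_del_init(&apoll->poll.wait.entry);
                      return apoll;
              }
              /* cache empty or lock not held: fall back to a fresh allocation */
              return kmalloc(sizeof(*apoll), GFP_ATOMIC);
      }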
  9. 12 Mar, 2022 1 commit
  10. 10 Mar, 2022 18 commits