  18 Oct, 2021 2 commits
    • nvme: wire up completion batching for the IRQ path · 4f502245
      Jens Axboe authored
      Trivial to do now: we just need our own io_comp_batch on the stack,
      passed in to the usual command completion handling.
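
      For reference, a minimal sketch of the resulting IRQ handler shape
      (nvme_poll_cq and nvme_pci_complete_batch are the helpers from this
      series; error handling abridged):

          static irqreturn_t nvme_irq(int irq, void *data)
          {
          	struct nvme_queue *nvmeq = data;
          	DEFINE_IO_COMP_BATCH(iob);	/* on-stack io_comp_batch */

          	if (nvme_poll_cq(nvmeq, &iob)) {
          		/* complete everything the CQ processing batched up */
          		if (!rq_list_empty(iob.req_list))
          			nvme_pci_complete_batch(&iob);
          		return IRQ_HANDLED;
          	}
          	return IRQ_NONE;
          }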
      
      I pondered making this dependent on how many entries we had to process,
      but even for a single entry there's no discernible difference in
      performance or latency. Running a sync workload over io_uring:
      
      t/io_uring -b512 -d1 -s1 -c1 -p0 -F1 -B1 -n2 /dev/nvme1n1 /dev/nvme2n1
      
      yields the following performance before the patch:
      
      IOPS=254820, BW=124MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=251174, BW=122MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=250806, BW=122MiB/s, IOS/call=1/1, inflight=(1 1)
      
      and the following after:
      
      IOPS=255972, BW=124MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=251920, BW=123MiB/s, IOS/call=1/1, inflight=(1 1)
      IOPS=251794, BW=122MiB/s, IOS/call=1/1, inflight=(1 1)
      
      which is definitely not slower, and about the same once you factor in
      a bit of run-to-run variance. For peak performance workloads,
      benchmarking shows a 2% improvement.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: utilize the io batching infrastructure for more efficient polled IO · b688f11e
      Jens Axboe authored
      Wire up using an io_comp_batch for f_op->iopoll(). If the lower stack
      supports it, we can handle high rates of polled IO more efficiently.
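
      Roughly, the reaping side becomes the following (a sketch only: the
      iteration over inflight polled requests and the bookkeeping in
      io_uring's poll loop are elided, and poll_flags stands in for the
      flags argument):

          DEFINE_IO_COMP_BATCH(iob);

          /* for each inflight polled request: let the lower stack batch it */
          ret = kiocb->ki_filp->f_op->iopoll(kiocb, &iob, poll_flags);

          /* afterwards, complete everything batched up in one call */
          if (!rq_list_empty(iob.req_list))
          	iob.complete(&iob);

      iob.complete points at the completion handler the driver installed
      when batching requests (nvme_pci_complete_batch in the NVMe case),
      so a full batch is finished with a single indirect call.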
      
      This raises the single core efficiency on my system from ~6.1M IOPS to
      ~6.6M IOPS running a random read workload at depth 128 on two gen2
      Optane drives.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>