1. 15 Apr, 2015 10 commits
    • dm: add 'use_blk_mq' module param and expose in per-device ro sysfs attr · 17e149b8
      Mike Snitzer authored
      Request-based DM's blk-mq support defaults to off, but a user can easily
      change the default using the dm_mod.use_blk_mq module/boot option.
      
      Also, you can check what mode a given request-based DM device is using
      with:
        cat /sys/block/dm-X/dm/use_blk_mq
      
      This change enables further cleanup and reduces work (e.g. the
      md->io_pool and md->rq_pool aren't created when using blk-mq).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
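
      A minimal sketch of how such a module parameter and read-only sysfs
      attribute fit together (the dm_use_blk_mq() helper and exact wording
      are assumptions, not lifted from the patch):

        #include <linux/module.h>

        static bool use_blk_mq = false;
        module_param(use_blk_mq, bool, S_IRUGO);
        MODULE_PARM_DESC(use_blk_mq, "Use block multiqueue for request-based DM devices");

        /* read-only: reports whether this mapped_device is using blk-mq */
        static ssize_t dm_attr_use_blk_mq_show(struct mapped_device *md, char *buf)
        {
                return sprintf(buf, "%d\n", dm_use_blk_mq(md));
        }
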
    • dm: optimize dm_mq_queue_rq to _not_ use kthread if using pure blk-mq · 02233342
      Mike Snitzer authored
      dm_mq_queue_rq() is in atomic context, so care must be taken not to
      sleep -- as such GFP_ATOMIC is used for the md->bs bioset allocations
      and dm-mpath's call to blk_get_request().  In the future the bioset
      allocations will hopefully go away (by removing support for partial
      completions of bios in a cloned request).
      
      Also prepare for supporting DM blk-mq on top of old-style request_fn
      device(s) if the new dm_mod 'use_blk_mq' parameter is set.  The kthread
      will still be used to queue work if blk-mq is used on top of old-style
      request_fn device(s).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
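
      To illustrate the constraint, a hedged sketch of the allocations as
      made from .queue_rq context (surrounding variable names are
      illustrative):

        /* atomic context: must not sleep, so GFP_ATOMIC, never GFP_KERNEL */
        clone_bio = bio_alloc_bioset(GFP_ATOMIC, nr_iovecs, md->bs);
        if (!clone_bio)
                return BLK_MQ_RQ_QUEUE_BUSY;    /* back off; blk-mq retries */

        /* dm-mpath similarly takes a request from the underlying queue */
        clone = blk_get_request(bdev_get_queue(bdev), rq_data_dir(rq),
                                GFP_ATOMIC);
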
    • dm: add full blk-mq support to request-based DM · bfebd1cd
      Mike Snitzer authored
      Commit e5863d9a ("dm: allocate requests in target when stacking on
      blk-mq devices") served as the first step toward fully utilizing blk-mq
      in request-based DM -- it enabled stacking an old-style (request_fn)
      request_queue on top of the underlying blk-mq device(s).  That first step
      didn't improve performance of DM multipath on top of fast blk-mq devices
      (e.g. NVMe) because the top-level old-style request_queue was severely
      limited by the queue_lock.
      
      The second step offered here enables stacking a blk-mq request_queue
      on top of the underlying blk-mq device(s).  This unlocks significant
      performance gains on fast blk-mq devices.  Keith Busch tested on his
      NVMe testbed and offered this really positive news:
      
       "Just providing a performance update. All my fio tests are getting
        roughly equal performance whether accessed through the raw block
        device or the multipath device mapper (~470k IOPS). I could only push
        ~20% of the raw iops through dm before this conversion, so this latest
        tree is looking really solid from a performance standpoint."
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Tested-by: Keith Busch <keith.busch@intel.com>
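
      For context, registering as a native blk-mq queue follows the usual
      pattern of filling a blk_mq_tag_set and initializing the queue from it;
      a hedged sketch against the blk-mq API of that era (field values and
      helper names are illustrative):

        static struct blk_mq_ops dm_mq_ops = {
                .queue_rq       = dm_mq_queue_rq,
                .map_queue      = blk_mq_map_queue,
                .complete       = dm_softirq_done,
                .init_request   = dm_mq_init_request,
        };

        md->tag_set.ops = &dm_mq_ops;
        md->tag_set.queue_depth = BLKDEV_MAX_RQ;
        md->tag_set.numa_node = NUMA_NO_NODE;
        md->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
        md->tag_set.nr_hw_queues = 1;
        md->tag_set.cmd_size = sizeof(struct dm_rq_target_io);

        err = blk_mq_alloc_tag_set(&md->tag_set);
        if (err)
                return err;

        q = blk_mq_init_allocated_queue(&md->tag_set, md->queue);
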
    • dm: impose configurable deadline for dm_request_fn's merge heuristic · 0ce65797
      Mike Snitzer authored
      Otherwise, for sequential workloads, the dm_request_fn can allow
      excessive request merging at the expense of increased service time.
      
      Add a per-device sysfs attribute to allow the user to control how long a
      request that is a reasonable merge candidate can remain queued on the
      request queue.  The resolution of this request dispatch deadline is in
      microseconds (ranging from 1 to 100000 usecs).  To set a 20us deadline:
        echo 20 > /sys/block/dm-7/dm/rq_based_seq_io_merge_deadline
      
      The dm_request_fn's merge heuristic and associated extra accounting is
      disabled by default (rq_based_seq_io_merge_deadline is 0).
      
      This sysfs attribute is not applicable to bio-based DM devices so it
      will only ever report 0 for them.
      
      Allowing a request to remain on the queue will block other requests
      behind it.  But introducing a short dequeue delay has proven
      very effective at enabling certain sequential IO workloads on really
      fast, yet IOPS constrained, devices to build up slightly larger IOs --
      yielding 90+% throughput improvements.  Having precise control over the
      time taken to wait for larger requests to build affords control beyond
      that of waiting for certain IO sizes to accumulate (which would require
      a deadline anyway).  This knob will only ever make sense with sequential
      IO workloads and the particular value used is storage configuration
      specific.
      
      Given the expected niche use-case for when this knob is useful it has
      been deemed acceptable to expose this relatively crude method for
      crafting optimal IO on specific storage -- especially given the solution
      is simple yet effective.  In the context of DM multipath, it is
      advisable to tune this sysfs attribute to a value that offers the best
      performance for the common case (e.g. if 4 paths are expected active,
      tune for that; if paths fail then performance may be slightly reduced).
      
      Alternatives were explored to have request-based DM autotune this value
      (e.g. if/when paths fail) but they were quickly deemed too fragile and
      complex to warrant further design and development time.  If this problem
      proves more common as faster storage emerges we'll have to look at
      elevating a generic solution into the block core.
      Tested-by: Shiva Krishna Merla <shivakrishna.merla@netapp.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
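
      A hedged sketch of how such a microsecond deadline check can be
      implemented with ktime (field names are illustrative):

        static bool dm_request_peeked_before_merge_deadline(struct mapped_device *md)
        {
                ktime_t kt_deadline;

                if (!md->seq_rq_merge_deadline_usecs)
                        return false;

                kt_deadline = ns_to_ktime((u64)md->seq_rq_merge_deadline_usecs *
                                          NSEC_PER_USEC);
                kt_deadline = ktime_add_safe(md->last_rq_start_time, kt_deadline);

                /* keep the request queued until the deadline expires */
                return !ktime_after(ktime_get(), kt_deadline);
        }
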
    • dm sysfs: introduce ability to add writable attributes · b898320d
      Mike Snitzer authored
      Add DM_ATTR_RW() macro and establish .store method in dm_sysfs_ops.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
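
      A hedged sketch of what such a macro and .store method can look like
      (modeled on the existing read-only DM_ATTR_RO pattern; details are
      assumptions):

        #define DM_ATTR_RW(_name) \
        static struct dm_sysfs_attr dm_attr_##_name = \
                __ATTR(_name, S_IRUGO | S_IWUSR, dm_attr_##_name##_show, \
                       dm_attr_##_name##_store)

        static ssize_t dm_attr_store(struct kobject *kobj, struct attribute *attr,
                                     const char *page, size_t count)
        {
                struct dm_sysfs_attr *dm_attr;
                struct mapped_device *md;
                ssize_t ret;

                dm_attr = container_of(attr, struct dm_sysfs_attr, attr);
                if (!dm_attr->store)
                        return -EIO;

                md = dm_get_from_kobject(kobj);
                if (!md)
                        return -EINVAL;

                ret = dm_attr->store(md, page, count);
                dm_put(md);

                return ret;
        }
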
    • dm: don't start current request if it would've merged with the previous · de3ec86d
      Mike Snitzer authored
      Request-based DM's dm_request_fn() pulls requests off the queue so
      quickly that steps need to be taken to promote merging by deferring
      request processing when it makes sense.
      
      If the current request would've merged with the previous request, let
      the current request stay on the queue longer.
      Suggested-by: Jens Axboe <axboe@fb.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
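
      The underlying check is a simple position/direction comparison; a
      hedged sketch (the last_rq_* bookkeeping fields are illustrative):

        /* in dm_request_fn(), after peeking rq from the queue */
        if (md_in_flight(md) &&
            md->last_rq_pos == blk_rq_pos(rq) &&
            md->last_rq_rw == rq_data_dir(rq)) {
                /* would have merged with the previous request:
                 * leave it queued and re-run the queue shortly */
                blk_delay_queue(q, 10);
                return;
        }
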
    • dm: reduce the queue delay used in dm_request_fn from 100ms to 10ms · d548b34b
      Mike Snitzer authored
      Commit 7eaceacc ("block: remove per-queue plugging") didn't justify
      DM's use of a 100ms delay; such an extended delay is a liability when
      there is reason to re-kick the queue.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
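
      The change itself amounts to shortening the delay passed to
      blk_delay_queue() (second argument is in milliseconds):

        blk_delay_queue(q, 10);         /* was 100 */
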
    • dm: don't schedule delayed run of the queue if nothing to do · 9d1deb83
      Mike Snitzer authored
      In request-based DM's dm_request_fn(), if blk_peek_request() returns
      NULL, just return.  This avoids an unnecessary blk_delay_queue().
      Reported-by: Jens Axboe <axboe@fb.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
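
      In dm_request_fn() terms the early return looks roughly like this
      (a sketch, not the literal diff):

        rq = blk_peek_request(q);
        if (!rq)
                return;         /* queue is empty: no delayed re-run needed */
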
    • dm: only run the queue on completion if congested or no requests pending · 9a0e609e
      Mike Snitzer authored
      On really fast storage it can be beneficial to delay running the
      request_queue to allow the elevator more opportunity to merge requests.
      
      Otherwise, it has been observed that requests are sent to
      q->request_fn much more quickly than is ideal on IOPS-bound backends.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
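
      A hedged sketch of the completion-side condition (names are
      illustrative; the real accounting lives in DM's in-flight counters):

        static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
        {
                int nr_pending = md_in_flight(md);

                /* only kick the queue if it is idle or congested; otherwise
                 * let requests accumulate so the elevator can merge them */
                if (run_queue &&
                    (!nr_pending || nr_pending >= md->queue->nr_congestion_on))
                        blk_run_queue_async(md->queue);
        }
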
    • dm: remove request-based logic from make_request_fn wrapper · ff36ab34
      Mike Snitzer authored
      The old dm_request() method used for q->make_request_fn had a branch for
      request-based DM support, but it isn't needed given that
      dm_init_request_based_queue() sets it to the standard blk_queue_bio()
      anyway.
      
      Clean up dm_init_md_queue() to be DM device-type agnostic and have
      dm_setup_md_queue() properly finish queue setup based on DM device type
      (bio-based vs request-based).
      
      A followup block patch can be made to remove the export for
      blk_queue_bio() now that DM no longer calls it directly.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
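
      A hedged sketch of the resulting split (helper names follow the commit
      message; the bio-based branch is an assumption):

        int dm_setup_md_queue(struct mapped_device *md)
        {
                if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) {
                        if (!dm_init_request_based_queue(md))
                                return -EINVAL;
                } else {
                        /* bio-based: standard make_request entry point */
                        blk_queue_make_request(md->queue, dm_make_request);
                }

                return 0;
        }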