1. 27 Jul, 2018 1 commit
    • readahead: stricter check for bdi io_pages · dc30b96a
      Markus Stockhausen authored
      ondemand_readahead() checks bdi->io_pages to cap the maximum number of
      pages that need to be processed. This works until the readit section:
      if we do an async-only readahead (async size == sync size) and the
      target is at the beginning of the window, we expand the window by
      another get_next_ra_size() pages. btrace for large reads shows that the
      kernel then always issues a read of double that size at the beginning
      of processing. Add an additional io_pages check in the lower part of
      the function; a sketch of the added clamp follows this entry.
      The fix helps devices that hard-limit bio pages and rely on proper
      handling of max_hw_read_sectors (e.g. older FusionIO cards). For that
      reason it could qualify for stable.
      
      Fixes: 9491ae4a ("mm: don't cap request size based on read-ahead setting")
      Cc: stable@vger.kernel.org
      Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      dc30b96a
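      Below is a minimal standalone sketch, in userspace C, of the clamp
      described above. The struct, helper, and numbers are stand-ins for the
      kernel's file_ra_state, get_next_ra_size() and bdi->io_pages, chosen
      only to show the arithmetic; this is illustrative, not the verbatim
      patch.

      #include <stdio.h>

      /* Cut-down stand-in for the kernel's readahead window state. */
      struct ra_state {
              unsigned long start;       /* first page of the window */
              unsigned long size;        /* pages to read ahead      */
              unsigned long async_size;  /* async part of the window */
      };

      /* Stand-in for get_next_ra_size(): grow the window, capped at max. */
      static unsigned long next_ra_size(const struct ra_state *ra,
                                        unsigned long max)
      {
              unsigned long sz = 2 * ra->size;
              return sz < max ? sz : max;
      }

      /* The readit-section logic with the extra io_pages-aware clamp. */
      static void readit_clamp(struct ra_state *ra, unsigned long offset,
                               unsigned long max_pages)
      {
              if (offset == ra->start && ra->size == ra->async_size) {
                      unsigned long add = next_ra_size(ra, max_pages);

                      if (ra->size + add <= max_pages) {
                              ra->async_size = add;
                              ra->size += add;
                      } else {
                              /* New check: never exceed the io_pages cap. */
                              ra->size = max_pages;
                              ra->async_size = max_pages >> 1;
                      }
              }
      }

      int main(void)
      {
              /* Async-only window of 128 pages, device capped at 160 pages. */
              struct ra_state ra = { .start = 0, .size = 128, .async_size = 128 };

              readit_clamp(&ra, 0, 160);
              printf("size=%lu async_size=%lu\n", ra.size, ra.async_size);
              return 0;
      }

      Without the clamp the expanded window would exceed the 160-page cap;
      with it the read is held to max_pages, which is what devices with a
      hard bio-page limit need.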
  2. 26 Jul, 2018 2 commits
  3. 25 Jul, 2018 2 commits
    • xen/blkfront: remove unused macros · d3df0ac0
      Juergen Gross authored
      Remove some macros not used anywhere.
      Acked-by: Roger Pau Monné <roger.pau@citrix.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d3df0ac0
    • Merge branch 'nvme-4.19' of git://git.infradead.org/nvme into for-4.19/block · eca53cb6
      Jens Axboe authored
      Pull NVMe updates from Christoph:
      
      "Highlights:
      
       - massively improved tracepoints (Keith Busch)
       - support for larger inline data in the RDMA host and target
         (Steve Wise)
       - RDMA setup/teardown path fixes and refactor (Sagi Grimberg)
       - Command Supported and Effects log support for the NVMe target
         (Chaitanya Kulkarni)
       - buffered I/O support for the NVMe target (Chaitanya Kulkarni)
      
       plus the usual set of cleanups and small enhancements."
      
      * 'nvme-4.19' of git://git.infradead.org/nvme:
        nvmet: don't use uuid_le type
        nvmet: check fileio lba range access boundaries
        nvmet: fix file discard return status
        nvme-rdma: centralize admin/io queue teardown sequence
        nvme-rdma: centralize controller setup sequence
        nvme-rdma: unquiesce queues when deleting the controller
        nvme-rdma: mark expected switch fall-through
        nvme: add disk name to trace events
        nvme: add controller name to trace events
        nvme: use hw qid in trace events
        nvme: cache struct nvme_ctrl reference to struct nvme_request
        nvmet-rdma: add an error flow for post_recv failures
        nvmet-rdma: add unlikely check in the fast path
        nvmet-rdma: support max(16KB, PAGE_SIZE) inline data
        nvme-rdma: support up to 4 segments of inline data
        nvmet: add buffered I/O support for file backed ns
        nvmet: add commands supported and effects log page
        nvme: move init of keep_alive work item to controller initialization
        nvme.h: resync with nvme-cli
      eca53cb6
  4. 24 Jul, 2018 18 commits
  5. 23 Jul, 2018 10 commits
  6. 22 Jul, 2018 2 commits
    • blk-mq: fail the request in case issue failure · 8824f622
      Ming Lei authored
      Inside blk_mq_try_issue_list_directly(), if issuing a request fails we
      shouldn't try to issue it again, otherwise the warning in
      blk_mq_start_request() will be triggered. This aligns the behaviour
      with the other request issue & dispatch paths; a simplified sketch of
      the fixed loop follows this entry.
      
      Fixes: 6ce3dd6e ("blk-mq: issue directly if hw queue isn't busy in case of 'none'")
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Laurence Oberman <loberman@redhat.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: kernel test robot <rong.a.chen@intel.com>
      Cc: LKP <lkp@01.org>
      Reported-by: kernel test robot <rong.a.chen@intel.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8824f622
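      The sketch below shows, in kernel-style C, roughly how the list-issue
      loop looks with this fix applied: a hard issue failure ends the request
      instead of re-issuing or re-inserting it. This is a simplified reading
      of block/blk-mq.c from that series, not a standalone or verbatim copy.

      void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
                                          struct list_head *list)
      {
              while (!list_empty(list)) {
                      struct request *rq = list_first_entry(list, struct request,
                                                            queuelist);
                      blk_status_t ret;

                      list_del_init(&rq->queuelist);
                      ret = blk_mq_request_issue_directly(rq);
                      if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
                              /* Out of resources: requeue and retry later. */
                              list_add(&rq->queuelist, list);
                              break;
                      }
                      if (ret != BLK_STS_OK) {
                              /*
                               * Hard failure: fail the request here rather than
                               * issuing it again, which would trigger the warning
                               * in blk_mq_start_request().
                               */
                              blk_mq_end_request(rq, ret);
                      }
              }
      }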
    • blk-rq-qos: make depth comparisons unsigned · 22f17952
      Josef Bacik authored
      With the change to use UINT_MAX I broke the depth check: comparing the
      inflight count (e.g. 0) against (int)UINT_MAX as a signed value no
      longer gives the intended result. Fix this by changing everything to
      unsigned int to match the depth; a minimal illustration of the
      signed/unsigned pitfall follows this entry.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      22f17952
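      A minimal standalone C illustration of the pitfall being fixed (not the
      kernel code): once UINT_MAX is stored in a signed int it becomes -1 on
      common ABIs, so a signed depth comparison stops meaning "effectively
      unlimited depth".

      #include <limits.h>
      #include <stdio.h>

      int main(void)
      {
              unsigned int inflight = 0;               /* current in-flight count    */
              int depth_signed = (int)UINT_MAX;        /* wraps to -1 on common ABIs */
              unsigned int depth_unsigned = UINT_MAX;  /* what the code really means */

              /* Both operands signed: 0 < -1 is false, so the check misbehaves. */
              printf("signed:   inflight below depth? %d\n",
                     (int)inflight < depth_signed);

              /* Both operands unsigned: 0 < UINT_MAX is true, as intended. */
              printf("unsigned: inflight below depth? %d\n",
                     inflight < depth_unsigned);
              return 0;
      }

      Making the depth and the comparison both unsigned, as the commit does,
      keeps UINT_MAX meaning "no practical limit".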
  7. 18 Jul, 2018 5 commits