1. 30 May, 2018 4 commits
    • nbd: clear DISCONNECT_REQUESTED flag once disconnection occurs. · 5e3c3a7e
      Kevin Vigor authored
      When a userspace client requests an NBD device be disconnected, the
      DISCONNECT_REQUESTED flag is set. While this flag is set, the driver
      will not inform userspace when a connection is closed.
      
      Unfortunately the flag was never cleared, so once a disconnect was
      requested the driver would thereafter never tell userspace about a
      closed connection. Thus when connections failed due to timeout, no
      attempt to reconnect was made and eventually the device would fail.
      
      Fix by clearing the DISCONNECT_REQUESTED flag (and setting the
      DISCONNECTED flag) once all connections are closed.
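      A minimal sketch of the fix described above: once the last connection closes, the request flag is cleared and the disconnected flag is set, so later connection failures are reported to userspace again. The flag names, the `nbd_config_sketch` struct, and both helper functions are illustrative stand-ins, not the actual kernel structures.

      ```c
      #include <assert.h>

      /* Hypothetical flag bits, modeled on the behavior in the commit message. */
      #define NBD_DISCONNECT_REQUESTED  (1u << 0)
      #define NBD_DISCONNECTED          (1u << 1)

      struct nbd_config_sketch {
          unsigned int flags;
          int num_connections;   /* open connections remaining */
      };

      /* Called when one connection closes. */
      void nbd_conn_closed(struct nbd_config_sketch *cfg)
      {
          if (cfg->num_connections > 0)
              cfg->num_connections--;
          if (cfg->num_connections == 0) {
              /* The fix: clear the request flag instead of leaving it set
               * forever, and mark the device as disconnected. */
              cfg->flags &= ~NBD_DISCONNECT_REQUESTED;
              cfg->flags |= NBD_DISCONNECTED;
          }
      }

      /* Should userspace be told about a closed connection? While a
       * disconnect is requested, notifications are suppressed. */
      int nbd_should_notify(const struct nbd_config_sketch *cfg)
      {
          return !(cfg->flags & NBD_DISCONNECT_REQUESTED);
      }
      ```

      With the original bug, the request flag stayed set after all connections closed, so `nbd_should_notify` would return false forever and timed-out connections were never reported.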
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Kevin Vigor <kvigor@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-throttle: fix potential NULL pointer dereference in throtl_select_dispatch · 2ab74cd2
      Liu Bo authored
      In throtl_select_dispatch, tg is dereferenced before it is checked.
      Since tg may be NULL, this is a potential NULL pointer dereference;
      fix it by checking tg before use.
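      The bug class above (use before NULL check) can be sketched as follows. The `tg_sketch` struct and `select_dispatch_fixed` helper are hypothetical stand-ins for the real throtl_grp structures, shown only to illustrate the corrected ordering.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Illustrative stand-in for a throttle group. */
      struct tg_sketch {
          unsigned int nr_pending;
      };

      /* Buggy shape: tg->nr_pending was read before tg was tested for NULL.
       * Fixed shape: bail out before any dereference. */
      unsigned int select_dispatch_fixed(struct tg_sketch *tg)
      {
          if (!tg)                 /* check first, then use */
              return 0;
          return tg->nr_pending;
      }
      ```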
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: kyber: make kyber more friendly with merging · a6088845
      Jianchao Wang authored
      Currently, kyber is very unfriendly to merging. kyber depends on the
      ctx rq_list to do merging; however, most of the time it will not
      leave any requests in the ctx rq_list. This is because even if the
      tokens of one domain are used up, kyber will still try to dispatch
      requests from other domains and flush the rq_list in the process.
      
      To improve this, we set up a kyber_ctx_queue (kcq), which is similar
      to a ctx but has a separate rq_list per domain, and build the same
      mapping between kcq and khd as between ctx and hctx. Then we can
      merge, insert and dispatch for each domain separately, and only
      flush a kcq's rq_list when a domain token is obtained successfully.
      If one domain's tokens are used up, requests can be left on that
      domain's rq_list and may be merged with following IO.
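      The per-context, per-domain queues described above can be sketched as follows. The domain names, the singly linked list shape, and the `kcq_insert`/`kcq_flush` helpers are illustrative, not the real kyber structures; the point is that each domain keeps its own pending list, and a list is only flushed when a token for that domain is held.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Hypothetical scheduling domains, one pending list each. */
      enum { KYBER_READ, KYBER_WRITE, KYBER_OTHER, KYBER_NUM_DOMAINS };

      struct rq_sketch {
          struct rq_sketch *next;
      };

      struct kcq_sketch {
          /* One list per domain: requests for a domain whose tokens are
           * exhausted stay queued here and can merge with later IO. */
          struct rq_sketch *rq_list[KYBER_NUM_DOMAINS];
      };

      void kcq_insert(struct kcq_sketch *kcq, int domain, struct rq_sketch *rq)
      {
          rq->next = kcq->rq_list[domain];
          kcq->rq_list[domain] = rq;
      }

      /* Flush one domain's list only if a token for that domain was obtained. */
      struct rq_sketch *kcq_flush(struct kcq_sketch *kcq, int domain, int have_token)
      {
          struct rq_sketch *list;

          if (!have_token)
              return NULL;        /* leave requests queued for future merging */
          list = kcq->rq_list[domain];
          kcq->rq_list[domain] = NULL;
          return list;
      }
      ```

      This contrasts with the old behavior, where a single shared rq_list was flushed whenever any domain could dispatch, leaving nothing behind to merge against.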
      
      Following are my test results on a machine with 8 cores and an NVMe
      card (INTEL SSDPEKKR128G7).
      
      fio size=256m ioengine=libaio iodepth=64 direct=1 numjobs=8
      (each value reported as seq/random)
      +--------+----------+-----------+------------+-----------------+---------+
      | patch? | bw(MB/s) |   iops    | slat(usec) |    clat(usec)   |  merge  |
      +--------+----------+-----------+------------+-----------------+---------+
      |  w/o   |  606/612 | 151k/153k |  6.89/7.03 | 3349.21/3305.40 |   0/0   |
      +--------+----------+-----------+------------+-----------------+---------+
      |  w/    | 1083/616 | 277k/154k |  4.93/6.95 | 1830.62/3279.95 | 223k/3k |
      +--------+----------+-----------+------------+-----------------+---------+
      With numjobs set to 16, bw and iops reach 1662MB/s and 425k on my
      platform.
      Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq: abstract out blk-mq-sched rq list iteration bio merge helper · 9c558734
      Jens Axboe authored
      No functional changes in this patch, just a prep patch for utilizing
      this in an IO scheduler.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Reviewed-by: Omar Sandoval <osandov@fb.com>
  2. 29 May, 2018 19 commits
  3. 28 May, 2018 4 commits
  4. 25 May, 2018 13 commits