1. 02 Oct, 2020 4 commits
    • bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() · 17e4aed8
      Coly Li authored
      The parameter 'int n' of bch_bucket_alloc_set() is not clearly
      defined. According to the code comments, n is the number of buckets to
      allocate, but in the code itself 'n' is the maximum number of caches to
      iterate over. Indeed, at every location where bch_bucket_alloc_set() is
      called, 'n' is always 1.
      
      This patch removes the confusing and unnecessary 'int n' from the
      parameter list of bch_bucket_alloc_set(), and explicitly allocates only
      1 bucket for its callers.
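
      For illustration, a minimal sketch of the interface change (the
      prototypes here only approximate drivers/md/bcache/alloc.c and are
      not the verbatim patch):

        /* Before: 'n' was passed by every caller, yet always as 1. */
        int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
                                 struct bkey *k, int n, bool wait);

        /* After: the parameter is gone and exactly one bucket is
         * allocated; call sites simply drop the argument, e.g.
         *
         *     bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, true);
         */
        int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
                                 struct bkey *k, bool wait);
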
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: Convert to DEFINE_SHOW_ATTRIBUTE · 84e5d136
      Qinglang Miao authored
      Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
      
      Since inode->i_private equals the data argument passed to
      debugfs_create_file(), which is NULL here, the converted code is
      equivalent to the original logic.
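
      As a rough illustration of the pattern (hypothetical foo_show() and
      foo_debugfs_init(), not the actual bcache hunk):

        #include <linux/debugfs.h>
        #include <linux/seq_file.h>

        static int foo_show(struct seq_file *s, void *unused)
        {
                seq_puts(s, "example output\n");
                return 0;
        }
        /* Generates foo_open(), which calls
         * single_open(file, foo_show, inode->i_private), plus foo_fops. */
        DEFINE_SHOW_ATTRIBUTE(foo);

        static void foo_debugfs_init(struct dentry *foo_dir)
        {
                /* The data argument is NULL, so inode->i_private is NULL
                 * too, matching what the removed open routine passed along. */
                debugfs_create_file("foo", 0400, foo_dir, NULL, &foo_fops);
        }
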
      Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: check c->root with IS_ERR_OR_NULL() in mca_reserve() · 7e59c506
      Dongsheng Yang authored
      The mca_reserve(c) macro checks only whether c->root is NULL. That is
      not enough: when we read the root node in run_cache_set(), an error in
      bch_btree_node_read_done() leaves ERR_PTR(-EIO) in c->root.
      
      We then continue on to unregister, but before unregister_shrinker(&c->shrink)
      is called, bch_mca_count() may be invoked, and we get a crash with a
      call trace like the one below (a sketch of the fix follows the trace):
      
      [ 2149.876008] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000b5
      ... ...
      [ 2150.598931] Call trace:
      [ 2150.606439]  bch_mca_count+0x58/0x98 [escache]
      [ 2150.615866]  do_shrink_slab+0x54/0x310
      [ 2150.624429]  shrink_slab+0x248/0x2d0
      [ 2150.632633]  drop_slab_node+0x54/0x88
      [ 2150.640746]  drop_slab+0x50/0x88
      [ 2150.648228]  drop_caches_sysctl_handler+0xf0/0x118
      [ 2150.657219]  proc_sys_call_handler.isra.18+0xb8/0x110
      [ 2150.666342]  proc_sys_write+0x40/0x50
      [ 2150.673889]  __vfs_write+0x48/0x90
      [ 2150.681095]  vfs_write+0xac/0x1b8
      [ 2150.688145]  ksys_write+0x6c/0xd0
      [ 2150.695127]  __arm64_sys_write+0x24/0x30
      [ 2150.702749]  el0_svc_handler+0xa0/0x128
      [ 2150.710296]  el0_svc+0x8/0xc
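
      The fix is to reject an ERR_PTR as well as NULL; a hedged sketch of
      the macro (paraphrasing drivers/md/bcache/btree.c, with the reserve
      arithmetic kept only schematically):

        #include <linux/err.h>

        /* Before (sketch):
         *   ((c->root && c->root->level ? c->root->level : 1) * 8 + 16)
         * After: an ERR_PTR(-EIO) stored in c->root must also be rejected.
         */
        #define mca_reserve(c)  ((!IS_ERR_OR_NULL((c)->root) &&         \
                                  (c)->root->level ?                    \
                                  (c)->root->level : 1) * 8 + 16)
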
      Signed-off-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: share register sysfs with async register · a58e88bf
      Coly Li authored
      Previously the experimental async registration used a separate sysfs
      file, register_async. The async registration code has now been working
      well for a while, so we can do further testing with it.
      
      This patch changes async bcache registration to share the same sysfs
      file, /sys/fs/bcache/register (and register_quiet). Async registration
      becomes the default behavior if BCACHE_ASYNC_REGISTRATION is set in the
      kernel configuration. By default, BCACHE_ASYNC_REGISTRATION is not
      enabled.
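
      Roughly, the register store path now branches on the Kconfig option
      instead of exposing a second attribute. A minimal sketch with
      hypothetical helper names (not the actual super.c symbols):

        #include <linux/workqueue.h>

        struct reg_args {
                struct work_struct reg_work;
                /* ... device/path details ... */
        };

        static void do_register(struct reg_args *args)
        {
        #ifdef CONFIG_BCACHE_ASYNC_REGISTRATION
                /* defer the potentially slow probe to a workqueue so the
                 * write to /sys/fs/bcache/register returns quickly */
                queue_work(system_wq, &args->reg_work);
        #else
                /* synchronous registration, as before */
                register_device_sync(args);
        #endif
        }
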
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 29 Sep, 2020 1 commit
    • null_blk: add support for max open/active zone limit for zoned devices · dc4d137e
      Niklas Cassel authored
      Add support for user space to set a max open zone and a max active zone
      limit via configfs. The default value for both is 0, meaning no limit.
      
      Call the block layer API functions used for exposing the configured
      limits to sysfs.
      
      Add accounting in null_blk_zoned so that these new limits are respected.
      Performing an operation that would exceed these limits results in a
      standard I/O error.
      
      A max open zone limit exists in the ZBC standard.
      While null_blk_zoned is used to test the Zoned Block Device model in
      Linux, when it comes to differences between ZBC and ZNS, null_blk_zoned
      mostly follows ZBC.
      
      Therefore, implement the manage open zone resources function from ZBC,
      but additionally add support for max active zones.
      This enables user space not only to test against a device with an open
      zone limit, but also to test against a device with an active zone limit.
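
      For example, the accounting boils down to a check of the kind below
      before a zone becomes open or active; the field and helper names are
      hypothetical, not the actual null_blk_zoned.c symbols:

        static blk_status_t nullb_check_zone_limits(struct nullb_device *dev)
        {
                /* 0 means "no limit", mirroring the configfs defaults */
                if (dev->zone_max_active &&
                    dev->nr_active_zones >= dev->zone_max_active)
                        return BLK_STS_IOERR;   /* active-zone limit hit */
                if (dev->zone_max_open &&
                    dev->nr_open_zones >= dev->zone_max_open)
                        return BLK_STS_IOERR;   /* open-zone limit hit */
                return BLK_STS_OK;
        }

      The configured values would also typically be handed to the block
      layer via blk_queue_max_open_zones() and blk_queue_max_active_zones()
      so that they show up under /sys/block/<dev>/queue/.
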
      Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 28 Sep, 2020 1 commit
    • Merge tag 'nvme-5.10-2020-09-27' of git://git.infradead.org/nvme into for-5.10/drivers · 1ed4211d
      Jens Axboe authored
      Pull NVMe updates from Christoph:
      
      "nvme updates for 5.10
      
       - fix keep alive timer modification (Amit Engel)
       - order the PCI ID list more sensibly (Andy Shevchenko)
       - cleanup the open by controller helper (Chaitanya Kulkarni)
       - use an xarray for the CSE log lookup (Chaitanya Kulkarni)
       - support ZNS in nvmet passthrough mode (Chaitanya Kulkarni)
       - fix nvme_ns_report_zones (me)
       - add a sanity check to nvmet-fc (James Smart)
       - fix interrupt allocation when too many polled queues are specified
         (Jeffle Xu)
       - small nvmet-tcp optimization (Mark Wunderlich)"
      
      * tag 'nvme-5.10-2020-09-27' of git://git.infradead.org/nvme:
        nvme-pci: allocate separate interrupt for the reserved non-polled I/O queue
        nvme: fix error handling in nvme_ns_report_zones
        nvmet-fc: fix missing check for no hostport struct
        nvmet: add passthru ZNS support
        nvmet: handle keep-alive timer when kato is modified by a set features cmd
        nvmet-tcp: have queue io_work context run on sock incoming cpu
        nvme-pci: Move enumeration by class to be last in the table
        nvme: use an xarray to lookup the Commands Supported and Effects log
        nvme: lift the file open code from nvme_ctrl_get_by_path
  4. 27 Sep, 2020 9 commits
  5. 25 Sep, 2020 1 commit
    • Merge branch 'md-next' of... · 163090c1
      Jens Axboe authored
      Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.10/drivers
      
      Pull MD updates from Song.
      
      * 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
        md/raid10: improve discard request for far layout
        md/raid10: improve raid10 discard request
        md/raid10: pull codes that wait for blocked dev into one function
        md/raid10: extend r10bio devs to raid disks
        md: add md_submit_discard_bio() for submitting discard bio
        md: Simplify code with existing definition RESYNC_SECTORS in raid10.c
        md/raid5: reallocate page array after setting new stripe_size
        md/raid5: resize stripe_head when reshape array
        md/raid5: let multiple devices of stripe_head share page
        md/raid6: let async recovery function support different page offset
        md/raid6: let syndrome computor support different page offset
        md/raid5: convert to new xor compution interface
        md/raid5: add new xor function to support different page offset
        md/raid5: make async_copy_data() to support different page offset
        md/raid5: add a new member of offset into r5dev
        md: only calculate blocksize once and use i_blocksize()
  6. 24 Sep, 2020 24 commits