1. 24 Jul, 2020 1 commit
  2. 23 Jul, 2020 1 commit
  3. 22 Jul, 2020 8 commits
    • Merge branch 'md-next' of... · ef67744e
      Jens Axboe authored
      Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.9/drivers
      
      Pull MD for 5.9 from Song.
      
      * 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
        md/raid10: avoid deadlock on recovery.
        raid: md_p.h: drop duplicated word in a comment
        md-cluster: fix rmmod issue when md_cluster convert bitmap to none
        md-cluster: fix safemode_delay value when converting to clustered bitmap
        md/raid5: support config stripe_size by sysfs entry
        md/raid5: set default stripe_size as 4096
        md/raid456: convert macro STRIPE_* to RAID5_STRIPE_*
        raid5: remove the meaningless check in raid5_make_request
        raid5: put the comment of clear_batch_ready to the right place
        raid5: call clear_batch_ready before set STRIPE_ACTIVE
        md: raid10: Fix compilation warning
        md: raid5: Fix compilation warning
        md: raid5-cache: Remove set but unused variable
        md: Fix compilation warning
      ef67744e
    • md/raid10: avoid deadlock on recovery. · fe630de0
      Vitaly Mayatskikh authored
      When a disk failure happens and the array has a spare drive, the resync
      thread kicks in and starts to refill the spare. However, it may get blocked
      by a retry thread that resubmits failed IO to a mirror and can itself get
      blocked on a barrier raised by the resync thread.
      Acked-by: Nigel Croxon <ncroxon@redhat.com>
      Signed-off-by: Vitaly Mayatskikh <vmayatskikh@digitalocean.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      fe630de0
    • raid: md_p.h: drop duplicated word in a comment · c333f949
      Randy Dunlap authored
      Drop the doubled word "the" in a comment.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Song Liu <song@kernel.org>
      Cc: linux-raid@vger.kernel.org
      Signed-off-by: Song Liu <songliubraving@fb.com>
      c333f949
    • md-cluster: fix rmmod issue when md_cluster convert bitmap to none · edee9dfe
      Zhao Heming authored
      update_array_info misses calling module_put when removing the cluster bitmap.
      
      Steps to reproduce:
      ```
      node1 # mdadm -C /dev/md0 -b clustered -e 1.2 -n 2 -l mirror /dev/sda
      /dev/sdb
      mdadm: array /dev/md0 started.
      node1 # lsmod | egrep "dlm|md_|raid1"
      md_cluster             28672  1
      dlm                   212992  14 md_cluster
      configfs               57344  2 dlm
      raid1                  53248  1
      md_mod                176128  2 raid1,md_cluster
      node1 # mdadm -G /dev/md0 -b none
      node1 # lsmod | egrep "dlm|md_|raid1"
      md_cluster             28672  1 <== should be zero
      dlm                   212992  9 md_cluster
      configfs               57344  2 dlm
      raid1                  53248  1
      md_mod                176128  2 raid1,md_cluster
      node1 # mdadm -G /dev/md0 -b clustered
      node1 # lsmod | egrep "dlm|md_|raid1"
      md_cluster             28672  2 <== increase
      dlm                   212992  14 md_cluster
      configfs               57344  2 dlm
      raid1                  53248  1
      md_mod                176128  2 raid1,md_cluster
      node1 # mdadm -G /dev/md0 -b none
      node1 # mdadm -G /dev/md0 -b clustered
      node1 # lsmod | egrep "dlm|md_|raid1"
      md_cluster             28672  3 <== increase
      dlm                   212992  14 md_cluster
      configfs               57344  2 dlm
      raid1                  53248  1
      md_mod                176128  2 raid1,md_cluster
      ```
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Zhao Heming <heming.zhao@suse.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      edee9dfe
    • md-cluster: fix safemode_delay value when converting to clustered bitmap · 7c9d5c54
      Zhao Heming authored
      When the array converts to a clustered bitmap, safe_mode_delay is not
      cleared, and vice versa: /sys/block/mdX/md/safe_mode_delay keeps its
      original value after the bitmap type is changed. In safe_delay_store(),
      the code forbids setting mddev->safemode_delay when the array is
      clustered, so in a cluster-md environment the expected safemode_delay
      value is 0.
      
      Reproducible steps:
      ```
      node1 # mdadm --zero-superblock /dev/sd{b,c,d}
      node1 # mdadm -C /dev/md0 -b internal -e 1.2 -n 2 -l mirror /dev/sdb /dev/sdc
      node1 # cat /sys/block/md0/md/safe_mode_delay
      0.204
      node1 # mdadm -G /dev/md0 -b none
      node1 # mdadm --grow /dev/md0 --bitmap=clustered
      node1 # cat /sys/block/md0/md/safe_mode_delay
      0.204  <== doesn't change, should ZERO for cluster-md
      
      node1 # mdadm --zero-superblock /dev/sd{b,c,d}
      node1 # mdadm -C /dev/md0 -b clustered -e 1.2 -n 2 -l mirror /dev/sdb /dev/sdc
      node1 # cat /sys/block/md0/md/safe_mode_delay
      0.000
      node1 # mdadm -G /dev/md0 -b none
      node1 # cat /sys/block/md0/md/safe_mode_delay
      0.000  <== doesn't change, should default value
      ```
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Zhao Heming <heming.zhao@suse.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      7c9d5c54
    • md/raid5: support config stripe_size by sysfs entry · 3b5408b9
      Yufen Yu authored
      Add a new 'stripe_size' sysfs entry to set and show stripe_size.
      stripe_size must not be bigger than PAGE_SIZE and must be a multiple
      of 4096. We can adjust stripe_size by writing a value into the sysfs
      entry, e.g. to set stripe_size to 16KB:
      
                echo 16384 > /sys/block/md1/md/stripe_size
      
      Show current stripe_size value:
      
                cat /sys/block/md1/md/stripe_size
      
      When PAGE_SIZE is equal to 4096, 'stripe_size' can only be read.
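      A minimal sketch of guarding a write against the two constraints above
      (the device name md1 and the value 16384 are only examples):
      ```
      # Only write stripe_size if it is a multiple of 4096 and no bigger
      # than the system PAGE_SIZE.
      md=md1
      want=16384
      page=$(getconf PAGE_SIZE)
      if [ $((want % 4096)) -eq 0 ] && [ "$want" -le "$page" ]; then
              echo "$want" > /sys/block/$md/md/stripe_size
      else
              echo "refusing $want: must be a multiple of 4096 and <= $page" >&2
      fi
      cat /sys/block/$md/md/stripe_size
      ```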
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      3b5408b9
    • md/raid5: set default stripe_size as 4096 · e2368582
      Yufen Yu authored
      In RAID5, if the issued bio size is bigger than stripe_size, it is
      split into units of stripe_size and processed one by one. Even for
      sizes smaller than stripe_size, RAID5 still requests at least
      stripe_size of data from disk.
      
      Currently, stripe_size is equal to the value of PAGE_SIZE. Since
      filesystems usually issue bios in units of 4KB, there is no problem
      when PAGE_SIZE is 4KB. But with a 64KB PAGE_SIZE, a bio from the
      filesystem requests 4KB of data while RAID5 issues IO of at least
      stripe_size (64KB) each time. That wastes disk bandwidth and xor
      computation.
      
      To avoid this waste, we want to make stripe_size configurable. This
      patch just sets the default stripe_size to 4096. Users can also set a
      value bigger than 4KB for special requirements, such as when the
      issued IO size is known to be more than 4KB.
      
      To evaluate the new feature, we create a raid5 device '/dev/md5' with
      4 SSD disks and test it on an arm64 machine with 64KB PAGE_SIZE.
      
      1) We format /dev/md5 with mkfs.ext4 and mount the ext4 filesystem with
       the default configuration on /mnt. Then we test it with dbench using
       the command: dbench -D /mnt -t 1000 10. The results are:
      
       'stripe_size = 64KB'
      
        Operation      Count    AvgLat    MaxLat
        ----------------------------------------
        NTCreateX    9805011     0.021    64.728
        Close        7202525     0.001     0.120
        Rename        415213     0.051    44.681
        Unlink       1980066     0.079    93.147
        Deltree          240     1.793     6.516
        Mkdir            120     0.004     0.007
        Qpathinfo    8887512     0.007    37.114
        Qfileinfo    1557262     0.001     0.030
        Qfsinfo      1629582     0.012     0.152
        Sfileinfo     798756     0.040    57.641
        Find         3436004     0.019    57.782
        WriteX       4887239     0.021    57.638
        ReadX        15370483     0.005    37.818
        LockX          31934     0.003     0.022
        UnlockX        31933     0.001     0.021
        Flush         687205    13.302   530.088
      
       Throughput 307.799 MB/sec  10 clients  10 procs  max_latency=530.091 ms
       -------------------------------------------------------
      
       'stripe_size = 4KB'
      
        Operation      Count    AvgLat    MaxLat
        ----------------------------------------
        NTCreateX    11999166     0.021    36.380
        Close        8814128     0.001     0.122
        Rename        508113     0.051    29.169
        Unlink       2423242     0.070    38.141
        Deltree          300     1.885     7.155
        Mkdir            150     0.004     0.006
        Qpathinfo    10875921     0.007    35.485
        Qfileinfo    1905837     0.001     0.032
        Qfsinfo      1994304     0.012     0.125
        Sfileinfo     977450     0.029    26.489
        Find         4204952     0.019     9.361
        WriteX       5981890     0.019    27.804
        ReadX        18809742     0.004    33.491
        LockX          39074     0.003     0.025
        UnlockX        39074     0.001     0.014
        Flush         841022    10.712   458.848
      
       Throughput 376.777 MB/sec  10 clients  10 procs  max_latency=458.852 ms
       -------------------------------------------------------
      
       This shows that setting stripe_size to 4KB gives higher throughput
       (376.777 vs. 307.799 MB/sec) and lower latency than setting it to 64KB.
      
       2) We also evaluate IO throughput for /dev/md5 with fio using the
        following configs:
      
       [4KB randwrite]
       direct=1
       numjobs=2
       iodepth=64
       ioengine=libaio
       filename=/dev/md5
       bs=4KB
       rw=randwrite
      
       [64KB write]
       direct=1
       numjobs=2
       iodepth=64
       ioengine=libaio
       filename=/dev/md5
       bs=1MB
       rw=write
      
       The results are as follows:
      
                      | stripe_size(64KB) | stripe_size(4KB)
        --------------+-------------------+------------------
        4KB randwrite |     15MB/s        |      100MB/s
        1MB write     |   1000MB/s        |      700MB/s
      
       The results show that when the issued IO is large (the 1MB write
       case), 64KB stripe_size gives much higher throughput, while for 4KB
       randwrite, where the IO issued to the device is smaller, 4KB
       stripe_size performs better.
      
      Normally, the default value (4096) gives relatively good performance.
      But if each issued IO is bigger than 4096, setting a value larger than
      4096 may give better performance.
      
      Here, we just set the default stripe_size to 4096; support for setting
      a different stripe_size through a sysfs interface is added in the
      following patch.
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      e2368582
    • md/raid456: convert macro STRIPE_* to RAID5_STRIPE_* · c911c46c
      Yufen Yu authored
      Convert macro STRIPE_SIZE, STRIPE_SECTORS and STRIPE_SHIFT to
      RAID5_STRIPE_SIZE(), RAID5_STRIPE_SECTORS() and RAID5_STRIPE_SHIFT().
      
      This patch prepares for the upcoming adjustable stripe_size.
      It does not change any existing functionality.
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      c911c46c
  4. 16 Jul, 2020 7 commits
    • raid5: remove the meaningless check in raid5_make_request · 1684e975
      Guoqing Jiang authored
      We can't guarantee that the batched stripe is set with STRIPE_HANDLE,
      since many functions can set the flag, such as sync_request,
      ops_complete_* and end_{read,write}_request, etc.
      
      Also, clear_batch_ready called in handle_stripe ensures the batched list
      can't continue to be handled by handle_stripe.
      Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      1684e975
    • raid5: put the comment of clear_batch_ready to the right place · cb9902db
      Guoqing Jiang authored
      To help people understand the function, move the comment to the
      right place.
      Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      cb9902db
    • raid5: call clear_batch_ready before set STRIPE_ACTIVE · a377a472
      Guoqing Jiang authored
      We tried to put only the head sh of a batch list onto handle_list so
      that handle_stripe doesn't handle the other members of the batch list.
      However, we still got the call trace in break_stripe_batch_list.
      
      [593764.644269] stripe state: 2003
      kernel: [593764.644299] ------------[ cut here ]------------
      kernel: [593764.644308] WARNING: CPU: 12 PID: 856 at drivers/md/raid5.c:4625 break_stripe_batch_list+0x203/0x240 [raid456]
      [...]
      kernel: [593764.644363] Call Trace:
      kernel: [593764.644370]  handle_stripe+0x907/0x20c0 [raid456]
      kernel: [593764.644376]  ? __wake_up_common_lock+0x89/0xc0
      kernel: [593764.644379]  handle_active_stripes.isra.57+0x35f/0x570 [raid456]
      kernel: [593764.644382]  ? raid5_wakeup_stripe_thread+0x96/0x1f0 [raid456]
      kernel: [593764.644385]  raid5d+0x480/0x6a0 [raid456]
      kernel: [593764.644390]  ? md_thread+0x11f/0x160
      kernel: [593764.644392]  md_thread+0x11f/0x160
      kernel: [593764.644394]  ? wait_woken+0x80/0x80
      kernel: [593764.644396]  kthread+0xfc/0x130
      kernel: [593764.644398]  ? find_pers+0x70/0x70
      kernel: [593764.644399]  ? kthread_create_on_node+0x70/0x70
      kernel: [593764.644401]  ret_from_fork+0x1f/0x30
      
      As we can see, the stripe was set with STRIPE_ACTIVE and STRIPE_HANDLE,
      and only handle_stripe could have set those flags and then returned.
      Since the stripe was already in the batch list, we need to return
      earlier, before setting the two flags.
      
      After digging a little into git history, especially commit 3664847d
      ("md/raid5: fix a race condition in stripe batch"), it seems a batched
      stripe can still be handled by handle_stripe, so handle_stripe needs to
      return earlier if clear_batch_ready returns true.
      Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      a377a472
    • md: raid10: Fix compilation warning · 38ffc01f
      Damien Le Moal authored
      Remove the if statement around the call to sysfs_link_rdev() in
      raid10_start_reshape() to avoid the compilation warning:
      
      warning: suggest braces around empty body in an ‘if’ statement
      
      when compiling with W=1.
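      As a quick way to see these extra warnings locally (a sketch, assuming a
      configured kernel source tree):
      ```
      # W=1 enables the additional compiler warnings referred to above;
      # restrict the build to the md drivers to keep the output small.
      make W=1 drivers/md/
      ```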
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      38ffc01f
    • md: raid5: Fix compilation warning · 2aada5b1
      Damien Le Moal authored
      Remove the if statement around the calls to sysfs_link_rdev() to avoid
      the compilation warning "suggest braces around empty body in an ‘if’
      statement" when compiling with W=1.
      
      Also fix function description comments to avoid kdoc format warnings.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      2aada5b1
    • md: raid5-cache: Remove set but unused variable · 52923083
      Damien Le Moal authored
      Remove the variable offset in r5c_tree_index() to avoid a "set but not
      used" compilation warning when compiling with W=1.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      52923083
    • md: Fix compilation warning · 5e3b8a8d
      Damien Le Moal authored
      Remove the if statement around the calls to sysfs_link_rdev() to avoid
      the compilation warnings:
      
      warning: suggest braces around empty body in an ‘if’ statement
      
      when compiling with W=1. For the call to sysfs_create_link() that
      generates the same warning, use the err variable to store the function
      result, which avoids triggering another warning since the function is
      declared 'warn_unused_result'.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      5e3b8a8d
  5. 15 Jul, 2020 8 commits
    • block: add max_active_zones to blk-sysfs · 659bf827
      Niklas Cassel authored
      Add a new max_active_zones definition in the sysfs documentation.
      This definition will be common for all devices utilizing the zoned block
      device support in the kernel.
      
      Export max_active_zones according to this new definition for NVMe Zoned
      Namespace devices, ZAC ATA devices (which are treated as SCSI devices by
      the kernel), and ZBC SCSI devices.
      
      Add the new max_active_zones member to struct request_queue, rather
      than as a queue limit, since this property cannot be split across stacking
      drivers.
      
      For SCSI devices, even though max active zones is not part of the ZBC/ZAC
      spec, export max_active_zones as 0, signifying "no limit".
      Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
      Reviewed-by: Javier González <javier@javigon.com>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      659bf827
    • block: add max_open_zones to blk-sysfs · e15864f8
      Niklas Cassel authored
      Add a new max_open_zones definition in the sysfs documentation.
      This definition will be common for all devices utilizing the zoned block
      device support in the kernel.
      
      Export max open zones according to this new definition for NVMe Zoned
      Namespace devices, ZAC ATA devices (which are treated as SCSI devices by
      the kernel), and ZBC SCSI devices.
      
      Add the new max_open_zones member to struct request_queue, rather
      than as a queue limit, since this property cannot be split across stacking
      drivers.
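      A quick way to inspect this attribute and the max_active_zones attribute
      from the previous entry from userspace (a sketch; nullb0 is only an
      example zoned device, and a value of 0 means "no limit"):
      ```
      # Both limits are exported under the request queue's sysfs directory.
      cat /sys/block/nullb0/queue/max_open_zones
      cat /sys/block/nullb0/queue/max_active_zones
      ```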
      Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
      Reviewed-by: Javier González <javier@javigon.com>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e15864f8
    • Merge branch 'md-next' of... · b1d37e5b
      Jens Axboe authored
      Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.9/drivers
      
      Pull MD fixes from Song.
      
      * 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
        md-cluster: fix wild pointer of unlock_all_bitmaps()
        md/raid5-cache: clear MD_SB_CHANGE_PENDING before flushing stripes
        md: fix deadlock causing by sysfs_notify
        md: improve io stats accounting
        md: raid0/linear: fix dereference before null check on pointer mddev
      b1d37e5b
    • s390/dasd: Use struct_size() helper · 10321aa1
      Gustavo A. R. Silva authored
      Make use of the struct_size() helper instead of an open-coded version
      in order to avoid any potential type mistakes. Also, remove unnecessary
      variable _datasize_.
      
      This code was detected with the help of Coccinelle, and audited and
      fixed manually.
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      10321aa1
    • s390/dasd: fix inability to use DASD with DIAG driver · 9f4aa523
      Stefan Haberland authored
      During initialization of the DASD DIAG driver, a request is issued
      that has a bio structure residing on the stack. With virtually
      mapped kernel stacks this bio address might be in virtual storage,
      which is unsuitable for use with the diag250 call.
      In this case the device cannot be set online using the DIAG
      discipline and fails with -EOPNOTSUPP.
      In the system journal the following error message is presented:
      
      dasd: X.X.XXXX Setting the DASD online with discipline DIAG failed
      with rc=-95
      
      Fix by allocating the bio structure instead of having it on the stack.
      
      Fixes: ce3dc447 ("s390: add support for virtually mapped kernel stacks")
      Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
      Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
      Cc: stable@vger.kernel.org #4.20
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      9f4aa523
    • md-cluster: fix wild pointer of unlock_all_bitmaps() · 60f80d6f
      Zhao Heming authored
      reproduction steps:
      ```
      node1 # mdadm -C /dev/md0 -b clustered -e 1.2 -n 2 -l mirror /dev/sda
      /dev/sdb
      node2 # mdadm -A /dev/md0 /dev/sda /dev/sdb
      node1 # mdadm -G /dev/md0 -b none
      mdadm: failed to remove clustered bitmap.
      node1 # mdadm -S --scan
      ^C  <==== mdadm hung & kernel crash
      ```
      
      kernel stack:
      ```
      [  335.230657] general protection fault: 0000 [#1] SMP NOPTI
      [...]
      [  335.230848] Call Trace:
      [  335.230873]  ? unlock_all_bitmaps+0x5/0x70 [md_cluster]
      [  335.230886]  unlock_all_bitmaps+0x3d/0x70 [md_cluster]
      [  335.230899]  leave+0x10f/0x190 [md_cluster]
      [  335.230932]  ? md_super_wait+0x93/0xa0 [md_mod]
      [  335.230947]  ? leave+0x5/0x190 [md_cluster]
      [  335.230973]  md_cluster_stop+0x1a/0x30 [md_mod]
      [  335.230999]  md_bitmap_free+0x142/0x150 [md_mod]
      [  335.231013]  ? _cond_resched+0x15/0x40
      [  335.231025]  ? mutex_lock+0xe/0x30
      [  335.231056]  __md_stop+0x1c/0xa0 [md_mod]
      [  335.231083]  do_md_stop+0x160/0x580 [md_mod]
      [  335.231119]  ? 0xffffffffc05fb078
      [  335.231148]  md_ioctl+0xa04/0x1930 [md_mod]
      [  335.231165]  ? filename_lookup+0xf2/0x190
      [  335.231179]  blkdev_ioctl+0x93c/0xa10
      [  335.231205]  ? _cond_resched+0x15/0x40
      [  335.231214]  ? __check_object_size+0xd4/0x1a0
      [  335.231224]  block_ioctl+0x39/0x40
      [  335.231243]  do_vfs_ioctl+0xa0/0x680
      [  335.231253]  ksys_ioctl+0x70/0x80
      [  335.231261]  __x64_sys_ioctl+0x16/0x20
      [  335.231271]  do_syscall_64+0x65/0x1f0
      [  335.231278]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      ```
      Signed-off-by: Zhao Heming <heming.zhao@suse.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      60f80d6f
    • md/raid5-cache: clear MD_SB_CHANGE_PENDING before flushing stripes · c9020e64
      Song Liu authored
      In recovery, if we process too much data, raid5-cache may set
      MD_SB_CHANGE_PENDING, which causes spinning in handle_stripe().
      Fix this issue by clearing the bit before flushing data-only
      stripes. This issue was initially discussed in [1].
      
      [1] https://www.spinics.net/lists/raid/msg64409.html
      Signed-off-by: Song Liu <songliubraving@fb.com>
      c9020e64
    • md: fix deadlock causing by sysfs_notify · e1a86dbb
      Junxiao Bi authored
      The following deadlock was captured. The first process is holding
      'kernfs_mutex' and hung on IO. The IO was staged in 'r1conf.pending_bio_list'
      of the raid1 device; this pending bio list would be flushed by the second
      process 'md127_raid1', but that process was hung on 'kernfs_mutex'. Using
      sysfs_notify_dirent_safe() to replace sysfs_notify() fixes it. There were
      other sysfs_notify() calls invoked from the IO path; remove all of them.
      
       PID: 40430  TASK: ffff8ee9c8c65c40  CPU: 29  COMMAND: "probe_file"
        #0 [ffffb87c4df37260] __schedule at ffffffff9a8678ec
        #1 [ffffb87c4df372f8] schedule at ffffffff9a867f06
        #2 [ffffb87c4df37310] io_schedule at ffffffff9a0c73e6
        #3 [ffffb87c4df37328] __dta___xfs_iunpin_wait_3443 at ffffffffc03a4057 [xfs]
        #4 [ffffb87c4df373a0] xfs_iunpin_wait at ffffffffc03a6c79 [xfs]
        #5 [ffffb87c4df373b0] __dta_xfs_reclaim_inode_3357 at ffffffffc039a46c [xfs]
        #6 [ffffb87c4df37400] xfs_reclaim_inodes_ag at ffffffffc039a8b6 [xfs]
        #7 [ffffb87c4df37590] xfs_reclaim_inodes_nr at ffffffffc039bb33 [xfs]
        #8 [ffffb87c4df375b0] xfs_fs_free_cached_objects at ffffffffc03af0e9 [xfs]
        #9 [ffffb87c4df375c0] super_cache_scan at ffffffff9a287ec7
       #10 [ffffb87c4df37618] shrink_slab at ffffffff9a1efd93
       #11 [ffffb87c4df37700] shrink_node at ffffffff9a1f5968
       #12 [ffffb87c4df37788] do_try_to_free_pages at ffffffff9a1f5ea2
       #13 [ffffb87c4df377f0] try_to_free_mem_cgroup_pages at ffffffff9a1f6445
       #14 [ffffb87c4df37880] try_charge at ffffffff9a26cc5f
       #15 [ffffb87c4df37920] memcg_kmem_charge_memcg at ffffffff9a270f6a
       #16 [ffffb87c4df37958] new_slab at ffffffff9a251430
       #17 [ffffb87c4df379c0] ___slab_alloc at ffffffff9a251c85
       #18 [ffffb87c4df37a80] __slab_alloc at ffffffff9a25635d
       #19 [ffffb87c4df37ac0] kmem_cache_alloc at ffffffff9a251f89
       #20 [ffffb87c4df37b00] alloc_inode at ffffffff9a2a2b10
       #21 [ffffb87c4df37b20] iget_locked at ffffffff9a2a4854
       #22 [ffffb87c4df37b60] kernfs_get_inode at ffffffff9a311377
       #23 [ffffb87c4df37b80] kernfs_iop_lookup at ffffffff9a311e2b
       #24 [ffffb87c4df37ba8] lookup_slow at ffffffff9a290118
       #25 [ffffb87c4df37c10] walk_component at ffffffff9a291e83
       #26 [ffffb87c4df37c78] path_lookupat at ffffffff9a293619
       #27 [ffffb87c4df37cd8] filename_lookup at ffffffff9a2953af
       #28 [ffffb87c4df37de8] user_path_at_empty at ffffffff9a295566
       #29 [ffffb87c4df37e10] vfs_statx at ffffffff9a289787
       #30 [ffffb87c4df37e70] SYSC_newlstat at ffffffff9a289d5d
       #31 [ffffb87c4df37f18] sys_newlstat at ffffffff9a28a60e
       #32 [ffffb87c4df37f28] do_syscall_64 at ffffffff9a003949
       #33 [ffffb87c4df37f50] entry_SYSCALL_64_after_hwframe at ffffffff9aa001ad
           RIP: 00007f617a5f2905  RSP: 00007f607334f838  RFLAGS: 00000246
           RAX: ffffffffffffffda  RBX: 00007f6064044b20  RCX: 00007f617a5f2905
           RDX: 00007f6064044b20  RSI: 00007f6064044b20  RDI: 00007f6064005890
           RBP: 00007f6064044aa0   R8: 0000000000000030   R9: 000000000000011c
           R10: 0000000000000013  R11: 0000000000000246  R12: 00007f606417e6d0
           R13: 00007f6064044aa0  R14: 00007f6064044b10  R15: 00000000ffffffff
           ORIG_RAX: 0000000000000006  CS: 0033  SS: 002b
      
       PID: 927    TASK: ffff8f15ac5dbd80  CPU: 42  COMMAND: "md127_raid1"
        #0 [ffffb87c4df07b28] __schedule at ffffffff9a8678ec
        #1 [ffffb87c4df07bc0] schedule at ffffffff9a867f06
        #2 [ffffb87c4df07bd8] schedule_preempt_disabled at ffffffff9a86825e
        #3 [ffffb87c4df07be8] __mutex_lock at ffffffff9a869bcc
        #4 [ffffb87c4df07ca0] __mutex_lock_slowpath at ffffffff9a86a013
        #5 [ffffb87c4df07cb0] mutex_lock at ffffffff9a86a04f
        #6 [ffffb87c4df07cc8] kernfs_find_and_get_ns at ffffffff9a311d83
        #7 [ffffb87c4df07cf0] sysfs_notify at ffffffff9a314b3a
        #8 [ffffb87c4df07d18] md_update_sb at ffffffff9a688696
        #9 [ffffb87c4df07d98] md_update_sb at ffffffff9a6886d5
       #10 [ffffb87c4df07da8] md_check_recovery at ffffffff9a68ad9c
       #11 [ffffb87c4df07dd0] raid1d at ffffffffc01f0375 [raid1]
       #12 [ffffb87c4df07ea0] md_thread at ffffffff9a680348
       #13 [ffffb87c4df07f08] kthread at ffffffff9a0b8005
       #14 [ffffb87c4df07f50] ret_from_fork at ffffffff9aa00344
      Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      e1a86dbb
  6. 14 Jul, 2020 2 commits
    • md: improve io stats accounting · 41d2d848
      Artur Paszkiewicz authored
      Use generic io accounting functions to manage io stats. There was an
      attempt to do this earlier in commit 18c0b223 ("md: use generic io
      stats accounting functions to simplify io stat accounting"), but it did
      not include a call to generic_end_io_acct() and caused issues with
      tracking in-flight IOs, so it was later removed in commit 74672d06
      ("md: fix md io stats accounting broken").
      
      This patch attempts to fix this by using both disk_start_io_acct() and
      disk_end_io_acct(). To make it possible, a struct md_io is allocated for
      every new md bio, which includes the io start_time. A new mempool is
      introduced for this purpose. We override bio->bi_end_io with our own
      callback and call disk_start_io_acct() before passing the bio to
      md_handle_request(). When it completes, we call disk_end_io_acct() and
      the original bi_end_io callback.
      
      This adds correct statistics about in-flight IOs and IO processing time,
      interpreted e.g. in iostat as await, svctm, aqu-sz and %util.
      
      It also fixes a situation where too many IOs were reported if a bio was
      re-submitted to the mddev, because io accounting is now performed only
      on newly arriving bios.
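      With the accounting in place, the usual tools can be pointed at the array
      directly, e.g. (a sketch; /dev/md0 is only an example device):
      ```
      # Extended statistics for the md device itself; await, aqu-sz and %util
      # should now be populated for the array, not only for its member disks.
      iostat -x 1 /dev/md0
      ```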
      Acked-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      41d2d848
    • md: raid0/linear: fix dereference before null check on pointer mddev · 9a5a8597
      Colin Ian King authored
      Pointer mddev is being dereferenced with a test_bit call before mddev
      is null checked; this may cause a null pointer dereference. Fix this
      by moving the null pointer checks so that they sanity check mddev
      before it is dereferenced.
      
      Addresses-Coverity: ("Dereference before null check")
      Fixes: 62f7b198 ("md raid0/linear: Mark array as 'broken' and fail BIOs if a member is gone")
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Reviewed-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      9a5a8597
  7. 11 Jul, 2020 1 commit
    • rsxx: switch from 'pci_free_consistent()' to 'dma_free_coherent()' · 2eaac320
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the Coccinelle script below.
      It has been compile-tested.
      
      This also aligns the code with what is already in use in '/rsxx/dma.c'.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
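      For reference, a semantic patch like the one above can typically be
      applied with spatch (a sketch; the .cocci file name is hypothetical):
      ```
      # Run the rules over the rsxx driver and rewrite the files in place.
      spatch --sp-file pci-dma-compat.cocci --in-place --dir drivers/block/rsxx
      ```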
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      2eaac320
  8. 10 Jul, 2020 1 commit
    • Merge branch 'nvme-5.9' of git://git.infradead.org/nvme into for-5.9/drivers · 80ee071b
      Jens Axboe authored
      Pull NVMe updates from Christoph:
      
      "Below is the current large chunk we have in the nvme tree for 5.9:
      
       - ZNS support (Aravind, Keith, Matias, Niklas)
        - misc cleanups and optimizations
           (Baolin, Chaitanya, David, Dongli, Max, Sagi)"
      
      * 'nvme-5.9' of git://git.infradead.org/nvme: (28 commits)
        nvme: remove ns->disk checks
        nvme-pci: use standard block status symbolic names
        nvme-pci: use the consistent return type of nvme_pci_iod_alloc_size()
        nvme-pci: add a blank line after declarations
        nvme-pci: fix some comments issues
        nvme-pci: remove redundant segment validation
        nvme: document quirked Intel models
        nvme: expose reconnect_delay and ctrl_loss_tmo via sysfs
        nvme: support for zoned namespaces
        nvme: support for multiple Command Sets Supported and Effects log pages
        nvme: implement multiple I/O Command Set support
        null_blk: introduce zone capacity for zoned device
        block: add capacity field to zone descriptors
        nvme: use USEC_PER_SEC instead of magic numbers
        nvmet-tcp: simplify nvmet_process_resp_list
        nvme-tcp: optimize network stack with setting msg flags according to batch size
        nvme-tcp: leverage request plugging
        nvme-tcp: have queue prod/cons send list become a llist
        nvme-fcloop: verify wwnn and wwpn format
        nvmet: use unsigned type for u64
        ...
      80ee071b
  9. 08 Jul, 2020 11 commits