1. 13 Dec, 2018 2 commits
  2. 12 Dec, 2018 2 commits
    • block: deactivate blk_stat timer in wbt_disable_default() · 544fbd16
      Ming Lei authored
      rwb_enabled() can't be changed when there is any inflight IO.
      
      wbt_disable_default() may set rwb->wb_normal to zero while the
      blk_stat timer is still pending, and the timer function will then update
      rwb->wb_normal again.
      
      This patch introduces blk_stat_deactivate() and applies it in
      wbt_disable_default(), fixing the following IO hang triggered when
      running parted and switching the io scheduler:
      
      [  369.937806] INFO: task parted:3645 blocked for more than 120 seconds.
      [  369.938941]       Not tainted 4.20.0-rc6-00284-g906c801e5248 #498
      [  369.939797] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [  369.940768] parted          D    0  3645   3239 0x00000000
      [  369.941500] Call Trace:
      [  369.941874]  ? __schedule+0x6d9/0x74c
      [  369.942392]  ? wbt_done+0x5e/0x5e
      [  369.942864]  ? wbt_cleanup_cb+0x16/0x16
      [  369.943404]  ? wbt_done+0x5e/0x5e
      [  369.943874]  schedule+0x67/0x78
      [  369.944298]  io_schedule+0x12/0x33
      [  369.944771]  rq_qos_wait+0xb5/0x119
      [  369.945193]  ? karma_partition+0x1c2/0x1c2
      [  369.945691]  ? wbt_cleanup_cb+0x16/0x16
      [  369.946151]  wbt_wait+0x85/0xb6
      [  369.946540]  __rq_qos_throttle+0x23/0x2f
      [  369.947014]  blk_mq_make_request+0xe6/0x40a
      [  369.947518]  generic_make_request+0x192/0x2fe
      [  369.948042]  ? submit_bio+0x103/0x11f
      [  369.948486]  ? __radix_tree_lookup+0x35/0xb5
      [  369.949011]  submit_bio+0x103/0x11f
      [  369.949436]  ? blkg_lookup_slowpath+0x25/0x44
      [  369.949962]  submit_bio_wait+0x53/0x7f
      [  369.950469]  blkdev_issue_flush+0x8a/0xae
      [  369.951032]  blkdev_fsync+0x2f/0x3a
      [  369.951502]  do_fsync+0x2e/0x47
      [  369.951887]  __x64_sys_fsync+0x10/0x13
      [  369.952374]  do_syscall_64+0x89/0x149
      [  369.952819]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  369.953492] RIP: 0033:0x7f95a1e729d4
      [  369.953996] Code: Bad RIP value.
      [  369.954456] RSP: 002b:00007ffdb570dd48 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
      [  369.955506] RAX: ffffffffffffffda RBX: 000055c2139c6be0 RCX: 00007f95a1e729d4
      [  369.956389] RDX: 0000000000000001 RSI: 0000000000001261 RDI: 0000000000000004
      [  369.957325] RBP: 0000000000000002 R08: 0000000000000000 R09: 000055c2139c6ce0
      [  369.958199] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c2139c0380
      [  369.959143] R13: 0000000000000004 R14: 0000000000000100 R15: 0000000000000008
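      The race described above can be sketched as a single-threaded toy model (hypothetical names and a boolean standing in for the real blk_stat timer; the actual fix cancels the timer via blk_stat_deactivate()): disabling writeback throttling must also cancel the pending stat callback, or the callback re-arms the limit that was just cleared.

      ```c
      /* Toy model of the wbt_disable_default() race: a still-pending stat
       * callback overwrites rwb->wb_normal after it was zeroed, unless
       * disabling also deactivates the callback. Hypothetical names;
       * single-threaded simulation, no real timers. */
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct rq_wb {
          unsigned int wb_normal;  /* 0 means throttling disabled */
          bool timer_pending;      /* stands in for the blk_stat timer */
      };

      /* The stat callback recomputes limits, re-enabling throttling. */
      static void stat_timer_fn(struct rq_wb *rwb)
      {
          if (rwb->timer_pending) {
              rwb->wb_normal = 8; /* some recomputed limit */
              rwb->timer_pending = false;
          }
      }

      /* Buggy disable: clears the limit but leaves the timer pending. */
      static void wbt_disable_buggy(struct rq_wb *rwb)
      {
          rwb->wb_normal = 0;
      }

      /* Fixed disable: deactivates the pending timer first. */
      static void wbt_disable_fixed(struct rq_wb *rwb)
      {
          rwb->timer_pending = false; /* models blk_stat_deactivate() */
          rwb->wb_normal = 0;
      }

      int main(void)
      {
          struct rq_wb a = { 8, true }, b = { 8, true };

          wbt_disable_buggy(&a);
          stat_timer_fn(&a);          /* fires after the disable */
          assert(a.wb_normal != 0);   /* disable was undone: the bug */

          wbt_disable_fixed(&b);
          stat_timer_fn(&b);          /* cancelled, does nothing */
          assert(b.wb_normal == 0);   /* stays disabled */
          printf("ok\n");
          return 0;
      }
      ```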
      
      Cc: stable@vger.kernel.org
      Cc: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • sbitmap: flush deferred clears for resize and shallow gets · b2dbff1b
      Jens Axboe authored
      We're missing a deferred clear off the shallow get, which can cause
      a hang. Additionally, when we resize the sbitmap, we should also
      flush deferred clears for good measure.
      
      Ensure we have full coverage on batch clears, even for paths where
      we would not be doing deferred clear. This makes it less error
      prone for future additions.
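      The deferred-clear scheme this patch hardens can be sketched as a userspace toy model (hypothetical names; the real code lives in lib/sbitmap.c and uses per-word spinlocks): freed bits accumulate in a separate cleared mask and are only folded back into the allocation word when a get fails, on resize, and, after this fix, on shallow gets too.

      ```c
      /* Toy model of sbitmap deferred clears: freeing a bit only marks it
       * in `cleared`; the mask is flushed into `word` before retrying a
       * failed allocation. Hypothetical names, single word, no locking. */
      #include <assert.h>
      #include <stdio.h>

      struct sb_word {
          unsigned long word;    /* allocated bits */
          unsigned long cleared; /* freed bits, not yet folded back */
      };

      /* Fold the deferred clears back into the allocation word. */
      static void sb_flush_deferred(struct sb_word *w)
      {
          w->word &= ~w->cleared;
          w->cleared = 0;
      }

      /* Allocate the lowest free bit, flushing deferred clears on failure. */
      static int sb_get(struct sb_word *w, int depth)
      {
          for (int pass = 0; pass < 2; pass++) {
              for (int i = 0; i < depth; i++) {
                  if (!(w->word & (1UL << i))) {
                      w->word |= 1UL << i;
                      return i;
                  }
              }
              sb_flush_deferred(w); /* retry once after flushing */
          }
          return -1;
      }

      /* Free a bit by deferring the clear. */
      static void sb_put(struct sb_word *w, int bit)
      {
          w->cleared |= 1UL << bit;
      }

      int main(void)
      {
          struct sb_word w = { 0, 0 };
          int depth = 4;

          for (int i = 0; i < depth; i++)
              assert(sb_get(&w, depth) == i);
          assert(sb_get(&w, depth) == -1); /* word is full */

          sb_put(&w, 2);                   /* clear is deferred... */
          assert(sb_get(&w, depth) == 2);  /* ...and flushed on the retry */
          printf("ok\n");
          return 0;
      }
      ```

      A path that skips the flush (as the shallow get did) can see the word as full even though bits are free in `cleared`, which is how the hang arises.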
      Reported-by: Bart Van Assche <bvanassche@acm.org>
      Tested-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 11 Dec, 2018 24 commits
  4. 10 Dec, 2018 11 commits
    • mtip32xx: use BLK_STS_DEV_RESOURCE for device resources · 4ba09f69
      Jens Axboe authored
      For cases where we can only fail with IO in-flight, we should be using
      BLK_STS_DEV_RESOURCE instead of BLK_STS_RESOURCE. The latter refers to
      system wide resource constraints.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • mtip32xx: avoid using semaphores · e4025e46
      Arnd Bergmann authored
      The "cmd_slot_unal" semaphore is never used in a blocking way,
      but only as an atomic counter. Change the code to use
      atomic_dec_if_positive(), which is a better API for this.
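      The counting pattern can be sketched in userspace with C11 atomics (this mirrors the kernel's atomic_dec_if_positive() return convention of old minus one, but is not the kernel API itself): take a slot only if the counter stays non-negative.

      ```c
      /* Userspace sketch of slot accounting with an atomic counter in the
       * spirit of atomic_dec_if_positive(); C11 atomics, not the kernel API. */
      #include <assert.h>
      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_int slots;

      /* Decrement iff the result stays >= 0. Returns old - 1, matching
       * the kernel convention: a negative return means no slot was taken. */
      static int dec_if_positive(atomic_int *v)
      {
          int old = atomic_load(v);
          while (old > 0) {
              if (atomic_compare_exchange_weak(v, &old, old - 1))
                  return old - 1;
              /* `old` was reloaded by the failed CAS; retry */
          }
          return old - 1;
      }

      int main(void)
      {
          atomic_store(&slots, 2);

          assert(dec_if_positive(&slots) == 1);  /* took a slot */
          assert(dec_if_positive(&slots) == 0);  /* took the last one */
          assert(dec_if_positive(&slots) == -1); /* refused: would go negative */
          assert(atomic_load(&slots) == 0);      /* counter never went below 0 */

          atomic_fetch_add(&slots, 1);           /* release a slot */
          assert(dec_if_positive(&slots) == 0);
          printf("ok\n");
          return 0;
      }
      ```

      Unlike a semaphore down(), a failed decrement never sleeps, which is the point of the conversion.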
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • dm: remove the pending IO accounting · 6f757231
      Mikulas Patocka authored
      Remove the "pending" atomic counters, which duplicate block-core's
      in_flight counters, and update md_in_flight() to look at the percpu
      in_flight counters.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: return just one value from part_in_flight · e016b782
      Mikulas Patocka authored
      The previous patches deleted all the code that needed the second value
      returned from part_in_flight - now the kernel only uses the first value.
      
      Consequently, part_in_flight (and blk_mq_in_flight) may be changed so that
      they only return one value.
      
      This patch just refactors the code; there is no functional change.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: switch to per-cpu in-flight counters · 1226b8dd
      Mikulas Patocka authored
      Now that part_round_stats is gone, we can switch to per-cpu in-flight
      counters.
      
      We use the local-atomic type local_t, so that if part_inc_in_flight or
      part_dec_in_flight is reentrantly called from an interrupt, the value will
      be correct.
      
      The other counters could be corrupted by a reentrant interrupt, but that
      corruption only results in slight counter skew - the in_flight counter
      must be exact, so it needs local_t.
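      The per-cpu scheme can be sketched as a userspace toy model (hypothetical names; a plain array stands in for the kernel's percpu local_t slots): each CPU touches only its own slot, and readers sum all slots. Because an I/O may start on one CPU and complete on another, individual slots can go negative; only the sum is meaningful.

      ```c
      /* Toy model of per-cpu in-flight counting: writers update only
       * their own CPU's slot, readers sum every slot. Hypothetical names;
       * the kernel keeps these as local_t inside percpu disk stats. */
      #include <assert.h>
      #include <stdio.h>

      #define NR_CPUS 4

      static long in_flight[NR_CPUS]; /* stands in for percpu local_t */

      static void part_inc_in_flight(int cpu) { in_flight[cpu]++; }
      static void part_dec_in_flight(int cpu) { in_flight[cpu]--; }

      /* Read side: sum all CPUs. Too costly to do every jiffy, which is
       * why part_round_stats had to go before this conversion. */
      static long part_in_flight(void)
      {
          long sum = 0;
          for (int cpu = 0; cpu < NR_CPUS; cpu++)
              sum += in_flight[cpu];
          return sum;
      }

      int main(void)
      {
          part_inc_in_flight(0);  /* I/O issued on CPU 0 */
          part_inc_in_flight(1);  /* I/O issued on CPU 1 */
          part_dec_in_flight(3);  /* first I/O completes on CPU 3 */

          assert(in_flight[3] == -1);     /* a single slot can be negative... */
          assert(part_in_flight() == 1);  /* ...but the sum is exact */
          printf("ok\n");
          return 0;
      }
      ```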
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: delete part_round_stats and switch to less precise counting · 5b18b5a7
      Mikulas Patocka authored
      We want to convert to per-cpu in_flight counters.
      
      The function part_round_stats needs the in_flight counter every jiffy;
      it would be too costly to sum all the percpu variables every jiffy, so it
      must be deleted. part_round_stats is used to calculate two counters -
      time_in_queue and io_ticks.
      
      time_in_queue can be calculated without part_round_stats, by adding the
      duration of the I/O when the I/O ends (the value is almost as exact as the
      previously calculated value, except that time for in-progress I/Os is not
      counted).
      
      io_ticks can be approximated by increasing the value when I/O is started
      or ended and the jiffies value has changed. If the I/Os take less than a
      jiffy, the value is as exact as the previously calculated value. If the
      I/Os take more than a jiffy, io_ticks can drift behind the previously
      calculated value.
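      The io_ticks approximation can be sketched as a toy model (hypothetical names and a bare counter; the kernel's version lives in the block accounting code): charge at most one tick per jiffy in which an I/O starts or ends.

      ```c
      /* Toy model of the io_ticks approximation: bump the counter by one
       * whenever an I/O start or end lands in a jiffy we have not yet
       * charged. Hypothetical names, single-threaded. */
      #include <assert.h>
      #include <stdio.h>

      static unsigned long io_ticks; /* jiffies charged as "disk busy" */
      static unsigned long stamp;    /* last jiffy we charged */

      /* Called on every I/O start and end with the current jiffies value. */
      static void update_io_ticks(unsigned long now)
      {
          if (stamp != now) {
              io_ticks++;  /* at most one tick per jiffy */
              stamp = now;
          }
      }

      int main(void)
      {
          /* Three sub-jiffy I/Os within jiffy 100: charged once, not thrice,
           * so the value matches the old part_round_stats result. */
          update_io_ticks(100);
          update_io_ticks(100);
          update_io_ticks(100);
          assert(io_ticks == 1);

          /* An I/O spanning jiffies 101..110 only touches the counter at
           * start and end, so io_ticks drifts behind the true busy time. */
          update_io_ticks(101);
          update_io_ticks(110);
          assert(io_ticks == 3); /* not 10: the documented imprecision */
          printf("ok\n");
          return 0;
      }
      ```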
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: stop passing 'cpu' to all percpu stats methods · 112f158f
      Mike Snitzer authored
      All of the part_stat_* and related methods are used with preemption
      disabled, so there is no need to pass cpu around to all of them. Just
      call smp_processor_id() as needed.
      Suggested-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • dm rq: leverage blk_mq_queue_busy() to check for outstanding IO · dbd3bbd2
      Mike Snitzer authored
      Now that request-based dm-multipath only supports blk-mq, make use of
      the newly introduced blk_mq_queue_busy() to check for outstanding IO --
      rather than (ab)using the block core's in_flight counters.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • dm: dont rewrite dm_disk(md)->part0.in_flight · 80a787ba
      Mikulas Patocka authored
      generic_start_io_acct and generic_end_io_acct already update the
      in_flight variable using atomic operations, so we don't have to overwrite
      it again.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • Merge tag 'v4.20-rc6' into for-4.21/block · 96f77410
      Jens Axboe authored
      Pull in v4.20-rc6 to resolve the conflict in NVMe, but also to get the
      two corruption fixes. We're going to be overhauling the direct dispatch
      path, and we need to do that on top of the changes we made for that
      in mainline.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • sbitmap: silence bogus lockdep IRQ warning · 58ab5e32
      Jens Axboe authored
      Ming reports that lockdep spews the following trace. What this
      essentially says is that the sbitmap swap_lock was used inconsistently
      in IRQ enabled and disabled context, and that is usually indicative of a
      bug that will cause a deadlock.
      
      For this case, it's a false positive. The swap_lock is used from process
      context only, when we swap the bits in the word and cleared mask. We
      also end up doing that when we are getting a driver tag, from the
      blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
      IRQs disabled. However, this isn't from an actual IRQ; it's still
      process context.
      
      In lieu of a better way to fix this, simply always disable interrupts
      when grabbing the swap_lock if lockdep is enabled.
      
      [  100.967642] ================start test sanity/001================
      [  101.238280] null: module loaded
      [  106.093735]
      [  106.094012] =====================================================
      [  106.094854] WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
      [  106.095759] 4.20.0-rc3_5d2ee712_for-next+ #1 Not tainted
      [  106.096551] -----------------------------------------------------
      [  106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
      [  106.098231] 000000004c43fa71
      (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
      [  106.099431]
      [  106.099431] and this task is already holding:
      [  106.100229] 000000007eec8b2f
      (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
      blk_mq_dispatch_rq_list+0x4c1/0xd7c
      [  106.101630] which would create a new lock dependency:
      [  106.102326]  (&(&hctx->dispatch_wait_lock)->rlock){....} ->
      (&(&sb->map[i].swap_lock)->rlock){+.+.}
      [  106.103553]
      [  106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
      [  106.104580]  (&sbq->ws[i].wait){..-.}
      [  106.104582]
      [  106.104582] ... which became SOFTIRQ-irq-safe at:
      [  106.105751]   _raw_spin_lock_irqsave+0x4b/0x82
      [  106.106284]   __wake_up_common_lock+0x119/0x1b9
      [  106.106825]   sbitmap_queue_wake_up+0x33f/0x383
      [  106.107456]   sbitmap_queue_clear+0x4c/0x9a
      [  106.108046]   __blk_mq_free_request+0x188/0x1d3
      [  106.108581]   blk_mq_free_request+0x23b/0x26b
      [  106.109102]   scsi_end_request+0x345/0x5d7
      [  106.109587]   scsi_io_completion+0x4b5/0x8f0
      [  106.110099]   scsi_finish_command+0x412/0x456
      [  106.110615]   scsi_softirq_done+0x23f/0x29b
      [  106.111115]   blk_done_softirq+0x2a7/0x2e6
      [  106.111608]   __do_softirq+0x360/0x6ad
      [  106.112062]   run_ksoftirqd+0x2f/0x5b
      [  106.112499]   smpboot_thread_fn+0x3a5/0x3db
      [  106.113000]   kthread+0x1d4/0x1e4
      [  106.113457]   ret_from_fork+0x3a/0x50
      [  106.113969]
      [  106.113969] to a SOFTIRQ-irq-unsafe lock:
      [  106.114672]  (&(&sb->map[i].swap_lock)->rlock){+.+.}
      [  106.114674]
      [  106.114674] ... which became SOFTIRQ-irq-unsafe at:
      [  106.116000] ...
      [  106.116003]   _raw_spin_lock+0x33/0x64
      [  106.116676]   sbitmap_get+0xd5/0x22c
      [  106.117134]   __sbitmap_queue_get+0xe8/0x177
      [  106.117731]   __blk_mq_get_tag+0x1e6/0x22d
      [  106.118286]   blk_mq_get_tag+0x1db/0x6e4
      [  106.118756]   blk_mq_get_driver_tag+0x161/0x258
      [  106.119383]   blk_mq_dispatch_rq_list+0x28e/0xd7c
      [  106.120043]   blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.120607]   blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.121234]   __blk_mq_run_hw_queue+0x137/0x17e
      [  106.121781]   __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.122366]   blk_mq_run_hw_queue+0x151/0x187
      [  106.122887]   blk_mq_sched_insert_requests+0x13f/0x175
      [  106.123492]   blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.124042]   blk_flush_plug_list+0x392/0x3d7
      [  106.124557]   blk_finish_plug+0x37/0x4f
      [  106.125019]   read_pages+0x3ef/0x430
      [  106.125446]   __do_page_cache_readahead+0x18e/0x2fc
      [  106.126027]   force_page_cache_readahead+0x121/0x133
      [  106.126621]   page_cache_sync_readahead+0x35f/0x3bb
      [  106.127229]   generic_file_buffered_read+0x410/0x1860
      [  106.127932]   __vfs_read+0x319/0x38f
      [  106.128415]   vfs_read+0xd2/0x19a
      [  106.128817]   ksys_read+0xb9/0x135
      [  106.129225]   do_syscall_64+0x140/0x385
      [  106.129684]   entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.130292]
      [  106.130292] other info that might help us debug this:
      [  106.130292]
      [  106.131226] Chain exists of:
      [  106.131226]   &sbq->ws[i].wait -->
      &(&hctx->dispatch_wait_lock)->rlock -->
      &(&sb->map[i].swap_lock)->rlock
      [  106.131226]
      [  106.132865]  Possible interrupt unsafe locking scenario:
      [  106.132865]
      [  106.133659]        CPU0                    CPU1
      [  106.134194]        ----                    ----
      [  106.134733]   lock(&(&sb->map[i].swap_lock)->rlock);
      [  106.135318]                                local_irq_disable();
      [  106.136014]                                lock(&sbq->ws[i].wait);
      [  106.136747]
      lock(&(&hctx->dispatch_wait_lock)->rlock);
      [  106.137742]   <Interrupt>
      [  106.138110]     lock(&sbq->ws[i].wait);
      [  106.138625]
      [  106.138625]  *** DEADLOCK ***
      [  106.138625]
      [  106.139430] 3 locks held by fio/1043:
      [  106.139947]  #0: 0000000076ff0fd9 (rcu_read_lock){....}, at:
      hctx_lock+0x29/0xe8
      [  106.140813]  #1: 000000002feb1016 (&sbq->ws[i].wait){..-.}, at:
      blk_mq_dispatch_rq_list+0x4ad/0xd7c
      [  106.141877]  #2: 000000007eec8b2f
      (&(&hctx->dispatch_wait_lock)->rlock){....}, at:
      blk_mq_dispatch_rq_list+0x4c1/0xd7c
      [  106.143267]
      [  106.143267] the dependencies between SOFTIRQ-irq-safe lock and the
      holding lock:
      [  106.144351]  -> (&sbq->ws[i].wait){..-.} ops: 82 {
      [  106.144926]     IN-SOFTIRQ-W at:
      [  106.145314]                       _raw_spin_lock_irqsave+0x4b/0x82
      [  106.146042]                       __wake_up_common_lock+0x119/0x1b9
      [  106.146785]                       sbitmap_queue_wake_up+0x33f/0x383
      [  106.147567]                       sbitmap_queue_clear+0x4c/0x9a
      [  106.148379]                       __blk_mq_free_request+0x188/0x1d3
      [  106.149148]                       blk_mq_free_request+0x23b/0x26b
      [  106.149864]                       scsi_end_request+0x345/0x5d7
      [  106.150546]                       scsi_io_completion+0x4b5/0x8f0
      [  106.151367]                       scsi_finish_command+0x412/0x456
      [  106.152157]                       scsi_softirq_done+0x23f/0x29b
      [  106.152855]                       blk_done_softirq+0x2a7/0x2e6
      [  106.153537]                       __do_softirq+0x360/0x6ad
      [  106.154280]                       run_ksoftirqd+0x2f/0x5b
      [  106.155020]                       smpboot_thread_fn+0x3a5/0x3db
      [  106.155828]                       kthread+0x1d4/0x1e4
      [  106.156526]                       ret_from_fork+0x3a/0x50
      [  106.157267]     INITIAL USE at:
      [  106.157713]                      _raw_spin_lock_irqsave+0x4b/0x82
      [  106.158542]                      prepare_to_wait_exclusive+0xa8/0x215
      [  106.159421]                      blk_mq_get_tag+0x34f/0x6e4
      [  106.160186]                      blk_mq_get_request+0x48e/0xaef
      [  106.160997]                      blk_mq_make_request+0x27e/0xbd2
      [  106.161828]                      generic_make_request+0x4d1/0x873
      [  106.162661]                      submit_bio+0x20c/0x253
      [  106.163379]                      mpage_bio_submit+0x44/0x4b
      [  106.164142]                      mpage_readpages+0x3c2/0x407
      [  106.164919]                      read_pages+0x13a/0x430
      [  106.165633]                      __do_page_cache_readahead+0x18e/0x2fc
      [  106.166530]                      force_page_cache_readahead+0x121/0x133
      [  106.167439]                      page_cache_sync_readahead+0x35f/0x3bb
      [  106.168337]                      generic_file_buffered_read+0x410/0x1860
      [  106.169255]                      __vfs_read+0x319/0x38f
      [  106.169977]                      vfs_read+0xd2/0x19a
      [  106.170662]                      ksys_read+0xb9/0x135
      [  106.171356]                      do_syscall_64+0x140/0x385
      [  106.172120]                      entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.173051]   }
      [  106.173308]   ... key      at: [<ffffffff85094600>] __key.26481+0x0/0x40
      [  106.174219]   ... acquired at:
      [  106.174646]    _raw_spin_lock+0x33/0x64
      [  106.175183]    blk_mq_dispatch_rq_list+0x4c1/0xd7c
      [  106.175843]    blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.176518]    blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.177262]    __blk_mq_run_hw_queue+0x137/0x17e
      [  106.177900]    __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.178591]    blk_mq_run_hw_queue+0x151/0x187
      [  106.179207]    blk_mq_sched_insert_requests+0x13f/0x175
      [  106.179926]    blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.180571]    blk_flush_plug_list+0x392/0x3d7
      [  106.181187]    blk_finish_plug+0x37/0x4f
      [  106.181737]    __se_sys_io_submit+0x171/0x304
      [  106.182346]    do_syscall_64+0x140/0x385
      [  106.182895]    entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.183607]
      [  106.183830] -> (&(&hctx->dispatch_wait_lock)->rlock){....} ops: 1 {
      [  106.184691]    INITIAL USE at:
      [  106.185119]                    _raw_spin_lock+0x33/0x64
      [  106.185838]                    blk_mq_dispatch_rq_list+0x4c1/0xd7c
      [  106.186697]                    blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.187551]                    blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.188481]                    __blk_mq_run_hw_queue+0x137/0x17e
      [  106.189307]                    __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.190189]                    blk_mq_run_hw_queue+0x151/0x187
      [  106.190989]                    blk_mq_sched_insert_requests+0x13f/0x175
      [  106.191902]                    blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.192739]                    blk_flush_plug_list+0x392/0x3d7
      [  106.193535]                    blk_finish_plug+0x37/0x4f
      [  106.194269]                    __se_sys_io_submit+0x171/0x304
      [  106.195059]                    do_syscall_64+0x140/0x385
      [  106.195794]                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.196705]  }
      [  106.196950]  ... key      at: [<ffffffff84880620>] __key.51231+0x0/0x40
      [  106.197853]  ... acquired at:
      [  106.198270]    lock_acquire+0x280/0x2f3
      [  106.198806]    _raw_spin_lock+0x33/0x64
      [  106.199337]    sbitmap_get+0xd5/0x22c
      [  106.199850]    __sbitmap_queue_get+0xe8/0x177
      [  106.200450]    __blk_mq_get_tag+0x1e6/0x22d
      [  106.201035]    blk_mq_get_tag+0x1db/0x6e4
      [  106.201589]    blk_mq_get_driver_tag+0x161/0x258
      [  106.202237]    blk_mq_dispatch_rq_list+0x5b9/0xd7c
      [  106.202902]    blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.203572]    blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.204316]    __blk_mq_run_hw_queue+0x137/0x17e
      [  106.204956]    __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.205649]    blk_mq_run_hw_queue+0x151/0x187
      [  106.206269]    blk_mq_sched_insert_requests+0x13f/0x175
      [  106.206997]    blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.207644]    blk_flush_plug_list+0x392/0x3d7
      [  106.208264]    blk_finish_plug+0x37/0x4f
      [  106.208814]    __se_sys_io_submit+0x171/0x304
      [  106.209415]    do_syscall_64+0x140/0x385
      [  106.209965]    entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.210684]
      [  106.210904]
      [  106.210904] the dependencies between the lock to be acquired
      [  106.210905]  and SOFTIRQ-irq-unsafe lock:
      [  106.212541] -> (&(&sb->map[i].swap_lock)->rlock){+.+.} ops: 1969 {
      [  106.213393]    HARDIRQ-ON-W at:
      [  106.213840]                     _raw_spin_lock+0x33/0x64
      [  106.214570]                     sbitmap_get+0xd5/0x22c
      [  106.215282]                     __sbitmap_queue_get+0xe8/0x177
      [  106.216086]                     __blk_mq_get_tag+0x1e6/0x22d
      [  106.216876]                     blk_mq_get_tag+0x1db/0x6e4
      [  106.217627]                     blk_mq_get_driver_tag+0x161/0x258
      [  106.218465]                     blk_mq_dispatch_rq_list+0x28e/0xd7c
      [  106.219326]                     blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.220198]                     blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.221138]                     __blk_mq_run_hw_queue+0x137/0x17e
      [  106.221975]                     __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.222874]                     blk_mq_run_hw_queue+0x151/0x187
      [  106.223686]                     blk_mq_sched_insert_requests+0x13f/0x175
      [  106.224597]                     blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.225444]                     blk_flush_plug_list+0x392/0x3d7
      [  106.226255]                     blk_finish_plug+0x37/0x4f
      [  106.227006]                     read_pages+0x3ef/0x430
      [  106.227717]                     __do_page_cache_readahead+0x18e/0x2fc
      [  106.228595]                     force_page_cache_readahead+0x121/0x133
      [  106.229491]                     page_cache_sync_readahead+0x35f/0x3bb
      [  106.230373]                     generic_file_buffered_read+0x410/0x1860
      [  106.231277]                     __vfs_read+0x319/0x38f
      [  106.231986]                     vfs_read+0xd2/0x19a
      [  106.232666]                     ksys_read+0xb9/0x135
      [  106.233350]                     do_syscall_64+0x140/0x385
      [  106.234097]                     entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.235012]    SOFTIRQ-ON-W at:
      [  106.235460]                     _raw_spin_lock+0x33/0x64
      [  106.236195]                     sbitmap_get+0xd5/0x22c
      [  106.236913]                     __sbitmap_queue_get+0xe8/0x177
      [  106.237715]                     __blk_mq_get_tag+0x1e6/0x22d
      [  106.238488]                     blk_mq_get_tag+0x1db/0x6e4
      [  106.239244]                     blk_mq_get_driver_tag+0x161/0x258
      [  106.240079]                     blk_mq_dispatch_rq_list+0x28e/0xd7c
      [  106.240937]                     blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.241806]                     blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.242751]                     __blk_mq_run_hw_queue+0x137/0x17e
      [  106.243579]                     __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.244469]                     blk_mq_run_hw_queue+0x151/0x187
      [  106.245277]                     blk_mq_sched_insert_requests+0x13f/0x175
      [  106.246191]                     blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.247044]                     blk_flush_plug_list+0x392/0x3d7
      [  106.247859]                     blk_finish_plug+0x37/0x4f
      [  106.248749]                     read_pages+0x3ef/0x430
      [  106.249463]                     __do_page_cache_readahead+0x18e/0x2fc
      [  106.250357]                     force_page_cache_readahead+0x121/0x133
      [  106.251263]                     page_cache_sync_readahead+0x35f/0x3bb
      [  106.252157]                     generic_file_buffered_read+0x410/0x1860
      [  106.253084]                     __vfs_read+0x319/0x38f
      [  106.253808]                     vfs_read+0xd2/0x19a
      [  106.254488]                     ksys_read+0xb9/0x135
      [  106.255186]                     do_syscall_64+0x140/0x385
      [  106.255943]                     entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.256867]    INITIAL USE at:
      [  106.257300]                    _raw_spin_lock+0x33/0x64
      [  106.258033]                    sbitmap_get+0xd5/0x22c
      [  106.258747]                    __sbitmap_queue_get+0xe8/0x177
      [  106.259542]                    __blk_mq_get_tag+0x1e6/0x22d
      [  106.260320]                    blk_mq_get_tag+0x1db/0x6e4
      [  106.261072]                    blk_mq_get_driver_tag+0x161/0x258
      [  106.261902]                    blk_mq_dispatch_rq_list+0x28e/0xd7c
      [  106.262762]                    blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.263626]                    blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.264571]                    __blk_mq_run_hw_queue+0x137/0x17e
      [  106.265409]                    __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.266302]                    blk_mq_run_hw_queue+0x151/0x187
      [  106.267111]                    blk_mq_sched_insert_requests+0x13f/0x175
      [  106.268028]                    blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.268878]                    blk_flush_plug_list+0x392/0x3d7
      [  106.269694]                    blk_finish_plug+0x37/0x4f
      [  106.270432]                    read_pages+0x3ef/0x430
      [  106.271139]                    __do_page_cache_readahead+0x18e/0x2fc
      [  106.272040]                    force_page_cache_readahead+0x121/0x133
      [  106.272932]                    page_cache_sync_readahead+0x35f/0x3bb
      [  106.273811]                    generic_file_buffered_read+0x410/0x1860
      [  106.274709]                    __vfs_read+0x319/0x38f
      [  106.275407]                    vfs_read+0xd2/0x19a
      [  106.276074]                    ksys_read+0xb9/0x135
      [  106.276764]                    do_syscall_64+0x140/0x385
      [  106.277500]                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [  106.278417]  }
      [  106.278676]  ... key      at: [<ffffffff85094640>] __key.26212+0x0/0x40
      [  106.279586]  ... acquired at:
      [  106.280026]    lock_acquire+0x280/0x2f3
      [  106.280559]    _raw_spin_lock+0x33/0x64
      [  106.281101]    sbitmap_get+0xd5/0x22c
      [  106.281610]    __sbitmap_queue_get+0xe8/0x177
      [  106.282221]    __blk_mq_get_tag+0x1e6/0x22d
      [  106.282809]    blk_mq_get_tag+0x1db/0x6e4
      [  106.283368]    blk_mq_get_driver_tag+0x161/0x258
      [  106.284018]    blk_mq_dispatch_rq_list+0x5b9/0xd7c
      [  106.284685]    blk_mq_do_dispatch_sched+0x23a/0x287
      [  106.285371]    blk_mq_sched_dispatch_requests+0x379/0x3fc
      [  106.286135]    __blk_mq_run_hw_queue+0x137/0x17e
      [  106.286806]    __blk_mq_delay_run_hw_queue+0x80/0x25f
      [  106.287515]    blk_mq_run_hw_queue+0x151/0x187
      [  106.288149]    blk_mq_sched_insert_requests+0x13f/0x175
      [  106.289041]    blk_mq_flush_plug_list+0x7d6/0x81b
      [  106.289912]    blk_flush_plug_list+0x392/0x3d7
      [  106.290590]    blk_finish_plug+0x37/0x4f
      [  106.291238]    __se_sys_io_submit+0x171/0x304
      [  106.291864]    do_syscall_64+0x140/0x385
      [  106.292534]    entry_SYSCALL_64_after_hwframe+0x49/0xbe
      Reported-by: Ming Lei <ming.lei@redhat.com>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 09 Dec, 2018 1 commit