- 08 Feb, 2023 3 commits
-
-
Jens Axboe authored
Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.3/block Pull MD fix from Song: "This commit fixes a rare crash during the takeover process." * 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md: md: account io_acct_set usage with active_io
-
Xiao Ni authored
io_acct_set was enabled for raid0/raid5 I/O accounting. Bios that contain md_io_acct are allocated in the I/O path, and there isn't a good way to monitor whether these bios have all finished and been freed. In the takeover process, io_acct_set (which is used for bios with md_io_acct) needs to be freed. However, if some bios finish after io_acct_set is freed, it can trigger the following panic: [ 6973.767999] RIP: 0010:mempool_free+0x52/0x80 [ 6973.786098] Call Trace: [ 6973.786549] md_end_io_acct+0x31/0x40 [ 6973.787227] blk_update_request+0x224/0x380 [ 6973.787994] blk_mq_end_request+0x1a/0x130 [ 6973.788739] blk_complete_reqs+0x35/0x50 [ 6973.789456] __do_softirq+0xd7/0x2c8 [ 6973.790114] ? sort_range+0x20/0x20 [ 6973.790763] run_ksoftirqd+0x2a/0x40 [ 6973.791400] smpboot_thread_fn+0xb5/0x150 [ 6973.792114] kthread+0x10b/0x130 [ 6973.792724] ? set_kthread_struct+0x50/0x50 [ 6973.793491] ret_from_fork+0x1f/0x40 Fix this by increasing and decreasing active_io for each bio with md_io_acct, so that mddev_suspend() waits until all bios from io_acct_set have finished before io_acct_set is freed. Reported-by: Fine Fan <ffan@redhat.com> Signed-off-by: Xiao Ni <xni@redhat.com> Signed-off-by: Song Liu <song@kernel.org>
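The fix follows md's existing suspend pattern: every bio that carries an md_io_acct takes a reference on active_io and drops it when the bio completes, so mddev_suspend() (which waits for active_io to drain) cannot return while such a bio is still in flight. A minimal sketch of the idea, with illustrative helper names and assuming active_io is an atomic_t drained via mddev->sb_wait:

    /* Illustrative sketch only, not the literal patch */
    static inline void md_io_acct_get(struct mddev *mddev)
    {
            /* one more bio with md_io_acct in flight */
            atomic_inc(&mddev->active_io);
    }

    static inline void md_io_acct_put(struct mddev *mddev)
    {
            /* last such bio finished: let a pending mddev_suspend() proceed,
             * after which io_acct_set can be freed safely */
            if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
                    wake_up(&mddev->sb_wait);
    }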
-
Ming Lei authored
Inside ublk_ctrl_del_dev(), when the device is removed, we wait until the device number is freed while holding the global ublk_ctl_mutex lock. This isn't friendly from the user's viewpoint: 1) if the device is in use, the delete command hangs in ublk_ctrl_del_dev(), and the user can't break out of the wait because wait_event() is used 2) the global lock is held, so no new device can be added and no other device can be removed. Improve the delete handling in the following way, as suggested by Nadav: 1) wait without holding the global lock 2) replace wait_event() with wait_event_interruptible() Reported-by: Nadav Amit <nadav.amit@gmail.com> Suggested-by: Nadav Amit <nadav.amit@gmail.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230207150700.545530-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
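The resulting flow roughly looks like the sketch below; ublk_remove_dev(), ublk_index_freed() and ublk_idr_wq are placeholder names for whatever ublk_drv actually uses to track free device indexes:

    /* Sketch only: helper and waitqueue names are illustrative */
    static int ublk_ctrl_del_dev(int idx)
    {
            int ret;

            ret = mutex_lock_killable(&ublk_ctl_mutex);
            if (ret)
                    return ret;
            ublk_remove_dev(idx);                   /* kick off the removal */
            mutex_unlock(&ublk_ctl_mutex);          /* 1) don't wait under the global lock */

            /* 2) interruptible wait, so the user can break out of a hang */
            return wait_event_interruptible(ublk_idr_wq, ublk_index_freed(idx));
    }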
-
- 07 Feb, 2023 5 commits
-
-
Yu Kuai authored
After commit dfd6200a ("blk-cgroup: support to track if policy is online"), there is no need to do this again in bfq. However, 'pd->online' is not protected by 'bfqd->lock', so in order to make sure bfq won't see 'pd->online' still set after bfq_pd_offline(), clear it before bfq_pd_offline() is called. This is fine because other policies don't use 'pd->online' and bfq_pd_offline() moves active bfqqs to the root cgroup anyway. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230202134913.2364549-1-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
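In blk-cgroup terms the ordering change amounts to something like the following fragment in the offline path (a sketch only; the surrounding loop and exact field layout are omitted and may differ from the real code):

    /* clear pd->online before the policy callback runs, so bfq_pd_offline()
     * can never observe online == true afterwards */
    if (blkg->pd[i] && blkg->pd[i]->online) {
            blkg->pd[i]->online = false;
            if (pol->pd_offline_fn)
                    pol->pd_offline_fn(blkg->pd[i]);
    }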
-
Jens Axboe authored
Merge tag 'nvme-6.3-2023-02-07' of git://git.infradead.org/nvme Pull NVMe updates from Christoph: "nvme updates for Linux 6.3 - small improvements to the logging functionality (Amit Engel) - authentication cleanups (Hannes Reinecke) - cleanup and optimize the DMA mapping code in the PCIe driver (Keith Busch) - work around the command effects for Format NVM (Keith Busch) - misc cleanups (Keith Busch, Christoph Hellwig)" * tag 'nvme-6.3-2023-02-07' of git://git.infradead.org/nvme: nvme: mask CSE effects for security receive nvme: always initialize known command effects nvmet: for nvme admin set_features cmd, call nvmet_check_data_len_lte() nvme-tcp: add additional info for nvme_tcp_timeout log nvme: add nvme_opcode_str function for all nvme cmd types nvme: remove nvme_execute_passthru_rq nvme-pci: place descriptor addresses in iod nvme-pci: use mapped entries for sgl decision nvme-pci: remove SGL segment descriptors nvme-auth: don't use NVMe status codes nvme-fabrics: clarify AUTHREQ result handling
-
Ziyang Zhang authored
queuedata is not referenced in ublk_drv and we can use driver_data instead. Pass NULL to blk_mq_alloc_disk() as queuedata while allocating ublk's gendisk. Signed-off-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230207070839.370817-4-ZiyangZhang@linux.alibaba.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ziyang Zhang authored
Like FLUSH and DISCARD, WRITE_ZEROES does not return a byte count, so we can end it directly. Add the missing comment for it in ublk_complete_rq(). Signed-off-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230207070839.370817-3-ZiyangZhang@linux.alibaba.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
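A sketch of the relevant branch in ublk_complete_rq(), now with WRITE_ZEROES covered by the comment (the exact code in the driver may differ; res stands in for the computed completion status):

    /*
     * FLUSH, DISCARD and WRITE_ZEROES carry no data payload and report no
     * byte count, so they can be completed directly.
     */
    if (req_op(req) == REQ_OP_FLUSH ||
        req_op(req) == REQ_OP_DISCARD ||
        req_op(req) == REQ_OP_WRITE_ZEROES) {
            blk_mq_end_request(req, res);
            return;
    }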
-
Ziyang Zhang authored
bio_has_data() allows a NULL bio so the NULL check in ublk_rq_has_data() is unnecessary. Signed-off-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230207070839.370817-2-ZiyangZhang@linux.alibaba.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
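With the redundant check dropped, the helper can be as simple as this sketch:

    static inline bool ublk_rq_has_data(const struct request *rq)
    {
            /* bio_has_data() already returns false for a NULL bio */
            return bio_has_data(rq->bio);
    }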
-
- 06 Feb, 2023 17 commits
-
-
Greg Kroah-Hartman authored
When calling debugfs_lookup() the result must have dput() called on it, otherwise the memory will leak over time. To make things simpler, just call debugfs_lookup_and_remove() instead which handles all of the logic at once. Cc: Jens Axboe <axboe@kernel.dk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: linux-block@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-trace-kernel@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Link: https://lore.kernel.org/r/20230202141956.2299521-1-gregkh@linuxfoundation.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
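The conversion is mechanical; name and parent below stand in for whatever each call site passes:

    /* Before: the dentry returned by debugfs_lookup() keeps an extra
     * reference that is never dropped, so it leaks over time */
    debugfs_remove(debugfs_lookup(name, parent));

    /* After: look up, remove and drop the reference in one call */
    debugfs_lookup_and_remove(name, parent);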
-
Kemeng Shi authored
Commit 88022d72 ("blk-mq: don't handle failure in .get_budget") removed the BLK_STS_RESOURCE return value, and now we only check whether we can get the budget from .get_budget(). Correct the stale comment saying ".get_budget() returns BLK_STS_NO_RESOURCE" to ".get_budget() fails to get the budget". Fixes: 88022d72 ("blk-mq: don't handle failure in .get_budget") Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
Use switch/case to handle errors, as other functions do, to improve readability in blk_mq_try_issue_list_directly. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
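A sketch of the reworked loop body (simplified; the counter and insert handling in the real function may differ):

    ret = blk_mq_request_issue_directly(rq, list_empty(list));
    switch (ret) {
    case BLK_STS_OK:
            queued++;
            break;
    case BLK_STS_RESOURCE:
    case BLK_STS_DEV_RESOURCE:
            /* out of resources: put the request back and stop issuing */
            blk_mq_request_bypass_insert(rq, false, list_empty(list));
            goto out;
    default:
            blk_mq_end_request(rq, ret);
            break;
    }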
-
Kemeng Shi authored
Commit 113285b4 ("blk-mq: ensure that bd->last is always set correctly") sets last when we fail to get a driver tag for the next request, to avoid missing a flush when we break the list walk and do not send the last request in the list (which would normally be sent with last set). This code seems stale now because the flush it introduces is always redundant: if tags are really exhausted, we will send an extra flush when we find the list is not empty after the list walk; if some tag is freed before the retry in blk_mq_prep_dispatch_rq for the next request, we can get a tag for that request on retry and the flush already signalled is not necessary. Just remove this stale code. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
blk_mq_dispatch_rq_list reports in its bool return value whether the hctx is busy: it returns true if we are not busy and can handle more, and false otherwise. Inside blk_mq_dispatch_rq_list, errors is only used when the list is empty, and we return true if the list is empty and (errors + queued) != 0. There are three kinds of status returned for a request: the busy errors BLK_STS*_RESOURCE, where the failed request is added back to the list so the list will not be empty; BLK_STS_OK, which is counted in queued; and the remaining errors, which are counted in errors. If the list ends up empty, no request got a busy error, so (errors + queued) equals the total number of requests in the list, which was checked to be non-empty at the start of blk_mq_dispatch_rq_list. So (errors + queued) != 0 always holds when the list is empty, and the (errors + queued) != 0 check and the errors count are not needed. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
1. Remove the check of needs_resource and ret == BLK_STS_DEV_RESOURCE. For the busy errors BLK_STS*_RESOURCE, the request is always added back to the list, so needs_resource will not be true and ret will not be BLK_STS_DEV_RESOURCE if the list is empty. We can remove these dead checks. 2. Check the ret of the last request instead of errors. If the list is empty, we only need to explicitly commit_rqs if an error happened for the last request, which is stored in ret. So check the ret of the last request instead of errors, to remove unnecessary commit_rqs calls triggered by errors returned from previous requests. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
Call blk_mq_commit_rqs instead of accessing ->commit_rqs directly. As the comment of blk_mq_commit_rqs explains, we only need to call it explicitly in two cases: we did not queue everything initially scheduled to queue, or the last attempt to queue a request failed. Both cases can be checked with the ret of the last request, which breaks the list walk. Then we can remove the unnecessary error count and the unnecessary commits triggered by errors beyond the cases described above. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
We only need to commit explicitly in two error cases: we did not queue everything initially scheduled to queue, or the last attempt to queue a request failed (see the comment of blk_mq_commit_rqs for more details). Both cases can be checked with the ret of the last request, which breaks the list walk. Remove the unnecessary error count and the unnecessary commits triggered by errors not covered by the cases described above. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
1. Move blk_mq_commit_rqs forward, before the functions that need to commit. 2. Add a queued check so blk_mq_commit_rqs only commits requests if any request was actually queued, keeping the commit behavior consistent and removing unnecessary commits. 3. Split the clearing of queued out of blk_mq_plug_commit_rqs, as it is not wanted in the general helper. 4. Sync the current caller of blk_mq_commit_rqs with the new general blk_mq_commit_rqs. 5. Document the rule for the unusual cases which need an explicit commit_rqs. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
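Put together, the general helper might look like the sketch below: it only invokes ->commit_rqs when something was actually queued, and leaves clearing of the queued counter to the callers:

    /* Sketch of the generalized helper described above */
    static void blk_mq_commit_rqs(struct blk_mq_hw_ctx *hctx, int queued,
                                  bool from_schedule)
    {
            if (hctx->queue->mq_ops->commit_rqs && queued) {
                    trace_block_unplug(hctx->queue, queued, !from_schedule);
                    hctx->queue->mq_ops->commit_rqs(hctx);
            }
    }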
-
Kemeng Shi authored
Function blk_mq_plug_issue_direct tries to issue the batch of requests on the plug list directly to the driver. We only issue plug requests to the driver when not coming from the scheduler, so the from_scheduler parameter of blk_mq_plug_issue_direct is always false. Remove the unnecessary from_scheduler parameter of blk_mq_plug_issue_direct. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
We only break the list walk if we get a 'BLK_STS_*RESOURCE' error, and we also count errors for 'BLK_STS_*RESOURCE' errors, so if the list is not empty, errors will always be non-zero. Therefore we can remove the unnecessary list_empty check. This removes the redundant list_empty check for the case where the error happened while sending the last request in the list. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
Commit f906a6a0 ("blk-mq: improve tag waiting setup for non-shared tags") marks restart for unshared tags as an improvement. At that time, tags were only shared between queues, and whether tags are shared could be checked by testing BLK_MQ_F_TAG_SHARED. Afterwards, commit 32bc15af ("blk-mq: Facilitate a shared sbitmap per tagset") enabled tag sharing between hctxs inside a queue. We only mark restart for shared hctxs inside a queue, which may cause an IO hang if there is no tag currently allocated by the hctxs about to be marked for restart. Wait on the sbitmap_queue instead of marking restart for the shared-hctxs case to fix this. Fixes: 32bc15af ("blk-mq: Facilitate a shared sbitmap per tagset") Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
For the shared queues case, we only wait on bitmap_tags if we fail to get a driver tag. However, the rq could be from breserved_tags, in which case two problems occur: 1. an IO hang if no tag is currently allocated from bitmap_tags; 2. an unnecessary wakeup when a tag is freed to bitmap_tags while no tag is freed to breserved_tags. Wait on the bitmap the rq came from to fix this. Fixes: f906a6a0 ("blk-mq: improve tag waiting setup for non-shared tags") Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
Commit 97889f9a ("blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()") removed the handling of TAG_SHARED in restart; shared_hctx_restart, which counted how many hardware queues are marked for restart, was removed too. Remove the stale comment claiming that we still count hardware queues that need restart. Fixes: 97889f9a ("blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()") Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Kemeng Shi authored
Commit 1f5bd336 ("blk-mq: add blk_mq_alloc_request_hctx") added blk_mq_alloc_request_hctx to send commands to a specific queue. If BLK_MQ_REQ_NOWAIT is not set in the tag allocation, we may migrate to a different hctx after sleeping and get a tag from an unexpected hctx, so BLK_MQ_REQ_NOWAIT must be set in flags for blk_mq_alloc_request_hctx. After commit 600c3b0c ("blk-mq: open code __blk_mq_alloc_request in blk_mq_alloc_request_hctx"), blk_mq_alloc_request_hctx returns -EINVAL if both BLK_MQ_REQ_NOWAIT and BLK_MQ_REQ_RESERVED are not set, instead of when BLK_MQ_REQ_NOWAIT alone is not set. So if BLK_MQ_REQ_NOWAIT is not set but BLK_MQ_REQ_RESERVED is set, blk_mq_alloc_request_hctx could allocate a tag from an unexpected hctx. I guess what we need here is to return -EINVAL if either BLK_MQ_REQ_NOWAIT or BLK_MQ_REQ_RESERVED is not set. Currently both BLK_MQ_REQ_NOWAIT and BLK_MQ_REQ_RESERVED are set when a specific hctx is needed, in nvme_auth_submit, nvmf_connect_io_queue and nvmf_connect_admin_queue, so this fixes the potential missed-BLK_MQ_REQ_NOWAIT case for the future. Fixes: 600c3b0c ("blk-mq: open code __blk_mq_alloc_request in blk_mq_alloc_request_hctx") Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
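The check being suggested amounts to something like this sketch (the exact form in the patch may differ):

    /* a specific hctx only makes sense for reserved, non-sleeping allocations */
    if (!(flags & BLK_MQ_REQ_NOWAIT) || !(flags & BLK_MQ_REQ_RESERVED))
            return ERR_PTR(-EINVAL);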
-
Christoph Hellwig authored
The capability attribute was added in 2017 to expose the kernel-internal GENHD_FL_MEDIA_CHANGE_NOTIFY to userspace, without ever adding a value to a UAPI header and without ever setting it in any driver, until the flag was finally removed in Linux 5.7. Deprecate the file and always return 0 instead of exposing the other internal and frequently renumbered gendisk flags. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Link: https://lore.kernel.org/r/20230203150209.3199115-1-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
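The deprecated attribute can then be reduced to a stub that warns once and reports 0; a sketch (the warning text is illustrative):

    static ssize_t capability_show(struct device *dev,
                                   struct device_attribute *attr, char *buf)
    {
            pr_warn_once("the capability attribute has been deprecated.\n");
            return sysfs_emit(buf, "0\n");
    }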
-
Christoph Hellwig authored
new_blkg can be NULL if the caller didn't pass in a pre-allocated blkg. Don't try to free it in that case. Fixes: 27b642b0 ("blk-cgroup: simplify blkg freeing from initialization failure paths") Reported-by: Yi Zhang <yi.zhang@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20230206150201.3438972-1-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
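The fix boils down to guarding the free in the error path (sketch):

    /* only free the pre-allocated blkg if the caller actually passed one in */
    if (new_blkg)
            blkg_free(new_blkg);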
-
- 03 Feb, 2023 15 commits
-
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Link: https://lore.kernel.org/r/20230203150634.3199647-24-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
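bvec_set_page() simply wraps the three open-coded assignments; the page, len and offset variables below are placeholders for whatever each call site uses:

    /* before: open-coded initialization */
    bvec.bv_page   = page;
    bvec.bv_len    = len;
    bvec.bv_offset = offset;

    /* after: one helper call with the same effect */
    bvec_set_page(&bvec, page, len, offset);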
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jason Wang <jasowang@redhat.com> Link: https://lore.kernel.org/r/20230203150634.3199647-23-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Trond Myklebust <trond.myklebust@hammerspace.com> Link: https://lore.kernel.org/r/20230203150634.3199647-22-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230203150634.3199647-21-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230203150634.3199647-20-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20230203150634.3199647-19-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230203150634.3199647-18-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page and bvec_set_folio helpers to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230203150634.3199647-17-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Trond Myklebust <trond.myklebust@hammerspace.com> Link: https://lore.kernel.org/r/20230203150634.3199647-16-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230203150634.3199647-15-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Link: https://lore.kernel.org/r/20230203150634.3199647-14-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230203150634.3199647-13-hch@lst.de Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_folio helper to initialize a bvec. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230203150634.3199647-12-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize bvecs. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20230203150634.3199647-11-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Use the bvec_set_virt helper to initialize the special_vec. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Link: https://lore.kernel.org/r/20230203150634.3199647-10-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-