Commit 37f0dc2e authored by Ming Lei, committed by Christoph Hellwig

nvme: fix handling single range discard request

While investigating a customer report of a warning in nvme_setup_discard,
we observed that the controller (nvme/tcp) actually exposes
queue_max_discard_segments(req->q) == 1.

The current code can't handle this situation, because such discard
requests are merged by contiguity like normal read/write requests; a
merged request can then carry several bios, so the per-bio loop emits
more ranges than the single allowed segment and trips the
WARN_ON_ONCE(n != segments) check.

Fix the issue by building the single range from the request's
sector/nr_sectors directly instead of iterating over its bios.

Fixes: b35ba01e ("nvme: support ranged discard requests")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent 54686b61
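
The failure mode described above is easy to see in isolation. Below is a
minimal userspace sketch (illustrative stand-in values only, not the
kernel's types or APIs): with queue_max_discard_segments() == 1, two
contiguous discard bios get merged into one request, the old per-bio loop
counts one range per bio, and the WARN_ON_ONCE(n != segments) check at
the end of nvme_setup_discard fires.

#include <assert.h>

int main(void)
{
	int segments = 1;	/* queue_max_discard_segments(req->q) */
	int bios_in_req = 2;	/* two contiguous bios merged into one request */
	int n = 0;
	int i;

	/* mirrors the old __rq_for_each_bio() walk: one range per bio */
	for (i = 0; i < bios_in_req; i++)
		n++;

	/* n == 2 but segments == 1, so the warning fires and the request fails */
	assert(n != segments);
	return 0;
}
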
@@ -781,16 +781,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		range = page_address(ns->ctrl->discard_page);
 	}
 
-	__rq_for_each_bio(bio, req) {
-		u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
-		u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
-
-		if (n < segments) {
-			range[n].cattr = cpu_to_le32(0);
-			range[n].nlb = cpu_to_le32(nlb);
-			range[n].slba = cpu_to_le64(slba);
+	if (queue_max_discard_segments(req->q) == 1) {
+		u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
+		u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
+
+		range[0].cattr = cpu_to_le32(0);
+		range[0].nlb = cpu_to_le32(nlb);
+		range[0].slba = cpu_to_le64(slba);
+		n = 1;
+	} else {
+		__rq_for_each_bio(bio, req) {
+			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
+			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+
+			if (n < segments) {
+				range[n].cattr = cpu_to_le32(0);
+				range[n].nlb = cpu_to_le32(nlb);
+				range[n].slba = cpu_to_le64(slba);
+			}
+			n++;
 		}
-		n++;
 	}
 
 	if (WARN_ON_ONCE(n != segments)) {
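
For reference, the unit conversions in the new single-range path:
blk_rq_pos() and blk_rq_sectors() report the request in 512-byte
block-layer sectors, while the DSM range entry wants device LBAs, so
both values are shifted right by (lba_shift - 9). A minimal standalone
sketch of that arithmetic, assuming a 4096-byte LBA format
(lba_shift == 12) and hypothetical example values:

#include <assert.h>
#include <stdint.h>

#define SECTOR_SHIFT	9	/* 512-byte block-layer sectors */

/* userspace stand-in for nvme_sect_to_lba(): 512B sectors -> device LBAs */
static uint64_t sect_to_lba(unsigned int lba_shift, uint64_t sector)
{
	return sector >> (lba_shift - SECTOR_SHIFT);
}

int main(void)
{
	unsigned int lba_shift = 12;	/* log2(4096): 4KiB LBA format */
	uint64_t req_pos = 2048;	/* blk_rq_pos(): start, in 512B sectors */
	uint32_t req_sectors = 16;	/* blk_rq_sectors(): total merged length */

	uint64_t slba = sect_to_lba(lba_shift, req_pos);
	uint32_t nlb = req_sectors >> (lba_shift - SECTOR_SHIFT);

	/* 2048 sectors = 1MiB offset = LBA 256; 16 sectors = 8KiB = 2 LBAs */
	assert(slba == 256 && nlb == 2);
	return 0;
}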