Commit 8327cce5 authored by Ritika Srivastava, committed by Jens Axboe

block: better deal with the delayed not supported case in blk_cloned_rq_check_limits

If a WRITE_ZERO/WRITE_SAME operation is not supported by the storage,
blk_cloned_rq_check_limits() returns an I/O error, which causes
device-mapper to fail the paths.

Instead, if the queue limit is set to 0, return BLK_STS_NOTSUPP.
BLK_STS_NOTSUPP is ignored by device-mapper and does not fail the
paths.
Suggested-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Ritika Srivastava <ritika.srivastava@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 143d2600
@@ -1148,10 +1148,24 @@ EXPORT_SYMBOL(submit_bio);
 static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
 				      struct request *rq)
 {
-	if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, req_op(rq))) {
+	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+
+	if (blk_rq_sectors(rq) > max_sectors) {
+		/*
+		 * SCSI device does not have a good way to return if
+		 * Write Same/Zero is actually supported. If a device rejects
+		 * a non-read/write command (discard, write same,etc.) the
+		 * low-level device driver will set the relevant queue limit to
+		 * 0 to prevent blk-lib from issuing more of the offending
+		 * operations. Commands queued prior to the queue limit being
+		 * reset need to be completed with BLK_STS_NOTSUPP to avoid I/O
+		 * errors being propagated to upper layers.
+		 */
+		if (max_sectors == 0)
+			return BLK_STS_NOTSUPP;
+
 		printk(KERN_ERR "%s: over max size limit. (%u > %u)\n",
-			__func__, blk_rq_sectors(rq),
-			blk_queue_get_max_sectors(q, req_op(rq)));
+			__func__, blk_rq_sectors(rq), max_sectors);
 		return BLK_STS_IOERR;
 	}

@@ -1178,8 +1192,11 @@ static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
  */
 blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 {
-	if (blk_cloned_rq_check_limits(q, rq))
-		return BLK_STS_IOERR;
+	blk_status_t ret;
+
+	ret = blk_cloned_rq_check_limits(q, rq);
+	if (ret != BLK_STS_OK)
+		return ret;

 	if (rq->rq_disk &&
 	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))