Commit dab3902b authored by Chaitanya Kulkarni, committed by Christoph Hellwig

nvmet: use inline bio for passthru fast path

nvmet_passthru_execute_cmd() is a high-frequency function, and it uses
bio_alloc(), which results in a memory allocation from the fs pool for
each I/O.

For NVMeOF, the nvmet_req already has an inline_bvec allocated as part of
request allocation, and the size of the request is already known before
the bio is allocated. That inline_bvec can therefore be used with a
preallocated bio instead of calling bio_alloc().

Introduce a bio member in the nvmet_req passthru anonymous union. In the
fast path, check whether the inline bvec and bio from nvmet_req suffice,
and set them up with bio_init() instead of allocating with bio_alloc().

This avoids new memory allocations under high memory pressure and trades
the cost of allocation (bio_alloc()) for plain initialization
(bio_init()) whenever the transfer length is <= NVMET_MAX_INLINE_DATA_LEN,
which the user can configure at compile time.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent a4fe2d3a
@@ -332,6 +332,7 @@ struct nvmet_req {
 		struct work_struct	work;
 	} f;
 	struct {
+		struct bio		inline_bio;
 		struct request		*rq;
 		struct work_struct	work;
 		bool			use_workqueue;
@@ -194,13 +194,19 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	if (req->sg_cnt > BIO_MAX_PAGES)
 		return -EINVAL;
 
-	bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
-	bio->bi_end_io = bio_put;
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->p.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, min(req->sg_cnt, BIO_MAX_PAGES));
+		bio->bi_end_io = bio_put;
+	}
 	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				sg->offset) < sg->length) {
-			bio_put(bio);
+			if (bio != &req->p.inline_bio)
+				bio_put(bio);
 			return -EINVAL;
 		}