Commit 9cc5169c authored by Bart Van Assche, committed by Jens Axboe

block: Improve physical block alignment of split bios

Consider the following example:
* The logical block size is 4 KB.
* The physical block size is 8 KB.
* max_sectors equals (16 KB >> 9) sectors.
* A non-aligned 4 KB and an aligned 64 KB bio are merged into a single
  non-aligned 68 KB bio.

The current behavior is to split such a bio into (16 KB + 16 KB + 16 KB
+ 16 KB + 4 KB). None of these five bios starts on a physical block
boundary.

This patch ensures that such a bio is split into four aligned bios and
one non-aligned bio instead of five non-aligned bios. This improves
performance because most block devices handle aligned requests faster
than non-aligned requests.

Since the physical block size is larger than or equal to the logical
block size, this patch preserves the guarantee that the returned
value is a multiple of the logical block size.

Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 708b25b3
@@ -132,16 +132,29 @@ static struct bio *blk_bio_write_same_split(struct request_queue *q,
 	return bio_split(bio, q->limits.max_write_same_sectors, GFP_NOIO, bs);
 }
 
+/*
+ * Return the maximum number of sectors from the start of a bio that may be
+ * submitted as a single request to a block device. If enough sectors remain,
+ * align the end to the physical block size. Otherwise align the end to the
+ * logical block size. This approach minimizes the number of non-aligned
+ * requests that are submitted to a block device if the start of a bio is not
+ * aligned to a physical block boundary.
+ */
 static inline unsigned get_max_io_size(struct request_queue *q,
 				       struct bio *bio)
 {
 	unsigned sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector);
-	unsigned mask = queue_logical_block_size(q) - 1;
+	unsigned max_sectors = sectors;
+	unsigned pbs = queue_physical_block_size(q) >> SECTOR_SHIFT;
+	unsigned lbs = queue_logical_block_size(q) >> SECTOR_SHIFT;
+	unsigned start_offset = bio->bi_iter.bi_sector & (pbs - 1);
 
-	/* aligned to logical block size */
-	sectors &= ~(mask >> 9);
+	max_sectors += start_offset;
+	max_sectors &= ~(pbs - 1);
+	if (max_sectors > start_offset)
+		return max_sectors - start_offset;
 
-	return sectors;
+	return sectors & (lbs - 1);
 }
 
 static unsigned get_max_segment_size(const struct request_queue *q,
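
To see how the new arithmetic behaves, the following is a minimal userspace
sketch, not part of the commit: it hard-codes the queue limits from the
example above in place of the real request_queue helpers, and the names
model_get_max_io_size() and max_io are made up for illustration. It mirrors
the patched logic and prints the size and alignment of each split.

#include <stdio.h>

#define SECTOR_SHIFT	9

/* Hard-coded stand-ins for the queue limits used in the example. */
static const unsigned lbs = 4096 >> SECTOR_SHIFT;	/* logical block: 4 KB  */
static const unsigned pbs = 8192 >> SECTOR_SHIFT;	/* physical block: 8 KB */
static const unsigned max_io = 16384 >> SECTOR_SHIFT;	/* max_sectors: 16 KB   */

/* Userspace model of the patched get_max_io_size(). */
static unsigned model_get_max_io_size(unsigned long long start_sector)
{
	unsigned sectors = max_io;	/* stand-in for blk_max_size_offset() */
	unsigned max_sectors = sectors;
	unsigned start_offset = start_sector & (pbs - 1);

	max_sectors += start_offset;
	max_sectors &= ~(pbs - 1);
	if (max_sectors > start_offset)
		return max_sectors - start_offset;

	/* Fallback branch exactly as committed; not reached in this example. */
	return sectors & (lbs - 1);
}

int main(void)
{
	/* 68 KB bio starting 4 KB past a physical block boundary. */
	unsigned long long sector = 4096 >> SECTOR_SHIFT;
	unsigned remaining = (68 * 1024) >> SECTOR_SHIFT;

	while (remaining) {
		unsigned chunk = model_get_max_io_size(sector);

		if (chunk > remaining)
			chunk = remaining;
		printf("split: %u KB, start %s\n", chunk >> 1,
		       (sector & (pbs - 1)) ? "non-aligned" : "aligned");
		sector += chunk;
		remaining -= chunk;
	}
	return 0;
}

For the 68 KB bio starting 4 KB past a physical block boundary, this prints a
12 KB non-aligned fragment followed by 16 KB, 16 KB, 16 KB and 8 KB fragments
that all start on an 8 KB boundary, matching the "four aligned and one
non-aligned" claim in the commit message.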