Commit d004a5e7 authored by Christoph Hellwig, committed by Jens Axboe

block: remove __bio_kmap_atomic

This helper doesn't buy us much over calling kmap_atomic directly.
In fact, in its only caller it does a bit of useless work, as the
caller already has the bvec at hand, and said caller would even be
buggy for a multi-segment bio due to the use of this helper.

So just remove it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 83f5f7ed
@@ -216,10 +216,9 @@ may need to abort DMA operations and revert to PIO for the transfer, in
 which case a virtual mapping of the page is required. For SCSI it is also
 done in some scenarios where the low level driver cannot be trusted to
 handle a single sg entry correctly. The driver is expected to perform the
-kmaps as needed on such occasions using the bio_kmap_irq and friends
-routines as appropriate. A driver could also use the blk_queue_bounce()
-routine on its own to bounce highmem i/o to low memory for specific requests
-if so desired.
+kmaps as needed on such occasions as appropriate. A driver could also use
+the blk_queue_bounce() routine on its own to bounce highmem i/o to low
+memory for specific requests if so desired.
 
 iii. The i/o scheduler algorithm itself can be replaced/set as appropriate
@@ -1137,8 +1136,8 @@ use dma_map_sg for scatter gather) to be able to ship it to the driver. For
 PIO drivers (or drivers that need to revert to PIO transfer once in a
 while (IDE for example)), where the CPU is doing the actual data
 transfer a virtual mapping is needed. If the driver supports highmem I/O,
-(Sec 1.1, (ii) ) it needs to use __bio_kmap_atomic or similar to
-temporarily map a bio into the virtual address space.
+(Sec 1.1, (ii) ) it needs to use kmap_atomic or similar to temporarily map
+a bio into the virtual address space.
 
 8. Prior/Related/Impacted patches
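For illustration, a minimal sketch of the per-segment mapping pattern this documentation refers to is shown below. It is not kernel code: struct my_dev and do_pio_transfer() are hypothetical placeholders for a driver's own state and PIO routine; the bio iteration and the kmap_atomic()/kunmap_atomic() calls mirror the simdisk change further down in this diff.

#include <linux/bio.h>
#include <linux/highmem.h>

struct my_dev;					/* hypothetical driver state */
void do_pio_transfer(struct my_dev *dev, sector_t sector, char *buf,
		     unsigned int len, int is_write);	/* hypothetical PIO helper */

/*
 * Sketch: temporarily map each segment of a bio with kmap_atomic()
 * while the CPU performs the transfer, then drop the mapping again.
 */
static void pio_transfer_bio(struct my_dev *dev, struct bio *bio)
{
	struct bio_vec bvec;
	struct bvec_iter iter;
	sector_t sector = bio->bi_iter.bi_sector;

	bio_for_each_segment(bvec, bio, iter) {
		/* the iterator already hands us the bvec, so map it directly */
		char *buf = kmap_atomic(bvec.bv_page) + bvec.bv_offset;

		do_pio_transfer(dev, sector, buf, bvec.bv_len,
				bio_data_dir(bio) == WRITE);
		sector += bvec.bv_len >> 9;	/* 512-byte sectors */
		kunmap_atomic(buf);
	}
}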
@@ -110,13 +110,13 @@ static blk_qc_t simdisk_make_request(struct request_queue *q, struct bio *bio)
 	sector_t sector = bio->bi_iter.bi_sector;
 
 	bio_for_each_segment(bvec, bio, iter) {
-		char *buffer = __bio_kmap_atomic(bio, iter);
+		char *buffer = kmap_atomic(bvec.bv_page) + bvec.bv_offset;
 		unsigned len = bvec.bv_len >> SECTOR_SHIFT;
 
 		simdisk_transfer(dev, sector, len, buffer,
 				bio_data_dir(bio) == WRITE);
 		sector += len;
-		__bio_kunmap_atomic(buffer);
+		kunmap_atomic(buffer);
 	}
 
 	bio_endio(bio);
@@ -157,7 +157,7 @@ EXPORT_SYMBOL(blk_set_stacking_limits);
  * Caveat:
  * The driver that does this *must* be able to deal appropriately
  * with buffers in "highmemory". This can be accomplished by either calling
- * __bio_kmap_atomic() to get a temporary kernel mapping, or by calling
+ * kmap_atomic() to get a temporary kernel mapping, or by calling
  * blk_queue_bounce() to create a buffer in normal memory.
  **/
 void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
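The caveat above also names blk_queue_bounce() as the alternative to mapping highmem pages directly; a minimal sketch of that route follows. my_handle_bio() is a hypothetical driver routine assumed to complete the bio itself; only blk_queue_bounce() and the make_request_fn signature come from the block layer.

#include <linux/blkdev.h>

struct my_dev;					/* hypothetical driver state */
void my_handle_bio(struct my_dev *dev, struct bio *bio);	/* hypothetical; ends the bio */

/*
 * Sketch: a make_request function that cannot map highmem pages itself
 * asks the block layer to bounce the bio into low memory first.
 */
static blk_qc_t my_make_request(struct request_queue *q, struct bio *bio)
{
	/* may replace highmem pages in *bio with bounce pages in low memory */
	blk_queue_bounce(q, &bio);

	my_handle_bio(q->queuedata, bio);
	return BLK_QC_T_NONE;
}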
@@ -128,18 +128,6 @@ static inline void *bio_data(struct bio *bio)
  */
 #define bvec_to_phys(bv)	(page_to_phys((bv)->bv_page) + (unsigned long) (bv)->bv_offset)
 
-/*
- * queues that have highmem support enabled may still need to revert to
- * PIO transfers occasionally and thus map high pages temporarily. For
- * permanent PIO fall back, user is probably better off disabling highmem
- * I/O completely on that queue (see ide-dma for example)
- */
-#define __bio_kmap_atomic(bio, iter)				\
-	(kmap_atomic(bio_iter_iovec((bio), (iter)).bv_page) +	\
-	 bio_iter_iovec((bio), (iter)).bv_offset)
-
-#define __bio_kunmap_atomic(addr)	kunmap_atomic(addr)
-
 /*
  * merge helpers etc
  */