Commit 548453fd authored by Linus Torvalds

Merge branch 'for-2.6.26' of git://git.kernel.dk/linux-2.6-block

* 'for-2.6.26' of git://git.kernel.dk/linux-2.6-block:
  block: fix blk_register_queue() return value
  block: fix memory hotplug and bouncing in block layer
  block: replace remaining __FUNCTION__ occurrences
  Kconfig: clean up block/Kconfig help descriptions
  cciss: fix warning oops on rmmod of driver
  cciss: Fix race between disk-adding code and interrupt handler
  block: move the padding adjustment to blk_rq_map_sg
  block: add bio_copy_user_iov support to blk_rq_map_user_iov
  block: convert bio_copy_user to bio_copy_user_iov
  loop: manage partitions in disk image
  cdrom: use kmalloced buffers instead of buffers on stack
  cdrom: make unregister_cdrom() return void
  cdrom: use list_head for cdrom_device_info list
  cdrom: protect cdrom_device_info list by mutex
  cdrom: cleanup hardcoded error-code
  cdrom: remove ifdef CONFIG_SYSCTL
parents 9fd91217 fb199746
@@ -777,7 +777,7 @@
 Note that a driver must have one static structure, $<device>_dops$, while
 it may have as many structures $<device>_info$ as there are minor devices
 active. $Register_cdrom()$ builds a linked list from these.
-\subsection{$Int\ unregister_cdrom(struct\ cdrom_device_info * cdi)$}
+\subsection{$Void\ unregister_cdrom(struct\ cdrom_device_info * cdi)$}
 Unregistering device $cdi$ with minor number $MINOR(cdi\to dev)$ removes
 the minor device from the list. If it was the last registered minor for
@@ -5,14 +5,18 @@ menuconfig BLOCK
 	bool "Enable the block layer" if EMBEDDED
 	default y
 	help
-	  This permits the block layer to be removed from the kernel if it's not
-	  needed (on some embedded devices for example). If this option is
-	  disabled, then blockdev files will become unusable and some
-	  filesystems (such as ext3) will become unavailable.
+	  Provide block layer support for the kernel.
 
-	  This option will also disable SCSI character devices and USB storage
-	  since they make use of various block layer definitions and
-	  facilities.
+	  Disable this option to remove the block layer support from the
+	  kernel. This may be useful for embedded devices.
+
+	  If this option is disabled:
+
+	    - block device files will become unusable
+	    - some filesystems (such as ext3) will become unavailable.
+
+	  Also, SCSI character devices and USB storage will be disabled since
+	  they make use of various block layer definitions and facilities.
 
 	  Say Y here unless you know you really don't want to mount disks and
 	  suchlike.
@@ -23,9 +27,20 @@ config LBD
 	bool "Support for Large Block Devices"
 	depends on !64BIT
 	help
-	  Say Y here if you want to attach large (bigger than 2TB) discs to
-	  your machine, or if you want to have a raid or loopback device
-	  bigger than 2TB.  Otherwise say N.
+	  Enable block devices of size 2TB and larger.
+
+	  This option is required to support the full capacity of large
+	  (2TB+) block devices, including RAID, disk, Network Block Device,
+	  Logical Volume Manager (LVM) and loopback.
+
+	  For example, RAID devices are frequently bigger than the capacity
+	  of the largest individual hard drive.
+
+	  This option is not required if you have individual disk drives
+	  which total 2TB+ and you are not aggregating the capacity into
+	  a large block device (e.g. using RAID or LVM).
+
+	  If unsure, say N.
 
 config BLK_DEV_IO_TRACE
 	bool "Support for tracing block io actions"
@@ -33,19 +48,21 @@ config BLK_DEV_IO_TRACE
 	select RELAY
 	select DEBUG_FS
 	help
-	  Say Y here, if you want to be able to trace the block layer actions
+	  Say Y here if you want to be able to trace the block layer actions
 	  on a given queue. Tracing allows you to see any traffic happening
-	  on a block device queue. For more information (and the user space
-	  support tools needed), fetch the blktrace app from:
+	  on a block device queue. For more information (and the userspace
+	  support tools needed), fetch the blktrace tools from:
 
 	  git://git.kernel.dk/blktrace.git
 
+	  If unsure, say N.
+
 config LSF
 	bool "Support for Large Single Files"
 	depends on !64BIT
 	help
-	  Say Y here if you want to be able to handle very large files (bigger
-	  than 2TB), otherwise say N.
+	  Say Y here if you want to be able to handle very large files (2TB
+	  and larger), otherwise say N.
 
 	  If unsure, say Y.
@@ -53,14 +70,16 @@ config BLK_DEV_BSG
 	bool "Block layer SG support v4 (EXPERIMENTAL)"
 	depends on EXPERIMENTAL
 	---help---
 	  Saying Y here will enable generic SG (SCSI generic) v4 support
 	  for any block device.
 
 	  Unlike SG v3 (aka block/scsi_ioctl.c drivers/scsi/sg.c), SG v4
 	  can handle complicated SCSI commands: tagged variable length cdbs
 	  with bidirectional data transfers and generic request/response
 	  protocols (e.g. Task Management Functions and SMP in Serial
 	  Attached SCSI).
 
+	  If unsure, say N.
+
 endif # BLOCK
@@ -5,6 +5,7 @@
 #include <linux/module.h>
 #include <linux/bio.h>
 #include <linux/blkdev.h>
+#include <scsi/sg.h>		/* for struct sg_iovec */
 
 #include "blk.h"
 
@@ -140,25 +141,8 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
 		ubuf += ret;
 	}
 
-	/*
-	 * __blk_rq_map_user() copies the buffers if starting address
-	 * or length isn't aligned to dma_pad_mask.  As the copied
-	 * buffer is always page aligned, we know that there's enough
-	 * room for padding.  Extend the last bio and update
-	 * rq->data_len accordingly.
-	 *
-	 * On unmap, bio_uncopy_user() will use unmodified
-	 * bio_map_data pointed to by bio->bi_private.
-	 */
-	if (len & q->dma_pad_mask) {
-		unsigned int pad_len = (q->dma_pad_mask & ~len) + 1;
-		struct bio *tail = rq->biotail;
-
-		tail->bi_io_vec[tail->bi_vcnt - 1].bv_len += pad_len;
-		tail->bi_size += pad_len;
-
-		rq->extra_len += pad_len;
-	}
-
+	if (!bio_flagged(bio, BIO_USER_MAPPED))
+		rq->cmd_flags |= REQ_COPY_USER;
+
 	rq->buffer = rq->data = NULL;
 	return 0;
 
@@ -194,15 +178,26 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 			struct sg_iovec *iov, int iov_count, unsigned int len)
 {
 	struct bio *bio;
+	int i, read = rq_data_dir(rq) == READ;
+	int unaligned = 0;
 
 	if (!iov || iov_count <= 0)
 		return -EINVAL;
 
-	/* we don't allow misaligned data like bio_map_user() does.  If the
-	 * user is using sg, they're expected to know the alignment constraints
-	 * and respect them accordingly */
-	bio = bio_map_user_iov(q, NULL, iov, iov_count,
-				rq_data_dir(rq) == READ);
+	for (i = 0; i < iov_count; i++) {
+		unsigned long uaddr = (unsigned long)iov[i].iov_base;
+
+		if (uaddr & queue_dma_alignment(q)) {
+			unaligned = 1;
+			break;
+		}
+	}
+
+	if (unaligned || (q->dma_pad_mask & len))
+		bio = bio_copy_user_iov(q, iov, iov_count, read);
+	else
+		bio = bio_map_user_iov(q, NULL, iov, iov_count, read);
+
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);
 
@@ -212,6 +207,9 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 		return -EINVAL;
 	}
 
+	if (!bio_flagged(bio, BIO_USER_MAPPED))
+		rq->cmd_flags |= REQ_COPY_USER;
+
 	bio_get(bio);
 	blk_rq_bio_prep(q, rq, bio);
 	rq->buffer = rq->data = NULL;
@@ -220,6 +220,15 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 		bvprv = bvec;
 	} /* segments in rq */
 
+	if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
+	    (rq->data_len & q->dma_pad_mask)) {
+		unsigned int pad_len = (q->dma_pad_mask & ~rq->data_len) + 1;
+
+		sg->length += pad_len;
+		rq->extra_len += pad_len;
+	}
+
 	if (q->dma_drain_size && q->dma_drain_needed(rq)) {
 		if (rq->cmd_flags & REQ_RW)
 			memset(q->dma_drain_buffer, 0, q->dma_drain_size);
@@ -276,9 +276,12 @@ int blk_register_queue(struct gendisk *disk)
 
 	struct request_queue *q = disk->queue;
 
-	if (!q || !q->request_fn)
+	if (WARN_ON(!q))
 		return -ENXIO;
 
+	if (!q->request_fn)
+		return 0;
+
 	ret = kobject_add(&q->kobj, kobject_get(&disk->dev.kobj),
 			  "%s", "queue");
 	if (ret < 0)
@@ -300,7 +303,10 @@ void blk_unregister_queue(struct gendisk *disk)
 {
 	struct request_queue *q = disk->queue;
 
-	if (q && q->request_fn) {
+	if (WARN_ON(!q))
+		return;
+
+	if (q->request_fn) {
 		elv_unregister_queue(q);
 		kobject_uevent(&q->kobj, KOBJ_REMOVE);
@@ -1349,6 +1349,10 @@ static void cciss_update_drive_info(int ctlr, int drv_index)
 	spin_lock_irqsave(CCISS_LOCK(h->ctlr), flags);
 	h->drv[drv_index].busy_configuring = 1;
 	spin_unlock_irqrestore(CCISS_LOCK(h->ctlr), flags);
+
+	/* deregister_disk sets h->drv[drv_index].queue = NULL */
+	/* which keeps the interrupt handler from starting */
+	/* the queue. */
 	ret = deregister_disk(h->gendisk[drv_index],
 			      &h->drv[drv_index], 0);
 	h->drv[drv_index].busy_configuring = 0;
@@ -1419,6 +1423,10 @@ static void cciss_update_drive_info(int ctlr, int drv_index)
 		blk_queue_hardsect_size(disk->queue,
 					hba[ctlr]->drv[drv_index].block_size);
 
+		/* Make sure all queue data is written out before */
+		/* setting h->drv[drv_index].queue, as setting this */
+		/* allows the interrupt handler to start the queue */
+		wmb();
 		h->drv[drv_index].queue = disk->queue;
 		add_disk(disk);
 	}
@@ -3520,10 +3528,17 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
 			continue;
 		blk_queue_hardsect_size(q, drv->block_size);
 		set_capacity(disk, drv->nr_blocks);
-		add_disk(disk);
 		j++;
 	} while (j <= hba[i]->highest_lun);
 
+	/* Make sure all queue data is written out before */
+	/* interrupt handler, triggered by add_disk, */
+	/* is allowed to start them. */
+	wmb();
+
+	for (j = 0; j <= hba[i]->highest_lun; j++)
+		add_disk(hba[i]->gendisk[j]);
+
 	return 1;
 
 clean4:
@@ -1349,9 +1349,9 @@ cciss_unregister_scsi(int ctlr)
 	/* set scsi_host to NULL so our detect routine will
 	   find us on register */
 	sa->scsi_host = NULL;
+	spin_unlock_irqrestore(CCISS_LOCK(ctlr), flags);
 	scsi_cmd_stack_free(ctlr);
 	kfree(sa);
-	spin_unlock_irqrestore(CCISS_LOCK(ctlr), flags);
 }
 
 static int
@@ -82,6 +82,9 @@
 static LIST_HEAD(loop_devices);
 static DEFINE_MUTEX(loop_devices_mutex);
 
+static int max_part;
+static int part_shift;
+
 /*
  * Transfer functions
  */
@@ -692,6 +695,8 @@ static int loop_change_fd(struct loop_device *lo, struct file *lo_file,
 		goto out_putf;
 
 	fput(old_file);
+	if (max_part > 0)
+		ioctl_by_bdev(bdev, BLKRRPART, 0);
 	return 0;
 
  out_putf:
@@ -819,6 +824,8 @@ static int loop_set_fd(struct loop_device *lo, struct file *lo_file,
 	}
 	lo->lo_state = Lo_bound;
 	wake_up_process(lo->lo_thread);
+	if (max_part > 0)
+		ioctl_by_bdev(bdev, BLKRRPART, 0);
 	return 0;
 
 out_clr:
@@ -919,6 +926,8 @@ static int loop_clr_fd(struct loop_device *lo, struct block_device *bdev)
 	fput(filp);
 	/* This is safe: open() is still holding a reference. */
 	module_put(THIS_MODULE);
+	if (max_part > 0)
+		ioctl_by_bdev(bdev, BLKRRPART, 0);
 	return 0;
 }
 
@@ -1360,6 +1369,8 @@ static struct block_device_operations lo_fops = {
 static int max_loop;
 module_param(max_loop, int, 0);
 MODULE_PARM_DESC(max_loop, "Maximum number of loop devices");
+module_param(max_part, int, 0);
+MODULE_PARM_DESC(max_part, "Maximum number of partitions per loop device");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_BLOCKDEV_MAJOR(LOOP_MAJOR);
 
@@ -1412,7 +1423,7 @@ static struct loop_device *loop_alloc(int i)
 	if (!lo->lo_queue)
 		goto out_free_dev;
 
-	disk = lo->lo_disk = alloc_disk(1);
+	disk = lo->lo_disk = alloc_disk(1 << part_shift);
 	if (!disk)
 		goto out_free_queue;
 
@@ -1422,7 +1433,7 @@ static struct loop_device *loop_alloc(int i)
 	init_waitqueue_head(&lo->lo_event);
 	spin_lock_init(&lo->lo_lock);
 	disk->major		= LOOP_MAJOR;
-	disk->first_minor	= i;
+	disk->first_minor	= i << part_shift;
 	disk->fops		= &lo_fops;
 	disk->private_data	= lo;
 	disk->queue		= lo->lo_queue;
@@ -1502,7 +1513,12 @@ static int __init loop_init(void)
 	 * themselves and have kernel automatically instantiate actual
 	 * device on-demand.
 	 */
-	if (max_loop > 1UL << MINORBITS)
+
+	part_shift = 0;
+	if (max_part > 0)
+		part_shift = fls(max_part);
+
+	if (max_loop > 1UL << (MINORBITS - part_shift))
 		return -EINVAL;
 
 	if (max_loop) {
@@ -1510,7 +1526,7 @@ static int __init loop_init(void)
 		range = max_loop;
 	} else {
 		nr = 8;
-		range = 1UL << MINORBITS;
+		range = 1UL << (MINORBITS - part_shift);
 	}
 
 	if (register_blkdev(LOOP_MAJOR, "loop"))
@@ -1549,7 +1565,7 @@ static void __exit loop_exit(void)
 	unsigned long range;
 	struct loop_device *lo, *next;
 
-	range = max_loop ? max_loop : 1UL << MINORBITS;
+	range = max_loop ? max_loop : 1UL << (MINORBITS - part_shift);
 
 	list_for_each_entry_safe(lo, next, &loop_devices, lo_list)
 		loop_del_one(lo);
@@ -79,9 +79,9 @@ MODULE_PARM_DESC(max_queue, "Maximum number of queued commands. (min==1, max==30)")
 /* note: prints function name for you */
 #ifdef CARM_DEBUG
-#define DPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args)
+#define DPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args)
 #ifdef CARM_VERBOSE_DEBUG
-#define VPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args)
+#define VPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args)
 #else
 #define VPRINTK(fmt, args...)
 #endif	/* CARM_VERBOSE_DEBUG */
@@ -96,7 +96,7 @@
 #define assert(expr) \
 	if(unlikely(!(expr))) { \
 	printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
-	#expr,__FILE__,__FUNCTION__,__LINE__); \
+	#expr, __FILE__, __func__, __LINE__); \
 	}
 #endif
@@ -827,7 +827,9 @@ static int __devexit remove_gdrom(struct platform_device *devptr)
 	del_gendisk(gd.disk);
 	if (gdrom_major)
 		unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
-	return unregister_cdrom(gd.cd_info);
+	unregister_cdrom(gd.cd_info);
+
+	return 0;
 }
 
 static struct platform_driver gdrom_driver = {
@@ -650,10 +650,7 @@ static int viocd_remove(struct vio_dev *vdev)
 {
 	struct disk_info *d = &viocd_diskinfo[vdev->unit_address];
 
-	if (unregister_cdrom(&d->viocd_info) != 0)
-		printk(VIOCD_KERN_WARNING
-				"Cannot unregister viocd CD-ROM %s!\n",
-				d->viocd_info.name);
+	unregister_cdrom(&d->viocd_info);
 	del_gendisk(d->viocd_disk);
 	blk_cleanup_queue(d->viocd_disk->queue);
 	put_disk(d->viocd_disk);
@@ -2032,9 +2032,8 @@ static void ide_cd_release(struct kref *kref)
 	kfree(info->buffer);
 	kfree(info->toc);
-	if (devinfo->handle == drive && unregister_cdrom(devinfo))
-		printk(KERN_ERR "%s: %s failed to unregister device from the cdrom "
-				"driver.\n", __FUNCTION__, drive->name);
+	if (devinfo->handle == drive)
+		unregister_cdrom(devinfo);
 	drive->dsc_overlap = 0;
 	drive->driver_data = NULL;
 	blk_queue_prep_rq(drive->queue, NULL);
@@ -852,7 +852,7 @@ void scsi_finish_command(struct scsi_cmnd *cmd)
 				"Notifying upper driver of completion "
 				"(result %x)\n", cmd->result));
 
-	good_bytes = scsi_bufflen(cmd) + cmd->request->extra_len;
+	good_bytes = scsi_bufflen(cmd);
 	if (cmd->request->cmd_type != REQ_TYPE_BLOCK_PC) {
 		drv = scsi_cmd_to_driver(cmd);
 		if (drv->done)
@@ -444,22 +444,27 @@ int bio_add_page(struct bio *bio, struct page *page, unsigned int len,
 
 struct bio_map_data {
 	struct bio_vec *iovecs;
-	void __user *userptr;
+	int nr_sgvecs;
+	struct sg_iovec *sgvecs;
 };
 
-static void bio_set_map_data(struct bio_map_data *bmd, struct bio *bio)
+static void bio_set_map_data(struct bio_map_data *bmd, struct bio *bio,
+			     struct sg_iovec *iov, int iov_count)
 {
 	memcpy(bmd->iovecs, bio->bi_io_vec, sizeof(struct bio_vec) * bio->bi_vcnt);
+	memcpy(bmd->sgvecs, iov, sizeof(struct sg_iovec) * iov_count);
+	bmd->nr_sgvecs = iov_count;
 	bio->bi_private = bmd;
 }
 
 static void bio_free_map_data(struct bio_map_data *bmd)
 {
 	kfree(bmd->iovecs);
+	kfree(bmd->sgvecs);
 	kfree(bmd);
 }
 
-static struct bio_map_data *bio_alloc_map_data(int nr_segs)
+static struct bio_map_data *bio_alloc_map_data(int nr_segs, int iov_count)
 {
 	struct bio_map_data *bmd = kmalloc(sizeof(*bmd), GFP_KERNEL);
 
@@ -467,13 +472,71 @@ static struct bio_map_data *bio_alloc_map_data(int nr_segs)
 		return NULL;
 
 	bmd->iovecs = kmalloc(sizeof(struct bio_vec) * nr_segs, GFP_KERNEL);
-	if (bmd->iovecs)
+	if (!bmd->iovecs) {
+		kfree(bmd);
+		return NULL;
+	}
+
+	bmd->sgvecs = kmalloc(sizeof(struct sg_iovec) * iov_count, GFP_KERNEL);
+	if (bmd->sgvecs)
 		return bmd;
 
+	kfree(bmd->iovecs);
 	kfree(bmd);
 	return NULL;
 }
 
+static int __bio_copy_iov(struct bio *bio, struct sg_iovec *iov, int iov_count,
+			  int uncopy)
+{
+	int ret = 0, i;
+	struct bio_vec *bvec;
+	int iov_idx = 0;
+	unsigned int iov_off = 0;
+	int read = bio_data_dir(bio) == READ;
+
+	__bio_for_each_segment(bvec, bio, i, 0) {
+		char *bv_addr = page_address(bvec->bv_page);
+		unsigned int bv_len = bvec->bv_len;
+
+		while (bv_len && iov_idx < iov_count) {
+			unsigned int bytes;
+			char *iov_addr;
+
+			bytes = min_t(unsigned int,
+				      iov[iov_idx].iov_len - iov_off, bv_len);
+			iov_addr = iov[iov_idx].iov_base + iov_off;
+
+			if (!ret) {
+				if (!read && !uncopy)
+					ret = copy_from_user(bv_addr, iov_addr,
+							     bytes);
+				if (read && uncopy)
+					ret = copy_to_user(iov_addr, bv_addr,
+							   bytes);
+
+				if (ret)
+					ret = -EFAULT;
+			}
+
+			bv_len -= bytes;
+			bv_addr += bytes;
+			iov_addr += bytes;
+			iov_off += bytes;
+
+			if (iov[iov_idx].iov_len == iov_off) {
+				iov_idx++;
+				iov_off = 0;
+			}
+		}
+
+		if (uncopy)
+			__free_page(bvec->bv_page);
+	}
+
+	return ret;
+}
+
 /**
  *	bio_uncopy_user	-	finish previously mapped bio
  *	@bio: bio being terminated
@@ -484,55 +547,56 @@ static struct bio_map_data *bio_alloc_map_data(int nr_segs)
 int bio_uncopy_user(struct bio *bio)
 {
 	struct bio_map_data *bmd = bio->bi_private;
-	const int read = bio_data_dir(bio) == READ;
-	struct bio_vec *bvec;
-	int i, ret = 0;
+	int ret;
 
-	__bio_for_each_segment(bvec, bio, i, 0) {
-		char *addr = page_address(bvec->bv_page);
-		unsigned int len = bmd->iovecs[i].bv_len;
+	ret = __bio_copy_iov(bio, bmd->sgvecs, bmd->nr_sgvecs, 1);
 
-		if (read && !ret && copy_to_user(bmd->userptr, addr, len))
-			ret = -EFAULT;
-
-		__free_page(bvec->bv_page);
-		bmd->userptr += len;
-	}
 	bio_free_map_data(bmd);
 	bio_put(bio);
 	return ret;
 }
 
 /**
- *	bio_copy_user	-	copy user data to bio
+ *	bio_copy_user_iov	-	copy user data to bio
  *	@q: destination block queue
- *	@uaddr: start of user address
- *	@len: length in bytes
+ *	@iov:	the iovec.
+ *	@iov_count:	number of elements in the iovec
 *	@write_to_vm: bool indicating writing to pages or not
 *
 *	Prepares and returns a bio for indirect user io, bouncing data
 *	to/from kernel pages as necessary. Must be paired with
 *	call bio_uncopy_user() on io completion.
 */
-struct bio *bio_copy_user(struct request_queue *q, unsigned long uaddr,
-			  unsigned int len, int write_to_vm)
+struct bio *bio_copy_user_iov(struct request_queue *q, struct sg_iovec *iov,
+			      int iov_count, int write_to_vm)
 {
-	unsigned long end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	unsigned long start = uaddr >> PAGE_SHIFT;
 	struct bio_map_data *bmd;
 	struct bio_vec *bvec;
 	struct page *page;
 	struct bio *bio;
 	int i, ret;
+	int nr_pages = 0;
+	unsigned int len = 0;
 
-	bmd = bio_alloc_map_data(end - start);
+	for (i = 0; i < iov_count; i++) {
+		unsigned long uaddr;
+		unsigned long end;
+		unsigned long start;
+
+		uaddr = (unsigned long)iov[i].iov_base;
+		end = (uaddr + iov[i].iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+		start = uaddr >> PAGE_SHIFT;
+
+		nr_pages += end - start;
+		len += iov[i].iov_len;
+	}
+
+	bmd = bio_alloc_map_data(nr_pages, iov_count);
 	if (!bmd)
 		return ERR_PTR(-ENOMEM);
 
-	bmd->userptr = (void __user *) uaddr;
-
 	ret = -ENOMEM;
-	bio = bio_alloc(GFP_KERNEL, end - start);
+	bio = bio_alloc(GFP_KERNEL, nr_pages);
 	if (!bio)
 		goto out_bmd;
 
@@ -564,22 +628,12 @@ struct bio *bio_copy_user(struct request_queue *q, unsigned long uaddr,
 	 * success
 	 */
 	if (!write_to_vm) {
-		char __user *p = (char __user *) uaddr;
-
-		/*
-		 * for a write, copy in data to kernel pages
-		 */
-		ret = -EFAULT;
-		bio_for_each_segment(bvec, bio, i) {
-			char *addr = page_address(bvec->bv_page);
-
-			if (copy_from_user(addr, p, bvec->bv_len))
-				goto cleanup;
-			p += bvec->bv_len;
-		}
+		ret = __bio_copy_iov(bio, iov, iov_count, 0);
+		if (ret)
+			goto cleanup;
 	}
 
-	bio_set_map_data(bmd, bio);
+	bio_set_map_data(bmd, bio, iov, iov_count);
 	return bio;
 cleanup:
 	bio_for_each_segment(bvec, bio, i)
@@ -591,6 +645,28 @@ struct bio *bio_copy_user(struct request_queue *q, unsigned long uaddr,
 	return ERR_PTR(ret);
 }
 
+/**
+ *	bio_copy_user	-	copy user data to bio
+ *	@q: destination block queue
+ *	@uaddr: start of user address
+ *	@len: length in bytes
+ *	@write_to_vm: bool indicating writing to pages or not
+ *
+ *	Prepares and returns a bio for indirect user io, bouncing data
+ *	to/from kernel pages as necessary. Must be paired with
+ *	call bio_uncopy_user() on io completion.
+ */
+struct bio *bio_copy_user(struct request_queue *q, unsigned long uaddr,
+			  unsigned int len, int write_to_vm)
+{
+	struct sg_iovec iov;
+
+	iov.iov_base = (void __user *)uaddr;
+	iov.iov_len = len;
+
+	return bio_copy_user_iov(q, &iov, 1, write_to_vm);
+}
+
 static struct bio *__bio_map_user_iov(struct request_queue *q,
 				      struct block_device *bdev,
 				      struct sg_iovec *iov, int iov_count,
@@ -327,6 +327,8 @@ extern struct bio *bio_map_kern(struct request_queue *, void *, unsigned int,
 extern void bio_set_pages_dirty(struct bio *bio);
 extern void bio_check_pages_dirty(struct bio *bio);
 extern struct bio *bio_copy_user(struct request_queue *, unsigned long, unsigned int, int);
+extern struct bio *bio_copy_user_iov(struct request_queue *, struct sg_iovec *,
+				     int, int);
 extern int bio_uncopy_user(struct bio *);
 void zero_fill_bio(struct bio *bio);
@@ -112,6 +112,7 @@ enum rq_flag_bits {
 	__REQ_RW_SYNC,		/* request is sync (O_DIRECT) */
 	__REQ_ALLOCED,		/* request came from our alloc pool */
 	__REQ_RW_META,		/* metadata io request */
+	__REQ_COPY_USER,	/* contains copies of user pages */
 	__REQ_NR_BITS,		/* stops here */
 };
 
@@ -133,6 +134,7 @@ enum rq_flag_bits {
 #define REQ_RW_SYNC	(1 << __REQ_RW_SYNC)
 #define REQ_ALLOCED	(1 << __REQ_ALLOCED)
 #define REQ_RW_META	(1 << __REQ_RW_META)
+#define REQ_COPY_USER	(1 << __REQ_COPY_USER)
 
 #define BLK_MAX_CDB	16
 
@@ -533,8 +535,13 @@ extern unsigned long blk_max_low_pfn, blk_max_pfn;
 * BLK_BOUNCE_ANY	: don't bounce anything
 * BLK_BOUNCE_ISA	: bounce pages above ISA DMA boundary
 */
+
+#if BITS_PER_LONG == 32
 #define BLK_BOUNCE_HIGH		((u64)blk_max_low_pfn << PAGE_SHIFT)
-#define BLK_BOUNCE_ANY		((u64)blk_max_pfn << PAGE_SHIFT)
+#else
+#define BLK_BOUNCE_HIGH		-1ULL
+#endif
+#define BLK_BOUNCE_ANY		(-1ULL)
 #define BLK_BOUNCE_ISA		(ISA_DMA_THRESHOLD)
@@ -910,6 +910,7 @@ struct mode_page_header {
 #ifdef __KERNEL__
 #include <linux/fs.h>		/* not really needed, later.. */
 #include <linux/device.h>
+#include <linux/list.h>
 
 struct packet_command
 {
@@ -934,7 +935,7 @@ struct packet_command
 /* Uniform cdrom data structures for cdrom.c */
 struct cdrom_device_info {
 	struct cdrom_device_ops  *ops;	/* link to device_ops */
-	struct cdrom_device_info *next;	/* next device_info for this major */
+	struct list_head list;		/* linked list of all device_info */
 	struct gendisk *disk;		/* matching block layer disk */
 	void *handle;			/* driver-dependent data */
 /* specifications */
@@ -994,7 +995,7 @@ extern int cdrom_ioctl(struct file *file, struct cdrom_device_info *cdi,
 extern int cdrom_media_changed(struct cdrom_device_info *);
 
 extern int register_cdrom(struct cdrom_device_info *cdi);
-extern int unregister_cdrom(struct cdrom_device_info *cdi);
+extern void unregister_cdrom(struct cdrom_device_info *cdi);
 
 typedef struct {
     int data;