Commit bce750b1 authored by Paolo Bonzini, committed by James Bottomley

[SCSI] virtio-scsi: release sg_lock after add_buf

We do not need the sglist after calling virtqueue_add_buf.  Hence we
can "pipeline" the locked operations and start preparing the sglist
for the next request while we kick the virtqueue.
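
For context, this is roughly how the locking in virtscsi_kick_cmd() ends up
ordered after the change.  The sketch is reconstructed from the hunk below;
the mapping step (shown as virtscsi_map_cmd()) and the exact shape of the
surrounding code are assumptions, not a verbatim quote of the file:

	/* sg_lock protects the shared vscsi->sg scatterlist while it is
	 * being filled and until virtqueue_add_buf() has copied it into
	 * the ring. */
	spin_lock_irqsave(&vscsi->sg_lock, flags);
	virtscsi_map_cmd(vscsi, cmd, &out_num, &in_num, req_size, resp_size);

	spin_lock(&vq->vq_lock);
	ret = virtqueue_add_buf(vq->vq, vscsi->sg, out_num, in_num, cmd, gfp);

	/* The sglist is not needed past add_buf: drop sg_lock here so
	 * another CPU can start mapping its own request while this one
	 * still holds vq_lock to prepare the kick. */
	spin_unlock(&vscsi->sg_lock);

	if (ret >= 0)
		ret = virtqueue_kick_prepare(vq->vq);
	spin_unlock_irqrestore(&vq->vq_lock, flags);

	/* Notify the host with no locks held. */
	if (ret > 0)
		virtqueue_notify(vq->vq);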

Together with the previous two patches, this improves performance as
follows.  For a simple dd run with "if=/dev/sda of=/dev/null bs=128M iflag=direct"
(the source being a 10G disk, residing entirely in the host buffer cache),
the additional locking does not cause any penalty with only one dd
process, while two simultaneous dd processes see their times improve by 3%:

               number of simultaneous dd
                   1               2
 ----------------------------------------
 current        5.9958s        10.2640s
 patched        5.9531s         9.8663s

(Times are best of 10).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
parent 139fe45a
@@ -281,11 +281,11 @@ static int virtscsi_kick_cmd(struct virtio_scsi *vscsi, struct virtio_scsi_vq *v
 	spin_lock(&vq->vq_lock);
 	ret = virtqueue_add_buf(vq->vq, vscsi->sg, out_num, in_num, cmd, gfp);
+	spin_unlock(&vscsi->sg_lock);
 	if (ret >= 0)
 		ret = virtqueue_kick_prepare(vq->vq);
-	spin_unlock(&vq->vq_lock);
-	spin_unlock_irqrestore(&vscsi->sg_lock, flags);
+	spin_unlock_irqrestore(&vq->vq_lock, flags);
 	if (ret > 0)
 		virtqueue_notify(vq->vq);