Commit 774921a2 authored by David Brownell, committed by Greg Kroah-Hartman

[PATCH] urb->transfer_flags updates

This patch fixes two problems that have already been discussed
on this list:

- USB_QUEUE_BULK is rather pointless (and UHCI-specific).
   If drivers really want only one bulk urb queued at a time,
   they just shouldn't issue such urbs till others complete.

     FIX:  remove it entirely.
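
   For illustration only -- a hedged sketch with hypothetical names
   (my_read_complete, MY_NURBS, MY_BUFLEN, the dev->... fields), using
   the FILL_BULK_URB and usb_submit_urb calls seen elsewhere in this
   patch -- this is all "queuing" means now; no flag is involved:

        /* queue several bulk-in urbs at once; a driver that wants
         * only one outstanding urb simply submits one at a time */
        for (i = 0; i < MY_NURBS; i++) {
                FILL_BULK_URB(dev->urb[i], dev->udev,
                        usb_rcvbulkpipe(dev->udev, dev->bulk_in_ep),
                        dev->buf[i], MY_BUFLEN,
                        my_read_complete, dev);
                err = usb_submit_urb(dev->urb[i], GFP_KERNEL);
                if (err)
                        break;
        }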

- USB_DISABLE_SPD is horribly named (based on a UHCI flag).
   What it really does is turn non-ISO short reads into errors.

     FIX:  rename it.  Now it's URB_SHORT_NOT_OK.
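
   A hedged sketch (hypothetical names) of asking for that behavior on
   a bulk read; a transfer that returns fewer bytes than requested then
   completes with urb->status == -EREMOTEIO instead of 0:

        FILL_BULK_URB(urb, udev,
                usb_rcvbulkpipe(udev, bulk_in_ep),
                buf, len, my_read_complete, dev);
        /* report short reads as errors */
        urb->transfer_flags |= URB_SHORT_NOT_OK;
        err = usb_submit_urb(urb, GFP_KERNEL);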

I changed all the drivers using these two flags, including
corresponding changes in the "usbfs" API.

Most of the patch by volume is doc updates:

- Documentation/usb/URB.txt hadn't been updated in two years (!)
   and was pretty out of date.  It also had many details that were
   inappropriately specific to usb-uhci.
- Most of the URB flags weren't even commented as to intent.
- DISABLE_SPD was often documented as if it were SHORT_IS_OK.
- There was still some doc saying how iso should use urb->next.
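
   The model that replaces the urb->next ring is resubmission from the
   completion handler; a hedged sketch with hypothetical names:

        static void my_iso_complete(struct urb *urb)
        {
                struct my_cam *cam = urb->context;
                int i;

                /* ... consume urb->iso_frame_desc[i].actual_length ... */

                if (!cam->streaming)    /* e.g. webcam turned off */
                        return;

                /* keep the stream going: resubmit this same urb */
                for (i = 0; i < urb->number_of_packets; i++)
                        urb->iso_frame_desc[i].status = 0;
                urb->dev = cam->udev;
                usb_submit_urb(urb, GFP_ATOMIC);
        }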

There are also some related updates:

- Some of the submit sanity checks for transfer flags were
   overly broad ... SHORT_NOT_OK is only for reads, NO_FSBR
   is for non-periodic requests, ZERO_PACKET only for writes.
- The ohci-hcd code thought SHORT_NOT_OK worked for ISO.
- The uhci-hcd code thought QUEUE_BULK applied to non-bulk transfers.

Note that this patch doesn't update any of the "old" HCDs,
including usb-ohci-hcd.

In the case of usb-uhci{,-hcd} it'd have been painful to fix
the QUEUE_BULK logic.  That logic was, I think, the original
reason to have that flag!  So I count switching to "uhci-hcd"
as a win already ... :)
parent 9ad568e6
@@ -22,9 +22,8 @@ USB-specific:
 -ENODEV		specified USB-device or bus doesn't exist
--ENXIO		a control or interrupt URB is already queued to this endpoint;
-		or (UHCI only) a bulk URB is already queued to this endpoint
-		and USB_QUEUE_BULK wasn't used
+-ENXIO		host controller driver does not support queuing of this type
+		of urb.  (treat as a host controller bug.)
 -EINVAL		a) Invalid transfer type specified (or not supported)
 		b) Invalid interrupt interval (0<=n<256)
@@ -90,9 +89,9 @@ one or more packets could finish before an error stops further endpoint I/O.
 		greater than either the max packet size of the
 		endpoint or the remaining buffer size.  "Babble".
--EREMOTEIO	The endpoint returned less than max packet size
-		and that amount did not fill the specified buffer
-		(and USB_DISBLE_SPD was not set in transfer_flags)
+-EREMOTEIO	The data read from the endpoint did not fill the
+		specified buffer, and URB_SHORT_NOT_OK was set in
+		urb->transfer_flags.
 -ETIMEDOUT	transfer timed out, NAK
......
@@ -176,7 +176,6 @@ static int hci_usb_rx_submit(struct hci_usb *husb, struct urb *urb)
 	pipe = usb_rcvbulkpipe(husb->udev, husb->bulk_in_ep);
 	FILL_BULK_URB(urb, husb->udev, pipe, skb->data, size, hci_usb_rx_complete, skb);
-	urb->transfer_flags = USB_QUEUE_BULK;
 	skb_queue_tail(&husb->pending_q, skb);
 	err = usb_submit_urb(urb, GFP_ATOMIC);
@@ -318,7 +317,7 @@ static inline int hci_usb_send_bulk(struct hci_usb *husb, struct sk_buff *skb)
 	FILL_BULK_URB(urb, husb->udev, pipe, skb->data, skb->len,
 			hci_usb_tx_complete, skb);
-	urb->transfer_flags = USB_QUEUE_BULK | USB_ZERO_PACKET;
+	urb->transfer_flags = USB_ZERO_PACKET;
 	BT_DBG("%s urb %p len %d", husb->hdev.name, urb, skb->len);
......
@@ -273,7 +273,7 @@ static void irda_usb_change_speed_xbofs(struct irda_usb_cb *self)
 			frame, IRDA_USB_SPEED_MTU,
 			speed_bulk_callback, self);
 	urb->transfer_buffer_length = USB_IRDA_HEADER;
-	urb->transfer_flags = USB_QUEUE_BULK | USB_ASYNC_UNLINK;
+	urb->transfer_flags = USB_ASYNC_UNLINK;
 	urb->timeout = MSECS_TO_JIFFIES(100);
 	/* Irq disabled -> GFP_ATOMIC */
@@ -410,7 +410,7 @@ static int irda_usb_hard_xmit(struct sk_buff *skb, struct net_device *netdev)
 	urb->transfer_buffer_length = skb->len;
 	/* Note : unlink *must* be Asynchronous because of the code in
 	 * irda_usb_net_timeout() -> call in irq - Jean II */
-	urb->transfer_flags = USB_QUEUE_BULK | USB_ASYNC_UNLINK;
+	urb->transfer_flags = USB_ASYNC_UNLINK;
 	/* This flag (USB_ZERO_PACKET) indicates that what we send is not
 	 * a continuous stream of data but separate packets.
 	 * In this case, the USB layer will insert an empty USB frame (TD)
@@ -736,7 +736,6 @@ static void irda_usb_submit(struct irda_usb_cb *self, struct sk_buff *skb, struc
 			usb_rcvbulkpipe(self->usbdev, self->bulk_in_ep),
 			skb->data, skb->truesize,
 			irda_usb_receive, skb);
-	urb->transfer_flags = USB_QUEUE_BULK;
 	/* Note : unlink *must* be synchronous because of the code in
 	 * irda_usb_net_close() -> free the skb - Jean II */
 	urb->status = 0;
......
@@ -537,7 +537,6 @@ static int bluetooth_write (struct tty_struct * tty, int from_user, const unsign
 		/* build up our urb */
 		FILL_BULK_URB (urb, bluetooth->dev, usb_sndbulkpipe(bluetooth->dev, bluetooth->bulk_out_endpointAddress),
 				urb->transfer_buffer, buffer_size, bluetooth_write_bulk_callback, bluetooth);
-		urb->transfer_flags |= USB_QUEUE_BULK;
 		/* send it down the pipe */
 		retval = usb_submit_urb(urb, GFP_KERNEL);
......
@@ -780,7 +780,7 @@ static int proc_submiturb(struct dev_state *ps, void *arg)
 	if (copy_from_user(&uurb, arg, sizeof(uurb)))
 		return -EFAULT;
-	if (uurb.flags & ~(USBDEVFS_URB_ISO_ASAP|USBDEVFS_URB_DISABLE_SPD|USBDEVFS_URB_QUEUE_BULK|
+	if (uurb.flags & ~(USBDEVFS_URB_ISO_ASAP|USBDEVFS_URB_SHORT_NOT_OK|
 			   USB_NO_FSBR|USB_ZERO_PACKET))
 		return -EINVAL;
 	if (!uurb.buffer)
......
@@ -105,7 +105,7 @@ struct urb * usb_get_urb(struct urb *urb)
  * any transfer flags.
  *
  * Successful submissions return 0; otherwise this routine returns a
- * negative error number.  If the submission is successful, the complete
+ * negative error number.  If the submission is successful, the complete()
  * fuction of the urb will be called when the USB host driver is
  * finished with the urb (either a successful transmission, or some
  * error case.)
@@ -117,8 +117,8 @@ struct urb * usb_get_urb(struct urb *urb)
  * driver which issued the request.  The completion handler may then
  * immediately free or reuse that URB.
  *
- * Bulk URBs will be queued if the USB_QUEUE_BULK transfer flag is set
- * in the URB.  This can be used to maximize bandwidth utilization by
+ * Bulk URBs may be queued by submitting an URB to an endpoint before
+ * previous ones complete.  This can maximize bandwidth utilization by
  * letting the USB controller start work on the next URB without any
  * delay to report completion (scheduling and processing an interrupt)
  * and then submit that next request.
@@ -128,16 +128,19 @@ struct urb * usb_get_urb(struct urb *urb)
  *
  * Reserved Bandwidth Transfers:
  *
- * Periodic URBs (interrupt or isochronous) are completed repeatedly,
+ * Periodic URBs (interrupt or isochronous) are performed repeatedly.
+ *
+ * For interrupt requests this is (currently) automagically done
  * until the original request is aborted.  When the completion callback
  * indicates the URB has been unlinked (with a special status code),
  * control of that URB returns to the device driver.  Otherwise, the
  * completion handler does not control the URB, and should not change
  * any of its fields.
  *
- * Note that isochronous URBs should be submitted in a "ring" data
- * structure (using urb->next) to ensure that they are resubmitted
- * appropriately.
+ * For isochronous requests, the completion handler is expected to
+ * submit an urb, typically resubmitting its parameter, until drivers
+ * stop wanting data transfers.  (For example, audio playback might have
+ * finished, or a webcam turned off.)
  *
  * If the USB subsystem can't reserve sufficient bandwidth to perform
  * the periodic request, and bandwidth reservation is being done for
@@ -274,17 +277,18 @@ int usb_submit_urb(struct urb *urb, int mem_flags)
 	/* enforce simple/standard policy */
 	allowed = USB_ASYNC_UNLINK;	// affects later unlinks
-	allowed |= USB_NO_FSBR;		// only affects UHCI
 	switch (temp) {
-	case PIPE_CONTROL:
-		allowed |= USB_DISABLE_SPD;
-		break;
 	case PIPE_BULK:
-		allowed |= USB_DISABLE_SPD | USB_QUEUE_BULK
-			| USB_ZERO_PACKET | URB_NO_INTERRUPT;
-		break;
-	case PIPE_INTERRUPT:
-		allowed |= USB_DISABLE_SPD;
+		allowed |= URB_NO_INTERRUPT;
+		if (is_out)
+			allowed |= USB_ZERO_PACKET;
+		/* FALLTHROUGH */
+	case PIPE_CONTROL:
+		allowed |= USB_NO_FSBR;	/* only affects UHCI */
+		/* FALLTHROUGH */
+	default:			/* all non-iso endpoints */
+		if (!is_out)
+			allowed |= URB_SHORT_NOT_OK;
 		break;
 	case PIPE_ISOCHRONOUS:
 		allowed |= USB_ISO_ASAP;
......
@@ -170,7 +170,7 @@ static void ehci_urb_complete (
 	/* cleanse status if we saw no error */
 	if (likely (urb->status == -EINPROGRESS)) {
 		if (urb->actual_length != urb->transfer_buffer_length
-				&& (urb->transfer_flags & USB_DISABLE_SPD))
+				&& (urb->transfer_flags & URB_SHORT_NOT_OK))
 			urb->status = -EREMOTEIO;
 		else
 			urb->status = 0;
@@ -202,7 +202,7 @@ static void ehci_urb_done (
 	if (likely (urb->status == -EINPROGRESS)) {
 		if (urb->actual_length != urb->transfer_buffer_length
-				&& (urb->transfer_flags & USB_DISABLE_SPD))
+				&& (urb->transfer_flags & URB_SHORT_NOT_OK))
 			urb->status = -EREMOTEIO;
 		else
 			urb->status = 0;
@@ -793,7 +793,7 @@ submit_async (
 		last_qtd->hw_next = hw_next;
 		/* previous urb allows short rx?  maybe optimize. */
-		if (!(last_qtd->urb->transfer_flags & USB_DISABLE_SPD)
+		if (!(last_qtd->urb->transfer_flags & URB_SHORT_NOT_OK)
 				&& (epnum & 0x10)) {
 			// only the last QTD for now
 			last_qtd->hw_alt_next = hw_next;
......
@@ -726,10 +726,6 @@ static void td_done (struct urb *urb, struct td *td)
 		int	dlen = 0;
 		cc = (tdPSW >> 12) & 0xF;
-		if (! ((urb->transfer_flags & USB_DISABLE_SPD)
-				&& (cc == TD_DATAUNDERRUN)))
-			cc = TD_CC_NOERROR;
 		if (usb_pipeout (urb->pipe))
 			dlen = urb->iso_frame_desc [td->index].length;
 		else
@@ -758,9 +754,9 @@ static void td_done (struct urb *urb, struct td *td)
 			usb_pipeendpoint (urb->pipe),
 			usb_pipeout (urb->pipe));
-		/* update packet status if needed (short may be ok) */
-		if (((urb->transfer_flags & USB_DISABLE_SPD) != 0
-				&& cc == TD_DATAUNDERRUN))
+		/* update packet status if needed (short is normally ok) */
+		if (cc == TD_DATAUNDERRUN
+				&& !(urb->transfer_flags & URB_SHORT_NOT_OK))
 			cc = TD_CC_NOERROR;
 		if (cc != TD_CC_NOERROR) {
 			spin_lock (&urb->lock);
......
@@ -491,7 +491,7 @@ static int uhci_fixup_toggle(struct urb *urb, unsigned int toggle)
 }
 /* This function will append one URB's QH to another URB's QH. This is for */
-/* USB_QUEUE_BULK support for bulk transfers and soon implicitily for */
+/* queuing bulk transfers and soon implicitily for */
 /* control transfers */
 static void uhci_append_queued_urb(struct uhci_hcd *uhci, struct urb *eurb, struct urb *urb)
 {
@@ -840,7 +840,7 @@ static int uhci_submit_control(struct uhci_hcd *uhci, struct urb *urb)
 	 */
 	destination ^= (USB_PID_SETUP ^ usb_packetid(urb->pipe));
-	if (!(urb->transfer_flags & USB_DISABLE_SPD))
+	if (!(urb->transfer_flags & URB_SHORT_NOT_OK))
 		status |= TD_CTRL_SPD;
 	/*
@@ -1006,7 +1006,7 @@ static int uhci_result_control(struct uhci_hcd *uhci, struct urb *urb)
 	/* Check to see if we received a short packet */
 	if (uhci_actual_length(td_status(td)) < uhci_expected_length(td_token(td))) {
-		if (urb->transfer_flags & USB_DISABLE_SPD) {
+		if (urb->transfer_flags & URB_SHORT_NOT_OK) {
 			ret = -EREMOTEIO;
 			goto err;
 		}
@@ -1128,7 +1128,7 @@ static int uhci_result_interrupt(struct uhci_hcd *uhci, struct urb *urb)
 			goto td_error;
 		if (uhci_actual_length(td_status(td)) < uhci_expected_length(td_token(td))) {
-			if (urb->transfer_flags & USB_DISABLE_SPD) {
+			if (urb->transfer_flags & URB_SHORT_NOT_OK) {
 				ret = -EREMOTEIO;
 				goto err;
 			} else
@@ -1210,7 +1210,7 @@ static int uhci_submit_bulk(struct uhci_hcd *uhci, struct urb *urb, struct urb *
 	/* 3 errors */
 	status = TD_CTRL_ACTIVE | uhci_maxerr(3);
-	if (!(urb->transfer_flags & USB_DISABLE_SPD))
+	if (!(urb->transfer_flags & URB_SHORT_NOT_OK))
 		status |= TD_CTRL_SPD;
 	/*
@@ -1276,7 +1276,7 @@ static int uhci_submit_bulk(struct uhci_hcd *uhci, struct urb *urb, struct urb *
 	/* Always assume breadth first */
 	uhci_insert_tds_in_qh(qh, urb, 1);
-	if (urb->transfer_flags & USB_QUEUE_BULK && eurb)
+	if (eurb)
 		uhci_append_queued_urb(uhci, eurb, urb);
 	else
 		uhci_insert_qh(uhci, uhci->skel_bulk_qh, urb);
@@ -1470,10 +1470,6 @@ static int uhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, int mem_flags)
 	spin_lock_irqsave(&uhci->urb_list_lock, flags);
 	eurb = uhci_find_urb_ep(uhci, urb);
-	if (eurb && !(urb->transfer_flags & USB_QUEUE_BULK)) {
-		spin_unlock_irqrestore(&uhci->urb_list_lock, flags);
-		return -ENXIO;
-	}
 	if (!uhci_alloc_urb_priv(uhci, urb)) {
 		spin_unlock_irqrestore(&uhci->urb_list_lock, flags);
@@ -1482,10 +1478,15 @@ static int uhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, int mem_flags)
 	switch (usb_pipetype(urb->pipe)) {
 	case PIPE_CONTROL:
+		if (eurb)
+			ret = -ENXIO;		/* no control queueing yet */
+		else
 			ret = uhci_submit_control(uhci, urb);
 		break;
 	case PIPE_INTERRUPT:
-		if (urb->bandwidth == 0) {	/* not yet checked/allocated */
+		if (eurb)
+			ret = -ENXIO;		/* no interrupt queueing yet */
+		else if (urb->bandwidth == 0) {	/* not yet checked/allocated */
 			bustime = usb_check_bandwidth(urb->dev, urb);
 			if (bustime < 0)
 				ret = bustime;
......
@@ -593,7 +593,6 @@ static int se401_start_stream(struct usb_se401 *se401)
 			se401->sbuf[i].data, SE401_PACKETSIZE,
 			se401_video_irq,
 			se401);
-		urb->transfer_flags |= USB_QUEUE_BULK;
 		se401->urb[i]=urb;
......
@@ -776,7 +776,6 @@ static int stv680_start_stream (struct usb_stv *stv680)
 			stv680->sbuf[i].data, stv680->rawbufsize,
 			stv680_video_irq, stv680);
 		urb->timeout = PENCAM_TIMEOUT * 2;
-		urb->transfer_flags |= USB_QUEUE_BULK;
 		stv680->urb[i] = urb;
 		err = usb_submit_urb (stv680->urb[i], GFP_KERNEL);
 		if (err)
......
@@ -1317,9 +1317,6 @@ static void rx_submit (struct usbnet *dev, struct urb *urb, int flags)
 		usb_rcvbulkpipe (dev->udev, dev->driver_info->in),
 		skb->data, size, rx_complete, skb);
 	urb->transfer_flags |= USB_ASYNC_UNLINK;
-#ifdef	REALLY_QUEUE
-	urb->transfer_flags |= USB_QUEUE_BULK;
-#endif
 #if 0
 	// Idle-but-posted reads with UHCI really chew up
 	// PCI bandwidth unless FSBR is disabled
@@ -1802,9 +1799,6 @@ static int usbnet_start_xmit (struct sk_buff *skb, struct net_device *net)
 		usb_sndbulkpipe (dev->udev, info->out),
 		skb->data, skb->len, tx_complete, skb);
 	urb->transfer_flags |= USB_ASYNC_UNLINK;
-#ifdef	REALLY_QUEUE
-	urb->transfer_flags |= USB_QUEUE_BULK;
-#endif
 	// FIXME urb->timeout = ... jiffies ... ;
 	spin_lock_irqsave (&dev->txq.lock, flags);
......
@@ -171,8 +171,6 @@ static int empeg_open (struct usb_serial_port *port, struct file *filp)
 		empeg_read_bulk_callback,
 		port);
-	port->read_urb->transfer_flags |= USB_QUEUE_BULK;
 	result = usb_submit_urb(port->read_urb, GFP_KERNEL);
 	if (result)
@@ -270,8 +268,6 @@ static int empeg_write (struct usb_serial_port *port, int from_user, const unsig
 			empeg_write_bulk_callback,
 			port);
-		urb->transfer_flags |= USB_QUEUE_BULK;
 		/* send it down the pipe */
 		status = usb_submit_urb(urb, GFP_ATOMIC);
 		if (status) {
@@ -424,8 +420,6 @@ static void empeg_read_bulk_callback (struct urb *urb)
 		empeg_read_bulk_callback,
 		port);
-	port->read_urb->transfer_flags |= USB_QUEUE_BULK;
 	result = usb_submit_urb(port->read_urb, GFP_ATOMIC);
 	if (result)
......
@@ -1460,9 +1460,6 @@ static void send_more_port_data(struct edgeport_serial *edge_serial, struct edge
 		usb_sndbulkpipe(edge_serial->serial->dev, edge_serial->bulk_out_endpoint),
 		buffer, count+2, edge_bulk_out_data_callback, edge_port);
-	/* set the USB_BULK_QUEUE flag so that we can shove a bunch of urbs at once down the pipe */
-	urb->transfer_flags |= USB_QUEUE_BULK;
 	urb->dev = edge_serial->serial->dev;
 	status = usb_submit_urb(urb, GFP_ATOMIC);
 	if (status) {
@@ -2488,9 +2485,6 @@ static int write_cmd_usb (struct edgeport_port *edge_port, unsigned char *buffer
 		usb_sndbulkpipe(edge_serial->serial->dev, edge_serial->bulk_out_endpoint),
 		buffer, length, edge_bulk_out_cmd_callback, edge_port);
-	/* set the USB_BULK_QUEUE flag so that we can shove a bunch of urbs at once down the pipe */
-	urb->transfer_flags |= USB_QUEUE_BULK;
 	edge_port->commandPending = TRUE;
 	status = usb_submit_urb(urb, GFP_ATOMIC);
......
@@ -311,7 +311,6 @@ static int ir_open (struct usb_serial_port *port, struct file *filp)
 		port->read_urb->transfer_buffer_length,
 		ir_read_bulk_callback,
 		port);
-	port->read_urb->transfer_flags = USB_QUEUE_BULK;
 	result = usb_submit_urb(port->read_urb, GFP_KERNEL);
 	if (result)
 		err("%s - failed submitting read urb, error %d", __FUNCTION__, result);
@@ -389,9 +388,7 @@ static int ir_write (struct usb_serial_port *port, int from_user, const unsigned
 		ir_write_bulk_callback,
 		port);
-	port->write_urb->transfer_flags
-		= USB_QUEUE_BULK
-		| USB_ZERO_PACKET;
+	port->write_urb->transfer_flags = USB_ZERO_PACKET;
 	result = usb_submit_urb (port->write_urb, GFP_ATOMIC);
 	if (result)
@@ -501,8 +498,6 @@ static void ir_read_bulk_callback (struct urb *urb)
 		ir_read_bulk_callback,
 		port);
-	port->read_urb->transfer_flags = USB_QUEUE_BULK;
 	result = usb_submit_urb(port->read_urb, GFP_ATOMIC);
 	if (result)
@@ -598,9 +593,7 @@ static void ir_set_termios (struct usb_serial_port *port, struct termios *old_te
 		ir_write_bulk_callback,
 		port);
-	port->write_urb->transfer_flags
-		= USB_QUEUE_BULK
-		| USB_ZERO_PACKET;
+	port->write_urb->transfer_flags = USB_ZERO_PACKET;
 	result = usb_submit_urb (port->write_urb, GFP_KERNEL);
 	if (result)
......
@@ -391,7 +391,6 @@ static int klsi_105_open (struct usb_serial_port *port, struct file *filp)
 		port->read_urb->transfer_buffer_length,
 		klsi_105_read_bulk_callback,
 		port);
-	port->read_urb->transfer_flags |= USB_QUEUE_BULK;
 	rc = usb_submit_urb(port->read_urb, GFP_KERNEL);
 	if (rc) {
@@ -537,8 +536,6 @@ static int klsi_105_write (struct usb_serial_port *port, int from_user,
 			URB_TRANSFER_BUFFER_SIZE,
 			klsi_105_write_bulk_callback,
 			port);
-		urb->transfer_flags |= USB_QUEUE_BULK;
 		/* send the data out the bulk port */
 		result = usb_submit_urb(urb, GFP_ATOMIC);
......
@@ -390,7 +390,6 @@ static int visor_write (struct usb_serial_port *port, int from_user, const unsig
 			port->bulk_out_endpointAddress),
 			buffer, count,
 			visor_write_bulk_callback, port);
-		urb->transfer_flags |= USB_QUEUE_BULK;
 		/* send it down the pipe */
 		status = usb_submit_urb(urb, GFP_ATOMIC);
......
@@ -466,7 +466,7 @@ const struct usb_device_id *usb_match_id(struct usb_device *dev,
  * than changeable ("unstable") ones like bus numbers or device addresses.
  *
  * With a partial exception for devices connected to USB 2.0 root hubs, these
- * identifiers are also predictable: so long as the device tree isn't changed,
+ * identifiers are also predictable.  So long as the device tree isn't changed,
  * plugging any USB device into a given hub port always gives it the same path.
  * Because of the use of "companion" controllers, devices connected to ports on
  * USB 2.0 root hubs (EHCI host controllers) will get one path ID if they are
@@ -722,16 +722,14 @@ extern void usb_deregister_dev(int num_minors, int start_minor);
 /*
  * urb->transfer_flags:
  *
- * FIXME should be URB_* flags
+ * FIXME should _all_ be URB_* flags
  */
-#define USB_DISABLE_SPD		0x0001
-#define USB_ISO_ASAP		0x0002
-#define USB_ASYNC_UNLINK	0x0008
-#define USB_QUEUE_BULK		0x0010
-#define USB_NO_FSBR		0x0020
+#define URB_SHORT_NOT_OK	0x0001	/* report short reads as errors */
+#define USB_ISO_ASAP		0x0002	/* iso-only, urb->start_frame ignored */
+#define USB_ASYNC_UNLINK	0x0008	/* usb_unlink_urb() returns asap */
+#define USB_NO_FSBR		0x0020	/* UHCI-specific */
 #define USB_ZERO_PACKET		0x0040	/* Finish bulk OUTs with short packet */
 #define URB_NO_INTERRUPT	0x0080	/* HINT: no non-error interrupt needed */
-					/* ... less overhead for QUEUE_BULK */
 #define USB_TIMEOUT_KILLED	0x1000	/* only set by HCD! */
 struct usb_iso_packet_descriptor {
@@ -777,9 +775,9 @@ typedef void (*usb_complete_t)(struct urb *);
  * @actual_length: This is read in non-iso completion functions, and
  *	it tells how many bytes (out of transfer_buffer_length) were
  *	transferred.  It will normally be the same as requested, unless
- *	either an error was reported or a short read was performed and
- *	the USB_DISABLE_SPD transfer flag was used to say that such
- *	short reads are not errors.
+ *	either an error was reported or a short read was performed.
+ *	The URB_SHORT_NOT_OK transfer flag may be used to make such
+ *	short reads be reported as errors.
  * @setup_packet: Only used for control transfers, this points to eight bytes
  *	of setup data.  Control transfers always start by sending this data
  *	to the device.  Then transfer_buffer is read or written, if needed.
@@ -814,14 +812,10 @@ typedef void (*usb_complete_t)(struct urb *);
  *
  * All non-isochronous URBs must also initialize
  * transfer_buffer and transfer_buffer_length.  They may provide the
- * USB_DISABLE_SPD transfer flag, indicating that short reads are
- * not to be treated as errors.
+ * URB_SHORT_NOT_OK transfer flag, indicating that short reads are
+ * to be treated as errors.
  *
- * Bulk URBs may pass the USB_QUEUE_BULK transfer flag, telling the host
- * controller driver never to report an error if several bulk requests get
- * queued to the same endpoint.  Such queueing supports more efficient use
- * of bus bandwidth, minimizing delays due to interrupts and scheduling,
- * if the host controller hardware is smart enough.  Bulk URBs can also
+ * Bulk URBs may
  * use the USB_ZERO_PACKET transfer flag, indicating that bulk OUT transfers
  * should always terminate with a short packet, even if it means adding an
  * extra zero length packet.
@@ -853,7 +847,7 @@ typedef void (*usb_complete_t)(struct urb *);
  * the quality of service is only "best effort".  Callers provide specially
  * allocated URBs, with number_of_packets worth of iso_frame_desc structures
  * at the end.  Each such packet is an individual ISO transfer.  Isochronous
- * URBs are normally queued (no flag like USB_BULK_QUEUE is needed) so that
+ * URBs are normally queued, submitted by drivers to arrange that
  * transfers are at least double buffered, and then explicitly resubmitted
  * in completion handlers, so
  * that data (such as audio or video) streams at as constant a rate as the
@@ -892,7 +886,7 @@ struct urb
 	struct usb_device *dev;		/* (in) pointer to associated device */
 	unsigned int pipe;		/* (in) pipe information */
 	int status;			/* (return) non-ISO status */
-	unsigned int transfer_flags;	/* (in) USB_DISABLE_SPD | ...*/
+	unsigned int transfer_flags;	/* (in) URB_SHORT_NOT_OK | ...*/
 	void *transfer_buffer;		/* (in) associated data buffer */
 	int transfer_buffer_length;	/* (in) data buffer length */
 	int actual_length;		/* (return) actual transfer length */
......
@@ -78,9 +78,8 @@ struct usbdevfs_connectinfo {
 	unsigned char slow;
 };
-#define USBDEVFS_URB_DISABLE_SPD	1
+#define USBDEVFS_URB_SHORT_NOT_OK	1
 #define USBDEVFS_URB_ISO_ASAP		2
-#define USBDEVFS_URB_QUEUE_BULK		0x10
 #define USBDEVFS_URB_TYPE_ISO		0
 #define USBDEVFS_URB_TYPE_INTERRUPT	1
......