- 04 Sep, 2009 25 commits
-
-
Scott Feldman authored
NIC firmware can return zero for the port MTU, so check for a non-zero value before checking for a change in port MTU. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Deprecate some old APIs; change the arguments to the stats dump all API; add a new interrupt assert API. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Bug fix: enable VLAN filtering Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Provision for multiple Rx/Tx queues: a max of 8 WQs and 8 RQs. The max for completion queues is 8+8=16 and the max for interrupt resources is 8+8+2. Add a driver/firmware interface for setting up the RSS secret key and indirection table. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Bug fix: include MAC drops in the rx_dropped netstat. Also track the Rx truncations stat at the MAC. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Some driver -> NIC firmware calls weren't guarded with a spinlock, exposing the call interface to a race between two threads. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Use netdev_alloc_skb rather than dev_alloc_skb Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
The enic WQ descriptor supports a maximum 16K buffer size, so split any send fragment larger than 16K into several descriptors. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
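A minimal, illustrative-only sketch of the splitting idea (the 16K-1 per-descriptor limit and all names here are assumptions, and printf stands in for the real enic descriptor post):

    #include <stdio.h>

    /* Assumed per-descriptor buffer limit, for illustration only. */
    #define MAX_DESC_BUF_LEN ((16 * 1024) - 1)

    /* Split one DMA-mapped fragment into chunks no larger than the limit,
     * posting one descriptor per chunk. */
    static void post_frag_as_descs(unsigned long dma_addr, unsigned int len)
    {
        while (len) {
            unsigned int chunk = len < MAX_DESC_BUF_LEN ? len : MAX_DESC_BUF_LEN;

            printf("desc: addr=0x%lx len=%u\n", dma_addr, chunk);
            dma_addr += chunk;
            len -= chunk;
        }
    }

    int main(void)
    {
        post_frag_as_descs(0x10000, 40000); /* a 40000-byte fragment -> 3 descs */
        return 0;
    }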
-
Scott Feldman authored
The A0 revision ASIC has an erratum in the on-chip RQ descriptor cache where the cache can become corrupted, causing packet buffer writes to wrong locations. The software workaround is to post a dummy RQ descriptor in the ring every 32 descriptors, forcing a flush of the cache. A0 parts are not production parts, but there are enough of them in the wild in test setups to warrant including the workaround. A1 revision ASIC parts fix the erratum. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
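A hedged sketch of the cadence only (names are hypothetical and printf stands in for posting a descriptor; the real driver builds hardware descriptors):

    #include <stdio.h>

    /* Every 32nd ring slot carries a dummy descriptor so the on-chip RQ
     * descriptor cache is flushed (A0 erratum workaround). */
    static void fill_rq_ring(unsigned int count)
    {
        unsigned int i;

        for (i = 0; i < count; i++) {
            if ((i % 32) == 0)
                printf("slot %u: dummy desc (cache flush)\n", i);
            else
                printf("slot %u: buffer desc\n", i);
        }
    }

    int main(void)
    {
        fill_rq_ring(64);
        return 0;
    }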
-
Scott Feldman authored
NIC firmware can place resources (queues, interrupts, etc.) on multiple BARs, so allow the driver to discover/map resources beyond BAR0. Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
It's hard to tell if vlans are dropping frames, since every frame given to the vlan_???_start_xmit() functions is accounted as fully transmitted by the lower device. We can test the dev_queue_xmit() return values to properly account for dropped frames. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
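A hedged, kernel-style sketch of the accounting idea (not the actual vlan_dev.c code; example_real_dev() is a hypothetical helper):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Count the frame as transmitted only when dev_queue_xmit() accepts it;
     * otherwise account a drop. skb->len is saved before the call because
     * the lower layers may free the skb. */
    static netdev_tx_t example_vlan_start_xmit(struct sk_buff *skb,
                                               struct net_device *dev)
    {
        unsigned int len = skb->len;
        int ret;

        skb->dev = example_real_dev(dev);   /* hypothetical helper */
        ret = dev_queue_xmit(skb);

        if (likely(ret == NET_XMIT_SUCCESS)) {
            dev->stats.tx_packets++;
            dev->stats.tx_bytes += len;
        } else {
            dev->stats.tx_dropped++;
        }
        return NETDEV_TX_OK;
    }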
-
Eric Dumazet authored
macvlan devices are currently not multi-queue capable. We can fix that by defining the rtnl_link_ops method get_tx_queues(), called from rtnl_create_link(). This new method gets num_tx_queues/real_num_tx_queues from the lower device. macvlan_get_tx_queues() is a copy of vlan_get_tx_queues(). Because macvlan_start_xmit() has to update netdev_queue stats only (and not dev->stats), I chose to change the tx_errors/tx_aborted_errors accounting to tx_dropped, since the netdev_queue structure doesn't define tx_errors / tx_aborted_errors. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
A few drivers still access the arguments to MDIO ioctls as an array of u16. Convert them to use struct mii_ioctl_data. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
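For reference, struct mii_ioctl_data is the same layout userspace passes through the MII ioctls; a small example (the interface name "eth0" is a placeholder) that reads the basic status register:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/mii.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ifreq ifr;
        struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
            return 1;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);     /* placeholder NIC */

        if (ioctl(fd, SIOCGMIIPHY, &ifr) == 0) {         /* fills mii->phy_id */
            mii->reg_num = MII_BMSR;                     /* basic status register */
            if (ioctl(fd, SIOCGMIIREG, &ifr) == 0)
                printf("PHY %u BMSR = 0x%04x\n",
                       (unsigned)mii->phy_id, (unsigned)mii->val_out);
        }
        close(fd);
        return 0;
    }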
-
Ben Hutchings authored
dev_ioctl() already checks capable(CAP_NET_ADMIN) before calling the driver's implementation of MDIO ioctls. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
The standard MDIO ioctl numbers are well-established and these should no longer be needed. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Hemminger authored
While perusing the vendor driver, I saw that it did not enable Vaux power unless the device was able to wake from LAN for D3cold. This might help with Rene's power issue. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dhananjay Phadke authored
Fix a perpetual while() loop when unwinding a partially mapped tx skb on DMA mapping failure. Reported-by: "Juha Leppanen" <juha_motorsportcom@luukku.com> Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com> Signed-off-by: David S. Miller <davem@davemloft.net>
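An illustrative-only sketch of a bounded unwind on mapping failure (not the netxen code; example_map_frag()/example_unmap_frag() are hypothetical helpers):

    #include <linux/errno.h>
    #include <linux/skbuff.h>

    static int example_map_all_frags(struct sk_buff *skb, int nr_frags)
    {
        int i;

        for (i = 0; i < nr_frags; i++) {
            if (example_map_frag(skb, i) < 0)    /* hypothetical */
                goto unwind;
        }
        return 0;

    unwind:
        /* Unmap only the fragments mapped so far; this loop always terminates. */
        while (--i >= 0)
            example_unmap_frag(skb, i);          /* hypothetical */
        return -ENOMEM;
    }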
-
Dhananjay Phadke authored
Remove duplicate calls to netxen_napi_add(). Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dhananjay Phadke authored
Alloc 12k skbuffs so that firmware can aggregate more packets into one buffer. This doesn't raise memory consumption since 9k skbs use 16k slab cache anyway. Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
The FCoE DDP in 82599 can be used for both an FCoE initiator and an FCoE target, depending on the indication in the F_CTL (frame control) field of the encapsulated Fibre Channel frame header (T10 Spec., FC-FS) of whether the exchange is from the responder or the originator. For the initiator, OX_ID is used for FCoE DDP, whereas for the target, RX_ID is used for FCoE DDP. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
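A hedged sketch of the xid selection described above, using the libfc frame-header definitions; this is a simplification and may not match the ixgbe source exactly:

    #include <scsi/fc/fc_fs.h>     /* struct fc_frame_header, FC_FC_EX_CTX */
    #include <scsi/fc_frame.h>     /* ntoh24() */

    /* On a received frame, pick the exchange id that keys our DDP context. */
    static u16 example_pick_ddp_xid(const struct fc_frame_header *fh)
    {
        u32 f_ctl = ntoh24(fh->fh_f_ctl);

        if (f_ctl & FC_FC_EX_CTX)
            /* Frame was sent by the exchange responder, so we are the
             * originator (initiator): our context is keyed by OX_ID. */
            return be16_to_cpu(fh->fh_ox_id);

        /* Frame came from the originator, so we are the target: RX_ID. */
        return be16_to_cpu(fh->fh_rx_id);
    }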
-
Yi Zou authored
This adds a simple selection of an FCoE tx queue based on the current CPU id, to distribute transmission of FCoE traffic evenly among multiple FCoE transmit queues. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
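A hedged sketch of what selecting by the current CPU amounts to (the adapter fields here are hypothetical, not the ixgbe structure):

    #include <linux/smp.h>
    #include <linux/types.h>

    /* Hypothetical per-adapter bookkeeping, for illustration only. */
    struct example_adapter {
        u16 fcoe_txq_offset;   /* first tx queue reserved for FCoE */
        u16 num_fcoe_txqs;     /* number of FCoE tx queues */
    };

    /* Spread FCoE transmits across the FCoE tx queues by the sending CPU. */
    static u16 example_select_fcoe_txq(const struct example_adapter *adapter)
    {
        return adapter->fcoe_txq_offset +
               (smp_processor_id() % adapter->num_fcoe_txqs);
    }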
-
Yi Zou authored
This patch adds support for multiple transmit queues to the Fibre Channel over Ethernet (FCoE) feature found in 82599. Currently, FCoE has multiple Rx queues available, along with a redirection table, which helps distribute the I/O load across multiple CPUs based on the FC exchange ID. To make this most effective, we need to provide the same layout of transmit queues to match the receive side. In particular, when Data Center Bridging (DCB) is enabled, the designated traffic class for FCoE can have dedicated queues for just FCoE traffic, without affecting any other type of traffic flow. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch updates things so that vlan tags are taken into account when setting the receive large packet maximum length. This allows the VF driver to correctly receive full sized frames when vlans are enabled. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
The igb_irq_disable/enable calls were causing virtual functions associated with the igb physical function to have their interrupts disabled. In order to prevent this from occurring, we should only clear/set the bits related to the physical function. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch adds support for the set_rx_mode netdevice operation so that igb can better support multiple unicast addresses. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 Sep, 2009 13 commits
-
-
Eric Dumazet authored
Remove a debugging aid I accidentally left in the previous 'cleanup' patch. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
vlan_dev_hard_start_xmit() & vlan_dev_hwaccel_hard_start_xmit() select txqueue number 0, instead of using the index provided by skb_get_queue_mapping(). This is not correct after commit 2e59af3d [vlan: multiqueue vlan device], because the txq->tx_packets & txq->tx_bytes changes are performed on a single location and without the right locking. The fix is to take the appropriate struct netdev_queue pointer. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
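A hedged sketch of the fix's shape: look up the netdev_queue the stack actually selected instead of hard-coding queue 0 (not the literal vlan_dev.c code):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Bump the per-queue counters on the queue recorded in the skb, which is
     * also the queue whose lock the stack holds during hard_start_xmit. */
    static void example_account_tx(struct net_device *dev, struct sk_buff *skb)
    {
        struct netdev_queue *txq =
            netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

        txq->tx_packets++;
        txq->tx_bytes += skb->len;
    }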
-
Eric Dumazet authored
Pure style cleanup patch before surgery :) Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karl Hiramoto authored
atm/br2684: netif_stop_queue() when the atm device is busy and netif_wake_queue() when we can send packets again. This patch removes the call to dev_kfree_skb() when the atm device is busy. Calling dev_kfree_skb() causes heavy packet loss when the device is under heavy load; the more correct behavior is to stop the upper layers, then wake them again when the lower device can queue packets. Signed-off-by: Karl Hiramoto <karl@hiramoto.org> Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
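A hedged sketch of the general stop/wake pattern being described (generic driver shape under assumed helpers, not the br2684 code itself):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* When the lower device cannot take the packet, stop the queue and let
     * the stack retry later instead of freeing (and losing) the skb. */
    static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        if (example_lower_dev_busy(dev)) {   /* hypothetical */
            netif_stop_queue(dev);
            return NETDEV_TX_BUSY;           /* stack keeps the skb and retries */
        }
        example_send(dev, skb);              /* hypothetical */
        return NETDEV_TX_OK;
    }

    /* Called from the lower device once it has room again. */
    static void example_tx_room_available(struct net_device *dev)
    {
        if (netif_queue_stopped(dev))
            netif_wake_queue(dev);
    }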
-
Uwe Kleine-König authored
fec_enet_mii, fec_enet_rx and fec_enet_tx are all only called by fec_enet_interrupt in interrupt context, so they must not use spin_lock_irq/spin_unlock_irq. This fixes: WARNING: at kernel/lockdep.c:2140 trace_hardirqs_on_caller+0x130/0x194() ... Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Greg Ungerer <gerg@uclinux.org> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Patrick McHardy <kaber@trash.net> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: Matt Waddel <Matt.Waddel@freescale.com> Cc: netdev@vger.kernel.org Cc: Tim Sander <tim01@vlsi.informatik.tu-darmstadt.de> Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: David S. Miller <davem@davemloft.net>
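A hedged sketch of the locking rule being fixed here: inside a handler that already runs with interrupts disabled, spin_unlock_irq() would unconditionally re-enable interrupts, so use the irqsave/irqrestore (or plain) variants instead:

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    static irqreturn_t example_isr(int irq, void *dev_id)
    {
        unsigned long flags;

        /* Wrong in this context: spin_lock_irq()/spin_unlock_irq() would
         * force interrupts back on when unlocking. */

        /* Safe: save and restore the current interrupt state. */
        spin_lock_irqsave(&example_lock, flags);
        /* ... touch shared state ... */
        spin_unlock_irqrestore(&example_lock, flags);

        return IRQ_HANDLED;
    }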
-
Uwe Kleine-König authored
mii_discover_phy is only called by fec_enet_mii (via mip->mii_func). So &fep->mii_lock is already held and mii_discover_phy must not call mii_queue which locks &fep->mii_lock, too. This was noticed by lockdep:

    =============================================
    [ INFO: possible recursive locking detected ]
    2.6.31-rc8-00038-g37d0892c #109
    ---------------------------------------------
    swapper/1 is trying to acquire lock:
     (&fep->mii_lock){-.....}, at: [<c01569f8>] mii_queue+0x2c/0xcc
    but task is already holding lock:
     (&fep->mii_lock){-.....}, at: [<c0156328>] fec_enet_interrupt+0x78/0x460
    other info that might help us debug this:
    2 locks held by swapper/1:
     #0:  (rtnl_mutex){+.+.+.}, at: [<c0183534>] rtnl_lock+0x18/0x20
     #1:  (&fep->mii_lock){-.....}, at: [<c0156328>] fec_enet_interrupt+0x78/0x460
    stack backtrace:
    Backtrace:
    [<c00226fc>] (dump_backtrace+0x0/0x108) from [<c01eac14>] (dump_stack+0x18/0x1c)
     r6:c781d118 r5:c03e41d8 r4:00000001
    [<c01eabfc>] (dump_stack+0x0/0x1c) from [<c005bae4>] (__lock_acquire+0x1a20/0x1a88)
    [<c005a0c4>] (__lock_acquire+0x0/0x1a88) from [<c005bbac>] (lock_acquire+0x60/0x74)
    [<c005bb4c>] (lock_acquire+0x0/0x74) from [<c01edda8>] (_spin_lock_irqsave+0x54/0x68)
     r7:60000093 r6:c01569f8 r5:c785e468 r4:00000000
    [<c01edd54>] (_spin_lock_irqsave+0x0/0x68) from [<c01569f8>] (mii_queue+0x2c/0xcc)
     r7:c785e468 r6:c0156b24 r5:600a0000 r4:c785e000
    [<c01569cc>] (mii_queue+0x0/0xcc) from [<c0156b78>] (mii_discover_phy+0x54/0xa8)
     r8:00000002 r7:00000032 r6:c785e000 r5:c785e360 r4:c785e000
    [<c0156b24>] (mii_discover_phy+0x0/0xa8) from [<c0156354>] (fec_enet_interrupt+0xa4/0x460)
     r5:c785e360 r4:c077a170
    [<c01562b0>] (fec_enet_interrupt+0x0/0x460) from [<c0066674>] (handle_IRQ_event+0x48/0x120)
    [<c006662c>] (handle_IRQ_event+0x0/0x120) from [<c0068438>] (handle_level_irq+0x94/0x11c)
    ...

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Greg Ungerer <gerg@uclinux.org> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Patrick McHardy <kaber@trash.net> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: Matt Waddel <Matt.Waddel@freescale.com> Cc: netdev@vger.kernel.org Cc: Tim Sander <tim01@vlsi.informatik.tu-darmstadt.de> Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ralf Baechle authored
The bpq ether driver modifies the data part of the skb by first dropping the KISS byte (a command byte for the radio) and then prepending the length + 4 of the remaining AX.25 packet to be transmitted as a little-endian 16-bit number. If the high byte of the length has a different value than the dropped KISS byte, users of clones of the skb may observe this as corruption. This was observed by running listen(8) -a, which uses a packet socket that clones transmit packets. The corruption will then typically be displayed as a KISS "TX Delay" command for AX.25 packets in the range of 252..508 bytes, or as any other KISS command for yet larger packets. Fixed by using skb_cow to create a private copy should the skb be cloned. Using skb_cow also allows us to clean up the old logic to ensure sufficient headroom in the skb. While at it, replace a return of 0 from bpq_xmit with the proper constant NETDEV_TX_OK, which is now being used everywhere else in this function. Affected: all 2.2, 2.4 and 2.6 kernels. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Reported-by: Jann Traschewski <jann@gmx.de> Signed-off-by: David S. Miller <davem@davemloft.net>
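A hedged sketch of the skb_cow() pattern mentioned above (header sizes and the surrounding bpqether logic are simplified, not copied):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Before modifying data that may be shared with clones (e.g. a packet
     * socket's copy), make the skb private and ensure enough headroom. */
    static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        if (skb_cow(skb, 2)) {      /* 2 bytes of headroom, for illustration */
            dev->stats.tx_dropped++;
            kfree_skb(skb);
            return NETDEV_TX_OK;
        }

        skb_pull(skb, 1);           /* drop the KISS command byte */
        /* ... prepend the length, hand the skb to the ethernet device ... */
        return NETDEV_TX_OK;
    }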
-
roel kluin authored
request_irq() may fail in different ways; handle the failures accordingly. Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wu Fengguang authored
This fixes a lockdep warning which appeared when doing stress memory tests over NFS: inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage. The two paths involved are: page reclaim => nfs_writepage => tcp_sendmsg => lock sk_lock, and mount_root => nfs_root_data => tcp_close => lock sk_lock => tcp_send_fin => alloc_skb_fclone => page reclaim. David raised a concern that if the allocation fails in tcp_send_fin() and it's GFP_ATOMIC, we are going to yield() (which sleeps) and loop endlessly waiting for the allocation to succeed. But in fact, the original GFP_KERNEL also sleeps. GFP_ATOMIC+yield() looks weird, but it is no worse than the implicit sleep inside GFP_KERNEL. Both could loop endlessly under memory pressure. CC: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> CC: David S. Miller <davem@davemloft.net> CC: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
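For context, a hedged sketch of the allocate-or-yield shape being discussed (shape only, not the exact tcp_send_fin() code):

    #include <linux/sched.h>
    #include <linux/skbuff.h>

    /* Keep retrying a non-sleeping allocation, yielding the CPU between
     * attempts; under memory pressure this can spin for a long time, which
     * is exactly the trade-off discussed above. */
    static struct sk_buff *example_alloc_fin_skb(unsigned int size)
    {
        struct sk_buff *skb;

        for (;;) {
            skb = alloc_skb_fclone(size, GFP_ATOMIC);
            if (skb)
                return skb;
            yield();
        }
    }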
-
Ajit Khaparde authored
This patch adds support for flashing a firmware image to a device using ethtool. The driver gets the filename of the firmware image and flashes the image using the request_firmware path. The region "on the chip" to be flashed can be specified by an option. It is up to the device driver to map the region number passed by ethtool to the region to be flashed. The default behavior is to flash all the regions on the chip. Signed-off-by: Ajit Khaparde <ajitk@serverengines.com> Signed-off-by: David S. Miller <davem@davemloft.net>
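A hedged userspace sketch of driving this through the SIOCETHTOOL/ETHTOOL_FLASHDEV interface (the interface name and firmware file name are placeholders; constants are assumed to match linux/ethtool.h, and the filename is resolved by the driver via request_firmware):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_flash efl;
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
            return 1;

        memset(&efl, 0, sizeof(efl));
        efl.cmd = ETHTOOL_FLASHDEV;
        efl.region = ETHTOOL_FLASH_ALL_REGIONS;   /* default: flash everything */
        strncpy(efl.data, "fw_image.bin", ETHTOOL_FLASH_MAX_FILENAME - 1);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder NIC */
        ifr.ifr_data = (char *)&efl;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            perror("ETHTOOL_FLASHDEV");
        close(fd);
        return 0;
    }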
-
Eric Dumazet authored
The generic packet receive code takes care of setting netdev->last_rx when necessary, for the sake of the bonding ARP monitor. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Neil Horman <nhorman@txudriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Christoph Lameter pointed out that packet drops at the qdisc level were not accounted in SNMP counters. Only if the application sets IP_RECVERR are drops reported to the user (-ENOBUFS errors) and SNMP counters updated. IP_RECVERR is used to enable extended reliable error message passing, but it should not be needed to update system-wide SNMP stats. This patch changes things a bit to allow SNMP counters to be updated regardless of whether IP_RECVERR is set on the socket. Example after a UDP tx flood:

    # netstat -s
    ...
    IP: 1487048 outgoing packets dropped
    ...
    Udp:
    ...
    SndbufErrors: 1487048

send() syscalls do, however, still return an OK status, so as not to break applications. Note: the send() manual page explicitly says for the -ENOBUFS error: "The output queue for a network interface was full. This generally indicates that the interface has stopped sending, but may be caused by transient congestion. (Normally, this does not occur in Linux. Packets are just silently dropped when a device queue overflows.)" This is not true for IP_RECVERR-enabled sockets: a send() syscall that hits a qdisc drop returns an ENOBUFS error. Many thanks to Christoph, David, and last but not least, Alexey! Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
vlan devices are currently not multi-queue capable. We can fix that with a new rtnl_link_ops method, get_tx_queues(), called from rtnl_create_link(). This new method gets num_tx_queues/real_num_tx_queues from the real device. register_vlan_device() is also handled. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 Sep, 2009 2 commits
-
-
Neil Horman authored
It was recently pointed out to me that the last_rx field of the net_device structure isn't updated regularly; in fact, only the bonding driver really uses it currently. Since the drop_monitor code relies on the last_rx field to detect drops on receive in hardware, we need to find a more reliable way to rate-limit our drop checks (so that we don't check for drops on every frame received, which would be inefficient). This patch adds a last_rx timestamp that is private to the drop monitor code and is updated for every device that we track. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-