- 23 Jul, 2010 3 commits
-
-
Ursula Braun authored
This patch serializes device removal and other sysfs-triggered configuration by moving removal of the sysfs attributes to the beginning of the remove functions. It also serializes online/offline setting and discipline switching (which causes the net_device to be re-established) by making use of a new discipline mutex. Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Carsten Otte authored
This patch fixes a problem that occurs when switching from layer 3 to layer 2 mode. Resetting the mac_bits makes sure that we retrieve our MAC address from the card; otherwise the interface simply wouldn't work. Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Klaus-Dieter Wacker authored
The qeth IP-address takeover flag can be set while the device is offline. When the device is subsequently set online, the currently configured IP addresses have to be registered with the device correctly with respect to the IP address takeover attribute. Signed-off-by: Klaus-Dieter Wacker <kdwacker@de.ibm.com> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Jul, 2010 10 commits
-
-
Dan Carpenter authored
If the allocation fails in either dwmac1000_setup() or dwmac100_setup(), return NULL. These are called from stmmac_mac_device_setup(). The check for NULL returns in stmmac_mac_device_setup() needed to be moved forward a couple of lines. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
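A minimal sketch of the shape of this fix, with an invented struct and function names standing in for the actual stmmac code:

    #include <linux/slab.h>
    #include <linux/errno.h>

    struct mac_ops_sketch { int id; /* stand-in for the real ops/fields */ };

    static struct mac_ops_sketch *dwmac_setup_sketch(void)
    {
        struct mac_ops_sketch *mac = kzalloc(sizeof(*mac), GFP_KERNEL);

        if (!mac)
            return NULL;        /* allocation failed: let the caller bail out */
        /* ... fill in the MAC operations ... */
        return mac;
    }

    static int attach_mac_sketch(void)
    {
        struct mac_ops_sketch *mac = dwmac_setup_sketch();

        if (!mac)
            return -ENOMEM;     /* the NULL check the caller now needs */
        /* ... use mac ... */
        kfree(mac);
        return 0;
    }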
-
Dan Carpenter authored
Negation has higher precedence than comparison, so the original assert only checked the negated "rfml->fragment_size", a value of 1 or 0, rather than the fragment size itself. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
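A small user-space illustration of the precedence pitfall; the variable names and threshold are invented, not the actual CAIF code:

    #include <stdio.h>

    int main(void)
    {
        unsigned int fragment_size = 1500;
        unsigned int header_size = 16;  /* hypothetical threshold */

        /* '!' binds tighter than '>', so this is (!fragment_size) > header_size:
         * a 0-or-1 value compared against 16, which can never be true. */
        int buggy = !fragment_size > header_size;

        /* Comparing the value itself is what such an assert intends. */
        int intended = fragment_size > header_size;

        printf("buggy=%d intended=%d\n", buggy, intended);
        return 0;
    }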
-
Jay Vosburgh authored
When copying VLAN information to or removing from a slave during slave addition or removal, the bonding code currently holds the bond->lock for write to prevent concurrent modification of the vlan_list / vlgrp. This is unnecessary, as all of these operations occur under RTNL. Holding the bond->lock also caused might_sleep issues for some drivers' ndo_vlan_* functions. This patch removes the extra locking. Problem reported by Michael Chan <mchan@broadcom.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Cc: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jay Vosburgh authored
After commit ad1afb00 ("vlan_dev: VLAN 0 should be treated as "no vlan tag" (802.1p packet)") it is now regular practice for a VLAN "add vid" for VLAN 0 to arrive prior to any VLAN registration or creation of a vlan_group. This patch updates the bonding code that tests for the presence of VLANs configured above bonding. The new logic tests for bond->vlgrp to determine if a registration has occurred, instead of testing that bonding's internal vlan_list is empty. The old code would panic when vlan_list was not empty, but vlgrp was still NULL (because only an "add vid" for VLAN 0 had occurred). Bonding still adds VLAN 0 to its internal list so that 802.1p frames are handled correctly on transmit when non-VLAN accelerated slaves are members of the bond. The test against bond->vlan_list remains in bond_dev_queue_xmit for this reason. Modification of bond->vlgrp now occurs under lock (in addition to RTNL), because not all inspections of it occur under RTNL. Additionally, because 8021q will never issue a "kill vid" for VLAN 0, there is now logic in bond_uninit to release any remaining entries from vlan_list. Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Cc: Pedro Garcia <pedro.netdev@dondevamos.com> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wolfram Sang authored
After the change from mdio polling to irq, it became necessary to restore the interrupt mask after resetting the chip in fec_stop(). Otherwise, with all irqs disabled, no communication with the PHY is possible after, e.g., unplugging and replugging the cable, and the device stalls. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kulikov Vasiliy authored
pci_iomap() can fail, handle this case and return -ENOMEM from probe function. Signed-off-by: Kulikov Vasiliy <segooon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
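A hedged sketch of the error-handling shape in a PCI probe routine; everything other than the pci_request_regions()/pci_iomap()/pci_release_regions() calls is illustrative:

    #include <linux/pci.h>
    #include <linux/errno.h>

    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *ioaddr;
        int err;

        err = pci_request_regions(pdev, "example");
        if (err)
            return err;

        ioaddr = pci_iomap(pdev, 0, 0);
        if (!ioaddr) {
            err = -ENOMEM;          /* pci_iomap() can fail: report it */
            goto err_release;
        }

        /* ... rest of the probe ... */
        return 0;

    err_release:
        pci_release_regions(pdev);
        return err;
    }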
-
Eric Dumazet authored
Add a new rt attribute, RTA_MARK, and use it in rt_fill_info()/inet_rtm_getroute() to support the following commands:

    ip route get 192.168.20.110 mark NUMBER
    ip route get 192.168.20.108 from 192.168.20.110 iif eth1 mark NUMBER
    ip route list cache [192.168.20.110] mark NUMBER

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
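A small sketch of the kernel side of emitting such an attribute, assuming a u32 mark value is already at hand; the surrounding rt_fill_info() plumbing and the helper name are illustrative:

    #include <net/netlink.h>
    #include <linux/rtnetlink.h>

    /* Append the fwmark to a route dump message when it is non-zero. */
    static int put_route_mark(struct sk_buff *skb, u32 mark)
    {
        if (!mark)
            return 0;                               /* attribute omitted when no mark set */
        return nla_put_u32(skb, RTA_MARK, mark);    /* non-zero return: no room in skb */
    }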
-
Marc Kleine-Budde authored
This core is found on some Freescale SoCs and also some Coldfire SoCs. Support for Coldfire is missing at the moment, though, as those SoCs have an older revision of the core which does not have RX FIFO support. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Acked-by: Wolfgang Grandegger <wg@grandegger.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
David S. Miller authored
Reported by Stephen Rothwell. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Reported by Stephen Rothwell. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 Jul, 2010 7 commits
-
-
Gustavo F. Padovan authored
Network code uses the __packed macro instead of __attribute__((packed)). Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo F. Padovan authored
Remove the IRDA_PACKED macro, which maps to __attribute__((packed)). IRDA is one of the last users of __attribute__((packed)); networking code uses __packed now. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Signed-off-by: David S. Miller <davem@davemloft.net>
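An illustrative before/after of the attribute spelling; the struct itself is made up:

    #include <linux/types.h>
    #include <linux/compiler.h>     /* provides __packed (__attribute__((packed))) */

    /* Old spelling, as still seen in the IrDA headers at the time: */
    struct example_hdr_old {
        u8  type;
        u32 length;
    } __attribute__((packed));

    /* Preferred spelling in networking code: */
    struct example_hdr_new {
        u8  type;
        u32 length;
    } __packed;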
-
Kulikov Vasiliy authored
Use for_each_pci_dev() to simplify the code. Signed-off-by: Kulikov Vasiliy <segooon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
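A hedged sketch of what the conversion looks like; the counting body is invented:

    #include <linux/pci.h>

    /* for_each_pci_dev(d) wraps the usual
     * while ((d = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL) loop. */
    static int count_pci_devices(void)
    {
        struct pci_dev *pdev = NULL;
        int n = 0;

        for_each_pci_dev(pdev)
            n++;
        return n;
    }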
-
Joe Perches authored
Add and use a few neatening macros. Remove PFX. Add #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt. Convert printk(KERN_ERR ...) calls to pr_err(...).

    $ size drivers/net/qlge/built-in.o.*
       text    data     bss     dec     hex filename
     116456    2312   25712  144480   23460 drivers/net/qlge/built-in.o.old
     114909    2312   25728  142949   22e65 drivers/net/qlge/built-in.o.new

Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
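The conversion pattern, sketched with an invented message string (qlge's real strings will differ):

    /* pr_fmt() must be defined before the first printk.h/kernel.h include. */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

    #include <linux/kernel.h>

    static void report_reset_failure(int err)
    {
        /* Before: printk(KERN_ERR PFX "reset failed, err = %d\n", err); */
        pr_err("reset failed, err = %d\n", err);    /* prefix now supplied by pr_fmt() */
    }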
-
Julia Lawall authored
There are two initializations of ndo_set_mac_address, one to a local function that is not used otherwise and one to a function that is defined elsewhere. The semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)

    // <smpl>
    @r@
    identifier I, s, fld;
    position p0,p;
    expression E;
    @@
    struct I s =@p0 { ... .fld@p = E, ...};

    @s@
    identifier I, s, r.fld;
    position r.p0,p;
    expression E;
    @@
    struct I s =@p0 { ... .fld@p = E, ...};

    @script:python@
    p0 << r.p0;
    fld << r.fld;
    ps << s.p;
    pr << r.p;
    @@
    if int(ps[0].line)<int(pr[0].line) or int(ps[0].column)<int(pr[0].column):
      cocci.print_main(fld,p0)
    // </smpl>

akpm:
- Use the standard eth_mac_addr() in uml_net_set_mac()
- Remove unneeded and racy local set_ether_mac()
- Remove duplicated (and incorrect) uml_netdev_ops.ndo_set_mac_address initializer.

Fixes 8bb95b39 ("uml: convert network device to netdevice ops"). [akpm@linux-foundation.org: rework as above] Signed-off-by: Julia Lawall <julia@diku.dk> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Shevchenko authored
Get rid of own implementation of hex_to_bin(). Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com> Acked-by: Divy Le Ray <divy@chelsio.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
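A small sketch of using the shared helper in place of a driver-private conversion table; the wrapper function is made up:

    #include <linux/kernel.h>   /* hex_to_bin() */
    #include <linux/types.h>
    #include <linux/errno.h>

    /* Parse two hex characters into one byte; hex_to_bin() returns -1 on bad input. */
    static int parse_hex_byte(const char *s, u8 *out)
    {
        int hi = hex_to_bin(s[0]);
        int lo = hex_to_bin(s[1]);

        if (hi < 0 || lo < 0)
            return -EINVAL;
        *out = (hi << 4) | lo;
        return 0;
    }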
-
David S. Miller authored
Conflicts: drivers/vhost/net.c net/bridge/br_device.c Fix merge conflict in drivers/vhost/net.c with guidance from Stephen Rothwell. Revert the effects of net-2.6 commit 573201f3 since net-next-2.6 has fixes that make bridge netpoll work properly thus we don't need it disabled. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Jul, 2010 19 commits
-
-
Neil Horman authored
Convert a few call sites from kfree_skb to consume_skb. While working on dropwatch, I noticed that I was detecting lots of internal skb drops in several places. While some are legitimate, several were not: they were freeing skbs that had reached the end of their life, rather than being discarded due to an error. This patch converts those call sites from kfree_skb to consume_skb, which quiets the in-kernel drop_monitor code from detecting them as drops. Tested successfully by myself. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
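The distinction being applied, sketched at a hypothetical call site:

    #include <linux/skbuff.h>

    static void finish_skb(struct sk_buff *skb, bool error)
    {
        if (error)
            kfree_skb(skb);     /* a genuine drop: visible to drop_monitor */
        else
            consume_skb(skb);   /* normal end of life: not counted as a drop */
    }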
-
Neil Horman authored
Patch to add the -EAGAIN error to the dropwatch netlink message handling code. -EAGAIN will be returned any time userspace attempts to transition the drop monitor service to a state that it's already in. That allows user space to detect this condition, so it doesn't wait for a success ACK that will never arrive. Tested successfully by me. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
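A hedged sketch of the idea only; the names and structure are illustrative, not the drop_monitor code:

    #include <linux/errno.h>

    static int trace_state;         /* 0 = off, 1 = on (illustrative) */

    static int set_trace_state(int new_state)
    {
        if (trace_state == new_state)
            return -EAGAIN;         /* already there: tell userspace instead of silently ignoring */
        trace_state = new_state;
        return 0;
    }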
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Giuseppe Cavallaro authored
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Adding myself as the official maintainer of the Chelsio T4 Virtual Function Driver (cxgb4vf). Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Fix bug in setup_sge_queues() where we were incorrectly only allocating a single "Queue Set" for MSI mode. Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Fix off-by-one error in checking for the end of the mailbox response delay array. We walked off the end of the array and, if we were unlucky, pulled in a 0 and never terminated the mailbox response delay loop ... Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
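A generic illustration of the off-by-one; the delay table and its values are invented:

    #include <linux/kernel.h>   /* ARRAY_SIZE() */

    static const int mbox_delay_ms[] = { 1, 1, 3, 5, 10, 10, 20, 50 };

    static int delay_for_attempt(unsigned int attempt)
    {
        /* Buggy bound:   attempt <= ARRAY_SIZE(mbox_delay_ms) reads one past the end,
         * picking up whatever follows the array (possibly 0, as described above).
         * Correct bound: clamp to the last valid entry. */
        if (attempt >= ARRAY_SIZE(mbox_delay_ms))
            attempt = ARRAY_SIZE(mbox_delay_ms) - 1;
        return mbox_delay_ms[attempt];
    }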
-
Herbert Xu authored
The new netpoll code in bridging contains use-after-free bugs that are non-trivial to fix. This patch fixes this by removing the code that uses skbs after they're freed. As a consequence, this means that we can no longer call bridge from the netpoll path, so this patch also removes the controller function in order to disable netpoll. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Even with jumbograms I cannot see any way in which we would need to record a larger than 65535 valued next-header offset. The maximum extension header length is (256 << 3) == 2048. There are only a handful of extension headers specified which we'd even accept (say 5 or 6), therefore the largest next-header offset we'd ever have to contend with is something less than, say, 16k. Therefore make it a u16 instead of a u32. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
smp_mb() inside bnx2_tx_avail() is used twice in the normal bnx2_start_xmit() path (see illustration below). The full memory barrier is only necessary during race conditions with tx completion. We can speed up the tx path by replacing smp_mb() in bnx2_tx_avail() with a compiler barrier. The compiler barrier is to force the compiler to fetch the tx_prod and tx_cons from memory. In the race condition between bnx2_start_xmit() and bnx2_tx_int(), we have the following situation:

    bnx2_start_xmit()                         bnx2_tx_int()
        if (!bnx2_tx_avail())
            BUG();
        ...
        if (!bnx2_tx_avail())
            netif_tx_stop_queue();            update_tx_index();
            smp_mb();                         smp_mb();
            if (bnx2_tx_avail())              if (netif_tx_queue_stopped() &&
                netif_tx_wake_queue();            bnx2_tx_avail())

With smp_mb() removed from bnx2_tx_avail(), we need to add smp_mb() to bnx2_start_xmit() as shown above to properly order netif_tx_stop_queue() and bnx2_tx_avail() to check the ring index. If it is not strictly ordered, the tx queue can be stopped forever. This improves performance by about 5% with 2 ports running bi-directional 64-byte packets. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
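A heavily abridged sketch of the resulting ordering in the transmit path; the ring structure and accounting are stand-ins, not the bnx2 code:

    #include <linux/netdevice.h>
    #include <linux/compiler.h>

    struct tx_ring_sketch { u16 prod; u16 cons; u16 size; };

    static inline int tx_avail(struct tx_ring_sketch *r)
    {
        barrier();      /* compiler barrier only: force a fresh read of prod/cons */
        return r->size - (u16)(r->prod - r->cons);
    }

    static void xmit_tail(struct tx_ring_sketch *r, struct netdev_queue *txq, int needed)
    {
        if (tx_avail(r) <= needed) {
            netif_tx_stop_queue(txq);
            smp_mb();   /* full barrier here orders the stop against the re-check,
                         * pairing with the barrier in the tx completion path */
            if (tx_avail(r) > needed)
                netif_tx_wake_queue(txq);
        }
    }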
-
Michael Chan authored
Based on original patch by Breno Leitão <leitao@linux.vnet.ibm.com>. Allocate the actual number of vectors and make use of fewer vectors if pci_enable_msix() returns > 0. We must allocate one additional vector for the cnic driver. Cc: Breno Leitão <leitao@linux.vnet.ibm.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
We were using the wrong tx multicast counter instead of the rx multicast counter. Reported-by: Peter Snellman <peter.snellman@cinnober.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Don Skidmore authored
Bump the version string to better reflect what is in the driver. Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
The FCoE protocol stack may hold a lock when this gets called. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
When FCoE is disabled, there is a race condition where FCoE offload is turned off but the FCoE protocol driver is still queuing I/O, thinking offload support still exists. This patch toggles off the corresponding FCoE netdev feature flags and notifies the FCoE stack first, allowing the FCoE protocol stack driver to update its flags upon NETDEV_FEAT_CHANGE so no I/O will be using offload. Also, indicate the FCoE offload flags in vlan_features in ixgbe_probe once and do not toggle them in ixgbe_fcoe_enable/disable, so that when FCoE is created on the VLAN interface, vlan_transfer_features() properly updates the VLAN netdev feature flags and notifies the FCoE protocol driver via NETDEV_FEAT_CHANGE. Signed-off-by: Yi Zou <yi.zou@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
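A hedged sketch of the ordering described above; the flag handling is simplified and the function name is made up:

    #include <linux/netdevice.h>

    static void sketch_fcoe_disable(struct net_device *netdev)
    {
        /* Clear the offload feature bits first ... */
        netdev->features &= ~(NETIF_F_FCOE_CRC | NETIF_F_FCOE_MTU);

        /* ... and tell the stack, so the FCoE protocol driver sees
         * NETDEV_FEAT_CHANGE and stops submitting offloaded I/O before
         * the offload resources go away. */
        netdev_features_change(netdev);

        /* ... now it is safe to release the FCoE offload resources ... */
    }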
-
Alexander Duyck authored
This change removes UDP from the supported protocols for RSS hashing. The reason for removing this protocol is that IP fragmentation was causing a network flow to be broken into two streams, one for fragmented and one for non-fragmented traffic, which in turn was causing out-of-order issues. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Don Skidmore <donald.c.skidmore@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
Set the DPF bit when PFC is enabled. This will discard PFC frames so they do not get passed up the stack. The DPF bit was already being set for flow control but not for priority flow control; this brings PFC in line with FC. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This change makes it possible to limit the number of descriptors down to 48 per ring. The reason for this change is to address a variation on hardware errata 10 for 82546GB in which descriptors will be lost if more than 32 descriptors are fetched and the PCI-X MRBC is 512. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Emil Tantilov <emil.s.tantilov@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Jul, 2010 1 commit
-
-
Andrew Morton authored
drivers/net/82596.c: In function 'i596_open': drivers/net/82596.c:1044: warning: label 'err_irq_dev' defined but not used Caused by "82596: free resources on error" Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-