- 07 Sep, 2010 24 commits
-
-
Santiago Leon authored
We had a few cases where we returned success on error. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Fix most of the kernel coding style issues in ibmveth. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Rename IbmVethNumBufferPools -> IBMVETH_NUM_BUFF_POOLS. Also change IBMVETH_MAX_MTU -> IBMVETH_MIN_MTU, since it refers to the minimum size, not the maximum. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Use netdev_err to standardise the error output. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Use netdev_dbg to standardise the debug output. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
These functions appear before their use, so we can remove the redundant prototypes. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
We were using alloc_skb which doesn't create any headroom. Change it to use netdev_alloc_skb to match most other drivers. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
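A minimal sketch of the change being described, assuming a hypothetical replenish helper (the function and size names below are illustrative, not the ibmveth code):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Hypothetical rx-buffer allocation helper; names and sizes are illustrative. */
    static struct sk_buff *example_alloc_rx_buffer(struct net_device *netdev,
                                                   unsigned int buf_size)
    {
        /* Before: alloc_skb(buf_size, GFP_ATOMIC) -- no headroom reserved. */
        /* After: netdev_alloc_skb() reserves NET_SKB_PAD of headroom and
         * associates the skb with the device, matching most other drivers. */
        return netdev_alloc_skb(netdev, buf_size);
    }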
-
Santiago Leon authored
We export all the driver specific statistics via ethtool, so there is no need to duplicate this in procfs. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
This patch enables TCP checksum offload support for IPv6 on ibmveth. This completely eliminates the generation and checking of the checksum for IPv6 packets that are completely virtual and never touch a physical network. A basic TCPIPV6_STREAM netperf run showed a ~30% throughput improvement when an MTU of 64000 was used. This feature is enabled by default, as is the case for IPv4 checksum offload. When checksum offload is enabled the driver will negotiate IPv4 and IPv6 offload with the firmware separately and enable what is available. As long as either IPv4 or IPv6 offload is supported and enabled, the device will report that checksum offload is enabled. The device stats, available through ethtool, will display which checksum offload features are supported/enabled by firmware. Performance testing against a stock kernel shows no regression for IPv4 or IPv6 in terms of throughput or processor utilization with checksum disabled or enabled. Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Remove code in the device probe function where we set up the checksum offload feature and replace it with a call to an existing function that is doing the same. This is done to clean up the driver in preparation of adding IPv6 checksum offload support. Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
On some machines we can improve the bandwidth by ensuring rx buffers are not in the cache. Add a module option, disabled by default, that flushes rx buffers on insertion. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
ibmveth can scatter-gather up to 6 segments. If we go over this limit we have no option but to call skb_linearize, as other drivers with similar limitations do. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
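A rough illustration of the limit check (the fragment limit and function name here are assumptions for the sketch, not the driver's actual identifiers):

    #include <linux/skbuff.h>

    #define EXAMPLE_MAX_TX_FRAGS 6    /* illustrative hardware descriptor limit */

    /* Linearize skbs that need more scatter-gather entries than the device has. */
    static int example_xmit_prepare(struct sk_buff *skb)
    {
        /* one descriptor for the linear part plus one per page fragment */
        if (skb_shinfo(skb)->nr_frags + 1 > EXAMPLE_MAX_TX_FRAGS) {
            if (skb_linearize(skb))
                return -ENOMEM;    /* caller drops the packet */
        }
        return 0;
    }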
-
Anton Blanchard authored
We want to order the read in ibmveth_rxq_pending_buffer and the read of ibmveth_rxq_buffer_valid, which are both cacheable memory; smp_rmb() is good enough for this. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
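A sketch of the ordering pattern being described, using an invented descriptor layout (the struct and the barrier header location are assumptions; the real ibmveth receive-queue entry differs):

    #include <linux/types.h>
    #include <asm/barrier.h>    /* smp_rmb(); header location varies by kernel version */

    /* Invented descriptor: a valid flag plus payload fields, all in cacheable memory. */
    struct example_rx_desc {
        u8  valid;
        u32 correlator;
    };

    static bool example_rxq_read(const struct example_rx_desc *desc, u32 *correlator)
    {
        if (!desc->valid)
            return false;

        smp_rmb();    /* don't let the payload read pass the valid-flag read */
        *correlator = desc->correlator;
        return true;
    }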
-
Santiago Leon authored
For small packets, create a new skb and copy the packet into it so we avoid tearing down and creating a TCE entry. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
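A hedged sketch of the copy-break idea in the receive path (the threshold and function names are made up for illustration):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    #define EXAMPLE_RX_COPYBREAK 256    /* illustrative small-packet cut-off */

    /*
     * For small frames, copy into a fresh skb and keep the original DMA-mapped
     * buffer posted, avoiding a TCE (IOMMU) unmap/remap for every packet.
     * Returns the skb to hand to the stack, or NULL to use the mapped buffer.
     */
    static struct sk_buff *example_rx_copybreak(struct net_device *dev,
                                                const struct sk_buff *mapped_skb,
                                                unsigned int len)
    {
        struct sk_buff *skb;

        if (len >= EXAMPLE_RX_COPYBREAK)
            return NULL;

        skb = netdev_alloc_skb(dev, len);
        if (!skb)
            return NULL;

        skb_copy_to_linear_data(skb, mapped_skb->data, len);
        skb_put(skb, len);
        return skb;
    }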
-
Santiago Leon authored
Use the existing bounce buffer if we send a buffer under a certain size. This saves the overhead of a TCE map/unmap. I can't see any reason for the wmb() in the bounce buffer case; if we need a barrier it would be before we call h_send_logical_lan, but we have nothing there in the common case. Remove it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
The ibmveth adapter needs locking in the transmit routine to protect the bounce_buffer but it sets LLTX and forgets to add any of its own locking. Just remove the deprecated LLTX option. Remove the stats lock in the process. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
At the moment we try to replenish the receive ring on every rx interrupt. We even have a pool->threshold but aren't using it. To limit the maximum latency incurred when refilling, change the threshold from 1/2 to 7/8 and reduce the largest rx pool from 768 buffers to 512, which should be more than enough. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Replace some modulus operators with an increment and compare to avoid an integer divide. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
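The transformation is the classic ring-index idiom; a minimal sketch:

    /* Advance a ring index without an integer divide. */
    static inline unsigned int example_ring_next(unsigned int index,
                                                 unsigned int ring_size)
    {
        /* Before: return (index + 1) % ring_size;   -- costs a divide */
        if (++index == ring_size)
            index = 0;
        return index;
    }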
-
Denis Kirjanov authored
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Denis Kirjanov authored
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Denis Kirjanov authored
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Allan Stephens authored
Cause TIPC to return EAGAIN if it is unable to enable a new Ethernet bearer because one or more recently disabled Ethernet bearers are temporarily consuming resources during shut down. (The previous error code, EDQUOT, is now returned only if all available Ethernet bearer data structures are fully enabled at the time the request to enable an additional bearer is received.) Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Allan Stephens authored
Add code to expand the headroom of an outgoing TIPC message if the sk_buff has insufficient room to hold the header for the associated Ethernet device. This change is necessary to ensure that messages TIPC does not create itself (e.g. incoming messages that are being routed to another node) do not cause problems, since TIPC has no control over the amount of headroom available in such messages. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
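A minimal sketch of the headroom check being described, under the assumption of a generic helper name (skb_cow_head() is one way to expand headroom; the actual TIPC change may use a different call):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Make sure a buffer has room for the outgoing device's link-layer header. */
    static int example_ensure_headroom(struct sk_buff *skb, struct net_device *dev)
    {
        if (skb_headroom(skb) < dev->hard_header_len)
            return skb_cow_head(skb, dev->hard_header_len);    /* may reallocate */
        return 0;
    }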
-
Allan Stephens authored
Optimizes TIPC's name table translation code to avoid unnecessary manipulation of the node address field of the resulting port id when name translation fails. This change is possible because a valid port id cannot have a reference field of zero, so examining the reference only is sufficient to determine if the translation was successful. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Sep, 2010 2 commits
-
-
stephen hemminger authored
Change several WAN drivers to make strings and other initialization-only parameters const. Compile-tested only (no new warnings). Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
While porting GRO to r8169, I found this driver has a bug in its rx path. All skbs given to the network stack had their ip_summed set to CHECKSUM_NONE, while the hardware said they had correct TCP/UDP checksums. The reason is that the driver sets skb->ip_summed on the original skb before the copy eventually done by copybreak. The fresh skb gets ip_summed = CHECKSUM_NONE, forcing the network stack to recompute the checksum and preventing my GRO patch from working. The fix is to set ip_summed after the skb copy. Note: rx_copybreak's current value is 16383, so all frames are copied... Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
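A hedged sketch of the corrected ordering (function and parameter names are illustrative, not the r8169 code; the decision of when to copy is elided):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Deliver a received frame: do the copybreak copy first, then mark the
     * skb that actually goes up the stack with the hardware checksum result. */
    static void example_rx_deliver(struct net_device *dev, struct sk_buff *rx_skb,
                                   unsigned int len, bool hw_csum_ok)
    {
        struct sk_buff *skb = netdev_alloc_skb(dev, len);

        if (skb) {
            skb_copy_to_linear_data(skb, rx_skb->data, len);
            skb_put(skb, len);
        } else {
            skb = rx_skb;    /* fall back to handing up the original buffer */
        }

        /* Set ip_summed on the delivered skb, *after* any copy. */
        skb->ip_summed = hw_csum_ok ? CHECKSUM_UNNECESSARY : CHECKSUM_NONE;
        skb->protocol = eth_type_trans(skb, dev);
        netif_receive_skb(skb);
    }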
-
- 03 Sep, 2010 10 commits
-
-
Casey Leedom authored
Don't call flush_workqueue() on the cxgb3 Work Queue in cxgb_down() when we're being called from the fatal error task ... which is executing on the cxgb3 Work Queue. Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
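A sketch of the hazard and the guard (the adapter structure and flag are invented for illustration):

    #include <linux/workqueue.h>

    struct example_adapter {
        struct workqueue_struct *wq;
    };

    /*
     * flush_workqueue() from a work item running on that same workqueue waits
     * on itself and never returns, so skip the flush when teardown is driven
     * from the fatal-error work item.
     */
    static void example_down(struct example_adapter *adap, bool on_error_wq)
    {
        if (!on_error_wq)
            flush_workqueue(adap->wq);
        /* ... rest of the teardown ... */
    }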
-
Casey Leedom authored
Platform code needs to deal with them now. Signed-off-by: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Signed-off-by: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
__alloc_skb() uses a memset() to clear the beginning of the skb, including the bitfields contained in 'flags1' & 'flags2'. We no longer need to use kmemcheck_annotate_bitfield() on these fields. However, we still need it for the clone part, which is not cleared. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
struct tulip_private is a bit large (an order-1 allocation even on 32-bit arches); try to shrink it by removing its net_device_stats field. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Documentation for recent changes to the tunables accept_ra and forwarding. Signed-off-by: Thomas Graf <tgraf@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Similar to accepting router advertisements, the IPv6 stack does not send router solicitations if forwarding is enabled. This patch allows that behavior to be overruled by setting forwarding to the special value 2. Signed-off-by: Thomas Graf <tgraf@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
The current IPv6 behavior is to not accept router advertisements while forwarding is enabled, i.e. when configured as a router. This does make sense: a router is typically not supposed to be auto-configured. However, there are exceptions, and we should allow the current behavior to be overridden. Therefore this patch enables the user to overrule the "if forwarding is enabled then don't listen to RAs" rule by setting accept_ra to the special value of 2. An alternative would be to ignore the forwarding switch altogether and accept RAs based solely on the value of accept_ra. However, I found that, if not intended, accepting RAs as a router can lead to strange unwanted behavior, so it seems wise to only do so if the user explicitly asks for this behavior. Signed-off-by: Thomas Graf <tgraf@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Freshly allocated skbs have ip_summed set to CHECKSUM_NONE (0), so we can avoid setting skb->ip_summed to CHECKSUM_NONE again in drivers. Introduce the skb_checksum_none_assert() helper so that this assertion stays documented in driver sources. Change most occurrences of: skb->ip_summed = CHECKSUM_NONE; to: skb_checksum_none_assert(skb); Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
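Based on the description above, the helper is essentially a debug-only assertion; roughly:

    #include <linux/skbuff.h>

    /* Roughly the shape of the helper described above: document the invariant
     * that a freshly allocated skb already has ip_summed == CHECKSUM_NONE,
     * checking it only in debug builds. */
    static inline void example_skb_checksum_none_assert(const struct sk_buff *skb)
    {
    #ifdef DEBUG
        BUG_ON(skb->ip_summed != CHECKSUM_NONE);
    #endif
    }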
-
- 02 Sep, 2010 4 commits
-
-
David S. Miller authored
Merge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6
-
Eric Dumazet authored
The get_stats() method incorrectly clears a global array before folding various stats, which can break SNMP applications. Switch to the 64-bit flavor so it works on a user-supplied buffer, and provide 64-bit counters even on 32-bit arches. Also fix a bug in bnad_netdev_hwstats_fill() for rx_fifo_errors, which was missing a fold (only the last counter was taken into account). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Rasesh Mody <rmody@brocade.com> Signed-off-by: David S. Miller <davem@davemloft.net>
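A hedged sketch of the 64-bit stats approach: fill the caller-supplied buffer rather than clearing and returning a driver-global array (the per-ring structure and fold below are invented for illustration; in kernels of this vintage the ndo_get_stats64 hook took and returned the caller's buffer):

    #include <linux/netdevice.h>

    struct example_ring_stats {
        u64 rx_packets;
        u64 rx_fifo_errors;
    };

    /* Fill the caller-supplied buffer; never clear or return a driver-global array. */
    static struct rtnl_link_stats64 *
    example_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
    {
        const struct example_ring_stats *ring = netdev_priv(dev);

        /* Fold every per-ring counter into *stats; folding only the last
         * ring's rx_fifo_errors was the bug being fixed. */
        stats->rx_packets     += ring->rx_packets;
        stats->rx_fifo_errors += ring->rx_fifo_errors;
        return stats;
    }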
-
John W. Linville authored
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 into for-davem
-
Changli Gao authored
Clean the code up according to Documentation/CodingStyle. Don't initialize the variable dont_send in arp_process(). Remove the temporary variable flags in arp_state_to_flags(). Signed-off-by: Changli Gao <xiaosuo@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-