- 28 Jul, 2013 18 commits
-
-
Bruce Allan authored
A Tx hang is an unintended consequence of another workaround, present in the EEPROM, for a firmware issue at 10Mbps when K1 (a power mode of the MAC-PHY interconnect) is enabled. The issue is resolved by setting appropriate Tx re-transmission timeouts in the PHY and associated K1 entry times in the MAC to allow enough transmissions to occur without triggering a Tx hang. A similar change is needed when linked at 10Mbps to improve latency. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
Alter the packet buffer allocation accordingly. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
The jumbo frame configuration in the MAC/PHY should be reverted on 82579 and newer parts when the interface is brought down (not just when the MTU is changed back to the standard frame size); otherwise iAMT connections (e.g. SoL, IDE-R) will be dropped and cannot be re-acquired until the MTU is changed again. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
The 82583 can disappear off the PCIe bus. This device is a modified 82574, which had the same problem that was fixed by disabling ASPM L1; disabling it on the 82583 fixes the issue on this device as well. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Wei Yang authored
In structure e1000_rx_desc_packet_split, the size of wb.upper.length is defined by a hard-coded digit, which can cause problems if the length is ever changed. This patch uses the macro PS_PAGE_BUFFERS for the definition and moves the definition to hw.h. Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
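A minimal sketch of that kind of change; the value 3 and the struct layout below are illustrative assumptions, not the driver's actual definitions:

```c
#include <stdint.h>

/* Size the packet-split length array with a macro instead of a bare digit,
 * so a future change to the buffer count touches exactly one place.
 */
#define PS_PAGE_BUFFERS 3	/* illustrative value */

struct rx_desc_ps_upper {
	uint16_t length[PS_PAGE_BUFFERS];	/* was: uint16_t length[3] */
};
```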
-
Wei Yang authored
tx_ring/rx_ring size is assigned in function e1000_alloc_queues(), which is called by e1000_sw_init() in the early stage of e1000_probe(). This patch just removes the duplicate assignment of this default ring size value. Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com> Reviewed-by: Gavin Shan <shangw@linux.vnet.ibm.com> Reviewed-by: Da Yu Qiu <qiudayu@cn.ibm.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Acked-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Dean Nelson authored
In attempting to resolve a minor merge conflict, commit e5f2ef7a (Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net) accidentally dropped a call to pci_clear_master() that was intended to remain in place. Commit 4e0855df (e1000e: fix pci-device enable-counter balance) replaced a call to pci_disable_device() by one to pci_clear_master(). And then commit 66148bab (e1000e: fix runtime power management transitions) deleted a number of lines starting two lines following that call. This patch restores the call to pci_clear_master() in __e1000_shutdown(). v2: added summary lines (enclosed in parens) following commit IDs Signed-off-by: Dean Nelson <dnelson@redhat.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Acked-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Andy Shevchenko authored
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
nikolay@redhat.com authored
After commit 4aa5dee4 ("net: convert resend IGMP to notifier event") we are left with one read_unlock in bond_resend_igmp_join_requests that is no longer paired with a read_lock, because the lock was removed by that commit. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Reviewed-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
UDP checksums are optional, hence pktgen has been omitting them in favour of performance. The optional flag UDPCSUM enables UDP checksumming. If the output device supports hardware checksumming the skb is prepared and marked CHECKSUM_PARTIAL, otherwise the checksum is generated in software. Signed-off-by: Thomas Graf <tgraf@suug.ch> Cc: Eric Dumazet <edumazet@google.com> Cc: Ben Greear <greearb@candelatech.com> Signed-off-by: David S. Miller <davem@davemloft.net>
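A hedged sketch of the hardware-vs-software decision the message describes (not the pktgen code itself; the F_UDPCSUM flag name, the helper's signature and the udplen bookkeeping are assumptions standing in for pktgen's own state):

```c
/* Kernel context assumed: linux/skbuff.h, linux/ip.h, linux/udp.h, net/udp.h. */
static void fill_udp_csum(struct sk_buff *skb, struct net_device *odev,
			  struct iphdr *iph, struct udphdr *udph,
			  int udplen, unsigned int flags)
{
	if (!(flags & F_UDPCSUM)) {
		/* checksumming not requested: leave it off as before */
		skb->ip_summed = CHECKSUM_NONE;
	} else if (odev->features & NETIF_F_V4_CSUM) {
		/* NIC can do it: hand over a partial checksum */
		skb->ip_summed = CHECKSUM_PARTIAL;
		skb->csum = 0;
		udp4_hwcsum(skb, iph->saddr, iph->daddr);
	} else {
		/* software fallback: full UDP checksum incl. pseudo-header */
		__wsum csum;

		udph->check = 0;
		csum = skb_checksum(skb, skb_transport_offset(skb), udplen, 0);
		udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr,
						udplen, IPPROTO_UDP, csum);
		if (udph->check == 0)
			udph->check = CSUM_MANGLED_0;
	}
}
```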
-
Asias He authored
This is useful for other VSOCK transport implemented outside the net/vmw_vsock/ directory to use these headers. Signed-off-by: Asias He <asias@redhat.com> Acked-by: Andy King <acking@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored (merge of git://git.infradead.org/users/dvhart/linux-2.6)
Darren Hart says: ==================== Add support for the MinnowBoard in the pch_gbe driver. This was originally sent to LKML as part of the MinnowBoard support series. That is now partially merged and this version of the patch has been isolated from those changes and is now completely self-contained. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ming Lei authored
The default RX_QLEN()/TX_QLEN() didn't consider SuperSpeed USB devices, so at most 4 URBs are scheduled at the same time for each of tx and rx, and a USB 3.0 NIC can't perform well. With this patch, both rx and tx throughput are increased by more than 100Mbps when doing an iperf test on an ax88179_178a USB 3.0 NIC. Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ming Lei authored
This patch centralizes the computation of the max rx/tx qlen, because: - RX_QLEN()/TX_QLEN() is called in the hot path - the computation depends on the device's USB speed, and now that we have ls/fs, hs and ss, more checks are involved - in fact, the max rx/tx qlen should depend not only on the device's USB speed but also on the Ethernet link speed, so that needs to be considered in the future - if SG support is added, the max tx qlen may need to change too. Generally, hard_mtu and rx_urb_size are changed in the bind(), reset() and link_reset() callbacks and in the change-mtu network operation, so this patch introduces the usbnet_update_max_qlen() API and calls it in those paths. Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
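A hedged sketch of what such a centralized computation can look like; the 60*1518 memory budget, the 5x SuperSpeed multiplier and the rx_qlen/tx_qlen field names are assumptions based on the description, not a verbatim quote of the patch:

```c
#include <linux/usb.h>
#include <linux/usb/usbnet.h>

#define MAX_QUEUE_MEMORY	(60 * 1518)	/* illustrative per-direction budget */

static void usbnet_update_max_qlen(struct usbnet *dev)
{
	switch (dev->udev->speed) {
	case USB_SPEED_HIGH:
		dev->rx_qlen = MAX_QUEUE_MEMORY / dev->rx_urb_size;
		dev->tx_qlen = MAX_QUEUE_MEMORY / dev->hard_mtu;
		break;
	case USB_SPEED_SUPER:
		/* SuperSpeed devices benefit from keeping more URBs in flight */
		dev->rx_qlen = 5 * MAX_QUEUE_MEMORY / dev->rx_urb_size;
		dev->tx_qlen = 5 * MAX_QUEUE_MEMORY / dev->hard_mtu;
		break;
	default:
		/* ls/fs and anything unknown: keep the old conservative default */
		dev->rx_qlen = dev->tx_qlen = 4;
	}
}
```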
-
Jason Wang authored
Inspired by commit f09e2249 (macvtap: restore vlan header on user read). This patch adds hardware vlan tx support for tuntap. This is done by copying vlan header directly into userspace in tun_put_user() instead of doing it through __vlan_put_tag() in dev_hard_start_xmit(). This eliminates one unnecessary memmove() in vlan_insert_tag() for 802.1ad and 802.1q traffic. pktgen test shows about 20% improvement for 802.1q traffic: Before: 662149pps 317Mb/sec (317831520bps) errors: 0 After: 801033pps 384Mb/sec (384495840bps) errors: 0 Cc: Basil Gor <basil.gor@gmail.com> Cc: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Stringer authored
This patch consolidates the SCTP checksum calculation code from various places to a single new function, sctp_compute_cksum(skb, offset). Signed-off-by: Joe Stringer <joe@wand.net.nz> Reviewed-by: Julian Anastasov <ja@ssi.bg> Acked-by: Simon Horman <horms@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
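A hedged sketch of the consolidated helper: a CRC32c over the SCTP packet starting at the given offset, walking any frag_list skbs. It assumes the sctp_start_cksum()/sctp_update_cksum()/sctp_end_cksum() helpers from include/net/sctp/checksum.h and is an illustration, not the exact patch:

```c
#include <linux/skbuff.h>
#include <net/sctp/checksum.h>

static inline __le32 sctp_compute_cksum(const struct sk_buff *skb,
					unsigned int offset)
{
	const struct sk_buff *iter;
	/* linear part of the packet, starting at the SCTP header */
	__u32 crc = sctp_start_cksum(skb->data + offset,
				     skb_headlen(skb) - offset);

	/* fold in any chained fragments */
	skb_walk_frags(skb, iter)
		crc = sctp_update_cksum((__u8 *)iter->data,
					skb_headlen(iter), crc);

	return sctp_end_cksum(crc);
}
```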
-
Michael S. Tsirkin authored
For small packets we can simplify xmit processing by linearizing buffers with the header: most packets seem to have enough head room that we can use for this purpose. Since existing hypervisors require that the header is the first s/g element, we need a feature bit for this. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Jul, 2013 4 commits
-
-
stephen hemminger authored
This started out with fixing a sparse warning, then I realized that the wrapper function bond_netpoll_info could just be removed by rolling it into the enable code. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Jiri Pirko <jiri@resnulli.us> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
This started out with fixing a sparse warning, then I realized that the wrapper function team_netpoll_info could just be collapsed away by rolling it into the enable code. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
This started out with fixing a sparse warning, then I realized that the wrapper function br_netpoll_info could just be collapsed away by rolling it into the enable code. Also, eliminate unnecessary goto's Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Jiri Pirko <jiri@resnulli.us> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wang Sheng-Hui authored
We have BOND_MODE_ROUNDROBIN pre-defined as 0, and it's the lowest mode number. Use it to check the arg lower bound instead of magic number 0 in bond_mode_name. Signed-off-by: Wang Sheng-Hui <shhuiw@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
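A sketch of the kind of bound check described, with the mode-name table abbreviated; BOND_MODE_ALB as the upper bound is an assumption from the bonding mode list:

```c
static const char *bond_mode_name(int mode)
{
	static const char *names[] = {
		[BOND_MODE_ROUNDROBIN] = "load balancing (round-robin)",
		/* ... remaining modes elided ... */
	};

	/* use the lowest defined mode rather than a magic 0 */
	if (mode < BOND_MODE_ROUNDROBIN || mode > BOND_MODE_ALB)
		return "unknown";

	return names[mode];
}
```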
-
- 25 Jul, 2013 14 commits
-
-
Darren Hart authored
The MinnowBoard uses an AR803x PHY with the PCH GBE which requires special handling. Use the MinnowBoard PCI Subsystem ID to detect this and add a pci_device_id.driver_data structure and functions to handle platform setup. The AR803x does not implement the RGMII 2ns TX clock delay in the trace routing nor via strapping. Add a detection method for the board and the PHY and enable the TX clock delay via the registers. This PHY will hibernate without link for 10 seconds. Ensure the PHY is awake for probe and then disable hibernation. A future improvement would be to convert pch_gbe to using PHYLIB and making sure we can wake the PHY at the necessary times rather than permanently disabling it. Signed-off-by: Darren Hart <dvhart@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Peter Waskiewicz <peter.p.waskiewicz.jr@intel.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Joe Perches <joe@perches.com> Cc: netdev@vger.kernel.org
-
Wolfram Sang authored
devm_ioremap_resource does sanity checks on the given resource. No need to duplicate this in the driver. Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Darren Hart authored
Avoid using magic numbers when we have perfectly good defines just lying around. Signed-off-by: Darren Hart <dvhart@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Peter Waskiewicz <peter.p.waskiewicz.jr@intel.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: netdev@vger.kernel.org
-
Thomas Gleixner authored
No users outside net/core/dev.c. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Idea of this patch is to add an optional limit on the number of unsent bytes in TCP sockets, to reduce usage of kernel memory. A TCP receiver might announce a big window, and TCP sender autotuning might allow a large amount of bytes in the write queue, but this has little performance impact if a large part of this buffering is wasted: the write queue needs to be large only to deal with a large BDP, not necessarily to cope with scheduling delays (incoming ACKs make room for the application to queue more bytes). For most workloads, using a value of 128 KB or less is enough to give applications time to react to POLLOUT events (or to be woken in a blocking sendmsg()). This patch adds two ways to set the limit: 1) a per-socket option, TCP_NOTSENT_LOWAT; 2) a sysctl (/proc/sys/net/ipv4/tcp_notsent_lowat) for sockets not using the TCP_NOTSENT_LOWAT socket option (or setting a zero value). The default value is UINT_MAX (0xFFFFFFFF), meaning the limit has no effect. This changes poll()/select()/epoll() to report POLLOUT only if the number of unsent bytes is below tp->notsent_lowat. Note this might increase the number of sendmsg()/sendfile() calls when using non-blocking sockets, and the number of context switches for blocking sockets. Note this is not related to SO_SNDLOWAT (SO_SNDLOWAT is defined as: specify the minimum number of bytes in the buffer until the socket layer will pass the data to the protocol). Tested: netperf sessions, watching the "memory" column of /proc/net/protocols for TCP. With 200 concurrent netperf -t TCP_STREAM sessions and tcp_notsent_lowat set to 131072, the amount of kernel memory used by TCP buffers shrinks by ~55 % (20567 pages instead of 45458). Using 128KB has no bad effect on the throughput or cpu usage of a single flow, although there is an increase in context switches: a 20-second netperf omni run measured 17447.90 10^6bits/s at 3.13 % local CPU with 412,514 context switches at the default setting, versus 17321.16 10^6bits/s at 3.35 % local CPU with 2,675,818 context switches with tcp_notsent_lowat=131072. A bonus is that we hold the socket lock for a shorter amount of time, which should improve latencies of ACK processing. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Yuchung Cheng <ycheng@google.com> Acked-By: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
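A minimal userspace sketch of the per-socket knob described above; the 128 KB value mirrors the message's suggestion, and the system-wide alternative is the /proc/sys/net/ipv4/tcp_notsent_lowat sysctl:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	unsigned int lowat = 128 * 1024;	/* cap unsent data at 128 KB */

	if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
		       &lowat, sizeof(lowat)) < 0)
		perror("setsockopt(TCP_NOTSENT_LOWAT)");

	/* poll()/select()/epoll() will now report POLLOUT only while the
	 * amount of unsent data on this socket is below lowat.
	 */
	return 0;
}
```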
-
Eric Dumazet authored
Several call sites use the following hardcoded condition: sk_stream_wspace(sk) >= sk_stream_min_wspace(sk). Let's use a helper, because TCP_NOTSENT_LOWAT support will change this condition for TCP sockets. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
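A sketch of such a helper wrapping the repeated condition; the name sk_stream_is_writeable is assumed from context rather than quoted from the patch:

```c
#include <net/sock.h>

/* true when the socket still has enough write space for the caller to
 * queue more data (the condition the call sites currently open-code)
 */
static inline bool sk_stream_is_writeable(const struct sock *sk)
{
	return sk_stream_wspace(sk) >= sk_stream_min_wspace(sk);
}
```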
-
Daniel Borkmann authored
After this file has moved to the uapi section, we also need to update this in the maintainers file. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
The SCTP mailing list address to send patches or questions to is linux-sctp@vger.kernel.org and not lksctp-developers@lists.sourceforge.net anymore. Therefore, update all occurrences. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mugunthan V N authored
Add support to show CPSW hardware statistics to the user via ethtool, so the user can find out whether any errors were reported by the hardware or the system is overloaded during high-data-rate transfers. Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
dingtianhong authored
The error was found by the checkpatch.pl tool. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
dingtianhong authored
We need rtnl protection while reading slave_cnt and updating .fail_over_mac; this also follows the logic "don't change anything slave-related without rtnl". :) Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
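A sketch of the pattern the message describes for a bonding sysfs store handler; the function name and return handling are illustrative, only the rtnl pattern itself is the point:

```c
/* Take rtnl before touching slave-related state; restart the syscall
 * rather than block if the lock is contended (common sysfs pattern).
 */
static ssize_t bonding_store_fail_over_mac(struct bonding *bond, int new_value)
{
	if (!rtnl_trylock())
		return restart_syscall();

	/* rtnl held: safe to read bond->slave_cnt and update
	 * bond->params.fail_over_mac here
	 */
	if (bond->slave_cnt == 0)
		bond->params.fail_over_mac = new_value;

	rtnl_unlock();
	return 0;
}
```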
-
dingtianhong authored
net/bonding/bond_sysfs.c:1302: ERROR: else should follow close brace '}' net/bonding/bond_sysfs.c:1314: ERROR: else should follow close brace '}' Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
dingtianhong authored
The slave_xxx_netpoll functions call synchronize_rcu_bh(), so they may schedule and sleep; they shouldn't be called under spinlocks. bond_netpoll_setup() and bond_netpoll_cleanup() are always protected by the rtnl lock, so there is no need to take the read lock, as the slave list cannot be changed outside the rtnl lock. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neel Patel authored
This patch moves all enic ethtool hooks from enic_main.c to a new file enic_ethtool.c Signed-off-by: Neel Patel <neepatel@cisco.com> Signed-off-by: Christian Benvenuti <benve@cisco.com> Signed-off-by: Nishank Trivedi <nistrive@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Jul, 2013 4 commits
-
-
Andi Shyti authored
This patch gets rid of the following warning: net/9p/trans_rdma.c:594:12: warning: ‘rdma_cancelled’ defined but not used [-Wunused-function] The rdma_cancelled function is not called anywhere in the kernel. Signed-off-by: Andi Shyti <andi@etezian.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Sathya Perla says: ==================== The following patches are mostly for providing MAC filtering ability for VFs. Pls apply. Thanks! ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
Currently the UC-list is being deleted from the HW MAC table, but the primary MAC is not. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
On SH-R and Lancer-R, the GET_MAC_LIST cmd is better supported (instead of the NTWK_MAC_QUERY cmd) to query provisioned MAC addresses. Similarly, on SH-R and Lancer-R, SET_MAC_LIST must be used by the PF to provision a permanent MAC address to the VF. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-