- 22 Jul, 2012 20 commits
-
-
Rick Jones authored
The section titled "Configuring Bonding for Maximum Throughput" is actually section twelve, not thirteen, and there are a couple of words spelled incorrectly. Signed-off-by: Rick Jones <rick.jones2@hp.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
Instead of updating the sk_cgrp_prioidx struct field on every send, this only updates the field when a task is moved via the cgroup infrastructure. This allows sockets that may be used by a kernel worker thread to be managed. For example, in the iscsi case today a user can put iscsid in a netprio cgroup and control traffic will be sent with the correct sk_cgrp_prioidx value set, but as soon as data is sent the kernel worker thread issues a send and sk_cgrp_prioidx is updated with the kernel worker thread's value, which is the default case. It seems more correct to only update the field when the user explicitly sets it via the control group infrastructure. This allows users to manage sockets that may be used with other threads. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
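A conceptual sketch of the change described above; the hook name is illustrative and this is not the kernel's actual netprio implementation, only the idea of writing the index once at attach time rather than refreshing it on every send:

    /* Illustrative only: set the socket's index when its owning task is
     * attached to a netprio cgroup, and leave the send path alone. */
    static void netprio_attach_sock(struct sock *sk, u32 prioidx)
    {
            sk->sk_cgrp_prioidx = prioidx;  /* sticky until the next explicit move */
    }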
-
Michael S. Tsirkin authored
Let vhost-net utilize zero copy tx when used with tun. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael S. Tsirkin authored
Export skb_copy_ubufs so that modules can orphan frags. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael S. Tsirkin authored
Zero copy packets are normally sent to the outside network, but bridging, tun, etc. might loop them back to the host networking stack. If this happens, destructors will never be called, so orphan the frags immediately on receive. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael S. Tsirkin authored
tun xmit is actually receive for the internal tun socket. Orphan the frags the same as we do for the normal rx path. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael S. Tsirkin authored
Reduce code duplication a bit using the new helper. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael S. Tsirkin authored
Many places do: if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY)) skb_copy_ubufs(skb, gfp_mask); to copy and invoke frag destructors if necessary. Add an inline helper for this. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
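A minimal sketch of the kind of inline helper the commit describes, assuming the wrapper simply folds the flag test around skb_copy_ubufs (the exact name and return convention in the kernel may differ):

    static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
    {
            if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY)))
                    return 0;
            return skb_copy_ubufs(skb, gfp_mask);
    }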
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Richard Cochran authored
This driver now offers software transmit time stamping, so it should advertise that fact via ethtool. Compile tested only. Signed-off-by: Richard Cochran <richardcochran@gmail.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Richard Cochran authored
This driver now offers software transmit time stamping, so it should advertise that fact via ethtool. Compile tested only. Signed-off-by: Richard Cochran <richardcochran@gmail.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: e1000-devel@lists.sourceforge.net Signed-off-by: David S. Miller <davem@davemloft.net>
-
Richard Cochran authored
This driver now offers software transmit time stamping, so it should advertise that fact via ethtool. Compile tested only. Signed-off-by: Richard Cochran <richardcochran@gmail.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: e1000-devel@lists.sourceforge.net Signed-off-by: David S. Miller <davem@davemloft.net>
-
Richard Cochran authored
This driver now offers software transmit time stamping, so it should advertise that fact via ethtool. Compile tested only. Signed-off-by: Richard Cochran <richardcochran@gmail.com> Cc: Eilon Greenstein <eilong@broadcom.com> Cc: Willem de Bruijn <willemb@google.com> Acked-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark A. Greer authored
Commit 76ff5cc9 (rtnl: allow to specify number of rx and tx queues on device creation) added a reference to the net_device structure's 'num_rx_queues' member in net/core/rtnetlink.c:rtnl_fill_ifinfo() However, the definition for 'num_rx_queues' is surrounded by an '#ifdef CONFIG_RPS' while the new reference to it is not. This causes a compile error when CONFIG_RPS is not defined. Fix the compile error by surrounding the new reference to 'num_rx_queues' by an '#ifdef CONFIG_RPS'. CC: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Mark A. Greer <mgreer@animalcreek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
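A hedged sketch of the shape of the fix, with the attribute name taken from the referenced commit and the surrounding code purely illustrative: the new netlink attribute is emitted only when the field it reads actually exists.

    #ifdef CONFIG_RPS
            if (nla_put_u32(skb, IFLA_NUM_RX_QUEUES, dev->num_rx_queues))
                    goto nla_put_failure;
    #endif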
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
David S. Miller authored
Jeff Kirsher says:
--------------------
This series contains updates to ixgbe and ixgbevf. ...

Akeem G. Abodunrin (1):
  igb: reset PHY in the link_up process to recover PHY setting after power down.

Alexander Duyck (8):
  ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov
  ixgbe: Change how we check for pre-existing and assigned VFs
  ixgbevf: Add lock around mailbox ops to prevent simultaneous access
  ixgbevf: Add support for PCI error handling
  ixgbe: Fix handling of FDIR_HASH flag
  ixgbe: Reduce Rx header size to what is actually used
  ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based on UP
  ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy interrupts

Don Skidmore (1):
  ixgbe: add support for new 82599 device

Greg Rose (1):
  ixgbevf: Fix namespace issue with ixgbe_write_eitr

John Fastabend (2):
  ixgbe: fix RAR entry counting for generic and fdb_add()
  ixgbe: remove extra unused queues in DCB + FCoE case
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
-
Randy Dunlap authored
Fix printk format warnings in drivers/net/wimax/i2400m: drivers/net/wimax/i2400m/control.c: warning: format '%zu' expects argument of type 'size_t', but argument 4 has type 'ssize_t' [-Wformat] drivers/net/wimax/i2400m/control.c: warning: format '%zu' expects argument of type 'size_t', but argument 5 has type 'ssize_t' [-Wformat] drivers/net/wimax/i2400m/usb-fw.c: warning: format '%zu' expects argument of type 'size_t', but argument 4 has type 'ssize_t' [-Wformat] I don't see these warnings on x86. The warnings that are quoted above are from Geert's kernel build reports. Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> Cc: linux-wimax@intel.com Cc: wimax@linuxwimax.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neil Horman authored
I've seen several attempts recently made to do quick failover of sctp transports by reducing various retransmit timers and counters. While it's possible to implement a faster failover on multihomed sctp associations, it's not particularly robust, in that it can lead to unneeded retransmits, as well as false connection failures due to intermittent latency on a network. Instead, let's implement the new ietf quick failover draft found here: http://tools.ietf.org/html/draft-nishida-tsvwg-sctp-failover-05 This will let the sctp stack identify transports that have had a small number of errors, and quickly avoid using them until their reliability can be re-established. I've tested this out on two virt guests connected via multiple isolated virt networks and believe it's in compliance with the above draft and works well. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> CC: Vlad Yasevich <vyasevich@gmail.com> CC: Sridhar Samudrala <sri@us.ibm.com> CC: "David S. Miller" <davem@davemloft.net> CC: linux-sctp@vger.kernel.org CC: joe@perches.com Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
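A conceptual sketch of the draft's idea; the field, threshold, and helper names are illustrative, not the sctp stack's actual members: a transport whose error count has crossed a small "potentially failed" threshold but not yet the path maximum is demoted, so new data prefers other paths while heartbeats keep probing it.

    /* Illustrative only */
    if (transport->error_count > pf_retrans_threshold &&
        transport->error_count <= path_max_retrans)
            mark_transport_potentially_failed(transport);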
-
Kevin Groeneveld authored
Fix race condition in several network drivers when reading stats on 32bit UP architectures. These drivers update their stats in a BH context and therefore should use u64_stats_fetch_begin_bh/u64_stats_fetch_retry_bh instead of u64_stats_fetch_begin/u64_stats_fetch_retry when reading the stats. Signed-off-by: Kevin Groeneveld <kgroeneveld@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
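The reader-side pattern being switched to looks roughly like this (illustrative; each affected driver keeps its own stats and syncp members):

    unsigned int start;
    u64 packets, bytes;

    do {
            start   = u64_stats_fetch_begin_bh(&ring->syncp);
            packets = ring->stats.packets;
            bytes   = ring->stats.bytes;
    } while (u64_stats_fetch_retry_bh(&ring->syncp, start));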
-
Eric Dumazet authored
Set unicast_sock uc_ttl to -1 so that we select the right ttl, instead of sending packets with a 0 ttl. Bug added in commit be9f4a44 (ipv4: tcp: remove per net tcp_sock) Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
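The shape of the fix as described, sketched as a partial initializer (the real unicast_sock definition in ip_output.c has more fields than shown here and may be declared differently):

    static struct inet_sock unicast_sock = {
            /* other members intentionally omitted from this sketch */
            .uc_ttl = -1,   /* pick the route's default TTL instead of 0 */
    };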
-
- 21 Jul, 2012 15 commits
-
-
Akeem G. Abodunrin authored
There was a previous patch to resolve an issue with the 82576 losing its PHY setting after PHY power down. However, that previous implementation triggered a speed mismatch and occasional link loss. Now, this patch resolves both the initial PHY setting and the speed mismatch issues. Signed-off-by: Akeem G. Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change makes it so that we can use 1TC DCB in the case of MSI and legacy interrupts. The advantage to this is that it allows us to fully support FCoE w/ DCB instead of having to drop to link flow control only when using these interrupt modes. Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Don Skidmore authored
This patch adds support for a new 82599 device that supports WoL. Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
With DCB and FCoE configured extra queues may be allocated and never used. After this patch we calculate the max correctly. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
Do RAR entry accounting correctly so that errors are reported and promisc mode is set correctly when the number of entries exceeds the hardware limits. This can happen with many macvlan devices attached to the PF or by adding many fdb entries in SR-IOV modes. Also, this includes a small refactor to fdb_add() to avoid having so many nested if/else statements after adding a check for the number of RAR entries. The max entries for the PF is currently 16; we allow 15 additional entries to account for the defined MAC. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change makes it so the function ixgbe_dcb_get_tc_from_up will use the num_tcs.pg_tcs to determine the starting value for determining a traffic class based on a user priority. The main motivation for this change is to address possible bad configurations in which more TCs' worth of data are populated than there are actual TCs. By limiting this value we can at least make certain we are not providing a map with values that are out of range. As a result, any user priorities that are set up in the configuration with a traffic class mapping higher than what the hardware supports will be reported as being on TC 0. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
The recent changes to netdev_alloc_skb actually make it so that the size of the buffer now actually has a more direct input on the truesize. So in order to make best use of the piece of a page we are allocated I am reducing the IXGBE_RX_HDR_SIZE to 256 so that our truesize will be reduced by 256 bytes as well. This should result in performance improvements since the number of uses per page should increase from 4 to 6 in the case of a 4K page. In addition we should see socket performance improvements due to the truesize dropping to less than 1K for buffers less than 256 bytes. Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Greg Rose authored
Make the function static to clean up the namespace. Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change makes it so that we can use the atr_sample_rate to determine if we are capable of supporting ATR. The advantage to this approach is that it allows us to now determine the setting of the IXGBE_FLAG_FDIR_HASH_CAPABLE based on the queueing scheme, instead of the queueing scheme being based on the flag. Using this approach there are essentially 5 conditions that must be checked prior to trying to enable ATR: 1. Is SR-IOV disabled? 2. Is the number of TCs <= 1? 3. Is the RSS queueing limit greater than 1? 4. Is atr_sample_rate set? 5. Is Flow Director perfect filtering disabled? If any of these conditions is not met, ATR filtering should be disabled. Note that in the case of conditions 1 through 4 being met we will set things up for ATR queueing; however, if test 5 fails we will still leave the queues allocated for use by perfect filters. The reason for this is to allow us to switch back and forth between ntuple and ATR without needing to reallocate the descriptor rings. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
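Expressed as a single check, the five conditions read roughly like this; a sketch with illustrative variable names, not the driver's exact fields:

    bool can_enable_atr = !sriov_enabled                /* 1 */
                          && num_tcs <= 1               /* 2 */
                          && rss_queue_limit > 1        /* 3 */
                          && adapter->atr_sample_rate   /* 4 */
                          && !fdir_perfect_enabled;     /* 5 */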
-
Alexander Duyck authored
This change adds support for handling IO errors and slot resets. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change adds a spinlock around the mailbox accesses to prevent simultaneous access to the mailboxes. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This patch does two things. First, it drops the unnecessary work of searching for enabled VFs when we first bring up the adapter and instead just uses pci_num_vf to determine how many VFs are enabled on the adapter. The second thing it does is drop the use of vfdev from the vf_data_storage structure. Instead we just search the entire system for a VF that has us as its PF, and then if that VF is assigned we indicate that the VFs are assigned. This allows us to still check for assigned VFs even if the vfinfo allocation has failed, or vfinfo has been freed. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This is meant to fix a bug in which we were not checking for pre-existing VFs if we were not setting the max_vfs value at driver load. What happens now is that we always call ixgbe_enable_sriov, and this checks for pre-existing VFs or requested VFs prior to deciding on no SR-IOV. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Stefan Hajnoczi authored
The vhost work queue allows processing to be done in vhost worker thread context, which uses the owner process mm. Access to the vring and guest memory is typically only possible from vhost worker context so it is useful to allow work to be queued directly by users. Currently vhost_net only uses the poll wrappers which do not expose the work queue functions. However, for tcm_vhost (vhost_scsi) it will be necessary to queue custom work. Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com> Cc: Zhi Yong Wu <wuzhy@cn.ibm.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
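A hedged sketch of what queuing custom work could look like for a vhost device; the callback and device fields are illustrative, and only the vhost_work_init/vhost_work_queue pairing reflects the API this commit exposes:

    /* Runs in the vhost worker thread, i.e. with the owner's mm. */
    static void tcm_vhost_evt_work(struct vhost_work *work)
    {
            /* custom device processing would go here */
    }

    vhost_work_init(&s->evt_work, tcm_vhost_evt_work);
    vhost_work_queue(&s->dev, &s->evt_work);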
-
Stefan Hajnoczi authored
In order for other vhost devices to use the VHOST_FEATURES bits the vhost-net specific bits need to be moved to their own VHOST_NET_FEATURES constant. (Asias: Update drivers/vhost/test.c to use VHOST_NET_FEATURES) Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com> Cc: Zhi Yong Wu <wuzhy@cn.ibm.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Asias He <asias@redhat.com> Signed-off-by: Nicholas A. Bellinger <nab@risingtidesystems.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
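Conceptually, the split leaves the generic bits in VHOST_FEATURES and layers the net-only bits on top; a sketch of the idea (the exact bit list lives in the vhost headers):

    #define VHOST_NET_FEATURES  (VHOST_FEATURES | (1ULL << VHOST_NET_F_VIRTIO_NET_HDR))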
-
- 20 Jul, 2012 5 commits
-
-
Denis Efremov authored
Replace the spin_lock_irq/spin_unlock_irq pair in the interrupt handler with a spin_lock_irqsave/spin_unlock_irqrestore pair. Found by the Linux Driver Verification project (linuxtesting.org). Signed-off-by: Denis Efremov <yefremov.denis@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
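The pattern being applied, in outline (the lock and its container are illustrative):

    unsigned long flags;

    spin_lock_irqsave(&lp->lock, flags);
    /* interrupt handling that previously sat between
     * spin_lock_irq() and spin_unlock_irq() */
    spin_unlock_irqrestore(&lp->lock, flags);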
-
git://git.kernel.org/pub/scm/linux/kernel/git/jesse/openvswitch
David S. Miller authored
Jesse Gross says:
====================
A few bug fixes and small enhancements for net-next/3.6. ...

Ansis Atteka (1):
  openvswitch: Do not send notification if ovs_vport_set_options() failed

Ben Pfaff (1):
  openvswitch: Check gso_type for correct sk_buff in queue_gso_packets().

Jesse Gross (2):
  openvswitch: Enable retrieval of TCP flags from IPv6 traffic.
  openvswitch: Reset upper layer protocol info on internal devices.

Leo Alterman (1):
  openvswitch: Fix typo in documentation.

Pravin B Shelar (1):
  openvswitch: Check currect return value from skb_gso_segment()

Raju Subramanian (1):
  openvswitch: Replace Nicira Networks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
We were using a special key "0" for all loopback and point-to-point device neigh lookups under ipv4, but we wouldn't use that special key for the neigh creation. So basically we'd make a new neigh at each and every lookup :-) This special case to use only one neigh for these device types is of dubious value, so just remove it entirely. Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Leo Alterman authored
Signed-off-by: Leo Alterman <lalterman@nicira.com> Signed-off-by: Jesse Gross <jesse@nicira.com>
-
Ben Pfaff authored
At the point where it was used, skb_shinfo(skb)->gso_type referred to a post-GSO sk_buff. Thus, it would always be 0. We want to know the pre-GSO gso_type, so we need to obtain it before segmenting. Before this change, the kernel would pass inconsistent data to userspace: packets for UDP fragments with nonzero offset would be passed along with flow keys that indicate a zero offset (that is, the flow key for "later" fragments claimed to be "first" fragments). This inconsistency tended to confuse Open vSwitch userspace, causing it to log messages about "failed to flow_del" the flows with "later" fragments. Signed-off-by: Ben Pfaff <blp@nicira.com> Signed-off-by: Jesse Gross <jesse@nicira.com>
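A sketch of the fix's shape, with illustrative variable names and feature flags: the pre-GSO gso_type is read from the original skb before segmentation and used afterwards.

    unsigned int gso_type = skb_shinfo(skb)->gso_type;   /* pre-GSO value */
    struct sk_buff *segs  = skb_gso_segment(skb, NETIF_F_SG);
    /* build per-fragment flow keys from 'gso_type' rather than from
     * skb_shinfo(segs)->gso_type, which is 0 after segmentation */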
-