- 29 Oct, 2016 8 commits
-
-
Alan Brady authored
There exists a bug in which a 'perfect storm' can occur and cause interrupts to fail to be correctly affinitized. This causes unexpected behavior and has a substantial impact on performance when it happens. The bug occurs if there is heavy traffic, any number of CPUs that have an i40e interrupt are pegged at 100%, and the interrupt affinity for those CPUs is changed. Instead of moving to the new CPU, the interrupt continues to be polled while there is heavy traffic. The bug is most readily seen when the driver is first brought up and all interrupts start on CPU0. If there is heavy traffic and the interrupt starts polling before the interrupt is affinitized, the interrupt will be stuck on CPU0 until traffic stops. The bug can also be triggered more simply by affinitizing all the interrupts to a single CPU and then attempting to move any of those interrupts off that CPU while there is heavy traffic. This patch fixes the bug by registering for update notifications from the kernel when the interrupt affinity changes. When that notification fires, we cache the intended affinity mask. Then, while polling, if the CPU is pegged at 100% and we failed to clean the rings, we check to make sure we have the correct affinity and stop polling if we're firing on the wrong CPU. When the kernel successfully moves the interrupt, it will start polling on the correct CPU. The performance impact is minimal since the only time this section gets executed is when performance is already compromised by the CPU. Change-ID: I4410a880159b9dba1f8297aa72bef36dca34e830 Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
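The mechanism described above maps onto the kernel's IRQ affinity notifier API. A minimal sketch of the idea follows; the struct and function names are illustrative placeholders, not the exact i40e code:

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>
    #include <linux/smp.h>

    /* Hypothetical per-queue-vector state. */
    struct my_q_vector {
            struct irq_affinity_notify affinity_notify;
            cpumask_t affinity_mask;
    };

    /* Called by the IRQ core when userspace changes the interrupt's affinity. */
    static void my_affinity_notify(struct irq_affinity_notify *notify,
                                   const cpumask_t *mask)
    {
            struct my_q_vector *q = container_of(notify, struct my_q_vector,
                                                 affinity_notify);

            cpumask_copy(&q->affinity_mask, mask);  /* cache the intended mask */
    }

    static void my_affinity_release(struct kref *ref)
    {
            /* nothing to free in this sketch */
    }

    static void my_setup_affinity_notify(struct my_q_vector *q, unsigned int irq)
    {
            q->affinity_notify.notify = my_affinity_notify;
            q->affinity_notify.release = my_affinity_release;
            irq_set_affinity_notifier(irq, &q->affinity_notify);
    }

    /* In the NAPI poll routine, when the rings could not be fully cleaned: */
    static bool my_polling_on_wrong_cpu(struct my_q_vector *q)
    {
            /* Stop polling if we are running on a CPU that is no longer in
             * the cached affinity mask, so the IRQ can move to the right CPU
             * and polling resumes there.
             */
            return !cpumask_test_cpu(smp_processor_id(), &q->affinity_mask);
    }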
-
Preethi Banala authored
Group together the minimum set of offload capabilities that are always supported by a VF in base mode. This define will be used by the PF to make sure a VF in base mode gets the minimum set of base capabilities. Change-ID: Id5e8f22ba169c8f0a38d22fc36b2cb531c02582c Signed-off-by: Preethi Banala <preethi.banala@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
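Such a change essentially boils down to a single grouped define; a hedged sketch (the flag names approximate the i40e virtchnl interface of that era and should be treated as illustrative):

    /* Minimum set of offloads every VF in base mode is expected to support. */
    #define I40E_VF_BASE_MODE_OFFLOADS (I40E_VIRTCHNL_VF_OFFLOAD_L2 | \
                                        I40E_VIRTCHNL_VF_OFFLOAD_VLAN | \
                                        I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF)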
-
Mitch Williams authored
Allow the client interface to reopen existing clients if they were closed. This allows clients to recover from reset, which is essential for supporting VF RDMA. In one instance, the driver was not clearing the open bit when the client was closed. Add the code to clear this bit so that the state is accurate and the driver will not attempt to reopen already-open clients. Remove the ref_cnt variable; it was just getting in the way and was not being used consistently. Change-ID: Ic71af4553b096963ac0c56a997f887c9a4ed162d Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
We cannot currently support SCTP in the hardware, and IPV4_FLOW is not used anywhere by the software, so we can go through and drop the functionality related to these two flow types. In addition, we cannot support masking based on the protocol value, so if the user is expecting a value other than TCP or UDP we should simply return an error rather than trying to allocate a filter for a rule that will only partially match what the user requested. Change-ID: I10d52bb97d8104d76255fe244551814ff9531a63 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
The function is not used so there is no need to carry it forward. I have plans to add a slightly different function that can be inlined to handle the same kind of functionality. Change-ID: Ie2dfcb189dc75e5fbc156bac23003e3b4210ae0f Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Filip Sadowski authored
An incorrect bit mask was used for testing the "get link status" response. Instead of I40E_AQ_LSE_ENABLE (which is actually 0x03) it most probably should be I40E_AQ_LSE_IS_ENABLED (which is defined as 0x01). Change-ID: Ia199142906720507f847de3a33a25c61a9781b2f Signed-off-by: Filip Sadowski <filip.sadowski@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
We can reorder the busy wait loop at the start of the Flow Director transmit function to reduce the overall code size while still retaining the same functionality. As such I am taking advantage of the opportunity to do so. Change-ID: I34c403ca001953c6ac9816e65d5305e73d869026 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Carolyn Wyborny authored
This patch fixes a problem in the client interface that was causing random stack traces in RDMA driver load and unload tests. It does so by checking for an existing client before trying to open it. Without this patch, there is a timing-related NULL pointer dereference. Change-ID: Ib73d30671a27f6f9770dd53b3e5292b88d6b62da Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 28 Oct, 2016 8 commits
-
-
Colin Ian King authored
Some of the pr_* messages are missing spaces, so insert them and also unbreak multi-line literal strings in pr_* messages. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
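As an illustration of the kind of cleanup involved (a hypothetical message, not a specific line from the patch):

    /* Before: missing space at the split point and a user-visible string
     * broken across source lines, which makes it hard to grep for.
     */
    pr_err("something failed"
           "with error %d\n", err);

    /* After: space restored and the literal kept on one line. */
    pr_err("something failed with error %d\n", err);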
-
David S. Miller authored
Jiri Pirko says: ==================== mlxsw: small driver update For details, see individual patches. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Do this so the sysfs has "device" link correctly set. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
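The commit message is terse; the standard way to get the sysfs "device" link populated is to associate the netdev with its parent struct device before registration. A hedged, generic sketch (the parent pointer is a placeholder for the bus device, e.g. &pdev->dev on PCI):

    /* Must be done before register_netdev() so that
     * /sys/class/net/<ifname>/device points at the underlying bus device.
     */
    SET_NETDEV_DEV(dev, parent);
    err = register_netdev(dev);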
-
Jiri Pirko authored
Do this so the sysfs has "device" link correctly set. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
So far, mlxsw_pci.ko has been the module that registers the PCI table for all drivers (spectrum and switchx2). That is problematic, for example with dracut. Since mlxsw_spectrum.ko and mlxsw_switchx2.ko are loaded dynamically from within mlxsw_core.ko, dracut does not keep track of them and they end up excluded from the initramfs. So do this the ordinary way and define the PCI tables in the individual driver modules, so they can be properly loaded and included in the dracut initramfs image. As a side effect, this patch removes the no longer necessary driver "kind" strings which were used to link PCI IDs with individual mlxsw drivers. Suggested-by: Ivan Vecera <ivecera@redhat.com> Tested-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Acked-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
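The "ordinary way" refers to each driver module carrying its own PCI ID table and exposing it via MODULE_DEVICE_TABLE(), so that module aliases let udev/modprobe/dracut map PCI IDs to the right module. A generic sketch (vendor/device IDs and names below are placeholders, not the exact mlxsw table):

    #include <linux/module.h>
    #include <linux/pci.h>

    static const struct pci_device_id my_driver_pci_id_table[] = {
            { PCI_VDEVICE(MELLANOX, 0xcb84), 0 },   /* placeholder device ID */
            { },
    };
    /* Emits the modalias entries that userspace tooling uses to pick this module. */
    MODULE_DEVICE_TABLE(pci, my_driver_pci_id_table);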
-
Jiri Pirko authored
pci.h needs to be used for internal function declarations, so move the original one to a more appropriate name, pci_hw.h. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
David S. Miller authored
Steffen Klassert says: ==================== pull request (net-next): ipsec-next 2016-10-25 Just a leftover from the last development cycle. 1) Remove some unused code, from Florian Westphal. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 Oct, 2016 9 commits
-
-
Andrey Vagin authored
No one can see these events, because a network namespace cannot be destroyed while it still has sockets. Unlike other devices, uevents for network devices are generated only inside their network namespaces. They are filtered out in kobj_bcast_filter(). My experiments show that net namespaces are destroyed more than 30% faster with this optimization. Here is a perf output for destroying network namespaces without this patch:

    - 94.76%  0.02%  kworker/u48:1  [kernel.kallsyms]  [k] cleanup_net
       - 94.74% cleanup_net
          - 94.64% ops_exit_list.isra.4
             - 41.61% default_device_exit_batch
                - 41.47% unregister_netdevice_many
                   - rollback_registered_many
                      - 40.36% netdev_unregister_kobject
                         - 14.55% device_del
                            + 13.71% kobject_uevent
                         - 13.04% netdev_queue_update_kobjects
                            + 12.96% kobject_put
                         - 12.72% net_rx_queue_update_kobjects
                              kobject_put
                            - kobject_release
                               + 12.69% kobject_uevent
                   + 0.80% call_netdevice_notifiers_info
             + 19.57% nfsd_exit_net
             + 11.15% tcp_net_metrics_exit
             + 8.25% rpcsec_gss_exit_net

It is critical to optimize the exit path for network namespaces, because they are destroyed under net_mutex and many namespaces can be destroyed in one iteration. v2: use dev_set_uevent_suppress() Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andrei Vagin <avagin@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
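The v2 approach mentioned above relies on the driver-core helper dev_set_uevent_suppress(). A hedged sketch of how it is typically applied on the netdev unregister path (the condition is illustrative only, standing in for the check that the owning network namespace is going away):

    /* While tearing down a dying network namespace there is nobody left to
     * receive the uevents, so suppress them before deleting the device.
     */
    if (netns_is_exiting)                   /* illustrative condition */
            dev_set_uevent_suppress(&ndev->dev, 1);
    device_del(&ndev->dev);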
-
Stefan Richter authored
Fixes: d894be57 ("ethernet: use net core MTU range checking in more drivers") CC: Jarod Wilson <jarod@redhat.com> CC: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de> Acked-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Johannes Berg says: ==================== genetlink improvements This series contains some generic netlink improvements, making the API safer to use, and making the function pointers in the family struct safer by allowing it to be __ro_after_init. The first patch, introducing genl_family_attrbuf(), just ensures that the users of family->attrbuf aren't actually racy, by making them use the indirection function for obtaining a reference and checking that the context can actually do so. The second patch removes the more or less broken ability to have a static family ID, except for the three IDs that need to be static because it's simply needed (genl controller) or due to old API misuse. Everything else couldn't be static anyway, or could fail when the family is registered, if somebody else already got a static ID. The third patch statically initializes the families, mostly to save some code. I wrote this initially because I thought I could make them all const, but that ends up being very inefficient (it would require always doing some kind of family -> id lookup), so now it's just here because I had it already and it reduces the code size. The fourth patch then, finally, lays the groundwork for what I had really wanted - now with __ro_after_init instead of const; I remove the code there to do the ID->family hash table mapping in genetlink and use IDR instead to both allocate and map the IDs, which again ends up saving some code size. Finally, the fifth patch updates all families; as it turns out, no families exist that really dynamically register/unregister. This last patch should perhaps be split up; I could submit it for each subsystem separately, but it'd depend on the second and third to go in first, so that would take a while. I can do that though, if that seems better to you. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
Now genl_register_family() is the only thing (other than the users themselves, perhaps, but I didn't find any doing that) writing to the family struct. In all families that I found, genl_register_family() is only called from __init functions (some indirectly, in which case I've added __init annotations to clarify things), so all can actually be marked __ro_after_init. This protects the data structure from accidental corruption. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
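A hedged sketch of the resulting pattern for a family after this series (attribute/op contents elided; names are placeholders, not a real family):

    #include <linux/module.h>
    #include <net/genetlink.h>

    static const struct genl_ops my_genl_ops[] = {
            /* ... .cmd / .doit entries ... */
    };

    static struct genl_family my_genl_family __ro_after_init = {
            .name    = "MY_FAMILY",
            .version = 1,
            .maxattr = MY_ATTR_MAX,
            .module  = THIS_MODULE,
            .ops     = my_genl_ops,
            .n_ops   = ARRAY_SIZE(my_genl_ops),
    };

    static int __init my_module_init(void)
    {
            /* The family ID is allocated here; the struct is never written
             * again afterwards, hence __ro_after_init is safe.
             */
            return genl_register_family(&my_genl_family);
    }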
-
Johannes Berg authored
Since generic netlink family IDs are small integers, allocated densely, IDR is an ideal match for lookups. Replace the existing hand-written hash-table with IDR for allocation and lookup. This lets the families only be written to once, during register, since the list_head can be removed and removal of a family won't cause any writes. It also slightly reduces the code size (by about 1.3k on x86-64). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
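For reference, the IDR API used for this kind of dense small-integer ID allocation and lookup looks roughly like the following generic sketch (not the genetlink code itself):

    static DEFINE_IDR(my_idr);

    /* Allocate the lowest free id in [start, end) and bind it to 'obj'. */
    id = idr_alloc(&my_idr, obj, start, end, GFP_KERNEL);
    if (id < 0)
            return id;

    /* Look up the object bound to 'id'. */
    obj = idr_find(&my_idr, id);

    /* Release the id when the object goes away. */
    idr_remove(&my_idr, id);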
-
Johannes Berg authored
Instead of providing macros/inline functions to initialize the families, make all users initialize them statically and get rid of the macros. This reduces the kernel code size by about 1.6k on x86-64 (with allyesconfig). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
Static family IDs have never really been used, the only use case was the workaround I introduced for those users that assumed their family ID was also their multicast group ID. Additionally, because static family IDs would never be reserved by the generic netlink code, using a relatively low ID would only work for built-in families that can be registered immediately after generic netlink is started, which is basically only the control family (apart from the workaround code, which I also had to add code for so it would reserve those IDs) Thus, anything other than GENL_ID_GENERATE is flawed and luckily not used except in the cases I mentioned. Move those workarounds into a few lines of code, and then get rid of GENL_ID_GENERATE entirely, making it more robust. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
This helper function allows family implementations to access their family's attrbuf. This gets rid of the attrbuf usage in families, and also adds locking validation, since it's not valid to use the attrbuf with parallel_ops or outside of the dumpit callback. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
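A hedged usage sketch: the helper is only valid from a non-parallel family's dumpit callback, and the family re-parses the dump request into the returned buffer before using it (family, attribute, and policy names below are placeholders):

    static int my_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
    {
            struct nlattr **attrs = genl_family_attrbuf(&my_genl_family);
            int err;

            /* Re-parse the request into the family's attrbuf before use. */
            err = nlmsg_parse(cb->nlh, GENL_HDRLEN + my_genl_family.hdrsize,
                              attrs, MY_ATTR_MAX, my_policy);
            if (err < 0)
                    return err;

            if (attrs[MY_ATTR_IFINDEX]) {
                    /* ... use the attribute to scope the dump ... */
            }
            return skb->len;
    }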
-
Antonio Quartulli authored
The user may want to use only some bits of the skb mark in their skbedit rules because the remaining part might be used by something else. Introduce the "mask" parameter to the skbedit action in order to implement such functionality. When the mask is specified, only those bits selected by it are actually changed by the action, while the rest are left untouched. Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
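The masking semantics amount to a read-modify-write of skb->mark; a hedged sketch of the core of it (the field names approximate the skbedit action's private data rather than quoting it exactly):

    if (d->flags & SKBEDIT_F_MARK) {
            /* Only the bits set in 'mask' are taken from the configured
             * mark; all other bits of skb->mark are preserved.
             */
            skb->mark &= ~d->mask;
            skb->mark |= d->mark & d->mask;
    }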
-
- 26 Oct, 2016 15 commits
-
-
Elad Raz authored
When port_type_set() is called and the new port type is the same as the old one, just return success. Signed-off-by: Elad Raz <eladr@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Richter authored
firewire-net, like the older eth1394 driver, reduced the initial MTU to less than 1500 octets if the local link layer controller's asynchronous packet reception limit was lower. This is bogus, since this reception limit does not have anything to do with the transmission limit. Neither did this reduction affect the TX path positively, nor could it prevent link fragmentation at the RX path. Many FireWire CardBus cards have a max_rec of 9, causing an initial MTU of 1024 - 16 = 1008. RFC 2734 and RFC 3146 allow a minimum max_rec = 8, which would result in an initial MTU of 512 - 16 = 496. On such cards, IPv6 could only be employed if the MTU was manually increased to 1280 or more, i.e. IPv6 would not work without intervention from userland. We now always initialize the MTU to 1500, which is the default according to RFC 2734 and RFC 3146. On a VIA VT6316 based CardBus card which was affected by this, changing the MTU from 1008 to 1500 also increases TX bandwidth by 6 %. RX remains unaffected. CC: netdev@vger.kernel.org CC: linux1394-devel@lists.sourceforge.net CC: Jarod Wilson <jarod@redhat.com> Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Richter authored
Commit b3e3893e ("net: use core MTU range checking in misc drivers") mistakenly introduced an upper limit for firewire-net's MTU based on the local link layer controller's reception capability. Revert this. Neither RFC 2734 nor our implementation impose any particular upper limit. Actually, to be on the safe side and to make the code explicit, set ETH_MAX_MTU = 65535 as upper limit now. (I replaced sizeof(struct rfc2734_header) by the equivalent RFC2374_FRAG_HDR_SIZE in order to avoid distracting long/int conversions.) Fixes: b3e3893e('net: use core MTU range checking in misc drivers') CC: netdev@vger.kernel.org CC: linux1394-devel@lists.sourceforge.net CC: Jarod Wilson <jarod@redhat.com> Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de> Acked-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
This node pointer is returned by of_get_child_by_name() with its refcount incremented in this function. Call of_node_put() on it before exiting this function. This is detected by Coccinelle semantic patch. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
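The general pattern being enforced looks like the following generic sketch (node name and return values are illustrative, not the exact driver code):

    struct device_node *np;

    np = of_get_child_by_name(parent, "child-name");
    if (!np)
            return -ENODEV;

    /* ... use np ... */

    of_node_put(np);   /* drop the reference taken by of_get_child_by_name() */
    return 0;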
-
Wei Yongjun authored
Use setup_timer() instead of init_timer(), as it is the preferred/standard way to set up a timer. Also, quoting the mod_timer() function comment: -> mod_timer() is a more efficient way to update the expire field of an active timer (if the timer is inactive it will be activated). Use setup_timer() and mod_timer() to set up and arm a timer, making the code cleaner and easier to read. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
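For reference, the conversion follows the usual pre-timer_setup() (2016-era) pattern; a generic before/after sketch with placeholder names:

    /* Before: open-coded initialization of the timer fields. */
    init_timer(&priv->timer);
    priv->timer.function = my_timer_fn;
    priv->timer.data = (unsigned long)priv;
    priv->timer.expires = jiffies + HZ;
    add_timer(&priv->timer);

    /* After: one call to set it up, one to (re)arm it. */
    setup_timer(&priv->timer, my_timer_fn, (unsigned long)priv);
    mod_timer(&priv->timer, jiffies + HZ);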
-
Wei Yongjun authored
Fix to return error code -ENODEV from the DMA is not supported error handling case instead of 0, as done elsewhere in this function. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
It is not allowed to call kfree_skb() from hardware interrupt context or with interrupts disabled, and here spin_lock_irqsave() ensures we are always in a context with interrupts disabled. So the kfree_skb() should be replaced with dev_kfree_skb_irq(). This is detected by Coccinelle semantic patch. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
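A hedged sketch of the resulting pattern (lock and skb names are illustrative):

    unsigned long flags;

    spin_lock_irqsave(&priv->lock, flags);
    /* Interrupts are disabled here, so kfree_skb() is not allowed;
     * dev_kfree_skb_irq() queues the skb and defers the actual free
     * to softirq context.
     */
    dev_kfree_skb_irq(skb);
    spin_unlock_irqrestore(&priv->lock, flags);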
-
Wei Yongjun authored
Fix to return error code -EINVAL from the error handling case instead of 0, as done elsewhere in this function. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
Use setup_timer function instead of initializing timer with the function and data fields. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
It is not necessary to free memory allocated with devm_kzalloc() in the remove path; doing so with kfree() leads to a double free. Fixes: 84640e27 ("net: netcp: Add Keystone NetCP core ethernet driver") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
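In other words, memory from devm_kzalloc() is released automatically by devres when the device is detached, so the remove path must not free it again. A generic sketch under placeholder names:

    #include <linux/platform_device.h>
    #include <linux/device.h>

    struct my_priv { int dummy; };   /* placeholder private data */

    static int my_probe(struct platform_device *pdev)
    {
            struct my_priv *priv;

            priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
            if (!priv)
                    return -ENOMEM;
            platform_set_drvdata(pdev, priv);
            return 0;
    }

    static int my_remove(struct platform_device *pdev)
    {
            /* No kfree(priv) here: devres frees it after remove() returns,
             * so an explicit kfree() would be a double free.
             */
            return 0;
    }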
-
Sven Eckelmann authored
The maximum MTU is defined via the slave devices of a batman-adv interface. Thus it is not possible to calculate the max_mtu during the creation of the batman-adv device when no slave devices are attached. Doing so would, for example, break non-fragmentation setups, which would then (incorrectly) allow an MTU of 1500 even when the underlying device cannot transport 1500 bytes + batman-adv headers. Checking the dynamically calculated max_mtu against the minimum of the slave devices' MTUs during .ndo_change_mtu is also what the bridge interface does. Cc: Jarod Wilson <jarod@redhat.com> Fixes: b3e3893e ("net: use core MTU range checking in misc drivers") Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
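Conceptually the check moves to MTU-change time, against whatever the current set of slave devices can carry. A hedged sketch, where the minimum-MTU helper stands in for whatever function computes that limit (including header overhead):

    static int my_soft_iface_change_mtu(struct net_device *dev, int new_mtu)
    {
            /* Reject MTUs the current slave devices cannot transport once
             * the batman-adv headers are added; the limit is recomputed
             * every time the MTU is changed.
             */
            if (new_mtu < 68 || new_mtu > my_hardif_min_mtu(dev))
                    return -EINVAL;

            dev->mtu = new_mtu;
            return 0;
    }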
-
David S. Miller authored
Xo Wang says: ==================== Broadcom BCM54612E support This series is based on tip of torvalds/master. The first patch adds register definitions from Broadcom docs. The second patch adds the BCM54612E PHY ID, flags, and device-specific RGMII internal delay initialization. I tested on a custom board with an Aspeed AST2500 SOC with its second MAC connected to this PHY. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xo Wang authored
This PHY has internal delays enabled after reset. This patch clears the internal delay enables unless the interface specifically requests them. Signed-off-by: Xo Wang <xow@google.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Joel Stanley <joel@jms.id.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xo Wang authored
Add the RXD-to-RXC skew (delay) time bit in the Miscellaneous Control shadow register and a mask for the shadow selector field. Remove a re-definition of MII_BCM54XX_AUXCTL_SHDWSEL_AUXCTL. Signed-off-by: Xo Wang <xow@google.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Joel Stanley <joel@jms.id.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
When I prepared commit d250a5f9 ("pkt_sched: gen_estimator: Dont report fake rate estimators"), htb still had an implicit rate estimator for all its classes. Then later, I made this rate estimator optional in commit 64153ce0 ("net_sched: htb: do not setup default rate estimators"), but I forgot to update HTB's use of gnet_stats_copy_rate_est(). After this patch, "tc -s qdisc ..." no longer reports fake rate estimators for HTB classes. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-