- 26 Jan, 2019 15 commits
-
-
Peter Oskolkov authored
This patch adds several changes to the ip_defrag selftest, to cover new IPv6 defrag behavior:
- min IPv6 frag size is now 8 instead of 1280
- new test cases to cover IPv6 defragmentation in nf_conntrack_reasm.c
- a new "permissive" mode in negative (overlap) tests: netfilter sometimes drops invalid packets without passing them to IPv6 underneath, and thus defragmentation sometimes succeeds when it is expected to fail; so the permissive mode does not fail the test if the correct reassembled datagram is received instead of a timeout (see the sketch below)
Signed-off-by: Peter Oskolkov <posk@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
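A minimal sketch of what such a permissive check could look like; the function name, the permissive_mode flag, and the error handling are illustrative assumptions, not the actual selftest code:

    #include <errno.h>
    #include <error.h>
    #include <stdbool.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static bool permissive_mode;        /* set when netfilter reassembly is in play */

    static void check_negative_result(int fd, const char *expected, size_t len)
    {
            char buf[65536];
            /* the socket has SO_RCVTIMEO set, so recv() can time out */
            ssize_t n = recv(fd, buf, sizeof(buf), 0);

            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                    return; /* timeout: the overlapping fragments were dropped */
            if (permissive_mode && n == (ssize_t)len && !memcmp(buf, expected, len))
                    return; /* correct reassembly is tolerated, not a failure */
            error(1, 0, "negative test: unexpected datagram received");
    }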
-
Peter Oskolkov authored
Currently, IPv6 defragmentation code drops non-last fragments that are smaller than 1280 bytes: see commit 0ed4229b ("ipv6: defrag: drop non-last frags smaller than min mtu"). This behavior is not specified in IPv6 RFCs and appears to break compatibility with some IPv6 implementations, as reported here: https://www.spinics.net/lists/netdev/msg543846.html This patch re-uses common IP defragmentation queueing and reassembly code in IPv6 defragmentation in nf_conntrack, removing the 1280 byte restriction. Signed-off-by: Peter Oskolkov <posk@google.com> Reported-by: Tom Herbert <tom@herbertland.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter Oskolkov authored
Currently, IPv6 defragmentation code drops non-last fragments that are smaller than 1280 bytes: see commit 0ed4229b ("ipv6: defrag: drop non-last frags smaller than min mtu"). This behavior is not specified in IPv6 RFCs and appears to break compatibility with some IPv6 implementations, as reported here: https://www.spinics.net/lists/netdev/msg543846.html This patch re-uses common IP defragmentation queueing and reassembly code in IPv6, removing the 1280 byte restriction. Signed-off-by: Peter Oskolkov <posk@google.com> Reported-by: Tom Herbert <tom@herbertland.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter Oskolkov authored
This is a refactoring patch: without changing runtime behavior, it moves rbtree-related code from IPv4-specific files/functions into .h/.c defrag files shared with IPv6 defragmentation code. Signed-off-by: Peter Oskolkov <posk@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Florian Westphal <fw@strlen.de> Cc: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Julian Wiedmann says:
====================
s390/qeth: updates 2019-01-25

Please apply a first batch of qeth patches for net-next, primarily touching the net_device parts of the driver. In addition to the usual refactoring & code consolidation, patch 7 makes use of netif_device_detach() to let the stack know when our control plane is down. This helps quite a bit with regard to overall locking and proper init/shutdown sequencing.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
For recovery purposes, qeth keeps track of all registered VIDs. Replace this by using the infrastructure introduced in commit 9daae9bd ("net: Call add/kill vid ndo on vlan filter feature toggling"). By managing NETIF_F_HW_VLAN_CTAG_FILTER as a hw_feature, netdev_update_features() will select it from dev->wanted_features and replay all of the netdevice's VIDs to its ndo_vlan_rx_add_vid() callback. z/VM NICs strictly require VLAN registration, so don't expose it as hw_feature there but add a little hack in qeth_enable_hw_features() to make things work regardless. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
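A hedged sketch of the setup this implies; the function name is a placeholder, while the feature flag, netdev_update_features() and ndo_vlan_rx_add_vid() are mainline netdev API:

    #include <linux/netdevice.h>

    static void setup_vlan_filter_sketch(struct net_device *dev)
    {
            /* Expose the VLAN filter as a toggleable feature. When
             * netdev_update_features() (re-)selects it from
             * dev->wanted_features, the core replays all registered
             * VIDs to the driver's ndo_vlan_rx_add_vid() callback. */
            dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
            dev->wanted_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
    }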
-
Julian Wiedmann authored
When a qeth card is offline, it has no connection to the HW. So none of our control callbacks can run IO against it, and we can only cache the input (e.g. a new MAC address) without providing proper feedback to the caller. In this context, it seems much more reasonable to simply detach the netdevice and let the kernel reject any interaction with it. This also makes all sorts of internal state checks and locking obsolete. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
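A minimal sketch of the detach/attach pairing; the qeth-side function names are placeholders, while netif_device_detach() and netif_device_attach() are mainline helpers:

    #include <linux/netdevice.h>

    /* going offline: mark the netdevice "not present", so core and
     * ethtool paths bail out instead of issuing IO against dead HW */
    static void card_going_offline(struct net_device *dev)
    {
            netif_device_detach(dev);
            /* ... tear down the HW control plane ... */
    }

    /* back online: re-attach once the control plane is usable again */
    static void card_back_online(struct net_device *dev)
    {
            /* ... re-establish the HW control plane ... */
            netif_device_attach(dev);
    }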
-
Julian Wiedmann authored
Re-order the code flow a bit so that all initial HW setup is done before putting the netdevice into play. For a netdevice that hasn't been registered before, we also don't need to re-enable its HW features or check for recovery actions. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
At best this is redundant, at worst it papers over a race in the offline / online code. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
commit 4789a218 ("s390/qeth: fix race when setting MAC address") resolved a race where our initial programming of dev_addr into the HW and a call to ndo_set_mac_address() could run concurrently. In this case, we could end up getting confused about which address was actually set in the HW. The quick fix was to introduce additional locking that blocks any ndo_set_mac_address() while the device is being set online. But the race primarily originated from the fact that we first register the netdevice, and only then program its dev_addr. By re-ordering this sequence, userspace will only be able to change the MAC address _after_ we have finished with setting the initial dev_addr. Still, the same MAC address race can also occur during a subsequent call to qeth_l2_set_online(). So keep around the locking for now, until a follow-up patch fully resolves this. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
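A sketch of the re-ordered bring-up described above; the MAC-programming helper is hypothetical:

    #include <linux/netdevice.h>

    int program_initial_mac(struct net_device *dev);    /* hypothetical helper */

    static int l2_setup_netdev_sketch(struct net_device *dev)
    {
            int rc;

            /* set the HW MAC first ... */
            rc = program_initial_mac(dev);
            if (rc)
                    return rc;
            /* ... register second: only from this point on can userspace
             * reach ndo_set_mac_address(), so it can no longer race with
             * the initial programming */
            return register_netdev(dev);
    }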
-
Julian Wiedmann authored
The L2 and L3 code for these ops is almost identical, we only need to provide a custom ndo_validate_addr() for L2 that checks whether programming the MAC address succeeded. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
qeth_qdio_cq_handler() doesn't replenish the Output Queue(s), and thus has no reason to wake the txq. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Consolidate the code that marks the current buffer to be flushed, and let qeth_fill_buffer() advance the Output Queue's buffer cursor. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Recent changes to the phylib API:
- removed phy_stop_interrupts
- replaced phy_start_interrupts with phy_request_interrupt
- moved some functionality from phy_connect() and phy_disconnect() to phy_start() and phy_stop() respectively
Reflect these changes in the documentation. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says:
====================
mlx5-updates-2019-01-25

This series provides some updates to the mlx5 driver.

From Tariq:
1) Make sure the RX packet header does not cross a page boundary: use a stride size that fits the maximum possible header. Stride is increased from 64B to 256B.
2) CQ struct cleanup: take the CQ decompress fields into a separate structure.

From Moshe:
3) Expand the XPS cpumask to cover all online cpus.

From Jason Gunthorpe and Tariq:
4) Compilation warning cleanup.

From Or:
5) Add trace points for flow table create/destroy.

From Saeed:
6) Software stats update/folding improvements; this also solves a compilation warning on 32bit systems that was reported last release cycle by Arnd and Andrew.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Jan, 2019 25 commits
-
-
Saeed Mahameed authored
Representor software stats are basic; this patch reuses mlx5e_fold_sw_stats() in the representors, which sums up the basic stats64 for an mlx5e netdevice. Fixes: 8bfaf07f ("net/mlx5e: Present SW stats when state is not opened") Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Tariq Toukan authored
This behavior is already adopted for all other cases in the cited patch; the representor functions were missed. Here we modify them to behave similarly. Fixes: 8bfaf07f ("net/mlx5e: Present SW stats when state is not opened") Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Saeed Mahameed authored
mlx5e_grp_sw_update_stats can be called from two threads: 1) ndo_get_stats64 2) get_ethtool_stats. For this reason, and to minimize the impact of concurrency issues on 64bit machines, mlx5e_grp_sw_update_stats folds the software stats into a temporary variable and then copies it to the global driver stats; both the ethtool and ndo statistics callbacks use the global software stats variable to report whatever stats they need.

Actually, ndo_get_stats64 doesn't need to fold the whole software stats (mlx5e_grp_sw_update_stats); all it needs is five counters to fill the relevant fields of its rtnl_link_stats64 output parameter. Hence this patch introduces a simpler helper function that folds software stats for ndo_get_stats64, working directly on the rtnl_link_stats64 parameter rather than on the global (or even a temporary) mlx5e_sw_stats variable. Since mlx5e_grp_sw_update_stats is no longer called by ndo_get_stats64, we can make it static and remove the temp var. Unlike mlx5e_grp_sw_update_stats, the new fold helper doesn't need to zero out its output parameter, since that is already done by the stack in dev_get_stats().

This patch fixes the stack usage of mlx5e_grp_sw_update_stats on x86 with gcc-4.9 and higher; the concurrency issue between mlx5's ndo_get_stats64 and get_ethtool_stats is resolved as well. Fixes: 8bfaf07f ("net/mlx5e: Present SW stats when state is not opened") Reported-by: Arnd Bergmann <arnd@arndb.de> Reported-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
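A hedged sketch of the simpler fold helper; the per-channel struct and its field names are illustrative assumptions, only the shape of the idea is taken from the description above:

    #include <linux/netdevice.h>

    /* illustrative per-channel counters; the real driver splits rq/sq stats */
    struct ch_stats {
            u64 rx_packets, rx_bytes, tx_packets, tx_bytes, tx_dropped;
    };

    /* Sum per-channel counters straight into the caller's rtnl_link_stats64,
     * touching only the five fields ndo_get_stats64 reports. No temporary
     * mlx5e_sw_stats, and no zeroing of *s: dev_get_stats() already did it. */
    static void fold_sw_stats64(const struct ch_stats *ch, int nch,
                                struct rtnl_link_stats64 *s)
    {
            int i;

            for (i = 0; i < nch; i++) {
                    s->rx_packets += ch[i].rx_packets;
                    s->rx_bytes   += ch[i].rx_bytes;
                    s->tx_packets += ch[i].tx_packets;
                    s->tx_bytes   += ch[i].tx_bytes;
                    s->tx_dropped += ch[i].tx_dropped;
            }
    }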
-
Or Gerlitz authored
We were not tracking flow tables so far; add trace points for flow table create/destroy. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Jason Gunthorpe authored
This construction confuses the compiler, which can't see that flow is initialized if !err:

drivers/net/ethernet/mellanox/mlx5/core/en_tc.c: In function `mlx5e_configure_flower`
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:2727:28: warning: `flow` may be used uninitialized in this function [-Wmaybe-uninitialized]

There is no reason for two function outputs; just return the pointer directly and use ERR_PTR to encode a failure. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
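The generic kernel pattern being adopted, as a self-contained sketch (the struct and function names are illustrative, not the mlx5e code):

    #include <linux/err.h>
    #include <linux/slab.h>

    struct flow { int id; };

    static struct flow *flow_add(void)
    {
            struct flow *flow = kzalloc(sizeof(*flow), GFP_KERNEL);

            if (!flow)
                    return ERR_PTR(-ENOMEM);        /* failure lives in the pointer */
            return flow;
    }

    static int caller_sketch(void)
    {
            /* a single return value: no separate int-err output, so the
             * compiler can always see whether the pointer is valid */
            struct flow *f = flow_add();

            if (IS_ERR(f))
                    return PTR_ERR(f);
            kfree(f);
            return 0;
    }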
-
Moshe Shemesh authored
Currently we have one cpu in the XPS cpumask per tx queue. This is good enough for the default configuration, where there is a tx queue per cpu. However, once the configuration changes to use fewer tx queues, some of the cpus are not XPS-mapped, so the select-queue decision falls back to hash calculation and balancing is not guaranteed. Expand the XPS cpumask to enable using all cpus even when the number of tx queues is smaller than the number of cpus. Signed-off-by: Moshe Shemesh <moshe@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
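A hedged sketch of the idea; the round-robin assignment is an assumption, while netif_set_xps_queue() is mainline API:

    #include <linux/cpumask.h>
    #include <linux/gfp.h>
    #include <linux/netdevice.h>

    /* Map every online cpu onto some tx queue so that, even with fewer
     * tx queues than cpus, no cpu falls back to hash-based selection. */
    static void set_xps_all_cpus(struct net_device *dev, int num_txqs)
    {
            cpumask_var_t mask;
            int cpu, txq;

            if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
                    return;
            for (txq = 0; txq < num_txqs; txq++) {
                    cpumask_clear(mask);
                    for_each_online_cpu(cpu)
                            if (cpu % num_txqs == txq)
                                    cpumask_set_cpu(cpu, mask);
                    netif_set_xps_queue(dev, mask, txq);
            }
            free_cpumask_var(mask);
    }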
-
Tariq Toukan authored
Only the Receive CQ makes use of these fields. Take them out into a separate struct and save space in the generic CQ structure. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
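A sketch of what such a split could look like; the field names are assumptions based on the description, not the actual driver struct:

    #include <linux/mlx5/device.h>

    /* decompression state is only needed on the Receive CQ, so it moves
     * out of the generic CQ struct into its own RQ-only structure */
    struct cq_decomp_sketch {
            struct mlx5_cqe64 title;        /* title CQE being expanded */
            u8  mini_arr_idx;               /* position in the mini-CQE array */
            u16 left;                       /* mini-CQEs left to expand */
            u16 wqe_counter;
    };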
-
Tariq Toukan authored
In the non-linear SKB memory scheme of Striding RQ, a packet header could cross a page boundary. This requires special care in the fast path that costs LoC, additional runtime instructions and branches. It can happen when the header (up to 256B) does not fit in a single stride. Avoid this by working with a stride size that fits the maximum possible header. Stride is increased from 64B to 256B.

Performance: tested packet rate for UDP streams, single ring, on ConnectX-5. Configuration: Striding RQ and LRO ON (to enable the non-linear SKB scheme), GRO OFF, early drop by TC rule.
- 64B: 4x worse memory utilization, no page-crossing headers. No degradation (5,887,305 pps); the reduction in memory utilization is compensated by the saving of branch tests.
- 192B: 1.33x worse memory utilization, avoids page-crossing headers. Before: 5,727,252. After: 5,777,037. ~1% gain.
- 256B: same memory utilization, no page-crossing. Before: 5,691,885. After: 5,748,007. ~1% gain.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
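As a rough illustration of why 256B suffices: a 4KB page holds 16 naturally aligned 256B strides, so a header that fits in one stride can never straddle a page. With 64B strides, a 256B header occupies four consecutive strides and spills onto the next page whenever the packet starts within the last three strides of a page.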
-
David S. Miller authored
This reverts the devlink health changes from 9/17/2019; Jiri wants things to be designed differently, and it was agreed that the easiest way to do this is to start from the beginning again.

Commits reverted:
cb5ccfbe 880ee82f c7af343b ff253fed 6f9d5613 fcd852c6 8a66704a 12bd0dce aba25279 ce019faa b8c45a03

And the follow-on build fix: 33a0efa4

Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shalom Toledo authored
Signed-off-by: Shalom Toledo <shalomt@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Zhaolong Zhang authored
max_rcvbuf_size is no longer used since commit 414574a0. Signed-off-by: Zhaolong Zhang <zhangzl2013@126.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Priyaranjan Jha says:
====================
tcp_bbr: Improving TCP BBR performance for WiFi and cellular networks

ACK aggregation is quite prevalent with wifi, cellular and cable modem link technologies, ACK decimation in middleboxes, and common offloading techniques such as TSO and GRO at end hosts. Previously, BBR was often cwnd-limited in the presence of severe ACK aggregation, which resulted in low throughput due to insufficient data in flight. To achieve good throughput for wifi and other paths with aggregation, this patch series implements an ACK aggregation estimator for BBR, which estimates the maximum recent degree of ACK aggregation and adapts cwnd based on it. The algorithm is further described by the following presentation: https://datatracker.ietf.org/meeting/101/materials/slides-101-iccrg-an-update-on-bbr-work-at-google-00

(1) A preparatory patch, which refactors bbr_target_cwnd for generic inflight provisioning.
(2) Implements the BBR ACK aggregation estimator and adapts cwnd based on the measured degree of ACK aggregation.
====================
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Priyaranjan Jha authored
Aggregation effects are extremely common with wifi, cellular, and cable modem link technologies, ACK decimation in middleboxes, and LRO and GRO in receiving hosts. The aggregation can happen in either direction, data or ACKs, but in either case the aggregation effect is visible to the sender in the ACK stream.

Previously BBR's sending was often limited by cwnd under severe ACK aggregation/decimation, because BBR sized the cwnd at 2*BDP. If packets were acked in bursts after long delays (e.g. one ACK acking 5*BDP after 5*RTT), BBR's sending was halted after sending 2*BDP over 2*RTT, leaving the bottleneck idle for potentially long periods. Note that loss-based congestion control does not have this issue, because when facing aggregation it continues increasing cwnd after bursts of ACKs, growing cwnd until the buffer is full.

To achieve good throughput in the presence of aggregation effects, this algorithm allows the BBR sender to put extra data in flight to keep the bottleneck utilized during silences in the ACK stream that it has evidence to suggest were caused by aggregation.

A summary of the algorithm: when a burst of packets is acked by a stretched ACK, a burst of ACKs, or both, BBR first estimates the expected amount of data that should have been acked, based on its estimated bandwidth. Then the surplus ("extra_acked") is recorded in a windowed-max filter to estimate the recent level of observed ACK aggregation, and cwnd is increased by the ACK aggregation estimate. The larger cwnd avoids BBR being cwnd-limited in the face of ACK silences that recent history suggests were caused by aggregation. As a sanity check, the ACK aggregation degree is upper-bounded by the cwnd (at the time of measurement) and a global max of BW * 100ms. The algorithm is further described by the following presentation: https://datatracker.ietf.org/meeting/101/materials/slides-101-iccrg-an-update-on-bbr-work-at-google-00

In our internal testing, we observed a significant increase in BBR throughput (measured using netperf) in a basic wifi setup:
- Host1 (sender on ethernet) -> AP -> Host2 (receiver on wifi)
- 2.4 GHz: BBR before: ~73 Mbps; BBR after: ~102 Mbps; CUBIC: ~100 Mbps
- 5.0 GHz: BBR before: ~362 Mbps; BBR after: ~593 Mbps; CUBIC: ~601 Mbps

Also, this code is running globally on YouTube TCP connections and produced significant bandwidth increases for YouTube traffic. This is based on Ian Swett's max_ack_height_ algorithm from the QUIC BBR implementation. Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
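A minimal, self-contained sketch of the estimator idea; all names are illustrative, and this is not the tcp_bbr.c implementation:

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Compare bytes actually ACKed over an interval with what the
     * estimated bandwidth predicts; the surplus ("extra_acked") feeds a
     * windowed-max filter, and cwnd is then provisioned as roughly
     * 2*BDP plus this aggregation estimate. */
    static u64 update_extra_acked(u64 bw_bytes_per_us, u64 interval_us,
                                  u64 acked_bytes, u64 windowed_max)
    {
            u64 expected = bw_bytes_per_us * interval_us;
            u64 extra = acked_bytes > expected ? acked_bytes - expected : 0;

            return max(extra, windowed_max);
    }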
-
Priyaranjan Jha authored
Because bbr_target_cwnd() is really a general-purpose BBR helper for computing some volume of inflight data as a function of the estimated BDP, refactor it into the following helper functions:
- bbr_bdp()
- bbr_quantization_budget()
- bbr_inflight()
Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
A few chip versions use the same sequence to adjust 10M and ALDPS, so let's factor it out. This patch also fixes a (most likely) typo in rtl8168g_1_hw_phy_config: there, bit 8 in reg 0x14 on page 0x0bcc was set and not cleared. According to the vendor driver, this bit needs to be cleared in all cases. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Chip versions from RTL8168g onward use the same sequence to disable ALDPS (Advanced Link-Down Power Saving). So let's factor this out. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
I made a dumb mistake when I summed up the slave stats: obviously slaves can come and go, which would make the master stats unreliable. Count and export the master stats separately. Fixes: a258aeac ("bonding: add support for xstats and export 3ad stats") Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Heiner Kallweit says:
====================
net: phy: improve starting PHY

This patch series improves a few aspects of starting the PHY.

v2:
- improve a warning in patch 4

v3:
- extend commit message for patch 2
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Now that we enable the interrupts in phy_start(), we don't have to do it before. Therefore remove enabling interrupts from phy_start_interrupts() and rename this function to reflect the changed functionality.

v2:
- improve warning to clearly state that we fall back to polling
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
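A sketch of the fallback behavior from the v2 note; the exact warning text and surrounding function internals are assumptions:

    #include <linux/interrupt.h>
    #include <linux/phy.h>

    /* if the IRQ can't be requested, warn and degrade to polling
     * rather than failing to start the PHY */
    static void request_irq_sketch(struct phy_device *phydev)
    {
            if (request_threaded_irq(phydev->irq, NULL, phy_interrupt,
                                     IRQF_ONESHOT | IRQF_SHARED,
                                     phydev_name(phydev), phydev)) {
                    phydev_warn(phydev, "Can't get IRQ %d, falling back to polling\n",
                                phydev->irq);
                    phydev->irq = PHY_POLL;
            }
    }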
-
Heiner Kallweit authored
Interrupts don't have to be enabled before calling phy_start(). Therefore let's enable them in phy_start(). In a subsequent step we'll remove enabling interrupts from phy_connect_direct(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
phy_start() should be called from states PHY_READY or PHY_HALTED only. Check for this to detect misbehaving drivers. Also, the state machine should be started only when being called from one of the valid states.

Some more background: for all invalid states phy_start() basically was a no-op. All it did was trigger a state machine run, but for all "running" states the poll loop was active anyway. And if called from PHY_DOWN, the state machine does nothing.

v3:
- extended commit message
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
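A hedged sketch of the state check; the locking and the phy_state_to_str() helper name are assumptions, not the exact mainline code:

    #include <linux/phy.h>

    void phy_start_sketch(struct phy_device *phydev)
    {
            mutex_lock(&phydev->lock);

            /* only PHY_READY and PHY_HALTED are valid entry states;
             * anything else indicates a misbehaving driver */
            if (phydev->state != PHY_READY && phydev->state != PHY_HALTED) {
                    WARN(1, "called from state %s\n",
                         phy_state_to_str(phydev->state));       /* name assumed */
                    goto out;
            }
            /* ... enable interrupts and start the state machine ... */
    out:
            mutex_unlock(&phydev->lock);
    }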
-
Heiner Kallweit authored
The state machine is a no-op before phy_start() has been called. Therefore let's enable it in phy_start() only. In phy_start() let's call phy_start_machine() instead of phy_trigger_machine(). phy_start_machine() is an alias for phy_trigger_machine(), but it makes it clearer that we start the state machine here instead of just triggering a run. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
In case of error, the function devm_clk_get() returns ERR_PTR() and never returns NULL. The NULL test in the return value check should be replaced with IS_ERR(). Fixes: a7c30e62 ("net: stmmac: Add driver for Qualcomm ethqos") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Vinod Koul <vkoul@kernel.org> Acked-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
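The corrected pattern, as a self-contained sketch; the clock name and surrounding probe context are assumptions:

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>

    static int probe_clk_sketch(struct platform_device *pdev)
    {
            struct clk *rgmii_clk = devm_clk_get(&pdev->dev, "rgmii");

            /* devm_clk_get() encodes failure as ERR_PTR(), never NULL,
             * so the check must be IS_ERR(), not a NULL test */
            if (IS_ERR(rgmii_clk))
                    return PTR_ERR(rgmii_clk);
            return 0;
    }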
-
Colin Ian King authored
Two statements are incorrectly indented; fix these by removing a space. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-