- 27 Apr, 2018 (40 commits)
-
David S. Miller authored
Marcelo Ricardo Leitner says:

====================
sctp: refactor MTU handling

Currently MTU handling is spread over the SCTP stack. There are multiple places doing the same or similar calculations, and updating them is error prone as one spot can easily be left out. This patchset converges it into more concise and consistent code.

In general, it moves MTU handling from functions with bigger objectives, such as sctp_assoc_add_peer(), to specific functions. It's also a preparation for the next patchset, which removes the duplication between sctp_make_op_error_space and sctp_make_op_error_fixed and relies on sctp_mtu_payload introduced here.

More details on each patch.
====================

Reviewed-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
RFC 6458 Section 8.1.16 says that setting MAXSEG to 0 means that the user is not limiting it, not that it should be set to the *current* maximum, as we are doing. This patch thus allows setting it to 0, effectively removing the user limit. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
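For illustration, a minimal userspace sketch of clearing the limit, assuming the lksctp-tools headers and the plain-int form of the option; error handling is trimmed:

#include <netinet/in.h>
#include <netinet/sctp.h>
#include <sys/socket.h>

/* Sketch only: set SCTP_MAXSEG to 0 to drop the user-imposed limit,
 * per RFC 6458 Section 8.1.16.  fd is assumed to be an SCTP socket. */
static int sctp_clear_maxseg(int fd)
{
	int maxseg = 0;	/* 0 == no user-imposed fragmentation limit */

	return setsockopt(fd, IPPROTO_SCTP, SCTP_MAXSEG,
			  &maxseg, sizeof(maxseg));
}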
-
Marcelo Ricardo Leitner authored
When setting the SCTP_MAXSEG sock option, it should consider which kind of data chunk is being used if the asoc is already available, so that the limit better reflects reality. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
sctp_sendmsg() could trigger PMTU updates even when PMTU_DISABLED was set, as pmtu_pending could be set unconditionally during ICMP handling if the socket was in use by the application. This patch fixes it by checking for PMTU_DISABLED when handling such deferred updates. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
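For context, a hedged sketch of how an application disables PMTU discovery in the first place, via SCTP_PEER_ADDR_PARAMS and SPP_PMTUD_DISABLE; the fixed MTU value is illustrative only:

#include <string.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <sys/socket.h>

/* Sketch: turn PMTU discovery off for the whole association and pin a
 * fixed path MTU.  This sets the SPP_PMTUD_DISABLE flag that the
 * deferred-update path must now respect. */
static int sctp_disable_pmtud(int fd, sctp_assoc_t assoc_id)
{
	struct sctp_paddrparams params;

	memset(&params, 0, sizeof(params));
	params.spp_assoc_id = assoc_id;
	params.spp_flags = SPP_PMTUD_DISABLE;
	params.spp_pathmtu = 1400;	/* illustrative fixed MTU */

	return setsockopt(fd, IPPROTO_SCTP, SCTP_PEER_ADDR_PARAMS,
			  &params, sizeof(params));
}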
-
Marcelo Ricardo Leitner authored
sctp_transport_route currently is very similar to sctp_transport_pmtu plus a few other bits. This patch reuses sctp_transport_pmtu in sctp_transport_route and removes the duplicated code. Also, as all callers of sctp_transport_route were forcing the dst release before calling it, let's just include that release here too. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
We are now keeping the MTU information synced between asoc, transport and dst, which makes the check at sctp_packet_config() unnecessary. As it was the sole caller of this function, let's remove it. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
This makes sure that the MTU respects the minimum value of SCTP_DEFAULT_MINSEGMENT and that it is correctly aligned. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
No need for this helper. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
This avoids the open-coded versions of it. Now sctp_datamsg_from_user can just reuse asoc->frag_point, as it will always be updated. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
When given an MTU, this function calculates how much payload we can carry on it. Without an MTU, it calculates the amount of header overhead we have. This way, when we have extra overhead, such as that added for IP options by the SELinux patches, it is easier to handle. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
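A simplified sketch of the arithmetic such a helper encapsulates; the header sizes, names and the 4-byte truncation below are assumptions for illustration, not the kernel's definitions:

#include <stdint.h>

#define IP_HDR_LEN	20	/* assumed: IPv4, no options */
#define SCTP_HDR_LEN	12	/* SCTP common header */
#define DATA_CHUNK_LEN	16	/* DATA chunk header */

/* mtu != 0: usable payload per packet; mtu == 0: just the overhead. */
static uint32_t mtu_payload(uint32_t mtu, uint32_t extra)
{
	uint32_t overhead = IP_HDR_LEN + SCTP_HDR_LEN + DATA_CHUNK_LEN + extra;

	if (mtu == 0)
		return overhead;

	return (mtu - overhead) & ~3u;	/* truncate to a 4-byte boundary */
}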
-
Marcelo Ricardo Leitner authored
All changes to the asoc PMTU should now go through this wrapper, making it easier to track them and to take further action when they happen. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
As noticed by Xin Long, the if() here is always true as PMTU can never be 0. Reported-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
There was only one case that sctp_assoc_add_peer couldn't handle: when SPP_PMTUD_DISABLE is set and pathmtu is not initialized. So add this situation to sctp_transport_route and reuse what was already there. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
This value is not used anywhere in the code. In essence it is a duplicate of SCTP_DEFAULT_MINSEGMENT, which is used by the stack. The SCTP_MIN_PMTU value makes more sense, but we should not switch to it now as that would risk breaking applications. So this patch removes SCTP_MIN_PMTU and adjusts the comment above it. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Brivio authored
A vti6 interface can carry IPv4 packets too. Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Reviewed-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Wang authored
In tcp_select_initial_window(), we only set rcv_wnd to tcp_default_init_rwnd() if the current mss > (1 << wscale). Otherwise, rcv_wnd is kept at the full receive space of the socket, which is a value way larger than tcp_default_init_rwnd(). With a larger initial rcv_wnd value, the receive buffer autotuning logic takes longer to kick in and increase the receive buffer. In a TCP throughput test where the receiver has rmem[2] set to 125MB (wscale is 11), we see the connection become recvbuf-limited at the beginning of the connection and achieve less throughput overall. Signed-off-by: Wei Wang <weiwan@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
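A simplified, illustrative sketch (not the kernel source) of the pre-patch selection logic, where the clamp to the default initial window only applied when the MSS exceeded the window granularity:

#include <stdint.h>

static uint32_t pick_init_rcv_wnd(uint32_t space, uint32_t mss,
				  uint8_t wscale, uint32_t init_rwnd_segs)
{
	uint32_t rcv_wnd = space;	/* full receive space by default */

	/* Condition removed by this patch: clamp unconditionally instead,
	 * so autotuning starts from a small initial window. */
	if (mss > (1u << wscale)) {
		if (rcv_wnd > init_rwnd_segs * mss)
			rcv_wnd = init_rwnd_segs * mss;
	}
	return rcv_wnd;
}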
-
David S. Miller authored
Ursula Braun says:

====================
smc fixes from 2018-04-17 - v3

In the meantime we challenged the benefit of these CLC handshake optimizations for the sockopts TCP_NODELAY and TCP_CORK. We decided to give up on them for now, since SMC still works properly without them. There is now version 3 of the patch series, with patches 2-4 implementing sockopts that require special handling in SMC.

Version 3 changes:
* no deferring of setsockopts TCP_NODELAY and TCP_CORK anymore
* allow fallback for some sockopts, eliminating SMC usage
* when setting TCP_NODELAY, always enforce data transmission (not only together with corked data)

Version 2 changes of patch 2/4 (and 3/4):
* return error -EOPNOTSUPP for TCP_FASTOPEN sockopts
* fix a kernel_setsockopt() usage bug by switching the parameter variable from type "u8" to "int"
* add return code validation when calling kernel_setsockopt()
* propagate a setsockopt error on the internal CLC socket to the SMC socket
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
If sockopt TCP_DEFER_ACCEPT is set, the accept is delayed till data is available. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
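For reference, a minimal sketch of a server enabling this option on its listen socket; the timeout value is illustrative and error handling is omitted:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: accept() wakes up only once the new connection has data
 * queued, or after the timeout expires. */
static int enable_defer_accept(int listen_fd)
{
	int timeout_s = 5;	/* illustrative: defer for up to 5 seconds */

	return setsockopt(listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
			  &timeout_s, sizeof(timeout_s));
}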
-
Ursula Braun authored
Setting sockopt TCP_NODELAY or resetting sockopt TCP_CORK triggers data transfer. For a corked SMC socket, RDMA writes are deferred if there is still sufficient send buffer space available. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
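As an illustration, the classic cork/uncork pattern; clearing TCP_CORK (or setting TCP_NODELAY) is the point at which pending data must now be pushed out. Error handling is omitted:

#include <stddef.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: batch a header and body while corked, then uncork to flush. */
static void send_corked(int fd, const void *hdr, size_t hlen,
			const void *body, size_t blen)
{
	int on = 1, off = 0;

	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
	write(fd, hdr, hlen);
	write(fd, body, blen);
	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));	/* flush */
}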
-
Ursula Braun authored
Several TCP sockopts do not work for SMC. One example is the TCP_FASTOPEN sockopts, since SMC connection setup is based on the TCP three-way handshake. If the SMC socket is still in state SMC_INIT, such sockopts trigger a fallback to TCP. Otherwise an error is returned. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Graul authored
The struct smc_cdc_msg must be defined as packed so that its size is 44 bytes. Also change the structure size check so that sizeof() is what gets checked. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
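A generic sketch of the pattern (packed layout plus a compile-time check on sizeof); the fields below are placeholders, not the actual struct smc_cdc_msg layout:

#include <stdint.h>

/* Placeholder wire-format struct: packing removes implicit padding so
 * the layout is exactly 44 bytes, and the assert pins sizeof() itself
 * rather than an unrelated constant. */
struct wire_msg_example {
	uint8_t  type;
	uint8_t  len;
	uint16_t seqno;
	uint32_t token;
	uint64_t prod_cursor;
	uint64_t cons_cursor;
	uint8_t  reserved[20];
} __attribute__((packed));

_Static_assert(sizeof(struct wire_msg_example) == 44,
	       "wire format must stay exactly 44 bytes");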
-
Jeff Kirsher authored
After many years of having a ~30 line copyright and license header to our source files, we are finally able to reduce that to one line with the advent of the SPDX identifier. Also caught a few files missing the SPDX license identifier, so fixed them up. Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Shannon Nelson <shannon.nelson@oracle.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
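For example, the one-line form such a header collapses to at the top of a C source file; the exact license tag and copyright line depend on the file and are shown here only as an illustration:

// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2018 Example Corporation. */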
-
Kirill Tkhai authored
kbuild test robot says:

> coccinelle warnings: (new ones prefixed by >>)
>>> net/core/dev.c:1588:2-3: Unneeded semicolon

So, let's remove it.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tobias Regnery authored
Commit c40e89fd ("geneve: configure MTU based on a lower device") added an IS_ENABLED(CONFIG_IPV6) check to geneve, leading to the following link error with CONFIG_GENEVE=y and CONFIG_IPV6=m:

drivers/net/geneve.o: In function `geneve_link_config':
geneve.c:(.text+0x14c): undefined reference to `rt6_lookup'

Fix this by adding a Kconfig dependency and forcing GENEVE to be a module when IPV6 is a module.

Fixes: c40e89fd ("geneve: configure MTU based on a lower device")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Julian Wiedmann says:

====================
s390/net: updates 2018-04-26

Please apply the following patches to net-next. There are the usual cleanups and small improvements, and Kittipon adds HW offload support for IPv6 checksumming.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
If READ MAC fails to fetch a valid MAC address, allow some more device types (IQD and z/VM OSD) to fall back to a random address. Also use eth_hw_addr_random(), for indicating to userspace that the address type is NET_ADDR_RANDOM. Note that while z/VM has various protection schemes to prohibit custom addresses on its NICs, they are all optional. So we should at least give it a try. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kittipon Meesompop authored
Check if a qeth device supports IPv6 RX checksum offload, and hook it up into the existing NETIF_F_RXCSUM support. As NETIF_F_RXCSUM is now backed by a combination of HW Assists, we need to be a little smarter when dealing with errors during a configuration change:
- switching on NETIF_F_RXCSUM only makes sense if at least one HW Assist was enabled successfully.
- for switching off NETIF_F_RXCSUM, all available HW Assists need to be deactivated.

Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kittipon Meesompop authored
Check if a qeth device supports IPv6 TX checksum offload, and advertise NETIF_F_IPV6_CSUM accordingly. Add support for setting the relevant bits in IPv6 packet descriptors. Currently this has only limited use (i.e. UDP, or Jumbo Frames). For any TCP traffic with a standard MSS, the TCP checksum gets calculated as part of the linear GSO segmentation. Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kittipon Meesompop authored
Add some wrappers to make the protocol-specific Assist code a little more generic, and use them for sending protocol-agnostic commands in the Checksum Offload Assist code. Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kittipon Meesompop authored
For new functionality, the L2 subdriver will start using IPv6 assists. So move the query from the L3 subdriver into the common setup path. Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kittipon Meesompop authored
This matches the statistics we gather for the TX offload path. Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
The kernel does its own validation of the IPv4 header checksum, drivers/HW are not required to handle this. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
This consolidates the checksum offload code that was duplicated over the two qeth subdrivers. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kittipon Meesompop authored
Trivial cleanup, in preparation for a subsequent patch. Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
struct net_device contains a dev_port field. Store the OSA port number in this field. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
When removing a VLAN ID on a L3 device, the driver currently attempts to walk and unregister the VLAN device's IP addresses. This can be safely removed - before qeth_l3_vlan_rx_kill_vid() even gets called, we receive an inet[6]addr event for each IP on the device and qeth_l3_handle_ip_event() unregisters the address accordingly. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
As the vid_list is only accessed from process context, there's no need to protect it with a spinlock. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Both qeth subdrivers use the same QDIO queue handlers, so there's no need to expose them via the driver's discipline. No functional change. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use hlist_entry_safe() instead of open-coding it. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
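For illustration, the open-coded form versus the helper, using a made-up struct; hlist_entry() and hlist_entry_safe() come from <linux/list.h>:

#include <linux/list.h>

struct fdb_entry_example {
	struct hlist_node hlist;
	/* ... payload ... */
};

/* Open-coded form being replaced: check for NULL by hand. */
static struct fdb_entry_example *first_open_coded(struct hlist_node *first)
{
	return first ? hlist_entry(first, struct fdb_entry_example, hlist) : NULL;
}

/* Same result via the helper. */
static struct fdb_entry_example *first_via_helper(struct hlist_node *first)
{
	return hlist_entry_safe(first, struct fdb_entry_example, hlist);
}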
-
David S. Miller authored
Pradeep Nalla says:

====================
liquidio: add support for ndo_get_stats64

Support ndo_get_stats64 instead of ndo_get_stats. Also add stats for multicast and broadcast packets.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
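For context, a hedged sketch of the general shape of an ndo_get_stats64 implementation; the private counters and names are placeholders, not liquidio's actual code:

#include <linux/netdevice.h>

struct example_priv {
	u64 rx_packets;
	u64 tx_packets;
	u64 rx_multicast;
};

/* The hook returns void and fills the caller-provided buffer. */
static void example_get_stats64(struct net_device *dev,
				struct rtnl_link_stats64 *stats)
{
	struct example_priv *priv = netdev_priv(dev);

	stats->rx_packets = priv->rx_packets;
	stats->tx_packets = priv->tx_packets;
	stats->multicast  = priv->rx_multicast;
}

static const struct net_device_ops example_netdev_ops = {
	.ndo_get_stats64 = example_get_stats64,
	/* other ops elided */
};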
-