- 19 Apr, 2017 2 commits
-
-
Ilan Tayari authored
If the esp*_offload module is loaded, outbound packets take the GSO code path, being encapsulated at layer 3 but encrypted at layer 2; validate_xmit_xfrm calls esp*_xmit for that. esp*_xmit was wrongly detecting these packets as going through hardware crypto offload, while in fact they should be encrypted in software, causing plaintext leakage to the network and drops at the receiver side. Perform the encryption in esp*_xmit if the SA doesn't have a hardware offload_handle. Also, align the esp6 code to the esp4 logic. Fixes: fca11ebd ("esp4: Reorganize esp_output") Fixes: 383d0350 ("esp6: Reorganize esp_output") Signed-off-by: Ilan Tayari <ilant@mellanox.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
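A minimal sketch of the corrected decision, assuming a simplified stand-in for the kernel's xfrm_state (only an offload_handle field matters here; names are illustrative, not the real definitions):

    #include <stdbool.h>
    #include <stdio.h>

    struct sa_sketch {
        unsigned long offload_handle;   /* nonzero only if bound to HW crypto */
    };

    /* Encrypt in software unless the SA really has a hardware offload handle. */
    static bool must_encrypt_in_software(const struct sa_sketch *sa)
    {
        return sa->offload_handle == 0;
    }

    int main(void)
    {
        struct sa_sketch sw = { 0 }, hw = { 0x1 };
        printf("sw SA -> %s\n", must_encrypt_in_software(&sw) ? "software" : "hardware");
        printf("hw SA -> %s\n", must_encrypt_in_software(&hw) ? "software" : "hardware");
        return 0;
    }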
-
Colin Ian King authored
The check for xo being null is incorrect: it currently checks for non-null when it should check for null. Detected with CoverityScan, CID#1429349 ("Dereference after null check") Fixes: 7862b405 ("esp: Add gso handlers for esp4 and esp6") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
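The bug class, in a self-contained sketch (the struct is a placeholder, not the kernel's xfrm_offload type):

    #include <stddef.h>

    struct xo_sketch { int flags; };

    static int handle(struct xo_sketch *xo)
    {
        /* The buggy form was effectively "if (xo) return -1;", which rejected
         * every valid context and then dereferenced NULL below.
         * The fix inverts the test: */
        if (!xo)
            return -1;
        return xo->flags;
    }

    int main(void)
    {
        struct xo_sketch xo = { .flags = 1 };
        return (handle(&xo) == 1 && handle(NULL) == -1) ? 0 : 1;
    }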
-
- 14 Apr, 2017 14 commits
-
-
Steffen Klassert authored
With IPsec hardware offload, we already get a secpath with a valid state attached when the packet enters the GRO handlers. So check for hardware offload and skip the state lookup in this case. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Ilan Tayari authored
Both esp4 and esp6 used to assume that the SKB payload is encrypted and therefore the inner_network and inner_transport offsets are not relevant. When doing crypto offload in the NIC, this is no longer the case and the NIC driver needs these offsets so it can do TX TCP checksum offloading. This patch sets the inner_network and inner_transport members of the SKB, as well as encapsulation, to reflect the actual positions of these headers, and removes them only once encryption is done on the payload. Signed-off-by: Ilan Tayari <ilant@mellanox.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
When we do IPsec offloading, we need a fallback for packets that were targeted for IPsec offload but rerouted to a device that does not support it. For that we add a function that checks the offloading features of the sending device and flags the requirement of a fallback before it calls the IPsec output function. The IPsec output function adds the IPsec trailer and does encryption if needed. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
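In outline, the new check looks roughly like this (the feature bit and struct fields are simplified placeholders for net_device->features and the per-packet offload state):

    #include <stdbool.h>
    #include <stdio.h>

    #define HW_ESP (1u << 0)        /* stand-in for an ESP-offload feature bit */

    struct dev_sketch { unsigned int features; };
    struct pkt_sketch { bool needs_sw_fallback; };

    /* Flag the fallback before handing the packet to the IPsec output path. */
    static void check_offload(const struct dev_sketch *dev, struct pkt_sketch *pkt)
    {
        if (!(dev->features & HW_ESP))
            pkt->needs_sw_fallback = true;
    }

    int main(void)
    {
        struct dev_sketch plain = { 0 };   /* rerouted to a non-offloading device */
        struct pkt_sketch pkt = { false };
        check_offload(&plain, &pkt);
        printf("fallback: %d\n", pkt.needs_sw_fallback);   /* 1 */
        return 0;
    }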
-
Steffen Klassert authored
We need a fallback algorithm for crypto offloading to a NIC, because packets can be rerouted to other NICs that don't support crypto offloading. The fallback is going to be implemented at layer 2, where we know the final output device but can't handle asynchronous returns from the crypto layer. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
This patch adds functions that handle IPsec sequence numbers for GSO segments and TSO offloading. We need to calculate and update the sequence numbers based on the segments that GSO/TSO will generate, to keep the software and hardware sequence number counters in sync. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
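The bookkeeping itself is simple arithmetic. A worked sketch (names illustrative): each of the N segments produced from one GSO skb consumes one ESP sequence number, so the software counter must advance by N.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t seq = 1000;       /* next ESP sequence number before segmentation */
        unsigned int gso_segs = 4; /* segments GSO/TSO will produce */

        /* Segment i is emitted with seq + i ... */
        for (unsigned int i = 0; i < gso_segs; i++)
            printf("segment %u -> ESP seq %u\n", i, seq + i);

        /* ... and the software counter jumps past all of them. */
        seq += gso_segs;
        printf("software counter now %u\n", seq);   /* 1004 */
        return 0;
    }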
-
Steffen Klassert authored
This patch extends the xfrm_type by an encap function pointer and implements esp4_gso_encap and esp6_gso_encap. These functions do the basic ESP encapsulation for a GSO packet. In case the GSO packet needs to be segmented in software, we add gso_segment functions. This codepath is going to be used for ESP hardware offloads. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
We need a fallback for ESP at layer 2, so split esp6_output into generic functions that can be used at layer 3 and layer 2, and use them in esp6_output. We also add esp6_xmit, which is used for the layer 2 fallback. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
We need a fallback for ESP at layer 2, so split esp_output into generic functions that can be used at layer 3 and layer 2, and use them in esp_output. We also add esp_xmit, which is used for the layer 2 fallback. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
We are going to export the ipv4 and the ipv6 version of esp_input_done2. They are not static anymore and can't have the same name. So rename the ipv6 version to esp6_input_done2. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
This patch adds all the bits that are needed to do IPsec hardware offload for IPsec states and ESP packets. We add xfrmdev_ops to the net_device. xfrmdev_ops has function pointers that are needed to manage the xfrm states in the hardware and to do a per packet offloading decision. Joint work with: Ilan Tayari <ilant@mellanox.com> Guy Shapiro <guysh@mellanox.com> Yossi Kuperman <yossiku@mellanox.com> Signed-off-by: Guy Shapiro <guysh@mellanox.com> Signed-off-by: Ilan Tayari <ilant@mellanox.com> Signed-off-by: Yossi Kuperman <yossiku@mellanox.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
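The shape of such a callback table, as a hedged sketch (the xdo_* member names follow the upstream pattern but are reproduced from memory; all types here are simplified stand-ins):

    #include <stdbool.h>

    struct xfrm_state_sketch;
    struct skb_sketch;

    /* Driver callbacks: manage SAs in hardware, plus a per-packet decision. */
    struct xfrmdev_ops_sketch {
        int  (*xdo_dev_state_add)(struct xfrm_state_sketch *x);
        void (*xdo_dev_state_delete)(struct xfrm_state_sketch *x);
        void (*xdo_dev_state_free)(struct xfrm_state_sketch *x);
        bool (*xdo_dev_offload_ok)(struct skb_sketch *skb,
                                   struct xfrm_state_sketch *x);
    };

    struct dev_sketch {
        const struct xfrmdev_ops_sketch *xfrmdev_ops; /* NULL: no IPsec offload */
    };

    int main(void)
    {
        struct dev_sketch dev = { .xfrmdev_ops = 0 };
        /* A device without the ops simply never takes the offload path. */
        return dev.xfrmdev_ops ? 1 : 0;
    }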
-
Steffen Klassert authored
This patch adds a gso_segment and xmit callback for the xfrm_mode and implements these functions for tunnel and transport mode. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
This is needed for the upcoming IPsec device offloading. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Steffen Klassert authored
We add a struct xfrm_type_offload so that the offloaded codepath is separated from the non-offloaded codepath. With this, the non-offloaded and the offloaded codepaths can coexist. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
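One way to picture the split, as an assumption-laden sketch (the real struct xfrm_type_offload has more members; the callbacks below only echo the encap/xmit hooks described in this series):

    struct skb_sketch;
    struct xfrm_state_sketch;

    /* The non-offloaded path keeps its existing per-protocol type ... */
    struct xfrm_type_sketch {
        int (*output)(struct xfrm_state_sketch *x, struct skb_sketch *skb);
        int (*input)(struct xfrm_state_sketch *x, struct skb_sketch *skb);
    };

    /* ... while the offloaded path gets a parallel table of its own, so
     * both codepaths can coexist and be selected per-state. */
    struct xfrm_type_offload_sketch {
        void (*encap)(struct xfrm_state_sketch *x, struct skb_sketch *skb);
        int  (*xmit)(struct xfrm_state_sketch *x, struct skb_sketch *skb,
                     unsigned int features);
    };

    int main(void) { return 0; }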
-
Steffen Klassert authored
This patch adds netdev features to configure IPsec offloads. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
- 12 Apr, 2017 12 commits
-
-
David S. Miller authored
Ursula Braun says: ==================== net/smc: patches for net-next here are some patches for net/smc. Most important are improvements for socket closing. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
smc specifies IB_SEND_INLINE for IB_WR_SEND ib_post_send calls, but provides a mapped buffer to be sent. This is inconsistent, since IB_SEND_INLINE works without a mapped buffer. The problem has not been detected in the past because tests had been limited to ConnectX-3 cards from Mellanox, whose mlx4 driver just ignored the IB_SEND_INLINE flag. For now, the IB_SEND_INLINE flag is removed. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
Make sure sockets never accepted are removed cleanly. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
unhash is already called in sock_put_work. Remove the second call. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
State SMC_CLOSED should be reached only if ConnClosed has been sent to the peer. If ConnClosed is received from the peer, a socket with shutdown SHUT_WR done erroneously switches to state SMC_CLOSED, which means the peer socket is dangling. The local SMC socket is supposed to switch to state APPFINCLOSEWAIT instead, to make sure smc_close_final() is called during socket close. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
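In sketch form (state names taken from this description; the real net/smc close state machine has more states and transitions):

    #include <stdio.h>

    enum smc_state_sketch { SMC_ACTIVE, SMC_SHUT_WR_DONE,
                            SMC_APPFINCLOSEWAIT, SMC_CLOSED };

    /* Peer sent ConnClosed: only reach SMC_CLOSED if we already sent ours. */
    static enum smc_state_sketch on_peer_conn_closed(enum smc_state_sketch s,
                                                     int conn_closed_sent)
    {
        (void)s;
        if (!conn_closed_sent)
            return SMC_APPFINCLOSEWAIT;   /* fixed: was wrongly SMC_CLOSED */
        return SMC_CLOSED;
    }

    int main(void)
    {
        printf("%d\n", on_peer_conn_closed(SMC_SHUT_WR_DONE, 0)); /* 2: APPFINCLOSEWAIT */
        return 0;
    }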
-
Ursula Braun authored
Several state changes occur during SMC socket closing. Currently, state changes triggered locally occur in process context with lock_sock() taken, while state changes triggered by the peer occur in tasklet context with bh_lock_sock() taken. bh_lock_sock() does not wait until a lock_sock() task in process context has finished. This may lead to races in socket state transitions that result in dangling SMC sockets, or to duplicate SMC socket freeing. This patch introduces a closing worker to run all state changes under lock_sock(). Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reported-by: Dave Jones <davej@codemonkey.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
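A user-space model of the serialization idea (all names invented; a pthread mutex stands in for lock_sock()): peer events recorded from "tasklet" context no longer mutate state directly; a single worker applies them under the same lock the process-context path holds.

    #include <pthread.h>
    #include <stdatomic.h>

    struct smc_sock_model {
        pthread_mutex_t sock_lock;     /* models lock_sock() serialization */
        int state;                     /* models the SMC close state       */
        atomic_int pending_peer_event; /* set by "tasklet", read by worker */
    };

    /* Runs in "tasklet" context: just record the event; in the kernel the
     * handler would also schedule the close worker. */
    static void peer_event(struct smc_sock_model *s)
    {
        atomic_store(&s->pending_peer_event, 1);
    }

    /* Close worker: the only place peer-triggered transitions now happen. */
    static void close_worker(struct smc_sock_model *s)
    {
        pthread_mutex_lock(&s->sock_lock);            /* lock_sock()    */
        if (atomic_exchange(&s->pending_peer_event, 0))
            s->state = 1;                             /* apply transition */
        pthread_mutex_unlock(&s->sock_lock);          /* release_sock() */
    }

    int main(void)
    {
        struct smc_sock_model s = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };
        peer_event(&s);
        close_worker(&s);
        return s.state == 1 ? 0 : 1;
    }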
-
Ursula Braun authored
Wake up reading file descriptors for a closing socket as well, otherwise some socket applications may stall. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
If the peer indicates write_blocked, the cursor state of the received data should be sent to the peer immediately (in smc_tx_consumer_update()). Afterwards the write_blocked indicator is cleared. If there is no free slot for another write request, sending is postponed to the worker smc_tx_work, and the write_blocked indicator is not cleared. Therefore another clearing check is needed in smc_tx_work(). Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
SMC requires an active IB port on the RoCE device. smc_pnet_find_roce_resource() determines the matching RoCE device port according to the configured PNET table. Do not return the found RoCE device port if it is not flagged active. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
The global event handler is created only if the ib_device has already been used by at least one link group, so it is guaranteed that a corresponding entry exists in the smc_ib_devices list. Get rid of this superfluous check. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
This patch removes an outdated comment. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
In the submission of the latest multiple-buffer patch set, this fix was lost. I am sending this patch to put it right again. The fix was originally proposed by Arnd Bergmann. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 Apr, 2017 12 commits
-
-
Willem de Bruijn authored
BPF helper functions access socket fields through skb->sk. This is not set in ingress cgroup and socket filters: the association is only made in skb_set_owner_r once the filter has accepted the packet. The sk is available, as socket lookup has already taken place. Temporarily set skb->sk to sk in these cases. Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
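The pattern, modeled in plain C (skb and sk are simplified stand-ins; in the kernel the assignment wraps the filter run and is undone afterwards, so real ownership is still established later by skb_set_owner_r):

    #include <stddef.h>

    struct sock_sketch { int mark; };
    struct skb_sketch  { struct sock_sketch *sk; };

    /* Filter helpers read fields via skb->sk. */
    static int run_filter(const struct skb_sketch *skb)
    {
        return skb->sk ? skb->sk->mark : 0;
    }

    /* Temporarily associate the looked-up sk for the duration of the filter. */
    static int filter_with_sk(struct skb_sketch *skb, struct sock_sketch *sk)
    {
        struct sock_sketch *saved = skb->sk;
        int ret;

        skb->sk = sk;
        ret = run_filter(skb);
        skb->sk = saved;   /* restore: the real association happens only
                              after the packet is accepted */
        return ret;
    }

    int main(void)
    {
        struct sock_sketch sk = { .mark = 42 };
        struct skb_sketch skb = { .sk = NULL };
        return filter_with_sk(&skb, &sk) == 42 ? 0 : 1;
    }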
-
Wei Yongjun authored
Fix the return value check which was testing the wrong variable in devlink_dpipe_header_put(). Fixes: 1555d204 ("devlink: Support for pipeline debug (dpipe)") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
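The bug class, sketched with invented names: the status of one call was stored in one variable, but a different, already-checked variable was tested again.

    #include <stdlib.h>

    static void *alloc_a(void) { return malloc(1); }
    static void *alloc_b(void) { return malloc(1); }

    static int setup(void)
    {
        void *a = alloc_a();
        if (!a)
            return -1;

        void *b = alloc_b();
        if (!b) {          /* fixed: the buggy form re-tested "a" here,
                              so a failed alloc_b() went unnoticed */
            free(a);
            return -1;
        }

        free(a);
        free(b);
        return 0;
    }

    int main(void) { return setup(); }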
-
David S. Miller authored
Julian Wiedmann says: ==================== more s390/net updates here's a second batch of s390/net patches for net-next. A mixed bunch of qeth cleanups, and a few patches to add support for ETHTOOL_GLINKSETTINGS. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
prepare() and complete() are not implemented by any discipline, so just drop all the indirection. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Reviewed-by: Hans Wippel <hwippel@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
LINK_MODE_* replaces the u32-limited SUPPORTED_* / ADVERTISED_* definitions. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
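The motivation in miniature: the legacy masks are plain u32 bitmaps, so link modes beyond bit 31 simply cannot be represented, while the ksettings API uses wide bitmaps indexed by LINK_MODE_* numbers. A hedged sketch (the mode number is invented):

    #include <stdio.h>

    #define MODE_BIT 40   /* hypothetical link-mode number >= 32 */
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    int main(void)
    {
        /* ksettings style: a wide bitmap of unsigned longs can index any
         * LINK_MODE_* number, where a legacy u32 mask stops at bit 31. */
        unsigned long modes[2] = { 0, 0 };

        modes[MODE_BIT / BITS_PER_LONG] |= 1UL << (MODE_BIT % BITS_PER_LONG);
        printf("mode bit %d set: %d\n", MODE_BIT,
               !!(modes[MODE_BIT / BITS_PER_LONG] &
                  (1UL << (MODE_BIT % BITS_PER_LONG))));
        return 0;
    }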
-
Julian Wiedmann authored
get_settings() is deprecated and lacks support for higher link speeds, so implement get_link_ksettings() instead. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
In preparation for moving to get_link_ksettings(), clean up how we build the supported and advertised port/speed masks. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
1. A buffer has 16 is_header flags, because that's its number of elements. 2. Replace the last occurrence of QETH_HEADER_SIZE, and remove it. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
ndo_start_xmit() expects us to return netdev_tx_t here... Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
'elements_needed' is not used in qeth_do_send_packet_fast(), so remove it. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Duplicated code. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Identical code, we just need to call a layer-specific hook to process any received buffer. qeth_buffer_reclaim_work() is shuffled around to avoid a forward declaration for qeth_queue_input_buffer(). Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-