- 18 Oct, 2018 25 commits
-
-
Geetha sowjanya authored
Add support to enable or disable internal loopback mode in CGX. New mbox IDs CGX_INTLBK_ENABLE/DISABLE added for this. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Linu Cherian <lcherian@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Linu Cherian authored
Upon receiving a notification from firmware, the CGX event handler in the AF driver gets the current link info such as status, speed, duplex etc. from the CGX driver and sends it across to the PFs that have registered to receive such notifications. To support the above: - Mbox messaging support for sending msgs from AF to PF has been added. - Added mbox msgs so that PFs can register/unregister for link events. - Link notifications are sent to a PF under two scenarios: 1. When an asynchronous link change notification is received from firmware with the notification flag turned on for that PF. 2. Upon a notification turn-on request, the current link status is sent to the PF. Also added a new mailbox msg using which an RVU PF/VF can retrieve its mapped CGX LMAC's current link info. Link info includes status, speed, duplex and lmac type. Signed-off-by: Linu Cherian <lcherian@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
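The link info carried by these mailbox messages could be sketched roughly as below; the struct name, field names and widths are illustrative assumptions, not the exact AF mailbox layout.

    /* Illustrative sketch only -- names and widths are assumptions,
     * not the exact AF mailbox layout.
     */
    struct example_cgx_link_info {
            u64 link_up:1;      /* link status */
            u64 full_duplex:1;  /* duplex */
            u64 speed:20;       /* speed in Mbps */
            u64 lmac_type_id:4; /* LMAC type, e.g. SGMII/XAUI */
    };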
-
Vidhya Raman authored
This patch adds support for setting MAC address filters in CGX for PF interfaces. Also PF interfaces can be put in promiscuous mode. Dataplane PFs access this functionality using mailbox messages to the AF driver. Signed-off-by: Vidhya Raman <vraman@marvell.com> Signed-off-by: Stanislaw Kardach <skardach@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christina Jacob authored
This patch adds support for an RVU PF/VF driver to retrieve its mapped CGX LMAC Rx and Tx stats from the AF via mbox. A new mailbox msg is added. Signed-off-by: Christina Jacob <cjacob@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sunil Goutham authored
Added new mailbox msgs for RVU PF/VFs to request AF to enable/disable their mapped CGX::LMAC Rx & Tx. Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Linu Cherian <lcherian@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sunil Goutham authored
Instead of looping on an integer timeout, use time_before(jiffies, timeout), so that the maximum poll time is capped. Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
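A minimal sketch of the jiffies-based polling pattern referred to above; the register accessor, READY bit and delays are placeholders, not the driver's actual names.

    #include <linux/jiffies.h>
    #include <linux/delay.h>
    #include <linux/io.h>
    #include <linux/bits.h>
    #include <linux/errno.h>

    /* Poll a hypothetical ready bit for at most 100 ms. */
    static int example_poll_ready(void __iomem *reg)
    {
            unsigned long timeout = jiffies + msecs_to_jiffies(100);

            while (time_before(jiffies, timeout)) {
                    if (readq(reg) & BIT_ULL(0)) /* hypothetical READY bit */
                            return 0;
                    usleep_range(100, 200);
            }
            return -ETIMEDOUT;
    }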
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: Add VxLAN support This patchset adds support for VxLAN offload in the mlxsw driver. With regards to the forwarding plane, VxLAN support is composed of two main parts: encapsulation and decapsulation. In the device, NVE encapsulation (and VxLAN in particular) takes place in the bridge. A packet can be encapsulated using VxLAN either because it hit an FDB entry that forwards it to the router with the IP of the remote VTEP or because it was flooded, in which case it is sent to a list of remote VTEPs (in addition to local ports). In either case, the VNI is derived from the filtering identifier (FID) the packet was classified to at ingress and the underlay source IP is taken from a device global configuration. VxLAN decapsulation takes place in the underlay router, where packets that hit a local route corresponding to the source IP of the local VTEP are decapsulated and injected into the bridge. The packets are classified to a FID based on the VNI they came with. The first six patches export the required APIs in the VxLAN and mlxsw drivers in order to allow for the introduction of the NVE core in the next two patches. The NVE core is designed to support a variety of NVE encapsulations (e.g., VxLAN, NVGRE) and different ASICs, but currently only VxLAN and Spectrum are supported. Spectrum-2 support will be added in the future. The last 10 patches add support for VxLAN decapsulation and encapsulation and include the addition of the required switchdev APIs in the VxLAN driver. These APIs allow capable drivers to get a notification about the addition / deletion of FDB entries to / from the VxLAN's FDB. Subsequent patchsets will add selftests (generic and mlxsw-specific), data plane learning, FDB extack and vetoing, and support for VLAN-aware bridges (one VNI per VxLAN device model). v2: * Implement netif_is_vxlan() using rtnl_link_ops->kind (Jakub & Stephen) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
In the device, VxLAN encapsulation takes place in the FDB table where certain {MAC, FID} entries are programmed with an underlay unicast IP. MAC addresses that are not programmed in the FDB are flooded to the relevant local ports and also to a list of underlay unicast IPs that are programmed using the all-zeros MAC address in the VxLAN driver. One difference between the hardware and software data paths is the fact that in the software data path there are two FDB lookups prior to the encapsulation of the packet: first in the bridge's FDB table using {MAC, VID} and then in the VxLAN's FDB table using {MAC, VNI}. Therefore, when a new VxLAN FDB entry is notified, it is only programmed to the device if there is a corresponding entry in the bridge's FDB table. Similarly, when a new bridge FDB entry pointing to the VxLAN device is notified, it is only programmed to the device if there is a corresponding entry in the VxLAN's FDB table. Note that the above scheme will result in a discrepancy between both data paths if only one FDB table is populated in the software data path. For example, if only the bridge's FDB is populated with an entry pointing to a VxLAN device, then a packet hitting the entry will only be flooded by the kernel to remote VTEPs, whereas the device will also flood the packet to other local ports that are members of the VLAN. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Enslavement of VxLAN devices to offloaded bridges was never forbidden by mlxsw, but this patch makes sure the required configuration is performed in order to allow VxLAN encapsulation and decapsulation to take place in the device. The patch handles both the case where a VxLAN device is enslaved to an already offloaded bridge and the case where the first mlxsw port is enslaved to a bridge that already has a VxLAN device configured. Invalid configurations are sanitized and an error string is returned via extack. Since encapsulation and decapsulation do not occur when the VxLAN device is down, the driver makes sure to enable / disable these functionalities based on NETDEV_PRE_UP and NETDEV_DOWN events. Note that NETDEV_PRE_UP is used in favor of NETDEV_UP, as the former allows vetoing the operation, if necessary. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
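A hedged sketch of how the enable/disable could be keyed to the netdev events mentioned above; the handler and the example_nve_* helpers are hypothetical, not mlxsw's actual functions.

    /* Hypothetical netdevice event handler sketch -- not mlxsw's code. */
    static int example_netdevice_event(struct notifier_block *nb,
                                       unsigned long event, void *ptr)
    {
            struct net_device *dev = netdev_notifier_info_to_dev(ptr);

            if (!netif_is_vxlan(dev))
                    return NOTIFY_DONE;

            switch (event) {
            case NETDEV_PRE_UP:
                    /* PRE_UP (unlike UP) lets the driver veto via errno. */
                    return notifier_from_errno(example_nve_enable(dev));
            case NETDEV_DOWN:
                    example_nve_disable(dev);
                    break;
            }
            return NOTIFY_DONE;
    }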
-
Ido Schimmel authored
Currently, an FDB entry only ceases being offloaded when it is deleted. This changes with VxLAN encapsulation. Devices capable of performing VxLAN encapsulation usually have only one FDB table, unlike the software data path which has two - one in the bridge driver and another in the VxLAN driver. Therefore, bridge FDB entries pointing to a VxLAN device are only offloaded if there is a corresponding entry in the VxLAN FDB. Allow clearing the offload indication in case the corresponding entry was deleted from the VxLAN FDB. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
When notifications are sent about FDB activity, and an FDB entry with several remotes is removed, the notification is sent only for the first destination. That makes it impossible to distinguish between the case where only this first remote is removed, and the one where the FDB entry is removed as a whole. Therefore send one notification for each remote of a removed FDB entry. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Offloaded bridge FDB entries are marked with NTF_OFFLOADED. Implement a similar mechanism for VXLAN, where a given remote destination can be marked as offloaded. To that end, introduce a new event, SWITCHDEV_VXLAN_FDB_OFFLOADED, through which the marking is communicated to the vxlan driver. To identify which RDST should be marked as offloaded, a switchdev_notifier_vxlan_fdb_info is passed to the listeners. The "offloaded" flag in that object determines whether the offloaded mark should be set or cleared. When sending offloaded FDB entries over netlink, mark them with NTF_OFFLOADED. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
A switchdev-capable driver that is aware of VXLAN may need to query the VXLAN FDB. In the particular case of mlxsw, this functionality is limited to querying UC FDBs. Since those are easier to deal with than the general case of RDST chain traversal, introduce an interface to query specifically UC FDBs: vxlan_fdb_find_uc(). Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
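A usage sketch for the new helper; the signature is inferred from the description (MAC + VNI lookup filling a switchdev_notifier_vxlan_fdb_info), so treat it as an approximation rather than the authoritative prototype.

    /* Approximate usage sketch -- signature inferred from the changelog. */
    static int example_query_vxlan_uc_fdb(struct net_device *vxlan_dev,
                                          const u8 *mac, __be32 vni)
    {
            struct switchdev_notifier_vxlan_fdb_info fdb_info;
            int err;

            err = vxlan_fdb_find_uc(vxlan_dev, mac, vni, &fdb_info);
            if (err)
                    return err; /* no matching UC entry in the VXLAN FDB */

            /* fdb_info now describes the single unicast remote destination. */
            return 0;
    }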
-
Petr Machata authored
When offloading VXLAN devices, drivers need to know about events in the VXLAN FDB database. Since VXLAN models a bridge, it is natural to distribute the VXLAN FDB notifications using the pre-existing switchdev notification mechanism. To that end, introduce two new notification types: SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE and SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE. Introduce a new function, vxlan_fdb_switchdev_call_notifiers(), to send the new notifier types, and a struct switchdev_notifier_vxlan_fdb_info to communicate the details of the FDB entry under consideration. Invoke the new function from vxlan_fdb_notify(). Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
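The payload struct described above presumably looks close to the sketch below; the field list is assumed from the changelog text (remote destination details, MAC, VNI, offloaded flag) rather than copied from the header.

    /* Rough sketch of the payload -- fields assumed from the changelog. */
    struct switchdev_notifier_vxlan_fdb_info {
            struct switchdev_notifier_info info; /* must come first */
            union vxlan_addr remote_ip;          /* remote VTEP address */
            __be16 remote_port;
            __be32 remote_vni;
            u32 remote_ifindex;
            u8 eth_addr[ETH_ALEN];
            __be32 vni;
            bool offloaded; /* used by SWITCHDEV_VXLAN_FDB_OFFLOADED */
    };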
-
Ido Schimmel authored
Add the ability to determine whether a netdev is a VxLAN netdev by calling netif_is_vxlan(), which checks the netdev's rtnl_link_ops. This will allow modules to identify netdev events involving a VxLAN netdev and act accordingly. For example, drivers capable of VxLAN offload will need to configure the underlying device when a VxLAN netdev is being enslaved to an offloaded bridge. Convert nfp to use the newly introduced helper. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
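Given the cover letter's note that the check is implemented via rtnl_link_ops->kind, the helper presumably looks close to this sketch:

    /* Presumed shape of the helper, based on the rtnl_link_ops->kind check
     * mentioned in the cover letter.
     */
    static inline bool netif_is_vxlan(const struct net_device *dev)
    {
            return dev->rtnl_link_ops &&
                   !strcmp(dev->rtnl_link_ops->kind, "vxlan");
    }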
-
Ido Schimmel authored
When a local route that matches the source IP of an offloaded NVE tunnel is notified, the driver needs to program it to perform NVE decapsulation instead of merely trapping packets to the CPU. This patch complements "mlxsw: spectrum_router: Enable local routes promotion to perform NVE decap" where existing local routes were promoted to perform NVE decapsulation. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
802.1D FIDs are used to represent VLAN-unaware bridges and currently this is the only type of FID that supports NVE configuration. Since the NVE tunnel device does not take a reference on the FID, it is possible for the FID to be destroyed when it still has NVE configuration. Therefore, when destroying the FID make sure to disable its NVE configuration. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The common NVE core expects each encapsulation type to implement a certain set of operations that are specific to this type and the currently used ASIC. These operations include things such as the ability to determine whether a certain NVE configuration can be offloaded and ASIC-specific initialization for this type. Implement these operations for VxLAN on the Spectrum ASIC. Spectrum-2 support will be added by a future patchset. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The Spectrum ASIC supports different types of NVE encapsulations (e.g., VxLAN, NVGRE) with more types to be supported by future ASICs. Despite being different, all these encapsulations share some common functionality such as the enablement of NVE encapsulation on a given filtering identifier (FID) and the addition of remote VTEPs to the linked-list of VTEPs that traffic should be flooded to. Implement this common core and allow different ASICs to register different operations for different encapsulation types. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
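One way such a core could expose per-type operations is an ops table like the illustrative sketch below; the member names are hypothetical, not mlxsw's actual structure.

    #include <linux/netdevice.h>
    #include <linux/netlink.h>

    /* Illustrative only -- member names are hypothetical, not mlxsw's. */
    struct example_nve_ops {
            int type;                            /* e.g. VXLAN, NVGRE */
            bool (*can_offload)(const struct net_device *nve_dev,
                                struct netlink_ext_ack *extack);
            int (*init)(void *priv);             /* ASIC-specific setup */
            void (*fini)(void *priv);
    };

    /* Each ASIC registers one ops table per supported encapsulation type,
     * while the core handles the shared FID/flood-list bookkeeping.
     */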
-
Ido Schimmel authored
Drivers that support tunnel decapsulation (IPinIP or NVE) need to configure the underlying device to conform to the behavior outlined in RFC 6040 with respect to the ECN bits. This behavior is implemented by INET_ECN_decapsulate() which requires an skb to be passed where the ECN CE bit can be potentially set. Since these drivers do not need to mark an skb, but only configure the device to do so, factor out the business logic to __INET_ECN_decapsulate() and potentially perform the marking in INET_ECN_decapsulate(). This allows drivers to invoke __INET_ECN_decapsulate() and configure the device. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Suggested-by: Petr Machata <petrm@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
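The resulting split presumably looks roughly like this (simplified; the exact 0/1/2 return convention is assumed from the pre-existing helper):

    /* Simplified sketch of the split described above. */
    static inline int __INET_ECN_decapsulate(__u8 outer, __u8 inner, bool *set_ce)
    {
            if (INET_ECN_is_not_ect(inner)) {
                    switch (outer & INET_ECN_MASK) {
                    case INET_ECN_NOT_ECT:
                            return 0;
                    case INET_ECN_ECT_0:
                    case INET_ECN_ECT_1:
                            return 1;
                    case INET_ECN_CE:
                            return 2;
                    }
            }

            *set_ce = INET_ECN_is_ce(outer);
            return 0;
    }

    static inline int INET_ECN_decapsulate(struct sk_buff *skb, __u8 outer, __u8 inner)
    {
            bool set_ce = false;
            int rc;

            rc = __INET_ECN_decapsulate(outer, inner, &set_ce);
            if (!rc && set_ce)
                    INET_ECN_set_ce(skb); /* only the skb-marking caller does this */
            return rc;
    }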
-
Ido Schimmel authored
Drivers that support VxLAN offload need to be able to sanitize the configuration of the VxLAN device and accept / reject its offload. For example, mlxsw requires that the local IP of the VxLAN device be set and that packets be flooded to unicast IP(s) and not to a multicast group. Expose the functions that perform such checks. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
In the device, different VRFs (routing tables) are represented using different virtual routers (VRs) and thus the kernel's table IDs are mapped to VR IDs. Allow internal users of the IP router to query the VR ID based on a kernel table ID. This is needed - for example - when configuring the underlay VR where VxLAN encapsulated packets will undergo an L3 lookup. In this case, the kernel's table ID is derived from the VxLAN device's configuration. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
When an NVE tunnel with an IP underlay (e.g., VxLAN) is configured the local route to the tunnel's source IP needs to be promoted to perform NVE decapsulation. Expose an API in the unicast IP router to promote / demote local routes. The case where a local route is configured after the creation of the NVE tunnel will be handled in a subsequent patch in the set. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Current APIs only allow looking up a FID and creating it in case it does not exist. With VxLAN, if the bridge to which the VxLAN device was enslaved does not already have a corresponding FID, then something went wrong that we need to be aware of. Add an API to look up a FID without creating it, in order to catch the above-mentioned situation. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
In the device, the VNI and the list of remote VTEPs a packet should be flooded to are properties of the filtering identifier (FID). During encapsulation, the VNI is taken from the FID the packet was classified to. During decapsulation, the overlay packet is injected into a bridge and classified to a FID based on the VNI it came with. Allow NVE configuration for a FID. Currently, this is only supported with 802.1D FIDs, which are used for VLAN-unaware bridges. However, NVE configuration is also going to be supported with 802.1Q FIDs, which is why the related fields are placed in the common FID struct. Since the device requires a 1:1 mapping between FID and VNI, the driver maintains a hashtable keyed by VNI and checks whether the VNI is already associated with an existing FID. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
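An illustrative sketch of the 1:1 VNI-to-FID bookkeeping using a kernel rhashtable; the struct and parameter names are hypothetical, not mlxsw's.

    #include <linux/rhashtable.h>

    /* Hypothetical bookkeeping sketch -- names are not mlxsw's. */
    struct example_fid {
            struct rhash_head vni_ht_node;
            __be32 vni;
            bool vni_valid;
    };

    static const struct rhashtable_params example_vni_ht_params = {
            .key_len = sizeof(__be32),
            .key_offset = offsetof(struct example_fid, vni),
            .head_offset = offsetof(struct example_fid, vni_ht_node),
    };

    /* Refuse to associate a VNI that is already mapped to another FID. */
    static bool example_vni_in_use(struct rhashtable *vni_ht, __be32 vni)
    {
            return rhashtable_lookup_fast(vni_ht, &vni, example_vni_ht_params) != NULL;
    }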
-
- 16 Oct, 2018 15 commits
-
-
Daniel Borkmann authored
Eric reported that syzkaller triggered a splat in tcp_cleanup_ulp() where the sock_owned_by_me() assertion failed. This happened because inet_csk_prepare_forced_close() first releases the socket lock and tcp_done(newsk) is then called after inet_csk_prepare_forced_close(), therefore without the socket lock held. The sock_owned_by_me() assertion can be removed altogether, as the only place tcp_cleanup_ulp() is now called from is inet_csk_destroy_sock() -> sk->sk_prot->destroy(), where the socket is in a dead state and unreachable. Therefore, add a comment explaining why the check is not needed instead. Fixes: 8b9088f8 ("tcp, ulp: enforce sock_owned_by_me upon ulp init and cleanup") Reported-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Yunsheng Lin says: ==================== Some cleanup and bugfix for desc filling When retransmitting packets, skb_cow_head, which is called in hns3_set_tso, may clone a new header. The driver clears the checksum of the header after doing the DMA map, so the HW will read the old header whose L3 checksum is not cleared and calculate a wrong L3 checksum. Also, when sending a big fragment using multiple buffer descriptors, hns3 does one mapping but multiple unmappings when TX is done, which may cause an unmapping problem. This patchset does some cleanup before fixing the above problems. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
When sending a big fragment using multiple buffer descriptors, hns3 does one mapping but multiple unmappings when TX is done, which may cause an unmapping problem. To fix it, this patch makes sure the value of desc_cb.length of the non-first BDs is zero. If desc_cb.length is zero, we do not unmap the buffer. Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
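A minimal sketch of the idea with hypothetical names (not the hns3 driver's exact code): only the BD that owns the DMA mapping carries a non-zero length, and the unmap path skips the rest.

    #include <linux/dma-mapping.h>

    /* Hypothetical sketch -- not the hns3 driver's exact code. */
    struct example_desc_cb {
            dma_addr_t dma;
            unsigned int length;
    };

    static void example_unmap_desc(struct device *dev, struct example_desc_cb *cb)
    {
            /* Non-first BDs of a big fragment are filled with length == 0,
             * so only the BD that owns the DMA mapping gets unmapped.
             */
            if (!cb->length)
                    return;

            dma_unmap_page(dev, cb->dma, cb->length, DMA_TO_DEVICE);
            cb->length = 0;
    }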
-
Fuyun Liang authored
To keep symmetrical, this patch renames hns_nic_dma_unmap to hns3_clear_desc. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
This patch unifies big TX fragment handling for the tso and non-tso cases. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
To solve the L3 checksum error problem, which happens when the driver does not clear the L3 checksum, the DMA map should be done after calling skb_cow_head. This patch moves the DMA map into hns3_fill_desc to ensure that the DMA map is done after calling skb_cow_head. Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
This patch removes hns3_fill_desc_tso in preparation for fixing some desc filling bugs, because for both the tso and non-tso cases we will use the unified hns3_fill_desc. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Rahul Verma says: ==================== Align PTT and add various link modes. This series aligns the PTT propagation as either a local PTT or the global PTT. It adds new transceiver modes, speed capabilities and board configs, which are utilized to display the enhanced link modes, media types and speeds, providing more detailed link information. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Verma authored
Newly added link modes need to be included when setting link modes. If a new link mode is not available during qed_set_link, the link may go down because an empty supported-capability mask is passed to the MFW after setting autoneg off/on with the current/supported speed. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Verma authored
Set the link mode after checking the available "supported" link caps of the port. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Verma authored
The transceiver types, speed capabilities and board types added in the HSI are utilized to display accurate link information in ethtool. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Verma authored
Added transceiver modes with different speeds and media types, speed capabilities and supported board types in the HSI, which will be utilized to display the correct link modes, media types and speeds. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rahul Verma authored
Align the use of a local PTT to propagate through the qed_mcp* APIs; the global PTT should not be used. Register access should be done through layers: a register address is mapped into a PTT (PF translation table). Several interface functions require a PTT to directly read/write registers. A pool of PTTs is maintained, and several PTTs are used simultaneously to access device registers in different flows; the same PTT should not be used in flows that can run concurrently. To avoid running out of PTT resources, too many PTTs should not be acquired without releasing them. Every PF has a global PTT, which is used throughout the life of the PF, in the most important flows, for register access. Generic functions acquire a PTT locally and release it after use. This patch aligns the use of the global PTT and local PTTs accordingly. Signed-off-by: Rahul Verma <rahul.verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
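The acquire/use/release pattern described above looks roughly like the sketch below; qed_ptt_acquire()/qed_ptt_release() are the driver's existing helpers, while example_mcp_read() stands in for any qed_mcp_* call that needs a PTT.

    /* Sketch of the local-PTT pattern; example_mcp_read() is a placeholder. */
    static int example_read_with_local_ptt(struct qed_hwfn *p_hwfn)
    {
            struct qed_ptt *p_ptt;
            int rc;

            p_ptt = qed_ptt_acquire(p_hwfn); /* take a PTT from the pool */
            if (!p_ptt)
                    return -EBUSY;

            rc = example_mcp_read(p_hwfn, p_ptt); /* hypothetical qed_mcp_* call */

            qed_ptt_release(p_hwfn, p_ptt); /* always return it to the pool */
            return rc;
    }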
-
YueHaibing authored
Fixes the following sparse warning: drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c:282:5: warning: symbol 'aq_fw2x_update_stats' was not declared. Should it be static? Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
David Ahern says: ==================== net: Kernel side filtering for route dumps Implement kernel side filtering of route dumps by protocol (e.g., which routing daemon installed the route), route type (e.g., unicast), table id and nexthop device. iproute2 has been doing this filtering in userspace for years; pushing the filters to the kernel side reduces the amount of data the kernel sends and reduces wasted cycles on both sides processing unwanted data. These initial options provide a huge improvement for efficiently examining routes on large scale systems. v2 - better handling of requests for a specific table. Rather than walking the hash of all tables, lookup the specific table and dump it - refactor mr_rtm_dumproute moving the loop over the table into a helper that can be invoked directly - add hook to return NLM_F_DUMP_FILTERED in DONE message to ensure it is returned even when the dump returns nothing ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-