- 22 Mar, 2018 40 commits
-
Salil Mehta authored
After a hardware reset we should re-fetch the configuration from the PF, such as the queue info and TC info. This may affect allocations already made, like those of the TQPs. Hence, we should release all such allocations and re-allocate them afresh according to the configuration newly fetched after the reset. Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Salil Mehta authored
After the VF driver knows that the hardware reset has been performed successfully, it should proceed to reset the enet layer. This primarily consists of bringing down the interface, clearing the TX/RX rings, disassociating the vectors from the rings, etc. Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Salil Mehta authored
The VF driver depends upon the PF to eventually reset the hardware. This request is made using a mailbox command. This patch adds the function required to achieve the above. Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Salil Mehta authored
This introduces the hclge device reset states of "requested" and "pending", and their handling in the context of the reset service task. The device gets into the requested state because of a VF reset request asserted from the upper layers, for example due to a watchdog timeout expiration. The requested state eventually results in forwarding the VF reset request to the PF, which actually resets the VF. The device gets into the pending state if: 1. the VF receives the acknowledgement from the PF for the VF reset request it originally sent to the PF; 2. the reset service task detects that, after asserting VF reset a certain number of times, the data path is still not working, and the device then decides to assert a full VF reset (which also resets the PCIe interface); 3. the PF intimates the VF that it has undergone a reset. The pending state results in the VF polling for the hardware reset completion status and then resetting the stack/enet layer, which in turn means reinitializing the ring management/enet layer. Note: support for 3. will be added later as a separate patch. This decision should not affect VF reset, as its event handling is generic in nature. Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
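A minimal sketch of how such a requested/pending flow can look; the enum, structure and helper names below are hypothetical illustrations, not the actual hclgevf symbols.

    #include <linux/types.h>

    /* Hypothetical sketch of the requested/pending reset handling described
     * above; names are illustrative and do not match the real hclgevf code. */
    enum vf_reset_state {
        VF_RESET_NONE,
        VF_RESET_REQUESTED,     /* upper layer (e.g. watchdog) asked for a VF reset */
        VF_RESET_PENDING,       /* waiting for the hardware reset to complete */
    };

    struct vf_dev {
        enum vf_reset_state reset_state;
    };

    /* assumed helpers, provided elsewhere in the driver */
    void vf_send_reset_request_to_pf(struct vf_dev *vdev);
    bool vf_hw_reset_done(struct vf_dev *vdev);
    void vf_reset_stack(struct vf_dev *vdev);

    static void vf_reset_service_task(struct vf_dev *vdev)
    {
        switch (vdev->reset_state) {
        case VF_RESET_REQUESTED:
            vf_send_reset_request_to_pf(vdev);      /* forwarded to the PF over the mailbox */
            vdev->reset_state = VF_RESET_PENDING;
            break;
        case VF_RESET_PENDING:
            if (vf_hw_reset_done(vdev))             /* poll hardware completion status */
                vf_reset_stack(vdev);               /* reinitialize the ring management/enet layer */
            break;
        default:
            break;
        }
    }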
-
Salil Mehta authored
A VF reset involves handling of different reset-related events from the stack, the physical function, the mailbox, etc. The reset service task is used to service such reset event requests and, later, to handle the waits for hardware completion and to initiate the stack resets. Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Salil Mehta authored
The HNS3 driver's enet layer, used for ring management and stack interaction, is common to both VF and PF. The PF already supports reset functionality to handle the network stack watchdog timeout trigger, but the existing code is not generic enough to also support VF reset. This patch does the following: 1. makes the existing watchdog timeout handler in the enet layer generic, i.e. suitable for both VF and PF; 2. introduces the new reset event handler for the VF code; 3. changes the existing reset event handler of the PF code to initialize the reset level. Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Kirill Tkhai says: ==================== Rework ip_ra_chain protection Commit 1215e51e "ipv4: fix a deadlock in ip_ra_control" made rtnl_lock() be used in raw_close(). This function is called on every RAW socket destruction, so rtnl_mutex is taken every time. This scales very badly. I observe cleanup_net() spending a lot of time in rtnl_lock(), and raw_close() is one of the biggest rtnl users (since we have percpu net->ipv4.icmp_sk). This patchset reworks the locking: it reverts the problem commit and its descendant, and introduces rtnl-independent locking. This may have a continuation, and someone may work on killing rtnl_lock() in mrtsock_destruct() in the future. v3: Change the order of patches [2/5] and [3/5]. v2: Fix sparse warning in [4/5], as reported by the kbuild test robot. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
Since ra_chain is per-net, we may use a per-net mutex to protect it in ip_ra_control(). This improves scalability. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
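A rough sketch of the per-net locking pattern; the ra_mutex field name inside the per-net ipv4 state is an assumption here, not necessarily the exact member added by the patch.

    #include <linux/mutex.h>
    #include <net/net_namespace.h>

    /* Sketch: each network namespace carries its own mutex, so
     * ip_ra_control() calls in different namespaces no longer serialize
     * on one global lock (or on rtnl_mutex). */
    static int example_ip_ra_control(struct net *net)
    {
        mutex_lock(&net->ipv4.ra_mutex);        /* contends only within this netns */
        /* ... walk and modify net->ipv4.ra_chain here ... */
        mutex_unlock(&net->ipv4.ra_mutex);
        return 0;
    }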
-
Kirill Tkhai authored
This is an optimization that makes ip_call_ra_chain() iterate over fewer sockets to find the sockets it is looking for. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
This reverts commit 1215e51e. Since raw_close() is used on every RAW socket destruction, the changes made by 1215e51e scale badly. This is clearly seen on an endless unshare(CLONE_NEWNET) test, where the cleanup_net() kworker spends a lot of time waiting for the rtnl_lock() introduced by this commit. The previous patch moved IP_ROUTER_ALERT out of rtnl_lock(), so we revert this patch. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
ip_ra_control() does not need sk_lock. Who are the other users of ip_ra_chain? ip_mroute_setsockopt() doesn't take sk_lock, while parallel IP_ROUTER_ALERT syscalls are synchronized by ip_ra_lock. So, we may move this command out of sk_lock. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
This reverts commit ba3f571d. That commit was made after 1215e51e "ipv4: fix a deadlock in ip_ra_control", and killed ip_ra_lock, which became useless once rtnl_lock() was used in the destruction of every raw ipv4 socket. This scales very badly, and the next patch in the series reverts 1215e51e. ip_ra_lock will be used again. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Intiyaz Basha authored
When a VF is trusted, all promiscuous traffic will only be sent to that VF. In normal operation, promiscuous traffic is sent to the PF. There can be only one trusted VF per PF. Signed-off-by: Intiyaz Basha <intiyaz.basha@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Subash Abhinov Kasiviswanathan says: ==================== net: qualcomm: rmnet: Updates 2018-03-12 This series contains some minor updates for the rmnet driver. Patch 1 contains fixes for sparse warnings. Patch 2 updates the copyright date to 2018. Patch 3 is a cleanup in the receive path. Patch 4 adds the new rmnet netlink attributes in uapi and updates their usage. Patch 5 has the implementation of the fill_info operation. v1->v2: Remove the force casts since the data type is changed to __be types, as mentioned by David. v2->v3: Update the copyright only in files which actually had changes, as mentioned by Joe. v3->v4: Add new netlink attributes for mux_id and flags instead of using the vlan attributes, as mentioned by David. The rmnet-specific flags are also moved to uapi. The netlink updates are done as part of #4, and #5 has the fill_info operation. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
This is needed to query the mux_id and flags of an rmnet device. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Define new netlink attributes for the rmnet mux_id and flags. These flags / mux_id were earlier carried in the vlan flags / id respectively. The flag bits are also moved to uapi and are renamed with the prefix RMNET_FLAG_*. Also add the rmnet policy to handle the new netlink attributes. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
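A hedged sketch of what the corresponding netlink policy can look like; the IFLA_RMNET_MUX_ID / IFLA_RMNET_FLAGS attribute names and the ifla_rmnet_flags structure follow the commit description and are assumptions here.

    #include <net/rtnetlink.h>
    #include <linux/if_link.h>

    /* Illustrative rmnet netlink policy; attribute names are assumed from
     * the commit text, not copied from the final uapi header. */
    static const struct nla_policy rmnet_policy_example[IFLA_RMNET_MAX + 1] = {
        [IFLA_RMNET_MUX_ID] = { .type = NLA_U16 },                          /* 16-bit mux_id */
        [IFLA_RMNET_FLAGS]  = { .len = sizeof(struct ifla_rmnet_flags) },   /* RMNET_FLAG_* bits */
    };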
-
Subash Abhinov Kasiviswanathan authored
The device of the de-aggregated skb is already correctly assigned after inspecting the mux_id, so remove the assignment in rmnet_map_deaggregate(). Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Fix warnings which were reported when running with sparse (make C=1 CF=-D__CHECK_ENDIAN__):
drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c:81:15: warning: cast to restricted __be16
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:271:37: warning: incorrect type in assignment (different base types) expected unsigned short [unsigned] [usertype] pkt_len got restricted __be16 [usertype] <noident>
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:287:29: warning: incorrect type in assignment (different base types) expected unsigned short [unsigned] [usertype] pkt_len got restricted __be16 [usertype] <noident>
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:310:22: warning: cast to restricted __be16
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:319:13: warning: cast to restricted __be16
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c:49:18: warning: cast to restricted __be16
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c:50:18: warning: cast to restricted __be32
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c:74:21: warning: cast to restricted __be16
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
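For reference, the usual way to keep such code sparse-clean is to type on-wire fields as __be16/__be32 and convert explicitly; the structure and function names below are illustrative, not the rmnet code itself.

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Sparse-clean handling of a big-endian on-wire field: keep the __be16
     * type on the wire-format struct and convert explicitly at use sites. */
    struct map_header_example {
        __be16 pkt_len;                     /* length as it appears on the wire */
    };

    static u16 example_get_len(const struct map_header_example *hdr)
    {
        return be16_to_cpu(hdr->pkt_len);   /* explicit conversion, no "cast to restricted __be16" */
    }

    static void example_set_len(struct map_header_example *hdr, u16 len)
    {
        hdr->pkt_len = cpu_to_be16(len);    /* assign a __be16, not a host-order value */
    }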
-
Colin Ian King authored
The current logic of flags | TUNNEL_SEQ is always non-zero and hence sequence numbers are always incremented no matter the setting of the TUNNEL_SEQ bit. Fix this by using & instead of |. Detected by CoverityScan, CID#1466039 ("Operands don't affect result") Fixes: 77a5196a ("gre: add sequence number for collect md mode.") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
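In general terms, the bug looks like the stand-alone example below (illustrative only, not the exact gre/erspan code):

    /* A bitwise OR with a non-zero constant is always non-zero, so the buggy
     * test passes for every packet; AND tests whether the flag bit is set. */
    #define TUNNEL_SEQ_EXAMPLE 0x1000U      /* stand-in for the real TUNNEL_SEQ bit */

    static int needs_seq_buggy(unsigned int flags)
    {
        return (flags | TUNNEL_SEQ_EXAMPLE) != 0;   /* always true */
    }

    static int needs_seq_fixed(unsigned int flags)
    {
        return (flags & TUNNEL_SEQ_EXAMPLE) != 0;   /* true only when the flag is set */
    }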
-
Tal Gilboa authored
Net DIM is a generic algorithm intended for dynamically optimizing the interrupt moderation of network devices. This document describes how it works and how to use it. Signed-off-by: Tal Gilboa <talgi@mellanox.com> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
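As a drastically simplified illustration of the kind of comparison DIM makes (this is not the net_dim API, just the idea of comparing consecutive traffic samples and stepping the moderation profile):

    /* Simplified sketch of the DIM idea: compare traffic measured in the
     * current period with the previous one and decide in which direction
     * to move the interrupt moderation profile. */
    struct dim_sample_example {
        unsigned long packets;
        unsigned long bytes;
    };

    /* returns +1 to coalesce more aggressively, -1 to favor latency, 0 to keep */
    static int dim_decide_example(const struct dim_sample_example *prev,
                                  const struct dim_sample_example *curr)
    {
        if (curr->bytes > prev->bytes && curr->packets > prev->packets)
            return 1;       /* load rising: fewer interrupts, better throughput */
        if (curr->bytes < prev->bytes && curr->packets < prev->packets)
            return -1;      /* load falling: more interrupts, lower latency */
        return 0;           /* inconclusive: keep the current profile */
    }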
-
Colin Ian King authored
The array mvpp2_pools is being indexed by long_log_pool; however, this looks like a cut-and-paste bug and it should in fact be short_log_pool. Detected by CoverityScan, CID#1466113 ("Copy-paste error") Fixes: 576193f2 ("net: mvpp2: jumbo frames support") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
GhantaKrishnamurthy MohanKrishna says: ==================== tipc: socket diagnostics additions for AF_TIPC The following patchset adds socket diagnostics support for AF_TIPC by using the sock_diag framework. The patchset was created on top of commit id: fb66cb07. A new iproute2 package is needed to use this functionality; it will be sent for review in a separate mail. The commit series improves the diagnosis of tipc sockets by exporting the configuration, states and statistics of sockets. The series has been co-authored by Parthasarathy Bhuvaragan and consists of two parts: 1-2: Adaptations of existing code to support the sock_diag framework. We modify existing functions to support socket diagnostics. The required information about the sockets is exported. 3: Step sk_drops during packet drop. This occurs if the packet cannot be queued because the queue length exceeds the configured thresholds. The diag module is optional, and if enabled it will be loaded on demand when needed. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
GhantaKrishnamurthy MohanKrishna authored
Currently, when tipc is unable to queue a received message on a socket, the message is rejected back to the sender with the error TIPC_ERR_OVERLOAD. However, the application on this socket has no knowledge of these discards. In this commit, we step the sk_drops counter when tipc is unable to queue a received message, and export sk_drops using tipc socket diagnostics. Acked-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: GhantaKrishnamurthy MohanKrishna <mohan.krishna.ghanta.krishnamurthy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
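The pattern is roughly the following sketch; the tipc-specific context and the helper name are assumptions, only the sk_drops counter itself is taken from the commit text.

    #include <net/sock.h>

    /* Sketch: when a received message cannot be queued because the receive
     * queue is over its limit, bump the per-socket drop counter so that the
     * discard becomes visible through socket diagnostics. */
    static void example_note_rcv_overload(struct sock *sk)
    {
        atomic_inc(&sk->sk_drops);  /* counted drop, later exported via sock_diag */
        /* ... the message is then rejected back with TIPC_ERR_OVERLOAD ... */
    }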
-
GhantaKrishnamurthy MohanKrishna authored
This commit adds socket diagnostics capability for AF_TIPC in the netlink family NETLINK_SOCK_DIAG, in a new kernel module (diag.ko). The following are the key design considerations: - config TIPC_DIAG defaults to y, like INET_DIAG. - only requests with the flag NLM_F_DUMP are supported (dump all). - a tipc_sock_diag_req message is introduced to send filter parameters. - the response attributes are TLVs, some nested. To avoid exposing data structures between the diag and tipc modules and to avoid code duplication, the following additions are required: - export the tipc_nl_sk_walk function to reuse the socket iterator. - export tipc_sk_fill_sock_diag to fill the tipc diag attributes. - create a sock_diag response message in __tipc_add_sock_diag, defined in diag.c, and use the exported tipc_sk_fill_sock_diag to fill the response. Acked-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: GhantaKrishnamurthy MohanKrishna <mohan.krishna.ghanta.krishnamurthy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
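A sketch of how a protocol family typically hooks into the sock_diag framework; the callback and handler names below are placeholders, not the actual tipc symbols.

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/netlink.h>
    #include <linux/skbuff.h>
    #include <linux/sock_diag.h>
    #include <linux/socket.h>

    /* Placeholder dump callback: a real implementation would walk the tipc
     * sockets (via the exported iterator) and fill the diag attributes. */
    static int example_tipc_diag_dump(struct sk_buff *skb, struct nlmsghdr *h)
    {
        /* only NLM_F_DUMP requests are supported, per the commit text */
        return 0;
    }

    static const struct sock_diag_handler tipc_diag_handler_example = {
        .family = AF_TIPC,
        .dump   = example_tipc_diag_dump,
    };

    static int __init example_tipc_diag_init(void)
    {
        return sock_diag_register(&tipc_diag_handler_example);
    }
    module_init(example_tipc_diag_init);
    MODULE_LICENSE("GPL");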
-
GhantaKrishnamurthy MohanKrishna authored
The current socket iterator function, tipc_nl_sk_dump, handles socket locks and calls __tipc_nl_add_sk for each socket. To reuse this logic in the sock_diag implementation, we make minor modifications so that these functions become generic, as described below. In this commit, we add two new functions, __tipc_nl_sk_walk and __tipc_nl_add_sk_info, and modify tipc_nl_sk_dump and __tipc_nl_add_sk accordingly. In __tipc_nl_sk_walk we: 1. acquire and release the socket locks; 2. for each socket, execute the specified callback function. In __tipc_nl_add_sk we: - move the netlink attribute insertion to __tipc_nl_add_sk_info. tipc_nl_sk_dump calls tipc_nl_sk_walk with __tipc_nl_add_sk as its argument. sock_diag will use these generic functions in a later commit. There is no functional change in this commit. Acked-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: GhantaKrishnamurthy MohanKrishna <mohan.krishna.ghanta.krishnamurthy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: Update supported firmware version The first patch bumps the firmware version supported by the driver. The second patch enables a feature introduced in the new version, auto-negotiation disable. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tal Bar authored
With 'auto-neg off', the device still sent AN (auto-negotiation) frames with the forced speed. Fix this by using the an_disable_admin field in the Port type and speed (PTYS) register. This field indicates whether speed negotiation frames will be sent by the port or not. Add the field and enable/disable it for 'auto-neg on/off', making the port start/stop sending AN (auto-negotiation) frames. Note that for SwitchX2 the behavior doesn't change (i.e. it supports only AN enabled with a forced speed). Signed-off-by: Tal Bar <talb@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tal Bar authored
This new firmware contains: - Support for auto-neg disable mode Signed-off-by: Tal Bar <talb@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Peng Li says: ==================== fix some bugs for HNS3 driver This patchset fixes some bugs for HNS3 driver: [Patch 1/11 - 5/11] fix various bugs reported by hisilicon test team. [Patch 6/11 - 7/11] fix bugs about interrupt coalescing self-adaptive function. [Patch 8/11 - 11/11] fix bugs about ethtool_ops.get_link_ksettings. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
When a phy exists, the phy_ethtool_ksettings_get function is enough to get the link ksettings, so get_link_ksettings can return directly after phy_ethtool_ksettings_get is called. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
This patch adds support for querying speed and duplex via 'ethtool ethX' to the VF. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
This patch adds ethtool_ops.get_link support to the VF. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
A fixed link mode is returned by hns3_get_link_ksettings, which is unreasonable. This patch fixes it by adding the related functions to get the link mode from hardware. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
Since we change the update rate of int_gl from every interrupt to every one hundred interrupts, the old way of deriving the time interval from the int_gl value is no longer accurate. This patch calculates the time interval using the jiffies value. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
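A sketch of the jiffies-based interval measurement; the structure and field names are assumptions, not the actual hns3 code.

    #include <linux/jiffies.h>
    #include <linux/types.h>

    /* Sketch: derive the elapsed time between GL updates from jiffies rather
     * than from the programmed int_gl value. */
    struct ring_group_example {
        unsigned long last_jiffies;     /* jiffies recorded at the previous GL update */
    };

    static u32 example_interval_us(struct ring_group_example *grp)
    {
        u32 us = jiffies_to_usecs(jiffies - grp->last_jiffies); /* real elapsed time */

        grp->last_jiffies = jiffies;    /* start the next measurement window */
        return us;
    }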
-
Fuyun Liang authored
The interrupt coalescing self-adaptive function updates int_gl on every interrupt. This GL update rate is too fast to produce a good new GL value. This patch changes the GL update rate to once every one hundred interrupts. The GL update rate is defined by HNS3_INT_ADAPT_DOWN_START. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
The IMP may need more time to handle some commands, such as reset. This patch enlarges the maximum time for command timeout. The driver checks the IMP result every microsecond and breaks out of the loop as soon as it gets the right result, so not every command needs the maximum time. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
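Conceptually, the polling loop looks like the sketch below; the constant value and helper names are assumptions, not the actual hclge symbols.

    #include <linux/delay.h>
    #include <linux/types.h>

    #define EXAMPLE_CMD_TIMEOUT_US 30000    /* enlarged maximum wait; the value is an assumption */

    struct example_hw;                                  /* driver private hardware handle */
    bool example_cmd_csq_done(struct example_hw *hw);   /* assumed completion check */

    /* Poll the IMP every microsecond; return as soon as the command completes,
     * so only slow commands (e.g. reset) ever approach the maximum timeout. */
    static bool example_wait_for_cmd(struct example_hw *hw)
    {
        u32 waited = 0;

        do {
            if (example_cmd_csq_done(hw))
                return true;
            udelay(1);
            waited++;
        } while (waited < EXAMPLE_CMD_TIMEOUT_US);

        return false;
    }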
-
Yunsheng Lin authored
There is no module that depends on hclge's or hclgevf's symbols, but hns_enet needs them to provide the ops it runs on. When there is a need to auto-load the hns3 driver, the auto-load fails because hclge or hclgevf is not loaded. hns_enet already exports its pci table, so this patch exports the pci table for the hclge and hclgevf modules too. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
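Exporting the PCI ID table follows the standard MODULE_DEVICE_TABLE pattern; the table name and device ID below are placeholders, not the real hclge entries.

    #include <linux/module.h>
    #include <linux/pci.h>

    /* Exporting the table lets udev/modprobe auto-load this module when a
     * matching PCI device is present, just as hns_enet already does. */
    static const struct pci_device_id example_hclge_pci_tbl[] = {
        { PCI_VDEVICE(HUAWEI, 0xA220), 0 },     /* placeholder device ID */
        { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(pci, example_hclge_pci_tbl);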
-
Yunsheng Lin authored
The vlan table in hardware is cleared after a PF/Core/IMP/Global reset, which causes the problem that vlan-tagged packets are not received. This patch fixes it by restoring the vlan table after reset. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
The VF queue reset flow is different from the PF queue reset flow. The VF driver should stop the VF queue first, then send a message to the PF, and the PF does the reset. The PF should send a response to the VF after it completes the queue reset, and the VF can initialize the queue hardware after getting the response. This patch fixes the VF queue reset flow to follow the correct steps. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-