- 19 Jun, 2020 37 commits
-
-
Vishal Kulkarni authored
Add support to delete ethtool n-tuple filter. Fetch the appropriate filter region (HPFILTER, HASH, NORMAL) in which the filter exists, and delete it from the respective region, accordingly. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vishal Kulkarni authored
Add support to parse and insert ethtool n-tuple filters. Translate n-tuple spec to flow spec and use the existing tc-flower offload infra to insert ethtool n-tuple filters. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
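A rough idea of the translation step (a minimal sketch, not the actual cxgb4 code; it assumes the generic ethtool_rx_flow_rule_create() helper is used to turn the n-tuple spec into a flow_rule that the driver's existing tc-flower offload path can then consume; the insert hook is hypothetical):

```c
#include <linux/err.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <net/flow_offload.h>

/* Hypothetical driver hook that feeds a flow_rule into the existing
 * tc-flower offload path; assumed to exist elsewhere in the driver.
 */
int driver_flower_offload_insert(struct net_device *dev, struct flow_rule *rule);

/* Sketch: convert an ethtool n-tuple spec into a flow_rule and hand it
 * to the driver's existing tc-flower offload infrastructure.
 */
static int sketch_ntuple_to_flower(struct net_device *dev,
				   struct ethtool_rx_flow_spec *fs)
{
	struct ethtool_rx_flow_spec_input input = { .fs = fs };
	struct ethtool_rx_flow_rule *erule;
	int ret;

	erule = ethtool_rx_flow_rule_create(&input);
	if (IS_ERR(erule))
		return PTR_ERR(erule);

	ret = driver_flower_offload_insert(dev, erule->rule);

	ethtool_rx_flow_rule_destroy(erule);
	return ret;
}
```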
-
Vishal Kulkarni authored
Allocate and manage resources required for ethtool n-tuple filters. Also fetch the HASH filter region size and calculate nhash entries. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Po Liu authored
This patch adds a dropped-frames counter to tc flower offloading. Reporting hardware-dropped frames is necessary for some actions: actions such as the police action and the soon-to-be-introduced stream gate action produce dropped frames, and the user needs to see them. The stats update then shows how many packets matched the filter and how many of those were dropped. v2: Changes - Update commit comments as suggested by Jiri Pirko. Signed-off-by: Po Liu <Po.Liu@nxp.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Reviewed-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
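The user-visible effect is an extra drop counter that drivers report alongside packets and bytes. A minimal sketch of a driver reporting it (the flow_stats_update() signature with the added drops argument is assumed from the description; the counter values are placeholders read from hardware):

```c
#include <net/flow_offload.h>

/* Sketch: report hardware counters, including drops, for an offloaded
 * tc flower filter. hw_bytes/hw_pkts/hw_drops stand for values read
 * from device counters.
 */
static void sketch_report_flower_stats(struct flow_cls_offload *cls,
				       u64 hw_bytes, u64 hw_pkts,
				       u64 hw_drops, u64 lastused)
{
	flow_stats_update(&cls->stats, hw_bytes, hw_pkts, hw_drops,
			  lastused, FLOW_ACTION_HW_STATS_DELAYED);
}
```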
-
David S. Miller authored
Vishal Kulkarni says: ==================== cxgb4: add support to read/write flash This series of patches adds support to read/write different binary images of serial flash present in Chelsio terminator. V2 changes: Patch 1: No change Patch 2: No change Patch 3: Fix 4 compilation warnings reported by C=1, W=1 flags Patch 4: No change Patch 5: No change ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vishal Kulkarni authored
This patch adds support to dump flash memory via ethtool --get-dump. Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vishal Kulkarni authored
Update set_flash to flash the boot config image to its flash region. Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vishal Kulkarni authored
Update set_flash to flash the boot image to its flash region. Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vishal Kulkarni authored
Update set_flash to flash the PHY image to its flash region. Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vishal Kulkarni authored
Chelsio adapters contain different flash regions, and each region is used by a different binary file. This patch adds support to flash images such as PHY firmware, boot and boot config using ethtool -f N. The N value mapping is as follows. N = 0 : parse the image and decide which region to flash; N = 1 : firmware; N = 2 : PHY firmware; N = 3 : boot image; N = 4 : boot config. Signed-off-by: Vishal Kulkarni <vishal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
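A sketch of how such a handler might dispatch on N (illustrative only; the flash_* helpers are hypothetical, while the .flash_device ethtool hook and the struct ethtool_flash region/data fields are the standard kernel interface behind ethtool -f):

```c
#include <linux/errno.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Hypothetical per-region helpers, assumed to exist in the driver. */
int flash_autodetect(struct net_device *dev, const char *file);
int flash_firmware(struct net_device *dev, const char *file);
int flash_phy_firmware(struct net_device *dev, const char *file);
int flash_boot_image(struct net_device *dev, const char *file);
int flash_boot_cfg(struct net_device *dev, const char *file);

/* Sketch of a .flash_device ethtool_ops handler dispatching on the
 * region value N passed by 'ethtool -f <file> N'.
 */
static int sketch_flash_device(struct net_device *dev,
			       struct ethtool_flash *ef)
{
	switch (ef->region) {
	case 0:	return flash_autodetect(dev, ef->data);	/* parse image */
	case 1:	return flash_firmware(dev, ef->data);
	case 2:	return flash_phy_firmware(dev, ef->data);
	case 3:	return flash_boot_image(dev, ef->data);
	case 4:	return flash_boot_cfg(dev, ef->data);
	default:
		return -EOPNOTSUPP;
	}
}
```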
-
David S. Miller authored
Eric Dumazet says: ==================== net: tso: expand to UDP support With QUIC getting more attention these days, it is worth implementing UDP direct segmentation, the same as we did for TCP. Drivers will need to advertise NETIF_F_GSO_UDP_L4 so that the GSO stack does not do the (more expensive) segmentation. Note the first two patches are stable candidates, after tests confirm they do not add regressions. v2: addressed Jakub's feedback: 1) Added a prep patch for octeontx2-af 2) calls tso_start() earlier in otx2_sq_append_tso() ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
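For reference, advertising the feature from a driver is essentially a one-line change (generic sketch; the function name is a placeholder, the NETIF_F_GSO_UDP_L4 flag is the real feature bit):

```c
#include <linux/netdevice.h>

/* Sketch: advertise UDP L4 segmentation so the GSO stack leaves
 * segmentation to the driver's net/core/tso.c based TX path.
 */
static void sketch_advertise_udp_gso(struct net_device *dev)
{
	dev->hw_features |= NETIF_F_GSO_UDP_L4;
	dev->features    |= NETIF_F_GSO_UDP_L4;
}
```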
-
Eric Dumazet authored
Note that like TCP, we do not support additional encapsulations, and that checksums must be offloaded to the NIC. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Add a tlen field to struct tso_t, and change tso_start() to return skb_transport_offset(skb) + tso->tlen. This removes from callers the need to use tcp_hdrlen(skb) and will ease the UDP segmentation offload addition. v2: calls tso_start() earlier in otx2_sq_append_tso() [Jakub] Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
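In driver terms the change looks roughly like this (a sketch of a caller, assuming tso_start() now returns the full header length as described; the per-segment TX logic is elided):

```c
#include <linux/skbuff.h>
#include <net/tso.h>

/* Sketch: tso_start() is assumed to return the full header length
 * (skb_transport_offset(skb) + tso.tlen), so the caller no longer has
 * to add tcp_hdrlen(skb) itself.
 */
static void sketch_xmit_tso(struct sk_buff *skb)
{
	struct tso_t tso;
	int hdr_len;

	hdr_len = tso_start(skb, &tso);

	/* ... per-segment loop using tso_build_hdr()/tso_build_data(),
	 * consuming (skb->len - hdr_len) bytes of payload ...
	 */
	(void)hdr_len;
}
```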
-
Eric Dumazet authored
skb argument of tso_count_descs(), tso_build_hdr() and tso_build_data() can be const. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The size field can be an int; there is no need for size_t. This removes a 32-bit hole on 64-bit kernels. Also align the fields for better readability. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The transport header can be 60 bytes, and the network header can also be 60 bytes. Add the Ethernet header and we are above 128 bytes. Since drivers using net/core/tso.c usually allocate one DMA-coherent piece of memory per TX queue, this patch might cause issues if a driver was using too many slots. For 1024 slots, we would need 256 KB of physically contiguous memory instead of 128 KB. An alternative fix would be to add checks in the fast path, but this involves more work in all drivers using net/core/tso.c. Fixes: f9cbe9a5 ("net: define the TSO header size in net/tso.h") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
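The arithmetic behind the bump, as a sketch of the resulting definition (256 is simply the next power of two that covers the worst case):

```c
/* Worst case per-segment headers:
 *   up to 60 bytes of IPv4/IPv6 (network) header
 * + up to 60 bytes of TCP (transport) header
 * + 14 (or 18 with VLAN) bytes of Ethernet header  => more than 128 bytes
 */
#define TSO_HEADER_SIZE 256	/* was 128 */
```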
-
Eric Dumazet authored
We need to increase TSO_HEADER_SIZE from 128 to 256. Since otx2_sq_init() calls qmem_alloc() with TSO_HEADER_SIZE, we need to change (struct qmem)->entry_sz to avoid truncation to 0. Fixes: 7a37245e ("octeontx2-af: NPA block admin queue init") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
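The truncation is plain integer arithmetic (illustrative sketch; the original field width in struct qmem is assumed to be 8 bits, which is what makes 256 wrap to 0):

```c
#include <linux/types.h>

/* Illustration only: storing 256 in an 8-bit entry-size field silently
 * wraps to 0; widening the field makes the value fit.
 */
static const u8  entry_sz_too_narrow = 256 & 0xff;	/* wraps to 0 */
static const u16 entry_sz_widened    = 256;		/* fits: 256 */
```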
-
David S. Miller authored
Barry Song says: ==================== net: hns3: a bundle of minor cleanup and fixes Some minor changes to either improve readability or make the code align better with Linux APIs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Barry Song authored
Right now they are empty functions for our SoC, since the hardware can keep the cache coherent, but it is still good to align with the streaming DMA APIs, as device drivers should not make assumptions about the SoC. Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Barry Song authored
Calling disable_irq() right after request_irq() is still risky, as an interrupt can arrive after request_irq() and before disable_irq(). This should instead be done with the IRQ_NOAUTOEN flag. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
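The pattern the commit refers to, as a generic sketch (not the hns3 code; the handler and name string are placeholders):

```c
#include <linux/interrupt.h>
#include <linux/irq.h>

/* Sketch: keep the IRQ disabled from the start instead of racing a
 * disable_irq() against request_irq().
 */
static int sketch_request_irq(unsigned int irq, irq_handler_t handler,
			      void *dev_id)
{
	/* tell the core not to auto-enable the line in request_irq() */
	irq_set_status_flags(irq, IRQ_NOAUTOEN);

	return request_irq(irq, handler, 0, "sketch", dev_id);
	/* later, when the driver is ready: enable_irq(irq); */
}
```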
-
Barry Song authored
This is for improving the readability. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Barry Song authored
Change the type of the buffer address from unsigned char to void. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Barry Song authored
Since we are using a device-managed function, it is unnecessary to free the memory in probe. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes. This code was detected with the help of Coccinelle, and audited and fixed manually. Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
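The transformation is mechanical; a generic sketch (the struct and field names here are placeholders, not the driver's):

```c
#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Sketch: allocate a struct with a trailing flexible array using
 * struct_size() instead of open-coded sizeof() arithmetic.
 */
struct sketch_table {
	int count;
	struct sketch_entry { u32 a, b; } entries[];
};

static struct sketch_table *sketch_alloc_table(int count)
{
	struct sketch_table *t;

	/* before: kzalloc(sizeof(*t) + count * sizeof(*t->entries), GFP_KERNEL) */
	t = kzalloc(struct_size(t, entries, count), GFP_KERNEL);
	if (t)
		t->count = count;
	return t;
}
```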
-
Tim Harvey authored
If a valid MAC address is present in the device tree, use that before falling back to the CSRs or a random MAC address. Signed-off-by: Tim Harvey <tharvey@gateworks.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Heiner Kallweit says: ==================== r8169: smaller improvements again Series includes a number of different smaller improvements. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
So far we cannot configure IRQ coalescing when the link is down. Allow the user to do this, and assume the settings are meant for the highest supported speed; at lower speeds the IRQ rate is low enough anyway. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Relevant chip clocks are disabled in rtl_pll_power_down(), therefore move calling clk_disable_unprepare() there. Similar for enabling the clock. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Counters are updated whenever we go down, therefore move the call to rtl8169_down(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
rtl8169_hw_reset() meanwhile does more than a hw reset, therefore rename it to rtl8169_cleanup(). In addition move calling napi_disable() to this function. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
rtl8169_hw_reset() may be called under RTNL lock, therefore switch to synchronize_net(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
In the following scenario WoL isn't configured properly: - Driver is loaded, interface isn't brought up within 10s, so driver runtime-suspends. - WoL is set. - Interface is brought up, stored WoL setting isn't applied. It has always been like that, but the scenario seems to be quite theoretical as I haven't seen any bug report yet. Therefore treat the change as an improvement. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Since 9d3679fe ("r8169: inline rtl8169_make_unusable_by_asic") this constant isn't used any longer, so remove it. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
In case of problems it facilitates the bug analysis if we know whether DASH is active. Therefore emit a message in probe if this is the case. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes. This code was detected with the help of Coccinelle, and audited and fixed manually. Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Whenever a buggy NAPI driver returns more than its budget, we emit a stack trace that is of no use, since it does not tell which driver is buggy. Instead, emit a message giving the function name, and a descriptive message. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
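A sketch of the kind of check described (the message text is illustrative; %pS resolves the poll callback to the offending driver's symbol name):

```c
#include <linux/netdevice.h>
#include <linux/printk.h>

/* Sketch: emit a one-shot, self-describing error naming the buggy poll
 * function instead of dumping a stack trace.
 */
static void sketch_check_napi_budget(struct napi_struct *n, int work,
				     int weight)
{
	if (unlikely(work > weight))
		pr_err_once("NAPI poll function %pS returned %d, exceeding its budget of %d.\n",
			    n->poll, work, weight);
}
```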
-
Gustavo A. R. Silva authored
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes. This code was detected with the help of Coccinelle, and audited and fixed manually. Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Jun, 2020 3 commits
-
-
Gustavo A. R. Silva authored
Use the array_size() helper instead of the open-coded version in copy_to_user(). These sorts of multiplication factors need to be wrapped in array_size(). This issue was found with the help of Coccinelle, and audited and fixed manually. Addresses-KSPP-ID: https://github.com/KSPP/linux/issues/83 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
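A generic sketch of the pattern (the buffer and element names are placeholders):

```c
#include <linux/errno.h>
#include <linux/overflow.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Sketch: wrap count * sizeof(elem) in array_size() so an overflow
 * saturates to SIZE_MAX instead of wrapping to a small value.
 */
static int sketch_copy_entries(void __user *ubuf, const u32 *entries,
			       size_t count)
{
	/* before: copy_to_user(ubuf, entries, count * sizeof(u32)) */
	if (copy_to_user(ubuf, entries, array_size(count, sizeof(u32))))
		return -EFAULT;
	return 0;
}
```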
-
Gustavo A. R. Silva authored
Use vzalloc/vzalloc_node instead of vmalloc/vmalloc_node followed by memset. Also, notice that vzalloc_node() has no 2-factor argument form to calculate the size for the allocation, so multiplication factors need to be wrapped in array_size(). This issue was found with the help of Coccinelle, and audited and fixed manually. Addresses-KSPP-ID: https://github.com/KSPP/linux/issues/83 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
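The shape of the change, as a generic sketch (the element type and node argument are placeholders):

```c
#include <linux/overflow.h>
#include <linux/types.h>
#include <linux/vmalloc.h>

/* Sketch: one zeroing allocation instead of vmalloc_node() + memset(),
 * with the size multiplication wrapped in array_size().
 */
static u64 *sketch_alloc_stats(size_t count, int node)
{
	/* before: p = vmalloc_node(count * sizeof(u64), node);
	 *         memset(p, 0, count * sizeof(u64));
	 */
	return vzalloc_node(array_size(count, sizeof(u64)), node);
}
```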
-
Hoang Huu Le authored
Currently, updating the binding table (adding a service binding to the name table / withdrawing a service binding) is sent over replicast. However, if we scale up clusters to > 100 nodes/containers, this method is less efficient because it loops through the nodes in a cluster one by one. It is worth using broadcast to update a binding service; this way, the binding table can be updated on all peer nodes in one shot. Broadcast is used when all peer nodes, as indicated by a new capability flag TIPC_NAMED_BCAST, support reception of this message type. Four problems need to be considered when introducing this feature.

1) When establishing a link to a new peer node we still update it by a unicast 'bulk' update. This may lead to race conditions, where a later broadcast publication/withdrawal bypasses the 'bulk', resulting in disordered publications, or even a withdrawal arriving before the corresponding publication. We solve this by adding an 'is_last_bulk' bit to the last bulk message so that it can be distinguished from all other messages. Only when this message has arrived do we open up for reception of broadcast publications/withdrawals.

2) When the first legacy node is added to the cluster, all distribution switches over to the legacy 'replicast' method, while the opposite happens when the last legacy node leaves the cluster. This entails another risk of message disordering that has to be handled. We solve this by adding a sequence number to the broadcast/replicast messages, so that disordering can be discovered and corrected. Note however that we don't need to consider potential message loss or duplication at this protocol level.

3) Bulk messages don't contain any sequence numbers and will always arrive in order. Hence we must exempt those from the sequence number control and deliver them unconditionally. We solve this by adding a new 'is_bulk' bit to those messages so that they can be recognized.

4) Legacy messages, which don't contain any new bits or sequence numbers, but neither can arrive out of order, also need to be exempt from the initial synchronization and sequence number check, and delivered unconditionally. Therefore, we add another 'is_not_legacy' bit to all new messages so that those can be distinguished from legacy messages and the latter delivered directly.

v1->v2: - fix warning issue reported by kbuild test robot <lkp@intel.com> - add sanity check to drop publication messages with a sequence number lower than the agreed synch point

Signed-off-by: kernel test robot <lkp@intel.com> Signed-off-by: Hoang Huu Le <hoang.h.le@dektech.com.au> Acked-by: Jon Maloy <jmaloy@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
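A rough sketch of the receive-side ordering rule described in points 1) through 4) above (illustrative only; all structure, field and function names here are hypothetical, not the actual tipc code):

```c
#include <linux/types.h>

/* Sketch of the delivery decision: bulk and legacy messages bypass the
 * sequence-number check; everything else must wait for the bulk sync to
 * complete and then match the expected sequence number.
 */
struct named_rcv_ctx {
	u16 expect_seqno;	/* next broadcast/replicast seqno accepted */
	bool synced;		/* set once the 'is_last_bulk' message arrived */
};

static bool named_msg_deliverable(struct named_rcv_ctx *ctx,
				  bool is_bulk, bool is_legacy, u16 seqno)
{
	if (is_bulk || is_legacy)
		return true;		/* always in order, deliver as-is */
	if (!ctx->synced)
		return false;		/* hold until the bulk sync completes */
	if (seqno != ctx->expect_seqno)
		return false;		/* out of order: defer for now */
	ctx->expect_seqno++;
	return true;
}
```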
-