- 29 Mar, 2021 40 commits
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
David S. Miller authored
Tony Nguyen says: ==================== 1GbE Intel Wired LAN Driver Updates 2021-03-29 This series contains updates to the igc driver only. Andre Guedes says: Add XDP support for the igc driver. The approach implemented by this series follows, as much as possible, the approach implemented in other Intel drivers for the sake of consistency. The series is organized in two parts. In the first part, i.e. patches 1 to 4, the igc_main.c and igc_ptp.c code is refactored in preparation for landing the XDP support, which is introduced in the second part (patches 5 to 8). As far as code organization is concerned, XDP-related helpers are defined in a new file, igc_xdp.c, and are called by igc_main.c. The features added by this series have been tested with the samples provided in samples/bpf/: xdp1, xdp2, xdp_redirect_cpu, and xdp_redirect_map. An upcoming series will add support for the UMEM and zero-copy features of AF_XDP. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Loic Poulain authored
MBIM protocol makes the mhi network interface asymmetric: ingress data received from MHI is MBIM protocol, possibly containing multiple aggregated IP packets, while egress data received from the network stack is IP protocol. This change allows a 'protocol' to specify its own MRU, which, when specified, is used to allocate MHI RX buffers (skb). For MBIM, set the default MTU to 1500, which is the usual network MTU for WWAN IP packets, and the MRU to 3.5K (for allocation efficiency), allowing the skb to fit in a usual 4K page (including padding, skb_shared_info, ...). Signed-off-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
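As a rough sanity check of the 3.5K choice, the standard skb sizing macros can be used; a minimal sketch (the MRU constant name below is illustrative, and the exact sizes depend on architecture and config):

    /* Sketch: a 3.5K MRU plus the skb_shared_info overhead still fits in
     * one 4K page on a typical 64-bit build.
     */
    #define EXAMPLE_MBIM_MRU        (3 * 1024 + 512)        /* 3.5K */

    static bool mbim_mru_fits_one_page(void)
    {
            size_t needed = SKB_DATA_ALIGN(EXAMPLE_MBIM_MRU) +
                            SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

            return needed <= PAGE_SIZE;
    }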
-
Loic Poulain authored
Currently, if the skb is non-linear due to MHI skb chaining, it is linearized in the MBIM RX handler prior to MBIM decoding, causing an extra allocation and copy that can be as large as the maximum MBIM frame size (32K). This change introduces MBIM decoding for non-linear skbs, allowing 'large' non-linear MBIM packets to be processed without skb linearization. The IP packets are simply extracted from the MBIM frame using the skb_copy_bits helper. Signed-off-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
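For reference, skb_copy_bits() is what makes the non-linear path possible: it copies across the linear area, the paged frags, and the frag list transparently. A minimal sketch of the extraction step (the helper name and the way offset/len are obtained are illustrative, not the actual mhi_net code):

    /* Sketch: pull one IP packet out of a (possibly non-linear) MBIM frame
     * without linearizing the whole frame. 'offset' and 'len' would come
     * from the parsed MBIM datagram table.
     */
    static struct sk_buff *mbim_extract_ip_packet(struct net_device *ndev,
                                                  struct sk_buff *mbim_skb,
                                                  unsigned int offset,
                                                  unsigned int len)
    {
            struct sk_buff *skb = netdev_alloc_skb(ndev, len);

            if (!skb)
                    return NULL;

            /* skb_copy_bits() walks the linear part, paged frags and frag
             * list as needed, so no skb_linearize() is required.
             */
            if (skb_copy_bits(mbim_skb, offset, skb_put(skb, len), len)) {
                    dev_kfree_skb(skb);
                    return NULL;
            }

            return skb;
    }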
-
Colin Ian King authored
The variable res is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
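The pattern being cleaned up, as a generic sketch (the function names are hypothetical, not the driver's):

    static int do_setup(void);      /* hypothetical setup call */

    /* Before: the initializer is dead, 'res' is overwritten before being read. */
    static int probe_before(void)
    {
            int res = -ENODEV;

            res = do_setup();
            return res;
    }

    /* After: assign at the point of the first real use; same behaviour. */
    static int probe_after(void)
    {
            int res = do_setup();

            return res;
    }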
-
Petr Machata authored
Add a document describing the principles behind resilient next-hop groups, and some notes about how to configure and offload them. Suggested-by: David Ahern <dsahern@gmail.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: David Ahern <dsahern@gmail.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Yingliang authored
Fix the following make W=1 kernel build warning: drivers/net/mdio.c:95: warning: expecting prototype for mdio_link_ok(). Prototype was for mdio45_links_ok() instead Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
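The fix for this class of warning is simply to make the name in the kernel-doc comment match the function it documents; a generic sketch (the foo_* names are hypothetical):

    struct foo_device {
            bool link_up;
    };

    /**
     * foo_link_ok - report whether the link is up
     * @dev: device to query
     *
     * The name on the first line must match the function it precedes;
     * a mismatch (e.g. documenting "mdio_link_ok" above mdio45_links_ok())
     * is what triggers the 'expecting prototype for ...' W=1 warning.
     */
    static bool foo_link_ok(const struct foo_device *dev)
    {
            return dev->link_up;
    }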
-
Yang Yingliang authored
Fix the following make W=1 kernel build warning: drivers/net/bonding/bond_main.c:982: warning: expecting prototype for change_active_interface(). Prototype was for bond_change_active_slave() instead Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Yingliang authored
Fix the following make W=1 kernel build warning: drivers/net/phy/mdio-boardinfo.c:63: warning: expecting prototype for mdio_register_board_info(). Prototype was for mdiobus_register_board_info() instead Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: Two sampling fixes This patchset fixes two bugs in recent sampling submissions. The first fix, in patch #3, prevents matchall rules with a sample action from being added in front of flower rules on egress. Patches #1-#2 are preparations aimed at avoiding similar bugs in the future. Patch #4 is a selftest. The second fix, in patch #5, prevents sampling from being enabled on a port on which it is already enabled. Patch #6 is a selftest. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Test that two sampling rules cannot be configured on the same port with the same trigger. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The per-port sampling triggers (i.e., ingress / egress) cannot be enabled twice, meaning the configuration below will not result in packets being sampled twice: # tc filter add dev swp1 ingress matchall skip_sw action sample rate 100 group 1 # tc filter add dev swp1 ingress matchall skip_sw action sample rate 100 group 1 Therefore, reject such configurations. Fixes: 90f53c53 ("mlxsw: spectrum: Start using sampling triggers hash table") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The driver can only offload matchall rules that do not match on a protocol. Test that matchall rules that match on a protocol are vetoed. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Perform the priority check earlier in the function instead of repeating it for every action. This fixes a bug that allowed matchall rules with sample action to be added in front of flower rules on egress. Fixes: 54d0e963 ("mlxsw: spectrum_matchall: Add support for egress sampling") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Previous patch moved the protocol check out of the action check, so these if statements can now be converted to a switch statement. Perform the conversion. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
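Roughly, the conversion looks like the sketch below, dispatching on the generic flow offload action id; the function and helper names are simplified stand-ins for the mlxsw code:

    static int handle_mirror(const struct flow_action_entry *act);  /* hypothetical */
    static int handle_sample(const struct flow_action_entry *act);  /* hypothetical */

    /* Sketch: a switch on the offloaded action id instead of an
     * if/else-if chain.
     */
    static int mall_add_action(const struct flow_action_entry *act,
                               struct netlink_ext_ack *extack)
    {
            switch (act->id) {
            case FLOW_ACTION_MIRRED:
                    return handle_mirror(act);
            case FLOW_ACTION_SAMPLE:
                    return handle_sample(act);
            default:
                    NL_SET_ERR_MSG(extack, "Unsupported action");
                    return -EOPNOTSUPP;
            }
    }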
-
Ido Schimmel authored
Perform the protocol check earlier in the function instead of repeating it for every action. Example: # tc filter add dev swp1 ingress proto ip matchall skip_sw action sample group 1 rate 100 Error: matchall rules only supported with 'all' protocol. We have an error talking to the kernel Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
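A sketch of what such an early veto looks like, using the generic cls_matchall offload structures (the function name is illustrative and the exact placement in the driver is simplified):

    /* Sketch: veto matchall offload early unless the rule matches all
     * protocols, and report the reason through extack so 'tc' prints it.
     */
    static int mall_check_protocol(const struct tc_cls_matchall_offload *f)
    {
            if (f->common.protocol != htons(ETH_P_ALL)) {
                    NL_SET_ERR_MSG(f->common.extack,
                                   "matchall rules only supported with 'all' protocol");
                    return -EOPNOTSUPP;
            }

            return 0;
    }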
-
David S. Miller authored
Weihang Li says: ==================== net: marvell: fix some coding style issues Do some cleanups according to the kernel coding style. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangyang Li authored
Use tabs instead of spaces to align the code. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangyang Li authored
Just delete three extra spaces. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangyang Li authored
Use a trailing */ on a separate line for block comments. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
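For reference, the block comment style being enforced:

    /* Networking block comment style: the text starts on the opening
     * line, continuation lines are prefixed with ' * ', and the closing
     * marker sits alone on the last line.
     */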
-
Yangyang Li authored
Delete a duplicated word in two comments. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Huazhong Tan says: ==================== net: hns3: misc updates for -next This series includes some updates for the HNS3 ethernet driver, including two fixes for issues introduced by commit fc4243b8 ("net: hns3: refactor flow director configuration"). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
skb_put_padto() may fail because of a memory allocation failure. sw_err_cnt is already used to log memory allocation failures in hns3_skb_linearize(), so use it to log failures of skb_put_padto() too. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
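Illustratively, the error path looks something like this sketch (the counter is passed in directly here; hns3 wraps its ring statistics in u64_stats helpers):

    /* Sketch: pad the frame to the minimum wire length and account a
     * software error on failure.  skb_put_padto() frees the skb when it
     * fails, so the caller must not touch the skb afterwards.
     */
    static int tx_pad_to_min(struct sk_buff *skb, unsigned int min_len,
                             u64 *sw_err_cnt)
    {
            if (unlikely(skb_put_padto(skb, min_len))) {
                    (*sw_err_cnt)++;   /* same counter hns3_skb_linearize() uses */
                    return -ENOMEM;
            }

            return 0;
    }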
-
Guojia Liao authored
The device HNAE3_DEVICE_VERSION_V3 supports up to 1280 queues and qsets for one function, so the bit width of tc_offset, which holds the TQP index, needs to expand from 10 bits to 11 bits. The device also supports up to 512 queues on one TC, so tc_size, the base-2 exponent of the number of queues supported on a TC, needs to expand from 3 bits to 4 bits. Signed-off-by: Guojia Liao <liaoguojia@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
The actual size on the wire for a TSO skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len. This size can be seen by the user via the 'ethtool -S ethX' command, and 'Byte Queue Limits' also uses the send size stat to do the queue limiting, so add send_bytes to the desc_cb to record the actual send size for an skb. Since send_bytes is only used for the TX desc_cb and page_offset is only used for RX descriptors, reuse the same space for both of them. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
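In other words, the per-skb wire accounting becomes the following (a sketch of the arithmetic, with names following the commit text):

    /* Sketch: bytes actually sent on the wire for a (possibly TSO) skb.
     * For a non-TSO skb gso_segs is 1, so this degenerates to skb->len.
     */
    static unsigned int tx_wire_bytes(const struct sk_buff *skb,
                                      unsigned int hdr_len)
    {
            unsigned int gso_segs = skb_shinfo(skb)->gso_segs ?: 1;

            return (gso_segs - 1) * hdr_len + skb->len;
    }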
-
Yunsheng Lin authored
Currently the hns3 driver only handles xmit skbs with one level of fraglist skb. Add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the BD num and calling hns3_fill_skb_to_desc() recursively when filling the TX desc. When the skb has a fraglist depth of 24, the skb is simply dropped and stats.max_recursion_level is incremented to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle the different error stats and add the 'max_recursion_level' and 'hw_limitation' stats. Note that the maximum recursion level of 24 was chosen according to commit 48a1df65 ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"). As we were not able to find a testcase to verify the recursive fraglist case, no Fixes tag is provided. Reported-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
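Conceptually, the recursive walk is bounded like this (a much-simplified sketch; the real hns3_tx_bd_num() also accounts for per-frag BD splitting and hardware BD limits):

    #define TX_MAX_FRAGLIST_LEVEL   24      /* per the skb_to_sgvec precedent */

    /* Sketch: count descriptors for an skb whose frag_list members may
     * themselves carry frag_lists, bailing out past a fixed depth.
     */
    static int tx_bd_num(struct sk_buff *skb, unsigned int level)
    {
            struct sk_buff *frag_skb;
            int bd_num;

            if (level >= TX_MAX_FRAGLIST_LEVEL)
                    return -EMSGSIZE;       /* caller bumps max_recursion_level */

            bd_num = 1 + skb_shinfo(skb)->nr_frags;

            skb_walk_frags(skb, frag_skb) {
                    int ret = tx_bd_num(frag_skb, level + 1);

                    if (ret < 0)
                            return ret;
                    bd_num += ret;
            }

            return bd_num;
    }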
-
Yufeng Mo authored
Currently, the queue reset process has to be performed one queue at a time, which is inefficient. However, the queues of the same function are always reset together. Therefore, according to the UM, the command HCLGE_OPC_CFG_RST_TRIGGER can be used to reset all queues of the same function at once, optimizing the queue reset process. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Currently, if the user hasn't changed the channel number, the rss_size is limited to be no more than the vector number, in order to keep each vector mapped to only one queue. But the queue number of each TC can be different, and one vector can also be mapped to multiple queues, so remove this limitation. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guangbin Huang authored
The array size of bd_num_list is a fixed value, which poses a potential overflow risk when the array size of hclge_dfx_bd_offset_list is greater than that fixed value. So change bd_num_list to a pointer and allocate memory for it according to the array size of hclge_dfx_bd_offset_list. Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
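The allocation then follows the offset table's size rather than a hard-coded bound; roughly (a sketch, with the surrounding query function omitted and the helper name made up for illustration):

    /* Sketch: size the BD-number list from the offset table instead of a
     * hard-coded array length, so the two can never get out of sync.
     * The caller would pass ARRAY_SIZE(hclge_dfx_bd_offset_list).
     */
    static int dfx_alloc_bd_num_list(size_t entries, u32 **list_out)
    {
            u32 *bd_num_list = kcalloc(entries, sizeof(*bd_num_list), GFP_KERNEL);

            if (!bd_num_list)
                    return -ENOMEM;

            *list_out = bd_num_list;        /* caller kfree()s it when done */
            return 0;
    }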
-
Jian Shen authored
When the new rule state is TO_ADD or ACTIVE and there is already a rule with the same location in the fd_rule_list, the new rule is freed after modifying the old rule. This may cause a use-after-free issue when the rule is accessed again in hclge_add_fd_entry_common(). Fixes: fc4243b8 ("net: hns3: refactor flow director configuration") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Currently, when adding a flow director rule, the rule state is not set, which may cause the rule state in software to be inconsistent with the hardware. Fixes: fc4243b8 ("net: hns3: refactor flow director configuration") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guobin Huang authored
There is an error message within devm_ioremap_resource already, so remove the dev_err call to avoid a redundant error message. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Guobin Huang <huangguobin4@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
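The resulting idiom in the probe path looks like this generic sketch; devm_ioremap_resource() prints its own diagnostic when the mapping fails, so no extra logging is needed:

    /* Sketch: no dev_err() needed, devm_ioremap_resource() already logs
     * a descriptive message on failure.
     */
    static int example_probe_iomap(struct platform_device *pdev)
    {
            struct resource *res;
            void __iomem *base;

            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            base = devm_ioremap_resource(&pdev->dev, res);
            if (IS_ERR(base))
                    return PTR_ERR(base);

            return 0;
    }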
-
Guobin Huang authored
There is an error message within devm_ioremap_resource already, so remove the dev_err call to avoid a redundant error message. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Guobin Huang <huangguobin4@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guobin Huang authored
There is an error message within devm_ioremap_resource already, so remove the dev_err call to avoid a redundant error message. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Guobin Huang <huangguobin4@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guobin Huang authored
There is an error message within devm_ioremap_resource already, so remove the dev_err call to avoid a redundant error message. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Guobin Huang <huangguobin4@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
The pointers adapter and phydev are being initialized with values that are never read and are being updated later with new values. The initializations are redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andre Guedes authored
Add support for the XDP_REDIRECT action, which enables XDP programs to redirect packets arriving at the I225 NIC. It also implements the ndo_xdp_xmit ops, enabling the igc driver to transmit packets forwarded to it by XDP programs running on other interfaces. The patch tweaks the driver's page counting and recycling scheme as described in the following two commits and implemented by other Intel drivers in order to properly support the XDP_REDIRECT action: commit 8ce29c67 ("i40e: tweak page counting for XDP_REDIRECT") commit 75aab4e1 ("i40e: avoid premature Rx buffer reuse") This patch has been tested with the sample apps "xdp_redirect_cpu" and "xdp_redirect_map" located in samples/bpf/. Signed-off-by: Andre Guedes <andre.guedes@intel.com> Signed-off-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
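The redirect handling in the Rx path follows the standard core-XDP pattern; a condensed sketch (igc-specific bookkeeping and return-code names are simplified):

    /* Sketch: run the XDP program on an Rx buffer and hand XDP_REDIRECT
     * off to the core, which resolves the target via the attached map.
     * Returns 0 to build an skb, 1 if the buffer was consumed, -1 to drop.
     */
    static int rx_run_xdp(struct net_device *dev, struct bpf_prog *prog,
                          struct xdp_buff *xdp)
    {
            u32 act = bpf_prog_run_xdp(prog, xdp);

            switch (act) {
            case XDP_PASS:
                    return 0;
            case XDP_REDIRECT:
                    if (xdp_do_redirect(dev, xdp, prog))
                            return -1;      /* redirect failed: drop */
                    return 1;               /* buffer consumed; flush later */
            default:
                    bpf_warn_invalid_xdp_action(act);
                    fallthrough;
            case XDP_ABORTED:
                    trace_xdp_exception(dev, prog, act);
                    fallthrough;
            case XDP_DROP:
                    return -1;              /* recycle the buffer */
            }
    }

    /* After the NAPI loop, pending redirects are flushed once with
     * xdp_do_flush().
     */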
-
Andre Guedes authored
Add support for the XDP_TX action, which enables XDP programs to transmit back received frames. The I225 controller has only 4 Tx hardware queues. Since XDP programs may not even issue an XDP_TX action, this patch doesn't reserve dedicated queues just for XDP like other Intel drivers do. Instead, the queues are shared between the network stack and XDP. The netdev queue lock is used to ensure mutual exclusion. Since frames can now be transmitted via XDP_TX, the igc_tx_buffer structure is modified so we are able to save a reference to the xdp frame for later clean-up once the packet is transmitted. The tx_buffer is mapped to either an skb or an xdpf, so we use a union to save the skb or xdpf pointer and a bit in tx_flags to indicate which field to use. This patch has been tested with the sample app "xdp2" located in the samples/bpf/ dir. Signed-off-by: Andre Guedes <andre.guedes@intel.com> Signed-off-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
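The buffer bookkeeping described above boils down to something like this simplified sketch (the flag bit and struct names are illustrative; the real igc_tx_buffer carries DMA and length fields as well):

    #define TX_FLAGS_XDP    BIT(0)  /* illustrative flag bit */

    /* Sketch: one Tx bookkeeping slot that can hold either a socket buffer
     * or an XDP frame, with a flag recording which member is valid.
     */
    struct tx_buffer {
            union {
                    struct sk_buff *skb;
                    struct xdp_frame *xdpf;
            };
            u32 tx_flags;
    };

    static void tx_buffer_clean(struct tx_buffer *buf)
    {
            if (buf->tx_flags & TX_FLAGS_XDP)
                    xdp_return_frame(buf->xdpf);    /* XDP_TX / redirected frame */
            else
                    napi_consume_skb(buf->skb, 64); /* normal stack transmit */
    }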
-
Andre Guedes authored
Add the initial XDP support to the igc driver. For now, only the XDP_PASS, XDP_DROP and XDP_ABORTED actions are supported. Upcoming patches will add support for the remaining XDP actions. XDP configuration helpers are defined in a new file, igc_xdp.c. These helpers are utilized in igc_main.c to implement the ndo_bpf callback. XDP-related code that belongs to the driver's hot path lands in igc_main.c. By default, the driver uses Rx buffers with a 2 KB size. When XDP is enabled, it uses larger buffers so we have enough space to accommodate the headroom and tailroom required by the XDP infrastructure. Also, the driver doesn't support XDP functionality with frames that span multiple buffers, so jumbo frames are not allowed for now. The approach implemented follows the approach implemented in other Intel drivers as much as possible, for the sake of consistency across the drivers. A quick comment regarding igc_build_skb(): this patch doesn't touch it because the function is never called. It seems its support is incomplete/in progress. The function was added by commit 0507ef8a ("igc: Add transmit and receive fastpath and interrupt handlers"), but ring_uses_build_skb() always returns false since IGC_RING_FLAG_RX_BUILD_SKB_ENABLED isn't set anywhere in the driver code. This patch has been tested with the sample app "xdp1" located in the samples/bpf/ dir. Signed-off-by: Andre Guedes <andre.guedes@intel.com> Signed-off-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
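When XDP is enabled, the Rx buffer is handed to the program with the standard headroom/tailroom layout; schematically (a sketch using the generic xdp_buff fields, not the exact igc code):

    /* Sketch: wrap one hardware Rx buffer in an xdp_buff.  The headroom in
     * front of the packet lets programs grow headers; tailroom in frame_sz
     * leaves space for skb_shared_info if the buffer later becomes an skb.
     */
    static void rx_prepare_xdp(struct xdp_buff *xdp, struct xdp_rxq_info *rxq,
                               void *hard_start, unsigned int size,
                               unsigned int frame_sz)
    {
            xdp->rxq = rxq;
            xdp->frame_sz = frame_sz;       /* whole buffer incl. head/tailroom */
            xdp->data_hard_start = hard_start;
            xdp->data = hard_start + XDP_PACKET_HEADROOM;
            xdp->data_meta = xdp->data;
            xdp->data_end = xdp->data + size;
    }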
-
Andre Guedes authored
While commit 13b5b7fd ("igc: Add support for Tx/Rx rings") introduced code to handle larger packet buffers, it missed the set/clear helpers which enable/disable that feature. Introduce the missing helpers, which will be used in the next patch. Signed-off-by: Andre Guedes <andre.guedes@intel.com> Signed-off-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Andre Guedes authored
Refactor the Rx timestamp handling in preparation to land XDP support. Rx timestamps are put in the Rx buffer by hardware, before the packet data. When creating the xdp buffer, we will need to check the Rx descriptor to determine if the buffer contains timestamp information and consider the offset when setting xdp.data. The Rx descriptor check is already done in igc_construct_skb(). To avoid code duplication, this patch moves the timestamp handling to igc_clean_rx_irq() so both skb and xdp paths can reuse it. Signed-off-by: Andre Guedes <andre.guedes@intel.com> Signed-off-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
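Schematically, the shared handling looks like the sketch below (the helper is illustrative; in the driver, the presence of a packet-prepended timestamp is indicated by a status bit in the Rx descriptor and its length by a fixed header size):

    /* Sketch: when the descriptor indicates a packet-prepended timestamp,
     * consume it before either the xdp_buff or the skb sees the payload,
     * so both paths start at the real packet data.
     */
    static void *rx_strip_pktstamp(void *pktbuf, unsigned int *size,
                                   bool has_timestamp, unsigned int ts_hdr_len)
    {
            if (!has_timestamp)
                    return pktbuf;

            /* The hardware timestamp would be parsed out of the first
             * ts_hdr_len bytes here (igc_ptp.c owns that parsing).
             */
            *size -= ts_hdr_len;
            return pktbuf + ts_hdr_len;
    }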
-