- 16 Jan, 2023 15 commits
-
-
Bobby Eshleman authored
This commit changes virtio/vsock to use sk_buff instead of virtio_vsock_pkt. Beyond better conforming to other net code, using sk_buff allows vsock to use sk_buff-dependent features in the future (such as sockmap) and improves throughput. This patch introduces the following performance changes:

Tool: Uperf
Env: Phys Host + L1 Guest
Payload: 64k
Threads: 16
Test Runs: 10
Type: SOCK_STREAM
Before: commit b7bfaa76 ("Linux 6.2-rc3")

Before
------
g2h: 16.77Gb/s
h2g: 10.56Gb/s

After
-----
g2h: 21.04Gb/s
h2g: 10.76Gb/s

Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
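As a rough sketch of the direction (not the patch's actual helpers; the header-in-linear-area layout is an assumption), per-packet state allocated as an sk_buff looks like this:

#include <linux/skbuff.h>
#include <linux/virtio_vsock.h>

/* Sketch: one sk_buff per vsock packet, virtio_vsock_hdr in the linear area */
static struct sk_buff *vsock_skb_sketch(size_t payload_len, gfp_t gfp)
{
	struct sk_buff *skb;

	skb = alloc_skb(sizeof(struct virtio_vsock_hdr) + payload_len, gfp);
	if (!skb)
		return NULL;

	/* account for the header; payload is appended later via skb_put_data() */
	skb_put(skb, sizeof(struct virtio_vsock_hdr));
	return skb;
}

Because the packet is now a plain sk_buff, generic infrastructure such as skb queues and (eventually) sockmap can operate on it directly.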
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queueDavid S. Miller authored
Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2023-01-13 (ixgbe)

This series contains updates to the ixgbe driver only.

Jesse resolves an RCU pointer warning by no longer restoring the old pointer. Sebastian adds waiting for the link info to update on devices utilizing the crosstalk fix, to avoid a false link state.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
After switching to TCP_ESTABLISHED or TCP_LISTEN sk_state, alive SOCK_STREAM and SOCK_SEQPACKET sockets can't change it anymore (since commit 3ff8bff7 ("unix: Fix race in SOCK_SEQPACKET's unix_dgram_sendmsg()")). Thus, we do not need to take the lock here. Signed-off-by: Kirill Tkhai <tkhai@ya.ru> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
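An illustrative before/after fragment (not the exact call site) of what dropping the lock looks like:

/* before: state read under the per-socket lock */
unix_state_lock(sk);
if (sk->sk_state != TCP_ESTABLISHED) {
	unix_state_unlock(sk);
	goto out_err;
}
unix_state_unlock(sk);

/* after: sk_state is stable for alive sockets, so a plain read suffices */
if (sk->sk_state != TCP_ESTABLISHED)
	goto out_err;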
-
David S. Miller authored
Heng Qi says:

====================
virtio-net: support multi buffer xdp

Changes since PATCH v4:
- Make netdev_warn() in [PATCH 2/10] independent from [PATCH 3/10].

Changes since PATCH v3:
- Separate fix patch [2/10] for MTU calculation of single buffer xdp. Note that this patch needs to be backported to the stable branch.

Changes since PATCH v2:
- Even if single buffer xdp has a hole mechanism, there will be no problem (limiting mtu and turning off GUEST GSO), so there is no need to backport "[PATCH 1/9]";
- Modify calculation of MTU for single buffer xdp in virtnet_xdp_set();
- Make truesize in mergeable mode return to its literal meaning;
- Add some comments for legibility;

Changes since RFC:
- Using headroom instead of vi->xdp_enabled to avoid re-reading in add_recvbuf_mergeable();
- Disable GRO_HW and keep linearization for single buffer xdp;
- Renamed to virtnet_build_xdp_buff_mrg();
- pr_debug() to netdev_dbg();
- Adjusted the order of the patch series.

Currently, virtio net only supports xdp for single-buffer packets or linearized multi-buffer packets. This patchset supports xdp for multi-buffer packets, so a larger MTU can be used when the xdp program sets xdp.frags. This does not affect single buffer handling.

In order to build multi-buffer xdp neatly, we integrated the code into virtnet_build_xdp_buff_mrg() for xdp. The first buffer is used for the prepared xdp buff, and the rest of the buffers are added to its skb_shared_info structure. This structure can also be conveniently converted during XDP_PASS to get the corresponding skb.

Since virtio net uses comp pages, and bpf_xdp_frags_increase_tail() is based on the assumption of the page pool, (rxq->frag_size - skb_frag_size(frag) - skb_frag_off(frag)) is negative in most cases. So we didn't set xdp_rxq->frag_size in virtnet_open(), to disable the tail increase.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heng Qi authored
The driver can pass the skb to the stack via build_skb_from_xdp_buff(). It forwards multi-buffer packets through the send queue on XDP_TX and XDP_REDIRECT, and drops the references to the extra pages on XDP_DROP. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
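A hedged sketch of the usual verdict handling for a multi-buffer xdp_buff (function names and the queueing step are illustrative, not the driver's exact code):

#include <linux/filter.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

static int xdp_verdict_sketch(struct net_device *dev, struct bpf_prog *prog,
			      struct xdp_buff *xdp)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
		/* build an skb from the linear part + frags and pass it up */
		return 0;
	case XDP_TX: {
		/* xdp_convert_buff_to_frame() understands multi-buffer buffs */
		struct xdp_frame *frame = xdp_convert_buff_to_frame(xdp);

		if (!frame)
			return -ENOMEM;
		/* ...enqueue frame on the send queue... */
		return 0;
	}
	case XDP_REDIRECT:
		return xdp_do_redirect(dev, xdp, prog);
	default:
		bpf_warn_invalid_xdp_action(dev, prog, act);
		fallthrough;
	case XDP_DROP:
		/* caller releases the pages backing the linear part and frags */
		return -EINVAL;
	}
}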
-
Heng Qi authored
To keep the construction of the xdp_buff clear, we remove the xdp processing interleaved with page_to_skb(). Now the xdp logic and the building of an skb from xdp are separate and independent. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heng Qi authored
This converts the xdp_buff directly to an skb, covering both multi-buffer and single-buffer xdp. The construction of an skb from xdp is thereby isolated from page_to_skb(). Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
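A minimal sketch of the conversion, assuming the xdp_buff carries its frag metadata in the skb_shared_info tailroom (this mirrors the common pattern, not the exact driver function; the truesize accounting here is simplified):

#include <linux/skbuff.h>
#include <net/xdp.h>

static struct sk_buff *skb_from_xdp_sketch(struct xdp_buff *xdp,
					   unsigned int frag_size)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	unsigned int headroom = xdp->data - xdp->data_hard_start;
	struct sk_buff *skb;

	skb = build_skb(xdp->data_hard_start, frag_size);
	if (!skb)
		return NULL;

	skb_reserve(skb, headroom);
	skb_put(skb, xdp->data_end - xdp->data);

	/* graft the xdp frags straight into the skb's shared info */
	if (xdp_buff_has_frags(xdp))
		xdp_update_skb_shared_info(skb, sinfo->nr_frags,
					   sinfo->xdp_frags_size,
					   frag_size * sinfo->nr_frags,
					   xdp_buff_is_frag_pfmemalloc(xdp));
	return skb;
}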
-
Heng Qi authored
This serves as the basis for XDP_TX and XDP_REDIRECT to send a multi-buffer xdp_frame. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heng Qi authored
Build multi-buffer xdp using virtnet_build_xdp_buff_mrg(). For the buffer prefilled before xdp is set, we will probably use vq reset in the future. At the same time, virtio net currently uses comp pages, and bpf_xdp_frags_increase_tail() needs to calculate the tailroom of the last frag, which involves the offset within the corresponding page and can yield a negative value, so we disable the tail increase by not setting xdp_rxq->frag_size. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
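A hedged sketch of the assembly step (parameter names and the single-frag body are illustrative assumptions; the real function loops over num_buf buffers):

#include <linux/skbuff.h>
#include <net/xdp.h>

static void build_mrg_xdp_sketch(struct xdp_rxq_info *rxq,
				 struct xdp_buff *xdp, void *first_buf,
				 unsigned int headroom, unsigned int len,
				 struct page *page, unsigned int off,
				 unsigned int frag_len)
{
	struct skb_shared_info *sinfo;
	skb_frag_t *frag;

	/* first buffer: headroom + linear data, shinfo kept in the tailroom */
	xdp_init_buff(xdp, PAGE_SIZE, rxq);
	xdp_prepare_buff(xdp, first_buf, headroom, len, true);

	sinfo = xdp_get_shared_info_from_buff(xdp);
	sinfo->nr_frags = 0;
	sinfo->xdp_frags_size = 0;

	/* each subsequent mergeable buffer becomes one frag */
	frag = &sinfo->frags[sinfo->nr_frags++];
	__skb_frag_set_page(frag, page);
	skb_frag_off_set(frag, off);
	skb_frag_size_set(frag, frag_len);
	sinfo->xdp_frags_size += frag_len;

	xdp_buff_set_frags_flag(xdp);
}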
-
Heng Qi authored
Support xdp for multi-buffer packets in mergeable mode. The first buffer is used as the linear part of the xdp_buff, and the rest of the buffers are attached as non-linear fragments to the struct skb_shared_info in the xdp_buff's tailroom. 'truesize' returns to its literal meaning: when xdp is set, it includes the length of the headroom and tailroom. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heng Qi authored
Update the relevant recorded values for the xdp_frame as the basis for multi-buffer xdp transmission. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heng Qi authored
When the xdp program sets xdp.frags, it means it can process multi-buffer packets over a larger MTU, so we continue to support xdp in that case. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heng Qi authored
When single-buffer xdp is loaded, the size of the buffer filled each time is 'sz = (PAGE_SIZE - headroom - tailroom)', which is the maximum packet length that the driver allows the device to pass in. If this were not enforced, a packet with a length greater than sz could come in, num_buf would then be greater than or equal to 2, xdp_linearize_page() would be performed, and the packet would be dropped because the total length is greater than PAGE_SIZE. So the maximum MTU for single-buffer xdp is 'max_sz = sz - ETH_HLEN'. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
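A worked version of that bound, assuming a typical 256-byte XDP headroom and a tailroom holding the skb_shared_info (not the driver's exact expression):

#include <linux/if_ether.h>	/* ETH_HLEN */
#include <linux/skbuff.h>	/* SKB_DATA_ALIGN, skb_shared_info */

static unsigned long xdp_single_buf_max_mtu_sketch(void)
{
	unsigned long headroom = 256;	/* assumed XDP headroom */
	unsigned long tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	unsigned long sz = PAGE_SIZE - headroom - tailroom;

	/* on 4K pages: 4096 - 256 - ~320 - 14, roughly 3500 bytes */
	return sz - ETH_HLEN;
}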
-
Heng Qi authored
XDP core assumes that the frame_size of xdp_buff and the length of the frag are PAGE_SIZE. The hole may cause the processing of xdp to fail, so we disable the hole mechanism when xdp is set. Signed-off-by: Heng Qi <hengqi@linux.alibaba.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Srujana Challa authored
Updates the CPT inbound inline IPsec configuration mailbox to take the CPT credit, opcode, credit_th and bpid from the VF. This patch also adds a mailbox to read the inbound IPsec configuration. Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Jan, 2023 23 commits
-
-
Jakub Kicinski authored
David Thompson says:

====================
mlxbf_gige: add BlueField-3 support

This patch series adds driver logic to the "mlxbf_gige" Ethernet driver in order to support the third generation BlueField SoC (BF3). The existing "mlxbf_gige" driver is extended with BF3-specific logic and run-time decisions are made by the driver depending on the SoC generation (BF2 vs. BF3).

The BF3 SoC is similar to BF2 SoC with regards to transmit and receive packet processing:
* Driver rings usage; consumer & producer indices
* Single queue for receive and transmit
* DMA operation

The differences between BF3 and BF2 SoC are:
* In addition to supporting 1Gbps interface speed, the BF3 SoC adds support for 10Mbps and 100Mbps interface speeds
* BF3 requires SerDes config logic to support its SGMII interface
* BF3 adds support for "ethtool -s" for interface speed config
* BF3 utilizes different MDIO logic for accessing the board-level PHY device

Testing:
- Successful build of kernel for ARM64, ARM32, X86_64
- Tested ARM64 build on FastModels, Palladium, SoC
====================

Link: https://lore.kernel.org/r/20230112202609.21331-1-davthompson@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Thompson authored
This patch fixes the white space issue raised by checkpatch:

CHECK: Alignment should match open parenthesis
+static int mlxbf_gige_eth_ioctl(struct net_device *netdev,
+               struct ifreq *ifr, int cmd)

Signed-off-by: David Thompson <davthompson@nvidia.com>
Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Thompson authored
This patch extends the "ethtool_ops" data structure to include the "set_link_ksettings" callback. This change enables configuration of the various interface speeds that the BlueField-3 supports (10Mbps, 100Mbps, and 1Gbps). Signed-off-by: David Thompson <davthompson@nvidia.com> Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
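A hedged sketch of the wiring (op struct and handler names are illustrative, not the driver's; the callback simply delegates to phylib, which validates the 10/100/1000 request against the attached PHY):

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

static int gige_set_link_ksettings_sketch(struct net_device *netdev,
					  const struct ethtool_link_ksettings *cmd)
{
	/* let the attached PHY validate and apply the requested settings */
	return phy_ethtool_ksettings_set(netdev->phydev, cmd);
}

static const struct ethtool_ops gige_ethtool_ops_sketch = {
	.get_link_ksettings = phy_ethtool_get_link_ksettings,
	.set_link_ksettings = gige_set_link_ksettings_sketch,
};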
-
David Thompson authored
The BlueField-3 OOB interface supports 10Mbps, 100Mbps, and 1Gbps speeds. The external PHY is responsible for autonegotiating the speed with the link partner. Once the autonegotiation is done, the BlueField PLU needs to be configured accordingly. This patch does two things:
1) Initialize the advertised control flow/duplex/speed in the probe based on the BlueField SoC generation (2 or 3)
2) Adjust the PLU speed config in the PHY interrupt handler
Signed-off-by: David Thompson <davthompson@nvidia.com>
Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Thompson authored
This patch adds initial MDIO support for the BlueField-3 SoC. Separate header files for the BlueField-2 and the BlueField-3 SoCs have been created. These header files hold the SoC-specific MDIO macros since the register offsets and bit fields have changed. Also, in BlueField-3 there is a separate register for writing and reading the MDIO data. Finally, instead of having "if" statements everywhere to differentiate between SoC-specific logic, a mlxbf_gige_mdio_gw_t struct was created for this purpose. Signed-off-by: David Thompson <davthompson@nvidia.com> Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Russell King (Oracle) authored
Use the phylink_get_link_timer_ns() helper to get the period for the link timer. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Link: https://lore.kernel.org/r/E1pFyhW-0067jq-Fh@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Piergiorgio Beruto authored
Revert a wrong fix that was done during the review process. The intention was to substitute "if (ret < 0)" with "if (ret)"; unfortunately, the intended fix never made it into the code. Besides, after additional review, it was decided that "if (ret < 0)" was actually the right thing to do.
Fixes: 8580e16c ("net/ethtool: add netlink interface for the PLCA RS")
Signed-off-by: Piergiorgio Beruto <piergiorgio.beruto@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/f2277af8951a51cfee2fb905af8d7a812b7beaf4.1673616357.git.piergiorgio.beruto@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Michael Walle says:

====================
net: mdio: Continue separating C22 and C45

I've picked this older series from Andrew up and rebased it onto the latest net-next. This is the second patch set in the series which separates the C22 and C45 MDIO bus transactions at the API level to the MDIO bus drivers.
====================

Link: https://lore.kernel.org/r/20230112-net-next-c45-seperation-part-2-v1-0-5eeaae931526@walle.cc
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
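The shape of the separated API, as a sketch with placeholder foo_* handlers (real callbacks return the value read or a negative errno; the stubs here return 0):

#include <linux/phy.h>

static int foo_mdio_read_c22(struct mii_bus *bus, int addr, int regnum)
{
	/* ...issue a Clause 22 read, return the 16-bit value or -errno... */
	return 0;
}

static int foo_mdio_write_c22(struct mii_bus *bus, int addr, int regnum, u16 val)
{
	return 0;
}

static int foo_mdio_read_c45(struct mii_bus *bus, int addr, int devad, int regnum)
{
	/* ...issue a Clause 45 read against MMD 'devad'... */
	return 0;
}

static int foo_mdio_write_c45(struct mii_bus *bus, int addr, int devad,
			      int regnum, u16 val)
{
	return 0;
}

static void foo_register_ops(struct mii_bus *bus)
{
	/* C22 and C45 accessors are now distinct bus callbacks */
	bus->read = foo_mdio_read_c22;
	bus->write = foo_mdio_write_c22;
	bus->read_c45 = foo_mdio_read_c45;
	bus->write_c45 = foo_mdio_write_c45;
}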
-
Andrew Lunn authored
The enetc MDIO bus driver can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new API calls where appropriate. This driver is shared with the Felix DSA switch, so update that at the same time. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The stmmac MDIO bus driver in variant gmac4 can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new API calls where appropriate. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The stmicro stmmac xgmac2 MDIO bus driver can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new API calls where appropriate. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The microchip lan743x MDIO bus driver can perform both C22 and C45 transfers in some variants. Create separate functions for each and register the C45 versions using the new API calls where appropriate. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The mediatek bus driver can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new API calls. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The ipq4019 driver can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new driver API calls. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The aspeed MDIO bus driver can perform both C22 and C45 transfers. Modify the existing C45 functions to take the devad as a parameter, and remove the wrappers so there are individual C22 and C45 functions. Add the C45 functions to the new API calls. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The MDIO mux broadcom iproc can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new API calls. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The MDIO over I2C bus driver can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions using the new API calls. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrew Lunn authored
The cavium IP can perform both C22 and C45 transfers. Create separate functions for each and register the C45 versions in both the octeon and thunder bus driver. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Bin Chen authored
Add basic DCB IEEE support. This includes support for ETS, max-rate, and DSCP to user priority mapping. DCB may be configured using iproute2's dcb command. Example usage:

dcb ets set dev $dev tc-tsa 0:ets 1:ets 2:ets 3:ets 4:ets 5:ets \
    6:ets 7:ets tc-bw 0:0 1:80 2:0 3:0 4:0 5:0 6:20 7:0
dcb maxrate set dev $dev tc-maxrate 1:1000bit

And DCB configuration can be shown using:

dcb ets show dev $dev
dcb maxrate show dev $dev

Signed-off-by: Bin Chen <bin.chen@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230112121102.469739-1-simon.horman@corigine.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
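A hedged sketch of how such hooks are exposed (callback names and bodies are illustrative, not the nfp driver's actual code); registering dcbnl_ops on the netdev is what lets the 'dcb ets'/'dcb maxrate' commands above reach the driver:

#include <linux/netdevice.h>
#include <net/dcbnl.h>

static int foo_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
{
	/* ...program tc-tsa from ets->tc_tsa and tc-bw from ets->tc_tx_bw... */
	return 0;
}

static int foo_ieee_setmaxrate(struct net_device *dev,
			       struct ieee_maxrate *maxrate)
{
	/* ...program per-TC rate limits from maxrate->tc_maxrate[]... */
	return 0;
}

static const struct dcbnl_rtnl_ops foo_dcbnl_ops = {
	.ieee_setets = foo_ieee_setets,
	.ieee_setmaxrate = foo_ieee_setmaxrate,
};

/* at probe time: netdev->dcbnl_ops = &foo_dcbnl_ops; */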
-
Lorenzo Bianconi authored
Similar to MTK Wireless Ethernet Dispatcher (WED) MCU rx queue, we do not need to protect WED MCU tx queue with a spin lock since the tx queue is accessed in the two following routines:
- mtk_wed_wo_queue_tx_skb(): it is run at initialization and during mt7915 normal operation. Moreover MCU messages are serialized through MCU mutex.
- mtk_wed_wo_queue_tx_clean(): it runs just at mt7915 driver module unload when no more messages are sent to the MCU.
Remove tx queue spinlock.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/7bd0337b2a13ab1a63673b7c03fd35206b3b284e.1673515140.git.lorenzo@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jon Maxwell authored
In ip6_dst_gc() replace:

    if (entries > gc_thresh)

with:

    if (entries > ops->gc_thresh)

Sending IPv6 packets in a loop via a raw socket triggers an issue where a route is cloned by ip6_rt_cache_alloc() for each packet sent. This quickly consumes the IPv6 max_size threshold, which defaults to 4096, resulting in these warnings:

[1] 99.187805] dst_alloc: 7728 callbacks suppressed
[2] Route cache is full: consider increasing sysctl net.ipv6.route.max_size.
.
.
[300] Route cache is full: consider increasing sysctl net.ipv6.route.max_size.

When this happens the packet is dropped and sendto() gets a "network is unreachable" error:

remaining pkt 200557 errno 101
remaining pkt 196462 errno 101
.
.
remaining pkt 126821 errno 101

Implement David Ahern's suggestion to remove the max_size check, seeing that IPv6 has a GC to manage memory usage. IPv4 already does not check max_size.

Here are some memory comparisons for IPv4 vs IPv6 with the patch. Test by running 5 instances of a program that sends UDP packets to a raw socket 5000000 times. Compare IPv4 and IPv6 performance with a similar program.

IPv4:

Before test:
MemFree: 29427108 kB
Slab: 237612 kB
ip6_dst_cache  1912  2528  256  32  2 : tunables 0 0 0
xfrm_dst_cache    0     0  320  25  2 : tunables 0 0 0
ip_dst_cache   2881  3990  192  42  2 : tunables 0 0 0

During test:
MemFree: 29417608 kB
Slab: 247712 kB
ip6_dst_cache  1912  2528  256  32  2 : tunables 0 0 0
xfrm_dst_cache    0     0  320  25  2 : tunables 0 0 0
ip_dst_cache  44394 44394  192  42  2 : tunables 0 0 0

After test:
MemFree: 29422308 kB
Slab: 238104 kB
ip6_dst_cache  1912  2528  256  32  2 : tunables 0 0 0
xfrm_dst_cache    0     0  320  25  2 : tunables 0 0 0
ip_dst_cache   3048  4116  192  42  2 : tunables 0 0 0

IPv6 with patch (errno 101 errors are not observed anymore with the patch):

Before test:
MemFree: 29422308 kB
Slab: 238104 kB
ip6_dst_cache  1912  2528  256  32  2 : tunables 0 0 0
xfrm_dst_cache    0     0  320  25  2 : tunables 0 0 0
ip_dst_cache   3048  4116  192  42  2 : tunables 0 0 0

During test:
MemFree: 29431516 kB
Slab: 240940 kB
ip6_dst_cache 11980 12064  256  32  2 : tunables 0 0 0
xfrm_dst_cache    0     0  320  25  2 : tunables 0 0 0
ip_dst_cache   3048  4116  192  42  2 : tunables 0 0 0

After test:
MemFree: 29441816 kB
Slab: 238132 kB
ip6_dst_cache  1902  2432  256  32  2 : tunables 0 0 0
xfrm_dst_cache    0     0  320  25  2 : tunables 0 0 0
ip_dst_cache   3048  4116  192  42  2 : tunables 0 0 0

Tested-by: Andrea Mayer <andrea.mayer@uniroma2.it>
Signed-off-by: Jon Maxwell <jmaxwell37@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20230112012532.311021-1-jmaxwell37@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Keith Busch authored
The details of the iov_iter types are appropriately abstracted, so there's no need to check for specific type fields. Just let the abstractions handle it. This is preparing for io_uring/net's io_send to utilize the more efficient ITER_UBUF. Signed-off-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk>
Link: https://lore.kernel.org/r/20230111184245.3784393-1-kbusch@meta.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Anand Moon authored
Fix the compatible string for the RV1126 gmac, and constrain it to be compatible with Synopsys dwmac 4.20a. This fixes the warning below:

$ make CHECK_DTBS=y rv1126-edgeble-neu2-io.dtb
arch/arm/boot/dts/rv1126-edgeble-neu2-io.dtb: ethernet@ffc40000: compatible: 'oneOf' conditional failed, one must be fixed:
        ['rockchip,rv1126-gmac', 'snps,dwmac-4.20a'] is too long
        'rockchip,rv1126-gmac' is not one of ['rockchip,rk3568-gmac', 'rockchip,rk3588-gmac']

Fixes: b36fe2f4 ("dt-bindings: net: rockchip-dwmac: add rv1126 compatible")
Reviewed-by: Jagan Teki <jagan@edgeble.ai>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Anand Moon <anand@edgeble.ai>
Link: https://lore.kernel.org/r/20230111172437.5295-1-anand@edgeble.ai
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 13 Jan, 2023 2 commits
-
-
Sebastian Czapla authored
Add a delayed link state recheck to filter out false link-up indications caused by a transceiver with no fiber cable attached. Signed-off-by: Sebastian Czapla <sebastianx.czapla@intel.com> Tested-by: Sunitha Mekala <sunithax.d.mekala@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Jesse Brandeburg authored
The ixgbe driver uses an older style failure mode when initializing the XDP program and the queues. It causes some warnings when running C=2 checking builds (and it's the last one in the ethernet/intel tree).

$ make W=1 C=2 M=`pwd`/drivers/net/ethernet/intel modules
.../ixgbe_main.c:10301:25: error: incompatible types in comparison expression (different address spaces):
.../ixgbe_main.c:10301:25:    struct bpf_prog [noderef] __rcu *
.../ixgbe_main.c:10301:25:    struct bpf_prog *

Fix the problem by removing the line that tried to re-xchg "the old_prog pointer" if there was an error, to make this driver act like the other drivers, which return the error code without "pointer restoration." Also, update the "copy the pointer" logic to use WRITE_ONCE as many/all the other drivers do, which required making a change in two separate functions that write the xdp_prog variable in the ring. The code here was modeled after the code in i40e/i40e_xdp_setup().

NOTE: Compile-tested only.

CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
CC: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> (A Contingent Worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
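A hedged sketch of the resulting publish pattern (struct and function names are illustrative, not ixgbe's exact code):

#include <linux/atomic.h>
#include <linux/bpf.h>

struct ring_sketch {
	struct bpf_prog *xdp_prog;	/* written with WRITE_ONCE, read with READ_ONCE */
};

static int xdp_setup_sketch(struct ring_sketch *rings, int nr_rings,
			    struct bpf_prog **adapter_prog,
			    struct bpf_prog *new_prog)
{
	struct bpf_prog *old_prog;
	int i;

	/* swap the adapter-level pointer; no "restore on error" dance */
	old_prog = xchg(adapter_prog, new_prog);

	/* publish the new program to each ring */
	for (i = 0; i < nr_rings; i++)
		WRITE_ONCE(rings[i].xdp_prog, new_prog);

	if (old_prog)
		bpf_prog_put(old_prog);
	return 0;
}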
-