- 19 Apr, 2023 21 commits
-
-
Jaime Breva authored
Our use-case needs two AT ports available: one for running a ppp daemon, and another one for management. This patch enables a second AT port on DATA1. Signed-off-by: Jaime Breva <jbreva@nayarsystems.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Tariq Toukan says:

====================
net/mlx5e: Extend XDP multi-buffer capabilities

This series extends the XDP multi-buffer support in the mlx5e driver.

Patchset breakdown:
- Infrastructural changes and preparations.
- Add XDP multi-buffer support for XDP redirect-in.
- Use TX MPWQE (multi-packet WQE) HW feature for non-linear single-segmented XDP frames.
- Add XDP multi-buffer support for striding RQ.

In Striding RQ, we overcome the lack of headroom and tailroom between the RQ strides by allocating a side page per packet and using it for the xdp_buff descriptor. We structure the xdp_buff so that it contains nothing in the linear part, and the whole packet resides in the fragments.

Performance highlight:
Packet rate test, 64 bytes, 32 channels, MTU 9000 bytes.
CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz.
NIC: ConnectX-6 Dx, at 100 Gbps.

+----------+-------------+-------------+---------+
| Test     | Legacy RQ   | Striding RQ | Speedup |
+----------+-------------+-------------+---------+
| XDP_DROP | 101,615,544 | 117,191,020 | +15%    |
+----------+-------------+-------------+---------+
| XDP_TX   |  95,608,169 | 117,043,422 | +22%    |
+----------+-------------+-------------+---------+

Series generated against net commit:
e61caf04 Merge branch 'page_pool-allow-caching-from-safely-localized-napi'

I'm submitting this directly as Saeed is traveling.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Here we add support for multi-buffer XDP handling in Striding RQ, which is our default out-of-the-box RQ type. Before this series, loading such an XDP program failed unless you switched to the legacy RQ (by unsetting the rx_striding_rq priv-flag).

To overcome the lack of headroom and tailroom between the strides, we allocate a side page to be used for the descriptor (xdp_buff / skb) and the linear part. When an XDP program is attached, we structure the xdp_buff so that it contains no data in the linear part, and the whole packet resides in the fragments.

In case of XDP_PASS, where an SKB still needs to be created, we copy up to 256 bytes to its linear part, to match the current behavior and to satisfy functions that assume finding the packet headers in the SKB linear part (like eth_type_trans).

Performance testing:
Packet rate test, 64 bytes, 32 channels, MTU 9000 bytes.
CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz.
NIC: ConnectX-6 Dx, at 100 Gbps.

+----------+-------------+-------------+---------+
| Test     | Legacy RQ   | Striding RQ | Speedup |
+----------+-------------+-------------+---------+
| XDP_DROP | 101,615,544 | 117,191,020 | +15%    |
+----------+-------------+-------------+---------+
| XDP_TX   |  95,608,169 | 117,043,422 | +22%    |
+----------+-------------+-------------+---------+

Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
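As a rough, hypothetical sketch of that layout using the generic XDP helpers (this is not the mlx5e code; the side/data page split and the use of XDP_PACKET_HEADROOM are illustrative assumptions), the linear part stays empty and the packet is attached as a fragment:

#include <linux/bpf.h>
#include <linux/mm.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

static void sketch_init_frag_only_xdp_buff(struct xdp_buff *xdp,
                                           struct xdp_rxq_info *rxq,
                                           struct page *side_page,
                                           struct page *data_page,
                                           u32 pkt_len)
{
        struct skb_shared_info *sinfo;
        skb_frag_t *frag;

        /* data_len == 0, so xdp->data == xdp->data_end: no linear payload.
         * The side page only provides headroom/tailroom and the descriptor.
         */
        xdp_init_buff(xdp, PAGE_SIZE, rxq);
        xdp_prepare_buff(xdp, page_address(side_page), XDP_PACKET_HEADROOM,
                         0, false);

        /* The whole packet goes into the fragments. */
        sinfo = xdp_get_shared_info_from_buff(xdp);
        sinfo->nr_frags = 0;
        sinfo->xdp_frags_size = 0;

        frag = &sinfo->frags[sinfo->nr_frags++];
        __skb_frag_set_page(frag, data_page);
        skb_frag_off_set(frag, 0);
        skb_frag_size_set(frag, pkt_len);
        sinfo->xdp_frags_size += pkt_len;

        xdp_buff_set_frags_flag(xdp);
}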
-
Tariq Toukan authored
In preparation for supporting XDP multi-buffer in striding RQ, use the xdp_buff struct to describe the packet. Make its skb_shared_info coincide with that of the allocated SKB, then add the fragments using the xdp_buff API. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Make the function more generic. Let it get an additional frame_sz parameter instead of deriving it from the RQ struct. No functional change here, just a preparation for a downstream patch. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Introduce mlx5e_add_skb_shared_info_frag(), a function dedicated to adding a fragment to a struct skb_shared_info object. Use it in the Legacy RQ flow. Similar usage will be added in a downstream patch to the corresponding Striding RQ flow. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
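A minimal sketch of what such a helper has to take care of (an assumed, simplified form that centralizes the frag-attach steps shown in the striding-RQ sketch above; the real function also deals with DMA syncing of the fragment, omitted here):

#include <linux/mm.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

static void sketch_add_shinfo_frag(struct skb_shared_info *sinfo,
                                   struct xdp_buff *xdp, struct page *page,
                                   u32 frag_offset, u32 len)
{
        skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags++];

        __skb_frag_set_page(frag, page);
        skb_frag_off_set(frag, frag_offset);
        skb_frag_size_set(frag, len);

        /* Propagate pfmemalloc so an eventual SKB inherits it. */
        if (page_is_pfmemalloc(page))
                xdp_buff_set_frag_pfmemalloc(xdp);

        sinfo->xdp_frags_size += len;
}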
-
Tariq Toukan authored
Under a few restrictions, the TX MPWQE feature can serve multiple TX packets in a single TX descriptor. It requires each of the packets to have a single scatter entry / segment. Today we allow only linear frames to use this feature, although there's no real problem with non-linear ones where the whole packet resides in the first fragment. Expand the XDP TX MPWQE feature support to include such frames. This is in preparation for the downstream patch, in which we will generate such non-linear frames. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
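An illustrative predicate for the eligibility rule described above (an assumption, not the driver's exact check): a frame maps to a single scatter entry when it is plain linear, or when its linear part is empty and the whole payload sits in one fragment.

#include <net/xdp.h>

static bool sketch_frame_is_single_segment(struct xdp_frame *xdpf)
{
        struct skb_shared_info *sinfo;

        if (!xdp_frame_has_frags(xdpf))
                return true;    /* linear frame: one segment */

        sinfo = xdp_get_shared_info_from_frame(xdpf);
        /* empty linear part + exactly one fragment is still one segment */
        return xdpf->len == 0 && sinfo->nr_frags == 1;
}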
-
Tariq Toukan authored
Remove the assumption of non-zero linear length in the XDP xmit function, which is used to serve both internal XDP_TX operations and redirected-in requests. Do not apply the MLX5E_XDP_MIN_INLINE check unless necessary. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Function mlx5e_rx_get_linear_stride_sz() returns PAGE_SIZE immediately in case an XDP program is attached. The more accurate formula is ALIGN(sz, PAGE_SIZE), to prevent two packets from residing on the same page. The assumption behind the current code is that sz <= PAGE_SIZE holds for all cases with an XDP program set. This is true because the function is called 3 times from Striding RQ flows, in which XDP is not supported for such large packets, and once from the Legacy RQ flow, under the condition mlx5e_rx_is_linear_skb(). No functional change here, just removing the implied assumption in preparation for supporting XDP multi-buffer in Striding RQ. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
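The sizing rule, written out as a standalone sketch (the real helper derives sz from the channel parameters; the non-XDP branch shown here is an assumption):

#include <linux/align.h>
#include <linux/log2.h>
#include <linux/mm.h>

static u32 sketch_linear_stride_sz(u32 sz, bool xdp_attached)
{
        /* With XDP attached, round up to whole pages so no two packets
         * ever share a page; returning PAGE_SIZE alone is only correct
         * while sz <= PAGE_SIZE, which is the assumption being removed.
         */
        if (xdp_attached)
                return ALIGN(sz, PAGE_SIZE);

        return roundup_pow_of_two(sz);  /* assumed non-XDP behavior */
}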
-
Tariq Toukan authored
Change mlx5e_xdp_allowed() so it gets the params structure with the xdp_prog applied, rather than creating a local copy based on the current params in priv. This reduces the amount of memory on the stack, and acts on the exact params instance that's about to be applied. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
The non-linear memory scheme of Striding RQ does not support XDP at this point. Move the check to where it belongs, inside the params validation function mlx5e_params_validate_xdp(). Reviewed-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Handle multi-buffer XDP redirect-in requests coming through mlx5e_xdp_xmit. Extend struct mlx5e_xmit_data_frags with an additional dma_arr field, pointing to the fragments' dma mappings, as they cannot be retrieved via the page_pool_get_dma_addr() function. Push a dma_addr xdpi instance per fragment, and use them in the completion flow to dma_unmap the frags. Finally, remove the restriction in mlx5e_open_xdpsq, and set the flag in xdp_features. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
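A hedged sketch of this map/unmap pairing (placeholder functions, not the mlx5e code; the driver carries the addresses through its xdpi FIFO rather than an explicit array):

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

static int sketch_map_frame_frags(struct device *dma_dev,
                                  struct xdp_frame *xdpf, dma_addr_t *dma_arr)
{
        struct skb_shared_info *sinfo;
        int i;

        if (!xdp_frame_has_frags(xdpf))
                return 0;

        sinfo = xdp_get_shared_info_from_frame(xdpf);
        for (i = 0; i < sinfo->nr_frags; i++) {
                skb_frag_t *frag = &sinfo->frags[i];

                dma_arr[i] = dma_map_single(dma_dev, skb_frag_address(frag),
                                            skb_frag_size(frag), DMA_TO_DEVICE);
                if (dma_mapping_error(dma_dev, dma_arr[i]))
                        return -ENOMEM; /* real code also unwinds prior mappings */
        }
        return 0;
}

/* On completion, one dma_addr was recorded per fragment, so the unmap
 * loop mirrors the map loop above.
 */
static void sketch_unmap_frame_frags(struct device *dma_dev,
                                     struct xdp_frame *xdpf,
                                     const dma_addr_t *dma_arr)
{
        struct skb_shared_info *sinfo;
        int i;

        if (!xdp_frame_has_frags(xdpf))
                return;

        sinfo = xdp_get_shared_info_from_frame(xdpf);
        for (i = 0; i < sinfo->nr_frags; i++)
                dma_unmap_single(dma_dev, dma_arr[i],
                                 skb_frag_size(&sinfo->frags[i]), DMA_TO_DEVICE);
}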
-
Tariq Toukan authored
Here we fix the current wi->num_pkts abuse, as it was used to indicate multiple xdpi entries in the xdpi_fifo. Instead, reduce mlx5e_xdp_info to the size of a single field, making it a union of unions. Per packet, use as many instances as needed to provide the information needed at completion time. The sequence of xdpi instances pushed is well defined, derived from the xmit_mode. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
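An illustrative shape of that "union of unions" idea (all names below are assumptions, not the driver's definitions): every FIFO slot is the size of a single field, and a packet occupies as many consecutive slots as its xmit_mode requires.

#include <linux/types.h>
#include <net/xdp.h>

enum sketch_xmit_mode {
        SKETCH_XMIT_MODE_FRAME, /* redirected-in xdp_frame */
        SKETCH_XMIT_MODE_PAGE,  /* driver-internal XDP_TX, page-backed */
};

union sketch_xdp_info {
        enum sketch_xmit_mode mode;
        union {
                struct xdp_frame *xdpf;
                dma_addr_t dma_addr;    /* one slot per fragment */
        } frame;
        union {
                u8 num;                 /* number of page slots that follow */
                struct page *page;
        } page;
};

/* For a redirected frame with N fragments, the completion FIFO would hold
 * the well-defined sequence:
 *   { .mode = SKETCH_XMIT_MODE_FRAME }, { .frame.xdpf = xdpf },
 *   { .frame.dma_addr = addr[0] }, ..., { .frame.dma_addr = addr[N - 1] }
 * and the completion handler pops slots in exactly that order.
 */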
-
Tariq Toukan authored
It is neither likely nor unlikely that the xdp buff has fragments; it depends on the loaded program and the size of the received packet. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Introduce struct mlx5e_xmit_data_frags to be used for non-linear xmit buffers. Let it include an sinfo pointer. Take one bit from the len field to indicate if the descriptor has fragments and can be cast up into the extended version. Zero-init to make sure has_frags, and potentially future fields, are zero when not explicitly assigned. Another field will be added in a downstream patch to indicate and point to the dma addresses of the different frags, for redirect-in requests. This simplifies the parameters of the mlx5e_xmit_xdp_frame/mlx5e_xmit_xdp_frame_mpwqe functions. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
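A sketch of the described layout (field names follow the commit message; exact widths and the container layout are assumptions):

#include <linux/skbuff.h>
#include <linux/types.h>

struct sketch_xmit_data {
        dma_addr_t dma_addr;
        void *data;
        u32 len : 31;
        u32 has_frags : 1;      /* one bit borrowed from the length field */
};

struct sketch_xmit_data_frags {
        struct sketch_xmit_data xd;     /* must stay first so it can be cast up */
        struct skb_shared_info *sinfo;
        dma_addr_t *dma_arr;            /* per-frag DMA addresses, added later in the series */
};

/* Zero-initialization keeps has_frags (and any future flag) cleared:
 *
 *      struct sketch_xmit_data_frags xdf = {};
 *
 * and callers only cast up when the flag allows it:
 *
 *      if (xd->has_frags)
 *              xdf = container_of(xd, struct sketch_xmit_data_frags, xd);
 */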
-
Tariq Toukan authored
Move TX datapath struct from the generic en.h to the datapath txrx.h header, where it belongs. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Move struct mlx5e_xdp_info and enum mlx5e_xdp_xmit_mode from the generic en.h to the XDP header, where they belong. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alain Volmat authored
Remove no-longer-supported platforms (stih415/stih416 and stid127). Signed-off-by: Alain Volmat <avolmat@me.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com> Link: https://lore.kernel.org/r/20230416195523.61075-1-avolmat@me.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Arnd Bergmann authored
The types for the register argument changed recently, but there are still incompatible prototypes that got left behind, and gcc-13 warns about these: In file included from drivers/net/ethernet/mscc/ocelot.c:13: drivers/net/ethernet/mscc/ocelot.h:97:5: error: conflicting types for 'ocelot_port_readl' due to enum/integer mismatch; have 'u32(struct ocelot_port *, u32)' {aka 'unsigned int(struct ocelot_port *, unsigned int)'} [-Werror=enum-int-mismatch] 97 | u32 ocelot_port_readl(struct ocelot_port *port, u32 reg); | ^~~~~~~~~~~~~~~~~ Just remove the two prototypes, and rely on the copy in the global header. Fixes: 9ecd0579 ("net: mscc: ocelot: strengthen type of "u32 reg" in I/O accessors") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20230417205531.1880657-1-arnd@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Corinna Vinschen authored
stmmac_dev_probe doesn't propagate feature flags to VLANs. So features like offloading don't correspond to the general features, and it's not possible to manipulate features via ethtool -K to affect VLANs. Propagate feature flags to vlan features. Drop the TSO feature because it does not work on VLANs yet. Signed-off-by: Corinna Vinschen <vinschen@redhat.com> Link: https://lore.kernel.org/r/20230417192845.590034-1-vinschen@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
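A minimal sketch of the idea, under the assumption that it boils down to masking TSO out of the propagated flags (not necessarily the exact stmmac hunk):

#include <linux/netdevice.h>

static void sketch_propagate_vlan_features(struct net_device *ndev)
{
        /* Mirror the netdev features onto vlan_features, dropping TSO,
         * which the commit notes does not work on VLANs yet.
         */
        ndev->vlan_features = ndev->features & ~NETIF_F_TSO;
}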
-
Hangbin Liu authored
Currently, bonding only obtains the timestamp (ts) information of the active slave, which is available only for modes 1, 5, and 6. For other modes, bonding only has software rx timestamping support. However, some users who use modes such as LACP also want tx timestamp support. To address this issue, let's check the ts information of each slave. If all slaves support tx timestamping, we can enable tx timestamping support for the bond. Add a note in ethtool.h that get_ts_info may be called with RCU, or rtnl or a reference on the device. Suggested-by: Miroslav Lichvar <mlichvar@redhat.com> Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Acked-by: Jay Vosburgh <jay.vosburgh@canonical.com> Link: https://lore.kernel.org/r/20230418034841.2566262-1-liuhangbin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
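A hedged sketch of the aggregation logic (slave iteration and locking are simplified away; this is not the bonding code): the bond advertises a TX timestamping capability only if every slave advertises it too.

#include <linux/ethtool.h>
#include <linux/net_tstamp.h>
#include <linux/netdevice.h>

static int sketch_bond_get_ts_info(struct ethtool_ts_info *info,
                                   struct net_device **slaves, int num_slaves)
{
        u32 tx_caps = SOF_TIMESTAMPING_TX_SOFTWARE |
                      SOF_TIMESTAMPING_TX_HARDWARE;
        struct ethtool_ts_info ts_info;
        int i, ret;

        for (i = 0; i < num_slaves; i++) {
                ret = __ethtool_get_ts_info(slaves[i], &ts_info);
                if (ret)
                        return ret;
                tx_caps &= ts_info.so_timestamping;     /* keep only common TX caps */
        }

        /* Software RX timestamping remains available on the bond. */
        info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
                                SOF_TIMESTAMPING_SOFTWARE |
                                tx_caps;
        return 0;
}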
-
- 18 Apr, 2023 19 commits
-
-
Paolo Abeni authored
Samin Guo says: ==================== Add Ethernet driver for StarFive JH7110 SoC This series adds ethernet support for the StarFive JH7110 RISC-V SoC, which includes a dwmac-5.20 MAC driver (from Synopsys DesignWare). This series has been tested and works fine on VisionFive-2 v1.2A and v1.3B SBC boards. For more information and support, you can visit RVspace wiki[1]. You can simply review or test the patches at the link [2]. This patchset should be applied after the patchset [3] [4]. [1]: https://wiki.rvspace.org/ [2]: https://github.com/saminGuo/linux/tree/vf2-6.3rc4-gmac-net-next [3]: https://patchwork.kernel.org/project/linux-riscv/cover/20230401111934.130844-1-hal.feng@starfivetech.com [4]: https://patchwork.kernel.org/project/linux-riscv/cover/20230315055813.94740-1-william.qiu@starfivetech.com ==================== Link: https://lore.kernel.org/r/20230417100251.11871-1-samin.guo@starfivetech.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Samin Guo authored
dwmac supports multiple modes. When working in RMII or RGMII mode, different PHY interface values need to be set. According to the dwmac documentation, the value needs to be set to 0x4 for RMII and to 0x1 for RGMII. The PHY interface needs to be set in syscon; the format is as follows: starfive,syscon: <&syscon, offset, shift> Tested-by: Tommaso Merciai <tomm.merciai@gmail.com> Signed-off-by: Samin Guo <samin.guo@starfivetech.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Samin Guo authored
This adds StarFive dwmac driver support on the StarFive JH7110 SoC. Tested-by: Tommaso Merciai <tomm.merciai@gmail.com> Co-developed-by: Emil Renner Berthing <kernel@esmil.dk> Signed-off-by: Emil Renner Berthing <kernel@esmil.dk> Signed-off-by: Samin Guo <samin.guo@starfivetech.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Yanhong Wang authored
Add documentation to describe the StarFive dwmac driver (GMAC). Signed-off-by: Yanhong Wang <yanhong.wang@starfivetech.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Samin Guo <samin.guo@starfivetech.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Samin Guo authored
According to stmmac_probe_config_dt (stmmac_platform.c) and stmmac_dvr_probe (stmmac_main.c), the dwmac controller may require one (stmmaceth) or two (stmmaceth+ahb) reset signals, so the maxItems of resets/reset-names is going to be 2. The GMAC of the StarFive JH7110 SoC must have two resets; it uses the snps,dwmac-5.20 IP. Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Samin Guo <samin.guo@starfivetech.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Emil Renner Berthing authored
Add "snps,dwmac-5.20" compatible string for 5.20 version that can avoid to define some platform data in the glue layer. Tested-by: Tommaso Merciai <tomm.merciai@gmail.com> Signed-off-by: Emil Renner Berthing <kernel@esmil.dk> Signed-off-by: Samin Guo <samin.guo@starfivetech.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Emil Renner Berthing authored
Add dwmac-5.20 IP version to snps.dwmac.yaml Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Emil Renner Berthing <kernel@esmil.dk> Signed-off-by: Samin Guo <samin.guo@starfivetech.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Paolo Abeni authored
Heiner Kallweit says: ==================== r8169: use new macros from netdev_queues.h Add one missing subqueue version of the macros, and use the new macros in r8169 to simplify the code. ==================== Link: https://lore.kernel.org/r/7147a001-3d9c-a48d-d398-a94c666aa65b@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Heiner Kallweit authored
Use new net core macro netif_subqueue_completed_wake to simplify the code of the tx cleanup path. Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Heiner Kallweit authored
Use the new net core macro netif_subqueue_maybe_stop in the start_xmit path to simplify the code. Whilst at it, set the tx queue start threshold to twice the stop threshold. Before, both values were the same, resulting in stopping/starting the queue more often than needed. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Heiner Kallweit authored
Add netif_subqueue_completed_wake, complementing the subqueue versions netif_subqueue_try_stop and netif_subqueue_maybe_stop. Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
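A hedged usage sketch of these helpers in a generic driver (the my_* names, thresholds and priv layout are illustrative, not taken from r8169): maybe_stop guards the xmit path, completed_wake reports finished work and restarts the queue.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/netdev_queues.h>

struct my_priv {
        unsigned int tx_free_slots;     /* free TX descriptors, updated elsewhere */
};

static unsigned int my_tx_free_slots(struct my_priv *priv)
{
        return priv->tx_free_slots;
}

#define MY_TX_STOP_THRS         (MAX_SKB_FRAGS + 1)     /* worst case for one packet */
#define MY_TX_START_THRS        (2 * MY_TX_STOP_THRS)   /* wake later than we stop */

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct my_priv *priv = netdev_priv(dev);

        /* ... fill TX descriptors for skb, then account it for BQL ... */
        netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), skb->len);

        /* Stop subqueue 0 if fewer than MY_TX_STOP_THRS slots remain; the
         * macro re-checks after stopping and restarts the queue if a racing
         * completion freed enough slots in the meantime.
         */
        netif_subqueue_maybe_stop(dev, 0, my_tx_free_slots(priv),
                                  MY_TX_STOP_THRS, MY_TX_START_THRS);
        return NETDEV_TX_OK;
}

static void my_tx_complete(struct net_device *dev, unsigned int pkts,
                           unsigned int bytes)
{
        struct my_priv *priv = netdev_priv(dev);

        /* Report completed work (BQL) and wake subqueue 0 once at least
         * MY_TX_START_THRS slots are free again.
         */
        netif_subqueue_completed_wake(dev, 0, pkts, bytes,
                                      my_tx_free_slots(priv),
                                      MY_TX_START_THRS);
}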
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== Ocelot/Felix driver support for preemptible traffic classes The series "Add tc-mqprio and tc-taprio support for preemptible traffic classes" from: https://lore.kernel.org/netdev/20230220122343.1156614-1-vladimir.oltean@nxp.com/ was eventually submitted in a form without the support for the Ocelot/Felix switch driver. This patch set picks up that work again, and presents a fairly modified form compared to the original. ==================== Link: https://lore.kernel.org/r/20230415170551.3939607-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
In order to not transmit (preemptible) frames which will be received by the link partner as corrupted (because it doesn't support FP), the hardware requires the driver to program the QSYS_PREEMPTION_CFG_P_QUEUES register only after the MAC Merge layer becomes active (verification succeeds, or was disabled). There are some cases when FP is known (through experimentation) to be broken. Give priority to FP over cut-through switching, and disable FP for known broken link modes. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The mqprio queue configuration can appear either through TC_SETUP_QDISC_MQPRIO or through TC_SETUP_QDISC_TAPRIO. Make sure both are treated in the same way. Code does nothing new for now (except for rejecting multiple TXQs per TC, which is a useless concept with DSA switches). Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
This doesn't apply anything to hardware and in general doesn't do anything that the software variant doesn't do, except for checking that there isn't more than 1 TXQ per TC (TXQs for a DSA switch are a dubious concept anyway). The reason we add this is to be able to parse one more field added to struct tc_mqprio_qopt_offload, namely preemptible_tcs. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
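A hedged sketch of that validation (not the felix code): accept only one TXQ per TC and remember which TCs were marked preemptible for the later MAC Merge configuration.

#include <linux/netlink.h>
#include <net/pkt_cls.h>

static int sketch_parse_mqprio(const struct tc_mqprio_qopt_offload *mqprio,
                               struct netlink_ext_ack *extack,
                               unsigned long *preemptible_tcs)
{
        int tc;

        for (tc = 0; tc < mqprio->qopt.num_tc; tc++) {
                if (mqprio->qopt.count[tc] != 1) {
                        NL_SET_ERR_MSG(extack,
                                       "Only one TXQ per TC is supported");
                        return -EOPNOTSUPP;
                }
        }

        *preemptible_tcs = mqprio->preemptible_tcs;
        return 0;
}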
-
Vladimir Oltean authored
ocelot_mm_update_port_status() updates mm->verify_status, but an IRQ isn't emitted every time the verification state of a port changes - only when it reaches one of the final states (like DISABLED, FAILED, SUCCEEDED), i.e. the things that affect mm->tx_active, which is what the IRQ *is* actually emitted for. That is to say, user space may miss reports of an intermediary MAC Merge verification state (like from INITIAL to VERIFYING), unless there was an IRQ notifying the driver of a change in mm->tx_active as well. This is not a huge deal, but for reliable reporting to user space, let's call ocelot_mm_update_port_status() synchronously from ocelot_port_get_mm(), so that user space sees the current MM status. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The MAC Merge IRQ of all ports is shared with the PTP TX timestamp IRQ of all ports, which means that currently, when a PTP TX timestamp is generated, felix_irq_handler() also polls for the MAC Merge layer status of all ports, looking for changes. This makes the kernel do more work, and under certain circumstances may make ptp4l require a tx_timestamp_timeout argument higher than before. Changes to the MAC Merge layer status are only to be expected under certain conditions - its TX direction needs to be enabled - so we can check early if that is the case, and omit register access otherwise. Make ocelot_mm_update_port_status() skip register access if mm->tx_enabled is unset, and also call it once more, outside IRQ context, from ocelot_port_set_mm(), when mm->tx_enabled transitions from true to false, because an IRQ is also expected in that case. Also, a port may have its MAC Merge layer enabled but it may not have generated the interrupt. In that case, there's no point in writing to DEV_MM_STATUS to acknowledge that IRQ. We can reduce the number of register writes per port with MM enabled by keeping an "ack" variable which writes the "write-one-to-clear" bits. Those are 3 in number: PRMPT_ACTIVE_STICKY, UNEXP_RX_PFRM_STICKY and UNEXP_TX_PFRM_STICKY. The other fields in DEV_MM_STATUS are read-only and it doesn't matter what is written to them, so writing zero is just fine. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
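A hedged sketch of the acknowledgment scheme (bit positions and the accessor are placeholders; the real driver goes through its own register helpers): collect only the write-one-to-clear sticky bits that are set and issue a single write back, skipping the write entirely when nothing is pending.

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/types.h>

/* Placeholder bit definitions; the real DEV_MM_STATUS field macros differ. */
#define SKETCH_PRMPT_ACTIVE_STICKY      BIT(4)
#define SKETCH_UNEXP_RX_PFRM_STICKY     BIT(2)
#define SKETCH_UNEXP_TX_PFRM_STICKY     BIT(1)

static void sketch_ack_mm_status(void __iomem *dev_mm_status)
{
        u32 val = readl(dev_mm_status);
        u32 ack;

        /* The remaining fields of the register are read-only, so leaving
         * them at zero in the write-back is fine.
         */
        ack = val & (SKETCH_PRMPT_ACTIVE_STICKY |
                     SKETCH_UNEXP_RX_PFRM_STICKY |
                     SKETCH_UNEXP_TX_PFRM_STICKY);
        if (ack)
                writel(ack, dev_mm_status);
}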
-
Vladimir Oltean authored
Unfortunately, the workarounds for the hardware bugs make it pointless to keep fine-grained locking for the MAC Merge state of each port. Our vsc9959_cut_through_fwd() implementation requires ocelot->fwd_domain_lock to be held, in order to serialize with changes to the bridging domains and to port speed changes (which affect which ports can be cut-through). Simultaneously, the traffic classes which can be cut-through cannot be preemptible at the same time, and this will depend on the MAC Merge layer state (which changes from threaded interrupt context). Since vsc9959_cut_through_fwd() would have to hold the mm->lock of all ports for a correct and race-free implementation with respect to ocelot_mm_irq(), in practice it means that any time a port's mm->lock is held, it would potentially block holders of ocelot->fwd_domain_lock. In the interest of simple locking rules, make all MAC Merge layer state changes (and preemptible traffic class changes) be serialized by the ocelot->fwd_domain_lock. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
When the switch emits an IRQ, we don't know what caused it, and we iterate through all ports to check the MAC Merge status. Move that iteration inside the ocelot lib; we will change the locking in a future change and it would be good to encapsulate that lock completely within the ocelot lib. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-