- 11 Feb, 2021 40 commits
-
Geetha sowjanya authored
The NIX hardware context structure has changed on the CN10K platform to accommodate new features such as bandwidth steering and L3/L4 outer/inner checksum enable/disable. This patch defines a new mbox message, NIX_CN10K_AQ_INST, for the new NIX context initialization. It also updates the NPA context structures to accommodate the bit-field changes made for the CN10K platform, and removes big-endian bit fields from the existing structures since their support is deprecated in current and upcoming silicons. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subbaraya Sundeep authored
Firmware allocates memory regions for PFs and VFs in DRAM. The PF's memory region is used for the AF-PF and PF-VF mailboxes; these mailboxes facilitate communication between AF and PF and between PF and VF. On the CN10K platform: the DRAM region allocated to a PF is enumerated as PF BAR4 memory. PF BAR4 contains the AF-PF mbox region followed by its VFs' mbox regions. The AF-PF mbox region base address is configured at RVU_AF_PFX_BAR4_ADDR, and the PF-VF mailbox base address is configured at RVU_PF(x)_VF_MBOX_ADDR = RVU_AF_PF()_BAR4_ADDR+64KB. The PF accesses its mbox region via BAR4, whereas the VF accesses the PF-VF DRAM mailboxes via BAR2 indirect access. On the CN9XX platform: the mailbox region in DRAM is divided into two parts, the AF-PF mbox region and the PF-VF mbox region, i.e. all PFs' mbox regions are contiguous and likewise all VFs'. The base address of the AF-PF mbox region is configured at RVU_AF_PF_BAR4_ADDR. The AF-PF1 mbox address can be calculated as RVU_AF_PF_BAR4_ADDR * mbox size. The base address of the PF-VF mbox region for each PF is configured at RVU_AF_PF(0..15)_VF_BAR4_ADDR. The PF accesses its mbox region via BAR4 and its VF mbox regions via the RVU_PF_VF_BAR4_ADDR register, whereas the VF accesses its mbox region via BAR4. This patch changes mbox initialization to support both the CN9XX and CN10K platforms. It also adds a new hw_cap flag for setting hw features like TSO, and removes the platform-specific name from the PF/VF driver name to make it appropriate for all supported platforms. Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
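A rough sketch of the CN10K layout described above (illustrative only: the bases really come from RVU_AF_PFX_BAR4_ADDR / RVU_PF(x)_VF_MBOX_ADDR, and the per-VF stride used here is an assumption):

  #include <stdint.h>

  #define MBOX_REGION_SIZE 0x10000ULL  /* 64KB per mailbox region (assumed) */

  /* The AF-PF mailbox sits at the start of PF BAR4. */
  static inline uint64_t af_pf_mbox_base(uint64_t pf_bar4_base)
  {
          return pf_bar4_base;
  }

  /* The PF-VF mailboxes follow the 64KB AF-PF region, one region per VF. */
  static inline uint64_t pf_vf_mbox_base(uint64_t pf_bar4_base, unsigned int vf)
  {
          return pf_bar4_base + MBOX_REGION_SIZE + (uint64_t)vf * MBOX_REGION_SIZE;
  }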
-
Subbaraya Sundeep authored
Firmware allocates memory regions for PFs and VFs in DRAM. The PF's memory region is used for the AF-PF and PF-VF mailboxes; these mailboxes facilitate communication between AF and PF and between PF and VF. On the CN10K platform: the DRAM region allocated to a PF is enumerated as PF BAR4 memory. PF BAR4 contains the AF-PF mbox region followed by its VFs' mbox regions. The AF-PF mbox region base address is configured at RVU_AF_PFX_BAR4_ADDR, and the PF-VF mailbox base address is configured at RVU_PF(x)_VF_MBOX_ADDR = RVU_AF_PF()_BAR4_ADDR+64KB. The PF accesses its mbox region via BAR4, whereas the VF accesses the PF-VF DRAM mailboxes via BAR2 indirect access. On the CN9XX platform: the mailbox region in DRAM is divided into two parts, the AF-PF mbox region and the PF-VF mbox region, i.e. all PFs' mbox regions are contiguous and likewise all VFs'. The base address of the AF-PF mbox region is configured at RVU_AF_PF_BAR4_ADDR. The AF-PF1 mbox address can be calculated as RVU_AF_PF_BAR4_ADDR * mbox size. The base address of the PF-VF mbox region for each PF is configured at RVU_AF_PF(0..15)_VF_BAR4_ADDR. The PF accesses its mbox region via BAR4 and its VF mbox regions via the RVU_PF_VF_BAR4_ADDR register, whereas the VF accesses its mbox region via BAR4. This patch changes mbox initialization to support both the CN9XX and CN10K platforms. It also adds the CN10K PTP subsystem and device IDs to the ptp driver id table. Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Stefan Chulski says: ==================== net: mvpp2: Add TX Flow Control support
Armada hardware has a pause generation mechanism in the GOP (MAC). The GOP generates flow control frames based on an indication programmed in the Ports Control 0 Register; there is a bit per port. However, asserting the PortX Pause bits in the Ports Control 0 register only sends a one-time pause. To complement this, the GOP has a mechanism to periodically send pause control messages based on periodic counters. This mechanism ensures that the pause remains effective as long as the appropriate PortX Pause bit is asserted. The problem is that the Packet Processor, which can actually drop packets due to lack of resources, is not connected to the GOP flow control generation mechanism. To solve this, Armada has firmware running on a CM3 CPU dedicated to Flow Control support. The firmware monitors Packet Processor resources and asserts XON/XOFF by writing to the Ports Control 0 Register. The MSS shared SRAM memory is used to communicate between the CM3 firmware and the PP2 driver. During init the PP2 driver informs the firmware about the BM pools, RXQs, and congestion and depletion thresholds in use. Pause frames are generated whenever congestion or depletion of resources is detected, and back pressure is stopped when the resource reaches a sufficient level, so the congestion/depletion and sufficient levels implement a hysteresis that reduces the XON/XOFF toggle frequency. Packet Processor v23 hardware introduces support for an RX FIFO fill level monitor. The patch "add PPv23 version definition" differentiates between v23 and v22 hardware, and the patch "add TX FC firmware check" verifies that the CM3 firmware supports Flow Control monitoring.
v12 --> v13 - Remove bm_underrun_protect module_param
v11 --> v12 - Improve warning message in "net: mvpp2: add TX FC firmware check" patch
v10 --> v11 - Improve "net: mvpp2: add CM3 SRAM memory map" comment - Move condition check to 'net: mvpp2: always compare hw-version vs MVPP21' patch
v9 --> v10 - Add CM3 SRAM description to PPv2 documentation
v8 --> v9 - Replace generic pool allocation with devm_ioremap_resource
v7 --> v8 - Reorder "always compare hw-version vs MVPP21" and "add PPv23 version definition" commits - Typo fixes - Remove condition fix from "add RXQ flow control configurations"
v6 --> v7 - Reduce patch set from 18 to 15 patches - Documentation change combined into a single patch - RXQ and BM size change combined into a single patch - Ring size change check moved into "add RXQ flow control configurations" commit
v5 --> v6 - No change
v4 --> v5 - Add missed Signed-off - Fix warnings in patches 3 and 12 - Add revision requirement to warning message - Move mss_spinlock into RXQ flow control configurations patch - Improve FCA RXQ non occupied descriptor threshold commit message
v3 --> v4 - Remove RFC tag
v2 --> v3 - Remove inline functions - Add PPv2.3 description into marvell-pp2.txt - Improve mvpp2_interrupts_mask/unmask procedure - Improve FC enable/disable procedure - Add priv->sram_pool check - Remove gen_pool_destroy call - Reduce Flow Control timer to x100 faster
v1 --> v2 - Add memory requirements information - Add EPROBE_DEFER if of_gen_pool_get return NULL - Move Flow control configuration to mvpp2_mac_link_up callback
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
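The congestion/sufficient levels mentioned above form a simple hysteresis; a minimal sketch of the idea (threshold values and names are made up for illustration, the real ones live in the CM3 firmware):

  #include <stdbool.h>
  #include <stdint.h>

  #define XOFF_THRESH 1536   /* congestion: assert pause above this fill level */
  #define XON_THRESH  1024   /* sufficient: release pause below this fill level */

  static bool paused;

  static void fc_update(uint32_t fill_level)
  {
          if (!paused && fill_level >= XOFF_THRESH)
                  paused = true;    /* firmware sets the PortX Pause bit */
          else if (paused && fill_level <= XON_THRESH)
                  paused = false;   /* firmware clears the PortX Pause bit */
          /* between the two thresholds the state is kept, so XON/XOFF
           * does not toggle on every small change in fill level */
  }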
-
Stefan Chulski authored
This patch checks that the TX FC firmware is running on the CM3. If it is not, global TX FC is disabled. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch fixes GMAC TX flow control autoneg. Flow control autoneg was wrongly disabled when TX flow control was enabled. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
A new FIFO flow control feature was added in PPv23. The PPv2 FIFO is polled by HW, which triggers a pause frame if the FIFO fill level is below the threshold. FIFO HW flow control is enabled together with CM3 RXQ & BM flow control via ethtool. The current FIFO thresholds are: 9KB for ports with a maximum speed of 10Gb/s, 4KB for ports with a maximum speed of 5Gb/s, and 2KB for ports with a maximum speed of 1Gb/s. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
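For reference, the thresholds quoted above are a straightforward mapping from maximum port speed; a small sketch (the function name is illustrative, the values are the ones from the commit):

  static unsigned int fifo_fc_threshold_bytes(unsigned int max_speed_mbps)
  {
          if (max_speed_mbps >= 10000)
                  return 9 * 1024;   /* 10Gb/s ports */
          if (max_speed_mbps >= 5000)
                  return 4 * 1024;   /* 5Gb/s ports */
          return 2 * 1024;           /* 1Gb/s ports */
  }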
-
Stefan Chulski authored
The PPv23 hardware supports a feature that allows doubling the BPPI size by decreasing the number of pools from 16 to 8. Increasing the BPPI size protects against BM drops caused by BPPI underrun. Underrun can occur due to stress on the DDR and, as a result, slow buffer transitions from BPPE to BPPI. The new BPPI thresholds recommended by the spec are: BPPI low threshold - 640 buffers, BPPI high threshold - 832 buffers. Supported only in PPv23. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch adds ethtool flow control configuration support. TX flow control is retrieved correctly by the ethtool get function, and a per-port FW ethtool configuration capability is added. The patch also takes care of the MTU change procedure, since PPv2 switches BM pools during an MTU change. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch adds the RXQ flow control configuration. Flow control is disabled by default, and the minimum ring size is limited to 1024 descriptors. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch enables global flow control in FW and in the phylink validate mask. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
The firmware needs to monitor the RX Non-occupied descriptor bits for flow control to move to XOFF mode. These bits need to be unmasked to be functional, but they will not raise interrupts as we leave the RX exception summary bit in MVPP2_ISR_RX_TX_MASK_REG clear. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
The Flow Control periodic timer is used when a port is in XOFF to transmit periodic XOFF frames. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
The BM pool and RXQ sizes are increased to support Firmware Flow Control. The minimum depletion thresholds to support FC are 1024 buffers, so the BM pool size is increased to 2048 to leave about 1024 buffers of headroom between the depletion thresholds and the BM pool size. Jumbo frames require a 9888B buffer, so the memory requirements for data buffers increase from 7MB to 24MB. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch adds the PPv23 version definition. PPv23 is the new packet processor in CP115. Everything supported by PPv22 is also supported by PPv23. There are no functional changes at this stage. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
Currently we have the PP2v1 and PP2v2 hw-versions, with some handlers differing based on the condition hw_version = MVPP21/MVPP22. In the future there will also be PP2v3. Let's now use the generic "if equal/not-equal MVPP21" check for all cases instead of "if MVPP22". This patch does not change any functionality and is not intended to introduce PP2v3; it just modifies the MVPP21/MVPP22 check condition, bringing it into a generic and unified form suitable for introducing new code and the PP2v3 net-next generation. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch adds CM3 memory map. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Konstantin Porotchkin authored
CM3 SRAM address space will be used for Flow Control configuration. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Signed-off-by: Konstantin Porotchkin <kostap@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Chulski authored
This patch adds the CM3 address space and PPv2.3 description. Signed-off-by: Stefan Chulski <stefanc@marvell.com> Acked-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Juergen Gross authored
In order to support the possibility of per-device event channel settings (e.g. lateeoi spurious event thresholds), add a xenbus device pointer to struct irq_info and modify the related event channel binding interfaces to take a pointer to the xenbus device as a parameter instead of the domain id of the other side. While at it, remove the stale prototype of bind_evtchn_to_irq_lateeoi(). Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Wei Liu <wei.liu@kernel.org> Reviewed-by: Paul Durrant <paul@xen.org> Signed-off-by: David S. Miller <davem@davemloft.net>
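As a simplified illustration of the interface change (a sketch with abbreviated fields and parameters, not the exact upstream declarations):

  struct xenbus_device;                  /* opaque here */

  struct irq_info_sketch {
          unsigned int evtchn;
          unsigned int irq;
          struct xenbus_device *dev;     /* new: allows per-device settings */
  };

  /* Binding helpers such as bind_interdomain_evtchn_to_irqhandler_lateeoi()
   * now take the xenbus device itself rather than the other side's domain
   * id (previously something like dev->otherend_id).
   */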
-
Juergen Gross authored
In case of a common event for the rx and tx queue, the event should be regarded as spurious if no rx and no tx requests are pending. Unfortunately the condition testing for that is wrong, causing an event to be treated as spurious if no rx OR no tx requests are pending. Fix that, and use local variables for the rx/tx pending indicators in order to split the function calls from the if condition. Fixes: 23025393 ("xen/netback: use lateeoi irq binding") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Wei Liu <wl@xen.org> Signed-off-by: David S. Miller <davem@davemloft.net>
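The logic error is easiest to see spelled out; a minimal sketch of the corrected check (helper names are made up, not netback's actual functions):

  #include <stdbool.h>

  extern bool rx_work_pending(void);   /* hypothetical stand-ins */
  extern bool tx_work_pending(void);

  static bool event_is_spurious(void)
  {
          /* local variables keep the two calls out of the if condition */
          bool has_rx = rx_work_pending();
          bool has_tx = tx_work_pending();

          /* correct: spurious only if there is no rx AND no tx work;
           * the bug effectively treated "no rx OR no tx" as spurious */
          return !has_rx && !has_tx;
  }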
-
Vlad Buslov authored
The function fib6_walk_continue() cannot return a positive value when called from register_fib_notifier(), but ignoring this possibility causes static analyzers to generate warnings in users of register_fib_notifier() that try to convert the returned error code to a pointer with ERR_PTR(). Handle this case by explicitly checking for positive error values and converting them to -EINVAL in fib6_tables_dump(). Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Suggested-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
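A minimal sketch of the clamping described above (names are illustrative, not the exact ipv6 dump code):

  #include <errno.h>

  /* Anything that may end up in ERR_PTR() must be zero or negative. */
  static int dump_result_fixup(int walk_ret)
  {
          if (walk_ret > 0)
                  return -EINVAL;   /* cannot happen here, but keeps
                                       static analyzers happy */
          return walk_ret;          /* 0 or a real negative error */
  }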
-
David S. Miller authored
Merge tag 'mlx5-for-upstream-2021-02-10' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux Saeed Mahameed says: ==================== mlx5-for-upstream-2021-02-10 Misc cleanups and trivial fixes for net-next 1) spelling mistakes 2) error path checks fixes 3) unused includes and struct fields cleanup 4) build error when MLX5_ESWITCH=no ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
Because ic_dev->dev is kept open in ic_close_dev, I had assumed that ic_dev would not be freed either. That is not the case: instead, "everybody dies" when ipconfig cleans up, and only the net_device behind ic_dev->dev remains allocated, not ic_dev itself. This is a problem because in ic_close_devs, for every net device we are about to close, we compare it against the list of lower interfaces of ic_dev to figure out whether we should close it or not. Since ic_dev itself is subject to freeing, at some point in the middle of the list of ipconfig interfaces ic_dev will have been freed, and we would still be iterating through its list of lower interfaces while checking whether to bring down the remaining ipconfig interfaces. There are multiple ways to avoid the use-after-free: we could delay freeing ic_dev until the very end (outside the while loop). Or, even simpler, we can observe that we do not need ic_dev when iterating through its lowers, only ic_dev->dev, a structure which is never freed. So, by keeping ic_dev->dev in a variable assigned prior to freeing ic_dev, we can avoid all use-after-free issues. Fixes: 46acf7bd ("Revert "net: ipv4: handle DSA enabled master network devices"") Reported-by: kernel test robot <oliver.sang@intel.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
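A condensed sketch of the pattern used for the fix (types and helpers here are illustrative stand-ins, not the actual ipconfig code): take the long-lived net_device pointer before the containing structure can be freed, and only use that pointer afterwards.

  #include <stdbool.h>
  #include <stdlib.h>

  struct net_dev { int ifindex; };                         /* stand-in for net_device */
  struct ic_dev  { struct ic_dev *next; struct net_dev *dev; };

  static bool keep_open(struct net_dev *d, struct net_dev *selected)
  {
          /* ipconfig also checks selected's lower interfaces here */
          return d == selected;
  }

  static void close_devs(struct ic_dev *list, struct ic_dev *ic_selected)
  {
          /* save the pointer up front: ic_selected may be freed in the
           * loop below, but the net_dev behind it stays allocated */
          struct net_dev *selected = ic_selected ? ic_selected->dev : NULL;
          struct ic_dev *d, *next;

          for (d = list; d; d = next) {
                  next = d->next;
                  if (!keep_open(d->dev, selected)) {
                          /* the device would be closed here */
                  }
                  free(d);   /* safe: ic_selected is never dereferenced again */
          }
  }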
-
Heiner Kallweit authored
Several years ago these two entries were added, but it's not clear why. There is no trace that such a chip version has ever existed, and not even the r8101 vendor driver knows these IDs. So let's disable detection, and if nobody complains, remove them completely later. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Nikolay Aleksandrov says: ==================== bonding: 3ad: support for 200G/400G ports and more verbose warning We'd like to have proper 200G and 400G support with the 3ad bond mode, so we need to add new definitions for them in order to have separate oper keys, aggregated bandwidth and proper operation (patches 01 and 02). In patch 03 Ido changes the code to use pr_err_once instead of pr_warn_once, which will help future detection of unsupported speeds. v2: patch 03: use pr_err_once instead of WARN_ONCE ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The bond driver needs to be patched to support new ethtool speeds. Currently it emits a single warning [1] when it encounters an unknown speed. As evidenced by the two previous patches, this is not explicit enough; instead, promote it to an error. [1] bond10: (slave swp1): unknown ethtool speed (200000) for port 1 (set it to 0) v2: * Use pr_err_once() instead of WARN_ONCE() Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
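The substance of the change is just the severity of the once-only message; a sketch (the surrounding bonding code is omitted and the message text mirrors [1]):

  #include <linux/printk.h>

  static void report_unknown_speed(const char *bond, const char *slave,
                                   int speed, int port)
  {
          /* previously a warning; promoted so unsupported speeds stand out,
           * e.g. "bond10: (slave swp1): unknown ethtool speed (200000) for
           * port 1 (set it to 0)" */
          pr_err_once("%s: (slave %s): unknown ethtool speed (%d) for port %d (set it to 0)\n",
                      bond, slave, speed, port);
  }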
-
Nikolay Aleksandrov authored
In order to be able to use 3ad mode with 400G devices we need to extend the supported speeds. Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
In order to be able to use 3ad mode with 200G devices we need to extend the supported speeds. Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Bhaskar Upadhaya says: ==================== qede: add netpoll and per-queue coalesce support This is a follow-up implementation after series https://patchwork.kernel.org/project/netdevbpf/cover/1610701570-29496-1-git-send-email-bupadhaya@marvell.com/ Patch 1: Add net poll controller support to transmit kernel printks over UDP Patch 2: QLogic cards support multiple queues and each queue can be configured with its own coalescing parameters; this patch adds per-queue rx-usecs and tx-usecs coalescing parameters Patch 3: Set default per-queue rx-usecs, tx-usecs coalescing parameters and preserve coalesce parameters across interface up and down v3: fixed warnings reported by Dan Carpenter v2: comments from jakub - p1: remove poll_controller ndo and add budget 0 support in qede_poll - p3: preserve coalesce parameters across interface up and down ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bhaskar Upadhaya authored
Here we initialize the coalescing values on load. Per-queue coalesce values are also restored across an up/down of the ethernet interface. Signed-off-by: Bhaskar Upadhaya <bupadhaya@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bhaskar Upadhaya authored
Per-queue coalescing allows better and more fine-grained control over interrupt rates. Signed-off-by: Bhaskar Upadhaya <bupadhaya@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bhaskar Upadhaya authored
Handle the netpoll case where qede_poll is called by the netpoll layer with a budget of 0. Signed-off-by: Bhaskar Upadhaya <bupadhaya@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
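For context, a poll call with budget 0 is only allowed to reap TX completions; it must not process RX and must not complete NAPI. A generic sketch of that convention (qede's fastpath helpers are replaced by hypothetical ones):

  extern void clean_tx_completions(void);   /* hypothetical stand-ins */
  extern int process_rx(int budget);

  static int poll_sketch(int budget)
  {
          int rx_done = 0;

          clean_tx_completions();            /* allowed even when budget == 0 */

          if (budget) {
                  rx_done = process_rx(budget);
                  /* if rx_done < budget: napi_complete_done() and re-arm IRQs */
          }

          /* budget == 0: return 0 without calling napi_complete_done() */
          return rx_done;
  }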
-
Gustavo A. R. Silva authored
Currently, a random stack value is being returned because the variable ret is not properly initialized. The variable is actually not used anymore and should be removed. Fix this by removing all instances of the variable ret and returning 0. Fixes: 64749c9c ("net: hns3: remove redundant return value of hns3_uninit_all_ring()") Addresses-Coverity-ID: 1501700 ("Uninitialized scalar variable") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
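The shape of the fix, as a simplified before/after (not the literal hns3 function):

  /* before: ret is never assigned, so whatever is on the stack is returned */
  static int uninit_all_ring_before(void)
  {
          int ret;
          /* ... per-ring uninit calls, none of which set ret any more ... */
          return ret;
  }

  /* after: the variable is dropped and 0 is returned */
  static int uninit_all_ring_after(void)
  {
          /* ... per-ring uninit calls ... */
          return 0;
  }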
-
Nobuhiro Iwamatsu authored
plat_dat is initialized by stmmac_probe_config_dt(), so initialization via priv->plat is not required. This removes the unnecessary initialization and variables. Signed-off-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
It is simpler to make net->net_cookie a plain u64 written once in setup_net() instead of looping and using atomic64 helpers. Lorenz Bauer wants to add a SO_NETNS_COOKIE socket option, and this patch makes his patch series simpler. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Lorenz Bauer <lmb@cloudflare.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
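A sketch of the simplification (illustrative only; the in-tree code may use a dedicated cookie generator helper): the cookie becomes a plain u64 assigned exactly once while the namespace is set up.

  #include <linux/atomic.h>
  #include <linux/types.h>

  static atomic64_t net_cookie_gen;          /* single global generator */

  struct net_sketch {
          u64 net_cookie;                    /* plain u64, written once */
  };

  static void setup_net_cookie(struct net_sketch *net)
  {
          net->net_cookie = atomic64_inc_return(&net_cookie_gen);
  }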
-
Heiner Kallweit authored
So far we don't re-configure WOL-related register bits when waking up from hibernation. I'm not aware of any problem reports, but better to play safe and call __rtl8169_set_wol() in the resume() path too. To achieve this, move the call to __rtl8169_set_wol() to rtl8169_net_resume() and rename the function to rtl8169_runtime_resume(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Michael Walle says: ==================== net: phy: icplus: cleanups and new features Clean up the PHY drivers for IC Plus devices and add PHY counters and MDIX support for the IP101A/G. Patch 5 adds model detection based on the behavior of the PHY. Unfortunately, the IP101A shares its PHY ID with the IP101G, but the latter provides more features. Try to detect the newer model by accessing the page selection register: if it is writeable, it is assumed to be an IP101G. With this detection in place, we can now access registers >= 16 in a correct way on the IP101G, that is, by first selecting the correct page. This might previously have worked because no one ever set another active page before booting Linux. The last two patches add the new features. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
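A sketch of the detection idea from patch 5 (the register number and page value below are placeholders, not datasheet values): write the page-select register and check whether the value sticks.

  #include <linux/phy.h>

  #define PAGE_SEL_REG 20        /* placeholder register number */
  #define PROBE_PAGE   16        /* placeholder page value */

  static bool phy_is_ip101g_sketch(struct phy_device *phydev)
  {
          int ret;

          ret = phy_write(phydev, PAGE_SEL_REG, PROBE_PAGE);
          if (ret < 0)
                  return false;

          /* on the IP101A the register is not writeable, so the read-back
           * will not match; the real driver also restores the default page */
          ret = phy_read(phydev, PAGE_SEL_REG);
          return ret == PROBE_PAGE;
  }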
-
Michael Walle authored
Implement the operations to set the desired MDIX mode and retrieve the current mode. This feature was tested with an IP101G. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Walle authored
The IP101G provides three counters: RX packets, CRC errors and symbol errors. The error counters can be configured to clear automatically on read; unfortunately, this isn't true for the RX packet counter. Because of this, and because the RX packet counter is more likely to overflow than the error counters, implement support only for the error counters. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-