Commit e485f3a6 authored by Tony Nguyen, committed by David S. Miller

ixgb: Remove ixgb driver

There are likely no users of this driver as the hardware has been
discontinued since 2010. Remove the driver and all references to it
in documentation.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent a593a2fc
@@ -418,7 +418,6 @@ That is, the recovery API only requires that:
- drivers/net/e100.c
- drivers/net/e1000
- drivers/net/e1000e
- drivers/net/ixgb
- drivers/net/ixgbe
- drivers/net/cxgb3
- drivers/net/s2io.c
......
@@ -31,7 +31,6 @@ Contents:
intel/fm10k
intel/igb
intel/igbvf
intel/ixgb
intel/ixgbe
intel/ixgbevf
intel/i40e
......
.. SPDX-License-Identifier: GPL-2.0+
=====================================================================
Linux Base Driver for 10 Gigabit Intel(R) Ethernet Network Connection
=====================================================================
October 1, 2018
Contents
========
- In This Release
- Identifying Your Adapter
- Command Line Parameters
- Improving Performance
- Additional Configurations
- Known Issues/Troubleshooting
- Support
In This Release
===============
This file describes the ixgb Linux Base Driver for the 10 Gigabit Intel(R)
Network Connection. This driver includes support for Itanium(R)2-based
systems.
For questions related to hardware requirements, refer to the documentation
supplied with your 10 Gigabit adapter. All hardware requirements listed apply
to use with Linux.
The following features are available in this kernel:
- Native VLANs
- Channel Bonding (teaming)
- SNMP
Channel Bonding documentation can be found in the Linux kernel source:
/Documentation/networking/bonding.rst
The driver information previously displayed in the /proc filesystem is not
supported in this release. Alternatively, you can use ethtool (version 1.6
or later), lspci, and iproute2 to obtain the same information.
Instructions on updating ethtool can be found in the section "Additional
Configurations" later in this document.
Identifying Your Adapter
========================
The following Intel network adapters are compatible with the drivers in this
release:
+------------+------------------------------+----------------------------------+
| Controller | Adapter Name | Physical Layer |
+============+==============================+==================================+
| 82597EX | Intel(R) PRO/10GbE LR/SR/CX4 | - 10G Base-LR (fiber) |
| | Server Adapters | - 10G Base-SR (fiber) |
| | | - 10G Base-CX4 (copper) |
+------------+------------------------------+----------------------------------+
For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:
https://support.intel.com
Command Line Parameters
=======================
If the driver is built as a module, the following optional parameters are
used by entering them on the command line with the modprobe command using
this syntax::
modprobe ixgb [<option>=<VAL1>,<VAL2>,...]
For example, with two 10GbE PCI adapters, entering::
modprobe ixgb TxDescriptors=80,128
loads the ixgb driver with 80 TX resources for the first adapter and 128 TX
resources for the second adapter.
The default value for each parameter is generally the recommended setting,
unless otherwise noted.
Copybreak
---------
:Valid Range: 0-XXXX
:Default Value: 256
This is the maximum size of packet that is copied to a new buffer on
receive.
Debug
-----
:Valid Range: 0-16 (0=none,...,16=all)
:Default Value: 0
This parameter adjusts the level of debug messages displayed in the
system logs.
FlowControl
-----------
:Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
:Default Value: 1 if no EEPROM, otherwise read from EEPROM
This parameter controls the automatic generation (Tx) of, and response (Rx)
to, Ethernet PAUSE frames. There are hardware bugs associated with enabling
Tx flow control, so beware.
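For example, using the modprobe syntax shown above, flow control can be
forced off rather than read from the EEPROM::

    modprobe ixgb FlowControl=0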
RxDescriptors
-------------
:Valid Range: 64-4096
:Default Value: 1024
This value is the number of receive descriptors allocated by the driver.
Increasing this value allows the driver to buffer more incoming packets.
Each descriptor is 16 bytes. A receive buffer is also allocated for
each descriptor and can be either 2048, 4096, 8192, or 16384 bytes,
depending on the MTU setting. When the MTU size is 1500 or less, the
receive buffer size is 2048 bytes. When the MTU is greater than 1500 the
receive buffer size will be either 4096, 8192, or 16384 bytes. The
maximum MTU size is 16114.
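For example, to let the driver buffer more incoming packets (2048 lies
within the documented 64-4096 range)::

    modprobe ixgb RxDescriptors=2048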
TxDescriptors
-------------
:Valid Range: 64-4096
:Default Value: 256
This value is the number of transmit descriptors allocated by the driver.
Increasing this value allows the driver to queue more transmits. Each
descriptor is 16 bytes.
RxIntDelay
----------
:Valid Range: 0-65535 (0=off)
:Default Value: 72
This value delays the generation of receive interrupts in units of
0.8192 microseconds. Receive interrupt reduction can improve CPU
efficiency if properly tuned for specific network traffic. Increasing
this value adds extra latency to frame reception and can end up
decreasing the throughput of TCP traffic. If the system is reporting
dropped receives, this value may be set too high, causing the driver to
run out of available receive descriptors.
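For example, the receive interrupt delay can be turned off entirely while
diagnosing dropped receives::

    modprobe ixgb RxIntDelay=0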
TxIntDelay
----------
:Valid Range: 0-65535 (0=off)
:Default Value: 32
This value delays the generation of transmit interrupts in units of
0.8192 microseconds. Transmit interrupt reduction can improve CPU
efficiency if properly tuned for specific network traffic. Increasing
this value adds extra latency to frame transmission and can end up
decreasing the throughput of TCP traffic. If this value is set too high,
it will cause the driver to run out of available transmit descriptors.
XsumRX
------
:Valid Range: 0-1
:Default Value: 1
A value of '1' indicates that the driver should enable IP checksum
offload for received packets (both UDP and TCP) to the adapter hardware.
RxFCHighThresh
--------------
:Valid Range: 1,536-262,136 (0x600 - 0x3FFF8, 8 byte granularity)
:Default Value: 196,608 (0x30000)
Receive Flow control high threshold (when we send a pause frame)
RxFCLowThresh
-------------
:Valid Range: 64-262,136 (0x40 - 0x3FFF8, 8 byte granularity)
:Default Value: 163,840 (0x28000)
Receive Flow control low threshold (when we send a resume frame)
FCReqTimeout
------------
:Valid Range: 1-65535
:Default Value: 65535
Flow control request timeout (how long to pause the link partner's tx)
IntDelayEnable
--------------
:Valid Range: 0-1
:Default Value: 1
Interrupt Delay: 0 disables transmit interrupt delay and 1 enables it.
Improving Performance
=====================
With the 10 Gigabit server adapters, the default Linux configuration will
very likely limit the total available throughput artificially. There is a set
of configuration changes that, when applied together, will increase the ability
of Linux to transmit and receive data. The following enhancements were
originally acquired from settings published at https://www.spec.org/web99/ for
various submitted results using Linux.
NOTE:
These changes are only suggestions, and serve as a starting point for
tuning your network performance.
The changes are made in three major ways, listed in order of greatest effect:
- Use ip link to modify the mtu (maximum transmission unit) and the txqueuelen
parameter.
- Use sysctl to modify /proc parameters (essentially kernel tuning)
- Use setpci to modify the MMRBC field in PCI-X configuration space to increase
transmit burst lengths on the bus.
NOTE:
setpci modifies the adapter's configuration registers to allow it to read
up to 4k bytes at a time (for transmits). However, for some systems the
behavior after modifying this register may be undefined (possibly errors of
some kind). A power-cycle, hard reset or explicitly setting the e6 register
back to 22 (setpci -d 8086:1a48 e6.b=22) may be required to get back to a
stable configuration.
- COPY these lines and paste them into ixgb_perf.sh:
::
#!/bin/bash
echo "configuring network performance , edit this file to change the interface
or device ID of 10GbE card"
# set mmrbc to 4k reads, modify only Intel 10GbE device IDs
# replace 1a48 with appropriate 10GbE device's ID installed on the system,
# if needed.
setpci -d 8086:1a48 e6.b=2e
# set the MTU (max transmission unit) - it requires your switch and clients
# to change as well.
# set the txqueuelen
# your ixgb adapter should be loaded as eth1 for this to work, change if needed
ip li set dev eth1 mtu 9000 txqueuelen 1000 up
# call the sysctl utility to modify /proc/sys entries
sysctl -p ./sysctl_ixgb.conf
- COPY these lines and paste them into sysctl_ixgb.conf:
::
# some of the defaults may be different for your kernel
# call this file with sysctl -p <this file>
# these are just suggested values that worked well to increase throughput in
# several network benchmark tests, your mileage may vary
### IPV4 specific settings
# turn TCP timestamp support off, default 1, reduces CPU use
net.ipv4.tcp_timestamps = 0
# turn SACK support off, default on
# on systems with a VERY fast bus -> memory interface this is the big gainer
net.ipv4.tcp_sack = 0
# set min/default/max TCP read buffer, default 4096 87380 174760
net.ipv4.tcp_rmem = 10000000 10000000 10000000
# set min/pressure/max TCP write buffer, default 4096 16384 131072
net.ipv4.tcp_wmem = 10000000 10000000 10000000
# set min/pressure/max TCP buffer space, default 31744 32256 32768
net.ipv4.tcp_mem = 10000000 10000000 10000000
### CORE settings (mostly for socket and UDP effect)
# set maximum receive socket buffer size, default 131071
net.core.rmem_max = 524287
# set maximum send socket buffer size, default 131071
net.core.wmem_max = 524287
# set default receive socket buffer size, default 65535
net.core.rmem_default = 524287
# set default send socket buffer size, default 65535
net.core.wmem_default = 524287
# set maximum amount of option memory buffers, default 10240
net.core.optmem_max = 524287
# set number of unprocessed input packets before kernel starts dropping them; default 300
net.core.netdev_max_backlog = 300000
Edit the ixgb_perf.sh script if necessary to change eth1 to whatever interface
your ixgb driver is using and/or replace '1a48' with the appropriate 10GbE
device ID installed on the system.
NOTE:
Unless these scripts are added to the boot process, these changes will
last only until the next system reboot.
Resolving Slow UDP Traffic
--------------------------
If your server does not seem to be able to receive UDP traffic as fast as it
can receive TCP traffic, it could be because Linux, by default, does not set
the network stack buffers as large as they need to be to support high UDP
transfer rates. One way to alleviate this problem is to allow more memory to
be used by the IP stack to store incoming data.
For instance, use the commands::
sysctl -w net.core.rmem_max=262143
and::
sysctl -w net.core.rmem_default=262143
to increase the read buffer memory max and default to 262143 (256k - 1) from
defaults of max=131071 (128k - 1) and default=65535 (64k - 1). These variables
will increase the amount of memory used by the network stack for receives, and
can be increased significantly more if necessary for your application.
Additional Configurations
=========================
Configuring the Driver on Different Distributions
-------------------------------------------------
Configuring a network driver to load properly when the system is started is
distribution dependent. Typically, the configuration process involves adding
an alias line to /etc/modprobe.conf as well as editing other system startup
scripts and/or configuration files. Many popular Linux distributions ship
with tools to make these changes for you. To learn the proper way to
configure a network device for your system, refer to your distribution
documentation. If during this process you are asked for the driver or module
name, the name for the Linux Base Driver for the Intel 10GbE Family of
Adapters is ixgb.
Viewing Link Messages
---------------------
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set the dmesg console log level to eight by entering the
following::
dmesg -n 8
NOTE: This setting is not saved across reboots.
Jumbo Frames
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16114. Use the ip command to
increase the MTU size. For example::
ip li set dev ethx mtu 9000
The maximum MTU setting for Jumbo Frames is 16114. This value coincides
with the maximum Jumbo Frames size of 16128.
Ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. ethtool
version 1.6 or later is required for this functionality.
The latest release of ethtool can be found at
https://www.kernel.org/pub/software/network/ethtool/
NOTE:
ethtool version 1.6 supports only a limited set of ethtool options.
Support for a more complete ethtool feature set can be enabled by
upgrading to the latest version.
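For example, assuming the adapter is loaded as eth1, the driver name and
version as well as the driver's statistics can be displayed with::

    ethtool -i eth1
    ethtool -S eth1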
NAPI
----
NAPI (Rx polling mode) is supported in the ixgb driver.
See https://wiki.linuxfoundation.org/networking/napi for more information on
NAPI.
Known Issues/Troubleshooting
============================
NOTE:
After installing the driver, if your Intel Network Connection is not
working, verify in the "In This Release" section of the readme that you have
installed the correct driver.
Cable Interoperability Issue with Fujitsu XENPAK Module in SmartBits Chassis
----------------------------------------------------------------------------
Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4
Server adapter is connected to a Fujitsu XENPAK CX4 module in a SmartBits
chassis using 15 m/24AWG cable assemblies manufactured by Fujitsu or Leoni.
The CRC errors may be received either by the Intel(R) PRO/10GbE CX4
Server adapter or the SmartBits. If this situation occurs, using a different
cable assembly may resolve the issue.
Cable Interoperability Issues with HP Procurve 3400cl Switch Port
-----------------------------------------------------------------
Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4 Server
adapter is connected to an HP Procurve 3400cl switch port using short cables
(1 m or shorter). If this situation occurs, using a longer cable may resolve
the issue.
Excessive CRC errors may be observed using Fujitsu 24AWG cable assemblies that
are 10 m or longer, or when using a Leoni 15 m/24AWG cable assembly. The CRC
errors may be received either by the CX4 Server adapter or at the switch. If
this situation occurs, using a different cable assembly may resolve the issue.
Jumbo Frames System Requirement
-------------------------------
Memory allocation failures have been observed on Linux systems with 64 MB
of RAM or less that are running Jumbo Frames. If you are using Jumbo
Frames, your system may require more than the advertised minimum
requirement of 64 MB of system memory.
Performance Degradation with Jumbo Frames
-----------------------------------------
Degradation in throughput performance may be observed in some Jumbo frames
environments. If this is observed, increasing the application's socket buffer
size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
See the specific application manual and
/usr/src/linux*/Documentation/networking/ip-sysctl.txt for more details.
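For example, the min/default/max TCP read buffer triple can be raised with
sysctl; the values below are only illustrative starting points::

    sysctl -w net.ipv4.tcp_rmem="4096 262144 10000000"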
Allocating Rx Buffers when Using Jumbo Frames
---------------------------------------------
Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
the available memory is heavily fragmented. This issue may be seen with PCI-X
adapters or with packet split disabled. This can be reduced or eliminated
by changing the amount of available memory for receive buffer allocation, by
increasing /proc/sys/vm/min_free_kbytes.
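For example, the reserve can be raised with sysctl; 16384 is only an
illustrative value and should be tuned to the system's memory size::

    sysctl -w vm.min_free_kbytes=16384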
Multiple Interfaces on Same Ethernet Broadcast Network
------------------------------------------------------
Due to the default ARP behavior on Linux, it is not possible to have
one system on two IP networks in the same Ethernet broadcast domain
(non-partitioned switch) behave as expected. All Ethernet interfaces
will respond to IP traffic for any IP address assigned to the system.
This results in unbalanced receive traffic.
If you have multiple interfaces in a server, do either of the following:
- Turn on ARP filtering by entering (a sysctl equivalent is shown after this list)::
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
- Install the interfaces in separate broadcast domains - either in
different switches or in a switch partitioned to VLANs.
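The arp_filter setting from the first option above can also be applied with
sysctl, which makes it easy to persist via /etc/sysctl.conf::

    sysctl -w net.ipv4.conf.all.arp_filter=1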
UDP Stress Test Dropped Packet Issue
--------------------------------------
Under a small-packet UDP stress test with the 10GbE driver, the Linux system
may drop UDP packets because the socket buffers are full. You may want
to change the driver's Flow Control variables to the minimum value for
controlling packet reception.
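For example, using the documented minimums of the receive flow control
thresholds from the Command Line Parameters section::

    modprobe ixgb RxFCHighThresh=1536 RxFCLowThresh=64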
Tx Hangs Possible Under Stress
------------------------------
Under stress conditions, if Tx hangs occur, turning off TSO with
"ethtool -K eth0 tso off" may resolve the problem.
Support
=======
For general information, go to the Intel support website at:
https://www.intel.com/support/
or the Intel Wired Networking project hosted by Sourceforge at:
https://sourceforge.net/projects/e1000
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to e1000-devel@lists.sf.net
@@ -487,7 +487,6 @@ CONFIG_CHELSIO_T4=m
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
CONFIG_IXGB=y
CONFIG_IXGBE=y
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
......
@@ -154,7 +154,6 @@ CONFIG_TUN=m
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
CONFIG_IXGB=y
CONFIG_IXGBE=y
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
......
@@ -207,7 +207,6 @@ CONFIG_VIRTIO_NET=m
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
CONFIG_IXGB=y
CONFIG_IXGBE=y
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
......
@@ -280,7 +280,6 @@ CONFIG_SUNDANCE=m
CONFIG_PCMCIA_FMVJ18X=m
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_IXGB=m
CONFIG_SKGE=m
CONFIG_SKY2=m
CONFIG_MYRI10GE=m
......
@@ -170,7 +170,6 @@ CONFIG_S2IO=m
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_MLX4_EN=m
......
@@ -182,7 +182,6 @@ CONFIG_IBMVNIC=m
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_MLX4_EN=m
......
@@ -102,7 +102,6 @@ CONFIG_PCNET32=y
CONFIG_TIGON3=y
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_IXGB=m
CONFIG_SUNGEM=y
CONFIG_BROADCOM_PHY=m
CONFIG_MARVELL_PHY=y
......
@@ -455,7 +455,6 @@ CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_MV643XX_ETH=m
CONFIG_SKGE=m
......
@@ -164,7 +164,6 @@ CONFIG_IBMVNIC=y
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_MLX4_EN=m
......
@@ -149,7 +149,6 @@ CONFIG_BE2NET=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
# CONFIG_NET_VENDOR_MARVELL is not set
......
@@ -139,23 +139,6 @@ config IGBVF
To compile this driver as a module, choose M here. The module
will be called igbvf.
config IXGB
tristate "Intel(R) PRO/10GbE support"
depends on PCI
help
This driver supports Intel(R) PRO/10GbE family of adapters for
PCI-X type cards. For PCI-E type cards, use the "ixgbe" driver
instead. For more information on how to identify your adapter, go
to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
More specific information on configuring the driver is in
<file:Documentation/networking/device_drivers/ethernet/intel/ixgb.rst>.
To compile this driver as a module, choose M here. The module
will be called ixgb.
config IXGBE
tristate "Intel(R) 10GbE PCI Express adapters support"
depends on PCI
......
@@ -12,7 +12,6 @@ obj-$(CONFIG_IGBVF) += igbvf/
obj-$(CONFIG_IXGBE) += ixgbe/
obj-$(CONFIG_IXGBEVF) += ixgbevf/
obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IXGB) += ixgb/
obj-$(CONFIG_IAVF) += iavf/
obj-$(CONFIG_FM10K) += fm10k/
obj-$(CONFIG_ICE) += ice/
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 1999 - 2008 Intel Corporation.
#
# Makefile for the Intel(R) PRO/10GbE ethernet driver
#
obj-$(CONFIG_IXGB) += ixgb.o
ixgb-objs := ixgb_main.o ixgb_hw.o ixgb_ee.o ixgb_ethtool.o ixgb_param.o
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#ifndef _IXGB_H_
#define _IXGB_H_
#include <linux/stddef.h>
#include <linux/module.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include <linux/mm.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/delay.h>
#include <linux/timer.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/interrupt.h>
#include <linux/string.h>
#include <linux/pagemap.h>
#include <linux/dma-mapping.h>
#include <linux/bitops.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <linux/capability.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <net/pkt_sched.h>
#include <linux/list.h>
#include <linux/reboot.h>
#include <net/checksum.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#define BAR_0 0
#define BAR_1 1
struct ixgb_adapter;
#include "ixgb_hw.h"
#include "ixgb_ee.h"
#include "ixgb_ids.h"
/* TX/RX descriptor defines */
#define DEFAULT_TXD 256
#define MAX_TXD 4096
#define MIN_TXD 64
/* hardware cannot reliably support more than 512 descriptors owned by
* hardware descriptor cache otherwise an unreliable ring under heavy
* receive load may result */
#define DEFAULT_RXD 512
#define MAX_RXD 512
#define MIN_RXD 64
/* Supported Rx Buffer Sizes */
#define IXGB_RXBUFFER_2048 2048
#define IXGB_RXBUFFER_4096 4096
#define IXGB_RXBUFFER_8192 8192
#define IXGB_RXBUFFER_16384 16384
/* How many Rx Buffers do we bundle into one write to the hardware ? */
#define IXGB_RX_BUFFER_WRITE 8 /* Must be power of 2 */
/* wrapper around a pointer to a socket buffer,
* so a DMA handle can be stored along with the buffer */
struct ixgb_buffer {
struct sk_buff *skb;
dma_addr_t dma;
unsigned long time_stamp;
u16 length;
u16 next_to_watch;
u16 mapped_as_page;
};
struct ixgb_desc_ring {
/* pointer to the descriptor ring memory */
void *desc;
/* physical address of the descriptor ring */
dma_addr_t dma;
/* length of descriptor ring in bytes */
unsigned int size;
/* number of descriptors in the ring */
unsigned int count;
/* next descriptor to associate a buffer with */
unsigned int next_to_use;
/* next descriptor to check for DD status bit */
unsigned int next_to_clean;
/* array of buffer information structs */
struct ixgb_buffer *buffer_info;
};
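/* Number of descriptors the driver may still hand to hardware: the
 * distance from next_to_use forward (wrapping around the ring) to
 * next_to_clean, minus one so that a completely full ring is never
 * mistaken for an empty one (next_to_use == next_to_clean). */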
#define IXGB_DESC_UNUSED(R) \
((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
(R)->next_to_clean - (R)->next_to_use - 1)
#define IXGB_GET_DESC(R, i, type) (&(((struct type *)((R).desc))[i]))
#define IXGB_RX_DESC(R, i) IXGB_GET_DESC(R, i, ixgb_rx_desc)
#define IXGB_TX_DESC(R, i) IXGB_GET_DESC(R, i, ixgb_tx_desc)
#define IXGB_CONTEXT_DESC(R, i) IXGB_GET_DESC(R, i, ixgb_context_desc)
/* board specific private data structure */
struct ixgb_adapter {
struct timer_list watchdog_timer;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
u32 bd_number;
u32 rx_buffer_len;
u32 part_num;
u16 link_speed;
u16 link_duplex;
struct work_struct tx_timeout_task;
/* TX */
struct ixgb_desc_ring tx_ring ____cacheline_aligned_in_smp;
unsigned int restart_queue;
unsigned long timeo_start;
u32 tx_cmd_type;
u64 hw_csum_tx_good;
u64 hw_csum_tx_error;
u32 tx_int_delay;
u32 tx_timeout_count;
bool tx_int_delay_enable;
bool detect_tx_hung;
/* RX */
struct ixgb_desc_ring rx_ring;
u64 hw_csum_rx_error;
u64 hw_csum_rx_good;
u32 rx_int_delay;
bool rx_csum;
/* OS defined structs */
struct napi_struct napi;
struct net_device *netdev;
struct pci_dev *pdev;
/* structs defined in ixgb_hw.h */
struct ixgb_hw hw;
u16 msg_enable;
struct ixgb_hw_stats stats;
u32 alloc_rx_buff_failed;
bool have_msi;
unsigned long flags;
};
enum ixgb_state_t {
/* TBD
__IXGB_TESTING,
__IXGB_RESETTING,
*/
__IXGB_DOWN
};
/* Exported from other modules */
void ixgb_check_options(struct ixgb_adapter *adapter);
void ixgb_set_ethtool_ops(struct net_device *netdev);
extern char ixgb_driver_name[];
void ixgb_set_speed_duplex(struct net_device *netdev);
int ixgb_up(struct ixgb_adapter *adapter);
void ixgb_down(struct ixgb_adapter *adapter, bool kill_watchdog);
void ixgb_reset(struct ixgb_adapter *adapter);
int ixgb_setup_rx_resources(struct ixgb_adapter *adapter);
int ixgb_setup_tx_resources(struct ixgb_adapter *adapter);
void ixgb_free_rx_resources(struct ixgb_adapter *adapter);
void ixgb_free_tx_resources(struct ixgb_adapter *adapter);
void ixgb_update_stats(struct ixgb_adapter *adapter);
#endif /* _IXGB_H_ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "ixgb_hw.h"
#include "ixgb_ee.h"
/* Local prototypes */
static u16 ixgb_shift_in_bits(struct ixgb_hw *hw);
static void ixgb_shift_out_bits(struct ixgb_hw *hw,
u16 data,
u16 count);
static void ixgb_standby_eeprom(struct ixgb_hw *hw);
static bool ixgb_wait_eeprom_command(struct ixgb_hw *hw);
static void ixgb_cleanup_eeprom(struct ixgb_hw *hw);
/******************************************************************************
* Raises the EEPROM's clock input.
*
* hw - Struct containing variables accessed by shared code
* eecd_reg - EECD's current value
*****************************************************************************/
static void
ixgb_raise_clock(struct ixgb_hw *hw,
u32 *eecd_reg)
{
/* Raise the clock input to the EEPROM (by setting the SK bit), and then
* wait 50 microseconds.
*/
*eecd_reg = *eecd_reg | IXGB_EECD_SK;
IXGB_WRITE_REG(hw, EECD, *eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
}
/******************************************************************************
* Lowers the EEPROM's clock input.
*
* hw - Struct containing variables accessed by shared code
* eecd_reg - EECD's current value
*****************************************************************************/
static void
ixgb_lower_clock(struct ixgb_hw *hw,
u32 *eecd_reg)
{
/* Lower the clock input to the EEPROM (by clearing the SK bit), and then
* wait 50 microseconds.
*/
*eecd_reg = *eecd_reg & ~IXGB_EECD_SK;
IXGB_WRITE_REG(hw, EECD, *eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
}
/******************************************************************************
* Shift data bits out to the EEPROM.
*
* hw - Struct containing variables accessed by shared code
* data - data to send to the EEPROM
* count - number of bits to shift out
*****************************************************************************/
static void
ixgb_shift_out_bits(struct ixgb_hw *hw,
u16 data,
u16 count)
{
u32 eecd_reg;
u32 mask;
/* We need to shift "count" bits out to the EEPROM. So, the value in the
* "data" parameter will be shifted out to the EEPROM one bit at a time.
* In order to do this, "data" must be broken down into bits.
*/
mask = 0x01 << (count - 1);
eecd_reg = IXGB_READ_REG(hw, EECD);
eecd_reg &= ~(IXGB_EECD_DO | IXGB_EECD_DI);
do {
/* A "1" is shifted out to the EEPROM by setting bit "DI" to a "1",
* and then raising and then lowering the clock (the SK bit controls
* the clock input to the EEPROM). A "0" is shifted out to the EEPROM
* by setting "DI" to "0" and then raising and then lowering the clock.
*/
eecd_reg &= ~IXGB_EECD_DI;
if (data & mask)
eecd_reg |= IXGB_EECD_DI;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
ixgb_raise_clock(hw, &eecd_reg);
ixgb_lower_clock(hw, &eecd_reg);
mask = mask >> 1;
} while (mask);
/* We leave the "DI" bit set to "0" when we leave this routine. */
eecd_reg &= ~IXGB_EECD_DI;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
}
/******************************************************************************
* Shift data bits in from the EEPROM
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static u16
ixgb_shift_in_bits(struct ixgb_hw *hw)
{
u32 eecd_reg;
u32 i;
u16 data;
/* In order to read a register from the EEPROM, we need to shift 16 bits
* in from the EEPROM. Bits are "shifted in" by raising the clock input to
* the EEPROM (setting the SK bit), and then reading the value of the "DO"
* bit. During this "shifting in" process the "DI" bit should always be
clear.
*/
eecd_reg = IXGB_READ_REG(hw, EECD);
eecd_reg &= ~(IXGB_EECD_DO | IXGB_EECD_DI);
data = 0;
for (i = 0; i < 16; i++) {
data = data << 1;
ixgb_raise_clock(hw, &eecd_reg);
eecd_reg = IXGB_READ_REG(hw, EECD);
eecd_reg &= ~(IXGB_EECD_DI);
if (eecd_reg & IXGB_EECD_DO)
data |= 1;
ixgb_lower_clock(hw, &eecd_reg);
}
return data;
}
/******************************************************************************
* Prepares EEPROM for access
*
* hw - Struct containing variables accessed by shared code
*
* Lowers EEPROM clock. Clears input pin. Sets the chip select pin. This
* function should be called before issuing a command to the EEPROM.
*****************************************************************************/
static void
ixgb_setup_eeprom(struct ixgb_hw *hw)
{
u32 eecd_reg;
eecd_reg = IXGB_READ_REG(hw, EECD);
/* Clear SK and DI */
eecd_reg &= ~(IXGB_EECD_SK | IXGB_EECD_DI);
IXGB_WRITE_REG(hw, EECD, eecd_reg);
/* Set CS */
eecd_reg |= IXGB_EECD_CS;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
}
/******************************************************************************
* Returns EEPROM to a "standby" state
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_standby_eeprom(struct ixgb_hw *hw)
{
u32 eecd_reg;
eecd_reg = IXGB_READ_REG(hw, EECD);
/* Deselect EEPROM */
eecd_reg &= ~(IXGB_EECD_CS | IXGB_EECD_SK);
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
/* Clock high */
eecd_reg |= IXGB_EECD_SK;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
/* Select EEPROM */
eecd_reg |= IXGB_EECD_CS;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
/* Clock low */
eecd_reg &= ~IXGB_EECD_SK;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
}
/******************************************************************************
* Raises then lowers the EEPROM's clock pin
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_clock_eeprom(struct ixgb_hw *hw)
{
u32 eecd_reg;
eecd_reg = IXGB_READ_REG(hw, EECD);
/* Rising edge of clock */
eecd_reg |= IXGB_EECD_SK;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
/* Falling edge of clock */
eecd_reg &= ~IXGB_EECD_SK;
IXGB_WRITE_REG(hw, EECD, eecd_reg);
IXGB_WRITE_FLUSH(hw);
udelay(50);
}
/******************************************************************************
* Terminates a command by lowering the EEPROM's chip select pin
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_cleanup_eeprom(struct ixgb_hw *hw)
{
u32 eecd_reg;
eecd_reg = IXGB_READ_REG(hw, EECD);
eecd_reg &= ~(IXGB_EECD_CS | IXGB_EECD_DI);
IXGB_WRITE_REG(hw, EECD, eecd_reg);
ixgb_clock_eeprom(hw);
}
/******************************************************************************
* Waits for the EEPROM to finish the current command.
*
* hw - Struct containing variables accessed by shared code
*
* The command is done when the EEPROM's data out pin goes high.
*
* Returns:
* true: EEPROM data pin is high before timeout.
* false: Time expired.
*****************************************************************************/
static bool
ixgb_wait_eeprom_command(struct ixgb_hw *hw)
{
u32 eecd_reg;
u32 i;
/* Toggle the CS line. This in effect tells the EEPROM to actually execute
* the command in question.
*/
ixgb_standby_eeprom(hw);
/* Now read DO repeatedly until it is high (equal to '1'). The EEPROM will
* signal that the command has been completed by raising the DO signal.
* If DO does not go high in 10 milliseconds, then error out.
*/
for (i = 0; i < 200; i++) {
eecd_reg = IXGB_READ_REG(hw, EECD);
if (eecd_reg & IXGB_EECD_DO)
return true;
udelay(50);
}
ASSERT(0);
return false;
}
/******************************************************************************
* Verifies that the EEPROM has a valid checksum
*
* hw - Struct containing variables accessed by shared code
*
* Reads the first 64 16 bit words of the EEPROM and sums the values read.
* If the sum of the 64 16 bit words is 0xBABA, the EEPROM's checksum is
* valid.
*
* Returns:
* true: Checksum is valid
* false: Checksum is not valid.
*****************************************************************************/
bool
ixgb_validate_eeprom_checksum(struct ixgb_hw *hw)
{
u16 checksum = 0;
u16 i;
for (i = 0; i < (EEPROM_CHECKSUM_REG + 1); i++)
checksum += ixgb_read_eeprom(hw, i);
if (checksum == (u16) EEPROM_SUM)
return true;
else
return false;
}
/******************************************************************************
* Calculates the EEPROM checksum and writes it to the EEPROM
*
* hw - Struct containing variables accessed by shared code
*
* Sums the first 63 16 bit words of the EEPROM. Subtracts the sum from 0xBABA.
* Writes the difference to word offset 63 of the EEPROM.
*****************************************************************************/
void
ixgb_update_eeprom_checksum(struct ixgb_hw *hw)
{
u16 checksum = 0;
u16 i;
for (i = 0; i < EEPROM_CHECKSUM_REG; i++)
checksum += ixgb_read_eeprom(hw, i);
checksum = (u16) EEPROM_SUM - checksum;
ixgb_write_eeprom(hw, EEPROM_CHECKSUM_REG, checksum);
}
/******************************************************************************
* Writes a 16 bit word to a given offset in the EEPROM.
*
* hw - Struct containing variables accessed by shared code
* reg - offset within the EEPROM to be written to
* data - 16 bit word to be written to the EEPROM
*
* If ixgb_update_eeprom_checksum is not called after this function, the
* EEPROM will most likely contain an invalid checksum.
*
*****************************************************************************/
void
ixgb_write_eeprom(struct ixgb_hw *hw, u16 offset, u16 data)
{
struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
/* Prepare the EEPROM for writing */
ixgb_setup_eeprom(hw);
/* Send the 9-bit EWEN (write enable) command to the EEPROM (5-bit opcode
* plus 4-bit dummy). This puts the EEPROM into write/erase mode.
*/
ixgb_shift_out_bits(hw, EEPROM_EWEN_OPCODE, 5);
ixgb_shift_out_bits(hw, 0, 4);
/* Prepare the EEPROM */
ixgb_standby_eeprom(hw);
/* Send the Write command (3-bit opcode + 6-bit addr) */
ixgb_shift_out_bits(hw, EEPROM_WRITE_OPCODE, 3);
ixgb_shift_out_bits(hw, offset, 6);
/* Send the data */
ixgb_shift_out_bits(hw, data, 16);
ixgb_wait_eeprom_command(hw);
/* Recover from write */
ixgb_standby_eeprom(hw);
/* Send the 9-bit EWDS (write disable) command to the EEPROM (5-bit
* opcode plus 4-bit dummy). This takes the EEPROM out of write/erase
* mode.
*/
ixgb_shift_out_bits(hw, EEPROM_EWDS_OPCODE, 5);
ixgb_shift_out_bits(hw, 0, 4);
/* Done with writing */
ixgb_cleanup_eeprom(hw);
/* clear the init_ctrl_reg_1 to signify that the cache is invalidated */
ee_map->init_ctrl_reg_1 = cpu_to_le16(EEPROM_ICW1_SIGNATURE_CLEAR);
}
/******************************************************************************
* Reads a 16 bit word from the EEPROM.
*
* hw - Struct containing variables accessed by shared code
* offset - offset of 16 bit word in the EEPROM to read
*
* Returns:
* The 16-bit value read from the eeprom
*****************************************************************************/
u16
ixgb_read_eeprom(struct ixgb_hw *hw,
u16 offset)
{
u16 data;
/* Prepare the EEPROM for reading */
ixgb_setup_eeprom(hw);
/* Send the READ command (opcode + addr) */
ixgb_shift_out_bits(hw, EEPROM_READ_OPCODE, 3);
/*
* We have a 64 word EEPROM, there are 6 address bits
*/
ixgb_shift_out_bits(hw, offset, 6);
/* Read the data */
data = ixgb_shift_in_bits(hw);
/* End this read operation */
ixgb_standby_eeprom(hw);
return data;
}
/******************************************************************************
* Reads eeprom and stores data in shared structure.
* Validates eeprom checksum and eeprom signature.
*
* hw - Struct containing variables accessed by shared code
*
* Returns:
* true: if eeprom read is successful
* false: otherwise.
*****************************************************************************/
bool
ixgb_get_eeprom_data(struct ixgb_hw *hw)
{
u16 i;
u16 checksum = 0;
struct ixgb_ee_map_type *ee_map;
ENTER();
ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
pr_debug("Reading eeprom data\n");
for (i = 0; i < IXGB_EEPROM_SIZE ; i++) {
u16 ee_data;
ee_data = ixgb_read_eeprom(hw, i);
checksum += ee_data;
hw->eeprom[i] = cpu_to_le16(ee_data);
}
if (checksum != (u16) EEPROM_SUM) {
pr_debug("Checksum invalid\n");
/* clear the init_ctrl_reg_1 to signify that the cache is
* invalidated */
ee_map->init_ctrl_reg_1 = cpu_to_le16(EEPROM_ICW1_SIGNATURE_CLEAR);
return false;
}
if ((ee_map->init_ctrl_reg_1 & cpu_to_le16(EEPROM_ICW1_SIGNATURE_MASK))
!= cpu_to_le16(EEPROM_ICW1_SIGNATURE_VALID)) {
pr_debug("Signature invalid\n");
return false;
}
return true;
}
/******************************************************************************
* Local function to check if the eeprom signature is good
* If the eeprom signature is good, calls ixgb_get_eeprom_data.
*
* hw - Struct containing variables accessed by shared code
*
* Returns:
* true: eeprom signature was good and the eeprom read was successful
* false: otherwise.
******************************************************************************/
static bool
ixgb_check_and_get_eeprom_data (struct ixgb_hw* hw)
{
struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
if ((ee_map->init_ctrl_reg_1 & cpu_to_le16(EEPROM_ICW1_SIGNATURE_MASK))
== cpu_to_le16(EEPROM_ICW1_SIGNATURE_VALID)) {
return true;
} else {
return ixgb_get_eeprom_data(hw);
}
}
/******************************************************************************
* return a word from the eeprom
*
* hw - Struct containing variables accessed by shared code
* index - Offset of eeprom word
*
* Returns:
* Word at indexed offset in eeprom, if valid, 0 otherwise.
******************************************************************************/
__le16
ixgb_get_eeprom_word(struct ixgb_hw *hw, u16 index)
{
if (index < IXGB_EEPROM_SIZE && ixgb_check_and_get_eeprom_data(hw))
return hw->eeprom[index];
return 0;
}
/******************************************************************************
* return the mac address from EEPROM
*
* hw - Struct containing variables accessed by shared code
* mac_addr - Ethernet Address if EEPROM contents are valid, 0 otherwise
*
* Returns: None.
******************************************************************************/
void
ixgb_get_ee_mac_addr(struct ixgb_hw *hw,
u8 *mac_addr)
{
int i;
struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
ENTER();
if (ixgb_check_and_get_eeprom_data(hw)) {
for (i = 0; i < ETH_ALEN; i++) {
mac_addr[i] = ee_map->mac_addr[i];
}
pr_debug("eeprom mac address = %pM\n", mac_addr);
}
}
/******************************************************************************
* return the Printed Board Assembly number from EEPROM
*
* hw - Struct containing variables accessed by shared code
*
* Returns:
* PBA number if EEPROM contents are valid, 0 otherwise
******************************************************************************/
u32
ixgb_get_ee_pba_number(struct ixgb_hw *hw)
{
if (ixgb_check_and_get_eeprom_data(hw))
return le16_to_cpu(hw->eeprom[EEPROM_PBA_1_2_REG])
| (le16_to_cpu(hw->eeprom[EEPROM_PBA_3_4_REG])<<16);
return 0;
}
/******************************************************************************
* return the Device Id from EEPROM
*
* hw - Struct containing variables accessed by shared code
*
* Returns:
* Device Id if EEPROM contents are valid, 0 otherwise
******************************************************************************/
u16
ixgb_get_ee_device_id(struct ixgb_hw *hw)
{
struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
if (ixgb_check_and_get_eeprom_data(hw))
return le16_to_cpu(ee_map->device_id);
return 0;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#ifndef _IXGB_EE_H_
#define _IXGB_EE_H_
#define IXGB_EEPROM_SIZE 64 /* Size in words */
/* EEPROM Commands */
#define EEPROM_READ_OPCODE 0x6 /* EEPROM read opcode */
#define EEPROM_WRITE_OPCODE 0x5 /* EEPROM write opcode */
#define EEPROM_ERASE_OPCODE 0x7 /* EEPROM erase opcode */
#define EEPROM_EWEN_OPCODE 0x13 /* EEPROM erase/write enable */
#define EEPROM_EWDS_OPCODE 0x10 /* EEPROM erase/write disable */
/* EEPROM MAP (Word Offsets) */
#define EEPROM_IA_1_2_REG 0x0000
#define EEPROM_IA_3_4_REG 0x0001
#define EEPROM_IA_5_6_REG 0x0002
#define EEPROM_COMPATIBILITY_REG 0x0003
#define EEPROM_PBA_1_2_REG 0x0008
#define EEPROM_PBA_3_4_REG 0x0009
#define EEPROM_INIT_CONTROL1_REG 0x000A
#define EEPROM_SUBSYS_ID_REG 0x000B
#define EEPROM_SUBVEND_ID_REG 0x000C
#define EEPROM_DEVICE_ID_REG 0x000D
#define EEPROM_VENDOR_ID_REG 0x000E
#define EEPROM_INIT_CONTROL2_REG 0x000F
#define EEPROM_SWDPINS_REG 0x0020
#define EEPROM_CIRCUIT_CTRL_REG 0x0021
#define EEPROM_D0_D3_POWER_REG 0x0022
#define EEPROM_FLASH_VERSION 0x0032
#define EEPROM_CHECKSUM_REG 0x003F
/* Mask bits for fields in Word 0x0a of the EEPROM */
#define EEPROM_ICW1_SIGNATURE_MASK 0xC000
#define EEPROM_ICW1_SIGNATURE_VALID 0x4000
#define EEPROM_ICW1_SIGNATURE_CLEAR 0x0000
/* For checksumming, the sum of all words in the EEPROM should equal 0xBABA. */
#define EEPROM_SUM 0xBABA
/* EEPROM Map Sizes (Byte Counts) */
#define PBA_SIZE 4
/* EEPROM Map defines (WORD OFFSETS)*/
/* EEPROM structure */
struct ixgb_ee_map_type {
u8 mac_addr[ETH_ALEN];
__le16 compatibility;
__le16 reserved1[4];
__le32 pba_number;
__le16 init_ctrl_reg_1;
__le16 subsystem_id;
__le16 subvendor_id;
__le16 device_id;
__le16 vendor_id;
__le16 init_ctrl_reg_2;
__le16 oem_reserved[16];
__le16 swdpins_reg;
__le16 circuit_ctrl_reg;
u8 d3_power;
u8 d0_power;
__le16 reserved2[28];
__le16 checksum;
};
/* EEPROM Functions */
u16 ixgb_read_eeprom(struct ixgb_hw *hw, u16 reg);
bool ixgb_validate_eeprom_checksum(struct ixgb_hw *hw);
void ixgb_update_eeprom_checksum(struct ixgb_hw *hw);
void ixgb_write_eeprom(struct ixgb_hw *hw, u16 reg, u16 data);
#endif /* _IXGB_EE_H_ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 1999 - 2008 Intel Corporation. */
/* ethtool support for ixgb */
#include "ixgb.h"
#include <linux/uaccess.h>
#define IXGB_ALL_RAR_ENTRIES 16
enum {NETDEV_STATS, IXGB_STATS};
struct ixgb_stats {
char stat_string[ETH_GSTRING_LEN];
int type;
int sizeof_stat;
int stat_offset;
};
#define IXGB_STAT(m) IXGB_STATS, \
sizeof_field(struct ixgb_adapter, m), \
offsetof(struct ixgb_adapter, m)
#define IXGB_NETDEV_STAT(m) NETDEV_STATS, \
sizeof_field(struct net_device, m), \
offsetof(struct net_device, m)
static struct ixgb_stats ixgb_gstrings_stats[] = {
{"rx_packets", IXGB_NETDEV_STAT(stats.rx_packets)},
{"tx_packets", IXGB_NETDEV_STAT(stats.tx_packets)},
{"rx_bytes", IXGB_NETDEV_STAT(stats.rx_bytes)},
{"tx_bytes", IXGB_NETDEV_STAT(stats.tx_bytes)},
{"rx_errors", IXGB_NETDEV_STAT(stats.rx_errors)},
{"tx_errors", IXGB_NETDEV_STAT(stats.tx_errors)},
{"rx_dropped", IXGB_NETDEV_STAT(stats.rx_dropped)},
{"tx_dropped", IXGB_NETDEV_STAT(stats.tx_dropped)},
{"multicast", IXGB_NETDEV_STAT(stats.multicast)},
{"collisions", IXGB_NETDEV_STAT(stats.collisions)},
/* { "rx_length_errors", IXGB_NETDEV_STAT(stats.rx_length_errors) }, */
{"rx_over_errors", IXGB_NETDEV_STAT(stats.rx_over_errors)},
{"rx_crc_errors", IXGB_NETDEV_STAT(stats.rx_crc_errors)},
{"rx_frame_errors", IXGB_NETDEV_STAT(stats.rx_frame_errors)},
{"rx_no_buffer_count", IXGB_STAT(stats.rnbc)},
{"rx_fifo_errors", IXGB_NETDEV_STAT(stats.rx_fifo_errors)},
{"rx_missed_errors", IXGB_NETDEV_STAT(stats.rx_missed_errors)},
{"tx_aborted_errors", IXGB_NETDEV_STAT(stats.tx_aborted_errors)},
{"tx_carrier_errors", IXGB_NETDEV_STAT(stats.tx_carrier_errors)},
{"tx_fifo_errors", IXGB_NETDEV_STAT(stats.tx_fifo_errors)},
{"tx_heartbeat_errors", IXGB_NETDEV_STAT(stats.tx_heartbeat_errors)},
{"tx_window_errors", IXGB_NETDEV_STAT(stats.tx_window_errors)},
{"tx_deferred_ok", IXGB_STAT(stats.dc)},
{"tx_timeout_count", IXGB_STAT(tx_timeout_count) },
{"tx_restart_queue", IXGB_STAT(restart_queue) },
{"rx_long_length_errors", IXGB_STAT(stats.roc)},
{"rx_short_length_errors", IXGB_STAT(stats.ruc)},
{"tx_tcp_seg_good", IXGB_STAT(stats.tsctc)},
{"tx_tcp_seg_failed", IXGB_STAT(stats.tsctfc)},
{"rx_flow_control_xon", IXGB_STAT(stats.xonrxc)},
{"rx_flow_control_xoff", IXGB_STAT(stats.xoffrxc)},
{"tx_flow_control_xon", IXGB_STAT(stats.xontxc)},
{"tx_flow_control_xoff", IXGB_STAT(stats.xofftxc)},
{"rx_csum_offload_good", IXGB_STAT(hw_csum_rx_good)},
{"rx_csum_offload_errors", IXGB_STAT(hw_csum_rx_error)},
{"tx_csum_offload_good", IXGB_STAT(hw_csum_tx_good)},
{"tx_csum_offload_errors", IXGB_STAT(hw_csum_tx_error)}
};
#define IXGB_STATS_LEN ARRAY_SIZE(ixgb_gstrings_stats)
static int
ixgb_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *cmd)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
ethtool_link_ksettings_zero_link_mode(cmd, supported);
ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
ethtool_link_ksettings_zero_link_mode(cmd, advertising);
ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
cmd->base.port = PORT_FIBRE;
if (netif_carrier_ok(adapter->netdev)) {
cmd->base.speed = SPEED_10000;
cmd->base.duplex = DUPLEX_FULL;
} else {
cmd->base.speed = SPEED_UNKNOWN;
cmd->base.duplex = DUPLEX_UNKNOWN;
}
cmd->base.autoneg = AUTONEG_DISABLE;
return 0;
}
void ixgb_set_speed_duplex(struct net_device *netdev)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
/* be optimistic about our link, since we were up before */
adapter->link_speed = 10000;
adapter->link_duplex = FULL_DUPLEX;
netif_carrier_on(netdev);
netif_wake_queue(netdev);
}
static int
ixgb_set_link_ksettings(struct net_device *netdev,
const struct ethtool_link_ksettings *cmd)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
u32 speed = cmd->base.speed;
if (cmd->base.autoneg == AUTONEG_ENABLE ||
(speed + cmd->base.duplex != SPEED_10000 + DUPLEX_FULL))
return -EINVAL;
if (netif_running(adapter->netdev)) {
ixgb_down(adapter, true);
ixgb_reset(adapter);
ixgb_up(adapter);
ixgb_set_speed_duplex(netdev);
} else
ixgb_reset(adapter);
return 0;
}
static void
ixgb_get_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
pause->autoneg = AUTONEG_DISABLE;
if (hw->fc.type == ixgb_fc_rx_pause)
pause->rx_pause = 1;
else if (hw->fc.type == ixgb_fc_tx_pause)
pause->tx_pause = 1;
else if (hw->fc.type == ixgb_fc_full) {
pause->rx_pause = 1;
pause->tx_pause = 1;
}
}
static int
ixgb_set_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
if (pause->autoneg == AUTONEG_ENABLE)
return -EINVAL;
if (pause->rx_pause && pause->tx_pause)
hw->fc.type = ixgb_fc_full;
else if (pause->rx_pause && !pause->tx_pause)
hw->fc.type = ixgb_fc_rx_pause;
else if (!pause->rx_pause && pause->tx_pause)
hw->fc.type = ixgb_fc_tx_pause;
else if (!pause->rx_pause && !pause->tx_pause)
hw->fc.type = ixgb_fc_none;
if (netif_running(adapter->netdev)) {
ixgb_down(adapter, true);
ixgb_up(adapter);
ixgb_set_speed_duplex(netdev);
} else
ixgb_reset(adapter);
return 0;
}
static u32
ixgb_get_msglevel(struct net_device *netdev)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
return adapter->msg_enable;
}
static void
ixgb_set_msglevel(struct net_device *netdev, u32 data)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
adapter->msg_enable = data;
}
#define IXGB_GET_STAT(_A_, _R_) _A_->stats._R_
static int
ixgb_get_regs_len(struct net_device *netdev)
{
#define IXGB_REG_DUMP_LEN 136*sizeof(u32)
return IXGB_REG_DUMP_LEN;
}
static void
ixgb_get_regs(struct net_device *netdev,
struct ethtool_regs *regs, void *p)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
u32 *reg = p;
u32 *reg_start = reg;
u8 i;
/* the 1 (one) below indicates an attempt at versioning, if the
* interface in ethtool or the driver changes, this 1 should be
* incremented */
regs->version = (1<<24) | hw->revision_id << 16 | hw->device_id;
/* General Registers */
*reg++ = IXGB_READ_REG(hw, CTRL0); /* 0 */
*reg++ = IXGB_READ_REG(hw, CTRL1); /* 1 */
*reg++ = IXGB_READ_REG(hw, STATUS); /* 2 */
*reg++ = IXGB_READ_REG(hw, EECD); /* 3 */
*reg++ = IXGB_READ_REG(hw, MFS); /* 4 */
/* Interrupt */
*reg++ = IXGB_READ_REG(hw, ICR); /* 5 */
*reg++ = IXGB_READ_REG(hw, ICS); /* 6 */
*reg++ = IXGB_READ_REG(hw, IMS); /* 7 */
*reg++ = IXGB_READ_REG(hw, IMC); /* 8 */
/* Receive */
*reg++ = IXGB_READ_REG(hw, RCTL); /* 9 */
*reg++ = IXGB_READ_REG(hw, FCRTL); /* 10 */
*reg++ = IXGB_READ_REG(hw, FCRTH); /* 11 */
*reg++ = IXGB_READ_REG(hw, RDBAL); /* 12 */
*reg++ = IXGB_READ_REG(hw, RDBAH); /* 13 */
*reg++ = IXGB_READ_REG(hw, RDLEN); /* 14 */
*reg++ = IXGB_READ_REG(hw, RDH); /* 15 */
*reg++ = IXGB_READ_REG(hw, RDT); /* 16 */
*reg++ = IXGB_READ_REG(hw, RDTR); /* 17 */
*reg++ = IXGB_READ_REG(hw, RXDCTL); /* 18 */
*reg++ = IXGB_READ_REG(hw, RAIDC); /* 19 */
*reg++ = IXGB_READ_REG(hw, RXCSUM); /* 20 */
/* there are 16 RAR entries in hardware, we only use 3 */
for (i = 0; i < IXGB_ALL_RAR_ENTRIES; i++) {
*reg++ = IXGB_READ_REG_ARRAY(hw, RAL, (i << 1)); /*21,...,51 */
*reg++ = IXGB_READ_REG_ARRAY(hw, RAH, (i << 1)); /*22,...,52 */
}
/* Transmit */
*reg++ = IXGB_READ_REG(hw, TCTL); /* 53 */
*reg++ = IXGB_READ_REG(hw, TDBAL); /* 54 */
*reg++ = IXGB_READ_REG(hw, TDBAH); /* 55 */
*reg++ = IXGB_READ_REG(hw, TDLEN); /* 56 */
*reg++ = IXGB_READ_REG(hw, TDH); /* 57 */
*reg++ = IXGB_READ_REG(hw, TDT); /* 58 */
*reg++ = IXGB_READ_REG(hw, TIDV); /* 59 */
*reg++ = IXGB_READ_REG(hw, TXDCTL); /* 60 */
*reg++ = IXGB_READ_REG(hw, TSPMT); /* 61 */
*reg++ = IXGB_READ_REG(hw, PAP); /* 62 */
/* Physical */
*reg++ = IXGB_READ_REG(hw, PCSC1); /* 63 */
*reg++ = IXGB_READ_REG(hw, PCSC2); /* 64 */
*reg++ = IXGB_READ_REG(hw, PCSS1); /* 65 */
*reg++ = IXGB_READ_REG(hw, PCSS2); /* 66 */
*reg++ = IXGB_READ_REG(hw, XPCSS); /* 67 */
*reg++ = IXGB_READ_REG(hw, UCCR); /* 68 */
*reg++ = IXGB_READ_REG(hw, XPCSTC); /* 69 */
*reg++ = IXGB_READ_REG(hw, MACA); /* 70 */
*reg++ = IXGB_READ_REG(hw, APAE); /* 71 */
*reg++ = IXGB_READ_REG(hw, ARD); /* 72 */
*reg++ = IXGB_READ_REG(hw, AIS); /* 73 */
*reg++ = IXGB_READ_REG(hw, MSCA); /* 74 */
*reg++ = IXGB_READ_REG(hw, MSRWD); /* 75 */
/* Statistics */
*reg++ = IXGB_GET_STAT(adapter, tprl); /* 76 */
*reg++ = IXGB_GET_STAT(adapter, tprh); /* 77 */
*reg++ = IXGB_GET_STAT(adapter, gprcl); /* 78 */
*reg++ = IXGB_GET_STAT(adapter, gprch); /* 79 */
*reg++ = IXGB_GET_STAT(adapter, bprcl); /* 80 */
*reg++ = IXGB_GET_STAT(adapter, bprch); /* 81 */
*reg++ = IXGB_GET_STAT(adapter, mprcl); /* 82 */
*reg++ = IXGB_GET_STAT(adapter, mprch); /* 83 */
*reg++ = IXGB_GET_STAT(adapter, uprcl); /* 84 */
*reg++ = IXGB_GET_STAT(adapter, uprch); /* 85 */
*reg++ = IXGB_GET_STAT(adapter, vprcl); /* 86 */
*reg++ = IXGB_GET_STAT(adapter, vprch); /* 87 */
*reg++ = IXGB_GET_STAT(adapter, jprcl); /* 88 */
*reg++ = IXGB_GET_STAT(adapter, jprch); /* 89 */
*reg++ = IXGB_GET_STAT(adapter, gorcl); /* 90 */
*reg++ = IXGB_GET_STAT(adapter, gorch); /* 91 */
*reg++ = IXGB_GET_STAT(adapter, torl); /* 92 */
*reg++ = IXGB_GET_STAT(adapter, torh); /* 93 */
*reg++ = IXGB_GET_STAT(adapter, rnbc); /* 94 */
*reg++ = IXGB_GET_STAT(adapter, ruc); /* 95 */
*reg++ = IXGB_GET_STAT(adapter, roc); /* 96 */
*reg++ = IXGB_GET_STAT(adapter, rlec); /* 97 */
*reg++ = IXGB_GET_STAT(adapter, crcerrs); /* 98 */
*reg++ = IXGB_GET_STAT(adapter, icbc); /* 99 */
*reg++ = IXGB_GET_STAT(adapter, ecbc); /* 100 */
*reg++ = IXGB_GET_STAT(adapter, mpc); /* 101 */
*reg++ = IXGB_GET_STAT(adapter, tptl); /* 102 */
*reg++ = IXGB_GET_STAT(adapter, tpth); /* 103 */
*reg++ = IXGB_GET_STAT(adapter, gptcl); /* 104 */
*reg++ = IXGB_GET_STAT(adapter, gptch); /* 105 */
*reg++ = IXGB_GET_STAT(adapter, bptcl); /* 106 */
*reg++ = IXGB_GET_STAT(adapter, bptch); /* 107 */
*reg++ = IXGB_GET_STAT(adapter, mptcl); /* 108 */
*reg++ = IXGB_GET_STAT(adapter, mptch); /* 109 */
*reg++ = IXGB_GET_STAT(adapter, uptcl); /* 110 */
*reg++ = IXGB_GET_STAT(adapter, uptch); /* 111 */
*reg++ = IXGB_GET_STAT(adapter, vptcl); /* 112 */
*reg++ = IXGB_GET_STAT(adapter, vptch); /* 113 */
*reg++ = IXGB_GET_STAT(adapter, jptcl); /* 114 */
*reg++ = IXGB_GET_STAT(adapter, jptch); /* 115 */
*reg++ = IXGB_GET_STAT(adapter, gotcl); /* 116 */
*reg++ = IXGB_GET_STAT(adapter, gotch); /* 117 */
*reg++ = IXGB_GET_STAT(adapter, totl); /* 118 */
*reg++ = IXGB_GET_STAT(adapter, toth); /* 119 */
*reg++ = IXGB_GET_STAT(adapter, dc); /* 120 */
*reg++ = IXGB_GET_STAT(adapter, plt64c); /* 121 */
*reg++ = IXGB_GET_STAT(adapter, tsctc); /* 122 */
*reg++ = IXGB_GET_STAT(adapter, tsctfc); /* 123 */
*reg++ = IXGB_GET_STAT(adapter, ibic); /* 124 */
*reg++ = IXGB_GET_STAT(adapter, rfc); /* 125 */
*reg++ = IXGB_GET_STAT(adapter, lfc); /* 126 */
*reg++ = IXGB_GET_STAT(adapter, pfrc); /* 127 */
*reg++ = IXGB_GET_STAT(adapter, pftc); /* 128 */
*reg++ = IXGB_GET_STAT(adapter, mcfrc); /* 129 */
*reg++ = IXGB_GET_STAT(adapter, mcftc); /* 130 */
*reg++ = IXGB_GET_STAT(adapter, xonrxc); /* 131 */
*reg++ = IXGB_GET_STAT(adapter, xontxc); /* 132 */
*reg++ = IXGB_GET_STAT(adapter, xoffrxc); /* 133 */
*reg++ = IXGB_GET_STAT(adapter, xofftxc); /* 134 */
*reg++ = IXGB_GET_STAT(adapter, rjc); /* 135 */
regs->len = (reg - reg_start) * sizeof(u32);
}
static int
ixgb_get_eeprom_len(struct net_device *netdev)
{
/* EEPROM size is stored in 16-bit words; convert to bytes */
return IXGB_EEPROM_SIZE << 1;
}
static int
ixgb_get_eeprom(struct net_device *netdev,
struct ethtool_eeprom *eeprom, u8 *bytes)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
__le16 *eeprom_buff;
int i, max_len, first_word, last_word;
int ret_val = 0;
if (eeprom->len == 0) {
ret_val = -EINVAL;
goto geeprom_error;
}
eeprom->magic = hw->vendor_id | (hw->device_id << 16);
max_len = ixgb_get_eeprom_len(netdev);
if (eeprom->offset > eeprom->offset + eeprom->len) {
ret_val = -EINVAL;
goto geeprom_error;
}
if ((eeprom->offset + eeprom->len) > max_len)
eeprom->len = (max_len - eeprom->offset);
first_word = eeprom->offset >> 1;
last_word = (eeprom->offset + eeprom->len - 1) >> 1;
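/* Illustrative example (not in the original source): offset = 3 and
 * len = 4 cover bytes 3-6, so first_word = 1 and last_word = 3; the
 * memcpy below then skips the leading odd byte via (eeprom->offset & 1).
 */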
eeprom_buff = kmalloc_array(last_word - first_word + 1,
sizeof(__le16),
GFP_KERNEL);
if (!eeprom_buff)
return -ENOMEM;
/* note the eeprom was good because the driver loaded */
for (i = 0; i <= (last_word - first_word); i++)
eeprom_buff[i] = ixgb_get_eeprom_word(hw, (first_word + i));
memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 1), eeprom->len);
kfree(eeprom_buff);
geeprom_error:
return ret_val;
}
static int
ixgb_set_eeprom(struct net_device *netdev,
struct ethtool_eeprom *eeprom, u8 *bytes)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
u16 *eeprom_buff;
void *ptr;
int max_len, first_word, last_word;
u16 i;
if (eeprom->len == 0)
return -EINVAL;
if (eeprom->magic != (hw->vendor_id | (hw->device_id << 16)))
return -EFAULT;
max_len = ixgb_get_eeprom_len(netdev);
if (eeprom->offset > eeprom->offset + eeprom->len)
return -EINVAL;
if ((eeprom->offset + eeprom->len) > max_len)
eeprom->len = (max_len - eeprom->offset);
first_word = eeprom->offset >> 1;
last_word = (eeprom->offset + eeprom->len - 1) >> 1;
eeprom_buff = kmalloc(max_len, GFP_KERNEL);
if (!eeprom_buff)
return -ENOMEM;
ptr = (void *)eeprom_buff;
if (eeprom->offset & 1) {
/* need read/modify/write of first changed EEPROM word */
/* only the second byte of the word is being modified */
eeprom_buff[0] = ixgb_read_eeprom(hw, first_word);
ptr++;
}
if ((eeprom->offset + eeprom->len) & 1) {
/* need read/modify/write of last changed EEPROM word */
/* only the first byte of the word is being modified */
eeprom_buff[last_word - first_word]
= ixgb_read_eeprom(hw, last_word);
}
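/* Worked example (illustrative): offset = 1, len = 2 touches bytes 1-2,
 * so word 0 is read back to preserve its low byte and word 1 to preserve
 * its high byte before the partial bytes are merged in below.
 */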
memcpy(ptr, bytes, eeprom->len);
for (i = 0; i <= (last_word - first_word); i++)
ixgb_write_eeprom(hw, first_word + i, eeprom_buff[i]);
/* Update the checksum over the first part of the EEPROM if needed */
if (first_word <= EEPROM_CHECKSUM_REG)
ixgb_update_eeprom_checksum(hw);
kfree(eeprom_buff);
return 0;
}
static void
ixgb_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *drvinfo)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
strscpy(drvinfo->driver, ixgb_driver_name,
sizeof(drvinfo->driver));
strscpy(drvinfo->bus_info, pci_name(adapter->pdev),
sizeof(drvinfo->bus_info));
}
static void
ixgb_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring,
struct kernel_ethtool_ringparam *kernel_ring,
struct netlink_ext_ack *extack)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_desc_ring *txdr = &adapter->tx_ring;
struct ixgb_desc_ring *rxdr = &adapter->rx_ring;
ring->rx_max_pending = MAX_RXD;
ring->tx_max_pending = MAX_TXD;
ring->rx_pending = rxdr->count;
ring->tx_pending = txdr->count;
}
static int
ixgb_set_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring,
struct kernel_ethtool_ringparam *kernel_ring,
struct netlink_ext_ack *extack)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_desc_ring *txdr = &adapter->tx_ring;
struct ixgb_desc_ring *rxdr = &adapter->rx_ring;
struct ixgb_desc_ring tx_old, tx_new, rx_old, rx_new;
int err;
tx_old = adapter->tx_ring;
rx_old = adapter->rx_ring;
if (ring->rx_mini_pending || ring->rx_jumbo_pending)
return -EINVAL;
if (netif_running(adapter->netdev))
ixgb_down(adapter, true);
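/* Clamp the requested counts to [MIN, MAX] and round up to the
 * hardware-required multiple of 8 descriptors
 * (IXGB_REQ_RX/TX_DESCRIPTOR_MULTIPLE).
 */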
rxdr->count = max(ring->rx_pending, (u32)MIN_RXD);
rxdr->count = min(rxdr->count, (u32)MAX_RXD);
rxdr->count = ALIGN(rxdr->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
txdr->count = max(ring->tx_pending, (u32)MIN_TXD);
txdr->count = min(txdr->count, (u32)MAX_TXD);
txdr->count = ALIGN(txdr->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
if (netif_running(adapter->netdev)) {
/* Try to get new resources before deleting old */
err = ixgb_setup_rx_resources(adapter);
if (err)
goto err_setup_rx;
err = ixgb_setup_tx_resources(adapter);
if (err)
goto err_setup_tx;
/* save the new, restore the old in order to free it,
* then restore the new back again */
rx_new = adapter->rx_ring;
tx_new = adapter->tx_ring;
adapter->rx_ring = rx_old;
adapter->tx_ring = tx_old;
ixgb_free_rx_resources(adapter);
ixgb_free_tx_resources(adapter);
adapter->rx_ring = rx_new;
adapter->tx_ring = tx_new;
err = ixgb_up(adapter);
if (err)
return err;
ixgb_set_speed_duplex(netdev);
}
return 0;
err_setup_tx:
ixgb_free_rx_resources(adapter);
err_setup_rx:
adapter->rx_ring = rx_old;
adapter->tx_ring = tx_old;
ixgb_up(adapter);
return err;
}
static int
ixgb_set_phys_id(struct net_device *netdev, enum ethtool_phys_id_state state)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
switch (state) {
case ETHTOOL_ID_ACTIVE:
return 2;
case ETHTOOL_ID_ON:
ixgb_led_on(&adapter->hw);
break;
case ETHTOOL_ID_OFF:
case ETHTOOL_ID_INACTIVE:
ixgb_led_off(&adapter->hw);
}
return 0;
}
static int
ixgb_get_sset_count(struct net_device *netdev, int sset)
{
switch (sset) {
case ETH_SS_STATS:
return IXGB_STATS_LEN;
default:
return -EOPNOTSUPP;
}
}
static void
ixgb_get_ethtool_stats(struct net_device *netdev,
struct ethtool_stats *stats, u64 *data)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
int i;
char *p = NULL;
ixgb_update_stats(adapter);
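/* Each ixgb_gstrings_stats entry records whether its offset is relative
 * to struct net_device or to struct ixgb_adapter; resolve the base
 * pointer accordingly, then copy out a u64- or u32-sized value.
 */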
for (i = 0; i < IXGB_STATS_LEN; i++) {
switch (ixgb_gstrings_stats[i].type) {
case NETDEV_STATS:
p = (char *) netdev +
ixgb_gstrings_stats[i].stat_offset;
break;
case IXGB_STATS:
p = (char *) adapter +
ixgb_gstrings_stats[i].stat_offset;
break;
}
data[i] = (ixgb_gstrings_stats[i].sizeof_stat ==
sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
}
}
static void
ixgb_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
{
int i;
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < IXGB_STATS_LEN; i++) {
memcpy(data + i * ETH_GSTRING_LEN,
ixgb_gstrings_stats[i].stat_string,
ETH_GSTRING_LEN);
}
break;
}
}
static const struct ethtool_ops ixgb_ethtool_ops = {
.get_drvinfo = ixgb_get_drvinfo,
.get_regs_len = ixgb_get_regs_len,
.get_regs = ixgb_get_regs,
.get_link = ethtool_op_get_link,
.get_eeprom_len = ixgb_get_eeprom_len,
.get_eeprom = ixgb_get_eeprom,
.set_eeprom = ixgb_set_eeprom,
.get_ringparam = ixgb_get_ringparam,
.set_ringparam = ixgb_set_ringparam,
.get_pauseparam = ixgb_get_pauseparam,
.set_pauseparam = ixgb_set_pauseparam,
.get_msglevel = ixgb_get_msglevel,
.set_msglevel = ixgb_set_msglevel,
.get_strings = ixgb_get_strings,
.set_phys_id = ixgb_set_phys_id,
.get_sset_count = ixgb_get_sset_count,
.get_ethtool_stats = ixgb_get_ethtool_stats,
.get_link_ksettings = ixgb_get_link_ksettings,
.set_link_ksettings = ixgb_set_link_ksettings,
};
void ixgb_set_ethtool_ops(struct net_device *netdev)
{
netdev->ethtool_ops = &ixgb_ethtool_ops;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 1999 - 2008 Intel Corporation. */
/* ixgb_hw.c
* Shared functions for accessing and configuring the adapter
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/pci_ids.h>
#include "ixgb_hw.h"
#include "ixgb_ids.h"
#include <linux/etherdevice.h>
/* Local function prototypes */
static u32 ixgb_hash_mc_addr(struct ixgb_hw *hw, u8 *mc_addr);
static void ixgb_mta_set(struct ixgb_hw *hw, u32 hash_value);
static void ixgb_get_bus_info(struct ixgb_hw *hw);
static bool ixgb_link_reset(struct ixgb_hw *hw);
static void ixgb_optics_reset(struct ixgb_hw *hw);
static void ixgb_optics_reset_bcm(struct ixgb_hw *hw);
static ixgb_phy_type ixgb_identify_phy(struct ixgb_hw *hw);
static void ixgb_clear_hw_cntrs(struct ixgb_hw *hw);
static void ixgb_clear_vfta(struct ixgb_hw *hw);
static void ixgb_init_rx_addrs(struct ixgb_hw *hw);
static u16 ixgb_read_phy_reg(struct ixgb_hw *hw,
u32 reg_address,
u32 phy_address,
u32 device_type);
static bool ixgb_setup_fc(struct ixgb_hw *hw);
static bool mac_addr_valid(u8 *mac_addr);
static u32 ixgb_mac_reset(struct ixgb_hw *hw)
{
u32 ctrl_reg;
ctrl_reg = IXGB_CTRL0_RST |
IXGB_CTRL0_SDP3_DIR | /* All pins are Output=1 */
IXGB_CTRL0_SDP2_DIR |
IXGB_CTRL0_SDP1_DIR |
IXGB_CTRL0_SDP0_DIR |
IXGB_CTRL0_SDP3 | /* Initial value 1101 */
IXGB_CTRL0_SDP2 |
IXGB_CTRL0_SDP0;
#ifdef HP_ZX1
/* Workaround for 82597EX reset errata */
IXGB_WRITE_REG_IO(hw, CTRL0, ctrl_reg);
#else
IXGB_WRITE_REG(hw, CTRL0, ctrl_reg);
#endif
/* Delay a few ms just to allow the reset to complete */
msleep(IXGB_DELAY_AFTER_RESET);
ctrl_reg = IXGB_READ_REG(hw, CTRL0);
#ifdef DBG
/* Make sure the self-clearing global reset bit did self clear */
ASSERT(!(ctrl_reg & IXGB_CTRL0_RST));
#endif
if (hw->subsystem_vendor_id == PCI_VENDOR_ID_SUN) {
ctrl_reg = /* Enable interrupt from XFP and SerDes */
IXGB_CTRL1_GPI0_EN |
IXGB_CTRL1_SDP6_DIR |
IXGB_CTRL1_SDP7_DIR |
IXGB_CTRL1_SDP6 |
IXGB_CTRL1_SDP7;
IXGB_WRITE_REG(hw, CTRL1, ctrl_reg);
ixgb_optics_reset_bcm(hw);
}
if (hw->phy_type == ixgb_phy_type_txn17401)
ixgb_optics_reset(hw);
return ctrl_reg;
}
/******************************************************************************
* Reset the transmit and receive units; mask and clear all interrupts.
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
bool
ixgb_adapter_stop(struct ixgb_hw *hw)
{
u32 ctrl_reg;
ENTER();
/* If we are stopped or resetting, exit gracefully and wait to be
* started again before accessing the hardware.
*/
if (hw->adapter_stopped) {
pr_debug("Exiting because the adapter is already stopped!!!\n");
return false;
}
/* Set the Adapter Stopped flag so other driver functions stop
* touching the Hardware.
*/
hw->adapter_stopped = true;
/* Clear interrupt mask to stop board from generating interrupts */
pr_debug("Masking off all interrupts\n");
IXGB_WRITE_REG(hw, IMC, 0xFFFFFFFF);
/* Disable the Transmit and Receive units. Then delay to allow
* any pending transactions to complete before we hit the MAC with
* the global reset.
*/
IXGB_WRITE_REG(hw, RCTL, IXGB_READ_REG(hw, RCTL) & ~IXGB_RCTL_RXEN);
IXGB_WRITE_REG(hw, TCTL, IXGB_READ_REG(hw, TCTL) & ~IXGB_TCTL_TXEN);
IXGB_WRITE_FLUSH(hw);
msleep(IXGB_DELAY_BEFORE_RESET);
/* Issue a global reset to the MAC. This will reset the chip's
* transmit, receive, DMA, and link units. It will not affect
* the current PCI configuration. The global reset bit is self-
* clearing, and should clear within a microsecond.
*/
pr_debug("Issuing a global reset to MAC\n");
ctrl_reg = ixgb_mac_reset(hw);
/* Clear interrupt mask to stop board from generating interrupts */
pr_debug("Masking off all interrupts\n");
IXGB_WRITE_REG(hw, IMC, 0xffffffff);
/* Clear any pending interrupt events. */
IXGB_READ_REG(hw, ICR);
return ctrl_reg & IXGB_CTRL0_RST;
}
/******************************************************************************
* Identifies the vendor of the optics module on the adapter. The SR adapters
* support two different types of XPAK optics, so it is necessary to determine
* which optics are present before applying any optics-specific workarounds.
*
* hw - Struct containing variables accessed by shared code.
*
* Returns: the vendor of the XPAK optics module.
*****************************************************************************/
static ixgb_xpak_vendor
ixgb_identify_xpak_vendor(struct ixgb_hw *hw)
{
u32 i;
u16 vendor_name[5];
ixgb_xpak_vendor xpak_vendor;
ENTER();
/* Read the first few bytes of the vendor string from the XPAK NVR
* registers. These are standard XENPAK/XPAK registers, so all XPAK
* devices should implement them. */
for (i = 0; i < 5; i++) {
vendor_name[i] = ixgb_read_phy_reg(hw,
MDIO_PMA_PMD_XPAK_VENDOR_NAME
+ i, IXGB_PHY_ADDRESS,
MDIO_MMD_PMAPMD);
}
/* Determine the actual vendor */
if (vendor_name[0] == 'I' &&
vendor_name[1] == 'N' &&
vendor_name[2] == 'T' &&
vendor_name[3] == 'E' && vendor_name[4] == 'L') {
xpak_vendor = ixgb_xpak_vendor_intel;
} else {
xpak_vendor = ixgb_xpak_vendor_infineon;
}
return xpak_vendor;
}
/******************************************************************************
* Determine the physical layer module on the adapter.
*
* hw - Struct containing variables accessed by shared code. The device_id
* field must be (correctly) populated before calling this routine.
*
* Returns: the phy type of the adapter.
*****************************************************************************/
static ixgb_phy_type
ixgb_identify_phy(struct ixgb_hw *hw)
{
ixgb_phy_type phy_type;
ixgb_xpak_vendor xpak_vendor;
ENTER();
/* Infer the transceiver/phy type from the device id */
switch (hw->device_id) {
case IXGB_DEVICE_ID_82597EX:
pr_debug("Identified TXN17401 optics\n");
phy_type = ixgb_phy_type_txn17401;
break;
case IXGB_DEVICE_ID_82597EX_SR:
/* The SR adapters carry two different types of XPAK optics
* modules; read the vendor identifier to determine the exact
* type of optics. */
xpak_vendor = ixgb_identify_xpak_vendor(hw);
if (xpak_vendor == ixgb_xpak_vendor_intel) {
pr_debug("Identified TXN17201 optics\n");
phy_type = ixgb_phy_type_txn17201;
} else {
pr_debug("Identified G6005 optics\n");
phy_type = ixgb_phy_type_g6005;
}
break;
case IXGB_DEVICE_ID_82597EX_LR:
pr_debug("Identified G6104 optics\n");
phy_type = ixgb_phy_type_g6104;
break;
case IXGB_DEVICE_ID_82597EX_CX4:
pr_debug("Identified CX4\n");
xpak_vendor = ixgb_identify_xpak_vendor(hw);
if (xpak_vendor == ixgb_xpak_vendor_intel) {
pr_debug("Identified TXN17201 optics\n");
phy_type = ixgb_phy_type_txn17201;
} else {
pr_debug("Identified G6005 optics\n");
phy_type = ixgb_phy_type_g6005;
}
break;
default:
pr_debug("Unknown physical layer module\n");
phy_type = ixgb_phy_type_unknown;
break;
}
/* update phy type for sun specific board */
if (hw->subsystem_vendor_id == PCI_VENDOR_ID_SUN)
phy_type = ixgb_phy_type_bcm;
return phy_type;
}
/******************************************************************************
* Performs basic configuration of the adapter.
*
* hw - Struct containing variables accessed by shared code
*
* Resets the controller.
* Reads and validates the EEPROM.
* Initializes the receive address registers.
* Initializes the multicast table.
* Clears all on-chip counters.
* Calls routine to setup flow control settings.
* Leaves the transmit and receive units disabled and uninitialized.
*
* Returns:
* true if successful,
* false if unrecoverable problems were encountered.
*****************************************************************************/
bool
ixgb_init_hw(struct ixgb_hw *hw)
{
u32 i;
bool status;
ENTER();
/* Issue a global reset to the MAC. This will reset the chip's
* transmit, receive, DMA, and link units. It will not affect
* the current PCI configuration. The global reset bit is self-
* clearing, and should clear within a microsecond.
*/
pr_debug("Issuing a global reset to MAC\n");
ixgb_mac_reset(hw);
pr_debug("Issuing an EE reset to MAC\n");
#ifdef HP_ZX1
/* Workaround for 82597EX reset errata */
IXGB_WRITE_REG_IO(hw, CTRL1, IXGB_CTRL1_EE_RST);
#else
IXGB_WRITE_REG(hw, CTRL1, IXGB_CTRL1_EE_RST);
#endif
/* Delay a few ms just to allow the reset to complete */
msleep(IXGB_DELAY_AFTER_EE_RESET);
if (!ixgb_get_eeprom_data(hw))
return false;
/* Use the device id to determine the type of phy/transceiver. */
hw->device_id = ixgb_get_ee_device_id(hw);
hw->phy_type = ixgb_identify_phy(hw);
/* Setup the receive addresses.
* Receive Address Registers (RARs 0 - 15).
*/
ixgb_init_rx_addrs(hw);
/*
* Check that a valid MAC address has been set.
* If it is not valid, we fail hardware init.
*/
if (!mac_addr_valid(hw->curr_mac_addr)) {
pr_debug("MAC address invalid after ixgb_init_rx_addrs\n");
return false;
}
/* tell the routines in this file they can access hardware again */
hw->adapter_stopped = false;
/* Fill in the bus_info structure */
ixgb_get_bus_info(hw);
/* Zero out the Multicast HASH table */
pr_debug("Zeroing the MTA\n");
for (i = 0; i < IXGB_MC_TBL_SIZE; i++)
IXGB_WRITE_REG_ARRAY(hw, MTA, i, 0);
/* Zero out the VLAN Filter Table Array */
ixgb_clear_vfta(hw);
/* Zero all of the hardware counters */
ixgb_clear_hw_cntrs(hw);
/* Call a subroutine to setup flow control. */
status = ixgb_setup_fc(hw);
/* 82597EX errata: Call check-for-link in case lane deskew is locked */
ixgb_check_for_link(hw);
return status;
}
/******************************************************************************
* Initializes receive address filters.
*
* hw - Struct containing variables accessed by shared code
*
* Places the MAC address in receive address register 0 and clears the rest
* of the receive address registers. Clears the multicast table. Assumes
* the receiver is in reset when the routine is called.
*****************************************************************************/
static void
ixgb_init_rx_addrs(struct ixgb_hw *hw)
{
u32 i;
ENTER();
/*
* If the current mac address is valid, assume it is a software override
* to the permanent address.
* Otherwise, use the permanent address from the eeprom.
*/
if (!mac_addr_valid(hw->curr_mac_addr)) {
/* Get the MAC address from the eeprom for later reference */
ixgb_get_ee_mac_addr(hw, hw->curr_mac_addr);
pr_debug("Keeping Permanent MAC Addr = %pM\n",
hw->curr_mac_addr);
} else {
/* Setup the receive address. */
pr_debug("Overriding MAC Address in RAR[0]\n");
pr_debug("New MAC Addr = %pM\n", hw->curr_mac_addr);
ixgb_rar_set(hw, hw->curr_mac_addr, 0);
}
/* Zero out the other 15 receive addresses. */
pr_debug("Clearing RAR[1-15]\n");
for (i = 1; i < IXGB_RAR_ENTRIES; i++) {
/* Write the high reg first so the AV bit is cleared first */
IXGB_WRITE_REG_ARRAY(hw, RA, ((i << 1) + 1), 0);
IXGB_WRITE_REG_ARRAY(hw, RA, (i << 1), 0);
}
}
/******************************************************************************
* Updates the MAC's list of multicast addresses.
*
* hw - Struct containing variables accessed by shared code
* mc_addr_list - the list of new multicast addresses
* mc_addr_count - number of addresses
* pad - number of bytes between addresses in the list
*
* The given list replaces any existing list. Clears the last 15 receive
* address registers and the multicast table. Uses receive address registers
* for the first 15 multicast addresses, and hashes the rest into the
* multicast table.
*****************************************************************************/
void
ixgb_mc_addr_list_update(struct ixgb_hw *hw,
u8 *mc_addr_list,
u32 mc_addr_count,
u32 pad)
{
u32 hash_value;
u32 i;
u32 rar_used_count = 1; /* RAR[0] is used for our MAC address */
u8 *mca;
ENTER();
/* Set the new number of MC addresses that we are being requested to use. */
hw->num_mc_addrs = mc_addr_count;
/* Clear RAR[1-15] */
pr_debug("Clearing RAR[1-15]\n");
for (i = rar_used_count; i < IXGB_RAR_ENTRIES; i++) {
IXGB_WRITE_REG_ARRAY(hw, RA, (i << 1), 0);
IXGB_WRITE_REG_ARRAY(hw, RA, ((i << 1) + 1), 0);
}
/* Clear the MTA */
pr_debug("Clearing MTA\n");
for (i = 0; i < IXGB_MC_TBL_SIZE; i++)
IXGB_WRITE_REG_ARRAY(hw, MTA, i, 0);
/* Add the new addresses */
mca = mc_addr_list;
for (i = 0; i < mc_addr_count; i++) {
pr_debug("Adding the multicast addresses:\n");
pr_debug("MC Addr #%d = %pM\n", i, mca);
/* Place this multicast address in the RAR if there is room,
 * else put it in the MTA
 */
if (rar_used_count < IXGB_RAR_ENTRIES) {
ixgb_rar_set(hw, mca, rar_used_count);
pr_debug("Added a multicast address to RAR[%d]\n", i);
rar_used_count++;
} else {
hash_value = ixgb_hash_mc_addr(hw, mca);
pr_debug("Hash value = 0x%03X\n", hash_value);
ixgb_mta_set(hw, hash_value);
}
mca += ETH_ALEN + pad;
}
pr_debug("MC Update Complete\n");
}
/******************************************************************************
* Hashes an address to determine its location in the multicast table
*
* hw - Struct containing variables accessed by shared code
* mc_addr - the multicast address to hash
*
* Returns:
* The hash value
*****************************************************************************/
static u32
ixgb_hash_mc_addr(struct ixgb_hw *hw,
u8 *mc_addr)
{
u32 hash_value = 0;
ENTER();
/* The portion of the address that is used for the hash table is
* determined by the mc_filter_type setting.
*/
switch (hw->mc_filter_type) {
/* [0] [1] [2] [3] [4] [5]
* 01 AA 00 12 34 56
* LSB MSB - According to H/W docs */
case 0:
/* [47:36] i.e. 0x563 for above example address */
hash_value =
((mc_addr[4] >> 4) | (((u16) mc_addr[5]) << 4));
break;
case 1: /* [46:35] i.e. 0xAC6 for above example address */
hash_value =
((mc_addr[4] >> 3) | (((u16) mc_addr[5]) << 5));
break;
case 2: /* [45:34] i.e. 0x5D8 for above example address */
hash_value =
((mc_addr[4] >> 2) | (((u16) mc_addr[5]) << 6));
break;
case 3: /* [43:32] i.e. 0x634 for above example address */
hash_value = ((mc_addr[4]) | (((u16) mc_addr[5]) << 8));
break;
default:
/* Invalid mc_filter_type, what should we do? */
pr_debug("MC filter type param set incorrectly\n");
ASSERT(0);
break;
}
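/* The MTA holds 4096 bits, so only the low 12 bits of the hash are kept */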
hash_value &= 0xFFF;
return hash_value;
}
/******************************************************************************
* Sets the bit in the multicast table corresponding to the hash value.
*
* hw - Struct containing variables accessed by shared code
* hash_value - Multicast address hash value
*****************************************************************************/
static void
ixgb_mta_set(struct ixgb_hw *hw,
u32 hash_value)
{
u32 hash_bit, hash_reg;
u32 mta_reg;
/* The MTA is a register array of 128 32-bit registers.
* It is treated like an array of 4096 bits. We want to set
* bit BitArray[hash_value]. So we figure out what register
* the bit is in, read it, OR in the new bit, then write
* back the new value. The register is determined by the
* upper 7 bits of the hash value, and the bit within that
* register is determined by the lower 5 bits of the value.
*/
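/* Worked example (illustrative): hash_value = 0x563 gives
 * hash_reg = (0x563 >> 5) & 0x7F = 0x2B and hash_bit = 0x563 & 0x1F = 3,
 * so bit 3 of MTA[0x2B] is set below.
 */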
hash_reg = (hash_value >> 5) & 0x7F;
hash_bit = hash_value & 0x1F;
mta_reg = IXGB_READ_REG_ARRAY(hw, MTA, hash_reg);
mta_reg |= (1 << hash_bit);
IXGB_WRITE_REG_ARRAY(hw, MTA, hash_reg, mta_reg);
}
/******************************************************************************
* Puts an ethernet address into a receive address register.
*
* hw - Struct containing variables accessed by shared code
* addr - Address to put into receive address register
* index - Receive address register to write
*****************************************************************************/
void
ixgb_rar_set(struct ixgb_hw *hw,
const u8 *addr,
u32 index)
{
u32 rar_low, rar_high;
ENTER();
/* HW expects these in little endian so we reverse the byte order
* from network order (big endian) to little endian
*/
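/* Illustrative example: addr = 00:A0:C9:12:34:56 yields
 * rar_low = 0x12C9A000 and rar_high = 0x80005634 (IXGB_RAH_AV set).
 */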
rar_low = ((u32) addr[0] |
((u32)addr[1] << 8) |
((u32)addr[2] << 16) |
((u32)addr[3] << 24));
rar_high = ((u32) addr[4] |
((u32)addr[5] << 8) |
IXGB_RAH_AV);
IXGB_WRITE_REG_ARRAY(hw, RA, (index << 1), rar_low);
IXGB_WRITE_REG_ARRAY(hw, RA, ((index << 1) + 1), rar_high);
}
/******************************************************************************
* Writes a value to the specified offset in the VLAN filter table.
*
* hw - Struct containing variables accessed by shared code
* offset - Offset in VLAN filter table to write
* value - Value to write into VLAN filter table
*****************************************************************************/
void
ixgb_write_vfta(struct ixgb_hw *hw,
u32 offset,
u32 value)
{
IXGB_WRITE_REG_ARRAY(hw, VFTA, offset, value);
}
/******************************************************************************
* Clears the VLAN filter table
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_clear_vfta(struct ixgb_hw *hw)
{
u32 offset;
for (offset = 0; offset < IXGB_VLAN_FILTER_TBL_SIZE; offset++)
IXGB_WRITE_REG_ARRAY(hw, VFTA, offset, 0);
}
/******************************************************************************
* Configures the flow control settings based on SW configuration.
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static bool
ixgb_setup_fc(struct ixgb_hw *hw)
{
u32 ctrl_reg;
u32 pap_reg = 0; /* by default, assume no pause time */
bool status = true;
ENTER();
/* Get the current control reg 0 settings */
ctrl_reg = IXGB_READ_REG(hw, CTRL0);
/* Clear the Receive Pause Enable and Transmit Pause Enable bits */
ctrl_reg &= ~(IXGB_CTRL0_RPE | IXGB_CTRL0_TPE);
/* The possible values of the "flow_control" parameter are:
* 0: Flow control is completely disabled
* 1: Rx flow control is enabled (we can receive pause frames
* but not send pause frames).
* 2: Tx flow control is enabled (we can send pause frames
* but we do not support receiving pause frames).
* 3: Both Rx and TX flow control (symmetric) are enabled.
* other: Invalid.
*/
switch (hw->fc.type) {
case ixgb_fc_none: /* 0 */
/* Set CMDC bit to disable Rx Flow control */
ctrl_reg |= (IXGB_CTRL0_CMDC);
break;
case ixgb_fc_rx_pause: /* 1 */
/* RX Flow control is enabled, and TX Flow control is
* disabled.
*/
ctrl_reg |= (IXGB_CTRL0_RPE);
break;
case ixgb_fc_tx_pause: /* 2 */
/* TX Flow control is enabled, and RX Flow control is
 * disabled, by a software override.
 */
ctrl_reg |= (IXGB_CTRL0_TPE);
pap_reg = hw->fc.pause_time;
break;
case ixgb_fc_full: /* 3 */
/* Flow control (both RX and TX) is enabled by a software
 * override.
 */
ctrl_reg |= (IXGB_CTRL0_RPE | IXGB_CTRL0_TPE);
pap_reg = hw->fc.pause_time;
break;
default:
/* We should never get here. The value should be 0-3. */
pr_debug("Flow control param set incorrectly\n");
ASSERT(0);
break;
}
/* Write the new settings */
IXGB_WRITE_REG(hw, CTRL0, ctrl_reg);
if (pap_reg != 0)
IXGB_WRITE_REG(hw, PAP, pap_reg);
/* Set the flow control receive threshold registers. Normally,
* these registers will be set to a default threshold that may be
* adjusted later by the driver's runtime code. However, if the
* ability to transmit pause frames is not enabled, then these
* registers will be set to 0.
*/
if (!(hw->fc.type & ixgb_fc_tx_pause)) {
IXGB_WRITE_REG(hw, FCRTL, 0);
IXGB_WRITE_REG(hw, FCRTH, 0);
} else {
/* We need to set up the Receive Threshold high and low water
* marks as well as (optionally) enabling the transmission of XON
* frames. */
if (hw->fc.send_xon) {
IXGB_WRITE_REG(hw, FCRTL,
(hw->fc.low_water | IXGB_FCRTL_XONE));
} else {
IXGB_WRITE_REG(hw, FCRTL, hw->fc.low_water);
}
IXGB_WRITE_REG(hw, FCRTH, hw->fc.high_water);
}
return status;
}
/******************************************************************************
* Reads a word from a device over the Management Data Interface (MDI) bus.
* This interface is used to manage Physical layer devices.
*
* hw - Struct containing variables accessed by hw code
* reg_address - Offset of device register being read.
* phy_address - Address of device on MDI.
* device_type - Also known as the Device ID or DID.
*
* Returns: Data word (16 bits) from MDI device.
*
* The 82597EX has support for several MDI access methods. This routine
* uses the new protocol MDI Single Command and Address Operation.
* This requires that first an address cycle command is sent, followed by a
* read command.
*****************************************************************************/
static u16
ixgb_read_phy_reg(struct ixgb_hw *hw,
u32 reg_address,
u32 phy_address,
u32 device_type)
{
u32 i;
u32 data;
u32 command = 0;
ASSERT(reg_address <= IXGB_MAX_PHY_REG_ADDRESS);
ASSERT(phy_address <= IXGB_MAX_PHY_ADDRESS);
ASSERT(device_type <= IXGB_MAX_PHY_DEV_TYPE);
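/* MSCA field layout, per the IXGB_MSCA_* masks in ixgb_hw.h:
 * [15:0] register address, [20:16] device type, [25:21] PHY address,
 * [27:26] op-code, [29:28] ST code, [30] self-clearing command strobe.
 */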
/* Setup and write the address cycle command */
command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
(device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
(phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
(IXGB_MSCA_ADDR_CYCLE | IXGB_MSCA_MDI_COMMAND));
IXGB_WRITE_REG(hw, MSCA, command);
/**************************************************************
** Check every 10 usec to see if the address cycle completed
** The COMMAND bit will clear when the operation is complete.
** This may take as long as 64 usecs (we'll wait 100 usecs max)
** from the CPU Write to the Ready bit assertion.
**************************************************************/
for (i = 0; i < 10; i++) {
udelay(10);
command = IXGB_READ_REG(hw, MSCA);
if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
break;
}
ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
/* Address cycle complete, setup and write the read command */
command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
(device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
(phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
(IXGB_MSCA_READ | IXGB_MSCA_MDI_COMMAND));
IXGB_WRITE_REG(hw, MSCA, command);
/**************************************************************
** Check every 10 usec to see if the read command completed
** The COMMAND bit will clear when the operation is complete.
** The read may take as long as 64 usecs (we'll wait 100 usecs max)
** from the CPU Write to the Ready bit assertion.
**************************************************************/
for (i = 0; i < 10; i++) {
udelay(10);
command = IXGB_READ_REG(hw, MSCA);
if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
break;
}
ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
/* Operation is complete, get the data from the MDIO Read/Write Data
* register and return.
*/
data = IXGB_READ_REG(hw, MSRWD);
data >>= IXGB_MSRWD_READ_DATA_SHIFT;
return (u16)data;
}
/******************************************************************************
* Writes a word to a device over the Management Data Interface (MDI) bus.
* This interface is used to manage Physical layer devices.
*
* hw - Struct containing variables accessed by hw code
* reg_address - Offset of device register being written.
* phy_address - Address of device on MDI.
* device_type - Also known as the Device ID or DID.
* data - 16-bit value to be written
*
* Returns: void.
*
* The 82597EX has support for several MDI access methods. This routine
* uses the new protocol MDI Single Command and Address Operation.
* This requires that first an address cycle command is sent, followed by a
* write command.
*****************************************************************************/
static void
ixgb_write_phy_reg(struct ixgb_hw *hw,
u32 reg_address,
u32 phy_address,
u32 device_type,
u16 data)
{
u32 i;
u32 command = 0;
ASSERT(reg_address <= IXGB_MAX_PHY_REG_ADDRESS);
ASSERT(phy_address <= IXGB_MAX_PHY_ADDRESS);
ASSERT(device_type <= IXGB_MAX_PHY_DEV_TYPE);
/* Put the data in the MDIO Read/Write Data register */
IXGB_WRITE_REG(hw, MSRWD, (u32)data);
/* Setup and write the address cycle command */
command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
(device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
(phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
(IXGB_MSCA_ADDR_CYCLE | IXGB_MSCA_MDI_COMMAND));
IXGB_WRITE_REG(hw, MSCA, command);
/**************************************************************
** Check every 10 usec to see if the address cycle completed
** The COMMAND bit will clear when the operation is complete.
** This may take as long as 64 usecs (we'll wait 100 usecs max)
** from the CPU Write to the Ready bit assertion.
**************************************************************/
for (i = 0; i < 10; i++) {
udelay(10);
command = IXGB_READ_REG(hw, MSCA);
if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
break;
}
ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
/* Address cycle complete, setup and write the write command */
command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
(device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
(phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
(IXGB_MSCA_WRITE | IXGB_MSCA_MDI_COMMAND));
IXGB_WRITE_REG(hw, MSCA, command);
/**************************************************************
** Check every 10 usec to see if the write command completed
** The COMMAND bit will clear when the operation is complete.
** The write may take as long as 64 usecs (we'll wait 100 usecs max)
** from the CPU Write to the Ready bit assertion.
**************************************************************/
for (i = 0; i < 10; i++) {
udelay(10);
command = IXGB_READ_REG(hw, MSCA);
if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
break;
}
ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
/* Operation is complete, return. */
}
/******************************************************************************
* Checks to see if the link status of the hardware has changed.
*
* hw - Struct containing variables accessed by hw code
*
* Called by any function that needs to check the link status of the adapter.
*****************************************************************************/
void
ixgb_check_for_link(struct ixgb_hw *hw)
{
u32 status_reg;
u32 xpcss_reg;
ENTER();
xpcss_reg = IXGB_READ_REG(hw, XPCSS);
status_reg = IXGB_READ_REG(hw, STATUS);
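/* Link is reported up only when the XGXS lanes are aligned and the MAC
 * reports link-up (LU); in the other cases a link reset is attempted.
 */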
if ((xpcss_reg & IXGB_XPCSS_ALIGN_STATUS) &&
(status_reg & IXGB_STATUS_LU)) {
hw->link_up = true;
} else if (!(xpcss_reg & IXGB_XPCSS_ALIGN_STATUS) &&
(status_reg & IXGB_STATUS_LU)) {
pr_debug("XPCSS Not Aligned while Status:LU is set\n");
hw->link_up = ixgb_link_reset(hw);
} else {
/*
* 82597EX errata. Since the lane deskew problem may prevent
* link, reset the link before reporting link down.
*/
hw->link_up = ixgb_link_reset(hw);
}
/* Anything else for 10 Gig?? */
}
/******************************************************************************
* Check for a bad link condition that may have occurred.
* The indication is that the RFC / LFC registers may be incrementing
* continually. A full adapter reset is required to recover.
*
* hw - Struct containing variables accessed by hw code
*
* Called by any function that needs to check the link status of the adapter.
*****************************************************************************/
bool ixgb_check_for_bad_link(struct ixgb_hw *hw)
{
u32 newLFC, newRFC;
bool bad_link_returncode = false;
if (hw->phy_type == ixgb_phy_type_txn17401) {
newLFC = IXGB_READ_REG(hw, LFC);
newRFC = IXGB_READ_REG(hw, RFC);
if ((hw->lastLFC + 250 < newLFC) ||
(hw->lastRFC + 250 < newRFC)) {
pr_debug("BAD LINK! too many LFC/RFC since last check\n");
bad_link_returncode = true;
}
hw->lastLFC = newLFC;
hw->lastRFC = newRFC;
}
return bad_link_returncode;
}
/******************************************************************************
* Clears all hardware statistics counters.
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_clear_hw_cntrs(struct ixgb_hw *hw)
{
ENTER();
/* if we are stopped or resetting, exit gracefully */
if (hw->adapter_stopped) {
pr_debug("Exiting because the adapter is stopped!!!\n");
return;
}
IXGB_READ_REG(hw, TPRL);
IXGB_READ_REG(hw, TPRH);
IXGB_READ_REG(hw, GPRCL);
IXGB_READ_REG(hw, GPRCH);
IXGB_READ_REG(hw, BPRCL);
IXGB_READ_REG(hw, BPRCH);
IXGB_READ_REG(hw, MPRCL);
IXGB_READ_REG(hw, MPRCH);
IXGB_READ_REG(hw, UPRCL);
IXGB_READ_REG(hw, UPRCH);
IXGB_READ_REG(hw, VPRCL);
IXGB_READ_REG(hw, VPRCH);
IXGB_READ_REG(hw, JPRCL);
IXGB_READ_REG(hw, JPRCH);
IXGB_READ_REG(hw, GORCL);
IXGB_READ_REG(hw, GORCH);
IXGB_READ_REG(hw, TORL);
IXGB_READ_REG(hw, TORH);
IXGB_READ_REG(hw, RNBC);
IXGB_READ_REG(hw, RUC);
IXGB_READ_REG(hw, ROC);
IXGB_READ_REG(hw, RLEC);
IXGB_READ_REG(hw, CRCERRS);
IXGB_READ_REG(hw, ICBC);
IXGB_READ_REG(hw, ECBC);
IXGB_READ_REG(hw, MPC);
IXGB_READ_REG(hw, TPTL);
IXGB_READ_REG(hw, TPTH);
IXGB_READ_REG(hw, GPTCL);
IXGB_READ_REG(hw, GPTCH);
IXGB_READ_REG(hw, BPTCL);
IXGB_READ_REG(hw, BPTCH);
IXGB_READ_REG(hw, MPTCL);
IXGB_READ_REG(hw, MPTCH);
IXGB_READ_REG(hw, UPTCL);
IXGB_READ_REG(hw, UPTCH);
IXGB_READ_REG(hw, VPTCL);
IXGB_READ_REG(hw, VPTCH);
IXGB_READ_REG(hw, JPTCL);
IXGB_READ_REG(hw, JPTCH);
IXGB_READ_REG(hw, GOTCL);
IXGB_READ_REG(hw, GOTCH);
IXGB_READ_REG(hw, TOTL);
IXGB_READ_REG(hw, TOTH);
IXGB_READ_REG(hw, DC);
IXGB_READ_REG(hw, PLT64C);
IXGB_READ_REG(hw, TSCTC);
IXGB_READ_REG(hw, TSCTFC);
IXGB_READ_REG(hw, IBIC);
IXGB_READ_REG(hw, RFC);
IXGB_READ_REG(hw, LFC);
IXGB_READ_REG(hw, PFRC);
IXGB_READ_REG(hw, PFTC);
IXGB_READ_REG(hw, MCFRC);
IXGB_READ_REG(hw, MCFTC);
IXGB_READ_REG(hw, XONRXC);
IXGB_READ_REG(hw, XONTXC);
IXGB_READ_REG(hw, XOFFRXC);
IXGB_READ_REG(hw, XOFFTXC);
IXGB_READ_REG(hw, RJC);
}
/******************************************************************************
* Turns on the software controllable LED
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
void
ixgb_led_on(struct ixgb_hw *hw)
{
u32 ctrl0_reg = IXGB_READ_REG(hw, CTRL0);
/* To turn on the LED, clear software-definable pin 0 (SDP0). */
ctrl0_reg &= ~IXGB_CTRL0_SDP0;
IXGB_WRITE_REG(hw, CTRL0, ctrl0_reg);
}
/******************************************************************************
* Turns off the software controllable LED
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
void
ixgb_led_off(struct ixgb_hw *hw)
{
u32 ctrl0_reg = IXGB_READ_REG(hw, CTRL0);
/* To turn off the LED, set software-definable pin 0 (SDP0). */
ctrl0_reg |= IXGB_CTRL0_SDP0;
IXGB_WRITE_REG(hw, CTRL0, ctrl0_reg);
}
/******************************************************************************
* Gets the current PCI bus type, speed, and width of the hardware
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_get_bus_info(struct ixgb_hw *hw)
{
u32 status_reg;
status_reg = IXGB_READ_REG(hw, STATUS);
hw->bus.type = (status_reg & IXGB_STATUS_PCIX_MODE) ?
ixgb_bus_type_pcix : ixgb_bus_type_pci;
if (hw->bus.type == ixgb_bus_type_pci) {
hw->bus.speed = (status_reg & IXGB_STATUS_PCI_SPD) ?
ixgb_bus_speed_66 : ixgb_bus_speed_33;
} else {
switch (status_reg & IXGB_STATUS_PCIX_SPD_MASK) {
case IXGB_STATUS_PCIX_SPD_66:
hw->bus.speed = ixgb_bus_speed_66;
break;
case IXGB_STATUS_PCIX_SPD_100:
hw->bus.speed = ixgb_bus_speed_100;
break;
case IXGB_STATUS_PCIX_SPD_133:
hw->bus.speed = ixgb_bus_speed_133;
break;
default:
hw->bus.speed = ixgb_bus_speed_reserved;
break;
}
}
hw->bus.width = (status_reg & IXGB_STATUS_BUS64) ?
ixgb_bus_width_64 : ixgb_bus_width_32;
}
/******************************************************************************
* Tests a MAC address to ensure it is a valid Individual Address
*
* mac_addr - pointer to MAC address.
*
*****************************************************************************/
static bool
mac_addr_valid(u8 *mac_addr)
{
bool is_valid = true;
ENTER();
/* Make sure it is not a multicast address */
if (is_multicast_ether_addr(mac_addr)) {
pr_debug("MAC address is multicast\n");
is_valid = false;
}
/* Not a broadcast address */
else if (is_broadcast_ether_addr(mac_addr)) {
pr_debug("MAC address is broadcast\n");
is_valid = false;
}
/* Reject the zero address */
else if (is_zero_ether_addr(mac_addr)) {
pr_debug("MAC address is all zeros\n");
is_valid = false;
}
return is_valid;
}
/******************************************************************************
* Resets the 10GbE link. Waits the settle time and returns the state of
* the link.
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static bool
ixgb_link_reset(struct ixgb_hw *hw)
{
bool link_status = false;
u8 wait_retries;
u8 lrst_retries = MAX_RESET_ITERATIONS;
do {
/* Reset the link */
IXGB_WRITE_REG(hw, CTRL0,
IXGB_READ_REG(hw, CTRL0) | IXGB_CTRL0_LRST);
/* Wait for link-up and lane re-alignment; re-arm the poll
 * counter for each reset attempt so the u8 cannot underflow
 */
wait_retries = MAX_RESET_ITERATIONS;
do {
udelay(IXGB_DELAY_USECS_AFTER_LINK_RESET);
link_status = (IXGB_READ_REG(hw, STATUS) & IXGB_STATUS_LU) &&
(IXGB_READ_REG(hw, XPCSS) & IXGB_XPCSS_ALIGN_STATUS);
} while (!link_status && --wait_retries);
} while (!link_status && --lrst_retries);
return link_status;
}
/******************************************************************************
* Resets the 10GbE optics module.
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
static void
ixgb_optics_reset(struct ixgb_hw *hw)
{
if (hw->phy_type == ixgb_phy_type_txn17401) {
ixgb_write_phy_reg(hw,
MDIO_CTRL1,
IXGB_PHY_ADDRESS,
MDIO_MMD_PMAPMD,
MDIO_CTRL1_RESET);
ixgb_read_phy_reg(hw, MDIO_CTRL1, IXGB_PHY_ADDRESS, MDIO_MMD_PMAPMD);
}
}
/******************************************************************************
* Resets the 10GbE optics module for Sun variant NIC.
*
* hw - Struct containing variables accessed by shared code
*****************************************************************************/
#define IXGB_BCM8704_USER_PMD_TX_CTRL_REG 0xC803
#define IXGB_BCM8704_USER_PMD_TX_CTRL_REG_VAL 0x0164
#define IXGB_BCM8704_USER_CTRL_REG 0xC800
#define IXGB_BCM8704_USER_CTRL_REG_VAL 0x7FBF
#define IXGB_BCM8704_USER_DEV3_ADDR 0x0003
#define IXGB_SUN_PHY_ADDRESS 0x0000
#define IXGB_SUN_PHY_RESET_DELAY 305
static void
ixgb_optics_reset_bcm(struct ixgb_hw *hw)
{
u32 ctrl = IXGB_READ_REG(hw, CTRL0);
ctrl &= ~IXGB_CTRL0_SDP2;
ctrl |= IXGB_CTRL0_SDP3;
IXGB_WRITE_REG(hw, CTRL0, ctrl);
IXGB_WRITE_FLUSH(hw);
/* SerDes needs extra delay */
msleep(IXGB_SUN_PHY_RESET_DELAY);
/* Broadcom 7408L configuration */
/* Reference clock config */
ixgb_write_phy_reg(hw,
IXGB_BCM8704_USER_PMD_TX_CTRL_REG,
IXGB_SUN_PHY_ADDRESS,
IXGB_BCM8704_USER_DEV3_ADDR,
IXGB_BCM8704_USER_PMD_TX_CTRL_REG_VAL);
/* we must read the registers twice */
ixgb_read_phy_reg(hw,
IXGB_BCM8704_USER_PMD_TX_CTRL_REG,
IXGB_SUN_PHY_ADDRESS,
IXGB_BCM8704_USER_DEV3_ADDR);
ixgb_read_phy_reg(hw,
IXGB_BCM8704_USER_PMD_TX_CTRL_REG,
IXGB_SUN_PHY_ADDRESS,
IXGB_BCM8704_USER_DEV3_ADDR);
ixgb_write_phy_reg(hw,
IXGB_BCM8704_USER_CTRL_REG,
IXGB_SUN_PHY_ADDRESS,
IXGB_BCM8704_USER_DEV3_ADDR,
IXGB_BCM8704_USER_CTRL_REG_VAL);
ixgb_read_phy_reg(hw,
IXGB_BCM8704_USER_CTRL_REG,
IXGB_SUN_PHY_ADDRESS,
IXGB_BCM8704_USER_DEV3_ADDR);
ixgb_read_phy_reg(hw,
IXGB_BCM8704_USER_CTRL_REG,
IXGB_SUN_PHY_ADDRESS,
IXGB_BCM8704_USER_DEV3_ADDR);
/* SerDes needs extra delay */
msleep(IXGB_SUN_PHY_RESET_DELAY);
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#ifndef _IXGB_HW_H_
#define _IXGB_HW_H_
#include <linux/mdio.h>
#include "ixgb_osdep.h"
/* Enums */
typedef enum {
ixgb_mac_unknown = 0,
ixgb_82597,
ixgb_num_macs
} ixgb_mac_type;
/* Types of physical layer modules */
typedef enum {
ixgb_phy_type_unknown = 0,
ixgb_phy_type_g6005, /* 850nm, MM fiber, XPAK transceiver */
ixgb_phy_type_g6104, /* 1310nm, SM fiber, XPAK transceiver */
ixgb_phy_type_txn17201, /* 850nm, MM fiber, XPAK transceiver */
ixgb_phy_type_txn17401, /* 1310nm, SM fiber, XENPAK transceiver */
ixgb_phy_type_bcm /* SUN specific board */
} ixgb_phy_type;
/* XPAK transceiver vendors, for the SR adapters */
typedef enum {
ixgb_xpak_vendor_intel,
ixgb_xpak_vendor_infineon
} ixgb_xpak_vendor;
/* Media Types */
typedef enum {
ixgb_media_type_unknown = 0,
ixgb_media_type_fiber = 1,
ixgb_media_type_copper = 2,
ixgb_num_media_types
} ixgb_media_type;
/* Flow Control Settings */
typedef enum {
ixgb_fc_none = 0,
ixgb_fc_rx_pause = 1,
ixgb_fc_tx_pause = 2,
ixgb_fc_full = 3,
ixgb_fc_default = 0xFF
} ixgb_fc_type;
/* PCI bus types */
typedef enum {
ixgb_bus_type_unknown = 0,
ixgb_bus_type_pci,
ixgb_bus_type_pcix
} ixgb_bus_type;
/* PCI bus speeds */
typedef enum {
ixgb_bus_speed_unknown = 0,
ixgb_bus_speed_33,
ixgb_bus_speed_66,
ixgb_bus_speed_100,
ixgb_bus_speed_133,
ixgb_bus_speed_reserved
} ixgb_bus_speed;
/* PCI bus widths */
typedef enum {
ixgb_bus_width_unknown = 0,
ixgb_bus_width_32,
ixgb_bus_width_64
} ixgb_bus_width;
#define IXGB_EEPROM_SIZE 64 /* Size in words */
#define SPEED_10000 10000
#define FULL_DUPLEX 2
#define MIN_NUMBER_OF_DESCRIPTORS 8
#define MAX_NUMBER_OF_DESCRIPTORS 0xFFF8 /* 13 bits in RDLEN/TDLEN, 128B aligned */
#define IXGB_DELAY_BEFORE_RESET 10 /* allow 10ms after idling rx/tx units */
#define IXGB_DELAY_AFTER_RESET 1 /* allow 1ms after the reset */
#define IXGB_DELAY_AFTER_EE_RESET 10 /* allow 10ms after the EEPROM reset */
#define IXGB_DELAY_USECS_AFTER_LINK_RESET 13 /* allow 13 microseconds after the reset */
/* NOTE: this is MICROSECONDS */
#define MAX_RESET_ITERATIONS 8 /* number of iterations to get things right */
/* General Registers */
#define IXGB_CTRL0 0x00000 /* Device Control Register 0 - RW */
#define IXGB_CTRL1 0x00008 /* Device Control Register 1 - RW */
#define IXGB_STATUS 0x00010 /* Device Status Register - RO */
#define IXGB_EECD 0x00018 /* EEPROM/Flash Control/Data Register - RW */
#define IXGB_MFS 0x00020 /* Maximum Frame Size - RW */
/* Interrupt */
#define IXGB_ICR 0x00080 /* Interrupt Cause Read - R/clr */
#define IXGB_ICS 0x00088 /* Interrupt Cause Set - RW */
#define IXGB_IMS 0x00090 /* Interrupt Mask Set/Read - RW */
#define IXGB_IMC 0x00098 /* Interrupt Mask Clear - WO */
/* Receive */
#define IXGB_RCTL 0x00100 /* RX Control - RW */
#define IXGB_FCRTL 0x00108 /* Flow Control Receive Threshold Low - RW */
#define IXGB_FCRTH 0x00110 /* Flow Control Receive Threshold High - RW */
#define IXGB_RDBAL 0x00118 /* RX Descriptor Base Low - RW */
#define IXGB_RDBAH 0x0011C /* RX Descriptor Base High - RW */
#define IXGB_RDLEN 0x00120 /* RX Descriptor Length - RW */
#define IXGB_RDH 0x00128 /* RX Descriptor Head - RW */
#define IXGB_RDT 0x00130 /* RX Descriptor Tail - RW */
#define IXGB_RDTR 0x00138 /* RX Delay Timer Ring - RW */
#define IXGB_RXDCTL 0x00140 /* Receive Descriptor Control - RW */
#define IXGB_RAIDC 0x00148 /* Receive Adaptive Interrupt Delay Control - RW */
#define IXGB_RXCSUM 0x00158 /* Receive Checksum Control - RW */
#define IXGB_RA 0x00180 /* Receive Address Array Base - RW */
#define IXGB_RAL 0x00180 /* Receive Address Low [0:15] - RW */
#define IXGB_RAH 0x00184 /* Receive Address High [0:15] - RW */
#define IXGB_MTA 0x00200 /* Multicast Table Array [0:127] - RW */
#define IXGB_VFTA 0x00400 /* VLAN Filter Table Array [0:127] - RW */
#define IXGB_REQ_RX_DESCRIPTOR_MULTIPLE 8
/* Transmit */
#define IXGB_TCTL 0x00600 /* TX Control - RW */
#define IXGB_TDBAL 0x00608 /* TX Descriptor Base Low - RW */
#define IXGB_TDBAH 0x0060C /* TX Descriptor Base High - RW */
#define IXGB_TDLEN 0x00610 /* TX Descriptor Length - RW */
#define IXGB_TDH 0x00618 /* TX Descriptor Head - RW */
#define IXGB_TDT 0x00620 /* TX Descriptor Tail - RW */
#define IXGB_TIDV 0x00628 /* TX Interrupt Delay Value - RW */
#define IXGB_TXDCTL 0x00630 /* Transmit Descriptor Control - RW */
#define IXGB_TSPMT 0x00638 /* TCP Segmentation PAD & Min Threshold - RW */
#define IXGB_PAP 0x00640 /* Pause and Pace - RW */
#define IXGB_REQ_TX_DESCRIPTOR_MULTIPLE 8
/* Physical */
#define IXGB_PCSC1 0x00700 /* PCS Control 1 - RW */
#define IXGB_PCSC2 0x00708 /* PCS Control 2 - RW */
#define IXGB_PCSS1 0x00710 /* PCS Status 1 - RO */
#define IXGB_PCSS2 0x00718 /* PCS Status 2 - RO */
#define IXGB_XPCSS 0x00720 /* 10GBASE-X PCS Status (or XGXS Lane Status) - RO */
#define IXGB_UCCR 0x00728 /* Unilink Circuit Control Register */
#define IXGB_XPCSTC 0x00730 /* 10GBASE-X PCS Test Control */
#define IXGB_MACA 0x00738 /* MDI Autoscan Command and Address - RW */
#define IXGB_APAE 0x00740 /* Autoscan PHY Address Enable - RW */
#define IXGB_ARD 0x00748 /* Autoscan Read Data - RO */
#define IXGB_AIS 0x00750 /* Autoscan Interrupt Status - RO */
#define IXGB_MSCA 0x00758 /* MDI Single Command and Address - RW */
#define IXGB_MSRWD 0x00760 /* MDI Single Read and Write Data - RW, RO */
/* Wake-up */
#define IXGB_WUFC 0x00808 /* Wake Up Filter Control - RW */
#define IXGB_WUS 0x00810 /* Wake Up Status - RO */
#define IXGB_FFLT 0x01000 /* Flexible Filter Length Table - RW */
#define IXGB_FFMT 0x01020 /* Flexible Filter Mask Table - RW */
#define IXGB_FTVT 0x01420 /* Flexible Filter Value Table - RW */
/* Statistics */
#define IXGB_TPRL 0x02000 /* Total Packets Received (Low) */
#define IXGB_TPRH 0x02004 /* Total Packets Received (High) */
#define IXGB_GPRCL 0x02008 /* Good Packets Received Count (Low) */
#define IXGB_GPRCH 0x0200C /* Good Packets Received Count (High) */
#define IXGB_BPRCL 0x02010 /* Broadcast Packets Received Count (Low) */
#define IXGB_BPRCH 0x02014 /* Broadcast Packets Received Count (High) */
#define IXGB_MPRCL 0x02018 /* Multicast Packets Received Count (Low) */
#define IXGB_MPRCH 0x0201C /* Multicast Packets Received Count (High) */
#define IXGB_UPRCL 0x02020 /* Unicast Packets Received Count (Low) */
#define IXGB_UPRCH 0x02024 /* Unicast Packets Received Count (High) */
#define IXGB_VPRCL 0x02028 /* VLAN Packets Received Count (Low) */
#define IXGB_VPRCH 0x0202C /* VLAN Packets Received Count (High) */
#define IXGB_JPRCL 0x02030 /* Jumbo Packets Received Count (Low) */
#define IXGB_JPRCH 0x02034 /* Jumbo Packets Received Count (High) */
#define IXGB_GORCL 0x02038 /* Good Octets Received Count (Low) */
#define IXGB_GORCH 0x0203C /* Good Octets Received Count (High) */
#define IXGB_TORL 0x02040 /* Total Octets Received (Low) */
#define IXGB_TORH 0x02044 /* Total Octets Received (High) */
#define IXGB_RNBC 0x02048 /* Receive No Buffers Count */
#define IXGB_RUC 0x02050 /* Receive Undersize Count */
#define IXGB_ROC 0x02058 /* Receive Oversize Count */
#define IXGB_RLEC 0x02060 /* Receive Length Error Count */
#define IXGB_CRCERRS 0x02068 /* CRC Error Count */
#define IXGB_ICBC 0x02070 /* Illegal control byte in mid-packet Count */
#define IXGB_ECBC 0x02078 /* Error Control byte in mid-packet Count */
#define IXGB_MPC 0x02080 /* Missed Packets Count */
#define IXGB_TPTL 0x02100 /* Total Packets Transmitted (Low) */
#define IXGB_TPTH 0x02104 /* Total Packets Transmitted (High) */
#define IXGB_GPTCL 0x02108 /* Good Packets Transmitted Count (Low) */
#define IXGB_GPTCH 0x0210C /* Good Packets Transmitted Count (High) */
#define IXGB_BPTCL 0x02110 /* Broadcast Packets Transmitted Count (Low) */
#define IXGB_BPTCH 0x02114 /* Broadcast Packets Transmitted Count (High) */
#define IXGB_MPTCL 0x02118 /* Multicast Packets Transmitted Count (Low) */
#define IXGB_MPTCH 0x0211C /* Multicast Packets Transmitted Count (High) */
#define IXGB_UPTCL 0x02120 /* Unicast Packets Transmitted Count (Low) */
#define IXGB_UPTCH 0x02124 /* Unicast Packets Transmitted Count (High) */
#define IXGB_VPTCL 0x02128 /* VLAN Packets Transmitted Count (Low) */
#define IXGB_VPTCH 0x0212C /* VLAN Packets Transmitted Count (High) */
#define IXGB_JPTCL 0x02130 /* Jumbo Packets Transmitted Count (Low) */
#define IXGB_JPTCH 0x02134 /* Jumbo Packets Transmitted Count (High) */
#define IXGB_GOTCL 0x02138 /* Good Octets Transmitted Count (Low) */
#define IXGB_GOTCH 0x0213C /* Good Octets Transmitted Count (High) */
#define IXGB_TOTL 0x02140 /* Total Octets Transmitted Count (Low) */
#define IXGB_TOTH 0x02144 /* Total Octets Transmitted Count (High) */
#define IXGB_DC 0x02148 /* Defer Count */
#define IXGB_PLT64C 0x02150 /* Packet Transmitted was less than 64 bytes Count */
#define IXGB_TSCTC 0x02170 /* TCP Segmentation Context Transmitted Count */
#define IXGB_TSCTFC 0x02178 /* TCP Segmentation Context Tx Fail Count */
#define IXGB_IBIC 0x02180 /* Illegal byte during Idle stream count */
#define IXGB_RFC 0x02188 /* Remote Fault Count */
#define IXGB_LFC 0x02190 /* Local Fault Count */
#define IXGB_PFRC 0x02198 /* Pause Frame Receive Count */
#define IXGB_PFTC 0x021A0 /* Pause Frame Transmit Count */
#define IXGB_MCFRC 0x021A8 /* MAC Control Frames (non-Pause) Received Count */
#define IXGB_MCFTC 0x021B0 /* MAC Control Frames (non-Pause) Transmitted Count */
#define IXGB_XONRXC 0x021B8 /* XON Received Count */
#define IXGB_XONTXC 0x021C0 /* XON Transmitted Count */
#define IXGB_XOFFRXC 0x021C8 /* XOFF Received Count */
#define IXGB_XOFFTXC 0x021D0 /* XOFF Transmitted Count */
#define IXGB_RJC 0x021D8 /* Receive Jabber Count */
/* CTRL0 Bit Masks */
#define IXGB_CTRL0_LRST 0x00000008
#define IXGB_CTRL0_JFE 0x00000010
#define IXGB_CTRL0_XLE 0x00000020
#define IXGB_CTRL0_MDCS 0x00000040
#define IXGB_CTRL0_CMDC 0x00000080
#define IXGB_CTRL0_SDP0 0x00040000
#define IXGB_CTRL0_SDP1 0x00080000
#define IXGB_CTRL0_SDP2 0x00100000
#define IXGB_CTRL0_SDP3 0x00200000
#define IXGB_CTRL0_SDP0_DIR 0x00400000
#define IXGB_CTRL0_SDP1_DIR 0x00800000
#define IXGB_CTRL0_SDP2_DIR 0x01000000
#define IXGB_CTRL0_SDP3_DIR 0x02000000
#define IXGB_CTRL0_RST 0x04000000
#define IXGB_CTRL0_RPE 0x08000000
#define IXGB_CTRL0_TPE 0x10000000
#define IXGB_CTRL0_VME 0x40000000
/* CTRL1 Bit Masks */
#define IXGB_CTRL1_GPI0_EN 0x00000001
#define IXGB_CTRL1_GPI1_EN 0x00000002
#define IXGB_CTRL1_GPI2_EN 0x00000004
#define IXGB_CTRL1_GPI3_EN 0x00000008
#define IXGB_CTRL1_SDP4 0x00000010
#define IXGB_CTRL1_SDP5 0x00000020
#define IXGB_CTRL1_SDP6 0x00000040
#define IXGB_CTRL1_SDP7 0x00000080
#define IXGB_CTRL1_SDP4_DIR 0x00000100
#define IXGB_CTRL1_SDP5_DIR 0x00000200
#define IXGB_CTRL1_SDP6_DIR 0x00000400
#define IXGB_CTRL1_SDP7_DIR 0x00000800
#define IXGB_CTRL1_EE_RST 0x00002000
#define IXGB_CTRL1_RO_DIS 0x00020000
#define IXGB_CTRL1_PCIXHM_MASK 0x00C00000
#define IXGB_CTRL1_PCIXHM_1_2 0x00000000
#define IXGB_CTRL1_PCIXHM_5_8 0x00400000
#define IXGB_CTRL1_PCIXHM_3_4 0x00800000
#define IXGB_CTRL1_PCIXHM_7_8 0x00C00000
/* STATUS Bit Masks */
#define IXGB_STATUS_LU 0x00000002
#define IXGB_STATUS_AIP 0x00000004
#define IXGB_STATUS_TXOFF 0x00000010
#define IXGB_STATUS_XAUIME 0x00000020
#define IXGB_STATUS_RES 0x00000040
#define IXGB_STATUS_RIS 0x00000080
#define IXGB_STATUS_RIE 0x00000100
#define IXGB_STATUS_RLF 0x00000200
#define IXGB_STATUS_RRF 0x00000400
#define IXGB_STATUS_PCI_SPD 0x00000800
#define IXGB_STATUS_BUS64 0x00001000
#define IXGB_STATUS_PCIX_MODE 0x00002000
#define IXGB_STATUS_PCIX_SPD_MASK 0x0000C000
#define IXGB_STATUS_PCIX_SPD_66 0x00000000
#define IXGB_STATUS_PCIX_SPD_100 0x00004000
#define IXGB_STATUS_PCIX_SPD_133 0x00008000
#define IXGB_STATUS_REV_ID_MASK 0x000F0000
#define IXGB_STATUS_REV_ID_SHIFT 16
/* EECD Bit Masks */
#define IXGB_EECD_SK 0x00000001
#define IXGB_EECD_CS 0x00000002
#define IXGB_EECD_DI 0x00000004
#define IXGB_EECD_DO 0x00000008
#define IXGB_EECD_FWE_MASK 0x00000030
#define IXGB_EECD_FWE_DIS 0x00000010
#define IXGB_EECD_FWE_EN 0x00000020
/* MFS */
#define IXGB_MFS_SHIFT 16
/* Interrupt Register Bit Masks (used for ICR, ICS, IMS, and IMC) */
#define IXGB_INT_TXDW 0x00000001
#define IXGB_INT_TXQE 0x00000002
#define IXGB_INT_LSC 0x00000004
#define IXGB_INT_RXSEQ 0x00000008
#define IXGB_INT_RXDMT0 0x00000010
#define IXGB_INT_RXO 0x00000040
#define IXGB_INT_RXT0 0x00000080
#define IXGB_INT_AUTOSCAN 0x00000200
#define IXGB_INT_GPI0 0x00000800
#define IXGB_INT_GPI1 0x00001000
#define IXGB_INT_GPI2 0x00002000
#define IXGB_INT_GPI3 0x00004000
/* RCTL Bit Masks */
#define IXGB_RCTL_RXEN 0x00000002
#define IXGB_RCTL_SBP 0x00000004
#define IXGB_RCTL_UPE 0x00000008
#define IXGB_RCTL_MPE 0x00000010
#define IXGB_RCTL_RDMTS_MASK 0x00000300
#define IXGB_RCTL_RDMTS_1_2 0x00000000
#define IXGB_RCTL_RDMTS_1_4 0x00000100
#define IXGB_RCTL_RDMTS_1_8 0x00000200
#define IXGB_RCTL_MO_MASK 0x00003000
#define IXGB_RCTL_MO_47_36 0x00000000
#define IXGB_RCTL_MO_46_35 0x00001000
#define IXGB_RCTL_MO_45_34 0x00002000
#define IXGB_RCTL_MO_43_32 0x00003000
#define IXGB_RCTL_MO_SHIFT 12
#define IXGB_RCTL_BAM 0x00008000
#define IXGB_RCTL_BSIZE_MASK 0x00030000
#define IXGB_RCTL_BSIZE_2048 0x00000000
#define IXGB_RCTL_BSIZE_4096 0x00010000
#define IXGB_RCTL_BSIZE_8192 0x00020000
#define IXGB_RCTL_BSIZE_16384 0x00030000
#define IXGB_RCTL_VFE 0x00040000
#define IXGB_RCTL_CFIEN 0x00080000
#define IXGB_RCTL_CFI 0x00100000
#define IXGB_RCTL_RPDA_MASK 0x00600000
#define IXGB_RCTL_RPDA_MC_MAC 0x00000000
#define IXGB_RCTL_MC_ONLY 0x00400000
#define IXGB_RCTL_CFF 0x00800000
#define IXGB_RCTL_SECRC 0x04000000
#define IXGB_RDT_FPDB 0x80000000
#define IXGB_RCTL_IDLE_RX_UNIT 0
/* FCRTL Bit Masks */
#define IXGB_FCRTL_XONE 0x80000000
/* RXDCTL Bit Masks */
#define IXGB_RXDCTL_PTHRESH_MASK 0x000001FF
#define IXGB_RXDCTL_PTHRESH_SHIFT 0
#define IXGB_RXDCTL_HTHRESH_MASK 0x0003FE00
#define IXGB_RXDCTL_HTHRESH_SHIFT 9
#define IXGB_RXDCTL_WTHRESH_MASK 0x07FC0000
#define IXGB_RXDCTL_WTHRESH_SHIFT 18
/* RAIDC Bit Masks */
#define IXGB_RAIDC_HIGHTHRS_MASK 0x0000003F
#define IXGB_RAIDC_DELAY_MASK 0x000FF800
#define IXGB_RAIDC_DELAY_SHIFT 11
#define IXGB_RAIDC_POLL_MASK 0x1FF00000
#define IXGB_RAIDC_POLL_SHIFT 20
#define IXGB_RAIDC_RXT_GATE 0x40000000
#define IXGB_RAIDC_EN 0x80000000
#define IXGB_RAIDC_POLL_1000_INTERRUPTS_PER_SECOND 1220
#define IXGB_RAIDC_POLL_5000_INTERRUPTS_PER_SECOND 244
#define IXGB_RAIDC_POLL_10000_INTERRUPTS_PER_SECOND 122
#define IXGB_RAIDC_POLL_20000_INTERRUPTS_PER_SECOND 61
/* RXCSUM Bit Masks */
#define IXGB_RXCSUM_IPOFL 0x00000100
#define IXGB_RXCSUM_TUOFL 0x00000200
/* RAH Bit Masks */
#define IXGB_RAH_ASEL_MASK 0x00030000
#define IXGB_RAH_ASEL_DEST 0x00000000
#define IXGB_RAH_ASEL_SRC 0x00010000
#define IXGB_RAH_AV 0x80000000
/* TCTL Bit Masks */
#define IXGB_TCTL_TCE 0x00000001
#define IXGB_TCTL_TXEN 0x00000002
#define IXGB_TCTL_TPDE 0x00000004
#define IXGB_TCTL_IDLE_TX_UNIT 0
/* TXDCTL Bit Masks */
#define IXGB_TXDCTL_PTHRESH_MASK 0x0000007F
#define IXGB_TXDCTL_HTHRESH_MASK 0x00007F00
#define IXGB_TXDCTL_HTHRESH_SHIFT 8
#define IXGB_TXDCTL_WTHRESH_MASK 0x007F0000
#define IXGB_TXDCTL_WTHRESH_SHIFT 16
/* TSPMT Bit Masks */
#define IXGB_TSPMT_TSMT_MASK 0x0000FFFF
#define IXGB_TSPMT_TSPBP_MASK 0xFFFF0000
#define IXGB_TSPMT_TSPBP_SHIFT 16
/* PAP Bit Masks */
#define IXGB_PAP_TXPC_MASK 0x0000FFFF
#define IXGB_PAP_TXPV_MASK 0x000F0000
#define IXGB_PAP_TXPV_10G 0x00000000
#define IXGB_PAP_TXPV_1G 0x00010000
#define IXGB_PAP_TXPV_2G 0x00020000
#define IXGB_PAP_TXPV_3G 0x00030000
#define IXGB_PAP_TXPV_4G 0x00040000
#define IXGB_PAP_TXPV_5G 0x00050000
#define IXGB_PAP_TXPV_6G 0x00060000
#define IXGB_PAP_TXPV_7G 0x00070000
#define IXGB_PAP_TXPV_8G 0x00080000
#define IXGB_PAP_TXPV_9G 0x00090000
#define IXGB_PAP_TXPV_WAN 0x000F0000
/* PCSC1 Bit Masks */
#define IXGB_PCSC1_LOOPBACK 0x00004000
/* PCSC2 Bit Masks */
#define IXGB_PCSC2_PCS_TYPE_MASK 0x00000003
#define IXGB_PCSC2_PCS_TYPE_10GBX 0x00000001
/* PCSS1 Bit Masks */
#define IXGB_PCSS1_LOCAL_FAULT 0x00000080
#define IXGB_PCSS1_RX_LINK_STATUS 0x00000004
/* PCSS2 Bit Masks */
#define IXGB_PCSS2_DEV_PRES_MASK 0x0000C000
#define IXGB_PCSS2_DEV_PRES 0x00004000
#define IXGB_PCSS2_TX_LF 0x00000800
#define IXGB_PCSS2_RX_LF 0x00000400
#define IXGB_PCSS2_10GBW 0x00000004
#define IXGB_PCSS2_10GBX 0x00000002
#define IXGB_PCSS2_10GBR 0x00000001
/* XPCSS Bit Masks */
#define IXGB_XPCSS_ALIGN_STATUS 0x00001000
#define IXGB_XPCSS_PATTERN_TEST 0x00000800
#define IXGB_XPCSS_LANE_3_SYNC 0x00000008
#define IXGB_XPCSS_LANE_2_SYNC 0x00000004
#define IXGB_XPCSS_LANE_1_SYNC 0x00000002
#define IXGB_XPCSS_LANE_0_SYNC 0x00000001
/* XPCSTC Bit Masks */
#define IXGB_XPCSTC_BERT_TRIG 0x00200000
#define IXGB_XPCSTC_BERT_SST 0x00100000
#define IXGB_XPCSTC_BERT_PSZ_MASK 0x000C0000
#define IXGB_XPCSTC_BERT_PSZ_SHIFT 18
#define IXGB_XPCSTC_BERT_PSZ_INF 0x00000003
#define IXGB_XPCSTC_BERT_PSZ_68 0x00000001
#define IXGB_XPCSTC_BERT_PSZ_1028 0x00000000
/* MSCA bit Masks */
/* New Protocol Address */
#define IXGB_MSCA_NP_ADDR_MASK 0x0000FFFF
#define IXGB_MSCA_NP_ADDR_SHIFT 0
/* Either Device Type or Register Address, depending on ST_CODE */
#define IXGB_MSCA_DEV_TYPE_MASK 0x001F0000
#define IXGB_MSCA_DEV_TYPE_SHIFT 16
#define IXGB_MSCA_PHY_ADDR_MASK 0x03E00000
#define IXGB_MSCA_PHY_ADDR_SHIFT 21
#define IXGB_MSCA_OP_CODE_MASK 0x0C000000
/* OP_CODE == 00, Address cycle, New Protocol */
/* OP_CODE == 01, Write operation */
/* OP_CODE == 10, Read operation */
/* OP_CODE == 11, Read, auto increment, New Protocol */
#define IXGB_MSCA_ADDR_CYCLE 0x00000000
#define IXGB_MSCA_WRITE 0x04000000
#define IXGB_MSCA_READ 0x08000000
#define IXGB_MSCA_READ_AUTOINC 0x0C000000
#define IXGB_MSCA_OP_CODE_SHIFT 26
#define IXGB_MSCA_ST_CODE_MASK 0x30000000
/* ST_CODE == 00, New Protocol */
/* ST_CODE == 01, Old Protocol */
#define IXGB_MSCA_NEW_PROTOCOL 0x00000000
#define IXGB_MSCA_OLD_PROTOCOL 0x10000000
#define IXGB_MSCA_ST_CODE_SHIFT 28
/* Initiate command, self-clearing when command completes */
#define IXGB_MSCA_MDI_COMMAND 0x40000000
/* MDI In Progress Enable. */
#define IXGB_MSCA_MDI_IN_PROG_EN 0x80000000
/* MSRWD bit masks */
#define IXGB_MSRWD_WRITE_DATA_MASK 0x0000FFFF
#define IXGB_MSRWD_WRITE_DATA_SHIFT 0
#define IXGB_MSRWD_READ_DATA_MASK 0xFFFF0000
#define IXGB_MSRWD_READ_DATA_SHIFT 16
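/* Minimal sketch (hypothetical helper, not part of the driver) of how the
 * MSCA fields above compose into a single MDIO command word; the driver's
 * PHY access helpers in ixgb_hw.c build equivalent values, issuing an
 * IXGB_MSCA_ADDR_CYCLE command first under the new protocol. */
static inline u32 ixgb_msca_cmd(u32 reg_addr, u32 dev_type, u32 phy_addr,
u32 op_code)
{
return (reg_addr & IXGB_MSCA_NP_ADDR_MASK) |
((dev_type << IXGB_MSCA_DEV_TYPE_SHIFT) & IXGB_MSCA_DEV_TYPE_MASK) |
((phy_addr << IXGB_MSCA_PHY_ADDR_SHIFT) & IXGB_MSCA_PHY_ADDR_MASK) |
op_code | IXGB_MSCA_NEW_PROTOCOL | IXGB_MSCA_MDI_COMMAND;
}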
/* Definitions for the optics devices on the MDIO bus. */
#define IXGB_PHY_ADDRESS 0x0 /* Single PHY, multiple "Devices" */
#define MDIO_PMA_PMD_XPAK_VENDOR_NAME 0x803A /* XPAK/XENPAK devices only */
/* Vendor-specific MDIO registers */
#define G6XXX_PMA_PMD_VS1 0xC001 /* Vendor-specific register */
#define G6XXX_XGXS_XAUI_VS2 0x18 /* Vendor-specific register */
#define G6XXX_PMA_PMD_VS1_PLL_RESET 0x80
#define G6XXX_PMA_PMD_VS1_REMOVE_PLL_RESET 0x00
#define G6XXX_XGXS_XAUI_VS2_INPUT_MASK 0x0F /* XAUI lanes synchronized */
/* Layout of a single receive descriptor. The controller assumes that this
* structure is packed into 16 bytes, which is a safe assumption with most
* compilers. However, some compilers may insert padding between the fields,
* in which case the structure must be packed in some compiler-specific
* manner. */
struct ixgb_rx_desc {
__le64 buff_addr;
__le16 length;
__le16 reserved;
u8 status;
u8 errors;
__le16 special;
};
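/* Illustrative compile-time check (not in the original source) of the
 * 16-byte packing assumption above, assuming static_assert() from
 * <linux/build_bug.h> is available: */
static_assert(sizeof(struct ixgb_rx_desc) == 16,
"Rx descriptor must pack to exactly 16 bytes");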
#define IXGB_RX_DESC_STATUS_DD 0x01
#define IXGB_RX_DESC_STATUS_EOP 0x02
#define IXGB_RX_DESC_STATUS_IXSM 0x04
#define IXGB_RX_DESC_STATUS_VP 0x08
#define IXGB_RX_DESC_STATUS_TCPCS 0x20
#define IXGB_RX_DESC_STATUS_IPCS 0x40
#define IXGB_RX_DESC_STATUS_PIF 0x80
#define IXGB_RX_DESC_ERRORS_CE 0x01
#define IXGB_RX_DESC_ERRORS_SE 0x02
#define IXGB_RX_DESC_ERRORS_P 0x08
#define IXGB_RX_DESC_ERRORS_TCPE 0x20
#define IXGB_RX_DESC_ERRORS_IPE 0x40
#define IXGB_RX_DESC_ERRORS_RXE 0x80
#define IXGB_RX_DESC_SPECIAL_VLAN_MASK 0x0FFF /* VLAN ID is in lower 12 bits */
#define IXGB_RX_DESC_SPECIAL_PRI_MASK 0xE000 /* Priority is in upper 3 bits */
#define IXGB_RX_DESC_SPECIAL_PRI_SHIFT 0x000D /* Priority is in upper 3 of 16 */
/* Layout of a single transmit descriptor. The controller assumes that this
* structure is packed into 16 bytes, which is a safe assumption with most
* compilers. However, some compilers may insert padding between the fields,
* in which case the structure must be packed in some compiler-specific
* manner. */
struct ixgb_tx_desc {
__le64 buff_addr;
__le32 cmd_type_len;
u8 status;
u8 popts;
__le16 vlan;
};
#define IXGB_TX_DESC_LENGTH_MASK 0x000FFFFF
#define IXGB_TX_DESC_TYPE_MASK 0x00F00000
#define IXGB_TX_DESC_TYPE_SHIFT 20
#define IXGB_TX_DESC_CMD_MASK 0xFF000000
#define IXGB_TX_DESC_CMD_SHIFT 24
#define IXGB_TX_DESC_CMD_EOP 0x01000000
#define IXGB_TX_DESC_CMD_TSE 0x04000000
#define IXGB_TX_DESC_CMD_RS 0x08000000
#define IXGB_TX_DESC_CMD_VLE 0x40000000
#define IXGB_TX_DESC_CMD_IDE 0x80000000
#define IXGB_TX_DESC_TYPE 0x00100000
#define IXGB_TX_DESC_STATUS_DD 0x01
#define IXGB_TX_DESC_POPTS_IXSM 0x01
#define IXGB_TX_DESC_POPTS_TXSM 0x02
#define IXGB_TX_DESC_SPECIAL_PRI_SHIFT IXGB_RX_DESC_SPECIAL_PRI_SHIFT /* Priority is in upper 3 of 16 */
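/* Minimal sketch (hypothetical helper, not part of the driver) showing how a
 * data descriptor's cmd_type_len packs the 20-bit length together with the
 * type and command bits above; ixgb_tx_queue() assembles the equivalent
 * value at run time. */
static inline __le32 ixgb_tx_cmd_type_len(u32 cmd_type, u32 len, bool last)
{
u32 v = cmd_type | (len & IXGB_TX_DESC_LENGTH_MASK);
if (last) /* final fragment: end-of-packet, report status */
v |= IXGB_TX_DESC_CMD_EOP | IXGB_TX_DESC_CMD_RS;
return cpu_to_le32(v);
}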
struct ixgb_context_desc {
u8 ipcss;
u8 ipcso;
__le16 ipcse;
u8 tucss;
u8 tucso;
__le16 tucse;
__le32 cmd_type_len;
u8 status;
u8 hdr_len;
__le16 mss;
};
#define IXGB_CONTEXT_DESC_CMD_TCP 0x01000000
#define IXGB_CONTEXT_DESC_CMD_IP 0x02000000
#define IXGB_CONTEXT_DESC_CMD_TSE 0x04000000
#define IXGB_CONTEXT_DESC_CMD_RS 0x08000000
#define IXGB_CONTEXT_DESC_CMD_IDE 0x80000000
#define IXGB_CONTEXT_DESC_TYPE 0x00000000
#define IXGB_CONTEXT_DESC_STATUS_DD 0x01
/* Filters */
#define IXGB_MC_TBL_SIZE 128 /* Multicast Filter Table (4096 bits) */
#define IXGB_VLAN_FILTER_TBL_SIZE 128 /* VLAN Filter Table (4096 bits) */
#define IXGB_RAR_ENTRIES 3 /* Number of entries in Rx Address array */
#define IXGB_MEMORY_REGISTER_BASE_ADDRESS 0
#define ENET_HEADER_SIZE 14
#define ENET_FCS_LENGTH 4
#define IXGB_MAX_NUM_MULTICAST_ADDRESSES 128
#define IXGB_MIN_ENET_FRAME_SIZE_WITHOUT_FCS 60
#define IXGB_MAX_ENET_FRAME_SIZE_WITHOUT_FCS 1514
#define IXGB_MAX_JUMBO_FRAME_SIZE 0x3F00
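/* 0x3F00 is 16128 bytes; subtracting ETH_HLEN (14) yields the 16114-byte
 * maximum MTU that ixgb_probe() advertises via netdev->max_mtu. */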
/* Phy Addresses */
#define IXGB_OPTICAL_PHY_ADDR 0x0 /* Optical Module phy address */
#define IXGB_XAUII_PHY_ADDR 0x1 /* Xauii transceiver phy address */
#define IXGB_DIAG_PHY_ADDR 0x1F /* Diagnostic Device phy address */
/* This structure takes a 64k flash and maps it for identification commands */
struct ixgb_flash_buffer {
u8 manufacturer_id;
u8 device_id;
u8 filler1[0x2AA8];
u8 cmd2;
u8 filler2[0x2AAA];
u8 cmd1;
u8 filler3[0xAAAA];
};
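/* Offset arithmetic for the layout above (illustrative): cmd2 lands at
 * 2 + 0x2AA8 = 0x2AAA and cmd1 at 0x2AAB + 0x2AAA = 0x5555 -- the classic
 * JEDEC flash command addresses -- while 0x5556 + 0xAAAA = 0x10000 pads the
 * structure to exactly 64KB. */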
/* Flow control parameters */
struct ixgb_fc {
u32 high_water; /* Flow Control High-water */
u32 low_water; /* Flow Control Low-water */
u16 pause_time; /* Flow Control Pause timer */
bool send_xon; /* Flow control send XON */
ixgb_fc_type type; /* Type of flow control */
};
/* The historical defaults for the flow control values are given below. */
#define FC_DEFAULT_HI_THRESH (0x8000) /* 32KB */
#define FC_DEFAULT_LO_THRESH (0x4000) /* 16KB */
#define FC_DEFAULT_TX_TIMER (0x100) /* ~130 us */
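/* Minimal sketch (hypothetical helper, not part of the driver) of seeding
 * struct ixgb_fc with the historical defaults above; ixgb_fc_full matches
 * the "RX/TX" flow control mode reported by the watchdog. */
static inline void ixgb_fc_set_defaults(struct ixgb_fc *fc)
{
fc->high_water = FC_DEFAULT_HI_THRESH; /* 32KB */
fc->low_water = FC_DEFAULT_LO_THRESH; /* 16KB */
fc->pause_time = FC_DEFAULT_TX_TIMER; /* ~130 us */
fc->send_xon = true;
fc->type = ixgb_fc_full;
}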
/* Phy definitions */
#define IXGB_MAX_PHY_REG_ADDRESS 0xFFFF
#define IXGB_MAX_PHY_ADDRESS 31
#define IXGB_MAX_PHY_DEV_TYPE 31
/* Bus parameters */
struct ixgb_bus {
ixgb_bus_speed speed;
ixgb_bus_width width;
ixgb_bus_type type;
};
struct ixgb_hw {
u8 __iomem *hw_addr;/* Base Address of the hardware */
void *back; /* Pointer to OS-dependent struct */
struct ixgb_fc fc; /* Flow control parameters */
struct ixgb_bus bus; /* Bus parameters */
u32 phy_id; /* Phy Identifier */
u32 phy_addr; /* XGMII address of Phy */
ixgb_mac_type mac_type; /* Identifier for MAC controller */
ixgb_phy_type phy_type; /* Transceiver/phy identifier */
u32 max_frame_size; /* Maximum frame size supported */
u32 mc_filter_type; /* Multicast filter hash type */
u32 num_mc_addrs; /* Number of current Multicast addrs */
u8 curr_mac_addr[ETH_ALEN]; /* Individual address currently programmed in MAC */
u32 num_tx_desc; /* Number of Transmit descriptors */
u32 num_rx_desc; /* Number of Receive descriptors */
u32 rx_buffer_size; /* Size of Receive buffer */
bool link_up; /* true if link is valid */
bool adapter_stopped; /* State of adapter */
u16 device_id; /* device id from PCI configuration space */
u16 vendor_id; /* vendor id from PCI configuration space */
u8 revision_id; /* revision id from PCI configuration space */
u16 subsystem_vendor_id; /* subsystem vendor id from PCI configuration space */
u16 subsystem_id; /* subsystem id from PCI configuration space */
u32 bar0; /* Base Address registers */
u32 bar1;
u32 bar2;
u32 bar3;
u16 pci_cmd_word; /* PCI command register id from PCI configuration space */
__le16 eeprom[IXGB_EEPROM_SIZE]; /* EEPROM contents read at init time */
unsigned long io_base; /* Our I/O mapped location */
u32 lastLFC;
u32 lastRFC;
};
/* Statistics reported by the hardware */
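/* Counter naming: the hardware splits most 64-bit counters across two 32-bit
 * registers, so they appear below as ...l/...h low/high pairs (for example
 * gorcl/gorch for good octets received and torl/torh for total octets
 * received), recombined by ixgb_update_stats(). */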
struct ixgb_hw_stats {
u64 tprl;
u64 tprh;
u64 gprcl;
u64 gprch;
u64 bprcl;
u64 bprch;
u64 mprcl;
u64 mprch;
u64 uprcl;
u64 uprch;
u64 vprcl;
u64 vprch;
u64 jprcl;
u64 jprch;
u64 gorcl;
u64 gorch;
u64 torl;
u64 torh;
u64 rnbc;
u64 ruc;
u64 roc;
u64 rlec;
u64 crcerrs;
u64 icbc;
u64 ecbc;
u64 mpc;
u64 tptl;
u64 tpth;
u64 gptcl;
u64 gptch;
u64 bptcl;
u64 bptch;
u64 mptcl;
u64 mptch;
u64 uptcl;
u64 uptch;
u64 vptcl;
u64 vptch;
u64 jptcl;
u64 jptch;
u64 gotcl;
u64 gotch;
u64 totl;
u64 toth;
u64 dc;
u64 plt64c;
u64 tsctc;
u64 tsctfc;
u64 ibic;
u64 rfc;
u64 lfc;
u64 pfrc;
u64 pftc;
u64 mcfrc;
u64 mcftc;
u64 xonrxc;
u64 xontxc;
u64 xoffrxc;
u64 xofftxc;
u64 rjc;
};
/* Function Prototypes */
bool ixgb_adapter_stop(struct ixgb_hw *hw);
bool ixgb_init_hw(struct ixgb_hw *hw);
bool ixgb_adapter_start(struct ixgb_hw *hw);
void ixgb_check_for_link(struct ixgb_hw *hw);
bool ixgb_check_for_bad_link(struct ixgb_hw *hw);
void ixgb_rar_set(struct ixgb_hw *hw, const u8 *addr, u32 index);
/* Filters (multicast, vlan, receive) */
void ixgb_mc_addr_list_update(struct ixgb_hw *hw, u8 *mc_addr_list,
u32 mc_addr_count, u32 pad);
/* Vfta functions */
void ixgb_write_vfta(struct ixgb_hw *hw, u32 offset, u32 value);
/* Access functions to eeprom data */
void ixgb_get_ee_mac_addr(struct ixgb_hw *hw, u8 *mac_addr);
u32 ixgb_get_ee_pba_number(struct ixgb_hw *hw);
u16 ixgb_get_ee_device_id(struct ixgb_hw *hw);
bool ixgb_get_eeprom_data(struct ixgb_hw *hw);
__le16 ixgb_get_eeprom_word(struct ixgb_hw *hw, u16 index);
/* Everything else */
void ixgb_led_on(struct ixgb_hw *hw);
void ixgb_led_off(struct ixgb_hw *hw);
void ixgb_write_pci_cfg(struct ixgb_hw *hw, u32 reg, u16 *value);
#endif /* _IXGB_HW_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#ifndef _IXGB_IDS_H_
#define _IXGB_IDS_H_
/**********************************************************************
** The Device and Vendor IDs for 10 Gigabit MACs
**********************************************************************/
#define IXGB_DEVICE_ID_82597EX 0x1048
#define IXGB_DEVICE_ID_82597EX_SR 0x1A48
#define IXGB_DEVICE_ID_82597EX_LR 0x1B48
#define IXGB_SUBDEVICE_ID_A11F 0xA11F
#define IXGB_SUBDEVICE_ID_A01F 0xA01F
#define IXGB_DEVICE_ID_82597EX_CX4 0x109E
#define IXGB_SUBDEVICE_ID_A00C 0xA00C
#define IXGB_SUBDEVICE_ID_A01C 0xA01C
#define IXGB_SUBDEVICE_ID_7036 0x7036
#endif /* #ifndef _IXGB_IDS_H_ */
/* End of File */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/prefetch.h>
#include "ixgb.h"
char ixgb_driver_name[] = "ixgb";
static char ixgb_driver_string[] = "Intel(R) PRO/10GbE Network Driver";
static const char ixgb_copyright[] = "Copyright (c) 1999-2008 Intel Corporation.";
#define IXGB_CB_LENGTH 256
static unsigned int copybreak __read_mostly = IXGB_CB_LENGTH;
module_param(copybreak, uint, 0644);
MODULE_PARM_DESC(copybreak,
"Maximum size of packet that is copied to a new buffer on receive");
/* ixgb_pci_tbl - PCI Device ID Table
*
* Wildcard entries (PCI_ANY_ID) should come last
* Last entry must be all 0s
*
* { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
* Class, Class Mask, private data (not used) }
*/
static const struct pci_device_id ixgb_pci_tbl[] = {
{PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_CX4,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_SR,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_LR,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
/* required last entry */
{0,}
};
MODULE_DEVICE_TABLE(pci, ixgb_pci_tbl);
/* Local Function Prototypes */
static int ixgb_init_module(void);
static void ixgb_exit_module(void);
static int ixgb_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
static void ixgb_remove(struct pci_dev *pdev);
static int ixgb_sw_init(struct ixgb_adapter *adapter);
static int ixgb_open(struct net_device *netdev);
static int ixgb_close(struct net_device *netdev);
static void ixgb_configure_tx(struct ixgb_adapter *adapter);
static void ixgb_configure_rx(struct ixgb_adapter *adapter);
static void ixgb_setup_rctl(struct ixgb_adapter *adapter);
static void ixgb_clean_tx_ring(struct ixgb_adapter *adapter);
static void ixgb_clean_rx_ring(struct ixgb_adapter *adapter);
static void ixgb_set_multi(struct net_device *netdev);
static void ixgb_watchdog(struct timer_list *t);
static netdev_tx_t ixgb_xmit_frame(struct sk_buff *skb,
struct net_device *netdev);
static int ixgb_change_mtu(struct net_device *netdev, int new_mtu);
static int ixgb_set_mac(struct net_device *netdev, void *p);
static irqreturn_t ixgb_intr(int irq, void *data);
static bool ixgb_clean_tx_irq(struct ixgb_adapter *adapter);
static int ixgb_clean(struct napi_struct *, int);
static bool ixgb_clean_rx_irq(struct ixgb_adapter *, int *, int);
static void ixgb_alloc_rx_buffers(struct ixgb_adapter *, int);
static void ixgb_tx_timeout(struct net_device *dev, unsigned int txqueue);
static void ixgb_tx_timeout_task(struct work_struct *work);
static void ixgb_vlan_strip_enable(struct ixgb_adapter *adapter);
static void ixgb_vlan_strip_disable(struct ixgb_adapter *adapter);
static int ixgb_vlan_rx_add_vid(struct net_device *netdev,
__be16 proto, u16 vid);
static int ixgb_vlan_rx_kill_vid(struct net_device *netdev,
__be16 proto, u16 vid);
static void ixgb_restore_vlan(struct ixgb_adapter *adapter);
static pci_ers_result_t ixgb_io_error_detected(struct pci_dev *pdev,
pci_channel_state_t state);
static pci_ers_result_t ixgb_io_slot_reset(struct pci_dev *pdev);
static void ixgb_io_resume(struct pci_dev *pdev);
static const struct pci_error_handlers ixgb_err_handler = {
.error_detected = ixgb_io_error_detected,
.slot_reset = ixgb_io_slot_reset,
.resume = ixgb_io_resume,
};
static struct pci_driver ixgb_driver = {
.name = ixgb_driver_name,
.id_table = ixgb_pci_tbl,
.probe = ixgb_probe,
.remove = ixgb_remove,
.err_handler = &ixgb_err_handler
};
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION("Intel(R) PRO/10GbE Network Driver");
MODULE_LICENSE("GPL v2");
#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK)
static int debug = -1;
module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
/**
* ixgb_init_module - Driver Registration Routine
*
* ixgb_init_module is the first routine called when the driver is
* loaded. All it does is register with the PCI subsystem.
**/
static int __init
ixgb_init_module(void)
{
pr_info("%s\n", ixgb_driver_string);
pr_info("%s\n", ixgb_copyright);
return pci_register_driver(&ixgb_driver);
}
module_init(ixgb_init_module);
/**
* ixgb_exit_module - Driver Exit Cleanup Routine
*
* ixgb_exit_module is called just before the driver is removed
* from memory.
**/
static void __exit
ixgb_exit_module(void)
{
pci_unregister_driver(&ixgb_driver);
}
module_exit(ixgb_exit_module);
/**
* ixgb_irq_disable - Mask off interrupt generation on the NIC
* @adapter: board private structure
**/
static void
ixgb_irq_disable(struct ixgb_adapter *adapter)
{
IXGB_WRITE_REG(&adapter->hw, IMC, ~0);
IXGB_WRITE_FLUSH(&adapter->hw);
synchronize_irq(adapter->pdev->irq);
}
/**
* ixgb_irq_enable - Enable default interrupt generation settings
* @adapter: board private structure
**/
static void
ixgb_irq_enable(struct ixgb_adapter *adapter)
{
u32 val = IXGB_INT_RXT0 | IXGB_INT_RXDMT0 |
IXGB_INT_TXDW | IXGB_INT_LSC;
if (adapter->hw.subsystem_vendor_id == PCI_VENDOR_ID_SUN)
val |= IXGB_INT_GPI0;
IXGB_WRITE_REG(&adapter->hw, IMS, val);
IXGB_WRITE_FLUSH(&adapter->hw);
}
int
ixgb_up(struct ixgb_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
int err, irq_flags = IRQF_SHARED;
int max_frame = netdev->mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH;
struct ixgb_hw *hw = &adapter->hw;
/* hardware has been reset, we need to reload some things */
ixgb_rar_set(hw, netdev->dev_addr, 0);
ixgb_set_multi(netdev);
ixgb_restore_vlan(adapter);
ixgb_configure_tx(adapter);
ixgb_setup_rctl(adapter);
ixgb_configure_rx(adapter);
ixgb_alloc_rx_buffers(adapter, IXGB_DESC_UNUSED(&adapter->rx_ring));
/* disable interrupts and get the hardware into a known state */
IXGB_WRITE_REG(&adapter->hw, IMC, 0xffffffff);
/* only enable MSI if bus is in PCI-X mode */
if (IXGB_READ_REG(&adapter->hw, STATUS) & IXGB_STATUS_PCIX_MODE) {
err = pci_enable_msi(adapter->pdev);
if (!err) {
adapter->have_msi = true;
irq_flags = 0;
}
/* proceed to try to request regular interrupt */
}
err = request_irq(adapter->pdev->irq, ixgb_intr, irq_flags,
netdev->name, netdev);
if (err) {
if (adapter->have_msi)
pci_disable_msi(adapter->pdev);
netif_err(adapter, probe, adapter->netdev,
"Unable to allocate interrupt Error: %d\n", err);
return err;
}
if ((hw->max_frame_size != max_frame) ||
(hw->max_frame_size !=
(IXGB_READ_REG(hw, MFS) >> IXGB_MFS_SHIFT))) {
hw->max_frame_size = max_frame;
IXGB_WRITE_REG(hw, MFS, hw->max_frame_size << IXGB_MFS_SHIFT);
if (hw->max_frame_size >
IXGB_MAX_ENET_FRAME_SIZE_WITHOUT_FCS + ENET_FCS_LENGTH) {
u32 ctrl0 = IXGB_READ_REG(hw, CTRL0);
if (!(ctrl0 & IXGB_CTRL0_JFE)) {
ctrl0 |= IXGB_CTRL0_JFE;
IXGB_WRITE_REG(hw, CTRL0, ctrl0);
}
}
}
clear_bit(__IXGB_DOWN, &adapter->flags);
napi_enable(&adapter->napi);
ixgb_irq_enable(adapter);
netif_wake_queue(netdev);
mod_timer(&adapter->watchdog_timer, jiffies);
return 0;
}
void
ixgb_down(struct ixgb_adapter *adapter, bool kill_watchdog)
{
struct net_device *netdev = adapter->netdev;
/* prevent the interrupt handler from restarting watchdog */
set_bit(__IXGB_DOWN, &adapter->flags);
netif_carrier_off(netdev);
napi_disable(&adapter->napi);
/* waiting for NAPI to complete can re-enable interrupts */
ixgb_irq_disable(adapter);
free_irq(adapter->pdev->irq, netdev);
if (adapter->have_msi)
pci_disable_msi(adapter->pdev);
if (kill_watchdog)
del_timer_sync(&adapter->watchdog_timer);
adapter->link_speed = 0;
adapter->link_duplex = 0;
netif_stop_queue(netdev);
ixgb_reset(adapter);
ixgb_clean_tx_ring(adapter);
ixgb_clean_rx_ring(adapter);
}
void
ixgb_reset(struct ixgb_adapter *adapter)
{
struct ixgb_hw *hw = &adapter->hw;
ixgb_adapter_stop(hw);
if (!ixgb_init_hw(hw))
netif_err(adapter, probe, adapter->netdev, "ixgb_init_hw failed\n");
/* restore frame size information */
IXGB_WRITE_REG(hw, MFS, hw->max_frame_size << IXGB_MFS_SHIFT);
if (hw->max_frame_size >
IXGB_MAX_ENET_FRAME_SIZE_WITHOUT_FCS + ENET_FCS_LENGTH) {
u32 ctrl0 = IXGB_READ_REG(hw, CTRL0);
if (!(ctrl0 & IXGB_CTRL0_JFE)) {
ctrl0 |= IXGB_CTRL0_JFE;
IXGB_WRITE_REG(hw, CTRL0, ctrl0);
}
}
}
static netdev_features_t
ixgb_fix_features(struct net_device *netdev, netdev_features_t features)
{
/*
* Tx VLAN insertion does not work per HW design when Rx stripping is
* disabled.
*/
if (!(features & NETIF_F_HW_VLAN_CTAG_RX))
features &= ~NETIF_F_HW_VLAN_CTAG_TX;
return features;
}
static int
ixgb_set_features(struct net_device *netdev, netdev_features_t features)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
netdev_features_t changed = features ^ netdev->features;
if (!(changed & (NETIF_F_RXCSUM|NETIF_F_HW_VLAN_CTAG_RX)))
return 0;
adapter->rx_csum = !!(features & NETIF_F_RXCSUM);
if (netif_running(netdev)) {
ixgb_down(adapter, true);
ixgb_up(adapter);
ixgb_set_speed_duplex(netdev);
} else
ixgb_reset(adapter);
return 0;
}
static const struct net_device_ops ixgb_netdev_ops = {
.ndo_open = ixgb_open,
.ndo_stop = ixgb_close,
.ndo_start_xmit = ixgb_xmit_frame,
.ndo_set_rx_mode = ixgb_set_multi,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = ixgb_set_mac,
.ndo_change_mtu = ixgb_change_mtu,
.ndo_tx_timeout = ixgb_tx_timeout,
.ndo_vlan_rx_add_vid = ixgb_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = ixgb_vlan_rx_kill_vid,
.ndo_fix_features = ixgb_fix_features,
.ndo_set_features = ixgb_set_features,
};
/**
* ixgb_probe - Device Initialization Routine
* @pdev: PCI device information struct
* @ent: entry in ixgb_pci_tbl
*
* Returns 0 on success, negative on failure
*
* ixgb_probe initializes an adapter identified by a pci_dev structure.
* The OS initialization, configuring of the adapter private structure,
* and a hardware reset occur.
**/
static int
ixgb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct net_device *netdev = NULL;
struct ixgb_adapter *adapter;
static int cards_found = 0;
u8 addr[ETH_ALEN];
int i;
int err;
err = pci_enable_device(pdev);
if (err)
return err;
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
if (err) {
pr_err("No usable DMA configuration, aborting\n");
goto err_dma_mask;
}
err = pci_request_regions(pdev, ixgb_driver_name);
if (err)
goto err_request_regions;
pci_set_master(pdev);
netdev = alloc_etherdev(sizeof(struct ixgb_adapter));
if (!netdev) {
err = -ENOMEM;
goto err_alloc_etherdev;
}
SET_NETDEV_DEV(netdev, &pdev->dev);
pci_set_drvdata(pdev, netdev);
adapter = netdev_priv(netdev);
adapter->netdev = netdev;
adapter->pdev = pdev;
adapter->hw.back = adapter;
adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
adapter->hw.hw_addr = pci_ioremap_bar(pdev, BAR_0);
if (!adapter->hw.hw_addr) {
err = -EIO;
goto err_ioremap;
}
for (i = BAR_1; i < PCI_STD_NUM_BARS; i++) {
if (pci_resource_len(pdev, i) == 0)
continue;
if (pci_resource_flags(pdev, i) & IORESOURCE_IO) {
adapter->hw.io_base = pci_resource_start(pdev, i);
break;
}
}
netdev->netdev_ops = &ixgb_netdev_ops;
ixgb_set_ethtool_ops(netdev);
netdev->watchdog_timeo = 5 * HZ;
netif_napi_add(netdev, &adapter->napi, ixgb_clean);
strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
adapter->bd_number = cards_found;
adapter->link_speed = 0;
adapter->link_duplex = 0;
/* setup the private structure */
err = ixgb_sw_init(adapter);
if (err)
goto err_sw_init;
netdev->hw_features = NETIF_F_SG |
NETIF_F_TSO |
NETIF_F_HW_CSUM |
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_CTAG_RX;
netdev->features = netdev->hw_features |
NETIF_F_HW_VLAN_CTAG_FILTER;
netdev->hw_features |= NETIF_F_RXCSUM;
netdev->features |= NETIF_F_HIGHDMA;
netdev->vlan_features |= NETIF_F_HIGHDMA;
/* MTU range: 68 - 16114 */
netdev->min_mtu = ETH_MIN_MTU;
netdev->max_mtu = IXGB_MAX_JUMBO_FRAME_SIZE - ETH_HLEN;
/* make sure the EEPROM is good */
if (!ixgb_validate_eeprom_checksum(&adapter->hw)) {
netif_err(adapter, probe, adapter->netdev,
"The EEPROM Checksum Is Not Valid\n");
err = -EIO;
goto err_eeprom;
}
ixgb_get_ee_mac_addr(&adapter->hw, addr);
eth_hw_addr_set(netdev, addr);
if (!is_valid_ether_addr(netdev->dev_addr)) {
netif_err(adapter, probe, adapter->netdev, "Invalid MAC Address\n");
err = -EIO;
goto err_eeprom;
}
adapter->part_num = ixgb_get_ee_pba_number(&adapter->hw);
timer_setup(&adapter->watchdog_timer, ixgb_watchdog, 0);
INIT_WORK(&adapter->tx_timeout_task, ixgb_tx_timeout_task);
strcpy(netdev->name, "eth%d");
err = register_netdev(netdev);
if (err)
goto err_register;
/* carrier off reporting is important to ethtool even BEFORE open */
netif_carrier_off(netdev);
netif_info(adapter, probe, adapter->netdev,
"Intel(R) PRO/10GbE Network Connection\n");
ixgb_check_options(adapter);
/* reset the hardware with the new settings */
ixgb_reset(adapter);
cards_found++;
return 0;
err_register:
err_sw_init:
err_eeprom:
iounmap(adapter->hw.hw_addr);
err_ioremap:
free_netdev(netdev);
err_alloc_etherdev:
pci_release_regions(pdev);
err_request_regions:
err_dma_mask:
pci_disable_device(pdev);
return err;
}
/**
* ixgb_remove - Device Removal Routine
* @pdev: PCI device information struct
*
* ixgb_remove is called by the PCI subsystem to alert the driver
* that it should release a PCI device. This could be caused by a
* Hot-Plug event, or because the driver is going to be removed from
* memory.
**/
static void
ixgb_remove(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct ixgb_adapter *adapter = netdev_priv(netdev);
cancel_work_sync(&adapter->tx_timeout_task);
unregister_netdev(netdev);
iounmap(adapter->hw.hw_addr);
pci_release_regions(pdev);
free_netdev(netdev);
pci_disable_device(pdev);
}
/**
* ixgb_sw_init - Initialize general software structures (struct ixgb_adapter)
* @adapter: board private structure to initialize
*
* ixgb_sw_init initializes the Adapter private data structure.
* Fields are initialized based on PCI device information and
* OS network device settings (MTU size).
**/
static int
ixgb_sw_init(struct ixgb_adapter *adapter)
{
struct ixgb_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
/* PCI config space info */
hw->vendor_id = pdev->vendor;
hw->device_id = pdev->device;
hw->subsystem_vendor_id = pdev->subsystem_vendor;
hw->subsystem_id = pdev->subsystem_device;
hw->max_frame_size = netdev->mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH;
adapter->rx_buffer_len = hw->max_frame_size + 8; /* + 8 for errata */
if ((hw->device_id == IXGB_DEVICE_ID_82597EX) ||
(hw->device_id == IXGB_DEVICE_ID_82597EX_CX4) ||
(hw->device_id == IXGB_DEVICE_ID_82597EX_LR) ||
(hw->device_id == IXGB_DEVICE_ID_82597EX_SR))
hw->mac_type = ixgb_82597;
else {
/* should never have loaded on this device */
netif_err(adapter, probe, adapter->netdev, "unsupported device id\n");
}
/* enable flow control to be programmed */
hw->fc.send_xon = 1;
set_bit(__IXGB_DOWN, &adapter->flags);
return 0;
}
/**
* ixgb_open - Called when a network interface is made active
* @netdev: network interface device structure
*
* Returns 0 on success, negative value on failure
*
* The open entry point is called when a network interface is made
* active by the system (IFF_UP). At this point all resources needed
* for transmit and receive operations are allocated, the interrupt
* handler is registered with the OS, the watchdog timer is started,
* and the stack is notified that the interface is ready.
**/
static int
ixgb_open(struct net_device *netdev)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
int err;
/* allocate transmit descriptors */
err = ixgb_setup_tx_resources(adapter);
if (err)
goto err_setup_tx;
netif_carrier_off(netdev);
/* allocate receive descriptors */
err = ixgb_setup_rx_resources(adapter);
if (err)
goto err_setup_rx;
err = ixgb_up(adapter);
if (err)
goto err_up;
netif_start_queue(netdev);
return 0;
err_up:
ixgb_free_rx_resources(adapter);
err_setup_rx:
ixgb_free_tx_resources(adapter);
err_setup_tx:
ixgb_reset(adapter);
return err;
}
/**
* ixgb_close - Disables a network interface
* @netdev: network interface device structure
*
* Returns 0, this is not allowed to fail
*
* The close entry point is called when an interface is de-activated
* by the OS. The hardware is still under the drivers control, but
* needs to be disabled. A global MAC reset is issued to stop the
* hardware, and all transmit and receive resources are freed.
**/
static int
ixgb_close(struct net_device *netdev)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
ixgb_down(adapter, true);
ixgb_free_tx_resources(adapter);
ixgb_free_rx_resources(adapter);
return 0;
}
/**
* ixgb_setup_tx_resources - allocate Tx resources (Descriptors)
* @adapter: board private structure
*
* Return 0 on success, negative on failure
**/
int
ixgb_setup_tx_resources(struct ixgb_adapter *adapter)
{
struct ixgb_desc_ring *txdr = &adapter->tx_ring;
struct pci_dev *pdev = adapter->pdev;
int size;
size = sizeof(struct ixgb_buffer) * txdr->count;
txdr->buffer_info = vzalloc(size);
if (!txdr->buffer_info)
return -ENOMEM;
/* round up to nearest 4K */
txdr->size = txdr->count * sizeof(struct ixgb_tx_desc);
txdr->size = ALIGN(txdr->size, 4096);
txdr->desc = dma_alloc_coherent(&pdev->dev, txdr->size, &txdr->dma,
GFP_KERNEL);
if (!txdr->desc) {
vfree(txdr->buffer_info);
return -ENOMEM;
}
txdr->next_to_use = 0;
txdr->next_to_clean = 0;
return 0;
}
/**
* ixgb_configure_tx - Configure 82597 Transmit Unit after Reset.
* @adapter: board private structure
*
* Configure the Tx unit of the MAC after a reset.
**/
static void
ixgb_configure_tx(struct ixgb_adapter *adapter)
{
u64 tdba = adapter->tx_ring.dma;
u32 tdlen = adapter->tx_ring.count * sizeof(struct ixgb_tx_desc);
u32 tctl;
struct ixgb_hw *hw = &adapter->hw;
/* Setup the Base and Length of the Tx Descriptor Ring
* tx_ring.dma can be either a 32 or 64 bit value
*/
IXGB_WRITE_REG(hw, TDBAL, (tdba & 0x00000000ffffffffULL));
IXGB_WRITE_REG(hw, TDBAH, (tdba >> 32));
IXGB_WRITE_REG(hw, TDLEN, tdlen);
/* Setup the HW Tx Head and Tail descriptor pointers */
IXGB_WRITE_REG(hw, TDH, 0);
IXGB_WRITE_REG(hw, TDT, 0);
/* don't set up txdctl, it induces performance problems if configured
* incorrectly */
/* Set the Tx Interrupt Delay register */
IXGB_WRITE_REG(hw, TIDV, adapter->tx_int_delay);
/* Program the Transmit Control Register */
tctl = IXGB_TCTL_TCE | IXGB_TCTL_TXEN | IXGB_TCTL_TPDE;
IXGB_WRITE_REG(hw, TCTL, tctl);
/* Setup Transmit Descriptor Settings for this adapter */
adapter->tx_cmd_type =
IXGB_TX_DESC_TYPE |
(adapter->tx_int_delay_enable ? IXGB_TX_DESC_CMD_IDE : 0);
}
/**
* ixgb_setup_rx_resources - allocate Rx resources (Descriptors)
* @adapter: board private structure
*
* Returns 0 on success, negative on failure
**/
int
ixgb_setup_rx_resources(struct ixgb_adapter *adapter)
{
struct ixgb_desc_ring *rxdr = &adapter->rx_ring;
struct pci_dev *pdev = adapter->pdev;
int size;
size = sizeof(struct ixgb_buffer) * rxdr->count;
rxdr->buffer_info = vzalloc(size);
if (!rxdr->buffer_info)
return -ENOMEM;
/* Round up to nearest 4K */
rxdr->size = rxdr->count * sizeof(struct ixgb_rx_desc);
rxdr->size = ALIGN(rxdr->size, 4096);
rxdr->desc = dma_alloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma,
GFP_KERNEL);
if (!rxdr->desc) {
vfree(rxdr->buffer_info);
return -ENOMEM;
}
rxdr->next_to_clean = 0;
rxdr->next_to_use = 0;
return 0;
}
/**
* ixgb_setup_rctl - configure the receive control register
* @adapter: Board private structure
**/
static void
ixgb_setup_rctl(struct ixgb_adapter *adapter)
{
u32 rctl;
rctl = IXGB_READ_REG(&adapter->hw, RCTL);
rctl &= ~(3 << IXGB_RCTL_MO_SHIFT);
rctl |=
IXGB_RCTL_BAM | IXGB_RCTL_RDMTS_1_2 |
IXGB_RCTL_RXEN | IXGB_RCTL_CFF |
(adapter->hw.mc_filter_type << IXGB_RCTL_MO_SHIFT);
rctl |= IXGB_RCTL_SECRC;
if (adapter->rx_buffer_len <= IXGB_RXBUFFER_2048)
rctl |= IXGB_RCTL_BSIZE_2048;
else if (adapter->rx_buffer_len <= IXGB_RXBUFFER_4096)
rctl |= IXGB_RCTL_BSIZE_4096;
else if (adapter->rx_buffer_len <= IXGB_RXBUFFER_8192)
rctl |= IXGB_RCTL_BSIZE_8192;
else if (adapter->rx_buffer_len <= IXGB_RXBUFFER_16384)
rctl |= IXGB_RCTL_BSIZE_16384;
IXGB_WRITE_REG(&adapter->hw, RCTL, rctl);
}
/**
* ixgb_configure_rx - Configure 82597 Receive Unit after Reset.
* @adapter: board private structure
*
* Configure the Rx unit of the MAC after a reset.
**/
static void
ixgb_configure_rx(struct ixgb_adapter *adapter)
{
u64 rdba = adapter->rx_ring.dma;
u32 rdlen = adapter->rx_ring.count * sizeof(struct ixgb_rx_desc);
struct ixgb_hw *hw = &adapter->hw;
u32 rctl;
u32 rxcsum;
/* make sure receives are disabled while setting up the descriptors */
rctl = IXGB_READ_REG(hw, RCTL);
IXGB_WRITE_REG(hw, RCTL, rctl & ~IXGB_RCTL_RXEN);
/* set the Receive Delay Timer Register */
IXGB_WRITE_REG(hw, RDTR, adapter->rx_int_delay);
/* Setup the Base and Length of the Rx Descriptor Ring */
IXGB_WRITE_REG(hw, RDBAL, (rdba & 0x00000000ffffffffULL));
IXGB_WRITE_REG(hw, RDBAH, (rdba >> 32));
IXGB_WRITE_REG(hw, RDLEN, rdlen);
/* Setup the HW Rx Head and Tail Descriptor Pointers */
IXGB_WRITE_REG(hw, RDH, 0);
IXGB_WRITE_REG(hw, RDT, 0);
/* due to the hardware errata with RXDCTL, we are unable to use any of
* the performance enhancing features of it without causing other
* subtle bugs, some of the bugs could include receive length
* corruption at high data rates (WTHRESH > 0) and/or receive
* descriptor ring irregularities (particularly in hardware cache) */
IXGB_WRITE_REG(hw, RXDCTL, 0);
/* Enable Receive Checksum Offload for TCP and UDP */
if (adapter->rx_csum) {
rxcsum = IXGB_READ_REG(hw, RXCSUM);
rxcsum |= IXGB_RXCSUM_TUOFL;
IXGB_WRITE_REG(hw, RXCSUM, rxcsum);
}
/* Enable Receives */
IXGB_WRITE_REG(hw, RCTL, rctl);
}
/**
* ixgb_free_tx_resources - Free Tx Resources
* @adapter: board private structure
*
* Free all transmit software resources
**/
void
ixgb_free_tx_resources(struct ixgb_adapter *adapter)
{
struct pci_dev *pdev = adapter->pdev;
ixgb_clean_tx_ring(adapter);
vfree(adapter->tx_ring.buffer_info);
adapter->tx_ring.buffer_info = NULL;
dma_free_coherent(&pdev->dev, adapter->tx_ring.size,
adapter->tx_ring.desc, adapter->tx_ring.dma);
adapter->tx_ring.desc = NULL;
}
static void
ixgb_unmap_and_free_tx_resource(struct ixgb_adapter *adapter,
struct ixgb_buffer *buffer_info)
{
if (buffer_info->dma) {
if (buffer_info->mapped_as_page)
dma_unmap_page(&adapter->pdev->dev, buffer_info->dma,
buffer_info->length, DMA_TO_DEVICE);
else
dma_unmap_single(&adapter->pdev->dev, buffer_info->dma,
buffer_info->length, DMA_TO_DEVICE);
buffer_info->dma = 0;
}
if (buffer_info->skb) {
dev_kfree_skb_any(buffer_info->skb);
buffer_info->skb = NULL;
}
buffer_info->time_stamp = 0;
/* length and next_to_watch are always (re)initialized on the Tx
 * path, so they need not be cleared here:
 * buffer_info->length = 0;
 * buffer_info->next_to_watch = 0; */
}
/**
* ixgb_clean_tx_ring - Free Tx Buffers
* @adapter: board private structure
**/
static void
ixgb_clean_tx_ring(struct ixgb_adapter *adapter)
{
struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
struct ixgb_buffer *buffer_info;
unsigned long size;
unsigned int i;
/* Free all the Tx ring sk_buffs */
for (i = 0; i < tx_ring->count; i++) {
buffer_info = &tx_ring->buffer_info[i];
ixgb_unmap_and_free_tx_resource(adapter, buffer_info);
}
size = sizeof(struct ixgb_buffer) * tx_ring->count;
memset(tx_ring->buffer_info, 0, size);
/* Zero out the descriptor ring */
memset(tx_ring->desc, 0, tx_ring->size);
tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0;
IXGB_WRITE_REG(&adapter->hw, TDH, 0);
IXGB_WRITE_REG(&adapter->hw, TDT, 0);
}
/**
* ixgb_free_rx_resources - Free Rx Resources
* @adapter: board private structure
*
* Free all receive software resources
**/
void
ixgb_free_rx_resources(struct ixgb_adapter *adapter)
{
struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
struct pci_dev *pdev = adapter->pdev;
ixgb_clean_rx_ring(adapter);
vfree(rx_ring->buffer_info);
rx_ring->buffer_info = NULL;
dma_free_coherent(&pdev->dev, rx_ring->size, rx_ring->desc,
rx_ring->dma);
rx_ring->desc = NULL;
}
/**
* ixgb_clean_rx_ring - Free Rx Buffers
* @adapter: board private structure
**/
static void
ixgb_clean_rx_ring(struct ixgb_adapter *adapter)
{
struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
struct ixgb_buffer *buffer_info;
struct pci_dev *pdev = adapter->pdev;
unsigned long size;
unsigned int i;
/* Free all the Rx ring sk_buffs */
for (i = 0; i < rx_ring->count; i++) {
buffer_info = &rx_ring->buffer_info[i];
if (buffer_info->dma) {
dma_unmap_single(&pdev->dev,
buffer_info->dma,
buffer_info->length,
DMA_FROM_DEVICE);
buffer_info->dma = 0;
buffer_info->length = 0;
}
if (buffer_info->skb) {
dev_kfree_skb(buffer_info->skb);
buffer_info->skb = NULL;
}
}
size = sizeof(struct ixgb_buffer) * rx_ring->count;
memset(rx_ring->buffer_info, 0, size);
/* Zero out the descriptor ring */
memset(rx_ring->desc, 0, rx_ring->size);
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
IXGB_WRITE_REG(&adapter->hw, RDH, 0);
IXGB_WRITE_REG(&adapter->hw, RDT, 0);
}
/**
* ixgb_set_mac - Change the Ethernet Address of the NIC
* @netdev: network interface device structure
* @p: pointer to an address structure
*
* Returns 0 on success, negative on failure
**/
static int
ixgb_set_mac(struct net_device *netdev, void *p)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct sockaddr *addr = p;
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
eth_hw_addr_set(netdev, addr->sa_data);
ixgb_rar_set(&adapter->hw, addr->sa_data, 0);
return 0;
}
/**
* ixgb_set_multi - Multicast and Promiscuous mode set
* @netdev: network interface device structure
*
* The set_multi entry point is called whenever the multicast address
* list or the network interface flags are updated. This routine is
* responsible for configuring the hardware for proper multicast,
* promiscuous mode, and all-multi behavior.
**/
static void
ixgb_set_multi(struct net_device *netdev)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
struct netdev_hw_addr *ha;
u32 rctl;
/* Check for Promiscuous and All Multicast modes */
rctl = IXGB_READ_REG(hw, RCTL);
if (netdev->flags & IFF_PROMISC) {
rctl |= (IXGB_RCTL_UPE | IXGB_RCTL_MPE);
/* disable VLAN filtering */
rctl &= ~IXGB_RCTL_CFIEN;
rctl &= ~IXGB_RCTL_VFE;
} else {
if (netdev->flags & IFF_ALLMULTI) {
rctl |= IXGB_RCTL_MPE;
rctl &= ~IXGB_RCTL_UPE;
} else {
rctl &= ~(IXGB_RCTL_UPE | IXGB_RCTL_MPE);
}
/* enable VLAN filtering */
rctl |= IXGB_RCTL_VFE;
rctl &= ~IXGB_RCTL_CFIEN;
}
if (netdev_mc_count(netdev) > IXGB_MAX_NUM_MULTICAST_ADDRESSES) {
rctl |= IXGB_RCTL_MPE;
IXGB_WRITE_REG(hw, RCTL, rctl);
} else {
u8 *mta = kmalloc_array(ETH_ALEN,
IXGB_MAX_NUM_MULTICAST_ADDRESSES,
GFP_ATOMIC);
u8 *addr;
if (!mta)
goto alloc_failed;
IXGB_WRITE_REG(hw, RCTL, rctl);
addr = mta;
netdev_for_each_mc_addr(ha, netdev) {
memcpy(addr, ha->addr, ETH_ALEN);
addr += ETH_ALEN;
}
ixgb_mc_addr_list_update(hw, mta, netdev_mc_count(netdev), 0);
kfree(mta);
}
alloc_failed:
if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
ixgb_vlan_strip_enable(adapter);
else
ixgb_vlan_strip_disable(adapter);
}
/**
* ixgb_watchdog - Timer Call-back
* @t: pointer to timer_list containing our private info pointer
**/
static void
ixgb_watchdog(struct timer_list *t)
{
struct ixgb_adapter *adapter = from_timer(adapter, t, watchdog_timer);
struct net_device *netdev = adapter->netdev;
struct ixgb_desc_ring *txdr = &adapter->tx_ring;
ixgb_check_for_link(&adapter->hw);
if (ixgb_check_for_bad_link(&adapter->hw)) {
/* force the reset path */
netif_stop_queue(netdev);
}
if (adapter->hw.link_up) {
if (!netif_carrier_ok(netdev)) {
netdev_info(netdev,
"NIC Link is Up 10 Gbps Full Duplex, Flow Control: %s\n",
(adapter->hw.fc.type == ixgb_fc_full) ?
"RX/TX" :
(adapter->hw.fc.type == ixgb_fc_rx_pause) ?
"RX" :
(adapter->hw.fc.type == ixgb_fc_tx_pause) ?
"TX" : "None");
adapter->link_speed = 10000;
adapter->link_duplex = FULL_DUPLEX;
netif_carrier_on(netdev);
}
} else {
if (netif_carrier_ok(netdev)) {
adapter->link_speed = 0;
adapter->link_duplex = 0;
netdev_info(netdev, "NIC Link is Down\n");
netif_carrier_off(netdev);
}
}
ixgb_update_stats(adapter);
if (!netif_carrier_ok(netdev)) {
if (IXGB_DESC_UNUSED(txdr) + 1 < txdr->count) {
/* We've lost link, so the controller stops DMA,
* but we've got queued Tx work that's never going
* to get done, so reset controller to flush Tx.
* (Do the reset outside of interrupt context). */
schedule_work(&adapter->tx_timeout_task);
/* return immediately since reset is imminent */
return;
}
}
/* Force detection of hung controller every watchdog period */
adapter->detect_tx_hung = true;
/* generate an interrupt to force clean up of any stragglers */
IXGB_WRITE_REG(&adapter->hw, ICS, IXGB_INT_TXDW);
/* Reset the timer */
mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ);
}
#define IXGB_TX_FLAGS_CSUM 0x00000001
#define IXGB_TX_FLAGS_VLAN 0x00000002
#define IXGB_TX_FLAGS_TSO 0x00000004
static int
ixgb_tso(struct ixgb_adapter *adapter, struct sk_buff *skb)
{
struct ixgb_context_desc *context_desc;
unsigned int i;
u8 ipcss, ipcso, tucss, tucso, hdr_len;
u16 ipcse, tucse, mss;
if (likely(skb_is_gso(skb))) {
struct ixgb_buffer *buffer_info;
struct iphdr *iph;
int err;
err = skb_cow_head(skb, 0);
if (err < 0)
return err;
hdr_len = skb_tcp_all_headers(skb);
mss = skb_shinfo(skb)->gso_size;
iph = ip_hdr(skb);
iph->tot_len = 0;
iph->check = 0;
tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
iph->daddr, 0,
IPPROTO_TCP, 0);
ipcss = skb_network_offset(skb);
ipcso = (void *)&(iph->check) - (void *)skb->data;
ipcse = skb_transport_offset(skb) - 1;
tucss = skb_transport_offset(skb);
tucso = (void *)&(tcp_hdr(skb)->check) - (void *)skb->data;
tucse = 0;
i = adapter->tx_ring.next_to_use;
context_desc = IXGB_CONTEXT_DESC(adapter->tx_ring, i);
buffer_info = &adapter->tx_ring.buffer_info[i];
WARN_ON(buffer_info->dma != 0);
context_desc->ipcss = ipcss;
context_desc->ipcso = ipcso;
context_desc->ipcse = cpu_to_le16(ipcse);
context_desc->tucss = tucss;
context_desc->tucso = tucso;
context_desc->tucse = cpu_to_le16(tucse);
context_desc->mss = cpu_to_le16(mss);
context_desc->hdr_len = hdr_len;
context_desc->status = 0;
context_desc->cmd_type_len = cpu_to_le32(
IXGB_CONTEXT_DESC_TYPE
| IXGB_CONTEXT_DESC_CMD_TSE
| IXGB_CONTEXT_DESC_CMD_IP
| IXGB_CONTEXT_DESC_CMD_TCP
| IXGB_CONTEXT_DESC_CMD_IDE
| (skb->len - (hdr_len)));
if (++i == adapter->tx_ring.count) i = 0;
adapter->tx_ring.next_to_use = i;
return 1;
}
return 0;
}
static bool
ixgb_tx_csum(struct ixgb_adapter *adapter, struct sk_buff *skb)
{
struct ixgb_context_desc *context_desc;
unsigned int i;
u8 css, cso;
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
struct ixgb_buffer *buffer_info;
css = skb_checksum_start_offset(skb);
cso = css + skb->csum_offset;
i = adapter->tx_ring.next_to_use;
context_desc = IXGB_CONTEXT_DESC(adapter->tx_ring, i);
buffer_info = &adapter->tx_ring.buffer_info[i];
WARN_ON(buffer_info->dma != 0);
context_desc->tucss = css;
context_desc->tucso = cso;
context_desc->tucse = 0;
/* zero out any previously existing data in one instruction */
*(u32 *)&(context_desc->ipcss) = 0;
context_desc->status = 0;
context_desc->hdr_len = 0;
context_desc->mss = 0;
context_desc->cmd_type_len =
cpu_to_le32(IXGB_CONTEXT_DESC_TYPE
| IXGB_TX_DESC_CMD_IDE);
if (++i == adapter->tx_ring.count) i = 0;
adapter->tx_ring.next_to_use = i;
return true;
}
return false;
}
#define IXGB_MAX_TXD_PWR 14
#define IXGB_MAX_DATA_PER_TXD (1<<IXGB_MAX_TXD_PWR)
static int
ixgb_tx_map(struct ixgb_adapter *adapter, struct sk_buff *skb,
unsigned int first)
{
struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
struct pci_dev *pdev = adapter->pdev;
struct ixgb_buffer *buffer_info;
int len = skb_headlen(skb);
unsigned int offset = 0, size, count = 0, i;
unsigned int mss = skb_shinfo(skb)->gso_size;
unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
unsigned int f;
i = tx_ring->next_to_use;
while (len) {
buffer_info = &tx_ring->buffer_info[i];
size = min(len, IXGB_MAX_DATA_PER_TXD);
/* Workaround for premature desc write-backs
* in TSO mode. Append 4-byte sentinel desc */
if (unlikely(mss && !nr_frags && size == len && size > 8))
size -= 4;
buffer_info->length = size;
WARN_ON(buffer_info->dma != 0);
buffer_info->time_stamp = jiffies;
buffer_info->mapped_as_page = false;
buffer_info->dma = dma_map_single(&pdev->dev,
skb->data + offset,
size, DMA_TO_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma))
goto dma_error;
buffer_info->next_to_watch = 0;
len -= size;
offset += size;
count++;
if (len) {
i++;
if (i == tx_ring->count)
i = 0;
}
}
for (f = 0; f < nr_frags; f++) {
const skb_frag_t *frag = &skb_shinfo(skb)->frags[f];
len = skb_frag_size(frag);
offset = 0;
while (len) {
i++;
if (i == tx_ring->count)
i = 0;
buffer_info = &tx_ring->buffer_info[i];
size = min(len, IXGB_MAX_DATA_PER_TXD);
/* Workaround for premature desc write-backs
* in TSO mode. Append 4-byte sentinel desc */
if (unlikely(mss && (f == (nr_frags - 1))
&& size == len && size > 8))
size -= 4;
buffer_info->length = size;
buffer_info->time_stamp = jiffies;
buffer_info->mapped_as_page = true;
buffer_info->dma =
skb_frag_dma_map(&pdev->dev, frag, offset, size,
DMA_TO_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma))
goto dma_error;
buffer_info->next_to_watch = 0;
len -= size;
offset += size;
count++;
}
}
tx_ring->buffer_info[i].skb = skb;
tx_ring->buffer_info[first].next_to_watch = i;
return count;
dma_error:
dev_err(&pdev->dev, "TX DMA map failed\n");
buffer_info->dma = 0;
if (count)
count--;
while (count--) {
if (i == 0)
i += tx_ring->count;
i--;
buffer_info = &tx_ring->buffer_info[i];
ixgb_unmap_and_free_tx_resource(adapter, buffer_info);
}
return 0;
}
static void
ixgb_tx_queue(struct ixgb_adapter *adapter, int count, int vlan_id, int tx_flags)
{
struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
struct ixgb_tx_desc *tx_desc = NULL;
struct ixgb_buffer *buffer_info;
u32 cmd_type_len = adapter->tx_cmd_type;
u8 status = 0;
u8 popts = 0;
unsigned int i;
if (tx_flags & IXGB_TX_FLAGS_TSO) {
cmd_type_len |= IXGB_TX_DESC_CMD_TSE;
popts |= (IXGB_TX_DESC_POPTS_IXSM | IXGB_TX_DESC_POPTS_TXSM);
}
if (tx_flags & IXGB_TX_FLAGS_CSUM)
popts |= IXGB_TX_DESC_POPTS_TXSM;
if (tx_flags & IXGB_TX_FLAGS_VLAN)
cmd_type_len |= IXGB_TX_DESC_CMD_VLE;
i = tx_ring->next_to_use;
while (count--) {
buffer_info = &tx_ring->buffer_info[i];
tx_desc = IXGB_TX_DESC(*tx_ring, i);
tx_desc->buff_addr = cpu_to_le64(buffer_info->dma);
tx_desc->cmd_type_len =
cpu_to_le32(cmd_type_len | buffer_info->length);
tx_desc->status = status;
tx_desc->popts = popts;
tx_desc->vlan = cpu_to_le16(vlan_id);
if (++i == tx_ring->count) i = 0;
}
tx_desc->cmd_type_len |=
cpu_to_le32(IXGB_TX_DESC_CMD_EOP | IXGB_TX_DESC_CMD_RS);
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
* such as IA-64). */
wmb();
tx_ring->next_to_use = i;
IXGB_WRITE_REG(&adapter->hw, TDT, i);
}
static int __ixgb_maybe_stop_tx(struct net_device *netdev, int size)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
netif_stop_queue(netdev);
/* Herbert's original patch had:
* smp_mb__after_netif_stop_queue();
* but since that doesn't exist yet, just open code it. */
smp_mb();
/* We need to check again in a case another CPU has just
* made room available. */
if (likely(IXGB_DESC_UNUSED(tx_ring) < size))
return -EBUSY;
/* A reprieve! */
netif_start_queue(netdev);
++adapter->restart_queue;
return 0;
}
static int ixgb_maybe_stop_tx(struct net_device *netdev,
struct ixgb_desc_ring *tx_ring, int size)
{
if (likely(IXGB_DESC_UNUSED(tx_ring) >= size))
return 0;
return __ixgb_maybe_stop_tx(netdev, size);
}
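/* The smp_mb() in __ixgb_maybe_stop_tx() pairs with the one in
 * ixgb_clean_tx_irq(): the transmit path stops the queue and then re-checks
 * ring space, while the clean path advances next_to_clean and then re-checks
 * queue state, so a wakeup cannot be lost between the two. */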
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) (((S) >> IXGB_MAX_TXD_PWR) + \
(((S) & (IXGB_MAX_DATA_PER_TXD - 1)) ? 1 : 0))
#define DESC_NEEDED TXD_USE_COUNT(IXGB_MAX_DATA_PER_TXD) /* skb->data */ + \
MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE) + 1 /* for context */ \
+ 1 /* one more needed for sentinel TSO workaround */
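/* Worked example (illustrative): a maximal 16KB chunk costs
 * TXD_USE_COUNT(16384) == (16384 >> 14) + 0 == 1 descriptor, each of
 * MAX_SKB_FRAGS pages costs TXD_USE_COUNT(PAGE_SIZE) (1 on 4KB pages), and
 * the trailing +1s cover the context descriptor and the TSO sentinel
 * workaround. */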
static netdev_tx_t
ixgb_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
unsigned int first;
unsigned int tx_flags = 0;
int vlan_id = 0;
int count = 0;
int tso;
if (test_bit(__IXGB_DOWN, &adapter->flags)) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
if (skb->len <= 0) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
if (unlikely(ixgb_maybe_stop_tx(netdev, &adapter->tx_ring,
DESC_NEEDED)))
return NETDEV_TX_BUSY;
if (skb_vlan_tag_present(skb)) {
tx_flags |= IXGB_TX_FLAGS_VLAN;
vlan_id = skb_vlan_tag_get(skb);
}
first = adapter->tx_ring.next_to_use;
tso = ixgb_tso(adapter, skb);
if (tso < 0) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
if (likely(tso))
tx_flags |= IXGB_TX_FLAGS_TSO;
else if (ixgb_tx_csum(adapter, skb))
tx_flags |= IXGB_TX_FLAGS_CSUM;
count = ixgb_tx_map(adapter, skb, first);
if (count) {
ixgb_tx_queue(adapter, count, vlan_id, tx_flags);
/* Make sure there is space in the ring for the next send. */
ixgb_maybe_stop_tx(netdev, &adapter->tx_ring, DESC_NEEDED);
} else {
dev_kfree_skb_any(skb);
adapter->tx_ring.buffer_info[first].time_stamp = 0;
adapter->tx_ring.next_to_use = first;
}
return NETDEV_TX_OK;
}
/**
* ixgb_tx_timeout - Respond to a Tx Hang
* @netdev: network interface device structure
* @txqueue: queue hanging (unused)
**/
static void
ixgb_tx_timeout(struct net_device *netdev, unsigned int __always_unused txqueue)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
/* Do the reset outside of interrupt context */
schedule_work(&adapter->tx_timeout_task);
}
static void
ixgb_tx_timeout_task(struct work_struct *work)
{
struct ixgb_adapter *adapter =
container_of(work, struct ixgb_adapter, tx_timeout_task);
adapter->tx_timeout_count++;
ixgb_down(adapter, true);
ixgb_up(adapter);
}
/**
* ixgb_change_mtu - Change the Maximum Transfer Unit
* @netdev: network interface device structure
* @new_mtu: new value for maximum frame size
*
* Returns 0 on success, negative on failure
**/
static int
ixgb_change_mtu(struct net_device *netdev, int new_mtu)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
int max_frame = new_mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH;
if (netif_running(netdev))
ixgb_down(adapter, true);
adapter->rx_buffer_len = max_frame + 8; /* + 8 for errata */
netdev->mtu = new_mtu;
if (netif_running(netdev))
ixgb_up(adapter);
return 0;
}
/**
* ixgb_update_stats - Update the board statistics counters.
* @adapter: board private structure
**/
void
ixgb_update_stats(struct ixgb_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
/* Prevent stats update while adapter is being reset */
if (pci_channel_offline(pdev))
return;
if ((netdev->flags & IFF_PROMISC) || (netdev->flags & IFF_ALLMULTI) ||
(netdev_mc_count(netdev) > IXGB_MAX_NUM_MULTICAST_ADDRESSES)) {
u64 multi = IXGB_READ_REG(&adapter->hw, MPRCL);
u32 bcast_l = IXGB_READ_REG(&adapter->hw, BPRCL);
u32 bcast_h = IXGB_READ_REG(&adapter->hw, BPRCH);
u64 bcast = ((u64)bcast_h << 32) | bcast_l;
multi |= ((u64)IXGB_READ_REG(&adapter->hw, MPRCH) << 32);
/* fix up multicast stats by removing broadcasts */
if (multi >= bcast)
multi -= bcast;
adapter->stats.mprcl += (multi & 0xFFFFFFFF);
adapter->stats.mprch += (multi >> 32);
adapter->stats.bprcl += bcast_l;
adapter->stats.bprch += bcast_h;
} else {
adapter->stats.mprcl += IXGB_READ_REG(&adapter->hw, MPRCL);
adapter->stats.mprch += IXGB_READ_REG(&adapter->hw, MPRCH);
adapter->stats.bprcl += IXGB_READ_REG(&adapter->hw, BPRCL);
adapter->stats.bprch += IXGB_READ_REG(&adapter->hw, BPRCH);
}
adapter->stats.tprl += IXGB_READ_REG(&adapter->hw, TPRL);
adapter->stats.tprh += IXGB_READ_REG(&adapter->hw, TPRH);
adapter->stats.gprcl += IXGB_READ_REG(&adapter->hw, GPRCL);
adapter->stats.gprch += IXGB_READ_REG(&adapter->hw, GPRCH);
adapter->stats.uprcl += IXGB_READ_REG(&adapter->hw, UPRCL);
adapter->stats.uprch += IXGB_READ_REG(&adapter->hw, UPRCH);
adapter->stats.vprcl += IXGB_READ_REG(&adapter->hw, VPRCL);
adapter->stats.vprch += IXGB_READ_REG(&adapter->hw, VPRCH);
adapter->stats.jprcl += IXGB_READ_REG(&adapter->hw, JPRCL);
adapter->stats.jprch += IXGB_READ_REG(&adapter->hw, JPRCH);
adapter->stats.gorcl += IXGB_READ_REG(&adapter->hw, GORCL);
adapter->stats.gorch += IXGB_READ_REG(&adapter->hw, GORCH);
adapter->stats.torl += IXGB_READ_REG(&adapter->hw, TORL);
adapter->stats.torh += IXGB_READ_REG(&adapter->hw, TORH);
adapter->stats.rnbc += IXGB_READ_REG(&adapter->hw, RNBC);
adapter->stats.ruc += IXGB_READ_REG(&adapter->hw, RUC);
adapter->stats.roc += IXGB_READ_REG(&adapter->hw, ROC);
adapter->stats.rlec += IXGB_READ_REG(&adapter->hw, RLEC);
adapter->stats.crcerrs += IXGB_READ_REG(&adapter->hw, CRCERRS);
adapter->stats.icbc += IXGB_READ_REG(&adapter->hw, ICBC);
adapter->stats.ecbc += IXGB_READ_REG(&adapter->hw, ECBC);
adapter->stats.mpc += IXGB_READ_REG(&adapter->hw, MPC);
adapter->stats.tptl += IXGB_READ_REG(&adapter->hw, TPTL);
adapter->stats.tpth += IXGB_READ_REG(&adapter->hw, TPTH);
adapter->stats.gptcl += IXGB_READ_REG(&adapter->hw, GPTCL);
adapter->stats.gptch += IXGB_READ_REG(&adapter->hw, GPTCH);
adapter->stats.bptcl += IXGB_READ_REG(&adapter->hw, BPTCL);
adapter->stats.bptch += IXGB_READ_REG(&adapter->hw, BPTCH);
adapter->stats.mptcl += IXGB_READ_REG(&adapter->hw, MPTCL);
adapter->stats.mptch += IXGB_READ_REG(&adapter->hw, MPTCH);
adapter->stats.uptcl += IXGB_READ_REG(&adapter->hw, UPTCL);
adapter->stats.uptch += IXGB_READ_REG(&adapter->hw, UPTCH);
adapter->stats.vptcl += IXGB_READ_REG(&adapter->hw, VPTCL);
adapter->stats.vptch += IXGB_READ_REG(&adapter->hw, VPTCH);
adapter->stats.jptcl += IXGB_READ_REG(&adapter->hw, JPTCL);
adapter->stats.jptch += IXGB_READ_REG(&adapter->hw, JPTCH);
adapter->stats.gotcl += IXGB_READ_REG(&adapter->hw, GOTCL);
adapter->stats.gotch += IXGB_READ_REG(&adapter->hw, GOTCH);
adapter->stats.totl += IXGB_READ_REG(&adapter->hw, TOTL);
adapter->stats.toth += IXGB_READ_REG(&adapter->hw, TOTH);
adapter->stats.dc += IXGB_READ_REG(&adapter->hw, DC);
adapter->stats.plt64c += IXGB_READ_REG(&adapter->hw, PLT64C);
adapter->stats.tsctc += IXGB_READ_REG(&adapter->hw, TSCTC);
adapter->stats.tsctfc += IXGB_READ_REG(&adapter->hw, TSCTFC);
adapter->stats.ibic += IXGB_READ_REG(&adapter->hw, IBIC);
adapter->stats.rfc += IXGB_READ_REG(&adapter->hw, RFC);
adapter->stats.lfc += IXGB_READ_REG(&adapter->hw, LFC);
adapter->stats.pfrc += IXGB_READ_REG(&adapter->hw, PFRC);
adapter->stats.pftc += IXGB_READ_REG(&adapter->hw, PFTC);
adapter->stats.mcfrc += IXGB_READ_REG(&adapter->hw, MCFRC);
adapter->stats.mcftc += IXGB_READ_REG(&adapter->hw, MCFTC);
adapter->stats.xonrxc += IXGB_READ_REG(&adapter->hw, XONRXC);
adapter->stats.xontxc += IXGB_READ_REG(&adapter->hw, XONTXC);
adapter->stats.xoffrxc += IXGB_READ_REG(&adapter->hw, XOFFRXC);
adapter->stats.xofftxc += IXGB_READ_REG(&adapter->hw, XOFFTXC);
adapter->stats.rjc += IXGB_READ_REG(&adapter->hw, RJC);
/* Fill out the OS statistics structure */
netdev->stats.rx_packets = adapter->stats.gprcl;
netdev->stats.tx_packets = adapter->stats.gptcl;
netdev->stats.rx_bytes = adapter->stats.gorcl;
netdev->stats.tx_bytes = adapter->stats.gotcl;
netdev->stats.multicast = adapter->stats.mprcl;
netdev->stats.collisions = 0;
/* ignore RLEC as it reports errors for padded (<64 bytes) frames
* with a length in the type/len field */
netdev->stats.rx_errors =
/* adapter->stats.rnbc + */ adapter->stats.crcerrs +
adapter->stats.ruc +
adapter->stats.roc /*+ adapter->stats.rlec */ +
adapter->stats.icbc +
adapter->stats.ecbc + adapter->stats.mpc;
/* see above
* netdev->stats.rx_length_errors = adapter->stats.rlec;
*/
netdev->stats.rx_crc_errors = adapter->stats.crcerrs;
netdev->stats.rx_fifo_errors = adapter->stats.mpc;
netdev->stats.rx_missed_errors = adapter->stats.mpc;
netdev->stats.rx_over_errors = adapter->stats.mpc;
netdev->stats.tx_errors = 0;
netdev->stats.rx_frame_errors = 0;
netdev->stats.tx_aborted_errors = 0;
netdev->stats.tx_carrier_errors = 0;
netdev->stats.tx_fifo_errors = 0;
netdev->stats.tx_heartbeat_errors = 0;
netdev->stats.tx_window_errors = 0;
}
/**
* ixgb_intr - Interrupt Handler
* @irq: interrupt number
* @data: pointer to a network interface device structure
**/
static irqreturn_t
ixgb_intr(int irq, void *data)
{
struct net_device *netdev = data;
struct ixgb_adapter *adapter = netdev_priv(netdev);
struct ixgb_hw *hw = &adapter->hw;
u32 icr = IXGB_READ_REG(hw, ICR);
if (unlikely(!icr))
return IRQ_NONE; /* Not our interrupt */
if (unlikely(icr & (IXGB_INT_RXSEQ | IXGB_INT_LSC)))
if (!test_bit(__IXGB_DOWN, &adapter->flags))
mod_timer(&adapter->watchdog_timer, jiffies);
if (napi_schedule_prep(&adapter->napi)) {
/* Disable interrupts and register for poll. The flush
of the posted write is intentionally left out.
*/
IXGB_WRITE_REG(&adapter->hw, IMC, ~0);
__napi_schedule(&adapter->napi);
}
return IRQ_HANDLED;
}
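/* A minimal sketch of how a handler like this is installed at ifup
 * time. The IRQF_SHARED flag matches the IRQ_NONE early return above;
 * the exact flags used elsewhere in the driver may differ.
 */
static int example_request_irq(struct ixgb_adapter *adapter)
{
	struct net_device *netdev = adapter->netdev;

	/* netdev is passed back as the "data" cookie seen by ixgb_intr() */
	return request_irq(adapter->pdev->irq, ixgb_intr, IRQF_SHARED,
			   netdev->name, netdev);
}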
/**
* ixgb_clean - NAPI Rx polling callback
* @napi: napi struct pointer
* @budget: max number of receives to clean
**/
static int
ixgb_clean(struct napi_struct *napi, int budget)
{
struct ixgb_adapter *adapter = container_of(napi, struct ixgb_adapter, napi);
int work_done = 0;
ixgb_clean_tx_irq(adapter);
ixgb_clean_rx_irq(adapter, &work_done, budget);
/* If budget not fully consumed, exit the polling mode */
if (work_done < budget) {
napi_complete_done(napi, work_done);
if (!test_bit(__IXGB_DOWN, &adapter->flags))
ixgb_irq_enable(adapter);
}
return work_done;
}
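/* Sketch: the poll routine above is bound to the NAPI context once at
 * probe time. Recent kernels take no weight argument in
 * netif_napi_add(); older ones passed a weight such as 64.
 */
static void example_register_napi(struct ixgb_adapter *adapter)
{
	netif_napi_add(adapter->netdev, &adapter->napi, ixgb_clean);
}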
/**
* ixgb_clean_tx_irq - Reclaim resources after transmit completes
* @adapter: board private structure
**/
static bool
ixgb_clean_tx_irq(struct ixgb_adapter *adapter)
{
struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
struct net_device *netdev = adapter->netdev;
struct ixgb_tx_desc *tx_desc, *eop_desc;
struct ixgb_buffer *buffer_info;
unsigned int i, eop;
bool cleaned = false;
i = tx_ring->next_to_clean;
eop = tx_ring->buffer_info[i].next_to_watch;
eop_desc = IXGB_TX_DESC(*tx_ring, eop);
while (eop_desc->status & IXGB_TX_DESC_STATUS_DD) {
rmb(); /* read buffer_info after eop_desc */
for (cleaned = false; !cleaned; ) {
tx_desc = IXGB_TX_DESC(*tx_ring, i);
buffer_info = &tx_ring->buffer_info[i];
if (tx_desc->popts &
(IXGB_TX_DESC_POPTS_TXSM |
IXGB_TX_DESC_POPTS_IXSM))
adapter->hw_csum_tx_good++;
ixgb_unmap_and_free_tx_resource(adapter, buffer_info);
*(u32 *)&(tx_desc->status) = 0;
cleaned = (i == eop);
if (++i == tx_ring->count)
	i = 0;
}
eop = tx_ring->buffer_info[i].next_to_watch;
eop_desc = IXGB_TX_DESC(*tx_ring, eop);
}
tx_ring->next_to_clean = i;
if (unlikely(cleaned && netif_carrier_ok(netdev) &&
IXGB_DESC_UNUSED(tx_ring) >= DESC_NEEDED)) {
/* Make sure that anybody stopping the queue after this
* sees the new next_to_clean. */
smp_mb();
if (netif_queue_stopped(netdev) &&
!(test_bit(__IXGB_DOWN, &adapter->flags))) {
netif_wake_queue(netdev);
++adapter->restart_queue;
}
}
if (adapter->detect_tx_hung) {
/* detect a transmit hang in hardware, this serializes the
* check with the clearing of time_stamp and movement of i */
adapter->detect_tx_hung = false;
if (tx_ring->buffer_info[eop].time_stamp &&
time_after(jiffies, tx_ring->buffer_info[eop].time_stamp + HZ)
&& !(IXGB_READ_REG(&adapter->hw, STATUS) &
IXGB_STATUS_TXOFF)) {
/* detected Tx unit hang */
netif_err(adapter, drv, adapter->netdev,
"Detected Tx Unit Hang\n"
" TDH <%x>\n"
" TDT <%x>\n"
" next_to_use <%x>\n"
" next_to_clean <%x>\n"
"buffer_info[next_to_clean]\n"
" time_stamp <%lx>\n"
" next_to_watch <%x>\n"
" jiffies <%lx>\n"
" next_to_watch.status <%x>\n",
IXGB_READ_REG(&adapter->hw, TDH),
IXGB_READ_REG(&adapter->hw, TDT),
tx_ring->next_to_use,
tx_ring->next_to_clean,
tx_ring->buffer_info[eop].time_stamp,
eop,
jiffies,
eop_desc->status);
netif_stop_queue(netdev);
}
}
return cleaned;
}
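/* The smp_mb() above pairs with a stop-and-recheck sequence on the
 * transmit side. A sketch of that producer-side helper (the name and
 * exact accounting are illustrative, not taken verbatim from this
 * driver):
 */
static int example_maybe_stop_tx(struct net_device *netdev, int size)
{
	struct ixgb_adapter *adapter = netdev_priv(netdev);
	struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;

	netif_stop_queue(netdev);
	/* Order the stop against the re-read of ring state, so either
	 * this path sees the cleaned descriptors or the cleanup path
	 * sees the stopped queue. */
	smp_mb();
	if (likely(IXGB_DESC_UNUSED(tx_ring) < size))
		return -EBUSY;
	/* cleanup freed descriptors in the window -- restart */
	netif_start_queue(netdev);
	++adapter->restart_queue;
	return 0;
}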
/**
* ixgb_rx_checksum - Receive Checksum Offload for 82597.
* @adapter: board private structure
* @rx_desc: receive descriptor
* @skb: socket buffer with received data
**/
static void
ixgb_rx_checksum(struct ixgb_adapter *adapter,
struct ixgb_rx_desc *rx_desc,
struct sk_buff *skb)
{
/* Ignore Checksum bit is set OR
* TCP Checksum has not been calculated
*/
if ((rx_desc->status & IXGB_RX_DESC_STATUS_IXSM) ||
(!(rx_desc->status & IXGB_RX_DESC_STATUS_TCPCS))) {
skb_checksum_none_assert(skb);
return;
}
/* At this point we know the hardware did the TCP checksum */
/* now look at the TCP checksum error bit */
if (rx_desc->errors & IXGB_RX_DESC_ERRORS_TCPE) {
/* let the stack verify checksum errors */
skb_checksum_none_assert(skb);
adapter->hw_csum_rx_error++;
} else {
/* TCP checksum is good */
skb->ip_summed = CHECKSUM_UNNECESSARY;
adapter->hw_csum_rx_good++;
}
}
/*
* this should improve performance for small packets with large amounts
* of reassembly being done in the stack
*/
static void ixgb_check_copybreak(struct napi_struct *napi,
struct ixgb_buffer *buffer_info,
u32 length, struct sk_buff **skb)
{
struct sk_buff *new_skb;
if (length > copybreak)
return;
new_skb = napi_alloc_skb(napi, length);
if (!new_skb)
return;
skb_copy_to_linear_data_offset(new_skb, -NET_IP_ALIGN,
(*skb)->data - NET_IP_ALIGN,
length + NET_IP_ALIGN);
/* save the skb in buffer_info as good */
buffer_info->skb = *skb;
*skb = new_skb;
}
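/* copybreak is assumed to be a module parameter declared near the top
 * of this file, along these lines (the 256 byte default here is an
 * assumption for illustration):
 */
static unsigned int copybreak_example __read_mostly = 256;
module_param(copybreak_example, uint, 0644);
MODULE_PARM_DESC(copybreak_example,
		 "Maximum size of packet that is copied to a new buffer on receive");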
/**
* ixgb_clean_rx_irq - Send received data up the network stack
* @adapter: board private structure
* @work_done: output pointer to number of packets cleaned
* @work_to_do: how much work we can complete
**/
static bool
ixgb_clean_rx_irq(struct ixgb_adapter *adapter, int *work_done, int work_to_do)
{
struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct ixgb_rx_desc *rx_desc, *next_rxd;
struct ixgb_buffer *buffer_info, *next_buffer, *next2_buffer;
u32 length;
unsigned int i, j;
int cleaned_count = 0;
bool cleaned = false;
i = rx_ring->next_to_clean;
rx_desc = IXGB_RX_DESC(*rx_ring, i);
buffer_info = &rx_ring->buffer_info[i];
while (rx_desc->status & IXGB_RX_DESC_STATUS_DD) {
struct sk_buff *skb;
u8 status;
if (*work_done >= work_to_do)
break;
(*work_done)++;
rmb(); /* read descriptor and rx_buffer_info after status DD */
status = rx_desc->status;
skb = buffer_info->skb;
buffer_info->skb = NULL;
prefetch(skb->data - NET_IP_ALIGN);
if (++i == rx_ring->count)
i = 0;
next_rxd = IXGB_RX_DESC(*rx_ring, i);
prefetch(next_rxd);
j = i + 1;
if (j == rx_ring->count)
j = 0;
next2_buffer = &rx_ring->buffer_info[j];
prefetch(next2_buffer);
next_buffer = &rx_ring->buffer_info[i];
cleaned = true;
cleaned_count++;
dma_unmap_single(&pdev->dev,
buffer_info->dma,
buffer_info->length,
DMA_FROM_DEVICE);
buffer_info->dma = 0;
length = le16_to_cpu(rx_desc->length);
rx_desc->length = 0;
if (unlikely(!(status & IXGB_RX_DESC_STATUS_EOP))) {
/* All receives must fit into a single buffer */
pr_debug("Receive packet consumed multiple buffers length<%x>\n",
length);
dev_kfree_skb_irq(skb);
goto rxdesc_done;
}
if (unlikely(rx_desc->errors &
(IXGB_RX_DESC_ERRORS_CE | IXGB_RX_DESC_ERRORS_SE |
IXGB_RX_DESC_ERRORS_P | IXGB_RX_DESC_ERRORS_RXE))) {
dev_kfree_skb_irq(skb);
goto rxdesc_done;
}
ixgb_check_copybreak(&adapter->napi, buffer_info, length, &skb);
/* Good Receive */
skb_put(skb, length);
/* Receive Checksum Offload */
ixgb_rx_checksum(adapter, rx_desc, skb);
skb->protocol = eth_type_trans(skb, netdev);
if (status & IXGB_RX_DESC_STATUS_VP)
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
le16_to_cpu(rx_desc->special));
netif_receive_skb(skb);
rxdesc_done:
/* clean up descriptor, might be written over by hw */
rx_desc->status = 0;
/* return some buffers to hardware, one at a time is too slow */
if (unlikely(cleaned_count >= IXGB_RX_BUFFER_WRITE)) {
ixgb_alloc_rx_buffers(adapter, cleaned_count);
cleaned_count = 0;
}
/* use prefetched values */
rx_desc = next_rxd;
buffer_info = next_buffer;
}
rx_ring->next_to_clean = i;
cleaned_count = IXGB_DESC_UNUSED(rx_ring);
if (cleaned_count)
ixgb_alloc_rx_buffers(adapter, cleaned_count);
return cleaned;
}
/**
* ixgb_alloc_rx_buffers - Replace used receive buffers
* @adapter: address of board private structure
* @cleaned_count: how many buffers to allocate
**/
static void
ixgb_alloc_rx_buffers(struct ixgb_adapter *adapter, int cleaned_count)
{
struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct ixgb_rx_desc *rx_desc;
struct ixgb_buffer *buffer_info;
struct sk_buff *skb;
unsigned int i;
long cleancount;
i = rx_ring->next_to_use;
buffer_info = &rx_ring->buffer_info[i];
cleancount = IXGB_DESC_UNUSED(rx_ring);
/* leave three descriptors unused */
while (--cleancount > 2 && cleaned_count--) {
/* recycle! it's good for you */
skb = buffer_info->skb;
if (skb) {
skb_trim(skb, 0);
goto map_skb;
}
skb = netdev_alloc_skb_ip_align(netdev, adapter->rx_buffer_len);
if (unlikely(!skb)) {
/* Better luck next round */
adapter->alloc_rx_buff_failed++;
break;
}
buffer_info->skb = skb;
buffer_info->length = adapter->rx_buffer_len;
map_skb:
buffer_info->dma = dma_map_single(&pdev->dev,
skb->data,
adapter->rx_buffer_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
adapter->alloc_rx_buff_failed++;
break;
}
rx_desc = IXGB_RX_DESC(*rx_ring, i);
rx_desc->buff_addr = cpu_to_le64(buffer_info->dma);
/* guarantee DD bit not set now before h/w gets descriptor
* this is the rest of the workaround for h/w double
* writeback. */
rx_desc->status = 0;
if (++i == rx_ring->count)
i = 0;
buffer_info = &rx_ring->buffer_info[i];
}
if (likely(rx_ring->next_to_use != i)) {
rx_ring->next_to_use = i;
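/* step back one: RDT is written as the index of the last
 * initialized descriptor, one behind next_to_use */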
if (unlikely(i-- == 0))
i = (rx_ring->count - 1);
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs, such
* as IA-64). */
wmb();
IXGB_WRITE_REG(&adapter->hw, RDT, i);
}
}
static void
ixgb_vlan_strip_enable(struct ixgb_adapter *adapter)
{
u32 ctrl;
/* enable VLAN tag insert/strip */
ctrl = IXGB_READ_REG(&adapter->hw, CTRL0);
ctrl |= IXGB_CTRL0_VME;
IXGB_WRITE_REG(&adapter->hw, CTRL0, ctrl);
}
static void
ixgb_vlan_strip_disable(struct ixgb_adapter *adapter)
{
u32 ctrl;
/* disable VLAN tag insert/strip */
ctrl = IXGB_READ_REG(&adapter->hw, CTRL0);
ctrl &= ~IXGB_CTRL0_VME;
IXGB_WRITE_REG(&adapter->hw, CTRL0, ctrl);
}
static int
ixgb_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
u32 vfta, index;
/* add VID to filter table */
index = (vid >> 5) & 0x7F;
vfta = IXGB_READ_REG_ARRAY(&adapter->hw, VFTA, index);
vfta |= (1 << (vid & 0x1F));
ixgb_write_vfta(&adapter->hw, index, vfta);
set_bit(vid, adapter->active_vlans);
return 0;
}
static int
ixgb_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
{
struct ixgb_adapter *adapter = netdev_priv(netdev);
u32 vfta, index;
/* remove VID from filter table */
index = (vid >> 5) & 0x7F;
vfta = IXGB_READ_REG_ARRAY(&adapter->hw, VFTA, index);
vfta &= ~(1 << (vid & 0x1F));
ixgb_write_vfta(&adapter->hw, index, vfta);
clear_bit(vid, adapter->active_vlans);
return 0;
}
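/* Worked example of the VFTA indexing used above: the 4096 possible
 * VIDs map onto 128 32-bit registers. For vid = 100:
 *
 *	index = (100 >> 5) & 0x7F = 3	(which VFTA register)
 *	bit   =  100 & 0x1F       = 4	(which bit within it)
 *
 * so adding VID 100 sets bit 4 of VFTA[3], and removing it clears
 * that same bit.
 */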
static void
ixgb_restore_vlan(struct ixgb_adapter *adapter)
{
u16 vid;
for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID)
ixgb_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), vid);
}
/**
* ixgb_io_error_detected - called when PCI error is detected
* @pdev: pointer to pci device with error
* @state: pci channel state after error
*
* This callback is called by the PCI subsystem whenever
* a PCI bus error is detected.
*/
static pci_ers_result_t ixgb_io_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct ixgb_adapter *adapter = netdev_priv(netdev);
netif_device_detach(netdev);
if (state == pci_channel_io_perm_failure)
return PCI_ERS_RESULT_DISCONNECT;
if (netif_running(netdev))
ixgb_down(adapter, true);
pci_disable_device(pdev);
/* Request a slot reset. */
return PCI_ERS_RESULT_NEED_RESET;
}
/**
* ixgb_io_slot_reset - called after the pci bus has been reset.
* @pdev: pointer to pci device with error
*
* This callback is called after the PCI bus has been reset.
* Basically, this tries to restart the card from scratch.
* This is a shortened version of the device probe/discovery code,
* it resembles the first-half of the ixgb_probe() routine.
*/
static pci_ers_result_t ixgb_io_slot_reset(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct ixgb_adapter *adapter = netdev_priv(netdev);
u8 addr[ETH_ALEN];
if (pci_enable_device(pdev)) {
netif_err(adapter, probe, adapter->netdev,
"Cannot re-enable PCI device after reset\n");
return PCI_ERS_RESULT_DISCONNECT;
}
/* Perform card reset only on one instance of the card */
if (PCI_FUNC(pdev->devfn) != 0)
return PCI_ERS_RESULT_RECOVERED;
pci_set_master(pdev);
netif_carrier_off(netdev);
netif_stop_queue(netdev);
ixgb_reset(adapter);
/* Make sure the EEPROM is good */
if (!ixgb_validate_eeprom_checksum(&adapter->hw)) {
netif_err(adapter, probe, adapter->netdev,
"After reset, the EEPROM checksum is not valid\n");
return PCI_ERS_RESULT_DISCONNECT;
}
ixgb_get_ee_mac_addr(&adapter->hw, addr);
eth_hw_addr_set(netdev, addr);
memcpy(netdev->perm_addr, netdev->dev_addr, netdev->addr_len);
if (!is_valid_ether_addr(netdev->perm_addr)) {
netif_err(adapter, probe, adapter->netdev,
"After reset, invalid MAC address\n");
return PCI_ERS_RESULT_DISCONNECT;
}
return PCI_ERS_RESULT_RECOVERED;
}
/**
* ixgb_io_resume - called when it's OK to resume normal operations
* @pdev: pointer to pci device with error
*
* The error recovery driver tells us that it's OK to resume
* normal operation. Implementation resembles the second-half
* of the ixgb_probe() routine.
*/
static void ixgb_io_resume(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct ixgb_adapter *adapter = netdev_priv(netdev);
pci_set_master(pdev);
if (netif_running(netdev)) {
if (ixgb_up(adapter)) {
pr_err("can't bring device back up after reset\n");
return;
}
}
netif_device_attach(netdev);
mod_timer(&adapter->watchdog_timer, jiffies);
}
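/* These three callbacks are hooked into the PCI core through a struct
 * pci_error_handlers; a sketch of that glue (the struct name here is
 * assumed):
 */
static const struct pci_error_handlers example_err_handler = {
	.error_detected = ixgb_io_error_detected,
	.slot_reset = ixgb_io_slot_reset,
	.resume = ixgb_io_resume,
};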
/* ixgb_main.c */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 1999 - 2008 Intel Corporation. */
/* glue for the OS independent part of ixgb
* includes register access macros
*/
#ifndef _IXGB_OSDEP_H_
#define _IXGB_OSDEP_H_
#include <linux/types.h>
#include <linux/delay.h>
#include <asm/io.h>
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/if_ether.h>
#undef ASSERT
#define ASSERT(x) BUG_ON(!(x))
#define ENTER() pr_debug("%s\n", __func__)
#define IXGB_WRITE_REG(a, reg, value) ( \
writel((value), ((a)->hw_addr + IXGB_##reg)))
#define IXGB_READ_REG(a, reg) ( \
readl((a)->hw_addr + IXGB_##reg))
#define IXGB_WRITE_REG_ARRAY(a, reg, offset, value) ( \
writel((value), ((a)->hw_addr + IXGB_##reg + ((offset) << 2))))
#define IXGB_READ_REG_ARRAY(a, reg, offset) ( \
readl((a)->hw_addr + IXGB_##reg + ((offset) << 2)))
#define IXGB_WRITE_FLUSH(a) IXGB_READ_REG(a, STATUS)
#define IXGB_MEMCPY memcpy
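/* Sketch of typical use of the accessors above, assuming a struct
 * ixgb_hw with an ioremap()ed hw_addr base and the IXGB_IMC/IXGB_STATUS
 * register offsets from ixgb_hw.h:
 *
 *	IXGB_WRITE_REG(hw, IMC, ~0);	// writel(~0, hw->hw_addr + IXGB_IMC)
 *	IXGB_WRITE_FLUSH(hw);		// read STATUS to flush the posted write
 */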
#endif /* _IXGB_OSDEP_H_ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 1999 - 2008 Intel Corporation. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "ixgb.h"
/* This is the only thing that needs to be changed to adjust the
* maximum number of ports that the driver can manage.
*/
#define IXGB_MAX_NIC 8
#define OPTION_UNSET -1
#define OPTION_DISABLED 0
#define OPTION_ENABLED 1
/* All parameters are treated the same, as an integer array of values.
* This macro just reduces the need to repeat the same declaration code
* over and over (plus this helps to avoid typo bugs).
*/
#define IXGB_PARAM_INIT { [0 ... IXGB_MAX_NIC] = OPTION_UNSET }
#define IXGB_PARAM(X, desc) \
static int X[IXGB_MAX_NIC+1] \
= IXGB_PARAM_INIT; \
static unsigned int num_##X = 0; \
module_param_array_named(X, X, int, &num_##X, 0); \
MODULE_PARM_DESC(X, desc);
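/* For instance, IXGB_PARAM(TxDescriptors, "Number of transmit
 * descriptors") expands to roughly:
 *
 *	static int TxDescriptors[IXGB_MAX_NIC+1] =
 *		{ [0 ... IXGB_MAX_NIC] = OPTION_UNSET };
 *	static unsigned int num_TxDescriptors = 0;
 *	module_param_array_named(TxDescriptors, TxDescriptors, int,
 *				 &num_TxDescriptors, 0);
 *	MODULE_PARM_DESC(TxDescriptors, "Number of transmit descriptors");
 *
 * so a comma-separated module option fills slots 0..n-1 (one per board)
 * and num_TxDescriptors records how many values were supplied.
 */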
/* Transmit Descriptor Count
*
* Valid Range: 64-4096
*
* Default Value: 256
*/
IXGB_PARAM(TxDescriptors, "Number of transmit descriptors");
/* Receive Descriptor Count
*
* Valid Range: 64-4096
*
* Default Value: 1024
*/
IXGB_PARAM(RxDescriptors, "Number of receive descriptors");
/* User Specified Flow Control Override
*
* Valid Range: 0-3
* - 0 - No Flow Control
* - 1 - Rx only, respond to PAUSE frames but do not generate them
* - 2 - Tx only, generate PAUSE frames but ignore them on receive
* - 3 - Full Flow Control Support
*
* Default Value: 2 - Tx only (silicon bug avoidance)
*/
IXGB_PARAM(FlowControl, "Flow Control setting");
/* XsumRX - Receive Checksum Offload Enable/Disable
*
* Valid Range: 0, 1
* - 0 - disables all checksum offload
* - 1 - enables receive IP/TCP/UDP checksum offload
* on 82597 based NICs
*
* Default Value: 1
*/
IXGB_PARAM(XsumRX, "Disable or enable Receive Checksum offload");
/* Transmit Interrupt Delay in units of 0.8192 microseconds
*
* Valid Range: 0-65535
*
* Default Value: 32
*/
IXGB_PARAM(TxIntDelay, "Transmit Interrupt Delay");
/* Receive Interrupt Delay in units of 0.8192 microseconds
*
* Valid Range: 0-65535
*
* Default Value: 72
*/
IXGB_PARAM(RxIntDelay, "Receive Interrupt Delay");
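/* Worked example: the default RxIntDelay of 72 corresponds to roughly
 * 72 * 0.8192 us ~= 59 us of receive interrupt coalescing. */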
/* Receive Flow control high threshold (when we send a pause frame)
* (FCRTH)
*
* Valid Range: 1,536 - 262,136 (0x600 - 0x3FFF8, 8 byte granularity)
*
* Default Value: 196,608 (0x30000)
*/
IXGB_PARAM(RxFCHighThresh, "Receive Flow Control High Threshold");
/* Receive Flow control low threshold (when we send a resume frame)
* (FCRTL)
*
* Valid Range: 64 - 262,136 (0x40 - 0x3FFF8, 8 byte granularity)
* must be less than high threshold by at least 8 bytes
*
* Default Value: 163,840 (0x28000)
*/
IXGB_PARAM(RxFCLowThresh, "Receive Flow Control Low Threshold");
/* Flow control request timeout (how long to pause the link partner's tx)
* (PAP 15:0)
*
* Valid Range: 1 - 65535
*
* Default Value: 65535 (0xffff) (we'll send an xon if we recover)
*/
IXGB_PARAM(FCReqTimeout, "Flow Control Request Timeout");
/* Interrupt Delay Enable
*
* Valid Range: 0, 1
*
* - 0 - disables transmit interrupt delay
* - 1 - enables transmit interrupt delay
*
* Default Value: 1
*/
IXGB_PARAM(IntDelayEnable, "Transmit Interrupt Delay Enable");
#define DEFAULT_TIDV 32
#define MAX_TIDV 0xFFFF
#define MIN_TIDV 0
#define DEFAULT_RDTR 72
#define MAX_RDTR 0xFFFF
#define MIN_RDTR 0
#define DEFAULT_FCRTL 0x28000
#define DEFAULT_FCRTH 0x30000
#define MIN_FCRTL 0
#define MAX_FCRTL 0x3FFE8
#define MIN_FCRTH 8
#define MAX_FCRTH 0x3FFF0
#define MIN_FCPAUSE 1
#define MAX_FCPAUSE 0xffff
#define DEFAULT_FCPAUSE 0xFFFF /* this may be too long */
struct ixgb_option {
enum { enable_option, range_option, list_option } type;
const char *name;
const char *err;
int def;
union {
struct { /* range_option info */
int min;
int max;
} r;
struct { /* list_option info */
int nr;
const struct ixgb_opt_list {
int i;
const char *str;
} *p;
} l;
} arg;
};
static int
ixgb_validate_option(unsigned int *value, const struct ixgb_option *opt)
{
if (*value == OPTION_UNSET) {
*value = opt->def;
return 0;
}
switch (opt->type) {
case enable_option:
switch (*value) {
case OPTION_ENABLED:
pr_info("%s Enabled\n", opt->name);
return 0;
case OPTION_DISABLED:
pr_info("%s Disabled\n", opt->name);
return 0;
}
break;
case range_option:
if (*value >= opt->arg.r.min && *value <= opt->arg.r.max) {
pr_info("%s set to %i\n", opt->name, *value);
return 0;
}
break;
case list_option: {
int i;
const struct ixgb_opt_list *ent;
for (i = 0; i < opt->arg.l.nr; i++) {
ent = &opt->arg.l.p[i];
if (*value == ent->i) {
if (ent->str[0] != '\0')
pr_info("%s\n", ent->str);
return 0;
}
}
}
break;
default:
BUG();
}
pr_info("Invalid %s specified (%i) %s\n", opt->name, *value, opt->err);
*value = opt->def;
return -1;
}
/**
* ixgb_check_options - Range Checking for Command Line Parameters
* @adapter: board private structure
*
* This routine checks all command line parameters for valid user
* input. If an invalid value is given, or if no user specified
* value exists, a default value is used. The final value is stored
* in a variable in the adapter structure.
**/
void
ixgb_check_options(struct ixgb_adapter *adapter)
{
int bd = adapter->bd_number;
if (bd >= IXGB_MAX_NIC) {
pr_notice("Warning: no configuration for board #%i\n", bd);
pr_notice("Using defaults for all values\n");
}
{ /* Transmit Descriptor Count */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Transmit Descriptors",
.err = "using default of " __MODULE_STRING(DEFAULT_TXD),
.def = DEFAULT_TXD,
.arg = { .r = { .min = MIN_TXD,
.max = MAX_TXD}}
};
struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
if (num_TxDescriptors > bd) {
tx_ring->count = TxDescriptors[bd];
ixgb_validate_option(&tx_ring->count, &opt);
} else {
tx_ring->count = opt.def;
}
tx_ring->count = ALIGN(tx_ring->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
}
{ /* Receive Descriptor Count */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Receive Descriptors",
.err = "using default of " __MODULE_STRING(DEFAULT_RXD),
.def = DEFAULT_RXD,
.arg = { .r = { .min = MIN_RXD,
.max = MAX_RXD}}
};
struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
if (num_RxDescriptors > bd) {
rx_ring->count = RxDescriptors[bd];
ixgb_validate_option(&rx_ring->count, &opt);
} else {
rx_ring->count = opt.def;
}
rx_ring->count = ALIGN(rx_ring->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
}
{ /* Receive Checksum Offload Enable */
static const struct ixgb_option opt = {
.type = enable_option,
.name = "Receive Checksum Offload",
.err = "defaulting to Enabled",
.def = OPTION_ENABLED
};
if (num_XsumRX > bd) {
unsigned int rx_csum = XsumRX[bd];
ixgb_validate_option(&rx_csum, &opt);
adapter->rx_csum = rx_csum;
} else {
adapter->rx_csum = opt.def;
}
}
{ /* Flow Control */
static const struct ixgb_opt_list fc_list[] = {
{ ixgb_fc_none, "Flow Control Disabled" },
{ ixgb_fc_rx_pause, "Flow Control Receive Only" },
{ ixgb_fc_tx_pause, "Flow Control Transmit Only" },
{ ixgb_fc_full, "Flow Control Enabled" },
{ ixgb_fc_default, "Flow Control Hardware Default" }
};
static const struct ixgb_option opt = {
.type = list_option,
.name = "Flow Control",
.err = "reading default settings from EEPROM",
.def = ixgb_fc_tx_pause,
.arg = { .l = { .nr = ARRAY_SIZE(fc_list),
.p = fc_list }}
};
if (num_FlowControl > bd) {
unsigned int fc = FlowControl[bd];
ixgb_validate_option(&fc, &opt);
adapter->hw.fc.type = fc;
} else {
adapter->hw.fc.type = opt.def;
}
}
{ /* Receive Flow Control High Threshold */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Rx Flow Control High Threshold",
.err = "using default of " __MODULE_STRING(DEFAULT_FCRTH),
.def = DEFAULT_FCRTH,
.arg = { .r = { .min = MIN_FCRTH,
.max = MAX_FCRTH}}
};
if (num_RxFCHighThresh > bd) {
adapter->hw.fc.high_water = RxFCHighThresh[bd];
ixgb_validate_option(&adapter->hw.fc.high_water, &opt);
} else {
adapter->hw.fc.high_water = opt.def;
}
if (!(adapter->hw.fc.type & ixgb_fc_tx_pause))
pr_info("Ignoring RxFCHighThresh when no RxFC\n");
}
{ /* Receive Flow Control Low Threshold */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Rx Flow Control Low Threshold",
.err = "using default of " __MODULE_STRING(DEFAULT_FCRTL),
.def = DEFAULT_FCRTL,
.arg = { .r = { .min = MIN_FCRTL,
.max = MAX_FCRTL}}
};
if (num_RxFCLowThresh > bd) {
adapter->hw.fc.low_water = RxFCLowThresh[bd];
ixgb_validate_option(&adapter->hw.fc.low_water, &opt);
} else {
adapter->hw.fc.low_water = opt.def;
}
if (!(adapter->hw.fc.type & ixgb_fc_tx_pause))
pr_info("Ignoring RxFCLowThresh when no RxFC\n");
}
{ /* Flow Control Pause Time Request */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Flow Control Pause Time Request",
.err = "using default of "__MODULE_STRING(DEFAULT_FCPAUSE),
.def = DEFAULT_FCPAUSE,
.arg = { .r = { .min = MIN_FCPAUSE,
.max = MAX_FCPAUSE}}
};
if (num_FCReqTimeout > bd) {
unsigned int pause_time = FCReqTimeout[bd];
ixgb_validate_option(&pause_time, &opt);
adapter->hw.fc.pause_time = pause_time;
} else {
adapter->hw.fc.pause_time = opt.def;
}
if (!(adapter->hw.fc.type & ixgb_fc_tx_pause))
pr_info("Ignoring FCReqTimeout when no RxFC\n");
}
/* high/low and spacing check for rx flow control thresholds */
if (adapter->hw.fc.type & ixgb_fc_tx_pause) {
/* high must be greater than low */
if (adapter->hw.fc.high_water < (adapter->hw.fc.low_water + 8)) {
/* set defaults */
pr_info("RxFCHighThresh must be >= (RxFCLowThresh + 8), Using Defaults\n");
adapter->hw.fc.high_water = DEFAULT_FCRTH;
adapter->hw.fc.low_water = DEFAULT_FCRTL;
}
}
{ /* Receive Interrupt Delay */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Receive Interrupt Delay",
.err = "using default of " __MODULE_STRING(DEFAULT_RDTR),
.def = DEFAULT_RDTR,
.arg = { .r = { .min = MIN_RDTR,
.max = MAX_RDTR}}
};
if (num_RxIntDelay > bd) {
adapter->rx_int_delay = RxIntDelay[bd];
ixgb_validate_option(&adapter->rx_int_delay, &opt);
} else {
adapter->rx_int_delay = opt.def;
}
}
{ /* Transmit Interrupt Delay */
static const struct ixgb_option opt = {
.type = range_option,
.name = "Transmit Interrupt Delay",
.err = "using default of " __MODULE_STRING(DEFAULT_TIDV),
.def = DEFAULT_TIDV,
.arg = { .r = { .min = MIN_TIDV,
.max = MAX_TIDV}}
};
if (num_TxIntDelay > bd) {
adapter->tx_int_delay = TxIntDelay[bd];
ixgb_validate_option(&adapter->tx_int_delay, &opt);
} else {
adapter->tx_int_delay = opt.def;
}
}
{ /* Transmit Interrupt Delay Enable */
static const struct ixgb_option opt = {
.type = enable_option,
.name = "Tx Interrupt Delay Enable",
.err = "defaulting to Enabled",
.def = OPTION_ENABLED
};
if (num_IntDelayEnable > bd) {
unsigned int ide = IntDelayEnable[bd];
ixgb_validate_option(&ide, &opt);
adapter->tx_int_delay_enable = ide;
} else {
adapter->tx_int_delay_enable = opt.def;
}
}
}