Commit 54eac850 authored by David S. Miller

Merge branch 'amd-xgbe-next'

Tom Lendacky says:

====================
amd-xgbe: AMD XGBE driver updates 2015-05-12

The following series of patches includes functional updates and changes
to the driver.

- Add additional statistics to be collected and reported
- Use the netif_* functions for issuing some debug and informational
  driver messages
- Rx path SKB allocation cleanup/simplification
- Remove the stand-alone phylib driver and incorporate its functionality into
  the nic driver
- Simplify device tree support while maintaining backwards compatibility
- Fix the flow control negotiation logic to properly configure flow
  control
- Remove the checking and setting of the device dma_mask field

This patch series is based on net-next.

Changes in v2:
- Change from using the netif_msg_*/netdev_* combination for issuing
  messages to the more concise netif_*
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 6eaf9d18 fc7aabf0
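
Before the per-file diffs, here is a minimal sketch of the DBGPR() to netif_*() conversion called out in the v2 note. The surrounding function is invented for illustration; netif_dbg() itself is the standard helper from <linux/netdevice.h>, which only prints when the matching bit (here NETIF_MSG_DRV) is set in the device's msg_enable mask:

/* Illustrative sketch only -- not a hunk from this series. */
#include <linux/netdevice.h>

static void example_log_queues(struct xgbe_prv_data *pdata)
{
	/* Old style: driver-private macro, compiled in unconditionally */
	DBGPR(" %d Tx hardware queues\n", pdata->tx_q_count);

	/* New style: gated by pdata->msg_enable and the "drv" message class */
	netif_dbg(pdata, drv, pdata->netdev,
		  "%d Tx hardware queues\n", pdata->tx_q_count);
}
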
* AMD 10GbE PHY driver (amd-xgbe-phy)
Required properties:
- compatible: Should be "amd,xgbe-phy-seattle-v1a" and
"ethernet-phy-ieee802.3-c45"
- reg: Address and length of the register sets for the device
- SerDes Rx/Tx registers
- SerDes integration registers (1/2)
- SerDes integration registers (2/2)
- interrupt-parent: Should be the phandle for the interrupt controller
that services interrupts for this device
- interrupts: Should contain the amd-xgbe-phy interrupt.
Optional properties:
- amd,speed-set: Speed capabilities of the device
0 - 1GbE and 10GbE (default)
1 - 2.5GbE and 10GbE
The following optional properties are represented by an array with each
value corresponding to a particular speed. The first array value represents
the setting for the 1GbE speed, the second value for the 2.5GbE speed and
the third value for the 10GbE speed. All three values are required if the
property is used.
- amd,serdes-blwc: Baseline wandering correction enablement
0 - Off
1 - On
- amd,serdes-cdr-rate: CDR rate speed selection
- amd,serdes-pq-skew: PQ (data sampling) skew
- amd,serdes-tx-amp: TX amplitude boost
- amd,serdes-dfe-tap-config: DFE taps available to run
- amd,serdes-dfe-tap-enable: DFE taps to enable
Example:
xgbe_phy@e1240800 {
compatible = "amd,xgbe-phy-seattle-v1a", "ethernet-phy-ieee802.3-c45";
reg = <0 0xe1240800 0 0x00400>,
<0 0xe1250000 0 0x00060>,
<0 0xe1250080 0 0x00004>;
interrupt-parent = <&gic>;
interrupts = <0 323 4>;
amd,speed-set = <0>;
amd,serdes-blwc = <1>, <1>, <0>;
amd,serdes-cdr-rate = <2>, <2>, <7>;
amd,serdes-pq-skew = <10>, <10>, <30>;
amd,serdes-tx-amp = <15>, <15>, <10>;
amd,serdes-dfe-tap-config = <3>, <3>, <1>;
amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
};
@@ -5,12 +5,16 @@ Required properties:
- reg: Address and length of the register sets for the device
- MAC registers
- PCS registers
- SerDes Rx/Tx registers
- SerDes integration registers (1/2)
- SerDes integration registers (2/2)
- interrupt-parent: Should be the phandle for the interrupt controller
that services interrupts for this device
- interrupts: Should contain the amd-xgbe interrupt(s). The first interrupt
listed is required and is the general device interrupt. If the optional
amd,per-channel-interrupt property is specified, then one additional
interrupt for each DMA channel supported by the device should be specified.
The last interrupt listed should be the PCS auto-negotiation interrupt.
- clocks:
- DMA clock for the amd-xgbe device (used for calculating the
correct Rx interrupt watchdog timer value on a DMA channel
@@ -19,7 +23,6 @@ Required properties:
- clock-names: Should be the names of the clocks
- "dma_clk" for the DMA clock
- "ptp_clk" for the PTP clock
- phy-handle: See ethernet.txt file in the same directory
- phy-mode: See ethernet.txt file in the same directory
Optional properties:
@@ -29,19 +32,46 @@ Optional properties:
- amd,per-channel-interrupt: Indicates that Rx and Tx complete will generate
a unique interrupt for each DMA channel - this requires an additional
interrupt be configured for each DMA channel
- amd,speed-set: Speed capabilities of the device
0 - 1GbE and 10GbE (default)
1 - 2.5GbE and 10GbE
The following optional properties are represented by an array with each
value corresponding to a particular speed. The first array value represents
the setting for the 1GbE speed, the second value for the 2.5GbE speed and
the third value for the 10GbE speed. All three values are required if the
property is used.
- amd,serdes-blwc: Baseline wandering correction enablement
0 - Off
1 - On
- amd,serdes-cdr-rate: CDR rate speed selection
- amd,serdes-pq-skew: PQ (data sampling) skew
- amd,serdes-tx-amp: TX amplitude boost
- amd,serdes-dfe-tap-config: DFE taps available to run
- amd,serdes-dfe-tap-enable: DFE taps to enable
Example:
xgbe@e0700000 {
compatible = "amd,xgbe-seattle-v1a";
reg = <0 0xe0700000 0 0x80000>,
<0 0xe0780000 0 0x80000>,
<0 0xe1240800 0 0x00400>,
<0 0xe1250000 0 0x00060>,
<0 0xe1250080 0 0x00004>;
interrupt-parent = <&gic>;
interrupts = <0 325 4>,
<0 326 1>, <0 327 1>, <0 328 1>, <0 329 1>,
<0 323 4>;
amd,per-channel-interrupt;
clocks = <&xgbe_dma_clk>, <&xgbe_ptp_clk>;
clock-names = "dma_clk", "ptp_clk";
phy-handle = <&phy>;
phy-mode = "xgmii";
mac-address = [ 02 a1 a2 a3 a4 a5 ];
amd,speed-set = <0>;
amd,serdes-blwc = <1>, <1>, <0>;
amd,serdes-cdr-rate = <2>, <2>, <7>;
amd,serdes-pq-skew = <10>, <10>, <30>;
amd,serdes-tx-amp = <15>, <15>, <10>;
amd,serdes-dfe-tap-config = <3>, <3>, <1>;
amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
};
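
A sketch of how the optional per-speed SerDes properties above are expected to be consumed at probe time (an assumption about the driver flow, not a hunk from this series; the helper name example_read_blwc is invented, while XGBE_BLWC_PROPERTY, serdes_blwc[] and the XGBE_SPEED_* defaults all appear later in this patch):

/* Illustrative sketch only; assumes the generic device-property API. */
#include <linux/property.h>

static void example_read_blwc(struct device *dev, struct xgbe_prv_data *pdata)
{
	if (device_property_present(dev, XGBE_BLWC_PROPERTY)) {
		/* Three cells: 1GbE, 2.5GbE, 10GbE */
		device_property_read_u32_array(dev, XGBE_BLWC_PROPERTY,
					       pdata->serdes_blwc,
					       XGBE_SPEEDS);
	} else {
		pdata->serdes_blwc[XGBE_SPEED_1000] = XGBE_SPEED_1000_BLWC;
		pdata->serdes_blwc[XGBE_SPEED_2500] = XGBE_SPEED_2500_BLWC;
		pdata->serdes_blwc[XGBE_SPEED_10000] = XGBE_SPEED_10000_BLWC;
	}
}
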
@@ -652,7 +652,6 @@ M: Tom Lendacky <thomas.lendacky@amd.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/amd/xgbe/
F: drivers/net/phy/amd-xgbe-phy.c
AMS (Apple Motion Sensor) DRIVER
M: Michael Hanselmann <linux-kernel@hansmi.ch>
......
@@ -179,10 +179,8 @@ config SUNLANCE
config AMD_XGBE
tristate "AMD 10GbE Ethernet driver"
depends on ((OF_NET && OF_ADDRESS) || ACPI) && HAS_IOMEM && HAS_DMA
depends on ARM64 || COMPILE_TEST
select PHYLIB
select AMD_XGBE_PHY
select BITREVERSE
select CRC32
select PTP_1588_CLOCK
......
@@ -857,6 +857,48 @@
*/
#define PCS_MMD_SELECT 0xff
/* SerDes integration register offsets */
#define SIR0_KR_RT_1 0x002c
#define SIR0_STATUS 0x0040
#define SIR1_SPEED 0x0000
/* SerDes integration register entry bit positions and sizes */
#define SIR0_KR_RT_1_RESET_INDEX 11
#define SIR0_KR_RT_1_RESET_WIDTH 1
#define SIR0_STATUS_RX_READY_INDEX 0
#define SIR0_STATUS_RX_READY_WIDTH 1
#define SIR0_STATUS_TX_READY_INDEX 8
#define SIR0_STATUS_TX_READY_WIDTH 1
#define SIR1_SPEED_CDR_RATE_INDEX 12
#define SIR1_SPEED_CDR_RATE_WIDTH 4
#define SIR1_SPEED_DATARATE_INDEX 4
#define SIR1_SPEED_DATARATE_WIDTH 2
#define SIR1_SPEED_PLLSEL_INDEX 3
#define SIR1_SPEED_PLLSEL_WIDTH 1
#define SIR1_SPEED_RATECHANGE_INDEX 6
#define SIR1_SPEED_RATECHANGE_WIDTH 1
#define SIR1_SPEED_TXAMP_INDEX 8
#define SIR1_SPEED_TXAMP_WIDTH 4
#define SIR1_SPEED_WORDMODE_INDEX 0
#define SIR1_SPEED_WORDMODE_WIDTH 3
/* SerDes RxTx register offsets */
#define RXTX_REG6 0x0018
#define RXTX_REG20 0x0050
#define RXTX_REG22 0x0058
#define RXTX_REG114 0x01c8
#define RXTX_REG129 0x0204
/* SerDes RxTx register entry bit positions and sizes */
#define RXTX_REG6_RESETB_RXD_INDEX 8
#define RXTX_REG6_RESETB_RXD_WIDTH 1
#define RXTX_REG20_BLWC_ENA_INDEX 2
#define RXTX_REG20_BLWC_ENA_WIDTH 1
#define RXTX_REG114_PQ_REG_INDEX 9
#define RXTX_REG114_PQ_REG_WIDTH 7
#define RXTX_REG129_RXDFE_CONFIG_INDEX 14
#define RXTX_REG129_RXDFE_CONFIG_WIDTH 2
/* Descriptor/Packet entry bit positions and sizes */
#define RX_PACKET_ERRORS_CRC_INDEX 2
#define RX_PACKET_ERRORS_CRC_WIDTH 1
@@ -973,10 +1015,47 @@
#define TX_NORMAL_DESC2_VLAN_INSERT 0x2
/* MDIO undefined or vendor specific registers */
#ifndef MDIO_PMA_10GBR_PMD_CTRL
#define MDIO_PMA_10GBR_PMD_CTRL 0x0096
#endif
#ifndef MDIO_PMA_10GBR_FECCTRL
#define MDIO_PMA_10GBR_FECCTRL 0x00ab
#endif
#ifndef MDIO_AN_XNP
#define MDIO_AN_XNP 0x0016
#endif
#ifndef MDIO_AN_LPX
#define MDIO_AN_LPX 0x0019
#endif
#ifndef MDIO_AN_COMP_STAT
#define MDIO_AN_COMP_STAT 0x0030
#endif
#ifndef MDIO_AN_INTMASK
#define MDIO_AN_INTMASK 0x8001
#endif
#ifndef MDIO_AN_INT
#define MDIO_AN_INT 0x8002
#endif
#ifndef MDIO_CTRL1_SPEED1G
#define MDIO_CTRL1_SPEED1G (MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
#endif
/* MDIO mask values */
#define XGBE_XNP_MCF_NULL_MESSAGE 0x001
#define XGBE_XNP_ACK_PROCESSED BIT(12)
#define XGBE_XNP_MP_FORMATTED BIT(13)
#define XGBE_XNP_NP_EXCHANGE BIT(15)
#define XGBE_KR_TRAINING_START BIT(0)
#define XGBE_KR_TRAINING_ENABLE BIT(1)
/* Bit setting and getting macros
* The get macro will extract the current bit field value from within
* the variable
@@ -1118,6 +1197,82 @@ do { \
#define XPCS_IOREAD(_pdata, _off) \
ioread32((_pdata)->xpcs_regs + (_off))
/* Macros for building, reading or writing register values or bits
* within the register values of SerDes integration registers.
*/
#define XSIR_GET_BITS(_var, _prefix, _field) \
GET_BITS((_var), \
_prefix##_##_field##_INDEX, \
_prefix##_##_field##_WIDTH)
#define XSIR_SET_BITS(_var, _prefix, _field, _val) \
SET_BITS((_var), \
_prefix##_##_field##_INDEX, \
_prefix##_##_field##_WIDTH, (_val))
#define XSIR0_IOREAD(_pdata, _reg) \
ioread16((_pdata)->sir0_regs + _reg)
#define XSIR0_IOREAD_BITS(_pdata, _reg, _field) \
GET_BITS(XSIR0_IOREAD((_pdata), _reg), \
_reg##_##_field##_INDEX, \
_reg##_##_field##_WIDTH)
#define XSIR0_IOWRITE(_pdata, _reg, _val) \
iowrite16((_val), (_pdata)->sir0_regs + _reg)
#define XSIR0_IOWRITE_BITS(_pdata, _reg, _field, _val) \
do { \
u16 reg_val = XSIR0_IOREAD((_pdata), _reg); \
SET_BITS(reg_val, \
_reg##_##_field##_INDEX, \
_reg##_##_field##_WIDTH, (_val)); \
XSIR0_IOWRITE((_pdata), _reg, reg_val); \
} while (0)
#define XSIR1_IOREAD(_pdata, _reg) \
ioread16((_pdata)->sir1_regs + _reg)
#define XSIR1_IOREAD_BITS(_pdata, _reg, _field) \
GET_BITS(XSIR1_IOREAD((_pdata), _reg), \
_reg##_##_field##_INDEX, \
_reg##_##_field##_WIDTH)
#define XSIR1_IOWRITE(_pdata, _reg, _val) \
iowrite16((_val), (_pdata)->sir1_regs + _reg)
#define XSIR1_IOWRITE_BITS(_pdata, _reg, _field, _val) \
do { \
u16 reg_val = XSIR1_IOREAD((_pdata), _reg); \
SET_BITS(reg_val, \
_reg##_##_field##_INDEX, \
_reg##_##_field##_WIDTH, (_val)); \
XSIR1_IOWRITE((_pdata), _reg, reg_val); \
} while (0)
/* Macros for building, reading or writing register values or bits
* within the register values of SerDes RxTx registers.
*/
#define XRXTX_IOREAD(_pdata, _reg) \
ioread16((_pdata)->rxtx_regs + _reg)
#define XRXTX_IOREAD_BITS(_pdata, _reg, _field) \
GET_BITS(XRXTX_IOREAD((_pdata), _reg), \
_reg##_##_field##_INDEX, \
_reg##_##_field##_WIDTH)
#define XRXTX_IOWRITE(_pdata, _reg, _val) \
iowrite16((_val), (_pdata)->rxtx_regs + _reg)
#define XRXTX_IOWRITE_BITS(_pdata, _reg, _field, _val) \
do { \
u16 reg_val = XRXTX_IOREAD((_pdata), _reg); \
SET_BITS(reg_val, \
_reg##_##_field##_INDEX, \
_reg##_##_field##_WIDTH, (_val)); \
XRXTX_IOWRITE((_pdata), _reg, reg_val); \
} while (0)
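
A short usage sketch for the SerDes accessor macros above (illustrative only; GET_BITS()/SET_BITS() are the driver's generic shift/mask helpers defined elsewhere in this header, and the values written are simply defaults that appear elsewhere in the patch):

/* Illustrative sketch only -- not a hunk from this series. */
static void example_serdes_access(struct xgbe_prv_data *pdata)
{
	/* Token pasting resolves to SIR1_SPEED_CDR_RATE_INDEX/_WIDTH */
	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, CDR_RATE, 0x2);

	/* Baseline wandering correction enable in the Rx/Tx block */
	XRXTX_IOWRITE_BITS(pdata, RXTX_REG20, BLWC_ENA, 1);

	/* Status bits are read back the same way */
	while (!XSIR0_IOREAD_BITS(pdata, SIR0_STATUS, RX_READY))
		cpu_relax();
}
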
/* Macros for building, reading or writing register values or bits
* using MDIO. Different from above because of the use of standardized
* Linux include values. No shifting is performed with the bit
......
@@ -150,9 +150,12 @@ static int xgbe_dcb_ieee_setets(struct net_device *netdev,
tc_ets = 0;
tc_ets_weight = 0;
for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
netif_dbg(pdata, drv, netdev,
"TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n", i,
ets->tc_tx_bw[i], ets->tc_rx_bw[i],
ets->tc_tsa[i]);
netif_dbg(pdata, drv, netdev, "PRIO%u: TC=%hhu\n", i,
ets->prio_tc[i]);
if ((ets->tc_tx_bw[i] || ets->tc_tsa[i]) &&
(i >= pdata->hw_feat.tc_cnt))
@@ -214,7 +217,8 @@ static int xgbe_dcb_ieee_setpfc(struct net_device *netdev,
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
netif_dbg(pdata, drv, netdev,
"cap=%hhu, en=%#hhx, mbc=%hhu, delay=%hhu\n",
pfc->pfc_cap, pfc->pfc_en, pfc->mbc, pfc->delay);
if (!pdata->pfc) {
@@ -238,9 +242,10 @@ static u8 xgbe_dcb_getdcbx(struct net_device *netdev)
static u8 xgbe_dcb_setdcbx(struct net_device *netdev, u8 dcbx)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
u8 support = xgbe_dcb_getdcbx(netdev);
netif_dbg(pdata, drv, netdev, "DCBX=%#hhx\n", dcbx);
if (dcbx & ~support)
return 1;
......
@@ -208,8 +208,9 @@ static int xgbe_init_ring(struct xgbe_prv_data *pdata,
if (!ring->rdata)
return -ENOMEM;
netif_dbg(pdata, drv, pdata->netdev,
"rdesc=%p, rdesc_dma=%pad, rdata=%p\n",
ring->rdesc, &ring->rdesc_dma, ring->rdata);
DBGPR("<--xgbe_init_ring\n");
@@ -226,7 +227,9 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
channel = pdata->channel;
for (i = 0; i < pdata->channel_count; i++, channel++) {
netif_dbg(pdata, drv, pdata->netdev, "%s - Tx ring:\n",
channel->name);
ret = xgbe_init_ring(pdata, channel->tx_ring,
pdata->tx_desc_count);
if (ret) {
@@ -235,12 +238,14 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
goto err_ring;
}
netif_dbg(pdata, drv, pdata->netdev, "%s - Rx ring:\n",
channel->name);
ret = xgbe_init_ring(pdata, channel->rx_ring,
pdata->rx_desc_count);
if (ret) {
netdev_alert(pdata->netdev,
"error initializing Rx ring\n");
goto err_ring;
}
}
@@ -476,8 +481,6 @@ static void xgbe_unmap_rdata(struct xgbe_prv_data *pdata,
if (rdata->state_saved) {
rdata->state_saved = 0;
rdata->state.incomplete = 0;
rdata->state.context_next = 0;
rdata->state.skb = NULL;
rdata->state.len = 0;
rdata->state.error = 0;
@@ -518,8 +521,6 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
rdata = XGBE_GET_DESC_DATA(ring, cur_index);
if (tso) {
DBGPR(" TSO packet\n");
/* Map the TSO header */
skb_dma = dma_map_single(pdata->dev, skb->data,
packet->header_len, DMA_TO_DEVICE);
@@ -529,6 +530,9 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
}
rdata->skb_dma = skb_dma;
rdata->skb_dma_len = packet->header_len;
netif_dbg(pdata, tx_queued, pdata->netdev,
"skb header: index=%u, dma=%pad, len=%u\n",
cur_index, &skb_dma, packet->header_len);
offset = packet->header_len;
@@ -550,8 +554,9 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
}
rdata->skb_dma = skb_dma;
rdata->skb_dma_len = len;
netif_dbg(pdata, tx_queued, pdata->netdev,
"skb data: index=%u, dma=%pad, len=%u\n",
cur_index, &skb_dma, len);
datalen -= len;
offset += len;
@@ -563,7 +568,8 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
}
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
netif_dbg(pdata, tx_queued, pdata->netdev,
"mapping frag %u\n", i);
frag = &skb_shinfo(skb)->frags[i];
offset = 0;
@@ -582,8 +588,9 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
rdata->skb_dma = skb_dma;
rdata->skb_dma_len = len;
rdata->mapped_as_page = 1;
netif_dbg(pdata, tx_queued, pdata->netdev,
"skb frag: index=%u, dma=%pad, len=%u\n",
cur_index, &skb_dma, len);
datalen -= len;
offset += len;
......
@@ -710,7 +710,8 @@ static int xgbe_set_promiscuous_mode(struct xgbe_prv_data *pdata,
if (XGMAC_IOREAD_BITS(pdata, MAC_PFR, PR) == val)
return 0;
netif_dbg(pdata, drv, pdata->netdev, "%s promiscuous mode\n",
enable ? "entering" : "leaving");
XGMAC_IOWRITE_BITS(pdata, MAC_PFR, PR, val);
return 0;
@@ -724,7 +725,8 @@ static int xgbe_set_all_multicast_mode(struct xgbe_prv_data *pdata,
if (XGMAC_IOREAD_BITS(pdata, MAC_PFR, PM) == val)
return 0;
netif_dbg(pdata, drv, pdata->netdev, "%s allmulti mode\n",
enable ? "entering" : "leaving");
XGMAC_IOWRITE_BITS(pdata, MAC_PFR, PM, val);
return 0;
@@ -749,8 +751,9 @@ static void xgbe_set_mac_reg(struct xgbe_prv_data *pdata,
mac_addr[0] = ha->addr[4];
mac_addr[1] = ha->addr[5];
netif_dbg(pdata, drv, pdata->netdev,
"adding mac address %pM at %#x\n",
ha->addr, *mac_reg);
XGMAC_SET_BITS(mac_addr_hi, MAC_MACA1HR, AE, 1);
}
@@ -907,23 +910,6 @@ static void xgbe_write_mmd_regs(struct xgbe_prv_data *pdata, int prtad,
else
mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
/* If the PCS is changing modes, match the MAC speed to it */
if (((mmd_address >> 16) == MDIO_MMD_PCS) &&
((mmd_address & 0xffff) == MDIO_CTRL2)) {
struct phy_device *phydev = pdata->phydev;
if (mmd_data & MDIO_PCS_CTRL2_TYPE) {
/* KX mode */
if (phydev->supported & SUPPORTED_1000baseKX_Full)
xgbe_set_gmii_speed(pdata);
else
xgbe_set_gmii_2500_speed(pdata);
} else {
/* KR mode */
xgbe_set_xgmii_speed(pdata);
}
}
/* The PCS registers are accessed using mmio. The underlying APB3
* management interface uses indirect addressing to access the MMD
* register sets. This requires accessing of the PCS register in two
@@ -1322,7 +1308,8 @@ static void xgbe_config_dcb_tc(struct xgbe_prv_data *pdata)
for (i = 0; i < pdata->hw_feat.tc_cnt; i++) {
switch (ets->tc_tsa[i]) {
case IEEE_8021QAZ_TSA_STRICT:
netif_dbg(pdata, drv, pdata->netdev,
"TC%u using SP\n", i);
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
MTL_TSA_SP);
break;
@@ -1330,7 +1317,8 @@ static void xgbe_config_dcb_tc(struct xgbe_prv_data *pdata)
weight = total_weight * ets->tc_tx_bw[i] / 100;
weight = clamp(weight, min_weight, total_weight);
netif_dbg(pdata, drv, pdata->netdev,
"TC%u using DWRR (weight %u)\n", i, weight);
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
MTL_TSA_ETS);
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_QWR, QW,
@@ -1359,7 +1347,8 @@ static void xgbe_config_dcb_pfc(struct xgbe_prv_data *pdata)
}
mask &= 0xff;
netif_dbg(pdata, drv, pdata->netdev, "TC%u PFC mask=%#x\n",
tc, mask);
reg = MTL_TCPM0R + (MTL_TCPM_INC * (tc / MTL_TCPM_TC_PER_REG));
reg_val = XGMAC_IOREAD(pdata, reg);
@@ -1457,7 +1446,8 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
/* Create a context descriptor if this is a TSO packet */
if (tso_context || vlan_context) {
if (tso_context) {
netif_dbg(pdata, tx_queued, pdata->netdev,
"TSO context descriptor, mss=%u\n",
packet->mss);
/* Set the MSS size */
@@ -1476,7 +1466,8 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
}
if (vlan_context) {
netif_dbg(pdata, tx_queued, pdata->netdev,
"VLAN context descriptor, ctag=%u\n",
packet->vlan_ctag);
/* Mark it as a CONTEXT descriptor */
@@ -1533,6 +1524,8 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
packet->tcp_payload_len);
XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TCPHDRLEN,
packet->tcp_header_len / 4);
pdata->ext_stats.tx_tso_packets++;
} else {
/* Enable CRC and Pad Insertion */
XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CPC, 0);
@@ -1594,9 +1587,9 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
rdesc = rdata->rdesc;
XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN, 1);
if (netif_msg_tx_queued(pdata))
xgbe_dump_tx_desc(pdata, ring, start_index,
packet->rdesc_count, 1);
/* Make sure ownership is written to the descriptor */
dma_wmb();
@@ -1618,11 +1611,12 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
static int xgbe_dev_read(struct xgbe_channel *channel)
{
struct xgbe_prv_data *pdata = channel->pdata;
struct xgbe_ring *ring = channel->rx_ring;
struct xgbe_ring_data *rdata;
struct xgbe_ring_desc *rdesc;
struct xgbe_packet_data *packet = &ring->packet_data;
struct net_device *netdev = pdata->netdev;
unsigned int err, etlt, l34t;
DBGPR("-->xgbe_dev_read: cur = %d\n", ring->cur);
@@ -1637,9 +1631,8 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
/* Make sure descriptor fields are read after reading the OWN bit */
dma_rmb();
if (netif_msg_rx_status(pdata))
xgbe_dump_rx_desc(pdata, ring, ring->cur);
if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, CTXT)) {
/* Timestamp Context Descriptor */
@@ -1661,9 +1654,12 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
CONTEXT_NEXT, 1);
/* Get the header length */
if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, FD)) {
rdata->rx.hdr_len = XGMAC_GET_BITS_LE(rdesc->desc2,
RX_NORMAL_DESC2, HL);
if (rdata->rx.hdr_len)
pdata->ext_stats.rx_split_header_packets++;
}
/* Get the RSS hash */
if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, RSV)) {
@@ -1700,14 +1696,14 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
INCOMPLETE, 0);
/* Set checksum done indicator as appropriate */
if (netdev->features & NETIF_F_RXCSUM)
XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
CSUM_DONE, 1);
/* Check for errors (only valid in last descriptor) */
err = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ES);
etlt = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ETLT);
netif_dbg(pdata, rx_status, netdev, "err=%u, etlt=%#x\n", err, etlt);
if (!err || !etlt) {
/* No error if err is 0 or etlt is 0 */
@@ -1718,7 +1714,8 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
packet->vlan_ctag = XGMAC_GET_BITS_LE(rdesc->desc0,
RX_NORMAL_DESC0,
OVT);
netif_dbg(pdata, rx_status, netdev, "vlan-ctag=%#06x\n",
packet->vlan_ctag);
}
} else {
if ((etlt == 0x05) || (etlt == 0x06))
@@ -2026,7 +2023,7 @@ static void xgbe_config_tx_fifo_size(struct xgbe_prv_data *pdata)
for (i = 0; i < pdata->tx_q_count; i++)
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TQS, fifo_size);
netif_info(pdata, drv, pdata->netdev,
"%d Tx hardware queues, %d byte fifo per queue\n",
pdata->tx_q_count, ((fifo_size + 1) * 256));
}
@@ -2042,7 +2039,7 @@ static void xgbe_config_rx_fifo_size(struct xgbe_prv_data *pdata)
for (i = 0; i < pdata->rx_q_count; i++)
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RQS, fifo_size);
netif_info(pdata, drv, pdata->netdev,
"%d Rx hardware queues, %d byte fifo per queue\n",
pdata->rx_q_count, ((fifo_size + 1) * 256));
}
@@ -2063,14 +2060,16 @@ static void xgbe_config_queue_mapping(struct xgbe_prv_data *pdata)
for (i = 0, queue = 0; i < pdata->hw_feat.tc_cnt; i++) {
for (j = 0; j < qptc; j++) {
netif_dbg(pdata, drv, pdata->netdev,
"TXq%u mapped to TC%u\n", queue, i);
XGMAC_MTL_IOWRITE_BITS(pdata, queue, MTL_Q_TQOMR,
Q2TCMAP, i);
pdata->q2tc_map[queue++] = i;
}
if (i < qptc_extra) {
netif_dbg(pdata, drv, pdata->netdev,
"TXq%u mapped to TC%u\n", queue, i);
XGMAC_MTL_IOWRITE_BITS(pdata, queue, MTL_Q_TQOMR,
Q2TCMAP, i);
pdata->q2tc_map[queue++] = i;
@@ -2088,13 +2087,15 @@ static void xgbe_config_queue_mapping(struct xgbe_prv_data *pdata)
for (i = 0, prio = 0; i < prio_queues;) {
mask = 0;
for (j = 0; j < ppq; j++) {
netif_dbg(pdata, drv, pdata->netdev,
"PRIO%u mapped to RXq%u\n", prio, i);
mask |= (1 << prio);
pdata->prio2q_map[prio++] = i;
}
if (i < ppq_extra) {
netif_dbg(pdata, drv, pdata->netdev,
"PRIO%u mapped to RXq%u\n", prio, i);
mask |= (1 << prio);
pdata->prio2q_map[prio++] = i;
}
......
@@ -133,6 +133,12 @@ struct xgbe_stats {
offsetof(struct xgbe_prv_data, mmc_stats._var), \
}
#define XGMAC_EXT_STAT(_string, _var) \
{ _string, \
FIELD_SIZEOF(struct xgbe_ext_stats, _var), \
offsetof(struct xgbe_prv_data, ext_stats._var), \
}
static const struct xgbe_stats xgbe_gstring_stats[] = {
XGMAC_MMC_STAT("tx_bytes", txoctetcount_gb),
XGMAC_MMC_STAT("tx_packets", txframecount_gb),
@@ -140,6 +146,7 @@ static const struct xgbe_stats xgbe_gstring_stats[] = {
XGMAC_MMC_STAT("tx_broadcast_packets", txbroadcastframes_gb),
XGMAC_MMC_STAT("tx_multicast_packets", txmulticastframes_gb),
XGMAC_MMC_STAT("tx_vlan_packets", txvlanframes_g),
XGMAC_EXT_STAT("tx_tso_packets", tx_tso_packets),
XGMAC_MMC_STAT("tx_64_byte_packets", tx64octets_gb),
XGMAC_MMC_STAT("tx_65_to_127_byte_packets", tx65to127octets_gb),
XGMAC_MMC_STAT("tx_128_to_255_byte_packets", tx128to255octets_gb),
@@ -171,6 +178,7 @@ static const struct xgbe_stats xgbe_gstring_stats[] = {
XGMAC_MMC_STAT("rx_fifo_overflow_errors", rxfifooverflow),
XGMAC_MMC_STAT("rx_watchdog_errors", rxwatchdogerror),
XGMAC_MMC_STAT("rx_pause_frames", rxpauseframes),
XGMAC_EXT_STAT("rx_split_header_packets", rx_split_header_packets),
};
#define XGBE_STATS_COUNT ARRAY_SIZE(xgbe_gstring_stats)
@@ -239,9 +247,9 @@ static void xgbe_get_pauseparam(struct net_device *netdev,
DBGPR("-->xgbe_get_pauseparam\n");
pause->autoneg = pdata->phy.pause_autoneg;
pause->tx_pause = pdata->phy.tx_pause;
pause->rx_pause = pdata->phy.rx_pause;
DBGPR("<--xgbe_get_pauseparam\n");
}
@@ -250,7 +258,6 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
int ret = 0;
DBGPR("-->xgbe_set_pauseparam\n");
@@ -258,21 +265,26 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
DBGPR(" autoneg = %d, tx_pause = %d, rx_pause = %d\n",
pause->autoneg, pause->tx_pause, pause->rx_pause);
if (pause->autoneg && (pdata->phy.autoneg != AUTONEG_ENABLE))
return -EINVAL;
pdata->phy.pause_autoneg = pause->autoneg;
pdata->phy.tx_pause = pause->tx_pause;
pdata->phy.rx_pause = pause->rx_pause;
pdata->phy.advertising &= ~ADVERTISED_Pause;
pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
if (pause->rx_pause) {
pdata->phy.advertising |= ADVERTISED_Pause;
pdata->phy.advertising |= ADVERTISED_Asym_Pause;
}
if (pause->tx_pause)
pdata->phy.advertising ^= ADVERTISED_Asym_Pause;
if (netif_running(netdev))
ret = pdata->phy_if.phy_config_aneg(pdata);
DBGPR("<--xgbe_set_pauseparam\n");
@@ -283,36 +295,39 @@ static int xgbe_get_settings(struct net_device *netdev,
struct ethtool_cmd *cmd)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
DBGPR("-->xgbe_get_settings\n");
cmd->phy_address = pdata->phy.address;
cmd->supported = pdata->phy.supported;
cmd->advertising = pdata->phy.advertising;
cmd->lp_advertising = pdata->phy.lp_advertising;
cmd->autoneg = pdata->phy.autoneg;
ethtool_cmd_speed_set(cmd, pdata->phy.speed);
cmd->duplex = pdata->phy.duplex;
cmd->port = PORT_NONE;
cmd->transceiver = XCVR_INTERNAL;
DBGPR("<--xgbe_get_settings\n");
return 0;
}
static int xgbe_set_settings(struct net_device *netdev,
struct ethtool_cmd *cmd)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
u32 speed;
int ret;
DBGPR("-->xgbe_set_settings\n");
speed = ethtool_cmd_speed(cmd);
if (cmd->phy_address != pdata->phy.address)
return -EINVAL;
if ((cmd->autoneg != AUTONEG_ENABLE) &&
@@ -333,23 +348,23 @@ static int xgbe_set_settings(struct net_device *netdev,
return -EINVAL;
}
cmd->advertising &= pdata->phy.supported;
if ((cmd->autoneg == AUTONEG_ENABLE) && !cmd->advertising)
return -EINVAL;
ret = 0;
pdata->phy.autoneg = cmd->autoneg;
pdata->phy.speed = speed;
pdata->phy.duplex = cmd->duplex;
pdata->phy.advertising = cmd->advertising;
if (cmd->autoneg == AUTONEG_ENABLE)
pdata->phy.advertising |= ADVERTISED_Autoneg;
else
pdata->phy.advertising &= ~ADVERTISED_Autoneg;
if (netif_running(netdev))
ret = pdata->phy_if.phy_config_aneg(pdata);
DBGPR("<--xgbe_set_settings\n");
......
@@ -129,7 +129,7 @@
#include <net/dcbnl.h>
#define XGBE_DRV_NAME "amd-xgbe"
#define XGBE_DRV_VERSION "1.0.2"
#define XGBE_DRV_DESC "AMD 10 Gigabit Ethernet Driver"
/* Descriptor related defines */
@@ -178,14 +178,17 @@
#define XGMAC_JUMBO_PACKET_MTU 9000
#define XGMAC_MAX_JUMBO_PACKET 9018
/* MDIO bus phy name */
#define XGBE_PHY_NAME "amd_xgbe_phy"
#define XGBE_PRTAD 0
/* Common property names */
#define XGBE_MAC_ADDR_PROPERTY "mac-address"
#define XGBE_PHY_MODE_PROPERTY "phy-mode"
#define XGBE_DMA_IRQS_PROPERTY "amd,per-channel-interrupt"
#define XGBE_SPEEDSET_PROPERTY "amd,speed-set"
#define XGBE_BLWC_PROPERTY "amd,serdes-blwc"
#define XGBE_CDR_RATE_PROPERTY "amd,serdes-cdr-rate"
#define XGBE_PQ_SKEW_PROPERTY "amd,serdes-pq-skew"
#define XGBE_TX_AMP_PROPERTY "amd,serdes-tx-amp"
#define XGBE_DFE_CFG_PROPERTY "amd,serdes-dfe-tap-config"
#define XGBE_DFE_ENA_PROPERTY "amd,serdes-dfe-tap-enable"
/* Device-tree clock names */
#define XGBE_DMA_CLOCK "dma_clk"
@@ -241,6 +244,49 @@
#define XGBE_RSS_LOOKUP_TABLE_TYPE 0
#define XGBE_RSS_HASH_KEY_TYPE 1
/* Auto-negotiation */
#define XGBE_AN_MS_TIMEOUT 500
#define XGBE_LINK_TIMEOUT 10
#define XGBE_AN_INT_CMPLT 0x01
#define XGBE_AN_INC_LINK 0x02
#define XGBE_AN_PG_RCV 0x04
#define XGBE_AN_INT_MASK 0x07
/* Rate-change complete wait/retry count */
#define XGBE_RATECHANGE_COUNT 500
/* Default SerDes settings */
#define XGBE_SPEED_10000_BLWC 0
#define XGBE_SPEED_10000_CDR 0x7
#define XGBE_SPEED_10000_PLL 0x1
#define XGBE_SPEED_10000_PQ 0x12
#define XGBE_SPEED_10000_RATE 0x0
#define XGBE_SPEED_10000_TXAMP 0xa
#define XGBE_SPEED_10000_WORD 0x7
#define XGBE_SPEED_10000_DFE_TAP_CONFIG 0x1
#define XGBE_SPEED_10000_DFE_TAP_ENABLE 0x7f
#define XGBE_SPEED_2500_BLWC 1
#define XGBE_SPEED_2500_CDR 0x2
#define XGBE_SPEED_2500_PLL 0x0
#define XGBE_SPEED_2500_PQ 0xa
#define XGBE_SPEED_2500_RATE 0x1
#define XGBE_SPEED_2500_TXAMP 0xf
#define XGBE_SPEED_2500_WORD 0x1
#define XGBE_SPEED_2500_DFE_TAP_CONFIG 0x3
#define XGBE_SPEED_2500_DFE_TAP_ENABLE 0x0
#define XGBE_SPEED_1000_BLWC 1
#define XGBE_SPEED_1000_CDR 0x2
#define XGBE_SPEED_1000_PLL 0x0
#define XGBE_SPEED_1000_PQ 0xa
#define XGBE_SPEED_1000_RATE 0x3
#define XGBE_SPEED_1000_TXAMP 0xf
#define XGBE_SPEED_1000_WORD 0x1
#define XGBE_SPEED_1000_DFE_TAP_CONFIG 0x3
#define XGBE_SPEED_1000_DFE_TAP_ENABLE 0x0
struct xgbe_prv_data;
struct xgbe_packet_data {
@@ -334,8 +380,6 @@ struct xgbe_ring_data {
*/
unsigned int state_saved;
struct {
unsigned int incomplete;
unsigned int context_next;
struct sk_buff *skb;
unsigned int len;
unsigned int error;
@@ -414,6 +458,13 @@ struct xgbe_channel {
struct xgbe_ring *rx_ring;
} ____cacheline_aligned;
enum xgbe_state {
XGBE_DOWN,
XGBE_LINK,
XGBE_LINK_INIT,
XGBE_LINK_ERR,
};
enum xgbe_int {
XGMAC_INT_DMA_CH_SR_TI,
XGMAC_INT_DMA_CH_SR_TPS,
@@ -445,6 +496,57 @@ enum xgbe_mtl_fifo_size {
XGMAC_MTL_FIFO_SIZE_256K = 0x3ff,
};
enum xgbe_speed {
XGBE_SPEED_1000 = 0,
XGBE_SPEED_2500,
XGBE_SPEED_10000,
XGBE_SPEEDS,
};
enum xgbe_an {
XGBE_AN_READY = 0,
XGBE_AN_PAGE_RECEIVED,
XGBE_AN_INCOMPAT_LINK,
XGBE_AN_COMPLETE,
XGBE_AN_NO_LINK,
XGBE_AN_ERROR,
};
enum xgbe_rx {
XGBE_RX_BPA = 0,
XGBE_RX_XNP,
XGBE_RX_COMPLETE,
XGBE_RX_ERROR,
};
enum xgbe_mode {
XGBE_MODE_KR = 0,
XGBE_MODE_KX,
};
enum xgbe_speedset {
XGBE_SPEEDSET_1000_10000 = 0,
XGBE_SPEEDSET_2500_10000,
};
struct xgbe_phy {
u32 supported;
u32 advertising;
u32 lp_advertising;
int address;
int autoneg;
int speed;
int duplex;
int link;
int pause_autoneg;
int tx_pause;
int rx_pause;
};
struct xgbe_mmc_stats {
/* Tx Stats */
u64 txoctetcount_gb;
@@ -492,6 +594,11 @@ struct xgbe_mmc_stats {
u64 rxwatchdogerror;
};
struct xgbe_ext_stats {
u64 tx_tso_packets;
u64 rx_split_header_packets;
};
struct xgbe_hw_if {
int (*tx_complete)(struct xgbe_ring_desc *);
@@ -591,6 +698,20 @@ struct xgbe_hw_if {
int (*set_rss_lookup_table)(struct xgbe_prv_data *, const u32 *);
};
struct xgbe_phy_if {
/* For initial PHY setup */
void (*phy_init)(struct xgbe_prv_data *);
/* For PHY support when setting device up/down */
int (*phy_reset)(struct xgbe_prv_data *);
int (*phy_start)(struct xgbe_prv_data *);
void (*phy_stop)(struct xgbe_prv_data *);
/* For PHY support while device is up */
void (*phy_status)(struct xgbe_prv_data *);
int (*phy_config_aneg)(struct xgbe_prv_data *);
};
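
For orientation, a minimal sketch (the open helper below is invented, not code from this series) of how the nic driver is expected to drive the new phy_if callbacks once xgbe_init_function_ptrs_phy() has filled them in:

/* Illustrative sketch only -- not a hunk from this series. */
static int example_open(struct xgbe_prv_data *pdata)
{
	int ret;

	/* phy_if was populated earlier via xgbe_init_function_ptrs_phy() */
	ret = pdata->phy_if.phy_reset(pdata);
	if (ret)
		return ret;

	/* Starts auto-negotiation / link monitoring */
	return pdata->phy_if.phy_start(pdata);
}
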
struct xgbe_desc_if {
int (*alloc_ring_resources)(struct xgbe_prv_data *);
void (*free_ring_resources)(struct xgbe_prv_data *);
@@ -660,6 +781,9 @@ struct xgbe_prv_data {
/* XGMAC/XPCS related mmio registers */
void __iomem *xgmac_regs; /* XGMAC CSRs */
void __iomem *xpcs_regs; /* XPCS MMD registers */
void __iomem *rxtx_regs; /* SerDes Rx/Tx CSRs */
void __iomem *sir0_regs; /* SerDes integration registers (1/2) */
void __iomem *sir1_regs; /* SerDes integration registers (2/2) */
/* Overall device lock */
spinlock_t lock;
@@ -670,10 +794,14 @@ struct xgbe_prv_data {
/* RSS addressing mutex */
struct mutex rss_mutex;
/* Flags representing xgbe_state */
unsigned long dev_state;
int dev_irq;
unsigned int per_channel_irq;
struct xgbe_hw_if hw_if;
struct xgbe_phy_if phy_if;
struct xgbe_desc_if desc_if;
/* AXI DMA settings */
@@ -682,6 +810,11 @@ struct xgbe_prv_data {
unsigned int arcache;
unsigned int awcache;
/* Service routine support */
struct workqueue_struct *dev_workqueue;
struct work_struct service_work;
struct timer_list service_timer;
/* Rings for Tx/Rx on a DMA channel */
struct xgbe_channel *channel;
unsigned int channel_count;
@@ -729,27 +862,12 @@ struct xgbe_prv_data {
u32 rss_table[XGBE_RSS_MAX_TABLE_SIZE];
u32 rss_options;
/* MDIO settings */
struct module *phy_module;
char *mii_bus_id;
struct mii_bus *mii;
int mdio_mmd;
struct phy_device *phydev;
int default_autoneg;
int default_speed;
/* Current PHY settings */
phy_interface_t phy_mode;
int phy_link;
int phy_speed;
unsigned int phy_tx_pause;
unsigned int phy_rx_pause;
/* Netdev related settings */
unsigned char mac_addr[ETH_ALEN];
netdev_features_t netdev_features;
struct napi_struct napi;
struct xgbe_mmc_stats mmc_stats;
struct xgbe_ext_stats ext_stats;
/* Filtering support */
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
@@ -787,6 +905,54 @@ struct xgbe_prv_data {
/* Keeps track of power mode */
unsigned int power_down;
/* Network interface message level setting */
u32 msg_enable;
/* Current PHY settings */
phy_interface_t phy_mode;
int phy_link;
int phy_speed;
/* MDIO/PHY related settings */
struct xgbe_phy phy;
int mdio_mmd;
unsigned long link_check;
char an_name[IFNAMSIZ + 32];
struct workqueue_struct *an_workqueue;
int an_irq;
struct work_struct an_irq_work;
unsigned int speed_set;
/* SerDes UEFI configurable settings.
* Switching between modes/speeds requires new values for some
* SerDes settings. The values can be supplied as device
* properties in array format. The first array entry is for
* 1GbE, second for 2.5GbE and third for 10GbE
*/
u32 serdes_blwc[XGBE_SPEEDS];
u32 serdes_cdr_rate[XGBE_SPEEDS];
u32 serdes_pq_skew[XGBE_SPEEDS];
u32 serdes_tx_amp[XGBE_SPEEDS];
u32 serdes_dfe_tap_cfg[XGBE_SPEEDS];
u32 serdes_dfe_tap_ena[XGBE_SPEEDS];
/* Auto-negotiation state machine support */
struct mutex an_mutex;
enum xgbe_an an_result;
enum xgbe_an an_state;
enum xgbe_rx kr_state;
enum xgbe_rx kx_state;
struct work_struct an_work;
unsigned int an_supported;
unsigned int parallel_detect;
unsigned int fec_ability;
unsigned long an_start;
unsigned int lpm_ctrl; /* CTRL1 for resume */
#ifdef CONFIG_DEBUG_FS
struct dentry *xgbe_debugfs;
@@ -800,6 +966,7 @@ struct xgbe_prv_data {
/* Function prototypes*/
void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *);
void xgbe_init_function_ptrs_phy(struct xgbe_phy_if *);
void xgbe_init_function_ptrs_desc(struct xgbe_desc_if *);
struct net_device_ops *xgbe_get_netdev_ops(void);
struct ethtool_ops *xgbe_get_ethtool_ops(void);
@@ -807,14 +974,11 @@ struct ethtool_ops *xgbe_get_ethtool_ops(void);
const struct dcbnl_rtnl_ops *xgbe_get_dcbnl_ops(void);
#endif
int xgbe_mdio_register(struct xgbe_prv_data *);
void xgbe_mdio_unregister(struct xgbe_prv_data *);
void xgbe_dump_phy_registers(struct xgbe_prv_data *);
void xgbe_ptp_register(struct xgbe_prv_data *);
void xgbe_ptp_unregister(struct xgbe_prv_data *);
void xgbe_dump_tx_desc(struct xgbe_prv_data *, struct xgbe_ring *,
unsigned int, unsigned int, unsigned int);
void xgbe_dump_rx_desc(struct xgbe_prv_data *, struct xgbe_ring *,
unsigned int);
void xgbe_print_pkt(struct net_device *, struct sk_buff *, bool);
void xgbe_get_all_hw_features(struct xgbe_prv_data *);
@@ -831,18 +995,6 @@ static inline void xgbe_debugfs_init(struct xgbe_prv_data *pdata) {}
static inline void xgbe_debugfs_exit(struct xgbe_prv_data *pdata) {}
#endif /* CONFIG_DEBUG_FS */
/* NOTE: Uncomment for TX and RX DESCRIPTOR DUMP in KERNEL LOG */
#if 0
#define XGMAC_ENABLE_TX_DESC_DUMP
#define XGMAC_ENABLE_RX_DESC_DUMP
#endif
/* NOTE: Uncomment for TX and RX PACKET DUMP in KERNEL LOG */
#if 0
#define XGMAC_ENABLE_TX_PKT_DUMP
#define XGMAC_ENABLE_RX_PKT_DUMP
#endif
/* NOTE: Uncomment for function trace log messages in KERNEL LOG */
#if 0
#define YDEBUG
@@ -852,10 +1004,8 @@ static inline void xgbe_debugfs_exit(struct xgbe_prv_data *pdata) {}
/* For debug prints */
#ifdef YDEBUG
#define DBGPR(x...) pr_alert(x)
#define DBGPHY_REGS(x...) xgbe_dump_phy_registers(x)
#else
#define DBGPR(x...) do { } while (0)
#define DBGPHY_REGS(x...) do { } while (0)
#endif
#ifdef YDEBUG_MDIO
......
@@ -24,13 +24,6 @@ config AMD_PHY
---help---
Currently supports the am79c874
config AMD_XGBE_PHY
tristate "Driver for the AMD 10GbE (amd-xgbe) PHYs"
depends on (OF || ACPI) && HAS_IOMEM
depends on ARM64 || COMPILE_TEST
---help---
Currently supports the AMD 10GbE PHY
config MARVELL_PHY
tristate "Drivers for Marvell PHYs"
---help---
......
@@ -33,5 +33,4 @@ obj-$(CONFIG_MDIO_BUS_MUX_GPIO) += mdio-mux-gpio.o
obj-$(CONFIG_MDIO_BUS_MUX_MMIOREG) += mdio-mux-mmioreg.o
obj-$(CONFIG_MDIO_SUN4I) += mdio-sun4i.o
obj-$(CONFIG_MDIO_MOXART) += mdio-moxart.o
obj-$(CONFIG_AMD_XGBE_PHY) += amd-xgbe-phy.o
obj-$(CONFIG_MDIO_BCM_UNIMAC) += mdio-bcm-unimac.o