Commit 833716e0 authored by David S. Miller

Merge branch 'stmmac-GMAC4.x'

Alexandre TORGUE says:

====================
Enhance stmmac driver to support GMAC4.x IP

This is a set of patches to enhance the current stmmac driver to support
the new GMAC4.x chips. A new set of callbacks is defined to support this
new family: descriptors, dma, core.

One of the main changes in the GMAC 4.xx IP is descriptor management.
 -descriptors are only used in ring mode.
 -a descriptor is composed of 4 32-bit registers (no more extended
  descriptors)
 -descriptor mechanism (TX for example, but it is exactly the same for RX):
 -useful registers:
  -DMA_CH#_TxDesc_Ring_Len: length of the transmit descriptor ring
  -DMA_CH#_TxDesc_List_Address: start address of the ring
  -DMA_CH#_TxDesc_Tail_Pointer: address of the last descriptor to send + 1
  -DMA_CH#_TxDesc_Current_App_TxDesc: address of the current descriptor

 -The descriptor Tail Pointer register contains the pointer to the
  descriptor address (N). The base address and the current
  descriptor determine the address of the current descriptor that the
  DMA can process. The descriptors up to one location less than the
  one indicated by the descriptor tail pointer (N-1) are owned by
  the DMA. The DMA continues to process the descriptors until the
  following condition occurs:
  "current descriptor pointer == Descriptor Tail pointer"

  Then the DMA goes into suspend mode. To start a new transfer, the
  application must write to the descriptor tail pointer register and
  update the tail pointer so that the following condition holds
  (see the sketch below):
  "current descriptor pointer < Descriptor tail pointer"

  The DMA automatically wraps around to the base address when the end
  of the ring is reached.
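
  A minimal sketch of this tail-pointer "kick" (struct tx_ring, its
  fields and DMA_CH_TXDESC_TAIL_PTR() are illustrative names, not the
  driver's actual identifiers; struct dma_desc is the 4 x 32-bit
  descriptor):

  #include <linux/io.h>
  #include <linux/kernel.h>

  struct tx_ring {
          dma_addr_t phys_base;   /* DMA_CH#_TxDesc_List_Address */
          unsigned int cur;       /* index of the next free entry, i.e. N */
  };

  static void tx_ring_kick(void __iomem *ioaddr, struct tx_ring *ring,
                           u32 chan)
  {
          /* Descriptors up to N-1 now belong to the DMA */
          dma_addr_t tail = ring->phys_base +
                            ring->cur * sizeof(struct dma_desc);

          /* Writing the tail pointer makes "current < tail" true again,
           * so a DMA suspended on "current == tail" resumes processing.
           */
          writel(lower_32_bits(tail),
                 ioaddr + DMA_CH_TXDESC_TAIL_PTR(chan));
  }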

New features are available in this IP:
 -TSO (TCP Segmentation Offload) for TX only
 -Split header: header and payload placed in 2 different buffers (not yet
  implemented)

Below are some throughput figures obtained on some boxes:

                        iperf (mbps)
--------------------------------------
                       tcp     udp
                    tx   rx   tx  rx
                     -----------------
    GMAC4.x         935  930  750 800

Note: There is a change in the 4.10a databook in the bitfield mapping of the
DMA_CHANx_INTR_ENA register. This requires a different set of callbacks for
IP 4.00a and IP 4.10a.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 5ada37b5 91979b9d
@@ -59,6 +59,8 @@ Optional properties:
- snps,fb: fixed-burst
- snps,mb: mixed-burst
- snps,rb: rebuild INCRx Burst
- snps,tso: this enables the TSO feature; otherwise it is managed by the
MAC HW capability register.
- mdio: with compatible = "snps,dwmac-mdio", create and register mdio bus.
Examples:
STMicroelectronics 10/100/1000 Synopsys Ethernet driver
Copyright (C) 2007-2014 STMicroelectronics Ltd
Copyright (C) 2007-2015 STMicroelectronics Ltd
Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
This is the driver for the MAC 10/100/1000 on-chip Ethernet controllers
@@ -138,6 +138,8 @@ struct plat_stmmacenet_data {
int (*init)(struct platform_device *pdev, void *priv);
void (*exit)(struct platform_device *pdev, void *priv);
void *bsp_priv;
int has_gmac4;
bool tso_en;
};
Where:
@@ -181,6 +183,8 @@ Where:
registers. init/exit callbacks should not use or modify
platform data.
o bsp_priv: another private pointer.
o has_gmac4: uses GMAC4 core.
o tso_en: Enables TSO (TCP Segmentation Offload) feature.
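
A purely hypothetical illustration (values made up, not taken from any real
board file) of glue/platform code populating these new fields:

  #include <linux/stmmac.h>

  static struct plat_stmmacenet_data plat_dat = {
          .has_gmac4 = 1,  /* use the GMAC4 core and its callbacks */
          .tso_en = true,  /* allow TSO if the HW capability reports it */
  };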
For the MDIO bus we have:
@@ -278,6 +282,13 @@ Please see the following document:
o stmmac_ethtool.c: to implement the ethtool support;
o stmmac.h: private driver structure;
o common.h: common definitions and VFTs;
o mmc_core.c/mmc.h: Management MAC Counters;
o stmmac_hwtstamp.c: HW timestamp support for PTP;
o stmmac_ptp.c: PTP 1588 clock;
o dwmac-<XXX>.c: platform glue-logic files; e.g. dwmac-sti.c
for STMicroelectronics SoCs.
- GMAC 3.x
o descs.h: descriptor structure definitions;
o dwmac1000_core.c: dwmac GiGa core functions;
o dwmac1000_dma.c: dma functions for the GMAC chip;
@@ -289,11 +300,32 @@ Please see the following document:
o enh_desc.c: functions for handling enhanced descriptors;
o norm_desc.c: functions for handling normal descriptors;
o chain_mode.c/ring_mode.c: functions to manage RING/CHAINED modes;
o mmc_core.c/mmc.h: Management MAC Counters;
o stmmac_hwtstamp.c: HW timestamp support for PTP;
o stmmac_ptp.c: PTP 1588 clock;
o dwmac-<XXX>.c: platform glue-logic files; e.g. dwmac-sti.c
for STMicroelectronics SoCs.
- GMAC4.x generation
o dwmac4_core.c: dwmac GMAC4.x core functions;
o dwmac4_desc.c: functions for handling GMAC4.x descriptors;
o dwmac4_descs.h: descriptor definitions;
o dwmac4_dma.c: dma functions for the GMAC4.x chip;
o dwmac4_dma.h: dma definitions for the GMAC4.x chip;
o dwmac4.h: core definitions for the GMAC4.x chip;
o dwmac4_lib.c: generic GMAC4.x functions;
4.12) TSO support (GMAC4.x)
TSO (TCP Segmentation Offload) is supported by the GMAC 4.x chip family.
When a packet is sent through TCP, the TCP stack ensures that the SKB
provided to the low-level driver (stmmac in our case) matches the maximum
frame length (IP header + TCP header + payload <= 1500 bytes, for an MTU
set to 1500). This means that if an application using TCP wants to send a
packet whose length (after adding headers) is greater than 1514 bytes, the
packet is split into several TCP packets: the data payload is split and
headers (TCP/IP ..) are added. This is done in software.
When TSO is enabled, the TCP stack doesn't care about the maximum frame
length and provides the SKB to stmmac as it is. The GMAC IP then has to
perform the segmentation by itself to match the maximum frame length, as
sketched below.
This feature can be enabled in the device tree through the "snps,tso" entry.
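
A rough sketch only (not the driver's actual xmit path; ring bookkeeping
and error handling are omitted, and mss_desc, first_desc and hdr_len are
illustrative variables) of how the callbacks introduced by this series
combine for a TSO frame:

  /* Once at init time, when plat->tso_en and dma_cap.tsoen are both set: */
  priv->hw->dma->enable_tso(priv->ioaddr, true, STMMAC_CHAN0);

  /* Per stream, program the MSS through a context descriptor: */
  priv->hw->desc->set_mss(mss_desc, skb_shinfo(skb)->gso_size);

  /* Then flag the first segment: the TCP header length and the total TCP
   * payload length are written into TDES3.
   */
  priv->hw->desc->prepare_tso_tx_desc(first_desc, 1, hdr_len, 0,
                                      false, false, tcp_hdrlen(skb),
                                      skb->len - hdr_len);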
5) Debug Information
@@ -3348,6 +3348,7 @@ F: Documentation/powerpc/cxlflash.txt
STMMAC ETHERNET DRIVER
M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
M: Alexandre Torgue <alexandre.torgue@st.com>
L: netdev@vger.kernel.org
W: http://www.stlinux.com
S: Supported
@@ -2,7 +2,8 @@ obj-$(CONFIG_STMMAC_ETH) += stmmac.o
stmmac-objs:= stmmac_main.o stmmac_ethtool.o stmmac_mdio.o ring_mode.o \
chain_mode.o dwmac_lib.o dwmac1000_core.o dwmac1000_dma.o \
dwmac100_core.o dwmac100_dma.o enh_desc.o norm_desc.o \
mmc_core.o stmmac_hwtstamp.o stmmac_ptp.o $(stmmac-y)
mmc_core.o stmmac_hwtstamp.o stmmac_ptp.o dwmac4_descs.o \
dwmac4_dma.o dwmac4_lib.o dwmac4_core.o $(stmmac-y)
# Ordering matters. Generic driver must be last.
obj-$(CONFIG_STMMAC_PLATFORM) += stmmac-platform.o
@@ -41,6 +41,8 @@
/* Synopsys Core versions */
#define DWMAC_CORE_3_40 0x34
#define DWMAC_CORE_3_50 0x35
#define DWMAC_CORE_4_00 0x40
#define STMMAC_CHAN0 0 /* Always supported and default for all chips */
#define DMA_TX_SIZE 512
#define DMA_RX_SIZE 512
@@ -167,6 +169,9 @@ struct stmmac_extra_stats {
unsigned long mtl_rx_fifo_ctrl_active;
unsigned long mac_rx_frame_ctrl_fifo;
unsigned long mac_gmii_rx_proto_engine;
/* TSO */
unsigned long tx_tso_frames;
unsigned long tx_tso_nfrags;
};
/* CSR Frequency Access Defines*/
@@ -243,6 +248,7 @@ enum rx_frame_status {
csum_none = 0x2,
llc_snap = 0x4,
dma_own = 0x8,
rx_not_ls = 0x10,
};
/* Tx status */
@@ -269,6 +275,7 @@ enum dma_irq_status {
#define CORE_PCS_ANE_COMPLETE (1 << 5)
#define CORE_PCS_LINK_STATUS (1 << 6)
#define CORE_RGMII_IRQ (1 << 7)
#define CORE_IRQ_MTL_RX_OVERFLOW BIT(8)
/* Physical Coding Sublayer */
struct rgmii_adv {
@@ -300,8 +307,10 @@ struct dma_features {
/* 802.3az - Energy-Efficient Ethernet (EEE) */
unsigned int eee;
unsigned int av;
unsigned int tsoen;
/* TX and RX csum */
unsigned int tx_coe;
unsigned int rx_coe;
unsigned int rx_coe_type1;
unsigned int rx_coe_type2;
unsigned int rxfifo_over_2048;
@@ -348,6 +357,10 @@ struct stmmac_desc_ops {
void (*prepare_tx_desc) (struct dma_desc *p, int is_fs, int len,
bool csum_flag, int mode, bool tx_own,
bool ls);
void (*prepare_tso_tx_desc)(struct dma_desc *p, int is_fs, int len1,
int len2, bool tx_own, bool ls,
unsigned int tcphdrlen,
unsigned int tcppayloadlen);
/* Set/get the owner of the descriptor */
void (*set_tx_owner) (struct dma_desc *p);
int (*get_tx_owner) (struct dma_desc *p);
@@ -380,6 +393,10 @@ struct stmmac_desc_ops {
u64(*get_timestamp) (void *desc, u32 ats);
/* get rx timestamp status */
int (*get_rx_timestamp_status) (void *desc, u32 ats);
/* Display ring */
void (*display_ring)(void *head, unsigned int size, bool rx);
/* set MSS via context descriptor */
void (*set_mss)(struct dma_desc *p, unsigned int mss);
};
extern const struct stmmac_desc_ops enh_desc_ops;
@@ -412,9 +429,15 @@ struct stmmac_dma_ops {
int (*dma_interrupt) (void __iomem *ioaddr,
struct stmmac_extra_stats *x);
/* If supported then get the optional core features */
unsigned int (*get_hw_feature) (void __iomem *ioaddr);
void (*get_hw_feature)(void __iomem *ioaddr,
struct dma_features *dma_cap);
/* Program the HW RX Watchdog */
void (*rx_watchdog) (void __iomem *ioaddr, u32 riwt);
void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len);
void (*set_rx_ring_len)(void __iomem *ioaddr, u32 len);
void (*set_rx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
void (*set_tx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
void (*enable_tso)(void __iomem *ioaddr, bool en, u32 chan);
};
struct mac_device_info;
@@ -463,6 +486,7 @@ struct stmmac_hwtimestamp {
};
extern const struct stmmac_hwtimestamp stmmac_ptp;
extern const struct stmmac_mode_ops dwmac4_ring_mode_ops;
struct mac_link {
int port;
@@ -495,7 +519,6 @@ struct mac_device_info {
const struct stmmac_hwtimestamp *ptp;
struct mii_regs mii; /* MII register Addresses */
struct mac_link link;
unsigned int synopsys_uid;
void __iomem *pcsr; /* vpointer to device CSRs */
int multicast_filter_bins;
int unicast_filter_entries;
@@ -504,18 +527,47 @@ struct mac_device_info {
};
struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
int perfect_uc_entries);
struct mac_device_info *dwmac100_setup(void __iomem *ioaddr);
int perfect_uc_entries,
int *synopsys_id);
struct mac_device_info *dwmac100_setup(void __iomem *ioaddr, int *synopsys_id);
struct mac_device_info *dwmac4_setup(void __iomem *ioaddr, int mcbins,
int perfect_uc_entries, int *synopsys_id);
void stmmac_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
unsigned int high, unsigned int low);
void stmmac_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
unsigned int high, unsigned int low);
void stmmac_set_mac(void __iomem *ioaddr, bool enable);
void stmmac_dwmac4_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
unsigned int high, unsigned int low);
void stmmac_dwmac4_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
unsigned int high, unsigned int low);
void stmmac_dwmac4_set_mac(void __iomem *ioaddr, bool enable);
void dwmac_dma_flush_tx_fifo(void __iomem *ioaddr);
extern const struct stmmac_mode_ops ring_mode_ops;
extern const struct stmmac_mode_ops chain_mode_ops;
extern const struct stmmac_desc_ops dwmac4_desc_ops;
/**
 * stmmac_get_synopsys_id - return the Synopsys ID.
 * @hwid: value read from the HW version register (GMAC_VERSION)
 * Description: this simple function decodes and returns the Synopsys ID
 * from the HW core version register.
 */
static inline u32 stmmac_get_synopsys_id(u32 hwid)
{
/* Check Synopsys Id (not available on old chips) */
if (likely(hwid)) {
u32 uid = ((hwid & 0x0000ff00) >> 8);
u32 synid = (hwid & 0x000000ff);
pr_info("stmmac - user ID: 0x%x, Synopsys ID: 0x%x\n",
uid, synid);
return synid;
}
return 0;
}
#endif /* __COMMON_H__ */
@@ -491,7 +491,8 @@ static const struct stmmac_ops dwmac1000_ops = {
};
struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
int perfect_uc_entries)
int perfect_uc_entries,
int *synopsys_id)
{
struct mac_device_info *mac;
u32 hwid = readl(ioaddr + GMAC_VERSION);
@@ -516,7 +517,9 @@ struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
mac->link.speed = GMAC_CONTROL_FES;
mac->mii.addr = GMAC_MII_ADDR;
mac->mii.data = GMAC_MII_DATA;
mac->synopsys_uid = hwid;
/* Get and dump the chip ID */
*synopsys_id = stmmac_get_synopsys_id(hwid);
return mac;
}
@@ -215,9 +215,40 @@ static void dwmac1000_dump_dma_regs(void __iomem *ioaddr)
}
}
static unsigned int dwmac1000_get_hw_feature(void __iomem *ioaddr)
static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
struct dma_features *dma_cap)
{
return readl(ioaddr + DMA_HW_FEATURE);
u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE);
dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
dma_cap->half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
dma_cap->hash_filter = (hw_cap & DMA_HW_FEAT_HASHSEL) >> 4;
dma_cap->multi_addr = (hw_cap & DMA_HW_FEAT_ADDMAC) >> 5;
dma_cap->pcs = (hw_cap & DMA_HW_FEAT_PCSSEL) >> 6;
dma_cap->sma_mdio = (hw_cap & DMA_HW_FEAT_SMASEL) >> 8;
dma_cap->pmt_remote_wake_up = (hw_cap & DMA_HW_FEAT_RWKSEL) >> 9;
dma_cap->pmt_magic_frame = (hw_cap & DMA_HW_FEAT_MGKSEL) >> 10;
/* MMC */
dma_cap->rmon = (hw_cap & DMA_HW_FEAT_MMCSEL) >> 11;
/* IEEE 1588-2002 */
dma_cap->time_stamp =
(hw_cap & DMA_HW_FEAT_TSVER1SEL) >> 12;
/* IEEE 1588-2008 */
dma_cap->atime_stamp = (hw_cap & DMA_HW_FEAT_TSVER2SEL) >> 13;
/* 802.3az - Energy-Efficient Ethernet (EEE) */
dma_cap->eee = (hw_cap & DMA_HW_FEAT_EEESEL) >> 14;
dma_cap->av = (hw_cap & DMA_HW_FEAT_AVSEL) >> 15;
/* TX and RX csum */
dma_cap->tx_coe = (hw_cap & DMA_HW_FEAT_TXCOESEL) >> 16;
dma_cap->rx_coe_type1 = (hw_cap & DMA_HW_FEAT_RXTYP1COE) >> 17;
dma_cap->rx_coe_type2 = (hw_cap & DMA_HW_FEAT_RXTYP2COE) >> 18;
dma_cap->rxfifo_over_2048 = (hw_cap & DMA_HW_FEAT_RXFIFOSIZE) >> 19;
/* TX and RX number of channels */
dma_cap->number_rx_channel = (hw_cap & DMA_HW_FEAT_RXCHCNT) >> 20;
dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
/* Alternate (enhanced) DESC mode */
dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
}
static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt)
@@ -173,7 +173,7 @@ static const struct stmmac_ops dwmac100_ops = {
.get_umac_addr = dwmac100_get_umac_addr,
};
struct mac_device_info *dwmac100_setup(void __iomem *ioaddr)
struct mac_device_info *dwmac100_setup(void __iomem *ioaddr, int *synopsys_id)
{
struct mac_device_info *mac;
@@ -192,7 +192,8 @@ struct mac_device_info *dwmac100_setup(void __iomem *ioaddr)
mac->link.speed = 0;
mac->mii.addr = MAC_MII_ADDR;
mac->mii.data = MAC_MII_DATA;
mac->synopsys_uid = 0;
/* Synopsys Id is not available on old chips */
*synopsys_id = 0;
return mac;
}
/*
* DWMAC4 Header file.
*
* Copyright (C) 2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#ifndef __DWMAC4_H__
#define __DWMAC4_H__
#include "common.h"
/* MAC registers */
#define GMAC_CONFIG 0x00000000
#define GMAC_PACKET_FILTER 0x00000008
#define GMAC_HASH_TAB_0_31 0x00000010
#define GMAC_HASH_TAB_32_63 0x00000014
#define GMAC_RX_FLOW_CTRL 0x00000090
#define GMAC_QX_TX_FLOW_CTRL(x) (0x70 + x * 4)
#define GMAC_INT_STATUS 0x000000b0
#define GMAC_INT_EN 0x000000b4
#define GMAC_AN_CTRL 0x000000e0
#define GMAC_AN_STATUS 0x000000e4
#define GMAC_AN_ADV 0x000000e8
#define GMAC_AN_LPA 0x000000ec
#define GMAC_PMT 0x000000c0
#define GMAC_VERSION 0x00000110
#define GMAC_DEBUG 0x00000114
#define GMAC_HW_FEATURE0 0x0000011c
#define GMAC_HW_FEATURE1 0x00000120
#define GMAC_HW_FEATURE2 0x00000124
#define GMAC_MDIO_ADDR 0x00000200
#define GMAC_MDIO_DATA 0x00000204
#define GMAC_ADDR_HIGH(reg) (0x300 + reg * 8)
#define GMAC_ADDR_LOW(reg) (0x304 + reg * 8)
/* MAC Packet Filtering */
#define GMAC_PACKET_FILTER_PR BIT(0)
#define GMAC_PACKET_FILTER_HMC BIT(2)
#define GMAC_PACKET_FILTER_PM BIT(4)
#define GMAC_MAX_PERFECT_ADDRESSES 128
/* MAC Flow Control RX */
#define GMAC_RX_FLOW_CTRL_RFE BIT(0)
/* MAC Flow Control TX */
#define GMAC_TX_FLOW_CTRL_TFE BIT(1)
#define GMAC_TX_FLOW_CTRL_PT_SHIFT 16
/* MAC Interrupt bitmap*/
#define GMAC_INT_PMT_EN BIT(4)
#define GMAC_INT_LPI_EN BIT(5)
enum dwmac4_irq_status {
time_stamp_irq = 0x00001000,
mmc_rx_csum_offload_irq = 0x00000800,
mmc_tx_irq = 0x00000400,
mmc_rx_irq = 0x00000200,
mmc_irq = 0x00000100,
pmt_irq = 0x00000010,
pcs_ane_irq = 0x00000004,
pcs_link_irq = 0x00000002,
};
/* MAC Auto-Neg bitmap*/
#define GMAC_AN_CTRL_RAN BIT(9)
#define GMAC_AN_CTRL_ANE BIT(12)
#define GMAC_AN_CTRL_ELE BIT(14)
#define GMAC_AN_FD BIT(5)
#define GMAC_AN_HD BIT(6)
#define GMAC_AN_PSE_MASK GENMASK(8, 7)
#define GMAC_AN_PSE_SHIFT 7
/* MAC PMT bitmap */
enum power_event {
pointer_reset = 0x80000000,
global_unicast = 0x00000200,
wake_up_rx_frame = 0x00000040,
magic_frame = 0x00000020,
wake_up_frame_en = 0x00000004,
magic_pkt_en = 0x00000002,
power_down = 0x00000001,
};
/* MAC Debug bitmap */
#define GMAC_DEBUG_TFCSTS_MASK GENMASK(18, 17)
#define GMAC_DEBUG_TFCSTS_SHIFT 17
#define GMAC_DEBUG_TFCSTS_IDLE 0
#define GMAC_DEBUG_TFCSTS_WAIT 1
#define GMAC_DEBUG_TFCSTS_GEN_PAUSE 2
#define GMAC_DEBUG_TFCSTS_XFER 3
#define GMAC_DEBUG_TPESTS BIT(16)
#define GMAC_DEBUG_RFCFCSTS_MASK GENMASK(2, 1)
#define GMAC_DEBUG_RFCFCSTS_SHIFT 1
#define GMAC_DEBUG_RPESTS BIT(0)
/* MAC config */
#define GMAC_CONFIG_IPC BIT(27)
#define GMAC_CONFIG_2K BIT(22)
#define GMAC_CONFIG_ACS BIT(20)
#define GMAC_CONFIG_BE BIT(18)
#define GMAC_CONFIG_JD BIT(17)
#define GMAC_CONFIG_JE BIT(16)
#define GMAC_CONFIG_PS BIT(15)
#define GMAC_CONFIG_FES BIT(14)
#define GMAC_CONFIG_DM BIT(13)
#define GMAC_CONFIG_DCRS BIT(9)
#define GMAC_CONFIG_TE BIT(1)
#define GMAC_CONFIG_RE BIT(0)
/* MAC HW features0 bitmap */
#define GMAC_HW_FEAT_ADDMAC BIT(18)
#define GMAC_HW_FEAT_RXCOESEL BIT(16)
#define GMAC_HW_FEAT_TXCOSEL BIT(14)
#define GMAC_HW_FEAT_EEESEL BIT(13)
#define GMAC_HW_FEAT_TSSEL BIT(12)
#define GMAC_HW_FEAT_MMCSEL BIT(8)
#define GMAC_HW_FEAT_MGKSEL BIT(7)
#define GMAC_HW_FEAT_RWKSEL BIT(6)
#define GMAC_HW_FEAT_SMASEL BIT(5)
#define GMAC_HW_FEAT_VLHASH BIT(4)
#define GMAC_HW_FEAT_PCSSEL BIT(3)
#define GMAC_HW_FEAT_HDSEL BIT(2)
#define GMAC_HW_FEAT_GMIISEL BIT(1)
#define GMAC_HW_FEAT_MIISEL BIT(0)
/* MAC HW features1 bitmap */
#define GMAC_HW_FEAT_AVSEL BIT(20)
#define GMAC_HW_TSOEN BIT(18)
/* MAC HW features2 bitmap */
#define GMAC_HW_FEAT_TXCHCNT GENMASK(21, 18)
#define GMAC_HW_FEAT_RXCHCNT GENMASK(15, 12)
/* MAC HW ADDR regs */
#define GMAC_HI_DCS GENMASK(18, 16)
#define GMAC_HI_DCS_SHIFT 16
#define GMAC_HI_REG_AE BIT(31)
/* MTL registers */
#define MTL_INT_STATUS 0x00000c20
#define MTL_INT_Q0 BIT(0)
#define MTL_CHAN_BASE_ADDR 0x00000d00
#define MTL_CHAN_BASE_OFFSET 0x40
#define MTL_CHANX_BASE_ADDR(x) (MTL_CHAN_BASE_ADDR + \
(x * MTL_CHAN_BASE_OFFSET))
#define MTL_CHAN_TX_OP_MODE(x) MTL_CHANX_BASE_ADDR(x)
#define MTL_CHAN_TX_DEBUG(x) (MTL_CHANX_BASE_ADDR(x) + 0x8)
#define MTL_CHAN_INT_CTRL(x) (MTL_CHANX_BASE_ADDR(x) + 0x2c)
#define MTL_CHAN_RX_OP_MODE(x) (MTL_CHANX_BASE_ADDR(x) + 0x30)
#define MTL_CHAN_RX_DEBUG(x) (MTL_CHANX_BASE_ADDR(x) + 0x38)
#define MTL_OP_MODE_RSF BIT(5)
#define MTL_OP_MODE_TSF BIT(1)
#define MTL_OP_MODE_TTC_MASK 0x70
#define MTL_OP_MODE_TTC_SHIFT 4
#define MTL_OP_MODE_TTC_32 0
#define MTL_OP_MODE_TTC_64 (1 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_TTC_96 (2 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_TTC_128 (3 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_TTC_192 (4 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_TTC_256 (5 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_TTC_384 (6 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_TTC_512 (7 << MTL_OP_MODE_TTC_SHIFT)
#define MTL_OP_MODE_RTC_MASK 0x18
#define MTL_OP_MODE_RTC_SHIFT 3
#define MTL_OP_MODE_RTC_32 (1 << MTL_OP_MODE_RTC_SHIFT)
#define MTL_OP_MODE_RTC_64 0
#define MTL_OP_MODE_RTC_96 (2 << MTL_OP_MODE_RTC_SHIFT)
#define MTL_OP_MODE_RTC_128 (3 << MTL_OP_MODE_RTC_SHIFT)
/* MTL debug */
#define MTL_DEBUG_TXSTSFSTS BIT(5)
#define MTL_DEBUG_TXFSTS BIT(4)
#define MTL_DEBUG_TWCSTS BIT(3)
/* MTL debug: Tx FIFO Read Controller Status */
#define MTL_DEBUG_TRCSTS_MASK GENMASK(2, 1)
#define MTL_DEBUG_TRCSTS_SHIFT 1
#define MTL_DEBUG_TRCSTS_IDLE 0
#define MTL_DEBUG_TRCSTS_READ 1
#define MTL_DEBUG_TRCSTS_TXW 2
#define MTL_DEBUG_TRCSTS_WRITE 3
#define MTL_DEBUG_TXPAUSED BIT(0)
/* MAC debug: GMII or MII Transmit Protocol Engine Status */
#define MTL_DEBUG_RXFSTS_MASK GENMASK(5, 4)
#define MTL_DEBUG_RXFSTS_SHIFT 4
#define MTL_DEBUG_RXFSTS_EMPTY 0
#define MTL_DEBUG_RXFSTS_BT 1
#define MTL_DEBUG_RXFSTS_AT 2
#define MTL_DEBUG_RXFSTS_FULL 3
#define MTL_DEBUG_RRCSTS_MASK GENMASK(2, 1)
#define MTL_DEBUG_RRCSTS_SHIFT 1
#define MTL_DEBUG_RRCSTS_IDLE 0
#define MTL_DEBUG_RRCSTS_RDATA 1
#define MTL_DEBUG_RRCSTS_RSTAT 2
#define MTL_DEBUG_RRCSTS_FLUSH 3
#define MTL_DEBUG_RWCSTS BIT(0)
/* MTL interrupt */
#define MTL_RX_OVERFLOW_INT_EN BIT(24)
#define MTL_RX_OVERFLOW_INT BIT(16)
/* Default operating mode of the MAC */
#define GMAC_CORE_INIT (GMAC_CONFIG_JD | GMAC_CONFIG_PS | GMAC_CONFIG_ACS | \
GMAC_CONFIG_BE | GMAC_CONFIG_DCRS)
/* To dump the core regs excluding the Address Registers */
#define GMAC_REG_NUM 132
extern const struct stmmac_dma_ops dwmac4_dma_ops;
extern const struct stmmac_dma_ops dwmac410_dma_ops;
#endif /* __DWMAC4_H__ */
/*
* This is the driver for the GMAC on-chip Ethernet controller for ST SoCs.
* DWC Ether MAC version 4.00 has been used for developing this code.
*
* This only implements the mac core functions for this chip.
*
* Copyright (C) 2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#include <linux/crc32.h>
#include <linux/slab.h>
#include <linux/ethtool.h>
#include <linux/io.h>
#include "dwmac4.h"
static void dwmac4_core_init(struct mac_device_info *hw, int mtu)
{
void __iomem *ioaddr = hw->pcsr;
u32 value = readl(ioaddr + GMAC_CONFIG);
value |= GMAC_CORE_INIT;
if (mtu > 1500)
value |= GMAC_CONFIG_2K;
if (mtu > 2000)
value |= GMAC_CONFIG_JE;
writel(value, ioaddr + GMAC_CONFIG);
/* Mask GMAC interrupts */
writel(GMAC_INT_PMT_EN, ioaddr + GMAC_INT_EN);
}
static void dwmac4_dump_regs(struct mac_device_info *hw)
{
void __iomem *ioaddr = hw->pcsr;
int i;
pr_debug("\tDWMAC4 regs (base addr = 0x%p)\n", ioaddr);
for (i = 0; i < GMAC_REG_NUM; i++) {
int offset = i * 4;
pr_debug("\tReg No. %d (offset 0x%x): 0x%08x\n", i,
offset, readl(ioaddr + offset));
}
}
static int dwmac4_rx_ipc_enable(struct mac_device_info *hw)
{
void __iomem *ioaddr = hw->pcsr;
u32 value = readl(ioaddr + GMAC_CONFIG);
if (hw->rx_csum)
value |= GMAC_CONFIG_IPC;
else
value &= ~GMAC_CONFIG_IPC;
writel(value, ioaddr + GMAC_CONFIG);
value = readl(ioaddr + GMAC_CONFIG);
return !!(value & GMAC_CONFIG_IPC);
}
static void dwmac4_pmt(struct mac_device_info *hw, unsigned long mode)
{
void __iomem *ioaddr = hw->pcsr;
unsigned int pmt = 0;
if (mode & WAKE_MAGIC) {
pr_debug("GMAC: WOL Magic frame\n");
pmt |= power_down | magic_pkt_en;
}
if (mode & WAKE_UCAST) {
pr_debug("GMAC: WOL on global unicast\n");
pmt |= global_unicast;
}
writel(pmt, ioaddr + GMAC_PMT);
}
static void dwmac4_set_umac_addr(struct mac_device_info *hw,
unsigned char *addr, unsigned int reg_n)
{
void __iomem *ioaddr = hw->pcsr;
stmmac_dwmac4_set_mac_addr(ioaddr, addr, GMAC_ADDR_HIGH(reg_n),
GMAC_ADDR_LOW(reg_n));
}
static void dwmac4_get_umac_addr(struct mac_device_info *hw,
unsigned char *addr, unsigned int reg_n)
{
void __iomem *ioaddr = hw->pcsr;
stmmac_dwmac4_get_mac_addr(ioaddr, addr, GMAC_ADDR_HIGH(reg_n),
GMAC_ADDR_LOW(reg_n));
}
static void dwmac4_set_filter(struct mac_device_info *hw,
struct net_device *dev)
{
void __iomem *ioaddr = (void __iomem *)dev->base_addr;
unsigned int value = 0;
if (dev->flags & IFF_PROMISC) {
value = GMAC_PACKET_FILTER_PR;
} else if ((dev->flags & IFF_ALLMULTI) ||
(netdev_mc_count(dev) > HASH_TABLE_SIZE)) {
/* Pass all multi */
value = GMAC_PACKET_FILTER_PM;
/* Set the 64 bits of the HASH tab. To be updated if taller
* hash table is used
*/
writel(0xffffffff, ioaddr + GMAC_HASH_TAB_0_31);
writel(0xffffffff, ioaddr + GMAC_HASH_TAB_32_63);
} else if (!netdev_mc_empty(dev)) {
u32 mc_filter[2];
struct netdev_hw_addr *ha;
/* Hash filter for multicast */
value = GMAC_PACKET_FILTER_HMC;
memset(mc_filter, 0, sizeof(mc_filter));
netdev_for_each_mc_addr(ha, dev) {
/* The upper 6 bits of the calculated CRC are used to
* index the content of the Hash Table Reg 0 and 1.
*/
int bit_nr =
(bitrev32(~crc32_le(~0, ha->addr, 6)) >> 26);
/* The most significant bit determines the register
* to use while the other 5 bits determines the bit
* within the selected register
*/
mc_filter[bit_nr >> 5] |= (1 << (bit_nr & 0x1F));
}
writel(mc_filter[0], ioaddr + GMAC_HASH_TAB_0_31);
writel(mc_filter[1], ioaddr + GMAC_HASH_TAB_32_63);
}
/* Handle multiple unicast addresses */
if (netdev_uc_count(dev) > GMAC_MAX_PERFECT_ADDRESSES) {
/* Switch to promiscuous mode if more than 128 addrs
* are required
*/
value |= GMAC_PACKET_FILTER_PR;
} else if (!netdev_uc_empty(dev)) {
int reg = 1;
struct netdev_hw_addr *ha;
netdev_for_each_uc_addr(ha, dev) {
dwmac4_set_umac_addr(hw, ha->addr, reg);
reg++;
}
}
writel(value, ioaddr + GMAC_PACKET_FILTER);
}
static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
unsigned int fc, unsigned int pause_time)
{
void __iomem *ioaddr = hw->pcsr;
u32 channel = STMMAC_CHAN0; /* FIXME */
unsigned int flow = 0;
pr_debug("GMAC Flow-Control:\n");
if (fc & FLOW_RX) {
pr_debug("\tReceive Flow-Control ON\n");
flow |= GMAC_RX_FLOW_CTRL_RFE;
writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
}
if (fc & FLOW_TX) {
pr_debug("\tTransmit Flow-Control ON\n");
flow |= GMAC_TX_FLOW_CTRL_TFE;
writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(channel));
if (duplex) {
pr_debug("\tduplex mode: PAUSE %d\n", pause_time);
flow |= (pause_time << GMAC_TX_FLOW_CTRL_PT_SHIFT);
writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(channel));
}
}
}
static void dwmac4_ctrl_ane(struct mac_device_info *hw, bool restart)
{
void __iomem *ioaddr = hw->pcsr;
/* auto negotiation enable and External Loopback enable */
u32 value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE;
if (restart)
value |= GMAC_AN_CTRL_RAN;
writel(value, ioaddr + GMAC_AN_CTRL);
}
static void dwmac4_get_adv(struct mac_device_info *hw, struct rgmii_adv *adv)
{
void __iomem *ioaddr = hw->pcsr;
u32 value = readl(ioaddr + GMAC_AN_ADV);
if (value & GMAC_AN_FD)
adv->duplex = DUPLEX_FULL;
if (value & GMAC_AN_HD)
adv->duplex |= DUPLEX_HALF;
adv->pause = (value & GMAC_AN_PSE_MASK) >> GMAC_AN_PSE_SHIFT;
value = readl(ioaddr + GMAC_AN_LPA);
if (value & GMAC_AN_FD)
adv->lp_duplex = DUPLEX_FULL;
if (value & GMAC_AN_HD)
adv->lp_duplex = DUPLEX_HALF;
adv->lp_pause = (value & GMAC_AN_PSE_MASK) >> GMAC_AN_PSE_SHIFT;
}
static int dwmac4_irq_status(struct mac_device_info *hw,
struct stmmac_extra_stats *x)
{
void __iomem *ioaddr = hw->pcsr;
u32 mtl_int_qx_status;
u32 intr_status;
int ret = 0;
intr_status = readl(ioaddr + GMAC_INT_STATUS);
/* Not used events (e.g. MMC interrupts) are not handled. */
if ((intr_status & mmc_tx_irq))
x->mmc_tx_irq_n++;
if (unlikely(intr_status & mmc_rx_irq))
x->mmc_rx_irq_n++;
if (unlikely(intr_status & mmc_rx_csum_offload_irq))
x->mmc_rx_csum_offload_irq_n++;
/* Clear the PMT bits 5 and 6 by reading the PMT status reg */
if (unlikely(intr_status & pmt_irq)) {
readl(ioaddr + GMAC_PMT);
x->irq_receive_pmt_irq_n++;
}
if ((intr_status & pcs_ane_irq) || (intr_status & pcs_link_irq)) {
readl(ioaddr + GMAC_AN_STATUS);
x->irq_pcs_ane_n++;
}
mtl_int_qx_status = readl(ioaddr + MTL_INT_STATUS);
/* Check MTL Interrupt: Currently only one queue is used: Q0. */
if (mtl_int_qx_status & MTL_INT_Q0) {
/* read Queue 0 Interrupt status */
u32 status = readl(ioaddr + MTL_CHAN_INT_CTRL(STMMAC_CHAN0));
if (status & MTL_RX_OVERFLOW_INT) {
/* clear Interrupt */
writel(status | MTL_RX_OVERFLOW_INT,
ioaddr + MTL_CHAN_INT_CTRL(STMMAC_CHAN0));
ret = CORE_IRQ_MTL_RX_OVERFLOW;
}
}
return ret;
}
static void dwmac4_debug(void __iomem *ioaddr, struct stmmac_extra_stats *x)
{
u32 value;
/* Currently only channel 0 is supported */
value = readl(ioaddr + MTL_CHAN_TX_DEBUG(STMMAC_CHAN0));
if (value & MTL_DEBUG_TXSTSFSTS)
x->mtl_tx_status_fifo_full++;
if (value & MTL_DEBUG_TXFSTS)
x->mtl_tx_fifo_not_empty++;
if (value & MTL_DEBUG_TWCSTS)
x->mmtl_fifo_ctrl++;
if (value & MTL_DEBUG_TRCSTS_MASK) {
u32 trcsts = (value & MTL_DEBUG_TRCSTS_MASK)
>> MTL_DEBUG_TRCSTS_SHIFT;
if (trcsts == MTL_DEBUG_TRCSTS_WRITE)
x->mtl_tx_fifo_read_ctrl_write++;
else if (trcsts == MTL_DEBUG_TRCSTS_TXW)
x->mtl_tx_fifo_read_ctrl_wait++;
else if (trcsts == MTL_DEBUG_TRCSTS_READ)
x->mtl_tx_fifo_read_ctrl_read++;
else
x->mtl_tx_fifo_read_ctrl_idle++;
}
if (value & MTL_DEBUG_TXPAUSED)
x->mac_tx_in_pause++;
value = readl(ioaddr + MTL_CHAN_RX_DEBUG(STMMAC_CHAN0));
if (value & MTL_DEBUG_RXFSTS_MASK) {
u32 rxfsts = (value & MTL_DEBUG_RXFSTS_MASK)
>> MTL_DEBUG_RXFSTS_SHIFT;
if (rxfsts == MTL_DEBUG_RXFSTS_FULL)
x->mtl_rx_fifo_fill_level_full++;
else if (rxfsts == MTL_DEBUG_RXFSTS_AT)
x->mtl_rx_fifo_fill_above_thresh++;
else if (rxfsts == MTL_DEBUG_RXFSTS_BT)
x->mtl_rx_fifo_fill_below_thresh++;
else
x->mtl_rx_fifo_fill_level_empty++;
}
if (value & MTL_DEBUG_RRCSTS_MASK) {
u32 rrcsts = (value & MTL_DEBUG_RRCSTS_MASK) >>
MTL_DEBUG_RRCSTS_SHIFT;
if (rrcsts == MTL_DEBUG_RRCSTS_FLUSH)
x->mtl_rx_fifo_read_ctrl_flush++;
else if (rrcsts == MTL_DEBUG_RRCSTS_RSTAT)
x->mtl_rx_fifo_read_ctrl_status++;
else if (rrcsts == MTL_DEBUG_RRCSTS_RDATA)
x->mtl_rx_fifo_read_ctrl_read_data++;
else
x->mtl_rx_fifo_read_ctrl_idle++;
}
if (value & MTL_DEBUG_RWCSTS)
x->mtl_rx_fifo_ctrl_active++;
/* GMAC debug */
value = readl(ioaddr + GMAC_DEBUG);
if (value & GMAC_DEBUG_TFCSTS_MASK) {
u32 tfcsts = (value & GMAC_DEBUG_TFCSTS_MASK)
>> GMAC_DEBUG_TFCSTS_SHIFT;
if (tfcsts == GMAC_DEBUG_TFCSTS_XFER)
x->mac_tx_frame_ctrl_xfer++;
else if (tfcsts == GMAC_DEBUG_TFCSTS_GEN_PAUSE)
x->mac_tx_frame_ctrl_pause++;
else if (tfcsts == GMAC_DEBUG_TFCSTS_WAIT)
x->mac_tx_frame_ctrl_wait++;
else
x->mac_tx_frame_ctrl_idle++;
}
if (value & GMAC_DEBUG_TPESTS)
x->mac_gmii_tx_proto_engine++;
if (value & GMAC_DEBUG_RFCFCSTS_MASK)
x->mac_rx_frame_ctrl_fifo = (value & GMAC_DEBUG_RFCFCSTS_MASK)
>> GMAC_DEBUG_RFCFCSTS_SHIFT;
if (value & GMAC_DEBUG_RPESTS)
x->mac_gmii_rx_proto_engine++;
}
static const struct stmmac_ops dwmac4_ops = {
.core_init = dwmac4_core_init,
.rx_ipc = dwmac4_rx_ipc_enable,
.dump_regs = dwmac4_dump_regs,
.host_irq_status = dwmac4_irq_status,
.flow_ctrl = dwmac4_flow_ctrl,
.pmt = dwmac4_pmt,
.set_umac_addr = dwmac4_set_umac_addr,
.get_umac_addr = dwmac4_get_umac_addr,
.ctrl_ane = dwmac4_ctrl_ane,
.get_adv = dwmac4_get_adv,
.debug = dwmac4_debug,
.set_filter = dwmac4_set_filter,
};
struct mac_device_info *dwmac4_setup(void __iomem *ioaddr, int mcbins,
int perfect_uc_entries, int *synopsys_id)
{
struct mac_device_info *mac;
u32 hwid = readl(ioaddr + GMAC_VERSION);
mac = kzalloc(sizeof(const struct mac_device_info), GFP_KERNEL);
if (!mac)
return NULL;
mac->pcsr = ioaddr;
mac->multicast_filter_bins = mcbins;
mac->unicast_filter_entries = perfect_uc_entries;
mac->mcast_bits_log2 = 0;
if (mac->multicast_filter_bins)
mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins);
mac->mac = &dwmac4_ops;
mac->link.port = GMAC_CONFIG_PS;
mac->link.duplex = GMAC_CONFIG_DM;
mac->link.speed = GMAC_CONFIG_FES;
mac->mii.addr = GMAC_MDIO_ADDR;
mac->mii.data = GMAC_MDIO_DATA;
/* Get and dump the chip ID */
*synopsys_id = stmmac_get_synopsys_id(hwid);
if (*synopsys_id > DWMAC_CORE_4_00)
mac->dma = &dwmac410_dma_ops;
else
mac->dma = &dwmac4_dma_ops;
return mac;
}
/*
* This contains the functions to handle the descriptors for DesignWare databook
* 4.xx.
*
* Copyright (C) 2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#include <linux/stmmac.h>
#include "common.h"
#include "dwmac4_descs.h"
static int dwmac4_wrback_get_tx_status(void *data, struct stmmac_extra_stats *x,
struct dma_desc *p,
void __iomem *ioaddr)
{
struct net_device_stats *stats = (struct net_device_stats *)data;
unsigned int tdes3;
int ret = tx_done;
tdes3 = p->des3;
/* Get tx owner first */
if (unlikely(tdes3 & TDES3_OWN))
return tx_dma_own;
/* Verify tx error by looking at the last segment. */
if (likely(!(tdes3 & TDES3_LAST_DESCRIPTOR)))
return tx_not_ls;
if (unlikely(tdes3 & TDES3_ERROR_SUMMARY)) {
if (unlikely(tdes3 & TDES3_JABBER_TIMEOUT))
x->tx_jabber++;
if (unlikely(tdes3 & TDES3_PACKET_FLUSHED))
x->tx_frame_flushed++;
if (unlikely(tdes3 & TDES3_LOSS_CARRIER)) {
x->tx_losscarrier++;
stats->tx_carrier_errors++;
}
if (unlikely(tdes3 & TDES3_NO_CARRIER)) {
x->tx_carrier++;
stats->tx_carrier_errors++;
}
if (unlikely((tdes3 & TDES3_LATE_COLLISION) ||
(tdes3 & TDES3_EXCESSIVE_COLLISION)))
stats->collisions +=
(tdes3 & TDES3_COLLISION_COUNT_MASK)
>> TDES3_COLLISION_COUNT_SHIFT;
if (unlikely(tdes3 & TDES3_EXCESSIVE_DEFERRAL))
x->tx_deferred++;
if (unlikely(tdes3 & TDES3_UNDERFLOW_ERROR))
x->tx_underflow++;
if (unlikely(tdes3 & TDES3_IP_HDR_ERROR))
x->tx_ip_header_error++;
if (unlikely(tdes3 & TDES3_PAYLOAD_ERROR))
x->tx_payload_error++;
ret = tx_err;
}
if (unlikely(tdes3 & TDES3_DEFERRED))
x->tx_deferred++;
return ret;
}
static int dwmac4_wrback_get_rx_status(void *data, struct stmmac_extra_stats *x,
struct dma_desc *p)
{
struct net_device_stats *stats = (struct net_device_stats *)data;
unsigned int rdes1 = p->des1;
unsigned int rdes2 = p->des2;
unsigned int rdes3 = p->des3;
int message_type;
int ret = good_frame;
if (unlikely(rdes3 & RDES3_OWN))
return dma_own;
/* Verify rx error by looking at the last segment. */
if (likely(!(rdes3 & RDES3_LAST_DESCRIPTOR)))
return discard_frame;
if (unlikely(rdes3 & RDES3_ERROR_SUMMARY)) {
if (unlikely(rdes3 & RDES3_GIANT_PACKET))
stats->rx_length_errors++;
if (unlikely(rdes3 & RDES3_OVERFLOW_ERROR))
x->rx_gmac_overflow++;
if (unlikely(rdes3 & RDES3_RECEIVE_WATCHDOG))
x->rx_watchdog++;
if (unlikely(rdes3 & RDES3_RECEIVE_ERROR))
x->rx_mii++;
if (unlikely(rdes3 & RDES3_CRC_ERROR)) {
x->rx_crc++;
stats->rx_crc_errors++;
}
if (unlikely(rdes3 & RDES3_DRIBBLE_ERROR))
x->dribbling_bit++;
ret = discard_frame;
}
message_type = (rdes1 & ERDES4_MSG_TYPE_MASK) >> 8;
if (rdes1 & RDES1_IP_HDR_ERROR)
x->ip_hdr_err++;
if (rdes1 & RDES1_IP_CSUM_BYPASSED)
x->ip_csum_bypassed++;
if (rdes1 & RDES1_IPV4_HEADER)
x->ipv4_pkt_rcvd++;
if (rdes1 & RDES1_IPV6_HEADER)
x->ipv6_pkt_rcvd++;
if (message_type == RDES_EXT_SYNC)
x->rx_msg_type_sync++;
else if (message_type == RDES_EXT_FOLLOW_UP)
x->rx_msg_type_follow_up++;
else if (message_type == RDES_EXT_DELAY_REQ)
x->rx_msg_type_delay_req++;
else if (message_type == RDES_EXT_DELAY_RESP)
x->rx_msg_type_delay_resp++;
else if (message_type == RDES_EXT_PDELAY_REQ)
x->rx_msg_type_pdelay_req++;
else if (message_type == RDES_EXT_PDELAY_RESP)
x->rx_msg_type_pdelay_resp++;
else if (message_type == RDES_EXT_PDELAY_FOLLOW_UP)
x->rx_msg_type_pdelay_follow_up++;
else
x->rx_msg_type_ext_no_ptp++;
if (rdes1 & RDES1_PTP_PACKET_TYPE)
x->ptp_frame_type++;
if (rdes1 & RDES1_PTP_VER)
x->ptp_ver++;
if (rdes1 & RDES1_TIMESTAMP_DROPPED)
x->timestamp_dropped++;
if (unlikely(rdes2 & RDES2_SA_FILTER_FAIL)) {
x->sa_rx_filter_fail++;
ret = discard_frame;
}
if (unlikely(rdes2 & RDES2_DA_FILTER_FAIL)) {
x->da_rx_filter_fail++;
ret = discard_frame;
}
if (rdes2 & RDES2_L3_FILTER_MATCH)
x->l3_filter_match++;
if (rdes2 & RDES2_L4_FILTER_MATCH)
x->l4_filter_match++;
if ((rdes2 & RDES2_L3_L4_FILT_NB_MATCH_MASK)
>> RDES2_L3_L4_FILT_NB_MATCH_SHIFT)
x->l3_l4_filter_no_match++;
return ret;
}
static int dwmac4_rd_get_tx_len(struct dma_desc *p)
{
return (p->des2 & TDES2_BUFFER1_SIZE_MASK);
}
static int dwmac4_get_tx_owner(struct dma_desc *p)
{
return (p->des3 & TDES3_OWN) >> TDES3_OWN_SHIFT;
}
static void dwmac4_set_tx_owner(struct dma_desc *p)
{
p->des3 |= TDES3_OWN;
}
static void dwmac4_set_rx_owner(struct dma_desc *p)
{
p->des3 |= RDES3_OWN;
}
static int dwmac4_get_tx_ls(struct dma_desc *p)
{
return (p->des3 & TDES3_LAST_DESCRIPTOR) >> TDES3_LAST_DESCRIPTOR_SHIFT;
}
static int dwmac4_wrback_get_rx_frame_len(struct dma_desc *p, int rx_coe)
{
return (p->des3 & RDES3_PACKET_SIZE_MASK);
}
static void dwmac4_rd_enable_tx_timestamp(struct dma_desc *p)
{
p->des2 |= TDES2_TIMESTAMP_ENABLE;
}
static int dwmac4_wrback_get_tx_timestamp_status(struct dma_desc *p)
{
return (p->des3 & TDES3_TIMESTAMP_STATUS)
>> TDES3_TIMESTAMP_STATUS_SHIFT;
}
/* NOTE: for RX, the CTX bit has to be checked before calling this.
* TODO: have a specific function for TX and another one for RX.
*/
static u64 dwmac4_wrback_get_timestamp(void *desc, u32 ats)
{
struct dma_desc *p = (struct dma_desc *)desc;
u64 ns;
ns = p->des0;
/* convert high/sec time stamp value to nanosecond */
ns += p->des1 * 1000000000ULL;
return ns;
}
static int dwmac4_context_get_rx_timestamp_status(void *desc, u32 ats)
{
struct dma_desc *p = (struct dma_desc *)desc;
return (p->des1 & RDES1_TIMESTAMP_AVAILABLE)
>> RDES1_TIMESTAMP_AVAILABLE_SHIFT;
}
static void dwmac4_rd_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
int mode, int end)
{
p->des3 = RDES3_OWN | RDES3_BUFFER1_VALID_ADDR;
if (!disable_rx_ic)
p->des3 |= RDES3_INT_ON_COMPLETION_EN;
}
static void dwmac4_rd_init_tx_desc(struct dma_desc *p, int mode, int end)
{
p->des0 = 0;
p->des1 = 0;
p->des2 = 0;
p->des3 = 0;
}
static void dwmac4_rd_prepare_tx_desc(struct dma_desc *p, int is_fs, int len,
bool csum_flag, int mode, bool tx_own,
bool ls)
{
unsigned int tdes3 = p->des3;
if (unlikely(len > BUF_SIZE_16KiB)) {
p->des2 |= (((len - BUF_SIZE_16KiB) <<
TDES2_BUFFER2_SIZE_MASK_SHIFT)
& TDES2_BUFFER2_SIZE_MASK)
| (BUF_SIZE_16KiB & TDES2_BUFFER1_SIZE_MASK);
} else {
p->des2 |= (len & TDES2_BUFFER1_SIZE_MASK);
}
if (is_fs)
tdes3 |= TDES3_FIRST_DESCRIPTOR;
else
tdes3 &= ~TDES3_FIRST_DESCRIPTOR;
if (likely(csum_flag))
tdes3 |= (TX_CIC_FULL << TDES3_CHECKSUM_INSERTION_SHIFT);
else
tdes3 &= ~(TX_CIC_FULL << TDES3_CHECKSUM_INSERTION_SHIFT);
if (ls)
tdes3 |= TDES3_LAST_DESCRIPTOR;
else
tdes3 &= ~TDES3_LAST_DESCRIPTOR;
/* Finally set the OWN bit. Later the DMA will start! */
if (tx_own)
tdes3 |= TDES3_OWN;
if (is_fs && tx_own)
/* When the own bit, for the first frame, has to be set, all
 * descriptors for the same frame have to be set before, to
 * avoid race condition.
 */
wmb();
p->des3 = tdes3;
}
static void dwmac4_rd_prepare_tso_tx_desc(struct dma_desc *p, int is_fs,
int len1, int len2, bool tx_own,
bool ls, unsigned int tcphdrlen,
unsigned int tcppayloadlen)
{
unsigned int tdes3 = p->des3;
if (len1)
p->des2 |= (len1 & TDES2_BUFFER1_SIZE_MASK);
if (len2)
p->des2 |= (len2 << TDES2_BUFFER2_SIZE_MASK_SHIFT)
& TDES2_BUFFER2_SIZE_MASK;
if (is_fs) {
tdes3 |= TDES3_FIRST_DESCRIPTOR |
TDES3_TCP_SEGMENTATION_ENABLE |
((tcphdrlen << TDES3_HDR_LEN_SHIFT) &
TDES3_SLOT_NUMBER_MASK) |
((tcppayloadlen & TDES3_TCP_PKT_PAYLOAD_MASK));
} else {
tdes3 &= ~TDES3_FIRST_DESCRIPTOR;
}
if (ls)
tdes3 |= TDES3_LAST_DESCRIPTOR;
else
tdes3 &= ~TDES3_LAST_DESCRIPTOR;
/* Finally set the OWN bit. Later the DMA will start! */
if (tx_own)
tdes3 |= TDES3_OWN;
if (is_fs && tx_own)
/* When the own bit, for the first frame, has to be set, all
 * descriptors for the same frame have to be set before, to
 * avoid race condition.
 */
wmb();
p->des3 = tdes3;
}
static void dwmac4_release_tx_desc(struct dma_desc *p, int mode)
{
p->des2 = 0;
p->des3 = 0;
}
static void dwmac4_rd_set_tx_ic(struct dma_desc *p)
{
p->des2 |= TDES2_INTERRUPT_ON_COMPLETION;
}
static void dwmac4_display_ring(void *head, unsigned int size, bool rx)
{
struct dma_desc *p = (struct dma_desc *)head;
int i;
pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
for (i = 0; i < size; i++) {
if (p->des0)
pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(p),
p->des0, p->des1, p->des2, p->des3);
p++;
}
}
static void dwmac4_set_mss_ctxt(struct dma_desc *p, unsigned int mss)
{
p->des0 = 0;
p->des1 = 0;
p->des2 = mss;
p->des3 = TDES3_CONTEXT_TYPE | TDES3_CTXT_TCMSSV;
}
const struct stmmac_desc_ops dwmac4_desc_ops = {
.tx_status = dwmac4_wrback_get_tx_status,
.rx_status = dwmac4_wrback_get_rx_status,
.get_tx_len = dwmac4_rd_get_tx_len,
.get_tx_owner = dwmac4_get_tx_owner,
.set_tx_owner = dwmac4_set_tx_owner,
.set_rx_owner = dwmac4_set_rx_owner,
.get_tx_ls = dwmac4_get_tx_ls,
.get_rx_frame_len = dwmac4_wrback_get_rx_frame_len,
.enable_tx_timestamp = dwmac4_rd_enable_tx_timestamp,
.get_tx_timestamp_status = dwmac4_wrback_get_tx_timestamp_status,
.get_timestamp = dwmac4_wrback_get_timestamp,
.get_rx_timestamp_status = dwmac4_context_get_rx_timestamp_status,
.set_tx_ic = dwmac4_rd_set_tx_ic,
.prepare_tx_desc = dwmac4_rd_prepare_tx_desc,
.prepare_tso_tx_desc = dwmac4_rd_prepare_tso_tx_desc,
.release_tx_desc = dwmac4_release_tx_desc,
.init_rx_desc = dwmac4_rd_init_rx_desc,
.init_tx_desc = dwmac4_rd_init_tx_desc,
.display_ring = dwmac4_display_ring,
.set_mss = dwmac4_set_mss_ctxt,
};
const struct stmmac_mode_ops dwmac4_ring_mode_ops = { };
/*
* Header File to describe the DMA descriptors and related definitions specific
* for DesignWare databook 4.xx.
*
* Copyright (C) 2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#ifndef __DWMAC4_DESCS_H__
#define __DWMAC4_DESCS_H__
#include <linux/bitops.h>
/* Normal transmit descriptor defines (without split feature) */
/* TDES2 (read format) */
#define TDES2_BUFFER1_SIZE_MASK GENMASK(13, 0)
#define TDES2_VLAN_TAG_MASK GENMASK(15, 14)
#define TDES2_BUFFER2_SIZE_MASK GENMASK(29, 16)
#define TDES2_BUFFER2_SIZE_MASK_SHIFT 16
#define TDES2_TIMESTAMP_ENABLE BIT(30)
#define TDES2_INTERRUPT_ON_COMPLETION BIT(31)
/* TDES3 (read format) */
#define TDES3_PACKET_SIZE_MASK GENMASK(14, 0)
#define TDES3_CHECKSUM_INSERTION_MASK GENMASK(17, 16)
#define TDES3_CHECKSUM_INSERTION_SHIFT 16
#define TDES3_TCP_PKT_PAYLOAD_MASK GENMASK(17, 0)
#define TDES3_TCP_SEGMENTATION_ENABLE BIT(18)
#define TDES3_HDR_LEN_SHIFT 19
#define TDES3_SLOT_NUMBER_MASK GENMASK(22, 19)
#define TDES3_SA_INSERT_CTRL_MASK GENMASK(25, 23)
#define TDES3_CRC_PAD_CTRL_MASK GENMASK(27, 26)
/* TDES3 (write back format) */
#define TDES3_IP_HDR_ERROR BIT(0)
#define TDES3_DEFERRED BIT(1)
#define TDES3_UNDERFLOW_ERROR BIT(2)
#define TDES3_EXCESSIVE_DEFERRAL BIT(3)
#define TDES3_COLLISION_COUNT_MASK GENMASK(7, 4)
#define TDES3_COLLISION_COUNT_SHIFT 4
#define TDES3_EXCESSIVE_COLLISION BIT(8)
#define TDES3_LATE_COLLISION BIT(9)
#define TDES3_NO_CARRIER BIT(10)
#define TDES3_LOSS_CARRIER BIT(11)
#define TDES3_PAYLOAD_ERROR BIT(12)
#define TDES3_PACKET_FLUSHED BIT(13)
#define TDES3_JABBER_TIMEOUT BIT(14)
#define TDES3_ERROR_SUMMARY BIT(15)
#define TDES3_TIMESTAMP_STATUS BIT(17)
#define TDES3_TIMESTAMP_STATUS_SHIFT 17
/* TDES3 context */
#define TDES3_CTXT_TCMSSV BIT(26)
/* TDES3 Common */
#define TDES3_LAST_DESCRIPTOR BIT(28)
#define TDES3_LAST_DESCRIPTOR_SHIFT 28
#define TDES3_FIRST_DESCRIPTOR BIT(29)
#define TDES3_CONTEXT_TYPE BIT(30)
/* TDES3 used for both formats (read and write back) */
#define TDES3_OWN BIT(31)
#define TDES3_OWN_SHIFT 31
/* Normal receive descriptor defines (without split feature) */
/* RDES0 (write back format) */
#define RDES0_VLAN_TAG_MASK GENMASK(15, 0)
/* RDES1 (write back format) */
#define RDES1_IP_PAYLOAD_TYPE_MASK GENMASK(2, 0)
#define RDES1_IP_HDR_ERROR BIT(3)
#define RDES1_IPV4_HEADER BIT(4)
#define RDES1_IPV6_HEADER BIT(5)
#define RDES1_IP_CSUM_BYPASSED BIT(6)
#define RDES1_IP_CSUM_ERROR BIT(7)
#define RDES1_PTP_MSG_TYPE_MASK GENMASK(11, 8)
#define RDES1_PTP_PACKET_TYPE BIT(12)
#define RDES1_PTP_VER BIT(13)
#define RDES1_TIMESTAMP_AVAILABLE BIT(14)
#define RDES1_TIMESTAMP_AVAILABLE_SHIFT 14
#define RDES1_TIMESTAMP_DROPPED BIT(15)
#define RDES1_IP_TYPE1_CSUM_MASK GENMASK(31, 16)
/* RDES2 (write back format) */
#define RDES2_L3_L4_HEADER_SIZE_MASK GENMASK(9, 0)
#define RDES2_VLAN_FILTER_STATUS BIT(15)
#define RDES2_SA_FILTER_FAIL BIT(16)
#define RDES2_DA_FILTER_FAIL BIT(17)
#define RDES2_HASH_FILTER_STATUS BIT(18)
#define RDES2_MAC_ADDR_MATCH_MASK GENMASK(26, 19)
#define RDES2_HASH_VALUE_MATCH_MASK GENMASK(26, 19)
#define RDES2_L3_FILTER_MATCH BIT(27)
#define RDES2_L4_FILTER_MATCH BIT(28)
#define RDES2_L3_L4_FILT_NB_MATCH_MASK GENMASK(27, 26)
#define RDES2_L3_L4_FILT_NB_MATCH_SHIFT 26
/* RDES3 (write back format) */
#define RDES3_PACKET_SIZE_MASK GENMASK(14, 0)
#define RDES3_ERROR_SUMMARY BIT(15)
#define RDES3_PACKET_LEN_TYPE_MASK GENMASK(18, 16)
#define RDES3_DRIBBLE_ERROR BIT(19)
#define RDES3_RECEIVE_ERROR BIT(20)
#define RDES3_OVERFLOW_ERROR BIT(21)
#define RDES3_RECEIVE_WATCHDOG BIT(22)
#define RDES3_GIANT_PACKET BIT(23)
#define RDES3_CRC_ERROR BIT(24)
#define RDES3_RDES0_VALID BIT(25)
#define RDES3_RDES1_VALID BIT(26)
#define RDES3_RDES2_VALID BIT(27)
#define RDES3_LAST_DESCRIPTOR BIT(28)
#define RDES3_FIRST_DESCRIPTOR BIT(29)
#define RDES3_CONTEXT_DESCRIPTOR BIT(30)
/* RDES3 (read format) */
#define RDES3_BUFFER1_VALID_ADDR BIT(24)
#define RDES3_BUFFER2_VALID_ADDR BIT(25)
#define RDES3_INT_ON_COMPLETION_EN BIT(30)
/* RDES3 used for both formats (read and write back) */
#define RDES3_OWN BIT(31)
#endif /* __DWMAC4_DESCS_H__ */
/*
* This is the driver for the GMAC on-chip Ethernet controller for ST SoCs.
* DWC Ether MAC version 4.xx has been used for developing this code.
*
* This contains the functions to handle the dma.
*
* Copyright (C) 2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#include <linux/io.h>
#include "dwmac4.h"
#include "dwmac4_dma.h"
static void dwmac4_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
{
u32 value = readl(ioaddr + DMA_SYS_BUS_MODE);
int i;
pr_info("dwmac4: Master AXI performs %s burst length\n",
(value & DMA_SYS_BUS_FB) ? "fixed" : "any");
if (axi->axi_lpi_en)
value |= DMA_AXI_EN_LPI;
if (axi->axi_xit_frm)
value |= DMA_AXI_LPI_XIT_FRM;
value |= (axi->axi_wr_osr_lmt & DMA_AXI_OSR_MAX) <<
DMA_AXI_WR_OSR_LMT_SHIFT;
value |= (axi->axi_rd_osr_lmt & DMA_AXI_OSR_MAX) <<
DMA_AXI_RD_OSR_LMT_SHIFT;
/* Depending on the UNDEF bit the Master AXI will perform any burst
* length according to the BLEN programmed (by default all BLEN are
* set).
*/
for (i = 0; i < AXI_BLEN; i++) {
switch (axi->axi_blen[i]) {
case 256:
value |= DMA_AXI_BLEN256;
break;
case 128:
value |= DMA_AXI_BLEN128;
break;
case 64:
value |= DMA_AXI_BLEN64;
break;
case 32:
value |= DMA_AXI_BLEN32;
break;
case 16:
value |= DMA_AXI_BLEN16;
break;
case 8:
value |= DMA_AXI_BLEN8;
break;
case 4:
value |= DMA_AXI_BLEN4;
break;
}
}
writel(value, ioaddr + DMA_SYS_BUS_MODE);
}
static void dwmac4_dma_init_channel(void __iomem *ioaddr, int pbl,
u32 dma_tx_phy, u32 dma_rx_phy,
u32 channel)
{
u32 value;
/* Set the PBL for each channel. Currently the same configuration is
 * applied to every channel.
 */
value = readl(ioaddr + DMA_CHAN_CONTROL(channel));
value = value | DMA_BUS_MODE_PBL;
writel(value, ioaddr + DMA_CHAN_CONTROL(channel));
value = readl(ioaddr + DMA_CHAN_TX_CONTROL(channel));
value = value | (pbl << DMA_BUS_MODE_PBL_SHIFT);
writel(value, ioaddr + DMA_CHAN_TX_CONTROL(channel));
value = readl(ioaddr + DMA_CHAN_RX_CONTROL(channel));
value = value | (pbl << DMA_BUS_MODE_RPBL_SHIFT);
writel(value, ioaddr + DMA_CHAN_RX_CONTROL(channel));
/* Mask interrupts by writing to CSR7 */
writel(DMA_CHAN_INTR_DEFAULT_MASK, ioaddr + DMA_CHAN_INTR_ENA(channel));
writel(dma_tx_phy, ioaddr + DMA_CHAN_TX_BASE_ADDR(channel));
writel(dma_rx_phy, ioaddr + DMA_CHAN_RX_BASE_ADDR(channel));
}
static void dwmac4_dma_init(void __iomem *ioaddr, int pbl, int fb, int mb,
int aal, u32 dma_tx, u32 dma_rx, int atds)
{
u32 value = readl(ioaddr + DMA_SYS_BUS_MODE);
int i;
/* Set the Fixed burst mode */
if (fb)
value |= DMA_SYS_BUS_FB;
/* Mixed Burst has no effect when fb is set */
if (mb)
value |= DMA_SYS_BUS_MB;
if (aal)
value |= DMA_SYS_BUS_AAL;
writel(value, ioaddr + DMA_SYS_BUS_MODE);
for (i = 0; i < DMA_CHANNEL_NB_MAX; i++)
dwmac4_dma_init_channel(ioaddr, pbl, dma_tx, dma_rx, i);
}
static void _dwmac4_dump_dma_regs(void __iomem *ioaddr, u32 channel)
{
pr_debug(" Channel %d\n", channel);
pr_debug("\tDMA_CHAN_CONTROL, offset: 0x%x, val: 0x%x\n", 0,
readl(ioaddr + DMA_CHAN_CONTROL(channel)));
pr_debug("\tDMA_CHAN_TX_CONTROL, offset: 0x%x, val: 0x%x\n", 0x4,
readl(ioaddr + DMA_CHAN_TX_CONTROL(channel)));
pr_debug("\tDMA_CHAN_RX_CONTROL, offset: 0x%x, val: 0x%x\n", 0x8,
readl(ioaddr + DMA_CHAN_RX_CONTROL(channel)));
pr_debug("\tDMA_CHAN_TX_BASE_ADDR, offset: 0x%x, val: 0x%x\n", 0x14,
readl(ioaddr + DMA_CHAN_TX_BASE_ADDR(channel)));
pr_debug("\tDMA_CHAN_RX_BASE_ADDR, offset: 0x%x, val: 0x%x\n", 0x1c,
readl(ioaddr + DMA_CHAN_RX_BASE_ADDR(channel)));
pr_debug("\tDMA_CHAN_TX_END_ADDR, offset: 0x%x, val: 0x%x\n", 0x20,
readl(ioaddr + DMA_CHAN_TX_END_ADDR(channel)));
pr_debug("\tDMA_CHAN_RX_END_ADDR, offset: 0x%x, val: 0x%x\n", 0x28,
readl(ioaddr + DMA_CHAN_RX_END_ADDR(channel)));
pr_debug("\tDMA_CHAN_TX_RING_LEN, offset: 0x%x, val: 0x%x\n", 0x2c,
readl(ioaddr + DMA_CHAN_TX_RING_LEN(channel)));
pr_debug("\tDMA_CHAN_RX_RING_LEN, offset: 0x%x, val: 0x%x\n", 0x30,
readl(ioaddr + DMA_CHAN_RX_RING_LEN(channel)));
pr_debug("\tDMA_CHAN_INTR_ENA, offset: 0x%x, val: 0x%x\n", 0x34,
readl(ioaddr + DMA_CHAN_INTR_ENA(channel)));
pr_debug("\tDMA_CHAN_RX_WATCHDOG, offset: 0x%x, val: 0x%x\n", 0x38,
readl(ioaddr + DMA_CHAN_RX_WATCHDOG(channel)));
pr_debug("\tDMA_CHAN_SLOT_CTRL_STATUS, offset: 0x%x, val: 0x%x\n", 0x3c,
readl(ioaddr + DMA_CHAN_SLOT_CTRL_STATUS(channel)));
pr_debug("\tDMA_CHAN_CUR_TX_DESC, offset: 0x%x, val: 0x%x\n", 0x44,
readl(ioaddr + DMA_CHAN_CUR_TX_DESC(channel)));
pr_debug("\tDMA_CHAN_CUR_RX_DESC, offset: 0x%x, val: 0x%x\n", 0x4c,
readl(ioaddr + DMA_CHAN_CUR_RX_DESC(channel)));
pr_debug("\tDMA_CHAN_CUR_TX_BUF_ADDR, offset: 0x%x, val: 0x%x\n", 0x54,
readl(ioaddr + DMA_CHAN_CUR_TX_BUF_ADDR(channel)));
pr_debug("\tDMA_CHAN_CUR_RX_BUF_ADDR, offset: 0x%x, val: 0x%x\n", 0x5c,
readl(ioaddr + DMA_CHAN_CUR_RX_BUF_ADDR(channel)));
pr_debug("\tDMA_CHAN_STATUS, offset: 0x%x, val: 0x%x\n", 0x60,
readl(ioaddr + DMA_CHAN_STATUS(channel)));
}
static void dwmac4_dump_dma_regs(void __iomem *ioaddr)
{
int i;
pr_debug(" GMAC4 DMA registers\n");
for (i = 0; i < DMA_CHANNEL_NB_MAX; i++)
_dwmac4_dump_dma_regs(ioaddr, i);
}
static void dwmac4_rx_watchdog(void __iomem *ioaddr, u32 riwt)
{
int i;
for (i = 0; i < DMA_CHANNEL_NB_MAX; i++)
writel(riwt, ioaddr + DMA_CHAN_RX_WATCHDOG(i));
}
static void dwmac4_dma_chan_op_mode(void __iomem *ioaddr, int txmode,
int rxmode, u32 channel)
{
u32 mtl_tx_op, mtl_rx_op, mtl_rx_int;
/* The following code is only done for channel 0; other channels are
 * not yet supported.
 */
mtl_tx_op = readl(ioaddr + MTL_CHAN_TX_OP_MODE(channel));
if (txmode == SF_DMA_MODE) {
pr_debug("GMAC: enable TX store and forward mode\n");
/* Transmit COE type 2 cannot be done in cut-through mode. */
mtl_tx_op |= MTL_OP_MODE_TSF;
} else {
pr_debug("GMAC: disabling TX SF (threshold %d)\n", txmode);
mtl_tx_op &= ~MTL_OP_MODE_TSF;
mtl_tx_op &= MTL_OP_MODE_TTC_MASK;
/* Set the transmit threshold */
if (txmode <= 32)
mtl_tx_op |= MTL_OP_MODE_TTC_32;
else if (txmode <= 64)
mtl_tx_op |= MTL_OP_MODE_TTC_64;
else if (txmode <= 96)
mtl_tx_op |= MTL_OP_MODE_TTC_96;
else if (txmode <= 128)
mtl_tx_op |= MTL_OP_MODE_TTC_128;
else if (txmode <= 192)
mtl_tx_op |= MTL_OP_MODE_TTC_192;
else if (txmode <= 256)
mtl_tx_op |= MTL_OP_MODE_TTC_256;
else if (txmode <= 384)
mtl_tx_op |= MTL_OP_MODE_TTC_384;
else
mtl_tx_op |= MTL_OP_MODE_TTC_512;
}
writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel));
mtl_rx_op = readl(ioaddr + MTL_CHAN_RX_OP_MODE(channel));
if (rxmode == SF_DMA_MODE) {
pr_debug("GMAC: enable RX store and forward mode\n");
mtl_rx_op |= MTL_OP_MODE_RSF;
} else {
pr_debug("GMAC: disable RX SF mode (threshold %d)\n", rxmode);
mtl_rx_op &= ~MTL_OP_MODE_RSF;
mtl_rx_op &= MTL_OP_MODE_RTC_MASK;
if (rxmode <= 32)
mtl_rx_op |= MTL_OP_MODE_RTC_32;
else if (rxmode <= 64)
mtl_rx_op |= MTL_OP_MODE_RTC_64;
else if (rxmode <= 96)
mtl_rx_op |= MTL_OP_MODE_RTC_96;
else
mtl_rx_op |= MTL_OP_MODE_RTC_128;
}
writel(mtl_rx_op, ioaddr + MTL_CHAN_RX_OP_MODE(channel));
/* Enable MTL RX overflow */
mtl_rx_int = readl(ioaddr + MTL_CHAN_INT_CTRL(channel));
writel(mtl_rx_int | MTL_RX_OVERFLOW_INT_EN,
ioaddr + MTL_CHAN_INT_CTRL(channel));
}
static void dwmac4_dma_operation_mode(void __iomem *ioaddr, int txmode,
int rxmode, int rxfifosz)
{
/* Only Channel 0 is actually configured and used */
dwmac4_dma_chan_op_mode(ioaddr, txmode, rxmode, 0);
}
static void dwmac4_get_hw_feature(void __iomem *ioaddr,
struct dma_features *dma_cap)
{
u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0);
/* MAC HW feature0 */
dma_cap->mbps_10_100 = (hw_cap & GMAC_HW_FEAT_MIISEL);
dma_cap->mbps_1000 = (hw_cap & GMAC_HW_FEAT_GMIISEL) >> 1;
dma_cap->half_duplex = (hw_cap & GMAC_HW_FEAT_HDSEL) >> 2;
dma_cap->hash_filter = (hw_cap & GMAC_HW_FEAT_VLHASH) >> 4;
dma_cap->multi_addr = (hw_cap & GMAC_HW_FEAT_ADDMAC) >> 18;
dma_cap->pcs = (hw_cap & GMAC_HW_FEAT_PCSSEL) >> 3;
dma_cap->sma_mdio = (hw_cap & GMAC_HW_FEAT_SMASEL) >> 5;
dma_cap->pmt_remote_wake_up = (hw_cap & GMAC_HW_FEAT_RWKSEL) >> 6;
dma_cap->pmt_magic_frame = (hw_cap & GMAC_HW_FEAT_MGKSEL) >> 7;
/* MMC */
dma_cap->rmon = (hw_cap & GMAC_HW_FEAT_MMCSEL) >> 8;
/* IEEE 1588-2008 */
dma_cap->atime_stamp = (hw_cap & GMAC_HW_FEAT_TSSEL) >> 12;
/* 802.3az - Energy-Efficient Ethernet (EEE) */
dma_cap->eee = (hw_cap & GMAC_HW_FEAT_EEESEL) >> 13;
/* TX and RX csum */
dma_cap->tx_coe = (hw_cap & GMAC_HW_FEAT_TXCOSEL) >> 14;
dma_cap->rx_coe = (hw_cap & GMAC_HW_FEAT_RXCOESEL) >> 16;
/* MAC HW feature1 */
hw_cap = readl(ioaddr + GMAC_HW_FEATURE1);
dma_cap->av = (hw_cap & GMAC_HW_FEAT_AVSEL) >> 20;
dma_cap->tsoen = (hw_cap & GMAC_HW_TSOEN) >> 18;
/* MAC HW feature2 */
hw_cap = readl(ioaddr + GMAC_HW_FEATURE2);
/* TX and RX number of channels */
dma_cap->number_rx_channel =
((hw_cap & GMAC_HW_FEAT_RXCHCNT) >> 12) + 1;
dma_cap->number_tx_channel =
((hw_cap & GMAC_HW_FEAT_TXCHCNT) >> 18) + 1;
/* IEEE 1588-2002 */
dma_cap->time_stamp = 0;
}
/* Enable/disable TSO feature and set MSS */
static void dwmac4_enable_tso(void __iomem *ioaddr, bool en, u32 chan)
{
u32 value;
if (en) {
/* enable TSO */
value = readl(ioaddr + DMA_CHAN_TX_CONTROL(chan));
writel(value | DMA_CONTROL_TSE,
ioaddr + DMA_CHAN_TX_CONTROL(chan));
} else {
/* disable TSO */
value = readl(ioaddr + DMA_CHAN_TX_CONTROL(chan));
writel(value & ~DMA_CONTROL_TSE,
ioaddr + DMA_CHAN_TX_CONTROL(chan));
}
}
const struct stmmac_dma_ops dwmac4_dma_ops = {
.reset = dwmac4_dma_reset,
.init = dwmac4_dma_init,
.axi = dwmac4_dma_axi,
.dump_regs = dwmac4_dump_dma_regs,
.dma_mode = dwmac4_dma_operation_mode,
.enable_dma_irq = dwmac4_enable_dma_irq,
.disable_dma_irq = dwmac4_disable_dma_irq,
.start_tx = dwmac4_dma_start_tx,
.stop_tx = dwmac4_dma_stop_tx,
.start_rx = dwmac4_dma_start_rx,
.stop_rx = dwmac4_dma_stop_rx,
.dma_interrupt = dwmac4_dma_interrupt,
.get_hw_feature = dwmac4_get_hw_feature,
.rx_watchdog = dwmac4_rx_watchdog,
.set_rx_ring_len = dwmac4_set_rx_ring_len,
.set_tx_ring_len = dwmac4_set_tx_ring_len,
.set_rx_tail_ptr = dwmac4_set_rx_tail_ptr,
.set_tx_tail_ptr = dwmac4_set_tx_tail_ptr,
.enable_tso = dwmac4_enable_tso,
};
const struct stmmac_dma_ops dwmac410_dma_ops = {
.reset = dwmac4_dma_reset,
.init = dwmac4_dma_init,
.axi = dwmac4_dma_axi,
.dump_regs = dwmac4_dump_dma_regs,
.dma_mode = dwmac4_dma_operation_mode,
.enable_dma_irq = dwmac410_enable_dma_irq,
.disable_dma_irq = dwmac4_disable_dma_irq,
.start_tx = dwmac4_dma_start_tx,
.stop_tx = dwmac4_dma_stop_tx,
.start_rx = dwmac4_dma_start_rx,
.stop_rx = dwmac4_dma_stop_rx,
.dma_interrupt = dwmac4_dma_interrupt,
.get_hw_feature = dwmac4_get_hw_feature,
.rx_watchdog = dwmac4_rx_watchdog,
.set_rx_ring_len = dwmac4_set_rx_ring_len,
.set_tx_ring_len = dwmac4_set_tx_ring_len,
.set_rx_tail_ptr = dwmac4_set_rx_tail_ptr,
.set_tx_tail_ptr = dwmac4_set_tx_tail_ptr,
.enable_tso = dwmac4_enable_tso,
};
/*
* DWMAC4 DMA Header file.
*
* Copyright (C) 2007-2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#ifndef __DWMAC4_DMA_H__
#define __DWMAC4_DMA_H__
/* Define the max channel number used for tx (also rx).
 * dwmac4 accepts up to 8 channels for TX (and also 8 channels for RX),
 * but only channel 0 is used by this driver.
 */
#define DMA_CHANNEL_NB_MAX 1
#define DMA_BUS_MODE 0x00001000
#define DMA_SYS_BUS_MODE 0x00001004
#define DMA_STATUS 0x00001008
#define DMA_DEBUG_STATUS_0 0x0000100c
#define DMA_DEBUG_STATUS_1 0x00001010
#define DMA_DEBUG_STATUS_2 0x00001014
#define DMA_AXI_BUS_MODE 0x00001028
/* DMA Bus Mode bitmap */
#define DMA_BUS_MODE_SFT_RESET BIT(0)
/* DMA SYS Bus Mode bitmap */
#define DMA_BUS_MODE_SPH BIT(24)
#define DMA_BUS_MODE_PBL BIT(16)
#define DMA_BUS_MODE_PBL_SHIFT 16
#define DMA_BUS_MODE_RPBL_SHIFT 16
#define DMA_BUS_MODE_MB BIT(14)
#define DMA_BUS_MODE_FB BIT(0)
/* DMA Interrupt top status */
#define DMA_STATUS_MAC BIT(17)
#define DMA_STATUS_MTL BIT(16)
#define DMA_STATUS_CHAN7 BIT(7)
#define DMA_STATUS_CHAN6 BIT(6)
#define DMA_STATUS_CHAN5 BIT(5)
#define DMA_STATUS_CHAN4 BIT(4)
#define DMA_STATUS_CHAN3 BIT(3)
#define DMA_STATUS_CHAN2 BIT(2)
#define DMA_STATUS_CHAN1 BIT(1)
#define DMA_STATUS_CHAN0 BIT(0)
/* DMA debug status bitmap */
#define DMA_DEBUG_STATUS_TS_MASK 0xf
#define DMA_DEBUG_STATUS_RS_MASK 0xf
/* DMA AXI bitmap */
#define DMA_AXI_EN_LPI BIT(31)
#define DMA_AXI_LPI_XIT_FRM BIT(30)
#define DMA_AXI_WR_OSR_LMT GENMASK(27, 24)
#define DMA_AXI_WR_OSR_LMT_SHIFT 24
#define DMA_AXI_RD_OSR_LMT GENMASK(19, 16)
#define DMA_AXI_RD_OSR_LMT_SHIFT 16
#define DMA_AXI_OSR_MAX 0xf
#define DMA_AXI_MAX_OSR_LIMIT ((DMA_AXI_OSR_MAX << DMA_AXI_WR_OSR_LMT_SHIFT) | \
(DMA_AXI_OSR_MAX << DMA_AXI_RD_OSR_LMT_SHIFT))
#define DMA_SYS_BUS_MB BIT(14)
#define DMA_AXI_1KBBE BIT(13)
#define DMA_SYS_BUS_AAL BIT(12)
#define DMA_AXI_BLEN256 BIT(7)
#define DMA_AXI_BLEN128 BIT(6)
#define DMA_AXI_BLEN64 BIT(5)
#define DMA_AXI_BLEN32 BIT(4)
#define DMA_AXI_BLEN16 BIT(3)
#define DMA_AXI_BLEN8 BIT(2)
#define DMA_AXI_BLEN4 BIT(1)
#define DMA_SYS_BUS_FB BIT(0)
#define DMA_BURST_LEN_DEFAULT (DMA_AXI_BLEN256 | DMA_AXI_BLEN128 | \
DMA_AXI_BLEN64 | DMA_AXI_BLEN32 | \
DMA_AXI_BLEN16 | DMA_AXI_BLEN8 | \
DMA_AXI_BLEN4)
#define DMA_AXI_BURST_LEN_MASK 0x000000FE
/* The following DMA defines are channel-oriented */
#define DMA_CHAN_BASE_ADDR 0x00001100
#define DMA_CHAN_BASE_OFFSET 0x80
#define DMA_CHANX_BASE_ADDR(x) (DMA_CHAN_BASE_ADDR + \
(x * DMA_CHAN_BASE_OFFSET))
#define DMA_CHAN_REG_NUMBER 17
#define DMA_CHAN_CONTROL(x) DMA_CHANX_BASE_ADDR(x)
#define DMA_CHAN_TX_CONTROL(x) (DMA_CHANX_BASE_ADDR(x) + 0x4)
#define DMA_CHAN_RX_CONTROL(x) (DMA_CHANX_BASE_ADDR(x) + 0x8)
#define DMA_CHAN_TX_BASE_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x14)
#define DMA_CHAN_RX_BASE_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x1c)
#define DMA_CHAN_TX_END_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x20)
#define DMA_CHAN_RX_END_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x28)
#define DMA_CHAN_TX_RING_LEN(x) (DMA_CHANX_BASE_ADDR(x) + 0x2c)
#define DMA_CHAN_RX_RING_LEN(x) (DMA_CHANX_BASE_ADDR(x) + 0x30)
#define DMA_CHAN_INTR_ENA(x) (DMA_CHANX_BASE_ADDR(x) + 0x34)
#define DMA_CHAN_RX_WATCHDOG(x) (DMA_CHANX_BASE_ADDR(x) + 0x38)
#define DMA_CHAN_SLOT_CTRL_STATUS(x) (DMA_CHANX_BASE_ADDR(x) + 0x3c)
#define DMA_CHAN_CUR_TX_DESC(x) (DMA_CHANX_BASE_ADDR(x) + 0x44)
#define DMA_CHAN_CUR_RX_DESC(x) (DMA_CHANX_BASE_ADDR(x) + 0x4c)
#define DMA_CHAN_CUR_TX_BUF_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x54)
#define DMA_CHAN_CUR_RX_BUF_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x5c)
#define DMA_CHAN_STATUS(x) (DMA_CHANX_BASE_ADDR(x) + 0x60)
/* DMA Control X */
#define DMA_CONTROL_MSS_MASK GENMASK(13, 0)
/* DMA Tx Channel X Control register defines */
#define DMA_CONTROL_TSE BIT(12)
#define DMA_CONTROL_OSP BIT(4)
#define DMA_CONTROL_ST BIT(0)
/* DMA Rx Channel X Control register defines */
#define DMA_CONTROL_SR BIT(0)
/* Interrupt status per channel */
#define DMA_CHAN_STATUS_REB GENMASK(21, 19)
#define DMA_CHAN_STATUS_REB_SHIFT 19
#define DMA_CHAN_STATUS_TEB GENMASK(18, 16)
#define DMA_CHAN_STATUS_TEB_SHIFT 16
#define DMA_CHAN_STATUS_NIS BIT(15)
#define DMA_CHAN_STATUS_AIS BIT(14)
#define DMA_CHAN_STATUS_CDE BIT(13)
#define DMA_CHAN_STATUS_FBE BIT(12)
#define DMA_CHAN_STATUS_ERI BIT(11)
#define DMA_CHAN_STATUS_ETI BIT(10)
#define DMA_CHAN_STATUS_RWT BIT(9)
#define DMA_CHAN_STATUS_RPS BIT(8)
#define DMA_CHAN_STATUS_RBU BIT(7)
#define DMA_CHAN_STATUS_RI BIT(6)
#define DMA_CHAN_STATUS_TBU BIT(2)
#define DMA_CHAN_STATUS_TPS BIT(1)
#define DMA_CHAN_STATUS_TI BIT(0)
/* Interrupt enable bits per channel */
#define DMA_CHAN_INTR_ENA_NIE BIT(16)
#define DMA_CHAN_INTR_ENA_AIE BIT(15)
#define DMA_CHAN_INTR_ENA_NIE_4_10 BIT(15)
#define DMA_CHAN_INTR_ENA_AIE_4_10 BIT(14)
#define DMA_CHAN_INTR_ENA_CDE BIT(13)
#define DMA_CHAN_INTR_ENA_FBE BIT(12)
#define DMA_CHAN_INTR_ENA_ERE BIT(11)
#define DMA_CHAN_INTR_ENA_ETE BIT(10)
#define DMA_CHAN_INTR_ENA_RWE BIT(9)
#define DMA_CHAN_INTR_ENA_RSE BIT(8)
#define DMA_CHAN_INTR_ENA_RBUE BIT(7)
#define DMA_CHAN_INTR_ENA_RIE BIT(6)
#define DMA_CHAN_INTR_ENA_TBUE BIT(2)
#define DMA_CHAN_INTR_ENA_TSE BIT(1)
#define DMA_CHAN_INTR_ENA_TIE BIT(0)
#define DMA_CHAN_INTR_NORMAL (DMA_CHAN_INTR_ENA_NIE | \
DMA_CHAN_INTR_ENA_RIE | \
DMA_CHAN_INTR_ENA_TIE)
#define DMA_CHAN_INTR_ABNORMAL (DMA_CHAN_INTR_ENA_AIE | \
DMA_CHAN_INTR_ENA_FBE)
/* DMA default interrupt mask for 4.00 */
#define DMA_CHAN_INTR_DEFAULT_MASK (DMA_CHAN_INTR_NORMAL | \
DMA_CHAN_INTR_ABNORMAL)
#define DMA_CHAN_INTR_NORMAL_4_10 (DMA_CHAN_INTR_ENA_NIE_4_10 | \
DMA_CHAN_INTR_ENA_RIE | \
DMA_CHAN_INTR_ENA_TIE)
#define DMA_CHAN_INTR_ABNORMAL_4_10 (DMA_CHAN_INTR_ENA_AIE_4_10 | \
DMA_CHAN_INTR_ENA_FBE)
/* DMA default interrupt mask for 4.10a */
#define DMA_CHAN_INTR_DEFAULT_MASK_4_10 (DMA_CHAN_INTR_NORMAL_4_10 | \
DMA_CHAN_INTR_ABNORMAL_4_10)
/* channel 0 specific fields */
#define DMA_CHAN0_DBG_STAT_TPS GENMASK(15, 12)
#define DMA_CHAN0_DBG_STAT_TPS_SHIFT 12
#define DMA_CHAN0_DBG_STAT_RPS GENMASK(11, 8)
#define DMA_CHAN0_DBG_STAT_RPS_SHIFT 8
int dwmac4_dma_reset(void __iomem *ioaddr);
void dwmac4_enable_dma_transmission(void __iomem *ioaddr, u32 tail_ptr);
void dwmac4_enable_dma_irq(void __iomem *ioaddr);
void dwmac410_enable_dma_irq(void __iomem *ioaddr);
void dwmac4_disable_dma_irq(void __iomem *ioaddr);
void dwmac4_dma_start_tx(void __iomem *ioaddr);
void dwmac4_dma_stop_tx(void __iomem *ioaddr);
void dwmac4_dma_start_rx(void __iomem *ioaddr);
void dwmac4_dma_stop_rx(void __iomem *ioaddr);
int dwmac4_dma_interrupt(void __iomem *ioaddr,
struct stmmac_extra_stats *x);
void dwmac4_set_rx_ring_len(void __iomem *ioaddr, u32 len);
void dwmac4_set_tx_ring_len(void __iomem *ioaddr, u32 len);
void dwmac4_set_rx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
void dwmac4_set_tx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
#endif /* __DWMAC4_DMA_H__ */
/*
* Copyright (C) 2007-2015 STMicroelectronics Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Author: Alexandre Torgue <alexandre.torgue@st.com>
*/
#include <linux/io.h>
#include <linux/delay.h>
#include "common.h"
#include "dwmac4_dma.h"
#include "dwmac4.h"
int dwmac4_dma_reset(void __iomem *ioaddr)
{
u32 value = readl(ioaddr + DMA_BUS_MODE);
int limit;
/* DMA SW reset */
value |= DMA_BUS_MODE_SFT_RESET;
writel(value, ioaddr + DMA_BUS_MODE);
limit = 10;
while (limit--) {
if (!(readl(ioaddr + DMA_BUS_MODE) & DMA_BUS_MODE_SFT_RESET))
break;
mdelay(10);
}
if (limit < 0)
return -EBUSY;
return 0;
}
void dwmac4_set_rx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan)
{
writel(tail_ptr, ioaddr + DMA_CHAN_RX_END_ADDR(0));
}
void dwmac4_set_tx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan)
{
writel(tail_ptr, ioaddr + DMA_CHAN_TX_END_ADDR(0));
}
void dwmac4_dma_start_tx(void __iomem *ioaddr)
{
u32 value = readl(ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
value |= DMA_CONTROL_ST;
writel(value, ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
value = readl(ioaddr + GMAC_CONFIG);
value |= GMAC_CONFIG_TE;
writel(value, ioaddr + GMAC_CONFIG);
}
void dwmac4_dma_stop_tx(void __iomem *ioaddr)
{
u32 value = readl(ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
value &= ~DMA_CONTROL_ST;
writel(value, ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
value = readl(ioaddr + GMAC_CONFIG);
value &= ~GMAC_CONFIG_TE;
writel(value, ioaddr + GMAC_CONFIG);
}
void dwmac4_dma_start_rx(void __iomem *ioaddr)
{
u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
value |= DMA_CONTROL_SR;
writel(value, ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
value = readl(ioaddr + GMAC_CONFIG);
value |= GMAC_CONFIG_RE;
writel(value, ioaddr + GMAC_CONFIG);
}
void dwmac4_dma_stop_rx(void __iomem *ioaddr)
{
u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
value &= ~DMA_CONTROL_SR;
writel(value, ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
value = readl(ioaddr + GMAC_CONFIG);
value &= ~GMAC_CONFIG_RE;
writel(value, ioaddr + GMAC_CONFIG);
}
void dwmac4_set_tx_ring_len(void __iomem *ioaddr, u32 len)
{
writel(len, ioaddr + DMA_CHAN_TX_RING_LEN(STMMAC_CHAN0));
}
void dwmac4_set_rx_ring_len(void __iomem *ioaddr, u32 len)
{
writel(len, ioaddr + DMA_CHAN_RX_RING_LEN(STMMAC_CHAN0));
}
void dwmac4_enable_dma_irq(void __iomem *ioaddr)
{
writel(DMA_CHAN_INTR_DEFAULT_MASK, ioaddr +
DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
}
void dwmac410_enable_dma_irq(void __iomem *ioaddr)
{
writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
ioaddr + DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
}
void dwmac4_disable_dma_irq(void __iomem *ioaddr)
{
writel(0, ioaddr + DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
}
int dwmac4_dma_interrupt(void __iomem *ioaddr,
struct stmmac_extra_stats *x)
{
int ret = 0;
u32 intr_status = readl(ioaddr + DMA_CHAN_STATUS(0));
/* ABNORMAL interrupts */
if (unlikely(intr_status & DMA_CHAN_STATUS_AIS)) {
if (unlikely(intr_status & DMA_CHAN_STATUS_RBU))
x->rx_buf_unav_irq++;
if (unlikely(intr_status & DMA_CHAN_STATUS_RPS))
x->rx_process_stopped_irq++;
if (unlikely(intr_status & DMA_CHAN_STATUS_RWT))
x->rx_watchdog_irq++;
if (unlikely(intr_status & DMA_CHAN_STATUS_ETI))
x->tx_early_irq++;
if (unlikely(intr_status & DMA_CHAN_STATUS_TPS)) {
x->tx_process_stopped_irq++;
ret = tx_hard_error;
}
if (unlikely(intr_status & DMA_CHAN_STATUS_FBE)) {
x->fatal_bus_error_irq++;
ret = tx_hard_error;
}
}
/* TX/RX NORMAL interrupts */
if (likely(intr_status & DMA_CHAN_STATUS_NIS)) {
x->normal_irq_n++;
if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
u32 value;
value = readl(ioaddr + DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
/* to schedule NAPI on real RIE event. */
if (likely(value & DMA_CHAN_INTR_ENA_RIE)) {
x->rx_normal_irq_n++;
ret |= handle_rx;
}
}
if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
x->tx_normal_irq_n++;
ret |= handle_tx;
}
if (unlikely(intr_status & DMA_CHAN_STATUS_ERI))
x->rx_early_irq++;
}
/* Clear the interrupt by writing a logic 1 to the chanX interrupt
* status [21-0], except reserved bits [5-3] (mask 0x3fffc7)
*/
writel((intr_status & 0x3fffc7),
ioaddr + DMA_CHAN_STATUS(STMMAC_CHAN0));
return ret;
}
void stmmac_dwmac4_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
unsigned int high, unsigned int low)
{
unsigned long data;
data = (addr[5] << 8) | addr[4];
/* For MAC Addr registers we have to set the Address Enable (AE)
* bit that has no effect on the High Reg 0 where the bit 31 (MO)
* is RO.
*/
data |= (STMMAC_CHAN0 << GMAC_HI_DCS_SHIFT);
writel(data | GMAC_HI_REG_AE, ioaddr + high);
data = (addr[3] << 24) | (addr[2] << 16) | (addr[1] << 8) | addr[0];
writel(data, ioaddr + low);
}
/* Enable/disable MAC RX/TX */
void stmmac_dwmac4_set_mac(void __iomem *ioaddr, bool enable)
{
u32 value = readl(ioaddr + GMAC_CONFIG);
if (enable)
value |= GMAC_CONFIG_RE | GMAC_CONFIG_TE;
else
value &= ~(GMAC_CONFIG_TE | GMAC_CONFIG_RE);
writel(value, ioaddr + GMAC_CONFIG);
}
void stmmac_dwmac4_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
unsigned int high, unsigned int low)
{
unsigned int hi_addr, lo_addr;
/* Read the MAC address from the hardware */
hi_addr = readl(ioaddr + high);
lo_addr = readl(ioaddr + low);
/* Extract the MAC address from the high and low words */
addr[0] = lo_addr & 0xff;
addr[1] = (lo_addr >> 8) & 0xff;
addr[2] = (lo_addr >> 16) & 0xff;
addr[3] = (lo_addr >> 24) & 0xff;
addr[4] = hi_addr & 0xff;
addr[5] = (hi_addr >> 8) & 0xff;
}
......@@ -411,6 +411,26 @@ static int enh_desc_get_rx_timestamp_status(void *desc, u32 ats)
}
}
static void enh_desc_display_ring(void *head, unsigned int size, bool rx)
{
struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
int i;
pr_info("Extended %s descriptor ring:\n", rx ? "RX" : "TX");
for (i = 0; i < size; i++) {
u64 x;
x = *(u64 *)ep;
pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
(unsigned int)x, (unsigned int)(x >> 32),
ep->basic.des2, ep->basic.des3);
ep++;
}
pr_info("\n");
}
const struct stmmac_desc_ops enh_desc_ops = {
.tx_status = enh_desc_get_tx_status,
.rx_status = enh_desc_get_rx_status,
......@@ -430,4 +450,5 @@ const struct stmmac_desc_ops enh_desc_ops = {
.get_tx_timestamp_status = enh_desc_get_tx_timestamp_status,
.get_timestamp = enh_desc_get_timestamp,
.get_rx_timestamp_status = enh_desc_get_rx_timestamp_status,
.display_ring = enh_desc_display_ring,
};
......@@ -35,6 +35,10 @@
* current value.*/
#define MMC_CNTRL_PRESET 0x10
#define MMC_CNTRL_FULL_HALF_PRESET 0x20
#define MMC_GMAC4_OFFSET 0x700
#define MMC_GMAC3_X_OFFSET 0x100
struct stmmac_counters {
unsigned int mmc_tx_octetcount_gb;
unsigned int mmc_tx_framecount_gb;
......
......@@ -28,12 +28,12 @@
/* MAC Management Counters register offset */
#define MMC_CNTRL 0x00000100 /* MMC Control */
#define MMC_RX_INTR 0x00000104 /* MMC RX Interrupt */
#define MMC_TX_INTR 0x00000108 /* MMC TX Interrupt */
#define MMC_RX_INTR_MASK 0x0000010c /* MMC Interrupt Mask */
#define MMC_TX_INTR_MASK 0x00000110 /* MMC Interrupt Mask */
#define MMC_DEFAULT_MASK 0xffffffff
#define MMC_CNTRL 0x00 /* MMC Control */
#define MMC_RX_INTR 0x04 /* MMC RX Interrupt */
#define MMC_TX_INTR 0x08 /* MMC TX Interrupt */
#define MMC_RX_INTR_MASK 0x0c /* MMC Interrupt Mask */
#define MMC_TX_INTR_MASK 0x10 /* MMC Interrupt Mask */
#define MMC_DEFAULT_MASK 0xffffffff
/* MMC TX counter registers */
......@@ -41,115 +41,115 @@
* _GB register stands for good and bad frames
* _G is for good only.
*/
#define MMC_TX_OCTETCOUNT_GB 0x00000114
#define MMC_TX_FRAMECOUNT_GB 0x00000118
#define MMC_TX_BROADCASTFRAME_G 0x0000011c
#define MMC_TX_MULTICASTFRAME_G 0x00000120
#define MMC_TX_64_OCTETS_GB 0x00000124
#define MMC_TX_65_TO_127_OCTETS_GB 0x00000128
#define MMC_TX_128_TO_255_OCTETS_GB 0x0000012c
#define MMC_TX_256_TO_511_OCTETS_GB 0x00000130
#define MMC_TX_512_TO_1023_OCTETS_GB 0x00000134
#define MMC_TX_1024_TO_MAX_OCTETS_GB 0x00000138
#define MMC_TX_UNICAST_GB 0x0000013c
#define MMC_TX_MULTICAST_GB 0x00000140
#define MMC_TX_BROADCAST_GB 0x00000144
#define MMC_TX_UNDERFLOW_ERROR 0x00000148
#define MMC_TX_SINGLECOL_G 0x0000014c
#define MMC_TX_MULTICOL_G 0x00000150
#define MMC_TX_DEFERRED 0x00000154
#define MMC_TX_LATECOL 0x00000158
#define MMC_TX_EXESSCOL 0x0000015c
#define MMC_TX_CARRIER_ERROR 0x00000160
#define MMC_TX_OCTETCOUNT_G 0x00000164
#define MMC_TX_FRAMECOUNT_G 0x00000168
#define MMC_TX_EXCESSDEF 0x0000016c
#define MMC_TX_PAUSE_FRAME 0x00000170
#define MMC_TX_VLAN_FRAME_G 0x00000174
#define MMC_TX_OCTETCOUNT_GB 0x14
#define MMC_TX_FRAMECOUNT_GB 0x18
#define MMC_TX_BROADCASTFRAME_G 0x1c
#define MMC_TX_MULTICASTFRAME_G 0x20
#define MMC_TX_64_OCTETS_GB 0x24
#define MMC_TX_65_TO_127_OCTETS_GB 0x28
#define MMC_TX_128_TO_255_OCTETS_GB 0x2c
#define MMC_TX_256_TO_511_OCTETS_GB 0x30
#define MMC_TX_512_TO_1023_OCTETS_GB 0x34
#define MMC_TX_1024_TO_MAX_OCTETS_GB 0x38
#define MMC_TX_UNICAST_GB 0x3c
#define MMC_TX_MULTICAST_GB 0x40
#define MMC_TX_BROADCAST_GB 0x44
#define MMC_TX_UNDERFLOW_ERROR 0x48
#define MMC_TX_SINGLECOL_G 0x4c
#define MMC_TX_MULTICOL_G 0x50
#define MMC_TX_DEFERRED 0x54
#define MMC_TX_LATECOL 0x58
#define MMC_TX_EXESSCOL 0x5c
#define MMC_TX_CARRIER_ERROR 0x60
#define MMC_TX_OCTETCOUNT_G 0x64
#define MMC_TX_FRAMECOUNT_G 0x68
#define MMC_TX_EXCESSDEF 0x6c
#define MMC_TX_PAUSE_FRAME 0x70
#define MMC_TX_VLAN_FRAME_G 0x74
/* MMC RX counter registers */
#define MMC_RX_FRAMECOUNT_GB 0x00000180
#define MMC_RX_OCTETCOUNT_GB 0x00000184
#define MMC_RX_OCTETCOUNT_G 0x00000188
#define MMC_RX_BROADCASTFRAME_G 0x0000018c
#define MMC_RX_MULTICASTFRAME_G 0x00000190
#define MMC_RX_CRC_ERROR 0x00000194
#define MMC_RX_ALIGN_ERROR 0x00000198
#define MMC_RX_RUN_ERROR 0x0000019C
#define MMC_RX_JABBER_ERROR 0x000001A0
#define MMC_RX_UNDERSIZE_G 0x000001A4
#define MMC_RX_OVERSIZE_G 0x000001A8
#define MMC_RX_64_OCTETS_GB 0x000001AC
#define MMC_RX_65_TO_127_OCTETS_GB 0x000001b0
#define MMC_RX_128_TO_255_OCTETS_GB 0x000001b4
#define MMC_RX_256_TO_511_OCTETS_GB 0x000001b8
#define MMC_RX_512_TO_1023_OCTETS_GB 0x000001bc
#define MMC_RX_1024_TO_MAX_OCTETS_GB 0x000001c0
#define MMC_RX_UNICAST_G 0x000001c4
#define MMC_RX_LENGTH_ERROR 0x000001c8
#define MMC_RX_AUTOFRANGETYPE 0x000001cc
#define MMC_RX_PAUSE_FRAMES 0x000001d0
#define MMC_RX_FIFO_OVERFLOW 0x000001d4
#define MMC_RX_VLAN_FRAMES_GB 0x000001d8
#define MMC_RX_WATCHDOG_ERROR 0x000001dc
#define MMC_RX_FRAMECOUNT_GB 0x80
#define MMC_RX_OCTETCOUNT_GB 0x84
#define MMC_RX_OCTETCOUNT_G 0x88
#define MMC_RX_BROADCASTFRAME_G 0x8c
#define MMC_RX_MULTICASTFRAME_G 0x90
#define MMC_RX_CRC_ERROR 0x94
#define MMC_RX_ALIGN_ERROR 0x98
#define MMC_RX_RUN_ERROR 0x9C
#define MMC_RX_JABBER_ERROR 0xA0
#define MMC_RX_UNDERSIZE_G 0xA4
#define MMC_RX_OVERSIZE_G 0xA8
#define MMC_RX_64_OCTETS_GB 0xAC
#define MMC_RX_65_TO_127_OCTETS_GB 0xb0
#define MMC_RX_128_TO_255_OCTETS_GB 0xb4
#define MMC_RX_256_TO_511_OCTETS_GB 0xb8
#define MMC_RX_512_TO_1023_OCTETS_GB 0xbc
#define MMC_RX_1024_TO_MAX_OCTETS_GB 0xc0
#define MMC_RX_UNICAST_G 0xc4
#define MMC_RX_LENGTH_ERROR 0xc8
#define MMC_RX_AUTOFRANGETYPE 0xcc
#define MMC_RX_PAUSE_FRAMES 0xd0
#define MMC_RX_FIFO_OVERFLOW 0xd4
#define MMC_RX_VLAN_FRAMES_GB 0xd8
#define MMC_RX_WATCHDOG_ERROR 0xdc
/* IPC */
#define MMC_RX_IPC_INTR_MASK 0x00000200
#define MMC_RX_IPC_INTR 0x00000208
#define MMC_RX_IPC_INTR_MASK 0x100
#define MMC_RX_IPC_INTR 0x108
/* IPv4 */
#define MMC_RX_IPV4_GD 0x00000210
#define MMC_RX_IPV4_HDERR 0x00000214
#define MMC_RX_IPV4_NOPAY 0x00000218
#define MMC_RX_IPV4_FRAG 0x0000021C
#define MMC_RX_IPV4_UDSBL 0x00000220
#define MMC_RX_IPV4_GD 0x110
#define MMC_RX_IPV4_HDERR 0x114
#define MMC_RX_IPV4_NOPAY 0x118
#define MMC_RX_IPV4_FRAG 0x11C
#define MMC_RX_IPV4_UDSBL 0x120
#define MMC_RX_IPV4_GD_OCTETS 0x00000250
#define MMC_RX_IPV4_HDERR_OCTETS 0x00000254
#define MMC_RX_IPV4_NOPAY_OCTETS 0x00000258
#define MMC_RX_IPV4_FRAG_OCTETS 0x0000025c
#define MMC_RX_IPV4_UDSBL_OCTETS 0x00000260
#define MMC_RX_IPV4_GD_OCTETS 0x150
#define MMC_RX_IPV4_HDERR_OCTETS 0x154
#define MMC_RX_IPV4_NOPAY_OCTETS 0x158
#define MMC_RX_IPV4_FRAG_OCTETS 0x15c
#define MMC_RX_IPV4_UDSBL_OCTETS 0x160
/* IPV6 */
#define MMC_RX_IPV6_GD_OCTETS 0x00000264
#define MMC_RX_IPV6_HDERR_OCTETS 0x00000268
#define MMC_RX_IPV6_NOPAY_OCTETS 0x0000026c
#define MMC_RX_IPV6_GD_OCTETS 0x164
#define MMC_RX_IPV6_HDERR_OCTETS 0x168
#define MMC_RX_IPV6_NOPAY_OCTETS 0x16c
#define MMC_RX_IPV6_GD 0x00000224
#define MMC_RX_IPV6_HDERR 0x00000228
#define MMC_RX_IPV6_NOPAY 0x0000022c
#define MMC_RX_IPV6_GD 0x124
#define MMC_RX_IPV6_HDERR 0x128
#define MMC_RX_IPV6_NOPAY 0x12c
/* Protocols */
#define MMC_RX_UDP_GD 0x00000230
#define MMC_RX_UDP_ERR 0x00000234
#define MMC_RX_TCP_GD 0x00000238
#define MMC_RX_TCP_ERR 0x0000023c
#define MMC_RX_ICMP_GD 0x00000240
#define MMC_RX_ICMP_ERR 0x00000244
#define MMC_RX_UDP_GD 0x130
#define MMC_RX_UDP_ERR 0x134
#define MMC_RX_TCP_GD 0x138
#define MMC_RX_TCP_ERR 0x13c
#define MMC_RX_ICMP_GD 0x140
#define MMC_RX_ICMP_ERR 0x144
#define MMC_RX_UDP_GD_OCTETS 0x00000270
#define MMC_RX_UDP_ERR_OCTETS 0x00000274
#define MMC_RX_TCP_GD_OCTETS 0x00000278
#define MMC_RX_TCP_ERR_OCTETS 0x0000027c
#define MMC_RX_ICMP_GD_OCTETS 0x00000280
#define MMC_RX_ICMP_ERR_OCTETS 0x00000284
#define MMC_RX_UDP_GD_OCTETS 0x170
#define MMC_RX_UDP_ERR_OCTETS 0x174
#define MMC_RX_TCP_GD_OCTETS 0x178
#define MMC_RX_TCP_ERR_OCTETS 0x17c
#define MMC_RX_ICMP_GD_OCTETS 0x180
#define MMC_RX_ICMP_ERR_OCTETS 0x184
void dwmac_mmc_ctrl(void __iomem *ioaddr, unsigned int mode)
void dwmac_mmc_ctrl(void __iomem *mmcaddr, unsigned int mode)
{
u32 value = readl(ioaddr + MMC_CNTRL);
u32 value = readl(mmcaddr + MMC_CNTRL);
value |= (mode & 0x3F);
writel(value, ioaddr + MMC_CNTRL);
writel(value, mmcaddr + MMC_CNTRL);
pr_debug("stmmac: MMC ctrl register (offset 0x%x): 0x%08x\n",
MMC_CNTRL, value);
}
/* To mask all interrupts. */
void dwmac_mmc_intr_all_mask(void __iomem *ioaddr)
void dwmac_mmc_intr_all_mask(void __iomem *mmcaddr)
{
writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_INTR_MASK);
writel(MMC_DEFAULT_MASK, ioaddr + MMC_TX_INTR_MASK);
writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_IPC_INTR_MASK);
writel(MMC_DEFAULT_MASK, mmcaddr + MMC_RX_INTR_MASK);
writel(MMC_DEFAULT_MASK, mmcaddr + MMC_TX_INTR_MASK);
writel(MMC_DEFAULT_MASK, mmcaddr + MMC_RX_IPC_INTR_MASK);
}
/* This reads the MAC core counters (if actually supported).
......@@ -157,111 +157,116 @@ void dwmac_mmc_intr_all_mask(void __iomem *ioaddr)
* counter after a read. So all the fields of the mmc struct
* have to be incremented.
*/
void dwmac_mmc_read(void __iomem *ioaddr, struct stmmac_counters *mmc)
void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
{
mmc->mmc_tx_octetcount_gb += readl(ioaddr + MMC_TX_OCTETCOUNT_GB);
mmc->mmc_tx_framecount_gb += readl(ioaddr + MMC_TX_FRAMECOUNT_GB);
mmc->mmc_tx_broadcastframe_g += readl(ioaddr + MMC_TX_BROADCASTFRAME_G);
mmc->mmc_tx_multicastframe_g += readl(ioaddr + MMC_TX_MULTICASTFRAME_G);
mmc->mmc_tx_64_octets_gb += readl(ioaddr + MMC_TX_64_OCTETS_GB);
mmc->mmc_tx_octetcount_gb += readl(mmcaddr + MMC_TX_OCTETCOUNT_GB);
mmc->mmc_tx_framecount_gb += readl(mmcaddr + MMC_TX_FRAMECOUNT_GB);
mmc->mmc_tx_broadcastframe_g += readl(mmcaddr +
MMC_TX_BROADCASTFRAME_G);
mmc->mmc_tx_multicastframe_g += readl(mmcaddr +
MMC_TX_MULTICASTFRAME_G);
mmc->mmc_tx_64_octets_gb += readl(mmcaddr + MMC_TX_64_OCTETS_GB);
mmc->mmc_tx_65_to_127_octets_gb +=
readl(ioaddr + MMC_TX_65_TO_127_OCTETS_GB);
readl(mmcaddr + MMC_TX_65_TO_127_OCTETS_GB);
mmc->mmc_tx_128_to_255_octets_gb +=
readl(ioaddr + MMC_TX_128_TO_255_OCTETS_GB);
readl(mmcaddr + MMC_TX_128_TO_255_OCTETS_GB);
mmc->mmc_tx_256_to_511_octets_gb +=
readl(ioaddr + MMC_TX_256_TO_511_OCTETS_GB);
readl(mmcaddr + MMC_TX_256_TO_511_OCTETS_GB);
mmc->mmc_tx_512_to_1023_octets_gb +=
readl(ioaddr + MMC_TX_512_TO_1023_OCTETS_GB);
readl(mmcaddr + MMC_TX_512_TO_1023_OCTETS_GB);
mmc->mmc_tx_1024_to_max_octets_gb +=
readl(ioaddr + MMC_TX_1024_TO_MAX_OCTETS_GB);
mmc->mmc_tx_unicast_gb += readl(ioaddr + MMC_TX_UNICAST_GB);
mmc->mmc_tx_multicast_gb += readl(ioaddr + MMC_TX_MULTICAST_GB);
mmc->mmc_tx_broadcast_gb += readl(ioaddr + MMC_TX_BROADCAST_GB);
mmc->mmc_tx_underflow_error += readl(ioaddr + MMC_TX_UNDERFLOW_ERROR);
mmc->mmc_tx_singlecol_g += readl(ioaddr + MMC_TX_SINGLECOL_G);
mmc->mmc_tx_multicol_g += readl(ioaddr + MMC_TX_MULTICOL_G);
mmc->mmc_tx_deferred += readl(ioaddr + MMC_TX_DEFERRED);
mmc->mmc_tx_latecol += readl(ioaddr + MMC_TX_LATECOL);
mmc->mmc_tx_exesscol += readl(ioaddr + MMC_TX_EXESSCOL);
mmc->mmc_tx_carrier_error += readl(ioaddr + MMC_TX_CARRIER_ERROR);
mmc->mmc_tx_octetcount_g += readl(ioaddr + MMC_TX_OCTETCOUNT_G);
mmc->mmc_tx_framecount_g += readl(ioaddr + MMC_TX_FRAMECOUNT_G);
mmc->mmc_tx_excessdef += readl(ioaddr + MMC_TX_EXCESSDEF);
mmc->mmc_tx_pause_frame += readl(ioaddr + MMC_TX_PAUSE_FRAME);
mmc->mmc_tx_vlan_frame_g += readl(ioaddr + MMC_TX_VLAN_FRAME_G);
readl(mmcaddr + MMC_TX_1024_TO_MAX_OCTETS_GB);
mmc->mmc_tx_unicast_gb += readl(mmcaddr + MMC_TX_UNICAST_GB);
mmc->mmc_tx_multicast_gb += readl(mmcaddr + MMC_TX_MULTICAST_GB);
mmc->mmc_tx_broadcast_gb += readl(mmcaddr + MMC_TX_BROADCAST_GB);
mmc->mmc_tx_underflow_error += readl(mmcaddr + MMC_TX_UNDERFLOW_ERROR);
mmc->mmc_tx_singlecol_g += readl(mmcaddr + MMC_TX_SINGLECOL_G);
mmc->mmc_tx_multicol_g += readl(mmcaddr + MMC_TX_MULTICOL_G);
mmc->mmc_tx_deferred += readl(mmcaddr + MMC_TX_DEFERRED);
mmc->mmc_tx_latecol += readl(mmcaddr + MMC_TX_LATECOL);
mmc->mmc_tx_exesscol += readl(mmcaddr + MMC_TX_EXESSCOL);
mmc->mmc_tx_carrier_error += readl(mmcaddr + MMC_TX_CARRIER_ERROR);
mmc->mmc_tx_octetcount_g += readl(mmcaddr + MMC_TX_OCTETCOUNT_G);
mmc->mmc_tx_framecount_g += readl(mmcaddr + MMC_TX_FRAMECOUNT_G);
mmc->mmc_tx_excessdef += readl(mmcaddr + MMC_TX_EXCESSDEF);
mmc->mmc_tx_pause_frame += readl(mmcaddr + MMC_TX_PAUSE_FRAME);
mmc->mmc_tx_vlan_frame_g += readl(mmcaddr + MMC_TX_VLAN_FRAME_G);
/* MMC RX counter registers */
mmc->mmc_rx_framecount_gb += readl(ioaddr + MMC_RX_FRAMECOUNT_GB);
mmc->mmc_rx_octetcount_gb += readl(ioaddr + MMC_RX_OCTETCOUNT_GB);
mmc->mmc_rx_octetcount_g += readl(ioaddr + MMC_RX_OCTETCOUNT_G);
mmc->mmc_rx_broadcastframe_g += readl(ioaddr + MMC_RX_BROADCASTFRAME_G);
mmc->mmc_rx_multicastframe_g += readl(ioaddr + MMC_RX_MULTICASTFRAME_G);
mmc->mmc_rx_crc_error += readl(ioaddr + MMC_RX_CRC_ERROR);
mmc->mmc_rx_align_error += readl(ioaddr + MMC_RX_ALIGN_ERROR);
mmc->mmc_rx_run_error += readl(ioaddr + MMC_RX_RUN_ERROR);
mmc->mmc_rx_jabber_error += readl(ioaddr + MMC_RX_JABBER_ERROR);
mmc->mmc_rx_undersize_g += readl(ioaddr + MMC_RX_UNDERSIZE_G);
mmc->mmc_rx_oversize_g += readl(ioaddr + MMC_RX_OVERSIZE_G);
mmc->mmc_rx_64_octets_gb += readl(ioaddr + MMC_RX_64_OCTETS_GB);
mmc->mmc_rx_framecount_gb += readl(mmcaddr + MMC_RX_FRAMECOUNT_GB);
mmc->mmc_rx_octetcount_gb += readl(mmcaddr + MMC_RX_OCTETCOUNT_GB);
mmc->mmc_rx_octetcount_g += readl(mmcaddr + MMC_RX_OCTETCOUNT_G);
mmc->mmc_rx_broadcastframe_g += readl(mmcaddr +
MMC_RX_BROADCASTFRAME_G);
mmc->mmc_rx_multicastframe_g += readl(mmcaddr +
MMC_RX_MULTICASTFRAME_G);
mmc->mmc_rx_crc_error += readl(mmcaddr + MMC_RX_CRC_ERROR);
mmc->mmc_rx_align_error += readl(mmcaddr + MMC_RX_ALIGN_ERROR);
mmc->mmc_rx_run_error += readl(mmcaddr + MMC_RX_RUN_ERROR);
mmc->mmc_rx_jabber_error += readl(mmcaddr + MMC_RX_JABBER_ERROR);
mmc->mmc_rx_undersize_g += readl(mmcaddr + MMC_RX_UNDERSIZE_G);
mmc->mmc_rx_oversize_g += readl(mmcaddr + MMC_RX_OVERSIZE_G);
mmc->mmc_rx_64_octets_gb += readl(mmcaddr + MMC_RX_64_OCTETS_GB);
mmc->mmc_rx_65_to_127_octets_gb +=
readl(ioaddr + MMC_RX_65_TO_127_OCTETS_GB);
readl(mmcaddr + MMC_RX_65_TO_127_OCTETS_GB);
mmc->mmc_rx_128_to_255_octets_gb +=
readl(ioaddr + MMC_RX_128_TO_255_OCTETS_GB);
readl(mmcaddr + MMC_RX_128_TO_255_OCTETS_GB);
mmc->mmc_rx_256_to_511_octets_gb +=
readl(ioaddr + MMC_RX_256_TO_511_OCTETS_GB);
readl(mmcaddr + MMC_RX_256_TO_511_OCTETS_GB);
mmc->mmc_rx_512_to_1023_octets_gb +=
readl(ioaddr + MMC_RX_512_TO_1023_OCTETS_GB);
readl(mmcaddr + MMC_RX_512_TO_1023_OCTETS_GB);
mmc->mmc_rx_1024_to_max_octets_gb +=
readl(ioaddr + MMC_RX_1024_TO_MAX_OCTETS_GB);
mmc->mmc_rx_unicast_g += readl(ioaddr + MMC_RX_UNICAST_G);
mmc->mmc_rx_length_error += readl(ioaddr + MMC_RX_LENGTH_ERROR);
mmc->mmc_rx_autofrangetype += readl(ioaddr + MMC_RX_AUTOFRANGETYPE);
mmc->mmc_rx_pause_frames += readl(ioaddr + MMC_RX_PAUSE_FRAMES);
mmc->mmc_rx_fifo_overflow += readl(ioaddr + MMC_RX_FIFO_OVERFLOW);
mmc->mmc_rx_vlan_frames_gb += readl(ioaddr + MMC_RX_VLAN_FRAMES_GB);
mmc->mmc_rx_watchdog_error += readl(ioaddr + MMC_RX_WATCHDOG_ERROR);
readl(mmcaddr + MMC_RX_1024_TO_MAX_OCTETS_GB);
mmc->mmc_rx_unicast_g += readl(mmcaddr + MMC_RX_UNICAST_G);
mmc->mmc_rx_length_error += readl(mmcaddr + MMC_RX_LENGTH_ERROR);
mmc->mmc_rx_autofrangetype += readl(mmcaddr + MMC_RX_AUTOFRANGETYPE);
mmc->mmc_rx_pause_frames += readl(mmcaddr + MMC_RX_PAUSE_FRAMES);
mmc->mmc_rx_fifo_overflow += readl(mmcaddr + MMC_RX_FIFO_OVERFLOW);
mmc->mmc_rx_vlan_frames_gb += readl(mmcaddr + MMC_RX_VLAN_FRAMES_GB);
mmc->mmc_rx_watchdog_error += readl(mmcaddr + MMC_RX_WATCHDOG_ERROR);
/* IPC */
mmc->mmc_rx_ipc_intr_mask += readl(ioaddr + MMC_RX_IPC_INTR_MASK);
mmc->mmc_rx_ipc_intr += readl(ioaddr + MMC_RX_IPC_INTR);
mmc->mmc_rx_ipc_intr_mask += readl(mmcaddr + MMC_RX_IPC_INTR_MASK);
mmc->mmc_rx_ipc_intr += readl(mmcaddr + MMC_RX_IPC_INTR);
/* IPv4 */
mmc->mmc_rx_ipv4_gd += readl(ioaddr + MMC_RX_IPV4_GD);
mmc->mmc_rx_ipv4_hderr += readl(ioaddr + MMC_RX_IPV4_HDERR);
mmc->mmc_rx_ipv4_nopay += readl(ioaddr + MMC_RX_IPV4_NOPAY);
mmc->mmc_rx_ipv4_frag += readl(ioaddr + MMC_RX_IPV4_FRAG);
mmc->mmc_rx_ipv4_udsbl += readl(ioaddr + MMC_RX_IPV4_UDSBL);
mmc->mmc_rx_ipv4_gd += readl(mmcaddr + MMC_RX_IPV4_GD);
mmc->mmc_rx_ipv4_hderr += readl(mmcaddr + MMC_RX_IPV4_HDERR);
mmc->mmc_rx_ipv4_nopay += readl(mmcaddr + MMC_RX_IPV4_NOPAY);
mmc->mmc_rx_ipv4_frag += readl(mmcaddr + MMC_RX_IPV4_FRAG);
mmc->mmc_rx_ipv4_udsbl += readl(mmcaddr + MMC_RX_IPV4_UDSBL);
mmc->mmc_rx_ipv4_gd_octets += readl(ioaddr + MMC_RX_IPV4_GD_OCTETS);
mmc->mmc_rx_ipv4_gd_octets += readl(mmcaddr + MMC_RX_IPV4_GD_OCTETS);
mmc->mmc_rx_ipv4_hderr_octets +=
readl(ioaddr + MMC_RX_IPV4_HDERR_OCTETS);
readl(mmcaddr + MMC_RX_IPV4_HDERR_OCTETS);
mmc->mmc_rx_ipv4_nopay_octets +=
readl(ioaddr + MMC_RX_IPV4_NOPAY_OCTETS);
mmc->mmc_rx_ipv4_frag_octets += readl(ioaddr + MMC_RX_IPV4_FRAG_OCTETS);
readl(mmcaddr + MMC_RX_IPV4_NOPAY_OCTETS);
mmc->mmc_rx_ipv4_frag_octets += readl(mmcaddr +
MMC_RX_IPV4_FRAG_OCTETS);
mmc->mmc_rx_ipv4_udsbl_octets +=
readl(ioaddr + MMC_RX_IPV4_UDSBL_OCTETS);
readl(mmcaddr + MMC_RX_IPV4_UDSBL_OCTETS);
/* IPV6 */
mmc->mmc_rx_ipv6_gd_octets += readl(ioaddr + MMC_RX_IPV6_GD_OCTETS);
mmc->mmc_rx_ipv6_gd_octets += readl(mmcaddr + MMC_RX_IPV6_GD_OCTETS);
mmc->mmc_rx_ipv6_hderr_octets +=
readl(ioaddr + MMC_RX_IPV6_HDERR_OCTETS);
readl(mmcaddr + MMC_RX_IPV6_HDERR_OCTETS);
mmc->mmc_rx_ipv6_nopay_octets +=
readl(ioaddr + MMC_RX_IPV6_NOPAY_OCTETS);
readl(mmcaddr + MMC_RX_IPV6_NOPAY_OCTETS);
mmc->mmc_rx_ipv6_gd += readl(ioaddr + MMC_RX_IPV6_GD);
mmc->mmc_rx_ipv6_hderr += readl(ioaddr + MMC_RX_IPV6_HDERR);
mmc->mmc_rx_ipv6_nopay += readl(ioaddr + MMC_RX_IPV6_NOPAY);
mmc->mmc_rx_ipv6_gd += readl(mmcaddr + MMC_RX_IPV6_GD);
mmc->mmc_rx_ipv6_hderr += readl(mmcaddr + MMC_RX_IPV6_HDERR);
mmc->mmc_rx_ipv6_nopay += readl(mmcaddr + MMC_RX_IPV6_NOPAY);
/* Protocols */
mmc->mmc_rx_udp_gd += readl(ioaddr + MMC_RX_UDP_GD);
mmc->mmc_rx_udp_err += readl(ioaddr + MMC_RX_UDP_ERR);
mmc->mmc_rx_tcp_gd += readl(ioaddr + MMC_RX_TCP_GD);
mmc->mmc_rx_tcp_err += readl(ioaddr + MMC_RX_TCP_ERR);
mmc->mmc_rx_icmp_gd += readl(ioaddr + MMC_RX_ICMP_GD);
mmc->mmc_rx_icmp_err += readl(ioaddr + MMC_RX_ICMP_ERR);
mmc->mmc_rx_udp_gd += readl(mmcaddr + MMC_RX_UDP_GD);
mmc->mmc_rx_udp_err += readl(mmcaddr + MMC_RX_UDP_ERR);
mmc->mmc_rx_tcp_gd += readl(mmcaddr + MMC_RX_TCP_GD);
mmc->mmc_rx_tcp_err += readl(mmcaddr + MMC_RX_TCP_ERR);
mmc->mmc_rx_icmp_gd += readl(mmcaddr + MMC_RX_ICMP_GD);
mmc->mmc_rx_icmp_err += readl(mmcaddr + MMC_RX_ICMP_ERR);
mmc->mmc_rx_udp_gd_octets += readl(ioaddr + MMC_RX_UDP_GD_OCTETS);
mmc->mmc_rx_udp_err_octets += readl(ioaddr + MMC_RX_UDP_ERR_OCTETS);
mmc->mmc_rx_tcp_gd_octets += readl(ioaddr + MMC_RX_TCP_GD_OCTETS);
mmc->mmc_rx_tcp_err_octets += readl(ioaddr + MMC_RX_TCP_ERR_OCTETS);
mmc->mmc_rx_icmp_gd_octets += readl(ioaddr + MMC_RX_ICMP_GD_OCTETS);
mmc->mmc_rx_icmp_err_octets += readl(ioaddr + MMC_RX_ICMP_ERR_OCTETS);
mmc->mmc_rx_udp_gd_octets += readl(mmcaddr + MMC_RX_UDP_GD_OCTETS);
mmc->mmc_rx_udp_err_octets += readl(mmcaddr + MMC_RX_UDP_ERR_OCTETS);
mmc->mmc_rx_tcp_gd_octets += readl(mmcaddr + MMC_RX_TCP_GD_OCTETS);
mmc->mmc_rx_tcp_err_octets += readl(mmcaddr + MMC_RX_TCP_ERR_OCTETS);
mmc->mmc_rx_icmp_gd_octets += readl(mmcaddr + MMC_RX_ICMP_GD_OCTETS);
mmc->mmc_rx_icmp_err_octets += readl(mmcaddr + MMC_RX_ICMP_ERR_OCTETS);
}
......@@ -279,6 +279,26 @@ static int ndesc_get_rx_timestamp_status(void *desc, u32 ats)
return 1;
}
static void ndesc_display_ring(void *head, unsigned int size, bool rx)
{
struct dma_desc *p = (struct dma_desc *)head;
int i;
pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
for (i = 0; i < size; i++) {
u64 x;
x = *(u64 *)p;
pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
i, (unsigned int)virt_to_phys(p),
(unsigned int)x, (unsigned int)(x >> 32),
p->des2, p->des3);
p++;
}
pr_info("\n");
}
const struct stmmac_desc_ops ndesc_ops = {
.tx_status = ndesc_get_tx_status,
.rx_status = ndesc_get_rx_status,
......@@ -297,4 +317,5 @@ const struct stmmac_desc_ops ndesc_ops = {
.get_tx_timestamp_status = ndesc_get_tx_timestamp_status,
.get_timestamp = ndesc_get_timestamp,
.get_rx_timestamp_status = ndesc_get_rx_timestamp_status,
.display_ring = ndesc_display_ring,
};
......@@ -24,7 +24,7 @@
#define __STMMAC_H__
#define STMMAC_RESOURCE_NAME "stmmaceth"
#define DRV_MODULE_VERSION "Oct_2015"
#define DRV_MODULE_VERSION "Jan_2016"
#include <linux/clk.h>
#include <linux/stmmac.h>
......@@ -67,6 +67,7 @@ struct stmmac_priv {
spinlock_t tx_lock;
bool tx_path_in_lpi_mode;
struct timer_list txtimer;
bool tso;
struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
struct dma_extended_desc *dma_erx;
......@@ -128,6 +129,10 @@ struct stmmac_priv {
int use_riwt;
int irq_wake;
spinlock_t ptp_lock;
void __iomem *mmcaddr;
u32 rx_tail_addr;
u32 tx_tail_addr;
u32 mss;
#ifdef CONFIG_DEBUG_FS
struct dentry *dbgfs_dir;
......
......@@ -161,6 +161,9 @@ static const struct stmmac_stats stmmac_gstrings_stats[] = {
STMMAC_STAT(mtl_rx_fifo_ctrl_active),
STMMAC_STAT(mac_rx_frame_ctrl_fifo),
STMMAC_STAT(mac_gmii_rx_proto_engine),
/* TSO */
STMMAC_STAT(tx_tso_frames),
STMMAC_STAT(tx_tso_nfrags),
};
#define STMMAC_STATS_LEN ARRAY_SIZE(stmmac_gstrings_stats)
......@@ -499,14 +502,14 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
int i, j = 0;
/* Update the DMA HW counters for dwmac10/100 */
if (!priv->plat->has_gmac)
if (priv->hw->dma->dma_diagnostic_fr)
priv->hw->dma->dma_diagnostic_fr(&dev->stats,
(void *) &priv->xstats,
priv->ioaddr);
else {
/* If supported, for new GMAC chips expose the MMC counters */
if (priv->dma_cap.rmon) {
dwmac_mmc_read(priv->ioaddr, &priv->mmc);
dwmac_mmc_read(priv->mmcaddr, &priv->mmc);
for (i = 0; i < STMMAC_MMC_STATS_LEN; i++) {
char *p;
......
......@@ -56,6 +56,7 @@
#include "dwmac1000.h"
#define STMMAC_ALIGN(x) L1_CACHE_ALIGN(x)
#define TSO_MAX_BUFF_SIZE (SZ_16K - 1)
/* Module parameters */
#define TX_TIMEO 5000
......@@ -725,13 +726,15 @@ static void stmmac_adjust_link(struct net_device *dev)
new_state = 1;
switch (phydev->speed) {
case 1000:
if (likely(priv->plat->has_gmac))
if (likely((priv->plat->has_gmac) ||
(priv->plat->has_gmac4)))
ctrl &= ~priv->hw->link.port;
stmmac_hw_fix_mac_speed(priv);
break;
case 100:
case 10:
if (priv->plat->has_gmac) {
if (likely((priv->plat->has_gmac) ||
(priv->plat->has_gmac4))) {
ctrl |= priv->hw->link.port;
if (phydev->speed == SPEED_100) {
ctrl |= priv->hw->link.speed;
......@@ -877,53 +880,22 @@ static int stmmac_init_phy(struct net_device *dev)
return 0;
}
/**
* stmmac_display_ring - display ring
* @head: pointer to the head of the ring passed.
* @size: size of the ring.
* @extend_desc: to verify if extended descriptors are used.
* Description: display the control/status and buffer descriptors.
*/
static void stmmac_display_ring(void *head, int size, int extend_desc)
{
int i;
struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
struct dma_desc *p = (struct dma_desc *)head;
for (i = 0; i < size; i++) {
u64 x;
if (extend_desc) {
x = *(u64 *) ep;
pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
(unsigned int)x, (unsigned int)(x >> 32),
ep->basic.des2, ep->basic.des3);
ep++;
} else {
x = *(u64 *) p;
pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
i, (unsigned int)virt_to_phys(p),
(unsigned int)x, (unsigned int)(x >> 32),
p->des2, p->des3);
p++;
}
pr_info("\n");
}
}
static void stmmac_display_rings(struct stmmac_priv *priv)
{
void *head_rx, *head_tx;
if (priv->extend_desc) {
pr_info("Extended RX descriptor ring:\n");
stmmac_display_ring((void *)priv->dma_erx, DMA_RX_SIZE, 1);
pr_info("Extended TX descriptor ring:\n");
stmmac_display_ring((void *)priv->dma_etx, DMA_TX_SIZE, 1);
head_rx = (void *)priv->dma_erx;
head_tx = (void *)priv->dma_etx;
} else {
pr_info("RX descriptor ring:\n");
stmmac_display_ring((void *)priv->dma_rx, DMA_RX_SIZE, 0);
pr_info("TX descriptor ring:\n");
stmmac_display_ring((void *)priv->dma_tx, DMA_TX_SIZE, 0);
head_rx = (void *)priv->dma_rx;
head_tx = (void *)priv->dma_tx;
}
/* Display Rx ring */
priv->hw->desc->display_ring(head_rx, DMA_RX_SIZE, true);
/* Display Tx ring */
priv->hw->desc->display_ring(head_tx, DMA_TX_SIZE, false);
}
static int stmmac_set_bfsize(int mtu, int bufsize)
......@@ -1002,7 +974,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
return -EINVAL;
}
p->des2 = priv->rx_skbuff_dma[i];
if (priv->synopsys_id >= DWMAC_CORE_4_00)
p->des0 = priv->rx_skbuff_dma[i];
else
p->des2 = priv->rx_skbuff_dma[i];
if ((priv->hw->mode->init_desc3) &&
(priv->dma_buf_sz == BUF_SIZE_16KiB))
......@@ -1093,7 +1068,16 @@ static int init_dma_desc_rings(struct net_device *dev, gfp_t flags)
p = &((priv->dma_etx + i)->basic);
else
p = priv->dma_tx + i;
p->des2 = 0;
if (priv->synopsys_id >= DWMAC_CORE_4_00) {
p->des0 = 0;
p->des1 = 0;
p->des2 = 0;
p->des3 = 0;
} else {
p->des2 = 0;
}
priv->tx_skbuff_dma[i].buf = 0;
priv->tx_skbuff_dma[i].map_as_page = false;
priv->tx_skbuff_dma[i].len = 0;
......@@ -1356,9 +1340,13 @@ static void stmmac_tx_clean(struct stmmac_priv *priv)
priv->tx_skbuff_dma[entry].len,
DMA_TO_DEVICE);
priv->tx_skbuff_dma[entry].buf = 0;
priv->tx_skbuff_dma[entry].len = 0;
priv->tx_skbuff_dma[entry].map_as_page = false;
}
priv->hw->mode->clean_desc3(priv, p);
if (priv->hw->mode->clean_desc3)
priv->hw->mode->clean_desc3(priv, p);
priv->tx_skbuff_dma[entry].last_segment = false;
priv->tx_skbuff_dma[entry].is_jumbo = false;
......@@ -1481,40 +1469,22 @@ static void stmmac_dma_interrupt(struct stmmac_priv *priv)
static void stmmac_mmc_setup(struct stmmac_priv *priv)
{
unsigned int mode = MMC_CNTRL_RESET_ON_READ | MMC_CNTRL_COUNTER_RESET |
MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET;
MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET;
dwmac_mmc_intr_all_mask(priv->ioaddr);
if (priv->synopsys_id >= DWMAC_CORE_4_00)
priv->mmcaddr = priv->ioaddr + MMC_GMAC4_OFFSET;
else
priv->mmcaddr = priv->ioaddr + MMC_GMAC3_X_OFFSET;
dwmac_mmc_intr_all_mask(priv->mmcaddr);
if (priv->dma_cap.rmon) {
dwmac_mmc_ctrl(priv->ioaddr, mode);
dwmac_mmc_ctrl(priv->mmcaddr, mode);
memset(&priv->mmc, 0, sizeof(struct stmmac_counters));
} else
pr_info(" No MAC Management Counters available\n");
}
/**
* stmmac_get_synopsys_id - return the SYINID.
* @priv: driver private structure
* Description: this simple function is to decode and return the SYINID
* starting from the HW core register.
*/
static u32 stmmac_get_synopsys_id(struct stmmac_priv *priv)
{
u32 hwid = priv->hw->synopsys_uid;
/* Check Synopsys Id (not available on old chips) */
if (likely(hwid)) {
u32 uid = ((hwid & 0x0000ff00) >> 8);
u32 synid = (hwid & 0x000000ff);
pr_info("stmmac - user ID: 0x%x, Synopsys ID: 0x%x\n",
uid, synid);
return synid;
}
return 0;
}
/**
* stmmac_selec_desc_mode - to select among: normal/alternate/extend descriptors
* @priv: driver private structure
......@@ -1552,51 +1522,15 @@ static void stmmac_selec_desc_mode(struct stmmac_priv *priv)
*/
static int stmmac_get_hw_features(struct stmmac_priv *priv)
{
u32 hw_cap = 0;
u32 ret = 0;
if (priv->hw->dma->get_hw_feature) {
hw_cap = priv->hw->dma->get_hw_feature(priv->ioaddr);
priv->dma_cap.mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
priv->dma_cap.mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
priv->dma_cap.half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
priv->dma_cap.hash_filter = (hw_cap & DMA_HW_FEAT_HASHSEL) >> 4;
priv->dma_cap.multi_addr = (hw_cap & DMA_HW_FEAT_ADDMAC) >> 5;
priv->dma_cap.pcs = (hw_cap & DMA_HW_FEAT_PCSSEL) >> 6;
priv->dma_cap.sma_mdio = (hw_cap & DMA_HW_FEAT_SMASEL) >> 8;
priv->dma_cap.pmt_remote_wake_up =
(hw_cap & DMA_HW_FEAT_RWKSEL) >> 9;
priv->dma_cap.pmt_magic_frame =
(hw_cap & DMA_HW_FEAT_MGKSEL) >> 10;
/* MMC */
priv->dma_cap.rmon = (hw_cap & DMA_HW_FEAT_MMCSEL) >> 11;
/* IEEE 1588-2002 */
priv->dma_cap.time_stamp =
(hw_cap & DMA_HW_FEAT_TSVER1SEL) >> 12;
/* IEEE 1588-2008 */
priv->dma_cap.atime_stamp =
(hw_cap & DMA_HW_FEAT_TSVER2SEL) >> 13;
/* 802.3az - Energy-Efficient Ethernet (EEE) */
priv->dma_cap.eee = (hw_cap & DMA_HW_FEAT_EEESEL) >> 14;
priv->dma_cap.av = (hw_cap & DMA_HW_FEAT_AVSEL) >> 15;
/* TX and RX csum */
priv->dma_cap.tx_coe = (hw_cap & DMA_HW_FEAT_TXCOESEL) >> 16;
priv->dma_cap.rx_coe_type1 =
(hw_cap & DMA_HW_FEAT_RXTYP1COE) >> 17;
priv->dma_cap.rx_coe_type2 =
(hw_cap & DMA_HW_FEAT_RXTYP2COE) >> 18;
priv->dma_cap.rxfifo_over_2048 =
(hw_cap & DMA_HW_FEAT_RXFIFOSIZE) >> 19;
/* TX and RX number of channels */
priv->dma_cap.number_rx_channel =
(hw_cap & DMA_HW_FEAT_RXCHCNT) >> 20;
priv->dma_cap.number_tx_channel =
(hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
/* Alternate (enhanced) DESC mode */
priv->dma_cap.enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
}
return hw_cap;
priv->hw->dma->get_hw_feature(priv->ioaddr,
&priv->dma_cap);
ret = 1;
}
return ret;
}
/**
......@@ -1652,8 +1586,19 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
priv->hw->dma->init(priv->ioaddr, pbl, fixed_burst, mixed_burst,
aal, priv->dma_tx_phy, priv->dma_rx_phy, atds);
if ((priv->synopsys_id >= DWMAC_CORE_3_50) &&
(priv->plat->axi && priv->hw->dma->axi))
if (priv->synopsys_id >= DWMAC_CORE_4_00) {
priv->rx_tail_addr = priv->dma_rx_phy +
(DMA_RX_SIZE * sizeof(struct dma_desc));
priv->hw->dma->set_rx_tail_ptr(priv->ioaddr, priv->rx_tail_addr,
STMMAC_CHAN0);
priv->tx_tail_addr = priv->dma_tx_phy +
(DMA_TX_SIZE * sizeof(struct dma_desc));
priv->hw->dma->set_tx_tail_ptr(priv->ioaddr, priv->tx_tail_addr,
STMMAC_CHAN0);
}
if (priv->plat->axi && priv->hw->dma->axi)
priv->hw->dma->axi(priv->ioaddr, priv->plat->axi);
return ret;
......@@ -1733,7 +1678,10 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
}
/* Enable the MAC Rx/Tx */
stmmac_set_mac(priv->ioaddr, true);
if (priv->synopsys_id >= DWMAC_CORE_4_00)
stmmac_dwmac4_set_mac(priv->ioaddr, true);
else
stmmac_set_mac(priv->ioaddr, true);
/* Set the HW DMA mode and the COE */
stmmac_dma_operation_mode(priv);
......@@ -1771,6 +1719,18 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
if (priv->pcs && priv->hw->mac->ctrl_ane)
priv->hw->mac->ctrl_ane(priv->hw, 0);
/* set TX ring length */
if (priv->hw->dma->set_tx_ring_len)
priv->hw->dma->set_tx_ring_len(priv->ioaddr,
(DMA_TX_SIZE - 1));
/* set RX ring length */
if (priv->hw->dma->set_rx_ring_len)
priv->hw->dma->set_rx_ring_len(priv->ioaddr,
(DMA_RX_SIZE - 1));
/* Enable TSO */
if (priv->tso)
priv->hw->dma->enable_tso(priv->ioaddr, 1, STMMAC_CHAN0);
return 0;
}
......@@ -1935,6 +1895,239 @@ static int stmmac_release(struct net_device *dev)
return 0;
}
/**
* stmmac_tso_allocator - fill descriptors for a TSO buffer
* @priv: driver private structure
* @des: buffer start address
* @total_len: total length to fill in descriptors
* @last_segment: condition for the last descriptor
* Description:
* This function fills the descriptors and, when needed, takes further
* descriptors according to the buffer length to fill.
*/
static void stmmac_tso_allocator(struct stmmac_priv *priv, unsigned int des,
int total_len, bool last_segment)
{
struct dma_desc *desc;
int tmp_len;
u32 buff_size;
tmp_len = total_len;
while (tmp_len > 0) {
priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
desc = priv->dma_tx + priv->cur_tx;
desc->des0 = des + (total_len - tmp_len);
buff_size = tmp_len >= TSO_MAX_BUFF_SIZE ?
TSO_MAX_BUFF_SIZE : tmp_len;
priv->hw->desc->prepare_tso_tx_desc(desc, 0, buff_size,
0, 1,
(last_segment) && (buff_size < TSO_MAX_BUFF_SIZE),
0, 0);
tmp_len -= TSO_MAX_BUFF_SIZE;
}
}
/**
* stmmac_tso_xmit - Tx entry point of the driver for oversized frames (TSO)
* @skb : the socket buffer
* @dev : device pointer
* Description: this is the transmit function that is called on TSO frames
* (support available on GMAC4 and newer chips).
* The diagram below shows the ring programming in the case of TSO frames:
*
* First Descriptor
* --------
* | DES0 |---> buffer1 = L2/L3/L4 header
* | DES1 |---> TCP Payload (can continue on next descr...)
* | DES2 |---> buffer 1 and 2 len
* | DES3 |---> must set TSE, TCP hdr len-> [22:19]. TCP payload len [17:0]
* --------
* |
* ...
* |
* --------
* | DES0 | --| Split TCP Payload on Buffers 1 and 2
* | DES1 | --|
* | DES2 | --> buffer 1 and 2 len
* | DES3 |
* --------
*
* The MSS is fixed per TSO frame, so the TDES3 context field only needs
* to be programmed when it changes.
*/
static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
{
u32 pay_len, mss;
int tmp_pay_len = 0;
struct stmmac_priv *priv = netdev_priv(dev);
int nfrags = skb_shinfo(skb)->nr_frags;
unsigned int first_entry, des;
struct dma_desc *desc, *first, *mss_desc = NULL;
u8 proto_hdr_len;
int i;
spin_lock(&priv->tx_lock);
/* Compute header lengths */
proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
/* Desc availability based on threshold should be safe enough */
if (unlikely(stmmac_tx_avail(priv) <
(((skb->len - proto_hdr_len) / TSO_MAX_BUFF_SIZE + 1)))) {
if (!netif_queue_stopped(dev)) {
netif_stop_queue(dev);
/* This is a hard error, log it. */
pr_err("%s: Tx Ring full when queue awake\n", __func__);
}
spin_unlock(&priv->tx_lock);
return NETDEV_TX_BUSY;
}
pay_len = skb_headlen(skb) - proto_hdr_len; /* no frags */
mss = skb_shinfo(skb)->gso_size;
/* set new MSS value if needed */
if (mss != priv->mss) {
mss_desc = priv->dma_tx + priv->cur_tx;
priv->hw->desc->set_mss(mss_desc, mss);
priv->mss = mss;
priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
}
if (netif_msg_tx_queued(priv)) {
pr_info("%s: tcphdrlen %d, hdr_len %d, pay_len %d, mss %d\n",
__func__, tcp_hdrlen(skb), proto_hdr_len, pay_len, mss);
pr_info("\tskb->len %d, skb->data_len %d\n", skb->len,
skb->data_len);
}
first_entry = priv->cur_tx;
desc = priv->dma_tx + first_entry;
first = desc;
/* first descriptor: fill Headers on Buf1 */
des = dma_map_single(priv->device, skb->data, skb_headlen(skb),
DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, des))
goto dma_map_err;
priv->tx_skbuff_dma[first_entry].buf = des;
priv->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
priv->tx_skbuff[first_entry] = skb;
first->des0 = des;
/* Fill start of payload in buff2 of first descriptor */
if (pay_len)
first->des1 = des + proto_hdr_len;
/* If needed take extra descriptors to fill the remaining payload */
tmp_pay_len = pay_len - TSO_MAX_BUFF_SIZE;
stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0));
/* Prepare fragments */
for (i = 0; i < nfrags; i++) {
const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
des = skb_frag_dma_map(priv->device, frag, 0,
skb_frag_size(frag),
DMA_TO_DEVICE);
stmmac_tso_allocator(priv, des, skb_frag_size(frag),
(i == nfrags - 1));
priv->tx_skbuff_dma[priv->cur_tx].buf = des;
priv->tx_skbuff_dma[priv->cur_tx].len = skb_frag_size(frag);
priv->tx_skbuff[priv->cur_tx] = NULL;
priv->tx_skbuff_dma[priv->cur_tx].map_as_page = true;
}
priv->tx_skbuff_dma[priv->cur_tx].last_segment = true;
priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
if (unlikely(stmmac_tx_avail(priv) <= (MAX_SKB_FRAGS + 1))) {
if (netif_msg_hw(priv))
pr_debug("%s: stop transmitted packets\n", __func__);
netif_stop_queue(dev);
}
dev->stats.tx_bytes += skb->len;
priv->xstats.tx_tso_frames++;
priv->xstats.tx_tso_nfrags += nfrags;
/* Manage tx mitigation */
priv->tx_count_frames += nfrags + 1;
if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
mod_timer(&priv->txtimer,
STMMAC_COAL_TIMER(priv->tx_coal_timer));
} else {
priv->tx_count_frames = 0;
priv->hw->desc->set_tx_ic(desc);
priv->xstats.tx_set_ic_bit++;
}
if (!priv->hwts_tx_en)
skb_tx_timestamp(skb);
if (unlikely((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
priv->hwts_tx_en)) {
/* declare that device is doing timestamping */
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
priv->hw->desc->enable_tx_timestamp(first);
}
/* Complete the first descriptor before granting the DMA */
priv->hw->desc->prepare_tso_tx_desc(first, 1,
proto_hdr_len,
pay_len,
1, priv->tx_skbuff_dma[first_entry].last_segment,
tcp_hdrlen(skb) / 4, (skb->len - proto_hdr_len));
/* If context desc is used to change MSS */
if (mss_desc)
priv->hw->desc->set_tx_owner(mss_desc);
/* The own bit must be the last setting done when preparing the
* descriptor, and then a barrier is needed to make sure that
* all is coherent before granting the DMA engine.
*/
smp_wmb();
if (netif_msg_pktdata(priv)) {
pr_info("%s: curr=%d dirty=%d f=%d, e=%d, f_p=%p, nfrags %d\n",
__func__, priv->cur_tx, priv->dirty_tx, first_entry,
priv->cur_tx, first, nfrags);
priv->hw->desc->display_ring((void *)priv->dma_tx, DMA_TX_SIZE,
0);
pr_info(">>> frame to be transmitted: ");
print_pkt(skb->data, skb_headlen(skb));
}
netdev_sent_queue(dev, skb->len);
priv->hw->dma->set_tx_tail_ptr(priv->ioaddr, priv->tx_tail_addr,
STMMAC_CHAN0);
spin_unlock(&priv->tx_lock);
return NETDEV_TX_OK;
dma_map_err:
spin_unlock(&priv->tx_lock);
dev_err(priv->device, "Tx dma map failed\n");
dev_kfree_skb(skb);
priv->dev->stats.tx_dropped++;
return NETDEV_TX_OK;
}
/**
* stmmac_xmit - Tx entry point of the driver
* @skb : the socket buffer
......@@ -1952,6 +2145,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
unsigned int entry, first_entry;
struct dma_desc *desc, *first;
unsigned int enh_desc;
unsigned int des;
/* Manage oversized TCP frames for GMAC4 device */
if (skb_is_gso(skb) && priv->tso) {
if (ip_hdr(skb)->protocol == IPPROTO_TCP)
return stmmac_tso_xmit(skb, dev);
}
spin_lock(&priv->tx_lock);
......@@ -1987,7 +2187,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
if (enh_desc)
is_jumbo = priv->hw->mode->is_jumbo_frm(skb->len, enh_desc);
if (unlikely(is_jumbo)) {
if (unlikely(is_jumbo) && likely(priv->synopsys_id <
DWMAC_CORE_4_00)) {
entry = priv->hw->mode->jumbo_frm(priv, skb, csum_insertion);
if (unlikely(entry < 0))
goto dma_map_err;
......@@ -2005,13 +2206,21 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
else
desc = priv->dma_tx + entry;
desc->des2 = skb_frag_dma_map(priv->device, frag, 0, len,
DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, desc->des2))
des = skb_frag_dma_map(priv->device, frag, 0, len,
DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, des))
goto dma_map_err; /* should reuse desc w/o issues */
priv->tx_skbuff[entry] = NULL;
priv->tx_skbuff_dma[entry].buf = desc->des2;
if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00)) {
desc->des0 = des;
priv->tx_skbuff_dma[entry].buf = desc->des0;
} else {
desc->des2 = des;
priv->tx_skbuff_dma[entry].buf = desc->des2;
}
priv->tx_skbuff_dma[entry].map_as_page = true;
priv->tx_skbuff_dma[entry].len = len;
priv->tx_skbuff_dma[entry].last_segment = last_segment;
......@@ -2026,16 +2235,18 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
priv->cur_tx = entry;
if (netif_msg_pktdata(priv)) {
void *tx_head;
pr_debug("%s: curr=%d dirty=%d f=%d, e=%d, first=%p, nfrags=%d",
__func__, priv->cur_tx, priv->dirty_tx, first_entry,
entry, first, nfrags);
if (priv->extend_desc)
stmmac_display_ring((void *)priv->dma_etx,
DMA_TX_SIZE, 1);
tx_head = (void *)priv->dma_etx;
else
stmmac_display_ring((void *)priv->dma_tx,
DMA_TX_SIZE, 0);
tx_head = (void *)priv->dma_tx;
priv->hw->desc->display_ring(tx_head, DMA_TX_SIZE, false);
pr_debug(">>> frame to be transmitted: ");
print_pkt(skb->data, skb->len);
......@@ -2074,12 +2285,19 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
if (likely(!is_jumbo)) {
bool last_segment = (nfrags == 0);
first->des2 = dma_map_single(priv->device, skb->data,
nopaged_len, DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, first->des2))
des = dma_map_single(priv->device, skb->data,
nopaged_len, DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, des))
goto dma_map_err;
priv->tx_skbuff_dma[first_entry].buf = first->des2;
if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00)) {
first->des0 = des;
priv->tx_skbuff_dma[first_entry].buf = first->des0;
} else {
first->des2 = des;
priv->tx_skbuff_dma[first_entry].buf = first->des2;
}
priv->tx_skbuff_dma[first_entry].len = nopaged_len;
priv->tx_skbuff_dma[first_entry].last_segment = last_segment;
......@@ -2103,7 +2321,12 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
}
netdev_sent_queue(dev, skb->len);
priv->hw->dma->enable_dma_transmission(priv->ioaddr);
if (priv->synopsys_id < DWMAC_CORE_4_00)
priv->hw->dma->enable_dma_transmission(priv->ioaddr);
else
priv->hw->dma->set_tx_tail_ptr(priv->ioaddr, priv->tx_tail_addr,
STMMAC_CHAN0);
spin_unlock(&priv->tx_lock);
return NETDEV_TX_OK;
......@@ -2185,9 +2408,15 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv)
dev_kfree_skb(skb);
break;
}
p->des2 = priv->rx_skbuff_dma[entry];
priv->hw->mode->refill_desc3(priv, p);
if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00)) {
p->des0 = priv->rx_skbuff_dma[entry];
p->des1 = 0;
} else {
p->des2 = priv->rx_skbuff_dma[entry];
}
if (priv->hw->mode->refill_desc3)
priv->hw->mode->refill_desc3(priv, p);
if (priv->rx_zeroc_thresh > 0)
priv->rx_zeroc_thresh--;
......@@ -2195,9 +2424,13 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv)
if (netif_msg_rx_status(priv))
pr_debug("\trefill entry #%d\n", entry);
}
wmb();
priv->hw->desc->set_rx_owner(p);
if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00))
priv->hw->desc->init_rx_desc(p, priv->use_riwt, 0, 0);
else
priv->hw->desc->set_rx_owner(p);
wmb();
entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
......@@ -2220,13 +2453,15 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit)
int coe = priv->hw->rx_csum;
if (netif_msg_rx_status(priv)) {
void *rx_head;
pr_debug("%s: descriptor ring:\n", __func__);
if (priv->extend_desc)
stmmac_display_ring((void *)priv->dma_erx,
DMA_RX_SIZE, 1);
rx_head = (void *)priv->dma_erx;
else
stmmac_display_ring((void *)priv->dma_rx,
DMA_RX_SIZE, 0);
rx_head = (void *)priv->dma_rx;
priv->hw->desc->display_ring(rx_head, DMA_RX_SIZE, true);
}
while (count < limit) {
int status;
......@@ -2276,11 +2511,23 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit)
} else {
struct sk_buff *skb;
int frame_len;
unsigned int des;
if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00))
des = p->des0;
else
des = p->des2;
frame_len = priv->hw->desc->get_rx_frame_len(p, coe);
/* check if frame_len fits the preallocated memory */
/* If the frame length is greater than the skb buffer size
* (preallocated during init) then the packet is
* ignored
*/
if (frame_len > priv->dma_buf_sz) {
pr_err("%s: len %d larger than size (%d)\n",
priv->dev->name, frame_len,
priv->dma_buf_sz);
priv->dev->stats.rx_length_errors++;
break;
}
......@@ -2293,14 +2540,19 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit)
if (netif_msg_rx_status(priv)) {
pr_debug("\tdesc: %p [entry %d] buff=0x%x\n",
p, entry, p->des2);
p, entry, des);
if (frame_len > ETH_FRAME_LEN)
pr_debug("\tframe size %d, COE: %d\n",
frame_len, status);
}
if (unlikely((frame_len < priv->rx_copybreak) ||
stmmac_rx_threshold_count(priv))) {
/* Zero-copy is always used on GMAC4, whatever the frame
* size, because the used descriptors always need to be
* refilled.
*/
if (unlikely(!priv->plat->has_gmac4 &&
((frame_len < priv->rx_copybreak) ||
stmmac_rx_threshold_count(priv)))) {
skb = netdev_alloc_skb_ip_align(priv->dev,
frame_len);
if (unlikely(!skb)) {
......@@ -2452,7 +2704,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
return -EBUSY;
}
if (priv->plat->enh_desc)
if ((priv->plat->enh_desc) || (priv->synopsys_id >= DWMAC_CORE_4_00))
max_mtu = JUMBO_LEN;
else
max_mtu = SKB_MAX_HEAD(NET_SKB_PAD + NET_IP_ALIGN);
......@@ -2466,6 +2718,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
}
dev->mtu = new_mtu;
netdev_update_features(dev);
return 0;
......@@ -2490,6 +2743,14 @@ static netdev_features_t stmmac_fix_features(struct net_device *dev,
if (priv->plat->bugged_jumbo && (dev->mtu > ETH_DATA_LEN))
features &= ~NETIF_F_CSUM_MASK;
/* Disable tso if asked by ethtool */
if ((priv->plat->tso_en) && (priv->dma_cap.tsoen)) {
if (features & NETIF_F_TSO)
priv->tso = true;
else
priv->tso = false;
}
return features;
}
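Toggling TSO from userspace (ethtool -K ethX tso on|off) funnels through this ndo_fix_features hook, which is how priv->tso tracks the feature bit. A hypothetical standalone helper mirroring the gating logic above:

	static void stmmac_tso_track(struct stmmac_priv *priv,
				     netdev_features_t features)
	{
		/* TSO is honoured only when both the platform flag and the
		 * HW capability register advertise it.
		 */
		if (priv->plat->tso_en && priv->dma_cap.tsoen)
			priv->tso = !!(features & NETIF_F_TSO);
	}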
......@@ -2536,7 +2797,7 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
}
/* To handle GMAC own interrupts */
if (priv->plat->has_gmac) {
if ((priv->plat->has_gmac) || (priv->plat->has_gmac4)) {
int status = priv->hw->mac->host_irq_status(priv->hw,
&priv->xstats);
if (unlikely(status)) {
......@@ -2545,6 +2806,10 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
priv->tx_path_in_lpi_mode = true;
if (status & CORE_IRQ_TX_PATH_EXIT_LPI_MODE)
priv->tx_path_in_lpi_mode = false;
if (status & CORE_IRQ_MTL_RX_OVERFLOW)
priv->hw->dma->set_rx_tail_ptr(priv->ioaddr,
priv->rx_tail_addr,
STMMAC_CHAN0);
}
}
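Note that an MTL RX FIFO overflow is recovered here without resetting the ring: rewriting the RX tail pointer (the same single-register kick sketched for the TX side above) hands the already-refilled descriptors back to the DMA so reception can resume.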
......@@ -2617,15 +2882,14 @@ static void sysfs_display_ring(void *head, int size, int extend_desc,
x = *(u64 *) ep;
seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
(unsigned int)x, (unsigned int)(x >> 32),
ep->basic.des0, ep->basic.des1,
ep->basic.des2, ep->basic.des3);
ep++;
} else {
x = *(u64 *) p;
seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
(unsigned int)x, (unsigned int)(x >> 32),
p->des2, p->des3);
p->des0, p->des1, p->des2, p->des3);
p++;
}
seq_printf(seq, "\n");
......@@ -2708,10 +2972,15 @@ static int stmmac_sysfs_dma_cap_read(struct seq_file *seq, void *v)
seq_printf(seq, "\tAV features: %s\n", (priv->dma_cap.av) ? "Y" : "N");
seq_printf(seq, "\tChecksum Offload in TX: %s\n",
(priv->dma_cap.tx_coe) ? "Y" : "N");
seq_printf(seq, "\tIP Checksum Offload (type1) in RX: %s\n",
(priv->dma_cap.rx_coe_type1) ? "Y" : "N");
seq_printf(seq, "\tIP Checksum Offload (type2) in RX: %s\n",
(priv->dma_cap.rx_coe_type2) ? "Y" : "N");
if (priv->synopsys_id >= DWMAC_CORE_4_00) {
seq_printf(seq, "\tIP Checksum Offload in RX: %s\n",
(priv->dma_cap.rx_coe) ? "Y" : "N");
} else {
seq_printf(seq, "\tIP Checksum Offload (type1) in RX: %s\n",
(priv->dma_cap.rx_coe_type1) ? "Y" : "N");
seq_printf(seq, "\tIP Checksum Offload (type2) in RX: %s\n",
(priv->dma_cap.rx_coe_type2) ? "Y" : "N");
}
seq_printf(seq, "\tRXFIFO > 2048bytes: %s\n",
(priv->dma_cap.rxfifo_over_2048) ? "Y" : "N");
seq_printf(seq, "\tNumber of Additional RX channel: %d\n",
......@@ -2820,27 +3089,35 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
priv->dev->priv_flags |= IFF_UNICAST_FLT;
mac = dwmac1000_setup(priv->ioaddr,
priv->plat->multicast_filter_bins,
priv->plat->unicast_filter_entries);
priv->plat->unicast_filter_entries,
&priv->synopsys_id);
} else if (priv->plat->has_gmac4) {
priv->dev->priv_flags |= IFF_UNICAST_FLT;
mac = dwmac4_setup(priv->ioaddr,
priv->plat->multicast_filter_bins,
priv->plat->unicast_filter_entries,
&priv->synopsys_id);
} else {
mac = dwmac100_setup(priv->ioaddr);
mac = dwmac100_setup(priv->ioaddr, &priv->synopsys_id);
}
if (!mac)
return -ENOMEM;
priv->hw = mac;
/* Get and dump the chip ID */
priv->synopsys_id = stmmac_get_synopsys_id(priv);
/* To use the chained or ring mode */
if (chain_mode) {
priv->hw->mode = &chain_mode_ops;
pr_info(" Chain mode enabled\n");
priv->mode = STMMAC_CHAIN_MODE;
if (priv->synopsys_id >= DWMAC_CORE_4_00) {
priv->hw->mode = &dwmac4_ring_mode_ops;
} else {
priv->hw->mode = &ring_mode_ops;
pr_info(" Ring mode enabled\n");
priv->mode = STMMAC_RING_MODE;
if (chain_mode) {
priv->hw->mode = &chain_mode_ops;
pr_info(" Chain mode enabled\n");
priv->mode = STMMAC_CHAIN_MODE;
} else {
priv->hw->mode = &ring_mode_ops;
pr_info(" Ring mode enabled\n");
priv->mode = STMMAC_RING_MODE;
}
}
/* Get the HW capability (new GMAC newer than 3.50a) */
......@@ -2856,11 +3133,9 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
priv->plat->enh_desc = priv->dma_cap.enh_desc;
priv->plat->pmt = priv->dma_cap.pmt_remote_wake_up;
/* TXCOE doesn't work in thresh DMA mode */
if (priv->plat->force_thresh_dma_mode)
priv->plat->tx_coe = 0;
else
priv->plat->tx_coe = priv->dma_cap.tx_coe;
priv->plat->tx_coe = priv->dma_cap.tx_coe;
/* In case of GMAC4, rx_coe comes from the HW capability register. */
priv->plat->rx_coe = priv->dma_cap.rx_coe;
if (priv->dma_cap.rx_coe_type2)
priv->plat->rx_coe = STMMAC_RX_COE_TYPE2;
......@@ -2870,13 +3145,17 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
} else
pr_info(" No HW DMA feature register supported");
/* To use alternate (extended) or normal descriptor structures */
stmmac_selec_desc_mode(priv);
/* To use alternate (extended), normal or GMAC4 descriptor structures */
if (priv->synopsys_id >= DWMAC_CORE_4_00)
priv->hw->desc = &dwmac4_desc_ops;
else
stmmac_selec_desc_mode(priv);
if (priv->plat->rx_coe) {
priv->hw->rx_csum = priv->plat->rx_coe;
pr_info(" RX Checksum Offload Engine supported (type %d)\n",
priv->plat->rx_coe);
pr_info(" RX Checksum Offload Engine supported\n");
if (priv->synopsys_id < DWMAC_CORE_4_00)
pr_info("\tCOE Type %d\n", priv->hw->rx_csum);
}
if (priv->plat->tx_coe)
pr_info(" TX Checksum insertion supported\n");
......@@ -2886,6 +3165,9 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
device_set_wakeup_capable(priv->device, 1);
}
if (priv->dma_cap.tsoen)
pr_info(" TSO supported\n");
return 0;
}
......@@ -2989,6 +3271,12 @@ int stmmac_dvr_probe(struct device *device,
ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_RXCSUM;
if ((priv->plat->tso_en) && (priv->dma_cap.tsoen)) {
ndev->hw_features |= NETIF_F_TSO;
priv->tso = true;
pr_info(" TSO feature enabled\n");
}
ndev->features |= ndev->hw_features | NETIF_F_HIGHDMA;
ndev->watchdog_timeo = msecs_to_jiffies(watchdog);
#ifdef STMMAC_VLAN_TAG_USED
......@@ -3183,6 +3471,11 @@ int stmmac_resume(struct net_device *ndev)
priv->dirty_rx = 0;
priv->dirty_tx = 0;
priv->cur_tx = 0;
/* Reset the private MSS value to force the MSS context
* settings at the next TSO xmit (only used for GMAC4).
*/
priv->mss = 0;
stmmac_clear_descriptors(priv);
stmmac_hw_setup(ndev, false);
......
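The priv->mss reset above only matters for the GMAC4 TSO path. A sketch of the assumed xmit-side flow it targets (callback and field names follow the series' conventions but are illustrative here):

	mss = skb_shinfo(skb)->gso_size;
	if (mss != priv->mss) {
		/* MSS changed (or was reset on resume): queue a context
		 * descriptor carrying the new value ahead of the data
		 * descriptors.
		 */
		struct dma_desc *mss_desc = priv->dma_tx + priv->cur_tx;

		priv->hw->desc->set_mss(mss_desc, mss);
		priv->mss = mss;
		priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
	}

Setting priv->mss to 0 on resume guarantees the comparison fails on the first TSO frame, so a fresh context descriptor is always emitted.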
......@@ -284,6 +284,13 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
plat->pmt = 1;
}
if (of_device_is_compatible(np, "snps,dwmac-4.00") ||
of_device_is_compatible(np, "snps,dwmac-4.10a")) {
plat->has_gmac4 = 1;
plat->pmt = 1;
plat->tso_en = of_property_read_bool(np, "snps,tso");
}
if (of_device_is_compatible(np, "snps,dwmac-3.610") ||
of_device_is_compatible(np, "snps,dwmac-3.710")) {
plat->enh_desc = 1;
......
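For reference, a board device tree taking this probe path could describe the controller as follows (addresses and interrupt numbers are placeholders; only the compatible string and the snps,tso property come from the code above):

	ethernet@40028000 {
		compatible = "snps,dwmac-4.10a";
		reg = <0x40028000 0x8000>;
		interrupts = <61>;
		interrupt-names = "macirq";
		snps,tso;
	};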
......@@ -137,5 +137,7 @@ struct plat_stmmacenet_data {
void (*exit)(struct platform_device *pdev, void *priv);
void *bsp_priv;
struct stmmac_axi *axi;
int has_gmac4;
bool tso_en;
};
#endif