Commit cc6216ba authored by David S. Miller

Merge branch 'mvpp2-tx-flow-control'

Stefan Chulski says:

====================
net: mvpp2: Add TX Flow Control support

Armada hardware has a pause generation mechanism in the GOP (MAC).
The GOP generates flow control frames based on an indication programmed in the Ports Control 0 Register, which has one bit per port.
However, asserting a PortX Pause bit in the Ports Control 0 Register only sends a single pause frame.
To complement this, the GOP can periodically send pause control messages based on periodic counters.
This mechanism ensures that the pause remains effective as long as the appropriate PortX Pause bit is asserted.
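
For illustration, the periodic pause generation is handled by the per-port FCA block. Below is a minimal sketch of the driver side, based on the FCA register definitions and helpers added later in this series (mvpp22_gop_fca_set_timer()/mvpp22_gop_fca_enable_periodic()); 'timer' stands for the value derived from tclk, and error handling is omitted:

    void __iomem *fca = port->priv->iface_base + MVPP22_FCA_BASE(port->gop_id);
    u32 val;

    /* program the periodic counter (LSB/MSB halves), then enable periodic pause */
    writel(timer & MVPP22_FCA_REG_MASK, fca + MVPP22_PERIODIC_COUNTER_LSB_REG);
    writel(timer >> MVPP22_FCA_REG_SIZE, fca + MVPP22_PERIODIC_COUNTER_MSB_REG);

    val = readl(fca + MVPP22_FCA_CONTROL_REG);
    val |= MVPP22_FCA_ENABLE_PERIODIC;
    writel(val, fca + MVPP22_FCA_CONTROL_REG);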

The problem is that the Packet Processor, which can actually drop packets due to lack of resources, is not connected to the GOP flow control generation mechanism.
To solve this, Armada has firmware running on a CM3 CPU dedicated to Flow Control support.
The firmware monitors Packet Processor resources and asserts XON/XOFF by writing to the Ports Control 0 Register.
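
The driver's side of this handshake goes through the MSS_FC_COM_REG word in the shared SRAM. A simplified sketch of the global enable sequence, taken from the mvpp2_enable_global_fc() routine added later in this series:

    val = mvpp2_cm3_read(priv, MSS_FC_COM_REG);
    val |= FLOW_CONTROL_ENABLE_BIT;
    mvpp2_cm3_write(priv, MSS_FC_COM_REG, val);

    /* ask the firmware to pick up the new config; the firmware clears the
     * update bit once done, so a bit that never clears means the CM3
     * firmware is too old to support flow control
     */
    val |= FLOW_CONTROL_UPDATE_COMMAND_BIT;
    mvpp2_cm3_write(priv, MSS_FC_COM_REG, val);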

The MSS shared SRAM memory is used to communicate between the CM3 firmware and the PP2 driver.
During init, the PP2 driver informs the firmware about the BM pools and RXQs in use, and about the congestion and depletion thresholds.

Pause frames are generated whenever congestion or depletion of resources is detected.
Back pressure is stopped when the resource returns to a sufficient level.
The congestion/depletion and sufficient levels thus implement a hysteresis that reduces the XON/XOFF toggle frequency.
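
As an example, the per-RXQ thresholds implementing this hysteresis are programmed into the MSS shared SRAM as shown below (a simplified excerpt of the mvpp2_rxq_enable_fc() code added in this series; one 32-bit word per RXQ, start threshold in the low 16 bits, stop threshold in the high 16 bits):

    /* values used by this series: start = 1024, stop = 768 descriptors */
    val = MSS_THRESHOLD_START;
    val |= MSS_THRESHOLD_STOP << MSS_RXQ_TRESH_STOP_OFFS;
    mvpp2_cm3_write(port->priv, MSS_RXQ_TRESH_REG(q, fq), val);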

Packet Processor v23 hardware introduces support for an RX FIFO fill level monitor.
Patch "add PPv23 version definition" distinguishes between v23 and v22 hardware.
Patch "add TX FC firmware check" verifies that the CM3 firmware supports Flow Control monitoring.

v12 --> v13
- Remove bm_underrun_protect module_param

v11 --> v12
- Improve warning message in "net: mvpp2: add TX FC firmware check" patch

v10 --> v11
- Improve "net: mvpp2: add CM3 SRAM memory map" comment
- Move condition check to 'net: mvpp2: always compare hw-version vs MVPP21' patch

v9 --> v10
- Add CM3 SRAM description to PPv2 documentation

v8 --> v9
- Replace generic pool allocation with devm_ioremap_resource

v7 --> v8
- Reorder "always compare hw-version vs MVPP21" and "add PPv23 version definition" commits
- Typo fixes
- Remove condition fix from "add RXQ flow control configurations"

v6 --> v7
- Reduce patch set from 18 to 15 patches
 - Documentation change combined into a single patch
 - RXQ and BM size change combined into a single patch
 - Ring size change check moved into "add RXQ flow control configurations" commit

v5 --> v6
- No change

v4 --> v5
- Add missed Signed-off
- Fix warnings in patches 3 and 12
- Add revision requirement to warning message
- Move mss_spinlock into RXQ flow control configurations patch
- Improve FCA RXQ non occupied descriptor threshold commit message

v3 --> v4
- Remove RFC tag

v2 --> v3
- Remove inline functions
- Add PPv2.3 description into marvell-pp2.txt
- Improve mvpp2_interrupts_mask/unmask procedure
- Improve FC enable/disable procedure
- Add priv->sram_pool check
- Remove gen_pool_destroy call
- Reduce Flow Control timer to x100 faster

v1 --> v2
- Add memory requirements information
- Add EPROBE_DEFER if of_gen_pool_get return NULL
- Move Flow control configuration to mvpp2_mac_link_up callback
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents f2fa0e5e 9ca5e767
* Marvell Armada 375 Ethernet Controller (PPv2.1)
Marvell Armada 7K/8K Ethernet Controller (PPv2.2)
Marvell CN913X Ethernet Controller (PPv2.3)
Required properties:
......@@ -12,10 +13,11 @@ Required properties:
- common controller registers
- LMS registers
- one register area per Ethernet port
For "marvell,armada-7k-pp2", must contain the following register
For "marvell,armada-7k-pp2" used by 7K/8K and CN913X, must contain the following register
sets:
- packet processor registers
- networking interfaces registers
- CM3 address space used for TX Flow Control
- clocks: pointers to the reference clocks for this device, consequently:
- main controller clock (for both armada-375-pp2 and armada-7k-pp2)
......@@ -81,7 +83,7 @@ Example for marvell,armada-7k-pp2:
cpm_ethernet: ethernet@0 {
compatible = "marvell,armada-7k-pp22";
reg = <0x0 0x100000>, <0x129000 0xb000>;
reg = <0x0 0x100000>, <0x129000 0xb000>, <0x220000 0x800>;
clocks = <&cpm_syscon0 1 3>, <&cpm_syscon0 1 9>,
<&cpm_syscon0 1 5>, <&cpm_syscon0 1 6>, <&cpm_syscon0 1 18>;
clock-names = "pp_clk", "gop_clk", "mg_clk", "mg_core_clk", "axi_clk";
......
......@@ -59,7 +59,7 @@ config-space@CP11X_BASE {
CP11X_LABEL(ethernet): ethernet@0 {
compatible = "marvell,armada-7k-pp22";
reg = <0x0 0x100000>, <0x129000 0xb000>;
reg = <0x0 0x100000>, <0x129000 0xb000>, <0x220000 0x800>;
clocks = <&CP11X_LABEL(clk) 1 3>, <&CP11X_LABEL(clk) 1 9>,
<&CP11X_LABEL(clk) 1 5>, <&CP11X_LABEL(clk) 1 6>,
<&CP11X_LABEL(clk) 1 18>;
......
......@@ -60,6 +60,9 @@
/* Top Registers */
#define MVPP2_MH_REG(port) (0x5040 + 4 * (port))
#define MVPP2_DSA_EXTENDED BIT(5)
#define MVPP2_VER_ID_REG 0x50b0
#define MVPP2_VER_PP22 0x10
#define MVPP2_VER_PP23 0x11
/* Parser Registers */
#define MVPP2_PRS_INIT_LOOKUP_REG 0x1000
......@@ -292,6 +295,8 @@
#define MVPP2_PON_CAUSE_TXP_OCCUP_DESC_ALL_MASK 0x3fc00000
#define MVPP2_PON_CAUSE_MISC_SUM_MASK BIT(31)
#define MVPP2_ISR_MISC_CAUSE_REG 0x55b0
#define MVPP2_ISR_RX_ERR_CAUSE_REG(port) (0x5520 + 4 * (port))
#define MVPP2_ISR_RX_ERR_CAUSE_NONOCC_MASK 0x00ff
/* Buffer Manager registers */
#define MVPP2_BM_POOL_BASE_REG(pool) (0x6000 + ((pool) * 4))
......@@ -319,6 +324,10 @@
#define MVPP2_BM_HIGH_THRESH_MASK 0x7f0000
#define MVPP2_BM_HIGH_THRESH_VALUE(val) ((val) << \
MVPP2_BM_HIGH_THRESH_OFFS)
#define MVPP2_BM_BPPI_HIGH_THRESH 0x1E
#define MVPP2_BM_BPPI_LOW_THRESH 0x1C
#define MVPP23_BM_BPPI_HIGH_THRESH 0x34
#define MVPP23_BM_BPPI_LOW_THRESH 0x28
#define MVPP2_BM_INTR_CAUSE_REG(pool) (0x6240 + ((pool) * 4))
#define MVPP2_BM_RELEASED_DELAY_MASK BIT(0)
#define MVPP2_BM_ALLOC_FAILED_MASK BIT(1)
......@@ -347,6 +356,10 @@
#define MVPP2_OVERRUN_ETH_DROP 0x7000
#define MVPP2_CLS_ETH_DROP 0x7020
#define MVPP22_BM_POOL_BASE_ADDR_HIGH_REG 0x6310
#define MVPP22_BM_POOL_BASE_ADDR_HIGH_MASK 0xff
#define MVPP23_BM_8POOL_MODE BIT(8)
/* Hit counters registers */
#define MVPP2_CTRS_IDX 0x7040
#define MVPP22_CTRS_TX_CTR(port, txq) ((txq) | ((port) << 3) | BIT(7))
......@@ -469,7 +482,7 @@
#define MVPP22_GMAC_INT_SUM_MASK_LINK_STAT BIT(1)
#define MVPP22_GMAC_INT_SUM_MASK_PTP BIT(2)
/* Per-port XGMAC registers. PPv2.2 only, only for GOP port 0,
/* Per-port XGMAC registers. PPv2.2 and PPv2.3, only for GOP port 0,
* relative to port->base.
*/
#define MVPP22_XLG_CTRL0_REG 0x100
......@@ -506,7 +519,7 @@
#define MVPP22_XLG_CTRL4_MACMODSELECT_GMAC BIT(12)
#define MVPP22_XLG_CTRL4_EN_IDLE_CHECK BIT(14)
/* SMI registers. PPv2.2 only, relative to priv->iface_base. */
/* SMI registers. PPv2.2 and PPv2.3, relative to priv->iface_base. */
#define MVPP22_SMI_MISC_CFG_REG 0x1204
#define MVPP22_SMI_POLLING_EN BIT(10)
......@@ -582,7 +595,7 @@
#define MVPP2_QUEUE_NEXT_DESC(q, index) \
(((index) < (q)->last_desc) ? ((index) + 1) : 0)
/* XPCS registers. PPv2.2 only */
/* XPCS registers. PPv2.2 and PPv2.3 */
#define MVPP22_MPCS_BASE(port) (0x7000 + (port) * 0x1000)
#define MVPP22_MPCS_CTRL 0x14
#define MVPP22_MPCS_CTRL_FWD_ERR_CONN BIT(10)
......@@ -593,7 +606,16 @@
#define MVPP22_MPCS_CLK_RESET_DIV_RATIO(n) ((n) << 4)
#define MVPP22_MPCS_CLK_RESET_DIV_SET BIT(11)
/* XPCS registers. PPv2.2 only */
/* FCA registers. PPv2.2 and PPv2.3 */
#define MVPP22_FCA_BASE(port) (0x7600 + (port) * 0x1000)
#define MVPP22_FCA_REG_SIZE 16
#define MVPP22_FCA_REG_MASK 0xFFFF
#define MVPP22_FCA_CONTROL_REG 0x0
#define MVPP22_FCA_ENABLE_PERIODIC BIT(11)
#define MVPP22_PERIODIC_COUNTER_LSB_REG (0x110)
#define MVPP22_PERIODIC_COUNTER_MSB_REG (0x114)
/* XPCS registers. PPv2.2 and PPv2.3 */
#define MVPP22_XPCS_BASE(port) (0x7400 + (port) * 0x1000)
#define MVPP22_XPCS_CFG0 0x0
#define MVPP22_XPCS_CFG0_RESET_DIS BIT(0)
......@@ -712,8 +734,8 @@
#define MVPP2_PORT_MAX_RXQ 32
/* Max number of Rx descriptors */
#define MVPP2_MAX_RXD_MAX 1024
#define MVPP2_MAX_RXD_DFLT 128
#define MVPP2_MAX_RXD_MAX 2048
#define MVPP2_MAX_RXD_DFLT 1024
/* Max number of Tx descriptors */
#define MVPP2_MAX_TXD_MAX 2048
......@@ -748,6 +770,66 @@
#define MVPP2_TX_FIFO_THRESHOLD(kb) \
((kb) * 1024 - MVPP2_TX_FIFO_THRESHOLD_MIN)
/* RX FIFO threshold in 1KB granularity */
#define MVPP23_PORT0_FIFO_TRSH (9 * 1024)
#define MVPP23_PORT1_FIFO_TRSH (4 * 1024)
#define MVPP23_PORT2_FIFO_TRSH (2 * 1024)
/* RX Flow Control Registers */
#define MVPP2_RX_FC_REG(port) (0x150 + 4 * (port))
#define MVPP2_RX_FC_EN BIT(24)
#define MVPP2_RX_FC_TRSH_OFFS 16
#define MVPP2_RX_FC_TRSH_MASK (0xFF << MVPP2_RX_FC_TRSH_OFFS)
#define MVPP2_RX_FC_TRSH_UNIT 256
/* MSS Flow control */
#define MSS_FC_COM_REG 0
#define FLOW_CONTROL_ENABLE_BIT BIT(0)
#define FLOW_CONTROL_UPDATE_COMMAND_BIT BIT(31)
#define FC_QUANTA 0xFFFF
#define FC_CLK_DIVIDER 100
#define MSS_RXQ_TRESH_BASE 0x200
#define MSS_RXQ_TRESH_OFFS 4
#define MSS_RXQ_TRESH_REG(q, fq) (MSS_RXQ_TRESH_BASE + (((q) + (fq)) \
* MSS_RXQ_TRESH_OFFS))
#define MSS_BUF_POOL_BASE 0x40
#define MSS_BUF_POOL_OFFS 4
#define MSS_BUF_POOL_REG(id) (MSS_BUF_POOL_BASE \
+ (id) * MSS_BUF_POOL_OFFS)
#define MSS_BUF_POOL_STOP_MASK 0xFFF
#define MSS_BUF_POOL_START_MASK (0xFFF << MSS_BUF_POOL_START_OFFS)
#define MSS_BUF_POOL_START_OFFS 12
#define MSS_BUF_POOL_PORTS_MASK (0xF << MSS_BUF_POOL_PORTS_OFFS)
#define MSS_BUF_POOL_PORTS_OFFS 24
#define MSS_BUF_POOL_PORT_OFFS(id) (0x1 << \
((id) + MSS_BUF_POOL_PORTS_OFFS))
#define MSS_RXQ_TRESH_START_MASK 0xFFFF
#define MSS_RXQ_TRESH_STOP_MASK (0xFFFF << MSS_RXQ_TRESH_STOP_OFFS)
#define MSS_RXQ_TRESH_STOP_OFFS 16
#define MSS_RXQ_ASS_BASE 0x80
#define MSS_RXQ_ASS_OFFS 4
#define MSS_RXQ_ASS_PER_REG 4
#define MSS_RXQ_ASS_PER_OFFS 8
#define MSS_RXQ_ASS_PORTID_OFFS 0
#define MSS_RXQ_ASS_PORTID_MASK 0x3
#define MSS_RXQ_ASS_HOSTID_OFFS 2
#define MSS_RXQ_ASS_HOSTID_MASK 0x3F
#define MSS_RXQ_ASS_Q_BASE(q, fq) ((((q) + (fq)) % MSS_RXQ_ASS_PER_REG) \
* MSS_RXQ_ASS_PER_OFFS)
#define MSS_RXQ_ASS_PQ_BASE(q, fq) ((((q) + (fq)) / MSS_RXQ_ASS_PER_REG) \
* MSS_RXQ_ASS_OFFS)
#define MSS_RXQ_ASS_REG(q, fq) (MSS_RXQ_ASS_BASE + MSS_RXQ_ASS_PQ_BASE(q, fq))
#define MSS_THRESHOLD_STOP 768
#define MSS_THRESHOLD_START 1024
#define MSS_FC_MAX_TIMEOUT 5000
/* RX buffer constants */
#define MVPP2_SKB_SHINFO_SIZE \
SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
......@@ -845,8 +927,8 @@ enum mvpp22_ptp_packet_format {
#define MVPP22_PTP_TIMESTAMPQUEUESELECT BIT(18)
/* BM constants */
#define MVPP2_BM_JUMBO_BUF_NUM 512
#define MVPP2_BM_LONG_BUF_NUM 1024
#define MVPP2_BM_JUMBO_BUF_NUM 2048
#define MVPP2_BM_LONG_BUF_NUM 2048
#define MVPP2_BM_SHORT_BUF_NUM 2048
#define MVPP2_BM_POOL_SIZE_MAX (16*1024 - MVPP2_BM_POOL_PTR_ALIGN/4)
#define MVPP2_BM_POOL_PTR_ALIGN 128
......@@ -925,16 +1007,18 @@ struct mvpp2 {
/* Shared registers' base addresses */
void __iomem *lms_base;
void __iomem *iface_base;
void __iomem *cm3_base;
/* On PPv2.2, each "software thread" can access the base
/* On PPv2.2 and PPv2.3, each "software thread" can access the base
* register through a separate address space, each 64 KB apart
* from each other. Typically, such address spaces will be
* used per CPU.
*/
void __iomem *swth_base[MVPP2_MAX_THREADS];
/* On PPv2.2, some port control registers are located into the system
* controller space. These registers are accessible through a regmap.
/* On PPv2.2 and PPv2.3, some port control registers are located into
* the system controller space. These registers are accessible
* through a regmap.
*/
struct regmap *sysctrl_base;
......@@ -976,7 +1060,7 @@ struct mvpp2 {
u32 tclk;
/* HW version */
enum { MVPP21, MVPP22 } hw_version;
enum { MVPP21, MVPP22, MVPP23 } hw_version;
/* Maximum number of RXQs per port */
unsigned int max_port_rxqs;
......@@ -996,6 +1080,12 @@ struct mvpp2 {
/* page_pool allocator */
struct page_pool *page_pool[MVPP2_PORT_MAX_RXQ];
/* Global TX Flow Control config */
bool global_tx_fc;
/* Spinlocks for CM3 shared memory configuration */
spinlock_t mss_spinlock;
};
struct mvpp2_pcpu_stats {
......@@ -1158,6 +1248,9 @@ struct mvpp2_port {
bool rx_hwtstamp;
enum hwtstamp_tx_types tx_hwtstamp_type;
struct mvpp2_hwtstamp_queue tx_hwtstamp_queue[2];
/* Firmware TX flow control */
bool tx_fc;
};
/* The mvpp2_tx_desc and mvpp2_rx_desc structures describe the
......@@ -1220,7 +1313,7 @@ struct mvpp21_rx_desc {
__le32 reserved8;
};
/* HW TX descriptor for PPv2.2 */
/* HW TX descriptor for PPv2.2 and PPv2.3 */
struct mvpp22_tx_desc {
__le32 command;
u8 packet_offset;
......@@ -1232,7 +1325,7 @@ struct mvpp22_tx_desc {
__le64 buf_cookie_misc;
};
/* HW RX descriptor for PPv2.2 */
/* HW RX descriptor for PPv2.2 and PPv2.3 */
struct mvpp22_rx_desc {
__le32 status;
__le16 reserved1;
......@@ -1418,6 +1511,8 @@ void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name);
void mvpp2_dbgfs_cleanup(struct mvpp2 *priv);
void mvpp23_rx_fifo_fc_en(struct mvpp2 *priv, int port, bool en);
#ifdef CONFIG_MVPP2_PTP
int mvpp22_tai_probe(struct device *dev, struct mvpp2 *priv);
void mvpp22_tai_tstamp(struct mvpp2_tai *tai, u32 tstamp,
......@@ -1450,4 +1545,5 @@ static inline bool mvpp22_rx_hwtstamping(struct mvpp2_port *port)
{
return IS_ENABLED(CONFIG_MVPP2_PTP) && port->rx_hwtstamp;
}
#endif
......@@ -91,6 +91,16 @@ static inline u32 mvpp2_cpu_to_thread(struct mvpp2 *priv, int cpu)
return cpu % priv->nthreads;
}
static void mvpp2_cm3_write(struct mvpp2 *priv, u32 offset, u32 data)
{
writel(data, priv->cm3_base + offset);
}
static u32 mvpp2_cm3_read(struct mvpp2 *priv, u32 offset)
{
return readl(priv->cm3_base + offset);
}
static struct page_pool *
mvpp2_create_page_pool(struct device *dev, int num, int len,
enum dma_data_direction dma_dir)
......@@ -319,7 +329,7 @@ static int mvpp2_get_nrxqs(struct mvpp2 *priv)
{
unsigned int nrxqs;
if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_SINGLE_MODE)
if (priv->hw_version != MVPP21 && queue_mode == MVPP2_QDIST_SINGLE_MODE)
return 1;
/* According to the PPv2.2 datasheet and our experiments on
......@@ -384,7 +394,7 @@ static int mvpp2_bm_pool_create(struct device *dev, struct mvpp2 *priv,
if (!IS_ALIGNED(size, 16))
return -EINVAL;
/* PPv2.1 needs 8 bytes per buffer pointer, PPv2.2 needs 16
/* PPv2.1 needs 8 bytes per buffer pointer, PPv2.2 and PPv2.3 needs 16
* bytes per buffer pointer
*/
if (priv->hw_version == MVPP21)
......@@ -413,6 +423,19 @@ static int mvpp2_bm_pool_create(struct device *dev, struct mvpp2 *priv,
val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(bm_pool->id));
val |= MVPP2_BM_START_MASK;
val &= ~MVPP2_BM_LOW_THRESH_MASK;
val &= ~MVPP2_BM_HIGH_THRESH_MASK;
/* Set 8 Pools BPPI threshold for MVPP23 */
if (priv->hw_version == MVPP23) {
val |= MVPP2_BM_LOW_THRESH_VALUE(MVPP23_BM_BPPI_LOW_THRESH);
val |= MVPP2_BM_HIGH_THRESH_VALUE(MVPP23_BM_BPPI_HIGH_THRESH);
} else {
val |= MVPP2_BM_LOW_THRESH_VALUE(MVPP2_BM_BPPI_LOW_THRESH);
val |= MVPP2_BM_HIGH_THRESH_VALUE(MVPP2_BM_BPPI_HIGH_THRESH);
}
mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(bm_pool->id), val);
bm_pool->size = size;
......@@ -446,7 +469,7 @@ static void mvpp2_bm_bufs_get_addrs(struct device *dev, struct mvpp2 *priv,
MVPP2_BM_PHY_ALLOC_REG(bm_pool->id));
*phys_addr = mvpp2_thread_read(priv, thread, MVPP2_BM_VIRT_ALLOC_REG);
if (priv->hw_version == MVPP22) {
if (priv->hw_version != MVPP21) {
u32 val;
u32 dma_addr_highbits, phys_addr_highbits;
......@@ -581,6 +604,16 @@ static int mvpp2_bm_pools_init(struct device *dev, struct mvpp2 *priv)
return err;
}
/* Routine to enable PPv23 8-pool mode */
static void mvpp23_bm_set_8pool_mode(struct mvpp2 *priv)
{
int val;
val = mvpp2_read(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG);
val |= MVPP23_BM_8POOL_MODE;
mvpp2_write(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG, val);
}
static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
{
enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
......@@ -634,6 +667,9 @@ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
if (!priv->bm_pools)
return -ENOMEM;
if (priv->hw_version == MVPP23)
mvpp23_bm_set_8pool_mode(priv);
err = mvpp2_bm_pools_init(dev, priv);
if (err < 0)
return err;
......@@ -731,6 +767,191 @@ static void *mvpp2_buf_alloc(struct mvpp2_port *port,
return data;
}
/* Routine to enable flow control for RXQ conditions */
static void mvpp2_rxq_enable_fc(struct mvpp2_port *port)
{
int val, cm3_state, host_id, q;
int fq = port->first_rxq;
unsigned long flags;
spin_lock_irqsave(&port->priv->mss_spinlock, flags);
/* Remove Flow control enable bit to prevent race between FW and Kernel
* If Flow control was enabled, it would be re-enabled.
*/
val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG);
cm3_state = (val & FLOW_CONTROL_ENABLE_BIT);
val &= ~FLOW_CONTROL_ENABLE_BIT;
mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val);
/* Set same Flow control for all RXQs */
for (q = 0; q < port->nrxqs; q++) {
/* Set stop and start Flow control RXQ thresholds */
val = MSS_THRESHOLD_START;
val |= (MSS_THRESHOLD_STOP << MSS_RXQ_TRESH_STOP_OFFS);
mvpp2_cm3_write(port->priv, MSS_RXQ_TRESH_REG(q, fq), val);
val = mvpp2_cm3_read(port->priv, MSS_RXQ_ASS_REG(q, fq));
/* Set RXQ port ID */
val &= ~(MSS_RXQ_ASS_PORTID_MASK << MSS_RXQ_ASS_Q_BASE(q, fq));
val |= (port->id << MSS_RXQ_ASS_Q_BASE(q, fq));
val &= ~(MSS_RXQ_ASS_HOSTID_MASK << (MSS_RXQ_ASS_Q_BASE(q, fq)
+ MSS_RXQ_ASS_HOSTID_OFFS));
/* Calculate RXQ host ID:
* In Single queue mode: Host ID equal to Host ID used for
* shared RX interrupt
* In Multi queue mode: Host ID equal to number of
* RXQ ID / number of CoS queues
* In Single resource mode: Host ID always equal to 0
*/
if (queue_mode == MVPP2_QDIST_SINGLE_MODE)
host_id = port->nqvecs;
else if (queue_mode == MVPP2_QDIST_MULTI_MODE)
host_id = q;
else
host_id = 0;
/* Set RXQ host ID */
val |= (host_id << (MSS_RXQ_ASS_Q_BASE(q, fq)
+ MSS_RXQ_ASS_HOSTID_OFFS));
mvpp2_cm3_write(port->priv, MSS_RXQ_ASS_REG(q, fq), val);
}
/* Notify Firmware that Flow control config space ready for update */
val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG);
val |= FLOW_CONTROL_UPDATE_COMMAND_BIT;
val |= cm3_state;
mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val);
spin_unlock_irqrestore(&port->priv->mss_spinlock, flags);
}
/* Routine to disable flow control for RXQ conditions */
static void mvpp2_rxq_disable_fc(struct mvpp2_port *port)
{
int val, cm3_state, q;
unsigned long flags;
int fq = port->first_rxq;
spin_lock_irqsave(&port->priv->mss_spinlock, flags);
/* Remove Flow control enable bit to prevent race between FW and Kernel
* If Flow control was enabled, it would be re-enabled.
*/
val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG);
cm3_state = (val & FLOW_CONTROL_ENABLE_BIT);
val &= ~FLOW_CONTROL_ENABLE_BIT;
mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val);
/* Disable Flow control for all RXQs */
for (q = 0; q < port->nrxqs; q++) {
/* Set threshold 0 to disable Flow control */
val = 0;
val |= (0 << MSS_RXQ_TRESH_STOP_OFFS);
mvpp2_cm3_write(port->priv, MSS_RXQ_TRESH_REG(q, fq), val);
val = mvpp2_cm3_read(port->priv, MSS_RXQ_ASS_REG(q, fq));
val &= ~(MSS_RXQ_ASS_PORTID_MASK << MSS_RXQ_ASS_Q_BASE(q, fq));
val &= ~(MSS_RXQ_ASS_HOSTID_MASK << (MSS_RXQ_ASS_Q_BASE(q, fq)
+ MSS_RXQ_ASS_HOSTID_OFFS));
mvpp2_cm3_write(port->priv, MSS_RXQ_ASS_REG(q, fq), val);
}
/* Notify Firmware that Flow control config space ready for update */
val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG);
val |= FLOW_CONTROL_UPDATE_COMMAND_BIT;
val |= cm3_state;
mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val);
spin_unlock_irqrestore(&port->priv->mss_spinlock, flags);
}
/* Routine to enable/disable flow control for a BM pool condition */
static void mvpp2_bm_pool_update_fc(struct mvpp2_port *port,
struct mvpp2_bm_pool *pool,
bool en)
{
int val, cm3_state;
unsigned long flags;
spin_lock_irqsave(&port->priv->mss_spinlock, flags);
/* Remove Flow control enable bit to prevent race between FW and Kernel
* If Flow control were enabled, it would be re-enabled.
*/
val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG);
cm3_state = (val & FLOW_CONTROL_ENABLE_BIT);
val &= ~FLOW_CONTROL_ENABLE_BIT;
mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val);
/* Check if BM pool should be enabled or disabled */
if (en) {
/* Set BM pool start and stop thresholds per port */
val = mvpp2_cm3_read(port->priv, MSS_BUF_POOL_REG(pool->id));
val |= MSS_BUF_POOL_PORT_OFFS(port->id);
val &= ~MSS_BUF_POOL_START_MASK;
val |= (MSS_THRESHOLD_START << MSS_BUF_POOL_START_OFFS);
val &= ~MSS_BUF_POOL_STOP_MASK;
val |= MSS_THRESHOLD_STOP;
mvpp2_cm3_write(port->priv, MSS_BUF_POOL_REG(pool->id), val);
} else {
/* Remove BM pool from the port */
val = mvpp2_cm3_read(port->priv, MSS_BUF_POOL_REG(pool->id));
val &= ~MSS_BUF_POOL_PORT_OFFS(port->id);
/* Zero BM pool start and stop thresholds to disable pool
* flow control if pool empty (not used by any port)
*/
if (!pool->buf_num) {
val &= ~MSS_BUF_POOL_START_MASK;
val &= ~MSS_BUF_POOL_STOP_MASK;
}
mvpp2_cm3_write(port->priv, MSS_BUF_POOL_REG(pool->id), val);
}
/* Notify Firmware that Flow control config space ready for update */
val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG);
val |= FLOW_CONTROL_UPDATE_COMMAND_BIT;
val |= cm3_state;
mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val);
spin_unlock_irqrestore(&port->priv->mss_spinlock, flags);
}
static int mvpp2_enable_global_fc(struct mvpp2 *priv)
{
int val, timeout = 0;
/* Enable global flow control. In this stage global
* flow control enabled, but still disabled per port.
*/
val = mvpp2_cm3_read(priv, MSS_FC_COM_REG);
val |= FLOW_CONTROL_ENABLE_BIT;
mvpp2_cm3_write(priv, MSS_FC_COM_REG, val);
/* Check if Firmware is running and disable FC if not */
val |= FLOW_CONTROL_UPDATE_COMMAND_BIT;
mvpp2_cm3_write(priv, MSS_FC_COM_REG, val);
while (timeout < MSS_FC_MAX_TIMEOUT) {
val = mvpp2_cm3_read(priv, MSS_FC_COM_REG);
if (!(val & FLOW_CONTROL_UPDATE_COMMAND_BIT))
return 0;
usleep_range(10, 20);
timeout++;
}
priv->global_tx_fc = false;
return -EOPNOTSUPP;
}
/* Release buffer to BM */
static inline void mvpp2_bm_pool_put(struct mvpp2_port *port, int pool,
dma_addr_t buf_dma_addr,
......@@ -742,7 +963,7 @@ static inline void mvpp2_bm_pool_put(struct mvpp2_port *port, int pool,
if (test_bit(thread, &port->priv->lock_map))
spin_lock_irqsave(&port->bm_lock[thread], flags);
if (port->priv->hw_version == MVPP22) {
if (port->priv->hw_version != MVPP21) {
u32 val = 0;
if (sizeof(dma_addr_t) == 8)
......@@ -1061,6 +1282,16 @@ static int mvpp2_bm_update_mtu(struct net_device *dev, int mtu)
new_long_pool = MVPP2_BM_LONG;
if (new_long_pool != port->pool_long->id) {
if (port->tx_fc) {
if (pkt_size > MVPP2_BM_LONG_PKT_SIZE)
mvpp2_bm_pool_update_fc(port,
port->pool_short,
false);
else
mvpp2_bm_pool_update_fc(port, port->pool_long,
false);
}
/* Remove port from old short & long pool */
port->pool_long = mvpp2_bm_pool_use(port, port->pool_long->id,
port->pool_long->pkt_size);
......@@ -1078,6 +1309,25 @@ static int mvpp2_bm_update_mtu(struct net_device *dev, int mtu)
mvpp2_swf_bm_pool_init(port);
mvpp2_set_hw_csum(port, new_long_pool);
if (port->tx_fc) {
if (pkt_size > MVPP2_BM_LONG_PKT_SIZE)
mvpp2_bm_pool_update_fc(port, port->pool_long,
true);
else
mvpp2_bm_pool_update_fc(port, port->pool_short,
true);
}
/* Update L4 checksum when jumbo enable/disable on port */
if (new_long_pool == MVPP2_BM_JUMBO && port->id != 0) {
dev->features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
dev->hw_features &= ~(NETIF_F_IP_CSUM |
NETIF_F_IPV6_CSUM);
} else {
dev->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
}
}
out_set:
......@@ -1133,14 +1383,19 @@ static inline void mvpp2_qvec_interrupt_disable(struct mvpp2_queue_vector *qvec)
static void mvpp2_interrupts_mask(void *arg)
{
struct mvpp2_port *port = arg;
int cpu = smp_processor_id();
u32 thread;
/* If the thread isn't used, don't do anything */
if (smp_processor_id() > port->priv->nthreads)
if (cpu > port->priv->nthreads)
return;
mvpp2_thread_write(port->priv,
mvpp2_cpu_to_thread(port->priv, smp_processor_id()),
thread = mvpp2_cpu_to_thread(port->priv, cpu);
mvpp2_thread_write(port->priv, thread,
MVPP2_ISR_RX_TX_MASK_REG(port->id), 0);
mvpp2_thread_write(port->priv, thread,
MVPP2_ISR_RX_ERR_CAUSE_REG(port->id), 0);
}
/* Unmask the current thread's Rx/Tx interrupts.
......@@ -1150,20 +1405,25 @@ static void mvpp2_interrupts_mask(void *arg)
static void mvpp2_interrupts_unmask(void *arg)
{
struct mvpp2_port *port = arg;
u32 val;
int cpu = smp_processor_id();
u32 val, thread;
/* If the thread isn't used, don't do anything */
if (smp_processor_id() > port->priv->nthreads)
if (cpu > port->priv->nthreads)
return;
thread = mvpp2_cpu_to_thread(port->priv, cpu);
val = MVPP2_CAUSE_MISC_SUM_MASK |
MVPP2_CAUSE_RXQ_OCCUP_DESC_ALL_MASK(port->priv->hw_version);
if (port->has_tx_irqs)
val |= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
mvpp2_thread_write(port->priv,
mvpp2_cpu_to_thread(port->priv, smp_processor_id()),
mvpp2_thread_write(port->priv, thread,
MVPP2_ISR_RX_TX_MASK_REG(port->id), val);
mvpp2_thread_write(port->priv, thread,
MVPP2_ISR_RX_ERR_CAUSE_REG(port->id),
MVPP2_ISR_RX_ERR_CAUSE_NONOCC_MASK);
}
static void
......@@ -1172,7 +1432,7 @@ mvpp2_shared_interrupt_mask_unmask(struct mvpp2_port *port, bool mask)
u32 val;
int i;
if (port->priv->hw_version != MVPP22)
if (port->priv->hw_version == MVPP21)
return;
if (mask)
......@@ -1188,6 +1448,9 @@ mvpp2_shared_interrupt_mask_unmask(struct mvpp2_port *port, bool mask)
mvpp2_thread_write(port->priv, v->sw_thread_id,
MVPP2_ISR_RX_TX_MASK_REG(port->id), val);
mvpp2_thread_write(port->priv, v->sw_thread_id,
MVPP2_ISR_RX_ERR_CAUSE_REG(port->id),
MVPP2_ISR_RX_ERR_CAUSE_NONOCC_MASK);
}
}
......@@ -1199,7 +1462,7 @@ static bool mvpp2_port_supports_xlg(struct mvpp2_port *port)
static bool mvpp2_port_supports_rgmii(struct mvpp2_port *port)
{
return !(port->priv->hw_version == MVPP22 && port->gop_id == 0);
return !(port->priv->hw_version != MVPP21 && port->gop_id == 0);
}
/* Port configuration routines */
......@@ -1280,6 +1543,49 @@ static void mvpp22_gop_init_10gkr(struct mvpp2_port *port)
writel(val, mpcs + MVPP22_MPCS_CLK_RESET);
}
static void mvpp22_gop_fca_enable_periodic(struct mvpp2_port *port, bool en)
{
struct mvpp2 *priv = port->priv;
void __iomem *fca = priv->iface_base + MVPP22_FCA_BASE(port->gop_id);
u32 val;
val = readl(fca + MVPP22_FCA_CONTROL_REG);
val &= ~MVPP22_FCA_ENABLE_PERIODIC;
if (en)
val |= MVPP22_FCA_ENABLE_PERIODIC;
writel(val, fca + MVPP22_FCA_CONTROL_REG);
}
static void mvpp22_gop_fca_set_timer(struct mvpp2_port *port, u32 timer)
{
struct mvpp2 *priv = port->priv;
void __iomem *fca = priv->iface_base + MVPP22_FCA_BASE(port->gop_id);
u32 lsb, msb;
lsb = timer & MVPP22_FCA_REG_MASK;
msb = timer >> MVPP22_FCA_REG_SIZE;
writel(lsb, fca + MVPP22_PERIODIC_COUNTER_LSB_REG);
writel(msb, fca + MVPP22_PERIODIC_COUNTER_MSB_REG);
}
/* Set Flow Control timer x100 faster than pause quanta to ensure that link
* partner won't send traffic if port is in XOFF mode.
*/
static void mvpp22_gop_fca_set_periodic_timer(struct mvpp2_port *port)
{
u32 timer;
timer = (port->priv->tclk / (USEC_PER_SEC * FC_CLK_DIVIDER))
* FC_QUANTA;
mvpp22_gop_fca_enable_periodic(port, false);
mvpp22_gop_fca_set_timer(port, timer);
mvpp22_gop_fca_enable_periodic(port, true);
}
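/* Worked example (illustrative numbers, not part of this patch): with
 * tclk = 333 MHz, tclk / (USEC_PER_SEC * FC_CLK_DIVIDER) = 3, so the
 * periodic counter is set to 3 * FC_QUANTA = 196605 clock cycles between
 * pause frame refreshes.
 */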
static int mvpp22_gop_init(struct mvpp2_port *port)
{
struct mvpp2 *priv = port->priv;
......@@ -1324,6 +1630,8 @@ static int mvpp22_gop_init(struct mvpp2_port *port)
val |= GENCONF_SOFT_RESET1_GOP;
regmap_write(priv->sysctrl_base, GENCONF_SOFT_RESET1, val);
mvpp22_gop_fca_set_periodic_timer(port);
unsupported_conf:
return 0;
......@@ -1817,7 +2125,7 @@ static void mvpp2_mac_reset_assert(struct mvpp2_port *port)
MVPP2_GMAC_PORT_RESET_MASK;
writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
if (port->priv->hw_version == MVPP22 && port->gop_id == 0) {
if (port->priv->hw_version != MVPP21 && port->gop_id == 0) {
val = readl(port->base + MVPP22_XLG_CTRL0_REG) &
~MVPP22_XLG_CTRL0_MAC_RESET_DIS;
writel(val, port->base + MVPP22_XLG_CTRL0_REG);
......@@ -1830,7 +2138,7 @@ static void mvpp22_pcs_reset_assert(struct mvpp2_port *port)
void __iomem *mpcs, *xpcs;
u32 val;
if (port->priv->hw_version != MVPP22 || port->gop_id != 0)
if (port->priv->hw_version == MVPP21 || port->gop_id != 0)
return;
mpcs = priv->iface_base + MVPP22_MPCS_BASE(port->gop_id);
......@@ -1851,7 +2159,7 @@ static void mvpp22_pcs_reset_deassert(struct mvpp2_port *port)
void __iomem *mpcs, *xpcs;
u32 val;
if (port->priv->hw_version != MVPP22 || port->gop_id != 0)
if (port->priv->hw_version == MVPP21 || port->gop_id != 0)
return;
mpcs = priv->iface_base + MVPP22_MPCS_BASE(port->gop_id);
......@@ -2348,6 +2656,20 @@ static void mvpp2_txp_max_tx_size_set(struct mvpp2_port *port)
}
}
/* Set the number of non-occupied descriptors threshold */
static void mvpp2_set_rxq_free_tresh(struct mvpp2_port *port,
struct mvpp2_rx_queue *rxq)
{
u32 val;
mvpp2_write(port->priv, MVPP2_RXQ_NUM_REG, rxq->id);
val = mvpp2_read(port->priv, MVPP2_RXQ_THRESH_REG);
val &= ~MVPP2_RXQ_NON_OCCUPIED_MASK;
val |= MSS_THRESHOLD_STOP << MVPP2_RXQ_NON_OCCUPIED_OFFSET;
mvpp2_write(port->priv, MVPP2_RXQ_THRESH_REG, val);
}
/* Set the number of packets that will be received before Rx interrupt
* will be generated by HW.
*/
......@@ -2611,6 +2933,9 @@ static int mvpp2_rxq_init(struct mvpp2_port *port,
mvpp2_rx_pkts_coal_set(port, rxq);
mvpp2_rx_time_coal_set(port, rxq);
/* Set the number of non occupied descriptors threshold */
mvpp2_set_rxq_free_tresh(port, rxq);
/* Add number of descriptors ready for receiving packets */
mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size);
......@@ -2928,6 +3253,9 @@ static void mvpp2_cleanup_rxqs(struct mvpp2_port *port)
for (queue = 0; queue < port->nrxqs; queue++)
mvpp2_rxq_deinit(port, port->rxqs[queue]);
if (port->tx_fc)
mvpp2_rxq_disable_fc(port);
}
/* Init all Rx queues for port */
......@@ -2940,6 +3268,10 @@ static int mvpp2_setup_rxqs(struct mvpp2_port *port)
if (err)
goto err_cleanup;
}
if (port->tx_fc)
mvpp2_rxq_enable_fc(port);
return 0;
err_cleanup:
......@@ -4196,7 +4528,7 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
/* Enable interrupts on all threads */
mvpp2_interrupts_enable(port);
if (port->priv->hw_version == MVPP22)
if (port->priv->hw_version != MVPP21)
mvpp22_mode_reconfigure(port);
if (port->phylink) {
......@@ -4239,6 +4571,8 @@ static int mvpp2_check_ringparam_valid(struct net_device *dev,
if (ring->rx_pending > MVPP2_MAX_RXD_MAX)
new_rx_pending = MVPP2_MAX_RXD_MAX;
else if (ring->rx_pending < MSS_THRESHOLD_START)
new_rx_pending = MSS_THRESHOLD_START;
else if (!IS_ALIGNED(ring->rx_pending, 16))
new_rx_pending = ALIGN(ring->rx_pending, 16);
......@@ -4412,7 +4746,7 @@ static int mvpp2_open(struct net_device *dev)
valid = true;
}
if (priv->hw_version == MVPP22 && port->port_irq) {
if (priv->hw_version != MVPP21 && port->port_irq) {
err = request_irq(port->port_irq, mvpp2_port_isr, 0,
dev->name, port);
if (err) {
......@@ -5464,7 +5798,7 @@ static void mvpp2_rx_irqs_setup(struct mvpp2_port *port)
return;
}
/* Handle the more complicated PPv2.2 case */
/* Handle the more complicated PPv2.2 and PPv2.3 case */
for (i = 0; i < port->nqvecs; i++) {
struct mvpp2_queue_vector *qv = port->qvecs + i;
......@@ -5641,7 +5975,7 @@ static bool mvpp22_port_has_legacy_tx_irqs(struct device_node *port_node,
/* Checks if the port dt description has the required Tx interrupts:
* - PPv2.1: there are no such interrupts.
* - PPv2.2:
* - PPv2.2 and PPv2.3:
* - The old DTs have: "rx-shared", "tx-cpuX" with X in [0...3]
* - The new ones have: "hifX" with X in [0..8]
*
......@@ -5883,6 +6217,11 @@ static void mvpp2_phylink_validate(struct phylink_config *config,
phylink_set(mask, Autoneg);
phylink_set_port_modes(mask);
if (port->priv->global_tx_fc) {
phylink_set(mask, Pause);
phylink_set(mask, Asym_Pause);
}
switch (state->interface) {
case PHY_INTERFACE_MODE_10GBASER:
case PHY_INTERFACE_MODE_XAUI:
......@@ -5973,7 +6312,7 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
old_ctrl4 = ctrl4 = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
ctrl0 &= ~MVPP2_GMAC_PORT_TYPE_MASK;
ctrl2 &= ~(MVPP2_GMAC_INBAND_AN_MASK | MVPP2_GMAC_PCS_ENABLE_MASK);
ctrl2 &= ~(MVPP2_GMAC_INBAND_AN_MASK | MVPP2_GMAC_PCS_ENABLE_MASK | MVPP2_GMAC_FLOW_CTRL_MASK);
/* Configure port type */
if (phy_interface_mode_is_8023z(state->interface)) {
......@@ -6060,7 +6399,7 @@ static int mvpp2__mac_prepare(struct phylink_config *config, unsigned int mode,
MVPP2_GMAC_PORT_RESET_MASK,
MVPP2_GMAC_PORT_RESET_MASK);
if (port->priv->hw_version == MVPP22) {
if (port->priv->hw_version != MVPP21) {
mvpp22_gop_mask_irq(port);
phy_power_off(port->comphy);
......@@ -6114,7 +6453,7 @@ static int mvpp2_mac_finish(struct phylink_config *config, unsigned int mode,
{
struct mvpp2_port *port = mvpp2_phylink_to_port(config);
if (port->priv->hw_version == MVPP22 &&
if (port->priv->hw_version != MVPP21 &&
port->phy_interface != interface) {
port->phy_interface = interface;
......@@ -6162,6 +6501,7 @@ static void mvpp2_mac_link_up(struct phylink_config *config,
{
struct mvpp2_port *port = mvpp2_phylink_to_port(config);
u32 val;
int i;
if (mvpp2_is_xlg(interface)) {
if (!phylink_autoneg_inband(mode)) {
......@@ -6212,6 +6552,23 @@ static void mvpp2_mac_link_up(struct phylink_config *config,
val);
}
if (port->priv->global_tx_fc) {
port->tx_fc = tx_pause;
if (tx_pause)
mvpp2_rxq_enable_fc(port);
else
mvpp2_rxq_disable_fc(port);
if (port->priv->percpu_pools) {
for (i = 0; i < port->nrxqs; i++)
mvpp2_bm_pool_update_fc(port, &port->priv->bm_pools[i], tx_pause);
} else {
mvpp2_bm_pool_update_fc(port, port->pool_long, tx_pause);
mvpp2_bm_pool_update_fc(port, port->pool_short, tx_pause);
}
if (port->priv->hw_version == MVPP23)
mvpp23_rx_fifo_fc_en(port->priv, port->id, tx_pause);
}
mvpp2_port_enable(port);
mvpp2_egress_enable(port);
......@@ -6629,7 +6986,7 @@ static void mvpp22_rx_fifo_set_hw(struct mvpp2 *priv, int port, int data_size)
mvpp2_write(priv, MVPP2_RX_ATTR_FIFO_SIZE_REG(port), attr_size);
}
/* Initialize Rx FIFO's: the total FIFO size is 48kB on PPv2.2.
/* Initialize Rx FIFO's: the total FIFO size is 48kB on PPv2.2 and PPv2.3.
* 4kB fixed space must be assigned for the loopback port.
* Redistribute remaining available 44kB space among all active ports.
* Guarantee minimum 32kB for 10G port and 8kB for port 1, capable of 2.5G
......@@ -6678,6 +7035,55 @@ static void mvpp22_rx_fifo_init(struct mvpp2 *priv)
mvpp2_write(priv, MVPP2_RX_FIFO_INIT_REG, 0x1);
}
/* Configure Rx FIFO Flow control thresholds */
static void mvpp23_rx_fifo_fc_set_tresh(struct mvpp2 *priv)
{
int port, val;
/* Port 0: maximum speed -10Gb/s port
* required by spec RX FIFO threshold 9KB
* Port 1: maximum speed -5Gb/s port
* required by spec RX FIFO threshold 4KB
* Port 2: maximum speed -1Gb/s port
* required by spec RX FIFO threshold 2KB
*/
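/* Worked example (illustrative, not part of this patch): port 0's 9 KB
 * threshold is written as (9 * 1024) / MVPP2_RX_FC_TRSH_UNIT = 36 into the
 * 256-byte-unit threshold field of MVPP2_RX_FC_REG(0).
 */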
/* Without loopback port */
for (port = 0; port < (MVPP2_MAX_PORTS - 1); port++) {
if (port == 0) {
val = (MVPP23_PORT0_FIFO_TRSH / MVPP2_RX_FC_TRSH_UNIT)
<< MVPP2_RX_FC_TRSH_OFFS;
val &= MVPP2_RX_FC_TRSH_MASK;
mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
} else if (port == 1) {
val = (MVPP23_PORT1_FIFO_TRSH / MVPP2_RX_FC_TRSH_UNIT)
<< MVPP2_RX_FC_TRSH_OFFS;
val &= MVPP2_RX_FC_TRSH_MASK;
mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
} else {
val = (MVPP23_PORT2_FIFO_TRSH / MVPP2_RX_FC_TRSH_UNIT)
<< MVPP2_RX_FC_TRSH_OFFS;
val &= MVPP2_RX_FC_TRSH_MASK;
mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
}
}
}
/* Configure Rx FIFO Flow control thresholds */
void mvpp23_rx_fifo_fc_en(struct mvpp2 *priv, int port, bool en)
{
int val;
val = mvpp2_read(priv, MVPP2_RX_FC_REG(port));
if (en)
val |= MVPP2_RX_FC_EN;
else
val &= ~MVPP2_RX_FC_EN;
mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
}
static void mvpp22_tx_fifo_set_hw(struct mvpp2 *priv, int port, int size)
{
int threshold = MVPP2_TX_FIFO_THRESHOLD(size);
......@@ -6686,7 +7092,7 @@ static void mvpp22_tx_fifo_set_hw(struct mvpp2 *priv, int port, int size)
mvpp2_write(priv, MVPP22_TX_FIFO_THRESH_REG(port), threshold);
}
/* Initialize TX FIFO's: the total FIFO size is 19kB on PPv2.2.
/* Initialize TX FIFO's: the total FIFO size is 19kB on PPv2.2 and PPv2.3.
* 3kB fixed space must be assigned for the loopback port.
* Redistribute remaining available 16kB space among all active ports.
* The 10G interface should use 10kB (which is maximum possible size
......@@ -6794,7 +7200,7 @@ static int mvpp2_init(struct platform_device *pdev, struct mvpp2 *priv)
if (dram_target_info)
mvpp2_conf_mbus_windows(dram_target_info, priv);
if (priv->hw_version == MVPP22)
if (priv->hw_version != MVPP21)
mvpp2_axi_init(priv);
/* Disable HW PHY polling */
......@@ -6829,6 +7235,8 @@ static int mvpp2_init(struct platform_device *pdev, struct mvpp2 *priv)
} else {
mvpp22_rx_fifo_init(priv);
mvpp22_tx_fifo_init(priv);
if (priv->hw_version == MVPP23)
mvpp23_rx_fifo_fc_set_tresh(priv);
}
if (priv->hw_version == MVPP21)
......@@ -6854,6 +7262,27 @@ static int mvpp2_init(struct platform_device *pdev, struct mvpp2 *priv)
return 0;
}
static int mvpp2_get_sram(struct platform_device *pdev,
struct mvpp2 *priv)
{
struct resource *res;
res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
if (!res) {
if (has_acpi_companion(&pdev->dev))
dev_warn(&pdev->dev, "ACPI is too old, Flow control not supported\n");
else
dev_warn(&pdev->dev, "DT is too old, Flow control not supported\n");
return 0;
}
priv->cm3_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv->cm3_base))
return PTR_ERR(priv->cm3_base);
return 0;
}
static int mvpp2_probe(struct platform_device *pdev)
{
const struct acpi_device_id *acpi_id;
......@@ -6910,9 +7339,18 @@ static int mvpp2_probe(struct platform_device *pdev)
priv->iface_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv->iface_base))
return PTR_ERR(priv->iface_base);
/* Map CM3 SRAM */
err = mvpp2_get_sram(pdev, priv);
if (err)
dev_warn(&pdev->dev, "Fail to alloc CM3 SRAM\n");
/* Enable global Flow Control only if handler to SRAM not NULL */
if (priv->cm3_base)
priv->global_tx_fc = true;
}
if (priv->hw_version == MVPP22 && dev_of_node(&pdev->dev)) {
if (priv->hw_version != MVPP21 && dev_of_node(&pdev->dev)) {
priv->sysctrl_base =
syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
"marvell,system-controller");
......@@ -6925,7 +7363,7 @@ static int mvpp2_probe(struct platform_device *pdev)
priv->sysctrl_base = NULL;
}
if (priv->hw_version == MVPP22 &&
if (priv->hw_version != MVPP21 &&
mvpp2_get_nrxqs(priv) * 2 <= MVPP2_BM_MAX_POOLS)
priv->percpu_pools = 1;
......@@ -6970,7 +7408,7 @@ static int mvpp2_probe(struct platform_device *pdev)
if (err < 0)
goto err_pp_clk;
if (priv->hw_version == MVPP22) {
if (priv->hw_version != MVPP21) {
priv->mg_clk = devm_clk_get(&pdev->dev, "mg_clk");
if (IS_ERR(priv->mg_clk)) {
err = PTR_ERR(priv->mg_clk);
......@@ -7011,7 +7449,7 @@ static int mvpp2_probe(struct platform_device *pdev)
return -EINVAL;
}
if (priv->hw_version == MVPP22) {
if (priv->hw_version != MVPP21) {
err = dma_set_mask(&pdev->dev, MVPP2_DESC_DMA_MASK);
if (err)
goto err_axi_clk;
......@@ -7031,6 +7469,14 @@ static int mvpp2_probe(struct platform_device *pdev)
priv->port_map |= BIT(i);
}
if (priv->hw_version != MVPP21) {
if (mvpp2_read(priv, MVPP2_VER_ID_REG) == MVPP2_VER_PP23)
priv->hw_version = MVPP23;
}
/* Init mss lock */
spin_lock_init(&priv->mss_spinlock);
/* Initialize network controller */
err = mvpp2_init(pdev, priv);
if (err < 0) {
......@@ -7070,6 +7516,12 @@ static int mvpp2_probe(struct platform_device *pdev)
goto err_port_probe;
}
if (priv->global_tx_fc && priv->hw_version != MVPP21) {
err = mvpp2_enable_global_fc(priv);
if (err)
dev_warn(&pdev->dev, "Minimum of CM3 firmware 18.09 and chip revision B0 required for flow control\n");
}
mvpp2_dbgfs_init(priv, pdev->name);
platform_set_drvdata(pdev, priv);
......@@ -7086,10 +7538,10 @@ static int mvpp2_probe(struct platform_device *pdev)
clk_disable_unprepare(priv->axi_clk);
err_mg_core_clk:
if (priv->hw_version == MVPP22)
if (priv->hw_version != MVPP21)
clk_disable_unprepare(priv->mg_core_clk);
err_mg_clk:
if (priv->hw_version == MVPP22)
if (priv->hw_version != MVPP21)
clk_disable_unprepare(priv->mg_clk);
err_gop_clk:
clk_disable_unprepare(priv->gop_clk);
......