Commit eeada410 authored by David S. Miller

Merge branch 'dpaa2-switch-next'

Ioana Ciornei says:

====================
dpaa2-switch: CPU terminated traffic and move out of staging

This patch set adds support for Rx/Tx capabilities on DPAA2 switch port
interfaces, as well as fixes for some major blunders in how we take care
of the switching domains. The last patch actually moves the driver out
of staging now that the minimum requirements are met.

I am sending this directly to the net-next tree so that I can use the
rest of the development cycle to add new features on top of the current
driver without worrying about merge conflicts between the staging and
net-next trees.

The control interface comprises 3 queues in total: Rx, Rx error and Tx
confirmation. In this patch set we only enable Rx and Tx conf. All
switch ports share the same queues when frames are redirected to the
CPU. Information regarding the ingress switch port is passed through
frame metadata - the flow context field of the frame descriptor.
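
As a rough sketch, the Rx path recovers the ingress interface the same
way dpaa2_switch_rx() in this set does, by reading the upper 32 bits of
the FD's flow context:

  /* Sketch: the ingress switch interface ID travels in the upper
   * 32 bits of the frame descriptor's flow context field.
   */
  if_id = upper_32_bits(dpaa2_fd_get_flc(fd)) & 0x0000FFFF;
  port_priv = ethsw->ports[if_id];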

NAPI instances are also shared between the switch net_devices: they are
enabled when .dev_open() is called on at least one of the switch ports
and disabled when no switch port is up anymore.
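
A rough sketch of the enable side (the disable side mirrors it), using
the names from this set:

  /* The first port that opens enables the shared NAPI instances;
   * ethsw->napi_users is protected by the RTNL lock.
   */
  ASSERT_RTNL();
  if (ethsw->napi_users++ == 0)
          for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
                  napi_enable(&ethsw->fq[i].napi);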

Since the last version of this feature was submitted to the list, I
reworked how the switching and flooding domains are taken care of by the
driver, thus the switch is now also able to add the control port (the
queues that the CPU can dequeue from) into the flooding domains of a
port (broadcast, unknown unicast etc). With this, we are able to receive
and send traffic from the switch interfaces.

Also, the capability to properly partition the DPSW object into
multiple switching domains was added so that, when not under a bridge,
the ports are not actually able to switch traffic between themselves.
This is possible by adding a private FDB table per switch interface.
When multiple switch interfaces are under the same bridge, they will
all use the same FDB table.
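
In rough pseudocode, the FDB selection done by
dpaa2_switch_port_set_fdb() in this set looks like:

  /* Rough pseudocode of dpaa2_switch_port_set_fdb() */
  if (!bridge_dev)
          /* standalone again: claim a currently unused private FDB */
          port_priv->fdb = dpaa2_switch_fdb_get_unused(ethsw);
  else if (other_port_priv)
          /* adopt the FDB of the first dpaa2 switch port in the bridge */
          port_priv->fdb = other_port_priv->fdb;
  port_priv->fdb->bridge_dev = bridge_dev;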

Another thing that is fixed in this patch set is how the driver handles
VLAN awareness. The DPAA2 switch is not capable of running VLAN-unaware,
but this was not reflected in how the driver responded to requests to
change the VLAN awareness. In the last patch, this is fixed by
describing the switch interfaces as Rx VLAN filtering on [fixed] and by
declining any request to join a VLAN-unaware bridge.
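
The decline happens in the NETDEV_PRECHANGEUPPER handler, roughly:

  /* Refuse VLAN-unaware bridges before the upper link is created */
  if (netif_is_bridge_master(upper_dev) &&
      !br_vlan_enabled(upper_dev)) {
          NL_SET_ERR_MSG_MOD(extack, "Cannot join a VLAN-unaware bridge");
          err = -EOPNOTSUPP;
  }
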
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 157611c8 f48298d3
......@@ -5473,11 +5473,11 @@ F: drivers/net/ethernet/freescale/dpaa2/dpmac*
F: drivers/net/ethernet/freescale/dpaa2/dpni*
DPAA2 ETHERNET SWITCH DRIVER
M: Ioana Radulescu <ruxandra.radulescu@nxp.com>
M: Ioana Ciornei <ioana.ciornei@nxp.com>
L: linux-kernel@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
F: drivers/staging/fsl-dpaa2/ethsw
F: drivers/net/ethernet/freescale/dpaa2/dpaa2-switch*
F: drivers/net/ethernet/freescale/dpaa2/dpsw*
DPT_I2O SCSI RAID DRIVER
M: Adaptec OEM Raid Solutions <aacraid@microsemi.com>
......
......@@ -29,3 +29,11 @@ config FSL_DPAA2_PTP_CLOCK
help
This driver adds support for using the DPAA2 1588 timer module
as a PTP clock.
config FSL_DPAA2_SWITCH
tristate "Freescale DPAA2 Ethernet Switch"
depends on BRIDGE || BRIDGE=n
depends on NET_SWITCHDEV
help
Driver for Freescale DPAA2 Ethernet Switch. This driver manages
switch objects discovered on the Freescale MC bus.
......@@ -5,11 +5,13 @@
obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o
obj-$(CONFIG_FSL_DPAA2_PTP_CLOCK) += fsl-dpaa2-ptp.o
obj-$(CONFIG_FSL_DPAA2_SWITCH) += fsl-dpaa2-switch.o
fsl-dpaa2-eth-objs := dpaa2-eth.o dpaa2-ethtool.o dpni.o dpaa2-mac.o dpmac.o dpaa2-eth-devlink.o
fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DCB} += dpaa2-eth-dcb.o
fsl-dpaa2-eth-${CONFIG_DEBUG_FS} += dpaa2-eth-debugfs.o
fsl-dpaa2-ptp-objs := dpaa2-ptp.o dprtc.o
fsl-dpaa2-switch-objs := dpaa2-switch.o dpaa2-switch-ethtool.o dpsw.o
# Needed by the tracing framework
CFLAGS_dpaa2-eth.o := -I$(src)
......@@ -9,7 +9,7 @@
#include <linux/ethtool.h>
#include "ethsw.h"
#include "dpaa2-switch.h"
static struct {
enum dpsw_counter id;
......
......@@ -3,7 +3,7 @@
* DPAA2 Ethernet Switch driver
*
* Copyright 2014-2016 Freescale Semiconductor Inc.
* Copyright 2017-2018 NXP
* Copyright 2017-2021 NXP
*
*/
......@@ -13,25 +13,120 @@
#include <linux/msi.h>
#include <linux/kthread.h>
#include <linux/workqueue.h>
#include <linux/iommu.h>
#include <linux/fsl/mc.h>
#include "ethsw.h"
#include "dpaa2-switch.h"
/* Minimal supported DPSW version */
#define DPSW_MIN_VER_MAJOR 8
#define DPSW_MIN_VER_MINOR 1
#define DPSW_MIN_VER_MINOR 9
#define DEFAULT_VLAN_ID 1
static int dpaa2_switch_add_vlan(struct ethsw_core *ethsw, u16 vid)
static u16 dpaa2_switch_port_get_fdb_id(struct ethsw_port_priv *port_priv)
{
int err;
return port_priv->fdb->fdb_id;
}
struct dpsw_vlan_cfg vcfg = {
.fdb_id = 0,
};
static struct dpaa2_switch_fdb *dpaa2_switch_fdb_get_unused(struct ethsw_core *ethsw)
{
int i;
for (i = 0; i < ethsw->sw_attr.num_ifs; i++)
if (!ethsw->fdbs[i].in_use)
return &ethsw->fdbs[i];
return NULL;
}
static u16 dpaa2_switch_port_set_fdb(struct ethsw_port_priv *port_priv,
struct net_device *bridge_dev)
{
struct ethsw_port_priv *other_port_priv = NULL;
struct dpaa2_switch_fdb *fdb;
struct net_device *other_dev;
struct list_head *iter;
/* If we leave a bridge (bridge_dev is NULL), find an unused
* FDB and use that.
*/
if (!bridge_dev) {
fdb = dpaa2_switch_fdb_get_unused(port_priv->ethsw_data);
/* If there is no unused FDB, we must be the last port that
* leaves the last bridge; all the others are standalone. We
* can just keep the FDB that we already have.
*/
if (!fdb) {
port_priv->fdb->bridge_dev = NULL;
return 0;
}
port_priv->fdb = fdb;
port_priv->fdb->in_use = true;
port_priv->fdb->bridge_dev = NULL;
return 0;
}
/* The below call to netdev_for_each_lower_dev() demands the RTNL lock
* being held. Assert on it so that it's easier to catch new code
* paths that reach this point without the RTNL lock.
*/
ASSERT_RTNL();
/* If part of a bridge, use the FDB of the first dpaa2 switch interface
* to be present in that bridge
*/
netdev_for_each_lower_dev(bridge_dev, other_dev, iter) {
if (!dpaa2_switch_port_dev_check(other_dev))
continue;
if (other_dev == port_priv->netdev)
continue;
other_port_priv = netdev_priv(other_dev);
break;
}
/* The current port is about to change its FDB to the one used by the
* first port that joined the bridge.
*/
if (other_port_priv) {
/* The previous FDB is about to become unused, since the
* interface is no longer standalone.
*/
port_priv->fdb->in_use = false;
port_priv->fdb->bridge_dev = NULL;
/* Get a reference to the new FDB */
port_priv->fdb = other_port_priv->fdb;
}
/* Keep track of the new upper bridge device */
port_priv->fdb->bridge_dev = bridge_dev;
return 0;
}
static void *dpaa2_iova_to_virt(struct iommu_domain *domain,
dma_addr_t iova_addr)
{
phys_addr_t phys_addr;
phys_addr = domain ? iommu_iova_to_phys(domain, iova_addr) : iova_addr;
return phys_to_virt(phys_addr);
}
static int dpaa2_switch_add_vlan(struct ethsw_port_priv *port_priv, u16 vid)
{
struct ethsw_core *ethsw = port_priv->ethsw_data;
struct dpsw_vlan_cfg vcfg = {0};
int err;
vcfg.fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
err = dpsw_vlan_add(ethsw->mc_io, 0,
ethsw->dpsw_handle, vid, &vcfg);
if (err) {
......@@ -122,7 +217,7 @@ static int dpaa2_switch_port_add_vlan(struct ethsw_port_priv *port_priv,
{
struct ethsw_core *ethsw = port_priv->ethsw_data;
struct net_device *netdev = port_priv->netdev;
struct dpsw_vlan_if_cfg vcfg;
struct dpsw_vlan_if_cfg vcfg = {0};
int err;
if (port_priv->vlans[vid]) {
......@@ -130,8 +225,13 @@ static int dpaa2_switch_port_add_vlan(struct ethsw_port_priv *port_priv,
return -EEXIST;
}
/* If hit, this VLAN rule will lead the packet into the FDB table
* specified in the vlan configuration below
*/
vcfg.num_ifs = 1;
vcfg.if_id[0] = port_priv->idx;
vcfg.fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
vcfg.options |= DPSW_VLAN_ADD_IF_OPT_FDB_ID;
err = dpsw_vlan_add_if(ethsw->mc_io, 0, ethsw->dpsw_handle, vid, &vcfg);
if (err) {
netdev_err(netdev, "dpsw_vlan_add_if err %d\n", err);
......@@ -161,44 +261,6 @@ static int dpaa2_switch_port_add_vlan(struct ethsw_port_priv *port_priv,
return 0;
}
static int dpaa2_switch_set_learning(struct ethsw_core *ethsw, bool enable)
{
enum dpsw_fdb_learning_mode learn_mode;
int err;
if (enable)
learn_mode = DPSW_FDB_LEARNING_MODE_HW;
else
learn_mode = DPSW_FDB_LEARNING_MODE_DIS;
err = dpsw_fdb_set_learning_mode(ethsw->mc_io, 0, ethsw->dpsw_handle, 0,
learn_mode);
if (err) {
dev_err(ethsw->dev, "dpsw_fdb_set_learning_mode err %d\n", err);
return err;
}
ethsw->learning = enable;
return 0;
}
static int dpaa2_switch_port_set_flood(struct ethsw_port_priv *port_priv, bool enable)
{
int err;
err = dpsw_if_set_flooding(port_priv->ethsw_data->mc_io, 0,
port_priv->ethsw_data->dpsw_handle,
port_priv->idx, enable);
if (err) {
netdev_err(port_priv->netdev,
"dpsw_if_set_flooding err %d\n", err);
return err;
}
port_priv->flood = enable;
return 0;
}
static int dpaa2_switch_port_set_stp_state(struct ethsw_port_priv *port_priv, u8 state)
{
struct dpsw_stp_cfg stp_cfg = {
......@@ -256,15 +318,17 @@ static int dpaa2_switch_port_fdb_add_uc(struct ethsw_port_priv *port_priv,
const unsigned char *addr)
{
struct dpsw_fdb_unicast_cfg entry = {0};
u16 fdb_id;
int err;
entry.if_egress = port_priv->idx;
entry.type = DPSW_FDB_ENTRY_STATIC;
ether_addr_copy(entry.mac_addr, addr);
fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
err = dpsw_fdb_add_unicast(port_priv->ethsw_data->mc_io, 0,
port_priv->ethsw_data->dpsw_handle,
0, &entry);
fdb_id, &entry);
if (err)
netdev_err(port_priv->netdev,
"dpsw_fdb_add_unicast err %d\n", err);
......@@ -275,15 +339,17 @@ static int dpaa2_switch_port_fdb_del_uc(struct ethsw_port_priv *port_priv,
const unsigned char *addr)
{
struct dpsw_fdb_unicast_cfg entry = {0};
u16 fdb_id;
int err;
entry.if_egress = port_priv->idx;
entry.type = DPSW_FDB_ENTRY_STATIC;
ether_addr_copy(entry.mac_addr, addr);
fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
err = dpsw_fdb_remove_unicast(port_priv->ethsw_data->mc_io, 0,
port_priv->ethsw_data->dpsw_handle,
0, &entry);
fdb_id, &entry);
/* Silently discard the error when the del command is called multiple times */
if (err && err != -ENXIO)
netdev_err(port_priv->netdev,
......@@ -295,6 +361,7 @@ static int dpaa2_switch_port_fdb_add_mc(struct ethsw_port_priv *port_priv,
const unsigned char *addr)
{
struct dpsw_fdb_multicast_cfg entry = {0};
u16 fdb_id;
int err;
ether_addr_copy(entry.mac_addr, addr);
......@@ -302,9 +369,10 @@ static int dpaa2_switch_port_fdb_add_mc(struct ethsw_port_priv *port_priv,
entry.num_ifs = 1;
entry.if_id[0] = port_priv->idx;
fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
err = dpsw_fdb_add_multicast(port_priv->ethsw_data->mc_io, 0,
port_priv->ethsw_data->dpsw_handle,
0, &entry);
fdb_id, &entry);
/* Silently discard the error when the add command is called multiple times */
if (err && err != -ENXIO)
netdev_err(port_priv->netdev, "dpsw_fdb_add_multicast err %d\n",
......@@ -316,6 +384,7 @@ static int dpaa2_switch_port_fdb_del_mc(struct ethsw_port_priv *port_priv,
const unsigned char *addr)
{
struct dpsw_fdb_multicast_cfg entry = {0};
u16 fdb_id;
int err;
ether_addr_copy(entry.mac_addr, addr);
......@@ -323,9 +392,10 @@ static int dpaa2_switch_port_fdb_del_mc(struct ethsw_port_priv *port_priv,
entry.num_ifs = 1;
entry.if_id[0] = port_priv->idx;
fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
err = dpsw_fdb_remove_multicast(port_priv->ethsw_data->mc_io, 0,
port_priv->ethsw_data->dpsw_handle,
0, &entry);
fdb_id, &entry);
/* Silently discard the error when the del command is called multiple times */
if (err && err != -ENAVAIL)
netdev_err(port_priv->netdev,
......@@ -333,31 +403,6 @@ static int dpaa2_switch_port_fdb_del_mc(struct ethsw_port_priv *port_priv,
return err;
}
static int dpaa2_switch_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev, const unsigned char *addr,
u16 vid, u16 flags,
struct netlink_ext_ack *extack)
{
if (is_unicast_ether_addr(addr))
return dpaa2_switch_port_fdb_add_uc(netdev_priv(dev),
addr);
else
return dpaa2_switch_port_fdb_add_mc(netdev_priv(dev),
addr);
}
static int dpaa2_switch_port_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr, u16 vid)
{
if (is_unicast_ether_addr(addr))
return dpaa2_switch_port_fdb_del_uc(netdev_priv(dev),
addr);
else
return dpaa2_switch_port_fdb_del_mc(netdev_priv(dev),
addr);
}
static void dpaa2_switch_port_get_stats(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
{
......@@ -486,24 +531,66 @@ static int dpaa2_switch_port_carrier_state_sync(struct net_device *netdev)
WARN_ONCE(state.up > 1, "Garbage read into link_state");
if (state.up != port_priv->link_state) {
if (state.up)
if (state.up) {
netif_carrier_on(netdev);
else
netif_tx_start_all_queues(netdev);
} else {
netif_carrier_off(netdev);
netif_tx_stop_all_queues(netdev);
}
port_priv->link_state = state.up;
}
return 0;
}
/* Manage all NAPI instances for the control interface.
*
* We only have one RX queue and one Tx Conf queue for all
* switch ports. Therefore, we only need to enable the NAPI instance once, the
* first time one of the switch ports runs .dev_open().
*/
static void dpaa2_switch_enable_ctrl_if_napi(struct ethsw_core *ethsw)
{
int i;
/* Access to the ethsw->napi_users relies on the RTNL lock */
ASSERT_RTNL();
/* a new interface is using the NAPI instance */
ethsw->napi_users++;
/* if there is already a user of the instance, return */
if (ethsw->napi_users > 1)
return;
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
napi_enable(&ethsw->fq[i].napi);
}
static void dpaa2_switch_disable_ctrl_if_napi(struct ethsw_core *ethsw)
{
int i;
/* Access to the ethsw->napi_users relies on the RTNL lock */
ASSERT_RTNL();
/* If we are not the last interface using the NAPI, return */
ethsw->napi_users--;
if (ethsw->napi_users)
return;
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
napi_disable(&ethsw->fq[i].napi);
}
static int dpaa2_switch_port_open(struct net_device *netdev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
struct ethsw_core *ethsw = port_priv->ethsw_data;
int err;
/* No need to allow Tx as control interface is disabled */
netif_tx_stop_all_queues(netdev);
/* Explicitly set carrier off, otherwise
* netif_carrier_ok() will return true and cause 'ip link show'
* to report the LOWER_UP flag, even though the link
......@@ -527,6 +614,8 @@ static int dpaa2_switch_port_open(struct net_device *netdev)
goto err_carrier_sync;
}
dpaa2_switch_enable_ctrl_if_napi(ethsw);
return 0;
err_carrier_sync:
......@@ -539,6 +628,7 @@ static int dpaa2_switch_port_open(struct net_device *netdev)
static int dpaa2_switch_port_stop(struct net_device *netdev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
struct ethsw_core *ethsw = port_priv->ethsw_data;
int err;
err = dpsw_if_disable(port_priv->ethsw_data->mc_io, 0,
......@@ -549,16 +639,9 @@ static int dpaa2_switch_port_stop(struct net_device *netdev)
return err;
}
return 0;
}
static netdev_tx_t dpaa2_switch_port_dropframe(struct sk_buff *skb,
struct net_device *netdev)
{
/* we don't support I/O for now, drop the frame */
dev_kfree_skb_any(skb);
dpaa2_switch_disable_ctrl_if_napi(ethsw);
return NETDEV_TX_OK;
return 0;
}
static int dpaa2_switch_port_parent_id(struct net_device *dev,
......@@ -646,26 +729,20 @@ static int dpaa2_switch_port_fdb_valid_entry(struct fdb_dump_entry *entry,
return valid;
}
static int dpaa2_switch_port_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
struct net_device *net_dev,
struct net_device *filter_dev, int *idx)
static int dpaa2_switch_fdb_iterate(struct ethsw_port_priv *port_priv,
dpaa2_switch_fdb_cb_t cb, void *data)
{
struct ethsw_port_priv *port_priv = netdev_priv(net_dev);
struct net_device *net_dev = port_priv->netdev;
struct ethsw_core *ethsw = port_priv->ethsw_data;
struct device *dev = net_dev->dev.parent;
struct fdb_dump_entry *fdb_entries;
struct fdb_dump_entry fdb_entry;
struct ethsw_dump_ctx dump = {
.dev = net_dev,
.skb = skb,
.cb = cb,
.idx = *idx,
};
dma_addr_t fdb_dump_iova;
u16 num_fdb_entries;
u32 fdb_dump_size;
int err = 0, i;
u8 *dma_mem;
u16 fdb_id;
fdb_dump_size = ethsw->sw_attr.max_fdb_entries * sizeof(fdb_entry);
dma_mem = kzalloc(fdb_dump_size, GFP_KERNEL);
......@@ -680,7 +757,8 @@ static int dpaa2_switch_port_fdb_dump(struct sk_buff *skb, struct netlink_callba
goto err_map;
}
err = dpsw_fdb_dump(ethsw->mc_io, 0, ethsw->dpsw_handle, 0,
fdb_id = dpaa2_switch_port_get_fdb_id(port_priv);
err = dpsw_fdb_dump(ethsw->mc_io, 0, ethsw->dpsw_handle, fdb_id,
fdb_dump_iova, fdb_dump_size, &num_fdb_entries);
if (err) {
netdev_err(net_dev, "dpsw_fdb_dump() = %d\n", err);
......@@ -693,17 +771,12 @@ static int dpaa2_switch_port_fdb_dump(struct sk_buff *skb, struct netlink_callba
for (i = 0; i < num_fdb_entries; i++) {
fdb_entry = fdb_entries[i];
if (!dpaa2_switch_port_fdb_valid_entry(&fdb_entry, port_priv))
continue;
err = dpaa2_switch_fdb_dump_nl(&fdb_entry, &dump);
err = cb(port_priv, &fdb_entry, data);
if (err)
goto end;
}
end:
*idx = dump.idx;
kfree(dma_mem);
return 0;
......@@ -715,6 +788,87 @@ static int dpaa2_switch_port_fdb_dump(struct sk_buff *skb, struct netlink_callba
return err;
}
static int dpaa2_switch_fdb_entry_dump(struct ethsw_port_priv *port_priv,
struct fdb_dump_entry *fdb_entry,
void *data)
{
if (!dpaa2_switch_port_fdb_valid_entry(fdb_entry, port_priv))
return 0;
return dpaa2_switch_fdb_dump_nl(fdb_entry, data);
}
static int dpaa2_switch_port_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
struct net_device *net_dev,
struct net_device *filter_dev, int *idx)
{
struct ethsw_port_priv *port_priv = netdev_priv(net_dev);
struct ethsw_dump_ctx dump = {
.dev = net_dev,
.skb = skb,
.cb = cb,
.idx = *idx,
};
int err;
err = dpaa2_switch_fdb_iterate(port_priv, dpaa2_switch_fdb_entry_dump, &dump);
*idx = dump.idx;
return err;
}
static int dpaa2_switch_fdb_entry_fast_age(struct ethsw_port_priv *port_priv,
struct fdb_dump_entry *fdb_entry,
void *data __always_unused)
{
if (!dpaa2_switch_port_fdb_valid_entry(fdb_entry, port_priv))
return 0;
if (!(fdb_entry->type & DPSW_FDB_ENTRY_TYPE_DYNAMIC))
return 0;
if (fdb_entry->type & DPSW_FDB_ENTRY_TYPE_UNICAST)
dpaa2_switch_port_fdb_del_uc(port_priv, fdb_entry->mac_addr);
else
dpaa2_switch_port_fdb_del_mc(port_priv, fdb_entry->mac_addr);
return 0;
}
static void dpaa2_switch_port_fast_age(struct ethsw_port_priv *port_priv)
{
dpaa2_switch_fdb_iterate(port_priv,
dpaa2_switch_fdb_entry_fast_age, NULL);
}
static int dpaa2_switch_port_vlan_add(struct net_device *netdev, __be16 proto,
u16 vid)
{
struct switchdev_obj_port_vlan vlan = {
.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
.vid = vid,
.obj.orig_dev = netdev,
/* This API only allows programming tagged, non-PVID VIDs */
.flags = 0,
};
return dpaa2_switch_port_vlans_add(netdev, &vlan);
}
static int dpaa2_switch_port_vlan_kill(struct net_device *netdev, __be16 proto,
u16 vid)
{
struct switchdev_obj_port_vlan vlan = {
.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
.vid = vid,
.obj.orig_dev = netdev,
/* This API only allows programming tagged, non-PVID VIDs */
.flags = 0,
};
return dpaa2_switch_port_vlans_del(netdev, &vlan);
}
static int dpaa2_switch_port_set_mac_addr(struct ethsw_port_priv *port_priv)
{
struct ethsw_core *ethsw = port_priv->ethsw_data;
......@@ -755,6 +909,137 @@ static int dpaa2_switch_port_set_mac_addr(struct ethsw_port_priv *port_priv)
return 0;
}
static void dpaa2_switch_free_fd(const struct ethsw_core *ethsw,
const struct dpaa2_fd *fd)
{
struct device *dev = ethsw->dev;
unsigned char *buffer_start;
struct sk_buff **skbh, *skb;
dma_addr_t fd_addr;
fd_addr = dpaa2_fd_get_addr(fd);
skbh = dpaa2_iova_to_virt(ethsw->iommu_domain, fd_addr);
skb = *skbh;
buffer_start = (unsigned char *)skbh;
dma_unmap_single(dev, fd_addr,
skb_tail_pointer(skb) - buffer_start,
DMA_TO_DEVICE);
/* Move on with skb release */
dev_kfree_skb(skb);
}
static int dpaa2_switch_build_single_fd(struct ethsw_core *ethsw,
struct sk_buff *skb,
struct dpaa2_fd *fd)
{
struct device *dev = ethsw->dev;
struct sk_buff **skbh;
dma_addr_t addr;
u8 *buff_start;
void *hwa;
buff_start = PTR_ALIGN(skb->data - DPAA2_SWITCH_TX_DATA_OFFSET -
DPAA2_SWITCH_TX_BUF_ALIGN,
DPAA2_SWITCH_TX_BUF_ALIGN);
/* Clear FAS to have consistent values for TX confirmation. It is
* located in the first 8 bytes of the buffer's hardware annotation
* area
*/
hwa = buff_start + DPAA2_SWITCH_SWA_SIZE;
memset(hwa, 0, 8);
/* Store a backpointer to the skb at the beginning of the buffer
* (in the private data area) such that we can release it
* on Tx confirm
*/
skbh = (struct sk_buff **)buff_start;
*skbh = skb;
addr = dma_map_single(dev, buff_start,
skb_tail_pointer(skb) - buff_start,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, addr)))
return -ENOMEM;
/* Setup the FD fields */
memset(fd, 0, sizeof(*fd));
dpaa2_fd_set_addr(fd, addr);
dpaa2_fd_set_offset(fd, (u16)(skb->data - buff_start));
dpaa2_fd_set_len(fd, skb->len);
dpaa2_fd_set_format(fd, dpaa2_fd_single);
return 0;
}
static netdev_tx_t dpaa2_switch_port_tx(struct sk_buff *skb,
struct net_device *net_dev)
{
struct ethsw_port_priv *port_priv = netdev_priv(net_dev);
struct ethsw_core *ethsw = port_priv->ethsw_data;
int retries = DPAA2_SWITCH_SWP_BUSY_RETRIES;
struct dpaa2_fd fd;
int err;
if (unlikely(skb_headroom(skb) < DPAA2_SWITCH_NEEDED_HEADROOM)) {
struct sk_buff *ns;
ns = skb_realloc_headroom(skb, DPAA2_SWITCH_NEEDED_HEADROOM);
if (unlikely(!ns)) {
net_err_ratelimited("%s: Error reallocating skb headroom\n", net_dev->name);
goto err_free_skb;
}
dev_consume_skb_any(skb);
skb = ns;
}
/* We'll be holding a back-reference to the skb until Tx confirmation */
skb = skb_unshare(skb, GFP_ATOMIC);
if (unlikely(!skb)) {
/* skb_unshare() has already freed the skb */
net_err_ratelimited("%s: Error copying the socket buffer\n", net_dev->name);
goto err_exit;
}
/* At this stage, we do not support non-linear skbs so just try to
* linearize the skb and if that's not working, just drop the packet.
*/
err = skb_linearize(skb);
if (err) {
net_err_ratelimited("%s: skb_linearize error (%d)!\n", net_dev->name, err);
goto err_free_skb;
}
err = dpaa2_switch_build_single_fd(ethsw, skb, &fd);
if (unlikely(err)) {
net_err_ratelimited("%s: ethsw_build_*_fd() %d\n", net_dev->name, err);
goto err_free_skb;
}
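/* Enqueue the frame, retrying while the QBMan portal is busy, up to
 * DPAA2_SWITCH_SWP_BUSY_RETRIES attempts
 */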
do {
err = dpaa2_io_service_enqueue_qd(NULL,
port_priv->tx_qdid,
8, 0, &fd);
retries--;
} while (err == -EBUSY && retries);
if (unlikely(err < 0)) {
dpaa2_switch_free_fd(ethsw, &fd);
goto err_exit;
}
return NETDEV_TX_OK;
err_free_skb:
dev_kfree_skb(skb);
err_exit:
return NETDEV_TX_OK;
}
static const struct net_device_ops dpaa2_switch_port_ops = {
.ndo_open = dpaa2_switch_port_open,
.ndo_stop = dpaa2_switch_port_stop,
......@@ -764,27 +1049,18 @@ static const struct net_device_ops dpaa2_switch_port_ops = {
.ndo_change_mtu = dpaa2_switch_port_change_mtu,
.ndo_has_offload_stats = dpaa2_switch_port_has_offload_stats,
.ndo_get_offload_stats = dpaa2_switch_port_get_offload_stats,
.ndo_fdb_add = dpaa2_switch_port_fdb_add,
.ndo_fdb_del = dpaa2_switch_port_fdb_del,
.ndo_fdb_dump = dpaa2_switch_port_fdb_dump,
.ndo_vlan_rx_add_vid = dpaa2_switch_port_vlan_add,
.ndo_vlan_rx_kill_vid = dpaa2_switch_port_vlan_kill,
.ndo_start_xmit = dpaa2_switch_port_dropframe,
.ndo_start_xmit = dpaa2_switch_port_tx,
.ndo_get_port_parent_id = dpaa2_switch_port_parent_id,
.ndo_get_phys_port_name = dpaa2_switch_port_get_phys_name,
};
static bool dpaa2_switch_port_dev_check(const struct net_device *netdev,
struct notifier_block *nb)
bool dpaa2_switch_port_dev_check(const struct net_device *netdev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
if (netdev->netdev_ops == &dpaa2_switch_port_ops &&
(!nb || &port_priv->ethsw_data->port_nb == nb ||
&port_priv->ethsw_data->port_switchdev_nb == nb ||
&port_priv->ethsw_data->port_switchdevb_nb == nb))
return true;
return false;
return netdev->netdev_ops == &dpaa2_switch_port_ops;
}
static void dpaa2_switch_links_state_update(struct ethsw_core *ethsw)
......@@ -908,43 +1184,9 @@ static int dpaa2_switch_port_attr_stp_state_set(struct net_device *netdev,
return dpaa2_switch_port_set_stp_state(port_priv, state);
}
static int
dpaa2_switch_port_attr_br_flags_pre_set(struct net_device *netdev,
struct switchdev_brport_flags flags)
{
if (flags.mask & ~(BR_LEARNING | BR_FLOOD))
return -EINVAL;
return 0;
}
static int
dpaa2_switch_port_attr_br_flags_set(struct net_device *netdev,
struct switchdev_brport_flags flags)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
int err = 0;
if (flags.mask & BR_LEARNING) {
/* Learning is enabled per switch */
err = dpaa2_switch_set_learning(port_priv->ethsw_data,
!!(flags.val & BR_LEARNING));
if (err)
return err;
}
if (flags.mask & BR_FLOOD) {
err = dpaa2_switch_port_set_flood(port_priv,
!!(flags.val & BR_FLOOD));
if (err)
return err;
}
return 0;
}
static int dpaa2_switch_port_attr_set(struct net_device *netdev,
const struct switchdev_attr *attr)
const struct switchdev_attr *attr,
struct netlink_ext_ack *extack)
{
int err = 0;
......@@ -953,16 +1195,12 @@ static int dpaa2_switch_port_attr_set(struct net_device *netdev,
err = dpaa2_switch_port_attr_stp_state_set(netdev,
attr->u.stp_state);
break;
case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
err = dpaa2_switch_port_attr_br_flags_pre_set(netdev,
attr->u.brport_flags);
break;
case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
err = dpaa2_switch_port_attr_br_flags_set(netdev,
attr->u.brport_flags);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
/* VLANs are supported by default */
if (!attr->u.vlan_filtering) {
NL_SET_ERR_MSG_MOD(extack,
"The DPAA2 switch does not support VLAN-unaware operation");
return -EOPNOTSUPP;
}
break;
default:
err = -EOPNOTSUPP;
......@@ -972,8 +1210,8 @@ static int dpaa2_switch_port_attr_set(struct net_device *netdev,
return err;
}
static int dpaa2_switch_port_vlans_add(struct net_device *netdev,
const struct switchdev_obj_port_vlan *vlan)
int dpaa2_switch_port_vlans_add(struct net_device *netdev,
const struct switchdev_obj_port_vlan *vlan)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
struct ethsw_core *ethsw = port_priv->ethsw_data;
......@@ -1008,7 +1246,7 @@ static int dpaa2_switch_port_vlans_add(struct net_device *netdev,
if (!port_priv->ethsw_data->vlans[vlan->vid]) {
/* this is a new VLAN */
err = dpaa2_switch_add_vlan(port_priv->ethsw_data, vlan->vid);
err = dpaa2_switch_add_vlan(port_priv, vlan->vid);
if (err)
return err;
......@@ -1091,7 +1329,11 @@ static int dpaa2_switch_port_del_vlan(struct ethsw_port_priv *port_priv, u16 vid
return -ENOENT;
if (port_priv->vlans[vid] & ETHSW_VLAN_PVID) {
err = dpaa2_switch_port_set_pvid(port_priv, 0);
/* If we are deleting the PVID of a port, use VLAN 4095 instead
* as we are sure that neither the bridge nor the 8021q module
* will use it
*/
err = dpaa2_switch_port_set_pvid(port_priv, 4095);
if (err)
return err;
}
......@@ -1137,8 +1379,8 @@ static int dpaa2_switch_port_del_vlan(struct ethsw_port_priv *port_priv, u16 vid
return 0;
}
static int dpaa2_switch_port_vlans_del(struct net_device *netdev,
const struct switchdev_obj_port_vlan *vlan)
int dpaa2_switch_port_vlans_del(struct net_device *netdev,
const struct switchdev_obj_port_vlan *vlan)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
......@@ -1190,38 +1432,70 @@ static int dpaa2_switch_port_obj_del(struct net_device *netdev,
}
static int dpaa2_switch_port_attr_set_event(struct net_device *netdev,
struct switchdev_notifier_port_attr_info
*port_attr_info)
struct switchdev_notifier_port_attr_info *ptr)
{
int err;
err = dpaa2_switch_port_attr_set(netdev, port_attr_info->attr);
port_attr_info->handled = true;
err = switchdev_handle_port_attr_set(netdev, ptr,
dpaa2_switch_port_dev_check,
dpaa2_switch_port_attr_set);
return notifier_from_errno(err);
}
/* For the moment, only flood setting needs to be updated */
static int dpaa2_switch_port_bridge_join(struct net_device *netdev,
struct net_device *upper_dev)
static int dpaa2_switch_fdb_set_egress_flood(struct ethsw_core *ethsw, u16 fdb_id)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
struct ethsw_core *ethsw = port_priv->ethsw_data;
struct ethsw_port_priv *other_port_priv;
struct dpsw_egress_flood_cfg flood_cfg;
int i = 0, j;
int err;
/* Add all the DPAA2 switch ports found in the same bridging domain to
* the egress flooding domain
*/
for (j = 0; j < ethsw->sw_attr.num_ifs; j++)
if (ethsw->ports[j] && ethsw->ports[j]->fdb->fdb_id == fdb_id)
flood_cfg.if_id[i++] = ethsw->ports[j]->idx;
/* Add the CTRL interface to the egress flooding domain */
flood_cfg.if_id[i++] = ethsw->sw_attr.num_ifs;
/* Use the FDB of the first dpaa2 switch port added to the bridge */
flood_cfg.fdb_id = fdb_id;
/* Setup broadcast flooding domain */
flood_cfg.flood_type = DPSW_BROADCAST;
flood_cfg.num_ifs = i;
err = dpsw_set_egress_flood(ethsw->mc_io, 0, ethsw->dpsw_handle,
&flood_cfg);
if (err) {
dev_err(ethsw->dev, "dpsw_set_egress_flood() = %d\n", err);
return err;
}
/* Setup unknown flooding domain */
flood_cfg.flood_type = DPSW_FLOODING;
flood_cfg.num_ifs = i;
err = dpsw_set_egress_flood(ethsw->mc_io, 0, ethsw->dpsw_handle,
&flood_cfg);
if (err) {
dev_err(ethsw->dev, "dpsw_set_egress_flood() = %d\n", err);
return err;
}
return 0;
}
static int dpaa2_switch_port_bridge_join(struct net_device *netdev,
struct net_device *upper_dev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
struct ethsw_core *ethsw = port_priv->ethsw_data;
struct ethsw_port_priv *other_port_priv;
struct net_device *other_dev;
struct list_head *iter;
int i, err;
for (i = 0; i < ethsw->sw_attr.num_ifs; i++)
if (ethsw->ports[i]->bridge_dev &&
(ethsw->ports[i]->bridge_dev != upper_dev)) {
netdev_err(netdev,
"Only one bridge supported per DPSW object!\n");
return -EINVAL;
}
int err;
netdev_for_each_lower_dev(upper_dev, other_dev, iter) {
if (!dpaa2_switch_port_dev_check(other_dev, NULL))
if (!dpaa2_switch_port_dev_check(other_dev))
continue;
other_port_priv = netdev_priv(other_dev);
......@@ -1232,25 +1506,103 @@ static int dpaa2_switch_port_bridge_join(struct net_device *netdev,
}
}
/* Enable flooding */
err = dpaa2_switch_port_set_flood(port_priv, 1);
if (!err)
port_priv->bridge_dev = upper_dev;
/* Delete the previously manually installed VLAN 1 */
err = dpaa2_switch_port_del_vlan(port_priv, 1);
if (err)
return err;
dpaa2_switch_port_set_fdb(port_priv, upper_dev);
/* Setup the egress flood policy (broadcast, unknown unicast) */
err = dpaa2_switch_fdb_set_egress_flood(ethsw, port_priv->fdb->fdb_id);
if (err)
goto err_egress_flood;
return 0;
err_egress_flood:
dpaa2_switch_port_set_fdb(port_priv, NULL);
return err;
}
static int dpaa2_switch_port_clear_rxvlan(struct net_device *vdev, int vid, void *arg)
{
__be16 vlan_proto = htons(ETH_P_8021Q);
if (vdev)
vlan_proto = vlan_dev_vlan_proto(vdev);
return dpaa2_switch_port_vlan_kill(arg, vlan_proto, vid);
}
static int dpaa2_switch_port_restore_rxvlan(struct net_device *vdev, int vid, void *arg)
{
__be16 vlan_proto = htons(ETH_P_8021Q);
if (vdev)
vlan_proto = vlan_dev_vlan_proto(vdev);
return dpaa2_switch_port_vlan_add(arg, vlan_proto, vid);
}
static int dpaa2_switch_port_bridge_leave(struct net_device *netdev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
struct dpaa2_switch_fdb *old_fdb = port_priv->fdb;
struct ethsw_core *ethsw = port_priv->ethsw_data;
int err;
/* Disable flooding */
err = dpaa2_switch_port_set_flood(port_priv, 0);
if (!err)
port_priv->bridge_dev = NULL;
/* First of all, fast age any learned FDB addresses on this switch port */
dpaa2_switch_port_fast_age(port_priv);
return err;
/* Clear all RX VLANs installed through vlan_vid_add() either as VLAN
* upper devices or otherwise from the FDB table that we are about to
* leave
*/
err = vlan_for_each(netdev, dpaa2_switch_port_clear_rxvlan, netdev);
if (err)
netdev_err(netdev, "Unable to clear RX VLANs from old FDB table, err (%d)\n", err);
dpaa2_switch_port_set_fdb(port_priv, NULL);
/* Restore all RX VLANs into the new FDB table that we just joined */
err = vlan_for_each(netdev, dpaa2_switch_port_restore_rxvlan, netdev);
if (err)
netdev_err(netdev, "Unable to restore RX VLANs to the new FDB, err (%d)\n", err);
/* Setup the egress flood policy (broadcast, unknown unicast).
* When the port is not under a bridge, only the CTRL interface is part
* of the flooding domain besides the actual port
*/
err = dpaa2_switch_fdb_set_egress_flood(ethsw, port_priv->fdb->fdb_id);
if (err)
return err;
/* Recreate the egress flood domain of the FDB that we just left */
err = dpaa2_switch_fdb_set_egress_flood(ethsw, old_fdb->fdb_id);
if (err)
return err;
/* Add the VLAN 1 as PVID when not under a bridge. We need this since
* the dpaa2 switch interfaces are not capable of being VLAN unaware
*/
return dpaa2_switch_port_add_vlan(port_priv, DEFAULT_VLAN_ID,
BRIDGE_VLAN_INFO_UNTAGGED | BRIDGE_VLAN_INFO_PVID);
}
static int dpaa2_switch_prevent_bridging_with_8021q_upper(struct net_device *netdev)
{
struct net_device *upper_dev;
struct list_head *iter;
/* RCU read lock not necessary because we have write-side protection
* (rtnl_mutex), however a non-rcu iterator does not exist.
*/
netdev_for_each_upper_dev_rcu(netdev, upper_dev, iter)
if (is_vlan_dev(upper_dev))
return -EOPNOTSUPP;
return 0;
}
static int dpaa2_switch_port_netdevice_event(struct notifier_block *nb,
......@@ -1258,14 +1610,36 @@ static int dpaa2_switch_port_netdevice_event(struct notifier_block *nb,
{
struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
struct netdev_notifier_changeupper_info *info = ptr;
struct netlink_ext_ack *extack;
struct net_device *upper_dev;
int err = 0;
if (!dpaa2_switch_port_dev_check(netdev, nb))
if (!dpaa2_switch_port_dev_check(netdev))
return NOTIFY_DONE;
/* Handle just upper dev link/unlink for the moment */
if (event == NETDEV_CHANGEUPPER) {
extack = netdev_notifier_info_to_extack(&info->info);
switch (event) {
case NETDEV_PRECHANGEUPPER:
upper_dev = info->upper_dev;
if (!netif_is_bridge_master(upper_dev))
break;
if (!br_vlan_enabled(upper_dev)) {
NL_SET_ERR_MSG_MOD(extack, "Cannot join a VLAN-unaware bridge");
err = -EOPNOTSUPP;
goto out;
}
err = dpaa2_switch_prevent_bridging_with_8021q_upper(netdev);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Cannot join a bridge while VLAN uppers are present");
goto out;
}
break;
case NETDEV_CHANGEUPPER:
upper_dev = info->upper_dev;
if (netif_is_bridge_master(upper_dev)) {
if (info->linking)
......@@ -1273,8 +1647,10 @@ static int dpaa2_switch_port_netdevice_event(struct notifier_block *nb,
else
err = dpaa2_switch_port_bridge_leave(netdev);
}
break;
}
out:
return notifier_from_errno(err);
}
......@@ -1338,12 +1714,12 @@ static int dpaa2_switch_port_event(struct notifier_block *nb,
struct switchdev_notifier_fdb_info *fdb_info = ptr;
struct ethsw_core *ethsw = port_priv->ethsw_data;
if (!dpaa2_switch_port_dev_check(dev, nb))
return NOTIFY_DONE;
if (event == SWITCHDEV_PORT_ATTR_SET)
return dpaa2_switch_port_attr_set_event(dev, ptr);
if (!dpaa2_switch_port_dev_check(dev))
return NOTIFY_DONE;
switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
if (!switchdev_work)
return NOTIFY_BAD;
......@@ -1387,6 +1763,9 @@ static int dpaa2_switch_port_obj_event(unsigned long event,
{
int err = -EOPNOTSUPP;
if (!dpaa2_switch_port_dev_check(netdev))
return NOTIFY_DONE;
switch (event) {
case SWITCHDEV_PORT_OBJ_ADD:
err = dpaa2_switch_port_obj_add(netdev, port_obj_info->obj);
......@@ -1405,9 +1784,6 @@ static int dpaa2_switch_port_blocking_event(struct notifier_block *nb,
{
struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
if (!dpaa2_switch_port_dev_check(dev, nb))
return NOTIFY_DONE;
switch (event) {
case SWITCHDEV_PORT_OBJ_ADD:
case SWITCHDEV_PORT_OBJ_DEL:
......@@ -1419,53 +1795,592 @@ static int dpaa2_switch_port_blocking_event(struct notifier_block *nb,
return NOTIFY_DONE;
}
static int dpaa2_switch_register_notifier(struct device *dev)
/* Build a linear skb based on a single-buffer frame descriptor */
static struct sk_buff *dpaa2_switch_build_linear_skb(struct ethsw_core *ethsw,
const struct dpaa2_fd *fd)
{
struct ethsw_core *ethsw = dev_get_drvdata(dev);
u16 fd_offset = dpaa2_fd_get_offset(fd);
dma_addr_t addr = dpaa2_fd_get_addr(fd);
u32 fd_length = dpaa2_fd_get_len(fd);
struct device *dev = ethsw->dev;
struct sk_buff *skb = NULL;
void *fd_vaddr;
fd_vaddr = dpaa2_iova_to_virt(ethsw->iommu_domain, addr);
dma_unmap_page(dev, addr, DPAA2_SWITCH_RX_BUF_SIZE,
DMA_FROM_DEVICE);
skb = build_skb(fd_vaddr, DPAA2_SWITCH_RX_BUF_SIZE +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
if (unlikely(!skb)) {
dev_err(dev, "build_skb() failed\n");
return NULL;
}
skb_reserve(skb, fd_offset);
skb_put(skb, fd_length);
ethsw->buf_count--;
return skb;
}
static void dpaa2_switch_tx_conf(struct dpaa2_switch_fq *fq,
const struct dpaa2_fd *fd)
{
dpaa2_switch_free_fd(fq->ethsw, fd);
}
static void dpaa2_switch_rx(struct dpaa2_switch_fq *fq,
const struct dpaa2_fd *fd)
{
struct ethsw_core *ethsw = fq->ethsw;
struct ethsw_port_priv *port_priv;
struct net_device *netdev;
struct vlan_ethhdr *hdr;
struct sk_buff *skb;
u16 vlan_tci, vid;
int if_id, err;
/* get switch ingress interface ID */
if_id = upper_32_bits(dpaa2_fd_get_flc(fd)) & 0x0000FFFF;
if (if_id >= ethsw->sw_attr.num_ifs) {
dev_err(ethsw->dev, "Frame received from unknown interface!\n");
goto err_free_fd;
}
port_priv = ethsw->ports[if_id];
netdev = port_priv->netdev;
/* build the SKB based on the FD received */
if (dpaa2_fd_get_format(fd) != dpaa2_fd_single) {
if (net_ratelimit()) {
netdev_err(netdev, "Received invalid frame format\n");
goto err_free_fd;
}
}
skb = dpaa2_switch_build_linear_skb(ethsw, fd);
if (unlikely(!skb))
goto err_free_fd;
skb_reset_mac_header(skb);
/* Remove the VLAN header if the packet that we just received has a vid
* equal to the port PVID. Since the dpaa2-switch can operate only in
* VLAN-aware mode and no alterations are made on the packet when it's
* redirected/mirrored to the control interface, we are sure that there
* will always be a VLAN header present.
*/
hdr = vlan_eth_hdr(skb);
vid = ntohs(hdr->h_vlan_TCI) & VLAN_VID_MASK;
if (vid == port_priv->pvid) {
err = __skb_vlan_pop(skb, &vlan_tci);
if (err) {
dev_info(ethsw->dev, "__skb_vlan_pop() returned %d", err);
goto err_free_fd;
}
}
skb->dev = netdev;
skb->protocol = eth_type_trans(skb, skb->dev);
netif_receive_skb(skb);
return;
err_free_fd:
dpaa2_switch_free_fd(ethsw, fd);
}
static void dpaa2_switch_detect_features(struct ethsw_core *ethsw)
{
ethsw->features = 0;
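/* Programming the switch port MAC addresses (ETHSW_FEATURE_MAC_ADDR)
 * is only supported by DPSW version 8.6 and newer
 */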
if (ethsw->major > 8 || (ethsw->major == 8 && ethsw->minor >= 6))
ethsw->features |= ETHSW_FEATURE_MAC_ADDR;
}
static int dpaa2_switch_setup_fqs(struct ethsw_core *ethsw)
{
struct dpsw_ctrl_if_attr ctrl_if_attr;
struct device *dev = ethsw->dev;
int i = 0;
int err;
ethsw->port_nb.notifier_call = dpaa2_switch_port_netdevice_event;
err = register_netdevice_notifier(&ethsw->port_nb);
err = dpsw_ctrl_if_get_attributes(ethsw->mc_io, 0, ethsw->dpsw_handle,
&ctrl_if_attr);
if (err) {
dev_err(dev, "Failed to register netdev notifier\n");
dev_err(dev, "dpsw_ctrl_if_get_attributes() = %d\n", err);
return err;
}
ethsw->port_switchdev_nb.notifier_call = dpaa2_switch_port_event;
err = register_switchdev_notifier(&ethsw->port_switchdev_nb);
ethsw->fq[i].fqid = ctrl_if_attr.rx_fqid;
ethsw->fq[i].ethsw = ethsw;
ethsw->fq[i++].type = DPSW_QUEUE_RX;
ethsw->fq[i].fqid = ctrl_if_attr.tx_err_conf_fqid;
ethsw->fq[i].ethsw = ethsw;
ethsw->fq[i++].type = DPSW_QUEUE_TX_ERR_CONF;
return 0;
}
/* Free buffers acquired from the buffer pool or which were meant to
* be released in the pool
*/
static void dpaa2_switch_free_bufs(struct ethsw_core *ethsw, u64 *buf_array, int count)
{
struct device *dev = ethsw->dev;
void *vaddr;
int i;
for (i = 0; i < count; i++) {
vaddr = dpaa2_iova_to_virt(ethsw->iommu_domain, buf_array[i]);
dma_unmap_page(dev, buf_array[i], DPAA2_SWITCH_RX_BUF_SIZE,
DMA_FROM_DEVICE);
free_pages((unsigned long)vaddr, 0);
}
}
/* Perform a single release command to add buffers
* to the specified buffer pool
*/
static int dpaa2_switch_add_bufs(struct ethsw_core *ethsw, u16 bpid)
{
struct device *dev = ethsw->dev;
u64 buf_array[BUFS_PER_CMD];
struct page *page;
int retries = 0;
dma_addr_t addr;
int err;
int i;
for (i = 0; i < BUFS_PER_CMD; i++) {
/* Allocate one page for each Rx buffer. WRIOP sees
* the entire page except for a tailroom reserved for
* skb shared info
*/
page = dev_alloc_pages(0);
if (!page) {
dev_err(dev, "buffer allocation failed\n");
goto err_alloc;
}
addr = dma_map_page(dev, page, 0, DPAA2_SWITCH_RX_BUF_SIZE,
DMA_FROM_DEVICE);
if (dma_mapping_error(dev, addr)) {
dev_err(dev, "dma_map_single() failed\n");
goto err_map;
}
buf_array[i] = addr;
}
release_bufs:
/* In case the portal is busy, retry until successful or
* max retries hit.
*/
while ((err = dpaa2_io_service_release(NULL, bpid,
buf_array, i)) == -EBUSY) {
if (retries++ >= DPAA2_SWITCH_SWP_BUSY_RETRIES)
break;
cpu_relax();
}
/* If release command failed, clean up and bail out. */
if (err) {
dev_err(dev, "Failed to register switchdev notifier\n");
goto err_switchdev_nb;
dpaa2_switch_free_bufs(ethsw, buf_array, i);
return 0;
}
return i;
err_map:
__free_pages(page, 0);
err_alloc:
/* If we managed to allocate at least some buffers,
* release them to hardware
*/
if (i)
goto release_bufs;
return 0;
}
static int dpaa2_switch_refill_bp(struct ethsw_core *ethsw)
{
int *count = &ethsw->buf_count;
int new_count;
int err = 0;
if (unlikely(*count < DPAA2_ETHSW_REFILL_THRESH)) {
do {
new_count = dpaa2_switch_add_bufs(ethsw, ethsw->bpid);
if (unlikely(!new_count)) {
/* Out of memory; abort for now, we'll
* try later on
*/
break;
}
*count += new_count;
} while (*count < DPAA2_ETHSW_NUM_BUFS);
if (unlikely(*count < DPAA2_ETHSW_NUM_BUFS))
err = -ENOMEM;
}
ethsw->port_switchdevb_nb.notifier_call = dpaa2_switch_port_blocking_event;
err = register_switchdev_blocking_notifier(&ethsw->port_switchdevb_nb);
return err;
}
static int dpaa2_switch_seed_bp(struct ethsw_core *ethsw)
{
int *count, i;
for (i = 0; i < DPAA2_ETHSW_NUM_BUFS; i += BUFS_PER_CMD) {
count = &ethsw->buf_count;
*count += dpaa2_switch_add_bufs(ethsw, ethsw->bpid);
if (unlikely(*count < BUFS_PER_CMD))
return -ENOMEM;
}
return 0;
}
static void dpaa2_switch_drain_bp(struct ethsw_core *ethsw)
{
u64 buf_array[BUFS_PER_CMD];
int ret;
do {
ret = dpaa2_io_service_acquire(NULL, ethsw->bpid,
buf_array, BUFS_PER_CMD);
if (ret < 0) {
dev_err(ethsw->dev,
"dpaa2_io_service_acquire() = %d\n", ret);
return;
}
dpaa2_switch_free_bufs(ethsw, buf_array, ret);
} while (ret);
}
static int dpaa2_switch_setup_dpbp(struct ethsw_core *ethsw)
{
struct dpsw_ctrl_if_pools_cfg dpsw_ctrl_if_pools_cfg = { 0 };
struct device *dev = ethsw->dev;
struct fsl_mc_device *dpbp_dev;
struct dpbp_attr dpbp_attrs;
int err;
err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
&dpbp_dev);
if (err) {
dev_err(dev, "Failed to register switchdev blocking notifier\n");
goto err_switchdev_blocking_nb;
if (err == -ENXIO)
err = -EPROBE_DEFER;
else
dev_err(dev, "DPBP device allocation failed\n");
return err;
}
ethsw->dpbp_dev = dpbp_dev;
err = dpbp_open(ethsw->mc_io, 0, dpbp_dev->obj_desc.id,
&dpbp_dev->mc_handle);
if (err) {
dev_err(dev, "dpbp_open() failed\n");
goto err_open;
}
err = dpbp_reset(ethsw->mc_io, 0, dpbp_dev->mc_handle);
if (err) {
dev_err(dev, "dpbp_reset() failed\n");
goto err_reset;
}
err = dpbp_enable(ethsw->mc_io, 0, dpbp_dev->mc_handle);
if (err) {
dev_err(dev, "dpbp_enable() failed\n");
goto err_enable;
}
err = dpbp_get_attributes(ethsw->mc_io, 0, dpbp_dev->mc_handle,
&dpbp_attrs);
if (err) {
dev_err(dev, "dpbp_get_attributes() failed\n");
goto err_get_attr;
}
dpsw_ctrl_if_pools_cfg.num_dpbp = 1;
dpsw_ctrl_if_pools_cfg.pools[0].dpbp_id = dpbp_attrs.id;
dpsw_ctrl_if_pools_cfg.pools[0].buffer_size = DPAA2_SWITCH_RX_BUF_SIZE;
dpsw_ctrl_if_pools_cfg.pools[0].backup_pool = 0;
err = dpsw_ctrl_if_set_pools(ethsw->mc_io, 0, ethsw->dpsw_handle,
&dpsw_ctrl_if_pools_cfg);
if (err) {
dev_err(dev, "dpsw_ctrl_if_set_pools() failed\n");
goto err_get_attr;
}
ethsw->bpid = dpbp_attrs.id;
return 0;
err_switchdev_blocking_nb:
unregister_switchdev_notifier(&ethsw->port_switchdev_nb);
err_switchdev_nb:
unregister_netdevice_notifier(&ethsw->port_nb);
err_get_attr:
dpbp_disable(ethsw->mc_io, 0, dpbp_dev->mc_handle);
err_enable:
err_reset:
dpbp_close(ethsw->mc_io, 0, dpbp_dev->mc_handle);
err_open:
fsl_mc_object_free(dpbp_dev);
return err;
}
static void dpaa2_switch_detect_features(struct ethsw_core *ethsw)
static void dpaa2_switch_free_dpbp(struct ethsw_core *ethsw)
{
ethsw->features = 0;
dpbp_disable(ethsw->mc_io, 0, ethsw->dpbp_dev->mc_handle);
dpbp_close(ethsw->mc_io, 0, ethsw->dpbp_dev->mc_handle);
fsl_mc_object_free(ethsw->dpbp_dev);
}
if (ethsw->major > 8 || (ethsw->major == 8 && ethsw->minor >= 6))
ethsw->features |= ETHSW_FEATURE_MAC_ADDR;
static int dpaa2_switch_alloc_rings(struct ethsw_core *ethsw)
{
int i;
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++) {
ethsw->fq[i].store =
dpaa2_io_store_create(DPAA2_SWITCH_STORE_SIZE,
ethsw->dev);
if (!ethsw->fq[i].store) {
dev_err(ethsw->dev, "dpaa2_io_store_create failed\n");
while (--i >= 0)
dpaa2_io_store_destroy(ethsw->fq[i].store);
return -ENOMEM;
}
}
return 0;
}
static void dpaa2_switch_destroy_rings(struct ethsw_core *ethsw)
{
int i;
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
dpaa2_io_store_destroy(ethsw->fq[i].store);
}
static int dpaa2_switch_pull_fq(struct dpaa2_switch_fq *fq)
{
int err, retries = 0;
/* Try to pull from the FQ while the portal is busy and we didn't hit
* the maximum number of retries
*/
do {
err = dpaa2_io_service_pull_fq(NULL, fq->fqid, fq->store);
cpu_relax();
} while (err == -EBUSY && retries++ < DPAA2_SWITCH_SWP_BUSY_RETRIES);
if (unlikely(err))
dev_err(fq->ethsw->dev, "dpaa2_io_service_pull err %d", err);
return err;
}
/* Consume all frames pull-dequeued into the store */
static int dpaa2_switch_store_consume(struct dpaa2_switch_fq *fq)
{
struct ethsw_core *ethsw = fq->ethsw;
int cleaned = 0, is_last;
struct dpaa2_dq *dq;
int retries = 0;
do {
/* Get the next available FD from the store */
dq = dpaa2_io_store_next(fq->store, &is_last);
if (unlikely(!dq)) {
if (retries++ >= DPAA2_SWITCH_SWP_BUSY_RETRIES) {
dev_err_once(ethsw->dev,
"No valid dequeue response\n");
return -ETIMEDOUT;
}
continue;
}
if (fq->type == DPSW_QUEUE_RX)
dpaa2_switch_rx(fq, dpaa2_dq_fd(dq));
else
dpaa2_switch_tx_conf(fq, dpaa2_dq_fd(dq));
cleaned++;
} while (!is_last);
return cleaned;
}
/* NAPI poll routine */
static int dpaa2_switch_poll(struct napi_struct *napi, int budget)
{
int err, cleaned = 0, store_cleaned, work_done;
struct dpaa2_switch_fq *fq;
int retries = 0;
fq = container_of(napi, struct dpaa2_switch_fq, napi);
do {
err = dpaa2_switch_pull_fq(fq);
if (unlikely(err))
break;
/* Refill pool if appropriate */
dpaa2_switch_refill_bp(fq->ethsw);
store_cleaned = dpaa2_switch_store_consume(fq);
cleaned += store_cleaned;
if (cleaned >= budget) {
work_done = budget;
goto out;
}
} while (store_cleaned);
/* We didn't consume the entire budget, so finish napi and re-enable
* data availability notifications
*/
napi_complete_done(napi, cleaned);
do {
err = dpaa2_io_service_rearm(NULL, &fq->nctx);
cpu_relax();
} while (err == -EBUSY && retries++ < DPAA2_SWITCH_SWP_BUSY_RETRIES);
work_done = max(cleaned, 1);
out:
return work_done;
}
static void dpaa2_switch_fqdan_cb(struct dpaa2_io_notification_ctx *nctx)
{
struct dpaa2_switch_fq *fq;
fq = container_of(nctx, struct dpaa2_switch_fq, nctx);
napi_schedule(&fq->napi);
}
static int dpaa2_switch_setup_dpio(struct ethsw_core *ethsw)
{
struct dpsw_ctrl_if_queue_cfg queue_cfg;
struct dpaa2_io_notification_ctx *nctx;
int err, i, j;
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++) {
nctx = &ethsw->fq[i].nctx;
/* Register a new software context for the FQID.
* By using NULL as the first parameter, we specify that we do
* not care on which CPU the interrupts for this queue are received
*/
nctx->is_cdan = 0;
nctx->id = ethsw->fq[i].fqid;
nctx->desired_cpu = DPAA2_IO_ANY_CPU;
nctx->cb = dpaa2_switch_fqdan_cb;
err = dpaa2_io_service_register(NULL, nctx, ethsw->dev);
if (err) {
err = -EPROBE_DEFER;
goto err_register;
}
queue_cfg.options = DPSW_CTRL_IF_QUEUE_OPT_DEST |
DPSW_CTRL_IF_QUEUE_OPT_USER_CTX;
queue_cfg.dest_cfg.dest_type = DPSW_CTRL_IF_DEST_DPIO;
queue_cfg.dest_cfg.dest_id = nctx->dpio_id;
queue_cfg.dest_cfg.priority = 0;
queue_cfg.user_ctx = nctx->qman64;
err = dpsw_ctrl_if_set_queue(ethsw->mc_io, 0,
ethsw->dpsw_handle,
ethsw->fq[i].type,
&queue_cfg);
if (err)
goto err_set_queue;
}
return 0;
err_set_queue:
dpaa2_io_service_deregister(NULL, nctx, ethsw->dev);
err_register:
for (j = 0; j < i; j++)
dpaa2_io_service_deregister(NULL, &ethsw->fq[j].nctx,
ethsw->dev);
return err;
}
static void dpaa2_switch_free_dpio(struct ethsw_core *ethsw)
{
int i;
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
dpaa2_io_service_deregister(NULL, &ethsw->fq[i].nctx,
ethsw->dev);
}
static int dpaa2_switch_ctrl_if_setup(struct ethsw_core *ethsw)
{
int err;
/* setup FQs for Rx and Tx Conf */
err = dpaa2_switch_setup_fqs(ethsw);
if (err)
return err;
/* setup the buffer pool needed on the Rx path */
err = dpaa2_switch_setup_dpbp(ethsw);
if (err)
return err;
err = dpaa2_switch_seed_bp(ethsw);
if (err)
goto err_free_dpbp;
err = dpaa2_switch_alloc_rings(ethsw);
if (err)
goto err_drain_dpbp;
err = dpaa2_switch_setup_dpio(ethsw);
if (err)
goto err_destroy_rings;
err = dpsw_ctrl_if_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
if (err) {
dev_err(ethsw->dev, "dpsw_ctrl_if_enable err %d\n", err);
goto err_deregister_dpio;
}
return 0;
err_deregister_dpio:
dpaa2_switch_free_dpio(ethsw);
err_destroy_rings:
dpaa2_switch_destroy_rings(ethsw);
err_drain_dpbp:
dpaa2_switch_drain_bp(ethsw);
err_free_dpbp:
dpaa2_switch_free_dpbp(ethsw);
return err;
}
static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
{
struct device *dev = &sw_dev->dev;
struct ethsw_core *ethsw = dev_get_drvdata(dev);
struct dpsw_vlan_if_cfg vcfg = {0};
struct dpsw_tci_cfg tci_cfg = {0};
struct dpsw_stp_cfg stp_cfg;
int err;
u16 i;
......@@ -1497,11 +2412,14 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
if (ethsw->major < DPSW_MIN_VER_MAJOR ||
(ethsw->major == DPSW_MIN_VER_MAJOR &&
ethsw->minor < DPSW_MIN_VER_MINOR)) {
dev_err(dev, "DPSW version %d:%d not supported. Use %d.%d or greater.\n",
ethsw->major,
ethsw->minor,
DPSW_MIN_VER_MAJOR, DPSW_MIN_VER_MINOR);
err = -ENOTSUPP;
dev_err(dev, "DPSW version %d:%d not supported. Use firmware 10.28.0 or greater.\n",
ethsw->major, ethsw->minor);
err = -EOPNOTSUPP;
goto err_close;
}
if (!dpaa2_switch_supports_cpu_traffic(ethsw)) {
err = -EOPNOTSUPP;
goto err_close;
}
......@@ -1513,17 +2431,16 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
goto err_close;
}
err = dpsw_fdb_set_learning_mode(ethsw->mc_io, 0, ethsw->dpsw_handle, 0,
DPSW_FDB_LEARNING_MODE_HW);
if (err) {
dev_err(dev, "dpsw_fdb_set_learning_mode err %d\n", err);
goto err_close;
}
stp_cfg.vlan_id = DEFAULT_VLAN_ID;
stp_cfg.state = DPSW_STP_STATE_FORWARDING;
for (i = 0; i < ethsw->sw_attr.num_ifs; i++) {
err = dpsw_if_disable(ethsw->mc_io, 0, ethsw->dpsw_handle, i);
if (err) {
dev_err(dev, "dpsw_if_disable err %d\n", err);
goto err_close;
}
err = dpsw_if_set_stp(ethsw->mc_io, 0, ethsw->dpsw_handle, i,
&stp_cfg);
if (err) {
......@@ -1532,16 +2449,40 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
goto err_close;
}
err = dpsw_if_set_broadcast(ethsw->mc_io, 0,
ethsw->dpsw_handle, i, 1);
/* Switch starts with all ports configured to VLAN 1. Need to
* remove this setting to allow configuration at bridge join
*/
vcfg.num_ifs = 1;
vcfg.if_id[0] = i;
err = dpsw_vlan_remove_if_untagged(ethsw->mc_io, 0, ethsw->dpsw_handle,
DEFAULT_VLAN_ID, &vcfg);
if (err) {
dev_err(dev,
"dpsw_if_set_broadcast err %d for port %d\n",
err, i);
dev_err(dev, "dpsw_vlan_remove_if_untagged err %d\n",
err);
goto err_close;
}
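/* Use VID 4095 as the initial PVID on the port: neither the bridge nor
 * the 8021q module will ever use it, the same reasoning as in
 * dpaa2_switch_port_del_vlan()
 */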
tci_cfg.vlan_id = 4095;
err = dpsw_if_set_tci(ethsw->mc_io, 0, ethsw->dpsw_handle, i, &tci_cfg);
if (err) {
dev_err(dev, "dpsw_if_set_tci err %d\n", err);
goto err_close;
}
err = dpsw_vlan_remove_if(ethsw->mc_io, 0, ethsw->dpsw_handle,
DEFAULT_VLAN_ID, &vcfg);
if (err) {
dev_err(dev, "dpsw_vlan_remove_if err %d\n", err);
goto err_close;
}
}
err = dpsw_vlan_remove(ethsw->mc_io, 0, ethsw->dpsw_handle, DEFAULT_VLAN_ID);
if (err) {
dev_err(dev, "dpsw_vlan_remove err %d\n", err);
goto err_close;
}
ethsw->workqueue = alloc_ordered_workqueue("%s_%d_ordered",
WQ_MEM_RECLAIM, "ethsw",
ethsw->sw_attr.id);
......@@ -1550,7 +2491,11 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
goto err_close;
}
err = dpaa2_switch_register_notifier(dev);
err = dpsw_fdb_remove(ethsw->mc_io, 0, ethsw->dpsw_handle, 0);
if (err)
goto err_destroy_ordered_workqueue;
err = dpaa2_switch_ctrl_if_setup(ethsw);
if (err)
goto err_destroy_ordered_workqueue;
......@@ -1566,59 +2511,58 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
static int dpaa2_switch_port_init(struct ethsw_port_priv *port_priv, u16 port)
{
struct switchdev_obj_port_vlan vlan = {
.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
.vid = DEFAULT_VLAN_ID,
.flags = BRIDGE_VLAN_INFO_UNTAGGED | BRIDGE_VLAN_INFO_PVID,
};
struct net_device *netdev = port_priv->netdev;
struct ethsw_core *ethsw = port_priv->ethsw_data;
struct dpsw_vlan_if_cfg vcfg;
struct dpsw_fdb_cfg fdb_cfg = {0};
struct dpaa2_switch_fdb *fdb;
struct dpsw_if_attr dpsw_if_attr;
u16 fdb_id;
int err;
/* Switch starts with all ports configured to VLAN 1. Need to
* remove this setting to allow configuration at bridge join
*/
vcfg.num_ifs = 1;
vcfg.if_id[0] = port_priv->idx;
err = dpsw_vlan_remove_if_untagged(ethsw->mc_io, 0, ethsw->dpsw_handle,
DEFAULT_VLAN_ID, &vcfg);
/* Get the Tx queue for this specific port */
err = dpsw_if_get_attributes(ethsw->mc_io, 0, ethsw->dpsw_handle,
port_priv->idx, &dpsw_if_attr);
if (err) {
netdev_err(netdev, "dpsw_vlan_remove_if_untagged err %d\n",
err);
netdev_err(netdev, "dpsw_if_get_attributes err %d\n", err);
return err;
}
port_priv->tx_qdid = dpsw_if_attr.qdid;
err = dpaa2_switch_port_set_pvid(port_priv, 0);
if (err)
/* Create a FDB table for this particular switch port */
fdb_cfg.num_fdb_entries = ethsw->sw_attr.max_fdb_entries / ethsw->sw_attr.num_ifs;
err = dpsw_fdb_add(ethsw->mc_io, 0, ethsw->dpsw_handle,
&fdb_id, &fdb_cfg);
if (err) {
netdev_err(netdev, "dpsw_fdb_add err %d\n", err);
return err;
}
err = dpsw_vlan_remove_if(ethsw->mc_io, 0, ethsw->dpsw_handle,
DEFAULT_VLAN_ID, &vcfg);
if (err)
netdev_err(netdev, "dpsw_vlan_remove_if err %d\n", err);
return err;
}
static void dpaa2_switch_unregister_notifier(struct device *dev)
{
struct ethsw_core *ethsw = dev_get_drvdata(dev);
struct notifier_block *nb;
int err;
/* Find an unused dpaa2_switch_fdb structure and use it */
fdb = dpaa2_switch_fdb_get_unused(ethsw);
fdb->fdb_id = fdb_id;
fdb->in_use = true;
fdb->bridge_dev = NULL;
port_priv->fdb = fdb;
nb = &ethsw->port_switchdevb_nb;
err = unregister_switchdev_blocking_notifier(nb);
/* We need to add VLAN 1 as the PVID on this port until it is under a
* bridge since the DPAA2 switch is not able to handle the traffic in a
* VLAN unaware fashion
*/
err = dpaa2_switch_port_vlans_add(netdev, &vlan);
if (err)
dev_err(dev,
"Failed to unregister switchdev blocking notifier (%d)\n",
err);
return err;
err = unregister_switchdev_notifier(&ethsw->port_switchdev_nb);
/* Setup the egress flooding domains (broadcast, unknown unicast) */
err = dpaa2_switch_fdb_set_egress_flood(ethsw, port_priv->fdb->fdb_id);
if (err)
dev_err(dev,
"Failed to unregister switchdev notifier (%d)\n", err);
return err;
err = unregister_netdevice_notifier(&ethsw->port_nb);
if (err)
dev_err(dev,
"Failed to unregister netdev notifier (%d)\n", err);
return err;
}
static void dpaa2_switch_takedown(struct fsl_mc_device *sw_dev)
......@@ -1627,13 +2571,20 @@ static void dpaa2_switch_takedown(struct fsl_mc_device *sw_dev)
struct ethsw_core *ethsw = dev_get_drvdata(dev);
int err;
dpaa2_switch_unregister_notifier(dev);
err = dpsw_close(ethsw->mc_io, 0, ethsw->dpsw_handle);
if (err)
dev_warn(dev, "dpsw_close err %d\n", err);
}
static void dpaa2_switch_ctrl_if_teardown(struct ethsw_core *ethsw)
{
dpsw_ctrl_if_disable(ethsw->mc_io, 0, ethsw->dpsw_handle);
dpaa2_switch_free_dpio(ethsw);
dpaa2_switch_destroy_rings(ethsw);
dpaa2_switch_drain_bp(ethsw);
dpaa2_switch_free_dpbp(ethsw);
}
static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev)
{
struct ethsw_port_priv *port_priv;
......@@ -1644,6 +2595,8 @@ static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev)
dev = &sw_dev->dev;
ethsw = dev_get_drvdata(dev);
dpaa2_switch_ctrl_if_teardown(ethsw);
dpaa2_switch_teardown_irqs(sw_dev);
dpsw_disable(ethsw->mc_io, 0, ethsw->dpsw_handle);
......@@ -1653,6 +2606,8 @@ static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev)
unregister_netdev(port_priv->netdev);
free_netdev(port_priv->netdev);
}
kfree(ethsw->fdbs);
kfree(ethsw->ports);
dpaa2_switch_takedown(sw_dev);
......@@ -1689,17 +2644,26 @@ static int dpaa2_switch_probe_port(struct ethsw_core *ethsw,
port_priv->idx = port_idx;
port_priv->stp_state = BR_STATE_FORWARDING;
/* Flooding is implicitly enabled */
port_priv->flood = true;
SET_NETDEV_DEV(port_netdev, dev);
port_netdev->netdev_ops = &dpaa2_switch_port_ops;
port_netdev->ethtool_ops = &dpaa2_switch_port_ethtool_ops;
port_netdev->needed_headroom = DPAA2_SWITCH_NEEDED_HEADROOM;
/* Set MTU limits */
port_netdev->min_mtu = ETH_MIN_MTU;
port_netdev->max_mtu = ETHSW_MAX_FRAME_LENGTH;
/* Populate the private port structure so that later calls to
* dpaa2_switch_port_init() can use it.
*/
ethsw->ports[port_idx] = port_priv;
/* The DPAA2 switch's ingress path depends on the VLAN table,
* thus we are not able to disable VLAN filtering.
*/
port_netdev->features = NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER;
err = dpaa2_switch_port_init(port_priv, port_idx);
if (err)
goto err_port_probe;
......@@ -1708,18 +2672,11 @@ static int dpaa2_switch_probe_port(struct ethsw_core *ethsw,
if (err)
goto err_port_probe;
err = register_netdev(port_netdev);
if (err < 0) {
dev_err(dev, "register_netdev error %d\n", err);
goto err_port_probe;
}
ethsw->ports[port_idx] = port_priv;
return 0;
err_port_probe:
free_netdev(port_netdev);
ethsw->ports[port_idx] = NULL;
return err;
}
......@@ -1737,6 +2694,7 @@ static int dpaa2_switch_probe(struct fsl_mc_device *sw_dev)
return -ENOMEM;
ethsw->dev = dev;
ethsw->iommu_domain = iommu_get_domain_for_dev(dev);
dev_set_drvdata(dev, ethsw);
err = fsl_mc_portal_allocate(sw_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
......@@ -1753,12 +2711,6 @@ static int dpaa2_switch_probe(struct fsl_mc_device *sw_dev)
if (err)
goto err_free_cmdport;
/* DEFAULT_VLAN_ID is implicitly configured on the switch */
ethsw->vlans[DEFAULT_VLAN_ID] = ETHSW_VLAN_MEMBER;
/* Learning is implicitly enabled */
ethsw->learning = true;
ethsw->ports = kcalloc(ethsw->sw_attr.num_ifs, sizeof(*ethsw->ports),
GFP_KERNEL);
if (!(ethsw->ports)) {
......@@ -1766,39 +2718,63 @@ static int dpaa2_switch_probe(struct fsl_mc_device *sw_dev)
goto err_takedown;
}
ethsw->fdbs = kcalloc(ethsw->sw_attr.num_ifs, sizeof(*ethsw->fdbs),
GFP_KERNEL);
if (!ethsw->fdbs) {
err = -ENOMEM;
goto err_free_ports;
}
for (i = 0; i < ethsw->sw_attr.num_ifs; i++) {
err = dpaa2_switch_probe_port(ethsw, i);
if (err)
goto err_free_ports;
goto err_free_netdev;
}
	/* Add a NAPI instance for each of the Rx queues. The first port's
	 * net_device will be associated with the instances since we do not
	 * have different queues for each switch port.
	 */
for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
netif_napi_add(ethsw->ports[0]->netdev,
&ethsw->fq[i].napi, dpaa2_switch_poll,
NAPI_POLL_WEIGHT);
err = dpsw_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
if (err) {
dev_err(ethsw->dev, "dpsw_enable err %d\n", err);
goto err_free_ports;
goto err_free_netdev;
}
/* Make sure the switch ports are disabled at probe time */
for (i = 0; i < ethsw->sw_attr.num_ifs; i++)
dpsw_if_disable(ethsw->mc_io, 0, ethsw->dpsw_handle, i);
/* Setup IRQs */
err = dpaa2_switch_setup_irqs(sw_dev);
if (err)
goto err_stop;
dev_info(dev, "probed %d port switch\n", ethsw->sw_attr.num_ifs);
/* Register the netdev only when the entire setup is done and the
* switch port interfaces are ready to receive traffic
*/
for (i = 0; i < ethsw->sw_attr.num_ifs; i++) {
err = register_netdev(ethsw->ports[i]->netdev);
if (err < 0) {
dev_err(dev, "register_netdev error %d\n", err);
goto err_unregister_ports;
}
}
return 0;
err_unregister_ports:
for (i--; i >= 0; i--)
unregister_netdev(ethsw->ports[i]->netdev);
dpaa2_switch_teardown_irqs(sw_dev);
err_stop:
dpsw_disable(ethsw->mc_io, 0, ethsw->dpsw_handle);
err_free_ports:
	/* Cleanup registered ports only */
	for (i--; i >= 0; i--) {
		unregister_netdev(ethsw->ports[i]->netdev);
		free_netdev(ethsw->ports[i]->netdev);
	}
err_free_netdev:
	for (i--; i >= 0; i--)
		free_netdev(ethsw->ports[i]->netdev);

	kfree(ethsw->fdbs);
err_free_ports:
	kfree(ethsw->ports);
err_takedown:
......@@ -1833,7 +2809,93 @@ static struct fsl_mc_driver dpaa2_switch_drv = {
.match_id_table = dpaa2_switch_match_id_table
};
module_fsl_mc_driver(dpaa2_switch_drv);
static struct notifier_block dpaa2_switch_port_nb __read_mostly = {
.notifier_call = dpaa2_switch_port_netdevice_event,
};
static struct notifier_block dpaa2_switch_port_switchdev_nb = {
.notifier_call = dpaa2_switch_port_event,
};
static struct notifier_block dpaa2_switch_port_switchdev_blocking_nb = {
.notifier_call = dpaa2_switch_port_blocking_event,
};
static int dpaa2_switch_register_notifiers(void)
{
int err;
err = register_netdevice_notifier(&dpaa2_switch_port_nb);
if (err) {
pr_err("dpaa2-switch: failed to register net_device notifier (%d)\n", err);
return err;
}
err = register_switchdev_notifier(&dpaa2_switch_port_switchdev_nb);
if (err) {
pr_err("dpaa2-switch: failed to register switchdev notifier (%d)\n", err);
goto err_switchdev_nb;
}
err = register_switchdev_blocking_notifier(&dpaa2_switch_port_switchdev_blocking_nb);
if (err) {
pr_err("dpaa2-switch: failed to register switchdev blocking notifier (%d)\n", err);
goto err_switchdev_blocking_nb;
}
return 0;
err_switchdev_blocking_nb:
unregister_switchdev_notifier(&dpaa2_switch_port_switchdev_nb);
err_switchdev_nb:
unregister_netdevice_notifier(&dpaa2_switch_port_nb);
return err;
}
static void dpaa2_switch_unregister_notifiers(void)
{
int err;
err = unregister_switchdev_blocking_notifier(&dpaa2_switch_port_switchdev_blocking_nb);
if (err)
pr_err("dpaa2-switch: failed to unregister switchdev blocking notifier (%d)\n",
err);
err = unregister_switchdev_notifier(&dpaa2_switch_port_switchdev_nb);
if (err)
pr_err("dpaa2-switch: failed to unregister switchdev notifier (%d)\n", err);
err = unregister_netdevice_notifier(&dpaa2_switch_port_nb);
if (err)
pr_err("dpaa2-switch: failed to unregister net_device notifier (%d)\n", err);
}
static int __init dpaa2_switch_driver_init(void)
{
int err;
err = fsl_mc_driver_register(&dpaa2_switch_drv);
if (err)
return err;
err = dpaa2_switch_register_notifiers();
if (err) {
fsl_mc_driver_unregister(&dpaa2_switch_drv);
return err;
}
return 0;
}
static void __exit dpaa2_switch_driver_exit(void)
{
dpaa2_switch_unregister_notifiers();
fsl_mc_driver_unregister(&dpaa2_switch_drv);
}
module_init(dpaa2_switch_driver_init);
module_exit(dpaa2_switch_driver_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("DPAA2 Ethernet Switch Driver");
......@@ -3,7 +3,7 @@
* DPAA2 Ethernet Switch declarations
*
* Copyright 2014-2016 Freescale Semiconductor Inc.
* Copyright 2017-2018 NXP
* Copyright 2017-2021 NXP
*
*/
......@@ -17,6 +17,8 @@
#include <uapi/linux/if_bridge.h>
#include <net/switchdev.h>
#include <linux/if_bridge.h>
#include <linux/fsl/mc.h>
#include <soc/fsl/dpaa2-io.h>
#include "dpsw.h"
......@@ -39,10 +41,63 @@
#define ETHSW_FEATURE_MAC_ADDR BIT(0)
/* Number of receive queues (one RX and one TX_CONF) */
#define DPAA2_SWITCH_RX_NUM_FQS 2
/* Hardware requires alignment for ingress/egress buffer addresses */
#define DPAA2_SWITCH_RX_BUF_RAW_SIZE PAGE_SIZE
#define DPAA2_SWITCH_RX_BUF_TAILROOM \
SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
#define DPAA2_SWITCH_RX_BUF_SIZE \
(DPAA2_SWITCH_RX_BUF_RAW_SIZE - DPAA2_SWITCH_RX_BUF_TAILROOM)
#define DPAA2_SWITCH_STORE_SIZE 16
/* Buffer management */
#define BUFS_PER_CMD 7
#define DPAA2_ETHSW_NUM_BUFS (1024 * BUFS_PER_CMD)
#define DPAA2_ETHSW_REFILL_THRESH (DPAA2_ETHSW_NUM_BUFS * 5 / 6)
/* Number of times to retry DPIO portal operations while waiting
* for portal to finish executing current command and become
* available. We want to avoid being stuck in a while loop in case
* hardware becomes unresponsive, but not give up too easily if
* the portal really is busy for valid reasons
*/
#define DPAA2_SWITCH_SWP_BUSY_RETRIES 1000
/* Hardware annotation buffer size */
#define DPAA2_SWITCH_HWA_SIZE 64
/* Software annotation buffer size */
#define DPAA2_SWITCH_SWA_SIZE 64
#define DPAA2_SWITCH_TX_BUF_ALIGN 64
#define DPAA2_SWITCH_TX_DATA_OFFSET \
(DPAA2_SWITCH_HWA_SIZE + DPAA2_SWITCH_SWA_SIZE)
#define DPAA2_SWITCH_NEEDED_HEADROOM \
(DPAA2_SWITCH_TX_DATA_OFFSET + DPAA2_SWITCH_TX_BUF_ALIGN)
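The needed headroom adds up to 192 bytes: 64 bytes of hardware annotation, 64 bytes of software annotation, plus one 64-byte Tx alignment step. Below is a hedged sketch of how a Tx frame descriptor honoring this offset could be built and enqueued; dpaa2_switch_xmit_sketch is a hypothetical helper, while the dpaa2_fd accessors and dpaa2_io_service_enqueue_qd() come from the soc/fsl dpaa2-fd and dpaa2-io headers.

/* Hypothetical sketch, not the driver's Tx routine: build a single-buffer
 * frame descriptor that keeps DPAA2_SWITCH_TX_DATA_OFFSET bytes of
 * annotation headroom in front of the data and enqueue it on a Tx qdid.
 * Assumes buf_dma was DMA-mapped with that headroom already reserved.
 */
static int dpaa2_switch_xmit_sketch(u16 tx_qdid, dma_addr_t buf_dma,
				    u16 data_len)
{
	struct dpaa2_fd fd = {};

	dpaa2_fd_set_addr(&fd, buf_dma);
	dpaa2_fd_set_offset(&fd, DPAA2_SWITCH_TX_DATA_OFFSET);
	dpaa2_fd_set_len(&fd, data_len);
	dpaa2_fd_set_format(&fd, dpaa2_fd_single);

	/* a NULL service pointer means "any affine DPIO service" */
	return dpaa2_io_service_enqueue_qd(NULL, tx_qdid, 0, 0, &fd);
}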
extern const struct ethtool_ops dpaa2_switch_port_ethtool_ops;
struct ethsw_core;
struct dpaa2_switch_fq {
struct ethsw_core *ethsw;
enum dpsw_queue_type type;
struct dpaa2_io_store *store;
struct dpaa2_io_notification_ctx nctx;
struct napi_struct napi;
u32 fqid;
};
struct dpaa2_switch_fdb {
struct net_device *bridge_dev;
u16 fdb_id;
bool in_use;
};
/* Per port private data */
struct ethsw_port_priv {
struct net_device *netdev;
......@@ -54,7 +109,9 @@ struct ethsw_port_priv {
u8 vlans[VLAN_VID_MASK + 1];
u16 pvid;
struct net_device *bridge_dev;
u16 tx_qdid;
struct dpaa2_switch_fdb *fdb;
};
/* Switch data */
......@@ -67,14 +124,55 @@ struct ethsw_core {
unsigned long features;
int dev_id;
struct ethsw_port_priv **ports;
struct iommu_domain *iommu_domain;
u8 vlans[VLAN_VID_MASK + 1];
bool learning;
struct notifier_block port_nb;
struct notifier_block port_switchdev_nb;
struct notifier_block port_switchdevb_nb;
struct workqueue_struct *workqueue;
struct dpaa2_switch_fq fq[DPAA2_SWITCH_RX_NUM_FQS];
struct fsl_mc_device *dpbp_dev;
int buf_count;
u16 bpid;
int napi_users;
struct dpaa2_switch_fdb *fdbs;
};
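Because the control interface queues in fq[] are shared by every switch port, napi_users counts how many ports are currently up: NAPI is enabled on the first .dev_open() and disabled when the last port goes down. A minimal sketch of that accounting follows, with hypothetical helper names and assuming callers are already serialized (e.g. by the rtnl lock).

/* Hypothetical helpers: enable the shared NAPI instances when the first
 * switch port is opened and disable them when the last one is closed.
 */
static void dpaa2_switch_napi_get_sketch(struct ethsw_core *ethsw)
{
	int i;

	if (ethsw->napi_users++ == 0)
		for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
			napi_enable(&ethsw->fq[i].napi);
}

static void dpaa2_switch_napi_put_sketch(struct ethsw_core *ethsw)
{
	int i;

	if (--ethsw->napi_users == 0)
		for (i = 0; i < DPAA2_SWITCH_RX_NUM_FQS; i++)
			napi_disable(&ethsw->fq[i].napi);
}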
static inline bool dpaa2_switch_supports_cpu_traffic(struct ethsw_core *ethsw)
{
if (ethsw->sw_attr.options & DPSW_OPT_CTRL_IF_DIS) {
dev_err(ethsw->dev, "Control Interface is disabled, cannot probe\n");
return false;
}
if (ethsw->sw_attr.flooding_cfg != DPSW_FLOODING_PER_FDB) {
dev_err(ethsw->dev, "Flooding domain is not per FDB, cannot probe\n");
return false;
}
if (ethsw->sw_attr.broadcast_cfg != DPSW_BROADCAST_PER_FDB) {
dev_err(ethsw->dev, "Broadcast domain is not per FDB, cannot probe\n");
return false;
}
if (ethsw->sw_attr.max_fdbs < ethsw->sw_attr.num_ifs) {
dev_err(ethsw->dev, "The number of FDBs is lower than the number of ports, cannot probe\n");
return false;
}
return true;
}
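A hedged sketch of how a probe path could gate on the checks above, assuming the DPSW attributes still need to be read; the helper name is hypothetical, and the dpsw_get_attributes() call follows the signature declared in dpsw.h.

/* Hypothetical probe-time gate: refresh the DPSW attributes, then refuse
 * to probe configurations that cannot terminate traffic on the CPU.
 */
static int dpaa2_switch_check_features_sketch(struct ethsw_core *ethsw)
{
	int err;

	err = dpsw_get_attributes(ethsw->mc_io, 0, ethsw->dpsw_handle,
				  &ethsw->sw_attr);
	if (err) {
		dev_err(ethsw->dev, "dpsw_get_attributes err %d\n", err);
		return err;
	}

	if (!dpaa2_switch_supports_cpu_traffic(ethsw))
		return -EOPNOTSUPP;

	return 0;
}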
bool dpaa2_switch_port_dev_check(const struct net_device *netdev);
int dpaa2_switch_port_vlans_add(struct net_device *netdev,
const struct switchdev_obj_port_vlan *vlan);
int dpaa2_switch_port_vlans_del(struct net_device *netdev,
const struct switchdev_obj_port_vlan *vlan);
typedef int dpaa2_switch_fdb_cb_t(struct ethsw_port_priv *port_priv,
struct fdb_dump_entry *fdb_entry,
void *data);
#endif /* __ETHSW_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright 2014-2016 Freescale Semiconductor Inc.
* Copyright 2017-2020 NXP
* Copyright 2017-2021 NXP
*
*/
#ifndef __FSL_DPSW_CMD_H
#define __FSL_DPSW_CMD_H
#include "dpsw.h"
/* DPSW Version */
#define DPSW_VER_MAJOR 8
#define DPSW_VER_MINOR 5
#define DPSW_VER_MINOR 9
#define DPSW_CMD_BASE_VERSION 1
#define DPSW_CMD_VERSION_2 2
......@@ -27,7 +29,7 @@
#define DPSW_CMDID_ENABLE DPSW_CMD_ID(0x002)
#define DPSW_CMDID_DISABLE DPSW_CMD_ID(0x003)
#define DPSW_CMDID_GET_ATTR DPSW_CMD_ID(0x004)
#define DPSW_CMDID_GET_ATTR DPSW_CMD_V2(0x004)
#define DPSW_CMDID_RESET DPSW_CMD_ID(0x005)
#define DPSW_CMDID_SET_IRQ_ENABLE DPSW_CMD_ID(0x012)
......@@ -45,18 +47,18 @@
#define DPSW_CMDID_IF_ENABLE DPSW_CMD_ID(0x03D)
#define DPSW_CMDID_IF_DISABLE DPSW_CMD_ID(0x03E)
#define DPSW_CMDID_IF_GET_ATTR DPSW_CMD_ID(0x042)
#define DPSW_CMDID_IF_SET_MAX_FRAME_LENGTH DPSW_CMD_ID(0x044)
#define DPSW_CMDID_IF_GET_LINK_STATE DPSW_CMD_ID(0x046)
#define DPSW_CMDID_IF_SET_FLOODING DPSW_CMD_ID(0x047)
#define DPSW_CMDID_IF_SET_BROADCAST DPSW_CMD_ID(0x048)
#define DPSW_CMDID_IF_GET_TCI DPSW_CMD_ID(0x04A)
#define DPSW_CMDID_IF_SET_LINK_CFG DPSW_CMD_ID(0x04C)
#define DPSW_CMDID_VLAN_ADD DPSW_CMD_ID(0x060)
#define DPSW_CMDID_VLAN_ADD_IF DPSW_CMD_ID(0x061)
#define DPSW_CMDID_VLAN_ADD_IF DPSW_CMD_V2(0x061)
#define DPSW_CMDID_VLAN_ADD_IF_UNTAGGED DPSW_CMD_ID(0x062)
#define DPSW_CMDID_VLAN_REMOVE_IF DPSW_CMD_ID(0x064)
......@@ -64,17 +66,26 @@
#define DPSW_CMDID_VLAN_REMOVE_IF_FLOODING DPSW_CMD_ID(0x066)
#define DPSW_CMDID_VLAN_REMOVE DPSW_CMD_ID(0x067)
#define DPSW_CMDID_FDB_ADD DPSW_CMD_ID(0x082)
#define DPSW_CMDID_FDB_REMOVE DPSW_CMD_ID(0x083)
#define DPSW_CMDID_FDB_ADD_UNICAST DPSW_CMD_ID(0x084)
#define DPSW_CMDID_FDB_REMOVE_UNICAST DPSW_CMD_ID(0x085)
#define DPSW_CMDID_FDB_ADD_MULTICAST DPSW_CMD_ID(0x086)
#define DPSW_CMDID_FDB_REMOVE_MULTICAST DPSW_CMD_ID(0x087)
#define DPSW_CMDID_FDB_SET_LEARNING_MODE DPSW_CMD_ID(0x088)
#define DPSW_CMDID_FDB_DUMP DPSW_CMD_ID(0x08A)
#define DPSW_CMDID_IF_GET_PORT_MAC_ADDR DPSW_CMD_ID(0x0A7)
#define DPSW_CMDID_IF_GET_PRIMARY_MAC_ADDR DPSW_CMD_ID(0x0A8)
#define DPSW_CMDID_IF_SET_PRIMARY_MAC_ADDR DPSW_CMD_ID(0x0A9)
#define DPSW_CMDID_CTRL_IF_GET_ATTR DPSW_CMD_ID(0x0A0)
#define DPSW_CMDID_CTRL_IF_SET_POOLS DPSW_CMD_ID(0x0A1)
#define DPSW_CMDID_CTRL_IF_ENABLE DPSW_CMD_ID(0x0A2)
#define DPSW_CMDID_CTRL_IF_DISABLE DPSW_CMD_ID(0x0A3)
#define DPSW_CMDID_CTRL_IF_SET_QUEUE DPSW_CMD_ID(0x0A6)
#define DPSW_CMDID_SET_EGRESS_FLOOD DPSW_CMD_ID(0x0AC)
/* Macros for accessing command fields smaller than 1byte */
#define DPSW_MASK(field) \
GENMASK(DPSW_##field##_SHIFT + DPSW_##field##_SIZE - 1, \
......@@ -169,6 +180,12 @@ struct dpsw_cmd_clear_irq_status {
#define DPSW_COMPONENT_TYPE_SHIFT 0
#define DPSW_COMPONENT_TYPE_SIZE 4
#define DPSW_FLOODING_CFG_SHIFT 0
#define DPSW_FLOODING_CFG_SIZE 4
#define DPSW_BROADCAST_CFG_SHIFT 4
#define DPSW_BROADCAST_CFG_SIZE 4
struct dpsw_rsp_get_attr {
/* cmd word 0 */
__le16 num_ifs;
......@@ -186,23 +203,15 @@ struct dpsw_rsp_get_attr {
u8 max_meters_per_if;
/* from LSB only the first 4 bits */
u8 component_type;
__le16 pad;
/* [0:3] - flooding configuration
* [4:7] - broadcast configuration
*/
u8 repl_cfg;
u8 pad;
/* cmd word 3 */
__le64 options;
};
struct dpsw_cmd_if_set_flooding {
__le16 if_id;
/* from LSB: enable:1 */
u8 enable;
};
struct dpsw_cmd_if_set_broadcast {
__le16 if_id;
/* from LSB: enable:1 */
u8 enable;
};
#define DPSW_VLAN_ID_SHIFT 0
#define DPSW_VLAN_ID_SIZE 12
#define DPSW_DEI_SHIFT 12
......@@ -255,6 +264,28 @@ struct dpsw_cmd_if {
__le16 if_id;
};
#define DPSW_ADMIT_UNTAGGED_SHIFT 0
#define DPSW_ADMIT_UNTAGGED_SIZE 4
#define DPSW_ENABLED_SHIFT 5
#define DPSW_ENABLED_SIZE 1
#define DPSW_ACCEPT_ALL_VLAN_SHIFT 6
#define DPSW_ACCEPT_ALL_VLAN_SIZE 1
struct dpsw_rsp_if_get_attr {
/* cmd word 0 */
/* from LSB: admit_untagged:4 enabled:1 accept_all_vlan:1 */
u8 conf;
u8 pad1;
u8 num_tcs;
u8 pad2;
__le16 qdid;
/* cmd word 1 */
__le32 options;
__le32 pad3;
/* cmd word 2 */
__le32 rate;
};
struct dpsw_cmd_if_set_max_frame_length {
__le16 if_id;
__le16 frame_length;
......@@ -295,6 +326,16 @@ struct dpsw_vlan_add {
__le16 vlan_id;
};
struct dpsw_cmd_vlan_add_if {
/* cmd word 0 */
__le16 options;
__le16 vlan_id;
__le16 fdb_id;
__le16 pad0;
/* cmd word 1-4 */
__le64 if_id;
};
struct dpsw_cmd_vlan_manage_if {
/* cmd word 0 */
__le16 pad0;
......@@ -311,7 +352,7 @@ struct dpsw_cmd_vlan_remove {
struct dpsw_cmd_fdb_add {
__le32 pad;
__le16 fdb_aging_time;
__le16 fdb_ageing_time;
__le16 num_fdb_entries;
};
......@@ -350,15 +391,6 @@ struct dpsw_cmd_fdb_multicast_op {
__le64 if_id[4];
};
#define DPSW_LEARNING_MODE_SHIFT 0
#define DPSW_LEARNING_MODE_SIZE 4
struct dpsw_cmd_fdb_set_learning_mode {
__le16 fdb_id;
/* only the first 4 bits from LSB */
u8 mode;
};
struct dpsw_cmd_fdb_dump {
__le16 fdb_id;
__le16 pad0;
......@@ -371,6 +403,36 @@ struct dpsw_rsp_fdb_dump {
__le16 num_entries;
};
struct dpsw_rsp_ctrl_if_get_attr {
__le64 pad;
__le32 rx_fqid;
__le32 rx_err_fqid;
__le32 tx_err_conf_fqid;
};
#define DPSW_BACKUP_POOL(val, order) (((val) & 0x1) << (order))
struct dpsw_cmd_ctrl_if_set_pools {
u8 num_dpbp;
u8 backup_pool_mask;
__le16 pad;
__le32 dpbp_id[DPSW_MAX_DPBP];
__le16 buffer_size[DPSW_MAX_DPBP];
};
#define DPSW_DEST_TYPE_SHIFT 0
#define DPSW_DEST_TYPE_SIZE 4
struct dpsw_cmd_ctrl_if_set_queue {
__le32 dest_id;
u8 dest_priority;
u8 pad;
/* from LSB: dest_type:4 */
u8 dest_type;
u8 qtype;
__le64 user_ctx;
__le32 options;
};
struct dpsw_rsp_get_api_version {
__le16 version_major;
__le16 version_minor;
......@@ -386,5 +448,11 @@ struct dpsw_cmd_if_set_mac_addr {
u8 mac_addr[6];
};
struct dpsw_cmd_set_egress_flood {
__le16 fdb_id;
u8 flood_type;
u8 pad[5];
__le64 if_id;
};
#pragma pack(pop)
#endif /* __FSL_DPSW_CMD_H */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright 2014-2016 Freescale Semiconductor Inc.
* Copyright 2017-2018 NXP
* Copyright 2017-2021 NXP
*
*/
......@@ -351,9 +351,9 @@ int dpsw_get_attributes(struct fsl_mc_io *mc_io,
attr->max_fdb_mc_groups = le16_to_cpu(rsp_params->max_fdb_mc_groups);
attr->max_meters_per_if = rsp_params->max_meters_per_if;
attr->options = le64_to_cpu(rsp_params->options);
attr->component_type = dpsw_get_field(rsp_params->component_type,
COMPONENT_TYPE);
attr->component_type = dpsw_get_field(rsp_params->component_type, COMPONENT_TYPE);
attr->flooding_cfg = dpsw_get_field(rsp_params->repl_cfg, FLOODING_CFG);
attr->broadcast_cfg = dpsw_get_field(rsp_params->repl_cfg, BROADCAST_CFG);
return 0;
}
......@@ -431,68 +431,6 @@ int dpsw_if_get_link_state(struct fsl_mc_io *mc_io,
return 0;
}
/**
 * dpsw_if_set_flooding() - Enable/disable flooding for a particular interface
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
* @if_id: Interface Identifier
* @en: 1 - enable, 0 - disable
*
* Return: Completion status. '0' on Success; Error code otherwise.
*/
int dpsw_if_set_flooding(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
u16 if_id,
u8 en)
{
struct fsl_mc_command cmd = { 0 };
struct dpsw_cmd_if_set_flooding *cmd_params;
/* prepare command */
cmd.header = mc_encode_cmd_header(DPSW_CMDID_IF_SET_FLOODING,
cmd_flags,
token);
cmd_params = (struct dpsw_cmd_if_set_flooding *)cmd.params;
cmd_params->if_id = cpu_to_le16(if_id);
dpsw_set_field(cmd_params->enable, ENABLE, en);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
/**
 * dpsw_if_set_broadcast() - Enable/disable broadcast for a particular interface
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
* @if_id: Interface Identifier
* @en: 1 - enable, 0 - disable
*
* Return: Completion status. '0' on Success; Error code otherwise.
*/
int dpsw_if_set_broadcast(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
u16 if_id,
u8 en)
{
struct fsl_mc_command cmd = { 0 };
struct dpsw_cmd_if_set_broadcast *cmd_params;
/* prepare command */
cmd.header = mc_encode_cmd_header(DPSW_CMDID_IF_SET_BROADCAST,
cmd_flags,
token);
cmd_params = (struct dpsw_cmd_if_set_broadcast *)cmd.params;
cmd_params->if_id = cpu_to_le16(if_id);
dpsw_set_field(cmd_params->enable, ENABLE, en);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
/**
* dpsw_if_set_tci() - Set default VLAN Tag Control Information (TCI)
* @mc_io: Pointer to MC portal's I/O object
......@@ -704,6 +642,47 @@ int dpsw_if_disable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
/**
 * dpsw_if_get_attributes() - Obtain attributes of a DPSW interface
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
* @if_id: Interface Identifier
* @attr: Returned interface attributes
*
* Return: Completion status. '0' on Success; Error code otherwise.
*/
int dpsw_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
u16 if_id, struct dpsw_if_attr *attr)
{
struct dpsw_rsp_if_get_attr *rsp_params;
struct fsl_mc_command cmd = { 0 };
struct dpsw_cmd_if *cmd_params;
int err;
cmd.header = mc_encode_cmd_header(DPSW_CMDID_IF_GET_ATTR, cmd_flags,
token);
cmd_params = (struct dpsw_cmd_if *)cmd.params;
cmd_params->if_id = cpu_to_le16(if_id);
err = mc_send_command(mc_io, &cmd);
if (err)
return err;
rsp_params = (struct dpsw_rsp_if_get_attr *)cmd.params;
attr->num_tcs = rsp_params->num_tcs;
attr->rate = le32_to_cpu(rsp_params->rate);
attr->options = le32_to_cpu(rsp_params->options);
attr->qdid = le16_to_cpu(rsp_params->qdid);
attr->enabled = dpsw_get_field(rsp_params->conf, ENABLED);
attr->accept_all_vlan = dpsw_get_field(rsp_params->conf,
ACCEPT_ALL_VLAN);
attr->admit_untagged = dpsw_get_field(rsp_params->conf,
ADMIT_UNTAGGED);
return 0;
}
/**
* dpsw_if_set_max_frame_length() - Set Maximum Receive frame length.
* @mc_io: Pointer to MC portal's I/O object
......@@ -945,6 +924,66 @@ int dpsw_vlan_remove(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
/**
 * dpsw_fdb_add() - Add an FDB table to the switch and return a handle to it
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
* @fdb_id: Returned Forwarding Database Identifier
* @cfg: FDB Configuration
*
* Return: Completion status. '0' on Success; Error code otherwise.
*/
int dpsw_fdb_add(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token, u16 *fdb_id,
const struct dpsw_fdb_cfg *cfg)
{
struct dpsw_cmd_fdb_add *cmd_params;
struct dpsw_rsp_fdb_add *rsp_params;
struct fsl_mc_command cmd = { 0 };
int err;
cmd.header = mc_encode_cmd_header(DPSW_CMDID_FDB_ADD,
cmd_flags,
token);
cmd_params = (struct dpsw_cmd_fdb_add *)cmd.params;
cmd_params->fdb_ageing_time = cpu_to_le16(cfg->fdb_ageing_time);
cmd_params->num_fdb_entries = cpu_to_le16(cfg->num_fdb_entries);
err = mc_send_command(mc_io, &cmd);
if (err)
return err;
rsp_params = (struct dpsw_rsp_fdb_add *)cmd.params;
*fdb_id = le16_to_cpu(rsp_params->fdb_id);
return 0;
}
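For reference, this is the call dpaa2_switch_port_init() uses to give each port its own switching domain. A minimal sketch of sizing one per-port table by splitting the device-wide entry budget evenly; the helper name is hypothetical and the ageing time is left unset in this sketch.

/* Hypothetical sketch: allocate a private FDB table for one switch port,
 * splitting the device's FDB entries evenly between all interfaces.
 */
static int dpaa2_switch_alloc_port_fdb_sketch(struct ethsw_core *ethsw,
					      u16 *fdb_id)
{
	struct dpsw_fdb_cfg fdb_cfg = {
		.num_fdb_entries = ethsw->sw_attr.max_fdb_entries /
				   ethsw->sw_attr.num_ifs,
	};

	return dpsw_fdb_add(ethsw->mc_io, 0, ethsw->dpsw_handle,
			    fdb_id, &fdb_cfg);
}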
/**
* dpsw_fdb_remove() - Remove FDB from switch
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
* @fdb_id: Forwarding Database Identifier
*
* Return: Completion status. '0' on Success; Error code otherwise.
*/
int dpsw_fdb_remove(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token, u16 fdb_id)
{
struct dpsw_cmd_fdb_remove *cmd_params;
struct fsl_mc_command cmd = { 0 };
/* prepare command */
cmd.header = mc_encode_cmd_header(DPSW_CMDID_FDB_REMOVE,
cmd_flags,
token);
cmd_params = (struct dpsw_cmd_fdb_remove *)cmd.params;
cmd_params->fdb_id = cpu_to_le16(fdb_id);
return mc_send_command(mc_io, &cmd);
}
/**
 * dpsw_fdb_add_unicast() - Add a unicast entry to the MAC lookup table
* @mc_io: Pointer to MC portal's I/O object
......@@ -1152,33 +1191,97 @@ int dpsw_fdb_remove_multicast(struct fsl_mc_io *mc_io,
}
/**
 * dpsw_fdb_set_learning_mode() - Define FDB learning mode
 * @mc_io: Pointer to MC portal's I/O object
 * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
 * @token: Token of DPSW object
 * @fdb_id: Forwarding Database Identifier
 * @mode: Learning mode
 *
 * Return: Completion status. '0' on Success; Error code otherwise.
 */
int dpsw_fdb_set_learning_mode(struct fsl_mc_io *mc_io,
			       u32 cmd_flags,
			       u16 token,
			       u16 fdb_id,
			       enum dpsw_fdb_learning_mode mode)
{
	struct fsl_mc_command cmd = { 0 };
	struct dpsw_cmd_fdb_set_learning_mode *cmd_params;

	/* prepare command */
	cmd.header = mc_encode_cmd_header(DPSW_CMDID_FDB_SET_LEARNING_MODE,
					  cmd_flags, token);
	cmd_params = (struct dpsw_cmd_fdb_set_learning_mode *)cmd.params;
	cmd_params->fdb_id = cpu_to_le16(fdb_id);
	dpsw_set_field(cmd_params->mode, LEARNING_MODE, mode);

	/* send command to mc*/
	return mc_send_command(mc_io, &cmd);
}

/**
 * dpsw_ctrl_if_get_attributes() - Obtain control interface attributes
 * @mc_io: Pointer to MC portal's I/O object
 * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
 * @token: Token of DPSW object
 * @attr: Returned control interface attributes
 *
 * Return: '0' on Success; Error code otherwise.
 */
int dpsw_ctrl_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags,
				u16 token, struct dpsw_ctrl_if_attr *attr)
{
	struct dpsw_rsp_ctrl_if_get_attr *rsp_params;
	struct fsl_mc_command cmd = { 0 };
	int err;

	cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_GET_ATTR,
					  cmd_flags, token);

	err = mc_send_command(mc_io, &cmd);
	if (err)
		return err;

	rsp_params = (struct dpsw_rsp_ctrl_if_get_attr *)cmd.params;
	attr->rx_fqid = le32_to_cpu(rsp_params->rx_fqid);
	attr->rx_err_fqid = le32_to_cpu(rsp_params->rx_err_fqid);
	attr->tx_err_conf_fqid = le32_to_cpu(rsp_params->tx_err_conf_fqid);

	return 0;
}

/**
 * dpsw_ctrl_if_set_pools() - Set control interface buffer pools
 * @mc_io: Pointer to MC portal's I/O object
 * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
 * @token: Token of DPSW object
 * @cfg: Buffer pools configuration
 *
 * Return: '0' on Success; Error code otherwise.
 */
int dpsw_ctrl_if_set_pools(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
			   const struct dpsw_ctrl_if_pools_cfg *cfg)
{
	struct dpsw_cmd_ctrl_if_set_pools *cmd_params;
	struct fsl_mc_command cmd = { 0 };
	int i;

	cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_SET_POOLS,
					  cmd_flags, token);
	cmd_params = (struct dpsw_cmd_ctrl_if_set_pools *)cmd.params;
	cmd_params->num_dpbp = cfg->num_dpbp;
	for (i = 0; i < DPSW_MAX_DPBP; i++) {
		cmd_params->dpbp_id[i] = cpu_to_le32(cfg->pools[i].dpbp_id);
		cmd_params->buffer_size[i] =
			cpu_to_le16(cfg->pools[i].buffer_size);
		cmd_params->backup_pool_mask |=
			DPSW_BACKUP_POOL(cfg->pools[i].backup_pool, i);
	}

	return mc_send_command(mc_io, &cmd);
}

/**
 * dpsw_ctrl_if_set_queue() - Set Rx queue configuration
 * @mc_io: Pointer to MC portal's I/O object
 * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
 * @token: Token of dpsw object
 * @qtype: dpsw_queue_type of the targeted queue
 * @cfg: Rx queue configuration
 *
 * Return: '0' on Success; Error code otherwise.
 */
int dpsw_ctrl_if_set_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
			   enum dpsw_queue_type qtype,
			   const struct dpsw_ctrl_if_queue_cfg *cfg)
{
	struct dpsw_cmd_ctrl_if_set_queue *cmd_params;
	struct fsl_mc_command cmd = { 0 };

	cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_SET_QUEUE,
					  cmd_flags, token);
	cmd_params = (struct dpsw_cmd_ctrl_if_set_queue *)cmd.params;
	cmd_params->dest_id = cpu_to_le32(cfg->dest_cfg.dest_id);
	cmd_params->dest_priority = cfg->dest_cfg.priority;
	cmd_params->qtype = qtype;
	cmd_params->user_ctx = cpu_to_le64(cfg->user_ctx);
	cmd_params->options = cpu_to_le32(cfg->options);
	dpsw_set_field(cmd_params->dest_type, DEST_TYPE,
		       cfg->dest_cfg.dest_type);

	/* send command to mc*/
	return mc_send_command(mc_io, &cmd);
}
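Taken together, bringing up the control interface is a short, ordered sequence: read back the frame queue IDs, attach a buffer pool, point the Rx queue at a DPIO for data-availability notifications, then enable the interface. Below is a hedged sketch of that order, assuming a DPBP was already allocated and the dpaa2_io notification context was initialized (but not yet registered) by the caller; the helper name is hypothetical and error unwinding is omitted.

/* Hypothetical bring-up order for the control interface. */
static int dpaa2_switch_ctrl_if_bringup_sketch(struct ethsw_core *ethsw,
					       struct dpaa2_io_notification_ctx *nctx)
{
	struct dpsw_ctrl_if_queue_cfg queue_cfg = { 0 };
	struct dpsw_ctrl_if_pools_cfg pools_cfg = { 0 };
	struct dpsw_ctrl_if_attr attr;
	int err;

	/* 1. learn the Rx/Rx error/Tx conf frame queue IDs */
	err = dpsw_ctrl_if_get_attributes(ethsw->mc_io, 0, ethsw->dpsw_handle,
					  &attr);
	if (err)
		return err;
	nctx->fqid = attr.rx_fqid;

	/* 2. attach one buffer pool for the Rx side */
	pools_cfg.num_dpbp = 1;
	pools_cfg.pools[0].dpbp_id = ethsw->dpbp_dev->obj_desc.id;
	pools_cfg.pools[0].buffer_size = DPAA2_SWITCH_RX_BUF_SIZE;
	pools_cfg.pools[0].backup_pool = 0;
	err = dpsw_ctrl_if_set_pools(ethsw->mc_io, 0, ethsw->dpsw_handle,
				     &pools_cfg);
	if (err)
		return err;

	/* 3. deliver Rx data-availability notifications through a DPIO */
	queue_cfg.options = DPSW_CTRL_IF_QUEUE_OPT_DEST |
			    DPSW_CTRL_IF_QUEUE_OPT_USER_CTX;
	queue_cfg.user_ctx = nctx->qman64;
	queue_cfg.dest_cfg.dest_type = DPSW_CTRL_IF_DEST_DPIO;
	queue_cfg.dest_cfg.dest_id = nctx->dpio_id;
	queue_cfg.dest_cfg.priority = 0;
	err = dpsw_ctrl_if_set_queue(ethsw->mc_io, 0, ethsw->dpsw_handle,
				     DPSW_QUEUE_RX, &queue_cfg);
	if (err)
		return err;

	/* 4. let the switch start using the control interface */
	return dpsw_ctrl_if_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
}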
......@@ -1320,3 +1423,64 @@ int dpsw_if_set_primary_mac_addr(struct fsl_mc_io *mc_io, u32 cmd_flags,
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
/**
* dpsw_ctrl_if_enable() - Enable control interface
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
*
* Return: '0' on Success; Error code otherwise.
*/
int dpsw_ctrl_if_enable(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token)
{
struct fsl_mc_command cmd = { 0 };
cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_ENABLE, cmd_flags,
token);
return mc_send_command(mc_io, &cmd);
}
/**
 * dpsw_ctrl_if_disable() - Disable the control interface
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
*
* Return: '0' on Success; Error code otherwise.
*/
int dpsw_ctrl_if_disable(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token)
{
struct fsl_mc_command cmd = { 0 };
cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_DISABLE,
cmd_flags,
token);
return mc_send_command(mc_io, &cmd);
}
/**
* dpsw_set_egress_flood() - Set egress parameters associated with an FDB ID
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPSW object
* @cfg: Egress flooding configuration
*
* Return: '0' on Success; Error code otherwise.
*/
int dpsw_set_egress_flood(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
const struct dpsw_egress_flood_cfg *cfg)
{
struct dpsw_cmd_set_egress_flood *cmd_params;
struct fsl_mc_command cmd = { 0 };
cmd.header = mc_encode_cmd_header(DPSW_CMDID_SET_EGRESS_FLOOD, cmd_flags, token);
cmd_params = (struct dpsw_cmd_set_egress_flood *)cmd.params;
cmd_params->fdb_id = cpu_to_le16(cfg->fdb_id);
cmd_params->flood_type = cfg->flood_type;
build_if_id_bitmap(&cmd_params->if_id, cfg->if_id, cfg->num_ifs);
return mc_send_command(mc_io, &cmd);
}
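This is also the call that lets CPU-terminated traffic work: when a flooding domain is rebuilt, the control interface can be listed next to the switch ports so that broadcast and unknown-unicast frames are replicated towards the CPU queues as well. A hedged sketch follows, with the control interface id supplied by the caller since its exact value is firmware-defined; the helper name is hypothetical.

/* Hypothetical sketch: rebuild the broadcast flooding domain of one FDB
 * from the ports that currently use it, then append the control interface
 * so that broadcast frames also reach the CPU.
 */
static int dpaa2_switch_bcast_flood_sketch(struct ethsw_core *ethsw,
					   u16 fdb_id, u16 ctrl_if_id)
{
	struct dpsw_egress_flood_cfg flood_cfg = {
		.fdb_id = fdb_id,
		.flood_type = DPSW_BROADCAST,
	};
	int i, j = 0;

	/* every port sharing this FDB is part of the flooding domain */
	for (i = 0; i < ethsw->sw_attr.num_ifs; i++)
		if (ethsw->ports[i] && ethsw->ports[i]->fdb->fdb_id == fdb_id)
			flood_cfg.if_id[j++] = ethsw->ports[i]->idx;

	flood_cfg.if_id[j++] = ctrl_if_id;
	flood_cfg.num_ifs = j;

	return dpsw_set_egress_flood(ethsw->mc_io, 0, ethsw->dpsw_handle,
				     &flood_cfg);
}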
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright 2014-2016 Freescale Semiconductor Inc.
* Copyright 2017-2018 NXP
* Copyright 2017-2021 NXP
*
*/
......@@ -75,6 +75,35 @@ enum dpsw_component_type {
DPSW_COMPONENT_TYPE_S_VLAN
};
/**
* enum dpsw_flooding_cfg - flooding configuration requested
* @DPSW_FLOODING_PER_VLAN: Flooding replicators are allocated per VLAN and
* interfaces present in each of them can be configured using
* dpsw_vlan_add_if_flooding()/dpsw_vlan_remove_if_flooding().
* This is the default configuration.
*
* @DPSW_FLOODING_PER_FDB: Flooding replicators are allocated per FDB and
* interfaces present in each of them can be configured using
* dpsw_set_egress_flood().
*/
enum dpsw_flooding_cfg {
DPSW_FLOODING_PER_VLAN = 0,
DPSW_FLOODING_PER_FDB,
};
/**
* enum dpsw_broadcast_cfg - broadcast configuration requested
* @DPSW_BROADCAST_PER_OBJECT: There is only one broadcast replicator per DPSW
* object. This is the default configuration.
* @DPSW_BROADCAST_PER_FDB: Broadcast replicators are allocated per FDB and
* interfaces present in each of them can be configured using
* dpsw_set_egress_flood().
*/
enum dpsw_broadcast_cfg {
DPSW_BROADCAST_PER_OBJECT = 0,
DPSW_BROADCAST_PER_FDB,
};
int dpsw_enable(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token);
......@@ -153,6 +182,8 @@ int dpsw_clear_irq_status(struct fsl_mc_io *mc_io,
* @num_vlans: Current number of VLANs
* @num_fdbs: Current number of FDBs
* @component_type: Component type of this bridge
* @flooding_cfg: Flooding configuration (PER_VLAN - default, PER_FDB)
* @broadcast_cfg: Broadcast configuration (PER_OBJECT - default, PER_FDB)
*/
struct dpsw_attr {
int id;
......@@ -168,6 +199,8 @@ struct dpsw_attr {
u16 num_vlans;
u8 num_fdbs;
enum dpsw_component_type component_type;
enum dpsw_flooding_cfg flooding_cfg;
enum dpsw_broadcast_cfg broadcast_cfg;
};
int dpsw_get_attributes(struct fsl_mc_io *mc_io,
......@@ -175,6 +208,81 @@ int dpsw_get_attributes(struct fsl_mc_io *mc_io,
u16 token,
struct dpsw_attr *attr);
/**
* struct dpsw_ctrl_if_attr - Control interface attributes
* @rx_fqid: Receive FQID
* @rx_err_fqid: Receive error FQID
* @tx_err_conf_fqid: Transmit error and confirmation FQID
*/
struct dpsw_ctrl_if_attr {
u32 rx_fqid;
u32 rx_err_fqid;
u32 tx_err_conf_fqid;
};
int dpsw_ctrl_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags,
u16 token, struct dpsw_ctrl_if_attr *attr);
enum dpsw_queue_type {
DPSW_QUEUE_RX,
DPSW_QUEUE_TX_ERR_CONF,
DPSW_QUEUE_RX_ERR,
};
/**
* Maximum number of DPBP
*/
#define DPSW_MAX_DPBP 8
/**
* struct dpsw_ctrl_if_pools_cfg - Control interface buffer pools configuration
* @num_dpbp: Number of DPBPs
* @pools: Array of buffer pools parameters; The number of valid entries
* must match 'num_dpbp' value
* @pools.dpbp_id: DPBP object ID
* @pools.buffer_size: Buffer size
* @pools.backup_pool: Backup pool
*/
struct dpsw_ctrl_if_pools_cfg {
u8 num_dpbp;
struct {
int dpbp_id;
u16 buffer_size;
int backup_pool;
} pools[DPSW_MAX_DPBP];
};
int dpsw_ctrl_if_set_pools(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
const struct dpsw_ctrl_if_pools_cfg *cfg);
#define DPSW_CTRL_IF_QUEUE_OPT_USER_CTX 0x00000001
#define DPSW_CTRL_IF_QUEUE_OPT_DEST 0x00000002
enum dpsw_ctrl_if_dest {
DPSW_CTRL_IF_DEST_NONE = 0,
DPSW_CTRL_IF_DEST_DPIO = 1,
};
struct dpsw_ctrl_if_dest_cfg {
enum dpsw_ctrl_if_dest dest_type;
int dest_id;
u8 priority;
};
struct dpsw_ctrl_if_queue_cfg {
u32 options;
u64 user_ctx;
struct dpsw_ctrl_if_dest_cfg dest_cfg;
};
int dpsw_ctrl_if_set_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
enum dpsw_queue_type qtype,
const struct dpsw_ctrl_if_queue_cfg *cfg);
int dpsw_ctrl_if_enable(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token);
int dpsw_ctrl_if_disable(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token);
/**
* enum dpsw_action - Action selection for special/control frames
* @DPSW_ACTION_DROP: Drop frame
......@@ -235,18 +343,6 @@ int dpsw_if_get_link_state(struct fsl_mc_io *mc_io,
u16 if_id,
struct dpsw_link_state *state);
int dpsw_if_set_flooding(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
u16 if_id,
u8 en);
int dpsw_if_set_broadcast(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
u16 if_id,
u8 en);
/**
* struct dpsw_tci_cfg - Tag Control Information (TCI) configuration
* @pcp: Priority Code Point (PCP): a 3-bit field which refers
......@@ -372,6 +468,34 @@ int dpsw_if_disable(struct fsl_mc_io *mc_io,
u16 token,
u16 if_id);
/**
* struct dpsw_if_attr - Structure representing DPSW interface attributes
* @num_tcs: Number of traffic classes
* @rate: Transmit rate in bits per second
* @options: Interface configuration options (bitmap)
* @enabled: Indicates if interface is enabled
* @accept_all_vlan: The device discards/accepts incoming frames
* for VLANs that do not include this interface
* @admit_untagged: When set to 'DPSW_ADMIT_ONLY_VLAN_TAGGED', the device
* discards untagged frames or priority-tagged frames received on
* this interface;
* When set to 'DPSW_ADMIT_ALL', untagged frames or priority-
* tagged frames received on this interface are accepted
* @qdid: control frames transmit qdid
*/
struct dpsw_if_attr {
u8 num_tcs;
u32 rate;
u32 options;
int enabled;
int accept_all_vlan;
enum dpsw_accepted_frames admit_untagged;
u16 qdid;
};
int dpsw_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
u16 if_id, struct dpsw_if_attr *attr);
int dpsw_if_set_max_frame_length(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
......@@ -392,6 +516,8 @@ int dpsw_vlan_add(struct fsl_mc_io *mc_io,
u16 vlan_id,
const struct dpsw_vlan_cfg *cfg);
#define DPSW_VLAN_ADD_IF_OPT_FDB_ID 0x0001
/**
* struct dpsw_vlan_if_cfg - Set of VLAN Interfaces
* @num_ifs: The number of interfaces that are assigned to the egress
......@@ -401,7 +527,9 @@ int dpsw_vlan_add(struct fsl_mc_io *mc_io,
*/
struct dpsw_vlan_if_cfg {
u16 num_ifs;
u16 options;
u16 if_id[DPSW_MAX_IF];
u16 fdb_id;
};
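The new options and fdb_id members tie VLAN membership to a switching domain: when DPSW_VLAN_ADD_IF_OPT_FDB_ID is set, frames classified to this VLAN on the given interface are looked up in the supplied FDB rather than the default one. A hedged sketch of binding a port's VLAN to its private FDB (the helper name is hypothetical):

/* Hypothetical sketch: add one port to a VLAN and bind that VLAN, on this
 * port, to the port's private FDB so traffic stays in its own switching
 * domain while the port is not under a bridge.
 */
static int dpaa2_switch_vlan_bind_fdb_sketch(struct ethsw_core *ethsw,
					     struct ethsw_port_priv *port_priv,
					     u16 vlan_id)
{
	struct dpsw_vlan_if_cfg vcfg = {
		.num_ifs = 1,
		.if_id[0] = port_priv->idx,
		.options = DPSW_VLAN_ADD_IF_OPT_FDB_ID,
		.fdb_id = port_priv->fdb->fdb_id,
	};

	return dpsw_vlan_add_if(ethsw->mc_io, 0, ethsw->dpsw_handle,
				vlan_id, &vcfg);
}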
int dpsw_vlan_add_if(struct fsl_mc_io *mc_io,
......@@ -555,23 +683,17 @@ enum dpsw_fdb_learning_mode {
DPSW_FDB_LEARNING_MODE_SECURE = 3
};
int dpsw_fdb_set_learning_mode(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
u16 fdb_id,
enum dpsw_fdb_learning_mode mode);
/**
* struct dpsw_fdb_attr - FDB Attributes
* @max_fdb_entries: Number of FDB entries
* @fdb_aging_time: Aging time in seconds
* @fdb_ageing_time: Ageing time in seconds
* @learning_mode: Learning mode
* @num_fdb_mc_groups: Current number of multicast groups
* @max_fdb_mc_groups: Maximum number of multicast groups
*/
struct dpsw_fdb_attr {
u16 max_fdb_entries;
u16 fdb_aging_time;
u16 fdb_ageing_time;
enum dpsw_fdb_learning_mode learning_mode;
u16 num_fdb_mc_groups;
u16 max_fdb_mc_groups;
......@@ -591,4 +713,39 @@ int dpsw_if_get_primary_mac_addr(struct fsl_mc_io *mc_io, u32 cmd_flags,
int dpsw_if_set_primary_mac_addr(struct fsl_mc_io *mc_io, u32 cmd_flags,
u16 token, u16 if_id, u8 mac_addr[6]);
/**
* struct dpsw_fdb_cfg - FDB Configuration
* @num_fdb_entries: Number of FDB entries
* @fdb_ageing_time: Ageing time in seconds
*/
struct dpsw_fdb_cfg {
u16 num_fdb_entries;
u16 fdb_ageing_time;
};
int dpsw_fdb_add(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token, u16 *fdb_id,
const struct dpsw_fdb_cfg *cfg);
int dpsw_fdb_remove(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token, u16 fdb_id);
/**
* enum dpsw_flood_type - Define the flood type of a DPSW object
* @DPSW_BROADCAST: Broadcast flooding
 * @DPSW_FLOODING: Flooding of frames with an unknown destination
*/
enum dpsw_flood_type {
DPSW_BROADCAST = 0,
DPSW_FLOODING,
};
struct dpsw_egress_flood_cfg {
u16 fdb_id;
enum dpsw_flood_type flood_type;
u16 num_ifs;
u16 if_id[DPSW_MAX_IF];
};
int dpsw_set_egress_flood(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
const struct dpsw_egress_flood_cfg *cfg);
#endif /* __FSL_DPSW_H */
......@@ -78,8 +78,6 @@ source "drivers/staging/clocking-wizard/Kconfig"
source "drivers/staging/fbtft/Kconfig"
source "drivers/staging/fsl-dpaa2/Kconfig"
source "drivers/staging/most/Kconfig"
source "drivers/staging/ks7010/Kconfig"
......
......@@ -29,7 +29,6 @@ obj-$(CONFIG_GS_FPGABOOT) += gs_fpgaboot/
obj-$(CONFIG_UNISYSSPAR) += unisys/
obj-$(CONFIG_COMMON_CLK_XLNX_CLKWZRD) += clocking-wizard/
obj-$(CONFIG_FB_TFT) += fbtft/
obj-$(CONFIG_FSL_DPAA2) += fsl-dpaa2/
obj-$(CONFIG_MOST) += most/
obj-$(CONFIG_KS7010) += ks7010/
obj-$(CONFIG_GREYBUS) += greybus/
......
# SPDX-License-Identifier: GPL-2.0
#
# Freescale DataPath Acceleration Architecture Gen2 (DPAA2) drivers
#
config FSL_DPAA2
bool "Freescale DPAA2 devices"
depends on FSL_MC_BUS
help
Build drivers for Freescale DataPath Acceleration
Architecture (DPAA2) family of SoCs.
config FSL_DPAA2_ETHSW
tristate "Freescale DPAA2 Ethernet Switch"
depends on FSL_DPAA2
depends on NET_SWITCHDEV
help
Driver for Freescale DPAA2 Ethernet Switch. Select
BRIDGE to have support for bridge tools.
# SPDX-License-Identifier: GPL-2.0
#
# Freescale DataPath Acceleration Architecture Gen2 (DPAA2) drivers
#
obj-$(CONFIG_FSL_DPAA2_ETHSW) += ethsw/
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the Freescale DPAA2 Ethernet Switch
#
# Copyright 2014-2017 Freescale Semiconductor Inc.
# Copyright 2017-2018 NXP
obj-$(CONFIG_FSL_DPAA2_ETHSW) += dpaa2-ethsw.o
dpaa2-ethsw-objs := ethsw.o ethsw-ethtool.o dpsw.o
DPAA2 Ethernet Switch driver
============================
This file provides documentation for the DPAA2 Ethernet Switch driver
Contents
========
Supported Platforms
Architecture Overview
Creating an Ethernet Switch
Features
Supported Platforms
===================
This driver provides networking support for Freescale LS2085A, LS2088A
DPAA2 SoCs.
Architecture Overview
=====================
The Ethernet Switch in the DPAA2 architecture consists of several hardware
resources that provide the functionality. These are allocated and
configured via the Management Complex (MC) portals. MC abstracts most of
these resources as DPAA2 objects and exposes ABIs through which they can
be configured and controlled.
For a more detailed description of the DPAA2 architecture and its object
abstractions see:
drivers/staging/fsl-mc/README.txt
The Ethernet Switch is built on top of a Datapath Switch (DPSW) object.
Configuration interface:
         ---------------------
        | DPAA2 Switch driver |
         ---------------------
                 .
                 .
          ----------
         | DPSW API |
          ----------
                 .        software
        ================= . ==============
                 .        hardware
         ---------------------
        | MC hardware portals |
         ---------------------
                 .
                 .
           ------
          | DPSW |
           ------
The driver uses the switch device driver model and exposes each switch port
as a network interface, which can be included in a bridge. Traffic switched
between ports is offloaded into the hardware. The exposed network interfaces
are not used for I/O; they are used only for configuration. This limitation
will be addressed in the future.
The DPSW can have ports connected to DPNIs or to PHYs via DPMACs.
 [ethA]     [ethB]      [ethC]     [ethD]     [ethE]     [ethF]
    :          :           :          :          :          :
    :          :           :          :          :          :
[eth drv]  [eth drv]   [             ethsw drv              ]
    :          :           :          :          :          :    kernel
========================================================================
    :          :           :          :          :          :    hardware
 [DPNI]     [DPNI]     [============= DPSW =================]
    |          |           |          |          |          |
    |           ----------             |      [DPMAC]    [DPMAC]
     -------------------------------------       |          |
                                                  |          |
                                               [PHY]      [PHY]
For a more detailed description of the Ethernet switch device driver model
see:
Documentation/networking/switchdev.rst
Creating an Ethernet Switch
===========================
A device is created for the switch objects probed on the MC bus. Each DPSW
has a number of properties which determine the configuration options and
associated hardware resources.
A DPSW object (and the other DPAA2 objects needed for a DPAA2 switch) can
be added to a container on the MC bus in one of two ways: statically,
through a Datapath Layout Binary file (DPL) that is parsed by MC at boot
time; or created dynamically at runtime, via the DPAA2 objects APIs.
Features
========
The driver configures the DPSW to perform hardware switching offload of
unicast/multicast/broadcast (VLAN tagged or untagged) traffic between its
ports.
It allows configuration of hardware learning, flooding, multicast groups,
port VLAN configuration and STP state.
Static entries can be added/removed from the FDB.
Hardware statistics for each port are provided through the ethtool -S option.
* Add I/O capabilities on switch port netdevices. This will allow control
traffic to reach the CPU.
* Add ACL to redirect control traffic to CPU.
* Add support for multiple FDBs and switch port partitioning
* MC firmware uprev; the DPAA2 objects used by the Ethernet Switch driver
need to be kept in sync with binary interface changes in MC
* refine README file
* cleanup
NOTE: At least the first three of the above are required before getting the
DPAA2 Ethernet Switch driver out of staging. Another requirement is that the
dpio driver is moved to drivers/soc (this is required for I/O).