Commit 113cb8ff authored by David S. Miller

Merge branch 'Traffic-support-for-dsa_8021q-in-vlan_filtering-1-mode'

Vladimir Oltean says:

====================
Traffic support for dsa_8021q in vlan_filtering=1 mode

This series is an attempt to support as much traffic I/O from the
network stack as possible with the only dsa_8021q user thus far,
sja1105.

The hardware doesn't support pushing a second VLAN tag to packets that
are already tagged, so our only option is to combine the dsa_8021q with
the user tag into a single tag and decode that on the CPU.

The assumption is that there is a class of use cases for which 7 VLANs
per port are more than sufficient, and another class for which the full
4096 entries are barely enough. These use cases are very different from
one another, so I prefer to give both the best experience by creating
this best_effort_vlan_filtering knob to select the mode in which they
want to operate.

v2 was submitted here:
https://patchwork.ozlabs.org/project/netdev/cover/20200511135338.20263-1-olteanv@gmail.com/

v1 was submitted here:
https://patchwork.ozlabs.org/project/netdev/cover/20200510164255.19322-1-olteanv@gmail.com/

Changes in v3:
Patch 01/15:
- Rename again to configure_vlan_while_not_filtering, and add a helper
  function for skipping VLAN configuration.
Patch 03/15:
- Remove sja1105_can_use_vlan_as_tags from driver code.
Patch 06/15:
- Adapt sja1105 driver to the second variable name change.
Patch 08/15:
- Provide an implementation of sja1105_can_use_vlan_as_tags as part of
  the tagger and not as part of the switch driver. So we have to look at
  the skb only, and not at the VLAN awareness state.

Changes in v2:
Patch 01/15:
- Rename variable from vlan_bridge_vtu to configure_vlans_while_disabled.
Patch 03/15:
- Be much more thorough, and make sure that things like virtual links
  and FDB operations still work properly.
Patch 05/15:
- Free the vlan lists on teardown.
- Simplify sja1105_classify_vlan: only look at priv->expect_dsa_8021q.
- Keep vid 1 in the list of dsa_8021q VLANs, to make sure that untagged
  packets transmitted from the stack, like PTP, continue to work in
  VLAN-unaware mode.
Patch 06/15:
- Adapt to vlan_bridge_vtu variable name change.
Patch 11/15:
- In sja1105_best_effort_vlan_filtering_set, get the vlan_filtering
  value of each port instead of just one time for port 0. Normally this
  shouldn't matter, but it avoids issues when port 0 is disabled in
  device tree.
Patch 14/15:
- Only do anything in sja1105_build_subvlans and in
  sja1105_build_crosschip_subvlans when operating in
  SJA1105_VLAN_BEST_EFFORT state. This avoids installing VLAN retagging
  rules in unaware mode, which would cost us a penalty in terms of
  usable frame memory.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 26831d78 a20bc43b
best_effort_vlan_filtering
[DEVICE, DRIVER-SPECIFIC]
Allow plain ETH_P_8021Q headers to be used as DSA tags.
Benefits:
- Can terminate untagged traffic over switch net
devices even when enslaved to a bridge with
vlan_filtering=1.
- Can terminate VLAN-tagged traffic over switch net
devices even when enslaved to a bridge with
vlan_filtering=1, with some constraints (no more than
7 non-pvid VLANs per user port).
- Can do QoS based on VLAN PCP and VLAN membership
admission control for autonomously forwarded frames
(regardless of whether they can be terminated on the
CPU or not).
Drawbacks:
- User cannot use VLANs in range 1024-3071. If the
switch receives frames with such VIDs, it will
misinterpret them as DSA tags.
- Switch uses Shared VLAN Learning (FDB lookup uses
only DMAC as key).
- When VLANs span cross-chip topologies, the total
number of permitted VLANs may be less than 7 per
port, due to a maximum number of 32 VLAN retagging
rules per switch.
Configuration mode: runtime
Type: bool.
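The reserved 1024-3071 range follows directly from how dsa_8021q encodes tags into VIDs. A minimal sketch of the arithmetic (field positions assumed from net/dsa/tag_8021q.c of this kernel generation, not taken from this patch):

```c
#include <assert.h>

/* Assumed dsa_8021q VID layout: a 2-bit direction code in VID bits
 * 11:10 (1 = RX, 2 = TX); the low 10 bits carry the switch ID, port
 * and sub-VLAN code. Illustrative only.
 */
#define DSA_8021Q_DIR_RX	1
#define DSA_8021Q_DIR_TX	2

static unsigned int dsa_8021q_vid_min(unsigned int dir)
{
	return dir << 10;
}

static unsigned int dsa_8021q_vid_max(unsigned int dir)
{
	return (dir << 10) | 0x3ff;
}
```

Every tag_8021q VID thus falls between the RX minimum (1024) and the TX maximum (3071), which is exactly the range the user must avoid.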
@@ -66,34 +66,193 @@ reprogrammed with the updated static configuration.
Traffic support
===============
The switches do not support switch tagging in hardware. But they do support
customizing the TPID by which VLAN traffic is identified as such. The switch
driver is leveraging ``CONFIG_NET_DSA_TAG_8021Q`` by requesting that special
VLANs (with a custom TPID of ``ETH_P_EDSA`` instead of ``ETH_P_8021Q``) are
installed on its ports when not in ``vlan_filtering`` mode. This does not
interfere with the reception and transmission of real 802.1Q-tagged traffic,
because the switch no longer parses those packets as VLAN after the TPID
change.
The TPID is restored when ``vlan_filtering`` is requested by the user through
the bridge layer, and general IP termination is no longer possible through
the switch netdevices in this mode.
The switches do not have hardware support for DSA tags, except for "slow
protocols" for switch control such as STP and PTP. For these, the switches
have two programmable filters for link-local destination MACs.
These are used to trap BPDUs and PTP traffic to the master netdevice, and are
further used to support STP and 1588 ordinary clock/boundary clock
functionality. For frames trapped to the CPU, source port and switch ID
information is encoded by the hardware into the frames.
But by leveraging ``CONFIG_NET_DSA_TAG_8021Q`` (a software-defined DSA tagging
format based on VLANs), general-purpose traffic termination through the network
stack can be supported under certain circumstances.
Depending on VLAN awareness state, the following operating modes are possible
with the switch:
- Mode 1 (VLAN-unaware): a port is in this mode when it is used as a standalone
net device, or when it is enslaved to a bridge with ``vlan_filtering=0``.
- Mode 2 (fully VLAN-aware): a port is in this mode when it is enslaved to a
bridge with ``vlan_filtering=1``. Access to the entire VLAN range is given to
the user through ``bridge vlan`` commands, but general-purpose (anything
other than STP, PTP etc) traffic termination is not possible through the
switch net devices. Other packets can still be processed by user space
through the DSA master interface (similar to ``DSA_TAG_PROTO_NONE``).
- Mode 3 (best-effort VLAN-aware): a port is in this mode when enslaved to a
bridge with ``vlan_filtering=1``, and the devlink property of its parent
switch named ``best_effort_vlan_filtering`` is set to ``true``. When
configured like this, the range of usable VIDs is reduced (0 to 1023 and
3072 to 4094), as is the number of usable VLANs (maximum of 7 non-pvid
VLANs per port*), and shared VLAN learning is performed (FDB lookup is done
only by DMAC, not also by VID).
To summarize, in each mode, the following types of traffic are supported over
the switch net devices:
+-------------+-----------+--------------+------------+
| | Mode 1 | Mode 2 | Mode 3 |
+=============+===========+==============+============+
| Regular | Yes | No | Yes |
| traffic | | (use master) | |
+-------------+-----------+--------------+------------+
| Management | Yes | Yes | Yes |
| traffic | | | |
| (BPDU, PTP) | | | |
+-------------+-----------+--------------+------------+
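The mapping from bridge state to operating mode can be sketched as a small decision helper. The enum values match the driver's ``enum sja1105_vlan_state``; the helper function itself is purely illustrative, not driver code:

```c
#include <assert.h>

/* Enum values as in the driver; the helper is illustrative only. */
enum sja1105_vlan_state {
	SJA1105_VLAN_UNAWARE,
	SJA1105_VLAN_BEST_EFFORT,
	SJA1105_VLAN_FILTERING_FULL,
};

static enum sja1105_vlan_state
vlan_state_for(int vlan_filtering, int best_effort_vlan_filtering)
{
	if (!vlan_filtering)
		return SJA1105_VLAN_UNAWARE;		/* Mode 1 */
	if (best_effort_vlan_filtering)
		return SJA1105_VLAN_BEST_EFFORT;	/* Mode 3 */
	return SJA1105_VLAN_FILTERING_FULL;		/* Mode 2 */
}
```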
To configure the switch to operate in Mode 3, follow these steps::
ip link add dev br0 type bridge
# swp2 operates in Mode 1 now
ip link set dev swp2 master br0
# swp2 temporarily moves to Mode 2
ip link set dev br0 type bridge vlan_filtering 1
[ 61.204770] sja1105 spi0.1: Reset switch and programmed static config. Reason: VLAN filtering
[ 61.239944] sja1105 spi0.1: Disabled switch tagging
# swp2 now operates in Mode 3
devlink dev param set spi/spi0.1 name best_effort_vlan_filtering value true cmode runtime
[ 64.682927] sja1105 spi0.1: Reset switch and programmed static config. Reason: VLAN filtering
[ 64.711925] sja1105 spi0.1: Enabled switch tagging
# Cannot use VLANs in range 1024-3071 while in Mode 3.
bridge vlan add dev swp2 vid 1025 untagged pvid
RTNETLINK answers: Operation not permitted
bridge vlan add dev swp2 vid 100
bridge vlan add dev swp2 vid 101 untagged
bridge vlan
port vlan ids
swp5 1 PVID Egress Untagged
swp2 1 PVID Egress Untagged
100
101 Egress Untagged
swp3 1 PVID Egress Untagged
swp4 1 PVID Egress Untagged
br0 1 PVID Egress Untagged
bridge vlan add dev swp2 vid 102
bridge vlan add dev swp2 vid 103
bridge vlan add dev swp2 vid 104
bridge vlan add dev swp2 vid 105
bridge vlan add dev swp2 vid 106
bridge vlan add dev swp2 vid 107
# Cannot use more than 7 VLANs per port while in Mode 3.
[ 3885.216832] sja1105 spi0.1: No more free subvlans
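The 7-VLAN limit seen above comes from the sub-VLAN code: 8 codes per port (``DSA_8021Q_N_SUBVLAN``), with code 0 reserved for the pvid. A standalone sketch of the allocator, mirroring ``sja1105_find_free_subvlan()`` from this series:

```c
#include <assert.h>

#define DSA_8021Q_N_SUBVLAN	8	/* 3-bit sub-VLAN code */
#define VLAN_N_VID		4096	/* sentinel: slot unused */

/* Sub-VLAN 0 is reserved for the pvid, so at most 7 non-pvid VLANs
 * per port can be allocated before -1 ("No more free subvlans").
 */
static int find_free_subvlan(const unsigned short *map, int pvid)
{
	int subvlan;

	if (pvid)
		return 0;
	for (subvlan = 1; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
		if (map[subvlan] == VLAN_N_VID)
			return subvlan;
	return -1;
}

static int max_non_pvid_vlans(void)
{
	unsigned short map[DSA_8021Q_N_SUBVLAN];
	int n = 0, s;

	for (s = 0; s < DSA_8021Q_N_SUBVLAN; s++)
		map[s] = VLAN_N_VID;
	while ((s = find_free_subvlan(map, 0)) >= 0)
		map[s] = 100 + n++;	/* arbitrary bridge VIDs */
	return n;
}
```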
\* "maximum of 7 non-pvid VLANs per port": Decoding VLAN-tagged packets on the
CPU in mode 3 is possible through VLAN retagging of packets that go from the
switch to the CPU. In cross-chip topologies, the port that goes to the CPU
might also go to other switches. In that case, those other switches will see
only a retagged packet (which only has meaning for the CPU). So if they are
interested in this VLAN, they need to apply retagging in the reverse direction,
to recover the original value from it. This consumes extra hardware resources
for this switch. There is a maximum of 32 entries in the Retagging Table of
each switch device.
As an example, consider this cross-chip topology::
+-------------------------------------------------+
| Host SoC |
| +-------------------------+ |
| | DSA master for embedded | |
| | switch (non-sja1105) | |
| +--------+-------------------------+--------+ |
| | embedded L2 switch | |
| | | |
| | +--------------+ +--------------+ | |
| | |DSA master for| |DSA master for| | |
| | | SJA1105 1 | | SJA1105 2 | | |
+--+---+--------------+-----+--------------+---+--+
+-----------------------+ +-----------------------+
| SJA1105 switch 1 | | SJA1105 switch 2 |
+-----+-----+-----+-----+ +-----+-----+-----+-----+
|sw1p0|sw1p1|sw1p2|sw1p3| |sw2p0|sw2p1|sw2p2|sw2p3|
+-----+-----+-----+-----+ +-----+-----+-----+-----+
To reach the CPU, SJA1105 switch 1 (spi/spi2.1) uses the same port as it
uses to reach SJA1105 switch 2 (spi/spi2.2), which would be port 4 (not
drawn). Similarly for SJA1105 switch 2.
Also consider the following commands, which add VLAN 100 to every sja1105
user port::
devlink dev param set spi/spi2.1 name best_effort_vlan_filtering value true cmode runtime
devlink dev param set spi/spi2.2 name best_effort_vlan_filtering value true cmode runtime
ip link add dev br0 type bridge
for port in sw1p0 sw1p1 sw1p2 sw1p3 \
sw2p0 sw2p1 sw2p2 sw2p3; do
ip link set dev $port master br0
done
ip link set dev br0 type bridge vlan_filtering 1
for port in sw1p0 sw1p1 sw1p2 sw1p3 \
sw2p0 sw2p1 sw2p2; do
bridge vlan add dev $port vid 100
done
ip link add link br0 name br0.100 type vlan id 100 && ip link set dev br0.100 up
ip addr add 192.168.100.3/24 dev br0.100
bridge vlan add dev br0 vid 100 self
bridge vlan
port vlan ids
sw1p0 1 PVID Egress Untagged
100
sw1p1 1 PVID Egress Untagged
100
sw1p2 1 PVID Egress Untagged
100
sw1p3 1 PVID Egress Untagged
100
sw2p0 1 PVID Egress Untagged
100
sw2p1 1 PVID Egress Untagged
100
sw2p2 1 PVID Egress Untagged
100
sw2p3 1 PVID Egress Untagged
br0 1 PVID Egress Untagged
100
SJA1105 switch 1 consumes 1 retagging entry for each VLAN on each user port
towards the CPU. It also consumes 1 retagging entry for each non-pvid VLAN
configured on any port of a neighbor switch that it is also interested in.
In this case, SJA1105 switch 1 consumes a total of 11 retagging entries, as
follows:
- 8 retagging entries for VLANs 1 and 100 installed on its user ports
(``sw1p0`` - ``sw1p3``)
- 3 retagging entries for VLAN 100 installed on the user ports of SJA1105
switch 2 (``sw2p0`` - ``sw2p2``), because it also has ports that are
interested in it. VLAN 1 is a pvid on SJA1105 switch 2 and does not need
reverse retagging.
SJA1105 switch 2 also consumes 11 retagging entries, but organized as follows:
- 7 retagging entries for the bridge VLANs on its user ports (``sw2p0`` -
``sw2p3``).
- 4 retagging entries for VLAN 100 installed on the user ports of SJA1105
switch 1 (``sw1p0`` - ``sw1p3``).
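The entry counts above can be checked with a little arithmetic. This is plain bookkeeping mirroring the text, not driver code:

```c
#include <assert.h>

#define SJA1105_MAX_RETAGGING_COUNT	32

/* Switch 1: VLANs 1 and 100 on sw1p0-sw1p3, plus reverse retagging
 * for VLAN 100 on sw2p0-sw2p2 (VLAN 1 is a pvid there).
 */
static int sw1_retagging_entries(void)
{
	return 4 * 2 + 3;
}

/* Switch 2: VLANs 1 and 100 on sw2p0-sw2p2 and VLAN 1 only on sw2p3,
 * plus reverse retagging for VLAN 100 on sw1p0-sw1p3.
 */
static int sw2_retagging_entries(void)
{
	return (3 * 2 + 1) + 4;
}
```

Both switches stay well within the 32-entry Retagging Table, but the example shows how quickly cross-chip VLANs eat into the budget.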
Switching features
==================
@@ -87,6 +87,12 @@ struct sja1105_info {
const struct sja1105_dynamic_table_ops *dyn_ops;
const struct sja1105_table_ops *static_ops;
const struct sja1105_regs *regs;
/* Both E/T and P/Q/R/S have quirks when it comes to popping the S-Tag
* from double-tagged frames. E/T will pop it only when it's equal to
* TPID from the General Parameters Table, while P/Q/R/S will only
* pop it when it's equal to TPID2.
*/
u16 qinq_tpid;
int (*reset_cmd)(struct dsa_switch *ds);
int (*setup_rgmii_delay)(const void *ctx, int port);
/* Prototypes from include/net/dsa.h */
@@ -178,14 +184,31 @@ struct sja1105_flow_block {
int num_virtual_links;
};
struct sja1105_bridge_vlan {
struct list_head list;
int port;
u16 vid;
bool pvid;
bool untagged;
};
enum sja1105_vlan_state {
SJA1105_VLAN_UNAWARE,
SJA1105_VLAN_BEST_EFFORT,
SJA1105_VLAN_FILTERING_FULL,
};
struct sja1105_private {
struct sja1105_static_config static_config;
bool rgmii_rx_delay[SJA1105_NUM_PORTS];
bool rgmii_tx_delay[SJA1105_NUM_PORTS];
bool best_effort_vlan_filtering;
const struct sja1105_info *info;
struct gpio_desc *reset_gpio;
struct spi_device *spidev;
struct dsa_switch *ds;
struct list_head dsa_8021q_vlans;
struct list_head bridge_vlans;
struct list_head crosschip_links;
struct sja1105_flow_block flow_block;
struct sja1105_port ports[SJA1105_NUM_PORTS];
@@ -193,6 +216,8 @@ struct sja1105_private {
* the switch doesn't confuse them with one another.
*/
struct mutex mgmt_lock;
bool expect_dsa_8021q;
enum sja1105_vlan_state vlan_state;
struct sja1105_tagger_data tagger_data;
struct sja1105_ptp_data ptp_data;
struct sja1105_tas_data tas_data;
@@ -219,6 +244,8 @@ enum sja1105_reset_reason {
int sja1105_static_config_reload(struct sja1105_private *priv,
enum sja1105_reset_reason reason);
void sja1105_frame_memory_partitioning(struct sja1105_private *priv);
/* From sja1105_spi.c */
int sja1105_xfer_buf(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 reg_addr,
@@ -303,6 +330,8 @@ size_t sja1105et_l2_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105_retagging_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105pqrs_avb_params_entry_packing(void *buf, void *entry_ptr,
@@ -133,6 +133,9 @@
#define SJA1105PQRS_SIZE_AVB_PARAMS_DYN_CMD \
(SJA1105_SIZE_DYN_CMD + SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY)
#define SJA1105_SIZE_RETAGGING_DYN_CMD \
(SJA1105_SIZE_DYN_CMD + SJA1105_SIZE_RETAGGING_ENTRY)
#define SJA1105_MAX_DYN_CMD_SIZE \
SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD
@@ -525,6 +528,20 @@ sja1105pqrs_avb_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
}
static void
sja1105_retagging_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
{
u8 *p = buf + SJA1105_SIZE_RETAGGING_ENTRY;
const int size = SJA1105_SIZE_DYN_CMD;
sja1105_packing(p, &cmd->valid, 31, 31, size, op);
sja1105_packing(p, &cmd->errors, 30, 30, size, op);
sja1105_packing(p, &cmd->valident, 29, 29, size, op);
sja1105_packing(p, &cmd->rdwrset, 28, 28, size, op);
sja1105_packing(p, &cmd->index, 5, 0, size, op);
}
#define OP_READ BIT(0)
#define OP_WRITE BIT(1)
#define OP_DEL BIT(2)
@@ -606,6 +623,14 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
.packed_size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD,
.addr = 0x34,
},
[BLK_IDX_RETAGGING] = {
.entry_packing = sja1105_retagging_entry_packing,
.cmd_packing = sja1105_retagging_cmd_packing,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
.access = (OP_WRITE | OP_DEL),
.packed_size = SJA1105_SIZE_RETAGGING_DYN_CMD,
.addr = 0x31,
},
[BLK_IDX_XMII_PARAMS] = {0},
};
@@ -692,6 +717,14 @@ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
.packed_size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD,
.addr = 0x34,
},
[BLK_IDX_RETAGGING] = {
.entry_packing = sja1105_retagging_entry_packing,
.cmd_packing = sja1105_retagging_cmd_packing,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
.access = (OP_READ | OP_WRITE | OP_DEL),
.packed_size = SJA1105_SIZE_RETAGGING_DYN_CMD,
.addr = 0x38,
},
[BLK_IDX_XMII_PARAMS] = {0},
};
@@ -303,7 +303,8 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
.tag_port = 0,
.vlanid = 1,
};
struct dsa_switch *ds = priv->ds;
int port;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
@@ -324,12 +325,31 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
table->entry_count = 1;
/* VLAN 1: all DT-defined ports are members; no restrictions on
* forwarding; always transmit as untagged.
*/
for (port = 0; port < ds->num_ports; port++) {
struct sja1105_bridge_vlan *v;
if (dsa_is_unused_port(ds, port))
continue;
pvid.vmemb_port |= BIT(port);
pvid.vlan_bc |= BIT(port);
pvid.tag_port &= ~BIT(port);
/* Let traffic that doesn't need dsa_8021q (e.g. STP, PTP) be
* transmitted as untagged.
*/
v = kzalloc(sizeof(*v), GFP_KERNEL);
if (!v)
return -ENOMEM;
v->port = port;
v->vid = 1;
v->untagged = true;
if (dsa_is_cpu_port(ds, port))
v->pvid = true;
list_add(&v->list, &priv->dsa_8021q_vlans);
}
((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
@@ -412,6 +432,41 @@ static int sja1105_init_l2_forwarding_params(struct sja1105_private *priv)
return 0;
}
void sja1105_frame_memory_partitioning(struct sja1105_private *priv)
{
struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
struct sja1105_table *table;
int max_mem;
/* VLAN retagging is implemented using a loopback port that consumes
* frame buffers. That leaves less for us.
*/
if (priv->vlan_state == SJA1105_VLAN_BEST_EFFORT)
max_mem = SJA1105_MAX_FRAME_MEMORY_RETAGGING;
else
max_mem = SJA1105_MAX_FRAME_MEMORY;
table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING_PARAMS];
l2_fwd_params = table->entries;
l2_fwd_params->part_spc[0] = max_mem;
/* If we have any critical-traffic virtual links, we need to reserve
* some frame buffer memory for them. At the moment, hardcode the value
* at 100 blocks of 128 bytes of memory each. This leaves 829 blocks
* remaining for best-effort traffic. TODO: figure out a more flexible
* way to perform the frame buffer partitioning.
*/
if (!priv->static_config.tables[BLK_IDX_VL_FORWARDING].entry_count)
return;
table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING_PARAMS];
vl_fwd_params = table->entries;
l2_fwd_params->part_spc[0] -= SJA1105_VL_FRAME_MEMORY;
vl_fwd_params->partspc[0] = SJA1105_VL_FRAME_MEMORY;
}
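The arithmetic in the comment above works out as follows. Block counts are assumed from sja1105.h (929 frame-memory blocks of 128 bytes in total; 100 hardcoded for virtual links), and the additional reservation selected via SJA1105_MAX_FRAME_MEMORY_RETAGGING in best-effort mode is ignored here:

```c
#include <assert.h>

#define SJA1105_MAX_FRAME_MEMORY	929	/* assumed total, 128-byte blocks */
#define SJA1105_VL_FRAME_MEMORY		100	/* hardcoded VL reservation */

/* Frame memory left for best-effort traffic, with or without
 * critical-traffic virtual links configured.
 */
static int best_effort_frame_memory(int have_virtual_links)
{
	int mem = SJA1105_MAX_FRAME_MEMORY;

	if (have_virtual_links)
		mem -= SJA1105_VL_FRAME_MEMORY;
	return mem;
}
```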
static int sja1105_init_general_params(struct sja1105_private *priv)
{
struct sja1105_general_params_entry default_general_params = {
@@ -1303,7 +1358,7 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
@@ -1366,7 +1421,7 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
@@ -1412,7 +1467,7 @@ static int sja1105_fdb_add(struct dsa_switch *ds, int port,
* for what gets printed in 'bridge fdb show'. In the case of zero,
* no VID gets printed at all.
*/
if (priv->vlan_state != SJA1105_VLAN_FILTERING_FULL)
vid = 0;
return priv->info->fdb_add_cmd(ds, port, addr, vid);
@@ -1423,7 +1478,7 @@ static int sja1105_fdb_del(struct dsa_switch *ds, int port,
{
struct sja1105_private *priv = ds->priv;
if (priv->vlan_state != SJA1105_VLAN_FILTERING_FULL)
vid = 0;
return priv->info->fdb_del_cmd(ds, port, addr, vid);
@@ -1462,7 +1517,7 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
u64_to_ether_addr(l2_lookup.macaddr, macaddr);
/* We need to hide the dsa_8021q VLANs from the user. */
if (priv->vlan_state == SJA1105_VLAN_UNAWARE)
l2_lookup.vlanid = 0;
cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
}
@@ -1717,82 +1772,6 @@ static int sja1105_pvid_apply(struct sja1105_private *priv, int port, u16 pvid)
&mac[port], true);
}
static int sja1105_is_vlan_configured(struct sja1105_private *priv, u16 vid)
{
struct sja1105_vlan_lookup_entry *vlan;
int count, i;
vlan = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entries;
count = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entry_count;
for (i = 0; i < count; i++)
if (vlan[i].vlanid == vid)
return i;
/* Return an invalid entry index if not found */
return -1;
}
static int sja1105_vlan_apply(struct sja1105_private *priv, int port, u16 vid,
bool enabled, bool untagged)
{
struct sja1105_vlan_lookup_entry *vlan;
struct sja1105_table *table;
bool keep = true;
int match, rc;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
match = sja1105_is_vlan_configured(priv, vid);
if (match < 0) {
/* Can't delete a missing entry. */
if (!enabled)
return 0;
rc = sja1105_table_resize(table, table->entry_count + 1);
if (rc)
return rc;
match = table->entry_count - 1;
}
/* Assign pointer after the resize (it's new memory) */
vlan = table->entries;
vlan[match].vlanid = vid;
if (enabled) {
vlan[match].vlan_bc |= BIT(port);
vlan[match].vmemb_port |= BIT(port);
} else {
vlan[match].vlan_bc &= ~BIT(port);
vlan[match].vmemb_port &= ~BIT(port);
}
/* Also unset tag_port if removing this VLAN was requested,
* just so we don't have a confusing bitmap (no practical purpose).
*/
if (untagged || !enabled)
vlan[match].tag_port &= ~BIT(port);
else
vlan[match].tag_port |= BIT(port);
/* If there's no port left as member of this VLAN,
* it's time for it to go.
*/
if (!vlan[match].vmemb_port)
keep = false;
dev_dbg(priv->ds->dev,
"%s: port %d, vid %llu, broadcast domain 0x%llx, "
"port members 0x%llx, tagged ports 0x%llx, keep %d\n",
__func__, port, vlan[match].vlanid, vlan[match].vlan_bc,
vlan[match].vmemb_port, vlan[match].tag_port, keep);
rc = sja1105_dynamic_config_write(priv, BLK_IDX_VLAN_LOOKUP, vid,
&vlan[match], keep);
if (rc < 0)
return rc;
if (!keep)
return sja1105_table_delete_entry(table, match);
return 0;
}
static int sja1105_crosschip_bridge_join(struct dsa_switch *ds,
int tree_index, int sw_index,
int other_port, struct net_device *br)
@@ -1811,15 +1790,19 @@ static int sja1105_crosschip_bridge_join(struct dsa_switch *ds,
if (dsa_to_port(ds, port)->bridge_dev != br)
continue;
other_priv->expect_dsa_8021q = true;
rc = dsa_8021q_crosschip_bridge_join(ds, port, other_ds,
other_port,
&priv->crosschip_links);
other_priv->expect_dsa_8021q = false;
if (rc)
return rc;
priv->expect_dsa_8021q = true;
rc = dsa_8021q_crosschip_bridge_join(other_ds, other_port, ds,
port,
&other_priv->crosschip_links);
priv->expect_dsa_8021q = false;
if (rc)
return rc;
}
@@ -1846,48 +1829,33 @@ static void sja1105_crosschip_bridge_leave(struct dsa_switch *ds,
if (dsa_to_port(ds, port)->bridge_dev != br)
continue;
other_priv->expect_dsa_8021q = true;
dsa_8021q_crosschip_bridge_leave(ds, port, other_ds, other_port,
&priv->crosschip_links);
other_priv->expect_dsa_8021q = false;
priv->expect_dsa_8021q = true;
dsa_8021q_crosschip_bridge_leave(other_ds, other_port, ds, port,
&other_priv->crosschip_links);
priv->expect_dsa_8021q = false;
}
}
static int sja1105_setup_8021q_tagging(struct dsa_switch *ds, bool enabled)
{
struct sja1105_private *priv = ds->priv;
int rc, i;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
priv->expect_dsa_8021q = true;
rc = dsa_port_setup_8021q_tagging(ds, i, enabled);
priv->expect_dsa_8021q = false;
if (rc < 0) {
dev_err(ds->dev, "Failed to setup VLAN tagging for port %d: %d\n",
i, rc);
return rc;
}
}
dev_info(ds->dev, "%s switch tagging\n",
enabled ? "Enabled" : "Disabled");
@@ -1901,10 +1869,695 @@ sja1105_get_tag_protocol(struct dsa_switch *ds, int port,
return DSA_TAG_PROTO_SJA1105;
}
/* This callback needs to be present */
static int sja1105_find_free_subvlan(u16 *subvlan_map, bool pvid)
{
int subvlan;
if (pvid)
return 0;
for (subvlan = 1; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
if (subvlan_map[subvlan] == VLAN_N_VID)
return subvlan;
return -1;
}
static int sja1105_find_subvlan(u16 *subvlan_map, u16 vid)
{
int subvlan;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
if (subvlan_map[subvlan] == vid)
return subvlan;
return -1;
}
static int sja1105_find_committed_subvlan(struct sja1105_private *priv,
int port, u16 vid)
{
struct sja1105_port *sp = &priv->ports[port];
return sja1105_find_subvlan(sp->subvlan_map, vid);
}
static void sja1105_init_subvlan_map(u16 *subvlan_map)
{
int subvlan;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
subvlan_map[subvlan] = VLAN_N_VID;
}
static void sja1105_commit_subvlan_map(struct sja1105_private *priv, int port,
u16 *subvlan_map)
{
struct sja1105_port *sp = &priv->ports[port];
int subvlan;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
sp->subvlan_map[subvlan] = subvlan_map[subvlan];
}
static int sja1105_is_vlan_configured(struct sja1105_private *priv, u16 vid)
{
struct sja1105_vlan_lookup_entry *vlan;
int count, i;
vlan = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entries;
count = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entry_count;
for (i = 0; i < count; i++)
if (vlan[i].vlanid == vid)
return i;
/* Return an invalid entry index if not found */
return -1;
}
static int
sja1105_find_retagging_entry(struct sja1105_retagging_entry *retagging,
int count, int from_port, u16 from_vid,
u16 to_vid)
{
int i;
for (i = 0; i < count; i++)
if (retagging[i].ing_port == BIT(from_port) &&
retagging[i].vlan_ing == from_vid &&
retagging[i].vlan_egr == to_vid)
return i;
/* Return an invalid entry index if not found */
return -1;
}
static int sja1105_commit_vlans(struct sja1105_private *priv,
struct sja1105_vlan_lookup_entry *new_vlan,
struct sja1105_retagging_entry *new_retagging,
int num_retagging)
{
struct sja1105_retagging_entry *retagging;
struct sja1105_vlan_lookup_entry *vlan;
struct sja1105_table *table;
int num_vlans = 0;
int rc, i, k = 0;
/* VLAN table */
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
vlan = table->entries;
for (i = 0; i < VLAN_N_VID; i++) {
int match = sja1105_is_vlan_configured(priv, i);
if (new_vlan[i].vlanid != VLAN_N_VID)
num_vlans++;
if (new_vlan[i].vlanid == VLAN_N_VID && match >= 0) {
/* Was there before, no longer is. Delete */
dev_dbg(priv->ds->dev, "Deleting VLAN %d\n", i);
rc = sja1105_dynamic_config_write(priv,
BLK_IDX_VLAN_LOOKUP,
i, &vlan[match], false);
if (rc < 0)
return rc;
} else if (new_vlan[i].vlanid != VLAN_N_VID) {
/* Nothing changed, don't do anything */
if (match >= 0 &&
vlan[match].vlanid == new_vlan[i].vlanid &&
vlan[match].tag_port == new_vlan[i].tag_port &&
vlan[match].vlan_bc == new_vlan[i].vlan_bc &&
vlan[match].vmemb_port == new_vlan[i].vmemb_port)
continue;
/* Update entry */
dev_dbg(priv->ds->dev, "Updating VLAN %d\n", i);
rc = sja1105_dynamic_config_write(priv,
BLK_IDX_VLAN_LOOKUP,
i, &new_vlan[i],
true);
if (rc < 0)
return rc;
}
}
if (table->entry_count)
kfree(table->entries);
table->entries = kcalloc(num_vlans, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = num_vlans;
vlan = table->entries;
for (i = 0; i < VLAN_N_VID; i++) {
if (new_vlan[i].vlanid == VLAN_N_VID)
continue;
vlan[k++] = new_vlan[i];
}
/* VLAN Retagging Table */
table = &priv->static_config.tables[BLK_IDX_RETAGGING];
retagging = table->entries;
for (i = 0; i < table->entry_count; i++) {
rc = sja1105_dynamic_config_write(priv, BLK_IDX_RETAGGING,
i, &retagging[i], false);
if (rc)
return rc;
}
if (table->entry_count)
kfree(table->entries);
table->entries = kcalloc(num_retagging, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = num_retagging;
retagging = table->entries;
for (i = 0; i < num_retagging; i++) {
retagging[i] = new_retagging[i];
/* Update entry */
rc = sja1105_dynamic_config_write(priv, BLK_IDX_RETAGGING,
i, &retagging[i], true);
if (rc < 0)
return rc;
}
return 0;
}
struct sja1105_crosschip_vlan {
struct list_head list;
u16 vid;
bool untagged;
int port;
int other_port;
struct dsa_switch *other_ds;
};
struct sja1105_crosschip_switch {
struct list_head list;
struct dsa_switch *other_ds;
};
static int sja1105_commit_pvid(struct sja1105_private *priv)
{
struct sja1105_bridge_vlan *v;
struct list_head *vlan_list;
int rc = 0;
if (priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
vlan_list = &priv->bridge_vlans;
else
vlan_list = &priv->dsa_8021q_vlans;
list_for_each_entry(v, vlan_list, list) {
if (v->pvid) {
rc = sja1105_pvid_apply(priv, v->port, v->vid);
if (rc)
break;
}
}
return rc;
}
static int
sja1105_build_bridge_vlans(struct sja1105_private *priv,
struct sja1105_vlan_lookup_entry *new_vlan)
{
struct sja1105_bridge_vlan *v;
if (priv->vlan_state == SJA1105_VLAN_UNAWARE)
return 0;
list_for_each_entry(v, &priv->bridge_vlans, list) {
int match = v->vid;
new_vlan[match].vlanid = v->vid;
new_vlan[match].vmemb_port |= BIT(v->port);
new_vlan[match].vlan_bc |= BIT(v->port);
if (!v->untagged)
new_vlan[match].tag_port |= BIT(v->port);
}
return 0;
}
static int
sja1105_build_dsa_8021q_vlans(struct sja1105_private *priv,
struct sja1105_vlan_lookup_entry *new_vlan)
{
struct sja1105_bridge_vlan *v;
if (priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
return 0;
list_for_each_entry(v, &priv->dsa_8021q_vlans, list) {
int match = v->vid;
new_vlan[match].vlanid = v->vid;
new_vlan[match].vmemb_port |= BIT(v->port);
new_vlan[match].vlan_bc |= BIT(v->port);
if (!v->untagged)
new_vlan[match].tag_port |= BIT(v->port);
}
return 0;
}
static int sja1105_build_subvlans(struct sja1105_private *priv,
u16 subvlan_map[][DSA_8021Q_N_SUBVLAN],
struct sja1105_vlan_lookup_entry *new_vlan,
struct sja1105_retagging_entry *new_retagging,
int *num_retagging)
{
struct sja1105_bridge_vlan *v;
int k = *num_retagging;
if (priv->vlan_state != SJA1105_VLAN_BEST_EFFORT)
return 0;
list_for_each_entry(v, &priv->bridge_vlans, list) {
int upstream = dsa_upstream_port(priv->ds, v->port);
int match, subvlan;
u16 rx_vid;
/* Only sub-VLANs on user ports need to be applied.
* Bridge VLANs also include VLANs added automatically
* by DSA on the CPU port.
*/
if (!dsa_is_user_port(priv->ds, v->port))
continue;
subvlan = sja1105_find_subvlan(subvlan_map[v->port],
v->vid);
if (subvlan < 0) {
subvlan = sja1105_find_free_subvlan(subvlan_map[v->port],
v->pvid);
if (subvlan < 0) {
dev_err(priv->ds->dev, "No more free subvlans\n");
return -ENOSPC;
}
}
rx_vid = dsa_8021q_rx_vid_subvlan(priv->ds, v->port, subvlan);
/* @v->vid on @v->port needs to be retagged to @rx_vid
* on @upstream. Assume @v->vid on @v->port and on
* @upstream was already configured by the previous
* iteration over bridge_vlans.
*/
match = rx_vid;
new_vlan[match].vlanid = rx_vid;
new_vlan[match].vmemb_port |= BIT(v->port);
new_vlan[match].vmemb_port |= BIT(upstream);
new_vlan[match].vlan_bc |= BIT(v->port);
new_vlan[match].vlan_bc |= BIT(upstream);
/* The "untagged" flag is set the same as for the
* original VLAN
*/
if (!v->untagged)
new_vlan[match].tag_port |= BIT(v->port);
/* But it's always tagged towards the CPU */
new_vlan[match].tag_port |= BIT(upstream);
/* The Retagging Table generates packet *clones* with
* the new VLAN. This is a very odd hardware quirk
* which we need to suppress by dropping the original
* packet.
* Deny egress of the original VLAN towards the CPU
* port. This will force the switch to drop it, and
* we'll see only the retagged packets.
*/
match = v->vid;
new_vlan[match].vlan_bc &= ~BIT(upstream);
/* And the retagging itself. Check the bounds before writing,
 * so we never index past the 32-entry retagging table.
 */
if (k == SJA1105_MAX_RETAGGING_COUNT) {
dev_err(priv->ds->dev, "No more retagging rules\n");
return -ENOSPC;
}
new_retagging[k].vlan_ing = v->vid;
new_retagging[k].vlan_egr = rx_vid;
new_retagging[k].ing_port = BIT(v->port);
new_retagging[k].egr_port = BIT(upstream);
k++;
subvlan_map[v->port][subvlan] = v->vid;
}
*num_retagging = k;
return 0;
}
/* Sadly, in crosschip scenarios where the CPU port is also the link to another
* switch, we should retag backwards (the dsa_8021q vid to the original vid) on
* the CPU port of neighbour switches.
*/
static int
sja1105_build_crosschip_subvlans(struct sja1105_private *priv,
struct sja1105_vlan_lookup_entry *new_vlan,
struct sja1105_retagging_entry *new_retagging,
int *num_retagging)
{
struct sja1105_crosschip_vlan *tmp, *pos;
struct dsa_8021q_crosschip_link *c;
struct sja1105_bridge_vlan *v, *w;
struct list_head crosschip_vlans;
int k = *num_retagging;
int rc = 0;
if (priv->vlan_state != SJA1105_VLAN_BEST_EFFORT)
return 0;
INIT_LIST_HEAD(&crosschip_vlans);
list_for_each_entry(c, &priv->crosschip_links, list) {
struct sja1105_private *other_priv = c->other_ds->priv;
if (other_priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
continue;
/* Crosschip links are also added to the CPU ports.
* Ignore those.
*/
if (!dsa_is_user_port(priv->ds, c->port))
continue;
if (!dsa_is_user_port(c->other_ds, c->other_port))
continue;
/* Search for VLANs on the remote port */
list_for_each_entry(v, &other_priv->bridge_vlans, list) {
bool already_added = false;
bool we_have_it = false;
if (v->port != c->other_port)
continue;
/* If @v is a pvid on @other_ds, it does not need
* re-retagging, because its SVL field is 0 and we
* already allow that, via the dsa_8021q crosschip
* links.
*/
if (v->pvid)
continue;
/* Search for the VLAN on our local port */
list_for_each_entry(w, &priv->bridge_vlans, list) {
if (w->port == c->port && w->vid == v->vid) {
we_have_it = true;
break;
}
}
if (!we_have_it)
continue;
list_for_each_entry(tmp, &crosschip_vlans, list) {
if (tmp->vid == v->vid &&
tmp->untagged == v->untagged &&
tmp->port == c->port &&
tmp->other_port == v->port &&
tmp->other_ds == c->other_ds) {
already_added = true;
break;
}
}
if (already_added)
continue;
tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
if (!tmp) {
dev_err(priv->ds->dev, "Failed to allocate memory\n");
rc = -ENOMEM;
goto out;
}
tmp->vid = v->vid;
tmp->port = c->port;
tmp->other_port = v->port;
tmp->other_ds = c->other_ds;
tmp->untagged = v->untagged;
list_add(&tmp->list, &crosschip_vlans);
}
}
list_for_each_entry(tmp, &crosschip_vlans, list) {
struct sja1105_private *other_priv = tmp->other_ds->priv;
int upstream = dsa_upstream_port(priv->ds, tmp->port);
int match, subvlan;
u16 rx_vid;
subvlan = sja1105_find_committed_subvlan(other_priv,
tmp->other_port,
tmp->vid);
/* If this happens, it's a bug. The neighbour switch does not
* have a subvlan for tmp->vid on tmp->other_port, but it
* should, since we already checked for its vlan_state.
*/
if (WARN_ON(subvlan < 0)) {
rc = -EINVAL;
goto out;
}
rx_vid = dsa_8021q_rx_vid_subvlan(tmp->other_ds,
tmp->other_port,
subvlan);
/* The @rx_vid retagged from @tmp->vid on
* {@tmp->other_ds, @tmp->other_port} needs to be
* re-retagged to @tmp->vid on the way back to us.
*
* Assume the original @tmp->vid is already configured
* on this local switch, otherwise we wouldn't be
* retagging its subvlan on the other switch in the
* first place. We just need to add a reverse retagging
* rule for @rx_vid and install @rx_vid on our ports.
*/
match = rx_vid;
new_vlan[match].vlanid = rx_vid;
new_vlan[match].vmemb_port |= BIT(tmp->port);
new_vlan[match].vmemb_port |= BIT(upstream);
/* The "untagged" flag is set the same as for the
* original VLAN. And towards the CPU, it doesn't
* really matter, because @rx_vid will only receive
* traffic on that port. For consistency with other dsa_8021q
* VLANs, we'll keep the CPU port tagged.
*/
if (!tmp->untagged)
new_vlan[match].tag_port |= BIT(tmp->port);
new_vlan[match].tag_port |= BIT(upstream);
/* Deny egress of @rx_vid towards our front-panel port.
* This will force the switch to drop it, and we'll see
* only the re-retagged packets (having the original,
* pre-initial-retagging, VLAN @tmp->vid).
*/
new_vlan[match].vlan_bc &= ~BIT(tmp->port);
/* On reverse retagging, the same ingress VLAN goes to multiple
* ports. So we have an opportunity to create composite rules
* to not waste the limited space in the retagging table.
*/
k = sja1105_find_retagging_entry(new_retagging, *num_retagging,
upstream, rx_vid, tmp->vid);
if (k < 0) {
if (*num_retagging == SJA1105_MAX_RETAGGING_COUNT) {
dev_err(priv->ds->dev, "No more retagging rules\n");
rc = -ENOSPC;
goto out;
}
k = (*num_retagging)++;
}
/* And the retagging itself */
new_retagging[k].vlan_ing = rx_vid;
new_retagging[k].vlan_egr = tmp->vid;
new_retagging[k].ing_port = BIT(upstream);
new_retagging[k].egr_port |= BIT(tmp->port);
}
out:
list_for_each_entry_safe(tmp, pos, &crosschip_vlans, list) {
list_del(&tmp->list);
kfree(tmp);
}
return rc;
}
static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify);
static int sja1105_notify_crosschip_switches(struct sja1105_private *priv)
{
struct sja1105_crosschip_switch *s, *pos;
struct list_head crosschip_switches;
struct dsa_8021q_crosschip_link *c;
int rc = 0;
INIT_LIST_HEAD(&crosschip_switches);
list_for_each_entry(c, &priv->crosschip_links, list) {
bool already_added = false;
list_for_each_entry(s, &crosschip_switches, list) {
if (s->other_ds == c->other_ds) {
already_added = true;
break;
}
}
if (already_added)
continue;
s = kzalloc(sizeof(*s), GFP_KERNEL);
if (!s) {
dev_err(priv->ds->dev, "Failed to allocate memory\n");
rc = -ENOMEM;
goto out;
}
s->other_ds = c->other_ds;
list_add(&s->list, &crosschip_switches);
}
list_for_each_entry(s, &crosschip_switches, list) {
struct sja1105_private *other_priv = s->other_ds->priv;
rc = sja1105_build_vlan_table(other_priv, false);
if (rc)
goto out;
}
out:
list_for_each_entry_safe(s, pos, &crosschip_switches, list) {
list_del(&s->list);
kfree(s);
}
return rc;
}
static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify)
{
u16 subvlan_map[SJA1105_NUM_PORTS][DSA_8021Q_N_SUBVLAN];
struct sja1105_retagging_entry *new_retagging;
struct sja1105_vlan_lookup_entry *new_vlan;
struct sja1105_table *table;
int i, num_retagging = 0;
int rc;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
new_vlan = kcalloc(VLAN_N_VID,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!new_vlan)
return -ENOMEM;
table = &priv->static_config.tables[BLK_IDX_RETAGGING];
new_retagging = kcalloc(SJA1105_MAX_RETAGGING_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!new_retagging) {
kfree(new_vlan);
return -ENOMEM;
}
for (i = 0; i < VLAN_N_VID; i++)
new_vlan[i].vlanid = VLAN_N_VID;
for (i = 0; i < SJA1105_MAX_RETAGGING_COUNT; i++)
new_retagging[i].vlan_ing = VLAN_N_VID;
for (i = 0; i < priv->ds->num_ports; i++)
sja1105_init_subvlan_map(subvlan_map[i]);
/* Bridge VLANs */
rc = sja1105_build_bridge_vlans(priv, new_vlan);
if (rc)
goto out;
/* VLANs necessary for dsa_8021q operation, given to us by tag_8021q.c:
* - RX VLANs
* - TX VLANs
* - Crosschip links
*/
rc = sja1105_build_dsa_8021q_vlans(priv, new_vlan);
if (rc)
goto out;
/* Private VLANs necessary for dsa_8021q operation, which we need to
* determine on our own:
* - Sub-VLANs
* - Sub-VLANs of crosschip switches
*/
rc = sja1105_build_subvlans(priv, subvlan_map, new_vlan, new_retagging,
&num_retagging);
if (rc)
goto out;
rc = sja1105_build_crosschip_subvlans(priv, new_vlan, new_retagging,
&num_retagging);
if (rc)
goto out;
rc = sja1105_commit_vlans(priv, new_vlan, new_retagging, num_retagging);
if (rc)
goto out;
rc = sja1105_commit_pvid(priv);
if (rc)
goto out;
for (i = 0; i < priv->ds->num_ports; i++)
sja1105_commit_subvlan_map(priv, i, subvlan_map[i]);
if (notify) {
rc = sja1105_notify_crosschip_switches(priv);
if (rc)
goto out;
}
out:
kfree(new_vlan);
kfree(new_retagging);
return rc;
}
/* Select the list to which we should add this VLAN. */
static struct list_head *sja1105_classify_vlan(struct sja1105_private *priv,
u16 vid)
{
if (priv->expect_dsa_8021q)
return &priv->dsa_8021q_vlans;
return &priv->bridge_vlans;
}
static int sja1105_vlan_prepare(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan)
{
struct sja1105_private *priv = ds->priv;
u16 vid;
if (priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
return 0;
/* If the user wants best-effort VLAN filtering (aka vlan_filtering
* bridge plus tagging), be sure to at least deny alterations to the
* configuration done by dsa_8021q.
*/
for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
if (!priv->expect_dsa_8021q && vid_is_dsa_8021q(vid)) {
dev_err(ds->dev, "Range 1024-3071 reserved for dsa_8021q operation\n");
return -EBUSY;
}
}
return 0;
}
......@@ -1917,8 +2570,10 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
struct sja1105_l2_lookup_params_entry *l2_lookup_params;
struct sja1105_general_params_entry *general_params;
struct sja1105_private *priv = ds->priv;
enum sja1105_vlan_state state;
struct sja1105_table *table;
struct sja1105_rule *rule;
bool want_tagging;
u16 tpid, tpid2;
int rc;
......@@ -1940,6 +2595,29 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
tpid2 = ETH_P_SJA1105;
}
for (port = 0; port < ds->num_ports; port++) {
struct sja1105_port *sp = &priv->ports[port];
if (enabled)
sp->xmit_tpid = priv->info->qinq_tpid;
else
sp->xmit_tpid = ETH_P_SJA1105;
}
if (!enabled)
state = SJA1105_VLAN_UNAWARE;
else if (priv->best_effort_vlan_filtering)
state = SJA1105_VLAN_BEST_EFFORT;
else
state = SJA1105_VLAN_FILTERING_FULL;
if (priv->vlan_state == state)
return 0;
priv->vlan_state = state;
want_tagging = (state == SJA1105_VLAN_UNAWARE ||
state == SJA1105_VLAN_BEST_EFFORT);
table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
general_params = table->entries;
/* EtherType used to identify inner tagged (C-tag) VLAN traffic */
......@@ -1952,8 +2630,10 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
general_params->incl_srcpt1 = enabled;
general_params->incl_srcpt0 = enabled;
want_tagging = priv->best_effort_vlan_filtering || !enabled;
/* VLAN filtering => independent VLAN learning.
* No VLAN filtering (or best effort) => shared VLAN learning.
*
* In shared VLAN learning mode, untagged traffic still gets
* pvid-tagged, and the FDB table gets populated with entries
......@@ -1972,7 +2652,9 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
*/
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
l2_lookup_params = table->entries;
l2_lookup_params->shared_learn = want_tagging;
sja1105_frame_memory_partitioning(priv);
rc = sja1105_static_config_reload(priv, SJA1105_VLAN_FILTERING);
if (rc)
......@@ -1980,56 +2662,191 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
/* Switch port identification based on 802.1Q is only passable
* if we are not under a vlan_filtering bridge. So make sure
* the two configurations are mutually exclusive (of course, the
* user may know better, i.e. best_effort_vlan_filtering).
*/
return sja1105_setup_8021q_tagging(ds, want_tagging);
}
static void sja1105_vlan_add(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan)
{
struct sja1105_private *priv = ds->priv;
bool vlan_table_changed = false;
u16 vid;
int rc;
for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
struct sja1105_bridge_vlan *v;
struct list_head *vlan_list;
bool already_added = false;
vlan_list = sja1105_classify_vlan(priv, vid);
list_for_each_entry(v, vlan_list, list) {
if (v->port == port && v->vid == vid &&
v->untagged == untagged && v->pvid == pvid) {
already_added = true;
break;
}
}
if (already_added)
continue;
v = kzalloc(sizeof(*v), GFP_KERNEL);
if (!v) {
dev_err(ds->dev, "Out of memory while storing VLAN\n");
return;
}
v->port = port;
v->vid = vid;
v->untagged = untagged;
v->pvid = pvid;
list_add(&v->list, vlan_list);
vlan_table_changed = true;
}
if (!vlan_table_changed)
return;
rc = sja1105_build_vlan_table(priv, true);
if (rc)
dev_err(ds->dev, "Failed to build VLAN table: %d\n", rc);
}
static int sja1105_vlan_del(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan)
{
struct sja1105_private *priv = ds->priv;
bool vlan_table_changed = false;
u16 vid;
int rc;
for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
struct sja1105_bridge_vlan *v, *n;
struct list_head *vlan_list;
vlan_list = sja1105_classify_vlan(priv, vid);
list_for_each_entry_safe(v, n, vlan_list, list) {
if (v->port == port && v->vid == vid) {
list_del(&v->list);
kfree(v);
vlan_table_changed = true;
break;
}
}
}
if (!vlan_table_changed)
return 0;
return sja1105_build_vlan_table(priv, true);
}
static int sja1105_best_effort_vlan_filtering_get(struct sja1105_private *priv,
bool *be_vlan)
{
*be_vlan = priv->best_effort_vlan_filtering;
return 0;
}
static int sja1105_best_effort_vlan_filtering_set(struct sja1105_private *priv,
bool be_vlan)
{
struct dsa_switch *ds = priv->ds;
bool vlan_filtering;
int port;
int rc = 0;
priv->best_effort_vlan_filtering = be_vlan;
rtnl_lock();
for (port = 0; port < ds->num_ports; port++) {
struct dsa_port *dp;
if (!dsa_is_user_port(ds, port))
continue;
dp = dsa_to_port(ds, port);
vlan_filtering = dsa_port_is_vlan_filtering(dp);
rc = sja1105_vlan_filtering(ds, port, vlan_filtering);
if (rc)
break;
}
rtnl_unlock();
return rc;
}
enum sja1105_devlink_param_id {
SJA1105_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING,
};
static int sja1105_devlink_param_get(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct sja1105_private *priv = ds->priv;
int err;
switch (id) {
case SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING:
err = sja1105_best_effort_vlan_filtering_get(priv,
&ctx->val.vbool);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
}
static int sja1105_devlink_param_set(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct sja1105_private *priv = ds->priv;
int err;
switch (id) {
case SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING:
err = sja1105_best_effort_vlan_filtering_set(priv,
ctx->val.vbool);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
}
static const struct devlink_param sja1105_devlink_params[] = {
DSA_DEVLINK_PARAM_DRIVER(SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING,
"best_effort_vlan_filtering",
DEVLINK_PARAM_TYPE_BOOL,
BIT(DEVLINK_PARAM_CMODE_RUNTIME)),
};
static int sja1105_setup_devlink_params(struct dsa_switch *ds)
{
return dsa_devlink_params_register(ds, sja1105_devlink_params,
ARRAY_SIZE(sja1105_devlink_params));
}
static void sja1105_teardown_devlink_params(struct dsa_switch *ds)
{
dsa_devlink_params_unregister(ds, sja1105_devlink_params,
ARRAY_SIZE(sja1105_devlink_params));
}
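With the parameter registered, user space can flip the mode at runtime through the standard devlink parameter interface. The device handle below (`spi/spi2.0`) is only an example; list the actual handle with `devlink dev`:

```shell
# Query the current mode
devlink dev param show spi/spi2.0 name best_effort_vlan_filtering

# Opt in to best-effort VLAN filtering, so dsa_8021q tagging and
# bridge VLANs can coexist once vlan_filtering is enabled
devlink dev param set spi/spi2.0 name best_effort_vlan_filtering \
	value true cmode runtime
```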
/* The programming model for the SJA1105 switch is "all-at-once" via static
* configuration tables. Some of these can be dynamically modified at runtime,
* but not the xMII mode parameters table.
......@@ -2095,6 +2912,12 @@ static int sja1105_setup(struct dsa_switch *ds)
ds->mtu_enforcement_ingress = true;
ds->configure_vlan_while_not_filtering = true;
rc = sja1105_setup_devlink_params(ds);
if (rc < 0)
return rc;
/* The DSA/switchdev model brings up switch ports in standalone mode by
* default, and that means vlan_filtering is 0 since they're not under
* a bridge, so it's safe to set up switch tagging at this time.
......@@ -2105,6 +2928,7 @@ static int sja1105_setup(struct dsa_switch *ds)
static void sja1105_teardown(struct dsa_switch *ds)
{
struct sja1105_private *priv = ds->priv;
struct sja1105_bridge_vlan *v, *n;
int port;
for (port = 0; port < SJA1105_NUM_PORTS; port++) {
......@@ -2117,10 +2941,21 @@ static void sja1105_teardown(struct dsa_switch *ds)
kthread_destroy_worker(sp->xmit_worker);
}
sja1105_teardown_devlink_params(ds);
sja1105_flower_teardown(ds);
sja1105_tas_teardown(ds);
sja1105_ptp_clock_unregister(ds);
sja1105_static_config_free(&priv->static_config);
list_for_each_entry_safe(v, n, &priv->dsa_8021q_vlans, list) {
list_del(&v->list);
kfree(v);
}
list_for_each_entry_safe(v, n, &priv->bridge_vlans, list) {
list_del(&v->list);
kfree(v);
}
}
static int sja1105_port_enable(struct dsa_switch *ds, int port,
......@@ -2458,6 +3293,8 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
.cls_flower_stats = sja1105_cls_flower_stats,
.crosschip_bridge_join = sja1105_crosschip_bridge_join,
.crosschip_bridge_leave = sja1105_crosschip_bridge_leave,
.devlink_param_get = sja1105_devlink_param_get,
.devlink_param_set = sja1105_devlink_param_set,
};
static int sja1105_check_device_id(struct sja1105_private *priv)
......@@ -2561,6 +3398,8 @@ static int sja1105_probe(struct spi_device *spi)
mutex_init(&priv->mgmt_lock);
INIT_LIST_HEAD(&priv->crosschip_links);
INIT_LIST_HEAD(&priv->bridge_vlans);
INIT_LIST_HEAD(&priv->dsa_8021q_vlans);
sja1105_tas_setup(ds);
sja1105_flower_setup(ds);
......@@ -2574,6 +3413,7 @@ static int sja1105_probe(struct spi_device *spi)
struct sja1105_port *sp = &priv->ports[port];
struct dsa_port *dp = dsa_to_port(ds, port);
struct net_device *slave;
int subvlan;
if (!dsa_is_user_port(ds, port))
continue;
......@@ -2593,6 +3433,10 @@ static int sja1105_probe(struct spi_device *spi)
goto out;
}
skb_queue_head_init(&sp->xmit_queue);
sp->xmit_tpid = ETH_P_SJA1105;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
sp->subvlan_map[subvlan] = VLAN_N_VID;
}
return 0;
......
......@@ -512,6 +512,7 @@ struct sja1105_info sja1105e_info = {
.part_no = SJA1105ET_PART_NO,
.static_ops = sja1105e_table_ops,
.dyn_ops = sja1105et_dyn_ops,
.qinq_tpid = ETH_P_8021Q,
.ptp_ts_bits = 24,
.ptpegr_ts_bytes = 4,
.reset_cmd = sja1105et_reset_cmd,
......@@ -526,6 +527,7 @@ struct sja1105_info sja1105t_info = {
.part_no = SJA1105ET_PART_NO,
.static_ops = sja1105t_table_ops,
.dyn_ops = sja1105et_dyn_ops,
.qinq_tpid = ETH_P_8021Q,
.ptp_ts_bits = 24,
.ptpegr_ts_bytes = 4,
.reset_cmd = sja1105et_reset_cmd,
......@@ -540,6 +542,7 @@ struct sja1105_info sja1105p_info = {
.part_no = SJA1105P_PART_NO,
.static_ops = sja1105p_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.qinq_tpid = ETH_P_8021AD,
.ptp_ts_bits = 32,
.ptpegr_ts_bytes = 8,
.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
......@@ -555,6 +558,7 @@ struct sja1105_info sja1105q_info = {
.part_no = SJA1105Q_PART_NO,
.static_ops = sja1105q_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.qinq_tpid = ETH_P_8021AD,
.ptp_ts_bits = 32,
.ptpegr_ts_bytes = 8,
.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
......@@ -570,6 +574,7 @@ struct sja1105_info sja1105r_info = {
.part_no = SJA1105R_PART_NO,
.static_ops = sja1105r_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.qinq_tpid = ETH_P_8021AD,
.ptp_ts_bits = 32,
.ptpegr_ts_bytes = 8,
.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
......@@ -586,6 +591,7 @@ struct sja1105_info sja1105s_info = {
.static_ops = sja1105s_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.regs = &sja1105pqrs_regs,
.qinq_tpid = ETH_P_8021AD,
.ptp_ts_bits = 32,
.ptpegr_ts_bytes = 8,
.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
......
......@@ -541,6 +541,22 @@ static size_t sja1105_xmii_params_entry_packing(void *buf, void *entry_ptr,
return size;
}
size_t sja1105_retagging_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_retagging_entry *entry = entry_ptr;
const size_t size = SJA1105_SIZE_RETAGGING_ENTRY;
sja1105_packing(buf, &entry->egr_port, 63, 59, size, op);
sja1105_packing(buf, &entry->ing_port, 58, 54, size, op);
sja1105_packing(buf, &entry->vlan_ing, 53, 42, size, op);
sja1105_packing(buf, &entry->vlan_egr, 41, 30, size, op);
sja1105_packing(buf, &entry->do_not_learn, 29, 29, size, op);
sja1105_packing(buf, &entry->use_dest_ports, 28, 28, size, op);
sja1105_packing(buf, &entry->destports, 27, 23, size, op);
return size;
}
size_t sja1105_table_header_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
......@@ -603,6 +619,7 @@ static u64 blk_id_map[BLK_IDX_MAX] = {
[BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS,
[BLK_IDX_AVB_PARAMS] = BLKID_AVB_PARAMS,
[BLK_IDX_GENERAL_PARAMS] = BLKID_GENERAL_PARAMS,
[BLK_IDX_RETAGGING] = BLKID_RETAGGING,
[BLK_IDX_XMII_PARAMS] = BLKID_XMII_PARAMS,
};
......@@ -646,7 +663,7 @@ static_config_check_memory_size(const struct sja1105_table *tables)
{
const struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
const struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
int i, mem = 0;
int i, max_mem, mem = 0;
l2_fwd_params = tables[BLK_IDX_L2_FORWARDING_PARAMS].entries;
......@@ -659,7 +676,12 @@ static_config_check_memory_size(const struct sja1105_table *tables)
mem += vl_fwd_params->partspc[i];
}
if (mem > SJA1105_MAX_FRAME_MEMORY)
if (tables[BLK_IDX_RETAGGING].entry_count)
max_mem = SJA1105_MAX_FRAME_MEMORY_RETAGGING;
else
max_mem = SJA1105_MAX_FRAME_MEMORY;
if (mem > max_mem)
return SJA1105_OVERCOMMITTED_FRAME_MEMORY;
return SJA1105_CONFIG_OK;
......@@ -881,6 +903,12 @@ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_RETAGGING] = {
.packing = sja1105_retagging_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
......@@ -993,6 +1021,12 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_RETAGGING] = {
.packing = sja1105_retagging_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
......@@ -1065,6 +1099,12 @@ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_RETAGGING] = {
.packing = sja1105_retagging_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
......@@ -1177,6 +1217,12 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_RETAGGING] = {
.packing = sja1105_retagging_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
......@@ -1249,6 +1295,12 @@ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_RETAGGING] = {
.packing = sja1105_retagging_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
......@@ -1361,6 +1413,12 @@ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_RETAGGING] = {
.packing = sja1105_retagging_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
......
......@@ -20,6 +20,7 @@
#define SJA1105_SIZE_VLAN_LOOKUP_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY 12
#define SJA1105_SIZE_RETAGGING_ENTRY 8
#define SJA1105_SIZE_XMII_PARAMS_ENTRY 4
#define SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY 12
#define SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY 4
......@@ -54,6 +55,7 @@ enum {
BLKID_L2_FORWARDING_PARAMS = 0x0E,
BLKID_AVB_PARAMS = 0x10,
BLKID_GENERAL_PARAMS = 0x11,
BLKID_RETAGGING = 0x12,
BLKID_XMII_PARAMS = 0x4E,
};
......@@ -75,6 +77,7 @@ enum sja1105_blk_idx {
BLK_IDX_L2_FORWARDING_PARAMS,
BLK_IDX_AVB_PARAMS,
BLK_IDX_GENERAL_PARAMS,
BLK_IDX_RETAGGING,
BLK_IDX_XMII_PARAMS,
BLK_IDX_MAX,
/* Fake block indices that are only valid for dynamic access */
......@@ -99,10 +102,13 @@ enum sja1105_blk_idx {
#define SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT 1
#define SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT 1
#define SJA1105_MAX_GENERAL_PARAMS_COUNT 1
#define SJA1105_MAX_RETAGGING_COUNT 32
#define SJA1105_MAX_XMII_PARAMS_COUNT 1
#define SJA1105_MAX_AVB_PARAMS_COUNT 1
#define SJA1105_MAX_FRAME_MEMORY 929
#define SJA1105_MAX_FRAME_MEMORY_RETAGGING 910
#define SJA1105_VL_FRAME_MEMORY 100
#define SJA1105E_DEVICE_ID 0x9C00000Cull
#define SJA1105T_DEVICE_ID 0x9E00030Eull
......@@ -273,6 +279,16 @@ struct sja1105_mac_config_entry {
u64 ingress;
};
struct sja1105_retagging_entry {
u64 egr_port;
u64 ing_port;
u64 vlan_ing;
u64 vlan_egr;
u64 do_not_learn;
u64 use_dest_ports;
u64 destports;
};
struct sja1105_xmii_params_entry {
u64 phy_mac[5];
u64 xmii_mode[5];
......
......@@ -5,7 +5,6 @@
#include <linux/dsa/8021q.h>
#include "sja1105.h"
#define SJA1105_SIZE_VL_STATUS 8
/* The switch flow classification core implements TTEthernet, which 'thinks' in
......@@ -141,8 +140,6 @@ static bool sja1105_vl_key_lower(struct sja1105_vl_lookup_entry *a,
static int sja1105_init_virtual_links(struct sja1105_private *priv,
struct netlink_ext_ack *extack)
{
struct sja1105_vl_policing_entry *vl_policing;
struct sja1105_vl_forwarding_entry *vl_fwd;
struct sja1105_vl_lookup_entry *vl_lookup;
......@@ -153,10 +150,6 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv,
int max_sharindx = 0;
int i, j, k;
/* Figure out the dimensioning of the problem */
list_for_each_entry(rule, &priv->flow_block.rules, list) {
if (rule->type != SJA1105_RULE_VL)
......@@ -308,17 +301,6 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv,
if (!table->entries)
return -ENOMEM;
table->entry_count = 1;
for (i = 0; i < num_virtual_links; i++) {
unsigned long cookie = vl_lookup[i].flow_cookie;
......@@ -342,6 +324,8 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv,
}
}
sja1105_frame_memory_partitioning(priv);
return 0;
}
......@@ -353,14 +337,14 @@ int sja1105_vl_redirect(struct sja1105_private *priv, int port,
struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
int rc;
if (priv->vlan_state == SJA1105_VLAN_UNAWARE &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only redirect based on DMAC");
return -EOPNOTSUPP;
} else if (key->type != SJA1105_KEY_VLAN_AWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only redirect based on {DMAC, VID, PCP}");
return -EOPNOTSUPP;
}
......@@ -602,14 +586,18 @@ int sja1105_vl_gate(struct sja1105_private *priv, int port,
return -ERANGE;
}
if (priv->vlan_state == SJA1105_VLAN_UNAWARE &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
dev_err(priv->ds->dev, "1: vlan state %d key type %d\n",
priv->vlan_state, key->type);
NL_SET_ERR_MSG_MOD(extack,
"Can only gate based on DMAC");
return -EOPNOTSUPP;
} else if (key->type != SJA1105_KEY_VLAN_AWARE_VL) {
dev_err(priv->ds->dev, "2: vlan state %d key type %d\n",
priv->vlan_state, key->type);
NL_SET_ERR_MSG_MOD(extack,
"Can only gate based on {DMAC, VID, PCP}");
return -EOPNOTSUPP;
}
@@ -20,23 +20,21 @@ struct dsa_8021q_crosschip_link {
refcount_t refcount;
};
#define DSA_8021Q_N_SUBVLAN 8
#if IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q)
int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
bool enabled);
int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, bool enabled);
int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, struct net_device *br,
int other_port,
struct list_head *crosschip_links);
int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, struct net_device *br,
int other_port,
struct list_head *crosschip_links);
struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
@@ -46,10 +44,16 @@ u16 dsa_8021q_tx_vid(struct dsa_switch *ds, int port);
u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port);
u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan);
int dsa_8021q_rx_switch_id(u16 vid);
int dsa_8021q_rx_source_port(u16 vid);
u16 dsa_8021q_rx_subvlan(u16 vid);
bool vid_is_dsa_8021q(u16 vid);
#else
int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
@@ -58,16 +62,9 @@ int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
return 0;
}
int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, bool enabled)
{
return 0;
}
int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, struct net_device *br,
int other_port,
struct list_head *crosschip_links)
{
return 0;
@@ -75,7 +72,7 @@ int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, struct net_device *br,
int other_port,
struct list_head *crosschip_links)
{
return 0;
@@ -97,6 +94,11 @@ u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
return 0;
}
u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan)
{
return 0;
}
int dsa_8021q_rx_switch_id(u16 vid)
{
return 0;
@@ -107,6 +109,16 @@ int dsa_8021q_rx_source_port(u16 vid)
return 0;
}
u16 dsa_8021q_rx_subvlan(u16 vid)
{
return 0;
}
bool vid_is_dsa_8021q(u16 vid)
{
return false;
}
#endif /* IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q) */
#endif /* _NET_DSA_8021Q_H */
@@ -9,6 +9,7 @@
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/dsa/8021q.h>
#include <net/dsa.h>
#define ETH_P_SJA1105 ETH_P_DSA_8021Q
@@ -53,12 +54,14 @@ struct sja1105_skb_cb {
((struct sja1105_skb_cb *)DSA_SKB_CB_PRIV(skb))
struct sja1105_port {
u16 subvlan_map[DSA_8021Q_N_SUBVLAN];
struct kthread_worker *xmit_worker;
struct kthread_work xmit_work;
struct sk_buff_head xmit_queue;
struct sja1105_tagger_data *data;
struct dsa_port *dp;
bool hwts_tx_en;
u16 xmit_tpid;
};
#endif /* _NET_DSA_SJA1105_H */
@@ -282,6 +282,13 @@ struct dsa_switch {
*/
bool vlan_filtering_is_global;
/* Pass .port_vlan_add and .port_vlan_del to drivers even for bridges
* that have vlan_filtering=0. All drivers should ideally set this (and
* then the option would get removed), but it is unknown whether this
* would break things or not.
*/
bool configure_vlan_while_not_filtering;
/* In case vlan_filtering_is_global is set, the VLAN awareness state
* should be retrieved from here and not from the per-port settings.
*/
@@ -138,6 +138,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br);
void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br);
int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
struct switchdev_trans *trans);
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp);
int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock,
struct switchdev_trans *trans);
int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
@@ -257,6 +257,20 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
return 0;
}
/* This enforces legacy behavior for switch drivers which assume they can't
* receive VLAN configuration when enslaved to a bridge with vlan_filtering=0
*/
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
if (!dp->bridge_dev)
return false;
return (!ds->configure_vlan_while_not_filtering &&
!br_vlan_enabled(dp->bridge_dev));
}
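The decision implemented by dsa_port_skip_vlan_configuration() above can be modeled as a small truth table. The following is an illustrative Python sketch (not kernel code); the three booleans stand in for `dp->bridge_dev`, `ds->configure_vlan_while_not_filtering` and `br_vlan_enabled()`:

```python
def skip_vlan_configuration(has_bridge: bool,
                            configure_vlan_while_not_filtering: bool,
                            br_vlan_enabled: bool) -> bool:
    """Mirror the legacy-behavior check: VLAN configuration is skipped only
    for ports enslaved to a vlan_filtering=0 bridge, on switches that did
    not opt in to receiving VLANs while not filtering."""
    if not has_bridge:
        # Standalone port: never skip
        return False
    return (not configure_vlan_while_not_filtering) and (not br_vlan_enabled)

# A driver that sets configure_vlan_while_not_filtering always receives
# .port_vlan_add / .port_vlan_del:
assert skip_vlan_configuration(True, True, False) is False
# Legacy driver under a vlan_filtering=0 bridge: skip
assert skip_vlan_configuration(True, False, False) is True
```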
int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock,
struct switchdev_trans *trans)
{
@@ -314,7 +314,7 @@ static int dsa_slave_vlan_add(struct net_device *dev,
if (obj->orig_dev != dev)
return -EOPNOTSUPP;
if (dp->bridge_dev && !br_vlan_enabled(dp->bridge_dev))
if (dsa_port_skip_vlan_configuration(dp))
return 0;
vlan = *SWITCHDEV_OBJ_PORT_VLAN(obj);
@@ -381,7 +381,7 @@ static int dsa_slave_vlan_del(struct net_device *dev,
if (obj->orig_dev != dev)
return -EOPNOTSUPP;
if (dp->bridge_dev && !br_vlan_enabled(dp->bridge_dev))
if (dsa_port_skip_vlan_configuration(dp))
return 0;
/* Do not deprogram the CPU port as it may be shared with other user
@@ -1240,7 +1240,7 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
* need to emulate the switchdev prepare + commit phase.
*/
if (dp->bridge_dev) {
if (!br_vlan_enabled(dp->bridge_dev))
if (dsa_port_skip_vlan_configuration(dp))
return 0;
/* br_vlan_get_info() returns -EINVAL or -ENOENT if the
@@ -1274,7 +1274,7 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
* need to emulate the switchdev prepare + commit phase.
*/
if (dp->bridge_dev) {
if (!br_vlan_enabled(dp->bridge_dev))
if (dsa_port_skip_vlan_configuration(dp))
return 0;
/* br_vlan_get_info() returns -EINVAL or -ENOENT if the
@@ -17,7 +17,7 @@
*
* | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
* +-----------+-----+-----------------+-----------+-----------------------+
* | DIR | RSV | SWITCH_ID | RSV | PORT |
* | DIR | SVL | SWITCH_ID | SUBVLAN | PORT |
* +-----------+-----+-----------------+-----------+-----------------------+
*
* DIR - VID[11:10]:
@@ -27,17 +27,24 @@
* These values make the special VIDs of 0, 1 and 4095 to be left
* unused by this coding scheme.
*
* RSV - VID[9]:
* To be used for further expansion of SWITCH_ID or for other purposes.
* Must be transmitted as zero and ignored on receive.
* SVL/SUBVLAN - { VID[9], VID[5:4] }:
* Sub-VLAN encoding. Valid only when DIR indicates an RX VLAN.
* * 0 (0b000): Field does not encode a sub-VLAN, either because
* received traffic is untagged, PVID-tagged or because a second
* VLAN tag is present after this tag and not inside of it.
* * 1 (0b001): Received traffic is tagged with a VID value private
* to the host. This field encodes the index in the host's lookup
* table through which the value of the ingress VLAN ID can be
* recovered.
* * 2 (0b010): Field encodes a sub-VLAN.
* ...
* * 7 (0b111): Field encodes a sub-VLAN.
* When DIR indicates a TX VLAN, SUBVLAN must be transmitted as zero
* (by the host) and ignored on receive (by the switch).
*
* SWITCH_ID - VID[8:6]:
* Index of switch within DSA tree. Must be between 0 and 7.
*
* RSV - VID[5:4]:
* To be used for further expansion of PORT or for other purposes.
* Must be transmitted as zero and ignored on receive.
*
* PORT - VID[3:0]:
* Index of switch port. Must be between 0 and 15.
*/
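The layout above can be exercised with a short Python sketch (illustrative only, not kernel code). It mirrors the encode/decode macros for the split SUBVLAN field {VID[9], VID[5:4]}; the DIR value of 2 for RX VIDs is an assumption taken from mainline tag_8021q.c:

```python
def GENMASK(h, l):
    """Contiguous bitmask from bit h down to bit l, like the kernel macro."""
    return ((1 << (h - l + 1)) - 1) << l

DIR_MASK        = GENMASK(11, 10)
SUBVLAN_HI_MASK = GENMASK(9, 9)
SWITCH_ID_MASK  = GENMASK(8, 6)
SUBVLAN_LO_MASK = GENMASK(5, 4)
PORT_MASK       = GENMASK(3, 0)

def rx_vid(dir_, switch_id, port, subvlan=0):
    """Pack the fields into a 12-bit VID, in the spirit of
    dsa_8021q_rx_vid_subvlan(). The 3-bit subvlan is split into
    bit 2 (-> VID[9]) and bits 1:0 (-> VID[5:4])."""
    svl_hi = (subvlan & 0b100) >> 2
    svl_lo = subvlan & 0b011
    return (((dir_ << 10) & DIR_MASK) |
            ((svl_hi << 9) & SUBVLAN_HI_MASK) |
            ((switch_id << 6) & SWITCH_ID_MASK) |
            ((svl_lo << 4) & SUBVLAN_LO_MASK) |
            (port & PORT_MASK))

def rx_subvlan(vid):
    """Mirror of dsa_8021q_rx_subvlan(): reassemble {VID[9], VID[5:4]}."""
    svl_hi = (vid & SUBVLAN_HI_MASK) >> 9
    svl_lo = (vid & SUBVLAN_LO_MASK) >> 4
    return (svl_hi << 2) | svl_lo

# All 8 sub-VLAN values survive an encode/decode round trip:
for s in range(8):
    assert rx_subvlan(rx_vid(2, switch_id=5, port=3, subvlan=s)) == s
```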
@@ -54,6 +61,18 @@
#define DSA_8021Q_SWITCH_ID(x) (((x) << DSA_8021Q_SWITCH_ID_SHIFT) & \
DSA_8021Q_SWITCH_ID_MASK)
#define DSA_8021Q_SUBVLAN_HI_SHIFT 9
#define DSA_8021Q_SUBVLAN_HI_MASK GENMASK(9, 9)
#define DSA_8021Q_SUBVLAN_LO_SHIFT 4
#define DSA_8021Q_SUBVLAN_LO_MASK GENMASK(5, 4)
#define DSA_8021Q_SUBVLAN_HI(x) (((x) & GENMASK(2, 2)) >> 2)
#define DSA_8021Q_SUBVLAN_LO(x) ((x) & GENMASK(1, 0))
#define DSA_8021Q_SUBVLAN(x) \
(((DSA_8021Q_SUBVLAN_LO(x) << DSA_8021Q_SUBVLAN_LO_SHIFT) & \
DSA_8021Q_SUBVLAN_LO_MASK) | \
((DSA_8021Q_SUBVLAN_HI(x) << DSA_8021Q_SUBVLAN_HI_SHIFT) & \
DSA_8021Q_SUBVLAN_HI_MASK))
#define DSA_8021Q_PORT_SHIFT 0
#define DSA_8021Q_PORT_MASK GENMASK(3, 0)
#define DSA_8021Q_PORT(x) (((x) << DSA_8021Q_PORT_SHIFT) & \
@@ -79,6 +98,13 @@ u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid);
u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan)
{
return DSA_8021Q_DIR_RX | DSA_8021Q_SWITCH_ID(ds->index) |
DSA_8021Q_PORT(port) | DSA_8021Q_SUBVLAN(subvlan);
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid_subvlan);
/* Returns the decoded switch ID from the RX VID. */
int dsa_8021q_rx_switch_id(u16 vid)
{
@@ -93,6 +119,27 @@ int dsa_8021q_rx_source_port(u16 vid)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_source_port);
/* Returns the decoded subvlan from the RX VID. */
u16 dsa_8021q_rx_subvlan(u16 vid)
{
u16 svl_hi, svl_lo;
svl_hi = (vid & DSA_8021Q_SUBVLAN_HI_MASK) >>
DSA_8021Q_SUBVLAN_HI_SHIFT;
svl_lo = (vid & DSA_8021Q_SUBVLAN_LO_MASK) >>
DSA_8021Q_SUBVLAN_LO_SHIFT;
return (svl_hi << 2) | svl_lo;
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_subvlan);
bool vid_is_dsa_8021q(u16 vid)
{
return ((vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX ||
(vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_TX);
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q);
static int dsa_8021q_restore_pvid(struct dsa_switch *ds, int port)
{
struct bridge_vlan_info vinfo;
@@ -289,9 +336,9 @@ int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int port, bool enabled)
}
EXPORT_SYMBOL_GPL(dsa_port_setup_8021q_tagging);
int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, bool enabled)
static int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, bool enabled)
{
u16 rx_vid = dsa_8021q_rx_vid(ds, port);
@@ -301,7 +348,6 @@ int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
return dsa_8021q_vid_apply(other_ds, other_port, rx_vid,
BRIDGE_VLAN_INFO_UNTAGGED, enabled);
}
EXPORT_SYMBOL_GPL(dsa_8021q_crosschip_link_apply);
static int dsa_8021q_crosschip_link_add(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
@@ -362,7 +408,7 @@ static void dsa_8021q_crosschip_link_del(struct dsa_switch *ds,
*/
int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, struct net_device *br,
int other_port,
struct list_head *crosschip_links)
{
/* @other_upstream is how @other_ds reaches us. If we are part
@@ -378,12 +424,10 @@ int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
if (rc)
return rc;
if (!br_vlan_enabled(br)) {
rc = dsa_8021q_crosschip_link_apply(ds, port, other_ds,
other_port, true);
if (rc)
return rc;
}
rc = dsa_8021q_crosschip_link_apply(ds, port, other_ds,
other_port, true);
if (rc)
return rc;
rc = dsa_8021q_crosschip_link_add(ds, port, other_ds,
other_upstream,
@@ -391,20 +435,14 @@ int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
if (rc)
return rc;
if (!br_vlan_enabled(br)) {
rc = dsa_8021q_crosschip_link_apply(ds, port, other_ds,
other_upstream, true);
if (rc)
return rc;
}
return 0;
return dsa_8021q_crosschip_link_apply(ds, port, other_ds,
other_upstream, true);
}
EXPORT_SYMBOL_GPL(dsa_8021q_crosschip_bridge_join);
int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
struct dsa_switch *other_ds,
int other_port, struct net_device *br,
int other_port,
struct list_head *crosschip_links)
{
int other_upstream = dsa_upstream_port(other_ds, other_port);
@@ -424,14 +462,12 @@ int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
if (keep)
continue;
if (!br_vlan_enabled(br)) {
rc = dsa_8021q_crosschip_link_apply(ds, port,
other_ds,
other_port,
false);
if (rc)
return rc;
}
rc = dsa_8021q_crosschip_link_apply(ds, port,
other_ds,
other_port,
false);
if (rc)
return rc;
}
}
@@ -69,12 +69,25 @@ static inline bool sja1105_is_meta_frame(const struct sk_buff *skb)
return true;
}
static bool sja1105_can_use_vlan_as_tags(const struct sk_buff *skb)
{
struct vlan_ethhdr *hdr = vlan_eth_hdr(skb);
if (hdr->h_vlan_proto == htons(ETH_P_SJA1105))
return true;
if (hdr->h_vlan_proto != htons(ETH_P_8021Q))
return false;
return vid_is_dsa_8021q(ntohs(hdr->h_vlan_TCI) & VLAN_VID_MASK);
}
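The filter decision that sja1105_can_use_vlan_as_tags() makes can be sketched as follows (illustrative Python, not kernel code; ETH_P_DSA_8021Q is 0xDADB per if_ether.h, and the DIR values of 1/2 for the TX/RX ranges are assumed from mainline tag_8021q.c):

```python
ETH_P_SJA1105 = 0xDADB   # alias of ETH_P_DSA_8021Q
ETH_P_8021Q   = 0x8100

def vid_is_dsa_8021q(vid):
    """True if the VID falls in a dsa_8021q RX or TX range
    (DIR field in VID[11:10] is 1 or 2, by assumption)."""
    return (vid >> 10) & 0x3 in (1, 2)

def can_use_vlan_as_tags(tpid, vid):
    """Tagger RX filter: frames tagged with the dedicated TPID are always
    ours; 0x8100-tagged frames are ours only if the VID is a dsa_8021q VID."""
    if tpid == ETH_P_SJA1105:
        return True
    if tpid != ETH_P_8021Q:
        return False
    return vid_is_dsa_8021q(vid & 0xFFF)

# An ordinary bridge VLAN (say VID 5) under TPID 0x8100 is left alone,
# so IP termination through the bridge keeps working:
assert can_use_vlan_as_tags(0x8100, 5) is False
```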
/* This is the first time the tagger sees the frame on RX.
* Figure out if we can decode it.
*/
static bool sja1105_filter(const struct sk_buff *skb, struct net_device *dev)
{
if (!dsa_port_is_vlan_filtering(dev->dsa_ptr))
if (sja1105_can_use_vlan_as_tags(skb))
return true;
if (sja1105_is_link_local(skb))
return true;
@@ -96,6 +109,11 @@ static struct sk_buff *sja1105_defer_xmit(struct sja1105_port *sp,
return NULL;
}
static u16 sja1105_xmit_tpid(struct sja1105_port *sp)
{
return sp->xmit_tpid;
}
static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
@@ -111,15 +129,7 @@ static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
if (unlikely(sja1105_is_link_local(skb)))
return sja1105_defer_xmit(dp->priv, skb);
/* If we are under a vlan_filtering bridge, IP termination on
* switch ports based on 802.1Q tags is simply too brittle to
* be passable. So just defer to the dsa_slave_notag_xmit
* implementation.
*/
if (dsa_port_is_vlan_filtering(dp))
return skb;
return dsa_8021q_xmit(skb, netdev, ETH_P_SJA1105,
return dsa_8021q_xmit(skb, netdev, sja1105_xmit_tpid(dp->priv),
((pcp << VLAN_PRIO_SHIFT) | tx_vid));
}
@@ -244,6 +254,20 @@ static struct sk_buff
return skb;
}
static void sja1105_decode_subvlan(struct sk_buff *skb, u16 subvlan)
{
struct dsa_port *dp = dsa_slave_to_port(skb->dev);
struct sja1105_port *sp = dp->priv;
u16 vid = sp->subvlan_map[subvlan];
u16 vlan_tci;
if (vid == VLAN_N_VID)
return;
vlan_tci = (skb->priority << VLAN_PRIO_SHIFT) | vid;
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tci);
}
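The sub-VLAN decode step above boils down to a table lookup plus a VLAN TCI reconstruction. A minimal sketch, assuming VLAN_PRIO_SHIFT == 13 and VLAN_N_VID == 4096 as in the kernel's if_vlan.h (illustrative only):

```python
VLAN_PRIO_SHIFT = 13
VLAN_N_VID = 4096  # sentinel: sub-VLAN slot not mapped to any VID

def decode_subvlan(subvlan_map, skb_priority, subvlan):
    """Recover the original VLAN TCI from the per-port sub-VLAN map.
    Returns None when the slot is unused, in which case the skb is
    left untagged (mirroring the early return in the C code)."""
    vid = subvlan_map[subvlan]
    if vid == VLAN_N_VID:
        return None
    return (skb_priority << VLAN_PRIO_SHIFT) | vid

# Slot 0 maps to VID 100; slots 1..7 are unused:
subvlan_map = [100] + [VLAN_N_VID] * 7
assert decode_subvlan(subvlan_map, 5, 0) == (5 << VLAN_PRIO_SHIFT) | 100
assert decode_subvlan(subvlan_map, 5, 1) is None
```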
static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
struct net_device *netdev,
struct packet_type *pt)
@@ -253,12 +277,13 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
struct ethhdr *hdr;
u16 tpid, vid, tci;
bool is_link_local;
u16 subvlan = 0;
bool is_tagged;
bool is_meta;
hdr = eth_hdr(skb);
tpid = ntohs(hdr->h_proto);
is_tagged = (tpid == ETH_P_SJA1105);
is_tagged = (tpid == ETH_P_SJA1105 || tpid == ETH_P_8021Q);
is_link_local = sja1105_is_link_local(skb);
is_meta = sja1105_is_meta_frame(skb);
@@ -276,6 +301,7 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
source_port = dsa_8021q_rx_source_port(vid);
switch_id = dsa_8021q_rx_switch_id(vid);
skb->priority = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
subvlan = dsa_8021q_rx_subvlan(vid);
} else if (is_link_local) {
/* Management traffic path. Switch embeds the switch ID and
* port ID into bytes of the destination MAC, courtesy of
@@ -300,6 +326,9 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
return NULL;
}
if (subvlan)
sja1105_decode_subvlan(skb, subvlan);
return sja1105_rcv_meta_state_machine(skb, &meta, is_link_local,
is_meta);
}