Commit a62b70dd authored by David S. Miller

Merge branch 'switchdev_spring_cleanup'

Scott Feldman says:

====================
switchdev: spring cleanup

v7:

Address review comments:

 - [Jiri] split the br_setlink and br_dellink reverts into their own patches
 - [Jiri] some parameter cleanup of rocker's memory allocators
 - [Jiri] pass trans mode as formal parameter rather than hanging off of
     rocker_port.

v6:

Address review comments:

 - [Jiri] split a couple of patches into one-logical-change per patch
 - [Joe Perches] revert checkpatch -f changes for wrapped lines with long
     symbols.

v5:

Address review comments:

 - [Jiri] include Jiri's s/swdev/switchdev rename patches up front.
 - [Jiri] squash some patches.  Now setlink/dellink/getlink patches are in
     three parts: new implementation, convert drivers to new, delete old impl.
 - [Jiri] some minor variable renames
 - [Jiri] use BUG_ON rather than WARN when COMMIT phase fails when PREPARE
     phase said it was safe to come into the water.
 - [Simon] rocker: fix a few transaction prepare-commit cases that were wrong.
     This was the bulk of the changes in v5.

v4:

Well, it was a lot of work, but now the prepare-commit transaction model is as
davem advises: if prepare fails, abort the transaction.  The driver must do
resource reservations up front in the prepare phase and return those resources
if aborting.  The commit phase would use the reserved resources.  The good news
is the driver code (for rocker) now handles resource allocation failures better
by not leaving partial device or driver state.  This is a side-effect of the
prepare phase where state isn't modified; only validation of inputs and
resource reservations happen in the prepare phase.  Since we're supporting
setting attrs and add objs across lower devs in the stacked case, we need to
hold rtnl_lock (or ensure rtnl_lock is held) so lower devs don't move on us
during the prepare-commit transaction.  DSA driver code skips the prepare phase
and goes straight for the commit phase since no up-front allocations are done
and no device failures (that could be detected in the prepare phase) can
happen.

Remove NETIF_F_HW_SWITCH_OFFLOAD from rocker and the swdev_attr_set/get
wrappers.  DSA doesn't set NETIF_F_HW_SWITCH_OFFLOAD, so it can't be in
swdev_attr_set/get.  rocker doesn't need it; or rather can't support
NETIF_F_HW_SWITCH_OFFLOAD being set/cleared at run-time after the device
port is already up and offloading L2/L3.  NETIF_F_HW_SWITCH_OFFLOAD is still
left as a feature flag for drivers that can use it.

Drop the renaming patch for netdev_switch_notifier.  Other renames are a
result of moving to the attr get/set or obj add/del model.  Everything
but the netdev_switch_notifier is still prefixed with "swdev_".

v3:

Move to two-phase prepare-commit transaction model for attr set and obj add.
Driver gets a chance in the prepare phase to NACK the transaction due to lack
of resources or support in the device.

v2:

Address review comments:

 - [Jiri] squash a few related patches
 - [Roopa] don't remove NETIF_F_HW_SWITCH_OFFLOAD
 - [Roopa] address VLAN setlink/dellink
 - [Ronen] print warning if attr set revert fails

Not addressed:

 - Using something other than "swdev_" prefix
 - Vendor extensions

The patch set grew a bit to not only support port attr get/set but also add
support for port obj add/del.  Examples of port objs are VLAN, FDB entries, and
FIB entries.  The VLAN support now allows the swdev driver to get VLAN ranges
and flags like PVID and "untagged".  Sridhar will be adding FDB obj support
in a follow-on patch.

v1:

The main theme of this patch set is to clean up swdev in preparation for
new features or fixes to be added soon.  We have a pretty good idea now how
to handle stacked drivers in swdev, but there were some loose ends.  For
example, if a set failed in the middle of walking the lower devs, we would
leave the system in an undefined state...there was no way to recover back to
the previous state. Speaking of sets, we also recognize a pattern: most
swdev API accesses are gets or sets of port attributes, so go ahead and make
port attr get/set the central swdev API, and convert everything that is
set-ish/get-ish to this new API.

Features/fixes that should follow from this cleanup:

 - solve the duplicate pkt forwarding issue
 - get/set bridge attrs, like ageing_time, from/to device
 - get/set more bridge port attrs from/to device

There are some rename cleanups tagging along at the end, to give swdev
consistent naming.

And finally, some much-needed updates to the switchdev.txt documentation to
hopefully capture the state-of-the-art of swdev.  Hopefully, we can do a better
job keeping this document up-to-date.

Tested with rocker, of course, to make sure nothing functional broke.  There
are a couple of minor tweaks to DSA code for getting switch ID and setting STP
updates to use the new API, but not expecting any breakage there.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents a3eb95f8 4ceec22d
Ethernet switch device driver model (switchdev)
===============================================
Copyright (c) 2014 Jiri Pirko <jiri@resnulli.us>
Copyright (c) 2014-2015 Scott Feldman <sfeldma@gmail.com>

The Ethernet switch device driver model (switchdev) is an in-kernel driver
model for switch devices which offload the forwarding (data) plane from the
kernel.

Figure 1 is a block diagram showing the components of the switchdev model for
an example setup using a data-center-class switch ASIC chip. Other setups
with SR-IOV or soft switches, such as OVS, are possible.

[Figure 1: user-space tools talk to the kernel network stack over Netlink.
The switch driver (this document) registers port netdevs sw1p1..sw1p6 with
the network stack; a separate mgmt driver handles the management port (eth1).
The switch driver drives the switch device (sw1) over a HW bus (eg PCI), and
the device offloads the data path for front-panel ports p1..p6.]

                                   Fig 1.


Include Files
-------------
#include <linux/netdevice.h>
#include <net/switchdev.h>
Configuration
-------------
Use "depends on NET_SWITCHDEV" in the driver's Kconfig to ensure switchdev
model support is built for the driver.
Switch Ports
------------
On switchdev driver initialization, the driver will allocate and register a
struct net_device (using register_netdev()) for each enumerated physical switch
port, called the port netdev. A port netdev is the software representation of
the physical port and provides a conduit for control traffic to/from the
controller (the kernel) and the network, as well as an anchor point for higher
level constructs such as bridges, bonds, VLANs, tunnels, and L3 routers. Using
standard netdev tools (iproute2, ethtool, etc), the port netdev can also
provide the user access to the physical properties of the switch port, such
as PHY link state and I/O statistics.
There is (currently) no higher-level kernel object for the switch beyond the
port netdevs. All of the switchdev driver ops are netdev ops or switchdev ops.
A switch management port is outside the scope of the switchdev driver model.
Typically, the management port does not participate in the offloaded data plane
and is handled by a different driver, such as a NIC driver, on the management port
device.
Port Netdev Naming
^^^^^^^^^^^^^^^^^^
Udev rules should be used for port netdev naming, using some unique attribute
of the port as a key, for example the port MAC address or the port PHYS name.
Hard-coding of kernel netdev names within the driver is discouraged; let the
kernel pick the default netdev name, and let udev set the final name based on a
port attribute.
Using port PHYS name (ndo_get_phys_port_name) for the key is particularly
useful for dynamically-named ports where the device names its ports based on
external configuration. For example, if a physical 40G port is split logically
into 4 10G ports, resulting in 4 port netdevs, the device can give a unique
name for each port using port PHYS name. The udev rule would be:
SUBSYSTEM=="net", ACTION=="add", DRIVER="<driver>", ATTR{phys_port_name}!="", \
NAME="$attr{phys_port_name}"
Suggested naming convention is "swXpYsZ", where X is the switch name or ID, Y
is the port name or ID, and Z is the sub-port name or ID. For example, sw1p1s0
would be sub-port 0 on port 1 on switch 1.
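
For illustration, a driver supporting split ports might implement
ndo_get_phys_port_name roughly as below; this is a sketch only, and foo_port
and its fields are hypothetical, not taken from any in-tree driver.

static int foo_get_phys_port_name(struct net_device *dev,
                char *name, size_t len)
{
        struct foo_port *port = netdev_priv(dev);    /* hypothetical priv */
        int n;

        /* "pX" for a whole port, "pXsY" for sub-port Y of split port X */
        if (port->split)
                n = snprintf(name, len, "p%us%u", port->port_id, port->sub_id);
        else
                n = snprintf(name, len, "p%u", port->port_id);

        return (n < 0 || (size_t)n >= len) ? -EINVAL : 0;
}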
Switch ID
^^^^^^^^^
The switchdev driver must implement the switchdev op switchdev_port_attr_get for
SWITCHDEV_ATTR_PORT_PARENT_ID for each port netdev, returning the same physical ID
for each port of a switch. The ID must be unique between switches on the same
system. The ID does not need to be unique between switches on different
systems.
The switch ID is used to locate ports on a switch and to know if aggregated
ports belong to the same switch.
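
As a minimal sketch (hypothetical foo_* names; it assumes the attribute union
exposes the parent ID as a struct netdev_phys_item_id member named ppid, as the
rocker driver of this period does), the op could look like:

static int foo_port_attr_get(struct net_device *dev,
                struct switchdev_attr *attr)
{
        struct foo_port *port = netdev_priv(dev);

        switch (attr->id) {
        case SWITCHDEV_ATTR_PORT_PARENT_ID:
                /* same ID for every port netdev of this switch */
                attr->u.ppid.id_len = sizeof(port->chip->hw_id);
                memcpy(&attr->u.ppid.id, &port->chip->hw_id,
                       attr->u.ppid.id_len);
                break;
        default:
                return -EOPNOTSUPP;
        }

        return 0;
}

static const struct switchdev_ops foo_port_switchdev_ops = {
        .switchdev_port_attr_get = foo_port_attr_get,
};

Each port netdev would then point its switchdev_ops at this structure before
register_netdev().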
Port Features
^^^^^^^^^^^^^
NETIF_F_NETNS_LOCAL
If the switchdev driver (and device) only supports offloading of the default
network namespace (netns), the driver should set this feature flag to prevent
the port netdev from being moved out of the default netns. A netns-aware
driver/device would not set this flag and would be responsible for partitioning
hardware to preserve netns containment. This means hardware cannot forward
traffic from a port in one namespace to another port in another namespace.
Port Topology
^^^^^^^^^^^^^
The port netdevs representing the physical switch ports can be organized into
higher-level switching constructs. The default construct is a standalone
router port, used to offload L3 forwarding. Two or more ports can be bonded
together to form a LAG. Two or more ports (or LAGs) can be bridged to bridge
L2 networks. VLANs can be applied to sub-divide L2 networks. L2-over-L3
tunnels can be built on ports. These constructs are built using standard Linux
tools such as the bridge driver, the bonding/team drivers, and netlink-based
tools such as iproute2.
The switchdev driver can know a particular port's position in the topology by
monitoring NETDEV_CHANGEUPPER notifications. For example, a port moved into a
bond will see its upper master change. If that bond is moved into a bridge,
the bond's upper master will change. And so on. The driver will track such
movements to know what position a port is in the overall topology by
registering for netdevice events and acting on NETDEV_CHANGEUPPER.
L2 Forwarding Offload
---------------------
The idea is to offload the L2 data forwarding (switching) path from the kernel
to the switchdev device by mirroring bridge FDB entries down to the device. An
FDB entry is the {port, MAC, VLAN} tuple forwarding destination.
To offload L2 bridging, the switchdev driver/device should support:
- Static FDB entries installed on a bridge port
- Notification of learned/forgotten src mac/vlans from device
- STP state changes on the port
- VLAN flooding of multicast/broadcast and unknown unicast packets
Static FDB Entries
^^^^^^^^^^^^^^^^^^
The switchdev driver should implement ndo_fdb_add, ndo_fdb_del and ndo_fdb_dump
to support static FDB entries installed to the device. Static bridge FDB
entries are installed, for example, using iproute2 bridge cmd:
bridge fdb add ADDR dev DEV [vlan VID] [self]
Note: by default, the bridge does not filter on VLAN and only bridges untagged
traffic. To enable VLAN support, turn on VLAN filtering:
echo 1 >/sys/class/net/<bridge>/bridge/vlan_filtering
Notification of Learned/Forgotten Source MAC/VLANs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The switch device will learn/forget source MAC address/VLAN on ingress packets
and notify the switch driver of the mac/vlan/port tuples. The switch driver,
in turn, will notify the bridge driver using the switchdev notifier call:
err = call_switchdev_notifiers(val, dev, info);
Where val is SWITCHDEV_FDB_ADD when learning and SWITCHDEV_FDB_DEL when forgetting, and
info points to a struct switchdev_notifier_fdb_info. On SWITCHDEV_FDB_ADD, the bridge
driver will install the FDB entry into the bridge's FDB and mark the entry as
NTF_EXT_LEARNED. The iproute2 bridge command will label these entries
"offload":
$ bridge fdb
52:54:00:12:35:01 dev sw1p1 master br0 permanent
00:02:00:00:02:00 dev sw1p1 master br0 offload
00:02:00:00:02:00 dev sw1p1 self
52:54:00:12:35:02 dev sw1p2 master br0 permanent
00:02:00:00:03:00 dev sw1p2 master br0 offload
00:02:00:00:03:00 dev sw1p2 self
33:33:00:00:00:01 dev eth0 self permanent
01:00:5e:00:00:01 dev eth0 self permanent
33:33:ff:00:00:00 dev eth0 self permanent
01:80:c2:00:00:0e dev eth0 self permanent
33:33:00:00:00:01 dev br0 self permanent
01:00:5e:00:00:01 dev br0 self permanent
33:33:ff:12:35:01 dev br0 self permanent
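
A sketch of the driver side of this notification (assuming the fdb_info layout
used by rocker in this period, with addr/vid fields and an embedded notifier
info member; called from process context with rtnl held):

static void foo_port_fdb_learned(struct net_device *dev,
                const unsigned char *addr, u16 vid)
{
        struct switchdev_notifier_fdb_info info;

        info.addr = addr;    /* learned source MAC */
        info.vid = vid;      /* VLAN the MAC was seen on */

        /* bridge driver installs/refreshes an NTF_EXT_LEARNED FDB entry */
        call_switchdev_notifiers(SWITCHDEV_FDB_ADD, dev, &info.info);
}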
Learning on the port should be disabled on the bridge using the bridge command:
bridge link set dev DEV learning off
Learning on the device port should be enabled, as well as learning_sync:
bridge link set dev DEV learning on self
bridge link set dev DEV learning_sync on self
The learning_sync attribute enables syncing of the learned/forgotten FDB entry to
the bridge's FDB. It's possible, but not optimal, to enable learning on the
device port and on the bridge port, and disable learning_sync.
To support learning and learning_sync port attributes, the driver implements
switchdev op switchdev_port_attr_get/set for SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS. The driver
should initialize the attributes to the hardware defaults.
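
A hedged sketch of the set side, assuming the prepare/commit phase is carried
in attr->trans as in this series; the foo_* helpers are hypothetical:

static int foo_port_attr_set(struct net_device *dev,
                struct switchdev_attr *attr)
{
        struct foo_port *port = netdev_priv(dev);

        switch (attr->id) {
        case SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS:
                if (attr->trans == SWITCHDEV_TRANS_PREPARE)
                        /* validate only: can the device honor the requested
                         * BR_LEARNING/BR_LEARNING_SYNC combination?
                         */
                        return foo_port_check_brport_flags(port,
                                        attr->u.brport_flags);
                return foo_port_apply_brport_flags(port,
                                attr->u.brport_flags);
        default:
                return -EOPNOTSUPP;
        }
}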
FDB Ageing
^^^^^^^^^^
There are two FDB ageing models supported: 1) ageing by the device, and 2)
ageing by the kernel. Ageing by the device is preferred if many FDB entries
are supported. The driver calls call_switchdev_notifiers(SWITCHDEV_FDB_DEL, ...) to
age out the FDB entry. In this model, ageing by the kernel should be turned
off. XXX: how to turn off ageing in kernel on a per-port basis or otherwise
prevent the kernel from ageing out the FDB entry?
In the kernel ageing model, the standard bridge ageing mechanism is used to age
out stale FDB entries. To keep an FDB entry "alive", the driver should refresh
the FDB entry by calling call_switchdev_notifiers(SWITCHDEV_FDB_ADD, ...). The
notification will reset the FDB entry's last-used time to now. The driver
should rate limit refresh notifications, for example, no more than once a
second. If the FDB entry expires, ndo_fdb_del is called to remove the entry
from the device. XXX: this last part isn't currently correct: ndo_fdb_del isn't
called, so the stale entry remains in the device...this needs to get fixed.
FDB Flush
^^^^^^^^^
XXX: Unimplemented. Need to support FDB flush by bridge driver for port and
remove both static and learned FDB entries.
STP State Change on Port
^^^^^^^^^^^^^^^^^^^^^^^^
Internally or with a third-party STP protocol implementation (e.g. mstpd), the
bridge driver maintains the STP state for ports, and will notify the switch
driver of STP state change on a port using the switchdev op switchdev_port_attr_set for
SWITCHDEV_ATTR_PORT_STP_UPDATE.
State is one of BR_STATE_*. The switch driver can use STP state updates to
update ingress packet filter list for the port. For example, if port is
DISABLED, no packets should pass, but if port moves to BLOCKED, then STP BPDUs
and other IEEE 01:80:c2:xx:xx:xx link-local multicast packets can pass.
Note that STP BPDUs are untagged and STP state applies to all VLANs on the port,
so packet filters should be applied consistently across untagged and tagged
VLANs on the port.
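
For example, the SWITCHDEV_ATTR_PORT_STP_UPDATE case of the attr set sketch
above could, in the commit phase, hand the new state to a helper along these
lines (foo_port_pass_* are hypothetical filter-programming helpers, and the
mapping is deliberately simplified):

static int foo_port_stp_update(struct foo_port *port, u8 state)
{
        switch (state) {
        case BR_STATE_DISABLED:
                return foo_port_pass_none(port);  /* drop everything */
        case BR_STATE_BLOCKING:
        case BR_STATE_LISTENING:
                return foo_port_pass_ctrl(port);  /* BPDUs/link-local only */
        case BR_STATE_LEARNING:
        case BR_STATE_FORWARDING:
                return foo_port_pass_all(port);
        default:
                return -EINVAL;
        }
}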
Flooding L2 domain
^^^^^^^^^^^^^^^^^^
For a given L2 VLAN domain, the switch device should flood multicast/broadcast
and unknown unicast packets to all ports in the domain, if allowed by the port's
current STP state. The switch driver, knowing which ports are within which
vlan L2 domain, can program the switch device for flooding. The packet should
also be sent to the port netdev for processing by the bridge driver. The
bridge should not reflood the packet to the same ports the device flooded.
XXX: the mechanism to avoid duplicate flood packets is being discussed.
It is possible for the switch device to not handle flooding and push the
packets up to the bridge driver for flooding. This is not ideal as the number
of ports in the L2 domain scales, since the device is much more efficient at
flooding packets than software.
IGMP Snooping
^^^^^^^^^^^^^
XXX: complete this section
L3 routing
----------
Offloading L3 routing requires that the device be programmed with FIB entries from
the kernel, with the device doing the FIB lookup and forwarding. The device
does a longest prefix match (LPM) on FIB entries matching route prefix and
forwards the packet to the matching FIB entry's nexthop(s) egress ports. To
program the device, the switchdev driver is called with add/delete ops for IPv4
and IPv6 FIB entries. For IPv4, the driver implements switchdev ops:
int (*switchdev_fib_ipv4_add)(struct net_device *dev,
                              __be32 dst, int dst_len,
                              struct fib_info *fi,
                              u8 tos, u8 type,
                              u32 nlflags, u32 tb_id);

int (*switchdev_fib_ipv4_del)(struct net_device *dev,
                              __be32 dst, int dst_len,
                              struct fib_info *fi,
                              u8 tos, u8 type,
                              u32 tb_id);
to add/delete IPv4 dst/dst_len prefix on table tb_id. The *fi structure holds
details on the route and route's nexthops. *dev is one of the port netdevs
mentioned in the route's nexthop list. If the output port netdevs referenced
in the route's nexthop list don't all have the same switch ID, the driver is
not called to add/delete the FIB entry.
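
An illustrative sketch of the IPv4 add op (foo_lpm_add is hypothetical); the
route's nexthops are available in fi->fib_nh[0 .. fi->fib_nhs - 1]:

static int foo_fib_ipv4_add(struct net_device *dev,
                __be32 dst, int dst_len,
                struct fib_info *fi,
                u8 tos, u8 type,
                u32 nlflags, u32 tb_id)
{
        struct foo_port *port = netdev_priv(dev);

        /* sketch: only offload plain unicast routes */
        if (type != RTN_UNICAST)
                return -EOPNOTSUPP;

        /* program an LPM entry for dst/dst_len in table tb_id, pointing at
         * a nexthop group built from the route's nexthop list
         */
        return foo_lpm_add(port, dst, dst_len, tb_id, fi);
}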
Routes offloaded to the device are labeled with "offload" in the ip route
listing:
$ ip route show
default via 192.168.0.2 dev eth0
11.0.0.0/30 dev sw1p1 proto kernel scope link src 11.0.0.2 offload
11.0.0.4/30 via 11.0.0.1 dev sw1p1 proto zebra metric 20 offload
11.0.0.8/30 dev sw1p2 proto kernel scope link src 11.0.0.10 offload
11.0.0.12/30 via 11.0.0.9 dev sw1p2 proto zebra metric 20 offload
12.0.0.2 proto zebra metric 30 offload
nexthop via 11.0.0.1 dev sw1p1 weight 1
nexthop via 11.0.0.9 dev sw1p2 weight 1
12.0.0.3 via 11.0.0.1 dev sw1p1 proto zebra metric 20 offload
12.0.0.4 via 11.0.0.9 dev sw1p2 proto zebra metric 20 offload
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.15
XXX: add/del IPv6 FIB API
Nexthop Resolution
^^^^^^^^^^^^^^^^^^
The FIB entry's nexthop list contains the nexthop tuple (gateway, dev), but for
the switch device to forward the packet with the correct dst mac address, the
nexthop gateways must be resolved to the neighbor's mac address. Neighbor mac
address discovery comes via the ARP (or ND) process and is available via the
arp_tbl neighbor table. To resolve the route's nexthop gateways, the driver
should trigger the kernel's neighbor resolution process. See the rocker
driver's rocker_port_ipv4_resolve() for an example.
The driver can monitor for updates to arp_tbl using the netevent notifier
NETEVENT_NEIGH_UPDATE. The device can be programmed with resolved nexthops
for the routes as arp_tbl updates.
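
A minimal sketch of that monitoring (foo_neigh_update is a hypothetical helper
that reprograms the nexthop's destination MAC from n->ha for ports owned by
this driver):

static int foo_netevent_event(struct notifier_block *nb,
                unsigned long event, void *ptr)
{
        struct neighbour *n = ptr;

        switch (event) {
        case NETEVENT_NEIGH_UPDATE:
                if (n->tbl != &arp_tbl)
                        return NOTIFY_DONE;
                /* if n->dev is one of our ports, push the (now resolved)
                 * neighbour MAC down to the device
                 */
                foo_neigh_update(n->dev, n);
                break;
        }

        return NOTIFY_DONE;
}

static struct notifier_block foo_netevent_nb = {
        .notifier_call = foo_netevent_event,
};

/* at driver init: register_netevent_notifier(&foo_netevent_nb); */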
@@ -1015,10 +1015,7 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
         netdev_features_t mask;
         struct slave *slave;
 
-        /* If any slave has the offload feature flag set,
-         * set the offload flag on the bond.
-         */
-        mask = features | NETIF_F_HW_SWITCH_OFFLOAD;
+        mask = features;
 
         features &= ~NETIF_F_ONE_FOR_ALL;
         features |= NETIF_F_ALL_FOR_ALL;
@@ -4039,8 +4036,9 @@ static const struct net_device_ops bond_netdev_ops = {
         .ndo_add_slave = bond_enslave,
         .ndo_del_slave = bond_release,
         .ndo_fix_features = bond_fix_features,
-        .ndo_bridge_setlink = ndo_dflt_netdev_switch_port_bridge_setlink,
-        .ndo_bridge_dellink = ndo_dflt_netdev_switch_port_bridge_dellink,
+        .ndo_bridge_setlink = switchdev_port_bridge_setlink,
+        .ndo_bridge_getlink = switchdev_port_bridge_getlink,
+        .ndo_bridge_dellink = switchdev_port_bridge_dellink,
         .ndo_features_check = passthru_features_check,
 };
@@ -181,7 +181,7 @@ struct rocker_desc_info {
         size_t data_size;
         size_t tlv_size;
         struct rocker_desc *desc;
-        DEFINE_DMA_UNMAP_ADDR(mapaddr);
+        dma_addr_t mapaddr;
 };
 
 struct rocker_dma_ring_info {
@@ -225,6 +225,7 @@ struct rocker_port {
         struct napi_struct napi_rx;
         struct rocker_dma_ring_info tx_ring;
         struct rocker_dma_ring_info rx_ring;
+        struct list_head trans_mem;
 };
 
 struct rocker {
@@ -236,21 +237,21 @@ struct rocker {
         struct {
                 u64 id;
         } hw;
-        spinlock_t cmd_ring_lock;
+        spinlock_t cmd_ring_lock; /* for cmd ring accesses */
         struct rocker_dma_ring_info cmd_ring;
         struct rocker_dma_ring_info event_ring;
         DECLARE_HASHTABLE(flow_tbl, 16);
-        spinlock_t flow_tbl_lock;
+        spinlock_t flow_tbl_lock; /* for flow tbl accesses */
         u64 flow_tbl_next_cookie;
         DECLARE_HASHTABLE(group_tbl, 16);
-        spinlock_t group_tbl_lock;
+        spinlock_t group_tbl_lock; /* for group tbl accesses */
         DECLARE_HASHTABLE(fdb_tbl, 16);
-        spinlock_t fdb_tbl_lock;
+        spinlock_t fdb_tbl_lock; /* for fdb tbl accesses */
         unsigned long internal_vlan_bitmap[ROCKER_INTERNAL_VLAN_BITMAP_LEN];
         DECLARE_HASHTABLE(internal_vlan_tbl, 8);
-        spinlock_t internal_vlan_tbl_lock;
+        spinlock_t internal_vlan_tbl_lock; /* for vlan tbl accesses */
         DECLARE_HASHTABLE(neigh_tbl, 16);
-        spinlock_t neigh_tbl_lock;
+        spinlock_t neigh_tbl_lock; /* for neigh tbl accesses */
         u32 neigh_tbl_next_index;
 };
@@ -325,16 +326,83 @@ static bool rocker_port_is_bridged(struct rocker_port *rocker_port)
         return !!rocker_port->bridge_dev;
 }
 
+static void *__rocker_port_mem_alloc(struct rocker_port *rocker_port,
+                enum switchdev_trans trans, size_t size)
+{
+        struct list_head *elem = NULL;
+
+        /* If in transaction prepare phase, allocate the memory
+         * and enqueue it on a per-port list.  If in transaction
+         * commit phase, dequeue the memory from the per-port list
+         * rather than re-allocating the memory.  The idea is the
+         * driver code paths for prepare and commit are identical
+         * so the memory allocated in the prepare phase is the
+         * memory used in the commit phase.
+         */
+
+        switch (trans) {
+        case SWITCHDEV_TRANS_PREPARE:
+                elem = kzalloc(size + sizeof(*elem), GFP_KERNEL);
+                if (!elem)
+                        return NULL;
+                list_add_tail(elem, &rocker_port->trans_mem);
+                break;
+        case SWITCHDEV_TRANS_COMMIT:
+                BUG_ON(list_empty(&rocker_port->trans_mem));
+                elem = rocker_port->trans_mem.next;
+                list_del_init(elem);
+                break;
+        case SWITCHDEV_TRANS_NONE:
+                elem = kzalloc(size + sizeof(*elem), GFP_KERNEL);
+                if (elem)
+                        INIT_LIST_HEAD(elem);
+                break;
+        default:
+                break;
+        }
+
+        return elem ? elem + 1 : NULL;
+}
+
+static void *rocker_port_kzalloc(struct rocker_port *rocker_port,
+                enum switchdev_trans trans, size_t size)
+{
+        return __rocker_port_mem_alloc(rocker_port, trans, size);
+}
+
+static void *rocker_port_kcalloc(struct rocker_port *rocker_port,
+                enum switchdev_trans trans, size_t n,
+                size_t size)
+{
+        return __rocker_port_mem_alloc(rocker_port, trans, n * size);
+}
+
+static void rocker_port_kfree(struct rocker_port *rocker_port,
+                enum switchdev_trans trans, const void *mem)
+{
+        struct list_head *elem;
+
+        /* Frees are ignored if in transaction prepare phase.  The
+         * memory remains on the per-port list until freed in the
+         * commit phase.
+         */
+
+        if (trans == SWITCHDEV_TRANS_PREPARE)
+                return;
+
+        elem = (struct list_head *)mem - 1;
+        BUG_ON(!list_empty(elem));
+        kfree(elem);
+}
+
 struct rocker_wait {
         wait_queue_head_t wait;
         bool done;
-        bool nowait;
 };
 
 static void rocker_wait_reset(struct rocker_wait *wait)
 {
         wait->done = false;
-        wait->nowait = false;
 }
 
 static void rocker_wait_init(struct rocker_wait *wait)
@@ -343,20 +411,23 @@ static void rocker_wait_init(struct rocker_wait *wait)
         rocker_wait_reset(wait);
 }
 
-static struct rocker_wait *rocker_wait_create(gfp_t gfp)
+static struct rocker_wait *rocker_wait_create(struct rocker_port *rocker_port,
+                enum switchdev_trans trans)
 {
         struct rocker_wait *wait;
 
-        wait = kmalloc(sizeof(*wait), gfp);
+        wait = rocker_port_kzalloc(rocker_port, trans, sizeof(*wait));
         if (!wait)
                 return NULL;
         rocker_wait_init(wait);
         return wait;
 }
 
-static void rocker_wait_destroy(struct rocker_wait *work)
+static void rocker_wait_destroy(struct rocker_port *rocker_port,
+                enum switchdev_trans trans,
+                struct rocker_wait *wait)
 {
-        kfree(work);
+        rocker_port_kfree(rocker_port, trans, wait);
 }
 
 static bool rocker_wait_event_timeout(struct rocker_wait *wait,
@@ -1317,12 +1388,7 @@ static irqreturn_t rocker_cmd_irq_handler(int irq, void *dev_id)
         spin_lock(&rocker->cmd_ring_lock);
         while ((desc_info = rocker_desc_tail_get(&rocker->cmd_ring))) {
                 wait = rocker_desc_cookie_ptr_get(desc_info);
-                if (wait->nowait) {
-                        rocker_desc_gen_clear(desc_info);
-                        rocker_wait_destroy(wait);
-                } else {
-                        rocker_wait_wake_up(wait);
-                }
+                rocker_wait_wake_up(wait);
                 credits++;
         }
         spin_unlock(&rocker->cmd_ring_lock);
@@ -1374,22 +1440,44 @@ static int rocker_event_link_change(struct rocker *rocker,
 }
 
 #define ROCKER_OP_FLAG_REMOVE BIT(0)
-#define ROCKER_OP_FLAG_NOWAIT BIT(1)
-#define ROCKER_OP_FLAG_LEARNED BIT(2)
-#define ROCKER_OP_FLAG_REFRESH BIT(3)
+#define ROCKER_OP_FLAG_LEARNED BIT(1)
+#define ROCKER_OP_FLAG_REFRESH BIT(2)
 
 static int rocker_port_fdb(struct rocker_port *rocker_port,
+                enum switchdev_trans trans,
                 const unsigned char *addr,
                 __be16 vlan_id, int flags);
 
+struct rocker_mac_vlan_seen_work {
+        struct work_struct work;
+        struct rocker_port *rocker_port;
+        int flags;
+        unsigned char addr[ETH_ALEN];
+        __be16 vlan_id;
+};
+
+static void rocker_event_mac_vlan_seen_work(struct work_struct *work)
+{
+        struct rocker_mac_vlan_seen_work *sw =
+                container_of(work, struct rocker_mac_vlan_seen_work, work);
+
+        rtnl_lock();
+        rocker_port_fdb(sw->rocker_port, SWITCHDEV_TRANS_NONE,
+                        sw->addr, sw->vlan_id, sw->flags);
+        rtnl_unlock();
+
+        kfree(work);
+}
+
 static int rocker_event_mac_vlan_seen(struct rocker *rocker,
                 const struct rocker_tlv *info)
 {
+        struct rocker_mac_vlan_seen_work *sw;
         struct rocker_tlv *attrs[ROCKER_TLV_EVENT_MAC_VLAN_MAX + 1];
         unsigned int port_number;
         struct rocker_port *rocker_port;
         unsigned char *addr;
-        int flags = ROCKER_OP_FLAG_NOWAIT | ROCKER_OP_FLAG_LEARNED;
+        int flags = ROCKER_OP_FLAG_LEARNED;
         __be16 vlan_id;
 
         rocker_tlv_parse_nested(attrs, ROCKER_TLV_EVENT_MAC_VLAN_MAX, info);
@@ -1411,7 +1499,20 @@ static int rocker_event_mac_vlan_seen(struct rocker *rocker,
             rocker_port->stp_state != BR_STATE_FORWARDING)
                 return 0;
 
-        return rocker_port_fdb(rocker_port, addr, vlan_id, flags);
+        sw = kmalloc(sizeof(*sw), GFP_ATOMIC);
+        if (!sw)
+                return -ENOMEM;
+
+        INIT_WORK(&sw->work, rocker_event_mac_vlan_seen_work);
+
+        sw->rocker_port = rocker_port;
+        sw->flags = flags;
+        ether_addr_copy(sw->addr, addr);
+        sw->vlan_id = vlan_id;
+
+        schedule_work(&sw->work);
+
+        return 0;
 }
 
 static int rocker_event_process(struct rocker *rocker,
@@ -1494,41 +1595,44 @@ typedef int (*rocker_cmd_cb_t)(struct rocker *rocker,
 
 static int rocker_cmd_exec(struct rocker *rocker,
                 struct rocker_port *rocker_port,
+                enum switchdev_trans trans,
                 rocker_cmd_cb_t prepare, void *prepare_priv,
-                rocker_cmd_cb_t process, void *process_priv,
-                bool nowait)
+                rocker_cmd_cb_t process, void *process_priv)
 {
         struct rocker_desc_info *desc_info;
         struct rocker_wait *wait;
         unsigned long flags;
         int err;
 
-        wait = rocker_wait_create(nowait ? GFP_ATOMIC : GFP_KERNEL);
+        wait = rocker_wait_create(rocker_port, trans);
         if (!wait)
                 return -ENOMEM;
-        wait->nowait = nowait;
 
         spin_lock_irqsave(&rocker->cmd_ring_lock, flags);
         desc_info = rocker_desc_head_get(&rocker->cmd_ring);
         if (!desc_info) {
                 spin_unlock_irqrestore(&rocker->cmd_ring_lock, flags);
                 err = -EAGAIN;
                 goto out;
         }
         err = prepare(rocker, rocker_port, desc_info, prepare_priv);
         if (err) {
                 spin_unlock_irqrestore(&rocker->cmd_ring_lock, flags);
                 goto out;
         }
         rocker_desc_cookie_ptr_set(desc_info, wait);
-        rocker_desc_head_set(rocker, &rocker->cmd_ring, desc_info);
-        spin_unlock_irqrestore(&rocker->cmd_ring_lock, flags);
 
-        if (nowait)
-                return 0;
+        if (trans != SWITCHDEV_TRANS_PREPARE)
+                rocker_desc_head_set(rocker, &rocker->cmd_ring, desc_info);
 
-        if (!rocker_wait_event_timeout(wait, HZ / 10))
-                return -EIO;
+        spin_unlock_irqrestore(&rocker->cmd_ring_lock, flags);
+
+        if (trans != SWITCHDEV_TRANS_PREPARE)
+                if (!rocker_wait_event_timeout(wait, HZ / 10))
+                        return -EIO;
 
         err = rocker_desc_err(desc_info);
         if (err)
@@ -1539,7 +1643,7 @@ static int rocker_cmd_exec(struct rocker *rocker,
         rocker_desc_gen_clear(desc_info);
 out:
-        rocker_wait_destroy(wait);
+        rocker_wait_destroy(rocker_port, trans, wait);
         return err;
 }
@@ -1762,41 +1866,46 @@ static int rocker_cmd_get_port_settings_ethtool(struct rocker_port *rocker_port,
                 struct ethtool_cmd *ecmd)
 {
         return rocker_cmd_exec(rocker_port->rocker, rocker_port,
+                               SWITCHDEV_TRANS_NONE,
                                rocker_cmd_get_port_settings_prep, NULL,
                                rocker_cmd_get_port_settings_ethtool_proc,
-                               ecmd, false);
+                               ecmd);
 }
 
 static int rocker_cmd_get_port_settings_macaddr(struct rocker_port *rocker_port,
                 unsigned char *macaddr)
 {
         return rocker_cmd_exec(rocker_port->rocker, rocker_port,
+                               SWITCHDEV_TRANS_NONE,
                                rocker_cmd_get_port_settings_prep, NULL,
                                rocker_cmd_get_port_settings_macaddr_proc,
-                               macaddr, false);
+                               macaddr);
 }
 
 static int rocker_cmd_set_port_settings_ethtool(struct rocker_port *rocker_port,
                 struct ethtool_cmd *ecmd)
 {
         return rocker_cmd_exec(rocker_port->rocker, rocker_port,
+                               SWITCHDEV_TRANS_NONE,
                                rocker_cmd_set_port_settings_ethtool_prep,
-                               ecmd, NULL, NULL, false);
+                               ecmd, NULL, NULL);
 }
 
 static int rocker_cmd_set_port_settings_macaddr(struct rocker_port *rocker_port,
                 unsigned char *macaddr)
 {
         return rocker_cmd_exec(rocker_port->rocker, rocker_port,
+                               SWITCHDEV_TRANS_NONE,
                                rocker_cmd_set_port_settings_macaddr_prep,
-                               macaddr, NULL, NULL, false);
+                               macaddr, NULL, NULL);
 }
 
-static int rocker_port_set_learning(struct rocker_port *rocker_port)
+static int rocker_port_set_learning(struct rocker_port *rocker_port,
+                enum switchdev_trans trans)
 {
-        return rocker_cmd_exec(rocker_port->rocker, rocker_port,
+        return rocker_cmd_exec(rocker_port->rocker, rocker_port, trans,
                                rocker_cmd_set_port_learning_prep,
-                               NULL, NULL, NULL, false);
+                               NULL, NULL, NULL);
 }
 
 static int rocker_cmd_flow_tbl_add_ig_port(struct rocker_desc_info *desc_info,
@@ -2308,8 +2417,8 @@ rocker_flow_tbl_find(struct rocker *rocker, struct rocker_flow_tbl_entry *match)
 }
 
 static int rocker_flow_tbl_add(struct rocker_port *rocker_port,
-                struct rocker_flow_tbl_entry *match,
-                bool nowait)
+                enum switchdev_trans trans,
+                struct rocker_flow_tbl_entry *match)
 {
         struct rocker *rocker = rocker_port->rocker;
         struct rocker_flow_tbl_entry *found;
@@ -2324,8 +2433,9 @@ static int rocker_flow_tbl_add(struct rocker_port *rocker_port,
         if (found) {
                 match->cookie = found->cookie;
-                hash_del(&found->entry);
-                kfree(found);
+                if (trans != SWITCHDEV_TRANS_PREPARE)
+                        hash_del(&found->entry);
+                rocker_port_kfree(rocker_port, trans, found);
                 found = match;
                 found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_MOD;
         } else {
@@ -2334,18 +2444,19 @@ static int rocker_flow_tbl_add(struct rocker_port *rocker_port,
                 found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_ADD;
         }
 
-        hash_add(rocker->flow_tbl, &found->entry, found->key_crc32);
+        if (trans != SWITCHDEV_TRANS_PREPARE)
+                hash_add(rocker->flow_tbl, &found->entry, found->key_crc32);
 
         spin_unlock_irqrestore(&rocker->flow_tbl_lock, flags);
 
-        return rocker_cmd_exec(rocker, rocker_port,
+        return rocker_cmd_exec(rocker, rocker_port, trans,
                                rocker_cmd_flow_tbl_add,
-                               found, NULL, NULL, nowait);
+                               found, NULL, NULL);
 }
 
 static int rocker_flow_tbl_del(struct rocker_port *rocker_port,
-                struct rocker_flow_tbl_entry *match,
-                bool nowait)
+                enum switchdev_trans trans,
+                struct rocker_flow_tbl_entry *match)
 {
         struct rocker *rocker = rocker_port->rocker;
         struct rocker_flow_tbl_entry *found;
@@ -2360,47 +2471,43 @@ static int rocker_flow_tbl_del(struct rocker_port *rocker_port,
         found = rocker_flow_tbl_find(rocker, match);
 
         if (found) {
-                hash_del(&found->entry);
+                if (trans != SWITCHDEV_TRANS_PREPARE)
+                        hash_del(&found->entry);
                 found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_DEL;
         }
 
         spin_unlock_irqrestore(&rocker->flow_tbl_lock, flags);
 
-        kfree(match);
+        rocker_port_kfree(rocker_port, trans, match);
 
         if (found) {
-                err = rocker_cmd_exec(rocker, rocker_port,
+                err = rocker_cmd_exec(rocker, rocker_port, trans,
                                       rocker_cmd_flow_tbl_del,
-                                      found, NULL, NULL, nowait);
-                kfree(found);
+                                      found, NULL, NULL);
+                rocker_port_kfree(rocker_port, trans, found);
         }
 
         return err;
 }
 
-static gfp_t rocker_op_flags_gfp(int flags)
-{
-        return flags & ROCKER_OP_FLAG_NOWAIT ? GFP_ATOMIC : GFP_KERNEL;
-}
-
 static int rocker_flow_tbl_do(struct rocker_port *rocker_port,
-                int flags, struct rocker_flow_tbl_entry *entry)
+                enum switchdev_trans trans, int flags,
+                struct rocker_flow_tbl_entry *entry)
 {
-        bool nowait = flags & ROCKER_OP_FLAG_NOWAIT;
-
         if (flags & ROCKER_OP_FLAG_REMOVE)
-                return rocker_flow_tbl_del(rocker_port, entry, nowait);
+                return rocker_flow_tbl_del(rocker_port, trans, entry);
         else
-                return rocker_flow_tbl_add(rocker_port, entry, nowait);
+                return rocker_flow_tbl_add(rocker_port, trans, entry);
 }
 
 static int rocker_flow_tbl_ig_port(struct rocker_port *rocker_port,
-                int flags, u32 in_pport, u32 in_pport_mask,
+                enum switchdev_trans trans, int flags,
+                u32 in_pport, u32 in_pport_mask,
                 enum rocker_of_dpa_table_id goto_tbl)
 {
         struct rocker_flow_tbl_entry *entry;
 
-        entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags));
+        entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
         if (!entry)
                 return -ENOMEM;
@@ -2410,18 +2517,19 @@ static int rocker_flow_tbl_ig_port(struct rocker_port *rocker_port,
         entry->key.ig_port.in_pport_mask = in_pport_mask;
         entry->key.ig_port.goto_tbl = goto_tbl;
 
-        return rocker_flow_tbl_do(rocker_port, flags, entry);
+        return rocker_flow_tbl_do(rocker_port, trans, flags, entry);
 }
 
 static int rocker_flow_tbl_vlan(struct rocker_port *rocker_port,
-                int flags, u32 in_pport,
-                __be16 vlan_id, __be16 vlan_id_mask,
+                enum switchdev_trans trans, int flags,
+                u32 in_pport, __be16 vlan_id,
+                __be16 vlan_id_mask,
                 enum rocker_of_dpa_table_id goto_tbl,
                 bool untagged, __be16 new_vlan_id)
 {
         struct rocker_flow_tbl_entry *entry;
 
-        entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags));
+        entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
         if (!entry)
                 return -ENOMEM;
@@ -2435,10 +2543,11 @@ static int rocker_flow_tbl_vlan(struct rocker_port *rocker_port,
         entry->key.vlan.untagged = untagged;
         entry->key.vlan.new_vlan_id = new_vlan_id;
 
-        return rocker_flow_tbl_do(rocker_port, flags, entry);
+        return rocker_flow_tbl_do(rocker_port, trans, flags, entry);
 }
 
 static int rocker_flow_tbl_term_mac(struct rocker_port *rocker_port,
+                enum switchdev_trans trans,
                 u32 in_pport, u32 in_pport_mask,
                 __be16 eth_type, const u8 *eth_dst,
                 const u8 *eth_dst_mask, __be16 vlan_id,
@@ -2447,7 +2556,7 @@ static int rocker_flow_tbl_term_mac(struct rocker_port *rocker_port,
 {
         struct rocker_flow_tbl_entry *entry;
 
-        entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags));
+        entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
         if (!entry)
                 return -ENOMEM;
@@ -2471,11 +2580,11 @@ static int rocker_flow_tbl_term_mac(struct rocker_port *rocker_port,
         entry->key.term_mac.vlan_id_mask = vlan_id_mask;
         entry->key.term_mac.copy_to_cpu = copy_to_cpu;
 
-        return rocker_flow_tbl_do(rocker_port, flags, entry);
+        return rocker_flow_tbl_do(rocker_port, trans, flags, entry);
 }
 
 static int rocker_flow_tbl_bridge(struct rocker_port *rocker_port,
-                int flags,
+                enum switchdev_trans trans, int flags,
                 const u8 *eth_dst, const u8 *eth_dst_mask,
                 __be16 vlan_id, u32 tunnel_id,
                 enum rocker_of_dpa_table_id goto_tbl,
@@ -2487,7 +2596,7 @@ static int rocker_flow_tbl_bridge(struct rocker_port *rocker_port,
         bool dflt = !eth_dst || (eth_dst && eth_dst_mask);
         bool wild = false;
 
-        entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags));
+        entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
         if (!entry)
                 return -ENOMEM;
@@ -2525,10 +2634,11 @@ static int rocker_flow_tbl_bridge(struct rocker_port *rocker_port,
         entry->key.bridge.group_id = group_id;
         entry->key.bridge.copy_to_cpu = copy_to_cpu;
 
-        return rocker_flow_tbl_do(rocker_port, flags, entry);
+        return rocker_flow_tbl_do(rocker_port, trans, flags, entry);
 }
 
 static int rocker_flow_tbl_ucast4_routing(struct rocker_port *rocker_port,
+                enum switchdev_trans trans,
                 __be16 eth_type, __be32 dst,
                 __be32 dst_mask, u32 priority,
                 enum rocker_of_dpa_table_id goto_tbl,
@@ -2536,7 +2646,7 @@ static int rocker_flow_tbl_ucast4_routing(struct rocker_port *rocker_port,
 {
         struct rocker_flow_tbl_entry *entry;
 
-        entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags));
+        entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
         if (!entry)
                 return -ENOMEM;
@@ -2550,24 +2660,23 @@ static int rocker_flow_tbl_ucast4_routing(struct rocker_port *rocker_port,
         entry->key_len = offsetof(struct rocker_flow_tbl_key,
                                   ucast_routing.group_id);
 
-        return rocker_flow_tbl_do(rocker_port, flags, entry);
+        return rocker_flow_tbl_do(rocker_port, trans, flags, entry);
 }
 
 static int rocker_flow_tbl_acl(struct rocker_port *rocker_port,
-                int flags, u32 in_pport,
-                u32 in_pport_mask,
+                enum switchdev_trans trans, int flags,
+                u32 in_pport, u32 in_pport_mask,
                 const u8 *eth_src, const u8 *eth_src_mask,
                 const u8 *eth_dst, const u8 *eth_dst_mask,
-                __be16 eth_type,
-                __be16 vlan_id, __be16 vlan_id_mask,
-                u8 ip_proto, u8 ip_proto_mask,
-                u8 ip_tos, u8 ip_tos_mask,
+                __be16 eth_type, __be16 vlan_id,
+                __be16 vlan_id_mask, u8 ip_proto,
+                u8 ip_proto_mask, u8 ip_tos, u8 ip_tos_mask,
                 u32 group_id)
 {
         u32 priority;
         struct rocker_flow_tbl_entry *entry;
 
-        entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags));
+        entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
         if (!entry)
                 return -ENOMEM;
@@ -2602,7 +2711,7 @@ static int rocker_flow_tbl_acl(struct rocker_port *rocker_port,
         entry->key.acl.ip_tos_mask = ip_tos_mask;
         entry->key.acl.group_id = group_id;
 
-        return rocker_flow_tbl_do(rocker_port, flags, entry);
+        return rocker_flow_tbl_do(rocker_port, trans, flags, entry);
 }
 
 static struct rocker_group_tbl_entry *
@@ -2620,22 +2729,24 @@ rocker_group_tbl_find(struct rocker *rocker,
         return NULL;
 }
 
-static void rocker_group_tbl_entry_free(struct rocker_group_tbl_entry *entry)
+static void rocker_group_tbl_entry_free(struct rocker_port *rocker_port,
+                enum switchdev_trans trans,
+                struct rocker_group_tbl_entry *entry)
 {
         switch (ROCKER_GROUP_TYPE_GET(entry->group_id)) {
         case ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD:
         case ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST:
-                kfree(entry->group_ids);
+                rocker_port_kfree(rocker_port, trans, entry->group_ids);
                 break;
         default:
                 break;
         }
-        kfree(entry);
+        rocker_port_kfree(rocker_port, trans, entry);
 }
 
 static int rocker_group_tbl_add(struct rocker_port *rocker_port,
-                struct rocker_group_tbl_entry *match,
-                bool nowait)
+                enum switchdev_trans trans,
+                struct rocker_group_tbl_entry *match)
 {
         struct rocker *rocker = rocker_port->rocker;
         struct rocker_group_tbl_entry *found;
@@ -2646,8 +2757,9 @@ static int rocker_group_tbl_add(struct rocker_port *rocker_port,
         found = rocker_group_tbl_find(rocker, match);
 
         if (found) {
-                hash_del(&found->entry);
-                rocker_group_tbl_entry_free(found);
+                if (trans != SWITCHDEV_TRANS_PREPARE)
+                        hash_del(&found->entry);
+                rocker_group_tbl_entry_free(rocker_port, trans, found);
                 found = match;
                 found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_MOD;
         } else {
@@ -2655,18 +2767,19 @@ static int rocker_group_tbl_add(struct rocker_port *rocker_port,
                 found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_ADD;
         }
 
-        hash_add(rocker->group_tbl, &found->entry, found->group_id);
+        if (trans != SWITCHDEV_TRANS_PREPARE)
+                hash_add(rocker->group_tbl, &found->entry, found->group_id);
 
         spin_unlock_irqrestore(&rocker->group_tbl_lock, flags);
 
-        return rocker_cmd_exec(rocker, rocker_port,
+        return rocker_cmd_exec(rocker, rocker_port, trans,
                                rocker_cmd_group_tbl_add,
-                               found, NULL, NULL, nowait);
+                               found, NULL, NULL);
 }
 
 static int rocker_group_tbl_del(struct rocker_port *rocker_port,
-                struct rocker_group_tbl_entry *match,
-                bool nowait)
+                enum switchdev_trans trans,
+                struct rocker_group_tbl_entry *match)
 {
         struct rocker *rocker = rocker_port->rocker;
         struct rocker_group_tbl_entry *found;
...@@ -2678,93 +2791,95 @@ static int rocker_group_tbl_del(struct rocker_port *rocker_port, ...@@ -2678,93 +2791,95 @@ static int rocker_group_tbl_del(struct rocker_port *rocker_port,
found = rocker_group_tbl_find(rocker, match); found = rocker_group_tbl_find(rocker, match);
if (found) { if (found) {
hash_del(&found->entry); if (trans != SWITCHDEV_TRANS_PREPARE)
hash_del(&found->entry);
found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_DEL; found->cmd = ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_DEL;
} }
spin_unlock_irqrestore(&rocker->group_tbl_lock, flags); spin_unlock_irqrestore(&rocker->group_tbl_lock, flags);
rocker_group_tbl_entry_free(match); rocker_group_tbl_entry_free(rocker_port, trans, match);
if (found) { if (found) {
err = rocker_cmd_exec(rocker, rocker_port, err = rocker_cmd_exec(rocker, rocker_port, trans,
rocker_cmd_group_tbl_del, rocker_cmd_group_tbl_del,
found, NULL, NULL, nowait); found, NULL, NULL);
rocker_group_tbl_entry_free(found); rocker_group_tbl_entry_free(rocker_port, trans, found);
} }
return err; return err;
} }
static int rocker_group_tbl_do(struct rocker_port *rocker_port, static int rocker_group_tbl_do(struct rocker_port *rocker_port,
int flags, struct rocker_group_tbl_entry *entry) enum switchdev_trans trans, int flags,
struct rocker_group_tbl_entry *entry)
{ {
bool nowait = flags & ROCKER_OP_FLAG_NOWAIT;
if (flags & ROCKER_OP_FLAG_REMOVE) if (flags & ROCKER_OP_FLAG_REMOVE)
return rocker_group_tbl_del(rocker_port, entry, nowait); return rocker_group_tbl_del(rocker_port, trans, entry);
else else
return rocker_group_tbl_add(rocker_port, entry, nowait); return rocker_group_tbl_add(rocker_port, trans, entry);
} }
static int rocker_group_l2_interface(struct rocker_port *rocker_port, static int rocker_group_l2_interface(struct rocker_port *rocker_port,
int flags, __be16 vlan_id, enum switchdev_trans trans, int flags,
u32 out_pport, int pop_vlan) __be16 vlan_id, u32 out_pport,
int pop_vlan)
{ {
struct rocker_group_tbl_entry *entry; struct rocker_group_tbl_entry *entry;
entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags)); entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
if (!entry) if (!entry)
return -ENOMEM; return -ENOMEM;
entry->group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, out_pport); entry->group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, out_pport);
entry->l2_interface.pop_vlan = pop_vlan; entry->l2_interface.pop_vlan = pop_vlan;
return rocker_group_tbl_do(rocker_port, flags, entry); return rocker_group_tbl_do(rocker_port, trans, flags, entry);
} }
static int rocker_group_l2_fan_out(struct rocker_port *rocker_port, static int rocker_group_l2_fan_out(struct rocker_port *rocker_port,
enum switchdev_trans trans,
int flags, u8 group_count, int flags, u8 group_count,
u32 *group_ids, u32 group_id) u32 *group_ids, u32 group_id)
{ {
struct rocker_group_tbl_entry *entry; struct rocker_group_tbl_entry *entry;
entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags)); entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
if (!entry) if (!entry)
return -ENOMEM; return -ENOMEM;
entry->group_id = group_id; entry->group_id = group_id;
entry->group_count = group_count; entry->group_count = group_count;
entry->group_ids = kcalloc(group_count, sizeof(u32), entry->group_ids = rocker_port_kcalloc(rocker_port, trans, group_count,
rocker_op_flags_gfp(flags)); sizeof(u32));
if (!entry->group_ids) { if (!entry->group_ids) {
kfree(entry); rocker_port_kfree(rocker_port, trans, entry);
return -ENOMEM; return -ENOMEM;
} }
memcpy(entry->group_ids, group_ids, group_count * sizeof(u32)); memcpy(entry->group_ids, group_ids, group_count * sizeof(u32));
return rocker_group_tbl_do(rocker_port, flags, entry); return rocker_group_tbl_do(rocker_port, trans, flags, entry);
} }
static int rocker_group_l2_flood(struct rocker_port *rocker_port, static int rocker_group_l2_flood(struct rocker_port *rocker_port,
int flags, __be16 vlan_id, enum switchdev_trans trans, int flags,
u8 group_count, u32 *group_ids, __be16 vlan_id, u8 group_count,
u32 group_id) u32 *group_ids, u32 group_id)
{ {
return rocker_group_l2_fan_out(rocker_port, flags, return rocker_group_l2_fan_out(rocker_port, trans, flags,
group_count, group_ids, group_count, group_ids,
group_id); group_id);
} }
static int rocker_group_l3_unicast(struct rocker_port *rocker_port, static int rocker_group_l3_unicast(struct rocker_port *rocker_port,
int flags, u32 index, u8 *src_mac, enum switchdev_trans trans, int flags,
u8 *dst_mac, __be16 vlan_id, u32 index, u8 *src_mac, u8 *dst_mac,
bool ttl_check, u32 pport) __be16 vlan_id, bool ttl_check, u32 pport)
{ {
struct rocker_group_tbl_entry *entry; struct rocker_group_tbl_entry *entry;
entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags)); entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
if (!entry) if (!entry)
return -ENOMEM; return -ENOMEM;
@@ -2777,7 +2892,7 @@ static int rocker_group_l3_unicast(struct rocker_port *rocker_port,
entry->l3_unicast.ttl_check = ttl_check; entry->l3_unicast.ttl_check = ttl_check;
entry->l3_unicast.group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, pport); entry->l3_unicast.group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, pport);
return rocker_group_tbl_do(rocker_port, flags, entry); return rocker_group_tbl_do(rocker_port, trans, flags, entry);
} }
static struct rocker_neigh_tbl_entry * static struct rocker_neigh_tbl_entry *
@@ -2802,17 +2917,17 @@ static void _rocker_neigh_add(struct rocker *rocker,
be32_to_cpu(entry->ip_addr)); be32_to_cpu(entry->ip_addr));
} }
static void _rocker_neigh_del(struct rocker *rocker, static void _rocker_neigh_del(struct rocker_port *rocker_port,
enum switchdev_trans trans,
struct rocker_neigh_tbl_entry *entry) struct rocker_neigh_tbl_entry *entry)
{ {
if (--entry->ref_count == 0) { if (--entry->ref_count == 0) {
hash_del(&entry->entry); hash_del(&entry->entry);
kfree(entry); rocker_port_kfree(rocker_port, trans, entry);
} }
} }
static void _rocker_neigh_update(struct rocker *rocker, static void _rocker_neigh_update(struct rocker_neigh_tbl_entry *entry,
struct rocker_neigh_tbl_entry *entry,
u8 *eth_dst, bool ttl_check) u8 *eth_dst, bool ttl_check)
{ {
if (eth_dst) { if (eth_dst) {
@@ -2824,6 +2939,7 @@ static void _rocker_neigh_update(struct rocker *rocker,
} }
static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port, static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port,
enum switchdev_trans trans,
int flags, __be32 ip_addr, u8 *eth_dst) int flags, __be32 ip_addr, u8 *eth_dst)
{ {
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
@@ -2840,7 +2956,7 @@ static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port,
bool removing; bool removing;
int err = 0; int err = 0;
entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags)); entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
if (!entry) if (!entry)
return -ENOMEM; return -ENOMEM;
@@ -2860,9 +2976,9 @@ static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port,
_rocker_neigh_add(rocker, entry); _rocker_neigh_add(rocker, entry);
} else if (removing) { } else if (removing) {
memcpy(entry, found, sizeof(*entry)); memcpy(entry, found, sizeof(*entry));
_rocker_neigh_del(rocker, found); _rocker_neigh_del(rocker_port, trans, found);
} else if (updating) { } else if (updating) {
_rocker_neigh_update(rocker, found, eth_dst, true); _rocker_neigh_update(found, eth_dst, true);
memcpy(entry, found, sizeof(*entry)); memcpy(entry, found, sizeof(*entry));
} else { } else {
err = -ENOENT; err = -ENOENT;
@@ -2879,7 +2995,7 @@ static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port,
* other routes' nexthops. * other routes' nexthops.
*/ */
err = rocker_group_l3_unicast(rocker_port, flags, err = rocker_group_l3_unicast(rocker_port, trans, flags,
entry->index, entry->index,
rocker_port->dev->dev_addr, rocker_port->dev->dev_addr,
entry->eth_dst, entry->eth_dst,
@@ -2895,7 +3011,7 @@ static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port,
if (adding || removing) { if (adding || removing) {
group_id = ROCKER_GROUP_L3_UNICAST(entry->index); group_id = ROCKER_GROUP_L3_UNICAST(entry->index);
err = rocker_flow_tbl_ucast4_routing(rocker_port, err = rocker_flow_tbl_ucast4_routing(rocker_port, trans,
eth_type, ip_addr, eth_type, ip_addr,
inet_make_mask(32), inet_make_mask(32),
priority, goto_tbl, priority, goto_tbl,
@@ -2909,13 +3025,13 @@ static int rocker_port_ipv4_neigh(struct rocker_port *rocker_port,
err_out: err_out:
if (!adding) if (!adding)
kfree(entry); rocker_port_kfree(rocker_port, trans, entry);
return err; return err;
} }
static int rocker_port_ipv4_resolve(struct rocker_port *rocker_port, static int rocker_port_ipv4_resolve(struct rocker_port *rocker_port,
__be32 ip_addr) enum switchdev_trans trans, __be32 ip_addr)
{ {
struct net_device *dev = rocker_port->dev; struct net_device *dev = rocker_port->dev;
struct neighbour *n = __ipv4_neigh_lookup(dev, (__force u32)ip_addr); struct neighbour *n = __ipv4_neigh_lookup(dev, (__force u32)ip_addr);
@@ -2932,14 +3048,16 @@ static int rocker_port_ipv4_resolve(struct rocker_port *rocker_port,
*/ */
if (n->nud_state & NUD_VALID) if (n->nud_state & NUD_VALID)
err = rocker_port_ipv4_neigh(rocker_port, 0, ip_addr, n->ha); err = rocker_port_ipv4_neigh(rocker_port, trans, 0,
ip_addr, n->ha);
else else
neigh_event_send(n, NULL); neigh_event_send(n, NULL);
return err; return err;
} }
static int rocker_port_ipv4_nh(struct rocker_port *rocker_port, int flags, static int rocker_port_ipv4_nh(struct rocker_port *rocker_port,
enum switchdev_trans trans, int flags,
__be32 ip_addr, u32 *index) __be32 ip_addr, u32 *index)
{ {
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
@@ -2952,7 +3070,7 @@ static int rocker_port_ipv4_nh(struct rocker_port *rocker_port, int flags,
bool resolved = true; bool resolved = true;
int err = 0; int err = 0;
entry = kzalloc(sizeof(*entry), rocker_op_flags_gfp(flags)); entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
if (!entry) if (!entry)
return -ENOMEM; return -ENOMEM;
@@ -2973,9 +3091,9 @@ static int rocker_port_ipv4_nh(struct rocker_port *rocker_port, int flags,
*index = entry->index; *index = entry->index;
resolved = false; resolved = false;
} else if (removing) { } else if (removing) {
_rocker_neigh_del(rocker, found); _rocker_neigh_del(rocker_port, trans, found);
} else if (updating) { } else if (updating) {
_rocker_neigh_update(rocker, found, NULL, false); _rocker_neigh_update(found, NULL, false);
resolved = !is_zero_ether_addr(found->eth_dst); resolved = !is_zero_ether_addr(found->eth_dst);
} else { } else {
err = -ENOENT; err = -ENOENT;
@@ -2984,7 +3102,7 @@ static int rocker_port_ipv4_nh(struct rocker_port *rocker_port, int flags,
spin_unlock_irqrestore(&rocker->neigh_tbl_lock, lock_flags); spin_unlock_irqrestore(&rocker->neigh_tbl_lock, lock_flags);
if (!adding) if (!adding)
kfree(entry); rocker_port_kfree(rocker_port, trans, entry);
if (err) if (err)
return err; return err;
@@ -2992,12 +3110,13 @@ static int rocker_port_ipv4_nh(struct rocker_port *rocker_port, int flags,
/* Resolved means neigh ip_addr is resolved to neigh mac. */ /* Resolved means neigh ip_addr is resolved to neigh mac. */
if (!resolved) if (!resolved)
err = rocker_port_ipv4_resolve(rocker_port, ip_addr); err = rocker_port_ipv4_resolve(rocker_port, trans, ip_addr);
return err; return err;
} }
static int rocker_port_vlan_flood_group(struct rocker_port *rocker_port, static int rocker_port_vlan_flood_group(struct rocker_port *rocker_port,
enum switchdev_trans trans,
int flags, __be16 vlan_id) int flags, __be16 vlan_id)
{ {
struct rocker_port *p; struct rocker_port *p;
@@ -3008,8 +3127,8 @@ static int rocker_port_vlan_flood_group(struct rocker_port *rocker_port,
int err = 0; int err = 0;
int i; int i;
group_ids = kcalloc(rocker->port_count, sizeof(u32), group_ids = rocker_port_kcalloc(rocker_port, trans, rocker->port_count,
rocker_op_flags_gfp(flags)); sizeof(u32));
if (!group_ids) if (!group_ids)
return -ENOMEM; return -ENOMEM;
@@ -3032,21 +3151,20 @@ static int rocker_port_vlan_flood_group(struct rocker_port *rocker_port,
if (group_count == 0) if (group_count == 0)
goto no_ports_in_vlan; goto no_ports_in_vlan;
err = rocker_group_l2_flood(rocker_port, flags, vlan_id, err = rocker_group_l2_flood(rocker_port, trans, flags, vlan_id,
group_count, group_ids, group_count, group_ids, group_id);
group_id);
if (err) if (err)
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN l2 flood group\n", err); "Error (%d) port VLAN l2 flood group\n", err);
no_ports_in_vlan: no_ports_in_vlan:
kfree(group_ids); rocker_port_kfree(rocker_port, trans, group_ids);
return err; return err;
} }
static int rocker_port_vlan_l2_groups(struct rocker_port *rocker_port, static int rocker_port_vlan_l2_groups(struct rocker_port *rocker_port,
int flags, __be16 vlan_id, enum switchdev_trans trans, int flags,
bool pop_vlan) __be16 vlan_id, bool pop_vlan)
{ {
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
struct rocker_port *p; struct rocker_port *p;
@@ -3063,9 +3181,8 @@ static int rocker_port_vlan_l2_groups(struct rocker_port *rocker_port,
if (rocker_port->stp_state == BR_STATE_LEARNING || if (rocker_port->stp_state == BR_STATE_LEARNING ||
rocker_port->stp_state == BR_STATE_FORWARDING) { rocker_port->stp_state == BR_STATE_FORWARDING) {
out_pport = rocker_port->pport; out_pport = rocker_port->pport;
err = rocker_group_l2_interface(rocker_port, flags, err = rocker_group_l2_interface(rocker_port, trans, flags,
vlan_id, out_pport, vlan_id, out_pport, pop_vlan);
pop_vlan);
if (err) { if (err) {
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN l2 group for pport %d\n", "Error (%d) port VLAN l2 group for pport %d\n",
@@ -3089,9 +3206,8 @@ static int rocker_port_vlan_l2_groups(struct rocker_port *rocker_port,
return 0; return 0;
out_pport = 0; out_pport = 0;
err = rocker_group_l2_interface(rocker_port, flags, err = rocker_group_l2_interface(rocker_port, trans, flags,
vlan_id, out_pport, vlan_id, out_pport, pop_vlan);
pop_vlan);
if (err) { if (err) {
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN l2 group for CPU port\n", err); "Error (%d) port VLAN l2 group for CPU port\n", err);
@@ -3147,8 +3263,8 @@ static struct rocker_ctrl {
}; };
static int rocker_port_ctrl_vlan_acl(struct rocker_port *rocker_port, static int rocker_port_ctrl_vlan_acl(struct rocker_port *rocker_port,
int flags, struct rocker_ctrl *ctrl, enum switchdev_trans trans, int flags,
__be16 vlan_id) struct rocker_ctrl *ctrl, __be16 vlan_id)
{ {
u32 in_pport = rocker_port->pport; u32 in_pport = rocker_port->pport;
u32 in_pport_mask = 0xffffffff; u32 in_pport_mask = 0xffffffff;
@@ -3163,7 +3279,7 @@ static int rocker_port_ctrl_vlan_acl(struct rocker_port *rocker_port,
u32 group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, out_pport); u32 group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, out_pport);
int err; int err;
err = rocker_flow_tbl_acl(rocker_port, flags, err = rocker_flow_tbl_acl(rocker_port, trans, flags,
in_pport, in_pport_mask, in_pport, in_pport_mask,
eth_src, eth_src_mask, eth_src, eth_src_mask,
ctrl->eth_dst, ctrl->eth_dst_mask, ctrl->eth_dst, ctrl->eth_dst_mask,
@@ -3180,7 +3296,8 @@ static int rocker_port_ctrl_vlan_acl(struct rocker_port *rocker_port,
} }
static int rocker_port_ctrl_vlan_bridge(struct rocker_port *rocker_port, static int rocker_port_ctrl_vlan_bridge(struct rocker_port *rocker_port,
int flags, struct rocker_ctrl *ctrl, enum switchdev_trans trans, int flags,
struct rocker_ctrl *ctrl,
__be16 vlan_id) __be16 vlan_id)
{ {
enum rocker_of_dpa_table_id goto_tbl = enum rocker_of_dpa_table_id goto_tbl =
@@ -3192,7 +3309,7 @@ static int rocker_port_ctrl_vlan_bridge(struct rocker_port *rocker_port,
if (!rocker_port_is_bridged(rocker_port)) if (!rocker_port_is_bridged(rocker_port))
return 0; return 0;
err = rocker_flow_tbl_bridge(rocker_port, flags, err = rocker_flow_tbl_bridge(rocker_port, trans, flags,
ctrl->eth_dst, ctrl->eth_dst_mask, ctrl->eth_dst, ctrl->eth_dst_mask,
vlan_id, tunnel_id, vlan_id, tunnel_id,
goto_tbl, group_id, ctrl->copy_to_cpu); goto_tbl, group_id, ctrl->copy_to_cpu);
@@ -3204,8 +3321,8 @@ static int rocker_port_ctrl_vlan_bridge(struct rocker_port *rocker_port,
} }
static int rocker_port_ctrl_vlan_term(struct rocker_port *rocker_port, static int rocker_port_ctrl_vlan_term(struct rocker_port *rocker_port,
int flags, struct rocker_ctrl *ctrl, enum switchdev_trans trans, int flags,
__be16 vlan_id) struct rocker_ctrl *ctrl, __be16 vlan_id)
{ {
u32 in_pport_mask = 0xffffffff; u32 in_pport_mask = 0xffffffff;
__be16 vlan_id_mask = htons(0xffff); __be16 vlan_id_mask = htons(0xffff);
@@ -3214,7 +3331,7 @@ static int rocker_port_ctrl_vlan_term(struct rocker_port *rocker_port,
if (ntohs(vlan_id) == 0) if (ntohs(vlan_id) == 0)
vlan_id = rocker_port->internal_vlan_id; vlan_id = rocker_port->internal_vlan_id;
err = rocker_flow_tbl_term_mac(rocker_port, err = rocker_flow_tbl_term_mac(rocker_port, trans,
rocker_port->pport, in_pport_mask, rocker_port->pport, in_pport_mask,
ctrl->eth_type, ctrl->eth_dst, ctrl->eth_type, ctrl->eth_dst,
ctrl->eth_dst_mask, vlan_id, ctrl->eth_dst_mask, vlan_id,
@@ -3227,32 +3344,34 @@ static int rocker_port_ctrl_vlan_term(struct rocker_port *rocker_port,
return err; return err;
} }
static int rocker_port_ctrl_vlan(struct rocker_port *rocker_port, int flags, static int rocker_port_ctrl_vlan(struct rocker_port *rocker_port,
enum switchdev_trans trans, int flags,
struct rocker_ctrl *ctrl, __be16 vlan_id) struct rocker_ctrl *ctrl, __be16 vlan_id)
{ {
if (ctrl->acl) if (ctrl->acl)
return rocker_port_ctrl_vlan_acl(rocker_port, flags, return rocker_port_ctrl_vlan_acl(rocker_port, trans, flags,
ctrl, vlan_id); ctrl, vlan_id);
if (ctrl->bridge) if (ctrl->bridge)
return rocker_port_ctrl_vlan_bridge(rocker_port, flags, return rocker_port_ctrl_vlan_bridge(rocker_port, trans, flags,
ctrl, vlan_id); ctrl, vlan_id);
if (ctrl->term) if (ctrl->term)
return rocker_port_ctrl_vlan_term(rocker_port, flags, return rocker_port_ctrl_vlan_term(rocker_port, trans, flags,
ctrl, vlan_id); ctrl, vlan_id);
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
static int rocker_port_ctrl_vlan_add(struct rocker_port *rocker_port, static int rocker_port_ctrl_vlan_add(struct rocker_port *rocker_port,
int flags, __be16 vlan_id) enum switchdev_trans trans, int flags,
__be16 vlan_id)
{ {
int err = 0; int err = 0;
int i; int i;
for (i = 0; i < ROCKER_CTRL_MAX; i++) { for (i = 0; i < ROCKER_CTRL_MAX; i++) {
if (rocker_port->ctrls[i]) { if (rocker_port->ctrls[i]) {
err = rocker_port_ctrl_vlan(rocker_port, flags, err = rocker_port_ctrl_vlan(rocker_port, trans, flags,
&rocker_ctrls[i], vlan_id); &rocker_ctrls[i], vlan_id);
if (err) if (err)
return err; return err;
@@ -3262,7 +3381,8 @@ static int rocker_port_ctrl_vlan_add(struct rocker_port *rocker_port,
return err; return err;
} }
static int rocker_port_ctrl(struct rocker_port *rocker_port, int flags, static int rocker_port_ctrl(struct rocker_port *rocker_port,
enum switchdev_trans trans, int flags,
struct rocker_ctrl *ctrl) struct rocker_ctrl *ctrl)
{ {
u16 vid; u16 vid;
@@ -3271,7 +3391,7 @@ static int rocker_port_ctrl(struct rocker_port *rocker_port, int flags,
for (vid = 1; vid < VLAN_N_VID; vid++) { for (vid = 1; vid < VLAN_N_VID; vid++) {
if (!test_bit(vid, rocker_port->vlan_bitmap)) if (!test_bit(vid, rocker_port->vlan_bitmap))
continue; continue;
err = rocker_port_ctrl_vlan(rocker_port, flags, err = rocker_port_ctrl_vlan(rocker_port, trans, flags,
ctrl, htons(vid)); ctrl, htons(vid));
if (err) if (err)
break; break;
@@ -3280,8 +3400,8 @@ static int rocker_port_ctrl(struct rocker_port *rocker_port, int flags,
return err; return err;
} }
static int rocker_port_vlan(struct rocker_port *rocker_port, int flags, static int rocker_port_vlan(struct rocker_port *rocker_port,
u16 vid) enum switchdev_trans trans, int flags, u16 vid)
{ {
enum rocker_of_dpa_table_id goto_tbl = enum rocker_of_dpa_table_id goto_tbl =
ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC; ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC;
@@ -3295,50 +3415,57 @@ static int rocker_port_vlan(struct rocker_port *rocker_port, int flags,
internal_vlan_id = rocker_port_vid_to_vlan(rocker_port, vid, &untagged); internal_vlan_id = rocker_port_vid_to_vlan(rocker_port, vid, &untagged);
if (adding && test_and_set_bit(ntohs(internal_vlan_id), if (adding && test_bit(ntohs(internal_vlan_id),
rocker_port->vlan_bitmap)) rocker_port->vlan_bitmap))
return 0; /* already added */ return 0; /* already added */
else if (!adding && !test_and_clear_bit(ntohs(internal_vlan_id), else if (!adding && !test_bit(ntohs(internal_vlan_id),
rocker_port->vlan_bitmap)) rocker_port->vlan_bitmap))
return 0; /* already removed */ return 0; /* already removed */
change_bit(ntohs(internal_vlan_id), rocker_port->vlan_bitmap);
if (adding) { if (adding) {
err = rocker_port_ctrl_vlan_add(rocker_port, flags, err = rocker_port_ctrl_vlan_add(rocker_port, trans, flags,
internal_vlan_id); internal_vlan_id);
if (err) { if (err) {
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port ctrl vlan add\n", err); "Error (%d) port ctrl vlan add\n", err);
return err; goto err_out;
} }
} }
err = rocker_port_vlan_l2_groups(rocker_port, flags, err = rocker_port_vlan_l2_groups(rocker_port, trans, flags,
internal_vlan_id, untagged); internal_vlan_id, untagged);
if (err) { if (err) {
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN l2 groups\n", err); "Error (%d) port VLAN l2 groups\n", err);
return err; goto err_out;
} }
err = rocker_port_vlan_flood_group(rocker_port, flags, err = rocker_port_vlan_flood_group(rocker_port, trans, flags,
internal_vlan_id); internal_vlan_id);
if (err) { if (err) {
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN l2 flood group\n", err); "Error (%d) port VLAN l2 flood group\n", err);
return err; goto err_out;
} }
err = rocker_flow_tbl_vlan(rocker_port, flags, err = rocker_flow_tbl_vlan(rocker_port, trans, flags,
in_pport, vlan_id, vlan_id_mask, in_pport, vlan_id, vlan_id_mask,
goto_tbl, untagged, internal_vlan_id); goto_tbl, untagged, internal_vlan_id);
if (err) if (err)
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN table\n", err); "Error (%d) port VLAN table\n", err);
err_out:
if (trans == SWITCHDEV_TRANS_PREPARE)
change_bit(ntohs(internal_vlan_id), rocker_port->vlan_bitmap);
return err; return err;
} }
static int rocker_port_ig_tbl(struct rocker_port *rocker_port, int flags) static int rocker_port_ig_tbl(struct rocker_port *rocker_port,
enum switchdev_trans trans, int flags)
{ {
enum rocker_of_dpa_table_id goto_tbl; enum rocker_of_dpa_table_id goto_tbl;
u32 in_pport; u32 in_pport;
@@ -3353,7 +3480,7 @@ static int rocker_port_ig_tbl(struct rocker_port *rocker_port, int flags)
in_pport_mask = 0xffff0000; in_pport_mask = 0xffff0000;
goto_tbl = ROCKER_OF_DPA_TABLE_ID_VLAN; goto_tbl = ROCKER_OF_DPA_TABLE_ID_VLAN;
err = rocker_flow_tbl_ig_port(rocker_port, flags, err = rocker_flow_tbl_ig_port(rocker_port, trans, flags,
in_pport, in_pport_mask, in_pport, in_pport_mask,
goto_tbl); goto_tbl);
if (err) if (err)
@@ -3365,7 +3492,8 @@ static int rocker_port_ig_tbl(struct rocker_port *rocker_port, int flags)
struct rocker_fdb_learn_work { struct rocker_fdb_learn_work {
struct work_struct work; struct work_struct work;
struct net_device *dev; struct rocker_port *rocker_port;
enum switchdev_trans trans;
int flags; int flags;
u8 addr[ETH_ALEN]; u8 addr[ETH_ALEN];
u16 vid; u16 vid;
@@ -3377,23 +3505,24 @@ static void rocker_port_fdb_learn_work(struct work_struct *work)
container_of(work, struct rocker_fdb_learn_work, work); container_of(work, struct rocker_fdb_learn_work, work);
bool removing = (lw->flags & ROCKER_OP_FLAG_REMOVE); bool removing = (lw->flags & ROCKER_OP_FLAG_REMOVE);
bool learned = (lw->flags & ROCKER_OP_FLAG_LEARNED); bool learned = (lw->flags & ROCKER_OP_FLAG_LEARNED);
struct netdev_switch_notifier_fdb_info info; struct switchdev_notifier_fdb_info info;
info.addr = lw->addr; info.addr = lw->addr;
info.vid = lw->vid; info.vid = lw->vid;
if (learned && removing) if (learned && removing)
call_netdev_switch_notifiers(NETDEV_SWITCH_FDB_DEL, call_switchdev_notifiers(SWITCHDEV_FDB_DEL,
lw->dev, &info.info); lw->rocker_port->dev, &info.info);
else if (learned && !removing) else if (learned && !removing)
call_netdev_switch_notifiers(NETDEV_SWITCH_FDB_ADD, call_switchdev_notifiers(SWITCHDEV_FDB_ADD,
lw->dev, &info.info); lw->rocker_port->dev, &info.info);
kfree(work); rocker_port_kfree(lw->rocker_port, lw->trans, work);
} }
static int rocker_port_fdb_learn(struct rocker_port *rocker_port, static int rocker_port_fdb_learn(struct rocker_port *rocker_port,
int flags, const u8 *addr, __be16 vlan_id) enum switchdev_trans trans, int flags,
const u8 *addr, __be16 vlan_id)
{ {
struct rocker_fdb_learn_work *lw; struct rocker_fdb_learn_work *lw;
enum rocker_of_dpa_table_id goto_tbl = enum rocker_of_dpa_table_id goto_tbl =
@@ -3409,8 +3538,8 @@ static int rocker_port_fdb_learn(struct rocker_port *rocker_port,
group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, out_pport); group_id = ROCKER_GROUP_L2_INTERFACE(vlan_id, out_pport);
if (!(flags & ROCKER_OP_FLAG_REFRESH)) { if (!(flags & ROCKER_OP_FLAG_REFRESH)) {
err = rocker_flow_tbl_bridge(rocker_port, flags, addr, NULL, err = rocker_flow_tbl_bridge(rocker_port, trans, flags, addr,
vlan_id, tunnel_id, goto_tbl, NULL, vlan_id, tunnel_id, goto_tbl,
group_id, copy_to_cpu); group_id, copy_to_cpu);
if (err) if (err)
return err; return err;
@@ -3422,18 +3551,22 @@ static int rocker_port_fdb_learn(struct rocker_port *rocker_port,
if (!rocker_port_is_bridged(rocker_port)) if (!rocker_port_is_bridged(rocker_port))
return 0; return 0;
lw = kmalloc(sizeof(*lw), rocker_op_flags_gfp(flags)); lw = rocker_port_kzalloc(rocker_port, trans, sizeof(*lw));
if (!lw) if (!lw)
return -ENOMEM; return -ENOMEM;
INIT_WORK(&lw->work, rocker_port_fdb_learn_work); INIT_WORK(&lw->work, rocker_port_fdb_learn_work);
lw->dev = rocker_port->dev; lw->rocker_port = rocker_port;
lw->trans = trans;
lw->flags = flags; lw->flags = flags;
ether_addr_copy(lw->addr, addr); ether_addr_copy(lw->addr, addr);
lw->vid = rocker_port_vlan_to_vid(rocker_port, vlan_id); lw->vid = rocker_port_vlan_to_vid(rocker_port, vlan_id);
schedule_work(&lw->work); if (trans == SWITCHDEV_TRANS_PREPARE)
rocker_port_kfree(rocker_port, trans, lw);
else
schedule_work(&lw->work);
return 0; return 0;
} }
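
rocker_port_fdb_learn() now allocates the deferred-work item through the transaction-aware allocator and only schedules it outside the PREPARE phase, so a prepared-then-aborted transaction never queues a bridge notification. A rough standalone illustration of that pattern follows; fdb_learn, learn_work and run_learn_work are invented names, and plain calloc/free stand in for the kernel helpers.

#include <stdio.h>
#include <stdlib.h>

enum trans_phase { TRANS_NONE, TRANS_PREPARE, TRANS_COMMIT };

struct learn_work {                    /* models rocker_fdb_learn_work */
	unsigned short vid;
};

static void run_learn_work(struct learn_work *lw)   /* the deferred step */
{
	printf("notify bridge: vid %u learned\n", lw->vid);
	free(lw);
}

/* PREPARE only proves the work item can be allocated; the side effect is
 * scheduled in the COMMIT (or non-transactional) path. */
static int fdb_learn(enum trans_phase ph, unsigned short vid)
{
	struct learn_work *lw = calloc(1, sizeof(*lw));

	if (!lw)
		return -1;             /* would fail the PREPARE phase */
	lw->vid = vid;
	if (ph == TRANS_PREPARE)
		free(lw);              /* validated, but nothing queued */
	else
		run_learn_work(lw);    /* schedule_work() in the driver */
	return 0;
}
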
@@ -3451,6 +3584,7 @@ rocker_fdb_tbl_find(struct rocker *rocker, struct rocker_fdb_tbl_entry *match)
} }
static int rocker_port_fdb(struct rocker_port *rocker_port, static int rocker_port_fdb(struct rocker_port *rocker_port,
enum switchdev_trans trans,
const unsigned char *addr, const unsigned char *addr,
__be16 vlan_id, int flags) __be16 vlan_id, int flags)
{ {
@@ -3460,7 +3594,7 @@ static int rocker_port_fdb(struct rocker_port *rocker_port,
bool removing = (flags & ROCKER_OP_FLAG_REMOVE); bool removing = (flags & ROCKER_OP_FLAG_REMOVE);
unsigned long lock_flags; unsigned long lock_flags;
fdb = kzalloc(sizeof(*fdb), rocker_op_flags_gfp(flags)); fdb = rocker_port_kzalloc(rocker_port, trans, sizeof(*fdb));
if (!fdb) if (!fdb)
return -ENOMEM; return -ENOMEM;
@@ -3475,7 +3609,7 @@ static int rocker_port_fdb(struct rocker_port *rocker_port,
found = rocker_fdb_tbl_find(rocker, fdb); found = rocker_fdb_tbl_find(rocker, fdb);
if (removing && found) { if (removing && found) {
kfree(fdb); rocker_port_kfree(rocker_port, trans, fdb);
hash_del(&found->entry); hash_del(&found->entry);
} else if (!removing && !found) { } else if (!removing && !found) {
hash_add(rocker->fdb_tbl, &fdb->entry, fdb->key_crc32); hash_add(rocker->fdb_tbl, &fdb->entry, fdb->key_crc32);
@@ -3485,22 +3619,23 @@ static int rocker_port_fdb(struct rocker_port *rocker_port,
/* Check if adding and already exists, or removing and can't find */ /* Check if adding and already exists, or removing and can't find */
if (!found != !removing) { if (!found != !removing) {
kfree(fdb); rocker_port_kfree(rocker_port, trans, fdb);
if (!found && removing) if (!found && removing)
return 0; return 0;
/* Refreshing existing to update aging timers */ /* Refreshing existing to update aging timers */
flags |= ROCKER_OP_FLAG_REFRESH; flags |= ROCKER_OP_FLAG_REFRESH;
} }
return rocker_port_fdb_learn(rocker_port, flags, addr, vlan_id); return rocker_port_fdb_learn(rocker_port, trans, flags, addr, vlan_id);
} }
static int rocker_port_fdb_flush(struct rocker_port *rocker_port) static int rocker_port_fdb_flush(struct rocker_port *rocker_port,
enum switchdev_trans trans)
{ {
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
struct rocker_fdb_tbl_entry *found; struct rocker_fdb_tbl_entry *found;
unsigned long lock_flags; unsigned long lock_flags;
int flags = ROCKER_OP_FLAG_NOWAIT | ROCKER_OP_FLAG_REMOVE; int flags = ROCKER_OP_FLAG_REMOVE;
struct hlist_node *tmp; struct hlist_node *tmp;
int bkt; int bkt;
int err = 0; int err = 0;
@@ -3516,7 +3651,7 @@ static int rocker_port_fdb_flush(struct rocker_port *rocker_port)
continue; continue;
if (!found->learned) if (!found->learned)
continue; continue;
err = rocker_port_fdb_learn(rocker_port, flags, err = rocker_port_fdb_learn(rocker_port, trans, flags,
found->key.addr, found->key.addr,
found->key.vlan_id); found->key.vlan_id);
if (err) if (err)
@@ -3531,7 +3666,8 @@ static int rocker_port_fdb_flush(struct rocker_port *rocker_port)
} }
static int rocker_port_router_mac(struct rocker_port *rocker_port, static int rocker_port_router_mac(struct rocker_port *rocker_port,
int flags, __be16 vlan_id) enum switchdev_trans trans, int flags,
__be16 vlan_id)
{ {
u32 in_pport_mask = 0xffffffff; u32 in_pport_mask = 0xffffffff;
__be16 eth_type; __be16 eth_type;
@@ -3544,7 +3680,7 @@ static int rocker_port_router_mac(struct rocker_port *rocker_port,
vlan_id = rocker_port->internal_vlan_id; vlan_id = rocker_port->internal_vlan_id;
eth_type = htons(ETH_P_IP); eth_type = htons(ETH_P_IP);
err = rocker_flow_tbl_term_mac(rocker_port, err = rocker_flow_tbl_term_mac(rocker_port, trans,
rocker_port->pport, in_pport_mask, rocker_port->pport, in_pport_mask,
eth_type, rocker_port->dev->dev_addr, eth_type, rocker_port->dev->dev_addr,
dst_mac_mask, vlan_id, vlan_id_mask, dst_mac_mask, vlan_id, vlan_id_mask,
@@ -3553,7 +3689,7 @@ static int rocker_port_router_mac(struct rocker_port *rocker_port,
return err; return err;
eth_type = htons(ETH_P_IPV6); eth_type = htons(ETH_P_IPV6);
err = rocker_flow_tbl_term_mac(rocker_port, err = rocker_flow_tbl_term_mac(rocker_port, trans,
rocker_port->pport, in_pport_mask, rocker_port->pport, in_pport_mask,
eth_type, rocker_port->dev->dev_addr, eth_type, rocker_port->dev->dev_addr,
dst_mac_mask, vlan_id, vlan_id_mask, dst_mac_mask, vlan_id, vlan_id_mask,
@@ -3562,13 +3698,14 @@ static int rocker_port_router_mac(struct rocker_port *rocker_port,
return err; return err;
} }
static int rocker_port_fwding(struct rocker_port *rocker_port) static int rocker_port_fwding(struct rocker_port *rocker_port,
enum switchdev_trans trans)
{ {
bool pop_vlan; bool pop_vlan;
u32 out_pport; u32 out_pport;
__be16 vlan_id; __be16 vlan_id;
u16 vid; u16 vid;
int flags = ROCKER_OP_FLAG_NOWAIT; int flags = 0;
int err; int err;
/* Port will be forwarding-enabled if its STP state is LEARNING /* Port will be forwarding-enabled if its STP state is LEARNING
@@ -3588,9 +3725,8 @@ static int rocker_port_fwding(struct rocker_port *rocker_port)
continue; continue;
vlan_id = htons(vid); vlan_id = htons(vid);
pop_vlan = rocker_vlan_id_is_internal(vlan_id); pop_vlan = rocker_vlan_id_is_internal(vlan_id);
err = rocker_group_l2_interface(rocker_port, flags, err = rocker_group_l2_interface(rocker_port, trans, flags,
vlan_id, out_pport, vlan_id, out_pport, pop_vlan);
pop_vlan);
if (err) { if (err) {
netdev_err(rocker_port->dev, netdev_err(rocker_port->dev,
"Error (%d) port VLAN l2 group for pport %d\n", "Error (%d) port VLAN l2 group for pport %d\n",
@@ -3602,13 +3738,21 @@ static int rocker_port_fwding(struct rocker_port *rocker_port)
return 0; return 0;
} }
static int rocker_port_stp_update(struct rocker_port *rocker_port, u8 state) static int rocker_port_stp_update(struct rocker_port *rocker_port,
enum switchdev_trans trans, u8 state)
{ {
bool want[ROCKER_CTRL_MAX] = { 0, }; bool want[ROCKER_CTRL_MAX] = { 0, };
bool prev_ctrls[ROCKER_CTRL_MAX];
u8 prev_state;
int flags; int flags;
int err; int err;
int i; int i;
if (trans == SWITCHDEV_TRANS_PREPARE) {
memcpy(prev_ctrls, rocker_port->ctrls, sizeof(prev_ctrls));
prev_state = rocker_port->stp_state;
}
if (rocker_port->stp_state == state) if (rocker_port->stp_state == state)
return 0; return 0;
@@ -3636,41 +3780,50 @@ static int rocker_port_stp_update(struct rocker_port *rocker_port, u8 state)
for (i = 0; i < ROCKER_CTRL_MAX; i++) { for (i = 0; i < ROCKER_CTRL_MAX; i++) {
if (want[i] != rocker_port->ctrls[i]) { if (want[i] != rocker_port->ctrls[i]) {
flags = ROCKER_OP_FLAG_NOWAIT | flags = (want[i] ? 0 : ROCKER_OP_FLAG_REMOVE);
(want[i] ? 0 : ROCKER_OP_FLAG_REMOVE); err = rocker_port_ctrl(rocker_port, trans, flags,
err = rocker_port_ctrl(rocker_port, flags,
&rocker_ctrls[i]); &rocker_ctrls[i]);
if (err) if (err)
return err; goto err_out;
rocker_port->ctrls[i] = want[i]; rocker_port->ctrls[i] = want[i];
} }
} }
err = rocker_port_fdb_flush(rocker_port); err = rocker_port_fdb_flush(rocker_port, trans);
if (err) if (err)
return err; goto err_out;
err = rocker_port_fwding(rocker_port, trans);
err_out:
if (trans == SWITCHDEV_TRANS_PREPARE) {
memcpy(rocker_port->ctrls, prev_ctrls, sizeof(prev_ctrls));
rocker_port->stp_state = prev_state;
}
return rocker_port_fwding(rocker_port); return err;
} }
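
rocker_port_stp_update() now snapshots prev_ctrls and prev_state and puts them back when the transaction phase is PREPARE, so the dry run leaves the port's software state untouched while still exercising the full code path. A compact model of that save-and-restore pattern, with port_state and stp_update as placeholder names rather than the driver's types:

enum trans_phase { TRANS_NONE, TRANS_PREPARE, TRANS_COMMIT };

#define CTRL_MAX 4

struct port_state {                    /* the fields the driver saves */
	unsigned char ctrls[CTRL_MAX];
	unsigned char stp_state;
};

/* Run the same work for PREPARE and COMMIT, but roll the software state
 * back at the end of PREPARE so only COMMIT keeps the change. */
static int stp_update(struct port_state *port, enum trans_phase ph,
		      unsigned char state)
{
	struct port_state saved = *port;   /* prev_ctrls / prev_state */
	int err = 0;

	port->stp_state = state;
	/* ... reprogram ctrls, flush the FDB, update forwarding ... */

	if (ph == TRANS_PREPARE)
		*port = saved;             /* dry run: restore everything */
	return err;
}
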
static int rocker_port_fwd_enable(struct rocker_port *rocker_port) static int rocker_port_fwd_enable(struct rocker_port *rocker_port,
enum switchdev_trans trans)
{ {
if (rocker_port_is_bridged(rocker_port)) if (rocker_port_is_bridged(rocker_port))
/* bridge STP will enable port */ /* bridge STP will enable port */
return 0; return 0;
/* port is not bridged, so simulate going to FORWARDING state */ /* port is not bridged, so simulate going to FORWARDING state */
return rocker_port_stp_update(rocker_port, BR_STATE_FORWARDING); return rocker_port_stp_update(rocker_port, trans, BR_STATE_FORWARDING);
} }
static int rocker_port_fwd_disable(struct rocker_port *rocker_port) static int rocker_port_fwd_disable(struct rocker_port *rocker_port,
enum switchdev_trans trans)
{ {
if (rocker_port_is_bridged(rocker_port)) if (rocker_port_is_bridged(rocker_port))
/* bridge STP will disable port */ /* bridge STP will disable port */
return 0; return 0;
/* port is not bridged, so simulate going to DISABLED state */ /* port is not bridged, so simulate going to DISABLED state */
return rocker_port_stp_update(rocker_port, BR_STATE_DISABLED); return rocker_port_stp_update(rocker_port, trans, BR_STATE_DISABLED);
} }
static struct rocker_internal_vlan_tbl_entry * static struct rocker_internal_vlan_tbl_entry *
@@ -3688,6 +3841,7 @@ rocker_internal_vlan_tbl_find(struct rocker *rocker, int ifindex)
} }
static __be16 rocker_port_internal_vlan_id_get(struct rocker_port *rocker_port, static __be16 rocker_port_internal_vlan_id_get(struct rocker_port *rocker_port,
enum switchdev_trans trans,
int ifindex) int ifindex)
{ {
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
@@ -3696,7 +3850,7 @@ static __be16 rocker_port_internal_vlan_id_get(struct rocker_port *rocker_port,
unsigned long lock_flags; unsigned long lock_flags;
int i; int i;
entry = kzalloc(sizeof(*entry), GFP_KERNEL); entry = rocker_port_kzalloc(rocker_port, trans, sizeof(*entry));
if (!entry) if (!entry)
return 0; return 0;
@@ -3706,7 +3860,7 @@ static __be16 rocker_port_internal_vlan_id_get(struct rocker_port *rocker_port,
found = rocker_internal_vlan_tbl_find(rocker, ifindex); found = rocker_internal_vlan_tbl_find(rocker, ifindex);
if (found) { if (found) {
kfree(entry); rocker_port_kfree(rocker_port, trans, entry);
goto found; goto found;
} }
@@ -3730,6 +3884,7 @@ static __be16 rocker_port_internal_vlan_id_get(struct rocker_port *rocker_port,
} }
static void rocker_port_internal_vlan_id_put(struct rocker_port *rocker_port, static void rocker_port_internal_vlan_id_put(struct rocker_port *rocker_port,
enum switchdev_trans trans,
int ifindex) int ifindex)
{ {
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
@@ -3751,14 +3906,15 @@ static void rocker_port_internal_vlan_id_put(struct rocker_port *rocker_port,
bit = ntohs(found->vlan_id) - ROCKER_INTERNAL_VLAN_ID_BASE; bit = ntohs(found->vlan_id) - ROCKER_INTERNAL_VLAN_ID_BASE;
clear_bit(bit, rocker->internal_vlan_bitmap); clear_bit(bit, rocker->internal_vlan_bitmap);
hash_del(&found->entry); hash_del(&found->entry);
kfree(found); rocker_port_kfree(rocker_port, trans, found);
} }
not_found: not_found:
spin_unlock_irqrestore(&rocker->internal_vlan_tbl_lock, lock_flags); spin_unlock_irqrestore(&rocker->internal_vlan_tbl_lock, lock_flags);
} }
static int rocker_port_fib_ipv4(struct rocker_port *rocker_port, __be32 dst, static int rocker_port_fib_ipv4(struct rocker_port *rocker_port,
enum switchdev_trans trans, __be32 dst,
int dst_len, struct fib_info *fi, u32 tb_id, int dst_len, struct fib_info *fi, u32 tb_id,
int flags) int flags)
{ {
@@ -3782,7 +3938,7 @@ static int rocker_port_fib_ipv4(struct rocker_port *rocker_port, __be32 dst,
has_gw = !!nh->nh_gw; has_gw = !!nh->nh_gw;
if (has_gw && nh_on_port) { if (has_gw && nh_on_port) {
err = rocker_port_ipv4_nh(rocker_port, flags, err = rocker_port_ipv4_nh(rocker_port, trans, flags,
nh->nh_gw, &index); nh->nh_gw, &index);
if (err) if (err)
return err; return err;
@@ -3793,7 +3949,7 @@ static int rocker_port_fib_ipv4(struct rocker_port *rocker_port, __be32 dst,
group_id = ROCKER_GROUP_L2_INTERFACE(internal_vlan_id, 0); group_id = ROCKER_GROUP_L2_INTERFACE(internal_vlan_id, 0);
} }
err = rocker_flow_tbl_ucast4_routing(rocker_port, eth_type, dst, err = rocker_flow_tbl_ucast4_routing(rocker_port, trans, eth_type, dst,
dst_mask, priority, goto_tbl, dst_mask, priority, goto_tbl,
group_id, flags); group_id, flags);
if (err) if (err)
@@ -3832,7 +3988,7 @@ static int rocker_port_open(struct net_device *dev)
goto err_request_rx_irq; goto err_request_rx_irq;
} }
err = rocker_port_fwd_enable(rocker_port); err = rocker_port_fwd_enable(rocker_port, SWITCHDEV_TRANS_NONE);
if (err) if (err)
goto err_fwd_enable; goto err_fwd_enable;
@@ -3859,7 +4015,7 @@ static int rocker_port_stop(struct net_device *dev)
rocker_port_set_enable(rocker_port, false); rocker_port_set_enable(rocker_port, false);
napi_disable(&rocker_port->napi_rx); napi_disable(&rocker_port->napi_rx);
napi_disable(&rocker_port->napi_tx); napi_disable(&rocker_port->napi_tx);
rocker_port_fwd_disable(rocker_port); rocker_port_fwd_disable(rocker_port, SWITCHDEV_TRANS_NONE);
free_irq(rocker_msix_rx_vector(rocker_port), rocker_port); free_irq(rocker_msix_rx_vector(rocker_port), rocker_port);
free_irq(rocker_msix_tx_vector(rocker_port), rocker_port); free_irq(rocker_msix_tx_vector(rocker_port), rocker_port);
rocker_port_dma_rings_fini(rocker_port); rocker_port_dma_rings_fini(rocker_port);
@@ -4012,11 +4168,12 @@ static int rocker_port_vlan_rx_add_vid(struct net_device *dev,
struct rocker_port *rocker_port = netdev_priv(dev); struct rocker_port *rocker_port = netdev_priv(dev);
int err; int err;
err = rocker_port_vlan(rocker_port, 0, vid); err = rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE, 0, vid);
if (err) if (err)
return err; return err;
return rocker_port_router_mac(rocker_port, 0, htons(vid)); return rocker_port_router_mac(rocker_port, SWITCHDEV_TRANS_NONE,
0, htons(vid));
} }
static int rocker_port_vlan_rx_kill_vid(struct net_device *dev, static int rocker_port_vlan_rx_kill_vid(struct net_device *dev,
@@ -4025,12 +4182,13 @@ static int rocker_port_vlan_rx_kill_vid(struct net_device *dev,
struct rocker_port *rocker_port = netdev_priv(dev); struct rocker_port *rocker_port = netdev_priv(dev);
int err; int err;
err = rocker_port_router_mac(rocker_port, ROCKER_OP_FLAG_REMOVE, err = rocker_port_router_mac(rocker_port, SWITCHDEV_TRANS_NONE,
htons(vid)); ROCKER_OP_FLAG_REMOVE, htons(vid));
if (err) if (err)
return err; return err;
return rocker_port_vlan(rocker_port, ROCKER_OP_FLAG_REMOVE, vid); return rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE, vid);
} }
static int rocker_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], static int rocker_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
@@ -4045,7 +4203,8 @@ static int rocker_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
if (!rocker_port_is_bridged(rocker_port)) if (!rocker_port_is_bridged(rocker_port))
return -EINVAL; return -EINVAL;
return rocker_port_fdb(rocker_port, addr, vlan_id, flags); return rocker_port_fdb(rocker_port, SWITCHDEV_TRANS_NONE,
addr, vlan_id, flags);
} }
static int rocker_port_fdb_del(struct ndmsg *ndm, struct nlattr *tb[], static int rocker_port_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
@@ -4059,7 +4218,8 @@ static int rocker_port_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
if (!rocker_port_is_bridged(rocker_port)) if (!rocker_port_is_bridged(rocker_port))
return -EINVAL; return -EINVAL;
return rocker_port_fdb(rocker_port, addr, vlan_id, flags); return rocker_port_fdb(rocker_port, SWITCHDEV_TRANS_NONE,
addr, vlan_id, flags);
} }
static int rocker_fdb_fill_info(struct sk_buff *skb, static int rocker_fdb_fill_info(struct sk_buff *skb,
@@ -4135,58 +4295,6 @@ static int rocker_port_fdb_dump(struct sk_buff *skb,
return idx; return idx;
} }
static int rocker_port_bridge_setlink(struct net_device *dev,
struct nlmsghdr *nlh, u16 flags)
{
struct rocker_port *rocker_port = netdev_priv(dev);
struct nlattr *protinfo;
struct nlattr *attr;
int err;
protinfo = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
IFLA_PROTINFO);
if (protinfo) {
attr = nla_find_nested(protinfo, IFLA_BRPORT_LEARNING);
if (attr) {
if (nla_len(attr) < sizeof(u8))
return -EINVAL;
if (nla_get_u8(attr))
rocker_port->brport_flags |= BR_LEARNING;
else
rocker_port->brport_flags &= ~BR_LEARNING;
err = rocker_port_set_learning(rocker_port);
if (err)
return err;
}
attr = nla_find_nested(protinfo, IFLA_BRPORT_LEARNING_SYNC);
if (attr) {
if (nla_len(attr) < sizeof(u8))
return -EINVAL;
if (nla_get_u8(attr))
rocker_port->brport_flags |= BR_LEARNING_SYNC;
else
rocker_port->brport_flags &= ~BR_LEARNING_SYNC;
}
}
return 0;
}
static int rocker_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
struct net_device *dev,
u32 filter_mask, int nlflags)
{
struct rocker_port *rocker_port = netdev_priv(dev);
u16 mode = BRIDGE_MODE_UNDEF;
u32 mask = BR_LEARNING | BR_LEARNING_SYNC;
return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode,
rocker_port->brport_flags, mask,
nlflags);
}
static int rocker_port_get_phys_port_name(struct net_device *dev, static int rocker_port_get_phys_port_name(struct net_device *dev,
char *buf, size_t len) char *buf, size_t len)
{ {
@@ -4195,9 +4303,10 @@ static int rocker_port_get_phys_port_name(struct net_device *dev,
int err; int err;
err = rocker_cmd_exec(rocker_port->rocker, rocker_port, err = rocker_cmd_exec(rocker_port->rocker, rocker_port,
SWITCHDEV_TRANS_NONE,
rocker_cmd_get_port_settings_prep, NULL, rocker_cmd_get_port_settings_prep, NULL,
rocker_cmd_get_port_settings_phys_name_proc, rocker_cmd_get_port_settings_phys_name_proc,
&name, false); &name);
return err ? -EOPNOTSUPP : 0; return err ? -EOPNOTSUPP : 0;
} }
@@ -4212,8 +4321,9 @@ static const struct net_device_ops rocker_port_netdev_ops = {
.ndo_fdb_add = rocker_port_fdb_add, .ndo_fdb_add = rocker_port_fdb_add,
.ndo_fdb_del = rocker_port_fdb_del, .ndo_fdb_del = rocker_port_fdb_del,
.ndo_fdb_dump = rocker_port_fdb_dump, .ndo_fdb_dump = rocker_port_fdb_dump,
.ndo_bridge_setlink = rocker_port_bridge_setlink, .ndo_bridge_getlink = switchdev_port_bridge_getlink,
.ndo_bridge_getlink = rocker_port_bridge_getlink, .ndo_bridge_setlink = switchdev_port_bridge_setlink,
.ndo_bridge_dellink = switchdev_port_bridge_dellink,
.ndo_get_phys_port_name = rocker_port_get_phys_port_name, .ndo_get_phys_port_name = rocker_port_get_phys_port_name,
}; };
@@ -4221,54 +4331,216 @@ static const struct net_device_ops rocker_port_netdev_ops = {
* swdev interface * swdev interface
********************/ ********************/
static int rocker_port_swdev_parent_id_get(struct net_device *dev, static int rocker_port_attr_get(struct net_device *dev,
struct netdev_phys_item_id *psid) struct switchdev_attr *attr)
{ {
struct rocker_port *rocker_port = netdev_priv(dev); struct rocker_port *rocker_port = netdev_priv(dev);
struct rocker *rocker = rocker_port->rocker; struct rocker *rocker = rocker_port->rocker;
psid->id_len = sizeof(rocker->hw.id); switch (attr->id) {
memcpy(&psid->id, &rocker->hw.id, psid->id_len); case SWITCHDEV_ATTR_PORT_PARENT_ID:
attr->ppid.id_len = sizeof(rocker->hw.id);
memcpy(&attr->ppid.id, &rocker->hw.id, attr->ppid.id_len);
break;
case SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS:
attr->brport_flags = rocker_port->brport_flags;
break;
default:
return -EOPNOTSUPP;
}
return 0; return 0;
} }
static int rocker_port_swdev_port_stp_update(struct net_device *dev, u8 state) static void rocker_port_trans_abort(struct rocker_port *rocker_port)
{
struct list_head *mem, *tmp;
list_for_each_safe(mem, tmp, &rocker_port->trans_mem) {
list_del(mem);
kfree(mem);
}
}
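
rocker_port_trans_abort() walks rocker_port->trans_mem and frees everything the PREPARE phase reserved, which is what the SWITCHDEV_TRANS_ABORT cases in rocker_port_attr_set() and rocker_port_obj_add() below rely on. The caller-side sequence the driver is written against might look roughly like this; driver_attr_set and two_phase_set are illustrative stand-ins, not the switchdev core API.

enum trans_phase { TRANS_PREPARE, TRANS_COMMIT, TRANS_ABORT };

/* Stand-in for one driver operation that is replayed per phase. */
static int driver_attr_set(enum trans_phase ph, int stp_state)
{
	(void)stp_state;               /* unused in this sketch */

	switch (ph) {
	case TRANS_PREPARE:            /* validate inputs, reserve memory only */
		return 0;
	case TRANS_ABORT:              /* free whatever PREPARE reserved */
		return 0;
	case TRANS_COMMIT:             /* program the device using reservations */
		return 0;
	}
	return 0;
}

/* If PREPARE fails, abort and never touch the device; if it succeeds,
 * COMMIT is expected to succeed using the reserved resources. */
static int two_phase_set(int stp_state)
{
	int err = driver_attr_set(TRANS_PREPARE, stp_state);

	if (err) {
		driver_attr_set(TRANS_ABORT, stp_state);
		return err;
	}
	return driver_attr_set(TRANS_COMMIT, stp_state);
}
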
static int rocker_port_brport_flags_set(struct rocker_port *rocker_port,
enum switchdev_trans trans,
unsigned long brport_flags)
{
unsigned long orig_flags;
int err = 0;
orig_flags = rocker_port->brport_flags;
rocker_port->brport_flags = brport_flags;
if ((orig_flags ^ rocker_port->brport_flags) & BR_LEARNING)
err = rocker_port_set_learning(rocker_port, trans);
if (trans == SWITCHDEV_TRANS_PREPARE)
rocker_port->brport_flags = orig_flags;
return err;
}
static int rocker_port_attr_set(struct net_device *dev,
struct switchdev_attr *attr)
{ {
struct rocker_port *rocker_port = netdev_priv(dev); struct rocker_port *rocker_port = netdev_priv(dev);
int err = 0;
switch (attr->trans) {
case SWITCHDEV_TRANS_PREPARE:
BUG_ON(!list_empty(&rocker_port->trans_mem));
break;
case SWITCHDEV_TRANS_ABORT:
rocker_port_trans_abort(rocker_port);
return 0;
default:
break;
}
switch (attr->id) {
case SWITCHDEV_ATTR_PORT_STP_STATE:
err = rocker_port_stp_update(rocker_port, attr->trans,
attr->stp_state);
break;
case SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS:
err = rocker_port_brport_flags_set(rocker_port, attr->trans,
attr->brport_flags);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
}
static int rocker_port_vlan_add(struct rocker_port *rocker_port,
enum switchdev_trans trans, u16 vid, u16 flags)
{
int err;
/* XXX deal with flags for PVID and untagged */
err = rocker_port_vlan(rocker_port, trans, 0, vid);
if (err)
return err;
return rocker_port_router_mac(rocker_port, trans, 0, htons(vid));
}
static int rocker_port_vlans_add(struct rocker_port *rocker_port,
enum switchdev_trans trans,
struct switchdev_obj_vlan *vlan)
{
u16 vid;
int err;
return rocker_port_stp_update(rocker_port, state); for (vid = vlan->vid_start; vid <= vlan->vid_end; vid++) {
err = rocker_port_vlan_add(rocker_port, trans,
vid, vlan->flags);
if (err)
return err;
}
return 0;
} }
static int rocker_port_swdev_fib_ipv4_add(struct net_device *dev, static int rocker_port_obj_add(struct net_device *dev,
__be32 dst, int dst_len, struct switchdev_obj *obj)
struct fib_info *fi,
u8 tos, u8 type,
u32 nlflags, u32 tb_id)
{ {
struct rocker_port *rocker_port = netdev_priv(dev); struct rocker_port *rocker_port = netdev_priv(dev);
int flags = 0; struct switchdev_obj_ipv4_fib *fib4;
int err = 0;
switch (obj->trans) {
case SWITCHDEV_TRANS_PREPARE:
BUG_ON(!list_empty(&rocker_port->trans_mem));
break;
case SWITCHDEV_TRANS_ABORT:
rocker_port_trans_abort(rocker_port);
return 0;
default:
break;
}
return rocker_port_fib_ipv4(rocker_port, dst, dst_len, switch (obj->id) {
fi, tb_id, flags); case SWITCHDEV_OBJ_PORT_VLAN:
err = rocker_port_vlans_add(rocker_port, obj->trans,
&obj->vlan);
break;
case SWITCHDEV_OBJ_IPV4_FIB:
fib4 = &obj->ipv4_fib;
err = rocker_port_fib_ipv4(rocker_port, obj->trans,
fib4->dst, fib4->dst_len,
fib4->fi, fib4->tb_id, 0);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
} }
static int rocker_port_swdev_fib_ipv4_del(struct net_device *dev, static int rocker_port_vlan_del(struct rocker_port *rocker_port,
__be32 dst, int dst_len, u16 vid, u16 flags)
struct fib_info *fi, {
u8 tos, u8 type, u32 tb_id) int err;
err = rocker_port_router_mac(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE, htons(vid));
if (err)
return err;
return rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE, vid);
}
static int rocker_port_vlans_del(struct rocker_port *rocker_port,
struct switchdev_obj_vlan *vlan)
{
u16 vid;
int err;
for (vid = vlan->vid_start; vid <= vlan->vid_end; vid++) {
err = rocker_port_vlan_del(rocker_port, vid, vlan->flags);
if (err)
return err;
}
return 0;
}
static int rocker_port_obj_del(struct net_device *dev,
struct switchdev_obj *obj)
{ {
struct rocker_port *rocker_port = netdev_priv(dev); struct rocker_port *rocker_port = netdev_priv(dev);
int flags = ROCKER_OP_FLAG_REMOVE; struct switchdev_obj_ipv4_fib *fib4;
int err = 0;
return rocker_port_fib_ipv4(rocker_port, dst, dst_len, switch (obj->id) {
fi, tb_id, flags); case SWITCHDEV_OBJ_PORT_VLAN:
err = rocker_port_vlans_del(rocker_port, &obj->vlan);
break;
case SWITCHDEV_OBJ_IPV4_FIB:
fib4 = &obj->ipv4_fib;
err = rocker_port_fib_ipv4(rocker_port, SWITCHDEV_TRANS_NONE,
fib4->dst, fib4->dst_len, fib4->fi,
fib4->tb_id, ROCKER_OP_FLAG_REMOVE);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
} }
static const struct swdev_ops rocker_port_swdev_ops = { static const struct switchdev_ops rocker_port_switchdev_ops = {
.swdev_parent_id_get = rocker_port_swdev_parent_id_get, .switchdev_port_attr_get = rocker_port_attr_get,
.swdev_port_stp_update = rocker_port_swdev_port_stp_update, .switchdev_port_attr_set = rocker_port_attr_set,
.swdev_fib_ipv4_add = rocker_port_swdev_fib_ipv4_add, .switchdev_port_obj_add = rocker_port_obj_add,
.swdev_fib_ipv4_del = rocker_port_swdev_fib_ipv4_del, .switchdev_port_obj_del = rocker_port_obj_del,
}; };
/******************** /********************
@@ -4399,9 +4671,10 @@ static int rocker_cmd_get_port_stats_ethtool(struct rocker_port *rocker_port,
void *priv) void *priv)
{ {
return rocker_cmd_exec(rocker_port->rocker, rocker_port, return rocker_cmd_exec(rocker_port->rocker, rocker_port,
SWITCHDEV_TRANS_NONE,
rocker_cmd_get_port_stats_prep, NULL, rocker_cmd_get_port_stats_prep, NULL,
rocker_cmd_get_port_stats_ethtool_proc, rocker_cmd_get_port_stats_ethtool_proc,
priv, false); priv);
} }
static void rocker_port_get_stats(struct net_device *dev, static void rocker_port_get_stats(struct net_device *dev,
@@ -4415,8 +4688,6 @@ static void rocker_port_get_stats(struct net_device *dev,
for (i = 0; i < ARRAY_SIZE(rocker_port_stats); ++i) for (i = 0; i < ARRAY_SIZE(rocker_port_stats); ++i)
data[i] = 0; data[i] = 0;
} }
return;
} }
static int rocker_port_get_sset_count(struct net_device *netdev, int sset) static int rocker_port_get_sset_count(struct net_device *netdev, int sset)
@@ -4470,8 +4741,9 @@ static int rocker_port_poll_tx(struct napi_struct *napi, int budget)
if (err == 0) { if (err == 0) {
rocker_port->dev->stats.tx_packets++; rocker_port->dev->stats.tx_packets++;
rocker_port->dev->stats.tx_bytes += skb->len; rocker_port->dev->stats.tx_bytes += skb->len;
} else } else {
rocker_port->dev->stats.tx_errors++; rocker_port->dev->stats.tx_errors++;
}
dev_kfree_skb_any(skb); dev_kfree_skb_any(skb);
credits++; credits++;
@@ -4583,7 +4855,8 @@ static void rocker_remove_ports(struct rocker *rocker)
for (i = 0; i < rocker->port_count; i++) { for (i = 0; i < rocker->port_count; i++) {
rocker_port = rocker->ports[i]; rocker_port = rocker->ports[i];
rocker_port_ig_tbl(rocker_port, ROCKER_OP_FLAG_REMOVE); rocker_port_ig_tbl(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE);
unregister_netdev(rocker_port->dev); unregister_netdev(rocker_port->dev);
} }
kfree(rocker->ports); kfree(rocker->ports);
@@ -4619,11 +4892,12 @@ static int rocker_probe_port(struct rocker *rocker, unsigned int port_number)
rocker_port->port_number = port_number; rocker_port->port_number = port_number;
rocker_port->pport = port_number + 1; rocker_port->pport = port_number + 1;
rocker_port->brport_flags = BR_LEARNING | BR_LEARNING_SYNC; rocker_port->brport_flags = BR_LEARNING | BR_LEARNING_SYNC;
INIT_LIST_HEAD(&rocker_port->trans_mem);
rocker_port_dev_addr_init(rocker, rocker_port); rocker_port_dev_addr_init(rocker, rocker_port);
dev->netdev_ops = &rocker_port_netdev_ops; dev->netdev_ops = &rocker_port_netdev_ops;
dev->ethtool_ops = &rocker_port_ethtool_ops; dev->ethtool_ops = &rocker_port_ethtool_ops;
dev->swdev_ops = &rocker_port_swdev_ops; dev->switchdev_ops = &rocker_port_switchdev_ops;
netif_napi_add(dev, &rocker_port->napi_tx, rocker_port_poll_tx, netif_napi_add(dev, &rocker_port->napi_tx, rocker_port_poll_tx,
NAPI_POLL_WEIGHT); NAPI_POLL_WEIGHT);
netif_napi_add(dev, &rocker_port->napi_rx, rocker_port_poll_rx, netif_napi_add(dev, &rocker_port->napi_rx, rocker_port_poll_rx,
...@@ -4631,8 +4905,7 @@ static int rocker_probe_port(struct rocker *rocker, unsigned int port_number) ...@@ -4631,8 +4905,7 @@ static int rocker_probe_port(struct rocker *rocker, unsigned int port_number)
rocker_carrier_init(rocker_port); rocker_carrier_init(rocker_port);
dev->features |= NETIF_F_NETNS_LOCAL | dev->features |= NETIF_F_NETNS_LOCAL |
NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_FILTER;
NETIF_F_HW_SWITCH_OFFLOAD;
err = register_netdev(dev); err = register_netdev(dev);
if (err) { if (err) {
...@@ -4641,11 +4914,13 @@ static int rocker_probe_port(struct rocker *rocker, unsigned int port_number) ...@@ -4641,11 +4914,13 @@ static int rocker_probe_port(struct rocker *rocker, unsigned int port_number)
} }
rocker->ports[port_number] = rocker_port; rocker->ports[port_number] = rocker_port;
rocker_port_set_learning(rocker_port); rocker_port_set_learning(rocker_port, SWITCHDEV_TRANS_NONE);
rocker_port->internal_vlan_id = rocker_port->internal_vlan_id =
rocker_port_internal_vlan_id_get(rocker_port, dev->ifindex); rocker_port_internal_vlan_id_get(rocker_port,
err = rocker_port_ig_tbl(rocker_port, 0); SWITCHDEV_TRANS_NONE,
dev->ifindex);
err = rocker_port_ig_tbl(rocker_port, SWITCHDEV_TRANS_NONE, 0);
if (err) { if (err) {
dev_err(&pdev->dev, "install ig port table failed\n"); dev_err(&pdev->dev, "install ig port table failed\n");
goto err_port_ig_tbl; goto err_port_ig_tbl;
...@@ -4892,43 +5167,47 @@ static int rocker_port_bridge_join(struct rocker_port *rocker_port, ...@@ -4892,43 +5167,47 @@ static int rocker_port_bridge_join(struct rocker_port *rocker_port,
{ {
int err; int err;
rocker_port_internal_vlan_id_put(rocker_port, rocker_port_internal_vlan_id_put(rocker_port, SWITCHDEV_TRANS_NONE,
rocker_port->dev->ifindex); rocker_port->dev->ifindex);
rocker_port->bridge_dev = bridge; rocker_port->bridge_dev = bridge;
/* Use bridge internal VLAN ID for untagged pkts */ /* Use bridge internal VLAN ID for untagged pkts */
err = rocker_port_vlan(rocker_port, ROCKER_OP_FLAG_REMOVE, 0); err = rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE, 0);
if (err) if (err)
return err; return err;
rocker_port->internal_vlan_id = rocker_port->internal_vlan_id =
rocker_port_internal_vlan_id_get(rocker_port, rocker_port_internal_vlan_id_get(rocker_port,
SWITCHDEV_TRANS_NONE,
bridge->ifindex); bridge->ifindex);
return rocker_port_vlan(rocker_port, 0, 0); return rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE, 0, 0);
} }
static int rocker_port_bridge_leave(struct rocker_port *rocker_port) static int rocker_port_bridge_leave(struct rocker_port *rocker_port)
{ {
int err; int err;
rocker_port_internal_vlan_id_put(rocker_port, rocker_port_internal_vlan_id_put(rocker_port, SWITCHDEV_TRANS_NONE,
rocker_port->bridge_dev->ifindex); rocker_port->bridge_dev->ifindex);
rocker_port->bridge_dev = NULL; rocker_port->bridge_dev = NULL;
/* Use port internal VLAN ID for untagged pkts */ /* Use port internal VLAN ID for untagged pkts */
err = rocker_port_vlan(rocker_port, ROCKER_OP_FLAG_REMOVE, 0); err = rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE, 0);
if (err) if (err)
return err; return err;
rocker_port->internal_vlan_id = rocker_port->internal_vlan_id =
rocker_port_internal_vlan_id_get(rocker_port, rocker_port_internal_vlan_id_get(rocker_port,
SWITCHDEV_TRANS_NONE,
rocker_port->dev->ifindex); rocker_port->dev->ifindex);
err = rocker_port_vlan(rocker_port, 0, 0); err = rocker_port_vlan(rocker_port, SWITCHDEV_TRANS_NONE, 0, 0);
if (err) if (err)
return err; return err;
if (rocker_port->dev->flags & IFF_UP) if (rocker_port->dev->flags & IFF_UP)
err = rocker_port_fwd_enable(rocker_port); err = rocker_port_fwd_enable(rocker_port, SWITCHDEV_TRANS_NONE);
return err; return err;
} }
...@@ -4990,7 +5269,8 @@ static int rocker_neigh_update(struct net_device *dev, struct neighbour *n) ...@@ -4990,7 +5269,8 @@ static int rocker_neigh_update(struct net_device *dev, struct neighbour *n)
int flags = (n->nud_state & NUD_VALID) ? 0 : ROCKER_OP_FLAG_REMOVE; int flags = (n->nud_state & NUD_VALID) ? 0 : ROCKER_OP_FLAG_REMOVE;
__be32 ip_addr = *(__be32 *)n->primary_key; __be32 ip_addr = *(__be32 *)n->primary_key;
return rocker_port_ipv4_neigh(rocker_port, flags, ip_addr, n->ha); return rocker_port_ipv4_neigh(rocker_port, SWITCHDEV_TRANS_NONE,
flags, ip_addr, n->ha);
} }
static int rocker_netevent_event(struct notifier_block *unused, static int rocker_netevent_event(struct notifier_block *unused,
......
...@@ -65,9 +65,9 @@ enum { ...@@ -65,9 +65,9 @@ enum {
#define ROCKER_TEST_DMA_CTRL 0x0034 #define ROCKER_TEST_DMA_CTRL 0x0034
/* Rocker test register ctrl */ /* Rocker test register ctrl */
#define ROCKER_TEST_DMA_CTRL_CLEAR (1 << 0) #define ROCKER_TEST_DMA_CTRL_CLEAR BIT(0)
#define ROCKER_TEST_DMA_CTRL_FILL (1 << 1) #define ROCKER_TEST_DMA_CTRL_FILL BIT(1)
#define ROCKER_TEST_DMA_CTRL_INVERT (1 << 2) #define ROCKER_TEST_DMA_CTRL_INVERT BIT(2)
/* Rocker DMA ring register offsets */ /* Rocker DMA ring register offsets */
#define ROCKER_DMA_DESC_ADDR(x) (0x1000 + (x) * 32) /* 8-byte */ #define ROCKER_DMA_DESC_ADDR(x) (0x1000 + (x) * 32) /* 8-byte */
...@@ -79,7 +79,7 @@ enum { ...@@ -79,7 +79,7 @@ enum {
#define ROCKER_DMA_DESC_RES1(x) (0x101c + (x) * 32) #define ROCKER_DMA_DESC_RES1(x) (0x101c + (x) * 32)
/* Rocker dma ctrl register bits */ /* Rocker dma ctrl register bits */
#define ROCKER_DMA_DESC_CTRL_RESET (1 << 0) #define ROCKER_DMA_DESC_CTRL_RESET BIT(0)
/* Rocker DMA ring types */ /* Rocker DMA ring types */
enum rocker_dma_type { enum rocker_dma_type {
...@@ -111,7 +111,7 @@ struct rocker_desc { ...@@ -111,7 +111,7 @@ struct rocker_desc {
u16 comp_err; u16 comp_err;
}; };
#define ROCKER_DMA_DESC_COMP_ERR_GEN (1 << 15) #define ROCKER_DMA_DESC_COMP_ERR_GEN BIT(15)
/* Rocker DMA TLV struct */ /* Rocker DMA TLV struct */
struct rocker_tlv { struct rocker_tlv {
...@@ -237,14 +237,14 @@ enum { ...@@ -237,14 +237,14 @@ enum {
ROCKER_TLV_RX_MAX = __ROCKER_TLV_RX_MAX - 1, ROCKER_TLV_RX_MAX = __ROCKER_TLV_RX_MAX - 1,
}; };
#define ROCKER_RX_FLAGS_IPV4 (1 << 0) #define ROCKER_RX_FLAGS_IPV4 BIT(0)
#define ROCKER_RX_FLAGS_IPV6 (1 << 1) #define ROCKER_RX_FLAGS_IPV6 BIT(1)
#define ROCKER_RX_FLAGS_CSUM_CALC (1 << 2) #define ROCKER_RX_FLAGS_CSUM_CALC BIT(2)
#define ROCKER_RX_FLAGS_IPV4_CSUM_GOOD (1 << 3) #define ROCKER_RX_FLAGS_IPV4_CSUM_GOOD BIT(3)
#define ROCKER_RX_FLAGS_IP_FRAG (1 << 4) #define ROCKER_RX_FLAGS_IP_FRAG BIT(4)
#define ROCKER_RX_FLAGS_TCP (1 << 5) #define ROCKER_RX_FLAGS_TCP BIT(5)
#define ROCKER_RX_FLAGS_UDP (1 << 6) #define ROCKER_RX_FLAGS_UDP BIT(6)
#define ROCKER_RX_FLAGS_TCP_UDP_CSUM_GOOD (1 << 7) #define ROCKER_RX_FLAGS_TCP_UDP_CSUM_GOOD BIT(7)
enum { enum {
ROCKER_TLV_TX_UNSPEC, ROCKER_TLV_TX_UNSPEC,
...@@ -460,6 +460,6 @@ enum rocker_of_dpa_overlay_type { ...@@ -460,6 +460,6 @@ enum rocker_of_dpa_overlay_type {
#define ROCKER_SWITCH_ID 0x0320 /* 8-byte */ #define ROCKER_SWITCH_ID 0x0320 /* 8-byte */
/* Rocker control bits */ /* Rocker control bits */
#define ROCKER_CONTROL_RESET (1 << 0) #define ROCKER_CONTROL_RESET BIT(0)
#endif #endif
...@@ -1924,7 +1924,7 @@ static netdev_features_t team_fix_features(struct net_device *dev, ...@@ -1924,7 +1924,7 @@ static netdev_features_t team_fix_features(struct net_device *dev,
struct team *team = netdev_priv(dev); struct team *team = netdev_priv(dev);
netdev_features_t mask; netdev_features_t mask;
mask = features | NETIF_F_HW_SWITCH_OFFLOAD; mask = features;
features &= ~NETIF_F_ONE_FOR_ALL; features &= ~NETIF_F_ONE_FOR_ALL;
features |= NETIF_F_ALL_FOR_ALL; features |= NETIF_F_ALL_FOR_ALL;
...@@ -1977,8 +1977,9 @@ static const struct net_device_ops team_netdev_ops = { ...@@ -1977,8 +1977,9 @@ static const struct net_device_ops team_netdev_ops = {
.ndo_del_slave = team_del_slave, .ndo_del_slave = team_del_slave,
.ndo_fix_features = team_fix_features, .ndo_fix_features = team_fix_features,
.ndo_change_carrier = team_change_carrier, .ndo_change_carrier = team_change_carrier,
.ndo_bridge_setlink = ndo_dflt_netdev_switch_port_bridge_setlink, .ndo_bridge_setlink = switchdev_port_bridge_setlink,
.ndo_bridge_dellink = ndo_dflt_netdev_switch_port_bridge_dellink, .ndo_bridge_getlink = switchdev_port_bridge_getlink,
.ndo_bridge_dellink = switchdev_port_bridge_dellink,
.ndo_features_check = passthru_features_check, .ndo_features_check = passthru_features_check,
}; };
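Editor's aside: since the new switchdev helpers recurse over lower devices themselves, any other stacked master could delegate its SELF bridge ndo ops the same way team does above. A hypothetical driver "foo" (name invented for illustration) would need nothing more than:

static const struct net_device_ops foo_netdev_ops = {
	/* ... usual ndo ops for the driver ... */
	.ndo_bridge_setlink	= switchdev_port_bridge_setlink,
	.ndo_bridge_getlink	= switchdev_port_bridge_getlink,
	.ndo_bridge_dellink	= switchdev_port_bridge_dellink,
};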
......
...@@ -66,7 +66,6 @@ enum { ...@@ -66,7 +66,6 @@ enum {
NETIF_F_HW_VLAN_STAG_FILTER_BIT,/* Receive filtering on VLAN STAGs */ NETIF_F_HW_VLAN_STAG_FILTER_BIT,/* Receive filtering on VLAN STAGs */
NETIF_F_HW_L2FW_DOFFLOAD_BIT, /* Allow L2 Forwarding in Hardware */ NETIF_F_HW_L2FW_DOFFLOAD_BIT, /* Allow L2 Forwarding in Hardware */
NETIF_F_BUSY_POLL_BIT, /* Busy poll */ NETIF_F_BUSY_POLL_BIT, /* Busy poll */
NETIF_F_HW_SWITCH_OFFLOAD_BIT, /* HW switch offload */
/* /*
* Add your fresh new feature above and remember to update * Add your fresh new feature above and remember to update
...@@ -125,7 +124,6 @@ enum { ...@@ -125,7 +124,6 @@ enum {
#define NETIF_F_HW_VLAN_STAG_TX __NETIF_F(HW_VLAN_STAG_TX) #define NETIF_F_HW_VLAN_STAG_TX __NETIF_F(HW_VLAN_STAG_TX)
#define NETIF_F_HW_L2FW_DOFFLOAD __NETIF_F(HW_L2FW_DOFFLOAD) #define NETIF_F_HW_L2FW_DOFFLOAD __NETIF_F(HW_L2FW_DOFFLOAD)
#define NETIF_F_BUSY_POLL __NETIF_F(BUSY_POLL) #define NETIF_F_BUSY_POLL __NETIF_F(BUSY_POLL)
#define NETIF_F_HW_SWITCH_OFFLOAD __NETIF_F(HW_SWITCH_OFFLOAD)
/* Features valid for ethtool to change */ /* Features valid for ethtool to change */
/* = all defined minus driver/device-class-related */ /* = all defined minus driver/device-class-related */
...@@ -161,8 +159,7 @@ enum { ...@@ -161,8 +159,7 @@ enum {
*/ */
#define NETIF_F_ONE_FOR_ALL (NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ROBUST | \ #define NETIF_F_ONE_FOR_ALL (NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ROBUST | \
NETIF_F_SG | NETIF_F_HIGHDMA | \ NETIF_F_SG | NETIF_F_HIGHDMA | \
NETIF_F_FRAGLIST | NETIF_F_VLAN_CHALLENGED | \ NETIF_F_FRAGLIST | NETIF_F_VLAN_CHALLENGED)
NETIF_F_HW_SWITCH_OFFLOAD)
/* /*
* If one device doesn't support one of these features, then disable it * If one device doesn't support one of these features, then disable it
......
...@@ -1567,7 +1567,7 @@ struct net_device { ...@@ -1567,7 +1567,7 @@ struct net_device {
const struct net_device_ops *netdev_ops; const struct net_device_ops *netdev_ops;
const struct ethtool_ops *ethtool_ops; const struct ethtool_ops *ethtool_ops;
#ifdef CONFIG_NET_SWITCHDEV #ifdef CONFIG_NET_SWITCHDEV
const struct swdev_ops *swdev_ops; const struct switchdev_ops *switchdev_ops;
#endif #endif
const struct header_ops *header_ops; const struct header_ops *header_ops;
......
...@@ -14,153 +14,210 @@ ...@@ -14,153 +14,210 @@
#include <linux/netdevice.h> #include <linux/netdevice.h>
#include <linux/notifier.h> #include <linux/notifier.h>
#define SWITCHDEV_F_NO_RECURSE BIT(0)
enum switchdev_trans {
SWITCHDEV_TRANS_NONE,
SWITCHDEV_TRANS_PREPARE,
SWITCHDEV_TRANS_ABORT,
SWITCHDEV_TRANS_COMMIT,
};
enum switchdev_attr_id {
SWITCHDEV_ATTR_UNDEFINED,
SWITCHDEV_ATTR_PORT_PARENT_ID,
SWITCHDEV_ATTR_PORT_STP_STATE,
SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS,
};
struct switchdev_attr {
enum switchdev_attr_id id;
enum switchdev_trans trans;
u32 flags;
union {
struct netdev_phys_item_id ppid; /* PORT_PARENT_ID */
u8 stp_state; /* PORT_STP_STATE */
unsigned long brport_flags; /* PORT_BRIDGE_FLAGS */
};
};
struct fib_info; struct fib_info;
enum switchdev_obj_id {
SWITCHDEV_OBJ_UNDEFINED,
SWITCHDEV_OBJ_PORT_VLAN,
SWITCHDEV_OBJ_IPV4_FIB,
};
struct switchdev_obj {
enum switchdev_obj_id id;
enum switchdev_trans trans;
union {
struct switchdev_obj_vlan { /* PORT_VLAN */
u16 flags;
u16 vid_start;
u16 vid_end;
} vlan;
struct switchdev_obj_ipv4_fib { /* IPV4_FIB */
u32 dst;
int dst_len;
struct fib_info *fi;
u8 tos;
u8 type;
u32 nlflags;
u32 tb_id;
} ipv4_fib;
};
};
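Editor's note: a minimal, hypothetical sketch of how a caller fills in the new attr structure. The union member used must match the id, and trans is driven by the switchdev core during the transaction, not by the caller; the state value here is illustrative.

struct switchdev_attr stp_attr = {
	.id = SWITCHDEV_ATTR_PORT_STP_STATE,
	.stp_state = BR_STATE_FORWARDING,	/* illustrative value */
};

A switchdev_obj is initialized the same way, pairing the id with the matching union member (see the VLAN example further down).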
/** /**
* struct switchdev_ops - switchdev operations * struct switchdev_ops - switchdev operations
* *
* @swdev_parent_id_get: Called to get an ID of the switch chip this port * @switchdev_port_attr_get: Get a port attribute (see switchdev_attr).
* is part of. If driver implements this, it indicates that it
* represents a port of a switch chip.
* *
* @swdev_port_stp_update: Called to notify switch device port of bridge * @switchdev_port_attr_set: Set a port attribute (see switchdev_attr).
* port STP state change.
* *
* @swdev_fib_ipv4_add: Called to add/modify IPv4 route to switch device. * @switchdev_port_obj_add: Add an object to port (see switchdev_obj).
* *
* @swdev_fib_ipv4_del: Called to delete IPv4 route from switch device. * @switchdev_port_obj_del: Delete an object from port (see switchdev_obj).
*/ */
struct swdev_ops { struct switchdev_ops {
int (*swdev_parent_id_get)(struct net_device *dev, int (*switchdev_port_attr_get)(struct net_device *dev,
struct netdev_phys_item_id *psid); struct switchdev_attr *attr);
int (*swdev_port_stp_update)(struct net_device *dev, u8 state); int (*switchdev_port_attr_set)(struct net_device *dev,
int (*swdev_fib_ipv4_add)(struct net_device *dev, __be32 dst, struct switchdev_attr *attr);
int dst_len, struct fib_info *fi, int (*switchdev_port_obj_add)(struct net_device *dev,
u8 tos, u8 type, u32 nlflags, struct switchdev_obj *obj);
u32 tb_id); int (*switchdev_port_obj_del)(struct net_device *dev,
int (*swdev_fib_ipv4_del)(struct net_device *dev, __be32 dst, struct switchdev_obj *obj);
int dst_len, struct fib_info *fi,
u8 tos, u8 type, u32 tb_id);
}; };
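Editor's sketch (hypothetical driver, names invented): a minimal switchdev_ops implementation showing how attr->trans is expected to be handled. PREPARE only validates or reserves, ABORT drops any reservation, and COMMIT applies the already-validated change and must not fail. The DSA slave conversion later in this series follows the same pattern.

struct example_port {
	u8 stp_state;
};

static int example_port_attr_set(struct net_device *dev,
				 struct switchdev_attr *attr)
{
	struct example_port *port = netdev_priv(dev);

	switch (attr->id) {
	case SWITCHDEV_ATTR_PORT_STP_STATE:
		if (attr->trans == SWITCHDEV_TRANS_PREPARE)
			return 0;	/* nothing to reserve; accept */
		if (attr->trans == SWITCHDEV_TRANS_COMMIT)
			port->stp_state = attr->stp_state;
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}

static const struct switchdev_ops example_port_switchdev_ops = {
	.switchdev_port_attr_set = example_port_attr_set,
};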
enum netdev_switch_notifier_type { enum switchdev_notifier_type {
NETDEV_SWITCH_FDB_ADD = 1, SWITCHDEV_FDB_ADD = 1,
NETDEV_SWITCH_FDB_DEL, SWITCHDEV_FDB_DEL,
}; };
struct netdev_switch_notifier_info { struct switchdev_notifier_info {
struct net_device *dev; struct net_device *dev;
}; };
struct netdev_switch_notifier_fdb_info { struct switchdev_notifier_fdb_info {
struct netdev_switch_notifier_info info; /* must be first */ struct switchdev_notifier_info info; /* must be first */
const unsigned char *addr; const unsigned char *addr;
u16 vid; u16 vid;
}; };
static inline struct net_device * static inline struct net_device *
netdev_switch_notifier_info_to_dev(const struct netdev_switch_notifier_info *info) switchdev_notifier_info_to_dev(const struct switchdev_notifier_info *info)
{ {
return info->dev; return info->dev;
} }
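Editor's sketch (hypothetical helper): how a port driver would report a hardware-learned MAC address through the renamed notifier chain; the bridge handler converted below (br_switchdev_event) is the consumer of these events.

static void example_report_learned_fdb(struct net_device *port_dev,
				       const unsigned char *addr, u16 vid)
{
	struct switchdev_notifier_fdb_info info = {
		.addr = addr,
		.vid = vid,
	};

	/* info.info.dev is filled in by call_switchdev_notifiers() */
	call_switchdev_notifiers(SWITCHDEV_FDB_ADD, port_dev, &info.info);
}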
#ifdef CONFIG_NET_SWITCHDEV #ifdef CONFIG_NET_SWITCHDEV
int netdev_switch_parent_id_get(struct net_device *dev, int switchdev_port_attr_get(struct net_device *dev,
struct netdev_phys_item_id *psid); struct switchdev_attr *attr);
int netdev_switch_port_stp_update(struct net_device *dev, u8 state); int switchdev_port_attr_set(struct net_device *dev,
int register_netdev_switch_notifier(struct notifier_block *nb); struct switchdev_attr *attr);
int unregister_netdev_switch_notifier(struct notifier_block *nb); int switchdev_port_obj_add(struct net_device *dev, struct switchdev_obj *obj);
int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev, int switchdev_port_obj_del(struct net_device *dev, struct switchdev_obj *obj);
struct netdev_switch_notifier_info *info); int register_switchdev_notifier(struct notifier_block *nb);
int netdev_switch_port_bridge_setlink(struct net_device *dev, int unregister_switchdev_notifier(struct notifier_block *nb);
struct nlmsghdr *nlh, u16 flags); int call_switchdev_notifiers(unsigned long val, struct net_device *dev,
int netdev_switch_port_bridge_dellink(struct net_device *dev, struct switchdev_notifier_info *info);
struct nlmsghdr *nlh, u16 flags); int switchdev_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
int ndo_dflt_netdev_switch_port_bridge_dellink(struct net_device *dev, struct net_device *dev, u32 filter_mask,
struct nlmsghdr *nlh, u16 flags); int nlflags);
int ndo_dflt_netdev_switch_port_bridge_setlink(struct net_device *dev, int switchdev_port_bridge_setlink(struct net_device *dev,
struct nlmsghdr *nlh, u16 flags); struct nlmsghdr *nlh, u16 flags);
int netdev_switch_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi, int switchdev_port_bridge_dellink(struct net_device *dev,
u8 tos, u8 type, u32 nlflags, u32 tb_id); struct nlmsghdr *nlh, u16 flags);
int netdev_switch_fib_ipv4_del(u32 dst, int dst_len, struct fib_info *fi, int switchdev_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
u8 tos, u8 type, u32 tb_id); u8 tos, u8 type, u32 nlflags, u32 tb_id);
void netdev_switch_fib_ipv4_abort(struct fib_info *fi); int switchdev_fib_ipv4_del(u32 dst, int dst_len, struct fib_info *fi,
u8 tos, u8 type, u32 tb_id);
void switchdev_fib_ipv4_abort(struct fib_info *fi);
#else #else
static inline int netdev_switch_parent_id_get(struct net_device *dev, static inline int switchdev_port_attr_get(struct net_device *dev,
struct netdev_phys_item_id *psid) struct switchdev_attr *attr)
{ {
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
static inline int netdev_switch_port_stp_update(struct net_device *dev, static inline int switchdev_port_attr_set(struct net_device *dev,
u8 state) struct switchdev_attr *attr)
{ {
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
static inline int register_netdev_switch_notifier(struct notifier_block *nb) static inline int switchdev_port_obj_add(struct net_device *dev,
struct switchdev_obj *obj)
{ {
return 0; return -EOPNOTSUPP;
} }
static inline int unregister_netdev_switch_notifier(struct notifier_block *nb) static inline int switchdev_port_obj_del(struct net_device *dev,
struct switchdev_obj *obj)
{
return -EOPNOTSUPP;
}
static inline int register_switchdev_notifier(struct notifier_block *nb)
{ {
return 0; return 0;
} }
static inline int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev, static inline int unregister_switchdev_notifier(struct notifier_block *nb)
struct netdev_switch_notifier_info *info)
{ {
return NOTIFY_DONE; return 0;
} }
static inline int netdev_switch_port_bridge_setlink(struct net_device *dev, static inline int call_switchdev_notifiers(unsigned long val,
struct nlmsghdr *nlh, struct net_device *dev,
u16 flags) struct switchdev_notifier_info *info)
{ {
return -EOPNOTSUPP; return NOTIFY_DONE;
} }
static inline int netdev_switch_port_bridge_dellink(struct net_device *dev, static inline int switchdev_port_bridge_getlink(struct sk_buff *skb, u32 pid,
struct nlmsghdr *nlh, u32 seq, struct net_device *dev,
u16 flags) u32 filter_mask, int nlflags)
{ {
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
static inline int ndo_dflt_netdev_switch_port_bridge_dellink(struct net_device *dev, static inline int switchdev_port_bridge_setlink(struct net_device *dev,
struct nlmsghdr *nlh, struct nlmsghdr *nlh,
u16 flags) u16 flags)
{ {
return 0; return -EOPNOTSUPP;
} }
static inline int ndo_dflt_netdev_switch_port_bridge_setlink(struct net_device *dev, static inline int switchdev_port_bridge_dellink(struct net_device *dev,
struct nlmsghdr *nlh, struct nlmsghdr *nlh,
u16 flags) u16 flags)
{ {
return 0; return -EOPNOTSUPP;
} }
static inline int netdev_switch_fib_ipv4_add(u32 dst, int dst_len, static inline int switchdev_fib_ipv4_add(u32 dst, int dst_len,
struct fib_info *fi, struct fib_info *fi,
u8 tos, u8 type, u8 tos, u8 type,
u32 nlflags, u32 tb_id) u32 nlflags, u32 tb_id)
{ {
return 0; return 0;
} }
static inline int netdev_switch_fib_ipv4_del(u32 dst, int dst_len, static inline int switchdev_fib_ipv4_del(u32 dst, int dst_len,
struct fib_info *fi, struct fib_info *fi,
u8 tos, u8 type, u32 tb_id) u8 tos, u8 type, u32 tb_id)
{ {
return 0; return 0;
} }
static inline void netdev_switch_fib_ipv4_abort(struct fib_info *fi) static inline void switchdev_fib_ipv4_abort(struct fib_info *fi)
{ {
} }
......
...@@ -121,13 +121,13 @@ static struct notifier_block br_device_notifier = { ...@@ -121,13 +121,13 @@ static struct notifier_block br_device_notifier = {
.notifier_call = br_device_event .notifier_call = br_device_event
}; };
static int br_netdev_switch_event(struct notifier_block *unused, static int br_switchdev_event(struct notifier_block *unused,
unsigned long event, void *ptr) unsigned long event, void *ptr)
{ {
struct net_device *dev = netdev_switch_notifier_info_to_dev(ptr); struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
struct net_bridge_port *p; struct net_bridge_port *p;
struct net_bridge *br; struct net_bridge *br;
struct netdev_switch_notifier_fdb_info *fdb_info; struct switchdev_notifier_fdb_info *fdb_info;
int err = NOTIFY_DONE; int err = NOTIFY_DONE;
rtnl_lock(); rtnl_lock();
...@@ -138,14 +138,14 @@ static int br_netdev_switch_event(struct notifier_block *unused, ...@@ -138,14 +138,14 @@ static int br_netdev_switch_event(struct notifier_block *unused,
br = p->br; br = p->br;
switch (event) { switch (event) {
case NETDEV_SWITCH_FDB_ADD: case SWITCHDEV_FDB_ADD:
fdb_info = ptr; fdb_info = ptr;
err = br_fdb_external_learn_add(br, p, fdb_info->addr, err = br_fdb_external_learn_add(br, p, fdb_info->addr,
fdb_info->vid); fdb_info->vid);
if (err) if (err)
err = notifier_from_errno(err); err = notifier_from_errno(err);
break; break;
case NETDEV_SWITCH_FDB_DEL: case SWITCHDEV_FDB_DEL:
fdb_info = ptr; fdb_info = ptr;
err = br_fdb_external_learn_del(br, p, fdb_info->addr, err = br_fdb_external_learn_del(br, p, fdb_info->addr,
fdb_info->vid); fdb_info->vid);
...@@ -159,8 +159,8 @@ static int br_netdev_switch_event(struct notifier_block *unused, ...@@ -159,8 +159,8 @@ static int br_netdev_switch_event(struct notifier_block *unused,
return err; return err;
} }
static struct notifier_block br_netdev_switch_notifier = { static struct notifier_block br_switchdev_notifier = {
.notifier_call = br_netdev_switch_event, .notifier_call = br_switchdev_event,
}; };
static void __net_exit br_net_exit(struct net *net) static void __net_exit br_net_exit(struct net *net)
...@@ -214,7 +214,7 @@ static int __init br_init(void) ...@@ -214,7 +214,7 @@ static int __init br_init(void)
if (err) if (err)
goto err_out3; goto err_out3;
err = register_netdev_switch_notifier(&br_netdev_switch_notifier); err = register_switchdev_notifier(&br_switchdev_notifier);
if (err) if (err)
goto err_out4; goto err_out4;
...@@ -235,7 +235,7 @@ static int __init br_init(void) ...@@ -235,7 +235,7 @@ static int __init br_init(void)
return 0; return 0;
err_out5: err_out5:
unregister_netdev_switch_notifier(&br_netdev_switch_notifier); unregister_switchdev_notifier(&br_switchdev_notifier);
err_out4: err_out4:
unregister_netdevice_notifier(&br_device_notifier); unregister_netdevice_notifier(&br_device_notifier);
err_out3: err_out3:
...@@ -253,7 +253,7 @@ static void __exit br_deinit(void) ...@@ -253,7 +253,7 @@ static void __exit br_deinit(void)
{ {
stp_proto_unregister(&br_stp_proto); stp_proto_unregister(&br_stp_proto);
br_netlink_fini(); br_netlink_fini();
unregister_netdev_switch_notifier(&br_netdev_switch_notifier); unregister_switchdev_notifier(&br_switchdev_notifier);
unregister_netdevice_notifier(&br_device_notifier); unregister_netdevice_notifier(&br_device_notifier);
brioctl_set(NULL); brioctl_set(NULL);
unregister_pernet_subsys(&br_net_ops); unregister_pernet_subsys(&br_net_ops);
......
...@@ -586,7 +586,7 @@ int br_setlink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags) ...@@ -586,7 +586,7 @@ int br_setlink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags)
struct nlattr *afspec; struct nlattr *afspec;
struct net_bridge_port *p; struct net_bridge_port *p;
struct nlattr *tb[IFLA_BRPORT_MAX + 1]; struct nlattr *tb[IFLA_BRPORT_MAX + 1];
int err = 0, ret_offload = 0; int err = 0;
protinfo = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_PROTINFO); protinfo = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_PROTINFO);
afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
...@@ -628,16 +628,6 @@ int br_setlink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags) ...@@ -628,16 +628,6 @@ int br_setlink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags)
afspec, RTM_SETLINK); afspec, RTM_SETLINK);
} }
if (p && !(flags & BRIDGE_FLAGS_SELF)) {
/* set bridge attributes in hardware if supported
*/
ret_offload = netdev_switch_port_bridge_setlink(dev, nlh,
flags);
if (ret_offload && ret_offload != -EOPNOTSUPP)
br_warn(p->br, "error setting attrs on port %u(%s)\n",
(unsigned int)p->port_no, p->dev->name);
}
if (err == 0) if (err == 0)
br_ifinfo_notify(RTM_NEWLINK, p); br_ifinfo_notify(RTM_NEWLINK, p);
out: out:
...@@ -649,7 +639,7 @@ int br_dellink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags) ...@@ -649,7 +639,7 @@ int br_dellink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags)
{ {
struct nlattr *afspec; struct nlattr *afspec;
struct net_bridge_port *p; struct net_bridge_port *p;
int err = 0, ret_offload = 0; int err = 0;
afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
if (!afspec) if (!afspec)
...@@ -668,16 +658,6 @@ int br_dellink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags) ...@@ -668,16 +658,6 @@ int br_dellink(struct net_device *dev, struct nlmsghdr *nlh, u16 flags)
*/ */
br_ifinfo_notify(RTM_NEWLINK, p); br_ifinfo_notify(RTM_NEWLINK, p);
if (p && !(flags & BRIDGE_FLAGS_SELF)) {
/* del bridge attributes in hardware
*/
ret_offload = netdev_switch_port_bridge_dellink(dev, nlh,
flags);
if (ret_offload && ret_offload != -EOPNOTSUPP)
br_warn(p->br, "error deleting attrs on port %u (%s)\n",
(unsigned int)p->port_no, p->dev->name);
}
return err; return err;
} }
static int br_validate(struct nlattr *tb[], struct nlattr *data[]) static int br_validate(struct nlattr *tb[], struct nlattr *data[])
......
...@@ -39,10 +39,14 @@ void br_log_state(const struct net_bridge_port *p) ...@@ -39,10 +39,14 @@ void br_log_state(const struct net_bridge_port *p)
void br_set_state(struct net_bridge_port *p, unsigned int state) void br_set_state(struct net_bridge_port *p, unsigned int state)
{ {
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_PORT_STP_STATE,
.stp_state = state,
};
int err; int err;
p->state = state; p->state = state;
err = netdev_switch_port_stp_update(p->dev, state); err = switchdev_port_attr_set(p->dev, &attr);
if (err && err != -EOPNOTSUPP) if (err && err != -EOPNOTSUPP)
br_warn(p->br, "error setting offload STP state on port %u(%s)\n", br_warn(p->br, "error setting offload STP state on port %u(%s)\n",
(unsigned int) p->port_no, p->dev->name); (unsigned int) p->port_no, p->dev->name);
......
...@@ -98,7 +98,6 @@ static const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN] ...@@ -98,7 +98,6 @@ static const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN]
[NETIF_F_RXALL_BIT] = "rx-all", [NETIF_F_RXALL_BIT] = "rx-all",
[NETIF_F_HW_L2FW_DOFFLOAD_BIT] = "l2-fwd-offload", [NETIF_F_HW_L2FW_DOFFLOAD_BIT] = "l2-fwd-offload",
[NETIF_F_BUSY_POLL_BIT] = "busy-poll", [NETIF_F_BUSY_POLL_BIT] = "busy-poll",
[NETIF_F_HW_SWITCH_OFFLOAD_BIT] = "hw-switch-offload",
}; };
static const char static const char
......
...@@ -458,11 +458,15 @@ static ssize_t phys_switch_id_show(struct device *dev, ...@@ -458,11 +458,15 @@ static ssize_t phys_switch_id_show(struct device *dev,
return restart_syscall(); return restart_syscall();
if (dev_isalive(netdev)) { if (dev_isalive(netdev)) {
struct netdev_phys_item_id ppid; struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_PORT_PARENT_ID,
.flags = SWITCHDEV_F_NO_RECURSE,
};
ret = netdev_switch_parent_id_get(netdev, &ppid); ret = switchdev_port_attr_get(netdev, &attr);
if (!ret) if (!ret)
ret = sprintf(buf, "%*phN\n", ppid.id_len, ppid.id); ret = sprintf(buf, "%*phN\n", attr.ppid.id_len,
attr.ppid.id);
} }
rtnl_unlock(); rtnl_unlock();
......
...@@ -1004,16 +1004,19 @@ static int rtnl_phys_port_name_fill(struct sk_buff *skb, struct net_device *dev) ...@@ -1004,16 +1004,19 @@ static int rtnl_phys_port_name_fill(struct sk_buff *skb, struct net_device *dev)
static int rtnl_phys_switch_id_fill(struct sk_buff *skb, struct net_device *dev) static int rtnl_phys_switch_id_fill(struct sk_buff *skb, struct net_device *dev)
{ {
int err; int err;
struct netdev_phys_item_id psid; struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_PORT_PARENT_ID,
.flags = SWITCHDEV_F_NO_RECURSE,
};
err = netdev_switch_parent_id_get(dev, &psid); err = switchdev_port_attr_get(dev, &attr);
if (err) { if (err) {
if (err == -EOPNOTSUPP) if (err == -EOPNOTSUPP)
return 0; return 0;
return err; return err;
} }
if (nla_put(skb, IFLA_PHYS_SWITCH_ID, psid.id_len, psid.id)) if (nla_put(skb, IFLA_PHYS_SWITCH_ID, attr.ppid.id_len, attr.ppid.id))
return -EMSGSIZE; return -EMSGSIZE;
return 0; return 0;
......
...@@ -345,6 +345,24 @@ static int dsa_slave_stp_update(struct net_device *dev, u8 state) ...@@ -345,6 +345,24 @@ static int dsa_slave_stp_update(struct net_device *dev, u8 state)
return ret; return ret;
} }
static int dsa_slave_port_attr_set(struct net_device *dev,
struct switchdev_attr *attr)
{
int ret = 0;
switch (attr->id) {
case SWITCHDEV_ATTR_PORT_STP_STATE:
if (attr->trans == SWITCHDEV_TRANS_COMMIT)
ret = dsa_slave_stp_update(dev, attr->stp_state);
break;
default:
ret = -EOPNOTSUPP;
break;
}
return ret;
}
static int dsa_slave_bridge_port_join(struct net_device *dev, static int dsa_slave_bridge_port_join(struct net_device *dev,
struct net_device *br) struct net_device *br)
{ {
...@@ -382,14 +400,20 @@ static int dsa_slave_bridge_port_leave(struct net_device *dev) ...@@ -382,14 +400,20 @@ static int dsa_slave_bridge_port_leave(struct net_device *dev)
return ret; return ret;
} }
static int dsa_slave_parent_id_get(struct net_device *dev, static int dsa_slave_port_attr_get(struct net_device *dev,
struct netdev_phys_item_id *psid) struct switchdev_attr *attr)
{ {
struct dsa_slave_priv *p = netdev_priv(dev); struct dsa_slave_priv *p = netdev_priv(dev);
struct dsa_switch *ds = p->parent; struct dsa_switch *ds = p->parent;
psid->id_len = sizeof(ds->index); switch (attr->id) {
memcpy(&psid->id, &ds->index, psid->id_len); case SWITCHDEV_ATTR_PORT_PARENT_ID:
attr->ppid.id_len = sizeof(ds->index);
memcpy(&attr->ppid.id, &ds->index, attr->ppid.id_len);
break;
default:
return -EOPNOTSUPP;
}
return 0; return 0;
} }
...@@ -675,9 +699,9 @@ static const struct net_device_ops dsa_slave_netdev_ops = { ...@@ -675,9 +699,9 @@ static const struct net_device_ops dsa_slave_netdev_ops = {
.ndo_get_iflink = dsa_slave_get_iflink, .ndo_get_iflink = dsa_slave_get_iflink,
}; };
static const struct swdev_ops dsa_slave_swdev_ops = { static const struct switchdev_ops dsa_slave_switchdev_ops = {
.swdev_parent_id_get = dsa_slave_parent_id_get, .switchdev_port_attr_get = dsa_slave_port_attr_get,
.swdev_port_stp_update = dsa_slave_stp_update, .switchdev_port_attr_set = dsa_slave_port_attr_set,
}; };
static void dsa_slave_adjust_link(struct net_device *dev) static void dsa_slave_adjust_link(struct net_device *dev)
...@@ -866,7 +890,7 @@ int dsa_slave_create(struct dsa_switch *ds, struct device *parent, ...@@ -866,7 +890,7 @@ int dsa_slave_create(struct dsa_switch *ds, struct device *parent,
eth_hw_addr_inherit(slave_dev, master); eth_hw_addr_inherit(slave_dev, master);
slave_dev->tx_queue_len = 0; slave_dev->tx_queue_len = 0;
slave_dev->netdev_ops = &dsa_slave_netdev_ops; slave_dev->netdev_ops = &dsa_slave_netdev_ops;
slave_dev->swdev_ops = &dsa_slave_swdev_ops; slave_dev->switchdev_ops = &dsa_slave_switchdev_ops;
netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one, netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
NULL); NULL);
......
...@@ -1165,13 +1165,13 @@ int fib_table_insert(struct fib_table *tb, struct fib_config *cfg) ...@@ -1165,13 +1165,13 @@ int fib_table_insert(struct fib_table *tb, struct fib_config *cfg)
new_fa->fa_state = state & ~FA_S_ACCESSED; new_fa->fa_state = state & ~FA_S_ACCESSED;
new_fa->fa_slen = fa->fa_slen; new_fa->fa_slen = fa->fa_slen;
err = netdev_switch_fib_ipv4_add(key, plen, fi, err = switchdev_fib_ipv4_add(key, plen, fi,
new_fa->fa_tos, new_fa->fa_tos,
cfg->fc_type, cfg->fc_type,
cfg->fc_nlflags, cfg->fc_nlflags,
tb->tb_id); tb->tb_id);
if (err) { if (err) {
netdev_switch_fib_ipv4_abort(fi); switchdev_fib_ipv4_abort(fi);
kmem_cache_free(fn_alias_kmem, new_fa); kmem_cache_free(fn_alias_kmem, new_fa);
goto out; goto out;
} }
...@@ -1215,12 +1215,10 @@ int fib_table_insert(struct fib_table *tb, struct fib_config *cfg) ...@@ -1215,12 +1215,10 @@ int fib_table_insert(struct fib_table *tb, struct fib_config *cfg)
new_fa->tb_id = tb->tb_id; new_fa->tb_id = tb->tb_id;
/* (Optionally) offload fib entry to switch hardware. */ /* (Optionally) offload fib entry to switch hardware. */
err = netdev_switch_fib_ipv4_add(key, plen, fi, tos, err = switchdev_fib_ipv4_add(key, plen, fi, tos, cfg->fc_type,
cfg->fc_type, cfg->fc_nlflags, tb->tb_id);
cfg->fc_nlflags,
tb->tb_id);
if (err) { if (err) {
netdev_switch_fib_ipv4_abort(fi); switchdev_fib_ipv4_abort(fi);
goto out_free_new_fa; goto out_free_new_fa;
} }
...@@ -1239,7 +1237,7 @@ int fib_table_insert(struct fib_table *tb, struct fib_config *cfg) ...@@ -1239,7 +1237,7 @@ int fib_table_insert(struct fib_table *tb, struct fib_config *cfg)
return 0; return 0;
out_sw_fib_del: out_sw_fib_del:
netdev_switch_fib_ipv4_del(key, plen, fi, tos, cfg->fc_type, tb->tb_id); switchdev_fib_ipv4_del(key, plen, fi, tos, cfg->fc_type, tb->tb_id);
out_free_new_fa: out_free_new_fa:
kmem_cache_free(fn_alias_kmem, new_fa); kmem_cache_free(fn_alias_kmem, new_fa);
out: out:
...@@ -1517,8 +1515,8 @@ int fib_table_delete(struct fib_table *tb, struct fib_config *cfg) ...@@ -1517,8 +1515,8 @@ int fib_table_delete(struct fib_table *tb, struct fib_config *cfg)
if (!fa_to_delete) if (!fa_to_delete)
return -ESRCH; return -ESRCH;
netdev_switch_fib_ipv4_del(key, plen, fa_to_delete->fa_info, tos, switchdev_fib_ipv4_del(key, plen, fa_to_delete->fa_info, tos,
cfg->fc_type, tb->tb_id); cfg->fc_type, tb->tb_id);
rtmsg_fib(RTM_DELROUTE, htonl(key), fa_to_delete, plen, tb->tb_id, rtmsg_fib(RTM_DELROUTE, htonl(key), fa_to_delete, plen, tb->tb_id,
&cfg->fc_nlinfo, 0); &cfg->fc_nlinfo, 0);
...@@ -1767,10 +1765,9 @@ void fib_table_flush_external(struct fib_table *tb) ...@@ -1767,10 +1765,9 @@ void fib_table_flush_external(struct fib_table *tb)
if (!fi || !(fi->fib_flags & RTNH_F_EXTERNAL)) if (!fi || !(fi->fib_flags & RTNH_F_EXTERNAL))
continue; continue;
netdev_switch_fib_ipv4_del(n->key, switchdev_fib_ipv4_del(n->key, KEYLENGTH - fa->fa_slen,
KEYLENGTH - fa->fa_slen, fi, fa->fa_tos, fa->fa_type,
fi, fa->fa_tos, tb->tb_id);
fa->fa_type, tb->tb_id);
} }
/* update leaf slen */ /* update leaf slen */
...@@ -1835,10 +1832,9 @@ int fib_table_flush(struct fib_table *tb) ...@@ -1835,10 +1832,9 @@ int fib_table_flush(struct fib_table *tb)
continue; continue;
} }
netdev_switch_fib_ipv4_del(n->key, switchdev_fib_ipv4_del(n->key, KEYLENGTH - fa->fa_slen,
KEYLENGTH - fa->fa_slen, fi, fa->fa_tos, fa->fa_type,
fi, fa->fa_tos, tb->tb_id);
fa->fa_type, tb->tb_id);
hlist_del_rcu(&fa->fa_list); hlist_del_rcu(&fa->fa_list);
fib_release_info(fa->fa_info); fib_release_info(fa->fa_info);
alias_free_mem_rcu(fa); alias_free_mem_rcu(fa);
......
...@@ -15,97 +15,328 @@ ...@@ -15,97 +15,328 @@
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/notifier.h> #include <linux/notifier.h>
#include <linux/netdevice.h> #include <linux/netdevice.h>
#include <linux/if_bridge.h>
#include <net/ip_fib.h> #include <net/ip_fib.h>
#include <net/switchdev.h> #include <net/switchdev.h>
/** /**
* netdev_switch_parent_id_get - Get ID of a switch * switchdev_port_attr_get - Get port attribute
*
* @dev: port device * @dev: port device
* @psid: switch ID * @attr: attribute to get
*/
int switchdev_port_attr_get(struct net_device *dev, struct switchdev_attr *attr)
{
const struct switchdev_ops *ops = dev->switchdev_ops;
struct net_device *lower_dev;
struct list_head *iter;
struct switchdev_attr first = {
.id = SWITCHDEV_ATTR_UNDEFINED
};
int err = -EOPNOTSUPP;
if (ops && ops->switchdev_port_attr_get)
return ops->switchdev_port_attr_get(dev, attr);
if (attr->flags & SWITCHDEV_F_NO_RECURSE)
return err;
/* Switch device port(s) may be stacked under
* bond/team/vlan dev, so recurse down to get attr on
* each port. Return -ENODATA if attr values don't
* compare across ports.
*/
netdev_for_each_lower_dev(dev, lower_dev, iter) {
err = switchdev_port_attr_get(lower_dev, attr);
if (err)
break;
if (first.id == SWITCHDEV_ATTR_UNDEFINED)
first = *attr;
else if (memcmp(&first, attr, sizeof(*attr)))
return -ENODATA;
}
return err;
}
EXPORT_SYMBOL_GPL(switchdev_port_attr_get);
static int __switchdev_port_attr_set(struct net_device *dev,
struct switchdev_attr *attr)
{
const struct switchdev_ops *ops = dev->switchdev_ops;
struct net_device *lower_dev;
struct list_head *iter;
int err = -EOPNOTSUPP;
if (ops && ops->switchdev_port_attr_set)
return ops->switchdev_port_attr_set(dev, attr);
if (attr->flags & SWITCHDEV_F_NO_RECURSE)
return err;
/* Switch device port(s) may be stacked under
* bond/team/vlan dev, so recurse down to set attr on
* each port.
*/
netdev_for_each_lower_dev(dev, lower_dev, iter) {
err = __switchdev_port_attr_set(lower_dev, attr);
if (err)
break;
}
return err;
}
struct switchdev_attr_set_work {
struct work_struct work;
struct net_device *dev;
struct switchdev_attr attr;
};
static void switchdev_port_attr_set_work(struct work_struct *work)
{
struct switchdev_attr_set_work *asw =
container_of(work, struct switchdev_attr_set_work, work);
int err;
rtnl_lock();
err = switchdev_port_attr_set(asw->dev, &asw->attr);
BUG_ON(err);
rtnl_unlock();
dev_put(asw->dev);
kfree(work);
}
static int switchdev_port_attr_set_defer(struct net_device *dev,
struct switchdev_attr *attr)
{
struct switchdev_attr_set_work *asw;
asw = kmalloc(sizeof(*asw), GFP_ATOMIC);
if (!asw)
return -ENOMEM;
INIT_WORK(&asw->work, switchdev_port_attr_set_work);
dev_hold(dev);
asw->dev = dev;
memcpy(&asw->attr, attr, sizeof(asw->attr));
schedule_work(&asw->work);
return 0;
}
/**
* switchdev_port_attr_set - Set port attribute
*
* @dev: port device
* @attr: attribute to set
* *
* Get ID of a switch this port is part of. * Use a 2-phase prepare-commit transaction model to ensure
* system is not left in a partially updated state due to
* failure from driver/device.
*/ */
int netdev_switch_parent_id_get(struct net_device *dev, int switchdev_port_attr_set(struct net_device *dev, struct switchdev_attr *attr)
struct netdev_phys_item_id *psid)
{ {
const struct swdev_ops *ops = dev->swdev_ops; int err;
if (!rtnl_is_locked()) {
/* Running prepare-commit transaction across stacked
* devices requires nothing moves, so if rtnl_lock is
* not held, schedule a worker thread to hold rtnl_lock
* while setting attr.
*/
return switchdev_port_attr_set_defer(dev, attr);
}
if (!ops || !ops->swdev_parent_id_get) /* Phase I: prepare for attr set. Driver/device should fail
return -EOPNOTSUPP; * here if there are going to be issues in the commit phase,
return ops->swdev_parent_id_get(dev, psid); * such as lack of resources or support. The driver/device
* should reserve resources needed for the commit phase here,
* but should not commit the attr.
*/
attr->trans = SWITCHDEV_TRANS_PREPARE;
err = __switchdev_port_attr_set(dev, attr);
if (err) {
/* Prepare phase failed: abort the transaction. Any
* resources reserved in the prepare phase are
* released.
*/
attr->trans = SWITCHDEV_TRANS_ABORT;
__switchdev_port_attr_set(dev, attr);
return err;
}
/* Phase II: commit attr set. This cannot fail as a fault
* of driver/device. If it does, it's a bug in the driver/device
* because the driver said everything was OK in phase I.
*/
attr->trans = SWITCHDEV_TRANS_COMMIT;
err = __switchdev_port_attr_set(dev, attr);
BUG_ON(err);
return err;
}
EXPORT_SYMBOL_GPL(switchdev_port_attr_set);
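Editor's sketch of the caller side (flag values illustrative): a single attribute set. If rtnl_lock is not held, the call is deferred to the worker above; either way the prepare and commit phases run against every lower port of a stacked device.

static int example_enable_learning(struct net_device *port_dev)
{
	struct switchdev_attr attr = {
		.id = SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS,
		.brport_flags = BR_LEARNING | BR_LEARNING_SYNC,
	};

	return switchdev_port_attr_set(port_dev, &attr);
}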
int __switchdev_port_obj_add(struct net_device *dev, struct switchdev_obj *obj)
{
const struct switchdev_ops *ops = dev->switchdev_ops;
struct net_device *lower_dev;
struct list_head *iter;
int err = -EOPNOTSUPP;
if (ops && ops->switchdev_port_obj_add)
return ops->switchdev_port_obj_add(dev, obj);
/* Switch device port(s) may be stacked under
* bond/team/vlan dev, so recurse down to add object on
* each port.
*/
netdev_for_each_lower_dev(dev, lower_dev, iter) {
err = __switchdev_port_obj_add(lower_dev, obj);
if (err)
break;
}
return err;
} }
EXPORT_SYMBOL_GPL(netdev_switch_parent_id_get);
/** /**
* netdev_switch_port_stp_update - Notify switch device port of STP * switchdev_port_obj_add - Add port object
* state change *
* @dev: port device * @dev: port device
* @state: port STP state * @obj: object to add
*
* Use a 2-phase prepare-commit transaction model to ensure
* system is not left in a partially updated state due to
* failure from driver/device.
*
* rtnl_lock must be held.
*/
int switchdev_port_obj_add(struct net_device *dev, struct switchdev_obj *obj)
{
int err;
ASSERT_RTNL();
/* Phase I: prepare for obj add. Driver/device should fail
* here if there are going to be issues in the commit phase,
* such as lack of resources or support. The driver/device
* should reserve resources needed for the commit phase here,
* but should not commit the obj.
*/
obj->trans = SWITCHDEV_TRANS_PREPARE;
err = __switchdev_port_obj_add(dev, obj);
if (err) {
/* Prepare phase failed: abort the transaction. Any
* resources reserved in the prepare phase are
* released.
*/
obj->trans = SWITCHDEV_TRANS_ABORT;
__switchdev_port_obj_add(dev, obj);
return err;
}
/* Phase II: commit obj add. This cannot fail as a fault
* of driver/device. If it does, it's a bug in the driver/device
* because the driver said everything was OK in phase I.
*/
obj->trans = SWITCHDEV_TRANS_COMMIT;
err = __switchdev_port_obj_add(dev, obj);
WARN(err, "%s: Commit of object (id=%d) failed.\n", dev->name, obj->id);
return err;
}
EXPORT_SYMBOL_GPL(switchdev_port_obj_add);
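Editor's sketch (VIDs and flags illustrative): adding a ranged VLAN object through the transactional helper; rtnl_lock must already be held by the caller, as asserted above.

static int example_add_vlan_range(struct net_device *port_dev)
{
	struct switchdev_obj obj = {
		.id = SWITCHDEV_OBJ_PORT_VLAN,
		.vlan = {
			.flags = BRIDGE_VLAN_INFO_UNTAGGED,
			.vid_start = 100,
			.vid_end = 110,
		},
	};

	return switchdev_port_obj_add(port_dev, &obj);
}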
/**
* switchdev_port_obj_del - Delete port object
* *
* Notify switch device port of bridge port STP state change. * @dev: port device
* @obj: object to delete
*/ */
int netdev_switch_port_stp_update(struct net_device *dev, u8 state) int switchdev_port_obj_del(struct net_device *dev, struct switchdev_obj *obj)
{ {
const struct swdev_ops *ops = dev->swdev_ops; const struct switchdev_ops *ops = dev->switchdev_ops;
struct net_device *lower_dev; struct net_device *lower_dev;
struct list_head *iter; struct list_head *iter;
int err = -EOPNOTSUPP; int err = -EOPNOTSUPP;
if (ops && ops->swdev_port_stp_update) if (ops && ops->switchdev_port_obj_del)
return ops->swdev_port_stp_update(dev, state); return ops->switchdev_port_obj_del(dev, obj);
/* Switch device port(s) may be stacked under
* bond/team/vlan dev, so recurse down to delete object on
* each port.
*/
netdev_for_each_lower_dev(dev, lower_dev, iter) { netdev_for_each_lower_dev(dev, lower_dev, iter) {
err = netdev_switch_port_stp_update(lower_dev, state); err = switchdev_port_obj_del(lower_dev, obj);
if (err && err != -EOPNOTSUPP) if (err)
return err; break;
} }
return err; return err;
} }
EXPORT_SYMBOL_GPL(netdev_switch_port_stp_update); EXPORT_SYMBOL_GPL(switchdev_port_obj_del);
static DEFINE_MUTEX(netdev_switch_mutex); static DEFINE_MUTEX(switchdev_mutex);
static RAW_NOTIFIER_HEAD(netdev_switch_notif_chain); static RAW_NOTIFIER_HEAD(switchdev_notif_chain);
/** /**
* register_netdev_switch_notifier - Register notifier * register_switchdev_notifier - Register notifier
* @nb: notifier_block * @nb: notifier_block
* *
* Register switch device notifier. This should be used by code * Register switch device notifier. This should be used by code
* which needs to monitor events happening in particular device. * which needs to monitor events happening in particular device.
* Return values are same as for atomic_notifier_chain_register(). * Return values are same as for atomic_notifier_chain_register().
*/ */
int register_netdev_switch_notifier(struct notifier_block *nb) int register_switchdev_notifier(struct notifier_block *nb)
{ {
int err; int err;
mutex_lock(&netdev_switch_mutex); mutex_lock(&switchdev_mutex);
err = raw_notifier_chain_register(&netdev_switch_notif_chain, nb); err = raw_notifier_chain_register(&switchdev_notif_chain, nb);
mutex_unlock(&netdev_switch_mutex); mutex_unlock(&switchdev_mutex);
return err; return err;
} }
EXPORT_SYMBOL_GPL(register_netdev_switch_notifier); EXPORT_SYMBOL_GPL(register_switchdev_notifier);
/** /**
* unregister_netdev_switch_notifier - Unregister notifier * unregister_switchdev_notifier - Unregister notifier
* @nb: notifier_block * @nb: notifier_block
* *
* Unregister switch device notifier. * Unregister switch device notifier.
* Return values are same as for atomic_notifier_chain_unregister(). * Return values are same as for atomic_notifier_chain_unregister().
*/ */
int unregister_netdev_switch_notifier(struct notifier_block *nb) int unregister_switchdev_notifier(struct notifier_block *nb)
{ {
int err; int err;
mutex_lock(&netdev_switch_mutex); mutex_lock(&switchdev_mutex);
err = raw_notifier_chain_unregister(&netdev_switch_notif_chain, nb); err = raw_notifier_chain_unregister(&switchdev_notif_chain, nb);
mutex_unlock(&netdev_switch_mutex); mutex_unlock(&switchdev_mutex);
return err; return err;
} }
EXPORT_SYMBOL_GPL(unregister_netdev_switch_notifier); EXPORT_SYMBOL_GPL(unregister_switchdev_notifier);
/** /**
* call_netdev_switch_notifiers - Call notifiers * call_switchdev_notifiers - Call notifiers
* @val: value passed unmodified to notifier function * @val: value passed unmodified to notifier function
* @dev: port device * @dev: port device
* @info: notifier information data * @info: notifier information data
...@@ -114,146 +345,241 @@ EXPORT_SYMBOL_GPL(unregister_netdev_switch_notifier); ...@@ -114,146 +345,241 @@ EXPORT_SYMBOL_GPL(unregister_netdev_switch_notifier);
* when it needs to propagate hardware event. * when it needs to propagate hardware event.
* Return values are same as for atomic_notifier_call_chain(). * Return values are same as for atomic_notifier_call_chain().
*/ */
int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev, int call_switchdev_notifiers(unsigned long val, struct net_device *dev,
struct netdev_switch_notifier_info *info) struct switchdev_notifier_info *info)
{ {
int err; int err;
info->dev = dev; info->dev = dev;
mutex_lock(&netdev_switch_mutex); mutex_lock(&switchdev_mutex);
err = raw_notifier_call_chain(&netdev_switch_notif_chain, val, info); err = raw_notifier_call_chain(&switchdev_notif_chain, val, info);
mutex_unlock(&netdev_switch_mutex); mutex_unlock(&switchdev_mutex);
return err; return err;
} }
EXPORT_SYMBOL_GPL(call_netdev_switch_notifiers); EXPORT_SYMBOL_GPL(call_switchdev_notifiers);
/** /**
* netdev_switch_port_bridge_setlink - Notify switch device port of bridge * switchdev_port_bridge_getlink - Get bridge port attributes
* port attributes
* *
* @dev: port device * @dev: port device
* @nlh: netlink msg with bridge port attributes
* @flags: bridge setlink flags
* *
* Notify switch device port of bridge port attributes * Called for SELF on rtnl_bridge_getlink to get bridge port
* attributes.
*/ */
int netdev_switch_port_bridge_setlink(struct net_device *dev, int switchdev_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
struct nlmsghdr *nlh, u16 flags) struct net_device *dev, u32 filter_mask,
int nlflags)
{ {
const struct net_device_ops *ops = dev->netdev_ops; struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS,
};
u16 mode = BRIDGE_MODE_UNDEF;
u32 mask = BR_LEARNING | BR_LEARNING_SYNC;
int err;
if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) err = switchdev_port_attr_get(dev, &attr);
return 0; if (err)
return err;
return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode,
attr.brport_flags, mask, nlflags);
}
EXPORT_SYMBOL_GPL(switchdev_port_bridge_getlink);
static int switchdev_port_br_setflag(struct net_device *dev,
struct nlattr *nlattr,
unsigned long brport_flag)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS,
};
u8 flag = nla_get_u8(nlattr);
int err;
err = switchdev_port_attr_get(dev, &attr);
if (err)
return err;
if (!ops->ndo_bridge_setlink) if (flag)
return -EOPNOTSUPP; attr.brport_flags |= brport_flag;
else
attr.brport_flags &= ~brport_flag;
return ops->ndo_bridge_setlink(dev, nlh, flags); return switchdev_port_attr_set(dev, &attr);
} }
EXPORT_SYMBOL_GPL(netdev_switch_port_bridge_setlink);
/** static const struct nla_policy
* netdev_switch_port_bridge_dellink - Notify switch device port of bridge switchdev_port_bridge_policy[IFLA_BRPORT_MAX + 1] = {
* port attribute delete [IFLA_BRPORT_STATE] = { .type = NLA_U8 },
* [IFLA_BRPORT_COST] = { .type = NLA_U32 },
* @dev: port device [IFLA_BRPORT_PRIORITY] = { .type = NLA_U16 },
* @nlh: netlink msg with bridge port attributes [IFLA_BRPORT_MODE] = { .type = NLA_U8 },
* @flags: bridge setlink flags [IFLA_BRPORT_GUARD] = { .type = NLA_U8 },
* [IFLA_BRPORT_PROTECT] = { .type = NLA_U8 },
* Notify switch device port of bridge port attribute delete [IFLA_BRPORT_FAST_LEAVE] = { .type = NLA_U8 },
*/ [IFLA_BRPORT_LEARNING] = { .type = NLA_U8 },
int netdev_switch_port_bridge_dellink(struct net_device *dev, [IFLA_BRPORT_LEARNING_SYNC] = { .type = NLA_U8 },
struct nlmsghdr *nlh, u16 flags) [IFLA_BRPORT_UNICAST_FLOOD] = { .type = NLA_U8 },
};
static int switchdev_port_br_setlink_protinfo(struct net_device *dev,
struct nlattr *protinfo)
{ {
const struct net_device_ops *ops = dev->netdev_ops; struct nlattr *attr;
int rem;
int err;
if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) err = nla_validate_nested(protinfo, IFLA_BRPORT_MAX,
return 0; switchdev_port_bridge_policy);
if (err)
return err;
nla_for_each_nested(attr, protinfo, rem) {
switch (nla_type(attr)) {
case IFLA_BRPORT_LEARNING:
err = switchdev_port_br_setflag(dev, attr,
BR_LEARNING);
break;
case IFLA_BRPORT_LEARNING_SYNC:
err = switchdev_port_br_setflag(dev, attr,
BR_LEARNING_SYNC);
break;
default:
err = -EOPNOTSUPP;
break;
}
if (err)
return err;
}
return 0;
}
static int switchdev_port_br_afspec(struct net_device *dev,
struct nlattr *afspec,
int (*f)(struct net_device *dev,
struct switchdev_obj *obj))
{
struct nlattr *attr;
struct bridge_vlan_info *vinfo;
struct switchdev_obj obj = {
.id = SWITCHDEV_OBJ_PORT_VLAN,
};
int rem;
int err;
if (!ops->ndo_bridge_dellink) nla_for_each_nested(attr, afspec, rem) {
return -EOPNOTSUPP; if (nla_type(attr) != IFLA_BRIDGE_VLAN_INFO)
continue;
if (nla_len(attr) != sizeof(struct bridge_vlan_info))
return -EINVAL;
vinfo = nla_data(attr);
obj.vlan.flags = vinfo->flags;
if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_BEGIN) {
if (obj.vlan.vid_start)
return -EINVAL;
obj.vlan.vid_start = vinfo->vid;
} else if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_END) {
if (!obj.vlan.vid_start)
return -EINVAL;
obj.vlan.vid_end = vinfo->vid;
if (obj.vlan.vid_end <= obj.vlan.vid_start)
return -EINVAL;
err = f(dev, &obj);
if (err)
return err;
memset(&obj.vlan, 0, sizeof(obj.vlan));
} else {
if (obj.vlan.vid_start)
return -EINVAL;
obj.vlan.vid_start = vinfo->vid;
obj.vlan.vid_end = vinfo->vid;
err = f(dev, &obj);
if (err)
return err;
memset(&obj.vlan, 0, sizeof(obj.vlan));
}
}
return ops->ndo_bridge_dellink(dev, nlh, flags); return 0;
} }
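Editor's sketch, for reference, of the IFLA_AF_SPEC payload the parser above walks: a VLAN range arrives as two IFLA_BRIDGE_VLAN_INFO attributes, the first flagged RANGE_BEGIN and the second RANGE_END (a single VID carries neither flag). Function name and values are illustrative.

static int example_put_vlan_range(struct sk_buff *skb, u16 vid_start,
				  u16 vid_end)
{
	struct bridge_vlan_info vinfo;
	struct nlattr *afspec;

	afspec = nla_nest_start(skb, IFLA_AF_SPEC);
	if (!afspec)
		return -EMSGSIZE;

	vinfo.flags = BRIDGE_VLAN_INFO_RANGE_BEGIN;
	vinfo.vid = vid_start;
	if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO, sizeof(vinfo), &vinfo))
		return -EMSGSIZE;

	vinfo.flags = BRIDGE_VLAN_INFO_RANGE_END;
	vinfo.vid = vid_end;
	if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO, sizeof(vinfo), &vinfo))
		return -EMSGSIZE;

	nla_nest_end(skb, afspec);
	return 0;
}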
EXPORT_SYMBOL_GPL(netdev_switch_port_bridge_dellink);
/** /**
* ndo_dflt_netdev_switch_port_bridge_setlink - default ndo bridge setlink * switchdev_port_bridge_setlink - Set bridge port attributes
* op for master devices
* *
* @dev: port device * @dev: port device
* @nlh: netlink msg with bridge port attributes * @nlh: netlink header
* @flags: bridge setlink flags * @flags: netlink flags
* *
* Notify master device slaves of bridge port attributes * Called for SELF on rtnl_bridge_setlink to set bridge port
* attributes.
*/ */
int ndo_dflt_netdev_switch_port_bridge_setlink(struct net_device *dev, int switchdev_port_bridge_setlink(struct net_device *dev,
struct nlmsghdr *nlh, u16 flags) struct nlmsghdr *nlh, u16 flags)
{ {
struct net_device *lower_dev; struct nlattr *protinfo;
struct list_head *iter; struct nlattr *afspec;
int ret = 0, err = 0; int err = 0;
if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD))
return ret;
netdev_for_each_lower_dev(dev, lower_dev, iter) { protinfo = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
err = netdev_switch_port_bridge_setlink(lower_dev, nlh, flags); IFLA_PROTINFO);
if (err && err != -EOPNOTSUPP) if (protinfo) {
ret = err; err = switchdev_port_br_setlink_protinfo(dev, protinfo);
if (err)
return err;
} }
return ret; afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
IFLA_AF_SPEC);
if (afspec)
err = switchdev_port_br_afspec(dev, afspec,
switchdev_port_obj_add);
return err;
} }
EXPORT_SYMBOL_GPL(ndo_dflt_netdev_switch_port_bridge_setlink); EXPORT_SYMBOL_GPL(switchdev_port_bridge_setlink);
/** /**
* ndo_dflt_netdev_switch_port_bridge_dellink - default ndo bridge dellink * switchdev_port_bridge_dellink - Delete bridge port attributes
* op for master devices
* *
* @dev: port device * @dev: port device
* @nlh: netlink msg with bridge port attributes * @nlh: netlink header
* @flags: bridge dellink flags * @flags: netlink flags
* *
* Notify master device slaves of bridge port attribute deletes * Called for SELF on rtnl_bridge_dellink to delete bridge port
* attributes.
*/ */
int ndo_dflt_netdev_switch_port_bridge_dellink(struct net_device *dev, int switchdev_port_bridge_dellink(struct net_device *dev,
struct nlmsghdr *nlh, u16 flags) struct nlmsghdr *nlh, u16 flags)
{ {
struct net_device *lower_dev; struct nlattr *afspec;
struct list_head *iter;
int ret = 0, err = 0;
if (!(dev->features & NETIF_F_HW_SWITCH_OFFLOAD)) afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
return ret; IFLA_AF_SPEC);
if (afspec)
return switchdev_port_br_afspec(dev, afspec,
switchdev_port_obj_del);
netdev_for_each_lower_dev(dev, lower_dev, iter) { return 0;
err = netdev_switch_port_bridge_dellink(lower_dev, nlh, flags);
if (err && err != -EOPNOTSUPP)
ret = err;
}
return ret;
} }
EXPORT_SYMBOL_GPL(ndo_dflt_netdev_switch_port_bridge_dellink); EXPORT_SYMBOL_GPL(switchdev_port_bridge_dellink);
static struct net_device *netdev_switch_get_lowest_dev(struct net_device *dev) static struct net_device *switchdev_get_lowest_dev(struct net_device *dev)
{ {
const struct swdev_ops *ops = dev->swdev_ops; const struct switchdev_ops *ops = dev->switchdev_ops;
struct net_device *lower_dev; struct net_device *lower_dev;
struct net_device *port_dev; struct net_device *port_dev;
struct list_head *iter; struct list_head *iter;
/* Recursively search down until we find a sw port dev. /* Recursively search down until we find a sw port dev.
* (A sw port dev supports swdev_parent_id_get). * (A sw port dev supports switchdev_port_attr_get).
*/ */
if (dev->features & NETIF_F_HW_SWITCH_OFFLOAD && if (ops && ops->switchdev_port_attr_get)
ops && ops->swdev_parent_id_get)
return dev; return dev;
netdev_for_each_lower_dev(dev, lower_dev, iter) { netdev_for_each_lower_dev(dev, lower_dev, iter) {
port_dev = netdev_switch_get_lowest_dev(lower_dev); port_dev = switchdev_get_lowest_dev(lower_dev);
if (port_dev) if (port_dev)
return port_dev; return port_dev;
} }
...@@ -261,10 +587,12 @@ static struct net_device *netdev_switch_get_lowest_dev(struct net_device *dev) ...@@ -261,10 +587,12 @@ static struct net_device *netdev_switch_get_lowest_dev(struct net_device *dev)
return NULL; return NULL;
} }
static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi) static struct net_device *switchdev_get_dev_by_nhs(struct fib_info *fi)
{ {
struct netdev_phys_item_id psid; struct switchdev_attr attr = {
struct netdev_phys_item_id prev_psid; .id = SWITCHDEV_ATTR_PORT_PARENT_ID,
};
struct switchdev_attr prev_attr;
struct net_device *dev = NULL; struct net_device *dev = NULL;
int nhsel; int nhsel;
...@@ -276,28 +604,29 @@ static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi) ...@@ -276,28 +604,29 @@ static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi)
if (!nh->nh_dev) if (!nh->nh_dev)
return NULL; return NULL;
dev = netdev_switch_get_lowest_dev(nh->nh_dev); dev = switchdev_get_lowest_dev(nh->nh_dev);
if (!dev) if (!dev)
return NULL; return NULL;
if (netdev_switch_parent_id_get(dev, &psid)) if (switchdev_port_attr_get(dev, &attr))
return NULL; return NULL;
if (nhsel > 0) { if (nhsel > 0) {
if (prev_psid.id_len != psid.id_len) if (prev_attr.ppid.id_len != attr.ppid.id_len)
return NULL; return NULL;
if (memcmp(prev_psid.id, psid.id, psid.id_len)) if (memcmp(prev_attr.ppid.id, attr.ppid.id,
attr.ppid.id_len))
return NULL; return NULL;
} }
prev_psid = psid; prev_attr = attr;
} }
return dev; return dev;
} }
 /**
- * netdev_switch_fib_ipv4_add - Add IPv4 route entry to switch
+ * switchdev_fib_ipv4_add - Add IPv4 route entry to switch
  *
  * @dst: route's IPv4 destination address
  * @dst_len: destination address length (prefix length)
@@ -309,11 +638,22 @@ static struct net_device *netdev_switch_get_dev_by_nhs(struct fib_info *fi)
  *
  * Add IPv4 route entry to switch device.
  */
-int netdev_switch_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
-			       u8 tos, u8 type, u32 nlflags, u32 tb_id)
+int switchdev_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
+			   u8 tos, u8 type, u32 nlflags, u32 tb_id)
 {
+	struct switchdev_obj fib_obj = {
+		.id = SWITCHDEV_OBJ_IPV4_FIB,
+		.ipv4_fib = {
+			.dst = htonl(dst),
+			.dst_len = dst_len,
+			.fi = fi,
+			.tos = tos,
+			.type = type,
+			.nlflags = nlflags,
+			.tb_id = tb_id,
+		},
+	};
 	struct net_device *dev;
-	const struct swdev_ops *ops;
 	int err = 0;
 
 	/* Don't offload route if using custom ip rules or if
@@ -328,25 +668,20 @@ int netdev_switch_fib_ipv4_add(u32 dst, int dst_len, struct fib_info *fi,
 	if (fi->fib_net->ipv4.fib_offload_disabled)
 		return 0;
 
-	dev = netdev_switch_get_dev_by_nhs(fi);
+	dev = switchdev_get_dev_by_nhs(fi);
 	if (!dev)
 		return 0;
-	ops = dev->swdev_ops;
-
-	if (ops->swdev_fib_ipv4_add) {
-		err = ops->swdev_fib_ipv4_add(dev, htonl(dst), dst_len,
-					      fi, tos, type, nlflags,
-					      tb_id);
-		if (!err)
-			fi->fib_flags |= RTNH_F_EXTERNAL;
-	}
+
+	err = switchdev_port_obj_add(dev, &fib_obj);
+	if (!err)
+		fi->fib_flags |= RTNH_F_EXTERNAL;
 
 	return err;
 }
-EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_add);
+EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_add);
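On the driver side, switchdev_port_obj_add() hands fib_obj to the port driver through dev->switchdev_ops. A rough sketch of a handler for the IPv4 FIB object follows; the callback name switchdev_port_obj_add in the ops struct and the example_hw_fib4_insert() helper are assumptions here, while the object id and ipv4_fib fields match the object built above:

	#include <linux/netdevice.h>
	#include <net/switchdev.h>

	/* Hypothetical hardware programming helper; a real driver would push
	 * an LPM entry into its tables here.
	 */
	static int example_hw_fib4_insert(struct net_device *dev, u32 dst,
					  int dst_len, struct fib_info *fi,
					  u32 tb_id)
	{
		return 0;
	}

	static int example_port_obj_add(struct net_device *dev,
					struct switchdev_obj *obj)
	{
		switch (obj->id) {
		case SWITCHDEV_OBJ_IPV4_FIB:
			/* obj->ipv4_fib.dst is already in network byte order;
			 * see the htonl() in switchdev_fib_ipv4_add() above.
			 */
			return example_hw_fib4_insert(dev, obj->ipv4_fib.dst,
						      obj->ipv4_fib.dst_len,
						      obj->ipv4_fib.fi,
						      obj->ipv4_fib.tb_id);
		default:
			return -EOPNOTSUPP;
		}
	}

A matching obj_del handler would see the same fields and remove the entry from hardware.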
 /**
- * netdev_switch_fib_ipv4_del - Delete IPv4 route entry from switch
+ * switchdev_fib_ipv4_del - Delete IPv4 route entry from switch
  *
  * @dst: route's IPv4 destination address
  * @dst_len: destination address length (prefix length)
@@ -357,38 +692,45 @@ EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_add);
  *
  * Delete IPv4 route entry from switch device.
  */
-int netdev_switch_fib_ipv4_del(u32 dst, int dst_len, struct fib_info *fi,
-			       u8 tos, u8 type, u32 tb_id)
+int switchdev_fib_ipv4_del(u32 dst, int dst_len, struct fib_info *fi,
+			   u8 tos, u8 type, u32 tb_id)
 {
+	struct switchdev_obj fib_obj = {
+		.id = SWITCHDEV_OBJ_IPV4_FIB,
+		.ipv4_fib = {
+			.dst = htonl(dst),
+			.dst_len = dst_len,
+			.fi = fi,
+			.tos = tos,
+			.type = type,
+			.nlflags = 0,
+			.tb_id = tb_id,
+		},
+	};
 	struct net_device *dev;
-	const struct swdev_ops *ops;
 	int err = 0;
 
 	if (!(fi->fib_flags & RTNH_F_EXTERNAL))
 		return 0;
 
-	dev = netdev_switch_get_dev_by_nhs(fi);
+	dev = switchdev_get_dev_by_nhs(fi);
 	if (!dev)
 		return 0;
-	ops = dev->swdev_ops;
 
-	if (ops->swdev_fib_ipv4_del) {
-		err = ops->swdev_fib_ipv4_del(dev, htonl(dst), dst_len,
-					      fi, tos, type, tb_id);
-		if (!err)
-			fi->fib_flags &= ~RTNH_F_EXTERNAL;
-	}
+	err = switchdev_port_obj_del(dev, &fib_obj);
+	if (!err)
		fi->fib_flags &= ~RTNH_F_EXTERNAL;
 
 	return err;
 }
-EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_del);
+EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_del);
 /**
- * netdev_switch_fib_ipv4_abort - Abort an IPv4 FIB operation
+ * switchdev_fib_ipv4_abort - Abort an IPv4 FIB operation
  *
  * @fi: route FIB info structure
  */
-void netdev_switch_fib_ipv4_abort(struct fib_info *fi)
+void switchdev_fib_ipv4_abort(struct fib_info *fi)
 {
 	/* There was a problem installing this route to the offload
 	 * device. For now, until we come up with more refined
@@ -401,4 +743,4 @@ void netdev_switch_fib_ipv4_abort(struct fib_info *fi)
 	fib_flush_external(fi->fib_net);
 	fi->fib_net->ipv4.fib_offload_disabled = true;
 }
-EXPORT_SYMBOL_GPL(netdev_switch_fib_ipv4_abort);
+EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_abort);
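From the caller's perspective the three exports form an offload-or-abort pattern: try switchdev_fib_ipv4_add() when a route is inserted, and on failure flush external routes and stop offloading for the netns via switchdev_fib_ipv4_abort(). A simplified sketch of that intended usage, not the exact fib_trie call site:

	#include <net/ip_fib.h>
	#include <net/switchdev.h>

	static void example_offload_route(u32 dst, int dst_len, struct fib_info *fi,
					  u8 tos, u8 type, u32 nlflags, u32 tb_id)
	{
		int err;

		err = switchdev_fib_ipv4_add(dst, dst_len, fi, tos, type,
					     nlflags, tb_id);
		if (err)
			/* Hardware could not take the route: fall back to
			 * software forwarding for everything (see abort above).
			 */
			switchdev_fib_ipv4_abort(fi);

		/* The route stays in the software FIB either way. */
	}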