Commit 2c080404 authored by David S. Miller

Merge branch 'bridge-vlan-multicast'

Nikolay Aleksandrov says:

====================
net: bridge: multicast: add vlan support

This patchset adds initial per-vlan multicast support, most of the code
deals with moving to multicast context pointers from bridge/port pointers.
That allows us to switch them with the per-vlan contexts when a multicast
packet is being processed and vlan multicast snooping has been enabled.
That is controlled by a global bridge option added in patch 06 which is
off by default (BR_BOOLOPT_MCAST_VLAN_SNOOPING). It is important to note
that this option can change only under RTNL and doesn't require
multicast_lock, so we need to be careful when retrieving mcast contexts
in parallel. For packet processing they are switched only once in
br_multicast_rcv() and then used until the packet has been processed.
For the most part we need these contexts only to read config values and
check whether they are disabled. The global mcast state that is maintained
consists of the querier and router timers; the rest are config options.
The port mcast state that is maintained consists of the query timer and a
link into the router port list if the port is ever marked as a router
port. Port multicast contexts _must_ be used only with their respective
global contexts: a bridge port's mcast context must be used only with the
bridge's global mcast context, and a vlan/port's mcast context only with
that vlan's global mcast context, due to the router port lists. This way a
bridge port can be marked as a router in multiple vlans while not being a
router in some other vlan. It also gives us per-vlan querier elections and
per-vlan queries; basically the whole multicast state becomes per-vlan
when the option is enabled.

One of the hardest parts is synchronization with the vlan's memory
management, which is done through a new vlan flag: BR_VLFLAG_MCAST_ENABLED,
changed only under multicast_lock. When a vlan is being destroyed, that
flag is first removed under the lock, then the multicast context is torn
down, which includes waiting for any outstanding context timers. Since all
vlan processing depends on BR_VLFLAG_MCAST_ENABLED, that flag must be
checked first whenever the contexts are vlan contexts and the
multicast_lock has been acquired; all IGMP/MLD packet processing functions
and timers do that. When processing a packet we hold RCU, so the vlan
memory won't be freed, but if the flag is missing we must not process the
packet. The timers are synchronized in the same way, with the addition of
waiting for them to finish in case they are still running after the flag
is removed under multicast_lock (i.e. they were waiting for the lock).
Multicast vlan snooping requires vlan filtering to be enabled; if vlan
filtering is disabled, snooping is automatically disabled as well.

BR_VLFLAG_GLOBAL_MCAST_ENABLED controls whether a vlan has
BR_VLFLAG_MCAST_ENABLED set, which is used in all vlan disabled checks. We
need both flags because one is controlled globally by user-space
(BR_VLFLAG_GLOBAL_MCAST_ENABLED) while the other applies to a particular
bridge/vlan or port/vlan entry (BR_VLFLAG_MCAST_ENABLED). Since the latter
is also used for synchronization between the multicast and vlan code, and
is itself controlled by BR_VLFLAG_GLOBAL_MCAST_ENABLED, we rely on it when
checking whether a vlan context is disabled. The multicast fast-path has
three new bit tests on the cache-hot bridge flags field; I didn't observe
any measurable difference. I haven't forced either context's options to be
always disabled when the other type is enabled, because the state consists
of timers which either expire (router) or don't affect normal operation.
Some options, like the mcast querier one, won't be allowed to change for
the disabled context type; that will come with a future patch-set which
adds per-vlan querier control.

Another important addition is global vlan options. So far we had only
per-bridge/port vlan options, but in order to control vlan multicast
snooping globally we need a new type of global vlan options. They can be
changed only on the bridge device and are dumped only when a special flag
is set in the dump request. The first global option is the vlan mcast
snooping control, which controls the vlan's BR_VLFLAG_GLOBAL_MCAST_ENABLED
private flag. It can be set only on master vlan entries. There will be
many more global vlan options in the future, both for multicast config and
for other per-vlan options (e.g. STP).

There's a lot of room for improvement; I'll do some of the initial work,
but splitting the state into different contexts opens the door for a lot
more. Also, any new multicast option becomes vlan-supported with little to
no effort by using the same contexts.

Short patch description:
  patches 01-04: initial mcast context add, no functional changes
  patch      05: adds vlan mcast init and control helpers and uses them on
                 vlan create/destroy
  patch      06: adds a global bridge mcast vlan snooping knob (default
                 off)
  patches 07-08: add a helper for users which must derive the contexts
                 based on current bridge and vlan options (e.g. timers)
  patch      09: adds checks for disabled vlan contexts in packet
                 processing and timers
  patch      10: adds support for per-vlan querier and tagged queries
  patch      11: adds router port vlan id in the notifications
  patches 12-14: add global vlan options support (change, dump, notify)
  patch      15: adds per-vlan global mcast snooping control

Future patch-sets which build on this one (in order):
 - vlan state mcast handling
 - user-space mdb contexts (currently only the bridge contexts are used
   there)
 - all bridge multicast config options added per-vlan global and per
   vlan/port
 - iproute2 support for all the new uAPIs
 - selftests

This set has been stress-tested (deleting/adding ports/vlans and toggling
vlan mcast snooping while processing IGMP/MLD packets) and has also passed
all bridge selftests. I'm sending this set as early as possible since
there are a few more related sets that should go in the same release to
get proper and full mcast vlan snooping support.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents edd2e9d5 9dee572c
@@ -479,16 +479,22 @@ enum {
 /* flags used in BRIDGE_VLANDB_DUMP_FLAGS attribute to affect dumps */
 #define BRIDGE_VLANDB_DUMPF_STATS	(1 << 0) /* Include stats in the dump */
+#define BRIDGE_VLANDB_DUMPF_GLOBAL	(1 << 1) /* Dump global vlan options only */
 
 /* Bridge vlan RTM attributes
  * [BRIDGE_VLANDB_ENTRY] = {
  *     [BRIDGE_VLANDB_ENTRY_INFO]
  *     ...
  * }
+ * [BRIDGE_VLANDB_GLOBAL_OPTIONS] = {
+ *     [BRIDGE_VLANDB_GOPTS_ID]
+ *     ...
+ * }
  */
 enum {
 	BRIDGE_VLANDB_UNSPEC,
 	BRIDGE_VLANDB_ENTRY,
+	BRIDGE_VLANDB_GLOBAL_OPTIONS,
 	__BRIDGE_VLANDB_MAX,
 };
 #define BRIDGE_VLANDB_MAX (__BRIDGE_VLANDB_MAX - 1)
@@ -538,6 +544,15 @@ enum {
 };
 #define BRIDGE_VLANDB_STATS_MAX (__BRIDGE_VLANDB_STATS_MAX - 1)
 
+enum {
+	BRIDGE_VLANDB_GOPTS_UNSPEC,
+	BRIDGE_VLANDB_GOPTS_ID,
+	BRIDGE_VLANDB_GOPTS_RANGE,
+	BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING,
+	__BRIDGE_VLANDB_GOPTS_MAX
+};
+#define BRIDGE_VLANDB_GOPTS_MAX (__BRIDGE_VLANDB_GOPTS_MAX - 1)
+
 /* Bridge multicast database attributes
  * [MDBA_MDB] = {
  *     [MDBA_MDB_ENTRY] = {
@@ -629,6 +644,7 @@ enum {
 	MDBA_ROUTER_PATTR_TYPE,
 	MDBA_ROUTER_PATTR_INET_TIMER,
 	MDBA_ROUTER_PATTR_INET6_TIMER,
+	MDBA_ROUTER_PATTR_VID,
 	__MDBA_ROUTER_PATTR_MAX
 };
 #define MDBA_ROUTER_PATTR_MAX (__MDBA_ROUTER_PATTR_MAX - 1)
@@ -720,12 +736,14 @@ struct br_mcast_stats {
 /* bridge boolean options
  * BR_BOOLOPT_NO_LL_LEARN - disable learning from link-local packets
+ * BR_BOOLOPT_MCAST_VLAN_SNOOPING - control vlan multicast snooping
  *
  * IMPORTANT: if adding a new option do not forget to handle
  * it in br_boolopt_toggle/get and bridge sysfs
  */
 enum br_boolopt_id {
 	BR_BOOLOPT_NO_LL_LEARN,
+	BR_BOOLOPT_MCAST_VLAN_SNOOPING,
 	BR_BOOLOPT_MAX
 };
...
@@ -214,17 +214,22 @@ static struct notifier_block br_switchdev_notifier = {
 int br_boolopt_toggle(struct net_bridge *br, enum br_boolopt_id opt, bool on,
 		      struct netlink_ext_ack *extack)
 {
+	int err = 0;
+
 	switch (opt) {
 	case BR_BOOLOPT_NO_LL_LEARN:
 		br_opt_toggle(br, BROPT_NO_LL_LEARN, on);
 		break;
+	case BR_BOOLOPT_MCAST_VLAN_SNOOPING:
+		err = br_multicast_toggle_vlan_snooping(br, on, extack);
+		break;
 	default:
 		/* shouldn't be called with unsupported options */
 		WARN_ON(1);
 		break;
 	}
 
-	return 0;
+	return err;
 }
 
 int br_boolopt_get(const struct net_bridge *br, enum br_boolopt_id opt)
@@ -232,6 +237,8 @@ int br_boolopt_get(const struct net_bridge *br, enum br_boolopt_id opt)
 	switch (opt) {
 	case BR_BOOLOPT_NO_LL_LEARN:
 		return br_opt_get(br, BROPT_NO_LL_LEARN);
+	case BR_BOOLOPT_MCAST_VLAN_SNOOPING:
+		return br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED);
 	default:
 		/* shouldn't be called with unsupported options */
 		WARN_ON(1);
...
@@ -27,11 +27,14 @@ EXPORT_SYMBOL_GPL(nf_br_ops);
 /* net device transmit always called with BH disabled */
 netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 {
+	struct net_bridge_mcast_port *pmctx_null = NULL;
 	struct net_bridge *br = netdev_priv(dev);
+	struct net_bridge_mcast *brmctx = &br->multicast_ctx;
 	struct net_bridge_fdb_entry *dst;
 	struct net_bridge_mdb_entry *mdst;
 	const struct nf_br_ops *nf_ops;
 	u8 state = BR_STATE_FORWARDING;
+	struct net_bridge_vlan *vlan;
 	const unsigned char *dest;
 	u16 vid = 0;
@@ -53,7 +56,8 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 	skb_reset_mac_header(skb);
 	skb_pull(skb, ETH_HLEN);
 
-	if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid, &state))
+	if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid,
+				&state, &vlan))
 		goto out;
 
 	if (IS_ENABLED(CONFIG_INET) &&
@@ -82,15 +86,15 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 			br_flood(br, skb, BR_PKT_MULTICAST, false, true);
 			goto out;
 		}
-		if (br_multicast_rcv(br, NULL, skb, vid)) {
+		if (br_multicast_rcv(&brmctx, &pmctx_null, vlan, skb, vid)) {
 			kfree_skb(skb);
 			goto out;
 		}
 
-		mdst = br_mdb_get(br, skb, vid);
+		mdst = br_mdb_get(brmctx, skb, vid);
 		if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
-		    br_multicast_querier_exists(br, eth_hdr(skb), mdst))
-			br_multicast_flood(mdst, skb, false, true);
+		    br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst))
+			br_multicast_flood(mdst, skb, brmctx, false, true);
 		else
 			br_flood(br, skb, BR_PKT_MULTICAST, false, true);
 	} else if ((dst = br_fdb_find_rcu(br, dest, vid)) != NULL) {
...
@@ -267,20 +267,19 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb,
 /* called with rcu_read_lock */
 void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
 			struct sk_buff *skb,
+			struct net_bridge_mcast *brmctx,
 			bool local_rcv, bool local_orig)
 {
-	struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev;
-	struct net_bridge *br = netdev_priv(dev);
 	struct net_bridge_port *prev = NULL;
 	struct net_bridge_port_group *p;
 	bool allow_mode_include = true;
 	struct hlist_node *rp;
 
-	rp = br_multicast_get_first_rport_node(br, skb);
+	rp = br_multicast_get_first_rport_node(brmctx, skb);
 
 	if (mdst) {
 		p = rcu_dereference(mdst->ports);
-		if (br_multicast_should_handle_mode(br, mdst->addr.proto) &&
+		if (br_multicast_should_handle_mode(brmctx, mdst->addr.proto) &&
 		    br_multicast_is_star_g(&mdst->addr))
 			allow_mode_include = false;
 	} else {
...
@@ -69,8 +69,11 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 	struct net_bridge_port *p = br_port_get_rcu(skb->dev);
 	enum br_pkt_type pkt_type = BR_PKT_UNICAST;
 	struct net_bridge_fdb_entry *dst = NULL;
+	struct net_bridge_mcast_port *pmctx;
 	struct net_bridge_mdb_entry *mdst;
 	bool local_rcv, mcast_hit = false;
+	struct net_bridge_mcast *brmctx;
+	struct net_bridge_vlan *vlan;
 	struct net_bridge *br;
 	u16 vid = 0;
 	u8 state;
@@ -78,9 +81,11 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 	if (!p || p->state == BR_STATE_DISABLED)
 		goto drop;
 
+	brmctx = &p->br->multicast_ctx;
+	pmctx = &p->multicast_ctx;
 	state = p->state;
 	if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid,
-				&state))
+				&state, &vlan))
 		goto out;
 
 	nbp_switchdev_frame_mark(p, skb);
@@ -98,7 +103,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 			local_rcv = true;
 		} else {
 			pkt_type = BR_PKT_MULTICAST;
-			if (br_multicast_rcv(br, p, skb, vid))
+			if (br_multicast_rcv(&brmctx, &pmctx, vlan, skb, vid))
 				goto drop;
 		}
 	}
@@ -128,11 +133,11 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 	switch (pkt_type) {
 	case BR_PKT_MULTICAST:
-		mdst = br_mdb_get(br, skb, vid);
+		mdst = br_mdb_get(brmctx, skb, vid);
 		if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
-		    br_multicast_querier_exists(br, eth_hdr(skb), mdst)) {
+		    br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) {
 			if ((mdst && mdst->host_joined) ||
-			    br_multicast_is_router(br, skb)) {
+			    br_multicast_is_router(brmctx, skb)) {
 				local_rcv = true;
 				br->dev->stats.multicast++;
 			}
@@ -162,7 +167,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 	if (!mcast_hit)
 		br_flood(br, skb, pkt_type, local_rcv, false);
 	else
-		br_multicast_flood(mdst, skb, local_rcv, false);
+		br_multicast_flood(mdst, skb, brmctx, local_rcv, false);
 	}
 
 	if (local_rcv)
...
@@ -16,29 +16,29 @@
 #include "br_private.h"
 
-static bool br_rports_have_mc_router(struct net_bridge *br)
+static bool br_rports_have_mc_router(struct net_bridge_mcast *brmctx)
 {
 #if IS_ENABLED(CONFIG_IPV6)
-	return !hlist_empty(&br->ip4_mc_router_list) ||
-	       !hlist_empty(&br->ip6_mc_router_list);
+	return !hlist_empty(&brmctx->ip4_mc_router_list) ||
+	       !hlist_empty(&brmctx->ip6_mc_router_list);
 #else
-	return !hlist_empty(&br->ip4_mc_router_list);
+	return !hlist_empty(&brmctx->ip4_mc_router_list);
 #endif
 }
 
 static bool
 br_ip4_rports_get_timer(struct net_bridge_port *port, unsigned long *timer)
 {
-	*timer = br_timer_value(&port->ip4_mc_router_timer);
-	return !hlist_unhashed(&port->ip4_rlist);
+	*timer = br_timer_value(&port->multicast_ctx.ip4_mc_router_timer);
+	return !hlist_unhashed(&port->multicast_ctx.ip4_rlist);
 }
 
 static bool
 br_ip6_rports_get_timer(struct net_bridge_port *port, unsigned long *timer)
 {
 #if IS_ENABLED(CONFIG_IPV6)
-	*timer = br_timer_value(&port->ip6_mc_router_timer);
-	return !hlist_unhashed(&port->ip6_rlist);
+	*timer = br_timer_value(&port->multicast_ctx.ip6_mc_router_timer);
+	return !hlist_unhashed(&port->multicast_ctx.ip6_rlist);
 #else
 	*timer = 0;
 	return false;
@@ -54,10 +54,10 @@ static int br_rports_fill_info(struct sk_buff *skb, struct netlink_callback *cb,
 	struct nlattr *nest, *port_nest;
 	struct net_bridge_port *p;
 
-	if (!br->multicast_router)
+	if (!br->multicast_ctx.multicast_router)
 		return 0;
 
-	if (!br_rports_have_mc_router(br))
+	if (!br_rports_have_mc_router(&br->multicast_ctx))
 		return 0;
 
 	nest = nla_nest_start_noflag(skb, MDBA_ROUTER);
@@ -79,7 +79,7 @@ static int br_rports_fill_info(struct sk_buff *skb, struct netlink_callback *cb,
 		    nla_put_u32(skb, MDBA_ROUTER_PATTR_TIMER,
 				max(ip4_timer, ip6_timer)) ||
 		    nla_put_u8(skb, MDBA_ROUTER_PATTR_TYPE,
-			       p->multicast_router) ||
+			       p->multicast_ctx.multicast_router) ||
 		    (have_ip4_mc_rtr &&
 		     nla_put_u32(skb, MDBA_ROUTER_PATTR_INET_TIMER,
 				 ip4_timer)) ||
@@ -240,7 +240,7 @@ static int __mdb_fill_info(struct sk_buff *skb,
 	switch (mp->addr.proto) {
 	case htons(ETH_P_IP):
-		dump_srcs_mode = !!(mp->br->multicast_igmp_version == 3);
+		dump_srcs_mode = !!(mp->br->multicast_ctx.multicast_igmp_version == 3);
 		if (mp->addr.src.ip4) {
 			if (nla_put_in_addr(skb, MDBA_MDB_EATTR_SOURCE,
 					    mp->addr.src.ip4))
@@ -250,7 +250,7 @@ static int __mdb_fill_info(struct sk_buff *skb,
 		break;
 #if IS_ENABLED(CONFIG_IPV6)
 	case htons(ETH_P_IPV6):
-		dump_srcs_mode = !!(mp->br->multicast_mld_version == 2);
+		dump_srcs_mode = !!(mp->br->multicast_ctx.multicast_mld_version == 2);
 		if (!ipv6_addr_any(&mp->addr.src.ip6)) {
 			if (nla_put_in6_addr(skb, MDBA_MDB_EATTR_SOURCE,
 					     &mp->addr.src.ip6))
@@ -483,7 +483,7 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
 		/* MDBA_MDB_EATTR_SOURCE */
 		if (pg->key.addr.src.ip4)
 			nlmsg_size += nla_total_size(sizeof(__be32));
-		if (pg->key.port->br->multicast_igmp_version == 2)
+		if (pg->key.port->br->multicast_ctx.multicast_igmp_version == 2)
 			goto out;
 		addr_size = sizeof(__be32);
 		break;
@@ -492,7 +492,7 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
 		/* MDBA_MDB_EATTR_SOURCE */
 		if (!ipv6_addr_any(&pg->key.addr.src.ip6))
 			nlmsg_size += nla_total_size(sizeof(struct in6_addr));
-		if (pg->key.port->br->multicast_mld_version == 1)
+		if (pg->key.port->br->multicast_ctx.multicast_mld_version == 1)
 			goto out;
 		addr_size = sizeof(struct in6_addr);
 		break;
@@ -781,12 +781,12 @@ void br_mdb_notify(struct net_device *dev,
 static int nlmsg_populate_rtr_fill(struct sk_buff *skb,
 				   struct net_device *dev,
-				   int ifindex, u32 pid,
+				   int ifindex, u16 vid, u32 pid,
 				   u32 seq, int type, unsigned int flags)
 {
+	struct nlattr *nest, *port_nest;
 	struct br_port_msg *bpm;
 	struct nlmsghdr *nlh;
-	struct nlattr *nest;
 
 	nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), 0);
 	if (!nlh)
@@ -800,8 +800,18 @@ static int nlmsg_populate_rtr_fill(struct sk_buff *skb,
 	if (!nest)
 		goto cancel;
 
-	if (nla_put_u32(skb, MDBA_ROUTER_PORT, ifindex))
+	port_nest = nla_nest_start_noflag(skb, MDBA_ROUTER_PORT);
+	if (!port_nest)
+		goto end;
+	if (nla_put_nohdr(skb, sizeof(u32), &ifindex)) {
+		nla_nest_cancel(skb, port_nest);
 		goto end;
+	}
+	if (vid && nla_put_u16(skb, MDBA_ROUTER_PATTR_VID, vid)) {
+		nla_nest_cancel(skb, port_nest);
+		goto end;
+	}
+	nla_nest_end(skb, port_nest);
 
 	nla_nest_end(skb, nest);
 	nlmsg_end(skb, nlh);
@@ -817,23 +827,28 @@ static int nlmsg_populate_rtr_fill(struct sk_buff *skb,
 static inline size_t rtnl_rtr_nlmsg_size(void)
 {
 	return NLMSG_ALIGN(sizeof(struct br_port_msg))
-		+ nla_total_size(sizeof(__u32));
+		+ nla_total_size(sizeof(__u32))
+		+ nla_total_size(sizeof(u16));
 }
 
-void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port,
+void br_rtr_notify(struct net_device *dev, struct net_bridge_mcast_port *pmctx,
 		   int type)
 {
 	struct net *net = dev_net(dev);
 	struct sk_buff *skb;
 	int err = -ENOBUFS;
 	int ifindex;
+	u16 vid;
 
-	ifindex = port ? port->dev->ifindex : 0;
+	ifindex = pmctx ? pmctx->port->dev->ifindex : 0;
+	vid = pmctx && br_multicast_port_ctx_is_vlan(pmctx) ? pmctx->vlan->vid :
+							      0;
 	skb = nlmsg_new(rtnl_rtr_nlmsg_size(), GFP_ATOMIC);
 	if (!skb)
 		goto errout;
 
-	err = nlmsg_populate_rtr_fill(skb, dev, ifindex, 0, 0, type, NTF_SELF);
+	err = nlmsg_populate_rtr_fill(skb, dev, ifindex, vid, 0, 0, type,
+				      NTF_SELF);
 	if (err < 0) {
 		kfree_skb(skb);
 		goto errout;
@@ -1084,14 +1099,15 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port,
 	}
 	rcu_assign_pointer(*pp, p);
 	if (entry->state == MDB_TEMPORARY)
-		mod_timer(&p->timer, now + br->multicast_membership_interval);
+		mod_timer(&p->timer,
+			  now + br->multicast_ctx.multicast_membership_interval);
 	br_mdb_notify(br->dev, mp, p, RTM_NEWMDB);
 
 	/* if we are adding a new EXCLUDE port group (*,G) it needs to be also
 	 * added to all S,G entries for proper replication, if we are adding
 	 * a new INCLUDE port (S,G) then all of *,G EXCLUDE ports need to be
	 * added to it for proper replication
	 */
-	if (br_multicast_should_handle_mode(br, group.proto)) {
+	if (br_multicast_should_handle_mode(&br->multicast_ctx, group.proto)) {
 		switch (filter_mode) {
 		case MCAST_EXCLUDE:
 			br_multicast_star_g_handle_mode(p, MCAST_EXCLUDE);
...
... (one diff in this commit is too large to display here and is omitted) ...
@@ -287,7 +287,7 @@ static int br_port_fill_attrs(struct sk_buff *skb,
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
 	if (nla_put_u8(skb, IFLA_BRPORT_MULTICAST_ROUTER,
-		       p->multicast_router) ||
+		       p->multicast_ctx.multicast_router) ||
 	    nla_put_u32(skb, IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT,
			p->multicast_eht_hosts_limit) ||
 	    nla_put_u32(skb, IFLA_BRPORT_MCAST_EHT_HOSTS_CNT,
@@ -1324,49 +1324,49 @@ static int br_changelink(struct net_device *brdev, struct nlattr *tb[],
 	if (data[IFLA_BR_MCAST_LAST_MEMBER_CNT]) {
 		u32 val = nla_get_u32(data[IFLA_BR_MCAST_LAST_MEMBER_CNT]);
 
-		br->multicast_last_member_count = val;
+		br->multicast_ctx.multicast_last_member_count = val;
 	}
 
 	if (data[IFLA_BR_MCAST_STARTUP_QUERY_CNT]) {
 		u32 val = nla_get_u32(data[IFLA_BR_MCAST_STARTUP_QUERY_CNT]);
 
-		br->multicast_startup_query_count = val;
+		br->multicast_ctx.multicast_startup_query_count = val;
 	}
 
 	if (data[IFLA_BR_MCAST_LAST_MEMBER_INTVL]) {
 		u64 val = nla_get_u64(data[IFLA_BR_MCAST_LAST_MEMBER_INTVL]);
 
-		br->multicast_last_member_interval = clock_t_to_jiffies(val);
+		br->multicast_ctx.multicast_last_member_interval = clock_t_to_jiffies(val);
 	}
 
 	if (data[IFLA_BR_MCAST_MEMBERSHIP_INTVL]) {
 		u64 val = nla_get_u64(data[IFLA_BR_MCAST_MEMBERSHIP_INTVL]);
 
-		br->multicast_membership_interval = clock_t_to_jiffies(val);
+		br->multicast_ctx.multicast_membership_interval = clock_t_to_jiffies(val);
 	}
 
 	if (data[IFLA_BR_MCAST_QUERIER_INTVL]) {
 		u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERIER_INTVL]);
 
-		br->multicast_querier_interval = clock_t_to_jiffies(val);
+		br->multicast_ctx.multicast_querier_interval = clock_t_to_jiffies(val);
 	}
 
 	if (data[IFLA_BR_MCAST_QUERY_INTVL]) {
 		u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERY_INTVL]);
 
-		br->multicast_query_interval = clock_t_to_jiffies(val);
+		br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val);
 	}
 
 	if (data[IFLA_BR_MCAST_QUERY_RESPONSE_INTVL]) {
 		u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERY_RESPONSE_INTVL]);
 
-		br->multicast_query_response_interval = clock_t_to_jiffies(val);
+		br->multicast_ctx.multicast_query_response_interval = clock_t_to_jiffies(val);
 	}
 
 	if (data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]) {
 		u64 val = nla_get_u64(data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]);
 
-		br->multicast_startup_query_interval = clock_t_to_jiffies(val);
+		br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val);
 	}
 
 	if (data[IFLA_BR_MCAST_STATS_ENABLED]) {
@@ -1566,7 +1566,8 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
 		return -EMSGSIZE;
 #endif
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
-	if (nla_put_u8(skb, IFLA_BR_MCAST_ROUTER, br->multicast_router) ||
+	if (nla_put_u8(skb, IFLA_BR_MCAST_ROUTER,
+		       br->multicast_ctx.multicast_router) ||
 	    nla_put_u8(skb, IFLA_BR_MCAST_SNOOPING,
		       br_opt_get(br, BROPT_MULTICAST_ENABLED)) ||
 	    nla_put_u8(skb, IFLA_BR_MCAST_QUERY_USE_IFADDR,
@@ -1578,38 +1579,38 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
 	    nla_put_u32(skb, IFLA_BR_MCAST_HASH_ELASTICITY, RHT_ELASTICITY) ||
 	    nla_put_u32(skb, IFLA_BR_MCAST_HASH_MAX, br->hash_max) ||
 	    nla_put_u32(skb, IFLA_BR_MCAST_LAST_MEMBER_CNT,
-			br->multicast_last_member_count) ||
+			br->multicast_ctx.multicast_last_member_count) ||
 	    nla_put_u32(skb, IFLA_BR_MCAST_STARTUP_QUERY_CNT,
-			br->multicast_startup_query_count) ||
+			br->multicast_ctx.multicast_startup_query_count) ||
 	    nla_put_u8(skb, IFLA_BR_MCAST_IGMP_VERSION,
-		       br->multicast_igmp_version))
+		       br->multicast_ctx.multicast_igmp_version))
 		return -EMSGSIZE;
 #if IS_ENABLED(CONFIG_IPV6)
 	if (nla_put_u8(skb, IFLA_BR_MCAST_MLD_VERSION,
-		       br->multicast_mld_version))
+		       br->multicast_ctx.multicast_mld_version))
 		return -EMSGSIZE;
 #endif
-	clockval = jiffies_to_clock_t(br->multicast_last_member_interval);
+	clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_last_member_interval);
 	if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_LAST_MEMBER_INTVL, clockval,
			      IFLA_BR_PAD))
 		return -EMSGSIZE;
-	clockval = jiffies_to_clock_t(br->multicast_membership_interval);
+	clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_membership_interval);
 	if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_MEMBERSHIP_INTVL, clockval,
			      IFLA_BR_PAD))
 		return -EMSGSIZE;
-	clockval = jiffies_to_clock_t(br->multicast_querier_interval);
+	clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_querier_interval);
 	if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERIER_INTVL, clockval,
			      IFLA_BR_PAD))
 		return -EMSGSIZE;
-	clockval = jiffies_to_clock_t(br->multicast_query_interval);
+	clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_query_interval);
if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_INTVL, clockval, if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_INTVL, clockval,
IFLA_BR_PAD)) IFLA_BR_PAD))
return -EMSGSIZE; return -EMSGSIZE;
clockval = jiffies_to_clock_t(br->multicast_query_response_interval); clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_query_response_interval);
if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_RESPONSE_INTVL, clockval, if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_RESPONSE_INTVL, clockval,
IFLA_BR_PAD)) IFLA_BR_PAD))
return -EMSGSIZE; return -EMSGSIZE;
clockval = jiffies_to_clock_t(br->multicast_startup_query_interval); clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_startup_query_interval);
if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_STARTUP_QUERY_INTVL, clockval, if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_STARTUP_QUERY_INTVL, clockval,
IFLA_BR_PAD)) IFLA_BR_PAD))
return -EMSGSIZE; return -EMSGSIZE;
......
This diff is collapsed.
@@ -51,7 +51,8 @@ struct net_bridge_group_eht_set {
#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
void br_multicast_eht_clean_sets(struct net_bridge_port_group *pg);
-bool br_multicast_eht_handle(struct net_bridge_port_group *pg,
+bool br_multicast_eht_handle(const struct net_bridge_mcast *brmctx,
+			     struct net_bridge_port_group *pg,
			     void *h_addr,
			     void *srcs,
			     u32 nsrcs,
...
@@ -384,7 +384,7 @@ static ssize_t multicast_router_show(struct device *d,
				     struct device_attribute *attr, char *buf)
{
	struct net_bridge *br = to_bridge(d);
-	return sprintf(buf, "%d\n", br->multicast_router);
+	return sprintf(buf, "%d\n", br->multicast_ctx.multicast_router);
}

static int set_multicast_router(struct net_bridge *br, unsigned long val,
@@ -514,7 +514,7 @@ static ssize_t multicast_igmp_version_show(struct device *d,
{
	struct net_bridge *br = to_bridge(d);

-	return sprintf(buf, "%u\n", br->multicast_igmp_version);
+	return sprintf(buf, "%u\n", br->multicast_ctx.multicast_igmp_version);
}

static int set_multicast_igmp_version(struct net_bridge *br, unsigned long val,
@@ -536,13 +536,13 @@ static ssize_t multicast_last_member_count_show(struct device *d,
						char *buf)
{
	struct net_bridge *br = to_bridge(d);

-	return sprintf(buf, "%u\n", br->multicast_last_member_count);
+	return sprintf(buf, "%u\n", br->multicast_ctx.multicast_last_member_count);
}

static int set_last_member_count(struct net_bridge *br, unsigned long val,
				 struct netlink_ext_ack *extack)
{
-	br->multicast_last_member_count = val;
+	br->multicast_ctx.multicast_last_member_count = val;
	return 0;
}
@@ -558,13 +558,13 @@ static ssize_t multicast_startup_query_count_show(
	struct device *d, struct device_attribute *attr, char *buf)
{
	struct net_bridge *br = to_bridge(d);

-	return sprintf(buf, "%u\n", br->multicast_startup_query_count);
+	return sprintf(buf, "%u\n", br->multicast_ctx.multicast_startup_query_count);
}

static int set_startup_query_count(struct net_bridge *br, unsigned long val,
				   struct netlink_ext_ack *extack)
{
-	br->multicast_startup_query_count = val;
+	br->multicast_ctx.multicast_startup_query_count = val;
	return 0;
}
@@ -581,13 +581,13 @@ static ssize_t multicast_last_member_interval_show(
{
	struct net_bridge *br = to_bridge(d);

	return sprintf(buf, "%lu\n",
-		       jiffies_to_clock_t(br->multicast_last_member_interval));
+		       jiffies_to_clock_t(br->multicast_ctx.multicast_last_member_interval));
}

static int set_last_member_interval(struct net_bridge *br, unsigned long val,
				    struct netlink_ext_ack *extack)
{
-	br->multicast_last_member_interval = clock_t_to_jiffies(val);
+	br->multicast_ctx.multicast_last_member_interval = clock_t_to_jiffies(val);
	return 0;
}
@@ -604,13 +604,13 @@ static ssize_t multicast_membership_interval_show(
{
	struct net_bridge *br = to_bridge(d);

	return sprintf(buf, "%lu\n",
-		       jiffies_to_clock_t(br->multicast_membership_interval));
+		       jiffies_to_clock_t(br->multicast_ctx.multicast_membership_interval));
}

static int set_membership_interval(struct net_bridge *br, unsigned long val,
				   struct netlink_ext_ack *extack)
{
-	br->multicast_membership_interval = clock_t_to_jiffies(val);
+	br->multicast_ctx.multicast_membership_interval = clock_t_to_jiffies(val);
	return 0;
}
@@ -628,13 +628,13 @@ static ssize_t multicast_querier_interval_show(struct device *d,
{
	struct net_bridge *br = to_bridge(d);

	return sprintf(buf, "%lu\n",
-		       jiffies_to_clock_t(br->multicast_querier_interval));
+		       jiffies_to_clock_t(br->multicast_ctx.multicast_querier_interval));
}

static int set_querier_interval(struct net_bridge *br, unsigned long val,
				struct netlink_ext_ack *extack)
{
-	br->multicast_querier_interval = clock_t_to_jiffies(val);
+	br->multicast_ctx.multicast_querier_interval = clock_t_to_jiffies(val);
	return 0;
}
@@ -652,13 +652,13 @@ static ssize_t multicast_query_interval_show(struct device *d,
{
	struct net_bridge *br = to_bridge(d);

	return sprintf(buf, "%lu\n",
-		       jiffies_to_clock_t(br->multicast_query_interval));
+		       jiffies_to_clock_t(br->multicast_ctx.multicast_query_interval));
}

static int set_query_interval(struct net_bridge *br, unsigned long val,
			      struct netlink_ext_ack *extack)
{
-	br->multicast_query_interval = clock_t_to_jiffies(val);
+	br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val);
	return 0;
}
@@ -676,13 +676,13 @@ static ssize_t multicast_query_response_interval_show(
	struct net_bridge *br = to_bridge(d);

	return sprintf(
		buf, "%lu\n",
-		jiffies_to_clock_t(br->multicast_query_response_interval));
+		jiffies_to_clock_t(br->multicast_ctx.multicast_query_response_interval));
}

static int set_query_response_interval(struct net_bridge *br, unsigned long val,
				       struct netlink_ext_ack *extack)
{
-	br->multicast_query_response_interval = clock_t_to_jiffies(val);
+	br->multicast_ctx.multicast_query_response_interval = clock_t_to_jiffies(val);
	return 0;
}
@@ -700,13 +700,13 @@ static ssize_t multicast_startup_query_interval_show(
	struct net_bridge *br = to_bridge(d);

	return sprintf(
		buf, "%lu\n",
-		jiffies_to_clock_t(br->multicast_startup_query_interval));
+		jiffies_to_clock_t(br->multicast_ctx.multicast_startup_query_interval));
}

static int set_startup_query_interval(struct net_bridge *br, unsigned long val,
				      struct netlink_ext_ack *extack)
{
-	br->multicast_startup_query_interval = clock_t_to_jiffies(val);
+	br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val);
	return 0;
}
@@ -751,7 +751,7 @@ static ssize_t multicast_mld_version_show(struct device *d,
{
	struct net_bridge *br = to_bridge(d);

-	return sprintf(buf, "%u\n", br->multicast_mld_version);
+	return sprintf(buf, "%u\n", br->multicast_ctx.multicast_mld_version);
}

static int set_multicast_mld_version(struct net_bridge *br, unsigned long val,
...
@@ -244,7 +244,7 @@ BRPORT_ATTR_FLAG(isolated, BR_ISOLATED);
#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
static ssize_t show_multicast_router(struct net_bridge_port *p, char *buf)
{
-	return sprintf(buf, "%d\n", p->multicast_router);
+	return sprintf(buf, "%d\n", p->multicast_ctx.multicast_router);
}

static int store_multicast_router(struct net_bridge_port *p,
...
@@ -190,6 +190,8 @@ static void br_vlan_put_master(struct net_bridge_vlan *masterv)
		rhashtable_remove_fast(&vg->vlan_hash,
				       &masterv->vnode, br_vlan_rht_params);
		__vlan_del_list(masterv);
+		br_multicast_toggle_one_vlan(masterv, false);
+		br_multicast_ctx_deinit(&masterv->br_mcast_ctx);
		call_rcu(&masterv->rcu, br_master_vlan_rcu_free);
	}
}
@@ -280,10 +282,13 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags,
		} else {
			v->stats = masterv->stats;
		}
+		br_multicast_port_ctx_init(p, v, &v->port_mcast_ctx);
	} else {
		err = br_switchdev_port_vlan_add(dev, v->vid, flags, extack);
		if (err && err != -EOPNOTSUPP)
			goto out;
+		br_multicast_ctx_init(br, v, &v->br_mcast_ctx);
+		v->priv_flags |= BR_VLFLAG_GLOBAL_MCAST_ENABLED;
	}

	/* Add the dev mac and count the vlan only if it's usable */
@@ -306,6 +311,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags,

	__vlan_add_list(v);
	__vlan_add_flags(v, flags);
+	br_multicast_toggle_one_vlan(v, true);

	if (p)
		nbp_vlan_set_vlan_dev_state(p, v->vid);
@@ -374,6 +380,8 @@ static int __vlan_del(struct net_bridge_vlan *v)
				       br_vlan_rht_params);
		__vlan_del_list(v);
		nbp_vlan_set_vlan_dev_state(p, v->vid);
+		br_multicast_toggle_one_vlan(v, false);
+		br_multicast_port_ctx_deinit(&v->port_mcast_ctx);
		call_rcu(&v->rcu, nbp_vlan_rcu_free);
	}
@@ -473,7 +481,8 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br,
static bool __allowed_ingress(const struct net_bridge *br,
			      struct net_bridge_vlan_group *vg,
			      struct sk_buff *skb, u16 *vid,
-			      u8 *state)
+			      u8 *state,
+			      struct net_bridge_vlan **vlan)
{
	struct pcpu_sw_netstats *stats;
	struct net_bridge_vlan *v;
@@ -538,8 +547,9 @@ static bool __allowed_ingress(const struct net_bridge *br,
		 */
		skb->vlan_tci |= pvid;

-		/* if stats are disabled we can avoid the lookup */
-		if (!br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
+		/* if snooping and stats are disabled we can avoid the lookup */
+		if (!br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) &&
+		    !br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
			if (*state == BR_STATE_FORWARDING) {
				*state = br_vlan_get_pvid_state(vg);
				return br_vlan_state_allowed(*state, true);
@@ -566,6 +576,8 @@ static bool __allowed_ingress(const struct net_bridge *br,
		u64_stats_update_end(&stats->syncp);
	}

+	*vlan = v;
+
	return true;

drop:
@@ -575,17 +587,19 @@ static bool __allowed_ingress(const struct net_bridge *br,

bool br_allowed_ingress(const struct net_bridge *br,
			struct net_bridge_vlan_group *vg, struct sk_buff *skb,
-			u16 *vid, u8 *state)
+			u16 *vid, u8 *state,
+			struct net_bridge_vlan **vlan)
{
	/* If VLAN filtering is disabled on the bridge, all packets are
	 * permitted.
	 */
+	*vlan = NULL;
	if (!br_opt_get(br, BROPT_VLAN_ENABLED)) {
		BR_INPUT_SKB_CB(skb)->vlan_filtered = false;
		return true;
	}

-	return __allowed_ingress(br, vg, skb, vid, state);
+	return __allowed_ingress(br, vg, skb, vid, state, vlan);
}

/* Called under RCU. */
@@ -826,6 +840,10 @@ int br_vlan_filter_toggle(struct net_bridge *br, unsigned long val,
	br_manage_promisc(br);
	recalculate_group_addr(br);
	br_recalculate_fwd_mask(br);
+	if (!val && br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED)) {
+		br_info(br, "vlan filtering disabled, automatically disabling multicast vlan snooping\n");
+		br_multicast_toggle_vlan_snooping(br, false, NULL);
+	}

	return 0;
}
@@ -1901,6 +1919,7 @@ static int br_vlan_dump_dev(const struct net_device *dev,
			    u32 dump_flags)
{
	struct net_bridge_vlan *v, *range_start = NULL, *range_end = NULL;
+	bool dump_global = !!(dump_flags & BRIDGE_VLANDB_DUMPF_GLOBAL);
	bool dump_stats = !!(dump_flags & BRIDGE_VLANDB_DUMPF_STATS);
	struct net_bridge_vlan_group *vg;
	int idx = 0, s_idx = cb->args[1];
@@ -1919,6 +1938,10 @@ static int br_vlan_dump_dev(const struct net_device *dev,
		vg = br_vlan_group_rcu(br);
		p = NULL;
	} else {
+		/* global options are dumped only for bridge devices */
+		if (dump_global)
+			return 0;
+
		p = br_port_get_rcu(dev);
		if (WARN_ON(!p))
			return -EINVAL;
@@ -1941,7 +1964,7 @@ static int br_vlan_dump_dev(const struct net_device *dev,

	/* idx must stay at range's beginning until it is filled in */
	list_for_each_entry_rcu(v, &vg->vlan_list, vlist) {
-		if (!br_vlan_should_use(v))
+		if (!dump_global && !br_vlan_should_use(v))
			continue;
		if (idx < s_idx) {
			idx++;
@@ -1954,7 +1977,20 @@ static int br_vlan_dump_dev(const struct net_device *dev,
			continue;
		}

-		if (dump_stats || v->vid == pvid ||
-		    !br_vlan_can_enter_range(v, range_end)) {
+		if (dump_global) {
+			if (br_vlan_global_opts_can_enter_range(v, range_end))
+				continue;
+			if (!br_vlan_global_opts_fill(skb, range_start->vid,
+						      range_end->vid,
+						      range_start)) {
+				err = -EMSGSIZE;
+				break;
+			}
+			/* advance number of filled vlans */
+			idx += range_end->vid - range_start->vid + 1;
+			range_start = v;
+		} else if (dump_stats || v->vid == pvid ||
+			   !br_vlan_can_enter_range(v, range_end)) {
			u16 vlan_flags = br_vlan_flags(range_start, pvid);
@@ -1977,11 +2013,18 @@ static int br_vlan_dump_dev(const struct net_device *dev,
	 * - last vlan (range_start == range_end, not in range)
	 * - last vlan range (range_start != range_end, in range)
	 */
-	if (!err && range_start &&
-	    !br_vlan_fill_vids(skb, range_start->vid, range_end->vid,
-			       range_start, br_vlan_flags(range_start, pvid),
-			       dump_stats))
-		err = -EMSGSIZE;
+	if (!err && range_start) {
+		if (dump_global &&
+		    !br_vlan_global_opts_fill(skb, range_start->vid,
+					      range_end->vid, range_start))
+			err = -EMSGSIZE;
+		else if (!dump_global &&
+			 !br_vlan_fill_vids(skb, range_start->vid,
+					    range_end->vid, range_start,
+					    br_vlan_flags(range_start, pvid),
+					    dump_stats))
+			err = -EMSGSIZE;
+	}

	cb->args[1] = err ? idx : 0;
@@ -2185,12 +2228,22 @@ static int br_vlan_rtm_process(struct sk_buff *skb, struct nlmsghdr *nlh,
	}

	nlmsg_for_each_attr(attr, nlh, sizeof(*bvm), rem) {
-		if (nla_type(attr) != BRIDGE_VLANDB_ENTRY)
+		switch (nla_type(attr)) {
+		case BRIDGE_VLANDB_ENTRY:
+			err = br_vlan_rtm_process_one(dev, attr,
+						      nlh->nlmsg_type,
+						      extack);
+			break;
+		case BRIDGE_VLANDB_GLOBAL_OPTIONS:
+			err = br_vlan_rtm_process_global_options(dev, attr,
+								 nlh->nlmsg_type,
+								 extack);
+			break;
+		default:
			continue;
+		}

		vlans++;
-		err = br_vlan_rtm_process_one(dev, attr, nlh->nlmsg_type,
-					      extack);
		if (err)
			break;
	}
...
@@ -258,3 +258,219 @@ int br_vlan_process_options(const struct net_bridge *br,

	return err;
}
+
+bool br_vlan_global_opts_can_enter_range(const struct net_bridge_vlan *v_curr,
+					 const struct net_bridge_vlan *r_end)
+{
+	return v_curr->vid - r_end->vid == 1 &&
+	       ((v_curr->priv_flags ^ r_end->priv_flags) &
+		BR_VLFLAG_GLOBAL_MCAST_ENABLED) == 0;
+}
+
+bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range,
+			      const struct net_bridge_vlan *v_opts)
+{
+	struct nlattr *nest;
+
+	nest = nla_nest_start(skb, BRIDGE_VLANDB_GLOBAL_OPTIONS);
+	if (!nest)
+		return false;
+
+	if (nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_ID, vid))
+		goto out_err;
+
+	if (vid_range && vid < vid_range &&
+	    nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_RANGE, vid_range))
+		goto out_err;
+
+#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
+	if (nla_put_u8(skb, BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING,
+		       !!(v_opts->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED)))
+		goto out_err;
+#endif
+
+	nla_nest_end(skb, nest);
+
+	return true;
+
+out_err:
+	nla_nest_cancel(skb, nest);
+	return false;
+}
+
+static size_t rtnl_vlan_global_opts_nlmsg_size(void)
+{
+	return NLMSG_ALIGN(sizeof(struct br_vlan_msg))
+		+ nla_total_size(0) /* BRIDGE_VLANDB_GLOBAL_OPTIONS */
+		+ nla_total_size(sizeof(u16)) /* BRIDGE_VLANDB_GOPTS_ID */
+#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
+		+ nla_total_size(sizeof(u8)) /* BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING */
+#endif
+		+ nla_total_size(sizeof(u16)); /* BRIDGE_VLANDB_GOPTS_RANGE */
+}
+
+static void br_vlan_global_opts_notify(const struct net_bridge *br,
+				       u16 vid, u16 vid_range)
+{
+	struct net_bridge_vlan *v;
+	struct br_vlan_msg *bvm;
+	struct nlmsghdr *nlh;
+	struct sk_buff *skb;
+	int err = -ENOBUFS;
+
+	/* right now notifications are done only with rtnl held */
+	ASSERT_RTNL();
+
+	skb = nlmsg_new(rtnl_vlan_global_opts_nlmsg_size(), GFP_KERNEL);
+	if (!skb)
+		goto out_err;
+
+	err = -EMSGSIZE;
+	nlh = nlmsg_put(skb, 0, 0, RTM_NEWVLAN, sizeof(*bvm), 0);
+	if (!nlh)
+		goto out_err;
+	bvm = nlmsg_data(nlh);
+	memset(bvm, 0, sizeof(*bvm));
+	bvm->family = AF_BRIDGE;
+	bvm->ifindex = br->dev->ifindex;
+
+	/* need to find the vlan due to flags/options */
+	v = br_vlan_find(br_vlan_group(br), vid);
+	if (!v)
+		goto out_kfree;
+
+	if (!br_vlan_global_opts_fill(skb, vid, vid_range, v))
+		goto out_err;
+
+	nlmsg_end(skb, nlh);
+	rtnl_notify(skb, dev_net(br->dev), 0, RTNLGRP_BRVLAN, NULL, GFP_KERNEL);
+	return;
+
+out_err:
+	rtnl_set_sk_err(dev_net(br->dev), RTNLGRP_BRVLAN, err);
+out_kfree:
+	kfree_skb(skb);
+}
+
+static int br_vlan_process_global_one_opts(const struct net_bridge *br,
+					   struct net_bridge_vlan_group *vg,
+					   struct net_bridge_vlan *v,
+					   struct nlattr **tb,
+					   bool *changed,
+					   struct netlink_ext_ack *extack)
+{
+	*changed = false;
+#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
+	if (tb[BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING]) {
+		u8 mc_snooping;
+
+		mc_snooping = nla_get_u8(tb[BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING]);
+		if (br_multicast_toggle_global_vlan(v, !!mc_snooping))
+			*changed = true;
+	}
+#endif
+
+	return 0;
+}
+
+static const struct nla_policy br_vlan_db_gpol[BRIDGE_VLANDB_GOPTS_MAX + 1] = {
+	[BRIDGE_VLANDB_GOPTS_ID]		= { .type = NLA_U16 },
+	[BRIDGE_VLANDB_GOPTS_RANGE]		= { .type = NLA_U16 },
+	[BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING]	= { .type = NLA_U8 },
+};
+
+int br_vlan_rtm_process_global_options(struct net_device *dev,
+				       const struct nlattr *attr,
+				       int cmd,
+				       struct netlink_ext_ack *extack)
+{
+	struct net_bridge_vlan *v, *curr_start = NULL, *curr_end = NULL;
+	struct nlattr *tb[BRIDGE_VLANDB_GOPTS_MAX + 1];
+	struct net_bridge_vlan_group *vg;
+	u16 vid, vid_range = 0;
+	struct net_bridge *br;
+	int err = 0;
+
+	if (cmd != RTM_NEWVLAN) {
+		NL_SET_ERR_MSG_MOD(extack, "Global vlan options support only set operation");
+		return -EINVAL;
+	}
+	if (!netif_is_bridge_master(dev)) {
+		NL_SET_ERR_MSG_MOD(extack, "Global vlan options can only be set on bridge device");
+		return -EINVAL;
+	}
+	br = netdev_priv(dev);
+	vg = br_vlan_group(br);
+	if (WARN_ON(!vg))
+		return -ENODEV;
+
+	err = nla_parse_nested(tb, BRIDGE_VLANDB_GOPTS_MAX, attr,
+			       br_vlan_db_gpol, extack);
+	if (err)
+		return err;
+
+	if (!tb[BRIDGE_VLANDB_GOPTS_ID]) {
+		NL_SET_ERR_MSG_MOD(extack, "Missing vlan entry id");
+		return -EINVAL;
+	}
+	vid = nla_get_u16(tb[BRIDGE_VLANDB_GOPTS_ID]);
+	if (!br_vlan_valid_id(vid, extack))
+		return -EINVAL;
+
+	if (tb[BRIDGE_VLANDB_GOPTS_RANGE]) {
+		vid_range = nla_get_u16(tb[BRIDGE_VLANDB_GOPTS_RANGE]);
+		if (!br_vlan_valid_id(vid_range, extack))
+			return -EINVAL;
+		if (vid >= vid_range) {
+			NL_SET_ERR_MSG_MOD(extack, "End vlan id is less than or equal to start vlan id");
+			return -EINVAL;
+		}
+	} else {
+		vid_range = vid;
+	}
+
+	for (; vid <= vid_range; vid++) {
+		bool changed = false;
+
+		v = br_vlan_find(vg, vid);
+		if (!v) {
+			NL_SET_ERR_MSG_MOD(extack, "Vlan in range doesn't exist, can't process global options");
+			err = -ENOENT;
+			break;
+		}
+
+		err = br_vlan_process_global_one_opts(br, vg, v, tb, &changed,
+						      extack);
+		if (err)
+			break;
+
+		if (changed) {
+			/* vlan options changed, check for range */
+			if (!curr_start) {
+				curr_start = v;
+				curr_end = v;
+				continue;
+			}
+
+			if (!br_vlan_global_opts_can_enter_range(v, curr_end)) {
+				br_vlan_global_opts_notify(br, curr_start->vid,
+							   curr_end->vid);
+				curr_start = v;
+			}
+			curr_end = v;
+		} else {
+			/* nothing changed and nothing to notify yet */
+			if (!curr_start)
+				continue;
+
+			br_vlan_global_opts_notify(br, curr_start->vid,
+						   curr_end->vid);
+			curr_start = NULL;
+			curr_end = NULL;
+		}
+	}
+	if (curr_start)
+		br_vlan_global_opts_notify(br, curr_start->vid, curr_end->vid);
+
+	return err;
+}