- 20 Mar, 2017 2 commits
-
Jacob Keller authored
The firmware expects the port numbers for offloaded UDP tunnels in Little Endian format. We accidentally sent the value in Big Endian format, which obviously will cause the wrong port number to be put into the UDP tunnels list. This results in VXLAN and Geneve tunnel Rx offloads being essentially disabled, unless the port number happens to be identical after byte swapping. Note that i40e_aq_add_udp_tunnel() will byteswap the parameter from host order into Little Endian, so we don't need to worry about passing strictly a __le16 value to the command. This patch essentially reverts b3f5c7bc ("i40e: Fix for extra byte swap in tunnel setup", 2016-08-24), but in a way that makes the result much clearer to the reader. Fixes: b3f5c7bc ("i40e: Fix for extra byte swap in tunnel setup", 2016-08-24) Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Williams, Mitch A <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
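A toy userspace model of the bug class (illustrative only, not the driver code; a little-endian host is assumed): the firmware wants little-endian, the wire carries big-endian, and passing the network-order value straight through hands the firmware a byte-swapped port.

  #include <stdio.h>
  #include <stdint.h>
  #include <arpa/inet.h>

  /* Pretend the "firmware" wants the port in little-endian. On a
   * little-endian host the cpu-to-LE conversion is a no-op, so
   * whatever byte order we hand it is what the firmware sees. */
  static uint16_t cpu_to_le16_toy(uint16_t v) { return v; }

  int main(void)
  {
          uint16_t wire = htons(4789);  /* VXLAN port in network order */

          /* buggy: the network-order value goes straight through */
          printf("firmware sees (buggy): %u\n", cpu_to_le16_toy(wire));
          /* fixed: convert to host order first, as the commit does */
          printf("firmware sees (fixed): %u\n", cpu_to_le16_toy(ntohs(wire)));
          return 0;
  }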
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
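For context, a minimal sketch of the shape of such a conversion (the ksettings helpers are the real kernel API; the foo_* names and the advertised modes are placeholders, not this driver's actual capabilities):

  #include <linux/ethtool.h>
  #include <linux/netdevice.h>

  static int foo_get_link_ksettings(struct net_device *dev,
                                    struct ethtool_link_ksettings *cmd)
  {
          /* link modes live in bitmaps now, filled via helpers */
          ethtool_link_ksettings_zero_link_mode(cmd, supported);
          ethtool_link_ksettings_add_link_mode(cmd, supported, 100baseT_Full);
          ethtool_link_ksettings_add_link_mode(cmd, supported, TP);

          cmd->base.speed  = SPEED_100;
          cmd->base.duplex = DUPLEX_FULL;
          cmd->base.port   = PORT_TP;
          return 0;
  }

  static const struct ethtool_ops foo_ethtool_ops = {
          .get_link_ksettings = foo_get_link_ksettings,
          /* .set_link_ksettings is wired up the same way */
  };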
-
- 18 Mar, 2017 1 commit
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 17 Mar, 2017 25 commits
-
Manish Awasthi authored
Information reported to ethtool about link modes is wrong for 25G NICs. Fix it by checking for the presence of a 25G NIC, checking the link speed reported by the NIC firmware, and then assigning proper values to the ethtool_link_ksettings struct. Signed-off-by: Manish Awasthi <manish.awasthi@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Stephen Hemminger says:

====================
netvsc: small changes for net-next

One bugfix, and two non-code patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Not used anywhere. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Add a short description of how the callbacks and NAPI interoperate. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Change the argument to the channel callback from the channel pointer to the internal data structure containing per-channel info. This avoids any possible races when a callback happens during initialization, and makes the IRQ code simpler. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
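The pattern, as a hedged sketch (names are illustrative, not the netvsc code): register the per-channel state itself as the callback context, so the handler never derives it from the channel pointer while initialization is still in flight.

  struct vmbus_channel;   /* opaque here; the real type lives in hyperv.h */

  struct chan_state {
          struct vmbus_channel *channel;  /* back-pointer to the channel */
          /* ... ring-buffer bookkeeping, NAPI context, stats ... */
  };

  static void chan_callback(void *context)
  {
          struct chan_state *state = context;   /* no lookup, no race */

          /* process this channel's ring buffer using *state */
  }

  /* At setup time the state, not the raw channel, is registered, e.g.:
   *   vmbus_open(channel, ..., chan_callback, state);
   */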
-
David S. Miller authored
Alexei Starovoitov says:

====================
bpf: inline bpf_map_lookup_elem()

bpf_map_lookup_elem() is one of the most frequently used helper functions. Improve JITed program performance by inlining this helper.

  bpf_map_type    before  after
  hash            58M     74M
  array           174M    280M

The values are number of lookups per second in ideal conditions, measured by the micro-benchmark in patch 6.

The 'perf report' for HASH map type:

before:
  54.23%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
  14.24%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
   8.84%  map_perf_test  [kernel.kallsyms]  [k] htab_map_lookup_elem
   5.93%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
   2.30%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
   1.49%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
  60.03%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
  18.07%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
   2.91%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
   1.94%  map_perf_test  [kernel.kallsyms]  [k] _einittext
   1.90%  map_perf_test  [kernel.kallsyms]  [k] __audit_syscall_exit
   1.72%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

so the cost of htab_map_lookup_elem() and bpf_map_lookup_elem() is gone after inlining. 'per-cpu' and 'lru' map types can be optimized similarly in the future.

Note that sparse will complain that bpf is addictive ;)

  kernel/bpf/hashtab.c:438:19: sparse: subtraction of functions? Share your drugs
  kernel/bpf/verifier.c:3342:38: sparse: subtraction of functions? Share your drugs

It's not a new warning, just in new places.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexei Starovoitov authored
  $ map_perf_test 128

speed of HASH bpf_map_lookup_elem() in lookups per second:

          w/o JIT  w/JIT
  before  46M      58M
  after   42M      74M

perf report, before:
  54.23%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
  14.24%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
   8.84%  map_perf_test  [kernel.kallsyms]  [k] htab_map_lookup_elem
   5.93%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
   2.30%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
   1.49%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
  60.03%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
  18.07%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
   2.91%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
   1.94%  map_perf_test  [kernel.kallsyms]  [k] _einittext
   1.90%  map_perf_test  [kernel.kallsyms]  [k] __audit_syscall_exit
   1.72%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

Notice that bpf_map_lookup_elem() and htab_map_lookup_elem() are trivial functions, yet they take a sizeable amount of cpu time. htab_map_gen_lookup() removes bpf_map_lookup_elem() and converts htab_map_lookup_elem() into three BPF insns, which causes the cpu time of bpf_prog_da4fc6a3f41761a2() to increase slightly.

  $ map_perf_test 256

speed of ARRAY bpf_map_lookup_elem() in lookups per second:

          w/o JIT  w/JIT
  before  97M      174M
  after   64M      280M

before:
  37.33%  map_perf_test  [kernel.kallsyms]  [k] array_map_lookup_elem
  13.95%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
   6.54%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
   4.57%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
  32.86%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
   6.54%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

array_map_gen_lookup() removes the calls to array_map_lookup_elem() and bpf_map_lookup_elem() and replaces them with 7 bpf insns. Performance without JIT is slower, since executing extra insns in the interpreter is slower than running native C code, but with JIT the gains are obvious, since native C->x86 code is replaced with fewer bpf->x86 instructions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexei Starovoitov authored
Optimize the call chain

  bpf_call bpf_map_lookup_elem
    map->ops->map_lookup_elem
      htab_map_lookup_elem
        __htab_map_lookup_elem

into:

  bpf_call __htab_map_lookup_elem

to improve performance of JITed programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexei Starovoitov authored
Optimize bpf_call -> bpf_map_lookup_elem() -> array_map_lookup_elem() into a sequence of bpf instructions. When the JIT is on, this sequence of bpf instructions becomes a sequence of native cpu instructions, which runs significantly faster than an indirect call plus two functions' prologue/epilogue. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
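What the emitted instructions compute can be written out in plain C. A self-contained toy model (simplified layout with illustrative field names; the real bpf_array differs):

  #include <stdint.h>
  #include <stddef.h>

  /* Elements are stored inline, each padded to an 8-byte multiple;
   * that is what lets a lookup become a bounds check plus address
   * arithmetic, with no helper call and no pointer chasing. */
  struct toy_array_map {
          uint32_t max_entries;
          uint32_t elem_size;   /* value_size rounded up to 8 */
          uint8_t  value[];     /* max_entries * elem_size bytes */
  };

  static void *toy_lookup(struct toy_array_map *m, uint32_t index)
  {
          if (index >= m->max_entries)
                  return NULL;  /* miss */
          return m->value + (size_t)index * m->elem_size;
  }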
-
Alexei Starovoitov authored
convert_ctx_accesses() replaces a single bpf instruction with a set of instructions. Adjust the corresponding insn_aux_data while patching; this is needed to make sure subsequent 'for (all insn)' loops have matching insn and insn_aux_data. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
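A toy model of the invariant being preserved (plain C, not the verifier's code): insns[] and aux[] are parallel arrays, so replacing one instruction with N must shift both tails identically, or later loops pair the wrong elements.

  #include <stdlib.h>
  #include <string.h>

  struct toy_insn { int op; };
  struct toy_aux  { int meta; };

  /* Replace insns[off] with patch_len new instructions and keep the
   * metadata array aligned index-for-index. */
  static int patch_insns(struct toy_insn **insns, struct toy_aux **aux,
                         int *cnt, int off,
                         const struct toy_insn *patch, int patch_len)
  {
          int new_cnt = *cnt + patch_len - 1;
          struct toy_insn *ni = calloc(new_cnt, sizeof(*ni));
          struct toy_aux  *na = calloc(new_cnt, sizeof(*na));

          if (!ni || !na) {
                  free(ni);
                  free(na);
                  return -1;
          }
          /* unchanged head, in both arrays */
          memcpy(ni, *insns, off * sizeof(*ni));
          memcpy(na, *aux, off * sizeof(*na));
          /* the patched slot expands; its new aux entries start zeroed */
          memcpy(ni + off, patch, patch_len * sizeof(*ni));
          /* the tail shifts by patch_len - 1 in BOTH arrays */
          memcpy(ni + off + patch_len, *insns + off + 1,
                 (*cnt - off - 1) * sizeof(*ni));
          memcpy(na + off + patch_len, *aux + off + 1,
                 (*cnt - off - 1) * sizeof(*na));

          free(*insns);
          free(*aux);
          *insns = ni;
          *aux = na;
          *cnt = new_cnt;
          return 0;
  }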
-
Alexei Starovoitov authored
Reduce the indentation and make fixup_bpf_calls() iterate over instructions similarly to convert_ctx_accesses(). Also convert the hard BUG_ON into a soft verifier error. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexei Starovoitov authored
No functional change: move fixup_bpf_calls() to verifier.c, where it's refactored in the next patch. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Soheil Hassas Yeganeh authored
tcp_tw_recycle was already broken for connections behind NAT, since the per-destination timestamp is not monotonically increasing for multiple machines behind a single destination address. After the randomization of TCP timestamp offsets in commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection), tcp_tw_recycle is broken for all types of connections for the same reason: the timestamps received from a single machine are no longer monotonically increasing. Remove tcp_tw_recycle, since it is not functional. Also remove the PAWSPassive SNMP counter, since it is only used by tcp_tw_recycle, and simplify tcp_v4_route_req and tcp_v6_route_req, since the strict argument is only set when tcp_tw_recycle is enabled. Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Cc: Lutz Vieweg <lvml@5t9.de> Cc: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Soheil Hassas Yeganeh authored
Commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection) randomizes TCP timestamps per connection. After this commit, there is no guarantee that the timestamps received from the same destination are monotonically increasing. As a result, the per-destination timestamp cache in TCP metrics (i.e., tcpm_ts in struct tcp_metrics_block) is broken and cannot be relied upon. Remove the per-destination timestamp cache and all related code paths. Note that this cache was already broken for caching timestamps of multiple machines behind a NAT sharing the same address. Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Cc: Lutz Vieweg <lvml@5t9.de> Cc: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Shannon Nelson says:

====================
sunvnet: better connection management

These patches remove some problems in handling of carrier state with the ldmvsw vswitch, remove an xoff misuse in sunvnet, and add stats for debug and tracking of point-to-point connections between the ldom VMs.

v2:
- added ldmvsw ndo_open to reset the LDC channel
- updated copyrights
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
The sunvnet netdev is connected to the controlling ldom's vswitch for network bridging. However, for higher performance between ldoms, there also is a channel between each client ldom. These connections are represented in the sunvnet driver by a queue for each ldom. The driver uses select_queue to tell the stack which queue to use by tracking the mac addresses on the other end of each port. When a connected ldom shuts down, the driver receives an LDC_EVENT_RESET and the port is removed from the driver, thus a queue with no ldom on the other end will never be selected for Tx. The driver was trying to reinforce the "don't use this queue" notion with netif_tx_stop_queue() and netif_tx_wake_queue(), which really should only be used to signal a Tx queue is full (aka XOFF). This misuse of queue state resulted in NETDEV WATCHDOG messages and lots of unnecessary calls into the driver's tx_timeout handler. Simply removing these takes care of the problem. Orabug: 25190537 Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Make sure multicast packets get counted in the device. Orabug: 25190537 Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Track our used and unused queue indices correctly. Otherwise, as ports dropped out and returned, they all eventually ended up with the same queue index. Orabug: 25190537 Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
In this driver, there is a "port" created for the connection to each of the other ldoms; a netdev queue is mapped to each port, and they are collected under a single netdev. The generic netdev statistics show us all the traffic in and out of our network device, but don't show individual queue/port stats. This patch breaks out the traffic counts for the individual ports and gives us a little view into the state of those connections. Orabug: 25190537 Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
When an ldom VM is bound, the network vswitch infrastructure is set up for it, but was being forced 'UP' by the userland switch configuration script. When 'UP' but not actually connected to a running VM, the ipv6 neighbor probes fail (not a horrible thing) and start cluttering up the kernel logs. Funny thing: these are debug messages that never actually show up, but we do see the net_ratelimited messages that say N callbacks were suppressed. This patch defers the netif_carrier_on() until an actual link has been established with the VM, as indicated by receiving an LDC_EVENT_UP from the underlying LDC protocol. Similarly, we take the link down when we see the LDC_EVENT_RESET. Now when we see the ndo_open(), we reset the link to get things talking again. Orabug: 25525312 Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jarod Wilson authored
Cut-n-paste enablement of 802.3ad bonding on 25G NICs, which currently report 0 as their bandwidth. CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: netdev@vger.kernel.org Signed-off-by: Jarod Wilson <jarod@redhat.com> Acked-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
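A hedged sketch of the kind of mapping such a patch extends (the speeds are real ethtool values in Mb/s; the function and its placement are illustrative, not bond_3ad.c's actual code):

  #include <stdint.h>

  /* Before the patch, 25000 fell through to the default case and the
   * aggregate bandwidth of a 25G bond was reported as 0. */
  static uint64_t ad_aggregate_bandwidth(unsigned int nports,
                                         unsigned int speed_mbps)
  {
          switch (speed_mbps) {
          case 1000:
          case 10000:
          case 25000:   /* the newly handled 25G case */
                  return (uint64_t)nports * speed_mbps;
          default:
                  return 0;   /* unknown speed */
          }
  }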
-
chun Long authored
Replace commas with semicolons in tcp_westwood_info(). Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
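The bug class, illustrated with made-up fields (not the actual tcp_westwood_info() members): a comma where a semicolon was meant compiles cleanly, which is exactly why such typos survive review.

  struct toy_info {
          unsigned int rtt_count;
          unsigned int rtt_us;
  };

  static void fill_info(struct toy_info *info, unsigned int count,
                        unsigned int rtt)
  {
          /* typo form: legal C -- the comma operator chains the two
           * expressions -- but misleading, and subtly wrong if one
           * statement is later wrapped in an if () or reordered */
          info->rtt_count = count,
          info->rtt_us = rtt;

          /* intended form */
          info->rtt_count = count;
          info->rtt_us = rtt;
  }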
-
Rick Farrington authored
All IRQs owned by the PF and VF drivers share the same nondescript name "octeon"; this makes it difficult to set up interrupt affinity. Change the IRQ names to reflect their specific purpose:

  LiquidIO<id>-<func>-<type>-<queue pair num>

Examples:

  LiquidIO0-pf0-rxtx-3
  LiquidIO1-vf1-rxtx-0
  LiquidIO0-pf0-aux

We cannot use netdev->name for naming the IRQs because:

1. Early during init, the PF and VF drivers require interrupts to send/receive control data from the NIC firmware, so the PF and VF must request IRQs long before the netdev struct is registered.

2. The IRQ name can only be specified at the time it is requested. It cannot be changed after that.

Signed-off-by: Rick Farrington <ricardo.farrington@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Satanand Burla <satananda.burla@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
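A hedged sketch of the mechanics (toy_* names assumed, not the liquidio code): the name is formatted into driver-owned storage before request_irq(), because the kernel keeps the pointer and, per point 2 above, the name cannot be changed afterwards.

  #include <linux/interrupt.h>
  #include <linux/kernel.h>

  #define TOY_IRQ_NAME_LEN 32

  struct toy_ioq {
          char irq_name[TOY_IRQ_NAME_LEN];   /* must outlive the IRQ */
          /* ... per-queue state ... */
  };

  static int toy_request_queue_irq(struct toy_ioq *ioq, unsigned int vector,
                                   irq_handler_t handler, unsigned int nic_id,
                                   unsigned int pf_num, unsigned int q_no)
  {
          snprintf(ioq->irq_name, TOY_IRQ_NAME_LEN, "LiquidIO%u-pf%u-rxtx-%u",
                   nic_id, pf_num, q_no);
          return request_irq(vector, handler, 0, ioq->irq_name, ioq);
  }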
-
Rick Farrington authored
Remove the invalid call to dma_sync_single_for_cpu(), because the previous DMA allocation was coherent, not streaming. Remove code that references fields in struct list_head; replace it with calls to list_empty() and list_first_entry(). Also, add a comment to clarify a complicated if statement. Signed-off-by: Rick Farrington <ricardo.farrington@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
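The list part of the cleanup follows a standard kernel idiom; a minimal sketch with an assumed packet type:

  #include <linux/list.h>

  struct toy_pkt {
          struct list_head list;
          int len;
  };

  /* Instead of poking at list_head.next directly, use the helpers --
   * they document intent and stay correct if the implementation of
   * struct list_head ever changes. */
  static struct toy_pkt *toy_pop_first(struct list_head *queue)
  {
          struct toy_pkt *pkt;

          if (list_empty(queue))
                  return NULL;

          pkt = list_first_entry(queue, struct toy_pkt, list);
          list_del(&pkt->list);
          return pkt;
  }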
-
Nik Unger authored
I recently reported on the netem list that iperf network benchmarks show unexpected results when a bandwidth throttling rate has been configured for netem. Specifically:

1) The measured link bandwidth *increases* when a higher delay is added
2) The measured link bandwidth appears higher than the specified limit
3) The measured link bandwidth for the same very slow settings varies significantly across machines

The issue can be reproduced by using tc to configure netem with a 512kbit rate and various (none, 1us, 50ms, 100ms, 200ms) delays on a veth pair between network namespaces, and then using iperf (or any other network benchmarking tool) to test throughput. Complete detailed instructions are in the original email chain here:

  https://lists.linuxfoundation.org/pipermail/netem/2017-February/001672.html

There appear to be two underlying bugs causing these effects:

- The first issue causes long delays when the rate is slow and no delay is configured (e.g., "rate 512kbit"). This is because SKBs are not orphaned when no delay is configured, so orphaning does not occur until *after* the rate-induced delay has been applied. For this reason, adding a tiny delay (e.g., "rate 512kbit delay 1us") dramatically increases the measured bandwidth.

- The second issue is that rate-induced delays are not correctly applied, allowing SKB delays to occur in parallel. The intended approach is to compute the delay for an SKB and to add this delay to the end of the current queue. However, the code does not detect existing SKBs in the queue, due to improperly testing sch->q.qlen, which is nonzero even when packets exist only in the rbtree. Consequently, new SKBs do not wait for the current queue to empty. When packet delays vary significantly (e.g., if packet sizes are different), this also causes unintended reordering.

I modified the code to expect a delay (and orphan the SKB) when a rate is configured. I also added some defensive tests that correctly find the latest scheduled delivery time, even if it is (unexpectedly) for a packet in sch->q. I have tested these changes on the latest kernel (4.11.0-rc1+) and the iperf / ping test results are as expected.

Signed-off-by: Nik Unger <njunger@uwaterloo.ca>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
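A toy model of the scheduling rule the fix restores (plain C, not sch_netem.c): a rate-delayed packet's delivery time must be based on the later of "now" and the last delivery already scheduled, wherever that packet happens to be queued.

  #include <stdint.h>

  /* If the base time ignores packets already scheduled (the
   * sch->q.qlen bug), per-packet transmission delays overlap and the
   * measured rate exceeds the configured limit. */
  static uint64_t schedule_delivery_ns(uint64_t now_ns,
                                       uint64_t last_scheduled_ns,
                                       uint64_t pkt_bits, uint64_t rate_bps)
  {
          uint64_t base = now_ns > last_scheduled_ns ? now_ns
                                                     : last_scheduled_ns;
          uint64_t tx_ns = pkt_bits * 1000000000ULL / rate_bps;

          return base + tx_ns;   /* serialized, never in parallel */
  }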
-
- 16 Mar, 2017 12 commits
-
David S. Miller authored
Or Gerlitz says:

====================
small set of sched cleanups

Just two cleanups -- but for the 2nd one I think we need an ack from Cong Wang to make sure this isn't actually a bug report.

changes from V1:
- addressed comment from Sergei to use 12 hex digits etc
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Or Gerlitz authored
The code introduced by commit 2ccccf5f ("net_sched: update hierarchical backlog too") only sets prev_backlog in fq_codel_dequeue() but never uses it; remove the unused assignment. Cc: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Or Gerlitz authored
As it's used only in that file. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Lin authored
Allows the BCMA version of the bgmac driver to obtain MAC address from the device tree. If no MAC address is specified there, then the previous behavior (obtaining MAC address from SPROM) is used. Signed-off-by: Steve Lin <steven.lin1@broadcom.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Jon Mason <jon.mason@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
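A hedged sketch of the usual shape of such a change (of_get_mac_address() and ether_addr_copy() are the real helpers; the function and the names around them are illustrative):

  #include <linux/etherdevice.h>
  #include <linux/of_net.h>

  /* Prefer a DT-provided MAC; keep the SPROM value as the fallback so
   * existing setups keep working unchanged. */
  static void toy_pick_mac(struct net_device *net_dev,
                           struct device_node *np, const u8 *sprom_mac)
  {
          const u8 *mac = NULL;

          if (np)
                  mac = of_get_mac_address(np);

          if (mac)
                  ether_addr_copy(net_dev->dev_addr, mac);       /* from DT */
          else
                  ether_addr_copy(net_dev->dev_addr, sprom_mac); /* SPROM */
  }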
-
Christophe Leroy authored
CONFIG_8xx is being deprecated. Since the includes dependent on CONFIG_8xx are useless, just drop them. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe Leroy authored
CONFIG_8xx is deprecated and should soon be removed in favor of CONFIG_PPC_8xx. Anyway, hfc_multi_8xx.h only uses 8xx I/O ports which are linked to the CPM1 communication processor included in the 8xx rather than the 8xx itself. This patch therefore makes it dependent on CONFIG_CPM1 instead, like several other drivers. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jane Li authored
Add basic support for handling suspend and resume. Signed-off-by: Jane Li <jiel@marvell.com> Reviewed-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
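For a platform network driver, "basic support" typically means detaching the netdev across the power transition; a minimal hedged sketch with hypothetical foo_* names, not the mvneta code:

  #include <linux/device.h>
  #include <linux/netdevice.h>

  static void foo_stop_hw(struct net_device *ndev)  { /* driver-specific */ }
  static void foo_start_hw(struct net_device *ndev) { /* driver-specific */ }

  static int foo_suspend(struct device *dev)
  {
          struct net_device *ndev = dev_get_drvdata(dev);

          if (netif_running(ndev))
                  foo_stop_hw(ndev);       /* stop DMA, mask IRQs */
          netif_device_detach(ndev);       /* tell the stack we're gone */
          return 0;
  }

  static int foo_resume(struct device *dev)
  {
          struct net_device *ndev = dev_get_drvdata(dev);

          netif_device_attach(ndev);
          if (netif_running(ndev))
                  foo_start_hw(ndev);      /* re-init HW, restart queues */
          return 0;
  }

  static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);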
-
David S. Miller authored
Jiri Pirko says:

====================
mlxsw: Enable VRF offload

Ido says:

Packets received from netdevs enslaved to different VRF devices are forwarded using different FIB tables. In the Spectrum ASIC this is achieved by binding different router interfaces (RIFs) to different virtual routers (VRs). Each RIF represents an enslaved netdev and each VR has its own FIB table according to which packets are forwarded.

The first three patches add a helper to check if a FIB rule is a default rule and extend the FIB notification chain to include the rule's info as part of the RULE_{ADD,DEL} events. This allows offloading drivers to sanitize the rules they don't support and flush their tables.

The fourth patch introduces a small change in the VRF driver to allow capable drivers to more easily offload VRFs.

Finally, the last patches gradually add support for VRFs in the mlxsw driver: first on top of port netdevs, stacked LAG and VLAN devices, and then on top of bridges.

Some limitations I would like to point out:

1) The old model where 'oif' / 'iif' rules were programmed for each L3 master device isn't supported. Upon insertion of these rules the driver will flush its tables and forwarding will be done by the kernel instead. It's inferior in every way to the single 'l3mdev' rule, so this shouldn't be an issue.

2) Inter-VRF routes pointing to a VRF device aren't offloaded. Packets hitting these routes will be forwarded by the kernel. Inter-VRF routes pointing to netdevs enslaved to a different VRF are offloaded.

3) There's a small discrepancy between the kernel's datapath and the device's. By default, packets forwarded by the kernel first do a lookup in the local table and then in the VRF's table (assuming no match). In the device, the lookup is done only in the VRF's table, which is probably the intended behavior.

Changes in v2 allow the user to properly re-order the default rules without triggering the abort mechanism.

Changes in v3:
* Remove 'l3mdev' from the matchall list, as it's related to the action and not the selector (David Ahern).
* Use container_of() instead of typecasting (David Ahern).
* Add David's Acked-by to the second patch.
* Add a helper in IPv4 code to check if a rule is a default rule (David Ahern).

Changes in v2:
* Drop the default rule indication and allow re-ordering of default rules (David Ahern).
* Remove the ifdef around 'struct fib_rule_notifier_info' and drop the redundant dependency on IP_MULTIPLE_TABLES from rocker and mlxsw.
* Add David's Acked-by to the fourth patch.
* Remove netif_is_vrf_master() and use netif_is_l3_master() instead (David Ahern).
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Now that port netdevs can be enslaved to a VRF master, we need to make sure the device's routing tables won't be flushed upon the insertion of an l3mdev rule. Note that we assume the notified l3mdev rule is a simple rule as used by the VRF master. We don't check for the presence of other selectors such as 'iif' and 'oif'. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
In a similar fashion to the previous patch, allow bridges and VLAN devices on top of bridges to be enslaved to a VRF master device. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Allow port netdevs, LAG and VLAN devices stacked on top of these to be enslaved to a VRF master device. Upon enslavement, create a router interface (RIF) for the enslaved netdev and associate it with a virtual router (VR) based on the VRF's table ID. If a RIF already exists for the netdev (e.g., due to the existence of an IP address), then it's deleted and a new one is created with the appropriate VR binding. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
We usually destroy the netdev's router interface (RIF) when the last IP address is removed from it. However, we shouldn't do that if it's enslaved to an L3 master device. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-