- 14 Jul, 2020 40 commits
-
Julian Wiedmann authored
We're not modifying these data blobs, so mark them as constant. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
To keep track of the addresses programmed from an RX modeset, we have two separate hashtables (L2: mac_htable, L3: ip_mc_htable). These are never used at the same time, so unify them into a single rx_mode_addrs hashtable. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
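A minimal sketch of the single-hashtable approach described above, using the generic kernel hashtable API (<linux/hashtable.h>); the entry struct and helper are hypothetical, only the rx_mode_addrs name comes from the commit:

#include <linux/hashtable.h>
#include <linux/jhash.h>
#include <linux/types.h>

/* Hypothetical entry: one table can hold both unicast MAC and multicast IP
 * addresses, since the two address types are never used at the same time. */
struct rx_mode_addr {
	struct hlist_node node;
	u8 addr[16];		/* MAC or IP address bytes */
	unsigned int len;
};

static DEFINE_HASHTABLE(rx_mode_addrs, 4);	/* 2^4 buckets */

static void rx_mode_addr_add(struct rx_mode_addr *entry)
{
	u32 key = jhash(entry->addr, entry->len, 0);

	hash_add(rx_mode_addrs, &entry->node, key);
}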
-
Julian Wiedmann authored
While initially just trying to fix up the indentation, condense a few lines and get rid of a goto label. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Use the correct struct member instead of hardcoding its offset. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Use the correct helper for casting to a user pointer. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
As the cmd IO path has learned to propagate errnos back to its callers, let them deal with errors instead of trying to restore their previous configuration from within the IO error path. Also translate the HW error to a meaningful errno, instead of returning -EIO for all cases (and don't map this to -EOPNOTSUPP later on...). While at it, add a READ_ONCE() / WRITE_ONCE() pair to ensure that the data path always sees a valid isolation mode during reconfiguration. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
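A generic sketch of the READ_ONCE()/WRITE_ONCE() pairing mentioned above; the struct and helpers are illustrative, not the actual qeth code:

#include <linux/compiler.h>

struct card {
	int isolation;			/* currently programmed isolation mode */
};

/* Reconfiguration path: publish the new mode in a single store. */
static void card_set_isolation(struct card *card, int mode)
{
	WRITE_ONCE(card->isolation, mode);
}

/* Data path: always observes a valid, untorn value. */
static int card_get_isolation(const struct card *card)
{
	return READ_ONCE(card->isolation);
}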
-
Julian Wiedmann authored
When qeth_set_access_ctrl_online() is called during the device's initialization and discovers that isolation mode isn't supported, don't clear the user's currently configured mode. The user intentionally chose to operate the device in this specific mode, and degrading the isolation is not an option. Only adjust the configuration when called via sysfs (ie. fallback = 1), and here follow the common pattern and restore it from prev_isolation. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
A newly initialized device defaults to ISOLATION_MODE_NONE, don't bother with programming this a second time. Then remove the OSD/OSX check, it's already done in the sysfs path whenever the user actually changes the configuration. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
If we cancel all pending cmds (eg. when tearing down the device), don't blame it on an IO error. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Rather than delaying the decision until netdev setup, immediately reject a device when we discover that it has an unsupported link type (ie. Token Ring). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: Mirror to CPU preparations A future patch set will add the ability to trap packets that were dropped due to buffer-related reasons (e.g., early drop). Internally this is implemented by mirroring these packets towards the CPU port. This patch set adds the required infrastructure to enable such mirroring. Patches #1-#2 extend two registers needed for the above-mentioned functionality. Patches #3-#6 gradually add support for setting the mirroring target of a SPAN (mirroring) agent as the CPU port. This is only supported from Spectrum-2 onwards, so an error is returned for Spectrum-1. Patches #7-#8 add the ability to set a policer on a SPAN agent. This is required because unlike regularly trapped packets, a policer cannot be set on the trap group with which the mirroring trap is associated. Patches #9-#12 parse the mirror reason field from the Completion Queue Element (CQE). Unlike for other trapped packets, the trap identifier of mirrored packets only indicates that the packet was mirrored, but not why. The reason (e.g., tail drop) is encoded in the mirror reason field. Patch #13 utilizes the mirror reason field in order to look up the matching Rx listener. This allows us to maintain the abstraction that an Rx listener is mapped to a single trap reason. Without taking the mirror reason into account we would need to register a single Rx listener for all mirrored packets. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The Rx listener abstraction allows the switch driver (e.g., mlxsw_spectrum) to register a function that is called when a packet is received (trapped) for a specific reason. Up until now, the Rx listener lookup was solely based on the trap identifier. However, when a packet is mirrored to the CPU the trap identifier merely indicates that the packet was mirrored, but not why it was mirrored. This makes it impossible for the switch driver to register different Rx listeners for different mirror reasons. Solve this by allowing the switch driver to register an Rx listener with a mirror reason and by extending the Rx listener lookup to take the mirror reason into account. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
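An illustrative sketch of the extended lookup; the key layout and field names are hypothetical, not the actual mlxsw structures:

#include <linux/types.h>

struct rx_listener_key {
	u16 trap_id;
	u8 mirror_reason;	/* 0 for non-mirrored traps */
};

/* A listener now matches only if both the trap ID and (for mirrored
 * packets) the mirror reason agree. */
static bool rx_listener_matches(const struct rx_listener_key *key,
				u16 trap_id, u8 mirror_reason)
{
	return key->trap_id == trap_id && key->mirror_reason == mirror_reason;
}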
-
Ido Schimmel authored
In case the mirror reason is valid, retrieve it into the Rx information so that it could be used during listener lookup in a later patch. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The Completion Queue Element version 2 (CQEv2) includes a field called 'mirror_reason' which indicates why the packet was mirrored to the CPU. Add the field so that it can be used by a later patch. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Packets that are mirrored to the CPU port are trapped with one of eight trap identifiers. Add them. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
The trap identifier was increased to 10 bits in new versions of the Programmer's Reference Manual (PRM). Increase it accordingly in the Host PacKet Trap (HPKT) register and in the Completion Queue Element (CQE). This is significant for subsequent patches that will introduce trap identifiers which utilize the extended range. Signed-off-by: Amit Cohen <amitc@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
When mirroring packets to the CPU port the mirrored packets are trapped to the CPU. However, unlike other traps, it is not possible to set a policer on the associated trap group. Instead, the policer needs to be set on the SPAN agent. Moreover, the policer ID must be within a specified range: From a configurable (even) base ID to this base plus the maximum number of SPAN agents. While the immediate use case is to set the policer on a SPAN agent that mirrors to the CPU port, a policer can be set on any SPAN agent. Therefore, the operation is implemented for all SPAN agent types. Extend the SPAN agent request API to allow passing the desired policer ID that should be bound to the SPAN agent. Return an error for Spectrum-1, as it does not support policer setting on a SPAN agent. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
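A sketch of the range check implied above, assuming a configurable even base ID and the maximum number of SPAN agents; this is a hypothetical helper, not the mlxsw implementation:

#include <linux/types.h>

static bool span_policer_id_valid(u16 policer_id, u16 base_id,
				  u16 max_span_agents)
{
	if (base_id & 1)	/* the base ID must be even */
		return false;

	return policer_id >= base_id &&
	       policer_id < base_id + max_span_agents;
}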
-
Ido Schimmel authored
Currently, the only parameter of a SPAN agent is the netdev which the SPAN agent should mirror to. The next patch will add the ability to request a SPAN agent that mirrors to a specific netdev and has a specific policer ID bound to it. This is required when mirroring packets to the CPU port. Therefore, encapsulate the sole parameter to mlxsw_sp_span_agent_get() in a structure, so that it could later be extended with policer information. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
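A sketch of the refactoring pattern described above: the single netdev argument becomes a parameter struct that a later patch can extend with policer information. The struct and prototype details are illustrative; only the mlxsw_sp_span_agent_get() name comes from the commit:

struct net_device;
struct mlxsw_sp;

/* Hypothetical parameter struct; a later patch can grow it with, e.g.,
 * a policer enable flag and a policer ID. */
struct span_agent_parms {
	const struct net_device *to_dev;
};

/* Before: ...agent_get(struct mlxsw_sp *sp, const struct net_device *to_dev, int *p_span_id);
 * After: */
int mlxsw_sp_span_agent_get(struct mlxsw_sp *mlxsw_sp, int *p_span_id,
			    const struct span_agent_parms *parms);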
-
Ido Schimmel authored
The Spectrum-2 and Spectrum-3 ASICs are able to mirror packets towards the CPU. These packets are then trapped like any other packet, but with a special packet trap and additional metadata such as why the packet was mirrored. The ability to mirror packets towards the CPU will be utilized by a subsequent patch set that will mirror packets that were dropped by the ASIC for various buffer-related reasons, such as tail-drop and early-drop. Add mirroring towards the CPU as a new SPAN agent type and re-use the functions that mirror to a physical port where possible. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Currently, the destination netdev to which we mirror must be a valid netdev. However, this is going to change with the introduction of mirroring towards the CPU port, as the CPU port does not have a backing netdev. Avoid dereferencing the destination netdev when it is not clear if it is valid or not. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The parms_set() callback is supposed to fill in the parameters for the SPAN agent, such as the destination port and encapsulation info, if any. When mirroring to the CPU port we cannot resolve the destination port (the CPU port) without access to the driver private info. Pass the driver private info to parms_set() callback so that it could be used later on to resolve the CPU port. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The various SPAN agent types differ in their mirror targets (i.e., physical port netdev vs. VLAN netdev) and the encapsulation headers that they need to encapsulate the mirrored packets with. The Spectrum-2 and Spectrum-3 ASICs support a SPAN agent type that is able to mirror towards the CPU, whereas the Spectrum-1 ASIC does not. Prepare for the addition of this new SPAN agent type by splitting the SPAN agent operations to be per-ASIC. Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Allow setting mirroring_pid_base using MOGCR register. Signed-off-by: Amit Cohen <amitc@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Allow setting session_id and pid as part of port analyzer configurations. Signed-off-by: Amit Cohen <amitc@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below. No GFP_ flag needs to be corrected. It has been compile tested. @@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL @@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE @@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE @@ @@ - PCI_DMA_NONE + DMA_NONE @@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5) @@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2) Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
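A minimal before/after example of what this conversion looks like in a driver (illustrative snippet, not taken from the patched driver):

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int example_map(struct pci_dev *pdev, void *buf, size_t len,
		       dma_addr_t *handle)
{
	/* Before: *handle = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE); */
	*handle = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(&pdev->dev, *handle))
		return -ENOMEM;

	return 0;
}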
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested. When memory is allocated in 'init_shared_mem()' GFP_KERNEL can be used because this flag is already used to allocate some memory in this function. While at it, update some debug messages to match the new function names. @@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL @@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE @@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE @@ @@ - PCI_DMA_NONE + DMA_NONE @@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5) @@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2) Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested. When memory is allocated in 'lan743x_tx_ring_cleanup()' and 'lan743x_rx_ring_init()', GFP_KERNEL can be used because this flag is already used to allocate some memory in these functions. While at it, remove a useless (void *) cast in the first hunk so that the code is more consistent. @@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL @@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE @@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE @@ @@ - PCI_DMA_NONE + DMA_NONE @@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5) @@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2) Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Horatiu Vultur says: ==================== bridge: mrp: Add support for interconnect ring This patch series extends the existing MRP to add support for interconnect rings. An interconnect ring is a ring that connects 2 rings; in this way it is possible to connect multiple rings. Each interconnect ring is formed of 4 nodes, of which 3 have the role MIC (Media Redundancy Interconnect Client) and one has the role MIM (Media Redundancy Interconnect Manager). All these nodes need to have the same ID, and the ID needs to be unique between multiple interconnect rings. Two of the nodes need to be part of one ring and the other 2 nodes need to be part of the other ring that is connected. [ASCII topology diagram: two MRP rings, each with one MRM and two MRCs; the top ring's MRC/MIC and MRC/MIM are linked through their interconnect ports to the bottom ring's two MRC/MIC nodes, forming the interconnect ring.] Each node in a ring needs to have one of the following ring roles, MRM or MRC, and it can also have an interconnect role like MIM or MIC if it is part of an interconnect ring. In the figure above the MRMs don't have any interconnect role, but the MRCs from the top ring have the interconnect roles MIC and MIM respectively. It is therefore not possible for a node to have only an interconnect role. There are 2 ways for an interconnect ring to detect whether it is open or closed: 1. Use CCM frames on the interconnect port to detect when the interconnect link goes down/up. This mode is called LC-mode. 2. Send InTest frames on all 3 ports (2 ring ports and 1 interconnect port) and detect when these frames are received back. This mode is called RC-mode. This patch series adds support only for RC-mode. The MIM sends InTest frames on all 3 ports and detects when it receives them back. When it receives the InTest frames it means that the ring is closed, so it sets the interconnect port in the blocking state. If it stops receiving the InTest frames, it sets the port in the forwarding state and sends InTopo frames. These InTopo frames are received and processed by the MRM nodes, and the MRMs then send Topo frames in their rings so each client clears its FDB. v4: - always cancel the delayed work if the MRP instance is deleted or the interconnect role is disabled, and only start sending InTest frames if the role is MIM. v3: - update 'br_mrp_set_in_role' to stop sending test frames if the role is disabled, and don't allow setting a different interconnect port if there is already one. v2: - rearrange structures not to contain holes - stop sending MRP_InTest frames when the MRP instance is deleted ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
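A rough sketch of the RC-mode MIM behaviour described in the cover letter; all names are illustrative and this is not the bridge implementation:

#include <linux/types.h>

enum in_state { IN_OPEN, IN_CLOSED };

struct mim {
	enum in_state state;
	unsigned int missed;	/* consecutive intervals without our InTest coming back */
	unsigned int max_miss;	/* allowed misses before declaring the ring open */
	bool intest_seen;	/* set by the RX path when our InTest is received back */
};

static void mim_test_work(struct mim *mim)
{
	/* send MRP_InTest on both ring ports and the interconnect port ... */

	if (mim->intest_seen) {
		mim->missed = 0;
		mim->state = IN_CLOSED;
		/* set the interconnect port to blocking */
	} else if (++mim->missed > mim->max_miss) {
		mim->state = IN_OPEN;
		/* set the interconnect port to forwarding, send MRP_InTopo */
	}
	mim->intest_seen = false;
	/* re-arm the delayed work for the next test interval */
}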
-
Horatiu Vultur authored
This patch adds a new port attribute, IFLA_BRPORT_MRP_IN_OPEN, which allows notifying userspace when the node has lost the continuity of MRP_InTest frames. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
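Illustrative only: how such a per-port attribute could be filled from the BR_MRP_LOST_IN_CONT flag added later in this series (assumes the IFLA_BRPORT_MRP_IN_OPEN definition from this patch; this is not the actual br_netlink.c code):

#include <net/netlink.h>

static int fill_mrp_in_open(struct sk_buff *skb, bool lost_in_cont)
{
	/* "open" is reported when MRP_InTest continuity was lost */
	return nla_put_u8(skb, IFLA_BRPORT_MRP_IN_OPEN, !!lost_in_cont);
}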
-
Horatiu Vultur authored
This patch extends the function br_mrp_fill_info to also return the status of the interconnect ring. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
Extend the existing MRP_INFO to return the status of the MRP interconnect. If there is no MRP interconnect on the node, the role will be disabled, so the other attributes can be ignored. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
This patch extends the existing MRP netlink interface with the following attributes: IFLA_BRIDGE_MRP_IN_ROLE, IFLA_BRIDGE_MRP_IN_STATE and IFLA_BRIDGE_MRP_START_IN_TEST. These attributes are similar to their ring counterparts, but they apply to the interconnect port. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
This patch adds support for MRP Interconnect. As with the MRP ring, if the HW can't generate MRP_InTest frames, the SW will try to generate them. If the SW also fails to generate the frames, an error is returned to userspace. The forwarding/termination of MRP_In frames happens in the kernel and is done by MRP instances. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
Implement the MRP API for interconnect switchdev. Similar to the other br_mrp_switchdev functions, these functions eventually just call the switchdev functions: switchdev_port_obj_add/del. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
This patch adds a function that notifies userspace when the node has lost the continuity of MRP_InTest frames. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
This patch renames the function br_mrp_port_open to br_mrp_ring_port_open. This makes it clearer that a ring port lost continuity, since there will also be a br_mrp_in_port_open. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
This patch extends the 'struct br_mrp' to contain information regarding the MRP interconnect. It contains the following: - the interconnect port 'i_port', which is NULL if the node doesn't have an interconnect role - the interconnect id, which is similar to the ring id, but this field is also part of the MRP_InTest frames. - the interconnect role, which can be MIM or MIC. - the interconnect state, which can be open or closed. - the interconnect delayed_work for sending MRP_InTest frames and checking for loss of continuity. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
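A rough sketch of the fields described above; the real 'struct br_mrp' has more members and a different layout:

#include <linux/types.h>
#include <linux/workqueue.h>

struct net_bridge_port;

struct br_mrp_in_sketch {
	struct net_bridge_port *i_port;	/* NULL when there is no interconnect role */
	u16 in_id;			/* interconnect ID, also carried in MRP_InTest frames */
	u32 in_role;			/* MIM or MIC */
	u32 in_state;			/* open or closed */
	struct delayed_work in_test_work; /* send MRP_InTest, detect loss of continuity */
};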
-
Horatiu Vultur authored
This patch adds a new flag (BR_MRP_LOST_IN_CONT) to the net bridge ports. This bit will be set when the port loses the continuity of MRP_InTest frames. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
Extend the existing MRP netlink attributes to allow configuring MRP Interconnect: IFLA_BRIDGE_MRP_IN_ROLE - the parameter type is br_mrp_in_role, which contains the interconnect id, the ring id, the interconnect role (MIM or MIC) and the ifindex of the port that represents the interconnect port. IFLA_BRIDGE_MRP_IN_STATE - the parameter type is br_mrp_in_state, which contains the interconnect id and the interconnect state. IFLA_BRIDGE_MRP_IN_TEST - the parameter type is br_mrp_start_in_test, which contains the interconnect id, the interval at which to send MRP_InTest frames, how many test frames can be missed before declaring the interconnect ring open, and the period for how long to send MRP_InTest frames. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
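A sketch of the parameter structs described above, following the commit text; the exact uapi field names and layout may differ:

#include <linux/types.h>

struct br_mrp_in_role_sketch {
	__u32 ring_id;
	__u32 in_role;		/* MIM or MIC */
	__u32 i_ifindex;	/* the interconnect port */
	__u16 in_id;
};

struct br_mrp_in_state_sketch {
	__u32 in_state;		/* open or closed */
	__u16 in_id;
};

struct br_mrp_start_in_test_sketch {
	__u32 interval;		/* how often to send MRP_InTest frames */
	__u32 max_miss;		/* misses allowed before declaring the ring open */
	__u32 period;		/* for how long to send MRP_InTest frames */
	__u16 in_id;
};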
-
Horatiu Vultur authored
Extend the switchdev API to add support for MRP interconnect. The HW is notified in the following cases: SWITCHDEV_OBJ_ID_IN_ROLE_MRP: This is used when the interconnect role of the node changes. The supported roles are MIM and MIC. SWITCHDEV_OBJ_ID_IN_STATE_MRP: This is used when the interconnect ring changes its state to open or closed. SWITCHDEV_OBJ_ID_IN_TEST_MRP: This is used to start/stop sending MRP_InTest frames on all MRP ports. This is called only on nodes that have the interconnect role MIM. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
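An illustrative driver-side dispatch on the new object IDs; the handle_*() helpers are hypothetical and error handling is reduced to the essentials:

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <net/switchdev.h>

int handle_in_role(struct net_device *dev, const struct switchdev_obj *obj);
int handle_in_state(struct net_device *dev, const struct switchdev_obj *obj);
int handle_in_test(struct net_device *dev, const struct switchdev_obj *obj);

static int port_obj_add_sketch(struct net_device *dev,
			       const struct switchdev_obj *obj)
{
	switch (obj->id) {
	case SWITCHDEV_OBJ_ID_IN_ROLE_MRP:	/* interconnect role: MIM or MIC */
		return handle_in_role(dev, obj);
	case SWITCHDEV_OBJ_ID_IN_STATE_MRP:	/* interconnect ring open/closed */
		return handle_in_state(dev, obj);
	case SWITCHDEV_OBJ_ID_IN_TEST_MRP:	/* start/stop MRP_InTest generation (MIM only) */
		return handle_in_test(dev, obj);
	default:
		return -EOPNOTSUPP;
	}
}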
-