- 11 Mar, 2022 40 commits
-
-
Jonathan Lemon authored
This adds support for the "IN: None" selector, which disables the input on an SMA pin. This should be compatible with old firmware (the firmware will ignore it if not supported). Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jonathan Lemon authored
Assuming the firmware allows it, the direction of each SMA connector is no longer fixed. Handle remapping directions for each pin. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
When doing manual injection of a frame, it is required to check whether the TX FIFO is ready to accept the next word of the frame. For this we use 'readx_poll_timeout_atomic'; the problem is that before it actually checks the status, it first computes the time at which polling must stop, which turns out to be an expensive operation. Therefore check the status of the TX FIFO before calling 'readx_poll_timeout_atomic'. Doing this improves the TX bitrate by ~70%, because in ~99% of cases the FIFO is already ready by that time. The measurements were done using iperf3.
Before:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.03  sec  55.2 MBytes  46.2 Mbits/sec    0   sender
[  5]   0.00-10.04  sec  53.8 MBytes  45.0 Mbits/sec        receiver
After:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.10  sec  95.0 MBytes  78.9 Mbits/sec    0   sender
[  5]   0.00-10.11  sec  95.0 MBytes  78.8 Mbits/sec        receiver
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
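A minimal sketch of the fast-path idea, assuming a hypothetical status register accessor and ready bit rather than the actual lan966x register names:

	#include <linux/io.h>
	#include <linux/iopoll.h>

	/* Hypothetical helper for illustration only. */
	static int example_wait_tx_fifo_ready(void __iomem *status_reg, u32 ready_bit)
	{
		u32 val;

		/* Fast path: in the vast majority of cases the FIFO is already
		 * ready, so skip the timeout bookkeeping that
		 * readx_poll_timeout_atomic() performs before its first read. */
		if (readl(status_reg) & ready_bit)
			return 0;

		/* Slow path: bounded polling (1 us delay, 50 us timeout are
		 * illustrative values). */
		return readx_poll_timeout_atomic(readl, status_reg, val,
						 val & ready_bit, 1, 50);
	}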
-
Yihao Han authored
Remove dev_err() messages after platform_get_irq*() failures. platform_get_irq() already prints an error. Generated by: scripts/coccinelle/api/platform_get_irq.cocci Signed-off-by: Yihao Han <hanyihao@vivo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kurt Kanzenbach authored
Commit bf08824a ("flow_dissector: Add support for HSR") added support for HSR within the flow dissector. However, it only works for HSR in version 1. Version 0 uses a different Ether Type. Add support for it. Reported-by: Anthony Harivel <anthony.harivel@linutronix.de> Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Minghao Chi authored
It is not recommended to use platform_get_resource(pdev, IORESOURCE_IRQ) for requesting IRQ resources any more, as they may not be ready yet when booting from DT. platform_get_irq() is instead the recommended way of getting an IRQ, even if it was not retrieved earlier. It also makes the code simpler because we get an "int" value right away and no conversion from resource to int is required. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
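A hedged before/after sketch of the pattern this conversion targets (variable names are illustrative, taken from a generic probe function):

	/* Before: not recommended, the IRQ resource may not be populated yet
	 * when booting from DT. */
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
	if (!res)
		return -ENODEV;
	irq = res->start;

	/* After: platform_get_irq() returns the Linux IRQ number directly
	 * (or a negative errno) and handles the DT case. */
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;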
-
Lad Prabhakar authored
platform_get_resource(pdev, IORESOURCE_IRQ, ..) relies on static allocation of IRQ resources in DT core code; this causes an issue when using hierarchical interrupt domains with the "interrupts" property in the node, as it bypasses the hierarchical setup and messes up the IRQ chaining. In preparation for the removal of the static setup of IRQ resources from DT core code, use platform_get_irq() for DT users only. While at it, propagate the error code in emac_dev_stop() in case platform_get_irq_optional() fails. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Siddharth Vadapalli authored
Convert the am65-cpsw driver and am65-cpsw ethtool code to use the Phylink APIs as described in Documentation/networking/sfp-phylink.rst. All calls to phylib APIs are replaced with their Phylink equivalents. No functional change intended. Use Phylink instead of conventional phylib in preparation for adding support for SGMII/QSGMII modes. Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
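A compressed sketch of what such a conversion typically looks like per sfp-phylink.rst; the priv fields and example_phylink_mac_ops are placeholders, not the actual am65-cpsw names:

	/* Probe time: create a phylink instance (the mac_ops callbacks are
	 * assumed to be defined elsewhere). */
	priv->phylink_config.dev = &ndev->dev;
	priv->phylink_config.type = PHYLINK_NETDEV;
	priv->phylink = phylink_create(&priv->phylink_config, dev_fwnode(dev),
				       phy_mode, &example_phylink_mac_ops);
	if (IS_ERR(priv->phylink))
		return PTR_ERR(priv->phylink);

	/* ndo_open: phylink_of_phy_connect()/phylink_start() replace the
	 * former of_phy_connect()/phy_start() calls. */
	ret = phylink_of_phy_connect(priv->phylink, node, 0);
	if (ret)
		return ret;
	phylink_start(priv->phylink);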
-
Christophe Leroy authored
Today's implementation of csum_shift() leads to branching based on the parity of 'offset':

	000002f8 <csum_block_add>:
	 2f8: 70 a5 00 01  andi.   r5,r5,1
	 2fc: 41 a2 00 08  beq     304 <csum_block_add+0xc>
	 300: 54 84 c0 3e  rotlwi  r4,r4,24
	 304: 7c 63 20 14  addc    r3,r3,r4
	 308: 7c 63 01 94  addze   r3,r3
	 30c: 4e 80 00 20  blr

Use the first bit of 'offset' directly as input of the rotation instead of branching:

	000002f8 <csum_block_add>:
	 2f8: 54 a5 1f 38  rlwinm  r5,r5,3,28,28
	 2fc: 20 a5 00 20  subfic  r5,r5,32
	 300: 5c 84 28 3e  rotlw   r4,r4,r5
	 304: 7c 63 20 14  addc    r3,r3,r4
	 308: 7c 63 01 94  addze   r3,r3
	 30c: 4e 80 00 20  blr

And change to a left shift instead of a right shift to skip one more instruction. This has no impact on the final sum.

	000002f8 <csum_block_add>:
	 2f8: 54 a5 1f 38  rlwinm  r5,r5,3,28,28
	 2fc: 5c 84 28 3e  rotlw   r4,r4,r5
	 300: 7c 63 20 14  addc    r3,r3,r4
	 304: 7c 63 01 94  addze   r3,r3
	 308: 4e 80 00 20  blr

It seems that only powerpc benefits from a branchless implementation. Other main architectures like ARM or x86 get better code with the generic implementation and its branch. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
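For reference, a branchless csum_shift() along these lines can be sketched in C as below (an illustration of the idea, not necessarily the exact patch):

	static __always_inline __wsum csum_shift(__wsum sum, int offset)
	{
		/* (offset & 1) << 3 turns the parity bit into a rotate count of
		 * 0 or 8, so no branch is needed.  Rotating left rather than
		 * right differs only by a further 16-bit rotation, which does
		 * not affect the folded 16-bit checksum. */
		return (__force __wsum)rol32((__force u32)sum, (offset & 1) << 3);
	}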
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says: ==================== mlx5-updates-2022-03-10
1) Leon removes useless includes from both mlx5 and mlx4
2) Tariq adds node awareness to some object allocations
3) Gal provides cleanups and improvements to the EEPROM query
4) Paul adds software steering to connection tracking, to speed up CT rule insertion
Paul Blakey says: ================= To improve the insertion rate, this series allows using the software steering API directly instead of going through the fs_core layer. This can be done for CT because it doesn't need the fs_core layer's extra facilities, such as autogroups, FTE IDs and modifications (which require a copy of the flow key/mask). Skipping the fs_core layer also allows creating the software steering objects (dr_* objects) ahead of time and re-using them for multiple rules, whereas software steering under fs_core creates them on the fly and discards them. This in turn increases the insertion rate. The series first introduces a lightweight CT flow steering provider, with the first implementation using the fs_core layer, and moves CT to use it. The next patches implement a provider using software steering directly, bypassing fs_core, and use it if software steering is available. ================= ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Unlike the legacy EEPROM callbacks, when using the netlink EEPROM query (get_module_eeprom_by_page) the driver should not try to validate the query parameters, but just perform the read requested by userspace. Recent discussion on the mailing list: https://lore.kernel.org/netdev/20220120093051.70845141@kicinski-fedora-PC1C0HJN.hsd1.ca.comcast.net/
Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
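For context, a hedged sketch of what the netlink path's callback boils down to; the driver-side read helper is hypothetical:

	static int example_get_module_eeprom_by_page(struct net_device *dev,
						     const struct ethtool_module_eeprom *page,
						     struct netlink_ext_ack *extack)
	{
		/* No driver-side validation of page/bank/offset/length beyond
		 * what the device itself enforces: issue the read userspace
		 * asked for and return the number of bytes read or a negative
		 * errno. */
		return example_read_module_eeprom(dev, page->i2c_address,
						  page->page, page->bank,
						  page->offset, page->length,
						  page->data);
	}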
-
Gal Pressman authored
The assumption that the first byte in the module mapping dword is the module number shouldn't be hard-coded in the driver, but come from mlx5_ifc structs. While at it, fix the incorrect width for the 'rx_lane' and 'tx_lane' fields. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Gal Pressman authored
The MCIA register supports either 12 or 32 dwords, use the correct value by querying the capability from the MCAM register. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Paul Blakey authored
SMFS dr matchers are processed sequentially in hardware according to their priorities, and are not skipped if empty. Currently, smfs ct fs creates four predefined dr matchers per ct table (ct/ct nat) with hardcoded priorities. Compared to dmfs ct fs using autogroups, this might cause additional hops in the fastpath for traffic patterns that match later priorities, even if previous priorities are empty; e.g. a user only sending IPv6 UDP traffic will incur 3 additional hops. Create the matchers dynamically, using the highest priority available, on first rule usage, and remove them on last usage. Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Paul Blakey authored
The fs_core layer adds extra bookkeeping that is either unneeded for CT or unused by the underlying software steering, such as allocating FTEs and FTE ids, saving the match key and mask, and autogroup management. On top of that, direct steering has a translation layer (fs_dr) from PRM commands to direct steering objects, for example creating temporary dr_action objects. This has a performance impact when dealing with the high CT insertion rate. To use direct steering (smfs) directly for ct, add a tc ct fs smfs implementation. Instead of dmfs autogroups, smfs ct fs uses one of 4 predefined dr matchers in the CT and CT-NAT tables, one for each combination of tuple ethertype (ipv4/ipv6) and tuple ip_proto (udp/tcp) currently used by nf flow table flow offload. At rule insertion, validate that the flow rule fits one of the predefined matchers, and insert into it. To fill the dr_actions of the rule efficiently, create the fwd-to-post_ct-tbl dr_action at fs init, the count dr_action at counter creation, and re-use the already pre-allocated modify header dr_action. Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Paul Blakey authored
Add a thin layer that exports selected direct steering (dr) API which will be used by a ct fs implementation in a following patch. Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Paul Blakey authored
If sw steering was used to create the table, dr steering fs creates a backing dr table for the mlx5 flow table. Add a helper to return this table so it can be used to create matchers and add rules on it directly, instead of going via eswitch_offloads/fs_core insertion. Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Paul Blakey authored
Currently, the fs_core layer provides flow steering services to the driver, including autogroups, allocation of FTEs (flow table entries) and FTE ids, and support for FTE action modification. If software steering is then configured, rule insertion goes through a translation layer from firmware buffers to software steering objects (see fs_dr.c). The connection tracking table is a system table that is not directly controlled by the user and is a very high scale table. These fs_core services introduce an overhead that can be avoided by using the software steering API directly. Introduce a ct flow steering interface to allow multiple flow steering providers. Use the new interface to implement the current dmfs (device managed flow steering) provider, which uses fs_core insertion. Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Tariq Toukan authored
The function is node-aware and gets the node as an argument. Use a node-aware allocation for the doorbell pgdir structure. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
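A minimal sketch of the pattern behind this and the following node-awareness patches (the structure and variable names are illustrative):

	/* Before: the allocation may land on a remote NUMA node. */
	pgdir = kzalloc(sizeof(*pgdir), GFP_KERNEL);

	/* After: allocate on the node the function already receives as an
	 * argument (or the device's node, e.g. dev_to_node(dev)). */
	pgdir = kzalloc_node(sizeof(*pgdir), GFP_KERNEL, node);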
-
Tariq Toukan authored
Prefer a node-aware allocation, using the device NUMA node. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Tariq Toukan authored
Prefer a node-aware allocation, using the device NUMA node. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Tariq Toukan authored
Prefer a node-aware allocation, using the device NUMA node. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Tariq Toukan authored
Prefer a node-aware allocation, using the device NUMA node. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Leon Romanovsky authored
There is no need to include module.h in the following files. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Leon Romanovsky authored
Remove the inclusion of the unused moduleparam.h. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Jakub Kicinski authored
Alex Elder says: ==================== net: ipa: use bulk interconnect interfaces The IPA code currently enables and disables interconnects by setting the bandwidth of each to a non-zero value, or to zero. The interconnect API now supports enable/disable functions, so we can use those instead. In addition, the interconnect API provides bulk interfaces that allow all interconnects to be operated on at once. This series converts the IPA driver to use the bulk enable and disable interfaces. In the process it uses some existing data structures rather than defining new ones. ==================== Link: https://lore.kernel.org/r/20220309192037.667879-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The ipa_power structure contains a copy of the IPA device pointer, so there's no need to pass it to ipa_interconnect_init(). We can also use that pointer for an error message in ipa_power_enable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Rather than allocating the interconnect array dynamically, represent the interconnects with a variable-length array at the end of the ipa_power structure. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
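A sketch of the idea using illustrative names; the actual ipa_power layout may differ:

	struct example_power {
		struct device *dev;
		u32 interconnect_count;
		/* Flexible array member replaces the separately allocated array. */
		struct icc_bulk_data interconnect[];
	};

	/* A single allocation sized with struct_size() covers both. */
	power = kzalloc(struct_size(power, interconnect, count), GFP_KERNEL);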
-
Alex Elder authored
The previous patch used bulk interconnect operations to initialize IPA interconnects one at a time. This rearranges things to use the bulk interfaces as intended, operating on all interconnects together. As a result, ipa_interconnect_init_one() and ipa_interconnect_exit_one() are no longer needed. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Use of_icc_bulk_get(), icc_bulk_put(), icc_bulk_set_bw(), icc_bulk_enable() and icc_bulk_disable() to initialize individual IPA interconnects. Those functions already log messages in the event of an error, so we don't need to. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
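A hedged sketch of the per-interconnect setup with these helpers (field and variable names are illustrative):

	struct icc_bulk_data *icc = &power->interconnect[i];
	int ret;

	icc->name = data->name;

	/* of_icc_bulk_get() looks the path up by name and already logs on
	 * failure; icc_bulk_put() is its counterpart at teardown. */
	ret = of_icc_bulk_get(dev, 1, icc);
	if (ret)
		return ret;

	/* Record the "enabled" bandwidths and program them. */
	icc->avg_bw = data->average_bandwidth;
	icc->peak_bw = data->peak_bandwidth;
	icc_bulk_set_bw(1, icc);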
-
Alex Elder authored
The power interconnect array is now an array of icc_bulk_data structures, which is what the interconnect bulk enable and disable functions require. Get rid of ipa_interconnect_enable() and ipa_interconnect_disable(), and just call icc_bulk_enable() and icc_bulk_disable() instead. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The interconnect framework now provides the ability to enable and disable interconnects without having to change their recorded "enabled" bandwidth value. Use this mechanism, rather than setting the bandwidth values to zero and non-zero respectively to disable and enable the IPA interconnects. Disable each interconnect before setting its "enabled" average and peak bandwidth values. Thereafter, enable and disable interconnects when required rather than setting their bandwidths. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
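A short hedged sketch of the resulting power transitions (names illustrative):

	/* Bandwidths are programmed once at init; runtime power transitions
	 * now only toggle the paths. */
	static int example_power_enable(struct example_power *power)
	{
		return icc_bulk_enable(power->interconnect_count,
				       power->interconnect);
	}

	static void example_power_disable(struct example_power *power)
	{
		icc_bulk_disable(power->interconnect_count, power->interconnect);
	}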
-
Alex Elder authored
The ipa_interconnect structure contains an icc_path pointer, plus an average and peak bandwidth value. Other than the interconnect name, this matches the icc_bulk_data structure exactly. Use the icc_bulk_data structure in place of the ipa_interconnect structure, and add an initialization of its name field. Then get rid of the now unnecessary ipa_interconnect structure definition. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jonathan Lemon authored
The serial port driver attempts to test for correct THRE behavior on startup. However, it does this by disabling interrupts and then intentionally trying to trigger an interrupt in order to see if the IIR bit is set in the UART. In this FPGA design, the UART interrupt is generated through an MSI vector, so when interrupts are re-enabled after the test, the DMAR-IR reports an unhandled IRTE entry, since no irq handler is installed at this point - it is installed after the test. This only happens on the /second/ open of the UART, since on the first open the x86_vector has been installed and activated by the driver probe, and is handled correctly. When the serial port is closed for the first time, this vector is deactivated and removed, leading to this error. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Link: https://lore.kernel.org/r/20220309223427.34745-1-jonathan.lemon@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yinjun Zhang authored
Previous commits introduced AF_XDP zero-copy support, for which we need to register a different mem model for the xdp_rxq depending on whether AF_XDP zero-copy is enabled or not. This has to be done after the xdp_rxq info is registered, which is not needed for the ctrl port; otherwise the warning "Missing register, driver bug" is emitted. Fix this by not registering the mem model for the ctrl port, just like we don't register xdp_rxq info for the ctrl port. Fixes: 6402528b ("nfp: xsk: add AF_XDP zero-copy Rx and Tx support") Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com> Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20220309135533.10162-1-simon.horman@corigine.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
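A hedged sketch of the intended pattern (the ctrl-port check and ring fields are illustrative, not the exact nfp code):

	/* Only data queues register xdp_rxq info and, right after it, the
	 * memory model; the ctrl port skips both. */
	if (!is_ctrl_port) {
		err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, netdev,
				       rx_ring->idx, 0);
		if (err)
			return err;

		err = xdp_rxq_info_reg_mem_model(&rx_ring->xdp_rxq,
						 xsk_pool ?
						 MEM_TYPE_XSK_BUFF_POOL :
						 MEM_TYPE_PAGE_ORDER0,
						 NULL);
		if (err)
			return err;

		if (xsk_pool)
			xsk_pool_set_rxq_info(xsk_pool, &rx_ring->xdp_rxq);
	}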
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Jakub Kicinski authored
Tony Nguyen says: ==================== 100GbE Intel Wired LAN Driver Updates 2022-03-09 This series contains updates to the ice driver only. Martyna implements switchdev filtering on the inner EtherType field for tunnels. Marcin adds reporting of slowpath statistics for port representors. Jonathan Toppins changes a non-fatal link error message from warning to debug. Maciej removes unnecessary checks in ice_clean_tx_irq(). Amritha adds support for ADQ to match the outer destination MAC for tunnels.
* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  ice: Add support for outer dest MAC for ADQ tunnels
  ice: avoid XDP checks in ice_clean_tx_irq()
  ice: change "can't set link" message to dbg level
  ice: Add slow path offload stats on port representor in switchdev
  ice: Add support for inner etype in switchdev
==================== Link: https://lore.kernel.org/r/20220309190315.1380414-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== net: control the length of the altname list Count the memory used for altnames and don't let the user overflow the property nlattr. This was reported by George: https://lore.kernel.org/all/3e564baf-a1dd-122e-2882-ff143f7eb578@gmail.com/ ==================== Link: https://lore.kernel.org/r/20220309182914.423834-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
The property list (an altname is a link "property") is wrapped in a nlattr. An nlattr's length field is 16 bits, so practically speaking the list of properties can't be longer than that; otherwise user space would have to interpret broken netlink messages. Prevent the problem from occurring by checking the length of the property list before adding new entries. Reported-by: George Shuklin <george.shuklin@gmail.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
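A hedged sketch of the kind of bound being enforced; the list-size helper is hypothetical, only U16_MAX and nla_total_size() are real kernel symbols:

	/* The whole property list is emitted as one nlattr, so the combined
	 * size of the existing altnames plus the new one must stay within
	 * the 16-bit nlattr length. */
	size_t new_len = example_altname_list_size(dev) +
			 nla_total_size(strlen(alt_ifname) + 1);

	if (new_len > U16_MAX)
		return -EINVAL;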
-
Jakub Kicinski authored
George reports that altnames can eat up kernel memory. We should charge that memory appropriately. Reported-by: George Shuklin <george.shuklin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ilya Maximets authored
A few years ago OVS user space made a strange choice in commit [1] to define types that are only valid for user space inside its copy of the kernel uAPI header. A '#ifndef __KERNEL__' guard and another attribute were added later. This leads to an inevitable clash between user space and kernel types when the kernel uAPI is extended. The issue was unveiled with the addition of a new type for the IPv6 extension header in the kernel uAPI. When the kernel provides the OVS_KEY_ATTR_IPV6_EXTHDRS attribute to an older user space application, the application tries to parse it as OVS_KEY_ATTR_PACKET_TYPE and discards the whole netlink message as malformed. Since OVS_KEY_ATTR_IPV6_EXTHDRS is supplied along with every IPv6 packet that goes to user space, IPv6 support is fully broken. Fix that by bringing these user space attributes into the kernel uAPI to avoid the clash. Strictly speaking this is not a problem of the kernel uAPI, but changing it is the only way to avoid breaking older user space applications at this point. These 2 types are explicitly rejected now, since they should not be passed to the kernel. Additionally, OVS_KEY_ATTR_TUNNEL_INFO is moved out of the '#ifdef __KERNEL__' block, as there is no good reason to hide it from user space. It is also explicitly rejected now, because it is for in-kernel use only. Comments with warnings were added to avoid the problem coming back. (1 << type) is converted to (1ULL << type) to avoid integer overflow on OVS_KEY_ATTR_IPV6_EXTHDRS, since it equals 32 now. [1] beb75a40fdc2 ("userspace: Switching of L3 packets in L2 pipeline") Fixes: 28a3f060 ("net: openvswitch: IPv6: Add IPv6 extension header support") Link: https://lore.kernel.org/netdev/3adf00c7-fe65-3ef4-b6d7-6d8a0cad8a5f@nvidia.com Link: https://github.com/openvswitch/ovs/commit/beb75a40fdc295bfd6521b0068b4cd12f6de507c
Reported-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Ilya Maximets <i.maximets@ovn.org> Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Acked-by: Aaron Conole <aconole@redhat.com> Link: https://lore.kernel.org/r/20220309222033.3018976-1-i.maximets@ovn.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
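The shift-width detail can be shown in isolation (the mask variable is illustrative; the attribute value of 32 is taken from the description above):

	u64 attrs = 0;

	/* With OVS_KEY_ATTR_IPV6_EXTHDRS == 32, shifting a plain int by 32
	 * is undefined behaviour, so the constant must be widened. */
	attrs |= 1ULL << OVS_KEY_ATTR_IPV6_EXTHDRS;	/* OK */
	/* attrs |= 1 << OVS_KEY_ATTR_IPV6_EXTHDRS;	   overflows / UB */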
-