- 27 Nov, 2020 1 commit
-
-
Antony Antony authored
Redact the XFRM SA secret in the netlink response to xfrm_get_sa() or a dump of all SAs whenever kernel lockdown is enabled in confidentiality mode, at boot or at run time.

e.g. when enabled:
 cat /sys/kernel/security/lockdown
 none integrity [confidentiality]

 ip xfrm state
 src 172.16.1.200 dst 172.16.1.100
	proto esp spi 0x00000002 reqid 2 mode tunnel
	replay-window 0
	aead rfc4106(gcm(aes)) 0x0000000000000000000000000000000000000000 96

note: the aead secret is redacted. Redacting the secret is also a FIPS 140-2 requirement.

v1->v2
 - add size checks before memset calls
v2->v3
 - replace spaces with tabs for consistency
v3->v4
 - use kernel lockdown instead of a /proc setting
v4->v5
 - remove kconfig option

Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Antony Antony <antony.antony@secunet.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
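For reference, a minimal sketch of how such redaction could look in the xfrm netlink copy path, assuming a LOCKDOWN_XFRM_SECRET lockdown reason; the function and structure are paraphrased from the commit description, not the exact upstream diff:

	#include <linux/security.h>
	#include <net/netlink.h>
	#include <net/xfrm.h>

	/* Sketch only: redact AEAD key material when lockdown forbids
	 * reading kernel secrets (confidentiality mode). */
	static bool demo_xfrm_redact(void)
	{
		return IS_ENABLED(CONFIG_SECURITY) &&
		       security_locked_down(LOCKDOWN_XFRM_SECRET);
	}

	static int demo_copy_to_user_aead(struct xfrm_algo_aead *aead,
					  struct sk_buff *skb)
	{
		struct nlattr *nla = nla_reserve(skb, XFRMA_ALG_AEAD,
						 aead_len(aead));
		struct xfrm_algo_aead *ap;

		if (!nla)
			return -EMSGSIZE;

		ap = nla_data(nla);
		strscpy(ap->alg_name, aead->alg_name, sizeof(ap->alg_name));
		ap->alg_key_len = aead->alg_key_len;
		ap->alg_icv_len = aead->alg_icv_len;

		/* Zero the key bytes instead of copying them when redacting. */
		if (demo_xfrm_redact() && aead->alg_key_len)
			memset(ap->alg_key, 0, (aead->alg_key_len + 7) / 8);
		else
			memcpy(ap->alg_key, aead->alg_key,
			       (aead->alg_key_len + 7) / 8);
		return 0;
	}

The size check mentioned in the v1->v2 note corresponds to only touching the key bytes actually present (alg_key_len rounded up to bytes).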
-
- 10 Nov, 2020 21 commits
-
-
Alexander Lobakin authored
Similar to commit fda55eca ("net: introduce skb_transport_header_was_set()"), avoid resetting transport offsets that were already set by the GRO layer. This not only mirrors the behavior of __netif_receive_skb_core(), but also makes sense when it comes to UDP GSO fraglists forwarding: the transport offset of such skbs is set only once by the GRO receive callback and remains untouched and correct up to the xmitting driver in the 1:1 case, but becomes junk after untagging in the ingress VLAN case and breaks UDP GSO offload. This does not happen after this change, and all types of forwarding of UDP GSO fraglists work as expected. Since v1 [1]: - keep the code 1:1 with __netif_receive_skb_core() (Jakub). [1] https://lore.kernel.org/netdev/zYurwsZRN7BkqSoikWQLVqHyxz18h4LhHU4NFa2Vw@cp4-web-038.plabs.ch Signed-off-by: Alexander Lobakin <alobakin@pm.me> Link: https://lore.kernel.org/r/7JgIkgEztzt0W6ZtC9V9Cnk5qfkrUFYcpN871syCi8@cp4-web-040.plabs.ch Signed-off-by: Jakub Kicinski <kuba@kernel.org>
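A hedged sketch of the pattern the commit describes, mirroring __netif_receive_skb_core(): only reset the transport header when GRO has not already set it. The surrounding function is illustrative, not the exact upstream hunk:

	#include <linux/skbuff.h>

	/* Illustrative: where the code previously did an unconditional
	 * skb_reset_transport_header(), keep a GRO-provided offset intact. */
	static void demo_fix_headers(struct sk_buff *skb)
	{
		skb_reset_network_header(skb);
		if (!skb_transport_header_was_set(skb))
			skb_reset_transport_header(skb);
		skb_reset_mac_len(skb);
	}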
-
Jakub Kicinski authored
Heiner Kallweit says:
====================
net: add and use dev_get_tstats64

It's a frequent pattern to use netdev->stats for the less frequently accessed counters and per-cpu counters for the frequently accessed counters (rx/tx bytes/packets). Add a default ndo_get_stats64() implementation for this use case. Subsequently switch more drivers to use this pattern.

v2:
- add patches for replacing ip_tunnel_get_stats64
Requested additional migrations will come in a separate series.

v3:
- add atomic_long_t member rx_frame_errors in patch 3 for making counter updates atomic
====================
Link: https://lore.kernel.org/r/99273e2f-c218-cd19-916e-9161d8ad8c56@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
After having migrated all users remove ip_tunnel_get_stats64(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
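Each of these driver conversions in the series has the same shape; a hedged before/after sketch (the driver and ops-struct names here are illustrative, and dev_get_tstats64 is the helper added by the core patch in this series):

	#include <linux/netdevice.h>

	/* Before (illustrative tunnel driver):
	 *	.ndo_get_stats64 = ip_tunnel_get_stats64,
	 * After: point the op at the new core helper instead. */
	static const struct net_device_ops demo_tunnel_netdev_ops = {
		.ndo_get_stats64	= dev_get_tstats64,
		/* remaining ops unchanged */
	};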
-
Heiner Kallweit authored
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Acked-by: Harald Welte <laforge@gnumonks.org> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Switch ip6_tunnel to the standard statistics pattern: - use dev->stats for the less frequently accessed counters - use dev->tstats for the frequently accessed counters An additional benefit is that we now have 64bit statistics also on 32bit systems. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Switch tun to the standard statistics pattern: - use netdev->stats for the less frequently accessed counters - use netdev->tstats for the frequently accessed per-cpu counters v3: - add atomic_long_t member rx_frame_errors for making counter updates atomic Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
Use netdev->tstats instead of a member of dsa_slave_priv for storing a pointer to the per-cpu counters. This allows us to use core functionality for statistics handling. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
It's a frequent pattern to use netdev->stats for the less frequently accessed counters and per-cpu counters for the frequently accessed counters (rx/tx bytes/packets). Add a default ndo_get_stats64() implementation for this use case. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
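A hedged sketch of what such a default implementation looks like, built from the existing core helpers for this stats pattern (the exact upstream body may differ):

	#include <linux/netdevice.h>

	/* Default ndo_get_stats64 for drivers that keep infrequent counters
	 * in dev->stats and per-cpu rx/tx bytes/packets in dev->tstats. */
	static void demo_dev_get_tstats64(struct net_device *dev,
					  struct rtnl_link_stats64 *s)
	{
		netdev_stats_to_stats64(s, &dev->stats);
		dev_fetch_sw_netstats(s, dev->tstats);
	}

A driver then only has to allocate dev->tstats and set .ndo_get_stats64 to the helper, instead of open-coding the per-cpu summation.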
-
Tobias Waldekranz authored
Export the raw VTU data and related registers in a devlink region so that it can be inspected from userspace and compared to the current bridge configuration. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20201109082927.8684-1-tobias@waldekranz.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jisheng Zhang authored
The .config_aneg callback in microchip_t1 is genphy_config_aneg, so it is not needed: the phy core falls back to genphy_config_aneg() when .config_aneg is NULL. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20201109091605.3951c969@xhacker.debian Signed-off-by: Jakub Kicinski <kuba@kernel.org>
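The fallback this relies on lives in the phy core's autonegotiation path; a hedged paraphrase of that logic (not the exact upstream function):

	#include <linux/phy.h>

	/* Paraphrased: a driver only needs .config_aneg when it cannot use
	 * the generic implementation. */
	static int demo_phy_config_aneg(struct phy_device *phydev)
	{
		if (phydev->drv && phydev->drv->config_aneg)
			return phydev->drv->config_aneg(phydev);

		return genphy_config_aneg(phydev);
	}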
-
Kaixu Xia authored
Fix the following coccinelle warnings: ./drivers/net/ethernet/mellanox/mlx4/en_rx.c:687:1-17: WARNING: Assignment of 0/1 to bool variable Reported-by: Tosk Robot <tencent_os_robot@tencent.com> Signed-off-by: Kaixu Xia <kaixuxia@tencent.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/1604732038-6057-1-git-send-email-kaixuxia@tencent.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Menglong Dong authored
The initialization for 'err' with '-EINVAL' is redundant and can be removed, as it is updated soon and not used. Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604644960-48378-2-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Menglong Dong authored
The initialization for 'err' with 0 is redundant and can be removed, as it is updated by ip_send_skb() and not used before that. Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604644960-48378-4-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Horatiu Vultur authored
Replace list_head with hlist_head for the MRP list under the bridge. There is no need for a circular list when a linear list will work. This will also decrease the size of 'struct net_bridge'. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Link: https://lore.kernel.org/r/20201106215049.1448185-1-horatiu.vultur@microchip.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
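A hedged illustration of the data-structure change; the struct and field names below are made up for the example, only the list-type swap reflects the commit:

	#include <linux/list.h>
	#include <linux/types.h>

	/* Was: struct list_head (circular, two-pointer head).
	 * Now: struct hlist_head (single-pointer head), shrinking the bridge. */
	struct demo_mrp_instance {
		struct hlist_node	node;		/* previously struct list_head */
		u32			ring_id;
	};

	struct demo_bridge {
		struct hlist_head	mrp_list;	/* previously struct list_head */
	};

Iteration correspondingly moves from list_for_each_entry() to hlist_for_each_entry().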
-
Jakub Kicinski authored
Tanner Love says:
====================
net/packet: make packet_fanout.arr size configurable up to 64K

First patch makes the change; second patch adds unit tests.
====================
Link: https://lore.kernel.org/r/20201106180741.2839668-1-tannerlove.kernel@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tanner Love authored
Add an additional control test that verifies:
- specifying two different max_num_members values fails
- specifying max_num_members > PACKET_FANOUT_MAX fails
In datapath tests, set max_num_members to PACKET_FANOUT_MAX. Signed-off-by: Tanner Love <tannerlove@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tanner Love authored
One use case of PACKET_FANOUT is lockless reception with one socket per CPU. 256 is a practical limit on increasingly many machines. Increase PACKET_FANOUT_MAX to 64K. Expand setsockopt PACKET_FANOUT to take an extra argument max_num_members. Also explicitly define a fanout_args struct, instead of implicitly casting to an integer. This documents the API and simplifies the control flow. If max_num_members is not specified or is set to 0, then 256 is used, same as before. Signed-off-by: Tanner Love <tannerlove@google.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
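A hedged usage sketch of the extended setsockopt from user space; the struct layout follows the description above, but check your installed uapi headers (linux/if_packet.h) carry struct fanout_args before relying on it:

	#include <string.h>
	#include <sys/socket.h>
	#include <linux/if_packet.h>

	/* Join (or create) fanout group `id` with an enlarged member limit. */
	static int demo_join_fanout(int fd, unsigned short id,
				    unsigned int max_members)
	{
		struct fanout_args args;

		memset(&args, 0, sizeof(args));
		args.id = id;
		args.type_flags = PACKET_FANOUT_CPU;	/* any fanout mode works */
		args.max_num_members = max_members;	/* 0 keeps the old default of 256 */

		return setsockopt(fd, SOL_PACKET, PACKET_FANOUT,
				  &args, sizeof(args));
	}

Passing a plain int, as before, still works; the struct form is only needed when raising the member limit.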
-
- 09 Nov, 2020 1 commit
-
-
Menglong Dong authored
When udp_memory_allocated is at the limit, __udp_enqueue_schedule_skb will return -ENOBUFS, and the skb will be dropped in __udp_queue_rcv_skb without any counter being updated. It's hard to find out what happened once this happens. So we introduce a UDP_MIB_MEMERRORS counter to do this job. This change stays friendly to existing users, such as netstat:

$ netstat -u -s
Udp:
    0 packets received
    639 packets to unknown port received.
    158689 packet receive errors
    180022 packets sent
    RcvbufErrors: 20930
    MemErrors: 137759
UdpLite:
IpExt:
    InOctets: 257426235
    OutOctets: 257460598
    InNoECTPkts: 181177

v2:
- Fix some alignment problems

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604627354-43207-1-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
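A hedged sketch of where such a counter would be bumped, paraphrasing the drop path described above; the error-to-counter mapping is inferred from the message, not copied from the upstream diff:

	#include <net/udp.h>

	/* Paraphrased drop-accounting sketch for the UDP receive path. */
	static int demo_udp_queue_rcv(struct sock *sk, struct sk_buff *skb)
	{
		int is_udplite = IS_UDPLITE(sk);
		int rc = __udp_enqueue_schedule_skb(sk, skb);

		if (rc < 0) {
			if (rc == -ENOBUFS)	/* udp_memory_allocated limit hit */
				UDP_INC_STATS(sock_net(sk), UDP_MIB_MEMERRORS,
					      is_udplite);
			else			/* e.g. socket receive buffer full */
				UDP_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS,
					      is_udplite);
			UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
			kfree_skb(skb);
			return -1;
		}
		return 0;
	}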
-
- 08 Nov, 2020 1 commit
-
-
Voon Weifeng authored
Set all EHL/TGL phy_addr to -1 so that the driver will automatically detect it at run-time by probing all the possible 32 addresses. Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Signed-off-by: Wong Vee Khee <vee.khee.wong@intel.com> Link: https://lore.kernel.org/r/20201106094341.4241-1-vee.khee.wong@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 07 Nov, 2020 16 commits
-
-
Wang Qing authored
Actually, withing should be within. Signed-off-by: Wang Qing <wangqing@vivo.com> Link: https://lore.kernel.org/r/1604649025-22559-1-git-send-email-wangqing@vivo.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Wang Qing authored
withing should be within. Signed-off-by: Wang Qing <wangqing@vivo.com> Link: https://lore.kernel.org/r/1604650310-30432-1-git-send-email-wangqing@vivo.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Alex Elder says:
====================
net: ipa: constrain GSI interrupts

The goal of this series is to more tightly control when GSI interrupts are enabled. This is a long-ish series, so I'll describe it in parts.

The first patch is actually unrelated... I forgot to include it in my previous series (which exposed the GSI layer to the IPA version). It is a trivial comments-only update patch.

The second patch defers registering the GSI interrupt handler until *after* all of the resources that handler touches have been initialized. In practice, we don't see this interrupt that early, but this precludes an obvious problem.

The next two patches are simple changes. The first just trivially renames a field. The second switches from using constant mask values to using an enumerated type of bit positions to represent each GSI interrupt type.

The rest implement the "real work." First, all interrupts are disabled at initialization time. Next, we keep track of a bitmask of enabled GSI interrupt types, updating it each time we enable or disable one of them. From there we have a set of patches that one-by-one enable each interrupt type only during the period it is required. This includes allowing a channel to generate IEOB interrupts only when it has been enabled. And finally, the last patch simplifies some code now that all GSI interrupt types are handled uniformly.
====================
Link: https://lore.kernel.org/r/20201105181407.8006-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Now that all of the GSI interrupts are handled uniformly, change gsi_irq_type_update() so it takes a value. Have the function assign that value to the cached mask of enabled GSI IRQ types before writing it to hardware. Note that gsi_irq_teardown() will only be called after gsi_irq_disable(), so it's not necessary for the former to disable all IRQ types. Get rid of that. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Most GSI general errors are unrecoverable without a full reset. Despite that, we want to receive these errors so we can at least report what happened before whatever undefined behavior ensues. Explicitly disable all such interrupts in gsi_irq_setup(), then enable those we want in gsi_irq_enable(). List the interrupt types we are interested in (everything but breakpoint) explicitly rather than using GSI_CNTXT_GSI_IRQ_ALL, and remove that symbol's definition. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
It is possible for other execution environments (EEs, like the modem) to request changes to local (AP) channel or event ring state. We do not support this feature. In gsi_irq_setup(), explicitly zero the mask that defines which channels are permitted to generate inter-EE channel state change interrupts. Do the same for the event ring mask. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
A GSI channel must be started in order to use it to perform a data transfer (or command) transaction. And the only time we'll see an IEOB interrupt is if we send a transaction to a started channel. Therefore we do not need to have the IEOB interrupt type enabled until at least one channel has been started. And once the last started channel has been stopped, we can disable the IEOB interrupt type again. We already enable the IEOB interrupt for a particular channel only when it is started. Extend that by having the IEOB interrupt *type* be enabled only when at least one channel is in STARTED state. Disallow all channels from triggering the IEOB interrupt in gsi_irq_setup(). We only enable a channel's interrupt when needed, so there is no longer any need to zero the channel mask in gsi_irq_disable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The completion of a generic EE GSI command is signaled by a global interrupt of type GP_INT1. The only other used type for a global interrupt is a hardware error report. First, disallow all global interrupt types in gsi_irq_setup(). We want to know about hardware errors, so re-enable the interrupt type in gsi_irq_enable(), to allow hardware errors to be reported. Disable that interrupt type again in gsi_irq_disable(). We only issue generic EE commands one at a time, and there's no reason to keep the completion interrupt enabled when no generic EE command is pending. We furthermore have no need to enable the GP_INT2 or GP_INT3 interrupt types (which aren't used). The change in gsi_irq_enable() makes GSI_CNTXT_GLOB_IRQ_ALL unused, so get rid of it. Have gsi_generic_command() enable the GP_INT1 interrupt type (in addition to the ERROR_INT type) only while a generic command is pending. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
A GSI event ring causes an event control interrupt to fire whenever its state changes (between NOT_ALLOCATED and ALLOCATED). No event ring should ever change state except when we request it to. Currently, we permit *all* event rings to generate event control interrupts--even those that are never used. And we enable event control interrupts essentially at all times, from setup to teardown. Instead, only enable the event control interrupt type for the duration of an event ring command, and when doing so, only allow the event ring being operated upon to cause the interrupt to fire. Disallow all event rings from issuing the event control interrupt in gsi_irq_setup(). Because an event ring's interrupt is only enabled when needed, there is no longer any need to zero the event ring mask in gsi_irq_disable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
A GSI channel causes a channel control interrupt to fire whenever its state changes (between NOT_ALLOCATED, ALLOCATED, STARTED, etc.). We do not support inter-EE channel commands (initiated by other EEs), so no channel should ever change state except when we request it to. Currently, we permit *all* channels to generate channel control interrupts--even those that are never used. And we enable channel control interrupts essentially at all times, from setup to teardown. Instead, disable all channel control interrupts initially in gsi_irq_setup(), and only enable the channel control interrupt type for the duration of a channel command. When doing so, only allow the channel being operated upon to cause the interrupt to fire. Because a channel's interrupt is now enabled only when needed (one channel at a time), there is no longer any need to zero the channel mask in gsi_irq_disable(). Add new gsi_irq_type_enable() and gsi_irq_type_disable() as helper functions to control whether a given GSI interrupt type is enabled. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Keep track of the set of GSI interrupt types that are currently enabled by recording the mask value to write (or last written) to the TYPE_IRQ_MSK register. Create a new helper function gsi_irq_type_update() to handle actually writing the register. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
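A hedged sketch of what such a caching helper could look like, together with the enable/disable wrappers described in the neighboring patch; the register name and struct fields are taken from the commit descriptions in this series and may not match the driver source exactly:

	#include <linux/io.h>
	#include <linux/bits.h>

	/* Cache the enabled-interrupt-type mask, then push it to hardware. */
	static void demo_gsi_irq_type_update(struct gsi *gsi, u32 val)
	{
		gsi->type_enabled_bitmap = val;
		iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
	}

	static void demo_gsi_irq_type_enable(struct gsi *gsi, u32 type_id)
	{
		demo_gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
	}

	static void demo_gsi_irq_type_disable(struct gsi *gsi, u32 type_id)
	{
		demo_gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
	}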
-
Alex Elder authored
Introduce gsi_irq_setup() and gsi_irq_teardown() to disable all GSI interrupts when first setting up GSI hardware, and to clean things up when we're done. Re-enable all GSI interrupt types in gsi_irq_enable(), but do so only after each of the type-specific interrupt masks has been configured. Similarly, disable all interrupt types in gsi_irq_disable()--first--before zeroing out the type-specific masks. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Define the GSI interrupt types with an enumerated type whose values are the bit positions representing each interrupt type. Include a short comment describing how each interrupt type is used. Build up the enabled interrupt mask explicitly in gsi_irq_enable(), and get rid of the definition of GSI_CNTXT_TYPE_IRQ_MSK_ALL. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
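A hedged sketch of an enumerated type of bit positions and how gsi_irq_enable() could build the mask from it; the names and bit assignments are paraphrased from the series description, not copied from the driver source:

	#include <linux/bits.h>
	#include <linux/types.h>

	/* Bit positions of the GSI interrupt types (paraphrased names). */
	enum demo_gsi_irq_type_id {
		DEMO_GSI_CH_CTRL	= 0,	/* channel allocation, start, stop, ... */
		DEMO_GSI_EV_CTRL	= 1,	/* event ring allocation, reset, ... */
		DEMO_GSI_GLOB_EE	= 2,	/* global errors, generic command completion */
		DEMO_GSI_IEOB		= 3,	/* transfer completion (IEOB) */
		DEMO_GSI_INTER_EE_CH_CTRL = 4,	/* other-EE channel requests (unused) */
		DEMO_GSI_INTER_EE_EV_CTRL = 5,	/* other-EE event ring requests (unused) */
		DEMO_GSI_GENERAL	= 6,	/* general errors, breakpoints */
	};

	/* Build the enabled mask explicitly instead of a catch-all constant. */
	static u32 demo_gsi_irq_enable_mask(void)
	{
		return BIT(DEMO_GSI_CH_CTRL) | BIT(DEMO_GSI_EV_CTRL) |
		       BIT(DEMO_GSI_GLOB_EE) | BIT(DEMO_GSI_IEOB) |
		       BIT(DEMO_GSI_GENERAL);
	}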
-
Alex Elder authored
Rename the "event_enable_bitmap" field of the GSI structure to be "ieob_enabled_bitmap". An upcoming patch will cache the last value stored for another interrupt mask and this is a more direct naming convention to follow. Add a few comments to explain the bitmap fields in the GSI structure. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Introduce gsi_irq_init() and gsi_irq_exit(), to encapsulate looking up the GSI IRQ and registering its handler. Call gsi_irq_init() a little later in gsi_init(), and initialize the completion earlier. The IRQ handler accesses both the GSI virtual memory pointer and the completion, and this way these things will have been initialized before the gsi_irq() can ever be called. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The GSI code is now exposed to IPA version numbers, and we handle version-specific behavior based on the IPA version. Modify some comments that talk about GSI versions so they reference IPA versions instead. Correct version number errors in a couple of these comments. The (comment) mapping between IPA and GSI versions in the definition of the ipa_version enumerated type remains. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-