- 07 Feb, 2023 10 commits
-
-
Eric Dumazet authored
Recent removal of ksize() in alloc_skb() increased performance because we no longer read the associated struct page. We have an equivalent cost at kfree_skb() time. kfree(skb->head) has to access a struct page, often cold in cpu caches, to get the owning struct kmem_cache. Considering that many allocations are small (at least for TCP ones) we can have our own kmem_cache to avoid the cache line miss. This also saves memory because these small heads are no longer padded to 1024 bytes.

CONFIG_SLUB=y
$ grep skbuff_small_head /proc/slabinfo
skbuff_small_head   2907   2907    640   51    8 : tunables    0    0    0 : slabdata     57     57      0

CONFIG_SLAB=y
$ grep skbuff_small_head /proc/slabinfo
skbuff_small_head    607    624    640    6    1 : tunables   54   27    8 : slabdata    104    104      5

Notes:
- After Kees Cook patches and this one, we might be able to revert commit dbae2b06 ("net: skb: introduce and use a single page frag cache") because GRO_MAX_HEAD is also small.
- This patch is a NOP for CONFIG_SLOB=y builds.

Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
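As a rough sketch of the idea (the size constant and helper names below are illustrative, not the exact symbols added by the patch): a dedicated slab cache is created once and used whenever the requested head fits, with a kmalloc() fallback for larger heads.

    /* sketch only; 640 matches the object size visible in the slabinfo
     * output above, the real code derives the size from the maximum
     * expected small head */
    #define SKB_SMALL_HEAD_CACHE_SIZE 640

    static struct kmem_cache *skb_small_head_cache __ro_after_init;

    void __init skb_small_head_cache_init(void)
    {
            skb_small_head_cache = kmem_cache_create("skbuff_small_head",
                                                     SKB_SMALL_HEAD_CACHE_SIZE,
                                                     0, SLAB_HWCACHE_ALIGN, NULL);
    }

    static void *skb_head_alloc(unsigned int size, gfp_t gfp)
    {
            /* small TCP/GRO heads come from the dedicated cache ... */
            if (size <= SKB_SMALL_HEAD_CACHE_SIZE)
                    return kmem_cache_alloc(skb_small_head_cache, gfp);
            /* ... everything else keeps using kmalloc() */
            return kmalloc(size, gfp);
    }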
-
Eric Dumazet authored
All kmalloc_reserve() callers have to make the same computation; we can factor it out to prepare for the following patch in the series. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
This is a cleanup patch to prepare for the following change. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
We have many places using this expression:

    SKB_DATA_ALIGN(sizeof(struct skb_shared_info))

Use of SKB_HEAD_ALIGN() will allow us to clean them up. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
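For reference, such a helper presumably looks roughly like this (a sketch based on the description above, not a verbatim quote of the patch):

    /* reserve the aligned head plus the aligned skb_shared_info that
     * always follows it */
    #define SKB_HEAD_ALIGN(X) (SKB_DATA_ALIGN(X) + \
                               SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))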
-
Paolo Abeni authored
Merge tag 'linux-can-next-for-6.3-20230206' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2023-02-06

this is a pull request of 47 patches for net-next/master.

The first two patches are by Oliver Hartkopp. One adds missing error checking to the CAN_GW protocol, the other adds a missing CAN address family check to the CAN ISO TP protocol.

Thomas Kopp contributes a performance optimization to the mcp251xfd driver.

The next 11 patches are by Geert Uytterhoeven and add support for R-Car V4H systems to the rcar_canfd driver.

Stephane Grosjean and Lukas Magel contribute 8 patches to the peak_usb driver, which add support for configurable CAN channel ID.

The last 17 patches are by me and target the CAN bit timing configuration. The bit timing is cleaned up, error messages are improved and forwarded to user space via NL_SET_ERR_MSG_FMT() instead of netdev_err(), and the SJW handling is updated, including the definition of a new default value that will benefit CAN-FD controllers, by increasing their oscillator tolerance.

* tag 'linux-can-next-for-6.3-20230206' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next: (47 commits)
  can: bittiming: can_validate_bitrate(): report error via netlink
  can: bittiming: can_calc_bittiming(): convert from netdev_err() to NL_SET_ERR_MSG_FMT()
  can: bittiming: can_calc_bittiming(): clean up SJW handling
  can: bittiming: can_sjw_set_default(): use Phase Seg2 / 2 as default for SJW
  can: bittiming: can_sjw_check(): check that SJW is not longer than either Phase Buffer Segment
  can: bittiming: can_sjw_check(): report error via netlink and harmonize error value
  can: bittiming: can_fixup_bittiming(): report error via netlink and harmonize error value
  can: bittiming: factor out can_sjw_set_default() and can_sjw_check()
  can: bittiming: can_changelink() pass extack down callstack
  can: netlink: can_changelink(): convert from netdev_err() to NL_SET_ERR_MSG_FMT()
  can: netlink: can_validate(): validate sample point for CAN and CAN-FD
  can: dev: register_candev(): bail out if both fixed bit rates and bit timing constants are provided
  can: dev: register_candev(): ensure that bittiming const are valid
  can: bittiming: can_get_bittiming(): use direct return and remove unneeded else
  can: bittiming: can_fixup_bittiming(): set effective tq
  can: bittiming: can_fixup_bittiming(): use CAN_SYNC_SEG instead of 1
  can: bittiming(): replace open coded variants of can_bit_time()
  can: peak_usb: Reorder include directives alphabetically
  can: peak_usb: align CAN channel ID format in log with sysfs attribute
  can: peak_usb: export PCAN CAN channel ID as sysfs device attribute
  ...
====================

Link: https://lore.kernel.org/r/20230206131620.2758724-1-mkl@pengutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Vladimir Oltean authored
If ops->get_mm() returns a non-zero error code, we goto out_complete, but there, we return 0. Fix that to propagate the "ret" variable to the caller. If ops->get_mm() succeeds, it will always return 0. Fixes: 2b30f829 ("net: ethtool: add support for MAC Merge layer") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230206094932.446379-1-vladimir.oltean@nxp.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Eddy Tao authored
Use the actual CPU count instead of a hardcoded value to decide the size of 'cpu_used_mask' in 'struct sw_flow'. The reason: 'struct cpumask cpu_used_mask' is embedded in struct sw_flow, and its size is hardcoded to CONFIG_NR_CPUS bits, which can be 8192 by default; this costs memory and slows down ovs_flow_alloc. To address this (see the sketch below): redefine cpu_used_mask as a pointer, append cpumask_size() bytes after 'stat' to hold the cpumask, and initialize cpu_used_mask right after stats_last_writer. APIs like cpumask_next and cpumask_set_cpu never access bits beyond the CPU count, so cpumask_size() bytes of memory are enough. Signed-off-by: Eddy Tao <taoyuan_eddy@hotmail.com> Acked-by: Eelco Chaudron <echaudro@redhat.com> Link: https://lore.kernel.org/r/OS3P286MB229570CCED618B20355D227AF5D59@OS3P286MB2295.JPNP286.PROD.OUTLOOK.COM Signed-off-by: Jakub Kicinski <kuba@kernel.org>
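A minimal sketch of that layout change, assuming a flow_cache kmem_cache and a stats[] flexible array member as described above (not the exact openvswitch sources):

    /* one allocation holds the flow, the per-CPU stats pointer array and a
     * cpumask sized for the CPUs that can actually exist */
    flow_cache = kmem_cache_create("sw_flow",
                                   sizeof(struct sw_flow) +
                                   nr_cpu_ids * sizeof(struct sw_flow_stats __rcu *) +
                                   cpumask_size(),
                                   0, 0, NULL);

    flow = kmem_cache_zalloc(flow_cache, GFP_KERNEL);
    /* the mask now lives right after the stats[] array ... */
    flow->cpu_used_mask = (struct cpumask *)&flow->stats[nr_cpu_ids];
    /* ... and only cpumask_size() bytes of it are ever touched */
    cpumask_set_cpu(0, flow->cpu_used_mask);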
-
Arnd Bergmann authored
The forward declaration was introduced with a prototype that does not match the function definition:

drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c:2166:13: error: conflicting types for 'xgbe_phy_perform_ratechange' due to enum/integer mismatch; have 'void(struct xgbe_prv_data *, enum xgbe_mb_cmd, enum xgbe_mb_subcmd)' [-Werror=enum-int-mismatch]
 2166 | static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c:391:13: note: previous declaration of 'xgbe_phy_perform_ratechange' with type 'void(struct xgbe_prv_data *, unsigned int, unsigned int)'
  391 | static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~

Ideally there should not be any forward declarations here, which would make it easier to show that there is no unbounded recursion. I tried fixing this but could not figure out how to avoid the recursive call. As a hotfix, address only the broken prototype to fix the build problem instead. Fixes: 4f3b20bf ("amd-xgbe: add support for rx-adaptation") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Simon Horman <simon.horman@corigine.com> Acked-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20230203121553.2871598-1-arnd@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
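For illustration, the class of warning being fixed here (a contrived example, not the xgbe code): gcc 13's -Wenum-int-mismatch fires when a forward declaration uses plain integers while the definition uses enums.

    enum cmd { CMD_A, CMD_B };

    /* forward declaration with integer parameters ... */
    static void do_cmd(unsigned int cmd);

    /* ... conflicts with the definition that uses the enum type */
    static void do_cmd(enum cmd cmd)
    {
            (void)cmd;
    }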
-
Colin Foster authored
There are no external users of the vsc7514_*_regmap[] symbols or vsc7514_vcap_* functions. They were exported in commit 32ecd22b ("net: mscc: ocelot: split register definitions to a separate file") with the intention of being used, but the actual structure used in commit 2efaca41 ("net: mscc: ocelot: expose vsc7514_regmap definition") ended up being all that was needed. Bury these unnecessary symbols. Signed-off-by: Colin Foster <colin.foster@in-advantage.com> Suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20230204182056.25502-1-colin.foster@in-advantage.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Ajit Khaparde says:

====================
bnxt: Add Auxiliary driver support

Add auxiliary device driver for Broadcom devices. The bnxt_en driver will register and initialize an aux device if RDMA is enabled in the underlying device. The bnxt_re driver will then probe and initialize the RoCE interfaces with the infiniband stack.

We got rid of the bnxt_en_ops which the bnxt_re driver used to communicate with bnxt_en. Similarly, we have tried to clean up most of the bnxt_ulp_ops. In most of the cases we used the functions and entry points provided by the auxiliary bus driver framework. These are now the minimal functions needed to support the functionality. We will try to get rid of the remaining ones if we find another viable option in the future.

* 'aux-bus-v11' of https://github.com/ajitkhaparde1/linux:
  bnxt_en: Remove runtime interrupt vector allocation
  RDMA/bnxt_re: Remove the sriov config callback
  bnxt_en: Remove struct bnxt access from RoCE driver
  bnxt_en: Use auxiliary bus calls over proprietary calls
  bnxt_en: Use direct API instead of indirection
  bnxt_en: Remove usage of ulp_id
  RDMA/bnxt_re: Use auxiliary driver interface
  bnxt_en: Add auxiliary driver support
====================

Link: https://lore.kernel.org/r/20230202033809.3989-1-ajit.khaparde@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 06 Feb, 2023 30 commits
-
-
Marc Kleine-Budde authored
Marc Kleine-Budde <mkl@pengutronix.de> says:

Several people noticed that on modern CAN controllers with wide bit timing registers the default SJW of 1 can result in unstable or no synchronization to the CAN network. See patch 14/17 for details.

During review of v1, Vincent pointed out that the original code and the series don't always check user-provided bit timing parameters, sometimes silently limit them, and that the returned error values are not consistent.

This series first cleans up some code in bittiming.c, replacing open-coded variants by macros or functions (patches 1, 2). Patch 3 adds the missing assignment of the effective TQ if the interface is configured with low-level timing parameters. Patch 4 is another code cleanup. Patches 5, 6 check the bit timing parameters during interface registration. Patch 7 adds a validation of the sample point. Patches 8-13 convert the error messages from netdev_err() to NL_SET_ERR_MSG_FMT(), factor out the SJW handling from can_fixup_bittiming(), add checking and error messages for the individual limits, and harmonize the error return values. Patch 14 changes the default SJW value from 1 to min(Phase Seg1, Phase Seg2 / 2). Patch 15 switches can_calc_bittiming() to use the new SJW handling. Patch 16 converts can_calc_bittiming() to NL_SET_ERR_MSG_FMT(). And patch 17 adds a NL_SET_ERR_MSG_FMT() error message to can_validate_bitrate().

v1: https://lore.kernel.org/all/20220907103845.3929288-1-mkl@pengutronix.de

Link: https://lore.kernel.org/all/20230202110854.2318594-1-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Report an error to user space via netlink if the requested bit rate is not supported by the device. Link: https://lore.kernel.org/all/20230202110854.2318594-18-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Replace the netdev_err() by NL_SET_ERR_MSG_FMT() to better inform the user about the problem. While there, use %u to print unsigned values and improve the error message a bit. In case of an error, return -EINVAL instead of -EDOM; this corresponds better to the actual meaning of the error value. Link: https://lore.kernel.org/all/20230202110854.2318594-17-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
In the current code, if the user configures a bitrate, a default SJW value of 1 is used. If the user configures both a bitrate and an SJW value, can_calc_bittiming() silently limits the SJW value to SJW max and TSEG2. We came to the conclusion that if the user provided an invalid SJW value, it's best to bail out and inform the user [1].

[1] https://lore.kernel.org/all/CAMZ6RqKqhmTgUZiwe5uqUjBDnhhC2iOjZ791+Y845btJYwVDKg@mail.gmail.com

Further, the ISO 11898-1:2015 standard mandates that "SJW shall be less than or equal to the minimum of these two items: Phase_Seg1 and Phase_Seg2." [2] The current code is missing that check.

[2] https://lore.kernel.org/all/BL3PR11MB64844E3FC13C55433CDD0B3DFB449@BL3PR11MB6484.namprd11.prod.outlook.com

The previous patches introduced
1) can_sjw_set_default() - sets a default value for SJW if unset
2) can_sjw_check() - implements an SJW check against SJW max, Phase Seg1 and Phase Seg2. In the error case this function reports the error to user space via netlink.

Replace both the open-coded SJW default setting and the open-coded and insufficient checks of SJW with the helper functions can_sjw_set_default() and can_sjw_check().

Link: https://lore.kernel.org/all/20230202110854.2318594-16-mkl@pengutronix.de
Link: https://lore.kernel.org/all/CAMZ6RqKqhmTgUZiwe5uqUjBDnhhC2iOjZ791+Y845btJYwVDKg@mail.gmail.com
Link: https://lore.kernel.org/all/BL3PR11MB64844E3FC13C55433CDD0B3DFB449@BL3PR11MB6484.namprd11.prod.outlook.com
Suggested-by: Thomas Kopp <Thomas.Kopp@microchip.com>
Suggested-by: Vincent Mailhol <vincent.mailhol@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
"The (Re-)Synchronization Jump Width (SJW) defines how far a resynchronization may move the Sample Point inside the limits defined by the Phase Buffer Segments to compensate for edge phase errors." [1] In other words, this means that the SJW parameter controls the tolerance of the CAN controller to frequency errors compared to other CAN controllers. If the user space does not provide an SJW parameter, the kernel chooses a default value of 1. This has proven to be a good default value for classic CAN controllers, but no longer for modern CAN-FD controllers. In the past there were CAN controllers like the sja1000 with a rather limited range of bit timing parameters. For the standard bit rates this results in the following bit timing parameters: | Bit timing parameters for sja1000 with 8.000000 MHz ref clock | _----+--------------=> tseg1: 1 … 16 | / / _---------=> tseg2: 1 … 8 | | | / _-----=> sjw: 1 … 4 | | | | / _-=> brp: 1 … 64 (inc: 1) | | | | | / | nominal | | | | | real Bitrt nom real SampP | Bitrate TQ[ns] PrS PhS1 PhS2 SJW BRP Bitrate Error SampP SampP Error BTR0 BTR1 | 1000000 125 2 3 2 1 1 1000000 0.0% 75.0% 75.0% 0.0% 0x00 0x14 | 800000 125 3 4 2 1 1 800000 0.0% 80.0% 80.0% 0.0% 0x00 0x16 | 666666 125 4 4 3 1 1 666666 0.0% 80.0% 75.0% 6.2% 0x00 0x27 | 500000 125 6 7 2 1 1 500000 0.0% 87.5% 87.5% 0.0% 0x00 0x1c | 250000 250 6 7 2 1 2 250000 0.0% 87.5% 87.5% 0.0% 0x01 0x1c | 125000 500 6 7 2 1 4 125000 0.0% 87.5% 87.5% 0.0% 0x03 0x1c | 100000 625 6 7 2 1 5 100000 0.0% 87.5% 87.5% 0.0% 0x04 0x1c | 83333 750 6 7 2 1 6 83333 0.0% 87.5% 87.5% 0.0% 0x05 0x1c | 50000 1250 6 7 2 1 10 50000 0.0% 87.5% 87.5% 0.0% 0x09 0x1c | 33333 1875 6 7 2 1 15 33333 0.0% 87.5% 87.5% 0.0% 0x0e 0x1c | 20000 3125 6 7 2 1 25 20000 0.0% 87.5% 87.5% 0.0% 0x18 0x1c | 10000 6250 6 7 2 1 50 10000 0.0% 87.5% 87.5% 0.0% 0x31 0x1c The attentive reader will notice that the SJW is 1 in most cases, while the Seg2 phase is 2. Both values are given in TQ units, which in turn is a duration in nanoseconds. For example the 500 kbit/s configuration: | nominal real Bitrt nom real SampP | Bitrate TQ[ns] PrS PhS1 PhS2 SJW BRP Bitrate Error SampP SampP Error BTR0 BTR1 | 500000 125 6 7 2 1 1 500000 0.0% 87.5% 87.5% 0.0% 0x00 0x1c the TQ is 125ns, the Phase Seg2 is "2" (== 250ns), the SJW is "1" (== 125 ns). Looking at a more modern CAN controller like a mcp2518fd, it has wider bit timing registers. | Bit timing parameters for mcp251xfd with 40.000000 MHz ref clock | _----+--------------=> tseg1: 2 … 256 | / / _---------=> tseg2: 1 … 128 | | | / _-----=> sjw: 1 … 128 | | | | / _-=> brp: 1 … 256 (inc: 1) | | | | | / | nominal | | | | | real Bitrt nom real SampP | Bitrate TQ[ns] PrS PhS1 PhS2 SJW BRP Bitrate Error SampP SampP Error NBTCFG | 500000 25 34 35 10 1 1 500000 0.0% 87.5% 87.5% 0.0% 0x00440900 The TQ is 25ns, the Phase Seg 2 is "10" (== 250ns), the SJW is "1" (== 25ns). Since the kernel chooses a default SJW of 1 regardless of the TQ, this leads to a much smaller SJW and thus much smaller tolerances to frequency errors. To maintain the same oscillator tolerances on controllers with wide bit timing registers, select a default SJW value of Phase Seg2 / 2 unless Phase Seg 1 is less. 
This results in the following bit timing parameters: | Bit timing parameters for mcp251xfd with 40.000000 MHz ref clock | _----+--------------=> tseg1: 2 … 256 | / / _---------=> tseg2: 1 … 128 | | | / _-----=> sjw: 1 … 128 | | | | / _-=> brp: 1 … 256 (inc: 1) | | | | | / | nominal | | | | | real Bitrt nom real SampP | Bitrate TQ[ns] PrS PhS1 PhS2 SJW BRP Bitrate Error SampP SampP Error NBTCFG | 500000 25 34 35 10 5 1 500000 0.0% 87.5% 87.5% 0.0% 0x00440904 The TQ is 25ns, the Phase Seg 2 is "10" (== 250ns), the SJW is "5" (== 125ns). Which is the same as on the sja1000 controller. [1] http://web.archive.org/http://www.oertel-halle.de/files/cia99paper.pdf Link: https://lore.kernel.org/all/20230202110854.2318594-15-mkl@pengutronix.de Cc: Mark Bath <mark@baggywrinkle.co.uk> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
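A sketch of the resulting default, following the description above (the real helper may differ in detail):

    /* If user space gave no SJW, default to half of Phase Seg2, capped by
     * Phase Seg1, and never less than 1. */
    void can_sjw_set_default(struct can_bittiming *bt)
    {
            if (bt->sjw)
                    return;

            bt->sjw = max(1U, min(bt->phase_seg1, bt->phase_seg2 / 2));
    }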
-
Marc Kleine-Budde authored
According to "The Configuration of the CAN Bit Timing" [1] the SJW "may not be longer than either Phase Buffer Segment". Check SJW against length of both Phase buffers. In case the SJW is greater, report an error via netlink to user space and bail out. [1] http://web.archive.org/http://www.oertel-halle.de/files/cia99paper.pdf Link: https://lore.kernel.org/all/20230202110854.2318594-14-mkl@pengutronix.deSuggested-by: Vincent Mailhol <vincent.mailhol@gmail.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
If user space has supplied an invalid SJW value (greater than the maximum SJW value), report -EINVAL instead of -ERANGE; this better matches the actual meaning of the error value. Additionally report an error message via netlink to user space. Link: https://lore.kernel.org/all/20230202110854.2318594-13-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Check each bit timing parameter individually against its limits and report a meaningful error message via netlink to user space. In case of an error, return -EINVAL instead of -ERANGE; this corresponds better to the actual meaning of the error value. Link: https://lore.kernel.org/all/20230202110854.2318594-12-mkl@pengutronix.de Suggested-by: Vincent Mailhol <vincent.mailhol@gmail.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Factor out the functionality of assigning an SJW default value into can_sjw_set_default() and of checking the SJW limits into can_sjw_check(). These functions will be improved and called from a different function in the following patches. Link: https://lore.kernel.org/all/20230202110854.2318594-11-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This is a preparation patch. In order to pass warning/error messages during netlink calls back to user space, pass the extack struct down the callstack of can_changelink(); the actual error messages will be added in the following patches. Link: https://lore.kernel.org/all/20230202110854.2318594-10-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Since commit 51c352bd ("netlink: add support for formatted extack messages") formatted extack messages are supported to inform user space of warnings/errors during netlink calls. Replace the netdev_err() by NL_SET_ERR_MSG_FMT() to better inform the user about the problem. While there, use %u to print unsigned values and improve the error message a bit. Link: https://lore.kernel.org/all/20230202110854.2318594-9-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
The sample point is a value in tenths of a percent. Meaningful values are between 0 and 1000. Invalid values are rejected and an error message is returned to user space via netlink. Link: https://lore.kernel.org/all/20230202110854.2318594-8-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
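A minimal sketch of the check (the exact extack wording is an assumption):

    /* the sample point is expressed in tenths of a percent, so anything
     * above 1000 (== 100.0%) cannot be valid */
    if (bt->sample_point > 1000) {
            NL_SET_ERR_MSG_FMT(extack,
                               "sample point: %u.%u%% must not be more than 100%%",
                               bt->sample_point / 10, bt->sample_point % 10);
            return -EINVAL;
    }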
-
Marc Kleine-Budde authored
The CAN driver framework supports either fixed bit rates or bit timing constants. Bail out during driver registration if both are given. Link: https://lore.kernel.org/all/20230202110854.2318594-7-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Implement the function can_bittiming_const_valid() to check the validity of the specified bit timing constant. Call this function from register_candev() to check the bit timing constants during the registration of the CAN interface. Link: https://lore.kernel.org/all/20230202110854.2318594-6-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Clean up the code flow a bit: don't assign the err variable but return directly. Remove the unneeded else, too. Link: https://lore.kernel.org/all/20230202110854.2318594-5-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
The can_fixup_bittiming() function is used to validate the user-supplied low-level bit timing parameters and calculate the bitrate prescaler (brp) from the requested time quanta (tq) and the CAN clock of the controller. can_fixup_bittiming() selects the best matching integer bit rate prescaler, which may result in a different time quantum than the value specified by the user. Calculate the resulting time quantum and assign it so that the user sees the effective time quantum. Link: https://lore.kernel.org/all/20230202110854.2318594-4-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
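The arithmetic behind it, as a sketch (helper name and rounding are assumptions, not the exact patch):

    /* effective time quantum in ns that results from an integer prescaler
     * (brp) and the controller clock in Hz: tq = brp / clock */
    static u32 can_effective_tq(u32 brp, u32 clock_freq)
    {
            return div_u64((u64)brp * NSEC_PER_SEC + clock_freq / 2, clock_freq);
    }

    /* in can_fixup_bittiming(): bt->tq = can_effective_tq(brp, priv->clock.freq); */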
-
Marc Kleine-Budde authored
Commit 1c47fa6b ("can: dev: add a helper function to calculate the duration of one bit") made the constant CAN_SYNC_SEG available in a header file. The magic number 1 in can_fixup_bittiming() represents the width of the sync segment; replace it by CAN_SYNC_SEG to make the code more readable. Link: https://lore.kernel.org/all/20230202110854.2318594-3-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Commit 1c47fa6b ("can: dev: add a helper function to calculate the duration of one bit") added the helper function can_bit_time(). Replace open coded variants of can_bit_time() by the helper function. Link: https://lore.kernel.org/all/20230202110854.2318594-2-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
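For context, the helper is essentially the sum of all bit segments in time quanta (shown here as commonly defined; minor details may differ from the header):

    /* duration of one CAN bit in time quanta: the fixed sync segment plus
     * the three programmable segments */
    static inline unsigned int can_bit_time(const struct can_bittiming *bt)
    {
            return CAN_SYNC_SEG + bt->prop_seg + bt->phase_seg1 + bt->phase_seg2;
    }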
-
David S. Miller authored
Pietro Borrello says:

====================
tuntap: correctly initialize socket uid

sock_init_data() assumes that the `struct socket` passed in input is contained in a `struct socket_alloc` allocated with sock_alloc(). However, tap_open() and tun_chr_open() pass a `struct socket` embedded in a `struct tap_queue` and `struct tun_file` respectively, both allocated with sk_alloc(). This causes a type confusion when issuing a container_of() with SOCK_INODE() in sock_init_data(), which results in assigning a wrong sk_uid to the `struct sock` in input. Due to the type confusion, both sockets happen to have their uid set to 0, i.e. root. While that will often be correct, as tuntap devices require CAP_NET_ADMIN, it may not always be the case.

It is not clear how widespread the impact of this is; it seems the socket uid may be used for network filtering and routing, so tuntap sockets may be incorrectly managed. Additionally, it seems a socket with an incorrect uid may be returned to the vhost driver when issuing a get_socket() on a tuntap device in vhost_net_set_backend().

Fix the bugs by adding and using sock_init_data_uid(), which explicitly takes a uid as argument.

Signed-off-by: Pietro Borrello <borrello@diag.uniroma1.it>
---
Changes in v3:
- Fix the bug by defining and using sock_init_data_uid()
- Link to v2: https://lore.kernel.org/r/20230131-tuntap-sk-uid-v2-0-29ec15592813@diag.uniroma1.it
Changes in v2:
- Shorten and format comments
- Link to v1: https://lore.kernel.org/r/20230131-tuntap-sk-uid-v1-0-af4f9f40979d@diag.uniroma1.it
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pietro Borrello authored
sock_init_data() assumes that the `struct socket` passed in input is contained in a `struct socket_alloc` allocated with sock_alloc(). However, tap_open() passes a `struct socket` embedded in a `struct tap_queue` allocated with sk_alloc(). This causes a type confusion when issuing a container_of() with SOCK_INODE() in sock_init_data() which results in assigning a wrong sk_uid to the `struct sock` in input. On default configuration, the type confused field overlaps with padding bytes between `int vnet_hdr_sz` and `struct tap_dev __rcu *tap` in `struct tap_queue`, which makes the uid of all tap sockets 0, i.e., the root one. Fix the assignment by using sock_init_data_uid(). Fixes: 86741ec2 ("net: core: Add a UID field to struct sock.") Signed-off-by: Pietro Borrello <borrello@diag.uniroma1.it> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pietro Borrello authored
sock_init_data() assumes that the `struct socket` passed in input is contained in a `struct socket_alloc` allocated with sock_alloc(). However, tun_chr_open() passes a `struct socket` embedded in a `struct tun_file` allocated with sk_alloc(). This causes a type confusion when issuing a container_of() with SOCK_INODE() in sock_init_data() which results in assigning a wrong sk_uid to the `struct sock` in input. On default configuration, the type confused field overlaps with the high 4 bytes of `struct tun_struct __rcu *tun` of `struct tun_file`, NULL at the time of call, which makes the uid of all tun sockets 0, i.e., the root one. Fix the assignment by using sock_init_data_uid(). Fixes: 86741ec2 ("net: core: Add a UID field to struct sock.") Signed-off-by: Pietro Borrello <borrello@diag.uniroma1.it> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
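A minimal sketch of the change in tun_chr_open(), assuming the inode passed to the open handler is the tun character device inode:

    /* before: sock_init_data(&tfile->socket, &tfile->sk);
     * after: pass the owning inode's uid explicitly instead of letting
     * sock_init_data() derive it from a type-confused container_of() */
    sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);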
-
Pietro Borrello authored
Add sock_init_data_uid() to explicitly initialize the socket uid. To initialize the socket uid, sock_init_data() assumes that the struct socket *sock is always embedded in a struct socket_alloc, used to access the corresponding inode uid. This may not be true. Examples are sockets created in tun_chr_open() and tap_open(). Fixes: 86741ec2 ("net: core: Add a UID field to struct sock.") Signed-off-by: Pietro Borrello <borrello@diag.uniroma1.it> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
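A sketch of how such a split can look (the existing body of sock_init_data() is elided; not a verbatim copy of the patch):

    void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid)
    {
            /* ... the existing sock_init_data() body ... */
            sk->sk_uid = uid;
    }

    void sock_init_data(struct socket *sock, struct sock *sk)
    {
            kuid_t uid = sock ?
                    SOCK_INODE(sock)->i_uid :
                    make_kuid(sock_net(sk)->user_ns, 0);

            sock_init_data_uid(sock, sk, uid);
    }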
-
David S. Miller authored
Vladimir Oltean says:

====================
net: ENETC mqprio/taprio cleanup

Please excuse the increased patch set size compared to v4's 15 patches, but Claudiu stirred up the pot :) when he pointed out that the mqprio TXQ validation procedure is still incorrect, so I had to fix that, and then do some consolidation work so that taprio doesn't duplicate mqprio's bugs. Compared to v4, 3 patches are new and 1 was dropped for now ("net/sched: taprio: mask off bits in gate mask that exceed number of TCs"), since there's not really much to gain from it. Since the previous patch set has largely been reviewed, I hope that a delta overview will help and make up for the large size.

v4->v5:
- new patches:
  "[08/17] net/sched: mqprio: allow reverse TC:TXQ mappings"
  "[11/17] net/sched: taprio: centralize mqprio qopt validation"
  "[12/17] net/sched: refactor mqprio qopt reconstruction to a library function"
- changed patches worth revisiting:
  "[09/17] net/sched: mqprio: allow offloading drivers to request queue count validation"
v4 at: https://patchwork.kernel.org/project/netdevbpf/cover/20230130173145.475943-1-vladimir.oltean@nxp.com/

v3->v4:
- adjusted patch 07/15 to not remove "#include <net/pkt_sched.h>" from ti cpsw
https://patchwork.kernel.org/project/netdevbpf/cover/20230127001516.592984-1-vladimir.oltean@nxp.com/

v2->v3:
- move min_num_stack_tx_queues definition so it doesn't conflict with the ethtool mm patches I haven't submitted yet for enetc (and also to make use of a 4 byte hole)
- warn and mask off excess TCs in gate mask instead of failing
- finally CC qdisc maintainers
v2 at: https://patchwork.kernel.org/project/netdevbpf/patch/20230126125308.1199404-16-vladimir.oltean@nxp.com/

v1->v2:
- patches 1->4 are new
- update some header inclusions in drivers
- fix typo (said "taprio" instead of "mqprio")
- better enetc mqprio error handling
- dynamically reconstruct mqprio configuration in taprio offload
- also let stmmac and tsnep use per-TXQ gate_mask
v1 (RFC) at: https://patchwork.kernel.org/project/netdevbpf/cover/20230120141537.1350744-1-vladimir.oltean@nxp.com/

The main goal of this patch set is to make taprio pass the mqprio queue configuration structure down to ndo_setup_tc() - patch 13/17. But mqprio itself is not in the best shape currently, so there are some consolidation patches on that as well.

Next, there are some consolidation patches in the enetc driver's handling of TX queues and their traffic class assignment. Then, there is a consolidation between the TX queue configuration for mqprio and taprio.

Finally, there is a change in the meaning of the gate_mask passed by taprio through ndo_setup_tc(). We introduce a capability through which drivers can request the gate mask to be per TXQ. The default is changed so that it is per TC.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
We assume that the mqprio queue configuration from taprio has a simple 1:1 mapping between prio and traffic class, and one TX queue per TC. That might not be the case. Actually parse and act upon the mqprio config. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
Regardless of the requested queue count per traffic class, the enetc driver allocates a number of TX rings equal to the number of TCs, and hardcodes a queue configuration of "1@0 1@1 ... 1@max-tc". Other configurations are silently ignored and treated the same. Improve that by allowing what the user requests to be actually fulfilled. This allows more than one TX ring per traffic class. For example:

$ tc qdisc add dev eno0 root handle 1: mqprio num_tc 4 \
	map 0 0 1 1 2 2 3 3 queues 2@0 2@2 2@4 2@6
[ 146.267648] fsl_enetc 0000:00:00.0 eno0: TX ring 0 prio 0
[ 146.273451] fsl_enetc 0000:00:00.0 eno0: TX ring 1 prio 0
[ 146.283280] fsl_enetc 0000:00:00.0 eno0: TX ring 2 prio 1
[ 146.293987] fsl_enetc 0000:00:00.0 eno0: TX ring 3 prio 1
[ 146.300467] fsl_enetc 0000:00:00.0 eno0: TX ring 4 prio 2
[ 146.306866] fsl_enetc 0000:00:00.0 eno0: TX ring 5 prio 2
[ 146.313261] fsl_enetc 0000:00:00.0 eno0: TX ring 6 prio 3
[ 146.319622] fsl_enetc 0000:00:00.0 eno0: TX ring 7 prio 3

$ tc qdisc del dev eno0 root
[ 178.238418] fsl_enetc 0000:00:00.0 eno0: TX ring 0 prio 0
[ 178.244369] fsl_enetc 0000:00:00.0 eno0: TX ring 1 prio 0
[ 178.251486] fsl_enetc 0000:00:00.0 eno0: TX ring 2 prio 0
[ 178.258006] fsl_enetc 0000:00:00.0 eno0: TX ring 3 prio 0
[ 178.265038] fsl_enetc 0000:00:00.0 eno0: TX ring 4 prio 0
[ 178.271557] fsl_enetc 0000:00:00.0 eno0: TX ring 5 prio 0
[ 178.277910] fsl_enetc 0000:00:00.0 eno0: TX ring 6 prio 0
[ 178.284281] fsl_enetc 0000:00:00.0 eno0: TX ring 7 prio 0

$ tc qdisc add dev eno0 root handle 1: mqprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 1
[ 186.113162] fsl_enetc 0000:00:00.0 eno0: TX ring 0 prio 0
[ 186.118764] fsl_enetc 0000:00:00.0 eno0: TX ring 1 prio 1
[ 186.124374] fsl_enetc 0000:00:00.0 eno0: TX ring 2 prio 2
[ 186.130765] fsl_enetc 0000:00:00.0 eno0: TX ring 3 prio 3
[ 186.136404] fsl_enetc 0000:00:00.0 eno0: TX ring 4 prio 4
[ 186.142049] fsl_enetc 0000:00:00.0 eno0: TX ring 5 prio 5
[ 186.147674] fsl_enetc 0000:00:00.0 eno0: TX ring 6 prio 6
[ 186.153305] fsl_enetc 0000:00:00.0 eno0: TX ring 7 prio 7

The driver used to set TC_MQPRIO_HW_OFFLOAD_TCS, near which there is this comment in the UAPI header:

	TC_MQPRIO_HW_OFFLOAD_TCS,	/* offload TCs, no queue counts */

which is what enetc was doing up until now (and no longer is; we offload queue counts too), remove that assignment. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
The enetc driver does not validate the mqprio queue configuration, so it currently allows things like this:

$ tc qdisc add dev swp0 root handle 1: mqprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 queues 3@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 1

But also things like this, completely omitting the queue configuration:

$ tc qdisc add dev eno0 root handle 1: mqprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 hw 1

By requesting validation via the mqprio capability structure, this is no longer allowed, and we bring what is accepted by hardware in line with what is accepted by software. The check that num_tc <= real_num_tx_queues also becomes superfluous and can be dropped, because mqprio_validate_queue_counts() validates that no TXQ range exceeds real_num_tx_queues. That is a stronger check, because there is at least 1 TXQ per TC, so there are at least as many TXQs as TCs. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
There are 2 classes of in-tree drivers currently:

- those that act upon struct tc_taprio_sched_entry :: gate_mask as if it holds a bit mask of TXQs
- those that act upon the gate_mask as if it holds a bit mask of TCs

When it comes to the standard, IEEE 802.1Q-2018 does say this in the second paragraph of section 8.6.8.4 Enhancements for scheduled traffic:

| A gate control list associated with each Port contains an ordered list
| of gate operations. Each gate operation changes the transmission gate
| state for the gate associated with each of the Port's traffic class
| queues and allows associated control operations to be scheduled.

In typically obtuse language, it refers to a "traffic class queue" rather than a "traffic class" or a "queue". But careful reading of 802.1Q clarifies that "traffic class" and "queue" are in fact synonymous (see 8.6.6 Queuing frames):

| A queue in this context is not necessarily a single FIFO data structure.
| A queue is a record of all frames of a given traffic class awaiting
| transmission on a given Bridge Port. The structure of this record is not
| specified.

In other words, their definition of "queue" isn't the Linux TX queue. The gate_mask really is input into taprio via its UAPI as a mask of traffic classes, but taprio_sched_to_offload() converts it into a TXQ mask.

The breakdown of drivers which handle TC_SETUP_QDISC_TAPRIO is:

- hellcreek, felix, sja1105: these are DSA switches; it's not even very clear what TXQs correspond to, other than purely software constructs. Only the mqprio configuration with 8 TCs and 1 TXQ per TC makes sense. So it's fine to convert these to a gate mask per TC.
- enetc: I have the hardware and can confirm that the gate mask is per TC, and affects all TXQs (BD rings) configured for that priority.
- igc: in igc_save_qbv_schedule(), the gate_mask is clearly interpreted to be per-TXQ.
- tsnep: Gerhard Engleder clarifies that even though this hardware supports at most 1 TXQ per TC, the TXQ indices may be different from the TC values themselves, and it is the TXQ indices that matter to this hardware. So keep it per-TXQ as well.
- stmmac: I have a GMAC datasheet, and in the EST section it does specify that the gate events are per TXQ rather than per TC.
- lan966x: again, this is a switch, and while not a DSA one, the way in which it implements lan966x_mqprio_add() - by only allowing num_tc == NUM_PRIO_QUEUES (8) - makes it clear to me that TXQs are a purely software construct here as well. They seem to map 1:1 with TCs.
- am65_cpsw: from looking at am65_cpsw_est_set_sched_cmds(), I get the impression that the fetch_allow variable is treated like a prio_mask. This definitely sounds closer to a per-TC gate mask than a per-TXQ one, and TI documentation does seem to recommend an identity mapping between TCs and TXQs. However, Roger Quadros would like to do some testing before making changes, so I'm leaving this driver to operate as it did before, for now. Link with more details at the end.

Based on this breakdown, we have 5 drivers with a gate mask per TC and 4 with a gate mask per TXQ. So let's make the gate mask per TXQ the opt-in and the gate mask per TC the default. Benefit from the TC_QUERY_CAPS feature that Jakub suggested we add: query the device driver before calling the proper ndo_setup_tc() and figure out whether it expects one or the other format (see the conversion sketch below).

Link: https://patchwork.kernel.org/project/netdevbpf/patch/20230202003621.2679603-15-vladimir.oltean@nxp.com/#25193204
Cc: Horatiu Vultur <horatiu.vultur@microchip.com>
Cc: Siddharth Vadapalli <s-vadapalli@ti.com>
Cc: Roger Quadros <rogerq@kernel.org>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Acked-by: Kurt Kanzenbach <kurt@linutronix.de> # hellcreek
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
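As an illustration of the two formats, a sketch of expanding a per-TC gate mask into a per-TXQ one using the netdev's TC-to-TXQ mapping (illustrative helper, not the taprio code itself):

    static u32 tc_gates_to_txq_gates(struct net_device *dev, u32 tc_gate_mask)
    {
            u32 txq_gate_mask = 0;
            int tc;

            for (tc = 0; tc < netdev_get_num_tc(dev); tc++) {
                    if (!(tc_gate_mask & BIT(tc)))
                            continue;

                    /* open the gate for every TXQ mapped to this traffic class */
                    txq_gate_mask |= GENMASK(dev->tc_to_txq[tc].offset +
                                             dev->tc_to_txq[tc].count - 1,
                                             dev->tc_to_txq[tc].offset);
            }

            return txq_gate_mask;
    }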
-
Vladimir Oltean authored
The taprio qdisc does not currently pass the mqprio queue configuration down to the offloading device driver. So the driver cannot act upon the TXQ counts/offsets per TC, or upon the prio->tc map. It was probably assumed that the driver only wants to offload num_tc (see TC_MQPRIO_HW_OFFLOAD_TCS), which it can get from netdev_get_num_tc(), but there's clearly more to the mqprio configuration than that. I've considered 2 mechanisms to remedy that. First is to pass a struct tc_mqprio_qopt_offload as part of the tc_taprio_qopt_offload. The second is to make taprio actually call TC_SETUP_QDISC_MQPRIO, *in addition to* TC_SETUP_QDISC_TAPRIO. The difference is that in the first case, existing drivers (offloading or not) all ignore taprio's mqprio portion currently, whereas in the second case, we could control whether to call TC_SETUP_QDISC_MQPRIO, based on a new capability. The question is which approach would be better. I'm afraid that calling TC_SETUP_QDISC_MQPRIO unconditionally (not based on a taprio capability bit) would risk introducing regressions. For example, taprio doesn't populate (or validate) qopt->hw, as well as mqprio.flags, mqprio.shaper, mqprio.min_rate, mqprio.max_rate. In comparison, adding a capability is functionally equivalent to just passing the mqprio in a way that drivers can ignore it, except it's slightly more complicated to use it (need to set the capability). Ultimately, what made me go for the "mqprio in taprio" variant was that it's easier for offloading drivers to interpret the mqprio qopt slightly differently when it comes from taprio vs when it comes from mqprio, should that ever become necessary. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
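Schematically, the chosen variant looks something like this (other taprio offload fields omitted; a sketch, not the exact offload structs):

    struct tc_taprio_qopt_offload {
            /* mqprio portion (num_tc, prio->tc map, TXQ counts/offsets)
             * now rides along with the taprio offload */
            struct tc_mqprio_qopt_offload mqprio;
            /* ... base_time, cycle_time, num_entries, entries[], etc. ... */
    };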
-
Vladimir Oltean authored
The taprio qdisc will need to reconstruct a struct tc_mqprio_qopt from netdev settings once more in a future patch, but this code was already written twice, once in taprio and once in mqprio. Refactor the code to a helper in the common mqprio library. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
There is a lot of code in taprio which is "borrowed" from mqprio. It makes sense to put a stop to the "borrowing" and start actually reusing code. Because taprio and mqprio are built as part of different kernel modules, code reuse can only take place either by writing it as static inline (limiting), putting it in sch_generic.o (not generic enough), or creating a third auto-selectable kernel module which only holds library code. I opted for the third variant. In a previous change, mqprio gained support for reverse TC:TXQ mappings, something which taprio still denies. Make taprio use the same validation logic so that it supports this configuration as well. The taprio code didn't enforce TXQ overlaps in txtime-assist mode and that looks intentional, even if I've no idea why that might be. Preserve that, but add a comment. There isn't any dedicated MAINTAINERS entry for mqprio, so nothing to update there. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-