- 13 Dec, 2023 10 commits
-
-
Michael Chan authored
On the new P7 chips, TPA for tunnel packets can be independently enabled for each VNIC. The default TPA configuration should not include UDP tunnels because the UDP ports for these tunnels are not known yet. The chip should not aggregate these UDP tunneled packets using default UDP ports until the ports are known. Add a new function bnxt_hwrm_vnic_update_tunl_tpa() to enable VXLAN and Geneve TPA if the corresponding UDP ports are known. Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-10-michael.chan@broadcom.comSigned-off-by: Jakub Kicinski <kuba@kernel.org>
-
Michael Chan authored
Add a new bnxt_udp_tunnels_p7 struct to support the new P7 chips that can parse VXLAN GPE packets. Add VXLAN GPE tunnel type handling to the .set_port() and .unset_port() functions. .ndo_features_check() is also enhanced to support VXLAN GPE which may encapsulate inner IP packets instead of ethernet packets. Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-9-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Michael Chan authored
In bnxt_udp_tunnel_set_port(), use the proper ALLOC commands instead of the FREE commands. The ALLOC and FREE commands happen to be identical, so this is purely a cosmetic fix for correctness. Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-8-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Selvin Xavier authored
The Fast QP modify destroy RoCE feature requires additional QP entries in the QP context backing store. FW reports the extra count to be allocated during the backing store query. Use this value and allocate the extra memory. Note that this works for both the V1 and V2 backing store FW APIs. Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-7-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Michael Chan authored
TX coalesced completions are supported on newer chips to provide one TX completion record for multiple TX packets up to the sq_cons_idx in the completion record. This method saves PCIe bandwidth by reducing the number of TX completions. Only very minor changes are now required to support this mode with the new framework that handles TX completions based on the consumer indices. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-6-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
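A rough sketch of the coalesced-completion idea described above; the structure and field names below are illustrative placeholders based on the commit text (sq_cons_idx), not the driver's actual definitions.

  #include <linux/types.h>
  #include <linux/skbuff.h>

  /* Placeholder structures -- the real bnxt definitions differ. */
  struct demo_tx_buf { struct sk_buff *skb; };
  struct demo_tx_ring {
          u16 tx_cons;                    /* software consumer index (unmasked) */
          u16 ring_mask;                  /* ring size - 1 */
          struct demo_tx_buf *tx_buf;
  };

  /* One coalesced completion reports the latest hardware consumer index
   * (sq_cons_idx), so every pending TX buffer up to it is reclaimed in one pass.
   */
  static void demo_tx_int_coalesced(struct demo_tx_ring *txr, u16 sq_cons_idx)
  {
          while (txr->tx_cons != sq_cons_idx) {
                  struct demo_tx_buf *buf =
                          &txr->tx_buf[txr->tx_cons & txr->ring_mask];

                  /* dma_unmap_single(...); dev_kfree_skb_any(buf->skb); */
                  buf->skb = NULL;
                  txr->tx_cons++;
          }
  }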
-
Michael Chan authored
If the xmit_more condition is true, the driver may set the TX_BD_FLAGS_NO_CMPL flag. If, after this packet, the TX ring can no longer hold a packet with maximum fragments, we will stop the TX queue. When this happens, we must clear the TX_BD_FLAGS_NO_CMPL flag on the last packet, or there will be no completion and the result is a TX timeout. Fixes: c1056a59 ("bnxt_en: Optimize xmit_more TX path") Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Hongguang Gao <hongguang.gao@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-5-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
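A hedged sketch of the logic the fix describes; TX_BD_FLAGS_NO_CMPL comes from the commit text, while the descriptor, ring, and helper names around it are assumptions, not the actual bnxt code.

  #include <linux/bits.h>
  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  /* Placeholder descriptor type and flag -- not the real bnxt definitions. */
  struct demo_txbd { u32 flags; };
  #define DEMO_TX_BD_FLAGS_NO_CMPL  BIT(5)  /* stands in for TX_BD_FLAGS_NO_CMPL */

  static void demo_finish_xmit(struct demo_txbd *txbd, struct netdev_queue *txq,
                               unsigned int tx_avail)
  {
          if (netdev_xmit_more())
                  txbd->flags |= DEMO_TX_BD_FLAGS_NO_CMPL;  /* defer the completion */

          if (unlikely(tx_avail < MAX_SKB_FRAGS + 2)) {
                  /* The queue is about to stop: clear NO_CMPL so this last
                   * packet still generates a completion; otherwise nothing
                   * wakes the queue and a TX timeout follows.
                   */
                  txbd->flags &= ~DEMO_TX_BD_FLAGS_NO_CMPL;
                  netif_tx_stop_queue(txq);
          }
  }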
-
Michael Chan authored
Two spots were missed when modifying the TX ring indexing logic. The use of an unmasked TX index in bnxt_tx_int() will cause unnecessary __bnxt_tx_int() calls. The same issue in bnxt_tx_int_xdp() can result in an illegal array index. Fixes: 6d1add95 ("bnxt_en: Modify TX ring indexing logic.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-4-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
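A minimal sketch of the masked-indexing rule the fix restores; the macro and ring fields here are placeholders, not the driver's names.

  #include <linux/types.h>

  /* Placeholder ring: indices run freely and are masked only on array access. */
  struct demo_ring {
          u16 ring_mask;          /* ring size - 1 (power of two) */
          void **buf;
  };

  #define DEMO_RING_IDX(ring, idx)        ((idx) & (ring)->ring_mask)

  /* Indexing with the raw (unmasked) index is the bug class fixed above:
   * always mask before touching the array.
   */
  static void *demo_ring_entry(struct demo_ring *ring, u32 raw_idx)
  {
          return ring->buf[DEMO_RING_IDX(ring, raw_idx)];
  }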
-
Somnath Kotur authored
_bnxt_get_max_rings(), which is invoked in bnxt_check_rings(), already accounts for the AGG ring(s) and gives a max value based on that. Increasing for AGG rings before calling _bnxt_get_max_rings() will result in checking for twice the number of rings required, and the check can fail. Fix it by adjusting for AGG rings after calling _bnxt_get_max_rings(). Fixes: f5b29c6a ("bnxt_en: Add helper to get the number of CP rings required for TX rings") Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-3-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Michael Chan authored
The recent commit to trim the RX and TX rings on P5 chips by assigning each with max CP rings divided by 2 is not correct. Max CP rings divided by 2 may be bigger than the original RX or TX count and would lead to failure. In other words, we may be checking for more RX/TX rings than required and the check may fail. Fix it by calling __bnxt_trim_rings() instead, which properly trims RX and TX without the possibility of increasing their values. Fixes: f5b29c6a ("bnxt_en: Add helper to get the number of CP rings required for TX rings") Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20231212005122.2401-2-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Justin Stitt authored
strncpy() is deprecated for use on NUL-terminated destination strings [1] and as such we should prefer more robust and less ambiguous string interfaces. We expect new_bus->id to be NUL-terminated but not NUL-padded based on its prior assignment through snprintf: | snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id); Due to this, a suitable replacement is strscpy() [2] because it guarantees NUL-termination on the destination buffer without unnecessarily NUL-padding it. We can also use sizeof() instead of a length macro as this more closely ties the maximum buffer size to the destination buffer. Do this for two instances. Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [1] Link: https://manpages.debian.org/testing/linux-manual-4.8/strscpy.9.en.html [2] Link: https://github.com/KSPP/linux/issues/90 Signed-off-by: Justin Stitt <justinstitt@google.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20231211-strncpy-drivers-net-mdio-mdio-gpio-c-v3-1-76dea53a1a52@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
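A small illustration of the replacement pattern described above; the surrounding struct and function are illustrative, not the exact mdio-gpio code.

  #include <linux/phy.h>          /* struct mii_bus, MII_BUS_ID_SIZE */
  #include <linux/string.h>       /* strscpy() */

  struct demo_priv {
          char bus_id[MII_BUS_ID_SIZE];   /* NUL-terminated, not NUL-padded */
  };

  static void demo_copy_bus_id(struct demo_priv *priv, struct mii_bus *new_bus)
  {
          /* before: strncpy(priv->bus_id, new_bus->id, MII_BUS_ID_SIZE);
           * strncpy() may leave the destination unterminated and NUL-pads it.
           */

          /* after: strscpy() guarantees NUL-termination without padding, and
           * sizeof() ties the bound to the destination buffer.
           */
          strscpy(priv->bus_id, new_bus->id, sizeof(priv->bus_id));
  }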
-
- 12 Dec, 2023 21 commits
-
-
Jakub Kicinski authored
Linus Walleij says: ==================== Immutable tag for the PEF2256 framer

* tag 'pef2256-framer' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
  MAINTAINERS: Add the Lantiq PEF2256 driver entry
  pinctrl: Add support for the Lantiq PEF2256 pinmux
  net: wan: framer: Add support for the Lantiq PEF2256 framer
  dt-bindings: net: Add the Lantiq PEF2256 E1/T1/J1 framer
  net: wan: Add framer framework support
==================== Link: https://lore.kernel.org/all/CACRpkdYT1J7noFUhObFgfA60XQAfL4rb=knEmWS__TKKtCMh7Q@mail.gmail.com/ Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Herve Codina authored
After contributing the driver, add myself as the maintainer for the Lantiq PEF2256 driver. Signed-off-by: Herve Codina <herve.codina@bootlin.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Link: https://lore.kernel.org/r/20231128132534.258459-6-herve.codina@bootlin.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Herve Codina authored
The Lantiq PEF2256 is a framer and line interface component designed to fulfill all required interfacing between an analog E1/T1/J1 line and the digital PCM system highway/H.100 bus. This kind of component can be found in old telecommunication systems, where it was used for digital transmission of many simultaneous telephone calls by time-division multiplexing. Using the HDLC protocol, WAN networks can also be reached through the framer. This pinmux support handles the pin muxing part (pins RP(A..D) and pins XP(A..D)) of the PEF2256. Signed-off-by: Herve Codina <herve.codina@bootlin.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20231128132534.258459-5-herve.codina@bootlin.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Herve Codina authored
The Lantiq PEF2256 is a framer and line interface component designed to fulfill all required interfacing between an analog E1/T1/J1 line and the digital PCM system highway/H.100 bus. Signed-off-by: Herve Codina <herve.codina@bootlin.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Jakub Kicinski <kuba@kernel.org> Link: https://lore.kernel.org/r/20231128132534.258459-4-herve.codina@bootlin.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Herve Codina authored
The Lantiq PEF2256 is a framer and line interface component designed to fulfill all required interfacing between an analog E1/T1/J1 line and the digital PCM system highway/H.100 bus. Signed-off-by: Herve Codina <herve.codina@bootlin.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20231128132534.258459-3-herve.codina@bootlin.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Herve Codina authored
A framer is a component in charge of an E1/T1 line interface. Connected usually to a TDM bus, it converts TDM frames to/from E1/T1 frames. It also provides information related to the E1/T1 line. The framer framework provides a set of APIs for the framer drivers (framer provider) to create/destroy a framer and APIs for the framer users (framer consumer) to obtain a reference to the framer, and use the framer. This basic implementation provides a framer abstraction for:
- power on/off the framer
- get the framer status (line state)
- be notified on framer status changes
- get/set the framer configuration
Signed-off-by: Herve Codina <herve.codina@bootlin.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Jakub Kicinski <kuba@kernel.org> Link: https://lore.kernel.org/r/20231128132534.258459-2-herve.codina@bootlin.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
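For orientation, a hypothetical consumer flow matching the abstraction listed above; every function and type name in this sketch is an assumption modeled on similar kernel frameworks (e.g. the generic PHY API), not a quotation of the framer API.

  #include <linux/device.h>
  #include <linux/err.h>

  /* Hypothetical names throughout -- a sketch of the consumer side only. */
  static int demo_use_framer(struct device *dev)
  {
          struct framer *fr;
          int ret;

          fr = framer_get(dev, NULL);     /* obtain a reference (assumed helper) */
          if (IS_ERR(fr))
                  return PTR_ERR(fr);

          ret = framer_power_on(fr);      /* power on the E1/T1 line interface */
          if (!ret) {
                  /* query the line state, register a notifier for status
                   * changes, and get/set the configuration here
                   */
                  framer_power_off(fr);
          }

          framer_put(dev, fr);            /* drop the reference (assumed signature) */
          return ret;
  }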
-
Dmitry Antipov authored
When compiling with gcc version 14.0.0 20231129 (experimental) and CONFIG_FORTIFY_SOURCE=y, I've noticed the following warning:

  ...
  In function 'fortify_memcpy_chk',
      inlined from 'ax88796c_tx_fixup' at drivers/net/ethernet/asix/ax88796c_main.c:287:2:
  ./include/linux/fortify-string.h:588:25: warning: call to '__read_overflow2_field' declared with attribute warning: detected read beyond size of field (2nd parameter); maybe use struct_group()? [-Wattribute-warning]
    588 |    __read_overflow2_field(q_size_field, size);
        |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ...

This call to 'memcpy()' is interpreted as an attempt to copy TX_OVERHEAD (which is 8) bytes from the 4-byte 'sop' field of 'struct tx_pkt_info', and thus an overread warning is issued. Since we actually want to copy both the 'sop' and 'seg' fields at once, use the convenient struct_group() here. Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Acked-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20231211090535.9730-1-dmantipov@yandex.ru Signed-off-by: Jakub Kicinski <kuba@kernel.org>
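A sketch of the struct_group() pattern the warning suggests; the field types, group name, and surrounding layout are illustrative, and the real ax88796c structure may differ.

  #include <linux/stddef.h>       /* struct_group() */
  #include <linux/string.h>
  #include <linux/types.h>

  struct demo_pkt_info {
          /* Wrap the two adjacent fields so one memcpy() spanning both is an
           * in-bounds access of the mirror struct rather than an overread of
           * 'sop' alone.
           */
          struct_group(sop_seg,
                  u32 sop;
                  u32 seg;
          );
          u16 pkt_len;
  };

  static void demo_tx_fixup(u8 *dst, const struct demo_pkt_info *info)
  {
          /* before: memcpy(dst, &info->sop, 8);  -- reads past 'sop' */
          memcpy(dst, &info->sop_seg, sizeof(info->sop_seg));
  }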
-
Paolo Abeni authored
Linus Walleij says: ==================== net: dsa: realtek: Two RTL8366RB fixes These minor fixes were found while digging into other issues: a weirdly named variable and bogus MTU handling. Fix it up. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> ==================== Link: https://lore.kernel.org/r/20231209-rtl8366rb-mtu-fix-v1-0-df863e2b2b2a@linaro.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Linus Walleij authored
The MTU callbacks are in layer 1 size, so for example 1500 bytes is a normal setting. Cache this size, and only add the layer 2 framing right before choosing the setting. On the CPU port this will however include the DSA tag since this is transmitted from the parent ethernet interface! Add the layer 2 overhead such as ethernet and VLAN framing and FCS before selecting the size in the register. This will make the code easier to understand. The rtl8366rb_max_mtu() callback returns a bogus MTU just subtracting the CPU tag, which is the only thing we should NOT subtract. Return the correct layer 1 max MTU after removing headers and checksum. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
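A rough sketch of the accounting described above; the DSA tag length constant and the framing overhead chosen here are placeholders, not the rtl8366rb register code.

  #include <linux/if_ether.h>     /* ETH_HLEN, ETH_FCS_LEN */
  #include <linux/if_vlan.h>      /* VLAN_HLEN */

  #define DEMO_CPU_TAG_LEN        4       /* placeholder for the Realtek DSA tag */

  /* Cache the layer 1 MTU; add the layer 2 framing (and the CPU tag on the
   * CPU port, since it arrives from the conduit interface) only when picking
   * the hardware frame-size setting.
   */
  static unsigned int demo_frame_size(unsigned int mtu, bool cpu_port)
  {
          unsigned int len = mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;

          if (cpu_port)
                  len += DEMO_CPU_TAG_LEN;

          return len;
  }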
-
Linus Walleij authored
Rename the register name to RTL8366RB instead of the bogus RTL8368S (internal product name?) Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk> Reviewed-by: Luiz Angelo Daros de Luca <luizluca@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Ahelenia Ziemiańska authored
$ modinfo dnsresolver dns_resolver | grep name
modinfo: ERROR: Module dnsresolver not found.
filename: /lib/modules/6.1.0-9-amd64/kernel/net/dns_resolver/dns_resolver.ko
name: dns_resolver
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz> Link: https://lore.kernel.org/r/gh4sxphjxbo56n2spgmc66vtazyxgiehpmv5f2gkvgicy6f4rs@tarta.nabijaczleweli.xyz Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Andy Shevchenko authored
The driver is using the iowriteXX()/ioreadXX() APIs, which are LE IO accessors, simplified as: 1. Convert the given value _from_ CPU _to_ LE; 2. Write it to the device as is. The dev_addr is a byte stream, but because the driver uses 16-bit IO accessors, it wants to perform a double conversion on BE CPUs. However, it got it wrong, as it effectively converts _from_ CPU _to_ LE twice. What it has to do is to consider dev_addr as an array of LE16 and hence do a _from_ LE _to_ CPU conversion, followed by the implied _from_ CPU _to_ LE in iowrite16(). To achieve that, use get_unaligned_le16(). This makes it correct and avoids the sparse warning reported by LKP. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202312030058.hfZPTXd7-lkp@intel.com/ Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20231208153327.3306798-1-andriy.shevchenko@linux.intel.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
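A minimal sketch of the pattern (the register offset and iomem base are placeholders): treat dev_addr as a stream of LE16 words, and let iowrite16() perform the single CPU-to-LE step so the bytes land identically on LE and BE hosts.

  #include <linux/if_ether.h>     /* ETH_ALEN */
  #include <linux/io.h>
  #include <asm/unaligned.h>      /* get_unaligned_le16() */

  #define DEMO_MAC_ADDR_REG       0x10    /* placeholder register offset */

  static void demo_set_mac(void __iomem *ioaddr, const u8 *dev_addr)
  {
          int i;

          for (i = 0; i < ETH_ALEN; i += 2)
                  iowrite16(get_unaligned_le16(dev_addr + i),
                            ioaddr + DEMO_MAC_ADDR_REG + i);
  }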
-
Swarup Laxman Kotiaklapudi authored
Add some missing (not all) attributes in devlink.yaml. Signed-off-by: Swarup Laxman Kotiaklapudi <swarupkotikalapudi@gmail.com> Suggested-by: Jiri Pirko <jiri@resnulli.us> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Link: https://lore.kernel.org/r/20231208182515.1206616-1-swarupkotikalapudi@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Pedro Tammela says: ==================== net/sched: conditional notification of events for cls and act This is an optimization we have been leveraging on P4TC but we believe it will benefit rtnl users in general. It's common to allocate an skb, build a notification message and then broadcast an event. In the absence of any user space listeners, these resources (cpu and memory operations) are wasted. In cases where the subsystem is lockless (such as in tc-flower) this waste is more prominent. For the scenarios where the rtnl_lock is held it is not as prominent. The idea is simple. Build and send the notification iff:
- The user requests it via NLM_F_ECHO, or
- Someone is listening to the rtnl group (tc mon)

On a simple test with tc-flower adding 1M entries, using just a single core, there's already a noticeable difference in the cycles spent in tc_new_tfilter with this patchset.

before:
  - 43.68% tc_new_tfilter
     + 31.73% fl_change
     + 6.35% tfilter_notify
     + 1.62% nlmsg_notify
       0.66% __tcf_qdisc_find.part.0
       0.64% __tcf_chain_get
       0.54% fl_get
     + 0.53% tcf_proto_lookup_ops

after:
  - 39.20% tc_new_tfilter
     + 34.58% fl_change
       0.69% __tcf_qdisc_find.part.0
       0.67% __tcf_chain_get
     + 0.61% tcf_proto_lookup_ops

Note, the above test is using iproute2:tc which execs a shell. We expect people using netlink directly to observe even greater reductions. The qdisc side needs some refactoring of the notification routines to fit in this new model, so they will be sent in a later patchset. ==================== Link: https://lore.kernel.org/r/20231208192847.714940-1-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Pedro Tammela authored
As of today tc-filter/chain events are unconditionally built and sent to RTNLGRP_TC. As with the introduction of rtnl_notify_needed we can check before-hand if they are really needed. This will help to alleviate system pressure when filters are concurrently added without the rtnl lock as in tc-flower. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Link: https://lore.kernel.org/r/20231208192847.714940-8-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Pedro Tammela authored
This argument is never set to true by any caller, so remove it as there's no need for it. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Link: https://lore.kernel.org/r/20231208192847.714940-7-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Pedro Tammela authored
As of today tc-action events are unconditionally built and sent to RTNLGRP_TC. As with the introduction of rtnl_notify_needed we can check before-hand if they are really needed. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Link: https://lore.kernel.org/r/20231208192847.714940-6-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Pedro Tammela authored
Use max() in a couple of places that are open coding it with the ternary operator. Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Link: https://lore.kernel.org/r/20231208192847.714940-5-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
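For illustration, the kind of substitution this refers to (the variables and function are placeholders):

  #include <linux/minmax.h>
  #include <linux/types.h>

  static u32 demo_pick_larger(u32 a, u32 b)
  {
          /* before: open-coded with the ternary operator
           *      return a > b ? a : b;
           * after: max() is clearer and also type-checks its arguments
           */
          return max(a, b);
  }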
-
Pedro Tammela authored
This is a convenience helper for routines handling conditional rtnl events, that is, code that might send a notification depending on rtnl_has_listeners/rtnl_notify_needed. Instead of:
  if (skb)
          rtnetlink_send(...)
Use:
  rtnetlink_maybe_send(...)
Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Link: https://lore.kernel.org/r/20231208192847.714940-4-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
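A hedged sketch of such a wrapper (the in-tree signature may differ); it simply forwards to the existing rtnetlink_send() when a notification skb was actually built.

  #include <linux/rtnetlink.h>
  #include <linux/skbuff.h>

  /* Sketch only -- mirrors the "if (skb) rtnetlink_send(...)" pattern above. */
  static inline int demo_rtnetlink_maybe_send(struct sk_buff *skb, struct net *net,
                                              u32 pid, u32 group, int echo)
  {
          return skb ? rtnetlink_send(skb, net, pid, group, echo) : 0;
  }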
-
Victor Nogueira authored
Building on the rtnl_has_listeners helper, add the rtnl_notify_needed helper to check if we can bail out early in the notification routines. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Link: https://lore.kernel.org/r/20231208192847.714940-3-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
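One plausible shape for such a check, assuming it combines the caller's NLM_F_ECHO request with the listener test; a sketch, not necessarily the merged implementation.

  #include <linux/netlink.h>      /* NLM_F_ECHO */
  #include <net/net_namespace.h>

  /* Sketch: notify only if the caller asked for an echo or someone subscribed. */
  static inline bool demo_rtnl_notify_needed(const struct net *net, u16 nlflags,
                                             u32 group)
  {
          return (nlflags & NLM_F_ECHO) || rtnl_has_listeners(net, group);
  }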
-
Jamal Hadi Salim authored
As of today, rtnl code creates a new skb and unconditionally fills and broadcasts it to the relevant group. For most operations this is okay and doesn't waste resources in general. When operations are done without the rtnl_lock, as in tc-flower, such skb allocation, message fill and no-op broadcasting can happen in all cores of the system, which contributes to system pressure and wastes precious cpu cycles when no one will receive the built message. Introduce this helper so rtnetlink operations can simply check if someone is listening and then proceed if necessary. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Link: https://lore.kernel.org/r/20231208192847.714940-2-pctammela@mojatatu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
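A minimal sketch of the helper's idea, built on the existing netlink_has_listeners(); the actual in-tree implementation may differ in detail.

  #include <linux/netlink.h>
  #include <net/net_namespace.h>

  /* Sketch: ask the per-netns rtnetlink socket whether the multicast group
   * has any subscribers before bothering to build a notification.
   */
  static inline bool demo_rtnl_has_listeners(const struct net *net, u32 group)
  {
          struct sock *rtnl = net->rtnl;

          return netlink_has_listeners(rtnl, group);
  }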
-
- 11 Dec, 2023 5 commits
-
-
David S. Miller authored
Eric Dumazet says: ==================== ipv6: more data-race annotations A small follow-up series, taking care of races around np->mcast_oif and np->ucast_oif. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
np->ucast_oif is read locklessly in some contexts. Make all accesses to this field lockless, adding appropriate annotations. This also makes setsockopt( IPV6_UNICAST_IF ) lockless. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
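A small sketch of the annotation pattern referred to here; np is the socket's struct ipv6_pinfo, and the accessor functions are illustrative.

  #include <linux/compiler.h>     /* READ_ONCE(), WRITE_ONCE() */
  #include <linux/ipv6.h>         /* struct ipv6_pinfo */

  static int demo_get_ucast_oif(const struct ipv6_pinfo *np)
  {
          /* lockless reader, paired with WRITE_ONCE() on the writer side */
          return READ_ONCE(np->ucast_oif);
  }

  static void demo_set_ucast_oif(struct ipv6_pinfo *np, int ifindex)
  {
          /* writer, e.g. setsockopt(IPV6_UNICAST_IF) */
          WRITE_ONCE(np->ucast_oif, ifindex);
  }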
-
Eric Dumazet authored
np->mcast_oif is read locklessly in some contexts. Make all accesses to this field lockless, adding appropriate annotations. This also makes setsockopt( IPV6_MULTICAST_IF ) lockless. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
This reverts commit b8dbbbc5 ("net: rtnetlink: remove local list in __linkwatch_run_queue()"). It's evidently broken when there's a non-urgent work that gets added back, and then the loop can never finish. While reverting, add a note about that. Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Fixes: b8dbbbc5 ("net: rtnetlink: remove local list in __linkwatch_run_queue()") Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fei Qin authored
The device supports UDP hardware segmentation offload, which helps improve performance. Thus, this patch adds support for UDP segmentation offload from the driver side. Signed-off-by: Fei Qin <fei.qin@corigine.com> Signed-off-by: Louis Peens <louis.peens@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
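For illustration, advertising UDP segmentation offload generally means setting the NETIF_F_GSO_UDP_L4 feature bit on the netdev; this is a generic sketch, not the nfp-specific wiring.

  #include <linux/netdevice.h>

  /* Sketch: expose UDP L4 segmentation offload alongside the other features. */
  static void demo_enable_uso(struct net_device *netdev)
  {
          netdev->hw_features |= NETIF_F_GSO_UDP_L4;
          netdev->features    |= NETIF_F_GSO_UDP_L4;
  }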
-
- 10 Dec, 2023 4 commits
-
-
David S. Miller authored
Yoshihiro Shimoda says: ==================== net: rswitch: Add jumbo frames support This patch series is based on the latest net-next.git / main branch.

Changes from v3: https://lore.kernel.org/all/20231204012058.3876078-1-yoshihiro.shimoda.uh@renesas.com/
- Based on the latest net-next.git / main branch.
- Modify for code consistency in the patch 3/9.
- Add a condition in the patch 3/9.
- Fix usage of dma_addr in the patch 8/9.

Changes from v2: https://lore.kernel.org/all/20231201054655.3731772-1-yoshihiro.shimoda.uh@renesas.com/
- Based on the latest net-next.git / main branch.
- Fix using a variable in the patch 8/9.
- Add Reviewed-by tag in the patch 1/9.

Changes from v1: https://lore.kernel.org/all/20231127115334.3670790-1-yoshihiro.shimoda.uh@renesas.com/
- Based on the latest net-next.git / main branch.
- Fix commit descriptions (s/near the future/the near future/).
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yoshihiro Shimoda authored
Allow jumbo frames by changing maximum MTU size and number of RX queues. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yoshihiro Shimoda authored
If the driver would like to transmit a jumbo frame of 2KiB or more, it should be split into multiple descriptors. To support this in the near future, add handling of the specific descriptor types F{START,MID,END}. However, such jumbo frames will not happen yet because the maximum MTU size is still the default for now. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
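A conceptual sketch of how a frame spanning several descriptors gets its type marks, following the F{START,MID,END} naming above; the descriptor layout and type codes are placeholders, not the rswitch definitions.

  /* Placeholder descriptor type codes and layout -- illustrative only. */
  enum demo_die_dt { DEMO_DT_FSINGLE, DEMO_DT_FSTART, DEMO_DT_FMID, DEMO_DT_FEND };

  struct demo_desc {
          enum demo_die_dt die_dt;
          /* ... buffer address, length ... */
  };

  static void demo_mark_frame(struct demo_desc *desc, unsigned int num_desc)
  {
          unsigned int i;

          for (i = 0; i < num_desc; i++) {
                  if (num_desc == 1)
                          desc[i].die_dt = DEMO_DT_FSINGLE;   /* whole frame in one descriptor */
                  else if (i == 0)
                          desc[i].die_dt = DEMO_DT_FSTART;    /* first chunk */
                  else if (i == num_desc - 1)
                          desc[i].die_dt = DEMO_DT_FEND;      /* last chunk */
                  else
                          desc[i].die_dt = DEMO_DT_FMID;      /* middle chunks */
          }
  }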
-
Yoshihiro Shimoda authored
If this hardware receives a jumbo frame of 2KiB or more, it will be split into multiple descriptors. To support this in the near future, add handling of the specific descriptor types F{START,MID,END}. However, such jumbo frames will not happen yet because the maximum MTU size is still the default for now. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-