1. 03 Mar, 2022 22 commits
  2. 02 Mar, 2022 13 commits
    • net: hamradio: fix compilation error · a577223a
      Wang Qing authored
      add missing ")" which caused by previous commit.
      
      Fixes: 61c4fb9c ("net: hamradio: use time_is_after_jiffies() instead of open coding it")
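      
      For context, a hedged sketch of the kind of conversion involved (the
      variable name tx_start is illustrative, not the actual hamradio code):
      the earlier commit replaced an open-coded jiffies comparison with
      time_is_after_jiffies(), and a closing parenthesis was lost on the way.
      
        #include <linux/jiffies.h>
        
        /* before: open-coded comparison; tx_start is a hypothetical
         * jiffies timestamp
         */
        if (time_before(jiffies, tx_start + HZ))
                return;
        
        /* after the conversion; note the restored closing ")" */
        if (time_is_after_jiffies(tx_start + HZ))
                return;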
      Link: https://lore.kernel.org/all/1646018012-61129-1-git-send-email-wangqing@vivo.com/
      Signed-off-by: Wang Qing <wangqing@vivo.com>
      Link: https://lore.kernel.org/r/1646203277-83159-1-git-send-email-wangqing@vivo.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      a577223a
    • batman-adv: Demote batadv-on-batadv skip error message · 6ee3c393
      Sven Eckelmann authored
      The error message "Cannot find parent device" was shown for users of
      macvtap (on batadv devices) whenever the macvtap was moved to a different
      netns. This happens because macvtap doesn't provide an implementation for
      rtnl_link_ops->get_link_net.
      
      The situation for which this message is printed is actually not an error
      but just a warning that the optional sanity check was skipped. So demote
      the message from error to warning and adjust the text to better explain
      what happened.
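      
      A hedged before/after sketch of such a demotion (illustrative printk
      helpers and message text, not the verbatim batman-adv diff):
      
        /* before: printed at error severity */
        pr_err("Cannot find parent device\n");
        
        /* after: the optional sanity check was merely skipped */
        pr_warn("Cannot find parent device. Skipping batadv-on-batadv check for %s\n",
                dev->name);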
      Reported-by: Leonardo Mörlein <freifunk@irrelefant.net>
      Signed-off-by: Sven Eckelmann <sven@narfation.org>
      Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
      6ee3c393
    • batman-adv: Migrate to linux/container_of.h · eb7da4f1
      Sven Eckelmann authored
      The commit d2a8ebbf ("kernel.h: split out container_of() and
      typeof_member() macros") introduced a new header for the container_of()
      related macros, which previously lived in linux/kernel.h. Include the
      new header directly instead of relying on linux/kernel.h.
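      
      As a quick illustration (the struct below is hypothetical, not
      batman-adv code), the macro itself is unchanged; only its home header
      moves:
      
        #include <linux/container_of.h> /* previously via linux/kernel.h */
        #include <linux/list.h>
        
        struct station {
                int id;
                struct list_head node;
        };
        
        /* recover the enclosing struct station from a pointer to its node */
        static inline struct station *station_from_node(struct list_head *n)
        {
                return container_of(n, struct station, node);
        }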
      Signed-off-by: Sven Eckelmann <sven@narfation.org>
      Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
      eb7da4f1
    • Merge branch 'if_ether-h-add-industrial-fieldbus-ethertypes' · 96946d89
      Jakub Kicinski authored
      Daniel Braunwarth says:
      
      ====================
      if_ether.h: add industrial fieldbus Ethertypes
      
      This set of patches adds the Ethertypes for PROFINET and EtherCAT.
      
      The defines should be used by iproute2 to extend the list of available link
      layer protocols.
      ====================
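      
      For reference, the two EtherTypes in question (values as specified by
      the respective protocol standards) land in include/uapi/linux/if_ether.h
      along these lines:
      
        #define ETH_P_PROFINET  0x8892  /* PROFINET */
        #define ETH_P_ETHERCAT  0x88A4  /* EtherCAT */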
      
      Link: https://lore.kernel.org/r/20220228133029.100913-1-daniel@braunwarth.dev
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      96946d89
    • if_ether.h: add EtherCAT Ethertype · cd73cda7
      Daniel Braunwarth authored
      Add the Ethertype for the EtherCAT protocol.
      Signed-off-by: Daniel Braunwarth <daniel@braunwarth.dev>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      cd73cda7
    • if_ether.h: add PROFINET Ethertype · dd0ca255
      Daniel Braunwarth authored
      Add the Ethertype for the PROFINET protocol.
      Signed-off-by: Daniel Braunwarth <daniel@braunwarth.dev>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      dd0ca255
    • macvtap: advertise link netns via netlink · a0219215
      Sven Eckelmann authored
      Assign the rtnl_link_ops->get_link_net() callback so that
      IFLA_LINK_NETNSID is added to rtnetlink messages. This fixes iproute2
      output, which otherwise resolved the link interface to an interface in
      the wrong namespace.
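      
      A hedged sketch of the callback (the function name is illustrative;
      macvlan_dev_real_dev() is the existing helper for reaching the lower
      device):
      
        #include <linux/if_macvlan.h>
        #include <net/rtnetlink.h>
        
        /* report the netns of the lower (real) device so that rtnetlink
         * can fill in IFLA_LINK_NETNSID
         */
        static struct net *macvtap_link_net(const struct net_device *dev)
        {
                return dev_net(macvlan_dev_real_dev(dev));
        }
        
        /* wired up via .get_link_net = macvtap_link_net in macvtap's
         * rtnl_link_ops
         */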
      
      Test commands:
      
        ip netns add nst
        ip link add dummy0 type dummy
        ip link add link dummy0 macvtap0 type macvtap
        ip link set macvtap0 netns nst
        ip -netns nst link show macvtap0
      
      Before:
      
        10: macvtap0@gre0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 500
            link/ether 5e:8f:ae:1d:60:50 brd ff:ff:ff:ff:ff:ff
      
      After:
      
        10: macvtap0@if2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 500
            link/ether 5e:8f:ae:1d:60:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
      Reported-by: Leonardo Mörlein <freifunk@irrelefant.net>
      Signed-off-by: Sven Eckelmann <sven@narfation.org>
      Link: https://lore.kernel.org/r/20220228003240.1337426-1-sven@narfation.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      a0219215
    • nfp: avoid newline at end of message in NL_SET_ERR_MSG_MOD · 323d51ca
      Wan Jiabing authored
      Fix the following coccicheck warning:
      ./drivers/net/ethernet/netronome/nfp/flower/qos_conf.c:750:7-55: WARNING
      avoid newline at end of message in NL_SET_ERR_MSG_MOD
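      
      The fix is simply dropping the trailing newline from the extack string;
      a hedged before/after sketch (the message text is illustrative, not the
      exact string from qos_conf.c):
      
        /* before: the trailing "\n" ends up in the netlink extack message */
        NL_SET_ERR_MSG_MOD(extack, "rate limit not supported\n");
        
        /* after */
        NL_SET_ERR_MSG_MOD(extack, "rate limit not supported");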
      Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
      Reviewed-by: Simon Horman <simon.horman@corigine.com>
      Link: https://lore.kernel.org/r/20220301112356.1820985-1-wanjiabing@vivo.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      323d51ca
    • tun: support NAPI for packets received from batched XDP buffs · fb3f9037
      Harold Huang authored
      tun already supports NAPI, so NAPI can also be used in the path of
      batched XDP buffs to accelerate packet processing. What is more, with
      NAPI in place, GRO is also supported. iperf shows that single-stream
      throughput improves from 4.5 Gbps to 9.2 Gbps, which nearly reaches the
      line rate of the physical NIC, while about 15% of a CPU core remains
      idle on the vhost thread.
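      
      Conceptually (a hedged sketch, not the verbatim diff; tfile->napi and
      tfile->napi_enabled follow the existing tun NAPI plumbing), skbs built
      from batched XDP buffs are handed to the per-queue NAPI/GRO path
      instead of going straight to netif_receive_skb():
      
        /* in the batched-XDP receive path, for each skb built from a buff */
        if (tfile->napi_enabled) {
                local_bh_disable();
                napi_gro_receive(&tfile->napi, skb); /* allows GRO merging */
                local_bh_enable();
        } else {
                netif_receive_skb(skb);
        }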
      
      Test topology:
      [iperf server]<--->tap<--->dpdk testpmd<--->phy nic<--->[iperf client]
      
      Iperf stream:
      iperf3 -c 10.0.0.2  -i 1 -t 10
      
      Before:
      ...
      [  5]   5.00-6.00   sec   558 MBytes  4.68 Gbits/sec    0   1.50 MBytes
      [  5]   6.00-7.00   sec   556 MBytes  4.67 Gbits/sec    1   1.35 MBytes
      [  5]   7.00-8.00   sec   556 MBytes  4.67 Gbits/sec    2   1.18 MBytes
      [  5]   8.00-9.00   sec   559 MBytes  4.69 Gbits/sec    0   1.48 MBytes
      [  5]   9.00-10.00  sec   556 MBytes  4.67 Gbits/sec    1   1.33 MBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec  5.39 GBytes  4.63 Gbits/sec   72          sender
      [  5]   0.00-10.04  sec  5.39 GBytes  4.61 Gbits/sec               receiver
      
      After:
      ...
      [  5]   5.00-6.00   sec  1.07 GBytes  9.19 Gbits/sec    0   1.55 MBytes
      [  5]   6.00-7.00   sec  1.08 GBytes  9.30 Gbits/sec    0   1.63 MBytes
      [  5]   7.00-8.00   sec  1.08 GBytes  9.25 Gbits/sec    0   1.72 MBytes
      [  5]   8.00-9.00   sec  1.08 GBytes  9.25 Gbits/sec   77   1.31 MBytes
      [  5]   9.00-10.00  sec  1.08 GBytes  9.24 Gbits/sec    0   1.48 MBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec  10.8 GBytes  9.28 Gbits/sec  166          sender
      [  5]   0.00-10.04  sec  10.8 GBytes  9.24 Gbits/sec               receiver
      
      Reported-at: https://lore.kernel.org/all/CACGkMEvTLG0Ayg+TtbN4q4pPW-ycgCCs3sC3-TF8cuRTf7Pp1A@mail.gmail.com
      Signed-off-by: Harold Huang <baymaxhuang@gmail.com>
      Acked-by: Jason Wang <jasowang@redhat.com>
      Link: https://lore.kernel.org/r/20220228033805.1579435-1-baymaxhuang@gmail.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      fb3f9037
    • Merge branch 'sfc-optimize-rxqs-count-and-affinities' · 422ce836
      Jakub Kicinski authored
      Íñigo Huguet says:
      
      ====================
      sfc: optimize RXQs count and affinities
      
      In the sfc driver, one RX queue per physical core was allocated by
      default. Later on, IRQ affinities were set by spreading the IRQs across
      all NUMA-local CPUs.
      
      However, that default configuration turns out to be far from optimal on
      many modern systems. Specifically, on systems with hyperthreading and
      2 NUMA nodes, affinities are set in a way that IRQs are handled by all
      logical cores of one single NUMA node. Handling IRQs on both
      hyperthreading siblings of a core has no benefit, and setting
      affinities to one queue per physical core across the whole system is
      not a good idea either, because there is a performance penalty for
      moving data across nodes (I was able to verify this with some XDP tests
      using pktgen).
      
      These patches reduce the default number of channels to one per physical
      core in the local NUMA node, and then set IRQ affinities to CPUs in the
      local NUMA node only. This way we save hardware resources, since
      channels are a limited resource, and also leave more room for XDP_TX
      channels without hitting the driver's limit of 32 channels per interface.
      
      Running performance tests using iperf with an SFC9140 device showed no
      performance penalty from reducing the number of channels.
      
      RX XDP tests showed that performance can go down to less than half if
      the IRQ is handled by a CPU in a different NUMA node, which doesn't
      happen with the new defaults from these patches.
      ====================
      
      Link: https://lore.kernel.org/r/20220228132254.25787-1-ihuguet@redhat.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      422ce836
    • sfc: set affinity hints in local NUMA node only · 09a99ab1
      Íñigo Huguet authored
      Affinity hints were being set to CPUs in the local NUMA node first, and
      then to other CPUs. This created two unintended issues:
      1. Channels intended to be assigned each to a different physical core
         were assigned to hyperthreading siblings because they were in the
         same NUMA node.
         Since the previous patch, this no longer happens with the default
         rss_cpus modparam, because fewer channels are created.
      2. XDP channels could be assigned to CPUs in different NUMA nodes,
         decreasing performance too much (to less than half in some of my
         tests).
      
      This patch sets the affinity hints so that the channels are spread
      across the local NUMA node's CPUs only, as sketched below. A fallback
      for the case where no CPU in the local NUMA node is online has been
      added too.
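      
      A hedged sketch of the spreading logic with the fallback (simplified
      and illustrative, though built from existing kernel helpers such as
      cpumask_of_pcibus() and irq_set_affinity_hint()):
      
        static void efx_set_interrupt_affinity(struct efx_nic *efx)
        {
                const struct cpumask *numa_mask =
                        cpumask_of_pcibus(efx->pci_dev->bus);
                struct efx_channel *channel;
                int cpu = -1;
        
                /* fallback if the local NUMA node has no online CPU */
                if (cpumask_first_and(cpu_online_mask, numa_mask) >= nr_cpu_ids)
                        numa_mask = cpu_online_mask;
        
                efx_for_each_channel(channel, efx) {
                        cpu = cpumask_next_and(cpu, cpu_online_mask, numa_mask);
                        if (cpu >= nr_cpu_ids) /* wrap around to reuse CPUs */
                                cpu = cpumask_first_and(cpu_online_mask,
                                                        numa_mask);
                        irq_set_affinity_hint(channel->irq, cpumask_of(cpu));
                }
        }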
      
      Example of CPUs being assigned in a non-optimal way before this and the
      previous patch (note: on this system, xdp-8 to xdp-15 are created
      because num_possible_cpus == 64, but num_present_cpus == 32, so they're
      never used):
      
      $ lscpu | grep -i numa
      NUMA node(s):                    2
      NUMA node0 CPU(s):               0-7,16-23
      NUMA node1 CPU(s):               8-15,24-31
      
      $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
      /proc/irq/141/0000:07:00.0-0/../smp_affinity_list:0
      /proc/irq/142/0000:07:00.0-1/../smp_affinity_list:1
      /proc/irq/143/0000:07:00.0-2/../smp_affinity_list:2
      /proc/irq/144/0000:07:00.0-3/../smp_affinity_list:3
      /proc/irq/145/0000:07:00.0-4/../smp_affinity_list:4
      /proc/irq/146/0000:07:00.0-5/../smp_affinity_list:5
      /proc/irq/147/0000:07:00.0-6/../smp_affinity_list:6
      /proc/irq/148/0000:07:00.0-7/../smp_affinity_list:7
      /proc/irq/149/0000:07:00.0-8/../smp_affinity_list:16
      /proc/irq/150/0000:07:00.0-9/../smp_affinity_list:17
      /proc/irq/151/0000:07:00.0-10/../smp_affinity_list:18
      /proc/irq/152/0000:07:00.0-11/../smp_affinity_list:19
      /proc/irq/153/0000:07:00.0-12/../smp_affinity_list:20
      /proc/irq/154/0000:07:00.0-13/../smp_affinity_list:21
      /proc/irq/155/0000:07:00.0-14/../smp_affinity_list:22
      /proc/irq/156/0000:07:00.0-15/../smp_affinity_list:23
      /proc/irq/157/0000:07:00.0-xdp-0/../smp_affinity_list:8
      /proc/irq/158/0000:07:00.0-xdp-1/../smp_affinity_list:9
      /proc/irq/159/0000:07:00.0-xdp-2/../smp_affinity_list:10
      /proc/irq/160/0000:07:00.0-xdp-3/../smp_affinity_list:11
      /proc/irq/161/0000:07:00.0-xdp-4/../smp_affinity_list:12
      /proc/irq/162/0000:07:00.0-xdp-5/../smp_affinity_list:13
      /proc/irq/163/0000:07:00.0-xdp-6/../smp_affinity_list:14
      /proc/irq/164/0000:07:00.0-xdp-7/../smp_affinity_list:15
      /proc/irq/165/0000:07:00.0-xdp-8/../smp_affinity_list:24
      /proc/irq/166/0000:07:00.0-xdp-9/../smp_affinity_list:25
      /proc/irq/167/0000:07:00.0-xdp-10/../smp_affinity_list:26
      /proc/irq/168/0000:07:00.0-xdp-11/../smp_affinity_list:27
      /proc/irq/169/0000:07:00.0-xdp-12/../smp_affinity_list:28
      /proc/irq/170/0000:07:00.0-xdp-13/../smp_affinity_list:29
      /proc/irq/171/0000:07:00.0-xdp-14/../smp_affinity_list:30
      /proc/irq/172/0000:07:00.0-xdp-15/../smp_affinity_list:31
      
      CPU assignments after this and the previous patch: normal channels are
      created one per core in the local NUMA node only, and affinities are
      set to the local NUMA node only:
      
      $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
      /proc/irq/116/0000:07:00.0-0/../smp_affinity_list:0
      /proc/irq/117/0000:07:00.0-1/../smp_affinity_list:1
      /proc/irq/118/0000:07:00.0-2/../smp_affinity_list:2
      /proc/irq/119/0000:07:00.0-3/../smp_affinity_list:3
      /proc/irq/120/0000:07:00.0-4/../smp_affinity_list:4
      /proc/irq/121/0000:07:00.0-5/../smp_affinity_list:5
      /proc/irq/122/0000:07:00.0-6/../smp_affinity_list:6
      /proc/irq/123/0000:07:00.0-7/../smp_affinity_list:7
      /proc/irq/124/0000:07:00.0-xdp-0/../smp_affinity_list:16
      /proc/irq/125/0000:07:00.0-xdp-1/../smp_affinity_list:17
      /proc/irq/126/0000:07:00.0-xdp-2/../smp_affinity_list:18
      /proc/irq/127/0000:07:00.0-xdp-3/../smp_affinity_list:19
      /proc/irq/128/0000:07:00.0-xdp-4/../smp_affinity_list:20
      /proc/irq/129/0000:07:00.0-xdp-5/../smp_affinity_list:21
      /proc/irq/130/0000:07:00.0-xdp-6/../smp_affinity_list:22
      /proc/irq/131/0000:07:00.0-xdp-7/../smp_affinity_list:23
      /proc/irq/132/0000:07:00.0-xdp-8/../smp_affinity_list:0
      /proc/irq/133/0000:07:00.0-xdp-9/../smp_affinity_list:1
      /proc/irq/134/0000:07:00.0-xdp-10/../smp_affinity_list:2
      /proc/irq/135/0000:07:00.0-xdp-11/../smp_affinity_list:3
      /proc/irq/136/0000:07:00.0-xdp-12/../smp_affinity_list:4
      /proc/irq/137/0000:07:00.0-xdp-13/../smp_affinity_list:5
      /proc/irq/138/0000:07:00.0-xdp-14/../smp_affinity_list:6
      /proc/irq/139/0000:07:00.0-xdp-15/../smp_affinity_list:7
      Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
      Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      09a99ab1
    • sfc: default config to 1 channel/core in local NUMA node only · c265b569
      Íñigo Huguet authored
      Handling channels from CPUs in a different NUMA node can penalize
      performance, so it is better to configure only one channel per core in
      the same NUMA node as the NIC, rather than one per core in the whole
      system.
      
      Fall back to all other online cores if there are no online CPUs in the
      local NUMA node. A sketch of the core-counting logic follows.
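      
      A hedged sketch of how the wanted channel count can be derived
      (illustrative; it uses existing cpumask/topology helpers, but the real
      driver folds this into its rss_cpus handling):
      
        /* count one CPU per physical core in the device's local NUMA node */
        static unsigned int efx_num_local_cores(int node)
        {
                cpumask_var_t seen;
                unsigned int cpu, count = 0;
        
                if (!zalloc_cpumask_var(&seen, GFP_KERNEL))
                        return num_online_cpus(); /* conservative fallback */
        
                for_each_cpu_and(cpu, cpu_online_mask, cpumask_of_node(node)) {
                        if (cpumask_test_cpu(cpu, seen))
                                continue; /* sibling thread already counted */
                        cpumask_or(seen, seen, topology_sibling_cpumask(cpu));
                        count++;
                }
        
                free_cpumask_var(seen);
                /* fallback: no online CPU in the local NUMA node */
                return count ?: num_online_cpus();
        }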
      Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
      Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      c265b569
    • net: smc: fix different types in min() · ef739f1d
      Jakub Kicinski authored
      Fix build:
      
       include/linux/minmax.h:45:25: note: in expansion of macro ‘__careful_cmp’
         45 | #define min(x, y)       __careful_cmp(x, y, <)
            |                         ^~~~~~~~~~~~~
       net/smc/smc_tx.c:150:24: note: in expansion of macro ‘min’
        150 |         corking_size = min(sock_net(&smc->sk)->smc.sysctl_autocorking_size,
            |                        ^~~
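      
      The usual fix is min_t() with an explicit common type; a hedged sketch
      of the change (the second operand is elided here, as in the build log):
      
        /* before: operands of different types trip __careful_cmp */
        corking_size = min(sock_net(&smc->sk)->smc.sysctl_autocorking_size,
                           ...);
        
        /* after: force both operands to a common type */
        corking_size = min_t(unsigned int,
                             sock_net(&smc->sk)->smc.sysctl_autocorking_size,
                             ...);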
      
      Fixes: 12bbb0d1 ("net/smc: add sysctl for autocorking")
      Link: https://lore.kernel.org/r/20220301222446.1271127-1-kuba@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      ef739f1d
  3. 01 Mar, 2022 5 commits