1. 17 Apr, 2018 27 commits
    • Lorenzo Bianconi
      ipv6: send netlink notifications for manually configured addresses · a2d481b3
      Lorenzo Bianconi authored
      Send a netlink notification when userspace adds a manually configured
      address if DAD is enabled and the optimistic flag isn't set.
      Moreover, send RTM_DELADDR notifications for tentative addresses.
      
      Some userspace applications (e.g. NetworkManager) are interested in
      addr netlink events even if the address is still in tentative state;
      however, events are not sent if the DAD process has not completed.
      If the address is added and immediately removed, userspace listeners
      are not notified. This behaviour can easily be reproduced using
      veth interfaces:
      
      $ ip -b - <<EOF
      > link add dev vm1 type veth peer name vm2
      > link set dev vm1 up
      > link set dev vm2 up
      > addr add 2001:db8:a:b:1:2:3:4/64 dev vm1
      > addr del 2001:db8:a:b:1:2:3:4/64 dev vm1
      EOF
      
      This patch reverts the behaviour introduced by commit f784ad3d
      ("ipv6: do not send RTM_DELADDR for tentative addresses").
      Suggested-by: Thomas Haller <thaller@redhat.com>
      Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a2d481b3
    • Ganesh Goudar
      cxgb4vf: display pause settings · a64dcddc
      Ganesh Goudar authored
      Add support to display pause settings
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a64dcddc
    • Hangbin Liu
      vxlan: add ttl inherit support · 72f6d71e
      Hangbin Liu authored
      Like tos inherit, ttl inherit should also mean inheriting the inner
      protocol's ttl value; this is actually not implemented in vxlan yet.
      
      But we could not treat ttl == 0 as "use the inner TTL", because that
      would also apply when the "ttl" option is not specified, which would be
      a behavior change and would break real use cases.
      
      So add a different attribute, IFLA_VXLAN_TTL_INHERIT, for when
      "ttl inherit" is specified with the ip command.
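
      As a rough illustration only, the new attribute would be parsed along
      these lines in the driver's link-configuration path; the conf->ttl_inherit
      field name and the surrounding context are assumptions, not the actual
      patch:

          /* Hedged sketch: honour the new flag attribute if userspace set it. */
          if (data[IFLA_VXLAN_TTL_INHERIT])
                  conf->ttl_inherit = true;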
      Reported-by: Jianlin Shi <jishi@redhat.com>
      Suggested-by: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      72f6d71e
    • Samuel Mendoza-Jonas
      net/ncsi: Refactor MAC, VLAN filters · 062b3e1b
      Samuel Mendoza-Jonas authored
      The NCSI driver defines a generic ncsi_channel_filter struct that can be
      used to store arbitrarily formatted filters, and several generic methods
      of accessing data stored in such a filter.
      However in both the driver and as defined in the NCSI specification
      there are only two actual filters: VLAN ID filters and MAC address
      filters. The splitting of the MAC filter into unicast, multicast, and
      mixed is also technically not necessary as these are stored in the same
      location in hardware.
      
      To save complexity, particularly in the setup and accessing of these
      generic filters, remove them in favour of two specific structs. These
      can be acted on directly and do not need several generic helper
      functions to be used.
      
      This also fixes a memory error found by KASAN on ARM32 (which is not
      upstream yet), where response handlers accessing a filter's data field
      could write past allocated memory.
      
      [  114.926512] ==================================================================
      [  114.933861] BUG: KASAN: slab-out-of-bounds in ncsi_configure_channel+0x4b8/0xc58
      [  114.941304] Read of size 2 at addr 94888558 by task kworker/0:2/546
      [  114.947593]
      [  114.949146] CPU: 0 PID: 546 Comm: kworker/0:2 Not tainted 4.16.0-rc6-00119-ge156398bfcad #13
      ...
      [  115.170233] The buggy address belongs to the object at 94888540
      [  115.170233]  which belongs to the cache kmalloc-32 of size 32
      [  115.181917] The buggy address is located 24 bytes inside of
      [  115.181917]  32-byte region [94888540, 94888560)
      [  115.192115] The buggy address belongs to the page:
      [  115.196943] page:9eeac100 count:1 mapcount:0 mapping:94888000 index:0x94888fc1
      [  115.204200] flags: 0x100(slab)
      [  115.207330] raw: 00000100 94888000 94888fc1 0000003f 00000001 9eea2014 9eecaa74 96c003e0
      [  115.215444] page dumped because: kasan: bad access detected
      [  115.221036]
      [  115.222544] Memory state around the buggy address:
      [  115.227384]  94888400: fb fb fb fb fc fc fc fc 04 fc fc fc fc fc fc fc
      [  115.233959]  94888480: 00 00 00 fc fc fc fc fc 00 04 fc fc fc fc fc fc
      [  115.240529] >94888500: 00 00 04 fc fc fc fc fc 00 00 04 fc fc fc fc fc
      [  115.247077]                                             ^
      [  115.252523]  94888580: 00 04 fc fc fc fc fc fc 06 fc fc fc fc fc fc fc
      [  115.259093]  94888600: 00 00 06 fc fc fc fc fc 00 00 04 fc fc fc fc fc
      [  115.265639] ==================================================================
      Reported-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      062b3e1b
    • Eric Biggers
      KEYS: DNS: limit the length of option strings · c210f7b4
      Eric Biggers authored
      Adding a dns_resolver key whose payload contains a very long option name
      resulted in that string being printed in full.  This hit the WARN_ONCE()
      in set_precision() during the printk(), because printk() only supports a
      precision of up to 32767 bytes:
      
          precision 1000000 too large
          WARNING: CPU: 0 PID: 752 at lib/vsprintf.c:2189 vsnprintf+0x4bc/0x5b0
      
      Fix it by limiting option strings (combined name + value) to a much more
      reasonable 128 bytes.  The exact limit is arbitrary, but currently the
      only recognized option is formatted as "dnserror=%lu" which fits well
      within this limit.
      
      Also ratelimit the printks.
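
      A minimal sketch of the shape of the fix (the variable names, message
      text and exact placement are illustrative, not the actual patch):

          /* Reject over-long option strings and ratelimit the diagnostic. */
          if (opt_len > 128) {
                  pr_warn_ratelimited("dns_resolver: option too long (%zu bytes)\n",
                                      opt_len);
                  return -EINVAL;
          }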
      
      Reproducer:
      
          perl -e 'print "#", "A" x 1000000, "\x00"' | keyctl padd dns_resolver desc @s
      
      This bug was found using syzkaller.
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 4a2d7892 ("DNS: If the DNS server returns an error, allow that to be cached [ver #2]")
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c210f7b4
    • Davide Caratti
    • Stephen Suryaputra
      ipv6: Count interface receive statistics on the ingress netdev · bdb7cc64
      Stephen Suryaputra authored
      The statistics such as InHdrErrors should be counted on the ingress
      netdev rather than on the dev from the dst, which is the egress.
      Signed-off-by: Stephen Suryaputra <ssuryaextr@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bdb7cc64
    • David Ahern
      net/ipv6: Make __inet6_bind static · 032234d8
      David Ahern authored
      The BPF core gets access to __inet6_bind via ipv6_bpf_stub_impl, so it
      is not invoked directly outside of af_inet6.c. Make it static and move
      inet6_bind after it to avoid a forward declaration.
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      032234d8
    • David S. Miller
      Merge branch 'XDP-redirect-memory-return-API' · 684009d4
      David S. Miller authored
      Jesper Dangaard Brouer says:
      
      ====================
      XDP redirect memory return API
      
      Submitted against net-next, as it contains NIC driver changes.
      
      This patchset works towards supporting different XDP RX-ring memory
      allocators, as this will be needed by the AF_XDP zero-copy mode.
      
      The patchset uses mlx5 as the sample driver, which gets XDP_REDIRECT
      RX-mode implemented, but not ndo_xdp_xmit (as this API is subject to
      change throughout the patchset).
      
      A new struct xdp_frame is introduced (modeled after the cpumap xdp_pkt),
      and both ndo_xdp_xmit and the new xdp_return_frame end up using it.
      
      Support for a driver-supplied allocator is implemented, and a
      refurbished version of page_pool is the first return-allocator type
      introduced.  This will be an integration point for AF_XDP zero-copy.
      
      The mlx5 driver evolves into using the page_pool, and sees a performance
      increase (with ndo_xdp_xmit out the ixgbe driver) from 6 Mpps to 12 Mpps.
      
      The patchset stops at 16 patches (one over the limit), but more API
      changes are planned, specifically extending the ndo_xdp_xmit and
      xdp_return_frame APIs to support bulking, as this will address some
      known limits.
      
      V2: Updated according to Tariq's feedback
      V3: Updated based on feedback from Jason Wang and Alex Duyck
      V4: Updated based on feedback from Tariq and Jason
      V5: Fix SPDX license, add Tariq's reviews, improve patch desc for perf test
      V6: Updated based on feedback from Eric Dumazet and Alex Duyck
      V7: Adapt to i40e that got XDP_REDIRECT support in-between
      V8:
       Updated based on feedback kbuild test robot, and adjust for mlx5 changes
       page_pool only compiled into kernel when drivers Kconfig 'select' feature
      V9:
       Remove some inline statements, let compiler decide what to inline
       Fix return value in virtio_net driver
       Adjust for mlx5 changes in-between submissions
      V10:
       Minor adjust for mlx5 requested by Tariq
       Resubmit against net-next
      V11: avoid leaking info stored in frame data on page reuse
      ====================
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      684009d4
    • Jesper Dangaard Brouer
      xdp: avoid leaking info stored in frame data on page reuse · 6dfb970d
      Jesper Dangaard Brouer authored
      The bpf infrastructure and verifier go to great lengths to avoid
      bpf progs leaking kernel (pointer) info.
      
      For queueing an xdp_buff via XDP_REDIRECT, the xdp_frame info stores
      kernel info (incl. pointers) in the top part of the frame data
      (xdp->data_hard_start). Checks are in place to ensure enough headroom
      is available for this.
      
      This info is not cleared, and if the frame is reused, a malicious user
      could use the bpf_xdp_adjust_head helper to move xdp->data into this
      area, thus making it readable.
      
      This is not super critical, as XDP progs require root or
      CAP_SYS_ADMIN, which are privileged enough to see such info.  An
      effort is underway towards moving networking bpf hooks to the
      less privileged mode CAP_NET_ADMIN, where leaking such info
      should be avoided.  Thus, this patch clears the info when
      needed.
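
      A minimal sketch of the idea, assuming the stale metadata sits at the
      start of the headroom; where exactly the actual patch performs the
      clearing is not shown here:

          /* Hedged sketch: wipe the xdp_frame metadata kept in the headroom
           * before the frame/page is handed back for reuse, so a later
           * bpf_xdp_adjust_head() cannot expose stale kernel pointers.
           */
          memset(xdpf, 0, sizeof(*xdpf));   /* xdpf points into data_hard_start */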
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6dfb970d
    • Jesper Dangaard Brouer
      xdp: transition into using xdp_frame for ndo_xdp_xmit · 44fa2dbd
      Jesper Dangaard Brouer authored
      Change the ndo_xdp_xmit API to take a struct xdp_frame instead of a
      struct xdp_buff.  This brings xdp_return_frame and ndo_xdp_xmit in sync.
      
      This builds towards changing the API further to become a bulk API,
      because xdp_buff is not a queue-able object while xdp_frame is.
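
      A hedged sketch of the redirect transmit path after this change
      (simplified; convert_to_xdp_frame() is the helper introduced earlier in
      this series, and the error handling shown is illustrative):

          struct xdp_frame *xdpf = convert_to_xdp_frame(xdp); /* xdp: struct xdp_buff * */

          if (unlikely(!xdpf))
                  return -EOVERFLOW;                      /* e.g. not enough headroom */
          err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf); /* now takes an xdp_frame */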
      
      V4: Adjust for commit 59655a5b ("tuntap: XDP_TX can use native XDP")
      V7: Adjust for commit d9314c47 ("i40e: add support for XDP_REDIRECT")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      44fa2dbd
    • Jesper Dangaard Brouer
      xdp: transition into using xdp_frame for return API · 03993094
      Jesper Dangaard Brouer authored
      Changing the xdp_return_frame() API to take a struct xdp_frame as
      argument seems like a natural choice. But there are some subtle
      performance details here that need extra care, which is why this is a
      deliberate choice.
      
      De-referencing the xdp_frame on a remote CPU during DMA-TX completion
      results in the cache line changing to the "Shared" state. Later, when
      the page is reused for RX, this xdp_frame cache line is written, which
      changes the state to "Modified".
      
      This situation already happens (naturally) for virtio_net, tun and
      cpumap, as the xdp_frame pointer is the queued object.  In tun and
      cpumap, the ptr_ring is used for efficiently transferring cache lines
      (with pointers) between CPUs. Thus, the only option is to
      de-reference the xdp_frame.
      
      Only the ixgbe driver had an optimization that let it avoid
      de-referencing the xdp_frame.  The driver already has a TX-ring queue,
      which (in case of remote DMA-TX completion) has to be transferred
      between CPUs anyhow.  In this data area, we stored a struct
      xdp_mem_info and a data pointer, which allowed us to avoid
      de-referencing the xdp_frame.
      
      To compensate for this, a prefetchw is used for telling the cache
      coherency protocol about our access pattern.  My benchmarks show that
      this prefetchw is enough to compensate for this in the ixgbe driver.
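
      For reference, the hint is the ordinary prefetch primitive; a sketch,
      with the exact placement in the driver left out:

          /* Ask for the xdp_frame cache line in an exclusive/writable state
           * ahead of the write, hiding the cross-CPU cache-line transfer.
           */
          prefetchw(xdpf);        /* from <linux/prefetch.h> */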
      
      V7: Adjust for commit d9314c47 ("i40e: add support for XDP_REDIRECT")
      V8: Adjust for commit bd658dda ("net/mlx5e: Separate dma base address
      and offset in dma_sync call")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      03993094
    • Jesper Dangaard Brouer
      mlx5: use page_pool for xdp_return_frame call · 60bbf7ee
      Jesper Dangaard Brouer authored
      This patch shows how it is possible to have both the driver's local
      page cache, which uses an elevated refcnt for "catching"/avoiding the
      case where an SKB put_page returns the page through the page allocator,
      and, at the same time, have pages getting returned to the page_pool
      from ndo_xdp_xmit DMA completion.
      
      The performance improvement for XDP_REDIRECT in this patch is really
      good, especially considering that (currently) the xdp_return_frame
      API and page_pool_put_page() do per-frame operations of both an
      rhashtable ID lookup and a locked return into the (page_pool) ptr_ring.
      (The plan is to remove these per-frame operations in a follow-up
      patchset.)
      
      The benchmark performed was RX on mlx5 and XDP_REDIRECT out ixgbe,
      with xdp_redirect_map (using devmap). The target/maximum
      capability of ixgbe is 13 Mpps (on this HW setup).
      
      Before this patch for mlx5, XDP-redirected frames were returned via
      the page allocator.  The single-flow performance was 6 Mpps, and if I
      started two flows the collective performance dropped to 4 Mpps, because
      we hit the page allocator lock (further negative scaling occurs).
      
      Two test scenarios need to be covered for the xdp_return_frame API:
      DMA-TX completion running on the same CPU, and cross-CPU free/return.
      Results were same-CPU = 10 Mpps and cross-CPU = 12 Mpps.  This is very
      close to our 13 Mpps max target.
      
      The reason the max target isn't reached in the cross-CPU test is likely
      RX-ring DMA unmap/map overhead (which doesn't occur in ixgbe-to-ixgbe
      testing).  It is also planned to remove this unnecessary DMA unmap in a
      later patchset.
      
      V2: Adjustments requested by Tariq
       - Changed page_pool_create return codes not return NULL, only
         ERR_PTR, as this simplifies err handling in drivers.
       - Save a branch in mlx5e_page_release
       - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
      
      V5: Updated patch desc
      
      V8: Adjust for b0cedc84 ("net/mlx5e: Remove rq_headroom field from params")
      V9:
       - Adjust for 121e8927 ("net/mlx5e: Refactor RQ XDP_TX indication")
       - Adjust for 73281b78 ("net/mlx5e: Derive Striding RQ size from MTU")
       - Correct handling if page_pool_create fail for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
      
      V10: Req from Tariq
       - Change pool_size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Acked-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      60bbf7ee
    • Jesper Dangaard Brouer
      xdp: allow page_pool as an allocator type in xdp_return_frame · 57d0a1c1
      Jesper Dangaard Brouer authored
      New allocator type MEM_TYPE_PAGE_POOL for page_pool usage.
      
      The registered allocator page_pool pointer is not available directly
      from xdp_rxq_info, but it could be (if needed).  For now, the driver
      should keep separate track of the page_pool pointer, which it should
      use for RX-ring page allocation.
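
      A hedged registration sketch under the model described above (the
      driver-side struct and field names, e.g. rq->page_pool and rq->xdp_rxq,
      are illustrative):

          struct page_pool_params pp_params = { 0 };   /* sizing/DMA params elided */
          struct page_pool *pool = page_pool_create(&pp_params);

          if (IS_ERR(pool))
                  return PTR_ERR(pool);
          rq->page_pool = pool;                 /* driver keeps its own pointer */
          err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq, MEM_TYPE_PAGE_POOL, pool);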
      
      As suggested by Saeed, to maintain a symmetric API it is the driver's
      responsibility to allocate/create and free/destroy the page_pool.
      Thus, after the driver has called xdp_rxq_info_unreg(), it is the
      driver's responsibility to free the page_pool, but with an RCU free
      call.  This is done easily via the page_pool helper page_pool_destroy()
      (which avoids touching any driver code during the RCU callback, which
      could happen after the driver has been unloaded).
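
      The corresponding teardown, per the contract above, would then look
      roughly like this (field names again illustrative):

          xdp_rxq_info_unreg(&rq->xdp_rxq);     /* unregister the RX queue first */
          page_pool_destroy(rq->page_pool);     /* then free the pool (RCU-deferred) */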
      
      V8: address issues found by kbuild test robot
       - Address sparse should be static warnings
       - Allow xdp.o to be compiled without page_pool.o
      
      V9: Remove inline from .c file, compiler knows best
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      57d0a1c1
    • Jesper Dangaard Brouer
      page_pool: refurbish version of page_pool code · ff7d6b27
      Jesper Dangaard Brouer authored
      We need a fast page recycle mechanism for the ndo_xdp_xmit API, for
      returning pages at DMA-TX completion time, that has good cross-CPU
      performance, given DMA-TX completion can happen on a remote CPU.
      
      Refurbish my page_pool code that was presented[1] at MM-summit 2016.
      The page_pool code was adapted to not depend on the page allocator or
      on integration into struct page.  The DMA mapping feature is kept,
      even though it will not be activated/used in this patchset.
      
      [1] http://people.netfilter.org/hawk/presentations/MM-summit2016/generic_page_pool_mm_summit2016.pdf
      
      V2: Adjustments requested by Tariq
       - Changed page_pool_create return codes, don't return NULL, only
         ERR_PTR, as this simplifies err handling in drivers.
      
      V4: many small improvements and cleanups
      - Add DOC comment section, that can be used by kernel-doc
      - Improve fallback mode, to work better with refcnt based recycling
        e.g. remove a WARN as pointed out by Tariq
        e.g. quicker fallback if ptr_ring is empty.
      
      V5: Fixed SPDX license as pointed out by Alexei
      
      V6: Adjustments requested by Eric Dumazet
       - Adjust ____cacheline_aligned_in_smp usage/placement
       - Move rcu_head in struct page_pool
       - Free pages quicker on destroy, minimize resources delayed an RCU period
       - Remove code for forward/backward compat ABI interface
      
      V8: Issues found by kbuild test robot
       - Address sparse should be static warnings
       - Only compile+link when a driver use/select page_pool,
         mlx5 selects CONFIG_PAGE_POOL, although its first used in two patches
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ff7d6b27
    • Jesper Dangaard Brouer
      xdp: rhashtable with allocator ID to pointer mapping · 8d5d8852
      Jesper Dangaard Brouer authored
      Use the IDA infrastructure to get a cyclically increasing ID number,
      which is used for keeping track of each registered allocator per
      RX-queue xdp_rxq_info.  Instead of using the IDR infrastructure, which
      uses a radix tree, use a dynamic rhashtable to create the ID-to-pointer
      lookup table, because this is faster.
      
      The problem being solved here is that the xdp_rxq_info
      pointer (stored in xdp_buff) cannot be used directly, as its
      guaranteed lifetime is too short.  The info is needed on a
      (potentially) remote CPU during DMA-TX completion time.  An
      xdp_frame stores the xdp_mem_info it was given when converted from an
      xdp_buff, which is sufficient for the simple page-refcnt based recycle
      schemes.
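
      For reference, the per-frame cookie being discussed is tiny; a sketch of
      its layout as used in this series:

          struct xdp_mem_info {
                  u32 type;   /* enum xdp_mem_type, e.g. MEM_TYPE_PAGE_SHARED */
                  u32 id;     /* IDA-assigned id; rhashtable key to the allocator */
          };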
      
      For more advanced allocators there is a need to store a pointer to the
      registered allocator.  Thus, there is a need to guard the lifetime or
      validity of the allocator pointer, which is done through this
      rhashtable ID-to-pointer map. The removal and validity of the
      allocator and the helper struct xdp_mem_allocator are guarded by RCU.
      The allocator will be created by the driver, and registered with
      xdp_rxq_info_reg_mem_model().
      
      It is up for debate who is responsible for freeing the allocator
      pointer or invoking the allocator destructor function.  In any case,
      this must happen via RCU freeing.
      
      V4: Per req of Jason Wang
      - Use xdp_rxq_info_reg_mem_model() in all drivers implementing
        XDP_REDIRECT, even-though it's not strictly necessary when
        allocator==NULL for type MEM_TYPE_PAGE_SHARED (given it's zero).
      
      V6: Per req of Alex Duyck
      - Introduce rhashtable_lookup() call in later patch
      
      V8: Address sparse should be static warnings (from kbuild test robot)
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8d5d8852
    • Jesper Dangaard Brouer
      mlx5: register a memory model when XDP is enabled · 84f5e3fb
      Jesper Dangaard Brouer authored
      Now all the users of ndo_xdp_xmit have been converted to use xdp_return_frame.
      This enables a different memory model, thus activating another code path
      in the xdp_return_frame API.
      
      V2: Fixed issues pointed out by Tariq.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Acked-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      84f5e3fb
    • Jesper Dangaard Brouer
      i40e: convert to use generic xdp_frame and xdp_return_frame API · b411ef11
      Jesper Dangaard Brouer authored
      Also convert driver i40e, which very recently got XDP_REDIRECT support
      in commit d9314c47 ("i40e: add support for XDP_REDIRECT").
      
      V7: This patch got added in V7 of this patchset.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b411ef11
    • Jesper Dangaard Brouer
      bpf: cpumap convert to use generic xdp_frame · 70280ed9
      Jesper Dangaard Brouer authored
      The generic xdp_frame format was inspired by cpumap's own internal
      xdp_pkt format.  It is now time to convert cpumap over to the generic
      xdp_frame format.  The cpumap needs one extra field, dev_rx.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      70280ed9
    • Jesper Dangaard Brouer
      virtio_net: convert to use generic xdp_frame and xdp_return_frame API · cac320c8
      Jesper Dangaard Brouer authored
      The virtio_net driver assumes XDP frames are always released based on
      page refcnt (via put_page).  Thus, it only queues the XDP data pointer
      address and uses virt_to_head_page() to retrieve the struct page.
      
      Use the XDP return API to get away from such assumptions. Instead,
      queue an xdp_frame, which allows us to use the xdp_return_frame API
      when releasing the frame.
      
      V8: Avoid endianness issues (found by kbuild test robot)
      V9: Change __virtnet_xdp_xmit from bool to int return value (found by Dan Carpenter)
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cac320c8
    • Jesper Dangaard Brouer
      tun: convert to use generic xdp_frame and xdp_return_frame API · 1ffcbc85
      Jesper Dangaard Brouer authored
      The tuntap driver invented its own driver-specific way of queuing
      XDP packets, by storing the xdp_buff information in the top of
      the XDP frame data.
      
      Convert it over to use the more generic xdp_frame structure.  The
      main problem with the in-driver method is that the xdp_rxq_info pointer
      cannot be trusted/used when dequeueing the frame.
      
      V3: Remove check based on feedback from Jason
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1ffcbc85
    • Jesper Dangaard Brouer
      xdp: introduce a new xdp_frame type · c0048cff
      Jesper Dangaard Brouer authored
      This is needed to convert drivers tuntap and virtio_net.
      
      This is a generalization of what is done inside cpumap, which will be
      converted later.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c0048cff
    • Jesper Dangaard Brouer
      xdp: move struct xdp_buff from filter.h to xdp.h · 106ca27f
      Jesper Dangaard Brouer authored
      This is done to prepare for the next patch, and it is also
      nice to move this XDP related struct out of filter.h.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      106ca27f
    • Jesper Dangaard Brouer
      ixgbe: use xdp_return_frame API · 189ead81
      Jesper Dangaard Brouer authored
      Extend struct ixgbe_tx_buffer to store the xdp_mem_info.
      
      Notice that this could be optimized further by putting this into
      a union in the struct ixgbe_tx_buffer, but this patchset
      works towards removing this again.  Thus, this is not done.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      189ead81
    • Jesper Dangaard Brouer
      xdp: introduce xdp_return_frame API and use in cpumap · 5ab073ff
      Jesper Dangaard Brouer authored
      Introduce an xdp_return_frame API, and convert over cpumap as
      the first user, given it has a queued XDP frame structure to leverage.
      
      V3: Cleanup and remove C99 style comments, pointed out by Alex Duyck.
      V6: Remove comment that id will be added later (Req by Alex Duyck)
      V8: Rename enum mem_type to xdp_mem_type (found by kbuild test robot)
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5ab073ff
    • Jesper Dangaard Brouer
      mlx5: basic XDP_REDIRECT forward support · 5168d732
      Jesper Dangaard Brouer authored
      This implements basic XDP redirect support in mlx5 driver.
      
      Notice that ndo_xdp_xmit() is NOT implemented, because that API
      needs some changes that this patchset is working towards.
      
      The main purpose of this patch is to have different drivers doing
      XDP_REDIRECT, to show how different memory models behave in a
      cross-driver world.
      
      Update(pre-RFCv2 Tariq): Need to DMA unmap page before xdp_do_redirect,
      as the return API does not exist yet to keep this mapped.
      
      Update(pre-RFCv3 Saeed): Don't mix XDP_TX and XDP_REDIRECT flushing,
      introduce xdpsq.db.redirect_flush boolean.
      
      V9: Adjust for commit 121e8927 ("net/mlx5e: Refactor RQ XDP_TX indication")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Acked-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5168d732
    • Intiyaz Basha
      liquidio: Enhanced ethtool stats · 897ddc24
      Intiyaz Basha authored
      1. Added red_drops stats. Inbound packets dropped by RED, buffer exhaustion
      2. Included fcs_err, jabber_err, l2_err and frame_err errors under
         rx_errors
      3. Included fifo_err, dmac_drop, red_drops, fw_err_pko, fw_err_link and
         fw_err_drop under rx_dropped
      4. Included max_collision_fail, max_deferral_fail, total_collisions,
         fw_err_pko, fw_err_link, fw_err_drop and fw_err_pki under tx_dropped
      5. Counting dma mapping errors
      6. Added descriptions for some firmware stats and removed them for others
      Signed-off-by: Intiyaz Basha <intiyaz.basha@cavium.com>
      Acked-by: Derek Chickles <derek.chickles@cavium.com>
      Acked-by: Satanand Burla <satananda.burla@cavium.com>
      Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      897ddc24
  2. 16 Apr, 2018 13 commits
    • Andrey Ignatov
      net: Remove unused tcp_set_state tracepoint · ef53e9e1
      Andrey Ignatov authored
      This tracepoint was replaced by inet_sock_set_state in 563e0bb0 and is
      not used anywhere in the kernel anymore. Remove it.
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ef53e9e1
    • David S. Miller
      Merge branch 'pci-mrrs-consts' · 4c85d2d4
      David S. Miller authored
      Heiner Kallweit says:
      
      ====================
      PCI: add two more values for PCIe Max_Read_Request_Size and initially use them in r8169 network driver
      
      In the r8169 network driver I stumbled across a magic number translating
      to a PCI MRRS size of 4K. The PCI core is still missing constants for
      the values 2K and 4K (as defined in the PCI standard).
      
      So let's add these two constants and use the 4K constant in r8169.
      
      The second patch depends on the first one, therefore both patches
      should preferably go through either the PCI or the netdev tree.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4c85d2d4
    • Heiner Kallweit
      r8169: replace magic numbers with PCI MRRS constant · 8d98aa39
      Heiner Kallweit authored
      Replace magic number "0x5 << MAX_READ_REQUEST_SHIFT" with the
      appropriate constant as defined in PCI core.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8d98aa39
    • Heiner Kallweit
      PCI: Add two more values for PCIe Max_Read_Request_Size · a5724fc3
      Heiner Kallweit authored
      This patch adds missing values for the max read request size.
      E.g. the r8169 network driver uses a value of 4K.
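
      The kind of definitions involved, sketched here with assumed names
      following the existing PCI_EXP_DEVCTL_READRQ_* pattern (the READRQ field
      encodes 128 << value, so 4 -> 2048 bytes and 5 -> 4096 bytes):

          #define PCI_EXP_DEVCTL_READRQ_2048B 0x4000 /* 2048 Bytes */
          #define PCI_EXP_DEVCTL_READRQ_4096B 0x5000 /* 4096 Bytes */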
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a5724fc3
    • David S. Miller
      Merge branch 'net-stmmac-Stop-using-hard-coded-callbacks' · 5da8baa3
      David S. Miller authored
      Jose Abreu says:
      
      ====================
      net: stmmac: Stop using hard-coded callbacks
      
      This is a starting point for a cleanup and re-organization of stmmac.
      
      In this series we stop using hard-coded callbacks along the code and use
      instead helpers which are defined in a single place ("hwif.h").
      
      This brings several advantages:
      	1) Less typing :)
      	2) Guaranteed function pointer check
      	3) More flexibility
      
      By 2) we stop using the repeated pattern of:
      	if (priv->hw->mac->some_func)
      		priv->hw->mac->some_func(...)
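
      Conceptually the helpers collapse that pattern into something like the
      following sketch (the macro name and its exact form in hwif.h are
      illustrative, not the actual implementation):

          #define stmmac_do_callback(priv, module, fn, ...)               \
          ({                                                              \
                  int __ret = -EINVAL;                                    \
                  if ((priv)->hw->module && (priv)->hw->module->fn)       \
                          __ret = (priv)->hw->module->fn(__VA_ARGS__);    \
                  __ret;                                                  \
          })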
      
      I didn't check, but I expect the final .ko will be bigger with this
      series because *all* of the function pointers are checked.
      
      Anyway, I hope this can make the code more readable and more flexible now.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5da8baa3
    • Jose Abreu
      net: stmmac: Switch stmmac_mode_ops to generic HW Interface Helpers · 2c520b1c
      Jose Abreu authored
      Switch stmmac_mode_ops to generic Hardware Interface Helpers instead of
      using hard-coded callbacks. This makes the code more readable and more
      flexible.
      
      No functional change.
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2c520b1c
    • Jose Abreu
      net: stmmac: Switch stmmac_hwtimestamp to generic HW Interface Helpers · cc4c9001
      Jose Abreu authored
      Switch stmmac_hwtimestamp to generic Hardware Interface Helpers instead
      of using hard-coded callbacks. This makes the code more readable and
      more flexible.
      
      No functional change.
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc4c9001
    • Jose Abreu
      net: stmmac: Switch stmmac_ops to generic HW Interface Helpers · c10d4c82
      Jose Abreu authored
      Switch stmmac_ops to generic Hardware Interface Helpers instead of using
      hard-coded callbacks. This makes the code more readable and more
      flexible.
      
      No functional change.
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c10d4c82
    • Jose Abreu
      net: stmmac: Switch stmmac_dma_ops to generic HW Interface Helpers · a4e887fa
      Jose Abreu authored
      Switch stmmac_dma_ops to generic Hardware Interface Helpers instead of
      using hard-coded callbacks. This makes the code more readable and more
      flexible.
      
      No functional change.
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a4e887fa
    • Jose Abreu
      net: stmmac: Switch stmmac_desc_ops to generic HW Interface Helpers · 42de047d
      Jose Abreu authored
      Switch stmmac_desc_ops to generic Hardware Interface Helpers instead of
      using hard-coded callbacks. This makes the code more readable and more
      flexible.
      
      No functional change.
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      42de047d
    • David S. Miller
      Merge branch 'tcp-zero-copy-receive' · 309c446c
      David S. Miller authored
      Eric Dumazet says:
      
      ====================
      tcp: add zero copy receive
      
      This patch series adds mmap() support to TCP sockets for RX zero copy.

      While the tcp_mmap() patch itself is quite small (~100 LOC), optimal
      support for asynchronous mmap() required better SO_RCVLOWAT behavior,
      and a test program to demonstrate how mmap() on TCP sockets can be used.
      
      Note that mmap() (and associated munmap()) calls are adding more
      pressure on the per-process VM semaphore, so they might not show a
      benefit for processes with a high number of threads.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      309c446c
    • Eric Dumazet
      selftests: net: add tcp_mmap program · 192dc405
      Eric Dumazet authored
      This is a reference program showing how mmap() can be used
      on TCP flows to implement receive zero copy.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      192dc405
    • Eric Dumazet
      tcp: implement mmap() for zero copy receive · 93ab6cc6
      Eric Dumazet authored
      Some networks can make sure the TCP payload exactly fits 4KB pages,
      with well-chosen MSS/MTU and architectures.
      
      Implement mmap() system call so that applications can avoid
      copying data without complex splice() games.
      
      Note that a successful mmap(X bytes) on a TCP socket consumes
      bytes, as if recvmsg() had been done. (tp->copied += X)
      
      Only PROT_READ mappings are accepted, as skb page frags
      are fundamentally shared and read only.
      
      If tcp_mmap() finds data that is not a full page, or a patch of
      urgent data, -EINVAL is returned and no bytes are consumed.
      
      The application must fall back to recvmsg() to read the problematic sequence.
      
      mmap() won't block, regardless of the socket being in blocking or
      non-blocking mode. If not enough bytes are in the receive queue,
      mmap() returns -EAGAIN, or -EIO if the socket is in a state
      where no other bytes can be added into the receive queue.
      
      An application might use SO_RCVLOWAT, poll() and/or ioctl(FIONREAD)
      to use mmap() efficiently.
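
      A hedged user-space sketch of that receive pattern (fd is the connected
      TCP socket; error handling is trimmed, and the 4 KB page size and
      chunking policy are assumptions):

          #include <sys/ioctl.h>
          #include <sys/mman.h>

          int avail = 0;
          ioctl(fd, FIONREAD, &avail);               /* bytes currently queued */
          size_t len = avail & ~((size_t)4096 - 1);  /* map whole pages only */
          if (len) {
                  void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
                  if (p != MAP_FAILED) {
                          /* consume len bytes directly from p ... */
                          munmap(p, len);
                  } else {
                          /* fall back to recvmsg() for this sequence */
                  }
          }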
      
      On the sender side, MSG_EOR might help to clearly separate unaligned
      headers and 4K-aligned chunks if necessary.
      
      Tested:
      
      mlx4 (cx-3) 40Gbit NIC, with tcp_mmap program provided in following patch.
      MTU set to 4168  (4096 TCP payload, 40 bytes IPv6 header, 32 bytes TCP header)
      
      Without mmap() (tcp_mmap -s)
      
      received 32768 MB (0 % mmap'ed) in 8.13342 s, 33.7961 Gbit,
        cpu usage user:0.034 sys:3.778, 116.333 usec per MB, 63062 c-switches
      received 32768 MB (0 % mmap'ed) in 8.14501 s, 33.748 Gbit,
        cpu usage user:0.029 sys:3.997, 122.864 usec per MB, 61903 c-switches
      received 32768 MB (0 % mmap'ed) in 8.11723 s, 33.8635 Gbit,
        cpu usage user:0.048 sys:3.964, 122.437 usec per MB, 62983 c-switches
      received 32768 MB (0 % mmap'ed) in 8.39189 s, 32.7552 Gbit,
        cpu usage user:0.038 sys:4.181, 128.754 usec per MB, 55834 c-switches
      
      With mmap() on receiver (tcp_mmap -s -z)
      
      received 32768 MB (100 % mmap'ed) in 8.03083 s, 34.2278 Gbit,
        cpu usage user:0.024 sys:1.466, 45.4712 usec per MB, 65479 c-switches
      received 32768 MB (100 % mmap'ed) in 7.98805 s, 34.4111 Gbit,
        cpu usage user:0.026 sys:1.401, 43.5486 usec per MB, 65447 c-switches
      received 32768 MB (100 % mmap'ed) in 7.98377 s, 34.4296 Gbit,
        cpu usage user:0.028 sys:1.452, 45.166 usec per MB, 65496 c-switches
      received 32768 MB (99.9969 % mmap'ed) in 8.01838 s, 34.281 Gbit,
        cpu usage user:0.02 sys:1.446, 44.7388 usec per MB, 65505 c-switches
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      93ab6cc6