1. 24 Apr, 2019 4 commits
    • ipv6: convert fib6_ref to refcount_t · f05713e0
      Eric Dumazet authored
      We suspect some issues involving fib6_ref 0 -> 1 transitions might
      cause strange syzbot reports.
      
      Let's convert fib6_ref to refcount_t to catch them earlier.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Wei Wang <weiwan@google.com>
      Acked-by: Wei Wang <weiwan@google.com>
      Reviewed-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
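The property motivating the conversion can be sketched in userspace C. This is a simplified model of refcount_t semantics, not the kernel's lib/refcount.c: unlike a plain atomic counter, an increment from zero is detected and refused instead of silently resurrecting a freed object.

```c
/* Userspace sketch of refcount_t's 0 -> 1 protection (illustrative only). */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_uint refs; } refcount_t;

static void refcount_set(refcount_t *r, unsigned int n)
{
        atomic_store(&r->refs, n);
}

/* Returns false (and leaves the count at zero) on a 0 -> 1 attempt,
 * which is exactly the transition the commit message suspects. */
static bool refcount_inc_not_zero(refcount_t *r)
{
        unsigned int old = atomic_load(&r->refs);

        do {
                if (old == 0)
                        return false;   /* would-be use-after-free caught here */
        } while (!atomic_compare_exchange_weak(&r->refs, &old, old + 1));

        return true;
}

/* Returns true when the count drops to zero and the object may be freed. */
static bool refcount_dec_and_test(refcount_t *r)
{
        return atomic_fetch_sub(&r->refs, 1) == 1;
}
```

A plain atomic_inc() on the same counter would happily move it from 0 to 1, hiding the bug; here the bad transition is observable at the call site.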
    • ipv6: broadly use fib6_info_hold() helper · 5ea71528
      Eric Dumazet authored
      Instead of using atomic_inc(), prefer fib6_info_hold()
      so that the upcoming refcount_t conversion is simpler.
      
      Only fib6_info_alloc() still uses atomic_set(), since it has
      just allocated a new object.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Wei Wang <weiwan@google.com>
      Acked-by: Wei Wang <weiwan@google.com>
      Reviewed-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
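The value of routing every reference-take through one helper is that the later type swap becomes a change confined to the helpers. A hedged userspace sketch of the pattern, with names mirroring the kernel's but a simplified model of the real fib6_info:

```c
/* Sketch of the hold/release helper pattern (illustrative, not mlx5/ipv6 code). */
#include <stdatomic.h>
#include <stdlib.h>

struct fib6_info {
        atomic_int fib6_ref;    /* becomes refcount_t after the conversion */
};

/* Callers use this instead of atomic_fetch_add(); after the conversion,
 * only this one line changes to refcount_inc(). */
static void fib6_info_hold(struct fib6_info *f6i)
{
        atomic_fetch_add(&f6i->fib6_ref, 1);
}

static void fib6_info_release(struct fib6_info *f6i)
{
        /* becomes refcount_dec_and_test() after the conversion */
        if (atomic_fetch_sub(&f6i->fib6_ref, 1) == 1)
                free(f6i);
}

static struct fib6_info *fib6_info_alloc(void)
{
        struct fib6_info *f6i = calloc(1, sizeof(*f6i));

        if (f6i)
                atomic_store(&f6i->fib6_ref, 1);  /* freshly allocated: plain set is fine */
        return f6i;
}
```

This is why the commit above leaves fib6_info_alloc() on atomic_set(): a just-allocated object is not yet visible to other CPUs, so no transition check is needed there.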
    • ipv6: fib6_info_destroy_rcu() cleanup · b0270550
      Eric Dumazet authored
      We do not need to clear f6i->rt6i_exception_bucket right before
      freeing f6i.
      
      Note that f6i->rt6i_exception_bucket is protected by
      f6i->exception_bucket_flushed, which rt6_flush_exceptions() sets
      to one while holding rt6_exception_lock.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Wei Wang <weiwan@google.com>
      Acked-by: Wei Wang <weiwan@google.com>
      Reviewed-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge tag 'mlx5-updates-2019-04-22' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux · 20eb08b2
      David S. Miller authored
      Saeed Mahameed says:
      
      ====================
      mlx5-updates-2019-04-22
      
      This series includes updates to the mlx5e driver RX data path and
      some significant XDP RX/TX improvements to mitigate HW and PCIe
      bottlenecks.
      
      From Tariq:
      1) Some enhancements in rq->flags
      2) Stabilize RX packet rate (on Striding RQ) with multiple
      outstanding UMR posts
      This patch adds support for multiple outstanding UMR posts, allowing
      faster gap closure between consuming MPWQEs and reposting them back
      into the WQ.
      
      Performance test:
      As expected, a huge improvement at large scale (48 cores).
      
      xdp_redirect_map, 64B UDP multi-stream.
      Redirect from ConnectX-5 100Gbps to ConnectX-6 100Gbps.
      CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.
      
      Before: Unstable, 7 to 30 Mpps
      After:  Stable,   at 70.5 Mpps
      
      From Shay:
      3) XDP, Inline small packets into the TX MPWQE in XDP xmit flow
      
      Under high packet rates with multi-CPU TX workloads, much of the
      HCA's resources are spent prefetching TX descriptors, which limits
      transmission rates.
      This patch mitigates the problem by moving some work to the CPU,
      reducing the HW data-prefetch overhead for small packets (<= 256B).
      
      When forwarding packets with XDP, a packet smaller than a certain
      size (set to ~256 bytes) is sent inline within its TX WQE descriptor
      (mem-copied) when the hardware TX queue is congested beyond a
      pre-defined watermark.
      
      Performance:
          Tested packet rate for UDP 64Byte multi-stream
          over two dual port ConnectX-5 100Gbps NICs.
          CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
      
          * Tested with hyper-threading disabled
      
          XDP_TX:
      
          |          | before | after   |       |
          | 24 rings | 51Mpps | 116Mpps | +126% |
          | 1 ring   | 12Mpps | 12Mpps  | same  |
      
          XDP_REDIRECT:
      
          ** Below is the transmit rate, not the redirection rate
          which might be larger, and is not affected by this patch.
      
          |          | before  | after   |      |
          | 32 rings | 64Mpps  | 92Mpps  | +43% |
          | 1 ring   | 6.4Mpps | 6.4Mpps | same |
      
      As we can see, the feature significantly improves scaling without
      hurting single-ring performance.
      
      From Maxim:
      4) Some trivial refactoring and code improvements prior to a larger series
      to support AF_XDP.
      ====================
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
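The inlining decision described in the merge message can be sketched as a simple predicate. The ~256-byte threshold and the congestion watermark come from the message above; the function name, struct, and the exact congestion metric are illustrative assumptions, not the mlx5e code:

```c
/* Hedged sketch of "inline small packets when the TX queue is congested".
 * Names and the congestion metric are hypothetical, not the driver's. */
#include <stdbool.h>
#include <stddef.h>

#define XDP_INLINE_MAX  256     /* packets <= this size may be inlined */

struct xdp_txq {
        unsigned int inflight;        /* descriptors posted, not yet completed */
        unsigned int congestion_mark; /* watermark above which we inline */
};

/*
 * Inline (mem-copy the payload into the TX WQE descriptor) only when the
 * packet is small AND the queue is congested; otherwise let the HCA
 * DMA-fetch the buffer as usual. Inlining trades CPU cycles for fewer
 * HW prefetches, which only pays off under congestion.
 */
static bool xdp_should_inline(const struct xdp_txq *sq, size_t len)
{
        return len <= XDP_INLINE_MAX &&
               sq->inflight > sq->congestion_mark;
}
```

This matches the numbers above: 64B UDP streams qualify for inlining under load, while an uncongested single ring keeps the plain DMA path, which is consistent with the unchanged 1-ring results.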
  2. 23 Apr, 2019 36 commits