1. 10 Mar, 2017 27 commits
  2. 09 Mar, 2017 13 commits
    • Merge branch 'bpf-htab-fixes' · 8ddbb312
      David S. Miller authored
      Alexei Starovoitov says:
      
      ====================
      bpf: htab fixes
      
      Two bpf hashtable fixes. See individual patches for details.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: convert htab map to hlist_nulls · 4fe84359
      Alexei Starovoitov authored
      When all map elements are pre-allocated, one CPU can delete and
      reuse an htab_elem while another CPU is still walking the hlist. In
      that case the lookup may miss the element. Convert the hlist to
      hlist_nulls to avoid this scenario. When the bucket lock is taken
      there is no need for such precautions, so only convert map_lookup
      and map_get_next to nulls.
      The race window is extremely small and only reproducible with an
      explicit udelay() inside lookup_nulls_elem_raw().
      
      Similar to hlist, add hlist_nulls_for_each_entry_safe() and
      hlist_nulls_entry_safe() helpers.
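      
      For reference, a minimal sketch of the generic hlist_nulls RCU lookup
      pattern is shown below (per Documentation/RCU/rculist_nulls.txt); the
      'obj' structure, 'head' and 'slot' names are placeholders, not the
      actual kernel/bpf/hashtab.c code. The nulls marker at the end of each
      chain encodes the bucket, so a reader that got moved onto another
      chain by a concurrent delete-and-reuse detects the mismatch and
      restarts.
      
      #include <linux/rculist_nulls.h>
      #include <linux/rcupdate.h>
      
      struct obj {
      	struct hlist_nulls_node node;
      	u32 key;
      };
      
      static struct obj *obj_lookup(struct hlist_nulls_head *head, u32 key,
      			      u32 slot)
      {
      	struct hlist_nulls_node *n;
      	struct obj *obj;
      
      begin:
      	rcu_read_lock();
      	hlist_nulls_for_each_entry_rcu(obj, n, head, node) {
      		if (obj->key == key) {
      			rcu_read_unlock();
      			return obj;
      		}
      	}
      	/* Chain ended on a nulls value from a different bucket: an
      	 * element was deleted and reused concurrently, so entries may
      	 * have been skipped.  Restart the walk.
      	 */
      	if (get_nulls_value(n) != slot) {
      		rcu_read_unlock();
      		goto begin;
      	}
      	rcu_read_unlock();
      	return NULL;
      }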
      
      Fixes: 6c905981 ("bpf: pre-allocate hash map elements")
      Reported-by: Jonathan Perry <jonperry@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: fix struct htab_elem layout · 9f691549
      Alexei Starovoitov authored
      When an htab_elem is removed from the bucket list, the
      htab_elem.hash_node.next field must not be overwritten too early,
      otherwise there is a tiny race window between lookup and delete.
      The bug was discovered by manual code analysis and is reproducible
      only with an explicit udelay() in lookup_elem_raw().
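      
      For illustration only, a hedged sketch of the hazard and of a layout
      that avoids it (field names assumed, not the actual struct htab_elem):
      if the free-list linkage shares its first word with hash_node.next,
      pushing a just-deleted element onto the free list overwrites ->next
      while a concurrent lookup may still be following it; keeping the
      reused word away from ->next closes that window.
      
      #include <linux/list.h>
      
      struct freelist_node {
      	struct freelist_node *next;
      };
      
      /* Racy: both union members begin with a 'next' pointer, so reusing
       * the element as a free-list node clobbers hash_node.next.
       */
      struct elem_racy {
      	union {
      		struct hlist_node hash_node;	/* { next, pprev } */
      		struct freelist_node fnode;	/* { next } */
      	};
      	char key[];
      };
      
      /* Safer layout: the reused word is pushed past ->next (it overlaps
       * ->pprev instead), so a lookup still walking the chain reads a
       * sane next pointer.
       */
      struct elem_fixed {
      	union {
      		struct hlist_node hash_node;
      		struct {
      			void *padding;
      			struct freelist_node fnode;
      		};
      	};
      	char key[];
      };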
      
      Fixes: 6c905981 ("bpf: pre-allocate hash map elements")
      Reported-by: Jonathan Perry <jonperry@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • uapi: fix linux/packet_diag.h userspace compilation error · 745cb7f8
      Dmitry V. Levin authored
      Replace MAX_ADDR_LEN with its numeric value to fix the following
      linux/packet_diag.h userspace compilation error:
      
      /usr/include/linux/packet_diag.h:67:17: error: 'MAX_ADDR_LEN' undeclared here (not in a function)
        __u8 pdmc_addr[MAX_ADDR_LEN];
      
      This is not the first case in the UAPI where the numeric value of
      MAX_ADDR_LEN is used instead of the symbolic one; uapi/linux/if_link.h
      already does the same:
      
      $ grep MAX_ADDR_LEN include/uapi/linux/if_link.h
      	__u8 mac[32]; /* MAX_ADDR_LEN */
      
      There are no UAPI headers besides these two that use MAX_ADDR_LEN.
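      
      A minimal userspace compile check along these lines (a hypothetical
      test program, not part of the patch) now builds, since the header no
      longer references the kernel-only MAX_ADDR_LEN symbol:
      
      /* build with: gcc -Wall -o pdiag_test pdiag_test.c */
      #include <linux/packet_diag.h>
      #include <stdio.h>
      
      int main(void)
      {
      	struct packet_diag_mclist mc = { 0 };
      
      	/* pdmc_addr is now declared as __u8 pdmc_addr[32] */
      	printf("sizeof(pdmc_addr) = %zu\n", sizeof(mc.pdmc_addr));
      	return 0;
      }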
      Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
      Acked-by: Pavel Emelyanov <xemul@virtuozzo.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/tunnel: set inner protocol in network gro hooks · 294acf1c
      Paolo Abeni authored
      The GSO code of several tunnel types (gre and udp tunnels) takes for
      granted that skb->inner_protocol is properly initialized, and drops
      the packet when it is not.
      
      On the forwarding path nobody initializes that field, so
      GRO-aggregated encapsulated packets are dropped on forwarding.
      
      Since commit 38720352 ("gre: Use inner_proto to obtain
      inner header protocol"), this can be reproduced when the
      encapsulated packets use gre as the tunneling protocol.
      
      The issue also happens with vxlan and geneve tunnels since
      commit 8bce6d7d ("udp: Generalize skb_udp_segment"), if the
      forwarding host's ingress NIC has h/w offload for such a tunnel
      and a vxlan/geneve device is configured on top of it, regardless
      of the configured peer address and vni.
      
      To address the issue, this change initializes the inner_protocol
      field for encapsulated packets in both the ipv4 and ipv6 gro
      complete callbacks.
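      
      A hedged sketch of the shape of the change (a hypothetical helper
      illustrating the intent; the exact placement inside the ipv4/ipv6
      gro_complete callbacks may differ): when the aggregated skb is
      encapsulated, record the network protocol as the inner protocol
      alongside the inner network header.
      
      #include <linux/skbuff.h>
      #include <linux/if_ether.h>
      
      static void gro_complete_set_inner(struct sk_buff *skb, int nhoff,
      				   __be16 inner_proto)
      {
      	if (skb->encapsulation) {
      		/* e.g. cpu_to_be16(ETH_P_IP) from the ipv4 hook,
      		 * cpu_to_be16(ETH_P_IPV6) from the ipv6 hook
      		 */
      		skb_set_inner_protocol(skb, inner_proto);
      		skb_set_inner_network_header(skb, nhoff);
      	}
      }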
      
      Fixes: 38720352 ("gre: Use inner_proto to obtain inner header protocol")
      Fixes: 8bce6d7d ("udp: Generalize skb_udp_segment")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qed: Fix copy of uninitialized memory · 8aad6f14
      robert.foss@collabora.com authored
      In qed_ll2_start_ooo() the ll2_info variable is uninitialized and then
      passed to qed_ll2_acquire_connection() where it is copied into a new
      memory space.
      
      This shouldn't cause any issue as long as none of the copied memory
      is ever read. But the potential for a bug being introduced by
      reading this memory is real.
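      
      As a generic illustration (hypothetical struct and field names, not
      the qed data structures), the usual cure is to zero the on-stack
      block before filling in the fields that matter, so a later
      whole-struct copy never reads uninitialized stack bytes:
      
      #include <linux/string.h>
      
      struct conn_params {
      	u8  conn_type;
      	u16 rx_num_desc;
      	u16 tx_num_desc;
      };
      
      static void setup_conn_params(struct conn_params *p)
      {
      	memset(p, 0, sizeof(*p));	/* or: struct conn_params p = {}; */
      	p->conn_type   = 1;
      	p->rx_num_desc = 512;
      	p->tx_num_desc = 512;
      }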
      
      Detected by CoverityScan, CID#1399632 ("Uninitialized scalar variable")
      Signed-off-by: Robert Foss <robert.foss@collabora.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'thunderx-misc-fixes' · c021aaca
      David S. Miller authored
      Sunil Goutham says:
      
      ====================
      net: thunderx: Miscellaneous fixes
      
      This patch set fixes multiple issues, such as IOMMU translation
      faults when the kernel is booted with the IOMMU enabled on the host,
      incorrect MAC IDs read from ACPI tables, and IPv6 UDP packets
      dropped due to checksum validation failures.
      
      Changes from v1:
      - As suggested by David Miller, got rid of conditionally calling the
        DMA map/unmap APIs. Also updated the commit message in the
        'IOMMU translation faults' patch.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: thunderx: Allow IPv6 frames with zero UDP checksum · 36fa35d2
      Thanneeru Srinivasulu authored
      Do not treat IPv6 frames with a zero UDP checksum as frames with a
      bad checksum, and do not drop them.
      Signed-off-by: Thanneeru Srinivasulu <tsrinivasulu@cavium.com>
      Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: thunderx: Fix invalid mac addresses for node1 interfaces · 78aacb6f
      Sunil Goutham authored
      When booted with ACPI, random MAC addresses are assigned to node1
      interfaces due to a mismatch between the bgx_id in the BGX driver
      and the ACPI tables.
      
      This patch fixes the issue by setting the maximum number of BGX
      devices per node based on the platform/SoC instead of a macro,
      which sets the bgx_id appropriately.
      Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: thunderx: Fix LMAC mode debug prints for QSGMII mode · 18de7ba9
      Sunil Goutham authored
      When BGX/LMACs are in QSGMII mode, the mode info is not printed for
      some LMACs. This patch fixes that. With the changes already done to
      not do any sort of serdes 2 lane mapping config calculation in the
      kernel driver, we can get rid of this logic.
      Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: thunderx: Fix IOMMU translation faults · 83abb7d7
      Sunil Goutham authored
      ACPI support was added to the ARM IOMMU driver in the 4.10 kernel,
      which resulted in VNIC interfaces throwing translation faults when
      the kernel is booted with ACPI, since the driver was not using the
      DMA API. This patch fixes the issue by using the DMA API, which in
      turn creates translation tables when the IOMMU is enabled.
      
      Also, the VNIC doesn't have a separate receive buffer ring per
      receive queue, so there is no 1:1 descriptor index match between a
      CQE_RX and the index in the buffer ring from which a buffer was used
      for DMA. Unlike other NICs, it's not possible here to maintain DMA
      address to virtual address mappings within the driver. This leaves
      no choice but to use the IOMMU's IOVA address conversion API to get
      the buffer's virtual address, which can be given to the network
      stack for processing.
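      
      A hedged sketch of the two ideas (placeholder names, not the actual
      nicvf code): map receive buffers through the DMA API so the IOMMU
      ends up with valid translations, and convert the IOVA reported in a
      completion back to a kernel virtual address via the IOMMU API, since
      the driver keeps no per-buffer iova-to-virt table.
      
      #include <linux/dma-mapping.h>
      #include <linux/iommu.h>
      #include <linux/io.h>
      
      static dma_addr_t map_rcv_buffer(struct device *dev, void *buf,
      				 size_t len)
      {
      	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
      
      	if (dma_mapping_error(dev, dma))
      		return 0;
      	return dma;
      }
      
      static void *buf_iova_to_virt(struct device *dev, dma_addr_t iova)
      {
      	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
      
      	if (!domain)		/* no IOMMU: iova is the physical address */
      		return phys_to_virt(iova);
      	return phys_to_virt(iommu_iova_to_phys(domain, iova));
      }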
      Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rds: ib: add error handle · 3b12f73a
      Zhu Yanjun authored
      In the function rds_ib_setup_qp, error handling is missing. When an
      error occurs, a memory leak is possible. As such, error handling is
      added.
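      
      For reference, a minimal sketch of the kernel goto-unwind idiom this
      kind of fix applies (hypothetical resources, not the actual
      rds_ib_setup_qp code): each failure point jumps to a label that
      releases what was already allocated, so nothing is leaked on the
      error path.
      
      #include <linux/slab.h>
      
      struct example_res {
      	void *a;
      	void *b;
      };
      
      static int example_setup(struct example_res *res)
      {
      	res->a = kzalloc(64, GFP_KERNEL);
      	if (!res->a)
      		return -ENOMEM;
      
      	res->b = kzalloc(64, GFP_KERNEL);
      	if (!res->b)
      		goto free_a;
      
      	return 0;
      
      free_a:
      	kfree(res->a);
      	res->a = NULL;
      	return -ENOMEM;
      }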
      
      Cc: Joe Jin <joe.jin@oracle.com>
      Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
      Reviewed-by: Guanglei Li <guanglei.li@oracle.com>
      Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
      Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • liquidio: improve UDP TX performance · 67e303e0
      VSR Burru authored
      Improve UDP TX performance by:
      * reducing the ring size from 2K to 512
      * replacing the numerous streaming DMA allocations for info buffers and
        gather lists with one large consistent DMA allocation per ring
      
      BQL is not effective here.  We reduced the ring size because there
      is heavy overhead from dma_map_single every so often.  With iommu=on,
      dma_map_single in the PF Tx data path was taking a long time
      (~700 usec) once every ~250 packets.  Debugging the intel_iommu code
      showed that the PF driver was using too many static IO virtual
      address mapping entries (for gather list entries and info buffers):
      about 100K entries for two PFs, each using 8 rings.
      Also, finding an empty entry (in the rbtree of the device domain's
      IOVA mappings in the kernel) during the Tx path becomes a bottleneck
      every so often; the loop to find an empty entry goes through over
      40K iterations, which is too costly and was the major overhead.
      Overhead is low when this loop quits quickly.
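      
      A hedged sketch of the allocation change (placeholder names, not the
      actual liquidio code): one coherent DMA allocation per ring at setup
      time, sliced per descriptor, replaces a dma_map_single() call per
      packet for the info buffers and gather lists.
      
      #include <linux/dma-mapping.h>
      
      struct ring_dma_area {
      	void		*vaddr;
      	dma_addr_t	dma;
      	size_t		per_desc;
      	unsigned int	ring_size;
      };
      
      static int ring_dma_area_alloc(struct device *dev,
      			       struct ring_dma_area *r,
      			       size_t per_desc, unsigned int ring_size)
      {
      	r->per_desc  = per_desc;
      	r->ring_size = ring_size;
      	r->vaddr = dma_alloc_coherent(dev, per_desc * ring_size,
      				      &r->dma, GFP_KERNEL);
      	return r->vaddr ? 0 : -ENOMEM;
      }
      
      /* Descriptor i always uses the fixed slice starting at i * per_desc,
       * so the hot path never maps or unmaps these buffers per packet.
       */
      static dma_addr_t ring_dma_addr(struct ring_dma_area *r, unsigned int i)
      {
      	return r->dma + (dma_addr_t)i * r->per_desc;
      }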
      
      Netperf benchmark numbers before and after patch:
      
      PF UDP TX
      +--------+--------+------------+------------+---------+
      |        |        |  Before    |  After     |         |
      | Number |        |  Patch     |  Patch     |         |
      |  of    | Packet | Throughput | Throughput | Percent |
      | Flows  |  Size  |  (Gbps)    |  (Gbps)    | Change  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   0.52     |   0.93     |  +78.9  |
      |   1    |  1024  |   1.62     |   2.84     |  +75.3  |
      |        |  1518  |   2.44     |   4.21     |  +72.5  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   0.45     |   1.59     | +253.3  |
      |   4    |  1024  |   1.34     |   5.48     | +308.9  |
      |        |  1518  |   2.27     |   8.31     | +266.1  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   0.40     |   1.61     | +302.5  |
      |   8    |  1024  |   1.64     |   4.24     | +158.5  |
      |        |  1518  |   2.87     |   6.52     | +127.2  |
      +--------+--------+------------+------------+---------+
      
      VF UDP TX
      +--------+--------+------------+------------+---------+
      |        |        |  Before    |  After     |         |
      | Number |        |  Patch     |  Patch     |         |
      |  of    | Packet | Throughput | Throughput | Percent |
      | Flows  |  Size  |  (Gbps)    |  (Gbps)    | Change  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   1.28     |   1.49     |  +16.4  |
      |   1    |  1024  |   4.44     |   4.39     |   -1.1  |
      |        |  1518  |   6.08     |   6.51     |   +7.1  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   2.35     |   2.35     |    0.0  |
      |   4    |  1024  |   6.41     |   8.07     |  +25.9  |
      |        |  1518  |   9.56     |   9.54     |   -0.2  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   3.41     |   3.65     |   +7.0  |
      |   8    |  1024  |   9.35     |   9.34     |   -0.1  |
      |        |  1518  |   9.56     |   9.57     |   +0.1  |
      +--------+--------+------------+------------+---------+
      Signed-off-by: VSR Burru <veerasenareddy.burru@cavium.com>
      Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
      Signed-off-by: Derek Chickles <derek.chickles@cavium.com>
      Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>