    21 Dec, 2017 (4 commits)
    • IB/ipoib: Fix lockdep issue found on ipoib_ib_dev_heavy_flush · 1f80bd6a
      Alex Vesker authored
      The locking order of vlan_rwsem (LOCK A) followed by rtnl (LOCK B)
      contradicts other flows such as ipoib_open and can lead to a deadlock.
      To prevent this, the heavy flush is now called with the RTNL lock held
      and only then tries to acquire vlan_rwsem (see the sketch at the end of
      this entry). The deadlock is possible only when child interfaces exist.
      
      [  140.941758] ======================================================
      [  140.946276] WARNING: possible circular locking dependency detected
      [  140.950950] 4.15.0-rc1+ #9 Tainted: G           O
      [  140.954797] ------------------------------------------------------
      [  140.959424] kworker/u32:1/146 is trying to acquire lock:
      [  140.963450]  (rtnl_mutex){+.+.}, at: [<ffffffffc083516a>] __ipoib_ib_dev_flush+0x2da/0x4e0 [ib_ipoib]
      [  140.970006]
      but task is already holding lock:
      [  140.975141]  (&priv->vlan_rwsem){++++}, at: [<ffffffffc0834ee1>] __ipoib_ib_dev_flush+0x51/0x4e0 [ib_ipoib]
      [  140.982105]
      which lock already depends on the new lock.
      [  140.990023]
      the existing dependency chain (in reverse order) is:
      [  140.998650]
      -> #1 (&priv->vlan_rwsem){++++}:
      [  141.005276]        down_read+0x4d/0xb0
      [  141.009560]        ipoib_open+0xad/0x120 [ib_ipoib]
      [  141.014400]        __dev_open+0xcb/0x140
      [  141.017919]        __dev_change_flags+0x1a4/0x1e0
      [  141.022133]        dev_change_flags+0x23/0x60
      [  141.025695]        devinet_ioctl+0x704/0x7d0
      [  141.029156]        sock_do_ioctl+0x20/0x50
      [  141.032526]        sock_ioctl+0x221/0x300
      [  141.036079]        do_vfs_ioctl+0xa6/0x6d0
      [  141.039656]        SyS_ioctl+0x74/0x80
      [  141.042811]        entry_SYSCALL_64_fastpath+0x1f/0x96
      [  141.046891]
      -> #0 (rtnl_mutex){+.+.}:
      [  141.051701]        lock_acquire+0xd4/0x220
      [  141.055212]        __mutex_lock+0x88/0x970
      [  141.058631]        __ipoib_ib_dev_flush+0x2da/0x4e0 [ib_ipoib]
      [  141.063160]        __ipoib_ib_dev_flush+0x71/0x4e0 [ib_ipoib]
      [  141.067648]        process_one_work+0x1f5/0x610
      [  141.071429]        worker_thread+0x4a/0x3f0
      [  141.074890]        kthread+0x141/0x180
      [  141.078085]        ret_from_fork+0x24/0x30
      [  141.081559]
      
      other info that might help us debug this:
      [  141.088967]  Possible unsafe locking scenario:
      [  141.094280]        CPU0                    CPU1
      [  141.097953]        ----                    ----
      [  141.101640]   lock(&priv->vlan_rwsem);
      [  141.104771]                                lock(rtnl_mutex);
      [  141.109207]                                lock(&priv->vlan_rwsem);
      [  141.114032]   lock(rtnl_mutex);
      [  141.116800]
       *** DEADLOCK ***
      
      Fixes: b4b678b0 ("IB/ipoib: Grab rtnl lock on heavy flush when calling ndo_open/stop")
      Signed-off-by: Alex Vesker <valex@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
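
      A minimal sketch of the corrected lock ordering, assuming a driver-private
      structure that holds vlan_rwsem; the struct and function below are
      illustrative, not the actual ipoib code:

      /* Illustrative only: take rtnl before vlan_rwsem, matching ipoib_open(),
       * which already runs under RTNL and then takes vlan_rwsem. */
      #include <linux/rtnetlink.h>    /* rtnl_lock(), rtnl_unlock() */
      #include <linux/rwsem.h>        /* down_read(), up_read() */

      struct example_priv {           /* stand-in for the driver's private data */
              struct rw_semaphore vlan_rwsem;
      };

      static void example_heavy_flush(struct example_priv *priv)
      {
              rtnl_lock();                    /* LOCK B first */
              down_read(&priv->vlan_rwsem);   /* LOCK A second, same order as ipoib_open() */

              /* flush the parent device and walk the child interfaces here */

              up_read(&priv->vlan_rwsem);
              rtnl_unlock();
      }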
    • IB/mlx5: Fix congestion counters in LAG mode · 71a0ff65
      Majd Dibbiny authored
      Congestion counters are maintained and queried per physical function.
      In LAG mode, CNP packets can be sent or received on either physical
      function, so the counters must be aggregated across both (see the
      sketch at the end of this entry).
      
      Fixes: e1f24a79 ("IB/mlx5: Support congestion related counters")
      Signed-off-by: Majd Dibbiny <majd@mellanox.com>
      Reviewed-by: Aviv Heller <avivh@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
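
      A hedged sketch of the aggregation idea; the counter count and the
      per-PF query helper are made up for illustration and are not the mlx5
      API:

      #include <linux/types.h>

      #define EXAMPLE_NUM_CONG_COUNTERS 4     /* assumed number of counters */

      struct example_port;                    /* stand-in for a per-PF handle */

      /* hypothetical per-physical-function query helper */
      int example_query_cong_counters(struct example_port *pf, u64 *vals, int n);

      static int example_query_lag_cong_counters(struct example_port *pf0,
                                                 struct example_port *pf1,
                                                 u64 *out)
      {
              u64 tmp[EXAMPLE_NUM_CONG_COUNTERS];
              int i, err;

              err = example_query_cong_counters(pf0, out, EXAMPLE_NUM_CONG_COUNTERS);
              if (err)
                      return err;

              err = example_query_cong_counters(pf1, tmp, EXAMPLE_NUM_CONG_COUNTERS);
              if (err)
                      return err;

              /* CNP traffic may hit either PF in LAG mode, so sum both */
              for (i = 0; i < EXAMPLE_NUM_CONG_COUNTERS; i++)
                      out[i] += tmp[i];

              return 0;
      }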
    • RDMA/vmw_pvrdma: Avoid use after free due to QP/CQ/SRQ destroy · e3524b26
      Bryan Tan authored
      The use of wait queues in vmw_pvrdma to handle concurrent access to a
      resource leaves a race condition that can cause a use-after-free bug.

      Fix this by adopting the pattern used by other drivers: complete()
      guarded by refcount_dec_and_test(), so complete() is called exactly once
      (see the sketch at the end of this entry).
      
      Fixes: 29c8d9eb ("IB: Add vmw_pvrdma driver")
      Signed-off-by: Bryan Tan <bryantan@vmware.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
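
      A hedged sketch of the pattern, assuming the resource embeds a
      refcount_t and a struct completion; the names below are illustrative,
      not the pvrdma code:

      #include <linux/completion.h>
      #include <linux/refcount.h>

      struct example_res {                    /* stand-in for a QP/CQ/SRQ object */
              refcount_t refcnt;
              struct completion free_done;    /* signalled once the last ref drops */
      };

      /* Every user drops its reference here; only the final drop sees
       * refcount_dec_and_test() return true, so complete() runs exactly once. */
      static void example_res_put(struct example_res *res)
      {
              if (refcount_dec_and_test(&res->refcnt))
                      complete(&res->free_done);
      }

      /* The destroy path drops its own reference, then waits until every
       * concurrent user is gone before the object may be freed. */
      static void example_res_destroy(struct example_res *res)
      {
              example_res_put(res);
              wait_for_completion(&res->free_done);
              /* it is now safe to free the resource */
      }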
    • RDMA/vmw_pvrdma: Use refcount_dec_and_test to avoid warning · 30a366a9
      Bryan Tan authored
      refcount_dec() generates a warning when the operation causes the
      refcount to hit zero. Avoid this by using refcount_dec_and_test(), which
      is the intended interface for dropping the final reference (see the
      sketch at the end of this entry).
      
      Fixes: 8b10ba78 ("RDMA/vmw_pvrdma: Add shared receive queue support")
      Reviewed-by: Adit Ranadive <aditr@vmware.com>
      Reviewed-by: Aditya Sarwade <asarwade@vmware.com>
      Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
      Signed-off-by: Bryan Tan <bryantan@vmware.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
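
      A short hedged illustration of the difference; the struct and function
      below are made up for the example:

      #include <linux/refcount.h>
      #include <linux/types.h>

      struct example_srq {
              refcount_t refcnt;      /* stand-in for the SRQ's reference count */
      };

      static bool example_srq_put(struct example_srq *srq)
      {
              /* refcount_dec() WARNs if this drop takes the count to zero;
               * refcount_dec_and_test() handles the final drop without warning
               * and returns true so the caller knows to release the object. */
              return refcount_dec_and_test(&srq->refcnt);
      }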