- 08 Oct, 2020 5 commits
-
-
Bob Pearson authored
The changes referenced below replaced skb_clone() by taking additional references, passing the skb along and then freeing the skb. This deleted the packets before they could be processed and additionally passed bad data in each packet: since pkt is stored in skb->cb, changing pkt->qp changed it for all the packets. Replace skb_get() by skb_clone() in rxe_rcv_mcast_pkt() for cases where multiple QPs are receiving multicast packets on the same address. Delete kfree_skb() because the packets need to live until they have been processed by each QP; they are freed later. Fixes: 86af6176 ("IB/rxe: remove unnecessary skb_clone") Fixes: fe896ceb ("IB/rxe: replace refcount_inc with skb_get") Link: https://lore.kernel.org/r/20201008203651.256958-1-rpearson@hpe.com Signed-off-by: Bob Pearson <rpearson@hpe.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
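As a rough illustration of why the clone is needed (a simplified sketch, not the actual rxe code; the loop structure and helper names are assumptions), each receiving QP rewrites pkt->qp, which lives in skb->cb, so every QP needs its own copy of that control block:

    /* Simplified sketch of the multicast receive loop; names illustrative. */
    list_for_each_entry(mce, &mcg->qp_list, qp_list) {
            struct sk_buff *cskb;
            struct rxe_pkt_info *cpkt;

            /* skb_clone() gives this QP a private skb->cb, so writing
             * cpkt->qp cannot corrupt the packet seen by the other QPs,
             * which a shared skb_get() reference would.
             */
            cskb = skb_clone(skb, GFP_ATOMIC);
            if (!cskb)
                    continue;

            cpkt = (struct rxe_pkt_info *)cskb->cb;
            cpkt->qp = mce->qp;
            rxe_rcv_pkt(cpkt, cskb);
            /* no kfree_skb() here: each clone is freed only after its QP
             * has finished processing it
             */
    }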
-
Bob Pearson authored
Struct rxe_mem had pd, lkey and rkey values both in itself and in the struct ib_mr which is also included in rxe_mem. Delete these entries and replace references with the ones in ibmr. Add mr_pd, mr_lkey and mr_rkey macros which extract these values from mr. Link: https://lore.kernel.org/r/20201008212818.265303-1-rpearson@hpe.com Signed-off-by: Bob Pearson <rpearson@hpe.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
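A hedged sketch of the accessor-macro pattern described above (the exact definitions in the patch may differ; to_rpd() is the usual rxe PD conversion helper):

    #define mr_pd(mr)    to_rpd((mr)->ibmr.pd)
    #define mr_lkey(mr)  ((mr)->ibmr.lkey)
    #define mr_rkey(mr)  ((mr)->ibmr.rkey)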
-
Dennis Dalessandro authored
Intel has spun off the Omni-Path Architecture group, which is now a new company known as Cornelis Networks. Update the MAINTAINERS file to reflect this change and our new email addresses. Link: https://lore.kernel.org/r/20201008171803.189100.43448.stgit@awfm-01.aw.intel.com Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
An incorrect sizeof is being used: struct rvt_ibport ** is not correct, it should be struct rvt_ibport *. Note that since ** is the same size as *, this is not causing any issues. Improve this fix by using sizeof(*rdi->ports), as this allows us to not even reference the type of the pointer. Also remove line breaks as the entire statement can fit on one line. Link: https://lore.kernel.org/r/20201008095204.82683-1-colin.king@canonical.com Addresses-Coverity: ("Sizeof not portable (SIZEOF_MISMATCH)") Fixes: ff6acd69 ("IB/rdmavt: Add device structure allocation") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
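A minimal illustration of the idiom (not the driver code itself; nports and the allocation site are made up for the example):

    struct rvt_ibport **ports;

    /* Fragile: repeats the element type and is easy to get subtly wrong. */
    ports = kcalloc(nports, sizeof(struct rvt_ibport *), GFP_KERNEL);

    /* Robust: sizeof(*ports) always tracks the element type of ports,
     * even if its declaration changes later.
     */
    ports = kcalloc(nports, sizeof(*ports), GFP_KERNEL);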
-
Joe Perches authored
Parvi Kaustubhi's email bounces. Link: https://lore.kernel.org/r/f7726a1873f14972f137f64a4d6cd35e530c6c95.camel@perches.com Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Christian Benvenuti <benve@cisco.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 06 Oct, 2020 2 commits
-
-
Colin Ian King authored
An incorrect sizeof is being used: u64 * is not correct, it should be just u64 for a table of umem_pgs number of u64 items in the pbl_tbl. Use the idiom sizeof(*pbl_tbl) to get the object type without the need to explicitly use u64. Link: https://lore.kernel.org/r/20201006114700.537916-1-colin.king@canonical.com Addresses-Coverity: ("Sizeof not portable (SIZEOF_MISMATCH)") Fixes: 1ac5a404 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This driver is taking the SGL out of the umem and passing it through a struct bnxt_qplib_sg_info. Instead of passing the SGL, pass the umem and then use rdma_umem_for_each_dma_block() directly. Move the calls of ib_umem_num_dma_blocks() closer to their actual point of use; npages is only set for non-umem pbl flows. Link: https://lore.kernel.org/r/0-v1-b37437a73f35+49c-bnxt_re_dma_block_jgg@nvidia.com Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Tested-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
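A hedged sketch of the iteration pattern (assuming the generic rdma_umem_for_each_dma_block()/rdma_block_iter_dma_address() helpers; pbl_tbl and page_size are illustrative, this is not the bnxt_re code itself):

    struct ib_block_iter biter;
    unsigned long page_size = PAGE_SIZE;    /* assumed block size */
    int i = 0;

    /* pbl_tbl is assumed to hold ib_umem_num_dma_blocks(umem, page_size)
     * entries.
     */
    rdma_umem_for_each_dma_block(umem, &biter, page_size)
            pbl_tbl[i++] = rdma_block_iter_dma_address(&biter);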
-
- 05 Oct, 2020 1 commit
-
-
Kamal Heib authored
Report the "ipoib pkey", "mode" and "umcast" netlink attributes for every IPoiB interface type, not just children created with 'ip link add'. After setting the rtnl_link_ops for the parent interface, implement the dellink() callback to block users from trying to remove it. Fixes: 862096a8 ("IB/ipoib: Add more rtnl_link_ops callbacks") Link: https://lore.kernel.org/r/20201004132948.26669-1-kamalheib1@gmail.comSigned-off-by: Kamal Heib <kamalheib1@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 02 Oct, 2020 5 commits
-
-
Avihai Horon authored
Expose the query GID table and entry API to user space by adding two new methods and method handlers to the device object. This API provides a faster way to query a GID table using a single call and will be used in libibverbs to improve the current approach, which requires multiple calls to open, close and read multiple sysfs files for a single GID table entry. Link: https://lore.kernel.org/r/20200923165015.2491894-5-leon@kernel.org Signed-off-by: Avihai Horon <avihaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Avihai Horon authored
Introduce rdma_query_gid_table(), which enables querying all the GID tables of a given device and copying the attributes of all valid GID entries to a provided buffer. This API provides a faster way to query a GID table using a single call and will be used in libibverbs to improve the current approach, which requires multiple calls to open, close and read multiple sysfs files for a single GID table entry. Link: https://lore.kernel.org/r/20200923165015.2491894-4-leon@kernel.org Signed-off-by: Avihai Horon <avihaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
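For context, a hedged sketch of how a kernel consumer might call the new helper (the signature and entry struct are assumed from the description above, not quoted from the header):

    struct ib_uverbs_gid_entry entries[16];
    ssize_t num_entries;

    /* Copies the attributes of all valid GID entries of the device into
     * the supplied buffer in a single call (assumed signature).
     */
    num_entries = rdma_query_gid_table(device, entries, ARRAY_SIZE(entries));
    if (num_entries < 0)
            return num_entries;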
-
Avihai Horon authored
Separate IB_GID_TYPE_IB and IB_GID_TYPE_ROCE into two different values, so that enum ib_gid_type will match the GID types of the new query GID table API which will be introduced in the following patches. This change in enum ib_gid_type also requires separating the RDMA_NETWORK_IB and RDMA_NETWORK_ROCE_V1 values in enum rdma_network_type. Link: https://lore.kernel.org/r/20200923165015.2491894-3-leon@kernel.org Signed-off-by: Avihai Horon <avihaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
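Schematically, the values end up distinct after the split (illustrative; the actual header ties them to the new uverbs GID type constants):

    /* Before the change, IB_GID_TYPE_IB and IB_GID_TYPE_ROCE shared the
     * value 0 (IB and RoCE v1 were indistinguishable by value), mirroring
     * the fused RDMA_NETWORK_IB / RDMA_NETWORK_ROCE_V1 entries.
     */
    enum ib_gid_type {
            IB_GID_TYPE_IB,                 /* 0 */
            IB_GID_TYPE_ROCE,               /* 1, RoCE v1 */
            IB_GID_TYPE_ROCE_UDP_ENCAP,     /* 2, RoCE v2 */
            IB_GID_TYPE_SIZE
    };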
-
Avihai Horon authored
Change the error code returned from rdma_get_gid_attr() when the GID entry is invalid but the GID index is within the GID table size range to -ENODATA instead of -EINVAL. This change provides more accurate error reporting to be used by the new GID query API in user space. Nevertheless, -EINVAL is still returned from sysfs in the aforementioned case to maintain compatibility with user space that expects -EINVAL. Link: https://lore.kernel.org/r/20200923165015.2491894-2-leon@kernel.org Signed-off-by: Avihai Horon <avihaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The various array_size() functions use the SIZE_MAX define, but the missing limits.h include causes a compile failure in code that needs overflow.h:

    In file included from drivers/infiniband/core/uverbs_std_types_device.c:6:
    ./include/linux/overflow.h: In function 'array_size':
    ./include/linux/overflow.h:258:10: error: 'SIZE_MAX' undeclared (first use in this function)
      258 |  return SIZE_MAX;
          |         ^~~~~~~~

Fixes: 610b15c5 ("overflow.h: Add allocation size calculation helpers") Link: https://lore.kernel.org/r/20200913102928.134985-1-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
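The fix is simply to pull in the header that defines SIZE_MAX; roughly:

    /* include/linux/overflow.h */
    #include <linux/limits.h>   /* provides SIZE_MAX used by array_size() */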
-
- 01 Oct, 2020 7 commits
-
-
Alok Prasad authored
Fix the following sparse warnings reported by the kbuild test robot:

    CHECK   drivers/infiniband/hw/qedr/verbs.c
    drivers/infiniband/hw/qedr/verbs.c:3872:59: warning: incorrect type in assignment (different base types)
    drivers/infiniband/hw/qedr/verbs.c:3872:59:    expected restricted __le32 [usertype] sge_prod
    drivers/infiniband/hw/qedr/verbs.c:3872:59:    got unsigned int [usertype] sge_prod
    drivers/infiniband/hw/qedr/verbs.c:3875:59: warning: incorrect type in assignment (different base types)
    drivers/infiniband/hw/qedr/verbs.c:3875:59:    expected restricted __le32 [usertype] wqe_prod
    drivers/infiniband/hw/qedr/verbs.c:3875:59:    got unsigned int [usertype] wqe_prod

Link: https://lore.kernel.org/r/20201001100959.19940-1-palok@marvell.com Reported-by: kbuild test robot <lkp@intel.com> Fixes: acca72e2 ("RDMA/qedr: SRQ's bug fixes") Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: Michal Kalderon <mkalderon@marvell.com> Signed-off-by: Alok Prasad <palok@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Rikard Falkeborn authored
The only usage of these is to pass their address to sysfs_create_group() and sysfs_remove_group(), both of which take const pointers. Make them const to allow the compiler to put them in read-only memory. Link: https://lore.kernel.org/r/20200930224004.24279-3-rikard.falkeborn@gmail.com Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
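For reference, a minimal sketch of the resulting pattern (attribute names hypothetical):

    static struct attribute *example_attrs[] = {
            &dev_attr_example.attr,         /* hypothetical attribute */
            NULL,
    };

    /* const works because sysfs_create_group() and sysfs_remove_group()
     * both take a const struct attribute_group *, so the group can live
     * in read-only memory.
     */
    static const struct attribute_group example_attr_group = {
            .attrs = example_attrs,
    };

    static int example_register(struct kobject *kobj)
    {
            return sysfs_create_group(kobj, &example_attr_group);
    }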
-
Rikard Falkeborn authored
The only usage of the pma_table field in the ib_port struct is to pass its address to sysfs_create_group() and sysfs_remove_group(). Make it const to allow constifying a couple of static struct attribute_group, which in turn allows the compiler to put them in read-only memory. Link: https://lore.kernel.org/r/20200930224004.24279-2-rikard.falkeborn@gmail.com Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yishai Hadas authored
Sync the device with CPU pages upon ODP MR registration. mlx5 already has to zero the HW's version of the PAS list, so it may as well deliver a PAS list that matches the current CPU page table configuration. Link: https://lore.kernel.org/r/20200930163828.1336747-5-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yishai Hadas authored
Extend the advise MR interface to support a non-faulting mode; this can improve performance by increasing the number of populated page tables in the device. Link: https://lore.kernel.org/r/20200930163828.1336747-4-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yishai Hadas authored
Enable ODP sync without faulting; this improves performance by reducing the number of page faults in the system. The gain from this option is that the device page table can be aligned with the pages presented in the CPU page table without causing page faults. As a result, the data-path overhead of the hardware triggering a fault, which ends up calling the driver to bring in the pages, is avoided. Link: https://lore.kernel.org/r/20200930163828.1336747-3-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yishai Hadas authored
Move to hmm_range_fault() instead of get_user_pages_remote() to improve performance in a few aspects:
- Dropping the need to allocate and free memory to hold its output
- No need any more to use put_page() to unpin the pages
- The logic to detect contiguous pages is based on the returned order; no need to iterate per page and evaluate

In addition, moving to hmm_range_fault() enables reducing page faults in the system with its snapshot mode; this will be introduced in the next patches of this series. As part of this, clean up some flows and use the required data structures to work with hmm_range_fault(). Link: https://lore.kernel.org/r/20200930163828.1336747-2-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
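A hedged sketch of the canonical hmm_range_fault() usage this series moves to (simplified from the generic HMM pattern; the real mlx5 flow differs, and the non-faulting snapshot mode from the later patches corresponds to leaving HMM_PFN_REQ_FAULT out of default_flags):

    unsigned long pfns[NR_PAGES];           /* NR_PAGES is illustrative */
    struct hmm_range range = {
            .notifier      = &umem_odp->notifier,
            .start         = start,
            .end           = start + length,
            .hmm_pfns      = pfns,
            .default_flags = fault ? HMM_PFN_REQ_FAULT : 0,
    };
    int ret;

    range.notifier_seq = mmu_interval_read_begin(range.notifier);
    mmap_read_lock(umem_odp->umem.owning_mm);
    ret = hmm_range_fault(&range);  /* fills pfns[]; no put_page() needed */
    mmap_read_unlock(umem_odp->umem.owning_mm);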
-
- 30 Sep, 2020 3 commits
-
-
Jason Gunthorpe authored
This three thread race can result in the work being run once the callback becomes NULL:

               CPU1                   CPU2                   CPU3
     netevent_callback()
                              process_one_req()      rdma_addr_cancel()
                               [..]
      spin_lock_bh()
       set_timeout()
      spin_unlock_bh()

                                                      spin_lock_bh()
                                                      list_del_init(&req->list);
                                                      spin_unlock_bh()

                              req->callback = NULL
                              spin_lock_bh()
                               if (!list_empty(&req->list))
                                    // Skipped!
                                    // cancel_delayed_work(&req->work);
                              spin_unlock_bh()

                              process_one_req() // again
                               req->callback()  // BOOM
                                                      cancel_delayed_work_sync()

The solution is to always cancel the work once it is completed so any in-between set_timeout() does not result in it running again. Cc: stable@vger.kernel.org Fixes: 44e75052 ("RDMA/rdma_cm: Make rdma_addr_cancel into a fence") Link: https://lore.kernel.org/r/20200930072007.1009692-1-leon@kernel.org Reported-by: Dan Aloni <dan@kernelim.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
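A hedged sketch of the resulting pattern (structure and names approximate the addr.c flow, not the exact patch; "lock" stands for the file-scope request lock):

    static void process_one_req(struct work_struct *_work)
    {
            struct addr_req *req = container_of(to_delayed_work(_work),
                                                struct addr_req, work);

            /* ... resolve the address and invoke req->callback() ... */

            spin_lock_bh(&lock);
            /* Always cancel: a racing set_timeout() may have re-armed the
             * delayed work after this request was already handled, which
             * would otherwise run it again with req->callback == NULL.
             */
            cancel_delayed_work(&req->work);
            list_del_init(&req->list);
            spin_unlock_bh(&lock);
    }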
-
Jason Gunthorpe authored
Nothing reads this any more, and the reason for its existence has passed due to the deferred fput() scheme. Fixes: 8ea1f989 ("drivers/IB,usnic: reduce scope of mmap_sem") Link: https://lore.kernel.org/r/0-v1-df64ff042436+42-uctx_closing_jgg@nvidia.com Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gioh Kim authored
The list field is not used anywhere. Link: https://lore.kernel.org/r/20200930131407.6438-1-gi-oh.kim@clous.ionos.com Signed-off-by: Gioh Kim <gi-oh.kim@cloud.ionos.com> Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 29 Sep, 2020 11 commits
-
-
Lang Cheng authored
Some code was removed but the variables were still there, and some parameters have been changed to be queried from firmware, so their definitions are no longer needed. Fixes: 2a3d923f ("RDMA/hns: Replace magic numbers with #defines") Fixes: 82e620d9 ("RDMA/hns: Modify the data structure of hns_roce_av") Fixes: 82547469 ("IB/hns: Implement the add_gid/del_gid and optimize the GIDs management") Fixes: 21b97f53 ("RDMA/hns: Fixup qp release bug") Link: https://lore.kernel.org/r/1601371934-40003-1-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
There is no real need to have an intermediate pointer for the same struct; remove it and use the struct directly. Link: https://lore.kernel.org/r/20200926102450.2966017-11-leon@kernel.org Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
As preparation for the removal of QP allocation logic, we need to ensure that ib_core allocates the right amount of memory before a call to the driver's create_qp(). This requires the driver to use the same struct for all types of QPs. Link: https://lore.kernel.org/r/20200926102450.2966017-10-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
A GSI QP can't be created from user space, hence the udata check is always false (udata == NULL). Remove that check and simplify the flow. Link: https://lore.kernel.org/r/20200926102450.2966017-9-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The ioctl flow checks that the user provides only a supported list of QP types, while the write flow didn't do so and relied on the driver to check it. Align those flows to fail as early as possible. Link: https://lore.kernel.org/r/20200926102450.2966017-8-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Since all mlx4 QPs have the same storage type, move the QP allocation to one place. This change is preparation for removing such allocation from the driver. Link: https://lore.kernel.org/r/20200926102450.2966017-7-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Refactor the storage struct of the mlx4 GSI QP to be embedded in the mlx4_ib QP. This allows removing the internal memory allocation of the QP struct that is hidden inside the mlx4_ib_create_qp() flow. Link: https://lore.kernel.org/r/20200926102450.2966017-6-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The GSI QP doesn't need a signal QP type because it is statically initialized to zero, which is IB_SIGNAL_ALL_WR, and wr->send_flags isn't set either. This means that the GSI QP signal QP type can be removed. Link: https://lore.kernel.org/r/20200926102450.2966017-5-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
There is no reason to have a separate create flow for the GSI QP, while the general create_qp routine has all the needed checks and the ability to allocate and free the proper struct mlx5_ib_qp. Link: https://lore.kernel.org/r/20200926102450.2966017-4-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Remove the duplication of mlx5_ib_qp and mlx5_ib_gsi_qp fields. This change returns the memory footprint of the mlx5_ib QP to what it was before the GSI QP was embedded. Link: https://lore.kernel.org/r/20200926102450.2966017-3-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The GSI QPs have a different create flow from the regular QPs, but it is not really needed. Update the code to use mlx5_ib_qp as the storage class for all calls outside of the GSI code. Link: https://lore.kernel.org/r/20200926102450.2966017-2-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 25 Sep, 2020 1 commit
-
-
Liu Shixin authored
sizeof(), when applied to a pointer-typed expression, should give the size of the pointed-to data, even if that data is itself a pointer. Fixes: e1f24a79 ("IB/mlx5: Support congestion related counters") Link: https://lore.kernel.org/r/20200917081354.2083293-1-liushixin2@huawei.com Signed-off-by: Liu Shixin <liushixin2@huawei.com> Acked-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 24 Sep, 2020 5 commits
-
-
Weihang Li authored
HIP08 supports RC inline up to a size of 32 bytes, and all data should be put into the SQWQE. For HIP09 this capability is extended to 1024 bytes; if the data is longer than 32 bytes, it is filled into the extended SGE space. Link: https://lore.kernel.org/r/1599744069-9968-1-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The sq_sig_type field should be filled in when querying a QP, otherwise users may get a wrong value. Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC") Link: https://lore.kernel.org/r/1600509802-44382-9-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The hardware adds the AckReq flag to the BTH header according to the value of ack_req_freq, to request an ACK from the responder for packets with this flag. ack_req_freq should be greater than or equal to lp_pktn_ini instead of being a fixed value. Fixes: 7b9bd73e ("RDMA/hns: Fix wrong assignment of lp_pktn_ini in QPC") Link: https://lore.kernel.org/r/1600509802-44382-8-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The rnr_retry value returned to the user is not correct; it should be taken from another field in the QPC. Fixes: bfe86035 ("RDMA/hns: Fix cast from or to restricted __le32 for driver") Link: https://lore.kernel.org/r/1600509802-44382-7-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jiaran Zhang authored
calc_pg_sz() may get a calculation overflow if PAGE_SIZE is 64 KB and hop_num is 2, because all variables involved in the calculation are defined as int. Change the type of bt_chunk_size, buf_chunk_size and obj_per_chunk_default to u64. Fixes: ba6bb7e9 ("RDMA/hns: Add interfaces to get pf capabilities from firmware") Link: https://lore.kernel.org/r/1600509802-44382-6-git-send-email-liweihang@huawei.com Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
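A small worked example of the overflow (numbers illustrative): with a 64 KB PAGE_SIZE the intermediate products can reach 2^32, which does not fit in a 32-bit int:

    int buf_chunk_size = 65536;             /* 64 KB chunk */
    int obj_per_chunk_default = 65536;      /* illustrative count */

    /* 65536 * 65536 = 2^32: overflows a 32-bit int. */
    int bad = buf_chunk_size * obj_per_chunk_default;

    /* With a u64 operand the product (4294967296) is representable. */
    u64 ok = (u64)buf_chunk_size * obj_per_chunk_default;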
-