- 26 Jun, 2018 2 commits
-
-
Leon Romanovsky authored
dma_map_sg_attrs() returns 0 on error and can't return a negative number (ensured by BUG_ON), so don't check. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
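For reference, a minimal sketch of the calling pattern the fix relies on (dev/sg/npages are illustrative names, not the patched code):

    #include <linux/dma-mapping.h>

    /* dma_map_sg_attrs() returns the number of mapped entries, or 0 on
     * failure -- never a negative value. */
    int nents = dma_map_sg_attrs(dev, sg, npages, DMA_BIDIRECTIONAL, 0);

    if (!nents)                     /* 0 is the only failure indication */
            return -ENOMEM;

    /* an "if (nents < 0)" branch here would be dead code */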
-
Leon Romanovsky authored
The number of specs is provided by the user and in a valid case can be equal to zero. Such an argument causes kcalloc() to be called with a zero-length request, for which ZERO_SIZE_PTR is returned. This pointer is different from NULL and makes various if (..) checks succeed. Fixes: b6ba4a9a ("IB/uverbs: Add support for flow counters") Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
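A hedged sketch of the trap and the added guard (ncounters/counters are illustrative names):

    #include <linux/slab.h>

    /* kcalloc(0, ...) returns ZERO_SIZE_PTR, which is not NULL, so a plain
     * NULL test would treat the zero-length "allocation" as usable. */
    if (!ncounters)
            return -EINVAL;        /* reject the user-supplied zero up front */

    counters = kcalloc(ncounters, sizeof(*counters), GFP_KERNEL);
    if (!counters)
            return -ENOMEM;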
-
- 25 Jun, 2018 15 commits
-
-
Leon Romanovsky authored
Following the removal of ib_create_flow(), adjust the code to get rid of ib_destroy_flow() too. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Leon Romanovsky authored
There are no kernel users of this interface, so let's drop it. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
In the accepted series "Refactor ib_uverbs_write path", we presented the roadmap to get rid of the uverbs_cmd_mask and uverbs_ex_cmd_mask fields in favor of a simple check of the function pointer. So let's add a NULL check of the create_flow callback, despite the fact that uverbs_ex_cmd_mask still exists. Link: https://www.spinics.net/lists/linux-rdma/msg60753.html Suggested-by: Michael J Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
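Roughly, the added guard looks as follows (a sketch of the idea, not the exact hunk):

    /* in ib_uverbs_ex_create_flow(): rely on the function pointer itself
     * rather than on uverbs_ex_cmd_mask */
    if (!qp->device->create_flow)
            return -EOPNOTSUPP;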
-
Bart Van Assche authored
Reflect the acquisition of SanDisk by Western Digital in my e-mail address. Remove the reference to David Dillow's git tree since SRP patches are queued by Doug and Jason. Remove the reference to the OpenFabrics website since the srp_daemon source code has been moved from that website into the rdma-core project. Add an entry for the SRP target driver. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Cc: Bart Van Assche <bart.vanassche@sandisk.com> Cc: David Dillow <dillow@google.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
usnic has a modified version of the core code's ib_umem_get() and related functions, and the copy misses many of the bug fixes made over the years:

Commit bc3e53f6 ("mm: distinguish between mlocked and pinned pages")
Commit 87773dd5 ("IB: ib_umem_release() should decrement mm->pinned_vm from ib_umem_get")
Commit 8494057a ("IB/uverbs: Prevent integer overflow in ib_umem_get address arithmetic")
Commit 8abaae62 ("IB/core: disallow registering 0-sized memory region")
Commit 66578b0b ("IB/core: don't disallow registering region starting at 0x0")
Commit 53376fed ("RDMA/core: not to set page dirty bit if it's already set.")
Commit 8e907ed4 ("IB/umem: Use the correct mm during ib_umem_release")

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yishai Hadas authored
This patch follows the logic from ib_core but considers the internal device state when executing the involved commands. Specifically, when the device is in an internal error state, modifying a QP to the error state can be assumed to succeed, since every in-progress WR is going to be flushed in error anyway, which is exactly what that modify command expects. In addition, as a drain should never fail, the driver makes sure that post_send/recv succeed even if the device is already in an internal error state; once the driver supplies the simulated/SW CQEs, the CQE for the drain WR is handled as well. In an internal error state, the CQE for the drain WR may be completed either as part of the main task that handles the error state or by the task that issued the drain WR. Since this depends on scheduling, the code takes the relevant locks and actions to make sure that the completion handler for that WR is always called after the post_send/recv were issued, and never in parallel with the other task that handles the error flow. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
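For context, a minimal sketch of the drain pattern from ib_core that this commit mirrors, assuming a completion-based drain WR (names are illustrative):

    /* a drain posts one signaled WR and waits for its CQE, which can only
     * arrive after every earlier WR has been flushed -- so under an internal
     * error state the driver must still accept the post and later produce a
     * SW CQE for it */
    struct drain_cqe {
            struct ib_cqe cqe;
            struct completion done;
    };

    static void drain_complete(struct ib_cq *cq, struct ib_wc *wc)
    {
            struct drain_cqe *d = container_of(wc->wr_cqe, struct drain_cqe, cqe);

            complete(&d->done);
    }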
-
Yishai Hadas authored
This patch follows the logic from ib_core but considers the internal device state when executing the involved commands. Specifically, when the device is in an internal error state, modifying a QP to the error state can be assumed to succeed, since every in-progress WR is going to be flushed in error anyway, which is exactly what that modify command expects. In addition, as a drain should never fail, the driver makes sure that post_send/recv succeed even if the device is already in an internal error state; once the driver supplies the simulated/SW CQEs, the CQE for the drain WR is handled as well. In an internal error state, the CQE for the drain WR may be completed either as part of the main task that handles the error state or by the task that issued the drain WR. Since this depends on scheduling, the code takes the relevant locks and actions to make sure that the completion handler for that WR is always called after the post_send/recv were issued, and never in parallel with the other task that handles the error flow. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Now that all users have been converted to use the version of these APIs that returns a gid_attr pointer we can delete the old entry points. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Parav Pandit authored
Prior patches now ensure that the AV has a sgid_attr, if one would have been required. Instead of querying for one, take it directly from the AH. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Parav Pandit authored
While processing a path record entry in CM messages, the associated GID attribute is now also supplied. Currently for RoCE, a netdevice's net namespace pointer and ifindex are stored in the path record entry. Both of these netdev fields can change at any time while CM messages are being processed, and storing the net namespace without holding a reference will lead to a use-after-free crash, so they are removed. Netdevice information for RoCE is instead provided via the referenced GID attribute in ib_cm requests. The old design leads to a situation where the kernel can crash once the net pointer becomes invalid; today it is always initialized to init_net, which cannot become invalid, but in order to support processing packets in the arbitrary namespace of the received packet, such conditions must be avoided. This patch therefore removes the dependency on the net pointer and ifindex and instead relies on the SGID attribute, which contains a pointer to the netdev. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Parav Pandit authored
Make the sgid_attr available along with the path information to the event consumer; this allows the consumer to keep using the same GID table entry the event relates to. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Parav Pandit authored
Hold a reference to the sgid_attr which is used in a cm_id until the cm_id is destroyed. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
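A minimal sketch of the lifetime rule, assuming the hold/put helpers introduced earlier in this series (the av field name is illustrative):

    /* when the GID entry is stored in the cm_id's address vector */
    rdma_hold_gid_attr(sgid_attr);
    av->sgid_attr = sgid_attr;

    /* when the cm_id is destroyed */
    if (av->sgid_attr)
            rdma_put_gid_attr(av->sgid_attr);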
-
Parav Pandit authored
Use the sgid and other information from the path record to figure out the sgid_attrs. Store the selected table entry in the sgid_attr for everything else to use. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Parav Pandit authored
This is really just a CM support function; normally a multicast address does not have a specific SGID, but the RDMA CM usage model does restrict things to the netdevice the CM id is bound to, at least for the RoCE case. Store the selected table entry in the sgid_attr for everything else to use. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Parav Pandit authored
The work completion is inspected to determine which dgid table entry was used to receive the packet; a matching sgid_attr is produced and stored in the ah_attr. All callers of this function are now required to release the ah_attr on success. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
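A hedged usage sketch of the new contract for callers (assuming the helpers as they exist at this point in the series):

    struct rdma_ah_attr ah_attr;
    int ret;

    ret = ib_init_ah_attr_from_wc(device, port_num, wc, grh, &ah_attr);
    if (ret)
            return ret;

    /* ... create an AH / send the response using ah_attr ... */

    /* the sgid_attr reference stored in ah_attr now belongs to the caller */
    rdma_destroy_ah_attr(&ah_attr);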
-
- 22 Jun, 2018 12 commits
-
-
Michael J. Ruhl authored
The INTx IRQ support does not work for all HFI1 IRQ handlers (specifically the receive data IRQs). Remove all supporting code for the INTx IRQ. If the requested MSI-X vector allocation is unsuccessful, do not allow the driver to continue. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Kamenee Arumugam <kamenee.arumugam@intel.com> Reviewed-by: Sadanand Warrier <sadanand.warrier@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
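The resulting setup policy, as a sketch (pci_alloc_irq_vectors() shown for illustration; the driver's own request helper may differ):

    /* MSI-X or nothing: with the INTx code removed there is no fallback */
    nvec = pci_alloc_irq_vectors(dd->pcidev, total, total, PCI_IRQ_MSIX);
    if (nvec < 0)
            return nvec;           /* fail the probe instead of limping along */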
-
Mike Marciniszyn authored
Many fields in ctxtdata are incorrectly sized and the organization of the fields within the structure is a jumble. Fix by: correcting oversize fields; putting fields common to all contexts at the top, with the hottest fields first; and moving PSM fields to the bottom of the structure. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
Remove the sizeable cache of the chip sizing CSRs and replace with CSR reads as needed. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
Fields in this structure are sized excessively based on hardware limitations and input values. Fix by reducing fields as appropriate and repositioning to close holes in the structure. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
It is only ever written. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
The usage of this ctxt data field is not hot path and the value can be computed on demand to cut down the ctxtdata bloat. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Parav Pandit authored
If we already hold the table->lock when doing the kref_put, it means we are in a context where it is safe to do the deletion synchronously, with no need for the work queue. This helps eliminate issues when a GID change is requested as part of a MAC address change or a bonding event change, where the expectation is that the GID is replaced almost immediately. Fixes: b150c386 ("IB/core: Introduce GID entry reference counts") Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
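A hedged sketch of the pattern: two kref release callbacks, one for callers that already hold table->lock (free synchronously) and one for everyone else (defer to the work queue); names are illustrative.

    static void schedule_free_gid(struct kref *kref)
    {
            struct ib_gid_table_entry *entry =
                    container_of(kref, struct ib_gid_table_entry, kref);

            queue_work(ib_wq, &entry->del_work);   /* caller context may not allow freeing */
    }

    static void free_gid_entry_locked(struct kref *kref)
    {
            struct ib_gid_table_entry *entry =
                    container_of(kref, struct ib_gid_table_entry, kref);

            free_gid_entry(entry);                 /* table->lock already held, free now */
    }

    /* a caller that holds table->lock */
    kref_put(&entry->kref, free_gid_entry_locked);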
-
Parav Pandit authored
When sending a multicast leave request, consider the net namespace in which this cm_id was created. The code was duplicated in cma_leave_mc_groups() and rdma_leave_multicast(); it is now done using a helper function, cma_leave_roce_mc_group(). Fixes: bee3c3c9 ("IB/cma: Join and leave multicast groups with IGMP") Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
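A sketch of the consolidated helper (the function name comes from the commit text; the body is illustrative):

    static void cma_leave_roce_mc_group(struct rdma_id_private *id_priv,
                                        struct cma_multicast *mc)
    {
            struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
            struct net_device *ndev = NULL;

            /* look the netdev up in the cm_id's own namespace, not init_net */
            if (dev_addr->bound_dev_if)
                    ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
            if (ndev) {
                    cma_igmp_send(ndev, &mc->multicast.ib->rec.mgid, false);
                    dev_put(ndev);
            }
            kref_put(&mc->mcref, release_mc);
    }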
-
Jason Gunthorpe authored
In this context the uobject is not allowed to be NULL, so type is the same as uobject->type, and at least for IDR, id is the same as uobject->id. FD objects should never handle the FD number outside the uAPI boundary code. Suggested-by: Guy Levi <guyle@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
For dependencies, branch based on 'mellanox/mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git

Pull RoCE ICRC counters from Leon Romanovsky:

====================
This series exposes the RoCE ICRC counter through the existing RDMA hw_counters sysfs interface. The first patch adds all HW definitions to the mlx5_ifc.h file and the second patch is the actual counter implementation.
====================

* branch 'icrc-counter':
  IB/mlx5: Support RoCE ICRC encapsulated error counter
  net/mlx5: Add RoCE RX ICRC encapsulated counter
-
Talat Batheesh authored
This patch adds support for querying the counter that counts RoCE packets with a corrupted ICRC (Invariant Cyclic Redundancy Code). The counter is exposed under /sys/class/infiniband/<mlx5-dev>/ports/<port>/hw_counters/ as: rx_icrc_encapsulated - the number of RoCE packets with an ICRC error. Signed-off-by: Talat Batheesh <talatb@mellanox.com> Reviewed-by: Mark Bloch <markb@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
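Loosely, each hw_counters file corresponds to one entry in a names table the driver registers through its alloc_hw_stats() callback; a hedged sketch (the array name is illustrative):

    /* one more name alongside the existing per-port hardware counters */
    static const char * const ext_ppcnt_counter_names[] = {
            "rx_icrc_encapsulated",
    };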
-
- 21 Jun, 2018 1 commit
-
-
Talat Batheesh authored
Add capability bit in PCAM register and RoCE ICRC error counter to PPCNT register. Signed-off-by: Talat Batheesh <talatb@mellanox.com> Reviewed-by: Mark Bloch <markb@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
- 19 Jun, 2018 10 commits
-
-
Leon Romanovsky authored
Put all relevant checks for transport domain in the mlx5_ib_alloc/dealloc_transport_domain functions. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
Move some s_flags defines out of rdmavt and into hfi1 because they are hfi1 specific and therefore should remain in the driver instead of bubbling up to rdmavt. Document device specific ranges in rdmavt and remap those in hfi1. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
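An illustrative sketch of the layering (the bit values are placeholders; only the split matters): rdmavt documents where its own s_flags end, and hfi1 remaps its private flags into the range above.

    /* rdmavt: last bit rdmavt itself uses; bits above are driver-private */
    #define RVT_S_MAX_BIT_MASK      0x800

    /* hfi1: device-specific flags live above the rdmavt range */
    #define HFI1_S_MIN_BIT_MASK     (RVT_S_MAX_BIT_MASK << 1)
    #define HFI1_S_AHG_VALID        (HFI1_S_MIN_BIT_MASK << 0)
    #define HFI1_S_AHG_CLEAR        (HFI1_S_MIN_BIT_MASK << 1)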
-
Mike Marciniszyn authored
The field is based on a constant that can never change. Use the define to assign the register instead. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
This field should be in ctxtdata to allow for better locality of access by eliminating a dd dereference. The new field is now side-by-side with rcvhdrqentsize since the rhf_offset is a function of the rcvhdrqentsize. Both fields are now correctly sized as u8. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
The current implementation precludes having receive context specific packet type receive handlers. Fix this by adding a C99 const array for the existing handlers and removing the current 72 bytes of pointers from devdata. A new pointer in hfi1_ctxtdata will point to the const array. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
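A sketch of the shape of the change (type, index, and handler names are illustrative):

    typedef int (*rhf_rcv_function_ptr)(struct hfi1_packet *packet);

    /* one shared read-only table instead of 72 bytes of pointers per device */
    static const rhf_rcv_function_ptr normal_rhf_rcv_functions[] = {
            [RHF_RCV_TYPE_EXPECTED] = process_receive_expected,
            [RHF_RCV_TYPE_EAGER]    = process_receive_eager,
            [RHF_RCV_TYPE_ERROR]    = process_receive_error,
    };

    /* struct hfi1_ctxtdata gains a pointer selecting the table for that
     * context:  const rhf_rcv_function_ptr *rhf_rcv_function_map; */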
-
Yishai Hadas authored
Expose DEVX tree to be used by upper layers. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yishai Hadas authored
Return the matching device EQN for a given user vector number via the DEVX interface. Note: EQs are owned by the kernel and shared by all user processes. Basically, a user CQ can point to any EQ. The kernel doesn't enforce any such limitation today either. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yishai Hadas authored
Add support for registering memory with the firmware via the DEVX interface. The driver translates a given user address to an ib_umem, then registers the physical addresses with the firmware and gets a unique id for this registration, to be used with this virtual address. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
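A hedged sketch of the described flow (the ib_umem_get() signature is the one in this kernel; command assembly is summarized in a comment):

    obj->umem = ib_umem_get(ucontext, addr, size, access, 0);
    if (IS_ERR(obj->umem))
            return PTR_ERR(obj->umem);

    /* build the firmware CREATE_UMEM command from the umem's page list,
     * execute it, and keep the returned umem id for later references to
     * this virtual address range */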
-
Yishai Hadas authored
Return a device UAR index for a given user index via the DEVX interface. Security note: the hardware protection mechanism works like this: each device object that is subject to UAR doorbells (QP/SQ/CQ) gets a UAR ID (called uar_page in the device specification manual) upon its creation. Upon a doorbell, hardware fetches the object context for which the doorbell was rung and validates that the UAR through which the doorbell was rung matches the UAR ID of the object. If they do not match, the doorbell is silently ignored by the hardware. Of course, the user cannot ring a doorbell on a UAR that was not mapped to it. Now in devx, as the devx kernel code does not manipulate the QP/SQ/CQ command mailboxes (except for tagging them with the UID), we expose the user's UAR ID to it, so it can embed it in these objects in the expected specification format. So the only thing a user can do is hurt itself by creating a QP/SQ/CQ with a UAR ID other than its own, in which case other users may ring a doorbell on its objects. The consequence is that another user could schedule the buggy user's QP/SQ for execution (just insert it into the hardware schedule queue or arm its CQ for event generation); no further harm is expected. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yishai Hadas authored
Add support in DEVX for modify and query commands, the required lock is taken (i.e. READ/WRITE) by the KABI infrastructure accordingly. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-