- 21 Mar, 2016 2 commits
-
-
Leon Romanovsky authored
Add caching of the maximum device capability for ATOMIC endian mode.
Fixes: f91e6d89 ('net/mlx5_core: Add setting ATOMIC endian mode')
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Dan Carpenter authored
The first argument of WARN_ON() is a condition, so the warning message here would print only the name, without the ->qp_num information.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
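As a hedged illustration of the distinction (the structure and field names below are placeholders, not taken from the patch): WARN_ON() evaluates its single argument as a condition, so only WARN() can attach a formatted message that carries details such as the qp_num.

#include <linux/kernel.h>

/* Placeholder context for illustration only. */
struct demo_ch {
	const char *sess_name;
	u32 qp_num;
	bool release_done;
};

static void demo_check(struct demo_ch *ch)
{
	/* WARN_ON() only tests a condition; it cannot print extra details. */
	WARN_ON(!ch->release_done);

	/* WARN() tests a condition and also prints a printf-style message. */
	WARN(!ch->release_done, "%s: qp_num %u not released\n",
	     ch->sess_name, ch->qp_num);
}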
-
- 16 Mar, 2016 7 commits
-
-
Doug Ledford authored
-
Faisal Latif authored
During a large connection test, there is a crash at wake_up() in the callback because the waitq is not yet initialized: the callback can run before iwpm_wait_complete_req() is called to initialize the waitq. To resolve this, use a signaling semaphore instead of the waitq.
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Reviewed-by: Tatyana E Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Faisal Latif <faisal.latif@intel.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
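A minimal sketch of the pattern, assuming a simplified request structure (demo_req and its fields are illustrative, not the actual iwpm types): a semaphore initialized to zero can safely absorb a completion that arrives before the waiter blocks.

#include <linux/semaphore.h>

/* Hypothetical request object; the real iwpm code differs in detail. */
struct demo_req {
	struct semaphore sem;
	int err_code;
};

static void demo_req_init(struct demo_req *req)
{
	/* Initialized before the request is issued, so an early reply just
	 * does an up() that the later down_timeout() consumes. */
	sema_init(&req->sem, 0);
	req->err_code = 0;
}

/* Called from the netlink callback when the reply arrives. */
static void demo_req_complete(struct demo_req *req, int err)
{
	req->err_code = err;
	up(&req->sem);
}

/* Called by the requester to wait for the reply. */
static int demo_wait_complete(struct demo_req *req, long timeout_jiffies)
{
	int ret = down_timeout(&req->sem, timeout_jiffies);

	if (ret)	/* -ETIME if the reply never arrived */
		return ret;
	return req->err_code;
}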
-
Steve Wise authored
Now with the new iWARP port mapping service in the iwcm, it is trivial to add cxgb3 support.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Steve Wise authored
Now that most of the port mapper code has been moved to the iwcm, we can remove it from iw_cxgb4.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Faisal Latif authored
Now that most of the port mapper code has been moved to the iwcm, we can remove it from the port mapper service user drivers.
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Tatyana E. Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Faisal Latif <faisal.latif@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Faisal Latif authored
Move the port-mapper-related code from the drivers into common code.
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Tatyana E. Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Faisal Latif <faisal.latif@intel.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Doug Ledford authored
-
- 14 Mar, 2016 1 commit
-
-
Doug Ledford authored
-
- 11 Mar, 2016 1 commit
-
-
Christoph Hellwig authored
Trivial conversion to the new RDMA CQ API.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Dominique Martinet <dominique.martinet@cea.fr>
Signed-off-by: Doug Ledford <dledford@redhat.com>
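For readers unfamiliar with the new API, here is a minimal, hedged sketch of the consumer-side pattern it converts to (the demo_* names are illustrative, not the 9p code): the CQ is allocated with ib_alloc_cq(), and each work request carries an ib_cqe whose ->done callback the core invokes, so drivers no longer hand-roll ib_poll_cq()/ib_req_notify_cq() loops.

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <rdma/ib_verbs.h>

struct demo_ctx {
	struct ib_cqe cqe;	/* embedded completion entry */
	struct ib_cq *cq;
};

static void demo_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct demo_ctx *ctx = container_of(wc->wr_cqe, struct demo_ctx, cqe);

	if (wc->status != IB_WC_SUCCESS)
		pr_err("demo: completion failed with status %d\n", wc->status);
	else
		pr_debug("demo: work for ctx %p completed\n", ctx);
}

static int demo_setup(struct ib_device *dev, struct demo_ctx *ctx, int depth)
{
	struct ib_cq *cq;

	/* The core owns polling; IB_POLL_SOFTIRQ dispatches completions
	 * from softirq context. */
	cq = ib_alloc_cq(dev, ctx, depth, 0 /* comp_vector */, IB_POLL_SOFTIRQ);
	if (IS_ERR(cq))
		return PTR_ERR(cq);

	ctx->cq = cq;
	ctx->cqe.done = demo_done;
	/* A posted WR then points at the entry: wr.wr_cqe = &ctx->cqe; */
	return 0;
}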
-
- 10 Mar, 2016 3 commits
-
-
Maor Gottlieb authored
Each bypass flow steering priority will be split into two priorities:
1. A priority for "don't trap" rules.
2. A priority for normal rules.
When a user creates a flow using the IB_FLOW_ATTR_FLAGS_DONT_TRAP flag, the driver creates two flow rules: one used for receiving the traffic, and the other for forwarding the packet so it continues matching in lower or equal priorities.
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
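A hedged sketch of how a consumer might request such a rule through the verbs layer (the helper name and the empty spec list are illustrative; a real rule appends ib_flow_spec_* entries after the attribute header):

#include <rdma/ib_verbs.h>

static struct ib_flow *demo_add_dont_trap_rule(struct ib_qp *qp, u8 port)
{
	struct ib_flow_attr attr = {
		.type		= IB_FLOW_ATTR_NORMAL,
		.size		= sizeof(attr),
		.priority	= 0,
		/* Matching packets are delivered here and still continue
		 * matching in lower or equal priorities. */
		.flags		= IB_FLOW_ATTR_FLAGS_DONT_TRAP,
		.num_of_specs	= 0,
		.port		= port,
	};

	return ib_create_flow(qp, &attr, IB_FLOW_DOMAIN_USER);
}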
-
Maor Gottlieb authored
Add support for creating a flow rule that forwards packets to the first flow table in the next priority (the next priority could be the first priority in the next namespace, or the next priority in the same namespace). This feature can be used for DONT_TRAP rules, or for rules that only want to mark the packet with a flow tag. In order to do this optimally, each flow table has a list of all rules that point to it; when a flow table is destroyed or created, we update the list head correspondingly. This kind of rule is created when the destination is NULL and the action is MLX5_FLOW_CONTEXT_ACTION_FWD_NEXT_PRIO.
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Maor Gottlieb authored
Create an empty flow table at the end of the NIC RX namespace. Adding this flow table simplifies the implementation of the "forward to next prio" rules.
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 04 Mar, 2016 6 commits
-
-
Sagi Grimberg authored
If the device supports arbitrary SG list mapping (device cap IB_DEVICE_SG_GAPS_REG set), we allocate the memory regions with IB_MR_TYPE_SG_GAPS and allow the block layer to pass us gaps by skipping the setting of the queue virt_boundary.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
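A minimal sketch of the capability-driven choice (the helper is illustrative, not the iSER code, and the capability is assumed to be readable from the device attributes; the core patch below describes where the bit actually lives):

#include <rdma/ib_verbs.h>

static struct ib_mr *demo_alloc_reg_mr(struct ib_pd *pd, struct ib_device *dev,
					u32 max_sg)
{
	enum ib_mr_type type;

	if (dev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG)
		type = IB_MR_TYPE_SG_GAPS;	/* arbitrary SG layout allowed */
	else
		type = IB_MR_TYPE_MEM_REG;	/* gaps must be avoided */

	/* With the gap-capable type, the ULP also skips setting the block
	 * queue's virt_boundary, so the block layer may hand us gapped
	 * scatterlists. */
	return ib_alloc_mr(pd, type, max_sg);
}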
-
Sagi Grimberg authored
Allocate the proper context for arbitrary scatterlist registration: if ib_alloc_mr is called with IB_MR_MAP_ARB_SG, the driver allocates a private klm list instead of a private page list. Set the UMR wqe correctly when posting the fast registration. Also, expose the device cap IB_DEVICE_MAP_ARB_SG according to the device id (until we have a FW bit that correctly exposes it).
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Sagi Grimberg authored
Devices that are capable of registering SG lists with gaps can now expose this in the core to ULPs using a new device capability, IB_DEVICE_SG_GAPS_REG (in a new field, device_cap_flags_ex, in the device attributes, as we ran out of bits), and a new mr_type, IB_MR_TYPE_SG_GAPS_REG, which allocates a memory region capable of handling SG lists with gaps.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Sagi Grimberg authored
While the documentation indicates that the number of translation entries per memory key is unlimited, in practice we can only fit a finite number of translation entries in a single registration wqe (which is log_max_klm_list_size).
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Doug Ledford authored
These three related functions can't agree whether to put the umrwr on the stack dirty and then memset it, or to initialize it on the stack. Make them all agree.
Signed-off-by: Doug Ledford <dledford@redhat.com>
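For illustration only (the structure below is a stand-in, not the mlx5 umrwr), these are the two competing styles the cleanup reconciles:

#include <linux/string.h>
#include <linux/types.h>

struct demo_umr_wr {
	u64 length;
	u32 access_flags;
};

static void demo_style_memset(void)
{
	struct demo_umr_wr umrwr;

	/* Style 1: declare it dirty on the stack, then memset() it. */
	memset(&umrwr, 0, sizeof(umrwr));
	umrwr.length = 4096;
	(void)umrwr;
}

static void demo_style_initializer(void)
{
	/* Style 2: zero-initialize at declaration; unnamed members are
	 * cleared by the compiler. */
	struct demo_umr_wr umrwr = { .length = 4096 };

	(void)umrwr;
}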
-
Christoph Hellwig authored
Simplifies the code, and makes it fairer with respect to other users, by using a softirq for polling.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 03 Mar, 2016 13 commits
-
-
Markus Elfring authored
Return the value from a call of the ocrdma_mbx_modify_qp() function without using an extra assignment for the local variable "status".
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Doug Ledford <dledford@redhat.com>
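This and the two ocrdma cleanups that follow apply the same pattern; a hedged before/after sketch with placeholder names (none of these are the actual ocrdma symbols):

struct demo_dev;
struct demo_qp;

int demo_mbx_modify_qp(struct demo_dev *dev, struct demo_qp *qp);

/* Before: the result is parked in a local "status" first. */
int demo_modify_before(struct demo_dev *dev, struct demo_qp *qp)
{
	int status;

	status = demo_mbx_modify_qp(dev, qp);
	return status;
}

/* After: return the call's value directly. */
int demo_modify_after(struct demo_dev *dev, struct demo_qp *qp)
{
	return demo_mbx_modify_qp(dev, qp);
}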
-
Markus Elfring authored
Return zero at the end without using the local variable "status".
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Markus Elfring authored
The variable "status" will be set to an appropriate value a bit later. Thus let us omit the explicit initialisation at the beginning. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Hal Rosenstock authored
In ib_mad.h, ib_mad_snoop_handler uses send_buf rather than send_wr.
Signed-off-by: Hal Rosenstock <hal@mellanox.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Parav Pandit authored
1. Replace printk with the appropriate pr_warn, pr_err, or pr_info.
2. Remove unnecessary prints around memory allocation failures, as reported by the checkpatch script.
Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
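A short, hedged illustration of both points (the message text is made up):

#include <linux/printk.h>

static void demo_log(int err)
{
	/* Before: raw printk with an explicit level prefix. */
	printk(KERN_WARNING "demo: operation failed, err %d\n", err);

	/* After: the pr_* helpers imply the level and read more cleanly. */
	pr_warn("demo: operation failed, err %d\n", err);

	/* Allocation-failure messages are simply removed: the allocator
	 * already emits a detailed warning when an allocation fails. */
}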
-
Amitoj Kaur Chawla authored
Use eth_zero_addr to assign the zero address to the given address array, instead of a memset whose second argument is zero. The Coccinelle semantic patch used to make this change is as follows:
// <smpl>
@eth_zero_addr@
expression e;
@@
-memset(e,0x00,ETH_ALEN);
+eth_zero_addr(e);
// </smpl>
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
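The C-level effect of the semantic patch, as a hedged example (the function is illustrative):

#include <linux/etherdevice.h>
#include <linux/string.h>

static void demo_clear_mac(u8 mac[ETH_ALEN])
{
	/* Before: */
	memset(mac, 0x00, ETH_ALEN);

	/* After: same result, clearer intent. */
	eth_zero_addr(mac);
}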
-
Eli Cohen authored
Since we allow calling legacy verbs through their extended counterparts, the check on ucontext has to move up to a common area, in case this verb is ever extended.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Eli Cohen authored
When an extended verb is an extension to a legacy verb, the original functionality is preserved. Hence we do not require each hardware driver to set the extended capability. This will allow the use of the extended verb in its simple form with drivers that do not support the extended capability.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Eli Cohen authored
Move the check on the validity of the command to a common area.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Doug Ledford authored
-
Majd Dibbiny authored
Currently, the inlen field of the vendor's part of the command doesn't match the command buffer. This happens because the inlen accommodates ib_uverbs_cmd_hdr, which is deducted from the in buffer. This is problematic since the vendor function could be called either from the legacy verb (where the input length mismatches the actual length) or from the extended verb (where the length matches). The vendor has no idea which function calls it and therefore has no way to know how the length variable should be treated. Fix this by aligning the inlen to the correct length. All vendor drivers either assumed that inlen >= sizeof(vendor_uhw_cmd) or just failed wrongly (mlx5); the latter is also fixed in this patch.
Fixes: cfb5e088 ('IB/mlx5: Add CQE version 1 support to user QPs and SRQs')
Signed-off-by: Majd Dibbiny <majd@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Majd Dibbiny authored
Normal SRQs, unlike XRC SRQs, don't have a user index, so avoid verifying and using it.
Fixes: cfb5e088 ('IB/mlx5: Add CQE version 1 support to user QPs and SRQs')
Signed-off-by: Majd Dibbiny <majd@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Hans Westgaard Ry authored
IPoIB converts skb fragments to sges, adding one extra sge when SG is enabled. The current code path assumes that the maximum number of sges a device supports is at least MAX_SKB_FRAGS+1; there is no interaction with the upper layers to limit the number of fragments in an skb if a device supports fewer sges. The assumption also leads to requesting a fixed number of sges when IPoIB creates queue pairs with SG enabled. A fallback/slow path is implemented using skb_linearize to handle cases where the conversion would result in more sges than supported.
Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
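A hedged sketch of the fallback idea (not the IPoIB code itself): if the linear head plus the page fragments would need more sges than the QP was created with, linearize the skb so it fits in a single sge.

#include <linux/skbuff.h>

static int demo_fit_skb_to_sge(struct sk_buff *skb, unsigned int max_sge)
{
	/* one sge for the linear head + one per page fragment */
	if (skb_shinfo(skb)->nr_frags + 1 > max_sge)
		return skb_linearize(skb);	/* may fail with -ENOMEM */

	return 0;
}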
-
- 01 Mar, 2016 7 commits
-
-
Matan Barak authored
This patch adds user-space support for memory window allocation and deallocation. It also exposes the supported types via the query_device_caps verb.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Tested-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
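From the user-space side this surfaces through libibverbs roughly as follows; a hedged sketch that assumes the ibv_alloc_mw()/ibv_dealloc_mw() entry points and the IBV_DEVICE_MEM_WINDOW capability bit are available:

#include <infiniband/verbs.h>

/* Query the device, then allocate and free a type-1 memory window. */
static int demo_try_mw(struct ibv_context *ctx, struct ibv_pd *pd)
{
	struct ibv_device_attr attr;
	struct ibv_mw *mw;

	if (ibv_query_device(ctx, &attr))
		return -1;

	if (!(attr.device_cap_flags & IBV_DEVICE_MEM_WINDOW))
		return 0;	/* no memory window support reported */

	mw = ibv_alloc_mw(pd, IBV_MW_TYPE_1);
	if (!mw)
		return -1;

	return ibv_dealloc_mw(mw);
}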
-
Matan Barak authored
Pass udata to the vendor's driver in order to pass data from the user-space driver to the kernel-space driver. This data will be used in downstream patches.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
Mlx5's mkey mechanism is also used for memory windows. The current code base uses MR (memory region) naming, which is inaccurate. Change MR to mkey in order to represent its different usages more accurately.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Noa Osherovich authored
This patch adds support for re-registration of memory regions in MLX5. The functionality is basically the same as deregister followed by register, but attempts to reuse the existing resources as much as possible. Original memory keys are kept if possible, saving the need to communicate new ones to remote peers.
Signed-off-by: Noa Osherovich <noaos@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Noa Osherovich authored
In order to add re-registration of memory regions, some logic was extracted into separate functions:
- ODP-related logic.
- Some of the UMR WQE preparation code.
- DMA mapping.
- Umem creation.
- Creating an MKey using the FW interface.
- MR field assignments after successful creation.
Signed-off-by: Noa Osherovich <noaos@mellanox.com>
Reviewed-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Haggai Eran authored
Commit 4c21b5bc ("IB/cma: Add net_dev and private data checks to RDMA CM") added checks that incoming RDMA CM requests can be matched to a netdev based on the P_Key in the BTH of the request. This behavior was reverted in commit ab3964ad ("IB/cma: Use inner P_Key to determine netdev"), since the mlx5 and ipath drivers didn't send the correct value in the BTH P_Key. Since the ipath driver was removed, and the mlx5 driver can now send GSI packets on different P_Keys, we could revert the patch to let the rdma_cm module look at the BTH P_Key when deciding to which netdev a packet belongs. However, that still breaks compatibility with the older drivers. Change the behavior to print a warning when receiving a request whose BTH P_Key and inner payload P_Key differ. In the future, after users have seen the warnings and upgraded their setups, remove the warning and block these requests.
Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Haggai Eran authored
Now that the transmission of GSI MADs is done with the special transmission QPs, eliminate the send buffers in the GSI receive QP.
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-