Commit b8ba4526 authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma

Pull more rdma updates from Doug Ledford:
 "Round two of 4.6 merge window patches.

  This is a monster pull request.  I held off on the hfi1 driver updates
  (the hfi1 driver is intimately tied to the qib driver and the new
  rdmavt software library that was created to help both of them) in my
  first pull request.  The hfi1/qib/rdmavt update is probably 90% of
  this pull request.  The hfi1 driver is being left in staging so that
  it can be fixed up with regard to the API that Al and you didn't
  like.  Intel has agreed to do the work, but in the meantime, this
  clears out 300+ patches in the backlog queue and brings my tree and
  their tree closer to sync.

  This also includes about 10 patches to the core and a few to mlx5 to
  create an infrastructure for configuring SRIOV ports on IB devices.
  That series includes one patch to the net core that we sent to netdev@
  and Dave Miller with each of the three revisions to the series.  We
  didn't get any response to the patch, so we took that as implicit
  approval.

  Finally, this series includes Intel's new iWARP driver for their X722
  cards.  It's not nearly the beast that the hfi1 driver is.  It also
  had a linux-next merge issue, but that has been resolved, and it now
  passes just fine.

  Summary:

   - A few minor core fixups needed for the next patch series

   - The IB SRIOV series.  This has bounced around for several versions.
     Of note is the fact that the first patch in this series affects the
     net core.  It was directed to netdev and DaveM for each iteration
     of the series (three versions total).  Dave did not object, but did
     not respond either.  I've taken this as permission to move forward
     with the series.

   - The new Intel X722 iWARP driver

   - A huge set of updates to the Intel hfi1 driver.  Of particular
     interest here is that we have left the driver in staging since it
     still has an API that people object to.  Intel is working on a fix,
     but getting these patches in now helps keep me sane as the upstream
     and Intel's trees were over 300 patches apart"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma: (362 commits)
  IB/ipoib: Allow mcast packets from other VFs
  IB/mlx5: Implement callbacks for manipulating VFs
  net/mlx5_core: Implement modify HCA vport command
  net/mlx5_core: Add VF param when querying vport counter
  IB/ipoib: Add ndo operations for configuring VFs
  IB/core: Add interfaces to control VF attributes
  IB/core: Support accessing SA in virtualized environment
  IB/core: Add subnet prefix to port info
  IB/mlx5: Fix decision on using MAD_IFC
  net/core: Add support for configuring VF GUIDs
  IB/{core, ulp} Support above 32 possible device capability flags
  IB/core: Replace setting the zero values in ib_uverbs_ex_query_device
  net/mlx5_core: Introduce offload arithmetic hardware capabilities
  net/mlx5_core: Refactor device capability function
  net/mlx5_core: Fix caching ATOMIC endian mode capability
  ib_srpt: fix a WARN_ON() message
  i40iw: Replace the obsolete crypto hash interface with shash
  IB/hfi1: Add SDMA cache eviction algorithm
  IB/hfi1: Switch to using the pin query function
  IB/hfi1: Specify mm when releasing pages
  ...
Parents: 01cde153 520a07bf
@@ -78,9 +78,10 @@ HFI1
 chip_reset - diagnostic (root only)
 boardversion - board version
 ports/1/
-       CMgtA/
+       CCMgtA/
           cc_settings_bin - CCA tables used by PSM2
           cc_table_bin
+          cc_prescan - enable prescanning for faster BECN response
        sc2v/ - 32 files (0 - 31) used to translate sl->vl
        sl2sc/ - 32 files (0 - 31) used to translate sl->sc
        vl2mtu/ - 16 (0 - 15) files used to determine MTU for vl
@@ -5770,6 +5770,16 @@ F: Documentation/networking/i40evf.txt
 F:	drivers/net/ethernet/intel/
 F:	drivers/net/ethernet/intel/*/
 
+INTEL RDMA RNIC DRIVER
+M:	Faisal Latif <faisal.latif@intel.com>
+R:	Chien Tin Tung <chien.tin.tung@intel.com>
+R:	Mustafa Ismail <mustafa.ismail@intel.com>
+R:	Shiraz Saleem <shiraz.saleem@intel.com>
+R:	Tatyana Nikolova <tatyana.e.nikolova@intel.com>
+L:	linux-rdma@vger.kernel.org
+S:	Supported
+F:	drivers/infiniband/hw/i40iw/
+
 INTEL-MID GPIO DRIVER
 M:	David Cohen <david.a.cohen@linux.intel.com>
 L:	linux-gpio@vger.kernel.org

@@ -9224,6 +9234,12 @@ S: Supported
 F:	net/rds/
 F:	Documentation/networking/rds.txt
 
+RDMAVT - RDMA verbs software
+M:	Dennis Dalessandro <dennis.dalessandro@intel.com>
+L:	linux-rdma@vger.kernel.org
+S:	Supported
+F:	drivers/infiniband/sw/rdmavt
+
 READ-COPY UPDATE (RCU)
 M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
 M:	Josh Triplett <josh@joshtriplett.org>
@@ -68,6 +68,7 @@ source "drivers/infiniband/hw/mthca/Kconfig"
 source "drivers/infiniband/hw/qib/Kconfig"
 source "drivers/infiniband/hw/cxgb3/Kconfig"
 source "drivers/infiniband/hw/cxgb4/Kconfig"
+source "drivers/infiniband/hw/i40iw/Kconfig"
 source "drivers/infiniband/hw/mlx4/Kconfig"
 source "drivers/infiniband/hw/mlx5/Kconfig"
 source "drivers/infiniband/hw/nes/Kconfig"
@@ -82,4 +83,6 @@ source "drivers/infiniband/ulp/srpt/Kconfig"
 source "drivers/infiniband/ulp/iser/Kconfig"
 source "drivers/infiniband/ulp/isert/Kconfig"
 
+source "drivers/infiniband/sw/rdmavt/Kconfig"
+
 endif # INFINIBAND

 obj-$(CONFIG_INFINIBAND) += core/
 obj-$(CONFIG_INFINIBAND) += hw/
 obj-$(CONFIG_INFINIBAND) += ulp/
+obj-$(CONFIG_INFINIBAND) += sw/
@@ -650,10 +650,23 @@ int ib_query_port(struct ib_device *device,
 		  u8 port_num,
 		  struct ib_port_attr *port_attr)
 {
+	union ib_gid gid;
+	int err;
+
 	if (port_num < rdma_start_port(device) || port_num > rdma_end_port(device))
 		return -EINVAL;
 
-	return device->query_port(device, port_num, port_attr);
+	memset(port_attr, 0, sizeof(*port_attr));
+	err = device->query_port(device, port_num, port_attr);
+	if (err || port_attr->subnet_prefix)
+		return err;
+
+	err = ib_query_gid(device, port_num, 0, &gid, NULL);
+	if (err)
+		return err;
+
+	port_attr->subnet_prefix = be64_to_cpu(gid.global.subnet_prefix);
+	return 0;
 }
 EXPORT_SYMBOL(ib_query_port);
@@ -885,6 +885,11 @@ static void update_sm_ah(struct work_struct *work)
 	ah_attr.dlid     = port_attr.sm_lid;
 	ah_attr.sl       = port_attr.sm_sl;
 	ah_attr.port_num = port->port_num;
+	if (port_attr.grh_required) {
+		ah_attr.ah_flags = IB_AH_GRH;
+		ah_attr.grh.dgid.global.subnet_prefix = cpu_to_be64(port_attr.subnet_prefix);
+		ah_attr.grh.dgid.global.interface_id = cpu_to_be64(IB_SA_WELL_KNOWN_GUID);
+	}
 
 	new_ah->ah = ib_create_ah(port->agent->qp->pd, &ah_attr);
 	if (IS_ERR(new_ah->ah)) {
@@ -402,7 +402,7 @@ static void copy_query_dev_fields(struct ib_uverbs_file *file,
 	resp->hw_ver		= attr->hw_ver;
 	resp->max_qp		= attr->max_qp;
 	resp->max_qp_wr		= attr->max_qp_wr;
-	resp->device_cap_flags	= attr->device_cap_flags;
+	resp->device_cap_flags	= lower_32_bits(attr->device_cap_flags);
 	resp->max_sge		= attr->max_sge;
 	resp->max_sge_rd	= attr->max_sge_rd;
 	resp->max_cq		= attr->max_cq;
@@ -3600,9 +3600,9 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
 			      struct ib_udata *ucore,
 			      struct ib_udata *uhw)
 {
-	struct ib_uverbs_ex_query_device_resp resp;
+	struct ib_uverbs_ex_query_device_resp resp = { {0} };
 	struct ib_uverbs_ex_query_device  cmd;
-	struct ib_device_attr attr;
+	struct ib_device_attr attr = {0};
 	int err;
 
 	if (ucore->inlen < sizeof(cmd))
@@ -3623,14 +3623,11 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
 	if (ucore->outlen < resp.response_length)
 		return -ENOSPC;
 
-	memset(&attr, 0, sizeof(attr));
 	err = ib_dev->query_device(ib_dev, &attr, uhw);
 	if (err)
 		return err;
 
 	copy_query_dev_fields(file, ib_dev, &resp.base, &attr);
-	resp.comp_mask = 0;
 
 	if (ucore->outlen < resp.response_length + sizeof(resp.odp_caps))
 		goto end;
@@ -3643,9 +3640,6 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
 		attr.odp_caps.per_transport_caps.uc_odp_caps;
 	resp.odp_caps.per_transport_caps.ud_odp_caps =
 		attr.odp_caps.per_transport_caps.ud_odp_caps;
-	resp.odp_caps.reserved = 0;
-#else
-	memset(&resp.odp_caps, 0, sizeof(resp.odp_caps));
 #endif
 	resp.response_length += sizeof(resp.odp_caps);
@@ -3663,8 +3657,5 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
 end:
 	err = ib_copy_to_udata(ucore, &resp, resp.response_length);
-	if (err)
-		return err;
-
-	return 0;
+	return err;
 }
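The initializer change above leans on C's guarantee that a partially brace-initialized aggregate has every remaining member zeroed, which is what makes the removed memset() and explicit zero stores redundant. A standalone illustration (hypothetical struct layout, not the real uverbs response structs):

#include <assert.h>

struct odp_caps { unsigned long long general_caps; unsigned int reserved; };
struct ex_resp  { struct odp_caps odp_caps; unsigned int comp_mask; };

int main(void)
{
	/* Brace-initializing one member zero-fills every other member,
	 * so no follow-up memset() or "x.field = 0" stores are needed. */
	struct ex_resp resp = { {0} };

	assert(resp.odp_caps.general_caps == 0);
	assert(resp.odp_caps.reserved == 0);
	assert(resp.comp_mask == 0);
	return 0;
}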
@@ -1551,6 +1551,46 @@ int ib_check_mr_status(struct ib_mr *mr, u32 check_mask,
 }
 EXPORT_SYMBOL(ib_check_mr_status);
 
+int ib_set_vf_link_state(struct ib_device *device, int vf, u8 port,
+			 int state)
+{
+	if (!device->set_vf_link_state)
+		return -ENOSYS;
+
+	return device->set_vf_link_state(device, vf, port, state);
+}
+EXPORT_SYMBOL(ib_set_vf_link_state);
+
+int ib_get_vf_config(struct ib_device *device, int vf, u8 port,
+		     struct ifla_vf_info *info)
+{
+	if (!device->get_vf_config)
+		return -ENOSYS;
+
+	return device->get_vf_config(device, vf, port, info);
+}
+EXPORT_SYMBOL(ib_get_vf_config);
+
+int ib_get_vf_stats(struct ib_device *device, int vf, u8 port,
+		    struct ifla_vf_stats *stats)
+{
+	if (!device->get_vf_stats)
+		return -ENOSYS;
+
+	return device->get_vf_stats(device, vf, port, stats);
+}
+EXPORT_SYMBOL(ib_get_vf_stats);
+
+int ib_set_vf_guid(struct ib_device *device, int vf, u8 port, u64 guid,
+		   int type)
+{
+	if (!device->set_vf_guid)
+		return -ENOSYS;
+
+	return device->set_vf_guid(device, vf, port, guid, type);
+}
+EXPORT_SYMBOL(ib_set_vf_guid);
+
 /**
  * ib_map_mr_sg() - Map the largest prefix of a dma mapped SG list
  * and set it on the memory region.
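These wrappers are what the ipoib ndo patches in this series call into. A sketch of that wiring, assuming an ipoib-style netdev whose private data carries the underlying ib_device and port number (the struct and function names here are illustrative, not the actual ipoib code):

#include <linux/netdevice.h>
#include <rdma/ib_verbs.h>

struct vf_capable_priv {		/* hypothetical priv layout */
	struct ib_device *ca;		/* underlying IB device */
	u8 port;			/* port this netdev is bound to */
};

static int example_ndo_set_vf_link_state(struct net_device *dev, int vf,
					 int link_state)
{
	struct vf_capable_priv *priv = netdev_priv(dev);

	/* Returns -ENOSYS when the provider driver lacks the callback. */
	return ib_set_vf_link_state(priv->ca, vf, priv->port, link_state);
}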
@@ -2,6 +2,7 @@ obj-$(CONFIG_INFINIBAND_MTHCA) += mthca/
 obj-$(CONFIG_INFINIBAND_QIB) += qib/
 obj-$(CONFIG_INFINIBAND_CXGB3) += cxgb3/
 obj-$(CONFIG_INFINIBAND_CXGB4) += cxgb4/
+obj-$(CONFIG_INFINIBAND_I40IW) += i40iw/
 obj-$(CONFIG_MLX4_INFINIBAND) += mlx4/
 obj-$(CONFIG_MLX5_INFINIBAND) += mlx5/
 obj-$(CONFIG_INFINIBAND_NES) += nes/
config INFINIBAND_I40IW
	tristate "Intel(R) Ethernet X722 iWARP Driver"
	depends on INET && I40E
	select GENERIC_ALLOCATOR
	---help---
	Intel(R) Ethernet X722 iWARP Driver
	INET && I40IW && INFINIBAND && I40E
ccflags-y := -Idrivers/net/ethernet/intel/i40e

obj-$(CONFIG_INFINIBAND_I40IW) += i40iw.o

i40iw-objs := \
	i40iw_cm.o i40iw_ctrl.o \
	i40iw_hmc.o i40iw_hw.o i40iw_main.o \
	i40iw_pble.o i40iw_puda.o i40iw_uk.o i40iw_utils.o \
	i40iw_verbs.o i40iw_virtchnl.o i40iw_vf.o
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_HMC_H
#define I40IW_HMC_H
#include "i40iw_d.h"
struct i40iw_hw;
enum i40iw_status_code;
#define I40IW_HMC_MAX_BP_COUNT 512
#define I40IW_MAX_SD_ENTRIES 11
#define I40IW_HW_DBG_HMC_INVALID_BP_MARK 0xCA
#define I40IW_HMC_INFO_SIGNATURE 0x484D5347
#define I40IW_HMC_PD_CNT_IN_SD 512
#define I40IW_HMC_DIRECT_BP_SIZE 0x200000
#define I40IW_HMC_MAX_SD_COUNT 4096
#define I40IW_HMC_PAGED_BP_SIZE 4096
#define I40IW_HMC_PD_BP_BUF_ALIGNMENT 4096
#define I40IW_FIRST_VF_FPM_ID 16
#define FPM_MULTIPLIER 1024
#define I40IW_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++)
#define I40IW_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++)
#define I40IW_INC_BP_REFCNT(bp) ((bp)->ref_cnt++)
#define I40IW_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--)
#define I40IW_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--)
#define I40IW_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--)
/**
* I40IW_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
* @hw: pointer to our hw struct
* @sd_idx: segment descriptor index
* @pd_idx: page descriptor index
*/
#define I40IW_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \
i40iw_wr32((hw), I40E_PFHMC_PDINV, \
(((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
(0x1 << I40E_PFHMC_PDINV_PMSDPARTSEL_SHIFT) | \
((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
/**
* I40IW_INVALIDATE_VF_HMC_PD - Invalidates the pd cache in the hardware
* @hw: pointer to our hw struct
* @sd_idx: segment descriptor index
* @pd_idx: page descriptor index
* @hmc_fn_id: VF's function id
*/
#define I40IW_INVALIDATE_VF_HMC_PD(hw, sd_idx, pd_idx, hmc_fn_id) \
i40iw_wr32(hw, I40E_GLHMC_VFPDINV(hmc_fn_id - I40IW_FIRST_VF_FPM_ID), \
((sd_idx << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
(pd_idx << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
struct i40iw_hmc_obj_info {
u64 base;
u32 max_cnt;
u32 cnt;
u64 size;
};
enum i40iw_sd_entry_type {
I40IW_SD_TYPE_INVALID = 0,
I40IW_SD_TYPE_PAGED = 1,
I40IW_SD_TYPE_DIRECT = 2
};
struct i40iw_hmc_bp {
enum i40iw_sd_entry_type entry_type;
struct i40iw_dma_mem addr;
u32 sd_pd_index;
u32 ref_cnt;
};
struct i40iw_hmc_pd_entry {
struct i40iw_hmc_bp bp;
u32 sd_index;
bool rsrc_pg;
bool valid;
};
struct i40iw_hmc_pd_table {
struct i40iw_dma_mem pd_page_addr;
struct i40iw_hmc_pd_entry *pd_entry;
struct i40iw_virt_mem pd_entry_virt_mem;
u32 ref_cnt;
u32 sd_index;
};
struct i40iw_hmc_sd_entry {
enum i40iw_sd_entry_type entry_type;
bool valid;
union {
struct i40iw_hmc_pd_table pd_table;
struct i40iw_hmc_bp bp;
} u;
};
struct i40iw_hmc_sd_table {
struct i40iw_virt_mem addr;
u32 sd_cnt;
u32 ref_cnt;
struct i40iw_hmc_sd_entry *sd_entry;
};
struct i40iw_hmc_info {
u32 signature;
u8 hmc_fn_id;
u16 first_sd_index;
struct i40iw_hmc_obj_info *hmc_obj;
struct i40iw_virt_mem hmc_obj_virt_mem;
struct i40iw_hmc_sd_table sd_table;
u16 sd_indexes[I40IW_HMC_MAX_SD_COUNT];
};
struct update_sd_entry {
u64 cmd;
u64 data;
};
struct i40iw_update_sds_info {
u32 cnt;
u8 hmc_fn_id;
struct update_sd_entry entry[I40IW_MAX_SD_ENTRIES];
};
struct i40iw_ccq_cqe_info;
struct i40iw_hmc_fcn_info {
void (*callback_fcn)(struct i40iw_sc_dev *, void *,
struct i40iw_ccq_cqe_info *);
void *cqp_callback_param;
u32 vf_id;
u16 iw_vf_idx;
bool free_fcn;
};
enum i40iw_hmc_rsrc_type {
I40IW_HMC_IW_QP = 0,
I40IW_HMC_IW_CQ = 1,
I40IW_HMC_IW_SRQ = 2,
I40IW_HMC_IW_HTE = 3,
I40IW_HMC_IW_ARP = 4,
I40IW_HMC_IW_APBVT_ENTRY = 5,
I40IW_HMC_IW_MR = 6,
I40IW_HMC_IW_XF = 7,
I40IW_HMC_IW_XFFL = 8,
I40IW_HMC_IW_Q1 = 9,
I40IW_HMC_IW_Q1FL = 10,
I40IW_HMC_IW_TIMER = 11,
I40IW_HMC_IW_FSIMC = 12,
I40IW_HMC_IW_FSIAV = 13,
I40IW_HMC_IW_PBLE = 14,
I40IW_HMC_IW_MAX = 15,
};
struct i40iw_hmc_create_obj_info {
struct i40iw_hmc_info *hmc_info;
struct i40iw_virt_mem add_sd_virt_mem;
u32 rsrc_type;
u32 start_idx;
u32 count;
u32 add_sd_cnt;
enum i40iw_sd_entry_type entry_type;
bool is_pf;
};
struct i40iw_hmc_del_obj_info {
struct i40iw_hmc_info *hmc_info;
struct i40iw_virt_mem del_sd_virt_mem;
u32 rsrc_type;
u32 start_idx;
u32 count;
u32 del_sd_cnt;
bool is_pf;
};
enum i40iw_status_code i40iw_copy_dma_mem(struct i40iw_hw *hw, void *dest_buf,
struct i40iw_dma_mem *src_mem, u64 src_offset, u64 size);
enum i40iw_status_code i40iw_sc_create_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_hmc_create_obj_info *info);
enum i40iw_status_code i40iw_sc_del_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_hmc_del_obj_info *info,
bool reset);
enum i40iw_status_code i40iw_hmc_sd_one(struct i40iw_sc_dev *dev, u8 hmc_fn_id,
u64 pa, u32 sd_idx, enum i40iw_sd_entry_type type,
bool setsd);
enum i40iw_status_code i40iw_update_sds_noccq(struct i40iw_sc_dev *dev,
struct i40iw_update_sds_info *info);
struct i40iw_vfdev *i40iw_vfdev_from_fpm(struct i40iw_sc_dev *dev, u8 hmc_fn_id);
struct i40iw_hmc_info *i40iw_vf_hmcinfo_from_fpm(struct i40iw_sc_dev *dev,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_add_sd_table_entry(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info, u32 sd_index,
enum i40iw_sd_entry_type type, u64 direct_mode_sz);
enum i40iw_status_code i40iw_add_pd_table_entry(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info, u32 pd_index,
struct i40iw_dma_mem *rsrc_pg);
enum i40iw_status_code i40iw_remove_pd_bp(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info, u32 idx, bool is_pf);
enum i40iw_status_code i40iw_prep_remove_sd_bp(struct i40iw_hmc_info *hmc_info, u32 idx);
enum i40iw_status_code i40iw_prep_remove_pd_page(struct i40iw_hmc_info *hmc_info, u32 idx);
#define ENTER_SHARED_FUNCTION()
#define EXIT_SHARED_FUNCTION()
#endif /* I40IW_HMC_H */
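As a rough usage sketch of the create path declared above, a PF caller would fill an i40iw_hmc_create_obj_info and hand it to i40iw_sc_create_hmc_obj; the helper name and values below are hypothetical:

/* Sketch only: back "count" QP contexts with HMC memory. */
static enum i40iw_status_code example_back_qps(struct i40iw_sc_dev *dev,
					       struct i40iw_hmc_info *hmc_info,
					       u32 start_idx, u32 count)
{
	struct i40iw_hmc_create_obj_info info = { 0 };

	info.hmc_info = hmc_info;
	info.rsrc_type = I40IW_HMC_IW_QP;	/* object class to back */
	info.start_idx = start_idx;		/* first object index */
	info.count = count;			/* number of objects */
	info.entry_type = I40IW_SD_TYPE_DIRECT;	/* 2MB direct backing pages */
	info.is_pf = true;

	return i40iw_sc_create_hmc_obj(dev, &info);
}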
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_OSDEP_H
#define I40IW_OSDEP_H
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/bitops.h>
#include <net/tcp.h>
#include <crypto/hash.h>
/* get readq/writeq support for 32 bit kernels, use the low-first version */
#include <linux/io-64-nonatomic-lo-hi.h>
#define STATS_TIMER_DELAY 1000
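/**
 * set_64bit_val - set 64 bit value to hw wqe
 * @wqe_words: wqe addr to write
 * @byte_index: index in wqe
 * @value: value to write
 **/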
static inline void set_64bit_val(u64 *wqe_words, u32 byte_index, u64 value)
{
	wqe_words[byte_index >> 3] = value;
}

/**
 * set_32bit_val - set 32 bit value to hw wqe
 * @wqe_words: wqe addr to write
 * @byte_index: index in wqe
 * @value: value to write
 **/
static inline void set_32bit_val(u32 *wqe_words, u32 byte_index, u32 value)
{
	wqe_words[byte_index >> 2] = value;
}

/**
 * get_64bit_val - read 64 bit value from wqe
 * @wqe_words: wqe addr
 * @byte_index: index to read from
 * @value: read value
 **/
static inline void get_64bit_val(u64 *wqe_words, u32 byte_index, u64 *value)
{
	*value = wqe_words[byte_index >> 3];
}

/**
 * get_32bit_val - read 32 bit value from wqe
 * @wqe_words: wqe addr
 * @byte_index: index to read from
 * @value: return 32 bit value
 **/
static inline void get_32bit_val(u32 *wqe_words, u32 byte_index, u32 *value)
{
	*value = wqe_words[byte_index >> 2];
}
struct i40iw_dma_mem {
void *va;
dma_addr_t pa;
u32 size;
} __packed;
struct i40iw_virt_mem {
void *va;
u32 size;
} __packed;
#define i40iw_debug(h, m, s, ...) \
do { \
if (((m) & (h)->debug_mask)) \
pr_info("i40iw " s, ##__VA_ARGS__); \
} while (0)
#define i40iw_flush(a) readl((a)->hw_addr + I40E_GLGEN_STAT)
#define I40E_GLHMC_VFSDCMD(_i) (0x000C8000 + ((_i) * 4)) \
/* _i=0...31 */
#define I40E_GLHMC_VFSDCMD_MAX_INDEX 31
#define I40E_GLHMC_VFSDCMD_PMSDIDX_SHIFT 0
#define I40E_GLHMC_VFSDCMD_PMSDIDX_MASK (0xFFF \
<< I40E_GLHMC_VFSDCMD_PMSDIDX_SHIFT)
#define I40E_GLHMC_VFSDCMD_PF_SHIFT 16
#define I40E_GLHMC_VFSDCMD_PF_MASK (0xF << I40E_GLHMC_VFSDCMD_PF_SHIFT)
#define I40E_GLHMC_VFSDCMD_VF_SHIFT 20
#define I40E_GLHMC_VFSDCMD_VF_MASK (0x1FF << I40E_GLHMC_VFSDCMD_VF_SHIFT)
#define I40E_GLHMC_VFSDCMD_PMF_TYPE_SHIFT 29
#define I40E_GLHMC_VFSDCMD_PMF_TYPE_MASK (0x3 \
<< I40E_GLHMC_VFSDCMD_PMF_TYPE_SHIFT)
#define I40E_GLHMC_VFSDCMD_PMSDWR_SHIFT 31
#define I40E_GLHMC_VFSDCMD_PMSDWR_MASK (0x1 << I40E_GLHMC_VFSDCMD_PMSDWR_SHIFT)
#define I40E_GLHMC_VFSDDATAHIGH(_i) (0x000C8200 + ((_i) * 4)) \
/* _i=0...31 */
#define I40E_GLHMC_VFSDDATAHIGH_MAX_INDEX 31
#define I40E_GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_SHIFT 0
#define I40E_GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_MASK (0xFFFFFFFF \
<< I40E_GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_SHIFT)
#define I40E_GLHMC_VFSDDATALOW(_i) (0x000C8100 + ((_i) * 4)) \
/* _i=0...31 */
#define I40E_GLHMC_VFSDDATALOW_MAX_INDEX 31
#define I40E_GLHMC_VFSDDATALOW_PMSDVALID_SHIFT 0
#define I40E_GLHMC_VFSDDATALOW_PMSDVALID_MASK (0x1 \
<< I40E_GLHMC_VFSDDATALOW_PMSDVALID_SHIFT)
#define I40E_GLHMC_VFSDDATALOW_PMSDTYPE_SHIFT 1
#define I40E_GLHMC_VFSDDATALOW_PMSDTYPE_MASK (0x1 \
<< I40E_GLHMC_VFSDDATALOW_PMSDTYPE_SHIFT)
#define I40E_GLHMC_VFSDDATALOW_PMSDBPCOUNT_SHIFT 2
#define I40E_GLHMC_VFSDDATALOW_PMSDBPCOUNT_MASK (0x3FF \
<< I40E_GLHMC_VFSDDATALOW_PMSDBPCOUNT_SHIFT)
#define I40E_GLHMC_VFSDDATALOW_PMSDDATALOW_SHIFT 12
#define I40E_GLHMC_VFSDDATALOW_PMSDDATALOW_MASK (0xFFFFF \
<< I40E_GLHMC_VFSDDATALOW_PMSDDATALOW_SHIFT)
#define I40E_GLPE_FWLDSTATUS 0x0000D200
#define I40E_GLPE_FWLDSTATUS_LOAD_REQUESTED_SHIFT 0
#define I40E_GLPE_FWLDSTATUS_LOAD_REQUESTED_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_LOAD_REQUESTED_SHIFT)
#define I40E_GLPE_FWLDSTATUS_DONE_SHIFT 1
#define I40E_GLPE_FWLDSTATUS_DONE_MASK (0x1 << I40E_GLPE_FWLDSTATUS_DONE_SHIFT)
#define I40E_GLPE_FWLDSTATUS_CQP_FAIL_SHIFT 2
#define I40E_GLPE_FWLDSTATUS_CQP_FAIL_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_CQP_FAIL_SHIFT)
#define I40E_GLPE_FWLDSTATUS_TEP_FAIL_SHIFT 3
#define I40E_GLPE_FWLDSTATUS_TEP_FAIL_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_TEP_FAIL_SHIFT)
#define I40E_GLPE_FWLDSTATUS_OOP_FAIL_SHIFT 4
#define I40E_GLPE_FWLDSTATUS_OOP_FAIL_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_OOP_FAIL_SHIFT)
struct i40iw_sc_dev;
struct i40iw_sc_qp;
struct i40iw_puda_buf;
struct i40iw_puda_completion_info;
struct i40iw_update_sds_info;
struct i40iw_hmc_fcn_info;
struct i40iw_virtchnl_work_info;
struct i40iw_manage_vf_pble_info;
struct i40iw_device;
struct i40iw_hmc_info;
struct i40iw_hw;
u8 __iomem *i40iw_get_hw_addr(void *dev);
void i40iw_ieq_mpa_crc_ae(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
enum i40iw_status_code i40iw_vf_wait_vchnl_resp(struct i40iw_sc_dev *dev);
enum i40iw_status_code i40iw_ieq_check_mpacrc(struct shash_desc *desc, void *addr,
u32 length, u32 value);
struct i40iw_sc_qp *i40iw_ieq_get_qp(struct i40iw_sc_dev *dev, struct i40iw_puda_buf *buf);
void i40iw_ieq_update_tcpip_info(struct i40iw_puda_buf *buf, u16 length, u32 seqnum);
void i40iw_free_hash_desc(struct shash_desc *);
enum i40iw_status_code i40iw_init_hash_desc(struct shash_desc **);
enum i40iw_status_code i40iw_puda_get_tcpip_info(struct i40iw_puda_completion_info *info,
struct i40iw_puda_buf *buf);
enum i40iw_status_code i40iw_cqp_sds_cmd(struct i40iw_sc_dev *dev,
struct i40iw_update_sds_info *info);
enum i40iw_status_code i40iw_cqp_manage_hmc_fcn_cmd(struct i40iw_sc_dev *dev,
struct i40iw_hmc_fcn_info *hmcfcninfo);
enum i40iw_status_code i40iw_cqp_query_fpm_values_cmd(struct i40iw_sc_dev *dev,
struct i40iw_dma_mem *values_mem,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_cqp_commit_fpm_values_cmd(struct i40iw_sc_dev *dev,
struct i40iw_dma_mem *values_mem,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_alloc_query_fpm_buf(struct i40iw_sc_dev *dev,
struct i40iw_dma_mem *mem);
enum i40iw_status_code i40iw_cqp_manage_vf_pble_bp(struct i40iw_sc_dev *dev,
struct i40iw_manage_vf_pble_info *info);
void i40iw_cqp_spawn_worker(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_work_info *work_info, u32 iw_vf_idx);
void *i40iw_remove_head(struct list_head *list);
void i40iw_term_modify_qp(struct i40iw_sc_qp *qp, u8 next_state, u8 term, u8 term_len);
void i40iw_terminate_done(struct i40iw_sc_qp *qp, int timeout_occurred);
void i40iw_terminate_start_timer(struct i40iw_sc_qp *qp);
void i40iw_terminate_del_timer(struct i40iw_sc_qp *qp);
enum i40iw_status_code i40iw_hw_manage_vf_pble_bp(struct i40iw_device *iwdev,
struct i40iw_manage_vf_pble_info *info,
bool wait);
struct i40iw_dev_pestat;
void i40iw_hw_stats_start_timer(struct i40iw_sc_dev *);
void i40iw_hw_stats_del_timer(struct i40iw_sc_dev *);
#define i40iw_mmiowb() mmiowb()
void i40iw_wr32(struct i40iw_hw *hw, u32 reg, u32 value);
u32 i40iw_rd32(struct i40iw_hw *hw, u32 reg);
#endif /* I40IW_OSDEP_H */
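The byte-indexed helpers above treat a WQE as an array of words, scaling the byte offset down to a word index. A minimal user-space illustration with stand-in typedefs (hypothetical, outside the kernel):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef uint32_t u32;

/* Same logic as the inline helpers above: byte_index is a byte offset
 * into the WQE, so it is scaled down to a 64-bit word index. */
static inline void set_64bit_val(u64 *wqe_words, u32 byte_index, u64 value)
{
	wqe_words[byte_index >> 3] = value;
}

static inline void get_64bit_val(u64 *wqe_words, u32 byte_index, u64 *value)
{
	*value = wqe_words[byte_index >> 3];
}

int main(void)
{
	u64 wqe[4] = {0};	/* a 32-byte WQE stand-in */
	u64 hdr;

	set_64bit_val(wqe, 24, 0x1122334455667788ULL);	/* byte 24 = word 3 */
	get_64bit_val(wqe, 24, &hdr);
	printf("0x%llx\n", (unsigned long long)hdr);
	return 0;
}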
 obj-$(CONFIG_MLX5_INFINIBAND) += mlx5_ib.o
 
-mlx5_ib-y := main.o cq.o doorbell.o qp.o mem.o srq.o mr.o ah.o mad.o gsi.o
+mlx5_ib-y := main.o cq.o doorbell.o qp.o mem.o srq.o mr.o ah.o mad.o gsi.o ib_virt.o
 mlx5_ib-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += odp.o
@@ -208,7 +208,7 @@ static int process_pma_cmd(struct ib_device *ibdev, u8 port_num,
 	if (!out_cnt)
 		return IB_MAD_RESULT_FAILURE;
 
-	err = mlx5_core_query_vport_counter(dev->mdev, 0,
+	err = mlx5_core_query_vport_counter(dev->mdev, 0, 0,
 					    port_num, out_cnt, sz);
 	if (!err)
 		pma_cnt_ext_assign(pma_cnt_ext, out_cnt);
@@ -776,6 +776,14 @@ void mlx5_ib_qp_disable_pagefaults(struct mlx5_ib_qp *qp);
 void mlx5_ib_qp_enable_pagefaults(struct mlx5_ib_qp *qp);
 void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
 			      unsigned long end);
+int mlx5_ib_get_vf_config(struct ib_device *device, int vf,
+			  u8 port, struct ifla_vf_info *info);
+int mlx5_ib_set_vf_link_state(struct ib_device *device, int vf,
+			      u8 port, int state);
+int mlx5_ib_get_vf_stats(struct ib_device *device, int vf,
+			 u8 port, struct ifla_vf_stats *stats);
+int mlx5_ib_set_vf_guid(struct ib_device *device, int vf, u8 port,
+			u64 guid, int type);
 
 #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
 static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
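These prototypes are wired up by the "IB/mlx5: Implement callbacks for manipulating VFs" patch in this pull. Roughly, registration on the ib_device looks like the sketch below (simplified from memory, not the verbatim hunk; the surrounding device-registration code is not shown here):

/* Sketch: PF-only registration of the VF manipulation callbacks. */
if (mlx5_core_is_pf(mdev)) {
	dev->ib_dev.get_vf_config	= mlx5_ib_get_vf_config;
	dev->ib_dev.set_vf_link_state	= mlx5_ib_set_vf_link_state;
	dev->ib_dev.get_vf_stats	= mlx5_ib_get_vf_stats;
	dev->ib_dev.set_vf_guid		= mlx5_ib_set_vf_guid;
}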
 config INFINIBAND_QIB
 	tristate "Intel PCIe HCA support"
-	depends on 64BIT
+	depends on 64BIT && INFINIBAND_RDMAVT
 	---help---
 	This is a low-level driver for Intel PCIe QLE InfiniBand host
 	channel adapters. This driver does not support the Intel
 	...
@@ -74,7 +74,7 @@ static void signal_ib_event(struct qib_pportdata *ppd, enum ib_event_type ev)
 	struct ib_event event;
 	struct qib_devdata *dd = ppd->dd;
 
-	event.device = &dd->verbs_dev.ibdev;
+	event.device = &dd->verbs_dev.rdi.ibdev;
 	event.element.port_num = ppd->port;
 	event.event = ev;
 	ib_dispatch_event(&event);
obj-$(CONFIG_INFINIBAND_RDMAVT) += rdmavt/