Commit 1dcf3ac4 authored by David S. Miller

Merge branch 'mlx5-next'

Amir Vadai says:

====================
net/mlx5: ConnectX-4 100G Ethernet driver

This patchset extends the mlx5_core driver to support Ethernet
functionality. The Ethernet functionality is integrated into the core
driver rather than provided as a separate driver. The IB functionality
remains in the mlx5_ib driver as before.

This functionality enables the Ethernet capability of Mellanox's new
family of cards, ConnectX-4. Because backward compatibility is preserved,
existing Connect-IB cards that use this driver keep working with the
modified driver, and no issues with current deployments are expected.

Like the ConnectX-3 cards, ConnectX-4 is a VPI (Virtual Port Interface -
every port can be configured as Infiniband or Ethernet) card.
Unlike previous generations, the ConnectX-4 has a separate PCI function
per port.

The current code has a limitation: the InfiniBand and Ethernet port types
are mutually exclusive. When the driver is compiled with Ethernet
support, the InfiniBand functionality is disabled and vice versa. To
control this we added the CONFIG_MLX5_CORE_EN config directive, which is
'n' by default but can be changed by the user.

This limitation is short-lived and will be addressed soon.
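
As an illustration only (the exact fragment depends on the rest of the kernel
configuration), building the Ethernet-only flavor of the driver amounts to a
.config selection along these lines:

	CONFIG_MLX5_CORE=y
	CONFIG_MLX5_CORE_EN=y
	# CONFIG_MLX5_INFINIBAND is not set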

As part of this patchset, mlx5_ifc.h was heavily modified [1]. This file
is now generated automatically from the device specification document.
Since this patch is too big for the mail server, it might be missing from
the mailing list, but it can be pulled from an external git repository [2].

irq names are selected at driver initialization and therefore do not contain
the network interface name.
irqbalance will still work thanks to an improvement introduced by Neil Horman
[3] that uses sysfs instead of /proc/interrupts.
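(For reference, that sysfs interface lists a device's MSI/MSI-X vectors under
a path of the form /sys/bus/pci/devices/<domain:bus:dev.fn>/msi_irqs/, so
irqbalance can classify vectors by device rather than by parsing interface
names out of /proc/interrupts.)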

Patchset was applied on top of commit ed2dfd90 ("tcp/dccp: warn user for
preferred ip_local_port_range")

[1] - Patch 4/11 ("net/mlx5_core: HW data structs/types definitions preparation for mlx5 ehternet driver")
[2] - http://git.openfabrics.org/?p=~amirv/linux.git;a=shortlog;h=refs/heads/mlx5e_v1
[3] - kernel: da8d1c8b PCI/sysfs: add per pci device msi[x] irq listing (v5)
      irqbalance: 32a7757 Complete rework of how we detect and classify irqs

Thanks to Achiad, Saeed, Yevheny, Or and the whole team for making this happen,
Amir

Changes from V4:
- Removed Patch 3/12: net/mlx5_core: Add EQ renaming mechanism
- Patch 12/12: net/mlx5: Extend mlx5_core to support ConnectX-4 Ethernet functionality
  - irq names are created at driver initialization, so they do not contain
    the network interface name. This does not affect irqbalance, thanks to
    patches introduced by Neil Horman that use sysfs instead of /proc/interrupts.

Changes from V3:
- PATCH 8/11: net/mlx5_core: Set/Query port MTU commands
  - Return value directly - no need for err.

Changes from V2:
- Improved changelogs and cover-letter
- Added CONFIG_MLX5_CORE_EN to enable/disable the Ethernet functionality
- Moved en.h and wq.[ch] into the patch with data-path related code

Changes from V1:
- Added patch 1/12 ("net/mlx5_core,mlx5_ib: Do not use vmap() on coherent
  memory")

Changes from V0:
- Removed V0 Patch 1/11 ("net/mlx5_core: Virtually extend work/completion queue
  buffers by one page") due to misuse of DMA API. Thanks Dave.
- Patch 1/11 ("net/mlx5_core: Set irq affinity hints"):
  - Use kcalloc instead of kzalloc
  - Fix build error when CONFIG_CPUMASK_OFFSTACK=n. Driver loading now fails
    if the cpumask allocation fails.
  - Using dev_to_node helper. Thanks, Ido.
- Patch 3/11 ("net/mlx5_core: HW data structs/types definitions preparation for
  mlx5 ehternet driver")
  - Removed Mellanox internal comment at the head of the file. Thanks Joe
- Patch 6/11 ("net/mlx5_core: Implement get/set port status")
  - Use direct return of function's result. Thanks Sergei.
- Added Patch 8/11 ("net/mlx5_core: Set/Query port MTU commands")
- Patch 9/11 ("net/mlx5: Ethernet Datapath files")
  - Use rq->wqe_sz instead of skb_end_offset. Thanks Ido.
  - Use dma_wmb() when possible instead of wmb(). Thanks Alex.
  - Fix checkpatch issues
- Patch 10/11 ("net/mlx5: ethernet resources handling")
  - checkpatch issues
  - Added missing include
- Patch 11/11 ("net/mlx5: Ethernet driver")
  - checkpatch issues
  - fixed typo
  - Modified use of affinity hint
  - Using dev_to_node helper. Thanks, Ido.
  - Use new hardware commands from Patch 8/11 ("net/mlx5_core: Set/Query port
    MTU commands") to get/set port MTU in hardware.
  - Removed NETIF_F_SG since hardware ring wraparound is not supported
  - Use dma_wmb() when possible instead of wmb(). Thanks Alex.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 8ed9b5e1 f62b8bb8
 config MLX5_INFINIBAND
 	tristate "Mellanox Connect-IB HCA support"
-	depends on NETDEVICES && ETHERNET && PCI
-	select NET_VENDOR_MELLANOX
-	select MLX5_CORE
+	depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
 	---help---
 	  This driver provides low-level InfiniBand support for
 	  Mellanox Connect-IB PCI Express host channel adapters (HCAs).
...
@@ -590,8 +590,7 @@ static int alloc_cq_buf(struct mlx5_ib_dev *dev, struct mlx5_ib_cq_buf *buf,
 {
 	int err;
-	err = mlx5_buf_alloc(dev->mdev, nent * cqe_size,
-			     PAGE_SIZE * 2, &buf->buf);
+	err = mlx5_buf_alloc(dev->mdev, nent * cqe_size, &buf->buf);
 	if (err)
 		return err;
@@ -754,7 +753,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev, int entries,
 		return ERR_PTR(-EINVAL);
 	entries = roundup_pow_of_two(entries + 1);
-	if (entries > dev->mdev->caps.gen.max_cqes)
+	if (entries > (1 << MLX5_CAP_GEN(dev->mdev, log_max_cq_sz)))
 		return ERR_PTR(-EINVAL);
 	cq = kzalloc(sizeof(*cq), GFP_KERNEL);
@@ -921,7 +920,7 @@ int mlx5_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
 	int err;
 	u32 fsel;
-	if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_CQ_MODER))
+	if (!MLX5_CAP_GEN(dev->mdev, cq_moderation))
 		return -ENOSYS;
 	in = kzalloc(sizeof(*in), GFP_KERNEL);
@@ -1076,7 +1075,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
 	int uninitialized_var(cqe_size);
 	unsigned long flags;
-	if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_RESIZE_CQ)) {
+	if (!MLX5_CAP_GEN(dev->mdev, cq_resize)) {
 		pr_info("Firmware does not support resize CQ\n");
 		return -ENOSYS;
 	}
@@ -1085,7 +1084,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
 		return -EINVAL;
 	entries = roundup_pow_of_two(entries + 1);
-	if (entries > dev->mdev->caps.gen.max_cqes + 1)
+	if (entries > (1 << MLX5_CAP_GEN(dev->mdev, log_max_cq_sz)) + 1)
 		return -EINVAL;
 	if (entries == ibcq->cqe + 1)
...
@@ -129,7 +129,7 @@ int mlx5_query_ext_port_caps(struct mlx5_ib_dev *dev, u8 port)
 	packet_error = be16_to_cpu(out_mad->status);
-	dev->mdev->caps.gen.ext_port_cap[port - 1] = (!err && !packet_error) ?
+	dev->mdev->port_caps[port - 1].ext_port_cap = (!err && !packet_error) ?
 		MLX_EXT_PORT_CAP_FLAG_EXTENDED_PORT_INFO : 0;
 out:
...
@@ -617,7 +617,7 @@ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask,
 #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
 extern struct workqueue_struct *mlx5_ib_page_fault_wq;
-int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev);
+void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev);
 void mlx5_ib_mr_pfault_handler(struct mlx5_ib_qp *qp,
 			       struct mlx5_ib_pfault *pfault);
 void mlx5_ib_odp_create_qp(struct mlx5_ib_qp *qp);
@@ -631,9 +631,9 @@ void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
 			      unsigned long end);
 #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
-static inline int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev)
+static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
 {
-	return 0;
+	return;
 }
 static inline void mlx5_ib_odp_create_qp(struct mlx5_ib_qp *qp) {}
...
@@ -975,8 +975,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, u64 virt_addr,
 	struct mlx5_ib_mr *mr;
 	int inlen;
 	int err;
-	bool pg_cap = !!(dev->mdev->caps.gen.flags &
-			 MLX5_DEV_CAP_FLAG_ON_DMND_PG);
+	bool pg_cap = !!(MLX5_CAP_GEN(dev->mdev, pg));
 	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
 	if (!mr)
...
@@ -109,40 +109,33 @@ void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
 	ib_umem_odp_unmap_dma_pages(umem, start, end);
 }
-#define COPY_ODP_BIT_MLX_TO_IB(reg, ib_caps, field_name, bit_name) do {	\
-	if (be32_to_cpu(reg.field_name) & MLX5_ODP_SUPPORT_##bit_name)	\
-		ib_caps->field_name |= IB_ODP_SUPPORT_##bit_name;	\
-} while (0)
-
-int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev)
+void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
 {
-	int err;
-	struct mlx5_odp_caps hw_caps;
 	struct ib_odp_caps *caps = &dev->odp_caps;
 	memset(caps, 0, sizeof(*caps));
-	if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_ON_DMND_PG))
-		return 0;
-	err = mlx5_query_odp_caps(dev->mdev, &hw_caps);
-	if (err)
-		goto out;
+	if (!MLX5_CAP_GEN(dev->mdev, pg))
+		return;
 	caps->general_caps = IB_ODP_SUPPORT;
-	COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.ud_odp_caps,
-			       SEND);
-	COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
-			       SEND);
-	COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
-			       RECV);
-	COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
-			       WRITE);
-	COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
-			       READ);
-
-out:
-	return err;
+	if (MLX5_CAP_ODP(dev->mdev, ud_odp_caps.send))
+		caps->per_transport_caps.ud_odp_caps |= IB_ODP_SUPPORT_SEND;
+	if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.send))
+		caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_SEND;
+	if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.receive))
+		caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_RECV;
+	if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.write))
+		caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_WRITE;
+	if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.read))
+		caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_READ;
+	return;
 }
 static struct mlx5_ib_mr *mlx5_ib_odp_find_mr_lkey(struct mlx5_ib_dev *dev,
...
@@ -165,7 +165,7 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
 		return err;
 	}
-	if (mlx5_buf_alloc(dev->mdev, buf_size, PAGE_SIZE * 2, &srq->buf)) {
+	if (mlx5_buf_alloc(dev->mdev, buf_size, &srq->buf)) {
 		mlx5_ib_dbg(dev, "buf alloc failed\n");
 		err = -ENOMEM;
 		goto err_db;
@@ -236,7 +236,6 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
 			  struct ib_udata *udata)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
-	struct mlx5_general_caps *gen;
 	struct mlx5_ib_srq *srq;
 	int desc_size;
 	int buf_size;
@@ -245,13 +244,13 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
 	int uninitialized_var(inlen);
 	int is_xrc;
 	u32 flgs, xrcdn;
+	__u32 max_srq_wqes = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
-	gen = &dev->mdev->caps.gen;
 	/* Sanity check SRQ size before proceeding */
-	if (init_attr->attr.max_wr >= gen->max_srq_wqes) {
+	if (init_attr->attr.max_wr >= max_srq_wqes) {
 		mlx5_ib_dbg(dev, "max_wr %d, cap %d\n",
 			    init_attr->attr.max_wr,
-			    gen->max_srq_wqes);
+			    max_srq_wqes);
 		return ERR_PTR(-EINVAL);
 	}
...
@@ -3,6 +3,18 @@
 #
 config MLX5_CORE
-	tristate
+	tristate "Mellanox Technologies ConnectX-4 and Connect-IB core driver"
 	depends on PCI
 	default n
+	---help---
+	  Core driver for low level functionality of the ConnectX-4 and
+	  Connect-IB cards by Mellanox Technologies.
+
+config MLX5_CORE_EN
+	bool "Mellanox Technologies ConnectX-4 Ethernet support"
+	depends on MLX5_INFINIBAND=n && NETDEVICES && ETHERNET && PCI && MLX5_CORE
+	default n
+	---help---
+	  Ethernet support in Mellanox Technologies ConnectX-4 NIC.
+	  Ethernet and Infiniband support in ConnectX-4 are currently mutually
+	  exclusive.
@@ -3,3 +3,6 @@ obj-$(CONFIG_MLX5_CORE) += mlx5_core.o
 mlx5_core-y :=	main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
 		health.o mcg.o cq.o srq.o alloc.o qp.o port.o mr.o pd.o \
 		mad.o
+mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o flow_table.o vport.o transobj.o \
+		en_main.o en_flow_table.o en_ethtool.o en_tx.o en_rx.o \
+		en_txrx.o
@@ -42,95 +42,36 @@
 #include "mlx5_core.h"
 /* Handling for queue buffers -- we allocate a bunch of memory and
- * register it in a memory region at HCA virtual address 0. If the
- * requested size is > max_direct, we split the allocation into
- * multiple pages, so we don't require too much contiguous memory.
+ * register it in a memory region at HCA virtual address 0.
  */
-int mlx5_buf_alloc(struct mlx5_core_dev *dev, int size, int max_direct,
-		   struct mlx5_buf *buf)
+int mlx5_buf_alloc(struct mlx5_core_dev *dev, int size, struct mlx5_buf *buf)
 {
 	dma_addr_t t;
 	buf->size = size;
-	if (size <= max_direct) {
-		buf->nbufs        = 1;
-		buf->npages       = 1;
-		buf->page_shift   = (u8)get_order(size) + PAGE_SHIFT;
-		buf->direct.buf   = dma_zalloc_coherent(&dev->pdev->dev,
-							size, &t, GFP_KERNEL);
-		if (!buf->direct.buf)
-			return -ENOMEM;
-
-		buf->direct.map = t;
-
-		while (t & ((1 << buf->page_shift) - 1)) {
-			--buf->page_shift;
-			buf->npages *= 2;
-		}
-	} else {
-		int i;
-
-		buf->direct.buf  = NULL;
-		buf->nbufs       = (size + PAGE_SIZE - 1) / PAGE_SIZE;
-		buf->npages      = buf->nbufs;
-		buf->page_shift  = PAGE_SHIFT;
-		buf->page_list   = kcalloc(buf->nbufs, sizeof(*buf->page_list),
-					   GFP_KERNEL);
-		if (!buf->page_list)
-			return -ENOMEM;
-
-		for (i = 0; i < buf->nbufs; i++) {
-			buf->page_list[i].buf =
-				dma_zalloc_coherent(&dev->pdev->dev, PAGE_SIZE,
-						    &t, GFP_KERNEL);
-			if (!buf->page_list[i].buf)
-				goto err_free;
-
-			buf->page_list[i].map = t;
-		}
-
-		if (BITS_PER_LONG == 64) {
-			struct page **pages;
-
-			pages = kmalloc(sizeof(*pages) * buf->nbufs, GFP_KERNEL);
-			if (!pages)
-				goto err_free;
-			for (i = 0; i < buf->nbufs; i++)
-				pages[i] = virt_to_page(buf->page_list[i].buf);
-			buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
-			kfree(pages);
-			if (!buf->direct.buf)
-				goto err_free;
-		}
-	}
-
-	return 0;
+	buf->npages       = 1;
+	buf->page_shift   = (u8)get_order(size) + PAGE_SHIFT;
+	buf->direct.buf   = dma_zalloc_coherent(&dev->pdev->dev,
+						size, &t, GFP_KERNEL);
+	if (!buf->direct.buf)
+		return -ENOMEM;
-err_free:
-	mlx5_buf_free(dev, buf);
+	buf->direct.map = t;
-	return -ENOMEM;
+	while (t & ((1 << buf->page_shift) - 1)) {
+		--buf->page_shift;
+		buf->npages *= 2;
+	}
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(mlx5_buf_alloc);
 void mlx5_buf_free(struct mlx5_core_dev *dev, struct mlx5_buf *buf)
 {
-	int i;
-
-	if (buf->nbufs == 1)
-		dma_free_coherent(&dev->pdev->dev, buf->size, buf->direct.buf,
-				  buf->direct.map);
-	else {
-		if (BITS_PER_LONG == 64)
-			vunmap(buf->direct.buf);
-
-		for (i = 0; i < buf->nbufs; i++)
-			if (buf->page_list[i].buf)
-				dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
-						  buf->page_list[i].buf,
-						  buf->page_list[i].map);
-		kfree(buf->page_list);
-	}
+	dma_free_coherent(&dev->pdev->dev, buf->size, buf->direct.buf,
+			  buf->direct.map);
 }
 EXPORT_SYMBOL_GPL(mlx5_buf_free);
@@ -230,10 +171,7 @@ void mlx5_fill_page_array(struct mlx5_buf *buf, __be64 *pas)
 	int i;
 	for (i = 0; i < buf->npages; i++) {
-		if (buf->nbufs == 1)
-			addr = buf->direct.map + (i << buf->page_shift);
-		else
-			addr = buf->page_list[i].map;
+		addr = buf->direct.map + (i << buf->page_shift);
 		pas[i] = cpu_to_be64(addr);
 	}
...
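
To make the interface change above concrete, here is a minimal, illustrative
sketch of a caller using the new two-argument mlx5_buf_alloc(); the names
mdev, buf_size and pas are placeholders and the error handling is trimmed,
so this is not code taken from the patchset itself:

	struct mlx5_buf buf;
	int err;

	/* one physically contiguous, DMA-coherent buffer; no max_direct hint */
	err = mlx5_buf_alloc(mdev, buf_size, &buf);
	if (err)
		return err;

	/* export the buffer's pages into a create-queue command payload */
	mlx5_fill_page_array(&buf, pas);
	...
	mlx5_buf_free(mdev, &buf);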
@@ -75,25 +75,6 @@
 	MLX5_CMD_DELIVERY_STAT_CMD_DESCR_ERR	= 0x10,
 };
-enum {
-	MLX5_CMD_STAT_OK			= 0x0,
-	MLX5_CMD_STAT_INT_ERR			= 0x1,
-	MLX5_CMD_STAT_BAD_OP_ERR		= 0x2,
-	MLX5_CMD_STAT_BAD_PARAM_ERR		= 0x3,
-	MLX5_CMD_STAT_BAD_SYS_STATE_ERR		= 0x4,
-	MLX5_CMD_STAT_BAD_RES_ERR		= 0x5,
-	MLX5_CMD_STAT_RES_BUSY			= 0x6,
-	MLX5_CMD_STAT_LIM_ERR			= 0x8,
-	MLX5_CMD_STAT_BAD_RES_STATE_ERR		= 0x9,
-	MLX5_CMD_STAT_IX_ERR			= 0xa,
-	MLX5_CMD_STAT_NO_RES_ERR		= 0xf,
-	MLX5_CMD_STAT_BAD_INP_LEN_ERR		= 0x50,
-	MLX5_CMD_STAT_BAD_OUTP_LEN_ERR		= 0x51,
-	MLX5_CMD_STAT_BAD_QP_STATE_ERR		= 0x10,
-	MLX5_CMD_STAT_BAD_PKT_ERR		= 0x30,
-	MLX5_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR	= 0x40,
-};
 static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd,
 					   struct mlx5_cmd_msg *in,
 					   struct mlx5_cmd_msg *out,
@@ -390,8 +371,17 @@ const char *mlx5_command_str(int command)
 	case MLX5_CMD_OP_ARM_RQ:
 		return "ARM_RQ";
-	case MLX5_CMD_OP_RESIZE_SRQ:
-		return "RESIZE_SRQ";
+	case MLX5_CMD_OP_CREATE_XRC_SRQ:
+		return "CREATE_XRC_SRQ";
+	case MLX5_CMD_OP_DESTROY_XRC_SRQ:
+		return "DESTROY_XRC_SRQ";
+	case MLX5_CMD_OP_QUERY_XRC_SRQ:
+		return "QUERY_XRC_SRQ";
+	case MLX5_CMD_OP_ARM_XRC_SRQ:
+		return "ARM_XRC_SRQ";
 	case MLX5_CMD_OP_ALLOC_PD:
 		return "ALLOC_PD";
@@ -408,8 +398,8 @@ const char *mlx5_command_str(int command)
 	case MLX5_CMD_OP_ATTACH_TO_MCG:
 		return "ATTACH_TO_MCG";
-	case MLX5_CMD_OP_DETACH_FROM_MCG:
-		return "DETACH_FROM_MCG";
+	case MLX5_CMD_OP_DETTACH_FROM_MCG:
+		return "DETTACH_FROM_MCG";
 	case MLX5_CMD_OP_ALLOC_XRCD:
 		return "ALLOC_XRCD";
...
@@ -219,6 +219,24 @@ int mlx5_core_modify_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
 }
 EXPORT_SYMBOL(mlx5_core_modify_cq);
+int mlx5_core_modify_cq_moderation(struct mlx5_core_dev *dev,
+				   struct mlx5_core_cq *cq,
+				   u16 cq_period,
+				   u16 cq_max_count)
+{
+	struct mlx5_modify_cq_mbox_in in;
+
+	memset(&in, 0, sizeof(in));
+
+	in.cqn              = cpu_to_be32(cq->cqn);
+	in.ctx.cq_period    = cpu_to_be16(cq_period);
+	in.ctx.cq_max_count = cpu_to_be16(cq_max_count);
+	in.field_select     = cpu_to_be32(MLX5_CQ_MODIFY_PERIOD |
+					  MLX5_CQ_MODIFY_COUNT);
+
+	return mlx5_core_modify_cq(dev, cq, &in, sizeof(in));
+}
+
 int mlx5_init_cq_table(struct mlx5_core_dev *dev)
 {
 	struct mlx5_cq_table *table = &dev->priv.cq_table;
...
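
The new helper wraps MODIFY_CQ for the common interrupt-moderation case. A
rough usage sketch (the cq pointer and the 16/64 values are illustrative
assumptions, not defaults taken from this patchset):

	/* coalesce completions: fire the interrupt after up to 64 CQEs
	 * or after the CQ period expires, whichever comes first
	 */
	err = mlx5_core_modify_cq_moderation(mdev, &cq->mcq, 16, 64);
	if (err)
		mlx5_core_warn(mdev, "failed to modify CQ moderation\n");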
/*
* Copyright (c) 2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "en.h"
struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq)
{
	struct mlx5_cqwq *wq = &cq->wq;
	u32 ci = mlx5_cqwq_get_ci(wq);
	struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(wq, ci);
	int cqe_ownership_bit = cqe->op_own & MLX5_CQE_OWNER_MASK;
	int sw_ownership_val = mlx5_cqwq_get_wrap_cnt(wq) & 1;

	if (cqe_ownership_bit != sw_ownership_val)
		return NULL;

	mlx5_cqwq_pop(wq);

	/* ensure cqe content is read after cqe ownership bit */
	rmb();

	return cqe;
}

int mlx5e_napi_poll(struct napi_struct *napi, int budget)
{
	struct mlx5e_channel *c = container_of(napi, struct mlx5e_channel,
					       napi);
	bool busy = false;
	int i;

	clear_bit(MLX5E_CHANNEL_NAPI_SCHED, &c->flags);

	for (i = 0; i < c->num_tc; i++)
		busy |= mlx5e_poll_tx_cq(&c->sq[i].cq);

	busy |= mlx5e_poll_rx_cq(&c->rq.cq, budget);

	busy |= mlx5e_post_rx_wqes(c->rq.cq.sqrq);

	if (busy)
		return budget;

	napi_complete(napi);

	/* avoid losing completion event during/after polling cqs */
	if (test_bit(MLX5E_CHANNEL_NAPI_SCHED, &c->flags)) {
		napi_schedule(napi);
		return 0;
	}

	for (i = 0; i < c->num_tc; i++)
		mlx5e_cq_arm(&c->sq[i].cq);
	mlx5e_cq_arm(&c->rq.cq);

	return 0;
}

void mlx5e_completion_event(struct mlx5_core_cq *mcq)
{
	struct mlx5e_cq *cq = container_of(mcq, struct mlx5e_cq, mcq);

	set_bit(MLX5E_CQ_HAS_CQES, &cq->flags);
	set_bit(MLX5E_CHANNEL_NAPI_SCHED, &cq->channel->flags);
	barrier();
	napi_schedule(cq->napi);
}

void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event)
{
	struct mlx5e_cq *cq = container_of(mcq, struct mlx5e_cq, mcq);
	struct mlx5e_channel *c = cq->channel;
	struct mlx5e_priv *priv = c->priv;
	struct net_device *netdev = priv->netdev;

	netdev_err(netdev, "%s: cqn=0x%.6x event=0x%.2x\n",
		   __func__, mcq->cqn, event);
}
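
For context, the poll routine and the completion/error callbacks above are
what a channel-open path would hook into the networking stack and into the
mlx5 core CQ layer. A simplified sketch of that wiring, with the surrounding
structures and the NAPI weight of 64 assumed for illustration rather than
taken from this diff:

	/* attach the NAPI poll routine to the channel's napi context */
	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);

	/* let the core CQ layer call back into the Ethernet driver */
	cq->mcq.comp  = mlx5e_completion_event;
	cq->mcq.event = mlx5e_cq_error_event;

	napi_enable(&c->napi);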