Commit 020929d6 authored by David S. Miller

Merge branch 'hns3-vf'

Salil Mehta says:

====================
Hisilicon Network Subsystem 3 VF Ethernet Driver

This patch set adds support for the HNS3 (Hisilicon Network Subsystem 3)
Virtual Function Ethernet driver for the hip08 family of SoCs. The Physical
Function driver is already part of the Linux mainline.

This VF driver has its own Hardware Compatibility Layer and shares common/unified
ENET layer/client/ethtool code with the PF driver. It also supports a mailbox to
communicate with the HNS3 PF driver. The basic architecture of the VF driver is
derived from the PF driver. Just like the PF driver, this driver is also PCI
Express based.

This driver is under active development, and the HNS3 VF Ethernet driver will be
incrementally enhanced with new features.

High Level Architecture:

                     [ Ethtool ]
	                 |
                 [ Ethernet Client ] ... [ RoCE Client ]
                         |                     |
                   [ HNAE Device ]             |________
                         |                     |       |
    ---------------------------------------------      |
                                                       |
     [ HNAE3 Framework (Register/unregister) ]         |
                                                       |
    ---------------------------------------------      |
                         |                             |
                 [ VF HCLGE Layer ]                    |
                  |             |                      |
                  |             |                      |
                  |             |                      |
                  |     [ VF Mailbox (To PF via IMP) ] |
                  |             |                      |
             [ IMP command Interface ]  [ IMP command Interface ]
                        |                              |
                        |                              |
           (A B O V E  R U N S  O N  G U E S T  S Y S T E M)
    -------------------------------------------------------------
              Q E M U / V F I O / K V M (on Host System)
    -------------------------------------------------------------
            HIP08  H A R D W A R E (limited to VF by SMMU)

   [ IMP/Mgmt Processor (hardware common to system/cmd based) ]

                Fig 1.   HNS3 Virtual Function Driver

    	[ dcbnl ]  [ Ethtool ]
            |          |
   	[  Ethernet Client  ]  [ ODP/UIO Client ] . . .[ RoCE Client ]
              |_____________________|                 |
                         |                   _________|
                   [ HNAE Device ]           |        |
                         |                   |        |
    ---------------------------------------------     |
                                                      |
     [ HNAE3 Framework (Register/unregister) ]        |
                                                      |
    ---------------------------------------------     |
                         |                            |
                  [ HCLGE Layer ]                     |
         ________________|_________________           |
        |                |                 |          |
     [ DCB ]             |                 |          |
        |                |                 |          |
  [ Scheduler/Shaper ] [ MDIO ]      [ PF Mailbox ]   |
        |                |                 |          |
        |________________|_________________|          |
                         |                            |
             [ IMP command Interface ]     [ IMP command Interface ]
    ----------------------------------------------------------------
              HIP08  H A R D W A R E

  [ IMP/Mgmt Processor (hardware common to system/cmd based) ]

               Fig 2.    Existing HNS3 PF Driver (added with mailbox)

Change Log Summary:
Patch V4: Addressed SPDX related comment by Philippe Ombredanne
Patch V3: Addressed SPDX change requested by Philippe Ombredanne
Patch V2: 1. Addressed some comments by David Miller.
	  2. Addressed some internal comments on various patches
Patch V1: Initial Submit
====================
Acked-by: Philippe Ombredanne <pombredanne@nexb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parents be17bbec c1a81619
@@ -94,15 +94,6 @@ config HNS3_HCLGE
compatibility layer. The engine would be used in Hisilicon hip08 family of
SoCs and further upcoming SoCs.
config HNS3_ENET
tristate "Hisilicon HNS3 Ethernet Device Support"
depends on 64BIT && PCI
depends on HNS3 && HNS3_HCLGE
---help---
This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08
family of SoCs. This module depends upon HNAE3 driver to access the HNAE3
devices and their associated operations.
config HNS3_DCB
bool "Hisilicon HNS3 Data Center Bridge Support"
default n
@@ -112,4 +103,23 @@ config HNS3_DCB
If unsure, say N.
config HNS3_HCLGEVF
tristate "Hisilicon HNS3VF Acceleration Engine & Compatibility Layer Support"
depends on PCI_MSI
depends on HNS3
depends on HNS3_HCLGE
---help---
This selects the HNS3 VF drivers network acceleration engine & its hardware
compatibility layer. The engine would be used in Hisilicon hip08 family of
SoCs and further upcoming SoCs.
config HNS3_ENET
tristate "Hisilicon HNS3 Ethernet Device Support"
depends on 64BIT && PCI
depends on HNS3
---help---
This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08
family of SoCs. This module depends upon HNAE3 driver to access the HNAE3
devices and their associated operations.
endif # NET_VENDOR_HISILICON
# SPDX-License-Identifier: GPL-2.0+
#
# Makefile for the HISILICON network device drivers.
#
obj-$(CONFIG_HNS3) += hns3pf/
obj-$(CONFIG_HNS3) += hns3vf/
obj-$(CONFIG_HNS3) += hnae3.o
obj-$(CONFIG_HNS3_ENET) += hns3.o
hns3-objs = hns3_enet.o hns3_ethtool.o
hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
/* SPDX-License-Identifier: GPL-2.0+ */
/* Copyright (c) 2016-2017 Hisilicon Limited. */
#ifndef __HCLGE_MBX_H
#define __HCLGE_MBX_H
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/types.h>
#define HCLGE_MBX_VF_MSG_DATA_NUM 16
enum HCLGE_MBX_OPCODE {
HCLGE_MBX_RESET = 0x01, /* (VF -> PF) assert reset */
HCLGE_MBX_SET_UNICAST, /* (VF -> PF) set UC addr */
HCLGE_MBX_SET_MULTICAST, /* (VF -> PF) set MC addr */
HCLGE_MBX_SET_VLAN, /* (VF -> PF) set VLAN */
HCLGE_MBX_MAP_RING_TO_VECTOR, /* (VF -> PF) map ring-to-vector */
HCLGE_MBX_UNMAP_RING_TO_VECTOR, /* (VF -> PF) unmap ring-to-vector */
HCLGE_MBX_SET_PROMISC_MODE, /* (VF -> PF) set promiscuous mode */
HCLGE_MBX_SET_MACVLAN, /* (VF -> PF) set unicast filter */
HCLGE_MBX_API_NEGOTIATE, /* (VF -> PF) negotiate API version */
HCLGE_MBX_GET_QINFO, /* (VF -> PF) get queue config */
HCLGE_MBX_GET_TCINFO, /* (VF -> PF) get TC config */
HCLGE_MBX_GET_RETA, /* (VF -> PF) get RETA */
HCLGE_MBX_GET_RSS_KEY, /* (VF -> PF) get RSS key */
HCLGE_MBX_GET_MAC_ADDR, /* (VF -> PF) get MAC addr */
HCLGE_MBX_PF_VF_RESP, /* (PF -> VF) generate response to VF */
HCLGE_MBX_GET_BDNUM, /* (VF -> PF) get BD num */
HCLGE_MBX_GET_BUFSIZE, /* (VF -> PF) get buffer size */
HCLGE_MBX_GET_STREAMID, /* (VF -> PF) get stream id */
HCLGE_MBX_SET_AESTART, /* (VF -> PF) start ae */
HCLGE_MBX_SET_TSOSTATS, /* (VF -> PF) get tso stats */
HCLGE_MBX_LINK_STAT_CHANGE, /* (PF -> VF) link status has changed */
HCLGE_MBX_GET_BASE_CONFIG, /* (VF -> PF) get config */
HCLGE_MBX_BIND_FUNC_QUEUE, /* (VF -> PF) bind function and queue */
HCLGE_MBX_GET_LINK_STATUS, /* (VF -> PF) get link status */
HCLGE_MBX_QUEUE_RESET, /* (VF -> PF) reset queue */
};
/* below are per-VF mac-vlan subcodes */
enum hclge_mbx_mac_vlan_subcode {
HCLGE_MBX_MAC_VLAN_UC_MODIFY = 0, /* modify UC mac addr */
HCLGE_MBX_MAC_VLAN_UC_ADD, /* add a new UC mac addr */
HCLGE_MBX_MAC_VLAN_UC_REMOVE, /* remove a UC mac addr */
HCLGE_MBX_MAC_VLAN_MC_MODIFY, /* modify MC mac addr */
HCLGE_MBX_MAC_VLAN_MC_ADD, /* add new MC mac addr */
HCLGE_MBX_MAC_VLAN_MC_REMOVE, /* remove MC mac addr */
HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE, /* config func MTA enable */
};
/* below are per-VF vlan cfg subcodes */
enum hclge_mbx_vlan_cfg_subcode {
HCLGE_MBX_VLAN_FILTER = 0, /* set vlan filter */
HCLGE_MBX_VLAN_TX_OFF_CFG, /* set tx side vlan offload */
HCLGE_MBX_VLAN_RX_OFF_CFG, /* set rx side vlan offload */
};
#define HCLGE_MBX_MAX_MSG_SIZE 16
#define HCLGE_MBX_MAX_RESP_DATA_SIZE 8
struct hclgevf_mbx_resp_status {
struct mutex mbx_mutex; /* protects against contending sync cmd resp */
u32 origin_mbx_msg;
bool received_resp;
int resp_status;
u8 additional_info[HCLGE_MBX_MAX_RESP_DATA_SIZE];
};
struct hclge_mbx_vf_to_pf_cmd {
u8 rsv;
u8 mbx_src_vfid; /* Auto filled by IMP */
u8 rsv1[2];
u8 msg_len;
u8 rsv2[3];
u8 msg[HCLGE_MBX_MAX_MSG_SIZE];
};
struct hclge_mbx_pf_to_vf_cmd {
u8 dest_vfid;
u8 rsv[3];
u8 msg_len;
u8 rsv1[3];
u16 msg[8];
};
#define hclge_mbx_ring_ptr_move_crq(crq) \
(crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num)
#endif
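As an illustrative sketch (not part of the patch) of how the VF side fills this command, a unicast-MAC-add request would place the opcode in msg[0], the subcode in msg[1] and the payload after that, mirroring the helper in hclgevf_mbx.c further down; mac_addr here is a hypothetical u8[ETH_ALEN] buffer holding the address to add:

	struct hclge_mbx_vf_to_pf_cmd req = {0};

	/* mac_addr: hypothetical buffer with the address to add;
	 * mbx_src_vfid is filled in by the IMP, not by the driver
	 */
	req.msg[0] = HCLGE_MBX_SET_UNICAST;        /* opcode  */
	req.msg[1] = HCLGE_MBX_MAC_VLAN_UC_ADD;    /* subcode */
	memcpy(&req.msg[2], mac_addr, ETH_ALEN);   /* payload */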
@@ -196,9 +196,18 @@ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev)
const struct pci_device_id *id;
struct hnae3_ae_algo *ae_algo;
struct hnae3_client *client;
int ret = 0;
int ret = 0, lock_acquired;
/* we can get deadlocked if SRIOV is being enabled in the context of probe
* and probe gets called again in the same context. This can happen when
* pci_enable_sriov() is called to create VFs from the PF probe context.
* Therefore, for simplicity, we uniformly defer further probing in all
* cases where we detect contention.
*/
lock_acquired = mutex_trylock(&hnae3_common_lock);
if (!lock_acquired)
return -EPROBE_DEFER;
mutex_lock(&hnae3_common_lock);
list_add_tail(&ae_dev->node, &hnae3_ae_dev_list);
/* Check if there are matched ae_algo */
@@ -211,6 +220,7 @@ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev)
if (!ae_dev->ops) {
dev_err(&ae_dev->pdev->dev, "ae_dev ops are null\n");
ret = -EOPNOTSUPP;
goto out_err;
}
...
@@ -452,9 +452,10 @@ struct hnae3_unic_private_info {
struct hnae3_queue **tqp; /* array base of all TQPs of this instance */
};
#define HNAE3_SUPPORT_MAC_LOOPBACK 1
#define HNAE3_SUPPORT_PHY_LOOPBACK 2
#define HNAE3_SUPPORT_SERDES_LOOPBACK 4
#define HNAE3_SUPPORT_MAC_LOOPBACK BIT(0)
#define HNAE3_SUPPORT_PHY_LOOPBACK BIT(1)
#define HNAE3_SUPPORT_SERDES_LOOPBACK BIT(2)
#define HNAE3_SUPPORT_VF BIT(3)
struct hnae3_handle {
struct hnae3_client *client;
...
@@ -93,7 +93,7 @@ void hns3_dcbnl_setup(struct hnae3_handle *handle)
{
struct net_device *dev = handle->kinfo.netdev;
if (!handle->kinfo.dcb_ops)
if ((!handle->kinfo.dcb_ops) || (handle->flags & HNAE3_SUPPORT_VF))
return;
dev->dcbnl_ops = &hns3_dcbnl_ops;
...
@@ -52,6 +52,8 @@ static const struct pci_device_id hns3_pci_tbl[] = {
HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC),
HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_VF), 0},
{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_DCB_PFC_VF), 0},
/* required last entry */
{0, }
};
...
@@ -849,6 +849,21 @@ static int hns3_nway_reset(struct net_device *netdev)
return genphy_restart_aneg(phy);
}
static const struct ethtool_ops hns3vf_ethtool_ops = {
.get_drvinfo = hns3_get_drvinfo,
.get_ringparam = hns3_get_ringparam,
.set_ringparam = hns3_set_ringparam,
.get_strings = hns3_get_strings,
.get_ethtool_stats = hns3_get_stats,
.get_sset_count = hns3_get_sset_count,
.get_rxnfc = hns3_get_rxnfc,
.get_rxfh_key_size = hns3_get_rss_key_size,
.get_rxfh_indir_size = hns3_get_rss_indir_size,
.get_rxfh = hns3_get_rss,
.set_rxfh = hns3_set_rss,
.get_link_ksettings = hns3_get_link_ksettings,
};
static const struct ethtool_ops hns3_ethtool_ops = {
.self_test = hns3_self_test,
.get_drvinfo = hns3_get_drvinfo,
@@ -872,5 +887,10 @@ static const struct ethtool_ops hns3_ethtool_ops = {
void hns3_ethtool_set_ops(struct net_device *netdev)
{
netdev->ethtool_ops = &hns3_ethtool_ops;
struct hnae3_handle *h = hns3_get_handle(netdev);
if (h->flags & HNAE3_SUPPORT_VF)
netdev->ethtool_ops = &hns3vf_ethtool_ops;
else
netdev->ethtool_ops = &hns3_ethtool_ops;
}
# SPDX-License-Identifier: GPL-2.0+
#
# Makefile for the HISILICON network device drivers.
#
@@ -5,11 +6,6 @@
ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
obj-$(CONFIG_HNS3_HCLGE) += hclge.o
hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o
hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o
hclge-$(CONFIG_HNS3_DCB) += hclge_dcb.o
obj-$(CONFIG_HNS3_ENET) += hns3.o
hns3-objs = hns3_enet.o hns3_ethtool.o
hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
@@ -92,6 +92,11 @@
#define HCLGE_VECTOR0_CORERESET_INT_B 6
#define HCLGE_VECTOR0_IMPRESET_INT_B 7
/* Vector0 interrupt CMDQ event source register(RW) */
#define HCLGE_VECTOR0_CMDQ_SRC_REG 0x27100
/* CMDQ register bits for RX event(=MBX event) */
#define HCLGE_VECTOR0_RX_CMDQ_INT_B 1
enum HCLGE_DEV_STATE {
HCLGE_STATE_REINITING,
HCLGE_STATE_DOWN,
@@ -101,8 +106,8 @@ enum HCLGE_DEV_STATE {
HCLGE_STATE_SERVICE_SCHED,
HCLGE_STATE_RST_SERVICE_SCHED,
HCLGE_STATE_RST_HANDLING,
HCLGE_STATE_MBX_SERVICE_SCHED,
HCLGE_STATE_MBX_HANDLING,
HCLGE_STATE_MBX_IRQ,
HCLGE_STATE_MAX
};
@@ -479,6 +484,7 @@ struct hclge_dev {
struct timer_list service_timer;
struct work_struct service_task;
struct work_struct rst_service_task;
struct work_struct mbx_service_task;
bool cur_promisc;
int num_alloc_vfs; /* Actual number of VFs allocated */
@@ -539,8 +545,10 @@ int hclge_cfg_func_mta_filter(struct hclge_dev *hdev,
u8 func_id,
bool enable);
struct hclge_vport *hclge_get_vport(struct hnae3_handle *handle);
int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector,
struct hnae3_ring_chain_node *ring_chain);
int hclge_bind_ring_with_vector(struct hclge_vport *vport,
int vector_id, bool en,
struct hnae3_ring_chain_node *ring_chain);
static inline int hclge_get_queue_id(struct hnae3_queue *queue)
{
struct hclge_tqp *tqp = container_of(queue, struct hclge_tqp, q);
@@ -554,4 +562,7 @@ int hclge_set_vf_vlan_common(struct hclge_dev *vport, int vfid,
int hclge_buffer_alloc(struct hclge_dev *hdev);
int hclge_rss_init_hw(struct hclge_dev *hdev);
void hclge_mbx_handler(struct hclge_dev *hdev);
void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id);
#endif
This diff is collapsed.
# SPDX-License-Identifier: GPL-2.0+
#
# Makefile for the HISILICON network device drivers.
#
ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
\ No newline at end of file
// SPDX-License-Identifier: GPL-2.0+
// Copyright (c) 2016-2017 Hisilicon Limited.
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include "hclgevf_cmd.h"
#include "hclgevf_main.h"
#include "hnae3.h"
#define hclgevf_is_csq(ring) ((ring)->flag & HCLGEVF_TYPE_CSQ)
#define hclgevf_ring_to_dma_dir(ring) (hclgevf_is_csq(ring) ? \
DMA_TO_DEVICE : DMA_FROM_DEVICE)
#define cmq_ring_to_dev(ring) (&(ring)->dev->pdev->dev)
static int hclgevf_ring_space(struct hclgevf_cmq_ring *ring)
{
int ntc = ring->next_to_clean;
int ntu = ring->next_to_use;
int used;
used = (ntu - ntc + ring->desc_num) % ring->desc_num;
return ring->desc_num - used - 1;
}
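As a worked example of the arithmetic above: with the default desc_num of 1024 (HCLGEVF_NIC_CMQ_DESC_NUM), next_to_use = 5 and next_to_clean = 1020 give used = (5 - 1020 + 1024) % 1024 = 9, so the function reports 1024 - 9 - 1 = 1014 free descriptors; one slot is always left unused so that a full ring can be told apart from an empty one.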
static int hclgevf_cmd_csq_clean(struct hclgevf_hw *hw)
{
struct hclgevf_cmq_ring *csq = &hw->cmq.csq;
u16 ntc = csq->next_to_clean;
struct hclgevf_desc *desc;
int clean = 0;
u32 head;
desc = &csq->desc[ntc];
head = hclgevf_read_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG);
while (head != ntc) {
memset(desc, 0, sizeof(*desc));
ntc++;
if (ntc == csq->desc_num)
ntc = 0;
desc = &csq->desc[ntc];
clean++;
}
csq->next_to_clean = ntc;
return clean;
}
static bool hclgevf_cmd_csq_done(struct hclgevf_hw *hw)
{
u32 head;
head = hclgevf_read_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG);
return head == hw->cmq.csq.next_to_use;
}
static bool hclgevf_is_special_opcode(u16 opcode)
{
u16 spec_opcode[] = {0x30, 0x31, 0x32};
int i;
for (i = 0; i < ARRAY_SIZE(spec_opcode); i++) {
if (spec_opcode[i] == opcode)
return true;
}
return false;
}
static int hclgevf_alloc_cmd_desc(struct hclgevf_cmq_ring *ring)
{
int size = ring->desc_num * sizeof(struct hclgevf_desc);
ring->desc = kzalloc(size, GFP_KERNEL);
if (!ring->desc)
return -ENOMEM;
ring->desc_dma_addr = dma_map_single(cmq_ring_to_dev(ring), ring->desc,
size, DMA_BIDIRECTIONAL);
if (dma_mapping_error(cmq_ring_to_dev(ring), ring->desc_dma_addr)) {
ring->desc_dma_addr = 0;
kfree(ring->desc);
ring->desc = NULL;
return -ENOMEM;
}
return 0;
}
static void hclgevf_free_cmd_desc(struct hclgevf_cmq_ring *ring)
{
dma_unmap_single(cmq_ring_to_dev(ring), ring->desc_dma_addr,
ring->desc_num * sizeof(ring->desc[0]),
hclgevf_ring_to_dma_dir(ring));
ring->desc_dma_addr = 0;
kfree(ring->desc);
ring->desc = NULL;
}
static int hclgevf_init_cmd_queue(struct hclgevf_dev *hdev,
struct hclgevf_cmq_ring *ring)
{
struct hclgevf_hw *hw = &hdev->hw;
int ring_type = ring->flag;
u32 reg_val;
int ret;
ring->desc_num = HCLGEVF_NIC_CMQ_DESC_NUM;
spin_lock_init(&ring->lock);
ring->next_to_clean = 0;
ring->next_to_use = 0;
ring->dev = hdev;
/* allocate CSQ/CRQ descriptor */
ret = hclgevf_alloc_cmd_desc(ring);
if (ret) {
dev_err(&hdev->pdev->dev, "failed(%d) to alloc %s desc\n", ret,
(ring_type == HCLGEVF_TYPE_CSQ) ? "CSQ" : "CRQ");
return ret;
}
/* initialize the hardware registers with csq/crq dma-address,
* descriptor number, head & tail pointers
*/
switch (ring_type) {
case HCLGEVF_TYPE_CSQ:
reg_val = (u32)ring->desc_dma_addr;
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_L_REG, reg_val);
reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1);
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_H_REG, reg_val);
reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_DEPTH_REG, reg_val);
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG, 0);
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG, 0);
break;
case HCLGEVF_TYPE_CRQ:
reg_val = (u32)ring->desc_dma_addr;
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_L_REG, reg_val);
reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1);
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_H_REG, reg_val);
reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_DEPTH_REG, reg_val);
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_TAIL_REG, 0);
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_HEAD_REG, 0);
break;
}
return 0;
}
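A brief note on the base-address writes above: the low register takes the bottom 32 bits of the descriptor DMA address, and ((ring->desc_dma_addr >> 31) >> 1) yields bits 63:32 for the high register. Splitting the shift into two steps keeps the expression well defined even on configurations where dma_addr_t is only 32 bits wide, since shifting a 32-bit value right by 32 is undefined behaviour in C.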
void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
enum hclgevf_opcode_type opcode, bool is_read)
{
memset(desc, 0, sizeof(struct hclgevf_desc));
desc->opcode = cpu_to_le16(opcode);
desc->flag = cpu_to_le16(HCLGEVF_CMD_FLAG_NO_INTR |
HCLGEVF_CMD_FLAG_IN);
if (is_read)
desc->flag |= cpu_to_le16(HCLGEVF_CMD_FLAG_WR);
else
desc->flag &= cpu_to_le16(~HCLGEVF_CMD_FLAG_WR);
}
/* hclgevf_cmd_send - send command to command queue
* @hw: pointer to the hw struct
* @desc: prefilled descriptor for describing the command
* @num : the number of descriptors to be sent
*
* This is the main send command for command queue, it
* sends the queue, cleans the queue, etc
*/
int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
{
struct hclgevf_dev *hdev = (struct hclgevf_dev *)hw->hdev;
struct hclgevf_desc *desc_to_use;
bool complete = false;
u32 timeout = 0;
int handle = 0;
int status = 0;
u16 retval;
u16 opcode;
int ntc;
spin_lock_bh(&hw->cmq.csq.lock);
if (num > hclgevf_ring_space(&hw->cmq.csq)) {
spin_unlock_bh(&hw->cmq.csq.lock);
return -EBUSY;
}
/* Record the location of desc in the ring for this time
* which will be used by the hardware to write back
*/
ntc = hw->cmq.csq.next_to_use;
opcode = le16_to_cpu(desc[0].opcode);
while (handle < num) {
desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use];
*desc_to_use = desc[handle];
(hw->cmq.csq.next_to_use)++;
if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num)
hw->cmq.csq.next_to_use = 0;
handle++;
}
/* Write to hardware */
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG,
hw->cmq.csq.next_to_use);
/* If the command is sync, wait for the firmware to write back,
* if multi descriptors to be sent, use the first one to check
*/
if (HCLGEVF_SEND_SYNC(le16_to_cpu(desc->flag))) {
do {
if (hclgevf_cmd_csq_done(hw))
break;
udelay(1);
timeout++;
} while (timeout < hw->cmq.tx_timeout);
}
if (hclgevf_cmd_csq_done(hw)) {
complete = true;
handle = 0;
while (handle < num) {
/* Get the result of hardware write back */
desc_to_use = &hw->cmq.csq.desc[ntc];
desc[handle] = *desc_to_use;
if (likely(!hclgevf_is_special_opcode(opcode)))
retval = le16_to_cpu(desc[handle].retval);
else
retval = le16_to_cpu(desc[0].retval);
if ((enum hclgevf_cmd_return_status)retval ==
HCLGEVF_CMD_EXEC_SUCCESS)
status = 0;
else
status = -EIO;
hw->cmq.last_status = (enum hclgevf_cmd_status)retval;
ntc++;
handle++;
if (ntc == hw->cmq.csq.desc_num)
ntc = 0;
}
}
if (!complete)
status = -EAGAIN;
/* Clean the command send queue */
handle = hclgevf_cmd_csq_clean(hw);
if (handle != num) {
dev_warn(&hdev->pdev->dev,
"cleaned %d, need to clean %d\n", handle, num);
}
spin_unlock_bh(&hw->cmq.csq.lock);
return status;
}
static int hclgevf_cmd_query_firmware_version(struct hclgevf_hw *hw,
u32 *version)
{
struct hclgevf_query_version_cmd *resp;
struct hclgevf_desc desc;
int status;
resp = (struct hclgevf_query_version_cmd *)desc.data;
hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_FW_VER, 1);
status = hclgevf_cmd_send(hw, &desc, 1);
if (!status)
*version = le32_to_cpu(resp->firmware);
return status;
}
int hclgevf_cmd_init(struct hclgevf_dev *hdev)
{
u32 version;
int ret;
/* setup Tx write back timeout */
hdev->hw.cmq.tx_timeout = HCLGEVF_CMDQ_TX_TIMEOUT;
/* setup queue CSQ/CRQ rings */
hdev->hw.cmq.csq.flag = HCLGEVF_TYPE_CSQ;
ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.csq);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed(%d) to initialize CSQ ring\n", ret);
return ret;
}
hdev->hw.cmq.crq.flag = HCLGEVF_TYPE_CRQ;
ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.crq);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed(%d) to initialize CRQ ring\n", ret);
goto err_csq;
}
/* get firmware version */
ret = hclgevf_cmd_query_firmware_version(&hdev->hw, &version);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed(%d) to query firmware version\n", ret);
goto err_crq;
}
hdev->fw_version = version;
dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
return 0;
err_crq:
hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
err_csq:
hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
return ret;
}
void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
{
hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
}
/* SPDX-License-Identifier: GPL-2.0+ */
/* Copyright (c) 2016-2017 Hisilicon Limited. */
#ifndef __HCLGEVF_CMD_H
#define __HCLGEVF_CMD_H
#include <linux/io.h>
#include <linux/types.h>
#include "hnae3.h"
#define HCLGEVF_CMDQ_TX_TIMEOUT 200
#define HCLGEVF_CMDQ_RX_INVLD_B 0
#define HCLGEVF_CMDQ_RX_OUTVLD_B 1
struct hclgevf_hw;
struct hclgevf_dev;
struct hclgevf_desc {
__le16 opcode;
__le16 flag;
__le16 retval;
__le16 rsv;
__le32 data[6];
};
struct hclgevf_desc_cb {
dma_addr_t dma;
void *va;
u32 length;
};
struct hclgevf_cmq_ring {
dma_addr_t desc_dma_addr;
struct hclgevf_desc *desc;
struct hclgevf_desc_cb *desc_cb;
struct hclgevf_dev *dev;
u32 head;
u32 tail;
u16 buf_size;
u16 desc_num;
int next_to_use;
int next_to_clean;
u8 flag;
spinlock_t lock; /* Command queue lock */
};
enum hclgevf_cmd_return_status {
HCLGEVF_CMD_EXEC_SUCCESS = 0,
HCLGEVF_CMD_NO_AUTH = 1,
HCLGEVF_CMD_NOT_EXEC = 2,
HCLGEVF_CMD_QUEUE_FULL = 3,
};
enum hclgevf_cmd_status {
HCLGEVF_STATUS_SUCCESS = 0,
HCLGEVF_ERR_CSQ_FULL = -1,
HCLGEVF_ERR_CSQ_TIMEOUT = -2,
HCLGEVF_ERR_CSQ_ERROR = -3
};
struct hclgevf_cmq {
struct hclgevf_cmq_ring csq;
struct hclgevf_cmq_ring crq;
u16 tx_timeout; /* Tx timeout */
enum hclgevf_cmd_status last_status;
};
#define HCLGEVF_CMD_FLAG_IN_VALID_SHIFT 0
#define HCLGEVF_CMD_FLAG_OUT_VALID_SHIFT 1
#define HCLGEVF_CMD_FLAG_NEXT_SHIFT 2
#define HCLGEVF_CMD_FLAG_WR_OR_RD_SHIFT 3
#define HCLGEVF_CMD_FLAG_NO_INTR_SHIFT 4
#define HCLGEVF_CMD_FLAG_ERR_INTR_SHIFT 5
#define HCLGEVF_CMD_FLAG_IN BIT(HCLGEVF_CMD_FLAG_IN_VALID_SHIFT)
#define HCLGEVF_CMD_FLAG_OUT BIT(HCLGEVF_CMD_FLAG_OUT_VALID_SHIFT)
#define HCLGEVF_CMD_FLAG_NEXT BIT(HCLGEVF_CMD_FLAG_NEXT_SHIFT)
#define HCLGEVF_CMD_FLAG_WR BIT(HCLGEVF_CMD_FLAG_WR_OR_RD_SHIFT)
#define HCLGEVF_CMD_FLAG_NO_INTR BIT(HCLGEVF_CMD_FLAG_NO_INTR_SHIFT)
#define HCLGEVF_CMD_FLAG_ERR_INTR BIT(HCLGEVF_CMD_FLAG_ERR_INTR_SHIFT)
enum hclgevf_opcode_type {
/* Generic command */
HCLGEVF_OPC_QUERY_FW_VER = 0x0001,
/* TQP command */
HCLGEVF_OPC_QUERY_TX_STATUS = 0x0B03,
HCLGEVF_OPC_QUERY_RX_STATUS = 0x0B13,
HCLGEVF_OPC_CFG_COM_TQP_QUEUE = 0x0B20,
/* TSO cmd */
HCLGEVF_OPC_TSO_GENERIC_CONFIG = 0x0C01,
/* RSS cmd */
HCLGEVF_OPC_RSS_GENERIC_CONFIG = 0x0D01,
HCLGEVF_OPC_RSS_INDIR_TABLE = 0x0D07,
HCLGEVF_OPC_RSS_TC_MODE = 0x0D08,
/* Mailbox cmd */
HCLGEVF_OPC_MBX_VF_TO_PF = 0x2001,
};
#define HCLGEVF_TQP_REG_OFFSET 0x80000
#define HCLGEVF_TQP_REG_SIZE 0x200
struct hclgevf_tqp_map {
__le16 tqp_id; /* Absolute tqp id in this pf */
u8 tqp_vf; /* VF id */
#define HCLGEVF_TQP_MAP_TYPE_PF 0
#define HCLGEVF_TQP_MAP_TYPE_VF 1
#define HCLGEVF_TQP_MAP_TYPE_B 0
#define HCLGEVF_TQP_MAP_EN_B 1
u8 tqp_flag; /* Indicate it's pf or vf tqp */
__le16 tqp_vid; /* Virtual id in this pf/vf */
u8 rsv[18];
};
#define HCLGEVF_VECTOR_ELEMENTS_PER_CMD 10
enum hclgevf_int_type {
HCLGEVF_INT_TX = 0,
HCLGEVF_INT_RX,
HCLGEVF_INT_EVENT,
};
struct hclgevf_ctrl_vector_chain {
u8 int_vector_id;
u8 int_cause_num;
#define HCLGEVF_INT_TYPE_S 0
#define HCLGEVF_INT_TYPE_M 0x3
#define HCLGEVF_TQP_ID_S 2
#define HCLGEVF_TQP_ID_M (0x3fff << HCLGEVF_TQP_ID_S)
__le16 tqp_type_and_id[HCLGEVF_VECTOR_ELEMENTS_PER_CMD];
u8 vfid;
u8 resv;
};
struct hclgevf_query_version_cmd {
__le32 firmware;
__le32 firmware_rsv[5];
};
#define HCLGEVF_RSS_HASH_KEY_OFFSET 4
#define HCLGEVF_RSS_HASH_KEY_NUM 16
struct hclgevf_rss_config_cmd {
u8 hash_config;
u8 rsv[7];
u8 hash_key[HCLGEVF_RSS_HASH_KEY_NUM];
};
struct hclgevf_rss_input_tuple_cmd {
u8 ipv4_tcp_en;
u8 ipv4_udp_en;
u8 ipv4_stcp_en;
u8 ipv4_fragment_en;
u8 ipv6_tcp_en;
u8 ipv6_udp_en;
u8 ipv6_stcp_en;
u8 ipv6_fragment_en;
u8 rsv[16];
};
#define HCLGEVF_RSS_CFG_TBL_SIZE 16
struct hclgevf_rss_indirection_table_cmd {
u16 start_table_index;
u16 rss_set_bitmap;
u8 rsv[4];
u8 rss_result[HCLGEVF_RSS_CFG_TBL_SIZE];
};
#define HCLGEVF_RSS_TC_OFFSET_S 0
#define HCLGEVF_RSS_TC_OFFSET_M (0x3ff << HCLGEVF_RSS_TC_OFFSET_S)
#define HCLGEVF_RSS_TC_SIZE_S 12
#define HCLGEVF_RSS_TC_SIZE_M (0x7 << HCLGEVF_RSS_TC_SIZE_S)
#define HCLGEVF_RSS_TC_VALID_B 15
#define HCLGEVF_MAX_TC_NUM 8
struct hclgevf_rss_tc_mode_cmd {
u16 rss_tc_mode[HCLGEVF_MAX_TC_NUM];
u8 rsv[8];
};
#define HCLGEVF_LINK_STS_B 0
#define HCLGEVF_LINK_STATUS BIT(HCLGEVF_LINK_STS_B)
struct hclgevf_link_status_cmd {
u8 status;
u8 rsv[23];
};
#define HCLGEVF_RING_ID_MASK 0x3ff
#define HCLGEVF_TQP_ENABLE_B 0
struct hclgevf_cfg_com_tqp_queue_cmd {
__le16 tqp_id;
__le16 stream_id;
u8 enable;
u8 rsv[19];
};
struct hclgevf_cfg_tx_queue_pointer_cmd {
__le16 tqp_id;
__le16 tx_tail;
__le16 tx_head;
__le16 fbd_num;
__le16 ring_offset;
u8 rsv[14];
};
#define HCLGEVF_TSO_ENABLE_B 0
struct hclgevf_cfg_tso_status_cmd {
u8 tso_enable;
u8 rsv[23];
};
#define HCLGEVF_TYPE_CRQ 0
#define HCLGEVF_TYPE_CSQ 1
#define HCLGEVF_NIC_CSQ_BASEADDR_L_REG 0x27000
#define HCLGEVF_NIC_CSQ_BASEADDR_H_REG 0x27004
#define HCLGEVF_NIC_CSQ_DEPTH_REG 0x27008
#define HCLGEVF_NIC_CSQ_TAIL_REG 0x27010
#define HCLGEVF_NIC_CSQ_HEAD_REG 0x27014
#define HCLGEVF_NIC_CRQ_BASEADDR_L_REG 0x27018
#define HCLGEVF_NIC_CRQ_BASEADDR_H_REG 0x2701c
#define HCLGEVF_NIC_CRQ_DEPTH_REG 0x27020
#define HCLGEVF_NIC_CRQ_TAIL_REG 0x27024
#define HCLGEVF_NIC_CRQ_HEAD_REG 0x27028
#define HCLGEVF_NIC_CMQ_EN_B 16
#define HCLGEVF_NIC_CMQ_ENABLE BIT(HCLGEVF_NIC_CMQ_EN_B)
#define HCLGEVF_NIC_CMQ_DESC_NUM 1024
#define HCLGEVF_NIC_CMQ_DESC_NUM_S 3
#define HCLGEVF_NIC_CMDQ_INT_SRC_REG 0x27100
static inline void hclgevf_write_reg(void __iomem *base, u32 reg, u32 value)
{
writel(value, base + reg);
}
static inline u32 hclgevf_read_reg(u8 __iomem *base, u32 reg)
{
u8 __iomem *reg_addr = READ_ONCE(base);
return readl(reg_addr + reg);
}
#define hclgevf_write_dev(a, reg, value) \
hclgevf_write_reg((a)->io_base, (reg), (value))
#define hclgevf_read_dev(a, reg) \
hclgevf_read_reg((a)->io_base, (reg))
#define HCLGEVF_SEND_SYNC(flag) \
((flag) & HCLGEVF_CMD_FLAG_NO_INTR)
int hclgevf_cmd_init(struct hclgevf_dev *hdev);
void hclgevf_cmd_uninit(struct hclgevf_dev *hdev);
int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num);
void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
enum hclgevf_opcode_type opcode,
bool is_read);
#endif
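As a hedged sketch (not part of the patch) of how these definitions compose, enabling TSO through the generic command path might look roughly like this, assuming an initialized struct hclgevf_dev *hdev and with error handling elided; the firmware-version query in hclgevf_cmd.c above follows the same pattern:

	/* hdev: assumed to be an already initialized struct hclgevf_dev */
	struct hclgevf_cfg_tso_status_cmd *req;
	struct hclgevf_desc desc;

	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_TSO_GENERIC_CONFIG,
				     false);
	req = (struct hclgevf_cfg_tso_status_cmd *)desc.data;
	req->tso_enable |= BIT(HCLGEVF_TSO_ENABLE_B);   /* bit 0: enable TSO */

	return hclgevf_cmd_send(&hdev->hw, &desc, 1);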
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0+ */
/* Copyright (c) 2016-2017 Hisilicon Limited. */
#ifndef __HCLGEVF_MAIN_H
#define __HCLGEVF_MAIN_H
#include <linux/fs.h>
#include <linux/types.h>
#include "hclge_mbx.h"
#include "hclgevf_cmd.h"
#include "hnae3.h"
#define HCLGEVF_MOD_VERSION "v1.0"
#define HCLGEVF_DRIVER_NAME "hclgevf"
#define HCLGEVF_ROCEE_VECTOR_NUM 0
#define HCLGEVF_MISC_VECTOR_NUM 0
#define HCLGEVF_INVALID_VPORT 0xffff
/* This number actually depends upon the total number of VFs
* created by the physical function. But the maximum number of
* possible vectors per VF is {VFn(1-32), VECTn(32 + 1)}.
*/
#define HCLGEVF_MAX_VF_VECTOR_NUM (32 + 1)
#define HCLGEVF_VECTOR_REG_BASE 0x20000
#define HCLGEVF_MISC_VECTOR_REG_BASE 0x20400
#define HCLGEVF_VECTOR_REG_OFFSET 0x4
#define HCLGEVF_VECTOR_VF_OFFSET 0x100000
/* Vector0 interrupt CMDQ event source register(RW) */
#define HCLGEVF_VECTOR0_CMDQ_SRC_REG 0x27100
/* CMDQ register bits for RX event(=MBX event) */
#define HCLGEVF_VECTOR0_RX_CMDQ_INT_B 1
#define HCLGEVF_TQP_RESET_TRY_TIMES 10
#define HCLGEVF_RSS_IND_TBL_SIZE 512
#define HCLGEVF_RSS_SET_BITMAP_MSK 0xffff
#define HCLGEVF_RSS_KEY_SIZE 40
#define HCLGEVF_RSS_HASH_ALGO_TOEPLITZ 0
#define HCLGEVF_RSS_HASH_ALGO_SIMPLE 1
#define HCLGEVF_RSS_HASH_ALGO_SYMMETRIC 2
#define HCLGEVF_RSS_HASH_ALGO_MASK 0xf
#define HCLGEVF_RSS_CFG_TBL_NUM \
(HCLGEVF_RSS_IND_TBL_SIZE / HCLGEVF_RSS_CFG_TBL_SIZE)
/* states of hclgevf device & tasks */
enum hclgevf_states {
/* device states */
HCLGEVF_STATE_DOWN,
HCLGEVF_STATE_DISABLED,
/* task states */
HCLGEVF_STATE_SERVICE_SCHED,
HCLGEVF_STATE_MBX_SERVICE_SCHED,
HCLGEVF_STATE_MBX_HANDLING,
};
#define HCLGEVF_MPF_ENBALE 1
struct hclgevf_mac {
u8 mac_addr[ETH_ALEN];
int link;
};
struct hclgevf_hw {
void __iomem *io_base;
int num_vec;
struct hclgevf_cmq cmq;
struct hclgevf_mac mac;
void *hdev; /* hclgevf device it is part of */
};
/* TQP stats */
struct hlcgevf_tqp_stats {
/* query_tqp_tx_queue_statistics ,opcode id: 0x0B03 */
u64 rcb_tx_ring_pktnum_rcd; /* 32bit */
/* query_tqp_rx_queue_statistics ,opcode id: 0x0B13 */
u64 rcb_rx_ring_pktnum_rcd; /* 32bit */
};
struct hclgevf_tqp {
struct device *dev; /* device for DMA mapping */
struct hnae3_queue q;
struct hlcgevf_tqp_stats tqp_stats;
u16 index; /* global index in a NIC controller */
bool alloced;
};
struct hclgevf_cfg {
u8 vmdq_vport_num;
u8 tc_num;
u16 tqp_desc_num;
u16 rx_buf_len;
u8 phy_addr;
u8 media_type;
u8 mac_addr[ETH_ALEN];
u32 numa_node_map;
};
struct hclgevf_rss_cfg {
u8 rss_hash_key[HCLGEVF_RSS_KEY_SIZE]; /* user configured hash keys */
u32 hash_algo;
u32 rss_size;
u8 hw_tc_map;
u8 rss_indirection_tbl[HCLGEVF_RSS_IND_TBL_SIZE]; /* shadow table */
};
struct hclgevf_misc_vector {
u8 __iomem *addr;
int vector_irq;
};
struct hclgevf_dev {
struct pci_dev *pdev;
struct hnae3_ae_dev *ae_dev;
struct hclgevf_hw hw;
struct hclgevf_misc_vector misc_vector;
struct hclgevf_rss_cfg rss_cfg;
unsigned long state;
u32 fw_version;
u16 num_tqps; /* num task queue pairs of this PF */
u16 alloc_rss_size; /* allocated RSS task queue */
u16 rss_size_max; /* HW defined max RSS task queue */
u16 num_alloc_vport; /* num vports this driver supports */
u32 numa_node_mask;
u16 rx_buf_len;
u16 num_desc;
u8 hw_tc_map;
u16 num_msi;
u16 num_msi_left;
u16 num_msi_used;
u32 base_msi_vector;
u16 *vector_status;
int *vector_irq;
bool accept_mta_mc; /* whether to accept mta filter multicast */
struct hclgevf_mbx_resp_status mbx_resp; /* mailbox response */
struct timer_list service_timer;
struct work_struct service_task;
struct work_struct mbx_service_task;
struct hclgevf_tqp *htqp;
struct hnae3_handle nic;
struct hnae3_handle roce;
struct hnae3_client *nic_client;
struct hnae3_client *roce_client;
u32 flag;
};
int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
const u8 *msg_data, u8 msg_len, bool need_resp,
u8 *resp_data, u16 resp_len);
void hclgevf_mbx_handler(struct hclgevf_dev *hdev);
void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state);
#endif
// SPDX-License-Identifier: GPL-2.0+
// Copyright (c) 2016-2017 Hisilicon Limited.
#include "hclge_mbx.h"
#include "hclgevf_main.h"
#include "hnae3.h"
static void hclgevf_reset_mbx_resp_status(struct hclgevf_dev *hdev)
{
/* this function should be called with mbx_resp.mbx_mutex held
* to protect the received_resp from race conditions
*/
hdev->mbx_resp.received_resp = false;
hdev->mbx_resp.origin_mbx_msg = 0;
hdev->mbx_resp.resp_status = 0;
memset(hdev->mbx_resp.additional_info, 0, HCLGE_MBX_MAX_RESP_DATA_SIZE);
}
/* hclgevf_get_mbx_resp: used to get a response from PF after VF sends a mailbox
* message to PF.
* @hdev: pointer to struct hclgevf_dev
* @code0: the message opcode the VF sent to the PF
* @code1: the message subcode the VF sent to the PF
* @resp_data: pointer to store the response data from the PF
* @resp_len: the length of resp_data
*/
static int hclgevf_get_mbx_resp(struct hclgevf_dev *hdev, u16 code0, u16 code1,
u8 *resp_data, u16 resp_len)
{
#define HCLGEVF_MAX_TRY_TIMES 500
#define HCLGEVF_SLEEP_USCOEND 1000
struct hclgevf_mbx_resp_status *mbx_resp;
u16 r_code0, r_code1;
int i = 0;
if (resp_len > HCLGE_MBX_MAX_RESP_DATA_SIZE) {
dev_err(&hdev->pdev->dev,
"VF mbx response len(=%d) exceeds maximum(=%d)\n",
resp_len,
HCLGE_MBX_MAX_RESP_DATA_SIZE);
return -EINVAL;
}
while ((!hdev->mbx_resp.received_resp) && (i < HCLGEVF_MAX_TRY_TIMES)) {
udelay(HCLGEVF_SLEEP_USCOEND);
i++;
}
if (i >= HCLGEVF_MAX_TRY_TIMES) {
dev_err(&hdev->pdev->dev,
"VF could not get mbx resp(=%d) from PF in %d tries\n",
hdev->mbx_resp.received_resp, i);
return -EIO;
}
mbx_resp = &hdev->mbx_resp;
r_code0 = (u16)(mbx_resp->origin_mbx_msg >> 16);
r_code1 = (u16)(mbx_resp->origin_mbx_msg & 0xff);
if (resp_data)
memcpy(resp_data, &mbx_resp->additional_info[0], resp_len);
hclgevf_reset_mbx_resp_status(hdev);
if (!(r_code0 == code0 && r_code1 == code1 && !mbx_resp->resp_status)) {
dev_err(&hdev->pdev->dev,
"VF could not match resp code(code0=%d,code1=%d), %d",
code0, code1, mbx_resp->resp_status);
return -EIO;
}
return 0;
}
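As a rough bound on the polling above: with HCLGEVF_MAX_TRY_TIMES = 500 and a 1000 us udelay() per iteration, the VF busy-waits for up to about 500 ms for the PF's response before giving up and returning -EIO.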
int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
const u8 *msg_data, u8 msg_len, bool need_resp,
u8 *resp_data, u16 resp_len)
{
struct hclge_mbx_vf_to_pf_cmd *req;
struct hclgevf_desc desc;
int status;
req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
/* first two bytes are reserved for code & subcode */
if (msg_len > (HCLGE_MBX_MAX_MSG_SIZE - 2)) {
dev_err(&hdev->pdev->dev,
"VF send mbx msg fail, msg len %d exceeds max len %d\n",
msg_len, HCLGE_MBX_MAX_MSG_SIZE);
return -EINVAL;
}
hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
req->msg[0] = code;
req->msg[1] = subcode;
memcpy(&req->msg[2], msg_data, msg_len);
/* synchronous send */
if (need_resp) {
mutex_lock(&hdev->mbx_resp.mbx_mutex);
hclgevf_reset_mbx_resp_status(hdev);
status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
if (status) {
dev_err(&hdev->pdev->dev,
"VF failed(=%d) to send mbx message to PF\n",
status);
mutex_unlock(&hdev->mbx_resp.mbx_mutex);
return status;
}
status = hclgevf_get_mbx_resp(hdev, code, subcode, resp_data,
resp_len);
mutex_unlock(&hdev->mbx_resp.mbx_mutex);
} else {
/* asynchronous send */
status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
if (status) {
dev_err(&hdev->pdev->dev,
"VF failed(=%d) to send mbx message to PF\n",
status);
return status;
}
}
return status;
}
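A hedged usage sketch (not part of the patch): a synchronous query of the VF's MAC address through this helper could look as follows, assuming an initialized struct hclgevf_dev *hdev; the PF answers with an HCLGE_MBX_PF_VF_RESP message and the six address bytes are copied into resp_data by hclgevf_get_mbx_resp():

	/* hdev: assumed to be an already initialized struct hclgevf_dev */
	u8 mac[ETH_ALEN];
	int ret;

	ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_MAC_ADDR, 0,
				   NULL, 0, true, mac, ETH_ALEN);
	if (ret)
		dev_err(&hdev->pdev->dev,
			"failed(%d) to query MAC address from PF\n", ret);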
void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
{
struct hclgevf_mbx_resp_status *resp;
struct hclge_mbx_pf_to_vf_cmd *req;
struct hclgevf_cmq_ring *crq;
struct hclgevf_desc *desc;
u16 link_status, flag;
u8 *temp;
int i;
resp = &hdev->mbx_resp;
crq = &hdev->hw.cmq.crq;
flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
while (hnae_get_bit(flag, HCLGEVF_CMDQ_RX_OUTVLD_B)) {
desc = &crq->desc[crq->next_to_use];
req = (struct hclge_mbx_pf_to_vf_cmd *)desc->data;
switch (req->msg[0]) {
case HCLGE_MBX_PF_VF_RESP:
if (resp->received_resp)
dev_warn(&hdev->pdev->dev,
"VF mbx resp flag not clear(%d)\n",
req->msg[1]);
resp->received_resp = true;
resp->origin_mbx_msg = (req->msg[1] << 16);
resp->origin_mbx_msg |= req->msg[2];
resp->resp_status = req->msg[3];
temp = (u8 *)&req->msg[4];
for (i = 0; i < HCLGE_MBX_MAX_RESP_DATA_SIZE; i++) {
resp->additional_info[i] = *temp;
temp++;
}
break;
case HCLGE_MBX_LINK_STAT_CHANGE:
link_status = le16_to_cpu(req->msg[1]);
/* update upper layer with new link status */
hclgevf_update_link_status(hdev, link_status);
break;
default:
dev_err(&hdev->pdev->dev,
"VF received unsupported(%d) mbx msg from PF\n",
req->msg[0]);
break;
}
hclge_mbx_ring_ptr_move_crq(crq);
flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
}
/* Write back CMDQ_RQ header pointer, M7 needs this pointer */
hclgevf_write_dev(&hdev->hw, HCLGEVF_NIC_CRQ_HEAD_REG,
crq->next_to_use);
}