Commit b6a7eeb4 authored by David S. Miller

Merge branch '200GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
Introduce Intel IDPF driver

Pavan Kumar Linga says:

This patch series introduces the Intel Infrastructure Data Path Function
(IDPF) driver. It is used for both physical and virtual functions. Except
for some of the device operations, the rest of the functionality is the
same for both PF and VF. IDPF uses virtchnl version 2 opcodes and
structures defined in the virtchnl2 header file, which helps the driver
learn the capabilities and register offsets from the device
Control Plane (CP) instead of assuming default values.

The series is organized to follow the driver init flow through to interface
open. To start with, probe gets called and kicks off the driver
initialization by spawning the 'vc_event_task' work queue, which in turn
calls the 'hard reset' function. As part of that, the mailbox is
initialized, which is used to send/receive the virtchnl messages to/from
the CP. Once that is done, 'core init' kicks in, which requests all the
required global resources from the CP and spawns the 'init_task' work
queue to create the vports.

Based on the capability information received, the driver creates the
requested number of vports (one or many), where each vport is associated
with a netdev. Each vport also has its own resources such as queues,
vectors, etc. From there, the rest of the netdev_ops and the data path
are added.

IDPF implements both the single queue model, which is the traditional
queueing model, and the split queue model. In the split queue model,
separate queues are used for completion descriptors and buffers, which
helps to implement out-of-order completions. It also helps to implement
asymmetric queues; for example, multiple RX completion queues can be
processed by a single RX buffer queue, and multiple TX buffer queues can
be processed by a single TX completion queue. In the single queue model,
the same queue is used for both descriptor completions and buffer
completions. The driver also supports features such as generic checksum
offload and generic receive offload (hardware GRO).
---
v7:
Patch 2:
 * removed pci_[disable|enable]_pcie_error_reporting as they are dropped
   from the core
Patch 4, 9:
 * used 'kasprintf' instead of 'snprintf' to avoid providing explicit
   character string size which also fixes "-Wformat-truncation" warnings
Patch 14:
 * used 'ethtool_sprintf' instead of 'snprintf' to avoid providing explicit
   character string size which also fixes "-Wformat-truncation" warning
 * added a string format argument to 'ethtool_sprintf' to avoid a warning on
   "-Wformat-security"

v6: https://lore.kernel.org/netdev/20230825235954.894050-1-pavan.kumar.linga@intel.com/
Note: 'Acked-by' was only added to patches 1, 2, 12 and not to the other
   patches because of the changes in v6

Patch 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15:
 * renamed 'reset_lock' to 'vport_ctrl_lock' to reflect the lock usage
 * to avoid defensive programming, used 'vport_ctrl_lock' for the user
   callbacks that access the 'vport' to prevent the hardware reset thread
   from releasing the 'vport' while a user callback is in progress
 * added some variables to netdev private structure to avoid vport access
   if possible from ethtool and ndo callbacks
 * moved 'mac_filter_list_lock' and MAC related flags to vport_config
   structure and refactored mac filter flow to handle asynchronous
   ndo mac filter callbacks
 * stopped the queues before starting the reset flow to avoid TX hangs
 * removed 'sw_mutex' and 'stop_mutex' as they are not needed anymore
 * added missing clear bit in 'init_task' error path
 * renamed labels appropriately
Patch 8:
 * replaced page_pool_put_page with page_pool_put_full_page
 * for the page pool max_len, used PAGE_SIZE
Patch 10, 11, 13:
 * made use of the 'netif_txq_maybe_stop', '__netif_txq_completed_wake'
   helper macros
Patch 13:
 * removed IDPF_HR_RESET_IN_PROG flag check in idpf_tx_singleq_start
   as it is defensive
Patch 14:
 * removed max descriptor check as the core does that
 * removed unnecessary error messages
 * removed the stats that are common between the ones reported by ethtool
   and ip link
 * replaced snprintf with ethtool_sprintf
 * added a comment to explain the reason for the max queue check
 * as the netdev queues are set on alloc, there is no need to set
   them again on reset unless there is a queue change, so move the
   'idpf_set_real_num_queues' to 'idpf_initiate_soft_reset'
Patch 15:
 * reworded the 'configure SRIOV' in the commit message

v5: https://lore.kernel.org/netdev/20230816004305.216136-1-anthony.l.nguyen@intel.com/
Most Patches:
 * wrapped lines to the 80-char limit where doing so doesn't affect readability
Patch 12:
 * in skb_add_rx_frag, offset 'headlen' w.r.t page_offset when adding a
   frag to avoid adding the header again
Patch 14:
 * added NULL check for 'rxq' when dereferencing it in page_pool_get_stats

v4: https://lore.kernel.org/netdev/20230808003416.3805142-1-anthony.l.nguyen@intel.com/
Patch 1:
 * s/virtcnl/virtchnl
 * removed the kernel doc for the error code definitions that don't exist
 * reworded the summary part in the virtchnl2 header
Patch 3:
 * don't set local variable to NULL on error
 * renamed sq_send_command_out label with err_unlock
 * don't use __GFP_ZERO in dma_alloc_coherent
Patch 4:
 * introduced mailbox workqueue to process mailbox interrupts
Patch 3, 4, 5, 6, 7, 8, 9, 11, 15:
 * removed unnecessary variable 0-init
Patch 3, 5, 7, 8, 9, 15:
 * removed defensive programming checks wherever applicable
 * removed IDPF_CAP_FIELD_LAST as it can be treated as defensive
   programming
Patch 3, 4, 5, 6, 7:
 * replaced IDPF_DFLT_MBX_BUF_SIZE with IDPF_CTLQ_MAX_BUF_LEN
Patch 2 to 15:
 * add kernel-doc for idpf.h and idpf_txrx.h enums and structures
Patch 4, 5, 15:
 * adjusted the destroy sequence of the workqueues as per the alloc
   sequence
Patch 4, 5, 9, 15:
 * scrub unnecessary flags in 'idpf_flags'
   - IDPF_REMOVE_IN_PROG flag can take care of the cases where
     IDPF_REL_RES_IN_PROG is used, removed the later one
   - IDPF_REQ_[TX|RX]_SPLITQ are replaced with struct variables
   - IDPF_CANCEL_[SERVICE|STATS]_TASK are redundant as the work queue
     doesn't get rescheduled again after 'cancel_delayed_work_sync'
   - IDPF_HR_CORE_RESET is removed as there is no set_bit for this flag
   - IDPF_MB_INTR_TRIGGER is removed as it is not needed anymore with the
     mailbox workqueue implementation
Patch 7 to 15:
 * replaced the custom buffer recycling code with page pool API
 * switched the header split buffer allocations from using a bunch of
   pages to using one large chunk of DMA memory
 * reordered some of the flows in vport_open to support page pool
Patch 8, 12:
 * don't suppress the alloc errors by using __GFP_NOWARN
Patch 9:
 * removed dyn_ctl_clrpba_m as it is not being used
Patch 14:
 * introduced enum idpf_vport_reset_cause instead of using vport flags
 * introduced page pool stats

v3: https://lore.kernel.org/netdev/20230616231341.2885622-1-anthony.l.nguyen@intel.com/
Patch 5:
 * instead of void, used 'struct virtchnl2_create_vport' type for
   vport_params_recvd and vport_params_reqd and removed the typecasting
 * used u16/u32 as needed instead of int for variables which cannot be
   negative and updated them in all the places wherever applicable
Patch 6:
 * changed the commit message to "add ptypes and MAC filter support"
 * used the sender Signed-off-by as the last tag on all the patches
 * removed unnecessary variables 0-init
 * instead of fixing the code in this commit, fixed it in the commit
   where the change was introduced first
 * moved the get_type_info struct onto the stack instead of allocating memory
 * moved mutex_lock and ptype_info memory alloc outside while loop and
   adjusted the return flow
 * used 'break' instead of 'continue' in ptype id switch case

v2: https://lore.kernel.org/netdev/20230614171428.1504179-1-anthony.l.nguyen@intel.com/
Patch 2:
 * added "Intel(R)" to the DRV_SUMMARY and Makefile.
Patch 4, 5, 6, 15:
 * replaced IDPF_VC_MSG_PENDING flag with mutex 'vc_buf_lock' for the
   adapter related virtchnl opcodes.
 * get the mutex lock in the virtchnl send thread itself instead of
   in receive thread.
Patch 5, 6, 7, 8, 9, 11, 14, 15:
 * replaced IDPF_VPORT_VC_MSG_PENDING flag with mutex 'vc_buf_lock' for
   the vport related virtchnl opcodes.
 * get the mutex lock in the virtchnl send thread itself instead of
   in receive thread.
Patch 6:
 * converted get_ptype_info logic from 1:N to 1:1 message exchange for
   better handling of mutex lock.
Patch 15:
 * introduced 'stats_lock' spinlock to avoid concurrent stats update.

v1: https://lore.kernel.org/netdev/20230530234501.2680230-1-anthony.l.nguyen@intel.com/

====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 2fa6175d a251eee6
@@ -32,6 +32,7 @@ Contents:
intel/e1000
intel/e1000e
intel/fm10k
intel/idpf
intel/igb
intel/igbvf
intel/ixgbe
.. SPDX-License-Identifier: GPL-2.0+
==========================================================================
idpf Linux* Base Driver for the Intel(R) Infrastructure Data Path Function
==========================================================================
Intel idpf Linux driver.
Copyright(C) 2023 Intel Corporation.
.. contents::
The idpf driver serves as both the Physical Function (PF) and Virtual Function
(VF) driver for the Intel(R) Infrastructure Data Path Function.
Driver information can be obtained using ethtool, lspci, and ip.
For questions related to hardware requirements, refer to the documentation
supplied with your Intel adapter. All hardware requirements listed apply to use
with Linux.
Identifying Your Adapter
========================
For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
http://www.intel.com/support
Additional Features and Configurations
======================================
ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. If you don't have one yet, you can
obtain it at:
https://kernel.org/pub/software/network/ethtool/
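For example, to display the driver name, version, and firmware information
for an interface::
# ethtool -i <ethX>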
Viewing Link Messages
---------------------
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following::
# dmesg -n 8
.. note::
This setting is not saved across reboots.
Jumbo Frames
------------
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.
Use the ip command to increase the MTU size. For example, enter the following
where <ethX> is the interface number::
# ip link set mtu 9000 dev <ethX>
# ip link set up dev <ethX>
.. note::
The maximum MTU setting for jumbo frames is 9706. This corresponds to the
maximum jumbo frame size of 9728 bytes.
.. note::
This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.
.. note::
Packet loss may have a greater impact on throughput when you use jumbo
frames. If you observe a drop in performance after enabling jumbo frames,
enabling flow control may mitigate the issue.
Performance Optimization
========================
Driver defaults are meant to fit a wide variety of workloads, but if further
optimization is required, we recommend experimenting with the following
settings.
Interrupt Rate Limiting
-----------------------
This driver supports an adaptive interrupt throttle rate (ITR) mechanism that
is tuned for general workloads. The user can customize the interrupt rate
control for specific workloads, via ethtool, adjusting the number of
microseconds between interrupts.
To set the interrupt rate manually, you must disable adaptive mode::
# ethtool -C <ethX> adaptive-rx off adaptive-tx off
For lower CPU utilization:
- Disable adaptive ITR and lower Rx and Tx interrupts. The examples below
affect every queue of the specified interface.
- Setting rx-usecs and tx-usecs to 80 will limit interrupts to about
12,500 interrupts per second per queue::
# ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80
tx-usecs 80
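The figure follows from the interval: 1,000,000 microseconds per second divided
by 80 microseconds per interrupt gives 12,500 interrupts per second per queue.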
For reduced latency:
- Disable adaptive ITR and ITR by setting rx-usecs and tx-usecs to 0
using ethtool::
# ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0
tx-usecs 0
Per-queue interrupt rate settings:
- The following examples are for queues 1 and 3, but you can adjust other
queues.
- To disable Rx adaptive ITR and set static Rx ITR to 10 microseconds or
about 100,000 interrupts/second, for queues 1 and 3::
# ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
rx-usecs 10
- To show the current coalesce settings for queues 1 and 3::
# ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce
Virtualized Environments
------------------------
In addition to the other suggestions in this section, the following may be
helpful to optimize performance in VMs.
- Using the appropriate mechanism (vcpupin) in the VM, pin the CPUs to
individual LCPUs, making sure to use a set of CPUs included in the
device's local_cpulist: /sys/class/net/<ethX>/device/local_cpulist.
- Configure as many Rx/Tx queues in the VM as available. (See the idpf driver
documentation for the number of queues supported.) For example::
# ethtool -L <virt_interface> rx <max> tx <max>
Support
=======
For general information, go to the Intel support website at:
http://www.intel.com/support/
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to intel-wired-lan@lists.osuosl.org.
Trademarks
==========
Intel is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and/or other countries.
* Other names and brands may be claimed as the property of others.
@@ -355,5 +355,17 @@ config IGC
To compile this driver as a module, choose M here. The module
will be called igc.
config IDPF
tristate "Intel(R) Infrastructure Data Path Function Support"
depends on PCI_MSI
select DIMLIB
select PAGE_POOL
select PAGE_POOL_STATS
help
This driver supports Intel(R) Infrastructure Data Path Function
devices.
To compile this driver as a module, choose M here. The module
will be called idpf.
endif # NET_VENDOR_INTEL
@@ -15,3 +15,4 @@ obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IAVF) += iavf/
obj-$(CONFIG_FM10K) += fm10k/
obj-$(CONFIG_ICE) += ice/
obj-$(CONFIG_IDPF) += idpf/
# SPDX-License-Identifier: GPL-2.0-only
# Copyright (C) 2023 Intel Corporation
# Makefile for Intel(R) Infrastructure Data Path Function Linux Driver
obj-$(CONFIG_IDPF) += idpf.o
idpf-y := \
idpf_controlq.o \
idpf_controlq_setup.o \
idpf_dev.o \
idpf_ethtool.o \
idpf_lib.o \
idpf_main.o \
idpf_singleq_txrx.o \
idpf_txrx.o \
idpf_virtchnl.o \
idpf_vf_dev.o
(collapsed file diffs not shown)
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_CONTROLQ_H_
#define _IDPF_CONTROLQ_H_
#include <linux/slab.h>
#include "idpf_controlq_api.h"
/* Maximum buffer length for all control queue types */
#define IDPF_CTLQ_MAX_BUF_LEN 4096
#define IDPF_CTLQ_DESC(R, i) \
(&(((struct idpf_ctlq_desc *)((R)->desc_ring.va))[i]))
#define IDPF_CTLQ_DESC_UNUSED(R) \
((u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
(R)->next_to_clean - (R)->next_to_use - 1))
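/* Example: with ring_size = 64, next_to_clean = 10 and next_to_use = 20,
 * IDPF_CTLQ_DESC_UNUSED() evaluates to 64 + 10 - 20 - 1 = 53; the "- 1"
 * keeps one descriptor slot unused so that a full ring can be told apart
 * from an empty one.
 */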
/* Control Queue default settings */
#define IDPF_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */
struct idpf_ctlq_desc {
/* Control queue descriptor flags */
__le16 flags;
/* Control queue message opcode */
__le16 opcode;
__le16 datalen; /* 0 for direct commands */
union {
__le16 ret_val;
__le16 pfid_vfid;
#define IDPF_CTLQ_DESC_VF_ID_S 0
#define IDPF_CTLQ_DESC_VF_ID_M (0x7FF << IDPF_CTLQ_DESC_VF_ID_S)
#define IDPF_CTLQ_DESC_PF_ID_S 11
#define IDPF_CTLQ_DESC_PF_ID_M (0x1F << IDPF_CTLQ_DESC_PF_ID_S)
};
/* Virtchnl message opcode and virtchnl descriptor type
* v_opcode=[27:0], v_dtype=[31:28]
*/
__le32 v_opcode_dtype;
/* Virtchnl return value */
__le32 v_retval;
union {
struct {
__le32 param0;
__le32 param1;
__le32 param2;
__le32 param3;
} direct;
struct {
__le32 param0;
__le16 sw_cookie;
/* Virtchnl flags */
__le16 v_flags;
__le32 addr_high;
__le32 addr_low;
} indirect;
u8 raw[16];
} params;
};
/* Flags sub-structure
* |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
* |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| HOST_ID |
*/
/* command flags and offsets */
#define IDPF_CTLQ_FLAG_DD_S 0
#define IDPF_CTLQ_FLAG_CMP_S 1
#define IDPF_CTLQ_FLAG_ERR_S 2
#define IDPF_CTLQ_FLAG_FTYPE_S 6
#define IDPF_CTLQ_FLAG_RD_S 10
#define IDPF_CTLQ_FLAG_VFC_S 11
#define IDPF_CTLQ_FLAG_BUF_S 12
#define IDPF_CTLQ_FLAG_HOST_ID_S 13
#define IDPF_CTLQ_FLAG_DD BIT(IDPF_CTLQ_FLAG_DD_S) /* 0x1 */
#define IDPF_CTLQ_FLAG_CMP BIT(IDPF_CTLQ_FLAG_CMP_S) /* 0x2 */
#define IDPF_CTLQ_FLAG_ERR BIT(IDPF_CTLQ_FLAG_ERR_S) /* 0x4 */
#define IDPF_CTLQ_FLAG_FTYPE_VM BIT(IDPF_CTLQ_FLAG_FTYPE_S) /* 0x40 */
#define IDPF_CTLQ_FLAG_FTYPE_PF BIT(IDPF_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */
#define IDPF_CTLQ_FLAG_RD BIT(IDPF_CTLQ_FLAG_RD_S) /* 0x400 */
#define IDPF_CTLQ_FLAG_VFC BIT(IDPF_CTLQ_FLAG_VFC_S) /* 0x800 */
#define IDPF_CTLQ_FLAG_BUF BIT(IDPF_CTLQ_FLAG_BUF_S) /* 0x1000 */
/* Host ID is a special field that has 3b and not a 1b flag */
#define IDPF_CTLQ_FLAG_HOST_ID_M MAKE_MASK(0x7000UL, IDPF_CTLQ_FLAG_HOST_ID_S)
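/* Example (illustrative): a send descriptor that references an external DMA
 * buffer would typically set (IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD), i.e.
 * 0x1400, while hardware reports a processed descriptor by setting
 * IDPF_CTLQ_FLAG_DD.
 */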
struct idpf_mbxq_desc {
u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */
u32 chnl_opcode; /* avoid confusion with desc->opcode */
u32 chnl_retval; /* ditto for desc->retval */
u32 pf_vf_id; /* used by CP when sending to PF */
};
/* Define the driver hardware struct to replace other control structs as needed
* Align to ctlq_hw_info
*/
struct idpf_hw {
void __iomem *hw_addr;
resource_size_t hw_addr_len;
struct idpf_adapter *back;
/* control queue - send and receive */
struct idpf_ctlq_info *asq;
struct idpf_ctlq_info *arq;
/* pci info */
u16 device_id;
u16 vendor_id;
u16 subsystem_device_id;
u16 subsystem_vendor_id;
u8 revision_id;
bool adapter_stopped;
struct list_head cq_list_head;
};
int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
struct idpf_ctlq_info *cq);
void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq);
/* prototype for functions used for dynamic memory allocation */
void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem,
u64 size);
void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem);
#endif /* _IDPF_CONTROLQ_H_ */
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_CONTROLQ_API_H_
#define _IDPF_CONTROLQ_API_H_
#include "idpf_mem.h"
struct idpf_hw;
/* Used for queue init, response and events */
enum idpf_ctlq_type {
IDPF_CTLQ_TYPE_MAILBOX_TX = 0,
IDPF_CTLQ_TYPE_MAILBOX_RX = 1,
IDPF_CTLQ_TYPE_CONFIG_TX = 2,
IDPF_CTLQ_TYPE_CONFIG_RX = 3,
IDPF_CTLQ_TYPE_EVENT_RX = 4,
IDPF_CTLQ_TYPE_RDMA_TX = 5,
IDPF_CTLQ_TYPE_RDMA_RX = 6,
IDPF_CTLQ_TYPE_RDMA_COMPL = 7
};
/* Generic Control Queue Structures */
struct idpf_ctlq_reg {
/* used for queue tracking */
u32 head;
u32 tail;
/* Below applies only to default mb (if present) */
u32 len;
u32 bah;
u32 bal;
u32 len_mask;
u32 len_ena_mask;
u32 head_mask;
};
/* Generic queue msg structure */
struct idpf_ctlq_msg {
u8 vmvf_type; /* represents the source of the message on recv */
#define IDPF_VMVF_TYPE_VF 0
#define IDPF_VMVF_TYPE_VM 1
#define IDPF_VMVF_TYPE_PF 2
u8 host_id;
/* 3b field used only when sending a message to CP - to be used in
* combination with target func_id to route the message
*/
#define IDPF_HOST_ID_MASK 0x7
u16 opcode;
u16 data_len; /* data_len = 0 when no payload is attached */
union {
u16 func_id; /* when sending a message */
u16 status; /* when receiving a message */
};
union {
struct {
u32 chnl_opcode;
u32 chnl_retval;
} mbx;
} cookie;
union {
#define IDPF_DIRECT_CTX_SIZE 16
#define IDPF_INDIRECT_CTX_SIZE 8
/* 16 bytes of context can be provided or 8 bytes of context
* plus the address of a DMA buffer
*/
u8 direct[IDPF_DIRECT_CTX_SIZE];
struct {
u8 context[IDPF_INDIRECT_CTX_SIZE];
struct idpf_dma_mem *payload;
} indirect;
} ctx;
};
/* Generic queue info structures */
/* MB, CONFIG and EVENT q do not have extended info */
struct idpf_ctlq_create_info {
enum idpf_ctlq_type type;
int id; /* absolute queue offset passed as input
* -1 for default mailbox if present
*/
u16 len; /* Queue length passed as input */
u16 buf_size; /* buffer size passed as input */
u64 base_address; /* output, HPA of the Queue start */
struct idpf_ctlq_reg reg; /* registers accessed by ctlqs */
int ext_info_size;
void *ext_info; /* Specific to q type */
};
/* Control Queue information */
struct idpf_ctlq_info {
struct list_head cq_list;
enum idpf_ctlq_type cq_type;
int q_id;
struct mutex cq_lock; /* control queue lock */
/* used for interrupt processing */
u16 next_to_use;
u16 next_to_clean;
u16 next_to_post; /* starting descriptor to post buffers
* to after recev
*/
struct idpf_dma_mem desc_ring; /* descriptor ring memory
* idpf_dma_mem is defined in OSdep.h
*/
union {
struct idpf_dma_mem **rx_buff;
struct idpf_ctlq_msg **tx_msg;
} bi;
u16 buf_size; /* queue buffer size */
u16 ring_size; /* Number of descriptors */
struct idpf_ctlq_reg reg; /* registers accessed by ctlqs */
};
/**
* enum idpf_mbx_opc - PF/VF mailbox commands
* @idpf_mbq_opc_send_msg_to_cp: used by PF or VF to send a message to its CP
*/
enum idpf_mbx_opc {
idpf_mbq_opc_send_msg_to_cp = 0x0801,
};
/* API supported for control queue management */
/* Will init all required q including default mb. "q_info" is an array of
* create_info structs equal to the number of control queues to be created.
*/
int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
struct idpf_ctlq_create_info *q_info);
/* Allocate and initialize a single control queue, which will be added to the
* control queue list; returns a handle to the created control queue
*/
int idpf_ctlq_add(struct idpf_hw *hw,
struct idpf_ctlq_create_info *qinfo,
struct idpf_ctlq_info **cq);
/* Deinitialize and deallocate a single control queue */
void idpf_ctlq_remove(struct idpf_hw *hw,
struct idpf_ctlq_info *cq);
/* Sends messages to HW and will also free the buffer*/
int idpf_ctlq_send(struct idpf_hw *hw,
struct idpf_ctlq_info *cq,
u16 num_q_msg,
struct idpf_ctlq_msg q_msg[]);
/* Receives messages and called by interrupt handler/polling
* initiated by app/process. Also caller is supposed to free the buffers
*/
int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
struct idpf_ctlq_msg *q_msg);
/* Reclaims send descriptors on HW write back */
int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
struct idpf_ctlq_msg *msg_status[]);
/* Indicate RX buffers are done being processed */
int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
struct idpf_ctlq_info *cq,
u16 *buff_count,
struct idpf_dma_mem **buffs);
/* Will destroy all q including the default mb */
void idpf_ctlq_deinit(struct idpf_hw *hw);
#endif /* _IDPF_CONTROLQ_API_H_ */
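A minimal usage sketch of the API above, assuming the register offsets in each
qinfo entry are filled in beforehand by a device-specific ctlq_reg_init()
callback; the function name, ring length, and buffer size are illustrative
placeholders:

/* Illustrative sketch only: create the default mailbox pair and post one
 * message. Requires idpf_controlq.h for struct idpf_hw.
 */
static int example_mbx_init_and_send(struct idpf_hw *hw,
				     struct idpf_ctlq_msg *msg)
{
	struct idpf_ctlq_create_info qinfo[2] = {
		{
			.type		= IDPF_CTLQ_TYPE_MAILBOX_TX,
			.id		= -1,	/* default mailbox */
			.len		= 64,	/* example ring length */
			.buf_size	= 4096,	/* IDPF_CTLQ_MAX_BUF_LEN */
		},
		{
			.type		= IDPF_CTLQ_TYPE_MAILBOX_RX,
			.id		= -1,
			.len		= 64,
			.buf_size	= 4096,
		},
	};
	struct idpf_ctlq_info *cq;
	int err;

	/* Creates both queues, their descriptor rings and the RX DMA buffers */
	err = idpf_ctlq_init(hw, 2, qinfo);
	if (err)
		return err;

	/* Pick the mailbox queues out of the control queue list */
	list_for_each_entry(cq, &hw->cq_list_head, cq_list) {
		if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
			hw->asq = cq;
		else if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
			hw->arq = cq;
	}
	if (!hw->asq)
		return -ENOENT;

	/* The message buffer is reclaimed later via idpf_ctlq_clean_sq()
	 * once hardware writes the descriptor back.
	 */
	return idpf_ctlq_send(hw, hw->asq, 1, msg);
}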
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf_controlq.h"
/**
* idpf_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*/
static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
size_t size = cq->ring_size * sizeof(struct idpf_ctlq_desc);
cq->desc_ring.va = idpf_alloc_dma_mem(hw, &cq->desc_ring, size);
if (!cq->desc_ring.va)
return -ENOMEM;
return 0;
}
/**
* idpf_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* Allocate the buffer head for all control queues, and if it's a receive
* queue, allocate DMA buffers
*/
static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
int i;
/* Do not allocate DMA buffers for transmit queues */
if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
return 0;
/* We'll be allocating the buffer info memory first, then we can
* allocate the mapped buffers for the event processing
*/
cq->bi.rx_buff = kcalloc(cq->ring_size, sizeof(struct idpf_dma_mem *),
GFP_KERNEL);
if (!cq->bi.rx_buff)
return -ENOMEM;
/* allocate the mapped buffers (except for the last one) */
for (i = 0; i < cq->ring_size - 1; i++) {
struct idpf_dma_mem *bi;
int num = 1; /* number of idpf_dma_mem to be allocated */
cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct idpf_dma_mem),
GFP_KERNEL);
if (!cq->bi.rx_buff[i])
goto unwind_alloc_cq_bufs;
bi = cq->bi.rx_buff[i];
bi->va = idpf_alloc_dma_mem(hw, bi, cq->buf_size);
if (!bi->va) {
/* unwind will not free the failed entry */
kfree(cq->bi.rx_buff[i]);
goto unwind_alloc_cq_bufs;
}
}
return 0;
unwind_alloc_cq_bufs:
/* don't try to free the one that failed... */
i--;
for (; i >= 0; i--) {
idpf_free_dma_mem(hw, cq->bi.rx_buff[i]);
kfree(cq->bi.rx_buff[i]);
}
kfree(cq->bi.rx_buff);
return -ENOMEM;
}
/**
* idpf_ctlq_free_desc_ring - Free Control Queue (CQ) rings
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* This assumes the posted send buffers have already been cleaned
* and de-allocated
*/
static void idpf_ctlq_free_desc_ring(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
idpf_free_dma_mem(hw, &cq->desc_ring);
}
/**
* idpf_ctlq_free_bufs - Free CQ buffer info elements
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
* queues. The upper layers are expected to manage freeing of TX DMA buffers
*/
static void idpf_ctlq_free_bufs(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
void *bi;
if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX) {
int i;
/* free DMA buffers for rx queues*/
for (i = 0; i < cq->ring_size; i++) {
if (cq->bi.rx_buff[i]) {
idpf_free_dma_mem(hw, cq->bi.rx_buff[i]);
kfree(cq->bi.rx_buff[i]);
}
}
bi = (void *)cq->bi.rx_buff;
} else {
bi = (void *)cq->bi.tx_msg;
}
/* free the buffer header */
kfree(bi);
}
/**
* idpf_ctlq_dealloc_ring_res - Free memory allocated for control queue
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* Free the memory used by the ring, buffers and other related structures
*/
void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
/* free ring buffers and the ring itself */
idpf_ctlq_free_bufs(hw, cq);
idpf_ctlq_free_desc_ring(hw, cq);
}
/**
* idpf_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
* @hw: pointer to hw struct
* @cq: pointer to control queue struct
*
* Do *NOT* hold cq_lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
*/
int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
int err;
/* allocate the ring memory */
err = idpf_ctlq_alloc_desc_ring(hw, cq);
if (err)
return err;
/* allocate buffers in the rings */
err = idpf_ctlq_alloc_bufs(hw, cq);
if (err)
goto idpf_init_cq_free_ring;
/* success! */
return 0;
idpf_init_cq_free_ring:
idpf_free_dma_mem(hw, &cq->desc_ring);
return err;
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
#include "idpf_lan_pf_regs.h"
#define IDPF_PF_ITR_IDX_SPACING 0x4
/**
* idpf_ctlq_reg_init - initialize default mailbox registers
* @cq: pointer to the array of create control queues
*/
static void idpf_ctlq_reg_init(struct idpf_ctlq_create_info *cq)
{
int i;
for (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {
struct idpf_ctlq_create_info *ccq = cq + i;
switch (ccq->type) {
case IDPF_CTLQ_TYPE_MAILBOX_TX:
/* set head and tail registers in our local struct */
ccq->reg.head = PF_FW_ATQH;
ccq->reg.tail = PF_FW_ATQT;
ccq->reg.len = PF_FW_ATQLEN;
ccq->reg.bah = PF_FW_ATQBAH;
ccq->reg.bal = PF_FW_ATQBAL;
ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
ccq->reg.head_mask = PF_FW_ATQH_ATQH_M;
break;
case IDPF_CTLQ_TYPE_MAILBOX_RX:
/* set head and tail registers in our local struct */
ccq->reg.head = PF_FW_ARQH;
ccq->reg.tail = PF_FW_ARQT;
ccq->reg.len = PF_FW_ARQLEN;
ccq->reg.bah = PF_FW_ARQBAH;
ccq->reg.bal = PF_FW_ARQBAL;
ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
ccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
ccq->reg.head_mask = PF_FW_ARQH_ARQH_M;
break;
default:
break;
}
}
}
/**
* idpf_mb_intr_reg_init - Initialize mailbox interrupt register
* @adapter: adapter structure
*/
static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter)
{
struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
u32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);
intr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);
intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
intr->dyn_ctl_itridx_m = PF_GLINT_DYN_CTL_ITR_INDX_M;
intr->icr_ena = idpf_get_reg_addr(adapter, PF_INT_DIR_OICR_ENA);
intr->icr_ena_ctlq_m = PF_INT_DIR_OICR_ENA_M;
}
/**
* idpf_intr_reg_init - Initialize interrupt registers
* @vport: virtual port structure
*/
static int idpf_intr_reg_init(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
int num_vecs = vport->num_q_vectors;
struct idpf_vec_regs *reg_vals;
int num_regs, i, err = 0;
u32 rx_itr, tx_itr;
u16 total_vecs;
total_vecs = idpf_get_reserved_vecs(vport->adapter);
reg_vals = kcalloc(total_vecs, sizeof(struct idpf_vec_regs),
GFP_KERNEL);
if (!reg_vals)
return -ENOMEM;
num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
if (num_regs < num_vecs) {
err = -EINVAL;
goto free_reg_vals;
}
for (i = 0; i < num_vecs; i++) {
struct idpf_q_vector *q_vector = &vport->q_vectors[i];
u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
struct idpf_intr_reg *intr = &q_vector->intr_reg;
u32 spacing;
intr->dyn_ctl = idpf_get_reg_addr(adapter,
reg_vals[vec_id].dyn_ctl_reg);
intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S;
intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S;
spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
IDPF_PF_ITR_IDX_SPACING);
rx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_0,
reg_vals[vec_id].itrn_reg,
spacing);
tx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_1,
reg_vals[vec_id].itrn_reg,
spacing);
intr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);
intr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);
}
free_reg_vals:
kfree(reg_vals);
return err;
}
/**
* idpf_reset_reg_init - Initialize reset registers
* @adapter: Driver specific private structure
*/
static void idpf_reset_reg_init(struct idpf_adapter *adapter)
{
adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, PFGEN_RSTAT);
adapter->reset_reg.rstat_m = PFGEN_RSTAT_PFR_STATE_M;
}
/**
* idpf_trigger_reset - trigger reset
* @adapter: Driver specific private structure
* @trig_cause: Reason to trigger a reset
*/
static void idpf_trigger_reset(struct idpf_adapter *adapter,
enum idpf_flags __always_unused trig_cause)
{
u32 reset_reg;
reset_reg = readl(idpf_get_reg_addr(adapter, PFGEN_CTRL));
writel(reset_reg | PFGEN_CTRL_PFSWR,
idpf_get_reg_addr(adapter, PFGEN_CTRL));
}
/**
* idpf_reg_ops_init - Initialize register API function pointers
* @adapter: Driver specific private structure
*/
static void idpf_reg_ops_init(struct idpf_adapter *adapter)
{
adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init;
adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init;
adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_mb_intr_reg_init;
adapter->dev_ops.reg_ops.reset_reg_init = idpf_reset_reg_init;
adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset;
}
/**
* idpf_dev_ops_init - Initialize device API function pointers
* @adapter: Driver specific private structure
*/
void idpf_dev_ops_init(struct idpf_adapter *adapter)
{
idpf_reg_ops_init(adapter);
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_DEVIDS_H_
#define _IDPF_DEVIDS_H_
#define IDPF_DEV_ID_PF 0x1452
#define IDPF_DEV_ID_VF 0x145C
#endif /* _IDPF_DEVIDS_H_ */
(collapsed file diff not shown)
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_LAN_PF_REGS_H_
#define _IDPF_LAN_PF_REGS_H_
/* Receive queues */
#define PF_QRX_BASE 0x00000000
#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000)))
#define PF_QRX_BUFFQ_BASE 0x03000000
#define PF_QRX_BUFFQ_TAIL(_QRX) (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
/* Transmit queues */
#define PF_QTX_BASE 0x05000000
#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000))
/* Control(PF Mailbox) Queue */
#define PF_FW_BASE 0x08400000
#define PF_FW_ARQBAL (PF_FW_BASE)
#define PF_FW_ARQBAH (PF_FW_BASE + 0x4)
#define PF_FW_ARQLEN (PF_FW_BASE + 0x8)
#define PF_FW_ARQLEN_ARQLEN_S 0
#define PF_FW_ARQLEN_ARQLEN_M GENMASK(12, 0)
#define PF_FW_ARQLEN_ARQVFE_S 28
#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S)
#define PF_FW_ARQLEN_ARQOVFL_S 29
#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S)
#define PF_FW_ARQLEN_ARQCRIT_S 30
#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S)
#define PF_FW_ARQLEN_ARQENABLE_S 31
#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S)
#define PF_FW_ARQH (PF_FW_BASE + 0xC)
#define PF_FW_ARQH_ARQH_S 0
#define PF_FW_ARQH_ARQH_M GENMASK(12, 0)
#define PF_FW_ARQT (PF_FW_BASE + 0x10)
#define PF_FW_ATQBAL (PF_FW_BASE + 0x14)
#define PF_FW_ATQBAH (PF_FW_BASE + 0x18)
#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C)
#define PF_FW_ATQLEN_ATQLEN_S 0
#define PF_FW_ATQLEN_ATQLEN_M GENMASK(9, 0)
#define PF_FW_ATQLEN_ATQVFE_S 28
#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S)
#define PF_FW_ATQLEN_ATQOVFL_S 29
#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S)
#define PF_FW_ATQLEN_ATQCRIT_S 30
#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S)
#define PF_FW_ATQLEN_ATQENABLE_S 31
#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S)
#define PF_FW_ATQH (PF_FW_BASE + 0x20)
#define PF_FW_ATQH_ATQH_S 0
#define PF_FW_ATQH_ATQH_M GENMASK(9, 0)
#define PF_FW_ATQT (PF_FW_BASE + 0x24)
/* Interrupts */
#define PF_GLINT_BASE 0x08900000
#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000))
#define PF_GLINT_DYN_CTL_INTENA_S 0
#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S)
#define PF_GLINT_DYN_CTL_CLEARPBA_S 1
#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2
#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
#define PF_GLINT_DYN_CTL_ITR_INDX_S 3
#define PF_GLINT_DYN_CTL_ITR_INDX_M GENMASK(4, 3)
#define PF_GLINT_DYN_CTL_INTERVAL_S 5
#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
#define PF_GLINT_DYN_CTL_WB_ON_ITR_S 30
#define PF_GLINT_DYN_CTL_WB_ON_ITR_M BIT(PF_GLINT_DYN_CTL_WB_ON_ITR_S)
#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31
#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
/* _ITR is ITR index, _INT is interrupt index, _itrn_indx_spacing is
* spacing b/w itrn registers of the same vector.
*/
#define PF_GLINT_ITR_ADDR(_ITR, _reg_start, _itrn_indx_spacing) \
((_reg_start) + ((_ITR) * (_itrn_indx_spacing)))
/* For PF, itrn_indx_spacing is 4 and itrn_reg_spacing is 0x1000 */
#define PF_GLINT_ITR(_ITR, _INT) \
(PF_GLINT_BASE + (((_ITR) + 1) * 4) + ((_INT) * 0x1000))
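/* Example: PF_GLINT_ITR(0, 2) expands to PF_GLINT_BASE + 0x2004, i.e. the
 * ITR0 register of interrupt vector 2, while PF_GLINT_ITR_ADDR(1, start, 0x4)
 * simply evaluates to start + 0x4.
 */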
#define PF_GLINT_ITR_MAX_INDEX 2
#define PF_GLINT_ITR_INTERVAL_S 0
#define PF_GLINT_ITR_INTERVAL_M GENMASK(11, 0)
/* Generic registers */
#define PF_INT_DIR_OICR_ENA 0x08406000
#define PF_INT_DIR_OICR_ENA_S 0
#define PF_INT_DIR_OICR_ENA_M GENMASK(31, 0)
#define PF_INT_DIR_OICR 0x08406004
#define PF_INT_DIR_OICR_TSYN_EVNT 0
#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1)
#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2)
#define PF_INT_DIR_OICR_CAUSE 0x08406008
#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0
#define PF_INT_DIR_OICR_CAUSE_CAUSE_M GENMASK(31, 0)
#define PF_INT_PBA_CLEAR 0x0840600C
#define PF_FUNC_RID 0x08406010
#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
#define PF_FUNC_RID_FUNCTION_NUMBER_M GENMASK(2, 0)
#define PF_FUNC_RID_DEVICE_NUMBER_S 3
#define PF_FUNC_RID_DEVICE_NUMBER_M GENMASK(7, 3)
#define PF_FUNC_RID_BUS_NUMBER_S 8
#define PF_FUNC_RID_BUS_NUMBER_M GENMASK(15, 8)
/* Reset registers */
#define PFGEN_RTRIG 0x08407000
#define PFGEN_RTRIG_CORER_S 0
#define PFGEN_RTRIG_CORER_M BIT(0)
#define PFGEN_RTRIG_LINKR_S 1
#define PFGEN_RTRIG_LINKR_M BIT(1)
#define PFGEN_RTRIG_IMCR_S 2
#define PFGEN_RTRIG_IMCR_M BIT(2)
#define PFGEN_RSTAT 0x08407008 /* PFR Status */
#define PFGEN_RSTAT_PFR_STATE_S 0
#define PFGEN_RSTAT_PFR_STATE_M GENMASK(1, 0)
#define PFGEN_CTRL 0x0840700C
#define PFGEN_CTRL_PFSWR BIT(0)
#endif
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_LAN_TXRX_H_
#define _IDPF_LAN_TXRX_H_
enum idpf_rss_hash {
IDPF_HASH_INVALID = 0,
/* Values 1 - 28 are reserved for future use */
IDPF_HASH_NONF_UNICAST_IPV4_UDP = 29,
IDPF_HASH_NONF_MULTICAST_IPV4_UDP,
IDPF_HASH_NONF_IPV4_UDP,
IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
IDPF_HASH_NONF_IPV4_TCP,
IDPF_HASH_NONF_IPV4_SCTP,
IDPF_HASH_NONF_IPV4_OTHER,
IDPF_HASH_FRAG_IPV4,
/* Values 37-38 are reserved */
IDPF_HASH_NONF_UNICAST_IPV6_UDP = 39,
IDPF_HASH_NONF_MULTICAST_IPV6_UDP,
IDPF_HASH_NONF_IPV6_UDP,
IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
IDPF_HASH_NONF_IPV6_TCP,
IDPF_HASH_NONF_IPV6_SCTP,
IDPF_HASH_NONF_IPV6_OTHER,
IDPF_HASH_FRAG_IPV6,
IDPF_HASH_NONF_RSVD47,
IDPF_HASH_NONF_FCOE_OX,
IDPF_HASH_NONF_FCOE_RX,
IDPF_HASH_NONF_FCOE_OTHER,
/* Values 51-62 are reserved */
IDPF_HASH_L2_PAYLOAD = 63,
IDPF_HASH_MAX
};
/* Supported RSS offloads */
#define IDPF_DEFAULT_RSS_HASH \
(BIT_ULL(IDPF_HASH_NONF_IPV4_UDP) | \
BIT_ULL(IDPF_HASH_NONF_IPV4_SCTP) | \
BIT_ULL(IDPF_HASH_NONF_IPV4_TCP) | \
BIT_ULL(IDPF_HASH_NONF_IPV4_OTHER) | \
BIT_ULL(IDPF_HASH_FRAG_IPV4) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_UDP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_TCP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_SCTP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_OTHER) | \
BIT_ULL(IDPF_HASH_FRAG_IPV6) | \
BIT_ULL(IDPF_HASH_L2_PAYLOAD))
#define IDPF_DEFAULT_RSS_HASH_EXPANDED (IDPF_DEFAULT_RSS_HASH | \
BIT_ULL(IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
BIT_ULL(IDPF_HASH_NONF_UNICAST_IPV4_UDP) | \
BIT_ULL(IDPF_HASH_NONF_MULTICAST_IPV4_UDP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
BIT_ULL(IDPF_HASH_NONF_UNICAST_IPV6_UDP) | \
BIT_ULL(IDPF_HASH_NONF_MULTICAST_IPV6_UDP))
/* For idpf_splitq_base_tx_compl_desc */
#define IDPF_TXD_COMPLQ_GEN_S 15
#define IDPF_TXD_COMPLQ_GEN_M BIT_ULL(IDPF_TXD_COMPLQ_GEN_S)
#define IDPF_TXD_COMPLQ_COMPL_TYPE_S 11
#define IDPF_TXD_COMPLQ_COMPL_TYPE_M GENMASK_ULL(13, 11)
#define IDPF_TXD_COMPLQ_QID_S 0
#define IDPF_TXD_COMPLQ_QID_M GENMASK_ULL(9, 0)
/* For base mode TX descriptors */
#define IDPF_TXD_CTX_QW0_TUNN_L4T_CS_S 23
#define IDPF_TXD_CTX_QW0_TUNN_L4T_CS_M BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_L4T_CS_S)
#define IDPF_TXD_CTX_QW0_TUNN_DECTTL_S 19
#define IDPF_TXD_CTX_QW0_TUNN_DECTTL_M \
(0xFULL << IDPF_TXD_CTX_QW0_TUNN_DECTTL_S)
#define IDPF_TXD_CTX_QW0_TUNN_NATLEN_S 12
#define IDPF_TXD_CTX_QW0_TUNN_NATLEN_M \
(0X7FULL << IDPF_TXD_CTX_QW0_TUNN_NATLEN_S)
#define IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_S 11
#define IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_M \
BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_S)
#define IDPF_TXD_CTX_EIP_NOINC_IPID_CONST \
IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_M
#define IDPF_TXD_CTX_QW0_TUNN_NATT_S 9
#define IDPF_TXD_CTX_QW0_TUNN_NATT_M (0x3ULL << IDPF_TXD_CTX_QW0_TUNN_NATT_S)
#define IDPF_TXD_CTX_UDP_TUNNELING BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_NATT_S)
#define IDPF_TXD_CTX_GRE_TUNNELING (0x2ULL << IDPF_TXD_CTX_QW0_TUNN_NATT_S)
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_S 2
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_M \
(0x3FULL << IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_S)
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IP_S 0
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IP_M \
(0x3ULL << IDPF_TXD_CTX_QW0_TUNN_EXT_IP_S)
#define IDPF_TXD_CTX_QW1_MSS_S 50
#define IDPF_TXD_CTX_QW1_MSS_M GENMASK_ULL(63, 50)
#define IDPF_TXD_CTX_QW1_TSO_LEN_S 30
#define IDPF_TXD_CTX_QW1_TSO_LEN_M GENMASK_ULL(47, 30)
#define IDPF_TXD_CTX_QW1_CMD_S 4
#define IDPF_TXD_CTX_QW1_CMD_M GENMASK_ULL(15, 4)
#define IDPF_TXD_CTX_QW1_DTYPE_S 0
#define IDPF_TXD_CTX_QW1_DTYPE_M GENMASK_ULL(3, 0)
#define IDPF_TXD_QW1_L2TAG1_S 48
#define IDPF_TXD_QW1_L2TAG1_M GENMASK_ULL(63, 48)
#define IDPF_TXD_QW1_TX_BUF_SZ_S 34
#define IDPF_TXD_QW1_TX_BUF_SZ_M GENMASK_ULL(47, 34)
#define IDPF_TXD_QW1_OFFSET_S 16
#define IDPF_TXD_QW1_OFFSET_M GENMASK_ULL(33, 16)
#define IDPF_TXD_QW1_CMD_S 4
#define IDPF_TXD_QW1_CMD_M GENMASK_ULL(15, 4)
#define IDPF_TXD_QW1_DTYPE_S 0
#define IDPF_TXD_QW1_DTYPE_M GENMASK_ULL(3, 0)
/* TX Completion Descriptor Completion Types */
#define IDPF_TXD_COMPLT_ITR_FLUSH 0
/* Descriptor completion type 1 is reserved */
#define IDPF_TXD_COMPLT_RS 2
/* Descriptor completion type 3 is reserved */
#define IDPF_TXD_COMPLT_RE 4
#define IDPF_TXD_COMPLT_SW_MARKER 5
enum idpf_tx_desc_dtype_value {
IDPF_TX_DESC_DTYPE_DATA = 0,
IDPF_TX_DESC_DTYPE_CTX = 1,
/* DTYPE 2 is reserved
* DTYPE 3 is free for future use
* DTYPE 4 is reserved
*/
IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX = 5,
/* DTYPE 6 is reserved */
IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7,
/* DTYPE 8, 9 are free for future use
* DTYPE 10 is reserved
* DTYPE 11 is free for future use
*/
IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12,
/* DTYPE 13, 14 are free for future use */
/* DESC_DONE - HW has completed write-back of descriptor */
IDPF_TX_DESC_DTYPE_DESC_DONE = 15,
};
enum idpf_tx_ctx_desc_cmd_bits {
IDPF_TX_CTX_DESC_TSO = 0x01,
IDPF_TX_CTX_DESC_TSYN = 0x02,
IDPF_TX_CTX_DESC_IL2TAG2 = 0x04,
IDPF_TX_CTX_DESC_RSVD = 0x08,
IDPF_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
IDPF_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
IDPF_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
IDPF_TX_CTX_DESC_SWTCH_VSI = 0x30,
IDPF_TX_CTX_DESC_FILT_AU_EN = 0x40,
IDPF_TX_CTX_DESC_FILT_AU_EVICT = 0x80,
IDPF_TX_CTX_DESC_RSVD1 = 0xF00
};
enum idpf_tx_desc_len_fields {
/* Note: These are predefined bit offsets */
IDPF_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */
IDPF_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */
IDPF_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */
};
enum idpf_tx_base_desc_cmd_bits {
IDPF_TX_DESC_CMD_EOP = BIT(0),
IDPF_TX_DESC_CMD_RS = BIT(1),
/* only on VFs else RSVD */
IDPF_TX_DESC_CMD_ICRC = BIT(2),
IDPF_TX_DESC_CMD_IL2TAG1 = BIT(3),
IDPF_TX_DESC_CMD_RSVD1 = BIT(4),
IDPF_TX_DESC_CMD_IIPT_IPV6 = BIT(5),
IDPF_TX_DESC_CMD_IIPT_IPV4 = BIT(6),
IDPF_TX_DESC_CMD_IIPT_IPV4_CSUM = GENMASK(6, 5),
IDPF_TX_DESC_CMD_RSVD2 = BIT(7),
IDPF_TX_DESC_CMD_L4T_EOFT_TCP = BIT(8),
IDPF_TX_DESC_CMD_L4T_EOFT_SCTP = BIT(9),
IDPF_TX_DESC_CMD_L4T_EOFT_UDP = GENMASK(9, 8),
IDPF_TX_DESC_CMD_RSVD3 = BIT(10),
IDPF_TX_DESC_CMD_RSVD4 = BIT(11),
};
/* Transmit descriptors */
/* splitq tx buf, singleq tx buf and singleq compl desc */
struct idpf_base_tx_desc {
__le64 buf_addr; /* Address of descriptor's data buf */
__le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
}; /* read used with buffer queues */
struct idpf_splitq_tx_compl_desc {
/* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
__le16 qid_comptype_gen;
union {
__le16 q_head; /* Queue head */
__le16 compl_tag; /* Completion tag */
} q_head_compl_tag;
u8 ts[3];
u8 rsvd; /* Reserved */
}; /* writeback used with completion queues */
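/* Illustrative sketch only: extracting the completion type from
 * qid_comptype_gen using the field layout documented above; compare the
 * result against the IDPF_TXD_COMPLT_* values.
 */
static inline u8 example_tx_compl_type(const struct idpf_splitq_tx_compl_desc *desc)
{
	u16 qcg = le16_to_cpu(desc->qid_comptype_gen);

	return (qcg & IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
}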
/* Context descriptors */
struct idpf_base_tx_ctx_desc {
struct {
__le32 tunneling_params;
__le16 l2tag2;
__le16 rsvd1;
} qw0;
__le64 qw1; /* type_cmd_tlen_mss/rt_hint */
};
/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
enum idpf_tx_flex_desc_cmd_bits {
IDPF_TX_FLEX_DESC_CMD_EOP = BIT(0),
IDPF_TX_FLEX_DESC_CMD_RS = BIT(1),
IDPF_TX_FLEX_DESC_CMD_RE = BIT(2),
IDPF_TX_FLEX_DESC_CMD_IL2TAG1 = BIT(3),
IDPF_TX_FLEX_DESC_CMD_DUMMY = BIT(4),
IDPF_TX_FLEX_DESC_CMD_CS_EN = BIT(5),
IDPF_TX_FLEX_DESC_CMD_FILT_AU_EN = BIT(6),
IDPF_TX_FLEX_DESC_CMD_FILT_AU_EVICT = BIT(7),
};
struct idpf_flex_tx_desc {
__le64 buf_addr; /* Packet buffer address */
struct {
#define IDPF_FLEX_TXD_QW1_DTYPE_S 0
#define IDPF_FLEX_TXD_QW1_DTYPE_M GENMASK(4, 0)
#define IDPF_FLEX_TXD_QW1_CMD_S 5
#define IDPF_FLEX_TXD_QW1_CMD_M GENMASK(15, 5)
__le16 cmd_dtype;
/* DTYPE=IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
struct {
__le16 l2tag1;
__le16 l2tag2;
} l2tags;
__le16 buf_size;
} qw1;
};
struct idpf_flex_tx_sched_desc {
__le64 buf_addr; /* Packet buffer address */
/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */
struct {
u8 cmd_dtype;
#define IDPF_TXD_FLEX_FLOW_DTYPE_M GENMASK(4, 0)
#define IDPF_TXD_FLEX_FLOW_CMD_EOP BIT(5)
#define IDPF_TXD_FLEX_FLOW_CMD_CS_EN BIT(6)
#define IDPF_TXD_FLEX_FLOW_CMD_RE BIT(7)
/* [23:23] Horizon Overflow bit, [22:0] timestamp */
u8 ts[3];
#define IDPF_TXD_FLOW_SCH_HORIZON_OVERFLOW_M BIT(7)
__le16 compl_tag;
__le16 rxr_bufsize;
#define IDPF_TXD_FLEX_FLOW_RXR BIT(14)
#define IDPF_TXD_FLEX_FLOW_BUFSIZE_M GENMASK(13, 0)
} qw1;
};
/* Common cmd fields for all flex context descriptors
* Note: these defines already account for the 5 bit dtype in the cmd_dtype
* field
*/
enum idpf_tx_flex_ctx_desc_cmd_bits {
IDPF_TX_FLEX_CTX_DESC_CMD_TSO = BIT(5),
IDPF_TX_FLEX_CTX_DESC_CMD_TSYN_EN = BIT(6),
IDPF_TX_FLEX_CTX_DESC_CMD_L2TAG2 = BIT(7),
IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = BIT(9),
IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = BIT(10),
IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = GENMASK(10, 9),
};
/* Standard flex descriptor TSO context quad word */
struct idpf_flex_tx_tso_ctx_qw {
__le32 flex_tlen;
#define IDPF_TXD_FLEX_CTX_TLEN_M GENMASK(17, 0)
#define IDPF_TXD_FLEX_TSO_CTX_FLEX_S 24
__le16 mss_rt;
#define IDPF_TXD_FLEX_CTX_MSS_RT_M GENMASK(13, 0)
u8 hdr_len;
u8 flex;
};
struct idpf_flex_tx_ctx_desc {
/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
struct {
struct idpf_flex_tx_tso_ctx_qw qw0;
struct {
__le16 cmd_dtype;
u8 flex[6];
} qw1;
} tso;
};
#endif /* _IDPF_LAN_TXRX_H_ */
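A minimal sketch of how QW1 of a base TX data descriptor could be packed from
the shift definitions above; the helper name is illustrative:

/* Illustrative only: build type_cmd_offset_bsz_l2tag1 for idpf_base_tx_desc */
static inline __le64 example_build_base_tx_qw1(u64 td_cmd, u64 td_offset,
					       u64 size, u64 td_tag)
{
	return cpu_to_le64(IDPF_TX_DESC_DTYPE_DATA |
			   (td_cmd << IDPF_TXD_QW1_CMD_S) |
			   (td_offset << IDPF_TXD_QW1_OFFSET_S) |
			   (size << IDPF_TXD_QW1_TX_BUF_SZ_S) |
			   (td_tag << IDPF_TXD_QW1_L2TAG1_S));
}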
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_LAN_VF_REGS_H_
#define _IDPF_LAN_VF_REGS_H_
/* Reset */
#define VFGEN_RSTAT 0x00008800
#define VFGEN_RSTAT_VFR_STATE_S 0
#define VFGEN_RSTAT_VFR_STATE_M GENMASK(1, 0)
/* Control(VF Mailbox) Queue */
#define VF_BASE 0x00006000
#define VF_ATQBAL (VF_BASE + 0x1C00)
#define VF_ATQBAH (VF_BASE + 0x1800)
#define VF_ATQLEN (VF_BASE + 0x0800)
#define VF_ATQLEN_ATQLEN_S 0
#define VF_ATQLEN_ATQLEN_M GENMASK(9, 0)
#define VF_ATQLEN_ATQVFE_S 28
#define VF_ATQLEN_ATQVFE_M BIT(VF_ATQLEN_ATQVFE_S)
#define VF_ATQLEN_ATQOVFL_S 29
#define VF_ATQLEN_ATQOVFL_M BIT(VF_ATQLEN_ATQOVFL_S)
#define VF_ATQLEN_ATQCRIT_S 30
#define VF_ATQLEN_ATQCRIT_M BIT(VF_ATQLEN_ATQCRIT_S)
#define VF_ATQLEN_ATQENABLE_S 31
#define VF_ATQLEN_ATQENABLE_M BIT(VF_ATQLEN_ATQENABLE_S)
#define VF_ATQH (VF_BASE + 0x0400)
#define VF_ATQH_ATQH_S 0
#define VF_ATQH_ATQH_M GENMASK(9, 0)
#define VF_ATQT (VF_BASE + 0x2400)
#define VF_ARQBAL (VF_BASE + 0x0C00)
#define VF_ARQBAH (VF_BASE)
#define VF_ARQLEN (VF_BASE + 0x2000)
#define VF_ARQLEN_ARQLEN_S 0
#define VF_ARQLEN_ARQLEN_M GENMASK(9, 0)
#define VF_ARQLEN_ARQVFE_S 28
#define VF_ARQLEN_ARQVFE_M BIT(VF_ARQLEN_ARQVFE_S)
#define VF_ARQLEN_ARQOVFL_S 29
#define VF_ARQLEN_ARQOVFL_M BIT(VF_ARQLEN_ARQOVFL_S)
#define VF_ARQLEN_ARQCRIT_S 30
#define VF_ARQLEN_ARQCRIT_M BIT(VF_ARQLEN_ARQCRIT_S)
#define VF_ARQLEN_ARQENABLE_S 31
#define VF_ARQLEN_ARQENABLE_M BIT(VF_ARQLEN_ARQENABLE_S)
#define VF_ARQH (VF_BASE + 0x1400)
#define VF_ARQH_ARQH_S 0
#define VF_ARQH_ARQH_M GENMASK(12, 0)
#define VF_ARQT (VF_BASE + 0x1000)
/* Transmit queues */
#define VF_QTX_TAIL_BASE 0x00000000
#define VF_QTX_TAIL(_QTX) (VF_QTX_TAIL_BASE + (_QTX) * 0x4)
#define VF_QTX_TAIL_EXT_BASE 0x00040000
#define VF_QTX_TAIL_EXT(_QTX) (VF_QTX_TAIL_EXT_BASE + ((_QTX) * 4))
/* Receive queues */
#define VF_QRX_TAIL_BASE 0x00002000
#define VF_QRX_TAIL(_QRX) (VF_QRX_TAIL_BASE + ((_QRX) * 4))
#define VF_QRX_TAIL_EXT_BASE 0x00050000
#define VF_QRX_TAIL_EXT(_QRX) (VF_QRX_TAIL_EXT_BASE + ((_QRX) * 4))
#define VF_QRXB_TAIL_BASE 0x00060000
#define VF_QRXB_TAIL(_QRX) (VF_QRXB_TAIL_BASE + ((_QRX) * 4))
/* Interrupts */
#define VF_INT_DYN_CTL0 0x00005C00
#define VF_INT_DYN_CTL0_INTENA_S 0
#define VF_INT_DYN_CTL0_INTENA_M BIT(VF_INT_DYN_CTL0_INTENA_S)
#define VF_INT_DYN_CTL0_ITR_INDX_S 3
#define VF_INT_DYN_CTL0_ITR_INDX_M GENMASK(4, 3)
#define VF_INT_DYN_CTLN(_INT) (0x00003800 + ((_INT) * 4))
#define VF_INT_DYN_CTLN_EXT(_INT) (0x00070000 + ((_INT) * 4))
#define VF_INT_DYN_CTLN_INTENA_S 0
#define VF_INT_DYN_CTLN_INTENA_M BIT(VF_INT_DYN_CTLN_INTENA_S)
#define VF_INT_DYN_CTLN_CLEARPBA_S 1
#define VF_INT_DYN_CTLN_CLEARPBA_M BIT(VF_INT_DYN_CTLN_CLEARPBA_S)
#define VF_INT_DYN_CTLN_SWINT_TRIG_S 2
#define VF_INT_DYN_CTLN_SWINT_TRIG_M BIT(VF_INT_DYN_CTLN_SWINT_TRIG_S)
#define VF_INT_DYN_CTLN_ITR_INDX_S 3
#define VF_INT_DYN_CTLN_ITR_INDX_M GENMASK(4, 3)
#define VF_INT_DYN_CTLN_INTERVAL_S 5
#define VF_INT_DYN_CTLN_INTERVAL_M BIT(VF_INT_DYN_CTLN_INTERVAL_S)
#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S 24
#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S)
#define VF_INT_DYN_CTLN_SW_ITR_INDX_S 25
#define VF_INT_DYN_CTLN_SW_ITR_INDX_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_S)
#define VF_INT_DYN_CTLN_WB_ON_ITR_S 30
#define VF_INT_DYN_CTLN_WB_ON_ITR_M BIT(VF_INT_DYN_CTLN_WB_ON_ITR_S)
#define VF_INT_DYN_CTLN_INTENA_MSK_S 31
#define VF_INT_DYN_CTLN_INTENA_MSK_M BIT(VF_INT_DYN_CTLN_INTENA_MSK_S)
/* _ITR is ITR index, _INT is interrupt index, _itrn_indx_spacing is spacing
* b/w itrn registers of the same vector
*/
#define VF_INT_ITR0(_ITR) (0x00004C00 + ((_ITR) * 4))
#define VF_INT_ITRN_ADDR(_ITR, _reg_start, _itrn_indx_spacing) \
((_reg_start) + ((_ITR) * (_itrn_indx_spacing)))
/* For VF with 16 vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
* is 0x40 and base register offset is 0x00002800
*/
#define VF_INT_ITRN(_INT, _ITR) \
(0x00002800 + ((_INT) * 4) + ((_ITR) * 0x40))
/* For VF with 64 vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
* is 0x100 and base register offset is 0x00002C00
*/
#define VF_INT_ITRN_64(_INT, _ITR) \
(0x00002C00 + ((_INT) * 4) + ((_ITR) * 0x100))
/* For VF with 2k vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
* is 0x2000 and base register offset is 0x00072000
*/
#define VF_INT_ITRN_2K(_INT, _ITR) \
(0x00072000 + ((_INT) * 4) + ((_ITR) * 0x2000))
#define VF_INT_ITRN_MAX_INDEX 2
#define VF_INT_ITRN_INTERVAL_S 0
#define VF_INT_ITRN_INTERVAL_M GENMASK(11, 0)
#define VF_INT_PBA_CLEAR 0x00008900
#define VF_INT_ICR0_ENA1 0x00005000
#define VF_INT_ICR0_ENA1_ADMINQ_S 30
#define VF_INT_ICR0_ENA1_ADMINQ_M BIT(VF_INT_ICR0_ENA1_ADMINQ_S)
#define VF_INT_ICR0_ENA1_RSVD_S 31
#define VF_INT_ICR01 0x00004800
#define VF_QF_HENA(_i) (0x0000C400 + ((_i) * 4))
#define VF_QF_HENA_MAX_INDX 1
#define VF_QF_HKEY(_i) (0x0000CC00 + ((_i) * 4))
#define VF_QF_HKEY_MAX_INDX 12
#define VF_QF_HLUT(_i) (0x0000D000 + ((_i) * 4))
#define VF_QF_HLUT_MAX_INDX 15
#endif
(collapsed file diff not shown)
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
#include "idpf_devids.h"
#define DRV_SUMMARY "Intel(R) Infrastructure Data Path Function Linux Driver"
MODULE_DESCRIPTION(DRV_SUMMARY);
MODULE_LICENSE("GPL");
/**
* idpf_remove - Device removal routine
* @pdev: PCI device information struct
*/
static void idpf_remove(struct pci_dev *pdev)
{
struct idpf_adapter *adapter = pci_get_drvdata(pdev);
int i;
set_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
/* Wait until vc_event_task is done to consider if any hard reset is
* in progress else we may go ahead and release the resources but the
* thread doing the hard reset might continue the init path and
* end up in bad state.
*/
cancel_delayed_work_sync(&adapter->vc_event_task);
if (adapter->num_vfs)
idpf_sriov_configure(pdev, 0);
idpf_vc_core_deinit(adapter);
/* Be a good citizen and leave the device clean on exit */
adapter->dev_ops.reg_ops.trigger_reset(adapter, IDPF_HR_FUNC_RESET);
idpf_deinit_dflt_mbx(adapter);
if (!adapter->netdevs)
goto destroy_wqs;
/* There are some cases where it's possible to still have netdevs
* registered with the stack at this point, e.g. if the driver detected
* a HW reset and rmmod is called before it fully recovers. Unregister
* any stale netdevs here.
*/
for (i = 0; i < adapter->max_vports; i++) {
if (!adapter->netdevs[i])
continue;
if (adapter->netdevs[i]->reg_state != NETREG_UNINITIALIZED)
unregister_netdev(adapter->netdevs[i]);
free_netdev(adapter->netdevs[i]);
adapter->netdevs[i] = NULL;
}
destroy_wqs:
destroy_workqueue(adapter->init_wq);
destroy_workqueue(adapter->serv_wq);
destroy_workqueue(adapter->mbx_wq);
destroy_workqueue(adapter->stats_wq);
destroy_workqueue(adapter->vc_event_wq);
for (i = 0; i < adapter->max_vports; i++) {
kfree(adapter->vport_config[i]);
adapter->vport_config[i] = NULL;
}
kfree(adapter->vport_config);
adapter->vport_config = NULL;
kfree(adapter->netdevs);
adapter->netdevs = NULL;
mutex_destroy(&adapter->vport_ctrl_lock);
mutex_destroy(&adapter->vector_lock);
mutex_destroy(&adapter->queue_lock);
mutex_destroy(&adapter->vc_buf_lock);
pci_set_drvdata(pdev, NULL);
kfree(adapter);
}
/**
* idpf_shutdown - PCI callback for shutting down device
* @pdev: PCI device information struct
*/
static void idpf_shutdown(struct pci_dev *pdev)
{
idpf_remove(pdev);
if (system_state == SYSTEM_POWER_OFF)
pci_set_power_state(pdev, PCI_D3hot);
}
/**
* idpf_cfg_hw - Initialize HW struct
* @adapter: adapter to setup hw struct for
*
* Returns 0 on success, negative on failure
*/
static int idpf_cfg_hw(struct idpf_adapter *adapter)
{
struct pci_dev *pdev = adapter->pdev;
struct idpf_hw *hw = &adapter->hw;
hw->hw_addr = pcim_iomap_table(pdev)[0];
if (!hw->hw_addr) {
pci_err(pdev, "failed to allocate PCI iomap table\n");
return -ENOMEM;
}
hw->back = adapter;
return 0;
}
/**
* idpf_probe - Device initialization routine
* @pdev: PCI device information struct
* @ent: entry in idpf_pci_tbl
*
* Returns 0 on success, negative on failure
*/
static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct device *dev = &pdev->dev;
struct idpf_adapter *adapter;
int err;
adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
if (!adapter)
return -ENOMEM;
adapter->req_tx_splitq = true;
adapter->req_rx_splitq = true;
switch (ent->device) {
case IDPF_DEV_ID_PF:
idpf_dev_ops_init(adapter);
break;
case IDPF_DEV_ID_VF:
idpf_vf_dev_ops_init(adapter);
adapter->crc_enable = true;
break;
default:
err = -ENODEV;
dev_err(&pdev->dev, "Unexpected dev ID 0x%x in idpf probe\n",
ent->device);
goto err_free;
}
adapter->pdev = pdev;
err = pcim_enable_device(pdev);
if (err)
goto err_free;
err = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
if (err) {
pci_err(pdev, "pcim_iomap_regions failed %pe\n", ERR_PTR(err));
goto err_free;
}
/* set up for high or low dma */
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (err) {
pci_err(pdev, "DMA configuration failed: %pe\n", ERR_PTR(err));
goto err_free;
}
pci_set_master(pdev);
pci_set_drvdata(pdev, adapter);
adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->init_wq) {
dev_err(dev, "Failed to allocate init workqueue\n");
err = -ENOMEM;
goto err_free;
}
adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->serv_wq) {
dev_err(dev, "Failed to allocate service workqueue\n");
err = -ENOMEM;
goto err_serv_wq_alloc;
}
adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->mbx_wq) {
dev_err(dev, "Failed to allocate mailbox workqueue\n");
err = -ENOMEM;
goto err_mbx_wq_alloc;
}
adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->stats_wq) {
dev_err(dev, "Failed to allocate workqueue\n");
err = -ENOMEM;
goto err_stats_wq_alloc;
}
adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->vc_event_wq) {
dev_err(dev, "Failed to allocate virtchnl event workqueue\n");
err = -ENOMEM;
goto err_vc_event_wq_alloc;
}
/* setup msglvl */
adapter->msg_enable = netif_msg_init(-1, IDPF_AVAIL_NETIF_M);
err = idpf_cfg_hw(adapter);
if (err) {
dev_err(dev, "Failed to configure HW structure for adapter: %d\n",
err);
goto err_cfg_hw;
}
mutex_init(&adapter->vport_ctrl_lock);
mutex_init(&adapter->vector_lock);
mutex_init(&adapter->queue_lock);
mutex_init(&adapter->vc_buf_lock);
init_waitqueue_head(&adapter->vchnl_wq);
INIT_DELAYED_WORK(&adapter->init_task, idpf_init_task);
INIT_DELAYED_WORK(&adapter->serv_task, idpf_service_task);
INIT_DELAYED_WORK(&adapter->mbx_task, idpf_mbx_task);
INIT_DELAYED_WORK(&adapter->stats_task, idpf_statistics_task);
INIT_DELAYED_WORK(&adapter->vc_event_task, idpf_vc_event_task);
adapter->dev_ops.reg_ops.reset_reg_init(adapter);
set_bit(IDPF_HR_DRV_LOAD, adapter->flags);
queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
msecs_to_jiffies(10 * (pdev->devfn & 0x07)));
return 0;
err_cfg_hw:
destroy_workqueue(adapter->vc_event_wq);
err_vc_event_wq_alloc:
destroy_workqueue(adapter->stats_wq);
err_stats_wq_alloc:
destroy_workqueue(adapter->mbx_wq);
err_mbx_wq_alloc:
destroy_workqueue(adapter->serv_wq);
err_serv_wq_alloc:
destroy_workqueue(adapter->init_wq);
err_free:
kfree(adapter);
return err;
}
/* idpf_pci_tbl - PCI Dev idpf ID Table
*/
static const struct pci_device_id idpf_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, IDPF_DEV_ID_PF)},
{ PCI_VDEVICE(INTEL, IDPF_DEV_ID_VF)},
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(pci, idpf_pci_tbl);
static struct pci_driver idpf_driver = {
.name = KBUILD_MODNAME,
.id_table = idpf_pci_tbl,
.probe = idpf_probe,
.sriov_configure = idpf_sriov_configure,
.remove = idpf_remove,
.shutdown = idpf_shutdown,
};
module_pci_driver(idpf_driver);
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_MEM_H_
#define _IDPF_MEM_H_
#include <linux/io.h>
struct idpf_dma_mem {
void *va;
dma_addr_t pa;
size_t size;
};
#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
#define rd32(a, reg) readl((a)->hw_addr + (reg))
#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg)))
#define rd64(a, reg) readq((a)->hw_addr + (reg))
#endif /* _IDPF_MEM_H_ */
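A minimal sketch of these accessors in use, assuming the register offset is
valid within the region mapped at hw->hw_addr (the driver itself resolves
offsets through idpf_get_reg_addr() before accessing them):

/* Illustrative only: read-modify-write the PF software reset bit */
static void example_pf_sw_reset(struct idpf_hw *hw)
{
	u32 pfgen_ctrl = rd32(hw, PFGEN_CTRL);

	wr32(hw, PFGEN_CTRL, pfgen_ctrl | PFGEN_CTRL_PFSWR);
}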
(collapsed file diffs not shown)
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
#include "idpf_lan_vf_regs.h"
#define IDPF_VF_ITR_IDX_SPACING 0x40
/**
* idpf_vf_ctlq_reg_init - initialize default mailbox registers
* @cq: pointer to the array of create control queues
*/
static void idpf_vf_ctlq_reg_init(struct idpf_ctlq_create_info *cq)
{
int i;
for (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {
struct idpf_ctlq_create_info *ccq = cq + i;
switch (ccq->type) {
case IDPF_CTLQ_TYPE_MAILBOX_TX:
/* set head and tail registers in our local struct */
ccq->reg.head = VF_ATQH;
ccq->reg.tail = VF_ATQT;
ccq->reg.len = VF_ATQLEN;
ccq->reg.bah = VF_ATQBAH;
ccq->reg.bal = VF_ATQBAL;
ccq->reg.len_mask = VF_ATQLEN_ATQLEN_M;
ccq->reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
ccq->reg.head_mask = VF_ATQH_ATQH_M;
break;
case IDPF_CTLQ_TYPE_MAILBOX_RX:
/* set head and tail registers in our local struct */
ccq->reg.head = VF_ARQH;
ccq->reg.tail = VF_ARQT;
ccq->reg.len = VF_ARQLEN;
ccq->reg.bah = VF_ARQBAH;
ccq->reg.bal = VF_ARQBAL;
ccq->reg.len_mask = VF_ARQLEN_ARQLEN_M;
ccq->reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
ccq->reg.head_mask = VF_ARQH_ARQH_M;
break;
default:
break;
}
}
}
/**
* idpf_vf_mb_intr_reg_init - Initialize the mailbox register
* @adapter: adapter structure
*/
static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter)
{
struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
u32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);
intr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);
intr->dyn_ctl_intena_m = VF_INT_DYN_CTL0_INTENA_M;
intr->dyn_ctl_itridx_m = VF_INT_DYN_CTL0_ITR_INDX_M;
intr->icr_ena = idpf_get_reg_addr(adapter, VF_INT_ICR0_ENA1);
intr->icr_ena_ctlq_m = VF_INT_ICR0_ENA1_ADMINQ_M;
}
/**
* idpf_vf_intr_reg_init - Initialize interrupt registers
* @vport: virtual port structure
*/
static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
int num_vecs = vport->num_q_vectors;
struct idpf_vec_regs *reg_vals;
int num_regs, i, err = 0;
u32 rx_itr, tx_itr;
u16 total_vecs;
total_vecs = idpf_get_reserved_vecs(vport->adapter);
reg_vals = kcalloc(total_vecs, sizeof(struct idpf_vec_regs),
GFP_KERNEL);
if (!reg_vals)
return -ENOMEM;
num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
if (num_regs < num_vecs) {
err = -EINVAL;
goto free_reg_vals;
}
for (i = 0; i < num_vecs; i++) {
struct idpf_q_vector *q_vector = &vport->q_vectors[i];
u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
struct idpf_intr_reg *intr = &q_vector->intr_reg;
u32 spacing;
intr->dyn_ctl = idpf_get_reg_addr(adapter,
reg_vals[vec_id].dyn_ctl_reg);
intr->dyn_ctl_intena_m = VF_INT_DYN_CTLN_INTENA_M;
intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S;
spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
IDPF_VF_ITR_IDX_SPACING);
rx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_0,
reg_vals[vec_id].itrn_reg,
spacing);
tx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_1,
reg_vals[vec_id].itrn_reg,
spacing);
intr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);
intr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);
}
free_reg_vals:
kfree(reg_vals);
return err;
}
/**
* idpf_vf_reset_reg_init - Initialize reset registers
* @adapter: Driver specific private structure
*/
static void idpf_vf_reset_reg_init(struct idpf_adapter *adapter)
{
adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, VFGEN_RSTAT);
adapter->reset_reg.rstat_m = VFGEN_RSTAT_VFR_STATE_M;
}
/**
* idpf_vf_trigger_reset - trigger reset
* @adapter: Driver specific private structure
* @trig_cause: Reason to trigger a reset
*/
static void idpf_vf_trigger_reset(struct idpf_adapter *adapter,
enum idpf_flags trig_cause)
{
/* Do not send VIRTCHNL2_OP_RESET_VF message on driver unload */
if (trig_cause == IDPF_HR_FUNC_RESET &&
!test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL);
}
/**
* idpf_vf_reg_ops_init - Initialize register API function pointers
* @adapter: Driver specific private structure
*/
static void idpf_vf_reg_ops_init(struct idpf_adapter *adapter)
{
adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_vf_ctlq_reg_init;
adapter->dev_ops.reg_ops.intr_reg_init = idpf_vf_intr_reg_init;
adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_vf_mb_intr_reg_init;
adapter->dev_ops.reg_ops.reset_reg_init = idpf_vf_reset_reg_init;
adapter->dev_ops.reg_ops.trigger_reset = idpf_vf_trigger_reset;
}
/**
* idpf_vf_dev_ops_init - Initialize device API function pointers
* @adapter: Driver specific private structure
*/
void idpf_vf_dev_ops_init(struct idpf_adapter *adapter)
{
idpf_vf_reg_ops_init(adapter);
}
(collapsed file diffs not shown)