Commit b6a7eeb4 authored by David S. Miller

Merge branch '200GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
Introduce Intel IDPF driver

Pavan Kumar Linga says:

This patch series introduces the Intel Infrastructure Data Path Function
(IDPF) driver. It is used for both physical and virtual functions. Except
for some of the device operations, the rest of the functionality is the
same for both PF and VF. IDPF uses virtchnl version 2 opcodes and
structures defined in the virtchnl2 header file, which helps the driver
learn the capabilities and register offsets from the device
Control Plane (CP) instead of assuming default values.

The series is organized to follow the driver init flow through to interface
open. To start with, probe gets called and kicks off the driver
initialization by spawning the 'vc_event_task' work queue, which in turn
calls the 'hard reset' function. As part of that, the mailbox is initialized,
which is used to send/receive the virtchnl messages to/from the CP. Once that
is done, 'core init' kicks in, which requests all the required global
resources from the CP and spawns the 'init_task' work queue to create the
vports.

Based on the capability information received, the driver creates the
corresponding number of vports (one or many), where each vport is associated
with a netdev. Each vport also has its own resources such as queues, vectors,
etc. From there, the rest of the netdev_ops and the data path are added.
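
To make the above ordering concrete, here is a condensed sketch (not the
driver's literal probe code; it leans on the workers and helpers declared
later in this series and omits PCI setup, workqueue allocation and error
handling):

  /* Sketch only: simplified view of the init flow described above. */
  static int idpf_probe_flow_sketch(struct idpf_adapter *adapter)
  {
  	/* vc_event_task drives the 'hard reset' path */
  	INIT_DELAYED_WORK(&adapter->vc_event_task, idpf_vc_event_task);
  	queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
  			   msecs_to_jiffies(10));

  	/* The reset path then initializes the mailbox used to talk to the
  	 * CP and runs core init, which requests the global resources and
  	 * spawns 'init_task' to create the vports:
  	 *
  	 *	idpf_init_dflt_mbx(adapter);
  	 *	idpf_vc_core_init(adapter);
  	 */
  	return 0;
  }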

IDPF implements both the single queue model, which is the traditional
queueing model, and the split queue model. In the split queue model, it uses
separate queues for completion descriptors and buffers, which helps to
implement out-of-order completions. It also helps to implement asymmetric
queues; for example, multiple RX completion queues can be processed by a
single RX buffer queue, and multiple TX buffer queues can be processed by a
single TX completion queue. In the single queue model, the same queue is used
for both descriptor completions and buffer completions. The driver also
supports features such as generic checksum offload, generic receive
offload (hardware GRO), etc.
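
The TX and RX models are tracked separately in the vport (txq_model and
rxq_model); as a small, hedged illustration (using the
idpf_is_queue_model_split() helper added in this series), code only needs
to branch on the stored model:

  /* Sketch: branch on the negotiated RX queue model. */
  static void idpf_rx_model_sketch(struct idpf_vport *vport)
  {
  	if (idpf_is_queue_model_split(vport->rxq_model)) {
  		/* completions arrive on RX queues while buffers are
  		 * refilled on separate buffer queues, possibly out of
  		 * order
  		 */
  	} else {
  		/* the same ring carries the buffer and its completion */
  	}
  }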
---
v7:
Patch 2:
 * removed pci_[disable|enable]_pcie_error_reporting as they are dropped
   from the core
Patch 4, 9:
 * used 'kasprintf' instead of 'snprintf' to avoid providing explicit
   character string size which also fixes "-Wformat-truncation" warnings
Patch 14:
 * used 'ethtool_sprintf' instead of 'snprintf' to avoid providing explicit
   character string size which also fixes "-Wformat-truncation" warning
 * added string format argument to 'ethtool_sprintf' to avoid warning on
   "-Wformat-security"

v6: https://lore.kernel.org/netdev/20230825235954.894050-1-pavan.kumar.linga@intel.com/
Note: 'Acked-by' was only added to patches 1, 2, 12 and not to the other
   patches because of the changes in v6

Patch 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15:
 * renamed 'reset_lock' to 'vport_ctrl_lock' to reflect the lock usage
 * to avoid defensive programming, used 'vport_ctrl_lock' for the user
   callbacks that access the 'vport' to prevent the hardware reset thread
   from releasing the 'vport', when the user callback is in progress
 * added some variables to netdev private structure to avoid vport access
   if possible from ethtool and ndo callbacks
 * moved 'mac_filter_list_lock' and MAC related flags to vport_config
   structure and refactored mac filter flow to handle asynchronous
   ndo mac filter callbacks
 * stop the queues before starting the reset flow to avoid TX hangs
 * removed 'sw_mutex' and 'stop_mutex' as they are not needed anymore
 * added missing clear bit in 'init_task' error path
 * renamed labels appropriately
Patch 8:
 * replaced page_pool_put_page with page_pool_put_full_page
 * for the page pool max_len, used PAGE_SIZE
Patch 10, 11, 13:
 * made use of the 'netif_txq_maybe_stop', '__netif_txq_completed_wake'
   helper macros
Patch 13:
 * removed IDPF_HR_RESET_IN_PROG flag check in idpf_tx_singleq_start
   as it is defensive
Patch 14:
 * removed max descriptor check as the core does that
 * removed unnecessary error messages
 * removed the stats that are common between the ones reported by ethtool
   and ip link
 * replaced snprintf with ethtool_sprintf
 * added a comment to explain the reason for the max queue check
 * as the netdev queues are set on alloc, there is no need to set
   them again on reset unless there is a queue change, so move the
   'idpf_set_real_num_queues' to 'idpf_initiate_soft_reset'
Patch 15:
 * reworded the 'configure SRIOV' in the commit message

v5: https://lore.kernel.org/netdev/20230816004305.216136-1-anthony.l.nguyen@intel.com/
Most Patches:
 * wrapped lines to the 80 char limit where it doesn't affect readability
Patch 12:
 * in skb_add_rx_frag, offset 'headlen' w.r.t page_offset when adding a
   frag to avoid adding the header again
Patch 14:
 * added NULL check for 'rxq' when dereferencing it in page_pool_get_stats

v4: https://lore.kernel.org/netdev/20230808003416.3805142-1-anthony.l.nguyen@intel.com/
Patch 1:
 * s/virtcnl/virtchnl
 * removed the kernel doc for the error code definitions that don't exist
 * reworded the summary part in the virtchnl2 header
Patch 3:
 * don't set local variable to NULL on error
 * renamed sq_send_command_out label with err_unlock
 * don't use __GFP_ZERO in dma_alloc_coherent
Patch 4:
 * introduced mailbox workqueue to process mailbox interrupts
Patch 3, 4, 5, 6, 7, 8, 9, 11, 15:
 * removed unnecessary variable 0-init
Patch 3, 5, 7, 8, 9, 15:
 * removed defensive programming checks wherever applicable
 * removed IDPF_CAP_FIELD_LAST as it can be treated as defensive
   programming
Patch 3, 4, 5, 6, 7:
 * replaced IDPF_DFLT_MBX_BUF_SIZE with IDPF_CTLQ_MAX_BUF_LEN
Patch 2 to 15:
 * add kernel-doc for idpf.h and idpf_txrx.h enums and structures
Patch 4, 5, 15:
 * adjusted the destroy sequence of the workqueues as per the alloc
   sequence
Patch 4, 5, 9, 15:
 * scrub unnecessary flags in 'idpf_flags'
   - IDPF_REMOVE_IN_PROG flag can take care of the cases where
     IDPF_REL_RES_IN_PROG is used, removed the latter one
   - IDPF_REQ_[TX|RX]_SPLITQ are replaced with struct variables
   - IDPF_CANCEL_[SERVICE|STATS]_TASK are redundant as the work queue
     doesn't get rescheduled again after 'cancel_delayed_work_sync'
   - IDPF_HR_CORE_RESET is removed as there is no set_bit for this flag
   - IDPF_MB_INTR_TRIGGER is removed as it is not needed anymore with the
     mailbox workqueue implementation
Patch 7 to 15:
 * replaced the custom buffer recycling code with page pool API
 * switched the header split buffer allocations from using a bunch of
   pages to using one large chunk of DMA memory
 * reordered some of the flows in vport_open to support page pool
Patch 8, 12:
 * don't suppress the alloc errors by using __GFP_NOWARN
Patch 9:
 * removed dyn_ctl_clrpba_m as it is not being used
Patch 14:
 * introduced enum idpf_vport_reset_cause instead of using vport flags
 * introduced page pool stats

v3: https://lore.kernel.org/netdev/20230616231341.2885622-1-anthony.l.nguyen@intel.com/
Patch 5:
 * instead of void, used 'struct virtchnl2_create_vport' type for
   vport_params_recvd and vport_params_reqd and removed the typecasting
 * used u16/u32 as needed instead of int for variables which cannot be
   negative and updated them in all the places wherever applicable
Patch 6:
 * changed the commit message to "add ptypes and MAC filter support"
 * used the sender Signed-off-by as the last tag on all the patches
 * removed unnecessary variables 0-init
 * instead of fixing the code in this commit, fixed it in the commit
   where the change was introduced first
 * moved get_type_info struct onto the stack instead of memory alloc
 * moved mutex_lock and ptype_info memory alloc outside while loop and
   adjusted the return flow
 * used 'break' instead of 'continue' in ptype id switch case

v2: https://lore.kernel.org/netdev/20230614171428.1504179-1-anthony.l.nguyen@intel.com/
Patch 2:
 * added "Intel(R)" to the DRV_SUMMARY and Makefile.
Patch 4, 5, 6, 15:
 * replaced IDPF_VC_MSG_PENDING flag with mutex 'vc_buf_lock' for the
   adapter related virtchnl opcodes.
 * get the mutex lock in the virtchnl send thread itself instead of
   in receive thread.
Patch 5, 6, 7, 8, 9, 11, 14, 15:
 * replaced IDPF_VPORT_VC_MSG_PENDING flag with mutex 'vc_buf_lock' for
   the vport related virtchnl opcodes.
 * get the mutex lock in the virtchnl send thread itself instead of
   in receive thread.
Patch 6:
 * converted get_ptype_info logic from 1:N to 1:1 message exchange for
   better handling of mutex lock.
Patch 15:
 * introduced 'stats_lock' spinlock to avoid concurrent stats update.

v1: https://lore.kernel.org/netdev/20230530234501.2680230-1-anthony.l.nguyen@intel.com/

====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 2fa6175d a251eee6
@@ -32,6 +32,7 @@ Contents:
intel/e1000
intel/e1000e
intel/fm10k
intel/idpf
intel/igb
intel/igbvf
intel/ixgbe
.. SPDX-License-Identifier: GPL-2.0+
==========================================================================
idpf Linux* Base Driver for the Intel(R) Infrastructure Data Path Function
==========================================================================
Intel idpf Linux driver.
Copyright(C) 2023 Intel Corporation.
.. contents::
The idpf driver serves as both the Physical Function (PF) and Virtual Function
(VF) driver for the Intel(R) Infrastructure Data Path Function.
Driver information can be obtained using ethtool, lspci, and ip.
For questions related to hardware requirements, refer to the documentation
supplied with your Intel adapter. All hardware requirements listed apply to use
with Linux.
Identifying Your Adapter
========================
For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
http://www.intel.com/support
Additional Features and Configurations
======================================
ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. If you don't have one yet, you can
obtain it at:
https://kernel.org/pub/software/network/ethtool/
Viewing Link Messages
---------------------
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following::
# dmesg -n 8
.. note::
This setting is not saved across reboots.
Jumbo Frames
------------
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.
Use the ip command to increase the MTU size. For example, enter the following
where <ethX> is the interface number::
# ip link set mtu 9000 dev <ethX>
# ip link set up dev <ethX>
.. note::
The maximum MTU setting for jumbo frames is 9706. This corresponds to the
maximum jumbo frame size of 9728 bytes.
.. note::
This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.
.. note::
Packet loss may have a greater impact on throughput when you use jumbo
frames. If you observe a drop in performance after enabling jumbo frames,
enabling flow control may mitigate the issue.
Performance Optimization
========================
Driver defaults are meant to fit a wide variety of workloads, but if further
optimization is required, we recommend experimenting with the following
settings.
Interrupt Rate Limiting
-----------------------
This driver supports an adaptive interrupt throttle rate (ITR) mechanism that
is tuned for general workloads. The user can customize the interrupt rate
control for specific workloads, via ethtool, adjusting the number of
microseconds between interrupts.
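.. note::
Interrupts per second per queue is approximately 1,000,000 / usecs; for
example, rx-usecs 80 works out to roughly 1,000,000 / 80 = 12,500
interrupts/second, and rx-usecs 10 to roughly 100,000 interrupts/second.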
To set the interrupt rate manually, you must disable adaptive mode::
# ethtool -C <ethX> adaptive-rx off adaptive-tx off
For lower CPU utilization:
- Disable adaptive ITR and lower Rx and Tx interrupts. The examples below
affect every queue of the specified interface.
- Setting rx-usecs and tx-usecs to 80 will limit interrupts to about
12,500 interrupts per second per queue::
# ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80
tx-usecs 80
For reduced latency:
- Disable adaptive ITR and ITR by setting rx-usecs and tx-usecs to 0
using ethtool::
# ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0
tx-usecs 0
Per-queue interrupt rate settings:
- The following examples are for queues 1 and 3, but you can adjust other
queues.
- To disable Rx adaptive ITR and set static Rx ITR to 10 microseconds or
about 100,000 interrupts/second, for queues 1 and 3::
# ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
rx-usecs 10
- To show the current coalesce settings for queues 1 and 3::
# ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce
Virtualized Environments
------------------------
In addition to the other suggestions in this section, the following may be
helpful to optimize performance in VMs.
- Using the appropriate mechanism (vcpupin) in the VM, pin the CPUs to
individual LCPUs, making sure to use a set of CPUs included in the
device's local_cpulist: /sys/class/net/<ethX>/device/local_cpulist.
- Configure as many Rx/Tx queues in the VM as available. (See the idpf driver
documentation for the number of queues supported.) For example::
# ethtool -L <virt_interface> rx <max> tx <max>
Support
=======
For general information, go to the Intel support website at:
http://www.intel.com/support/
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to intel-wired-lan@lists.osuosl.org.
Trademarks
==========
Intel is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and/or other countries.
* Other names and brands may be claimed as the property of others.
@@ -355,5 +355,17 @@ config IGC
To compile this driver as a module, choose M here. The module
will be called igc.
config IDPF
tristate "Intel(R) Infrastructure Data Path Function Support"
depends on PCI_MSI
select DIMLIB
select PAGE_POOL
select PAGE_POOL_STATS
help
This driver supports Intel(R) Infrastructure Data Path Function
devices.
To compile this driver as a module, choose M here. The module
will be called idpf.
endif # NET_VENDOR_INTEL
@@ -15,3 +15,4 @@ obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IAVF) += iavf/
obj-$(CONFIG_FM10K) += fm10k/
obj-$(CONFIG_ICE) += ice/
obj-$(CONFIG_IDPF) += idpf/
# SPDX-License-Identifier: GPL-2.0-only
# Copyright (C) 2023 Intel Corporation
# Makefile for Intel(R) Infrastructure Data Path Function Linux Driver
obj-$(CONFIG_IDPF) += idpf.o
idpf-y := \
idpf_controlq.o \
idpf_controlq_setup.o \
idpf_dev.o \
idpf_ethtool.o \
idpf_lib.o \
idpf_main.o \
idpf_singleq_txrx.o \
idpf_txrx.o \
idpf_virtchnl.o \
idpf_vf_dev.o
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_H_
#define _IDPF_H_
/* Forward declaration */
struct idpf_adapter;
struct idpf_vport;
struct idpf_vport_max_q;
#include <net/pkt_sched.h>
#include <linux/aer.h>
#include <linux/etherdevice.h>
#include <linux/pci.h>
#include <linux/bitfield.h>
#include <linux/sctp.h>
#include <linux/ethtool.h>
#include <net/gro.h>
#include <linux/dim.h>
#include "virtchnl2.h"
#include "idpf_lan_txrx.h"
#include "idpf_txrx.h"
#include "idpf_controlq.h"
#define GETMAXVAL(num_bits) GENMASK((num_bits) - 1, 0)
#define IDPF_NO_FREE_SLOT 0xffff
/* Default Mailbox settings */
#define IDPF_NUM_FILTERS_PER_MSG 20
#define IDPF_NUM_DFLT_MBX_Q 2 /* includes both TX and RX */
#define IDPF_DFLT_MBX_Q_LEN 64
#define IDPF_DFLT_MBX_ID -1
/* maximum number of times to try before resetting mailbox */
#define IDPF_MB_MAX_ERR 20
#define IDPF_NUM_CHUNKS_PER_MSG(struct_sz, chunk_sz) \
((IDPF_CTLQ_MAX_BUF_LEN - (struct_sz)) / (chunk_sz))
#define IDPF_WAIT_FOR_EVENT_TIMEO_MIN 2000
#define IDPF_WAIT_FOR_EVENT_TIMEO 60000
#define IDPF_MAX_WAIT 500
/* available message levels */
#define IDPF_AVAIL_NETIF_M (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)
#define IDPF_DIM_PROFILE_SLOTS 5
#define IDPF_VIRTCHNL_VERSION_MAJOR VIRTCHNL2_VERSION_MAJOR_2
#define IDPF_VIRTCHNL_VERSION_MINOR VIRTCHNL2_VERSION_MINOR_0
/**
* struct idpf_mac_filter
* @list: list member field
* @macaddr: MAC address
* @remove: filter should be removed (virtchnl)
* @add: filter should be added (virtchnl)
*/
struct idpf_mac_filter {
struct list_head list;
u8 macaddr[ETH_ALEN];
bool remove;
bool add;
};
/**
* enum idpf_state - State machine to handle bring up
* @__IDPF_STARTUP: Start the state machine
* @__IDPF_VER_CHECK: Negotiate virtchnl version
* @__IDPF_GET_CAPS: Negotiate capabilities
* @__IDPF_INIT_SW: Init based on given capabilities
* @__IDPF_STATE_LAST: Must be last, used to determine size
*/
enum idpf_state {
__IDPF_STARTUP,
__IDPF_VER_CHECK,
__IDPF_GET_CAPS,
__IDPF_INIT_SW,
__IDPF_STATE_LAST,
};
/**
* enum idpf_flags - Hard reset causes.
* @IDPF_HR_FUNC_RESET: Hard reset when TxRx timeout
* @IDPF_HR_DRV_LOAD: Set on driver load for a clean HW
* @IDPF_HR_RESET_IN_PROG: Reset in progress
* @IDPF_REMOVE_IN_PROG: Driver remove in progress
* @IDPF_MB_INTR_MODE: Mailbox in interrupt mode
* @IDPF_FLAGS_NBITS: Must be last
*/
enum idpf_flags {
IDPF_HR_FUNC_RESET,
IDPF_HR_DRV_LOAD,
IDPF_HR_RESET_IN_PROG,
IDPF_REMOVE_IN_PROG,
IDPF_MB_INTR_MODE,
IDPF_FLAGS_NBITS,
};
/**
* enum idpf_cap_field - Offsets into capabilities struct for specific caps
* @IDPF_BASE_CAPS: generic base capabilities
* @IDPF_CSUM_CAPS: checksum offload capabilities
* @IDPF_SEG_CAPS: segmentation offload capabilities
* @IDPF_RSS_CAPS: RSS offload capabilities
* @IDPF_HSPLIT_CAPS: Header split capabilities
* @IDPF_RSC_CAPS: RSC offload capabilities
* @IDPF_OTHER_CAPS: miscellaneous offloads
*
* Used when checking for a specific capability flag since different capability
* sets are not mutually exclusive numerically, the caller must specify which
* type of capability they are checking for.
*/
enum idpf_cap_field {
IDPF_BASE_CAPS = -1,
IDPF_CSUM_CAPS = offsetof(struct virtchnl2_get_capabilities,
csum_caps),
IDPF_SEG_CAPS = offsetof(struct virtchnl2_get_capabilities,
seg_caps),
IDPF_RSS_CAPS = offsetof(struct virtchnl2_get_capabilities,
rss_caps),
IDPF_HSPLIT_CAPS = offsetof(struct virtchnl2_get_capabilities,
hsplit_caps),
IDPF_RSC_CAPS = offsetof(struct virtchnl2_get_capabilities,
rsc_caps),
IDPF_OTHER_CAPS = offsetof(struct virtchnl2_get_capabilities,
other_caps),
};
/**
* enum idpf_vport_state - Current vport state
* @__IDPF_VPORT_DOWN: Vport is down
* @__IDPF_VPORT_UP: Vport is up
* @__IDPF_VPORT_STATE_LAST: Must be last, number of states
*/
enum idpf_vport_state {
__IDPF_VPORT_DOWN,
__IDPF_VPORT_UP,
__IDPF_VPORT_STATE_LAST,
};
/**
* struct idpf_netdev_priv - Struct to store vport back pointer
* @adapter: Adapter back pointer
* @vport: Vport back pointer
* @vport_id: Vport identifier
* @vport_idx: Relative vport index
* @state: See enum idpf_vport_state
* @netstats: Packet and byte stats
* @stats_lock: Lock to protect stats update
*/
struct idpf_netdev_priv {
struct idpf_adapter *adapter;
struct idpf_vport *vport;
u32 vport_id;
u16 vport_idx;
enum idpf_vport_state state;
struct rtnl_link_stats64 netstats;
spinlock_t stats_lock;
};
/**
* struct idpf_reset_reg - Reset register offsets/masks
* @rstat: Reset status register
* @rstat_m: Reset status mask
*/
struct idpf_reset_reg {
void __iomem *rstat;
u32 rstat_m;
};
/**
* struct idpf_vport_max_q - Queue limits
* @max_rxq: Maximum number of RX queues supported
* @max_txq: Maximum number of TX queues supported
* @max_bufq: In splitq, maximum number of buffer queues supported
* @max_complq: In splitq, maximum number of completion queues supported
*/
struct idpf_vport_max_q {
u16 max_rxq;
u16 max_txq;
u16 max_bufq;
u16 max_complq;
};
/**
* struct idpf_reg_ops - Device specific register operation function pointers
* @ctlq_reg_init: Mailbox control queue register initialization
* @intr_reg_init: Traffic interrupt register initialization
* @mb_intr_reg_init: Mailbox interrupt register initialization
* @reset_reg_init: Reset register initialization
* @trigger_reset: Trigger a reset to occur
*/
struct idpf_reg_ops {
void (*ctlq_reg_init)(struct idpf_ctlq_create_info *cq);
int (*intr_reg_init)(struct idpf_vport *vport);
void (*mb_intr_reg_init)(struct idpf_adapter *adapter);
void (*reset_reg_init)(struct idpf_adapter *adapter);
void (*trigger_reset)(struct idpf_adapter *adapter,
enum idpf_flags trig_cause);
};
/**
* struct idpf_dev_ops - Device specific operations
* @reg_ops: Register operations
*/
struct idpf_dev_ops {
struct idpf_reg_ops reg_ops;
};
/* These macros allow us to generate an enum and a matching char * array of
* stringified enums that are always in sync. Checkpatch issues a bogus warning
* about this being a complex macro; but it's wrong, these are never used as a
* statement and instead only used to define the enum and array.
*/
#define IDPF_FOREACH_VPORT_VC_STATE(STATE) \
STATE(IDPF_VC_CREATE_VPORT) \
STATE(IDPF_VC_CREATE_VPORT_ERR) \
STATE(IDPF_VC_ENA_VPORT) \
STATE(IDPF_VC_ENA_VPORT_ERR) \
STATE(IDPF_VC_DIS_VPORT) \
STATE(IDPF_VC_DIS_VPORT_ERR) \
STATE(IDPF_VC_DESTROY_VPORT) \
STATE(IDPF_VC_DESTROY_VPORT_ERR) \
STATE(IDPF_VC_CONFIG_TXQ) \
STATE(IDPF_VC_CONFIG_TXQ_ERR) \
STATE(IDPF_VC_CONFIG_RXQ) \
STATE(IDPF_VC_CONFIG_RXQ_ERR) \
STATE(IDPF_VC_ENA_QUEUES) \
STATE(IDPF_VC_ENA_QUEUES_ERR) \
STATE(IDPF_VC_DIS_QUEUES) \
STATE(IDPF_VC_DIS_QUEUES_ERR) \
STATE(IDPF_VC_MAP_IRQ) \
STATE(IDPF_VC_MAP_IRQ_ERR) \
STATE(IDPF_VC_UNMAP_IRQ) \
STATE(IDPF_VC_UNMAP_IRQ_ERR) \
STATE(IDPF_VC_ADD_QUEUES) \
STATE(IDPF_VC_ADD_QUEUES_ERR) \
STATE(IDPF_VC_DEL_QUEUES) \
STATE(IDPF_VC_DEL_QUEUES_ERR) \
STATE(IDPF_VC_ALLOC_VECTORS) \
STATE(IDPF_VC_ALLOC_VECTORS_ERR) \
STATE(IDPF_VC_DEALLOC_VECTORS) \
STATE(IDPF_VC_DEALLOC_VECTORS_ERR) \
STATE(IDPF_VC_SET_SRIOV_VFS) \
STATE(IDPF_VC_SET_SRIOV_VFS_ERR) \
STATE(IDPF_VC_GET_RSS_LUT) \
STATE(IDPF_VC_GET_RSS_LUT_ERR) \
STATE(IDPF_VC_SET_RSS_LUT) \
STATE(IDPF_VC_SET_RSS_LUT_ERR) \
STATE(IDPF_VC_GET_RSS_KEY) \
STATE(IDPF_VC_GET_RSS_KEY_ERR) \
STATE(IDPF_VC_SET_RSS_KEY) \
STATE(IDPF_VC_SET_RSS_KEY_ERR) \
STATE(IDPF_VC_GET_STATS) \
STATE(IDPF_VC_GET_STATS_ERR) \
STATE(IDPF_VC_ADD_MAC_ADDR) \
STATE(IDPF_VC_ADD_MAC_ADDR_ERR) \
STATE(IDPF_VC_DEL_MAC_ADDR) \
STATE(IDPF_VC_DEL_MAC_ADDR_ERR) \
STATE(IDPF_VC_GET_PTYPE_INFO) \
STATE(IDPF_VC_GET_PTYPE_INFO_ERR) \
STATE(IDPF_VC_LOOPBACK_STATE) \
STATE(IDPF_VC_LOOPBACK_STATE_ERR) \
STATE(IDPF_VC_NBITS)
#define IDPF_GEN_ENUM(ENUM) ENUM,
#define IDPF_GEN_STRING(STRING) #STRING,
enum idpf_vport_vc_state {
IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_ENUM)
};
extern const char * const idpf_vport_vc_state_str[];
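/* Example (illustrative only, not necessarily the driver's exact
 * definition): a .c file keeps the string table in sync with the enum by
 * expanding the same FOREACH list:
 *
 *	const char * const idpf_vport_vc_state_str[] = {
 *		IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_STRING)
 *	};
 *
 * Adding a state to IDPF_FOREACH_VPORT_VC_STATE therefore updates both the
 * enum and the string array.
 */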
/**
* enum idpf_vport_reset_cause - Vport soft reset causes
* @IDPF_SR_Q_CHANGE: Soft reset queue change
* @IDPF_SR_Q_DESC_CHANGE: Soft reset descriptor change
* @IDPF_SR_MTU_CHANGE: Soft reset MTU change
* @IDPF_SR_RSC_CHANGE: Soft reset RSC change
*/
enum idpf_vport_reset_cause {
IDPF_SR_Q_CHANGE,
IDPF_SR_Q_DESC_CHANGE,
IDPF_SR_MTU_CHANGE,
IDPF_SR_RSC_CHANGE,
};
/**
* enum idpf_vport_flags - Vport flags
* @IDPF_VPORT_DEL_QUEUES: To send delete queues message
* @IDPF_VPORT_SW_MARKER: Indicate TX pipe drain software marker packets
* processing is done
* @IDPF_VPORT_FLAGS_NBITS: Must be last
*/
enum idpf_vport_flags {
IDPF_VPORT_DEL_QUEUES,
IDPF_VPORT_SW_MARKER,
IDPF_VPORT_FLAGS_NBITS,
};
struct idpf_port_stats {
struct u64_stats_sync stats_sync;
u64_stats_t rx_hw_csum_err;
u64_stats_t rx_hsplit;
u64_stats_t rx_hsplit_hbo;
u64_stats_t rx_bad_descs;
u64_stats_t tx_linearize;
u64_stats_t tx_busy;
u64_stats_t tx_drops;
u64_stats_t tx_dma_map_errs;
struct virtchnl2_vport_stats vport_stats;
};
/**
* struct idpf_vport - Handle for netdevices and queue resources
* @num_txq: Number of allocated TX queues
* @num_complq: Number of allocated completion queues
* @txq_desc_count: TX queue descriptor count
* @complq_desc_count: Completion queue descriptor count
* @compln_clean_budget: Work budget for completion clean
* @num_txq_grp: Number of TX queue groups
* @txq_grps: Array of TX queue groups
* @txq_model: Split queue or single queue queuing model
* @txqs: Used only in hotpath to get to the right queue very fast
* @crc_enable: Enable CRC insertion offload
* @num_rxq: Number of allocated RX queues
* @num_bufq: Number of allocated buffer queues
* @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors
* to complete all buffer descriptors for all buffer queues in
* the worst case.
* @num_bufqs_per_qgrp: Buffer queues per RX queue in a given grouping
* @bufq_desc_count: Buffer queue descriptor count
* @bufq_size: Size of buffers in ring (e.g. 2K, 4K, etc)
* @num_rxq_grp: Number of RX queues in a group
* @rxq_grps: Total number of RX groups. Number of groups * number of RX per
* group will yield total number of RX queues.
* @rxq_model: Splitq queue or single queue queuing model
* @rx_ptype_lkup: Lookup table for ptypes on RX
* @adapter: back pointer to associated adapter
* @netdev: Associated net_device. Each vport should have one and only one
* associated netdev.
* @flags: See enum idpf_vport_flags
* @vport_type: Default SRIOV, SIOV, etc.
* @vport_id: Device given vport identifier
* @idx: Software index in adapter vports struct
* @default_vport: Use this vport if one isn't specified
* @base_rxd: True if the driver should use base descriptors instead of flex
* @num_q_vectors: Number of IRQ vectors allocated
* @q_vectors: Array of queue vectors
* @q_vector_idxs: Starting index of queue vectors
* @max_mtu: device given max possible MTU
* @default_mac_addr: device will give a default MAC to use
* @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
* @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
* @port_stats: per port csum, header split, and other offload stats
* @link_up: True if link is up
* @link_speed_mbps: Link speed in mbps
* @vc_msg: Virtchnl message buffer
* @vc_state: Virtchnl message state
* @vchnl_wq: Wait queue for virtchnl messages
* @sw_marker_wq: wait queue for marker packets
* @vc_buf_lock: Lock to protect virtchnl buffer
*/
struct idpf_vport {
u16 num_txq;
u16 num_complq;
u32 txq_desc_count;
u32 complq_desc_count;
u32 compln_clean_budget;
u16 num_txq_grp;
struct idpf_txq_group *txq_grps;
u32 txq_model;
struct idpf_queue **txqs;
bool crc_enable;
u16 num_rxq;
u16 num_bufq;
u32 rxq_desc_count;
u8 num_bufqs_per_qgrp;
u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP];
u32 bufq_size[IDPF_MAX_BUFQS_PER_RXQ_GRP];
u16 num_rxq_grp;
struct idpf_rxq_group *rxq_grps;
u32 rxq_model;
struct idpf_rx_ptype_decoded rx_ptype_lkup[IDPF_RX_MAX_PTYPE];
struct idpf_adapter *adapter;
struct net_device *netdev;
DECLARE_BITMAP(flags, IDPF_VPORT_FLAGS_NBITS);
u16 vport_type;
u32 vport_id;
u16 idx;
bool default_vport;
bool base_rxd;
u16 num_q_vectors;
struct idpf_q_vector *q_vectors;
u16 *q_vector_idxs;
u16 max_mtu;
u8 default_mac_addr[ETH_ALEN];
u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
struct idpf_port_stats port_stats;
bool link_up;
u32 link_speed_mbps;
char vc_msg[IDPF_CTLQ_MAX_BUF_LEN];
DECLARE_BITMAP(vc_state, IDPF_VC_NBITS);
wait_queue_head_t vchnl_wq;
wait_queue_head_t sw_marker_wq;
struct mutex vc_buf_lock;
};
/**
* enum idpf_user_flags
* @__IDPF_PROMISC_UC: Unicast promiscuous mode
* @__IDPF_PROMISC_MC: Multicast promiscuous mode
* @__IDPF_USER_FLAGS_NBITS: Must be last
*/
enum idpf_user_flags {
__IDPF_PROMISC_UC = 32,
__IDPF_PROMISC_MC,
__IDPF_USER_FLAGS_NBITS,
};
/**
* struct idpf_rss_data - Associated RSS data
* @rss_key_size: Size of RSS hash key
* @rss_key: RSS hash key
* @rss_lut_size: Size of RSS lookup table
* @rss_lut: RSS lookup table
* @cached_lut: Used to restore previously init RSS lut
*/
struct idpf_rss_data {
u16 rss_key_size;
u8 *rss_key;
u16 rss_lut_size;
u32 *rss_lut;
u32 *cached_lut;
};
/**
* struct idpf_vport_user_config_data - User defined configuration values for
* each vport.
* @rss_data: See struct idpf_rss_data
* @num_req_tx_qs: Number of user requested TX queues through ethtool
* @num_req_rx_qs: Number of user requested RX queues through ethtool
* @num_req_txq_desc: Number of user requested TX queue descriptors through
* ethtool
* @num_req_rxq_desc: Number of user requested RX queue descriptors through
* ethtool
* @user_flags: User toggled config flags
* @mac_filter_list: List of MAC filters
*
* Used to restore configuration after a reset as the vport will get wiped.
*/
struct idpf_vport_user_config_data {
struct idpf_rss_data rss_data;
u16 num_req_tx_qs;
u16 num_req_rx_qs;
u32 num_req_txq_desc;
u32 num_req_rxq_desc;
DECLARE_BITMAP(user_flags, __IDPF_USER_FLAGS_NBITS);
struct list_head mac_filter_list;
};
/**
* enum idpf_vport_config_flags - Vport config flags
* @IDPF_VPORT_REG_NETDEV: Register netdev
* @IDPF_VPORT_UP_REQUESTED: Set if interface up is requested on core reset
* @IDPF_VPORT_ADD_MAC_REQ: Asynchronous add ether address in flight
* @IDPF_VPORT_DEL_MAC_REQ: Asynchronous delete ether address in flight
* @IDPF_VPORT_CONFIG_FLAGS_NBITS: Must be last
*/
enum idpf_vport_config_flags {
IDPF_VPORT_REG_NETDEV,
IDPF_VPORT_UP_REQUESTED,
IDPF_VPORT_ADD_MAC_REQ,
IDPF_VPORT_DEL_MAC_REQ,
IDPF_VPORT_CONFIG_FLAGS_NBITS,
};
/**
* struct idpf_avail_queue_info
* @avail_rxq: Available RX queues
* @avail_txq: Available TX queues
* @avail_bufq: Available buffer queues
* @avail_complq: Available completion queues
*
* Maintain total queues available after allocating max queues to each vport.
*/
struct idpf_avail_queue_info {
u16 avail_rxq;
u16 avail_txq;
u16 avail_bufq;
u16 avail_complq;
};
/**
* struct idpf_vector_info - Utility structure to pass function arguments as a
* structure
* @num_req_vecs: Vectors required based on the number of queues updated by the
* user via ethtool
* @num_curr_vecs: Current number of vectors, must be >= @num_req_vecs
* @index: Relative starting index for vectors
* @default_vport: Vectors are for default vport
*/
struct idpf_vector_info {
u16 num_req_vecs;
u16 num_curr_vecs;
u16 index;
bool default_vport;
};
/**
* struct idpf_vector_lifo - Stack to maintain vector indexes used for vector
* distribution algorithm
* @top: Points to stack top i.e. next available vector index
* @base: Always points to start of the free pool
* @size: Total size of the vector stack
* @vec_idx: Array to store all the vector indexes
*
* Vector stack maintains all the relative vector indexes at the *adapter*
* level. This stack is divided into 2 parts, first one is called as 'default
* pool' and other one is called 'free pool'. Vector distribution algorithm
* gives priority to default vports in a way that at least IDPF_MIN_Q_VEC
* vectors are allocated per default vport and the relative vector indexes for
* those are maintained in default pool. Free pool contains all the unallocated
* vector indexes which can be allocated on-demand basis. Mailbox vector index
* is maintained in the default pool of the stack.
*/
struct idpf_vector_lifo {
u16 top;
u16 base;
u16 size;
u16 *vec_idx;
};
/**
* struct idpf_vport_config - Vport configuration data
* @user_config: see struct idpf_vport_user_config_data
* @max_q: Maximum possible queues
* @req_qs_chunks: Queue chunk data for requested queues
* @mac_filter_list_lock: Lock to protect mac filters
* @flags: See enum idpf_vport_config_flags
*/
struct idpf_vport_config {
struct idpf_vport_user_config_data user_config;
struct idpf_vport_max_q max_q;
void *req_qs_chunks;
spinlock_t mac_filter_list_lock;
DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS);
};
/**
* struct idpf_adapter - Device data struct generated on probe
* @pdev: PCI device struct given on probe
* @virt_ver_maj: Virtchnl version major
* @virt_ver_min: Virtchnl version minor
* @msg_enable: Debug message level enabled
* @mb_wait_count: Number of times mailbox initialization was attempted
* @state: Init state machine
* @flags: See enum idpf_flags
* @reset_reg: See struct idpf_reset_reg
* @hw: Device access data
* @num_req_msix: Requested number of MSIX vectors
* @num_avail_msix: Available number of MSIX vectors
* @num_msix_entries: Number of entries in MSIX table
* @msix_entries: MSIX table
* @req_vec_chunks: Requested vector chunk data
* @mb_vector: Mailbox vector data
* @vector_stack: Stack to store the msix vector indexes
* @irq_mb_handler: Handler for hard interrupt for mailbox
* @tx_timeout_count: Number of TX timeouts that have occurred
* @avail_queues: Device given queue limits
* @vports: Array to store vports created by the driver
* @netdevs: Associated Vport netdevs
* @vport_params_reqd: Vport params requested
* @vport_params_recvd: Vport params received
* @vport_ids: Array of device given vport identifiers
* @vport_config: Vport config parameters
* @max_vports: Maximum vports that can be allocated
* @num_alloc_vports: Current number of vports allocated
* @next_vport: Next free slot in pf->vport[] - 0-based!
* @init_task: Initialization task
* @init_wq: Workqueue for initialization task
* @serv_task: Periodically recurring maintenance task
* @serv_wq: Workqueue for service task
* @mbx_task: Task to handle mailbox interrupts
* @mbx_wq: Workqueue for mailbox responses
* @vc_event_task: Task to handle out of band virtchnl event notifications
* @vc_event_wq: Workqueue for virtchnl events
* @stats_task: Periodic statistics retrieval task
* @stats_wq: Workqueue for statistics task
* @caps: Negotiated capabilities with device
* @vchnl_wq: Wait queue for virtchnl messages
* @vc_state: Virtchnl message state
* @vc_msg: Virtchnl message buffer
* @dev_ops: See idpf_dev_ops
* @num_vfs: Number of allocated VFs through sysfs. PF does not directly talk
* to VFs but is used to initialize them
* @crc_enable: Enable CRC insertion offload
* @req_tx_splitq: TX split or single queue model to request
* @req_rx_splitq: RX split or single queue model to request
* @vport_ctrl_lock: Lock to protect the vport control flow
* @vector_lock: Lock to protect vector distribution
* @queue_lock: Lock to protect queue distribution
* @vc_buf_lock: Lock to protect virtchnl buffer
*/
struct idpf_adapter {
struct pci_dev *pdev;
u32 virt_ver_maj;
u32 virt_ver_min;
u32 msg_enable;
u32 mb_wait_count;
enum idpf_state state;
DECLARE_BITMAP(flags, IDPF_FLAGS_NBITS);
struct idpf_reset_reg reset_reg;
struct idpf_hw hw;
u16 num_req_msix;
u16 num_avail_msix;
u16 num_msix_entries;
struct msix_entry *msix_entries;
struct virtchnl2_alloc_vectors *req_vec_chunks;
struct idpf_q_vector mb_vector;
struct idpf_vector_lifo vector_stack;
irqreturn_t (*irq_mb_handler)(int irq, void *data);
u32 tx_timeout_count;
struct idpf_avail_queue_info avail_queues;
struct idpf_vport **vports;
struct net_device **netdevs;
struct virtchnl2_create_vport **vport_params_reqd;
struct virtchnl2_create_vport **vport_params_recvd;
u32 *vport_ids;
struct idpf_vport_config **vport_config;
u16 max_vports;
u16 num_alloc_vports;
u16 next_vport;
struct delayed_work init_task;
struct workqueue_struct *init_wq;
struct delayed_work serv_task;
struct workqueue_struct *serv_wq;
struct delayed_work mbx_task;
struct workqueue_struct *mbx_wq;
struct delayed_work vc_event_task;
struct workqueue_struct *vc_event_wq;
struct delayed_work stats_task;
struct workqueue_struct *stats_wq;
struct virtchnl2_get_capabilities caps;
wait_queue_head_t vchnl_wq;
DECLARE_BITMAP(vc_state, IDPF_VC_NBITS);
char vc_msg[IDPF_CTLQ_MAX_BUF_LEN];
struct idpf_dev_ops dev_ops;
int num_vfs;
bool crc_enable;
bool req_tx_splitq;
bool req_rx_splitq;
struct mutex vport_ctrl_lock;
struct mutex vector_lock;
struct mutex queue_lock;
struct mutex vc_buf_lock;
};
/**
* idpf_is_queue_model_split - check if queue model is split
* @q_model: queue model single or split
*
* Returns true if queue model is split else false
*/
static inline int idpf_is_queue_model_split(u16 q_model)
{
return q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
}
#define idpf_is_cap_ena(adapter, field, flag) \
idpf_is_capability_ena(adapter, false, field, flag)
#define idpf_is_cap_ena_all(adapter, field, flag) \
idpf_is_capability_ena(adapter, true, field, flag)
bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
enum idpf_cap_field field, u64 flag);
#define IDPF_CAP_RSS (\
VIRTCHNL2_CAP_RSS_IPV4_TCP |\
VIRTCHNL2_CAP_RSS_IPV4_TCP |\
VIRTCHNL2_CAP_RSS_IPV4_UDP |\
VIRTCHNL2_CAP_RSS_IPV4_SCTP |\
VIRTCHNL2_CAP_RSS_IPV4_OTHER |\
VIRTCHNL2_CAP_RSS_IPV6_TCP |\
VIRTCHNL2_CAP_RSS_IPV6_TCP |\
VIRTCHNL2_CAP_RSS_IPV6_UDP |\
VIRTCHNL2_CAP_RSS_IPV6_SCTP |\
VIRTCHNL2_CAP_RSS_IPV6_OTHER)
#define IDPF_CAP_RSC (\
VIRTCHNL2_CAP_RSC_IPV4_TCP |\
VIRTCHNL2_CAP_RSC_IPV6_TCP)
#define IDPF_CAP_HSPLIT (\
VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |\
VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6)
#define IDPF_CAP_RX_CSUM_L4V4 (\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP)
#define IDPF_CAP_RX_CSUM_L4V6 (\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
#define IDPF_CAP_RX_CSUM (\
VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
#define IDPF_CAP_SCTP_CSUM (\
VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |\
VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |\
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP)
#define IDPF_CAP_TUNNEL_TX_CSUM (\
VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |\
VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL)
/**
* idpf_get_reserved_vecs - Get reserved vectors
* @adapter: private data struct
*/
static inline u16 idpf_get_reserved_vecs(struct idpf_adapter *adapter)
{
return le16_to_cpu(adapter->caps.num_allocated_vectors);
}
/**
* idpf_get_default_vports - Get default number of vports
* @adapter: private data struct
*/
static inline u16 idpf_get_default_vports(struct idpf_adapter *adapter)
{
return le16_to_cpu(adapter->caps.default_num_vports);
}
/**
* idpf_get_max_vports - Get max number of vports
* @adapter: private data struct
*/
static inline u16 idpf_get_max_vports(struct idpf_adapter *adapter)
{
return le16_to_cpu(adapter->caps.max_vports);
}
/**
* idpf_get_max_tx_bufs - Get max scatter-gather buffers supported by the device
* @adapter: private data struct
*/
static inline unsigned int idpf_get_max_tx_bufs(struct idpf_adapter *adapter)
{
return adapter->caps.max_sg_bufs_per_tx_pkt;
}
/**
* idpf_get_min_tx_pkt_len - Get min packet length supported by the device
* @adapter: private data struct
*/
static inline u8 idpf_get_min_tx_pkt_len(struct idpf_adapter *adapter)
{
u8 pkt_len = adapter->caps.min_sso_packet_len;
return pkt_len ? pkt_len : IDPF_TX_MIN_PKT_LEN;
}
/**
* idpf_get_reg_addr - Get BAR0 register address
* @adapter: private data struct
* @reg_offset: register offset value
*
* Based on the register offset, return the actual BAR0 register address
*/
static inline void __iomem *idpf_get_reg_addr(struct idpf_adapter *adapter,
resource_size_t reg_offset)
{
return (void __iomem *)(adapter->hw.hw_addr + reg_offset);
}
/**
* idpf_is_reset_detected - check if we were reset at some point
* @adapter: driver specific private structure
*
* Returns true if we are either in reset currently or were previously reset.
*/
static inline bool idpf_is_reset_detected(struct idpf_adapter *adapter)
{
if (!adapter->hw.arq)
return true;
return !(readl(idpf_get_reg_addr(adapter, adapter->hw.arq->reg.len)) &
adapter->hw.arq->reg.len_mask);
}
/**
* idpf_is_reset_in_prog - check if reset is in progress
* @adapter: driver specific private structure
*
* Returns true if hard reset is in progress, false otherwise
*/
static inline bool idpf_is_reset_in_prog(struct idpf_adapter *adapter)
{
return (test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags) ||
test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
test_bit(IDPF_HR_DRV_LOAD, adapter->flags));
}
/**
* idpf_netdev_to_vport - get a vport handle from a netdev
* @netdev: network interface device structure
*/
static inline struct idpf_vport *idpf_netdev_to_vport(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
return np->vport;
}
/**
* idpf_netdev_to_adapter - Get adapter handle from a netdev
* @netdev: Network interface device structure
*/
static inline struct idpf_adapter *idpf_netdev_to_adapter(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
return np->adapter;
}
/**
* idpf_is_feature_ena - Determine if a particular feature is enabled
* @vport: Vport to check
* @feature: Netdev flag to check
*
* Returns true or false if a particular feature is enabled.
*/
static inline bool idpf_is_feature_ena(const struct idpf_vport *vport,
netdev_features_t feature)
{
return vport->netdev->features & feature;
}
/**
* idpf_get_max_tx_hdr_size -- get the size of tx header
* @adapter: Driver specific private structure
*/
static inline u16 idpf_get_max_tx_hdr_size(struct idpf_adapter *adapter)
{
return le16_to_cpu(adapter->caps.max_tx_hdr_size);
}
/**
* idpf_vport_ctrl_lock - Acquire the vport control lock
* @netdev: Network interface device structure
*
* This lock should be used by non-datapath code to protect against vport
* destruction.
*/
static inline void idpf_vport_ctrl_lock(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
mutex_lock(&np->adapter->vport_ctrl_lock);
}
/**
* idpf_vport_ctrl_unlock - Release the vport control lock
* @netdev: Network interface device structure
*/
static inline void idpf_vport_ctrl_unlock(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
mutex_unlock(&np->adapter->vport_ctrl_lock);
}
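/* Illustrative usage (not a real callback): non-datapath callbacks take the
 * control lock before dereferencing the vport so a concurrent hard reset
 * cannot release it underneath them:
 *
 *	idpf_vport_ctrl_lock(netdev);
 *	vport = idpf_netdev_to_vport(netdev);
 *	...operate on vport...
 *	idpf_vport_ctrl_unlock(netdev);
 */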
void idpf_statistics_task(struct work_struct *work);
void idpf_init_task(struct work_struct *work);
void idpf_service_task(struct work_struct *work);
void idpf_mbx_task(struct work_struct *work);
void idpf_vc_event_task(struct work_struct *work);
void idpf_dev_ops_init(struct idpf_adapter *adapter);
void idpf_vf_dev_ops_init(struct idpf_adapter *adapter);
int idpf_vport_adjust_qs(struct idpf_vport *vport);
int idpf_init_dflt_mbx(struct idpf_adapter *adapter);
void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter);
int idpf_vc_core_init(struct idpf_adapter *adapter);
void idpf_vc_core_deinit(struct idpf_adapter *adapter);
int idpf_intr_req(struct idpf_adapter *adapter);
void idpf_intr_rel(struct idpf_adapter *adapter);
int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
struct idpf_vec_regs *reg_vals);
u16 idpf_get_max_tx_hdr_size(struct idpf_adapter *adapter);
int idpf_send_delete_queues_msg(struct idpf_vport *vport);
int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
u16 num_complq, u16 num_rx_q, u16 num_rx_bufq);
int idpf_initiate_soft_reset(struct idpf_vport *vport,
enum idpf_vport_reset_cause reset_cause);
int idpf_send_enable_vport_msg(struct idpf_vport *vport);
int idpf_send_disable_vport_msg(struct idpf_vport *vport);
int idpf_send_destroy_vport_msg(struct idpf_vport *vport);
int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport);
int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport);
int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get);
int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get);
int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter);
int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors);
void idpf_deinit_task(struct idpf_adapter *adapter);
int idpf_req_rel_vector_indexes(struct idpf_adapter *adapter,
u16 *q_vector_idxs,
struct idpf_vector_info *vec_info);
int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport);
int idpf_send_get_stats_msg(struct idpf_vport *vport);
int idpf_get_vec_ids(struct idpf_adapter *adapter,
u16 *vecids, int num_vecids,
struct virtchnl2_vector_chunks *chunks);
int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
void *msg, int msg_size);
int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
u16 msg_size, u8 *msg);
void idpf_set_ethtool_ops(struct net_device *netdev);
int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
int idpf_add_del_mac_filters(struct idpf_vport *vport,
struct idpf_netdev_priv *np,
bool add, bool async);
int idpf_set_promiscuous(struct idpf_adapter *adapter,
struct idpf_vport_user_config_data *config_data,
u32 vport_id);
int idpf_send_disable_queues_msg(struct idpf_vport *vport);
void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q);
u32 idpf_get_vport_id(struct idpf_vport *vport);
int idpf_vport_queue_ids_init(struct idpf_vport *vport);
int idpf_queue_reg_init(struct idpf_vport *vport);
int idpf_send_config_queues_msg(struct idpf_vport *vport);
int idpf_send_enable_queues_msg(struct idpf_vport *vport);
int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
int idpf_check_supported_desc_ids(struct idpf_vport *vport);
void idpf_vport_intr_write_itr(struct idpf_q_vector *q_vector,
u16 itr, bool tx);
int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map);
int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
int idpf_sriov_configure(struct pci_dev *pdev, int num_vfs);
#endif /* !_IDPF_H_ */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf_controlq.h"
/**
* idpf_ctlq_setup_regs - initialize control queue registers
* @cq: pointer to the specific control queue
* @q_create_info: structs containing info for each queue to be initialized
*/
static void idpf_ctlq_setup_regs(struct idpf_ctlq_info *cq,
struct idpf_ctlq_create_info *q_create_info)
{
/* set control queue registers in our local struct */
cq->reg.head = q_create_info->reg.head;
cq->reg.tail = q_create_info->reg.tail;
cq->reg.len = q_create_info->reg.len;
cq->reg.bah = q_create_info->reg.bah;
cq->reg.bal = q_create_info->reg.bal;
cq->reg.len_mask = q_create_info->reg.len_mask;
cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask;
cq->reg.head_mask = q_create_info->reg.head_mask;
}
/**
* idpf_ctlq_init_regs - Initialize control queue registers
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
* @is_rxq: true if receive control queue, false otherwise
*
* Initialize registers. The caller is expected to have already initialized the
* descriptor ring memory and buffer memory
*/
static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
bool is_rxq)
{
/* Update tail to post pre-allocated buffers for rx queues */
if (is_rxq)
wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));
/* For non-Mailbox control queues only TAIL needs to be set */
if (cq->q_id != -1)
return;
/* Clear Head for both send or receive */
wr32(hw, cq->reg.head, 0);
/* set starting point */
wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa));
wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa));
wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));
}
/**
* idpf_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
* @cq: pointer to the specific Control queue
*
* Record the address of the receive queue DMA buffers in the descriptors.
* The buffers must have been previously allocated.
*/
static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
{
int i;
for (i = 0; i < cq->ring_size; i++) {
struct idpf_ctlq_desc *desc = IDPF_CTLQ_DESC(cq, i);
struct idpf_dma_mem *bi = cq->bi.rx_buff[i];
/* No buffer to post to descriptor, continue */
if (!bi)
continue;
desc->flags =
cpu_to_le16(IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD);
desc->opcode = 0;
desc->datalen = cpu_to_le16(bi->size);
desc->ret_val = 0;
desc->v_opcode_dtype = 0;
desc->v_retval = 0;
desc->params.indirect.addr_high =
cpu_to_le32(upper_32_bits(bi->pa));
desc->params.indirect.addr_low =
cpu_to_le32(lower_32_bits(bi->pa));
desc->params.indirect.param0 = 0;
desc->params.indirect.sw_cookie = 0;
desc->params.indirect.v_flags = 0;
}
}
/**
* idpf_ctlq_shutdown - shutdown the CQ
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* The main shutdown routine for any control queue
*/
static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
mutex_lock(&cq->cq_lock);
/* free ring buffers and the ring itself */
idpf_ctlq_dealloc_ring_res(hw, cq);
/* Set ring_size to 0 to indicate uninitialized queue */
cq->ring_size = 0;
mutex_unlock(&cq->cq_lock);
mutex_destroy(&cq->cq_lock);
}
/**
* idpf_ctlq_add - add one control queue
* @hw: pointer to hardware struct
* @qinfo: info for queue to be created
* @cq_out: (output) double pointer to control queue to be created
*
* Allocate and initialize a control queue and add it to the control queue list.
* The cq parameter will be allocated/initialized and passed back to the caller
* if no errors occur.
*
* Note: idpf_ctlq_init must be called prior to any calls to idpf_ctlq_add
*/
int idpf_ctlq_add(struct idpf_hw *hw,
struct idpf_ctlq_create_info *qinfo,
struct idpf_ctlq_info **cq_out)
{
struct idpf_ctlq_info *cq;
bool is_rxq = false;
int err;
cq = kzalloc(sizeof(*cq), GFP_KERNEL);
if (!cq)
return -ENOMEM;
cq->cq_type = qinfo->type;
cq->q_id = qinfo->id;
cq->buf_size = qinfo->buf_size;
cq->ring_size = qinfo->len;
cq->next_to_use = 0;
cq->next_to_clean = 0;
cq->next_to_post = cq->ring_size - 1;
switch (qinfo->type) {
case IDPF_CTLQ_TYPE_MAILBOX_RX:
is_rxq = true;
fallthrough;
case IDPF_CTLQ_TYPE_MAILBOX_TX:
err = idpf_ctlq_alloc_ring_res(hw, cq);
break;
default:
err = -EBADR;
break;
}
if (err)
goto init_free_q;
if (is_rxq) {
idpf_ctlq_init_rxq_bufs(cq);
} else {
/* Allocate the array of msg pointers for TX queues */
cq->bi.tx_msg = kcalloc(qinfo->len,
sizeof(struct idpf_ctlq_msg *),
GFP_KERNEL);
if (!cq->bi.tx_msg) {
err = -ENOMEM;
goto init_dealloc_q_mem;
}
}
idpf_ctlq_setup_regs(cq, qinfo);
idpf_ctlq_init_regs(hw, cq, is_rxq);
mutex_init(&cq->cq_lock);
list_add(&cq->cq_list, &hw->cq_list_head);
*cq_out = cq;
return 0;
init_dealloc_q_mem:
/* free ring buffers and the ring itself */
idpf_ctlq_dealloc_ring_res(hw, cq);
init_free_q:
kfree(cq);
return err;
}
/**
* idpf_ctlq_remove - deallocate and remove specified control queue
* @hw: pointer to hardware struct
* @cq: pointer to control queue to be removed
*/
void idpf_ctlq_remove(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
list_del(&cq->cq_list);
idpf_ctlq_shutdown(hw, cq);
kfree(cq);
}
/**
* idpf_ctlq_init - main initialization routine for all control queues
* @hw: pointer to hardware struct
* @num_q: number of queues to initialize
* @q_info: array of structs containing info for each queue to be initialized
*
* This initializes any number and any type of control queues. This is an all
* or nothing routine; if one fails, all previously allocated queues will be
* destroyed. This must be called prior to using the individual add/remove
* APIs.
*/
int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
struct idpf_ctlq_create_info *q_info)
{
struct idpf_ctlq_info *cq, *tmp;
int err;
int i;
INIT_LIST_HEAD(&hw->cq_list_head);
for (i = 0; i < num_q; i++) {
struct idpf_ctlq_create_info *qinfo = q_info + i;
err = idpf_ctlq_add(hw, qinfo, &cq);
if (err)
goto init_destroy_qs;
}
return 0;
init_destroy_qs:
list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
idpf_ctlq_remove(hw, cq);
return err;
}
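/* Usage sketch (illustrative, not lifted from the driver): the default
 * mailbox pair is typically described up front and handed to
 * idpf_ctlq_init(); the register offsets in each entry's .reg are filled in
 * by the device-specific ctlq_reg_init op before this call:
 *
 *	struct idpf_ctlq_create_info qinfo[IDPF_NUM_DFLT_MBX_Q] = {
 *		{ .type = IDPF_CTLQ_TYPE_MAILBOX_TX, .id = IDPF_DFLT_MBX_ID,
 *		  .len = IDPF_DFLT_MBX_Q_LEN, .buf_size = IDPF_CTLQ_MAX_BUF_LEN },
 *		{ .type = IDPF_CTLQ_TYPE_MAILBOX_RX, .id = IDPF_DFLT_MBX_ID,
 *		  .len = IDPF_DFLT_MBX_Q_LEN, .buf_size = IDPF_CTLQ_MAX_BUF_LEN },
 *	};
 *
 *	err = idpf_ctlq_init(&adapter->hw, IDPF_NUM_DFLT_MBX_Q, qinfo);
 */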
/**
* idpf_ctlq_deinit - destroy all control queues
* @hw: pointer to hw struct
*/
void idpf_ctlq_deinit(struct idpf_hw *hw)
{
struct idpf_ctlq_info *cq, *tmp;
list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
idpf_ctlq_remove(hw, cq);
}
/**
* idpf_ctlq_send - send command to Control Queue (CTQ)
* @hw: pointer to hw struct
* @cq: handle to control queue struct to send on
* @num_q_msg: number of messages to send on control queue
* @q_msg: pointer to array of queue messages to be sent
*
* The caller is expected to allocate DMAable buffers and pass them to the
* send routine via the q_msg struct / control queue specific data struct.
* The control queue will hold a reference to each send message until
* the completion for that message has been cleaned.
*/
int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
u16 num_q_msg, struct idpf_ctlq_msg q_msg[])
{
struct idpf_ctlq_desc *desc;
int num_desc_avail;
int err = 0;
int i;
mutex_lock(&cq->cq_lock);
/* Ensure there are enough descriptors to send all messages */
num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
err = -ENOSPC;
goto err_unlock;
}
for (i = 0; i < num_q_msg; i++) {
struct idpf_ctlq_msg *msg = &q_msg[i];
desc = IDPF_CTLQ_DESC(cq, cq->next_to_use);
desc->opcode = cpu_to_le16(msg->opcode);
desc->pfid_vfid = cpu_to_le16(msg->func_id);
desc->v_opcode_dtype = cpu_to_le32(msg->cookie.mbx.chnl_opcode);
desc->v_retval = cpu_to_le32(msg->cookie.mbx.chnl_retval);
desc->flags = cpu_to_le16((msg->host_id & IDPF_HOST_ID_MASK) <<
IDPF_CTLQ_FLAG_HOST_ID_S);
if (msg->data_len) {
struct idpf_dma_mem *buff = msg->ctx.indirect.payload;
desc->datalen |= cpu_to_le16(msg->data_len);
desc->flags |= cpu_to_le16(IDPF_CTLQ_FLAG_BUF);
desc->flags |= cpu_to_le16(IDPF_CTLQ_FLAG_RD);
/* Update the address values in the desc with the pa
* value for respective buffer
*/
desc->params.indirect.addr_high =
cpu_to_le32(upper_32_bits(buff->pa));
desc->params.indirect.addr_low =
cpu_to_le32(lower_32_bits(buff->pa));
memcpy(&desc->params, msg->ctx.indirect.context,
IDPF_INDIRECT_CTX_SIZE);
} else {
memcpy(&desc->params, msg->ctx.direct,
IDPF_DIRECT_CTX_SIZE);
}
/* Store buffer info */
cq->bi.tx_msg[cq->next_to_use] = msg;
(cq->next_to_use)++;
if (cq->next_to_use == cq->ring_size)
cq->next_to_use = 0;
}
/* Force memory write to complete before letting hardware
* know that there are new descriptors to fetch.
*/
dma_wmb();
wr32(hw, cq->reg.tail, cq->next_to_use);
err_unlock:
mutex_unlock(&cq->cq_lock);
return err;
}
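/* Send/clean pairing (sketch): the caller owns the ctlq_msg structures and
 * any DMA payloads; after hardware writes back the descriptors, reclaim the
 * messages with idpf_ctlq_clean_sq() and then free or reuse the buffers.
 * Here stale_msgs is a caller-allocated array of ctlq_msg pointers:
 *
 *	err = idpf_ctlq_send(hw, cq, 1, &msg);
 *	...later, e.g. from the mailbox task...
 *	num_cleaned = 1;
 *	err = idpf_ctlq_clean_sq(cq, &num_cleaned, stale_msgs);
 */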
/**
* idpf_ctlq_clean_sq - reclaim send descriptors on HW write back for the
* requested queue
* @cq: pointer to the specific Control queue
* @clean_count: (input|output) number of descriptors to clean as input, and
* number of descriptors actually cleaned as output
* @msg_status: (output) pointer to msg pointer array to be populated; needs
* to be allocated by caller
*
* Returns an array of message pointers associated with the cleaned
* descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
* descriptors. The status will be returned for each; any messages that failed
* to send will have a non-zero status. The caller is expected to free original
* ctlq_msgs and free or reuse the DMA buffers.
*/
int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
struct idpf_ctlq_msg *msg_status[])
{
struct idpf_ctlq_desc *desc;
u16 i, num_to_clean;
u16 ntc, desc_err;
if (*clean_count == 0)
return 0;
if (*clean_count > cq->ring_size)
return -EBADR;
mutex_lock(&cq->cq_lock);
ntc = cq->next_to_clean;
num_to_clean = *clean_count;
for (i = 0; i < num_to_clean; i++) {
/* Fetch next descriptor and check if marked as done */
desc = IDPF_CTLQ_DESC(cq, ntc);
if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
break;
/* strip off FW internal code */
desc_err = le16_to_cpu(desc->ret_val) & 0xff;
msg_status[i] = cq->bi.tx_msg[ntc];
msg_status[i]->status = desc_err;
cq->bi.tx_msg[ntc] = NULL;
/* Zero out any stale data */
memset(desc, 0, sizeof(*desc));
ntc++;
if (ntc == cq->ring_size)
ntc = 0;
}
cq->next_to_clean = ntc;
mutex_unlock(&cq->cq_lock);
/* Return number of descriptors actually cleaned */
*clean_count = i;
return 0;
}
/**
* idpf_ctlq_post_rx_buffs - post buffers to descriptor ring
* @hw: pointer to hw struct
* @cq: pointer to control queue handle
* @buff_count: (input|output) input is number of buffers caller is trying to
* return; output is number of buffers that were not posted
* @buffs: array of pointers to dma mem structs to be given to hardware
*
* Caller uses this function to return DMA buffers to the descriptor ring after
* consuming them; buff_count will be the number of buffers.
*
* Note: this function needs to be called after a receive call even
* if there are no DMA buffers to be returned, i.e. buff_count = 0,
* buffs = NULL to support direct commands
*/
int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
u16 *buff_count, struct idpf_dma_mem **buffs)
{
struct idpf_ctlq_desc *desc;
u16 ntp = cq->next_to_post;
bool buffs_avail = false;
u16 tbp = ntp + 1;
int i = 0;
if (*buff_count > cq->ring_size)
return -EBADR;
if (*buff_count > 0)
buffs_avail = true;
mutex_lock(&cq->cq_lock);
if (tbp >= cq->ring_size)
tbp = 0;
if (tbp == cq->next_to_clean)
/* Nothing to do */
goto post_buffs_out;
/* Post buffers for as many as provided or up until the last one used */
while (ntp != cq->next_to_clean) {
desc = IDPF_CTLQ_DESC(cq, ntp);
if (cq->bi.rx_buff[ntp])
goto fill_desc;
if (!buffs_avail) {
/* If the caller hasn't given us any buffers or
* there are none left, search the ring itself
* for an available buffer to move to this
* entry starting at the next entry in the ring
*/
tbp = ntp + 1;
/* Wrap ring if necessary */
if (tbp >= cq->ring_size)
tbp = 0;
while (tbp != cq->next_to_clean) {
if (cq->bi.rx_buff[tbp]) {
cq->bi.rx_buff[ntp] =
cq->bi.rx_buff[tbp];
cq->bi.rx_buff[tbp] = NULL;
/* Found a buffer, no need to
* search anymore
*/
break;
}
/* Wrap ring if necessary */
tbp++;
if (tbp >= cq->ring_size)
tbp = 0;
}
if (tbp == cq->next_to_clean)
goto post_buffs_out;
} else {
/* Give back pointer to DMA buffer */
cq->bi.rx_buff[ntp] = buffs[i];
i++;
if (i >= *buff_count)
buffs_avail = false;
}
fill_desc:
desc->flags =
cpu_to_le16(IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD);
/* Post buffers to descriptor */
desc->datalen = cpu_to_le16(cq->bi.rx_buff[ntp]->size);
desc->params.indirect.addr_high =
cpu_to_le32(upper_32_bits(cq->bi.rx_buff[ntp]->pa));
desc->params.indirect.addr_low =
cpu_to_le32(lower_32_bits(cq->bi.rx_buff[ntp]->pa));
ntp++;
if (ntp == cq->ring_size)
ntp = 0;
}
post_buffs_out:
/* Only update tail if buffers were actually posted */
if (cq->next_to_post != ntp) {
if (ntp)
/* Update next_to_post to ntp - 1 since current ntp
* will not have a buffer
*/
cq->next_to_post = ntp - 1;
else
/* Wrap to end of ring since current ntp is 0 */
cq->next_to_post = cq->ring_size - 1;
wr32(hw, cq->reg.tail, cq->next_to_post);
}
mutex_unlock(&cq->cq_lock);
/* return the number of buffers that were not posted */
*buff_count = *buff_count - i;
return 0;
}
/**
* idpf_ctlq_recv - receive control queue message call back
* @cq: pointer to control queue handle to receive on
* @num_q_msg: (input|output) input number of messages that should be received;
* output number of messages actually received
* @q_msg: (output) array of received control queue messages on this q;
* needs to be pre-allocated by caller for as many messages as requested
*
* Called by interrupt handler or polling mechanism. Caller is expected
* to free buffers
*/
int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
struct idpf_ctlq_msg *q_msg)
{
u16 num_to_clean, ntc, flags;
struct idpf_ctlq_desc *desc;
int err = 0;
u16 i;
if (*num_q_msg == 0)
return 0;
else if (*num_q_msg > cq->ring_size)
return -EBADR;
/* take the lock before we start messing with the ring */
mutex_lock(&cq->cq_lock);
ntc = cq->next_to_clean;
num_to_clean = *num_q_msg;
for (i = 0; i < num_to_clean; i++) {
/* Fetch next descriptor and check if marked as done */
desc = IDPF_CTLQ_DESC(cq, ntc);
flags = le16_to_cpu(desc->flags);
if (!(flags & IDPF_CTLQ_FLAG_DD))
break;
q_msg[i].vmvf_type = (flags &
(IDPF_CTLQ_FLAG_FTYPE_VM |
IDPF_CTLQ_FLAG_FTYPE_PF)) >>
IDPF_CTLQ_FLAG_FTYPE_S;
if (flags & IDPF_CTLQ_FLAG_ERR)
err = -EBADMSG;
q_msg[i].cookie.mbx.chnl_opcode =
le32_to_cpu(desc->v_opcode_dtype);
q_msg[i].cookie.mbx.chnl_retval =
le32_to_cpu(desc->v_retval);
q_msg[i].opcode = le16_to_cpu(desc->opcode);
q_msg[i].data_len = le16_to_cpu(desc->datalen);
q_msg[i].status = le16_to_cpu(desc->ret_val);
if (desc->datalen) {
memcpy(q_msg[i].ctx.indirect.context,
&desc->params.indirect, IDPF_INDIRECT_CTX_SIZE);
/* Assign pointer to dma buffer to ctlq_msg array
* to be given to upper layer
*/
q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
/* Zero out pointer to DMA buffer info;
* will be repopulated by post buffers API
*/
cq->bi.rx_buff[ntc] = NULL;
} else {
memcpy(q_msg[i].ctx.direct, desc->params.raw,
IDPF_DIRECT_CTX_SIZE);
}
/* Zero out stale data in descriptor */
memset(desc, 0, sizeof(struct idpf_ctlq_desc));
ntc++;
if (ntc == cq->ring_size)
ntc = 0;
}
cq->next_to_clean = ntc;
mutex_unlock(&cq->cq_lock);
*num_q_msg = i;
if (*num_q_msg == 0)
err = -ENOMSG;
return err;
}
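/* Usage sketch (illustrative only, not driver code): the receive path
 * typically pairs idpf_ctlq_recv() with idpf_ctlq_post_rx_buffs() so that
 * consumed DMA buffers are handed back to the ring; 'hw' and 'cq' are
 * assumed to be an initialized hw struct and receive control queue:
 *
 *	struct idpf_dma_mem *dma_mem = NULL;
 *	struct idpf_ctlq_msg msg = { };
 *	u16 num_recv = 1, num_post = 0;
 *	int err;
 *
 *	err = idpf_ctlq_recv(cq, &num_recv, &msg);
 *	if (!err && num_recv && msg.data_len) {
 *		// process the indirect payload, then return its buffer
 *		dma_mem = msg.ctx.indirect.payload;
 *		num_post = 1;
 *	}
 *	// must be called even with zero buffers to support direct commands
 *	idpf_ctlq_post_rx_buffs(hw, cq, &num_post, &dma_mem);
 */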
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_CONTROLQ_H_
#define _IDPF_CONTROLQ_H_
#include <linux/slab.h>
#include "idpf_controlq_api.h"
/* Maximum buffer length for all control queue types */
#define IDPF_CTLQ_MAX_BUF_LEN 4096
#define IDPF_CTLQ_DESC(R, i) \
(&(((struct idpf_ctlq_desc *)((R)->desc_ring.va))[i]))
#define IDPF_CTLQ_DESC_UNUSED(R) \
((u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
(R)->next_to_clean - (R)->next_to_use - 1))
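/* Worked example (illustrative): with ring_size = 64, next_to_clean = 10
 * and next_to_use = 14, IDPF_CTLQ_DESC_UNUSED() evaluates to
 * 64 + 10 - 14 - 1 = 59; with next_to_clean = 20 and next_to_use = 10 it
 * evaluates to 0 + 20 - 10 - 1 = 9. One descriptor slot is always left
 * unused so a full ring can be distinguished from an empty one.
 */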
/* Control Queue default settings */
#define IDPF_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */
struct idpf_ctlq_desc {
/* Control queue descriptor flags */
__le16 flags;
/* Control queue message opcode */
__le16 opcode;
__le16 datalen; /* 0 for direct commands */
union {
__le16 ret_val;
__le16 pfid_vfid;
#define IDPF_CTLQ_DESC_VF_ID_S 0
#define IDPF_CTLQ_DESC_VF_ID_M (0x7FF << IDPF_CTLQ_DESC_VF_ID_S)
#define IDPF_CTLQ_DESC_PF_ID_S 11
#define IDPF_CTLQ_DESC_PF_ID_M (0x1F << IDPF_CTLQ_DESC_PF_ID_S)
};
/* Virtchnl message opcode and virtchnl descriptor type
* v_opcode=[27:0], v_dtype=[31:28]
*/
__le32 v_opcode_dtype;
/* Virtchnl return value */
__le32 v_retval;
union {
struct {
__le32 param0;
__le32 param1;
__le32 param2;
__le32 param3;
} direct;
struct {
__le32 param0;
__le16 sw_cookie;
/* Virtchnl flags */
__le16 v_flags;
__le32 addr_high;
__le32 addr_low;
} indirect;
u8 raw[16];
} params;
};
/* Flags sub-structure
* |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
* |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| HOST_ID |
*/
/* command flags and offsets */
#define IDPF_CTLQ_FLAG_DD_S 0
#define IDPF_CTLQ_FLAG_CMP_S 1
#define IDPF_CTLQ_FLAG_ERR_S 2
#define IDPF_CTLQ_FLAG_FTYPE_S 6
#define IDPF_CTLQ_FLAG_RD_S 10
#define IDPF_CTLQ_FLAG_VFC_S 11
#define IDPF_CTLQ_FLAG_BUF_S 12
#define IDPF_CTLQ_FLAG_HOST_ID_S 13
#define IDPF_CTLQ_FLAG_DD BIT(IDPF_CTLQ_FLAG_DD_S) /* 0x1 */
#define IDPF_CTLQ_FLAG_CMP BIT(IDPF_CTLQ_FLAG_CMP_S) /* 0x2 */
#define IDPF_CTLQ_FLAG_ERR BIT(IDPF_CTLQ_FLAG_ERR_S) /* 0x4 */
#define IDPF_CTLQ_FLAG_FTYPE_VM BIT(IDPF_CTLQ_FLAG_FTYPE_S) /* 0x40 */
#define IDPF_CTLQ_FLAG_FTYPE_PF BIT(IDPF_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */
#define IDPF_CTLQ_FLAG_RD BIT(IDPF_CTLQ_FLAG_RD_S) /* 0x400 */
#define IDPF_CTLQ_FLAG_VFC BIT(IDPF_CTLQ_FLAG_VFC_S) /* 0x800 */
#define IDPF_CTLQ_FLAG_BUF BIT(IDPF_CTLQ_FLAG_BUF_S) /* 0x1000 */
/* Host ID is a special 3-bit field, unlike the 1-bit flags */
#define IDPF_CTLQ_FLAG_HOST_ID_M MAKE_MASK(0x7000UL, IDPF_CTLQ_FLAG_HOST_ID_S)
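/* Example (illustrative): a send descriptor that references an external DMA
 * buffer sets both flags, i.e. IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD ==
 * 0x1400, as done in idpf_ctlq_post_rx_buffs(). On receive,
 * IDPF_CTLQ_FLAG_DD marks a completed descriptor and the FTYPE bits,
 * extracted via (flags & (IDPF_CTLQ_FLAG_FTYPE_VM | IDPF_CTLQ_FLAG_FTYPE_PF))
 * >> IDPF_CTLQ_FLAG_FTYPE_S, identify the message source.
 */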
struct idpf_mbxq_desc {
u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */
u32 chnl_opcode; /* avoid confusion with desc->opcode */
u32 chnl_retval; /* ditto for desc->retval */
u32 pf_vf_id; /* used by CP when sending to PF */
};
/* Define the driver hardware struct to replace other control structs as needed
* Align to ctlq_hw_info
*/
struct idpf_hw {
void __iomem *hw_addr;
resource_size_t hw_addr_len;
struct idpf_adapter *back;
/* control queue - send and receive */
struct idpf_ctlq_info *asq;
struct idpf_ctlq_info *arq;
/* pci info */
u16 device_id;
u16 vendor_id;
u16 subsystem_device_id;
u16 subsystem_vendor_id;
u8 revision_id;
bool adapter_stopped;
struct list_head cq_list_head;
};
int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
struct idpf_ctlq_info *cq);
void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq);
/* prototypes for functions used for dynamic memory allocation */
void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem,
u64 size);
void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem);
#endif /* _IDPF_CONTROLQ_H_ */
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_CONTROLQ_API_H_
#define _IDPF_CONTROLQ_API_H_
#include "idpf_mem.h"
struct idpf_hw;
/* Used for queue init, response and events */
enum idpf_ctlq_type {
IDPF_CTLQ_TYPE_MAILBOX_TX = 0,
IDPF_CTLQ_TYPE_MAILBOX_RX = 1,
IDPF_CTLQ_TYPE_CONFIG_TX = 2,
IDPF_CTLQ_TYPE_CONFIG_RX = 3,
IDPF_CTLQ_TYPE_EVENT_RX = 4,
IDPF_CTLQ_TYPE_RDMA_TX = 5,
IDPF_CTLQ_TYPE_RDMA_RX = 6,
IDPF_CTLQ_TYPE_RDMA_COMPL = 7
};
/* Generic Control Queue Structures */
struct idpf_ctlq_reg {
/* used for queue tracking */
u32 head;
u32 tail;
/* Below applies only to default mb (if present) */
u32 len;
u32 bah;
u32 bal;
u32 len_mask;
u32 len_ena_mask;
u32 head_mask;
};
/* Generic queue msg structure */
struct idpf_ctlq_msg {
u8 vmvf_type; /* represents the source of the message on recv */
#define IDPF_VMVF_TYPE_VF 0
#define IDPF_VMVF_TYPE_VM 1
#define IDPF_VMVF_TYPE_PF 2
u8 host_id;
/* 3b field used only when sending a message to CP - to be used in
* combination with target func_id to route the message
*/
#define IDPF_HOST_ID_MASK 0x7
u16 opcode;
u16 data_len; /* data_len = 0 when no payload is attached */
union {
u16 func_id; /* when sending a message */
u16 status; /* when receiving a message */
};
union {
struct {
u32 chnl_opcode;
u32 chnl_retval;
} mbx;
} cookie;
union {
#define IDPF_DIRECT_CTX_SIZE 16
#define IDPF_INDIRECT_CTX_SIZE 8
/* 16 bytes of context can be provided or 8 bytes of context
* plus the address of a DMA buffer
*/
u8 direct[IDPF_DIRECT_CTX_SIZE];
struct {
u8 context[IDPF_INDIRECT_CTX_SIZE];
struct idpf_dma_mem *payload;
} indirect;
} ctx;
};
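/* Example (illustrative sketch): filling a ctlq_msg for a direct command
 * versus an indirect one; 'my_ctx', 'payload_len' and 'payload_buf' are
 * placeholders for caller-owned context bytes and an already mapped
 * idpf_dma_mem:
 *
 *	struct idpf_ctlq_msg msg = { };
 *
 *	// direct: all context fits in the 16 parameter bytes
 *	msg.data_len = 0;
 *	memcpy(msg.ctx.direct, my_ctx, IDPF_DIRECT_CTX_SIZE);
 *
 *	// indirect: 8 bytes of context plus a DMA buffer for the payload
 *	msg.data_len = payload_len;
 *	memcpy(msg.ctx.indirect.context, my_ctx, IDPF_INDIRECT_CTX_SIZE);
 *	msg.ctx.indirect.payload = &payload_buf;
 */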
/* Generic queue info structures */
/* MB, CONFIG and EVENT q do not have extended info */
struct idpf_ctlq_create_info {
enum idpf_ctlq_type type;
int id; /* absolute queue offset passed as input
* -1 for default mailbox if present
*/
u16 len; /* Queue length passed as input */
u16 buf_size; /* buffer size passed as input */
u64 base_address; /* output, HPA of the Queue start */
struct idpf_ctlq_reg reg; /* registers accessed by ctlqs */
int ext_info_size;
void *ext_info; /* Specific to q type */
};
/* Control Queue information */
struct idpf_ctlq_info {
struct list_head cq_list;
enum idpf_ctlq_type cq_type;
int q_id;
struct mutex cq_lock; /* control queue lock */
/* used for interrupt processing */
u16 next_to_use;
u16 next_to_clean;
u16 next_to_post; /* starting descriptor to post buffers
* to after receive
*/
struct idpf_dma_mem desc_ring; /* descriptor ring memory
* idpf_dma_mem is defined in idpf_mem.h
*/
union {
struct idpf_dma_mem **rx_buff;
struct idpf_ctlq_msg **tx_msg;
} bi;
u16 buf_size; /* queue buffer size */
u16 ring_size; /* Number of descriptors */
struct idpf_ctlq_reg reg; /* registers accessed by ctlqs */
};
/**
* enum idpf_mbx_opc - PF/VF mailbox commands
* @idpf_mbq_opc_send_msg_to_cp: used by PF or VF to send a message to its CP
*/
enum idpf_mbx_opc {
idpf_mbq_opc_send_msg_to_cp = 0x0801,
};
/* API supported for control queue management */
/* Will init all required q including default mb. "q_info" is an array of
 * create_info structs, one per control queue to be created.
*/
int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
struct idpf_ctlq_create_info *q_info);
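/* Example (illustrative sketch): creating the default mailbox pair. The
 * queue length of 64 is arbitrary here, and 'reg' still needs to be filled
 * in (e.g. by the device-specific ctlq_reg_init callback) before calling:
 *
 *	struct idpf_ctlq_create_info q_info[2] = {
 *		{ .type = IDPF_CTLQ_TYPE_MAILBOX_TX, .id = -1,
 *		  .len = 64, .buf_size = IDPF_CTLQ_MAX_BUF_LEN },
 *		{ .type = IDPF_CTLQ_TYPE_MAILBOX_RX, .id = -1,
 *		  .len = 64, .buf_size = IDPF_CTLQ_MAX_BUF_LEN },
 *	};
 *
 *	err = idpf_ctlq_init(hw, 2, q_info);
 */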
/* Allocate and initialize a single control queue, which will be added to the
* control queue list; returns a handle to the created control queue
*/
int idpf_ctlq_add(struct idpf_hw *hw,
struct idpf_ctlq_create_info *qinfo,
struct idpf_ctlq_info **cq);
/* Deinitialize and deallocate a single control queue */
void idpf_ctlq_remove(struct idpf_hw *hw,
struct idpf_ctlq_info *cq);
/* Sends messages to HW and will also free the buffer */
int idpf_ctlq_send(struct idpf_hw *hw,
struct idpf_ctlq_info *cq,
u16 num_q_msg,
struct idpf_ctlq_msg q_msg[]);
/* Receives messages; called by the interrupt handler or by polling
 * initiated by the app/process. The caller is expected to free the buffers
*/
int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
struct idpf_ctlq_msg *q_msg);
/* Reclaims send descriptors on HW write back */
int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
struct idpf_ctlq_msg *msg_status[]);
/* Indicate RX buffers are done being processed */
int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
struct idpf_ctlq_info *cq,
u16 *buff_count,
struct idpf_dma_mem **buffs);
/* Will destroy all q including the default mb */
void idpf_ctlq_deinit(struct idpf_hw *hw);
#endif /* _IDPF_CONTROLQ_API_H_ */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf_controlq.h"
/**
* idpf_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*/
static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
size_t size = cq->ring_size * sizeof(struct idpf_ctlq_desc);
cq->desc_ring.va = idpf_alloc_dma_mem(hw, &cq->desc_ring, size);
if (!cq->desc_ring.va)
return -ENOMEM;
return 0;
}
/**
* idpf_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* Allocate the buffer head for all control queues, and if it's a receive
* queue, allocate DMA buffers
*/
static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
int i;
/* Do not allocate DMA buffers for transmit queues */
if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
return 0;
/* We'll be allocating the buffer info memory first, then we can
* allocate the mapped buffers for the event processing
*/
cq->bi.rx_buff = kcalloc(cq->ring_size, sizeof(struct idpf_dma_mem *),
GFP_KERNEL);
if (!cq->bi.rx_buff)
return -ENOMEM;
/* allocate the mapped buffers (except for the last one) */
for (i = 0; i < cq->ring_size - 1; i++) {
struct idpf_dma_mem *bi;
int num = 1; /* number of idpf_dma_mem to be allocated */
cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct idpf_dma_mem),
GFP_KERNEL);
if (!cq->bi.rx_buff[i])
goto unwind_alloc_cq_bufs;
bi = cq->bi.rx_buff[i];
bi->va = idpf_alloc_dma_mem(hw, bi, cq->buf_size);
if (!bi->va) {
/* unwind will not free the failed entry */
kfree(cq->bi.rx_buff[i]);
goto unwind_alloc_cq_bufs;
}
}
return 0;
unwind_alloc_cq_bufs:
/* don't try to free the one that failed... */
i--;
for (; i >= 0; i--) {
idpf_free_dma_mem(hw, cq->bi.rx_buff[i]);
kfree(cq->bi.rx_buff[i]);
}
kfree(cq->bi.rx_buff);
return -ENOMEM;
}
/**
* idpf_ctlq_free_desc_ring - Free Control Queue (CQ) rings
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* This assumes the posted send buffers have already been cleaned
* and de-allocated
*/
static void idpf_ctlq_free_desc_ring(struct idpf_hw *hw,
struct idpf_ctlq_info *cq)
{
idpf_free_dma_mem(hw, &cq->desc_ring);
}
/**
* idpf_ctlq_free_bufs - Free CQ buffer info elements
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
* queues. The upper layers are expected to manage freeing of TX DMA buffers
*/
static void idpf_ctlq_free_bufs(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
void *bi;
if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX) {
int i;
/* free DMA buffers for rx queues */
for (i = 0; i < cq->ring_size; i++) {
if (cq->bi.rx_buff[i]) {
idpf_free_dma_mem(hw, cq->bi.rx_buff[i]);
kfree(cq->bi.rx_buff[i]);
}
}
bi = (void *)cq->bi.rx_buff;
} else {
bi = (void *)cq->bi.tx_msg;
}
/* free the buffer header */
kfree(bi);
}
/**
* idpf_ctlq_dealloc_ring_res - Free memory allocated for control queue
* @hw: pointer to hw struct
* @cq: pointer to the specific Control queue
*
* Free the memory used by the ring, buffers and other related structures
*/
void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
/* free ring buffers and the ring itself */
idpf_ctlq_free_bufs(hw, cq);
idpf_ctlq_free_desc_ring(hw, cq);
}
/**
* idpf_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
* @hw: pointer to hw struct
* @cq: pointer to control queue struct
*
* Do *NOT* hold cq_lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
*/
int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
{
int err;
/* allocate the ring memory */
err = idpf_ctlq_alloc_desc_ring(hw, cq);
if (err)
return err;
/* allocate buffers in the rings */
err = idpf_ctlq_alloc_bufs(hw, cq);
if (err)
goto idpf_init_cq_free_ring;
/* success! */
return 0;
idpf_init_cq_free_ring:
idpf_free_dma_mem(hw, &cq->desc_ring);
return err;
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
#include "idpf_lan_pf_regs.h"
#define IDPF_PF_ITR_IDX_SPACING 0x4
/**
* idpf_ctlq_reg_init - initialize default mailbox registers
* @cq: pointer to the array of control queue create info structures
*/
static void idpf_ctlq_reg_init(struct idpf_ctlq_create_info *cq)
{
int i;
for (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {
struct idpf_ctlq_create_info *ccq = cq + i;
switch (ccq->type) {
case IDPF_CTLQ_TYPE_MAILBOX_TX:
/* set head and tail registers in our local struct */
ccq->reg.head = PF_FW_ATQH;
ccq->reg.tail = PF_FW_ATQT;
ccq->reg.len = PF_FW_ATQLEN;
ccq->reg.bah = PF_FW_ATQBAH;
ccq->reg.bal = PF_FW_ATQBAL;
ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
ccq->reg.head_mask = PF_FW_ATQH_ATQH_M;
break;
case IDPF_CTLQ_TYPE_MAILBOX_RX:
/* set head and tail registers in our local struct */
ccq->reg.head = PF_FW_ARQH;
ccq->reg.tail = PF_FW_ARQT;
ccq->reg.len = PF_FW_ARQLEN;
ccq->reg.bah = PF_FW_ARQBAH;
ccq->reg.bal = PF_FW_ARQBAL;
ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
ccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
ccq->reg.head_mask = PF_FW_ARQH_ARQH_M;
break;
default:
break;
}
}
}
/**
* idpf_mb_intr_reg_init - Initialize mailbox interrupt register
* @adapter: adapter structure
*/
static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter)
{
struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
u32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);
intr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);
intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
intr->dyn_ctl_itridx_m = PF_GLINT_DYN_CTL_ITR_INDX_M;
intr->icr_ena = idpf_get_reg_addr(adapter, PF_INT_DIR_OICR_ENA);
intr->icr_ena_ctlq_m = PF_INT_DIR_OICR_ENA_M;
}
/**
* idpf_intr_reg_init - Initialize interrupt registers
* @vport: virtual port structure
*/
static int idpf_intr_reg_init(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
int num_vecs = vport->num_q_vectors;
struct idpf_vec_regs *reg_vals;
int num_regs, i, err = 0;
u32 rx_itr, tx_itr;
u16 total_vecs;
total_vecs = idpf_get_reserved_vecs(vport->adapter);
reg_vals = kcalloc(total_vecs, sizeof(struct idpf_vec_regs),
GFP_KERNEL);
if (!reg_vals)
return -ENOMEM;
num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
if (num_regs < num_vecs) {
err = -EINVAL;
goto free_reg_vals;
}
for (i = 0; i < num_vecs; i++) {
struct idpf_q_vector *q_vector = &vport->q_vectors[i];
u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
struct idpf_intr_reg *intr = &q_vector->intr_reg;
u32 spacing;
intr->dyn_ctl = idpf_get_reg_addr(adapter,
reg_vals[vec_id].dyn_ctl_reg);
intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S;
intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S;
spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
IDPF_PF_ITR_IDX_SPACING);
rx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_0,
reg_vals[vec_id].itrn_reg,
spacing);
tx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_1,
reg_vals[vec_id].itrn_reg,
spacing);
intr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);
intr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);
}
free_reg_vals:
kfree(reg_vals);
return err;
}
/**
* idpf_reset_reg_init - Initialize reset registers
* @adapter: Driver specific private structure
*/
static void idpf_reset_reg_init(struct idpf_adapter *adapter)
{
adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, PFGEN_RSTAT);
adapter->reset_reg.rstat_m = PFGEN_RSTAT_PFR_STATE_M;
}
/**
* idpf_trigger_reset - trigger reset
* @adapter: Driver specific private structure
* @trig_cause: Reason to trigger a reset
*/
static void idpf_trigger_reset(struct idpf_adapter *adapter,
enum idpf_flags __always_unused trig_cause)
{
u32 reset_reg;
reset_reg = readl(idpf_get_reg_addr(adapter, PFGEN_CTRL));
writel(reset_reg | PFGEN_CTRL_PFSWR,
idpf_get_reg_addr(adapter, PFGEN_CTRL));
}
/**
* idpf_reg_ops_init - Initialize register API function pointers
* @adapter: Driver specific private structure
*/
static void idpf_reg_ops_init(struct idpf_adapter *adapter)
{
adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init;
adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init;
adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_mb_intr_reg_init;
adapter->dev_ops.reg_ops.reset_reg_init = idpf_reset_reg_init;
adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset;
}
/**
* idpf_dev_ops_init - Initialize device API function pointers
* @adapter: Driver specific private structure
*/
void idpf_dev_ops_init(struct idpf_adapter *adapter)
{
idpf_reg_ops_init(adapter);
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_DEVIDS_H_
#define _IDPF_DEVIDS_H_
#define IDPF_DEV_ID_PF 0x1452
#define IDPF_DEV_ID_VF 0x145C
#endif /* _IDPF_DEVIDS_H_ */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
/**
* idpf_get_rxnfc - command to get RX flow classification rules
* @netdev: network interface device structure
* @cmd: ethtool rxnfc command
* @rule_locs: pointer to store rule locations
*
* Returns Success if the command is supported.
*/
static int idpf_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
u32 __always_unused *rule_locs)
{
struct idpf_vport *vport;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
switch (cmd->cmd) {
case ETHTOOL_GRXRINGS:
cmd->data = vport->num_rxq;
idpf_vport_ctrl_unlock(netdev);
return 0;
default:
break;
}
idpf_vport_ctrl_unlock(netdev);
return -EOPNOTSUPP;
}
/**
* idpf_get_rxfh_key_size - get the RSS hash key size
* @netdev: network interface device structure
*
* Returns the key size on success, error value on failure.
*/
static u32 idpf_get_rxfh_key_size(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_user_config_data *user_config;
if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
return -EOPNOTSUPP;
user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
return user_config->rss_data.rss_key_size;
}
/**
* idpf_get_rxfh_indir_size - get the rx flow hash indirection table size
* @netdev: network interface device structure
*
* Returns the table size on success, error value on failure.
*/
static u32 idpf_get_rxfh_indir_size(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_user_config_data *user_config;
if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
return -EOPNOTSUPP;
user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
return user_config->rss_data.rss_lut_size;
}
/**
* idpf_get_rxfh - get the rx flow hash indirection table
* @netdev: network interface device structure
* @indir: indirection table
* @key: hash key
* @hfunc: hash function in use
*
* Reads the indirection table from the driver's cached RSS configuration.
* Returns 0 on success, negative on failure.
*/
static int idpf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
u8 *hfunc)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_rss_data *rss_data;
struct idpf_adapter *adapter;
int err = 0;
u16 i;
idpf_vport_ctrl_lock(netdev);
adapter = np->adapter;
if (!idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) {
err = -EOPNOTSUPP;
goto unlock_mutex;
}
rss_data = &adapter->vport_config[np->vport_idx]->user_config.rss_data;
if (np->state != __IDPF_VPORT_UP)
goto unlock_mutex;
if (hfunc)
*hfunc = ETH_RSS_HASH_TOP;
if (key)
memcpy(key, rss_data->rss_key, rss_data->rss_key_size);
if (indir) {
for (i = 0; i < rss_data->rss_lut_size; i++)
indir[i] = rss_data->rss_lut[i];
}
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_set_rxfh - set the rx flow hash indirection table
* @netdev: network interface device structure
* @indir: indirection table
* @key: hash key
* @hfunc: hash function to use
*
* Returns 0 after programming the table; a negative errno otherwise.
*/
static int idpf_set_rxfh(struct net_device *netdev, const u32 *indir,
const u8 *key, const u8 hfunc)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_rss_data *rss_data;
struct idpf_adapter *adapter;
struct idpf_vport *vport;
int err = 0;
u16 lut;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
adapter = vport->adapter;
if (!idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) {
err = -EOPNOTSUPP;
goto unlock_mutex;
}
rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
if (np->state != __IDPF_VPORT_UP)
goto unlock_mutex;
if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) {
err = -EOPNOTSUPP;
goto unlock_mutex;
}
if (key)
memcpy(rss_data->rss_key, key, rss_data->rss_key_size);
if (indir) {
for (lut = 0; lut < rss_data->rss_lut_size; lut++)
rss_data->rss_lut[lut] = indir[lut];
}
err = idpf_config_rss(vport);
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_get_channels: get the number of channels supported by the device
* @netdev: network interface device structure
* @ch: channel information structure
*
* Report maximum of TX and RX. Report one extra channel to match our MailBox
* Queue.
*/
static void idpf_get_channels(struct net_device *netdev,
struct ethtool_channels *ch)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_config *vport_config;
u16 num_txq, num_rxq;
u16 combined;
vport_config = np->adapter->vport_config[np->vport_idx];
num_txq = vport_config->user_config.num_req_tx_qs;
num_rxq = vport_config->user_config.num_req_rx_qs;
combined = min(num_txq, num_rxq);
/* Report maximum channels */
ch->max_combined = min_t(u16, vport_config->max_q.max_txq,
vport_config->max_q.max_rxq);
ch->max_rx = vport_config->max_q.max_rxq;
ch->max_tx = vport_config->max_q.max_txq;
ch->max_other = IDPF_MAX_MBXQ;
ch->other_count = IDPF_MAX_MBXQ;
ch->combined_count = combined;
ch->rx_count = num_rxq - combined;
ch->tx_count = num_txq - combined;
}
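/* Worked example (illustrative): with num_req_tx_qs = 8 and
 * num_req_rx_qs = 4, combined = 4, so ethtool reports combined_count = 4,
 * tx_count = 4, rx_count = 0 and other_count = IDPF_MAX_MBXQ for the
 * mailbox queue.
 */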
/**
* idpf_set_channels: set the new channel count
* @netdev: network interface device structure
* @ch: channel information structure
*
* Negotiate a new number of channels with CP. Returns 0 on success, negative
* on failure.
*/
static int idpf_set_channels(struct net_device *netdev,
struct ethtool_channels *ch)
{
struct idpf_vport_config *vport_config;
u16 combined, num_txq, num_rxq;
unsigned int num_req_tx_q;
unsigned int num_req_rx_q;
struct idpf_vport *vport;
struct device *dev;
int err = 0;
u16 idx;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
idx = vport->idx;
vport_config = vport->adapter->vport_config[idx];
num_txq = vport_config->user_config.num_req_tx_qs;
num_rxq = vport_config->user_config.num_req_rx_qs;
combined = min(num_txq, num_rxq);
/* these checks are for cases where user didn't specify a particular
* value on cmd line but we get non-zero value anyway via
* get_channels(); look at ethtool.c in ethtool repository (the user
* space part), particularly, do_schannels() routine
*/
if (ch->combined_count == combined)
ch->combined_count = 0;
if (ch->combined_count && ch->rx_count == num_rxq - combined)
ch->rx_count = 0;
if (ch->combined_count && ch->tx_count == num_txq - combined)
ch->tx_count = 0;
num_req_tx_q = ch->combined_count + ch->tx_count;
num_req_rx_q = ch->combined_count + ch->rx_count;
dev = &vport->adapter->pdev->dev;
/* It's possible to specify number of queues that exceeds max.
* Stack checks max combined_count and max [tx|rx]_count but not the
* max combined_count + [tx|rx]_count. These checks should catch that.
*/
if (num_req_tx_q > vport_config->max_q.max_txq) {
dev_info(dev, "Maximum TX queues is %d\n",
vport_config->max_q.max_txq);
err = -EINVAL;
goto unlock_mutex;
}
if (num_req_rx_q > vport_config->max_q.max_rxq) {
dev_info(dev, "Maximum RX queues is %d\n",
vport_config->max_q.max_rxq);
err = -EINVAL;
goto unlock_mutex;
}
if (num_req_tx_q == num_txq && num_req_rx_q == num_rxq)
goto unlock_mutex;
vport_config->user_config.num_req_tx_qs = num_req_tx_q;
vport_config->user_config.num_req_rx_qs = num_req_rx_q;
err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_CHANGE);
if (err) {
/* roll back queue change */
vport_config->user_config.num_req_tx_qs = num_txq;
vport_config->user_config.num_req_rx_qs = num_rxq;
}
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_get_ringparam - Get ring parameters
* @netdev: network interface device structure
* @ring: ethtool ringparam structure
* @kring: unused
* @ext_ack: unused
*
* Returns current ring parameters. TX and RX rings are reported separately,
* but the number of rings is not reported.
*/
static void idpf_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring,
struct kernel_ethtool_ringparam *kring,
struct netlink_ext_ack *ext_ack)
{
struct idpf_vport *vport;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
ring->rx_max_pending = IDPF_MAX_RXQ_DESC;
ring->tx_max_pending = IDPF_MAX_TXQ_DESC;
ring->rx_pending = vport->rxq_desc_count;
ring->tx_pending = vport->txq_desc_count;
idpf_vport_ctrl_unlock(netdev);
}
/**
* idpf_set_ringparam - Set ring parameters
* @netdev: network interface device structure
* @ring: ethtool ringparam structure
* @kring: unused
* @ext_ack: unused
*
* Sets ring parameters. TX and RX rings are controlled separately, but the
* number of rings is not specified, so all rings get the same settings.
*/
static int idpf_set_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring,
struct kernel_ethtool_ringparam *kring,
struct netlink_ext_ack *ext_ack)
{
struct idpf_vport_user_config_data *config_data;
u32 new_rx_count, new_tx_count;
struct idpf_vport *vport;
int i, err = 0;
u16 idx;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
idx = vport->idx;
if (ring->tx_pending < IDPF_MIN_TXQ_DESC) {
netdev_err(netdev, "Descriptors requested (Tx: %u) is less than min supported (%u)\n",
ring->tx_pending,
IDPF_MIN_TXQ_DESC);
err = -EINVAL;
goto unlock_mutex;
}
if (ring->rx_pending < IDPF_MIN_RXQ_DESC) {
netdev_err(netdev, "Descriptors requested (Rx: %u) is less than min supported (%u)\n",
ring->rx_pending,
IDPF_MIN_RXQ_DESC);
err = -EINVAL;
goto unlock_mutex;
}
new_rx_count = ALIGN(ring->rx_pending, IDPF_REQ_RXQ_DESC_MULTIPLE);
if (new_rx_count != ring->rx_pending)
netdev_info(netdev, "Requested Rx descriptor count rounded up to %u\n",
new_rx_count);
new_tx_count = ALIGN(ring->tx_pending, IDPF_REQ_DESC_MULTIPLE);
if (new_tx_count != ring->tx_pending)
netdev_info(netdev, "Requested Tx descriptor count rounded up to %u\n",
new_tx_count);
if (new_tx_count == vport->txq_desc_count &&
new_rx_count == vport->rxq_desc_count)
goto unlock_mutex;
config_data = &vport->adapter->vport_config[idx]->user_config;
config_data->num_req_txq_desc = new_tx_count;
config_data->num_req_rxq_desc = new_rx_count;
/* Since we adjusted the RX completion queue count, the RX buffer queue
* descriptor count needs to be adjusted as well
*/
for (i = 0; i < vport->num_bufqs_per_qgrp; i++)
vport->bufq_desc_count[i] =
IDPF_RX_BUFQ_DESC_COUNT(new_rx_count,
vport->num_bufqs_per_qgrp);
err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_DESC_CHANGE);
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* struct idpf_stats - definition for an ethtool statistic
* @stat_string: statistic name to display in ethtool -S output
* @sizeof_stat: the sizeof() the stat, must be no greater than sizeof(u64)
* @stat_offset: offsetof() the stat from a base pointer
*
* This structure defines a statistic to be added to the ethtool stats buffer.
* It defines a statistic as offset from a common base pointer. Stats should
* be defined in constant arrays using the IDPF_STAT macro, with every element
* of the array using the same _type for calculating the sizeof_stat and
* stat_offset.
*
* The @sizeof_stat is expected to be sizeof(u8), sizeof(u16), sizeof(u32) or
* sizeof(u64). Other sizes are not expected and will produce a WARN_ONCE from
* the idpf_add_ethtool_stat() helper function.
*
* The @stat_string is interpreted as a format string, allowing formatted
* values to be inserted while looping over multiple structures for a given
* statistics array. Thus, every statistic string in an array should have the
* same type and number of format specifiers, to be formatted by variadic
* arguments to the idpf_add_stat_string() helper function.
*/
struct idpf_stats {
char stat_string[ETH_GSTRING_LEN];
int sizeof_stat;
int stat_offset;
};
/* Helper macro to define an idpf_stat structure with proper size and type.
* Use this when defining constant statistics arrays. Note that @_type expects
* only a type name and is used multiple times.
*/
#define IDPF_STAT(_type, _name, _stat) { \
.stat_string = _name, \
.sizeof_stat = sizeof_field(_type, _stat), \
.stat_offset = offsetof(_type, _stat) \
}
/* Helper macro for defining some statistics related to queues */
#define IDPF_QUEUE_STAT(_name, _stat) \
IDPF_STAT(struct idpf_queue, _name, _stat)
/* Stats associated with a Tx queue */
static const struct idpf_stats idpf_gstrings_tx_queue_stats[] = {
IDPF_QUEUE_STAT("pkts", q_stats.tx.packets),
IDPF_QUEUE_STAT("bytes", q_stats.tx.bytes),
IDPF_QUEUE_STAT("lso_pkts", q_stats.tx.lso_pkts),
};
/* Stats associated with an Rx queue */
static const struct idpf_stats idpf_gstrings_rx_queue_stats[] = {
IDPF_QUEUE_STAT("pkts", q_stats.rx.packets),
IDPF_QUEUE_STAT("bytes", q_stats.rx.bytes),
IDPF_QUEUE_STAT("rx_gro_hw_pkts", q_stats.rx.rsc_pkts),
};
#define IDPF_TX_QUEUE_STATS_LEN ARRAY_SIZE(idpf_gstrings_tx_queue_stats)
#define IDPF_RX_QUEUE_STATS_LEN ARRAY_SIZE(idpf_gstrings_rx_queue_stats)
#define IDPF_PORT_STAT(_name, _stat) \
IDPF_STAT(struct idpf_vport, _name, _stat)
static const struct idpf_stats idpf_gstrings_port_stats[] = {
IDPF_PORT_STAT("rx-csum_errors", port_stats.rx_hw_csum_err),
IDPF_PORT_STAT("rx-hsplit", port_stats.rx_hsplit),
IDPF_PORT_STAT("rx-hsplit_hbo", port_stats.rx_hsplit_hbo),
IDPF_PORT_STAT("rx-bad_descs", port_stats.rx_bad_descs),
IDPF_PORT_STAT("tx-skb_drops", port_stats.tx_drops),
IDPF_PORT_STAT("tx-dma_map_errs", port_stats.tx_dma_map_errs),
IDPF_PORT_STAT("tx-linearized_pkts", port_stats.tx_linearize),
IDPF_PORT_STAT("tx-busy_events", port_stats.tx_busy),
IDPF_PORT_STAT("rx-unicast_pkts", port_stats.vport_stats.rx_unicast),
IDPF_PORT_STAT("rx-multicast_pkts", port_stats.vport_stats.rx_multicast),
IDPF_PORT_STAT("rx-broadcast_pkts", port_stats.vport_stats.rx_broadcast),
IDPF_PORT_STAT("rx-unknown_protocol", port_stats.vport_stats.rx_unknown_protocol),
IDPF_PORT_STAT("tx-unicast_pkts", port_stats.vport_stats.tx_unicast),
IDPF_PORT_STAT("tx-multicast_pkts", port_stats.vport_stats.tx_multicast),
IDPF_PORT_STAT("tx-broadcast_pkts", port_stats.vport_stats.tx_broadcast),
};
#define IDPF_PORT_STATS_LEN ARRAY_SIZE(idpf_gstrings_port_stats)
/**
* __idpf_add_qstat_strings - copy stat strings into ethtool buffer
* @p: ethtool supplied buffer
* @stats: stat definitions array
* @size: size of the stats array
* @type: stat type
* @idx: stat index
*
* Format and copy the strings described by stats into the buffer pointed at
* by p.
*/
static void __idpf_add_qstat_strings(u8 **p, const struct idpf_stats *stats,
const unsigned int size, const char *type,
unsigned int idx)
{
unsigned int i;
for (i = 0; i < size; i++)
ethtool_sprintf(p, "%s_q-%u_%s",
type, idx, stats[i].stat_string);
}
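/* For example (illustrative), with type = "tx", idx = 2 and the "pkts"
 * entry of idpf_gstrings_tx_queue_stats, the format "%s_q-%u_%s" yields
 * "tx_q-2_pkts" in the ethtool -S output.
 */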
/**
* idpf_add_qstat_strings - Copy queue stat strings into ethtool buffer
* @p: ethtool supplied buffer
* @stats: stat definitions array
* @type: stat type
* @idx: stat idx
*
* Format and copy the strings described by the const static stats value into
* the buffer pointed at by p.
*
* The parameter @stats is evaluated twice, so parameters with side effects
* should be avoided. Additionally, stats must be an array such that
* ARRAY_SIZE can be called on it.
*/
#define idpf_add_qstat_strings(p, stats, type, idx) \
__idpf_add_qstat_strings(p, stats, ARRAY_SIZE(stats), type, idx)
/**
* idpf_add_stat_strings - Copy port stat strings into ethtool buffer
* @p: ethtool buffer
* @stats: struct to copy from
* @size: size of stats array to copy from
*/
static void idpf_add_stat_strings(u8 **p, const struct idpf_stats *stats,
const unsigned int size)
{
unsigned int i;
for (i = 0; i < size; i++)
ethtool_sprintf(p, "%s", stats[i].stat_string);
}
/**
* idpf_get_stat_strings - Get stat strings
* @netdev: network interface device structure
* @data: buffer for string data
*
* Builds the statistics string table
*/
static void idpf_get_stat_strings(struct net_device *netdev, u8 *data)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_config *vport_config;
unsigned int i;
idpf_add_stat_strings(&data, idpf_gstrings_port_stats,
IDPF_PORT_STATS_LEN);
vport_config = np->adapter->vport_config[np->vport_idx];
/* It's critical that we always report a constant number of strings and
* that the strings are reported in the same order regardless of how
* many queues are actually in use.
*/
for (i = 0; i < vport_config->max_q.max_txq; i++)
idpf_add_qstat_strings(&data, idpf_gstrings_tx_queue_stats,
"tx", i);
for (i = 0; i < vport_config->max_q.max_rxq; i++)
idpf_add_qstat_strings(&data, idpf_gstrings_rx_queue_stats,
"rx", i);
page_pool_ethtool_stats_get_strings(data);
}
/**
* idpf_get_strings - Get string set
* @netdev: network interface device structure
* @sset: id of string set
* @data: buffer for string data
*
* Builds string tables for various string sets
*/
static void idpf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
{
switch (sset) {
case ETH_SS_STATS:
idpf_get_stat_strings(netdev, data);
break;
default:
break;
}
}
/**
* idpf_get_sset_count - Get length of string set
* @netdev: network interface device structure
* @sset: id of string set
*
* Reports size of various string tables.
*/
static int idpf_get_sset_count(struct net_device *netdev, int sset)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_config *vport_config;
u16 max_txq, max_rxq;
unsigned int size;
if (sset != ETH_SS_STATS)
return -EINVAL;
vport_config = np->adapter->vport_config[np->vport_idx];
/* This size reported back here *must* be constant throughout the
* lifecycle of the netdevice, i.e. we must report the maximum length
* even for queues that don't technically exist. This is due to the
* fact that this userspace API uses three separate ioctl calls to get
* stats data but has no way to communicate back to userspace when that
* size has changed, which can typically happen as a result of changing
* number of queues. If the number/order of stats change in the middle
* of this call chain it will lead to userspace crashing/accessing bad
* data through buffer under/overflow.
*/
max_txq = vport_config->max_q.max_txq;
max_rxq = vport_config->max_q.max_rxq;
size = IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) +
(IDPF_RX_QUEUE_STATS_LEN * max_rxq);
size += page_pool_ethtool_stats_get_count();
return size;
}
/**
* idpf_add_one_ethtool_stat - copy the stat into the supplied buffer
* @data: location to store the stat value
* @pstat: old stat pointer to copy from
* @stat: the stat definition
*
* Copies the stat data defined by the pointer and stat structure pair into
* the memory supplied as data. If the pointer is null, data will be zero'd.
*/
static void idpf_add_one_ethtool_stat(u64 *data, void *pstat,
const struct idpf_stats *stat)
{
char *p;
if (!pstat) {
/* Ensure that the ethtool data buffer is zero'd for any stats
* which don't have a valid pointer.
*/
*data = 0;
return;
}
p = (char *)pstat + stat->stat_offset;
switch (stat->sizeof_stat) {
case sizeof(u64):
*data = *((u64 *)p);
break;
case sizeof(u32):
*data = *((u32 *)p);
break;
case sizeof(u16):
*data = *((u16 *)p);
break;
case sizeof(u8):
*data = *((u8 *)p);
break;
default:
WARN_ONCE(1, "unexpected stat size for %s",
stat->stat_string);
*data = 0;
}
}
/**
* idpf_add_queue_stats - copy queue statistics into supplied buffer
* @data: ethtool stats buffer
* @q: the queue to copy
*
* Queue statistics must be copied while protected by u64_stats_fetch_begin,
* so we can't directly use idpf_add_ethtool_stats. Assumes that queue stats
* are defined in idpf_gstrings_queue_stats. If the queue pointer is null,
* zero out the queue stat values and update the data pointer. Otherwise
* safely copy the stats from the queue into the supplied buffer and update
* the data pointer when finished.
*
* This function expects to be called while under rcu_read_lock().
*/
static void idpf_add_queue_stats(u64 **data, struct idpf_queue *q)
{
const struct idpf_stats *stats;
unsigned int start;
unsigned int size;
unsigned int i;
if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
size = IDPF_RX_QUEUE_STATS_LEN;
stats = idpf_gstrings_rx_queue_stats;
} else {
size = IDPF_TX_QUEUE_STATS_LEN;
stats = idpf_gstrings_tx_queue_stats;
}
/* To avoid invalid statistics values, ensure that we keep retrying
* the copy until we get a consistent value according to
* u64_stats_fetch_retry.
*/
do {
start = u64_stats_fetch_begin(&q->stats_sync);
for (i = 0; i < size; i++)
idpf_add_one_ethtool_stat(&(*data)[i], q, &stats[i]);
} while (u64_stats_fetch_retry(&q->stats_sync, start));
/* Once we successfully copy the stats in, update the data pointer */
*data += size;
}
/**
* idpf_add_empty_queue_stats - Add stats for a non-existent queue
* @data: pointer to data buffer
* @qtype: type of data queue
*
* We must report a constant length of stats back to userspace regardless of
* how many queues are actually in use because stats collection happens over
* three separate ioctls and there's no way to notify userspace the size
* changed between those calls. This adds empty stats to the data buffer since we
* don't have a real queue to refer to for this stats slot.
*/
static void idpf_add_empty_queue_stats(u64 **data, u16 qtype)
{
unsigned int i;
int stats_len;
if (qtype == VIRTCHNL2_QUEUE_TYPE_RX)
stats_len = IDPF_RX_QUEUE_STATS_LEN;
else
stats_len = IDPF_TX_QUEUE_STATS_LEN;
for (i = 0; i < stats_len; i++)
(*data)[i] = 0;
*data += stats_len;
}
/**
* idpf_add_port_stats - Copy port stats into ethtool buffer
* @vport: virtual port struct
* @data: ethtool buffer to copy into
*/
static void idpf_add_port_stats(struct idpf_vport *vport, u64 **data)
{
unsigned int size = IDPF_PORT_STATS_LEN;
unsigned int start;
unsigned int i;
do {
start = u64_stats_fetch_begin(&vport->port_stats.stats_sync);
for (i = 0; i < size; i++)
idpf_add_one_ethtool_stat(&(*data)[i], vport,
&idpf_gstrings_port_stats[i]);
} while (u64_stats_fetch_retry(&vport->port_stats.stats_sync, start));
*data += size;
}
/**
* idpf_collect_queue_stats - accumulate various per queue stats
* into port level stats
* @vport: pointer to vport struct
**/
static void idpf_collect_queue_stats(struct idpf_vport *vport)
{
struct idpf_port_stats *pstats = &vport->port_stats;
int i, j;
/* zero out port stats since they're actually tracked in per
* queue stats; this is only for reporting
*/
u64_stats_update_begin(&pstats->stats_sync);
u64_stats_set(&pstats->rx_hw_csum_err, 0);
u64_stats_set(&pstats->rx_hsplit, 0);
u64_stats_set(&pstats->rx_hsplit_hbo, 0);
u64_stats_set(&pstats->rx_bad_descs, 0);
u64_stats_set(&pstats->tx_linearize, 0);
u64_stats_set(&pstats->tx_busy, 0);
u64_stats_set(&pstats->tx_drops, 0);
u64_stats_set(&pstats->tx_dma_map_errs, 0);
u64_stats_update_end(&pstats->stats_sync);
for (i = 0; i < vport->num_rxq_grp; i++) {
struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
u16 num_rxq;
if (idpf_is_queue_model_split(vport->rxq_model))
num_rxq = rxq_grp->splitq.num_rxq_sets;
else
num_rxq = rxq_grp->singleq.num_rxq;
for (j = 0; j < num_rxq; j++) {
u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs;
struct idpf_rx_queue_stats *stats;
struct idpf_queue *rxq;
unsigned int start;
if (idpf_is_queue_model_split(vport->rxq_model))
rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
else
rxq = rxq_grp->singleq.rxqs[j];
if (!rxq)
continue;
do {
start = u64_stats_fetch_begin(&rxq->stats_sync);
stats = &rxq->q_stats.rx;
hw_csum_err = u64_stats_read(&stats->hw_csum_err);
hsplit = u64_stats_read(&stats->hsplit_pkts);
hsplit_hbo = u64_stats_read(&stats->hsplit_buf_ovf);
bad_descs = u64_stats_read(&stats->bad_descs);
} while (u64_stats_fetch_retry(&rxq->stats_sync, start));
u64_stats_update_begin(&pstats->stats_sync);
u64_stats_add(&pstats->rx_hw_csum_err, hw_csum_err);
u64_stats_add(&pstats->rx_hsplit, hsplit);
u64_stats_add(&pstats->rx_hsplit_hbo, hsplit_hbo);
u64_stats_add(&pstats->rx_bad_descs, bad_descs);
u64_stats_update_end(&pstats->stats_sync);
}
}
for (i = 0; i < vport->num_txq_grp; i++) {
struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
for (j = 0; j < txq_grp->num_txq; j++) {
u64 linearize, qbusy, skb_drops, dma_map_errs;
struct idpf_queue *txq = txq_grp->txqs[j];
struct idpf_tx_queue_stats *stats;
unsigned int start;
if (!txq)
continue;
do {
start = u64_stats_fetch_begin(&txq->stats_sync);
stats = &txq->q_stats.tx;
linearize = u64_stats_read(&stats->linearize);
qbusy = u64_stats_read(&stats->q_busy);
skb_drops = u64_stats_read(&stats->skb_drops);
dma_map_errs = u64_stats_read(&stats->dma_map_errs);
} while (u64_stats_fetch_retry(&txq->stats_sync, start));
u64_stats_update_begin(&pstats->stats_sync);
u64_stats_add(&pstats->tx_linearize, linearize);
u64_stats_add(&pstats->tx_busy, qbusy);
u64_stats_add(&pstats->tx_drops, skb_drops);
u64_stats_add(&pstats->tx_dma_map_errs, dma_map_errs);
u64_stats_update_end(&pstats->stats_sync);
}
}
}
/**
* idpf_get_ethtool_stats - report device statistics
* @netdev: network interface device structure
* @stats: ethtool statistics structure
* @data: pointer to data buffer
*
* All statistics are added to the data buffer as an array of u64.
*/
static void idpf_get_ethtool_stats(struct net_device *netdev,
struct ethtool_stats __always_unused *stats,
u64 *data)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_config *vport_config;
struct page_pool_stats pp_stats = { };
struct idpf_vport *vport;
unsigned int total = 0;
unsigned int i, j;
bool is_splitq;
u16 qtype;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
if (np->state != __IDPF_VPORT_UP) {
idpf_vport_ctrl_unlock(netdev);
return;
}
rcu_read_lock();
idpf_collect_queue_stats(vport);
idpf_add_port_stats(vport, &data);
for (i = 0; i < vport->num_txq_grp; i++) {
struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
qtype = VIRTCHNL2_QUEUE_TYPE_TX;
for (j = 0; j < txq_grp->num_txq; j++, total++) {
struct idpf_queue *txq = txq_grp->txqs[j];
if (!txq)
idpf_add_empty_queue_stats(&data, qtype);
else
idpf_add_queue_stats(&data, txq);
}
}
vport_config = vport->adapter->vport_config[vport->idx];
/* It is critical we provide a constant number of stats back to
* userspace regardless of how many queues are actually in use because
* there is no way to inform userspace the size has changed between
* ioctl calls. This will fill in any missing stats with zero.
*/
for (; total < vport_config->max_q.max_txq; total++)
idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_TX);
total = 0;
is_splitq = idpf_is_queue_model_split(vport->rxq_model);
for (i = 0; i < vport->num_rxq_grp; i++) {
struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
u16 num_rxq;
qtype = VIRTCHNL2_QUEUE_TYPE_RX;
if (is_splitq)
num_rxq = rxq_grp->splitq.num_rxq_sets;
else
num_rxq = rxq_grp->singleq.num_rxq;
for (j = 0; j < num_rxq; j++, total++) {
struct idpf_queue *rxq;
if (is_splitq)
rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
else
rxq = rxq_grp->singleq.rxqs[j];
if (!rxq)
idpf_add_empty_queue_stats(&data, qtype);
else
idpf_add_queue_stats(&data, rxq);
/* In splitq mode, don't get page pool stats here since
* the pools are attached to the buffer queues
*/
if (is_splitq)
continue;
if (rxq)
page_pool_get_stats(rxq->pp, &pp_stats);
}
}
for (i = 0; i < vport->num_rxq_grp; i++) {
for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
struct idpf_queue *rxbufq =
&vport->rxq_grps[i].splitq.bufq_sets[j].bufq;
page_pool_get_stats(rxbufq->pp, &pp_stats);
}
}
for (; total < vport_config->max_q.max_rxq; total++)
idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_RX);
page_pool_ethtool_stats_get(data, &pp_stats);
rcu_read_unlock();
idpf_vport_ctrl_unlock(netdev);
}
/**
* idpf_find_rxq - find rxq from q index
* @vport: virtual port associated to queue
* @q_num: q index used to find queue
*
* returns pointer to rx queue
*/
static struct idpf_queue *idpf_find_rxq(struct idpf_vport *vport, int q_num)
{
int q_grp, q_idx;
if (!idpf_is_queue_model_split(vport->rxq_model))
return vport->rxq_grps->singleq.rxqs[q_num];
q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
return &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq;
}
/**
* idpf_find_txq - find txq from q index
* @vport: virtual port associated to queue
* @q_num: q index used to find queue
*
* returns pointer to tx queue
*/
static struct idpf_queue *idpf_find_txq(struct idpf_vport *vport, int q_num)
{
int q_grp;
if (!idpf_is_queue_model_split(vport->txq_model))
return vport->txqs[q_num];
q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP;
return vport->txq_grps[q_grp].complq;
}
/**
* __idpf_get_q_coalesce - get ITR values for specific queue
* @ec: ethtool structure to fill with driver's coalesce settings
* @q: Rx or Tx queue
*/
static void __idpf_get_q_coalesce(struct ethtool_coalesce *ec,
struct idpf_queue *q)
{
if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
ec->use_adaptive_rx_coalesce =
IDPF_ITR_IS_DYNAMIC(q->q_vector->rx_intr_mode);
ec->rx_coalesce_usecs = q->q_vector->rx_itr_value;
} else {
ec->use_adaptive_tx_coalesce =
IDPF_ITR_IS_DYNAMIC(q->q_vector->tx_intr_mode);
ec->tx_coalesce_usecs = q->q_vector->tx_itr_value;
}
}
/**
* idpf_get_q_coalesce - get ITR values for specific queue
* @netdev: pointer to the netdev associated with this query
* @ec: coalesce settings to be filled
* @q_num: queue number/index to get the ITR/INTRL (coalesce) settings for
*
* Return 0 on success, and negative on failure
*/
static int idpf_get_q_coalesce(struct net_device *netdev,
struct ethtool_coalesce *ec,
u32 q_num)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport *vport;
int err = 0;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
if (np->state != __IDPF_VPORT_UP)
goto unlock_mutex;
if (q_num >= vport->num_rxq && q_num >= vport->num_txq) {
err = -EINVAL;
goto unlock_mutex;
}
if (q_num < vport->num_rxq)
__idpf_get_q_coalesce(ec, idpf_find_rxq(vport, q_num));
if (q_num < vport->num_txq)
__idpf_get_q_coalesce(ec, idpf_find_txq(vport, q_num));
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_get_coalesce - get ITR values as requested by user
* @netdev: pointer to the netdev associated with this query
* @ec: coalesce settings to be filled
* @kec: unused
* @extack: unused
*
* Return 0 on success, and negative on failure
*/
static int idpf_get_coalesce(struct net_device *netdev,
struct ethtool_coalesce *ec,
struct kernel_ethtool_coalesce *kec,
struct netlink_ext_ack *extack)
{
/* Return coalesce based on queue number zero */
return idpf_get_q_coalesce(netdev, ec, 0);
}
/**
* idpf_get_per_q_coalesce - get ITR values as requested by user
* @netdev: pointer to the netdev associated with this query
* @q_num: queue for which the ITR values have to be retrieved
* @ec: coalesce settings to be filled
*
* Return 0 on success, and negative on failure
*/
static int idpf_get_per_q_coalesce(struct net_device *netdev, u32 q_num,
struct ethtool_coalesce *ec)
{
return idpf_get_q_coalesce(netdev, ec, q_num);
}
/**
* __idpf_set_q_coalesce - set ITR values for specific queue
* @ec: ethtool structure from user to update ITR settings
* @q: queue for which ITR values have to be set
* @is_rxq: is queue type rx
*
* Returns 0 on success, negative otherwise.
*/
static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
struct idpf_queue *q, bool is_rxq)
{
u32 use_adaptive_coalesce, coalesce_usecs;
struct idpf_q_vector *qv = q->q_vector;
bool is_dim_ena = false;
u16 itr_val;
if (is_rxq) {
is_dim_ena = IDPF_ITR_IS_DYNAMIC(qv->rx_intr_mode);
use_adaptive_coalesce = ec->use_adaptive_rx_coalesce;
coalesce_usecs = ec->rx_coalesce_usecs;
itr_val = qv->rx_itr_value;
} else {
is_dim_ena = IDPF_ITR_IS_DYNAMIC(qv->tx_intr_mode);
use_adaptive_coalesce = ec->use_adaptive_tx_coalesce;
coalesce_usecs = ec->tx_coalesce_usecs;
itr_val = qv->tx_itr_value;
}
if (coalesce_usecs != itr_val && use_adaptive_coalesce) {
netdev_err(q->vport->netdev, "Cannot set coalesce usecs if adaptive enabled\n");
return -EINVAL;
}
if (is_dim_ena && use_adaptive_coalesce)
return 0;
if (coalesce_usecs > IDPF_ITR_MAX) {
netdev_err(q->vport->netdev,
"Invalid value, %d-usecs range is 0-%d\n",
coalesce_usecs, IDPF_ITR_MAX);
return -EINVAL;
}
if (coalesce_usecs % 2) {
coalesce_usecs--;
netdev_info(q->vport->netdev,
"HW only supports even ITR values, ITR rounded to %d\n",
coalesce_usecs);
}
if (is_rxq) {
qv->rx_itr_value = coalesce_usecs;
if (use_adaptive_coalesce) {
qv->rx_intr_mode = IDPF_ITR_DYNAMIC;
} else {
qv->rx_intr_mode = !IDPF_ITR_DYNAMIC;
idpf_vport_intr_write_itr(qv, qv->rx_itr_value,
false);
}
} else {
qv->tx_itr_value = coalesce_usecs;
if (use_adaptive_coalesce) {
qv->tx_intr_mode = IDPF_ITR_DYNAMIC;
} else {
qv->tx_intr_mode = !IDPF_ITR_DYNAMIC;
idpf_vport_intr_write_itr(qv, qv->tx_itr_value, true);
}
}
/* Update of static/dynamic ITR will be taken care of when the interrupt
 * fires
*/
return 0;
}
/**
* idpf_set_q_coalesce - set ITR values for specific queue
* @vport: vport associated with the queue that needs updating
* @ec: coalesce settings to program the device with
* @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
* @is_rxq: is queue type rx
*
* Return 0 on success, and negative on failure
*/
static int idpf_set_q_coalesce(struct idpf_vport *vport,
struct ethtool_coalesce *ec,
int q_num, bool is_rxq)
{
struct idpf_queue *q;
q = is_rxq ? idpf_find_rxq(vport, q_num) : idpf_find_txq(vport, q_num);
if (q && __idpf_set_q_coalesce(ec, q, is_rxq))
return -EINVAL;
return 0;
}
/**
* idpf_set_coalesce - set ITR values as requested by user
* @netdev: pointer to the netdev associated with this query
* @ec: coalesce settings to program the device with
* @kec: unused
* @extack: unused
*
* Return 0 on success, and negative on failure
*/
static int idpf_set_coalesce(struct net_device *netdev,
struct ethtool_coalesce *ec,
struct kernel_ethtool_coalesce *kec,
struct netlink_ext_ack *extack)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport *vport;
int i, err = 0;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
if (np->state != __IDPF_VPORT_UP)
goto unlock_mutex;
for (i = 0; i < vport->num_txq; i++) {
err = idpf_set_q_coalesce(vport, ec, i, false);
if (err)
goto unlock_mutex;
}
for (i = 0; i < vport->num_rxq; i++) {
err = idpf_set_q_coalesce(vport, ec, i, true);
if (err)
goto unlock_mutex;
}
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_set_per_q_coalesce - set ITR values as requested by user
* @netdev: pointer to the netdev associated with this query
* @q_num: queue for which the ITR values have to be set
* @ec: coalesce settings to program the device with
*
* Return 0 on success, and negative on failure
*/
static int idpf_set_per_q_coalesce(struct net_device *netdev, u32 q_num,
struct ethtool_coalesce *ec)
{
struct idpf_vport *vport;
int err;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
err = idpf_set_q_coalesce(vport, ec, q_num, false);
if (err) {
idpf_vport_ctrl_unlock(netdev);
return err;
}
err = idpf_set_q_coalesce(vport, ec, q_num, true);
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_get_msglevel - Get debug message level
* @netdev: network interface device structure
*
* Returns current debug message level.
*/
static u32 idpf_get_msglevel(struct net_device *netdev)
{
struct idpf_adapter *adapter = idpf_netdev_to_adapter(netdev);
return adapter->msg_enable;
}
/**
* idpf_set_msglevel - Set debug message level
* @netdev: network interface device structure
* @data: message level
*
* Set current debug message level. Higher values cause the driver to
* be noisier.
*/
static void idpf_set_msglevel(struct net_device *netdev, u32 data)
{
struct idpf_adapter *adapter = idpf_netdev_to_adapter(netdev);
adapter->msg_enable = data;
}
/**
* idpf_get_link_ksettings - Get Link Speed and Duplex settings
* @netdev: network interface device structure
* @cmd: ethtool command
*
* Reports speed/duplex settings.
**/
static int idpf_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *cmd)
{
struct idpf_vport *vport;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
ethtool_link_ksettings_zero_link_mode(cmd, supported);
cmd->base.autoneg = AUTONEG_DISABLE;
cmd->base.port = PORT_NONE;
if (vport->link_up) {
cmd->base.duplex = DUPLEX_FULL;
cmd->base.speed = vport->link_speed_mbps;
} else {
cmd->base.duplex = DUPLEX_UNKNOWN;
cmd->base.speed = SPEED_UNKNOWN;
}
idpf_vport_ctrl_unlock(netdev);
return 0;
}
static const struct ethtool_ops idpf_ethtool_ops = {
.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_USE_ADAPTIVE,
.get_msglevel = idpf_get_msglevel,
.set_msglevel = idpf_set_msglevel,
.get_link = ethtool_op_get_link,
.get_coalesce = idpf_get_coalesce,
.set_coalesce = idpf_set_coalesce,
.get_per_queue_coalesce = idpf_get_per_q_coalesce,
.set_per_queue_coalesce = idpf_set_per_q_coalesce,
.get_ethtool_stats = idpf_get_ethtool_stats,
.get_strings = idpf_get_strings,
.get_sset_count = idpf_get_sset_count,
.get_channels = idpf_get_channels,
.get_rxnfc = idpf_get_rxnfc,
.get_rxfh_key_size = idpf_get_rxfh_key_size,
.get_rxfh_indir_size = idpf_get_rxfh_indir_size,
.get_rxfh = idpf_get_rxfh,
.set_rxfh = idpf_set_rxfh,
.set_channels = idpf_set_channels,
.get_ringparam = idpf_get_ringparam,
.set_ringparam = idpf_set_ringparam,
.get_link_ksettings = idpf_get_link_ksettings,
};
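/* Illustrative only (not part of the driver): the coalesce callbacks wired up
 * above are reached through standard ethtool commands, for example
 *
 *   ethtool -C <iface> rx-usecs 64 adaptive-rx on
 *       -> idpf_set_coalesce()
 *   ethtool --per-queue <iface> queue_mask 0x1 --coalesce rx-usecs 64
 *       -> idpf_set_per_q_coalesce()
 *
 * The interface name and values are placeholders.
 */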
/**
* idpf_set_ethtool_ops - Initialize ethtool ops struct
* @netdev: network interface device structure
*
* Sets ethtool ops struct in our netdev so that ethtool can call
* our functions.
*/
void idpf_set_ethtool_ops(struct net_device *netdev)
{
netdev->ethtool_ops = &idpf_ethtool_ops;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_LAN_PF_REGS_H_
#define _IDPF_LAN_PF_REGS_H_
/* Receive queues */
#define PF_QRX_BASE 0x00000000
#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000)))
#define PF_QRX_BUFFQ_BASE 0x03000000
#define PF_QRX_BUFFQ_TAIL(_QRX) (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
/* Transmit queues */
#define PF_QTX_BASE 0x05000000
#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000))
/* Control(PF Mailbox) Queue */
#define PF_FW_BASE 0x08400000
#define PF_FW_ARQBAL (PF_FW_BASE)
#define PF_FW_ARQBAH (PF_FW_BASE + 0x4)
#define PF_FW_ARQLEN (PF_FW_BASE + 0x8)
#define PF_FW_ARQLEN_ARQLEN_S 0
#define PF_FW_ARQLEN_ARQLEN_M GENMASK(12, 0)
#define PF_FW_ARQLEN_ARQVFE_S 28
#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S)
#define PF_FW_ARQLEN_ARQOVFL_S 29
#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S)
#define PF_FW_ARQLEN_ARQCRIT_S 30
#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S)
#define PF_FW_ARQLEN_ARQENABLE_S 31
#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S)
#define PF_FW_ARQH (PF_FW_BASE + 0xC)
#define PF_FW_ARQH_ARQH_S 0
#define PF_FW_ARQH_ARQH_M GENMASK(12, 0)
#define PF_FW_ARQT (PF_FW_BASE + 0x10)
#define PF_FW_ATQBAL (PF_FW_BASE + 0x14)
#define PF_FW_ATQBAH (PF_FW_BASE + 0x18)
#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C)
#define PF_FW_ATQLEN_ATQLEN_S 0
#define PF_FW_ATQLEN_ATQLEN_M GENMASK(9, 0)
#define PF_FW_ATQLEN_ATQVFE_S 28
#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S)
#define PF_FW_ATQLEN_ATQOVFL_S 29
#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S)
#define PF_FW_ATQLEN_ATQCRIT_S 30
#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S)
#define PF_FW_ATQLEN_ATQENABLE_S 31
#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S)
#define PF_FW_ATQH (PF_FW_BASE + 0x20)
#define PF_FW_ATQH_ATQH_S 0
#define PF_FW_ATQH_ATQH_M GENMASK(9, 0)
#define PF_FW_ATQT (PF_FW_BASE + 0x24)
/* Interrupts */
#define PF_GLINT_BASE 0x08900000
#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000))
#define PF_GLINT_DYN_CTL_INTENA_S 0
#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S)
#define PF_GLINT_DYN_CTL_CLEARPBA_S 1
#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2
#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
#define PF_GLINT_DYN_CTL_ITR_INDX_S 3
#define PF_GLINT_DYN_CTL_ITR_INDX_M GENMASK(4, 3)
#define PF_GLINT_DYN_CTL_INTERVAL_S 5
#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25
#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
#define PF_GLINT_DYN_CTL_WB_ON_ITR_S 30
#define PF_GLINT_DYN_CTL_WB_ON_ITR_M BIT(PF_GLINT_DYN_CTL_WB_ON_ITR_S)
#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31
#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
/* _ITR is ITR index, _INT is interrupt index, _itrn_indx_spacing is
* spacing b/w itrn registers of the same vector.
*/
#define PF_GLINT_ITR_ADDR(_ITR, _reg_start, _itrn_indx_spacing) \
((_reg_start) + ((_ITR) * (_itrn_indx_spacing)))
/* For PF, itrn_indx_spacing is 4 and itrn_reg_spacing is 0x1000 */
#define PF_GLINT_ITR(_ITR, _INT) \
(PF_GLINT_BASE + (((_ITR) + 1) * 4) + ((_INT) * 0x1000))
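/* Worked example of the macro arithmetic above (values are illustrative):
 * PF_GLINT_ITR(0, 2) = 0x08900000 + ((0 + 1) * 4) + (2 * 0x1000) = 0x08902004,
 * i.e. ITR0 of interrupt vector 2.
 */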
#define PF_GLINT_ITR_MAX_INDEX 2
#define PF_GLINT_ITR_INTERVAL_S 0
#define PF_GLINT_ITR_INTERVAL_M GENMASK(11, 0)
/* Generic registers */
#define PF_INT_DIR_OICR_ENA 0x08406000
#define PF_INT_DIR_OICR_ENA_S 0
#define PF_INT_DIR_OICR_ENA_M GENMASK(31, 0)
#define PF_INT_DIR_OICR 0x08406004
#define PF_INT_DIR_OICR_TSYN_EVNT 0
#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1)
#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2)
#define PF_INT_DIR_OICR_CAUSE 0x08406008
#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0
#define PF_INT_DIR_OICR_CAUSE_CAUSE_M GENMASK(31, 0)
#define PF_INT_PBA_CLEAR 0x0840600C
#define PF_FUNC_RID 0x08406010
#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
#define PF_FUNC_RID_FUNCTION_NUMBER_M GENMASK(2, 0)
#define PF_FUNC_RID_DEVICE_NUMBER_S 3
#define PF_FUNC_RID_DEVICE_NUMBER_M GENMASK(7, 3)
#define PF_FUNC_RID_BUS_NUMBER_S 8
#define PF_FUNC_RID_BUS_NUMBER_M GENMASK(15, 8)
/* Reset registers */
#define PFGEN_RTRIG 0x08407000
#define PFGEN_RTRIG_CORER_S 0
#define PFGEN_RTRIG_CORER_M BIT(0)
#define PFGEN_RTRIG_LINKR_S 1
#define PFGEN_RTRIG_LINKR_M BIT(1)
#define PFGEN_RTRIG_IMCR_S 2
#define PFGEN_RTRIG_IMCR_M BIT(2)
#define PFGEN_RSTAT 0x08407008 /* PFR Status */
#define PFGEN_RSTAT_PFR_STATE_S 0
#define PFGEN_RSTAT_PFR_STATE_M GENMASK(1, 0)
#define PFGEN_CTRL 0x0840700C
#define PFGEN_CTRL_PFSWR BIT(0)
#endif
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_LAN_TXRX_H_
#define _IDPF_LAN_TXRX_H_
enum idpf_rss_hash {
IDPF_HASH_INVALID = 0,
/* Values 1 - 28 are reserved for future use */
IDPF_HASH_NONF_UNICAST_IPV4_UDP = 29,
IDPF_HASH_NONF_MULTICAST_IPV4_UDP,
IDPF_HASH_NONF_IPV4_UDP,
IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
IDPF_HASH_NONF_IPV4_TCP,
IDPF_HASH_NONF_IPV4_SCTP,
IDPF_HASH_NONF_IPV4_OTHER,
IDPF_HASH_FRAG_IPV4,
/* Values 37-38 are reserved */
IDPF_HASH_NONF_UNICAST_IPV6_UDP = 39,
IDPF_HASH_NONF_MULTICAST_IPV6_UDP,
IDPF_HASH_NONF_IPV6_UDP,
IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
IDPF_HASH_NONF_IPV6_TCP,
IDPF_HASH_NONF_IPV6_SCTP,
IDPF_HASH_NONF_IPV6_OTHER,
IDPF_HASH_FRAG_IPV6,
IDPF_HASH_NONF_RSVD47,
IDPF_HASH_NONF_FCOE_OX,
IDPF_HASH_NONF_FCOE_RX,
IDPF_HASH_NONF_FCOE_OTHER,
/* Values 51-62 are reserved */
IDPF_HASH_L2_PAYLOAD = 63,
IDPF_HASH_MAX
};
/* Supported RSS offloads */
#define IDPF_DEFAULT_RSS_HASH \
(BIT_ULL(IDPF_HASH_NONF_IPV4_UDP) | \
BIT_ULL(IDPF_HASH_NONF_IPV4_SCTP) | \
BIT_ULL(IDPF_HASH_NONF_IPV4_TCP) | \
BIT_ULL(IDPF_HASH_NONF_IPV4_OTHER) | \
BIT_ULL(IDPF_HASH_FRAG_IPV4) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_UDP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_TCP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_SCTP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_OTHER) | \
BIT_ULL(IDPF_HASH_FRAG_IPV6) | \
BIT_ULL(IDPF_HASH_L2_PAYLOAD))
#define IDPF_DEFAULT_RSS_HASH_EXPANDED (IDPF_DEFAULT_RSS_HASH | \
BIT_ULL(IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
BIT_ULL(IDPF_HASH_NONF_UNICAST_IPV4_UDP) | \
BIT_ULL(IDPF_HASH_NONF_MULTICAST_IPV4_UDP) | \
BIT_ULL(IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
BIT_ULL(IDPF_HASH_NONF_UNICAST_IPV6_UDP) | \
BIT_ULL(IDPF_HASH_NONF_MULTICAST_IPV6_UDP))
/* For idpf_splitq_base_tx_compl_desc */
#define IDPF_TXD_COMPLQ_GEN_S 15
#define IDPF_TXD_COMPLQ_GEN_M BIT_ULL(IDPF_TXD_COMPLQ_GEN_S)
#define IDPF_TXD_COMPLQ_COMPL_TYPE_S 11
#define IDPF_TXD_COMPLQ_COMPL_TYPE_M GENMASK_ULL(13, 11)
#define IDPF_TXD_COMPLQ_QID_S 0
#define IDPF_TXD_COMPLQ_QID_M GENMASK_ULL(9, 0)
/* For base mode TX descriptors */
#define IDPF_TXD_CTX_QW0_TUNN_L4T_CS_S 23
#define IDPF_TXD_CTX_QW0_TUNN_L4T_CS_M BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_L4T_CS_S)
#define IDPF_TXD_CTX_QW0_TUNN_DECTTL_S 19
#define IDPF_TXD_CTX_QW0_TUNN_DECTTL_M \
(0xFULL << IDPF_TXD_CTX_QW0_TUNN_DECTTL_S)
#define IDPF_TXD_CTX_QW0_TUNN_NATLEN_S 12
#define IDPF_TXD_CTX_QW0_TUNN_NATLEN_M \
(0x7FULL << IDPF_TXD_CTX_QW0_TUNN_NATLEN_S)
#define IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_S 11
#define IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_M \
BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_S)
#define IDPF_TXD_CTX_EIP_NOINC_IPID_CONST \
IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_M
#define IDPF_TXD_CTX_QW0_TUNN_NATT_S 9
#define IDPF_TXD_CTX_QW0_TUNN_NATT_M (0x3ULL << IDPF_TXD_CTX_QW0_TUNN_NATT_S)
#define IDPF_TXD_CTX_UDP_TUNNELING BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_NATT_S)
#define IDPF_TXD_CTX_GRE_TUNNELING (0x2ULL << IDPF_TXD_CTX_QW0_TUNN_NATT_S)
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_S 2
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_M \
(0x3FULL << IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_S)
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IP_S 0
#define IDPF_TXD_CTX_QW0_TUNN_EXT_IP_M \
(0x3ULL << IDPF_TXD_CTX_QW0_TUNN_EXT_IP_S)
#define IDPF_TXD_CTX_QW1_MSS_S 50
#define IDPF_TXD_CTX_QW1_MSS_M GENMASK_ULL(63, 50)
#define IDPF_TXD_CTX_QW1_TSO_LEN_S 30
#define IDPF_TXD_CTX_QW1_TSO_LEN_M GENMASK_ULL(47, 30)
#define IDPF_TXD_CTX_QW1_CMD_S 4
#define IDPF_TXD_CTX_QW1_CMD_M GENMASK_ULL(15, 4)
#define IDPF_TXD_CTX_QW1_DTYPE_S 0
#define IDPF_TXD_CTX_QW1_DTYPE_M GENMASK_ULL(3, 0)
#define IDPF_TXD_QW1_L2TAG1_S 48
#define IDPF_TXD_QW1_L2TAG1_M GENMASK_ULL(63, 48)
#define IDPF_TXD_QW1_TX_BUF_SZ_S 34
#define IDPF_TXD_QW1_TX_BUF_SZ_M GENMASK_ULL(47, 34)
#define IDPF_TXD_QW1_OFFSET_S 16
#define IDPF_TXD_QW1_OFFSET_M GENMASK_ULL(33, 16)
#define IDPF_TXD_QW1_CMD_S 4
#define IDPF_TXD_QW1_CMD_M GENMASK_ULL(15, 4)
#define IDPF_TXD_QW1_DTYPE_S 0
#define IDPF_TXD_QW1_DTYPE_M GENMASK_ULL(3, 0)
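/* A minimal, hypothetical sketch (not taken from the driver) of how the base
 * TX descriptor qw1 could be assembled from the fields above; 'dtype',
 * 'td_cmd', 'offsets', 'size' and 'l2tag1' are illustrative variable names:
 *
 *   u64 qw1 = FIELD_PREP(IDPF_TXD_QW1_DTYPE_M, dtype) |
 *             FIELD_PREP(IDPF_TXD_QW1_CMD_M, td_cmd) |
 *             FIELD_PREP(IDPF_TXD_QW1_OFFSET_M, offsets) |
 *             FIELD_PREP(IDPF_TXD_QW1_TX_BUF_SZ_M, size) |
 *             FIELD_PREP(IDPF_TXD_QW1_L2TAG1_M, l2tag1);
 *   desc->qw1 = cpu_to_le64(qw1);
 */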
/* TX Completion Descriptor Completion Types */
#define IDPF_TXD_COMPLT_ITR_FLUSH 0
/* Descriptor completion type 1 is reserved */
#define IDPF_TXD_COMPLT_RS 2
/* Descriptor completion type 3 is reserved */
#define IDPF_TXD_COMPLT_RE 4
#define IDPF_TXD_COMPLT_SW_MARKER 5
enum idpf_tx_desc_dtype_value {
IDPF_TX_DESC_DTYPE_DATA = 0,
IDPF_TX_DESC_DTYPE_CTX = 1,
/* DTYPE 2 is reserved
* DTYPE 3 is free for future use
* DTYPE 4 is reserved
*/
IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX = 5,
/* DTYPE 6 is reserved */
IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7,
/* DTYPE 8, 9 are free for future use
* DTYPE 10 is reserved
* DTYPE 11 is free for future use
*/
IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12,
/* DTYPE 13, 14 are free for future use */
/* DESC_DONE - HW has completed write-back of descriptor */
IDPF_TX_DESC_DTYPE_DESC_DONE = 15,
};
enum idpf_tx_ctx_desc_cmd_bits {
IDPF_TX_CTX_DESC_TSO = 0x01,
IDPF_TX_CTX_DESC_TSYN = 0x02,
IDPF_TX_CTX_DESC_IL2TAG2 = 0x04,
IDPF_TX_CTX_DESC_RSVD = 0x08,
IDPF_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
IDPF_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
IDPF_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
IDPF_TX_CTX_DESC_SWTCH_VSI = 0x30,
IDPF_TX_CTX_DESC_FILT_AU_EN = 0x40,
IDPF_TX_CTX_DESC_FILT_AU_EVICT = 0x80,
IDPF_TX_CTX_DESC_RSVD1 = 0xF00
};
enum idpf_tx_desc_len_fields {
/* Note: These are predefined bit offsets */
IDPF_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */
IDPF_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */
IDPF_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */
};
enum idpf_tx_base_desc_cmd_bits {
IDPF_TX_DESC_CMD_EOP = BIT(0),
IDPF_TX_DESC_CMD_RS = BIT(1),
/* only on VFs else RSVD */
IDPF_TX_DESC_CMD_ICRC = BIT(2),
IDPF_TX_DESC_CMD_IL2TAG1 = BIT(3),
IDPF_TX_DESC_CMD_RSVD1 = BIT(4),
IDPF_TX_DESC_CMD_IIPT_IPV6 = BIT(5),
IDPF_TX_DESC_CMD_IIPT_IPV4 = BIT(6),
IDPF_TX_DESC_CMD_IIPT_IPV4_CSUM = GENMASK(6, 5),
IDPF_TX_DESC_CMD_RSVD2 = BIT(7),
IDPF_TX_DESC_CMD_L4T_EOFT_TCP = BIT(8),
IDPF_TX_DESC_CMD_L4T_EOFT_SCTP = BIT(9),
IDPF_TX_DESC_CMD_L4T_EOFT_UDP = GENMASK(9, 8),
IDPF_TX_DESC_CMD_RSVD3 = BIT(10),
IDPF_TX_DESC_CMD_RSVD4 = BIT(11),
};
/* Transmit descriptors */
/* splitq tx buf, singleq tx buf and singleq compl desc */
struct idpf_base_tx_desc {
__le64 buf_addr; /* Address of descriptor's data buf */
__le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
}; /* read used with buffer queues */
struct idpf_splitq_tx_compl_desc {
/* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
__le16 qid_comptype_gen;
union {
__le16 q_head; /* Queue head */
__le16 compl_tag; /* Completion tag */
} q_head_compl_tag;
u8 ts[3];
u8 rsvd; /* Reserved */
}; /* writeback used with completion queues */
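/* Illustrative field extraction (not driver code), using the
 * IDPF_TXD_COMPLQ_* masks defined earlier; 'desc' points to a
 * struct idpf_splitq_tx_compl_desc:
 *
 *   u16 qcg   = le16_to_cpu(desc->qid_comptype_gen);
 *   u16 ctype = (qcg & IDPF_TXD_COMPLQ_COMPL_TYPE_M) >>
 *               IDPF_TXD_COMPLQ_COMPL_TYPE_S;
 *   bool gen  = qcg & IDPF_TXD_COMPLQ_GEN_M;
 */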
/* Context descriptors */
struct idpf_base_tx_ctx_desc {
struct {
__le32 tunneling_params;
__le16 l2tag2;
__le16 rsvd1;
} qw0;
__le64 qw1; /* type_cmd_tlen_mss/rt_hint */
};
/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
enum idpf_tx_flex_desc_cmd_bits {
IDPF_TX_FLEX_DESC_CMD_EOP = BIT(0),
IDPF_TX_FLEX_DESC_CMD_RS = BIT(1),
IDPF_TX_FLEX_DESC_CMD_RE = BIT(2),
IDPF_TX_FLEX_DESC_CMD_IL2TAG1 = BIT(3),
IDPF_TX_FLEX_DESC_CMD_DUMMY = BIT(4),
IDPF_TX_FLEX_DESC_CMD_CS_EN = BIT(5),
IDPF_TX_FLEX_DESC_CMD_FILT_AU_EN = BIT(6),
IDPF_TX_FLEX_DESC_CMD_FILT_AU_EVICT = BIT(7),
};
struct idpf_flex_tx_desc {
__le64 buf_addr; /* Packet buffer address */
struct {
#define IDPF_FLEX_TXD_QW1_DTYPE_S 0
#define IDPF_FLEX_TXD_QW1_DTYPE_M GENMASK(4, 0)
#define IDPF_FLEX_TXD_QW1_CMD_S 5
#define IDPF_FLEX_TXD_QW1_CMD_M GENMASK(15, 5)
__le16 cmd_dtype;
/* DTYPE=IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
struct {
__le16 l2tag1;
__le16 l2tag2;
} l2tags;
__le16 buf_size;
} qw1;
};
struct idpf_flex_tx_sched_desc {
__le64 buf_addr; /* Packet buffer address */
/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */
struct {
u8 cmd_dtype;
#define IDPF_TXD_FLEX_FLOW_DTYPE_M GENMASK(4, 0)
#define IDPF_TXD_FLEX_FLOW_CMD_EOP BIT(5)
#define IDPF_TXD_FLEX_FLOW_CMD_CS_EN BIT(6)
#define IDPF_TXD_FLEX_FLOW_CMD_RE BIT(7)
/* [23:23] Horizon Overflow bit, [22:0] timestamp */
u8 ts[3];
#define IDPF_TXD_FLOW_SCH_HORIZON_OVERFLOW_M BIT(7)
__le16 compl_tag;
__le16 rxr_bufsize;
#define IDPF_TXD_FLEX_FLOW_RXR BIT(14)
#define IDPF_TXD_FLEX_FLOW_BUFSIZE_M GENMASK(13, 0)
} qw1;
};
/* Common cmd fields for all flex context descriptors
* Note: these defines already account for the 5 bit dtype in the cmd_dtype
* field
*/
enum idpf_tx_flex_ctx_desc_cmd_bits {
IDPF_TX_FLEX_CTX_DESC_CMD_TSO = BIT(5),
IDPF_TX_FLEX_CTX_DESC_CMD_TSYN_EN = BIT(6),
IDPF_TX_FLEX_CTX_DESC_CMD_L2TAG2 = BIT(7),
IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = BIT(9),
IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = BIT(10),
IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = GENMASK(10, 9),
};
/* Standard flex descriptor TSO context quad word */
struct idpf_flex_tx_tso_ctx_qw {
__le32 flex_tlen;
#define IDPF_TXD_FLEX_CTX_TLEN_M GENMASK(17, 0)
#define IDPF_TXD_FLEX_TSO_CTX_FLEX_S 24
__le16 mss_rt;
#define IDPF_TXD_FLEX_CTX_MSS_RT_M GENMASK(13, 0)
u8 hdr_len;
u8 flex;
};
struct idpf_flex_tx_ctx_desc {
/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
struct {
struct idpf_flex_tx_tso_ctx_qw qw0;
struct {
__le16 cmd_dtype;
u8 flex[6];
} qw1;
} tso;
};
#endif /* _IDPF_LAN_TXRX_H_ */
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_LAN_VF_REGS_H_
#define _IDPF_LAN_VF_REGS_H_
/* Reset */
#define VFGEN_RSTAT 0x00008800
#define VFGEN_RSTAT_VFR_STATE_S 0
#define VFGEN_RSTAT_VFR_STATE_M GENMASK(1, 0)
/* Control(VF Mailbox) Queue */
#define VF_BASE 0x00006000
#define VF_ATQBAL (VF_BASE + 0x1C00)
#define VF_ATQBAH (VF_BASE + 0x1800)
#define VF_ATQLEN (VF_BASE + 0x0800)
#define VF_ATQLEN_ATQLEN_S 0
#define VF_ATQLEN_ATQLEN_M GENMASK(9, 0)
#define VF_ATQLEN_ATQVFE_S 28
#define VF_ATQLEN_ATQVFE_M BIT(VF_ATQLEN_ATQVFE_S)
#define VF_ATQLEN_ATQOVFL_S 29
#define VF_ATQLEN_ATQOVFL_M BIT(VF_ATQLEN_ATQOVFL_S)
#define VF_ATQLEN_ATQCRIT_S 30
#define VF_ATQLEN_ATQCRIT_M BIT(VF_ATQLEN_ATQCRIT_S)
#define VF_ATQLEN_ATQENABLE_S 31
#define VF_ATQLEN_ATQENABLE_M BIT(VF_ATQLEN_ATQENABLE_S)
#define VF_ATQH (VF_BASE + 0x0400)
#define VF_ATQH_ATQH_S 0
#define VF_ATQH_ATQH_M GENMASK(9, 0)
#define VF_ATQT (VF_BASE + 0x2400)
#define VF_ARQBAL (VF_BASE + 0x0C00)
#define VF_ARQBAH (VF_BASE)
#define VF_ARQLEN (VF_BASE + 0x2000)
#define VF_ARQLEN_ARQLEN_S 0
#define VF_ARQLEN_ARQLEN_M GENMASK(9, 0)
#define VF_ARQLEN_ARQVFE_S 28
#define VF_ARQLEN_ARQVFE_M BIT(VF_ARQLEN_ARQVFE_S)
#define VF_ARQLEN_ARQOVFL_S 29
#define VF_ARQLEN_ARQOVFL_M BIT(VF_ARQLEN_ARQOVFL_S)
#define VF_ARQLEN_ARQCRIT_S 30
#define VF_ARQLEN_ARQCRIT_M BIT(VF_ARQLEN_ARQCRIT_S)
#define VF_ARQLEN_ARQENABLE_S 31
#define VF_ARQLEN_ARQENABLE_M BIT(VF_ARQLEN_ARQENABLE_S)
#define VF_ARQH (VF_BASE + 0x1400)
#define VF_ARQH_ARQH_S 0
#define VF_ARQH_ARQH_M GENMASK(12, 0)
#define VF_ARQT (VF_BASE + 0x1000)
/* Transmit queues */
#define VF_QTX_TAIL_BASE 0x00000000
#define VF_QTX_TAIL(_QTX) (VF_QTX_TAIL_BASE + (_QTX) * 0x4)
#define VF_QTX_TAIL_EXT_BASE 0x00040000
#define VF_QTX_TAIL_EXT(_QTX) (VF_QTX_TAIL_EXT_BASE + ((_QTX) * 4))
/* Receive queues */
#define VF_QRX_TAIL_BASE 0x00002000
#define VF_QRX_TAIL(_QRX) (VF_QRX_TAIL_BASE + ((_QRX) * 4))
#define VF_QRX_TAIL_EXT_BASE 0x00050000
#define VF_QRX_TAIL_EXT(_QRX) (VF_QRX_TAIL_EXT_BASE + ((_QRX) * 4))
#define VF_QRXB_TAIL_BASE 0x00060000
#define VF_QRXB_TAIL(_QRX) (VF_QRXB_TAIL_BASE + ((_QRX) * 4))
/* Interrupts */
#define VF_INT_DYN_CTL0 0x00005C00
#define VF_INT_DYN_CTL0_INTENA_S 0
#define VF_INT_DYN_CTL0_INTENA_M BIT(VF_INT_DYN_CTL0_INTENA_S)
#define VF_INT_DYN_CTL0_ITR_INDX_S 3
#define VF_INT_DYN_CTL0_ITR_INDX_M GENMASK(4, 3)
#define VF_INT_DYN_CTLN(_INT) (0x00003800 + ((_INT) * 4))
#define VF_INT_DYN_CTLN_EXT(_INT) (0x00070000 + ((_INT) * 4))
#define VF_INT_DYN_CTLN_INTENA_S 0
#define VF_INT_DYN_CTLN_INTENA_M BIT(VF_INT_DYN_CTLN_INTENA_S)
#define VF_INT_DYN_CTLN_CLEARPBA_S 1
#define VF_INT_DYN_CTLN_CLEARPBA_M BIT(VF_INT_DYN_CTLN_CLEARPBA_S)
#define VF_INT_DYN_CTLN_SWINT_TRIG_S 2
#define VF_INT_DYN_CTLN_SWINT_TRIG_M BIT(VF_INT_DYN_CTLN_SWINT_TRIG_S)
#define VF_INT_DYN_CTLN_ITR_INDX_S 3
#define VF_INT_DYN_CTLN_ITR_INDX_M GENMASK(4, 3)
#define VF_INT_DYN_CTLN_INTERVAL_S 5
#define VF_INT_DYN_CTLN_INTERVAL_M BIT(VF_INT_DYN_CTLN_INTERVAL_S)
#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S 24
#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S)
#define VF_INT_DYN_CTLN_SW_ITR_INDX_S 25
#define VF_INT_DYN_CTLN_SW_ITR_INDX_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_S)
#define VF_INT_DYN_CTLN_WB_ON_ITR_S 30
#define VF_INT_DYN_CTLN_WB_ON_ITR_M BIT(VF_INT_DYN_CTLN_WB_ON_ITR_S)
#define VF_INT_DYN_CTLN_INTENA_MSK_S 31
#define VF_INT_DYN_CTLN_INTENA_MSK_M BIT(VF_INT_DYN_CTLN_INTENA_MSK_S)
/* _ITR is ITR index, _INT is interrupt index, _itrn_indx_spacing is spacing
* b/w itrn registers of the same vector
*/
#define VF_INT_ITR0(_ITR) (0x00004C00 + ((_ITR) * 4))
#define VF_INT_ITRN_ADDR(_ITR, _reg_start, _itrn_indx_spacing) \
((_reg_start) + ((_ITR) * (_itrn_indx_spacing)))
/* For VF with 16 vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
* is 0x40 and base register offset is 0x00002800
*/
#define VF_INT_ITRN(_INT, _ITR) \
(0x00002800 + ((_INT) * 4) + ((_ITR) * 0x40))
/* For VF with 64 vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
* is 0x100 and base register offset is 0x00002C00
*/
#define VF_INT_ITRN_64(_INT, _ITR) \
(0x00002C00 + ((_INT) * 4) + ((_ITR) * 0x100))
/* For VF with 2k vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
* is 0x2000 and base register offset is 0x00072000
*/
#define VF_INT_ITRN_2K(_INT, _ITR) \
(0x00072000 + ((_INT) * 4) + ((_ITR) * 0x2000))
#define VF_INT_ITRN_MAX_INDEX 2
#define VF_INT_ITRN_INTERVAL_S 0
#define VF_INT_ITRN_INTERVAL_M GENMASK(11, 0)
#define VF_INT_PBA_CLEAR 0x00008900
#define VF_INT_ICR0_ENA1 0x00005000
#define VF_INT_ICR0_ENA1_ADMINQ_S 30
#define VF_INT_ICR0_ENA1_ADMINQ_M BIT(VF_INT_ICR0_ENA1_ADMINQ_S)
#define VF_INT_ICR0_ENA1_RSVD_S 31
#define VF_INT_ICR01 0x00004800
#define VF_QF_HENA(_i) (0x0000C400 + ((_i) * 4))
#define VF_QF_HENA_MAX_INDX 1
#define VF_QF_HKEY(_i) (0x0000CC00 + ((_i) * 4))
#define VF_QF_HKEY_MAX_INDX 12
#define VF_QF_HLUT(_i) (0x0000D000 + ((_i) * 4))
#define VF_QF_HLUT_MAX_INDX 15
#endif
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
static const struct net_device_ops idpf_netdev_ops_splitq;
static const struct net_device_ops idpf_netdev_ops_singleq;
const char * const idpf_vport_vc_state_str[] = {
IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_STRING)
};
/**
* idpf_init_vector_stack - Fill the MSIX vector stack with vector indexes
* @adapter: private data struct
*
* Return 0 on success, error on failure
*/
static int idpf_init_vector_stack(struct idpf_adapter *adapter)
{
struct idpf_vector_lifo *stack;
u16 min_vec;
u32 i;
mutex_lock(&adapter->vector_lock);
min_vec = adapter->num_msix_entries - adapter->num_avail_msix;
stack = &adapter->vector_stack;
stack->size = adapter->num_msix_entries;
/* set the base and top to point at the start of the 'free pool' so the
 * unused vectors can be distributed on an on-demand basis
 */
stack->base = min_vec;
stack->top = min_vec;
stack->vec_idx = kcalloc(stack->size, sizeof(u16), GFP_KERNEL);
if (!stack->vec_idx) {
mutex_unlock(&adapter->vector_lock);
return -ENOMEM;
}
for (i = 0; i < stack->size; i++)
stack->vec_idx[i] = i;
mutex_unlock(&adapter->vector_lock);
return 0;
}
/**
* idpf_deinit_vector_stack - free the MSIX vector stack
* @adapter: private data struct
*/
static void idpf_deinit_vector_stack(struct idpf_adapter *adapter)
{
struct idpf_vector_lifo *stack;
mutex_lock(&adapter->vector_lock);
stack = &adapter->vector_stack;
kfree(stack->vec_idx);
stack->vec_idx = NULL;
mutex_unlock(&adapter->vector_lock);
}
/**
* idpf_mb_intr_rel_irq - Free the IRQ association with the OS
* @adapter: adapter structure
*
* This will also disable interrupt mode and queue up the mailbox task. The
* mailbox task will reschedule itself if not in interrupt mode.
*/
static void idpf_mb_intr_rel_irq(struct idpf_adapter *adapter)
{
clear_bit(IDPF_MB_INTR_MODE, adapter->flags);
free_irq(adapter->msix_entries[0].vector, adapter);
queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, 0);
}
/**
* idpf_intr_rel - Release interrupt capabilities and free memory
* @adapter: adapter to disable interrupts on
*/
void idpf_intr_rel(struct idpf_adapter *adapter)
{
int err;
if (!adapter->msix_entries)
return;
idpf_mb_intr_rel_irq(adapter);
pci_free_irq_vectors(adapter->pdev);
err = idpf_send_dealloc_vectors_msg(adapter);
if (err)
dev_err(&adapter->pdev->dev,
"Failed to deallocate vectors: %d\n", err);
idpf_deinit_vector_stack(adapter);
kfree(adapter->msix_entries);
adapter->msix_entries = NULL;
}
/**
* idpf_mb_intr_clean - Interrupt handler for the mailbox
* @irq: interrupt number
* @data: pointer to the adapter structure
*/
static irqreturn_t idpf_mb_intr_clean(int __always_unused irq, void *data)
{
struct idpf_adapter *adapter = (struct idpf_adapter *)data;
queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, 0);
return IRQ_HANDLED;
}
/**
* idpf_mb_irq_enable - Enable MSIX interrupt for the mailbox
* @adapter: adapter to get the hardware address for register write
*/
static void idpf_mb_irq_enable(struct idpf_adapter *adapter)
{
struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
u32 val;
val = intr->dyn_ctl_intena_m | intr->dyn_ctl_itridx_m;
writel(val, intr->dyn_ctl);
writel(intr->icr_ena_ctlq_m, intr->icr_ena);
}
/**
* idpf_mb_intr_req_irq - Request irq for the mailbox interrupt
* @adapter: adapter structure to pass to the mailbox irq handler
*/
static int idpf_mb_intr_req_irq(struct idpf_adapter *adapter)
{
struct idpf_q_vector *mb_vector = &adapter->mb_vector;
int irq_num, mb_vidx = 0, err;
irq_num = adapter->msix_entries[mb_vidx].vector;
mb_vector->name = kasprintf(GFP_KERNEL, "%s-%s-%d",
dev_driver_string(&adapter->pdev->dev),
"Mailbox", mb_vidx);
err = request_irq(irq_num, adapter->irq_mb_handler, 0,
mb_vector->name, adapter);
if (err) {
dev_err(&adapter->pdev->dev,
"IRQ request for mailbox failed, error: %d\n", err);
return err;
}
set_bit(IDPF_MB_INTR_MODE, adapter->flags);
return 0;
}
/**
* idpf_set_mb_vec_id - Set vector index for mailbox
* @adapter: adapter structure to access the vector chunks
*
* The first vector id in the requested vector chunks from the CP is for
* the mailbox
*/
static void idpf_set_mb_vec_id(struct idpf_adapter *adapter)
{
if (adapter->req_vec_chunks)
adapter->mb_vector.v_idx =
le16_to_cpu(adapter->caps.mailbox_vector_id);
else
adapter->mb_vector.v_idx = 0;
}
/**
* idpf_mb_intr_init - Initialize the mailbox interrupt
* @adapter: adapter structure to store the mailbox vector
*/
static int idpf_mb_intr_init(struct idpf_adapter *adapter)
{
adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter);
adapter->irq_mb_handler = idpf_mb_intr_clean;
return idpf_mb_intr_req_irq(adapter);
}
/**
* idpf_vector_lifo_push - push MSIX vector index onto stack
* @adapter: private data struct
* @vec_idx: vector index to store
*/
static int idpf_vector_lifo_push(struct idpf_adapter *adapter, u16 vec_idx)
{
struct idpf_vector_lifo *stack = &adapter->vector_stack;
lockdep_assert_held(&adapter->vector_lock);
if (stack->top == stack->base) {
dev_err(&adapter->pdev->dev, "Exceeded the vector stack limit: %d\n",
stack->top);
return -EINVAL;
}
stack->vec_idx[--stack->top] = vec_idx;
return 0;
}
/**
* idpf_vector_lifo_pop - pop MSIX vector index from stack
* @adapter: private data struct
*/
static int idpf_vector_lifo_pop(struct idpf_adapter *adapter)
{
struct idpf_vector_lifo *stack = &adapter->vector_stack;
lockdep_assert_held(&adapter->vector_lock);
if (stack->top == stack->size) {
dev_err(&adapter->pdev->dev, "No interrupt vectors are available to distribute!\n");
return -EINVAL;
}
return stack->vec_idx[stack->top++];
}
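/* Stack layout note (derived from the init/push/pop logic above): entries
 * vec_idx[0 .. base - 1] hold the vectors reserved for the mailbox and the
 * default vports' minimum vectors, while vec_idx[top .. size - 1] is the
 * free pool. Popping consumes from 'top' towards 'size'; pushing returns
 * entries back towards 'base'. 'top == size' means the pool is empty and
 * 'top == base' means it is full.
 */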
/**
* idpf_vector_stash - Store the vector indexes onto the stack
* @adapter: private data struct
* @q_vector_idxs: vector index array
* @vec_info: info related to the number of vectors
*
* This function is a no-op if there are no vector indexes to be stashed
*/
static void idpf_vector_stash(struct idpf_adapter *adapter, u16 *q_vector_idxs,
struct idpf_vector_info *vec_info)
{
int i, base = 0;
u16 vec_idx;
lockdep_assert_held(&adapter->vector_lock);
if (!vec_info->num_curr_vecs)
return;
/* For default vports, there is no need to stash vectors allocated from
 * the default pool onto the stack
 */
if (vec_info->default_vport)
base = IDPF_MIN_Q_VEC;
for (i = vec_info->num_curr_vecs - 1; i >= base ; i--) {
vec_idx = q_vector_idxs[i];
idpf_vector_lifo_push(adapter, vec_idx);
adapter->num_avail_msix++;
}
}
/**
* idpf_req_rel_vector_indexes - Request or release MSIX vector indexes
* @adapter: driver specific private structure
* @q_vector_idxs: vector index array
* @vec_info: info related to the number of vectors
*
* This is the core function to distribute the MSIX vectors acquired from the
* OS. It expects the caller to pass both the number of vectors required and
* the number previously allocated. First, it stashes the previously allocated
* vector indexes onto the stack and then figures out if it can satisfy the
* request. It may sleep while acquiring the mutex lock. If the caller passes
* 0 as the number of requested vectors, this function just stashes the
* already allocated vectors and returns 0.
*
* Returns the actual number of vectors allocated on success, error value on
* failure. If 0 is returned, the stack has no vectors to allocate, which is
* also a failure case for the caller.
*/
int idpf_req_rel_vector_indexes(struct idpf_adapter *adapter,
u16 *q_vector_idxs,
struct idpf_vector_info *vec_info)
{
u16 num_req_vecs, num_alloc_vecs = 0, max_vecs;
struct idpf_vector_lifo *stack;
int i, j, vecid;
mutex_lock(&adapter->vector_lock);
stack = &adapter->vector_stack;
num_req_vecs = vec_info->num_req_vecs;
/* Stash interrupt vector indexes onto the stack if required */
idpf_vector_stash(adapter, q_vector_idxs, vec_info);
if (!num_req_vecs)
goto rel_lock;
if (vec_info->default_vport) {
/* As IDPF_MIN_Q_VEC vectors per default vport are set aside in the
 * default pool of the stack, use them for default vports
 */
j = vec_info->index * IDPF_MIN_Q_VEC + IDPF_MBX_Q_VEC;
for (i = 0; i < IDPF_MIN_Q_VEC; i++) {
q_vector_idxs[num_alloc_vecs++] = stack->vec_idx[j++];
num_req_vecs--;
}
}
/* Check if the stack has enough vectors to allocate */
max_vecs = min(adapter->num_avail_msix, num_req_vecs);
for (j = 0; j < max_vecs; j++) {
vecid = idpf_vector_lifo_pop(adapter);
q_vector_idxs[num_alloc_vecs++] = vecid;
}
adapter->num_avail_msix -= max_vecs;
rel_lock:
mutex_unlock(&adapter->vector_lock);
return num_alloc_vecs;
}
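/* Hypothetical usage sketch (not driver code) of the request/release helper
 * above; all field values are illustrative and '.index' is assumed to be the
 * default vport index:
 *
 *   struct idpf_vector_info vec_info = {
 *           .num_req_vecs  = new_num_vecs,
 *           .num_curr_vecs = vport->num_q_vectors,
 *           .default_vport = vport->default_vport,
 *           .index         = vport->idx,
 *   };
 *   num_alloc = idpf_req_rel_vector_indexes(adapter, vport->q_vector_idxs,
 *                                           &vec_info);
 *   if (num_alloc <= 0)
 *           // caller treats this as a failure
 */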
/**
* idpf_intr_req - Request interrupt capabilities
* @adapter: adapter to enable interrupts on
*
* Returns 0 on success, negative on failure
*/
int idpf_intr_req(struct idpf_adapter *adapter)
{
u16 default_vports = idpf_get_default_vports(adapter);
int num_q_vecs, total_vecs, num_vec_ids;
int min_vectors, v_actual, err;
unsigned int vector;
u16 *vecids;
total_vecs = idpf_get_reserved_vecs(adapter);
num_q_vecs = total_vecs - IDPF_MBX_Q_VEC;
err = idpf_send_alloc_vectors_msg(adapter, num_q_vecs);
if (err) {
dev_err(&adapter->pdev->dev,
"Failed to allocate %d vectors: %d\n", num_q_vecs, err);
return -EAGAIN;
}
min_vectors = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports;
v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors,
total_vecs, PCI_IRQ_MSIX);
if (v_actual < min_vectors) {
dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n",
v_actual);
err = -EAGAIN;
goto send_dealloc_vecs;
}
adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry),
GFP_KERNEL);
if (!adapter->msix_entries) {
err = -ENOMEM;
goto free_irq;
}
idpf_set_mb_vec_id(adapter);
vecids = kcalloc(total_vecs, sizeof(u16), GFP_KERNEL);
if (!vecids) {
err = -ENOMEM;
goto free_msix;
}
if (adapter->req_vec_chunks) {
struct virtchnl2_vector_chunks *vchunks;
struct virtchnl2_alloc_vectors *ac;
ac = adapter->req_vec_chunks;
vchunks = &ac->vchunks;
num_vec_ids = idpf_get_vec_ids(adapter, vecids, total_vecs,
vchunks);
if (num_vec_ids < v_actual) {
err = -EINVAL;
goto free_vecids;
}
} else {
int i;
for (i = 0; i < v_actual; i++)
vecids[i] = i;
}
for (vector = 0; vector < v_actual; vector++) {
adapter->msix_entries[vector].entry = vecids[vector];
adapter->msix_entries[vector].vector =
pci_irq_vector(adapter->pdev, vector);
}
adapter->num_req_msix = total_vecs;
adapter->num_msix_entries = v_actual;
/* 'num_avail_msix' is used to distribute excess vectors to the vports
* after considering the minimum vectors required for each default
* vport
*/
adapter->num_avail_msix = v_actual - min_vectors;
/* Fill MSIX vector lifo stack with vector indexes */
err = idpf_init_vector_stack(adapter);
if (err)
goto free_vecids;
err = idpf_mb_intr_init(adapter);
if (err)
goto deinit_vec_stack;
idpf_mb_irq_enable(adapter);
kfree(vecids);
return 0;
deinit_vec_stack:
idpf_deinit_vector_stack(adapter);
free_vecids:
kfree(vecids);
free_msix:
kfree(adapter->msix_entries);
adapter->msix_entries = NULL;
free_irq:
pci_free_irq_vectors(adapter->pdev);
send_dealloc_vecs:
idpf_send_dealloc_vectors_msg(adapter);
return err;
}
/**
* idpf_find_mac_filter - Search filter list for specific mac filter
* @vconfig: Vport config structure
* @macaddr: The MAC address
*
* Returns ptr to the filter object or NULL. Must be called while holding the
* mac_filter_list_lock.
**/
static struct idpf_mac_filter *idpf_find_mac_filter(struct idpf_vport_config *vconfig,
const u8 *macaddr)
{
struct idpf_mac_filter *f;
if (!macaddr)
return NULL;
list_for_each_entry(f, &vconfig->user_config.mac_filter_list, list) {
if (ether_addr_equal(macaddr, f->macaddr))
return f;
}
return NULL;
}
/**
* __idpf_del_mac_filter - Delete a MAC filter from the filter list
* @vport_config: Vport config structure
* @macaddr: The MAC address
*
* Returns 0 on success, error value on failure
**/
static int __idpf_del_mac_filter(struct idpf_vport_config *vport_config,
const u8 *macaddr)
{
struct idpf_mac_filter *f;
spin_lock_bh(&vport_config->mac_filter_list_lock);
f = idpf_find_mac_filter(vport_config, macaddr);
if (f) {
list_del(&f->list);
kfree(f);
}
spin_unlock_bh(&vport_config->mac_filter_list_lock);
return 0;
}
/**
* idpf_del_mac_filter - Delete a MAC filter from the filter list
* @vport: Main vport structure
* @np: Netdev private structure
* @macaddr: The MAC address
* @async: Don't wait for return message
*
* Removes the filter from the list and, if the interface is up, tells
* hardware about the removed filter.
**/
static int idpf_del_mac_filter(struct idpf_vport *vport,
struct idpf_netdev_priv *np,
const u8 *macaddr, bool async)
{
struct idpf_vport_config *vport_config;
struct idpf_mac_filter *f;
vport_config = np->adapter->vport_config[np->vport_idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
f = idpf_find_mac_filter(vport_config, macaddr);
if (f) {
f->remove = true;
} else {
spin_unlock_bh(&vport_config->mac_filter_list_lock);
return -EINVAL;
}
spin_unlock_bh(&vport_config->mac_filter_list_lock);
if (np->state == __IDPF_VPORT_UP) {
int err;
err = idpf_add_del_mac_filters(vport, np, false, async);
if (err)
return err;
}
return __idpf_del_mac_filter(vport_config, macaddr);
}
/**
* __idpf_add_mac_filter - Add mac filter helper function
* @vport_config: Vport config structure
* @macaddr: Address to add
*
* Takes the mac_filter_list_lock spinlock to add a new filter to the list.
*/
static int __idpf_add_mac_filter(struct idpf_vport_config *vport_config,
const u8 *macaddr)
{
struct idpf_mac_filter *f;
spin_lock_bh(&vport_config->mac_filter_list_lock);
f = idpf_find_mac_filter(vport_config, macaddr);
if (f) {
f->remove = false;
spin_unlock_bh(&vport_config->mac_filter_list_lock);
return 0;
}
f = kzalloc(sizeof(*f), GFP_ATOMIC);
if (!f) {
spin_unlock_bh(&vport_config->mac_filter_list_lock);
return -ENOMEM;
}
ether_addr_copy(f->macaddr, macaddr);
list_add_tail(&f->list, &vport_config->user_config.mac_filter_list);
f->add = true;
spin_unlock_bh(&vport_config->mac_filter_list_lock);
return 0;
}
/**
* idpf_add_mac_filter - Add a mac filter to the filter list
* @vport: Main vport structure
* @np: Netdev private structure
* @macaddr: The MAC address
* @async: Don't wait for return message
*
* Returns 0 on success or error on failure. If interface is up, we'll also
* send the virtchnl message to tell hardware about the filter.
**/
static int idpf_add_mac_filter(struct idpf_vport *vport,
struct idpf_netdev_priv *np,
const u8 *macaddr, bool async)
{
struct idpf_vport_config *vport_config;
int err;
vport_config = np->adapter->vport_config[np->vport_idx];
err = __idpf_add_mac_filter(vport_config, macaddr);
if (err)
return err;
if (np->state == __IDPF_VPORT_UP)
err = idpf_add_del_mac_filters(vport, np, true, async);
return err;
}
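/* Note on the filter flow (derived from the helpers above): filters are only
 * staged on the per-vport-config list with their 'add'/'remove' flags set
 * under mac_filter_list_lock; idpf_add_del_mac_filters() then pushes the
 * flagged entries to the device over the mailbox, optionally without waiting
 * for the response when 'async' is set.
 */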
/**
* idpf_del_all_mac_filters - Delete all MAC filters in list
* @vport: main vport struct
*
* Takes the mac_filter_list_lock spinlock. Deletes all filters.
*/
static void idpf_del_all_mac_filters(struct idpf_vport *vport)
{
struct idpf_vport_config *vport_config;
struct idpf_mac_filter *f, *ftmp;
vport_config = vport->adapter->vport_config[vport->idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
list_for_each_entry_safe(f, ftmp, &vport_config->user_config.mac_filter_list,
list) {
list_del(&f->list);
kfree(f);
}
spin_unlock_bh(&vport_config->mac_filter_list_lock);
}
/**
* idpf_restore_mac_filters - Re-add all MAC filters in list
* @vport: main vport struct
*
* Takes the mac_filter_list_lock spinlock. Sets the add field to true on all
* filters to resync them back to HW.
*/
static void idpf_restore_mac_filters(struct idpf_vport *vport)
{
struct idpf_vport_config *vport_config;
struct idpf_mac_filter *f;
vport_config = vport->adapter->vport_config[vport->idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
list_for_each_entry(f, &vport_config->user_config.mac_filter_list, list)
f->add = true;
spin_unlock_bh(&vport_config->mac_filter_list_lock);
idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
true, false);
}
/**
* idpf_remove_mac_filters - Remove all MAC filters in list
* @vport: main vport struct
*
* Takes the mac_filter_list_lock spinlock. Sets the remove field to true on
* all filters so they are removed from HW.
*/
static void idpf_remove_mac_filters(struct idpf_vport *vport)
{
struct idpf_vport_config *vport_config;
struct idpf_mac_filter *f;
vport_config = vport->adapter->vport_config[vport->idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
list_for_each_entry(f, &vport_config->user_config.mac_filter_list, list)
f->remove = true;
spin_unlock_bh(&vport_config->mac_filter_list_lock);
idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
false, false);
}
/**
* idpf_deinit_mac_addr - deinitialize mac address for vport
* @vport: main vport structure
*/
static void idpf_deinit_mac_addr(struct idpf_vport *vport)
{
struct idpf_vport_config *vport_config;
struct idpf_mac_filter *f;
vport_config = vport->adapter->vport_config[vport->idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
f = idpf_find_mac_filter(vport_config, vport->default_mac_addr);
if (f) {
list_del(&f->list);
kfree(f);
}
spin_unlock_bh(&vport_config->mac_filter_list_lock);
}
/**
* idpf_init_mac_addr - initialize mac address for vport
* @vport: main vport structure
* @netdev: pointer to netdev struct associated with this vport
*/
static int idpf_init_mac_addr(struct idpf_vport *vport,
struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_adapter *adapter = vport->adapter;
int err;
if (is_valid_ether_addr(vport->default_mac_addr)) {
eth_hw_addr_set(netdev, vport->default_mac_addr);
ether_addr_copy(netdev->perm_addr, vport->default_mac_addr);
return idpf_add_mac_filter(vport, np, vport->default_mac_addr,
false);
}
if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS,
VIRTCHNL2_CAP_MACFILTER)) {
dev_err(&adapter->pdev->dev,
"MAC address is not provided and capability is not set\n");
return -EINVAL;
}
eth_hw_addr_random(netdev);
err = idpf_add_mac_filter(vport, np, netdev->dev_addr, false);
if (err)
return err;
dev_info(&adapter->pdev->dev, "Invalid MAC address %pM, using random %pM\n",
vport->default_mac_addr, netdev->dev_addr);
ether_addr_copy(vport->default_mac_addr, netdev->dev_addr);
return 0;
}
/**
* idpf_cfg_netdev - Allocate, configure and register a netdev
* @vport: main vport structure
*
* Returns 0 on success, negative value on failure.
*/
static int idpf_cfg_netdev(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
netdev_features_t dflt_features;
netdev_features_t offloads = 0;
struct idpf_netdev_priv *np;
struct net_device *netdev;
u16 idx = vport->idx;
int err;
vport_config = adapter->vport_config[idx];
/* It's possible we already have a netdev allocated and registered for
* this vport
*/
if (test_bit(IDPF_VPORT_REG_NETDEV, vport_config->flags)) {
netdev = adapter->netdevs[idx];
np = netdev_priv(netdev);
np->vport = vport;
np->vport_idx = vport->idx;
np->vport_id = vport->vport_id;
vport->netdev = netdev;
return idpf_init_mac_addr(vport, netdev);
}
netdev = alloc_etherdev_mqs(sizeof(struct idpf_netdev_priv),
vport_config->max_q.max_txq,
vport_config->max_q.max_rxq);
if (!netdev)
return -ENOMEM;
vport->netdev = netdev;
np = netdev_priv(netdev);
np->vport = vport;
np->adapter = adapter;
np->vport_idx = vport->idx;
np->vport_id = vport->vport_id;
spin_lock_init(&np->stats_lock);
err = idpf_init_mac_addr(vport, netdev);
if (err) {
free_netdev(vport->netdev);
vport->netdev = NULL;
return err;
}
/* assign netdev_ops */
if (idpf_is_queue_model_split(vport->txq_model))
netdev->netdev_ops = &idpf_netdev_ops_splitq;
else
netdev->netdev_ops = &idpf_netdev_ops_singleq;
/* set up the watchdog timeout value to be 5 seconds */
netdev->watchdog_timeo = 5 * HZ;
/* configure default MTU size */
netdev->min_mtu = ETH_MIN_MTU;
netdev->max_mtu = vport->max_mtu;
dflt_features = NETIF_F_SG |
NETIF_F_HIGHDMA;
if (idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
dflt_features |= NETIF_F_RXHASH;
if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V4))
dflt_features |= NETIF_F_IP_CSUM;
if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V6))
dflt_features |= NETIF_F_IPV6_CSUM;
if (idpf_is_cap_ena(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM))
dflt_features |= NETIF_F_RXCSUM;
if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_SCTP_CSUM))
dflt_features |= NETIF_F_SCTP_CRC;
if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV4_TCP))
dflt_features |= NETIF_F_TSO;
if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV6_TCP))
dflt_features |= NETIF_F_TSO6;
if (idpf_is_cap_ena_all(adapter, IDPF_SEG_CAPS,
VIRTCHNL2_CAP_SEG_IPV4_UDP |
VIRTCHNL2_CAP_SEG_IPV6_UDP))
dflt_features |= NETIF_F_GSO_UDP_L4;
if (idpf_is_cap_ena_all(adapter, IDPF_RSC_CAPS, IDPF_CAP_RSC))
offloads |= NETIF_F_GRO_HW;
/* advertise to the stack only if offloads for encapsulated packets are
 * supported
 */
if (idpf_is_cap_ena(vport->adapter, IDPF_SEG_CAPS,
VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL)) {
offloads |= NETIF_F_GSO_UDP_TUNNEL |
NETIF_F_GSO_GRE |
NETIF_F_GSO_GRE_CSUM |
NETIF_F_GSO_PARTIAL |
NETIF_F_GSO_UDP_TUNNEL_CSUM |
NETIF_F_GSO_IPXIP4 |
NETIF_F_GSO_IPXIP6 |
0;
if (!idpf_is_cap_ena_all(vport->adapter, IDPF_CSUM_CAPS,
IDPF_CAP_TUNNEL_TX_CSUM))
netdev->gso_partial_features |=
NETIF_F_GSO_UDP_TUNNEL_CSUM;
netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
offloads |= NETIF_F_TSO_MANGLEID;
}
if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LOOPBACK))
offloads |= NETIF_F_LOOPBACK;
netdev->features |= dflt_features;
netdev->hw_features |= dflt_features | offloads;
netdev->hw_enc_features |= dflt_features | offloads;
idpf_set_ethtool_ops(netdev);
SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
/* carrier off on init to avoid Tx hangs */
netif_carrier_off(netdev);
/* make sure transmit queues start off as stopped */
netif_tx_stop_all_queues(netdev);
/* The vport can be arbitrarily released so we need to also track
* netdevs in the adapter struct
*/
adapter->netdevs[idx] = netdev;
return 0;
}
/**
* idpf_get_free_slot - get the next free (NULL) slot index in the vports array
* @adapter: adapter in which to look for a free vport slot
*/
static int idpf_get_free_slot(struct idpf_adapter *adapter)
{
unsigned int i;
for (i = 0; i < adapter->max_vports; i++) {
if (!adapter->vports[i])
return i;
}
return IDPF_NO_FREE_SLOT;
}
/**
* idpf_remove_features - Turn off feature configs
* @vport: virtual port structure
*/
static void idpf_remove_features(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_MACFILTER))
idpf_remove_mac_filters(vport);
}
/**
* idpf_vport_stop - Disable a vport
* @vport: vport to disable
*/
static void idpf_vport_stop(struct idpf_vport *vport)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
if (np->state <= __IDPF_VPORT_DOWN)
return;
netif_carrier_off(vport->netdev);
netif_tx_disable(vport->netdev);
idpf_send_disable_vport_msg(vport);
idpf_send_disable_queues_msg(vport);
idpf_send_map_unmap_queue_vector_msg(vport, false);
/* Normally we ask for queues in create_vport, but if the number of
 * initially requested queues has changed, for example via ethtool
 * set channels, we delete the queues and then add them back
 * instead of deleting and reallocating the vport.
 */
if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags))
idpf_send_delete_queues_msg(vport);
idpf_remove_features(vport);
vport->link_up = false;
idpf_vport_intr_deinit(vport);
idpf_vport_intr_rel(vport);
idpf_vport_queues_rel(vport);
np->state = __IDPF_VPORT_DOWN;
}
/**
* idpf_stop - Disables a network interface
* @netdev: network interface device structure
*
* The stop entry point is called when an interface is de-activated by the OS,
* and the netdevice enters the DOWN state. The hardware is still under the
* driver's control, but the netdev interface is disabled.
*
* Returns success only - not allowed to fail
*/
static int idpf_stop(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport *vport;
if (test_bit(IDPF_REMOVE_IN_PROG, np->adapter->flags))
return 0;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
idpf_vport_stop(vport);
idpf_vport_ctrl_unlock(netdev);
return 0;
}
/**
* idpf_decfg_netdev - Unregister the netdev
* @vport: vport for which netdev to be unregistered
*/
static void idpf_decfg_netdev(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
unregister_netdev(vport->netdev);
free_netdev(vport->netdev);
vport->netdev = NULL;
adapter->netdevs[vport->idx] = NULL;
}
/**
* idpf_vport_rel - Delete a vport and free its resources
* @vport: the vport being removed
*/
static void idpf_vport_rel(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
struct idpf_vector_info vec_info;
struct idpf_rss_data *rss_data;
struct idpf_vport_max_q max_q;
u16 idx = vport->idx;
int i;
vport_config = adapter->vport_config[vport->idx];
idpf_deinit_rss(vport);
rss_data = &vport_config->user_config.rss_data;
kfree(rss_data->rss_key);
rss_data->rss_key = NULL;
idpf_send_destroy_vport_msg(vport);
/* Set all bits as we don't know which vc_state the vport vchnl_wq
 * is waiting on, and wake up the virtchnl workqueue even if it is
 * waiting for the response as we are going down
 */
for (i = 0; i < IDPF_VC_NBITS; i++)
set_bit(i, vport->vc_state);
wake_up(&vport->vchnl_wq);
mutex_destroy(&vport->vc_buf_lock);
/* Clear all the bits */
for (i = 0; i < IDPF_VC_NBITS; i++)
clear_bit(i, vport->vc_state);
/* Release all max queues allocated to the adapter's pool */
max_q.max_rxq = vport_config->max_q.max_rxq;
max_q.max_txq = vport_config->max_q.max_txq;
max_q.max_bufq = vport_config->max_q.max_bufq;
max_q.max_complq = vport_config->max_q.max_complq;
idpf_vport_dealloc_max_qs(adapter, &max_q);
/* Release all the allocated vectors on the stack */
vec_info.num_req_vecs = 0;
vec_info.num_curr_vecs = vport->num_q_vectors;
vec_info.default_vport = vport->default_vport;
idpf_req_rel_vector_indexes(adapter, vport->q_vector_idxs, &vec_info);
kfree(vport->q_vector_idxs);
vport->q_vector_idxs = NULL;
kfree(adapter->vport_params_recvd[idx]);
adapter->vport_params_recvd[idx] = NULL;
kfree(adapter->vport_params_reqd[idx]);
adapter->vport_params_reqd[idx] = NULL;
if (adapter->vport_config[idx]) {
kfree(adapter->vport_config[idx]->req_qs_chunks);
adapter->vport_config[idx]->req_qs_chunks = NULL;
}
kfree(vport);
adapter->num_alloc_vports--;
}
/**
* idpf_vport_dealloc - cleanup and release a given vport
* @vport: pointer to idpf vport structure
*
* returns nothing
*/
static void idpf_vport_dealloc(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
unsigned int i = vport->idx;
idpf_deinit_mac_addr(vport);
idpf_vport_stop(vport);
if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
idpf_decfg_netdev(vport);
if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
idpf_del_all_mac_filters(vport);
if (adapter->netdevs[i]) {
struct idpf_netdev_priv *np = netdev_priv(adapter->netdevs[i]);
np->vport = NULL;
}
idpf_vport_rel(vport);
adapter->vports[i] = NULL;
adapter->next_vport = idpf_get_free_slot(adapter);
}
/**
* idpf_vport_alloc - Allocates the next available struct vport in the adapter
* @adapter: board private structure
* @max_q: vport max queue info
*
* returns a pointer to a vport on success, NULL on failure.
*/
static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q)
{
struct idpf_rss_data *rss_data;
u16 idx = adapter->next_vport;
struct idpf_vport *vport;
u16 num_max_q;
if (idx == IDPF_NO_FREE_SLOT)
return NULL;
vport = kzalloc(sizeof(*vport), GFP_KERNEL);
if (!vport)
return vport;
if (!adapter->vport_config[idx]) {
struct idpf_vport_config *vport_config;
vport_config = kzalloc(sizeof(*vport_config), GFP_KERNEL);
if (!vport_config) {
kfree(vport);
return NULL;
}
adapter->vport_config[idx] = vport_config;
}
vport->idx = idx;
vport->adapter = adapter;
vport->compln_clean_budget = IDPF_TX_COMPLQ_CLEAN_BUDGET;
vport->default_vport = adapter->num_alloc_vports <
idpf_get_default_vports(adapter);
num_max_q = max(max_q->max_txq, max_q->max_rxq);
vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
if (!vport->q_vector_idxs) {
kfree(vport);
return NULL;
}
idpf_vport_init(vport, max_q);
/* This alloc is done separately from the LUT because it's not strictly
 * dependent on how many queues we have. If we change the number of
 * queues and soft reset, we'll need a new LUT but the key can remain
 * the same for as long as the vport exists.
 */
rss_data = &adapter->vport_config[idx]->user_config.rss_data;
rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL);
if (!rss_data->rss_key) {
kfree(vport);
return NULL;
}
/* Initialize default rss key */
netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size);
/* fill vport slot in the adapter struct */
adapter->vports[idx] = vport;
adapter->vport_ids[idx] = idpf_get_vport_id(vport);
adapter->num_alloc_vports++;
/* prepare adapter->next_vport for next use */
adapter->next_vport = idpf_get_free_slot(adapter);
return vport;
}
/**
* idpf_get_stats64 - get statistics for network device structure
* @netdev: network interface device structure
* @stats: main device statistics structure
*/
static void idpf_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
spin_lock_bh(&np->stats_lock);
*stats = np->netstats;
spin_unlock_bh(&np->stats_lock);
}
/**
* idpf_statistics_task - Delayed task to get statistics over mailbox
* @work: work_struct handle to our data
*/
void idpf_statistics_task(struct work_struct *work)
{
struct idpf_adapter *adapter;
int i;
adapter = container_of(work, struct idpf_adapter, stats_task.work);
for (i = 0; i < adapter->max_vports; i++) {
struct idpf_vport *vport = adapter->vports[i];
if (vport && !test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
idpf_send_get_stats_msg(vport);
}
queue_delayed_work(adapter->stats_wq, &adapter->stats_task,
msecs_to_jiffies(10000));
}
/**
* idpf_mbx_task - Delayed task to handle mailbox responses
* @work: work_struct handle
*/
void idpf_mbx_task(struct work_struct *work)
{
struct idpf_adapter *adapter;
adapter = container_of(work, struct idpf_adapter, mbx_task.work);
if (test_bit(IDPF_MB_INTR_MODE, adapter->flags))
idpf_mb_irq_enable(adapter);
else
queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task,
msecs_to_jiffies(300));
idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_UNKNOWN, NULL, 0);
}
/**
* idpf_service_task - Delayed task to detect hardware resets
* @work: work_struct handle to our data
*
*/
void idpf_service_task(struct work_struct *work)
{
struct idpf_adapter *adapter;
adapter = container_of(work, struct idpf_adapter, serv_task.work);
if (idpf_is_reset_detected(adapter) &&
!idpf_is_reset_in_prog(adapter) &&
!test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) {
dev_info(&adapter->pdev->dev, "HW reset detected\n");
set_bit(IDPF_HR_FUNC_RESET, adapter->flags);
queue_delayed_work(adapter->vc_event_wq,
&adapter->vc_event_task,
msecs_to_jiffies(10));
}
queue_delayed_work(adapter->serv_wq, &adapter->serv_task,
msecs_to_jiffies(300));
}
/**
* idpf_restore_features - Restore feature configs
* @vport: virtual port structure
*/
static void idpf_restore_features(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_MACFILTER))
idpf_restore_mac_filters(vport);
}
/**
* idpf_set_real_num_queues - set number of queues for netdev
* @vport: virtual port structure
*
* Returns 0 on success, negative on failure.
*/
static int idpf_set_real_num_queues(struct idpf_vport *vport)
{
int err;
err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq);
if (err)
return err;
return netif_set_real_num_tx_queues(vport->netdev, vport->num_txq);
}
/**
* idpf_up_complete - Complete interface up sequence
* @vport: virtual port structure
*
* Returns 0 on success, negative on failure.
*/
static int idpf_up_complete(struct idpf_vport *vport)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
if (vport->link_up && !netif_carrier_ok(vport->netdev)) {
netif_carrier_on(vport->netdev);
netif_tx_start_all_queues(vport->netdev);
}
np->state = __IDPF_VPORT_UP;
return 0;
}
/**
* idpf_rx_init_buf_tail - Write initial buffer ring tail value
* @vport: virtual port struct
*/
static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
{
int i, j;
for (i = 0; i < vport->num_rxq_grp; i++) {
struct idpf_rxq_group *grp = &vport->rxq_grps[i];
if (idpf_is_queue_model_split(vport->rxq_model)) {
for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
struct idpf_queue *q =
&grp->splitq.bufq_sets[j].bufq;
writel(q->next_to_alloc, q->tail);
}
} else {
for (j = 0; j < grp->singleq.num_rxq; j++) {
struct idpf_queue *q =
grp->singleq.rxqs[j];
writel(q->next_to_alloc, q->tail);
}
}
}
}
/**
* idpf_vport_open - Bring up a vport
* @vport: vport to bring up
* @alloc_res: allocate queue resources
*/
static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
int err;
if (np->state != __IDPF_VPORT_DOWN)
return -EBUSY;
/* we do not allow interface up just yet */
netif_carrier_off(vport->netdev);
if (alloc_res) {
err = idpf_vport_queues_alloc(vport);
if (err)
return err;
}
err = idpf_vport_intr_alloc(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
vport->vport_id, err);
goto queues_rel;
}
err = idpf_vport_queue_ids_init(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n",
vport->vport_id, err);
goto intr_rel;
}
err = idpf_vport_intr_init(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n",
vport->vport_id, err);
goto intr_rel;
}
err = idpf_rx_bufs_init_all(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n",
vport->vport_id, err);
goto intr_rel;
}
err = idpf_queue_reg_init(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",
vport->vport_id, err);
goto intr_rel;
}
idpf_rx_init_buf_tail(vport);
err = idpf_send_config_queues_msg(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n",
vport->vport_id, err);
goto intr_deinit;
}
err = idpf_send_map_unmap_queue_vector_msg(vport, true);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n",
vport->vport_id, err);
goto intr_deinit;
}
err = idpf_send_enable_queues_msg(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to enable queues for vport %u: %d\n",
vport->vport_id, err);
goto unmap_queue_vectors;
}
err = idpf_send_enable_vport_msg(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to enable vport %u: %d\n",
vport->vport_id, err);
err = -EAGAIN;
goto disable_queues;
}
idpf_restore_features(vport);
vport_config = adapter->vport_config[vport->idx];
if (vport_config->user_config.rss_data.rss_lut)
err = idpf_config_rss(vport);
else
err = idpf_init_rss(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize RSS for vport %u: %d\n",
vport->vport_id, err);
goto disable_vport;
}
err = idpf_up_complete(vport);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to complete interface up for vport %u: %d\n",
vport->vport_id, err);
goto deinit_rss;
}
return 0;
deinit_rss:
idpf_deinit_rss(vport);
disable_vport:
idpf_send_disable_vport_msg(vport);
disable_queues:
idpf_send_disable_queues_msg(vport);
unmap_queue_vectors:
idpf_send_map_unmap_queue_vector_msg(vport, false);
intr_deinit:
idpf_vport_intr_deinit(vport);
intr_rel:
idpf_vport_intr_rel(vport);
queues_rel:
idpf_vport_queues_rel(vport);
return err;
}
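/* Summary of the bring-up sequence implemented above: (optional) queue
 * resource allocation, interrupt allocation, queue ID init, interrupt init,
 * RX buffer init, queue register init, then the virtchnl config/map/enable
 * messages, feature restore, RSS setup and finally idpf_up_complete(). The
 * error labels unwind these steps in reverse order.
 */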
/**
* idpf_init_task - Delayed initialization task
* @work: work_struct handle to our data
*
* Init task finishes up pending work started in probe. Due to the asynchronous
* nature in which the device communicates with hardware, we may have to wait
* several milliseconds to get a response. Instead of busy polling in probe,
* pulling it out into a delayed work task prevents us from bogging down the
* whole system waiting for a response from hardware.
*/
void idpf_init_task(struct work_struct *work)
{
struct idpf_vport_config *vport_config;
struct idpf_vport_max_q max_q;
struct idpf_adapter *adapter;
struct idpf_netdev_priv *np;
struct idpf_vport *vport;
u16 num_default_vports;
struct pci_dev *pdev;
bool default_vport;
int index, err;
adapter = container_of(work, struct idpf_adapter, init_task.work);
num_default_vports = idpf_get_default_vports(adapter);
if (adapter->num_alloc_vports < num_default_vports)
default_vport = true;
else
default_vport = false;
err = idpf_vport_alloc_max_qs(adapter, &max_q);
if (err)
goto unwind_vports;
err = idpf_send_create_vport_msg(adapter, &max_q);
if (err) {
idpf_vport_dealloc_max_qs(adapter, &max_q);
goto unwind_vports;
}
pdev = adapter->pdev;
vport = idpf_vport_alloc(adapter, &max_q);
if (!vport) {
err = -EFAULT;
dev_err(&pdev->dev, "failed to allocate vport: %d\n",
err);
idpf_vport_dealloc_max_qs(adapter, &max_q);
goto unwind_vports;
}
index = vport->idx;
vport_config = adapter->vport_config[index];
init_waitqueue_head(&vport->sw_marker_wq);
init_waitqueue_head(&vport->vchnl_wq);
mutex_init(&vport->vc_buf_lock);
spin_lock_init(&vport_config->mac_filter_list_lock);
INIT_LIST_HEAD(&vport_config->user_config.mac_filter_list);
err = idpf_check_supported_desc_ids(vport);
if (err) {
dev_err(&pdev->dev, "failed to get required descriptor ids\n");
goto cfg_netdev_err;
}
if (idpf_cfg_netdev(vport))
goto cfg_netdev_err;
err = idpf_send_get_rx_ptype_msg(vport);
if (err)
goto handle_err;
/* Once state is put into DOWN, driver is ready for dev_open */
np = netdev_priv(vport->netdev);
np->state = __IDPF_VPORT_DOWN;
if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))
idpf_vport_open(vport, true);
/* Re-queue the 'idpf_init_task' work (and return) until all the
* default vports are created
*/
if (adapter->num_alloc_vports < num_default_vports) {
queue_delayed_work(adapter->init_wq, &adapter->init_task,
msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07)));
return;
}
for (index = 0; index < adapter->max_vports; index++) {
if (adapter->netdevs[index] &&
!test_bit(IDPF_VPORT_REG_NETDEV,
adapter->vport_config[index]->flags)) {
register_netdev(adapter->netdevs[index]);
set_bit(IDPF_VPORT_REG_NETDEV,
adapter->vport_config[index]->flags);
}
}
/* As all the required vports are created, clear the reset flag
* unconditionally here in case we were in reset and the link was down.
*/
clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
/* Start the statistics task now */
queue_delayed_work(adapter->stats_wq, &adapter->stats_task,
msecs_to_jiffies(10 * (pdev->devfn & 0x07)));
return;
handle_err:
idpf_decfg_netdev(vport);
cfg_netdev_err:
idpf_vport_rel(vport);
adapter->vports[index] = NULL;
unwind_vports:
if (default_vport) {
for (index = 0; index < adapter->max_vports; index++) {
if (adapter->vports[index])
idpf_vport_dealloc(adapter->vports[index]);
}
}
clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
}
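/* Illustrative sketch only (hypothetical helper, not part of the driver):
 * the "devfn & 0x07" factor above uses the PCI function number (the low
 * three bits of devfn) to stagger the delayed work per function, so that
 * multiple functions sharing one Control Plane do not all queue work at
 * the same instant. The same staggering in isolation:
 */
#if 0	/* example only, never compiled */
static unsigned long example_stagger_delay(struct pci_dev *pdev,
                                           unsigned int step_ms)
{
        /* function 0 gets no delay, function 7 gets 7 * step_ms */
        return msecs_to_jiffies(step_ms * (pdev->devfn & 0x07));
}
#endif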
/**
* idpf_sriov_ena - Enable or change number of VFs
* @adapter: private data struct
* @num_vfs: number of VFs to allocate
*/
static int idpf_sriov_ena(struct idpf_adapter *adapter, int num_vfs)
{
struct device *dev = &adapter->pdev->dev;
int err;
err = idpf_send_set_sriov_vfs_msg(adapter, num_vfs);
if (err) {
dev_err(dev, "Failed to allocate VFs: %d\n", err);
return err;
}
err = pci_enable_sriov(adapter->pdev, num_vfs);
if (err) {
idpf_send_set_sriov_vfs_msg(adapter, 0);
dev_err(dev, "Failed to enable SR-IOV: %d\n", err);
return err;
}
adapter->num_vfs = num_vfs;
return num_vfs;
}
/**
* idpf_sriov_configure - Configure the requested VFs
* @pdev: pointer to a pci_dev structure
* @num_vfs: number of vfs to allocate
*
* Enable or change the number of VFs. Called when the user updates the number
* of VFs in sysfs.
**/
int idpf_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
struct idpf_adapter *adapter = pci_get_drvdata(pdev);
if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_SRIOV)) {
dev_info(&pdev->dev, "SR-IOV is not supported on this device\n");
return -EOPNOTSUPP;
}
if (num_vfs)
return idpf_sriov_ena(adapter, num_vfs);
if (pci_vfs_assigned(pdev)) {
dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs\n");
return -EBUSY;
}
pci_disable_sriov(adapter->pdev);
idpf_send_set_sriov_vfs_msg(adapter, 0);
adapter->num_vfs = 0;
return 0;
}
/**
* idpf_deinit_task - Device deinit routine
* @adapter: Driver specific private structure
*
* Extended remove logic which will be used for
* hard reset as well
*/
void idpf_deinit_task(struct idpf_adapter *adapter)
{
unsigned int i;
/* Wait until the init_task is done; otherwise this thread might release
* the resources first and leave the other thread in a bad state
*/
cancel_delayed_work_sync(&adapter->init_task);
if (!adapter->vports)
return;
cancel_delayed_work_sync(&adapter->stats_task);
for (i = 0; i < adapter->max_vports; i++) {
if (adapter->vports[i])
idpf_vport_dealloc(adapter->vports[i]);
}
}
/**
* idpf_check_reset_complete - check that reset is complete
* @hw: pointer to hw struct
* @reset_reg: struct with reset registers
*
* Returns 0 if device is ready to use, or -EBUSY if it's in reset.
**/
static int idpf_check_reset_complete(struct idpf_hw *hw,
struct idpf_reset_reg *reset_reg)
{
struct idpf_adapter *adapter = hw->back;
int i;
for (i = 0; i < 2000; i++) {
u32 reg_val = readl(reset_reg->rstat);
/* 0xFFFFFFFF might be read if other side hasn't cleared the
* register for us yet and 0xFFFFFFFF is not a valid value for
* the register, so treat that as invalid.
*/
if (reg_val != 0xFFFFFFFF && (reg_val & reset_reg->rstat_m))
return 0;
usleep_range(5000, 10000);
}
dev_warn(&adapter->pdev->dev, "Device reset timeout!\n");
/* Clear the reset flag unconditionally here since the reset
* technically isn't in progress anymore from the driver's perspective
*/
clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
return -EBUSY;
}
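/* Illustrative sketch only: the loop above is a bounded "poll until a status
 * bit is set" pattern; with 2000 iterations of usleep_range(5000, 10000) the
 * effective timeout is roughly 10-20 seconds. The same pattern in isolation,
 * assuming a hypothetical read_status() accessor:
 */
#if 0	/* example only, never compiled */
static int example_poll_status_bit(u32 (*read_status)(void), u32 mask,
                                   int tries)
{
        while (tries--) {
                u32 val = read_status();

                /* all-ones reads mean the other side hasn't set it up yet */
                if (val != 0xFFFFFFFF && (val & mask))
                        return 0;

                usleep_range(5000, 10000);
        }

        return -EBUSY;
}
#endif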
/**
* idpf_set_vport_state - Set the vport state to be after the reset
* @adapter: Driver specific private structure
*/
static void idpf_set_vport_state(struct idpf_adapter *adapter)
{
u16 i;
for (i = 0; i < adapter->max_vports; i++) {
struct idpf_netdev_priv *np;
if (!adapter->netdevs[i])
continue;
np = netdev_priv(adapter->netdevs[i]);
if (np->state == __IDPF_VPORT_UP)
set_bit(IDPF_VPORT_UP_REQUESTED,
adapter->vport_config[i]->flags);
}
}
/**
* idpf_init_hard_reset - Initiate a hardware reset
* @adapter: Driver specific private structure
*
* Deallocate the vports and all the resources associated with them and
* reallocate. Also reinitialize the mailbox. Return 0 on success,
* negative on failure.
*/
static int idpf_init_hard_reset(struct idpf_adapter *adapter)
{
struct idpf_reg_ops *reg_ops = &adapter->dev_ops.reg_ops;
struct device *dev = &adapter->pdev->dev;
struct net_device *netdev;
int err;
u16 i;
mutex_lock(&adapter->vport_ctrl_lock);
dev_info(dev, "Device HW Reset initiated\n");
/* Avoid TX hangs on reset */
for (i = 0; i < adapter->max_vports; i++) {
netdev = adapter->netdevs[i];
if (!netdev)
continue;
netif_carrier_off(netdev);
netif_tx_disable(netdev);
}
/* Prepare for reset */
if (test_and_clear_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
reg_ops->trigger_reset(adapter, IDPF_HR_DRV_LOAD);
} else if (test_and_clear_bit(IDPF_HR_FUNC_RESET, adapter->flags)) {
bool is_reset = idpf_is_reset_detected(adapter);
idpf_set_vport_state(adapter);
idpf_vc_core_deinit(adapter);
if (!is_reset)
reg_ops->trigger_reset(adapter, IDPF_HR_FUNC_RESET);
idpf_deinit_dflt_mbx(adapter);
} else {
dev_err(dev, "Unhandled hard reset cause\n");
err = -EBADRQC;
goto unlock_mutex;
}
/* Wait for reset to complete */
err = idpf_check_reset_complete(&adapter->hw, &adapter->reset_reg);
if (err) {
dev_err(dev, "The driver was unable to contact the device's firmware. Check that the FW is running. Driver state= 0x%x\n",
adapter->state);
goto unlock_mutex;
}
/* Reset is complete and so start building the driver resources again */
err = idpf_init_dflt_mbx(adapter);
if (err) {
dev_err(dev, "Failed to initialize default mailbox: %d\n", err);
goto unlock_mutex;
}
/* Initialize the state machine, also allocate memory and request
* resources
*/
err = idpf_vc_core_init(adapter);
if (err) {
idpf_deinit_dflt_mbx(adapter);
goto unlock_mutex;
}
/* Wait till all the vports are initialized to release the reset lock,
* else user space callbacks may access uninitialized vports
*/
while (test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
msleep(100);
unlock_mutex:
mutex_unlock(&adapter->vport_ctrl_lock);
return err;
}
/**
* idpf_vc_event_task - Handle virtchannel event logic
* @work: work queue struct
*/
void idpf_vc_event_task(struct work_struct *work)
{
struct idpf_adapter *adapter;
adapter = container_of(work, struct idpf_adapter, vc_event_task.work);
if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
return;
if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
idpf_init_hard_reset(adapter);
}
}
/**
* idpf_initiate_soft_reset - Initiate a software reset
* @vport: virtual port data struct
* @reset_cause: reason for the soft reset
*
* Soft reset only reallocs vport queue resources. Returns 0 on success,
* negative on failure.
*/
int idpf_initiate_soft_reset(struct idpf_vport *vport,
enum idpf_vport_reset_cause reset_cause)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
enum idpf_vport_state current_state = np->state;
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport *new_vport;
int err, i;
/* If the system is low on memory, we can end up in bad state if we
* free all the memory for queue resources and try to allocate them
* again. Instead, we pre-allocate the new resources before doing
* anything and bail if the alloc fails.
*
* Make a clone of the existing vport to mimic its current
* configuration, then modify the new structure with any requested
* changes. Once the allocation of the new resources is done, stop the
* existing vport and copy the configuration to the main vport. If an
* error occurred, the existing vport will be untouched.
*/
new_vport = kzalloc(sizeof(*vport), GFP_KERNEL);
if (!new_vport)
return -ENOMEM;
/* This purposely avoids copying the end of the struct because it
* contains wait_queues and mutexes and other stuff we don't want to
* mess with. Nothing below should use those variables from new_vport
* and should instead always refer to them in vport if they need to.
*/
memcpy(new_vport, vport, offsetof(struct idpf_vport, vc_state));
/* Adjust resource parameters prior to reallocating resources */
switch (reset_cause) {
case IDPF_SR_Q_CHANGE:
err = idpf_vport_adjust_qs(new_vport);
if (err)
goto free_vport;
break;
case IDPF_SR_Q_DESC_CHANGE:
/* Update queue parameters before allocating resources */
idpf_vport_calc_num_q_desc(new_vport);
break;
case IDPF_SR_MTU_CHANGE:
case IDPF_SR_RSC_CHANGE:
break;
default:
dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n");
err = -EINVAL;
goto free_vport;
}
err = idpf_vport_queues_alloc(new_vport);
if (err)
goto free_vport;
if (current_state <= __IDPF_VPORT_DOWN) {
idpf_send_delete_queues_msg(vport);
} else {
set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags);
idpf_vport_stop(vport);
}
idpf_deinit_rss(vport);
/* We're passing in vport here because we need its wait_queue
* to send a message, and it should be getting all the vport
* config data out of the adapter. Be careful not to add code
* in add_queues that changes the vport config within vport
* itself, as it will be wiped by the memcpy below.
*/
err = idpf_send_add_queues_msg(vport, new_vport->num_txq,
new_vport->num_complq,
new_vport->num_rxq,
new_vport->num_bufq);
if (err)
goto err_reset;
/* Same comment as above regarding avoiding copying the wait_queues and
* mutexes applies here. We do not want to mess with those if possible.
*/
memcpy(vport, new_vport, offsetof(struct idpf_vport, vc_state));
/* Since idpf_vport_queues_alloc was called with new_vport, the queue
* back pointers are currently pointing to the local new_vport. Reset
* the backpointers to the original vport here
*/
for (i = 0; i < vport->num_txq_grp; i++) {
struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
int j;
tx_qgrp->vport = vport;
for (j = 0; j < tx_qgrp->num_txq; j++)
tx_qgrp->txqs[j]->vport = vport;
if (idpf_is_queue_model_split(vport->txq_model))
tx_qgrp->complq->vport = vport;
}
for (i = 0; i < vport->num_rxq_grp; i++) {
struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
struct idpf_queue *q;
u16 num_rxq;
int j;
rx_qgrp->vport = vport;
for (j = 0; j < vport->num_bufqs_per_qgrp; j++)
rx_qgrp->splitq.bufq_sets[j].bufq.vport = vport;
if (idpf_is_queue_model_split(vport->rxq_model))
num_rxq = rx_qgrp->splitq.num_rxq_sets;
else
num_rxq = rx_qgrp->singleq.num_rxq;
for (j = 0; j < num_rxq; j++) {
if (idpf_is_queue_model_split(vport->rxq_model))
q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
else
q = rx_qgrp->singleq.rxqs[j];
q->vport = vport;
}
}
if (reset_cause == IDPF_SR_Q_CHANGE)
idpf_vport_alloc_vec_indexes(vport);
err = idpf_set_real_num_queues(vport);
if (err)
goto err_reset;
if (current_state == __IDPF_VPORT_UP)
err = idpf_vport_open(vport, false);
kfree(new_vport);
return err;
err_reset:
idpf_vport_queues_rel(new_vport);
free_vport:
kfree(new_vport);
return err;
}
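/* Illustrative sketch only: the soft reset above clones the vport with
 * memcpy() up to offsetof(struct idpf_vport, vc_state), i.e. only the
 * plain-data head of the structure, so the wait queues and mutexes at the
 * tail are never duplicated. A minimal model of that partial-copy pattern,
 * using a hypothetical example struct:
 */
#if 0	/* example only, never compiled */
struct example_obj {
        int cfg_a;
        int cfg_b;              /* plain data, safe to memcpy */
        struct mutex lock;      /* must never be copied */
};

static void example_clone_cfg(struct example_obj *dst,
                              const struct example_obj *src)
{
        /* copies cfg_a and cfg_b, leaves dst->lock untouched */
        memcpy(dst, src, offsetof(struct example_obj, lock));
}
#endif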
/**
* idpf_addr_sync - Callback for dev_(mc|uc)_sync to add address
* @netdev: the netdevice
* @addr: address to add
*
* Called by __dev_(mc|uc)_sync when an address needs to be added. We call
* __dev_(uc|mc)_sync from .set_rx_mode. Kernel takes addr_list_lock spinlock
* meaning we cannot sleep in this context. Due to this, we have to add the
* filter and send the virtchnl message asynchronously without waiting for the
* response from the other side. We won't know whether or not the operation
* actually succeeded until we get the message back. Returns 0 on success,
* negative on failure.
*/
static int idpf_addr_sync(struct net_device *netdev, const u8 *addr)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
return idpf_add_mac_filter(np->vport, np, addr, true);
}
/**
* idpf_addr_unsync - Callback for dev_(mc|uc)_sync to remove address
* @netdev: the netdevice
* @addr: address to remove
*
* Called by __dev_(mc|uc)_sync when an address needs to be removed. We call
* __dev_(uc|mc)_sync from .set_rx_mode. Kernel takes addr_list_lock spinlock
* meaning we cannot sleep in this context. Due to this we have to delete the
* filter and send the virtchnl message asynchronously without waiting for the
* response from the other side. We won't know whether or not the operation
* actually succeeded until we get the message back. Returns 0 on success,
* negative on failure.
*/
static int idpf_addr_unsync(struct net_device *netdev, const u8 *addr)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
/* Under some circumstances, we might receive a request to delete
* our own device address from our uc list. Because we store the
* device address in the VSI's MAC filter list, we need to ignore
* such requests and not delete our device address from this list.
*/
if (ether_addr_equal(addr, netdev->dev_addr))
return 0;
idpf_del_mac_filter(np->vport, np, addr, true);
return 0;
}
/**
* idpf_set_rx_mode - NDO callback to set the netdev filters
* @netdev: network interface device structure
*
* Stack takes addr_list_lock spinlock before calling our .set_rx_mode. We
* cannot sleep in this context.
*/
static void idpf_set_rx_mode(struct net_device *netdev)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_user_config_data *config_data;
struct idpf_adapter *adapter;
bool changed = false;
struct device *dev;
int err;
adapter = np->adapter;
dev = &adapter->pdev->dev;
if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_MACFILTER)) {
__dev_uc_sync(netdev, idpf_addr_sync, idpf_addr_unsync);
__dev_mc_sync(netdev, idpf_addr_sync, idpf_addr_unsync);
}
if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_PROMISC))
return;
config_data = &adapter->vport_config[np->vport_idx]->user_config;
/* IFF_PROMISC enables both unicast and multicast promiscuous,
* while IFF_ALLMULTI only enables multicast such that:
*
* promisc + allmulti = unicast | multicast
* promisc + !allmulti = unicast | multicast
* !promisc + allmulti = multicast
*/
if ((netdev->flags & IFF_PROMISC) &&
!test_and_set_bit(__IDPF_PROMISC_UC, config_data->user_flags)) {
changed = true;
dev_info(&adapter->pdev->dev, "Entering promiscuous mode\n");
if (!test_and_set_bit(__IDPF_PROMISC_MC, adapter->flags))
dev_info(dev, "Entering multicast promiscuous mode\n");
}
if (!(netdev->flags & IFF_PROMISC) &&
test_and_clear_bit(__IDPF_PROMISC_UC, config_data->user_flags)) {
changed = true;
dev_info(dev, "Leaving promiscuous mode\n");
}
if (netdev->flags & IFF_ALLMULTI &&
!test_and_set_bit(__IDPF_PROMISC_MC, config_data->user_flags)) {
changed = true;
dev_info(dev, "Entering multicast promiscuous mode\n");
}
if (!(netdev->flags & (IFF_ALLMULTI | IFF_PROMISC)) &&
test_and_clear_bit(__IDPF_PROMISC_MC, config_data->user_flags)) {
changed = true;
dev_info(dev, "Leaving multicast promiscuous mode\n");
}
if (!changed)
return;
err = idpf_set_promiscuous(adapter, config_data, np->vport_id);
if (err)
dev_err(dev, "Failed to set promiscuous mode: %d\n", err);
}
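/* Illustrative sketch only: the promiscuous handling above is edge
 * triggered -- test_and_set_bit()/test_and_clear_bit() return the previous
 * value, so a virtchnl message is sent only when the UC/MC promiscuous
 * state actually changes. A minimal model of the same idea with a plain
 * boolean (hypothetical helper, for illustration only):
 */
#if 0	/* example only, never compiled */
static bool example_promisc_update(bool *cur, bool requested)
{
        bool changed = (*cur != requested);

        *cur = requested;

        return changed; /* reprogram the hardware only when true */
}
#endif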
/**
* idpf_vport_manage_rss_lut - disable/enable RSS
* @vport: the vport being changed
*
* On a request to disable RSS, this function zeroes out the RSS LUT; on a
* request to enable RSS, it restores the cached LUT (the default or the
* user-configured one).
*/
static int idpf_vport_manage_rss_lut(struct idpf_vport *vport)
{
bool ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);
struct idpf_rss_data *rss_data;
u16 idx = vport->idx;
int lut_size;
rss_data = &vport->adapter->vport_config[idx]->user_config.rss_data;
lut_size = rss_data->rss_lut_size * sizeof(u32);
if (ena) {
/* This will contain the default or user configured LUT */
memcpy(rss_data->rss_lut, rss_data->cached_lut, lut_size);
} else {
/* Save a copy of the current LUT to be restored later if
* requested.
*/
memcpy(rss_data->cached_lut, rss_data->rss_lut, lut_size);
/* Zero out the current LUT to disable */
memset(rss_data->rss_lut, 0, lut_size);
}
return idpf_config_rss(vport);
}
/**
* idpf_set_features - set the netdev feature flags
* @netdev: ptr to the netdev being adjusted
* @features: the feature set that the stack is suggesting
*/
static int idpf_set_features(struct net_device *netdev,
netdev_features_t features)
{
netdev_features_t changed = netdev->features ^ features;
struct idpf_adapter *adapter;
struct idpf_vport *vport;
int err = 0;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
adapter = vport->adapter;
if (idpf_is_reset_in_prog(adapter)) {
dev_err(&adapter->pdev->dev, "Device is resetting, changing netdev features temporarily unavailable.\n");
err = -EBUSY;
goto unlock_mutex;
}
if (changed & NETIF_F_RXHASH) {
netdev->features ^= NETIF_F_RXHASH;
err = idpf_vport_manage_rss_lut(vport);
if (err)
goto unlock_mutex;
}
if (changed & NETIF_F_GRO_HW) {
netdev->features ^= NETIF_F_GRO_HW;
err = idpf_initiate_soft_reset(vport, IDPF_SR_RSC_CHANGE);
if (err)
goto unlock_mutex;
}
if (changed & NETIF_F_LOOPBACK) {
netdev->features ^= NETIF_F_LOOPBACK;
err = idpf_send_ena_dis_loopback_msg(vport);
}
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_open - Called when a network interface becomes active
* @netdev: network interface device structure
*
* The open entry point is called when a network interface is made
* active by the system (IFF_UP). At this point all resources needed
* for transmit and receive operations are allocated, the interrupt
* handler is registered with the OS, the netdev watchdog is enabled,
* and the stack is notified that the interface is ready.
*
* Returns 0 on success, negative value on failure
*/
static int idpf_open(struct net_device *netdev)
{
struct idpf_vport *vport;
int err;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
err = idpf_vport_open(vport, true);
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_change_mtu - NDO callback to change the MTU
* @netdev: network interface device structure
* @new_mtu: new value for maximum frame size
*
* Returns 0 on success, negative on failure
*/
static int idpf_change_mtu(struct net_device *netdev, int new_mtu)
{
struct idpf_vport *vport;
int err;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
netdev->mtu = new_mtu;
err = idpf_initiate_soft_reset(vport, IDPF_SR_MTU_CHANGE);
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_features_check - Validate packet conforms to limits
* @skb: skb buffer
* @netdev: This port's netdev
* @features: Offload features that the stack believes apply
*/
static netdev_features_t idpf_features_check(struct sk_buff *skb,
struct net_device *netdev,
netdev_features_t features)
{
struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
struct idpf_adapter *adapter = vport->adapter;
size_t len;
/* No point in doing any of this if neither checksum nor GSO are
* being requested for this frame. We can rule out both by just
* checking for CHECKSUM_PARTIAL
*/
if (skb->ip_summed != CHECKSUM_PARTIAL)
return features;
/* We cannot support GSO if the MSS is going to be less than
* 88 bytes. If it is then we need to drop support for GSO.
*/
if (skb_is_gso(skb) &&
(skb_shinfo(skb)->gso_size < IDPF_TX_TSO_MIN_MSS))
features &= ~NETIF_F_GSO_MASK;
/* Ensure MACLEN is <= 126 bytes (63 words) and not an odd size */
len = skb_network_offset(skb);
if (unlikely(len & ~(126)))
goto unsupported;
len = skb_network_header_len(skb);
if (unlikely(len > idpf_get_max_tx_hdr_size(adapter)))
goto unsupported;
if (!skb->encapsulation)
return features;
/* L4TUNLEN can support 127 words */
len = skb_inner_network_header(skb) - skb_transport_header(skb);
if (unlikely(len & ~(127 * 2)))
goto unsupported;
/* IPLEN can support at most 127 dwords */
len = skb_inner_network_header_len(skb);
if (unlikely(len > idpf_get_max_tx_hdr_size(adapter)))
goto unsupported;
/* No need to validate L4LEN as TCP is the only protocol with a
* flexible value and we support all possible values supported
* by TCP, which is at most 15 dwords
*/
return features;
unsupported:
return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
}
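/* Illustrative sketch only: the "len & ~(126)" test above rejects any MAC
 * header length that is odd (bit 0 set) or 128 bytes and larger (bit 7 or
 * above set), since ~126 keeps exactly those bits. A more explicit but
 * equivalent form (hypothetical helper, for illustration only):
 */
#if 0	/* example only, never compiled */
static bool example_maclen_ok(size_t len)
{
        return !(len & 1) && len <= 126; /* even and at most 126 bytes */
}
#endif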
/**
* idpf_set_mac - NDO callback to set port mac address
* @netdev: network interface device structure
* @p: pointer to an address structure
*
* Returns 0 on success, negative on failure
**/
static int idpf_set_mac(struct net_device *netdev, void *p)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_config *vport_config;
struct sockaddr *addr = p;
struct idpf_vport *vport;
int err = 0;
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
if (!idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
VIRTCHNL2_CAP_MACFILTER)) {
dev_info(&vport->adapter->pdev->dev, "Setting MAC address is not supported\n");
err = -EOPNOTSUPP;
goto unlock_mutex;
}
if (!is_valid_ether_addr(addr->sa_data)) {
dev_info(&vport->adapter->pdev->dev, "Invalid MAC address: %pM\n",
addr->sa_data);
err = -EADDRNOTAVAIL;
goto unlock_mutex;
}
if (ether_addr_equal(netdev->dev_addr, addr->sa_data))
goto unlock_mutex;
vport_config = vport->adapter->vport_config[vport->idx];
err = idpf_add_mac_filter(vport, np, addr->sa_data, false);
if (err) {
__idpf_del_mac_filter(vport_config, addr->sa_data);
goto unlock_mutex;
}
if (is_valid_ether_addr(vport->default_mac_addr))
idpf_del_mac_filter(vport, np, vport->default_mac_addr, false);
ether_addr_copy(vport->default_mac_addr, addr->sa_data);
eth_hw_addr_set(netdev, addr->sa_data);
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
return err;
}
/**
* idpf_alloc_dma_mem - Allocate dma memory
* @hw: pointer to hw struct
* @mem: pointer to dma_mem struct
* @size: size of the memory to allocate
*/
void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem, u64 size)
{
struct idpf_adapter *adapter = hw->back;
size_t sz = ALIGN(size, 4096);
mem->va = dma_alloc_coherent(&adapter->pdev->dev, sz,
&mem->pa, GFP_KERNEL);
mem->size = sz;
return mem->va;
}
/**
* idpf_free_dma_mem - Free the allocated dma memory
* @hw: pointer to hw struct
* @mem: pointer to dma_mem struct
*/
void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem)
{
struct idpf_adapter *adapter = hw->back;
dma_free_coherent(&adapter->pdev->dev, mem->size,
mem->va, mem->pa);
mem->size = 0;
mem->va = NULL;
mem->pa = 0;
}
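/* Illustrative usage sketch only (hypothetical caller, not part of the
 * driver): allocations are rounded up to a 4 KiB multiple via ALIGN(), and
 * mem->size records the rounded size so idpf_free_dma_mem() releases
 * exactly what dma_alloc_coherent() handed out.
 */
#if 0	/* example only, never compiled */
static int example_dma_mem_usage(struct idpf_hw *hw)
{
        struct idpf_dma_mem mem = { };

        if (!idpf_alloc_dma_mem(hw, &mem, 100)) /* mem.size becomes 4096 */
                return -ENOMEM;

        /* ... use mem.va (CPU address) and mem.pa (device address) ... */

        idpf_free_dma_mem(hw, &mem);

        return 0;
}
#endif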
static const struct net_device_ops idpf_netdev_ops_splitq = {
.ndo_open = idpf_open,
.ndo_stop = idpf_stop,
.ndo_start_xmit = idpf_tx_splitq_start,
.ndo_features_check = idpf_features_check,
.ndo_set_rx_mode = idpf_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = idpf_set_mac,
.ndo_change_mtu = idpf_change_mtu,
.ndo_get_stats64 = idpf_get_stats64,
.ndo_set_features = idpf_set_features,
.ndo_tx_timeout = idpf_tx_timeout,
};
static const struct net_device_ops idpf_netdev_ops_singleq = {
.ndo_open = idpf_open,
.ndo_stop = idpf_stop,
.ndo_start_xmit = idpf_tx_singleq_start,
.ndo_features_check = idpf_features_check,
.ndo_set_rx_mode = idpf_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = idpf_set_mac,
.ndo_change_mtu = idpf_change_mtu,
.ndo_get_stats64 = idpf_get_stats64,
.ndo_set_features = idpf_set_features,
.ndo_tx_timeout = idpf_tx_timeout,
};
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
#include "idpf_devids.h"
#define DRV_SUMMARY "Intel(R) Infrastructure Data Path Function Linux Driver"
MODULE_DESCRIPTION(DRV_SUMMARY);
MODULE_LICENSE("GPL");
/**
* idpf_remove - Device removal routine
* @pdev: PCI device information struct
*/
static void idpf_remove(struct pci_dev *pdev)
{
struct idpf_adapter *adapter = pci_get_drvdata(pdev);
int i;
set_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
/* Wait until vc_event_task is done in case a hard reset is in
* progress; otherwise we might release the resources while the thread
* doing the hard reset is still walking the init path and end up in a
* bad state.
*/
cancel_delayed_work_sync(&adapter->vc_event_task);
if (adapter->num_vfs)
idpf_sriov_configure(pdev, 0);
idpf_vc_core_deinit(adapter);
/* Be a good citizen and leave the device clean on exit */
adapter->dev_ops.reg_ops.trigger_reset(adapter, IDPF_HR_FUNC_RESET);
idpf_deinit_dflt_mbx(adapter);
if (!adapter->netdevs)
goto destroy_wqs;
/* There are some cases where it's possible to still have netdevs
* registered with the stack at this point, e.g. if the driver detected
* a HW reset and rmmod is called before it fully recovers. Unregister
* any stale netdevs here.
*/
for (i = 0; i < adapter->max_vports; i++) {
if (!adapter->netdevs[i])
continue;
if (adapter->netdevs[i]->reg_state != NETREG_UNINITIALIZED)
unregister_netdev(adapter->netdevs[i]);
free_netdev(adapter->netdevs[i]);
adapter->netdevs[i] = NULL;
}
destroy_wqs:
destroy_workqueue(adapter->init_wq);
destroy_workqueue(adapter->serv_wq);
destroy_workqueue(adapter->mbx_wq);
destroy_workqueue(adapter->stats_wq);
destroy_workqueue(adapter->vc_event_wq);
for (i = 0; i < adapter->max_vports; i++) {
kfree(adapter->vport_config[i]);
adapter->vport_config[i] = NULL;
}
kfree(adapter->vport_config);
adapter->vport_config = NULL;
kfree(adapter->netdevs);
adapter->netdevs = NULL;
mutex_destroy(&adapter->vport_ctrl_lock);
mutex_destroy(&adapter->vector_lock);
mutex_destroy(&adapter->queue_lock);
mutex_destroy(&adapter->vc_buf_lock);
pci_set_drvdata(pdev, NULL);
kfree(adapter);
}
/**
* idpf_shutdown - PCI callback for shutting down device
* @pdev: PCI device information struct
*/
static void idpf_shutdown(struct pci_dev *pdev)
{
idpf_remove(pdev);
if (system_state == SYSTEM_POWER_OFF)
pci_set_power_state(pdev, PCI_D3hot);
}
/**
* idpf_cfg_hw - Initialize HW struct
* @adapter: adapter to setup hw struct for
*
* Returns 0 on success, negative on failure
*/
static int idpf_cfg_hw(struct idpf_adapter *adapter)
{
struct pci_dev *pdev = adapter->pdev;
struct idpf_hw *hw = &adapter->hw;
hw->hw_addr = pcim_iomap_table(pdev)[0];
if (!hw->hw_addr) {
pci_err(pdev, "failed to allocate PCI iomap table\n");
return -ENOMEM;
}
hw->back = adapter;
return 0;
}
/**
* idpf_probe - Device initialization routine
* @pdev: PCI device information struct
* @ent: entry in idpf_pci_tbl
*
* Returns 0 on success, negative on failure
*/
static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct device *dev = &pdev->dev;
struct idpf_adapter *adapter;
int err;
adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
if (!adapter)
return -ENOMEM;
adapter->req_tx_splitq = true;
adapter->req_rx_splitq = true;
switch (ent->device) {
case IDPF_DEV_ID_PF:
idpf_dev_ops_init(adapter);
break;
case IDPF_DEV_ID_VF:
idpf_vf_dev_ops_init(adapter);
adapter->crc_enable = true;
break;
default:
err = -ENODEV;
dev_err(&pdev->dev, "Unexpected dev ID 0x%x in idpf probe\n",
ent->device);
goto err_free;
}
adapter->pdev = pdev;
err = pcim_enable_device(pdev);
if (err)
goto err_free;
err = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
if (err) {
pci_err(pdev, "pcim_iomap_regions failed %pe\n", ERR_PTR(err));
goto err_free;
}
/* set up for high or low dma */
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (err) {
pci_err(pdev, "DMA configuration failed: %pe\n", ERR_PTR(err));
goto err_free;
}
pci_set_master(pdev);
pci_set_drvdata(pdev, adapter);
adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->init_wq) {
dev_err(dev, "Failed to allocate init workqueue\n");
err = -ENOMEM;
goto err_free;
}
adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->serv_wq) {
dev_err(dev, "Failed to allocate service workqueue\n");
err = -ENOMEM;
goto err_serv_wq_alloc;
}
adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->mbx_wq) {
dev_err(dev, "Failed to allocate mailbox workqueue\n");
err = -ENOMEM;
goto err_mbx_wq_alloc;
}
adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->stats_wq) {
dev_err(dev, "Failed to allocate workqueue\n");
err = -ENOMEM;
goto err_stats_wq_alloc;
}
adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->vc_event_wq) {
dev_err(dev, "Failed to allocate virtchnl event workqueue\n");
err = -ENOMEM;
goto err_vc_event_wq_alloc;
}
/* setup msglvl */
adapter->msg_enable = netif_msg_init(-1, IDPF_AVAIL_NETIF_M);
err = idpf_cfg_hw(adapter);
if (err) {
dev_err(dev, "Failed to configure HW structure for adapter: %d\n",
err);
goto err_cfg_hw;
}
mutex_init(&adapter->vport_ctrl_lock);
mutex_init(&adapter->vector_lock);
mutex_init(&adapter->queue_lock);
mutex_init(&adapter->vc_buf_lock);
init_waitqueue_head(&adapter->vchnl_wq);
INIT_DELAYED_WORK(&adapter->init_task, idpf_init_task);
INIT_DELAYED_WORK(&adapter->serv_task, idpf_service_task);
INIT_DELAYED_WORK(&adapter->mbx_task, idpf_mbx_task);
INIT_DELAYED_WORK(&adapter->stats_task, idpf_statistics_task);
INIT_DELAYED_WORK(&adapter->vc_event_task, idpf_vc_event_task);
adapter->dev_ops.reg_ops.reset_reg_init(adapter);
set_bit(IDPF_HR_DRV_LOAD, adapter->flags);
queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
msecs_to_jiffies(10 * (pdev->devfn & 0x07)));
return 0;
err_cfg_hw:
destroy_workqueue(adapter->vc_event_wq);
err_vc_event_wq_alloc:
destroy_workqueue(adapter->stats_wq);
err_stats_wq_alloc:
destroy_workqueue(adapter->mbx_wq);
err_mbx_wq_alloc:
destroy_workqueue(adapter->serv_wq);
err_serv_wq_alloc:
destroy_workqueue(adapter->init_wq);
err_free:
kfree(adapter);
return err;
}
/* idpf_pci_tbl - PCI Dev idpf ID Table
*/
static const struct pci_device_id idpf_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, IDPF_DEV_ID_PF)},
{ PCI_VDEVICE(INTEL, IDPF_DEV_ID_VF)},
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(pci, idpf_pci_tbl);
static struct pci_driver idpf_driver = {
.name = KBUILD_MODNAME,
.id_table = idpf_pci_tbl,
.probe = idpf_probe,
.sriov_configure = idpf_sriov_configure,
.remove = idpf_remove,
.shutdown = idpf_shutdown,
};
module_pci_driver(idpf_driver);
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_MEM_H_
#define _IDPF_MEM_H_
#include <linux/io.h>
struct idpf_dma_mem {
void *va;
dma_addr_t pa;
size_t size;
};
#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
#define rd32(a, reg) readl((a)->hw_addr + (reg))
#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg)))
#define rd64(a, reg) readq((a)->hw_addr + (reg))
#endif /* _IDPF_MEM_H_ */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
/**
* idpf_tx_singleq_csum - Enable tx checksum offloads
* @skb: pointer to skb
* @off: pointer to struct that holds offload parameters
*
* Returns 1 if checksum offload parameters were set, 0 if no hardware offload
* was needed (or a software checksum was used), negative on error.
*/
static int idpf_tx_singleq_csum(struct sk_buff *skb,
struct idpf_tx_offload_params *off)
{
u32 l4_len, l3_len, l2_len;
union {
struct iphdr *v4;
struct ipv6hdr *v6;
unsigned char *hdr;
} ip;
union {
struct tcphdr *tcp;
unsigned char *hdr;
} l4;
u32 offset, cmd = 0;
u8 l4_proto = 0;
__be16 frag_off;
bool is_tso;
if (skb->ip_summed != CHECKSUM_PARTIAL)
return 0;
ip.hdr = skb_network_header(skb);
l4.hdr = skb_transport_header(skb);
/* compute outer L2 header size */
l2_len = ip.hdr - skb->data;
offset = FIELD_PREP(0x3F << IDPF_TX_DESC_LEN_MACLEN_S, l2_len / 2);
is_tso = !!(off->tx_flags & IDPF_TX_FLAGS_TSO);
if (skb->encapsulation) {
u32 tunnel = 0;
/* define outer network header type */
if (off->tx_flags & IDPF_TX_FLAGS_IPV4) {
/* The stack computes the IP header already, the only
* time we need the hardware to recompute it is in the
* case of TSO.
*/
tunnel |= is_tso ?
IDPF_TX_CTX_EXT_IP_IPV4 :
IDPF_TX_CTX_EXT_IP_IPV4_NO_CSUM;
l4_proto = ip.v4->protocol;
} else if (off->tx_flags & IDPF_TX_FLAGS_IPV6) {
tunnel |= IDPF_TX_CTX_EXT_IP_IPV6;
l4_proto = ip.v6->nexthdr;
if (ipv6_ext_hdr(l4_proto))
ipv6_skip_exthdr(skb, skb_network_offset(skb) +
sizeof(*ip.v6),
&l4_proto, &frag_off);
}
/* define outer transport */
switch (l4_proto) {
case IPPROTO_UDP:
tunnel |= IDPF_TXD_CTX_UDP_TUNNELING;
break;
case IPPROTO_GRE:
tunnel |= IDPF_TXD_CTX_GRE_TUNNELING;
break;
case IPPROTO_IPIP:
case IPPROTO_IPV6:
l4.hdr = skb_inner_network_header(skb);
break;
default:
if (is_tso)
return -1;
skb_checksum_help(skb);
return 0;
}
off->tx_flags |= IDPF_TX_FLAGS_TUNNEL;
/* compute outer L3 header size */
tunnel |= FIELD_PREP(IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_M,
(l4.hdr - ip.hdr) / 4);
/* switch IP header pointer from outer to inner header */
ip.hdr = skb_inner_network_header(skb);
/* compute tunnel header size */
tunnel |= FIELD_PREP(IDPF_TXD_CTX_QW0_TUNN_NATLEN_M,
(ip.hdr - l4.hdr) / 2);
/* indicate if we need to offload outer UDP header */
if (is_tso &&
!(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM))
tunnel |= IDPF_TXD_CTX_QW0_TUNN_L4T_CS_M;
/* record tunnel offload values */
off->cd_tunneling |= tunnel;
/* switch L4 header pointer from outer to inner */
l4.hdr = skb_inner_transport_header(skb);
l4_proto = 0;
/* reset type as we transition from outer to inner headers */
off->tx_flags &= ~(IDPF_TX_FLAGS_IPV4 | IDPF_TX_FLAGS_IPV6);
if (ip.v4->version == 4)
off->tx_flags |= IDPF_TX_FLAGS_IPV4;
if (ip.v6->version == 6)
off->tx_flags |= IDPF_TX_FLAGS_IPV6;
}
/* Enable IP checksum offloads */
if (off->tx_flags & IDPF_TX_FLAGS_IPV4) {
l4_proto = ip.v4->protocol;
/* See comment above regarding need for HW to recompute IP
* header checksum in the case of TSO.
*/
if (is_tso)
cmd |= IDPF_TX_DESC_CMD_IIPT_IPV4_CSUM;
else
cmd |= IDPF_TX_DESC_CMD_IIPT_IPV4;
} else if (off->tx_flags & IDPF_TX_FLAGS_IPV6) {
cmd |= IDPF_TX_DESC_CMD_IIPT_IPV6;
l4_proto = ip.v6->nexthdr;
if (ipv6_ext_hdr(l4_proto))
ipv6_skip_exthdr(skb, skb_network_offset(skb) +
sizeof(*ip.v6), &l4_proto,
&frag_off);
} else {
return -1;
}
/* compute inner L3 header size */
l3_len = l4.hdr - ip.hdr;
offset |= (l3_len / 4) << IDPF_TX_DESC_LEN_IPLEN_S;
/* Enable L4 checksum offloads */
switch (l4_proto) {
case IPPROTO_TCP:
/* enable checksum offloads */
cmd |= IDPF_TX_DESC_CMD_L4T_EOFT_TCP;
l4_len = l4.tcp->doff;
break;
case IPPROTO_UDP:
/* enable UDP checksum offload */
cmd |= IDPF_TX_DESC_CMD_L4T_EOFT_UDP;
l4_len = sizeof(struct udphdr) >> 2;
break;
case IPPROTO_SCTP:
/* enable SCTP checksum offload */
cmd |= IDPF_TX_DESC_CMD_L4T_EOFT_SCTP;
l4_len = sizeof(struct sctphdr) >> 2;
break;
default:
if (is_tso)
return -1;
skb_checksum_help(skb);
return 0;
}
offset |= l4_len << IDPF_TX_DESC_LEN_L4_LEN_S;
off->td_cmd |= cmd;
off->hdr_offsets |= offset;
return 1;
}
/**
* idpf_tx_singleq_map - Build the Tx base descriptor
* @tx_q: queue to send buffer on
* @first: first buffer info buffer to use
* @offloads: pointer to struct that holds offload parameters
*
* This function loops over the skb data pointed to by *first
* and gets a physical address for each memory location and programs
* it and the length into the transmit base mode descriptor.
*/
static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
struct idpf_tx_buf *first,
struct idpf_tx_offload_params *offloads)
{
u32 offsets = offloads->hdr_offsets;
struct idpf_tx_buf *tx_buf = first;
struct idpf_base_tx_desc *tx_desc;
struct sk_buff *skb = first->skb;
u64 td_cmd = offloads->td_cmd;
unsigned int data_len, size;
u16 i = tx_q->next_to_use;
struct netdev_queue *nq;
skb_frag_t *frag;
dma_addr_t dma;
u64 td_tag = 0;
data_len = skb->data_len;
size = skb_headlen(skb);
tx_desc = IDPF_BASE_TX_DESC(tx_q, i);
dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
/* write each descriptor with CRC bit */
if (tx_q->vport->crc_enable)
td_cmd |= IDPF_TX_DESC_CMD_ICRC;
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
if (dma_mapping_error(tx_q->dev, dma))
return idpf_tx_dma_map_error(tx_q, skb, first, i);
/* record length, and DMA address */
dma_unmap_len_set(tx_buf, len, size);
dma_unmap_addr_set(tx_buf, dma, dma);
/* align size to end of page */
max_data += -dma & (IDPF_TX_MAX_READ_REQ_SIZE - 1);
tx_desc->buf_addr = cpu_to_le64(dma);
/* account for data chunks larger than the hardware
* can handle
*/
while (unlikely(size > IDPF_TX_MAX_DESC_DATA)) {
tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd,
offsets,
max_data,
td_tag);
tx_desc++;
i++;
if (i == tx_q->desc_count) {
tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
i = 0;
}
dma += max_data;
size -= max_data;
max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
tx_desc->buf_addr = cpu_to_le64(dma);
}
if (!data_len)
break;
tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets,
size, td_tag);
tx_desc++;
i++;
if (i == tx_q->desc_count) {
tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
i = 0;
}
size = skb_frag_size(frag);
data_len -= size;
dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
DMA_TO_DEVICE);
tx_buf = &tx_q->tx_buf[i];
}
skb_tx_timestamp(first->skb);
/* write last descriptor with RS and EOP bits */
td_cmd |= (u64)(IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS);
tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets,
size, td_tag);
IDPF_SINGLEQ_BUMP_RING_IDX(tx_q, i);
/* set next_to_watch value indicating a packet is present */
first->next_to_watch = tx_desc;
nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
netdev_tx_sent_queue(nq, first->bytecount);
idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more());
}
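/* Illustrative sketch only: "-dma & (IDPF_TX_MAX_READ_REQ_SIZE - 1)" above
 * is the usual power-of-two trick for "bytes until the next aligned
 * boundary", so the first chunk of an oversized buffer ends on a read
 * request boundary. The same identity on plain integers (assumption: align
 * is a power of two):
 */
#if 0	/* example only, never compiled */
static u64 example_bytes_to_next_boundary(u64 addr, u64 align)
{
        return -addr & (align - 1); /* 0 when addr is already aligned */
}
#endif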
/**
* idpf_tx_singleq_get_ctx_desc - grab next desc and update buffer ring
* @txq: queue to put context descriptor on
*
* Since the TX buffer ring mimics the descriptor ring, update the tx buffer
* ring entry to reflect that this index is a context descriptor
*/
static struct idpf_base_tx_ctx_desc *
idpf_tx_singleq_get_ctx_desc(struct idpf_queue *txq)
{
struct idpf_base_tx_ctx_desc *ctx_desc;
int ntu = txq->next_to_use;
memset(&txq->tx_buf[ntu], 0, sizeof(struct idpf_tx_buf));
txq->tx_buf[ntu].ctx_entry = true;
ctx_desc = IDPF_BASE_TX_CTX_DESC(txq, ntu);
IDPF_SINGLEQ_BUMP_RING_IDX(txq, ntu);
txq->next_to_use = ntu;
return ctx_desc;
}
/**
* idpf_tx_singleq_build_ctx_desc - populate context descriptor
* @txq: queue to send buffer on
* @offload: offload parameter structure
**/
static void idpf_tx_singleq_build_ctx_desc(struct idpf_queue *txq,
struct idpf_tx_offload_params *offload)
{
struct idpf_base_tx_ctx_desc *desc = idpf_tx_singleq_get_ctx_desc(txq);
u64 qw1 = (u64)IDPF_TX_DESC_DTYPE_CTX;
if (offload->tso_segs) {
qw1 |= IDPF_TX_CTX_DESC_TSO << IDPF_TXD_CTX_QW1_CMD_S;
qw1 |= ((u64)offload->tso_len << IDPF_TXD_CTX_QW1_TSO_LEN_S) &
IDPF_TXD_CTX_QW1_TSO_LEN_M;
qw1 |= ((u64)offload->mss << IDPF_TXD_CTX_QW1_MSS_S) &
IDPF_TXD_CTX_QW1_MSS_M;
u64_stats_update_begin(&txq->stats_sync);
u64_stats_inc(&txq->q_stats.tx.lso_pkts);
u64_stats_update_end(&txq->stats_sync);
}
desc->qw0.tunneling_params = cpu_to_le32(offload->cd_tunneling);
desc->qw0.l2tag2 = 0;
desc->qw0.rsvd1 = 0;
desc->qw1 = cpu_to_le64(qw1);
}
/**
* idpf_tx_singleq_frame - Sends buffer on Tx ring using base descriptors
* @skb: send buffer
* @tx_q: queue to send buffer on
*
* Returns NETDEV_TX_OK if sent, else an error code
*/
static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
struct idpf_queue *tx_q)
{
struct idpf_tx_offload_params offload = { };
struct idpf_tx_buf *first;
unsigned int count;
__be16 protocol;
int csum, tso;
count = idpf_tx_desc_count_required(tx_q, skb);
if (unlikely(!count))
return idpf_tx_drop_skb(tx_q, skb);
if (idpf_tx_maybe_stop_common(tx_q,
count + IDPF_TX_DESCS_PER_CACHE_LINE +
IDPF_TX_DESCS_FOR_CTX)) {
idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
return NETDEV_TX_BUSY;
}
protocol = vlan_get_protocol(skb);
if (protocol == htons(ETH_P_IP))
offload.tx_flags |= IDPF_TX_FLAGS_IPV4;
else if (protocol == htons(ETH_P_IPV6))
offload.tx_flags |= IDPF_TX_FLAGS_IPV6;
tso = idpf_tso(skb, &offload);
if (tso < 0)
goto out_drop;
csum = idpf_tx_singleq_csum(skb, &offload);
if (csum < 0)
goto out_drop;
if (tso || offload.cd_tunneling)
idpf_tx_singleq_build_ctx_desc(tx_q, &offload);
/* record the location of the first descriptor for this packet */
first = &tx_q->tx_buf[tx_q->next_to_use];
first->skb = skb;
if (tso) {
first->gso_segs = offload.tso_segs;
first->bytecount = skb->len + ((first->gso_segs - 1) * offload.tso_hdr_len);
} else {
first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
first->gso_segs = 1;
}
idpf_tx_singleq_map(tx_q, first, &offload);
return NETDEV_TX_OK;
out_drop:
return idpf_tx_drop_skb(tx_q, skb);
}
/**
* idpf_tx_singleq_start - Selects the right Tx queue to send buffer
* @skb: send buffer
* @netdev: network interface device structure
*
* Returns NETDEV_TX_OK if sent, else an error code
*/
netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
struct net_device *netdev)
{
struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
struct idpf_queue *tx_q;
tx_q = vport->txqs[skb_get_queue_mapping(skb)];
/* hardware can't handle really short frames, hardware padding works
* beyond this point
*/
if (skb_put_padto(skb, IDPF_TX_MIN_PKT_LEN)) {
idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
return NETDEV_TX_OK;
}
return idpf_tx_singleq_frame(skb, tx_q);
}
/**
* idpf_tx_singleq_clean - Reclaim resources from queue
* @tx_q: Tx queue to clean
* @napi_budget: Used to determine if we are in netpoll
* @cleaned: returns number of packets cleaned
*
*/
static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget,
int *cleaned)
{
unsigned int budget = tx_q->vport->compln_clean_budget;
unsigned int total_bytes = 0, total_pkts = 0;
struct idpf_base_tx_desc *tx_desc;
s16 ntc = tx_q->next_to_clean;
struct idpf_netdev_priv *np;
struct idpf_tx_buf *tx_buf;
struct idpf_vport *vport;
struct netdev_queue *nq;
bool dont_wake;
tx_desc = IDPF_BASE_TX_DESC(tx_q, ntc);
tx_buf = &tx_q->tx_buf[ntc];
ntc -= tx_q->desc_count;
do {
struct idpf_base_tx_desc *eop_desc;
/* If this entry in the ring was used as a context descriptor,
* its corresponding entry in the buffer ring will indicate as
* such. We can skip this descriptor since there is no buffer
* to clean.
*/
if (tx_buf->ctx_entry) {
/* Clear this flag here to avoid stale flag values when
* this buffer is used for actual data in the future.
* There are cases where the tx_buf struct / the flags
* field will not be cleared before being reused.
*/
tx_buf->ctx_entry = false;
goto fetch_next_txq_desc;
}
/* if next_to_watch is not set then no work pending */
eop_desc = (struct idpf_base_tx_desc *)tx_buf->next_to_watch;
if (!eop_desc)
break;
/* prevent any other reads prior to eop_desc */
smp_rmb();
/* if the descriptor isn't done, no work yet to do */
if (!(eop_desc->qw1 &
cpu_to_le64(IDPF_TX_DESC_DTYPE_DESC_DONE)))
break;
/* clear next_to_watch to prevent false hangs */
tx_buf->next_to_watch = NULL;
/* update the statistics for this packet */
total_bytes += tx_buf->bytecount;
total_pkts += tx_buf->gso_segs;
napi_consume_skb(tx_buf->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_q->dev,
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
DMA_TO_DEVICE);
/* clear tx_buf data */
tx_buf->skb = NULL;
dma_unmap_len_set(tx_buf, len, 0);
/* unmap remaining buffers */
while (tx_desc != eop_desc) {
tx_buf++;
tx_desc++;
ntc++;
if (unlikely(!ntc)) {
ntc -= tx_q->desc_count;
tx_buf = tx_q->tx_buf;
tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
}
/* unmap any remaining paged data */
if (dma_unmap_len(tx_buf, len)) {
dma_unmap_page(tx_q->dev,
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
DMA_TO_DEVICE);
dma_unmap_len_set(tx_buf, len, 0);
}
}
/* update budget only if we did something */
budget--;
fetch_next_txq_desc:
tx_buf++;
tx_desc++;
ntc++;
if (unlikely(!ntc)) {
ntc -= tx_q->desc_count;
tx_buf = tx_q->tx_buf;
tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
}
} while (likely(budget));
ntc += tx_q->desc_count;
tx_q->next_to_clean = ntc;
*cleaned += total_pkts;
u64_stats_update_begin(&tx_q->stats_sync);
u64_stats_add(&tx_q->q_stats.tx.packets, total_pkts);
u64_stats_add(&tx_q->q_stats.tx.bytes, total_bytes);
u64_stats_update_end(&tx_q->stats_sync);
vport = tx_q->vport;
np = netdev_priv(vport->netdev);
nq = netdev_get_tx_queue(vport->netdev, tx_q->idx);
dont_wake = np->state != __IDPF_VPORT_UP ||
!netif_carrier_ok(vport->netdev);
__netif_txq_completed_wake(nq, total_pkts, total_bytes,
IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
dont_wake);
return !!budget;
}
/**
* idpf_tx_singleq_clean_all - Clean all Tx queues
* @q_vec: queue vector
* @budget: Used to determine if we are in netpoll
* @cleaned: returns number of packets cleaned
*
* Returns false if clean is not complete else returns true
*/
static bool idpf_tx_singleq_clean_all(struct idpf_q_vector *q_vec, int budget,
int *cleaned)
{
u16 num_txq = q_vec->num_txq;
bool clean_complete = true;
int i, budget_per_q;
budget_per_q = num_txq ? max(budget / num_txq, 1) : 0;
for (i = 0; i < num_txq; i++) {
struct idpf_queue *q;
q = q_vec->tx[i];
clean_complete &= idpf_tx_singleq_clean(q, budget_per_q,
cleaned);
}
return clean_complete;
}
/**
* idpf_rx_singleq_test_staterr - tests bits in Rx descriptor
* status and error fields
* @rx_desc: pointer to receive descriptor (in le64 format)
* @stat_err_bits: value to mask
*
* This function does some fast chicanery in order to return the
* value of the mask which is really only used for boolean tests.
* The status_error_ptype_len doesn't need to be shifted because it begins
* at offset zero.
*/
static bool idpf_rx_singleq_test_staterr(const union virtchnl2_rx_desc *rx_desc,
const u64 stat_err_bits)
{
return !!(rx_desc->base_wb.qword1.status_error_ptype_len &
cpu_to_le64(stat_err_bits));
}
/**
* idpf_rx_singleq_is_non_eop - process handling of non-EOP buffers
* @rxq: Rx ring being processed
* @rx_desc: Rx descriptor for current buffer
* @skb: Current socket buffer containing buffer in progress
* @ntc: next to clean
*/
static bool idpf_rx_singleq_is_non_eop(struct idpf_queue *rxq,
union virtchnl2_rx_desc *rx_desc,
struct sk_buff *skb, u16 ntc)
{
/* if we are the last buffer then there is nothing else to do */
if (likely(idpf_rx_singleq_test_staterr(rx_desc, IDPF_RXD_EOF_SINGLEQ)))
return false;
return true;
}
/**
* idpf_rx_singleq_csum - Indicate in skb if checksum is good
* @rxq: Rx ring being processed
* @skb: skb currently being received and modified
* @csum_bits: checksum bits from descriptor
* @ptype: the packet type decoded by hardware
*
* skb->protocol must be set before this function is called
*/
static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb,
struct idpf_rx_csum_decoded *csum_bits,
u16 ptype)
{
struct idpf_rx_ptype_decoded decoded;
bool ipv4, ipv6;
/* check if Rx checksum is enabled */
if (unlikely(!(rxq->vport->netdev->features & NETIF_F_RXCSUM)))
return;
/* check if HW has decoded the packet and checksum */
if (unlikely(!(csum_bits->l3l4p)))
return;
decoded = rxq->vport->rx_ptype_lkup[ptype];
if (unlikely(!(decoded.known && decoded.outer_ip)))
return;
ipv4 = IDPF_RX_PTYPE_TO_IPV(&decoded, IDPF_RX_PTYPE_OUTER_IPV4);
ipv6 = IDPF_RX_PTYPE_TO_IPV(&decoded, IDPF_RX_PTYPE_OUTER_IPV6);
/* Check if there were any checksum errors */
if (unlikely(ipv4 && (csum_bits->ipe || csum_bits->eipe)))
goto checksum_fail;
/* Device could not do any checksum offload for certain extension
* headers as indicated by setting IPV6EXADD bit
*/
if (unlikely(ipv6 && csum_bits->ipv6exadd))
return;
/* check for L4 errors and handle packets that were not able to be
* checksummed due to arrival speed
*/
if (unlikely(csum_bits->l4e))
goto checksum_fail;
if (unlikely(csum_bits->nat && csum_bits->eudpe))
goto checksum_fail;
/* Handle packets that were not able to be checksummed due to arrival
* speed; in this case the stack can compute the csum.
*/
if (unlikely(csum_bits->pprs))
return;
/* If there is an outer header present that might contain a checksum
* we need to bump the checksum level by 1 to reflect the fact that
* we are indicating we validated the inner checksum.
*/
if (decoded.tunnel_type >= IDPF_RX_PTYPE_TUNNEL_IP_GRENAT)
skb->csum_level = 1;
/* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */
switch (decoded.inner_prot) {
case IDPF_RX_PTYPE_INNER_PROT_ICMP:
case IDPF_RX_PTYPE_INNER_PROT_TCP:
case IDPF_RX_PTYPE_INNER_PROT_UDP:
case IDPF_RX_PTYPE_INNER_PROT_SCTP:
skb->ip_summed = CHECKSUM_UNNECESSARY;
return;
default:
return;
}
checksum_fail:
u64_stats_update_begin(&rxq->stats_sync);
u64_stats_inc(&rxq->q_stats.rx.hw_csum_err);
u64_stats_update_end(&rxq->stats_sync);
}
/**
* idpf_rx_singleq_base_csum - Indicate in skb if hw indicated a good cksum
* @rx_q: Rx completion queue
* @skb: skb currently being received and modified
* @rx_desc: the receive descriptor
* @ptype: Rx packet type
*
* This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32-byte
* descriptor writeback format.
**/
static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q,
struct sk_buff *skb,
union virtchnl2_rx_desc *rx_desc,
u16 ptype)
{
struct idpf_rx_csum_decoded csum_bits;
u32 rx_error, rx_status;
u64 qword;
qword = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
rx_status = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M, qword);
rx_error = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, qword);
csum_bits.ipe = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_M, rx_error);
csum_bits.eipe = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_M,
rx_error);
csum_bits.l4e = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_M, rx_error);
csum_bits.pprs = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_M,
rx_error);
csum_bits.l3l4p = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_M,
rx_status);
csum_bits.ipv6exadd = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_M,
rx_status);
csum_bits.nat = 0;
csum_bits.eudpe = 0;
idpf_rx_singleq_csum(rx_q, skb, &csum_bits, ptype);
}
/**
* idpf_rx_singleq_flex_csum - Indicate in skb if hw indicated a good cksum
* @rx_q: Rx completion queue
* @skb: skb currently being received and modified
* @rx_desc: the receive descriptor
* @ptype: Rx packet type
*
* This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
* descriptor writeback format.
**/
static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q,
struct sk_buff *skb,
union virtchnl2_rx_desc *rx_desc,
u16 ptype)
{
struct idpf_rx_csum_decoded csum_bits;
u16 rx_status0, rx_status1;
rx_status0 = le16_to_cpu(rx_desc->flex_nic_wb.status_error0);
rx_status1 = le16_to_cpu(rx_desc->flex_nic_wb.status_error1);
csum_bits.ipe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_M,
rx_status0);
csum_bits.eipe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_M,
rx_status0);
csum_bits.l4e = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_M,
rx_status0);
csum_bits.eudpe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_M,
rx_status0);
csum_bits.l3l4p = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_M,
rx_status0);
csum_bits.ipv6exadd = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_M,
rx_status0);
csum_bits.nat = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_M,
rx_status1);
csum_bits.pprs = 0;
idpf_rx_singleq_csum(rx_q, skb, &csum_bits, ptype);
}
/**
* idpf_rx_singleq_base_hash - set the hash value in the skb
* @rx_q: Rx completion queue
* @skb: skb currently being received and modified
* @rx_desc: specific descriptor
* @decoded: Decoded Rx packet type related fields
*
* This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32-byte
* descriptor writeback format.
**/
static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q,
struct sk_buff *skb,
union virtchnl2_rx_desc *rx_desc,
struct idpf_rx_ptype_decoded *decoded)
{
u64 mask, qw1;
if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH)))
return;
mask = VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH_M;
qw1 = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
if (FIELD_GET(mask, qw1) == mask) {
u32 hash = le32_to_cpu(rx_desc->base_wb.qword0.hi_dword.rss);
skb_set_hash(skb, hash, idpf_ptype_to_htype(decoded));
}
}
/**
* idpf_rx_singleq_flex_hash - set the hash value in the skb
* @rx_q: Rx completion queue
* @skb: skb currently being received and modified
* @rx_desc: specific descriptor
* @decoded: Decoded Rx packet type related fields
*
* This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
* descriptor writeback format.
**/
static void idpf_rx_singleq_flex_hash(struct idpf_queue *rx_q,
struct sk_buff *skb,
union virtchnl2_rx_desc *rx_desc,
struct idpf_rx_ptype_decoded *decoded)
{
if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH)))
return;
if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_M,
le16_to_cpu(rx_desc->flex_nic_wb.status_error0)))
skb_set_hash(skb, le32_to_cpu(rx_desc->flex_nic_wb.rss_hash),
idpf_ptype_to_htype(decoded));
}
/**
* idpf_rx_singleq_process_skb_fields - Populate skb header fields from Rx
* descriptor
* @rx_q: Rx ring being processed
* @skb: pointer to current skb being populated
* @rx_desc: descriptor for skb
* @ptype: packet type
*
* This function checks the ring, descriptor, and packet information in
* order to populate the hash, checksum, VLAN, protocol, and
* other fields within the skb.
*/
static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q,
struct sk_buff *skb,
union virtchnl2_rx_desc *rx_desc,
u16 ptype)
{
struct idpf_rx_ptype_decoded decoded =
rx_q->vport->rx_ptype_lkup[ptype];
/* modifies the skb - consumes the enet header */
skb->protocol = eth_type_trans(skb, rx_q->vport->netdev);
/* Check if we're using base mode descriptor IDs */
if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M) {
idpf_rx_singleq_base_hash(rx_q, skb, rx_desc, &decoded);
idpf_rx_singleq_base_csum(rx_q, skb, rx_desc, ptype);
} else {
idpf_rx_singleq_flex_hash(rx_q, skb, rx_desc, &decoded);
idpf_rx_singleq_flex_csum(rx_q, skb, rx_desc, ptype);
}
}
/**
* idpf_rx_singleq_buf_hw_alloc_all - Replace used receive buffers
* @rx_q: queue for which the hw buffers are allocated
* @cleaned_count: number of buffers to replace
*
* Returns false if all allocations were successful, true if any fail
*/
bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
u16 cleaned_count)
{
struct virtchnl2_singleq_rx_buf_desc *desc;
u16 nta = rx_q->next_to_alloc;
struct idpf_rx_buf *buf;
if (!cleaned_count)
return false;
desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, nta);
buf = &rx_q->rx_buf.buf[nta];
do {
dma_addr_t addr;
addr = idpf_alloc_page(rx_q->pp, buf, rx_q->rx_buf_size);
if (unlikely(addr == DMA_MAPPING_ERROR))
break;
/* Refresh the desc even if buffer_addrs didn't change
* because each write-back erases this info.
*/
desc->pkt_addr = cpu_to_le64(addr);
desc->hdr_addr = 0;
desc++;
buf++;
nta++;
if (unlikely(nta == rx_q->desc_count)) {
desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, 0);
buf = rx_q->rx_buf.buf;
nta = 0;
}
cleaned_count--;
} while (cleaned_count);
if (rx_q->next_to_alloc != nta) {
idpf_rx_buf_hw_update(rx_q, nta);
rx_q->next_to_alloc = nta;
}
return !!cleaned_count;
}
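/* Illustrative sketch only: the buffer/descriptor indices above wrap with an
 * explicit compare-and-reset instead of a modulo, which avoids a division
 * and works for any ring size. The same bump-and-wrap step in isolation
 * (hypothetical helper, for illustration only):
 */
#if 0	/* example only, never compiled */
static u16 example_bump_ring_idx(u16 idx, u16 ring_size)
{
        idx++;

        return (idx == ring_size) ? 0 : idx;
}
#endif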
/**
* idpf_rx_singleq_extract_base_fields - Extract fields from the Rx descriptor
* @rx_q: Rx descriptor queue
* @rx_desc: the descriptor to process
* @fields: storage for extracted values
*
* Decode the Rx descriptor and extract relevant information including the
* size and Rx packet type.
*
* This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32-byte
* descriptor writeback format.
*/
static void idpf_rx_singleq_extract_base_fields(struct idpf_queue *rx_q,
union virtchnl2_rx_desc *rx_desc,
struct idpf_rx_extracted *fields)
{
u64 qword;
qword = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
fields->size = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M, qword);
fields->rx_ptype = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M, qword);
}
/**
* idpf_rx_singleq_extract_flex_fields - Extract fields from the Rx descriptor
* @rx_q: Rx descriptor queue
* @rx_desc: the descriptor to process
* @fields: storage for extracted values
*
* Decode the Rx descriptor and extract relevant information including the
* size and Rx packet type.
*
* This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
* descriptor writeback format.
*/
static void idpf_rx_singleq_extract_flex_fields(struct idpf_queue *rx_q,
union virtchnl2_rx_desc *rx_desc,
struct idpf_rx_extracted *fields)
{
fields->size = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M,
le16_to_cpu(rx_desc->flex_nic_wb.pkt_len));
fields->rx_ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PTYPE_M,
le16_to_cpu(rx_desc->flex_nic_wb.ptype_flex_flags0));
}
/**
* idpf_rx_singleq_extract_fields - Extract fields from the Rx descriptor
* @rx_q: Rx descriptor queue
* @rx_desc: the descriptor to process
* @fields: storage for extracted values
*
* Decode the Rx descriptor and dispatch to the base or flex field extraction
* routine based on the descriptor format in use on this queue.
*/
static void idpf_rx_singleq_extract_fields(struct idpf_queue *rx_q,
union virtchnl2_rx_desc *rx_desc,
struct idpf_rx_extracted *fields)
{
if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M)
idpf_rx_singleq_extract_base_fields(rx_q, rx_desc, fields);
else
idpf_rx_singleq_extract_flex_fields(rx_q, rx_desc, fields);
}
/**
* idpf_rx_singleq_clean - Reclaim resources after receive completes
* @rx_q: rx queue to clean
* @budget: Total limit on number of packets to process
*
* Returns the number of packets cleaned, or the full budget if a buffer
* allocation failure occurred, to guarantee another pass through this routine
*/
static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
{
unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
struct sk_buff *skb = rx_q->skb;
u16 ntc = rx_q->next_to_clean;
u16 cleaned_count = 0;
bool failure = false;
/* Process Rx packets bounded by budget */
while (likely(total_rx_pkts < (unsigned int)budget)) {
struct idpf_rx_extracted fields = { };
union virtchnl2_rx_desc *rx_desc;
struct idpf_rx_buf *rx_buf;
/* get the Rx desc from Rx queue based on 'next_to_clean' */
rx_desc = IDPF_RX_DESC(rx_q, ntc);
/* status_error_ptype_len will always be zero for unused
* descriptors because it's cleared in cleanup and overlaps
* with hdr_addr, which is always zero because packet split
* isn't used. If the hardware wrote DD, then the length will
* be non-zero.
*/
#define IDPF_RXD_DD VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M
if (!idpf_rx_singleq_test_staterr(rx_desc,
IDPF_RXD_DD))
break;
/* This memory barrier is needed to keep us from reading
* any other fields out of the rx_desc until the DD bit
* check above has completed.
*/
dma_rmb();
idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields);
rx_buf = &rx_q->rx_buf.buf[ntc];
if (!fields.size) {
idpf_rx_put_page(rx_buf);
goto skip_data;
}
idpf_rx_sync_for_cpu(rx_buf, fields.size);
skb = rx_q->skb;
if (skb)
idpf_rx_add_frag(rx_buf, skb, fields.size);
else
skb = idpf_rx_construct_skb(rx_q, rx_buf, fields.size);
/* exit if we failed to retrieve a buffer */
if (!skb)
break;
skip_data:
IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc);
cleaned_count++;
/* skip if it is non EOP desc */
if (idpf_rx_singleq_is_non_eop(rx_q, rx_desc, skb, ntc))
continue;
#define IDPF_RXD_ERR_S FIELD_PREP(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, \
VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_M)
if (unlikely(idpf_rx_singleq_test_staterr(rx_desc,
IDPF_RXD_ERR_S))) {
dev_kfree_skb_any(skb);
skb = NULL;
continue;
}
/* pad skb if needed (to make valid ethernet frame) */
if (eth_skb_pad(skb)) {
skb = NULL;
continue;
}
/* probably a little skewed due to removing CRC */
total_rx_bytes += skb->len;
/* protocol */
idpf_rx_singleq_process_skb_fields(rx_q, skb,
rx_desc, fields.rx_ptype);
/* send completed skb up the stack */
napi_gro_receive(&rx_q->q_vector->napi, skb);
skb = NULL;
/* update budget accounting */
total_rx_pkts++;
}
rx_q->skb = skb;
rx_q->next_to_clean = ntc;
if (cleaned_count)
failure = idpf_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count);
u64_stats_update_begin(&rx_q->stats_sync);
u64_stats_add(&rx_q->q_stats.rx.packets, total_rx_pkts);
u64_stats_add(&rx_q->q_stats.rx.bytes, total_rx_bytes);
u64_stats_update_end(&rx_q->stats_sync);
/* guarantee a trip back through this routine if there was a failure */
return failure ? budget : (int)total_rx_pkts;
}
/**
* idpf_rx_singleq_clean_all - Clean all Rx queues
* @q_vec: queue vector
* @budget: Used to determine if we are in netpoll
* @cleaned: returns number of packets cleaned
*
* Returns true if the clean is complete, false otherwise
*/
static bool idpf_rx_singleq_clean_all(struct idpf_q_vector *q_vec, int budget,
int *cleaned)
{
u16 num_rxq = q_vec->num_rxq;
bool clean_complete = true;
int budget_per_q, i;
/* We attempt to distribute budget to each Rx queue fairly, but don't
* allow the budget to go below 1 because that would exit polling early.
*/
budget_per_q = num_rxq ? max(budget / num_rxq, 1) : 0;
for (i = 0; i < num_rxq; i++) {
struct idpf_queue *rxq = q_vec->rx[i];
int pkts_cleaned_per_q;
pkts_cleaned_per_q = idpf_rx_singleq_clean(rxq, budget_per_q);
/* if we clean as many as budgeted, we must not be done */
if (pkts_cleaned_per_q >= budget_per_q)
clean_complete = false;
*cleaned += pkts_cleaned_per_q;
}
return clean_complete;
}
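The fair budget split computed above can be illustrated outside the driver. The following is a minimal userspace sketch (the example_budget_per_q() helper is hypothetical, not driver code) showing why the per-queue budget is floored at 1 so a queue never exits polling early with zero work:

#include <stdio.h>

/* Hypothetical stand-in for the budget_per_q computation in
 * idpf_rx_singleq_clean_all(): split the NAPI budget across num_rxq queues,
 * but never hand out less than 1 per queue.
 */
static int example_budget_per_q(int budget, int num_rxq)
{
	if (!num_rxq)
		return 0;
	return (budget / num_rxq) > 1 ? budget / num_rxq : 1;
}

int main(void)
{
	printf("%d\n", example_budget_per_q(64, 4));	/* 16 */
	printf("%d\n", example_budget_per_q(64, 100));	/* 1, not 0 */
	return 0;
}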
/**
* idpf_vport_singleq_napi_poll - NAPI handler
* @napi: struct from which you get q_vector
* @budget: budget provided by stack
*/
int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
{
struct idpf_q_vector *q_vector =
container_of(napi, struct idpf_q_vector, napi);
bool clean_complete;
int work_done = 0;
/* Handle case where we are called by netpoll with a budget of 0 */
if (budget <= 0) {
idpf_tx_singleq_clean_all(q_vector, budget, &work_done);
return budget;
}
clean_complete = idpf_rx_singleq_clean_all(q_vector, budget,
&work_done);
clean_complete &= idpf_tx_singleq_clean_all(q_vector, budget,
&work_done);
/* If work not completed, return budget so polling will continue */
if (!clean_complete)
return budget;
work_done = min_t(int, work_done, budget - 1);
/* Exit the polling mode, but don't re-enable interrupts if stack might
* poll us due to busy-polling
*/
if (likely(napi_complete_done(napi, work_done)))
idpf_vport_intr_update_itr_ena_irq(q_vector);
return work_done;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _IDPF_TXRX_H_
#define _IDPF_TXRX_H_
#include <net/page_pool/helpers.h>
#include <net/tcp.h>
#include <net/netdev_queues.h>
#define IDPF_LARGE_MAX_Q 256
#define IDPF_MAX_Q 16
#define IDPF_MIN_Q 2
/* Mailbox Queue */
#define IDPF_MAX_MBXQ 1
#define IDPF_MIN_TXQ_DESC 64
#define IDPF_MIN_RXQ_DESC 64
#define IDPF_MIN_TXQ_COMPLQ_DESC 256
#define IDPF_MAX_QIDS 256
/* Number of descriptors in a queue should be a multiple of 32. RX queue
* descriptors alone should be a multiple of IDPF_REQ_RXQ_DESC_MULTIPLE
* to achieve BufQ descriptors aligned to 32
*/
#define IDPF_REQ_DESC_MULTIPLE 32
#define IDPF_REQ_RXQ_DESC_MULTIPLE (IDPF_MAX_BUFQS_PER_RXQ_GRP * 32)
#define IDPF_MIN_TX_DESC_NEEDED (MAX_SKB_FRAGS + 6)
#define IDPF_TX_WAKE_THRESH ((u16)IDPF_MIN_TX_DESC_NEEDED * 2)
#define IDPF_MAX_DESCS 8160
#define IDPF_MAX_TXQ_DESC ALIGN_DOWN(IDPF_MAX_DESCS, IDPF_REQ_DESC_MULTIPLE)
#define IDPF_MAX_RXQ_DESC ALIGN_DOWN(IDPF_MAX_DESCS, IDPF_REQ_RXQ_DESC_MULTIPLE)
#define MIN_SUPPORT_TXDID (\
VIRTCHNL2_TXDID_FLEX_FLOW_SCHED |\
VIRTCHNL2_TXDID_FLEX_TSO_CTX)
#define IDPF_DFLT_SINGLEQ_TX_Q_GROUPS 1
#define IDPF_DFLT_SINGLEQ_RX_Q_GROUPS 1
#define IDPF_DFLT_SINGLEQ_TXQ_PER_GROUP 4
#define IDPF_DFLT_SINGLEQ_RXQ_PER_GROUP 4
#define IDPF_COMPLQ_PER_GROUP 1
#define IDPF_SINGLE_BUFQ_PER_RXQ_GRP 1
#define IDPF_MAX_BUFQS_PER_RXQ_GRP 2
#define IDPF_BUFQ2_ENA 1
#define IDPF_NUMQ_PER_CHUNK 1
#define IDPF_DFLT_SPLITQ_TXQ_PER_GROUP 1
#define IDPF_DFLT_SPLITQ_RXQ_PER_GROUP 1
/* Default vector sharing */
#define IDPF_MBX_Q_VEC 1
#define IDPF_MIN_Q_VEC 1
#define IDPF_DFLT_TX_Q_DESC_COUNT 512
#define IDPF_DFLT_TX_COMPLQ_DESC_COUNT 512
#define IDPF_DFLT_RX_Q_DESC_COUNT 512
/* IMPORTANT: We absolutely _cannot_ have more buffers in the system than a
* given RX completion queue has descriptors. This includes _ALL_ buffer
* queues. E.g.: If you have two buffer queues of 512 descriptors and buffers,
* you have a total of 1024 buffers so your RX queue _must_ have at least that
* many descriptors. This macro divides a given number of RX descriptors by
* number of buffer queues to calculate how many descriptors each buffer queue
* can have without overrunning the RX queue.
*
* If you give hardware more buffers than completion descriptors, then if
* hardware gets a chance to post more than a ring's worth of descriptors
* before SW gets an interrupt and overwrites the SW head, the gen bit in
* the descriptor will be wrong. Any overwritten descriptors' buffers will
* be gone forever and SW has no reasonable way to tell that this has happened.
* From SW perspective, when we finally get an interrupt, it looks like we're
* still waiting for descriptor to be done, stalling forever.
*/
#define IDPF_RX_BUFQ_DESC_COUNT(RXD, NUM_BUFQ) ((RXD) / (NUM_BUFQ))
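As a hedged worked example of the macro above: assuming an RX completion queue with 1024 descriptors served by IDPF_MAX_BUFQS_PER_RXQ_GRP (2) buffer queues, IDPF_RX_BUFQ_DESC_COUNT(1024, 2) gives each bufq 512 descriptors, so at most 2 * 512 = 1024 buffers are ever outstanding, which never exceeds the completion ring size.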
#define IDPF_RX_BUFQ_WORKING_SET(rxq) ((rxq)->desc_count - 1)
#define IDPF_RX_BUMP_NTC(rxq, ntc) \
do { \
if (unlikely(++(ntc) == (rxq)->desc_count)) { \
ntc = 0; \
change_bit(__IDPF_Q_GEN_CHK, (rxq)->flags); \
} \
} while (0)
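The generation-bit scheme behind IDPF_RX_BUMP_NTC (and the __IDPF_Q_GEN_CHK flag documented further below) can be shown with a minimal userspace sketch; the ring, names and sizes here are hypothetical, only the gen-flipping logic mirrors the macro:

#include <stdbool.h>
#include <stdio.h>

#define EX_RING_SIZE 4

int main(void)
{
	bool ring_gen[EX_RING_SIZE] = { false };
	bool hw_gen = true;	/* HW writes gen=1 on the first pass, flips on wrap */
	bool sw_gen = true;	/* SW expects gen=1 on the first pass, flips on wrap */
	int ntu = 0, ntc = 0;

	for (int i = 0; i < 6; i++) {
		/* HW writes back one descriptor */
		ring_gen[ntu] = hw_gen;
		if (++ntu == EX_RING_SIZE) {
			ntu = 0;
			hw_gen = !hw_gen;
		}

		/* SW treats a descriptor as new only if gen matches its expectation */
		if (ring_gen[ntc] == sw_gen) {
			printf("desc %d is a new writeback (gen=%d)\n", ntc, sw_gen);
			if (++ntc == EX_RING_SIZE) {
				ntc = 0;
				sw_gen = !sw_gen;	/* same idea as change_bit() above */
			}
		}
	}
	return 0;
}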
#define IDPF_SINGLEQ_BUMP_RING_IDX(q, idx) \
do { \
if (unlikely(++(idx) == (q)->desc_count)) \
idx = 0; \
} while (0)
#define IDPF_RX_HDR_SIZE 256
#define IDPF_RX_BUF_2048 2048
#define IDPF_RX_BUF_4096 4096
#define IDPF_RX_BUF_STRIDE 32
#define IDPF_RX_BUF_POST_STRIDE 16
#define IDPF_LOW_WATERMARK 64
/* Size of header buffer specifically for header split */
#define IDPF_HDR_BUF_SIZE 256
#define IDPF_PACKET_HDR_PAD \
(ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN * 2)
#define IDPF_TX_TSO_MIN_MSS 88
/* Minimum number of descriptors between 2 descriptors with the RE bit set;
* only relevant in flow scheduling mode
*/
#define IDPF_TX_SPLITQ_RE_MIN_GAP 64
#define IDPF_RX_BI_BUFID_S 0
#define IDPF_RX_BI_BUFID_M GENMASK(14, 0)
#define IDPF_RX_BI_GEN_S 15
#define IDPF_RX_BI_GEN_M BIT(IDPF_RX_BI_GEN_S)
#define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
#define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
#define IDPF_SINGLEQ_RX_BUF_DESC(rxq, i) \
(&(((struct virtchnl2_singleq_rx_buf_desc *)((rxq)->desc_ring))[i]))
#define IDPF_SPLITQ_RX_BUF_DESC(rxq, i) \
(&(((struct virtchnl2_splitq_rx_buf_desc *)((rxq)->desc_ring))[i]))
#define IDPF_SPLITQ_RX_BI_DESC(rxq, i) ((((rxq)->ring))[i])
#define IDPF_BASE_TX_DESC(txq, i) \
(&(((struct idpf_base_tx_desc *)((txq)->desc_ring))[i]))
#define IDPF_BASE_TX_CTX_DESC(txq, i) \
(&(((struct idpf_base_tx_ctx_desc *)((txq)->desc_ring))[i]))
#define IDPF_SPLITQ_TX_COMPLQ_DESC(txcq, i) \
(&(((struct idpf_splitq_tx_compl_desc *)((txcq)->desc_ring))[i]))
#define IDPF_FLEX_TX_DESC(txq, i) \
(&(((union idpf_tx_flex_desc *)((txq)->desc_ring))[i]))
#define IDPF_FLEX_TX_CTX_DESC(txq, i) \
(&(((struct idpf_flex_tx_ctx_desc *)((txq)->desc_ring))[i]))
#define IDPF_DESC_UNUSED(txq) \
((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \
(txq)->next_to_clean - (txq)->next_to_use - 1)
#define IDPF_TX_BUF_RSV_UNUSED(txq) ((txq)->buf_stack.top)
#define IDPF_TX_BUF_RSV_LOW(txq) (IDPF_TX_BUF_RSV_UNUSED(txq) < \
(txq)->desc_count >> 2)
#define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq) ((txcq)->desc_count >> 1)
/* Determine the absolute number of completions pending, i.e. the number of
* completions that are expected to arrive on the TX completion queue.
*/
#define IDPF_TX_COMPLQ_PENDING(txq) \
(((txq)->num_completions_pending >= (txq)->complq->num_completions ? \
0 : U64_MAX) + \
(txq)->num_completions_pending - (txq)->complq->num_completions)
#define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16
#define IDPF_SPLITQ_TX_INVAL_COMPL_TAG -1
/* Adjust the generation for the completion tag and wrap if necessary */
#define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \
((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \
0 : (txq)->compl_tag_cur_gen)
#define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS)
#define IDPF_TX_FLAGS_TSO BIT(0)
#define IDPF_TX_FLAGS_IPV4 BIT(1)
#define IDPF_TX_FLAGS_IPV6 BIT(2)
#define IDPF_TX_FLAGS_TUNNEL BIT(3)
union idpf_tx_flex_desc {
struct idpf_flex_tx_desc q; /* queue based scheduling */
struct idpf_flex_tx_sched_desc flow; /* flow based scheduling */
};
/**
* struct idpf_tx_buf
* @next_to_watch: Next descriptor to clean
* @skb: Pointer to the skb
* @dma: DMA address
* @len: DMA length
* @bytecount: Number of bytes
* @gso_segs: Number of GSO segments
* @compl_tag: Splitq only, unique identifier for a buffer. Used to compare
* with completion tag returned in buffer completion event.
* Because the completion tag is expected to be the same in all
* data descriptors for a given packet, and a single packet can
* span multiple buffers, we need this field to track all
* buffers associated with this completion tag independently of
* the buf_id. The tag consists of a N bit buf_id and M upper
* order "generation bits". See compl_tag_bufid_m and
* compl_tag_gen_s in struct idpf_queue. We'll use a value of -1
* to indicate the tag is not valid.
* @ctx_entry: Singleq only. Used to indicate the corresponding entry
* in the descriptor ring was used for a context descriptor and
* this buffer entry should be skipped.
*/
struct idpf_tx_buf {
void *next_to_watch;
struct sk_buff *skb;
DEFINE_DMA_UNMAP_ADDR(dma);
DEFINE_DMA_UNMAP_LEN(len);
unsigned int bytecount;
unsigned short gso_segs;
union {
int compl_tag;
bool ctx_entry;
};
};
struct idpf_tx_stash {
struct hlist_node hlist;
struct idpf_tx_buf buf;
};
/**
* struct idpf_buf_lifo - LIFO for managing OOO completions
* @top: Used to know how many buffers are left
* @size: Total size of LIFO
* @bufs: Backing array
*/
struct idpf_buf_lifo {
u16 top;
u16 size;
struct idpf_tx_stash **bufs;
};
/**
* struct idpf_tx_offload_params - Offload parameters for a given packet
* @tx_flags: Feature flags enabled for this packet
* @hdr_offsets: Offset parameter for single queue model
* @cd_tunneling: Type of tunneling enabled for single queue model
* @tso_len: Total length of payload to segment
* @mss: Segment size
* @tso_segs: Number of segments to be sent
* @tso_hdr_len: Length of headers to be duplicated
* @td_cmd: Command field to be inserted into descriptor
*/
struct idpf_tx_offload_params {
u32 tx_flags;
u32 hdr_offsets;
u32 cd_tunneling;
u32 tso_len;
u16 mss;
u16 tso_segs;
u16 tso_hdr_len;
u16 td_cmd;
};
/**
* struct idpf_tx_splitq_params
* @dtype: General descriptor info
* @eop_cmd: Type of EOP
* @compl_tag: Associated tag for completion
* @td_tag: Descriptor tunneling tag
* @offload: Offload parameters
*/
struct idpf_tx_splitq_params {
enum idpf_tx_desc_dtype_value dtype;
u16 eop_cmd;
union {
u16 compl_tag;
u16 td_tag;
};
struct idpf_tx_offload_params offload;
};
enum idpf_tx_ctx_desc_eipt_offload {
IDPF_TX_CTX_EXT_IP_NONE = 0x0,
IDPF_TX_CTX_EXT_IP_IPV6 = 0x1,
IDPF_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2,
IDPF_TX_CTX_EXT_IP_IPV4 = 0x3
};
/* Checksum offload bits decoded from the receive descriptor. */
struct idpf_rx_csum_decoded {
u32 l3l4p : 1;
u32 ipe : 1;
u32 eipe : 1;
u32 eudpe : 1;
u32 ipv6exadd : 1;
u32 l4e : 1;
u32 pprs : 1;
u32 nat : 1;
u32 raw_csum_inv : 1;
u32 raw_csum : 16;
};
struct idpf_rx_extracted {
unsigned int size;
u16 rx_ptype;
};
#define IDPF_TX_COMPLQ_CLEAN_BUDGET 256
#define IDPF_TX_MIN_PKT_LEN 17
#define IDPF_TX_DESCS_FOR_SKB_DATA_PTR 1
#define IDPF_TX_DESCS_PER_CACHE_LINE (L1_CACHE_BYTES / \
sizeof(struct idpf_flex_tx_desc))
#define IDPF_TX_DESCS_FOR_CTX 1
/* TX descriptors needed, worst case */
#define IDPF_TX_DESC_NEEDED (MAX_SKB_FRAGS + IDPF_TX_DESCS_FOR_CTX + \
IDPF_TX_DESCS_PER_CACHE_LINE + \
IDPF_TX_DESCS_FOR_SKB_DATA_PTR)
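As a rough worked example (assuming MAX_SKB_FRAGS is 17, L1_CACHE_BYTES is 64 and a 16-byte flex TX descriptor, none of which is guaranteed by this header): IDPF_TX_DESC_NEEDED would be 17 + 1 + 64/16 + 1 = 23 descriptors for a worst-case skb.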
/* The size limit for a transmit buffer in a descriptor is (16K - 1).
* In order to align with the read requests we will align the value to
* the nearest 4K which represents our maximum read request size.
*/
#define IDPF_TX_MAX_READ_REQ_SIZE SZ_4K
#define IDPF_TX_MAX_DESC_DATA (SZ_16K - 1)
#define IDPF_TX_MAX_DESC_DATA_ALIGNED \
ALIGN_DOWN(IDPF_TX_MAX_DESC_DATA, IDPF_TX_MAX_READ_REQ_SIZE)
#define IDPF_RX_DMA_ATTR \
(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
#define IDPF_RX_DESC(rxq, i) \
(&(((union virtchnl2_rx_desc *)((rxq)->desc_ring))[i]))
struct idpf_rx_buf {
struct page *page;
unsigned int page_offset;
u16 truesize;
};
#define IDPF_RX_MAX_PTYPE_PROTO_IDS 32
#define IDPF_RX_MAX_PTYPE_SZ (sizeof(struct virtchnl2_ptype) + \
(sizeof(u16) * IDPF_RX_MAX_PTYPE_PROTO_IDS))
#define IDPF_RX_PTYPE_HDR_SZ sizeof(struct virtchnl2_get_ptype_info)
#define IDPF_RX_MAX_PTYPES_PER_BUF \
DIV_ROUND_DOWN_ULL((IDPF_CTLQ_MAX_BUF_LEN - IDPF_RX_PTYPE_HDR_SZ), \
IDPF_RX_MAX_PTYPE_SZ)
#define IDPF_GET_PTYPE_SIZE(p) struct_size((p), proto_id, (p)->proto_id_count)
#define IDPF_TUN_IP_GRE (\
IDPF_PTYPE_TUNNEL_IP |\
IDPF_PTYPE_TUNNEL_IP_GRENAT)
#define IDPF_TUN_IP_GRE_MAC (\
IDPF_TUN_IP_GRE |\
IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC)
#define IDPF_RX_MAX_PTYPE 1024
#define IDPF_RX_MAX_BASE_PTYPE 256
#define IDPF_INVALID_PTYPE_ID 0xFFFF
/* Packet type non-ip values */
enum idpf_rx_ptype_l2 {
IDPF_RX_PTYPE_L2_RESERVED = 0,
IDPF_RX_PTYPE_L2_MAC_PAY2 = 1,
IDPF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2,
IDPF_RX_PTYPE_L2_FIP_PAY2 = 3,
IDPF_RX_PTYPE_L2_OUI_PAY2 = 4,
IDPF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5,
IDPF_RX_PTYPE_L2_LLDP_PAY2 = 6,
IDPF_RX_PTYPE_L2_ECP_PAY2 = 7,
IDPF_RX_PTYPE_L2_EVB_PAY2 = 8,
IDPF_RX_PTYPE_L2_QCN_PAY2 = 9,
IDPF_RX_PTYPE_L2_EAPOL_PAY2 = 10,
IDPF_RX_PTYPE_L2_ARP = 11,
};
enum idpf_rx_ptype_outer_ip {
IDPF_RX_PTYPE_OUTER_L2 = 0,
IDPF_RX_PTYPE_OUTER_IP = 1,
};
#define IDPF_RX_PTYPE_TO_IPV(ptype, ipv) \
(((ptype)->outer_ip == IDPF_RX_PTYPE_OUTER_IP) && \
((ptype)->outer_ip_ver == (ipv)))
enum idpf_rx_ptype_outer_ip_ver {
IDPF_RX_PTYPE_OUTER_NONE = 0,
IDPF_RX_PTYPE_OUTER_IPV4 = 1,
IDPF_RX_PTYPE_OUTER_IPV6 = 2,
};
enum idpf_rx_ptype_outer_fragmented {
IDPF_RX_PTYPE_NOT_FRAG = 0,
IDPF_RX_PTYPE_FRAG = 1,
};
enum idpf_rx_ptype_tunnel_type {
IDPF_RX_PTYPE_TUNNEL_NONE = 0,
IDPF_RX_PTYPE_TUNNEL_IP_IP = 1,
IDPF_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
};
enum idpf_rx_ptype_tunnel_end_prot {
IDPF_RX_PTYPE_TUNNEL_END_NONE = 0,
IDPF_RX_PTYPE_TUNNEL_END_IPV4 = 1,
IDPF_RX_PTYPE_TUNNEL_END_IPV6 = 2,
};
enum idpf_rx_ptype_inner_prot {
IDPF_RX_PTYPE_INNER_PROT_NONE = 0,
IDPF_RX_PTYPE_INNER_PROT_UDP = 1,
IDPF_RX_PTYPE_INNER_PROT_TCP = 2,
IDPF_RX_PTYPE_INNER_PROT_SCTP = 3,
IDPF_RX_PTYPE_INNER_PROT_ICMP = 4,
IDPF_RX_PTYPE_INNER_PROT_TIMESYNC = 5,
};
enum idpf_rx_ptype_payload_layer {
IDPF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
};
enum idpf_tunnel_state {
IDPF_PTYPE_TUNNEL_IP = BIT(0),
IDPF_PTYPE_TUNNEL_IP_GRENAT = BIT(1),
IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC = BIT(2),
};
struct idpf_ptype_state {
bool outer_ip;
bool outer_frag;
u8 tunnel_state;
};
struct idpf_rx_ptype_decoded {
u32 ptype:10;
u32 known:1;
u32 outer_ip:1;
u32 outer_ip_ver:2;
u32 outer_frag:1;
u32 tunnel_type:3;
u32 tunnel_end_prot:2;
u32 tunnel_end_frag:1;
u32 inner_prot:4;
u32 payload_layer:3;
};
/**
* enum idpf_queue_flags_t
* @__IDPF_Q_GEN_CHK: Queues operating in splitq mode use a generation bit to
* identify new descriptor writebacks on the ring. HW sets
* the gen bit to 1 on the first writeback of any given
* descriptor. After the ring wraps, HW sets the gen bit of
* those descriptors to 0, and continues flipping
* 0->1 or 1->0 on each ring wrap. SW maintains its own
* gen bit to know what value will indicate writebacks on
* the next pass around the ring. E.g. it is initialized
* to 1 and knows that reading a gen bit of 1 in any
* descriptor on the initial pass of the ring indicates a
* writeback. It also flips on every ring wrap.
* @__IDPF_RFLQ_GEN_CHK: Refill queues are SW only, so Q_GEN acts as the HW bit
* and RFLGQ_GEN is the SW bit.
* @__IDPF_Q_FLOW_SCH_EN: Enable flow scheduling
* @__IDPF_Q_SW_MARKER: Used to indicate TX queue marker completions
* @__IDPF_Q_POLL_MODE: Enable poll mode
* @__IDPF_Q_FLAGS_NBITS: Must be last
*/
enum idpf_queue_flags_t {
__IDPF_Q_GEN_CHK,
__IDPF_RFLQ_GEN_CHK,
__IDPF_Q_FLOW_SCH_EN,
__IDPF_Q_SW_MARKER,
__IDPF_Q_POLL_MODE,
__IDPF_Q_FLAGS_NBITS,
};
/**
* struct idpf_vec_regs
* @dyn_ctl_reg: Dynamic control interrupt register offset
* @itrn_reg: Interrupt Throttling Rate register offset
* @itrn_index_spacing: Register spacing between ITR registers of the same
* vector
*/
struct idpf_vec_regs {
u32 dyn_ctl_reg;
u32 itrn_reg;
u32 itrn_index_spacing;
};
/**
* struct idpf_intr_reg
* @dyn_ctl: Dynamic control interrupt register
* @dyn_ctl_intena_m: Mask for dyn_ctl interrupt enable
* @dyn_ctl_itridx_s: Register bit offset for ITR index
* @dyn_ctl_itridx_m: Mask for ITR index
* @dyn_ctl_intrvl_s: Register bit offset for ITR interval
* @rx_itr: RX ITR register
* @tx_itr: TX ITR register
* @icr_ena: Interrupt cause register offset
* @icr_ena_ctlq_m: Mask for ICR
*/
struct idpf_intr_reg {
void __iomem *dyn_ctl;
u32 dyn_ctl_intena_m;
u32 dyn_ctl_itridx_s;
u32 dyn_ctl_itridx_m;
u32 dyn_ctl_intrvl_s;
void __iomem *rx_itr;
void __iomem *tx_itr;
void __iomem *icr_ena;
u32 icr_ena_ctlq_m;
};
/**
* struct idpf_q_vector
* @vport: Vport back pointer
* @affinity_mask: CPU affinity mask
* @napi: napi handler
* @v_idx: Vector index
* @intr_reg: See struct idpf_intr_reg
* @num_txq: Number of TX queues
* @tx: Array of TX queues to service
* @tx_dim: Data for TX net_dim algorithm
* @tx_itr_value: TX interrupt throttling rate
* @tx_intr_mode: Dynamic ITR or not
* @tx_itr_idx: TX ITR index
* @num_rxq: Number of RX queues
* @rx: Array of RX queues to service
* @rx_dim: Data for RX net_dim algorithm
* @rx_itr_value: RX interrupt throttling rate
* @rx_intr_mode: Dynamic ITR or not
* @rx_itr_idx: RX ITR index
* @num_bufq: Number of buffer queues
* @bufq: Array of buffer queues to service
* @total_events: Number of interrupts processed
* @name: Queue vector name
*/
struct idpf_q_vector {
struct idpf_vport *vport;
cpumask_t affinity_mask;
struct napi_struct napi;
u16 v_idx;
struct idpf_intr_reg intr_reg;
u16 num_txq;
struct idpf_queue **tx;
struct dim tx_dim;
u16 tx_itr_value;
bool tx_intr_mode;
u32 tx_itr_idx;
u16 num_rxq;
struct idpf_queue **rx;
struct dim rx_dim;
u16 rx_itr_value;
bool rx_intr_mode;
u32 rx_itr_idx;
u16 num_bufq;
struct idpf_queue **bufq;
u16 total_events;
char *name;
};
struct idpf_rx_queue_stats {
u64_stats_t packets;
u64_stats_t bytes;
u64_stats_t rsc_pkts;
u64_stats_t hw_csum_err;
u64_stats_t hsplit_pkts;
u64_stats_t hsplit_buf_ovf;
u64_stats_t bad_descs;
};
struct idpf_tx_queue_stats {
u64_stats_t packets;
u64_stats_t bytes;
u64_stats_t lso_pkts;
u64_stats_t linearize;
u64_stats_t q_busy;
u64_stats_t skb_drops;
u64_stats_t dma_map_errs;
};
struct idpf_cleaned_stats {
u32 packets;
u32 bytes;
};
union idpf_queue_stats {
struct idpf_rx_queue_stats rx;
struct idpf_tx_queue_stats tx;
};
#define IDPF_ITR_DYNAMIC 1
#define IDPF_ITR_MAX 0x1FE0
#define IDPF_ITR_20K 0x0032
#define IDPF_ITR_GRAN_S 1 /* Assume ITR granularity is 2us */
#define IDPF_ITR_MASK 0x1FFE /* ITR register value alignment mask */
#define ITR_REG_ALIGN(setting) ((setting) & IDPF_ITR_MASK)
#define IDPF_ITR_IS_DYNAMIC(itr_mode) (itr_mode)
#define IDPF_ITR_TX_DEF IDPF_ITR_20K
#define IDPF_ITR_RX_DEF IDPF_ITR_20K
/* Index used for 'No ITR' update in DYN_CTL register */
#define IDPF_NO_ITR_UPDATE_IDX 3
#define IDPF_ITR_IDX_SPACING(spacing, dflt) (spacing ? spacing : dflt)
#define IDPF_DIM_DEFAULT_PROFILE_IX 1
/**
* struct idpf_queue
* @dev: Device back pointer for DMA mapping
* @vport: Back pointer to associated vport
* @txq_grp: See struct idpf_txq_group
* @rxq_grp: See struct idpf_rxq_group
* @idx: For buffer queue, it is used as group id, either 0 or 1. On clean,
* buffer queue uses this index to determine which group of refill queues
* to clean.
* For TX queue, it is used as index to map between TX queue group and
* hot path TX pointers stored in vport. Used in both singleq/splitq.
* For RX queue, it is used as an index into the total RX queues across all
* groups and is used for skb queue reporting.
* @tail: Tail offset. Used for both queue models single and split. In splitq
* model relevant only for TX queue and RX queue.
* @tx_buf: See struct idpf_tx_buf
* @rx_buf: Struct with RX buffer related members
* @rx_buf.buf: See struct idpf_rx_buf
* @rx_buf.hdr_buf_pa: DMA handle
* @rx_buf.hdr_buf_va: Virtual address
* @pp: Page pool pointer
* @skb: Pointer to the skb
* @q_type: Queue type (TX, RX, TX completion, RX buffer)
* @q_id: Queue id
* @desc_count: Number of descriptors
* @next_to_use: Next descriptor to use. Relevant in both split & single txq
* and bufq.
* @next_to_clean: Next descriptor to clean. In split queue model, only
* relevant to TX completion queue and RX queue.
* @next_to_alloc: RX buffer to allocate at. Used only for RX. In splitq model
* only relevant to RX queue.
* @flags: See enum idpf_queue_flags_t
* @q_stats: See union idpf_queue_stats
* @stats_sync: See struct u64_stats_sync
* @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on
* the TX completion queue, it can be for any TXQ associated
* with that completion queue. This means we can clean up to
* N TXQs during a single call to clean the completion queue.
* cleaned_bytes|pkts tracks the clean stats per TXQ during
* that single call to clean the completion queue. By doing so,
* we can update BQL with aggregate cleaned stats for each TXQ
* only once at the end of the cleaning routine.
* @cleaned_pkts: Number of packets cleaned for the above said case
* @rx_hsplit_en: RX headsplit enable
* @rx_hbuf_size: Header buffer size
* @rx_buf_size: Buffer size
* @rx_max_pkt_size: RX max packet size
* @rx_buf_stride: RX buffer stride
* @rx_buffer_low_watermark: RX buffer low watermark
* @rxdids: Supported RX descriptor ids
* @q_vector: Backreference to associated vector
* @size: Length of descriptor ring in bytes
* @dma: Physical address of ring
* @desc_ring: Descriptor ring memory
* @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
* @tx_min_pkt_len: Min supported packet length
* @num_completions: Only relevant for TX completion queue. It tracks the
* number of completions received to compare against the
* number of completions pending, as accumulated by the
* TX queues.
* @buf_stack: Stack of empty buffers to store buffer info for out of order
* buffer completions. See struct idpf_buf_lifo.
* @compl_tag_bufid_m: Completion tag buffer id mask
* @compl_tag_gen_s: Completion tag generation bit
* The format of the completion tag will change based on the TXQ
* descriptor ring size so that we can maintain roughly the same level
* of "uniqueness" across all descriptor sizes. For example, if the
* TXQ descriptor ring size is 64 (the minimum size supported), the
* completion tag will be formatted as below:
* 15 6 5 0
* --------------------------------
* | GEN=0-1023 |IDX = 0-63|
* --------------------------------
*
* This gives us 64*1024 = 65536 possible unique values. Similarly, if
* the TXQ descriptor ring size is 8160 (the maximum size supported),
* the completion tag will be formatted as below:
* 15 13 12 0
* --------------------------------
* |GEN | IDX = 0-8159 |
* --------------------------------
*
* This gives us 8*8160 = 65280 possible unique values.
* @compl_tag_cur_gen: Used to keep track of current completion tag generation
* @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
* @sched_buf_hash: Hash table used to store buffers
*/
struct idpf_queue {
struct device *dev;
struct idpf_vport *vport;
union {
struct idpf_txq_group *txq_grp;
struct idpf_rxq_group *rxq_grp;
};
u16 idx;
void __iomem *tail;
union {
struct idpf_tx_buf *tx_buf;
struct {
struct idpf_rx_buf *buf;
dma_addr_t hdr_buf_pa;
void *hdr_buf_va;
} rx_buf;
};
struct page_pool *pp;
struct sk_buff *skb;
u16 q_type;
u32 q_id;
u16 desc_count;
u16 next_to_use;
u16 next_to_clean;
u16 next_to_alloc;
DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
union idpf_queue_stats q_stats;
struct u64_stats_sync stats_sync;
u32 cleaned_bytes;
u16 cleaned_pkts;
bool rx_hsplit_en;
u16 rx_hbuf_size;
u16 rx_buf_size;
u16 rx_max_pkt_size;
u16 rx_buf_stride;
u8 rx_buffer_low_watermark;
u64 rxdids;
struct idpf_q_vector *q_vector;
unsigned int size;
dma_addr_t dma;
void *desc_ring;
u16 tx_max_bufs;
u8 tx_min_pkt_len;
u32 num_completions;
struct idpf_buf_lifo buf_stack;
u16 compl_tag_bufid_m;
u16 compl_tag_gen_s;
u16 compl_tag_cur_gen;
u16 compl_tag_gen_max;
DECLARE_HASHTABLE(sched_buf_hash, 12);
} ____cacheline_internodealigned_in_smp;
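To make the completion tag layout described in the kernel-doc above concrete, here is a minimal userspace sketch. The bit widths are taken from the 64-descriptor example in that comment; the EX_* names are hypothetical stand-ins for the driver's compl_tag_gen_s shift and compl_tag_bufid_m mask, which are sized at queue configuration time:

#include <stdint.h>
#include <stdio.h>

/* 64-entry TXQ example: 6 low bits of buffer index, 10 high bits of generation */
#define EX_TAG_IDX_BITS	6
#define EX_TAG_IDX_MASK	((1u << EX_TAG_IDX_BITS) - 1)	/* 0x003F */

static uint16_t ex_tag_pack(uint16_t gen, uint16_t idx)
{
	return (gen << EX_TAG_IDX_BITS) | (idx & EX_TAG_IDX_MASK);
}

int main(void)
{
	uint16_t tag = ex_tag_pack(5, 42);

	printf("tag=0x%04x gen=%u idx=%u\n", (unsigned int)tag,
	       (unsigned int)(tag >> EX_TAG_IDX_BITS),
	       (unsigned int)(tag & EX_TAG_IDX_MASK));
	return 0;
}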
/**
* struct idpf_sw_queue
* @next_to_clean: Next descriptor to clean
* @next_to_alloc: Buffer to allocate at
* @flags: See enum idpf_queue_flags_t
* @ring: Pointer to the ring
* @desc_count: Descriptor count
* @dev: Device back pointer for DMA mapping
*
* Software queues are used in splitq mode to manage buffers between rxq
* producer and the bufq consumer. These are required in order to maintain a
* lockless buffer management system and are strictly software only constructs.
*/
struct idpf_sw_queue {
u16 next_to_clean;
u16 next_to_alloc;
DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
u16 *ring;
u16 desc_count;
struct device *dev;
} ____cacheline_internodealigned_in_smp;
/**
* struct idpf_rxq_set
* @rxq: RX queue
* @refillq0: Pointer to refill queue 0
* @refillq1: Pointer to refill queue 1
*
* Splitq only. idpf_rxq_set associates an rxq with an array of refillqs.
* Each rxq needs a refillq to return used buffers back to the respective bufq.
* Bufqs then clean these refillqs for buffers to give to hardware.
*/
struct idpf_rxq_set {
struct idpf_queue rxq;
struct idpf_sw_queue *refillq0;
struct idpf_sw_queue *refillq1;
};
/**
* struct idpf_bufq_set
* @bufq: Buffer queue
* @num_refillqs: Number of refill queues. This is always equal to num_rxq_sets
* in idpf_rxq_group.
* @refillqs: Pointer to refill queues array.
*
* Splitq only. idpf_bufq_set associates a bufq to an array of refillqs.
* In this bufq_set, there will be one refillq for each rxq in this rxq_group.
* Used buffers received by rxqs will be put on refillqs which bufqs will
* clean to return new buffers back to hardware.
*
* Buffers needed by some number of rxqs associated in this rxq_group are
* managed by at most two bufqs (depending on performance configuration).
*/
struct idpf_bufq_set {
struct idpf_queue bufq;
int num_refillqs;
struct idpf_sw_queue *refillqs;
};
/**
* struct idpf_rxq_group
* @vport: Vport back pointer
* @singleq: Struct with single queue related members
* @singleq.num_rxq: Number of RX queues associated
* @singleq.rxqs: Array of RX queue pointers
* @splitq: Struct with split queue related members
* @splitq.num_rxq_sets: Number of RX queue sets
* @splitq.rxq_sets: Array of RX queue sets
* @splitq.bufq_sets: Buffer queue set pointer
*
* In singleq mode, an rxq_group is simply an array of rxqs. In splitq, a
* rxq_group contains all the rxqs, bufqs and refillqs needed to
* manage buffers in splitq mode.
*/
struct idpf_rxq_group {
struct idpf_vport *vport;
union {
struct {
u16 num_rxq;
struct idpf_queue *rxqs[IDPF_LARGE_MAX_Q];
} singleq;
struct {
u16 num_rxq_sets;
struct idpf_rxq_set *rxq_sets[IDPF_LARGE_MAX_Q];
struct idpf_bufq_set *bufq_sets;
} splitq;
};
};
/**
* struct idpf_txq_group
* @vport: Vport back pointer
* @num_txq: Number of TX queues associated
* @txqs: Array of TX queue pointers
* @complq: Associated completion queue pointer, split queue only
* @num_completions_pending: Total number of completions pending for the
* completion queue, accumulated for all TX queues
* associated with that completion queue.
*
* Between singleq and splitq, a txq_group is largely the same except for the
* complq. In splitq a single complq is responsible for handling completions
* for some number of txqs associated in this txq_group.
*/
struct idpf_txq_group {
struct idpf_vport *vport;
u16 num_txq;
struct idpf_queue *txqs[IDPF_LARGE_MAX_Q];
struct idpf_queue *complq;
u32 num_completions_pending;
};
/**
* idpf_size_to_txd_count - Get number of descriptors needed for large Tx frag
* @size: transmit request size in bytes
*
* In the case where a large frag (>= 16K) needs to be split across multiple
* descriptors, we need to assume that we can have no more than 12K of data
* per descriptor due to hardware alignment restrictions (4K alignment).
*/
static inline u32 idpf_size_to_txd_count(unsigned int size)
{
return DIV_ROUND_UP(size, IDPF_TX_MAX_DESC_DATA_ALIGNED);
}
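For example, with IDPF_TX_MAX_DESC_DATA_ALIGNED = ALIGN_DOWN(16K - 1, 4K) = 12288 bytes, a 64KB fragment maps to DIV_ROUND_UP(65536, 12288) = 6 descriptors.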
/**
* idpf_tx_singleq_build_ctob - populate command tag offset and size
* @td_cmd: Command to be filled in desc
* @td_offset: Offset to be filled in desc
* @size: Size of the buffer
* @td_tag: td tag to be filled
*
* Returns the 64 bit value populated with the input parameters
*/
static inline __le64 idpf_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset,
unsigned int size, u64 td_tag)
{
return cpu_to_le64(IDPF_TX_DESC_DTYPE_DATA |
(td_cmd << IDPF_TXD_QW1_CMD_S) |
(td_offset << IDPF_TXD_QW1_OFFSET_S) |
((u64)size << IDPF_TXD_QW1_TX_BUF_SZ_S) |
(td_tag << IDPF_TXD_QW1_L2TAG1_S));
}
void idpf_tx_splitq_build_ctb(union idpf_tx_flex_desc *desc,
struct idpf_tx_splitq_params *params,
u16 td_cmd, u16 size);
void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
struct idpf_tx_splitq_params *params,
u16 td_cmd, u16 size);
/**
* idpf_tx_splitq_build_desc - determine which type of data descriptor to build
* @desc: descriptor to populate
* @params: pointer to tx params struct
* @td_cmd: command to be filled in desc
* @size: size of buffer
*/
static inline void idpf_tx_splitq_build_desc(union idpf_tx_flex_desc *desc,
struct idpf_tx_splitq_params *params,
u16 td_cmd, u16 size)
{
if (params->dtype == IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2)
idpf_tx_splitq_build_ctb(desc, params, td_cmd, size);
else
idpf_tx_splitq_build_flow_desc(desc, params, td_cmd, size);
}
/**
* idpf_alloc_page - Allocate a new RX buffer from the page pool
* @pool: page_pool to allocate from
* @buf: metadata struct to populate with page info
* @buf_size: 2K or 4K
*
* Returns &dma_addr_t to be passed to HW for Rx, %DMA_MAPPING_ERROR otherwise.
*/
static inline dma_addr_t idpf_alloc_page(struct page_pool *pool,
struct idpf_rx_buf *buf,
unsigned int buf_size)
{
if (buf_size == IDPF_RX_BUF_2048)
buf->page = page_pool_dev_alloc_frag(pool, &buf->page_offset,
buf_size);
else
buf->page = page_pool_dev_alloc_pages(pool);
if (!buf->page)
return DMA_MAPPING_ERROR;
buf->truesize = buf_size;
return page_pool_get_dma_addr(buf->page) + buf->page_offset +
pool->p.offset;
}
/**
* idpf_rx_put_page - Return RX buffer page to pool
* @rx_buf: RX buffer metadata struct
*/
static inline void idpf_rx_put_page(struct idpf_rx_buf *rx_buf)
{
page_pool_put_page(rx_buf->page->pp, rx_buf->page,
rx_buf->truesize, true);
rx_buf->page = NULL;
}
/**
* idpf_rx_sync_for_cpu - Synchronize DMA buffer
* @rx_buf: RX buffer metadata struct
* @len: frame length from descriptor
*/
static inline void idpf_rx_sync_for_cpu(struct idpf_rx_buf *rx_buf, u32 len)
{
struct page *page = rx_buf->page;
struct page_pool *pp = page->pp;
dma_sync_single_range_for_cpu(pp->p.dev,
page_pool_get_dma_addr(page),
rx_buf->page_offset + pp->p.offset, len,
page_pool_get_dma_dir(pp));
}
int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
void idpf_vport_init_num_qs(struct idpf_vport *vport,
struct virtchnl2_create_vport *vport_msg);
void idpf_vport_calc_num_q_desc(struct idpf_vport *vport);
int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index,
struct virtchnl2_create_vport *vport_msg,
struct idpf_vport_max_q *max_q);
void idpf_vport_calc_num_q_groups(struct idpf_vport *vport);
int idpf_vport_queues_alloc(struct idpf_vport *vport);
void idpf_vport_queues_rel(struct idpf_vport *vport);
void idpf_vport_intr_rel(struct idpf_vport *vport);
int idpf_vport_intr_alloc(struct idpf_vport *vport);
void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector);
void idpf_vport_intr_deinit(struct idpf_vport *vport);
int idpf_vport_intr_init(struct idpf_vport *vport);
enum pkt_hash_types idpf_ptype_to_htype(const struct idpf_rx_ptype_decoded *decoded);
int idpf_config_rss(struct idpf_vport *vport);
int idpf_init_rss(struct idpf_vport *vport);
void idpf_deinit_rss(struct idpf_vport *vport);
int idpf_rx_bufs_init_all(struct idpf_vport *vport);
void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb,
unsigned int size);
struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
struct idpf_rx_buf *rx_buf,
unsigned int size);
bool idpf_init_rx_buf_hw_alloc(struct idpf_queue *rxq, struct idpf_rx_buf *buf);
void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val);
void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val,
bool xmit_more);
unsigned int idpf_size_to_txd_count(unsigned int size);
netdev_tx_t idpf_tx_drop_skb(struct idpf_queue *tx_q, struct sk_buff *skb);
void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
struct idpf_tx_buf *first, u16 ring_idx);
unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
struct sk_buff *skb);
bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
unsigned int count);
int idpf_tx_maybe_stop_common(struct idpf_queue *tx_q, unsigned int size);
void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
struct net_device *netdev);
netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
struct net_device *netdev);
bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rxq,
u16 cleaned_count);
int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
#endif /* !_IDPF_TXRX_H_ */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2023 Intel Corporation */
#include "idpf.h"
#include "idpf_lan_vf_regs.h"
#define IDPF_VF_ITR_IDX_SPACING 0x40
/**
* idpf_vf_ctlq_reg_init - initialize default mailbox registers
* @cq: pointer to the array of create control queues
*/
static void idpf_vf_ctlq_reg_init(struct idpf_ctlq_create_info *cq)
{
int i;
for (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {
struct idpf_ctlq_create_info *ccq = cq + i;
switch (ccq->type) {
case IDPF_CTLQ_TYPE_MAILBOX_TX:
/* set head and tail registers in our local struct */
ccq->reg.head = VF_ATQH;
ccq->reg.tail = VF_ATQT;
ccq->reg.len = VF_ATQLEN;
ccq->reg.bah = VF_ATQBAH;
ccq->reg.bal = VF_ATQBAL;
ccq->reg.len_mask = VF_ATQLEN_ATQLEN_M;
ccq->reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
ccq->reg.head_mask = VF_ATQH_ATQH_M;
break;
case IDPF_CTLQ_TYPE_MAILBOX_RX:
/* set head and tail registers in our local struct */
ccq->reg.head = VF_ARQH;
ccq->reg.tail = VF_ARQT;
ccq->reg.len = VF_ARQLEN;
ccq->reg.bah = VF_ARQBAH;
ccq->reg.bal = VF_ARQBAL;
ccq->reg.len_mask = VF_ARQLEN_ARQLEN_M;
ccq->reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
ccq->reg.head_mask = VF_ARQH_ARQH_M;
break;
default:
break;
}
}
}
/**
* idpf_vf_mb_intr_reg_init - Initialize the mailbox register
* @adapter: adapter structure
*/
static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter)
{
struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
u32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);
intr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);
intr->dyn_ctl_intena_m = VF_INT_DYN_CTL0_INTENA_M;
intr->dyn_ctl_itridx_m = VF_INT_DYN_CTL0_ITR_INDX_M;
intr->icr_ena = idpf_get_reg_addr(adapter, VF_INT_ICR0_ENA1);
intr->icr_ena_ctlq_m = VF_INT_ICR0_ENA1_ADMINQ_M;
}
/**
* idpf_vf_intr_reg_init - Initialize interrupt registers
* @vport: virtual port structure
*/
static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
{
struct idpf_adapter *adapter = vport->adapter;
int num_vecs = vport->num_q_vectors;
struct idpf_vec_regs *reg_vals;
int num_regs, i, err = 0;
u32 rx_itr, tx_itr;
u16 total_vecs;
total_vecs = idpf_get_reserved_vecs(vport->adapter);
reg_vals = kcalloc(total_vecs, sizeof(struct idpf_vec_regs),
GFP_KERNEL);
if (!reg_vals)
return -ENOMEM;
num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
if (num_regs < num_vecs) {
err = -EINVAL;
goto free_reg_vals;
}
for (i = 0; i < num_vecs; i++) {
struct idpf_q_vector *q_vector = &vport->q_vectors[i];
u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
struct idpf_intr_reg *intr = &q_vector->intr_reg;
u32 spacing;
intr->dyn_ctl = idpf_get_reg_addr(adapter,
reg_vals[vec_id].dyn_ctl_reg);
intr->dyn_ctl_intena_m = VF_INT_DYN_CTLN_INTENA_M;
intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S;
spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
IDPF_VF_ITR_IDX_SPACING);
rx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_0,
reg_vals[vec_id].itrn_reg,
spacing);
tx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_1,
reg_vals[vec_id].itrn_reg,
spacing);
intr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);
intr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);
}
free_reg_vals:
kfree(reg_vals);
return err;
}
/**
* idpf_vf_reset_reg_init - Initialize reset registers
* @adapter: Driver specific private structure
*/
static void idpf_vf_reset_reg_init(struct idpf_adapter *adapter)
{
adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, VFGEN_RSTAT);
adapter->reset_reg.rstat_m = VFGEN_RSTAT_VFR_STATE_M;
}
/**
* idpf_vf_trigger_reset - trigger reset
* @adapter: Driver specific private structure
* @trig_cause: Reason to trigger a reset
*/
static void idpf_vf_trigger_reset(struct idpf_adapter *adapter,
enum idpf_flags trig_cause)
{
/* Do not send VIRTCHNL2_OP_RESET_VF message on driver unload */
if (trig_cause == IDPF_HR_FUNC_RESET &&
!test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL);
}
/**
* idpf_vf_reg_ops_init - Initialize register API function pointers
* @adapter: Driver specific private structure
*/
static void idpf_vf_reg_ops_init(struct idpf_adapter *adapter)
{
adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_vf_ctlq_reg_init;
adapter->dev_ops.reg_ops.intr_reg_init = idpf_vf_intr_reg_init;
adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_vf_mb_intr_reg_init;
adapter->dev_ops.reg_ops.reset_reg_init = idpf_vf_reset_reg_init;
adapter->dev_ops.reg_ops.trigger_reset = idpf_vf_trigger_reset;
}
/**
* idpf_vf_dev_ops_init - Initialize device API function pointers
* @adapter: Driver specific private structure
*/
void idpf_vf_dev_ops_init(struct idpf_adapter *adapter)
{
idpf_vf_reg_ops_init(adapter);
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _VIRTCHNL2_H_
#define _VIRTCHNL2_H_
/* All opcodes associated with virtchnl2 are prefixed with virtchnl2 or
* VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
* and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
*
* PF/VF uses the virtchnl2 interface defined in this header file to communicate
* with the device Control Plane (CP). The driver and the CP may run on different
* platforms with different endianness. To avoid byte order discrepancies,
* all the structures in this header follow little-endian format.
*
* This is an interface definition file where existing enums and their values
* must remain unchanged over time, so we specify explicit values for all enums.
*/
#include "virtchnl2_lan_desc.h"
/* This macro is used to generate compilation errors if a structure
* is not exactly the correct length.
*/
#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) \
static_assert((n) == sizeof(struct X))
/* A new major set of opcodes is introduced, leaving room for old misc opcodes
* to be added in the future. These opcodes may only be used if both the PF and
* VF have successfully negotiated the VIRTCHNL version as 2.0 during the
* VIRTCHNL2_OP_VERSION exchange.
*/
enum virtchnl2_op {
VIRTCHNL2_OP_UNKNOWN = 0,
VIRTCHNL2_OP_VERSION = 1,
VIRTCHNL2_OP_GET_CAPS = 500,
VIRTCHNL2_OP_CREATE_VPORT = 501,
VIRTCHNL2_OP_DESTROY_VPORT = 502,
VIRTCHNL2_OP_ENABLE_VPORT = 503,
VIRTCHNL2_OP_DISABLE_VPORT = 504,
VIRTCHNL2_OP_CONFIG_TX_QUEUES = 505,
VIRTCHNL2_OP_CONFIG_RX_QUEUES = 506,
VIRTCHNL2_OP_ENABLE_QUEUES = 507,
VIRTCHNL2_OP_DISABLE_QUEUES = 508,
VIRTCHNL2_OP_ADD_QUEUES = 509,
VIRTCHNL2_OP_DEL_QUEUES = 510,
VIRTCHNL2_OP_MAP_QUEUE_VECTOR = 511,
VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR = 512,
VIRTCHNL2_OP_GET_RSS_KEY = 513,
VIRTCHNL2_OP_SET_RSS_KEY = 514,
VIRTCHNL2_OP_GET_RSS_LUT = 515,
VIRTCHNL2_OP_SET_RSS_LUT = 516,
VIRTCHNL2_OP_GET_RSS_HASH = 517,
VIRTCHNL2_OP_SET_RSS_HASH = 518,
VIRTCHNL2_OP_SET_SRIOV_VFS = 519,
VIRTCHNL2_OP_ALLOC_VECTORS = 520,
VIRTCHNL2_OP_DEALLOC_VECTORS = 521,
VIRTCHNL2_OP_EVENT = 522,
VIRTCHNL2_OP_GET_STATS = 523,
VIRTCHNL2_OP_RESET_VF = 524,
VIRTCHNL2_OP_GET_EDT_CAPS = 525,
VIRTCHNL2_OP_GET_PTYPE_INFO = 526,
/* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
* VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
* Opcodes 529, 530, 531, 532 and 533 are reserved.
*/
VIRTCHNL2_OP_LOOPBACK = 534,
VIRTCHNL2_OP_ADD_MAC_ADDR = 535,
VIRTCHNL2_OP_DEL_MAC_ADDR = 536,
VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE = 537,
};
/**
* enum virtchnl2_vport_type - Type of virtual port.
* @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type.
*/
enum virtchnl2_vport_type {
VIRTCHNL2_VPORT_TYPE_DEFAULT = 0,
};
/**
* enum virtchnl2_queue_model - Type of queue model.
* @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model.
* @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model.
*
* In the single queue model, the same transmit descriptor queue is used by
* software to post descriptors to hardware and by hardware to post completed
* descriptors to software.
* Likewise, the same receive descriptor queue is used by hardware to post
* completions to software and by software to post buffers to hardware.
*
* In the split queue model, hardware uses transmit completion queues to post
* descriptor/buffer completions to software, while software uses transmit
* descriptor queues to post descriptors to hardware.
* Likewise, hardware posts descriptor completions to the receive descriptor
* queue, while software uses receive buffer queues to post buffers to hardware.
*/
enum virtchnl2_queue_model {
VIRTCHNL2_QUEUE_MODEL_SINGLE = 0,
VIRTCHNL2_QUEUE_MODEL_SPLIT = 1,
};
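A rough illustration of what the two models imply for queue counts follows; it is only a sketch with assumed 1:1 pairings (in practice one completion queue may serve several TX queues and one buffer queue several RX queues, as negotiated with the CP):

#include <stdio.h>

/* Hypothetical helper: total queue objects a vport would configure. In the
 * single queue model each TX/RX descriptor queue doubles as its own
 * completion/buffer queue; in the split model completion and buffer queues
 * are separate (assumed here as one complq per TX queue and one bufq per RX
 * queue purely for illustration).
 */
static int example_total_queues(int num_txq, int num_rxq, int split)
{
	if (!split)
		return num_txq + num_rxq;
	return 2 * num_txq + 2 * num_rxq;
}

int main(void)
{
	printf("singleq: %d, splitq: %d\n",
	       example_total_queues(4, 4, 0), example_total_queues(4, 4, 1));
	return 0;
}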
/* Checksum offload capability flags */
enum virtchnl2_cap_txrx_csum {
VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 = BIT(0),
VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP = BIT(1),
VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP = BIT(2),
VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP = BIT(3),
VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP = BIT(4),
VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP = BIT(5),
VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP = BIT(6),
VIRTCHNL2_CAP_TX_CSUM_GENERIC = BIT(7),
VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 = BIT(8),
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP = BIT(9),
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP = BIT(10),
VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP = BIT(11),
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP = BIT(12),
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP = BIT(13),
VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP = BIT(14),
VIRTCHNL2_CAP_RX_CSUM_GENERIC = BIT(15),
VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL = BIT(16),
VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL = BIT(17),
VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL = BIT(18),
VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL = BIT(19),
VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL = BIT(20),
VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL = BIT(21),
VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL = BIT(22),
VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL = BIT(23),
};
/* Segmentation offload capability flags */
enum virtchnl2_cap_seg {
VIRTCHNL2_CAP_SEG_IPV4_TCP = BIT(0),
VIRTCHNL2_CAP_SEG_IPV4_UDP = BIT(1),
VIRTCHNL2_CAP_SEG_IPV4_SCTP = BIT(2),
VIRTCHNL2_CAP_SEG_IPV6_TCP = BIT(3),
VIRTCHNL2_CAP_SEG_IPV6_UDP = BIT(4),
VIRTCHNL2_CAP_SEG_IPV6_SCTP = BIT(5),
VIRTCHNL2_CAP_SEG_GENERIC = BIT(6),
VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL = BIT(7),
VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL = BIT(8),
};
/* Receive Side Scaling Flow type capability flags */
enum virtchnl2_cap_rss {
VIRTCHNL2_CAP_RSS_IPV4_TCP = BIT(0),
VIRTCHNL2_CAP_RSS_IPV4_UDP = BIT(1),
VIRTCHNL2_CAP_RSS_IPV4_SCTP = BIT(2),
VIRTCHNL2_CAP_RSS_IPV4_OTHER = BIT(3),
VIRTCHNL2_CAP_RSS_IPV6_TCP = BIT(4),
VIRTCHNL2_CAP_RSS_IPV6_UDP = BIT(5),
VIRTCHNL2_CAP_RSS_IPV6_SCTP = BIT(6),
VIRTCHNL2_CAP_RSS_IPV6_OTHER = BIT(7),
VIRTCHNL2_CAP_RSS_IPV4_AH = BIT(8),
VIRTCHNL2_CAP_RSS_IPV4_ESP = BIT(9),
VIRTCHNL2_CAP_RSS_IPV4_AH_ESP = BIT(10),
VIRTCHNL2_CAP_RSS_IPV6_AH = BIT(11),
VIRTCHNL2_CAP_RSS_IPV6_ESP = BIT(12),
VIRTCHNL2_CAP_RSS_IPV6_AH_ESP = BIT(13),
};
/* Header split capability flags */
enum virtchnl2_cap_rx_hsplit_at {
/* for prepended metadata */
VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 = BIT(0),
/* all VLANs go into header buffer */
VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 = BIT(1),
VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 = BIT(2),
VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6 = BIT(3),
};
/* Receive Side Coalescing offload capability flags */
enum virtchnl2_cap_rsc {
VIRTCHNL2_CAP_RSC_IPV4_TCP = BIT(0),
VIRTCHNL2_CAP_RSC_IPV4_SCTP = BIT(1),
VIRTCHNL2_CAP_RSC_IPV6_TCP = BIT(2),
VIRTCHNL2_CAP_RSC_IPV6_SCTP = BIT(3),
};
/* Other capability flags */
enum virtchnl2_cap_other {
VIRTCHNL2_CAP_RDMA = BIT_ULL(0),
VIRTCHNL2_CAP_SRIOV = BIT_ULL(1),
VIRTCHNL2_CAP_MACFILTER = BIT_ULL(2),
VIRTCHNL2_CAP_FLOW_DIRECTOR = BIT_ULL(3),
/* Queue based scheduling using split queue model */
VIRTCHNL2_CAP_SPLITQ_QSCHED = BIT_ULL(4),
VIRTCHNL2_CAP_CRC = BIT_ULL(5),
VIRTCHNL2_CAP_ADQ = BIT_ULL(6),
VIRTCHNL2_CAP_WB_ON_ITR = BIT_ULL(7),
VIRTCHNL2_CAP_PROMISC = BIT_ULL(8),
VIRTCHNL2_CAP_LINK_SPEED = BIT_ULL(9),
VIRTCHNL2_CAP_INLINE_IPSEC = BIT_ULL(10),
VIRTCHNL2_CAP_LARGE_NUM_QUEUES = BIT_ULL(11),
VIRTCHNL2_CAP_VLAN = BIT_ULL(12),
VIRTCHNL2_CAP_PTP = BIT_ULL(13),
/* EDT: Earliest Departure Time capability used for Timing Wheel */
VIRTCHNL2_CAP_EDT = BIT_ULL(14),
VIRTCHNL2_CAP_ADV_RSS = BIT_ULL(15),
VIRTCHNL2_CAP_FDIR = BIT_ULL(16),
VIRTCHNL2_CAP_RX_FLEX_DESC = BIT_ULL(17),
VIRTCHNL2_CAP_PTYPE = BIT_ULL(18),
VIRTCHNL2_CAP_LOOPBACK = BIT_ULL(19),
/* Other capability 20 is reserved */
/* this must be the last capability */
VIRTCHNL2_CAP_OEM = BIT_ULL(63),
};
/* underlying device type */
enum virtchl2_device_type {
VIRTCHNL2_MEV_DEVICE = 0,
};
/**
* enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes.
* @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode, i.e. in-order
* completions where descriptors and buffers
* are completed at the same time.
* @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
* packet processing where descriptors are
* cleaned in order, but buffers can be
* completed out of order.
*/
enum virtchnl2_txq_sched_mode {
VIRTCHNL2_TXQ_SCHED_MODE_QUEUE = 0,
VIRTCHNL2_TXQ_SCHED_MODE_FLOW = 1,
};
/**
* enum virtchnl2_rxq_flags - Receive Queue Feature flags.
* @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag.
* @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag.
* @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
* by hardware immediately after processing
* each packet.
* @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size.
* @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size.
*/
enum virtchnl2_rxq_flags {
VIRTCHNL2_RXQ_RSC = BIT(0),
VIRTCHNL2_RXQ_HDR_SPLIT = BIT(1),
VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK = BIT(2),
VIRTCHNL2_RX_DESC_SIZE_16BYTE = BIT(3),
VIRTCHNL2_RX_DESC_SIZE_32BYTE = BIT(4),
};
/* Type of RSS algorithm */
enum virtchnl2_rss_alg {
VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0,
VIRTCHNL2_RSS_ALG_R_ASYMMETRIC = 1,
VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC = 2,
VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC = 3,
};
/* Type of event */
enum virtchnl2_event_codes {
VIRTCHNL2_EVENT_UNKNOWN = 0,
VIRTCHNL2_EVENT_LINK_CHANGE = 1,
/* Event type 2, 3 are reserved */
};
/* Transmit and Receive queue types are valid in legacy as well as split queue
* models. With Split Queue model, 2 additional types are introduced -
* TX_COMPLETION and RX_BUFFER. In split queue model, receive corresponds to
* the queue where hardware posts completions.
*/
enum virtchnl2_queue_type {
VIRTCHNL2_QUEUE_TYPE_TX = 0,
VIRTCHNL2_QUEUE_TYPE_RX = 1,
VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION = 2,
VIRTCHNL2_QUEUE_TYPE_RX_BUFFER = 3,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX = 4,
VIRTCHNL2_QUEUE_TYPE_CONFIG_RX = 5,
/* Queue types 6, 7, 8, 9 are reserved */
VIRTCHNL2_QUEUE_TYPE_MBX_TX = 10,
VIRTCHNL2_QUEUE_TYPE_MBX_RX = 11,
};
/* Interrupt throttling rate index */
enum virtchnl2_itr_idx {
VIRTCHNL2_ITR_IDX_0 = 0,
VIRTCHNL2_ITR_IDX_1 = 1,
};
/**
* enum virtchnl2_mac_addr_type - MAC address types.
* @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
* primary/device unicast MAC address filter for
* VIRTCHNL2_OP_ADD_MAC_ADDR and
* VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
* underlying control plane function to accurately
* track the MAC address and for VM/function reset.
*
* @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
* unicast and/or multicast filters that are being
* added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
* VIRTCHNL2_OP_DEL_MAC_ADDR.
*/
enum virtchnl2_mac_addr_type {
VIRTCHNL2_MAC_ADDR_PRIMARY = 1,
VIRTCHNL2_MAC_ADDR_EXTRA = 2,
};
/* Flags used for promiscuous mode */
enum virtchnl2_promisc_flags {
VIRTCHNL2_UNICAST_PROMISC = BIT(0),
VIRTCHNL2_MULTICAST_PROMISC = BIT(1),
};
/* Protocol header type within a packet segment. A segment consists of one or
* more protocol headers that make up a logical group. Each logical group of
* protocol headers encapsulates, or is encapsulated by, tunneling or
* encapsulation protocols for network virtualization.
*/
enum virtchnl2_proto_hdr_type {
/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_ANY = 0,
VIRTCHNL2_PROTO_HDR_PRE_MAC = 1,
/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_MAC = 2,
VIRTCHNL2_PROTO_HDR_POST_MAC = 3,
VIRTCHNL2_PROTO_HDR_ETHERTYPE = 4,
VIRTCHNL2_PROTO_HDR_VLAN = 5,
VIRTCHNL2_PROTO_HDR_SVLAN = 6,
VIRTCHNL2_PROTO_HDR_CVLAN = 7,
VIRTCHNL2_PROTO_HDR_MPLS = 8,
VIRTCHNL2_PROTO_HDR_UMPLS = 9,
VIRTCHNL2_PROTO_HDR_MMPLS = 10,
VIRTCHNL2_PROTO_HDR_PTP = 11,
VIRTCHNL2_PROTO_HDR_CTRL = 12,
VIRTCHNL2_PROTO_HDR_LLDP = 13,
VIRTCHNL2_PROTO_HDR_ARP = 14,
VIRTCHNL2_PROTO_HDR_ECP = 15,
VIRTCHNL2_PROTO_HDR_EAPOL = 16,
VIRTCHNL2_PROTO_HDR_PPPOD = 17,
VIRTCHNL2_PROTO_HDR_PPPOE = 18,
/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_IPV4 = 19,
/* IPv4 and IPv6 Fragment header types are only associated to
* VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
* cannot be used independently.
*/
/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_IPV4_FRAG = 20,
/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_IPV6 = 21,
/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_IPV6_FRAG = 22,
VIRTCHNL2_PROTO_HDR_IPV6_EH = 23,
/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_UDP = 24,
/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_TCP = 25,
/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_SCTP = 26,
/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_ICMP = 27,
/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_ICMPV6 = 28,
VIRTCHNL2_PROTO_HDR_IGMP = 29,
VIRTCHNL2_PROTO_HDR_AH = 30,
VIRTCHNL2_PROTO_HDR_ESP = 31,
VIRTCHNL2_PROTO_HDR_IKE = 32,
VIRTCHNL2_PROTO_HDR_NATT_KEEP = 33,
/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_PAY = 34,
VIRTCHNL2_PROTO_HDR_L2TPV2 = 35,
VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL = 36,
VIRTCHNL2_PROTO_HDR_L2TPV3 = 37,
VIRTCHNL2_PROTO_HDR_GTP = 38,
VIRTCHNL2_PROTO_HDR_GTP_EH = 39,
VIRTCHNL2_PROTO_HDR_GTPCV2 = 40,
VIRTCHNL2_PROTO_HDR_GTPC_TEID = 41,
VIRTCHNL2_PROTO_HDR_GTPU = 42,
VIRTCHNL2_PROTO_HDR_GTPU_UL = 43,
VIRTCHNL2_PROTO_HDR_GTPU_DL = 44,
VIRTCHNL2_PROTO_HDR_ECPRI = 45,
VIRTCHNL2_PROTO_HDR_VRRP = 46,
VIRTCHNL2_PROTO_HDR_OSPF = 47,
/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
VIRTCHNL2_PROTO_HDR_TUN = 48,
VIRTCHNL2_PROTO_HDR_GRE = 49,
VIRTCHNL2_PROTO_HDR_NVGRE = 50,
VIRTCHNL2_PROTO_HDR_VXLAN = 51,
VIRTCHNL2_PROTO_HDR_VXLAN_GPE = 52,
VIRTCHNL2_PROTO_HDR_GENEVE = 53,
VIRTCHNL2_PROTO_HDR_NSH = 54,
VIRTCHNL2_PROTO_HDR_QUIC = 55,
VIRTCHNL2_PROTO_HDR_PFCP = 56,
VIRTCHNL2_PROTO_HDR_PFCP_NODE = 57,
VIRTCHNL2_PROTO_HDR_PFCP_SESSION = 58,
VIRTCHNL2_PROTO_HDR_RTP = 59,
VIRTCHNL2_PROTO_HDR_ROCE = 60,
VIRTCHNL2_PROTO_HDR_ROCEV1 = 61,
VIRTCHNL2_PROTO_HDR_ROCEV2 = 62,
/* Protocol ids up to 32767 are reserved.
* 32768 - 65534 are used for user defined protocol ids.
* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id.
*/
VIRTCHNL2_PROTO_HDR_NO_PROTO = 65535,
};
enum virtchnl2_version {
VIRTCHNL2_VERSION_MINOR_0 = 0,
VIRTCHNL2_VERSION_MAJOR_2 = 2,
};
/**
* struct virtchnl2_edt_caps - Get EDT granularity and time horizon.
* @tstamp_granularity_ns: Timestamp granularity in nanoseconds.
* @time_horizon_ns: Total time window in nanoseconds.
*
* Associated with VIRTCHNL2_OP_GET_EDT_CAPS.
*/
struct virtchnl2_edt_caps {
__le64 tstamp_granularity_ns;
__le64 time_horizon_ns;
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_edt_caps);
/**
* struct virtchnl2_version_info - Version information.
* @major: Major version.
* @minor: Minor version.
*
* PF/VF posts its version number to the CP. CP responds with its version number
* in the same format, along with a return code.
* If there is a major version mismatch, then the PF/VF cannot operate.
* If there is a minor version mismatch, then the PF/VF can operate but should
* add a warning to the system log.
*
* This version opcode MUST always be specified as == 1, regardless of other
* changes in the API. The CP must always respond to this message without
* error regardless of version mismatch.
*
* Associated with VIRTCHNL2_OP_VERSION.
*/
struct virtchnl2_version_info {
__le32 major;
__le32 minor;
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
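/* Usage sketch (illustrative only, not part of this header): filling the
* version request described above and checking the CP's reply. Function names
* are hypothetical and the mailbox transfer itself is omitted; assumes the
* usual kernel headers (<linux/errno.h>).
*/
static void example_fill_version_req(struct virtchnl2_version_info *req)
{
	req->major = cpu_to_le32(VIRTCHNL2_VERSION_MAJOR_2);
	req->minor = cpu_to_le32(VIRTCHNL2_VERSION_MINOR_0);
}

static int example_check_version_reply(const struct virtchnl2_version_info *reply)
{
	/* A major version mismatch means the PF/VF cannot operate. */
	if (le32_to_cpu(reply->major) != VIRTCHNL2_VERSION_MAJOR_2)
		return -EOPNOTSUPP;

	/* A minor mismatch is allowed; the caller should log a warning. */
	return 0;
}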
/**
* struct virtchnl2_get_capabilities - Capabilities info.
* @csum_caps: See enum virtchnl2_cap_txrx_csum.
* @seg_caps: See enum virtchnl2_cap_seg.
* @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at.
* @rsc_caps: See enum virtchnl2_cap_rsc.
* @rss_caps: See enum virtchnl2_cap_rss.
* @other_caps: See enum virtchnl2_cap_other.
* @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
* provided by CP.
* @mailbox_vector_id: Mailbox vector id.
* @num_allocated_vectors: Maximum number of allocated vectors for the device.
* @max_rx_q: Maximum number of supported Rx queues.
* @max_tx_q: Maximum number of supported Tx queues.
* @max_rx_bufq: Maximum number of supported buffer queues.
* @max_tx_complq: Maximum number of supported completion queues.
* @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
* responds with the maximum VFs granted.
* @max_vports: Maximum number of vports that can be supported.
* @default_num_vports: Default number of vports driver should allocate on load.
* @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes.
* @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
* sent per transmit packet without needing to be
* linearized.
* @pad: Padding.
* @reserved: Reserved.
* @device_type: See enum virtchnl2_device_type.
* @min_sso_packet_len: Min packet length supported by device for single
* segment offload.
* @max_hdr_buf_per_lso: Max number of header buffers that can be used for
* an LSO.
* @pad1: Padding for future extensions.
*
* Dataplane driver sends this message to CP to negotiate capabilities and
* provides a virtchnl2_get_capabilities structure with its desired
* capabilities, max_sriov_vfs and num_allocated_vectors.
* CP responds with a virtchnl2_get_capabilities structure updated
* with allowed capabilities and the other fields as below.
* If PF sets max_sriov_vfs as 0, CP will respond with max number of VFs
* that can be created by this PF. For any other value 'n', CP responds
* with max_sriov_vfs set to min(n, x) where x is the max number of VFs
* allowed by CP's policy. max_sriov_vfs is not applicable for VFs.
* If dataplane driver sets num_allocated_vectors as 0, CP will respond with 1
* which is default vector associated with the default mailbox. For any other
* value 'n', CP responds with a value <= n based on the CP's policy of
* max number of vectors for a PF.
* CP will respond with the vector ID of mailbox allocated to the PF in
* mailbox_vector_id and the number of itr index registers in itr_idx_map.
* It also responds with the default number of vports that the dataplane driver
* should come up with in default_num_vports and the maximum number of vports
* that can be supported in max_vports.
*
* Associated with VIRTCHNL2_OP_GET_CAPS.
*/
struct virtchnl2_get_capabilities {
__le32 csum_caps;
__le32 seg_caps;
__le32 hsplit_caps;
__le32 rsc_caps;
__le64 rss_caps;
__le64 other_caps;
__le32 mailbox_dyn_ctl;
__le16 mailbox_vector_id;
__le16 num_allocated_vectors;
__le16 max_rx_q;
__le16 max_tx_q;
__le16 max_rx_bufq;
__le16 max_tx_complq;
__le16 max_sriov_vfs;
__le16 max_vports;
__le16 default_num_vports;
__le16 max_tx_hdr_size;
u8 max_sg_bufs_per_tx_pkt;
u8 pad[3];
u8 reserved[4];
__le32 device_type;
u8 min_sso_packet_len;
u8 max_hdr_buf_per_lso;
u8 pad1[10];
};
VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
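/* Usage sketch (illustrative only, not part of this header): preparing a
* GET_CAPS request as described above. Passing 0 in max_sriov_vfs and
* num_allocated_vectors lets the CP choose the values per its policy. The
* function name is hypothetical.
*/
static void example_fill_caps_request(struct virtchnl2_get_capabilities *caps)
{
	memset(caps, 0, sizeof(*caps));

	/* 0 asks the CP to choose: it replies with the maximum VFs its policy
	 * allows and with the single default mailbox vector.
	 */
	caps->max_sriov_vfs = cpu_to_le16(0);
	caps->num_allocated_vectors = cpu_to_le16(0);

	/* The csum/seg/hsplit/rsc/rss/other capability bitmaps would be
	 * filled here from the capability enums defined earlier in this
	 * header; the CP clears any bits it does not grant in its response.
	 */
}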
/**
* struct virtchnl2_queue_reg_chunk - Single queue chunk.
* @type: See enum virtchnl2_queue_type.
* @start_queue_id: Start Queue ID.
* @num_queues: Number of queues in the chunk.
* @pad: Padding.
* @qtail_reg_start: Queue tail register offset.
* @qtail_reg_spacing: Queue tail register spacing.
* @pad1: Padding for future extensions.
*/
struct virtchnl2_queue_reg_chunk {
__le32 type;
__le32 start_queue_id;
__le32 num_queues;
__le32 pad;
__le64 qtail_reg_start;
__le32 qtail_reg_spacing;
u8 pad1[4];
};
VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
/**
* struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
* queues.
* @num_chunks: Number of chunks.
* @pad: Padding.
* @chunks: Chunks of queue info.
*/
struct virtchnl2_queue_reg_chunks {
__le16 num_chunks;
u8 pad[6];
struct virtchnl2_queue_reg_chunk chunks[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_reg_chunks);
/**
* struct virtchnl2_create_vport - Create vport config info.
* @vport_type: See enum virtchnl2_vport_type.
* @txq_model: See enum virtchnl2_queue_model.
* @rxq_model: See enum virtchnl2_queue_model.
* @num_tx_q: Number of Tx queues.
* @num_tx_complq: Valid only if txq_model is split queue.
* @num_rx_q: Number of Rx queues.
* @num_rx_bufq: Valid only if rxq_model is split queue.
* @default_rx_q: Relative receive queue index to be used as default.
* @vport_index: Used to align the PF and CP in case of default multiple vports.
* It is filled by the PF and the CP returns the same value, which
* enables the driver to issue multiple asynchronous parallel
* CREATE_VPORT requests and associate each response with a
* specific request.
* @max_mtu: Max MTU. CP populates this field on response.
* @vport_id: Vport id. CP populates this field on response.
* @default_mac_addr: Default MAC address.
* @pad: Padding.
* @rx_desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions.
* @tx_desc_ids: See VIRTCHNL2_TX_DESC_IDS definitions.
* @pad1: Padding.
* @rss_algorithm: RSS algorithm.
* @rss_key_size: RSS key size.
* @rss_lut_size: RSS LUT size.
* @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at.
* @pad2: Padding.
* @chunks: Chunks of contiguous queues.
*
* PF sends this message to CP to create a vport by filling in required
* fields of virtchnl2_create_vport structure.
* CP responds with the updated virtchnl2_create_vport structure containing the
* necessary fields, followed by chunks which in turn will have an array of
* num_chunks entries of virtchnl2_queue_reg_chunk structures.
*
* Associated with VIRTCHNL2_OP_CREATE_VPORT.
*/
struct virtchnl2_create_vport {
__le16 vport_type;
__le16 txq_model;
__le16 rxq_model;
__le16 num_tx_q;
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
__le16 default_rx_q;
__le16 vport_index;
/* CP populates the following fields on response */
__le16 max_mtu;
__le32 vport_id;
u8 default_mac_addr[ETH_ALEN];
__le16 pad;
__le64 rx_desc_ids;
__le64 tx_desc_ids;
u8 pad1[72];
__le32 rss_algorithm;
__le16 rss_key_size;
__le16 rss_lut_size;
__le32 rx_split_pos;
u8 pad2[20];
struct virtchnl2_queue_reg_chunks chunks;
};
VIRTCHNL2_CHECK_STRUCT_LEN(160, virtchnl2_create_vport);
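/* Usage sketch (illustrative only, not part of this header): walking the
* queue register chunks returned in a CREATE_VPORT response. Each chunk
* describes a contiguous range of queue ids of one type plus its tail
* register layout; a real driver would store these per queue instead of
* printing them. Assumes <linux/printk.h>; the function name is hypothetical.
*/
static void example_dump_vport_chunks(const struct virtchnl2_create_vport *vport)
{
	u16 num = le16_to_cpu(vport->chunks.num_chunks);
	u16 i;

	for (i = 0; i < num; i++) {
		const struct virtchnl2_queue_reg_chunk *c = &vport->chunks.chunks[i];
		u32 start = le32_to_cpu(c->start_queue_id);

		pr_info("vport %u: qtype %u, qid %u..%u, tail 0x%llx (+0x%x)\n",
			le32_to_cpu(vport->vport_id), le32_to_cpu(c->type),
			start, start + le32_to_cpu(c->num_queues) - 1,
			le64_to_cpu(c->qtail_reg_start),
			le32_to_cpu(c->qtail_reg_spacing));
	}
}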
/**
* struct virtchnl2_vport - Vport ID info.
* @vport_id: Vport id.
* @pad: Padding for future extensions.
*
* PF sends this message to CP to destroy, enable or disable a vport by filling
* in the vport_id in virtchnl2_vport structure.
* CP responds with the status of the requested operation.
*
* Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
* VIRTCHNL2_OP_DISABLE_VPORT.
*/
struct virtchnl2_vport {
__le32 vport_id;
u8 pad[4];
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
/**
* struct virtchnl2_txq_info - Transmit queue config info
* @dma_ring_addr: DMA address.
* @type: See enum virtchnl2_queue_type.
* @queue_id: Queue ID.
* @relative_queue_id: Valid only if queue model is split and type is transmit
* queue. Used in many to one mapping of transmit queues to
* completion queue.
* @model: See enum virtchnl2_queue_model.
* @sched_mode: See enum virtchnl2_txq_sched_mode.
* @qflags: TX queue feature flags.
* @ring_len: Ring length.
* @tx_compl_queue_id: Valid only if queue model is split and type is transmit
* queue.
* @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX.
* @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
* messages for the respective CONFIG_TX queue.
* @pad: Padding.
* @egress_pasid: Egress PASID info.
* @egress_hdr_pasid: Egress header PASID.
* @egress_buf_pasid: Egress buffer PASID.
* @pad1: Padding for future extensions.
*/
struct virtchnl2_txq_info {
__le64 dma_ring_addr;
__le32 type;
__le32 queue_id;
__le16 relative_queue_id;
__le16 model;
__le16 sched_mode;
__le16 qflags;
__le16 ring_len;
__le16 tx_compl_queue_id;
__le16 peer_type;
__le16 peer_rx_queue_id;
u8 pad[4];
__le32 egress_pasid;
__le32 egress_hdr_pasid;
__le32 egress_buf_pasid;
u8 pad1[8];
};
VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
/**
* struct virtchnl2_config_tx_queues - TX queue config.
* @vport_id: Vport id.
* @num_qinfo: Number of virtchnl2_txq_info structs.
* @pad: Padding.
* @qinfo: Tx queues config info.
*
* PF sends this message to set up parameters for one or more transmit queues.
* This message contains an array of num_qinfo instances of virtchnl2_txq_info
* structures. CP configures requested queues and returns a status code. If
* num_qinfo specified is greater than the number of queues associated with the
* vport, an error is returned and no queues are configured.
*
* Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
*/
struct virtchnl2_config_tx_queues {
__le32 vport_id;
__le16 num_qinfo;
u8 pad[10];
struct virtchnl2_txq_info qinfo[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_config_tx_queues);
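/* Usage sketch (illustrative only, not part of this header): sizing and
* filling a CONFIG_TX_QUEUES message for 'num' Tx queues. struct_size()
* accounts for the trailing flexible qinfo[] array; ring addresses and ids
* come from caller-owned queue state, represented here by plain arrays.
* Assumes <linux/slab.h> and <linux/overflow.h>; the function name is
* hypothetical.
*/
static struct virtchnl2_config_tx_queues *
example_build_config_txq(u32 vport_id, u16 num, const u32 *qids,
			 const u64 *rings, u16 ring_len)
{
	struct virtchnl2_config_tx_queues *ctq;
	u16 i;

	ctq = kzalloc(struct_size(ctq, qinfo, num), GFP_KERNEL);
	if (!ctq)
		return NULL;

	ctq->vport_id = cpu_to_le32(vport_id);
	ctq->num_qinfo = cpu_to_le16(num);

	for (i = 0; i < num; i++) {
		ctq->qinfo[i].queue_id = cpu_to_le32(qids[i]);
		ctq->qinfo[i].dma_ring_addr = cpu_to_le64(rings[i]);
		ctq->qinfo[i].ring_len = cpu_to_le16(ring_len);
		/* type, model and sched_mode would be filled from the
		 * queue-type, queue-model and Tx scheduling enums defined
		 * earlier in this header.
		 */
	}

	return ctq;
}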
/**
* struct virtchnl2_rxq_info - Receive queue config info.
* @desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions.
* @dma_ring_addr: DMA address.
* @type: See enum virtchnl2_queue_type.
* @queue_id: Queue id.
* @model: See enum virtchnl2_queue_model.
* @hdr_buffer_size: Header buffer size.
* @data_buffer_size: Data buffer size.
* @max_pkt_size: Max packet size.
* @ring_len: Ring length.
* @buffer_notif_stride: Buffer notification stride in units of 32-descriptors.
* This field must be a power of 2.
* @pad: Padding.
* @dma_head_wb_addr: Applicable only for receive buffer queues.
* @qflags: Applicable only for receive completion queues.
* See enum virtchnl2_rxq_flags.
* @rx_buffer_low_watermark: Rx buffer low watermark.
* @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
* the Rx queue. Valid only in split queue model.
* @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
* the Rx queue. Valid only in split queue model.
* @bufq2_ena: Indicates whether there is a second buffer queue; rx_bufq2_id is
* valid only if this field is set.
* @pad1: Padding.
* @ingress_pasid: Ingress PASID.
* @ingress_hdr_pasid: Ingress PASID header.
* @ingress_buf_pasid: Ingress PASID buffer.
* @pad2: Padding for future extensions.
*/
struct virtchnl2_rxq_info {
__le64 desc_ids;
__le64 dma_ring_addr;
__le32 type;
__le32 queue_id;
__le16 model;
__le16 hdr_buffer_size;
__le32 data_buffer_size;
__le32 max_pkt_size;
__le16 ring_len;
u8 buffer_notif_stride;
u8 pad;
__le64 dma_head_wb_addr;
__le16 qflags;
__le16 rx_buffer_low_watermark;
__le16 rx_bufq1_id;
__le16 rx_bufq2_id;
u8 bufq2_ena;
u8 pad1[3];
__le32 ingress_pasid;
__le32 ingress_hdr_pasid;
__le32 ingress_buf_pasid;
u8 pad2[16];
};
VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
/**
* struct virtchnl2_config_rx_queues - Rx queues config.
* @vport_id: Vport id.
* @num_qinfo: Number of instances.
* @pad: Padding.
* @qinfo: Rx queues config info.
*
* PF sends this message to set up parameters for one or more receive queues.
* This message contains an array of num_qinfo instances of virtchnl2_rxq_info
* structures. CP configures requested queues and returns a status code.
* If the number of queues specified is greater than the number of queues
* associated with the vport, an error is returned and no queues are configured.
*
* Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
*/
struct virtchnl2_config_rx_queues {
__le32 vport_id;
__le16 num_qinfo;
u8 pad[18];
struct virtchnl2_rxq_info qinfo[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_config_rx_queues);
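/* Usage sketch (illustrative only, not part of this header): filling one
* split queue model Rx queue entry so that the Rx queue points at its two
* buffer queues via rx_bufq1_id/rx_bufq2_id/bufq2_ena as described above.
* Buffer sizes are arbitrary example values; desc_ids would be set from the
* Rx descriptor ID bits defined in virtchnl2_lan_desc.h. The function name is
* hypothetical.
*/
static void example_fill_splitq_rxq(struct virtchnl2_rxq_info *qi, u32 qid,
				    u64 ring, u16 ring_len, u16 bufq1_id,
				    u16 bufq2_id)
{
	memset(qi, 0, sizeof(*qi));

	qi->queue_id = cpu_to_le32(qid);
	qi->dma_ring_addr = cpu_to_le64(ring);
	qi->ring_len = cpu_to_le16(ring_len);
	qi->data_buffer_size = cpu_to_le32(2048);
	qi->max_pkt_size = cpu_to_le32(1522);

	/* Split queue model: one Rx queue fed by two buffer queues. */
	qi->rx_bufq1_id = cpu_to_le16(bufq1_id);
	qi->rx_bufq2_id = cpu_to_le16(bufq2_id);
	qi->bufq2_ena = 1;
}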
/**
* struct virtchnl2_add_queues - data for VIRTCHNL2_OP_ADD_QUEUES.
* @vport_id: Vport id.
* @num_tx_q: Number of Tx queues.
* @num_tx_complq: Number of Tx completion queues.
* @num_rx_q: Number of Rx queues.
* @num_rx_bufq: Number of Rx buffer queues.
* @pad: Padding.
* @chunks: Chunks of contiguous queues.
*
* PF sends this message to request additional transmit/receive queues beyond
* the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
* structure is used to specify the number of each type of queues.
* CP responds with the same structure with the actual number of queues assigned
* followed by num_chunks of virtchnl2_queue_reg_chunk structures.
*
* Associated with VIRTCHNL2_OP_ADD_QUEUES.
*/
struct virtchnl2_add_queues {
__le32 vport_id;
__le16 num_tx_q;
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
u8 pad[4];
struct virtchnl2_queue_reg_chunks chunks;
};
VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_add_queues);
/**
* struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
* interrupt vectors.
* @start_vector_id: Start vector id.
* @start_evv_id: Start EVV id.
* @num_vectors: Number of vectors.
* @pad: Padding.
* @dynctl_reg_start: DYN_CTL register offset.
* @dynctl_reg_spacing: Register spacing between DYN_CTL registers of 2
* consecutive vectors.
* @itrn_reg_start: ITRN register offset.
* @itrn_reg_spacing: Register spacing between ITRN registers of 2
* consecutive vectors.
* @itrn_index_spacing: Register spacing between itrn registers of the same
* vector where n=0..2.
* @pad1: Padding for future extensions.
*
* Register offsets and spacing provided by CP.
* Dynamic control registers are used for enabling/disabling/re-enabling
* interrupts and updating interrupt rates in the hotpath. Any changes
* to interrupt rates in the dynamic control registers will be reflected
* in the interrupt throttling rate registers.
* itrn registers are used to update interrupt rates for specific
* interrupt indices without modifying the state of the interrupt.
*/
struct virtchnl2_vector_chunk {
__le16 start_vector_id;
__le16 start_evv_id;
__le16 num_vectors;
__le16 pad;
__le32 dynctl_reg_start;
__le32 dynctl_reg_spacing;
__le32 itrn_reg_start;
__le32 itrn_reg_spacing;
__le32 itrn_index_spacing;
u8 pad1[4];
};
VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
/**
* struct virtchnl2_vector_chunks - chunks of contiguous interrupt vectors.
* @num_vchunks: Number of vector chunks.
* @pad: Padding.
* @vchunks: Chunks of contiguous vector info.
*
* PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
* away. CP performs requested action and returns status.
*
* Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
*/
struct virtchnl2_vector_chunks {
__le16 num_vchunks;
u8 pad[14];
struct virtchnl2_vector_chunk vchunks[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_vector_chunks);
/**
* struct virtchnl2_alloc_vectors - vector allocation info.
* @num_vectors: Number of vectors.
* @pad: Padding.
* @vchunks: Chunks of contiguous vector info.
*
* PF sends this message to request additional interrupt vectors beyond the
* ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
* structure is used to specify the number of vectors requested. CP responds
* with the same structure with the actual number of vectors assigned followed
* by virtchnl2_vector_chunks structure identifying the vector ids.
*
* Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
*/
struct virtchnl2_alloc_vectors {
__le16 num_vectors;
u8 pad[14];
struct virtchnl2_vector_chunks vchunks;
};
VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_alloc_vectors);
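/* Usage sketch (illustrative only, not part of this header): counting the
* vectors granted in an ALLOC_VECTORS response. Each chunk also carries the
* DYN_CTL/ITRN register layout described in struct virtchnl2_vector_chunk,
* which a real driver would record per vector. The function name is
* hypothetical.
*/
static u16 example_count_granted_vectors(const struct virtchnl2_alloc_vectors *av)
{
	u16 num_vchunks = le16_to_cpu(av->vchunks.num_vchunks);
	u16 total = 0;
	u16 i;

	for (i = 0; i < num_vchunks; i++)
		total += le16_to_cpu(av->vchunks.vchunks[i].num_vectors);

	return total;
}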
/**
* struct virtchnl2_rss_lut - RSS LUT info.
* @vport_id: Vport id.
* @lut_entries_start: Start of LUT entries.
* @lut_entries: Number of LUT entries.
* @pad: Padding.
* @lut: RSS lookup table.
*
* PF sends this message to get or set RSS lookup table. Only supported if
* both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
* negotiation.
*
* Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
*/
struct virtchnl2_rss_lut {
__le32 vport_id;
__le16 lut_entries_start;
__le16 lut_entries;
u8 pad[4];
__le32 lut[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(12, virtchnl2_rss_lut);
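/* Usage sketch (illustrative only, not part of this header): building a
* SET_RSS_LUT message that spreads 'lut_entries' LUT slots round-robin across
* 'num_rxq' receive queues. The trailing lut[] flexible array is sized with
* struct_size(). Assumes <linux/slab.h> and <linux/overflow.h>; the function
* name is hypothetical.
*/
static struct virtchnl2_rss_lut *
example_build_rss_lut(u32 vport_id, u16 lut_entries, u32 num_rxq)
{
	struct virtchnl2_rss_lut *rl;
	u16 i;

	rl = kzalloc(struct_size(rl, lut, lut_entries), GFP_KERNEL);
	if (!rl)
		return NULL;

	rl->vport_id = cpu_to_le32(vport_id);
	rl->lut_entries_start = cpu_to_le16(0);
	rl->lut_entries = cpu_to_le16(lut_entries);

	for (i = 0; i < lut_entries; i++)
		rl->lut[i] = cpu_to_le32(i % num_rxq);

	return rl;
}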
/**
* struct virtchnl2_rss_hash - RSS hash info.
* @ptype_groups: Packet type groups bitmap.
* @vport_id: Vport id.
* @pad: Padding for future extensions.
*
* PF sends these messages to get and set the hash filter enable bits for RSS.
* By default, the CP sets these to all possible traffic types that the
* hardware supports. The PF can query this value if it wants to change the
* traffic types that are hashed by the hardware.
* Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
* during configuration negotiation.
*
* Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH.
*/
struct virtchnl2_rss_hash {
__le64 ptype_groups;
__le32 vport_id;
u8 pad[4];
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
/**
* struct virtchnl2_sriov_vfs_info - VFs info.
* @num_vfs: Number of VFs.
* @pad: Padding for future extensions.
*
* This message is used to set the number of SRIOV VFs to be created. The actual
* allocation of resources for the VFs in terms of vports, queues and interrupts
* is done by the CP. When this call completes, the IDPF driver calls
* pci_enable_sriov to let the OS instantiate the SRIOV PCIe devices.
* Setting the number of VFs to 0 destroys all the VFs of this function.
*
* Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
*/
struct virtchnl2_sriov_vfs_info {
__le16 num_vfs;
__le16 pad;
};
VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
/**
* struct virtchnl2_ptype - Packet type info.
* @ptype_id_10: 10-bit packet type.
* @ptype_id_8: 8-bit packet type.
* @proto_id_count: Number of protocol ids the packet supports, maximum of 32
* protocol ids are supported.
* @pad: Padding.
* @proto_id: proto_id_count decides the allocation of protocol id array.
* See enum virtchnl2_proto_hdr_type.
*
* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
* ptype_id_8 for the flex and base descriptor respectively. If the ptype_id_10
* value is set to 0xFFFF, the PF should consider this ptype a dummy one; it is
* the last ptype.
*/
struct virtchnl2_ptype {
__le16 ptype_id_10;
u8 ptype_id_8;
u8 proto_id_count;
__le16 pad;
__le16 proto_id[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(6, virtchnl2_ptype);
/**
* struct virtchnl2_get_ptype_info - Packet type info.
* @start_ptype_id: Starting ptype ID.
* @num_ptypes: Number of packet types from start_ptype_id.
* @pad: Padding for future extensions.
*
* The total number of supported packet types is based on the descriptor type.
* For the flex descriptor, it is 1024 (10-bit ptype), and for the base
* descriptor, it is 256 (8-bit ptype). Send this message to the CP by
* populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
* 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
* are added at the end of the 'virtchnl2_get_ptype_info' message (Note: there
* is no dedicated field for the ptypes; they are simply appended at the end of
* the ptype info message and the PF/VF is expected to extract them accordingly.
* This is done because the compiler doesn't allow nested flexible array
* fields).
*
* If all the ptypes don't fit into one mailbox buffer, CP splits the
* ptype info into multiple messages, where each message will have its own
* 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
* updating all the ptype information extracted from the package (the number of
* ptypes extracted might be less than what PF/VF expects), it will append a
* dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
* to the ptype array.
*
* PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
*
* Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
*/
struct virtchnl2_get_ptype_info {
__le16 start_ptype_id;
__le16 num_ptypes;
__le32 pad;
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_get_ptype_info);
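/* Usage sketch (illustrative only, not part of this header): walking the
* variable-length ptype records appended after struct virtchnl2_get_ptype_info
* in a GET_PTYPE_INFO response, stopping at the dummy terminator whose
* ptype_id_10 is 0xFFFF as described above. 'msg_len' is the mailbox payload
* length. A full implementation would also bound the proto_id[] array of each
* record; the function name is hypothetical.
*/
static int example_parse_ptype_info(const void *msg, size_t msg_len)
{
	const struct virtchnl2_get_ptype_info *info = msg;
	u16 num = le16_to_cpu(info->num_ptypes);
	size_t off = sizeof(*info);
	u16 i;

	for (i = 0; i < num; i++) {
		const struct virtchnl2_ptype *pt;

		if (off + sizeof(*pt) > msg_len)
			return -EINVAL;

		pt = msg + off;
		if (le16_to_cpu(pt->ptype_id_10) == 0xFFFF)
			return 0;	/* dummy ptype: all ptypes received */

		/* ...record ptype_id_10/ptype_id_8 and the proto_id[] list... */

		off += sizeof(*pt) + pt->proto_id_count * sizeof(pt->proto_id[0]);
	}

	return 0;
}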
/**
* struct virtchnl2_vport_stats - Vport statistics.
* @vport_id: Vport id.
* @pad: Padding.
* @rx_bytes: Received bytes.
* @rx_unicast: Received unicast packets.
* @rx_multicast: Received multicast packets.
* @rx_broadcast: Received broadcast packets.
* @rx_discards: Discarded packets on receive.
* @rx_errors: Receive errors.
* @rx_unknown_protocol: Unknown protocol.
* @tx_bytes: Transmitted bytes.
* @tx_unicast: Transmitted unicast packets.
* @tx_multicast: Transmitted multicast packets.
* @tx_broadcast: Transmitted broadcast packets.
* @tx_discards: Discarded packets on transmit.
* @tx_errors: Transmit errors.
* @rx_invalid_frame_length: Packets with invalid frame length.
* @rx_overflow_drop: Packets dropped on buffer overflow.
*
* PF/VF sends this message to the CP to get the updated stats by specifying the
* vport_id. CP responds with stats in struct virtchnl2_vport_stats.
*
* Associated with VIRTCHNL2_OP_GET_STATS.
*/
struct virtchnl2_vport_stats {
__le32 vport_id;
u8 pad[4];
__le64 rx_bytes;
__le64 rx_unicast;
__le64 rx_multicast;
__le64 rx_broadcast;
__le64 rx_discards;
__le64 rx_errors;
__le64 rx_unknown_protocol;
__le64 tx_bytes;
__le64 tx_unicast;
__le64 tx_multicast;
__le64 tx_broadcast;
__le64 tx_discards;
__le64 tx_errors;
__le64 rx_invalid_frame_length;
__le64 rx_overflow_drop;
};
VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
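/* Usage sketch (illustrative only, not part of this header): folding a
* GET_STATS response into the standard netdev counters. Unicast, multicast
* and broadcast counts are summed into the packet totals; all fields are
* little-endian on the wire. Assumes <linux/if_link.h> for struct
* rtnl_link_stats64; the function name is hypothetical.
*/
static void example_vport_stats_to_netdev(const struct virtchnl2_vport_stats *vs,
					  struct rtnl_link_stats64 *ns)
{
	ns->rx_bytes = le64_to_cpu(vs->rx_bytes);
	ns->rx_packets = le64_to_cpu(vs->rx_unicast) +
			 le64_to_cpu(vs->rx_multicast) +
			 le64_to_cpu(vs->rx_broadcast);
	ns->rx_dropped = le64_to_cpu(vs->rx_discards);
	ns->rx_errors = le64_to_cpu(vs->rx_errors);

	ns->tx_bytes = le64_to_cpu(vs->tx_bytes);
	ns->tx_packets = le64_to_cpu(vs->tx_unicast) +
			 le64_to_cpu(vs->tx_multicast) +
			 le64_to_cpu(vs->tx_broadcast);
	ns->tx_dropped = le64_to_cpu(vs->tx_discards);
	ns->tx_errors = le64_to_cpu(vs->tx_errors);
}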
/**
* struct virtchnl2_event - Event info.
* @event: Event opcode. See enum virtchnl2_event_codes.
* @link_speed: Link_speed provided in Mbps.
* @vport_id: Vport ID.
* @link_status: Link status.
* @pad: Padding.
* @reserved: Reserved.
*
* CP sends this message to inform the PF/VF driver of events that may affect
* it. No direct response is expected from the driver, though it may generate
* other messages in response to this one.
*
* Associated with VIRTCHNL2_OP_EVENT.
*/
struct virtchnl2_event {
__le32 event;
__le32 link_speed;
__le32 vport_id;
u8 link_status;
u8 pad;
__le16 reserved;
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
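/* Usage sketch (illustrative only, not part of this header): extracting the
* link fields from a VIRTCHNL2_OP_EVENT message. A real handler would first
* match ev->event against enum virtchnl2_event_codes (defined earlier in this
* header); the function name is hypothetical.
*/
static void example_read_link_event(const struct virtchnl2_event *ev,
				    bool *link_up, u32 *speed_mbps)
{
	*link_up = !!ev->link_status;
	*speed_mbps = le32_to_cpu(ev->link_speed);
}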
/**
* struct virtchnl2_rss_key - RSS key info.
* @vport_id: Vport id.
* @key_len: Length of RSS key.
* @pad: Padding.
* @key_flex: RSS hash key, packed bytes.
*
* PF/VF sends this message to get or set RSS key. Only supported if both
* PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
* negotiation.
*
* Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
*/
struct virtchnl2_rss_key {
__le32 vport_id;
__le16 key_len;
u8 pad;
__DECLARE_FLEX_ARRAY(u8, key_flex);
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
/**
* struct virtchnl2_queue_chunk - chunk of contiguous queues
* @type: See enum virtchnl2_queue_type.
* @start_queue_id: Starting queue id.
* @num_queues: Number of queues.
* @pad: Padding for future extensions.
*/
struct virtchnl2_queue_chunk {
__le32 type;
__le32 start_queue_id;
__le32 num_queues;
u8 pad[4];
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
/**
* struct virtchnl2_queue_chunks - Chunks of contiguous queues.
* @num_chunks: Number of chunks.
* @pad: Padding.
* @chunks: Chunks of contiguous queues info.
*/
struct virtchnl2_queue_chunks {
__le16 num_chunks;
u8 pad[6];
struct virtchnl2_queue_chunk chunks[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_chunks);
/**
* struct virtchnl2_del_ena_dis_queues - Enable/disable queues info.
* @vport_id: Vport id.
* @pad: Padding.
* @chunks: Chunks of contiguous queues info.
*
* PF sends these messages to enable, disable or delete queues specified in
* chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
* to be enabled/disabled/deleted. Also applicable to single queue receive or
* transmit. CP performs requested action and returns status.
*
* Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
* VIRTCHNL2_OP_DEL_QUEUES.
*/
struct virtchnl2_del_ena_dis_queues {
__le32 vport_id;
u8 pad[4];
struct virtchnl2_queue_chunks chunks;
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_del_ena_dis_queues);
/**
* struct virtchnl2_queue_vector - Queue to vector mapping.
* @queue_id: Queue id.
* @vector_id: Vector id.
* @pad: Padding.
* @itr_idx: See enum virtchnl2_itr_idx.
* @queue_type: See enum virtchnl2_queue_type.
* @pad1: Padding for future extensions.
*/
struct virtchnl2_queue_vector {
__le32 queue_id;
__le16 vector_id;
u8 pad[2];
__le32 itr_idx;
__le32 queue_type;
u8 pad1[8];
};
VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
/**
* struct virtchnl2_queue_vector_maps - Map/unmap queues info.
* @vport_id: Vport id.
* @num_qv_maps: Number of queue vector maps.
* @pad: Padding.
* @qv_maps: Queue to vector maps.
*
* PF sends this message to map or unmap queues to vectors and interrupt
* throttling rate index registers. External data buffer contains
* virtchnl2_queue_vector_maps structure that contains num_qv_maps of
* virtchnl2_queue_vector structures. CP maps the requested queue vector maps
* after validating the queue and vector ids and returns a status code.
*
* Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
* VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
*/
struct virtchnl2_queue_vector_maps {
__le32 vport_id;
__le16 num_qv_maps;
u8 pad[10];
struct virtchnl2_queue_vector qv_maps[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_vector_maps);
/**
* struct virtchnl2_loopback - Loopback info.
* @vport_id: Vport id.
* @enable: Enable/disable.
* @pad: Padding for future extensions.
*
* PF/VF sends this message to transition to/from the loopback state. Setting
* the 'enable' to 1 enables the loopback state and setting 'enable' to 0
* disables it. CP configures the state to loopback and returns status.
*
* Associated with VIRTCHNL2_OP_LOOPBACK.
*/
struct virtchnl2_loopback {
__le32 vport_id;
u8 enable;
u8 pad[3];
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
/**
* struct virtchnl2_mac_addr - MAC address info.
* @addr: MAC address.
* @type: MAC type. See enum virtchnl2_mac_addr_type.
* @pad: Padding for future extensions.
*/
struct virtchnl2_mac_addr {
u8 addr[ETH_ALEN];
u8 type;
u8 pad;
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
/**
* struct virtchnl2_mac_addr_list - List of MAC addresses.
* @vport_id: Vport id.
* @num_mac_addr: Number of MAC addresses.
* @pad: Padding.
* @mac_addr_list: List with MAC address info.
*
* PF/VF driver uses this structure to send a list of MAC addresses to be
* added/deleted to the CP; the CP performs the action and returns the
* status.
*
* Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
*/
struct virtchnl2_mac_addr_list {
__le32 vport_id;
__le16 num_mac_addr;
u8 pad[2];
struct virtchnl2_mac_addr mac_addr_list[];
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr_list);
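/* Usage sketch (illustrative only, not part of this header): building an
* ADD_MAC_ADDR message from an array of MAC addresses. The trailing
* mac_addr_list[] is sized with struct_size(); the per-entry type would be
* set from enum virtchnl2_mac_addr_type defined earlier in this header.
* Assumes <linux/slab.h> and <linux/etherdevice.h>; the function name is
* hypothetical.
*/
static struct virtchnl2_mac_addr_list *
example_build_mac_list(u32 vport_id, const u8 (*addrs)[ETH_ALEN], u16 count)
{
	struct virtchnl2_mac_addr_list *ma;
	u16 i;

	ma = kzalloc(struct_size(ma, mac_addr_list, count), GFP_KERNEL);
	if (!ma)
		return NULL;

	ma->vport_id = cpu_to_le32(vport_id);
	ma->num_mac_addr = cpu_to_le16(count);

	for (i = 0; i < count; i++)
		ether_addr_copy(ma->mac_addr_list[i].addr, addrs[i]);

	return ma;
}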
/**
* struct virtchnl2_promisc_info - Promisc type info.
* @vport_id: Vport id.
* @flags: See enum virtchnl2_promisc_flags.
* @pad: Padding for future extensions.
*
* PF/VF sends the vport id and flags to the CP; the CP performs the action
* and returns the status.
*
* Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
*/
struct virtchnl2_promisc_info {
__le32 vport_id;
/* See VIRTCHNL2_PROMISC_FLAGS definitions */
__le16 flags;
u8 pad[2];
};
VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
#endif /* _VIRTCHNL_2_H_ */
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2023 Intel Corporation */
#ifndef _VIRTCHNL2_LAN_DESC_H_
#define _VIRTCHNL2_LAN_DESC_H_
#include <linux/bits.h>
/* This is an interface definition file where existing enums and their values
* must remain unchanged over time, so we specify explicit values for all enums.
*/
/* Transmit descriptor ID flags
*/
enum virtchnl2_tx_desc_ids {
VIRTCHNL2_TXDID_DATA = BIT(0),
VIRTCHNL2_TXDID_CTX = BIT(1),
/* TXDID bit 2 is reserved
* TXDID bit 3 is free for future use
* TXDID bit 4 is reserved
*/
VIRTCHNL2_TXDID_FLEX_TSO_CTX = BIT(5),
/* TXDID bit 6 is reserved */
VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2 = BIT(7),
/* TXDID bits 8 and 9 are free for future use
* TXDID bit 10 is reserved
* TXDID bit 11 is free for future use
*/
VIRTCHNL2_TXDID_FLEX_FLOW_SCHED = BIT(12),
/* TXDID bits 13 and 14 are free for future use */
VIRTCHNL2_TXDID_DESC_DONE = BIT(15),
};
/* Receive descriptor IDs */
enum virtchnl2_rx_desc_ids {
VIRTCHNL2_RXDID_1_32B_BASE = 1,
/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
* differentiated based on queue model; e.g. single queue model can
* only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
* for DID 2.
*/
VIRTCHNL2_RXDID_2_FLEX_SPLITQ = 2,
VIRTCHNL2_RXDID_2_FLEX_SQ_NIC = VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
/* 3 through 6 are reserved */
VIRTCHNL2_RXDID_7_HW_RSVD = 7,
/* 8 through 15 are free */
};
/* Receive descriptor ID bitmasks */
#define VIRTCHNL2_RXDID_M(bit) BIT_ULL(VIRTCHNL2_RXDID_##bit)
enum virtchnl2_rx_desc_id_bitmasks {
VIRTCHNL2_RXDID_1_32B_BASE_M = VIRTCHNL2_RXDID_M(1_32B_BASE),
VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M = VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M = VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
VIRTCHNL2_RXDID_7_HW_RSVD_M = VIRTCHNL2_RXDID_M(7_HW_RSVD),
};
/* For splitq virtchnl2_rx_flex_desc_adv_nic_3 desc members */
#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M GENMASK(3, 0)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M GENMASK(7, 6)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M GENMASK(9, 0)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_S 12
#define VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M \
BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_S)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M GENMASK(15, 13)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M GENMASK(13, 0)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S 14
#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M \
BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S 15
#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M \
BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M GENMASK(9, 0)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S 10
#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M \
BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S 11
#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M \
BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S 12
#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M GENMASK(14, 12)
#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S 15
#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M \
BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
/* Bitmasks for splitq virtchnl2_rx_flex_desc_adv_nic_3 */
enum virtchnl2_rx_flex_desc_adv_status_error_0_qw1_bits {
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_M = BIT(0),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M = BIT(1),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_M = BIT(2),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_M = BIT(3),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_M = BIT(4),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_M = BIT(5),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_M = BIT(6),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_M = BIT(7),
};
/* Bitmasks for splitq virtchnl2_rx_flex_desc_adv_nic_3 */
enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_M = BIT(0),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_M = BIT(1),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_M = BIT(2),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_M = BIT(3),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_M = BIT(4),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_M = BIT(5),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_M = BIT(6),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_M = BIT(7),
};
/* Bitmasks for splitq virtchnl2_rx_flex_desc_adv_nic_3 */
enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_M = GENMASK(1, 0),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_M = BIT(2),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_M = BIT(3),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_M = BIT(4),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_M = BIT(5),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_M = BIT(6),
VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_M = BIT(7),
};
/* For singleq (flex) virtchnl2_rx_flex_desc fields
* For virtchnl2_rx_flex_desc.ptype_flex_flags0 member
*/
#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M GENMASK(9, 0)
/* For virtchnl2_rx_flex_desc.pkt_len member */
#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M GENMASK(13, 0)
/* Bitmasks for singleq (flex) virtchnl2_rx_flex_desc */
enum virtchnl2_rx_flex_desc_status_error_0_bits {
VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_M = BIT(0),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_M = BIT(1),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_M = BIT(2),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_M = BIT(3),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_M = BIT(4),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_M = BIT(5),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_M = BIT(6),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_M = BIT(7),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_M = BIT(8),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_M = BIT(9),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_M = BIT(10),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_M = BIT(11),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_M = BIT(12),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_M = BIT(13),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_M = BIT(14),
VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_M = BIT(15),
};
/* Bitmasks for singleq (flex) virtchnl2_rx_flex_desc */
enum virtchnl2_rx_flex_desc_status_error_1_bits {
VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_M = GENMASK(3, 0),
VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_M = BIT(4),
VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_M = BIT(5),
/* [10:6] reserved */
VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_M = BIT(11),
VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_M = BIT(12),
VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_M = BIT(13),
VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_M = BIT(14),
VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_M = BIT(15),
};
/* For virtchnl2_rx_flex_desc.ts_low member */
#define VIRTCHNL2_RX_FLEX_TSTAMP_VALID BIT(0)
/* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M GENMASK_ULL(51, 38)
#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M GENMASK_ULL(37, 30)
#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M GENMASK_ULL(26, 19)
#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M GENMASK_ULL(18, 0)
/* Bitmasks for singleq (base) virtchnl2_rx_base_desc */
enum virtchnl2_rx_base_desc_status_bits {
VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M = BIT(0),
VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M = BIT(1),
VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_M = BIT(2),
VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_M = BIT(3),
VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_M = BIT(4),
VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_M = GENMASK(7, 5),
VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_M = BIT(8),
VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_M = GENMASK(10, 9),
VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_M = BIT(11),
VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_M = GENMASK(13, 12),
VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_M = BIT(14),
VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_M = BIT(15),
VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_M = GENMASK(17, 16),
VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_M = BIT(18),
};
/* Bitmasks for singleq (base) virtchnl2_rx_base_desc */
enum virtchnl2_rx_base_desc_error_bits {
VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_M = BIT(0),
VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_M = BIT(1),
VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_M = BIT(2),
VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_M = GENMASK(5, 3),
VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_M = BIT(3),
VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_M = BIT(4),
VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_M = BIT(5),
VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_M = BIT(6),
VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_M = BIT(7),
};
/* Bitmasks for singleq (base) virtchnl2_rx_base_desc */
#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH_M GENMASK(13, 12)
/**
* struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
* @qword0: RX buffer struct.
* @qword0.buf_id: Buffer identifier.
* @qword0.rsvd0: Reserved.
* @qword0.rsvd1: Reserved.
* @pkt_addr: Packet buffer address.
* @hdr_addr: Header buffer address.
* @rsvd2: Reserved.
*
* Receive Descriptors
* SplitQ buffer
* | 16| 0|
* ----------------------------------------------------------------
* | RSV | Buffer ID |
* ----------------------------------------------------------------
* | Rx packet buffer address |
* ----------------------------------------------------------------
* | Rx header buffer address |
* ----------------------------------------------------------------
* | RSV |
* ----------------------------------------------------------------
* | 0|
*/
struct virtchnl2_splitq_rx_buf_desc {
struct {
__le16 buf_id;
__le16 rsvd0;
__le32 rsvd1;
} qword0;
__le64 pkt_addr;
__le64 hdr_addr;
__le64 rsvd2;
};
/**
* struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format.
* @pkt_addr: Packet buffer address.
* @hdr_addr: Header buffer address.
* @rsvd1: Reserved.
* @rsvd2: Reserved.
*
* SingleQ buffer
* | 0|
* ----------------------------------------------------------------
* | Rx packet buffer address |
* ----------------------------------------------------------------
* | Rx header buffer address |
* ----------------------------------------------------------------
* | RSV |
* ----------------------------------------------------------------
* | RSV |
* ----------------------------------------------------------------
* | 0|
*/
struct virtchnl2_singleq_rx_buf_desc {
__le64 pkt_addr;
__le64 hdr_addr;
__le64 rsvd1;
__le64 rsvd2;
};
/**
* struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format.
* @qword0: First quad word struct.
* @qword0.lo_dword: Lower dual word struct.
* @qword0.lo_dword.mirroring_status: Mirrored packet status.
* @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet.
* @qword0.hi_dword: High dual word union.
* @qword0.hi_dword.rss: RSS hash.
* @qword0.hi_dword.fd_id: Flow director filter id.
* @qword1: Second quad word struct.
* @qword1.status_error_ptype_len: Status/error/PTYPE/length.
* @qword2: Third quad word struct.
* @qword2.ext_status: Extended status.
* @qword2.rsvd: Reserved.
* @qword2.l2tag2_1: Extracted L2 tag 2 from the packet.
* @qword2.l2tag2_2: Reserved.
* @qword3: Fourth quad word struct.
* @qword3.reserved: Reserved.
* @qword3.fd_id: Flow director filter id.
*
* Profile ID 0x1, SingleQ, base writeback format
*/
struct virtchnl2_singleq_base_rx_desc {
struct {
struct {
__le16 mirroring_status;
__le16 l2tag1;
} lo_dword;
union {
__le32 rss;
__le32 fd_id;
} hi_dword;
} qword0;
struct {
__le64 status_error_ptype_len;
} qword1;
struct {
__le16 ext_status;
__le16 rsvd;
__le16 l2tag2_1;
__le16 l2tag2_2;
} qword2;
struct {
__le32 reserved;
__le32 fd_id;
} qword3;
};
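/* Usage sketch (illustrative only, not part of this header): decoding qword1
* of the base writeback descriptor with the QW1 masks above. The DD bit from
* enum virtchnl2_rx_base_desc_status_bits lives inside the status field of
* qword1. Assumes <linux/bitfield.h> for FIELD_GET(); the function name is
* hypothetical.
*/
static inline bool
example_base_desc_done(const struct virtchnl2_singleq_base_rx_desc *desc,
		       u16 *pkt_len, u16 *ptype)
{
	u64 qw1 = le64_to_cpu(desc->qword1.status_error_ptype_len);

	*pkt_len = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M, qw1);
	*ptype = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M, qw1);

	return FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M, qw1) &
	       VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M;
}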
/**
* struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format.
*
* @rxdid: Descriptor builder profile id.
* @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
* @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
* @pkt_len: Packet length, [15:14] are reserved.
* @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0].
* @status_error0: Status/Error section 0.
* @l2tag1: Stripped L2 tag from the received packet
* @rss_hash: RSS hash.
* @status_error1: Status/Error section 1.
* @flexi_flags2: Flexible flags section 2.
* @ts_low: Lower word of timestamp value.
* @l2tag2_1st: First L2TAG2.
* @l2tag2_2nd: Second L2TAG2.
* @flow_id: Flow id.
* @flex_ts: Timestamp and flexible flow id union.
* @flex_ts.ts_high: Higher word of the timestamp value.
* @flex_ts.flex.rsvd: Reserved.
* @flex_ts.flex.flow_id_ipv6: IPv6 flow id.
*
* Profile ID 0x2, SingleQ, flex writeback format
*/
struct virtchnl2_rx_flex_desc_nic {
/* Qword 0 */
u8 rxdid;
u8 mir_id_umb_cast;
__le16 ptype_flex_flags0;
__le16 pkt_len;
__le16 hdr_len_sph_flex_flags1;
/* Qword 1 */
__le16 status_error0;
__le16 l2tag1;
__le32 rss_hash;
/* Qword 2 */
__le16 status_error1;
u8 flexi_flags2;
u8 ts_low;
__le16 l2tag2_1st;
__le16 l2tag2_2nd;
/* Qword 3 */
__le32 flow_id;
union {
struct {
__le16 rsvd;
__le16 flow_id_ipv6;
} flex;
__le32 ts_high;
} flex_ts;
};
/**
* struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format.
* @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0].
* @status_err0_qw0: Status/Error section 0 in quad word 0.
* @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
* ptype=[9:0].
* @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
* plen=[13:0].
* @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
* ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
* only in splitq, header=[9:0].
* @status_err0_qw1: Status/Error section 0 in quad word 1.
* @status_err1: Status/Error section 1.
* @fflags1: Flexible flags section 1.
* @ts_low: Lower word of timestamp value.
* @buf_id: Buffer identifier. Only in splitq mode.
* @misc: Union.
* @misc.raw_cs: Raw checksum.
* @misc.l2tag1: Stripped L2 tag from the received packet
* @misc.rscseglen: RSC segment length.
* @hash1: Lower bits of Rx hash value.
* @ff2_mirrid_hash2: Union.
* @ff2_mirrid_hash2.fflags2: Flexible flags section 2.
* @ff2_mirrid_hash2.mirrorid: Mirror id.
* @ff2_mirrid_hash2.hash2: Hash[23:16], middle bits of the Rx hash value.
* @hash3: Upper bits of Rx hash value.
* @l2tag2: Extracted L2 tag 2 from the packet.
* @fmd4: Flexible metadata container 4.
* @l2tag1: Stripped L2 tag from the received packet
* @fmd6: Flexible metadata container 6.
* @ts_high: Higher word of the timestamp value.
*
* Profile ID 0x2, SplitQ, flex writeback format
*
* Flex-field 0: BufferID
* Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
* Flex-field 2: Hash[15:0]
* Flex-flags 2: Hash[23:16]
* Flex-field 3: L2TAG2
* Flex-field 5: L2TAG1
* Flex-field 7: Timestamp (upper 32 bits)
*/
struct virtchnl2_rx_flex_desc_adv_nic_3 {
/* Qword 0 */
u8 rxdid_ucast;
u8 status_err0_qw0;
__le16 ptype_err_fflags0;
__le16 pktlen_gen_bufq_id;
__le16 hdrlen_flags;
/* Qword 1 */
u8 status_err0_qw1;
u8 status_err1;
u8 fflags1;
u8 ts_low;
__le16 buf_id;
union {
__le16 raw_cs;
__le16 l2tag1;
__le16 rscseglen;
} misc;
/* Qword 2 */
__le16 hash1;
union {
u8 fflags2;
u8 mirrorid;
u8 hash2;
} ff2_mirrid_hash2;
u8 hash3;
__le16 l2tag2;
__le16 fmd4;
/* Qword 3 */
__le16 l2tag1;
__le16 fmd6;
__le32 ts_high;
};
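/* Usage sketch (illustrative only, not part of this header): decoding the
* splitq completion fields of struct virtchnl2_rx_flex_desc_adv_nic_3. The
* generation bit is compared against the queue's expected value to detect a
* new completion; the buffer queue id and packet length come from the same
* 16-bit word. Assumes <linux/bitfield.h>; the function name is hypothetical.
*/
static inline bool
example_splitq_desc_new(const struct virtchnl2_rx_flex_desc_adv_nic_3 *desc,
			bool exp_gen, u16 *buf_len, u8 *bufq_id)
{
	u16 qword = le16_to_cpu(desc->pktlen_gen_bufq_id);

	if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M, qword) != exp_gen)
		return false;

	*buf_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M, qword);
	*bufq_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M, qword);

	return true;
}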
/* Common union for accessing descriptor format structs */
union virtchnl2_rx_desc {
struct virtchnl2_singleq_base_rx_desc base_wb;
struct virtchnl2_rx_flex_desc_nic flex_nic_wb;
struct virtchnl2_rx_flex_desc_adv_nic_3 flex_adv_nic_3_wb;
};
#endif /* _VIRTCHNL2_LAN_DESC_H_ */