Merge branch '200GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:

====================
Introduce Intel IDPF driver

Pavan Kumar Linga says:

This patch series introduces the Intel Infrastructure Data Path Function
(IDPF) driver. It is used for both physical and virtual functions. Except
for some of the device operations, the rest of the functionality is the
same for both PF and VF. IDPF uses virtchnl version 2 opcodes and
structures defined in the virtchnl2 header file, which help the driver
learn the capabilities and register offsets from the device Control Plane
(CP) instead of assuming default values.

The format of the series follows the driver init flow to interface open.
To start with, probe gets called and kicks off the driver initialization
by spawning the 'vc_event_task' work queue, which in turn calls the
'hard reset' function. As part of that, the mailbox is initialized; it is
used to send/receive the virtchnl messages to/from the CP. Once that is
done, 'core init' kicks in, which requests all the required global
resources from the CP and spawns the 'init_task' work queue to create the
vports.

Based on the capability information received, the driver creates the said
number of vports (one or many), where each vport is associated with a
netdev. Each vport also has its own resources such as queues, vectors,
etc. From there, the rest of the netdev_ops and the data path are added.

IDPF implements both the single queue model, which is the traditional
queueing model, and the split queue model. In the split queue model,
separate queues are used for completion descriptors and buffers, which
helps to implement out-of-order completions. It also helps to implement
asymmetric queues: for example, multiple RX completion queues can be
processed by a single RX buffer queue, and multiple TX buffer queues can
be processed by a single TX completion queue. In the single queue model,
the same queue is used for both descriptor completions and buffer
completions.

The driver also supports features such as generic checksum offload and
generic receive offload (hardware GRO).
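The workqueue-driven bring-up described above can be pictured with the
generic Linux workqueue API. The following is a minimal sketch, not code
from the series: 'my_adapter', 'my_probe' and the two illustrative work
handlers are assumptions based purely on the flow described in the cover
letter.

```c
#include <linux/workqueue.h>
#include <linux/errno.h>

struct my_adapter {
	struct workqueue_struct *vc_event_wq;
	struct workqueue_struct *init_wq;
	struct delayed_work vc_event_task;	/* hard reset / mailbox bring-up */
	struct delayed_work init_task;		/* vport creation */
};

static void my_init_task(struct work_struct *work)
{
	/* request per-vport resources from the CP and register netdevs here */
}

static void my_vc_event_task(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct my_adapter *adapter =
		container_of(dwork, struct my_adapter, vc_event_task);

	/* hard reset: initialize the mailbox, negotiate capabilities, ... */

	/* ...then hand off vport creation to a separate work item */
	queue_delayed_work(adapter->init_wq, &adapter->init_task, 0);
}

static int my_probe(struct my_adapter *adapter)
{
	adapter->vc_event_wq = alloc_workqueue("my_vc_event_wq", 0, 0);
	if (!adapter->vc_event_wq)
		return -ENOMEM;

	adapter->init_wq = alloc_workqueue("my_init_wq", 0, 0);
	if (!adapter->init_wq) {
		destroy_workqueue(adapter->vc_event_wq);
		return -ENOMEM;
	}

	INIT_DELAYED_WORK(&adapter->vc_event_task, my_vc_event_task);
	INIT_DELAYED_WORK(&adapter->init_task, my_init_task);

	/* start the driver initialization ("hard reset") asynchronously */
	queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task, 0);
	return 0;
}
```

The point of the two-stage split is that probe returns quickly while the
slow mailbox negotiation and vport creation run asynchronously, which is
what the 'vc_event_task'/'init_task' description above implies.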
---
v7:
Patch 2:
* removed pci_[disable|enable]_pcie_error_reporting as they are dropped from the core
Patch 4, 9:
* used 'kasprintf' instead of 'snprintf' to avoid providing an explicit character string size, which also fixes "-Wformat-truncation" warnings (see the sketch after the changelog)
Patch 14:
* used 'ethtool_sprintf' instead of 'snprintf' to avoid providing an explicit character string size, which also fixes a "-Wformat-truncation" warning
* add a string format argument to 'ethtool_sprintf' to avoid a warning on "-Wformat-security"

v6: https://lore.kernel.org/netdev/20230825235954.894050-1-pavan.kumar.linga@intel.com/
Note: 'Acked-by' was only added to patches 1, 2, 12 and not to the other patches because of the changes in v6
Patch 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15:
* renamed 'reset_lock' to 'vport_ctrl_lock' to reflect the lock usage
* to avoid defensive programming, used 'vport_ctrl_lock' for the user callbacks that access the 'vport' to prevent the hardware reset thread from releasing the 'vport' while the user callback is in progress
* added some variables to the netdev private structure to avoid vport access, where possible, from ethtool and ndo callbacks
* moved 'mac_filter_list_lock' and MAC related flags to the vport_config structure and refactored the MAC filter flow to handle asynchronous ndo MAC filter callbacks
* stop the queues before starting the reset flow to avoid TX hangs
* removed 'sw_mutex' and 'stop_mutex' as they are not needed anymore
* added a missing clear bit in the 'init_task' error path
* renamed labels appropriately
Patch 8:
* replaced page_pool_put_page with page_pool_put_full_page
* for the page pool max_len, used PAGE_SIZE
Patch 10, 11, 13:
* made use of the 'netif_txq_maybe_stop' and '__netif_txq_completed_wake' helper macros
Patch 13:
* removed the IDPF_HR_RESET_IN_PROG flag check in idpf_tx_singleq_start as it is defensive
Patch 14:
* removed the max descriptor check as the core does that
* removed unnecessary error messages
* removed the stats that are common between the ones reported by ethtool and ip link
* replaced snprintf with ethtool_sprintf
* added a comment to explain the reason for the max queue check
* as the netdev queues are set on alloc, there is no need to set them again on reset unless there is a queue change, so moved 'idpf_set_real_num_queues' to 'idpf_initiate_soft_reset'
Patch 15:
* reworded the 'configure SRIOV' in the commit message

v5: https://lore.kernel.org/netdev/20230816004305.216136-1-anthony.l.nguyen@intel.com/
Most patches:
* wrapped lines to the 80 char limit where it doesn't affect readability
Patch 12:
* in skb_add_rx_frag, offset 'headlen' w.r.t. page_offset when adding a frag to avoid adding the header again
Patch 14:
* added a NULL check for 'rxq' when dereferencing it in page_pool_get_stats

v4: https://lore.kernel.org/netdev/20230808003416.3805142-1-anthony.l.nguyen@intel.com/
Patch 1:
* s/virtcnl/virtchnl
* removed the kernel doc for the error code definitions that don't exist
* reworded the summary part in the virtchnl2 header
Patch 3:
* don't set the local variable to NULL on error
* renamed the sq_send_command_out label to err_unlock
* don't use __GFP_ZERO in dma_alloc_coherent
Patch 4:
* introduced a mailbox workqueue to process mailbox interrupts
Patch 3, 4, 5, 6, 7, 8, 9, 11, 15:
* removed unnecessary variable 0-init
Patch 3, 5, 7, 8, 9, 15:
* removed defensive programming checks wherever applicable
* removed IDPF_CAP_FIELD_LAST as it can be treated as defensive programming
Patch 3, 4, 5, 6, 7:
* replaced IDPF_DFLT_MBX_BUF_SIZE with IDPF_CTLQ_MAX_BUF_LEN
Patch 2 to 15:
* add kernel-doc for idpf.h and idpf_txrx.h enums and structures
Patch 4, 5, 15:
* adjusted the destroy sequence of the workqueues as per the alloc sequence
Patch 4, 5, 9, 15:
* scrub unnecessary flags in 'idpf_flags'
  - the IDPF_REMOVE_IN_PROG flag can take care of the cases where IDPF_REL_RES_IN_PROG is used; removed the latter
  - IDPF_REQ_[TX|RX]_SPLITQ are replaced with struct variables
  - IDPF_CANCEL_[SERVICE|STATS]_TASK are redundant as the work queue doesn't get rescheduled again after 'cancel_delayed_work_sync'
  - IDPF_HR_CORE_RESET is removed as there is no set_bit for this flag
  - IDPF_MB_INTR_TRIGGER is removed as it is not needed anymore with the mailbox workqueue implementation
Patch 7 to 15:
* replaced the custom buffer recycling code with the page pool API
* switched the header split buffer allocations from using a bunch of pages to using one large chunk of DMA memory
* reordered some of the flows in vport_open to support page pool
Patch 8, 12:
* don't suppress the alloc errors by using __GFP_NOWARN
Patch 9:
* removed dyn_ctl_clrpba_m as it is not being used
Patch 14:
* introduced enum idpf_vport_reset_cause instead of using vport flags
* introduced page pool stats

v3: https://lore.kernel.org/netdev/20230616231341.2885622-1-anthony.l.nguyen@intel.com/
Patch 5:
* instead of void, used the 'struct virtchnl2_create_vport' type for vport_params_recvd and vport_params_reqd and removed the typecasting
* used u16/u32 as needed instead of int for variables which cannot be negative and updated them in all the places wherever applicable
Patch 6:
* changed the commit message to "add ptypes and MAC filter support"
* used the sender's Signed-off-by as the last tag on all the patches
* removed unnecessary variable 0-init
* instead of fixing the code in this commit, fixed it in the commit where the change was first introduced
* moved the get_type_info struct onto the stack instead of allocating memory
* moved mutex_lock and the ptype_info memory alloc outside the while loop and adjusted the return flow
* used 'break' instead of 'continue' in the ptype id switch case

v2: https://lore.kernel.org/netdev/20230614171428.1504179-1-anthony.l.nguyen@intel.com/
Patch 2:
* added "Intel(R)" to the DRV_SUMMARY and Makefile
Patch 4, 5, 6, 15:
* replaced the IDPF_VC_MSG_PENDING flag with the 'vc_buf_lock' mutex for the adapter related virtchnl opcodes
* take the mutex lock in the virtchnl send thread itself instead of in the receive thread
Patch 5, 6, 7, 8, 9, 11, 14, 15:
* replaced the IDPF_VPORT_VC_MSG_PENDING flag with the 'vc_buf_lock' mutex for the vport related virtchnl opcodes
* take the mutex lock in the virtchnl send thread itself instead of in the receive thread
Patch 6:
* converted the get_ptype_info logic from a 1:N to a 1:1 message exchange for better handling of the mutex lock
Patch 15:
* introduced the 'stats_lock' spinlock to avoid concurrent stats updates

v1: https://lore.kernel.org/netdev/20230530234501.2680230-1-anthony.l.nguyen@intel.com/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
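The kasprintf and ethtool_sprintf changelog items above both replace
fixed-size snprintf buffers. A minimal sketch of the intended usage
follows; the helper names (make_irq_name, fill_stat_strings) and the
"%s-TxRx-%d" format are illustrative only and not taken from the series.

```c
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/ethtool.h>

static char *make_irq_name(const char *dev_name, int v_idx)
{
	/* kasprintf() sizes the allocation itself; the caller must kfree() */
	return kasprintf(GFP_KERNEL, "%s-TxRx-%d", dev_name, v_idx);
}

static void fill_stat_strings(u8 **data, const char * const *stat_names, int n)
{
	int i;

	for (i = 0; i < n; i++)
		/* an explicit "%s" keeps the stat name out of the format
		 * argument, which is what silences -Wformat-security
		 */
		ethtool_sprintf(data, "%s", stat_names[i]);
}
```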