Commit 7957066c authored by Dave Airlie

Merge tag 'drm-xe-next-2024-06-06' of https://gitlab.freedesktop.org/drm/xe/kernel into drm-next

UAPI Changes:
- Expose the L3 bank mask (Francois)

Cross-subsystem Changes:
- Update Xe driver maintainers (Oded)

Display (i915):
- Add missing include to intel_vga.c (Michal Wajdeczko)

Driver Changes:
- Fix Display (xe-only) detection for ADL-N (Lucas)
- Runtime PM fixes that enabled PC-10 and D3Cold (Francois, Rodrigo)
- Fix unexpected silent drm backmerge issues (Thomas)
- More (a lot more) preparation for SR-IOV support (Michal Wajdeczko)
- Devcoredump fixes and improvements (Jose, Tejas, Matt Brost)
- Introduce device 'wedged' state (Rodrigo)
- Improve debug and info messages (Michal Wajdeczko, Rodrigo, Nirmoy)
- Adding or fixing workarounds (Tejas, Shekhar, Lucas, Bommu)
- Check result of drmm_mutex_init (Michal Wajdeczko)
- Enlarge the critical dma fence area for preempt fences (Matt Auld)
- Prevent UAF in VM's rebind work (Matt Auld)
- GuC submit related clean-ups and fixes (Matt Brost, Himal, Jonathan, Niranjana)
- Prefer local helpers to perform dma reservation locking (Himal)
- Spelling and typo fixes (Colin, Francois)
- Prep patches for 1 job per VM bind IOCTL (no uapi change yet) (Matt Brost)
- Remove uninitialized end var from xe_gt_tlb_invalidation_range (Nirmoy)
- GSC related changes targeting LNL support (Daniele)
- Fix assert in L3 bank mask generation (Francois)
- Perform dma_map when moving system buffer objects to TT (Thomas)
- Add helpers for manipulating macro arguments (Michal Wajdeczko)
- Refactor default device atomic settings (Nirmoy)
- Add debugfs node to dump mocs (Janga)
- Use ordered WQ for G2H handler (Matt Brost)
- Clean up and fixes in header includes (Michal Wajdeczko)
- Prefer flexible-array over deprecated zero-length ones (Lucas)
- Add Indirect Ring State support (Niranjana)
- Fix UBSAN shift-out-of-bounds failure (Shuicheng)
- HWMon fixes and additions (Karthik)
- Clean-up refactor around probe init functions (Lucas, Michal Wajdeczko)
- Fix PCODE init function (Himal)
- Only use reserved BCS instances for usm migrate exec queue (Matt Brost)
- Only zap PTEs as needed (Matt Brost)
- Per client usage info (Lucas)
- Core hotunplug improvements converting stuff towards devm (Matt Auld)
- Don't emit false error if running in execlist mode (Michal Wajdeczko)
- Remove unused struct (Dr. David)
- Support/debug for slow GuC loads (John Harrison)
- Decouple job seqno and lrc seqno (Matt Brost)
- Allow migrate vm gpu submissions from reclaim context (Thomas)
- Rename drm-client running time to run_ticks and fix a UAF (Umesh)
- Check empty pinned BO list with lock held (Nirmoy)
- Drop undesired prefix from the platform name (Michal Wajdeczko)
- Remove unwanted mutex locking on xe file close (Niranjana)
- Replace format-less snprintf() with strscpy() (Arnd)
- Other general clean-ups on registers definitions and function names (Michal Wajdeczko)
- Add kernel-doc to some xe_lrc interfaces (Niranjana)
- Use missing lock in relay_needs_worker (Nirmoy)
- Drop redundant W=1 warnings from Makefile (Jani)
- Simplify if condition in preempt fences code (Thorsten)
- Flush engine buffers before signalling user fence on all engines (Andrzej)
- Don't overmap identity VRAM mapping (Matt Brost)
- Do not dereference NULL job->fence in trace points (Matt Brost)
- Add synchronous gt reset debugfs (Jonathan)
- Xe gt_idle fixes (Riana)
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ZmItmuf7vq_xvRjJ@intel.com
parents 83a7eefe 6800e63c
@@ -112,6 +112,19 @@ larger value within a reasonable period. Upon observing a value lower than what
was previously read, userspace is expected to stay with that larger previous
value until a monotonic update is seen.
- drm-total-cycles-<keystr>: <uint>
Engine identifier string must be the same as the one specified in the
drm-cycles-<keystr> tag and shall contain the total number cycles for the given
engine.
This is a timestamp in GPU unspecified unit that matches the update rate
of drm-cycles-<keystr>. For drivers that implement this interface, the engine
utilization can be calculated entirely on the GPU clock domain, without
considering the CPU sleep time between 2 samples.
A driver may implement either this key or drm-maxfreq-<keystr>, but not both.
- drm-maxfreq-<keystr>: <uint> [Hz|MHz|KHz]
Engine identifier string must be the same as the one specified in the
@@ -121,6 +134,9 @@ percentage utilization of the engine, whereas drm-engine-<keystr> only reflects
time active without considering what frequency the engine is operating as a
percentage of its maximum frequency.
A driver may implement either this key or drm-total-cycles-<keystr>, but not
both.
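For illustration, engine utilization over a sampling window can be derived from drm-cycles-<keystr> and drm-total-cycles-<keystr> alone, entirely in the GPU clock domain. A minimal sketch, assuming both keys were parsed from two fdinfo reads and neither counter wrapped (the helper name and types are illustrative, not part of the interface):

#include <stdint.h>

/*
 * Percent busy between two fdinfo samples: cycles{0,1} come from
 * drm-cycles-<keystr>, total{0,1} from drm-total-cycles-<keystr>.
 */
static unsigned int engine_busy_percent(uint64_t cycles0, uint64_t total0,
					uint64_t cycles1, uint64_t total1)
{
	uint64_t busy = cycles1 - cycles0;
	uint64_t window = total1 - total0;

	return window ? (unsigned int)(busy * 100 / window) : 0;
}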
Memory
^^^^^^
@@ -168,5 +184,6 @@ be documented above and where possible, aligned with other drivers.
Driver specific implementations
-------------------------------
:ref:`i915-usage-stats` * :ref:`i915-usage-stats`
:ref:`panfrost-usage-stats` * :ref:`panfrost-usage-stats`
* :ref:`xe-usage-stats`
@@ -23,3 +23,4 @@ DG2, etc is provided to prototype the driver.
xe_firmware
xe_tile
xe_debugging
xe-drm-usage-stats.rst
.. SPDX-License-Identifier: GPL-2.0+
.. _xe-usage-stats:
========================================
Xe DRM client usage stats implementation
========================================
.. kernel-doc:: drivers/gpu/drm/xe/xe_drm_client.c
:doc: DRM Client usage stats
@@ -11034,7 +11034,6 @@ F: include/uapi/drm/i915_drm.h
INTEL DRM XE DRIVER (Lunar Lake and newer)
M: Lucas De Marchi <lucas.demarchi@intel.com>
M: Oded Gabbay <ogabbay@kernel.org>
M: Thomas Hellström <thomas.hellstrom@linux.intel.com>
L: intel-xe@lists.freedesktop.org
S: Supported
......
@@ -3,6 +3,7 @@
* Copyright © 2019 Intel Corporation
*/
#include <linux/delay.h>
#include <linux/vgaarb.h>
#include <video/vga.h>
......
@@ -61,16 +61,6 @@ config DRM_XE_DEBUG_MEM
If in doubt, say "N".
config DRM_XE_SIMPLE_ERROR_CAPTURE
bool "Enable simple error capture to dmesg on job timeout"
default n
help
Choose this option when debugging an unexpected job timeout
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_KUNIT_TEST
tristate "KUnit tests for the drm xe driver" if !KUNIT_ALL_TESTS
depends on DRM_XE && KUNIT && DEBUG_FS
......
@@ -3,31 +3,8 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
# Unconditionally enable W=1 warnings locally # Enable W=1 warnings not enabled in drm subsystem Makefile
# --- begin copy-paste W=1 warnings from scripts/Makefile.extrawarn
subdir-ccflags-y += -Wextra -Wunused -Wno-unused-parameter
subdir-ccflags-y += -Wmissing-declarations
subdir-ccflags-y += $(call cc-option, -Wrestrict)
subdir-ccflags-y += -Wmissing-format-attribute
subdir-ccflags-y += -Wmissing-prototypes
subdir-ccflags-y += -Wold-style-definition
subdir-ccflags-y += -Wmissing-include-dirs
subdir-ccflags-y += $(call cc-option, -Wunused-but-set-variable)
subdir-ccflags-y += $(call cc-option, -Wunused-const-variable)
subdir-ccflags-y += $(call cc-option, -Wpacked-not-aligned)
subdir-ccflags-y += $(call cc-option, -Wformat-overflow)
subdir-ccflags-y += $(call cc-option, -Wformat-truncation)
subdir-ccflags-y += $(call cc-option, -Wstringop-truncation)
# The following turn off the warnings enabled by -Wextra
ifeq ($(findstring 2, $(KBUILD_EXTRA_WARN)),)
subdir-ccflags-y += -Wno-missing-field-initializers
subdir-ccflags-y += -Wno-type-limits
subdir-ccflags-y += -Wno-shift-negative-value
endif
ifeq ($(findstring 3, $(KBUILD_EXTRA_WARN)),)
subdir-ccflags-y += -Wno-sign-compare
endif
# --- end copy-paste
# Enable -Werror in CI and development
subdir-ccflags-$(CONFIG_DRM_XE_WERROR) += -Werror
@@ -89,7 +66,7 @@ xe-y += xe_bb.o \
xe_gt_mcr.o \
xe_gt_pagefault.o \
xe_gt_sysfs.o \
xe_gt_throttle_sysfs.o \ xe_gt_throttle.o \
xe_gt_tlb_invalidation.o \
xe_gt_topology.o \
xe_guc.o \
@@ -143,6 +120,7 @@ xe-y += xe_bb.o \
xe_uc_debugfs.o \
xe_uc_fw.o \
xe_vm.o \
xe_vram.o \
xe_vram_freq.o \
xe_wait_user_fence.o \
xe_wa.o \
@@ -155,6 +133,8 @@ xe-$(CONFIG_HWMON) += xe_hwmon.o
# graphics virtualization (SR-IOV) support
xe-y += \
xe_gt_sriov_vf.o \
xe_gt_sriov_vf_debugfs.o \
xe_guc_relay.o \
xe_memirq.o \
xe_sriov.o
@@ -163,10 +143,14 @@ xe-$(CONFIG_PCI_IOV) += \
xe_gt_sriov_pf.o \
xe_gt_sriov_pf_config.o \
xe_gt_sriov_pf_control.o \
xe_gt_sriov_pf_debugfs.o \
xe_gt_sriov_pf_monitor.o \
xe_gt_sriov_pf_policy.o \
xe_gt_sriov_pf_service.o \
xe_lmtt.o \
xe_lmtt_2l.o \
xe_lmtt_ml.o \
xe_pci_sriov.o \
xe_sriov_pf.o
# include helpers for tests even when XE is built-in
......
@@ -8,6 +8,10 @@
enum xe_guc_response_status {
XE_GUC_RESPONSE_STATUS_SUCCESS = 0x0,
XE_GUC_RESPONSE_NOT_SUPPORTED = 0x20,
XE_GUC_RESPONSE_NO_ATTRIBUTE_TABLE = 0x201,
XE_GUC_RESPONSE_NO_DECRYPTION_KEY = 0x202,
XE_GUC_RESPONSE_DECRYPTION_FAILED = 0x204,
XE_GUC_RESPONSE_STATUS_GENERIC_FAIL = 0xF000,
};
@@ -17,6 +21,9 @@ enum xe_guc_load_status {
XE_GUC_LOAD_STATUS_ERROR_DEVID_BUILD_MISMATCH = 0x02,
XE_GUC_LOAD_STATUS_GUC_PREPROD_BUILD_MISMATCH = 0x03,
XE_GUC_LOAD_STATUS_ERROR_DEVID_INVALID_GUCTYPE = 0x04,
XE_GUC_LOAD_STATUS_HWCONFIG_START = 0x05,
XE_GUC_LOAD_STATUS_HWCONFIG_DONE = 0x06,
XE_GUC_LOAD_STATUS_HWCONFIG_ERROR = 0x07,
XE_GUC_LOAD_STATUS_GDT_DONE = 0x10,
XE_GUC_LOAD_STATUS_IDT_DONE = 0x20,
XE_GUC_LOAD_STATUS_LAPIC_DONE = 0x30,
@@ -34,4 +41,19 @@ enum xe_guc_load_status {
XE_GUC_LOAD_STATUS_READY = 0xF0,
};
enum xe_bootrom_load_status {
XE_BOOTROM_STATUS_NO_KEY_FOUND = 0x13,
XE_BOOTROM_STATUS_AES_PROD_KEY_FOUND = 0x1A,
XE_BOOTROM_STATUS_PROD_KEY_CHECK_FAILURE = 0x2B,
XE_BOOTROM_STATUS_RSA_FAILED = 0x50,
XE_BOOTROM_STATUS_PAVPC_FAILED = 0x73,
XE_BOOTROM_STATUS_WOPCM_FAILED = 0x74,
XE_BOOTROM_STATUS_LOADLOC_FAILED = 0x75,
XE_BOOTROM_STATUS_JUMP_PASSED = 0x76,
XE_BOOTROM_STATUS_JUMP_FAILED = 0x77,
XE_BOOTROM_STATUS_RC6CTXCONFIG_FAILED = 0x79,
XE_BOOTROM_STATUS_MPUMAP_INCORRECT = 0x7A,
XE_BOOTROM_STATUS_EXCEPTION = 0x7E,
};
#endif
@@ -35,6 +35,20 @@
#define GUC_KLV_0_LEN (0xffffu << 0)
#define GUC_KLV_n_VALUE (0xffffffffu << 0)
/**
* DOC: GuC Global Config KLVs
*
* `GuC KLV`_ keys available for use with HOST2GUC_SELF_CFG_.
*
* _`GUC_KLV_GLOBAL_CFG_GMD_ID` : 0x3000
* Refers to 32 bit architecture version as reported by the HW IP.
* This key is supported on MTL+ platforms only.
* Requires GuC ABI 1.2+.
*/
#define GUC_KLV_GLOBAL_CFG_GMD_ID_KEY 0x3000u
#define GUC_KLV_GLOBAL_CFG_GMD_ID_LEN 1u
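As an aside on how such a KLV travels on the wire, each entry starts with a header dword built from the generic KLV header fields (GUC_KLV_0_KEY / GUC_KLV_0_LEN), followed by the value dwords. A hedged sketch, not code from this patch, of packing GMD_ID with the kernel's FIELD_PREP() helper (the gmd_id variable is illustrative):

#include <linux/bitfield.h>

/* Illustrative only: header dword + GUC_KLV_GLOBAL_CFG_GMD_ID_LEN value dword */
u32 klv[1 + GUC_KLV_GLOBAL_CFG_GMD_ID_LEN] = {
	FIELD_PREP(GUC_KLV_0_KEY, GUC_KLV_GLOBAL_CFG_GMD_ID_KEY) |
	FIELD_PREP(GUC_KLV_0_LEN, GUC_KLV_GLOBAL_CFG_GMD_ID_LEN),
	gmd_id, /* 32-bit architecture version as read from the HW IP */
};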
/**
* DOC: GuC Self Config KLVs
*
@@ -194,14 +208,18 @@ enum {
* granularity) since the GPUs clock time runs off a different crystal
* from the CPUs clock. Changing this KLV on a VF that is currently
* running a context wont take effect until a new context is scheduled in.
* That said, when the PF is changing this value from 0xFFFFFFFF to * That said, when the PF is changing this value from 0x0 to
* something else, it might never take effect if the VF is running an * a non-zero value, it might never take effect if the VF is running an
* inifinitely long compute or shader kernel. In such a scenario, the * infinitely long compute or shader kernel. In such a scenario, the
* PF would need to trigger a VM PAUSE and then change the KLV to force
* it to take effect. Such cases might typically happen on a 1PF+1VF
* Virtualization config enabled for heavier workloads like AI/ML.
*
* The max value for this KLV is 100 seconds, anything exceeding that
* will be clamped to the max.
*
* :0: infinite exec quantum (default)
* :100000: maximum exec quantum (100000ms == 100s)
*
* _`GUC_KLV_VF_CFG_PREEMPT_TIMEOUT` : 0x8A02
* This config sets the VF-preemption-timeout in microseconds.
@@ -211,15 +229,19 @@ enum {
* different crystal from the CPUs clock. Changing this KLV on a VF
* that is currently running a context wont take effect until a new
* context is scheduled in.
* That said, when the PF is changing this value from 0xFFFFFFFF to * That said, when the PF is changing this value from 0x0 to
* something else, it might never take effect if the VF is running an * a non-zero value, it might never take effect if the VF is running an
* inifinitely long compute or shader kernel. * infinitely long compute or shader kernel.
* In this case, the PF would need to trigger a VM PAUSE and then change
* the KLV to force it to take effect. Such cases might typically happen
* on a 1PF+1VF Virtualization config enabled for heavier workloads like
* AI/ML.
*
* The max value for this KLV is 100 seconds, anything exceeding that
* will be clamped to the max.
*
* :0: no preemption timeout (default)
* :100000000: maximum preemption timeout (100000000us == 100s)
*
* _`GUC_KLV_VF_CFG_THRESHOLD_CAT_ERR` : 0x8A03
* This config sets threshold for CAT errors caused by the VF.
@@ -291,9 +313,11 @@ enum {
#define GUC_KLV_VF_CFG_EXEC_QUANTUM_KEY 0x8a01
#define GUC_KLV_VF_CFG_EXEC_QUANTUM_LEN 1u
#define GUC_KLV_VF_CFG_EXEC_QUANTUM_MAX_VALUE 100000u
#define GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_KEY 0x8a02
#define GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_LEN 1u
#define GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_MAX_VALUE 100000000u
#define GUC_KLV_VF_CFG_THRESHOLD_CAT_ERR_KEY 0x8a03
#define GUC_KLV_VF_CFG_THRESHOLD_CAT_ERR_LEN 1u
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation * Copyright © 2023-2024 Intel Corporation
*/
#ifndef _ABI_GUC_RELAY_ACTIONS_ABI_H_
#define _ABI_GUC_RELAY_ACTIONS_ABI_H_
#include "abi/guc_relay_communication_abi.h"
/**
* DOC: GuC Relay VF/PF ABI Version
*
* The _`GUC_RELAY_VERSION_BASE` defines minimum VF/PF ABI version that
* drivers must support. Currently this is version 1.0.
*
* The _`GUC_RELAY_VERSION_LATEST` defines latest VF/PF ABI version that
* drivers may use. Currently this is version 1.0.
*
* Some platforms may require different base VF/PF ABI version.
* No supported VF/PF ABI version can be 0.0.
*/
#define GUC_RELAY_VERSION_BASE_MAJOR 1
#define GUC_RELAY_VERSION_BASE_MINOR 0
#define GUC_RELAY_VERSION_LATEST_MAJOR 1
#define GUC_RELAY_VERSION_LATEST_MINOR 0
/**
* DOC: GuC Relay Actions
*
* The following actions are supported from VF/PF ABI version 1.0:
*
* * `VF2PF_HANDSHAKE`_
* * `VF2PF_QUERY_RUNTIME`_
*/
/**
* DOC: VF2PF_HANDSHAKE
*
* This `Relay Message`_ is used by the VF to establish ABI version with the PF.
*
* Prior to exchanging any other messages, both VF driver and PF driver must
* negotiate the VF/PF ABI version that will be used in their communication.
*
* The VF driver shall use @MAJOR and @MINOR fields to pass requested ABI version.
* The VF driver may use special version 0.0 (both @MAJOR and @MINOR set to 0)
* to request latest (or any) ABI version that is supported by the PF driver.
*
* This message definition shall be supported by all future ABI versions.
* This message definition shall not be changed by future ABI versions.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | DATA0 = MBZ |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | ACTION = _`GUC_RELAY_ACTION_VF2PF_HANDSHAKE` = 0x0001 |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:16 | **MAJOR** - requested major version of the VFPF interface |
* | | | (use MAJOR_ANY to request latest version supported by PF) |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **MINOR** - requested minor version of the VFPF interface |
* | | | (use MINOR_ANY to request latest version supported by PF) |
* +---+-------+--------------------------------------------------------------+
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | DATA0 = MBZ |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:16 | **MAJOR** - agreed major version of the VFPF interface |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **MINOR** - agreed minor version of the VFPF interface |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_RELAY_ACTION_VF2PF_HANDSHAKE 0x0001u
#define VF2PF_HANDSHAKE_REQUEST_MSG_LEN 2u
#define VF2PF_HANDSHAKE_REQUEST_MSG_0_MBZ GUC_HXG_REQUEST_MSG_0_DATA0
#define VF2PF_HANDSHAKE_REQUEST_MSG_1_MAJOR (0xffffu << 16)
#define VF2PF_HANDSHAKE_MAJOR_ANY 0
#define VF2PF_HANDSHAKE_REQUEST_MSG_1_MINOR (0xffffu << 0)
#define VF2PF_HANDSHAKE_MINOR_ANY 0
#define VF2PF_HANDSHAKE_RESPONSE_MSG_LEN 2u
#define VF2PF_HANDSHAKE_RESPONSE_MSG_0_MBZ GUC_HXG_RESPONSE_MSG_0_DATA0
#define VF2PF_HANDSHAKE_RESPONSE_MSG_1_MAJOR (0xffffu << 16)
#define VF2PF_HANDSHAKE_RESPONSE_MSG_1_MINOR (0xffffu << 0)
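To make the request layout above concrete, a hedged sketch (not code from this patch) of how a VF driver might fill the two request dwords using the generic FIELD_PREP() helper and the HXG message fields referenced in the table; the driver's actual helpers may differ:

#include <linux/bitfield.h>

/* Illustrative only: ask the PF for the latest ABI it supports (0.0 == ANY) */
u32 request[VF2PF_HANDSHAKE_REQUEST_MSG_LEN] = {
	FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
	FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
	FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION, GUC_RELAY_ACTION_VF2PF_HANDSHAKE),
	FIELD_PREP(VF2PF_HANDSHAKE_REQUEST_MSG_1_MAJOR, VF2PF_HANDSHAKE_MAJOR_ANY) |
	FIELD_PREP(VF2PF_HANDSHAKE_REQUEST_MSG_1_MINOR, VF2PF_HANDSHAKE_MINOR_ANY),
};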
/**
* DOC: VF2PF_QUERY_RUNTIME
*
* This `Relay Message`_ is used by the VF to query values of runtime registers.
*
* On some platforms, VF drivers may not have access to the some fuse registers
* (referred here as 'runtime registers') and therefore VF drivers need to ask
* the PF driver to obtain their values.
*
* However, the list of such registers, and their values, is fully owned and
* maintained by the PF driver and the VF driver may only initiate the query
* sequence and indicate in the @START field the starting index of the next
* requested register from this predefined list.
*
* In the response, the PF driver will return tuple of 32-bit register offset and
* the 32-bit value of that register (respectively @REG_OFFSET and @REG_VALUE).
*
* The VF driver can use @LIMIT field to limit number of returned register tuples.
* If @LIMIT is unset then PF decides about number of returned register tuples.
*
* This message definition is supported from ABI version 1.0.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | DATA0 = **LIMIT** - limit number of returned entries |
* | | | (use zero to not enforce any limits on the response) |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | ACTION = _`GUC_RELAY_ACTION_VF2PF_QUERY_RUNTIME` = 0x0101 |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | DATA1 = **START** - index of the first requested entry |
* +---+-------+--------------------------------------------------------------+
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | DATA0 = **COUNT** - number of entries included in response |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | DATA1 = **REMAINING** - number of remaining entries |
* +---+-------+--------------------------------------------------------------+
* | 2 | 31:0 | DATA2 = **REG_OFFSET** - offset of register[START] |
* +---+-------+--------------------------------------------------------------+
* | 3 | 31:0 | DATA3 = **REG_VALUE** - value of register[START] |
* +---+-------+--------------------------------------------------------------+
* | | | |
* +---+-------+--------------------------------------------------------------+
* |n-1| 31:0 | REG_OFFSET - offset of register[START + x] |
* +---+-------+--------------------------------------------------------------+
* | n | 31:0 | REG_VALUE - value of register[START + x] |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_RELAY_ACTION_VF2PF_QUERY_RUNTIME 0x0101u
#define VF2PF_QUERY_RUNTIME_REQUEST_MSG_LEN 2u
#define VF2PF_QUERY_RUNTIME_REQUEST_MSG_0_LIMIT GUC_HXG_REQUEST_MSG_0_DATA0
#define VF2PF_QUERY_RUNTIME_NO_LIMIT 0u
#define VF2PF_QUERY_RUNTIME_REQUEST_MSG_1_START GUC_HXG_REQUEST_MSG_n_DATAn
#define VF2PF_QUERY_RUNTIME_RESPONSE_MSG_MIN_LEN (GUC_HXG_MSG_MIN_LEN + 1u)
#define VF2PF_QUERY_RUNTIME_RESPONSE_MSG_MAX_LEN \
(VF2PF_QUERY_RUNTIME_RESPONSE_MSG_MIN_LEN + VF2PF_QUERY_RUNTIME_MAX_COUNT * 2)
#define VF2PF_QUERY_RUNTIME_RESPONSE_MSG_0_COUNT GUC_HXG_RESPONSE_MSG_0_DATA0
#define VF2PF_QUERY_RUNTIME_MIN_COUNT 0
#define VF2PF_QUERY_RUNTIME_MAX_COUNT \
((GUC_RELAY_MSG_MAX_LEN - VF2PF_QUERY_RUNTIME_RESPONSE_MSG_MIN_LEN) / 2)
#define VF2PF_QUERY_RUNTIME_RESPONSE_MSG_1_REMAINING GUC_HXG_RESPONSE_MSG_n_DATAn
#define VF2PF_QUERY_RUNTIME_RESPONSE_DATAn_REG_OFFSETx GUC_HXG_RESPONSE_MSG_n_DATAn
#define VF2PF_QUERY_RUNTIME_RESPONSE_DATAn_REG_VALUEx GUC_HXG_RESPONSE_MSG_n_DATAn
/**
* DOC: GuC Relay Debug Actions
*
......
@@ -126,15 +126,14 @@ int xe_display_init_nommio(struct xe_device *xe)
return drmm_add_action_or_reset(&xe->drm, xe_display_fini_nommio, xe);
}
static void xe_display_fini_noirq(struct drm_device *dev, void *dummy) static void xe_display_fini_noirq(void *arg)
{
struct xe_device *xe = to_xe_device(dev); struct xe_device *xe = arg;
if (!xe->info.enable_display)
return;
intel_display_driver_remove_noirq(xe);
intel_power_domains_driver_remove(xe);
}
int xe_display_init_noirq(struct xe_device *xe)
@@ -163,12 +162,12 @@ int xe_display_init_noirq(struct xe_device *xe)
if (err)
return err;
return drmm_add_action_or_reset(&xe->drm, xe_display_fini_noirq, NULL); return devm_add_action_or_reset(xe->drm.dev, xe_display_fini_noirq, xe);
}
static void xe_display_fini_noaccel(struct drm_device *dev, void *dummy) static void xe_display_fini_noaccel(void *arg)
{
struct xe_device *xe = to_xe_device(dev); struct xe_device *xe = arg;
if (!xe->info.enable_display)
return;
@@ -187,7 +186,7 @@ int xe_display_init_noaccel(struct xe_device *xe)
if (err)
return err;
return drmm_add_action_or_reset(&xe->drm, xe_display_fini_noaccel, NULL); return devm_add_action_or_reset(xe->drm.dev, xe_display_fini_noaccel, xe);
}
int xe_display_init(struct xe_device *xe)
@@ -235,8 +234,6 @@ void xe_display_driver_remove(struct xe_device *xe)
return;
intel_display_driver_remove(xe);
intel_display_device_remove(xe);
}
/* IRQ-related functions */
@@ -300,7 +297,7 @@ static bool suspend_to_idle(void)
return false;
}
void xe_display_pm_suspend(struct xe_device *xe) void xe_display_pm_suspend(struct xe_device *xe, bool runtime)
{
bool s2idle = suspend_to_idle();
if (!xe->info.enable_display)
@@ -314,6 +311,7 @@ void xe_display_pm_suspend(struct xe_device *xe)
if (has_display(xe))
drm_kms_helper_poll_disable(&xe->drm);
if (!runtime)
intel_display_driver_suspend(xe);
intel_dp_mst_suspend(xe);
@@ -350,7 +348,7 @@ void xe_display_pm_resume_early(struct xe_device *xe)
intel_power_domains_resume(xe);
}
void xe_display_pm_resume(struct xe_device *xe) void xe_display_pm_resume(struct xe_device *xe, bool runtime)
{
if (!xe->info.enable_display)
return;
@@ -365,6 +363,7 @@ void xe_display_pm_resume(struct xe_device *xe)
/* MST sideband requires HPD interrupts enabled */
intel_dp_mst_resume(xe);
if (!runtime)
intel_display_driver_resume(xe);
intel_hpd_poll_disable(xe);
@@ -378,17 +377,31 @@ void xe_display_pm_resume(struct xe_device *xe)
intel_power_domains_enable(xe);
}
void xe_display_probe(struct xe_device *xe) static void display_device_remove(struct drm_device *dev, void *arg)
{
struct xe_device *xe = arg;
intel_display_device_remove(xe);
}
int xe_display_probe(struct xe_device *xe)
{
int err;
if (!xe->info.enable_display)
goto no_display;
intel_display_device_probe(xe);
err = drmm_add_action_or_reset(&xe->drm, display_device_remove, xe);
if (err)
return err;
if (has_display(xe))
return; return 0;
no_display:
xe->info.enable_display = false;
unset_display_features(xe);
return 0;
}
@@ -18,7 +18,7 @@ void xe_display_driver_remove(struct xe_device *xe);
int xe_display_create(struct xe_device *xe);
void xe_display_probe(struct xe_device *xe); int xe_display_probe(struct xe_device *xe);
int xe_display_init_nommio(struct xe_device *xe);
int xe_display_init_noirq(struct xe_device *xe);
@@ -34,10 +34,10 @@ void xe_display_irq_enable(struct xe_device *xe, u32 gu_misc_iir);
void xe_display_irq_reset(struct xe_device *xe);
void xe_display_irq_postinstall(struct xe_device *xe, struct xe_gt *gt);
void xe_display_pm_suspend(struct xe_device *xe); void xe_display_pm_suspend(struct xe_device *xe, bool runtime);
void xe_display_pm_suspend_late(struct xe_device *xe);
void xe_display_pm_resume_early(struct xe_device *xe);
void xe_display_pm_resume(struct xe_device *xe); void xe_display_pm_resume(struct xe_device *xe, bool runtime);
#else
@@ -47,7 +47,7 @@ static inline void xe_display_driver_remove(struct xe_device *xe) {}
static inline int xe_display_create(struct xe_device *xe) { return 0; }
static inline void xe_display_probe(struct xe_device *xe) { } static inline int xe_display_probe(struct xe_device *xe) { return 0; }
static inline int xe_display_init_nommio(struct xe_device *xe) { return 0; }
static inline int xe_display_init_noirq(struct xe_device *xe) { return 0; }
@@ -63,10 +63,10 @@ static inline void xe_display_irq_enable(struct xe_device *xe, u32 gu_misc_iir)
static inline void xe_display_irq_reset(struct xe_device *xe) {}
static inline void xe_display_irq_postinstall(struct xe_device *xe, struct xe_gt *gt) {}
static inline void xe_display_pm_suspend(struct xe_device *xe) {} static inline void xe_display_pm_suspend(struct xe_device *xe, bool runtime) {}
static inline void xe_display_pm_suspend_late(struct xe_device *xe) {}
static inline void xe_display_pm_resume_early(struct xe_device *xe) {}
static inline void xe_display_pm_resume(struct xe_device *xe) {} static inline void xe_display_pm_resume(struct xe_device *xe, bool runtime) {}
#endif /* CONFIG_DRM_XE_DISPLAY */
#endif /* _XE_DISPLAY_H_ */
@@ -13,6 +13,7 @@
#include "xe_bo.h"
#include "xe_device.h"
#include "xe_device_types.h"
#include "xe_force_wake.h"
#include "xe_gsc_proxy.h"
#include "xe_gsc_submit.h"
#include "xe_gt.h"
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef _XE_BARS_H_
#define _XE_BARS_H_
#define GTTMMADR_BAR 0 /* MMIO + GTT */
#define LMEM_BAR 2 /* VRAM */
#endif
@@ -44,9 +44,10 @@
#define GSCCS_RING_BASE 0x11a000
#define RING_TAIL(base) XE_REG((base) + 0x30)
#define TAIL_ADDR REG_GENMASK(20, 3)
#define RING_HEAD(base) XE_REG((base) + 0x34)
#define HEAD_ADDR 0x001FFFFC #define HEAD_ADDR REG_GENMASK(20, 2)
#define RING_START(base) XE_REG((base) + 0x38)
@@ -54,6 +55,8 @@
#define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
#define RING_START_UDW(base) XE_REG((base) + 0x48)
#define RING_PSMI_CTL(base) XE_REG((base) + 0x50, XE_REG_OPTION_MASKED)
#define RC_SEMA_IDLE_MSG_DISABLE REG_BIT(12)
#define WAIT_FOR_EVENT_POWER_DOWN_DISABLE REG_BIT(7)
@@ -65,6 +68,7 @@
#define RING_ACTHD_UDW(base) XE_REG((base) + 0x5c)
#define RING_DMA_FADD_UDW(base) XE_REG((base) + 0x60)
#define RING_IPEHR(base) XE_REG((base) + 0x68)
#define RING_INSTDONE(base) XE_REG((base) + 0x6c)
#define RING_ACTHD(base) XE_REG((base) + 0x74)
#define RING_DMA_FADD(base) XE_REG((base) + 0x78)
#define RING_HWS_PGA(base) XE_REG((base) + 0x80)
@@ -108,6 +112,8 @@
#define FF_DOP_CLOCK_GATE_DISABLE REG_BIT(1)
#define REPLAY_MODE_GRANULARITY REG_BIT(0)
#define INDIRECT_RING_STATE(base) XE_REG((base) + 0x108)
#define RING_BBADDR(base) XE_REG((base) + 0x140)
#define RING_BBADDR_UDW(base) XE_REG((base) + 0x168)
@@ -123,6 +129,7 @@
#define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234 + 4)
#define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244, XE_REG_OPTION_MASKED)
#define CTX_CTRL_INDIRECT_RING_STATE_ENABLE REG_BIT(4)
#define CTX_CTRL_INHIBIT_SYN_CTX_SWITCH REG_BIT(3)
#define CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT REG_BIT(0)
@@ -135,7 +142,6 @@
#define RING_VALID_MASK 0x00000001
#define RING_VALID 0x00000001
#define STOP_RING REG_BIT(8)
#define TAIL_ADDR 0x001FFFF8
#define RING_CTX_TIMESTAMP(base) XE_REG((base) + 0x3a8)
#define CSBE_DEBUG_STATUS(base) XE_REG((base) + 0x3fc)
......
@@ -59,6 +59,27 @@
#define XELP_GLOBAL_MOCS(i) XE_REG(0x4000 + (i) * 4)
#define XEHP_GLOBAL_MOCS(i) XE_REG_MCR(0x4000 + (i) * 4)
#define LE_SSE_MASK REG_GENMASK(18, 17)
#define LE_SSE(value) REG_FIELD_PREP(LE_SSE_MASK, value)
#define LE_COS_MASK REG_GENMASK(16, 15)
#define LE_COS(value) REG_FIELD_PREP(LE_COS_MASK)
#define LE_SCF_MASK REG_BIT(14)
#define LE_SCF(value) REG_FIELD_PREP(LE_SCF_MASK, value)
#define LE_PFM_MASK REG_GENMASK(13, 11)
#define LE_PFM(value) REG_FIELD_PREP(LE_PFM_MASK, value)
#define LE_SCC_MASK REG_GENMASK(10, 8)
#define LE_SCC(value) REG_FIELD_PREP(LE_SCC_MASK, value)
#define LE_RSC_MASK REG_BIT(7)
#define LE_RSC(value) REG_FIELD_PREP(LE_RSC_MASK, value)
#define LE_AOM_MASK REG_BIT(6)
#define LE_AOM(value) REG_FIELD_PREP(LE_AOM_MASK, value)
#define LE_LRUM_MASK REG_GENMASK(5, 4)
#define LE_LRUM(value) REG_FIELD_PREP(LE_LRUM_MASK, value)
#define LE_TGT_CACHE_MASK REG_GENMASK(3, 2)
#define LE_TGT_CACHE(value) REG_FIELD_PREP(LE_TGT_CACHE_MASK, value)
#define LE_CACHEABILITY_MASK REG_GENMASK(1, 0)
#define LE_CACHEABILITY(value) REG_FIELD_PREP(LE_CACHEABILITY_MASK, value)
#define CCS_AUX_INV XE_REG(0x4208)
#define VD0_AUX_INV XE_REG(0x4218)
@@ -98,6 +119,8 @@
#define FF_MODE2_TDS_TIMER_MASK REG_GENMASK(23, 16)
#define FF_MODE2_TDS_TIMER_128 REG_FIELD_PREP(FF_MODE2_TDS_TIMER_MASK, 4)
#define XEHPG_INSTDONE_GEOM_SVGUNIT XE_REG_MCR(0x666c)
#define CACHE_MODE_1 XE_REG(0x7004, XE_REG_OPTION_MASKED)
#define MSAA_OPTIMIZATION_REDUC_DISABLE REG_BIT(11)
@@ -115,6 +138,14 @@
#define FLSH_IGNORES_PSD REG_BIT(10)
#define FD_END_COLLECT REG_BIT(5)
#define SC_INSTDONE XE_REG(0x7100)
#define SC_INSTDONE_EXTRA XE_REG(0x7104)
#define SC_INSTDONE_EXTRA2 XE_REG(0x7108)
#define XEHPG_SC_INSTDONE XE_REG_MCR(0x7100)
#define XEHPG_SC_INSTDONE_EXTRA XE_REG_MCR(0x7104)
#define XEHPG_SC_INSTDONE_EXTRA2 XE_REG_MCR(0x7108)
#define COMMON_SLICE_CHICKEN4 XE_REG(0x7300, XE_REG_OPTION_MASKED)
#define DISABLE_TDC_LOAD_BALANCING_CALC REG_BIT(6)
@@ -173,8 +204,11 @@
#define MAX_MSLICES 4
#define MEML3_EN_MASK REG_GENMASK(3, 0)
#define MIRROR_FUSE1 XE_REG(0x911c)
#define XELP_EU_ENABLE XE_REG(0x9134) /* "_DISABLE" on Xe_LP */
#define XELP_EU_MASK REG_GENMASK(7, 0)
#define XELP_GT_SLICE_ENABLE XE_REG(0x9138)
#define XELP_GT_GEOMETRY_DSS_ENABLE XE_REG(0x913c)
#define GT_VEBOX_VDBOX_DISABLE XE_REG(0x9140)
@@ -275,6 +309,8 @@
#define RC_CTL_RC6_ENABLE REG_BIT(18)
#define RC_STATE XE_REG(0xa094)
#define RC_IDLE_HYSTERSIS XE_REG(0xa0ac)
#define MEDIA_POWERGATE_IDLE_HYSTERESIS XE_REG(0xa0c4)
#define RENDER_POWERGATE_IDLE_HYSTERESIS XE_REG(0xa0c8)
#define PMINTRMSK XE_REG(0xa168)
#define PMINTR_DISABLE_REDIRECT_TO_GUC REG_BIT(31)
@@ -282,11 +318,11 @@
#define FORCEWAKE_GT XE_REG(0xa188)
#define PG_ENABLE XE_REG(0xa210) #define POWERGATE_ENABLE XE_REG(0xa210)
#define VD2_MFXVDENC_POWERGATE_ENABLE REG_BIT(8) #define RENDER_POWERGATE_ENABLE REG_BIT(0)
#define VD2_HCP_POWERGATE_ENABLE REG_BIT(7) #define MEDIA_POWERGATE_ENABLE REG_BIT(1)
#define VD0_MFXVDENC_POWERGATE_ENABLE REG_BIT(4) #define VDN_HCP_POWERGATE_ENABLE(n) REG_BIT(3 + 2 * (n))
#define VD0_HCP_POWERGATE_ENABLE REG_BIT(3) #define VDN_MFXVDENC_POWERGATE_ENABLE(n) REG_BIT(4 + 2 * (n))
#define CTC_MODE XE_REG(0xa26c)
#define CTC_SHIFT_PARAMETER_MASK REG_GENMASK(2, 1)
@@ -301,9 +337,24 @@
#define XEHPC_OVRLSCCC REG_BIT(0)
/* L3 Cache Control */
#define LNCFCMOCS_REG_COUNT 32
#define XELP_LNCFCMOCS(i) XE_REG(0xb020 + (i) * 4)
#define XEHP_LNCFCMOCS(i) XE_REG_MCR(0xb020 + (i) * 4)
#define LNCFCMOCS_REG_COUNT 32 #define L3_UPPER_LKUP_MASK REG_BIT(23)
#define L3_UPPER_GLBGO_MASK REG_BIT(22)
#define L3_UPPER_IDX_CACHEABILITY_MASK REG_GENMASK(21, 20)
#define L3_UPPER_IDX_SCC_MASK REG_GENMASK(19, 17)
#define L3_UPPER_IDX_ESC_MASK REG_BIT(16)
#define L3_LKUP_MASK REG_BIT(7)
#define L3_LKUP(value) REG_FIELD_PREP(L3_LKUP_MASK, value)
#define L3_GLBGO_MASK REG_BIT(6)
#define L3_GLBGO(value) REG_FIELD_PREP(L3_GLBGO_MASK, value)
#define L3_CACHEABILITY_MASK REG_GENMASK(5, 4)
#define L3_CACHEABILITY(value) REG_FIELD_PREP(L3_CACHEABILITY_MASK, value)
#define L3_SCC_MASK REG_GENMASK(3, 1)
#define L3_SCC(value) REG_FIELD_PREP(L3_SCC_MASK, value)
#define L3_ESC_MASK REG_BIT(0)
#define L3_ESC(value) REG_FIELD_PREP(L3_ESC_MASK, value)
#define XEHP_L3NODEARBCFG XE_REG_MCR(0xb0b4)
#define XEHP_LNESPARE REG_BIT(19)
@@ -342,6 +393,9 @@
#define HALF_SLICE_CHICKEN5 XE_REG_MCR(0xe188, XE_REG_OPTION_MASKED)
#define DISABLE_SAMPLE_G_PERFORMANCE REG_BIT(0)
#define SAMPLER_INSTDONE XE_REG_MCR(0xe160)
#define ROW_INSTDONE XE_REG_MCR(0xe164)
#define SAMPLER_MODE XE_REG_MCR(0xe18c, XE_REG_OPTION_MASKED)
#define ENABLE_SMALLPL REG_BIT(15)
#define SC_DISABLE_POWER_OPTIMIZATION_EBB REG_BIT(9)
@@ -350,6 +404,7 @@
#define HALF_SLICE_CHICKEN7 XE_REG_MCR(0xe194, XE_REG_OPTION_MASKED)
#define DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA REG_BIT(15)
#define CLEAR_OPTIMIZATION_DISABLE REG_BIT(6)
#define CACHE_MODE_SS XE_REG_MCR(0xe420, XE_REG_OPTION_MASKED)
#define DISABLE_ECC REG_BIT(5)
......
@@ -40,6 +40,8 @@
#define GS_BOOTROM_JUMP_PASSED REG_FIELD_PREP(GS_BOOTROM_MASK, 0x76)
#define GS_MIA_IN_RESET REG_BIT(0)
#define GUC_HEADER_INFO XE_REG(0xc014)
#define GUC_WOPCM_SIZE XE_REG(0xc050)
#define GUC_WOPCM_SIZE_MASK REG_GENMASK(31, 12)
#define GUC_WOPCM_SIZE_LOCKED REG_BIT(0)
......
@@ -11,6 +11,8 @@
#define CTX_RING_TAIL (0x06 + 1)
#define CTX_RING_START (0x08 + 1)
#define CTX_RING_CTL (0x0a + 1)
#define CTX_TIMESTAMP (0x22 + 1)
#define CTX_INDIRECT_RING_STATE (0x26 + 1)
#define CTX_PDP0_UDW (0x30 + 1)
#define CTX_PDP0_LDW (0x32 + 1)
@@ -23,4 +25,10 @@
#define CTX_INT_SRC_REPORT_REG (CTX_LRI_INT_REPORT_PTR + 3)
#define CTX_INT_SRC_REPORT_PTR (CTX_LRI_INT_REPORT_PTR + 4)
#define INDIRECT_CTX_RING_HEAD (0x02 + 1)
#define INDIRECT_CTX_RING_TAIL (0x04 + 1)
#define INDIRECT_CTX_RING_START (0x06 + 1)
#define INDIRECT_CTX_RING_START_UDW (0x08 + 1)
#define INDIRECT_CTX_RING_CTL (0x0a + 1)
#endif
@@ -18,4 +18,11 @@
#define PVC_GT0_PLATFORM_ENERGY_STATUS XE_REG(0x28106c)
#define PVC_GT0_PACKAGE_POWER_SKU XE_REG(0x281080)
#define BMG_PACKAGE_POWER_SKU XE_REG(0x138098)
#define BMG_PACKAGE_POWER_SKU_UNIT XE_REG(0x1380dc)
#define BMG_PACKAGE_ENERGY_STATUS XE_REG(0x138120)
#define BMG_PACKAGE_RAPL_LIMIT XE_REG(0x138440)
#define BMG_PLATFORM_ENERGY_STATUS XE_REG(0x138458)
#define BMG_PLATFORM_POWER_LIMIT XE_REG(0x138460)
#endif /* _XE_PCODE_REGS_H_ */
@@ -30,6 +30,9 @@
#define XEHP_CLOCK_GATE_DIS XE_REG(0x101014)
#define SGSI_SIDECLK_DIS REG_BIT(17)
#define XEHP_MTCFG_ADDR XE_REG(0x101800)
#define TILE_COUNT REG_GENMASK(15, 8)
#define GGC XE_REG(0x108040)
#define GMS_MASK REG_GENMASK(15, 8)
#define GGMS_MASK REG_GENMASK(7, 6)
......
@@ -14,6 +14,9 @@
#define LMEM_EN REG_BIT(31)
#define LMTT_DIR_PTR REG_GENMASK(30, 0) /* in multiples of 64KB */
#define VIRTUAL_CTRL_REG XE_REG(0x10108c)
#define GUEST_GTT_UPDATE_EN REG_BIT(8)
#define VF_CAP_REG XE_REG(0x1901f8, XE_REG_OPTION_VF)
#define VF_CAP REG_BIT(0)
......
@@ -11,6 +11,7 @@ xe_live_test-y = xe_live_test_mod.o \
# Normal kunit tests
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += xe_test.o
xe_test-y = xe_test_mod.o \
xe_args_test.o \
xe_pci_test.o \
xe_rtp_test.o \
xe_wa_test.o
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright © 2024 Intel Corporation
*/
#include <kunit/test.h>
#include "xe_args.h"
static void call_args_example(struct kunit *test)
{
#define foo X, Y, Z, Q
#define bar COUNT_ARGS(foo)
#define buz CALL_ARGS(COUNT_ARGS, foo)
KUNIT_EXPECT_EQ(test, bar, 1);
KUNIT_EXPECT_EQ(test, buz, 4);
#undef foo
#undef bar
#undef buz
}
static void drop_first_arg_example(struct kunit *test)
{
#define foo X, Y, Z, Q
#define bar CALL_ARGS(COUNT_ARGS, DROP_FIRST_ARG(foo))
KUNIT_EXPECT_EQ(test, bar, 3);
#undef foo
#undef bar
}
static void first_arg_example(struct kunit *test)
{
int X = 1;
#define foo X, Y, Z, Q
#define bar FIRST_ARG(foo)
KUNIT_EXPECT_EQ(test, bar, X);
KUNIT_EXPECT_STREQ(test, __stringify(bar), "X");
#undef foo
#undef bar
}
static void last_arg_example(struct kunit *test)
{
int Q = 1;
#define foo X, Y, Z, Q
#define bar LAST_ARG(foo)
KUNIT_EXPECT_EQ(test, bar, Q);
KUNIT_EXPECT_STREQ(test, __stringify(bar), "Q");
#undef foo
#undef bar
}
static void pick_arg_example(struct kunit *test)
{
int Y = 1, Z = 2;
#define foo X, Y, Z, Q
#define bar PICK_ARG(2, foo)
#define buz PICK_ARG3(foo)
KUNIT_EXPECT_EQ(test, bar, Y);
KUNIT_EXPECT_STREQ(test, __stringify(bar), "Y");
KUNIT_EXPECT_EQ(test, buz, Z);
KUNIT_EXPECT_STREQ(test, __stringify(buz), "Z");
#undef foo
#undef bar
#undef buz
}
static void sep_comma_example(struct kunit *test)
{
#define foo(f) f(X) f(Y) f(Z) f(Q)
#define bar DROP_FIRST_ARG(foo(ARGS_SEP_COMMA __stringify))
#define buz CALL_ARGS(COUNT_ARGS, DROP_FIRST_ARG(foo(ARGS_SEP_COMMA)))
static const char * const a[] = { bar };
KUNIT_EXPECT_STREQ(test, a[0], "X");
KUNIT_EXPECT_STREQ(test, a[1], "Y");
KUNIT_EXPECT_STREQ(test, a[2], "Z");
KUNIT_EXPECT_STREQ(test, a[3], "Q");
KUNIT_EXPECT_EQ(test, buz, 4);
#undef foo
#undef bar
#undef buz
}
#define NO_ARGS
#define FOO_ARGS X, Y, Z, Q
#define MAX_ARGS -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12
static void count_args_test(struct kunit *test)
{
int count;
/* COUNT_ARGS() counts to 12 */
count = COUNT_ARGS();
KUNIT_EXPECT_EQ(test, count, 0);
count = COUNT_ARGS(1);
KUNIT_EXPECT_EQ(test, count, 1);
count = COUNT_ARGS(a, b, c, d, e);
KUNIT_EXPECT_EQ(test, count, 5);
count = COUNT_ARGS(a, b, c, d, e, f, g, h, i, j, k, l);
KUNIT_EXPECT_EQ(test, count, 12);
/* COUNT_ARGS() does not expand params */
count = COUNT_ARGS(NO_ARGS);
KUNIT_EXPECT_EQ(test, count, 1);
count = COUNT_ARGS(FOO_ARGS);
KUNIT_EXPECT_EQ(test, count, 1);
}
static void call_args_test(struct kunit *test)
{
int count;
count = CALL_ARGS(COUNT_ARGS, NO_ARGS);
KUNIT_EXPECT_EQ(test, count, 0);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, NO_ARGS), 0);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, FOO_ARGS), 4);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, FOO_ARGS, FOO_ARGS), 8);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, MAX_ARGS), 12);
}
static void drop_first_arg_test(struct kunit *test)
{
int Y = -2, Z = -3, Q = -4;
int a[] = { DROP_FIRST_ARG(FOO_ARGS) };
KUNIT_EXPECT_EQ(test, DROP_FIRST_ARG(0, -1), -1);
KUNIT_EXPECT_EQ(test, DROP_FIRST_ARG(DROP_FIRST_ARG(0, -1, -2)), -2);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, DROP_FIRST_ARG(FOO_ARGS)), 3);
KUNIT_EXPECT_EQ(test, DROP_FIRST_ARG(DROP_FIRST_ARG(DROP_FIRST_ARG(FOO_ARGS))), -4);
KUNIT_EXPECT_EQ(test, a[0], -2);
KUNIT_EXPECT_EQ(test, a[1], -3);
KUNIT_EXPECT_EQ(test, a[2], -4);
#define foo DROP_FIRST_ARG(FOO_ARGS)
#define bar DROP_FIRST_ARG(DROP_FIRST_ARG(FOO_ARGS))
#define buz DROP_FIRST_ARG(DROP_FIRST_ARG(DROP_FIRST_ARG(FOO_ARGS)))
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, foo), 3);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, bar), 2);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, buz), 1);
KUNIT_EXPECT_STREQ(test, __stringify(buz), "Q");
#undef foo
#undef bar
#undef buz
}
static void first_arg_test(struct kunit *test)
{
int X = -1;
int a[] = { FIRST_ARG(FOO_ARGS) };
KUNIT_EXPECT_EQ(test, FIRST_ARG(-1, -2), -1);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, FIRST_ARG(FOO_ARGS)), 1);
KUNIT_EXPECT_EQ(test, FIRST_ARG(FOO_ARGS), -1);
KUNIT_EXPECT_EQ(test, a[0], -1);
KUNIT_EXPECT_STREQ(test, __stringify(FIRST_ARG(FOO_ARGS)), "X");
}
static void last_arg_test(struct kunit *test)
{
int Q = -4;
int a[] = { LAST_ARG(FOO_ARGS) };
KUNIT_EXPECT_EQ(test, LAST_ARG(-1, -2), -2);
KUNIT_EXPECT_EQ(test, CALL_ARGS(COUNT_ARGS, LAST_ARG(FOO_ARGS)), 1);
KUNIT_EXPECT_EQ(test, LAST_ARG(FOO_ARGS), -4);
KUNIT_EXPECT_EQ(test, a[0], -4);
KUNIT_EXPECT_STREQ(test, __stringify(LAST_ARG(FOO_ARGS)), "Q");
KUNIT_EXPECT_EQ(test, LAST_ARG(MAX_ARGS), -12);
KUNIT_EXPECT_STREQ(test, __stringify(LAST_ARG(MAX_ARGS)), "-12");
}
static struct kunit_case args_tests[] = {
KUNIT_CASE(count_args_test),
KUNIT_CASE(call_args_example),
KUNIT_CASE(call_args_test),
KUNIT_CASE(drop_first_arg_example),
KUNIT_CASE(drop_first_arg_test),
KUNIT_CASE(first_arg_example),
KUNIT_CASE(first_arg_test),
KUNIT_CASE(last_arg_example),
KUNIT_CASE(last_arg_test),
KUNIT_CASE(pick_arg_example),
KUNIT_CASE(sep_comma_example),
{}
};
static struct kunit_suite args_test_suite = {
.name = "args",
.test_cases = args_tests,
};
kunit_test_suite(args_test_suite);
// SPDX-License-Identifier: GPL-2.0 AND MIT
/*
* Copyright © 2024 Intel Corporation
*/
#include <kunit/test.h>
#include "xe_device.h"
#include "xe_kunit_helpers.h"
#include "xe_pci_test.h"
static int pf_service_test_init(struct kunit *test)
{
struct xe_pci_fake_data fake = {
.sriov_mode = XE_SRIOV_MODE_PF,
.platform = XE_TIGERLAKE, /* some random platform */
.subplatform = XE_SUBPLATFORM_NONE,
};
struct xe_device *xe;
struct xe_gt *gt;
test->priv = &fake;
xe_kunit_helper_xe_device_test_init(test);
xe = test->priv;
KUNIT_ASSERT_EQ(test, xe_sriov_init(xe), 0);
gt = xe_device_get_gt(xe, 0);
pf_init_versions(gt);
/*
* sanity check:
* - all supported platforms VF/PF ABI versions must be defined
* - base version can't be newer than latest
*/
KUNIT_ASSERT_NE(test, 0, gt->sriov.pf.service.version.base.major);
KUNIT_ASSERT_NE(test, 0, gt->sriov.pf.service.version.latest.major);
KUNIT_ASSERT_LE(test, gt->sriov.pf.service.version.base.major,
gt->sriov.pf.service.version.latest.major);
if (gt->sriov.pf.service.version.base.major == gt->sriov.pf.service.version.latest.major)
KUNIT_ASSERT_LE(test, gt->sriov.pf.service.version.base.minor,
gt->sriov.pf.service.version.latest.minor);
test->priv = gt;
return 0;
}
static void pf_negotiate_any(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt, VF2PF_HANDSHAKE_MAJOR_ANY,
VF2PF_HANDSHAKE_MINOR_ANY,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.latest.major);
KUNIT_ASSERT_EQ(test, minor, gt->sriov.pf.service.version.latest.minor);
}
static void pf_negotiate_base_match(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.base.major,
gt->sriov.pf.service.version.base.minor,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.base.major);
KUNIT_ASSERT_EQ(test, minor, gt->sriov.pf.service.version.base.minor);
}
static void pf_negotiate_base_newer(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.base.major,
gt->sriov.pf.service.version.base.minor + 1,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.base.major);
KUNIT_ASSERT_GE(test, minor, gt->sriov.pf.service.version.base.minor);
if (gt->sriov.pf.service.version.base.major == gt->sriov.pf.service.version.latest.major)
KUNIT_ASSERT_LE(test, minor, gt->sriov.pf.service.version.latest.minor);
else
KUNIT_FAIL(test, "FIXME: don't know how to test multi-version yet!\n");
}
static void pf_negotiate_base_next(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.base.major + 1, 0,
&major, &minor));
KUNIT_ASSERT_GE(test, major, gt->sriov.pf.service.version.base.major);
KUNIT_ASSERT_LE(test, major, gt->sriov.pf.service.version.latest.major);
if (major == gt->sriov.pf.service.version.latest.major)
KUNIT_ASSERT_LE(test, minor, gt->sriov.pf.service.version.latest.minor);
else
KUNIT_FAIL(test, "FIXME: don't know how to test multi-version yet!\n");
}
static void pf_negotiate_base_older(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
if (!gt->sriov.pf.service.version.base.minor)
kunit_skip(test, "no older minor\n");
KUNIT_ASSERT_NE(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.base.major,
gt->sriov.pf.service.version.base.minor - 1,
&major, &minor));
}
static void pf_negotiate_base_prev(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_NE(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.base.major - 1, 1,
&major, &minor));
}
static void pf_negotiate_latest_match(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.latest.major,
gt->sriov.pf.service.version.latest.minor,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.latest.major);
KUNIT_ASSERT_EQ(test, minor, gt->sriov.pf.service.version.latest.minor);
}
static void pf_negotiate_latest_newer(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.latest.major,
gt->sriov.pf.service.version.latest.minor + 1,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.latest.major);
KUNIT_ASSERT_EQ(test, minor, gt->sriov.pf.service.version.latest.minor);
}
static void pf_negotiate_latest_next(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.latest.major + 1, 0,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.latest.major);
KUNIT_ASSERT_EQ(test, minor, gt->sriov.pf.service.version.latest.minor);
}
static void pf_negotiate_latest_older(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
if (!gt->sriov.pf.service.version.latest.minor)
kunit_skip(test, "no older minor\n");
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.latest.major,
gt->sriov.pf.service.version.latest.minor - 1,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.latest.major);
KUNIT_ASSERT_EQ(test, minor, gt->sriov.pf.service.version.latest.minor - 1);
}
static void pf_negotiate_latest_prev(struct kunit *test)
{
struct xe_gt *gt = test->priv;
u32 major, minor;
if (gt->sriov.pf.service.version.base.major == gt->sriov.pf.service.version.latest.major)
kunit_skip(test, "no prev major");
KUNIT_ASSERT_EQ(test, 0,
pf_negotiate_version(gt,
gt->sriov.pf.service.version.latest.major - 1,
gt->sriov.pf.service.version.base.minor + 1,
&major, &minor));
KUNIT_ASSERT_EQ(test, major, gt->sriov.pf.service.version.latest.major - 1);
KUNIT_ASSERT_GE(test, major, gt->sriov.pf.service.version.base.major);
}
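Taken together, the cases above pin down a simple negotiation policy. The following is an illustrative sketch inferred from those expectations only; it is not the driver's actual pf_negotiate_version() implementation, and the error code is an assumption:

/* Illustrative-only model of the rules the tests encode. */
static int example_negotiate(u32 base_major, u32 base_minor,
			     u32 latest_major, u32 latest_minor,
			     u32 want_major, u32 want_minor,
			     u32 *major, u32 *minor)
{
	/* "any" requests resolve to the newest ABI the PF supports */
	if (want_major == VF2PF_HANDSHAKE_MAJOR_ANY) {
		*major = latest_major;
		*minor = latest_minor;
		return 0;
	}

	/* anything newer than latest is clamped down to latest */
	if (want_major > latest_major ||
	    (want_major == latest_major && want_minor > latest_minor)) {
		*major = latest_major;
		*minor = latest_minor;
		return 0;
	}

	/* anything older than base is rejected */
	if (want_major < base_major ||
	    (want_major == base_major && want_minor < base_minor))
		return -EINVAL;

	/* requests between base and latest are honoured as-is */
	*major = want_major;
	*minor = want_minor;
	return 0;
}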
static struct kunit_case pf_service_test_cases[] = {
KUNIT_CASE(pf_negotiate_any),
KUNIT_CASE(pf_negotiate_base_match),
KUNIT_CASE(pf_negotiate_base_newer),
KUNIT_CASE(pf_negotiate_base_next),
KUNIT_CASE(pf_negotiate_base_older),
KUNIT_CASE(pf_negotiate_base_prev),
KUNIT_CASE(pf_negotiate_latest_match),
KUNIT_CASE(pf_negotiate_latest_newer),
KUNIT_CASE(pf_negotiate_latest_next),
KUNIT_CASE(pf_negotiate_latest_older),
KUNIT_CASE(pf_negotiate_latest_prev),
{}
};
static struct kunit_suite pf_service_suite = {
.name = "pf_service",
.test_cases = pf_service_test_cases,
.init = pf_service_test_init,
};
kunit_test_suite(pf_service_suite);
...@@ -62,36 +62,6 @@ static int run_sanity_job(struct xe_migrate *m, struct xe_device *xe, ...@@ -62,36 +62,6 @@ static int run_sanity_job(struct xe_migrate *m, struct xe_device *xe,
return 0; return 0;
} }
static void
sanity_populate_cb(struct xe_migrate_pt_update *pt_update,
struct xe_tile *tile, struct iosys_map *map, void *dst,
u32 qword_ofs, u32 num_qwords,
const struct xe_vm_pgtable_update *update)
{
struct migrate_test_params *p =
to_migrate_test_params(xe_cur_kunit_priv(XE_TEST_LIVE_MIGRATE));
int i;
u64 *ptr = dst;
u64 value;
for (i = 0; i < num_qwords; i++) {
value = (qword_ofs + i - update->ofs) * 0x1111111111111111ULL;
if (map)
xe_map_wr(tile_to_xe(tile), map, (qword_ofs + i) *
sizeof(u64), u64, value);
else
ptr[i] = value;
}
kunit_info(xe_cur_kunit(), "Used %s.\n", map ? "CPU" : "GPU");
if (p->force_gpu && map)
KUNIT_FAIL(xe_cur_kunit(), "GPU pagetable update used CPU.\n");
}
static const struct xe_migrate_pt_update_ops sanity_ops = {
.populate = sanity_populate_cb,
};
#define check(_retval, _expected, str, _test) \ #define check(_retval, _expected, str, _test) \
do { if ((_retval) != (_expected)) { \ do { if ((_retval) != (_expected)) { \
KUNIT_FAIL(_test, "Sanity check failed: " str \ KUNIT_FAIL(_test, "Sanity check failed: " str \
...@@ -209,57 +179,6 @@ static void test_copy_vram(struct xe_migrate *m, struct xe_bo *bo, ...@@ -209,57 +179,6 @@ static void test_copy_vram(struct xe_migrate *m, struct xe_bo *bo,
test_copy(m, bo, test, region); test_copy(m, bo, test, region);
} }
static void test_pt_update(struct xe_migrate *m, struct xe_bo *pt,
struct kunit *test, bool force_gpu)
{
struct xe_device *xe = tile_to_xe(m->tile);
struct dma_fence *fence;
u64 retval, expected;
ktime_t then, now;
int i;
struct xe_vm_pgtable_update update = {
.ofs = 1,
.qwords = 0x10,
.pt_bo = pt,
};
struct xe_migrate_pt_update pt_update = {
.ops = &sanity_ops,
};
struct migrate_test_params p = {
.base.id = XE_TEST_LIVE_MIGRATE,
.force_gpu = force_gpu,
};
test->priv = &p;
/* Test xe_migrate_update_pgtables() updates the pagetable as expected */
expected = 0xf0f0f0f0f0f0f0f0ULL;
xe_map_memset(xe, &pt->vmap, 0, (u8)expected, pt->size);
then = ktime_get();
fence = xe_migrate_update_pgtables(m, m->q->vm, NULL, m->q, &update, 1,
NULL, 0, &pt_update);
now = ktime_get();
if (sanity_fence_failed(xe, fence, "Migration pagetable update", test))
return;
kunit_info(test, "Updating without syncing took %llu us,\n",
(unsigned long long)ktime_to_us(ktime_sub(now, then)));
dma_fence_put(fence);
retval = xe_map_rd(xe, &pt->vmap, 0, u64);
check(retval, expected, "PTE[0] must stay untouched", test);
for (i = 0; i < update.qwords; i++) {
retval = xe_map_rd(xe, &pt->vmap, (update.ofs + i) * 8, u64);
check(retval, i * 0x1111111111111111ULL, "PTE update", test);
}
retval = xe_map_rd(xe, &pt->vmap, 8 * (update.ofs + update.qwords),
u64);
check(retval, expected, "PTE[0x11] must stay untouched", test);
}
static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test) static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
{ {
struct xe_tile *tile = m->tile; struct xe_tile *tile = m->tile;
...@@ -398,11 +317,6 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test) ...@@ -398,11 +317,6 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
test_copy_vram(m, big, test); test_copy_vram(m, big, test);
} }
kunit_info(test, "Testing page table update using CPU if GPU idle.\n");
test_pt_update(m, pt, test, false);
kunit_info(test, "Testing page table update using GPU\n");
test_pt_update(m, pt, test, true);
out: out:
xe_bb_free(bb, NULL); xe_bb_free(bb, NULL);
free_tiny: free_tiny:
...@@ -430,7 +344,7 @@ static int migrate_test_run_device(struct xe_device *xe) ...@@ -430,7 +344,7 @@ static int migrate_test_run_device(struct xe_device *xe)
struct xe_migrate *m = tile->migrate; struct xe_migrate *m = tile->migrate;
kunit_info(test, "Testing tile id %d.\n", id); kunit_info(test, "Testing tile id %d.\n", id);
xe_vm_lock(m->q->vm, true); xe_vm_lock(m->q->vm, false);
xe_migrate_sanity_test(m, test); xe_migrate_sanity_test(m, test);
xe_vm_unlock(m->q->vm); xe_vm_unlock(m->q->vm);
} }
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef _XE_ARGS_H_
#define _XE_ARGS_H_
#include <linux/args.h>
/*
* Why don't the following macros have the XE prefix?
*
* Once we find more potential users outside of the Xe driver, we plan to move
* all of the following macros unchanged to linux/args.h.
*/
/**
* CALL_ARGS - Invoke a macro, but allow parameters to be expanded beforehand.
* @f: name of the macro to invoke
* @args: arguments for the macro
*
 * This macro allows calling macros whose names might be generated, or when we
 * want to make sure their arguments will be correctly expanded.
*
* Example:
*
* #define foo X,Y,Z,Q
* #define bar COUNT_ARGS(foo)
* #define buz CALL_ARGS(COUNT_ARGS, foo)
*
 * With the above definitions, bar expands to 1 while buz expands to 4.
*/
#define CALL_ARGS(f, args...) __CALL_ARGS(f, args)
#define __CALL_ARGS(f, args...) f(args)
/**
* DROP_FIRST_ARG - Returns all arguments except the first one.
* @args: arguments
*
 * This helper macro allows manipulating the argument list before passing it
* to the next level macro.
*
* Example:
*
* #define foo X,Y,Z,Q
* #define bar CALL_ARGS(COUNT_ARGS, DROP_FIRST_ARG(foo))
*
 * With the above definitions, bar expands to 3.
*/
#define DROP_FIRST_ARG(args...) __DROP_FIRST_ARG(args)
#define __DROP_FIRST_ARG(a, b...) b
/**
* FIRST_ARG - Returns the first argument.
* @args: arguments
*
 * This helper macro allows manipulating the argument list before passing it
* to the next level macro.
*
* Example:
*
* #define foo X,Y,Z,Q
* #define bar FIRST_ARG(foo)
*
 * With the above definitions, bar expands to X.
*/
#define FIRST_ARG(args...) __FIRST_ARG(args)
#define __FIRST_ARG(a, b...) a
/**
* LAST_ARG - Returns the last argument.
* @args: arguments
*
 * This helper macro allows manipulating the argument list before passing it
* to the next level macro.
*
* Like COUNT_ARGS() this macro works up to 12 arguments.
*
* Example:
*
* #define foo X,Y,Z,Q
* #define bar LAST_ARG(foo)
*
 * With the above definitions, bar expands to Q.
*/
#define LAST_ARG(args...) __LAST_ARG(args)
#define __LAST_ARG(args...) PICK_ARG(COUNT_ARGS(args), args)
/**
* PICK_ARG - Returns the n-th argument.
* @n: argument number to be returned
* @args: arguments
*
 * This helper macro allows manipulating the argument list before passing it
* to the next level macro.
*
* Like COUNT_ARGS() this macro supports n up to 12.
* Specialized macros PICK_ARG1() to PICK_ARG12() are also available.
*
* Example:
*
* #define foo X,Y,Z,Q
* #define bar PICK_ARG(2, foo)
* #define buz PICK_ARG3(foo)
*
 * With the above definitions, bar expands to Y and buz expands to Z.
*/
#define PICK_ARG(n, args...) __PICK_ARG(n, args)
#define __PICK_ARG(n, args...) CALL_ARGS(CONCATENATE(PICK_ARG, n), args)
#define PICK_ARG1(args...) FIRST_ARG(args)
#define PICK_ARG2(args...) PICK_ARG1(DROP_FIRST_ARG(args))
#define PICK_ARG3(args...) PICK_ARG2(DROP_FIRST_ARG(args))
#define PICK_ARG4(args...) PICK_ARG3(DROP_FIRST_ARG(args))
#define PICK_ARG5(args...) PICK_ARG4(DROP_FIRST_ARG(args))
#define PICK_ARG6(args...) PICK_ARG5(DROP_FIRST_ARG(args))
#define PICK_ARG7(args...) PICK_ARG6(DROP_FIRST_ARG(args))
#define PICK_ARG8(args...) PICK_ARG7(DROP_FIRST_ARG(args))
#define PICK_ARG9(args...) PICK_ARG8(DROP_FIRST_ARG(args))
#define PICK_ARG10(args...) PICK_ARG9(DROP_FIRST_ARG(args))
#define PICK_ARG11(args...) PICK_ARG10(DROP_FIRST_ARG(args))
#define PICK_ARG12(args...) PICK_ARG11(DROP_FIRST_ARG(args))
/**
* ARGS_SEP_COMMA - Definition of a comma character.
*
 * This definition can be used in cases where an intermediate macro expects a
 * fixed number of arguments, but we want to pass more arguments which can
* be properly evaluated only by the next level macro.
*
* Example:
*
* #define foo(f) f(X) f(Y) f(Z) f(Q)
* #define bar DROP_FIRST_ARG(foo(ARGS_SEP_COMMA __stringify))
* #define buz CALL_ARGS(COUNT_ARGS, DROP_FIRST_ARG(foo(ARGS_SEP_COMMA)))
*
 * With the above definitions, bar expands to
* "X", "Y", "Z", "Q"
* and buz expands to 4.
*/
#define ARGS_SEP_COMMA ,
#endif
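As a rough, self-contained illustration of how these helpers expand, the userspace snippet below exercises them outside the kernel. It assumes the macros above are available (for example by including this header); the COUNT_ARGS()/CONCATENATE() stand-ins, limited to four arguments, replace the <linux/args.h> versions and exist only so the sketch compiles on its own.

#include <assert.h>

/* Stand-ins for the <linux/args.h> helpers, limited to 4 arguments. */
#define __EX_CONCAT(a, b) a##b
#define CONCATENATE(a, b) __EX_CONCAT(a, b)
#define COUNT_ARGS(args...) __COUNT_ARGS(args, 4, 3, 2, 1)
#define __COUNT_ARGS(_1, _2, _3, _4, n, ...) n

#define FOO_ARGS 10, 20, 30, 40

int main(void)
{
	/* LAST_ARG() resolves to the final element of the list */
	assert(LAST_ARG(FOO_ARGS) == 40);
	/* PICK_ARG(2, ...) returns the second element */
	assert(PICK_ARG(2, FOO_ARGS) == 20);
	/* DROP_FIRST_ARG() shrinks the list by one before counting */
	assert(CALL_ARGS(COUNT_ARGS, DROP_FIRST_ARG(FOO_ARGS)) == 3);
	/* FIRST_ARG() keeps only the head of the list */
	assert(FIRST_ARG(FOO_ARGS) == 10);
	return 0;
}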
...@@ -109,11 +109,11 @@ ...@@ -109,11 +109,11 @@
#define xe_assert_msg(xe, condition, msg, arg...) ({ \ #define xe_assert_msg(xe, condition, msg, arg...) ({ \
const struct xe_device *__xe = (xe); \ const struct xe_device *__xe = (xe); \
__xe_assert_msg(__xe, condition, \ __xe_assert_msg(__xe, condition, \
"platform: %d subplatform: %d\n" \ "platform: %s subplatform: %d\n" \
"graphics: %s %u.%02u step %s\n" \ "graphics: %s %u.%02u step %s\n" \
"media: %s %u.%02u step %s\n" \ "media: %s %u.%02u step %s\n" \
msg, \ msg, \
__xe->info.platform, __xe->info.subplatform, \ __xe->info.platform_name, __xe->info.subplatform, \
__xe->info.graphics_name, \ __xe->info.graphics_name, \
__xe->info.graphics_verx100 / 100, \ __xe->info.graphics_verx100 / 100, \
__xe->info.graphics_verx100 % 100, \ __xe->info.graphics_verx100 % 100, \
......
...@@ -6,7 +6,7 @@ ...@@ -6,7 +6,7 @@
#include "xe_bb.h" #include "xe_bb.h"
#include "instructions/xe_mi_commands.h" #include "instructions/xe_mi_commands.h"
#include "regs/xe_gpu_commands.h" #include "xe_assert.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_exec_queue_types.h" #include "xe_exec_queue_types.h"
#include "xe_gt.h" #include "xe_gt.h"
......
...@@ -95,6 +95,20 @@ bool xe_bo_is_stolen(struct xe_bo *bo) ...@@ -95,6 +95,20 @@ bool xe_bo_is_stolen(struct xe_bo *bo)
return bo->ttm.resource->mem_type == XE_PL_STOLEN; return bo->ttm.resource->mem_type == XE_PL_STOLEN;
} }
/**
* xe_bo_has_single_placement - check if BO is placed only in one memory location
* @bo: The BO
*
* This function checks whether a given BO is placed in only one memory location.
*
* Returns: true if the BO is placed in a single memory location, false otherwise.
*
*/
bool xe_bo_has_single_placement(struct xe_bo *bo)
{
return bo->placement.num_placement == 1;
}
/** /**
* xe_bo_is_stolen_devmem - check if BO is of stolen type accessed via PCI BAR * xe_bo_is_stolen_devmem - check if BO is of stolen type accessed via PCI BAR
* @bo: The BO * @bo: The BO
...@@ -302,6 +316,18 @@ static int xe_tt_map_sg(struct ttm_tt *tt) ...@@ -302,6 +316,18 @@ static int xe_tt_map_sg(struct ttm_tt *tt)
return 0; return 0;
} }
static void xe_tt_unmap_sg(struct ttm_tt *tt)
{
struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
if (xe_tt->sg) {
dma_unmap_sgtable(xe_tt->dev, xe_tt->sg,
DMA_BIDIRECTIONAL, 0);
sg_free_table(xe_tt->sg);
xe_tt->sg = NULL;
}
}
struct sg_table *xe_bo_sg(struct xe_bo *bo) struct sg_table *xe_bo_sg(struct xe_bo *bo)
{ {
struct ttm_tt *tt = bo->ttm.ttm; struct ttm_tt *tt = bo->ttm.ttm;
...@@ -377,27 +403,15 @@ static int xe_ttm_tt_populate(struct ttm_device *ttm_dev, struct ttm_tt *tt, ...@@ -377,27 +403,15 @@ static int xe_ttm_tt_populate(struct ttm_device *ttm_dev, struct ttm_tt *tt,
if (err) if (err)
return err; return err;
/* A follow up may move this xe_bo_move when BO is moved to XE_PL_TT */
err = xe_tt_map_sg(tt);
if (err)
ttm_pool_free(&ttm_dev->pool, tt);
return err; return err;
} }
static void xe_ttm_tt_unpopulate(struct ttm_device *ttm_dev, struct ttm_tt *tt) static void xe_ttm_tt_unpopulate(struct ttm_device *ttm_dev, struct ttm_tt *tt)
{ {
struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
if (tt->page_flags & TTM_TT_FLAG_EXTERNAL) if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
return; return;
if (xe_tt->sg) { xe_tt_unmap_sg(tt);
dma_unmap_sgtable(xe_tt->dev, xe_tt->sg,
DMA_BIDIRECTIONAL, 0);
sg_free_table(xe_tt->sg);
xe_tt->sg = NULL;
}
return ttm_pool_free(&ttm_dev->pool, tt); return ttm_pool_free(&ttm_dev->pool, tt);
} }
...@@ -628,17 +642,21 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict, ...@@ -628,17 +642,21 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
bool handle_system_ccs = (!IS_DGFX(xe) && xe_bo_needs_ccs_pages(bo) && bool handle_system_ccs = (!IS_DGFX(xe) && xe_bo_needs_ccs_pages(bo) &&
ttm && ttm_tt_is_populated(ttm)) ? true : false; ttm && ttm_tt_is_populated(ttm)) ? true : false;
int ret = 0; int ret = 0;
/* Bo creation path, moving to system or TT. */ /* Bo creation path, moving to system or TT. */
if ((!old_mem && ttm) && !handle_system_ccs) { if ((!old_mem && ttm) && !handle_system_ccs) {
if (new_mem->mem_type == XE_PL_TT)
ret = xe_tt_map_sg(ttm);
if (!ret)
ttm_bo_move_null(ttm_bo, new_mem); ttm_bo_move_null(ttm_bo, new_mem);
return 0; goto out;
} }
if (ttm_bo->type == ttm_bo_type_sg) { if (ttm_bo->type == ttm_bo_type_sg) {
ret = xe_bo_move_notify(bo, ctx); ret = xe_bo_move_notify(bo, ctx);
if (!ret) if (!ret)
ret = xe_bo_move_dmabuf(ttm_bo, new_mem); ret = xe_bo_move_dmabuf(ttm_bo, new_mem);
goto out; return ret;
} }
tt_has_data = ttm && (ttm_tt_is_populated(ttm) || tt_has_data = ttm && (ttm_tt_is_populated(ttm) ||
...@@ -650,6 +668,12 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict, ...@@ -650,6 +668,12 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
needs_clear = (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC) || needs_clear = (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC) ||
(!ttm && ttm_bo->type == ttm_bo_type_device); (!ttm && ttm_bo->type == ttm_bo_type_device);
if (new_mem->mem_type == XE_PL_TT) {
ret = xe_tt_map_sg(ttm);
if (ret)
goto out;
}
if ((move_lacks_source && !needs_clear)) { if ((move_lacks_source && !needs_clear)) {
ttm_bo_move_null(ttm_bo, new_mem); ttm_bo_move_null(ttm_bo, new_mem);
goto out; goto out;
...@@ -786,8 +810,11 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict, ...@@ -786,8 +810,11 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
xe_pm_runtime_put(xe); xe_pm_runtime_put(xe);
out: out:
return ret; if ((!ttm_bo->resource || ttm_bo->resource->mem_type == XE_PL_SYSTEM) &&
ttm_bo->ttm)
xe_tt_unmap_sg(ttm_bo->ttm);
return ret;
} }
/** /**
...@@ -1731,11 +1758,10 @@ void xe_bo_unpin_external(struct xe_bo *bo) ...@@ -1731,11 +1758,10 @@ void xe_bo_unpin_external(struct xe_bo *bo)
xe_assert(xe, xe_bo_is_pinned(bo)); xe_assert(xe, xe_bo_is_pinned(bo));
xe_assert(xe, xe_bo_is_user(bo)); xe_assert(xe, xe_bo_is_user(bo));
if (bo->ttm.pin_count == 1 && !list_empty(&bo->pinned_link)) {
spin_lock(&xe->pinned.lock); spin_lock(&xe->pinned.lock);
if (bo->ttm.pin_count == 1 && !list_empty(&bo->pinned_link))
list_del_init(&bo->pinned_link); list_del_init(&bo->pinned_link);
spin_unlock(&xe->pinned.lock); spin_unlock(&xe->pinned.lock);
}
ttm_bo_unpin(&bo->ttm); ttm_bo_unpin(&bo->ttm);
...@@ -1758,9 +1784,8 @@ void xe_bo_unpin(struct xe_bo *bo) ...@@ -1758,9 +1784,8 @@ void xe_bo_unpin(struct xe_bo *bo)
struct ttm_place *place = &(bo->placements[0]); struct ttm_place *place = &(bo->placements[0]);
if (mem_type_is_vram(place->mem_type)) { if (mem_type_is_vram(place->mem_type)) {
xe_assert(xe, !list_empty(&bo->pinned_link));
spin_lock(&xe->pinned.lock); spin_lock(&xe->pinned.lock);
xe_assert(xe, !list_empty(&bo->pinned_link));
list_del_init(&bo->pinned_link); list_del_init(&bo->pinned_link);
spin_unlock(&xe->pinned.lock); spin_unlock(&xe->pinned.lock);
} }
......
...@@ -206,6 +206,7 @@ bool mem_type_is_vram(u32 mem_type); ...@@ -206,6 +206,7 @@ bool mem_type_is_vram(u32 mem_type);
bool xe_bo_is_vram(struct xe_bo *bo); bool xe_bo_is_vram(struct xe_bo *bo);
bool xe_bo_is_stolen(struct xe_bo *bo); bool xe_bo_is_stolen(struct xe_bo *bo);
bool xe_bo_is_stolen_devmem(struct xe_bo *bo); bool xe_bo_is_stolen_devmem(struct xe_bo *bo);
bool xe_bo_has_single_placement(struct xe_bo *bo);
uint64_t vram_region_gpu_offset(struct ttm_resource *res); uint64_t vram_region_gpu_offset(struct ttm_resource *res);
bool xe_bo_can_migrate(struct xe_bo *bo, u32 mem_type); bool xe_bo_can_migrate(struct xe_bo *bo, u32 mem_type);
......
...@@ -12,7 +12,10 @@ ...@@ -12,7 +12,10 @@
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_force_wake.h"
#include "xe_gt_debugfs.h" #include "xe_gt_debugfs.h"
#include "xe_gt_printk.h"
#include "xe_guc_ads.h"
#include "xe_pm.h" #include "xe_pm.h"
#include "xe_sriov.h" #include "xe_sriov.h"
#include "xe_step.h" #include "xe_step.h"
...@@ -118,6 +121,58 @@ static const struct file_operations forcewake_all_fops = { ...@@ -118,6 +121,58 @@ static const struct file_operations forcewake_all_fops = {
.release = forcewake_release, .release = forcewake_release,
}; };
static ssize_t wedged_mode_show(struct file *f, char __user *ubuf,
size_t size, loff_t *pos)
{
struct xe_device *xe = file_inode(f)->i_private;
char buf[32];
int len = 0;
len = scnprintf(buf, sizeof(buf), "%d\n", xe->wedged.mode);
return simple_read_from_buffer(ubuf, size, pos, buf, len);
}
static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
size_t size, loff_t *pos)
{
struct xe_device *xe = file_inode(f)->i_private;
struct xe_gt *gt;
u32 wedged_mode;
ssize_t ret;
u8 id;
ret = kstrtouint_from_user(ubuf, size, 0, &wedged_mode);
if (ret)
return ret;
if (wedged_mode > 2)
return -EINVAL;
if (xe->wedged.mode == wedged_mode)
return 0;
xe->wedged.mode = wedged_mode;
xe_pm_runtime_get(xe);
for_each_gt(gt, xe, id) {
ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads);
if (ret) {
xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
/* Drop the runtime PM reference taken above before bailing out. */
xe_pm_runtime_put(xe);
return -EIO;
}
}
xe_pm_runtime_put(xe);
return size;
}
static const struct file_operations wedged_mode_fops = {
.owner = THIS_MODULE,
.read = wedged_mode_show,
.write = wedged_mode_set,
};
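For illustration only, a userspace tool could query this node roughly as below; the debugfs mount point and DRM minor number are assumptions, not something this patch defines.

#include <stdio.h>

/* Hypothetical sketch: read the current wedged_mode for DRM minor 0. */
static int read_wedged_mode(void)
{
	FILE *f = fopen("/sys/kernel/debug/dri/0/wedged_mode", "r");
	int mode = -1;

	if (!f)
		return -1;

	if (fscanf(f, "%d", &mode) != 1)
		mode = -1;

	fclose(f);
	return mode;
}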
void xe_debugfs_register(struct xe_device *xe) void xe_debugfs_register(struct xe_device *xe)
{ {
struct ttm_device *bdev = &xe->ttm; struct ttm_device *bdev = &xe->ttm;
...@@ -135,6 +190,9 @@ void xe_debugfs_register(struct xe_device *xe) ...@@ -135,6 +190,9 @@ void xe_debugfs_register(struct xe_device *xe)
debugfs_create_file("forcewake_all", 0400, root, xe, debugfs_create_file("forcewake_all", 0400, root, xe,
&forcewake_all_fops); &forcewake_all_fops);
debugfs_create_file("wedged_mode", 0400, root, xe,
&wedged_mode_fops);
for (mem_type = XE_PL_VRAM0; mem_type <= XE_PL_VRAM1; ++mem_type) { for (mem_type = XE_PL_VRAM0; mem_type <= XE_PL_VRAM1; ++mem_type) {
man = ttm_manager_type(bdev, mem_type); man = ttm_manager_type(bdev, mem_type);
......
...@@ -110,6 +110,7 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset, ...@@ -110,6 +110,7 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
drm_printf(&p, "Snapshot time: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec); drm_printf(&p, "Snapshot time: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec);
ts = ktime_to_timespec64(ss->boot_time); ts = ktime_to_timespec64(ss->boot_time);
drm_printf(&p, "Uptime: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec); drm_printf(&p, "Uptime: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec);
drm_printf(&p, "Process: %s\n", ss->process_name);
xe_device_snapshot_print(xe, &p); xe_device_snapshot_print(xe, &p);
drm_printf(&p, "\n**** GuC CT ****\n"); drm_printf(&p, "\n**** GuC CT ****\n");
...@@ -166,12 +167,24 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump, ...@@ -166,12 +167,24 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
enum xe_hw_engine_id id; enum xe_hw_engine_id id;
u32 adj_logical_mask = q->logical_mask; u32 adj_logical_mask = q->logical_mask;
u32 width_mask = (0x1 << q->width) - 1; u32 width_mask = (0x1 << q->width) - 1;
const char *process_name = "no process";
struct task_struct *task = NULL;
int i; int i;
bool cookie; bool cookie;
ss->snapshot_time = ktime_get_real(); ss->snapshot_time = ktime_get_real();
ss->boot_time = ktime_get_boottime(); ss->boot_time = ktime_get_boottime();
if (q->vm && q->vm->xef) {
task = get_pid_task(q->vm->xef->drm->pid, PIDTYPE_PID);
if (task)
process_name = task->comm;
}
strscpy(ss->process_name, process_name);
if (task)
put_task_struct(task);
ss->gt = q->gt; ss->gt = q->gt;
INIT_WORK(&ss->work, xe_devcoredump_deferred_snap_work); INIT_WORK(&ss->work, xe_devcoredump_deferred_snap_work);
...@@ -238,13 +251,15 @@ void xe_devcoredump(struct xe_sched_job *job) ...@@ -238,13 +251,15 @@ void xe_devcoredump(struct xe_sched_job *job)
xe_devcoredump_read, xe_devcoredump_free); xe_devcoredump_read, xe_devcoredump_free);
} }
static void xe_driver_devcoredump_fini(struct drm_device *drm, void *arg) static void xe_driver_devcoredump_fini(void *arg)
{ {
struct drm_device *drm = arg;
dev_coredump_put(drm->dev); dev_coredump_put(drm->dev);
} }
int xe_devcoredump_init(struct xe_device *xe) int xe_devcoredump_init(struct xe_device *xe)
{ {
return drmm_add_action_or_reset(&xe->drm, xe_driver_devcoredump_fini, xe); return devm_add_action_or_reset(xe->drm.dev, xe_driver_devcoredump_fini, &xe->drm);
} }
#endif #endif
...@@ -26,6 +26,8 @@ struct xe_devcoredump_snapshot { ...@@ -26,6 +26,8 @@ struct xe_devcoredump_snapshot {
ktime_t snapshot_time; ktime_t snapshot_time;
/** @boot_time: Relative boot time so the uptime can be calculated. */ /** @boot_time: Relative boot time so the uptime can be calculated. */
ktime_t boot_time; ktime_t boot_time;
/** @process_name: Name of process that triggered this gpu hang */
char process_name[TASK_COMM_LEN];
/** @gt: Affected GT, used by forcewake for delayed capture */ /** @gt: Affected GT, used by forcewake for delayed capture */
struct xe_gt *gt; struct xe_gt *gt;
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
#include "xe_device.h" #include "xe_device.h"
#include <linux/delay.h>
#include <linux/units.h> #include <linux/units.h>
#include <drm/drm_aperture.h> #include <drm/drm_aperture.h>
...@@ -17,6 +18,7 @@ ...@@ -17,6 +18,7 @@
#include <drm/xe_drm.h> #include <drm/xe_drm.h>
#include "display/xe_display.h" #include "display/xe_display.h"
#include "instructions/xe_gpu_commands.h"
#include "regs/xe_gt_regs.h" #include "regs/xe_gt_regs.h"
#include "regs/xe_regs.h" #include "regs/xe_regs.h"
#include "xe_bo.h" #include "xe_bo.h"
...@@ -27,10 +29,14 @@ ...@@ -27,10 +29,14 @@
#include "xe_drv.h" #include "xe_drv.h"
#include "xe_exec.h" #include "xe_exec.h"
#include "xe_exec_queue.h" #include "xe_exec_queue.h"
#include "xe_force_wake.h"
#include "xe_ggtt.h" #include "xe_ggtt.h"
#include "xe_gsc_proxy.h" #include "xe_gsc_proxy.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_mcr.h" #include "xe_gt_mcr.h"
#include "xe_gt_printk.h"
#include "xe_gt_sriov_vf.h"
#include "xe_guc.h"
#include "xe_hwmon.h" #include "xe_hwmon.h"
#include "xe_irq.h" #include "xe_irq.h"
#include "xe_memirq.h" #include "xe_memirq.h"
...@@ -45,6 +51,7 @@ ...@@ -45,6 +51,7 @@
#include "xe_ttm_stolen_mgr.h" #include "xe_ttm_stolen_mgr.h"
#include "xe_ttm_sys_mgr.h" #include "xe_ttm_sys_mgr.h"
#include "xe_vm.h" #include "xe_vm.h"
#include "xe_vram.h"
#include "xe_wait_user_fence.h" #include "xe_wait_user_fence.h"
static int xe_file_open(struct drm_device *dev, struct drm_file *file) static int xe_file_open(struct drm_device *dev, struct drm_file *file)
...@@ -90,12 +97,16 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file) ...@@ -90,12 +97,16 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
struct xe_exec_queue *q; struct xe_exec_queue *q;
unsigned long idx; unsigned long idx;
mutex_lock(&xef->exec_queue.lock); /*
* No need for exec_queue.lock here as there is no contention for it
* when FD is closing as IOCTLs presumably can't be modifying the
* xarray. Taking exec_queue.lock here causes undue dependency on
* vm->lock taken during xe_exec_queue_kill().
*/
xa_for_each(&xef->exec_queue.xa, idx, q) { xa_for_each(&xef->exec_queue.xa, idx, q) {
xe_exec_queue_kill(q); xe_exec_queue_kill(q);
xe_exec_queue_put(q); xe_exec_queue_put(q);
} }
mutex_unlock(&xef->exec_queue.lock);
xa_destroy(&xef->exec_queue.xa); xa_destroy(&xef->exec_queue.xa);
mutex_destroy(&xef->exec_queue.lock); mutex_destroy(&xef->exec_queue.lock);
mutex_lock(&xef->vm.lock); mutex_lock(&xef->vm.lock);
...@@ -138,6 +149,9 @@ static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ...@@ -138,6 +149,9 @@ static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
struct xe_device *xe = to_xe_device(file_priv->minor->dev); struct xe_device *xe = to_xe_device(file_priv->minor->dev);
long ret; long ret;
if (xe_device_wedged(xe))
return -ECANCELED;
ret = xe_pm_runtime_get_ioctl(xe); ret = xe_pm_runtime_get_ioctl(xe);
if (ret >= 0) if (ret >= 0)
ret = drm_ioctl(file, cmd, arg); ret = drm_ioctl(file, cmd, arg);
...@@ -153,6 +167,9 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo ...@@ -153,6 +167,9 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
struct xe_device *xe = to_xe_device(file_priv->minor->dev); struct xe_device *xe = to_xe_device(file_priv->minor->dev);
long ret; long ret;
if (xe_device_wedged(xe))
return -ECANCELED;
ret = xe_pm_runtime_get_ioctl(xe); ret = xe_pm_runtime_get_ioctl(xe);
if (ret >= 0) if (ret >= 0)
ret = drm_compat_ioctl(file, cmd, arg); ret = drm_compat_ioctl(file, cmd, arg);
...@@ -180,13 +197,6 @@ static const struct file_operations xe_driver_fops = { ...@@ -180,13 +197,6 @@ static const struct file_operations xe_driver_fops = {
#endif #endif
}; };
static void xe_driver_release(struct drm_device *dev)
{
struct xe_device *xe = to_xe_device(dev);
pci_set_drvdata(to_pci_dev(xe->drm.dev), NULL);
}
static struct drm_driver driver = { static struct drm_driver driver = {
/* Don't use MTRRs here; the Xserver or userspace app should /* Don't use MTRRs here; the Xserver or userspace app should
* deal with them for Intel hardware. * deal with them for Intel hardware.
...@@ -205,8 +215,6 @@ static struct drm_driver driver = { ...@@ -205,8 +215,6 @@ static struct drm_driver driver = {
#ifdef CONFIG_PROC_FS #ifdef CONFIG_PROC_FS
.show_fdinfo = xe_drm_client_fdinfo, .show_fdinfo = xe_drm_client_fdinfo,
#endif #endif
.release = &xe_driver_release,
.ioctls = xe_ioctls, .ioctls = xe_ioctls,
.num_ioctls = ARRAY_SIZE(xe_ioctls), .num_ioctls = ARRAY_SIZE(xe_ioctls),
.fops = &xe_driver_fops, .fops = &xe_driver_fops,
...@@ -269,7 +277,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev, ...@@ -269,7 +277,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
init_waitqueue_head(&xe->ufence_wq); init_waitqueue_head(&xe->ufence_wq);
drmm_mutex_init(&xe->drm, &xe->usm.lock); err = drmm_mutex_init(&xe->drm, &xe->usm.lock);
if (err)
goto err;
xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC); xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC);
if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) { if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
...@@ -378,7 +389,7 @@ static void xe_driver_flr(struct xe_device *xe) ...@@ -378,7 +389,7 @@ static void xe_driver_flr(struct xe_device *xe)
xe_mmio_write32(gt, GU_DEBUG, DRIVERFLR_STATUS); xe_mmio_write32(gt, GU_DEBUG, DRIVERFLR_STATUS);
} }
static void xe_driver_flr_fini(struct drm_device *drm, void *arg) static void xe_driver_flr_fini(void *arg)
{ {
struct xe_device *xe = arg; struct xe_device *xe = arg;
...@@ -386,7 +397,7 @@ static void xe_driver_flr_fini(struct drm_device *drm, void *arg) ...@@ -386,7 +397,7 @@ static void xe_driver_flr_fini(struct drm_device *drm, void *arg)
xe_driver_flr(xe); xe_driver_flr(xe);
} }
static void xe_device_sanitize(struct drm_device *drm, void *arg) static void xe_device_sanitize(void *arg)
{ {
struct xe_device *xe = arg; struct xe_device *xe = arg;
struct xe_gt *gt; struct xe_gt *gt;
...@@ -501,6 +512,8 @@ int xe_device_probe_early(struct xe_device *xe) ...@@ -501,6 +512,8 @@ int xe_device_probe_early(struct xe_device *xe)
if (err) if (err)
return err; return err;
xe->wedged.mode = xe_modparam.wedged_mode;
return 0; return 0;
} }
...@@ -551,14 +564,28 @@ int xe_device_probe(struct xe_device *xe) ...@@ -551,14 +564,28 @@ int xe_device_probe(struct xe_device *xe)
if (err) if (err)
return err; return err;
xe_mmio_probe_tiles(xe); err = xe_mmio_probe_tiles(xe);
if (err)
return err;
xe_ttm_sys_mgr_init(xe); xe_ttm_sys_mgr_init(xe);
for_each_gt(gt, xe, id) for_each_gt(gt, xe, id) {
xe_force_wake_init_gt(gt, gt_to_fw(gt)); err = xe_gt_init_early(gt);
if (err)
return err;
}
for_each_tile(tile, xe, id) { for_each_tile(tile, xe, id) {
if (IS_SRIOV_VF(xe)) {
xe_guc_comm_init_early(&tile->primary_gt->uc.guc);
err = xe_gt_sriov_vf_bootstrap(tile->primary_gt);
if (err)
return err;
err = xe_gt_sriov_vf_query_config(tile->primary_gt);
if (err)
return err;
}
err = xe_ggtt_init_early(tile->mem.ggtt); err = xe_ggtt_init_early(tile->mem.ggtt);
if (err) if (err)
return err; return err;
...@@ -578,13 +605,10 @@ int xe_device_probe(struct xe_device *xe) ...@@ -578,13 +605,10 @@ int xe_device_probe(struct xe_device *xe)
err = xe_devcoredump_init(xe); err = xe_devcoredump_init(xe);
if (err) if (err)
return err; return err;
err = drmm_add_action_or_reset(&xe->drm, xe_driver_flr_fini, xe); err = devm_add_action_or_reset(xe->drm.dev, xe_driver_flr_fini, xe);
if (err) if (err)
return err; return err;
for_each_gt(gt, xe, id)
xe_pcode_init(gt);
err = xe_display_init_noirq(xe); err = xe_display_init_noirq(xe);
if (err) if (err)
return err; return err;
...@@ -593,17 +617,11 @@ int xe_device_probe(struct xe_device *xe) ...@@ -593,17 +617,11 @@ int xe_device_probe(struct xe_device *xe)
if (err) if (err)
goto err; goto err;
for_each_gt(gt, xe, id) {
err = xe_gt_init_early(gt);
if (err)
goto err_irq_shutdown;
}
err = xe_device_set_has_flat_ccs(xe); err = xe_device_set_has_flat_ccs(xe);
if (err) if (err)
goto err_irq_shutdown; goto err_irq_shutdown;
err = xe_mmio_probe_vram(xe); err = xe_vram_probe(xe);
if (err) if (err)
goto err_irq_shutdown; goto err_irq_shutdown;
...@@ -650,7 +668,7 @@ int xe_device_probe(struct xe_device *xe) ...@@ -650,7 +668,7 @@ int xe_device_probe(struct xe_device *xe)
xe_hwmon_register(xe); xe_hwmon_register(xe);
return drmm_add_action_or_reset(&xe->drm, xe_device_sanitize, xe); return devm_add_action_or_reset(xe->drm.dev, xe_device_sanitize, xe);
err_fini_display: err_fini_display:
xe_display_driver_remove(xe); xe_display_driver_remove(xe);
...@@ -759,3 +777,34 @@ u64 xe_device_uncanonicalize_addr(struct xe_device *xe, u64 address) ...@@ -759,3 +777,34 @@ u64 xe_device_uncanonicalize_addr(struct xe_device *xe, u64 address)
{ {
return address & GENMASK_ULL(xe->info.va_bits - 1, 0); return address & GENMASK_ULL(xe->info.va_bits - 1, 0);
} }
/**
* xe_device_declare_wedged - Declare device wedged
* @xe: xe device instance
*
 * This is a final state that can only be cleared with a module
* re-probe (unbind + bind).
* In this state every IOCTL will be blocked so the GT cannot be used.
* In general it will be called upon any critical error such as gt reset
* failure or guc loading failure.
* If xe.wedged module parameter is set to 2, this function will be called
* on every single execution timeout (a.k.a. GPU hang) right after devcoredump
* snapshot capture. In this mode, GT reset won't be attempted so the state of
* the issue is preserved for further debugging.
*/
void xe_device_declare_wedged(struct xe_device *xe)
{
if (xe->wedged.mode == 0) {
drm_dbg(&xe->drm, "Wedged mode is forcibly disabled\n");
return;
}
if (!atomic_xchg(&xe->wedged.flag, 1)) {
xe->needs_flr_on_fini = true;
drm_err(&xe->drm,
"CRITICAL: Xe has declared device %s as wedged.\n"
"IOCTLs and executions are blocked. Only a rebind may clear the failure\n"
"Please file a _new_ bug report at https://gitlab.freedesktop.org/drm/xe/kernel/issues/new\n",
dev_name(xe->drm.dev));
}
}
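As a hedged illustration of the intended call sites (the helper name and exact error path below are assumptions, not code from this series), a fatal GT error could be escalated roughly like this:

/* Hypothetical caller: escalate an unrecoverable GT error to the wedged state. */
static void example_handle_fatal_gt_error(struct xe_gt *gt, int err)
{
	struct xe_device *xe = gt_to_xe(gt);

	drm_err(&xe->drm, "Unrecoverable GT error (%pe), wedging device\n",
		ERR_PTR(err));

	/* From here on every IOCTL returns -ECANCELED until unbind + bind. */
	xe_device_declare_wedged(xe);
}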
...@@ -6,15 +6,9 @@ ...@@ -6,15 +6,9 @@
#ifndef _XE_DEVICE_H_ #ifndef _XE_DEVICE_H_
#define _XE_DEVICE_H_ #define _XE_DEVICE_H_
struct xe_exec_queue;
struct xe_file;
#include <drm/drm_util.h> #include <drm/drm_util.h>
#include "regs/xe_gpu_commands.h"
#include "xe_device_types.h" #include "xe_device_types.h"
#include "xe_force_wake.h"
#include "xe_macros.h"
static inline struct xe_device *to_xe_device(const struct drm_device *dev) static inline struct xe_device *to_xe_device(const struct drm_device *dev)
{ {
...@@ -167,4 +161,11 @@ void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p); ...@@ -167,4 +161,11 @@ void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p);
u64 xe_device_canonicalize_addr(struct xe_device *xe, u64 address); u64 xe_device_canonicalize_addr(struct xe_device *xe, u64 address);
u64 xe_device_uncanonicalize_addr(struct xe_device *xe, u64 address); u64 xe_device_uncanonicalize_addr(struct xe_device *xe, u64 address);
static inline bool xe_device_wedged(struct xe_device *xe)
{
return atomic_read(&xe->wedged.flag);
}
void xe_device_declare_wedged(struct xe_device *xe);
#endif #endif
...@@ -69,7 +69,7 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr, ...@@ -69,7 +69,7 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR_RW(vram_d3cold_threshold); static DEVICE_ATTR_RW(vram_d3cold_threshold);
static void xe_device_sysfs_fini(struct drm_device *drm, void *arg) static void xe_device_sysfs_fini(void *arg)
{ {
struct xe_device *xe = arg; struct xe_device *xe = arg;
...@@ -85,5 +85,5 @@ int xe_device_sysfs_init(struct xe_device *xe) ...@@ -85,5 +85,5 @@ int xe_device_sysfs_init(struct xe_device *xe)
if (ret) if (ret)
return ret; return ret;
return drmm_add_action_or_reset(&xe->drm, xe_device_sysfs_fini, xe); return devm_add_action_or_reset(dev, xe_device_sysfs_fini, xe);
} }
...@@ -196,6 +196,9 @@ struct xe_tile { ...@@ -196,6 +196,9 @@ struct xe_tile {
struct { struct {
/** @sriov.vf.memirq: Memory Based Interrupts. */ /** @sriov.vf.memirq: Memory Based Interrupts. */
struct xe_memirq memirq; struct xe_memirq memirq;
/** @sriov.vf.ggtt_balloon: GGTT regions excluded from use. */
struct drm_mm_node ggtt_balloon[2];
} vf; } vf;
} sriov; } sriov;
...@@ -218,6 +221,8 @@ struct xe_device { ...@@ -218,6 +221,8 @@ struct xe_device {
/** @info: device info */ /** @info: device info */
struct intel_device_info { struct intel_device_info {
/** @info.platform_name: platform name */
const char *platform_name;
/** @info.graphics_name: graphics IP name */ /** @info.graphics_name: graphics IP name */
const char *graphics_name; const char *graphics_name;
/** @info.media_name: media IP name */ /** @info.media_name: media IP name */
...@@ -281,6 +286,10 @@ struct xe_device { ...@@ -281,6 +286,10 @@ struct xe_device {
u8 has_heci_gscfi:1; u8 has_heci_gscfi:1;
/** @info.skip_guc_pc: Skip GuC based PM feature init */ /** @info.skip_guc_pc: Skip GuC based PM feature init */
u8 skip_guc_pc:1; u8 skip_guc_pc:1;
/** @info.has_atomic_enable_pte_bit: Device has atomic enable PTE bit */
u8 has_atomic_enable_pte_bit:1;
/** @info.has_device_atomics_on_smem: Supports device atomics on SMEM */
u8 has_device_atomics_on_smem:1;
#if IS_ENABLED(CONFIG_DRM_XE_DISPLAY) #if IS_ENABLED(CONFIG_DRM_XE_DISPLAY)
struct { struct {
...@@ -427,9 +436,6 @@ struct xe_device { ...@@ -427,9 +436,6 @@ struct xe_device {
/** @d3cold.allowed: Indicates if d3cold is a valid device state */ /** @d3cold.allowed: Indicates if d3cold is a valid device state */
bool allowed; bool allowed;
/** @d3cold.power_lost: Indicates if card has really lost power. */
bool power_lost;
/** /**
* @d3cold.vram_threshold: * @d3cold.vram_threshold:
* *
...@@ -459,6 +465,14 @@ struct xe_device { ...@@ -459,6 +465,14 @@ struct xe_device {
/** @needs_flr_on_fini: requests function-reset on fini */ /** @needs_flr_on_fini: requests function-reset on fini */
bool needs_flr_on_fini; bool needs_flr_on_fini;
/** @wedged: Struct to control Wedged States and mode */
struct {
/** @wedged.flag: Xe device faced a critical error and is now blocked. */
atomic_t flag;
/** @wedged.mode: Mode controlled by kernel parameter and debugfs */
int mode;
} wedged;
/* private: */ /* private: */
#if IS_ENABLED(CONFIG_DRM_XE_DISPLAY) #if IS_ENABLED(CONFIG_DRM_XE_DISPLAY)
...@@ -547,6 +561,9 @@ struct xe_file { ...@@ -547,6 +561,9 @@ struct xe_file {
struct mutex lock; struct mutex lock;
} exec_queue; } exec_queue;
/** @run_ticks: hw engine class run time in ticks for this drm client */
u64 run_ticks[XE_ENGINE_CLASS_MAX];
/** @client: drm client */ /** @client: drm client */
struct xe_drm_client *client; struct xe_drm_client *client;
}; };
......
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
/* /*
* Copyright © 2023 Intel Corporation * Copyright © 2023 Intel Corporation
*/ */
#include "xe_drm_client.h"
#include <drm/drm_print.h> #include <drm/drm_print.h>
#include <drm/xe_drm.h> #include <drm/xe_drm.h>
...@@ -12,9 +13,66 @@ ...@@ -12,9 +13,66 @@
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_bo_types.h" #include "xe_bo_types.h"
#include "xe_device_types.h" #include "xe_device_types.h"
#include "xe_drm_client.h" #include "xe_exec_queue.h"
#include "xe_force_wake.h"
#include "xe_gt.h"
#include "xe_hw_engine.h"
#include "xe_pm.h"
#include "xe_trace.h" #include "xe_trace.h"
/**
* DOC: DRM Client usage stats
*
* The drm/xe driver implements the DRM client usage stats specification as
* documented in :ref:`drm-client-usage-stats`.
*
* Example of the output showing the implemented key value pairs and entirety of
* the currently possible format options:
*
* ::
*
* pos: 0
* flags: 0100002
* mnt_id: 26
* ino: 685
* drm-driver: xe
* drm-client-id: 3
* drm-pdev: 0000:03:00.0
* drm-total-system: 0
* drm-shared-system: 0
* drm-active-system: 0
* drm-resident-system: 0
* drm-purgeable-system: 0
* drm-total-gtt: 192 KiB
* drm-shared-gtt: 0
* drm-active-gtt: 0
* drm-resident-gtt: 192 KiB
* drm-total-vram0: 23992 KiB
* drm-shared-vram0: 16 MiB
* drm-active-vram0: 0
* drm-resident-vram0: 23992 KiB
* drm-total-stolen: 0
* drm-shared-stolen: 0
* drm-active-stolen: 0
* drm-resident-stolen: 0
* drm-cycles-rcs: 28257900
* drm-total-cycles-rcs: 7655183225
* drm-cycles-bcs: 0
* drm-total-cycles-bcs: 7655183225
* drm-cycles-vcs: 0
* drm-total-cycles-vcs: 7655183225
* drm-engine-capacity-vcs: 2
* drm-cycles-vecs: 0
* drm-total-cycles-vecs: 7655183225
* drm-engine-capacity-vecs: 2
* drm-cycles-ccs: 0
* drm-total-cycles-ccs: 7655183225
* drm-engine-capacity-ccs: 4
*
* Possible `drm-cycles-` key names are: `rcs`, `ccs`, `bcs`, `vcs`, `vecs` and
* "other".
*/
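A userspace-side sketch (an assumption for illustration, not part of the driver) of how a monitoring tool could turn two fdinfo samples of the keys above into a per-class busyness percentage:

/*
 * busy% = 100 * delta(drm-cycles-<class>) / delta(drm-total-cycles-<class>),
 * optionally divided by drm-engine-capacity-<class> when it is > 1.
 */
static double class_busy_percent(unsigned long long cycles0,
				 unsigned long long total0,
				 unsigned long long cycles1,
				 unsigned long long total1,
				 unsigned int capacity)
{
	unsigned long long delta_total = total1 - total0;

	if (!delta_total)
		return 0.0;

	return 100.0 * (double)(cycles1 - cycles0) / (double)delta_total /
	       (capacity ? capacity : 1);
}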
/** /**
* xe_drm_client_alloc() - Allocate drm client * xe_drm_client_alloc() - Allocate drm client
* @void: No arg * @void: No arg
...@@ -179,6 +237,69 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file) ...@@ -179,6 +237,69 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
} }
} }
static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
{
unsigned long class, i, gt_id, capacity[XE_ENGINE_CLASS_MAX] = { };
struct xe_file *xef = file->driver_priv;
struct xe_device *xe = xef->xe;
struct xe_gt *gt;
struct xe_hw_engine *hwe;
struct xe_exec_queue *q;
u64 gpu_timestamp;
xe_pm_runtime_get(xe);
/* Accumulate all the exec queues from this client */
mutex_lock(&xef->exec_queue.lock);
xa_for_each(&xef->exec_queue.xa, i, q) {
xe_exec_queue_update_run_ticks(q);
xef->run_ticks[q->class] += q->run_ticks - q->old_run_ticks;
q->old_run_ticks = q->run_ticks;
}
mutex_unlock(&xef->exec_queue.lock);
/* Get the total GPU cycles */
for_each_gt(gt, xe, gt_id) {
hwe = xe_gt_any_hw_engine(gt);
if (!hwe)
continue;
xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
break;
}
xe_pm_runtime_put(xe);
if (unlikely(!hwe))
return;
for (class = 0; class < XE_ENGINE_CLASS_MAX; class++) {
const char *class_name;
for_each_gt(gt, xe, gt_id)
capacity[class] += gt->user_engines.instances_per_class[class];
/*
* Engines may be fused off or not exposed to userspace. Don't
* return anything if this entire class is not available
*/
if (!capacity[class])
continue;
class_name = xe_hw_engine_class_to_str(class);
drm_printf(p, "drm-cycles-%s:\t%llu\n",
class_name, xef->run_ticks[class]);
drm_printf(p, "drm-total-cycles-%s:\t%llu\n",
class_name, gpu_timestamp);
if (capacity[class] > 1)
drm_printf(p, "drm-engine-capacity-%s:\t%lu\n",
class_name, capacity[class]);
}
}
/** /**
* xe_drm_client_fdinfo() - Callback for fdinfo interface * xe_drm_client_fdinfo() - Callback for fdinfo interface
* @p: The drm_printer ptr * @p: The drm_printer ptr
...@@ -192,5 +313,6 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file) ...@@ -192,5 +313,6 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
void xe_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file) void xe_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
{ {
show_meminfo(p, file); show_meminfo(p, file);
show_run_ticks(p, file);
} }
#endif #endif
...@@ -86,7 +86,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe, ...@@ -86,7 +86,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
if (extensions) { if (extensions) {
/* /*
* may set q->usm, must come before xe_lrc_init(), * may set q->usm, must come before xe_lrc_create(),
* may overwrite q->sched_props, must come before q->ops->init() * may overwrite q->sched_props, must come before q->ops->init()
*/ */
err = exec_queue_user_extensions(xe, q, extensions, 0); err = exec_queue_user_extensions(xe, q, extensions, 0);
...@@ -96,45 +96,30 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe, ...@@ -96,45 +96,30 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
} }
} }
if (xe_exec_queue_is_parallel(q)) {
q->parallel.composite_fence_ctx = dma_fence_context_alloc(1);
q->parallel.composite_fence_seqno = XE_FENCE_INITIAL_SEQNO;
}
return q; return q;
} }
static int __xe_exec_queue_init(struct xe_exec_queue *q) static int __xe_exec_queue_init(struct xe_exec_queue *q)
{ {
struct xe_device *xe = gt_to_xe(q->gt);
int i, err; int i, err;
for (i = 0; i < q->width; ++i) { for (i = 0; i < q->width; ++i) {
err = xe_lrc_init(q->lrc + i, q->hwe, q, q->vm, SZ_16K); q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
if (err) if (IS_ERR(q->lrc[i])) {
err = PTR_ERR(q->lrc[i]);
goto err_lrc; goto err_lrc;
} }
}
err = q->ops->init(q); err = q->ops->init(q);
if (err) if (err)
goto err_lrc; goto err_lrc;
/*
* Normally the user vm holds an rpm ref to keep the device
* awake, and the context holds a ref for the vm, however for
* some engines we use the kernels migrate vm underneath which offers no
* such rpm ref, or we lack a vm. Make sure we keep a ref here, so we
* can perform GuC CT actions when needed. Caller is expected to have
* already grabbed the rpm ref outside any sensitive locks.
*/
if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
xe_pm_runtime_get_noresume(xe);
return 0; return 0;
err_lrc: err_lrc:
for (i = i - 1; i >= 0; --i) for (i = i - 1; i >= 0; --i)
xe_lrc_finish(q->lrc + i); xe_lrc_put(q->lrc[i]);
return err; return err;
} }
...@@ -215,9 +200,7 @@ void xe_exec_queue_fini(struct xe_exec_queue *q) ...@@ -215,9 +200,7 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
int i; int i;
for (i = 0; i < q->width; ++i) for (i = 0; i < q->width; ++i)
xe_lrc_finish(q->lrc + i); xe_lrc_put(q->lrc[i]);
if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
xe_pm_runtime_put(gt_to_xe(q->gt));
__xe_exec_queue_free(q); __xe_exec_queue_free(q);
} }
...@@ -720,7 +703,7 @@ bool xe_exec_queue_is_lr(struct xe_exec_queue *q) ...@@ -720,7 +703,7 @@ bool xe_exec_queue_is_lr(struct xe_exec_queue *q)
static s32 xe_exec_queue_num_job_inflight(struct xe_exec_queue *q) static s32 xe_exec_queue_num_job_inflight(struct xe_exec_queue *q)
{ {
return q->lrc->fence_ctx.next_seqno - xe_lrc_seqno(q->lrc) - 1; return q->lrc[0]->fence_ctx.next_seqno - xe_lrc_seqno(q->lrc[0]) - 1;
} }
/** /**
...@@ -731,7 +714,7 @@ static s32 xe_exec_queue_num_job_inflight(struct xe_exec_queue *q) ...@@ -731,7 +714,7 @@ static s32 xe_exec_queue_num_job_inflight(struct xe_exec_queue *q)
*/ */
bool xe_exec_queue_ring_full(struct xe_exec_queue *q) bool xe_exec_queue_ring_full(struct xe_exec_queue *q)
{ {
struct xe_lrc *lrc = q->lrc; struct xe_lrc *lrc = q->lrc[0];
s32 max_job = lrc->ring.size / MAX_JOB_SIZE_BYTES; s32 max_job = lrc->ring.size / MAX_JOB_SIZE_BYTES;
return xe_exec_queue_num_job_inflight(q) >= max_job; return xe_exec_queue_num_job_inflight(q) >= max_job;
...@@ -757,16 +740,50 @@ bool xe_exec_queue_is_idle(struct xe_exec_queue *q) ...@@ -757,16 +740,50 @@ bool xe_exec_queue_is_idle(struct xe_exec_queue *q)
int i; int i;
for (i = 0; i < q->width; ++i) { for (i = 0; i < q->width; ++i) {
if (xe_lrc_seqno(&q->lrc[i]) != if (xe_lrc_seqno(q->lrc[i]) !=
q->lrc[i].fence_ctx.next_seqno - 1) q->lrc[i]->fence_ctx.next_seqno - 1)
return false; return false;
} }
return true; return true;
} }
return xe_lrc_seqno(&q->lrc[0]) == return xe_lrc_seqno(q->lrc[0]) ==
q->lrc[0].fence_ctx.next_seqno - 1; q->lrc[0]->fence_ctx.next_seqno - 1;
}
/**
* xe_exec_queue_update_run_ticks() - Update run time in ticks for this exec queue
* from hw
* @q: The exec queue
*
* Update the timestamp saved by HW for this exec queue and save run ticks
 * calculated using the delta from the last update.
*/
void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
{
struct xe_lrc *lrc;
u32 old_ts, new_ts;
/*
* Jobs that are run during driver load may use an exec_queue, but are
* not associated with a user xe file, so avoid accumulating busyness
* for kernel specific work.
*/
if (!q->vm || !q->vm->xef)
return;
/*
* Only sample the first LRC. For parallel submission, all of them are
 * scheduled together and we compensate for that below by multiplying by
 * width - this may introduce errors if that premise is not true and
 * they don't exit 100% aligned. On the other hand, looping through
 * the LRCs and reading them at different times could also introduce
* errors.
*/
lrc = q->lrc[0];
new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
q->run_ticks += (new_ts - old_ts) * q->width;
} }
void xe_exec_queue_kill(struct xe_exec_queue *q) void xe_exec_queue_kill(struct xe_exec_queue *q)
......
...@@ -26,6 +26,15 @@ void xe_exec_queue_fini(struct xe_exec_queue *q); ...@@ -26,6 +26,15 @@ void xe_exec_queue_fini(struct xe_exec_queue *q);
void xe_exec_queue_destroy(struct kref *ref); void xe_exec_queue_destroy(struct kref *ref);
void xe_exec_queue_assign_name(struct xe_exec_queue *q, u32 instance); void xe_exec_queue_assign_name(struct xe_exec_queue *q, u32 instance);
static inline struct xe_exec_queue *
xe_exec_queue_get_unless_zero(struct xe_exec_queue *q)
{
if (kref_get_unless_zero(&q->refcount))
return q;
return NULL;
}
struct xe_exec_queue *xe_exec_queue_lookup(struct xe_file *xef, u32 id); struct xe_exec_queue *xe_exec_queue_lookup(struct xe_file *xef, u32 id);
static inline struct xe_exec_queue *xe_exec_queue_get(struct xe_exec_queue *q) static inline struct xe_exec_queue *xe_exec_queue_get(struct xe_exec_queue *q)
...@@ -66,5 +75,6 @@ struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *e, ...@@ -66,5 +75,6 @@ struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *e,
struct xe_vm *vm); struct xe_vm *vm);
void xe_exec_queue_last_fence_set(struct xe_exec_queue *e, struct xe_vm *vm, void xe_exec_queue_last_fence_set(struct xe_exec_queue *e, struct xe_vm *vm,
struct dma_fence *fence); struct dma_fence *fence);
void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q);
#endif #endif
...@@ -103,16 +103,6 @@ struct xe_exec_queue { ...@@ -103,16 +103,6 @@ struct xe_exec_queue {
struct xe_guc_exec_queue *guc; struct xe_guc_exec_queue *guc;
}; };
/**
* @parallel: parallel submission state
*/
struct {
/** @parallel.composite_fence_ctx: context composite fence */
u64 composite_fence_ctx;
/** @parallel.composite_fence_seqno: seqno for composite fence */
u32 composite_fence_seqno;
} parallel;
/** @sched_props: scheduling properties */ /** @sched_props: scheduling properties */
struct { struct {
/** @sched_props.timeslice_us: timeslice period in micro-seconds */ /** @sched_props.timeslice_us: timeslice period in micro-seconds */
...@@ -151,8 +141,12 @@ struct xe_exec_queue { ...@@ -151,8 +141,12 @@ struct xe_exec_queue {
* Protected by @vm's resv. Unused if @vm == NULL. * Protected by @vm's resv. Unused if @vm == NULL.
*/ */
u64 tlb_flush_seqno; u64 tlb_flush_seqno;
/** @old_run_ticks: prior hw engine class run time in ticks for this exec queue */
u64 old_run_ticks;
/** @run_ticks: hw engine class run time in ticks for this exec queue */
u64 run_ticks;
/** @lrc: logical ring context for this exec queue */ /** @lrc: logical ring context for this exec queue */
struct xe_lrc lrc[]; struct xe_lrc *lrc[];
}; };
/** /**
......
...@@ -9,7 +9,6 @@ ...@@ -9,7 +9,6 @@
#include "instructions/xe_mi_commands.h" #include "instructions/xe_mi_commands.h"
#include "regs/xe_engine_regs.h" #include "regs/xe_engine_regs.h"
#include "regs/xe_gpu_commands.h"
#include "regs/xe_gt_regs.h" #include "regs/xe_gt_regs.h"
#include "regs/xe_lrc_layout.h" #include "regs/xe_lrc_layout.h"
#include "xe_assert.h" #include "xe_assert.h"
...@@ -110,7 +109,7 @@ static void __xe_execlist_port_start(struct xe_execlist_port *port, ...@@ -110,7 +109,7 @@ static void __xe_execlist_port_start(struct xe_execlist_port *port,
port->last_ctx_id = 1; port->last_ctx_id = 1;
} }
__start_lrc(port->hwe, exl->q->lrc, port->last_ctx_id); __start_lrc(port->hwe, exl->q->lrc[0], port->last_ctx_id);
port->running_exl = exl; port->running_exl = exl;
exl->has_run = true; exl->has_run = true;
} }
...@@ -124,14 +123,14 @@ static void __xe_execlist_port_idle(struct xe_execlist_port *port) ...@@ -124,14 +123,14 @@ static void __xe_execlist_port_idle(struct xe_execlist_port *port)
if (!port->running_exl) if (!port->running_exl)
return; return;
xe_lrc_write_ring(&port->hwe->kernel_lrc, noop, sizeof(noop)); xe_lrc_write_ring(port->hwe->kernel_lrc, noop, sizeof(noop));
__start_lrc(port->hwe, &port->hwe->kernel_lrc, 0); __start_lrc(port->hwe, port->hwe->kernel_lrc, 0);
port->running_exl = NULL; port->running_exl = NULL;
} }
static bool xe_execlist_is_idle(struct xe_execlist_exec_queue *exl) static bool xe_execlist_is_idle(struct xe_execlist_exec_queue *exl)
{ {
struct xe_lrc *lrc = exl->q->lrc; struct xe_lrc *lrc = exl->q->lrc[0];
return lrc->ring.tail == lrc->ring.old_tail; return lrc->ring.tail == lrc->ring.old_tail;
} }
...@@ -307,6 +306,7 @@ static void execlist_job_free(struct drm_sched_job *drm_job) ...@@ -307,6 +306,7 @@ static void execlist_job_free(struct drm_sched_job *drm_job)
{ {
struct xe_sched_job *job = to_xe_sched_job(drm_job); struct xe_sched_job *job = to_xe_sched_job(drm_job);
xe_exec_queue_update_run_ticks(job->q);
xe_sched_job_put(job); xe_sched_job_put(job);
} }
...@@ -333,7 +333,7 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q) ...@@ -333,7 +333,7 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
exl->q = q; exl->q = q;
err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1, err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
q->lrc[0].ring.size / MAX_JOB_SIZE_BYTES, q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT, XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
NULL, NULL, q->hwe->name, NULL, NULL, q->hwe->name,
gt_to_xe(q->gt)->drm.dev); gt_to_xe(q->gt)->drm.dev);
......
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <linux/io-64-nonatomic-lo-hi.h> #include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/sizes.h> #include <linux/sizes.h>
#include <drm/drm_drv.h>
#include <drm/drm_managed.h> #include <drm/drm_managed.h>
#include <drm/i915_drm.h> #include <drm/i915_drm.h>
...@@ -19,6 +20,7 @@ ...@@ -19,6 +20,7 @@
#include "xe_device.h" #include "xe_device.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_printk.h" #include "xe_gt_printk.h"
#include "xe_gt_sriov_vf.h"
#include "xe_gt_tlb_invalidation.h" #include "xe_gt_tlb_invalidation.h"
#include "xe_map.h" #include "xe_map.h"
#include "xe_pm.h" #include "xe_pm.h"
...@@ -140,6 +142,7 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt) ...@@ -140,6 +142,7 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
struct xe_device *xe = tile_to_xe(ggtt->tile); struct xe_device *xe = tile_to_xe(ggtt->tile);
struct pci_dev *pdev = to_pci_dev(xe->drm.dev); struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
unsigned int gsm_size; unsigned int gsm_size;
int err;
if (IS_SRIOV_VF(xe)) if (IS_SRIOV_VF(xe))
gsm_size = SZ_8M; /* GGTT is expected to be 4GiB */ gsm_size = SZ_8M; /* GGTT is expected to be 4GiB */
...@@ -193,7 +196,17 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt) ...@@ -193,7 +196,17 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
mutex_init(&ggtt->lock); mutex_init(&ggtt->lock);
primelockdep(ggtt); primelockdep(ggtt);
return drmm_add_action_or_reset(&xe->drm, ggtt_fini_early, ggtt); err = drmm_add_action_or_reset(&xe->drm, ggtt_fini_early, ggtt);
if (err)
return err;
if (IS_SRIOV_VF(xe)) {
err = xe_gt_sriov_vf_prepare_ggtt(xe_tile_get_gt(ggtt->tile, 0));
if (err)
return err;
}
return 0;
} }
static void xe_ggtt_invalidate(struct xe_ggtt *ggtt); static void xe_ggtt_invalidate(struct xe_ggtt *ggtt);
...@@ -433,18 +446,29 @@ int xe_ggtt_insert_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) ...@@ -433,18 +446,29 @@ int xe_ggtt_insert_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
void xe_ggtt_remove_node(struct xe_ggtt *ggtt, struct drm_mm_node *node, void xe_ggtt_remove_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,
bool invalidate) bool invalidate)
{ {
xe_pm_runtime_get_noresume(tile_to_xe(ggtt->tile)); struct xe_device *xe = tile_to_xe(ggtt->tile);
bool bound;
int idx;
bound = drm_dev_enter(&xe->drm, &idx);
if (bound)
xe_pm_runtime_get_noresume(xe);
mutex_lock(&ggtt->lock); mutex_lock(&ggtt->lock);
if (bound)
xe_ggtt_clear(ggtt, node->start, node->size); xe_ggtt_clear(ggtt, node->start, node->size);
drm_mm_remove_node(node); drm_mm_remove_node(node);
node->size = 0; node->size = 0;
mutex_unlock(&ggtt->lock); mutex_unlock(&ggtt->lock);
if (!bound)
return;
if (invalidate) if (invalidate)
xe_ggtt_invalidate(ggtt); xe_ggtt_invalidate(ggtt);
xe_pm_runtime_put(tile_to_xe(ggtt->tile)); xe_pm_runtime_put(xe);
drm_dev_exit(idx);
} }
void xe_ggtt_remove_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) void xe_ggtt_remove_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
......
...@@ -5,6 +5,8 @@ ...@@ -5,6 +5,8 @@
#include "xe_gsc.h" #include "xe_gsc.h"
#include <linux/delay.h>
#include <drm/drm_managed.h> #include <drm/drm_managed.h>
#include <generated/xe_wa_oob.h> #include <generated/xe_wa_oob.h>
...@@ -14,6 +16,7 @@ ...@@ -14,6 +16,7 @@
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_exec_queue.h" #include "xe_exec_queue.h"
#include "xe_force_wake.h"
#include "xe_gsc_proxy.h" #include "xe_gsc_proxy.h"
#include "xe_gsc_submit.h" #include "xe_gsc_submit.h"
#include "xe_gt.h" #include "xe_gt.h"
......
...@@ -6,8 +6,9 @@ ...@@ -6,8 +6,9 @@
#ifndef _XE_GSC_H_ #ifndef _XE_GSC_H_
#define _XE_GSC_H_ #define _XE_GSC_H_
#include "xe_gsc_types.h" #include <linux/types.h>
struct xe_gsc;
struct xe_gt; struct xe_gt;
struct xe_hw_engine; struct xe_hw_engine;
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include "abi/gsc_proxy_commands_abi.h" #include "abi/gsc_proxy_commands_abi.h"
#include "regs/xe_gsc_regs.h" #include "regs/xe_gsc_regs.h"
#include "xe_bo.h" #include "xe_bo.h"
#include "xe_force_wake.h"
#include "xe_gsc.h" #include "xe_gsc.h"
#include "xe_gsc_submit.h" #include "xe_gsc_submit.h"
#include "xe_gt.h" #include "xe_gt.h"
......
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <linux/poison.h> #include <linux/poison.h>
#include "abi/gsc_command_header_abi.h" #include "abi/gsc_command_header_abi.h"
#include "xe_assert.h"
#include "xe_bb.h" #include "xe_bb.h"
#include "xe_exec_queue.h" #include "xe_exec_queue.h"
#include "xe_gt_printk.h" #include "xe_gt_printk.h"
......
...@@ -44,6 +44,7 @@ ...@@ -44,6 +44,7 @@
#include "xe_migrate.h" #include "xe_migrate.h"
#include "xe_mmio.h" #include "xe_mmio.h"
#include "xe_pat.h" #include "xe_pat.h"
#include "xe_pcode.h"
#include "xe_pm.h" #include "xe_pm.h"
#include "xe_mocs.h" #include "xe_mocs.h"
#include "xe_reg_sr.h" #include "xe_reg_sr.h"
...@@ -57,9 +58,17 @@ ...@@ -57,9 +58,17 @@
#include "xe_wa.h" #include "xe_wa.h"
#include "xe_wopcm.h" #include "xe_wopcm.h"
static void gt_fini(struct drm_device *drm, void *arg)
{
struct xe_gt *gt = arg;
destroy_workqueue(gt->ordered_wq);
}
struct xe_gt *xe_gt_alloc(struct xe_tile *tile) struct xe_gt *xe_gt_alloc(struct xe_tile *tile)
{ {
struct xe_gt *gt; struct xe_gt *gt;
int err;
gt = drmm_kzalloc(&tile_to_xe(tile)->drm, sizeof(*gt), GFP_KERNEL); gt = drmm_kzalloc(&tile_to_xe(tile)->drm, sizeof(*gt), GFP_KERNEL);
if (!gt) if (!gt)
...@@ -68,6 +77,10 @@ struct xe_gt *xe_gt_alloc(struct xe_tile *tile) ...@@ -68,6 +77,10 @@ struct xe_gt *xe_gt_alloc(struct xe_tile *tile)
gt->tile = tile; gt->tile = tile;
gt->ordered_wq = alloc_ordered_workqueue("gt-ordered-wq", 0); gt->ordered_wq = alloc_ordered_workqueue("gt-ordered-wq", 0);
err = drmm_add_action_or_reset(&gt_to_xe(gt)->drm, gt_fini, gt);
if (err)
return ERR_PTR(err);
return gt; return gt;
} }
...@@ -90,15 +103,9 @@ void xe_gt_sanitize(struct xe_gt *gt) ...@@ -90,15 +103,9 @@ void xe_gt_sanitize(struct xe_gt *gt)
*/ */
void xe_gt_remove(struct xe_gt *gt) void xe_gt_remove(struct xe_gt *gt)
{ {
xe_uc_remove(&gt->uc);
}
static void gt_fini(struct drm_device *drm, void *arg)
{
struct xe_gt *gt = arg;
int i; int i;
destroy_workqueue(gt->ordered_wq); xe_uc_remove(&gt->uc);
for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i) for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
xe_hw_fence_irq_finish(&gt->fence_irq[i]); xe_hw_fence_irq_finish(&gt->fence_irq[i]);
...@@ -160,7 +167,7 @@ static int emit_wa_job(struct xe_gt *gt, struct xe_exec_queue *q) ...@@ -160,7 +167,7 @@ static int emit_wa_job(struct xe_gt *gt, struct xe_exec_queue *q)
if (q->hwe->class == XE_ENGINE_CLASS_RENDER) if (q->hwe->class == XE_ENGINE_CLASS_RENDER)
/* Big enough to emit all of the context's 3DSTATE */ /* Big enough to emit all of the context's 3DSTATE */
bb = xe_bb_new(gt, xe_lrc_size(gt_to_xe(gt), q->hwe->class), false); bb = xe_bb_new(gt, xe_gt_lrc_size(gt, q->hwe->class), false);
else else
/* Just pick a large BB size */ /* Just pick a large BB size */
bb = xe_bb_new(gt, SZ_4K, false); bb = xe_bb_new(gt, SZ_4K, false);
...@@ -244,7 +251,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt) ...@@ -244,7 +251,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
xe_tuning_process_lrc(hwe); xe_tuning_process_lrc(hwe);
default_lrc = drmm_kzalloc(&xe->drm, default_lrc = drmm_kzalloc(&xe->drm,
xe_lrc_size(xe, hwe->class), xe_gt_lrc_size(gt, hwe->class),
GFP_KERNEL); GFP_KERNEL);
if (!default_lrc) if (!default_lrc)
return -ENOMEM; return -ENOMEM;
...@@ -292,9 +299,9 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt) ...@@ -292,9 +299,9 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
} }
xe_map_memcpy_from(xe, default_lrc, xe_map_memcpy_from(xe, default_lrc,
&q->lrc[0].bo->vmap, &q->lrc[0]->bo->vmap,
xe_lrc_pphwsp_offset(&q->lrc[0]), xe_lrc_pphwsp_offset(q->lrc[0]),
xe_lrc_size(xe, hwe->class)); xe_gt_lrc_size(gt, hwe->class));
gt->default_lrc[hwe->class] = default_lrc; gt->default_lrc[hwe->class] = default_lrc;
put_nop_q: put_nop_q:
...@@ -318,14 +325,6 @@ int xe_gt_init_early(struct xe_gt *gt) ...@@ -318,14 +325,6 @@ int xe_gt_init_early(struct xe_gt *gt)
return err; return err;
} }
err = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
if (err)
return err;
err = xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
if (err)
return err;
xe_reg_sr_init(&gt->reg_sr, "GT", gt_to_xe(gt)); xe_reg_sr_init(&gt->reg_sr, "GT", gt_to_xe(gt));
err = xe_wa_init(gt); err = xe_wa_init(gt);
...@@ -336,6 +335,9 @@ int xe_gt_init_early(struct xe_gt *gt) ...@@ -336,6 +335,9 @@ int xe_gt_init_early(struct xe_gt *gt)
xe_wa_process_oob(gt); xe_wa_process_oob(gt);
xe_tuning_process_gt(gt); xe_tuning_process_gt(gt);
xe_force_wake_init_gt(gt, gt_to_fw(gt));
xe_pcode_init(gt);
return 0; return 0;
} }
...@@ -366,10 +368,6 @@ static int gt_fw_domain_init(struct xe_gt *gt) ...@@ -366,10 +368,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)
xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt); xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt);
} }
err = xe_gt_idle_sysfs_init(&gt->gtidle);
if (err)
goto err_force_wake;
/* Enable per hw engine IRQs */ /* Enable per hw engine IRQs */
xe_irq_enable_hwe(gt); xe_irq_enable_hwe(gt);
...@@ -434,6 +432,10 @@ static int all_fw_domain_init(struct xe_gt *gt) ...@@ -434,6 +432,10 @@ static int all_fw_domain_init(struct xe_gt *gt)
if (err) if (err)
goto err_force_wake; goto err_force_wake;
err = xe_uc_init_post_hwconfig(&gt->uc);
if (err)
goto err_force_wake;
if (!xe_gt_is_media_type(gt)) { if (!xe_gt_is_media_type(gt)) {
/* /*
* USM has its own SA pool so it doesn't block behind user operations * USM has its own SA pool so it doesn't block behind user operations
...@@ -460,10 +462,6 @@ static int all_fw_domain_init(struct xe_gt *gt) ...@@ -460,10 +462,6 @@ static int all_fw_domain_init(struct xe_gt *gt)
} }
} }
err = xe_uc_init_post_hwconfig(&gt->uc);
if (err)
goto err_force_wake;
err = xe_uc_init_hw(&gt->uc); err = xe_uc_init_hw(&gt->uc);
if (err) if (err)
goto err_force_wake; goto err_force_wake;
...@@ -477,6 +475,9 @@ static int all_fw_domain_init(struct xe_gt *gt) ...@@ -477,6 +475,9 @@ static int all_fw_domain_init(struct xe_gt *gt)
if (IS_SRIOV_PF(gt_to_xe(gt)) && !xe_gt_is_media_type(gt)) if (IS_SRIOV_PF(gt_to_xe(gt)) && !xe_gt_is_media_type(gt))
xe_lmtt_init_hw(&gt_to_tile(gt)->sriov.pf.lmtt); xe_lmtt_init_hw(&gt_to_tile(gt)->sriov.pf.lmtt);
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_gt_sriov_pf_init_hw(gt);
err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL); err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
XE_WARN_ON(err); XE_WARN_ON(err);
...@@ -503,8 +504,7 @@ int xe_gt_init_hwconfig(struct xe_gt *gt) ...@@ -503,8 +504,7 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
if (err) if (err)
goto out; goto out;
xe_gt_topology_init(gt); xe_gt_mcr_init_early(gt);
xe_gt_mcr_init(gt);
xe_pat_init(gt); xe_pat_init(gt);
err = xe_uc_init(&gt->uc); err = xe_uc_init(&gt->uc);
...@@ -515,8 +515,8 @@ int xe_gt_init_hwconfig(struct xe_gt *gt) ...@@ -515,8 +515,8 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
if (err) if (err)
goto out_fw; goto out_fw;
/* XXX: Fake that we pull the engine mask from hwconfig blob */ xe_gt_topology_init(gt);
gt->info.engine_mask = gt->info.__engine_mask; xe_gt_mcr_init(gt);
out_fw: out_fw:
xe_force_wake_put(gt_to_fw(gt), XE_FW_GT); xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
...@@ -554,6 +554,10 @@ int xe_gt_init(struct xe_gt *gt) ...@@ -554,6 +554,10 @@ int xe_gt_init(struct xe_gt *gt)
if (err) if (err)
return err; return err;
err = xe_gt_idle_init(&gt->gtidle);
if (err)
return err;
err = xe_gt_freq_init(gt); err = xe_gt_freq_init(gt);
if (err) if (err)
return err; return err;
...@@ -564,7 +568,30 @@ int xe_gt_init(struct xe_gt *gt) ...@@ -564,7 +568,30 @@ int xe_gt_init(struct xe_gt *gt)
if (err) if (err)
return err; return err;
return drmm_add_action_or_reset(&gt_to_xe(gt)->drm, gt_fini, gt); xe_gt_record_user_engines(gt);
return 0;
}
void xe_gt_record_user_engines(struct xe_gt *gt)
{
struct xe_hw_engine *hwe;
enum xe_hw_engine_id id;
gt->user_engines.mask = 0;
memset(gt->user_engines.instances_per_class, 0,
sizeof(gt->user_engines.instances_per_class));
for_each_hw_engine(hwe, gt, id) {
if (xe_hw_engine_is_reserved(hwe))
continue;
gt->user_engines.mask |= BIT_ULL(id);
gt->user_engines.instances_per_class[hwe->class]++;
}
xe_gt_assert(gt, (gt->user_engines.mask | gt->info.engine_mask)
== gt->info.engine_mask);
} }
static int do_gt_reset(struct xe_gt *gt) static int do_gt_reset(struct xe_gt *gt)
...@@ -584,12 +611,34 @@ static int do_gt_reset(struct xe_gt *gt) ...@@ -584,12 +611,34 @@ static int do_gt_reset(struct xe_gt *gt)
return err; return err;
} }
static int vf_gt_restart(struct xe_gt *gt)
{
int err;
err = xe_uc_sanitize_reset(&gt->uc);
if (err)
return err;
err = xe_uc_init_hw(&gt->uc);
if (err)
return err;
err = xe_uc_start(&gt->uc);
if (err)
return err;
return 0;
}
static int do_gt_restart(struct xe_gt *gt) static int do_gt_restart(struct xe_gt *gt)
{ {
struct xe_hw_engine *hwe; struct xe_hw_engine *hwe;
enum xe_hw_engine_id id; enum xe_hw_engine_id id;
int err; int err;
if (IS_SRIOV_VF(gt_to_xe(gt)))
return vf_gt_restart(gt);
xe_pat_init(gt); xe_pat_init(gt);
xe_gt_mcr_set_implicit_defaults(gt); xe_gt_mcr_set_implicit_defaults(gt);
...@@ -613,6 +662,9 @@ static int do_gt_restart(struct xe_gt *gt) ...@@ -613,6 +662,9 @@ static int do_gt_restart(struct xe_gt *gt)
if (IS_SRIOV_PF(gt_to_xe(gt)) && !xe_gt_is_media_type(gt)) if (IS_SRIOV_PF(gt_to_xe(gt)) && !xe_gt_is_media_type(gt))
xe_lmtt_init_hw(&gt_to_tile(gt)->sriov.pf.lmtt); xe_lmtt_init_hw(&gt_to_tile(gt)->sriov.pf.lmtt);
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_gt_sriov_pf_init_hw(gt);
xe_mocs_init(gt); xe_mocs_init(gt);
err = xe_uc_start(&gt->uc); err = xe_uc_start(&gt->uc);
if (err) if (err)
...@@ -633,6 +685,9 @@ static int gt_reset(struct xe_gt *gt) ...@@ -633,6 +685,9 @@ static int gt_reset(struct xe_gt *gt)
{ {
int err; int err;
if (xe_device_wedged(gt_to_xe(gt)))
return -ECANCELED;
/* We only support GT resets with GuC submission */ /* We only support GT resets with GuC submission */
if (!xe_device_uc_enabled(gt_to_xe(gt))) if (!xe_device_uc_enabled(gt_to_xe(gt)))
return -ENODEV; return -ENODEV;
...@@ -655,9 +710,7 @@ static int gt_reset(struct xe_gt *gt) ...@@ -655,9 +710,7 @@ static int gt_reset(struct xe_gt *gt)
xe_uc_stop_prepare(&gt->uc); xe_uc_stop_prepare(&gt->uc);
xe_gt_pagefault_reset(gt); xe_gt_pagefault_reset(gt);
err = xe_uc_stop(&gt->uc); xe_uc_stop(&gt->uc);
if (err)
goto err_out;
xe_gt_tlb_invalidation_reset(gt); xe_gt_tlb_invalidation_reset(gt);
...@@ -685,7 +738,7 @@ static int gt_reset(struct xe_gt *gt) ...@@ -685,7 +738,7 @@ static int gt_reset(struct xe_gt *gt)
err_fail: err_fail:
xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err)); xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));
gt_to_xe(gt)->needs_flr_on_fini = true; xe_device_declare_wedged(gt_to_xe(gt));
return err; return err;
} }
...@@ -733,6 +786,8 @@ int xe_gt_suspend(struct xe_gt *gt) ...@@ -733,6 +786,8 @@ int xe_gt_suspend(struct xe_gt *gt)
if (err) if (err)
goto err_force_wake; goto err_force_wake;
xe_gt_idle_disable_pg(gt);
XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL)); XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
xe_gt_dbg(gt, "suspended\n"); xe_gt_dbg(gt, "suspended\n");
...@@ -759,6 +814,8 @@ int xe_gt_resume(struct xe_gt *gt) ...@@ -759,6 +814,8 @@ int xe_gt_resume(struct xe_gt *gt)
if (err) if (err)
goto err_force_wake; goto err_force_wake;
xe_gt_idle_enable_pg(gt);
XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL)); XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
xe_gt_dbg(gt, "resumed\n"); xe_gt_dbg(gt, "resumed\n");
...@@ -810,3 +867,14 @@ struct xe_hw_engine *xe_gt_any_hw_engine_by_reset_domain(struct xe_gt *gt, ...@@ -810,3 +867,14 @@ struct xe_hw_engine *xe_gt_any_hw_engine_by_reset_domain(struct xe_gt *gt,
return NULL; return NULL;
} }
struct xe_hw_engine *xe_gt_any_hw_engine(struct xe_gt *gt)
{
struct xe_hw_engine *hwe;
enum xe_hw_engine_id id;
for_each_hw_engine(hwe, gt, id)
return hwe;
return NULL;
}
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <drm/drm_util.h> #include <drm/drm_util.h>
#include "xe_device.h"
#include "xe_device_types.h" #include "xe_device_types.h"
#include "xe_hw_engine.h" #include "xe_hw_engine.h"
...@@ -37,6 +38,19 @@ int xe_gt_init_hwconfig(struct xe_gt *gt); ...@@ -37,6 +38,19 @@ int xe_gt_init_hwconfig(struct xe_gt *gt);
int xe_gt_init_early(struct xe_gt *gt); int xe_gt_init_early(struct xe_gt *gt);
int xe_gt_init(struct xe_gt *gt); int xe_gt_init(struct xe_gt *gt);
int xe_gt_record_default_lrcs(struct xe_gt *gt); int xe_gt_record_default_lrcs(struct xe_gt *gt);
/**
 * xe_gt_record_user_engines - save data related to engines available to
 * userspace
 * @gt: GT structure
 *
 * Walk the available HW engines from gt->info.engine_mask and calculate data
 * related to those engines that may be used by userspace. To be used whenever
 * the available engines change at runtime (e.g. with ccs_mode) or during
 * initialization.
*/
void xe_gt_record_user_engines(struct xe_gt *gt);
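As an illustration only (not part of this commit), a consumer could read back the data recorded by this helper roughly as follows; the function name and the u8 output type are assumptions, while the gt->user_engines fields match the hunks above:

/* Hypothetical helper: copy out the engine data recorded for userspace. */
static void example_copy_user_engines(struct xe_gt *gt, u64 *mask,
				      u8 *per_class)
{
	int class;

	/* Only valid after xe_gt_record_user_engines() has run. */
	*mask = gt->user_engines.mask;
	for (class = 0; class < XE_ENGINE_CLASS_MAX; class++)
		per_class[class] = gt->user_engines.instances_per_class[class];
}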
void xe_gt_suspend_prepare(struct xe_gt *gt); void xe_gt_suspend_prepare(struct xe_gt *gt);
int xe_gt_suspend(struct xe_gt *gt); int xe_gt_suspend(struct xe_gt *gt);
int xe_gt_resume(struct xe_gt *gt); int xe_gt_resume(struct xe_gt *gt);
...@@ -53,11 +67,24 @@ void xe_gt_remove(struct xe_gt *gt); ...@@ -53,11 +67,24 @@ void xe_gt_remove(struct xe_gt *gt);
struct xe_hw_engine * struct xe_hw_engine *
xe_gt_any_hw_engine_by_reset_domain(struct xe_gt *gt, enum xe_engine_class class); xe_gt_any_hw_engine_by_reset_domain(struct xe_gt *gt, enum xe_engine_class class);
/**
* xe_gt_any_hw_engine - scan the list of engines and return the
* first available
* @gt: GT structure
*/
struct xe_hw_engine *xe_gt_any_hw_engine(struct xe_gt *gt);
struct xe_hw_engine *xe_gt_hw_engine(struct xe_gt *gt, struct xe_hw_engine *xe_gt_hw_engine(struct xe_gt *gt,
enum xe_engine_class class, enum xe_engine_class class,
u16 instance, u16 instance,
bool logical); bool logical);
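A minimal, hypothetical caller combining the two lookups above (only xe_gt_hw_engine() and xe_gt_any_hw_engine() come from this header; the wrapper is illustrative):

/* Illustrative only: prefer a specific engine, fall back to any available one. */
static struct xe_hw_engine *example_pick_engine(struct xe_gt *gt,
						enum xe_engine_class class)
{
	struct xe_hw_engine *hwe = xe_gt_hw_engine(gt, class, 0, true);

	return hwe ?: xe_gt_any_hw_engine(gt);
}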
static inline bool xe_gt_has_indirect_ring_state(struct xe_gt *gt)
{
return gt->info.has_indirect_ring_state &&
xe_device_uc_enabled(gt_to_xe(gt));
}
static inline bool xe_gt_is_media_type(struct xe_gt *gt) static inline bool xe_gt_is_media_type(struct xe_gt *gt)
{ {
return gt->info.type == XE_GT_TYPE_MEDIA; return gt->info.type == XE_GT_TYPE_MEDIA;
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
#include "xe_assert.h" #include "xe_assert.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_ccs_mode.h" #include "xe_gt_ccs_mode.h"
#include "xe_gt_printk.h"
#include "xe_gt_sysfs.h" #include "xe_gt_sysfs.h"
#include "xe_mmio.h" #include "xe_mmio.h"
...@@ -68,7 +69,7 @@ static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines) ...@@ -68,7 +69,7 @@ static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines)
xe_mmio_write32(gt, CCS_MODE, mode); xe_mmio_write32(gt, CCS_MODE, mode);
xe_gt_info(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n", xe_gt_dbg(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n",
mode, config, num_engines, num_slices); mode, config, num_engines, num_slices);
} }
...@@ -134,6 +135,7 @@ ccs_mode_store(struct device *kdev, struct device_attribute *attr, ...@@ -134,6 +135,7 @@ ccs_mode_store(struct device *kdev, struct device_attribute *attr,
if (gt->ccs_mode != num_engines) { if (gt->ccs_mode != num_engines) {
xe_gt_info(gt, "Setting compute mode to %d\n", num_engines); xe_gt_info(gt, "Setting compute mode to %d\n", num_engines);
gt->ccs_mode = num_engines; gt->ccs_mode = num_engines;
xe_gt_record_user_engines(gt);
xe_gt_reset_async(gt); xe_gt_reset_async(gt);
} }
...@@ -150,7 +152,7 @@ static const struct attribute *gt_ccs_mode_attrs[] = { ...@@ -150,7 +152,7 @@ static const struct attribute *gt_ccs_mode_attrs[] = {
NULL, NULL,
}; };
static void xe_gt_ccs_mode_sysfs_fini(struct drm_device *drm, void *arg) static void xe_gt_ccs_mode_sysfs_fini(void *arg)
{ {
struct xe_gt *gt = arg; struct xe_gt *gt = arg;
...@@ -182,5 +184,5 @@ int xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt) ...@@ -182,5 +184,5 @@ int xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt)
if (err) if (err)
return err; return err;
return drmm_add_action_or_reset(&xe->drm, xe_gt_ccs_mode_sysfs_fini, gt); return devm_add_action_or_reset(xe->drm.dev, xe_gt_ccs_mode_sysfs_fini, gt);
} }
...@@ -7,6 +7,7 @@ ...@@ -7,6 +7,7 @@
#include "regs/xe_gt_regs.h" #include "regs/xe_gt_regs.h"
#include "regs/xe_regs.h" #include "regs/xe_regs.h"
#include "xe_assert.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_macros.h" #include "xe_macros.h"
......
...@@ -15,14 +15,18 @@ ...@@ -15,14 +15,18 @@
#include "xe_ggtt.h" #include "xe_ggtt.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_mcr.h" #include "xe_gt_mcr.h"
#include "xe_gt_sriov_pf_debugfs.h"
#include "xe_gt_sriov_vf_debugfs.h"
#include "xe_gt_topology.h" #include "xe_gt_topology.h"
#include "xe_hw_engine.h" #include "xe_hw_engine.h"
#include "xe_lrc.h" #include "xe_lrc.h"
#include "xe_macros.h" #include "xe_macros.h"
#include "xe_mocs.h"
#include "xe_pat.h" #include "xe_pat.h"
#include "xe_pm.h" #include "xe_pm.h"
#include "xe_reg_sr.h" #include "xe_reg_sr.h"
#include "xe_reg_whitelist.h" #include "xe_reg_whitelist.h"
#include "xe_sriov.h"
#include "xe_uc_debugfs.h" #include "xe_uc_debugfs.h"
#include "xe_wa.h" #include "xe_wa.h"
...@@ -112,6 +116,17 @@ static int force_reset(struct xe_gt *gt, struct drm_printer *p) ...@@ -112,6 +116,17 @@ static int force_reset(struct xe_gt *gt, struct drm_printer *p)
return 0; return 0;
} }
static int force_reset_sync(struct xe_gt *gt, struct drm_printer *p)
{
xe_pm_runtime_get(gt_to_xe(gt));
xe_gt_reset_async(gt);
xe_pm_runtime_put(gt_to_xe(gt));
flush_work(&gt->reset.worker);
return 0;
}
static int sa_info(struct xe_gt *gt, struct drm_printer *p) static int sa_info(struct xe_gt *gt, struct drm_printer *p)
{ {
struct xe_tile *tile = gt_to_tile(gt); struct xe_tile *tile = gt_to_tile(gt);
...@@ -200,6 +215,15 @@ static int pat(struct xe_gt *gt, struct drm_printer *p) ...@@ -200,6 +215,15 @@ static int pat(struct xe_gt *gt, struct drm_printer *p)
return 0; return 0;
} }
static int mocs(struct xe_gt *gt, struct drm_printer *p)
{
xe_pm_runtime_get(gt_to_xe(gt));
xe_mocs_dump(gt, p);
xe_pm_runtime_put(gt_to_xe(gt));
return 0;
}
static int rcs_default_lrc(struct xe_gt *gt, struct drm_printer *p) static int rcs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
{ {
xe_pm_runtime_get(gt_to_xe(gt)); xe_pm_runtime_get(gt_to_xe(gt));
...@@ -248,6 +272,7 @@ static int vecs_default_lrc(struct xe_gt *gt, struct drm_printer *p) ...@@ -248,6 +272,7 @@ static int vecs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
static const struct drm_info_list debugfs_list[] = { static const struct drm_info_list debugfs_list[] = {
{"hw_engines", .show = xe_gt_debugfs_simple_show, .data = hw_engines}, {"hw_engines", .show = xe_gt_debugfs_simple_show, .data = hw_engines},
{"force_reset", .show = xe_gt_debugfs_simple_show, .data = force_reset}, {"force_reset", .show = xe_gt_debugfs_simple_show, .data = force_reset},
{"force_reset_sync", .show = xe_gt_debugfs_simple_show, .data = force_reset_sync},
{"sa_info", .show = xe_gt_debugfs_simple_show, .data = sa_info}, {"sa_info", .show = xe_gt_debugfs_simple_show, .data = sa_info},
{"topology", .show = xe_gt_debugfs_simple_show, .data = topology}, {"topology", .show = xe_gt_debugfs_simple_show, .data = topology},
{"steering", .show = xe_gt_debugfs_simple_show, .data = steering}, {"steering", .show = xe_gt_debugfs_simple_show, .data = steering},
...@@ -255,6 +280,7 @@ static const struct drm_info_list debugfs_list[] = { ...@@ -255,6 +280,7 @@ static const struct drm_info_list debugfs_list[] = {
{"register-save-restore", .show = xe_gt_debugfs_simple_show, .data = register_save_restore}, {"register-save-restore", .show = xe_gt_debugfs_simple_show, .data = register_save_restore},
{"workarounds", .show = xe_gt_debugfs_simple_show, .data = workarounds}, {"workarounds", .show = xe_gt_debugfs_simple_show, .data = workarounds},
{"pat", .show = xe_gt_debugfs_simple_show, .data = pat}, {"pat", .show = xe_gt_debugfs_simple_show, .data = pat},
{"mocs", .show = xe_gt_debugfs_simple_show, .data = mocs},
{"default_lrc_rcs", .show = xe_gt_debugfs_simple_show, .data = rcs_default_lrc}, {"default_lrc_rcs", .show = xe_gt_debugfs_simple_show, .data = rcs_default_lrc},
{"default_lrc_ccs", .show = xe_gt_debugfs_simple_show, .data = ccs_default_lrc}, {"default_lrc_ccs", .show = xe_gt_debugfs_simple_show, .data = ccs_default_lrc},
{"default_lrc_bcs", .show = xe_gt_debugfs_simple_show, .data = bcs_default_lrc}, {"default_lrc_bcs", .show = xe_gt_debugfs_simple_show, .data = bcs_default_lrc},
...@@ -290,4 +316,9 @@ void xe_gt_debugfs_register(struct xe_gt *gt) ...@@ -290,4 +316,9 @@ void xe_gt_debugfs_register(struct xe_gt *gt)
root, minor); root, minor);
xe_uc_debugfs_register(&gt->uc, root); xe_uc_debugfs_register(&gt->uc, root);
if (IS_SRIOV_PF(xe))
xe_gt_sriov_pf_debugfs_register(gt, root);
else if (IS_SRIOV_VF(xe))
xe_gt_sriov_vf_debugfs_register(gt, root);
} }
...@@ -13,7 +13,7 @@ ...@@ -13,7 +13,7 @@
#include "xe_device_types.h" #include "xe_device_types.h"
#include "xe_gt_sysfs.h" #include "xe_gt_sysfs.h"
#include "xe_gt_throttle_sysfs.h" #include "xe_gt_throttle.h"
#include "xe_guc_pc.h" #include "xe_guc_pc.h"
#include "xe_pm.h" #include "xe_pm.h"
...@@ -209,7 +209,7 @@ static const struct attribute *freq_attrs[] = { ...@@ -209,7 +209,7 @@ static const struct attribute *freq_attrs[] = {
NULL NULL
}; };
static void freq_fini(struct drm_device *drm, void *arg) static void freq_fini(void *arg)
{ {
struct kobject *kobj = arg; struct kobject *kobj = arg;
...@@ -237,7 +237,7 @@ int xe_gt_freq_init(struct xe_gt *gt) ...@@ -237,7 +237,7 @@ int xe_gt_freq_init(struct xe_gt *gt)
if (!gt->freq) if (!gt->freq)
return -ENOMEM; return -ENOMEM;
err = drmm_add_action_or_reset(&xe->drm, freq_fini, gt->freq); err = devm_add_action(xe->drm.dev, freq_fini, gt->freq);
if (err) if (err)
return err; return err;
...@@ -245,5 +245,5 @@ int xe_gt_freq_init(struct xe_gt *gt) ...@@ -245,5 +245,5 @@ int xe_gt_freq_init(struct xe_gt *gt)
if (err) if (err)
return err; return err;
return xe_gt_throttle_sysfs_init(gt); return xe_gt_throttle_init(gt);
} }
...@@ -5,12 +5,14 @@ ...@@ -5,12 +5,14 @@
#include <drm/drm_managed.h> #include <drm/drm_managed.h>
#include "xe_force_wake.h"
#include "xe_device.h" #include "xe_device.h"
#include "xe_gt.h" #include "xe_gt.h"
#include "xe_gt_idle.h" #include "xe_gt_idle.h"
#include "xe_gt_sysfs.h" #include "xe_gt_sysfs.h"
#include "xe_guc_pc.h" #include "xe_guc_pc.h"
#include "regs/xe_gt_regs.h" #include "regs/xe_gt_regs.h"
#include "xe_macros.h"
#include "xe_mmio.h" #include "xe_mmio.h"
#include "xe_pm.h" #include "xe_pm.h"
...@@ -92,6 +94,50 @@ static u64 get_residency_ms(struct xe_gt_idle *gtidle, u64 cur_residency) ...@@ -92,6 +94,50 @@ static u64 get_residency_ms(struct xe_gt_idle *gtidle, u64 cur_residency)
return cur_residency; return cur_residency;
} }
void xe_gt_idle_enable_pg(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
u32 pg_enable;
int i, j;
/* Disable CPG for PVC */
if (xe->info.platform == XE_PVC)
return;
xe_device_assert_mem_access(gt_to_xe(gt));
pg_enable = RENDER_POWERGATE_ENABLE | MEDIA_POWERGATE_ENABLE;
for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
if ((gt->info.engine_mask & BIT(i)))
pg_enable |= (VDN_HCP_POWERGATE_ENABLE(j) |
VDN_MFXVDENC_POWERGATE_ENABLE(j));
}
XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
if (xe->info.skip_guc_pc) {
/*
* GuC sets the hysteresis value when GuC PC is enabled;
* otherwise set it to 25 (25 * 1.28us).
*/
xe_mmio_write32(gt, MEDIA_POWERGATE_IDLE_HYSTERESIS, 25);
xe_mmio_write32(gt, RENDER_POWERGATE_IDLE_HYSTERESIS, 25);
}
xe_mmio_write32(gt, POWERGATE_ENABLE, pg_enable);
XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FW_GT));
}
void xe_gt_idle_disable_pg(struct xe_gt *gt)
{
xe_device_assert_mem_access(gt_to_xe(gt));
XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
xe_mmio_write32(gt, POWERGATE_ENABLE, 0);
XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FW_GT));
}
static ssize_t name_show(struct device *dev, static ssize_t name_show(struct device *dev,
struct device_attribute *attr, char *buff) struct device_attribute *attr, char *buff)
{ {
...@@ -144,15 +190,24 @@ static const struct attribute *gt_idle_attrs[] = { ...@@ -144,15 +190,24 @@ static const struct attribute *gt_idle_attrs[] = {
NULL, NULL,
}; };
static void gt_idle_sysfs_fini(struct drm_device *drm, void *arg) static void gt_idle_fini(void *arg)
{ {
struct kobject *kobj = arg; struct kobject *kobj = arg;
struct xe_gt *gt = kobj_to_gt(kobj->parent);
xe_gt_idle_disable_pg(gt);
if (gt_to_xe(gt)->info.skip_guc_pc) {
XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
xe_gt_idle_disable_c6(gt);
xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
}
sysfs_remove_files(kobj, gt_idle_attrs); sysfs_remove_files(kobj, gt_idle_attrs);
kobject_put(kobj); kobject_put(kobj);
} }
int xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle) int xe_gt_idle_init(struct xe_gt_idle *gtidle)
{ {
struct xe_gt *gt = gtidle_to_gt(gtidle); struct xe_gt *gt = gtidle_to_gt(gtidle);
struct xe_device *xe = gt_to_xe(gt); struct xe_device *xe = gt_to_xe(gt);
...@@ -181,7 +236,9 @@ int xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle) ...@@ -181,7 +236,9 @@ int xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle)
return err; return err;
} }
return drmm_add_action_or_reset(&xe->drm, gt_idle_sysfs_fini, kobj); xe_gt_idle_enable_pg(gt);
return devm_add_action_or_reset(xe->drm.dev, gt_idle_fini, kobj);
} }
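Several hunks in this series replace drmm-managed release actions with device-managed (devm) ones, tying teardown to driver unbind rather than to the final drm_device reference; a minimal sketch of the pattern with hypothetical names:

/* Sketch only: register a cleanup action on the underlying struct device. */
static void example_fini(void *arg)
{
	struct kobject *kobj = arg;

	kobject_put(kobj);
}

static int example_init(struct xe_device *xe, struct kobject *kobj)
{
	/* example_fini(kobj) runs immediately on registration failure, or at device unbind. */
	return devm_add_action_or_reset(xe->drm.dev, example_fini, kobj);
}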
void xe_gt_idle_enable_c6(struct xe_gt *gt) void xe_gt_idle_enable_c6(struct xe_gt *gt)
...@@ -199,9 +256,8 @@ void xe_gt_idle_enable_c6(struct xe_gt *gt) ...@@ -199,9 +256,8 @@ void xe_gt_idle_enable_c6(struct xe_gt *gt)
void xe_gt_idle_disable_c6(struct xe_gt *gt) void xe_gt_idle_disable_c6(struct xe_gt *gt)
{ {
xe_device_assert_mem_access(gt_to_xe(gt)); xe_device_assert_mem_access(gt_to_xe(gt));
xe_force_wake_assert_held(gt_to_fw(gt), XE_FORCEWAKE_ALL); xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
xe_mmio_write32(gt, PG_ENABLE, 0);
xe_mmio_write32(gt, RC_CONTROL, 0); xe_mmio_write32(gt, RC_CONTROL, 0);
xe_mmio_write32(gt, RC_STATE, 0); xe_mmio_write32(gt, RC_STATE, 0);
} }
...@@ -10,8 +10,10 @@ ...@@ -10,8 +10,10 @@
struct xe_gt; struct xe_gt;
int xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle); int xe_gt_idle_init(struct xe_gt_idle *gtidle);
void xe_gt_idle_enable_c6(struct xe_gt *gt); void xe_gt_idle_enable_c6(struct xe_gt *gt);
void xe_gt_idle_disable_c6(struct xe_gt *gt); void xe_gt_idle_disable_c6(struct xe_gt *gt);
void xe_gt_idle_enable_pg(struct xe_gt *gt);
void xe_gt_idle_disable_pg(struct xe_gt *gt);
#endif /* _XE_GT_IDLE_H_ */ #endif /* _XE_GT_IDLE_H_ */
...@@ -375,18 +375,35 @@ static const struct { ...@@ -375,18 +375,35 @@ static const struct {
[IMPLICIT_STEERING] = { "IMPLICIT", NULL }, [IMPLICIT_STEERING] = { "IMPLICIT", NULL },
}; };
void xe_gt_mcr_init(struct xe_gt *gt) /**
* xe_gt_mcr_init_early - Early initialization of the MCR support
* @gt: GT structure
*
 * Perform early, software-only initialization of the MCR lock to allow
 * synchronization when accessing the STEER_SEMAPHORE register and to allow
 * use of the xe_gt_mcr_multicast_write() function.
*/
void xe_gt_mcr_init_early(struct xe_gt *gt)
{ {
struct xe_device *xe = gt_to_xe(gt);
BUILD_BUG_ON(IMPLICIT_STEERING + 1 != NUM_STEERING_TYPES); BUILD_BUG_ON(IMPLICIT_STEERING + 1 != NUM_STEERING_TYPES);
BUILD_BUG_ON(ARRAY_SIZE(xe_steering_types) != NUM_STEERING_TYPES); BUILD_BUG_ON(ARRAY_SIZE(xe_steering_types) != NUM_STEERING_TYPES);
spin_lock_init(&gt->mcr_lock);
}
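The early/normal split implies a probe-time ordering; a hedged sketch of the expected sequence (the wrapper name is illustrative, the calls themselves appear in the xe_gt_init_hwconfig() hunk above):

/* Sketch: the MCR lock must exist before anything steers multicast registers. */
static void example_hwconfig_order(struct xe_gt *gt)
{
	xe_gt_mcr_init_early(gt);	/* software-only: just the mcr_lock */
	xe_pat_init(gt);		/* may perform multicast writes already */

	/* ... after the hwconfig/fuse data is available ... */
	xe_gt_topology_init(gt);
	xe_gt_mcr_init(gt);		/* full steering setup */
}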
/**
* xe_gt_mcr_init - Normal initialization of the MCR support
* @gt: GT structure
*
* Perform normal initialization of the MCR for all usages.
*/
void xe_gt_mcr_init(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
if (IS_SRIOV_VF(xe)) if (IS_SRIOV_VF(xe))
return; return;
spin_lock_init(&gt->mcr_lock);
if (gt->info.type == XE_GT_TYPE_MEDIA) { if (gt->info.type == XE_GT_TYPE_MEDIA) {
drm_WARN_ON(&xe->drm, MEDIA_VER(xe) < 13); drm_WARN_ON(&xe->drm, MEDIA_VER(xe) < 13);
......
...@@ -12,6 +12,7 @@ ...@@ -12,6 +12,7 @@
struct drm_printer; struct drm_printer;
struct xe_gt; struct xe_gt;
void xe_gt_mcr_init_early(struct xe_gt *gt);
void xe_gt_mcr_init(struct xe_gt *gt); void xe_gt_mcr_init(struct xe_gt *gt);
void xe_gt_mcr_set_implicit_defaults(struct xe_gt *gt); void xe_gt_mcr_set_implicit_defaults(struct xe_gt *gt);
...@@ -40,4 +41,28 @@ void xe_gt_mcr_get_dss_steering(struct xe_gt *gt, unsigned int dss, u16 *group, ...@@ -40,4 +41,28 @@ void xe_gt_mcr_get_dss_steering(struct xe_gt *gt, unsigned int dss, u16 *group,
for_each_dss((dss), (gt)) \ for_each_dss((dss), (gt)) \
for_each_if((xe_gt_mcr_get_dss_steering((gt), (dss), &(group), &(instance)), true)) for_each_if((xe_gt_mcr_get_dss_steering((gt), (dss), &(group), &(instance)), true))
/*
* Loop over each DSS available for geometry and determine the group and
* instance IDs that should be used to steer MCR accesses toward this DSS.
* @dss: DSS ID to obtain steering for
* @gt: GT structure
* @group: steering group ID, data type: u16
* @instance: steering instance ID, data type: u16
*/
#define for_each_geometry_dss(dss, gt, group, instance) \
for_each_dss_steering(dss, gt, group, instance) \
if (xe_gt_has_geometry_dss(gt, dss))
/*
* Loop over each DSS available for compute and determine the group and
* instance IDs that should be used to steer MCR accesses toward this DSS.
* @dss: DSS ID to obtain steering for
* @gt: GT structure
* @group: steering group ID, data type: u16
* @instance: steering instance ID, data type: u16
*/
#define for_each_compute_dss(dss, gt, group, instance) \
for_each_dss_steering(dss, gt, group, instance) \
if (xe_gt_has_compute_dss(gt, dss))
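A hedged usage sketch for the new iterators (the counting function is illustrative; the macro and its arguments are the ones defined above):

/* Illustrative only: count the DSS units usable for compute on this GT. */
static unsigned int example_count_compute_dss(struct xe_gt *gt)
{
	unsigned int dss, count = 0;
	u16 group, instance;

	for_each_compute_dss(dss, gt, group, instance)
		count++;

	return count;
}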
#endif /* _XE_GT_MCR_H_ */ #endif /* _XE_GT_MCR_H_ */
...@@ -19,7 +19,6 @@ ...@@ -19,7 +19,6 @@
#include "xe_guc.h" #include "xe_guc.h"
#include "xe_guc_ct.h" #include "xe_guc_ct.h"
#include "xe_migrate.h" #include "xe_migrate.h"
#include "xe_pt.h"
#include "xe_trace.h" #include "xe_trace.h"
#include "xe_vm.h" #include "xe_vm.h"
...@@ -204,16 +203,15 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf) ...@@ -204,16 +203,15 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
drm_exec_retry_on_contention(&exec); drm_exec_retry_on_contention(&exec);
if (ret) if (ret)
goto unlock_dma_resv; goto unlock_dma_resv;
}
/* Bind VMA only to the GT that has faulted */ /* Bind VMA only to the GT that has faulted */
trace_xe_vma_pf_bind(vma); trace_xe_vma_pf_bind(vma);
fence = __xe_pt_bind_vma(tile, vma, xe_tile_migrate_engine(tile), NULL, 0, fence = xe_vma_rebind(vm, vma, BIT(tile->id));
vma->tile_present & BIT(tile->id));
if (IS_ERR(fence)) { if (IS_ERR(fence)) {
ret = PTR_ERR(fence); ret = PTR_ERR(fence);
goto unlock_dma_resv; goto unlock_dma_resv;
} }
}
/* /*
* XXX: Should we drop the lock before waiting? This only helps if doing * XXX: Should we drop the lock before waiting? This only helps if doing
......
...@@ -5,8 +5,12 @@ ...@@ -5,8 +5,12 @@
#include <drm/drm_managed.h> #include <drm/drm_managed.h>
#include "regs/xe_sriov_regs.h"
#include "xe_gt_sriov_pf.h" #include "xe_gt_sriov_pf.h"
#include "xe_gt_sriov_pf_helpers.h" #include "xe_gt_sriov_pf_helpers.h"
#include "xe_gt_sriov_pf_service.h"
#include "xe_mmio.h"
/* /*
* VF's metadata is maintained in the flexible array where: * VF's metadata is maintained in the flexible array where:
...@@ -48,5 +52,33 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt) ...@@ -48,5 +52,33 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
if (err) if (err)
return err; return err;
err = xe_gt_sriov_pf_service_init(gt);
if (err)
return err;
return 0; return 0;
} }
static bool pf_needs_enable_ggtt_guest_update(struct xe_device *xe)
{
return GRAPHICS_VERx100(xe) == 1200;
}
static void pf_enable_ggtt_guest_update(struct xe_gt *gt)
{
xe_mmio_write32(gt, VIRTUAL_CTRL_REG, GUEST_GTT_UPDATE_EN);
}
/**
* xe_gt_sriov_pf_init_hw - Initialize SR-IOV hardware support.
* @gt: the &xe_gt to initialize
*
* On some platforms the PF must explicitly enable VF's access to the GGTT.
*/
void xe_gt_sriov_pf_init_hw(struct xe_gt *gt)
{
if (pf_needs_enable_ggtt_guest_update(gt_to_xe(gt)))
pf_enable_ggtt_guest_update(gt);
xe_gt_sriov_pf_service_update(gt);
}
...@@ -10,11 +10,16 @@ struct xe_gt; ...@@ -10,11 +10,16 @@ struct xe_gt;
#ifdef CONFIG_PCI_IOV #ifdef CONFIG_PCI_IOV
int xe_gt_sriov_pf_init_early(struct xe_gt *gt); int xe_gt_sriov_pf_init_early(struct xe_gt *gt);
void xe_gt_sriov_pf_init_hw(struct xe_gt *gt);
#else #else
static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt) static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
{ {
return 0; return 0;
} }
static inline void xe_gt_sriov_pf_init_hw(struct xe_gt *gt)
{
}
#endif #endif
#endif #endif
...@@ -25,6 +25,7 @@ ...@@ -25,6 +25,7 @@
#include "xe_guc_fwif.h" #include "xe_guc_fwif.h"
#include "xe_guc_id_mgr.h" #include "xe_guc_id_mgr.h"
#include "xe_guc_klv_helpers.h" #include "xe_guc_klv_helpers.h"
#include "xe_guc_klv_thresholds_set.h"
#include "xe_guc_submit.h" #include "xe_guc_submit.h"
#include "xe_lmtt.h" #include "xe_lmtt.h"
#include "xe_map.h" #include "xe_map.h"
...@@ -187,14 +188,20 @@ static int pf_push_vf_cfg_dbs(struct xe_gt *gt, unsigned int vfid, u32 begin, u3 ...@@ -187,14 +188,20 @@ static int pf_push_vf_cfg_dbs(struct xe_gt *gt, unsigned int vfid, u32 begin, u3
return pf_push_vf_cfg_klvs(gt, vfid, 2, klvs, ARRAY_SIZE(klvs)); return pf_push_vf_cfg_klvs(gt, vfid, 2, klvs, ARRAY_SIZE(klvs));
} }
static int pf_push_vf_cfg_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum) static int pf_push_vf_cfg_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 *exec_quantum)
{ {
return pf_push_vf_cfg_u32(gt, vfid, GUC_KLV_VF_CFG_EXEC_QUANTUM_KEY, exec_quantum); /* GuC will silently clamp values exceeding max */
*exec_quantum = min_t(u32, *exec_quantum, GUC_KLV_VF_CFG_EXEC_QUANTUM_MAX_VALUE);
return pf_push_vf_cfg_u32(gt, vfid, GUC_KLV_VF_CFG_EXEC_QUANTUM_KEY, *exec_quantum);
} }
static int pf_push_vf_cfg_preempt_timeout(struct xe_gt *gt, unsigned int vfid, u32 preempt_timeout) static int pf_push_vf_cfg_preempt_timeout(struct xe_gt *gt, unsigned int vfid, u32 *preempt_timeout)
{ {
return pf_push_vf_cfg_u32(gt, vfid, GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_KEY, preempt_timeout); /* GuC will silently clamp values exceeding max */
*preempt_timeout = min_t(u32, *preempt_timeout, GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_MAX_VALUE);
return pf_push_vf_cfg_u32(gt, vfid, GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_KEY, *preempt_timeout);
} }
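Passing the value by pointer lets the caller keep the value that was actually pushed after the min_t() clamp, instead of the one it requested; a minimal illustration (the wrapper is hypothetical, the push helper is the one above):

/* Sketch: the caller observes the clamped quantum, not the requested one. */
static int example_provision_quantum(struct xe_gt *gt, unsigned int vfid,
				     u32 exec_quantum)
{
	int err;

	err = pf_push_vf_cfg_exec_quantum(gt, vfid, &exec_quantum);
	if (err)
		return err;

	/* exec_quantum is now at most GUC_KLV_VF_CFG_EXEC_QUANTUM_MAX_VALUE. */
	return 0;
}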
static int pf_push_vf_cfg_lmem(struct xe_gt *gt, unsigned int vfid, u64 size) static int pf_push_vf_cfg_lmem(struct xe_gt *gt, unsigned int vfid, u64 size)
...@@ -202,6 +209,15 @@ static int pf_push_vf_cfg_lmem(struct xe_gt *gt, unsigned int vfid, u64 size) ...@@ -202,6 +209,15 @@ static int pf_push_vf_cfg_lmem(struct xe_gt *gt, unsigned int vfid, u64 size)
return pf_push_vf_cfg_u64(gt, vfid, GUC_KLV_VF_CFG_LMEM_SIZE_KEY, size); return pf_push_vf_cfg_u64(gt, vfid, GUC_KLV_VF_CFG_LMEM_SIZE_KEY, size);
} }
static int pf_push_vf_cfg_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index, u32 value)
{
u32 key = xe_guc_klv_threshold_index_to_key(index);
xe_gt_assert(gt, key);
return pf_push_vf_cfg_u32(gt, vfid, key, value);
}
static struct xe_gt_sriov_config *pf_pick_vf_config(struct xe_gt *gt, unsigned int vfid) static struct xe_gt_sriov_config *pf_pick_vf_config(struct xe_gt *gt, unsigned int vfid)
{ {
xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt))); xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
...@@ -1604,7 +1620,7 @@ static int pf_provision_exec_quantum(struct xe_gt *gt, unsigned int vfid, ...@@ -1604,7 +1620,7 @@ static int pf_provision_exec_quantum(struct xe_gt *gt, unsigned int vfid,
struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid); struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
int err; int err;
err = pf_push_vf_cfg_exec_quantum(gt, vfid, exec_quantum); err = pf_push_vf_cfg_exec_quantum(gt, vfid, &exec_quantum);
if (unlikely(err)) if (unlikely(err))
return err; return err;
...@@ -1674,7 +1690,7 @@ static int pf_provision_preempt_timeout(struct xe_gt *gt, unsigned int vfid, ...@@ -1674,7 +1690,7 @@ static int pf_provision_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid); struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
int err; int err;
err = pf_push_vf_cfg_preempt_timeout(gt, vfid, preempt_timeout); err = pf_push_vf_cfg_preempt_timeout(gt, vfid, &preempt_timeout);
if (unlikely(err)) if (unlikely(err))
return err; return err;
...@@ -1742,6 +1758,83 @@ static void pf_reset_config_sched(struct xe_gt *gt, struct xe_gt_sriov_config *c ...@@ -1742,6 +1758,83 @@ static void pf_reset_config_sched(struct xe_gt *gt, struct xe_gt_sriov_config *c
config->preempt_timeout = 0; config->preempt_timeout = 0;
} }
static int pf_provision_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index, u32 value)
{
struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
int err;
err = pf_push_vf_cfg_threshold(gt, vfid, index, value);
if (unlikely(err))
return err;
config->thresholds[index] = value;
return 0;
}
static int pf_get_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index)
{
struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
return config->thresholds[index];
}
static const char *threshold_unit(u32 threshold)
{
return threshold ? "" : "(disabled)";
}
/**
* xe_gt_sriov_pf_config_set_threshold - Configure threshold for the VF.
* @gt: the &xe_gt
* @vfid: the VF identifier
* @index: the threshold index
* @value: requested value (0 means disabled)
*
* This function can only be called on PF.
*
* Return: 0 on success or a negative error code on failure.
*/
int xe_gt_sriov_pf_config_set_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index, u32 value)
{
u32 key = xe_guc_klv_threshold_index_to_key(index);
const char *name = xe_guc_klv_key_to_string(key);
int err;
mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
err = pf_provision_threshold(gt, vfid, index, value);
mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
return pf_config_set_u32_done(gt, vfid, value,
xe_gt_sriov_pf_config_get_threshold(gt, vfid, index),
name, threshold_unit, err);
}
/**
* xe_gt_sriov_pf_config_get_threshold - Get VF's threshold.
* @gt: the &xe_gt
* @vfid: the VF identifier
* @index: the threshold index
*
* This function can only be called on PF.
*
* Return: value of VF's (or PF's) threshold.
*/
u32 xe_gt_sriov_pf_config_get_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index)
{
u32 value;
mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
value = pf_get_threshold(gt, vfid, index);
mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
return value;
}
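As an illustration of how the two entry points pair up (the wrapper and its error handling are hypothetical; only the xe_gt_sriov_pf_config_* calls come from this commit):

/* Sketch: set a per-VF threshold and read it back; 0 keeps it disabled. */
static int example_set_and_confirm_threshold(struct xe_gt *gt, unsigned int vfid,
					     enum xe_guc_klv_threshold_index index,
					     u32 value)
{
	int err;

	err = xe_gt_sriov_pf_config_set_threshold(gt, vfid, index, value);
	if (err)
		return err;

	return xe_gt_sriov_pf_config_get_threshold(gt, vfid, index) == value ? 0 : -EIO;
}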
static void pf_release_vf_config(struct xe_gt *gt, unsigned int vfid) static void pf_release_vf_config(struct xe_gt *gt, unsigned int vfid)
{ {
struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid); struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
......
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <linux/types.h> #include <linux/types.h>
enum xe_guc_klv_threshold_index;
struct drm_printer; struct drm_printer;
struct xe_gt; struct xe_gt;
...@@ -43,6 +44,11 @@ u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfi ...@@ -43,6 +44,11 @@ u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfi
int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid, int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
u32 preempt_timeout); u32 preempt_timeout);
u32 xe_gt_sriov_pf_config_get_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index);
int xe_gt_sriov_pf_config_set_threshold(struct xe_gt *gt, unsigned int vfid,
enum xe_guc_klv_threshold_index index, u32 value);
int xe_gt_sriov_pf_config_set_fair(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs); int xe_gt_sriov_pf_config_set_fair(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
int xe_gt_sriov_pf_config_release(struct xe_gt *gt, unsigned int vfid, bool force); int xe_gt_sriov_pf_config_release(struct xe_gt *gt, unsigned int vfid, bool force);
int xe_gt_sriov_pf_config_push(struct xe_gt *gt, unsigned int vfid, bool refresh); int xe_gt_sriov_pf_config_push(struct xe_gt *gt, unsigned int vfid, bool refresh);
......
...@@ -8,6 +8,8 @@ ...@@ -8,6 +8,8 @@
#include <drm/drm_mm.h> #include <drm/drm_mm.h>
#include "xe_guc_klv_thresholds_set_types.h"
struct xe_bo; struct xe_bo;
/** /**
...@@ -32,6 +34,8 @@ struct xe_gt_sriov_config { ...@@ -32,6 +34,8 @@ struct xe_gt_sriov_config {
u32 exec_quantum; u32 exec_quantum;
/** @preempt_timeout: preemption timeout in microseconds. */ /** @preempt_timeout: preemption timeout in microseconds. */
u32 preempt_timeout; u32 preempt_timeout;
/** @thresholds: GuC thresholds for adverse events notifications. */
u32 thresholds[XE_GUC_KLV_NUM_THRESHOLDS];
}; };
/** /**
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_PF_DEBUGFS_H_
#define _XE_GT_SRIOV_PF_DEBUGFS_H_
struct xe_gt;
struct dentry;
#ifdef CONFIG_PCI_IOV
void xe_gt_sriov_pf_debugfs_register(struct xe_gt *gt, struct dentry *root);
#else
static inline void xe_gt_sriov_pf_debugfs_register(struct xe_gt *gt, struct dentry *root) { }
#endif
#endif
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023-2024 Intel Corporation
*/
#include "abi/guc_actions_sriov_abi.h"
#include "abi/guc_messages_abi.h"
#include "xe_gt_sriov_pf_config.h"
#include "xe_gt_sriov_pf_helpers.h"
#include "xe_gt_sriov_pf_monitor.h"
#include "xe_gt_sriov_printk.h"
#include "xe_guc_klv_helpers.h"
#include "xe_guc_klv_thresholds_set.h"
/**
* xe_gt_sriov_pf_monitor_flr - Cleanup VF data after VF FLR.
* @gt: the &xe_gt
* @vfid: the VF identifier
*
* On FLR this function will reset all event data related to the VF.
* This function is for PF only.
*/
void xe_gt_sriov_pf_monitor_flr(struct xe_gt *gt, u32 vfid)
{
int e;
xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
xe_gt_sriov_pf_assert_vfid(gt, vfid);
for (e = 0; e < XE_GUC_KLV_NUM_THRESHOLDS; e++)
gt->sriov.pf.vfs[vfid].monitor.guc.events[e] = 0;
}
static void pf_update_event_counter(struct xe_gt *gt, u32 vfid,
enum xe_guc_klv_threshold_index e)
{
xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
xe_gt_assert(gt, e < XE_GUC_KLV_NUM_THRESHOLDS);
gt->sriov.pf.vfs[vfid].monitor.guc.events[e]++;
}
static int pf_handle_vf_threshold_event(struct xe_gt *gt, u32 vfid, u32 threshold)
{
char origin[8];
int e;
e = xe_guc_klv_threshold_key_to_index(threshold);
xe_sriov_function_name(vfid, origin, sizeof(origin));
/* was there a new KEY added that we missed? */
if (unlikely(e < 0)) {
xe_gt_sriov_notice(gt, "unknown threshold key %#x reported for %s\n",
threshold, origin);
return -ENOTCONN;
}
xe_gt_sriov_dbg(gt, "%s exceeded threshold %u %s\n",
origin, xe_gt_sriov_pf_config_get_threshold(gt, vfid, e),
xe_guc_klv_key_to_string(threshold));
pf_update_event_counter(gt, vfid, e);
return 0;
}
/**
* xe_gt_sriov_pf_monitor_process_guc2pf - Handle adverse event notification from the GuC.
* @gt: the &xe_gt
* @msg: G2H event message
* @len: length of the message
*
* This function is intended for PF only.
*
* Return: 0 on success or a negative error code on failure.
*/
int xe_gt_sriov_pf_monitor_process_guc2pf(struct xe_gt *gt, const u32 *msg, u32 len)
{
struct xe_device *xe = gt_to_xe(gt);
u32 vfid;
u32 threshold;
xe_gt_assert(gt, len >= GUC_HXG_MSG_MIN_LEN);
xe_gt_assert(gt, FIELD_GET(GUC_HXG_MSG_0_ORIGIN, msg[0]) == GUC_HXG_ORIGIN_GUC);
xe_gt_assert(gt, FIELD_GET(GUC_HXG_MSG_0_TYPE, msg[0]) == GUC_HXG_TYPE_EVENT);
xe_gt_assert(gt, FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, msg[0]) ==
GUC_ACTION_GUC2PF_ADVERSE_EVENT);
if (unlikely(!IS_SRIOV_PF(xe)))
return -EPROTO;
if (unlikely(FIELD_GET(GUC2PF_ADVERSE_EVENT_EVENT_MSG_0_MBZ, msg[0])))
return -EPFNOSUPPORT;
if (unlikely(len < GUC2PF_ADVERSE_EVENT_EVENT_MSG_LEN))
return -EPROTO;
vfid = FIELD_GET(GUC2PF_ADVERSE_EVENT_EVENT_MSG_1_VFID, msg[1]);
threshold = FIELD_GET(GUC2PF_ADVERSE_EVENT_EVENT_MSG_2_THRESHOLD, msg[2]);
if (unlikely(vfid > xe_gt_sriov_pf_get_totalvfs(gt)))
return -EINVAL;
return pf_handle_vf_threshold_event(gt, vfid, threshold);
}
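For context, a hedged sketch of how a G2H dispatcher could route this action to the new handler; the dispatcher itself and the way action and payload are split are assumptions, only GUC_ACTION_GUC2PF_ADVERSE_EVENT and the handler are from the code above:

/* Illustrative dispatcher fragment, not the driver's actual CT handler. */
static int example_dispatch_g2h(struct xe_gt *gt, u32 action,
				const u32 *msg, u32 len)
{
	switch (action) {
	case GUC_ACTION_GUC2PF_ADVERSE_EVENT:
		return xe_gt_sriov_pf_monitor_process_guc2pf(gt, msg, len);
	default:
		return -EOPNOTSUPP;
	}
}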
/**
 * xe_gt_sriov_pf_monitor_print_events - Print adverse event counters.
* @gt: the &xe_gt to print events from
* @p: the &drm_printer
*
 * Print adverse event counters for all VFs.
* VFs with no events are not printed.
*
* This function can only be called on PF.
*/
void xe_gt_sriov_pf_monitor_print_events(struct xe_gt *gt, struct drm_printer *p)
{
unsigned int n, total_vfs = xe_gt_sriov_pf_get_totalvfs(gt);
const struct xe_gt_sriov_monitor *data;
int e;
xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
for (n = 1; n <= total_vfs; n++) {
data = &gt->sriov.pf.vfs[n].monitor;
for (e = 0; e < XE_GUC_KLV_NUM_THRESHOLDS; e++)
if (data->guc.events[e])
break;
/* skip empty unless in debug mode */
if (e >= XE_GUC_KLV_NUM_THRESHOLDS &&
!IS_ENABLED(CONFIG_DRM_XE_DEBUG_SRIOV))
continue;
#define __format(...) "%s:%u "
#define __value(TAG, NAME, ...) , #NAME, data->guc.events[MAKE_XE_GUC_KLV_THRESHOLD_INDEX(TAG)]
drm_printf(p, "VF%u:\t" MAKE_XE_GUC_KLV_THRESHOLDS_SET(__format) "\n",
n MAKE_XE_GUC_KLV_THRESHOLDS_SET(__value));
#undef __format
#undef __value
}
}
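The __format/__value pair relies on MAKE_XE_GUC_KLV_THRESHOLDS_SET() being an x-macro that invokes its argument once per defined threshold; with two hypothetical thresholds (tag CAT_ERROR named cat_error, tag PAGE_FAULT named page_fault), the drm_printf() above would expand roughly to:

/* Hypothetical expansion for a two-threshold set (tags/names are illustrative). */
drm_printf(p, "VF%u:\t" "%s:%u " "%s:%u " "\n", n,
	   "cat_error", data->guc.events[MAKE_XE_GUC_KLV_THRESHOLD_INDEX(CAT_ERROR)],
	   "page_fault", data->guc.events[MAKE_XE_GUC_KLV_THRESHOLD_INDEX(PAGE_FAULT)]);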
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_PF_MONITOR_H_
#define _XE_GT_SRIOV_PF_MONITOR_H_
#include <linux/errno.h>
#include <linux/types.h>
struct drm_printer;
struct xe_gt;
void xe_gt_sriov_pf_monitor_flr(struct xe_gt *gt, u32 vfid);
void xe_gt_sriov_pf_monitor_print_events(struct xe_gt *gt, struct drm_printer *p);
#ifdef CONFIG_PCI_IOV
int xe_gt_sriov_pf_monitor_process_guc2pf(struct xe_gt *gt, const u32 *msg, u32 len);
#else
static inline int xe_gt_sriov_pf_monitor_process_guc2pf(struct xe_gt *gt, const u32 *msg, u32 len)
{
return -EPROTO;
}
#endif
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_PF_MONITOR_TYPES_H_
#define _XE_GT_SRIOV_PF_MONITOR_TYPES_H_
#include "xe_guc_klv_thresholds_set_types.h"
/**
* struct xe_gt_sriov_monitor - GT level per-VF monitoring data.
*/
struct xe_gt_sriov_monitor {
/** @guc: monitoring data related to the GuC. */
struct {
/** @guc.events: number of adverse events reported by the GuC. */
unsigned int events[XE_GUC_KLV_NUM_THRESHOLDS];
} guc;
};
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_PF_SERVICE_H_
#define _XE_GT_SRIOV_PF_SERVICE_H_
#include <linux/errno.h>
#include <linux/types.h>
struct drm_printer;
struct xe_gt;
int xe_gt_sriov_pf_service_init(struct xe_gt *gt);
void xe_gt_sriov_pf_service_update(struct xe_gt *gt);
void xe_gt_sriov_pf_service_reset(struct xe_gt *gt, unsigned int vfid);
int xe_gt_sriov_pf_service_print_version(struct xe_gt *gt, struct drm_printer *p);
int xe_gt_sriov_pf_service_print_runtime(struct xe_gt *gt, struct drm_printer *p);
#ifdef CONFIG_PCI_IOV
int xe_gt_sriov_pf_service_process_request(struct xe_gt *gt, u32 origin,
const u32 *msg, u32 msg_len,
u32 *response, u32 resp_size);
#else
static inline int
xe_gt_sriov_pf_service_process_request(struct xe_gt *gt, u32 origin,
const u32 *msg, u32 msg_len,
u32 *response, u32 resp_size)
{
return -EPROTO;
}
#endif
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_PF_SERVICE_TYPES_H_
#define _XE_GT_SRIOV_PF_SERVICE_TYPES_H_
#include <linux/types.h>
struct xe_reg;
/**
* struct xe_gt_sriov_pf_service_version - VF/PF ABI Version.
* @major: the major version of the VF/PF ABI
* @minor: the minor version of the VF/PF ABI
*
* See `GuC Relay Communication`_.
*/
struct xe_gt_sriov_pf_service_version {
u16 major;
u16 minor;
};
/**
* struct xe_gt_sriov_pf_service_runtime_regs - Runtime data shared with VFs.
* @regs: pointer to static array with register offsets.
* @values: pointer to array with captured register values.
* @size: size of the regs and value arrays.
*/
struct xe_gt_sriov_pf_service_runtime_regs {
const struct xe_reg *regs;
u32 *values;
u32 size;
};
/**
* struct xe_gt_sriov_pf_service - Data used by the PF service.
* @version: information about VF/PF ABI versions for current platform.
* @version.base: lowest VF/PF ABI version that could be negotiated with VF.
* @version.latest: latest VF/PF ABI version supported by the PF driver.
* @runtime: runtime data shared with VFs.
*/
struct xe_gt_sriov_pf_service {
struct {
struct xe_gt_sriov_pf_service_version base;
struct xe_gt_sriov_pf_service_version latest;
} version;
struct xe_gt_sriov_pf_service_runtime_regs runtime;
};
#endif
...@@ -9,7 +9,9 @@ ...@@ -9,7 +9,9 @@
#include <linux/types.h> #include <linux/types.h>
#include "xe_gt_sriov_pf_config_types.h" #include "xe_gt_sriov_pf_config_types.h"
#include "xe_gt_sriov_pf_monitor_types.h"
#include "xe_gt_sriov_pf_policy_types.h" #include "xe_gt_sriov_pf_policy_types.h"
#include "xe_gt_sriov_pf_service_types.h"
/** /**
* struct xe_gt_sriov_metadata - GT level per-VF metadata. * struct xe_gt_sriov_metadata - GT level per-VF metadata.
...@@ -17,15 +19,23 @@ ...@@ -17,15 +19,23 @@
struct xe_gt_sriov_metadata { struct xe_gt_sriov_metadata {
/** @config: per-VF provisioning data. */ /** @config: per-VF provisioning data. */
struct xe_gt_sriov_config config; struct xe_gt_sriov_config config;
/** @monitor: per-VF monitoring data. */
struct xe_gt_sriov_monitor monitor;
/** @version: negotiated VF/PF ABI version */
struct xe_gt_sriov_pf_service_version version;
}; };
/** /**
* struct xe_gt_sriov_pf - GT level PF virtualization data. * struct xe_gt_sriov_pf - GT level PF virtualization data.
* @service: service data.
* @policy: policy data. * @policy: policy data.
* @spare: PF-only provisioning configuration. * @spare: PF-only provisioning configuration.
* @vfs: metadata for all VFs. * @vfs: metadata for all VFs.
*/ */
struct xe_gt_sriov_pf { struct xe_gt_sriov_pf {
struct xe_gt_sriov_pf_service service;
struct xe_gt_sriov_pf_policy policy; struct xe_gt_sriov_pf_policy policy;
struct xe_gt_sriov_spare_config spare; struct xe_gt_sriov_spare_config spare;
struct xe_gt_sriov_metadata *vfs; struct xe_gt_sriov_metadata *vfs;
......
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_VF_H_
#define _XE_GT_SRIOV_VF_H_
#include <linux/types.h>
struct drm_printer;
struct xe_gt;
struct xe_reg;
int xe_gt_sriov_vf_bootstrap(struct xe_gt *gt);
int xe_gt_sriov_vf_query_config(struct xe_gt *gt);
int xe_gt_sriov_vf_connect(struct xe_gt *gt);
int xe_gt_sriov_vf_query_runtime(struct xe_gt *gt);
int xe_gt_sriov_vf_prepare_ggtt(struct xe_gt *gt);
u32 xe_gt_sriov_vf_gmdid(struct xe_gt *gt);
u16 xe_gt_sriov_vf_guc_ids(struct xe_gt *gt);
u64 xe_gt_sriov_vf_lmem(struct xe_gt *gt);
u32 xe_gt_sriov_vf_read32(struct xe_gt *gt, struct xe_reg reg);
void xe_gt_sriov_vf_print_config(struct xe_gt *gt, struct drm_printer *p);
void xe_gt_sriov_vf_print_runtime(struct xe_gt *gt, struct drm_printer *p);
void xe_gt_sriov_vf_print_version(struct xe_gt *gt, struct drm_printer *p);
#endif
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023-2024 Intel Corporation
*/
#include <linux/debugfs.h>
#include <drm/drm_debugfs.h>
#include "xe_gt_debugfs.h"
#include "xe_gt_sriov_vf.h"
#include "xe_gt_sriov_vf_debugfs.h"
#include "xe_gt_types.h"
#include "xe_sriov.h"
/*
* /sys/kernel/debug/dri/0/
* ├── gt0
* │   ├── vf
* │   │   ├── self_config
* │   │   ├── abi_versions
* │   │   ├── runtime_regs
*/
static const struct drm_info_list vf_info[] = {
{
"self_config",
.show = xe_gt_debugfs_simple_show,
.data = xe_gt_sriov_vf_print_config,
},
{
"abi_versions",
.show = xe_gt_debugfs_simple_show,
.data = xe_gt_sriov_vf_print_version,
},
#if defined(CONFIG_DRM_XE_DEBUG) || defined(CONFIG_DRM_XE_DEBUG_SRIOV)
{
"runtime_regs",
.show = xe_gt_debugfs_simple_show,
.data = xe_gt_sriov_vf_print_runtime,
},
#endif
};
/**
* xe_gt_sriov_vf_debugfs_register - Register SR-IOV VF specific entries in GT debugfs.
* @gt: the &xe_gt to register
* @root: the &dentry that represents the GT directory
*
* Register SR-IOV VF entries that are GT related and must be shown under GT debugfs.
*/
void xe_gt_sriov_vf_debugfs_register(struct xe_gt *gt, struct dentry *root)
{
struct xe_device *xe = gt_to_xe(gt);
struct drm_minor *minor = xe->drm.primary;
struct dentry *vfdentry;
xe_assert(xe, IS_SRIOV_VF(xe));
xe_assert(xe, root->d_inode->i_private == gt);
/*
* /sys/kernel/debug/dri/0/
* ├── gt0
* │   ├── vf
*/
vfdentry = debugfs_create_dir("vf", root);
if (IS_ERR(vfdentry))
return;
vfdentry->d_inode->i_private = gt;
drm_debugfs_create_files(vf_info, ARRAY_SIZE(vf_info), vfdentry, minor);
}
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_VF_DEBUGFS_H_
#define _XE_GT_SRIOV_VF_DEBUGFS_H_
struct xe_gt;
struct dentry;
void xe_gt_sriov_vf_debugfs_register(struct xe_gt *gt, struct dentry *root);
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023-2024 Intel Corporation
*/
#ifndef _XE_GT_SRIOV_VF_TYPES_H_
#define _XE_GT_SRIOV_VF_TYPES_H_
#include <linux/types.h>
/**
* struct xe_gt_sriov_vf_guc_version - GuC ABI version details.
*/
struct xe_gt_sriov_vf_guc_version {
/** @branch: branch version. */
u8 branch;
/** @major: major version. */
u8 major;
/** @minor: minor version. */
u8 minor;
/** @patch: patch version. */
u8 patch;
};
/**
* struct xe_gt_sriov_vf_relay_version - PF ABI version details.
*/
struct xe_gt_sriov_vf_relay_version {
/** @major: major version. */
u16 major;
/** @minor: minor version. */
u16 minor;
};
/**
* struct xe_gt_sriov_vf_selfconfig - VF configuration data.
*/
struct xe_gt_sriov_vf_selfconfig {
/** @ggtt_base: assigned base offset of the GGTT region. */
u64 ggtt_base;
/** @ggtt_size: assigned size of the GGTT region. */
u64 ggtt_size;
/** @lmem_size: assigned size of the LMEM. */
u64 lmem_size;
/** @num_ctxs: assigned number of GuC submission context IDs. */
u16 num_ctxs;
/** @num_dbs: assigned number of GuC doorbells IDs. */
u16 num_dbs;
};
/**
* struct xe_gt_sriov_vf_runtime - VF runtime data.
*/
struct xe_gt_sriov_vf_runtime {
        /** @gmdid: cached value of the GMDID register. */
        u32 gmdid;
        /** @regs_size: size of runtime register array. */
        u32 regs_size;
        /** @num_regs: number of runtime registers in the array. */
        u32 num_regs;
        /** @regs: pointer to array of register offset/value pairs. */
        struct vf_runtime_reg {
                /** @regs.offset: register offset. */
                u32 offset;
                /** @regs.value: register value. */
                u32 value;
        } *regs;
};
/**
* struct xe_gt_sriov_vf - GT level VF virtualization data.
*/
struct xe_gt_sriov_vf {
        /** @guc_version: negotiated GuC ABI version. */
        struct xe_gt_sriov_vf_guc_version guc_version;
        /** @self_config: resource configurations. */
        struct xe_gt_sriov_vf_selfconfig self_config;
        /** @pf_version: negotiated VF/PF ABI version. */
        struct xe_gt_sriov_vf_relay_version pf_version;
        /** @runtime: runtime data retrieved from the PF. */
        struct xe_gt_sriov_vf_runtime runtime;
};
#endif
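To illustrate how this configuration data is consumed, here is a minimal sketch of a print helper in the spirit of xe_gt_sriov_vf_print_config(), assuming the VF data is reachable as gt->sriov.vf and that the relevant drm_print and xe GT headers are included; the output format is a guess, not taken from this change:

static void example_print_self_config(struct xe_gt *gt, struct drm_printer *p)
{
        struct xe_gt_sriov_vf_selfconfig *config = &gt->sriov.vf.self_config;

        /* dump the resources assigned to this VF by the PF */
        drm_printf(p, "GGTT range:\t%#llx-%#llx\n",
                   config->ggtt_base,
                   config->ggtt_base + config->ggtt_size - 1);
        drm_printf(p, "LMEM size:\t%llu\n", config->lmem_size);
        drm_printf(p, "GuC contexts:\t%u\n", config->num_ctxs);
        drm_printf(p, "GuC doorbells:\t%u\n", config->num_dbs);
}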
@@ -22,7 +22,7 @@ static const struct kobj_type xe_gt_sysfs_kobj_type = {
         .sysfs_ops = &kobj_sysfs_ops,
 };

-static void gt_sysfs_fini(struct drm_device *drm, void *arg)
+static void gt_sysfs_fini(void *arg)
 {
         struct xe_gt *gt = arg;
@@ -51,5 +51,5 @@ int xe_gt_sysfs_init(struct xe_gt *gt)
         gt->sysfs = &kg->base;

-        return drmm_add_action_or_reset(&xe->drm, gt_sysfs_fini, gt);
+        return devm_add_action(xe->drm.dev, gt_sysfs_fini, gt);
 }
@@ -9,14 +9,14 @@
 #include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_gt_sysfs.h"
-#include "xe_gt_throttle_sysfs.h"
+#include "xe_gt_throttle.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"

 /**
  * DOC: Xe GT Throttle
  *
- * Provides sysfs entries for frequency throttle reasons in GT
+ * Provides sysfs entries and other helpers for frequency throttle reasons in GT
  *
  * device/gt#/freq0/throttle/status - Overall status
  * device/gt#/freq0/throttle/reason_pl1 - Frequency throttle due to PL1
@@ -35,7 +35,7 @@ dev_to_gt(struct device *dev)
         return kobj_to_gt(dev->kobj.parent);
 }

-static u32 read_perf_limit_reasons(struct xe_gt *gt)
+u32 xe_gt_throttle_get_limit_reasons(struct xe_gt *gt)
 {
         u32 reg;
@@ -51,63 +51,63 @@ static u32 read_perf_limit_reasons(struct xe_gt *gt)
 static u32 read_status(struct xe_gt *gt)
 {
-        u32 status = read_perf_limit_reasons(gt) & GT0_PERF_LIMIT_REASONS_MASK;
+        u32 status = xe_gt_throttle_get_limit_reasons(gt) & GT0_PERF_LIMIT_REASONS_MASK;

         return status;
 }

 static u32 read_reason_pl1(struct xe_gt *gt)
 {
-        u32 pl1 = read_perf_limit_reasons(gt) & POWER_LIMIT_1_MASK;
+        u32 pl1 = xe_gt_throttle_get_limit_reasons(gt) & POWER_LIMIT_1_MASK;

         return pl1;
 }

 static u32 read_reason_pl2(struct xe_gt *gt)
 {
-        u32 pl2 = read_perf_limit_reasons(gt) & POWER_LIMIT_2_MASK;
+        u32 pl2 = xe_gt_throttle_get_limit_reasons(gt) & POWER_LIMIT_2_MASK;

         return pl2;
 }

 static u32 read_reason_pl4(struct xe_gt *gt)
 {
-        u32 pl4 = read_perf_limit_reasons(gt) & POWER_LIMIT_4_MASK;
+        u32 pl4 = xe_gt_throttle_get_limit_reasons(gt) & POWER_LIMIT_4_MASK;

         return pl4;
 }

 static u32 read_reason_thermal(struct xe_gt *gt)
 {
-        u32 thermal = read_perf_limit_reasons(gt) & THERMAL_LIMIT_MASK;
+        u32 thermal = xe_gt_throttle_get_limit_reasons(gt) & THERMAL_LIMIT_MASK;

         return thermal;
 }

 static u32 read_reason_prochot(struct xe_gt *gt)
 {
-        u32 prochot = read_perf_limit_reasons(gt) & PROCHOT_MASK;
+        u32 prochot = xe_gt_throttle_get_limit_reasons(gt) & PROCHOT_MASK;

         return prochot;
 }

 static u32 read_reason_ratl(struct xe_gt *gt)
 {
-        u32 ratl = read_perf_limit_reasons(gt) & RATL_MASK;
+        u32 ratl = xe_gt_throttle_get_limit_reasons(gt) & RATL_MASK;

         return ratl;
 }

 static u32 read_reason_vr_thermalert(struct xe_gt *gt)
 {
-        u32 thermalert = read_perf_limit_reasons(gt) & VR_THERMALERT_MASK;
+        u32 thermalert = xe_gt_throttle_get_limit_reasons(gt) & VR_THERMALERT_MASK;

         return thermalert;
 }

 static u32 read_reason_vr_tdc(struct xe_gt *gt)
 {
-        u32 tdc = read_perf_limit_reasons(gt) & VR_TDC_MASK;
+        u32 tdc = xe_gt_throttle_get_limit_reasons(gt) & VR_TDC_MASK;

         return tdc;
 }
@@ -229,14 +229,14 @@ static const struct attribute_group throttle_group_attrs = {
         .attrs = throttle_attrs,
 };

-static void gt_throttle_sysfs_fini(struct drm_device *drm, void *arg)
+static void gt_throttle_sysfs_fini(void *arg)
 {
         struct xe_gt *gt = arg;

         sysfs_remove_group(gt->freq, &throttle_group_attrs);
 }

-int xe_gt_throttle_sysfs_init(struct xe_gt *gt)
+int xe_gt_throttle_init(struct xe_gt *gt)
 {
         struct xe_device *xe = gt_to_xe(gt);
         int err;
@@ -245,5 +245,5 @@ int xe_gt_throttle_sysfs_init(struct xe_gt *gt)
         if (err)
                 return err;

-        return drmm_add_action_or_reset(&xe->drm, gt_throttle_sysfs_fini, gt);
+        return devm_add_action_or_reset(xe->drm.dev, gt_throttle_sysfs_fini, gt);
 }
@@ -3,14 +3,15 @@
  * Copyright © 2023 Intel Corporation
  */

-#ifndef _XE_GT_THROTTLE_SYSFS_H_
-#define _XE_GT_THROTTLE_SYSFS_H_
+#ifndef _XE_GT_THROTTLE_H_
+#define _XE_GT_THROTTLE_H_

-#include <drm/drm_managed.h>
+#include <linux/types.h>

 struct xe_gt;

-int xe_gt_throttle_sysfs_init(struct xe_gt *gt);
+int xe_gt_throttle_init(struct xe_gt *gt);

-#endif /* _XE_GT_THROTTLE_SYSFS_H_ */
+u32 xe_gt_throttle_get_limit_reasons(struct xe_gt *gt);
+
+#endif /* _XE_GT_THROTTLE_H_ */
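With the helper renamed and exported, code outside the throttle sysfs path can query the raw PERF_LIMIT_REASONS value directly. A hypothetical caller, not part of this change, reusing the THERMAL_LIMIT_MASK already seen above:

static bool example_gt_is_thermally_limited(struct xe_gt *gt)
{
        /* any set bit under the thermal mask means the GT is being throttled */
        return xe_gt_throttle_get_limit_reasons(gt) & THERMAL_LIMIT_MASK;
}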
@@ -20,6 +20,9 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
 int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
                                struct xe_gt_tlb_invalidation_fence *fence,
                                struct xe_vma *vma);
+int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+                                 struct xe_gt_tlb_invalidation_fence *fence,
+                                 u64 start, u64 end, u32 asid);
 int xe_gt_tlb_invalidation_wait(struct xe_gt *gt, int seqno);

 int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
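The new range interface invalidates TLBs for an arbitrary [start, end) span within a given address space instead of deriving the range from a VMA. A hedged usage sketch; the values are placeholders and the fire-and-forget NULL fence is an assumption here, not something confirmed by this hunk:

static int example_invalidate_range(struct xe_gt *gt, u64 start, u64 end, u32 asid)
{
        /* a fence may instead be supplied by the caller to wait for completion */
        return xe_gt_tlb_invalidation_range(gt, NULL, start, end, asid);
}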
@@ -108,7 +108,9 @@ gen_l3_mask_from_pattern(struct xe_device *xe, xe_l3_bank_mask_t dst,
 {
         unsigned long bit;

-        xe_assert(xe, fls(mask) <= patternbits);
+        xe_assert(xe, find_last_bit(pattern, XE_MAX_L3_BANK_MASK_BITS) < patternbits ||
+                  bitmap_empty(pattern, XE_MAX_L3_BANK_MASK_BITS));
+        xe_assert(xe, !mask || patternbits * (__fls(mask) + 1) <= XE_MAX_L3_BANK_MASK_BITS);
         for_each_set_bit(bit, &mask, 32) {
                 xe_l3_bank_mask_t shifted_pattern = {};
@@ -278,3 +280,13 @@ bool xe_gt_topology_has_dss_in_quadrant(struct xe_gt *gt, int quad)

         return quad_first < (quad + 1) * dss_per_quad;
 }
+
+bool xe_gt_has_geometry_dss(struct xe_gt *gt, unsigned int dss)
+{
+        return test_bit(dss, gt->fuse_topo.g_dss_mask);
+}
+
+bool xe_gt_has_compute_dss(struct xe_gt *gt, unsigned int dss)
+{
+        return test_bit(dss, gt->fuse_topo.c_dss_mask);
+}
@@ -33,4 +33,7 @@ bool xe_dss_mask_empty(const xe_dss_mask_t mask);
 bool
 xe_gt_topology_has_dss_in_quadrant(struct xe_gt *gt, int quad);

+bool xe_gt_has_geometry_dss(struct xe_gt *gt, unsigned int dss);
+bool xe_gt_has_compute_dss(struct xe_gt *gt, unsigned int dss);
+
 #endif /* _XE_GT_TOPOLOGY_H_ */
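The two new predicates let callers test whether a particular DSS is present in the geometry or compute fuse masks without reaching into gt->fuse_topo directly. A hypothetical usage example, not part of this change; the max_dss bound is a placeholder:

static unsigned int example_count_geometry_only_dss(struct xe_gt *gt,
                                                    unsigned int max_dss)
{
        unsigned int dss, count = 0;

        /* count DSS that are fused in for geometry but not for compute */
        for (dss = 0; dss < max_dss; dss++)
                if (xe_gt_has_geometry_dss(gt, dss) &&
                    !xe_gt_has_compute_dss(gt, dss))
                        count++;

        return count;
}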
@@ -35,9 +35,8 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p);
 int xe_guc_reset_prepare(struct xe_guc *guc);
 void xe_guc_reset_wait(struct xe_guc *guc);
 void xe_guc_stop_prepare(struct xe_guc *guc);
-int xe_guc_stop(struct xe_guc *guc);
+void xe_guc_stop(struct xe_guc *guc);
 int xe_guc_start(struct xe_guc *guc);
-bool xe_guc_in_reset(struct xe_guc *guc);

 static inline u16 xe_engine_class_to_guc_class(enum xe_engine_class class)
 {