- 19 Dec, 2023 40 commits
-
-
Lucas De Marchi authored
Platform order in enum xe_platform started to be used by some parts of the code, like the GuC/HuC firmware loading logic. The order itself is not very important, but it's better to follow a convention: as was documented in the comment above the enum, reorder the platforms by graphics version. While at it, remove the gen terminology. v2: - Use "graphics version" instead of chronological order (Matt Roper) - Also change pciidlist to follow the same order - Remove "gen" from comments around enum xe_platform Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20230331230902.1603294-1-lucas.demarchi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
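A sketch of the resulting convention (entries abbreviated and illustrative of the ordering, not an exact copy of the enum):

	enum xe_platform {
		XE_PLATFORM_UNINITIALIZED = 0,
		/* Graphics version 12.0 */
		XE_TIGERLAKE,
		XE_ROCKETLAKE,
		XE_DG1,
		XE_ALDERLAKE_S,
		XE_ALDERLAKE_P,
		/* Graphics versions 12.55 / 12.60 */
		XE_DG2,
		XE_PVC,
		/* Graphics version 12.70 */
		XE_METEORLAKE,
	};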
-
Niranjana Vishwanathapura authored
In xe_migrate functions, use proper vram io offset of the tiles while calculating addresses. Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Niranjana Vishwanathapura authored
In the xe_migrate_sanity_kunit test, use the correct expected value, as the expected value is used not only for xe_migrate_clear() but also for the xe_migrate_copy() operation. v2: Add 'Fixes' tag and update commit text Fixes: 11a2407e ("drm/xe: Stop accepting value in xe_migrate_clear") Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Niranjana Vishwanathapura authored
In the xe_migrate_sanity_kunit test, use the proper batch base address by considering the USM case. Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Lucas De Marchi authored
The rev field is always 0, so it ends up never used. In i915 it was introduced because of CML: up to rev 5 it reuses the GuC and HuC firmware blobs from KBL; after that there is a specific firmware for that platform. This can be reintroduced later if ever needed. With the removal of revid, the packed attribute in uc_fw_platform_requirement, which is there only to reduce the space these tables take, can also be removed since it has even more limited usefulness: currently there's only a padding of 2 bytes. Remove the attribute to avoid the unaligned access.

	$ pahole -C uc_fw_platform_requirement build64/drivers/gpu/drm/xe/xe_uc_fw.o
	struct uc_fw_platform_requirement {
		enum xe_platform        p;       /*     0     4 */
		const struct uc_fw_blob blob;    /*     4    10 */

		/* size: 16, cachelines: 1, members: 2 */
		/* padding: 2 */
		/* last cacheline: 16 bytes */
	};

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20230324051754.1346390-2-lucas.demarchi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
The MI_BATCH_BUFFER_END is already added automatically by __xe_bb_create_job(); including it in the construction of the workaround batchbuffer results in an unnecessary duplicate. Link: https://lore.kernel.org/r/20230329173334.4015124-4-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
We should WARN (not BUG) when creating a job if the batchbuffer does not have sufficient space and padding. The hardware prefetch requirements should also be considered. Link: https://lore.kernel.org/r/20230329173334.4015124-3-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
The hardware prefetches several cachelines of data from batchbuffers before they are parsed. This prefetching only stops when the parser encounters an MI_BATCH_BUFFER_END instruction (or a nested MI_BATCH_BUFFER_START), so we must ensure that there is enough padding at the end of the batchbuffer to prevent the prefetcher from running past the end of the allocation and potentially faulting. Bspec: 45717 Link: https://lore.kernel.org/r/20230329173334.4015124-2-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
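The fix amounts to padding the tail of every batch buffer. A minimal sketch of the idea, assuming an illustrative helper name and a dword-addressed buffer:

	#define MI_BATCH_BUFFER_END	(0x0a << 23)
	#define MI_NOOP			0x0

	/* Terminate the batch, then fill the remaining space so the
	 * prefetcher never reads past the end of the allocation. */
	static void bb_pad_tail(u32 *cs, u32 used, u32 total_dwords)
	{
		cs[used++] = MI_BATCH_BUFFER_END;
		while (used < total_dwords)
			cs[used++] = MI_NOOP;
	}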
-
Matthew Brost authored
Add some error messages describing the problem when xe_gt_record_default_lrcs fails. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Chang, Bruce authored
In general this is due to a FW load failure; we should just report the error and fail the probe so that the user can easily retry. Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
The tables are only used within this file; there's no reason for them not to be static. Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230327175824.2967914-1-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
Get rid of some of the duplication here. In a future patch we need to also consider [fpfn, lpfn], so it's better to adjust it in only one place. Suggested-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
So we don't have to keep repeating VRAM0 | VRAM1. Also, if there are ever more instances, then we have fewer places to update. Suggested-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
Starting with MTL, the number of entries in the PAT table increased to 16. The register offset jumped between index 7 and index 8, so a slight adjustment is needed to ensure the PAT_INDEX macros select the proper offset for the upper half of the table. Note that although there are 16 registers in the hardware, the driver is currently only asked to program the first 5, and we leave the rest at their hardware default values. That means we don't actually touch the upper half of the PAT table in the driver today and this patch won't have any functional effect [yet]. Bspec: 44235 Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230324210415.2434992-7-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
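The adjustment boils down to selecting between two register ranges. A sketch of the macro shape (the 0x4800/0x4848 bases follow the bspec layout, but treat the exact offsets here as illustrative):

	/* Entries 0-7 and 8-15 live in two separate register ranges. */
	#define _PAT_INDEX(i)	((i) < 8 ? 0x4800 + (i) * 4 \
					 : 0x4848 + ((i) - 8) * 4)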
-
Matt Roper authored
Re-sync our MTL PAT table with the bspec. 1-way coherency should only be set on table entry 3. We do not want an incorrect setting here to accidentally paper over other bugs elsewhere in the driver. Bspec: 45101 Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230324210415.2434992-6-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
Replace the deprecated "GEN" terminology in the PAT definitions. Acked-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230324210415.2434992-5-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
The PAT_INDEX registers are MCR registers on some platforms and unicast on others. On MTL the handling even varies between GTs: the primary GT uses MCR registers while the media GT uses unicast registers. Let's add proper MCR programming on the relevant platforms/GTs. Given that we expect PAT tables to change pretty regularly on future platforms, we'll make PAT programming an exception to the usual model of assuming new platforms should inherit the previous platform's behavior. Instead we'll raise a warning if the current platform isn't handled in the if/else ladder. This should help prevent subtle cache misbehavior if we forget to add the table for a new platform. Bspec: 66534, 67609, 67788 Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230324210415.2434992-4-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
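A sketch of the resulting flow, with hypothetical helper names standing in for the real MCR/MMIO accessors:

	static void xe_pat_program(struct xe_gt *gt, const u32 *table, int n)
	{
		int i;

		if (gt_needs_mcr_pat(gt)) {		/* e.g. MTL primary GT */
			for (i = 0; i < n; i++)
				mcr_write(gt, PAT_INDEX(i), table[i]);
		} else if (gt_needs_unicast_pat(gt)) {	/* e.g. MTL media GT */
			for (i = 0; i < n; i++)
				mmio_write(gt, PAT_INDEX(i), table[i]);
		} else {
			/* New platform without a PAT table: fail loudly. */
			drm_warn(&gt_to_xe(gt)->drm, "Missing PAT programming\n");
		}
	}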
-
Matt Roper authored
Provide per-platform tables of PAT values rather than per-platform functions. This will simplify the handling of unicast vs MCR registers in the upcoming patches. Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230324210415.2434992-3-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
PAT handling is growing in complexity and will continue to do so in upcoming platforms. Separate it out to a dedicated file to keep things tidy. The code is moved as-is here (aside from a few unused #define's that are just dropped); further changes will come in future patches. Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230324210415.2434992-2-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
Rather than waiting for the VM to be destroyed (all refs to the VM going to zero), drop the fault mode counts when the VM is closed in xe_vm_close_and_put. This avoids a window where user space can create a faulting VM, close it, and have a subsequent creation of a non-faulting VM fail. v2 (Lucas): Drop VLK reference in commit message Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Suggested-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
José Roberto de Souza authored
The Intel Vulkan driver needs to know the maximum priority in order to fill a device info struct for applications. Right now we get this information by creating an engine and setting priorities from min to high to find the maximum priority available to the running process, but this leads to info messages being printed to dmesg: xe 0000:03:00.0: [drm] Ioctl argument check failed at drivers/gpu/drm/xe/xe_engine.c:178: value == DRM_SCHED_PRIORITY_HIGH && !capable(CAP_SYS_NICE) It does not cause any harm, but when executing a test suite like crucible it causes thousands of those messages to be printed. So add one more property to drm_xe_query_config to fetch the max engine priority. Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
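From userspace this becomes a standard two-call device query. A sketch assuming the property name below (check the uAPI header for the real one):

	struct drm_xe_device_query query = {
		.query = DRM_XE_DEVICE_QUERY_CONFIG,
	};

	ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);	/* 1st call: get size */
	struct drm_xe_query_config *config = malloc(query.size);
	query.data = (uintptr_t)config;
	ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);	/* 2nd call: get data */

	/* Property index is an assumption for illustration. */
	uint64_t max_prio = config->info[XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY];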
-
Anusha Srivatsa authored
Alderlake S uses TGL HuC. Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20230323224651.1187366-3-lucas.demarchi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Anusha Srivatsa authored
Follow the new direction of firmware loading and add macro support for loading unversioned HuC. Keep versioned HuC loading support as well for platforms that fall under force_probe. Add a check to ensure the driver does not do any version check for HuC when going through the unversioned load. v2: unversioned firmware to be the default for platforms not under force_probe. Maintain versioned firmware macro support for platforms under force-probe protection. v3: Minor style and naming adjustments (Lucas) Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20230323224651.1187366-2-lucas.demarchi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
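The macro split looks roughly like this (firmware paths and macro names are illustrative, not the actual definitions):

	/* Unversioned: the default; takes whatever blob ships for the platform. */
	#define HUC_FW_DEF(plat)		"xe/" #plat "_huc.bin"
	/* Versioned: pins an exact blob; kept for force_probe platforms. */
	#define HUC_FW_DEF_VER(plat, mj, mn)	"xe/" #plat "_huc_" #mj "." #mn ".bin"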
-
Matthew Brost authored
Within a class the GuC will halt scheduling if the job at the head of the queue can't be scheduled; the queue will block. This can lead to deadlock if BCS0-7 all have faults and another engine on BCS0-7 is at the head of the GuC scheduling queue, as the migration engine used to fix the fault will be blocked. To work around this, set the migration engine to the highest priority when servicing page faults. v2 (Maarten): Set priority to kernel once at creation Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Brian Welty <brian.welty@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
Rather than using the passed-in GT, use the BO's GT to determine the dma_offset when programming PTEs, as these two GTs could differ (i.e. when mapping a BO from a remote GT). The BO's GT is the correct GT to use as this is where the BO resides, while the passed-in GT is where the mapping is created. v2: (Thomas) - Kernel doc, extra new line (CI) - Rebase to tip Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
Make sure we pass along the correct errors. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
Render/compute engines have additional caches (not just TLBs) that need to be invalidated each batch; reinstate these invalidations in ring ops. Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Suggested-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Maarten Lankhorst authored
xe_guc_ct_fast_path() is called from an irq context and cannot lock the mutex used by xe_device_mem_access_ongoing(). Fortunately it is easy to fix, and the atomic guarantees are good enough to ensure xe->mem_access.hold_rpm is set before the last ref is dropped. As far as I can tell, the runtime ref in device access should be killable, but I don't dare to do it yet. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
Zero-length arrays as fake flexible arrays are deprecated and we are moving towards adopting C99 flexible-array members instead. Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
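For reference, the pattern being replaced and its C99 form:

	struct old_msg {
		u32 len;
		u32 data[0];	/* deprecated fake flexible array */
	};

	struct new_msg {
		u32 len;
		u32 data[];	/* C99 flexible-array member; must be last */
	};

	/* Allocations then pair naturally with struct_size():
	 *   msg = kmalloc(struct_size(msg, data, n), GFP_KERNEL);
	 */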
-
Matthew Auld authored
Copy this from i915. We need .compatible for vram -> vram transfers so they don't just get nooped by TTM if we need to move something from mappable to non-mappable or vice versa. The .intersects hook is needed for eviction, to determine if a victim resource is worth evicting; e.g. if we need mappable space there is no point in evicting a resource that has zero mappable pages. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
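The eviction-side test is plain interval overlap. A sketch with illustrative names (the real hooks take TTM manager/resource/place arguments):

	/* A victim is only interesting if it overlaps the requested
	 * [fpfn, lpfn) window, e.g. the mappable portion of vram. */
	static bool vram_intersects(u64 res_start, u64 res_size,
				    u64 fpfn, u64 lpfn)
	{
		return res_start < lpfn && res_start + res_size > fpfn;
	}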
-
Matthew Auld authored
Replace the allocation code with the i915 version. This simplifies the code a little, and importantly we get the accounting at the mgr level, which is useful for debug (and maybe userspace), plus per-resource tracking so we can easily check if a resource is using one or more pages in the mappable part of vram (useful for eviction), or if the resource is completely within the mappable portion (useful for checking if the resource can be safely CPU mapped). v2: Fix missing PAGE_SHIFT v3: (Gwan-gyeong Mun) - Fix incorrect usage of ilog2(mm.chunk_size). - Fix calculation when checking for impossible allocation sizes, also check much earlier. v4: (Gwan-gyeong Mun) - Fix calculation when extending the [fpfn, lpfn] range due to the roundup_pow_of_two(). v5: (Gwan-gyeong Mun) - Move the check for running out of mappable VRAM to before doing any of the roundup_pow_of_two(). v6: (Jani) - Stop abusing BUG_ON(). We can easily just use WARN_ON() here and return a proper error to the caller, which is much nicer if we ever trigger these. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
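The per-resource tracking reduces to two checks against the CPU-visible boundary. A sketch with illustrative field names:

	/* Does the resource touch the mappable part of vram at all?
	 * (Useful when hunting for eviction victims.) */
	bool uses_mappable = res->start < mgr->visible_size;

	/* Is it entirely within the mappable part? (Safe to CPU map.) */
	bool fully_mappable = res->start + res->size <= mgr->visible_size;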
-
Balasubramani Vivekanandan authored
Although xe_migrate_clear() has a value argument, currently the driver only passes 0 at all the places this function is invoked, with the exception of the kunit tests, which use the parameter to validate this function with different values. xe_migrate_clear() is failing on platforms with link copy engines because xe_migrate_clear(), via emit_clear(), is using the blitter instruction XY_FAST_COLOR_BLT to clear the memory, but this instruction is not supported by the link copy engine. The solution is to use the alternate instruction MEM_SET when the platform contains a link copy engine. But the MEM_SET instruction accepts only an 8-bit value for setting, whereas the value argument of xe_migrate_clear() is 32-bit. So instead of spreading this limitation around all invocations of xe_migrate_clear() and causing more confusion, it was decided to not accept any value at all, as the driver does not really need this currently. All the kunit tests are adapted to the new function prototype. This will be followed by a patch to add support for link copy engines. Signed-off-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
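The eventual selection in emit_clear() will look roughly like this sketch (helper names are illustrative; the MEM_SET path lands in the follow-up patch):

	/* MEM_SET only accepts an 8-bit fill value, which is why the
	 * 32-bit value argument was dropped from xe_migrate_clear(). */
	if (engine_has_link_copy(engine))
		emit_mem_set(bb, addr, size, 0);		/* 8-bit fill */
	else
		emit_xy_fast_color_blt(bb, addr, size, 0);	/* 32-bit fill */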
-
Balasubramani Vivekanandan authored
When the GuC wopcm base and size registers are populated by BIOS/IFWI, validate the parameters against the maximum allowed wopcm size. Bspec: 44982 Signed-off-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
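The validation amounts to a bounds check on the BIOS/IFWI-programmed values. A sketch with illustrative names:

	u32 base = read_guc_wopcm_base(gt);
	u32 size = read_guc_wopcm_size(gt);

	/* Reject BIOS/IFWI programming that exceeds the hardware limit. */
	if (base + size > wopcm_max_size(xe))
		return -EIO;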
-
Matthew Auld authored
Hopefully not needed anymore. We can add a .compatible() hook once we need to differentiate between mappable and non-mappable vram. If the allocation is not contiguous then the start value is kind of meaningless, so rather just mark it as invalid. Upstream, TTM wants to eventually remove the ttm_resource.start usage. References: 54443270 ("drm/ttm: Add new callbacks to ttm res mgr") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Lucas De Marchi authored
In order to improve readability and make it more future proof, split the engine type from the graphics/platform checks. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20230317223441.3891073-1-lucas.demarchi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
First step towards supporting small-bar is to track the io_size for vram. We can no longer assume that io_size == vram size. This way we know how much is CPU accessible via the BAR, and how much is not. Effectively this gives us a two-tiered vram, where in some later patches we can support different allocation strategies depending on whether the memory needs to be CPU accessible or not. Note that at this stage we still clamp the vram size to the usable vram size. Only in the final patch do we turn this on for real, and allow distinct io_size and vram_size. v2: (Lucas): - Improve the commit message, plus improve the kernel-doc for the io_size to give a better sense of what it actually is. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
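Conceptually the region bookkeeping becomes (a sketch; the real struct and its kernel-doc live in the device headers):

	struct vram_region_sketch {
		resource_size_t io_start;	/* start of the BAR */
		resource_size_t io_size;	/* CPU-accessible via the BAR */
		resource_size_t vram_size;	/* total vram; may exceed io_size
						 * on small-bar parts */
	};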
-
Thomas Hellström authored
If all compute engines of a vm in compute mode are idle, defer a rebind to the next exec to avoid the VM unnecessarily trying to make memory resident and compete with other VMs for available memory space. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
Add a test parameter to force GPU page-table updates with the migrate test and test both CPU- and GPU updates. Also provide some timing results. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
Causes an early 32-bit wrap and may thus help CI catch wrapping errors that may otherwise not show early enough. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Thomas Hellström authored
Introduce xe_engine_is_idle() and replace the static function in xe_migrate.c. The latter had two flaws: first, the seqno == 1 test might return a false positive each time the seqno counter wrapped; second, the cur_seqno == next_seqno test would never return true. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
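A wrap-safe formulation of the idle test (a sketch; the real helper goes through the fence infrastructure):

	static bool engine_is_idle(u32 cur_seqno, u32 next_seqno)
	{
		/* The last emitted seqno is next_seqno - 1; the signed
		 * delta keeps the comparison valid across 32-bit wrap. */
		return (s32)(cur_seqno - (next_seqno - 1)) >= 0;
	}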
-