- 21 Dec, 2023 40 commits
-
-
Brian Welty authored
Replace the xe_ttm_vram_mgr.tile pointer with an xe_mem_region pointer. The former is currently unused. TTM VRAM regions expose device VRAM, so it is better to store a pointer directly to the xe_mem_region instead of the tile. This allows cleaning up unnecessary usage of xe_tile from xe_bo.c in a later patch. Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Badal Nilawar authored
Expose power1_max_interval, which is the tau corresponding to PL1, as a custom hwmon attribute. Some bit manipulation is needed because of the format of PKG_PWR_LIM_1_TIME in the PACKAGE_RAPL_LIMIT register (1.x * power(2,y)). v2: Get rpm wake ref while accessing power1_max_interval v3: %s/hwmon/xe_hwmon/ v4: - As power1_max_interval is a rw attr, take the lock in the read function as well - Refine comment about val to fixed-point conversion (Andi) - Update kernel version and date in doc v5: Fix review comments (Anshuman) Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231030115618.1382200-4-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
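As a rough illustration of the 1.x * power(2,y) encoding mentioned above, here is a minimal decode sketch; the field positions and widths are assumptions for illustration, not the authoritative PACKAGE_RAPL_LIMIT layout:

```c
#include <linux/types.h>

/* Assumed positions of the PKG_PWR_LIM_1_TIME sub-fields. */
#define PWR_LIM_1_TIME_X_SHIFT	22	/* 2-bit mantissa "x" (assumed) */
#define PWR_LIM_1_TIME_X_MASK	0x3
#define PWR_LIM_1_TIME_Y_SHIFT	17	/* 5-bit exponent "y" (assumed) */
#define PWR_LIM_1_TIME_Y_MASK	0x1f

/*
 * Decode 1.x * 2^y into an integer count of hardware time units;
 * "1.x" means 1 + x/4, i.e. (4 + x) / 4.
 */
static u64 pl1_tau_units(u32 reg)
{
	u32 x = (reg >> PWR_LIM_1_TIME_X_SHIFT) & PWR_LIM_1_TIME_X_MASK;
	u32 y = (reg >> PWR_LIM_1_TIME_Y_SHIFT) & PWR_LIM_1_TIME_Y_MASK;

	return ((u64)(4 + x) << y) >> 2;
}
```

The hwmon code would presumably scale such a value by the platform's time unit before exposing milliseconds to userspace.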
-
Badal Nilawar authored
Take hwmon_lock while accessing hwmon rw attributes. For read-only attributes it is not required to take the lock, as reads are protected by the sysfs layer and therefore sequential. Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231030115618.1382200-3-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Badal Nilawar authored
Add kernel doc and refactor some of the hwmon functions; there is no functional change. Cc: Anshuman Gupta <anshuman.gupta@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231030115618.1382200-2-badal.nilawar@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
With things like pipelined evictions, VRAM pages can be marked as free and yet still have some active kernel fences, with the idea that the next caller to allocate the memory will respect them. However, it looks like we are missing synchronisation for KMD internal buffers, like page-tables, lrc etc. For userspace objects we should already have the required synchronisation for CPU access via the fault handler, and likewise for GPU access when vm_binding them. To fix this, synchronise against any kernel fences for all KMD objects at creation. This should resolve some severe corruption seen during evictions. v2 (Matt B): - Revamp the comment explaining this. Also mention why USAGE_KERNEL is correct here. v3 (Thomas): - Make sure to use ctx.interruptible for the wait. Testcase: igt@xe-evict-ccs Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/853 Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/855 Reported-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Tested-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
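For reference, a minimal sketch of the kind of wait described above, built on the generic dma-resv API (the actual call site and error handling in xe_bo.c will differ):

```c
#include <linux/dma-resv.h>
#include <linux/sched.h>

/*
 * Before handing a freshly created KMD-internal BO to its user, wait for
 * any kernel fences (e.g. pipelined eviction moves) still attached to the
 * BO's reservation object.
 */
static int wait_for_kernel_fences(struct dma_resv *resv, bool interruptible)
{
	long ret;

	ret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_KERNEL,
				    interruptible, MAX_SCHEDULE_TIMEOUT);

	return ret < 0 ? ret : 0;
}
```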
-
Matthew Auld authored
There could be active fences already in the dma-resv for the object prior to clearing. Make sure to input them as dependencies for the clear job. v2 (Matt B): - We can use USAGE_KERNEL here, since it's only the move fences we care about here. Also add a comment. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
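As an illustration (not the literal migrate code), the DRM scheduler already provides a helper that pulls such reservation-object fences in as job dependencies:

```c
#include <drm/gpu_scheduler.h>
#include <linux/dma-resv.h>

/*
 * Make a clear job wait for the kernel (move) fences already attached to
 * the BO's reservation object; USAGE_KERNEL is enough because only move
 * fences matter before a clear.
 */
static int add_clear_deps(struct drm_sched_job *job, struct dma_resv *resv)
{
	return drm_sched_job_add_resv_dependencies(job, resv,
						   DMA_RESV_USAGE_KERNEL);
}
```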
-
Matthew Auld authored
Spec says: "This is a privileged command; it will not be effective (will be converted to a no-op) if executed from within a non-privileged batch buffer." However here it looks like we are just emitting it inside some bb which was jumped to via the ppGTT, which should be considered a non-privileged address space. It looks like we just need some way of preventing things like the emit_pte() and later copy/clear from being preempted in-between, so rather than that, just emit directly in the ring for migration jobs. Bspec: 45716 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Shekhar Chauhan authored
Add Xe_LPM+ support to an existing workaround. BSpec: 51762 Signed-off-by: Shekhar Chauhan <shekhar.chauhan@intel.com> Link: https://lore.kernel.org/r/20231030150756.1011777-1-shekhar.chauhan@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Priyanka Dandamudi authored
Add a conditional check for the access counter granularity. This check returns -EINVAL if the granularity is beyond 64M, which is a hardware limitation. v2: Defined XE_ACC_GRANULARITY_128K 0, XE_ACC_GRANULARITY_2M 1, XE_ACC_GRANULARITY_16M 2, XE_ACC_GRANULARITY_64M 3 as part of the uAPI, so that userspace can also use them. (Oak) v3: Move uAPI to proper location and give proper documentation. (Brian, Oak) Cc: Oak Zeng <oak.zeng@intel.com> Cc: Janga Rahul Kumar <janga.rahul.kumar@intel.com> Cc: Brian Welty <brian.welty@intel.com> Signed-off-by: Priyanka Dandamudi <priyanka.dandamudi@intel.com> Reviewed-by: Oak Zeng <oak.zeng@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
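A minimal sketch of the check described above; the constants follow the uAPI values listed in v2, while the helper itself is hypothetical:

```c
#include <linux/errno.h>
#include <linux/types.h>

/* uAPI granularity encodings from the commit message above. */
#define XE_ACC_GRANULARITY_128K	0
#define XE_ACC_GRANULARITY_2M	1
#define XE_ACC_GRANULARITY_16M	2
#define XE_ACC_GRANULARITY_64M	3

/* Reject granularities beyond the 64M hardware limit. */
static int xe_check_acc_granularity(u32 granularity)
{
	if (granularity > XE_ACC_GRANULARITY_64M)
		return -EINVAL;

	return 0;
}
```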
-
Lucas De Marchi authored
FORCE_SLM_FENCE_SCOPE_TO_TILE and FORCE_UGM_FENCE_SCOPE_TO_TILE are in the upper dword of the LSC_CHICKEN_BIT_0 register. Also, the 14010918519 workaround only applies to early steppings, A*. Eventually those should be dropped, like they were in commit eaeb4b36 ("drm/i915/dg2: Drop pre-production GT workarounds"), so let's make sure they are annotated appropriately. Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Link: https://lore.kernel.org/r/20231024220412.223868-1-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
MTL uses a versionless GSC-enabled binary. v2: don't use the filename to identify the header type (Lucas) v3: fix commit msg (Lucas) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
On newer platforms the HuC survives reset and stays authenticated, so no need to re-authenticate it. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
On MTL-style multi-gt platforms, the HuC is only available on the media GT, so we need to consider it as not supported on the render GT. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Daniele Ceraolo Spurio authored
The GSC-enabled HuC binary starts with a GSC header, which is followed by the legacy-style CSS header and the binary itself. We can parse the GSC headers to find the HuC version and the location of the binary to be used for the DMA transfer. The parsing function has been designed to be re-used for the GSC binary, so the entry names are external parameters (because the GSC uses different ones) and the CSS entry is optional (because the GSC doesn't have it). v2: move new code to uc_fw.c, better comments and error checking, split old code move to separate patch (Lucas), move headers and documentation to uc_fw_abi.h. v3: use 2 separate loops, rework marker check (Lucas) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
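A sketch of the parsing shape described above; the struct layouts and field names are illustrative placeholders, not the real definitions added to uc_fw_abi.h:

```c
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative directory layout: a header followed by fixed-size entries. */
struct example_gsc_header {
	u32 marker;
	u32 num_entries;
};

struct example_gsc_entry {
	char name[12];
	u32 offset;
	u32 size;
};

/*
 * Look up a named entry. The names to search for are passed in by the
 * caller, so the same parser can serve both HuC and GSC binaries.
 */
static const struct example_gsc_entry *
find_gsc_entry(const void *fw, size_t fw_size, const char *name)
{
	const struct example_gsc_header *hdr = fw;
	const struct example_gsc_entry *entries = (const void *)(hdr + 1);
	u32 i;

	if (sizeof(*hdr) + (size_t)hdr->num_entries * sizeof(*entries) > fw_size)
		return NULL;

	for (i = 0; i < hdr->num_entries; i++)
		if (!strncmp(entries[i].name, name, sizeof(entries[i].name)))
			return &entries[i];

	return NULL;
}
```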
-
Daniele Ceraolo Spurio authored
GSC binaries and newer HuC ones use GSC-style headers instead of the CSS. In preparation for adding support for such parsing, split out the current parsing code to its own function, to make it cleaner to add the new paths. The existing doc section has also been renamed to narrow it to CSS-based binaries. v2: new patch in series, split out from next patch for easier reviewing v3: drop unneeded include (Lucas) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Badal Nilawar authored
In the existing code flow for future platforms, i.e. > 1270, the rpX (rp0, rpn and rpe) fused values are read from the gen 6 registers, which is not correct. Unless specified otherwise, the gen 1270 registers should be valid for gen 1270+ platforms as well. Signed-off-by: Badal Nilawar <badal.nilawar@intel.com> Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
The MOCS registers should be written in an MCR-specific manner on Xe_HP and beyond to prevent any other driver threads or external firmware from putting the hardware into unicast mode while we initialize the MOCS table. Bspec: 66534, 67609, 71185 Cc: Ruthuvikas Ravikumar <ruthuvikas.ravikumar@intel.com> Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Link: https://lore.kernel.org/r/20231023204112.2856331-2-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
As with DG2/MTL, Xe2 also fails to emit instruction headers for SVG state instructions if no explicit state has been set. The SVG part of the LRC is nearly identical to DG2/MTL; the only change is that 3DSTATE_DRAWING_RECTANGLE has been replaced by 3DSTATE_DRAWING_RECTANGLE_FAST, so we can just re-use the same state table and handle that single instruction when we encounter it. Bspec: 65182 Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Link: https://lore.kernel.org/r/20231025151732.3461842-8-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
When recording the default LRC, the expectation is that the hardware's original state settings (both register and instruction) will be written out to the LRC upon first context switch. For many 3DSTATE_* state instructions that don't truly have "default" values, this translates to a simple instruction header (opcodes + dword length) being written to the LRC, followed by an appropriate number of blank dwords as a placeholder.

When userspace creates a context (which starts as a copy of the default LRC), they'll generally emit real 3DSTATE_* as part of their initialization to select the settings they desire. If they don't emit one of the 3DSTATE instructions, then the zeroed dwords that remain in their LRC image generally translate to various state remaining disabled. This will either be what userspace wants or will lead to very reproducible and easily-debugged problems (rendering glitches, engine hangs).

It turns out that a subset of the 3DSTATE instructions, specifically those belonging to the SVG (State Variable - Global) unit, are not only emitting 0's for the instruction's "body" dwords, but also for the instruction header dword if no specific state has been explicitly set before context switch. This means that when the hardware switches to a context that hasn't explicitly provided an appropriate state setting, the hardware will just see a sequence of NOOPs in the spot reserved for that 3DSTATE instruction while executing the LRC, and the actual hardware state setting will unintentionally inherit the configuration used by the previously running context. Now when userspace makes a mistake and forgets to emit an important state instruction, they no longer get consistent, easily-reproducible corruption/hangs, but rather erratic behavior where the presence/absence of a problem depends on what other workloads are running on the system and what order the contexts are scheduled on the engine.

A specific example of this that came up recently related to mesh shading. The OpenGL driver was not specifically emitting a 3DSTATE_MESH_CONTROL to disable mesh shading at context init, so on context switch, mesh shading would either be on or off depending on what the previous context had been doing. Vulkan apps _were_ enabling mesh shading, so running a Vulkan app and then context switching to an OpenGL app resulted in mesh shading still unexpectedly being enabled during OpenGL operation, and since other mesh-related state was not properly initialized for that context, a GPU hang was seen. Due to the specific ordering requirements (Vulkan app runs first, followed by OpenGL app), it took additional debug effort to track down the cause of the problem.

There are various workarounds related to this behavior, with current implementations handled in the userspace drivers, e.g. Wa_14019789679 and Wa_22018402687. However it's been suggested that the kernel driver can help simplify things here by emitting zeroed SVG state with proper instruction headers as part of our default context creation (i.e., at the same point we apply LRC workarounds). This will help ensure that any future cases where a userspace driver does not emit an important state setting will result in consistent behavior. Bspec: 46261 Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Link: https://lore.kernel.org/r/20231025151732.3461842-7-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matt Roper authored
On some platforms we need to emit some non-register state while recording an engine class' default LRC. Add the infrastructure to support this; actual per-platform tables will be added in future patches. v2: - Checkpatch whitespace fix - Add extra assertion to ensure num_dw != 0. (Bala) Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Link: https://lore.kernel.org/r/20231025151732.3461842-6-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
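The commit message doesn't spell out the table format; as a purely hypothetical sketch, a per-class entry could simply pair a raw dword stream with its length (with num_dw required to be non-zero, matching the v2 assertion):

```c
#include <linux/kernel.h>
#include <linux/types.h>

/* Hypothetical non-register state table entry emitted into the default LRC. */
struct default_lrc_state {
	const u32 *dwords;
	u32 num_dw;	/* asserted to be non-zero before emission */
};

static const u32 example_svg_state[] = {
	0x00000000,	/* instruction header would go here */
	0x00000000,	/* zeroed body dwords */
};

static const struct default_lrc_state rcs_default_state[] = {
	{ .dwords = example_svg_state, .num_dw = ARRAY_SIZE(example_svg_state) },
};
```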
-
Balasubramani Vivekanandan authored
Make use of EVENT_CLASS to group similar trace events Signed-off-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Reviewed-by: Haridhar Kalvala <haridhar.kalvala@intel.com> Link: https://lore.kernel.org/intel-xe/20231019093140.1901665-3-balasubramani.vivekanandan@intel.com/ Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
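For context, this refers to the standard ftrace pattern where one DECLARE_EVENT_CLASS carries the shared fields and format, and each DEFINE_EVENT merely instantiates it. A minimal, hypothetical example (the event and field names are not the actual xe tracepoints), which would live in a trace header alongside the usual TRACE_SYSTEM boilerplate:

```c
#include <linux/tracepoint.h>

DECLARE_EVENT_CLASS(xe_example_msg,
	TP_PROTO(u32 action, u32 len),
	TP_ARGS(action, len),

	TP_STRUCT__entry(
		__field(u32, action)
		__field(u32, len)
	),

	TP_fast_assign(
		__entry->action = action;
		__entry->len = len;
	),

	TP_printk("action=%u len=%u", __entry->action, __entry->len)
);

/* Each concrete event reuses the class definition above. */
DEFINE_EVENT(xe_example_msg, xe_example_msg_send,
	     TP_PROTO(u32 action, u32 len), TP_ARGS(action, len));

DEFINE_EVENT(xe_example_msg, xe_example_msg_recv,
	     TP_PROTO(u32 action, u32 len), TP_ARGS(action, len));
```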
-
Balasubramani Vivekanandan authored
Event tracing enabled for CTB submissions. Additional minor refactor - removed an unnecessary ct_to_xe() call. v2: Remove an unwanted comment (Hari) Add missing change to commit message Signed-off-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Reviewed-by: Haridhar Kalvala <haridhar.kalvala@intel.com> Link: https://lore.kernel.org/intel-xe/20231019093140.1901665-2-balasubramani.vivekanandan@intel.com/ Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Shekhar Chauhan authored
Add L3SQCREG5 as part of the HW recommended settings. The recommended value in Bspec is 00e0007f. For Xe2-LPG, bits 23:21 don't exist anymore, but it's confirmed with HW engineers that setting them doesn't do anything. They still exist on the media GT, Xe2-LPM, but they are already set as per the HW default value. So for Xe2 platforms, the only bits that need to be set are 9:0, since HW's default is 0x1ff and the recommended value is 0x7f. Unlike most registers, which have the same relative offset on both the primary and media GT, this register has a different base offset on the media GT. On MTL the register only exists on the primary (graphics) GT, so there's no need to program it on the media GT. Also, it's part of the RCS engine's context, so it needs to be added as an LRC workaround. Bspec: 72161 Signed-off-by: Shekhar Chauhan <shekhar.chauhan@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20231024220739.224251-2-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
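A sketch of the bits 9:0 adjustment described above; the mask name and helper are placeholders rather than the actual workaround-table entry:

```c
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define L3SQCREG5_LOW_BITS	GENMASK(9, 0)	/* HW default 0x1ff */

/*
 * Replace only bits 9:0 with the recommended 0x7f, leaving the rest of
 * the register (including the no-op bits 23:21 on Xe2-LPG) untouched.
 */
static u32 l3sqcreg5_value(u32 old)
{
	return (old & ~L3SQCREG5_LOW_BITS) |
	       FIELD_PREP(L3SQCREG5_LOW_BITS, 0x7f);
}
```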
-
Dnyaneshwar Bhadane authored
Add the initial collection of gt/engine/lrc workarounds. While at it, add some newlines around the platform/IP comments to make them consistent across all workarounds. v2: - FF_MODE is an MCR register (Matt Roper) - Group 18032247524 with other Xe2 workarounds (Matt Roper) - Move WA changing PSS_CHICKEN to lrc_was[] as for Xe2 that register is part of the render context image (Matt Roper) - Apply WA 16020518922 only on render engine (Matt Roper) Signed-off-by: Dnyaneshwar Bhadane <dnyaneshwar.bhadane@intel.com> Signed-off-by: Shekhar Chauhan <shekhar.chauhan@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20231024220739.224251-1-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Brian Welty authored
We can remove the unnecessary indirection through xe->tiles[] to get the TTM VRAM manager. This code can be common for VRAM and STOLEN. Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Auld authored
Bit 7 in the leaf node is normally programmed with pat[2]; however, in the PDE/PDPE that same bit also toggles 2M/1G pages. For 2M/1G entries pat[2] is instead moved to bit 12, which is now free given that the address must be aligned to 2M or 1G, leaving bit 7 for toggling 2M/1G pages. Bspec: 59510, 45038 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
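To make the bit shuffling concrete, a small sketch of where pat[2] ends up for each entry type; the macro names are illustrative, not the actual xe PTE definitions:

```c
#include <linux/bits.h>
#include <linux/types.h>

#define PPGTT_PTE_PAT2	BIT_ULL(7)	/* pat[2] in a 4K leaf PTE */
#define PPGTT_PDE_PS	BIT_ULL(7)	/* 2M/1G page select in PDE/PDPE */
#define PPGTT_PDE_PAT2	BIT_ULL(12)	/* pat[2] relocated for 2M/1G */

/*
 * For 4K leaf entries pat[2] lives in bit 7; for 2M/1G entries bit 7
 * selects the huge page and pat[2] moves to bit 12, which the 2M/1G
 * address alignment leaves free.
 */
static u64 set_pat2(u64 entry, bool huge, bool pat2)
{
	if (huge)
		return entry | PPGTT_PDE_PS | (pat2 ? PPGTT_PDE_PAT2 : 0);

	return entry | (pat2 ? PPGTT_PTE_PAT2 : 0);
}
```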
-
Daniele Ceraolo Spurio authored
The PVC GuC version that we're currently using (70.6.4) has a known issue that leads to dropping the disabling of contexts that have pending page faults. This is fixed in newer blobs, so we need to update to a more recent release. Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/696 Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Francois Dugast authored
This is used for the priority of an exec queue (not an engine) and should be named accordingly. Signed-off-by: Francois Dugast <francois.dugast@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-
Rodrigo Vivi authored
During the uAPI review, a possible confusion was identified between the plural of the acronym and a new acronym. So the recommendation is to go with gt_list instead. Suggested-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
-
Rodrigo Vivi authored
We already have many bits reserved at the end. Let's kill the unused ones. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-
Rodrigo Vivi authored
Let's have a single GT ID per GT within the PCI Device Card. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-
Rodrigo Vivi authored
Split drm_xe_query_gt out of the GT list query in order to better document it. No functional change at this point. Any actual change to the uapi should come in follow-up additions. v2: s/maks/mask Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-
Matthew Brost authored
A case existed where an out-sync of a later VM bind operation could signal before a previous one if the later operation results in a NOP (e.g. an unbind or prefetch to a VA range without any mappings). This breaks the ordering rules; fix this. This patch also lays the groundwork for users to pass in num_binds == 0 and out-syncs. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
The async worker is gone. All jobs and memory allocations are done in the IOCTL to align with dma-fencing rules. Async vs. sync now refers to when bind operations complete relative to the IOCTL: async completes when the out-syncs signal, while sync completes when the IOCTL returns. In-syncs and out-syncs are only allowed in async mode. If memory allocations fail in the job creation step, the VM is killed. This is temporary; eventually a proper unwind will be done and the VM will remain usable. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
This is not used, nor does it align with the VM async document; kill it. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Rodrigo Vivi authored
This extension is currently not used and it is not aligned with the error handling on async VM_BIND. Let's remove it, and along with that, since it was the only extension for vm_create, remove VM extension support entirely. v2: rebase on top of the removal of drm_xe_ext_exec_queue_set_property Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
-
Ashutosh Dixit authored
There really is no difference between 'struct drm_xe_ext_vm_set_property' and 'struct drm_xe_ext_exec_queue_set_property', they are extensions which specify a <property, value> pair. Replace the two extensions with a single common 'struct drm_xe_ext_set_property' extension. The rationale is that rather than have each XE module (including future modules) invent their own property/value extensions, all XE modules use a common set_property extension when possible. Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
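The resulting common extension is roughly shaped as below; treat this as a sketch, since the authoritative layout is whatever landed in the xe uAPI header:

```c
#include <linux/types.h>

/* Chained user-extension header, repeated here for a self-contained sketch. */
struct drm_xe_user_extension {
	__u64 next_extension;
	__u32 name;
	__u32 pad;
};

/* Generic <property, value> pair usable by any xe module. */
struct drm_xe_ext_set_property {
	struct drm_xe_user_extension base;
	__u32 property;
	__u32 pad;
	__u64 value;
	__u64 reserved[2];
};
```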
-
Matthew Brost authored
The functionality of XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE was deprecated in a previous patch; drop it from the uAPI. The property is now simply inherited from the VM. v2: - Update commit message (Niranjana) Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
Much better name and aligns with xe_vm_add_compute_exec_queue. As part of the rename, move the implementation from xe_exec_queue.c to xe_vm.c. Suggested-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Matthew Brost authored
We are going to remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE from the uAPI; deprecate the implementation first by making XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE a NOP. After the removal of XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE the property is simply inherited from the VM. v2: - Update commit message with explanation of removal (Niranjana) Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-