Commit e7518276 authored by Matthew Brost, committed by Lucas De Marchi

drm/xe: Use bookkeep slots for external BOs in exec IOCTL

Fix external BOs' dma-resv usage in the exec IOCTL by using bookkeep
slots rather than write slots. This leaves syncing to user space rather
than having the KMD blindly enforce write semantics on every external BO.
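
For background, dma-resv fence queries are ordered by usage class
(KERNEL < WRITE < READ < BOOKKEEP) and return all fences at or below
the requested class, so a fence added with DMA_RESV_USAGE_BOOKKEEP is
never picked up by an implicit-sync query. A minimal sketch of such a
query against the in-tree dma-resv API (collect_implicit_sync_fences()
is a hypothetical helper for illustration, not part of this patch):

    /* Sketch: how an importer collects implicit-sync fences. Queries
     * return fences with usage <= the requested class, so BOOKKEEP
     * fences (like the exec job fence after this patch) are skipped.
     */
    #include <linux/dma-resv.h>

    static int collect_implicit_sync_fences(struct dma_resv *resv,
                                            bool will_write,
                                            unsigned int *num_fences,
                                            struct dma_fence ***fences)
    {
            /* dma_resv_usage_rw(): a writer waits on READ (all prior
             * readers and writers), a reader waits on WRITE only. */
            return dma_resv_get_fences(resv, dma_resv_usage_rw(will_write),
                                       num_fences, fences);
    }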

Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Kenneth Graunke <kenneth.w.graunke@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reported-by: Simona Vetter <simona.vetter@ffwll.ch>
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2673
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20240911152622.903058-1-matthew.brost@intel.com
(cherry picked from commit b8b1163248759ba18509f7443a2d19b15b4c1df8)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
parent 477d665e
@@ -41,11 +41,6 @@
  * user knows an exec writes to a BO and reads from the BO in the next exec, it
  * is the user's responsibility to pass in / out fence between the two execs).
  *
- * Implicit dependencies for external BOs are handled by using the dma-buf
- * implicit dependency uAPI (TODO: add link). To make this works each exec must
- * install the job's fence into the DMA_RESV_USAGE_WRITE slot of every external
- * BO mapped in the VM.
- *
  * We do not allow a user to trigger a bind at exec time rather we have a VM
  * bind IOCTL which uses the same in / out fence interface as exec. In that
  * sense, a VM bind is basically the same operation as an exec from the user
@@ -59,8 +54,8 @@
  * behind any pending kernel operations on any external BOs in VM or any BOs
  * private to the VM. This is accomplished by the rebinds waiting on BOs
  * DMA_RESV_USAGE_KERNEL slot (kernel ops) and kernel ops waiting on all BOs
- * slots (inflight execs are in the DMA_RESV_USAGE_BOOKING for private BOs and
- * in DMA_RESV_USAGE_WRITE for external BOs).
+ * slots (inflight execs are in the DMA_RESV_USAGE_BOOKKEEP for private BOs and
+ * for external BOs).
  *
  * Rebinds / dma-resv usage applies to non-compute mode VMs only as for compute
  * mode VMs we use preempt fences and a rebind worker (TODO: add link).
@@ -304,7 +299,8 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	xe_sched_job_arm(job);
 	if (!xe_vm_in_lr_mode(vm))
 		drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, &job->drm.s_fence->finished,
-					 DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE);
+					 DMA_RESV_USAGE_BOOKKEEP,
+					 DMA_RESV_USAGE_BOOKKEEP);
 
 	for (i = 0; i < num_syncs; i++) {
 		xe_sync_entry_signal(&syncs[i], &job->drm.s_fence->finished);
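
With exec no longer installing its job fence in the WRITE slot, user
space that still needs implicit synchronization against other devices
or processes can bridge it explicitly through the dma-buf sync_file
ioctls from <linux/dma-buf.h>. A rough userspace sketch under that
assumption (helper names are hypothetical, error handling trimmed):

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>

    /* Pull the BO's current implicit fences out as a sync_file fd,
     * usable as an explicit in-fence for a subsequent exec. */
    static int export_implicit_fences(int dmabuf_fd)
    {
            struct dma_buf_export_sync_file args = {
                    .flags = DMA_BUF_SYNC_RW, /* readers and writers */
                    .fd = -1,
            };

            if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args))
                    return -1;
            return args.fd;
    }

    /* Install an explicit out-fence (e.g. exported from an exec
     * syncobj) into the BO's WRITE slot so implicit-sync consumers
     * still see the write. */
    static int import_explicit_fence(int dmabuf_fd, int sync_file_fd)
    {
            struct dma_buf_import_sync_file args = {
                    .flags = DMA_BUF_SYNC_WRITE,
                    .fd = sync_file_fd,
            };

            return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &args);
    }

This is in line with the commit's intent: synchronization policy moves
to user space instead of the KMD enforcing write semantics on every
external BO.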