Commit 730b7248 authored by Matthew Auld, committed by Rodrigo Vivi

drm/xe: prevent UAF around preempt fence

The fence lock is part of the queue, so in the current design anything
locking the fence must also hold a reference to the queue to prevent
the queue from being freed.

However, we currently signal the fence and then drop the queue
reference. If something is waiting on the fence, the waiter is kicked
to wake up at some later point, and on waking it first grabs the fence
lock before checking the fence state. If the queue reference has
already been dropped by then, the lock may have been freed along with
the queue, leading to a use-after-free (UAF).

To prevent this, move the fence lock into the fence itself so that it
shares the fence's lifetime. Alternatives would be a device-level lock,
or releasing the queue only in the fence release callback, though the
latter might require pushing the release to another worker to avoid
locking issues.

Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2454
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2342
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2020
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <stable@vger.kernel.org> # v6.8+
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240814110129.825847-2-matthew.auld@intel.com
(cherry picked from commit 7116c35a)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
parent 15939ca7
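
Before the diff, a minimal, hypothetical sketch of the pattern the fix
adopts may help: embed the spinlock handed to dma_fence_init() in the
fence itself, so the lock lives exactly as long as the fence and cannot
be freed out from under a late waiter. All names here (example_fence,
example_fence_ops, example_fence_create) are illustrative and not part
of the patch.

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical fence which embeds its own dma-fence lock. */
struct example_fence {
	struct dma_fence base;
	spinlock_t lock;	/* same lifetime as the fence itself */
};

static const char *example_name(struct dma_fence *fence)
{
	return "example";
}

static const struct dma_fence_ops example_fence_ops = {
	.get_driver_name = example_name,
	.get_timeline_name = example_name,
};

static struct dma_fence *example_fence_create(u64 context, u64 seqno)
{
	struct example_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;

	/*
	 * The lock lives in the fence, not in some other object (such
	 * as an exec queue) whose last reference may be dropped while
	 * a waiter is still about to take the lock.
	 */
	spin_lock_init(&f->lock);
	dma_fence_init(&f->base, &example_fence_ops, &f->lock,
		       context, seqno);
	return &f->base;
}

The diff below applies the same idea to the preempt fence: the lock
moves out of the exec queue's lr struct and into struct xe_preempt_fence.
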
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -643,7 +643,6 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
 	if (xe_vm_in_preempt_fence_mode(vm)) {
 		q->lr.context = dma_fence_context_alloc(1);
-		spin_lock_init(&q->lr.lock);
 		err = xe_vm_add_compute_exec_queue(vm, q);
 		if (XE_IOCTL_DBG(xe, err))

--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -126,8 +126,6 @@ struct xe_exec_queue {
 		u32 seqno;
 		/** @lr.link: link into VM's list of exec queues */
 		struct list_head link;
-		/** @lr.lock: preemption fences lock */
-		spinlock_t lock;
 	} lr;
 	/** @ops: submission backend exec queue operations */

--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
+++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
@@ -128,8 +128,9 @@ xe_preempt_fence_arm(struct xe_preempt_fence *pfence, struct xe_exec_queue *q,
 {
 	list_del_init(&pfence->link);
 	pfence->q = xe_exec_queue_get(q);
+	spin_lock_init(&pfence->lock);
 	dma_fence_init(&pfence->base, &preempt_fence_ops,
-		       &q->lr.lock, context, seqno);
+		       &pfence->lock, context, seqno);
 	return &pfence->base;
 }

--- a/drivers/gpu/drm/xe/xe_preempt_fence_types.h
+++ b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
@@ -25,6 +25,8 @@ struct xe_preempt_fence {
 	struct xe_exec_queue *q;
 	/** @preempt_work: work struct which issues preemption */
 	struct work_struct preempt_work;
+	/** @lock: dma-fence fence lock */
+	spinlock_t lock;
 	/** @error: preempt fence is in error state */
 	int error;
 };