Commit c5baa566 authored by Tomas Elf, committed by Daniel Vetter

drm/i915: Update to post-reset execlist queue clean-up

When clearing an execlist queue, instead of traversing it and unreferencing all
requests while holding the spinlock (which might lead to a thread sleeping with
IRQs turned off - bad news!), just move all requests to the retire request
list while holding the spinlock, then drop the spinlock and invoke the execlists
request retirement path, which already deals with the intricacies of
purging/dereferencing execlist queue requests.
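
The following is a minimal userspace sketch of that pattern, not the kernel
code itself: the queue, request type and function names below are hypothetical
stand-ins (a pthread mutex plays the role of the execlist spinlock). Only the
list splice happens under the lock; the clean-up that may sleep runs after the
lock is dropped.

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct request {
		int id;
		struct request *next;
	};

	static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
	static struct request *queue_head;	/* pending queue, protected by queue_lock */

	static void queue_add(int id)
	{
		struct request *req = malloc(sizeof(*req));

		req->id = id;
		pthread_mutex_lock(&queue_lock);
		req->next = queue_head;
		queue_head = req;
		pthread_mutex_unlock(&queue_lock);
	}

	static void reset_cleanup(void)
	{
		struct request *retire_list;

		/* Under the lock: only detach the whole queue onto a private list. */
		pthread_mutex_lock(&queue_lock);
		retire_list = queue_head;
		queue_head = NULL;
		pthread_mutex_unlock(&queue_lock);

		/* Outside the lock: retirement work that might sleep is now safe. */
		while (retire_list) {
			struct request *req = retire_list;

			retire_list = req->next;
			printf("retiring request %d\n", req->id);
			free(req);
		}
	}

	int main(void)
	{
		queue_add(1);
		queue_add(2);
		queue_add(3);
		reset_cleanup();
		return 0;
	}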

This patch can be considered v3 of:

	commit b96db8b81c54ef30485ddb5992d63305d86ea8d3
	Author: Tomas Elf <tomas.elf@intel.com>
	drm/i915: Grab execlist spinlock to avoid post-reset concurrency issues

This patch assumes v2 of the above patch is part of the baseline, reverts v2
and adds changes on top to turn it into v3.
Signed-off-by: Tomas Elf <tomas.elf@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1445619757-19822-1-git-send-email-tomas.elf@intel.com
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Dave Gordon <dave.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent 56c48978
@@ -2757,20 +2757,13 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
 
 	if (i915.enable_execlists) {
 		spin_lock_irq(&ring->execlist_lock);
-		while (!list_empty(&ring->execlist_queue)) {
-			struct drm_i915_gem_request *submit_req;
 
-			submit_req = list_first_entry(&ring->execlist_queue,
-					struct drm_i915_gem_request,
-					execlist_link);
-			list_del(&submit_req->execlist_link);
+		/* list_splice_tail_init checks for empty lists */
+		list_splice_tail_init(&ring->execlist_queue,
+				      &ring->execlist_retired_req_list);
 
-			if (submit_req->ctx != ring->default_context)
-				intel_lr_context_unpin(submit_req);
-
-			i915_gem_request_unreference(submit_req);
-		}
 		spin_unlock_irq(&ring->execlist_lock);
+		intel_execlists_retire_requests(ring);
 	}
 
 	/*