Commit 4bb1bedb authored by Chris Wilson, committed by Daniel Vetter

drm/i915: Use the global runtime-pm wakelock for a busy GPU for execlists

When we submit a request to the GPU, we first take the rpm wakelock, and
only release it once the GPU has been idle for a small period of time
after all requests have completed. This means that we are sure no
new interrupt can arrive whilst we do not hold the rpm wakelock and so
can drop the individual get/put around every single request inside
execlists.
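
For illustration, here is a minimal userspace sketch of that lifecycle.
This is not the i915 code; all names (rpm_get, submit_request, the
counters) are assumptions made up for the sketch. The first submitted
request takes the global wakelock, later requests reuse it, and the
lock is dropped only once the last request has retired:

#include <stdio.h>

/* Illustrative state only -- not the i915 structures. */
static int wakelock_held;       /* global runtime-pm wakelock refcount */
static int requests_in_flight;  /* outstanding GPU requests            */

static void rpm_get(void) { wakelock_held++; printf("rpm: get -> %d\n", wakelock_held); }
static void rpm_put(void) { wakelock_held--; printf("rpm: put -> %d\n", wakelock_held); }

/* Submitting a request: only the idle->busy transition takes the global
 * wakelock; individual requests no longer do their own get/put. */
static void submit_request(void)
{
	if (requests_in_flight++ == 0)
		rpm_get();
}

/* Retiring a request: the wakelock is released only once every request
 * has completed (i915 additionally waits a small idle grace period). */
static void retire_request(void)
{
	if (--requests_in_flight == 0)
		rpm_put();
}

int main(void)
{
	submit_request(); /* first request: takes the wakelock     */
	submit_request(); /* second request: wakelock already held */
	retire_request();
	retire_request(); /* last request retired: wakelock dropped */
	return 0;
}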

Note: to close one potential issue we should mark the GPU as busy
earlier in __i915_add_request.

To elaborate: The issue is that we emit the irq signalling sequence
before we grab the rpm reference, which means we could miss the
resulting interrupt (since that's not set up when suspended). The only
bad side effect is a missed interrupt; gt mmio writes automatically
wake up the hw itself. But otoh we have an umbrella rpm reference for
the entirety of execbuf, and as long as that's there we're covered.
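
To make that ordering concrete, a hedged sketch of the execbuf path
(again with illustrative stub names, not the driver's actual
functions): the irq-signalling seqno write is emitted before the
busy-tracking rpm reference is taken, but the umbrella reference held
across the whole call keeps the device awake through that window.

#include <stdio.h>

/* Illustrative stubs only -- these are not i915 functions. */
static void rpm_get(void)        { puts("rpm: get"); }
static void rpm_put(void)        { puts("rpm: put"); }
static void emit_irq_seqno(void) { puts("emit irq-signalling seqno write"); }
static void mark_gpu_busy(void)  { puts("mark GPU busy (busy-tracking rpm ref)"); }

/* Sketch of the execbuf path: the umbrella rpm reference brackets the
 * whole submission, so the gap between emitting the irq-signalling
 * commands and marking the GPU busy is still covered. */
static void execbuf(void)
{
	rpm_get();        /* umbrella reference for the entirety of execbuf  */
	emit_irq_seqno(); /* emitted before the busy-tracking ref is taken...*/
	mark_gpu_busy();  /* ...but the device stayed awake: no irq is lost  */
	rpm_put();        /* dropped only after the busy-tracking ref exists */
}

int main(void)
{
	execbuf();
	return 0;
}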
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Explain a bit more about the add_request issue, which after
some irc chatting with Chris turns out to not be an issue really.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent b5eba372
@@ -2605,7 +2605,6 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
 					 struct drm_i915_gem_request,
 					 execlist_link);
 		list_del(&submit_req->execlist_link);
-		intel_runtime_pm_put(dev_priv);
 		if (submit_req->ctx != ring->default_context)
 			intel_lr_context_unpin(ring, submit_req->ctx);
...
@@ -546,8 +546,6 @@ static int execlists_context_queue(struct intel_engine_cs *ring,
 	}
 	request->tail = tail;
-	intel_runtime_pm_get(dev_priv);
 	spin_lock_irq(&ring->execlist_lock);
 	list_for_each_entry(cursor, &ring->execlist_queue, execlist_link)
...
@@ -977,7 +975,6 @@ void intel_execlists_retire_requests(struct intel_engine_cs *ring)
 		if (ctx_obj && (ctx != ring->default_context))
 			intel_lr_context_unpin(ring, ctx);
-		intel_runtime_pm_put(dev_priv);
 		list_del(&req->execlist_link);
 		i915_gem_request_unreference(req);
 	}
...