- 21 Jun, 2019 16 commits
-
-
Tvrtko Ursulin authored
More removal of implicit dev_priv from using old mmio accessors. Actually the top-level function remains, but is split into a part which writes to i915 and a part which operates on intel_gt in order to initialize the hardware. GuC and engines are the only odd ones out remaining. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-15-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
Replace some gen6/7 open coded rmw with intel_uncore_rmw. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-14-tvrtko.ursulin@linux.intel.com
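As an aside for readers unfamiliar with the helper, the shape of such a conversion is roughly the sketch below. The register and bit names are placeholders rather than actual call sites, and the sketch assumes intel_uncore_rmw() reads the register, clears the requested bits, sets the requested bits and writes the result back.

```c
/* Illustrative only: GEN6_EXAMPLE_REG and the bits are hypothetical. */
static void example_rmw_open_coded(struct drm_i915_private *dev_priv)
{
	u32 val;

	val = I915_READ(GEN6_EXAMPLE_REG);
	val &= ~EXAMPLE_CLEAR_BIT;
	val |= EXAMPLE_SET_BIT;
	I915_WRITE(GEN6_EXAMPLE_REG, val);
}

static void example_rmw_helper(struct intel_uncore *uncore)
{
	/* The same read-modify-write, with the uncore passed explicitly. */
	intel_uncore_rmw(uncore, GEN6_EXAMPLE_REG,
			 EXAMPLE_CLEAR_BIT, EXAMPLE_SET_BIT);
}
```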
-
Tvrtko Ursulin authored
More removal of implicit dev_priv from using old mmio accessors. v2: * Rebase for uncore_to_i915 removal. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-13-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
More removal of implicit dev_priv from using old mmio accessors. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-12-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
It will come in useful in the next patch. v2: * Do mock_engine as well. v3: * And the virtual engine... Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-11-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
More conversion of i915_gem_init_hw to uncore. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-10-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
More removal of implicit dev_priv from using old mmio accessors. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-9-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
Two easy opportunities to compact the code by using the existing helper. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Suggested-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-8-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
Start using the newly introduced struct intel_gt to fuse together correct logical init flow with uncore for more removal of implicit dev_priv in mmio access. v2: * Move code to i915_gem_fence_reg. (Chris) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-7-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
Continuing the conversion and elimination of implicit dev_priv. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Suggested-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-6-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
And also rename to intel_gt_pm_init_early and make it operate on gt. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-5-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
We need an easy way to get back to i915 and uncore. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-4-tvrtko.ursulin@linux.intel.com
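A minimal sketch of what such back-pointers might look like; everything beyond the i915 and uncore fields is omitted, and the accessor shown is purely illustrative rather than part of the actual series.

```c
/* Sketch only: the real struct intel_gt carries many more fields. */
struct drm_i915_private;
struct intel_uncore;

struct intel_gt {
	struct drm_i915_private *i915;	/* back-pointer to the device */
	struct intel_uncore *uncore;	/* MMIO access for this GT */
	/* ... timelines, wakerefs, ... */
};

/* With the back-pointers in place, gt users no longer need dev_priv. */
static inline struct intel_uncore *example_gt_uncore(struct intel_gt *gt)
{
	return gt->uncore;
}
```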
-
Tvrtko Ursulin authored
As it will grow in a following patch, make a new home for it. v2: * Convert mock_gem_device as well. (Chris) v3: * Rename to intel_gt_init_early and move call site to i915_drv.c. (Chris) v4: * Adjust SPDX tags. * No need for the gt/ path when including intel_gt_types.h. (Chris) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-3-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
We have long been slightly annoyed by the anonymous i915->gt. Promote it to a separate structure and give it its own header. This is a first step towards cleaning up the separation between i915 and gt. v2: * Adjust SPDX header. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-2-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
More removal of implicit dev_priv from using old mmio accessors. Furthermore, these calls really operate on the ggtt, so it logically makes sense for them to take it as a parameter. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190621070811.7006-1-tvrtko.ursulin@linux.intel.com
-
Chris Wilson authored
The call to kick_siblings() dereferences the rq->context, so we should not drop our local reference until afterwards! v2: Stick to setting ce.inflight=NULL before kicking as this is what the other threads will check to see if the context is ready for takeover. Fixes: 22b7a426 ("drm/i915/execlists: Preempt-to-busy") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190621080729.2652-1-chris@chris-wilson.co.uk
-
- 20 Jun, 2019 13 commits
-
-
Chris Wilson authored
Enable RCU protection of i915_address_space and its ppgtt superclasses, and defer its cleanup into a worker executed after an RCU grace period. In the future we will be able to use the RCU protection to reduce the locking around VM lookups, but the immediate benefit is being able to defer the release into a kworker (process context). This is required as we may need to sleep to reap the WC pages stashed away inside the ppgtt. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110934 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620183705.31006-1-chris@chris-wilson.co.uk
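As a rough illustration of the pattern described above (not the i915 code itself, whose naming and infrastructure differ), an object's final release can be deferred past an RCU grace period and then into process context like this:

```c
#include <linux/rcupdate.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

/* Hypothetical stand-in for i915_address_space, with the fields trimmed. */
struct example_vm {
	struct rcu_head rcu;
	struct work_struct release_work;
	/* ... page tables, stashed WC pages, ... */
};

static void example_vm_release_work(struct work_struct *wrk)
{
	struct example_vm *vm =
		container_of(wrk, struct example_vm, release_work);

	/* Process context: safe to sleep, e.g. to reap the WC page stash. */
	kfree(vm);
}

static void example_vm_release_rcu(struct rcu_head *rcu)
{
	struct example_vm *vm = container_of(rcu, struct example_vm, rcu);

	/* RCU callbacks run in softirq context, so punt to a worker. */
	INIT_WORK(&vm->release_work, example_vm_release_work);
	schedule_work(&vm->release_work);
}

static void example_vm_release(struct example_vm *vm)
{
	/* Wait out a grace period so concurrent RCU lookups can drain. */
	call_rcu(&vm->rcu, example_vm_release_rcu);
}
```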
-
José Roberto de Souza authored
The other additional step in the DSI sequence for EHL. v2: - Using REG_BIT() (Matt) - Fixed commit message typo (Vandita) BSpec: 20597 Cc: Uma Shankar <uma.shankar@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190619233134.20009-2-jose.souza@intel.com
-
Vandita Kulkarni authored
EHL has 2 additional steps in the DSI sequence; this is one of them: the lane latency optimization for DW1. BSpec: 20597 Cc: Uma Shankar <uma.shankar@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190619233134.20009-1-jose.souza@intel.com
-
Chris Wilson authored
Since commit 79ffac85 ("drm/i915: Invert the GEM wakeref hierarchy"), request creation itself has taken responsibility for managing the engine/GT wakerefs, and so we can remove the redundant grabs in our selftests. References: 79ffac85 ("drm/i915: Invert the GEM wakeref hierarchy") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620102432.31580-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Our intel_rings are always flushed as they are continually used to submit commands to the GPU, and so do not need to be flushed on unpinning. This avoids pulling in the flush_ggtt_writes locking into our context unpin, which we want to allow from atomic context (for simplicity). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190619203504.4220-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
If we have multiple contexts of equal priority pending execution, activate a timer to demote the currently executing context in favour of the next in the queue when that timeslice expires. This enforces fairness between contexts (so long as they allow preemption -- forced preemption, in the future, will kick those who do not obey) and allows us to avoid userspace blocking forward progress with e.g. unbounded MI_SEMAPHORE_WAIT. As a starting point, we use one jiffy as our timeslice so that we remain reasonably efficient wrt frequent CPU wakeups. Testcase: igt/gem_exec_scheduler/semaphore-resolve Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-2-chris@chris-wilson.co.uk
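A minimal sketch of the timer mechanics, using hypothetical names for the per-engine state; in the real series the timer hangs off the execlists submission backend and the demotion happens in its tasklet.

```c
#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/types.h>

/* Hypothetical per-engine state standing in for the execlists struct. */
struct example_engine {
	struct timer_list timeslice_timer;
	bool yield_requested;	/* polled by the submission backend */
};

static void example_timeslice_expired(struct timer_list *t)
{
	struct example_engine *engine =
		from_timer(engine, t, timeslice_timer);

	/* Ask the backend to demote the running context at the next event. */
	WRITE_ONCE(engine->yield_requested, true);
}

static void example_engine_init(struct example_engine *engine)
{
	timer_setup(&engine->timeslice_timer, example_timeslice_expired, 0);
}

static void example_start_timeslice(struct example_engine *engine)
{
	/* A one-jiffy slice keeps the timer cheap wrt CPU wakeups. */
	mod_timer(&engine->timeslice_timer, jiffies + 1);
}
```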
-
Chris Wilson authored
When using a global seqno, we required a precise stop-the-world event to handle preemption and unwind the global seqno counter. To accomplish this, we would preempt to a special out-of-band context and wait for the machine to report that it was idle. Given an idle machine, we could very precisely see which requests had completed and which we needed to feed back into the run queue. However, now that we have scrapped the global seqno, we no longer need to precisely unwind the global counter and only track requests by their per-context seqno. This allows us to loosely unwind inflight requests while scheduling a preemption, with the enormous caveat that the requests we put back on the run queue are still _inflight_ (until the preemption request is complete). This makes request tracking much messier, as at any point we can then see a completed request that we believe is not currently scheduled for execution. We also have to be careful not to rewind RING_TAIL past RING_HEAD on preempting to the running context, and for this we use a semaphore to prevent completion of the request before continuing. To accomplish this feat, we change how we track requests scheduled to the HW. Instead of appending our requests onto a single list as we submit, we track each submission to ELSP as its own block. Then upon receiving the CS preemption event, we promote the pending block to the inflight block (discarding what was previously being tracked). As normal CS completion events arrive, we then remove stale entries from the inflight tracker. v2: Be a tinge paranoid and ensure we flush the write into the HWS page for the GPU semaphore to pick up in a timely fashion. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-1-chris@chris-wilson.co.uk
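The block tracking described above can be pictured with the hypothetical arrays below; the real execlists state differs in detail but follows the same pending/inflight promotion on the CS event.

```c
#include <linux/string.h>

#define EXAMPLE_MAX_PORTS 2

struct example_request;	/* stands in for struct i915_request */

struct example_execlists {
	/* Requests currently executing on the HW (NULL-terminated). */
	struct example_request *inflight[EXAMPLE_MAX_PORTS + 1];
	/* The block last written to ELSP, awaiting the CS ack. */
	struct example_request *pending[EXAMPLE_MAX_PORTS + 1];
};

/* On the CS promotion/preemption event, pending takes over inflight. */
static void example_promote_pending(struct example_execlists *el)
{
	memcpy(el->inflight, el->pending, sizeof(el->pending));
	memset(el->pending, 0, sizeof(el->pending));
}

/* On a CS completion event, drop the head of the inflight block. */
static void example_retire_head(struct example_execlists *el)
{
	memmove(el->inflight, el->inflight + 1,
		sizeof(el->inflight) - sizeof(*el->inflight));
	el->inflight[EXAMPLE_MAX_PORTS] = NULL;
}
```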
-
Daniele Ceraolo Spurio authored
With multiple uncores to initialize (GT vs Display), it makes little sense to have the vgpu_check inside uncore_init(). We also have a catch-22 scenario where the uncore is required to read the vgpu capabilities while the vgpu capabilities are required to decide if we need to initialize forcewake support. To remove this circular dependency, we can perform the required MMIO access by mapping just the vgtif shared page of the MMIO space and using raw accessors. v2: rename check_vgpu to detect_vgpu (Chris) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-7-daniele.ceraolospurio@intel.com
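A hedged sketch of the mapping trick; the offset, size and magic value shown here are placeholders (the real ones live in the i915 pvinfo definitions), and BAR 0 is assumed to be the MMIO BAR.

```c
#include <linux/pci.h>
#include <linux/io.h>

#define EXAMPLE_PVINFO_OFFSET	0x78000	/* illustrative offset */
#define EXAMPLE_PVINFO_SIZE	0x1000

static bool example_detect_vgpu(struct pci_dev *pdev, u32 expected_magic)
{
	void __iomem *shared_page;
	u32 magic;

	/* Map only the vgtif shared page instead of the whole MMIO BAR. */
	shared_page = pci_iomap_range(pdev, 0, EXAMPLE_PVINFO_OFFSET,
				      EXAMPLE_PVINFO_SIZE);
	if (!shared_page)
		return false;

	/* Raw access: no uncore or forcewake machinery is needed yet. */
	magic = readl(shared_page);
	pci_iounmap(pdev, shared_page);

	return magic == expected_magic;
}
```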
-
Daniele Ceraolo Spurio authored
We'd like to introduce a display uncore with no forcewake domains, so let's avoid wasting memory and be ready to allocate only what we need. Even without multiple uncores, we still don't need all the domains on all gens. v2: avoid hidden control flow, improve checks (Tvrtko), fix IVB special case, add failure injection point Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-6-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
We always call some of the setup/cleanup functions for forcewake, even if the feature is not actually available. Skipping these operations if forcewake is not available saves us some operations on older gens and prepares us for having a forcewake-less display uncore. v2: do not make suspend/resume functions forcewake-specific (Chris, Tvrtko), use GEM_BUG_ON in internal forcewake-only functions (Tvrtko) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-5-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
Let's get rid of it before it proliferates, since with split GT/Display uncores the container_of won't work anymore. I've kept the rpm pointer as well to minimize the pointer chasing in the MMIO accessors. v2: swap parameter order for intel_uncore_init_early (Tvrtko) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-4-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
uncore_sanitize performs no action on the uncore structure and just calls intel_sanitize_gt_powersave, so we can just call the latter directly. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-3-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
Instead of going through the if-else chain every time, let's save the function in the uncore structure. Note that the new functions are purposely not used from the reg read/write functions to keep the inlining there. While at it, use the new macro to call the old ones to clean the code a bit. v2: Rename macros for no-forcewake function assignment (Tvrtko) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-2-daniele.ceraolospurio@intel.com
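The idea can be sketched as below with placeholder names; the real uncore vfuncs cover several access widths and forcewake-aware variants, which are omitted here.

```c
#include <linux/io.h>
#include <linux/types.h>

struct example_uncore;

struct example_uncore_funcs {
	u32 (*read32)(struct example_uncore *uncore, u32 offset);
	void (*write32)(struct example_uncore *uncore, u32 offset, u32 val);
};

struct example_uncore {
	void __iomem *regs;
	struct example_uncore_funcs funcs;
};

static u32 example_raw_read32(struct example_uncore *uncore, u32 offset)
{
	return readl(uncore->regs + offset);
}

static void example_raw_write32(struct example_uncore *uncore,
				u32 offset, u32 val)
{
	writel(val, uncore->regs + offset);
}

/* Decide once at init time which accessors this device needs. */
static void example_uncore_init_funcs(struct example_uncore *uncore,
				      bool has_forcewake)
{
	if (!has_forcewake) {
		uncore->funcs.read32 = example_raw_read32;
		uncore->funcs.write32 = example_raw_write32;
	}
	/* else: install the forcewake-aware accessors instead. */
}
```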
-
- 19 Jun, 2019 8 commits
-
-
Chris Wilson authored
Remember to keep the rings pinned as well as the context image until the GPU is no longer active. v2: Introduce a ring->pin_count primarily to hide the mock_ring that doesn't fit into the normal GGTT vma picture. v3: Order is important in teardown: ringbuffer submission needs to drop the pin count on the engine->kernel_context before it can gleefully free its ring. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110946 Fixes: ce476c80 ("drm/i915: Keep contexts pinned until after the next kernel context switch") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190619170135.15281-1-chris@chris-wilson.co.uk
-
Matt Roper authored
EHL has a mux on combo PHY A that allows it to be driven either by an internal display (DDI-A or DSI DPHY) or by an external display (DDI-D). This is a motherboard design decision that cannot be changed on the fly. Unfortunately there are no strap registers that allow us to detect the board configuration directly, so let's use the VBT to try to figure it out and program the mux accordingly. For now, if we run across a broken VBT that tries to claim that PHY A is attached to both internal and external displays at the same time, we'll resolve the conflict in favor of the internal display. To help debug this kind of bad VBT, let's also add a quick DRM_DEBUG message during child device parsing so that it's easier to understand these cases if they show up in bug reports. v2: - Confirmed that VBT's dvo port refers to the DDI and not the PHY. Thus we can check more explicitly for (ddi_d && !(ddi_a || dsi)). If a bad VBT contradicts itself, let internal display win. (Ville) v3: - Switch condition from !IS_ICELAKE to IS_ELKHARTLAKE. Although the convention is usually to assume that future platforms will inherit all current platform behavior, this feels more like a one-platform quirk. (Ville) - Update commit message to describe what we do if/when we encounter broken VBTs, and note that the new debug print during child device parsing is intentional. Cc: Clint Taylor <Clinton.A.Taylor@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618175131.9139-1-matthew.d.roper@intel.com
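The decision itself boils down to the check quoted in v2; the sketch below restates it with placeholder parameters (the real code derives these flags from the parsed VBT child devices).

```c
/*
 * Program combo PHY A for the external display (DDI-D) only if the VBT
 * claims DDI-D and no internal user (DDI-A or DSI). A self-contradicting
 * VBT therefore resolves in favour of the internal display.
 */
static bool example_phy_a_drives_external(bool has_ddi_d,
					  bool has_ddi_a, bool has_dsi)
{
	return has_ddi_d && !(has_ddi_a || has_dsi);
}
```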
-
Chris Wilson authored
In the unlikely case the request completes while we regard it as not even executing on the GPU (see the next patch!), we have to flush any pending execution callbacks at retirement and ensure that we do not add any more. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
With the upcoming change to automanaged i915_active, the intent is that whenever we wait on the set of active fences, they are signaled and collected. The requirement is that all successful returns from i915_request_wait() signal the fence, so fixup the one remaining path where we may return before the interrupt has been run. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190619112341.9082-3-chris@chris-wilson.co.uk
-
Jani Nikula authored
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Chris Wilson authored
Since commit eb8d0f5a ("drm/i915: Remove GPU reset dependence on struct_mutex"), the I915_WAIT_LOCKED flag passed to i915_request_wait() has been defunct. Now go ahead and remove it from all callers. References: eb8d0f5a ("drm/i915: Remove GPU reset dependence on struct_mutex") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
The process_csb routine from execlists_submission is incompatible with the GuC backend. Add a warning to detect if we accidentally end up in the wrong spot. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618110736.31155-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
The idea behind keeping the saturation mask local to a context backfired spectacularly. The premise with the local mask was that we would be more proactive in attempting to use semaphores after each time the context idled, and that all new contexts would attempt to use semaphores ignoring the current state of the system. This turns out to be horribly optimistic. If the system state is still oversaturated and the existing workloads have all stopped using semaphores, the new workloads would attempt to use semaphores and be deprioritised behind real work. The new contexts would not switch off using semaphores until their initial batch of low priority work had completed. Given a sufficient backlog of work of equal user priority, this would completely starve the new work of any GPU time. To compensate, remove the local tracking in favour of keeping it as global state on the engine -- once the system is saturated and semaphores are disabled, everyone stops attempting to use semaphores until the system is idle again. One of the reasons for preferring local context tracking was that it worked with virtual engines, so when switching to global state we could either do a complete check of all the virtual siblings or simply disable semaphores for those requests. This takes the simpler approach of disabling semaphores on virtual engines. The downside is that the decision that the engine is saturated is a local measure -- we are only checking whether or not this context was scheduled in a timely fashion; it may be legitimately delayed due to user priorities. We still have the same dilemma though, that we do not want to employ the semaphore poll unless it will be used. v2: Explain why we need to assume the worst wrt virtual engines. Fixes: ca6e56f6 ("drm/i915: Disable semaphore busywaits on saturated systems") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> Cc: Dmitry Ermilov <dmitry.ermilov@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-8-chris@chris-wilson.co.uk
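A hedged sketch of the global tracking, with placeholder names; the real field is a mask on the engine rather than a plain flag, and the clearing happens when the engine parks.

```c
#include <linux/compiler.h>
#include <linux/types.h>

struct example_engine {
	bool saturated;	/* semaphores stopped helping; stop emitting them */
};

static void example_mark_saturated(struct example_engine *engine)
{
	/* Sticky until the engine idles again. */
	WRITE_ONCE(engine->saturated, true);
}

static bool example_use_semaphore(const struct example_engine *engine,
				  bool is_virtual_engine)
{
	/*
	 * A virtual engine spans several physical engines; rather than
	 * checking every sibling, assume the worst and skip semaphores.
	 */
	if (is_virtual_engine)
		return false;

	return !READ_ONCE(engine->saturated);
}
```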
-
- 18 Jun, 2019 3 commits
-
-
José Roberto de Souza authored
For frontbuffer tracking we depend on Display WA #0884 to exit PSR when there is a frontbuffer modification, but according to user reports a write to CURSURFLIVE does not cause PSR to exit on older gens, so let's force a PSR exit. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110799 Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Tested-by: Thomas Rohwer <trohwer85@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190617195154.30292-1-jose.souza@intel.com
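A sketch of the fallback only, with hypothetical helpers wired up as function pointers; the real code checks the device generation and writes the CURSURFLIVE register directly.

```c
/* Hypothetical device state; the callbacks stand in for the real helpers. */
struct example_display {
	bool cursurflive_exits_psr;	/* true where Display WA #0884 works */
	void (*write_cursurflive)(struct example_display *display);
	void (*force_psr_exit)(struct example_display *display);
};

static void example_psr_frontbuffer_flush(struct example_display *display)
{
	if (display->cursurflive_exits_psr) {
		/* Display WA #0884: this write alone kicks the HW out of PSR. */
		display->write_cursurflive(display);
	} else {
		/* Older gens ignore the write, so force a full PSR exit. */
		display->force_psr_exit(display);
	}
}
```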
-
Chris Wilson authored
This has caught me out on countless occasions: when we retrieve a pointer from the submission/execlists backend, it does not carry a reference to the context or ring. Those are only pinned while the request is active, so if we see the request is already completed, it may be in the process of being retired and those pointers defunct. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110938 Fixes: 3a068721 ("drm/i915: Show ring->start for the ELSP context/request queue") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618161951.28820-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Be sure to cleanup after live_evict by flushing any residual state off the GPU using igt_flush_test. Tvrtko mentioned that it is probably wise to stop repeating this ad hoc around the tests and implement a live test runner. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190618161951.28820-1-chris@chris-wilson.co.uk
-