- 06 Jan, 2021 2 commits
-
Chris Wilson authored
Use memset_io() on the iomem, and silence sparse as we copy from the iomem to normal system pages. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210106123939.18435-2-chris@chris-wilson.co.uk
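A minimal sketch of the pattern above (the helper and variable names are invented, not the driver's): memset_io() and memcpy_fromio() take the __iomem-annotated pointer directly, so sparse stays quiet while the iomem is drained into ordinary system pages.

    #include <linux/io.h>
    #include <linux/string.h>

    /* Illustrative only: copy a chunk of iomem into a system page, then
     * clear the iomem, without mixing address spaces as far as sparse
     * is concerned.
     */
    static void drain_iomem(void __iomem *src, void *dst, size_t len)
    {
            memcpy_fromio(dst, src, len);   /* iomem -> normal pages */
            memset_io(src, 0, len);         /* clear the iomem itself */
    }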
-
Chris Wilson authored
After detecting a register mismatch between the protocontext and the image generated by HW, immediately break out of the double loop. Otherwise we end up with a second, confusing error message. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210106123939.18435-1-chris@chris-wilson.co.uk
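The shape of the fix, as a sketch with invented register arrays (not the actual selftest code): a goto is the idiomatic way to leave both levels of a nested loop as soon as the first mismatch is found, so only a single error is reported.

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Illustrative: stop at the first mismatching register instead of
     * letting the outer loop report a second, confusing error.
     */
    static int compare_reg_images(const u32 *a, const u32 *b, int rows, int cols)
    {
            int i, j, err = 0;

            for (i = 0; i < rows; i++) {
                    for (j = 0; j < cols; j++) {
                            if (a[i * cols + j] != b[i * cols + j]) {
                                    err = -EINVAL;
                                    goto out;       /* leave both loops at once */
                            }
                    }
            }
    out:
            return err;
    }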
-
- 05 Jan, 2021 5 commits
-
Chris Wilson authored
In generating the reference LRC, we want a page-aligned address for simplicity in computing the offsets within. This then shares the computation for the HW LRC which is mapped and so page aligned, making the comparison straightforward. It seems that kmalloc(4k) is not always returning from a 4k-aligned slab cache (which would give us a page aligned address) so force alignment by explicitly allocating a page. Reported-by: "Gote, Nitin R" <nitin.r.gote@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Gote, Nitin R" <nitin.r.gote@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200923114156.17749-1-chris@chris-wilson.co.uk
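A sketch of the allocation change (function name invented): kmalloc(SZ_4K, ...) may be served from a merged or debug slab cache that is not 4k-aligned, whereas __get_free_page() is page-aligned by construction.

    #include <linux/gfp.h>

    /* Illustrative: guarantee a page-aligned buffer for the reference
     * image so the offset arithmetic matches the page-aligned HW LRC.
     */
    static void *alloc_reference_page(void)
    {
            return (void *)__get_free_page(GFP_KERNEL);
    }

    /* Release later with free_page((unsigned long)ptr). */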
-
Chris Wilson authored
If another sibling is able to claim the virtual request, by the time we inspect the request under the lock it may no longer match the local engine. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2877 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104115145.24460-4-chris@chris-wilson.co.uk
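The underlying pattern is the usual recheck-under-lock dance; a sketch with invented types (the real code checks the i915_request against the engine whose lock is held).

    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct eng { spinlock_t lock; };        /* stand-ins, not i915 types */
    struct req { struct eng *engine; };

    /* Illustrative: another sibling may claim the virtual request before
     * we take the lock, so re-validate the binding once it is held.
     */
    static bool claim_request(struct eng *engine, struct req *rq)
    {
            bool claimed = false;

            spin_lock_irq(&engine->lock);
            if (rq->engine == engine) {     /* still bound to this engine? */
                    claimed = true;
                    /* ... safe to act on the request here ... */
            }
            spin_unlock_irq(&engine->lock);

            return claimed;
    }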
-
Chris Wilson authored
If the engine reset fails, we will attempt to resume with the current inflight submissions. When that happens, we cannot assert that the engine reset cleared the pending submission, so do not. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2878 Fixes: 16f2941a ("drm/i915/gt: Replace direct submit with direct call to tasklet") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104115145.24460-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Older platforms use rawclk to derive the CS clock. rawclk is being determined during intel_device_info_init(), and so that needs to be pushed slightly earlier. Fixes: f170523a ("drm/i915/gt: Consolidate the CS timestamp clocks") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104115145.24460-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
A few missed PTR_ERR() upon create_gang() errors. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104115145.24460-1-chris@chris-wilson.co.uk
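The pattern being fixed, sketched with a stand-in prototype for create_gang() (its real signature differs): a helper that returns an ERR_PTR-encoded pointer should have its errno propagated with PTR_ERR() rather than dropped or replaced with a generic error.

    #include <linux/err.h>

    struct gang;                            /* opaque stand-in */
    struct gang *create_gang(void);         /* assumed to return ERR_PTR() on failure */

    /* Illustrative: report the real error code from the failed helper. */
    static int run_gang(void)
    {
            struct gang *g = create_gang();

            if (IS_ERR(g))
                    return PTR_ERR(g);      /* was previously being lost */

            /* ... submit and clean up the gang ... */
            return 0;
    }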
-
- 04 Jan, 2021 3 commits
-
Chris Wilson authored
In the near future, upstream will introduce a SZ_8G macro that is slightly different to our own. Employ a temporary ifndef to avoid compilation failure until we have backmerged. References: 8b0fac44 ("sizes.h: add SZ_8G/SZ_16G/SZ_32G macros") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104171511.32684-1-chris@chris-wilson.co.uk
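A sketch of the temporary guard; the exact value upstream settles on may differ, but the usual sizes.h style is shown.

    #include <linux/sizes.h>

    /*
     * Temporary: upstream is adding its own SZ_8G, so keep a local
     * fallback only until the backmerge, then delete this block.
     */
    #ifndef SZ_8G
    #define SZ_8G (8ULL * SZ_1G)
    #endif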
-
Chris Wilson authored
Some rcs0 workarounds were being incorrectly applied to the GT, and so we failed to restore the expected register settings after a reset. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104114914.30165-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Some rcs0 workarounds were being incorrectly applied to the GT, and so we failed to restore the expected register settings after a reset. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210104114914.30165-1-chris@chris-wilson.co.uk
-
- 03 Jan, 2021 1 commit
-
Arnd Bergmann authored
Randconfig builds on 32-bit machines show lots of warnings for the i915 driver for passing a 32-bit value into __const_hweight64():

    drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:2584:9: error: shift count >= width of type [-Werror,-Wshift-count-overflow]
        return hweight64(VDBOX_MASK(&i915->gt));
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    include/asm-generic/bitops/const_hweight.h:29:49: note: expanded from macro 'hweight64'
    #define hweight64(w) (__builtin_constant_p(w) ? __const_hweight64(w) : __arch_hweight64(w))

Change it to hweight_long() to avoid the warning. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20210103135158.3591442-1-arnd@kernel.org
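The change described above, in before/after form (based on the diagnostic quoted in the message): hweight_long() operates on an unsigned long, so the constant-folding path never expands to __const_hweight64() on a 32-bit build, while the popcount of the engine mask is unchanged.

    /* Before: on 32-bit, the constant path of hweight64() shifts a
     * 32-bit value by more than its width.
     */
    return hweight64(VDBOX_MASK(&i915->gt));

    /* After: the mask fits in an unsigned long, so hweight_long()
     * gives the same count without the overflowing shift.
     */
    return hweight_long(VDBOX_MASK(&i915->gt));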
-
- 01 Jan, 2021 2 commits
-
Maarten Lankhorst authored
This allows us to remove pin_map from state allocation, which saves us a few retry loops. We won't need this until first pin, anyway. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201231170405.22843-1-chris@chris-wilson.co.uk
-
Matt Roper authored
Let's modify the "workaround lost" error message slightly to make it more clear what the various numbers represent. Also, the 'expected' value needs to be &'d with wa->read so that it doesn't include the mask bits for masked registers (those bits are write-only in the hardware and will usually always read out as 0's). Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201231191103.854519-1-matthew.d.roper@intel.com
-
- 31 Dec, 2020 3 commits
-
Chris Wilson authored
Since we use a flag within i915_request.flags to indicate when we have boosted the request (so that we only apply the boost once), this can be used as the serialisation with i915_request_retire() to avoid having to explicitly take the i915_request.lock, which is more heavily contended. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201231093149.19086-1-chris@chris-wilson.co.uk
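The idiom in sketch form, with an invented flag bit: an atomic test_and_set_bit() both records that the boost was applied and serialises concurrent callers, so the heavier i915_request.lock is not needed for this path.

    #include <linux/bitops.h>

    #define SKETCH_FLAG_BOOST 0             /* invented bit index */

    /* Illustrative: only the first caller to set the bit applies the
     * boost; the atomic bit op is the whole serialisation.
     */
    static void boost_once(unsigned long *flags)
    {
            if (test_and_set_bit(SKETCH_FLAG_BOOST, flags))
                    return;                 /* already boosted */

            /* ... bump the request priority here ... */
    }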
-
Chris Wilson authored
We only need to evaluate the current status of the context when it is scheduled in; we will force a reschedule when the context is closed, propagating the change to inflight contexts. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201231093946.11649-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Since we process schedule-in of a context after submitting the request, if we decide to reset the context at that time, we also have to cancel the requests we have marked for submission. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201230220028.17089-1-chris@chris-wilson.co.uk
-
- 30 Dec, 2020 1 commit
-
Chris Wilson authored
Declare that, under extreme circumstances, the shrinker may need to wait upon a request, in which case reset must not itself deadlock in order to ensure forward progress of the driver. That is, since the shrinker may depend upon a reset, no reset can be allowed to touch the shrinker. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201229141626.4773-1-chris@chris-wilson.co.uk
-
- 29 Dec, 2020 2 commits
-
Chris Wilson authored
If supported by the backend, we can quickly look at the context's inflight engine rather than search along the active list to confirm. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201229144114.31686-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
When splitting the Coffeelake define to also identify Cometlakes, I missed the double fw_def for Coffeelake. That is, only newer Cometlakes use the cml-specific guc firmware; older Cometlakes should use the kbl firmware. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2859 Fixes: 5f4ae270 ("drm/i915: Identify Cometlake platform") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v5.9+ Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201229120828.29931-1-chris@chris-wilson.co.uk
-
- 24 Dec, 2020 12 commits
-
Chris Wilson authored
Pull the individual strands of creating custom heartbeat requests into a pair of common functions. This will reduce the number of changes we will need to make in future. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224160213.29521-1-chris@chris-wilson.co.uk
-
Matthew Auld authored
The reloc batch is short lived but can exist in the user visible ppGTT, and since it's backed by an internal object, which lacks page clearing, we should take care to clear it upfront. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201224151358.401345-2-matthew.auld@intel.com Cc: stable@vger.kernel.org
-
Matthew Auld authored
The shadow batch is an internal object, which doesn't have any page clearing, and since the batch_len can be smaller than the object, we should take care to clear it. Testcase: igt/gen9_exec_parse/shadow-peek Fixes: 4f7af194 ("drm/i915: Support ro ppgtt mapped cmdparser shadow buffers") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201224151358.401345-1-matthew.auld@intel.com Cc: stable@vger.kernel.org
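A sketch of the clearing being added (names invented): internal objects are not zeroed on allocation, so everything past the copied batch_len must be cleared before the shadow can be exposed.

    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative: wipe the uninitialised tail of the shadow mapping
     * so no stale data from the backing pages leaks past batch_len.
     */
    static void clear_shadow_tail(u8 *shadow, size_t obj_size, size_t batch_len)
    {
            if (batch_len < obj_size)
                    memset(shadow + batch_len, 0, obj_size - batch_len);
    }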
-
Chris Wilson authored
Since schedule-in and schedule-out are now both always under the tasklet bitlock, we can reduce the individual atomic operations to simple instructions and worry less. This notably eliminates the race observed with intel_context_inflight in __engine_unpark(). Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2583 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-9-chris@chris-wilson.co.uk
-
Chris Wilson authored
Now that the tasklet completely controls scheduling of the requests, and we postpone scheduling out the old requests, we can keep a hanging virtual request bound to the engine on which it hung, and remove it from the queue. On release, it will be returned to the same engine and remain in its queue until it is scheduled, after which point it will become eligible for transfer to a sibling. Instead, we could opt to resubmit the request along the virtual engine on unhold, making it eligible for load balancing immediately -- but that seems like a pointless optimisation for a hanging context. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-8-chris@chris-wilson.co.uk
-
Chris Wilson authored
Having recognised that we do not change the sibling until we schedule out, we can then defer the decision to resubmit the virtual engine from the unwind of the active queue to scheduling out of the virtual context. This improves our resilience in virtual engine scheduling, and should eliminate the rare cases of gem_exec_balance failing. By keeping the unwind order intact on the local engine, we can preserve data dependency ordering while doing a preempt-to-busy pass until we have determined the new ELSP. This means that if we try to timeslice between a virtual engine and a data-dependent ordinary request, the pair will maintain their relative ordering and we will avoid the resubmission, cancelling the timeslicing until further change. The dilemma though is that we then may end up in a situation where the 'demotion' of the virtual request to an ordinary request in the engine queue results in filling the ELSP[] with virtual requests instead of spreading the load across the engines. To compensate for this, we mark each virtual request and refuse to resubmit a virtual request in the secondary ELSP slots, thus forcing subsequent virtual requests to be scheduled out after timeslicing. By delaying the decision until we schedule out, we will avoid unnecessary resubmission. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2079 Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2098 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-7-chris@chris-wilson.co.uk
-
Chris Wilson authored
Let's only wait for the list iterator when decoupling the virtual breadcrumb, as the signaling of all the requests may take a long time, during which we do not want to keep the tasklet spinning. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-6-chris@chris-wilson.co.uk
-
Chris Wilson authored
The issue with stale virtual breadcrumbs remains. Now we have the problem that if the irq-signaler is still referencing the stale breadcrumb as we transfer it to a new sibling, the list becomes spaghetti. This is a very small window, but that doesn't stop it being hit infrequently. To prevent the lists being tangled (the iterator starting on one engine's b->signalers but walking onto another list), always decouple the virtual breadcrumb on schedule-out and make sure that the walker has stepped out of the lists. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
Inside schedule_out, we do extra work upon idling the context, such as updating the runtime, kicking off retires, kicking virtual engines. However, if we are in a series of processing single requests per contexts, we may find ourselves scheduling out the context, only to immediately schedule it back in during dequeue. This is just extra work that we can avoid if we keep the context marked as inflight across the dequeue. This becomes more significant later on for minimising virtual engine misses. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
Once a virtual engine has been bound to a sibling, it will remain bound until we finally schedule out the last active request. We cannot rebind the context to a new sibling while it is inflight as the context save will conflict, hence we wait. As we cannot then use any other sibling while the context is inflight, only kick the bound sibling while it is inflight, and upon scheduling out kick the rest (so that we can swap engines on timeslicing if the previously bound engine becomes oversubscribed). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Rather than going back and forth between the rb_node entry and the virtual_engine type, store the ve local and reuse it. As the container_of conversion from rb_node to virtual_engine requires a variable offset, performing that conversion just once shaves off a bit of code. v2: Keep a single virtual engine lookup, for typical use. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-2-chris@chris-wilson.co.uk
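The refactor in miniature, with abbreviated types: convert the rb_node to its container once, keep the typed pointer in a local, and reuse it instead of repeating container_of()/rb_entry() with its non-zero offset.

    #include <linux/rbtree.h>

    struct ve_sketch {                      /* abbreviated stand-in type */
            int priority;                   /* pushes rb to a non-zero offset */
            struct rb_node rb;
    };

    static int first_ve_priority(struct rb_root *root)
    {
            struct rb_node *node = rb_first(root);
            struct ve_sketch *ve;

            if (!node)
                    return -1;

            /* Single rb_node -> ve conversion; reuse 've' from here on. */
            ve = rb_entry(node, struct ve_sketch, rb);
            return ve->priority;
    }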
-
Chris Wilson authored
Rather than having special case code for opportunistically calling process_csb() and performing a direct submit while holding the engine spinlock for submitting the request, simply call the tasklet directly. This allows us to retain the direct submission path, including the CS draining to allow fast/immediate submissions, without requiring any duplicated code paths, and most importantly greatly simplifying the control flow by removing reentrancy. This will enable us to close a few races in the virtual engines in the next few patches. The trickiest part here is to ensure that paired operations (such as schedule_in/schedule_out) remain under consistent locking domains, e.g. when pulled outside of the engine->active.lock v2: Use bh kicking, see commit 3c53776e ("Mark HI and TASKLET softirq synchronous"). v3: Update engine-reset to be tasklet aware Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201224135544.1713-1-chris@chris-wilson.co.uk
-
- 23 Dec, 2020 7 commits
-
Chris Wilson authored
As we shrink an object, also see if we can prune the dma-resv of idle fences it is maintaining a reference to. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122051.4624-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
If we want to reuse a fence that is in active use by the GPU, we have to wait an uncertain amount of time, but if we reuse an inactive fence, we can change it right away. Loop through the list of available fences twice, ignoring any active fences on the first pass. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122051.4624-1-chris@chris-wilson.co.uk
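The two-pass scan in sketch form (types invented): the first pass considers only idle fences, which can be reassigned immediately; only if none is found does the second pass fall back to an active fence that will require a wait.

    #include <linux/list.h>
    #include <linux/types.h>

    struct fence_sketch {
            struct list_head link;
            bool active;                    /* still in use by the GPU? */
    };

    /* Illustrative: prefer a fence we can change right away; accept a
     * busy one only if the first pass found nothing.
     */
    static struct fence_sketch *find_fence(struct list_head *fences)
    {
            struct fence_sketch *f;
            int pass;

            for (pass = 0; pass < 2; pass++) {
                    list_for_each_entry(f, fences, link) {
                            if (pass == 0 && f->active)
                                    continue;       /* skip busy fences first */
                            return f;
                    }
            }

            return NULL;
    }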
-
Chris Wilson authored
Pull the GT clock information [used to derive CS timestamps and PM interval] under the GT so that it is local to its users. In doing so, we consolidate the two references to the same information, of which the runtime-info copy took note of a potential clock source override and scaling factors. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122359.22562-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
We assume that both timestamps are driven off the same clock [reported to userspace as I915_PARAM_CS_TIMESTAMP_FREQUENCY]. Verify that this is so by reading the timestamp registers around a busywait (on an otherwise idle engine so there should be no preemptions). v2: Icelake (not ehl, nor tgl) seems to be using a fixed 80ns interval for, and only for, CTX_TIMESTAMP -- or it may be GPU frequency and the test is always running at maximum frequency? As far as I can tell, this isolated change in behaviour is undocumented. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122359.22562-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
We just need the context image from the logical state to force eviction of many contexts, so simplify by avoiding the GEM context container. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223154509.14155-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
The caller determines if the failure is an error or not, so avoid warning when we will try again and succeed. For example, the following sequence

    <7> [111.319321] [drm:intel_guc_fw_upload [i915]] GuC status 0x20
    <3> [111.319340] i915 0000:00:02.0: [drm] *ERROR* GuC load failed: status = 0x00000020
    <3> [111.319606] i915 0000:00:02.0: [drm] *ERROR* GuC load failed: status: Reset = 0, BootROM = 0x10, UKernel = 0x00, MIA = 0x00, Auth = 0x00
    <7> [111.320045] [drm:__uc_init_hw [i915]] GuC fw load failed: -110; will reset and retry 2 more time(s)
    <7> [111.322978] [drm:intel_guc_fw_upload [i915]] GuC status 0x8002f0ec

should not have been reported as a _test_ failure, as the GuC was successfully loaded on the second attempt and the system remained operational. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2797 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201214100949.11387-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
By using the double wide cmpxchg64 on 32bit, we can use the same algorithm on both 32/64b systems. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201211110310.22740-1-chris@chris-wilson.co.uk
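A sketch of the idea with an invented 64-bit slot: cmpxchg64() provides the double-width compare-and-swap on 32-bit machines (cmpxchg8b on x86), so the same lock-free update loop can be used regardless of word size; availability of cmpxchg64() on the target 32-bit architecture is assumed here.

    #include <linux/atomic.h>
    #include <linux/compiler.h>
    #include <linux/types.h>

    /* Illustrative: atomically publish a new 64-bit value with the same
     * loop on 32-bit and 64-bit hosts.
     */
    static void publish_u64(u64 *slot, u64 newval)
    {
            u64 old = READ_ONCE(*slot);
            u64 prev;

            for (;;) {
                    prev = cmpxchg64(slot, old, newval);
                    if (prev == old)
                            break;          /* swap succeeded */
                    old = prev;             /* raced; retry with fresh value */
            }
    }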
-
- 22 Dec, 2020 2 commits
-
Chris Wilson authored
When waiting for the submit, before checking the status of the request, kick the tasklet to make sure we are processing the submission. This speeds up submission if we are using any tasklet suppression for secondary requests. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201222113536.3775-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Make sure that the request has been submitted to HW before we begin our wait. This reduces our reliance on the semaphore yield interrupt driving the preemption request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201222113536.3775-2-chris@chris-wilson.co.uk
-