- 23 Dec, 2020 7 commits
-
-
Chris Wilson authored
As we shrink an object, also see if we can prune the dma-resv of idle fences it is maintaining a reference to. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122051.4624-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
If we want to reuse a fence that is in active use by the GPU, we have to wait an uncertain amount of time, but if we reuse an inactive fence, we can change it right away. Loop through the list of available fences twice, ignoring any active fences on the first pass. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122051.4624-1-chris@chris-wilson.co.uk
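A minimal standalone C sketch of the two-pass scan described above; the struct, the list and the "active" flag are illustrative stand-ins, not the driver's actual types:

#include <stdbool.h>
#include <stddef.h>

struct fence_reg {
	struct fence_reg *next;
	bool active;	/* still in use by the GPU */
};

/* Prefer an idle fence; fall back to an active one only if none is idle. */
static struct fence_reg *find_fence(struct fence_reg *free_list)
{
	for (int pass = 0; pass < 2; pass++) {
		for (struct fence_reg *f = free_list; f; f = f->next) {
			if (pass == 0 && f->active)
				continue;	/* ignore busy fences on the first pass */
			return f;
		}
	}
	return NULL;	/* list is empty */
}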
-
Chris Wilson authored
Pull the GT clock information [used to derive CS timestamps and the PM interval] under the GT so that it is local to its users. In doing so, we consolidate the two references to the same information, of which the runtime-info copy also took note of a potential clock source override and scaling factors. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122359.22562-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
We assume that both timestamps are driven off the same clock [reported to userspace as I915_PARAM_CS_TIMESTAMP_FREQUENCY]. Verify that this is so by reading the timestamp registers around a busywait (on an otherwise idle engine so there should be no preemptions). v2: Icelake (not ehl, nor tgl) seems to be using a fixed 80ns interval for, and only for, CTX_TIMESTAMP -- or it may be tied to the GPU frequency and the test is always running at maximum frequency? As far as I can tell, this isolated change in behaviour is undocumented. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223122359.22562-1-chris@chris-wilson.co.uk
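A rough userspace analogue of the verification idea, not the selftest itself: sample two counters that should tick at the same rate around a busy period and compare the deltas. Two POSIX clocks stand in here for the CS and CTX timestamp registers:

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static double ns(const struct timespec *t)
{
	return t->tv_sec * 1e9 + t->tv_nsec;
}

int main(void)
{
	struct timespec a0, b0, a1, b1;

	clock_gettime(CLOCK_MONOTONIC, &a0);
	clock_gettime(CLOCK_REALTIME, &b0);

	for (volatile unsigned long i = 0; i < 50000000UL; i++)
		;	/* busywait, standing in for the spinner on the engine */

	clock_gettime(CLOCK_MONOTONIC, &a1);
	clock_gettime(CLOCK_REALTIME, &b1);

	double ratio = (ns(&a1) - ns(&a0)) / (ns(&b1) - ns(&b0));

	/* Both deltas should agree to within a small tolerance. */
	printf("ratio %.4f -> %s\n", ratio,
	       ratio > 0.95 && ratio < 1.05 ? "same rate" : "mismatch");
	return 0;
}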
-
Chris Wilson authored
We just need the context image from the logical state to force eviction of many contexts, so simplify by avoiding the GEM context container. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201223154509.14155-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
The caller determines if the failure is an error or not, so avoid warning when we will try again and succeed. For example, the sequence <7> [111.319321] [drm:intel_guc_fw_upload [i915]] GuC status 0x20 <3> [111.319340] i915 0000:00:02.0: [drm] *ERROR* GuC load failed: status = 0x00000020 <3> [111.319606] i915 0000:00:02.0: [drm] *ERROR* GuC load failed: status: Reset = 0, BootROM = 0x10, UKernel = 0x00, MIA = 0x00, Auth = 0x00 <7> [111.320045] [drm:__uc_init_hw [i915]] GuC fw load failed: -110; will reset and retry 2 more time(s) <7> [111.322978] [drm:intel_guc_fw_upload [i915]] GuC status 0x8002f0ec should not have been reported as a _test_ failure, as the GuC was successfully loaded on the second attempt and the system remained operational. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2797 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201214100949.11387-2-chris@chris-wilson.co.uk
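A small standalone C sketch of the logging policy the message argues for (the function names and log strings are invented for illustration): retries are reported quietly and only the final failed attempt is promoted to an error.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the real upload; pretend it succeeds on the second try. */
static bool upload_firmware(int attempt)
{
	return attempt > 0;
}

int main(void)
{
	const int max_attempts = 3;

	for (int attempt = 0; attempt < max_attempts; attempt++) {
		if (upload_firmware(attempt)) {
			printf("firmware loaded on attempt %d\n", attempt + 1);
			return 0;
		}

		/* Only the caller knows whether the failure is fatal: keep
		 * retry noise at debug level and reserve the loud error for
		 * the final attempt. */
		if (attempt < max_attempts - 1)
			printf("debug: load failed, will reset and retry\n");
		else
			fprintf(stderr, "error: firmware load failed\n");
	}
	return 1;
}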
-
Chris Wilson authored
By using the double wide cmpxchg64 on 32bit, we can use the same algorithm on both 32/64b systems. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201211110310.22740-1-chris@chris-wilson.co.uk
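A portable sketch of the underlying idea, using C11 atomics on a 64-bit value as a stand-in for the kernel's cmpxchg64 (may need -latomic on 32-bit targets): a single 64-bit compare-exchange lets 32-bit and 64-bit builds share the same lockless update loop.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t slot;

static void update(uint64_t new_value)
{
	uint64_t old = atomic_load(&slot);

	/* Retry until we swap new_value in against an unchanged old value;
	 * on failure, 'old' is refreshed with the current contents. */
	while (!atomic_compare_exchange_weak(&slot, &old, new_value))
		;
}

int main(void)
{
	update(0x123456789abcdef0ULL);
	printf("%llx\n", (unsigned long long)atomic_load(&slot));
	return 0;
}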
-
- 22 Dec, 2020 4 commits
-
-
Chris Wilson authored
When waiting for the submit, before checking the status of the request, kick the tasklet to make sure we are processing the submission. This speeds up submission if we are using any tasklet suppression for secondary requests. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201222113536.3775-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Make sure that the request has been submitted to HW before we begin our wait. This reduces our reliance on the semaphore yield interrupt driving the preemption request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201222113536.3775-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Keep on kicking the timeslice in case it did not stay idle on the first retirement. This may happen when using semaphore yields. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201222113536.3775-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
We assume that the contents of the HWSP are lost across suspend, and so upon resume we must restore critical values such as the timeline seqno. Keep track of every timeline allocated that uses the HWSP as its storage, so that we can reset all seqno values by walking that list. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201222104242.10993-1-chris@chris-wilson.co.uk
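A minimal standalone C sketch of that bookkeeping; the types are illustrative, not the driver's: each timeline that stores its seqno in the HWSP is kept on a list, and on resume the list is walked to write the seqno back into its slot.

#include <stdint.h>
#include <stdio.h>

struct timeline {
	struct timeline *next;	/* link in the per-GT list of HWSP users */
	uint32_t seqno;		/* last value handed out */
	uint32_t *hwsp_slot;	/* backing storage assumed lost over suspend */
};

/* On resume, the HWSP contents are assumed lost: walk every registered
 * timeline and restore its seqno. */
static void restore_hwsp(struct timeline *timelines)
{
	for (struct timeline *tl = timelines; tl; tl = tl->next)
		*tl->hwsp_slot = tl->seqno;
}

int main(void)
{
	uint32_t page[2] = { 0, 0 };	/* pretend HWSP, wiped by suspend */
	struct timeline b = { NULL, 7, &page[1] };
	struct timeline a = { &b, 42, &page[0] };

	restore_hwsp(&a);
	printf("%u %u\n", page[0], page[1]);	/* 42 7 */
	return 0;
}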
-
- 21 Dec, 2020 2 commits
-
-
Chris Wilson authored
A routine to create a temporarily bound buffer for readback is used primarily by selftests, but also for runtime debugging of engine w/a. Amalgamate the duplicated routines into one. Suggested-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201219020343.22681-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Split the definition, construction and updating of the Logical Ring Context from the execlist submission interface. The LRC is used by the HW, irrespective of our different submission backends. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201219020343.22681-1-chris@chris-wilson.co.uk
-
- 20 Dec, 2020 1 commit
-
-
Chris Wilson authored
tasklet_kill() ensures that we _yield_ the processor until a remote tasklet is completed. However, this leads to a starvation condition as being at the bottom of the scheduler's runqueue means that anything else is able to run, including all hogs keeping the tasklet occupied. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201220134858.10510-1-chris@chris-wilson.co.uk
-
- 18 Dec, 2020 3 commits
-
-
Chris Wilson authored
Since we allow removing the timeline map at runtime, there is a risk that rq->hwsp points into a stale page. To control that risk, we hold the RCU read lock while reading *rq->hwsp, but we missed a couple of important barriers. First, the unpinning / removal of the timeline map must be after all RCU readers into that map are complete, i.e. after an rcu barrier (in this case courtesy of call_rcu()). Secondly, we must make sure that the rq->hwsp we are about to dereference under the RCU lock is valid. In this case, we make the rq->hwsp pointer safe during i915_request_retire() and so we know that rq->hwsp may become invalid only after the request has been signaled. Therefore, if the request is not yet signaled when we acquire rq->hwsp under the RCU, we know that rq->hwsp will remain valid for the duration of the RCU read lock. This is a very small window that may lead to either considering the request not completed (causing a delay until the request is checked again; any wait for the request is not affected) or dereferencing an invalid pointer. Fixes: 3adac468 ("drm/i915: Introduce concept of per-timeline (context) HWSP") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: <stable@vger.kernel.org> # v5.1+ Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201218122421.18344-1-chris@chris-wilson.co.uk
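The ordering above can be summarised in a kernel-flavoured sketch; the structure, field and helper names are illustrative, not the driver's actual code, and only the RCU primitives are real:

#include <linux/bitops.h>
#include <linux/rcupdate.h>
#include <linux/types.h>

struct demo_request {
	u32 *hwsp;		/* seqno slot inside a timeline's status page */
	u32 fence_seqno;
	unsigned long flags;
};
#define DEMO_SIGNALED_BIT 0

static bool demo_request_completed(const struct demo_request *rq)
{
	bool completed;

	rcu_read_lock();
	if (test_bit(DEMO_SIGNALED_BIT, &rq->flags)) {
		/* Already signaled: rq->hwsp may now be stale, but we no
		 * longer need to read it. */
		completed = true;
	} else {
		/*
		 * Not yet signaled, so the hwsp backing store cannot have
		 * been recycled: the timeline map is only released after
		 * signaling plus an RCU grace period (call_rcu() on the
		 * writer side).
		 */
		completed = (s32)(READ_ONCE(*rq->hwsp) - rq->fence_seqno) >= 0;
	}
	rcu_read_unlock();

	return completed;
}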
-
Aditya Swarup authored
Add bounds checks for the TGL rev ID array, since older kernels may be used on newer platform revisions, which would result in out-of-bounds access to the rev ID array. In that scenario, use the latest rev ID available and apply those WAs. Also, modify GT macros for TGL rev ID to reuse tgl_revids_get(). Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Aditya Swarup <aditya.swarup@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201203072359.156682-2-aditya.swarup@intel.com
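A standalone C sketch of the clamped lookup (the table contents and names are invented for illustration): an unknown, newer revision ID indexes the last known entry instead of running off the end of the array.

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

enum step { STEP_A0, STEP_B0, STEP_C0 };

static const enum step revid_to_step[] = { STEP_A0, STEP_B0, STEP_C0 };

static enum step lookup_step(unsigned int revid)
{
	if (revid >= ARRAY_SIZE(revid_to_step))
		revid = ARRAY_SIZE(revid_to_step) - 1;	/* clamp, don't overrun */
	return revid_to_step[revid];
}

int main(void)
{
	printf("rev 1 -> step %d\n", lookup_step(1));	/* STEP_B0 */
	printf("rev 9 -> step %d\n", lookup_step(9));	/* clamped to STEP_C0 */
	return 0;
}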
-
Aditya Swarup authored
Fix TGL REVID macros to fetch correct display/gt stepping based on SOC rev id from INTEL_REVID() macro. Previously, we were just returning the first element of the revid array instead of using the correct index based on SOC rev id. Fixes: c33298cb ("drm/i915/tgl: Fix stepping WA matching") Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Aditya Swarup <aditya.swarup@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201203072359.156682-1-aditya.swarup@intel.com
-
- 17 Dec, 2020 2 commits
-
-
Chris Wilson authored
Since we wake the GT up before executing a request, and go to sleep as soon as it is retired, the GT wake time not only represents how long the device is powered up, but also provides a summary, albeit an overestimate, of the device runtime (i.e. the rc0 time to compare against rc6 time). v2: s/busy/awake/ v3: software-gt-awake-time and I915_PMU_SOFTWARE_GT_AWAKE_TIME Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reported-by: kernel test robot <oliver.sang@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201215154456.13954-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Matthew Brost pointed out that the while-loop on a shared breadcrumb was inherently fraught with danger as it competed with the other users of the breadcrumbs. However, in order to completely drain the re-arming irq worker, the while-loop is a necessity, despite my optimism that we could force cancellation with a couple of irq_work invocations. Given that we can't merely drop the while-loop, use an activity counter on the breadcrumbs to detect when we are parking the breadcrumbs for the last time. Based on a patch by Matthew Brost. Reported-by: Matthew Brost <matthew.brost@intel.com> Suggested-by: Matthew Brost <matthew.brost@intel.com> Fixes: 9d5612ca ("drm/i915/gt: Defer enabling the breadcrumb interrupt to after submission") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201217091524.10258-1-chris@chris-wilson.co.uk
-
- 16 Dec, 2020 4 commits
-
-
Chris Wilson authored
Use the wait_queue_entry.flags to denote the special fence behaviour (flattening continuations along fence chains, and for propagating errors) rather than trying to detect ordinary waiters by their functions. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201216165850.25030-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Reduce the pollution of intel_engine.h by moving gen8_emit_pipe_control and friends to gen8_engine_cs.h Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201216135452.6063-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
The free_list and worker was introduced in commit 5f09a9c8 ("drm/i915: Allow contexts to be unreferenced locklessly"), but subsequently made redundant by the removal of the last sleeping lock in commit 2935ed53 ("drm/i915: Remove logical HW ID"). As we can now free the GEM context immediately from any context, remove the deferral of the free_list. v2: Lift removing the context from the global list into close(). Suggested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201215152138.8158-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
When inserting a VMA, we restrict the placement to the low 4G unless the caller opts into using the full range. This was done to allow userspace the opportunity to transition slowly from a 32b address space, and to avoid breaking inherent 32b assumptions of some commands. However, for insert we limited ourselves to 4G-4K, but on verification we allowed the full 4G. This causes some attempts to bind a new buffer to sporadically fail with -ENOSPC, but at other times be bound successfully. commit 48ea1e32 ("drm/i915/gen9: Set PIN_ZONE_4G end to 4GB - 1 page") suggests that there is a genuine problem with stateless addressing that cannot utilize the last page in 4G and so we purposefully excluded it. This means that the quick pin pass may cause us to utilize a buggy placement. Reported-by: CQ Tang <cq.tang@intel.com> Testcase: igt/gem_exec_params/larger-than-life-batch Fixes: 48ea1e32 ("drm/i915/gen9: Set PIN_ZONE_4G end to 4GB - 1 page") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: CQ Tang <cq.tang@intel.com> Reviewed-by: CQ Tang <cq.tang@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v4.5+ Link: https://patchwork.freedesktop.org/patch/msgid/20201216092951.7124-1-chris@chris-wilson.co.uk
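A tiny standalone C sketch of the constraint (the constant and function names are made up): the usable end of the 4G zone excludes the final page, and both the insertion and the verification paths have to test against the same limit.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL
/* Stateless addressing cannot use the last page below 4G, so the zone ends
 * one page early. */
#define ZONE_4G_END	((1ULL << 32) - PAGE_SIZE)

static bool placement_fits_4g_zone(uint64_t start, uint64_t size)
{
	return start + size <= ZONE_4G_END;
}

int main(void)
{
	/* A binding ending exactly at 4G must be rejected by both paths. */
	printf("%d\n", placement_fits_4g_zone((1ULL << 32) - PAGE_SIZE, PAGE_SIZE));
	printf("%d\n", placement_fits_4g_zone(0, PAGE_SIZE));
	return 0;
}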
-
- 15 Dec, 2020 1 commit
-
-
José Roberto de Souza authored
Commit 70a2b431 ("drm/i915/gt: Rename lrc.c to execlists_submission.c") renamed intel_lrc.c to intel_execlists_submission.c but forgot to update i915.rst. Fixes: 70a2b431 ("drm/i915/gt: Rename lrc.c to execlists_submission.c") Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201214185440.243537-1-jose.souza@intel.com
-
- 14 Dec, 2020 3 commits
-
-
Tvrtko Ursulin authored
Chris spotted that since 16ffe73c ("drm/i915/pmu: Use GT parked for estimating RC6 while asleep") we don't rely on runtime pm internals when estimating RC6 while asleep. We can remove the ifdef code to simplify and at the same time wake up the device less when querying RC6 if CONFIG_PM is not compiled in. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> References: 16ffe73c ("drm/i915/pmu: Use GT parked for estimating RC6 while asleep") Reported-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201214094349.3563876-3-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
RC6 is a hardware counter and as such estimating it using the raw clock during runtime suspend is more appropriate. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> References: 34f43927 ("perf: Add per event clockid support") Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201214094349.3563876-2-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
Chris found a CI report which points out calling intel_runtime_pm_get from inside i915_pmu_enable hook is not allowed since it can be invoked from hard irq context. This is something we knew but forgot, so lets fix it once again. We do this by syncing the internal book keeping with hardware rc6 counter on driver load. v2: * Always sync on parking and fully sync on init. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: f4e9894b ("drm/i915/pmu: Correct the rc6 offset upon enabling") Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201214094349.3563876-1-tvrtko.ursulin@linux.intel.com
-
- 10 Dec, 2020 3 commits
-
-
Chris Wilson authored
The workarounds are tied to the GT, and so we should make the tests local to the GT. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201210080240.24529-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
When we reset the legacy ring context, due to potential corruption over suspend/resume, remove the valid bit so that we avoid loading garbage. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201210080240.24529-1-chris@chris-wilson.co.uk
-
John Harrison authored
The above workaround was added as an engine workaround, not a GT workaround. Move it to the correct location. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201210170615.3107266-1-lucas.demarchi@intel.com
-
- 09 Dec, 2020 9 commits
-
-
Daniele Ceraolo Spurio authored
These functions are independent from the backend used and can therefore be split out of the execlists submission file, so they can be re-used by the upcoming GuC submission backend. Based on a patch by Chris Wilson. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209233618.4287-3-chris@chris-wilson.co.uk Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
We want to separate the utility functions for controlling the logical ring context from the execlists submission mechanism (which is an overgrown scheduler). This is similar to Daniele's work to split up the files, but being selfish I wanted to base it after my own changes to intel_lrc.c petered out. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209233618.4287-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Cleanup intel_lrc.h by moving some of the residual common register definitions into intel_lrc_reg.h, prior to rebranding and splitting off the submission backends. v2: keep the SCHEDULE enum in the old file, since it is specific to the gvt usage of the execlists submission backend (John) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> #v2 Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209233618.4287-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Now that the only user of the uninterruptible wait was eliminated, remove the support. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209164008.5487-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Tigerlake is plagued by spontaneous DMAR faults [reason 7, next page table ptr is invalid] which lead to GPU hangs. These faults occur when an iommu map is immediately reused. Adding further clflushes and barriers around either the GTT PTE or iommu PTE updates does not prevent the faults. So far the only effective mitigation has been to induce a delay before the iommu mapping is reused by the GPU, and applying the delay at the iommu map allows for the smallest stable delay. Note that such a delay is hideous and clearly does not fix the root cause, and so should only be a bandaid until a complete solution is found. The delay was determined by running igt/gem_exec_fence/parallel in a loop for a few hours (unpatched MTBF is about 10s). We have also seen such DMAR fault [reason 7] errors on other platforms, notably gen9-gen11, but so far it has only been trivially and consistently reproduced on Tigerlake. v2: Leave a tell-tale to know when we apply the vt'd quirk, and as a reminder to remove it again. Hopefully. Testcase: igt/gem_exec_fence/parallel Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209164008.5487-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
A call to wait for the GT to idle from inside the put_pages fallback is prone to cause an uninterruptible livelock. As it does not provide adequate serialisation with new requests, simply fallback to a trivial sleep. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209164008.5487-1-chris@chris-wilson.co.uk
-
Lucas De Marchi authored
Document what a masked register is according to bspec so we avoid developers using the wrong functions to implement WAs. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201209045246.2905675-3-lucas.demarchi@intel.com
-
Lucas De Marchi authored
The use of "masked" in this function is due to its history. Once upon a time it received a mask and a value as parameter. Since commit eeec73f8 ("drm/i915/gt: Skip rmw for masked registers") that is not true anymore and now there is a clear and a set parameter. Depending on the case, that can still be thought as a mask and value, but there are some subtle differences: what we clear doesn't need to be the same bits we are setting, particularly when we are using masked registers. The fact that we also have "masked registers", i.e. registers whose mask is stored in the upper 16 bits of the register, makes it even more confusing, because "masked" in wa_write_masked_or() has little to do with masked registers, but rather refers to the old mask parameter the function received (that can also, but not exclusively, be used to write to masked register). Avoid the ambiguity and misnomer by renaming it to something else, hopefully less confusing: wa_write_clr_set(), to designate that we are doing both clr and set operations in the register. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201209045246.2905675-2-lucas.demarchi@intel.com
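A minimal C illustration of the clear/set semantics (the helper name mirrors the new wa_write_clr_set() naming but is otherwise a plain userspace function): the cleared bits and the set bits need not be the same.

#include <stdint.h>
#include <stdio.h>

static uint32_t rmw_clr_set(uint32_t old, uint32_t clr, uint32_t set)
{
	return (old & ~clr) | set;	/* clear first, then set */
}

int main(void)
{
	/* Clear a 3-bit field at bits 4..6, then program it to 0b101. */
	uint32_t val = rmw_clr_set(0xffffffffu, 0x7u << 4, 0x5u << 4);

	printf("0x%08x\n", val);	/* 0xffffffdf */
	return 0;
}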
-
Lucas De Marchi authored
When using masked registers, there is nothing to clear since a masked register has the mask in the upper 16b: we can just write to the location we want and use the mask to control which bits we are writing to. However that doesn't mean we don't want to read back the register and check that the value actually matched what we wanted to write, i.e. that the WA stuck. Skipping that check should be an explicit opt-out for registers that are either write-only or affected by hardware misbehavior. Moreover both wa_masked_en() and wa_masked_dis() check that the WA stuck, so skipping the check just because the field is more than 1 bit wide is surprising and error-prone. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201209045246.2905675-1-lucas.demarchi@intel.com
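A standalone C model of a masked register and the readback check (illustrative only; the real registers live behind MMIO): the upper 16 bits select which of the lower 16 bits the write actually touches, and the readback verifies the workaround stuck.

#include <stdint.h>
#include <stdio.h>

/* Encode a masked write: mask in the upper 16 bits, value in the lower. */
static uint32_t masked_field_set(uint16_t mask, uint16_t value)
{
	return ((uint32_t)mask << 16) | value;
}

/* Model of the hardware applying a masked write to the current contents. */
static uint16_t apply_masked_write(uint16_t cur, uint32_t wr)
{
	uint16_t mask = wr >> 16, value = wr & 0xffff;

	return (cur & ~mask) | (value & mask);
}

int main(void)
{
	uint16_t reg = 0x00f0;
	uint32_t wr = masked_field_set(0x000f, 0x0005);	/* touch bits 0..3 only */

	reg = apply_masked_write(reg, wr);

	/* Readback verification: did the masked field take the value? */
	printf("reg=0x%04x ok=%d\n", reg, (reg & 0x000f) == 0x0005);
	return 0;
}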
-
- 08 Dec, 2020 1 commit
-
-
Chris Wilson authored
Closed vma are protected by the GT wakeref held as we lookup the vma, so we know that the vma will not be freed as we process it for the execbuf. Instead we expect to catch the closed status of the context, and simply allow the close-race on an individual vma to be washed away. Longer term, the GT wakeref protection will be removed by explicit vma.kref tracking. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2245 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201207193824.18114-1-chris@chris-wilson.co.uk
-