- 11 Dec, 2021 1 commit
-
-
John Harrison authored
It is possible for platforms to require GuC but not HuC firmware. Also, the firmware versions for GuC and HuC advance independently. So split the macros up to allow the lists to be maintained separately. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211210044022.1842938-2-John.C.Harrison@Intel.com
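A minimal sketch of the split described above, with made-up version entries (the real lists in intel_uc_fw.c are longer and their macro arguments differ):

    /* before the split, one shared list forced GuC and HuC entries to
     * move in lockstep; now each list is maintained on its own */
    #define INTEL_GUC_FIRMWARE_DEFS(fw_def, guc_def) \
            fw_def(ALDERLAKE_P, 0, guc_def(adlp, 62, 0, 3)) \
            fw_def(DG1,         0, guc_def(dg1,  62, 0, 0))

    #define INTEL_HUC_FIRMWARE_DEFS(fw_def, huc_def) \
            fw_def(ALDERLAKE_P, 0, huc_def(tgl, 7, 9, 3))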
-
- 09 Dec, 2021 6 commits
-
-
Umesh Nerlige Ramappa authored
GuC PMU busyness takes a gt wakeref if the gt is awake, but fails to release the wakeref if a reset is in progress. Release the wakeref if it was acquired successfully. v2: Simplify the fix (Ashutosh) Fixes: 2a67b18e ("drm/i915/pmu: Fix synchronization of PMU callback with reset") Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211207020239.43402-1-umesh.nerlige.ramappa@intel.com
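A hedged sketch of the fix's shape in the busyness callback; update_busyness_stats() is a stand-in, not the driver's actual identifier:

    static u64 guc_engine_busyness_sample(struct intel_gt *gt, u64 total)
    {
            if (!intel_gt_pm_get_if_awake(gt))
                    return total;

            /* skip the update while a reset is in flight ... */
            if (!test_bit(I915_RESET_BACKOFF, &gt->reset.flags))
                    total = update_busyness_stats(gt); /* hypothetical */

            /* ... but always drop the wakeref taken above (the bug) */
            intel_gt_pm_put_async(gt);
            return total;
    }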
-
Umesh Nerlige Ramappa authored
live_engine_busy_stats waits for busyness to start ticking before sampling busyness for the test sample duration. The wait accesses an MMIO register, and the uncore call to read it takes up to 3 ms in the worst case. This can result in the wait timing out, since the MMIO read by itself can consume the entire 500 us timeout. Increase the timeout to 10 ms to account for the MMIO read time. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4536 Fixes: 77cdd054 ("drm/i915/pmu: Connect engine busyness stats from GuC to pmu") Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211208183313.13126-1-umesh.nerlige.ramappa@intel.com
-
Matthew Auld authored
If the device needs 64K minimum GTT pages for device local-memory, like on XEHPSDV, then we need to fail the allocation if we can't meet it, instead of falling back to 4K pages; otherwise we can't safely support the insertion of device local-memory pages for this vm, since the HW expects the correct physical alignment and size for every PTE once we mark the page-table as operating in 64K GTT mode. v2: s/userpsace/userspace [Thomas] Signed-off-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211208141613.7251-5-ramalingam.c@intel.com
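A sketch of the guard this implies, using assumed surroundings (the real check sits in the GTT allocation path and differs in detail):

    /* refuse to quietly fall back to 4K pages for LMEM on 64K-GTT hw */
    if (HAS_64K_PAGES(i915) &&
        i915_gem_object_is_lmem(obj) &&
        !IS_ALIGNED(vma->size, I915_GTT_PAGE_SIZE_64K))
            return -EINVAL;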
-
Matthew Auld authored
On some platforms the hw has dropped support for 4K GTT pages when dealing with LMEM, and due to the design of 64K GTT pages in the hw, we can only mark the *entire* page-table as operating in 64K GTT mode, since the enable bit is still on the pde, and not the pte. And since we still need to allow 4K GTT pages for SMEM objects, we can't have a "normal" 4K page-table with scratch pointing to LMEM, since that's undefined from the hw pov. The simplest solution is to just move the 64K scratch page to SMEM on such platforms and call it a day, since that should work for all configurations. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Thomas Hellstrom <thomas.hellstrom@linux.intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211208141613.7251-4-ramalingam.c@intel.com
-
Matthew Auld authored
Conditionally allocate LMEM with 64K granularity, since 4K page support for LMEM will be dropped on some platforms when using the PPGTT. v2: updated commit msg [Thomas] Signed-off-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Thomas Hellstrom <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211208154854.28037-1-ramalingam.c@intel.com
-
Stuart Summers authored
Add a new platform flag, has_64k_pages, to mark the requirement of 64K GTT page sizes or larger for device local memory access. The flag also implies that we require, or at least support, the compact PT layout for the ppGTT when using 64K GTT pages. v2: More explanation for the flag [Thomas] Signed-off-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211208141613.7251-2-ramalingam.c@intel.com
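Sketched below are the accessor and how a platform would set the flag; the device-info block is abbreviated and the base feature-set macro is an assumption:

    #define HAS_64K_PAGES(i915) (INTEL_INFO(i915)->has_64k_pages)

    static const struct intel_device_info xehpsdv_info = {
            XE_HP_FEATURES,         /* assumed base feature set */
            .has_64k_pages = 1,
            /* ... */
    };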
-
- 08 Dec, 2021 5 commits
-
-
Tejas Upadhyay authored
We need a way to reset engines by their reset domains. This change sets up a way to fetch the reset domains of each engine globally. Changes since V1: - Use static reset domain array - Ville and Tvrtko - Use BUG_ON at appropriate place - Tvrtko Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211206081026.4024401-1-tejaskumarx.surendrakumar.upadhyay@intel.com
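A sketch of the static table approach (entries abbreviated; the real table covers every engine instance):

    static const u32 engine_reset_domains[] = {
            [RCS0]  = GEN11_GRDOM_RENDER,
            [BCS0]  = GEN11_GRDOM_BLT,
            [VCS0]  = GEN11_GRDOM_MEDIA,
            [VECS0] = GEN11_GRDOM_VECS,
    };

    GEM_BUG_ON(id >= ARRAY_SIZE(engine_reset_domains) ||
               !engine_reset_domains[id]);
    engine->reset_domain = engine_reset_domains[id];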
-
Matthew Auld authored
Ensure we account for any object rounding due to min_page_size restrictions. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-4-matthew.auld@intel.com
-
Matthew Auld authored
No need to insert PTEs for the PTE window itself; also, foreach expects a length, not an end offset, which could be gigantic here with a second engine. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-3-matthew.auld@intel.com
-
Matthew Auld authored
Ensure we add the engine base only after we calculate the qword offset into the PTE window. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-2-matthew.auld@intel.com
-
Matthew Auld authored
The scratch page might not be allocated in LMEM (like on DG2), so instead of using that as the deciding factor for where the paging structures live, let's just query the pt before mapping it. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211206112539.3149779-1-matthew.auld@intel.com
-
- 07 Dec, 2021 2 commits
-
-
Bruce Chang authored
Follow up on the commit below to increase the timeout further on new platforms, to accommodate the additional time required to complete the GuC submissions for the numerous requests created in a loop. commit 5e076529 Author: Matthew Brost <matthew.brost@intel.com> Date: Mon Jul 26 20:17:03 2021 -0700 drm/i915/selftests: Increase timeout in i915_gem_contexts selftests Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211207003845.12419-1-yu.bruce.chang@intel.com
-
Michael Cheng authored
Certain functions within i915 use macros that the MMU code defines only for specific CPU architectures, such as _PAGE_RW and _PAGE_PRESENT (some architectures, like ARM64, don't define these macros at all). Instead of re-using bits defined for the CPU, we should use bits defined for i915. This patch introduces two new 64-bit macros, GEN8_PAGE_PRESENT and GEN8_PAGE_RW, to check for bits 0 and 1, and replaces all occurrences of _PAGE_RW and _PAGE_PRESENT within i915. v2(Michael Cheng): Use GEN8_ instead of I915_ Signed-off-by: Michael Cheng <michael.cheng@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> [ Move defines together with other GEN8 defines ] Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211206215245.513677-2-michael.cheng@intel.com
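The new bits, with an illustrative use in a gen8 PTE encode:

    #define GEN8_PAGE_PRESENT BIT_ULL(0)
    #define GEN8_PAGE_RW      BIT_ULL(1)

    /* a PTE built from i915's own bits rather than the CPU's _PAGE_* */
    gen8_pte_t pte = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW;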
-
- 03 Dec, 2021 4 commits
-
-
Dan Carpenter authored
Originally "out_fence" was set using out_fence = sync_file_create() but which returns NULL, but now it is set with out_fence = eb_requests_create() which returns error pointers. The error path needs to be modified to avoid an Oops in the "goto err_request;" path. Fixes: 544460c3 ("drm/i915: Multi-BB execbuf") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211202044831.29583-1-matthew.brost@intel.com
-
Raviteja Goud Talla authored
The Bspec page says "Reset: BUS". Accordingly, move workarounds Wa_1407352427 and Wa_1406680159 to the proper function, icl_gt_workarounds_init(), which resolves a GuC enabling error. v2: - The previous rev2 patch was mangled by the email client, which caused the build failure; this v2 resolves the previously broken series Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Raviteja Goud Talla <ravitejax.goud.talla@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211203145603.4006937-1-ravitejax.goud.talla@intel.com
-
Ramalingam C authored
Invalidate the IC cache through a pipe control command as part of the ctx restore flow, via the indirect ctx pointer. v2: - Move pipe control from the xcs indirect context to the rcs indirect context. We'll eventually need this on the CCS engines too, but support for those hasn't landed yet. Cc: Chris Wilson <chris.p.wilson@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211116174818.2128062-5-matthew.d.roper@intel.com
-
Matt Roper authored
Coarse power gating for render should not be enabled on some DG2 steppings. Bspec: 52698 Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211116174818.2128062-4-matthew.d.roper@intel.com
-
- 01 Dec, 2021 7 commits
-
-
José Roberto de Souza authored
These two workarounds need to be implemented in the UMD; the KMD only needs to whitelist the registers, so only the workaround numbers are added here to facilitate future workaround table checks. Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211119140931.32791-2-jose.souza@intel.com
-
José Roberto de Souza authored
This workaround was causing hangs because I missed the fact that it needs to be enabled for all cases and disabled only when doing a resolve pass. So the KMD only needs to whitelist it, and the UMD will be the one setting it per case. This reverts commit 28ec02c9. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4145 Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Fixes: 28ec02c9 ("drm/i915: Implement Wa_1508744258") Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211119140931.32791-1-jose.souza@intel.com
-
Thomas Hellström authored
With asynchronous migrations, the vma state may be several migrations ahead of the state that matches the request we're capturing. Address that by introducing an i915_vma_snapshot structure that can be used to snapshot relevant state at request submission. In order to make sure we access the correct memory, the snapshots take references on relevant sg-tables and memory regions. Also move the capture list allocation out of the fence signaling critical path and use the CONFIG_DRM_I915_CAPTURE_ERROR define to avoid compiling in members and functions used for error capture when they're not used. Finally, introduce lockdep annotation. v4: - Break out the capture allocation mode change to a separate patch. v5: - Fix compilation error in the !CONFIG_DRM_I915_CAPTURE_ERROR case (kernel test robot) v6: - Use #if IS_ENABLED() instead of #ifdef to match driver style. - Move yet another change of allocation mode to the separate patch. - Commit message rework due to patch reordering. v7: - Adjust for removal of region refcounting. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211129202245.472043-1-thomas.hellstrom@linux.intel.com
-
Zhou Qingyang authored
In igt_request_rewind(), mock_context(i915, "A") is assigned to ctx[0] and used in i915_gem_context_get_engine(). There is a dereference of ctx[0] in i915_gem_context_get_engine(), which could lead to a NULL pointer dereference on failure of mock_context(i915, "A"). The same applies to mock_context(i915, "B"). Although this bug is not serious, since it belongs to testing code, it is better to fix it to avoid unexpected failure in testing. Fix these bugs by adding checks on ctx[0] and ctx[1]. This bug was found by a static analyzer. The analysis employs differential checking to identify inconsistent security operations (e.g., checks or kfrees) between two code paths and confirms that the inconsistent operations are not recovered in the current function or the callers, so they constitute bugs. Note that, as a bug found by static analysis, it can be a false positive or hard to trigger. Multiple researchers have cross-reviewed the bug. Builds with CONFIG_DRM_I915_SELFTEST=y show no new warnings, and our static analyzer no longer warns about this code. References: 591c0fb8 ("drm/i915: Exercise request cancellation using a mock selftest") [tursulin: Replaced fixes with references to avoid.] Signed-off-by: Zhou Qingyang <zhou1615@umn.edu> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211130141545.153899-1-zhou1615@umn.edu
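The shape of the added checks, sketched (the unwind label is hypothetical):

    ctx[0] = mock_context(i915, "A");
    if (!ctx[0]) {
            err = -ENOMEM;
            goto err_ctx_0; /* hypothetical unwind label */
    }
    ce = i915_gem_context_get_engine(ctx[0], RCS0);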
-
Tvrtko Ursulin authored
With both integrated and discrete Intel GPUs in a system, the current global check of intel_iommu_gfx_mapped, as done from intel_vtd_active(), may not be completely accurate. In this patch we add an i915 parameter to intel_vtd_active() in order to prepare it for multiple GPUs, and we also change the check away from the Intel specific intel_iommu_gfx_mapped (global exported by the Intel IOMMU driver) to probing the presence of IOMMU on a specific device using device_iommu_mapped(). This will return true both for IOMMU pass-through and address translation modes, which matches the current behaviour. If in the future we wanted to distinguish between these two modes we could either use iommu_get_domain_for_dev() and check for the __IOMMU_DOMAIN_PAGING bit indicating address translation, or ask for a new API to be exported from the IOMMU core code. v2: * Check for dmar translation specifically, not just iommu domain. (Baolu) v3: * Go back to plain "any domain" check for now, rewrite commit message. v4: * Use device_iommu_mapped. (Robin, Baolu) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Robin Murphy <robin.murphy@arm.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211126141424.493753-1-tvrtko.ursulin@linux.intel.com
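Roughly, the per-device probe described above looks like this (a sketch of intel_vtd_active() after the change; the real helper may carry extra cases):

    #include <linux/iommu.h>

    static bool intel_vtd_active(struct drm_i915_private *i915)
    {
            /* true for both IOMMU pass-through and address translation */
            return device_iommu_mapped(i915->drm.dev);
    }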
-
Matthew Brost authored
Rather than stealing bits from the i915_sw_fence function pointer, use separate fields for the function pointer and flags. If using two different fields, the 4 byte alignment for the i915_sw_fence function pointer can also be dropped. v2: (CI) - Set new function field rather than flags in __i915_sw_fence_init v3: (Tvrtko) - Remove BUG_ON(!fence->flags) in reinit as that will now blow up - Only define fence->flags if CONFIG_DRM_I915_SW_FENCE_CHECK_DAG is defined v4: - Rebase, resend for CI Signed-off-by: Matthew Brost <matthew.brost@intel.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211116194929.10211-1-matthew.brost@intel.com
-
Umesh Nerlige Ramappa authored
Since the PMU callback runs in irq context, it synchronizes with gt reset using the reset count. We could run into a case where the PMU callback reads the reset count before it is updated, which has the potential of corrupting the busyness stats. In addition to the reset count, check if the reset bit is set before capturing busyness. In addition, save the previous stats only if you intend to update them. v2: - The 2 reset counts captured in the PMU callback can end up being the same if they were captured right after the count is incremented in the reset flow. This can lead to a bad busyness state. Ensure that reset is not in progress when the initial reset count is captured. Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211108211057.68783-1-umesh.nerlige.ramappa@intel.com
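A sketch of the race guard; the stats field is a stand-in:

    u32 reset_count = i915_reset_count(gpu_error);
    bool in_reset = test_bit(I915_RESET_BACKOFF, &gt->reset.flags);

    /* ... compute the new busyness total ... */

    /* publish only if no reset raced with the sample */
    if (!in_reset && reset_count == i915_reset_count(gpu_error))
            stats->total = total; /* hypothetical stats field */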
-
- 30 Nov, 2021 1 commit
-
-
Thomas Hellström authored
If a dma_fence_array is reported signaled by a call to dma_fence_is_signaled(), it may leak the PENDING_ERROR status. Fix this by clearing the PENDING_ERROR status if we return true in dma_fence_array_signaled(). v2: - Update Cc list, and add R-b. Fixes: 1f70b8b8 ("dma-fence: Propagate errors to dma-fence-array container") Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: Gustavo Padovan <gustavo@padovan.org> Cc: Christian König <christian.koenig@amd.com> Cc: "Christian König" <christian.koenig@amd.com> Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linaro-mm-sig@lists.linaro.org Cc: <stable@vger.kernel.org> # v5.4+ Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Christian König <christian.koenig@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211129152727.448908-1-thomas.hellstrom@linux.intel.com
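The fix, sketched against the dma-fence-array signaled callback (the clear helper comes from the error-propagation commit this Fixes):

    static bool dma_fence_array_signaled(struct dma_fence *fence)
    {
            struct dma_fence_array *array = to_dma_fence_array(fence);

            if (atomic_read(&array->num_pending) > 0)
                    return false;

            /* don't leak PENDING_ERROR once the array reports signaled */
            dma_fence_array_clear_pending_error(array);
            return true;
    }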
-
- 26 Nov, 2021 3 commits
-
-
Matthew Auld authored
vfs_kern_mount() modifies the passed-in mount options, leaving us with "huge", instead of "huge=within_size". Normally this shouldn't matter with the usual module load/unload flow, however with the core_hotunplug IGT we are hitting the following, when re-probing the memory regions: i915 0000:00:02.0: [drm] Transparent Hugepage mode 'huge' tmpfs: Bad value for 'huge' [drm] Unable to create a private tmpfs mount, hugepage support will be disabled(-22). References: https://gitlab.freedesktop.org/drm/intel/-/issues/4651 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211126110843.2028582-1-matthew.auld@intel.com
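One way to avoid the mutation, sketched (the actual fix may differ in detail):

    /* hand vfs_kern_mount() a fresh, writable copy on every probe,
     * since it truncates "huge=within_size" to "huge" in place */
    char huge_opt[] = "huge=within_size";

    gemfs = vfs_kern_mount(type, SB_KERNMOUNT, type->name, huge_opt);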
-
Thomas Hellström authored
The capture code is typically run entirely in the fence signalling critical path. We're about to add lockdep annotation in an upcoming patch which reveals a lockdep splat similar to the below one. Fix the associated potential deadlocks using __GFP_KSWAPD_RECLAIM (which is the same as GFP_WAIT, but open-coded for clarity) rather than GFP_KERNEL for memory allocation in the capture path. This has the potential drawback that capture might fail in situations with memory pressure. [ 234.842048] WARNING: possible circular locking dependency detected [ 234.842050] 5.15.0-rc7+ #20 Tainted: G U W [ 234.842052] ------------------------------------------------------ [ 234.842054] gem_exec_captur/1180 is trying to acquire lock: [ 234.842056] ffffffffa3e51c00 (fs_reclaim){+.+.}-{0:0}, at: __kmalloc+0x4d/0x330 [ 234.842063] but task is already holding lock: [ 234.842064] ffffffffa3f57620 (dma_fence_map){++++}-{0:0}, at: i915_vma_snapshot_resource_pin+0x27/0x30 [i915] [ 234.842138] which lock already depends on the new lock. [ 234.842140] the existing dependency chain (in reverse order) is: [ 234.842142] -> #2 (dma_fence_map){++++}-{0:0}: [ 234.842145] __dma_fence_might_wait+0x41/0xa0 [ 234.842149] dma_resv_lockdep+0x1dc/0x28f [ 234.842151] do_one_initcall+0x58/0x2d0 [ 234.842154] kernel_init_freeable+0x273/0x2bf [ 234.842157] kernel_init+0x16/0x120 [ 234.842160] ret_from_fork+0x1f/0x30 [ 234.842163] -> #1 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}: [ 234.842166] fs_reclaim_acquire+0x6d/0xd0 [ 234.842168] __kmalloc_node+0x51/0x3a0 [ 234.842171] alloc_cpumask_var_node+0x1b/0x30 [ 234.842174] native_smp_prepare_cpus+0xc7/0x292 [ 234.842177] kernel_init_freeable+0x160/0x2bf [ 234.842179] kernel_init+0x16/0x120 [ 234.842181] ret_from_fork+0x1f/0x30 [ 234.842184] -> #0 (fs_reclaim){+.+.}-{0:0}: [ 234.842186] __lock_acquire+0x1161/0x1dc0 [ 234.842189] lock_acquire+0xb5/0x2b0 [ 234.842192] fs_reclaim_acquire+0xa1/0xd0 [ 234.842193] __kmalloc+0x4d/0x330 [ 234.842196] i915_vma_coredump_create+0x78/0x5b0 [i915] [ 234.842253] intel_engine_coredump_add_vma+0x36/0xe0 [i915] [ 234.842307] __i915_gpu_coredump+0x290/0x5e0 [i915] [ 234.842365] i915_capture_error_state+0x57/0xa0 [i915] [ 234.842415] intel_gt_handle_error+0x348/0x3e0 [i915] [ 234.842462] intel_gt_debugfs_reset_store+0x3c/0x90 [i915] [ 234.842504] simple_attr_write+0xc1/0xe0 [ 234.842507] full_proxy_write+0x53/0x80 [ 234.842509] vfs_write+0xbc/0x350 [ 234.842513] ksys_write+0x58/0xd0 [ 234.842514] do_syscall_64+0x38/0x90 [ 234.842516] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 234.842519] other info that might help us debug this: [ 234.842521] Chain exists of: fs_reclaim --> mmu_notifier_invalidate_range_start --> dma_fence_map [ 234.842526] Possible unsafe locking scenario: [ 234.842528] CPU0 CPU1 [ 234.842529] ---- ---- [ 234.842531] lock(dma_fence_map); [ 234.842532] lock(mmu_notifier_invalidate_range_start); [ 234.842535] lock(dma_fence_map); [ 234.842537] lock(fs_reclaim); [ 234.842539] *** DEADLOCK *** [ 234.842540] 4 locks held by gem_exec_captur/1180: [ 234.842543] #0: ffff9007812d9460 (sb_writers#17){.+.+}-{0:0}, at: ksys_write+0x58/0xd0 [ 234.842547] #1: ffff900781d9ecb8 (&attr->mutex){+.+.}-{3:3}, at: simple_attr_write+0x3a/0xe0 [ 234.842552] #2: ffffffffc11913a8 (capture_mutex){+.+.}-{3:3}, at: i915_capture_error_state+0x1a/0xa0 [i915] [ 234.842602] #3: ffffffffa3f57620 (dma_fence_map){++++}-{0:0}, at: i915_vma_snapshot_resource_pin+0x27/0x30 [i915] [ 234.842656] stack backtrace: [ 234.842658] CPU: 0 PID: 1180 Comm: gem_exec_captur Tainted: G U W 5.15.0-rc7+ #20 [ 234.842661] Hardware name: ASUS System Product Name/PRIME B560M-A AC, BIOS 0403 01/26/2021 [ 234.842664] Call Trace: [ 234.842666] dump_stack_lvl+0x57/0x72 [ 234.842669] check_noncircular+0xde/0x100 [ 234.842672] ? __lock_acquire+0x3bf/0x1dc0 [ 234.842675] __lock_acquire+0x1161/0x1dc0 [ 234.842678] lock_acquire+0xb5/0x2b0 [ 234.842680] ? __kmalloc+0x4d/0x330 [ 234.842683] ? finish_task_switch.isra.0+0xf2/0x360 [ 234.842686] ? i915_vma_coredump_create+0x78/0x5b0 [i915] [ 234.842734] fs_reclaim_acquire+0xa1/0xd0 [ 234.842737] ? __kmalloc+0x4d/0x330 [ 234.842739] __kmalloc+0x4d/0x330 [ 234.842742] i915_vma_coredump_create+0x78/0x5b0 [i915] [ 234.842793] ? capture_vma+0xbe/0x110 [i915] [ 234.842844] intel_engine_coredump_add_vma+0x36/0xe0 [i915] [ 234.842892] __i915_gpu_coredump+0x290/0x5e0 [i915] [ 234.842939] i915_capture_error_state+0x57/0xa0 [i915] [ 234.842985] intel_gt_handle_error+0x348/0x3e0 [i915] [ 234.843032] ? __mutex_lock+0x81/0x830 [ 234.843035] ? simple_attr_write+0x3a/0xe0 [ 234.843038] ? __lock_acquire+0x3bf/0x1dc0 [ 234.843041] intel_gt_debugfs_reset_store+0x3c/0x90 [i915] [ 234.843083] ? _copy_from_user+0x45/0x80 [ 234.843086] simple_attr_write+0xc1/0xe0 [ 234.843089] full_proxy_write+0x53/0x80 [ 234.843091] vfs_write+0xbc/0x350 [ 234.843094] ksys_write+0x58/0xd0 [ 234.843096] do_syscall_64+0x38/0x90 [ 234.843098] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 234.843101] RIP: 0033:0x7fa467480877 [ 234.843103] Code: 75 05 48 83 c4 58 c3 e8 37 4e ff ff 0f 1f 80 00 00 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24 [ 234.843108] RSP: 002b:00007ffd14d79b08 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 [ 234.843112] RAX: ffffffffffffffda RBX: 00007ffd14d79b60 RCX: 00007fa467480877 [ 234.843114] RDX: 0000000000000014 RSI: 00007ffd14d79b60 RDI: 0000000000000007 [ 234.843116] RBP: 0000000000000007 R08: 0000000000000000 R09: 00007ffd14d79ab0 [ 234.843119] R10: ffffffffffffffff R11: 0000000000000246 R12: 0000000000000014 [ 234.843121] R13: 0000000000000000 R14: 00007ffd14d79b60 R15: 0000000000000005 v5: - Use __GFP_KSWAPD_RECLAIM rather than __GFP_NOWAIT for clarity. (Daniel Vetter) v6: - Include an instance in execlists_capture_work(). - Rework the commit message due to patch reordering. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211108174547.979714-3-thomas.hellstrom@linux.intel.com
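The resulting allocation mode, sketched (the define mirrors the style of i915_gpu_error.c but is illustrative here):

    /* kswapd may be woken, but the capture path never enters direct
     * reclaim itself, so no fs_reclaim dependency under dma_fence_map */
    #define ALLOW_FAIL (__GFP_KSWAPD_RECLAIM | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)

    buf = kmalloc(size, ALLOW_FAIL);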
-
Thomas Hellström authored
The gpu coredump typically takes place in a dma_fence signalling critical path, and hence can't use GFP_KERNEL allocations, as that means we might hit deadlocks under memory pressure. However, changing to __GFP_KSWAPD_RECLAIM, which will be done in an upcoming patch, will instead mean a lower chance of the allocation succeeding, in particular for large contiguous allocations like the coredump page vector. Remove the page vector in favor of a linked list of single pages. Use the page lru list head as the list link, as the page owner is allowed to do that. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211108174547.979714-2-thomas.hellstrom@linux.intel.com
-
- 25 Nov, 2021 8 commits
-
-
Maarten Lankhorst authored
The signaled bit is already used for quick testing of whether a fence is signaled. On top of that, it's a terrible abuse of the dma-fence API, and in the common case where the object is already locked by the caller, the trylock will fail. If it were useful, the core dma-api would have exposed the same functionality. The fact that i915 has a dma_resv_utils.c file should be a warning that the functionality either belongs in core, or is not very useful at all. In this case the latter. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> [mlankhorst: Improve commit message] Acked-by: Christian König <christian.koenig@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211021103605.735002-3-maarten.lankhorst@linux.intel.com Reviewed-by: Matthew Auld <matthew.auld@intel.com> #irc
-
Tvrtko Ursulin authored
Maarten requested a backmerge due to his work depending on subtle semantic changes introduced by: 7e2e69ed ("drm/i915: Fix i915_request fence wait semantics") 2cbb8d4d ("drm/i915: use new iterator in i915_gem_object_wait_reservation") Both should probably have been merged to drm-intel-gt-next anyway. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Thomas Hellström authored
Update the copy function i915_gem_obj_copy_ttm() to be asynchronous for future users and update the only current user to sync the objects as needed after this function. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-7-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
Don't wait sync while migrating, but rather make the GPU blit await the dependencies and add a moving fence to the object. This also enables asynchronous VRAM management in that on eviction, rather than waiting for the moving fence to expire before freeing VRAM, it is freed immediately and the fence is stored with the VRAM manager and handed out to newly allocated objects to await before clears and swapins, or for kernel objects before setting up gpu vmas or mapping. To collect dependencies before migrating, add a set of utilities that coalesce these to a single dma_fence. What is still missing for fully asynchronous operation is asynchronous vma unbinding, which is still to be implemented. This commit substantially reduces execution time in the gem_lmem_swapping test. v2: - Make a couple of functions static. v4: - Fix some style issues (Matthew Auld) - Audit and add more checks for ghost objects (Matthew Auld) - Add more documentation for the i915_deps utility (Mattew Auld) - Simplify the i915_deps_sync() function v6: - Re-check for fence signaled before returning -EBUSY (Matthew Auld) - Use dma_resv_iter_is_exclusive() (Matthew Auld) - Await all dma-resv fences before a migration blit (Matthew Auld) Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-6-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
With async migration, the shrinker may end up wanting to release the pages of an object while the migration blit is still running, since the GT migration code doesn't set up VMAs and the shrinker is thus oblivious to the fact that the GPU is still using the pages. Add waiting for the GPU in the shrinker_release_pages() op, plus an argument indicating whether the shrinker expects it not to wait for the GPU. In the latter case the shrinker_release_pages() op will return -EBUSY if the object is not idle. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-5-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
There is an interesting refcounting loop: struct intel_memory_region has a struct ttm_resource_manager, ttm_resource_manager->move may hold a reference to i915_request, i915_request may hold a reference to intel_context, intel_context may hold a reference to drm_i915_gem_object, drm_i915_gem_object may hold a reference to intel_memory_region. Break this loop by dropping region reference counting. In addition, have regions with a manager moving fence make sure that all region objects are released before freeing the region. v6: - Fix a code comment. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-4-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
Move the i915_gem_obj_copy_ttm() function to i915_gem_ttm_move.h. This will help keep a number of functions static when introducing async moves. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-3-thomas.hellstrom@linux.intel.com
-
Maarten Lankhorst authored
For now, we will only allow async migration when TTM is used, so the paths we care about are related to TTM. The mmap path is handled by having the fence in ttm_bo->moving, when pinning, the binding only becomes available after the moving fence is signaled, and pinning a cpu map will only work after the moving fence signals. This should close all holes where userspace can read a buffer before it's fully migrated. v2: - Fix a couple of SPARSE warnings v3: - Fix a NULL pointer dereference v4: - Ditch the moving fence waiting for i915_vma_pin_iomap() and replace with a verification that the vma is already bound. (Matthew Auld) - Squash with a previous patch introducing moving fence waiting and accessing interfaces (Matthew Auld) - Rename to indicated that we also add support for sync waiting. v5: - Fix check for NULL and unreferencing i915_vma_verify_bind_complete() (Matthew Auld) - Fix compilation failure if !CONFIG_DRM_I915_DEBUG_GEM - Fix include ordering. (Matthew Auld) v7: - Fix yet another compilation failure with clang if !CONFIG_DRM_I915_DEBUG_GEM Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-2-thomas.hellstrom@linux.intel.com
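A sketch of the pin-time interlock; the wait helper is introduced by this series, but treat the exact signature as an assumption:

    /* wait, interruptibly, for any pending migration before pinning */
    err = i915_gem_object_wait_moving_fence(obj, true);
    if (err)
            return err;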
-
- 24 Nov, 2021 1 commit
-
-
Tvrtko Ursulin authored
This reverts commit 777226da ("drm/i915/dmabuf: fix broken build"). Approach taken in the patch was rejected by Linus and the upstream tree now already contains the required include directive via 304ac803 ("Merge tag 'drm-next-2021-11-12' of git://anongit.freedesktop.org/drm/drm"). Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: 777226da ("drm/i915/dmabuf: fix broken build") Cc: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Acked-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122135758.85444-1-tvrtko.ursulin@linux.intel.com [tursulin: fixup commit message sha format]
-
- 23 Nov, 2021 2 commits
-
-
Tejas Upadhyay authored
selftest --r live shows failure in suspend tests when RPM wakelock is not acquired during suspend. This change addresses the error below: <4> [154.177535] RPM wakelock ref not held during HW access <4> [154.177575] WARNING: CPU: 4 PID: 5772 at drivers/gpu/drm/i915/intel_runtime_pm.h:113 fwtable_write32+0x240/0x320 [i915] <4> [154.177974] Modules linked in: i915(+) vgem drm_shmem_helper fuse snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp mei_pxp x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_intel_dspcfg snd_hda_codec snd_hwdep igc snd_hda_core ttm mei_me ptp snd_pcm prime_numbers mei i2c_i801 pps_core i2c_smbus intel_lpss_pci btusb btrtl btbcm btintel bluetooth ecdh_generic ecc [last unloaded: i915] <4> [154.178143] CPU: 4 PID: 5772 Comm: i915_selftest Tainted: G U 5.15.0-rc6-CI-Patchwork_21432+ #1 <4> [154.178154] Hardware name: ASUS System Product Name/TUF GAMING Z590-PLUS WIFI, BIOS 0811 04/06/2021 <4> [154.178160] RIP: 0010:fwtable_write32+0x240/0x320 [i915] <4> [154.178604] Code: 15 7b e1 0f 0b e9 34 fe ff ff 80 3d a9 89 31 00 00 0f 85 31 fe ff ff 48 c7 c7 88 9e 4f a0 c6 05 95 89 31 00 01 e8 c0 15 7b e1 <0f> 0b e9 17 fe ff ff 8b 05 0f 83 58 e2 85 c0 0f 85 8d 00 00 00 48 <4> [154.178614] RSP: 0018:ffffc900016279f0 EFLAGS: 00010286 <4> [154.178626] RAX: 0000000000000000 RBX: ffff888204fe0ee0 RCX: 0000000000000001 <4> [154.178634] RDX: 0000000080000001 RSI: ffffffff823142b5 RDI: 00000000ffffffff <4> [154.178641] RBP: 00000000000320f0 R08: 0000000000000000 R09: c0000000ffffcd5a <4> [154.178647] R10: 00000000000f8c90 R11: ffffc90001627808 R12: 0000000000000000 <4> [154.178654] R13: 0000000040000000 R14: ffffffffa04d12e0 R15: 0000000000000000 <4> [154.178660] FS: 00007f7390aa4c00(0000) GS:ffff88844f000000(0000) knlGS:0000000000000000 <4> [154.178669] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [154.178675] CR2: 000055bc40595028 CR3: 0000000204474005 CR4: 0000000000770ee0 <4> [154.178682] PKRU: 55555554 <4> [154.178687] Call Trace: <4> [154.178706] intel_pxp_fini_hw+0x23/0x30 [i915] <4> [154.179284] intel_pxp_suspend+0x1f/0x30 [i915] <4> [154.179807] live_gt_resume+0x5b/0x90 [i915] Changes since V3: - Remove boolean in intel_pxp_runtime_prepare for non-pxp configs. Solves build error Changes since V2: - Open-code intel_pxp_runtime_suspend - Daniele - Remove boolean in intel_pxp_runtime_prepare - Daniele Changes since V1: - split the HW access parts in gt_suspend_late - Daniele - Remove default PXP configs Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Fixes: 0cfab4cb ("drm/i915/pxp: Enable PXP power management") Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211117060321.3729343-1-tejaskumarx.surendrakumar.upadhyay@intel.com
-
Umesh Nerlige Ramappa authored
Irrespective of the backend for request submissions, busyness for an engine with an active context is calculated using: busyness = total + (current_time - context_switch_in_time) In execlists mode of operation, the context switch events are handled by the CPU. Context switch in/out time and current_time are captured in the CPU time domain using ktime_get(). In GuC mode of submission, context switch events are handled by GuC and the times in the above formula are captured in the GT clock domain. This information is shared with the CPU through shared memory. This results in 2 caveats: 1) The time taken between the start of a batch and the time that the CPU is able to see the context_switch_in_time in shared memory is dependent on GuC and memory bandwidth constraints. 2) Determining current_time requires an MMIO read that can take anywhere between a few us and a couple ms. A reference CPU time is captured soon after reading the MMIO so that the caller can compare the cpu delta between 2 busyness samples. The issue here is that the CPU delta and the busyness delta can be skewed because of the time taken to read the register. These 2 factors affect the accuracy of the selftest - live_engine_busy_stats. For (1) the selftest waits until busyness stats are visible to the CPU. The effects of (2) are more prominent for the current busyness sample period of 100 us. Increase the busyness sample period from 100 us to 10 ms to overcome (2). v2: Fix checkpatch issues Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211115221640.30793-1-umesh.nerlige.ramappa@intel.com
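The formula, restated as a tiny helper (a sketch; under GuC submission, start and total arrive via shared memory in GT clock ticks):

    static u64 engine_busyness(u64 total, u64 start, u64 now, bool active)
    {
            /* busyness = total + (current_time - context_switch_in_time) */
            return active ? total + (now - start) : total;
    }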
-