- 10 Dec, 2019 3 commits
-
-
Matt Roper authored
Gen12 can improve bandwidth efficiency by pairing up memory requests with similar addresses. We need to program the BW_BUDDY1 and BW_BUDDY2 registers according to the memory configuration during display initialization to take advantage of this capability. The magic numbers we program here feel like something that could definitely change on future platforms, so let's use a table-based programming scheme to make this easy to extend in the future. v2: - Add separate table for Wa_1409767108. (Stan) - Reorder structure to reduce size by a word. Page mask can still be up to 28 bits (even though current values are small) so we should keep it as a u32, but just using a u8 for DRAM type instead of the actual enum type saves space. (Lucas, Ville) - Rename function to tgl_bw_buddy_init() to be more precise about what it does. (Lucas) Bspec: 49189 Bspec: 49218 Bspec: 52890 Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205224848.76712-1-matthew.d.roper@intel.com Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
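As a rough illustration of the table-driven approach described above (a minimal sketch only: the struct layout, table values, register names and the dram_info fields used here are assumptions, not the actual driver code):

    struct buddy_page_mask {
            u32 page_mask;  /* up to 28 bits, so keep the full u32 */
            u8 type;        /* DRAM type packed into a u8 to save space */
            u8 num_channels;
    };

    /* Placeholder entries; the real masks come from Bspec. */
    static const struct buddy_page_mask tgl_buddy_page_masks[] = {
            { .num_channels = 1, .type = INTEL_DRAM_DDR4,   .page_mask = 0xF },
            { .num_channels = 2, .type = INTEL_DRAM_LPDDR4, .page_mask = 0x1C },
            {}
    };

    static void tgl_bw_buddy_init(struct drm_i915_private *i915)
    {
            const struct buddy_page_mask *table = tgl_buddy_page_masks;
            int i;

            for (i = 0; table[i].page_mask != 0; i++)
                    if (table[i].num_channels == i915->dram_info.num_channels &&
                        table[i].type == i915->dram_info.type)
                            break;

            if (table[i].page_mask == 0) {
                    DRM_DEBUG_KMS("Unknown memory configuration, skipping BW_BUDDY setup\n");
                    return;
            }

            /* Register names are illustrative stand-ins for BW_BUDDY1/2. */
            intel_uncore_write(&i915->uncore, BW_BUDDY1_PAGE_MASK, table[i].page_mask);
            intel_uncore_write(&i915->uncore, BW_BUDDY2_PAGE_MASK, table[i].page_mask);
    }

With this shape, supporting a future platform ideally means adding a new table rather than new code.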
-
Colin Ian King authored
Currently the variable sum is not initialized and hence will cause an incorrect result in the summation values. Fix this by initializing sum to the first item in the summation. Addresses-Coverity: ("Uninitialized scalar variable") Fixes: 3c7a44bb ("drm/i915/selftests: Perform some basic cycle counting of MI ops") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191210143205.338308-1-colin.king@canonical.com
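The shape of the fix, sketched on a stand-alone helper rather than the selftest itself: seed the accumulator from the first sample instead of starting from an indeterminate value.

    static u32 average(const u32 *samples, unsigned int count)
    {
            u32 sum;
            unsigned int i;

            sum = samples[0];               /* previously left uninitialized */
            for (i = 1; i < count; i++)     /* assumes count >= 1 */
                    sum += samples[i];

            return sum / count;
    }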
-
Chris Wilson authored
In order to avoid confusing the HW, we must never submit an empty ring during lite-restore, that is, we should always advance the RING_TAIL before submitting to stay ahead of the RING_HEAD. Normally this is prevented by keeping a couple of spare NOPs in the request->wa_tail so that on resubmission we can advance the tail. This relies on the request only being resubmitted once, which is the normal condition as it is seen once for ELSP[1] and then later in ELSP[0]. On preemption, the requests are unwound and the tail reset back to the normal end point (as we know the request is incomplete and therefore its RING_HEAD is even earlier). However, if this w/a should fail we would try and resubmit the request with the RING_TAIL already set to the location of this request's wa_tail, potentially causing a GPU hang. We can spot when we do try and incorrectly resubmit without advancing the RING_TAIL and spare any embarrassment by forcing the context restore. In the case of preempt-to-busy, we leave the requests running on the HW while we unwind. As the ring is still live, we cannot rewind our rq->tail without forcing a reload so leave it set to rq->wa_tail and only force a reload if we resubmit after a lite-restore. (Normally, the forced reload will be a part of the preemption event.) Fixes: 22b7a426 ("drm/i915/execlists: Preempt-to-busy") Closes: https://gitlab.freedesktop.org/drm/intel/issues/673 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20191209023215.3519970-1-chris@chris-wilson.co.uk
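A hedged sketch of the safety net described above; the helper and its parameters are illustrative (the actual patch folds this into the execlists submission path, and the exact condition differs), but the idea is: if the tail we are about to program has not advanced past what the HW already saw, demand a full context image reload instead of a lite-restore, assuming the driver's CTX_DESC_FORCE_RESTORE flag.

    static u64 resubmit_desc(const struct intel_context *ce,
                             u32 hw_tail, u32 new_tail)
    {
            u64 desc = ce->lrc_desc;

            /*
             * An empty submission (RING_TAIL not ahead of RING_HEAD) during
             * lite-restore confuses the HW, so force the context to be
             * restored from its image rather than lite-restored.
             */
            if (new_tail == hw_tail)
                    desc |= CTX_DESC_FORCE_RESTORE;

            return desc;
    }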
-
- 09 Dec, 2019 25 commits
-
-
Daniele Ceraolo Spurio authored
We now only use 1 client without any plan to add more. The client is also only holding information about the WQ and the process desc, so we can just move those into the intel_guc structure and always use stage_id 0. v2: fix comment (John) v3: fix the comment for real, fix kerneldoc Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-4-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
Instead of relying on the workqueue, the upcoming reworked GuC submission flow will offer the host driver independent control over the execution status of each context submitted to GuC. As part of this, the doorbell usage model has been reworked, with each doorbell being paired to a single lrc and a doorbell ring representing new work available for that specific context. This mechanism, however, limits the number of contexts that can be registered with GuC to the number of doorbells, which is an undesired limitation. To avoid this limitation, we requested the GuC team to also provide an H2G that will allow the host to notify the GuC of work available for a specified lrc, so we can use that mechanism instead of relying on the doorbells. We can therefore drop the doorbell code we currently have, also given the fact that in the unlikely case we'd want to switch back to using doorbells we'd have to heavily rework it. The workqueue will still have a use in the new interface to pass special commands, so that code has been retained for now. With the doorbells gone and the GuC client becoming even simpler, the existing GuC selftests don't give us any meaningful coverage so we can remove them as well. Some selftests might come with the new code, but they will look different from what we have now so it doesn't seem worth it to keep the file around in the meantime. v2: fix comments and commit message (John) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-3-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
We already have a couple of use-cases in the code and another one will come in one of the later patches in the series. v2: use the new function for the CT object as well Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> #v1 Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-2-daniele.ceraolospurio@intel.com
-
Daniele Ceraolo Spurio authored
Remove unused enums and ctx_save_restore_disabled() function, leftover from the legacy preemption removal. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-1-daniele.ceraolospurio@intel.com
-
Ville Syrjälä authored
intel_hdcp_transcoder_config() is clobbering some globally visible state in .compute_config(). That is a big no-no as .compute_config() is supposed to have no visible side effects when either the commit fails or it's just a TEST_ONLY commit. Inline this stuff into intel_hdcp_enable() so that the state only gets modified when we actually commit the state to the hardware. Cc: Ramalingam C <ramalingam.c@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Uma Shankar <uma.shankar@intel.com> Fixes: 39e2df09 ("drm/i915/hdcp: update current transcoder into intel_hdcp") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191204180549.1267-2-ville.syrjala@linux.intel.com Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
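Roughly, the result looks like the following sketch (the signature matches what the fix describes; the body, the hdcp fields touched and the locking are assumptions, with the actual authentication sequence elided):

    int intel_hdcp_enable(struct intel_connector *connector,
                          enum transcoder cpu_transcoder, u8 content_type)
    {
            struct intel_hdcp *hdcp = &connector->hdcp;

            if (!hdcp->shim)
                    return -ENOENT;

            mutex_lock(&hdcp->mutex);

            /*
             * Latched here, at commit time, instead of in .compute_config():
             * a failed or TEST_ONLY commit must not leave visible side
             * effects behind.
             */
            hdcp->cpu_transcoder = cpu_transcoder;
            hdcp->content_type = content_type;

            /* ... actual HDCP authentication/enable sequence elided ... */

            mutex_unlock(&hdcp->mutex);
            return 0;
    }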
-
Ville Syrjälä authored
The code assumes we can omit the cfb allocation once fbc has been enabled once. That's nonsense. Let's try to reallocate it if we need to. The code is still a mess, but maybe this is enough to get fbc going in some cases where it initially underallocates the cfb and there's no full modeset to fix it up. Cc: Daniel Drake <drake@endlessm.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Jian-Hong Pan <jian-hong@endlessm.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-15-ville.syrjala@linux.intel.com Tested-by: Daniel Drake <drake@endlessm.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Now that we have the glk+ w/a for back to back fbc disable + plane update in place we can once more enable fbc on glk+ by default. Cc: Daniel Drake <drake@endlessm.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Jian-Hong Pan <jian-hong@endlessm.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-14-ville.syrjala@linux.intel.com Tested-by: Daniel Drake <drake@endlessm.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
On glk+ the hardware gets confused if we disable FBC while it's recompressing and we perform a plane update during the same frame. The result is that the top of the screen gets corrupted. We can avoid that by giving the hardware enough time to finish the FBC disable before we touch the plane registers. Ie. we need an extra vblank wait after FBC disable. v2: Don't do the vblank wait if we never activated FBC in hw Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191128150338.12490-1-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
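In pseudo-helper form, the workaround amounts to roughly the following (names and placement are illustrative; per the v2 note, the real code skips the wait when FBC was never activated in hardware):

    static void intel_fbc_post_disable_wait(struct drm_i915_private *i915,
                                            struct intel_crtc *crtc)
    {
            /*
             * glk+: let the FBC disable settle for one frame before any
             * plane registers are written, otherwise the top of the screen
             * may be corrupted.
             */
            if (IS_GEMINILAKE(i915) || INTEL_GEN(i915) >= 10)
                    intel_wait_for_vblank(i915, crtc->pipe);
    }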
-
Ville Syrjälä authored
The hardware automagically nukes the cfb on flip. We can use that whenever the plane/crtc configuration doesn't change too much. Let's hook that up. We'll need this for glk+ since we need to introduce an extra vblank wait after FBC disable. As we're currently disabling FBC around all plane updates we'd slow them down by an extra frame. Not a great user experience when your fps is always capped at vrefresh/2. With flip nuke we don't need the extra vblank wait. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-12-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
fbc.enabled == (fbc.crtc != NULL), so let's just nuke fbc.enabled. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-11-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
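In other words, the invariant being exploited is simply (illustrative helper, not the actual diff):

    static bool intel_fbc_is_enabled(const struct intel_fbc *fbc)
    {
            /* fbc->enabled was always equivalent to this, so the flag goes. */
            return fbc->crtc != NULL;
    }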
-
Ville Syrjälä authored
Replace the 'gen9 && !glk' with the slightly more obvious 'gen9_bc || bxt'. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-10-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
i965gm no longer needs the fence for scanout so we should do what we do for ctg+ and only configure a fence for FBC when we have one. In theory this should do nothing atm on account of intel_fbc_can_activate() requiring the fence, but since we do this for g4x+ let's do it for i965gm as well. We may want to relax the requirements at some point and allow FBC without a fence. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-9-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Rather than playing around with vma+flags let's just grab the fence id from within and stash that directly in the fbc cache/params. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-8-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Currently the code (ab)uses cache->vma to indicate the plane visibility. I want to nuke that so let's add a dedicated boolean for this. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-7-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Precompute the override cfb stride value so that we can check it when determining if flip nuke can be used or not. The hardware has 13 bits for this, so we can shrink the storage to u16 while at it. v2: Don't explode when crtc_state->enable_fbc lies to us Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-6-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
We don't want to use the FBC hardware render tracking so let's not enable it. To use the hw tracking properly we'd anyway need to integrate this into the command submission path as the register is context saved, and if rendering happens via the ppgtt we'd have to configure it with the ppgtt address instead of the ggtt address. Easier to use software tracking instead. Note that on pre-ilk we can't actually disable render tracking. However we can't rely on it because it requires DSPSURF to match the render target address, and since we play tricks with DSPSURF that may not be the case. Hence we shall rely on software render tracking on all platforms. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-5-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Move intel_crtc_active() next to its only remaining user (pre-g4x wm code). Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-4-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Not sure where the single-pipe-only restriction for fbc1 came from. Nothing I can see that would prevent this. v2: Nuke no_fbc_on_multiple_pipes() too Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-3-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
We're missing a workaround in the fbc code for all glk+ platforms which can cause corruption around the top of the screen. So enabling fbc by default is a bad idea. I'm not keen to backport the w/a so let's start by disabling fbc by default on all glk+. We'll lift the restriction once the w/a is in place. Cc: stable@vger.kernel.org Cc: Daniel Drake <drake@endlessm.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Jian-Hong Pan <jian-hong@endlessm.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-2-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Umesh Nerlige Ramappa authored
Gen12 supports saving/restoring render counters per context. Apply OAR configuration only for the context that is passed in to perf. v2: - Fix OACTXCONTROL value to only stop/resume counters. - Remove gen12_update_reg_state_unlocked as power state is already applied by the caller. v3: (Lionel) - Move register initialization into the array - Assume a valid oa_config in enable_metric_set Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Fixes: 00a7f0d7 ("drm/i915/tgl: Add perf support on TGL") Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191206194339.31356-2-umesh.nerlige.ramappa@intel.com
-
Umesh Nerlige Ramappa authored
SAMPLE_OA_REPORT enables sampling of OA reports from the OA buffer. Since reports from the OA buffer had system-wide visibility, collecting samples from the OA buffer was a privileged operation on previous platforms. Prior to TGL, it was also necessary to sample the OA buffer to normalize reports from MI REPORT PERF COUNT. TGL has a dedicated OAR unit to sample perf reports for a specific render context. This removes the necessity to sample the OA buffer. - If not sampling the OA buffer, allow non-privileged access. An earlier patch allows the non-privileged access: https://patchwork.freedesktop.org/patch/337716/?series=68582&rev=1 - Clear up the path for non-privileged access in this patch Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Fixes: 00a7f0d7 ("drm/i915/tgl: Add perf support on TGL") Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191206194339.31356-1-umesh.nerlige.ramappa@intel.com
-
Chris Wilson authored
If someone else acquires the i915_vma before we complete our wait and unbind it, we currently error out with -EBUSY. Use -EAGAIN instead so that if necessary the caller is prepared to try again. Closes: https://gitlab.freedesktop.org/drm/intel/issues/683 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191208161252.3015727-2-chris@chris-wilson.co.uk
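The gist, as a hedged sketch (helper name and structure are illustrative, not the actual diff):

    static int unbind_or_retry(struct i915_vma *vma)
    {
            /*
             * Someone re-acquired the vma while we were waiting: report a
             * transient condition so the caller knows a retry may succeed,
             * instead of the hard -EBUSY we used to return.
             */
            if (i915_vma_is_pinned(vma))
                    return -EAGAIN;

            return i915_vma_unbind(vma);
    }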
-
Chris Wilson authored
As i915_gem_object_unbind() waits on an rcu_barrier() to flush vm releases (and destruction of their bound vma), we have to be careful not to invoke that barrier from beneath the shrinker: <4> [430.222671] WARNING: possible circular locking dependency detected <4> [430.222673] 5.4.0-rc8-CI-CI_DRM_7508+ #1 Tainted: G U <4> [430.222675] ------------------------------------------------------ <4> [430.222677] gem_pwrite/2317 is trying to acquire lock: <4> [430.222678] ffffffff82248218 (rcu_state.barrier_mutex){+.+.}, at: rcu_barrier+0x23/0x190 <4> [430.222685] but task is already holding lock: <4> [430.222687] ffffffff82263a40 (fs_reclaim){+.+.}, at: fs_reclaim_acquire.part.117+0x0/0x30 <4> [430.222691] which lock already depends on the new lock. <4> [430.222693] the existing dependency chain (in reverse order) is: <4> [430.222695] -> #2 (fs_reclaim){+.+.}: <4> [430.222698] fs_reclaim_acquire.part.117+0x24/0x30 <4> [430.222702] kmem_cache_alloc_trace+0x2a/0x2c0 <4> [430.222705] intel_cpuc_prepare+0x37/0x1a0 <4> [430.222709] cpuhp_invoke_callback+0x9b/0x9d0 <4> [430.222712] _cpu_up+0xa2/0x140 <4> [430.222714] do_cpu_up+0x61/0xa0 <4> [430.222718] smp_init+0x57/0x96 <4> [430.222722] kernel_init_freeable+0xac/0x1c7 <4> [430.222725] kernel_init+0x5/0x100 <4> [430.222728] ret_from_fork+0x24/0x50 <4> [430.222729] -> #1 (cpu_hotplug_lock.rw_sem){++++}: <4> [430.222733] cpus_read_lock+0x34/0xd0 <4> [430.222734] rcu_barrier+0xaa/0x190 <4> [430.222736] kernel_init+0x21/0x100 <4> [430.222737] ret_from_fork+0x24/0x50 <4> [430.222739] -> #0 (rcu_state.barrier_mutex){+.+.}: <4> [430.222742] __lock_acquire+0x1328/0x15d0 <4> [430.222743] lock_acquire+0xa7/0x1c0 <4> [430.222746] __mutex_lock+0x9a/0x9d0 <4> [430.222747] rcu_barrier+0x23/0x190 <4> [430.222850] i915_gem_object_unbind+0x264/0x3d0 [i915] <4> [430.222882] i915_gem_shrink+0x297/0x5f0 [i915] <4> [430.222912] i915_gem_shrink_all+0x38/0x60 [i915] <4> [430.222934] i915_drop_caches_set+0x1f0/0x240 [i915] <4> [430.222938] simple_attr_write+0xb0/0xd0 <4> [430.222941] full_proxy_write+0x51/0x80 <4> [430.222943] vfs_write+0xb9/0x1d0 <4> [430.222944] ksys_write+0x9f/0xe0 <4> [430.222946] do_syscall_64+0x4f/0x210 <4> [430.222948] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4> [430.222950] other info that might help us debug this: <4> [430.222952] Chain exists of: rcu_state.barrier_mutex --> cpu_hotplug_lock.rw_sem --> fs_reclaim <4> [430.222955] Possible unsafe locking scenario: <4> [430.222957] CPU0 CPU1 <4> [430.222958] ---- ---- <4> [430.222960] lock(fs_reclaim); <4> [430.222961] lock(cpu_hotplug_lock.rw_sem); <4> [430.222963] lock(fs_reclaim); <4> [430.222964] lock(rcu_state.barrier_mutex); <4> [430.222966] *** DEADLOCK *** <4> [430.222968] 3 locks held by gem_pwrite/2317: <4> [430.222969] #0: ffff88849e2d9408 (sb_writers#14){.+.+}, at: vfs_write+0x1a4/0x1d0 <4> [430.222973] #1: ffff888496976db0 (&attr->mutex){+.+.}, at: simple_attr_write+0x36/0xd0 <4> [430.222976] #2: ffffffff82263a40 (fs_reclaim){+.+.}, at: fs_reclaim_acquire.part.117+0x0/0x30 <4> [430.222980] stack backtrace: <4> [430.222982] CPU: 1 PID: 2317 Comm: gem_pwrite Tainted: G U 5.4.0-rc8-CI-CI_DRM_7508+ #1 <4> [430.222985] Hardware name: Intel Corporation Tiger Lake Client Platform/TigerLake U DDR4 SODIMM RVP, BIOS TGLSFWI1.R00.2321.A08.1909162051 09/16/2019 <4> [430.222989] Call Trace: <4> [430.222992] dump_stack+0x71/0x9b <4> [430.222995] check_noncircular+0x19b/0x1c0 <4> [430.222998] ? __lock_acquire+0x1328/0x15d0 <4> [430.222999] __lock_acquire+0x1328/0x15d0 <4> [430.223001] ? 
mark_held_locks+0x49/0x70 <4> [430.223003] lock_acquire+0xa7/0x1c0 <4> [430.223005] ? rcu_barrier+0x23/0x190 <4> [430.223008] __mutex_lock+0x9a/0x9d0 <4> [430.223009] ? rcu_barrier+0x23/0x190 <4> [430.223011] ? rcu_barrier+0x23/0x190 <4> [430.223013] ? find_held_lock+0x2d/0x90 <4> [430.223045] ? i915_gem_object_unbind+0x24a/0x3d0 [i915] <4> [430.223048] ? rcu_barrier+0x23/0x190 <4> [430.223049] rcu_barrier+0x23/0x190 <4> [430.223081] i915_gem_object_unbind+0x264/0x3d0 [i915] <4> [430.223119] i915_gem_shrink+0x297/0x5f0 [i915] Closes: https://gitlab.freedesktop.org/drm/intel/issues/743Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191208161252.3015727-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Include all the number fields for describing the GT, as well as the current boolean flags, primarily for inclusion in error states. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Andi Shyti <andi.shyti@intel.com> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191207182937.2583002-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Since we didn't check and insist that args.pad must be zero for MMAP_GTT historically, we cannot insert a check now as old userspace may be feeding in garbage. As such the lack of check is enshrined into the ABI, so add a comment to remind us we cannot add the check later. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191207222644.2830129-1-chris@chris-wilson.co.uk
-
- 08 Dec, 2019 1 commit
-
-
Chris Wilson authored
"Have you tried switching it off and on again?" Set the size of the mm to 0 to disable all PD cachelines, before enabling the whole mm again. Let's see if that tricks the TLB into reloading. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191208143648.2986669-1-chris@chris-wilson.co.uk
-
- 07 Dec, 2019 3 commits
-
-
Chris Wilson authored
Our asserts allow for the PDEs to be allocated concurrently, but we did not account for the aliasing-ppgtt to be preallocated on top. Testcase: igt/gem_ppgtt #bsw Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191207221453.2802627-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
In the extreme case, we may wish to wait on an rcu-barrier to reap stale vm to purge the last of the object bindings. However, we are not allowed to use rcu_barrier() beneath the dma_resv (i.e. object) lock and do not take lightly the prospect of unlocking a mutex deep in the bowels of the routine. i915_gem_object_unbind() itself does not need the object lock, and it turns out the callers do not need to the unbind as part of a locked sequence around set-cache-level, so rearrange the code to avoid taking the object lock in the callers. <4> [186.816311] ====================================================== <4> [186.816313] WARNING: possible circular locking dependency detected <4> [186.816316] 5.4.0-rc8-CI-CI_DRM_7486+ #1 Tainted: G U <4> [186.816318] ------------------------------------------------------ <4> [186.816320] perf_pmu/1321 is trying to acquire lock: <4> [186.816322] ffff88849487c4d8 (&mm->mmap_sem#2){++++}, at: __might_fault+0x39/0x90 <4> [186.816331] but task is already holding lock: <4> [186.816333] ffffe8ffffa05008 (&cpuctx_mutex){+.+.}, at: perf_event_ctx_lock_nested+0xa9/0x1b0 <4> [186.816339] which lock already depends on the new lock. <4> [186.816341] the existing dependency chain (in reverse order) is: <4> [186.816343] -> #6 (&cpuctx_mutex){+.+.}: <4> [186.816349] __mutex_lock+0x9a/0x9d0 <4> [186.816352] perf_event_init_cpu+0xa4/0x140 <4> [186.816357] perf_event_init+0x19d/0x1cd <4> [186.816362] start_kernel+0x372/0x4f4 <4> [186.816365] secondary_startup_64+0xa4/0xb0 <4> [186.816381] -> #5 (pmus_lock){+.+.}: <4> [186.816385] __mutex_lock+0x9a/0x9d0 <4> [186.816387] perf_event_init_cpu+0x6b/0x140 <4> [186.816404] cpuhp_invoke_callback+0x9b/0x9d0 <4> [186.816406] _cpu_up+0xa2/0x140 <4> [186.816409] do_cpu_up+0x61/0xa0 <4> [186.816411] smp_init+0x57/0x96 <4> [186.816413] kernel_init_freeable+0xac/0x1c7 <4> [186.816416] kernel_init+0x5/0x100 <4> [186.816419] ret_from_fork+0x24/0x50 <4> [186.816421] -> #4 (cpu_hotplug_lock.rw_sem){++++}: <4> [186.816424] cpus_read_lock+0x34/0xd0 <4> [186.816427] rcu_barrier+0xaa/0x190 <4> [186.816429] kernel_init+0x21/0x100 <4> [186.816431] ret_from_fork+0x24/0x50 <4> [186.816433] -> #3 (rcu_state.barrier_mutex){+.+.}: <4> [186.816436] __mutex_lock+0x9a/0x9d0 <4> [186.816438] rcu_barrier+0x23/0x190 <4> [186.816502] i915_gem_object_unbind+0x3a6/0x400 [i915] <4> [186.816537] i915_gem_object_set_cache_level+0x32/0x90 [i915] <4> [186.816571] i915_gem_object_pin_to_display_plane+0x5d/0x160 [i915] <4> [186.816612] intel_pin_and_fence_fb_obj+0x9e/0x200 [i915] <4> [186.816679] intel_plane_pin_fb+0x3f/0xd0 [i915] <4> [186.816717] intel_prepare_plane_fb+0x130/0x520 [i915] <4> [186.816722] drm_atomic_helper_prepare_planes+0x85/0x110 <4> [186.816761] intel_atomic_commit+0xc6/0x350 [i915] <4> [186.816764] drm_atomic_helper_update_plane+0xed/0x110 <4> [186.816768] setplane_internal+0x97/0x190 <4> [186.816770] drm_mode_setplane+0xcd/0x190 <4> [186.816773] drm_ioctl_kernel+0xa7/0xf0 <4> [186.816775] drm_ioctl+0x2e1/0x390 <4> [186.816778] do_vfs_ioctl+0xa0/0x6f0 <4> [186.816780] ksys_ioctl+0x35/0x60 <4> [186.816782] __x64_sys_ioctl+0x11/0x20 <4> [186.816785] do_syscall_64+0x4f/0x210 <4> [186.816787] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4> [186.816789] -> #2 (reservation_ww_class_mutex){+.+.}: <4> [186.816793] __ww_mutex_lock.constprop.15+0xc3/0x1090 <4> [186.816795] ww_mutex_lock+0x39/0x70 <4> [186.816798] dma_resv_lockdep+0x10e/0x1f7 <4> [186.816800] do_one_initcall+0x58/0x2ff <4> [186.816802] kernel_init_freeable+0x137/0x1c7 <4> [186.816804] 
kernel_init+0x5/0x100 <4> [186.816806] ret_from_fork+0x24/0x50 <4> [186.816808] -> #1 (reservation_ww_class_acquire){+.+.}: <4> [186.816811] dma_resv_lockdep+0xec/0x1f7 <4> [186.816813] do_one_initcall+0x58/0x2ff <4> [186.816815] kernel_init_freeable+0x137/0x1c7 <4> [186.816817] kernel_init+0x5/0x100 <4> [186.816819] ret_from_fork+0x24/0x50 <4> [186.816820] -> #0 (&mm->mmap_sem#2){++++}: <4> [186.816824] __lock_acquire+0x1328/0x15d0 <4> [186.816826] lock_acquire+0xa7/0x1c0 <4> [186.816828] __might_fault+0x63/0x90 <4> [186.816831] _copy_to_user+0x1e/0x80 <4> [186.816834] perf_read+0x200/0x2b0 <4> [186.816836] vfs_read+0x96/0x160 <4> [186.816838] ksys_read+0x9f/0xe0 <4> [186.816839] do_syscall_64+0x4f/0x210 <4> [186.816841] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4> [186.816843] other info that might help us debug this: <4> [186.816846] Chain exists of: &mm->mmap_sem#2 --> pmus_lock --> &cpuctx_mutex <4> [186.816849] Possible unsafe locking scenario: <4> [186.816851] CPU0 CPU1 <4> [186.816853] ---- ---- <4> [186.816854] lock(&cpuctx_mutex); <4> [186.816856] lock(pmus_lock); <4> [186.816858] lock(&cpuctx_mutex); <4> [186.816860] lock(&mm->mmap_sem#2); <4> [186.816861] *** DEADLOCK *** Closes: https://gitlab.freedesktop.org/drm/intel/issues/728Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191206105527.1130413-5-chris@chris-wilson.co.uk
-
Matthew Brost authored
The preferred way to access the uncore is through the GT structure. Update the GuC function, flush_ggtt_writes, to use this path. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: John Harrison <john.c.harrison@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191207010033.24667-1-John.C.Harrison@Intel.com
-
- 06 Dec, 2019 8 commits
-
-
Chris Wilson authored
All pinning must be done prior to i915_request_create, to avoid timeline->mutex inversions. Here we slightly abuse the context_barrier_task stages to utilise the 'skip' decision as an opportunity to acquire the pin on the new ppgtt. Consider it s/skip/prepare/. At the moment, we only have on user of context_barrier_task, so it might be worth breaking it down for the specific task of set-vm and refactor it later if we find a second purpose. <4> [402.377487] WARNING: possible circular locking dependency detected <4> [402.377493] 5.4.0-rc8-CI-CI_DRM_7491+ #1 Tainted: G U <4> [402.377497] ------------------------------------------------------ <4> [402.377502] gem_exec_parall/2506 is trying to acquire lock: <4> [402.377507] ffff888403cdac70 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915] <4> [402.377593] but task is already holding lock: <4> [402.377597] ffff88835efad550 (&ppgtt->pin_mutex){+.+.}, at: gen6_ppgtt_pin+0x4d/0x110 [i915] <4> [402.377660] which lock already depends on the new lock. <4> [402.377664] the existing dependency chain (in reverse order) is: <4> [402.377668] -> #1 (&ppgtt->pin_mutex){+.+.}: <4> [402.377674] __mutex_lock+0x9a/0x9d0 <4> [402.377713] gen6_ppgtt_pin+0x4d/0x110 [i915] <4> [402.377756] emit_ppgtt_update+0x1dc/0x370 [i915] <4> [402.377801] context_barrier_task+0x176/0x310 [i915] <4> [402.377844] ctx_setparam+0x400/0xb10 [i915] <4> [402.377886] i915_gem_context_setparam_ioctl+0xc8/0x160 [i915] <4> [402.377891] drm_ioctl_kernel+0xa7/0xf0 <4> [402.377895] drm_ioctl+0x2e1/0x390 <4> [402.377899] do_vfs_ioctl+0xa0/0x6f0 <4> [402.377903] ksys_ioctl+0x35/0x60 <4> [402.377906] __x64_sys_ioctl+0x11/0x20 <4> [402.377910] do_syscall_64+0x4f/0x210 <4> [402.377914] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4> [402.377917] -> #0 (&kernel#2){+.+.}: <4> [402.377923] __lock_acquire+0x1328/0x15d0 <4> [402.377926] lock_acquire+0xa7/0x1c0 <4> [402.377930] __mutex_lock+0x9a/0x9d0 <4> [402.377977] i915_request_create+0x16/0x1c0 [i915] <4> [402.378013] intel_engine_flush_barriers+0x4c/0x100 [i915] <4> [402.378062] i915_ggtt_pin+0x7d/0x130 [i915] <4> [402.378108] gen6_ppgtt_pin+0x9c/0x110 [i915] <4> [402.378148] ring_context_pin+0x2e/0xc0 [i915] <4> [402.378183] __intel_context_do_pin+0x6b/0x190 [i915] <4> [402.378226] i915_gem_do_execbuffer+0x180c/0x26b0 [i915] <4> [402.378268] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915] <4> [402.378272] drm_ioctl_kernel+0xa7/0xf0 <4> [402.378275] drm_ioctl+0x2e1/0x390 <4> [402.378279] do_vfs_ioctl+0xa0/0x6f0 <4> [402.378282] ksys_ioctl+0x35/0x60 <4> [402.378286] __x64_sys_ioctl+0x11/0x20 <4> [402.378289] do_syscall_64+0x4f/0x210 <4> [402.378292] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4> [402.378295] other info that might help us debug this: <4> [402.378299] Possible unsafe locking scenario: <4> [402.378302] CPU0 CPU1 <4> [402.378305] ---- ---- <4> [402.378307] lock(&ppgtt->pin_mutex); <4> [402.378310] lock(&kernel#2); <4> [402.378314] lock(&ppgtt->pin_mutex); <4> [402.378317] lock(&kernel#2); <4> [402.378320] Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191206105527.1130413-4-chris@chris-wilson.co.uk
-
Andi Shyti authored
Get rid of the last remaining I915_WRITEs and replace them with intel_uncore_write(). Signed-off-by: Andi Shyti <andi.shyti@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191206212417.20178-1-andi@etezian.org
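The conversion itself is mechanical; the register below is picked purely for illustration:

    static void example_write(struct drm_i915_private *dev_priv)
    {
            /* before: implicit dev_priv-based macro */
            I915_WRITE(GEN6_RC_CONTROL, 0);

            /* after: explicit uncore handle */
            intel_uncore_write(&dev_priv->uncore, GEN6_RC_CONTROL, 0);
    }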
-
José Roberto de Souza authored
Commit 9c722e17 ("drm/i915: Disable pipes in reverse order") reversed the order in which pipes get disabled because of the TGL master/slave relationship between transcoders in MST mode. But as stated in a comment in skl_commit_modeset_enables(), the enabling order is not always ascending, possibly causing a previously selected slave transcoder to be enabled before its master, so another approach will be needed to select a transcoder as master in MST mode. It will be similar to the approach taken in port sync. But instead of implementing something like intel_trans_port_sync_modeset_disables() for MST, let's simplify it and iterate over all pipes twice, the first pass disabling any slave and the second disabling everything else. The MST bits will be added in another patch. v2: Not using crtc->active as it is deprecated v3: Removing the is_trans_port_sync_mode() check, just checking for is_trans_port_sync_master() is enough v4: Adding and using is_trans_port_sync_slave(), otherwise non-port-sync pipes would be disabled in the first loop, which is not wrong but is not what the patch description promises Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Manasi Navare <manasi.d.navare@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v2) Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205210350.96795-3-jose.souza@intel.com
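A hedged sketch of the two-pass disable loop (the iterator is the usual i915 atomic helper; is_trans_port_sync_slave() is named by the commit itself, while the disable helper's signature and the modeset filtering are simplified assumptions):

    static void intel_commit_modeset_disables(struct intel_atomic_state *state)
    {
            struct intel_crtc_state *old_crtc_state;
            struct intel_crtc *crtc;
            int i;

            /* Pass 1: transcoders acting as slaves go down first. */
            for_each_old_intel_crtc_in_state(state, crtc, old_crtc_state, i) {
                    if (!old_crtc_state->hw.active ||
                        !is_trans_port_sync_slave(old_crtc_state))
                            continue;
                    intel_old_crtc_state_disables(state, old_crtc_state, crtc);
            }

            /* Pass 2: everything else that needs to be disabled. */
            for_each_old_intel_crtc_in_state(state, crtc, old_crtc_state, i) {
                    if (!old_crtc_state->hw.active ||
                        is_trans_port_sync_slave(old_crtc_state))
                            continue;
                    intel_old_crtc_state_disables(state, old_crtc_state, crtc);
            }
    }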
-
José Roberto de Souza authored
For TGL the step to turn off the transcoder clock was moved to after the complete shutdown of DDI. Only the MST slave transcoders should disable the clock before that. v2: - Adding last_mst_stream to intel_mst_post_disable_dp, making the code easier to read and similar to first_mst_stream in intel_mst_pre_enable_dp() (Ville's idea) - Calling intel_ddi_disable_pipe_clock() for GEN12+ right after intel_disable_ddi_buf() as stated in BSpec (Ville) BSpec: 49190 Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205210350.96795-2-jose.souza@intel.com
-
José Roberto de Souza authored
It should not care about DDB allocations of pipes going through a full modeset, as at this point those pipes are disabled. The comment in the code also points to that, but that is not what the code was actually doing. Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191205210350.96795-1-jose.souza@intel.com
-
José Roberto de Souza authored
Add the recently added EHL/JSL PCI IDs. BSpec: 29153 Cc: James Ausmus <james.ausmus@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191203211308.109703-1-jose.souza@intel.com
-
Chris Wilson authored
If we see an already signaled dma-fence that we want to await on, we skip adding to the i915_sw_fence. However, we should pay attention to whether there was an error on that fence and if so propagate it for our future request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191206160428.1503343-3-chris@chris-wilson.co.uk
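A hedged sketch of the idea, which applies equally to the following commit (the helper is illustrative; the real change lives in the request's await paths):

    static int await_external_fence(struct i915_request *rq, struct dma_fence *fence)
    {
            int ret;

            if (dma_fence_is_signaled(fence)) {
                    /*
                     * Nothing to wait for, but do not lose an error carried
                     * by the fence: dma_fence_get_status() returns a
                     * negative errno if the fence signaled with an error.
                     */
                    ret = dma_fence_get_status(fence);
                    return ret < 0 ? ret : 0;
            }

            ret = i915_sw_fence_await_dma_fence(&rq->submit, fence, 0, GFP_KERNEL);
            return ret < 0 ? ret : 0;
    }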
-
Chris Wilson authored
If we see an already signaled fence that we want to await on, we skip adding to the i915_sw_fence. However, we should pay attention to whether there was an error on that fence and if so propagate it for our future request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191206160428.1503343-2-chris@chris-wilson.co.uk
-