- 07 Feb, 2019 10 commits
-
-
Ville Syrjälä authored
The LUTs are single buffered so we should program them after the double buffered pipe updates have been latched by the hardware. We'll also fix up the IPS vs. split gamma w/a to do the IPS disable like everyone else. Note that this is currently dead code as we don't use the split gamma mode on HSW, but that will be fixed up shortly. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-7-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Split the color management hooks along the single vs. double buffered registers line. Of the currently programmed registers, GAMMA_MODE and the ilk+ pipe CSC are double buffered, while the LUTs and the CHV CGM block are single buffered. The double buffered registers will be programmed during the normal pipe update with evasion, and also during pipe enable so that the settings will already be correct when the pipe starts up before the planes are enabled. The single buffered registers are currently programmed before the vblank evade, which is totally wrong, but we'll correct that later. v2: Add some docs to explain the two vfuncs (Matt,Uma) Rebase Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-6-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
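For orientation, the split roughly corresponds to two driver vfuncs. A minimal sketch of the shape, with illustrative names (the exact i915 struct and signatures may differ):

```c
/* Sketch only: names are illustrative, not the exact i915 ones. */
struct intel_color_funcs_sketch {
	/*
	 * Program double buffered registers (GAMMA_MODE, ilk+ pipe CSC).
	 * Called from the vblank evaded pipe update, and from pipe enable
	 * so the settings are correct before the planes are enabled.
	 */
	void (*color_commit)(const struct intel_crtc_state *crtc_state);

	/*
	 * Program single buffered registers (the LUTs, the CHV CGM block).
	 * For now still called before the vblank evade, which is wrong
	 * and will be corrected later.
	 */
	void (*load_luts)(const struct intel_crtc_state *crtc_state);
};
```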
-
Ville Syrjälä authored
For bdw+ let's move the GAMMA_MODE write for the legacy LUT mode into the .load_luts() function directly, rather than relying on haswell_load_luts(). We'll be getting rid of haswell_load_luts() entirely soon, and it's anyway cleaner to have the GAMMA_MODE write in a single place. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-5-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
Pass the crtc state etc. as const to the color management commit functions. And while at it polish some of the local variables. v2: Rebase Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-4-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
We shouldn't be computing gamma mode during the commit phase. Move it to the check phase. v2: Reword comments a bit (Matt) Rebase Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-3-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
-
Ville Syrjälä authored
On g4x+ the pipe gamma enable bit for the primary plane affects the pipe bottom color as well. The same goes for the pipe csc enable bit on ilk+. Thus we must configure those bits correctly even when the primary plane is disabled. To make that feasible, let's split those settings from the plane_ctl() function into a separate function that we can call from the ->disable_plane() hook as well. For consistency we'll do that on all the plane types. While that has no real benefits at this time, it'll become useful when we start to control the pipe gamma/csc enable bits dynamically when we overhaul the color management code. On pre-g4x there doesn't appear to be any way to gamma correct the pipe bottom color, but sticking to the same pattern doesn't hurt. And it'll still help us to do crtc state readout correctly for the pipe gamma enable bit for the color management overhaul. An alternative approach would be to still precompute these bits into plane_state->ctl, but that would require that we run through the plane check even when the plane isn't logically enabled on any crtc. Currently that condition causes us to short circuit the entire thing and not call ->check_plane(). There would also be some chicken and egg problems with ->check_plane() vs. the crtc color state check that would require splitting certain things into multiple steps. So all in all this seems like the easier route. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-2-ville.syrjala@linux.intel.com Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
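As a rough illustration, the pipe-level bits end up in a helper that takes only crtc state, so both the update and disable paths can share it. A hedged sketch with a hypothetical name (at this point in the series the bits are still set unconditionally):

```c
/*
 * Sketch: compute the pipe-level PLANE_CTL bits from crtc state alone,
 * callable from both update_plane and ->disable_plane(). Hypothetical
 * name; the dynamic gamma/csc decisions come with the later overhaul.
 */
static u32 skl_plane_ctl_crtc_sketch(const struct intel_crtc_state *crtc_state)
{
	u32 plane_ctl = 0;

	/*
	 * On g4x+/ilk+ these bits affect the pipe bottom color, so they
	 * must be correct even while the plane itself is disabled.
	 */
	plane_ctl |= PLANE_CTL_PIPE_GAMMA_ENABLE;
	plane_ctl |= PLANE_CTL_PIPE_CSC_ENABLE;

	return plane_ctl;
}
```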
-
Ville Syrjälä authored
update_wm_post is meant for pre-g4x only. Don't ever set it on g4x+. The only effect of a bogus update_wm_post on g4x+ could be that we clear the legacy_cursor_update flag in intel_atomic_commit(). Since legacy_cursor_update is only set for legacy cursor updates (as the name suggests) and we only set update_wm_post for a modeset the two cases should never occur at the same time. But let's be consistent in setting update_wm_post so we don't end up confusing so many people. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190206185433.8116-1-ville.syrjala@linux.intel.com Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Chris Wilson authored
Apply backpressure to hogs that emit requests faster than the GPU can process them by waiting for their ring to be less than half-full before proceeding with taking the struct_mutex. This is a gross hack to apply throttling backpressure, the long term goal is to remove the struct_mutex contention so that each client naturally waits, preferably in an asynchronous, nonblocking fashion (pipelined operations for the win), for their own resources and never blocks another client within the driver at least. (Realtime priority goals would extend to ensuring that resource contention favours high priority clients as well.) This patch only limits excessive request production and does not attempt to throttle clients that block waiting for eviction (either global GTT or system memory) or any other global resources, see above for the long term goal. No microbenchmarks are harmed (to the best of my knowledge). Testcase: igt/gem_exec_schedule/pi-ringfull-* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190207071829.5574-1-chris@chris-wilson.co.uk
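In outline, the throttle is a wait at the top of execbuf, before struct_mutex is taken. A hedged sketch (how the halfway request is looked up is elided; treat this as a simplification, not the exact i915 code):

```c
/*
 * Sketch: @rq is the oldest request still occupying the first half of
 * the context's ring (NULL if the ring is already less than half
 * full). Waiting for it to retire applies backpressure to the hog
 * without holding any driver-wide locks.
 */
static int eb_wait_for_ring_sketch(struct i915_request *rq)
{
	long timeout;

	if (!rq)
		return 0;

	timeout = i915_request_wait(rq, I915_WAIT_INTERRUPTIBLE,
				    MAX_SCHEDULE_TIMEOUT);
	return timeout < 0 ? timeout : 0;
}
```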
-
Joonas Lahtinen authored
Add err goto label and use it when VMA can't be established or changes underneath. v2: - Dropping Fixes: as it's indeed impossible to race an object to the error address. (Chris) v3: - Use IS_ERR_VALUE (Chris) Reported-by: Adam Zabrocki <adamza@microsoft.com> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Adam Zabrocki <adamza@microsoft.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v2 Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190207085454.10598-2-joonas.lahtinen@linux.intel.com
-
Joonas Lahtinen authored
Make sure the underlying VMA in the process address space is the same as it was during vm_mmap to avoid applying WC to wrong VMA. A more long-term solution would be to have vm_mmap_locked variant in linux/mmap.h for when caller wants to hold mmap_sem for an extended duration. v2: - Refactor the compare function Fixes: 1816f923 ("drm/i915: Support creation of unbound wc user mappings for objects") Reported-by: Adam Zabrocki <adamza@microsoft.com> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <stable@vger.kernel.org> # v4.0+ Cc: Akash Goel <akash.goel@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Adam Zabrocki <adamza@microsoft.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1 Link: https://patchwork.freedesktop.org/patch/msgid/20190207085454.10598-1-joonas.lahtinen@linux.intel.com
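The heart of the fix is a revalidation step after re-taking mmap_sem. A minimal sketch, assuming a helper along these lines (the real i915 one may differ in detail):

```c
/*
 * Sketch: verify the VMA at @addr is still the one vm_mmap() created
 * for this object before applying the WC page protection to it.
 */
static bool vma_matches_sketch(const struct vm_area_struct *vma,
			       unsigned long addr, unsigned long size,
			       struct file *filp)
{
	if (vma->vm_file != filp)
		return false;

	return vma->vm_start == addr &&
	       (vma->vm_end - vma->vm_start) == size;
}
```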
-
- 06 Feb, 2019 6 commits
-
-
Lyude Paul authored
This hotplug also isn't needed: drm_dp_mst_topology_mgr_set_mst() already sends a hotplug on its own from drm_dp_destroy_connector_work() after destroying connectors in the MST topology. Signed-off-by: Lyude Paul <lyude@redhat.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190129191001.442-4-lyude@redhat.com
-
Lyude Paul authored
We have a bad habit of calling drm_fb_helper_hotplug_event() far more than we actually need to. MST appears to be one of these cases, where we call drm_fb_helper_hotplug_event() if we fail to resume a connected MST topology in intel_dp_mst_resume(). We don't actually need to do this at all though, since hotplug events are already sent from drm_dp_destroy_connector_work() every time connectors are unregistered from userspace's PoV. Additionally, extra calls to drm_fb_helper_hotplug_event() also just mean more of a chance of doing a connector probe somewhere we shouldn't. So, don't send any hotplug events during resume if the MST topology fails to come up. Just rely on the DP MST helpers to send them for us. Signed-off-by: Lyude Paul <lyude@redhat.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190129191001.442-3-lyude@redhat.com
-
Lyude Paul authored
When resuming, we check whether or not any previously connected MST topologies are still present and if so, attempt to resume them. If this fails, we disable said MST topologies and fire off a hotplug event so that userspace knows to reprobe. However, sending a hotplug event involves calling drm_fb_helper_hotplug_event(), which in turn results in fbcon doing a connector reprobe in the caller's thread - something we can't do at the point in which i915 calls drm_dp_mst_topology_mgr_resume() since hotplugging hasn't been fully initialized yet. This currently causes some rather subtle but fatal issues. For example, on my T480s the laptop dock connected to it usually disappears during a suspend cycle, and comes back up a short while after the system has been resumed. This guarantees that on pretty much every suspend and resume cycle, drm_dp_mst_topology_mgr_set_mst(mgr, false) will be called and, in turn, a connector hotplug will occur. Now it's Rube Goldberg time: when the connector hotplug occurs, i915 reprobes /all/ of the connectors, including eDP. However, eDP probing requires that we power on the panel VDD, which in turn grabs a wakeref to the appropriate power domain on the GPU (on my T480s, this is the PORT_DDI_A_IO domain). This is where things start breaking: since this all happens before intel_power_domains_enable() is called, we end up leaking the wakeref that was acquired and never releasing it later. Come next suspend/resume cycle, this causes us to fail to shut down the GPU properly, which causes it not to resume properly and die a horrible complicated death. (As a note: this only happens when there's both an eDP panel and MST topology connected which is removed mid-suspend. One or the other seems to always be OK.) We could try to fix the VDD wakeref leak, but this doesn't seem like it's worth it at all since we aren't able to handle hotplug detection while resuming anyway. So, let's go with a more robust solution inspired by nouveau: block fbdev from handling hotplug events until we resume fbdev. This allows us to still send sysfs hotplug events to be handled later by user space while we're resuming, while also preventing us from actually processing any hotplug events we receive until it's safe. This fixes the wakeref leak observed on the T480s and as such, also fixes suspend/resume with MST topologies connected on this machine. Changes since v2: * Don't call drm_fb_helper_hotplug_event() under lock, do it after lock (Chris Wilson) * Don't call drm_fb_helper_hotplug_event() in intel_fbdev_output_poll_changed() under lock (Chris Wilson) * Always set ifbdev->hpd_waiting (Chris Wilson) Signed-off-by: Lyude Paul <lyude@redhat.com> Fixes: 0e32b39c ("drm/i915: add DP 1.2 MST support (v0.7)") Cc: Todd Previte <tprevite@gmail.com> Cc: Dave Airlie <airlied@redhat.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: intel-gfx@lists.freedesktop.org Cc: <stable@vger.kernel.org> # v3.17+ Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190129191001.442-2-lyude@redhat.com
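The deferral pattern looks roughly like the sketch below, using the hpd_lock/hpd_suspended/hpd_waiting fields mentioned in the changelog (details may not match i915 exactly):

```c
/*
 * Sketch: while fbdev is suspended, only record that a hotplug
 * arrived; the resume path can then replay it via
 * drm_fb_helper_hotplug_event() once processing is safe again.
 */
static void intel_fbdev_output_poll_changed_sketch(struct intel_fbdev *ifbdev)
{
	bool send_hpd;

	mutex_lock(&ifbdev->hpd_lock);
	send_hpd = !ifbdev->hpd_suspended;	/* defer while suspended */
	ifbdev->hpd_waiting = true;		/* remembered for resume */
	mutex_unlock(&ifbdev->hpd_lock);

	if (send_hpd)
		drm_fb_helper_hotplug_event(&ifbdev->helper);
}
```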
-
Ville Syrjälä authored
The unused bits on PLANE_WM & co. are hardwired to zero. So no need to worry about reading the extra bit on pre-icl. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205205056.30081-2-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
Ville Syrjälä authored
On icl the plane watermark blocks field is 11 bits. Bump our define to match so that readout won't ignore the extra bit. We can safely do this for older platforms too since the unused bits are hardwired to zero. Cc: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205205056.30081-1-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
Tvrtko Ursulin authored
Enable count array is supposed to have one counter for each possible engine sampler. As such, array sizing and bounds checking is not correct and would blow up the asserts if more samplers were added. No ill-effect in the current code base but let's fix it for correctness. At the same time tidy the assert for readability and robustness. v2: * One check per assert. (Chris Wilson) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: b46a33e2 ("drm/i915/pmu: Expose a PMU interface for perf queries") Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190205130353.21105-1-tvrtko.ursulin@linux.intel.com
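In outline the fix sizes the array by the sampler count and splits the bounds checks into one assert each. A hedged sketch with illustrative names:

```c
/* Sketch: one counter per possible engine sampler. */
struct engine_pmu_sketch {
	u8 enable_count[I915_ENGINE_SAMPLE_COUNT];
};

static void engine_event_enable_sketch(struct engine_pmu_sketch *pmu,
				       unsigned int sample)
{
	/* One condition per assert, so a failure pinpoints the cause. */
	GEM_BUG_ON(sample >= ARRAY_SIZE(pmu->enable_count));
	GEM_BUG_ON(pmu->enable_count[sample] == U8_MAX); /* would overflow */

	pmu->enable_count[sample]++;
}
```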
-
- 05 Feb, 2019 18 commits
-
-
Ville Syrjälä authored
Disabling WM1+ on ICL causes tons of underruns with linear/X-tiled framebuffers. We can avoid this by flipping on a chicken bit affecting the way the hw fills the FIFO. This may not be the final solution but should hopefully avoid some underruns in the meantime. v2: Apparently PIPE_CHICKEN is icl+ only Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204202232.27153-1-ville.syrjala@linux.intel.com Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
-
Ville Syrjälä authored
Configure PIPE_CHICKEN during intel_update_pipe_config() to make sure we have our chickens in a row with fastboot too. v2: Apparently PIPE_CHICKEN is icl+ only Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204202214.27051-1-ville.syrjala@linux.intel.com Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
-
Ville Syrjälä authored
We need to configure PIPE_CHICKEN during fastboot as well. Let's extract it to a helper. v2: Apparently PIPE_CHICKEN is icl+ only Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204202139.26884-1-ville.syrjala@linux.intel.com Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
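A hedged sketch of the extracted helper as a plain read-modify-write; the chicken bit shown is the FIFO-fill workaround bit from the commit above, and the details are illustrative:

```c
/*
 * Sketch: one helper shared by the full modeset path and the fastboot
 * (intel_update_pipe_config) path.
 */
static void icl_pipe_chicken_sketch(struct intel_crtc *crtc)
{
	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
	enum pipe pipe = crtc->pipe;
	u32 tmp;

	tmp = I915_READ(PIPE_CHICKEN(pipe));
	/* Adjust how the hw fills the FIFO to avoid WM1+ underruns. */
	tmp |= PER_PIXEL_ALPHA_BYPASS_EN;
	I915_WRITE(PIPE_CHICKEN(pipe), tmp);
}
```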
-
Ville Syrjälä authored
When adding the early latency==0 check back I neglected to realize that we no longer have a way to return a failure from the wm computation like we had in the past (since we now calculate wms before ddb allocations). Also plane_en being false doesn't actually indicate that the level is invalid, as it will also happen when the plane is not enabled. skl_allocate_pipe_ddb() starts scanning from the maximum watermark level and it stops as soon as it finds a level that is deemed viable. The assumption being that if level n+1 is valid then level n is valid as well. Thus if we now disable any watermark level by zeroing its latency the code will think that level to be actually valid and won't confirm whether the actually enabled lower watermark level(s) actually fit into the allotted ddb space. This results in hilarious watermark values that exceed the ddb allocation of the plane. The way we must now indicate a failure is to assign an unreasonably big value to min_ddb_alloc, which will then make skl_allocate_pipe_ddb() reject the entire level. v2: Also do the same for the lines>31 case (Matt) v3: Make 'blocks' u32 (Matt) Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205155053.10081-1-ville.syrjala@linux.intel.com
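In code the failure indication is just a sentinel value. A hedged sketch (the exact constant and field width are illustrative):

```c
/*
 * Sketch: with watermarks computed before ddb allocation there is no
 * error return anymore, so poison the level's minimum ddb requirement
 * and let skl_allocate_pipe_ddb() reject the level for us.
 */
static void skl_invalidate_wm_level_sketch(struct skl_wm_level *result)
{
	result->plane_en = false;
	/* An allocation this large can never be satisfied. */
	result->min_ddb_alloc = U16_MAX;
}
```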
-
Chris Wilson authored
clear_intel_crtc_state() uses the stack for saving a temporary copy of certain bits of the inherited crtc_state before clearing the unwanted bits. This pushes it over the stack limit for my little 32b Pineview, so move the temporary allocation to the heap instead. As we now use a zeroed struct, we can copy the whole extended state back to both preserve what bits need to be preserved and zero the rest. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205092759.16018-1-chris@chris-wilson.co.uk
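A minimal sketch of the heap-based approach (which fields get preserved is illustrative):

```c
/*
 * Sketch: build a zeroed heap copy holding only the inherited bits,
 * then copy the whole struct back, preserving and clearing in one go.
 */
static int clear_crtc_state_sketch(struct intel_crtc_state *crtc_state)
{
	struct intel_crtc_state *saved_state;

	saved_state = kzalloc(sizeof(*saved_state), GFP_KERNEL);
	if (!saved_state)
		return -ENOMEM;

	/* Illustrative: carry over only what must survive the clear. */
	saved_state->base = crtc_state->base;

	memcpy(crtc_state, saved_state, sizeof(*crtc_state));
	kfree(saved_state);
	return 0;
}
```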
-
Ville Syrjälä authored
We generally omit register polling from the i915_reg_rw tracepoint. Understandable since polling could generate a lot of noise in the trace. The downside is that the trace is incomplete. As a compromise let's trace the final register value observed while polling. That should be generally sufficient to observe what the code should be doing next. I suppose in some cases it might make sense to also trace the initial register value, and maybe the number of times we polled. But that would require a separate tracepoint so let's leave it for the future. The other users of _NOTRACE() are i915_pmu and i2c bitbanging, which I decided to leave alone. Next we should do something to claw back the tracepoints for planes and whatnot which were switched to _FW() a while back. I guess just new macros for raw_rw+trace. The question is what to call it? Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204211644.21967-1-ville.syrjala@linux.intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Greg Kroah-Hartman authored
When calling debugfs functions, they can now return error values if something went wrong. If that happens, return a NULL as a *dentry to the relay core instead of passing it an illegal pointer. The relay core should be able to handle an illegal pointer, but add this check to be safe. Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: intel-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190131131507.GA19807@kroah.com
-
Rodrigo Vivi authored
First of all GMCH can be considered a feature by itself since it is a chip present in some platforms that connects the IA processor to memory and other components in the PC. Also with the introduction of the display block at device info, we got a redundant definition: .display.has_gmch_display = 1, So, let's clean up things a bit and use the standardized way of has_feature on the display side. No functional change and no manual interaction to generate this patch. It is only: sed -si -e 's/has_gmch_display/has_gmch/g' \ -e 's/HAS_GMCH_DISPLAY/HAS_GMCH/g' drivers/gpu/drm/i915/*{c,h} Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204222538.15842-1-rodrigo.vivi@intel.com
-
Chris Wilson authored
Looking forward, we need to break the struct_mutex dependency on i915_gem_active. In the meantime, external use of i915_gem_active is quite beguiling, little do new users suspect that it implies a barrier as each request it tracks must be ordered wrt the previous one. As one of many, it can be used to track activity across multiple timelines, a shared fence, which fits our unordered request submission much better. We need to steer external users away from the singular, exclusive fence imposed by i915_gem_active to i915_active instead. As part of that process, we move i915_gem_active out of i915_request.c into i915_active.c to start separating the two concepts, and rename it to i915_active_request (both to tie it to the concept of tracking just one request, and to give it a longer, less appealing name). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205130005.2807-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
Wrap the active tracking for GPU references in a slabcache for faster allocations, and hopefully better fragmentation reduction. v3: Nothing device specific left, it's just a slabcache that we can make global. v4: Include i915_active.h and don't put the initfunc under DEBUG_GEM Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205130005.2807-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
As soon as we detect that the active tracker is idle and we prepare to call the retire callback, release the storage for our tree of per-timeline nodes. We expect these to be infrequently used and quick to allocate, so there is little benefit in keeping the tree cached and we would prefer to return the pages back to the system in a timely fashion. This also means that when we finalize the struct as a whole, we know that, since the activity tracker must be idle, the tree has already been released. Indeed we can reduce i915_active_fini() just to the assertions that there is nothing to do. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205130005.2807-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
We currently track GPU memory usage inside VMA, such that we never release memory used by the GPU until after it has finished accessing it. However, we may want to track other resources aside from VMA, or we may want to split a VMA into multiple independent regions and track each separately. For this purpose, generalise our request tracking (akin to struct reservation_object) so that we can embed it into other objects. v2: Tweak error handling during selftest setup. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205130005.2807-2-chris@chris-wilson.co.uk
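Conceptually the generalised tracker is a small embeddable struct. A hedged sketch of its shape, not the exact i915 definition:

```c
/*
 * Sketch: an activity tracker that any object can embed, with one
 * node per timeline and a callback fired when the last request
 * completes.
 */
struct i915_active_sketch {
	struct rb_root tree;		 /* per-timeline request trackers */
	struct i915_active_request last; /* most recently used slot */
	unsigned int count;		 /* outstanding active nodes */

	void (*retire)(struct i915_active_sketch *ref); /* idle callback */
};
```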
-
Chris Wilson authored
Build a chain using 2 contexts (A, B) then request a preemption such that a later A request runs before the spinner in B. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205123835.25331-1-chris@chris-wilson.co.uk
-
Tvrtko Ursulin authored
Exercise the context image reconfiguration logic for idle and busy contexts, with the resets thrown into the mix as well. Free from the uAPI restrictions this test runs on all Gen9+ platforms with slice power gating. v2: * Rename some helpers for clarity. * Include subtest names in error logs. * Remove unnecessary function export. v3: * Rebase for RUNTIME_INFO. v4: * Fix incomplete unexport from v2. (Chris Wilson) v5: * Rebased for runtime pm api changes. v6: * Rebased for i915_reset.c. v7: * Tidy checkpatch warnings. * Consolidate error checking and logging a bit. * Skip idle test phase if something failed before it. v8: (Chris Wilson) * Fix i915_request_wait error handling. * No need to PIN_HIGH the VMA. * Remove pointless GEM_BUG_ON before pointer dereference. v9: * Avoid rq leak if rpcs query fails. (Chris) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> # v6 Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-5-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
We want to allow userspace to reconfigure the subslice configuration on a per context basis. This is required for the functional requirement of shutting down non-VME enabled sub-slices on Gen11 parts. To do so, we expose a context parameter to allow adjustment of the RPCS register stored within the context image (and currently not accessible via LRI). If the context is adjusted before first use or whilst idle, the adjustment is for "free"; otherwise if the context is active we queue a request to do so (using the kernel context), following all other activity by that context, which is also marked as a barrier for all following submission against the same context. Since the overhead of device re-configuration during context switching can be significant, especially in multi-context workloads, we limit this new uAPI to only support the Gen11 VME use case. In this use case either the device is fully enabled, or exactly one slice and half of the subslices are enabled. Example usage: struct drm_i915_gem_context_param_sseu sseu = { }; struct drm_i915_gem_context_param arg = { .param = I915_CONTEXT_PARAM_SSEU, .ctx_id = gem_context_create(fd), .size = sizeof(sseu), .value = to_user_pointer(&sseu) }; /* Query device defaults. */ gem_context_get_param(fd, &arg); /* Set VME configuration on a 1x6x8 part. */ sseu.slice_mask = 0x1; sseu.subslice_mask = 0xe0; gem_context_set_param(fd, &arg); v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() (Lionel) v3: Add ability to program this per engine (Chris) v4: Move most get_sseu() into i915_gem_context.c (Lionel) v5: Validate sseu configuration against the device's capabilities (Lionel) v6: Change context powergating settings through MI_SDM on kernel context (Chris) v7: Synchronize the requests following a powergating setting change using a global dependency (Chris) Iterate timelines through dev_priv.gt.active_rings (Tvrtko) Disable RPCS configuration setting for non capable users (Lionel/Tvrtko) v8: s/union intel_sseu/struct intel_sseu/ (Lionel) s/dev_priv/i915/ (Tvrtko) Change uapi class/instance fields to u16 (Tvrtko) Bump mask fields to 64bits (Lionel) Don't return EPERM when dynamic sseu is disabled (Tvrtko) v9: Import context image into kernel context's ppgtt only when reconfiguring powergated slice/subslices (Chris) Use aliasing ppgtt when needed (Michel) Tvrtko Ursulin: v10: * Update for upstream changes. * Request submit needs a RPM reference. * Reject on !FULL_PPGTT for simplicity. * Pull out get/set param to helpers for readability and less indent. * Use i915_request_await_dma_fence in add_global_barrier to skip waits on the same timeline and avoid GEM_BUG_ON. * No need to explicitly assign a NULL pointer to engine in legacy mode. * No need to move gen8_make_rpcs up. * Factored out global barrier as prep patch. * Allow to only CAP_SYS_ADMIN if !Gen11. v11: * Remove engine vfunc in favour of local helper. (Chris Wilson) * Stop retiring requests before updates since it is not needed (Chris Wilson) * Implement direct CPU update path for idle contexts. (Chris Wilson) * Left side dependency needs only be on the same context timeline. (Chris Wilson) * It is sufficient to order the timeline. (Chris Wilson) * Reject !RCS configuration attempts with -ENODEV for now. v12: * Rebase for make_rpcs. v13: * Centralize SSEU normalization to make_rpcs. * Type width checking (uAPI <-> implementation). * Gen11 restrictions uAPI checks. * Gen11 subslice count differences handling. Chris Wilson: * args->size handling fixes. * Update context image from GGTT. * Postpone context image update to pinning. * Use i915_gem_active_raw instead of last_request_on_engine. v14: * Add activity tracker on intel_context to fix the lifetime issues and simplify the code. (Chris Wilson) v15: * Fix context pin leak if no space in ring by simplifying the context pinning sequence. v16: * Rebase for context get/set param locking changes. * Just -ENODEV on !Gen11. (Joonas) v17: * Fix one Gen11 subslice enablement rule. * Handle error from i915_sw_fence_await_sw_fence_gfp. (Chris Wilson) v18: * Update commit message. (Joonas) * Restrict uAPI to VME use case. (Joonas) v19: * Rebase. v20: * Rebase for ce->active_tracker. v21: * Rebase for IS_GEN changes. v22: * Reserve uAPI for flags straight away. (Chris Wilson) v23: * Rebase for RUNTIME_INFO. v24: * Added some headline docs for the uapi usage. (Joonas/Chris) v25: * Renamed class/instance to engine_class/engine_instance to avoid clash with C++ keyword. (Tony Ye) v26: * Rebased for runtime pm api changes. v27: * Rebased for intel_context_init. * Wrap commit msg to 75. v28: (Chris Wilson) * Use i915_gem_ggtt. * Use i915_request_await_dma_fence to show a better example. v29: * i915_timeline_set_barrier can now fail. (Chris Wilson) v30: * Capture some acks. v31: * Drop the WARN_ON from use controllable paths. (Chris Wilson) * Use overflows_type for all checks. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107634 Issue: https://github.com/intel/media-driver/issues/267 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Zhipeng Gong <zhipeng.gong@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tony Ye <tony.ye@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Acked-by: Timo Aaltonen <timo.aaltonen@canonical.com> Acked-by: Takashi Iwai <tiwai@suse.de> Acked-by: Stéphane Marchesin <marcheu@chromium.org> Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-4-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
Timeline barrier allows serialization between different timelines. After calling i915_timeline_set_barrier with a request, all following submissions on this timeline will be set up as depending on this request, or barrier. Once the barrier has been completed it automatically gets cleared and things continue as normal. This facility will be used by the upcoming context SSEU code. v2: * Assert barrier has been retired on timeline_fini. (Chris Wilson) * Fix mock_timeline. v3: * Improved comment language. (Chris Wilson) v4: * Maintain ordering with previous barriers set on the timeline. v5: * Rebase. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-3-tvrtko.ursulin@linux.intel.com
-
Lionel Landwerlin authored
If some of the contexts submitting workloads to the GPU have been configured to shutdown slices/subslices, we might lose the NOA configurations written in the NOA muxes. One possible solution to this problem is to reprogram the NOA muxes when we switch to a new context. We initially tried this in the workaround batchbuffer but some concerns were raised about the cost of reprogramming at every context switch. This solution is also not without consequences from the userspace point of view. Reprogramming of the muxes can only happen once the powergating configuration has changed (which happens after context switch). This means for a window of time during the recording, counters recorded by the OA unit might be invalid. This requires userspace dealing with OA reports to discard the invalid values. Minimizing the reprogramming could be implemented by tracking the last programmed configuration somewhere in GGTT and using MI_PREDICATE to discard some of the programming commands, but the command streamer would still have to parse all the MI_LRI instructions in the workaround batchbuffer. Another solution, which this change implements, is to simply disregard the user requested configuration for the period of time when i915/perf is active. On most platforms there are no issues with this apart from a performance penalty for some media workloads that benefit from running on a partially powergated GPU. We already prevent RC6 from affecting the programming so it doesn't sound completely unreasonable to hold on powergating for the same reason. On Icelake however there would be a functional problem if the slices not containing the VME block were left enabled with a running media workload which explicitly disabled them. To avoid a GPU hang in this case, on Icelake we lock the enablement to only slices which contain VME blocks. Downside is that it means degraded GPU performance when OA is active but there is no known alternative solution for this. v2: Leave RPCS programming in intel_lrc.c (Lionel) v3: Update for s/union intel_sseu/struct intel_sseu/ (Lionel) More to_intel_context() (Tvrtko) s/dev_priv/i915/ (Tvrtko) Tvrtko Ursulin: v4: * Rebase for make_rpcs changes. v5: * Apply OA restriction from make_rpcs directly. v6: * Rebase for context image setup changes. v7: * Move stream assignment before metric enable. v8-9: * Rebase. v10: * Squashed with ICL support patch. Bspec: 21140 Co-developed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> # v9 Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-2-tvrtko.ursulin@linux.intel.com
-
Lionel Landwerlin authored
We want to expose the ability to reconfigure the slices, subslice and eu per context and per engine. To facilitate that, store the current configuration on the context for each engine, which is initially set to the device default upon creation. v2: record sseu configuration per context & engine (Chris) v3: introduce the i915_gem_context_sseu to store powergating programming, sseu_dev_info has grown quite a bit (Lionel) v4: rename i915_gem_sseu into intel_sseu (Chris) use to_intel_context() (Chris) v5: More to_intel_context() (Tvrtko) Switch intel_sseu from union to struct (Tvrtko) Move context default sseu in existing loop (Chris) v6: s/intel_sseu_from_device_sseu/intel_device_default_sseu/ (Tvrtko) Tvrtko Ursulin: v7: * Pass intel_sseu by pointer instead of value to make_rpcs. * Rebase for make_rpcs changes. v8: * Rebase for RPCS edit on pin. v9: * Rebase for context image setup changes. v10: * Rename dev_priv to i915. (Chris Wilson) v11: * Rebase. v12: * Rebase for IS_GEN changes. v13: * Rebase for RUNTIME_INFO. v14: * Rebase for intel_context_init. v15: * Rebase for drm-tip changes. v16: * Moved struct intel_sseu definition to i915_gem_context.h. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-1-tvrtko.ursulin@linux.intel.com
-
- 04 Feb, 2019 2 commits
-
-
Chris Wilson authored
Limit the NEWCLIENT boost to give its small priority boost only to fresh clients that have no dependencies. The idea for using NEWCLIENT boosting, commit b16c7651 ("drm/i915: Priority boost for new clients"), is that short-lived streams are often interactive and require lower latency -- and that by executing those ahead of the long running hogs, the short-lived clients do little to interfere with the system throughput by virtue of their short-lived nature. However, we were only considering the client's own timeline for determining whether or not it was a fresh stream. This allowed compositors to wake up before their vblank and bump all of their client streams. However, in testing with media-bench this results in chaining all cooperating contexts together, preventing us from being able to reorder contexts to reduce bubbles (pipeline stalls), overall increasing latency, and reducing system throughput. The exact opposite of our intent. The compromise of applying the NEWCLIENT boost to strictly fresh clients (that do not wait upon anything else) should maintain the "real-time response under load" characteristics of FQ_CODEL, without locking together the long chains of dependencies across the system. References: b16c7651 ("drm/i915: Priority boost for new clients") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204150101.30759-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
When first enabling preemption, we hesitated to make it a free-for-all where every higher priority client would force a preempt-to-idle cycle and take over from all lower priority clients. We hesitated because we were uncertain just how well preemption would work in practice, whether the preemption latency itself would detract from the latency gains for higher priority tasks and whether it would work at all. Since introducing preemption, we have been enabling it for more common tasks, even giving normal clients a small preemptive boost when they first start (to aid fairness and improve interactivity). Now let's take one step further and give permission for all normal (priority:0) clients to preempt any idle (priority:<0) task so that users running long compute jobs do not overly impact other jobs (i.e. their desktop) and the system remains responsive under such idle loads. References: f6322edd ("drm/i915/preemption: Allow preemption between submission ports") References: b16c7651 ("drm/i915: Priority boost for new clients") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: "Bloomfield, Jon" <jon.bloomfield@intel.com> Cc: "Stead, Alan" <alan.stead@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190204084116.3013-1-chris@chris-wilson.co.uk
-
- 02 Feb, 2019 1 commit
-
-
Rodrigo Vivi authored
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
- 31 Jan, 2019 1 commit
-
-
Rodrigo Vivi authored
While cross checking PCI IDs from the Intel Media SDK and the kernel, Dmitry noticed this gap. So we checked the spec and this new ID had been recently added. v2: Adding new H_GT1 entry to i915_pci.c (Jose) Reported-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com> Cc: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190201235049.27206-1-rodrigo.vivi@intel.com
-
- 01 Feb, 2019 2 commits
-
-
Rodrigo Vivi authored
Merge tag 'gvt-next-2019-02-01' of https://github.com/intel/gvt-linux - new VFIO EDID region support (Henry) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> From: Zhenyu Wang <zhenyuw@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190201061523.GE5588@zhen-hp.sh.intel.com
-
Hans de Goede authored
We really want to have fastboot enabled by default to avoid an ugly modeset during boot. Currently we are enabling fastboot by default on gen9+ (Skylake and newer). The intention is to enable it on older generations after it has seen more testing on gen9+. VLV and CHV devices are still being sold in stores today, as such it is desirable to also enable fastboot by default on these now. I've extensively tested fastboot=1 support on over 50 different Bay- and Cherry-Trail devices. Testing DSI and eDP panels as well as HDMI output (and even DP over Type-C on one device). All 50 devices work fine with fastboot=1. On 2 devices their DSI panel turns black as soon as the i915 driver loads when fastboot=0, so having fastboot enabled is required for these 2 to work properly (for lack of a better fix). Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190129142237.8684-1-hdegoede@redhat.com
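The resulting default boils down to a small predicate, assuming the fastboot modparam uses -1 for "per-chip default". A hedged sketch:

```c
/*
 * Sketch: an explicit i915.fastboot= setting always wins; otherwise
 * gen9+ keeps its existing default and VLV/CHV are newly enabled.
 */
static bool fastboot_enabled_sketch(struct drm_i915_private *dev_priv)
{
	if (i915_modparams.fastboot != -1)
		return i915_modparams.fastboot;

	/* Enabled by default on Skylake and newer. */
	if (INTEL_GEN(dev_priv) >= 9)
		return true;

	/* Newly enabled by default on VLV and CHV. */
	return IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv);
}
```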
-