- 23 May, 2016 9 commits
-
-
Chris Wilson authored
Currently, intel_opregion_init is called during the driver registration phase and intel_opregion_fini from the unregistration phase. Rename the functions so that this is clear from their names. The phases tell us what we expect the existing hw state to be, e.g. whether interrupts are still enabled etc. It should be noted that the opregion init/fini routines are asymmetric and this is carried across into their new names. Indeed, their new names make it even clearer that perhaps all is not well in the opregion suspend/resume sequence (as well as in the module unload). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1464012490-30961-2-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
-
Chris Wilson authored
Prefer passing struct drm_i915_private to internal interfaces as this saves us having to dance between drm_device and our native struct. The savings here are small (only 70 bytes of unrequired dancing), but progressive! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1464012490-30961-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
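To make the "dance" concrete, here is a minimal sketch of the before/after shape of such an internal interface; to_i915() is the driver's real conversion helper, but the function name below is illustrative, not the actual patch:

    /* Before: the interface takes drm_device, so it (and often its callers)
     * must convert back to the native type first. */
    static void some_opregion_helper(struct drm_device *dev)
    {
            struct drm_i915_private *dev_priv = to_i915(dev);  /* the "dance" */
            /* ... use dev_priv ... */
    }

    /* After: pass the native struct directly and drop the conversion. */
    static void some_opregion_helper(struct drm_i915_private *dev_priv)
    {
            /* ... use dev_priv ... */
    }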
-
Ville Syrjälä authored
We've never actually enabled or unmasked the GSE interrupt on BDW+, even though the interrupt handler was always prepared for it. Let's enable it and see what happens. Credit to Mark Kettenis who fixed this in the OpenBSD fork of the driver. He reports that it fixed the "ACPI _BCM/_BCQ-based brightness mechanism on a MacBookPro12,1 and a 3rd gen Lenovo X1 Carbon" for them. Mark says: "FWIW, this *is* needed if you want ACPI-based backlight control to work. On Linux you probably don't notice, since "hardware" backlight control is preferred over "firmware" or "platform" backlight control. It would help me if this did land in the Linux tree though, as it will make future imports of the i915 driver into OpenBSD easier." So even though we don't really need this, let's put it in to help Mark with future porting efforts. Should be harmless to have it enabled in any case. Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> References: http://lists.freedesktop.org/archives/intel-gfx/2015-December/081799.html Reported-by: Mark Kettenis <mark.kettenis@xs4all.nl> Cc: Mark Kettenis <mark.kettenis@xs4all.nl> Cc: Jani Nikula <jani.nikula@linux.intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463649283-28698-1-git-send-email-ville.syrjala@linux.intel.com Acked-by: Jani Nikula <jani.nikula@intel.com>
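The change itself boils down to unmasking and enabling the GSE bit in the gen8+ DE_MISC interrupt setup. A simplified sketch of the idea (the real code goes through the driver's interrupt-init helpers; the raw register writes below are an approximation, not the literal diff):

    /* gen8_de_irq_postinstall(): let the GSE (opregion/ASLE) interrupt
     * through so the existing handler finally gets to see it. */
    u32 de_misc_masked = GEN8_DE_MISC_GSE;

    I915_WRITE(GEN8_DE_MISC_IMR, ~de_misc_masked);   /* unmask */
    I915_WRITE(GEN8_DE_MISC_IER, de_misc_masked);    /* enable */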
-
Dave Gordon authored
Mostly little optimisations and future-proofing against code breakage. For instance, if the driver is correctly following the submission protocol, the "out of space" condition is impossible, so the previous runtime WARN_ON() is promoted to a GEM_BUG_ON() for a more dramatic effect in development and less impact in end-user systems. Similarly we can make alignment checking more stringent and replace other WARN_ON() conditions that don't relate to the runtime hardware state with either BUILD_BUG_ON() for compile-time-detectable issues, or GEM_BUG_ON() for logical "can't happen" errors. With those changes, we can convert it to void, as suggested by Chris Wilson, and update the calling code appropriately. v2: Note that we're now putting the request seqno in the "fence_id" field of each GuC-work-item, in case it turns up somewhere useful (e.g. in a GuC log) [Tvrtko Ursulin]. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
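The three assertion flavours mentioned above serve different audiences. A schematic illustration (GEM_BUG_ON() is the i915-local macro that compiles to nothing unless the GEM debug config is enabled; the variable names are invented for the example):

    /* Compile-time invariant: caught by the compiler, costs nothing at runtime. */
    BUILD_BUG_ON(sizeof(struct guc_wq_item) % sizeof(u32));

    /* Logical "can't happen" if the check_space/submit protocol is followed:
     * a no-op in production builds, a loud BUG() in development builds. */
    GEM_BUG_ON(freespace < wqi_size);

    /* WARN_ON() remains only for conditions that depend on runtime hw state,
     * where a backtrace on an end-user system is actually informative. */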
-
Dave Gordon authored
Rather than wait to see whether more space becomes available in the GuC submission workqueue, we can just return -EAGAIN and let the caller try again in a little while. This gets rid of an uninterruptible sleep in the polling code :) We'll also add a counter to the GuC client statistics, to see how often we find the WQ full. v2: Flag the likely() code path (Tvrtko Ursulin). v4: Add/update comments about failure counters (Tvrtko Ursulin). Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
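A sketch of the resulting check, with paraphrased variable and counter names (the identifiers below are assumptions about the driver's naming, not a quote of the patch):

    /* Inside the work-queue space check: if the GuC WQ is full, don't sleep
     * and poll -- report -EAGAIN so the caller can retry later. */
    if (unlikely(freespace < wqi_size)) {
            client->no_wq_space++;          /* stats: how often the WQ was full */
            return -EAGAIN;
    }
    return 0;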
-
Dave Gordon authored
The knowledge of how to derive the relevant client from the request should be localised within i915_guc_submission.c; the LRC code shouldn't have to know about the internal details of the GuC submission process. And all the information the GuC code needs should be encapsulated in (or reachable from) the request. v2: GEM_BUG_ON() for bad GuC client (Tvrtko Ursulin). Add/update kerneldoc explaining the check_space/submit protocol. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
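In practice that means callers hand over just the request and the GuC code digs out its own client, roughly as below (field names follow the driver of that era but should be read as assumptions, and the body is elided):

    static int i915_guc_submit(struct drm_i915_gem_request *rq)
    {
            struct intel_guc *guc = &rq->i915->guc;
            struct i915_guc_client *client = guc->execbuf_client;

            GEM_BUG_ON(!client);    /* v2: scream early about a bad GuC client */

            /* ... write a work item for rq into the client's work queue ... */
            return 0;
    }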
-
Dave Gordon authored
Split the function of "enable_guc_submission" into two separate options. The new one ("enable_guc_loading") controls only the *fetching and loading* of the GuC firmware image. The existing one is redefined to control only the *use* of the GuC for batch submission once the firmware is loaded. In addition, the degree of control has been refined from a simple bool to an integer key, allowing several options:
  -1 (default)   whatever the platform default is
   0 DISABLE     don't load/use the GuC
   1 BEST EFFORT try to load/use the GuC, fall back if not available
   2 REQUIRE     must load/use the GuC, else leave the GPU wedged
The new platform default (as coded here) will be to attempt to load the GuC iff the device has a GuC that requires firmware, but not yet to use it for submission. A later patch will change the default to enable it where appropriate. v4: Changed some error-message levels, mostly ERROR->INFO, per review comments by Tvrtko Ursulin. v5: Dropped one more error message, disabled GuC submission on hypothetical firmware-free devices [Tvrtko Ursulin]. v6: Logging tidy by Tvrtko Ursulin: * Do not log falling back to execlists when wedging the GPU. * Do not log fw load errors when load was disabled by user. * Pass down some error code from fw load for log message to make more sense. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> (v5) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Tested-by: Fiedorowicz, Lukasz <lukasz.fiedorowicz@intel.com> Reviewed-by: Nick Hoath <nicholas.hoath@intel.com> (v6)
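For reference, the shape of such a parameter and of the platform-default fallback it implies might look like the sketch below; enable_guc_loading/enable_guc_submission are the real parameter names, but the helper and its exact logic are illustrative, not the literal patch:

    /* i915_params.c style declaration:
     * -1 = platform default, 0 = never, 1 = best effort, 2 = required. */
    module_param_named_unsafe(enable_guc_loading, i915.enable_guc_loading, int, 0400);
    MODULE_PARM_DESC(enable_guc_loading,
            "Attempt GuC firmware loading (-1=auto [default], 0=never, 1=best effort, 2=required)");

    /* Resolving -1 into a concrete choice (hypothetical helper): */
    static int sanitize_guc_option(int value, bool has_guc_fw)
    {
            if (value < 0)                          /* -1: take the platform default */
                    value = has_guc_fw ? 1 : 0;     /* load iff the GuC needs firmware */
            if (!has_guc_fw && value > 0)           /* can't load/require what isn't there */
                    value = 0;
            return value;
    }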
-
Dave Gordon authored
For now, anything with a GuC requires uCode loading, and then supports command submission once loaded. But these are logically distinct from simply "having a GuC", so we need a separate macro for the latter. Then, various tests should use this new macro rather than HAS_GUC_UCODE() or testing enable_guc_submission. v4: Added a couple more uses of the new macro. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
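The separation is just another macro layer in i915_drv.h, approximately as below (the exact platform predicate is an assumption):

    /* "Has a GuC" is the primary property; needing firmware and supporting
     * command submission are derived from it, so each test can ask the
     * question it actually means. */
    #define HAS_GUC(dev)            (IS_GEN9(dev))
    #define HAS_GUC_UCODE(dev)      (HAS_GUC(dev))
    #define HAS_GUC_SCHED(dev)      (HAS_GUC(dev))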
-
Dave Gordon authored
The GuC initialisation code could do other things apart from loading firmware, so here we rename the three primary entry points to remove any specific reference to "ucode" (no functional changes, just renaming). Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 22 May, 2016 1 commit
-
-
Daniel Vetter authored
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 May, 2016 9 commits
-
-
Dave Gordon authored
Avoiding the out-of-line call to sg_next() reduces the kernel execution overhead by 10% in some workloads (for example the Unreal Engine 4 demo Atlantis on 2GiB GTTs) which are dominated by the cost of inserting PTEs due to texture thrashing. We can demonstrate this in a microbenchmark that forces us to rebind the object on every execbuf: for large texture sizes, inlining sg_next() gives a 25% improvement in the time required to execute an execbuf that rebinds the texture. Benchmark: igt/benchmarks/gem_exec_fault Benchmark: igt/benchmarks/gem_exec_trace/Atlantis Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-5-git-send-email-chris@chris-wilson.co.uk
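The gist is a static inline twin of sg_next() so the hot PTE-insertion loops stop paying for an out-of-line call. A sketch built from the standard scatterlist helpers (a simplification of the driver's __sg_next(); the real one also keeps sg_next()'s CONFIG_DEBUG_SG check):

    static inline struct scatterlist *__sg_next(struct scatterlist *sg)
    {
            /* Same walk as sg_next(), but visible to the optimiser. */
            return sg_is_last(sg) ? NULL :
                   likely(!sg_is_chain(++sg)) ? sg :
                   sg_chain_ptr(sg);
    }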
-
Dave Gordon authored
The existing for_each_sg_page() iterator is somewhat heavyweight, and is limiting i915 driver performance in a few benchmarks. So here we introduce somewhat lighter weight iterators, primarily for use with GEM objects or other cases where we only need to deal with whole aligned pages. Unlike the old iterator, the new iterators use an internal state structure which is not intended to be accessed by the caller; instead each takes as a parameter an output variable which is set before each iteration. This makes them particularly simple to use :) One of the new iterators provides the caller with the DMA address of each page in turn; the other provides the 'struct page' pointer required by many memory management operations. Various uses of for_each_sg_page() are then converted to the new macros. v2: Force inlining of the sg_iter constructor and make the union anonymous. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-4-git-send-email-chris@chris-wilson.co.uk
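Usage of the new iterators ends up looking roughly like this (macro and struct names as introduced by the series; the loop bodies are illustrative):

    struct sgt_iter sgt_iter;   /* internal state, never touched directly */
    dma_addr_t addr;
    struct page *page;

    /* Walk the object's backing store one page-sized DMA address at a time. */
    for_each_sgt_dma(addr, sgt_iter, obj->pages) {
            /* e.g. write one PTE per page-aligned DMA address */
    }

    /* Or walk struct page pointers where the mm API wants them. */
    for_each_sgt_page(page, sgt_iter, obj->pages)
            set_page_dirty(page);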
-
Dave Gordon authored
We're using this function for ringbuffers and other "small" objects, so it's worth avoiding an extra malloc()/free() cycle if the page array is small enough to put on the stack. Here we've chosen an arbitrary cutoff of 32 (4k) pages, which is big enough for a ringbuffer (4 pages) or a context image (currently up to 22 pages). v5: change name of local array [Chris Wilson] Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-3-git-send-email-chris@chris-wilson.co.uk
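The shape of the optimisation, assuming the drm_malloc_gfp()/drm_free_large() helpers the driver used at the time (variable names paraphrased):

    struct page *stack_pages[32];           /* enough for a ring or context image */
    struct page **pages = stack_pages;

    if (n_pages > ARRAY_SIZE(stack_pages)) {
            /* Too big for the stack: fall back to a temporary allocation. */
            pages = drm_malloc_gfp(n_pages, sizeof(*pages), GFP_TEMPORARY);
            if (pages == NULL)
                    return NULL;
    }

    /* ... collect the object's backing pages and vmap() them ... */

    if (pages != stack_pages)
            drm_free_large(pages);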
-
Dave Gordon authored
The recently-added i915_gem_object_pin_map() can be further optimised for "small" objects. To facilitate this, and simplify the error paths before adding the new code, this patch pulls out the "mapping" part of the operation (involving local allocations which must be undone before return) into its own subfunction. The next patch will then insert the new optimisation into the middle of the now-separated subfunction. This reorganisation will probably not affect the generated code, as the compiler will most likely inline it anyway, but it makes the logical structure a bit clearer and easier to modify. v2: Restructure loop-over-pages & error check [Chris Wilson] v3: Add page count to debug messages [Chris Wilson] Convert WARN_ON() to GEM_BUG_ON() v4: Drop the DEBUG messages [Tvrtko Ursulin] Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-2-git-send-email-chris@chris-wilson.co.uk
-
Daniel Vetter authored
Found this while browsing Bspec. Looks like it applies to both skl and kbl. v2: Also for bxt (Art). Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Sonika Jindal <sonika.jindal@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Cc: "Pandiyan, Dhinakaran" <dhinakaran.pandiyan@intel.com> Cc: "Runyan, Arthur J" <arthur.j.runyan@intel.com> Reviewed-by: Sonika Jindal <sonika.jindal@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463642060-30728-1-git-send-email-daniel.vetter@ffwll.ch
-
Daniel Vetter authored
I just wanted to get rid of the rmw cycle for gen9, but this also fixes some bugs we haven't carried over, like using recommended precharge and timeout values. Also I noticed that we don't set the fastwake sync length on skl, and that's used by PSR2 selective updates. Fix that. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Sonika Jindal <sonika.jindal@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Cc: "Pandiyan, Dhinakaran" <dhinakaran.pandiyan@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463590036-17824-6-git-send-email-daniel.vetter@ffwll.ch
-
Daniel Vetter authored
On bdw/hsw we have separate psr dp aux registers to set up, but on skl it's shared with the main dp aux channel. Which means any subsequent dp aux transaction will trample over it, and hence must be done beforehand. Also this means we can't do any dp aux transactions while PSR is active, or at least we must restore the old state. Probably need a psr disable/enable pair around dp aux transactions in general. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Sonika Jindal <sonika.jindal@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Cc: "Pandiyan, Dhinakaran" <dhinakaran.pandiyan@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463590036-17824-5-git-send-email-daniel.vetter@ffwll.ch
-
Daniel Vetter authored
This reverts commit dfaf37ba Author: Rodrigo Vivi <rodrigo.vivi@intel.com> Date: Mon Dec 7 14:45:20 2015 -0800 drm/i915: Fix idle_frames counter. and commit 97173eaf Author: Rodrigo Vivi <rodrigo.vivi@intel.com> Date: Tue Jul 7 16:28:55 2015 -0700 drm/i915: PSR: Increase idle_frames and implements commit d44b4dcb Author: Rodrigo Vivi <rodrigo.vivi@intel.com> Date: Fri Nov 14 08:52:31 2014 -0800 drm/i915: HSW/BDW PSR Set idle_frames = VBT + 1 without the hack to use 2 idle frames when VBT says 1. We keep the + 1 just for safety, although I haven't really figured out why that one exists. It's nonsense. idle_frames = number of frames where the screen is entirely idle before we think about entering PSR. idle_pattern = part of link training, and we probably totally butchered link training because we told the hw to entirely skip it. No wonder PSR occasionally just fell over. I suspect the reason we've increased idle frames is that it makes PSR entry slightly less likely, and more likely to happen in a quiet system, which probably increased the chances that the panel came back up without link training. The proper fix is to implement link training for PSR. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Sonika Jindal <sonika.jindal@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Cc: "Pandiyan, Dhinakaran" <dhinakaran.pandiyan@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463590036-17824-3-git-send-email-daniel.vetter@ffwll.ch
-
Daniel Vetter authored
The default of 0 is 500us of link training, but that's not enough for some platforms. Decoding this correctly means we're using 2.5ms of link training on these platforms, which fixes flickering issues associated with enabling PSR. v2: Unbotch the math a bit. v3: Drop debug hunk. v4: Improve commit message. Tested-by: Lyude <cpaul@redhat.com> Cc: Lyude <cpaul@redhat.com> Cc: stable@vger.kernel.org Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95176 Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Sonika Jindal <sonika.jindal@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Cc: "Pandiyan, Dhinakaran" <dhinakaran.pandiyan@intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by: fritsch@kodi.tv Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463590036-17824-2-git-send-email-daniel.vetter@ffwll.ch
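What "decoding this correctly" means: the VBT wakeup-time field is a small enum, not a duration, and has to be mapped onto the EDP_PSR_TP1_TIME_* (and matching TP2/TP3) register values. A sketch of that mapping as I understand it; the enum-to-time assignments and exact macro spellings below are recalled from i915_reg.h and should be treated as assumptions:

    /* VBT encoding (assumed): 0 = 500us, 1 = 100us, 2 = 2500us, 3 = 0us.
     * Leaving the register field at 0 always meant 500us, hence the flicker
     * on panels whose VBT asked for 2.5ms of link training. */
    switch (dev_priv->vbt.psr.tp1_wakeup_time) {
    case 0:
            val |= EDP_PSR_TP1_TIME_500us;
            break;
    case 1:
            val |= EDP_PSR_TP1_TIME_100us;
            break;
    case 3:
            val |= EDP_PSR_TP1_TIME_0us;
            break;
    default:
            val |= EDP_PSR_TP1_TIME_2500us;
            break;
    }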
-
- 19 May, 2016 21 commits
-
-
Chris Wilson authored
userptr directly only uses drm_device in a single interface, where it really meant to use drm_i915_private (everywhere else we have to derive it from the drm_i915_gem_object and so require going via drm_device). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463671036-3235-1-git-send-email-chris@chris-wilson.co.uk
-
Maarten Lankhorst authored
If planes are added to the state after the call to drm_atomic_helper_check_planes, planes_changed may not be set and we will not unpin the old framebuffer. This results in a pin leak long after the framebuffer is destroyed, so add some checks to catch it when it happens. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-21-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
This reapplies commit acf4e84d. With async unpin this should no longer break. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-20-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
All of intel_post_plane_update is handled there now, so move it over. This is run after the hw state checker because it can't handle checking crtcs separately yet. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-19-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
intel_unpin_work may not take the list lock because it requires the connector_mutex. To prevent taking locks we add an array of old and new states: the old state to free, the new state to commit and verify. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-18-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
This is required to let fbc updates run async. It has a lot of checks for whether certain locks are taken, which can be removed when the relevant states are passed in as pointers. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-17-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
With the removal of cs-based flips all mmio waits will finish without requiring the reset counter, because the waits will complete during gpu reset. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-16-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
With the removal of cs support this is no longer reachable. Can be revived if needed. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-15-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
With the removal of cs flips this is always force enabled. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-14-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
With mmio flips now available on all platforms it's time to remove support for cs flips. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-13-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
Create a work structure that will be used for all changes. This will be used later on in the atomic commit function. Changes since v1: - Free old_crtc_state from unpin_work_fn properly. Changes since v2: - Add hunk for calling hw state verifier. - Add missing support for color spaces. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-12-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
Set plane_state->base.fence to the dma_buf exclusive fence, and add a wait to the mmio function. This will make it easier to unify plane updates later on. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-11-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
This will be required to allow more than 1 update in the future. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-10-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
With intel_pipe_update begin/end we ensure that the mmio updates don't run during the vblank interrupt. Using the hw counter we can be sure that when the current vblank count != the vblank count at the time of pipe_update_end, the mmio update is complete. This allows us to use mmio updates on all platforms, using the update_plane call. With Chris Wilson's patch to skip waiting for vblanks for legacy_cursor_update this potentially leaves a small race condition, in which update_plane can be called with a freed crtc_state. Because of this commit acf4e84d ("drm/i915: Avoid stalling on pending flips for legacy cursor updates") is temporarily reverted. Changes since v1: - Split out the flip_work rename. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-9-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
This reverts commit acf4e84d. Unfortunately this breaks the next commit with a use-after-free, so temporarily revert until we can apply a solution. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-8-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
Rename intel_unpin_work to intel_flip_work and use it for mmio flips and unpinning. Use flip_queued_req to hold the wait request in the mmio case, and the vblank counter from intel_crtc_get_vblank_counter. MMIO flips get their own path through intel_finish_page_flip_mmio, handled on vblank. CS page flips go through *_cs. Changes since v1: - Clean up distinction between MMIO and CS flips. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-7-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
This uses the newly created drm_accurate_vblank_count_and_time to accurately get a vblank count when the hw counter is unavailable. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-6-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
Instead of calling prepare_flip right before calling finish_page_flip, do everything from prepare_page_flip in finish_page_flip. Putting prepare and finish page_flip in a single step removes the need for INTEL_FLIP_COMPLETE, so it can be removed. This simplifies the code slightly. Changes since v1: - Invert if case to simplify code. - Add missing barrier. - Reword commit message. Changes since v2: - intel_page_flip_plane is removed. - work->pending is turned into a bool. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-5-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
This function is duplicated with intel_finish_page_flip, and is only ever used from planes that could use the other function anyway. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-4-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
-
Maarten Lankhorst authored
Both intel_unpin_work.pending and intel_unpin_work.enable_stall_check were used to see if work should be enabled. By only using pending some special cases are gone, and access to unpin_work can be simplified. A flip could previously be queued before stallcheck was active. With the addition of the pending member enable_stall_check became obsolete and can thus be removed. Use this to only access work members until intel_mark_page_flip_active is called, or intel_queue_mmio_flip is used. This will prevent use-after-free, and makes it easier to verify accesses. Changes since v1: - Reword commit message. - Do not access unpin_work after intel_mark_page_flip_active. - Add the right memory barriers. Changes since v2: - atomic_read() needs a full smp_rmb. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463490484-19540-3-git-send-email-maarten.lankhorst@linux.intel.com Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
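The ordering rule behind the v1/v3 notes is the usual publish/consume pattern around the pending flag; schematically (not the driver's literal code, and the member names are illustrative):

    /* Writer (intel_mark_page_flip_active): publish the work fields before
     * marking the flip as pending. */
    work->flip_queued_vblank = vblank;      /* illustrative member */
    smp_wmb();
    atomic_set(&work->pending, 1);

    /* Reader (e.g. the flip-done interrupt path): atomic_read() alone does not
     * order the later loads, so pair it with a full read barrier (v3). */
    if (atomic_read(&work->pending)) {
            smp_rmb();
            /* ... only now is it safe to look at the other work members ... */
    }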
-