- 23 Jun, 2015 18 commits
-
-
John Harrison authored
The do_execbuf() function takes quite a few parameters. The actual set of parameters is going to change with the conversion to passing requests around. Further, it is due to grow massively with the arrival of the GPU scheduler. This patch simplifies the prototype by passing a parameter structure instead. Changing the parameter set in the future is then simply a matter of adding or removing items in the structure. Note that the structure does not contain absolutely everything that is passed in. This is because the intention is to use this structure more extensively later in this patch series and more especially in the GPU scheduler that is coming soon. The latter requires hanging on to the structure as the final hardware submission can be delayed until long after the execbuf IOCTL has returned to user land. Thus it is unsafe to put anything in the structure that is local to the IOCTL call itself - such as the 'args' parameter. All entries must be copies of data or pointers to structures that are reference counted in some way and guaranteed to exist for the duration of the batch buffer's life. v2: Rebased to newer tree and updated for changes to the command parser. Specifically, a code shuffle has required saving the batch start address in the params structure. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
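As a rough sketch of the shape this takes (the field and function names here are illustrative, not the exact i915 definitions):

/* Hypothetical parameter block for execbuffer submission. Nothing
 * ioctl-local (such as the 'args' pointer) belongs here, because the
 * structure may outlive the ioctl once a scheduler delays the actual
 * hardware submission. */
struct i915_execbuffer_params {
        struct drm_device          *dev;
        struct intel_engine_cs     *ring;
        struct intel_context       *ctx;
        struct drm_i915_gem_object *batch_obj;
        u64                         batch_obj_vm_offset; /* saved batch start address (v2) */
        u32                         dispatch_flags;
};

/* The execbuffer back end then takes the params block instead of a long
 * argument list (prototype illustrative). */
int do_execbuffer_submission(struct i915_execbuffer_params *params,
                             struct drm_i915_gem_execbuffer2 *args,
                             struct list_head *vmas);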
-
John Harrison authored
In execlist mode, the context object pointer is written in to the request structure (and reference counted) at the point of request creation. In legacy mode, this only happens inside i915_add_request(). This patch updates the legacy code path to match the execlist version. This allows all the intermediate code between request creation and request submission to get at the context object given only a request structure. Thus negating the need to pass context pointers here, there and everywhere. v2: Moved the context reference so it does not need to be undone if the get_seqno() fails. v3: Fixed execlist mode always hitting a warning about invalid last_contexts (which don't exist in execlist mode). v4: Updated for new i915_gem_request_alloc() scheme. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Start of explicit request management in the execbuffer code path. This patch adds a call to allocate a request structure before all the actual hardware work is done. Thus guaranteeing that all that work is tagged by a known request. At present, nothing further is done with the request, the rest comes later in the series. The only noticeable change is that failure to get a request (e.g. due to lack of memory) will be caught earlier in the sequence. It now occurs right at the start, before any work that cannot be undone has been started. v2: Simplified the error handling path. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
The i915_add_request() function is called to keep track of work that has been written to the ring buffer. It adds epilogue commands to track progress (seqno updates and such), moves the request structure onto the right list and performs other such housekeeping tasks. However, the work itself has already been written to the ring and will get executed whether or not the add request call succeeds. So no matter what goes wrong, there isn't a whole lot of point in failing the call. At the moment, this is fine(ish). If the add request does bail out early and not do the housekeeping, the request will still float around in the ring->outstanding_lazy_request field and be picked up next time. It means multiple pieces of work will be tagged as the same request and the driver can't actually wait for the first piece of work until something else has been submitted. But it all sort of hangs together. This patch series is all about removing the OLR and guaranteeing that each piece of work gets its own personal request. That means that there is no more 'hoovering up of forgotten requests'. If the request does not get tracked then it will be leaked. Thus the add request call _must_ not fail. The previous patch should have already ensured that it _will_ not fail by removing the potential for running out of ring space. This patch enforces the rule by actually removing the early exit paths and the return code. Note that if something does manage to fail and the epilogue commands don't get written to the ring, the driver will still hang together. The request will be added to the tracking lists. And as in the old case, any subsequent work will generate a new seqno which will suffice for marking the old one as complete. v2: Improved WARNings (Tomas Elf review request). For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
It is a bad idea for i915_add_request() to fail. The work will already have been sent to the ring and will be processed, but there will not be any tracking or management of that work. The only way the add request call can fail is if it can't write its epilogue commands to the ring (cache flushing, seqno updates, interrupt signalling). The reasons for that are mostly down to running out of ring buffer space and the problems associated with trying to get some more. This patch prevents that situation from happening in the first place. When a request is created, it marks sufficient space as reserved for the epilogue commands. Thus guaranteeing that by the time the epilogue is written, there will be plenty of space for it. Note that a ring_begin() call is required to actually reserve the space (and do any potential waiting). However, that is not currently done at request creation time. This is because the ring_begin() code can allocate a request. Hence calling begin() from the request allocation code would lead to infinite recursion! Later patches in this series remove the need for begin() to do the allocate. At that point, it becomes safe for the allocate to call begin() and really reserve the space. Until then, there is a potential for insufficient space to be available at the point of calling i915_add_request(). However, that would only be in the case where the request was created and immediately submitted without ever calling ring_begin() and adding any work to that request, which should never happen. And even if it does, and that request happens to hit the tiny window where it could fail for lack of ring space, it hardly matters because the request wasn't doing anything in the first place. v2: Updated the 'reserved space too small' warning to include the offending sizes. Added a 'cancel' operation to clean up when a request is abandoned. Added re-initialisation of tracking state after a buffer wrap to keep the sanity checks accurate. v3: Incremented the reserved size to accommodate Ironlake (after finally managing to run on an ILK system). Also fixed missing wrap code in LRC mode. v4: Added extra comment and removed duplicate WARN (feedback from Tomas). For: VIZ-5115 CC: Tomas Elf <tomas.elf@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
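A minimal sketch of the reservation idea, assuming a hypothetical reserved_size field on the ring buffer and illustrative helper names:

/* Mark room for the request's epilogue commands as reserved at
 * request-creation time, so i915_add_request() can never run out of
 * ring space later. ring_begin() would treat the reserved bytes as
 * unavailable to ordinary work. */
static void ring_reserve_epilogue_space(struct intel_ringbuffer *ringbuf,
                                        int size)
{
        WARN_ON(ringbuf->reserved_size);        /* double reservation is a bug */
        ringbuf->reserved_size = size;
}

static void ring_cancel_epilogue_space(struct intel_ringbuffer *ringbuf)
{
        /* Request abandoned before submission: hand the bytes back. */
        ringbuf->reserved_size = 0;
}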
-
Daniel Vetter authored
Backmerge drm-next because the conflicts between Ander's atomic fixes for 4.2 and Maarten's future work are getting too unwieldy to handle. Conflicts: drivers/gpu/drm/i915/intel_display.c drivers/gpu/drm/i915/intel_ringbuffer.h Just always take ours, same as git merge -X ours, but done by hand because I didn't trust git: it's confusing that it doesn't show any conflicts in the merge diff at all. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Arun Siluvery authored
In Indirect context w/a batch buffer, +WaFlushCoherentL3CacheLinesAtContextSwitch:bdw v2: Add LRI commands to set/reset the bit that invalidates coherent lines, update the WA to include programming restrictions and exclude CHV as it is not required (Ville) v3: Avoid an unnecessary read when it can be done by reading the register once (Chris). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Arun Siluvery authored
In Indirect and Per context w/a batch buffer, +WaDisableCtxRestoreArbitration Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Arun Siluvery authored
Some of the WA applied using WA batch buffers perform writes to the scratch page. In the current flow, WA are initialized before the scratch obj is allocated. This patch reorders intel_init_pipe_control() so we have a valid scratch obj before we initialize the WA. v2: Check for a valid scratch page before initializing WA, as some of them perform writes to it. Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Arun Siluvery authored
Some of the WA are to be applied during context save but before restore, and some at the end of context save/restore but before executing the instructions in the ring. WA batch buffers are created for this purpose, since these WA cannot be applied using normal means. Each context has two registers to load the offsets of these batch buffers. If they are non-zero, the HW understands that it needs to execute these batches. v1: In this version two separate ring_buffer objects were used to load WA instructions for indirect and per-context batch buffers, and they were part of every context. v2: Chris suggested including an additional page in the context and using it to load these WA instead of creating separate objects. This would simplify a lot of things, as we need not explicitly pin/unpin them. Thomas Daniel further pointed out that GuC is planning to use a similar setup to share data between GuC and the driver, and the WA batch buffers could probably share that page. However, after discussions with Dave, who is implementing the GuC changes, he suggested using an independent page for these reasons: the GuC area might grow, and these WA are initialized only once and are not changed afterwards, so we can share them across all contexts. The page is updated with the WA during render ring init. This has the advantage of not adding more special cases to default_context. We don't know upfront the number of WA we will be applying using these batch buffers. For this reason the size was fixed earlier, but that is not a good idea. To fix this, the functions that load instructions are modified to report the number of commands inserted, and the size is now calculated after the batch is updated. A macro is introduced to add commands to these batch buffers, which also checks for overflow and returns an error. We have a full page dedicated to these WA, so that should be sufficient for a good number of WA; anything more means we have major issues. The list for Gen8 is small, same for Gen9; maybe a few more get added going forward, but nowhere close to filling the entire page. Chris suggested a two-pass approach, but we agreed to go with the single-page setup as it is a one-off routine and simpler code wins. One additional option is an offset field, which is helpful if we would like to have multiple batches at different offsets within the page and select them based on some criteria. This is not a requirement at this point but could help in future (Dave). Chris provided some helpful macros and suggestions which further simplified the code; they will also help in reducing code duplication when WA for other Gens are added. Add detailed comments explaining the restrictions. Use do {} while(0) for the wa_ctx_emit() macro. (Many thanks to Chris, Dave and Thomas for their reviews and inputs) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
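The overflow-checked emit might look roughly like the following sketch (the real wa_ctx_emit() may differ in detail):

/* Add one command to the per-page WA batch, bailing out if the page would
 * overflow. do {} while (0) keeps the macro safe to use as a single
 * statement, and 'index' counts the commands actually inserted so the final
 * batch size can be computed afterwards. Only usable in functions that
 * return an int. */
#define wa_ctx_emit(batch, index, cmd)                                  \
        do {                                                            \
                if (WARN_ON((index) >= PAGE_SIZE / sizeof(u32)))        \
                        return -ENOSPC;                                 \
                (batch)[(index)++] = (cmd);                             \
        } while (0)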
-
Chris Wilson authored
If the user disables the GPU reset using the i915.reset parameter and one occurs, report that we failed to reset the GPU. If we return early, as we currently do, then we leave all state intact (with a hung GPU) and clients block forever waiting for their requests to complete. Testcase: igt/gem_eio Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> [danvet: Mark i915.reset as an unsafe module option, as discussed with Chris.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
The module pciid list got lost, but somehow most distros seem to force-load drm drivers early and no one noticed for a while. Bug introduced in commit fd930478 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Fri Jun 19 20:27:27 2015 +0100 drm/i915: Remove KMS Kconfig option Reported-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Damien Lespiau <damien.lespiau@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Dave Airlie authored
If we are doing an MST transaction and we've gotten HPD and we lookup the device from the incoming msg, we should take the mgr lock around it, so that mst_primary and mstb->ports are valid. Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Signed-off-by: Dave Airlie <airlied@redhat.com>
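Roughly, the lookup from the incoming sideband message moves under the manager lock, along these lines (the helper name and surrounding code are illustrative, not the exact drm_dp_mst helpers):

/* Walk mst_primary/mstb->ports only while holding mgr->lock, so a
 * concurrent unplug cannot free them underneath us. */
mutex_lock(&mgr->lock);
mstb = get_branch_device_locked(mgr, hdr->lct, hdr->rad);
if (mstb)
        kref_get(&mstb->kref);  /* keep the branch alive after dropping the lock */
mutex_unlock(&mgr->lock);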
-
Daniel Vetter authored
This validates the mst_primary under the lock, and then calls into the check and send function. This makes the locking rules in the code a lot easier to understand. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Dave Airlie authored
Merge tag 'drm-intel-next-fixes-2015-06-22' of git://anongit.freedesktop.org/drm-intel into drm-next fix warning introduced in last -fixes * tag 'drm-intel-next-fixes-2015-06-22' of git://anongit.freedesktop.org/drm-intel: drm/i915: Silence compiler warning
-
Dave Airlie authored
This symbol came via exynos-next, but modular builds are broken so just fix it up now. Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Dave Airlie authored
Merge branch 'exynos-drm-next' of git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos into drm-next Summary: . Add atomic feature support - Exynos now also supports the atomic feature. However, it doesn't guarantee atomic operation yet, and more cleanups are still required. This time we just modified the Exynos drm driver to use atomic interfaces instead of legacy ones. Next time, we will enhance the Exynos drm driver to support the atomic operation. . Add iommu support - This is a patch series based on the Exynos iommu integration work with DT and the dma-mapping subsystem below, http://lwn.net/Articles/607626/ . Consolidate Exynos drm driver initialization. - This patch series resolves the issue that only the first component was bound when deferred probing happened for other pipelines, and also cleans the driver up by moving the dispersed code for registering kms drivers to one place. . Add new MIC, DECON drivers, and MIPI-DSI support for Exynos5433. - Add MIC (Mobile Image Compressor) driver. MIC is a new IP for Exynos5433 and later, which is used to transfer frame data to the MIPI-DSI controller, compressing the data to reduce memory bandwidth. - Add DECON driver for the Exynos5433 SoC. This IP is a display controller similar to Exynos7's, but its registers differ so much from Exynos7's that this driver has been implemented separately. We will implement a helper module for FIMD and the two DECON controllers to remove duplicated code later. - Add Exynos5433 SoC support to the MIPI-DSI driver, and device tree relevant patches. * 'exynos-drm-next' of git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos: (50 commits) ARM: dts: rename the clock of MIPI DSI 'pll_clk' to 'sclk_mipi' drm/exynos: dsi: do not set TE GPIO direction by input drm/exynos: dsi: add support for MIC driver as a bridge drm/exynos: dsi: add support for Exynos5433 drm/exynos: dsi: make use of array for clock access drm/exynos: dsi: make use of driver data for static values drm/exynos: dsi: add macros for register access drm/exynos: dsi: rename pll_clk to sclk_clk drm/exynos: mic: add MIC driver of: add helper for getting endpoint node of specific identifiers drm/exynos: add Exynos5433 decon driver drm/exynos: fix the input prompt of Exynos7 DECON drm/exynos: add drm_iommu_attach_device_if_possible() drm/exynos: Add the dependency for DRM_EXYNOS to DPI/DSI/DP drm/exynos: remove the dependency of DP driver for ARCH_EXYNOS drm/exynos: do not wait for vblank at atomic operation drm/exynos: Remove unused vma field of exynos_drm_gem_obj drm/exynos: fimd: fix page fault issue with iommu drm/exynos: iommu: improve a check for non-iommu dma_ops drm/exynos: iommu: detach from default dma-mapping domain on init ...
-
Dave Airlie authored
One more drm-misc pull for 4.2. The important one is the fix from Laurent for Daniel Stone's mode_blob work. * tag 'topic/drm-misc-2015-06-22' of git://anongit.freedesktop.org/drm-intel: drm/atomic: Don't set crtc_state->enable manually drm: prime: Document gem_prime_mmap drm: Avoid the double clflush on the last cache line in drm_clflush_virt_range() drm/atomic: Extract needs_modeset function drm/cma: Fix 64-bit size_t build warnings Documentation/drm: Update rotation property
-
- 22 Jun, 2015 22 commits
-
-
Chris Wilson authored
Since we only support modesetting by default (disabling modesetting on the command line prevents i915.ko from loading), having a parameter to disable modesetting by default is superfluous, i.e. saying CONFIG_DRM_I915_KMS=n is equivalent to CONFIG_DRM_I915=n. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
On older gen, pre-Ironlake, parts there is no hardwired pin to report the presence of an LVDS panel. Instead, we have to rely on the VBT to declare whether the machine has a panel or not. Though notoriously unreliable, so far we have erred on the side of false-positives and have required a list of machines which end up falsely reporting a panel as present. However, we now have reports of false-negatives, machines with an LVDS that are being ignored due to the VBT not declaring the panel. This patch ignores the VBT setting if the BIOS has already enabled the LVDS panel (and on Ironlake+ we also have the hardware presence pin). It fixes the Samsung NP680Z5E-X01FR in the bug report, but is likely to result in more false-positives, and since we rely on the BIOS to enable the panel, there are likely different circumstances where the BIOS will not enable that panel (and so we may see the same machine with and without a panel all on the whim of the BIOS). Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90979 Reported-and-tested-by: lysxia@gmail.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Internal requirement for the alignment is that it must be a power-of-two, so enforce rejection at the user interface to execbuffer (which allows the caller to specify a stricter-than-expected alignment criterion). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
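The check itself is a one-liner along these lines, applied while validating the user's exec object list (a sketch, not necessarily the exact placement in the driver):

/* Reject a user-supplied alignment that is not a power of two; zero means
 * "no extra constraint" and remains allowed. */
if (entry->alignment && !is_power_of_2(entry->alignment))
        return -EINVAL;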
-
Rodrigo Vivi authored
This patch doesn't have any functional change, but organizes frontbuffer invalidate and busy by removing the unnecessary ring argument from the signatures. It was unused in mark_fb_busy and only used in fb_obj_invalidate for the same ORIGIN_CS usage. So let's clean it up a bit. Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
The same dpll p2 divider selection is repeated three times in the gen2-4 .find_dpll() functions. Factor it out. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Jani Nikula authored
Make Paulo happier. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Jani Nikula authored
We have enough generic hotplug functions sprinkled all over i915_irq.c to warrant moving them to a file of their own. This should further underline the distinction between generic code in the new file and platform specific hotplug and irq code that remains in i915_irq.c. Add new intel_hpd_init_work to keep work functions static, and rename get_port_from_pin to intel_hpd_pin_to_port while increasing its visibility, but keep everything else the same. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Jani Nikula authored
We'll have three functions: intel_hpd_irq_storm_detect for detecting irq storms, intel_hpd_irq_storm_disable for disabling hotplugs after detected storms, intel_hpd_irq_storm_reenable_work for re-enabling hotplug. No functional changes. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Jani Nikula authored
Continue abstracting hotplug storm related functions to clarify the code. This time, abstract hotplug irq storm related hotplug disabling. While at it, clean up the loop iterating over connectors for readability. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Jani Nikula authored
The hotplug work function has two loops iterating over connectors, the first for handling hotplug disabling due to irq storms and the second for actually handling the hotplug events. Move the debug printing into the second one, so we can abstract the storm handling better. This may change the output ordering slightly when there are multiple simultaneous hotplug events. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Hyungwon Hwang authored
The clock which was named as 'pll_clk' is actually not the clock source of PLL in MIPI DSI. This patch fixes this disagreement. Signed-off-by: Hyungwon Hwang <human.hwang@samsung.com> Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Maarten Lankhorst authored
The skylake scalers depend on the cdclk freq, but that frequency can change during a modeset. So when a modeset happens calculate the new cdclk in the atomic state. With the transitional helpers gone the cached value can be used in the scaler, and committed after all crtc's are disabled. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90874 Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
All transitional plane helpers are gone, party! Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
By making color key atomic there are no more transitional helpers. The plane check function will reject the color key when a scaler is active. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
No need to repeatedly call update_watermarks, or update_fbc. Down to a single call to update_watermarks in .crtc_enable Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
Now that all planes are added during a modeset we can use the calculated changes before disabling a plane, and then either commit or force disable a plane before disabling the crtc. The code is shared with atomic_begin/flush, except watermark updating and vblank evasion are not used. This is needed for proper atomic suspend/resume support. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90868 Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
Read out the initial state, and add a quirk to force add all planes to crtc_state->plane_mask during initial commit. This will disable all planes during the initial modeset. The initial plane quirk is temporary, and will go away when hardware readout is fully atomic, and the watermark updates in intel_sprite.c are removed. Changes since v1: - Unset state->visible on !primary planes. - Do not rely on the plane->crtc pointer in intel_atomic_plane, instead assume planes are invisible until modeset. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
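The quirk amounts to tagging every plane as belonging to the crtc during that first commit, roughly as in this fragment (crtc_state here is assumed to be the drm_crtc_state built for the initial commit; the surrounding guard is omitted):

/* Sketch: during the initial commit (guarded by the quirk described
 * above), add every plane to the crtc state so the first modeset
 * disables them all. */
struct drm_plane *plane;

list_for_each_entry(plane, &dev->mode_config.plane_list, head)
        crtc_state->plane_mask |= 1 << drm_plane_index(plane);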
-
Maarten Lankhorst authored
All the checks in intel_modeset_checks are only useful when a modeset occurs, because there is nothing to update otherwise. Same for power/cdclk changes, if there is no modeset they are noops. Unfortunately intel_modeset_pipe_config still gets called without modeset, because atomic hw readout isn't done yet. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
To allow them to be used in intel_set_mode. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
This is probably intended to be done during vblank evasion. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
The idea was good, but planes can have a fb even though they're disabled. This makes the force argument useless and always true, because only the commit function updates state. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
By passing crtc_state to the check_plane functions a lot of duplicated code can be removed. There are still some transitional helper calls, they will be removed later. Changes since v1: - Revert state->visible changes. - Use plane->state->crtc instead of plane->crtc. - Use drm_atomic_get_existing_crtc_state. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-