- 10 Apr, 2015 40 commits
-
Sonika Jindal authored
v2: Moving creation of the property into a function, checking for 90/270 rotation simultaneously (Chris). Letting the primary plane be positioned.
v3: Adding if/else for 90/270 vs. programming of the remaining params, adding a check for pixel_format, some cleanup (review comments).
v4: Adding the right pixel_formats, using src_* params instead of crtc_* for offset and size programming (Ville).
v5: Rebased on -nightly and Tvrtko's series for gtt remapping.
v6: Rebased on -nightly (Tvrtko's series merged).
v7: Moving the pixel_format check to intel_atomic_plane_check (Matt).

Signed-off-by: Sonika Jindal <sonika.jindal@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Sonika Jindal authored
Signed-off-by: Sonika Jindal <sonika.jindal@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Sagar Kamble authored
Change-Id: I4253459c075c50d9b6f034b4ed4ad2f54cd7d1d7
Signed-off-by: Sagar Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ander Conselvan de Oliveira authored
Since the following commit, the PLL calculations are done earlier, so the code following the comment doesn't do anything PLL or encoder related. It only updates the primary plane now.

    commit f3019a4d
    Author: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
    Date:   Wed Oct 29 11:32:37 2014 +0200

        drm/i915: Remove crtc_mode_set() hook

Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
When looking for viable candidates to shrink, we only want objects that are not pinned. However, to do so we performed a double iteration over the vma in the objects, first looking at the pin-count, then looking for allocations. We can do both at once and be slightly more explicit in our validity test.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
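A minimal sketch of the combined single-pass test described above, assuming the 2015-era i915 field names (obj->vma_list, vma->vma_link, vma->pin_count):

    static bool can_release_pages(struct drm_i915_gem_object *obj)
    {
        struct i915_vma *vma;
        bool bound = false;

        list_for_each_entry(vma, &obj->vma_list, vma_link) {
            /* Any pinned vma disqualifies the object outright. */
            if (vma->pin_count)
                return false;
            /* ...but it must also hold at least one live allocation
             * for shrinking it to be worthwhile. */
            if (drm_mm_node_allocated(&vma->node))
                bound = true;
        }
        return bound;
    }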
-
Chris Wilson authored
As we never expose context objects directly to userspace, we can forgo allocating a first-class GEM object for them and prefer to use the limited resource of reserved/stolen memory for them. Note this means that their initial contents are undefined.

However, a downside of stolen objects is that we cannot access their contents through a physical address directly (thanks MCH!), which prevents their use for execlists contexts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
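A hedged sketch of the allocation preference this describes; i915_gem_object_create_stolen() and i915_gem_alloc_object() are the era's real allocators, the surrounding shape is illustrative (execlists callers would keep using regular objects):

    /* Prefer stolen memory for kernel-internal context objects;
     * fall back to a shmemfs-backed object when stolen is exhausted. */
    obj = i915_gem_object_create_stolen(dev, size);
    if (obj == NULL)
        obj = i915_gem_alloc_object(dev, size);
    if (obj == NULL)
        return ERR_PTR(-ENOMEM);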
-
Chris Wilson authored
We already assign a unique identifier to every request: seqno. That someone felt like adding a second one, without even mentioning why and tweaking ABI, smells very fishy.

Fixes regression from

    commit b3a38998
    Author: Nick Hoath <nicholas.hoath@intel.com>
    Date:   Thu Feb 19 16:30:47 2015 +0000

        drm/i915: Fix a use after free, and unbalanced refcounting

v2: Rebase

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Nick Hoath <nicholas.hoath@intel.com>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Jani Nikula <jani.nikula@intel.com>
[danvet: Fixup because of different merge order.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Remove some needless variables and parameter passing.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
In a similar vein to reducing the number of unrequired spinlocks used for execlist command submission (where the forcewake is required but manually controlled), we know that the IRQ registers are outside of the powerwell and so we can access them directly. Since we now have direct access exported via I915_READ_FW/I915_WRITE_FW, let's put those to use in the irq handlers as well. In the process, reorder the execlist submission to happen as early as possible.

v2: Restrict the untraced register mmio to just the GT path (i.e. the hotpath for execlists)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
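A sketch of the idea in the GT interrupt hotpath; GEN8_GT_IIR() and the _FW accessors are the driver's real names, but the handler shape is abbreviated and illustrative:

    static irqreturn_t gen8_gt_irq_handler(struct drm_i915_private *dev_priv)
    {
        /* The GT IIR registers sit outside the powerwell, so raw
         * access is safe here and skips the uncore spinlock plus
         * forcewake accounting on every read/write. */
        u32 iir = I915_READ_FW(GEN8_GT_IIR(0));

        if (iir) {
            I915_WRITE_FW(GEN8_GT_IIR(0), iir); /* ack first */
            /* ... then submit to execlists as early as possible ... */
        }
        return iir ? IRQ_HANDLED : IRQ_NONE;
    }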
-
Chris Wilson authored
This eliminates six needless spin lock/unlock pairs when writing out ELSP.

v2: Respin with my preferred colour.
v3: Mostly back to the original colour

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> [v1]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
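Roughly what the batched write looks like — a sketch modeled on this era's execlists submission path; desc[] stands in for the two context descriptors, while the __locked forcewake helpers, the _FW accessors and the RING_ELSP macros are the driver's own:

    spin_lock(&dev_priv->uncore.lock);
    intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);

    /* All four writes happen under a single lock/forcewake section
     * instead of four independently locked I915_WRITE()s. */
    I915_WRITE_FW(RING_ELSP(ring), upper_32_bits(desc[1]));
    I915_WRITE_FW(RING_ELSP(ring), lower_32_bits(desc[1]));
    I915_WRITE_FW(RING_ELSP(ring), upper_32_bits(desc[0]));
    I915_WRITE_FW(RING_ELSP(ring), lower_32_bits(desc[0])); /* must be last */

    I915_READ_FW(RING_EXECLIST_STATUS(ring)); /* posting read */

    intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
    spin_unlock(&dev_priv->uncore.lock);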
-
Daniel Vetter authored
Already tagged this one and the 0-day builder is failing me.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Chris Wilson authored
vma are more frequently allocated than objects and so should equally benefit from having a dedicated slab.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
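A sketch of the setup, assuming a dev_priv->vmas cache alongside the existing object slab (the cache name and field are illustrative; the kmem_cache API is the kernel's real one):

    /* At driver load: */
    dev_priv->vmas = kmem_cache_create("i915_gem_vma",
                                       sizeof(struct i915_vma), 0,
                                       SLAB_HWCACHE_ALIGN, NULL);

    /* Allocation sites switch from kzalloc()/kfree() to: */
    vma = kmem_cache_zalloc(dev_priv->vmas, GFP_KERNEL);
    /* ...and on the free path: */
    kmem_cache_free(dev_priv->vmas, vma);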
-
Chris Wilson authored
Requests are allocated even more frequently than objects and equally benefit from having a dedicated slab.

v2: Rebase

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Matt Roper authored
Once we have full atomic modeset, these kinds of flags should be in a real intel_crtc_state that's tracked properly. In the meantime, make sure we clear out any old flags at the beginning of a transaction so that we don't wind up seeing leftover flags from old transactions that were checked but never went to the commit step. At the moment, a failed check or prepare could leave stale flags behind that interfere with the next atomic transaction.

v2: Just do a memset; the series this patch was originally part of placed additional fields into the structure that shouldn't be cleared, but that's no longer the case.

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
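The v2 approach is essentially a one-liner at the top of the crtc check; a sketch, with 'atomic' assumed as the name of the flag-holding intel_crtc member the commit describes:

    /* Start each transaction from a clean slate so flags left by a
     * checked-but-never-committed transaction cannot leak through. */
    memset(&intel_crtc->atomic, 0, sizeof(intel_crtc->atomic));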
-
Matt Roper authored
Switch our plane update/disable entrypoints to the full atomic helpers (which generate a top-level atomic transaction) rather than the transitional helpers (which only create/manipulate orphaned plane states independent of a top-level transaction). Various upcoming work (SKL scalers, atomic watermarks, etc.) requires a full atomic transaction to behave properly/cleanly.

Last time we tried this, we had to back out the change because we still call the drm_plane vfuncs directly from within our legacy modesetting code. This potentially results in nested atomic transactions, locking collisions, and other failures. To avoid that problem again, we sidestep the issue by calling the transitional helpers directly (rather than through a vfunc) when we're nested inside other legacy modesetting code. However, this does allow legacy SetPlane() ioctls to process an entire drm_atomic_state transaction, which is important for upcoming patches.

Cc: Chandra Konduru <chandra.konduru@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
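A sketch of the split: the vfuncs take the full-transaction helpers, while nested legacy paths call the transitional helper directly (all the helpers named here are real DRM core functions; the funcs table is abbreviated):

    static const struct drm_plane_funcs intel_plane_funcs = {
        /* Legacy ioctls now build a complete drm_atomic_state. */
        .update_plane  = drm_atomic_helper_update_plane,
        .disable_plane = drm_atomic_helper_disable_plane,
    };

    /* Inside our own modeset code, avoid nesting a second atomic
     * transaction by calling the transitional helper directly: */
    drm_plane_helper_update(plane, crtc, fb,
                            crtc_x, crtc_y, crtc_w, crtc_h,
                            src_x, src_y, src_w, src_h);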
-
Daniel Vetter authored
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chandra Konduru authored
Adding a drm helper function to return a plane pointer from its index, where the index is the value returned by drm_plane_index().

v2: Avoided nested loop by adding a loop count (Daniel)
v3: Updated patch header prefix to 'drm' (Matt)
v4: Fixed a kerneldoc issue (kbuild-internal)

Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Chandra Konduru <chandra.konduru@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
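The helper is small enough to sketch in full; this follows the description — one walk of mode_config.plane_list with a running count, mirroring the order drm_plane_index() uses:

    struct drm_plane *drm_plane_from_index(struct drm_device *dev, int idx)
    {
        struct drm_plane *plane;
        unsigned int i = 0;

        list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
            if (i == idx)
                return plane;
            i++;
        }
        return NULL;
    }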
-
Chris Wilson authored
After the removal of DRI1, all access to the rings is through requests, and so we can always be sure that there is a request to wait upon to free up available space. The fallback code only existed so that we could quiesce the GPU following unmediated access by DRI1.

v2: Rebase

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
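A hedged sketch of the request-based wait; the struct and helper names follow this era of the driver, but the exact loop differs upstream:

    static int ring_wait_for_space(struct intel_ringbuffer *ringbuf, int bytes)
    {
        struct drm_i915_gem_request *request;

        /* Every advance of the ring tail is tied to a request, so
         * find the oldest request whose retirement frees enough
         * space and wait for it; no GPU-idling fallback needed. */
        list_for_each_entry(request, &ringbuf->ring->request_list, list) {
            if (__intel_ring_space(request->postfix, ringbuf->tail,
                                   ringbuf->size) >= bytes)
                return i915_wait_request(request);
        }
        return -ENOSPC;
    }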
-
Chris Wilson authored
When we submit a request to the GPU, we first take the rpm wakelock, and only release it once the GPU has been idle for a small period of time after all requests have been completed. This means that we are sure no new interrupt can arrive whilst we do not hold the rpm wakelock, and so we can drop the individual get/put around every single request inside execlists.

Note: to close one potential issue we should mark the GPU as busy earlier in __i915_add_request.

To elaborate: the issue is that we emit the irq signalling sequence before we grab the rpm reference, which means we could miss the resulting interrupt (since that's not set up when suspended). The only bad side effect is a missed interrupt; gt mmio writes automatically wake up the hw itself. But otoh we have an umbrella rpm reference for the entirety of execbuf, and as long as that's there we're covered.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Explain a bit more about the add_request issue, which after some irc chatting with Chris turns out to not be an issue really.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
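A simplified sketch of the umbrella reference, in the spirit of this era's intel_mark_busy()/intel_mark_idle() (the mm.busy flag reflects that code, trimmed down here):

    void intel_mark_busy(struct drm_device *dev)
    {
        struct drm_i915_private *dev_priv = dev->dev_private;

        if (dev_priv->mm.busy)
            return;

        /* One wakelock spans all outstanding requests... */
        intel_runtime_pm_get(dev_priv);
        dev_priv->mm.busy = true;
    }

    /* ...and intel_mark_idle() drops it only after the GPU has been
     * idle for a while, so per-request get/put becomes redundant. */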
-
Chris Wilson authored
We can use the simpler spinlock form to disable interrupts as we are always outside of an irq/softirq handler.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
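The simplification in miniature; valid precisely because these call sites always run in process context, never from an irq/softirq handler:

    /* Before: must save/restore because the caller context is unknown. */
    unsigned long flags;
    spin_lock_irqsave(&dev_priv->irq_lock, flags);
    spin_unlock_irqrestore(&dev_priv->irq_lock, flags);

    /* After: process context is guaranteed, so plain disable/enable. */
    spin_lock_irq(&dev_priv->irq_lock);
    spin_unlock_irq(&dev_priv->irq_lock);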
-
Ville Syrjälä authored
Recent BSW VBT has a VBT child device size of 37 bytes instead of the 33 bytes our code assumes. This means we fail to parse the VBT and thus fail to detect eDP ports properly, registering them as DP ports instead.

Fix it up by using the child device size reported by the VBT itself instead of assuming it matches our struct definitions. The latest spec I have shows that the child device size should be 36 bytes for rev >= 195; however, on my BSW the size is actually 37 bytes. And our current struct definition is 33 bytes.

Feels like the entire VBT parser would need to be rewritten to handle changes in the layout better, but for now I've decided to do just the bare minimum to get my eDP port back.

Cc: Vijay Purushothaman <vijay.a.purushothaman@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
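A hedged sketch of the parsing fix: stride through, and copy from, the child device array using the size the VBT reports (p_defs->child_dev_size), clamped to our struct, rather than sizeof() assumptions. Struct/field names loosely follow the driver's VBT definitions:

    static const void *
    child_device_ptr(const struct bdb_general_definitions *p_defs, int i)
    {
        /* Stride by the size the VBT declares, not our sizeof(). */
        return &p_defs->devices[i * p_defs->child_dev_size];
    }

    /* ...and never copy more than either side actually has: */
    memcpy(child_dev_ptr, p_child,
           min_t(size_t, p_defs->child_dev_size, sizeof(*child_dev_ptr)));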
-
Michel Thierry authored
True PPGTT is capable of having a full address space, even if the system has less allocated memory.

Note that aliasing PPGTT always aliases the GGTT and thus should remain of the same size.

Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
This finishes off the dynamic page table allocations, in the legacy 3 level style that already exists. Almost everything has already been set up to this point; the patch finishes off the enabling by setting the appropriate function pointers.

In LRC mode, contexts need to know the PDPs when they are populated. With dynamic page table allocations, these PDPs may not exist yet. Check if PDPs have been allocated and use the scratch page if they do not exist yet. Before submission, update the PDPs in the logical ring context as PDPs have been allocated.

v2: Update aliasing/true ppgtt allocate/teardown/clear functions for gen 6 & 7.
v3: Rebase.
v4: Remove BUG() from ppgtt_unbind_vma, but keep checking that either teardown_va_range or clear_range functions exist (Daniel).
v5: Similar to gen6, in init, the gen8_ppgtt_clear_range call is only needed for aliasing ppgtt. Zombie tracking was originally added for the teardown function and is no longer required.
v6: Update err_out case in gen8_alloc_va_range (missed from the latest rebase).
v7: Rebase after s/page_tables/page_table/.
v8: Updated scratch_pt check after the scratch flag was removed in the previous patch.
v9: Note that lrc mode needs to be updated to support init state without any PDP.
v10: Unmap the correct page_table in gen8_alloc_va_range's error case, clean up gen8_aliasing_ppgtt_init (remove duplicated map), and initialize PTs during page table allocation.
v11: Squashed the LRC enabling commit, otherwise LRC mode would be left broken until it was updated to handle the init case without any PDP.
v12: Do not overallocate the new_pts bitmap, make alloc_gen8_temp_bitmaps static and don't abuse inline functions. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
Like with gen6/7, we can enable bitmap tracking with all the preallocations to make sure things actually don't blow up.

v2: Rebased to match changes from previous patches.
v3: Without teardown logic, rely on used_pdpes and used_pdes when freeing page tables.
v4: Rebased after s/page_tables/page_table/.
v5: Rebased after page table generalizations.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
When we do dynamic page table allocations for gen8, we'll need to have more control over how and when we map page tables, similar to gen6. In particular, DMA mappings for page directories/tables occur at allocation time.

This patch adds the functionality and calls it at init, which should have no functional change. The PDPEs are still a special case for now; we'll need a function for that in the future as well.

v2: Handle renamed unmap_and_free_page functions.
v3: Updated after teardown_va logic was removed.
v4: Rebase after s/page_tables/page_table/.
v5: No longer allocate all PDPs in GEN8+ systems with less than 4GB of memory, and update populate_lr_context to handle this new case (proper tracking will be added later in the patch series).
v6: Assign lrc page directory pointer addresses using a macro. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
This will be useful for when we move to 48b addressing and the PDP isn't the root of the page table structure.

v2: Rebase after changes for Gen8+ systems with less than 4GB of memory.
v3: Rebase after Mika's code review.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
These values are never particularly useful for dynamic allocations of the page tables; getting rid of them will help prevent later confusion.

v2: Updated to use the unmap_and_free_pd functions.
v3: Updated gen8_ppgtt_free after teardown logic was removed.
v4: Rebase after s/page_tables/page_table/.
v5: Keep allocating all page directories in GEN8+ systems with less than 4GB of memory. Updated gen6_for_all_pdes.
v6: Prevent (harmless) out of range access in gen6_for_all_pdes.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
One important part of this patch is that we now write a scratch page directory into any unused PDP descriptors. This matters for two reasons: first, we're not allowed to just use 0 or an invalid pointer, and second, we must wipe out any previous contents from the last context.

The latter point only matters with full PPGTT. The former only affects platforms with less than 4GB of memory.

v2: Updated commit message to point out that we must set unused PDPs to the scratch page.
v3: Unmap scratch_pd in gen8_ppgtt_free.
v4: Initialize scratch_pd. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
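Conceptually the fill is tiny; a sketch, with the field names (pdp.page_directory[], scratch_pd) assumed from this series and GEN8_LEGACY_PDPES as the four-slot PDP:

    /* Hardware may prefetch through any PDP slot, and stale pointers
     * from the last context are as fatal as NULL, so park every
     * unused slot on the scratch page directory. */
    for (i = 0; i < GEN8_LEGACY_PDPES; i++)
        if (!ppgtt->pdp.page_directory[i])
            ppgtt->pdp.page_directory[i] = ppgtt->scratch_pd;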
-
Michel Thierry authored
Start using the gen8_for_each_pde macro to allocate page tables.

v2: teardown_va_range references removed.
v3: Rebase after s/page_tables/page_table/.
v4: Keep setting up page tables for all page directories in systems with less than 4GB of memory.
v5: Also initialize the page tables. (Mika)
v6: Initialize all page tables, including the extra ones from systems with less than 4GB of memory. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
Start using the gen8_for_each_pdpe macro to allocate the page directories.

Similar to PTs, while setting up a page directory, make all entries of the pd point to the scratch pd before mapping (and make all its entries point to the scratch page); this is to be safe in case of out of bound access or proactive prefetch. Systems without LLC require an explicit flush.

v2: Rebased after s/free_pt_*/unmap_and_free_pt/ change.
v3: Rebased after teardown va range logic was removed.
v4: Keep setting up all page directories for systems with less than 4GB of memory.
v5: Initialize PDs. (Mika)
v6: Initialize also the extra PDs from systems with less than 4GB of memory. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
Similar to gen6, we will use for_each_pde/for_each_pdpe and pte/pde/pdpe_index to iterate over these new structures.

v2: Match trace_i915_va_teardown params.
v3: Multiple rebases.
v4: Updated to use unmap_and_free_pt.
v5: teardown_va_range logic no longer needed.
v6: Rebase after s/page_tables/page_table/.
v7: Renamed commit to match what it does now (it was "Use dynamic allocation idioms on free").
v8: Prevent (harmless) out of range access in gen8_for_each_pde and gen8_for_each_pdpe.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: s/BUG/WARN/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
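A simplified sketch of one of these iterators; the upstream macros pack the same range bookkeeping into the for-step, and gen8_pde_index(), I915_PDES and GEN8_PDE_SHIFT are this series' real names:

    /* Walk the PDEs covering [start, start + length); 'temp' is a
     * scratch variable used by the step expressions to advance
     * start/length one page-table span (1 << GEN8_PDE_SHIFT bytes,
     * i.e. 2MB) at a time. The iter < I915_PDES bound prevents the
     * out-of-range access the v8 note mentions. */
    #define gen8_for_each_pde(pt, pd, start, length, temp, iter)        \
        for (iter = gen8_pde_index(start);                              \
             length > 0 && iter < I915_PDES &&                          \
                (pt = (pd)->page_table[iter], true);                    \
             temp = min_t(u64, length,                                  \
                          ALIGN(start + 1, 1 << GEN8_PDE_SHIFT) - start), \
             start += temp, length -= temp, iter++)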
-
Michel Thierry authored
Similar to gen6, while setting up a page table, make all entries of the pt point to the scratch page before mapping; this is to be safe in case of out of bound access or proactive prefetch. Systems without LLC require an explicit flush.

v2: Expanded commit text and fixed indentation (Mika)

Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
We are already unmapping them in gen8_ppgtt_free. This function became redundant since commit 06fda602 ("drm/i915: Create page table allocators").

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Michel Thierry authored
Let's try to keep this consistent:
Page Directory Pointer (PDP).
Page Directory (PD), also known as page directory pointer entries.
Page Table (PT), also known as page directory entries.

s/struct i915_page_table_entry/struct i915_page_table/
s/struct i915_page_directory_entry/struct i915_page_directory/
s/struct i915_page_directory_pointer_entry/struct i915_page_directory_pointer/

Suggested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
This is mostly useful for execlists, where the rings switch between contexts (and so checking that the ring's start register matches the context is important).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
This is just so that I don't have to read about the batch pool on systems that are not using it! Rather than using a newline between the kernel clients and userspace clients, just distinguish the internal allocations with a '[k]' marker.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Since we use obj->active as a hint in many places throughout the code, knowing its state in debugfs is extremely useful.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Now with the trimmed memcpy before the command parser, we try to allocate many different sizes of batches, predominantly one or two pages. We can therefore speed up searching for a good sized batch by keeping the objects in buckets of roughly the same size.

v2: Add a comment about bucket sizes

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
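The bucketing amounts to a log2 of the page count; a sketch of the placement, with cache_list[] assumed as the pool's per-size array of lists and batch_pool_link as the object's list hook:

    /* Bucket by allocation order so a lookup only scans buffers of
     * roughly the requested size (batch sizes are always at least
     * one page, so the shift below never yields zero). */
    int n = fls(obj->base.size >> PAGE_SHIFT) - 1;

    if (n >= ARRAY_SIZE(pool->cache_list))
        n = ARRAY_SIZE(pool->cache_list) - 1;
    list_move_tail(&obj->batch_pool_link, &pool->cache_list[n]);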
-
Chris Wilson authored
At runtime, this helps ensure that the batch pools are kept trim and fast. Then at suspend, this releases memory that we do not need to restore. It also ties into the oom-notifier to ensure that we recover as much kernel memory as possible during OOM.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-