- 26 Jun, 2015 9 commits
-
-
Mika Kuoppala authored
As there is flushing involved when we have done the cpu write, add functions for mapping paging structures for cpu access, and macros to map any type of paging structure. v2: Make it clear that flushing kunmap is only for ppgtt (Ville) v3: Flushing fixed (Ville, Michel). Removed superfluous semicolon. Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
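A minimal sketch of the idea (not the in-tree code; the struct and function names here are illustrative): map the backing page of a paging structure for CPU access, and flush on unmap only in the ppgtt case.

#include <linux/types.h>
#include <linux/highmem.h>
#include <drm/drm_cache.h>

/* Illustrative stand-in for a paging structure's backing page. */
struct sketch_page_dma {
	struct page *page;
	dma_addr_t daddr;
};

static void *sketch_kmap_page_dma(struct sketch_page_dma *p)
{
	return kmap_atomic(p->page);
}

/* Flushing on kunmap is only needed for ppgtt structures. */
static void sketch_kunmap_page_dma(void *vaddr, bool needs_flush)
{
	if (needs_flush)
		drm_clflush_virt_range(vaddr, PAGE_SIZE);
	kunmap_atomic(vaddr);
}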
-
Mika Kuoppala authored
When we set up page directories and tables, we point the entries to the next level scratch structure. Make this generic by introducing fill_page_dma, which maps and flushes. We also need a 32 bit variant for legacy gens. v2: Fix flushes and handle valleyview (Ville) v3: Now really fix flushes (Michel, Ville) Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
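A sketch of what such a generic fill helper could look like, with a 32 bit wrapper for legacy gens; names and the flush condition are illustrative, not the in-tree code.

#include <linux/types.h>
#include <linux/highmem.h>
#include <drm/drm_cache.h>

/* Fill every entry of a paging structure with a scratch value and flush. */
static void sketch_fill_page_dma(struct page *page, const u64 val,
				 bool needs_flush)
{
	u64 *vaddr = kmap_atomic(page);
	int i;

	for (i = 0; i < PAGE_SIZE / sizeof(u64); i++)
		vaddr[i] = val;

	if (needs_flush)
		drm_clflush_virt_range(vaddr, PAGE_SIZE);
	kunmap_atomic(vaddr);
}

/* Legacy gens use 32 bit entries: duplicate the value into both halves. */
static void sketch_fill_page_dma_32(struct page *page, const u32 val,
				    bool needs_flush)
{
	sketch_fill_page_dma(page, (u64)val << 32 | val, needs_flush);
}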
-
Mika Kuoppala authored
This has slipped in somewhere but it was harmless as we check the page pointer before teardown. Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Mika Kuoppala authored
All the paging structures are now similar and mapped for dma. The unmapping is taken care of by common accessors, so don't overload the reader with such details. v2: Be consistent with goto labels (Michel) Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Mika Kuoppala authored
All our paging structures have struct page and dma address for that page. Add a struct for page/dma address pairs and use it to make the setup and teardown for different paging structures identical. Also include the page directory offset in the struct for legacy gens, and rename it to make clear that it is an offset into the ggtt. v2: Add comment about ggtt_offset (Michel) Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
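A sketch of the kind of page/dma pair the commit describes, with the legacy ggtt offset folded in; the exact layout below is an assumption based on the changelog, not the final definition.

#include <linux/types.h>

struct sketch_page_dma_pair {
	struct page *page;
	union {
		dma_addr_t daddr;
		/*
		 * Legacy gens keep the page directory inside the ggtt, so
		 * they track an offset into the ggtt rather than a dma
		 * address -- hence the ggtt_ prefix in the rename.
		 */
		u32 ggtt_offset;
	};
};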
-
Mika Kuoppala authored
The legacy mode mm switch and the execlist context assignment need dma addresses for the page directories. Introduce a function that encapsulates the scratch_pd dma fallback if no pd is found. v2: Rebase, s/ring/req Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v1) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
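The shape of such an accessor might be as below; the structure layout and names are simplified stand-ins for illustration only.

#include <linux/types.h>

struct sketch_page_dma { struct page *page; dma_addr_t daddr; };

struct sketch_ppgtt {
	struct sketch_page_dma *pd[4];		/* page directories, may be NULL */
	struct sketch_page_dma *scratch_pd;	/* always allocated */
};

/* Return the dma address to program, falling back to the scratch pd. */
static dma_addr_t sketch_pd_dma_addr(struct sketch_ppgtt *ppgtt, int n)
{
	struct sketch_page_dma *pd = ppgtt->pd[n];

	return (pd ? pd : ppgtt->scratch_pd)->daddr;
}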
-
Mika Kuoppala authored
We can have an exactly 4GB sized ppgtt on a 32 bit system, and size_t is inadequate for this. v2: Convert a lot more places (Daniel) Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
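A tiny standalone illustration of the problem (hypothetical numbers, userspace C for brevity): a full 4GB size does not fit in a 32 bit size_t.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
	uint64_t vm_total = 1ULL << 32;		/* a full 4GB ppgtt */
	size_t truncated = (size_t)vm_total;	/* becomes 0 on a 32 bit build */

	printf("u64: %llu, size_t: %zu\n",
	       (unsigned long long)vm_total, truncated);
	return 0;
}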
-
Mika Kuoppala authored
Check the allocation area against the known end of the address space instead of against a fixed value. v2: Return ENODEV on internal bugs (Chris) Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Michel Thierry <michel.thierry@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
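A sketch of the changed check, with illustrative names; the -ENODEV return for driver-internal bugs follows the v2 note above.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/bug.h>

/* Check a requested range against the vm's actual size, not a constant. */
static int sketch_check_range(u64 vm_total, u64 start, u64 length)
{
	if (WARN_ON(start + length > vm_total))
		return -ENODEV;	/* hitting this is a driver bug */

	return 0;
}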
-
Mika Kuoppala authored
When we touch gen8+ page maps, mark them dirty like we do with previous gens. v2: Update comment (Joonas) Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 25 Jun, 2015 2 commits
-
-
Ville Syrjälä authored
Currently we don't have any real indication when a pipe gets enabled/disabled. Add some. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
Avoid some 'switch (plane->type)' by storing the frontbuffer_bits in intel_plane. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> [danvet: use singular frontbuffer_bits in intel_plane since a plane can only ever have one bit. Discussed with Ville on irc.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
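Roughly, the idea is to resolve the plane type into its single frontbuffer bit once at init time, so later code just reads the stored bit. The sketch below assumes a 4-bits-per-pipe frontbuffer layout and made-up names; it is an illustration, not the driver's code.

#define SKETCH_FRONTBUFFER_BITS_PER_PIPE	4

enum sketch_plane_type { SKETCH_PRIMARY, SKETCH_CURSOR, SKETCH_SPRITE };

struct sketch_plane {
	enum sketch_plane_type type;
	unsigned int frontbuffer_bit;	/* singular: a plane owns exactly one bit */
};

static void sketch_plane_init_bit(struct sketch_plane *plane, int pipe)
{
	unsigned int shift = pipe * SKETCH_FRONTBUFFER_BITS_PER_PIPE;

	/* The type switch happens once here instead of at every use site. */
	switch (plane->type) {
	case SKETCH_PRIMARY:
		plane->frontbuffer_bit = 1u << shift;
		break;
	case SKETCH_CURSOR:
		plane->frontbuffer_bit = 1u << (shift + 1);
		break;
	case SKETCH_SPRITE:
		plane->frontbuffer_bit = 1u << (shift + 2);
		break;
	}
}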
-
- 24 Jun, 2015 5 commits
-
-
Bob Paauwe authored
The registers and process differ from other platforms. If the hardware was programmed incorrectly, this will return invalid cdclk values, which should then cause reprogramming of the hardware. v2(Matt): Return 19.2 MHz when DE PLL is disabled (Ville) v3: Make fewer assumptions about the hardware state (Ville) Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Bob Paauwe <bob.j.paauwe@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Tvrtko Ursulin authored
Currently object size is returned for the rotated VMA size, which can be bigger than the rotated view itself. Since the binding code pads all excess size with scratch pages, the only minor issue with this is wasting some GGTT space, but it still feels nicer to fix this and report the real size. v2: Rebase for tracking size in bytes instead of pages. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Tvrtko Ursulin authored
This way data is available as soon as the view is passed into the call chain. v2: Store size in bytes instead of pages under the appropriate name. (Chris Wilson) v3: Use uint64_t instead of size_t. (Daniel Vetter) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (v2) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Tvrtko Ursulin authored
It is only used in logging and it doesn't need to exist on its own. Also it was misleading to log view size as object size. v2: Improve commit message. (Joonas Lahtinen) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> [danvet: s/%lu/%zu/ where needed, reported by 0-day.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
With the new DRRS code it kinda sticks out, and we never managed to get this to work well enough without causing issues. Time to wave goodbye. I've decided to keep the logic for programming the reduced clocks intact, but everything else is gone. If anyone ever wants to resurrect this we need to redo it all anyway on top of the frontbuffer tracking. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 Jun, 2015 24 commits
-
-
Imre Deak authored
This typo led to the crtc scaler getting enabled incorrectly and an eventual state checker mismatch about the scaler_id. Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Arun Siluvery authored
In the indirect context w/a batch buffer, add WaClearSlmSpaceAtContextSwitch. This WA performs writes to the scratch page, so it must be valid; this check is performed before initializing the batch with this WA. v2: s/PIPE_CONTROL_FLUSH_RO_CACHES/PIPE_CONTROL_FLUSH_L3 (Ville) v3: GTT bit in scratch address should be mbz (Chris) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
Must have missed the transition. Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Daniel Vetter authored
The frontbuffer code gives us accurate information about activity, let's use it. Again this should avoid unnecessary updates when multiple screens are on. Also realign function parameters, I couldn't resist that bit of OCD. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Daniel Vetter authored
The current code tracks busyness across all pipes, but we're only really interested in the one pipe DRRS is enabled on. Fairly tiny optimization, but something I noticed while reading the code. It might matter a bit when e.g. showing a video only on the external screen, while the panel is kept static. Also regroup the code slightly: first compute the new bitmasks, then take appropriate actions. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Durgadoss R <durgadoss.r@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Daniel Vetter authored
The current code tracks busyness across all pipes, but we're only really interested in the one pipe DRRS is enabled on. Fairly tiny optimization, but something I noticed while reading the code. It might matter a bit when e.g. showing a video only on the external screen, while the panel is kept static. Also regroup the code slightly: first compute the new bitmasks, then take appropriate actions. Cc: Ramalingam C <ramalingam.c@intel.com> Cc: Sivakumar Thulasimani <sivakumar.thulasimani@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
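The regrouping described in this and the previous commit boils down to something like the sketch below: first mask the incoming frontbuffer bits down to the pipe of interest, then act on the result. The mask macro assumes a 4-bits-per-pipe frontbuffer layout; all names are illustrative.

#define SKETCH_FRONTBUFFER_BITS_PER_PIPE	4
#define SKETCH_FRONTBUFFER_ALL_MASK(pipe) \
	(0xfu << ((pipe) * SKETCH_FRONTBUFFER_BITS_PER_PIPE))

/* Returns the still-busy bits for the pipe the feature is enabled on. */
static unsigned int sketch_update_busy_bits(unsigned int busy_bits,
					    unsigned int flush_bits,
					    int pipe)
{
	/* 1) compute the new bitmask for our pipe only ... */
	unsigned int new_bits =
		(busy_bits & ~flush_bits) & SKETCH_FRONTBUFFER_ALL_MASK(pipe);

	/* 2) ... then take action, e.g. re-enable downclocking when
	 * new_bits drops to zero. */
	return new_bits;
}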
-
Daniel Vetter authored
I was momentarily confused until I've double-checked that these functions really only compute state and don't update the hardware state. They once did that, but since Ander's rework of the dpll computation flow that's no longer the case. Rename them to avoid further confusion. Note that the ilk code already follows the compute_dpll naming scheme for computing the actual register value. DDI code goes with _calc_, but that is close enough. Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Daniel Vetter authored
Useful to figure out whether stuck bits are due to the frontbuffer tracking code as opposed to individual consumers (who have their own bitmask tracking). Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Daniel Vetter authored
Paulo noticed that the fbc frontbuffer tracking flush callback occasionally gets a call without any bit set. This can happen when we have to filter flush calls due to e.g. gpu rendering. Filter these out. Reported-by: Paulo Zanoni <przanoni@gmail.com> Cc: Paulo Zanoni <przanoni@gmail.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
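In essence the fix is an early return along these (illustrative) lines:

/* Flush callback sketch: ignore calls where no frontbuffer bit survived
 * the gpu-activity filtering done by the caller. */
static void sketch_fbc_flush(unsigned int frontbuffer_bits)
{
	if (!frontbuffer_bits)
		return;

	/* ... do the actual fbc flush here ... */
}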
-
Daniel Vetter authored
The current/old frontbuffer might still have gpu frontbuffer rendering pending. But once flipped it won't have the corresponding frontbuffer bits any more, and hence the request retire function won't ever clear the corresponding busy bits. The async flip tracking (with the flip_prepare and flip_complete functions) already does this, but somehow I've forgotten to do this for synchronous flips. Note that we don't track outstanding rendering of the new framebuffer with busy_bits since all our plane update code waits for previous rendering to complete before displaying a new buffer. Hence a new buffer will never be busy. v2: Drop the spurious inline Ville spotted. v3: Don't touch flip_bits in the synchronous frontbuffer_flip function, noticed by Paulo. v4: Remove one more inline that slipped through (Paulo). Reported-by: Paulo Zanoni <przanoni@gmail.com> Cc: Paulo Zanoni <przanoni@gmail.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Testcase: igt/kms_frontbuffer_tracking/fbc-modesetfrombusy Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
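A simplified sketch of the synchronous flip path after this fix; the busy_bits handling here illustrates the idea rather than the driver's actual function.

/* On a synchronous flip, drop any busy bits still owed to the old
 * frontbuffer: its requests can no longer clear them on retire, and the
 * new buffer is never busy because plane updates wait for rendering. */
static void sketch_frontbuffer_flip(unsigned int *busy_bits,
				    unsigned int frontbuffer_bits)
{
	*busy_bits &= ~frontbuffer_bits;

	/* ... then flush the consumers (fbc, psr, drrs) for these bits ... */
}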
-
Arun Siluvery authored
To initialize the WA batch, at the moment we first allocate the batch and then check whether we have any WA to be initialized for the given Gen; if we don't have any WA then we WARN the user, destroy the batch and return, but this is causing another WARN in the cleanup code complaining about sleeping in atomic context. Till we understand this better and to keep things simpler, bail out early if we don't have any WA. Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Arun Siluvery authored
Kernel 0-day framework reported warnings with WA batch patches, this patch fixes those warnings and an additional warning reported in intel_lrc.c file. Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
As there is no OLR to check, the check_olr() function is now a no-op and can be removed. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
A bunch of the low level LRC functions were passing around ringbuf and ctx pairs. In a few cases, they took the r/c pair and a request as well. This is all quite messy and unnecessary. The context_queue() call is especially bad since the fake request code got removed - it takes a request and three extra things that must be extracted from the request, and then it checks them against what it finds in the request. Removing all the derivable data makes the code much simpler all round. This patch updates those functions to just take the request structure. Note that logical_ring_wait_for_space now takes a request structure but already had a local request pointer that it uses to scan for something to wait on. To avoid confusion the local variable has been renamed 'target' (it is searching for a target request to do something with) and the parameter has been called req (to guarantee anything accidentally missed gets a compiler error). v2: Updated commit message re wait_for_space (Tomas Elf review comment). For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
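Schematically, the interface change looks like the sketch below (simplified stand-in types and names, not the driver's real structures): everything that used to be passed alongside the request is now derived from it.

#include <linux/types.h>

struct sketch_ringbuf;
struct sketch_ctx;

struct sketch_request {
	struct sketch_ringbuf *ringbuf;
	struct sketch_ctx *ctx;
	u32 tail;
};

/* before: roughly context_queue(ring, ctx, tail, request)
 * after:  context_queue(request) -- the rest is derivable */
static int sketch_context_queue(struct sketch_request *req)
{
	struct sketch_ringbuf *ringbuf = req->ringbuf;	/* derived */
	struct sketch_ctx *ctx = req->ctx;		/* derived */

	/* ... queue the execlist submission for ctx/ringbuf at req->tail ... */
	(void)ringbuf;
	(void)ctx;
	return 0;
}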
-
John Harrison authored
The LRC submission code requires a request for tracking purposes. It does not actually require that request to 'complete', it simply uses it for keeping hold of reference counts on contexts and such like. Previously, the fall back path of polling for space in the ring would start by submitting any outstanding work that was sitting in the buffer. This submission was not done as part of the request that owned that work, because that would lead to complications with the request being submitted twice. Instead, a null request structure was passed in to the submit call and a fake one was created. That fall back path has long since been obsoleted and has now been removed. Thus there is never any need to fake up a request structure. This patch removes that code. A couple of sanity check warnings are added as well, just in case. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
In _i915_add_request(), the request is associated with a userland client. Specifically it is linked to the 'file' structure and the current user process is recorded. One problem here is that the current user process is not necessarily the same as when the request was submitted to the driver. This is especially true when the GPU scheduler arrives and decouples driver submission from hardware submission. Note also that it is only in the case where the add request comes from an execbuff call that there is a client to associate. Any other add request call is kernel only so does not need to do it. This patch moves the client association into a separate function. This is then called from the execbuffer code path itself at a sensible time. It also removes the now redundant 'file' pointer from the add request parameter list. An extra cleanup of the client association is also added to the request clean up code for the eventuality where the request is killed after association but before being submitted (e.g. due to out of memory error somewhere). Once the submission has happened, the request is on the request list and the regular request list removal will clear the association. Note that this still needs to happen at this point in time because the request might be kept floating around much longer (due to someone holding a reference count) and the client should not be worrying about this request after it has been retired. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
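A sketch of how such a split could look, with simplified stand-in types and names; the real driver additionally records the submitting pid and links the request into the client's request list under a lock.

#include <linux/errno.h>

struct sketch_file_priv;	/* per-client state, opaque here */

struct sketch_client_request {
	struct sketch_file_priv *file_priv;	/* NULL for kernel-internal requests */
};

/*
 * Called from the execbuffer path at submission time, instead of being
 * buried inside add_request; only userland-originated requests have a
 * client to associate with.
 */
static int sketch_request_add_to_client(struct sketch_client_request *req,
					struct sketch_file_priv *file_priv)
{
	if (req->file_priv)	/* double association is a bug */
		return -EINVAL;

	req->file_priv = file_priv;
	/* ... record the current pid and link into the client's request list ... */
	return 0;
}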
-
John Harrison authored
The outstanding_lazy_request is no longer used anywhere in the driver. Everything that was looking at it now has a request explicitly passed in from on high. Everything that was relying upon it behind the scenes is now explicitly creating/passing/submitting its own private request. Thus the OLR can be removed. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Much of the driver has now been converted to passing requests around instead of rings/ringbufs/contexts. Thus the function for retrieving the request from a ring (i.e. the OLR) is no longer used and can be removed. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Now that the *_ring_begin() functions no longer call the request allocation code, it is finally safe for the request allocation code to call *_ring_begin(). This is important to guarantee that the space reserved for the subsequent i915_add_request() call does actually get reserved. v2: Renamed functions according to review feedback (Tomas Elf). For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Now that everything above has been converted to use requests, intel_logical_ring_begin() can be updated to take a request instead of a ringbuf/context pair. This also means that it no longer needs to lazily allocate a request if no-one happens to have done it earlier. Note that this change makes the execlist signature the same as the legacy version. Thus the two functions could be merged into a ring->begin() wrapper if required. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Now that everything above has been converted to use requests, intel_ring_begin() can be updated to take a request instead of a ring. This also means that it no longer needs to lazily allocate a request if no-one happens to have done it earlier. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Updated intel_ring_cacheline_align() to take a request instead of a ring. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Updated the various ring->signal() implementations to take a request instead of a ring. This removes their reliance on the OLR to obtain the seqno value that should be used for the signal. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
John Harrison authored
Updated the ring->sync_to() implementations to take a request instead of a ring. Also updated the tracer to include the request id. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> [danvet: Rebase since I didn't merge the patch which added ->uniq.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-