1. 18 Jul, 2013 2 commits
    • drm: Added SDP and VSC structures for handling PSR for eDP · 52604b1f
      Shobhit Kumar authored
      SDP header and SDP VSC header as per eDP 1.3 spec, section 3.5,
      chapter "PSR Secondary Data Package Support".
      
      v2: Modified and corrected the structures to be more in line with
      kernel coding guidelines and rebased the code on Paulo's DP patchset
      v3: removing unnecessary indentation at DP_RECEIVER_CAP_SIZE
      v4: moving them to include/drm/drm_dp_helper.h and also already
          including EDP_PSR_RECEIVER_CAP_SIZE to add everything needed
          for PSR at once in drm_dp_helper.h
      v5: Fix SDP VSC header and indentation (by Paulo Zanoni) and
          remove i915 from title (Daniel Vetter)
      v6: Fix spec version and move comments from code to commit message
          since numbers might change in the future (by Paulo Zanoni).
      
      CC: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Sateesh Kavuri <sateesh.kavuri@intel.com>
      Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
      Acked-by: Dave Airlie <airlied@gmail.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      52604b1f
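      For reference, a minimal sketch of what such SDP and SDP VSC header structures
      might look like, based on the commit description. The field names and layout
      here are illustrative assumptions, not necessarily what was merged into
      include/drm/drm_dp_helper.h:

      ```c
      #include <linux/types.h>

      /* Generic secondary data packet (SDP) header: four header bytes HB0-HB3,
       * as described in eDP 1.3, section 3.5 (field names are illustrative). */
      struct edp_sdp_header {
              u8 hb0;        /* secondary data packet ID */
              u8 hb1;        /* secondary data packet type */
              u8 hb2;        /* revision number */
              u8 hb3;        /* number of valid data bytes */
      } __packed;

      /* VSC SDP payload used for PSR: the SDP header followed by data bytes. */
      struct edp_vsc_psr {
              struct edp_sdp_header sdp_header;
              u8 db0;
              u8 db1;                 /* PSR state / valid bits (illustrative) */
              u8 db2_to_db31[30];
      } __packed;
      ```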
    • drm/i915: Create VMAs · 2f633156
      Ben Widawsky authored
      Formerly: "drm/i915: Create VMAs (part 1)"
      
      In a previous patch, the notion of a VM was introduced. A VMA describes
      an area within a VM's address space. A VMA is similar to the concept
      in the linux mm. However, instead of representing regular memory, a VMA
      is backed by a GEM BO. There may be many VMAs for a given object, one
      for each VM the object is to be used in. This may occur through flink,
      dma-buf, or a number of other transient states.
      
      Currently the code depends on only 1 VMA per object, for the global GTT
      (and aliasing PPGTT). The following patches will address this and make
      the rest of the infrastructure more suited to multiple VMAs per object.
      
      v2: s/i915_obj/i915_gem_obj (Chris)
      
      v3: Only move an object to the now global unbound list if there are no
      more VMAs for the object which are bound into a VM (i.e. the list is
      empty).
      
      v4: killed obj->gtt_space
      some reworks due to rebase
      
      v5: Free vma on error path (Imre)
      
      v6: Another missed vma free in i915_gem_object_bind_to_gtt error path
      (Imre)
      Fixed vma freeing in stolen preallocation (Imre)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      [danvet: Squash in fixup from Ben to not deref a non-existing vma in
      set_cache_level, reported by Chris.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      2f633156
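      A hedged sketch of the kind of structure this introduces; the names follow the
      commit description, but the exact fields in the driver may differ:

      ```c
      #include <linux/list.h>
      #include <drm/drm_mm.h>

      struct drm_i915_gem_object;
      struct i915_address_space;

      /* Illustrative: one VMA binds one GEM BO into one address space (VM). */
      struct i915_vma {
              struct drm_mm_node node;                /* the object's range in this VM */
              struct drm_i915_gem_object *obj;        /* the GEM BO backing this mapping */
              struct i915_address_space *vm;          /* the VM this mapping lives in */
              struct list_head vma_link;              /* link in the object's list of VMAs */
      };
      ```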
  2. 17 Jul, 2013 5 commits
    • drm/i915: Free stolen node on failed preallocation · f7f18184
      Ben Widawsky authored
      The odds of this happening are *extremely* unlikely.
      Reported-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      f7f18184
    • drm/i915: Move active/inactive lists to new mm · 5cef07e1
      Ben Widawsky authored
      Shamelessly manipulated out of Daniel :-)
      "When moving the lists around explain that the active/inactive stuff is
      used by eviction when we run out of address space, so needs to be
      per-vma and per-address space. Bound/unbound otoh is used by the
      shrinker which only cares about the amount of memory used and not one
      bit about which address space this memory is used in. Of course
      to actually kick out an object we need to unbind it from every address
      space, but for that we have the per-object list of vmas."
      
      v2: Leave the bound list as a global one. (Chris, indirectly)
      
      v3: Rebased with no i915_gtt_vm. In most places I added a new *vm local,
      since it will eventually be replaced by a vm argument.
      Put comment back inline, since it no longer makes sense to do otherwise.
      
      v4: Rebased on hangcheck/error state movement
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5cef07e1
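      Roughly, the split described above looks like this. Only the list-related
      fields are shown, and the names are an assumption based on the commit message:

      ```c
      #include <linux/list.h>

      /* Eviction works per address space, so the active/inactive LRUs live in
       * the VM; the shrinker only cares about total memory used, so the
       * bound/unbound lists stay global (illustrative sketch). */
      struct i915_address_space {
              struct list_head active_list;   /* VMAs with outstanding GPU activity */
              struct list_head inactive_list; /* VMAs bound in this VM, but idle */
              /* ... */
      };

      struct i915_gem_mm {
              struct list_head bound_list;    /* objects bound into some VM */
              struct list_head unbound_list;  /* objects with pages, bound nowhere */
              /* ... */
      };
      ```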
    • drm/i915: Create a global list of vms · a7bbbd63
      Ben Widawsky authored
      After we plumb our code to support multiple address spaces (VMs), there
      are a few situations where we want to be able to traverse the list of
      all address spaces in the system. Cases like eviction or error state
      collection are obvious examples.
      
      v2: Delete the global link instead of the list head. While this in and
      of itself shouldn't really be a problem, doing this allows us to WARN
      on a non-empty list, which is a problem. (Daniel)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a7bbbd63
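      The traversal this enables looks roughly like the following; the list and
      member names (vm_list, global_link) are taken from the commit description and
      should be treated as illustrative, as should the driver-internal types:

      ```c
      #include <linux/list.h>

      /* Hedged sketch: walk every address space in the system, e.g. for
       * eviction or error state capture. */
      static void walk_all_vms_example(struct drm_i915_private *dev_priv)
      {
              struct i915_address_space *vm;

              list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
                      /* inspect, capture or evict from this VM */
              }
      }
      ```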
    • drm/i915: Put the mm in the parent address space · 93bd8649
      Ben Widawsky authored
      Every address space should support object allocation. It therefore makes
      sense to have the allocator be part of the "superclass" from which GGTT and
      PPGTT derive.
      
      Since our maximum address space size is only 2GB we're not yet able to
      avoid doing allocation/eviction; but we'd hope one day this becomes
      almost irrelevant.
      
      v2: Rebased
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      93bd8649
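      A minimal sketch of what "the mm in the parent address space" means in
      practice: both GGTT and PPGTT setup can initialize the same embedded drm_mm.
      The helper name is hypothetical, and it assumes an i915_address_space with an
      embedded struct drm_mm mm:

      ```c
      #include <drm/drm_mm.h>

      /* Hedged sketch: one drm_mm range allocator per VM, embedded in the
       * address-space base struct, manages that VM's GPU virtual address range. */
      static void i915_vm_init_allocator_example(struct i915_address_space *vm,
                                                 unsigned long start,
                                                 unsigned long size)
      {
              drm_mm_init(&vm->mm, start, size);
      }
      ```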
    • drm/i915: Move gtt and ppgtt under address space umbrella · 853ba5d2
      Ben Widawsky authored
      The GTT and PPGTT can be thought of more generally as GPU address
      spaces. Many of their actions (insert entries), state (LRU lists), and
      many of their characteristics (size) can be shared. Do that.
      
      The change itself doesn't actually impact most of the VMA/VM rework
      coming up, it just fits in with the grand scheme of abstracting the GPU
      VM operations. GGTT will usually be a special case where we know
      an object must be in the GGTT (display engine, workarounds, etc.).
      
      The scratch page is left as part of the VM (even though it's currently
      shared with the ppgtt code) because in the future when we have Full
      PPGTT, I intend to create a separate scratch page for each.
      
      v2: Drop usage of i915_gtt_vm (Daniel)
      Make cleanup also part of the parent class (Ben)
      Modified commit msg
      Rebased
      
      v3: Properly share scratch page (Imre)
      Finish commit message (Daniel, Imre)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      853ba5d2
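      Putting the pieces together, the "address space umbrella" roughly amounts to
      the layout below. The ops and fields are a hedged reconstruction from the
      commit message, not a verbatim copy of the merged code:

      ```c
      #include <drm/drm_mm.h>
      #include <linux/scatterlist.h>

      /* Shared GPU address-space "superclass" (illustrative sketch). */
      struct i915_address_space {
              struct drm_mm mm;               /* range allocator (see earlier entries) */
              struct page *scratch_page;      /* currently shared with the ppgtt code */
              unsigned long start;
              size_t total;

              /* Per-implementation PTE operations. */
              void (*clear_range)(struct i915_address_space *vm,
                                  unsigned int first_entry,
                                  unsigned int num_entries);
              void (*insert_entries)(struct i915_address_space *vm,
                                     struct sg_table *pages,
                                     unsigned int first_entry,
                                     unsigned int cache_level);
              void (*cleanup)(struct i915_address_space *vm);
      };

      /* GGTT and PPGTT both "derive" by embedding the base as their first member. */
      struct i915_gtt {
              struct i915_address_space base;
              /* GGTT-only fields ... */
      };

      struct i915_hw_ppgtt {
              struct i915_address_space base;
              /* PPGTT-only fields ... */
      };
      ```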
  3. 16 Jul, 2013 17 commits
  4. 12 Jul, 2013 5 commits
  5. 11 Jul, 2013 11 commits
    • drm/i915: Use for_each_pipe() when possible · 08e2a7de
      Damien Lespiau authored
      Came across two open-coded versions of for_each_pipe(); might as well use the
      macro.
      Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      08e2a7de
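      For context, the change replaces open-coded pipe loops with the existing
      helper, roughly as below. do_something() is a placeholder and the expansion of
      the macro shown in the first loop is approximate for this era of the driver:

      ```c
      /* Open-coded variant this patch removes: */
      for (pipe = 0; pipe < INTEL_INFO(dev)->num_pipes; pipe++)
              do_something(pipe);

      /* Equivalent, using the existing macro: */
      for_each_pipe(pipe)
              do_something(pipe);
      ```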
    • drm/i915: don't enable PM_VEBOX_CS_ERROR_INTERRUPT · c0d6a3dd
      Daniel Vetter authored
      The code to handle it is broken - there's simply no code to clear CS
      parser errors on gen5+. And behold, for all the other rings we also
      don't enable it!
      
      Leave the handling code itself in place just to be consistent with the
      existing mess though. And in case someone feels like fixing it all up.
      
      This has been erroneously enabled in
      
      commit 12638c57
      Author: Ben Widawsky <ben@bwidawsk.net>
      Date:   Tue May 28 19:22:31 2013 -0700
      
          drm/i915: Enable vebox interrupts
      
      Cc: Damien Lespiau <damien.lespiau@intel.com>
      Cc: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      c0d6a3dd
    • drm/i915: unify ring irq refcounts (again) · c7113cc3
      Daniel Vetter authored
      With the simplified locking there's no reason any more to keep the
      refcounts separate.
      
      v2: Readd the lost comment that ring->irq_refcount is protected by
      dev_priv->irq_lock.
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      c7113cc3
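      The unified refcount pattern, roughly. The hw_enable/hw_disable helper names
      are hypothetical; the real get/put callbacks live in the ring code:

      ```c
      /* Hedged sketch: a single irq_refcount per ring, protected by
       * dev_priv->irq_lock, enables the interrupt on 0 -> 1 and disables it
       * on 1 -> 0. */
      static bool ring_get_irq_example(struct intel_ring_buffer *ring)
      {
              struct drm_i915_private *dev_priv = ring->dev->dev_private;
              unsigned long flags;

              spin_lock_irqsave(&dev_priv->irq_lock, flags);
              if (ring->irq_refcount++ == 0)
                      hw_enable_ring_irq(ring);       /* first user: unmask in hardware */
              spin_unlock_irqrestore(&dev_priv->irq_lock, flags);

              return true;
      }

      static void ring_put_irq_example(struct intel_ring_buffer *ring)
      {
              struct drm_i915_private *dev_priv = ring->dev->dev_private;
              unsigned long flags;

              spin_lock_irqsave(&dev_priv->irq_lock, flags);
              if (--ring->irq_refcount == 0)
                      hw_disable_ring_irq(ring);      /* last user: mask again */
              spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
      }
      ```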
    • drm/i915: kill dev_priv->rps.lock · 59cdb63d
      Daniel Vetter authored
      Now that the rps interrupt locking isn't clearly separated (at least
      conceptually) from all the other interrupt locking, having a different
      lock stopped making sense: it protects much more than just the rps
      workqueue it started out with. But with the addition of VECS the
      separation started to blur and resulted in some more complex locking
      for the ring interrupt refcount.
      
      With this we can (again) unify the ringbuffer irq refcounts without
      causing a massive confusion, but that's for the next patch.
      
      v2: Explain better why the rps.lock once made sense and why no longer,
      requested by Ben.
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      59cdb63d
    • drm/i915: queue work outside spinlock in hsw_pm_irq_handler · 2adbee62
      Daniel Vetter authored
      And kill the comment about it. Queueing work is a barrier-type event;
      no amount of locking will help in ordering things (as long as we queue
      the work after having updated all relevant data structures). Also,
      queue_work() itself acts as a sufficient memory barrier.
      
      Again on the surface this is just a tiny micro-optimization to reduce
      the hold-time of dev_priv->irq_lock. But the better reason is that it
      reduces superficial locking and so makes it clearer what we actually
      need for correctness.
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      2adbee62
    • drm/i915: streamline hsw_pm_irq_handler · 41a05a3a
      Daniel Vetter authored
      The if (pm_iir & ~GEN6_PM_RPS_EVENTS) check was redundant. Otoh
      adding a check for rps events allows us to avoid the spinlock grabbing
      for VECS interrupts.
      
      v2: Drop misplaced hunk which now moved to the right patch.
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      41a05a3a
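      The two entries above both reshape hsw_pm_irq_handler; the resulting shape is
      roughly as follows. This is a hedged sketch with illustrative helper names
      (mask_pm_irq in particular is hypothetical): state is updated under the lock,
      the work is queued after dropping it, and the lock is only taken for actual
      RPS events:

      ```c
      static void hsw_pm_irq_handler_example(struct drm_i915_private *dev_priv,
                                             u32 pm_iir)
      {
              if (pm_iir & GEN6_PM_RPS_EVENTS) {
                      spin_lock(&dev_priv->irq_lock);
                      dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
                      mask_pm_irq(dev_priv, pm_iir & GEN6_PM_RPS_EVENTS);
                      spin_unlock(&dev_priv->irq_lock);

                      /* Outside the lock: queue_work() orders the update above. */
                      queue_work(dev_priv->wq, &dev_priv->rps.work);
              }

              /* VECS user interrupts don't need the lock at all. */
              if (pm_iir & PM_VEBOX_USER_INTERRUPT)
                      notify_ring(dev_priv->dev, &dev_priv->ring[VECS]);
      }
      ```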
    • drm/i915: irq handlers don't need interrupt-safe spinlocks · d0ecd7e2
      Daniel Vetter authored
      Since we only have one interrupt handler and interrupt handlers are
      non-reentrant.
      
      To drive the point really home give them all an _irq_handler suffix.
      
      This is a tiny micro-optimization but even more important it makes it
      clearer what locking we actually need. And in case someone screws this
      up: lockdep will catch hardirq vs. other context deadlocks.
      
      v2: Fix up compile fail.
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      d0ecd7e2
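      The distinction in practice, as a hedged sketch (function names are
      hypothetical):

      ```c
      #include <linux/interrupt.h>
      #include <linux/spinlock.h>

      /* In the (single, non-reentrant) interrupt handler the plain lock
       * suffices: nothing can interrupt us and take the same lock. */
      static irqreturn_t i915_irq_handler_example(int irq, void *arg)
      {
              struct drm_i915_private *dev_priv = arg;

              spin_lock(&dev_priv->irq_lock);         /* no _irqsave needed here */
              /* ... ack and record interrupt sources ... */
              spin_unlock(&dev_priv->irq_lock);

              return IRQ_HANDLED;
      }

      /* From process context the handler can interrupt us, so the irqsave
       * variants are still required there. */
      static void process_context_path_example(struct drm_i915_private *dev_priv)
      {
              unsigned long flags;

              spin_lock_irqsave(&dev_priv->irq_lock, flags);
              /* ... */
              spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
      }
      ```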
    • drm/i915: kill lpt pch transcoder->crtc mapping code for fifo underruns · de28075d
      Daniel Vetter authored
      It's racy: There's no guarantee that we won't walk this code (due to a
      pch fifo underrun interrupt) while someone is changing the pointers
      around.
      
      The only reason we do this is to use the right crtc for the pch fifo
      underrun accounting. But we never expose this to userspace, so
      essentially no one really cares if we use the "wrong" crtc.
      
      So let's just rip it out.
      
      With this patch fifo underrun code will always use crtc A for tracking
      underruns on the (only) pch transcoder on LPT.
      
      v2: Add a big comment explaining what's going on. Requested by Paulo.
      
      v3: Fixup spelling in comment as spotted by Paulo.
      
      Cc: Paulo Zanoni <przanoni@gmail.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      de28075d
    • drm/i915: improve GEN7_ERR_INT clearing for fifo underrun reporting · 7336df65
      Daniel Vetter authored
      Same treatment as for SERR_INT: If we clear only the bit for the pipe
      we're enabling (but unconditionally) then we can always check for
      possible underruns after having disabled the interrupt. That way pipe
      underruns won't be lost, but at worst only get reported in a delayed
      fashion.
      
      v2: The same logic bug as in the SERR handling change also existed
      here. The same bugfix of only reporting missed underruns when the
      error interrupt was masked applies, too.
      
      v3: Do the same fixes as for the SERR handling that Paulo suggested in
      his review:
      - s/%i/%c/ fix in the debug output
      - move the DE_ERR_INT_IVB read into the respective if block
      
      Cc: Paulo Zanoni <przanoni@gmail.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      [danvet: Fix up the checkpatch bikeshed Paulo noticed.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      7336df65
    • drm/i915: improve SERR_INT clearing for fifo underrun reporting · 1dd246fb
      Daniel Vetter authored
      The current code won't report any fifo underruns on cpt if just one
      pipe has fifo underrun reporting disabled. We can't enable the
      interrupts, but we can still check the per-transcoder bits and so
      report the underrun delayed if:
      - We always clear the transcoder's bit (and none of the other bits)
        when enabling.
      - We check the transcoder's bit after disabling (to avoid racing with
        the interrupt handler).
      
      v2: I've forgotten to actually remove the old SERR_INT clearing.
      
      v3: Use transcoder_name as suggested by Paulo Zanoni. Paulo also
      noticed a logic bug: When an underrun interrupt fires we report it
      both in the interrupt handler and when checking for underruns when
      disabling it in cpt_set_fifo_underrun_reporting. But that second check
      is only required if the interrupt is disabled and we're switching off
      underrun reporting (e.g. because we're disabling the crtc). Hence
      check for that condition.
      
      At first I wanted to rework the code to pass that bit of information
      from the upper functions down to cpt_set_fifo_underrun_reporting. But
      that turned out too messy. Hence the quick&dirty check whether the
      south error interrupt source is masked off or not.
      
      v4: Streamline the control flow a bit.
      
      v5: s/pipe/pch transcoder/ in the dmesg output, suggested by Paulo.
      
      v6: Review from Paulo:
      - Reorder the was_enabled assignment to only read the register when we
        need it. Also add a comment that we need to do that before updating
        the register.
      - s/%i/%c/ fix for the debug output.
      - Fix the checkpatch complaint in the SERR_INT_TRANS_FIFO_UNDERRUN
        #define.
      
      v7: Hopefully put that elusive SERR hunk back into this patch, spotted
      by Paulo.
      
      Cc: Paulo Zanoni <przanoni@gmail.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      1dd246fb
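      Putting the description above into code, the enable/disable paths end up
      looking roughly like this. This is a hedged reconstruction: the register and
      helper names follow the commit messages in this series, but details may differ
      from the merged patch:

      ```c
      static void cpt_set_fifo_underrun_reporting_example(struct drm_device *dev,
                                                          enum transcoder pch_transcoder,
                                                          bool enable)
      {
              struct drm_i915_private *dev_priv = dev->dev_private;

              if (enable) {
                      /* Clear only this transcoder's stale bit, unconditionally,
                       * so underruns on other transcoders aren't silently eaten. */
                      I915_WRITE(SERR_INT,
                                 SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder));

                      if (!cpt_can_enable_serr_int(dev))
                              return;

                      ibx_enable_display_interrupt(dev_priv, SDE_ERROR_CPT);
              } else {
                      uint32_t tmp = I915_READ(SERR_INT);
                      bool was_enabled = !(I915_READ(SDEIMR) & SDE_ERROR_CPT);

                      /* Change the state _after_ having read out the current one. */
                      ibx_disable_display_interrupt(dev_priv, SDE_ERROR_CPT);

                      /* Report delayed only if the interrupt was masked off, i.e.
                       * the interrupt handler couldn't have reported it already. */
                      if (!was_enabled &&
                          (tmp & SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder)))
                              DRM_DEBUG_KMS("pch fifo underrun on pch transcoder %c\n",
                                            transcoder_name(pch_transcoder));
              }
      }
      ```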
    • drm/i915: extract ibx_display_interrupt_update · fee884ed
      Daniel Vetter authored
      This way all changes to SDEIMR go through the same function, with
      the exception of the (single-threaded) setup/teardown code.
      
      For paranoia again add an assert_spin_locked.
      
      v2: For even more paranoia also sprinkle a spinlock assert over
      cpt_can_enable_serr_int since we need to have that one there, too.
      
      v3: Fix the logic of interrupt enabling, add enable/disable macros for
      the simple cases in the fifo code and add a comment. All requested by
      Paulo.
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      fee884ed
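      The extracted helper, roughly. This is a hedged sketch modelled on the commit
      description; the macros correspond to the "enable/disable macros for the
      simple cases" mentioned in v3:

      ```c
      /* Every SDEIMR change funnels through here, with an assert that the
       * caller holds the interrupt lock. */
      static void ibx_display_interrupt_update_example(struct drm_i915_private *dev_priv,
                                                       uint32_t interrupt_mask,
                                                       uint32_t enabled_irq_mask)
      {
              uint32_t sdeimr = I915_READ(SDEIMR);

              sdeimr &= ~interrupt_mask;
              sdeimr |= (~enabled_irq_mask & interrupt_mask);

              assert_spin_locked(&dev_priv->irq_lock);

              I915_WRITE(SDEIMR, sdeimr);
              POSTING_READ(SDEIMR);
      }

      /* Simple wrappers for the fifo underrun code. */
      #define ibx_enable_display_interrupt(dev_priv, bits) \
              ibx_display_interrupt_update_example((dev_priv), (bits), (bits))
      #define ibx_disable_display_interrupt(dev_priv, bits) \
              ibx_display_interrupt_update_example((dev_priv), (bits), 0)
      ```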