  1. 26 Jan, 2013 5 commits
  2. 24 Jan, 2013 24 commits
  3. 22 Jan, 2013 6 commits
  4. 21 Jan, 2013 2 commits
    • drm/i915: clarify concurrent hang detect/gpu reset consistency · 7db0ba24
      Daniel Vetter authored
      Damien Lespiau wondered how racy the gpu reset/hang detection code is
      against concurrent gpu resets/hang detections or combinations thereof.
      Luckily the single work item is guaranteed to never run concurrently,
      so reset handling is already single-threaded.
      
      Hence we only have to worry about concurrent hang detections, or a
      hang detection firing off while we're still processing an older gpu
      reset request. Due to the new mechanism of setting the reset in
      progress flag and the ordering guaranteed by the schedule_work
      function there's nothing to do but add a comment explaining why we're
      safe.
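      
      A rough sketch of the detection side under this scheme (hedged; the
      field and flag names follow the reset_counter work further down this
      log, and error-state capture details are omitted):
      
          void i915_handle_error(struct drm_device *dev, bool wedged)
          {
                  struct drm_i915_private *dev_priv = dev->dev_private;
      
                  if (wedged) {
                          /* Mark the reset as in progress before kicking the
                           * work item.  schedule_work() implies a full memory
                           * barrier, and the same work item never runs
                           * concurrently with itself, so the actual reset
                           * handling stays single-threaded even when several
                           * hang detections fire at once. */
                          atomic_set_mask(I915_RESET_IN_PROGRESS_FLAG,
                                          &dev_priv->gpu_error.reset_counter);
                          schedule_work(&dev_priv->gpu_error.work);
                  }
          }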
      
      The only thing I've noticed is that we still try to reset the gpu now,
      even when it is declared terminally wedged. Add a check for that to
      avoid continuous warnings about failed resets, in case the hangcheck
      timer ever gets stuck.
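      
      In the reset work handler this becomes a simple early bail-out
      (sketch; i915_terminally_wedged is the helper introduced in the
      wedged-state rework further down this log):
      
          /* Don't attempt yet another reset if the GPU is already
           * declared terminally wedged - otherwise a stuck hangcheck
           * timer would produce an endless stream of failed-reset
           * warnings. */
          if (i915_terminally_wedged(&dev_priv->gpu_error))
                  return;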
      Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: create a race-free reset detection · f69061be
      Daniel Vetter authored
      With the previous patch the state transition handling of the reset
      code itself is now (hopefully) race free and solid. But that still
      leaves out everyone else - with the various lock-free wait paths
      we have there's the possibility that the reset happens between the
      point where we read the seqno we should wait on and the actual wait.
      
      And if __wait_seqno then never sees the RESET_IN_PROGRESS state, we'll
      happily wait for a seqno which will in all likelihood never signal.
      
      In practice this is not a big problem since the X server gets
      constantly interrupted, and can then submit more work (hopefully) to
      unblock everyone else: As soon as a new seqno write lands, all waiters
      will unblock. But running the i-g-t reset testcase ZZ_hangman can
      expose this race, especially on slower hw with fewer cpu cores.
      
      Looking forward to ARB_robustness and friends, that's not the best
      possible behaviour, hence this patch adds a reset_counter to be able
      to detect any reset, even if a given thread never observed the
      in-progress state.
      
      The important part is to correctly order things:
      - The write side needs to increment the counter after any seqno gets
        reset.  Hence we need to do that at the end of the reset work, and
        again wake everyone up. We also need to place a barrier in between
        any possible seqno changes and the counter increment, since unlock
        operations only guarantee that nothing leaks out of the critical
        section, but don't prevent a later load operation from being moved
        ahead of them.
      - On the read side we need to ensure that no reset can sneak in and
        invalidate the seqno. In all cases we can use the one-sided barrier
        that unlock operations guarantee (of the lock protecting the
        respective seqno/ring pair) to ensure correct ordering. Hence it is
        sufficient to place the atomic read before the mutex/spin_unlock and
        no additional barriers are required.
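      
      On the read side this boils down to sampling the counter before
      dropping the lock, roughly as in the nonblocking wait path (a sketch
      under the assumptions above, not the literal patch; the __wait_seqno
      signature is as assumed here):
      
          /* Sample the reset counter while the lock protecting the
           * seqno/ring pair is still held; the one-sided barrier of
           * mutex_unlock() ensures this read cannot be reordered past
           * the unlock, so no reset can sneak in unnoticed. */
          reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
          seqno = obj->last_read_seqno;
          mutex_unlock(&dev->struct_mutex);
      
          /* __wait_seqno bails out (with -EAGAIN) as soon as the counter
           * it was handed no longer matches the current value. */
          ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);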
      
      The end-result of all this is that we need to wake up everyone twice
      in a reset operation:
      - First, before the reset starts, to get any lockholders of the locks,
        so that the reset can proceed.
      - Second, after the reset is completed, to allow waiters to properly
        and reliably detect the reset condition and bail out.
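      
      Sketched out, the write side in the reset work then looks roughly
      like this (hedged; error paths and state capture trimmed):
      
          /* First wake-up, before the actual reset: kick anyone sitting
           * on a lock so the reset work can grab it. */
          wake_up_all(&dev_priv->gpu_error.reset_queue);
      
          ret = i915_reset(dev);
      
          if (ret == 0) {
                  /* Barrier between the seqno writes done by the reset
                   * and the counter increment, then leave the
                   * reset-in-progress state by bumping the counter. */
                  smp_mb__before_atomic_inc();
                  atomic_inc(&dev_priv->gpu_error.reset_counter);
          } else {
                  /* Reset failed: wedge the GPU terminally, and don't
                   * increment the counter any further (see v5 below). */
                  atomic_set(&dev_priv->gpu_error.reset_counter,
                             I915_WEDGED);
          }
      
          /* Second wake-up, after the reset: waiters can now reliably
           * observe either the bumped counter or the wedged state. */
          wake_up_all(&dev_priv->gpu_error.reset_queue);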
      
      I admit that this entire reset_counter thing smells a bit like
      overkill, but I think it's justified since it makes it really explicit
      what the bail-out condition is. And we need a reset counter anyway to
      implement ARB_robustness, and imo with finer-grained locking on the
      horizon this is the most resilient scheme I could think of.
      
      v2: Drop spurious change in the wait_for_error EXIT_COND - we only
      need to wait until we leave the reset-in-progress wedged state.
      
      v3: Don't play tricks with barriers in the throttle ioctl, the
      spin_unlock is barrier enough.
      
      I've also considered using a little helper to grab the current
      reset_counter, but then decided that hiding the atomic_read isn't a
      great idea, since having it explicitly show up in the code is a nice
      reminder to reviewers to check the memory barriers.
      
      v4: Add a comment to explain why we need to fall through in
      __wait_seqno in the end variable assignments.
      
      v5: Review from Damien:
      - s/smb/smp/ in a comment
      - don't increment the reset counter after we've set it to WEDGED. Now
        we (again) properly wedge the gpu when the reset fails.
      Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  5. 20 Jan, 2013 3 commits
    • drm/i915: Only apply the mb() when flushing the GTT domain during a finish · 97c809fd
      Chris Wilson authored
      Now that we seem to have brought order to the GTT barriers, the last one
      to review is the terminal barrier before we unbind the buffer from the
      GTT. This needs to only be performed if the buffer still resides in the
      GTT domain, and so we can skip some needless barriers otherwise.
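      
      The resulting logic is roughly the following (a hedged sketch, close
      to but not guaranteed to be the literal diff; domain-change tracing
      is omitted):
      
          static void i915_gem_object_finish_gtt(struct drm_i915_gem_object *obj)
          {
                  /* Force a pagefault for domain tracking on next user access */
                  i915_gem_release_mmap(obj);
      
                  /* Skip the terminal barrier (and the domain update) if the
                   * object no longer resides in the GTT domain. */
                  if ((obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0)
                          return;
      
                  /* Wait for any direct GTT access to complete */
                  mb();
      
                  obj->base.read_domains &= ~I915_GEM_DOMAIN_GTT;
                  obj->base.write_domain &= ~I915_GEM_DOMAIN_GTT;
          }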
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: Only insert the mb() before updating the fence parameter · d0a57789
      Chris Wilson authored
      With a fence, we only need to insert a memory barrier around the actual
      fence alteration for CPU accesses through the GTT. Performing the
      barrier in flush-fence was inserting unnecessary and expensive barriers
      for never fenced objects.
      
      Note removing the barriers from flush-fence, which was effectively a
      barrier before every direct access through the GTT, revealed that we
      were missing a barrier before the first access through the GTT. Lack of
      that barrier was sufficient to cause GPU hangs.
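      
      A hedged sketch of the barrier placement (the helper name and the
      fence-write path are assumptions here, not the verified diff; the
      per-generation register programming is omitted):
      
          /* Only objects currently accessed by the CPU through the GTT
           * need serialising; never-fenced objects skip the expensive
           * mb() entirely. */
          static inline bool i915_gem_object_needs_mb(struct drm_i915_gem_object *obj)
          {
                  return obj && obj->base.read_domains & I915_GEM_DOMAIN_GTT;
          }
      
          static void i915_gem_write_fence(struct drm_device *dev, int reg,
                                           struct drm_i915_gem_object *obj)
          {
                  struct drm_i915_private *dev_priv = dev->dev_private;
      
                  /* Ensure that all CPU reads are completed before installing
                   * a fence and all CPU writes before removing the fence. */
                  if (i915_gem_object_needs_mb(dev_priv->fence_regs[reg].obj))
                          mb();
      
                  /* ... program the per-generation fence register here ... */
          }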
      
      v2: Add a couple more comments to explain the new barriers
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: clear up wedged transitions · 1f83fee0
      Daniel Vetter authored
      We have two important transitions of the wedged state in the current
      code:
      
      - 0 -> 1: This means a hang has been detected, and signals to everyone
        that they please get off any locks, so that the reset work item can
        do its job.
      
      - 1 -> 0: The reset handler has completed.
      
      Now the last transition mixes up two states: "Reset completed and
      successful" and "Reset failed". To distinguish these two we do some
      tricks with the reset completion, but I simply could not convince
      myself that this doesn't race under odd circumstances.
      
      Hence split this up, and add a new terminal state indicating that the
      hw is gone for good.
      
      Also add explicit #defines for both states, update comments.
      
      v2: Split out the reset handling bugfix for the throttle ioctl.
      
      v3: s/tmp/wedged/ suggested by Chris Wilson. Also fix up a rebase
      error which prevented this patch from actually compiling.
      
      v4: To unify the wedged state with the reset counter, keep the
      reset-in-progress state just as a flag. The terminally-wedged state is
      now denoted with a big number.
      
      v5: Add a comment to the reset_counter special values explaining that
      WEDGED & RESET_IN_PROGRESS needs to be true for the code to be
      correct.
      
      v6: Fixup logic errors introduced with the wedged+reset_counter
      unification. Since WEDGED implies reset-in-progress (in a way we're
      terminally stuck in the dead-but-reset-not-completed state), we need
      to ensure that we check for this everywhere. The specific bug was in
      wait_for_error, which would simply have timed out.
      
      v7: Extract an inline i915_reset_in_progress helper to make the code
      more readable. Also annotate the reset-in-progress case with an
      unlikely, to help the compiler optimize the fastpath. Do the same for
      the terminally wedged case with i915_terminally_wedged.
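      
      The resulting encoding and helpers, roughly (a sketch; the values
      are chosen so that WEDGED keeps the in-progress flag set, as the v5
      comment above demands):
      
          #define I915_RESET_IN_PROGRESS_FLAG     1
          #define I915_WEDGED                     0xffffffff
      
          static inline bool i915_reset_in_progress(struct i915_gpu_error *error)
          {
                  /* unlikely() helps the compiler optimize the fastpath. */
                  return unlikely(atomic_read(&error->reset_counter)
                                  & I915_RESET_IN_PROGRESS_FLAG);
          }
      
          static inline bool i915_terminally_wedged(struct i915_gpu_error *error)
          {
                  return atomic_read(&error->reset_counter) == I915_WEDGED;
          }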
      Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>