Commit f69061be authored by Daniel Vetter

drm/i915: create a race-free reset detection

With the previous patch the state transition handling of the reset
code itself is now (hopefully) race free and solid. But that still
leaves out everyone else - with the various lock-free wait paths
we have there's the possibility that the reset happens between the
point where we read the seqno we should wait on and the actual wait.

And if __wait_seqno then never sees the RESET_IN_PROGRESS state, we'll
happily wait for a seqno which will in all likelihood never signal.

In practice this is not a big problem since the X server gets
constantly interrupted, and can then submit more work (hopefully) to
unblock everyone else: As soon as a new seqno write lands, all waiters
will unblock. But running the i-g-t reset testcase ZZ_hangman can
expose this race, especially on slower hw with fewer cpu cores.

Looking forward to ARB_robustness and friends, though, that's not the
best possible behaviour; hence this patch adds a reset_counter to be
able to detect any reset, even if a given thread never observed the
in-progress state.
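As a rough illustration of why a bare counter suffices even when the
in-progress state is never directly observed, here is a small user-space
C sketch. The names and the low-bit encoding mirror the scheme described
in this patch, but this is illustrative code, not the kernel's:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical user-space analogue of the reset_counter scheme. The
 * lowest bit flags a reset in progress; completing a reset increments
 * the counter, which clears the low bit and bumps the upper bits. */
#define RESET_IN_PROGRESS_FLAG 0x1

static atomic_uint reset_counter;

/* Sampled by a waiter before it starts waiting on a seqno. */
static unsigned sample_reset_counter(void)
{
	return atomic_load(&reset_counter);
}

/* The waiter's bail-out check: true if a reset is in flight, or if a
 * full reset cycle happened since `sampled` was taken - even when the
 * in-progress state itself was slept through. */
static int reset_happened(unsigned sampled)
{
	unsigned now = atomic_load(&reset_counter);

	return (now & RESET_IN_PROGRESS_FLAG) || now != sampled;
}
```

Any change of the counter between the sample and the check signals a
completed reset; an odd value signals one still in flight.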

The important part is to correctly order things:
- The write side needs to increment the counter after any seqno gets
  reset.  Hence we need to do that at the end of the reset work, and
  again wake everyone up. We also need to place a barrier in between
  any possible seqno changes and the counter increment, since any
  unlock operations only guarantee that nothing leaks out, but not
  that a later load operation won't get moved ahead.
- On the read side we need to ensure that no reset can sneak in and
  invalidate the seqno. In all cases we can use the one-sided barrier
  that unlock operations guarantee (of the lock protecting the
  respective seqno/ring pair) to ensure correct ordering. Hence it is
  sufficient to place the atomic read before the mutex/spin_unlock and
  no additional barriers are required.
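The two ordering rules above can be sketched with user-space C11
atomics. Everything here is illustrative: a tiny spinlock stands in for
the lock protecting the seqno (its release-unlock is exactly the
one-sided barrier relied on above), and a seq_cst fetch_add stands in
for smp_mb__before_atomic_inc() followed by atomic_inc():

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_flag seqno_lock = ATOMIC_FLAG_INIT;
static unsigned current_seqno;		/* protected by seqno_lock */
static atomic_uint reset_counter;

static void seqno_lock_acquire(void)
{
	while (atomic_flag_test_and_set_explicit(&seqno_lock,
						 memory_order_acquire))
		;	/* spin */
}

static void seqno_lock_release(void)
{
	/* Release: nothing before this can leak past it, but a later
	 * load may still be hoisted above it - hence the read side
	 * samples the counter while still holding the lock. */
	atomic_flag_clear_explicit(&seqno_lock, memory_order_release);
}

/* Read side: sample seqno and reset counter under the lock; the
 * release-unlock then orders the counter read against any later reset. */
static void sample_for_wait(unsigned *seqno, unsigned *reset)
{
	seqno_lock_acquire();
	*seqno = current_seqno;
	*reset = atomic_load(&reset_counter);
	seqno_lock_release();
}

/* Write side: publish the post-reset seqno state first, then increment
 * the counter; the seq_cst fetch_add provides the full barrier the
 * kernel gets from smp_mb__before_atomic_inc(). */
static void complete_reset(unsigned new_seqno)
{
	seqno_lock_acquire();
	current_seqno = new_seqno;
	seqno_lock_release();
	atomic_fetch_add(&reset_counter, 1);	/* full barrier in seq_cst */
}
```

A waiter that sampled before complete_reset() ran is thus guaranteed to
see either the old seqno with the old counter, or the counter change.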

The end-result of all this is that we need to wake up everyone twice
in a reset operation:
- First, before the reset starts, so that holders of the relevant
  locks bail out and drop them, allowing the reset to proceed.
- Second, after the reset is completed, to allow waiters to properly
  and reliably detect the reset condition and bail out.
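The waiter's resulting termination condition (analogous to the
EXIT_COND in __wait_seqno in the patch below) can be sketched as a
single predicate; the seqno comparison is wrap-safe in the style of
i915_seqno_passed. Again a simplified user-space analogue, not the
kernel code:

```c
#include <assert.h>
#include <stdatomic.h>

#define RESET_IN_PROGRESS_FLAG 0x1

/* The wait ends when the seqno passes, when a reset is in flight, or
 * when the counter moved since the caller sampled it. */
static int wait_exit_cond(unsigned hw_seqno, unsigned wanted,
			  unsigned sampled, atomic_uint *counter)
{
	unsigned now = atomic_load(counter);

	return (int)(hw_seqno - wanted) >= 0 ||	  /* seqno passed (wrap-safe) */
	       (now & RESET_IN_PROGRESS_FLAG) ||  /* reset in progress */
	       now != sampled;			  /* reset since sampling */
}
```

The third clause is what this patch adds: it catches a reset that both
started and completed while the waiter slept.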

I admit that this entire reset_counter thing smells a bit like
overkill, but I think it's justified since it makes it really explicit
what the bail-out condition is. And we need a reset counter anyway to
implement ARB_robustness, and imo with finer-grained locking on the
horizon this is the most resilient scheme I could think of.

v2: Drop spurious change in the wait_for_error EXIT_COND - we only
need to wait until we leave the reset-in-progress wedged state.

v3: Don't play tricks with barriers in the throttle ioctl, the
spin_unlock is barrier enough.

I've also considered using a little helper to grab the current
reset_counter, but then decided that hiding the atomic_read isn't a
great idea, since having it explicitly show up in the code is a nice
reminder for reviewers to check the memory barriers.

v4: Add a comment to explain why we need to fall through in
__wait_seqno in the end variable assignments.

v5: Review from Damien:
- s/smb/smp/ in a comment
- don't increment the reset counter after we've set it to WEDGED. Now
  we (again) properly wedge the gpu when the reset fails.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent 97c809fd
@@ -775,9 +775,16 @@ struct i915_gpu_error {
 	unsigned long last_reset;

 	/**
-	 * State variable controlling the reset flow
+	 * State variable and reset counter controlling the reset flow
 	 *
-	 * Upper bits are for the reset counter.
+	 * Upper bits are for the reset counter. This counter is used by the
+	 * wait_seqno code to race-free noticed that a reset event happened and
+	 * that it needs to restart the entire ioctl (since most likely the
+	 * seqno it waited for won't ever signal anytime soon).
+	 *
+	 * This is important for lock-free wait paths, where no contended lock
+	 * naturally enforces the correct ordering between the bail-out of the
+	 * waiter and the gpu reset work code.
 	 *
 	 * Lowest bit controls the reset state machine: Set means a reset is in
 	 * progress. This state will (presuming we don't have any bugs) decay
...
@@ -976,13 +976,22 @@ i915_gem_check_olr(struct intel_ring_buffer *ring, u32 seqno)
  * __wait_seqno - wait until execution of seqno has finished
  * @ring: the ring expected to report seqno
  * @seqno: duh!
+ * @reset_counter: reset sequence associated with the given seqno
  * @interruptible: do an interruptible wait (normally yes)
  * @timeout: in - how long to wait (NULL forever); out - how much time remaining
  *
+ * Note: It is of utmost importance that the passed in seqno and reset_counter
+ * values have been read by the caller in an smp safe manner. Where read-side
+ * locks are involved, it is sufficient to read the reset_counter before
+ * unlocking the lock that protects the seqno. For lockless tricks, the
+ * reset_counter _must_ be read before, and an appropriate smp_rmb must be
+ * inserted.
+ *
  * Returns 0 if the seqno was found within the alloted time. Else returns the
  * errno with remaining time filled in timeout argument.
  */
 static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
+			unsigned reset_counter,
 			bool interruptible, struct timespec *timeout)
 {
 	drm_i915_private_t *dev_priv = ring->dev->dev_private;
@@ -1012,7 +1021,8 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
 #define EXIT_COND \
 	(i915_seqno_passed(ring->get_seqno(ring, false), seqno) || \
-	 i915_reset_in_progress(&dev_priv->gpu_error))
+	 i915_reset_in_progress(&dev_priv->gpu_error) || \
+	 reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
 	do {
 		if (interruptible)
 			end = wait_event_interruptible_timeout(ring->irq_queue,
@@ -1022,6 +1032,13 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
 			end = wait_event_timeout(ring->irq_queue, EXIT_COND,
 						 timeout_jiffies);

+		/* We need to check whether any gpu reset happened in between
+		 * the caller grabbing the seqno and now ... */
+		if (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
+			end = -EAGAIN;
+
+		/* ... but upgrade the -EGAIN to an -EIO if the gpu is truely
+		 * gone. */
 		ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
 		if (ret)
 			end = ret;
@@ -1076,7 +1093,9 @@ i915_wait_seqno(struct intel_ring_buffer *ring, uint32_t seqno)
 	if (ret)
 		return ret;

-	return __wait_seqno(ring, seqno, interruptible, NULL);
+	return __wait_seqno(ring, seqno,
+			    atomic_read(&dev_priv->gpu_error.reset_counter),
+			    interruptible, NULL);
 }

 /**
@@ -1123,6 +1142,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_ring_buffer *ring = obj->ring;
+	unsigned reset_counter;
 	u32 seqno;
 	int ret;

@@ -1141,8 +1161,9 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;

+	reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
 	mutex_unlock(&dev->struct_mutex);
-	ret = __wait_seqno(ring, seqno, true, NULL);
+	ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);
 	mutex_lock(&dev->struct_mutex);
 	i915_gem_retire_requests_ring(ring);
@@ -2297,10 +2318,12 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
 int
 i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 {
+	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_gem_wait *args = data;
 	struct drm_i915_gem_object *obj;
 	struct intel_ring_buffer *ring = NULL;
 	struct timespec timeout_stack, *timeout = NULL;
+	unsigned reset_counter;
 	u32 seqno = 0;
 	int ret = 0;

@@ -2341,9 +2364,10 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	}
 	drm_gem_object_unreference(&obj->base);

+	reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
 	mutex_unlock(&dev->struct_mutex);

-	ret = __wait_seqno(ring, seqno, true, timeout);
+	ret = __wait_seqno(ring, seqno, reset_counter, true, timeout);
 	if (timeout) {
 		WARN_ON(!timespec_valid(timeout));
 		args->timeout_ns = timespec_to_ns(timeout);
@@ -3394,6 +3418,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	unsigned long recent_enough = jiffies - msecs_to_jiffies(20);
 	struct drm_i915_gem_request *request;
 	struct intel_ring_buffer *ring = NULL;
+	unsigned reset_counter;
 	u32 seqno = 0;
 	int ret;

@@ -3413,12 +3438,13 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 		ring = request->ring;
 		seqno = request->seqno;
 	}
+	reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
 	spin_unlock(&file_priv->mm.lock);

 	if (seqno == 0)
 		return 0;

-	ret = __wait_seqno(ring, seqno, true, NULL);
+	ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);
 	if (ret == 0)
 		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
...
@@ -867,9 +867,11 @@ static void i915_error_work_func(struct work_struct *work)
 	drm_i915_private_t *dev_priv = container_of(error, drm_i915_private_t,
 						    gpu_error);
 	struct drm_device *dev = dev_priv->dev;
+	struct intel_ring_buffer *ring;
 	char *error_event[] = { "ERROR=1", NULL };
 	char *reset_event[] = { "RESET=1", NULL };
 	char *reset_done_event[] = { "ERROR=0", NULL };
+	int i, ret;

 	kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, error_event);
@@ -877,13 +879,31 @@ static void i915_error_work_func(struct work_struct *work)
 		DRM_DEBUG_DRIVER("resetting chip\n");
 		kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, reset_event);

-		if (!i915_reset(dev)) {
-			atomic_set(&error->reset_counter, 0);
-			kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, reset_done_event);
+		ret = i915_reset(dev);
+
+		if (ret == 0) {
+			/*
+			 * After all the gem state is reset, increment the reset
+			 * counter and wake up everyone waiting for the reset to
+			 * complete.
+			 *
+			 * Since unlock operations are a one-sided barrier only,
+			 * we need to insert a barrier here to order any seqno
+			 * updates before
+			 * the counter increment.
+			 */
+			smp_mb__before_atomic_inc();
+			atomic_inc(&dev_priv->gpu_error.reset_counter);
+
+			kobject_uevent_env(&dev->primary->kdev.kobj,
+					   KOBJ_CHANGE, reset_done_event);
 		} else {
 			atomic_set(&error->reset_counter, I915_WEDGED);
 		}
+
+		for_each_ring(ring, dev_priv, i)
+			wake_up_all(&ring->irq_queue);
+
 		wake_up_all(&dev_priv->gpu_error.reset_queue);
 	}
 }
@@ -1488,8 +1508,8 @@ void i915_handle_error(struct drm_device *dev, bool wedged)
 	i915_report_and_clear_eir(dev);

 	if (wedged) {
-		atomic_set(&dev_priv->gpu_error.reset_counter,
-			   I915_RESET_IN_PROGRESS_FLAG);
+		atomic_set_mask(I915_RESET_IN_PROGRESS_FLAG,
+				&dev_priv->gpu_error.reset_counter);

 		/*
 		 * Wakeup waiting processes so that the reset work item
...