Commit 0f397f2c authored by Kirill Tkhai, committed by Ingo Molnar

sched/dl: Fix race in dl_task_timer()

A throttled task is still on the rq, and it may be migrated to another
CPU if the user is playing with sched_setaffinity(). Therefore, an
unlocked task_rq() access creates the race.

Juri Lelli reports he got this race when dl_bandwidth_enabled()
was not set.

Another thing, pointed out by Peter Zijlstra:

   "Now I suppose the problem can still actually happen when
    you change the root domain and trigger a effective affinity
    change that way".

To fix this we do the same as is done in __task_rq_lock(). We do not
use __task_rq_lock() itself, because it has a useful lockdep check,
which is not valid in the case of dl_task_timer(). We do not need
pi_lock locked here. This case is an exception (PeterZ):

   "The only reason we don't strictly need ->pi_lock now is because
    we're guaranteed to have p->state == TASK_RUNNING here and are
    thus free of ttwu races".
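
For reference, the pattern we copy is the retry loop in __task_rq_lock(),
sketched roughly below (the exact source may differ between kernel
versions). The lockdep assertion on ->pi_lock is the check we cannot
satisfy from the timer callback:

	static inline struct rq *__task_rq_lock(struct task_struct *p)
		__acquires(rq->lock)
	{
		struct rq *rq;

		lockdep_assert_held(&p->pi_lock);	/* not held in dl_task_timer() */

		for (;;) {
			rq = task_rq(p);
			raw_spin_lock(&rq->lock);
			if (likely(rq == task_rq(p)))
				return rq;
			/* task migrated under us, drop the lock and retry */
			raw_spin_unlock(&rq->lock);
		}
	}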
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v3.14+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/3056991400578422@web14g.yandex.ru
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent b14ed2c2
@@ -513,9 +513,17 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 						     struct sched_dl_entity,
 						     dl_timer);
 	struct task_struct *p = dl_task_of(dl_se);
-	struct rq *rq = task_rq(p);
+	struct rq *rq;
+again:
+	rq = task_rq(p);
 	raw_spin_lock(&rq->lock);
 
+	if (rq != task_rq(p)) {
+		/* Task was moved, retrying. */
+		raw_spin_unlock(&rq->lock);
+		goto again;
+	}
+
 	/*
 	 * We need to take care of a possible races here. In fact, the
 	 * task might have changed its scheduling policy to something
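
For clarity, with the hunk applied the start of dl_task_timer() reads
roughly as follows (reconstructed from the diff above):

	static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
	{
		struct sched_dl_entity *dl_se = container_of(timer,
							     struct sched_dl_entity,
							     dl_timer);
		struct task_struct *p = dl_task_of(dl_se);
		struct rq *rq;
	again:
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);

		if (rq != task_rq(p)) {
			/* Task was moved, retrying. */
			raw_spin_unlock(&rq->lock);
			goto again;
		}
		/* ... */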