Commit b3104104 authored by Lai Jiangshan, committed by Tejun Heo

workqueue: better define synchronization rule around rescuer->pool updates

Rescuers visit different worker_pools to process work items from pools
under pressure.  Currently, rescuer->pool is updated outside any
locking and when an outsider looks at a rescuer, there's no way to
tell when and whether rescuer->pool is gonna change.  While this
doesn't currently cause any problem, it is nasty.

With recent worker_maybe_bind_and_lock() changes, we can move
rescuer->pool updates inside pool locks such that if rescuer->pool
equals a locked pool, it's guaranteed to stay that way until the pool
is unlocked.

Move rescuer->pool updates inside pool locks.

This patch doesn't introduce any visible behavior difference.
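
To illustrate the resulting rule (a minimal sketch, not the literal
kernel code: the "observer" side and its rescuer pointer are
hypothetical, while worker_maybe_bind_and_lock(), rescuer->pool and
pool->lock are the objects the diff below touches):

	/* rescuer side, as rescuer_thread() behaves after this patch */
	worker_maybe_bind_and_lock(pool);	/* returns with pool->lock held */
	rescuer->pool = pool;			/* published under pool->lock */
	/* ... process work items queued on this pool ... */
	rescuer->pool = NULL;			/* cleared under pool->lock */
	spin_unlock_irq(&pool->lock);

	/* observer side (hypothetical): holding pool->lock pins ->pool */
	spin_lock_irq(&pool->lock);
	if (rescuer->pool == pool) {
		/* guaranteed to stay true until pool->lock is dropped */
	}
	spin_unlock_irq(&pool->lock);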

tj: Updated the description.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
parent f36dc67b
kernel/workqueue.c
@@ -2357,8 +2357,8 @@ static int rescuer_thread(void *__rescuer)
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(pool);
+		rescuer->pool = pool;
 
 		/*
 		 * Slurp in all works issued via this workqueue and
@@ -2379,6 +2379,7 @@ static int rescuer_thread(void *__rescuer)
 		if (keep_working(pool))
 			wake_up_worker(pool);
 
+		rescuer->pool = NULL;
 		spin_unlock_irq(&pool->lock);
 	}
kernel/workqueue_internal.h
@@ -32,6 +32,7 @@ struct worker {
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
 	struct worker_pool	*pool;		/* I: the associated pool */
+						/* L: for rescuers */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */