Commit a880b28a authored by Eric Biggers

binder: use wake_up_pollfree()

wake_up_poll() uses nr_exclusive=1, so it's not guaranteed to wake up
all exclusive waiters.  Yet, POLLFREE *must* wake up all waiters.  epoll
and aio poll are fortunately not affected by this, but it's very
fragile.  Thus, the new function wake_up_pollfree() has been introduced.

Convert binder to use wake_up_pollfree().
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: f5cb779b ("ANDROID: binder: remove waitqueue when thread exits.")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20211209010455.42744-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
parent 42288cb4
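
For context, the parent commit 42288cb4 ("wait: add wake_up_pollfree()") introduced the helper used in the diff below. A rough sketch of what it does (paraphrased, not a verbatim copy; see include/linux/wait.h and kernel/sched/wait.c for the real code):

static inline void wake_up_pollfree(struct wait_queue_head *wq_head)
{
	/*
	 * The waitqueue_active() check is lockless, so it can race with
	 * someone removing the last entry from the queue.  Such removers
	 * must hold an RCU read lock, which is why callers like binder
	 * follow up with synchronize_rcu().
	 */
	if (waitqueue_active(wq_head))
		__wake_up_pollfree(wq_head);
}

void __wake_up_pollfree(struct wait_queue_head *wq_head)
{
	/* nr_exclusive == 0: wake *all* waiters, not just one exclusive one */
	__wake_up(wq_head, TASK_NORMAL, 0, poll_to_key(EPOLLHUP | POLLFREE));
	/* POLLFREE handlers must have removed themselves from the queue */
	WARN_ON_ONCE(waitqueue_active(wq_head));
}

The key difference from wake_up_poll() is the nr_exclusive argument of 0, which wakes every waiter rather than at most one exclusive waiter.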
@@ -4422,23 +4422,20 @@ static int binder_thread_release(struct binder_proc *proc,
 	__release(&t->lock);
 
 	/*
-	 * If this thread used poll, make sure we remove the waitqueue
-	 * from any epoll data structures holding it with POLLFREE.
-	 * waitqueue_active() is safe to use here because we're holding
-	 * the inner lock.
+	 * If this thread used poll, make sure we remove the waitqueue from any
+	 * poll data structures holding it.
 	 */
-	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
-	    waitqueue_active(&thread->wait)) {
-		wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE);
-	}
+	if (thread->looper & BINDER_LOOPER_STATE_POLL)
+		wake_up_pollfree(&thread->wait);
 
 	binder_inner_proc_unlock(thread->proc);
 
 	/*
-	 * This is needed to avoid races between wake_up_poll() above and
-	 * and ep_remove_waitqueue() called for other reasons (eg the epoll file
-	 * descriptor being closed); ep_remove_waitqueue() holds an RCU read
-	 * lock, so we can be sure it's done after calling synchronize_rcu().
+	 * This is needed to avoid races between wake_up_pollfree() above and
+	 * someone else removing the last entry from the queue for other reasons
+	 * (e.g. ep_remove_wait_queue() being called due to an epoll file
+	 * descriptor being closed). Such other users hold an RCU read lock, so
+	 * we can be sure they're done after we call synchronize_rcu().
 	 */
 	if (thread->looper & BINDER_LOOPER_STATE_POLL)
 		synchronize_rcu();
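
For reference, the waitqueue being torn down here is thread->wait, which binder registers with the poll machinery in its ->poll() handler. A simplified sketch of binder_poll() (condensed from drivers/android/binder.c; details vary by kernel version):

static __poll_t binder_poll(struct file *filp, struct poll_table_struct *wait)
{
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	bool wait_for_proc_work;

	thread = binder_get_thread(proc);
	if (!thread)
		return EPOLLERR;

	binder_inner_proc_lock(thread->proc);
	thread->looper |= BINDER_LOOPER_STATE_POLL;
	wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
	binder_inner_proc_unlock(thread->proc);

	/*
	 * thread->wait is embedded in struct binder_thread, which is freed
	 * when the thread exits -- potentially while an epoll instance is
	 * still registered on it.  Hence the POLLFREE wakeup on release.
	 */
	poll_wait(filp, &thread->wait, wait);

	if (binder_has_work(thread, wait_for_proc_work))
		return EPOLLIN;

	return 0;
}

Because the waitqueue dies with the thread rather than with the file, the POLLFREE wakeup plus synchronize_rcu() in binder_thread_release() is what keeps epoll from touching the freed queue afterwards.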