Commit e59d3c64 authored by Soheil Hassas Yeganeh, committed by Linus Torvalds

epoll: eliminate unnecessary lock for zero timeout

We call ep_events_available() under lock when timeout is 0, and then call
it without locks in the loop for the other cases.

Instead, call ep_events_available() without lock for all cases.  For
non-zero timeouts, we will recheck after adding the thread to the wait
queue.  For zero timeout cases, by definition, the user is opportunistically
polling and will have to call epoll_wait again in the future.
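As an illustration of why the lockless check stays safe for blocking callers, here is a minimal
userspace sketch of the same idiom using pthreads rather than the kernel's waitqueue API (the
struct and function names below are hypothetical, not part of this patch): the fast-path check
takes no lock, and a blocking waiter rechecks with the lock held after joining the wait queue,
so a concurrently queued event is never lost.

    #include <pthread.h>
    #include <stdbool.h>

    struct evqueue {
            pthread_mutex_t lock;
            pthread_cond_t  cond;
            int             nr_ready;       /* stand-in for the epoll ready list */
    };

    /* Racy, lock-free peek: analogous to calling ep_events_available()
     * without taking ep->lock. */
    static bool events_available(struct evqueue *q)
    {
            return __atomic_load_n(&q->nr_ready, __ATOMIC_RELAXED) > 0;
    }

    /* Zero-timeout caller: missing a racing event is harmless, it polls again. */
    static bool poll_nonblocking(struct evqueue *q)
    {
            return events_available(q);
    }

    /* Blocking caller: the racy peek is only a fast path; before sleeping it
     * rechecks with the lock held, so an event added in between is seen here. */
    static void wait_for_events(struct evqueue *q)
    {
            if (events_available(q))
                    return;                 /* fast path, no lock taken */

            pthread_mutex_lock(&q->lock);
            while (q->nr_ready == 0)        /* recheck under the lock */
                    pthread_cond_wait(&q->cond, &q->lock);
            pthread_mutex_unlock(&q->lock);
    }

    /* Producer side: events are published under the same lock. */
    static void add_event(struct evqueue *q)
    {
            pthread_mutex_lock(&q->lock);
            q->nr_ready++;
            pthread_cond_signal(&q->cond);
            pthread_mutex_unlock(&q->lock);
    }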

Note that this lock was kept in c5a282e9 because the whole loop was
historically under lock.

This patch results in a 1% CPU/RPC reduction in RPC benchmarks.

Link: https://lkml.kernel.org/r/20201106231635.3528496-9-soheil.kdev@gmail.com
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Khazhismel Kumykov <khazhy@google.com>
Cc: Guantao Liu <guantaol@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 00b27634
fs/eventpoll.c
@@ -1743,7 +1743,7 @@ static inline struct timespec64 ep_set_mstimeout(long ms)
 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		   int maxevents, long timeout)
 {
-	int res, eavail = 0, timed_out = 0;
+	int res, eavail, timed_out = 0;
 	u64 slack = 0;
 	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
@@ -1759,17 +1759,20 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	} else if (timeout == 0) {
 		/*
 		 * Avoid the unnecessary trip to the wait queue loop, if the
-		 * caller specified a non blocking operation. We still need
-		 * lock because we could race and not see an epi being added
-		 * to the ready list while in irq callback. Thus incorrectly
-		 * returning 0 back to userspace.
+		 * caller specified a non blocking operation.
 		 */
 		timed_out = 1;
+	}
 
-		write_lock_irq(&ep->lock);
-		eavail = ep_events_available(ep);
-		write_unlock_irq(&ep->lock);
-	}
+	/*
+	 * This call is racy: We may or may not see events that are being added
+	 * to the ready list under the lock (e.g., in IRQ callbacks). For cases
+	 * with a non-zero timeout, this thread will check the ready list under
+	 * lock and will be added to the wait queue. For cases with a zero
+	 * timeout, the user by definition should not care and will have to
+	 * recheck again.
+	 */
+	eavail = ep_events_available(ep);
 
 	while (1) {
 		if (eavail) {
@@ -1786,10 +1789,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		if (timed_out)
 			return 0;
 
-		eavail = ep_events_available(ep);
-		if (eavail)
-			continue;
-
 		eavail = ep_busy_loop(ep, timed_out);
 		if (eavail)
 			continue;
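For the zero-timeout side, the reasoning in the new comment can be viewed from the caller's
perspective. Below is a minimal, hypothetical userspace sketch (not part of the patch) showing
that a zero-timeout epoll_wait() is an opportunistic poll: if the unlocked check races with an
event being queued and 0 is returned, the application's loop simply polls again later, so
nothing is lost.

    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    #define MAX_EVENTS 16

    /* One opportunistic poll: timeout == 0 returns immediately, with or
     * without ready events. */
    static int poll_once(int epfd)
    {
            struct epoll_event events[MAX_EVENTS];
            int n = epoll_wait(epfd, events, MAX_EVENTS, 0);

            if (n < 0) {
                    perror("epoll_wait");
                    return -1;
            }
            for (int i = 0; i < n; i++) {
                    /* handle events[i].data.fd ... */
            }
            /* n == 0 only means "nothing visible right now"; the caller is
             * expected to come back, which is why the kernel-side check does
             * not need ep->lock for this case. */
            return n;
    }

    int main(void)
    {
            int epfd = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };

            if (epfd < 0) {
                    perror("epoll_create1");
                    return 1;
            }
            if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
                    perror("epoll_ctl");
                    return 1;
            }

            for (int iter = 0; iter < 1000; iter++) {
                    poll_once(epfd);
                    /* ... do other work between polls ... */
            }
            close(epfd);
            return 0;
    }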