Commit 61a5ff15 authored by Amos Kong, committed by David S. Miller

tun: do not put self in waitq if doing a nonblock read

Perf shows a relatively high rate (about 8%) of contention in
spin_lock_irqsave() when running netperf between an external host and
a guest. It's mainly because of the lock contention between
tun_do_read() and tun_xmit_skb(), so this patch does not put the
reader into the waitqueue for nonblocking reads, which reduces this
kind of contention. After this patch, it drops to 4%.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 6f7c156c
@@ -817,6 +817,7 @@ static ssize_t tun_do_read(struct tun_struct *tun,
 	tun_debug(KERN_INFO, tun, "tun_chr_read\n");
+	if (unlikely(!noblock))
 	add_wait_queue(&tun->wq.wait, &wait);
 	while (len) {
 		current->state = TASK_INTERRUPTIBLE;
@@ -848,6 +849,7 @@ static ssize_t tun_do_read(struct tun_struct *tun,
 	}
 	current->state = TASK_RUNNING;
+	if (unlikely(!noblock))
 	remove_wait_queue(&tun->wq.wait, &wait);
 	return ret;
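The idea behind the patch can be sketched in userspace C (this is an illustrative analogue, not the kernel code: the queue structure, queue_read(), and the waiters counter standing in for add_wait_queue()/remove_wait_queue() are all hypothetical). A reader only registers itself on the wait list when it is allowed to block; a nonblocking reader on an empty queue fails fast with -EAGAIN and never touches the contended wait-queue state.

#include <errno.h>
#include <stdio.h>

struct queue {
	int data;    /* 0 means "empty" */
	int waiters; /* stand-in for the wait queue */
};

static int queue_read(struct queue *q, int noblock)
{
	int ret;

	/* Only a blocking reader registers itself (add_wait_queue() analogue). */
	if (!noblock)
		q->waiters++;

	if (q->data) {
		ret = q->data;  /* got a packet */
		q->data = 0;
	} else if (noblock) {
		ret = -EAGAIN;  /* nonblocking: fail fast, no waiting */
	} else {
		ret = 0;        /* a real blocking reader would sleep here */
	}

	/* Symmetric deregistration (remove_wait_queue() analogue). */
	if (!noblock)
		q->waiters--;
	return ret;
}

int main(void)
{
	struct queue q = { .data = 0, .waiters = 0 };

	/* Nonblocking read on an empty queue: -EAGAIN, no waiter registered. */
	printf("%d %d\n", queue_read(&q, 1) == -EAGAIN, q.waiters);

	/* Nonblocking read with data available: returns the data. */
	q.data = 42;
	printf("%d\n", queue_read(&q, 1));
	return 0;
}

With the guard in place, the nonblocking fast path never takes the waitqueue lock, which is exactly where the perf-reported contention with tun_xmit_skb() occurred.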