Commit 11ef7a89 authored by Tom Herbert, committed by David S. Miller

net: Performance fix for process_backlog

In process_backlog the input_pkt_queue is only checked once for new
packets, and quota is artificially reduced to reflect precisely the
number of packets on the input_pkt_queue, so that the loop exits
appropriately.

This patch changes the behavior to be more straightforward and
less convoluted: packets are processed until either the quota
is met or there are no more packets to process.

This patch seems to provide a small but noticeable performance
improvement. The improvement is a result of staying in the
process_backlog loop longer, which can reduce the number of IPIs:
while the backlog NAPI stays scheduled, remote CPUs can keep
queueing packets onto input_pkt_queue without having to wake this
CPU with an inter-processor interrupt.
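To make the control-flow change concrete, here is a minimal userspace
model of the two termination strategies. This is a sketch only: plain
counters stand in for the sk_buff queues, locking and IRQ handling are
elided, and the names (old_process_backlog, new_process_backlog,
maybe_arrive) are invented for illustration, not taken from the kernel.

#include <stdio.h>

/* Toy stand-in for softnet_data: queue lengths only, no real packets. */
struct backlog {
	int input_qlen;   /* models sd->input_pkt_queue */
	int process_qlen; /* models sd->process_queue */
	int pending;      /* packets that will arrive while we process */
};

/* Deliver one new packet to the input queue for every two packets
 * processed, to imitate traffic arriving mid-poll. */
static void maybe_arrive(struct backlog *b, int work)
{
	if (b->pending > 0 && (work % 2) == 0) {
		b->input_qlen++;
		b->pending--;
	}
}

/* Old scheme: splice once, then shrink quota to exactly the spliced
 * amount so the outer "while (work < quota)" loop exits. */
static int old_process_backlog(struct backlog *b, int quota)
{
	int work = 0;

	while (work < quota) {
		int qlen;

		while (b->process_qlen > 0) {	/* __skb_dequeue() loop */
			b->process_qlen--;
			if (++work >= quota)
				return work;
			maybe_arrive(b, work);
		}

		qlen = b->input_qlen;		/* skb_queue_len() */
		b->process_qlen += qlen;	/* splice to process_queue */
		b->input_qlen = 0;

		if (qlen < quota - work)
			quota = work + qlen;	/* artificial quota cut; the
						 * real code also completes
						 * the NAPI here */
	}
	return work;
}

/* New scheme: loop until the quota is met or both queues are empty. */
static int new_process_backlog(struct backlog *b, int quota)
{
	int work = 0;

	while (1) {
		while (b->process_qlen > 0) {
			b->process_qlen--;
			if (++work >= quota)
				return work;
			maybe_arrive(b, work);
		}

		if (b->input_qlen == 0)		/* skb_queue_empty() */
			break;			/* inlined __napi_complete() */

		b->process_qlen += b->input_qlen;	/* splice */
		b->input_qlen = 0;
	}
	return work;
}

int main(void)
{
	struct backlog a = { .input_qlen = 5, .process_qlen = 0, .pending = 6 };
	struct backlog b = a;

	printf("old: processed=%d, left on input queue=%d\n",
	       old_process_backlog(&a, 64), a.input_qlen);
	printf("new: processed=%d, left on input queue=%d\n",
	       new_process_backlog(&b, 64), b.input_qlen);
	return 0;
}

Run with a quota of 64, five packets initially queued, and six more
arriving mid-poll, the model prints processed=5 with two packets left
behind for the old scheme, versus processed=9 with an empty queue for
the new one. In the kernel, the leftover packets would require the
backlog NAPI to be rescheduled, possibly via an IPI.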

Performance data using super_netperf TCP_RR with 200 flows:

Before fix:

88.06% CPU utilization
125/190/309 90/95/99% latencies
1.46808e+06 tps
1145382 intrs./sec.

With fix:

87.73% CPU utilization
122/183/296 90/95/99% latencies
1.4921e+06 tps
1021674.30 intrs./sec.
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 68b7107b
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4227,9 +4227,8 @@ static int process_backlog(struct napi_struct *napi, int quota)
 #endif
 	napi->weight = weight_p;
 	local_irq_disable();
-	while (work < quota) {
+	while (1) {
 		struct sk_buff *skb;
-		unsigned int qlen;
 
 		while ((skb = __skb_dequeue(&sd->process_queue))) {
 			local_irq_enable();
@@ -4243,24 +4242,24 @@ static int process_backlog(struct napi_struct *napi, int quota)
 		}
 
 		rps_lock(sd);
-		qlen = skb_queue_len(&sd->input_pkt_queue);
-		if (qlen)
-			skb_queue_splice_tail_init(&sd->input_pkt_queue,
-						   &sd->process_queue);
-
-		if (qlen < quota - work) {
+		if (skb_queue_empty(&sd->input_pkt_queue)) {
 			/*
 			 * Inline a custom version of __napi_complete().
 			 * only current cpu owns and manipulates this napi,
-			 * and NAPI_STATE_SCHED is the only possible flag set on backlog.
-			 * we can use a plain write instead of clear_bit(),
+			 * and NAPI_STATE_SCHED is the only possible flag set
+			 * on backlog.
+			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
 			list_del(&napi->poll_list);
 			napi->state = 0;
+			rps_unlock(sd);
 
-			quota = work + qlen;
+			break;
 		}
+
+		skb_queue_splice_tail_init(&sd->input_pkt_queue,
+					   &sd->process_queue);
 		rps_unlock(sd);
 	}
 	local_irq_enable();
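Note how the restructuring also tidies the locking: the splice to
process_queue now happens only when input_pkt_queue is known to be
non-empty, and the completion path releases the lock itself
(rps_unlock() before the break) instead of falling through to the
shared unlock at the bottom of the loop.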