Commit 47fbf0ae authored by Eric Dumazet, committed by Luis Henriques

net: rps: fix cpu unplug

commit ac64da0b upstream.

softnet_data.input_pkt_queue is protected by a spinlock that
we must hold when transferring packets from the victim queue to an active
one, because other cpus could still be trying to enqueue packets
into the victim queue.
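
As an illustration of that point, here is a minimal kernel-style sketch of the locking difference involved (not the patched dev.c code: drain_victim_queue() is a hypothetical helper, and netif_rx() stands in for the static netif_rx_internal() used in the hunk below):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Drain the offline CPU's input queue while other CPUs may still be
 * enqueueing into it.  skb_dequeue() takes queue->lock around each
 * dequeue; the lockless __skb_dequeue() used before this patch assumed
 * the caller had exclusive access, which no longer holds here.
 */
static void drain_victim_queue(struct sk_buff_head *victim)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(victim)) != NULL)
		netif_rx(skb);	/* re-inject on the current, online cpu */
}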

A second problem is that when we transfer the NAPI poll_list from the
victim to the current cpu, we absolutely need to special-case the percpu
backlog, because we do not want to add complex locking to protect
process_queue: only the owner cpu is allowed to manipulate it, unless the
cpu is offline.
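
In outline, the poll_list migration in the hunk below then reads as follows (a simplified restatement with explanatory comments, not the exact dev.c hunk; migrate_poll_list() is a hypothetical wrapper name, while process_backlog and ____napi_schedule are the existing net/core/dev.c symbols the patch relies on):

/* Move every NAPI instance queued on the dead CPU to the local CPU,
 * except the dead CPU's own percpu backlog NAPI.
 */
static void migrate_poll_list(struct softnet_data *sd,
			      struct softnet_data *oldsd)
{
	while (!list_empty(&oldsd->poll_list)) {
		struct napi_struct *napi = list_first_entry(&oldsd->poll_list,
							    struct napi_struct,
							    poll_list);

		list_del_init(&napi->poll_list);
		if (napi->poll == process_backlog)
			/* backlog: just clear its state; its process_queue
			 * and input_pkt_queue are flushed later by the cpu
			 * that now owns them.
			 */
			napi->state = 0;
		else
			/* regular NAPI: queue it on the local poll_list */
			____napi_schedule(sd, napi);
	}
}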

Based on an initial patch from Prasad Sodagudi & Subash Abhinov
Kasiviswanathan.

This version is better because we do not slow down packet processing,
only make migration safer.
Reported-by: Prasad Sodagudi <psodagud@codeaurora.org>
Reported-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent e22d24ea
@@ -6903,10 +6903,20 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		oldsd->output_queue = NULL;
 		oldsd->output_queue_tailp = &oldsd->output_queue;
 	}
-	/* Append NAPI poll list from offline CPU. */
-	if (!list_empty(&oldsd->poll_list)) {
-		list_splice_init(&oldsd->poll_list, &sd->poll_list);
-		raise_softirq_irqoff(NET_RX_SOFTIRQ);
-	}
+	/* Append NAPI poll list from offline CPU, with one exception :
+	 * process_backlog() must be called by cpu owning percpu backlog.
+	 * We properly handle process_queue & input_pkt_queue later.
+	 */
+	while (!list_empty(&oldsd->poll_list)) {
+		struct napi_struct *napi = list_first_entry(&oldsd->poll_list,
+							    struct napi_struct,
+							    poll_list);
+
+		list_del_init(&napi->poll_list);
+		if (napi->poll == process_backlog)
+			napi->state = 0;
+		else
+			____napi_schedule(sd, napi);
+	}
 
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
@@ -6917,7 +6927,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}
-	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
+	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}