Commit f1fdb184 authored by Julian Anastasov, committed by Sasha Levin

net: do not process device backlog during unregistration

[ Upstream commit e9e4dd32 ]

commit 381c759d ("ipv4: Avoid crashing in ip_error")
fixes a problem where a processed packet comes from a device
with a destroyed inetdev (dev->ip_ptr). This is not expected,
because inetdev_destroy is called in the NETDEV_UNREGISTER
phase and packets should not be processed after
dev_close_many() and synchronize_net(). The above fix is still
required because inetdev_destroy can be called for other
reasons. But it shows the real problem: the backlog can keep
packets for a long time, and they do not hold a reference to
the device. Such packets are then delivered to upper levels
at the same time the device is unregistered.
Calling flush_backlog after NETDEV_UNREGISTER_FINAL still
accounts for all packets from the backlog, but before that some
packets continue to be delivered to upper levels long after the
synchronize_net call that is supposed to wait for the last
ones. Also, as Eric pointed out, processed packets, mostly
from other devices, can continue to add new packets to the backlog.

Fix the problem by moving flush_backlog early, after the
device driver is stopped and before the synchronize_net() call.
Then use a netif_running check to make sure we do not add more
packets to the backlog. We have to do it in the enqueue_to_backlog
context, while the local IRQ is disabled. As a result, after the
flush_backlog and synchronize_net sequence all packets
should be accounted for.

Thanks to Eric W. Biederman for the test script and his
valuable feedback!
Reported-by: Vittorio Gambaletta <linuxbugs@vittgam.net>
Fixes: 6e583ce5 ("net: eliminate refcounting in backlog queue")
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
parent d69e9e18
@@ -3309,6 +3309,8 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 	local_irq_save(flags);

 	rps_lock(sd);
+	if (!netif_running(skb->dev))
+		goto drop;
 	qlen = skb_queue_len(&sd->input_pkt_queue);
 	if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) {
 		if (skb_queue_len(&sd->input_pkt_queue)) {
@@ -3330,6 +3332,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 		goto enqueue;
 	}

+drop:
 	sd->dropped++;
 	rps_unlock(sd);
@@ -5867,6 +5870,7 @@ static void rollback_registered_many(struct list_head *head)
 		unlist_netdevice(dev);

 		dev->reg_state = NETREG_UNREGISTERING;
+		on_each_cpu(flush_backlog, dev, 1);
 	}

 	synchronize_net();
@@ -6487,8 +6491,6 @@ void netdev_run_todo(void)
 		dev->reg_state = NETREG_UNREGISTERED;

-		on_each_cpu(flush_backlog, dev, 1);
-
 		netdev_wait_allrefs(dev);

 		/* paranoia */