Commit 38e96b35 authored by Peter Crosthwaite, committed by David S. Miller

net: axienet: Handle 0 packet receive gracefully

The AXI-DMA rx-delay interrupt can sometimes be triggered
when there are 0 outstanding packets received. This is because
the receive function greedily consumes as many packets as
possible on interrupt. So if two packets arrive in succession
(with very particular timing), each will raise the rx-delay
interrupt, but the first interrupt will consume both packets.
This leaves the second interrupt with a 0 packet receive.

This is mostly OK, except that the tail pointer register is
updated unconditionally on receive. Currently the tail pointer
is always set to the current bd-ring descriptor, on the
assumption that the hardware has moved on to the next
descriptor. For a length-0 receive this means the current
descriptor, which the hardware may be yet to use, is marked
as the tail. This causes the hardware to think it has run out
of descriptors, deadlocking the whole rx path.

Fix this by updating the tail pointer to the most recently
successfully consumed descriptor.
Reported-by: Wendy Liang <wendy.liang@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Tested-by: Jason Wu <huanyu@xilinx.com>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent d1d372e8
@@ -726,15 +726,15 @@ static void axienet_recv(struct net_device *ndev)
 	u32 csumstatus;
 	u32 size = 0;
 	u32 packets = 0;
-	dma_addr_t tail_p;
+	dma_addr_t tail_p = 0;
 	struct axienet_local *lp = netdev_priv(ndev);
 	struct sk_buff *skb, *new_skb;
 	struct axidma_bd *cur_p;
 
-	tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
 	cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
 
 	while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
 		skb = (struct sk_buff *) (cur_p->sw_id_offset);
 		length = cur_p->app4 & 0x0000FFFF;
@@ -786,7 +786,8 @@ static void axienet_recv(struct net_device *ndev)
 	ndev->stats.rx_packets += packets;
 	ndev->stats.rx_bytes += size;
 
-	axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, tail_p);
+	if (tail_p)
+		axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, tail_p);
 }
 
 /**