Commit 82d95799 authored by Mark Einon, committed by Greg Kroah-Hartman

staging: et131x: Simplify unlocking tcb_send_qlock in et131x_tx_timeout()

The tcb_send_qlock spinlock is unlocked in all three paths at the end of
et131x_tx_timeout(). We can instead unlock it once, before entering any of
those paths, saving a few lines of code.

This change puts tcb->count++ outside of the lock, but et131x_tx_timeout()
itself is protected by the tx_global_lock, so this shouldn't matter.
Signed-off-by: Mark Einon <mark.einon@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 0b06912b
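For context on the tx_global_lock point made in the commit message: the core transmit watchdog, dev_watchdog() in net/sched/sch_generic.c, invokes the driver's ndo_tx_timeout() handler under netif_tx_lock(), which takes dev->tx_global_lock. The following is a condensed paraphrase of that call path for kernels of this era, not verbatim kernel source:

static void dev_watchdog(unsigned long arg)
{
	struct net_device *dev = (struct net_device *)arg;

	/* netif_tx_lock() takes dev->tx_global_lock and freezes the xmit
	 * queues, so ndo_tx_timeout() handlers such as et131x_tx_timeout()
	 * are serialized with respect to each other.
	 */
	netif_tx_lock(dev);
	if (netif_device_present(dev) && netif_running(dev) &&
	    netif_carrier_ok(dev)) {
		/* ... per-queue trans_start timeout checks elided ... */
		dev->netdev_ops->ndo_tx_timeout(dev);	/* -> et131x_tx_timeout() */
	}
	netif_tx_unlock(dev);
	/* ... watchdog timer is re-armed here ... */
}

This serialization is what the commit message relies on when moving the tcb->count++ increment outside tcb_send_qlock.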
@@ -4170,16 +4170,13 @@ static void et131x_tx_timeout(struct net_device *netdev)
 	/* Is send stuck? */
 	spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
 	tcb = tx_ring->send_head;
+	spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
-	if (tcb != NULL) {
+	if (tcb) {
 		tcb->count++;
 		if (tcb->count > NIC_SEND_HANG_THRESHOLD) {
-			spin_unlock_irqrestore(&adapter->tcb_send_qlock,
-					       flags);
 			dev_warn(&adapter->pdev->dev,
 				 "Send stuck - reset. tcb->WrIndex %x\n",
 				 tcb->index);
@@ -4189,11 +4186,8 @@ static void et131x_tx_timeout(struct net_device *netdev)
 			/* perform reset of tx/rx */
 			et131x_disable_txrx(netdev);
 			et131x_enable_txrx(netdev);
-			return;
 		}
 	}
-	spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
 }
 /* et131x_change_mtu - The handler called to change the MTU for the device */
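Putting the two hunks together, the tail of et131x_tx_timeout() after this change reads roughly as follows. This is a sketch reconstructed from the diff above: the lines that fall between the two hunks are elided, and the declarations of adapter, tx_ring, tcb, flags and netdev earlier in the function are assumed.

	/* Is send stuck? */
	spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
	tcb = tx_ring->send_head;
	spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);

	if (tcb) {
		tcb->count++;

		if (tcb->count > NIC_SEND_HANG_THRESHOLD) {
			dev_warn(&adapter->pdev->dev,
				 "Send stuck - reset. tcb->WrIndex %x\n",
				 tcb->index);

			/* ... lines between the two hunks elided ... */

			/* perform reset of tx/rx */
			et131x_disable_txrx(netdev);
			et131x_enable_txrx(netdev);
		}
	}
}

The lock is now held only long enough to read tx_ring->send_head, and every exit path leaves it released, which is the simplification described in the commit message.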