Commit 61e800cf authored by Matt Carlson, committed by David S. Miller

tg3: Enforce DMA mapping / skb assignment ordering

Michael Chan noted that there is nothing in the code that would prevent
the compiler from delaying the access of the "mapping" member of the
newly arrived packet until much later.  If this happened after the
skb = NULL assignment, it is possible for the driver to pass a bad
dma_addr value to pci_unmap_single().  To enforce this ordering, we need
a write memory barrier.  The pairing read memory barrier already exists
in tg3_rx_prodring_xfer() under the comments starting with
"Ensure that updates to the...".
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 99405162
@@ -4659,11 +4659,16 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
 			if (skb_size < 0)
 				goto drop_it;
 
-			ri->skb = NULL;
-
 			pci_unmap_single(tp->pdev, dma_addr, skb_size,
 					 PCI_DMA_FROMDEVICE);
 
+			/* Ensure that the update to the skb happens
+			 * after the usage of the old DMA mapping.
+			 */
+			smp_wmb();
+
+			ri->skb = NULL;
+
 			skb_put(skb, len);
 		} else {
 			struct sk_buff *copy_skb;