Commit 09d3e84d authored by Herbert Xu, committed by Thomas Graf

[NET]: Add missing memory barrier to kfree_skb().

Also kill kfree_skb_fast(); it is a relic of fast switching,
which was killed off years ago.

The bug is that, in the case where we take the atomic_read()
optimization, we need to make sure that reads of skb state
later in __kfree_skb() processing (particularly the skb->list
BUG check) are not reordered by the CPU to occur before the
read of the reference counter.

Thanks to Olaf Kirch and Anton Blanchard for discovering
and helping fix this bug.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 97d52752
@@ -353,15 +353,11 @@ static inline struct sk_buff *skb_get(struct sk_buff *skb)
  */
 static inline void kfree_skb(struct sk_buff *skb)
 {
-	if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-		__kfree_skb(skb);
-}
-
-/* Use this if you didn't touch the skb state [for fast switching] */
-static inline void kfree_skb_fast(struct sk_buff *skb)
-{
-	if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-		kfree_skbmem(skb);
+	if (likely(atomic_read(&skb->users) == 1))
+		smp_rmb();
+	else if (likely(!atomic_dec_and_test(&skb->users)))
+		return;
+	__kfree_skb(skb);
 }
 
 /**
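What follows is a minimal standalone sketch, written in portable C11 atomics rather than kernel primitives, of the ordering requirement described in the commit message; struct buf, buf_put() and the field names are invented for illustration and are not part of the patch. It shows the point of the fix: when the fast path observes the last reference with a plain load, a read/acquire barrier is needed so that later loads of object state (the analogue of the skb->list BUG check) cannot be satisfied before the reference-count load.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct buf {
	atomic_int users;	/* reference count */
	int on_list;		/* state examined during destruction */
};

static void buf_put(struct buf *b)
{
	if (atomic_load_explicit(&b->users, memory_order_relaxed) == 1) {
		/*
		 * Last reference: order the state reads below after the
		 * refcount read, as smp_rmb() does in kfree_skb() above.
		 */
		atomic_thread_fence(memory_order_acquire);
	} else if (atomic_fetch_sub(&b->users, 1) != 1) {
		return;		/* other holders remain */
	}
	/*
	 * Without the fence, this load could be reordered before the
	 * refcount load on a weakly ordered CPU and observe stale state,
	 * the analogue of the skb->list sanity check firing spuriously.
	 */
	if (b->on_list)
		fprintf(stderr, "freeing a buffer still on a list\n");
	free(b);
}

int main(void)
{
	struct buf *b = malloc(sizeof(*b));

	if (!b)
		return 1;
	atomic_init(&b->users, 1);
	b->on_list = 0;
	buf_put(b);	/* sole holder: fast path, fence, then free */
	return 0;
}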