Commit 96f30d3e authored by Michal Kubeček, committed by Ben Hutchings

net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()

commit e44699d2 upstream.

Recently I started seeing warnings about pages with refcount -1. The
problem was traced to packets being reused after their head had been merged
into a GRO packet by skb_gro_receive(). Although bisecting the issue pointed
to commit c21b48cc ("net: adjust skb->truesize in ___pskb_trim()"), and
I have never seen the problem on a kernel with that commit reverted, I believe
the real problem appeared earlier, when the option to merge the head frag in
GRO was implemented.

Handling of the NAPI_GRO_FREE_STOLEN_HEAD state was only added to the
GRO_MERGED_FREE branch of napi_skb_finish(), so if the driver uses
napi_gro_frags() and the head is merged (which in my case happens after the
skb_condense() call added by the commit mentioned above), the skb is reused
including the head that has already been merged. As a result, we release the
page reference twice and eventually end up with a negative page refcount.
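
For context, this is roughly how the relevant branch of napi_frags_finish()
looked before the fix (a simplified excerpt based on the removed side of the
diff below, not the verbatim 3.16 source; the other switch cases are omitted):

	case GRO_DROP:
	case GRO_MERGED_FREE:
		/* The skb is unconditionally recycled for the next
		 * napi_gro_frags() round.  If skb_gro_receive() had already
		 * stolen the head page into the GRO packet
		 * (NAPI_GRO_FREE_STOLEN_HEAD), reusing the skb keeps a stale
		 * reference to that page, which is then put a second time.
		 */
		napi_reuse_skb(napi, skb);
		break;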

To fix the problem, handle NAPI_GRO_FREE_STOLEN_HEAD in napi_frags_finish()
the same way it's done in napi_skb_finish().
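
As a reference, the GRO_MERGED_FREE branch of napi_skb_finish() in kernels of
this era already distinguished the stolen-head case roughly as sketched below
(a simplified excerpt, not the verbatim source):

	case GRO_MERGED_FREE:
		/* When the head page was stolen into the GRO packet, only
		 * the struct sk_buff itself needs to be freed; freeing or
		 * reusing the whole skb would drop the page reference that
		 * the GRO packet now owns.
		 */
		if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)
			kmem_cache_free(skbuff_head_cache, skb);
		else
			__kfree_skb(skb);
		break;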

Fixes: d7e8883c ("net: make GRO aware of skb->head_frag")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backported to 3.16: The necessary cleanup is just kmem_cache_free(),
 so don't bother adding a function for this.]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
parent 5a74f081
@@ -4166,7 +4166,13 @@ static gro_result_t napi_frags_finish(struct napi_struct *napi,
 		break;
 
 	case GRO_DROP:
+		napi_reuse_skb(napi, skb);
+		break;
+
 	case GRO_MERGED_FREE:
-		napi_reuse_skb(napi, skb);
+		if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)
+			kmem_cache_free(skbuff_head_cache, skb);
+		else
+			napi_reuse_skb(napi, skb);
 		break;