Commit 29c30026 authored by Eric Dumazet, committed by Jakub Kicinski

net: optimize skb_postpull_rcsum()

Remove one pair of add/adc instructions and their dependency
on the carry flag.

We can leverage third argument to csum_partial():

  X = csum_block_sub(X, csum_partial(start, len, 0), 0);

  -->

  X = csum_block_add(X, ~csum_partial(start, len, 0), 0);

  -->

  X = ~csum_partial(start, len, ~X);
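The rewrite relies on one's-complement arithmetic, where complementing distributes over the folded sum: ~(a + b) == ~a + ~b. A minimal sketch of the identity with toy 16-bit helpers (`ocadd16` and `toy_csum_partial` are illustrative stand-ins invented here, not the kernel's optimized csum implementations):

```c
/* Toy 16-bit model of the checksum identity behind this commit.
 * These helpers are assumptions for illustration only. */
#include <stddef.h>
#include <stdint.h>

/* one's-complement 16-bit add with end-around carry */
uint16_t ocadd16(uint16_t a, uint16_t b)
{
	uint32_t s = (uint32_t)a + b;
	return (uint16_t)((s & 0xFFFF) + (s >> 16));
}

/* toy csum_partial(): one's-complement sum of 16-bit words, seeded
 * with 'init' -- the third argument the commit message leverages */
uint16_t toy_csum_partial(const uint8_t *p, size_t len, uint16_t init)
{
	uint16_t s = init;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		s = ocadd16(s, (uint16_t)((p[i] << 8) | p[i + 1]));
	if (i < len)			/* pad a trailing odd byte */
		s = ocadd16(s, (uint16_t)(p[i] << 8));
	return s;
}
```

With S = toy_csum_partial(start, len, 0), the old form computes X + ~S (a subtraction in one's complement) while the new form computes ~(S + ~X); the two agree modulo the 0x0000/0xFFFF representation ambiguity, but the folded form seeds ~X directly into the partial sum, so no separate subtract (add/adc pair) is needed afterwards.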
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent 0bd28476
@@ -3485,7 +3485,11 @@ __skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
 static inline void skb_postpull_rcsum(struct sk_buff *skb,
				       const void *start, unsigned int len)
 {
-	__skb_postpull_rcsum(skb, start, len, 0);
+	if (skb->ip_summed == CHECKSUM_COMPLETE)
+		skb->csum = ~csum_partial(start, len, ~skb->csum);
+	else if (skb->ip_summed == CHECKSUM_PARTIAL &&
+		 skb_checksum_start_offset(skb) < 0)
+		skb->ip_summed = CHECKSUM_NONE;
 }

 static __always_inline void