Commit 394fcd8a authored by Eric Dumazet, committed by David S. Miller

net: zerocopy: combine pages in zerocopy_sg_from_iter()

Currently, tcp sendmsg(MSG_ZEROCOPY) is building skbs with order-0 fragments.
Compared to standard sendmsg(), these skbs usually contain up to 16 fragments
on arches with 4KB page sizes, instead of two.

This adds considerable costs on various ndo_start_xmit() handlers,
especially when IOMMU is in the picture.

As high performance applications are often using huge pages,
we can try to combine adjacent pages belonging to the same
compound page.
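The merge test is essentially the same shape as the existing skb_can_coalesce() helper, only performed against the compound head page. A minimal kernel-style sketch of the condition follows; the helper name and its factoring are illustrative only, the patch itself open-codes this test inside __zerocopy_sg_from_iter() (see the diff below):

#include <linux/mm.h>
#include <linux/skbuff.h>

/* Illustrative only (not part of the patch): true when the chunk at
 * @page/@off directly extends the last fragment of @skb, i.e. both sit
 * back to back inside the same compound (huge) page, so the fragment
 * can simply be grown instead of consuming a new frag slot.
 */
static bool zc_can_coalesce(const struct sk_buff *skb, int frag,
			    struct page *page, size_t off)
{
	struct page *head = compound_head(page);
	const skb_frag_t *last;

	if (!frag)
		return false;	/* no previous fragment to extend */

	/* Express the offset relative to the head of the compound page. */
	off += (page - head) << PAGE_SHIFT;

	last = &skb_shinfo(skb)->frags[frag - 1];
	return head == skb_frag_page(last) &&
	       off == skb_frag_off(last) + skb_frag_size(last);
}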

Tested on an AMD Rome platform with IOMMU enabled: nominal single TCP flow
speed roughly doubles (~55Gbit -> ~100Gbit) when the user application
is using hugepages.

For reference, nominal single TCP flow speed on this platform
without MSG_ZEROCOPY is ~65Gbit.
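For context, the kind of sender this optimization targets looks roughly like the sketch below: a TCP socket with SO_ZEROCOPY enabled, transmitting from a hugepage-backed buffer with MSG_ZEROCOPY. This is an assumption-laden sketch, not the benchmark program used above; the address, port and buffer size are placeholders, and the MSG_ERRQUEUE completion handling a real sender needs is omitted.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60			/* value from asm-generic/socket.h, for older libc headers */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000		/* value from linux/socket.h, for older libc headers */
#endif

#define BUF_SZ	(2UL << 20)		/* one 2MB hugepage (placeholder size) */

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(9000),			/* placeholder port */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),	/* placeholder address */
	};
	int one = 1;
	void *buf;
	int fd;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	/* Opt in to zerocopy transmission on this socket. */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
		return 1;

	/* Hugepage-backed buffer: the pages pinned by send(MSG_ZEROCOPY)
	 * then belong to one compound page, which is what the patch merges.
	 */
	buf = mmap(NULL, BUF_SZ, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 'x', BUF_SZ);

	/* Pages stay pinned until a completion for this send is read
	 * from the socket error queue (recvmsg(fd, ..., MSG_ERRQUEUE)).
	 */
	if (send(fd, buf, BUF_SZ, MSG_ZEROCOPY) < 0)
		return 1;

	close(fd);
	return 0;
}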
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 4f6c09f7
@@ -623,10 +623,11 @@ int __zerocopy_sg_from_iter(struct sock *sk, struct sk_buff *skb,
 	while (length && iov_iter_count(from)) {
 		struct page *pages[MAX_SKB_FRAGS];
+		struct page *last_head = NULL;
 		size_t start;
 		ssize_t copied;
 		unsigned long truesize;
-		int n = 0;
+		int refs, n = 0;
 
 		if (frag == MAX_SKB_FRAGS)
 			return -EMSGSIZE;
 
@@ -649,13 +650,37 @@ int __zerocopy_sg_from_iter(struct sock *sk, struct sk_buff *skb,
 		} else {
 			refcount_add(truesize, &skb->sk->sk_wmem_alloc);
 		}
-		while (copied) {
+		for (refs = 0; copied != 0; start = 0) {
 			int size = min_t(int, copied, PAGE_SIZE - start);
-			skb_fill_page_desc(skb, frag++, pages[n], start, size);
-			start = 0;
+			struct page *head = compound_head(pages[n]);
+
+			start += (pages[n] - head) << PAGE_SHIFT;
 			copied -= size;
 			n++;
+			if (frag) {
+				skb_frag_t *last = &skb_shinfo(skb)->frags[frag - 1];
+
+				if (head == skb_frag_page(last) &&
+				    start == skb_frag_off(last) + skb_frag_size(last)) {
+					skb_frag_size_add(last, size);
+					/* We combined this page, we need to release
+					 * a reference. Since compound pages refcount
+					 * is shared among many pages, batch the refcount
+					 * adjustments to limit false sharing.
+					 */
+					last_head = head;
+					refs++;
+					continue;
+				}
+			}
+			if (refs) {
+				page_ref_sub(last_head, refs);
+				refs = 0;
+			}
+			skb_fill_page_desc(skb, frag++, head, start, size);
 		}
+		if (refs)
+			page_ref_sub(last_head, refs);
 	}
 	return 0;
 }