Commit b9032741 authored by Eric Dumazet, committed by David S. Miller

bnx2x: avoid two atomic ops per page on x86

Commit 4cace675 ("bnx2x: Alloc 4k fragment for each rx ring buffer
element") added an extra put_page() and get_page() call per fragment on
arches where PAGE_SIZE=4K, like x86.

Reorder things to avoid this overhead.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
Cc: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 41e8c70e
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -549,14 +549,7 @@ static int bnx2x_alloc_rx_sge(struct bnx2x *bp, struct bnx2x_fastpath *fp,
 	struct bnx2x_alloc_pool *pool = &fp->page_pool;
 	dma_addr_t mapping;
 
-	if (!pool->page || (PAGE_SIZE - pool->offset) < SGE_PAGE_SIZE) {
-
-		/* put page reference used by the memory pool, since we
-		 * won't be using this page as the mempool anymore.
-		 */
-		if (pool->page)
-			put_page(pool->page);
-
+	if (!pool->page) {
 		pool->page = alloc_pages(gfp_mask, PAGES_PER_SGE_SHIFT);
 		if (unlikely(!pool->page))
 			return -ENOMEM;
@@ -571,7 +564,6 @@ static int bnx2x_alloc_rx_sge(struct bnx2x *bp, struct bnx2x_fastpath *fp,
 		return -ENOMEM;
 	}
 
-	get_page(pool->page);
 	sw_buf->page = pool->page;
 	sw_buf->offset = pool->offset;
@@ -581,7 +573,10 @@ static int bnx2x_alloc_rx_sge(struct bnx2x *bp, struct bnx2x_fastpath *fp,
 	sge->addr_lo = cpu_to_le32(U64_LO(mapping));
 
 	pool->offset += SGE_PAGE_SIZE;
-
+	if (PAGE_SIZE - pool->offset >= SGE_PAGE_SIZE)
+		get_page(pool->page);
+	else
+		pool->page = NULL;
 	return 0;
 }