Commit be852795 authored by Eric Dumazet, committed by Linus Torvalds

alloc_percpu() fails to allocate percpu data

Some oprofile results obtained while running tbench on a 2x2 CPU machine were
very surprising.

For example, the loopback_xmit() function was using a high number of CPU
cycles to perform the statistics updates, which are supposed to be really
cheap since they use percpu data:

        pcpu_lstats = netdev_priv(dev);
        lb_stats = per_cpu_ptr(pcpu_lstats, smp_processor_id());
        lb_stats->packets++;  /* HERE : serious contention */
        lb_stats->bytes += skb->len;

struct pcpu_lstats is a small structure containing two longs.  It appears
that on my 32-bit platform, alloc_percpu(8) allocates within a single cache
line, instead of giving each CPU a separate cache line.
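
To illustrate the contention, here is a minimal sketch of the resulting
layout (assuming SLUB's 8-byte minimum kmalloc object and a hypothetical
32-byte cache line; the struct is the one from drivers/net/loopback.c):

        struct pcpu_lstats {            /* two longs, 8 bytes on 32-bit */
                unsigned long packets;
                unsigned long bytes;
        };

        /*
         * With an 8-byte minimum object size, the per-CPU chunks can be
         * packed back to back into one 32-byte cache line:
         *
         *     | cpu0 | cpu1 | cpu2 | cpu3 |   <- one shared cache line
         *
         * Every lb_stats->packets++ then bounces this line between all
         * CPU caches: false sharing, hence the "serious contention".
         */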

Using the following patch gave me an impressive boost in various benchmarks
(6% in tbench).
(All percpu_counters hit this bug too.)

The long-term fix (i.e. >= 2.6.26) would be to let each CPU allocate its own
block of memory, so that we don't need to round up sizes to L1_CACHE_BYTES,
or of course to merge the SGI stuff...

Note: SLUB vs SLAB matters here in order to *show* the improvement, since the
two don't have the same minimum allocation size (8 bytes vs 32 bytes).  This
could very well explain regressions that some people reported when they
switched to SLUB.
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e3892296
--- a/mm/allocpercpu.c
+++ b/mm/allocpercpu.c
@@ -6,6 +6,10 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 
+#ifndef cache_line_size
+#define cache_line_size()	L1_CACHE_BYTES
+#endif
+
 /**
  * percpu_depopulate - depopulate per-cpu data for given cpu
  * @__pdata: per-cpu data to depopulate
@@ -52,6 +56,11 @@ void *percpu_populate(void *__pdata, size_t size, gfp_t gfp, int cpu)
 	struct percpu_data *pdata = __percpu_disguise(__pdata);
 	int node = cpu_to_node(cpu);
 
+	/*
+	 * We should make sure each CPU gets private memory.
+	 */
+	size = roundup(size, cache_line_size());
+
 	BUG_ON(pdata->ptrs[cpu]);
 	if (node_online(node))
 		pdata->ptrs[cpu] = kmalloc_node(size, gfp|__GFP_ZERO, node);
@@ -98,7 +107,11 @@ EXPORT_SYMBOL_GPL(__percpu_populate_mask);
  */
 void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask)
 {
-	void *pdata = kzalloc(nr_cpu_ids * sizeof(void *), gfp);
+	/*
+	 * We allocate whole cache lines to avoid false sharing
+	 */
+	size_t sz = roundup(nr_cpu_ids * sizeof(void *), cache_line_size());
+	void *pdata = kzalloc(sz, gfp);
 	void *__pdata = __percpu_disguise(pdata);
 
 	if (unlikely(!pdata))
...
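
As a quick sanity check of the new rounding, a stand-alone user-space sketch
(the 64-byte cache-line size is an assumption; roundup() is copied from the
kernel's macro in <linux/kernel.h>):

        #include <stdio.h>

        /* same definition as the kernel's roundup() macro */
        #define roundup(x, y)   ((((x) + ((y) - 1)) / (y)) * (y))

        int main(void)
        {
                unsigned long line = 64;        /* assumed cache_line_size() */

                /* each per-cpu chunk is now padded to a whole line */
                printf("%lu\n", roundup(8UL, line));    /* -> 64, not 8  */
                printf("%lu\n", roundup(96UL, line));   /* -> 128        */
                return 0;
        }

With the patch, the 8-byte pcpu_lstats chunk therefore occupies a private
cache line on each CPU, at the cost of some padding.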