Commit 40064aec authored by Dennis Zhou (Facebook), committed by Tejun Heo

percpu: replace area map allocator with bitmap

The percpu memory allocator is experiencing scalability issues when
allocating and freeing large numbers of counters as in BPF.
Additionally, there is a corner case where iteration is triggered over
all chunks if the contig_hint is the right size, but wrong alignment.

This patch replaces the area map allocator with a basic bitmap allocator
implementation. Each subsequent patch will introduce new features and
replace full scanning functions with faster non-scanning options when
possible.

Implementation:
This patchset removes the area map allocator in favor of a bitmap
allocator backed by metadata blocks. The primary goal is to provide
consistency in performance and memory footprint with a focus on small
allocations (< 64 bytes). The bitmap removes the heavy memmove from the
freeing critical path and provides a consistent memory footprint. The
metadata blocks provide a bound on the amount of scanning required by
maintaining a set of hints.
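To make the scheme concrete, here is a minimal userspace sketch of the idea (illustrative only, not the kernel implementation; alloc_area, CHUNK_BITS, and the other names are made up for this example). Each PCPU_MIN_ALLOC_SIZE unit of a chunk is tracked by one bit in an allocation map, a simple first-fit scan finds a run of clear bits, and a boundary map records where each area starts and ends so the free path can recover the size of an allocation. Freeing then amounts to clearing bits between two boundaries; there is no array of (offset, size) entries to memmove.

/*
 * Minimal userspace sketch of bitmap allocation (illustrative only).
 * One bit per MIN_ALLOC_SIZE unit; bound_map marks area boundaries.
 */
#include <stdbool.h>
#include <stdio.h>

#define MIN_ALLOC_SIZE 4                /* bytes per bit, like PCPU_MIN_ALLOC_SIZE */
#define CHUNK_BITS     64               /* toy chunk: 64 * 4 = 256 bytes */

static bool alloc_map[CHUNK_BITS];      /* set while a unit is allocated */
static bool bound_map[CHUNK_BITS + 1];  /* set at the start and end of each area */

/* First-fit: find 'bits' consecutive clear bits at an 'align'-bit boundary. */
static int alloc_area(int bits, int align)
{
        int start, i;

        for (start = 0; start + bits <= CHUNK_BITS; start += align) {
                for (i = 0; i < bits; i++)
                        if (alloc_map[start + i])
                                break;
                if (i < bits)
                        continue;                /* run is not free, keep scanning */

                for (i = 0; i < bits; i++)       /* claim the run */
                        alloc_map[start + i] = true;
                bound_map[start] = true;         /* boundaries let free recover the size */
                bound_map[start + bits] = true;
                return start * MIN_ALLOC_SIZE;   /* byte offset within the chunk */
        }
        return -1;                               /* no fit in this chunk */
}

int main(void)
{
        printf("16-byte alloc at offset %d\n", alloc_area(16 / MIN_ALLOC_SIZE, 1));
        printf("8-byte alloc at offset %d\n", alloc_area(8 / MIN_ALLOC_SIZE, 2));
        return 0;
}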

In an effort to make freeing fast, the metadata is updated on the free
path if the new free area makes a page free, a block free, or spans
across blocks. This causes the chunk's contig hint to potentially be
smaller than what it could allocate by up to the smaller of a page or a
block. If the chunk's contig hint is contained within a block, a check
occurs and the hint is kept accurate. Metadata is always kept accurate
on allocation, so there will not be a situation where a chunk has a
larger contig hint than available.
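As a rough illustration of this policy, here is a toy model (illustrative names only; the page-free case and the in-block accuracy check mentioned above are omitted). The free path clears the freed bits and only pays for a chunk-level hint refresh when the freed area crosses a block boundary or leaves its block completely empty; otherwise the hint may go stale, but it never overstates what the chunk can serve.

/*
 * Toy model of the free-path policy (illustrative only).
 */
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_BLOCK 16               /* toy value, not the kernel's */

static bool alloc_map[4 * BITS_PER_BLOCK];

static bool block_is_empty(int block)
{
        for (int i = 0; i < BITS_PER_BLOCK; i++)
                if (alloc_map[block * BITS_PER_BLOCK + i])
                        return false;
        return true;
}

/*
 * Clear the freed bits, then report whether the chunk-level contig hint
 * should be refreshed: only when the area spans blocks or leaves its
 * block completely free.
 */
static bool free_area(int start, int nr_bits)
{
        int first_block = start / BITS_PER_BLOCK;
        int last_block = (start + nr_bits - 1) / BITS_PER_BLOCK;

        for (int i = 0; i < nr_bits; i++)
                alloc_map[start + i] = false;

        return first_block != last_block || block_is_empty(first_block);
}

int main(void)
{
        alloc_map[3] = alloc_map[4] = true;       /* lone area in block 0 */
        alloc_map[20] = alloc_map[25] = true;     /* two areas in block 1 */
        printf("free bits 3-4: refresh=%d\n", free_area(3, 2));  /* block 0 empties -> 1 */
        printf("free bit 20:   refresh=%d\n", free_area(20, 1)); /* block 1 still busy -> 0 */
        return 0;
}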

Evaluation:
I have primarily tested against a simple workload of allocating 1 million
objects (2^20) of varying size. Deallocation was done in order,
alternating, and in reverse. These numbers were collected after rebasing
on top of a80099a1. I present the worst-case numbers here:

  Area Map Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |        310      |     4770
             16B    |        557      |     1325
             64B    |        436      |      273
            256B    |        776      |      131
           1024B    |       3280      |      122

  Bitmap Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |        490      |       70
             16B    |        515      |       75
             64B    |        610      |       80
            256B    |        950      |      100
           1024B    |       3520      |      200

This data demonstrates the inability of the area map allocator to
handle less-than-ideal situations. In the best case of reverse
deallocation, the area map allocator was able to perform within range
of the bitmap allocator. In the worst case, freeing took nearly 5
seconds for 1 million 4-byte objects. The bitmap allocator dramatically
improves the consistency of the free path; the small allocations
performed nearly identically regardless of the freeing pattern.

While the bitmap allocator does add to allocation latency, the
allocation scenario here is optimal for the area map allocator. The
area map allocator runs into trouble when it is allocating into chunks
whose latter half is full. This is difficult to replicate directly, so
I present a variant where the second half of each page is filled.
Freeing was done sequentially. Below are the numbers for this scenario:

  Area Map Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |       4118      |     4892
             16B    |       1651      |     1163
             64B    |        598      |      285
            256B    |        771      |      158
           1024B    |       3034      |      160

  Bitmap Allocator:

        Object Size | Alloc Time (ms) | Free Time (ms)
        ----------------------------------------------
              4B    |        481      |       67
             16B    |        506      |       69
             64B    |        636      |       75
            256B    |        892      |       90
           1024B    |       3262      |      147

The data shows a parabolic performance curve for the area map
allocator. At lower object sizes, the memmove operation is the dominant
cost, as more objects are packed into a chunk; at higher object sizes,
traversal of the chunk slots becomes the dominating cost. The bitmap
allocator suffers from the latter as well. The above data shows the
area map allocator's inability to scale on the allocation path, while
the bitmap allocator demonstrates consistent performance in general.

The second problem, the additional scanning triggered when the
contig_hint is the right size but the wrong alignment, can result in
the area map allocator taking 52 minutes to allocate 1 million 4-byte
objects with 8-byte alignment. The same workload takes approximately
16 seconds to complete with the bitmap allocator.

V2:
Fixed a bug in pcpu_alloc_first_chunk where end_offset was setting the
bitmap using bytes instead of bits.

Added a comment to pcpu_cnt_pop_pages to explain bitmap_weight.
Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
parent 91e914c5
@@ -120,7 +120,6 @@ extern bool is_kernel_percpu_address(unsigned long addr);
 #if !defined(CONFIG_SMP) || !defined(CONFIG_HAVE_SETUP_PER_CPU_AREA)
 extern void __init setup_per_cpu_areas(void);
 #endif
-extern void __init percpu_init_late(void);
 
 extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp);
 extern void __percpu *__alloc_percpu(size_t size, size_t align);
...
@@ -500,7 +500,6 @@ static void __init mm_init(void)
 	page_ext_init_flatmem();
 	mem_init();
 	kmem_cache_init();
-	percpu_init_late();
 	pgtable_init();
 	vmalloc_init();
 	ioremap_huge_init();
...
@@ -11,14 +11,12 @@ struct pcpu_chunk {
 #endif
 
 	struct list_head	list;		/* linked to pcpu_slot lists */
-	int			free_size;	/* free bytes in the chunk */
-	int			contig_hint;	/* max contiguous size hint */
+	int			free_bytes;	/* free bytes in the chunk */
+	int			contig_bits;	/* max contiguous size hint */
 	void			*base_addr;	/* base address of this chunk */
 
-	int			map_used;	/* # of map entries used before the sentry */
-	int			map_alloc;	/* # of map entries allocated */
-	int			*map;		/* allocation map */
-	struct list_head	map_extend_list;/* on pcpu_map_extend_chunks */
+	unsigned long		*alloc_map;	/* allocation map */
+	unsigned long		*bound_map;	/* boundary map */
 
 	void			*data;		/* chunk data */
 	int			first_free;	/* no free below this */
@@ -45,6 +43,30 @@ extern int pcpu_nr_empty_pop_pages;
 extern struct pcpu_chunk *pcpu_first_chunk;
 extern struct pcpu_chunk *pcpu_reserved_chunk;
 
+/**
+ * pcpu_nr_pages_to_map_bits - converts the pages to size of bitmap
+ * @pages: number of physical pages
+ *
+ * This conversion is from physical pages to the number of bits
+ * required in the bitmap.
+ */
+static inline int pcpu_nr_pages_to_map_bits(int pages)
+{
+	return pages * PAGE_SIZE / PCPU_MIN_ALLOC_SIZE;
+}
+
+/**
+ * pcpu_chunk_map_bits - helper to convert nr_pages to size of bitmap
+ * @chunk: chunk of interest
+ *
+ * This conversion is from the number of physical pages that the chunk
+ * serves to the number of bits in the bitmap.
+ */
+static inline int pcpu_chunk_map_bits(struct pcpu_chunk *chunk)
+{
+	return pcpu_nr_pages_to_map_bits(chunk->nr_pages);
+}
+
 #ifdef CONFIG_PERCPU_STATS
 
 #include <linux/spinlock.h>
...
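As a worked example of the helpers above (assuming the common values PAGE_SIZE = 4096 and PCPU_MIN_ALLOC_SIZE = 4), each page contributes 4096 / 4 = 1024 bits to the allocation map, so a chunk serving 16 pages is tracked by a 16384-bit (2 KiB) bitmap.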
@@ -69,7 +69,7 @@ static struct pcpu_chunk *pcpu_create_chunk(void)
 	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];
 
 	spin_lock_irq(&pcpu_lock);
-	pcpu_chunk_populated(chunk, 0, nr_pages);
+	pcpu_chunk_populated(chunk, 0, nr_pages, false);
 	spin_unlock_irq(&pcpu_lock);
 
 	pcpu_stats_chunk_alloc();
...
@@ -29,65 +29,85 @@ static int cmpint(const void *a, const void *b)
 }
 
 /*
- * Iterates over all chunks to find the max # of map entries used.
+ * Iterates over all chunks to find the max nr_alloc entries.
  */
-static int find_max_map_used(void)
+static int find_max_nr_alloc(void)
 {
 	struct pcpu_chunk *chunk;
-	int slot, max_map_used;
+	int slot, max_nr_alloc;
 
-	max_map_used = 0;
+	max_nr_alloc = 0;
 	for (slot = 0; slot < pcpu_nr_slots; slot++)
 		list_for_each_entry(chunk, &pcpu_slot[slot], list)
-			max_map_used = max(max_map_used, chunk->map_used);
+			max_nr_alloc = max(max_nr_alloc, chunk->nr_alloc);
 
-	return max_map_used;
+	return max_nr_alloc;
 }
 
 /*
  * Prints out chunk state. Fragmentation is considered between
  * the beginning of the chunk to the last allocation.
+ *
+ * All statistics are in bytes unless stated otherwise.
  */
 static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 			    int *buffer)
 {
-	int i, s_index, e_index, last_alloc, alloc_sign, as_len;
+	int i, last_alloc, as_len, start, end;
 	int *alloc_sizes, *p;
 	/* statistics */
 	int sum_frag = 0, max_frag = 0;
 	int cur_min_alloc = 0, cur_med_alloc = 0, cur_max_alloc = 0;
 
 	alloc_sizes = buffer;
-	s_index = (chunk->start_offset) ? 1 : 0;
-	e_index = chunk->map_used - ((chunk->end_offset) ? 1 : 0);
-
-	/* find last allocation */
-	last_alloc = -1;
-	for (i = e_index - 1; i >= s_index; i--) {
-		if (chunk->map[i] & 1) {
-			last_alloc = i;
-			break;
-		}
-	}
 
-	/* if the chunk is not empty - ignoring reserve */
-	if (last_alloc >= s_index) {
-		as_len = last_alloc + 1 - s_index;
-
-		/*
-		 * Iterate through chunk map computing size info.
-		 * The first bit is overloaded to be a used flag.
-		 * negative = free space, positive = allocated
-		 */
-		for (i = 0, p = chunk->map + s_index; i < as_len; i++, p++) {
-			alloc_sign = (*p & 1) ? 1 : -1;
-			alloc_sizes[i] = alloc_sign *
-				((p[1] & ~1) - (p[0] & ~1));
-		}
+	/*
+	 * find_last_bit returns the start value if nothing found.
+	 * Therefore, we must determine if it is a failure of find_last_bit
+	 * and set the appropriate value.
+	 */
+	last_alloc = find_last_bit(chunk->alloc_map,
+				   pcpu_chunk_map_bits(chunk) -
+				   chunk->end_offset / PCPU_MIN_ALLOC_SIZE - 1);
+	last_alloc = test_bit(last_alloc, chunk->alloc_map) ?
+		     last_alloc + 1 : 0;
+
+	as_len = 0;
+	start = chunk->start_offset;
+
+	/*
+	 * If a bit is set in the allocation map, the bound_map identifies
+	 * where the allocation ends.  If the allocation is not set, the
+	 * bound_map does not identify free areas as it is only kept accurate
+	 * on allocation, not free.
+	 *
+	 * Positive values are allocations and negative values are free
+	 * fragments.
+	 */
+	while (start < last_alloc) {
+		if (test_bit(start, chunk->alloc_map)) {
+			end = find_next_bit(chunk->bound_map, last_alloc,
+					    start + 1);
+			alloc_sizes[as_len] = 1;
+		} else {
+			end = find_next_bit(chunk->alloc_map, last_alloc,
+					    start + 1);
+			alloc_sizes[as_len] = -1;
+		}
 
-		sort(alloc_sizes, as_len, sizeof(chunk->map[0]), cmpint, NULL);
+		alloc_sizes[as_len++] *= (end - start) * PCPU_MIN_ALLOC_SIZE;
+
+		start = end;
+	}
+
+	/*
+	 * The negative values are free fragments and thus sorting gives the
+	 * free fragments at the beginning in largest first order.
+	 */
+	if (as_len > 0) {
+		sort(alloc_sizes, as_len, sizeof(int), cmpint, NULL);
 
-		/* Iterate through the unallocated fragements. */
+		/* iterate through the unallocated fragments */
 		for (i = 0, p = alloc_sizes; *p < 0 && i < as_len; i++, p++) {
 			sum_frag -= *p;
 			max_frag = max(max_frag, -1 * (*p));
@@ -101,8 +121,8 @@ static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 	P("nr_alloc", chunk->nr_alloc);
 	P("max_alloc_size", chunk->max_alloc_size);
 	P("empty_pop_pages", chunk->nr_empty_pop_pages);
-	P("free_size", chunk->free_size);
-	P("contig_hint", chunk->contig_hint);
+	P("free_bytes", chunk->free_bytes);
+	P("contig_bytes", chunk->contig_bits * PCPU_MIN_ALLOC_SIZE);
 	P("sum_frag", sum_frag);
 	P("max_frag", max_frag);
 	P("cur_min_alloc", cur_min_alloc);
@@ -114,22 +134,23 @@ static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 static int percpu_stats_show(struct seq_file *m, void *v)
 {
 	struct pcpu_chunk *chunk;
-	int slot, max_map_used;
+	int slot, max_nr_alloc;
 	int *buffer;
 
 alloc_buffer:
 	spin_lock_irq(&pcpu_lock);
-	max_map_used = find_max_map_used();
+	max_nr_alloc = find_max_nr_alloc();
 	spin_unlock_irq(&pcpu_lock);
 
-	buffer = vmalloc(max_map_used * sizeof(pcpu_first_chunk->map[0]));
+	/* there can be at most this many free and allocated fragments */
+	buffer = vmalloc((2 * max_nr_alloc + 1) * sizeof(int));
 	if (!buffer)
 		return -ENOMEM;
 
 	spin_lock_irq(&pcpu_lock);
 
 	/* if the buffer allocated earlier is too small */
-	if (max_map_used < find_max_map_used()) {
+	if (max_nr_alloc < find_max_nr_alloc()) {
 		spin_unlock_irq(&pcpu_lock);
 		vfree(buffer);
 		goto alloc_buffer;
...
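To make the alloc_map/bound_map walk above concrete (again assuming PCPU_MIN_ALLOC_SIZE = 4): a 12-byte allocation starting at byte offset 4 of a chunk sets alloc_map bits 1-3 and bound_map bits 1 and 4 (one past the area). On reaching bit 1, the walker finds the next bound_map bit at 4 and records an allocated fragment of (4 - 1) * 4 = 12 bytes; a run of clear bits is measured the same way against the next alloc_map bit and recorded as a negative size, so the subsequent sort groups the free fragments first.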
(The remaining diff is collapsed and not shown.)