    bpf: Introduce any context BPF specific memory allocator. · 7c8199e2
    Alexei Starovoitov authored
    Tracing BPF programs can attach to kprobes and fentry. Hence they
    run in an unknown context where calling plain kmalloc() might not be safe.
    
    Front-end kmalloc() with a minimal per-cpu cache of free elements.
    Refill this cache asynchronously from irq_work.
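
    A condensed sketch of that scheme (hypothetical demo_* names, not the
    actual memalloc.c code, which adds batching, watermark tuning and more):
    each cpu keeps a lockless list of prefilled elements, the fast path only
    pops from that list, and an irq_work refills it in a context where
    kmalloc() is safe.

    #include <linux/container_of.h>
    #include <linux/irq_work.h>
    #include <linux/irqflags.h>
    #include <linux/llist.h>
    #include <linux/slab.h>

    struct demo_cache {
            struct llist_head free_llist;   /* per-cpu stack of free elements */
            int free_cnt;
            int low_wmark, high_wmark, batch;
            int unit_size;
            struct irq_work refill_work;    /* init_irq_work(..., demo_refill) */
    };

    /* Fast path: runs with irqs disabled and never calls kmalloc() itself,
     * so it can be used from kprobe/fentry/NMI attached bpf programs.
     */
    static void *demo_unit_alloc(struct demo_cache *c)
    {
            struct llist_node *obj;
            unsigned long flags;

            local_irq_save(flags);
            obj = __llist_del_first(&c->free_llist);
            if (obj)
                    c->free_cnt--;
            if (c->free_cnt < c->low_wmark)
                    irq_work_queue(&c->refill_work); /* refill asynchronously */
            local_irq_restore(flags);
            return obj;
    }

    /* irq_work callback: runs in a context where plain kmalloc() is safe. */
    static void demo_refill(struct irq_work *work)
    {
            struct demo_cache *c = container_of(work, struct demo_cache, refill_work);
            unsigned long flags;
            void *obj;
            int i;

            for (i = 0; i < c->batch; i++) {
                    obj = kmalloc(c->unit_size, GFP_NOWAIT | __GFP_NOWARN);
                    if (!obj)
                            break;
                    local_irq_save(flags);
                    /* the first bytes of a free element serve as its llist_node */
                    __llist_add(obj, &c->free_llist);
                    c->free_cnt++;
                    local_irq_restore(flags);
            }
    }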
    
    BPF programs always run with migration disabled.
    It's safe to allocate from the cache of the current cpu with irqs disabled.
    Freeing is always done into the bucket of the current cpu as well.
    irq_work trims extra free elements from the buckets with kfree
    and refills them with kmalloc, so the global kmalloc logic takes care
    of objects that were allocated on one cpu and freed on another.
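
    Continuing the hypothetical demo_cache sketch above, the free path also
    only touches the current cpu's list, and the surplus above a high
    watermark is handed back to kfree() from the irq_work:

    /* Free fast path: push onto the current cpu's list, never call kfree()
     * directly from the restricted context.
     */
    static void demo_unit_free(struct demo_cache *c, void *obj)
    {
            unsigned long flags;

            local_irq_save(flags);
            /* reuse the object memory itself as its llist_node */
            __llist_add(obj, &c->free_llist);
            if (++c->free_cnt > c->high_wmark)
                    irq_work_queue(&c->refill_work); /* trim asynchronously */
            local_irq_restore(flags);
    }

    /* Trim side, called from the irq_work callback: return surplus elements
     * to kfree(), so the slab/kmalloc layer balances objects that were
     * allocated on one cpu and freed on another.
     */
    static void demo_trim(struct demo_cache *c)
    {
            struct llist_node *obj;
            unsigned long flags;

            while (c->free_cnt > c->high_wmark) {
                    local_irq_save(flags);
                    obj = __llist_del_first(&c->free_llist);
                    if (obj)
                            c->free_cnt--;
                    local_irq_restore(flags);
                    if (!obj)
                            break;
                    kfree(obj);
            }
    }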
    
    struct bpf_mem_alloc supports two modes:
    - When size != 0, create a kmem_cache and a bpf_mem_cache for each cpu.
      This is the typical bpf hash map use case where all elements have equal size.
    - When size == 0, allocate 11 bpf_mem_cache-s for each cpu, then rely on
      kmalloc/kfree. The max allocation size is 4096 in this case.
      This is the bpf_dynptr and bpf_kptr use case (see the usage sketch below).
    
    bpf_mem_alloc/bpf_mem_free are bpf-specific 'wrappers' of kmalloc/kfree.
    bpf_mem_cache_alloc/bpf_mem_cache_free are 'wrappers' of kmem_cache_alloc/kmem_cache_free.
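
    A usage sketch of the two modes, assuming the init/teardown helpers this
    patch adds in include/linux/bpf_mem_alloc.h (bpf_mem_alloc_init() and
    bpf_mem_alloc_destroy()) plus the four wrappers above; the demo_* callers
    are hypothetical, and later kernels extend these prototypes:

    #include <linux/bpf_mem_alloc.h>

    /* Mode 1: size != 0, all elements have the same size (hash map style);
     * use the kmem_cache-style wrappers.
     */
    static int demo_fixed_size(void)
    {
            struct bpf_mem_alloc ma;
            void *elem;
            int err;

            err = bpf_mem_alloc_init(&ma, 64);
            if (err)
                    return err;
            elem = bpf_mem_cache_alloc(&ma);
            if (elem)
                    bpf_mem_cache_free(&ma, elem);
            bpf_mem_alloc_destroy(&ma);
            return 0;
    }

    /* Mode 2: size == 0, variable sizes up to 4096 (dynptr/kptr style);
     * use the kmalloc-style wrappers.
     */
    static int demo_any_size(void)
    {
            struct bpf_mem_alloc ma;
            void *buf;
            int err;

            err = bpf_mem_alloc_init(&ma, 0);
            if (err)
                    return err;
            buf = bpf_mem_alloc(&ma, 512);
            if (buf)
                    bpf_mem_free(&ma, buf);
            bpf_mem_alloc_destroy(&ma);
            return 0;
    }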
    
    The allocators are NMI-safe only when used from bpf programs; they are not NMI-safe in general.
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220902211058.60789-2-alexei.starovoitov@gmail.com