    net: use SLAB_NO_MERGE for kmem_cache skbuff_head_cache · 0a064316
    Jesper Dangaard Brouer authored
    Since v6.5-rc1 the MM-tree has been merged and contains the new flag
    SLAB_NO_MERGE, added in commit d0bf7d57 ("mm/slab: introduce kmem_cache
    flag SLAB_NO_MERGE"). Now is the time to use this flag for networking,
    as proposed earlier (see link).
    
    The SKB (sk_buff) kmem_cache slab is critical for network performance.
    The network stack uses the kmem_cache_{alloc,free}_bulk APIs to gain
    performance by amortising the alloc/free cost.
    
    For the bulk API to perform efficiently, slab fragmentation needs to
    be low. Especially for the SLUB allocator, the efficiency of the bulk
    free API depends on objects belonging to the same slab (page).
    
    When running different network performance microbenchmarks, I started
    to notice that performance was reduced (slightly) when machines had
    longer uptimes. I believe the cause was that 'skbuff_head_cache' got
    aliased/merged into the general slab for 256-byte sized objects (with
    my kernel config, without CONFIG_HARDENED_USERCOPY).
    
    For the SKB kmem_cache, the network stack has various other reasons
    for not merging, but they vary depending on kernel config (e.g.
    CONFIG_HARDENED_USERCOPY). We want to explicitly set SLAB_NO_MERGE
    for this kmem_cache to get the most out of the
    kmem_cache_{alloc,free}_bulk APIs.
    
    When CONFIG_SLUB_TINY is configured, the bulk APIs are essentially
    disabled. Thus, for this case, drop the SLAB_NO_MERGE flag.
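
    The conditional-flag pattern described above can be sketched as a
    standalone C program. This is a minimal illustration, not the upstream
    diff: the flag values and the FLAG_SKB_NO_MERGE helper macro here are
    stand-ins (the real flags live in include/linux/slab.h, and the real
    cache is created in net/core/skbuff.c's skb_init()).

    ```c
    #include <stdio.h>

    /* Stub flag values standing in for the kernel's slab flags
     * (illustrative only; real values are in include/linux/slab.h). */
    #define SLAB_HWCACHE_ALIGN 0x2000UL
    #define SLAB_PANIC         0x40000UL
    #define SLAB_NO_MERGE      0x1000000UL

    /* Mirror the commit's approach: SLAB_NO_MERGE is compiled out when
     * CONFIG_SLUB_TINY is set, since the bulk alloc/free APIs are
     * essentially disabled there and merging then does no harm. */
    #ifdef CONFIG_SLUB_TINY
    #define FLAG_SKB_NO_MERGE 0UL
    #else
    #define FLAG_SKB_NO_MERGE SLAB_NO_MERGE
    #endif

    int main(void)
    {
        /* Flags that would be passed when creating skbuff_head_cache. */
        unsigned long flags = SLAB_HWCACHE_ALIGN | SLAB_PANIC |
                              FLAG_SKB_NO_MERGE;

        printf("skbuff_head_cache flags: %#lx (no-merge %s)\n", flags,
               (flags & SLAB_NO_MERGE) ? "set" : "unset");
        return 0;
    }
    ```

    Compiled without CONFIG_SLUB_TINY defined, the no-merge bit is set;
    building with -DCONFIG_SLUB_TINY drops it, matching the commit's intent.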
    
    Link: https://lore.kernel.org/all/167396280045.539803.7540459812377220500.stgit@firesoul/
    
    Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Link: https://lore.kernel.org/r/169211265663.1491038.8580163757548985946.stgit@firesoul
    
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>