    Merge tag 'slab-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab · 76d9b92e
    Linus Torvalds authored
    Pull slab updates from Vlastimil Babka:
     "The most prominent change this time is the kmem_buckets based
      hardening of kmalloc() allocations from Kees Cook.
    
      We have also extended the kmalloc() alignment guarantees for
      non-power-of-two sizes in a way that benefits Rust.
    
      The rest are various cleanups and non-critical fixups.
    
       - Dedicated bucket allocator (Kees Cook)
    
         This series [1] enhances the probabilistic defense against heap
         spraying/grooming of CONFIG_RANDOM_KMALLOC_CACHES from last year.
    
         kmalloc() users that are known to be useful for exploits can get a
         completely separate set of kmalloc caches that can't be shared with
         other users. The first converted users are alloc_msg() and
         memdup_user().
    
         The hardening is enabled by CONFIG_SLAB_BUCKETS.
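         The new API surface named in the commit titles below (the
         kmem_buckets typedef, kmem_buckets_create() and family) can be
         sketched roughly as follows. This is illustrative only: the
         function names come from the commits, but the argument lists and
         flags here are approximations, not the authoritative signatures
         (see include/linux/slab.h for those):

```c
/* Illustrative sketch of the dedicated-bucket API from this series.
 * Argument values are placeholders -- consult include/linux/slab.h
 * for the real definitions.
 */
static kmem_buckets *user_buckets __ro_after_init;

static int __init init_user_buckets(void)
{
	/* Create a private set of kmalloc caches for one caller site;
	 * objects allocated here never share a slab with other
	 * kmalloc() users, defeating cross-cache heap grooming.
	 */
	user_buckets = kmem_buckets_create("memdup_user", 0, 0, 0, NULL);
	return 0;
}

/* Allocations then go through a bucket-aware helper instead of
 * plain kmalloc(), e.g. in memdup_user():
 */
p = kmem_buckets_alloc(user_buckets, len, GFP_USER);
```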
    
       - Extended kmalloc() alignment guarantees (Vlastimil Babka)
    
         For years now we have guaranteed natural alignment for power-of-two
         allocations, but nothing was defined for other sizes (in practice,
         we have two such buckets, kmalloc-96 and kmalloc-192).
    
         To avoid unnecessary padding in the Rust layer due to its alignment
         rules, extend the guarantee so that the alignment is at least the
         largest power-of-two divisor of the requested size.
    
         This fits what Rust needs, is a superset of the existing
         power-of-two guarantee, and does not in practice change the layout
         (and thus does not add overhead due to padding) of the kmalloc-96
         and kmalloc-192 caches, unless slab debugging is enabled for them.
    
       - Cleanups and non-critical fixups (Chengming Zhou, Suren
         Baghdasaryan, Matthew Wilcox, Alex Shi, and Vlastimil Babka)

         Various tweaks related to the new alloc profiling code, folio
         conversion, debugging, and more leftovers after the SLAB allocator
         removal"
    
    Link: https://lore.kernel.org/all/20240701190152.it.631-kees@kernel.org/ [1]
    
    * tag 'slab-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
      mm/memcg: alignment memcg_data define condition
      mm, slab: move prepare_slab_obj_exts_hook under CONFIG_MEM_ALLOC_PROFILING
      mm, slab: move allocation tagging code in the alloc path into a hook
      mm/util: Use dedicated slab buckets for memdup_user()
      ipc, msg: Use dedicated slab buckets for alloc_msg()
      mm/slab: Introduce kmem_buckets_create() and family
      mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument
      mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
      mm/slab: Introduce kmem_buckets typedef
      slab, rust: extend kmalloc() alignment guarantees to remove Rust padding
      slab: delete useless RED_INACTIVE and RED_ACTIVE
      slab: don't put freepointer outside of object if only orig_size
      slab: make check_object() more consistent
      mm: Reduce the number of slab->folio casts
      mm, slab: don't wrap internal functions with alloc_hooks()