    mm/slab: introduce new slab management type, OBJFREELIST_SLAB
    Joonsoo Kim authored
    SLAB needs an array to manage freed objects in a slab.  This array is
    only used once some objects have been freed, so we can use a freed
    object itself as the array.  Doing so requires an additional branch in
    a somewhat critical lock path to check whether it is the first freed
    object or not, but that's all we need.  The benefit is that we save
    the extra memory and some of the computational overhead of allocating
    a management array whenever a new slab is created.
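
    A standalone toy sketch of the core idea follows.  This is not the
    slab.c implementation; the types and names (toy_slab, toy_free_obj,
    toy_alloc_obj, and so on) are made up for illustration.  It only shows
    the "first freed object becomes the index array" branch, plus the
    corresponding reset once the slab becomes fully allocated again.

    	/* Toy model: the freelist index array only matters once something
    	 * has been freed, so the first freed object's memory is reused to
    	 * hold it.  All names here are hypothetical. */
    	#include <assert.h>
    	#include <stddef.h>
    	#include <stdlib.h>

    	typedef unsigned char freelist_idx_t;

    	struct toy_slab {
    		void *mem;		/* backing memory for all objects */
    		size_t obj_size;	/* >= num * sizeof(freelist_idx_t) */
    		unsigned int num;	/* objects per slab */
    		unsigned int active;	/* objects currently allocated */
    		freelist_idx_t *freelist;	/* NULL until the first free */
    	};

    	static void toy_free_obj(struct toy_slab *s, void *obj)
    	{
    		unsigned int objnr = ((char *)obj - (char *)s->mem) / s->obj_size;

    		s->active--;
    		/* The extra branch: the first freed object becomes the array. */
    		if (!s->freelist)
    			s->freelist = obj;
    		s->freelist[s->active] = objnr;	/* record this slot as free */
    	}

    	static void *toy_alloc_obj(struct toy_slab *s)
    	{
    		/* Pop the most recently freed object's index off the array. */
    		void *obj = (char *)s->mem + s->freelist[s->active] * s->obj_size;

    		s->active++;
    		/* Once the slab is full again, the array lives in an object we
    		 * just handed out, so forget it; the next free rebuilds it. */
    		if (s->active == s->num)
    			s->freelist = NULL;
    		return obj;
    	}

    	int main(void)
    	{
    		struct toy_slab s = { .obj_size = 64, .num = 8 };

    		s.mem = calloc(s.num, s.obj_size);
    		s.active = s.num;	/* pretend every object is allocated */

    		toy_free_obj(&s, (char *)s.mem + 3 * s.obj_size);
    		toy_free_obj(&s, (char *)s.mem + 5 * s.obj_size);
    		assert(toy_alloc_obj(&s) == (char *)s.mem + 5 * s.obj_size);
    		assert(toy_alloc_obj(&s) == (char *)s.mem + 3 * s.obj_size);
    		free(s.mem);
    		return 0;
    	}

    Note that the first freed object always ends up at the highest occupied
    index, so it is the last one handed back out and its memory is not
    given to the user while the array is still needed.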
    
    The code change is more complex than the idea itself would suggest, in
    order to handle the debugging features efficiently.  If you want to see
    only the core idea, skip the '#if DEBUG' blocks in the patch.
    
    Although this idea could apply to all caches whose object size is
    larger than the management array size, it is not applied to caches
    that have a constructor.  If an object from such a cache were used as
    the management array, the constructor would have to be called on it
    again before the object is returned to the user.  I expect that this
    overhead would outweigh the benefit, so the idea is not applied to
    those caches, at least for now.
    
    In summary, from now on, the slab management type is determined by the
    following logic (a rough sketch of the decision follows the list):
    
    1) If the management array size is smaller than the object size and the
       cache has no ctor, it becomes OBJFREELIST_SLAB.
    
    2) If the management array size is smaller than the leftover space, it
       becomes NORMAL_SLAB, which uses the leftover space as the array.
    
    3) If OFF_SLAB saves more memory than way 4), it becomes OFF_SLAB.
       The management array is allocated from another cache, so some memory
       waste still happens.
    
    4) Everything else becomes NORMAL_SLAB.  It uses dedicated internal
       memory in the slab as the management array, so it also wastes memory.
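
    A rough sketch of this selection order, for illustration only.  The
    function and parameter names below (pick_mgmt_type, off_slab_waste,
    and so on) are hypothetical; in slab.c the same decision is spread
    across helpers called from __kmem_cache_create().

    	enum toy_mgmt_type {
    		TOY_OBJFREELIST_SLAB,	/* 1) reuse a freed object */
    		TOY_NORMAL_LEFTOVER,	/* 2) array fits in leftover space */
    		TOY_OFF_SLAB,		/* 3) array allocated from another cache */
    		TOY_NORMAL_SLAB,	/* 4) dedicated in-slab array */
    	};

    	static enum toy_mgmt_type pick_mgmt_type(size_t obj_size,
    						 size_t freelist_size,
    						 size_t leftover, int has_ctor,
    						 size_t off_slab_waste,
    						 size_t on_slab_waste)
    	{
    		if (freelist_size <= obj_size && !has_ctor)
    			return TOY_OBJFREELIST_SLAB;
    		if (freelist_size <= leftover)
    			return TOY_NORMAL_LEFTOVER;
    		if (off_slab_waste < on_slab_waste)
    			return TOY_OFF_SLAB;
    		return TOY_NORMAL_SLAB;
    	}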
    
    On my system, without CONFIG_DEBUG_SLAB enabled, almost all caches
    become OBJFREELIST_SLAB or NORMAL_SLAB (using leftover), which do not
    waste memory.  The following shows the number of caches with each slab
    management type:
    
    TOTAL = OBJFREELIST + NORMAL(leftover) + NORMAL + OFF
    
    /Before/
    126 = 0 + 60 + 25 + 41
    
    /After/
    126 = 97 + 12 + 15 + 2
    
    The result shows that the number of caches that do not waste memory
    increases from 60 to 109.
    
    I did some benchmarking and it looks like the benefit outweighs the loss.
    
    Kmalloc: Repeatedly allocate then free test
    
    /Before/
    [    0.286809] 1. Kmalloc: Repeatedly allocate then free test
    [    1.143674] 100000 times kmalloc(32) -> 116 cycles kfree -> 78 cycles
    [    1.441726] 100000 times kmalloc(64) -> 121 cycles kfree -> 80 cycles
    [    1.815734] 100000 times kmalloc(128) -> 168 cycles kfree -> 85 cycles
    [    2.380709] 100000 times kmalloc(256) -> 287 cycles kfree -> 95 cycles
    [    3.101153] 100000 times kmalloc(512) -> 370 cycles kfree -> 117 cycles
    [    3.942432] 100000 times kmalloc(1024) -> 413 cycles kfree -> 156 cycles
    [    5.227396] 100000 times kmalloc(2048) -> 622 cycles kfree -> 248 cycles
    [    7.519793] 100000 times kmalloc(4096) -> 1102 cycles kfree -> 452 cycles
    
    /After/
    [    1.205313] 100000 times kmalloc(32) -> 117 cycles kfree -> 78 cycles
    [    1.510526] 100000 times kmalloc(64) -> 124 cycles kfree -> 81 cycles
    [    1.827382] 100000 times kmalloc(128) -> 130 cycles kfree -> 84 cycles
    [    2.226073] 100000 times kmalloc(256) -> 177 cycles kfree -> 92 cycles
    [    2.814747] 100000 times kmalloc(512) -> 286 cycles kfree -> 112 cycles
    [    3.532952] 100000 times kmalloc(1024) -> 344 cycles kfree -> 141 cycles
    [    4.608777] 100000 times kmalloc(2048) -> 519 cycles kfree -> 210 cycles
    [    6.350105] 100000 times kmalloc(4096) -> 789 cycles kfree -> 391 cycles
    
    In fact, I tested another idea: implementing OBJFREELIST_SLAB with an
    extendable linked array threaded through additional freed objects.  It
    can remove the memory waste completely, but it causes more
    computational overhead in the critical lock path, and that overhead
    seems to outweigh the benefit.  So this patch doesn't include it.
    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Pekka Enberg <penberg@kernel.org>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Jesper Dangaard Brouer <brouer@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>