    SLUB: direct pass through of page size or higher kmalloc requests · aadb4bc4
    Christoph Lameter authored
    This gets rid of all kmalloc caches larger than page size.  A kmalloc
    request larger than PAGE_SIZE/2 is passed through to the page
    allocator.  This works both inline, where we call __get_free_pages
    instead of kmem_cache_alloc, and in __kmalloc.
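    
    A minimal sketch of the inline fast path under this change (condensed;
    the surrounding kernel context is assumed, and __kmalloc applies the
    same size check for non-constant sizes):
    
        static __always_inline void *kmalloc(size_t size, gfp_t flags)
        {
                /*
                 * Constant size above PAGE_SIZE/2: go straight to the
                 * page allocator.  __GFP_COMP makes the higher order
                 * allocation a compound page, so kfree can later find
                 * the head page from any address within it.
                 */
                if (__builtin_constant_p(size) && size > PAGE_SIZE / 2)
                        return (void *)__get_free_pages(flags | __GFP_COMP,
                                                        get_order(size));
                return __kmalloc(size, flags);
        }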
    
    kfree is modified to check whether the object is in a slab page. If not,
    the page is freed via the page allocator instead, roughly similar to what
    SLOB does.
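    
    A condensed sketch of the modified kfree (slab_free stands for SLUB's
    internal slab free path; details simplified):
    
        void kfree(const void *x)
        {
                struct page *page = virt_to_head_page(x);
    
                /*
                 * Not a slab page: the object came straight from the
                 * page allocator, so drop the compound page reference
                 * instead of going through the slab free path.
                 */
                if (unlikely(!PageSlab(page))) {
                        put_page(page);
                        return;
                }
                slab_free(page->slab, page, (void *)x,
                                __builtin_return_address(0));
        }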
    
    Advantages:
    - Reduces memory overhead for kmalloc array
    - Large kmalloc operations are faster since they do not
      need to pass through the slab allocator to get to the
      page allocator.
    - Performance increase of 10%-20% on alloc and 50% on free for
      PAGE_SIZEd allocations.
      SLUB must call the page allocator for each alloc anyway, since
      the higher order pages that previously allowed avoiding those page
      allocator calls are no longer reliably available. So we are basically
      removing useless slab allocator overhead.
    - Large kmallocs yield page aligned objects, which is what
      SLAB did. Bad things like using page sized kmalloc allocations to
      stand in for page allocator allocations can be transparently handled
      and are not distinguishable from page allocator uses.
    - Checking for too large objects can be removed since
      it is done by the page allocator.
    
    Drawbacks:
    - No slab accounting of large kmalloc allocations anymore
    - No slab debugging of large kmalloc allocations.
    
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>