  1. 08 Mar, 2003 1 commit
    • [PATCH] slab use-after-free detector · 9ec42f94
      Andrew Morton authored
      Patch from Petr Vandrovec <vandrove@vc.cvut.cz>
      
      Modifies the check_poison function to verify not only that the last
      byte is POISON_END, but also that all preceding bytes are either
      POISON_BEFORE or POISON_AFTER bytes.
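      
      A minimal sketch of the strengthened scan; the poison byte values
      below are illustrative assumptions, not the kernel's exact
      definitions:
      
      	#define POISON_BEFORE	0x5a	/* assumed fill value */
      	#define POISON_AFTER	0x6b	/* assumed fill value */
      	#define POISON_END	0xa5	/* assumed end marker */
      
      	/* Return 0 if the poison pattern is intact, 1 on corruption. */
      	static int check_poison(const unsigned char *obj, int size)
      	{
      		int i;
      
      		for (i = 0; i < size - 1; i++)
      			if (obj[i] != POISON_BEFORE && obj[i] != POISON_AFTER)
      				return 1;	/* overwritten before the end marker */
      		return obj[size - 1] != POISON_END;
      	}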
  2. 02 Mar, 2003 1 commit
    • [PATCH] fix preempt-issues with smp_call_function() · a8dd6484
      Andrew Morton authored
      Patch from Thomas Schlichter <schlicht@uni-mannheim.de>
      
      Based on a patch from Dave Jones.
      
      It converts a large number of instances of:
      
      	smp_call_function(foo);
      	foo();
      
      into
      
      	on_each_cpu(foo);
      
      and in doing so fixes up the preempt-unsafeness of the first version.
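      
      A sketch of why this closes the hole: the open-coded pair can be
      preempted and migrated between the cross-call and the local foo(),
      so one CPU runs foo() twice and another not at all.  on_each_cpu()
      pins the caller first (the retry/wait arguments shown are an
      assumption about the era's smp_call_function() signature):
      
      	static inline int on_each_cpu(void (*func)(void *info), void *info)
      	{
      		int ret;
      
      		preempt_disable();	/* no migration between IPI and local call */
      		ret = smp_call_function(func, info, 1, 1);	/* retry, wait */
      		func(info);		/* run on the local CPU as well */
      		preempt_enable();
      		return ret;
      	}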
  3. 25 Feb, 2003 2 commits
  4. 22 Feb, 2003 1 commit
  5. 15 Feb, 2003 1 commit
  6. 06 Feb, 2003 1 commit
    • [PATCH] CPU Hotplug mm/slab.c CPU_UP_CANCELED fix · 4f1cb3ff
      Andrew Morton authored
      Patch from Manfred Spraul.
      
      Fixes a bug which was exposed by Zwane's hotplug CPU work.  The
      cache_cache.array pointer is initially given a temp bootstrap area, which is
      later converted over to the final value after the CPU is brought up.
      
      But if slab is enhanced to permit cancellation of a CPU bringup, this pointer
      ends up pointing at stale memory.  So reinitialise it by hand when
      kmem_cache_init() is run.
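      
      The fix amounts to one assignment early in kmem_cache_init() (a
      sketch; initarray_cache as the name of the static bootstrap area is
      an assumption):
      
      	/* Re-point the boot cpu's array at the static bootstrap storage,
      	 * undoing any stale value left by a cancelled CPU bringup. */
      	cache_cache.array[smp_processor_id()] = &initarray_cache.cache;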
  7. 04 Feb, 2003 1 commit
  8. 02 Feb, 2003 2 commits
    • [PATCH] slab IRQ fix · 43bb7a3a
      Andrew Morton authored
      Patch from Manfred Spraul <manfred@colorfullife.com>
      
      cache_alloc_refill() forgets to disable interrupts again on an error path.
      This exposes us to slab corruption and it makes slab debugging go BUG (it
      expects local irqs to be disabled).
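      
      The shape of the bug, sketched (function and variable names assumed
      from 2.5 slab.c): the refill path briefly re-enables interrupts
      around cache_grow(), and the error return must turn them off again:
      
      	local_irq_enable();
      	x = cache_grow(cachep, flags);
      	local_irq_disable();	/* the missing re-disable on the error path */
      	if (!x)
      		return NULL;	/* caller expects irqs off here */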
    • [PATCH] slab poison checking fix · 5a3446d8
      Andrew Morton authored
      Spotted by Andries Brouwer.  There's one place where slab is calling
      check_poison_obj() but not reporting on any detected failure.
      
      We used to go BUG() in there.  Convert it over to the kinder, gentler
      slab_error() regime.
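      
      The slab_error() regime looks roughly like this (a sketch; the exact
      format string is assumed):
      
      	static void slab_error(kmem_cache_t *cachep, char *msg)
      	{
      		printk(KERN_ERR "slab error in cache `%s': %s\n",
      		       cachep->name, msg);
      		dump_stack();	/* report and carry on, instead of BUG() */
      	}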
  9. 10 Jan, 2003 1 commit
  10. 05 Jan, 2003 1 commit
    • [PATCH] slab: redzoning cleanup · d06f1e21
      Andrew Morton authored
      slab redzoning errors are very hard to decipher.
      
      The patch adds a human-readable interpretation to each error and changes it
      to not go BUG() when an error is detected.
  11. 30 Dec, 2002 2 commits
  12. 21 Dec, 2002 1 commit
    • [PATCH] more informative slab poisoning · 4f781c84
      Andrew Morton authored
      slab poisons objects with 0x5a both when they are constructed and when
      they are freed.  So it is not possible to tell whether a deref of
      0x5a5a5a5a was a use-before-initialisation bug or a use-after-free bug.
      
      The patch changes it so that
      
      1) A deref of 0x5a5a5a5a means use-of-uninitialised-memory
      
      2) A deref of 0x6b6b6b6b means use-of-freed-memory.
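      
      Illustrative definitions matching that scheme (the macro names are
      assumptions):
      
      	#define POISON_INUSE	0x5a	/* written at construction: a deref
      					 * of 0x5a5a5a5a = use before init */
      	#define POISON_FREE	0x6b	/* written on free: a deref of
      					 * 0x6b6b6b6b = use after free */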
  13. 05 Dec, 2002 1 commit
  14. 16 Nov, 2002 1 commit
    • [PATCH] improved slab error diagnostics · db054df8
      Andrew Morton authored
      slab does various consistency checks during `cat /proc/slabinfo'
      processing.  If a check fails it stupidly goes BUG just before
      displaying the information required to diagnose the problem.
      
      Change it to not go BUG, but to emit some useful printks and continue.
      
      The patch also removes an uninteresting printk from the boot process.
  15. 08 Nov, 2002 2 commits
  16. 05 Nov, 2002 1 commit
    • [PATCH] fix slab allocator for non zero boot cpu · d737f84b
      Anton Blanchard authored
      The slab allocator doesn't initialise ->array for all cpus.  This means
      we fail to boot on a machine with boot cpu != 0. I was testing current
      2.5 BK.
      
      Luckily Rusty was at hand to explain the ins and outs of initialisers
      to me.
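      
      The fix is the kind of designated-range initialiser sketched below
      (field names assumed from the 2.5 slab code); a plain initialiser
      only filled slot 0, leaving every other possible boot cpu with a
      NULL array pointer:
      
      	static kmem_cache_t cache_cache = {
      		/* cover every possible boot cpu, not just cpu 0 */
      		.array = { [0 ... NR_CPUS - 1] = &initarray_cache.cache },
      	};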
  17. 01 Nov, 2002 1 commit
  18. 30 Oct, 2002 12 commits
    • [PATCH] slab: Use CPU notifiers · 4524ea04
      Andrew Morton authored
      - allocate memory for cpu buffers in cpu_up_prepare
      
      - start the timer in cpu_online
      
      - free the memory for cpu buffers in cpu_up_cancel.
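      
      A minimal sketch of that notifier (2.5-era hotplug API; the case
      bodies are placeholders):
      
      	static int cpuup_callback(struct notifier_block *nb,
      				  unsigned long action, void *hcpu)
      	{
      		long cpu = (long)hcpu;
      
      		switch (action) {
      		case CPU_UP_PREPARE:
      			/* allocate the per-cpu array caches for `cpu' */
      			break;
      		case CPU_ONLINE:
      			/* start the reap timer on `cpu' */
      			break;
      		case CPU_UP_CANCELED:
      			/* free whatever CPU_UP_PREPARE allocated */
      			break;
      		}
      		return NOTIFY_OK;
      	}
      
      	static struct notifier_block cpucache_notifier = {
      		.notifier_call = cpuup_callback,
      	};
      
      	/* registered early via register_cpu_notifier(&cpucache_notifier) */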
    • [PATCH] slab: additional code cleanup · b464df2e
      Andrew Morton authored
      From Manfred Spraul
      
      - remove all typedefs, except kmem_bufctl_t.  It's a redefinition of
        an int, i.e.  qualifies as tiny.
      
      - convert most macros to inline functions.
    • [PATCH] slab: Remove cache_chain_lock · 716b7ab1
      Andrew Morton authored
      Manfred added a new lock to protect the global list of slab caches.  We
      already have a semaphore for that, but he needs locking from timer
      context.
      
      So here we remove that lock and just do a down_trylock() on the
      existing semaphore.  If that fails give up - we'll try again next timer
      tick.
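      
      The pattern, sketched (names assumed from 2.5 slab.c):
      
      	static void reap_timer_fnc(unsigned long data)
      	{
      		/* timer context must not sleep: only try the semaphore */
      		if (!down_trylock(&cache_chain_sem)) {
      			/* walk the cache chain and reap stale objects */
      			up(&cache_chain_sem);
      		}
      		/* contended?  give up; the next tick will retry */
      	}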
    • [PATCH] slab: Rework the slab timer code to use add_timer_on · bf19f75e
      Andrew Morton authored
      Manfred had all this weird code to schedule a kernel thread onto a
      different CPU just so that we could bond a timer to that CPU.
      
      Convert it all to use the new add_timer_on().
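      
      Usage is simple (a sketch; REAP_INTERVAL is an assumed name):
      
      	init_timer(&reap_timer);
      	reap_timer.expires = jiffies + REAP_INTERVAL;
      	reap_timer.function = reap_timer_fnc;
      	add_timer_on(&reap_timer, cpu);	/* fires on `cpu', no thread games */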
    • [PATCH] slab: reap timers · fd1425d5
      Andrew Morton authored
      - add a reap timer that returns stale objects from the cpu arrays
      - use list_for_each instead of while loops
      - /proc/slabinfo layout change, for a new field about reaping.
      
      Implementation:
      slab contains 2 caches that hold objects which might be usable to the
      system:
      - the cpu arrays contain objects that other cpus could use
      - the slabs_free list contains freeable slabs, i.e. pages that someone
      else might want.
      
      The patch now keeps track of accesses to the cpu arrays and to the free
      list. If there were no recent activities in one of the caches, part of
      the cache is flushed.
      
      Unlike kernels earlier than 2.5.39, only a small part (~20%) is flushed
      each time; the older kernel would bounce between refill and drain
      heavily under memory pressure:
      
      - kmem_cache_alloc: notices that there are no objects in the cpu
              cache, loads 120 objects from the slab lists, returns one.
              [assuming batchcount=120]
      - kmem_cache_reap is called due to memory pressure, finds 119
              objects in the cpu array and returns them to the slab lists.
      - repeat.
      
      In addition, the length of the free list is limited based on the free
      list accesses: a fixed "1" limit hurts the large object caches.
      
      That's the last part for now, next is: [not yet written]
      - cleanup: BUG_ON instead of if() BUG
      - OOM handling for enable_cpucaches
      - remove the unconditional might_sleep() from
              cache_alloc_debugcheck_before, and make that DEBUG-dependent.
      - initial NUMA support, just to collect some stats:
              What percentage of the objects are freed on the wrong
              node? 0.1% or 20%?
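      
      The idle-flush heuristic above, sketched (field names and the
      one-fifth fraction are illustrative):
      
      	/* per-cache work done from the reap timer */
      	if (!ac->touched && ac->avail) {
      		int tofree = min((ac->limit + 4) / 5, ac->avail);	/* ~20% */
      
      		free_block(cachep, ac_entry(ac), tofree);
      		ac->avail -= tofree;
      	} else {
      		ac->touched = 0;	/* re-arm the idle detector */
      	}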
    • [PATCH] slab: uninline poisoning checks · 1aabbecc
      Andrew Morton authored
      remove inline from the cache poison checks: the functions are not
      performance critical.
    • [PATCH] slab: cleanups and speedups · cad9cd51
      Andrew Morton authored
      - enable the cpu array for all caches
      
      - remove the optimized implementations for quick list access - with
        cpu arrays in all caches, the list access is now rare.
      
      - make the cpu arrays mandatory, this removes 50% of the conditional
        branches from the hot path of kmem_cache_alloc [1]
      
      - poisoning for objects with constructors
      
      Patch got a bit longer...
      
      I forgot to mention this: head arrays mean that some pages can be
      blocked due to objects in the head arrays, and not returned to
      page_alloc.c.  The current kernel never flushes the head arrays, which
      might worsen the behaviour of low memory systems.  The hunk that
      flushes the arrays regularly comes next.
      
      Detailed changelog: [to be read side by side with the patch]
      
      * docu update
      
      * "growing" is not really needed: races between grow and shrink are
        handled by retrying.  [additionally, the current kernel never
        shrinks]
      
      * move the batchcount into the cpu array:
      	the old code contained a race during cpu cache tuning:
      		update batchcount [in cachep] before or after the IPI?
      	And NUMA will need it anyway.
      
      * bootstrap support: the cpu arrays are really mandatory, nothing
        works without them.  Thus a statically allocated cpu array is needed
        for starting the allocators.
      
      * move the full, partial & free lists into a separate structure, as a
        preparation for NUMA
      
      * structure reorganization: now the cpu arrays are the most important
        part, not the lists.
      
      * dead code elimination: remove "failures", nowhere read.
      
      * dead code elimination: remove "OPTIMIZE": not implemented.  The
        idea is to skip the virt_to_page lookup for caches with on-slab slab
        structures, and use (ptr&PAGE_MASK) instead.  The details are in
        Bonwick's paper.  Not fully implemented.
      
      * remove GROWN: kernel never shrinks a cache, thus grown is
        meaningless.
      
      * bootstrap: starting the slab allocator is now a 3 stage process:
      	- nothing works, use the statically allocated cpu arrays.
      	- the smallest kmalloc allocator works, use it to allocate
      		cpu arrays.
      	- all kmalloc allocators work, use the default cpu array size
      
      * register a cpu notifier callback, and allocate the needed head
        arrays if a new cpu arrives
      
      * always enable head arrays, even for DEBUG builds.  Poisoning and
        red-zoning now happens before an object is added to the arrays.
        Insert enable_all_cpucaches into cpucache_init, there is no need for
        a separate function.
      
      * modifications to the debug checks due to the earlier calls of the
        dtor for caches with poisoning enabled
      
      * poison+ctor is now supported
      
      * squeezing 3 objects into a cacheline is hopeless, the FIXME is not
        solvable and can be removed.
      
      * add additional debug tests: check_irq_off(), check_irq_on(),
        check_spinlock_acquired().
      
      * move do_ccupdate_local nearer to do_tune_cpucache.  Should have
        been part of -04-drain.
      
      * additional objects checks.  red-zoning is tricky: it's implemented
        by increasing the object size by 2*BYTES_PER_WORD.  Thus
        BYTES_PER_WORD must be added to objp before calling the destructor,
        constructor or before returning the object from alloc.  The poison
        functions add BYTES_PER_WORD internally.
      
      * create a flagcheck function, right now the tests are duplicated in
        cache_grow [always] and alloc_debugcheck_before [DEBUG only]
      
      * modify slab list updates: all allocs are now bulk allocs that try
        to get multiple objects at once, update the list pointers only at the
        end of a bulk alloc, not once per alloc.
      
      * might_sleep was moved into kmem_flagcheck.
      
      * major hotpath change (see the sketch after this list):
      	- cc always exists, no fallback
      	- cache_alloc_refill is called with disabled interrupts,
      	  and does everything to recover from an empty cpu array.
      	  Far shorter & simpler __cache_alloc [inlined in both
      	  kmalloc and kmem_cache_alloc]
      
      * __free_block, free_block, cache_flusharray: main implementation of
        returning objects to the lists.  no big changes, diff lost track.
      
      * new debug check: too early kmalloc or kmem_cache_alloc
      
      * slightly reduce the sizes of the cpu arrays: keep the size < a
        power of 2, including batchcount, avail and now limit, for optimal
        kmalloc memory efficiency.
      
      That's it.  I even found 2 bugs while reading: dtors and ctors for
      verify were called with wrong parameters, with RED_ZONE enabled, and
      some checks still assumed that POISON and ctor are incompatible.
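      
      A sketch of the simplified hot path from the "major hotpath change"
      item (names follow the 2.5 slab code; details assumed):
      
      	static inline void *__cache_alloc(kmem_cache_t *cachep, int flags)
      	{
      		unsigned long save_flags;
      		struct array_cache *ac;
      		void *objp;
      
      		local_irq_save(save_flags);
      		ac = ac_data(cachep);	/* cc always exists, no fallback */
      		if (likely(ac->avail)) {
      			ac->touched = 1;
      			objp = ac_entry(ac)[--ac->avail];
      		} else {
      			/* entered with irqs off; recovers from an empty array */
      			objp = cache_alloc_refill(cachep, flags);
      		}
      		local_irq_restore(save_flags);
      		return objp;
      	}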
    • [PATCH] slab: remove spaces from /proc identifiers · 5bbb9ea6
      Andrew Morton authored
      From Manfred Spraul
      
      remove the spaces from the names of the DMA caches: they make it
      impossible to tune the caches through /proc/slabinfo, and make parsing
      /proc/slabinfo difficult
    • [PATCH] slab: take the spinlock in the drain function. · fa652753
      Andrew Morton authored
      In 2.5, local_irq_disable() provides protection against
      smp_call_function() on all architectures.  (Or it will, not sure.  But
      davem says this is OK).
      
      So a spin_lock() within the smp_call_function() callback is now
      permitted, and we can remove/cleanup the workaround.
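      
      The now-legal pattern, sketched (names assumed from 2.5 slab.c):
      
      	/* runs on every cpu via smp_call_function() plus a local call */
      	static void do_drain(void *arg)
      	{
      		kmem_cache_t *cachep = arg;
      		struct array_cache *ac = ac_data(cachep);
      
      		spin_lock(&cachep->spinlock);	/* fine now: irqs are off here */
      		free_block(cachep, ac_entry(ac), ac->avail);
      		spin_unlock(&cachep->spinlock);
      		ac->avail = 0;
      	}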
    • [PATCH] slab: reduce internal fragmentation · 69e74939
      Andrew Morton authored
      From Manfred Spraul
      
      If an object is freed from a slab, then move the slab to the tail of
      the partial list - this should increase the probability that the other
      objects from the same page are freed, too, and that a page can be
      returned to gfp later.
      
      In other words: if we just freed an object from this page then make
      this page be the *last* page which is eligible for new allocations,
      on the assumption that other objects in that same page are about to
      be freed up as well.
      
      The cpu arrays are now always in front of the list, i.e.  cache hit
      rates should not matter.
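      
      The requeueing, sketched (list and field names follow the commit
      text; details assumed):
      
      	/* after freeing an object from slabp */
      	list_del(&slabp->list);
      	if (!slabp->inuse)	/* page is now empty */
      		list_add(&slabp->list, &cachep->slabs_free);
      	else			/* last choice for new allocations */
      		list_add_tail(&slabp->list, &cachep->slabs_partial);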
    • [PATCH] slab: enable the cpu arrays on uniprocessor · 23797198
      Andrew Morton authored
      From Manfred Spraul
      
      Always enable the cpu arrays, even on uniprocessor.
      
      They provide LIFO ordering, which should improve cache hit rates.  And
      the array allocator is slightly faster than the list operations.
    • [PATCH] slab: cleanup: rename static functions · 91767dfd
      Andrew Morton authored
      From Manfred Spraul
      
      remove kmem_ from all static functions that are only used in slab.c.
      The exception is kmem_cache_slabmgmt, which I've renamed to
      alloc_slabmgmt().
  19. 29 Sep, 2002 1 commit
    • [PATCH] kmem_cache_destroy fix · 3b5c86dd
      Andrew Morton authored
      Slab currently has a policy of buffering a single spare page per slab.
      We're putting that on the partially-full list, which confuses
      kmem_cache_destroy().
      
      So put it on cachep->slabs_free, which is where empty pages go.
  20. 25 Sep, 2002 2 commits
    • [PATCH] increase traffic on linux-kernel · 4f3e8109
      Andrew Morton authored
      [This has four scalps already.  Thomas Molina has agreed
       to track things as they are identified]
      
      Infrastructure to detect sleep-inside-spinlock bugs.  Really only
      useful if compiled with CONFIG_PREEMPT=y.  It prints out a whiny
      message and a stack backtrace if someone calls a function which might
      sleep from within an atomic region.
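      
      A sketch of the check's shape (assumed form; the real helper lives
      in the scheduler code):
      
      	void __might_sleep(char *file, int line)
      	{
      		if (in_atomic() || irqs_disabled()) {
      			printk(KERN_ERR "sleeping function called from "
      			       "invalid context at %s:%d\n", file, line);
      			dump_stack();
      		}
      	}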
      
      This patch generates a storm of output at boot, due to
      drivers/ide/ide-probe.c:init_irq() calling lots of things which it
      shouldn't under ide_lock.
      
      It'll find other bugs too.
    • [PATCH] slab reclaim balancing · b65bbded
      Andrew Morton authored
      A patch from Ed Tomlinson which improves the way in which the kernel
      reclaims slab objects.
      
      The theory is: a cached object's usefulness is measured in terms of the
      number of disk seeks which it saves.  Furthermore, we assume that one
      dentry or inode saves as many seeks as one pagecache page.
      
      So we reap slab objects at the same rate as we reclaim pages.  For each
      1% of reclaimed pagecache we reclaim 1% of slab.  (Actually, we _scan_
      1% of slab for each 1% of scanned pages).
      
      Furthermore we assume that one swapout costs twice as many seeks as one
      pagecache page, and twice as many seeks as one slab object.  So we
      double the pressure on slab when anonymous pages are being considered
      for eviction.
      
      The code works nicely, and smoothly.  Possibly it does not shrink slab
      hard enough, but that is now very easy to tune up and down.  It is just:
      
      	ratio *= 3;
      
      in shrink_caches().
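      
      The proportional idea, as a sketch (every name here is illustrative;
      the real logic sits in shrink_caches()):
      
      	/* scan slab in the same proportion as the pagecache was scanned */
      	unsigned long ratio = scanned_pages * 100 / total_pagecache_pages;
      
      	if (scanning_anon_pages)
      		ratio *= 2;	/* a swapout costs ~2x the seeks of a page */
      	shrink_slab_caches(ratio);	/* hypothetical helper */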
      
      Slab caches no longer hold onto completely empty pages.  Instead, pages
      are freed as soon as they have zero objects.  This is possibly a
      performance hit for slabs which have constructors, but it's doubtful.
      Most allocations after a batch of frees are satisfied from inside
      internally-fragmented pages and by the time slab gets back onto using
      the wholly-empty pages they'll be cache-cold.  slab would be better off
      going and requesting a new, cache-warm page and reconstructing the
      objects therein.  (Once we have the per-cpu hot-page allocator in
      place.  It's happening).
      
      As a consequence of the above, kmem_cache_shrink() is now unused.  No
      great loss there - the serialising effect of kmem_cache_shrink and its
      semaphore in front of page reclaim was measurably bad.
      
      Still todo:
      
      - batch up the shrinking so we don't call into prune_dcache and
        friends at high frequency asking for a tiny number of objects.
      
      - Maybe expose the shrink ratio via a tunable.
      
      - clean up slab.c
      
      - highmem page reclaim in prune_icache: highmem pages can pin
        inodes.
  21. 24 Sep, 2002 1 commit
    • [PATCH] remove preempt workaround in slab.c · 7f644d00
      Robert Love authored
      Before the irqs_disabled() check in preempt_schedule(), we worked around
      some locking issues in slab.c.  Now that we will never preempt with
      interrupts disabled, we can remove those and clean things up.
      
      This is courtesy of Manfred Spraul.
  22. 19 Sep, 2002 1 commit
  23. 17 Sep, 2002 1 commit
    • [PATCH] Add /proc/meminfo:Slab · 6b27052e
      Andrew Morton authored
      Display the total slab memory in /proc/meminfo.  Handy while we play
      with the slab pruning code.
      
      This info is also available via /proc/slabinfo, but I think this
      convenience is worth the extra few lines.
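      
      The new line's shape (a sketch; the format string and the K()
      pages-to-kilobytes macro are assumptions following the surrounding
      meminfo code):
      
      	len += sprintf(page + len, "Slab:         %8lu kB\n",
      		       K(total_slab_pages));	/* total_slab_pages is assumed */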
  24. 15 Sep, 2002 1 commit
    • [PATCH] various small cleanups · 16b38746
      Andrew Morton authored
      - Remove defunct active_list/inactive_list declarations (wli)
      
      - Update an obsolete comment (wli)
      
      - "mm/slab.c contains one leftover from the initial version with
        'unsigned short' bufctl entries.  The attached patch replaces '2'
        with the correct sizeof [which is now 4]" - Manfred Spraul
      
      - BUG checks for vfree/vunmap being called in interrupt context
        (because they take irq-unsafe spinlocks, I guess?) - davej
      
      - Simplify some coding in one_highpage_init() (Christoph Hellwig).
      16b38746