20 Sep, 2002 3 commits
  21 Sep, 2002 11 commits
  20 Sep, 2002 9 commits
  19 Sep, 2002 17 commits
    • Merge samba.org:/scratch/anton/linux-2.5 · 0dafdcbd
      Anton Blanchard authored
      into samba.org:/scratch/anton/linux-2.5_ppc64_new
    • Merge home.transmeta.com:/home/torvalds/v2.5/akpm · 2188a617
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
    • [PATCH] permit hugetlb pages to be allocated from highmem · c7ea169d
      Andrew Morton authored
      Patch from Rohit Seth: allow hugetlb pages to be allocated from the
      highmem zone.
    • [PATCH] reduced locking in release_pages() · 12f189a1
      Andrew Morton authored
      From Marcus Alanen <maalanen@ra.abo.fi>
      
      Don't retake the zone lock after spilling a batch of pages into the
      buddy.
      
      Instead, just clear local variable `zone' to indicate that no lock is
      held.
      
      This is actually a common case - whenever release_pages() is called
      with exactly 16 pages (truncate, page reclaim..) Marcus' patch will
      save a lock and an unlock.
      
      Also, remove some lock-avoidance heuristics in
      pagevec_deactivate_inactive(): the caller has already made these
      checks, and the chance of the check here actually doing anything useful
      is negligible.
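
      The pattern above can be sketched in userspace C (all names below are
      stand-ins, not the kernel API): the local that caches the locked zone
      doubles as a "lock held" flag, so clearing it after a spill means the
      final unlock is skipped and the lock is never blindly retaken.

```c
#include <assert.h>
#include <stddef.h>

struct zone { int lock_count; };          /* counts lock acquisitions */

static void zone_lock(struct zone *z)   { z->lock_count++; }
static void zone_unlock(struct zone *z) { /* paired with zone_lock() */ }

/* Release npages pages, spilling to the buddy every `batch' pages.
 * Returns how many times the zone lock was taken. */
static int release_pages_sketch(struct zone *z, int npages, int batch)
{
    struct zone *locked = NULL;           /* NULL: no lock currently held */

    for (int i = 0; i < npages; i++) {
        if (!locked) {
            zone_lock(z);
            locked = z;
        }
        /* ... move page i onto the local batch ... */
        if ((i + 1) % batch == 0) {       /* batch full: spill to buddy */
            zone_unlock(locked);
            locked = NULL;                /* don't retake the lock yet */
        }
    }
    if (locked)                           /* only unlock if still held */
        zone_unlock(locked);
    return z->lock_count;
}
```

      For exactly 16 pages with a 16-page batch this takes the lock once;
      the old code retook it after the spill only to drop it again at the end.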
    • [PATCH] misc fixes · e19941e9
      Andrew Morton authored
      - Spell Jeremy's name correctly.
      
      - Fix compile warning in raw.c
      
      - Do a waitqueue_active() test before waking klogd in printk.
      
        Not only is it negligibly faster, but the wake_up() in there causes
        deadlocks when you try to print debug info out from inside scheduler
        code.
      
        This patch gives a delightfully obscure way of avoiding the
        deadlock: kill off klogd.
      
      - Fix a couple of compile warnings in the mtrr code.
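
      A minimal userspace sketch of the check-before-wake pattern described
      above (the types are stand-ins for the kernel's waitqueue API): the
      wake_up() call, along with the locks it would take, is skipped entirely
      unless a waiter is actually sleeping on the queue.

```c
#include <assert.h>
#include <stdbool.h>

struct waitqueue { int waiters; int wakeups; };

static bool waitqueue_active(const struct waitqueue *wq)
{
    return wq->waiters != 0;
}

static void wake_up(struct waitqueue *wq)
{
    /* In the kernel this path takes scheduler-related locks, which is
     * what deadlocks when printk() is called from scheduler code. */
    wq->wakeups++;
}

static void emit_log_sketch(struct waitqueue *log_wait)
{
    /* ... append the message to the log buffer ... */
    if (waitqueue_active(log_wait))   /* cheap test, no locks taken */
        wake_up(log_wait);            /* only wake klogd if it sleeps */
}
```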
    • [PATCH] blk_init() cleanups · fc1be578
      Andrew Morton authored
      From Christoph Hellwig, acked by Jens.
      
      - remove some unneeded runtime initializers.
      
      - remove the explicit call to hd_init() - it already goes through
      module_init(), so we're currently running hd_init() twice.
    • [PATCH] hugetlbpages cleanup · a7d2851c
      Andrew Morton authored
      From Christoph Hellwig, acked by Rohit.
      
      - fix config.in description: we know we're on i386 and we also know
      that a feature can only be enabled if the hw supports it, the code
      alone is not enough
      
      - the sysctl is VM-related, so move it from /proc/sys/kernel to
      /proc/sys/vm
      
      - adopt the standard sysctl names
    • [PATCH] remove smp_lock.h inclusions from mm/* · 53f93a7a
      Andrew Morton authored
      From Christoph Hellwig.
      
      There are no lock_kernel() calls in mm/
    • [PATCH] fix mmap(MAP_LOCKED) · 859629c6
      Andrew Morton authored
      From Hubertus Franke.
      
      The MAP_LOCKED flag to mmap() currently does nothing.  Hubertus' patch
      fixes it so that the relevant mapping is locked into memory, if the
      caller has CAP_IPC_LOCK.
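
      A minimal usage sketch of the fixed behaviour (userspace, error
      handling reduced to a NULL return): with the patch applied,
      MAP_LOCKED actually pins the mapping, provided the caller has
      CAP_IPC_LOCK (or, on later kernels, fits within RLIMIT_MEMLOCK).

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>

/* Map one anonymous, locked region; returns NULL with errno set if the
 * caller lacks the locked-memory privilege or is over its limit. */
static void *map_locked_page(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
```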
    • [PATCH] fix suppression of page allocation failure warnings · d51832f3
      Andrew Morton authored
      Somebody somewhere is stomping on PF_NOWARN, and page allocation
      failure warnings are coming out of the wrong places.
      
      So change the handling of current->flags to be:
      
      int pf_flags = current->flags;
      
      current->flags |= PF_NOWARN;
      ...
      current->flags = pf_flags;
      
      which is a generally more robust approach.
    • [PATCH] readv/writev bounds checking fixes · d4872de3
      Andrew Morton authored
      - writev currently returns -EFAULT if _any_ of the segments has an
      invalid address.  We should only return -EFAULT if the first segment
      has a bad address.
      
      If some of the first segments have valid addresses we need to write
      them and return a partial result.
      
      - The current code only checks if the sum-of-lengths is negative.  If
      individual segments have a negative length but the result is positive
      we miss that.
      
      So rework the code to detect this, and to be immune to odd wrapping
      situations.
      
      As a bonus, we save one pass across the iovec.
      
      - ditto for readv.
      
      The check for "does any segment have a negative length" has already
      been performed in do_readv_writev(), but it's basically free here, and
      we need to do it for generic_file_read/write anyway.
      
      This all means that the iov_length() function is unsafe because of
      wrap/overflow issues.  It should only be used after the
      generic_file_read/write or do_readv_writev() checking has been
      performed.  Its callers have been reviewed and they are OK.
      
      The code now passes LTP testing and has been QA'd by Janet's team.
    • [PATCH] writev speedup · bd90a275
      Andrew Morton authored
      A patch from Hirokazu Takahashi to further speed up the new writev
      code.
      
      Instead of running ->prepare_write/->commit_write for each individual
      segment, we walk the segments between prepare and commit.  So
      potentially much larger amounts of data are passed to commit_write(),
      and prepare_write() is called much less often.
      
      Added bonus: the segment walk happens inside the kmap_atomic(), so we
      run kmap_atomic() once per page, not once per segment.
      
      We've demonstrated a speedup of over 3x.  This is writing 1024-segment
      iovecs where the individual segments have an average length of 24
      bytes, which is a favourable case for this patch.
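
      The effect of the segment walk can be sketched by counting commit
      calls (sizes and names here are illustrative, not the kernel code):
      segments are packed into the current page, and the commit happens
      once per page rather than once per segment.

```c
#include <assert.h>
#include <sys/uio.h>

#define SKETCH_PAGE_SIZE 4096

/* Returns how many commit_write()-style calls an iovec would cost when
 * segments are batched per page instead of committed one by one. */
static int commits_for_iovec(const struct iovec *iov, int nr_segs)
{
    size_t in_page = 0;     /* bytes already placed in the current page */
    int commits = 0;

    for (int i = 0; i < nr_segs; i++) {
        size_t left = iov[i].iov_len;
        while (left > 0) {
            size_t chunk = SKETCH_PAGE_SIZE - in_page;
            if (chunk > left)
                chunk = left;            /* segment ends mid-page */
            /* ... copy chunk bytes under one kmap_atomic() ... */
            left -= chunk;
            in_page += chunk;
            if (in_page == SKETCH_PAGE_SIZE) {
                commits++;               /* page full: commit once */
                in_page = 0;
            }
        }
    }
    if (in_page > 0)
        commits++;                       /* final partial page */
    return commits;
}
```

      For the benchmark case above (1024 segments of 24 bytes each) this is
      6 commits instead of 1024.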
    • [PATCH] swapout fix · 62a29ea1
      Andrew Morton authored
      Silly bug which was halving swapout bandwidth: we've taken a copy of
      page->mapping into a local convenience variable, but forgot to update
      that local after adding the page to swapcache.
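
      The bug pattern is easy to see in a reduced form (all types and names
      below are stand-ins): the convenience copy of page->mapping goes stale
      the moment the page is added to the swapcache, so the local must be
      re-read afterwards.

```c
#include <assert.h>
#include <stddef.h>

struct address_space { int is_swapper; };

static struct address_space file_space    = { 0 };
static struct address_space swapper_space = { 1 };

struct page { struct address_space *mapping; };

static void add_to_swap_cache(struct page *page)
{
    page->mapping = &swapper_space;   /* page now belongs to swapcache */
}

/* Returns the mapping that writeout would actually use. */
static struct address_space *writeout_mapping(struct page *page)
{
    struct address_space *mapping = page->mapping;  /* convenience copy */

    add_to_swap_cache(page);
    mapping = page->mapping;   /* the fix: refresh the stale local */
    return mapping;
}
```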
    • [PATCH] remove /proc/sys/vm/dirty_sync_thresh · da1eca60
      Andrew Morton authored
      This was designed to be a really stern throttling threshold: if dirty
      memory reaches this level then perform writeback and actually wait on
      it.
      
      It doesn't work, because memory dirtiers are already required to perform
      writeback if the amount of dirty AND writeback memory exceeds
      dirty_async_ratio.
      
      So kill it, and rely just on the request queues being appropriately
      scaled to the machine size (they are).
      
      This is basically what 2.4 does.
    • [PATCH] remove statm_pgd_range · 6fda85f2
      Andrew Morton authored
      Bill Irwin's patch to avoid having to walk pagetables while generating
      /proc/*/statm output.
      
      It can significantly overstate the size of various mappings because it
      assumes that all VMAs are fully populated.
      
      But spending 100% of one of my four CPUs running top(1) is a bug.
      
      Bill says this fixes a bug, too.  The `SIZE' parameter is supposed to
      display the amount of memory which the process would consume if it
      faulted everything in.  But "before it only showed instantiated
      3rd-level pagetables, so if something within a 4MB aligned range hadn't
      been faulted in it would slip past the old one".
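
      The replacement accounting can be sketched as a walk over the VMA list
      (simplified types, 4 KiB pages assumed): summing vm_end - vm_start
      reports what the process would consume if every mapping were fully
      faulted in, with no pagetable walk at all.

```c
#include <assert.h>
#include <stddef.h>

struct vma_sketch {
    unsigned long vm_start, vm_end;   /* page-aligned byte addresses */
    struct vma_sketch *vm_next;
};

/* SIZE in pages, computed from VMA spans rather than pagetables. */
static unsigned long statm_size_pages(const struct vma_sketch *vma)
{
    unsigned long bytes = 0;

    for (; vma != NULL; vma = vma->vm_next)
        bytes += vma->vm_end - vma->vm_start;
    return bytes / 4096;
}
```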
    • [PATCH] _alloc_pages cleanup · ccc98a67
      Andrew Morton authored
      Patch from Martin Bligh.  It should only affect machines using
      discontigmem.
      
      "This patch was originally from Andrea's tree (from SGI??), and has
      been tweaked since by both Christoph (who cleaned up all the code),
      and myself (who just hit it until it worked).
      
      It removes _alloc_pages, and adds all nodes to the zonelists
      directly, which also changes the fallback zone order to something more
      sensible ...  instead of: "foreach (node) { foreach (zone) }" we now
      do something more like "foreach (zone_type) { foreach (node) }"
      
      Christoph has a more recent version that's fancier and does a couple
      more cleanups, but it seems to have a bug in it that I can't track
      down easily, so I propose we do the simple thing for now, and take the
      rest of the cleanups when it works ...  it seems to build nicely on
      top of this separately to me.
      
      Tested on 16-way NUMA-Q with discontigmem + NUMA support."
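
      The ordering change can be sketched with a flat zonelist (the encoding
      below is purely illustrative): iterating zone types in the outer loop
      means allocation falls back across all nodes of one zone type before
      dropping to a weaker type.

```c
#include <assert.h>

#define SKETCH_ZONE_TYPES 3   /* e.g. HIGHMEM, NORMAL, DMA fallback order */

/* Fill `entries' in the new "foreach (zone_type) { foreach (node) }"
 * order; each entry is encoded as node * 10 + zone_type for easy
 * inspection.  Returns the number of entries written. */
static int build_zonelist(int entries[], int nr_nodes)
{
    int n = 0;

    for (int zt = 0; zt < SKETCH_ZONE_TYPES; zt++)      /* zone type outer */
        for (int node = 0; node < nr_nodes; node++)     /* nodes inner */
            entries[n++] = node * 10 + zt;
    return n;
}
```

      With two nodes the list reads node0/zone0, node1/zone0, node0/zone1,
      and so on, whereas the old per-node iteration exhausted every zone of
      node 0 before touching node 1.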