19 Sep, 2002 (15 commits)
    • Merge samba.org:/scratch/anton/linux-2.5
      into samba.org:/scratch/anton/linux-2.5_ppc64_new · 0dafdcbd
      Anton Blanchard authored
    • ia64: Fix zx1-platform support. · fa6ec6f7
      David Mosberger authored
    • Merge tiger.hpl.hp.com:/data1/bk/vanilla/linux-2.5
      into tiger.hpl.hp.com:/data1/bk/lia64/to-linus-2.5 · 3c82eb67
      David Mosberger authored
    • Merge home.transmeta.com:/home/torvalds/v2.5/akpm
      into home.transmeta.com:/home/torvalds/v2.5/linux · 2188a617
      Linus Torvalds authored
    • [PATCH] permit hugetlb pages to be allocated from highmem · c7ea169d
      Andrew Morton authored
      Patch from Rohit Seth: allow hugetlb pages to be allocated from the
      highmem zone.
    • [PATCH] reduced locking in release_pages() · 12f189a1
      Andrew Morton authored
      From Marcus Alanen <maalanen@ra.abo.fi>
      
      Don't retake the zone lock after spilling a batch of pages into the
      buddy.
      
      Instead, just clear local variable `zone' to indicate that no lock is
      held.
      
      This is actually a common case - whenever release_pages() is called
      with exactly 16 pages (truncate, page reclaim..) Marcus' patch will
      save a lock and an unlock.
      
      Also, remove some lock-avoidance heuristics in
      pagevec_deactivate_inactive(): the caller has already made these
      checks, and the chance of the check here actually doing anything useful
      is negligible.
    • [PATCH] misc fixes · e19941e9
      Andrew Morton authored
      - Spell Jeremy's name correctly.
      
      - Fix compile warning in raw.c
      
      - Do a waitqueue_active() test before waking klogd in printk.
      
        Not only is it negligibly faster, but the wake_up() in there causes
        deadlocks when you try to print debug info out from inside scheduler
        code.
      
        This patch gives a delightfully obscure way of avoiding the
        deadlock: kill off klogd.
      
      - Fix a couple of compile warnings in the mtrr code.
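The printk change above follows a common kernel pattern: check for sleepers before entering the wakeup path. A minimal userspace analogue (all names here are illustrative stubs, not the kernel's):

```c
static int log_waiters;   /* stand-in for the log_wait waitqueue */
static int wakeups;

static int waitqueue_active_stub(void) { return log_waiters > 0; }
static void wake_up_stub(void)         { wakeups++; }

/* Hedged sketch of the pattern: only call wake_up() when klogd is
 * actually sleeping.  In the kernel, the skipped wake_up() path takes
 * scheduler locks, which is what deadlocked printk-from-scheduler-code. */
static void wake_klogd_sketch(void)
{
    if (waitqueue_active_stub())
        wake_up_stub();
}
```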
    • [PATCH] blk_init() cleanups · fc1be578
      Andrew Morton authored
      From Christoph Hellwig, acked by Jens.
      
      - remove some unneeded runtime initializers.
      
      - remove the explicit call to hd_init() - it already goes through
      module_init(), so we're currently running hd_init() twice.
    • [PATCH] hugetlbpages cleanup · a7d2851c
      Andrew Morton authored
      From Christoph Hellwig, acked by Rohit.
      
      - fix config.in description: we know we're on i386, and we also know
        that the feature can only be enabled if the hardware supports it;
        the code alone is not enough
      
      - the sysctl is VM-related, so move it from /proc/sys/kernel to
        /proc/sys/vm

      - adapt to standard sysctl names
    • [PATCH] remove smp_lock.h inclusions from mm/* · 53f93a7a
      Andrew Morton authored
      From Christoph Hellwig.
      
      There are no lock_kernel() calls in mm/
    • [PATCH] fix mmap(MAP_LOCKED) · 859629c6
      Andrew Morton authored
      From Hubertus Franke.
      
      The MAP_LOCKED flag to mmap() currently does nothing.  Hubertus' patch
      fixes it so that the relevant mapping is locked into memory, if the
      caller has CAP_IPC_LOCK.
    • [PATCH] fix suppression of page allocation failure warnings · d51832f3
      Andrew Morton authored
      Somebody somewhere is stomping on PF_NOWARN, and page allocation
      failure warnings are coming out of the wrong places.
      
      So change the handling of current->flags to be:
      
          int pf_flags = current->flags;

          current->flags |= PF_NOWARN;
          ...
          current->flags = pf_flags;
      
      which is a generally more robust approach.
    • [PATCH] readv/writev bounds checking fixes · d4872de3
      Andrew Morton authored
      - writev currently returns -EFAULT if _any_ of the segments has an
      invalid address.  We should only return -EFAULT if the first segment
      has a bad address.
      
      If some of the first segments have valid addresses we need to write
      them and return a partial result.
      
      - The current code only checks if the sum-of-lengths is negative.  If
      individual segments have a negative length but the result is positive
      we miss that.
      
      So rework the code to detect this, and to be immune to odd wrapping
      situations.
      
      As a bonus, we save one pass across the iovec.
      
      - ditto for readv.
      
      The check for "does any segment have a negative length" has already
      been performed in do_readv_writev(), but it's basically free here, and
      we need to do it for generic_file_read/write anyway.
      
      This all means that the iov_length() function is unsafe because of
      wrap/overflow issues.  It should only be used after the
      generic_file_read/write or do_readv_writev() checking has been
      performed.  Its callers have been reviewed and they are OK.
      
      The code now passes LTP testing and has been QA'd by Janet's team.
    • [PATCH] writev speedup · bd90a275
      Andrew Morton authored
      A patch from Hirokazu Takahashi to further speed up the new writev
      code.
      
      Instead of running ->prepare_write/->commit_write for each individual
      segment, we walk the segments between prepare and commit.  So
      potentially much larger amounts of data are passed to commit_write(),
      and prepare_write() is called much less often.
      
      Added bonus: the segment walk happens inside the kmap_atomic(), so we
      run kmap_atomic() once per page, not once per segment.
      
      We've demonstrated a speedup of over 3x.  This is writing 1024-segment
      iovecs where the individual segments have an average length of 24
      bytes, which is a favourable case for this patch.
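The batching idea can be sketched in userspace. This is a hedged illustration with stub names (`prepare_write_stub`, `commit_write_stub`, `write_segments` are invented for the example): one prepare/commit pair per page, with the segment walk in between, instead of one pair per iovec segment. In the kernel, that walk also sits inside a single kmap_atomic() of the page.

```c
#include <stddef.h>
#include <string.h>
#include <sys/uio.h>

static int prepares, commits;
static void prepare_write_stub(void) { prepares++; }
static void commit_write_stub(void)  { commits++; }

/* Copy the iovec's data into dst, page_size bytes at a time, calling
 * the prepare/commit stubs once per page rather than once per segment. */
static size_t write_segments(char *dst, size_t page_size,
                             const struct iovec *iov, int nr_segs)
{
    size_t written = 0, seg_off = 0;
    int seg = 0;

    while (seg < nr_segs) {
        size_t space = page_size;

        prepare_write_stub();                /* once per page... */
        while (seg < nr_segs && space > 0) { /* ...walking many segments */
            size_t chunk = iov[seg].iov_len - seg_off;

            if (chunk > space)
                chunk = space;
            memcpy(dst + written,
                   (const char *)iov[seg].iov_base + seg_off, chunk);
            written += chunk;
            space -= chunk;
            seg_off += chunk;
            if (seg_off == iov[seg].iov_len) {
                seg++;                       /* segment exhausted, move on */
                seg_off = 0;
            }
        }
        commit_write_stub();
    }
    return written;
}
```

With many small segments (the 24-byte average quoted above), the number of commit calls tracks the page count, not the segment count, which is where the speedup comes from.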