13 Mar, 2004: 21 commits
  12 Mar, 2004: 19 commits
    • Merge bk://gkernel.bkbits.net/libata-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux · a8b828f4
      Linus Torvalds authored
    • Merge redhat.com:/spare/repo/linux-2.5 into redhat.com:/spare/repo/libata-2.5 · 3f9d4e0f
      Jeff Garzik authored
    • Merge bk://kernel.bkbits.net/jgarzik/netconsole-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux · 2d0512a4
      Linus Torvalds authored
    • Merge redhat.com:/spare/repo/netdev-2.6/netpoll into redhat.com:/spare/repo/netconsole-2.5 · 9277cf69
      Jeff Garzik authored
    • Add Promise SX8 (carmel) block driver. · 4061c061
      Jeff Garzik authored
    • Merge bk://gkernel.bkbits.net/prism54-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux · 4d92fbee
      Linus Torvalds authored
    • 0c1e7e8e
      Jeff Garzik authored
    • [wireless] Add new Prism54 wireless driver. · 8eae4cbf
      Jeff Garzik authored
    • Merge bk://linux-scsi.bkbits.net/scsi-for-linus-2.6 into ppc970.osdl.org:/home/torvalds/v2.5/linux · aba7eead
      Linus Torvalds authored
    • Merge http://lia64.bkbits.net/to-linus-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux · 60059a51
      Linus Torvalds authored
    • Revert attribute_used changes in module.h. They were wrong. · dede844e
      Linus Torvalds authored
      Cset exclude: akpm@osdl.org|ChangeSet|20040312161945|47751
    • [PATCH] slab: avoid higher-order allocations · 29d18b52
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      At present, slab uses order-2 allocations for the size-2048 cache.  Of
      course, this can affect networking quite seriously.
      
      The patch ensures that slab will never use more than a 1-order allocation
      for objects which have a size of less than 2*PAGE_SIZE.
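      The constraint described above can be sketched in miniature.  This is an
      illustrative model, not the actual mm/slab.c code: the "at least 8 objects
      per slab" sizing heuristic, the PAGE_SIZE value, and the slab_order() name
      are assumptions for the example; only the "never above order 1 for objects
      smaller than 2*PAGE_SIZE" rule comes from the patch description.

      ```c
      #include <assert.h>

      /* Illustrative values, not the kernel's configuration. */
      #define PAGE_SIZE 4096UL
      #define MAX_GFP_ORDER 5

      static int slab_order(unsigned long obj_size)
      {
          int order;

          /* Naive sizing: grow the slab until it holds at least 8 objects,
           * to keep per-object waste low.  For a 2048-byte object this
           * picks order 2, which is exactly what the patch wants to avoid. */
          for (order = 0; order < MAX_GFP_ORDER; order++)
              if ((PAGE_SIZE << order) / obj_size >= 8)
                  break;

          /* The patch's rule: objects smaller than 2*PAGE_SIZE must never
           * use more than an order-1 (two-page) allocation. */
          if (obj_size < 2 * PAGE_SIZE && order > 1)
              order = 1;

          return order;
      }
      ```

      With this model, a size-2048 object drops from an order-2 slab to order-1,
      while objects of 2*PAGE_SIZE and larger are still allowed to grow.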
    • [PATCH] vmscan: add lru_to_page() helper · 349055d0
      Andrew Morton authored
      From: Nick Piggin <piggin@cyberone.com.au>
      
      Add a little helper macro for a common list extraction operation in vmscan.c
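      The shape of such a helper can be shown in a self-contained userspace
      sketch.  The struct layouts here are minimal stand-ins rather than the
      kernel's definitions; the macro itself follows the common kernel pattern
      of taking the entry at the tail (->prev) of a list.

      ```c
      #include <stddef.h>

      /* Minimal doubly-linked list node, modeled on the kernel's list_head. */
      struct list_head {
          struct list_head *prev, *next;
      };

      /* list_entry(): recover the enclosing struct from an embedded node,
       * as the kernel's container_of-based macro does. */
      #define list_entry(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

      /* Stand-in for struct page; only the lru linkage matters here. */
      struct page {
          unsigned long flags;
          struct list_head lru;   /* links the page into an LRU list */
      };

      /* The helper: pull the page at the tail of an LRU list, the common
       * extraction operation in vmscan.c scan loops. */
      #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
      ```

      Replacing the open-coded `list_entry(zone->inactive_list.prev, ...)`
      pattern with one named macro makes the scan loops read as intent.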
    • [PATCH] vm: balance inactive zone refill rates · fb5b4abe
      Andrew Morton authored
      The current refill logic in refill_inactive_zone() takes an arbitrarily large
      number of pages and chops it down to SWAP_CLUSTER_MAX*4, regardless of the
      size of the zone.
      
      This has the effect of reducing the amount of refilling of large zones
      proportionately much more than of small zones.
      
      We made this change in May 2003 and I'm damned if I remember why.  Let's put
      it back so we don't truncate the refill count, and see what happens.
    • [PATCH] fix vm-batch-inactive-scanning.patch · 07a25779
      Andrew Morton authored
      - prevent nr_scan_inactive from going negative
      
      - compare `count' with SWAP_CLUSTER_MAX, not `max_scan'
      
      - Use ">= SWAP_CLUSTER_MAX", not "> SWAP_CLUSTER_MAX".
    • [PATCH] vmscan: batch up inactive list scanning work · ceb37d32
      Andrew Morton authored
      From: Nick Piggin <piggin@cyberone.com.au>
      
      Use a "refill_counter" for inactive list scanning, similar to the one used
      for active list scanning.  This batches up scanning now that we precisely
      balance ratios, and don't round up the amount to be done.
      
      No observed benefits, but I imagine it would lower the acquisition
      frequency of the lru locks in some cases, and make codepaths more efficient
      in general due to cache niceness.
    • [PATCH] vmscan: less throttling of page allocators and kswapd · 085b4897
      Andrew Morton authored
      This is just a random unsubstantiated tuning tweak: don't immediately
      throttle page allocators and kswapd when the going is getting heavier: scan a
      bit more of the LRU before throttling.
    • [PATCH] fix the kswapd zone scanning algorithm · ffa0fb78
      Andrew Morton authored
      This removes a vestige of the old algorithm.  We don't want to skip zones if
      all_zones_ok is true: we've already precalculated which zones need scanning
      and this just stops us from ever performing kswapd reclaim from the DMA zone.
    • [PATCH] kswapd: fix lumpy page reclaim · 519ab68b
      Andrew Morton authored
      As kswapd now scans zones in the highmem->normal->dma direction it can
      get into competition with the page allocator: kswapd keeps trying to free
      pages from highmem, then moves on to lowmem.  By the time kswapd has done
      its proportional scanning in lowmem, someone has come in and allocated a few
      pages from highmem.  So kswapd goes back and frees some highmem, then some
      lowmem again.  But nobody has allocated any lowmem yet, so we keep on
      scanning lowmem in response to highmem page allocations.
      
      With a simple `dd' on a 1G box we get:
      
       r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy wa id
       0  3      0  59340   4628 922348    0    0     4 28188 1072   808  0 10 46 44
       0  3      0  29932   4660 951760    0    0     0 30752 1078   441  1  6 30 64
       0  3      0  57568   4556 924052    0    0     0 30748 1075   478  0  8 43 49
       0  3      0  29664   4584 952176    0    0     0 30752 1075   472  0  6 34 60
       0  3      0   5304   4620 976280    0    0     4 40484 1073   456  1  7 52 41
       0  3      0 104856   4508 877112    0    0     0 18452 1074    97  0  7 67 26
       0  3      0  70768   4540 911488    0    0     0 35876 1078   746  0  7 34 59
       1  2      0  42544   4568 939680    0    0     0 21524 1073   556  0  5 43 51
       0  3      0   5520   4608 976428    0    0     4 37924 1076   836  0  7 41 51
       0  2      0   4848   4632 976812    0    0    32 12308 1092    94  0  1 33 66
      
      Simple fix: go back to scanning the zones in the dma->normal->highmem
      direction so we meet the page allocator in the middle somewhere.
      
       r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy wa id
       1  3      0   5152   3468 976548    0    0     4 37924 1071   650  0  8 64 28
       1  2      0   4888   3496 976588    0    0     0 23576 1075   726  0  6 66 27
       0  3      0   5336   3532 976348    0    0     0 31264 1072   708  0  8 60 32
       0  3      0   6168   3560 975504    0    0     0 40992 1072   683  0  6 63 31
       0  3      0   4560   3580 976844    0    0     0 18448 1073   233  0  4 59 37
       0  3      0   5840   3624 975712    0    0     4 26660 1072   800  1  8 46 45
       0  3      0   4816   3648 976640    0    0     0 40992 1073   526  0  6 47 47
       0  3      0   5456   3672 976072    0    0     0 19984 1070   320  0  5 60 35
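      The fix above amounts to flipping a loop direction.  A toy sketch,
      assuming illustrative zone indices (the names mirror the kernel's, the
      values and the kswapd_scan_order() function are made up for the example):

      ```c
      /* Illustrative zone numbering; values assumed for the example. */
      enum { ZONE_DMA = 0, ZONE_NORMAL = 1, ZONE_HIGHMEM = 2, MAX_NR_ZONES = 3 };

      /* Record the visit order the fix describes: ascending, so kswapd scans
       * dma -> normal -> highmem and meets the page allocator in the middle
       * instead of re-scanning lowmem after every highmem allocation. */
      static void kswapd_scan_order(int visit[MAX_NR_ZONES])
      {
          int n = 0;
          for (int z = ZONE_DMA; z < MAX_NR_ZONES; z++)
              visit[n++] = z;   /* lowest zone first */
      }
      ```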