1. 23 Sep, 2002 16 commits
    • Ingo Molnar's avatar
      [PATCH] de-xchg fork.c · 94eda096
      Ingo Molnar authored
      This fixes all xchg()'s and a preemption bug.
      94eda096
    • Ingo Molnar's avatar
      [PATCH] pidhash cleanups, tgid-2.5.38-F3 · 817fdd72
      Ingo Molnar authored
      This does the following things:
      
       - removes the ->thread_group list and uses a new PIDTYPE_TGID pid class
         to handle thread groups. This cleans up lots of code in signal.c and
         elsewhere.
      
       - fixes sys_execve() if a non-leader thread calls it. (2.5.38 crashed in
         this case.)
      
       - renames list_for_each_noprefetch to __list_for_each.
      
       - cleans up delayed-leader parent notification.
      
       - introduces link_pid() to optimize PIDTYPE_TGID installation in the
         thread-group case.
      
      I've tested the patch with a number of threaded and non-threaded
      workloads, and it works just fine. Compiles & boots on UP and SMP x86.
      
       The session/pgrp bugs reported to lkml are probably still open; they
       are next on my todo list. Now that we have a clean pidhash
       architecture they should be easier to fix.
      817fdd72
    • Linus Torvalds's avatar
      Terminate a failed IO properly · 76417366
      Linus Torvalds authored
      76417366
    • Linus Torvalds's avatar
      Merge http://linux-isdn.bkbits.net/linux-2.5.isdn · 46796862
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
      46796862
    • Kai Germaschewski's avatar
      ISDN: Fix build when CONFIG_ISDN_TTY_FAX is not set · 767bf8a2
      Kai Germaschewski authored
      T30_s * is part of a union, so the typedef needs to exist even when
      CONFIG_ISDN_TTY_FAX is not set.
      767bf8a2
    • Linus Torvalds's avatar
      Merge http://linux-isdn.bkbits.net/linux-2.5.make · c06fd892
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
      c06fd892
    • Kai Germaschewski's avatar
      kbuild: Convert missed L_TARGET references · a8c7db20
      Kai Germaschewski authored
      When converting all L_TARGETs to lib.a, I missed these instances.
      a8c7db20
    • Peter Rival's avatar
      [PATCH] Compile fixes for alpha arch · 41447041
      Peter Rival authored
      Update alpha port to work with new nanosecond xtime, and the in_atomic()
      requirements.
      41447041
    • Linus Torvalds's avatar
      Merge bk://thebsh.namesys.com/bk/reiser3-linux-2.5 · 59e8b32c
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
      59e8b32c
    • Mikael Pettersson's avatar
      [PATCH] fix UP_APIC linkage problem in 2.5.3[78] · cb45d949
      Mikael Pettersson authored
      The problem is that the local APIC code references stuff in
      mpparse, but 2.5.37 changed arch/i386/kernel/Makefile to only
      compile mpparse for SMP.
      
      This patch works around this by enforcing CONFIG_X86_MPPARSE
      for all LOCAL_APIC-enabled configs.
      cb45d949
    • Jens Axboe's avatar
      [PATCH] bio_get_nr_vecs · 63b9d36d
      Jens Axboe authored
      Add bio_get_nr_vecs(). It returns an approximate number of pages that
      can be added to a block device. It's just a ballpark number, but I think
      this is quite fine for the type of thing it is needed for: mpage etc
      need to know an approx size of a bio that they need to allocate. It
       would be silly to continuously allocate 64-page sized bio_vec entries if
      the target cannot do more than 8, for example.
      63b9d36d
    • Jens Axboe's avatar
      [PATCH] pdc4030 · 58c1b542
      Jens Axboe authored
      make pdc4030 work
      58c1b542
    • Jens Axboe's avatar
      [PATCH] trm compile · 6c99eec3
      Jens Axboe authored
       Bad merge from 2.4.20-pre-ac: ide_build_dmatable() does not need a
       data direction argument in 2.5 (it's implicit in the request).
      6c99eec3
    • Tim Schmielau's avatar
      76dc17b4
    • Ivan Kokshaysky's avatar
      [PATCH] Re: 2.5.36 IDE fixes · 7d663f71
      Ivan Kokshaysky authored
      I'm terribly sorry - I've sent you the wrong diff, it was
      some intermediate variant. Actually it added extra breakage to
      ide_hwif_configure().
      
      Desired behavior was:
      
      if ctl == base == 0, the device is in "true legacy" mode (as per PCI
      spec); use values from the base address registers otherwise.
      7d663f71
    • Jens Axboe's avatar
      [PATCH] more bio updates · ef869838
      Jens Axboe authored
      cleanup end_that_request_first() end_io handling, and fix bug where
      partial completes didn't get accounted right wrt blk_recalc_rq_sectors()
      ef869838
  2. 22 Sep, 2002 24 commits
    • Kai Germaschewski's avatar
      Merge zephyr:src/kernel/v2.5/linux-2.5.isdn · 53fcb937
      Kai Germaschewski authored
      into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.isdn
      53fcb937
    • Kai Germaschewski's avatar
      ISDN: Lock list of phone numbers appropriately · cd2d00c6
      Kai Germaschewski authored
      It was (only partially) protected by cli() before, which we want to
      get rid of.
      cd2d00c6
    • Kai Germaschewski's avatar
      ISDN: Use <linux/list.h> for list of phone numbers · 51430a3c
      Kai Germaschewski authored
      Simplifies the code which was previously using an open coded
      singly linked list.
      
      Also, deleting a phone number during dial-out could easily oops
      the kernel before this patch.
      51430a3c
    • Kai Germaschewski's avatar
      ISDN: ISDN_GLOBAL_STOPPED cleanup · 7e22da6a
      Kai Germaschewski authored
      ISDN_GLOBAL_STOPPED is a way to globally stop the system
      from dialing out / accepting incoming calls. Instead of
      spreading checks all over the place, just catch dial
      commands / incoming call indications in one place.
      
      Also, kill isdn_net_phone typedef and clean up affected code.
      7e22da6a
    • Kai Germaschewski's avatar
      ISDN: Kill isdn_net_autohup() · 2a002db2
      Kai Germaschewski authored
      It's not used for the timeout controlled hangup anymore, only to
      hangup depending on the dialmode, which we handle directly now.
      2a002db2
    • Kai Germaschewski's avatar
      ISDN: PPP cleanups · 79deb1f1
      Kai Germaschewski authored
      o PPP_IPX is defined in a header these days
      o isdn_net_hangup takes an isdn_net_local *, simplifying
        code a bit.
      79deb1f1
    • Kai Germaschewski's avatar
      Merge tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.isdn · 967ee5ae
      Kai Germaschewski authored
      into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.make
      967ee5ae
    • Linus Torvalds's avatar
      Merge master.kernel.org:/home/davem/BK/sparc-2.5 · f20bf018
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
      f20bf018
    • Linus Torvalds's avatar
      Merge master.kernel.org:/home/davem/BK/net-2.5 · e7144e64
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
      e7144e64
    • Andrew Morton's avatar
      [PATCH] low-latency page reclaim · 407ee6c8
      Andrew Morton authored
      Convert the VM to not wait on other people's dirty data.
      
       - If we find a dirty page and its queue is not congested, do some writeback.
      
       - If we find a dirty page and its queue _is_ congested then just
         refile the page.
      
       - If we find a PageWriteback page then just refile the page.
      
       - There is additional throttling for write(2) callers.  Within
         generic_file_write(), record their backing queue in ->current.
          Within page reclaim, if this task encounters a page which is dirty
          or under writeback on this queue, block on it.  This gives some more
         writer throttling and reduces the page refiling frequency.
      
      It's somewhat CPU expensive - under really heavy load we only get a 50%
      reclaim rate in pages coming off the tail of the LRU.  This can be
      fixed by splitting the inactive list into reclaimable and
      non-reclaimable lists.  But the CPU load isn't too bad, and latency is
      much, much more important in these situations.
      
      Example: with `mem=512m', running 4 instances of `dbench 100', 2.5.34
      took 35 minutes to compile a kernel.  With this patch, it took three
      minutes, 45 seconds.
      
      I haven't done swapcache or MAP_SHARED pages yet.  If there's tons of
      dirty swapcache or mmap data around we still stall heavily in page
      reclaim.  That's less important.
      
      This patch also has a tweak for swapless machines: don't even bother
      bringing anon pages onto the inactive list if there is no swap online.
      407ee6c8
    • Andrew Morton's avatar
      [PATCH] use the congestion APIs in pdflush · c9b22619
      Andrew Morton authored
      The key concept here is that pdflush does not block on request queues
      any more.  Instead, it circulates across the queues, keeping any
      non-congested queues full of write data.  When all queues are full,
      pdflush takes a nap, to be woken when *any* queue exits write
      congestion.
      
      This code can keep sixty spindles saturated - we've never been able to
      do that before.
      
       - Add the `nonblocking' flag to struct writeback_control, and teach
         the writeback paths to honour it.
      
       - Add the `encountered_congestion' flag to struct writeback_control
         and teach the writeback paths to set it.
      
      So as soon as a mapping's backing_dev_info indicates that it is getting
      congested, bale out of writeback.  And don't even start writeback
      against filesystems whose queues are congested.
      
       - Convert pdflush's background_writeback() function to use
         nonblocking writeback.
      
      This way, a single pdflush thread will circulate around all the
      dirty queues, keeping them filled.
      
        - Convert the pdflush `kupdate' function to do the same thing.
      
      This solves the problem of pdflush thread pool exhaustion.
      
      It solves the problem of pdflush startup latency.
      
      It solves the (minor) problem wherein `kupdate' writeback only writes
      back a single disk at a time (it was getting blocked on each queue in
      turn).
      
      It probably means that we only ever need a single pdflush thread.
      c9b22619
    • Andrew Morton's avatar
      [PATCH] use the queue congestion API in ext2_preread_inode() · f3332384
      Andrew Morton authored
      Use the new queue congestion detector in ext2_preread_inode().  Don't
      try the speculative read if the read queue is congested.
      
      Also, don't try it if the disk is write-congested.  Presumably it is
      more important to get the dirty memory cleaned out.
      f3332384
    • Andrew Morton's avatar
      [PATCH] infrastructure for monitoring queue congestion state · 4cef1b04
      Andrew Morton authored
      The patch provides a means for the VM to be able to determine whether a
      request queue is in a "congested" state.  If it is congested, then a
      write to (or read from) the queue may cause blockage in
      get_request_wait().
      
      So the VM can do:
      
      	if (!bdi_write_congested(page->mapping->backing_dev_info))
      		writepage(page);
      
      This is not exact.  The code assumes that if the request queue still
      has 1/4 of its capacity (queue_nr_requests) available then a request
      will be non-blocking.  There is a small chance that another CPU could
      zoom in and consume those requests.  But on the rare occasions where
       that may happen the result will merely be some unexpected latency -
      it's not worth doing anything elaborate to prevent this.
      
      The patch decreases the size of `batch_requests'.  batch_requests is
      positively harmful - when a "heavy" writer and a "light" writer are
      both writing to the same queue, batch_requests provides a means for the
      heavy writer to massively stall the light writer.  Instead of waiting
      for one or two requests to come free, the light writer has to wait for
      32 requests to complete.
      
      Plus batch_requests generally makes things harder to tune, understand
      and predict.  I wanted to kill it altogether, but Jens says that it is
      important for some hardware - it allows decent size requests to be
      submitted.
      
      The VM changes which go along with this code cause batch_requests to be
      not so painful anyway - the only processes which sleep in
      get_request_wait() are the ones which we elect, by design, to wait in
      there - typically heavy writers.
      
      
      The patch changes the meaning of `queue_nr_requests'.  It used to mean
      "total number of requests per queue".  Half of these are for reads, and
      half are for writes.  This always confused the heck out of me, and the
      code needs to divide queue_nr_requests by two all over the place.
      
      So queue_nr_requests now means "the number of write requests per queue"
      and "the number of read requests per queue".  ie: I halved it.
      
      Also, queue_nr_requests was converted to static scope.  Nothing else
      uses it.
      
      
      The accuracy of bdi_read_congested() and bdi_write_congested() depends
      upon the accuracy of mapping->backing_dev_info.  With complex block
      stacking arrangements it is possible that ->backing_dev_info is
      pointing at the wrong queue.  I don't know.
      
      But the cost of getting this wrong is merely latency, and if it is a
      problem we can fix it up in the block layer, by getting stacking
      devices to communicate their congestion state upwards in some manner.
      4cef1b04
    • Andrew Morton's avatar
      [PATCH] don't hold mapping->private_lock while marking a page dirty · b5742733
      Andrew Morton authored
      __set_page_dirty_buffers() is calling __mark_inode_dirty under
      mapping->private_lock.
      
      We don't need to hold ->private_lock across that call.  It's only there
      to pin page->buffers.
      
       This simplifies the VM locking hierarchy.
      b5742733
    • Andrew Morton's avatar
      [PATCH] fix ext3 in data=writeback mode · c8b254cc
      Andrew Morton authored
       When I converted ext3 to use direct-to-BIO writeback for
       data=writeback mode I forgot that we need to hold a transaction open on
       behalf of MAP_SHARED pages.  The filesystem is BUGging in get_block()
       because there is no transaction open.
      
      So let's forget that idea for now and send data=writeback mode back to
      ext3_writepage.
      c8b254cc
    • David S. Miller's avatar
      Merge nuts.ninka.net:/home/davem/src/BK/sparcwork-2.5 · 2d35bd3f
      David S. Miller authored
      into nuts.ninka.net:/home/davem/src/BK/sparc-2.5
      2d35bd3f
    • David S. Miller's avatar
      Merge nuts.ninka.net:/home/davem/src/BK/network-2.5 · da29f6a8
      David S. Miller authored
      into nuts.ninka.net:/home/davem/src/BK/net-2.5
      da29f6a8
    • David S. Miller's avatar
      Merge master.kernel.org:/home/acme/BK/llc-2.5 · e1ec2e00
      David S. Miller authored
      into nuts.ninka.net:/home/davem/src/BK/net-2.5
      e1ec2e00
    • Arnaldo Carvalho de Melo's avatar
      [LLC] move reason to the {station,sap,conn}_ev structs · 1502caff
      Arnaldo Carvalho de Melo authored
      Slowly killing the ugly struct forest.
      1502caff
    • Arnaldo Carvalho de Melo's avatar
      [LLC] use the core lists to get info for /proc/net/llc · 5d8c0602
      Arnaldo Carvalho de Melo authored
      With this llc_ui_sockets is almost not needed anymore, next
      changesets will deal with the dataunit/xid/test primitives, that
      are still using it.
      5d8c0602
    • Kai Germaschewski's avatar
      050aa25b
    • Kai Germaschewski's avatar
      3c6c1425