1. 12 Jan, 2003 6 commits
  2. 11 Jan, 2003 26 commits
    • Merge http://linux-isdn.bkbits.net/linux-2.5.isdn · 811b6a3b
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
    • Merge tp1.ruhr-uni-bochum.de:/scratch/kai/kernel/v2.5/linux-2.5 · c29fba8a
      Kai Germaschewski authored
      into tp1.ruhr-uni-bochum.de:/scratch/kai/kernel/v2.5/linux-2.5.isdn
    • ISDN: remove kernel 2.0 code · 11aeb65a
      Kai Germaschewski authored
      From: Adrian Bunk <bunk@fs.tum.de>
      
      The patch below removes #if'd kernel 2.0 code from
      drivers/isdn/divert/divert_init.c.
    • ISDN: isdn-tty driver not HZ aware · 4e242203
      Kai Germaschewski authored
      From: Christian Borntraeger <linux@borntraeger.net>
        
      This patch makes isdn_tty HZ aware.
      The first change converts a hard-coded 3000 jiffies (now only 3 seconds) into
      the 30 seconds the comment asks for.
      I don't know whether the second change (schedule_timeout(50);) has to be half a
      second, but that was the value used in 2.4.
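
      A minimal sketch of the pattern (illustrative only; the helper name is
      hypothetical and this is not the actual isdn_tty code):

      #include <linux/param.h>        /* HZ */
      #include <linux/sched.h>        /* set_current_state(), schedule_timeout() */

      /* Express delays in units of HZ so they mean the same wall-clock
       * time whatever HZ is configured to. */
      static void example_hz_aware_waits(void)
      {
              /* was: a literal 3000 jiffies -- 30 s at HZ=100, 3 s at HZ=1000 */
              set_current_state(TASK_INTERRUPTIBLE);
              schedule_timeout(30 * HZ);              /* 30 seconds at any HZ */

              /* was: schedule_timeout(50) -- half a second only when HZ=100 */
              set_current_state(TASK_INTERRUPTIBLE);
              schedule_timeout(HZ / 2);               /* half a second at any HZ */
      }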
    • ISDN/HiSax: Add missing __devexit_p() · 70028644
      Kai Germaschewski authored
    • ISDN/HiSax: Clean up the gazel subdriver · 84cadf5c
      Kai Germaschewski authored
      Instead of having "switch (subtype)" in just about every function,
      use separate functions and invoke the right one through
      the now-existing struct card_ops infrastructure.
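
      As a hedged illustration of the pattern (hypothetical names, not the
      actual HiSax definitions):

      struct card_state;                              /* stands in for the real card state */

      struct card_ops {                               /* per-variant method table */
              void (*reset)(struct card_state *cs);
              void (*release)(struct card_state *cs);
      };

      struct card_state {
              const struct card_ops *card_ops;        /* chosen once at probe time */
      };

      static void gazel_isa_reset(struct card_state *cs) { /* ISA-specific reset */ }
      static void gazel_pci_reset(struct card_state *cs) { /* PCI-specific reset */ }

      static const struct card_ops gazel_isa_ops = { .reset = gazel_isa_reset };
      static const struct card_ops gazel_pci_ops = { .reset = gazel_pci_reset };

      /* Callers no longer need to switch on the subtype: */
      static void reset_card(struct card_state *cs)
      {
              cs->card_ops->reset(cs);
      }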
    • ISDN/HiSax: Share IPAC IRQ handler · 8686ec19
      Kai Germaschewski authored
      All IRQ handlers for IPAC-based cards were basically the same (not
      a big surprise, since the chip is the same), so we can share
      the IRQ handler.
    • ISDN/HiSax: Generate D/B channel access functions for IPAC · b59b6557
      Kai Germaschewski authored
      IPAC is basically a combined HSCX/ISAC chip, so we can generate
      the D- and B-channel access functions knowing how to access the IPAC.
        
      For performance reasons, this happens in a macro.
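
      A hedged sketch of the macro approach (hypothetical names; the register
      offsets are only indicative of the usual IPAC layout, and this is not
      the actual HiSax macro):

      struct card_state { unsigned long base; };

      /* A driver first provides its raw IPAC register access ... */
      static unsigned char foo_ipac_read(struct card_state *cs, int off)
      {
              return 0;       /* would be e.g. inb(cs->base + off) on real hardware */
      }

      /* ... then stamps out ISAC and HSCX accessors from it, so each access
       * stays a direct call instead of going through a pointer. */
      #define BUILD_IPAC_ACCESSORS(prefix)                                      \
      static unsigned char prefix##_isac_read(struct card_state *cs, int off)  \
      {                                                                         \
              return prefix##_ipac_read(cs, 0x80 + off);      /* ISAC window */ \
      }                                                                         \
      static unsigned char prefix##_hscx_read(struct card_state *cs,           \
                                              int hscx, int off)               \
      {                                                                         \
              return prefix##_ipac_read(cs, 0x40 * hscx + off); /* HSCX A/B */  \
      }

      BUILD_IPAC_ACCESSORS(foo)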
    • ISDN/HiSax: Clean up the various IPAC IRQ handlers · bfb4d184
      Kai Germaschewski authored
      Just renaming and introducing some helpers makes them look very similar
      to each other..
    • ISDN/HiSax: Share interrupt handler for ISAC/HSCX cards · 14dad372
      Kai Germaschewski authored
      Except for a minor performance penalty, using the same IRQ handler
      for cards which used the same code anyway seems perfectly natural...
    • ISDN/HiSax: Share some common D-channel init code · c2cacbb4
      Kai Germaschewski authored
      Again, just killing some duplicated code.
    • ISDN/HiSax: Move open/close of D-channel stack -> dc_l1_ops · ad7f8a9b
      Kai Germaschewski authored
      Same change which happened for the B-channel earlier.
    • ISDN/HiSax: Introduce methods for reset/test/release/ · 6cc9cb58
      Kai Germaschewski authored
      This mostly finishes splitting up the multiplexing ->cardmsg.
    • ISDN/HiSax: Move interrupt function to per-card struct · 02f7568c
      Kai Germaschewski authored
      Since we now have a per-card ops struct, use it to provide the
      irq handler function, too.
      
      Some drivers actually drive more than one specific hardware card;
      instead of having "switch (cs->subtyp)" scattered around, we aim
      at having different card_ops structures which just provide the
      right functions for the hardware at hand. Of course, this patch is only
      the beginning of that separation, but it allows for some cleaning already.
    • ISDN/HiSax: Introduce per-card init function · 4bffaa97
      Kai Germaschewski authored
      Linux normally uses separate callbacks instead of a multiplexing
      function like "cardmsg". So start to break that into pieces.
    • Merge uidc2-166.inav.uiowa.net:kernel/v2.5/linux-2.5.isdn · 62a758c5
      Kai Germaschewski authored
      into tp1.ruhr-uni-bochum.de:/scratch/kai/kernel/v2.5/linux-2.5.isdn
    • [PATCH] inline 1,2 and 4-byte copy_*_user operations · b18e5d6c
      Andrew Morton authored
      The patch arranges for constant 1, 2 and 4-byte copy_*_user() invocations to
      be inlined.
      
      It's hard to tell really, but the AIM9 creat_clo, signal_test and dir_rtns_1
      numbers went up by 3%-9%, which is to be expected.
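
      A hedged sketch of the idea (hypothetical helpers, not the actual
      i386 uaccess.h code):

      /* If the length is a compile-time constant of 1, 2 or 4 bytes, the
       * compiler can pick a tiny inlined copy and skip the call into the
       * generic out-of-line routine.  Returns bytes *not* copied, as usual. */
      unsigned long generic_copy_to_user(void *to, const void *from, unsigned long n);
      unsigned long put_user_1(void *to, const void *from);
      unsigned long put_user_2(void *to, const void *from);
      unsigned long put_user_4(void *to, const void *from);

      static inline unsigned long
      example_copy_to_user(void *to, const void *from, unsigned long n)
      {
              if (__builtin_constant_p(n)) {          /* size known at compile time? */
                      switch (n) {
                      case 1: return put_user_1(to, from);
                      case 2: return put_user_2(to, from);
                      case 4: return put_user_4(to, from);
                      }
              }
              return generic_copy_to_user(to, from, n);       /* generic slow path */
      }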
    • [PATCH] fix set_page_dirty vs truncate&free races · 5ba2948d
      Andrew Morton authored
      set_page_dirty() is racy if the caller has no reference against
      page->mapping->host, and if the page is unlocked.  This is because
      another CPU could truncate the page off the mapping and then free the
      mapping.
      
      Usually, the page _is_ locked, or the caller is a user-space process which
      holds a reference on the inode by having an open file.
      
      The exceptional cases are where the page was obtained via
      get_user_pages().  The patch changes those to lock the page around the
      set_page_dirty() call.
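
      Roughly the shape of the fix at such a call site (a sketch, not the
      exact patch):

      #include <linux/mm.h>
      #include <linux/pagemap.h>

      /* Holding the page lock keeps truncation from detaching the page from
       * its mapping (and freeing the mapping) underneath us. */
      static void dirty_page_from_get_user_pages(struct page *page)
      {
              lock_page(page);                /* serializes against truncate */
              set_page_dirty(page);
              unlock_page(page);
      }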
    • [PATCH] misc fixes · 0c682373
      Andrew Morton authored
      - Fix error-path mem leak in __vfs_follow_link() (From a recent AC->2.4
        patch)
      
      - Make drivers/net/aironet4500_proc.c:driver_lock static.
    • [PATCH] Fix an SMP+preempt latency problem · 2faf4338
      Andrew Morton authored
      Here is spin_lock():
      
      #define spin_lock(lock) \
      do { \
              preempt_disable(); \
              _raw_spin_lock(lock); \
      } while(0)
      
      
      Here is the scenario:
      
      CPU0:
      	spin_lock(some_lock);
      	do_very_long_thing();	/* This has cond_resched()s in it */
      
      CPU1:
      	spin_lock(some_lock);
      
      Now suppose that the scheduler tries to schedule a task on CPU1.  Nothing
      happens, because CPU1 is spinning on the lock with preemption disabled.  CPU0
      will happily hold the lock for a long time because nobody has set
      need_resched() against CPU0.
      
      This problem can cause scheduling latencies of many tens of milliseconds on
      SMP, on kernels which handle UP quite happily.
      
      
      This patch fixes the problem by changing the spin_lock() and write_lock()
      contended slowpath to spin on the lock by hand, while polling for preemption
      requests.
      
      I would have done read_lock() too, but we don't seem to have read_trylock()
      primitives.
      
      The patch also shrinks the kernel by 30k due to not having separate
      out-of-line spinning code for each spin_lock() callsite.
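
      A hedged sketch of the contended slowpath being described (approximate;
      not the exact in-tree code):

      #include <linux/spinlock.h>
      #include <linux/sched.h>

      /* Spin "by hand": while waiting, re-enable preemption so a pending
       * reschedule on this CPU can run instead of being locked out. */
      static void example_spin_lock(spinlock_t *lock)
      {
              preempt_disable();
              if (_raw_spin_trylock(lock))
                      return;                         /* uncontended fast path */

              do {
                      preempt_enable();               /* window for preemption */
                      while (spin_is_locked(lock))
                              cpu_relax();            /* poll, no bus-locked ops */
                      preempt_disable();
              } while (!_raw_spin_trylock(lock));
      }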
    • [PATCH] low-latency pagetable teardown · b4adddd6
      Andrew Morton authored
      Pagetable teardown can hold page_table_lock for extremely long periods -
      hundreds of milliseconds.  This is pretty much the final source of high
      scheduling latency in the core kernel.
      
      We fixed it for zap_page_range() by chunking the work up and dropping the
      lock occasionally if needed.  But that did not fix exit_mmap() and
      unmap_region().
      
      So what this patch does is to create an uber-zapper "unmap_vmas()" which
      provides all the vma-walking, page unmapping and low-latency lock-dropping
      which zap_page_range(), exit_mmap() and unmap_region() require.  Those three
      functions are updated to call unmap_vmas().
      
      It's actually a bit of a cleanup...
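
      In outline it looks something like this (a sketch with a hypothetical
      zap helper and an assumed chunk size; the real code also has to finish
      and restart the mmu gather around the lock drop):

      #include <linux/kernel.h>       /* min(), max() */
      #include <linux/mm.h>
      #include <linux/sched.h>

      #define ZAP_BLOCK       (256 * PAGE_SIZE)       /* assumed chunk size */

      /* hypothetical helper: zap one bounded range under the lock */
      void zap_one_block(struct mm_struct *mm, struct vm_area_struct *vma,
                         unsigned long addr, unsigned long len);

      static void sketch_unmap_vmas(struct mm_struct *mm,
                                    struct vm_area_struct *vma,
                                    unsigned long start, unsigned long end)
      {
              for (; vma && vma->vm_start < end; vma = vma->vm_next) {
                      unsigned long addr = max(start, vma->vm_start);
                      unsigned long last = min(end, vma->vm_end);

                      while (addr < last) {
                              unsigned long block = min(last - addr,
                                                        (unsigned long)ZAP_BLOCK);

                              zap_one_block(mm, vma, addr, block);
                              addr += block;

                              /* low-latency point: may drop and retake the lock */
                              cond_resched_lock(&mm->page_table_lock);
                      }
              }
      }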
    • [PATCH] Don't reverse the VMA list in touched_by_munmap() · 670fe925
      Andrew Morton authored
      touched_by_munmap() returns a reversed list of VMAs.  That makes things
      harder in the low-latency-page-zapping patch.
      
      So change touched_by_munmap() to return a VMA list which is in the original
      order - ascending virtual addresses.
      
      Oh, and rename it to <hugh>detach_vmas_to_be_unmapped()</hugh>.  It now
      returns nothing, because we know that the VMA we passed in is the head of the
      to-be-unmapped list.
    • [PATCH] replace `typedef mmu_gather_t' with `struct mmu_gather' · 0c17b328
      Andrew Morton authored
      In the next patch I wish to add to mm.h prototypes of functions which take an
      mmu_gather_t* argument.   To do this I must either:
      
      a) include tlb.h in mm.h
      
         Not good - more nested includes when a simple forward decl is sufficient.
      
      b) Add `typedef struct free_pte_ctx mmu_gather_t;' to mm.h.
      
         That's silly - it's supposed to be an opaque type.
      
         or
      
      c) Remove the pesky typedef.
      
         Bingo.
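
      A hedged sketch of why (c) keeps mm.h light (the prototype shown is
      hypothetical):

      /* mm.h: with the typedef gone, a forward declaration is all that is
       * needed to mention the type in a prototype -- no asm/tlb.h include. */
      struct mmu_gather;                      /* opaque here; defined in asm/tlb.h */
      struct vm_area_struct;

      void example_unmap(struct mmu_gather *tlb, struct vm_area_struct *vma,
                         unsigned long start, unsigned long end);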
    • [PATCH] simplify and generalise cond_resched_lock · ab706391
      Andrew Morton authored
      cond_resched_lock() _used_ to be "if this is the only lock which I am holding
      then drop it and schedule if needed".
      
      However with the i_shared_lock->i_shared_sem change, neither of its two
      callsites now need those semantics.  So this patch changes it to mean just
      "if needed, drop this lock and reschedule".
      
      This allows us to also schedule if CONFIG_PREEMPT=n, which is useful -
      zap_page_range() can run for an awfully long time.
      
      The preempt and non-preempt versions of cond_resched_lock() have been
      unified.
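
      A hedged sketch of the unified semantics (not necessarily the exact
      in-tree body):

      #include <linux/sched.h>
      #include <linux/spinlock.h>

      /* "If needed, drop this lock and reschedule" -- usable whether or not
       * CONFIG_PREEMPT is set, because the lock is dropped before scheduling. */
      static inline void sketch_cond_resched_lock(spinlock_t *lock)
      {
              if (need_resched()) {
                      spin_unlock(lock);
                      cond_resched();
                      spin_lock(lock);
              }
      }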
    • [PATCH] turn i_shared_lock into a semaphore · d9be9136
      Andrew Morton authored
      i_shared_lock is held for a very long time during vmtruncate() and causes
      high scheduling latencies when truncating a file which is mmapped.  I've seen
      100 milliseconds.
      
      So turn it into a semaphore.  It nests inside mmap_sem.
      
      This change is also needed by the shared pagetable patch, which needs to
      unshare ptes on the vmtruncate path - lots of pagetable pages need to
      be allocated, and they are allocated with __GFP_WAIT.
      
      The patch also makes unmap_vma() static.
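
      A hedged sketch of what the truncate path can now do (field names as
      used in this changelog; details approximate):

      #include <linux/fs.h>
      #include <asm/semaphore.h>

      static void sketch_truncate_unmap(struct address_space *mapping)
      {
              down(&mapping->i_shared_sem);   /* was: spin_lock(&mapping->i_shared_lock) */
              /* walk the shared vma lists; sleeping here (e.g. __GFP_WAIT
               * allocations for the shared-pagetable work) is now legal */
              up(&mapping->i_shared_sem);     /* nests inside mmap_sem */
      }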
    • [PATCH] ptrace-fix-2.5.56-A0 · b473e48b
      Ingo Molnar authored
      This patch from Roland McGrath fixes a threading-related ptrace bug:
      PTRACE_ATTACH should not stop everybody for each thread attached.
  3. 10 Jan, 2003 8 commits