1. 10 Sep, 2005 28 commits
    • [PATCH] PPC: C99 initializers for hw_interrupt_type structures · 2830e21e
      Thomas Gleixner authored
      Convert the initializers of hw_interrupt_type structures to C99 initializers.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
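      For illustration, a minimal sketch of the conversion, assuming a
      hypothetical demo_pic controller and handlers (the member names follow
      the 2.6-era hw_interrupt_type layout):

        /* Before: positional initializers depend on member order. */
        static struct hw_interrupt_type demo_pic = {
                "DEMO-PIC",             /* typename */
                demo_startup_irq,       /* startup */
                demo_shutdown_irq,      /* shutdown */
                demo_enable_irq,        /* enable */
                demo_disable_irq,       /* disable */
                demo_ack_irq,           /* ack */
                demo_end_irq,           /* end */
        };

        /* After: C99 designated initializers name each member, so adding or
         * reordering struct fields cannot silently shift the values. */
        static struct hw_interrupt_type demo_pic = {
                .typename = "DEMO-PIC",
                .startup  = demo_startup_irq,
                .shutdown = demo_shutdown_irq,
                .enable   = demo_enable_irq,
                .disable  = demo_disable_irq,
                .ack      = demo_ack_irq,
                .end      = demo_end_irq,
        };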
    • [PATCH] kernel/acct: add kerneldoc · 417ef531
      Randy Dunlap authored
      for kernel/acct.c:
      - fix typos
      - add kerneldoc for non-static functions
      Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] char/n_tty: fix sparse warnings (__nocast type) · 621a4d1a
      Victor Fusco authored
      Fix the sparse warning "implicit cast to nocast type".
      Signed-off-by: Victor Fusco <victor@cetuc.puc-rio.br>
      Signed-off-by: Domen Puncer <domen@coderock.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
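      A hedged sketch of the annotation involved (demo_alloc is hypothetical;
      in this kernel era allocation flags were passed as unsigned int __nocast
      rather than the later gfp_t):

        #include <linux/slab.h>

        /* __nocast makes sparse warn whenever a plain integer is implicitly
         * converted to this parameter's type, so annotating the whole call
         * chain consistently silences "implicit cast to nocast type". */
        static void *demo_alloc(size_t size, unsigned int __nocast gfp_mask)
        {
                return kmalloc(size, gfp_mask);
        }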
    • [PATCH] mm/slab: fix sparse warnings · b2d55073
      Victor Fusco authored
      Fix the sparse warning "implicit cast to nocast type".
      Signed-off-by: Victor Fusco <victor@cetuc.puc-rio.br>
      Signed-off-by: Domen Puncer <domen@coderock.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sb16_csp: untypedef · dfc866e5
      Alexey Dobriyan authored
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Domen Puncer <domen@coderock.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
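      A minimal before/after sketch of what "untypedef" means here (members
      elided, function name illustrative): the typedef alias is dropped in
      favour of the plain struct tag, per kernel coding style.

        /* Before: */
        typedef struct snd_sb_csp { /* ... */ } snd_sb_csp_t;
        static int demo_use(snd_sb_csp_t *p);

        /* After: */
        struct snd_sb_csp { /* ... */ };
        static int demo_use(struct snd_sb_csp *p);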
    • [PATCH] applicom: fix error handling · 819a3eba
      Christophe Lucas authored
      misc_register() can fail.
      Signed-off-by: Christophe Lucas <clucas@rotomalug.org>
      Signed-off-by: Domen Puncer <domen@coderock.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
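      A sketch of the pattern such a fix applies, with hypothetical demo_*
      names (demo_fops is assumed defined elsewhere); the point is that the
      return value is checked and propagated from the init path:

        #include <linux/miscdevice.h>

        static struct miscdevice demo_miscdev = {
                .minor = MISC_DYNAMIC_MINOR,
                .name  = "demo",
                .fops  = &demo_fops,
        };

        static int __init demo_init(void)
        {
                int ret = misc_register(&demo_miscdev);

                if (ret) {      /* can fail, e.g. minor already in use */
                        printk(KERN_WARNING "demo: misc_register failed\n");
                        return ret;
                }
                return 0;
        }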
    • [PATCH] mm/filemap.c: make two functions static · 5ce7852c
      Adrian Bunk authored
      With Nick Piggin <npiggin@suse.de>

      Give some things static scope.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Yet another RCU documentation update · dd81eca8
      Paul E. McKenney authored
      Update RCU documentation based on discussions and review of RCU-based tree
      patches.  Add an introductory whatisRCU.txt file.

      Signed-off-by: <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Remove even more stale references to Documentation/smp.tex · 12c62c2e
      Arthur Othieno authored
      Randy cleaned out the bulk of these stale references to the now long-gone
      Documentation/smp.tex back in 2004.  I followed this up with a few more
      sweeps.  Somehow, these have managed to sneak back in since.

      I can't seem to figure out a contact point for M32R (no one listed in
      MAINTAINERS!), but these patches are trivial anyway.
      Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: allow the load to grow up to its cpu_power · 0c117f1b
      Siddha, Suresh B authored
      Don't pull tasks from a group if that would cause the group's total load to
      drop below its total cpu_power (i.e. cause the group to start going idle).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
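      A hedged sketch of the condition described above (illustrative, not the
      exact find_busiest_group() code; parameter names are made up):

        /* A group is a pull candidate only while its total load exceeds its
         * total cpu_power; pulling below that starts idling the group. */
        static int group_can_give_load(unsigned long group_load,
                                       unsigned long group_cpu_power)
        {
                return group_load > group_cpu_power;
        }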
    • [PATCH] sched: don't kick ALB in the presence of pinned tasks · fa3b6ddc
      Siddha, Suresh B authored
      Jack Steiner raised this issue at my OLS talk.

      Take a scenario where two tasks are pinned to two HT threads in a physical
      package.  Idle packages in the system will keep kicking migration_thread on
      the busy package without any success.

      We will run into similar scenarios in the presence of CMP/NUMA.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
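      In sketch form, assuming the balancer tracks an all_pinned flag while
      scanning the busiest queue (names illustrative):

        /* Waking migration_thread cannot help when every task on the busiest
         * runqueue is pinned there by its cpus_allowed mask. */
        static int should_kick_migration(int nr_moved, int all_pinned)
        {
                return !nr_moved && !all_pinned;
        }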
    • [PATCH] sched: use cached variable in sys_sched_yield() · 5927ad78
      Renaud Lienhart authored
      In sys_sched_yield(), we cache current->array in the "array" variable, thus
      there's no need to dereference "current" again later.
      Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
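      The change in sketch form (types follow the 2.6-era scheduler; the
      elided body and the exact pre-patch condition are approximations):

        asmlinkage long sys_sched_yield(void)
        {
                runqueue_t *rq = this_rq_lock();
                prio_array_t *array = current->array;   /* cached once */
                prio_array_t *target = rq->expired;

                /* was: if (current->array->nr_active == 1 ...) */
                if (array->nr_active == 1)              /* reuses cache */
                        target = array;

                /* ... requeue onto target, unlock, schedule ... */
                return 0;
        }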
    • [PATCH] sched: HT optimisation · 5969fe06
      Nick Piggin authored
      If an idle sibling of an HT queue encounters a busy sibling, then trigger
      higher-level load balancing of the non-idle variety.

      Performance of multiprocessor HT systems with low numbers of tasks
      (generally < number of virtual CPUs) can be significantly worse than the
      exact same workloads when running in non-HT mode.  The reason is largely
      due to poor scheduling behaviour.

      This patch improves the situation, making the performance gap far less
      significant on one problematic test case (tbench).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
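      A hedged sketch of the idea (smt_sibling_busy() is a hypothetical
      helper): an idle CPU whose SMT sibling is busy reports itself as not
      idle to the higher sched-domain levels, so the package is balanced as
      busy, which it physically is.

        static enum idle_type effective_idle(enum idle_type idle, int cpu)
        {
                if (idle == SCHED_IDLE && smt_sibling_busy(cpu))
                        return NOT_IDLE;        /* package is not idle */
                return idle;
        }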
    • [PATCH] sched: less locking · e17224bf
      Nick Piggin authored
      During periodic load balancing, don't hold this runqueue's lock while
      scanning remote runqueues, which can take a non-trivial amount of time
      especially on very large systems.

      Holding the runqueue lock will only help to stabilise ->nr_running, and
      even that doesn't help much: tasks being woken will simply get held up on
      the runqueue lock, so ->nr_running would not provide a really accurate
      picture of runqueue load in that case anyway.

      What's more, ->nr_running (and possibly the cpu_load averages) of remote
      runqueues won't be stable anyway, so load balancing is always an inexact
      operation.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
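      In sketch form (illustrative fragments, not the exact diff): the remote
      scan now runs unlocked, and locks are taken only around the actual task
      movement.

        /* was: spin_lock(&this_rq->lock) held across the whole scan */
        group = find_busiest_group(sd, this_cpu, &imbalance, idle);
        /* ... pick the busiest queue in the group ... */
        if (busiest) {
                double_lock_balance(this_rq, busiest);  /* lock to move */
                /* ... move_tasks(...) ... */
                spin_unlock(&busiest->lock);
        }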
    • [PATCH] sched: less newidle locking · d6d5cfaf
      Nick Piggin authored
      Similarly to the earlier change in load_balance, only lock the runqueue in
      load_balance_newidle if the busiest queue found has a nr_running > 1.  This
      will reduce the frequency of expensive remote runqueue lock acquisitions in
      the schedule() path on some workloads.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
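      The guarding check, sketched (illustrative fragment):

        /* Only pay for the remote runqueue lock when there is actually a
         * second task that could be pulled. */
        if (busiest && busiest->nr_running > 1) {
                double_lock_balance(this_rq, busiest);
                /* ... move_tasks(...) ... */
                spin_unlock(&busiest->lock);
        }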
    • [PATCH] sched: fix SMT scheduler latency bug · 67f9a619
      Ingo Molnar authored
      William Weston reported unusually high scheduling latencies on his x86 HT
      box, on the -RT kernel.  I managed to reproduce it on my HT box and the
      latency tracer shows the incident in action:
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup (try_to_wake_up)
              ..............................................................
              ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
              ..............................................................
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup <<...>-2778> (73 1)
            du-2803  3Dnh2    0us : _raw_spin_unlock (try_to_wake_up)
              ................................................
              ... still on CPU#3, we send an IPI to CPU#1: ...
              ................................................
            du-2803  3Dnh1    0us : resched_task (try_to_wake_up)
            du-2803  3Dnh1    1us : smp_send_reschedule (try_to_wake_up)
            du-2803  3Dnh1    1us : send_IPI_mask_bitmask (smp_send_reschedule)
            du-2803  3Dnh1    2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
              ...............................................
              ... 1 usec later, the IPI arrives on CPU#1: ...
              ...............................................
        <idle>-0     1Dnh.    2us : smp_reschedule_interrupt (c0100c5a 0 0)
      
      So far so good, this is the normal wakeup/preemption mechanism.  But here
      comes the scheduler anomaly on CPU#1:
      
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    3us : __schedule (preempt_schedule_irq)
        <idle>-0     1Dnh.    3us : profile_hit (__schedule)
        <idle>-0     1Dnh1    3us : sched_clock (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irq (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irqsave (__schedule)
        <idle>-0     1Dnh2    5us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh1    5us : preempt_schedule (__schedule)
        <idle>-0     1Dnh1    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh2    6us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    8us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh2    8us : preempt_schedule (__schedule)
        <idle>-0     1Dnh2    8us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    9us : trace_stop_sched_switched (__schedule)
        <idle>-0     1Dnh2    9us : _raw_spin_lock (trace_stop_sched_switched)
        <idle>-0     1Dnh3   10us : trace_stop_sched_switched <<...>-2778> (73 8c)
        <idle>-0     1Dnh3   10us : _raw_spin_unlock (trace_stop_sched_switched)
        <idle>-0     1Dnh1   10us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh.   11us : local_irq_enable_noresched (preempt_schedule_irq)
        <idle>-0     1Dnh.   11us < (0)
      
      we didn't pick up pid 2778! It only gets scheduled much later:
      
         <...>-2778  1Dnh2  412us : __switch_to (__schedule)
         <...>-2778  1Dnh2  413us : __schedule <<idle>-0> (8c 73)
         <...>-2778  1Dnh2  413us : _raw_spin_unlock (__schedule)
         <...>-2778  1Dnh1  413us : trace_stop_sched_switched (__schedule)
         <...>-2778  1Dnh1  414us : _raw_spin_lock (trace_stop_sched_switched)
         <...>-2778  1Dnh2  414us : trace_stop_sched_switched <<...>-2778> (73 1)
         <...>-2778  1Dnh2  414us : _raw_spin_unlock (trace_stop_sched_switched)
         <...>-2778  1Dnh1  415us : trace_stop_sched_switched (__schedule)
      
      the reason for this anomaly is the following code in dependent_sleeper():
      
                      /*
                       * If a user task with lower static priority than the
                       * running task on the SMT sibling is trying to schedule,
                       * delay it till there is proportionately less timeslice
                       * left of the sibling task to prevent a lower priority
                       * task from using an unfair proportion of the
                       * physical cpu's resources. -ck
                       */
      [...]
                              if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
                                      100) > task_timeslice(p)))
                                              ret = 1;
      
      Note that in contrast to the comment above, we don't actually do the check
      based on static priority; we do the check based on timeslices.  But
      timeslices go up and down, and even high-prio tasks can randomly have very
      low timeslices (just before their next refill) and can thus be judged as
      'lowprio' by the above piece of code.  This condition is clearly buggy.
      The correct test is to check for static_prio _and_ to check for the
      preemption priority.  Even on different static priority levels, a
      higher-prio interactive task should not be delayed due to a
      higher-static-prio CPU hog.
      
      There is a symmetric bug in the 'kick SMT sibling' code of this function as
      well, which can be solved in a similar way.
      
      The patch below (against the current scheduler queue in -mm) fixes both
      bugs.  I have build and boot-tested this on x86 SMT, and nice +20 tasks
      still get properly throttled - so the dependent-sleeper logic is still in
      action.
      
      Btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
      property was applied too liberally, so this fix is likely a throughput
      improvement as well.
      
      I separated out a smt_slice() function to make the code easier to read.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
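      The separated-out helper, sketched from the computation quoted above
      (the exact code in the -mm queue may differ slightly):

        /*
         * Number of 'lost' timeslices this task won't be able to fully
         * utilize if another task runs on a sibling, modelling the slice
         * loss from SMT's per-physical-cpu utilization limit.
         */
        static inline unsigned long smt_slice(task_t *p,
                                              struct sched_domain *sd)
        {
                return p->time_slice * (100 - sd->per_cpu_gain) / 100;
        }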
    • [PATCH] sched: TASK_NONINTERACTIVE · d79fc0fc
      Ingo Molnar authored
      This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
      used by blocking points to mark the task's wait as "non-interactive".  This
      does not mean the task will be considered a CPU-hog - the wait will simply
      not have an effect on the waiting task's priority - positive or negative
      alike.  Right now only pipe_wait() will make use of it, because it's a
      common source of not-so-interactive waits (kernel compilation jobs, etc.).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
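      In sketch form, the pipe_wait() usage (abbreviated; the new bit is
      OR-ed into the sleep state, so the wait leaves the task's interactivity
      bonus untouched):

        void pipe_wait(struct inode *inode)
        {
                DEFINE_WAIT(wait);

                /* A pipe wait says little about interactivity:
                 * don't let it boost (or hurt) the task's priority. */
                prepare_to_wait(PIPE_WAIT(*inode), &wait,
                                TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
                up(PIPE_SEM(*inode));
                schedule();
                finish_wait(PIPE_WAIT(*inode), &wait);
                down(PIPE_SEM(*inode));
        }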
    • [PATCH] sched cleanups · 95cdf3b7
      Ingo Molnar authored
      Whitespace cleanups.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: make idlest_group/cpu cpus_allowed-aware · da5a5522
      M.Baris Demiray authored
      Add the relevant checks to find_idlest_group() and find_idlest_cpu() so
      that they return only groups containing allowed CPUs, and only allowed
      CPUs, respectively.
      Signed-off-by: M.Baris Demiray <baris@labristeknoloji.com>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
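      The two checks, sketched with the cpumask API of this era (fragments,
      not the full functions):

        /* find_idlest_group(): skip groups the task may not run in. */
        cpus_and(tmp, group->cpumask, p->cpus_allowed);
        if (cpus_empty(tmp))
                goto nextgroup;

        /* find_idlest_cpu(): scan only the group's allowed CPUs. */
        cpus_and(tmp, group->cpumask, p->cpus_allowed);
        for_each_cpu_mask(i, tmp) {
                /* ... track the least-loaded candidate ... */
        }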
    • [PATCH] sched: run SCHED_NORMAL tasks with real time tasks on SMT siblings · fc38ed75
      Con Kolivas authored
      The hyperthread-aware nice handling currently puts to sleep any non-real-time
      task when a real-time task is running on its sibling cpu.  This can
      lead to prolonged starvation by having the non-real-time task pegged to the
      cpu with load balancing not pulling that task away.

      Currently we force lower-priority hyperthread tasks to run a percentage of
      time difference based on timeslice differences, which is meaningless when
      comparing real-time tasks to SCHED_NORMAL tasks.  We can allow non-real-time
      tasks to run with real-time tasks on the sibling up to per_cpu_gain%
      if we use jiffies as a counter.

      Cleanups and micro-optimisations to the relevant code section should make
      it more understandable as well.
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
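      A hedged sketch of the jiffies-based bound (close to, but not
      necessarily identical with, the committed code):

        if (rt_task(smt_curr)) {
                /* Timeslices are meaningless against an RT sibling;
                 * instead let the SCHED_NORMAL task run only
                 * per_cpu_gain% of the time, measured in jiffies. */
                if ((jiffies % DEF_TIMESLICE) >
                    (sd->per_cpu_gain * DEF_TIMESLICE / 100))
                        ret = 1;        /* delay the non-RT task */
        }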
    • [PATCH] synclink_cs: add statistics clear · a7482a2e
      Paul Fulghum authored
      Add the ability to clear statistics.
      Signed-off-by: Paul Fulghum <paulkf@microgate.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] cpuset semaphore depth check deadlock fix · 4247bdc6
      Paul Jackson authored
      The cpusets-formalize-intermediate-gfp_kernel-containment patch
      has a deadlock problem.
      
      This patch was part of a set of four patches to make more
      extensive use of the cpuset 'mem_exclusive' attribute to
      manage kernel GFP_KERNEL memory allocations and to constrain
      the out-of-memory (oom) killer.
      
      A task that is changing cpusets in particular ways on a system
      when it is very short of free memory could double trip over
      the global cpuset_sem semaphore (get the lock and then deadlock
      trying to get it again).
      
      The second attempt to get cpuset_sem would be in the routine
      cpuset_zone_allowed().  This was discovered by code inspection.
      I cannot reproduce the problem except with an artificially
      hacked kernel and a specialized stress test.
      
      In real life you cannot hit this unless you are manipulating
      cpusets, and are very unlikely to hit it unless you are rapidly
      modifying cpusets on a memory-tight system.  Even then it would
      be a rare occurrence.
      
      If you did hit it, the task double tripping over cpuset_sem
      would deadlock in the kernel, and any other task also trying
      to manipulate cpusets would deadlock there too, on cpuset_sem.
      Your batch manager would be wedged solid (if it was cpuset
      savvy), but classic Unix shells and utilities would work well
      enough to reboot the system.
      
      The unusual condition that led to this bug is that unlike most
      semaphores, cpuset_sem _can_ be acquired while in the page
      allocation code, when __alloc_pages() calls cpuset_zone_allowed().
      So it is easy to mistakenly perform the following sequence:
        1) task makes system call to alter a cpuset
        2) take cpuset_sem
        3) try to allocate memory
        4) memory allocator, via cpuset_zone_allowed, tries to take cpuset_sem
        5) deadlock
      
      The reason that this is not a serious bug for most users
      is that almost all calls to allocate memory don't require
      taking cpuset_sem.  Only some code paths off the beaten
      track require taking cpuset_sem -- which is good.  Taking
      a global semaphore on the main code path for allocating
      memory would not scale well.
      
      This patch fixes this deadlock by wrapping the up() and down()
      calls on cpuset_sem in kernel/cpuset.c with code that tracks
      the nesting depth of the current task on that semaphore, and
      only does the real down() if the task doesn't hold the lock
      already, and only does the real up() if the nesting depth
      (number of unmatched downs) is exactly one.
      
      The previous required use of refresh_mems(), anytime that
      the cpuset_sem semaphore was acquired and the code executed
      while holding that semaphore might try to allocate memory, is
      no longer required.  Two refresh_mems() calls were removed
      thanks to this.  This is a good change, as failing to get
      all the necessary refresh_mems() calls placed was a primary
      source of bugs in this cpuset code.  The only remaining call
      to refresh_mems() is made while doing a memory allocation,
      if certain task memory placement data needs to be updated
      from its cpuset, due to the cpuset having been changed behind
      the task's back.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
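      The wrapper, in hedged sketch form (the per-task depth field name is
      assumed; tracking depth per task means no separate owner check is
      needed, since a nonzero depth implies current already holds the
      semaphore):

        static inline void cpuset_down(struct semaphore *psem)
        {
                if (current->cpuset_sem_nest_depth == 0)
                        down(psem);             /* the one real down() */
                current->cpuset_sem_nest_depth++;
        }

        static inline void cpuset_up(struct semaphore *psem)
        {
                current->cpuset_sem_nest_depth--;
                if (current->cpuset_sem_nest_depth == 0)
                        up(psem);               /* the one real up() */
        }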
    • [PATCH] spinlock consolidation · fb1c8f93
      Ingo Molnar authored
      This patch (written by me and also containing many suggestions of Arjan van
      de Ven) does a major cleanup of the spinlock code.  It does the following
      things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
      
      Most notably there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c.  (Previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds.)
      
      Also, I've enhanced the rwlock debugging facility: it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
       *  linux/spinlock_types_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
      cross-compilers.  m32r, mips, sh and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted bit ops atomic_hash lock to raw_spinlock_t.  Doing so avoids
        some ugly nesting of linux/*.h and asm/*.h files.  Those particular locks
        are well tested and contained entirely inside arch specific code.  I do NOT
        expect any new issues to arise with them.
      
        If someone ever needs to use debug/metrics with them, they will need to
        unravel this hairball between spinlocks, atomic ops, and bit ops that
        exists only because parisc has exactly one atomic instruction: LDCW
        (load and clear word).
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
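      In sketch form, the encapsulation described above (abbreviated from the
      new generic header; debug fields elided):

        /* linux/spinlock_types.h: the generic type wraps the arch raw type
         * and carries generic features such as ->break_lock. */
        typedef struct {
                raw_spinlock_t raw_lock;
        #if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
                unsigned int break_lock;
        #endif
                /* ... lock-debugging fields on CONFIG_DEBUG_SPINLOCK ... */
        } spinlock_t;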
    • [PATCH] fix numa-caused compile warnings · 4327edf6
      Alan Cox authored
      pcibus_to_cpumask() expands into more than just an initialiser, so gcc
      moans about code before variable declarations.
      Signed-off-by: Alan Cox <alan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
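      The pattern, sketched (bus is a hypothetical argument; the exact macro
      parameter varies by architecture): when the macro expands to statements,
      using it as an initialiser effectively puts code before the remaining
      declarations, so the assignment is moved below them.

        /* before: gcc warns about the declaration following the expansion */
        cpumask_t mask = pcibus_to_cpumask(bus);
        int cpu;

        /* after: declare first, assign once all declarations are done */
        cpumask_t mask;
        int cpu;

        mask = pcibus_to_cpumask(bus);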
    • [PATCH] ntfs build fix · b4012a98
      Andrew Morton authored
      *** Warning: "bit_spin_lock" [fs/ntfs/ntfs.ko] undefined!
      *** Warning: "bit_spin_unlock" [fs/ntfs/ntfs.ko] undefined!

      Cc: Anton Altaparmakov <aia21@cantab.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 09 Sep, 2005 12 commits