  1. 27 Mar, 2006 40 commits
    • [PATCH] lightweight robust futexes: arch defaults · e9056f13
      Ingo Molnar authored
      This patchset provides a new (written from scratch) implementation of robust
      futexes, called "lightweight robust futexes".  We believe this new
      implementation is faster and simpler than the vma-based robust futex solutions
      presented before, and we'd like this patchset to be adopted in the upstream
      kernel.  This is version 1 of the patchset.
      
        Background
        ----------
      
      What are robust futexes?  To answer that, we first need to understand what
      futexes are: normal futexes are special types of locks that in the
      noncontended case can be acquired/released from userspace without having to
      enter the kernel.
      
      A futex is in essence a user-space address, e.g.  a 32-bit lock variable
      field.  If userspace notices contention (the lock is already owned and someone
      else wants to grab it too) then the lock is marked with a value that says
      "there's a waiter pending", and the sys_futex(FUTEX_WAIT) syscall is used to
      wait for the other guy to release it.  The kernel creates a 'futex queue'
      internally, so that it can later on match up the waiter with the waker -
      without them having to know about each other.  When the owner thread releases
      the futex, it notices (via the variable value) that there were waiter(s)
      pending, and does the sys_futex(FUTEX_WAKE) syscall to wake them up.  Once all
      waiters have taken and released the lock, the futex is again back to
      'uncontended' state, and there's no in-kernel state associated with it.  The
      kernel completely forgets that there ever was a futex at that address.  This
      method makes futexes very lightweight and scalable.
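
      For illustration, the noncontended/contended protocol looks roughly like
      this in userspace C (a minimal sketch modelled on the classic three-state
      futex mutex - an illustrative assumption, not the glibc implementation):

      	#include <linux/futex.h>
      	#include <stdatomic.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>

      	/* 0 = unlocked, 1 = locked, 2 = locked with waiters pending */
      	static void futex_lock(atomic_int *f)
      	{
      		int c = 0;
      		/* fast path: 0 -> 1 entirely in userspace */
      		if (atomic_compare_exchange_strong(f, &c, 1))
      			return;
      		/* slow path: mark "waiter pending", sleep in the kernel */
      		if (c != 2)
      			c = atomic_exchange(f, 2);
      		while (c != 0) {
      			syscall(SYS_futex, f, FUTEX_WAIT, 2, NULL, NULL, 0);
      			c = atomic_exchange(f, 2);
      		}
      	}

      	static void futex_unlock(atomic_int *f)
      	{
      		if (atomic_exchange(f, 0) == 2)	/* waiters were pending */
      			syscall(SYS_futex, f, FUTEX_WAKE, 1, NULL, NULL, 0);
      	}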
      
      "Robustness" is about dealing with crashes while holding a lock: if a process
      exits prematurely while holding a pthread_mutex_t lock that is also shared
      with some other process (e.g.  yum segfaults while holding a pthread_mutex_t,
      or yum is kill -9-ed), then waiters for that lock need to be notified that the
      last owner of the lock exited in some irregular way.
      
      To solve such types of problems, "robust mutex" userspace APIs were created:
      pthread_mutex_lock() returns an error value if the owner exits prematurely -
      and the new owner can decide whether the data protected by the lock can be
      recovered safely.
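
      (In today's POSIX terms these are pthread_mutexattr_setrobust() and
      pthread_mutex_consistent(); at the time of this writing they were the _np
      variants.)  A hedged sketch of the recovery pattern, where
      repair_shared_state() is a hypothetical application helper:

      	#include <errno.h>
      	#include <pthread.h>

      	extern void repair_shared_state(void);	/* hypothetical helper */

      	void locked_update(pthread_mutex_t *m)
      	{
      		int r = pthread_mutex_lock(m);
      		if (r == EOWNERDEAD) {
      			/* previous owner died holding the lock: repair the
      			 * protected data, then mark the mutex usable again */
      			repair_shared_state();
      			pthread_mutex_consistent(m);
      		}
      		/* ... critical section ... */
      		pthread_mutex_unlock(m);
      	}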
      
      There is a big conceptual problem with futex based mutexes though: it is the
      kernel that destroys the owner task (e.g.  due to a SEGFAULT), but the kernel
      cannot help with the cleanup: if there is no 'futex queue' (and in most cases
      there is none, futexes being fast lightweight locks) then the kernel has no
      information to clean up after the held lock!  Userspace has no chance to clean
      up after the lock either - userspace is the one that crashes, so it has no
      opportunity to clean up.  Catch-22.
      
      In practice, when e.g.  yum is kill -9-ed (or segfaults), a system reboot is
      needed to release that futex based lock.  This is one of the leading
      bugreports against yum.
      
      To solve this problem, 'Robust Futex' patches were created and presented on
      lkml: the one written by Todd Kneisel and David Singleton is the most advanced
      at the moment.  These patches all tried to extend the futex abstraction by
      registering futex-based locks in the kernel - and thus give the kernel a
      chance to clean up.
      
      E.g.  in David Singleton's robust-futex-6.patch, there are 3 new syscall
      variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and FUTEX_RECOVER.
      The kernel attaches such robust futexes to vmas (via
      vma->vm_file->f_mapping->robust_head), and at do_exit() time, all vmas are
      searched to see whether they have a robust_head set.
      
      Lots of work went into the vma-based robust-futex patch, and recently it has
      improved significantly, but unfortunately it still has two fundamental
      problems left:
      
       - they have quite complex locking and race scenarios.  The vma-based
         patches had been pending for years, but they are still not completely
         reliable.
      
       - they have to scan _every_ vma at sys_exit() time, per thread!
      
      The second disadvantage is a real killer: pthread_exit() takes around 1
      microsecond on Linux, but with thousands (or tens of thousands) of vmas every
      pthread_exit() takes a millisecond or more, also totally destroying the CPU's
      L1 and L2 caches!
      
      This is very much noticeable even for normal process sys_exit_group() calls:
      the kernel has to do the vma scanning unconditionally!  (this is because the
      kernel has no knowledge about how many robust futexes there are to be cleaned
      up, because a robust futex might have been registered in another task, and the
      futex variable might have been simply mmap()-ed into this process's address
      space).
      
      This huge overhead forced the creation of CONFIG_FUTEX_ROBUST, but worse than
      that: the overhead makes robust futexes impractical for any type of generic
      Linux distribution.
      
      So it became clear to us that something had to be done.  Last week, when
      Thomas Gleixner tried to fix up the vma-based robust futex patch in the -rt
      tree, he found a handful of new races; we talked it over and analyzed the
      situation.  At that point a fundamentally different solution occurred to
      me.  This patchset (written in the past couple of days) implements that new
      solution.  Be warned though - the patchset does things we normally don't do
      in Linux, so some might find the approach disturbing.  Parental advice
      recommended ;-)
      
        New approach to robust futexes
        ------------------------------
      
      At the heart of this new approach there is a per-thread private list of the
      robust locks that userspace is holding (maintained by glibc); this list is
      registered with the kernel via a new syscall [the registration happens at
      most once per thread lifetime].  At do_exit() time, the kernel checks this
      user-space list: are there any robust futex locks to be cleaned up?
      
      In the common case, at do_exit() time, there is no list registered, so the
      cost of robust futexes is just a simple current->robust_list != NULL
      comparison.  If the thread has registered a list, then normally the list is
      empty.  If the thread/process crashed or terminated in some incorrect way then
      the list might be non-empty: in this case the kernel carefully walks the list
      [not trusting it], and marks all locks that are owned by this thread with the
      FUTEX_OWNER_DEAD bit, and wakes up one waiter (if any).
      
      The list is guaranteed to be private and per-thread, so it's lockless.
      There is one race possible though: since adding to and removing from the
      list is done after the futex is acquired by glibc, there is a window of a
      few instructions in which the thread (or process) could die, leaving the
      futex hung.  To protect against this possibility, userspace (glibc) also
      maintains a simple per-thread 'list_op_pending' field, to allow the kernel
      to clean up if the thread dies after acquiring the lock, but just before it
      could have added itself to the list.  Glibc sets this list_op_pending field
      before it tries to acquire the futex, and clears it after the list-add (or
      list-remove) has finished.
      
      That's all that is needed - all the rest of robust-futex cleanup is done in
      userspace [just like with the previous patches].
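
      For reference, the per-thread structure described above looks roughly like
      this (a sketch matching the layout that ended up in linux/futex.h, shown
      here as background rather than quoted from this patch):

      	struct robust_list {
      		struct robust_list __user *next;
      	};

      	struct robust_list_head {
      		/* locks held by this thread, each linked through an
      		 * entry embedded next to its futex word */
      		struct robust_list list;
      		/* offset from a list entry to its futex word */
      		long futex_offset;
      		/* the entry being added/removed if the thread dies
      		 * in the middle of a list operation */
      		struct robust_list __user *list_op_pending;
      	};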
      
      Ulrich Drepper has implemented the necessary glibc support for this new
      mechanism, which fully enables robust mutexes.  (Ulrich plans to commit these
      changes to glibc-HEAD later today.)
      
      Key differences of this userspace-list based approach, compared to the vma
      based method:
      
       - it's much, much faster: at thread exit time, there's no need to loop
         over every vma (!), which the VM-based method has to do.  Only a very
         simple 'is the list empty' op is done.
      
       - no VM changes are needed - 'struct address_space' is left alone.
      
       - no registration of individual locks is needed: robust mutexes don't need
         any extra per-lock syscalls.  Robust mutexes thus become a very
         lightweight primitive - they don't force the application designer to
         make a hard choice between performance and robustness - robust mutexes
         are just as fast.
      
       - no per-lock kernel allocation happens.
      
       - no resource limits are needed.
      
       - no kernel-space recovery call (FUTEX_RECOVER) is needed.
      
       - the implementation and the locking is "obvious", and there are no
         interactions with the VM.
      
        Performance
        -----------
      
      I have benchmarked the time needed for the kernel to process a list of 1
      million (!) held locks, using the new method [on a 2GHz CPU]:
      
       - with the FUTEX_WAITERS bit set [contended mutex]: 130 msecs
       - without the FUTEX_WAITERS bit set [uncontended mutex]: 30 msecs
      
      I have also measured an approach where glibc does the lock notification [which
      it currently does for !pshared robust mutexes], and that took 256 msecs -
      clearly slower, due to the 1 million FUTEX_WAKE syscalls userspace had to do.
      
      (1 million held locks are unheard of - we expect at most a handful of locks to
      be held at a time.  Nevertheless it's nice to know that this approach scales
      nicely.)
      
        Implementation details
        ----------------------
      
      The patch adds two new syscalls: one to register the userspace list, and one
      to query the registered list pointer:
      
       asmlinkage long
       sys_set_robust_list(struct robust_list_head __user *head,
                           size_t len);
      
       asmlinkage long
       sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                           size_t __user *len_ptr);
      
      List registration is very fast: the pointer is simply stored in
      current->robust_list.  [Note that in the future, if robust futexes become
      widespread, we could extend sys_clone() to register a robust-list head for new
      threads, without the need of another syscall.]
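
      From userspace, registration would look roughly like this (a sketch;
      normally glibc does this on the application's behalf, and SYS_set_robust_list
      assumes the syscall numbers this patchset wires up):

      	#include <linux/futex.h>	/* struct robust_list_head */
      	#include <sys/syscall.h>
      	#include <unistd.h>

      	static struct robust_list_head head = {
      		.list		 = { &head.list },	/* empty, circular */
      		.futex_offset	 = 0,
      		.list_op_pending = NULL,
      	};

      	/* done at most once per thread lifetime */
      	static long register_robust_list(void)
      	{
      		return syscall(SYS_set_robust_list, &head, sizeof(head));
      	}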
      
      So there is virtually zero overhead for tasks not using robust futexes, and
      even for robust futex users, there is only one extra syscall per thread
      lifetime, and the cleanup operation, if it happens, is fast and
      straightforward.  The kernel doesn't have any internal distinction between
      robust and normal futexes.
      
      If a futex is found to be held at exit time, the kernel sets the
      FUTEX_OWNER_DIED bit of the futex word:
      
      	#define FUTEX_OWNER_DIED        0x40000000
      
      and wakes up the next futex waiter (if any). User-space does the rest of
      the cleanup.
      
      Otherwise, robust futexes are acquired by glibc by putting the TID into the
      futex field atomically.  Waiters set the FUTEX_WAITERS bit:
      
      	#define FUTEX_WAITERS           0x80000000
      
      and the remaining bits are for the TID.
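
      Putting the encoding together - TID in the low 30 bits, FUTEX_OWNER_DIED in
      bit 30, FUTEX_WAITERS in bit 31 - the acquire path looks roughly like this
      (a hedged sketch; FUTEX_TID_MASK is assumed here):

      	#include <stdatomic.h>

      	#define FUTEX_TID_MASK	0x3fffffff

      	/* 1: acquired, -1: previous owner died, 0: held by someone */
      	static int robust_try_lock(atomic_uint *futex, unsigned int tid)
      	{
      		unsigned int old = 0;
      		if (atomic_compare_exchange_strong(futex, &old, tid))
      			return 1;		/* was free, now ours */
      		if (old & FUTEX_OWNER_DIED)
      			return -1;		/* recovery needed */
      		return 0;	/* owner is (old & FUTEX_TID_MASK) */
      	}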
      
        Testing, architecture support
        -----------------------------
      
      I've tested the new syscalls on x86 and x86_64, and have made sure the parsing
      of the userspace list is robust [ ;-) ] even if the list is deliberately
      corrupted.
      
      i386 and x86_64 syscalls are wired up at the moment, and Ulrich has tested the
      new glibc code (on x86_64 and i386), and it works for his robust-mutex
      testcases.
      
      All other architectures should build just fine too - but they won't have
      the new syscalls yet.

      Architectures need to implement the new futex_atomic_cmpxchg_inuser()
      inline function before wiring up the syscalls (that function returns
      -ENOSYS right now).
      
      This patch:
      
      Add placeholder futex_atomic_cmpxchg_inuser() implementations to every
      architecture that supports futexes.  It returns -ENOSYS.
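
      The placeholder is trivial - per architecture it looks roughly like this
      (a sketch):

      	static inline int
      	futex_atomic_cmpxchg_inuser(int __user *uaddr, int oldval, int newval)
      	{
      		return -ENOSYS;
      	}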
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Acked-by: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mips: add ptr_to_compat() · 62ac285f
      Ingo Molnar authored
      Add ptr_to_compat() - needed by the new robust futex code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] parisc: add ptr_to_compat() · 213b63b7
      Ingo Molnar authored
      Add ptr_to_compat() to parisc - needed by the new robust futex code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Grant Grundler <iod00d@hp.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] s390: add ptr_to_compat() · f267fa9f
      Ingo Molnar authored
      Add ptr_to_compat() to s390 - needed by the new robust-futex code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>

      Untested.  CHECKME: am I right about the 0x7fffffffUL masking?
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ia64: add ptr_to_compat() · 66e863ac
      Ingo Molnar authored
      Add ptr_to_compat() to ia64 - needed by the robust-futex code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify PFN_* macros · 22a9835c
      Dave Hansen authored
      Just about every architecture defines some macros to do operations on pfns.
       They're all virtually identical.  This patch consolidates all of them.
      
      One minor glitch is that at least i386 uses them in a very skeletal header
      file.  To keep away from #include dependency hell, I stuck the new
      definitions in a new, isolated header.
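
      The consolidated helpers are small; the new header looks roughly like this
      (a sketch from memory, not a verbatim quote):

      	#define PFN_ALIGN(x)	(((unsigned long)(x) + (PAGE_SIZE - 1)) & PAGE_MASK)
      	#define PFN_UP(x)	(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
      	#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)
      	#define PFN_PHYS(x)	((x) << PAGE_SHIFT)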
      
      Of all of the implementations, sh64 is the only one that varied a bit.  It
      used some masks to ensure that any sign-extension got ripped away before
      the arithmetic is done.  This has been posted to the sh64 maintainers and
      the development list.
      
      Compiles on x86, x86_64, ia64 and ppc64.
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] uninline zone helpers · 95144c78
      KAMEZAWA Hiroyuki authored
      The helper functions behind for_each_online_pgdat()/for_each_zone() look
      too big to be inlined.  The speed of the helpers themselves is not very
      important (the loop bodies they drive tend to do more work than they do).

      This patch moves the helper functions out of line.

      	inline		out-of-line
      .text   005c0680        005bf6a0

      0x5c0680 - 0x5bf6a0 = 0xfe0, i.e. about 4 Kbytes of text saved.
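
      The out-of-lined helpers end up in mm/mmzone.c and look roughly like this
      (a sketch, not a verbatim quote of the patch):

      	struct pglist_data *first_online_pgdat(void)
      	{
      		return NODE_DATA(first_online_node);
      	}

      	struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
      	{
      		int nid = next_online_node(pgdat->node_id);

      		if (nid == MAX_NUMNODES)
      			return NULL;
      		return NODE_DATA(nid);
      	}

      	/* next zone, walking across into the next online pgdat */
      	struct zone *next_zone(struct zone *zone)
      	{
      		pg_data_t *pgdat = zone->zone_pgdat;

      		if (zone < pgdat->node_zones + MAX_NR_ZONES - 1)
      			return zone + 1;
      		pgdat = next_online_pgdat(pgdat);
      		return pgdat ? pgdat->node_zones : NULL;
      	}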
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] for_each_online_pgdat: remove pgdat_list · ae0f15fb
      KAMEZAWA Hiroyuki authored
      Now that for_each_online_pgdat() is used, pgdat_list is no longer
      necessary.  This patch removes it.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] for_each_online_pgdat: remove sorting pgdat · 3571761f
      KAMEZAWA Hiroyuki authored
      Because pgdat_list was (by default) linked in *reverse* order, some
      architectures had to sort it themselves.

      for_each_pgdat() is gone; for_each_online_pgdat() uses node_online_map,
      which doesn't need to be sorted.

      This patch removes the code for sorting pgdats.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] for_each_online_pgdat: renaming for_each_pgdat · ec936fc5
      KAMEZAWA Hiroyuki authored
      Replace for_each_pgdat() with for_each_online_pgdat().
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] for_each_online_pgdat: for_each_bootmem · 679bc9fb
      KAMEZAWA Hiroyuki authored
      Add a list_head to bootmem_data_t and make the bootmem nodes use it.  The
      bootmem list is sorted by node_boot_start.

      Only nodes for which init_bootmem() is called are linked to the list.
      (i386 allocates bootmem only from one node (0), not from all online nodes.)
      
      A summary:
       1. for_each_online_pgdat() traverses all *online* nodes.
       2. alloc_bootmem() allocates memory only from initialized-for-bootmem nodes.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] define for_each_online_pgdat · 8357f869
      KAMEZAWA Hiroyuki authored
      This patch defines for_each_online_pgdat() as a replacement for
      for_each_pgdat().

      Online nodes are already managed by node_online_map, but for_each_pgdat()
      uses pgdat_link to iterate over all nodes (pgdats).  This means the
      management structure for online pgdats is duplicated.

      I think using node_online_map for for_each_pgdat() is simpler and saner
      than using pgdat_link.  The new macro is named for_each_online_pgdat().
      A following patch will fix the callers of for_each_pgdat().

      The bootmem allocator uses for_each_pgdat() before pgdat initialization; I
      don't think that is sane.  A following patch will fix it.
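
      The new macro itself is a plain iteration over the online-node helpers;
      roughly (a sketch):

      	#define for_each_online_pgdat(pgdat)			\
      		for (pgdat = first_online_pgdat();		\
      		     pgdat;					\
      		     pgdat = next_online_pgdat(pgdat))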
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] remove zone_mem_map · a0140c1d
      KAMEZAWA Hiroyuki authored
      This patch removes zone_mem_map.

      pfn_to_page() uses the pgdat, while page_to_pfn() uses the zone and is the
      only user of zone_mem_map.  By modifying page_to_pfn() to use the pgdat
      instead of the zone, we can remove zone_mem_map.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: ia64 pfn_to_page · 0ecd702b
      KAMEZAWA Hiroyuki authored
      ia64 has the special config option CONFIG_VIRTUAL_MEM_MAP.
      (Is CONFIG_DISCONTIGMEM=y with CONFIG_VIRTUAL_MEM_MAP unset a bug?)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: xtensa pfn_to_page · 655a0443
      KAMEZAWA Hiroyuki authored
      xtensa can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Chris Zankel <chris@zankel.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: v850 pfn_to_page · e6009f1b
      KAMEZAWA Hiroyuki authored
      v850 can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: uml pfn_to_page · 9828c185
      KAMEZAWA Hiroyuki authored
      UML can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: sparc pfn_to_page · 064b2a07
      KAMEZAWA Hiroyuki authored
      sparc can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: sh64 pfn_to_page · 309d34bb
      KAMEZAWA Hiroyuki authored
      sh64 can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Richard Curnow <rc@rc0.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: sh pfn_to_page · 104b8dea
      KAMEZAWA Hiroyuki authored
      sh can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: s390 pfn_to_page · aed63043
      KAMEZAWA Hiroyuki authored
      s390 can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: ppc pfn_to_page · f68d4c99
      KAMEZAWA Hiroyuki authored
      PPC can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: parisc pfn_to_page · 0d833b41
      KAMEZAWA Hiroyuki authored
      PARISC can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: mips pfn_to_page · a02036e7
      KAMEZAWA Hiroyuki authored
      MIPS can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: m32r pfn_to_page · 7126cffe
      KAMEZAWA Hiroyuki authored
      m32r can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hirokazu Takata <takata.hirokazu@renesas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: h8300 pfn_to_page · dd6cc763
      KAMEZAWA Hiroyuki authored
      H8300 can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: FRV pfn_to_page · 5cdac7ca
      KAMEZAWA Hiroyuki authored
      FRV can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: cris pfn_to_page · bb872f78
      KAMEZAWA Hiroyuki authored
      cris can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: arm26 pfn_to_page · 89fccaf2
      KAMEZAWA Hiroyuki authored
      arm26 can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Ian Molton <spyro@f2s.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: arm pfn_to_page · 7eb98a2f
      KAMEZAWA Hiroyuki authored
      ARM can use generic funcs.
      PFN_TO_NID, LOCAL_MAP_NR are defined by sub-archs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: alpha pfn_to_page · 1c05dda2
      KAMEZAWA Hiroyuki authored
      Alpha can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: powerpc pfn_to_page · 659e3505
      KAMEZAWA Hiroyuki authored
      PowerPC can use generic ones.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: x86_64 pfn_to_page · dc8ecb43
      KAMEZAWA Hiroyuki authored
      x86_64 can use generic funcs.
      For DISCONTIGMEM, CONFIG_OUT_OF_LINE_PFN_TO_PAGE is selected.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: i386 pfn_to_page · ad658b38
      KAMEZAWA Hiroyuki authored
      i386 can use generic funcs.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] unify pfn_to_page: generic functions · a117e66e
      KAMEZAWA Hiroyuki authored
      There are three memory models: FLATMEM, DISCONTIGMEM and SPARSEMEM.
      Each arch has its own page_to_pfn() and pfn_to_page() for each model,
      but most of them can use the same arithmetic.

      This patch adds asm-generic/memory_model.h, which includes generic
      page_to_pfn() and pfn_to_page() definitions for each memory model.

      When CONFIG_OUT_OF_LINE_PFN_TO_PAGE=y, out-of-line functions are
      used instead of macros.  This is enabled by some archs and reduces
      text size.
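
      For the simplest model the shared arithmetic is just an offset into
      mem_map; a sketch of the FLATMEM case (assuming ARCH_PFN_OFFSET is the
      arch's first valid pfn, 0 on most architectures):

      	/* asm-generic/memory_model.h, FLATMEM case (sketch) */
      	#define pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))
      	#define page_to_pfn(page)	((unsigned long)((page) - mem_map) + \
      					 ARCH_PFN_OFFSET)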
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ian Molton <spyro@f2s.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Hirokazu Takata <takata.hirokazu@renesas.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
      Cc: Richard Curnow <rc@rc0.org.uk>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] x86: don't use cpuid.2 to determine cache info if cpuid.4 is supported · b06be912
      Shaohua Li authored
      Don't use cpuid.2 to determine cache info if cpuid.4 is supported.  The
      exception is the P4 trace cache: we always use cpuid.2 to get the trace
      cache on P4.
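
      For reference, cpuid.4 ("deterministic cache parameters") enumerates each
      cache level directly instead of going through the cpuid.2 descriptor
      table.  A hedged userspace sketch of that enumeration, using GCC's
      cpuid.h:

      	#include <cpuid.h>
      	#include <stdio.h>

      	int main(void)
      	{
      		for (unsigned int i = 0; ; i++) {
      			unsigned int eax, ebx, ecx, edx;

      			__cpuid_count(4, i, eax, ebx, ecx, edx);
      			unsigned int type = eax & 0x1f;	/* 0: done */
      			if (!type)
      				break;
      			unsigned int level = (eax >> 5) & 0x7;
      			unsigned int line  = (ebx & 0xfff) + 1;
      			unsigned int parts = ((ebx >> 12) & 0x3ff) + 1;
      			unsigned int ways  = ((ebx >> 22) & 0x3ff) + 1;
      			unsigned int sets  = ecx + 1;

      			printf("L%u (type %u): %u bytes\n", level, type,
      			       line * parts * ways * sets);
      		}
      		return 0;
      	}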
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: fix group power for allnodes_domains · 08069033
      Siddha, Suresh B authored
      The current sched-group power calculation for allnodes_domains is wrong.
      We should really be using the cumulative power of the physical packages in
      that group (similar to the calculation in node_domains).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: new sched domain for representing multi-core · 1e9f28fa
      Siddha, Suresh B authored
      Add a new sched domain for representing multi-core chips with caches
      shared between cores.  Consider a dual-package system, each package
      containing two cores with the last-level cache shared between the cores
      within a package.  With this patch, two runnable processes will be
      scheduled on different packages.

      On such systems, with this patch we have observed an 8% performance
      improvement with the specJBB benchmark (2 warehouses) and a 35%
      improvement with CFP2000 rate (with 2 users).

      This new domain will come into play only on multi-core systems with shared
      caches.  On other systems it will be removed by the domain-degeneration
      code.  The new domain can also be used for implementing a power-savings
      policy (see the OLS 2005 CMP kernel scheduler paper for more details; I
      will post another patch for the power-savings policy soon).

      Most of the arch/* file changes are for the cpu_coregroup_map()
      implementation.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Small schedule() optimization · 77e4bfbc
      Andreas Mohr authored
      A small schedule() micro-optimization.
      Signed-off-by: Andreas Mohr <andi@lisas.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: fix task interactivity calculation · 013d3868
      Martin Andersson authored
      A truncation error in kernel/sched.c is triggered when the nice value is
      negative.  The affected code is used in the TASK_INTERACTIVE macro.
      
      The code is:
      #define SCALE(v1,v1_max,v2_max) \
      	(v1) * (v2_max) / (v1_max)
      
      which is used in this way:
      SCALE(TASK_NICE(p), 40, MAX_BONUS)
      
      Comments in the code says:
        * This part scales the interactivity limit depending on niceness.
        *
        * We scale it linearly, offset by the INTERACTIVE_DELTA delta.
        * Here are a few examples of different nice levels:
        *
        *  TASK_INTERACTIVE(-20): [1,1,1,1,1,1,1,1,1,0,0]
        *  TASK_INTERACTIVE(-10): [1,1,1,1,1,1,1,0,0,0,0]
        *  TASK_INTERACTIVE(  0): [1,1,1,1,0,0,0,0,0,0,0]
        *  TASK_INTERACTIVE( 10): [1,1,0,0,0,0,0,0,0,0,0]
        *  TASK_INTERACTIVE( 19): [0,0,0,0,0,0,0,0,0,0,0]
        *
        * (the X axis represents the possible -5 ... 0 ... +5 dynamic
        *  priority range a task can explore, a value of '1' means the
        *  task is rated interactive.)
      
      However, the current code does not scale it linearly, and the result
      differs from the given examples.  If the mathematical "floor" function is
      used when the nice value is negative, instead of the truncation one gets
      with integer division, the result conforms to the documentation.
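
      The difference is easy to reproduce in isolation: C integer division
      truncates toward zero, so e.g. -7/2 is -3 rather than floor(-3.5) = -4.
      A standalone sketch comparing the two (not the actual kernel fix):

      	#include <stdio.h>

      	#define MAX_BONUS 10

      	/* truncating SCALE, as in kernel/sched.c */
      	#define SCALE(v1, v1_max, v2_max)  ((v1) * (v2_max) / (v1_max))

      	/* flooring variant: round the quotient toward minus infinity */
      	static int floor_div(int a, int b)
      	{
      		int q = a / b, r = a % b;
      		return (r != 0 && (r < 0) != (b < 0)) ? q - 1 : q;
      	}

      	int main(void)
      	{
      		for (int nice = -20; nice <= 19; nice++)
      			printf("%3d: trunc %3d, floor %3d\n", nice,
      			       SCALE(nice, 40, MAX_BONUS),
      			       floor_div(nice * MAX_BONUS, 40));
      		return 0;
      	}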
      
      Output of TASK_INTERACTIVE when using the kernel code:
      nice    dynamic priorities
      -20     1     1     1     1     1     1     1     1     1     0     0
      -19     1     1     1     1     1     1     1     1     0     0     0
      -18     1     1     1     1     1     1     1     1     0     0     0
      -17     1     1     1     1     1     1     1     1     0     0     0
      -16     1     1     1     1     1     1     1     1     0     0     0
      -15     1     1     1     1     1     1     1     0     0     0     0
      -14     1     1     1     1     1     1     1     0     0     0     0
      -13     1     1     1     1     1     1     1     0     0     0     0
      -12     1     1     1     1     1     1     1     0     0     0     0
      -11     1     1     1     1     1     1     0     0     0     0     0
      -10     1     1     1     1     1     1     0     0     0     0     0
        -9     1     1     1     1     1     1     0     0     0     0     0
        -8     1     1     1     1     1     1     0     0     0     0     0
        -7     1     1     1     1     1     0     0     0     0     0     0
        -6     1     1     1     1     1     0     0     0     0     0     0
        -5     1     1     1     1     1     0     0     0     0     0     0
        -4     1     1     1     1     1     0     0     0     0     0     0
        -3     1     1     1     1     0     0     0     0     0     0     0
        -2     1     1     1     1     0     0     0     0     0     0     0
        -1     1     1     1     1     0     0     0     0     0     0     0
        0      1     1     1     1     0     0     0     0     0     0     0
        1      1     1     1     1     0     0     0     0     0     0     0
        2      1     1     1     1     0     0     0     0     0     0     0
        3      1     1     1     1     0     0     0     0     0     0     0
        4      1     1     1     0     0     0     0     0     0     0     0
        5      1     1     1     0     0     0     0     0     0     0     0
        6      1     1     1     0     0     0     0     0     0     0     0
        7      1     1     1     0     0     0     0     0     0     0     0
        8      1     1     0     0     0     0     0     0     0     0     0
        9      1     1     0     0     0     0     0     0     0     0     0
      10      1     1     0     0     0     0     0     0     0     0     0
      11      1     1     0     0     0     0     0     0     0     0     0
      12      1     0     0     0     0     0     0     0     0     0     0
      13      1     0     0     0     0     0     0     0     0     0     0
      14      1     0     0     0     0     0     0     0     0     0     0
      15      1     0     0     0     0     0     0     0     0     0     0
      16      0     0     0     0     0     0     0     0     0     0     0
      17      0     0     0     0     0     0     0     0     0     0     0
      18      0     0     0     0     0     0     0     0     0     0     0
      19      0     0     0     0     0     0     0     0     0     0     0
      
      Output of TASK_INTERACTIVE when using "floor"
      nice    dynamic priorities
      -20     1     1     1     1     1     1     1     1     1     0     0
      -19     1     1     1     1     1     1     1     1     1     0     0
      -18     1     1     1     1     1     1     1     1     1     0     0
      -17     1     1     1     1     1     1     1     1     1     0     0
      -16     1     1     1     1     1     1     1     1     0     0     0
      -15     1     1     1     1     1     1     1     1     0     0     0
      -14     1     1     1     1     1     1     1     1     0     0     0
      -13     1     1     1     1     1     1     1     1     0     0     0
      -12     1     1     1     1     1     1     1     0     0     0     0
      -11     1     1     1     1     1     1     1     0     0     0     0
      -10     1     1     1     1     1     1     1     0     0     0     0
        -9     1     1     1     1     1     1     1     0     0     0     0
        -8     1     1     1     1     1     1     0     0     0     0     0
        -7     1     1     1     1     1     1     0     0     0     0     0
        -6     1     1     1     1     1     1     0     0     0     0     0
        -5     1     1     1     1     1     1     0     0     0     0     0
        -4     1     1     1     1     1     0     0     0     0     0     0
        -3     1     1     1     1     1     0     0     0     0     0     0
        -2     1     1     1     1     1     0     0     0     0     0     0
        -1     1     1     1     1     1     0     0     0     0     0     0
         0     1     1     1     1     0     0     0     0     0     0     0
         1     1     1     1     1     0     0     0     0     0     0     0
         2     1     1     1     1     0     0     0     0     0     0     0
         3     1     1     1     1     0     0     0     0     0     0     0
         4     1     1     1     0     0     0     0     0     0     0     0
         5     1     1     1     0     0     0     0     0     0     0     0
         6     1     1     1     0     0     0     0     0     0     0     0
         7     1     1     1     0     0     0     0     0     0     0     0
         8     1     1     0     0     0     0     0     0     0     0     0
         9     1     1     0     0     0     0     0     0     0     0     0
        10     1     1     0     0     0     0     0     0     0     0     0
        11     1     1     0     0     0     0     0     0     0     0     0
        12     1     0     0     0     0     0     0     0     0     0     0
        13     1     0     0     0     0     0     0     0     0     0     0
        14     1     0     0     0     0     0     0     0     0     0     0
        15     1     0     0     0     0     0     0     0     0     0     0
        16     0     0     0     0     0     0     0     0     0     0     0
        17     0     0     0     0     0     0     0     0     0     0     0
        18     0     0     0     0     0     0     0     0     0     0     0
        19     0     0     0     0     0     0     0     0     0     0     0
      Signed-off-by: Martin Andersson <martin.andersson@control.lth.se>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Williams <pwil3058@bigpond.net.au>
      Cc: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>