25 Jan, 2008 · 40 commits
    • sched: get rid of 'new_cpu' in try_to_wake_up() · 5d2f5a61
      Dmitry Adamushko authored
      Clean-up try_to_wake_up().
      
      Get rid of the 'new_cpu' variable in try_to_wake_up() [ that is, one
      #ifdef section fewer ].  Also remove a few redundant blank lines.
      Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: no need for 'affine wakeup' balancing · 9ec3b77e
      Dmitry Adamushko authored
      No need to do a check for 'affine wakeup and passive balancing possibilities'
      in select_task_rq_fair() when task_cpu(p) == this_cpu.
      
      I guess this part got missed upon the introduction of per-sched_class
      select_task_rq() in try_to_wake_up().
      Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: whitespace cleanups in topology.h · 32525d02
      Ingo Molnar authored
      whitespace cleanups in topology.h.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: reactivate fork balancing · 52d85343
      Ingo Molnar authored
      reactivate fork balancing.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: add credits for RT balancing improvements · b9131769
      Ingo Molnar authored
      add credits for RT balancing improvements.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: style cleanup, #2 · 0eab9146
      Ingo Molnar authored
      style cleanup of various changes that were done recently.
      
      no code changed:
      
            text    data     bss     dec     hex filename
           26399    2578      48   29025    7161 sched.o.before
           26399    2578      48   29025    7161 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: remove unused JIFFIES_TO_NS() macro · d7876a08
      Ingo Molnar authored
      remove unused JIFFIES_TO_NS() macro.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: fix sched_rt.c:join/leave_domain · bdd7c81b
      Ingo Molnar authored
      fix a build bug in sched_rt.c:join/leave_domain and include them
      only on SMP builds.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: only balance our RT tasks within our domain · 637f5085
      Gregory Haskins authored
      We move the rt-overload data: it is the first piece of global state to
      be reclassified as per-domain.  This limits the scope of overload-related
      cache-line bouncing to a specified partition instead of affecting all
      cpus in the system.
      
      Finally, we limit the scope of find_lowest_cpu searches to the domain
      instead of the entire system.  Note that we would always respect domain
      boundaries even without this patch, but we would first scan potentially
      all cpus before whittling the list down.  Now we can avoid looking at
      RQs that are out of scope, again reducing cache-line hits.
      
      Note: In some cases, task->cpus_allowed will effectively reduce our search
      to within our domain.  However, I believe there are cases where the
      cpus_allowed mask may be all ones and therefore we err on the side of
      caution.  If it can be optimized later, so be it.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
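      [ A minimal standalone C sketch of the rescoped overload state above;
        the rto_mask/rto_count names follow this patch, the root_domain type
        is introduced in the next entry below, and everything else is
        illustrative rather than the kernel implementation: ]

          #include <stdint.h>

          struct root_domain {
              uint64_t span;      /* CPUs in this partition (bit per CPU) */
              uint64_t rto_mask;  /* which of them are RT-overloaded */
              int      rto_count; /* cheap "any overload here?" check */
          };

          static void rt_set_overload(struct root_domain *rd, int cpu)
          {
              rd->rto_mask |= (uint64_t)1 << cpu;
              rd->rto_count++;
          }

          static void rt_clear_overload(struct root_domain *rd, int cpu)
          {
              rd->rto_count--;
              rd->rto_mask &= ~((uint64_t)1 << cpu);
          }

          /* A pull scan now only walks rd->rto_mask & rd->span, so it
           * never touches runqueues that belong to another partition. */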
    • sched: add sched-domain roots · 57d885fe
      Gregory Haskins authored
      We add the notion of a root-domain which will be used later to rescope
      global variables to per-domain variables.  Each exclusive cpuset
      essentially defines an island domain by fully partitioning the member cpus
      from any other cpuset.  However, we currently still maintain some
      policy/state as global variables which transcend all cpusets.  Consider,
      for instance, rt-overload state.
      
      Whenever a new exclusive cpuset is created, we also create a new
      root-domain object and move each cpu member to the root-domain's span.
      By default the system creates a single root-domain with all cpus as
      members (mimicking the global state we have today).
      
      We add some plumbing for storing class-specific data in our root-domain.
      Whenever an RQ switches root-domains (because of repartitioning) we
      give each sched_class the opportunity to remove any state from its old
      domain and add state to the new one.  This logic doesn't have any clients
      yet, but it will later in the series.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Christoph Lameter <clameter@sgi.com>
      CC: Paul Jackson <pj@sgi.com>
      CC: Simon Derr <simon.derr@bull.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
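      [ A rough standalone C sketch of the root-domain plumbing described
        above; the span/online fields and the attach step mirror the text,
        while the exact layout and locking are omitted as illustrative
        simplifications: ]

          #include <stdint.h>

          struct root_domain {
              int      refcount; /* runqueues attached to this island */
              uint64_t span;     /* member CPUs of the exclusive cpuset */
              uint64_t online;   /* subset of span currently online */
              /* class-specific state (e.g. rt-overload) is added later */
          };

          struct rq {
              struct root_domain *rd; /* island this runqueue belongs to */
          };

          static void rq_attach_root(struct rq *rq, struct root_domain *rd)
          {
              if (rq->rd)
                  rq->rd->refcount--; /* sched_class leave-domain hooks here */
              rq->rd = rd;
              rd->refcount++;         /* sched_class join-domain hooks here */
          }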
    • sched: clean up schedule_balance_rt() · 7f51f298
      Ingo Molnar authored
      clean up schedule_balance_rt().
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up pull_rt_task() · 80bf3171
      Ingo Molnar authored
      clean up pull_rt_task().
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: remove leftover debugging · 00597c3e
      Ingo Molnar authored
      remove leftover debugging.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: remove rt_overload() · 6e1938d3
      Ingo Molnar authored
      remove rt_overload() - it's an unnecessary indirection.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up kernel/sched_rt.c · 84de4274
      Ingo Molnar authored
      clean up whitespace damage and missing comments in kernel/sched_rt.c.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up overlong line in kernel/sched_debug.c · deeeccd4
      Ingo Molnar authored
      clean up overlong line in kernel/sched_debug.c.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up find_lock_lowest_rq() · 4df64c0b
      Ingo Molnar authored
      clean up find_lock_lowest_rq().
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: clean up pick_next_highest_task_rt() · 79064fbf
      Ingo Molnar authored
      clean up pick_next_highest_task_rt().
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: RT-balance on new task · 0d1311a5
      Steven Rostedt authored
      rt-balance when creating new tasks.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: RT-balance, optimize cpu search · 610bf056
      Steven Rostedt authored
      This patch removes several cpumask operations by keeping track
      of the first of the CPUs that is of the lowest priority. When
      the search for the lowest priority runqueue is completed, all
      the bits up to the first CPU with the lowest priority runqueue
      are cleared.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
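      [ A standalone C sketch of the mask-trimming idea above: after one
        scan finds the lowest priority and the first CPU holding it, all
        earlier bits can be dropped in a single operation. The cpu_prio()
        helper and the uint64_t cpumask are illustrative stand-ins: ]

          #include <stdint.h>

          extern int nr_cpus;
          extern int cpu_prio(int cpu); /* larger value = lower priority here */

          static int find_lowest_cpus(uint64_t *mask)
          {
              int lowest_prio = -1, lowest_cpu = -1;

              for (int cpu = 0; cpu < nr_cpus; cpu++) {
                  if (!(*mask & ((uint64_t)1 << cpu)))
                      continue;
                  int prio = cpu_prio(cpu);
                  if (prio > lowest_prio) {
                      lowest_prio = prio;       /* new lowest-priority CPU */
                      lowest_cpu = cpu;
                  } else if (prio < lowest_prio) {
                      *mask &= ~((uint64_t)1 << cpu); /* not a candidate */
                  }
              }
              if (lowest_cpu < 0)
                  return -1;
              /* One shot instead of per-CPU clears: every bit below the
               * first lowest-priority CPU is known to be worse. */
              *mask &= ~(((uint64_t)1 << lowest_cpu) - 1);
              return lowest_cpu;
          }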
    • sched: RT-balance, optimize · 06f90dbd
      Gregory Haskins authored
      We can cheaply track the number of bits set in the cpumask for the lowest
      priority CPUs.  Therefore, compute the mask's weight and use it to skip
      the optimal domain search logic when there is only one CPU available.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
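      [ A small C sketch of the single-CPU short-circuit described above;
        the domain-search helper is an illustrative stand-in: ]

          #include <stdint.h>

          extern int find_best_cpu_in_domains(uint64_t mask); /* costlier walk */

          static int pick_lowest_cpu(uint64_t lowest_mask)
          {
              int weight = __builtin_popcountll(lowest_mask);

              if (weight == 0)
                  return -1;                           /* no candidate */
              if (weight == 1)
                  return __builtin_ctzll(lowest_mask); /* only choice: skip search */
              return find_best_cpu_in_domains(lowest_mask);
          }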
    • sched: break out early if RT task cannot be migrated · 17b3279b
      Gregory Haskins authored
      We don't need to bother searching if the task cannot be migrated.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: RT-balance, avoid overloading · e1f47d89
      Steven Rostedt authored
      This patch changes the run-queue search for a waking RT task
      so that it tries to pick another runqueue if the currently running
      task is an RT task.
      
      The reason is that RT tasks behave differently from normal
      tasks. Preempting a normal task to run an RT task, in order to
      keep the RT task's cache hot, is fine, because the preempted
      non-RT task may wait on that same runqueue to run again, unless
      the migration thread comes along and pulls it off.
      
      RT tasks behave differently. If one is preempted, it makes
      an active effort to continue to run. So by having a high
      priority task preempt a lower priority RT task, that lower
      RT task will then quickly try to run on another runqueue.
      This will cause that lower RT task to replace its nice
      hot cache (and TLB) with a completely cold one, all in
      the hope that the new high priority RT task will keep
      its cache hot.
      
      Remember that this high priority RT task was just woken up.
      It has likely been sleeping for several milliseconds
      and will end up with a cold cache anyway. RT tasks run till
      they voluntarily stop or are preempted by a higher priority
      task. This means that it is unlikely that the woken RT task
      will have a hot cache to wake up to. So pushing off a lower
      RT task is just killing its cache for no good reason.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
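      [ A compact C sketch of the wakeup-time check described above; every
        helper here is an illustrative stand-in, not the kernel API: ]

          struct task;

          extern int curr_is_rt(int cpu);          /* is rq->curr an RT task? */
          extern int task_is_migratable(struct task *p);
          extern int find_lowest_rq_cpu(struct task *p); /* -1 if none better */

          static int select_task_rq_rt(struct task *p, int cpu)
          {
              /* Don't bounce a running RT task for a freshly woken one:
               * the waker's cache is cold anyway, so route it elsewhere. */
              if (curr_is_rt(cpu) && task_is_migratable(p)) {
                  int target = find_lowest_rq_cpu(p);
                  if (target >= 0)
                      return target;
              }
              return cpu; /* default: stay where we woke */
          }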
    • sched: wake-balance fixes · a22d7fc1
      Gregory Haskins authored
      We have logic to detect whether the system has migratable tasks, but we are
      not using it when deciding whether to push tasks away.  So we add support
      for considering this new information.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: optimize RT affinity · 6e1254d2
      Gregory Haskins authored
      The current code base assumes a relatively flat CPU/core topology and will
      route RT tasks to any CPU fairly equally.  In the real world, there are
      various topologies and affinities that govern where a task is best suited to
      run with the smallest amount of overhead.  NUMA and multi-core CPUs are
      prime examples of topologies that can impact cache performance.
      
      Fortunately, linux is already structured to represent these topologies via
      the sched_domains interface.  So we change our RT router to consult a
      combination of topology and affinity policy to best place tasks during
      migration.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
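      [ A standalone C sketch of the topology-aware pick described above:
        prefer the task's last CPU, then the closest candidate in the domain
        hierarchy, then anything at all. The domain helpers and uint64_t
        cpumasks are illustrative stand-ins for the sched_domains interface: ]

          #include <stdint.h>

          struct task;

          extern int task_cpu(struct task *p);
          extern int nr_domain_levels(int cpu);
          extern uint64_t domain_span(int cpu, int level); /* CPUs at level */

          static int pick_closest_cpu(struct task *p, uint64_t lowest_mask)
          {
              int last = task_cpu(p);

              /* Best case: the last CPU is itself a candidate. */
              if (lowest_mask & ((uint64_t)1 << last))
                  return last;

              /* Walk outward: smaller domains first, so a cache-sharing
               * sibling wins over a remote node. */
              for (int level = 0; level < nr_domain_levels(last); level++) {
                  uint64_t nearby = domain_span(last, level) & lowest_mask;
                  if (nearby)
                      return __builtin_ctzll(nearby);
              }

              /* Fall back to any candidate at all. */
              return lowest_mask ? __builtin_ctzll(lowest_mask) : -1;
          }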
    • sched: pre-route RT tasks on wakeup · 318e0893
      Gregory Haskins authored
      In the original patch series that Steven Rostedt and I worked on together,
      we each took a different approach to the low-priority wakeup path.  I utilized
      a "pre-routing" approach (push the task away to a less important RQ before
      activating), while Steve utilized a "post-routing" approach.
      my approach is that you avoid the overhead of a wasted activate/deactivate
      cycle and peripherally related burdens.  The advantage of Steve's method is
      that it neatly solves an issue preventing a "pull" optimization from being
      deployed.
      
      In the end, we ended up deploying Steve's idea.  But it later dawned on me
      that we could get the best of both worlds by deploying both ideas together,
      albeit slightly modified.
      
      The idea is simple:  Use a "light-weight" lookup for pre-routing, since we
      only need to approximate a good home for the task.  And we also retain the
      post-routing push logic to clean up any inaccuracies ("priority
      mistargeting") caused by the lightweight lookup.  Most of the
      time, the pre-routing should work and yield lower overhead.  In the cases
      where it doesn't, the post-router will bat cleanup.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
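      [ A small C sketch of combining the two ideas above; every helper is
        an illustrative stand-in: ]

          struct task;

          extern int  approx_lowest_rq(struct task *p); /* light-weight lookup */
          extern void activate_task_on(struct task *p, int cpu);
          extern void push_rt_tasks_on(int cpu);        /* post-routing pass */

          static void wake_up_rt(struct task *p, int waking_cpu)
          {
              int cpu = approx_lowest_rq(p);

              if (cpu < 0)
                  cpu = waking_cpu;     /* no better guess: stay local */

              activate_task_on(p, cpu); /* pre-route: skip the wasted
                                         * activate/deactivate cycle */
              push_rt_tasks_on(cpu);    /* clean up any mistargeting */
          }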
    • sched: RT balancing: include current CPU · 2de0b463
      Gregory Haskins authored
      It doesn't hurt if we allow the current CPU to be included in the
      search.  We will simply skip it later if the current CPU turns out
      to be the lowest.
      
      We will use this later in the series.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: break out search for RT tasks · 07b4032c
      Gregory Haskins authored
      Isolate the search logic into a function so that it can be used later
      in places other than find_lock_lowest_rq().
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: de-SCHED_OTHER-ize the RT path · e7693a36
      Gregory Haskins authored
      The current wake-up code path tries to determine if it can optimize the
      wake-up to "this_cpu" by performing load calculations.  The problem is that
      these calculations are only relevant to SCHED_OTHER tasks, where load is king.
      For RT tasks, priority is king.  So the load calculation is completely wasted
      bandwidth.
      
      Therefore, we create a new sched_class interface to help with
      pre-wakeup routing decisions and move the load calculation into the
      CFS class.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
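      [ A skeletal C sketch of the per-class hook described above; the
        select_task_rq name matches the commit text, while the trimmed
        struct is an illustrative stand-in for the kernel's sched_class: ]

          struct task;

          struct sched_class {
              /* pre-wakeup routing decision: which CPU should run p? */
              int (*select_task_rq)(struct task *p, int this_cpu);
              /* ... enqueue/dequeue/pick_next etc. elided ... */
          };

          extern int select_task_rq_fair(struct task *p, int this_cpu); /* load-based */
          extern int select_task_rq_rt(struct task *p, int this_cpu);   /* prio-based */

          static const struct sched_class fair_sched_class = {
              .select_task_rq = select_task_rq_fair,
          };

          static const struct sched_class rt_sched_class = {
              .select_task_rq = select_task_rq_rt,
          };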
    • sched: clean up this_rq use in kernel/sched_rt.c · 697f0a48
      Gregory Haskins authored
      "this_rq" is normally used to denote the RQ on the current cpu
      (i.e. "cpu_rq(this_cpu)").  So clean up the usage of this_rq to be
      more consistent with the rest of the code.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: add RT-balance cpu-weight · 73fe6aae
      Gregory Haskins authored
      Some RT tasks (particularly kthreads) are bound to one specific CPU.
      It is fairly common for two or more bound tasks to get queued up at the
      same time.  Consider, for instance, softirq_timer and softirq_sched.  A
      timer goes off in an ISR which schedules softirq_timer to run at RT50.
      Then the timer handler determines that it's time to smp-rebalance the
      system so it schedules softirq_sched to run.  So we are in a situation
      where we have two RT50 tasks queued, and the system will go into
      rt-overload condition to request other CPUs for help.
      
      This causes two problems in the current code:
      
      1) If a high-priority bound task and a low-priority unbound task queue
         up behind the running task, we will fail to ever relocate the unbound
         task because we terminate the search on the first unmovable task.
      
      2) We spend precious futile cycles in the fast-path trying to pull
         overloaded tasks over.  It is therefore optimal to strive to avoid the
         overhead altogether if we can cheaply detect the condition before
         overload even occurs.
      
      This patch tries to achieve this optimization by utilizing the hamming
      weight of the task->cpus_allowed mask.  A weight of 1 indicates that
      the task cannot be migrated.  We will then utilize this information to
      skip non-migratable tasks and to eliminate unnecessary rebalance attempts.
      
      We introduce a per-rq variable to count the number of migratable tasks
      that are currently running.  We only go into overload if we have more
      than one rt task, AND at least one of them is migratable.
      
      In addition, we introduce a per-task variable to cache the cpus_allowed
      weight, since the hamming calculation is probably relatively expensive.
      We only update the cached value when the mask is updated which should be
      relatively infrequent, especially compared to scheduling frequency
      in the fast path.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
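      [ A standalone C sketch of the cached weight and the overload test
        described above; field names follow the commit text, the uint64_t
        cpumask and popcount builtin are illustrative simplifications: ]

          #include <stdint.h>

          struct task {
              uint64_t cpus_allowed;        /* affinity mask (bit per CPU) */
              unsigned int nr_cpus_allowed; /* cached hamming weight */
          };

          struct rt_rq {
              unsigned int rt_nr_running;   /* queued RT tasks */
              unsigned int rt_nr_migratory; /* of those, how many can move */
          };

          /* Recompute the weight only when the mask actually changes,
           * which is rare compared to scheduling in the fast path. */
          static void set_cpus_allowed_rt(struct task *p, uint64_t new_mask)
          {
              p->cpus_allowed = new_mask;
              p->nr_cpus_allowed = (unsigned int)__builtin_popcountll(new_mask);
          }

          /* Overload needs more than one queued RT task AND at least one
           * that is actually migratable (weight > 1). */
          static int rt_overloaded(const struct rt_rq *rq)
          {
              return rq->rt_nr_running > 1 && rq->rt_nr_migratory > 0;
          }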
    • sched: disable standard balancer for RT tasks · c7a1e46a
      Steven Rostedt authored
      Since we now take an active approach to load balancing, we don't need to
      balance RT tasks via the normal task balancer.  In fact, this code was
      found to pull RT tasks away from the CPUs that the active movement had
      placed them on, resulting in large latencies.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: push RT tasks from overloaded CPUs · 4642dafd
      Steven Rostedt authored
      This patch adds pushing of RT tasks away from an overloaded runqueue
      while tasks (most likely RT tasks) are being added to that run queue.
      
      TODO: We don't cover the case of waking new RT tasks (yet).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: pull RT tasks from overloaded runqueues · f65eda4f
      Steven Rostedt authored
      This patch adds the algorithm to pull tasks from RT overloaded runqueues.
      
      When an RT pull is initiated, all overloaded runqueues are examined for
      an RT task that is higher in prio than the highest prio task queued on
      the target runqueue.  If such a task is found, it is pulled to the
      target runqueue.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
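      [ A condensed C sketch of the pull pass described above; every type
        and helper here is an illustrative stand-in, not the kernel API: ]

          struct task;
          struct rq { int highest_prio; };

          extern int nr_cpus;
          extern struct rq *cpu_rq(int cpu);
          extern int  rt_overloaded(struct rq *rq);
          extern struct task *pick_highest_pushable_task(struct rq *rq);
          extern int  task_prio(struct task *p); /* lower value = higher prio */
          extern void migrate_task(struct task *p, struct rq *dst);

          static void pull_rt_task(struct rq *this_rq)
          {
              for (int cpu = 0; cpu < nr_cpus; cpu++) {
                  struct rq *src = cpu_rq(cpu);

                  if (src == this_rq || !rt_overloaded(src))
                      continue;

                  /* Pull only when the candidate outranks everything
                   * already queued on the target runqueue. */
                  struct task *p = pick_highest_pushable_task(src);
                  if (p && task_prio(p) < this_rq->highest_prio)
                      migrate_task(p, this_rq);
              }
          }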
    • sched: add rt-overload tracking · 4fd29176
      Steven Rostedt authored
      This patch adds an RT overload accounting system. When a runqueue has
      more than one RT task queued, it is marked as overloaded; that is, it
      becomes a candidate to have RT tasks pulled from it.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
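      [ A small C sketch of the marking rule above; the more-than-one-task
        threshold comes from the text, the helpers are illustrative: ]

          struct rt_rq { unsigned int rt_nr_running; };

          extern void rt_set_overload(int cpu);   /* mark as pull candidate */
          extern void rt_clear_overload(int cpu); /* no longer a candidate */

          static void inc_rt_tasks(struct rt_rq *rq, int cpu)
          {
              rq->rt_nr_running++;
              if (rq->rt_nr_running == 2)   /* just crossed the threshold */
                  rt_set_overload(cpu);
          }

          static void dec_rt_tasks(struct rt_rq *rq, int cpu)
          {
              rq->rt_nr_running--;
              if (rq->rt_nr_running == 1)   /* back to a single RT task */
                  rt_clear_overload(cpu);
          }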
    • sched: add RT task pushing · e8fa1362
      Steven Rostedt authored
      This patch adds an algorithm to push extra RT tasks off a run queue to
      other CPU runqueues.
      
      When more than one RT task is added to a run queue, this algorithm takes
      an assertive approach to push the RT tasks that are not running onto other
      run queues that have lower priority.  The way this works is that the highest
      RT task that is not running is looked at and we examine the runqueues on
      the CPUs in that task's affinity mask. We find the runqueue with the lowest
      prio in the CPU affinity of the picked task, and if it is lower in prio than
      the picked task, we push the task onto that CPU runqueue.
      
      We continue pushing RT tasks off the current runqueue until we don't push any
      more.  The algorithm stops when the next highest RT task can't preempt any
      other processes on other CPUS.
      
      TODO: The algorithm may stop when there are still RT tasks that can be
       migrated. Specifically, if the highest non-running RT task's CPU affinity
       is restricted to CPUs that are running higher priority tasks, there may
       be a lower priority task queued that has an affinity with a CPU that is
       running a lower priority task that it could be migrated to.  This
       patch set does not address this issue.
      
      Note: checkpatch reveals two over 80 character instances. I'm not sure
       that breaking them up will help visually, so I left them as is.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
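      [ A condensed C sketch of the push loop described above; every type
        and helper here is an illustrative stand-in, not the kernel API: ]

          struct task;
          struct rq { int highest_prio; };

          extern struct task *pick_next_highest_queued_rt(struct rq *rq);
          extern struct rq   *find_lowest_rq_in_affinity(struct task *p);
          extern int  task_prio(struct task *p); /* lower value = higher prio */
          extern void migrate_task(struct task *p, struct rq *dst);

          static void push_rt_tasks(struct rq *rq)
          {
              struct task *p;

              /* Keep pushing until the best queued, non-running RT task
               * could no longer preempt anything within its affinity. */
              while ((p = pick_next_highest_queued_rt(rq))) {
                  struct rq *lowest = find_lowest_rq_in_affinity(p);

                  if (!lowest || task_prio(p) >= lowest->highest_prio)
                      break; /* nothing preemptible: stop */
                  migrate_task(p, lowest);
              }
          }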
    • sched: track highest prio task queued · 764a9d6f
      Steven Rostedt authored
      This patch adds accounting to each runqueue to keep track of the
      highest prio task queued on the run queue. We only care about
      RT tasks, so if the run queue does not contain any active RT tasks
      its priority will be considered MAX_RT_PRIO.
      
      This information will be used for later patches.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: count # of queued RT tasks · 63489e45
      Steven Rostedt authored
      This patch adds accounting to keep track of the number of RT tasks running
      on a runqueue. This information will be used in later patches.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
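      [ A standalone C sketch covering the accounting added by the two
        patches above (the RT-task count and the highest queued prio);
        MAX_RT_PRIO and the field names follow the text, the dequeue
        fallback is a simplification of the real rescan: ]

          #define MAX_RT_PRIO 100 /* "no RT task queued" sentinel level */

          struct rt_rq {
              unsigned int rt_nr_running; /* number of queued RT tasks */
              int highest_prio;           /* best queued prio; lower = higher */
          };

          static void enqueue_rt(struct rt_rq *rq, int prio)
          {
              rq->rt_nr_running++;
              if (prio < rq->highest_prio)
                  rq->highest_prio = prio;
          }

          static void dequeue_rt(struct rt_rq *rq, int prio, int next_best)
          {
              rq->rt_nr_running--;
              if (rq->rt_nr_running == 0)
                  rq->highest_prio = MAX_RT_PRIO; /* no active RT tasks left */
              else if (prio == rq->highest_prio)
                  rq->highest_prio = next_best;   /* real code rescans here */
          }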
    • softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks · 82a1fcb9
      Ingo Molnar authored
      this patch extends the soft-lockup detector to automatically
      detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
      printed the following way:
      
       ------------------>
       INFO: task prctl:3042 blocked for more than 120 seconds.
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
       prctl         D fd5e3793     0  3042   2997
              f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
              f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
              f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
       Call Trace:
        [<c04883a5>] schedule_timeout+0x6d/0x8b
        [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
        [<c0133a76>] msleep+0x10/0x16
        [<c0138974>] sys_prctl+0x30/0x1e2
        [<c0104c52>] sysenter_past_esp+0x5f/0xa5
        =======================
       2 locks held by prctl/3042:
       #0:  (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
       #1:  (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
       <------------------
      
      the current default timeout is 120 seconds. Such messages are printed
      up to 10 times per bootup. If the system has crashed already then the
      messages are not printed.
      
      if lockdep is enabled then all held locks are printed as well.
      
      this feature is a natural extension to the softlockup-detector (kernel
      locked up without scheduling) and to the NMI watchdog (kernel locked up
      with IRQs disabled).
      
      [ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
      [ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
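      [ A compacted, standalone C sketch of the detection idea described
        above: a periodic scan reports TASK_UNINTERRUPTIBLE tasks whose
        context-switch count has not moved for a whole timeout window.
        The types and the scan driver here are illustrative stand-ins: ]

          #include <stdio.h>

          #define HUNG_TASK_TIMEOUT_SECS 120 /* default from the text above */
          #define MAX_WARNINGS_PER_BOOT   10 /* messages are rate-limited */

          struct task {
              const char   *comm;
              int           pid;
              int           uninterruptible;   /* in TASK_UNINTERRUPTIBLE? */
              unsigned long nvcsw;             /* context-switch count */
              unsigned long last_switch_count; /* snapshot from last scan */
          };

          static int warnings_issued;

          /* Called for every task once per HUNG_TASK_TIMEOUT_SECS. */
          static void check_hung_task(struct task *t)
          {
              if (!t->uninterruptible)
                  return;

              if (t->nvcsw != t->last_switch_count) {
                  t->last_switch_count = t->nvcsw; /* it ran; re-arm */
                  return;
              }

              if (warnings_issued++ < MAX_WARNINGS_PER_BOOT)
                  printf("INFO: task %s:%d blocked for more than %d seconds.\n",
                         t->comm, t->pid, HUNG_TASK_TIMEOUT_SECS);
          }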
    • cpu-hotplug: fix build on !CONFIG_SMP · d0d23b54
      Ingo Molnar authored
      fix build on !CONFIG_SMP.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>