Commit 9d89c257 authored by Yuyang Du, committed by Ingo Molnar

sched/fair: Rewrite runnable load and utilization average tracking

The idea of the runnable load average (letting runnable time contribute to
load weight) was proposed by Paul Turner and Ben Segall, and it is still
followed here. This rewrite aims to solve the following issues:

1. The cfs_rq's load average (namely runnable_load_avg and blocked_load_avg)
   is updated one entity at a time, which leaves the cfs_rq's load average
   stale or only partially updated: at any moment, only one entity is up to
   date; all other entities are effectively lagging behind. This is
   undesirable.

   To illustrate: if we have n runnable entities in the cfs_rq, then as time
   elapses their contributions inevitably go stale:

     t0: cfs_rq { e1_old, e2_old, ..., en_old }

   and when we update:

     t1: update e1, then we have cfs_rq { e1_new, e2_old, ..., en_old }

     t2: update e2, then we have cfs_rq { e1_old, e2_new, ..., en_old }

     ...

   We solve this by combining all runnable entities' load averages together
   in the cfs_rq's avg and updating the cfs_rq's avg as a whole. This works
   because the per-period update is linear: if we regard the update as a
   function, then:

   w * update(e) = update(w * e), and

   update(e1) + update(e2) = update(e1 + e2), hence

   w1 * update(e1) + w2 * update(e2) = update(w1 * e1 + w2 * e2)

   therefore, with this rewrite, we have an entirely up-to-date cfs_rq at
   the time we update it (a toy sketch of the linearity follows the
   illustration):

     t1: update cfs_rq { e1_new, e2_new, ..., en_new }

     t2: update cfs_rq { e1_new, e2_new, ..., en_new }

     ...
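   A toy user-space sketch of this linearity (illustrative only: the
   constant below is the kernel's one-period decay factor y = 0.5^(1/32)
   in 32-bit fixed point, i.e. runnable_avg_yN_inv[1], but the helper
   name and sample values are made up for the example):

     #include <stdio.h>
     #include <stdint.h>

     /* Decay a load sum by one 1024us period: multiply by y ~= 0.97857. */
     static uint64_t decay_one_period(uint64_t sum)
     {
             return (sum * 0xfa83b2daULL) >> 32;
     }

     int main(void)
     {
             uint64_t e1 = 40000, e2 = 25000;

             /* Decaying the combined sum equals (up to rounding) ... */
             printf("update(e1 + e2)         = %llu\n",
                    (unsigned long long)decay_one_period(e1 + e2));
             /* ... decaying each entity separately and then summing. */
             printf("update(e1) + update(e2) = %llu\n",
                    (unsigned long long)(decay_one_period(e1) +
                                         decay_one_period(e2)));
             return 0;
     }

   Because the update distributes over sums, one whole-cfs_rq update is
   equivalent to updating every entity individually.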

2. The top-level rq->cfs_rq and the task_groups' per-CPU cfs_rqs compute
   their load averages differently: they disagree on whether blocked_load_avg
   contributes to the load.

   The basic idea of the runnable load average (likewise for utilization)
   is that the blocked state is taken into account, as opposed to
   accounting only for the currently runnable state. The average should
   therefore include both the runnable/running and the blocked load
   averages. This rewrite does exactly that.

   In addition, we also combine the runnable/running and blocked averages
   of all entities into the cfs_rq's average and update them together at
   once, based on the fact that:

     update(runnable) + update(blocked) = update(runnable + blocked)

   This significantly reduces the code, as we no longer need to separately
   maintain and update the runnable/running load and the blocked load; a
   sketch follows.
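   Continuing the toy sketch from issue 1 (again with hypothetical helper
   names; the real code also tracks partial periods via period_contrib),
   a single combined sum covers both states in one pass:

     /* Reuses decay_one_period() from the sketch above. */
     static uint64_t update_one_period(uint64_t cfs_rq_sum,
                                       uint64_t runnable_weight)
     {
             /* Everyone's past contribution decays, blocked included. */
             cfs_rq_sum = decay_one_period(cfs_rq_sum);

             /* Only currently runnable weight accrues new contribution. */
             return cfs_rq_sum + runnable_weight * 1024;
     }

   A blocked entity's contribution thus keeps decaying inside the same
   combined sum; no separate blocked_load_avg needs to be maintained.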

3. The way a task_group entity's share is calculated is complex and imprecise.

   We reduce the complexity in this rewrite to a very simple rule: the
   task_group's load_avg is aggregated from its per-CPU cfs_rqs' load_avgs,
   and a group entity's weight is simply proportional to its own cfs_rq's
   load_avg relative to the task_group's load_avg. To illustrate (a sketch
   of the rule follows the formulas),

   if a task_group has { cfs_rq1, cfs_rq2, ..., cfs_rqn }, then

   task_group_avg = cfs_rq1_avg + cfs_rq2_avg + ... + cfs_rqn_avg, and

   cfs_rqx's entity's share = cfs_rqx_avg / task_group_avg * task_group's share
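   A sketch of this rule (hypothetical function; the kernel additionally
   clamps the result to [MIN_SHARES, tg->shares] and guards against races
   when reading the aggregate):

     static unsigned long group_entity_share(unsigned long tg_shares,
                                             unsigned long cfs_rq_load_avg,
                                             unsigned long tg_load_avg)
     {
             /* No load anywhere: return the full share, don't divide by 0. */
             if (!tg_load_avg)
                     return tg_shares;

             /* share = cfs_rqx_avg / task_group_avg * task_group's share */
             return tg_shares * cfs_rq_load_avg / tg_load_avg;
     }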

To sum up, this rewrite is in principle equivalent to the current
implementation, but fixes the issues described above. It turns out to
significantly reduce code complexity, and hence to increase clarity and
efficiency. In addition, the new averages are smoother and more continuous
(no spurious spikes and valleys), and they are updated more consistently
and promptly to reflect load dynamics.

As a result, we have less load tracking overhead, better performance,
and especially better power efficiency due to more balanced load.
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arjan@linux.intel.com
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: fengguang.wu@intel.com
Cc: len.brown@intel.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: rafael.j.wysocki@intel.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1436918682-4971-3-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent cd126afe
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1175,29 +1175,24 @@ struct load_weight {
 	u32 inv_weight;
 };
 
+/*
+ * The load_avg/util_avg accumulates an infinite geometric series.
+ * 1) load_avg factors the amount of time that a sched_entity is
+ * runnable on a rq into its weight. For cfs_rq, it is the aggregated
+ * such weights of all runnable and blocked sched_entities.
+ * 2) util_avg factors frequency scaling into the amount of time
+ * that a sched_entity is running on a CPU, in the range [0..SCHED_LOAD_SCALE].
+ * For cfs_rq, it is the aggregated such times of all runnable and
+ * blocked sched_entities.
+ * The 64 bit load_sum can:
+ * 1) for cfs_rq, afford 4353082796 (=2^64/47742/88761) entities with
+ * the highest weight (=88761) always runnable, we should not overflow
+ * 2) for entity, support any load.weight always runnable
+ */
 struct sched_avg {
-	u64 last_runnable_update;
-	s64 decay_count;
-	/*
-	 * utilization_avg_contrib describes the amount of time that a
-	 * sched_entity is running on a CPU. It is based on running_avg_sum
-	 * and is scaled in the range [0..SCHED_LOAD_SCALE].
-	 * load_avg_contrib described the amount of time that a sched_entity
-	 * is runnable on a rq. It is based on both runnable_avg_sum and the
-	 * weight of the task.
-	 */
-	unsigned long load_avg_contrib, utilization_avg_contrib;
-	/*
-	 * These sums represent an infinite geometric series and so are bound
-	 * above by 1024/(1-y). Thus we only need a u32 to store them for all
-	 * choices of y < 1-2^(-32)*1024.
-	 * running_avg_sum reflects the time that the sched_entity is
-	 * effectively running on the CPU.
-	 * runnable_avg_sum represents the amount of time a sched_entity is on
-	 * a runqueue which includes the running time that is monitored by
-	 * running_avg_sum.
-	 */
-	u32 runnable_avg_sum, avg_period, running_avg_sum;
+	u64 last_update_time, load_sum;
+	u32 util_sum, period_contrib;
+	unsigned long load_avg, util_avg;
 };
 
 #ifdef CONFIG_SCHEDSTATS
@@ -1263,7 +1258,7 @@ struct sched_entity {
 #endif
 
 #ifdef CONFIG_SMP
-	/* Per-entity load-tracking */
+	/* Per entity load average tracking */
 	struct sched_avg	avg;
 #endif
 };
...
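As a quick sanity check on the overflow comment in the new struct sched_avg
above, this tiny standalone program (illustrative only, not part of the
patch) reproduces the stated bound:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /*
           * With every entity always runnable at the maximum weight
           * (88761), each entity's load_sum is bounded by
           * LOAD_AVG_MAX (47742) * 88761, so a u64 accommodates about
           * 2^64 / 47742 / 88761 entities.
           */
          printf("%llu\n",
                 (unsigned long long)(UINT64_MAX / 47742 / 88761));
          /* expect 4353082796, matching the comment */
          return 0;
  }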
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2020,9 +2020,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.prev_sum_exec_runtime = 0;
 	p->se.nr_migrations = 0;
 	p->se.vruntime = 0;
-#ifdef CONFIG_SMP
-	p->se.avg.decay_count = 0;
-#endif
 	INIT_LIST_HEAD(&p->se.group_node);
 
 #ifdef CONFIG_SCHEDSTATS
...
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -88,12 +88,8 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
 #endif
 	P(se->load.weight);
 #ifdef CONFIG_SMP
-	P(se->avg.runnable_avg_sum);
-	P(se->avg.running_avg_sum);
-	P(se->avg.avg_period);
-	P(se->avg.load_avg_contrib);
-	P(se->avg.utilization_avg_contrib);
-	P(se->avg.decay_count);
+	P(se->avg.load_avg);
+	P(se->avg.util_avg);
 #endif
 #undef PN
 #undef P
@@ -209,21 +205,19 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	SEQ_printf(m, "  .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
 	SEQ_printf(m, "  .%-30s: %ld\n", "load", cfs_rq->load.weight);
 #ifdef CONFIG_SMP
-	SEQ_printf(m, "  .%-30s: %ld\n", "runnable_load_avg",
-			cfs_rq->runnable_load_avg);
-	SEQ_printf(m, "  .%-30s: %ld\n", "blocked_load_avg",
-			cfs_rq->blocked_load_avg);
-	SEQ_printf(m, "  .%-30s: %ld\n", "utilization_load_avg",
-			cfs_rq->utilization_load_avg);
+	SEQ_printf(m, "  .%-30s: %lu\n", "load_avg",
+			cfs_rq->avg.load_avg);
+	SEQ_printf(m, "  .%-30s: %lu\n", "util_avg",
+			cfs_rq->avg.util_avg);
+	SEQ_printf(m, "  .%-30s: %ld\n", "removed_load_avg",
+			atomic_long_read(&cfs_rq->removed_load_avg));
+	SEQ_printf(m, "  .%-30s: %ld\n", "removed_util_avg",
+			atomic_long_read(&cfs_rq->removed_util_avg));
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	SEQ_printf(m, "  .%-30s: %ld\n", "tg_load_contrib",
-			cfs_rq->tg_load_contrib);
-	SEQ_printf(m, "  .%-30s: %d\n", "tg_runnable_contrib",
-			cfs_rq->tg_runnable_contrib);
+	SEQ_printf(m, "  .%-30s: %lu\n", "tg_load_avg_contrib",
+			cfs_rq->tg_load_avg_contrib);
 	SEQ_printf(m, "  .%-30s: %ld\n", "tg_load_avg",
 			atomic_long_read(&cfs_rq->tg->load_avg));
-	SEQ_printf(m, "  .%-30s: %d\n", "tg->runnable_avg",
-			atomic_read(&cfs_rq->tg->runnable_avg));
 #endif
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
@@ -631,12 +625,11 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 	P(se.load.weight);
 #ifdef CONFIG_SMP
-	P(se.avg.runnable_avg_sum);
-	P(se.avg.running_avg_sum);
-	P(se.avg.avg_period);
-	P(se.avg.load_avg_contrib);
-	P(se.avg.utilization_avg_contrib);
-	P(se.avg.decay_count);
+	P(se.avg.load_sum);
+	P(se.avg.util_sum);
+	P(se.avg.load_avg);
+	P(se.avg.util_avg);
+	P(se.avg.last_update_time);
 #endif
 	P(policy);
 	P(prio);
...
[The kernel/sched/fair.c diff (the bulk of this rewrite) is collapsed.]
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -245,7 +245,6 @@ struct task_group {
 #ifdef CONFIG_SMP
 	atomic_long_t load_avg;
-	atomic_t runnable_avg;
 #endif
 #endif
@@ -366,27 +365,18 @@ struct cfs_rq {
 #ifdef CONFIG_SMP
 	/*
-	 * CFS Load tracking
-	 * Under CFS, load is tracked on a per-entity basis and aggregated up.
-	 * This allows for the description of both thread and group usage (in
-	 * the FAIR_GROUP_SCHED case).
-	 * runnable_load_avg is the sum of the load_avg_contrib of the
-	 * sched_entities on the rq.
-	 * blocked_load_avg is similar to runnable_load_avg except that its
-	 * the blocked sched_entities on the rq.
-	 * utilization_load_avg is the sum of the average running time of the
-	 * sched_entities on the rq.
+	 * CFS load tracking
 	 */
-	unsigned long runnable_load_avg, blocked_load_avg, utilization_load_avg;
-	atomic64_t decay_counter;
-	u64 last_decay;
-	atomic_long_t removed_load;
-
+	struct sched_avg avg;
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	/* Required to track per-cpu representation of a task_group */
-	u32 tg_runnable_contrib;
-	unsigned long tg_load_contrib;
+	unsigned long tg_load_avg_contrib;
+#endif
+	atomic_long_t removed_load_avg, removed_util_avg;
+#ifndef CONFIG_64BIT
+	u64 load_last_update_time_copy;
+#endif
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
 	/*
 	 * h_load = weight * f(tg)
 	 *
...
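The load_last_update_time_copy field added above lets 32-bit targets, where
a u64 load can tear, read avg.last_update_time consistently: the writer
stores avg.last_update_time, issues a write barrier, then stores the copy.
A simplified sketch of the matching reader (modeled on this rewrite's
fair.c code, which is collapsed here, so not verbatim):

  static u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
  {
          u64 last_update_time_copy;
          u64 last_update_time;

          /* Retry until both halves of the u64 are seen consistently. */
          do {
                  last_update_time_copy = cfs_rq->load_last_update_time_copy;
                  smp_rmb();
                  last_update_time = cfs_rq->avg.last_update_time;
          } while (last_update_time != last_update_time_copy);

          return last_update_time;
  }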