Commit 63ce50ff authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'sched-core-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler updates from Ingo Molnar:
 "Fair scheduler (SCHED_OTHER) improvements:
   - Remove the old and now unused SIS_PROP code & option
   - Scan cluster before LLC in the wake-up path
   - Use candidate prev/recent_used CPU if scanning failed for cluster
     wakeup

  NUMA scheduling improvements:
   - Improve the VMA access-PID code to better skip/scan VMAs
   - Extend tracing to cover VMA-skipping decisions
   - Improve/fix the recently introduced sched_numa_find_nth_cpu() code
   - Generalize numa_map_to_online_node()

  Energy scheduling improvements:
   - Remove the EM_MAX_COMPLEXITY limit
   - Add tracepoints to track energy computation
   - Make the behavior of the 'sched_energy_aware' sysctl more
     consistent
   - Consolidate and clean up access to a CPU's max compute capacity
   - Fix uclamp code corner cases

  RT scheduling improvements:
   - Drive dl_rq->overloaded with dl_rq->pushable_dl_tasks updates
   - Drive the ->rto_mask with rt_rq->pushable_tasks updates

  Scheduler scalability improvements:
   - Rate-limit updates to tg->load_avg
   - On x86 disable IBRS when CPU is offline to improve single-threaded
     performance
   - Micro-optimize in_task() and in_interrupt()
   - Micro-optimize the PSI code
   - Avoid updating PSI triggers and ->rtpoll_total when there are no
     state changes

  Core scheduler infrastructure improvements:
   - Use saved_state to reduce some spurious freezer wakeups
   - Bring in a handful of fast-headers improvements to scheduler
     headers
   - Make the scheduler UAPI headers more widely usable by user-space
   - Simplify the control flow of scheduler syscalls by using lock
     guards
   - Fix sched_setaffinity() vs. CPU hotplug race

  Scheduler debuggability improvements:
   - Disallow writing invalid values to sched_rt_period_us
   - Fix a race in the rq-clock debugging code triggering warnings
   - Fix a warning in the bandwidth distribution code
   - Micro-optimize in_atomic_preempt_off() checks
   - Enforce that the tasklist_lock is held in for_each_thread()
   - Print the TGID in sched_show_task()
   - Remove the /proc/sys/kernel/sched_child_runs_first sysctl

  ... and misc cleanups & fixes"

* tag 'sched-core-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (82 commits)
  sched/fair: Remove SIS_PROP
  sched/fair: Use candidate prev/recent_used CPU if scanning failed for cluster wakeup
  sched/fair: Scan cluster before scanning LLC in wake-up path
  sched: Add cpus_share_resources API
  sched/core: Fix RQCF_ACT_SKIP leak
  sched/fair: Remove unused 'curr' argument from pick_next_entity()
  sched/nohz: Update comments about NEWILB_KICK
  sched/fair: Remove duplicate #include
  sched/psi: Update poll => rtpoll in relevant comments
  sched: Make PELT acronym definition searchable
  sched: Fix stop_one_cpu_nowait() vs hotplug
  sched/psi: Bail out early from irq time accounting
  sched/topology: Rename 'DIE' domain to 'PKG'
  sched/psi: Delete the 'update_total' function parameter from update_triggers()
  sched/psi: Avoid updating PSI triggers and ->rtpoll_total when there are no state changes
  sched/headers: Remove comment referring to rq::cpu_load, since this has been removed
  sched/numa: Complete scanning of inactive VMAs when there is no alternative
  sched/numa: Complete scanning of partial VMAs regardless of PID activity
  sched/numa: Move up the access pid reset logic
  sched/numa: Trace decisions related to skipping VMAs
  ...
parents 3cf3fabc 984ffb6a
...@@ -170,7 +170,7 @@ and ``idle=nomwait``. If any of them is present in the kernel command line, the ...@@ -170,7 +170,7 @@ and ``idle=nomwait``. If any of them is present in the kernel command line, the
``MWAIT`` instruction is not allowed to be used, so the initialization of ``MWAIT`` instruction is not allowed to be used, so the initialization of
``intel_idle`` will fail. ``intel_idle`` will fail.
Apart from that there are four module parameters recognized by ``intel_idle`` Apart from that there are five module parameters recognized by ``intel_idle``
itself that can be set via the kernel command line (they cannot be updated via itself that can be set via the kernel command line (they cannot be updated via
sysfs, so that is the only way to change their values). sysfs, so that is the only way to change their values).
...@@ -216,6 +216,21 @@ are ignored). ...@@ -216,6 +216,21 @@ are ignored).
The idle states disabled this way can be enabled (on a per-CPU basis) from user The idle states disabled this way can be enabled (on a per-CPU basis) from user
space via ``sysfs``. space via ``sysfs``.
The ``ibrs_off`` module parameter is a boolean flag (defaults to
false). If set, it is used to control if IBRS (Indirect Branch Restricted
Speculation) should be turned off when the CPU enters an idle state.
This flag does not affect CPUs that use Enhanced IBRS which can remain
on with little performance impact.
For some CPUs, IBRS will be selected as mitigation for Spectre v2 and Retbleed
security vulnerabilities by default. Leaving the IBRS mode on while idling may
have a performance impact on its sibling CPU. The IBRS mode will be turned off
by default when the CPU enters into a deep idle state, but not in some
shallower ones. Setting the ``ibrs_off`` module parameter will force the IBRS
mode to off when the CPU is in any one of the available idle states. This may
help performance of a sibling CPU at the expense of a slightly higher wakeup
latency for the idle CPU.
.. _intel-idle-core-and-package-idle-states: .. _intel-idle-core-and-package-idle-states:
......
...@@ -1182,7 +1182,8 @@ automatically on platforms where it can run (that is, ...@@ -1182,7 +1182,8 @@ automatically on platforms where it can run (that is,
platforms with asymmetric CPU topologies and having an Energy platforms with asymmetric CPU topologies and having an Energy
Model available). If your platform happens to meet the Model available). If your platform happens to meet the
requirements for EAS but you do not want to use it, change requirements for EAS but you do not want to use it, change
this value to 0. this value to 0. On Non-EAS platforms, write operation fails and
read doesn't return anything.
task_delayacct task_delayacct
=============== ===============
......
...@@ -39,14 +39,15 @@ per Hz, leading to:: ...@@ -39,14 +39,15 @@ per Hz, leading to::
------------------- -------------------
Two different capacity values are used within the scheduler. A CPU's Two different capacity values are used within the scheduler. A CPU's
``capacity_orig`` is its maximum attainable capacity, i.e. its maximum ``original capacity`` is its maximum attainable capacity, i.e. its maximum
attainable performance level. A CPU's ``capacity`` is its ``capacity_orig`` to attainable performance level. This original capacity is returned by
which some loss of available performance (e.g. time spent handling IRQs) is the function arch_scale_cpu_capacity(). A CPU's ``capacity`` is its ``original
subtracted. capacity`` to which some loss of available performance (e.g. time spent
handling IRQs) is subtracted.
Note that a CPU's ``capacity`` is solely intended to be used by the CFS class, Note that a CPU's ``capacity`` is solely intended to be used by the CFS class,
while ``capacity_orig`` is class-agnostic. The rest of this document will use while ``original capacity`` is class-agnostic. The rest of this document will use
the term ``capacity`` interchangeably with ``capacity_orig`` for the sake of the term ``capacity`` interchangeably with ``original capacity`` for the sake of
brevity. brevity.
1.3 Platform examples 1.3 Platform examples
......
...@@ -359,32 +359,9 @@ in milli-Watts or in an 'abstract scale'. ...@@ -359,32 +359,9 @@ in milli-Watts or in an 'abstract scale'.
6.3 - Energy Model complexity 6.3 - Energy Model complexity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The task wake-up path is very latency-sensitive. When the EM of a platform is EAS does not impose any complexity limit on the number of PDs/OPPs/CPUs but
too complex (too many CPUs, too many performance domains, too many performance restricts the number of CPUs to EM_MAX_NUM_CPUS to prevent overflows during
states, ...), the cost of using it in the wake-up path can become prohibitive. the energy estimation.
The energy-aware wake-up algorithm has a complexity of:
C = Nd * (Nc + Ns)
with: Nd the number of performance domains; Nc the number of CPUs; and Ns the
total number of OPPs (ex: for two perf. domains with 4 OPPs each, Ns = 8).
A complexity check is performed at the root domain level, when scheduling
domains are built. EAS will not start on a root domain if its C happens to be
higher than the completely arbitrary EM_MAX_COMPLEXITY threshold (2048 at the
time of writing).
If you really want to use EAS but the complexity of your platform's Energy
Model is too high to be used with a single root domain, you're left with only
two possible options:
1. split your system into separate, smaller, root domains using exclusive
cpusets and enable EAS locally on each of them. This option has the
benefit to work out of the box but the drawback of preventing load
balance between root domains, which can result in an unbalanced system
overall;
2. submit patches to reduce the complexity of the EAS wake-up algorithm,
hence enabling it to cope with larger EMs in reasonable time.
6.4 - Schedutil governor 6.4 - Schedutil governor
......
...@@ -39,10 +39,10 @@ Most notable: ...@@ -39,10 +39,10 @@ Most notable:
1.1 The problem 1.1 The problem
--------------- ---------------
Realtime scheduling is all about determinism, a group has to be able to rely on Real-time scheduling is all about determinism, a group has to be able to rely on
the amount of bandwidth (eg. CPU time) being constant. In order to schedule the amount of bandwidth (eg. CPU time) being constant. In order to schedule
multiple groups of realtime tasks, each group must be assigned a fixed portion multiple groups of real-time tasks, each group must be assigned a fixed portion
of the CPU time available. Without a minimum guarantee a realtime group can of the CPU time available. Without a minimum guarantee a real-time group can
obviously fall short. A fuzzy upper limit is of no use since it cannot be obviously fall short. A fuzzy upper limit is of no use since it cannot be
relied upon. Which leaves us with just the single fixed portion. relied upon. Which leaves us with just the single fixed portion.
...@@ -50,14 +50,14 @@ relied upon. Which leaves us with just the single fixed portion. ...@@ -50,14 +50,14 @@ relied upon. Which leaves us with just the single fixed portion.
---------------- ----------------
CPU time is divided by means of specifying how much time can be spent running CPU time is divided by means of specifying how much time can be spent running
in a given period. We allocate this "run time" for each realtime group which in a given period. We allocate this "run time" for each real-time group which
the other realtime groups will not be permitted to use. the other real-time groups will not be permitted to use.
Any time not allocated to a realtime group will be used to run normal priority Any time not allocated to a real-time group will be used to run normal priority
tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by
SCHED_OTHER. SCHED_OTHER.
Let's consider an example: a frame fixed realtime renderer must deliver 25 Let's consider an example: a frame fixed real-time renderer must deliver 25
frames a second, which yields a period of 0.04s per frame. Now say it will also frames a second, which yields a period of 0.04s per frame. Now say it will also
have to play some music and respond to input, leaving it with around 80% CPU have to play some music and respond to input, leaving it with around 80% CPU
time dedicated for the graphics. We can then give this group a run time of 0.8 time dedicated for the graphics. We can then give this group a run time of 0.8
...@@ -70,7 +70,7 @@ needs only about 3% CPU time to do so, it can do with a 0.03 * 0.005s = ...@@ -70,7 +70,7 @@ needs only about 3% CPU time to do so, it can do with a 0.03 * 0.005s =
of 0.00015s. of 0.00015s.
The remaining CPU time will be used for user input and other tasks. Because The remaining CPU time will be used for user input and other tasks. Because
realtime tasks have explicitly allocated the CPU time they need to perform real-time tasks have explicitly allocated the CPU time they need to perform
their tasks, buffer underruns in the graphics or audio can be eliminated. their tasks, buffer underruns in the graphics or audio can be eliminated.
NOTE: the above example is not fully implemented yet. We still NOTE: the above example is not fully implemented yet. We still
...@@ -87,18 +87,20 @@ lack an EDF scheduler to make non-uniform periods usable. ...@@ -87,18 +87,20 @@ lack an EDF scheduler to make non-uniform periods usable.
The system wide settings are configured under the /proc virtual file system: The system wide settings are configured under the /proc virtual file system:
/proc/sys/kernel/sched_rt_period_us: /proc/sys/kernel/sched_rt_period_us:
The scheduling period that is equivalent to 100% CPU bandwidth The scheduling period that is equivalent to 100% CPU bandwidth.
/proc/sys/kernel/sched_rt_runtime_us: /proc/sys/kernel/sched_rt_runtime_us:
A global limit on how much time realtime scheduling may use. Even without A global limit on how much time real-time scheduling may use. This is always
CONFIG_RT_GROUP_SCHED enabled, this will limit time reserved to realtime less or equal to the period_us, as it denotes the time allocated from the
processes. With CONFIG_RT_GROUP_SCHED it signifies the total bandwidth period_us for the real-time tasks. Even without CONFIG_RT_GROUP_SCHED enabled,
available to all realtime groups. this will limit time reserved to real-time processes. With
CONFIG_RT_GROUP_SCHED=y it signifies the total bandwidth available to all
real-time groups.
* Time is specified in us because the interface is s32. This gives an * Time is specified in us because the interface is s32. This gives an
operating range from 1us to about 35 minutes. operating range from 1us to about 35 minutes.
* sched_rt_period_us takes values from 1 to INT_MAX. * sched_rt_period_us takes values from 1 to INT_MAX.
* sched_rt_runtime_us takes values from -1 to (INT_MAX - 1). * sched_rt_runtime_us takes values from -1 to sched_rt_period_us.
* A run time of -1 specifies runtime == period, ie. no limit. * A run time of -1 specifies runtime == period, ie. no limit.
...@@ -108,7 +110,7 @@ The system wide settings are configured under the /proc virtual file system: ...@@ -108,7 +110,7 @@ The system wide settings are configured under the /proc virtual file system:
The default values for sched_rt_period_us (1000000 or 1s) and The default values for sched_rt_period_us (1000000 or 1s) and
sched_rt_runtime_us (950000 or 0.95s). This gives 0.05s to be used by sched_rt_runtime_us (950000 or 0.95s). This gives 0.05s to be used by
SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away
realtime tasks will not lock up the machine but leave a little time to recover real-time tasks will not lock up the machine but leave a little time to recover
it. By setting runtime to -1 you'd get the old behaviour back. it. By setting runtime to -1 you'd get the old behaviour back.
By default all bandwidth is assigned to the root group and new groups get the By default all bandwidth is assigned to the root group and new groups get the
...@@ -116,10 +118,10 @@ period from /proc/sys/kernel/sched_rt_period_us and a run time of 0. If you ...@@ -116,10 +118,10 @@ period from /proc/sys/kernel/sched_rt_period_us and a run time of 0. If you
want to assign bandwidth to another group, reduce the root group's bandwidth want to assign bandwidth to another group, reduce the root group's bandwidth
and assign some or all of the difference to another group. and assign some or all of the difference to another group.
Realtime group scheduling means you have to assign a portion of total CPU Real-time group scheduling means you have to assign a portion of total CPU
bandwidth to the group before it will accept realtime tasks. Therefore you will bandwidth to the group before it will accept real-time tasks. Therefore you will
not be able to run realtime tasks as any user other than root until you have not be able to run real-time tasks as any user other than root until you have
done that, even if the user has the rights to run processes with realtime done that, even if the user has the rights to run processes with real-time
priority! priority!
......
...@@ -1051,7 +1051,7 @@ static struct sched_domain_topology_level powerpc_topology[] = { ...@@ -1051,7 +1051,7 @@ static struct sched_domain_topology_level powerpc_topology[] = {
#endif #endif
{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
{ cpu_mc_mask, SD_INIT_NAME(MC) }, { cpu_mc_mask, SD_INIT_NAME(MC) },
{ cpu_cpu_mask, SD_INIT_NAME(DIE) }, { cpu_cpu_mask, SD_INIT_NAME(PKG) },
{ NULL, }, { NULL, },
}; };
...@@ -1595,7 +1595,7 @@ static void add_cpu_to_masks(int cpu) ...@@ -1595,7 +1595,7 @@ static void add_cpu_to_masks(int cpu)
/* Skip all CPUs already part of current CPU core mask */ /* Skip all CPUs already part of current CPU core mask */
cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu)); cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
/* If chip_id is -1; limit the cpu_core_mask to within DIE*/ /* If chip_id is -1; limit the cpu_core_mask to within PKG */
if (chip_id == -1) if (chip_id == -1)
cpumask_and(mask, mask, cpu_cpu_mask(cpu)); cpumask_and(mask, mask, cpu_cpu_mask(cpu));
......
...@@ -522,7 +522,7 @@ static struct sched_domain_topology_level s390_topology[] = { ...@@ -522,7 +522,7 @@ static struct sched_domain_topology_level s390_topology[] = {
{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) }, { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
{ cpu_book_mask, SD_INIT_NAME(BOOK) }, { cpu_book_mask, SD_INIT_NAME(BOOK) },
{ cpu_drawer_mask, SD_INIT_NAME(DRAWER) }, { cpu_drawer_mask, SD_INIT_NAME(DRAWER) },
{ cpu_cpu_mask, SD_INIT_NAME(DIE) }, { cpu_cpu_mask, SD_INIT_NAME(PKG) },
{ NULL, }, { NULL, },
}; };
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <asm/nospec-branch.h> #include <asm/nospec-branch.h>
#include <asm/msr.h>
/* /*
* On VMENTER we must preserve whatever view of the SPEC_CTRL MSR * On VMENTER we must preserve whatever view of the SPEC_CTRL MSR
...@@ -76,6 +77,16 @@ static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn) ...@@ -76,6 +77,16 @@ static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL; return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
} }
/*
* This can be used in noinstr functions & should only be called in bare
* metal context.
*/
static __always_inline void __update_spec_ctrl(u64 val)
{
__this_cpu_write(x86_spec_ctrl_current, val);
native_wrmsrl(MSR_IA32_SPEC_CTRL, val);
}
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
extern void speculative_store_bypass_ht_init(void); extern void speculative_store_bypass_ht_init(void);
#else #else
......
...@@ -87,6 +87,7 @@ ...@@ -87,6 +87,7 @@
#include <asm/hw_irq.h> #include <asm/hw_irq.h>
#include <asm/stackprotector.h> #include <asm/stackprotector.h>
#include <asm/sev.h> #include <asm/sev.h>
#include <asm/spec-ctrl.h>
/* representing HT siblings of each logical CPU */ /* representing HT siblings of each logical CPU */
DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map); DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
...@@ -640,13 +641,13 @@ static void __init build_sched_topology(void) ...@@ -640,13 +641,13 @@ static void __init build_sched_topology(void)
}; };
#endif #endif
/* /*
* When there is NUMA topology inside the package skip the DIE domain * When there is NUMA topology inside the package skip the PKG domain
* since the NUMA domains will auto-magically create the right spanning * since the NUMA domains will auto-magically create the right spanning
* domains based on the SLIT. * domains based on the SLIT.
*/ */
if (!x86_has_numa_in_package) { if (!x86_has_numa_in_package) {
x86_topology[i++] = (struct sched_domain_topology_level){ x86_topology[i++] = (struct sched_domain_topology_level){
cpu_cpu_mask, x86_die_flags, SD_INIT_NAME(DIE) cpu_cpu_mask, x86_die_flags, SD_INIT_NAME(PKG)
}; };
} }
...@@ -1596,8 +1597,15 @@ void __noreturn hlt_play_dead(void) ...@@ -1596,8 +1597,15 @@ void __noreturn hlt_play_dead(void)
native_halt(); native_halt();
} }
/*
* native_play_dead() is essentially a __noreturn function, but it can't
* be marked as such as the compiler may complain about it.
*/
void native_play_dead(void) void native_play_dead(void)
{ {
if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
__update_spec_ctrl(0);
play_dead_common(); play_dead_common();
tboot_shutdown(TB_SHUTDOWN_WFS); tboot_shutdown(TB_SHUTDOWN_WFS);
......
...@@ -53,9 +53,8 @@ ...@@ -53,9 +53,8 @@
#include <linux/moduleparam.h> #include <linux/moduleparam.h>
#include <asm/cpu_device_id.h> #include <asm/cpu_device_id.h>
#include <asm/intel-family.h> #include <asm/intel-family.h>
#include <asm/nospec-branch.h>
#include <asm/mwait.h> #include <asm/mwait.h>
#include <asm/msr.h> #include <asm/spec-ctrl.h>
#include <asm/fpu/api.h> #include <asm/fpu/api.h>
#define INTEL_IDLE_VERSION "0.5.1" #define INTEL_IDLE_VERSION "0.5.1"
...@@ -69,6 +68,7 @@ static int max_cstate = CPUIDLE_STATE_MAX - 1; ...@@ -69,6 +68,7 @@ static int max_cstate = CPUIDLE_STATE_MAX - 1;
static unsigned int disabled_states_mask __read_mostly; static unsigned int disabled_states_mask __read_mostly;
static unsigned int preferred_states_mask __read_mostly; static unsigned int preferred_states_mask __read_mostly;
static bool force_irq_on __read_mostly; static bool force_irq_on __read_mostly;
static bool ibrs_off __read_mostly;
static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; static struct cpuidle_device __percpu *intel_idle_cpuidle_devices;
...@@ -182,12 +182,12 @@ static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev, ...@@ -182,12 +182,12 @@ static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev,
int ret; int ret;
if (smt_active) if (smt_active)
native_wrmsrl(MSR_IA32_SPEC_CTRL, 0); __update_spec_ctrl(0);
ret = __intel_idle(dev, drv, index); ret = __intel_idle(dev, drv, index);
if (smt_active) if (smt_active)
native_wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl); __update_spec_ctrl(spec_ctrl);
return ret; return ret;
} }
...@@ -1853,11 +1853,13 @@ static void state_update_enter_method(struct cpuidle_state *state, int cstate) ...@@ -1853,11 +1853,13 @@ static void state_update_enter_method(struct cpuidle_state *state, int cstate)
} }
if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) && if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) &&
state->flags & CPUIDLE_FLAG_IBRS) { ((state->flags & CPUIDLE_FLAG_IBRS) || ibrs_off)) {
/* /*
* IBRS mitigation requires that C-states are entered * IBRS mitigation requires that C-states are entered
* with interrupts disabled. * with interrupts disabled.
*/ */
if (ibrs_off && (state->flags & CPUIDLE_FLAG_IRQ_ENABLE))
state->flags &= ~CPUIDLE_FLAG_IRQ_ENABLE;
WARN_ON_ONCE(state->flags & CPUIDLE_FLAG_IRQ_ENABLE); WARN_ON_ONCE(state->flags & CPUIDLE_FLAG_IRQ_ENABLE);
state->enter = intel_idle_ibrs; state->enter = intel_idle_ibrs;
return; return;
...@@ -2176,3 +2178,9 @@ MODULE_PARM_DESC(preferred_cstates, "Mask of preferred idle states"); ...@@ -2176,3 +2178,9 @@ MODULE_PARM_DESC(preferred_cstates, "Mask of preferred idle states");
* 'CPUIDLE_FLAG_INIT_XSTATE' and 'CPUIDLE_FLAG_IBRS' flags. * 'CPUIDLE_FLAG_INIT_XSTATE' and 'CPUIDLE_FLAG_IBRS' flags.
*/ */
module_param(force_irq_on, bool, 0444); module_param(force_irq_on, bool, 0444);
/*
* Force the disabling of IBRS when X86_FEATURE_KERNEL_IBRS is on and
* CPUIDLE_FLAG_IRQ_ENABLE isn't set.
*/
module_param(ibrs_off, bool, 0444);
MODULE_PARM_DESC(ibrs_off, "Disable IBRS when idle");
...@@ -155,6 +155,8 @@ static inline int remove_cpu(unsigned int cpu) { return -EPERM; } ...@@ -155,6 +155,8 @@ static inline int remove_cpu(unsigned int cpu) { return -EPERM; }
static inline void smp_shutdown_nonboot_cpus(unsigned int primary_cpu) { } static inline void smp_shutdown_nonboot_cpus(unsigned int primary_cpu) { }
#endif /* !CONFIG_HOTPLUG_CPU */ #endif /* !CONFIG_HOTPLUG_CPU */
DEFINE_LOCK_GUARD_0(cpus_read_lock, cpus_read_lock(), cpus_read_unlock())
#ifdef CONFIG_PM_SLEEP_SMP #ifdef CONFIG_PM_SLEEP_SMP
extern int freeze_secondary_cpus(int primary); extern int freeze_secondary_cpus(int primary);
extern void thaw_secondary_cpus(void); extern void thaw_secondary_cpus(void);
......
...@@ -686,6 +686,14 @@ static inline void list_splice_tail_init(struct list_head *list, ...@@ -686,6 +686,14 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_for_each(pos, head) \ #define list_for_each(pos, head) \
for (pos = (head)->next; !list_is_head(pos, (head)); pos = pos->next) for (pos = (head)->next; !list_is_head(pos, (head)); pos = pos->next)
/**
* list_for_each_reverse - iterate backwards over a list
* @pos: the &struct list_head to use as a loop cursor.
* @head: the head for your list.
*/
#define list_for_each_reverse(pos, head) \
for (pos = (head)->prev; pos != (head); pos = pos->prev)
/** /**
* list_for_each_rcu - Iterate over a list in an RCU-safe fashion * list_for_each_rcu - Iterate over a list in an RCU-safe fashion
* @pos: the &struct list_head to use as a loop cursor. * @pos: the &struct list_head to use as a loop cursor.
......
...@@ -1726,8 +1726,8 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma) ...@@ -1726,8 +1726,8 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
unsigned int pid_bit; unsigned int pid_bit;
pid_bit = hash_32(current->pid, ilog2(BITS_PER_LONG)); pid_bit = hash_32(current->pid, ilog2(BITS_PER_LONG));
if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids[1])) { if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->pids_active[1])) {
__set_bit(pid_bit, &vma->numab_state->access_pids[1]); __set_bit(pid_bit, &vma->numab_state->pids_active[1]);
} }
} }
#else /* !CONFIG_NUMA_BALANCING */ #else /* !CONFIG_NUMA_BALANCING */
......
...@@ -551,9 +551,36 @@ struct vma_lock { ...@@ -551,9 +551,36 @@ struct vma_lock {
}; };
struct vma_numab_state { struct vma_numab_state {
/*
* Initialised as time in 'jiffies' after which VMA
* should be scanned. Delays first scan of new VMA by at
* least sysctl_numa_balancing_scan_delay:
*/
unsigned long next_scan; unsigned long next_scan;
unsigned long next_pid_reset;
unsigned long access_pids[2]; /*
* Time in jiffies when pids_active[] is reset to
* detect phase change behaviour:
*/
unsigned long pids_active_reset;
/*
* Approximate tracking of PIDs that trapped a NUMA hinting
* fault. May produce false positives due to hash collisions.
*
* [0] Previous PID tracking
* [1] Current PID tracking
*
* Window moves after next_pid_reset has expired approximately
* every VMA_PID_RESET_PERIOD jiffies:
*/
unsigned long pids_active[2];
/*
* MM scan sequence ID when the VMA was last completely scanned.
* A VMA is not eligible for scanning if prev_scan_seq == numa_scan_seq
*/
int prev_scan_seq;
}; };
/* /*
......
...@@ -25,7 +25,7 @@ ...@@ -25,7 +25,7 @@
#include <asm/sparsemem.h> #include <asm/sparsemem.h>
/* Generic implementation available */ /* Generic implementation available */
int numa_map_to_online_node(int node); int numa_nearest_node(int node, unsigned int state);
#ifndef memory_add_physaddr_to_nid #ifndef memory_add_physaddr_to_nid
static inline int memory_add_physaddr_to_nid(u64 start) static inline int memory_add_physaddr_to_nid(u64 start)
...@@ -44,10 +44,11 @@ static inline int phys_to_target_node(u64 start) ...@@ -44,10 +44,11 @@ static inline int phys_to_target_node(u64 start)
} }
#endif #endif
#else /* !CONFIG_NUMA */ #else /* !CONFIG_NUMA */
static inline int numa_map_to_online_node(int node) static inline int numa_nearest_node(int node, unsigned int state)
{ {
return NUMA_NO_NODE; return NUMA_NO_NODE;
} }
static inline int memory_add_physaddr_to_nid(u64 start) static inline int memory_add_physaddr_to_nid(u64 start)
{ {
return 0; return 0;
...@@ -58,6 +59,8 @@ static inline int phys_to_target_node(u64 start) ...@@ -58,6 +59,8 @@ static inline int phys_to_target_node(u64 start)
} }
#endif #endif
#define numa_map_to_online_node(node) numa_nearest_node(node, N_ONLINE)
#ifdef CONFIG_HAVE_ARCH_NODE_DEV_GROUP #ifdef CONFIG_HAVE_ARCH_NODE_DEV_GROUP
extern const struct attribute_group arch_node_dev_group; extern const struct attribute_group arch_node_dev_group;
#endif #endif
......
...@@ -99,14 +99,21 @@ static __always_inline unsigned char interrupt_context_level(void) ...@@ -99,14 +99,21 @@ static __always_inline unsigned char interrupt_context_level(void)
return level; return level;
} }
/*
* These macro definitions avoid redundant invocations of preempt_count()
* because such invocations would result in redundant loads given that
* preempt_count() is commonly implemented with READ_ONCE().
*/
#define nmi_count() (preempt_count() & NMI_MASK) #define nmi_count() (preempt_count() & NMI_MASK)
#define hardirq_count() (preempt_count() & HARDIRQ_MASK) #define hardirq_count() (preempt_count() & HARDIRQ_MASK)
#ifdef CONFIG_PREEMPT_RT #ifdef CONFIG_PREEMPT_RT
# define softirq_count() (current->softirq_disable_cnt & SOFTIRQ_MASK) # define softirq_count() (current->softirq_disable_cnt & SOFTIRQ_MASK)
# define irq_count() ((preempt_count() & (NMI_MASK | HARDIRQ_MASK)) | softirq_count())
#else #else
# define softirq_count() (preempt_count() & SOFTIRQ_MASK) # define softirq_count() (preempt_count() & SOFTIRQ_MASK)
# define irq_count() (preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK))
#endif #endif
#define irq_count() (nmi_count() | hardirq_count() | softirq_count())
/* /*
* Macros to retrieve the current execution context: * Macros to retrieve the current execution context:
...@@ -119,7 +126,11 @@ static __always_inline unsigned char interrupt_context_level(void) ...@@ -119,7 +126,11 @@ static __always_inline unsigned char interrupt_context_level(void)
#define in_nmi() (nmi_count()) #define in_nmi() (nmi_count())
#define in_hardirq() (hardirq_count()) #define in_hardirq() (hardirq_count())
#define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET) #define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
#define in_task() (!(in_nmi() | in_hardirq() | in_serving_softirq())) #ifdef CONFIG_PREEMPT_RT
# define in_task() (!((preempt_count() & (NMI_MASK | HARDIRQ_MASK)) | in_serving_softirq()))
#else
# define in_task() (!(preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
#endif
/* /*
* The following macros are deprecated and should not be used in new code: * The following macros are deprecated and should not be used in new code:
......
...@@ -63,7 +63,6 @@ struct robust_list_head; ...@@ -63,7 +63,6 @@ struct robust_list_head;
struct root_domain; struct root_domain;
struct rq; struct rq;
struct sched_attr; struct sched_attr;
struct sched_param;
struct seq_file; struct seq_file;
struct sighand_struct; struct sighand_struct;
struct signal_struct; struct signal_struct;
...@@ -370,6 +369,10 @@ extern struct root_domain def_root_domain; ...@@ -370,6 +369,10 @@ extern struct root_domain def_root_domain;
extern struct mutex sched_domains_mutex; extern struct mutex sched_domains_mutex;
#endif #endif
struct sched_param {
int sched_priority;
};
struct sched_info { struct sched_info {
#ifdef CONFIG_SCHED_INFO #ifdef CONFIG_SCHED_INFO
/* Cumulative counters: */ /* Cumulative counters: */
...@@ -750,10 +753,8 @@ struct task_struct { ...@@ -750,10 +753,8 @@ struct task_struct {
#endif #endif
unsigned int __state; unsigned int __state;
#ifdef CONFIG_PREEMPT_RT
/* saved state for "spinlock sleepers" */ /* saved state for "spinlock sleepers" */
unsigned int saved_state; unsigned int saved_state;
#endif
/* /*
* This begins the randomizable portion of task_struct. Only * This begins the randomizable portion of task_struct. Only
......
/* SPDX-License-Identifier: GPL-2.0 */ /* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_SCHED_DEADLINE_H
#define _LINUX_SCHED_DEADLINE_H
/* /*
* SCHED_DEADLINE tasks has negative priorities, reflecting * SCHED_DEADLINE tasks has negative priorities, reflecting
...@@ -34,3 +36,5 @@ extern void dl_add_task_root_domain(struct task_struct *p); ...@@ -34,3 +36,5 @@ extern void dl_add_task_root_domain(struct task_struct *p);
extern void dl_clear_root_domain(struct root_domain *rd); extern void dl_clear_root_domain(struct root_domain *rd);
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#endif /* _LINUX_SCHED_DEADLINE_H */
...@@ -15,6 +15,16 @@ ...@@ -15,6 +15,16 @@
#define TNF_FAULT_LOCAL 0x08 #define TNF_FAULT_LOCAL 0x08
#define TNF_MIGRATE_FAIL 0x10 #define TNF_MIGRATE_FAIL 0x10
enum numa_vmaskip_reason {
NUMAB_SKIP_UNSUITABLE,
NUMAB_SKIP_SHARED_RO,
NUMAB_SKIP_INACCESSIBLE,
NUMAB_SKIP_SCAN_DELAY,
NUMAB_SKIP_PID_INACTIVE,
NUMAB_SKIP_IGNORE_PID,
NUMAB_SKIP_SEQ_COMPLETED,
};
#ifdef CONFIG_NUMA_BALANCING #ifdef CONFIG_NUMA_BALANCING
extern void task_numa_fault(int last_node, int node, int pages, int flags); extern void task_numa_fault(int last_node, int node, int pages, int flags);
extern pid_t task_numa_group_id(struct task_struct *p); extern pid_t task_numa_group_id(struct task_struct *p);
......
...@@ -109,6 +109,13 @@ SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS) ...@@ -109,6 +109,13 @@ SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
*/ */
SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS) SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
/*
* Domain members share CPU cluster (LLC tags or L2 cache)
*
* NEEDS_GROUPS: Clusters are shared between groups.
*/
SD_FLAG(SD_CLUSTER, SDF_NEEDS_GROUPS)
/* /*
* Domain members share CPU package resources (i.e. caches) * Domain members share CPU package resources (i.e. caches)
* *
......
...@@ -656,7 +656,8 @@ extern bool current_is_single_threaded(void); ...@@ -656,7 +656,8 @@ extern bool current_is_single_threaded(void);
while ((t = next_thread(t)) != g) while ((t = next_thread(t)) != g)
#define __for_each_thread(signal, t) \ #define __for_each_thread(signal, t) \
list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node) list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node, \
lockdep_is_held(&tasklist_lock))
#define for_each_thread(p, t) \ #define for_each_thread(p, t) \
__for_each_thread((p)->signal, t) __for_each_thread((p)->signal, t)
......
...@@ -17,4 +17,4 @@ static inline bool sched_smt_active(void) { return false; } ...@@ -17,4 +17,4 @@ static inline bool sched_smt_active(void) { return false; }
void arch_smt_update(void); void arch_smt_update(void);
#endif #endif /* _LINUX_SCHED_SMT_H */
...@@ -45,7 +45,7 @@ static inline int cpu_smt_flags(void) ...@@ -45,7 +45,7 @@ static inline int cpu_smt_flags(void)
#ifdef CONFIG_SCHED_CLUSTER #ifdef CONFIG_SCHED_CLUSTER
static inline int cpu_cluster_flags(void) static inline int cpu_cluster_flags(void)
{ {
return SD_SHARE_PKG_RESOURCES; return SD_CLUSTER | SD_SHARE_PKG_RESOURCES;
} }
#endif #endif
...@@ -109,8 +109,6 @@ struct sched_domain { ...@@ -109,8 +109,6 @@ struct sched_domain {
u64 max_newidle_lb_cost; u64 max_newidle_lb_cost;
unsigned long last_decay_max_lb_cost; unsigned long last_decay_max_lb_cost;
u64 avg_scan_cost; /* select_idle_sibling */
#ifdef CONFIG_SCHEDSTATS #ifdef CONFIG_SCHEDSTATS
/* load_balance() stats */ /* load_balance() stats */
unsigned int lb_count[CPU_MAX_IDLE_TYPES]; unsigned int lb_count[CPU_MAX_IDLE_TYPES];
...@@ -179,6 +177,7 @@ cpumask_var_t *alloc_sched_domains(unsigned int ndoms); ...@@ -179,6 +177,7 @@ cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms); void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
bool cpus_share_cache(int this_cpu, int that_cpu); bool cpus_share_cache(int this_cpu, int that_cpu);
bool cpus_share_resources(int this_cpu, int that_cpu);
typedef const struct cpumask *(*sched_domain_mask_f)(int cpu); typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
typedef int (*sched_domain_flags_f)(void); typedef int (*sched_domain_flags_f)(void);
...@@ -232,6 +231,11 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu) ...@@ -232,6 +231,11 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
return true; return true;
} }
static inline bool cpus_share_resources(int this_cpu, int that_cpu)
{
return true;
}
#endif /* !CONFIG_SMP */ #endif /* !CONFIG_SMP */
#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
......
...@@ -20,4 +20,4 @@ struct task_cputime { ...@@ -20,4 +20,4 @@ struct task_cputime {
unsigned long long sum_exec_runtime; unsigned long long sum_exec_runtime;
}; };
#endif #endif /* _LINUX_SCHED_TYPES_H */
/* SPDX-License-Identifier: GPL-2.0 */ /* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_VHOST_TASK_H #ifndef _LINUX_SCHED_VHOST_TASK_H
#define _LINUX_VHOST_TASK_H #define _LINUX_SCHED_VHOST_TASK_H
struct vhost_task; struct vhost_task;
...@@ -11,4 +10,4 @@ void vhost_task_start(struct vhost_task *vtsk); ...@@ -11,4 +10,4 @@ void vhost_task_start(struct vhost_task *vtsk);
void vhost_task_stop(struct vhost_task *vtsk); void vhost_task_stop(struct vhost_task *vtsk);
void vhost_task_wake(struct vhost_task *vtsk); void vhost_task_wake(struct vhost_task *vtsk);
#endif #endif /* _LINUX_SCHED_VHOST_TASK_H */
...@@ -251,7 +251,7 @@ extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int ...@@ -251,7 +251,7 @@ extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int
#else #else
static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node) static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
{ {
return cpumask_nth(cpu, cpus); return cpumask_nth_and(cpu, cpus, cpu_online_mask);
} }
static inline const struct cpumask * static inline const struct cpumask *
......
...@@ -664,6 +664,58 @@ DEFINE_EVENT(sched_numa_pair_template, sched_swap_numa, ...@@ -664,6 +664,58 @@ DEFINE_EVENT(sched_numa_pair_template, sched_swap_numa,
TP_ARGS(src_tsk, src_cpu, dst_tsk, dst_cpu) TP_ARGS(src_tsk, src_cpu, dst_tsk, dst_cpu)
); );
#ifdef CONFIG_NUMA_BALANCING
#define NUMAB_SKIP_REASON \
EM( NUMAB_SKIP_UNSUITABLE, "unsuitable" ) \
EM( NUMAB_SKIP_SHARED_RO, "shared_ro" ) \
EM( NUMAB_SKIP_INACCESSIBLE, "inaccessible" ) \
EM( NUMAB_SKIP_SCAN_DELAY, "scan_delay" ) \
EM( NUMAB_SKIP_PID_INACTIVE, "pid_inactive" ) \
EM( NUMAB_SKIP_IGNORE_PID, "ignore_pid_inactive" ) \
EMe(NUMAB_SKIP_SEQ_COMPLETED, "seq_completed" )
/* Redefine for export. */
#undef EM
#undef EMe
#define EM(a, b) TRACE_DEFINE_ENUM(a);
#define EMe(a, b) TRACE_DEFINE_ENUM(a);
NUMAB_SKIP_REASON
/* Redefine for symbolic printing. */
#undef EM
#undef EMe
#define EM(a, b) { a, b },
#define EMe(a, b) { a, b }
TRACE_EVENT(sched_skip_vma_numa,
TP_PROTO(struct mm_struct *mm, struct vm_area_struct *vma,
enum numa_vmaskip_reason reason),
TP_ARGS(mm, vma, reason),
TP_STRUCT__entry(
__field(unsigned long, numa_scan_offset)
__field(unsigned long, vm_start)
__field(unsigned long, vm_end)
__field(enum numa_vmaskip_reason, reason)
),
TP_fast_assign(
__entry->numa_scan_offset = mm->numa_scan_offset;
__entry->vm_start = vma->vm_start;
__entry->vm_end = vma->vm_end;
__entry->reason = reason;
),
TP_printk("numa_scan_offset=%lX vm_start=%lX vm_end=%lX reason=%s",
__entry->numa_scan_offset,
__entry->vm_start,
__entry->vm_end,
__print_symbolic(__entry->reason, NUMAB_SKIP_REASON))
);
#endif /* CONFIG_NUMA_BALANCING */
/* /*
* Tracepoint for waking a polling cpu without an IPI. * Tracepoint for waking a polling cpu without an IPI.
...@@ -735,6 +787,11 @@ DECLARE_TRACE(sched_update_nr_running_tp, ...@@ -735,6 +787,11 @@ DECLARE_TRACE(sched_update_nr_running_tp,
TP_PROTO(struct rq *rq, int change), TP_PROTO(struct rq *rq, int change),
TP_ARGS(rq, change)); TP_ARGS(rq, change));
DECLARE_TRACE(sched_compute_energy_tp,
TP_PROTO(struct task_struct *p, int dst_cpu, unsigned long energy,
unsigned long max_util, unsigned long busy_time),
TP_ARGS(p, dst_cpu, energy, max_util, busy_time));
#endif /* _TRACE_SCHED_H */ #endif /* _TRACE_SCHED_H */
/* This part must be outside protection */ /* This part must be outside protection */
......
...@@ -4,10 +4,6 @@ ...@@ -4,10 +4,6 @@
#include <linux/types.h> #include <linux/types.h>
struct sched_param {
int sched_priority;
};
#define SCHED_ATTR_SIZE_VER0 48 /* sizeof first published struct */ #define SCHED_ATTR_SIZE_VER0 48 /* sizeof first published struct */
#define SCHED_ATTR_SIZE_VER1 56 /* add: util_{min,max} */ #define SCHED_ATTR_SIZE_VER1 56 /* add: util_{min,max} */
......
...@@ -71,7 +71,11 @@ bool __refrigerator(bool check_kthr_stop) ...@@ -71,7 +71,11 @@ bool __refrigerator(bool check_kthr_stop)
for (;;) { for (;;) {
bool freeze; bool freeze;
raw_spin_lock_irq(&current->pi_lock);
set_current_state(TASK_FROZEN); set_current_state(TASK_FROZEN);
/* unstale saved_state so that __thaw_task() will wake us up */
current->saved_state = TASK_RUNNING;
raw_spin_unlock_irq(&current->pi_lock);
spin_lock_irq(&freezer_lock); spin_lock_irq(&freezer_lock);
freeze = freezing(current) && !(check_kthr_stop && kthread_should_stop()); freeze = freezing(current) && !(check_kthr_stop && kthread_should_stop());
...@@ -129,6 +133,7 @@ static int __set_task_frozen(struct task_struct *p, void *arg) ...@@ -129,6 +133,7 @@ static int __set_task_frozen(struct task_struct *p, void *arg)
WARN_ON_ONCE(debug_locks && p->lockdep_depth); WARN_ON_ONCE(debug_locks && p->lockdep_depth);
#endif #endif
p->saved_state = p->__state;
WRITE_ONCE(p->__state, TASK_FROZEN); WRITE_ONCE(p->__state, TASK_FROZEN);
return TASK_FROZEN; return TASK_FROZEN;
} }
...@@ -170,42 +175,34 @@ bool freeze_task(struct task_struct *p) ...@@ -170,42 +175,34 @@ bool freeze_task(struct task_struct *p)
} }
/* /*
* The special task states (TASK_STOPPED, TASK_TRACED) keep their canonical * Restore the saved_state before the task entered freezer. For typical task
* state in p->jobctl. If either of them got a wakeup that was missed because * in the __refrigerator(), saved_state == TASK_RUNNING so nothing happens
* TASK_FROZEN, then their canonical state reflects that and the below will * here. For tasks which were TASK_NORMAL | TASK_FREEZABLE, their initial state
* refuse to restore the special state and instead issue the wakeup. * is restored unless they got an expected wakeup (see ttwu_state_match()).
* Returns 1 if the task state was restored.
*/ */
static int __set_task_special(struct task_struct *p, void *arg) static int __restore_freezer_state(struct task_struct *p, void *arg)
{ {
unsigned int state = 0; unsigned int state = p->saved_state;
if (p->jobctl & JOBCTL_TRACED) if (state != TASK_RUNNING) {
state = TASK_TRACED;
else if (p->jobctl & JOBCTL_STOPPED)
state = TASK_STOPPED;
if (state)
WRITE_ONCE(p->__state, state); WRITE_ONCE(p->__state, state);
return 1;
}
return state; return 0;
} }
void __thaw_task(struct task_struct *p) void __thaw_task(struct task_struct *p)
{ {
unsigned long flags, flags2; unsigned long flags;
spin_lock_irqsave(&freezer_lock, flags); spin_lock_irqsave(&freezer_lock, flags);
if (WARN_ON_ONCE(freezing(p))) if (WARN_ON_ONCE(freezing(p)))
goto unlock; goto unlock;
if (lock_task_sighand(p, &flags2)) { if (task_call_func(p, __restore_freezer_state, NULL))
/* TASK_FROZEN -> TASK_{STOPPED,TRACED} */
bool ret = task_call_func(p, __set_task_special, NULL);
unlock_task_sighand(p, &flags2);
if (ret)
goto unlock; goto unlock;
}
wake_up_state(p, TASK_FROZEN); wake_up_state(p, TASK_FROZEN);
unlock: unlock:
......
...@@ -34,7 +34,6 @@ ...@@ -34,7 +34,6 @@
#include <linux/nospec.h> #include <linux/nospec.h>
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/psi.h> #include <linux/psi.h>
#include <linux/psi.h>
#include <linux/ptrace_api.h> #include <linux/ptrace_api.h>
#include <linux/sched_clock.h> #include <linux/sched_clock.h>
#include <linux/security.h> #include <linux/security.h>
......
This diff is collapsed.
...@@ -131,7 +131,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p, ...@@ -131,7 +131,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,
if (!dl_task_fits_capacity(p, cpu)) { if (!dl_task_fits_capacity(p, cpu)) {
cpumask_clear_cpu(cpu, later_mask); cpumask_clear_cpu(cpu, later_mask);
cap = capacity_orig_of(cpu); cap = arch_scale_cpu_capacity(cpu);
if (cap > max_cap || if (cap > max_cap ||
(cpu == task_cpu(p) && cap == max_cap)) { (cpu == task_cpu(p) && cap == max_cap)) {
......
...@@ -132,7 +132,7 @@ static inline unsigned long __dl_bw_capacity(const struct cpumask *mask) ...@@ -132,7 +132,7 @@ static inline unsigned long __dl_bw_capacity(const struct cpumask *mask)
int i; int i;
for_each_cpu_and(i, mask, cpu_active_mask) for_each_cpu_and(i, mask, cpu_active_mask)
cap += capacity_orig_of(i); cap += arch_scale_cpu_capacity(i);
return cap; return cap;
} }
...@@ -144,7 +144,7 @@ static inline unsigned long __dl_bw_capacity(const struct cpumask *mask) ...@@ -144,7 +144,7 @@ static inline unsigned long __dl_bw_capacity(const struct cpumask *mask)
static inline unsigned long dl_bw_capacity(int i) static inline unsigned long dl_bw_capacity(int i)
{ {
if (!sched_asym_cpucap_active() && if (!sched_asym_cpucap_active() &&
capacity_orig_of(i) == SCHED_CAPACITY_SCALE) { arch_scale_cpu_capacity(i) == SCHED_CAPACITY_SCALE) {
return dl_bw_cpus(i) << SCHED_CAPACITY_SHIFT; return dl_bw_cpus(i) << SCHED_CAPACITY_SHIFT;
} else { } else {
RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(), RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
...@@ -509,7 +509,6 @@ void init_dl_rq(struct dl_rq *dl_rq) ...@@ -509,7 +509,6 @@ void init_dl_rq(struct dl_rq *dl_rq)
/* zero means no -deadline tasks */ /* zero means no -deadline tasks */
dl_rq->earliest_dl.curr = dl_rq->earliest_dl.next = 0; dl_rq->earliest_dl.curr = dl_rq->earliest_dl.next = 0;
dl_rq->dl_nr_migratory = 0;
dl_rq->overloaded = 0; dl_rq->overloaded = 0;
dl_rq->pushable_dl_tasks_root = RB_ROOT_CACHED; dl_rq->pushable_dl_tasks_root = RB_ROOT_CACHED;
#else #else
...@@ -553,39 +552,6 @@ static inline void dl_clear_overload(struct rq *rq) ...@@ -553,39 +552,6 @@ static inline void dl_clear_overload(struct rq *rq)
cpumask_clear_cpu(rq->cpu, rq->rd->dlo_mask); cpumask_clear_cpu(rq->cpu, rq->rd->dlo_mask);
} }
static void update_dl_migration(struct dl_rq *dl_rq)
{
if (dl_rq->dl_nr_migratory && dl_rq->dl_nr_running > 1) {
if (!dl_rq->overloaded) {
dl_set_overload(rq_of_dl_rq(dl_rq));
dl_rq->overloaded = 1;
}
} else if (dl_rq->overloaded) {
dl_clear_overload(rq_of_dl_rq(dl_rq));
dl_rq->overloaded = 0;
}
}
static void inc_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
struct task_struct *p = dl_task_of(dl_se);
if (p->nr_cpus_allowed > 1)
dl_rq->dl_nr_migratory++;
update_dl_migration(dl_rq);
}
static void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
struct task_struct *p = dl_task_of(dl_se);
if (p->nr_cpus_allowed > 1)
dl_rq->dl_nr_migratory--;
update_dl_migration(dl_rq);
}
#define __node_2_pdl(node) \ #define __node_2_pdl(node) \
rb_entry((node), struct task_struct, pushable_dl_tasks) rb_entry((node), struct task_struct, pushable_dl_tasks)
...@@ -594,6 +560,11 @@ static inline bool __pushable_less(struct rb_node *a, const struct rb_node *b) ...@@ -594,6 +560,11 @@ static inline bool __pushable_less(struct rb_node *a, const struct rb_node *b)
return dl_entity_preempt(&__node_2_pdl(a)->dl, &__node_2_pdl(b)->dl); return dl_entity_preempt(&__node_2_pdl(a)->dl, &__node_2_pdl(b)->dl);
} }
static inline int has_pushable_dl_tasks(struct rq *rq)
{
return !RB_EMPTY_ROOT(&rq->dl.pushable_dl_tasks_root.rb_root);
}
/* /*
* The list of pushable -deadline task is not a plist, like in * The list of pushable -deadline task is not a plist, like in
* sched_rt.c, it is an rb-tree with tasks ordered by deadline. * sched_rt.c, it is an rb-tree with tasks ordered by deadline.
...@@ -609,6 +580,11 @@ static void enqueue_pushable_dl_task(struct rq *rq, struct task_struct *p) ...@@ -609,6 +580,11 @@ static void enqueue_pushable_dl_task(struct rq *rq, struct task_struct *p)
__pushable_less); __pushable_less);
if (leftmost) if (leftmost)
rq->dl.earliest_dl.next = p->dl.deadline; rq->dl.earliest_dl.next = p->dl.deadline;
if (!rq->dl.overloaded) {
dl_set_overload(rq);
rq->dl.overloaded = 1;
}
} }
static void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p) static void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p)
...@@ -625,11 +601,11 @@ static void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p) ...@@ -625,11 +601,11 @@ static void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p)
dl_rq->earliest_dl.next = __node_2_pdl(leftmost)->dl.deadline; dl_rq->earliest_dl.next = __node_2_pdl(leftmost)->dl.deadline;
RB_CLEAR_NODE(&p->pushable_dl_tasks); RB_CLEAR_NODE(&p->pushable_dl_tasks);
}
static inline int has_pushable_dl_tasks(struct rq *rq) if (!has_pushable_dl_tasks(rq) && rq->dl.overloaded) {
{ dl_clear_overload(rq);
return !RB_EMPTY_ROOT(&rq->dl.pushable_dl_tasks_root.rb_root); rq->dl.overloaded = 0;
}
} }
static int push_dl_task(struct rq *rq); static int push_dl_task(struct rq *rq);
...@@ -763,7 +739,7 @@ static inline void deadline_queue_pull_task(struct rq *rq) ...@@ -763,7 +739,7 @@ static inline void deadline_queue_pull_task(struct rq *rq)
static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags); static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags); static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags);
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, int flags); static void wakeup_preempt_dl(struct rq *rq, struct task_struct *p, int flags);
static inline void replenish_dl_new_period(struct sched_dl_entity *dl_se, static inline void replenish_dl_new_period(struct sched_dl_entity *dl_se,
struct rq *rq) struct rq *rq)
...@@ -1175,7 +1151,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer) ...@@ -1175,7 +1151,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
enqueue_task_dl(rq, p, ENQUEUE_REPLENISH); enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
if (dl_task(rq->curr)) if (dl_task(rq->curr))
check_preempt_curr_dl(rq, p, 0); wakeup_preempt_dl(rq, p, 0);
else else
resched_curr(rq); resched_curr(rq);
...@@ -1504,7 +1480,6 @@ void inc_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq) ...@@ -1504,7 +1480,6 @@ void inc_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
add_nr_running(rq_of_dl_rq(dl_rq), 1); add_nr_running(rq_of_dl_rq(dl_rq), 1);
inc_dl_deadline(dl_rq, deadline); inc_dl_deadline(dl_rq, deadline);
inc_dl_migration(dl_se, dl_rq);
} }
static inline static inline
...@@ -1518,7 +1493,6 @@ void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq) ...@@ -1518,7 +1493,6 @@ void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
sub_nr_running(rq_of_dl_rq(dl_rq), 1); sub_nr_running(rq_of_dl_rq(dl_rq), 1);
dec_dl_deadline(dl_rq, dl_se->deadline); dec_dl_deadline(dl_rq, dl_se->deadline);
dec_dl_migration(dl_se, dl_rq);
} }
static inline bool __dl_less(struct rb_node *a, const struct rb_node *b) static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
...@@ -1939,7 +1913,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf) ...@@ -1939,7 +1913,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
* Only called when both the current and waking task are -deadline * Only called when both the current and waking task are -deadline
* tasks. * tasks.
*/ */
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, static void wakeup_preempt_dl(struct rq *rq, struct task_struct *p,
int flags) int flags)
{ {
if (dl_entity_preempt(&p->dl, &rq->curr->dl)) { if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
...@@ -2291,9 +2265,6 @@ static int push_dl_task(struct rq *rq) ...@@ -2291,9 +2265,6 @@ static int push_dl_task(struct rq *rq)
struct rq *later_rq; struct rq *later_rq;
int ret = 0; int ret = 0;
if (!rq->dl.overloaded)
return 0;
next_task = pick_next_pushable_dl_task(rq); next_task = pick_next_pushable_dl_task(rq);
if (!next_task) if (!next_task)
return 0; return 0;
...@@ -2449,9 +2420,11 @@ static void pull_dl_task(struct rq *this_rq) ...@@ -2449,9 +2420,11 @@ static void pull_dl_task(struct rq *this_rq)
double_unlock_balance(this_rq, src_rq); double_unlock_balance(this_rq, src_rq);
if (push_task) { if (push_task) {
preempt_disable();
raw_spin_rq_unlock(this_rq); raw_spin_rq_unlock(this_rq);
stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop, stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
push_task, &src_rq->push_work); push_task, &src_rq->push_work);
preempt_enable();
raw_spin_rq_lock(this_rq); raw_spin_rq_lock(this_rq);
} }
} }
...@@ -2652,7 +2625,7 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p) ...@@ -2652,7 +2625,7 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
deadline_queue_push_tasks(rq); deadline_queue_push_tasks(rq);
#endif #endif
if (dl_task(rq->curr)) if (dl_task(rq->curr))
check_preempt_curr_dl(rq, p, 0); wakeup_preempt_dl(rq, p, 0);
else else
resched_curr(rq); resched_curr(rq);
} else { } else {
...@@ -2721,7 +2694,7 @@ DEFINE_SCHED_CLASS(dl) = { ...@@ -2721,7 +2694,7 @@ DEFINE_SCHED_CLASS(dl) = {
.dequeue_task = dequeue_task_dl, .dequeue_task = dequeue_task_dl,
.yield_task = yield_task_dl, .yield_task = yield_task_dl,
.check_preempt_curr = check_preempt_curr_dl, .wakeup_preempt = wakeup_preempt_dl,
.pick_next_task = pick_next_task_dl, .pick_next_task = pick_next_task_dl,
.put_prev_task = put_prev_task_dl, .put_prev_task = put_prev_task_dl,
......
...@@ -8,7 +8,7 @@ ...@@ -8,7 +8,7 @@
*/ */
/* /*
* This allows printing both to /proc/sched_debug and * This allows printing both to /sys/kernel/debug/sched/debug and
* to the console * to the console
*/ */
#define SEQ_printf(m, x...) \ #define SEQ_printf(m, x...) \
...@@ -724,9 +724,6 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq) ...@@ -724,9 +724,6 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
SEQ_printf(m, " .%-30s: %Ld.%06ld\n", #x, SPLIT_NS(rt_rq->x)) SEQ_printf(m, " .%-30s: %Ld.%06ld\n", #x, SPLIT_NS(rt_rq->x))
PU(rt_nr_running); PU(rt_nr_running);
#ifdef CONFIG_SMP
PU(rt_nr_migratory);
#endif
P(rt_throttled); P(rt_throttled);
PN(rt_time); PN(rt_time);
PN(rt_runtime); PN(rt_runtime);
...@@ -748,7 +745,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq) ...@@ -748,7 +745,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
PU(dl_nr_running); PU(dl_nr_running);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
PU(dl_nr_migratory);
dl_bw = &cpu_rq(cpu)->rd->dl_bw; dl_bw = &cpu_rq(cpu)->rd->dl_bw;
#else #else
dl_bw = &dl_rq->dl_bw; dl_bw = &dl_rq->dl_bw;
...@@ -864,7 +860,6 @@ static void sched_debug_header(struct seq_file *m) ...@@ -864,7 +860,6 @@ static void sched_debug_header(struct seq_file *m)
#define PN(x) \ #define PN(x) \
SEQ_printf(m, " .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x)) SEQ_printf(m, " .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x))
PN(sysctl_sched_base_slice); PN(sysctl_sched_base_slice);
P(sysctl_sched_child_runs_first);
P(sysctl_sched_features); P(sysctl_sched_features);
#undef PN #undef PN
#undef P #undef P
......
This diff is collapsed.
...@@ -49,7 +49,6 @@ SCHED_FEAT(TTWU_QUEUE, true) ...@@ -49,7 +49,6 @@ SCHED_FEAT(TTWU_QUEUE, true)
/* /*
* When doing wakeups, attempt to limit superfluous scans of the LLC domain. * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
*/ */
SCHED_FEAT(SIS_PROP, false)
SCHED_FEAT(SIS_UTIL, true) SCHED_FEAT(SIS_UTIL, true)
/* /*
......
...@@ -401,7 +401,7 @@ balance_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) ...@@ -401,7 +401,7 @@ balance_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
/* /*
* Idle tasks are unconditionally rescheduled: * Idle tasks are unconditionally rescheduled:
*/ */
static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int flags) static void wakeup_preempt_idle(struct rq *rq, struct task_struct *p, int flags)
{ {
resched_curr(rq); resched_curr(rq);
} }
...@@ -482,7 +482,7 @@ DEFINE_SCHED_CLASS(idle) = { ...@@ -482,7 +482,7 @@ DEFINE_SCHED_CLASS(idle) = {
/* dequeue is not valid, we print a debug message there: */ /* dequeue is not valid, we print a debug message there: */
.dequeue_task = dequeue_task_idle, .dequeue_task = dequeue_task_idle,
.check_preempt_curr = check_preempt_curr_idle, .wakeup_preempt = wakeup_preempt_idle,
.pick_next_task = pick_next_task_idle, .pick_next_task = pick_next_task_idle,
.put_prev_task = put_prev_task_idle, .put_prev_task = put_prev_task_idle,
......
// SPDX-License-Identifier: GPL-2.0 // SPDX-License-Identifier: GPL-2.0
/* /*
* Per Entity Load Tracking * Per Entity Load Tracking (PELT)
* *
* Copyright (C) 2007 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> * Copyright (C) 2007 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
* *
......
...@@ -434,14 +434,13 @@ static u64 window_update(struct psi_window *win, u64 now, u64 value) ...@@ -434,14 +434,13 @@ static u64 window_update(struct psi_window *win, u64 now, u64 value)
return growth; return growth;
} }
static u64 update_triggers(struct psi_group *group, u64 now, bool *update_total, static void update_triggers(struct psi_group *group, u64 now,
enum psi_aggregators aggregator) enum psi_aggregators aggregator)
{ {
struct psi_trigger *t; struct psi_trigger *t;
u64 *total = group->total[aggregator]; u64 *total = group->total[aggregator];
struct list_head *triggers; struct list_head *triggers;
u64 *aggregator_total; u64 *aggregator_total;
*update_total = false;
if (aggregator == PSI_AVGS) { if (aggregator == PSI_AVGS) {
triggers = &group->avg_triggers; triggers = &group->avg_triggers;
...@@ -471,14 +470,6 @@ static u64 update_triggers(struct psi_group *group, u64 now, bool *update_total, ...@@ -471,14 +470,6 @@ static u64 update_triggers(struct psi_group *group, u64 now, bool *update_total,
* events without dropping any). * events without dropping any).
*/ */
if (new_stall) { if (new_stall) {
/*
* Multiple triggers might be looking at the same state,
* remember to update group->polling_total[] once we've
* been through all of them. Also remember to extend the
* polling time if we see new stall activity.
*/
*update_total = true;
/* Calculate growth since last update */ /* Calculate growth since last update */
growth = window_update(&t->win, now, total[t->state]); growth = window_update(&t->win, now, total[t->state]);
if (!t->pending_event) { if (!t->pending_event) {
...@@ -503,8 +494,6 @@ static u64 update_triggers(struct psi_group *group, u64 now, bool *update_total, ...@@ -503,8 +494,6 @@ static u64 update_triggers(struct psi_group *group, u64 now, bool *update_total,
/* Reset threshold breach flag once event got generated */ /* Reset threshold breach flag once event got generated */
t->pending_event = false; t->pending_event = false;
} }
return now + group->rtpoll_min_period;
} }
static u64 update_averages(struct psi_group *group, u64 now) static u64 update_averages(struct psi_group *group, u64 now)
...@@ -565,7 +554,6 @@ static void psi_avgs_work(struct work_struct *work) ...@@ -565,7 +554,6 @@ static void psi_avgs_work(struct work_struct *work)
struct delayed_work *dwork; struct delayed_work *dwork;
struct psi_group *group; struct psi_group *group;
u32 changed_states; u32 changed_states;
bool update_total;
u64 now; u64 now;
dwork = to_delayed_work(work); dwork = to_delayed_work(work);
...@@ -584,7 +572,7 @@ static void psi_avgs_work(struct work_struct *work) ...@@ -584,7 +572,7 @@ static void psi_avgs_work(struct work_struct *work)
* go - see calc_avgs() and missed_periods. * go - see calc_avgs() and missed_periods.
*/ */
if (now >= group->avg_next_update) { if (now >= group->avg_next_update) {
update_triggers(group, now, &update_total, PSI_AVGS); update_triggers(group, now, PSI_AVGS);
group->avg_next_update = update_averages(group, now); group->avg_next_update = update_averages(group, now);
} }
...@@ -608,7 +596,7 @@ static void init_rtpoll_triggers(struct psi_group *group, u64 now) ...@@ -608,7 +596,7 @@ static void init_rtpoll_triggers(struct psi_group *group, u64 now)
group->rtpoll_next_update = now + group->rtpoll_min_period; group->rtpoll_next_update = now + group->rtpoll_min_period;
} }
/* Schedule polling if it's not already scheduled or forced. */ /* Schedule rtpolling if it's not already scheduled or forced. */
static void psi_schedule_rtpoll_work(struct psi_group *group, unsigned long delay, static void psi_schedule_rtpoll_work(struct psi_group *group, unsigned long delay,
bool force) bool force)
{ {
...@@ -640,7 +628,6 @@ static void psi_rtpoll_work(struct psi_group *group) ...@@ -640,7 +628,6 @@ static void psi_rtpoll_work(struct psi_group *group)
{ {
bool force_reschedule = false; bool force_reschedule = false;
u32 changed_states; u32 changed_states;
bool update_total;
u64 now; u64 now;
mutex_lock(&group->rtpoll_trigger_lock); mutex_lock(&group->rtpoll_trigger_lock);
...@@ -649,37 +636,37 @@ static void psi_rtpoll_work(struct psi_group *group) ...@@ -649,37 +636,37 @@ static void psi_rtpoll_work(struct psi_group *group)
if (now > group->rtpoll_until) { if (now > group->rtpoll_until) {
/* /*
* We are either about to start or might stop polling if no * We are either about to start or might stop rtpolling if no
* state change was recorded. Resetting poll_scheduled leaves * state change was recorded. Resetting rtpoll_scheduled leaves
* a small window for psi_group_change to sneak in and schedule * a small window for psi_group_change to sneak in and schedule
* an immediate poll_work before we get to rescheduling. One * an immediate rtpoll_work before we get to rescheduling. One
* potential extra wakeup at the end of the polling window * potential extra wakeup at the end of the rtpolling window
* should be negligible and polling_next_update still keeps * should be negligible and rtpoll_next_update still keeps
* updates correctly on schedule. * updates correctly on schedule.
*/ */
atomic_set(&group->rtpoll_scheduled, 0); atomic_set(&group->rtpoll_scheduled, 0);
/* /*
* A task change can race with the poll worker that is supposed to * A task change can race with the rtpoll worker that is supposed to
* report on it. To avoid missing events, ensure ordering between * report on it. To avoid missing events, ensure ordering between
* poll_scheduled and the task state accesses, such that if the poll * rtpoll_scheduled and the task state accesses, such that if the
* worker misses the state update, the task change is guaranteed to * rtpoll worker misses the state update, the task change is
* reschedule the poll worker: * guaranteed to reschedule the rtpoll worker:
* *
* poll worker: * rtpoll worker:
* atomic_set(poll_scheduled, 0) * atomic_set(rtpoll_scheduled, 0)
* smp_mb() * smp_mb()
* LOAD states * LOAD states
* *
* task change: * task change:
* STORE states * STORE states
* if atomic_xchg(poll_scheduled, 1) == 0: * if atomic_xchg(rtpoll_scheduled, 1) == 0:
* schedule poll worker * schedule rtpoll worker
* *
* The atomic_xchg() implies a full barrier. * The atomic_xchg() implies a full barrier.
*/ */
smp_mb(); smp_mb();
} else { } else {
/* Polling window is not over, keep rescheduling */ /* The rtpolling window is not over, keep rescheduling */
force_reschedule = true; force_reschedule = true;
} }
...@@ -687,7 +674,7 @@ static void psi_rtpoll_work(struct psi_group *group) ...@@ -687,7 +674,7 @@ static void psi_rtpoll_work(struct psi_group *group)
collect_percpu_times(group, PSI_POLL, &changed_states); collect_percpu_times(group, PSI_POLL, &changed_states);
if (changed_states & group->rtpoll_states) { if (changed_states & group->rtpoll_states) {
/* Initialize trigger windows when entering polling mode */ /* Initialize trigger windows when entering rtpolling mode */
if (now > group->rtpoll_until) if (now > group->rtpoll_until)
init_rtpoll_triggers(group, now); init_rtpoll_triggers(group, now);
...@@ -706,11 +693,13 @@ static void psi_rtpoll_work(struct psi_group *group) ...@@ -706,11 +693,13 @@ static void psi_rtpoll_work(struct psi_group *group)
} }
if (now >= group->rtpoll_next_update) { if (now >= group->rtpoll_next_update) {
group->rtpoll_next_update = update_triggers(group, now, &update_total, PSI_POLL); if (changed_states & group->rtpoll_states) {
if (update_total) update_triggers(group, now, PSI_POLL);
memcpy(group->rtpoll_total, group->total[PSI_POLL], memcpy(group->rtpoll_total, group->total[PSI_POLL],
sizeof(group->rtpoll_total)); sizeof(group->rtpoll_total));
} }
group->rtpoll_next_update = now + group->rtpoll_min_period;
}
psi_schedule_rtpoll_work(group, psi_schedule_rtpoll_work(group,
nsecs_to_jiffies(group->rtpoll_next_update - now) + 1, nsecs_to_jiffies(group->rtpoll_next_update - now) + 1,
...@@ -1009,6 +998,9 @@ void psi_account_irqtime(struct task_struct *task, u32 delta) ...@@ -1009,6 +998,9 @@ void psi_account_irqtime(struct task_struct *task, u32 delta)
struct psi_group_cpu *groupc; struct psi_group_cpu *groupc;
u64 now; u64 now;
if (static_branch_likely(&psi_disabled))
return;
if (!task->pid) if (!task->pid)
return; return;
......
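The trigger code reworked above (update_triggers() losing its update_total parameter, rtpoll_total only being refreshed on real state changes) ultimately serves the user-space PSI trigger ABI: write a threshold to /proc/pressure/<resource>, then poll the fd for POLLPRI. A minimal user-space sketch of that documented interface, error handling trimmed:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* Wake us when memory "some" stall time exceeds 150ms in a 1s window. */
        const char trig[] = "some 150000 1000000";
        struct pollfd pfd;

        pfd.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
        if (pfd.fd < 0 || write(pfd.fd, trig, strlen(trig) + 1) < 0) {
                perror("psi trigger setup");
                return 1;
        }
        pfd.events = POLLPRI;
        while (poll(&pfd, 1, -1) > 0) {
                if (pfd.revents & POLLPRI)
                        puts("memory pressure threshold crossed");
                if (pfd.revents & POLLERR)
                        break;          /* trigger torn down / fd no longer valid */
        }
        close(pfd.fd);
        return 0;
}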
...@@ -16,7 +16,7 @@ struct rt_bandwidth def_rt_bandwidth; ...@@ -16,7 +16,7 @@ struct rt_bandwidth def_rt_bandwidth;
* period over which we measure -rt task CPU usage in us. * period over which we measure -rt task CPU usage in us.
* default: 1s * default: 1s
*/ */
unsigned int sysctl_sched_rt_period = 1000000; int sysctl_sched_rt_period = 1000000;
/* /*
* part of the period that we allow rt tasks to run in us. * part of the period that we allow rt tasks to run in us.
...@@ -34,9 +34,11 @@ static struct ctl_table sched_rt_sysctls[] = { ...@@ -34,9 +34,11 @@ static struct ctl_table sched_rt_sysctls[] = {
{ {
.procname = "sched_rt_period_us", .procname = "sched_rt_period_us",
.data = &sysctl_sched_rt_period, .data = &sysctl_sched_rt_period,
.maxlen = sizeof(unsigned int), .maxlen = sizeof(int),
.mode = 0644, .mode = 0644,
.proc_handler = sched_rt_handler, .proc_handler = sched_rt_handler,
.extra1 = SYSCTL_ONE,
.extra2 = SYSCTL_INT_MAX,
}, },
{ {
.procname = "sched_rt_runtime_us", .procname = "sched_rt_runtime_us",
...@@ -44,6 +46,8 @@ static struct ctl_table sched_rt_sysctls[] = { ...@@ -44,6 +46,8 @@ static struct ctl_table sched_rt_sysctls[] = {
.maxlen = sizeof(int), .maxlen = sizeof(int),
.mode = 0644, .mode = 0644,
.proc_handler = sched_rt_handler, .proc_handler = sched_rt_handler,
.extra1 = SYSCTL_NEG_ONE,
.extra2 = (void *)&sysctl_sched_rt_period,
}, },
{ {
.procname = "sched_rr_timeslice_ms", .procname = "sched_rr_timeslice_ms",
...@@ -143,7 +147,6 @@ void init_rt_rq(struct rt_rq *rt_rq) ...@@ -143,7 +147,6 @@ void init_rt_rq(struct rt_rq *rt_rq)
#if defined CONFIG_SMP #if defined CONFIG_SMP
rt_rq->highest_prio.curr = MAX_RT_PRIO-1; rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
rt_rq->highest_prio.next = MAX_RT_PRIO-1; rt_rq->highest_prio.next = MAX_RT_PRIO-1;
rt_rq->rt_nr_migratory = 0;
rt_rq->overloaded = 0; rt_rq->overloaded = 0;
plist_head_init(&rt_rq->pushable_tasks); plist_head_init(&rt_rq->pushable_tasks);
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
...@@ -358,53 +361,6 @@ static inline void rt_clear_overload(struct rq *rq) ...@@ -358,53 +361,6 @@ static inline void rt_clear_overload(struct rq *rq)
cpumask_clear_cpu(rq->cpu, rq->rd->rto_mask); cpumask_clear_cpu(rq->cpu, rq->rd->rto_mask);
} }
static void update_rt_migration(struct rt_rq *rt_rq)
{
if (rt_rq->rt_nr_migratory && rt_rq->rt_nr_total > 1) {
if (!rt_rq->overloaded) {
rt_set_overload(rq_of_rt_rq(rt_rq));
rt_rq->overloaded = 1;
}
} else if (rt_rq->overloaded) {
rt_clear_overload(rq_of_rt_rq(rt_rq));
rt_rq->overloaded = 0;
}
}
static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
struct task_struct *p;
if (!rt_entity_is_task(rt_se))
return;
p = rt_task_of(rt_se);
rt_rq = &rq_of_rt_rq(rt_rq)->rt;
rt_rq->rt_nr_total++;
if (p->nr_cpus_allowed > 1)
rt_rq->rt_nr_migratory++;
update_rt_migration(rt_rq);
}
static void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
struct task_struct *p;
if (!rt_entity_is_task(rt_se))
return;
p = rt_task_of(rt_se);
rt_rq = &rq_of_rt_rq(rt_rq)->rt;
rt_rq->rt_nr_total--;
if (p->nr_cpus_allowed > 1)
rt_rq->rt_nr_migratory--;
update_rt_migration(rt_rq);
}
static inline int has_pushable_tasks(struct rq *rq) static inline int has_pushable_tasks(struct rq *rq)
{ {
return !plist_head_empty(&rq->rt.pushable_tasks); return !plist_head_empty(&rq->rt.pushable_tasks);
...@@ -438,6 +394,11 @@ static void enqueue_pushable_task(struct rq *rq, struct task_struct *p) ...@@ -438,6 +394,11 @@ static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
/* Update the highest prio pushable task */ /* Update the highest prio pushable task */
if (p->prio < rq->rt.highest_prio.next) if (p->prio < rq->rt.highest_prio.next)
rq->rt.highest_prio.next = p->prio; rq->rt.highest_prio.next = p->prio;
if (!rq->rt.overloaded) {
rt_set_overload(rq);
rq->rt.overloaded = 1;
}
} }
static void dequeue_pushable_task(struct rq *rq, struct task_struct *p) static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
...@@ -451,6 +412,11 @@ static void dequeue_pushable_task(struct rq *rq, struct task_struct *p) ...@@ -451,6 +412,11 @@ static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
rq->rt.highest_prio.next = p->prio; rq->rt.highest_prio.next = p->prio;
} else { } else {
rq->rt.highest_prio.next = MAX_RT_PRIO-1; rq->rt.highest_prio.next = MAX_RT_PRIO-1;
if (rq->rt.overloaded) {
rt_clear_overload(rq);
rq->rt.overloaded = 0;
}
} }
} }
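With rt_nr_migratory gone, the overload bit is now driven directly by the pushable list in the two hunks above: set when a pushable task is enqueued, cleared when the last one is dequeued. A hedged illustration of the resulting invariant (this helper is hypothetical, not part of the patch):

/* Hypothetical debug helper: after this change, rt.overloaded should be
 * set exactly while the pushable list is non-empty.
 */
static inline void rt_check_overload_state(struct rq *rq)
{
        WARN_ON_ONCE(!!rq->rt.overloaded !=
                     !plist_head_empty(&rq->rt.pushable_tasks));
}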
...@@ -464,16 +430,6 @@ static inline void dequeue_pushable_task(struct rq *rq, struct task_struct *p) ...@@ -464,16 +430,6 @@ static inline void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
{ {
} }
static inline
void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
}
static inline
void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
}
static inline void rt_queue_push_tasks(struct rq *rq) static inline void rt_queue_push_tasks(struct rq *rq)
{ {
} }
...@@ -515,7 +471,7 @@ static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu) ...@@ -515,7 +471,7 @@ static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
min_cap = uclamp_eff_value(p, UCLAMP_MIN); min_cap = uclamp_eff_value(p, UCLAMP_MIN);
max_cap = uclamp_eff_value(p, UCLAMP_MAX); max_cap = uclamp_eff_value(p, UCLAMP_MAX);
cpu_cap = capacity_orig_of(cpu); cpu_cap = arch_scale_cpu_capacity(cpu);
return cpu_cap >= min(min_cap, max_cap); return cpu_cap >= min(min_cap, max_cap);
} }
...@@ -953,7 +909,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun) ...@@ -953,7 +909,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
/* /*
* When we're idle and a woken (rt) task is * When we're idle and a woken (rt) task is
* throttled check_preempt_curr() will set * throttled wakeup_preempt() will set
* skip_update and the time between the wakeup * skip_update and the time between the wakeup
* and this unthrottle will get accounted as * and this unthrottle will get accounted as
* 'runtime'. * 'runtime'.
...@@ -1281,7 +1237,6 @@ void inc_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq) ...@@ -1281,7 +1237,6 @@ void inc_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
rt_rq->rr_nr_running += rt_se_rr_nr_running(rt_se); rt_rq->rr_nr_running += rt_se_rr_nr_running(rt_se);
inc_rt_prio(rt_rq, prio); inc_rt_prio(rt_rq, prio);
inc_rt_migration(rt_se, rt_rq);
inc_rt_group(rt_se, rt_rq); inc_rt_group(rt_se, rt_rq);
} }
...@@ -1294,7 +1249,6 @@ void dec_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq) ...@@ -1294,7 +1249,6 @@ void dec_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
rt_rq->rr_nr_running -= rt_se_rr_nr_running(rt_se); rt_rq->rr_nr_running -= rt_se_rr_nr_running(rt_se);
dec_rt_prio(rt_rq, rt_se_prio(rt_se)); dec_rt_prio(rt_rq, rt_se_prio(rt_se));
dec_rt_migration(rt_se, rt_rq);
dec_rt_group(rt_se, rt_rq); dec_rt_group(rt_se, rt_rq);
} }
...@@ -1715,7 +1669,7 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf) ...@@ -1715,7 +1669,7 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
/* /*
* Preempt the current task with a newly woken task if needed: * Preempt the current task with a newly woken task if needed:
*/ */
static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags) static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
{ {
if (p->prio < rq->curr->prio) { if (p->prio < rq->curr->prio) {
resched_curr(rq); resched_curr(rq);
...@@ -2109,9 +2063,11 @@ static int push_rt_task(struct rq *rq, bool pull) ...@@ -2109,9 +2063,11 @@ static int push_rt_task(struct rq *rq, bool pull)
*/ */
push_task = get_push_task(rq); push_task = get_push_task(rq);
if (push_task) { if (push_task) {
preempt_disable();
raw_spin_rq_unlock(rq); raw_spin_rq_unlock(rq);
stop_one_cpu_nowait(rq->cpu, push_cpu_stop, stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
push_task, &rq->push_work); push_task, &rq->push_work);
preempt_enable();
raw_spin_rq_lock(rq); raw_spin_rq_lock(rq);
} }
...@@ -2448,9 +2404,11 @@ static void pull_rt_task(struct rq *this_rq) ...@@ -2448,9 +2404,11 @@ static void pull_rt_task(struct rq *this_rq)
double_unlock_balance(this_rq, src_rq); double_unlock_balance(this_rq, src_rq);
if (push_task) { if (push_task) {
preempt_disable();
raw_spin_rq_unlock(this_rq); raw_spin_rq_unlock(this_rq);
stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop, stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
push_task, &src_rq->push_work); push_task, &src_rq->push_work);
preempt_enable();
raw_spin_rq_lock(this_rq); raw_spin_rq_lock(this_rq);
} }
} }
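The preempt_disable()/preempt_enable() pairs added around the unlock + stop_one_cpu_nowait() sequences here and in deadline.c belong to the "Fix stop_one_cpu_nowait() vs hotplug" change from this series. A commented sketch of the ordering the hunks establish, wrapped in a hypothetical helper purely for illustration:

/* Hypothetical wrapper (not in the patch): preemption stays disabled from
 * before the runqueue lock is dropped until the stopper work has been
 * queued, then the lock is re-taken.
 */
static inline void rq_kick_push_stopper(struct rq *locked_rq, struct rq *src_rq,
                                        struct task_struct *push_task)
{
        preempt_disable();
        raw_spin_rq_unlock(locked_rq);
        stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
                            push_task, &src_rq->push_work);
        preempt_enable();
        raw_spin_rq_lock(locked_rq);
}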
...@@ -2702,7 +2660,7 @@ DEFINE_SCHED_CLASS(rt) = { ...@@ -2702,7 +2660,7 @@ DEFINE_SCHED_CLASS(rt) = {
.dequeue_task = dequeue_task_rt, .dequeue_task = dequeue_task_rt,
.yield_task = yield_task_rt, .yield_task = yield_task_rt,
.check_preempt_curr = check_preempt_curr_rt, .wakeup_preempt = wakeup_preempt_rt,
.pick_next_task = pick_next_task_rt, .pick_next_task = pick_next_task_rt,
.put_prev_task = put_prev_task_rt, .put_prev_task = put_prev_task_rt,
...@@ -2985,9 +2943,6 @@ static int sched_rt_global_constraints(void) ...@@ -2985,9 +2943,6 @@ static int sched_rt_global_constraints(void)
#ifdef CONFIG_SYSCTL #ifdef CONFIG_SYSCTL
static int sched_rt_global_validate(void) static int sched_rt_global_validate(void)
{ {
if (sysctl_sched_rt_period <= 0)
return -EINVAL;
if ((sysctl_sched_rt_runtime != RUNTIME_INF) && if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
((sysctl_sched_rt_runtime > sysctl_sched_rt_period) || ((sysctl_sched_rt_runtime > sysctl_sched_rt_period) ||
((u64)sysctl_sched_rt_runtime * ((u64)sysctl_sched_rt_runtime *
...@@ -3018,7 +2973,7 @@ static int sched_rt_handler(struct ctl_table *table, int write, void *buffer, ...@@ -3018,7 +2973,7 @@ static int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
old_period = sysctl_sched_rt_period; old_period = sysctl_sched_rt_period;
old_runtime = sysctl_sched_rt_runtime; old_runtime = sysctl_sched_rt_runtime;
ret = proc_dointvec(table, write, buffer, lenp, ppos); ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
if (!ret && write) { if (!ret && write) {
ret = sched_rt_global_validate(); ret = sched_rt_global_validate();
......
...@@ -74,15 +74,6 @@ ...@@ -74,15 +74,6 @@
#include "../workqueue_internal.h" #include "../workqueue_internal.h"
#ifdef CONFIG_CGROUP_SCHED
#include <linux/cgroup.h>
#include <linux/psi.h>
#endif
#ifdef CONFIG_SCHED_DEBUG
# include <linux/static_key.h>
#endif
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
# include <asm/paravirt.h> # include <asm/paravirt.h>
# include <asm/paravirt_api_clock.h> # include <asm/paravirt_api_clock.h>
...@@ -109,14 +100,12 @@ extern __read_mostly int scheduler_running; ...@@ -109,14 +100,12 @@ extern __read_mostly int scheduler_running;
extern unsigned long calc_load_update; extern unsigned long calc_load_update;
extern atomic_long_t calc_load_tasks; extern atomic_long_t calc_load_tasks;
extern unsigned int sysctl_sched_child_runs_first;
extern void calc_global_load_tick(struct rq *this_rq); extern void calc_global_load_tick(struct rq *this_rq);
extern long calc_load_fold_active(struct rq *this_rq, long adjust); extern long calc_load_fold_active(struct rq *this_rq, long adjust);
extern void call_trace_sched_update_nr_running(struct rq *rq, int count); extern void call_trace_sched_update_nr_running(struct rq *rq, int count);
extern unsigned int sysctl_sched_rt_period; extern int sysctl_sched_rt_period;
extern int sysctl_sched_rt_runtime; extern int sysctl_sched_rt_runtime;
extern int sched_rr_timeslice; extern int sched_rr_timeslice;
...@@ -594,6 +583,7 @@ struct cfs_rq { ...@@ -594,6 +583,7 @@ struct cfs_rq {
} removed; } removed;
#ifdef CONFIG_FAIR_GROUP_SCHED #ifdef CONFIG_FAIR_GROUP_SCHED
u64 last_update_tg_load_avg;
unsigned long tg_load_avg_contrib; unsigned long tg_load_avg_contrib;
long propagate; long propagate;
long prop_runnable_sum; long prop_runnable_sum;
...@@ -644,9 +634,7 @@ struct cfs_rq { ...@@ -644,9 +634,7 @@ struct cfs_rq {
int throttled; int throttled;
int throttle_count; int throttle_count;
struct list_head throttled_list; struct list_head throttled_list;
#ifdef CONFIG_SMP
struct list_head throttled_csd_list; struct list_head throttled_csd_list;
#endif
#endif /* CONFIG_CFS_BANDWIDTH */ #endif /* CONFIG_CFS_BANDWIDTH */
#endif /* CONFIG_FAIR_GROUP_SCHED */ #endif /* CONFIG_FAIR_GROUP_SCHED */
}; };
...@@ -675,8 +663,6 @@ struct rt_rq { ...@@ -675,8 +663,6 @@ struct rt_rq {
} highest_prio; } highest_prio;
#endif #endif
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
unsigned int rt_nr_migratory;
unsigned int rt_nr_total;
int overloaded; int overloaded;
struct plist_head pushable_tasks; struct plist_head pushable_tasks;
...@@ -721,7 +707,6 @@ struct dl_rq { ...@@ -721,7 +707,6 @@ struct dl_rq {
u64 next; u64 next;
} earliest_dl; } earliest_dl;
unsigned int dl_nr_migratory;
int overloaded; int overloaded;
/* /*
...@@ -963,10 +948,6 @@ struct rq { ...@@ -963,10 +948,6 @@ struct rq {
/* runqueue lock: */ /* runqueue lock: */
raw_spinlock_t __lock; raw_spinlock_t __lock;
/*
* nr_running and cpu_load should be in the same cacheline because
* remote CPUs use both these fields when doing load calculation.
*/
unsigned int nr_running; unsigned int nr_running;
#ifdef CONFIG_NUMA_BALANCING #ifdef CONFIG_NUMA_BALANCING
unsigned int nr_numa_running; unsigned int nr_numa_running;
...@@ -1048,7 +1029,6 @@ struct rq { ...@@ -1048,7 +1029,6 @@ struct rq {
struct sched_domain __rcu *sd; struct sched_domain __rcu *sd;
unsigned long cpu_capacity; unsigned long cpu_capacity;
unsigned long cpu_capacity_orig;
struct balance_callback *balance_callback; struct balance_callback *balance_callback;
...@@ -1079,9 +1059,6 @@ struct rq { ...@@ -1079,9 +1059,6 @@ struct rq {
u64 idle_stamp; u64 idle_stamp;
u64 avg_idle; u64 avg_idle;
unsigned long wake_stamp;
u64 wake_avg_idle;
/* This is used to determine avg_idle's max value */ /* This is used to determine avg_idle's max value */
u64 max_idle_balance_cost; u64 max_idle_balance_cost;
...@@ -1658,6 +1635,11 @@ task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf) ...@@ -1658,6 +1635,11 @@ task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags); raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
} }
DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct,
_T->rq = task_rq_lock(_T->lock, &_T->rf),
task_rq_unlock(_T->rq, _T->lock, &_T->rf),
struct rq *rq; struct rq_flags rf)
static inline void static inline void
rq_lock_irqsave(struct rq *rq, struct rq_flags *rf) rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
__acquires(rq->lock) __acquires(rq->lock)
...@@ -1868,11 +1850,13 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag) ...@@ -1868,11 +1850,13 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc); DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
DECLARE_PER_CPU(int, sd_llc_size); DECLARE_PER_CPU(int, sd_llc_size);
DECLARE_PER_CPU(int, sd_llc_id); DECLARE_PER_CPU(int, sd_llc_id);
DECLARE_PER_CPU(int, sd_share_id);
DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared); DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa); DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing); DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity); DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
extern struct static_key_false sched_asym_cpucapacity; extern struct static_key_false sched_asym_cpucapacity;
extern struct static_key_false sched_cluster_active;
static __always_inline bool sched_asym_cpucap_active(void) static __always_inline bool sched_asym_cpucap_active(void)
{ {
...@@ -2239,7 +2223,7 @@ struct sched_class { ...@@ -2239,7 +2223,7 @@ struct sched_class {
void (*yield_task) (struct rq *rq); void (*yield_task) (struct rq *rq);
bool (*yield_to_task)(struct rq *rq, struct task_struct *p); bool (*yield_to_task)(struct rq *rq, struct task_struct *p);
void (*check_preempt_curr)(struct rq *rq, struct task_struct *p, int flags); void (*wakeup_preempt)(struct rq *rq, struct task_struct *p, int flags);
struct task_struct *(*pick_next_task)(struct rq *rq); struct task_struct *(*pick_next_task)(struct rq *rq);
...@@ -2513,7 +2497,7 @@ static inline void sub_nr_running(struct rq *rq, unsigned count) ...@@ -2513,7 +2497,7 @@ static inline void sub_nr_running(struct rq *rq, unsigned count)
extern void activate_task(struct rq *rq, struct task_struct *p, int flags); extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags); extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);
extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags); extern void wakeup_preempt(struct rq *rq, struct task_struct *p, int flags);
#ifdef CONFIG_PREEMPT_RT #ifdef CONFIG_PREEMPT_RT
#define SCHED_NR_MIGRATE_BREAK 8 #define SCHED_NR_MIGRATE_BREAK 8
...@@ -2977,11 +2961,6 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {} ...@@ -2977,11 +2961,6 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
#endif #endif
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static inline unsigned long capacity_orig_of(int cpu)
{
return cpu_rq(cpu)->cpu_capacity_orig;
}
/** /**
* enum cpu_util_type - CPU utilization type * enum cpu_util_type - CPU utilization type
* @FREQUENCY_UTIL: Utilization used to select frequency * @FREQUENCY_UTIL: Utilization used to select frequency
...@@ -3219,6 +3198,8 @@ static inline bool sched_energy_enabled(void) ...@@ -3219,6 +3198,8 @@ static inline bool sched_energy_enabled(void)
return static_branch_unlikely(&sched_energy_present); return static_branch_unlikely(&sched_energy_present);
} }
extern struct cpufreq_governor schedutil_gov;
#else /* ! (CONFIG_ENERGY_MODEL && CONFIG_CPU_FREQ_GOV_SCHEDUTIL) */ #else /* ! (CONFIG_ENERGY_MODEL && CONFIG_CPU_FREQ_GOV_SCHEDUTIL) */
#define perf_domain_span(pd) NULL #define perf_domain_span(pd) NULL
......
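The DEFINE_LOCK_GUARD_1(task_rq_lock, ...) addition above is what lets the syscall paths use scope-based lock guards. A minimal sketch of how such a guard is typically consumed via the linux/cleanup.h helpers (the function below is hypothetical and only illustrates the pattern):

/* Hypothetical example: pin p's runqueue for the duration of the scope.
 * CLASS() instantiates the guard defined above; task_rq_lock() runs here
 * and task_rq_unlock() runs automatically when rq_guard goes out of scope.
 */
static bool task_is_queued_somewhere(struct task_struct *p)
{
        CLASS(task_rq_lock, rq_guard)(p);

        return task_on_rq_queued(p) && cpu_online(cpu_of(rq_guard.rq));
}

When the rq pointer itself is not needed, guard(task_rq_lock)(p); achieves the same acquire/release-at-scope-exit behaviour.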
...@@ -23,7 +23,7 @@ balance_stop(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) ...@@ -23,7 +23,7 @@ balance_stop(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
static void static void
check_preempt_curr_stop(struct rq *rq, struct task_struct *p, int flags) wakeup_preempt_stop(struct rq *rq, struct task_struct *p, int flags)
{ {
/* we're never preempted */ /* we're never preempted */
} }
...@@ -120,7 +120,7 @@ DEFINE_SCHED_CLASS(stop) = { ...@@ -120,7 +120,7 @@ DEFINE_SCHED_CLASS(stop) = {
.dequeue_task = dequeue_task_stop, .dequeue_task = dequeue_task_stop,
.yield_task = yield_task_stop, .yield_task = yield_task_stop,
.check_preempt_curr = check_preempt_curr_stop, .wakeup_preempt = wakeup_preempt_stop,
.pick_next_task = pick_next_task_stop, .pick_next_task = pick_next_task_stop,
.put_prev_task = put_prev_task_stop, .put_prev_task = put_prev_task_stop,
......
This diff is collapsed.
...@@ -146,9 +146,7 @@ unsigned int cpumask_local_spread(unsigned int i, int node) ...@@ -146,9 +146,7 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
/* Wrap: we always want a cpu. */ /* Wrap: we always want a cpu. */
i %= num_online_cpus(); i %= num_online_cpus();
cpu = (node == NUMA_NO_NODE) ? cpu = sched_numa_find_nth_cpu(cpu_online_mask, i, node);
cpumask_nth(i, cpu_online_mask) :
sched_numa_find_nth_cpu(cpu_online_mask, i, node);
WARN_ON(cpu >= nr_cpu_ids); WARN_ON(cpu >= nr_cpu_ids);
return cpu; return cpu;
......
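cpumask_local_spread() is mostly consumed by drivers spreading queues or interrupts across CPUs near their device; with the change above the NUMA_NO_NODE case takes the same sched_numa_find_nth_cpu() path as a real node id. A short hedged example of typical caller usage (hypothetical driver snippet, struct my_queue is made up for illustration):

/* Hypothetical driver snippet: pick a CPU near the device for queue i. */
static void my_driver_set_queue_affinity(struct my_queue *q, struct device *dev,
                                         unsigned int i)
{
        unsigned int cpu = cpumask_local_spread(i, dev_to_node(dev));

        q->cpu = cpu;   /* e.g. used later when setting the IRQ affinity hint */
}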
...@@ -131,22 +131,26 @@ static struct mempolicy default_policy = { ...@@ -131,22 +131,26 @@ static struct mempolicy default_policy = {
static struct mempolicy preferred_node_policy[MAX_NUMNODES]; static struct mempolicy preferred_node_policy[MAX_NUMNODES];
/** /**
* numa_map_to_online_node - Find closest online node * numa_nearest_node - Find nearest node by state
* @node: Node id to start the search * @node: Node id to start the search
* @state: State to filter the search
* *
* Lookup the next closest node by distance if @nid is not online. * Lookup the closest node by distance if @nid is not in state.
* *
* Return: this @node if it is online, otherwise the closest node by distance * Return: this @node if it is in state, otherwise the closest node by distance
*/ */
int numa_map_to_online_node(int node) int numa_nearest_node(int node, unsigned int state)
{ {
int min_dist = INT_MAX, dist, n, min_node; int min_dist = INT_MAX, dist, n, min_node;
if (node == NUMA_NO_NODE || node_online(node)) if (state >= NR_NODE_STATES)
return -EINVAL;
if (node == NUMA_NO_NODE || node_state(node, state))
return node; return node;
min_node = node; min_node = node;
for_each_online_node(n) { for_each_node_state(n, state) {
dist = node_distance(node, n); dist = node_distance(node, n);
if (dist < min_dist) { if (dist < min_dist) {
min_dist = dist; min_dist = dist;
...@@ -156,7 +160,7 @@ int numa_map_to_online_node(int node) ...@@ -156,7 +160,7 @@ int numa_map_to_online_node(int node)
return min_node; return min_node;
} }
EXPORT_SYMBOL_GPL(numa_map_to_online_node); EXPORT_SYMBOL_GPL(numa_nearest_node);
struct mempolicy *get_task_policy(struct task_struct *p) struct mempolicy *get_task_policy(struct task_struct *p)
{ {
......
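numa_nearest_node() generalizes the old helper by taking a node_states index: passing N_ONLINE recovers the previous numa_map_to_online_node() behaviour, while callers that need a node with memory can pass N_MEMORY. A hedged sketch (the wrappers below are hypothetical, shown only to illustrate the mapping):

/* Hypothetical compatibility wrapper: the previous semantics of
 * numa_map_to_online_node(node) correspond to searching N_ONLINE
 * with the new interface.
 */
static inline int map_to_online_node(int node)
{
        return numa_nearest_node(node, N_ONLINE);
}

/* Callers that need a node that actually has memory can now ask for it: */
static inline int map_to_memory_node(int node)
{
        return numa_nearest_node(node, N_MEMORY);
}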