Commit 43964409 authored by Linus Torvalds

Merge tag 'pm-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "This time (again) cpufreq gets the majority of changes which mostly
  are driver updates (including a major consolidation of intel_pstate),
  some schedutil governor modifications and core cleanups.

  There also are some changes in the system suspend area, mostly related
  to diagnostics and debug messages plus some renames of things related
  to suspend-to-idle. One major change here is that suspend-to-idle is
  now going to be preferred over S3 on systems where the ACPI tables
   indicate to do so and provide requisite support (the Low Power Idle S0
  _DSM in particular). The system sleep documentation and the tools
  related to it are updated too.

  The rest is a few cpuidle changes (nothing major), devfreq updates,
  generic power domains (genpd) framework updates and a few assorted
  modifications elsewhere.

  Specifics:

   - Drop the P-state selection algorithm based on a PID controller from
     intel_pstate and make it use the same P-state selection method
     (based on the CPU load) for all types of systems in the active mode
     (Rafael Wysocki, Srinivas Pandruvada).

   - Rework the cpufreq core and governors to make it possible to take
     cross-CPU utilization updates into account and modify the schedutil
     governor to actually do so (Viresh Kumar).

   - Clean up the handling of transition latency information in the
     cpufreq core and untangle it from the information on which drivers
     cannot do dynamic frequency switching (Viresh Kumar).

   - Add support for new SoCs (MT2701/MT7623 and MT7622) to the mediatek
     cpufreq driver and update its DT bindings (Sean Wang).

   - Modify the cpufreq dt-platdev driver to automatically create
     cpufreq devices for the new (v2) Operating Performance Points (OPP)
     DT bindings and update its whitelist of supported systems (Viresh
     Kumar, Shubhrajyoti Datta, Marc Gonzalez, Khiem Nguyen, Finley
     Xiao).

   - Add support for Ux500 to the cpufreq-dt driver and drop the
     obsolete dbx500 cpufreq driver (Linus Walleij, Arnd Bergmann).

   - Add new SoC (R8A7795) support to the cpufreq rcar driver (Khiem
     Nguyen).

   - Fix and clean up assorted issues in the cpufreq drivers and core
     (Arvind Yadav, Christophe Jaillet, Colin Ian King, Gustavo Silva,
     Julia Lawall, Leonard Crestez, Rob Herring, Sudeep Holla).

   - Update the IO-wait boost handling in the schedutil governor to make
     it less aggressive (Joel Fernandes).

   - Rework system suspend diagnostics to make it print fewer messages
     to the kernel log by default, add a sysfs knob to allow more
     suspend-related messages to be printed and add Low Power S0 Idle
     constraints checks to the ACPI suspend-to-idle code (Rafael
     Wysocki, Srinivas Pandruvada).

   - Prefer suspend-to-idle over S3 on ACPI-based systems with the
     ACPI_FADT_LOW_POWER_S0 flag set and the Low Power Idle S0 _DSM
     interface present in the ACPI tables (Rafael Wysocki).

   - Update documentation related to system sleep and rename a number of
     items in the code to make it clearer that they are related to
     suspend-to-idle (Rafael Wysocki).

   - Export a variable allowing device drivers to check the target
     system sleep state from the core system suspend code (Florian
     Fainelli).

   - Clean up the cpuidle subsystem to handle the polling state on x86
     in a more straightforward way and to use %pOF instead of full_name
     (Rafael Wysocki, Rob Herring).

   - Update the devfreq framework to fix and clean up a few minor issues
     (Chanwoo Choi, Rob Herring).

   - Extend diagnostics in the generic power domains (genpd) framework
     and clean it up slightly (Thara Gopinath, Rob Herring).

   - Fix and clean up a couple of issues in the operating performance
     points (OPP) framework (Viresh Kumar, Waldemar Rymarkiewicz).

   - Add support for RV1108 to the rockchip-io Adaptive Voltage Scaling
     (AVS) driver (David Wu).

   - Fix the usage of notifiers in CPU power management on some
     platforms (Alex Shi).

   - Update the pm-graph system suspend/hibernation and boot profiling
     utility (Todd Brandt).

   - Make it possible to run the cpupower utility without CPU0 (Prarit
     Bhargava)"

* tag 'pm-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (87 commits)
  cpuidle: Make drivers initialize polling state
  cpuidle: Move polling state initialization code to separate file
  cpuidle: Eliminate the CPUIDLE_DRIVER_STATE_START symbol
  cpufreq: imx6q: Fix imx6sx low frequency support
  cpufreq: speedstep-lib: make several arrays static, makes code smaller
  PM: docs: Delete the obsolete states.txt document
  PM: docs: Describe high-level PM strategies and sleep states
  PM / devfreq: Fix memory leak when fail to register device
  PM / devfreq: Add dependency on PM_OPP
  PM / devfreq: Move private devfreq_update_stats() into devfreq
  PM / devfreq: Convert to using %pOF instead of full_name
  PM / AVS: rockchip-io: add io selectors and supplies for RV1108
  cpufreq: ti: Fix 'of_node_put' being called twice in error handling path
  cpufreq: dt-platdev: Drop few entries from whitelist
  cpufreq: dt-platdev: Automatically create cpufreq device with OPP v2
  ARM: ux500: don't select CPUFREQ_DT
  cpuidle: Convert to using %pOF instead of full_name
  cpufreq: Convert to using %pOF instead of full_name
  PM / Domains: Convert to using %pOF instead of full_name
  cpufreq: Cap the default transition delay value to 10 ms
  ...
parents b42a362e d97561f4
@@ -273,3 +273,15 @@ Description:
This output is useful for system wakeup diagnostics of spurious
wakeup interrupts.
What: /sys/power/pm_debug_messages
Date: July 2017
Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/pm_debug_messages file controls the printing
of debug messages from the system suspend/hibernation
infrastructure to the kernel log.
Writing a "1" to this file enables the debug messages and
writing a "0" (default) to it disables them. Reads from
this file return the current value.
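As a minimal user-space sketch of this knob (the sysfs path comes from the entry above; the program itself is illustrative and not part of the kernel tree):

/* Sketch: enable the extra suspend/resume debug messages. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/power/pm_debug_messages", "w");

	if (!f) {
		perror("pm_debug_messages");
		return 1;
	}
	fputs("1", f);	/* write "0" (the default) to disable them again */
	fclose(f);
	return 0;
}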
@@ -479,14 +479,6 @@ This governor exposes the following tunables:
# echo `$(($(cat cpuinfo_transition_latency) * 750 / 1000)) > ondemand/sampling_rate
``min_sampling_rate``
The minimum value of ``sampling_rate``.
Equal to 10000 (10 ms) if :c:macro:`CONFIG_NO_HZ_COMMON` and
:c:data:`tick_nohz_active` are both set or to 20 times the value of
:c:data:`jiffies` in microseconds otherwise.
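(A worked example of the fallback case, assuming an illustrative CONFIG_HZ of 250: one jiffy is then 4000 us, so ``min_sampling_rate`` comes out as 20 * 4000 = 80000 us, i.e. 80 ms.)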
``up_threshold``
If the estimated CPU load is above this value (in percent), the governor
will set the frequency to the maximum value allowed for the policy.
...
@@ -5,12 +5,6 @@ Power Management
.. toctree::
:maxdepth: 2
strategies
system-wide
working-state
.. only:: subproject and html
Indices
=======
* :ref:`genindex`
@@ -167,35 +167,17 @@ is set.
``powersave``
.............
Without HWP, this P-state selection algorithm is similar to the algorithm
implemented by the generic ``schedutil`` scaling governor except that the
utilization metric used by it is based on numbers coming from feedback
registers of the CPU. It generally selects P-states proportional to the
current CPU utilization.
This algorithm is run by the driver's utilization update callback for the
given CPU when it is invoked by the CPU scheduler, but not more often than
every 10 ms. Like in the ``performance`` case, the hardware configuration
is not touched if the new P-state turns out to be the same as the current
one.
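To make "proportional to the current CPU utilization" concrete, here is a toy model (this is not ``intel_pstate``'s actual code; the function name, the percent-based utilization input, and the example P-state range are all illustrative):

#include <stdio.h>

/* Toy model: map a utilization percentage onto a P-state range,
 * the way a purely proportional selection would. */
static int pick_pstate(int min_p, int max_p, unsigned int util_pct)
{
	int p = min_p + (int)((max_p - min_p) * util_pct / 100);

	if (p < min_p)
		p = min_p;
	if (p > max_p)
		p = max_p;
	return p;
}

int main(void)
{
	/* e.g. hypothetical P-states 8..40 at 50% utilization -> 24 */
	printf("%d\n", pick_pstate(8, 40, 50));
	return 0;
}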
This is the default P-state selection algorithm if the
:c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option
@@ -720,34 +702,7 @@ P-state is called, the ``ftrace`` filter can be set to to
gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func
<idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func
Tuning Interface in ``debugfs``
-------------------------------
The ``powersave`` algorithm provided by ``intel_pstate`` for `the Core line of
processors in the active mode <powersave_>`_ is based on a `PID controller`_
whose parameters were chosen to address a number of different use cases at the
same time. However, it still is possible to fine-tune it to a specific workload
and the ``debugfs`` interface under ``/sys/kernel/debug/pstate_snb/`` is
provided for this purpose. [Note that the ``pstate_snb`` directory will be
present only if the specific P-state selection algorithm matching the interface
in it actually is in use.]
The following files present in that directory can be used to modify the PID
controller parameters at run time:
| ``deadband``
| ``d_gain_pct``
| ``i_gain_pct``
| ``p_gain_pct``
| ``sample_rate_ms``
| ``setpoint``
Note, however, that achieving desirable results this way generally requires
expert-level understanding of the power vs performance tradeoff, so extra care
is recommended when attempting to do that.
.. _LCEU2015: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf
.. _SDM: http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html
.. _ACPI specification: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf
.. _PID controller: https://en.wikipedia.org/wiki/PID_controller
===========================
Power Management Strategies
===========================
::
Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The Linux kernel supports two major high-level power management strategies.
One of them is based on using global low-power states of the whole system in
which user space code cannot be executed and the overall system activity is
significantly reduced, referred to as :doc:`sleep states <sleep-states>`. The
kernel puts the system into one of these states when requested by user space
and the system stays in it until a special signal is received from one of
designated devices, triggering a transition to the ``working state`` in which
user space code can run. Because sleep states are global and the whole system
is affected by the state changes, this strategy is referred to as the
:doc:`system-wide power management <system-wide>`.
The other strategy, referred to as the :doc:`working-state power management
<working-state>`, is based on adjusting the power states of individual hardware
components of the system, as needed, in the working state. In consequence, if
this strategy is in use, the working state of the system usually does not
correspond to any particular physical configuration of it, but can be treated as
a metastate covering a range of different power states of the system in which
the individual components of it can be either ``active`` (in use) or
``inactive`` (idle). If they are active, they have to be in power states
allowing them to process data and to be accessed by software. In turn, if they
are inactive, ideally, they should be in low-power states in which they may not
be accessible.
If all of the system components are active, the system as a whole is regarded as
"runtime active" and that situation typically corresponds to the maximum power
draw (or maximum energy usage) of it. If all of them are inactive, the system
as a whole is regarded as "runtime idle" which may be very close to a sleep
state from the physical system configuration and power draw perspective, but
then it takes much less time and effort to start executing user space code than
for the same system in a sleep state. However, transitions from sleep states
back to the working state can only be started by a limited set of devices, so
typically the system can spend much more time in a sleep state than it can be
runtime idle in one go. For this reason, systems usually use less energy in
sleep states than when they are runtime idle most of the time.
Moreover, the two power management strategies address different usage scenarios.
Namely, if the user indicates that the system will not be in use going forward,
for example by closing its lid (if the system is a laptop), it probably should
go into a sleep state at that point. On the other hand, if the user simply goes
away from the laptop keyboard, it probably should stay in the working state and
use the working-state power management in case it becomes idle, because the user
may come back to it at any time and then may want the system to be immediately
accessible.
============================
System-Wide Power Management
============================
.. toctree::
:maxdepth: 2
sleep-states
==============================
Working-State Power Management
==============================
.. toctree::
:maxdepth: 2
cpufreq
intel_pstate
Binding for MediaTek's CPUFreq driver
=====================================
Required properties:
- clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock names.
@@ -9,6 +10,8 @@ Required properties:
transition and not stable yet.
Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for
generic clock consumer properties.
- operating-points-v2: Please refer to Documentation/devicetree/bindings/opp/opp.txt
for detail.
- proc-supply: Regulator for Vproc of CPU cluster.
Optional properties:
@@ -17,9 +20,166 @@ Optional properties:
Vsram to fit SoC specific needs. When absent, the voltage scaling
flow is handled by hardware, hence no software "voltage tracking" is
needed.
- #cooling-cells:
- cooling-min-level:
- cooling-max-level:
Please refer to Documentation/devicetree/bindings/thermal/thermal.txt
for detail.
Example 1 (MT7623 SoC):
cpu_opp_table: opp_table {
compatible = "operating-points-v2";
opp-shared;
opp-598000000 {
opp-hz = /bits/ 64 <598000000>;
opp-microvolt = <1050000>;
};
opp-747500000 {
opp-hz = /bits/ 64 <747500000>;
opp-microvolt = <1050000>;
};
opp-1040000000 {
opp-hz = /bits/ 64 <1040000000>;
opp-microvolt = <1150000>;
};
opp-1196000000 {
opp-hz = /bits/ 64 <1196000000>;
opp-microvolt = <1200000>;
};
opp-1300000000 {
opp-hz = /bits/ 64 <1300000000>;
opp-microvolt = <1300000>;
};
};
cpu0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a7";
reg = <0x0>;
clocks = <&infracfg CLK_INFRA_CPUSEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
cooling-min-level = <0>;
cooling-max-level = <7>;
};
cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a7";
reg = <0x1>;
operating-points-v2 = <&cpu_opp_table>;
};
cpu@2 {
device_type = "cpu";
compatible = "arm,cortex-a7";
reg = <0x2>;
operating-points-v2 = <&cpu_opp_table>;
};
cpu@3 {
device_type = "cpu";
compatible = "arm,cortex-a7";
reg = <0x3>;
operating-points-v2 = <&cpu_opp_table>;
};
Example 2 (MT8173 SoC):
cpu_opp_table_a: opp_table_a {
compatible = "operating-points-v2";
opp-shared;
opp-507000000 {
opp-hz = /bits/ 64 <507000000>;
opp-microvolt = <859000>;
};
opp-702000000 {
opp-hz = /bits/ 64 <702000000>;
opp-microvolt = <908000>;
};
opp-1001000000 {
opp-hz = /bits/ 64 <1001000000>;
opp-microvolt = <983000>;
};
opp-1105000000 {
opp-hz = /bits/ 64 <1105000000>;
opp-microvolt = <1009000>;
};
opp-1183000000 {
opp-hz = /bits/ 64 <1183000000>;
opp-microvolt = <1028000>;
};
opp-1404000000 {
opp-hz = /bits/ 64 <1404000000>;
opp-microvolt = <1083000>;
};
opp-1508000000 {
opp-hz = /bits/ 64 <1508000000>;
opp-microvolt = <1109000>;
};
opp-1573000000 {
opp-hz = /bits/ 64 <1573000000>;
opp-microvolt = <1125000>;
};
};
cpu_opp_table_b: opp_table_b {
compatible = "operating-points-v2";
opp-shared;
opp-507000000 {
opp-hz = /bits/ 64 <507000000>;
opp-microvolt = <828000>;
};
opp-702000000 {
opp-hz = /bits/ 64 <702000000>;
opp-microvolt = <867000>;
};
opp-1001000000 {
opp-hz = /bits/ 64 <1001000000>;
opp-microvolt = <927000>;
};
opp-1209000000 {
opp-hz = /bits/ 64 <1209000000>;
opp-microvolt = <968000>;
};
opp-1404000000 {
opp-hz = /bits/ 64 <1404000000>;
opp-microvolt = <1028000>;
};
opp-1612000000 {
opp-hz = /bits/ 64 <1612000000>;
opp-microvolt = <1049000>;
};
opp-1807000000 {
opp-hz = /bits/ 64 <1807000000>;
opp-microvolt = <1089000>;
};
opp-1989000000 {
opp-hz = /bits/ 64 <1989000000>;
opp-microvolt = <1125000>;
};
};
Example:
--------
cpu0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a53";
@@ -29,6 +189,7 @@ Example:
clocks = <&infracfg CLK_INFRA_CA53SEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table_a>;
};
cpu1: cpu@1 {
@@ -40,6 +201,7 @@ Example:
clocks = <&infracfg CLK_INFRA_CA53SEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table_a>;
};
cpu2: cpu@100 {
@@ -51,6 +213,7 @@ Example:
clocks = <&infracfg CLK_INFRA_CA57SEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table_b>;
};
cpu3: cpu@101 {
@@ -62,6 +225,7 @@ Example:
clocks = <&infracfg CLK_INFRA_CA57SEL>,
<&apmixedsys CLK_APMIXED_MAINPLL>;
clock-names = "cpu", "intermediate";
operating-points-v2 = <&cpu_opp_table_b>;
};
&cpu0 {
...
@@ -39,6 +39,8 @@ Required properties:
- "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
- "rockchip,rk3399-io-voltage-domain" for rk3399
- "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains
- "rockchip,rv1108-io-voltage-domain" for rv1108
- "rockchip,rv1108-pmu-io-voltage-domain" for rv1108 pmu-domains
Deprecated properties:
- rockchip,grf: phandle to the syscon managing the "general register files"
...
System Power Management Sleep States
(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The kernel supports up to four system sleep states generically, although three
of them depend on the platform support code to implement the low-level details
for each state.
The states are represented by strings that can be read or written to the
/sys/power/state file. Those strings may be "mem", "standby", "freeze" and
"disk", where the last three always represent Power-On Suspend (if supported),
Suspend-To-Idle and hibernation (Suspend-To-Disk), respectively.
The meaning of the "mem" string is controlled by the /sys/power/mem_sleep file.
It contains strings representing the available modes of system suspend that may
be triggered by writing "mem" to /sys/power/state. These modes are "s2idle"
(Suspend-To-Idle), "shallow" (Power-On Suspend) and "deep" (Suspend-To-RAM).
The "s2idle" mode is always available, while the other ones are only available
if supported by the platform (if not supported, the strings representing them
are not present in /sys/power/mem_sleep). The string representing the suspend
mode to be used subsequently is enclosed in square brackets. Writing one of
the other strings present in /sys/power/mem_sleep to it causes the suspend mode
to be used subsequently to change to the one represented by that string.
Consequently, there are two ways to cause the system to go into the
Suspend-To-Idle sleep state. The first one is to write "freeze" directly to
/sys/power/state. The second one is to write "s2idle" to /sys/power/mem_sleep
and then to write "mem" to /sys/power/state. Similarly, there are two ways
to cause the system to go into the Power-On Suspend sleep state (the strings to
write to the control files in that case are "standby" or "shallow" and "mem",
respectively) if that state is supported by the platform. In turn, there is
only one way to cause the system to go into the Suspend-To-RAM state (write
"deep" into /sys/power/mem_sleep and "mem" into /sys/power/state).
The default suspend mode (ie. the one to be used without writing anything into
/sys/power/mem_sleep) is either "deep" (if Suspend-To-RAM is supported) or
"s2idle", but it can be overridden by the value of the "mem_sleep_default"
parameter in the kernel command line.
The properties of all of the sleep states are described below.
State: Suspend-To-Idle
ACPI state: S0
Label: "s2idle" ("freeze")
This state is a generic, pure software, light-weight, system sleep state.
It allows more energy to be saved relative to runtime idle by freezing user
space and putting all I/O devices into low-power states (possibly
lower-power than available at run time), such that the processors can
spend more time in their idle states.
This state can be used for platforms without Power-On Suspend/Suspend-to-RAM
support, or it can be used in addition to Suspend-to-RAM to provide reduced
resume latency. It is always supported.
State: Standby / Power-On Suspend
ACPI State: S1
Label: "shallow" ("standby")
This state, if supported, offers moderate, though real, power savings, while
providing a relatively low-latency transition back to a working system. No
operating state is lost (the CPU retains power), so the system easily starts up
again where it left off.
In addition to freezing user space and putting all I/O devices into low-power
states, which is done for Suspend-To-Idle too, nonboot CPUs are taken offline
and all low-level system functions are suspended during transitions into this
state. For this reason, it should allow more energy to be saved relative to
Suspend-To-Idle, but the resume latency will generally be greater than for that
state.
State: Suspend-to-RAM
ACPI State: S3
Label: "deep"
This state, if supported, offers significant power savings as everything in the
system is put into a low-power state, except for memory, which should be placed
into the self-refresh mode to retain its contents. All of the steps carried out
when entering Power-On Suspend are also carried out during transitions to STR.
Additional operations may take place depending on the platform capabilities. In
particular, on ACPI systems the kernel passes control to the BIOS (platform
firmware) as the last step during STR transitions and that usually results in
powering down some more low-level components that aren't directly controlled by
the kernel.
System and device state is saved and kept in memory. All devices are suspended
and put into low-power states. In many cases, all peripheral buses lose power
when entering STR, so devices must be able to handle the transition back to the
"on" state.
For at least ACPI, STR requires some minimal boot-strapping code to resume the
system from it. This may be the case on other platforms too.
State: Suspend-to-disk
ACPI State: S4
Label: "disk"
This state offers the greatest power savings, and can be used even in
the absence of low-level platform support for power management. This
state operates similarly to Suspend-to-RAM, but includes a final step
of writing memory contents to disk. On resume, this is read and memory
is restored to its pre-suspend state.
STD can be handled by the firmware or the kernel. If it is handled by
the firmware, it usually requires a dedicated partition that must be
setup via another operating system for it to use. Despite the
inconvenience, this method requires minimal work by the kernel, since
the firmware will also handle restoring memory contents on resume.
For suspend-to-disk, a mechanism called 'swsusp' (Swap Suspend) is used
to write memory contents to free swap space. swsusp has some restrictive
requirements, but should work in most cases. Some, albeit outdated,
documentation can be found in Documentation/power/swsusp.txt.
Alternatively, userspace can do most of the actual suspend to disk work,
see userland-swsusp.txt.
Once memory state is written to disk, the system may either enter a
low-power state (like ACPI S4), or it may simply power down. Powering
down offers greater savings, and allows this mechanism to work on any
system. However, entering a real low-power state allows the user to
trigger wake up events (e.g. pressing a key or opening a laptop lid).
@@ -13,7 +13,6 @@ cpu0: cpu@0 {
reg = <0>;
clocks = <&clkgen CPU_CLK>;
clock-latency = <1>;
operating-points = <1215000 0 607500 0 405000 0 243000 0 135000 0>;
};
cpu1: cpu@1 {
...
@@ -60,7 +60,7 @@ static int tegra114_idle_power_down(struct cpuidle_device *dev,
return index;
}
static void tegra114_idle_enter_s2idle(struct cpuidle_device *dev,
struct cpuidle_driver *drv,
int index)
{
@@ -77,7 +77,7 @@ static struct cpuidle_driver tegra_idle_driver = {
#ifdef CONFIG_PM_SLEEP
[1] = {
.enter = tegra114_idle_power_down,
.enter_s2idle = tegra114_idle_enter_s2idle,
.exit_latency = 500,
.target_residency = 1000,
.flags = CPUIDLE_FLAG_TIMER_STOP,
...
@@ -48,6 +48,8 @@
#define _COMPONENT ACPI_PROCESSOR_COMPONENT
ACPI_MODULE_NAME("processor_idle");
#define ACPI_IDLE_STATE_START (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX) ? 1 : 0)
static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER;
module_param(max_cstate, uint, 0000);
static unsigned int nocst __read_mostly;
@@ -759,7 +761,7 @@ static int acpi_idle_enter(struct cpuidle_device *dev,
if (cx->type != ACPI_STATE_C1) {
if (acpi_idle_fallback_to_c1(pr) && num_online_cpus() > 1) {
index = ACPI_IDLE_STATE_START;
cx = per_cpu(acpi_cstate[index], dev->cpu);
} else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) {
if (cx->bm_sts_skip || !acpi_idle_bm_check()) {
@@ -787,7 +789,7 @@ static int acpi_idle_enter(struct cpuidle_device *dev,
return index;
}
static void acpi_idle_enter_s2idle(struct cpuidle_device *dev,
struct cpuidle_driver *drv, int index)
{
struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
@@ -811,7 +813,7 @@ static void acpi_idle_enter_freeze(struct cpuidle_device *dev,
static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr,
struct cpuidle_device *dev)
{
int i, count = ACPI_IDLE_STATE_START;
struct acpi_processor_cx *cx;
if (max_cstate == 0)
@@ -838,7 +840,7 @@ static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr,
static int acpi_processor_setup_cstates(struct acpi_processor *pr)
{
int i, count;
struct acpi_processor_cx *cx;
struct cpuidle_state *state;
struct cpuidle_driver *drv = &acpi_idle_driver;
@@ -846,6 +848,13 @@ static int acpi_processor_setup_cstates(struct acpi_processor *pr)
if (max_cstate == 0)
max_cstate = 1;
if (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX)) {
cpuidle_poll_state_init(drv);
count = 1;
} else {
count = 0;
}
for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) {
cx = &pr->power.states[i];
@@ -865,14 +874,14 @@ static int acpi_processor_setup_cstates(struct acpi_processor *pr)
drv->safe_state_index = count;
}
/*
* Halt-induced C1 is not good for ->enter_s2idle, because it
* re-enables interrupts on exit. Moreover, C1 is generally not
* particularly interesting from the suspend-to-idle angle, so
* avoid C1 and the situations in which we may need to fall back
* to it altogether.
*/
if (cx->type != ACPI_STATE_C1 && !acpi_idle_fallback_to_c1(pr))
state->enter_s2idle = acpi_idle_enter_s2idle;
count++;
if (count == CPUIDLE_STATE_MAX)
@@ -1289,7 +1298,7 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
return -EINVAL;
drv->safe_state_index = -1;
for (i = ACPI_IDLE_STATE_START; i < CPUIDLE_STATE_MAX; i++) {
drv->states[i].name[0] = '\0';
drv->states[i].desc[0] = '\0';
}
...
@@ -669,6 +669,7 @@ static const struct acpi_device_id lps0_device_ids[] = {
#define ACPI_LPS0_DSM_UUID "c4eb40a0-6cd2-11e2-bcfd-0800200c9a66"
#define ACPI_LPS0_GET_DEVICE_CONSTRAINTS 1
#define ACPI_LPS0_SCREEN_OFF 3
#define ACPI_LPS0_SCREEN_ON 4
#define ACPI_LPS0_ENTRY 5
@@ -680,6 +681,166 @@ static acpi_handle lps0_device_handle;
static guid_t lps0_dsm_guid;
static char lps0_dsm_func_mask;
/* Device constraint entry structure */
struct lpi_device_info {
char *name;
int enabled;
union acpi_object *package;
};
/* Constraint package structure */
struct lpi_device_constraint {
int uid;
int min_dstate;
int function_states;
};
struct lpi_constraints {
acpi_handle handle;
int min_dstate;
};
static struct lpi_constraints *lpi_constraints_table;
static int lpi_constraints_table_size;
static void lpi_device_get_constraints(void)
{
union acpi_object *out_obj;
int i;
out_obj = acpi_evaluate_dsm_typed(lps0_device_handle, &lps0_dsm_guid,
1, ACPI_LPS0_GET_DEVICE_CONSTRAINTS,
NULL, ACPI_TYPE_PACKAGE);
acpi_handle_debug(lps0_device_handle, "_DSM function 1 eval %s\n",
out_obj ? "successful" : "failed");
if (!out_obj)
return;
lpi_constraints_table = kcalloc(out_obj->package.count,
sizeof(*lpi_constraints_table),
GFP_KERNEL);
if (!lpi_constraints_table)
goto free_acpi_buffer;
acpi_handle_debug(lps0_device_handle, "LPI: constraints list begin:\n");
for (i = 0; i < out_obj->package.count; i++) {
struct lpi_constraints *constraint;
acpi_status status;
union acpi_object *package = &out_obj->package.elements[i];
struct lpi_device_info info = { };
int package_count = 0, j;
if (!package)
continue;
for (j = 0; j < package->package.count; ++j) {
union acpi_object *element =
&(package->package.elements[j]);
switch (element->type) {
case ACPI_TYPE_INTEGER:
info.enabled = element->integer.value;
break;
case ACPI_TYPE_STRING:
info.name = element->string.pointer;
break;
case ACPI_TYPE_PACKAGE:
package_count = element->package.count;
info.package = element->package.elements;
break;
}
}
if (!info.enabled || !info.package || !info.name)
continue;
constraint = &lpi_constraints_table[lpi_constraints_table_size];
status = acpi_get_handle(NULL, info.name, &constraint->handle);
if (ACPI_FAILURE(status))
continue;
acpi_handle_debug(lps0_device_handle,
"index:%d Name:%s\n", i, info.name);
constraint->min_dstate = -1;
for (j = 0; j < package_count; ++j) {
union acpi_object *info_obj = &info.package[j];
union acpi_object *cnstr_pkg;
union acpi_object *obj;
struct lpi_device_constraint dev_info;
switch (info_obj->type) {
case ACPI_TYPE_INTEGER:
/* version */
break;
case ACPI_TYPE_PACKAGE:
if (info_obj->package.count < 2)
break;
cnstr_pkg = info_obj->package.elements;
obj = &cnstr_pkg[0];
dev_info.uid = obj->integer.value;
obj = &cnstr_pkg[1];
dev_info.min_dstate = obj->integer.value;
acpi_handle_debug(lps0_device_handle,
"uid:%d min_dstate:%s\n",
dev_info.uid,
acpi_power_state_string(dev_info.min_dstate));
constraint->min_dstate = dev_info.min_dstate;
break;
}
}
if (constraint->min_dstate < 0) {
acpi_handle_debug(lps0_device_handle,
"Incomplete constraint defined\n");
continue;
}
lpi_constraints_table_size++;
}
acpi_handle_debug(lps0_device_handle, "LPI: constraints list end\n");
free_acpi_buffer:
ACPI_FREE(out_obj);
}
static void lpi_check_constraints(void)
{
int i;
for (i = 0; i < lpi_constraints_table_size; ++i) {
struct acpi_device *adev;
if (acpi_bus_get_device(lpi_constraints_table[i].handle, &adev))
continue;
acpi_handle_debug(adev->handle,
"LPI: required min power state:%s current power state:%s\n",
acpi_power_state_string(lpi_constraints_table[i].min_dstate),
acpi_power_state_string(adev->power.state));
if (!adev->flags.power_manageable) {
acpi_handle_info(adev->handle, "LPI: Device not power manageable\n");
continue;
}
if (adev->power.state < lpi_constraints_table[i].min_dstate)
acpi_handle_info(adev->handle,
"LPI: Constraint not met; min power state:%s current power state:%s\n",
acpi_power_state_string(lpi_constraints_table[i].min_dstate),
acpi_power_state_string(adev->power.state));
}
}
static void acpi_sleep_run_lps0_dsm(unsigned int func)
{
union acpi_object *out_obj;
@@ -714,6 +875,12 @@ static int lps0_device_attach(struct acpi_device *adev,
if ((bitmask & ACPI_S2IDLE_FUNC_MASK) == ACPI_S2IDLE_FUNC_MASK) {
lps0_dsm_func_mask = bitmask;
lps0_device_handle = adev->handle;
/*
* Use suspend-to-idle by default if the default
* suspend mode was not set from the command line.
*/
if (mem_sleep_default > PM_SUSPEND_MEM)
mem_sleep_current = PM_SUSPEND_TO_IDLE;
}
acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
@@ -723,6 +890,9 @@ static int lps0_device_attach(struct acpi_device *adev,
"_DSM function 0 evaluation failed\n");
}
ACPI_FREE(out_obj);
lpi_device_get_constraints();
return 0;
}
@@ -731,14 +901,14 @@ static struct acpi_scan_handler lps0_handler = {
.attach = lps0_device_attach,
};
static int acpi_s2idle_begin(void)
{
acpi_scan_lock_acquire();
s2idle_in_progress = true;
return 0;
}
static int acpi_s2idle_prepare(void)
{
if (lps0_device_handle) {
acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
@@ -758,8 +928,12 @@ static int acpi_freeze_prepare(void)
return 0;
}
static void acpi_s2idle_wake(void)
{
if (pm_debug_messages_on)
lpi_check_constraints();
/*
* If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means
* that the SCI has triggered while suspended, so cancel the wakeup in
@@ -772,7 +946,7 @@ static void acpi_freeze_wake(void)
}
}
static void acpi_s2idle_sync(void)
{
/*
* Process all pending events in case there are any wakeup ones.
@@ -785,7 +959,7 @@ static void acpi_freeze_sync(void)
s2idle_wakeup = false;
}
static void acpi_s2idle_restore(void)
{
if (acpi_sci_irq_valid())
disable_irq_wake(acpi_sci_irq);
@@ -798,19 +972,19 @@ static void acpi_freeze_restore(void)
}
}
static void acpi_s2idle_end(void)
{
s2idle_in_progress = false;
acpi_scan_lock_release();
}
static const struct platform_s2idle_ops acpi_s2idle_ops = {
.begin = acpi_s2idle_begin,
.prepare = acpi_s2idle_prepare,
.wake = acpi_s2idle_wake,
.sync = acpi_s2idle_sync,
.restore = acpi_s2idle_restore,
.end = acpi_s2idle_end,
};
static void acpi_sleep_suspend_setup(void)
@@ -825,7 +999,7 @@ static void acpi_sleep_suspend_setup(void)
&acpi_suspend_ops_old : &acpi_suspend_ops);
acpi_scan_add_handler(&lps0_handler);
s2idle_set_ops(&acpi_s2idle_ops);
}
#else /* !CONFIG_SUSPEND */
...
@@ -209,6 +209,34 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
smp_mb__after_atomic();
}
#ifdef CONFIG_DEBUG_FS
static void genpd_update_accounting(struct generic_pm_domain *genpd)
{
ktime_t delta, now;
now = ktime_get();
delta = ktime_sub(now, genpd->accounting_time);
/*
* If genpd->status is active, it means we are just
* out of off and so update the idle time and vice
* versa.
*/
if (genpd->status == GPD_STATE_ACTIVE) {
int state_idx = genpd->state_idx;
genpd->states[state_idx].idle_time =
ktime_add(genpd->states[state_idx].idle_time, delta);
} else {
genpd->on_time = ktime_add(genpd->on_time, delta);
}
genpd->accounting_time = now;
}
#else
static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
#endif
static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
@@ -361,6 +389,7 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
}
genpd->status = GPD_STATE_POWER_OFF;
genpd_update_accounting(genpd);
list_for_each_entry(link, &genpd->slave_links, slave_node) {
genpd_sd_counter_dec(link->master);
@@ -413,6 +442,8 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
goto err;
genpd->status = GPD_STATE_ACTIVE;
genpd_update_accounting(genpd);
return 0;
err:
@@ -1540,6 +1571,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
genpd->max_off_time_changed = true;
genpd->provider = NULL;
genpd->has_provider = false;
genpd->accounting_time = ktime_get();
genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
genpd->domain.ops.runtime_resume = genpd_runtime_resume;
genpd->domain.ops.prepare = pm_genpd_prepare;
@@ -1743,7 +1775,7 @@ static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
mutex_lock(&of_genpd_mutex);
list_add(&cp->link, &of_genpd_providers);
mutex_unlock(&of_genpd_mutex);
pr_debug("Added domain provider from %pOF\n", np);
return 0;
}
@@ -2149,16 +2181,16 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
err = of_property_read_u32(state_node, "entry-latency-us",
&entry_latency);
if (err) {
pr_debug(" * %pOF missing entry-latency-us property\n",
state_node);
return -EINVAL;
}
err = of_property_read_u32(state_node, "exit-latency-us",
&exit_latency);
if (err) {
pr_debug(" * %pOF missing exit-latency-us property\n",
state_node);
return -EINVAL;
}
@@ -2212,8 +2244,8 @@ int of_genpd_parse_idle_states(struct device_node *dn,
ret = genpd_parse_state(&st[i++], np);
if (ret) {
pr_err
("Parsing idle state node %pOF failed with err %d\n",
np, ret);
of_node_put(np);
kfree(st);
return ret;
@@ -2327,7 +2359,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
return 0;
}
static int genpd_summary_show(struct seq_file *s, void *data)
{
struct generic_pm_domain *genpd;
int ret = 0;
@@ -2350,21 +2382,187 @@ static int pm_genpd_summary_show(struct seq_file *s, void *data)
return ret;
}
static int genpd_status_show(struct seq_file *s, void *data)
{
static const char * const status_lookup[] = {
[GPD_STATE_ACTIVE] = "on",
[GPD_STATE_POWER_OFF] = "off"
};
struct generic_pm_domain *genpd = s->private;
int ret = 0;
ret = genpd_lock_interruptible(genpd);
if (ret)
return -ERESTARTSYS;
if (WARN_ON_ONCE(genpd->status >= ARRAY_SIZE(status_lookup)))
goto exit;
if (genpd->status == GPD_STATE_POWER_OFF)
seq_printf(s, "%s-%u\n", status_lookup[genpd->status],
genpd->state_idx);
else
seq_printf(s, "%s\n", status_lookup[genpd->status]);
exit:
genpd_unlock(genpd);
return ret;
}
static int genpd_sub_domains_show(struct seq_file *s, void *data)
{
struct generic_pm_domain *genpd = s->private;
struct gpd_link *link;
int ret = 0;
ret = genpd_lock_interruptible(genpd);
if (ret)
return -ERESTARTSYS;
list_for_each_entry(link, &genpd->master_links, master_node)
seq_printf(s, "%s\n", link->slave->name);
genpd_unlock(genpd);
return ret;
}
static int genpd_idle_states_show(struct seq_file *s, void *data)
{
struct generic_pm_domain *genpd = s->private;
unsigned int i;
int ret = 0;
ret = genpd_lock_interruptible(genpd);
if (ret)
return -ERESTARTSYS;
seq_puts(s, "State Time Spent(ms)\n");
for (i = 0; i < genpd->state_count; i++) {
ktime_t delta = 0;
s64 msecs;
if ((genpd->status == GPD_STATE_POWER_OFF) &&
(genpd->state_idx == i))
delta = ktime_sub(ktime_get(), genpd->accounting_time);
msecs = ktime_to_ms(
ktime_add(genpd->states[i].idle_time, delta));
seq_printf(s, "S%-13i %lld\n", i, msecs);
}
genpd_unlock(genpd);
return ret;
}
static int genpd_active_time_show(struct seq_file *s, void *data)
{
struct generic_pm_domain *genpd = s->private;
ktime_t delta = 0;
int ret = 0;
ret = genpd_lock_interruptible(genpd);
if (ret)
return -ERESTARTSYS;
if (genpd->status == GPD_STATE_ACTIVE)
delta = ktime_sub(ktime_get(), genpd->accounting_time);
seq_printf(s, "%lld ms\n", ktime_to_ms(
ktime_add(genpd->on_time, delta)));
genpd_unlock(genpd);
return ret;
}
static int genpd_total_idle_time_show(struct seq_file *s, void *data)
{
struct generic_pm_domain *genpd = s->private;
ktime_t delta = 0, total = 0;
unsigned int i;
int ret = 0;
ret = genpd_lock_interruptible(genpd);
if (ret)
return -ERESTARTSYS;
for (i = 0; i < genpd->state_count; i++) {
if ((genpd->status == GPD_STATE_POWER_OFF) &&
(genpd->state_idx == i))
delta = ktime_sub(ktime_get(), genpd->accounting_time);
total = ktime_add(total, genpd->states[i].idle_time);
}
total = ktime_add(total, delta);
seq_printf(s, "%lld ms\n", ktime_to_ms(total));
genpd_unlock(genpd);
return ret;
}
static int genpd_devices_show(struct seq_file *s, void *data)
{
struct generic_pm_domain *genpd = s->private;
struct pm_domain_data *pm_data;
const char *kobj_path;
int ret = 0;
ret = genpd_lock_interruptible(genpd);
if (ret)
return -ERESTARTSYS;
list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
kobj_path = kobject_get_path(&pm_data->dev->kobj,
genpd_is_irq_safe(genpd) ?
GFP_ATOMIC : GFP_KERNEL);
if (kobj_path == NULL)
continue;
seq_printf(s, "%s\n", kobj_path);
kfree(kobj_path);
}
genpd_unlock(genpd);
return ret;
}
#define define_genpd_open_function(name) \
static int genpd_##name##_open(struct inode *inode, struct file *file) \
{ \
return single_open(file, genpd_##name##_show, inode->i_private); \
}
define_genpd_open_function(summary);
define_genpd_open_function(status);
define_genpd_open_function(sub_domains);
define_genpd_open_function(idle_states);
define_genpd_open_function(active_time);
define_genpd_open_function(total_idle_time);
define_genpd_open_function(devices);
#define define_genpd_debugfs_fops(name) \
static const struct file_operations genpd_##name##_fops = { \
.open = genpd_##name##_open, \
.read = seq_read, \
.llseek = seq_lseek, \
.release = single_release, \
}
define_genpd_debugfs_fops(summary);
define_genpd_debugfs_fops(status);
define_genpd_debugfs_fops(sub_domains);
define_genpd_debugfs_fops(idle_states);
define_genpd_debugfs_fops(active_time);
define_genpd_debugfs_fops(total_idle_time);
define_genpd_debugfs_fops(devices);
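For reference, this is what one pairing of those macros expands to, hand-expanded here for the ``status`` case (it is just the preprocessor output of the two macros above, not extra code in the patch):

static int genpd_status_open(struct inode *inode, struct file *file)
{
	return single_open(file, genpd_status_show, inode->i_private);
}

static const struct file_operations genpd_status_fops = {
	.open = genpd_status_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = single_release,
};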
static int __init pm_genpd_debug_init(void)
{
struct dentry *d;
struct generic_pm_domain *genpd;
pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
@@ -2372,10 +2570,29 @@ static int __init pm_genpd_debug_init(void)
return -ENOMEM;
d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
pm_genpd_debugfs_dir, NULL, &genpd_summary_fops);
if (!d)
return -ENOMEM;
list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
d = debugfs_create_dir(genpd->name, pm_genpd_debugfs_dir);
if (!d)
return -ENOMEM;
debugfs_create_file("current_state", 0444,
d, genpd, &genpd_status_fops);
debugfs_create_file("sub_domains", 0444,
d, genpd, &genpd_sub_domains_fops);
debugfs_create_file("idle_states", 0444,
d, genpd, &genpd_idle_states_fops);
debugfs_create_file("active_time", 0444,
d, genpd, &genpd_active_time_fops);
debugfs_create_file("total_idle_time", 0444,
d, genpd, &genpd_total_idle_time_fops);
debugfs_create_file("devices", 0444,
d, genpd, &genpd_devices_fops);
}
return 0;
}
late_initcall(pm_genpd_debug_init);
...
@@ -418,8 +418,7 @@ static void pm_dev_err(struct device *dev, pm_message_t state, const char *info,
dev_name(dev), pm_verb(state.event), info, error);
}
static void dpm_show_time(ktime_t starttime, pm_message_t state, int error,
const char *info)
{
ktime_t calltime;
@@ -432,14 +431,12 @@ static void dpm_show_time(ktime_t starttime, pm_message_t state,
usecs = usecs64;
if (usecs == 0)
usecs = 1;
pm_pr_dbg("%s%s%s of devices %s after %ld.%03ld msecs\n",
info ?: "", info ? " " : "", pm_verb(state.event),
error ? "aborted" : "complete",
usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC);
}
static int dpm_run_callback(pm_callback_t cb, struct device *dev,
pm_message_t state, const char *info)
@@ -602,14 +599,7 @@ static void async_resume_noirq(void *data, async_cookie_t cookie)
put_device(dev);
}
void dpm_noirq_resume_devices(pm_message_t state)
{
struct device *dev;
ktime_t starttime = ktime_get();
@@ -654,11 +644,28 @@ void dpm_resume_noirq(pm_message_t state)
}
mutex_unlock(&dpm_list_mtx);
async_synchronize_full();
dpm_show_time(starttime, state, 0, "noirq");
trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
}
void dpm_noirq_end(void)
{
resume_device_irqs();
device_wakeup_disarm_wake_irqs();
cpuidle_resume();
}
/**
 * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
 * @state: PM transition of the system being carried out.
 *
 * Invoke the "noirq" resume callbacks for all devices in dpm_noirq_list and
 * allow device drivers' interrupt handlers to be called.
 */
void dpm_resume_noirq(pm_message_t state)
{
dpm_noirq_resume_devices(state);
dpm_noirq_end();
}
/** /**
...@@ -776,7 +783,7 @@ void dpm_resume_early(pm_message_t state) ...@@ -776,7 +783,7 @@ void dpm_resume_early(pm_message_t state)
} }
mutex_unlock(&dpm_list_mtx); mutex_unlock(&dpm_list_mtx);
async_synchronize_full(); async_synchronize_full();
dpm_show_time(starttime, state, "early"); dpm_show_time(starttime, state, 0, "early");
trace_suspend_resume(TPS("dpm_resume_early"), state.event, false); trace_suspend_resume(TPS("dpm_resume_early"), state.event, false);
} }
...@@ -948,7 +955,7 @@ void dpm_resume(pm_message_t state) ...@@ -948,7 +955,7 @@ void dpm_resume(pm_message_t state)
} }
mutex_unlock(&dpm_list_mtx); mutex_unlock(&dpm_list_mtx);
async_synchronize_full(); async_synchronize_full();
dpm_show_time(starttime, state, NULL); dpm_show_time(starttime, state, 0, NULL);
cpufreq_resume(); cpufreq_resume();
trace_suspend_resume(TPS("dpm_resume"), state.event, false); trace_suspend_resume(TPS("dpm_resume"), state.event, false);
...@@ -1098,6 +1105,11 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a ...@@ -1098,6 +1105,11 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
if (async_error) if (async_error)
goto Complete; goto Complete;
if (pm_wakeup_pending()) {
async_error = -EBUSY;
goto Complete;
}
if (dev->power.syscore || dev->power.direct_complete) if (dev->power.syscore || dev->power.direct_complete)
goto Complete; goto Complete;
...@@ -1158,22 +1170,19 @@ static int device_suspend_noirq(struct device *dev) ...@@ -1158,22 +1170,19 @@ static int device_suspend_noirq(struct device *dev)
return __device_suspend_noirq(dev, pm_transition, false); return __device_suspend_noirq(dev, pm_transition, false);
} }
/** void dpm_noirq_begin(void)
* dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices. {
* @state: PM transition of the system being carried out. cpuidle_pause();
* device_wakeup_arm_wake_irqs();
* Prevent device drivers from receiving interrupts and call the "noirq" suspend suspend_device_irqs();
* handlers for all non-sysdev devices. }
*/
int dpm_suspend_noirq(pm_message_t state) int dpm_noirq_suspend_devices(pm_message_t state)
{ {
ktime_t starttime = ktime_get(); ktime_t starttime = ktime_get();
int error = 0; int error = 0;
trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true); trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true);
cpuidle_pause();
device_wakeup_arm_wake_irqs();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx); mutex_lock(&dpm_list_mtx);
pm_transition = state; pm_transition = state;
async_error = 0; async_error = 0;
...@@ -1208,14 +1217,31 @@ int dpm_suspend_noirq(pm_message_t state) ...@@ -1208,14 +1217,31 @@ int dpm_suspend_noirq(pm_message_t state)
if (error) { if (error) {
suspend_stats.failed_suspend_noirq++; suspend_stats.failed_suspend_noirq++;
dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ); dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
} else {
dpm_show_time(starttime, state, "noirq");
} }
dpm_show_time(starttime, state, error, "noirq");
trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, false); trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, false);
return error; return error;
} }
/**
* dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices.
* @state: PM transition of the system being carried out.
*
* Prevent device drivers' interrupt handlers from being called and invoke
* "noirq" suspend callbacks for all non-sysdev devices.
*/
int dpm_suspend_noirq(pm_message_t state)
{
int ret;
dpm_noirq_begin();
ret = dpm_noirq_suspend_devices(state);
if (ret)
dpm_resume_noirq(resume_event(state));
return ret;
}
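The point of splitting dpm_suspend_noirq()/dpm_resume_noirq() into begin/devices and devices/end halves is that a caller can now keep interrupts disarmed across more than one device suspend/resume cycle. A hedged sketch of such a caller; s2idle_enter_with_devices() and platform_enter() are made-up names for illustration, while the dpm_noirq_*() helpers are the ones added above:

static int s2idle_enter_with_devices(pm_message_t state)
{
	int error;

	dpm_noirq_begin();	/* pause cpuidle, arm wake IRQs, disarm device IRQs */

	error = dpm_noirq_suspend_devices(state);
	if (!error)
		error = platform_enter();	/* hypothetical platform hook */

	dpm_noirq_resume_devices(resume_event(state));
	dpm_noirq_end();	/* rearm device IRQs, resume cpuidle */

	return error;
}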
/**
 * device_suspend_late - Execute a "late suspend" callback for given device.
 * @dev: Device to handle.
@@ -1350,9 +1376,8 @@ int dpm_suspend_late(pm_message_t state)
		suspend_stats.failed_suspend_late++;
		dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
		dpm_resume_early(resume_event(state));
-	} else {
-		dpm_show_time(starttime, state, "late");
	}
+	dpm_show_time(starttime, state, error, "late");
	trace_suspend_resume(TPS("dpm_suspend_late"), state.event, false);
	return error;
}
@@ -1618,8 +1643,8 @@ int dpm_suspend(pm_message_t state)
	if (error) {
		suspend_stats.failed_suspend++;
		dpm_save_failed_step(SUSPEND_SUSPEND);
-	} else
-		dpm_show_time(starttime, state, NULL);
+	}
+	dpm_show_time(starttime, state, error, NULL);
	trace_suspend_resume(TPS("dpm_suspend"), state.event, false);
	return error;
}
...
@@ -248,15 +248,22 @@ void dev_pm_opp_of_remove_table(struct device *dev)
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);

-/* Returns opp descriptor node for a device, caller must do of_node_put() */
-struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
+/* Returns opp descriptor node for a device node, caller must
+ * do of_node_put() */
+static struct device_node *_opp_of_get_opp_desc_node(struct device_node *np)
{
	/*
	 * There should be only ONE phandle present in "operating-points-v2"
	 * property.
	 */
-	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
+	return of_parse_phandle(np, "operating-points-v2", 0);
+}
+
+/* Returns opp descriptor node for a device, caller must do of_node_put() */
+struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
+{
+	return _opp_of_get_opp_desc_node(dev->of_node);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node);
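For reference, a minimal caller of the exported helper; example_count_opps() is hypothetical, and the returned node is refcounted, so of_node_put() is mandatory on every path:

static int example_count_opps(struct device *dev)
{
	struct device_node *np, *child;
	int count = 0;

	np = dev_pm_opp_of_get_opp_desc_node(dev);
	if (!np)
		return -ENOENT;

	for_each_child_of_node(np, child)
		count++;	/* one subnode per OPP in the v2 binding */

	of_node_put(np);
	return count;
}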
@@ -539,8 +546,12 @@ int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
		ret = dev_pm_opp_of_add_table(cpu_dev);
		if (ret) {
-			pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
-			       __func__, cpu, ret);
+			/*
+			 * OPP may get registered dynamically, don't print error
+			 * message here.
+			 */
+			pr_debug("%s: couldn't find opp table for cpu:%d, %d\n",
+				 __func__, cpu, ret);

			/* Free all other OPPs */
			dev_pm_opp_of_cpumask_remove_table(cpumask);
@@ -572,8 +583,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
				   struct cpumask *cpumask)
{
-	struct device_node *np, *tmp_np;
-	struct device *tcpu_dev;
+	struct device_node *np, *tmp_np, *cpu_np;
	int cpu, ret = 0;

	/* Get OPP descriptor node */
@@ -593,19 +603,18 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
		if (cpu == cpu_dev->id)
			continue;

-		tcpu_dev = get_cpu_device(cpu);
-		if (!tcpu_dev) {
-			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
+		cpu_np = of_get_cpu_node(cpu, NULL);
+		if (!cpu_np) {
+			dev_err(cpu_dev, "%s: failed to get cpu%d node\n",
				__func__, cpu);
-			ret = -ENODEV;
+			ret = -ENOENT;
			goto put_cpu_node;
		}

		/* Get OPP descriptor node */
-		tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev);
+		tmp_np = _opp_of_get_opp_desc_node(cpu_np);
		if (!tmp_np) {
-			dev_err(tcpu_dev, "%s: Couldn't find opp node.\n",
-				__func__);
+			pr_err("%pOF: Couldn't find opp node\n", cpu_np);
			ret = -ENOENT;
			goto put_cpu_node;
		}
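The calling convention is unchanged: callers still pass a CPU device and a cpumask to fill. A short illustrative caller (example_init_shared_opps() is hypothetical):

static int example_init_shared_opps(struct device *cpu_dev,
				    struct cpumask *shared)
{
	int ret;

	ret = dev_pm_opp_of_add_table(cpu_dev);
	if (ret)
		return ret;

	/* Sharing info is now resolved via of_get_cpu_node(), so this
	 * works even before the other CPUs' struct device instances
	 * have been created. */
	ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, shared);
	if (ret)
		dev_pm_opp_of_remove_table(cpu_dev);

	return ret;
}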
...
@@ -412,15 +412,17 @@ void device_set_wakeup_capable(struct device *dev, bool capable)
	if (!!dev->power.can_wakeup == !!capable)
		return;

-	dev->power.can_wakeup = capable;
	if (device_is_registered(dev) && !list_empty(&dev->power.entry)) {
		if (capable) {
-			if (wakeup_sysfs_add(dev))
-				return;
+			int ret = wakeup_sysfs_add(dev);
+
+			if (ret)
+				dev_info(dev, "Wakeup sysfs attributes not added\n");
		} else {
			wakeup_sysfs_remove(dev);
		}
	}
+
+	dev->power.can_wakeup = capable;
}
EXPORT_SYMBOL_GPL(device_set_wakeup_capable);
@@ -863,7 +865,7 @@ bool pm_wakeup_pending(void)
void pm_system_wakeup(void)
{
	atomic_inc(&pm_abort_suspend);
-	freeze_wake();
+	s2idle_wake();
}
EXPORT_SYMBOL_GPL(pm_system_wakeup);
...
@@ -71,15 +71,6 @@ config ARM_HIGHBANK_CPUFREQ
	  If in doubt, say N.

-config ARM_DB8500_CPUFREQ
-	tristate "ST-Ericsson DB8500 cpufreq" if COMPILE_TEST && !ARCH_U8500
-	default ARCH_U8500
-	depends on HAS_IOMEM
-	depends on !CPU_THERMAL || THERMAL
-	help
-	  This adds the CPUFreq driver for ST-Ericsson Ux500 (DB8500) SoC
-	  series.
-
config ARM_IMX6Q_CPUFREQ
	tristate "Freescale i.MX6 cpufreq support"
	depends on ARCH_MXC
@@ -96,14 +87,13 @@ config ARM_KIRKWOOD_CPUFREQ
	  This adds the CPUFreq driver for Marvell Kirkwood
	  SoCs.

-config ARM_MT8173_CPUFREQ
-	tristate "Mediatek MT8173 CPUFreq support"
+config ARM_MEDIATEK_CPUFREQ
+	tristate "CPU Frequency scaling support for MediaTek SoCs"
	depends on ARCH_MEDIATEK && REGULATOR
-	depends on ARM64 || (ARM_CPU_TOPOLOGY && COMPILE_TEST)
	depends on !CPU_THERMAL || THERMAL
	select PM_OPP
	help
-	  This adds the CPUFreq driver support for Mediatek MT8173 SoC.
+	  This adds the CPUFreq driver support for MediaTek SoCs.

config ARM_OMAP2PLUS_CPUFREQ
	bool "TI OMAP2+"
@@ -242,6 +232,11 @@ config ARM_STI_CPUFREQ
	  this config option if you wish to add CPUFreq support for STi based
	  SoCs.

+config ARM_TANGO_CPUFREQ
+	bool
+	depends on CPUFREQ_DT && ARCH_TANGO
+	default y
+
config ARM_TEGRA20_CPUFREQ
	bool "Tegra20 CPUFreq support"
	depends on ARCH_TEGRA
...
@@ -53,12 +53,11 @@ obj-$(CONFIG_ARM_DT_BL_CPUFREQ)	+= arm_big_little_dt.o
obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ)	+= brcmstb-avs-cpufreq.o
obj-$(CONFIG_ARCH_DAVINCI)		+= davinci-cpufreq.o
-obj-$(CONFIG_ARM_DB8500_CPUFREQ)	+= dbx500-cpufreq.o
obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ)	+= exynos5440-cpufreq.o
obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ)	+= highbank-cpufreq.o
obj-$(CONFIG_ARM_IMX6Q_CPUFREQ)		+= imx6q-cpufreq.o
obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ)	+= kirkwood-cpufreq.o
-obj-$(CONFIG_ARM_MT8173_CPUFREQ)	+= mt8173-cpufreq.o
+obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ)	+= mediatek-cpufreq.o
obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ)	+= omap-cpufreq.o
obj-$(CONFIG_ARM_PXA2xx_CPUFREQ)	+= pxa2xx-cpufreq.o
obj-$(CONFIG_PXA3xx)			+= pxa3xx-cpufreq.o
@@ -75,6 +74,7 @@ obj-$(CONFIG_ARM_SA1110_CPUFREQ)	+= sa1110-cpufreq.o
obj-$(CONFIG_ARM_SCPI_CPUFREQ)		+= scpi-cpufreq.o
obj-$(CONFIG_ARM_SPEAR_CPUFREQ)		+= spear-cpufreq.o
obj-$(CONFIG_ARM_STI_CPUFREQ)		+= sti-cpufreq.o
+obj-$(CONFIG_ARM_TANGO_CPUFREQ)		+= tango-cpufreq.o
obj-$(CONFIG_ARM_TEGRA20_CPUFREQ)	+= tegra20-cpufreq.o
obj-$(CONFIG_ARM_TEGRA124_CPUFREQ)	+= tegra124-cpufreq.o
obj-$(CONFIG_ARM_TEGRA186_CPUFREQ)	+= tegra186-cpufreq.o
...
@@ -483,11 +483,8 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
		return ret;
	}

-	if (arm_bL_ops->get_transition_latency)
-		policy->cpuinfo.transition_latency =
-			arm_bL_ops->get_transition_latency(cpu_dev);
-	else
-		policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+	policy->cpuinfo.transition_latency =
+		arm_bL_ops->get_transition_latency(cpu_dev);

	if (is_bL_switching_enabled())
		per_cpu(cpu_last_req_freq, policy->cpu) = clk_get_cpu_rate(policy->cpu);
@@ -622,7 +619,8 @@ int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
		return -EBUSY;
	}

-	if (!ops || !strlen(ops->name) || !ops->init_opp_table) {
+	if (!ops || !strlen(ops->name) || !ops->init_opp_table ||
+	    !ops->get_transition_latency) {
		pr_err("%s: Invalid arm_bL_ops, exiting\n", __func__);
		return -ENODEV;
	}
...
@@ -172,7 +172,6 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
		return -EFAULT;
	}

-	cpumask_set_cpu(policy->cpu, policy->cpus);
	cpu->cur_policy = policy;

	/* Set policy->cur to max now. The governors will adjust later. */
...
@@ -9,11 +9,16 @@
#include <linux/err.h>
#include <linux/of.h>
+#include <linux/of_device.h>
#include <linux/platform_device.h>

#include "cpufreq-dt.h"

-static const struct of_device_id machines[] __initconst = {
+/*
+ * Machines for which the cpufreq device is *always* created, mostly used for
+ * platforms using "operating-points" (V1) property.
+ */
+static const struct of_device_id whitelist[] __initconst = {
	{ .compatible = "allwinner,sun4i-a10", },
	{ .compatible = "allwinner,sun5i-a10s", },
	{ .compatible = "allwinner,sun5i-a13", },
@@ -22,7 +27,6 @@ static const struct of_device_id machines[] __initconst = {
	{ .compatible = "allwinner,sun6i-a31s", },
	{ .compatible = "allwinner,sun7i-a20", },
	{ .compatible = "allwinner,sun8i-a23", },
-	{ .compatible = "allwinner,sun8i-a33", },
	{ .compatible = "allwinner,sun8i-a83t", },
	{ .compatible = "allwinner,sun8i-h3", },
@@ -32,7 +36,6 @@ static const struct of_device_id machines[] __initconst = {
	{ .compatible = "arm,integrator-cp", },

	{ .compatible = "hisilicon,hi3660", },
-	{ .compatible = "hisilicon,hi6220", },

	{ .compatible = "fsl,imx27", },
	{ .compatible = "fsl,imx51", },
@@ -46,11 +49,8 @@ static const struct of_device_id machines[] __initconst = {
	{ .compatible = "samsung,exynos3250", },
	{ .compatible = "samsung,exynos4210", },
	{ .compatible = "samsung,exynos4212", },
-	{ .compatible = "samsung,exynos4412", },
	{ .compatible = "samsung,exynos5250", },
#ifndef CONFIG_BL_SWITCHER
-	{ .compatible = "samsung,exynos5420", },
-	{ .compatible = "samsung,exynos5433", },
	{ .compatible = "samsung,exynos5800", },
#endif
@@ -67,6 +67,8 @@ static const struct of_device_id machines[] __initconst = {
	{ .compatible = "renesas,r8a7792", },
	{ .compatible = "renesas,r8a7793", },
	{ .compatible = "renesas,r8a7794", },
+	{ .compatible = "renesas,r8a7795", },
+	{ .compatible = "renesas,r8a7796", },
	{ .compatible = "renesas,sh73a0", },

	{ .compatible = "rockchip,rk2928", },
@@ -76,17 +78,17 @@ static const struct of_device_id machines[] __initconst = {
	{ .compatible = "rockchip,rk3188", },
	{ .compatible = "rockchip,rk3228", },
	{ .compatible = "rockchip,rk3288", },
+	{ .compatible = "rockchip,rk3328", },
	{ .compatible = "rockchip,rk3366", },
	{ .compatible = "rockchip,rk3368", },
	{ .compatible = "rockchip,rk3399", },

-	{ .compatible = "sigma,tango4" },
-
-	{ .compatible = "socionext,uniphier-pro5", },
-	{ .compatible = "socionext,uniphier-pxs2", },
	{ .compatible = "socionext,uniphier-ld6b", },
-	{ .compatible = "socionext,uniphier-ld11", },
-	{ .compatible = "socionext,uniphier-ld20", },
+
+	{ .compatible = "st-ericsson,u8500", },
+	{ .compatible = "st-ericsson,u8540", },
+	{ .compatible = "st-ericsson,u9500", },
+	{ .compatible = "st-ericsson,u9540", },

	{ .compatible = "ti,omap2", },
	{ .compatible = "ti,omap3", },
@@ -94,27 +96,56 @@ static const struct of_device_id machines[] __initconst = {
	{ .compatible = "ti,omap5", },

	{ .compatible = "xlnx,zynq-7000", },
-	{ .compatible = "xlnx,zynqmp", },
-	{ .compatible = "zte,zx296718", },
+
+	{ }
+};
+
+/*
+ * Machines for which the cpufreq device is *not* created, mostly used for
+ * platforms using "operating-points-v2" property.
+ */
+static const struct of_device_id blacklist[] __initconst = {

	{ }
};

+static bool __init cpu0_node_has_opp_v2_prop(void)
+{
+	struct device_node *np = of_cpu_device_node_get(0);
+	bool ret = false;
+
+	if (of_get_property(np, "operating-points-v2", NULL))
+		ret = true;
+
+	of_node_put(np);
+	return ret;
+}
+
static int __init cpufreq_dt_platdev_init(void)
{
	struct device_node *np = of_find_node_by_path("/");
	const struct of_device_id *match;
+	const void *data = NULL;

	if (!np)
		return -ENODEV;

-	match = of_match_node(machines, np);
+	match = of_match_node(whitelist, np);
+	if (match) {
+		data = match->data;
+		goto create_pdev;
+	}
+
+	if (cpu0_node_has_opp_v2_prop() && !of_match_node(blacklist, np))
+		goto create_pdev;
+
	of_node_put(np);
-	if (!match)
-		return -ENODEV;
+	return -ENODEV;

+create_pdev:
+	of_node_put(np);
	return PTR_ERR_OR_ZERO(platform_device_register_data(NULL, "cpufreq-dt",
-							       -1, match->data,
+							       -1, data,
							       sizeof(struct cpufreq_dt_platform_data)));
}
device_initcall(cpufreq_dt_platdev_init);
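With this split, a platform whose cpu0 node carries "operating-points-v2" gets a cpufreq-dt device automatically unless it is listed in blacklist[]. A hypothetical opt-out entry ("vendor,example-soc" is illustrative only; the list is empty as merged):

static const struct of_device_id blacklist[] __initconst = {
	/* hypothetical SoC whose own driver creates the cpufreq device */
	{ .compatible = "vendor,example-soc", },

	{ }
};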
@@ -274,6 +274,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
		transition_latency = CPUFREQ_ETERNAL;

	policy->cpuinfo.transition_latency = transition_latency;
+	policy->dvfs_possible_from_any_cpu = true;

	return 0;
...
@@ -357,7 +357,6 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
	/* cpuinfo and default policy values */
	policy->min = policy->cpuinfo.min_freq = min_fsb * fid * 100;
	policy->max = policy->cpuinfo.max_freq = max_fsb * fid * 100;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;

	return 0;
}
@@ -369,6 +368,7 @@ static int nforce2_cpu_exit(struct cpufreq_policy *policy)
static struct cpufreq_driver nforce2_driver = {
	.name = "nforce2",
+	.flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.verify = nforce2_verify,
	.target = nforce2_target,
	.get = nforce2_get,
...
@@ -524,6 +524,32 @@ unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
}
EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq);

+unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
+{
+	unsigned int latency;
+
+	if (policy->transition_delay_us)
+		return policy->transition_delay_us;
+
+	latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
+	if (latency) {
+		/*
+		 * For platforms that can change the frequency very fast (< 10
+		 * us), the above formula gives a decent transition delay. But
+		 * for platforms where transition_latency is in milliseconds, it
+		 * ends up giving unrealistic values.
+		 *
+		 * Cap the default transition delay to 10 ms, which seems to be
+		 * a reasonable amount of time after which we should reevaluate
+		 * the frequency.
+		 */
+		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
+	}
+
+	return LATENCY_MULTIPLIER;
+}
+EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
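To make the cap concrete: LATENCY_MULTIPLIER is 1000, so the default delay is roughly latency * 1000 in microseconds. A platform with a 5 us transition latency gets a 5 ms delay, while one with a 500 us latency would compute 500 ms and is clamped to 10 ms. An illustrative stand-alone mirror of the rule (example_transition_delay_us() is not a kernel function):

static unsigned int example_transition_delay_us(unsigned int latency_us)
{
	if (latency_us)
		return min(latency_us * 1000U, 10000U);	/* cap at 10 ms */

	return 1000U;	/* no latency information: 1 ms default */
}

/* example_transition_delay_us(5)   == 5000  (5 ms, below the cap) */
/* example_transition_delay_us(500) == 10000 (capped at 10 ms)     */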
/*********************************************************************
 *                          SYSFS INTERFACE                          *
 *********************************************************************/
@@ -1817,9 +1843,10 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
 * twice in parallel for the same policy and that it will never be called in
 * parallel with either ->target() or ->target_index() for the same policy.
 *
- * If CPUFREQ_ENTRY_INVALID is returned by the driver's ->fast_switch()
- * callback to indicate an error condition, the hardware configuration must be
- * preserved.
+ * Returns the actual frequency set for the CPU.
+ *
+ * If 0 is returned by the driver's ->fast_switch() callback to indicate an
+ * error condition, the hardware configuration must be preserved.
 */
unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
					unsigned int target_freq)
@@ -1988,13 +2015,13 @@ static int cpufreq_init_governor(struct cpufreq_policy *policy)
	if (!policy->governor)
		return -EINVAL;

-	if (policy->governor->max_transition_latency &&
-	    policy->cpuinfo.transition_latency >
-	    policy->governor->max_transition_latency) {
+	/* Platform doesn't want dynamic frequency switching ? */
+	if (policy->governor->dynamic_switching &&
+	    cpufreq_driver->flags & CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING) {
		struct cpufreq_governor *gov = cpufreq_fallback_governor();

		if (gov) {
-			pr_warn("%s governor failed, too long transition latency of HW, fallback to %s governor\n",
+			pr_warn("Can't use %s governor as dynamic switching is disallowed. Fallback to %s governor\n",
				policy->governor->name, gov->name);
			policy->governor = gov;
		} else {
...
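The old latency-threshold heuristic is replaced by an explicit opt-in/opt-out pair. A hedged sketch of how the two sides meet (example_driver and example_governor are illustrative names; the flag and field are the ones introduced by this series):

/* Driver side: hardware that cannot switch frequencies on the fly. */
static struct cpufreq_driver example_driver = {
	.name	= "example",
	.flags	= CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	/* ... */
};

/* Governor side: a governor that reprograms the frequency periodically
 * declares it, so cpufreq_init_governor() can fall back to a static
 * governor for the driver above. */
static struct cpufreq_governor example_governor = {
	.name			= "example_gov",
	.dynamic_switching	= true,
	/* ... */
};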
@@ -246,7 +246,6 @@ gov_show_one_common(sampling_rate);
gov_show_one_common(sampling_down_factor);
gov_show_one_common(up_threshold);
gov_show_one_common(ignore_nice_load);
-gov_show_one_common(min_sampling_rate);
gov_show_one(cs, down_threshold);
gov_show_one(cs, freq_step);
@@ -254,12 +253,10 @@ gov_attr_rw(sampling_rate);
gov_attr_rw(sampling_down_factor);
gov_attr_rw(up_threshold);
gov_attr_rw(ignore_nice_load);
-gov_attr_ro(min_sampling_rate);
gov_attr_rw(down_threshold);
gov_attr_rw(freq_step);

static struct attribute *cs_attributes[] = {
-	&min_sampling_rate.attr,
	&sampling_rate.attr,
	&sampling_down_factor.attr,
	&up_threshold.attr,
@@ -297,10 +294,7 @@ static int cs_init(struct dbs_data *dbs_data)
	dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD;
	dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
	dbs_data->ignore_nice_load = 0;
	dbs_data->tuners = tuners;
-	dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO *
-		jiffies_to_usecs(10);

	return 0;
}
...
@@ -47,14 +47,11 @@ ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
{
	struct dbs_data *dbs_data = to_dbs_data(attr_set);
	struct policy_dbs_info *policy_dbs;
-	unsigned int rate;
	int ret;

-	ret = sscanf(buf, "%u", &rate);
+	ret = sscanf(buf, "%u", &dbs_data->sampling_rate);
	if (ret != 1)
		return -EINVAL;

-	dbs_data->sampling_rate = max(rate, dbs_data->min_sampling_rate);
-
	/*
	 * We are operating under dbs_data->mutex and so the list and its
	 * entries can't be freed concurrently.
@@ -275,6 +272,9 @@ static void dbs_update_util_handler(struct update_util_data *data, u64 time,
	struct policy_dbs_info *policy_dbs = cdbs->policy_dbs;
	u64 delta_ns, lst;

+	if (!cpufreq_can_do_remote_dvfs(policy_dbs->policy))
+		return;
+
	/*
	 * The work may not be allowed to be queued up right now.
	 * Possible reasons:
@@ -392,7 +392,6 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
	struct dbs_governor *gov = dbs_governor_of(policy);
	struct dbs_data *dbs_data;
	struct policy_dbs_info *policy_dbs;
-	unsigned int latency;
	int ret = 0;

	/* State should be equivalent to EXIT */
@@ -431,16 +430,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
	if (ret)
		goto free_policy_dbs_info;

-	/* policy latency is in ns. Convert it to us first */
-	latency = policy->cpuinfo.transition_latency / 1000;
-	if (latency == 0)
-		latency = 1;
-
-	/* Bring kernel and HW constraints together */
-	dbs_data->min_sampling_rate = max(dbs_data->min_sampling_rate,
-					  MIN_LATENCY_MULTIPLIER * latency);
-	dbs_data->sampling_rate = max(dbs_data->min_sampling_rate,
-				      LATENCY_MULTIPLIER * latency);
+	dbs_data->sampling_rate = cpufreq_policy_transition_delay_us(policy);

	if (!have_governor_per_policy())
		gov->gdbs_data = dbs_data;
...
@@ -41,7 +41,6 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
struct dbs_data {
	struct gov_attr_set attr_set;
	void *tuners;
-	unsigned int min_sampling_rate;
	unsigned int ignore_nice_load;
	unsigned int sampling_rate;
	unsigned int sampling_down_factor;
@@ -160,7 +159,7 @@ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy);
#define CPUFREQ_DBS_GOVERNOR_INITIALIZER(_name_)		\
	{							\
		.name = _name_,					\
-		.max_transition_latency = TRANSITION_LATENCY_LIMIT, \
+		.dynamic_switching = true,			\
		.owner = THIS_MODULE,				\
		.init = cpufreq_dbs_governor_init,		\
		.exit = cpufreq_dbs_governor_exit,		\
...
@@ -319,7 +319,6 @@ gov_show_one_common(sampling_rate);
gov_show_one_common(up_threshold);
gov_show_one_common(sampling_down_factor);
gov_show_one_common(ignore_nice_load);
-gov_show_one_common(min_sampling_rate);
gov_show_one_common(io_is_busy);
gov_show_one(od, powersave_bias);
@@ -329,10 +328,8 @@ gov_attr_rw(up_threshold);
gov_attr_rw(sampling_down_factor);
gov_attr_rw(ignore_nice_load);
gov_attr_rw(powersave_bias);
-gov_attr_ro(min_sampling_rate);

static struct attribute *od_attributes[] = {
-	&min_sampling_rate.attr,
	&sampling_rate.attr,
	&up_threshold.attr,
	&sampling_down_factor.attr,
@@ -373,17 +370,8 @@ static int od_init(struct dbs_data *dbs_data)
	if (idle_time != -1ULL) {
		/* Idle micro accounting is supported. Use finer thresholds */
		dbs_data->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD;
-		/*
-		 * In nohz/micro accounting case we set the minimum frequency
-		 * not depending on HZ, but fixed (very low).
-		 */
-		dbs_data->min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE;
	} else {
		dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD;
-
-		/* For correct statistics, we need 10 ticks for each measure */
-		dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO *
-			jiffies_to_usecs(10);
	}

	dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
...
/*
* Copyright (C) STMicroelectronics 2009
* Copyright (C) ST-Ericsson SA 2010-2012
*
* License Terms: GNU General Public License v2
* Author: Sundar Iyer <sundar.iyer@stericsson.com>
* Author: Martin Persson <martin.persson@stericsson.com>
* Author: Jonas Aaberg <jonas.aberg@stericsson.com>
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/cpufreq.h>
#include <linux/cpu_cooling.h>
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
static struct cpufreq_frequency_table *freq_table;
static struct clk *armss_clk;
static struct thermal_cooling_device *cdev;
static int dbx500_cpufreq_target(struct cpufreq_policy *policy,
unsigned int index)
{
/* update armss clk frequency */
return clk_set_rate(armss_clk, freq_table[index].frequency * 1000);
}
static int dbx500_cpufreq_init(struct cpufreq_policy *policy)
{
policy->clk = armss_clk;
return cpufreq_generic_init(policy, freq_table, 20 * 1000);
}
static int dbx500_cpufreq_exit(struct cpufreq_policy *policy)
{
if (!IS_ERR(cdev))
cpufreq_cooling_unregister(cdev);
return 0;
}
static void dbx500_cpufreq_ready(struct cpufreq_policy *policy)
{
cdev = cpufreq_cooling_register(policy);
if (IS_ERR(cdev))
pr_err("Failed to register cooling device %ld\n", PTR_ERR(cdev));
else
pr_info("Cooling device registered: %s\n", cdev->type);
}
static struct cpufreq_driver dbx500_cpufreq_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = dbx500_cpufreq_target,
.get = cpufreq_generic_get,
.init = dbx500_cpufreq_init,
.exit = dbx500_cpufreq_exit,
.ready = dbx500_cpufreq_ready,
.name = "DBX500",
.attr = cpufreq_generic_attr,
};
static int dbx500_cpufreq_probe(struct platform_device *pdev)
{
struct cpufreq_frequency_table *pos;
freq_table = dev_get_platdata(&pdev->dev);
if (!freq_table) {
pr_err("dbx500-cpufreq: Failed to fetch cpufreq table\n");
return -ENODEV;
}
armss_clk = clk_get(&pdev->dev, "armss");
if (IS_ERR(armss_clk)) {
pr_err("dbx500-cpufreq: Failed to get armss clk\n");
return PTR_ERR(armss_clk);
}
pr_info("dbx500-cpufreq: Available frequencies:\n");
cpufreq_for_each_entry(pos, freq_table)
pr_info(" %d Mhz\n", pos->frequency / 1000);
return cpufreq_register_driver(&dbx500_cpufreq_driver);
}
static struct platform_driver dbx500_cpufreq_plat_driver = {
.driver = {
.name = "cpufreq-ux500",
},
.probe = dbx500_cpufreq_probe,
};
static int __init dbx500_cpufreq_register(void)
{
return platform_driver_register(&dbx500_cpufreq_plat_driver);
}
device_initcall(dbx500_cpufreq_register);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("cpufreq driver for DBX500");
@@ -165,9 +165,6 @@ static int elanfreq_cpu_init(struct cpufreq_policy *policy)
		if (pos->frequency > max_freq)
			pos->frequency = CPUFREQ_ENTRY_INVALID;

-	/* cpuinfo and default policy values */
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-
	return cpufreq_table_validate_and_show(policy, elanfreq_table);
}
@@ -196,6 +193,7 @@ __setup("elanfreq=", elanfreq_setup);
static struct cpufreq_driver elanfreq_driver = {
	.get		= elanfreq_get_cpu_frequency,
+	.flags		= CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= elanfreq_target,
	.init		= elanfreq_cpu_init,
...
@@ -428,7 +428,6 @@ static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
	policy->max = maxfreq;
	policy->cpuinfo.min_freq = maxfreq / max_duration;
	policy->cpuinfo.max_freq = maxfreq;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;

	return 0;
}
@@ -438,6 +437,7 @@ static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
 * MediaGX/Geode GX initialize cpufreq driver
 */
static struct cpufreq_driver gx_suspmod_driver = {
+	.flags		= CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.get		= gx_get_cpuspeed,
	.verify		= cpufreq_gx_verify,
	.target		= cpufreq_gx_target,
...
@@ -47,6 +47,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
	struct dev_pm_opp *opp;
	unsigned long freq_hz, volt, volt_old;
	unsigned int old_freq, new_freq;
+	bool pll1_sys_temp_enabled = false;
	int ret;

	new_freq = freq_table[index].frequency;
@@ -124,6 +125,10 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
		if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) {
			clk_set_rate(pll1_sys_clk, new_freq * 1000);
			clk_set_parent(pll1_sw_clk, pll1_sys_clk);
+		} else {
+			/* pll1_sys needs to be enabled for divider rate change to work. */
+			pll1_sys_temp_enabled = true;
+			clk_prepare_enable(pll1_sys_clk);
		}
	}
@@ -135,6 +140,10 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
		return ret;
	}

+	/* PLL1 is only needed until after ARM-PODF is set. */
+	if (pll1_sys_temp_enabled)
+		clk_disable_unprepare(pll1_sys_clk);
+
	/* scaling down?  scale voltage after frequency */
	if (new_freq < old_freq) {
		ret = regulator_set_voltage_tol(arm_reg, volt, 0);
...
@@ -270,7 +270,6 @@ static int longrun_cpu_init(struct cpufreq_policy *policy)
	/* cpuinfo and default policy values */
	policy->cpuinfo.min_freq = longrun_low_freq;
	policy->cpuinfo.max_freq = longrun_high_freq;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
	longrun_get_policy(policy);

	return 0;
...
@@ -114,7 +114,7 @@ static struct cpufreq_driver loongson2_cpufreq_driver = {
	.attr = cpufreq_generic_attr,
};

-static struct platform_device_id platform_device_ids[] = {
+static const struct platform_device_id platform_device_ids[] = {
	{
		.name = "loongson2_cpufreq",
	},
...
@@ -507,7 +507,7 @@ static int mtk_cpufreq_exit(struct cpufreq_policy *policy)
	return 0;
}

-static struct cpufreq_driver mt8173_cpufreq_driver = {
+static struct cpufreq_driver mtk_cpufreq_driver = {
	.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
		 CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
	.verify = cpufreq_generic_frequency_table_verify,
@@ -520,7 +520,7 @@ static struct cpufreq_driver mt8173_cpufreq_driver = {
	.attr = cpufreq_generic_attr,
};

-static int mt8173_cpufreq_probe(struct platform_device *pdev)
+static int mtk_cpufreq_probe(struct platform_device *pdev)
{
	struct mtk_cpu_dvfs_info *info, *tmp;
	int cpu, ret;
@@ -547,7 +547,7 @@ static int mt8173_cpufreq_probe(struct platform_device *pdev)
		list_add(&info->list_head, &dvfs_info_list);
	}

-	ret = cpufreq_register_driver(&mt8173_cpufreq_driver);
+	ret = cpufreq_register_driver(&mtk_cpufreq_driver);
	if (ret) {
		dev_err(&pdev->dev, "failed to register mtk cpufreq driver\n");
		goto release_dvfs_info_list;
@@ -564,15 +564,18 @@ static int mt8173_cpufreq_probe(struct platform_device *pdev)
	return ret;
}

-static struct platform_driver mt8173_cpufreq_platdrv = {
+static struct platform_driver mtk_cpufreq_platdrv = {
	.driver = {
-		.name	= "mt8173-cpufreq",
+		.name	= "mtk-cpufreq",
	},
-	.probe		= mt8173_cpufreq_probe,
+	.probe		= mtk_cpufreq_probe,
};

/* List of machines supported by this driver */
-static const struct of_device_id mt8173_cpufreq_machines[] __initconst = {
+static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
+	{ .compatible = "mediatek,mt2701", },
+	{ .compatible = "mediatek,mt7622", },
+	{ .compatible = "mediatek,mt7623", },
	{ .compatible = "mediatek,mt817x", },
	{ .compatible = "mediatek,mt8173", },
	{ .compatible = "mediatek,mt8176", },
@@ -580,7 +583,7 @@ static const struct of_device_id mt8173_cpufreq_machines[] __initconst = {
	{ }
};

-static int __init mt8173_cpufreq_driver_init(void)
+static int __init mtk_cpufreq_driver_init(void)
{
	struct device_node *np;
	const struct of_device_id *match;
@@ -591,14 +594,14 @@ static int __init mt8173_cpufreq_driver_init(void)
	if (!np)
		return -ENODEV;

-	match = of_match_node(mt8173_cpufreq_machines, np);
+	match = of_match_node(mtk_cpufreq_machines, np);
	of_node_put(np);
	if (!match) {
-		pr_warn("Machine is not compatible with mt8173-cpufreq\n");
+		pr_warn("Machine is not compatible with mtk-cpufreq\n");
		return -ENODEV;
	}

-	err = platform_driver_register(&mt8173_cpufreq_platdrv);
+	err = platform_driver_register(&mtk_cpufreq_platdrv);
	if (err)
		return err;
@@ -608,7 +611,7 @@ static int __init mt8173_cpufreq_driver_init(void)
	 * and the device registration codes are put here to handle defer
	 * probing.
	 */
-	pdev = platform_device_register_simple("mt8173-cpufreq", -1, NULL, 0);
+	pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
	if (IS_ERR(pdev)) {
		pr_err("failed to register mtk-cpufreq platform device\n");
		return PTR_ERR(pdev);
@@ -616,4 +619,4 @@ static int __init mt8173_cpufreq_driver_init(void)
	return 0;
}
-device_initcall(mt8173_cpufreq_driver_init);
+device_initcall(mtk_cpufreq_driver_init);
@@ -442,7 +442,8 @@ static struct cpufreq_driver pmac_cpufreq_driver = {
	.init		= pmac_cpufreq_cpu_init,
	.suspend	= pmac_cpufreq_suspend,
	.resume		= pmac_cpufreq_resume,
-	.flags		= CPUFREQ_PM_NO_WARN,
+	.flags		= CPUFREQ_PM_NO_WARN |
+			  CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.attr		= cpufreq_generic_attr,
	.name		= "powermac",
};
@@ -626,14 +627,16 @@ static int __init pmac_cpufreq_setup(void)
	if (!value)
		goto out;
	cur_freq = (*value) / 1000;
-	transition_latency = CPUFREQ_ETERNAL;

	/*  Check for 7447A based MacRISC3 */
	if (of_machine_is_compatible("MacRISC3") &&
	    of_get_property(cpunode, "dynamic-power-step", NULL) &&
	    PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
		pmac_cpufreq_init_7447A(cpunode);
+
+		/* Allow dynamic switching */
		transition_latency = 8000000;
+		pmac_cpufreq_driver.flags &= ~CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING;

	/* Check for other MacRISC3 machines */
	} else if (of_machine_is_compatible("PowerBook3,4") ||
		   of_machine_is_compatible("PowerBook3,5") ||
...
@@ -516,7 +516,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
		goto bail;
	}

-	DBG("cpufreq: i2c clock chip found: %s\n", hwclock->full_name);
+	DBG("cpufreq: i2c clock chip found: %pOF\n", hwclock);

	/* Now get all the platform functions */
	pfunc_cpu_getfreq =
...
@@ -602,6 +602,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
	}

	clk_base = of_iomap(np, 0);
+	of_node_put(np);
	if (!clk_base) {
		pr_err("%s: failed to map clock registers\n", __func__);
		return -EFAULT;
@@ -612,6 +613,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
		if (id < 0 || id >= ARRAY_SIZE(dmc_base)) {
			pr_err("%s: failed to get alias of dmc node '%s'\n",
			       __func__, np->name);
+			of_node_put(np);
			return id;
		}
@@ -619,6 +621,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
		if (!dmc_base[id]) {
			pr_err("%s: failed to map dmc%d registers\n",
			       __func__, id);
+			of_node_put(np);
			return -EFAULT;
		}
	}
...
@@ -197,11 +197,12 @@ static int sa1100_target(struct cpufreq_policy *policy, unsigned int ppcr)

static int __init sa1100_cpu_init(struct cpufreq_policy *policy)
{
-	return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL);
+	return cpufreq_generic_init(policy, sa11x0_freq_table, 0);
}

static struct cpufreq_driver sa1100_driver __refdata = {
-	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
+			  CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= sa1100_target,
	.get		= sa11x0_getspeed,
...
@@ -306,13 +306,14 @@ static int sa1110_target(struct cpufreq_policy *policy, unsigned int ppcr)

static int __init sa1110_cpu_init(struct cpufreq_policy *policy)
{
-	return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL);
+	return cpufreq_generic_init(policy, sa11x0_freq_table, 0);
}

/* sa1110_driver needs __refdata because it must remain after init registers
 * it with cpufreq_register_driver() */
static struct cpufreq_driver sa1110_driver __refdata = {
-	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
+			  CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= sa1110_target,
	.get		= sa11x0_getspeed,
...
@@ -137,8 +137,6 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy)
			(clk_round_rate(cpuclk, ~0UL) + 500) / 1000;
	}

-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-
	dev_info(dev, "CPU Frequencies - Minimum %u.%03u MHz, "
		 "Maximum %u.%03u MHz.\n",
		 policy->min / 1000, policy->min % 1000,
@@ -159,6 +157,7 @@ static int sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
static struct cpufreq_driver sh_cpufreq_driver = {
	.name		= "sh",
+	.flags		= CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.get		= sh_cpufreq_get,
	.target		= sh_cpufreq_target,
	.verify		= sh_cpufreq_verify,
...
@@ -207,7 +207,7 @@ static unsigned int speedstep_detect_chipset(void)
		 * 8100 which use a pretty old revision of the 82815
		 * host bridge. Abort on these systems.
		 */
-		static struct pci_dev *hostbridge;
+		struct pci_dev *hostbridge;

		hostbridge = pci_get_subsys(PCI_VENDOR_ID_INTEL,
			      PCI_DEVICE_ID_INTEL_82815_MC,
...
@@ -35,7 +35,7 @@ static int relaxed_check;
static unsigned int pentium3_get_frequency(enum speedstep_processor processor)
{
	/* See table 14 of p3_ds.pdf and table 22 of 29834003.pdf */
-	struct {
+	static const struct {
		unsigned int ratio;	/* Frequency Multiplier (x10) */
		u8 bitmap;		/* power on configuration bits
					   [27, 25:22] (in MSR 0x2a) */
@@ -58,7 +58,7 @@ static unsigned int pentium3_get_frequency(enum speedstep_processor processor)
	};

	/* PIII(-M) FSB settings: see table b1-b of 24547206.pdf */
-	struct {
+	static const struct {
		unsigned int value;	/* Front Side Bus speed in MHz */
		u8 bitmap;		/* power on configuration bits [18: 19]
					   (in MSR 0x2a) */
...
@@ -266,7 +266,6 @@ static int speedstep_cpu_init(struct cpufreq_policy *policy)
		pr_debug("workaround worked.\n");
	}

-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
	return cpufreq_table_validate_and_show(policy, speedstep_freqs);
}
@@ -290,6 +289,7 @@ static int speedstep_resume(struct cpufreq_policy *policy)
static struct cpufreq_driver speedstep_driver = {
	.name		= "speedstep-smi",
+	.flags		= CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= speedstep_target,
	.init		= speedstep_cpu_init,
......
drivers/cpufreq/sti-cpufreq.c
@@ -65,8 +65,8 @@ static int sti_cpufreq_fetch_major(void) {
 	ret = of_property_read_u32_index(np, "st,syscfg",
 					 MAJOR_ID_INDEX, &major_offset);
 	if (ret) {
-		dev_err(dev, "No major number offset provided in %s [%d]\n",
-			np->full_name, ret);
+		dev_err(dev, "No major number offset provided in %pOF [%d]\n",
+			np, ret);
 		return ret;
 	}
@@ -92,8 +92,8 @@ static int sti_cpufreq_fetch_minor(void)
 					 MINOR_ID_INDEX, &minor_offset);
 	if (ret) {
 		dev_err(dev,
-			"No minor number offset provided %s [%d]\n",
-			np->full_name, ret);
+			"No minor number offset provided %pOF [%d]\n",
+			np, ret);
 		return ret;
 	}
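Conversions like the two hunks above swap the deprecated `device_node->full_name` string for the `%pOF` printk extension, which prints the node's device-tree path directly. A minimal sketch of the resulting idiom (the function and message are illustrative, not from this series):

/* Hedged sketch: printing a DT node with the %pOF extension. */
#include <linux/of.h>
#include <linux/printk.h>

static void my_report(struct device_node *np)
{
	/* %pOF expands to the node's full path, e.g. "/soc/cpufreq@0" */
	pr_info("configuring node %pOF\n", np);
}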

drivers/cpufreq/tango-cpufreq.c (new file)
#include <linux/of.h>
#include <linux/cpu.h>
#include <linux/clk.h>
#include <linux/pm_opp.h>
#include <linux/platform_device.h>

static const struct of_device_id machines[] __initconst = {
	{ .compatible = "sigma,tango4" },
	{ /* sentinel */ }
};

static int __init tango_cpufreq_init(void)
{
	struct device *cpu_dev = get_cpu_device(0);
	unsigned long max_freq;
	struct clk *cpu_clk;
	void *res;

	if (!of_match_node(machines, of_root))
		return -ENODEV;

	cpu_clk = clk_get(cpu_dev, NULL);
	if (IS_ERR(cpu_clk))
		return -ENODEV;

	max_freq = clk_get_rate(cpu_clk);

	dev_pm_opp_add(cpu_dev, max_freq / 1, 0);
	dev_pm_opp_add(cpu_dev, max_freq / 2, 0);
	dev_pm_opp_add(cpu_dev, max_freq / 3, 0);
	dev_pm_opp_add(cpu_dev, max_freq / 5, 0);
	dev_pm_opp_add(cpu_dev, max_freq / 9, 0);

	res = platform_device_register_data(NULL, "cpufreq-dt", -1, NULL, 0);
	return PTR_ERR_OR_ZERO(res);
}
device_initcall(tango_cpufreq_init);
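The new driver registers integer divisions of the boot clock rate as OPPs and then hands actual frequency scaling to the generic cpufreq-dt platform driver. A hedged sketch of how such OPPs are resolved by a consumer (the function name and the 500 MHz request are illustrative):

/* Hedged sketch: resolving one of the OPPs registered above via the
 * standard OPP lookup helpers.
 */
#include <linux/pm_opp.h>

static unsigned long pick_rate(struct device *cpu_dev)
{
	unsigned long freq = 500000000;	/* request: 500 MHz */
	struct dev_pm_opp *opp;

	/* Rounds the request up to the nearest registered OPP. */
	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq);
	if (IS_ERR(opp))
		return 0;
	dev_pm_opp_put(opp);
	return freq;	/* updated in place to the chosen OPP rate */
}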

drivers/cpufreq/ti-cpufreq.c
@@ -245,8 +245,6 @@ static int ti_cpufreq_init(void)
 	if (ret)
 		goto fail_put_node;

-	of_node_put(opp_data->opp_node);
-
 	ret = PTR_ERR_OR_ZERO(dev_pm_opp_set_supported_hw(opp_data->cpu_dev,
 							  version, VERSION_COUNT));
 	if (ret) {
@@ -255,6 +253,8 @@ static int ti_cpufreq_init(void)
 		goto fail_put_node;
 	}

+	of_node_put(opp_data->opp_node);
+
 register_cpufreq_dt:
 	platform_device_register_simple("cpufreq-dt", -1, NULL, 0);

drivers/cpufreq/unicore2-cpufreq.c
@@ -58,13 +58,12 @@ static int __init ucv2_cpu_init(struct cpufreq_policy *policy)
 	policy->min = policy->cpuinfo.min_freq = 250000;
 	policy->max = policy->cpuinfo.max_freq = 1000000;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
 	policy->clk = clk_get(NULL, "MAIN_CLK");
 	return PTR_ERR_OR_ZERO(policy->clk);
 }

 static struct cpufreq_driver ucv2_driver = {
-	.flags		= CPUFREQ_STICKY,
+	.flags		= CPUFREQ_STICKY | CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
 	.verify		= ucv2_verify_speed,
 	.target		= ucv2_target,
 	.get		= cpufreq_generic_get,

drivers/cpuidle/Makefile
@@ -5,6 +5,7 @@
 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
 obj-$(CONFIG_DT_IDLE_STATES)		  += dt_idle_states.o
+obj-$(CONFIG_ARCH_HAS_CPU_RELAX)	  += poll_state.o

##################################################################################
# ARM SoC drivers

drivers/cpuidle/cpuidle.c
@@ -77,7 +77,7 @@ static int find_deepest_state(struct cpuidle_driver *drv,
 			      struct cpuidle_device *dev,
 			      unsigned int max_latency,
 			      unsigned int forbidden_flags,
-			      bool freeze)
+			      bool s2idle)
 {
 	unsigned int latency_req = 0;
 	int i, ret = 0;
@@ -89,7 +89,7 @@ static int find_deepest_state(struct cpuidle_driver *drv,
 		if (s->disabled || su->disable || s->exit_latency <= latency_req
 		    || s->exit_latency > max_latency
 		    || (s->flags & forbidden_flags)
-		    || (freeze && !s->enter_freeze))
+		    || (s2idle && !s->enter_s2idle))
 			continue;

 		latency_req = s->exit_latency;
@@ -128,7 +128,7 @@ int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
 }

 #ifdef CONFIG_SUSPEND
-static void enter_freeze_proper(struct cpuidle_driver *drv,
+static void enter_s2idle_proper(struct cpuidle_driver *drv,
 				struct cpuidle_device *dev, int index)
 {
 	/*
@@ -143,7 +143,7 @@ static void enter_s2idle_proper(struct cpuidle_driver *drv,
 	 * suspended is generally unsafe.
 	 */
 	stop_critical_timings();
-	drv->states[index].enter_freeze(dev, drv, index);
+	drv->states[index].enter_s2idle(dev, drv, index);
 	WARN_ON(!irqs_disabled());
 	/*
 	 * timekeeping_resume() that will be called by tick_unfreeze() for the
@@ -155,25 +155,25 @@ static void enter_s2idle_proper(struct cpuidle_driver *drv,
 }

 /**
- * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.
+ * cpuidle_enter_s2idle - Enter an idle state suitable for suspend-to-idle.
  * @drv: cpuidle driver for the given CPU.
  * @dev: cpuidle device for the given CPU.
  *
- * If there are states with the ->enter_freeze callback, find the deepest of
+ * If there are states with the ->enter_s2idle callback, find the deepest of
  * them and enter it with frozen tick.
  */
-int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+int cpuidle_enter_s2idle(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
 	int index;

 	/*
-	 * Find the deepest state with ->enter_freeze present, which guarantees
+	 * Find the deepest state with ->enter_s2idle present, which guarantees
 	 * that interrupts won't be enabled when it exits and allows the tick to
 	 * be frozen safely.
 	 */
 	index = find_deepest_state(drv, dev, UINT_MAX, 0, true);
 	if (index > 0)
-		enter_freeze_proper(drv, dev, index);
+		enter_s2idle_proper(drv, dev, index);

 	return index;
 }
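For drivers, the rename is mechanical: the state table callback moves from .enter_freeze to .enter_s2idle with the same contract (no re-enabling of interrupts, no clock-event changes). A hedged sketch of a state table after the rename ("my_enter" and "my_enter_s2idle" are illustrative callbacks, not part of this series):

/* Hedged sketch: a one-state cpuidle driver providing both entry paths. */
#include <linux/cpuidle.h>

static int my_enter(struct cpuidle_device *dev,
		    struct cpuidle_driver *drv, int index)
{
	/* normal idle entry: may re-enable interrupts on exit */
	return index;
}

static void my_enter_s2idle(struct cpuidle_device *dev,
			    struct cpuidle_driver *drv, int index)
{
	/* suspend-to-idle entry: must not re-enable interrupts */
}

static struct cpuidle_driver my_idle_driver = {
	.name = "my_idle",
	.states[0] = {
		.name			= "C1",
		.exit_latency		= 2,
		.target_residency	= 5,
		.enter			= my_enter,
		.enter_s2idle		= my_enter_s2idle,	/* was .enter_freeze */
	},
	.state_count = 1,
};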

drivers/cpuidle/driver.c
@@ -179,36 +179,6 @@ static void __cpuidle_driver_init(struct cpuidle_driver *drv)
 	}
 }

-#ifdef CONFIG_ARCH_HAS_CPU_RELAX
-static int __cpuidle poll_idle(struct cpuidle_device *dev,
-			       struct cpuidle_driver *drv, int index)
-{
-	local_irq_enable();
-	if (!current_set_polling_and_test()) {
-		while (!need_resched())
-			cpu_relax();
-	}
-	current_clr_polling();
-
-	return index;
-}
-
-static void poll_idle_init(struct cpuidle_driver *drv)
-{
-	struct cpuidle_state *state = &drv->states[0];
-
-	snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
-	snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
-	state->exit_latency = 0;
-	state->target_residency = 0;
-	state->power_usage = -1;
-	state->enter = poll_idle;
-	state->disabled = false;
-}
-#else
-static void poll_idle_init(struct cpuidle_driver *drv) {}
-#endif /* !CONFIG_ARCH_HAS_CPU_RELAX */
-
 /**
  * __cpuidle_register_driver: register the driver
  * @drv: a valid pointer to a struct cpuidle_driver
@@ -246,8 +216,6 @@ static int __cpuidle_register_driver(struct cpuidle_driver *drv)
 		on_each_cpu_mask(drv->cpumask, cpuidle_setup_broadcast_timer,
 				 (void *)1, 1);

-	poll_idle_init(drv);
-
 	return 0;
 }

drivers/cpuidle/dt_idle_states.c
@@ -41,9 +41,9 @@ static int init_state_node(struct cpuidle_state *idle_state,
 	/*
 	 * Since this is not a "coupled" state, it's safe to assume interrupts
 	 * won't be enabled when it exits allowing the tick to be frozen
-	 * safely. So enter() can be also enter_freeze() callback.
+	 * safely. So enter() can be also enter_s2idle() callback.
 	 */
-	idle_state->enter_freeze = match_id->data;
+	idle_state->enter_s2idle = match_id->data;

 	err = of_property_read_u32(state_node, "wakeup-latency-us",
 				   &idle_state->exit_latency);
@@ -53,16 +53,16 @@ static int init_state_node(struct cpuidle_state *idle_state,
 		err = of_property_read_u32(state_node, "entry-latency-us",
 					   &entry_latency);
 		if (err) {
-			pr_debug(" * %s missing entry-latency-us property\n",
-				 state_node->full_name);
+			pr_debug(" * %pOF missing entry-latency-us property\n",
+				 state_node);
 			return -EINVAL;
 		}

 		err = of_property_read_u32(state_node, "exit-latency-us",
 					   &exit_latency);
 		if (err) {
-			pr_debug(" * %s missing exit-latency-us property\n",
-				 state_node->full_name);
+			pr_debug(" * %pOF missing exit-latency-us property\n",
+				 state_node);
 			return -EINVAL;
 		}
 		/*
@@ -75,8 +75,8 @@ static int init_state_node(struct cpuidle_state *idle_state,
 	err = of_property_read_u32(state_node, "min-residency-us",
 				   &idle_state->target_residency);
 	if (err) {
-		pr_debug(" * %s missing min-residency-us property\n",
-			 state_node->full_name);
+		pr_debug(" * %pOF missing min-residency-us property\n",
+			 state_node);
 		return -EINVAL;
 	}
@@ -186,8 +186,8 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
 		}

 		if (!idle_state_valid(state_node, i, cpumask)) {
-			pr_warn("%s idle state not valid, bailing out\n",
-				state_node->full_name);
+			pr_warn("%pOF idle state not valid, bailing out\n",
+				state_node);
 			err = -EINVAL;
 			break;
 		}
@@ -200,8 +200,8 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
 		idle_state = &drv->states[state_idx++];
 		err = init_state_node(idle_state, matches, state_node);
 		if (err) {
-			pr_err("Parsing idle state node %s failed with err %d\n",
-			       state_node->full_name, err);
+			pr_err("Parsing idle state node %pOF failed with err %d\n",
+			       state_node, err);
 			err = -EINVAL;
 			break;
 		}

drivers/cpuidle/governors/ladder.c
@@ -69,6 +69,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
 	struct ladder_device_state *last_state;
 	int last_residency, last_idx = ldev->last_state_idx;
+	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);

 	/* Special case when user has set very strict latency requirement */
@@ -96,13 +97,13 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 	}

 	/* consider demotion */
-	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
+	if (last_idx > first_idx &&
 	    (drv->states[last_idx].disabled ||
 	    dev->states_usage[last_idx].disable ||
 	    drv->states[last_idx].exit_latency > latency_req)) {
 		int i;

-		for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) {
+		for (i = last_idx - 1; i > first_idx; i--) {
 			if (drv->states[i].exit_latency <= latency_req)
 				break;
 		}
@@ -110,7 +111,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 		return i;
 	}

-	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
+	if (last_idx > first_idx &&
 	    last_residency < last_state->threshold.demotion_time) {
 		last_state->stats.demotion_count++;
 		last_state->stats.promotion_count = 0;
@@ -133,13 +134,14 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
 			       struct cpuidle_device *dev)
 {
 	int i;
+	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu);
 	struct ladder_device_state *lstate;
 	struct cpuidle_state *state;

-	ldev->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+	ldev->last_state_idx = first_idx;

-	for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) {
+	for (i = first_idx; i < drv->state_count; i++) {
 		state = &drv->states[i];
 		lstate = &ldev->states[i];
@@ -151,7 +153,7 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
 		if (i < drv->state_count - 1)
 			lstate->threshold.promotion_time = state->exit_latency;
-		if (i > CPUIDLE_DRIVER_STATE_START)
+		if (i > first_idx)
 			lstate->threshold.demotion_time = state->exit_latency;
 	}
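The pattern in these governor hunks replaces the compile-time CPUIDLE_DRIVER_STATE_START constant with a runtime check: since the poll state is now installed explicitly by drivers (see poll_state.c below) rather than implied by CONFIG_ARCH_HAS_CPU_RELAX, the governors must discover at runtime whether state 0 is the poll loop. A minimal sketch of the replacement idiom (the helper name is illustrative):

/* Minimal sketch: skip a leading poll state when the driver installed one. */
static int cpuidle_first_real_state(struct cpuidle_driver *drv)
{
	/* State 0 is the busy-poll loop iff it carries CPUIDLE_FLAG_POLLING. */
	return (drv->states[0].flags & CPUIDLE_FLAG_POLLING) ? 1 : 0;
}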

drivers/cpuidle/governors/menu.c
@@ -324,8 +324,9 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 	expected_interval = get_typical_interval(data);
 	expected_interval = min(expected_interval, data->next_timer_us);

-	if (CPUIDLE_DRIVER_STATE_START > 0) {
-		struct cpuidle_state *s = &drv->states[CPUIDLE_DRIVER_STATE_START];
+	first_idx = 0;
+	if (drv->states[0].flags & CPUIDLE_FLAG_POLLING) {
+		struct cpuidle_state *s = &drv->states[1];
 		unsigned int polling_threshold;

 		/*
@@ -336,12 +337,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 		polling_threshold = max_t(unsigned int, 20, s->target_residency);
 		if (data->next_timer_us > polling_threshold &&
 		    latency_req > s->exit_latency && !s->disabled &&
-		    !dev->states_usage[CPUIDLE_DRIVER_STATE_START].disable)
-			first_idx = CPUIDLE_DRIVER_STATE_START;
-		else
-			first_idx = CPUIDLE_DRIVER_STATE_START - 1;
-	} else {
-		first_idx = 0;
+		    !dev->states_usage[1].disable)
+			first_idx = 1;
 	}

 	/*

drivers/cpuidle/poll_state.c (new file)
/*
 * poll_state.c - Polling idle state
 *
 * This file is released under the GPLv2.
 */

#include <linux/cpuidle.h>
#include <linux/sched.h>
#include <linux/sched/idle.h>

static int __cpuidle poll_idle(struct cpuidle_device *dev,
			       struct cpuidle_driver *drv, int index)
{
	local_irq_enable();
	if (!current_set_polling_and_test()) {
		while (!need_resched())
			cpu_relax();
	}
	current_clr_polling();

	return index;
}

void cpuidle_poll_state_init(struct cpuidle_driver *drv)
{
	struct cpuidle_state *state = &drv->states[0];

	snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
	snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
	state->exit_latency = 0;
	state->target_residency = 0;
	state->power_usage = -1;
	state->enter = poll_idle;
	state->disabled = false;
	state->flags = CPUIDLE_FLAG_POLLING;
}
EXPORT_SYMBOL_GPL(cpuidle_poll_state_init);
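This is the poll_idle code moved out of driver.c (see the removal above), now flagged with CPUIDLE_FLAG_POLLING and installed on request instead of unconditionally. A hedged usage sketch of a driver opting in ("my_driver" is illustrative; the x86 drivers are the real callers after this cleanup):

/* Hedged sketch: explicit opt-in to the poll state before registration. */
static struct cpuidle_driver my_driver = {
	.name = "my_idle",
};

static int __init my_idle_probe(void)
{
	cpuidle_poll_state_init(&my_driver);	/* installs state 0 as POLL */
	/* real C-states would be filled in at indices 1..n here */
	return cpuidle_register_driver(&my_driver);
}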

drivers/devfreq/Kconfig
 menuconfig PM_DEVFREQ
 	bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support"
 	select SRCU
+	select PM_OPP
 	help
 	  A device may have a list of frequencies and voltages available.
 	  devfreq, a generic DVFS framework can be registered for a device

drivers/devfreq/devfreq-event.c
@@ -277,8 +277,8 @@ int devfreq_event_get_edev_count(struct device *dev)
 					sizeof(u32));
 	if (count < 0) {
 		dev_err(dev,
-			"failed to get the count of devfreq-event in %s node\n",
-			dev->of_node->full_name);
+			"failed to get the count of devfreq-event in %pOF node\n",
+			dev->of_node);
 		return count;
 	}

drivers/devfreq/devfreq.c
@@ -564,7 +564,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
 	err = device_register(&devfreq->dev);
 	if (err) {
 		mutex_unlock(&devfreq->lock);
-		goto err_out;
+		goto err_dev;
 	}

 	devfreq->trans_table = devm_kzalloc(&devfreq->dev,
@@ -610,6 +610,9 @@ struct devfreq *devfreq_add_device(struct device *dev,
 	mutex_unlock(&devfreq_list_lock);

 	device_unregister(&devfreq->dev);
+err_dev:
+	if (devfreq)
+		kfree(devfreq);
 err_out:
 	return ERR_PTR(err);
 }

drivers/devfreq/governor.h
@@ -69,4 +69,8 @@ extern int devfreq_remove_governor(struct devfreq_governor *governor);

 extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);

+static inline int devfreq_update_stats(struct devfreq *df)
+{
+	return df->profile->get_dev_status(df->dev.parent, &df->last_status);
+}
+
 #endif /* _GOVERNOR_H */
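devfreq_update_stats() moves here from the public include/linux/devfreq.h (see the matching removal later in this diff): only governors should refresh last_status. A hedged sketch of a governor-side caller, simplified from the simple_ondemand pattern ("my_get_target_freq" is illustrative; assumes linux/math64.h for div_u64):

/* Hedged sketch: refresh device counters, then scale the current
 * frequency by the observed busy/total ratio.
 */
static int my_get_target_freq(struct devfreq *df, unsigned long *freq)
{
	struct devfreq_dev_status *stat;
	int err = devfreq_update_stats(df);	/* fills df->last_status */

	if (err)
		return err;

	stat = &df->last_status;
	if (stat->total_time == 0) {
		*freq = stat->current_frequency;
		return 0;
	}

	*freq = div_u64((u64)stat->current_frequency * stat->busy_time,
			stat->total_time);
	return 0;
}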

drivers/mfd/db8500-prcmu.c
@@ -33,7 +33,6 @@
 #include <linux/mfd/abx500/ab8500.h>
 #include <linux/regulator/db8500-prcmu.h>
 #include <linux/regulator/machine.h>
-#include <linux/cpufreq.h>
 #include <linux/platform_data/ux500_wdt.h>
 #include <linux/platform_data/db8500_thermal.h>
 #include "dbx500-prcmu-regs.h"
@@ -1692,32 +1691,27 @@ static long round_clock_rate(u8 clock, unsigned long rate)
 	return rounded_rate;
 }

-/* CPU FREQ table, may be changed due to if MAX_OPP is supported. */
-static struct cpufreq_frequency_table db8500_cpufreq_table[] = {
-	{ .frequency = 200000, .driver_data = ARM_EXTCLK,},
-	{ .frequency = 400000, .driver_data = ARM_50_OPP,},
-	{ .frequency = 800000, .driver_data = ARM_100_OPP,},
-	{ .frequency = CPUFREQ_TABLE_END,}, /* To be used for MAX_OPP. */
-	{ .frequency = CPUFREQ_TABLE_END,},
+static const unsigned long armss_freqs[] = {
+	200000000,
+	400000000,
+	800000000,
+	998400000
 };

 static long round_armss_rate(unsigned long rate)
 {
-	struct cpufreq_frequency_table *pos;
-	long freq = 0;
-
-	/* cpufreq table frequencies is in KHz. */
-	rate = rate / 1000;
+	unsigned long freq = 0;
+	int i;

 	/* Find the corresponding arm opp from the cpufreq table. */
-	cpufreq_for_each_entry(pos, db8500_cpufreq_table) {
-		freq = pos->frequency;
-		if (freq == rate)
+	for (i = 0; i < ARRAY_SIZE(armss_freqs); i++) {
+		freq = armss_freqs[i];
+		if (rate <= freq)
 			break;
 	}

 	/* Return the last valid value, even if a match was not found. */
-	return freq * 1000;
+	return freq;
 }

 #define MIN_PLL_VCO_RATE 600000000ULL
@@ -1854,21 +1848,23 @@ static void set_clock_rate(u8 clock, unsigned long rate)

 static int set_armss_rate(unsigned long rate)
 {
-	struct cpufreq_frequency_table *pos;
-
-	/* cpufreq table frequencies is in KHz. */
-	rate = rate / 1000;
+	unsigned long freq;
+	u8 opps[] = { ARM_EXTCLK, ARM_50_OPP, ARM_100_OPP, ARM_MAX_OPP };
+	int i;

 	/* Find the corresponding arm opp from the cpufreq table. */
-	cpufreq_for_each_entry(pos, db8500_cpufreq_table)
-		if (pos->frequency == rate)
+	for (i = 0; i < ARRAY_SIZE(armss_freqs); i++) {
+		freq = armss_freqs[i];
+		if (rate == freq)
 			break;
+	}

-	if (pos->frequency != rate)
+	if (rate != freq)
 		return -EINVAL;

 	/* Set the new arm opp. */
-	return db8500_prcmu_set_arm_opp(pos->driver_data);
+	pr_debug("SET ARM OPP 0x%02x\n", opps[i]);
+	return db8500_prcmu_set_arm_opp(opps[i]);
 }

 static int set_plldsi_rate(unsigned long rate)
@@ -3048,12 +3044,6 @@ static const struct mfd_cell db8500_prcmu_devs[] = {
 		.platform_data = &db8500_regulators,
 		.pdata_size = sizeof(db8500_regulators),
 	},
-	{
-		.name = "cpufreq-ux500",
-		.of_compatible = "stericsson,cpufreq-ux500",
-		.platform_data = &db8500_cpufreq_table,
-		.pdata_size = sizeof(db8500_cpufreq_table),
-	},
 	{
 		.name = "cpuidle-dbx500",
 		.of_compatible = "stericsson,cpuidle-dbx500",
@@ -3067,14 +3057,6 @@ static const struct mfd_cell db8500_prcmu_devs[] = {
 	},
 };

-static void db8500_prcmu_update_cpufreq(void)
-{
-	if (prcmu_has_arm_maxopp()) {
-		db8500_cpufreq_table[3].frequency = 1000000;
-		db8500_cpufreq_table[3].driver_data = ARM_MAX_OPP;
-	}
-}
-
 static int db8500_prcmu_register_ab8500(struct device *parent)
 {
 	struct device_node *np;
@@ -3160,8 +3142,6 @@ static int db8500_prcmu_probe(struct platform_device *pdev)

 	prcmu_config_esram0_deep_sleep(ESRAM0_DEEP_SLEEP_STATE_RET);

-	db8500_prcmu_update_cpufreq();
-
 	err = mfd_add_devices(&pdev->dev, 0, common_prcmu_devs,
 			ARRAY_SIZE(common_prcmu_devs), NULL, 0, db8500_irq_domain);
 	if (err) {
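Note the semantics change in round_armss_rate(): the old code matched exact kHz entries from the cpufreq table, while the new loop works in Hz and breaks at the first entry at or above the request (`rate <= freq`). Worked examples of the new behavior:

/* round_armss_rate(200000000)  -> 200000000  (exact match)
 * round_armss_rate(500000000)  -> 800000000  (first entry >= request)
 * round_armss_rate(2000000000) -> 998400000  (past the end: the last
 *                                             value remains and is returned)
 */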

drivers/platform/x86/intel-hid.c
@@ -203,15 +203,26 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
 	acpi_status status;

 	if (priv->wakeup_mode) {
+		/*
+		 * Needed for wakeup from suspend-to-idle to work on some
+		 * platforms that don't expose the 5-button array, but still
+		 * send notifies with the power button event code to this
+		 * device object on power button actions while suspended.
+		 */
+		if (event == 0xce)
+			goto wakeup;
+
 		/* Wake up on 5-button array events only. */
 		if (event == 0xc0 || !priv->array)
 			return;

-		if (sparse_keymap_entry_from_scancode(priv->array, event))
-			pm_wakeup_hard_event(&device->dev);
-		else
+		if (!sparse_keymap_entry_from_scancode(priv->array, event)) {
 			dev_info(&device->dev, "unknown event 0x%x\n", event);
+			return;
+		}
+
+wakeup:
+		pm_wakeup_hard_event(&device->dev);
 		return;
 	}

drivers/power/avs/rockchip-io-domain.c
@@ -349,6 +349,36 @@ static const struct rockchip_iodomain_soc_data soc_data_rk3399_pmu = {
 	.init = rk3399_pmu_iodomain_init,
 };

+static const struct rockchip_iodomain_soc_data soc_data_rv1108 = {
+	.grf_offset = 0x404,
+	.supply_names = {
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		"vccio1",
+		"vccio2",
+		"vccio3",
+		"vccio5",
+		"vccio6",
+	},
+};
+
+static const struct rockchip_iodomain_soc_data soc_data_rv1108_pmu = {
+	.grf_offset = 0x104,
+	.supply_names = {
+		"pmu",
+	},
+};
+
 static const struct of_device_id rockchip_iodomain_match[] = {
 	{
 		.compatible = "rockchip,rk3188-io-voltage-domain",
@@ -382,6 +412,14 @@ static const struct of_device_id rockchip_iodomain_match[] = {
 		.compatible = "rockchip,rk3399-pmu-io-voltage-domain",
 		.data = (void *)&soc_data_rk3399_pmu
 	},
+	{
+		.compatible = "rockchip,rv1108-io-voltage-domain",
+		.data = (void *)&soc_data_rv1108
+	},
+	{
+		.compatible = "rockchip,rv1108-pmu-io-voltage-domain",
+		.data = (void *)&soc_data_rv1108_pmu
+	},
 	{ /* sentinel */ },
 };
 MODULE_DEVICE_TABLE(of, rockchip_iodomain_match);

drivers/regulator/of_regulator.c
@@ -150,7 +150,7 @@ static void of_get_regulation_constraints(struct device_node *np,
 			suspend_state = &constraints->state_disk;
 			break;
 		case PM_SUSPEND_ON:
-		case PM_SUSPEND_FREEZE:
+		case PM_SUSPEND_TO_IDLE:
 		case PM_SUSPEND_STANDBY:
 		default:
 			continue;

include/linux/cpufreq.h
@@ -127,6 +127,15 @@ struct cpufreq_policy {
 	 */
 	unsigned int		transition_delay_us;

+	/*
+	 * Remote DVFS flag (Not added to the driver structure as we don't want
+	 * to access another structure from scheduler hotpath).
+	 *
+	 * Should be set if CPUs can do DVFS on behalf of other CPUs from
+	 * different cpufreq policies.
+	 */
+	bool			dvfs_possible_from_any_cpu;
+
 	/* Cached frequency lookup from cpufreq_driver_resolve_freq. */
 	unsigned int		cached_target_freq;
 	int			cached_resolved_idx;
@@ -370,6 +379,12 @@ struct cpufreq_driver {
  */
 #define CPUFREQ_NEED_INITIAL_FREQ_CHECK	(1 << 5)

+/*
+ * Set by drivers to disallow use of governors with "dynamic_switching" flag
+ * set.
+ */
+#define CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING	(1 << 6)
+
 int cpufreq_register_driver(struct cpufreq_driver *driver_data);
 int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
@@ -487,14 +502,8 @@ static inline unsigned long cpufreq_scale(unsigned long old, u_int div,
  * polling frequency is 1000 times the transition latency of the processor. The
  * ondemand governor will work on any processor with transition latency <= 10ms,
  * using appropriate sampling rate.
- *
- * For CPUs with transition latency > 10ms (mostly drivers with CPUFREQ_ETERNAL)
- * the ondemand governor will not work. All times here are in us (microseconds).
  */
-#define MIN_SAMPLING_RATE_RATIO		(2)
 #define LATENCY_MULTIPLIER		(1000)
-#define MIN_LATENCY_MULTIPLIER		(20)
-#define TRANSITION_LATENCY_LIMIT	(10 * 1000 * 1000)

 struct cpufreq_governor {
 	char	name[CPUFREQ_NAME_LEN];
@@ -507,9 +516,8 @@ struct cpufreq_governor {
 					 char *buf);
 	int	(*store_setspeed)	(struct cpufreq_policy *policy,
 					 unsigned int freq);
-	unsigned int max_transition_latency; /* HW must be able to switch to
-			next freq faster than this value in nano secs or we
-			will fallback to performance governor */
+	/* For governors which change frequency dynamically by themselves */
+	bool			dynamic_switching;
 	struct list_head	governor_list;
 	struct module		*owner;
 };
@@ -525,6 +533,7 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
 				   unsigned int relation);
 unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
 					 unsigned int target_freq);
+unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy);
 int cpufreq_register_governor(struct cpufreq_governor *governor);
 void cpufreq_unregister_governor(struct cpufreq_governor *governor);
@@ -562,6 +571,17 @@ struct governor_attr {
 			 size_t count);
 };

+static inline bool cpufreq_can_do_remote_dvfs(struct cpufreq_policy *policy)
+{
+	/*
+	 * Allow remote callbacks if:
+	 * - dvfs_possible_from_any_cpu flag is set
+	 * - the local and remote CPUs share cpufreq policy
+	 */
+	return policy->dvfs_possible_from_any_cpu ||
+		cpumask_test_cpu(smp_processor_id(), policy->cpus);
+}
+
 /*********************************************************************
  *                     FREQUENCY TABLE HELPERS                       *
  *********************************************************************/
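With cross-CPU utilization updates, a governor's update hook can now fire on a CPU other than the one whose utilization changed, so each invocation must check whether it is allowed to act on the target policy. A hedged sketch of the resulting guard, following the schedutil pattern ("my_update_hook" and "struct my_policy_ctx" are illustrative):

/* Hedged sketch: a scheduler-driven governor hook using the new guard. */
static void my_update_hook(struct update_util_data *hook, u64 time,
			   unsigned int flags)
{
	struct my_policy_ctx *ctx = container_of(hook, struct my_policy_ctx,
						 update_util);

	/*
	 * Bail out unless this CPU belongs to the target policy or the
	 * policy explicitly allows DVFS to be driven from any CPU.
	 */
	if (!cpufreq_can_do_remote_dvfs(ctx->policy))
		return;

	/* ... evaluate utilization and request a frequency change ... */
}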

include/linux/cpuidle.h
@@ -52,17 +52,18 @@ struct cpuidle_state {
 	int (*enter_dead) (struct cpuidle_device *dev, int index);

 	/*
-	 * CPUs execute ->enter_freeze with the local tick or entire timekeeping
+	 * CPUs execute ->enter_s2idle with the local tick or entire timekeeping
 	 * suspended, so it must not re-enable interrupts at any point (even
 	 * temporarily) or attempt to change states of clock event devices.
 	 */
-	void (*enter_freeze) (struct cpuidle_device *dev,
+	void (*enter_s2idle) (struct cpuidle_device *dev,
 			      struct cpuidle_driver *drv,
 			      int index);
 };

 /* Idle State Flags */
 #define CPUIDLE_FLAG_NONE       (0x00)
+#define CPUIDLE_FLAG_POLLING	(0x01) /* polling state */
 #define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
 #define CPUIDLE_FLAG_TIMER_STOP (0x04)  /* timer is stopped on this state */
@@ -197,14 +198,14 @@ static inline struct cpuidle_device *cpuidle_get_device(void) {return NULL; }
 #ifdef CONFIG_CPU_IDLE
 extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
 				      struct cpuidle_device *dev);
-extern int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+extern int cpuidle_enter_s2idle(struct cpuidle_driver *drv,
 				struct cpuidle_device *dev);
 extern void cpuidle_use_deepest_state(bool enable);
 #else
 static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
 					     struct cpuidle_device *dev)
 {return -ENODEV; }
-static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+static inline int cpuidle_enter_s2idle(struct cpuidle_driver *drv,
 				       struct cpuidle_device *dev)
 {return -ENODEV; }
 static inline void cpuidle_use_deepest_state(bool enable)
@@ -224,6 +225,12 @@ static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev,
 }
 #endif

+#ifdef CONFIG_ARCH_HAS_CPU_RELAX
+void cpuidle_poll_state_init(struct cpuidle_driver *drv);
+#else
+static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
+#endif
+
 /******************************
  * CPUIDLE GOVERNOR INTERFACE *
  ******************************/
@@ -250,12 +257,6 @@ static inline int cpuidle_register_governor(struct cpuidle_governor *gov)
 {return 0;}
 #endif

-#ifdef CONFIG_ARCH_HAS_CPU_RELAX
-#define CPUIDLE_DRIVER_STATE_START	1
-#else
-#define CPUIDLE_DRIVER_STATE_START	0
-#endif
-
 #define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx)	\
 ({								\
 	int __ret;						\

include/linux/devfreq.h
@@ -214,19 +214,6 @@ extern void devm_devfreq_unregister_notifier(struct device *dev,
 extern struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
 						int index);

-/**
- * devfreq_update_stats() - update the last_status pointer in struct devfreq
- * @df:		the devfreq instance whose status needs updating
- *
- * Governors are recommended to use this function along with last_status,
- * which allows other entities to reuse the last_status without affecting
- * the values fetched later by governors.
- */
-static inline int devfreq_update_stats(struct devfreq *df)
-{
-	return df->profile->get_dev_status(df->dev.parent, &df->last_status);
-}
-
 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
 /**
  * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq

include/linux/pm.h
@@ -689,6 +689,8 @@ struct dev_pm_domain {
 extern void device_pm_lock(void);
 extern void dpm_resume_start(pm_message_t state);
 extern void dpm_resume_end(pm_message_t state);
+extern void dpm_noirq_resume_devices(pm_message_t state);
+extern void dpm_noirq_end(void);
 extern void dpm_resume_noirq(pm_message_t state);
 extern void dpm_resume_early(pm_message_t state);
 extern void dpm_resume(pm_message_t state);
@@ -697,6 +699,8 @@ extern void dpm_complete(pm_message_t state);
 extern void device_pm_unlock(void);
 extern int dpm_suspend_end(pm_message_t state);
 extern int dpm_suspend_start(pm_message_t state);
+extern void dpm_noirq_begin(void);
+extern int dpm_noirq_suspend_devices(pm_message_t state);
 extern int dpm_suspend_noirq(pm_message_t state);
 extern int dpm_suspend_late(pm_message_t state);
 extern int dpm_suspend(pm_message_t state);
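Splitting the noirq phase into begin/suspend-devices and resume-devices/end steps lets the suspend-to-idle loop drive device suspend and resume at the noirq level separately from the surrounding setup and teardown. A hedged sketch of how the pieces compose ("my_s2idle_enter" is illustrative, loosely following how the suspend core uses these helpers):

/* Hedged sketch: the two-step noirq sequence around an idle/wakeup loop. */
static int my_s2idle_enter(void)
{
	int error;

	dpm_noirq_begin();	/* arms wake IRQs, suspends device interrupts */
	error = dpm_noirq_suspend_devices(PMSG_SUSPEND);
	if (!error) {
		/* ... idle all CPUs until a wakeup event is detected ... */
		dpm_noirq_resume_devices(PMSG_RESUME);
	}
	dpm_noirq_end();	/* undoes dpm_noirq_begin() */
	return error;
}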

include/linux/pm_domain.h
@@ -43,6 +43,7 @@ struct genpd_power_state {
 	s64 power_on_latency_ns;
 	s64 residency_ns;
 	struct fwnode_handle *fwnode;
+	ktime_t idle_time;
 };

 struct genpd_lock_ops;
@@ -78,6 +79,8 @@ struct generic_pm_domain {
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
 	void *free; /* Free the state that was allocated for default */
+	ktime_t on_time;
+	ktime_t accounting_time;
 	const struct genpd_lock_ops *lock_ops;
 	union {
 		struct mutex mlock;

include/linux/suspend.h
@@ -33,10 +33,10 @@ static inline void pm_restore_console(void)
 typedef int __bitwise suspend_state_t;

 #define PM_SUSPEND_ON		((__force suspend_state_t) 0)
-#define PM_SUSPEND_FREEZE	((__force suspend_state_t) 1)
+#define PM_SUSPEND_TO_IDLE	((__force suspend_state_t) 1)
 #define PM_SUSPEND_STANDBY	((__force suspend_state_t) 2)
 #define PM_SUSPEND_MEM		((__force suspend_state_t) 3)
-#define PM_SUSPEND_MIN		PM_SUSPEND_FREEZE
+#define PM_SUSPEND_MIN		PM_SUSPEND_TO_IDLE
 #define PM_SUSPEND_MAX		((__force suspend_state_t) 4)

 enum suspend_stat_step {
@@ -186,7 +186,7 @@ struct platform_suspend_ops {
 	void (*recover)(void);
 };

-struct platform_freeze_ops {
+struct platform_s2idle_ops {
 	int (*begin)(void);
 	int (*prepare)(void);
 	void (*wake)(void);
@@ -196,6 +196,9 @@ struct platform_s2idle_ops {
 };

 #ifdef CONFIG_SUSPEND
+extern suspend_state_t mem_sleep_current;
+extern suspend_state_t mem_sleep_default;
+
 /**
  * suspend_set_ops - set platform dependent suspend operations
  * @ops: The new suspend operations to set.
@@ -234,22 +237,22 @@ static inline bool pm_resume_via_firmware(void)
 }

 /* Suspend-to-idle state machnine. */
-enum freeze_state {
-	FREEZE_STATE_NONE,      /* Not suspended/suspending. */
-	FREEZE_STATE_ENTER,     /* Enter suspend-to-idle. */
-	FREEZE_STATE_WAKE,      /* Wake up from suspend-to-idle. */
+enum s2idle_states {
+	S2IDLE_STATE_NONE,      /* Not suspended/suspending. */
+	S2IDLE_STATE_ENTER,     /* Enter suspend-to-idle. */
+	S2IDLE_STATE_WAKE,      /* Wake up from suspend-to-idle. */
 };

-extern enum freeze_state __read_mostly suspend_freeze_state;
+extern enum s2idle_states __read_mostly s2idle_state;

-static inline bool idle_should_freeze(void)
+static inline bool idle_should_enter_s2idle(void)
 {
-	return unlikely(suspend_freeze_state == FREEZE_STATE_ENTER);
+	return unlikely(s2idle_state == S2IDLE_STATE_ENTER);
 }

 extern void __init pm_states_init(void);
-extern void freeze_set_ops(const struct platform_freeze_ops *ops);
-extern void freeze_wake(void);
+extern void s2idle_set_ops(const struct platform_s2idle_ops *ops);
+extern void s2idle_wake(void);

 /**
  * arch_suspend_disable_irqs - disable IRQs for suspend
@@ -281,10 +284,10 @@ static inline bool pm_resume_via_firmware(void) { return false; }
 static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
 static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }

-static inline bool idle_should_freeze(void) { return false; }
+static inline bool idle_should_enter_s2idle(void) { return false; }
 static inline void __init pm_states_init(void) {}
-static inline void freeze_set_ops(const struct platform_freeze_ops *ops) {}
-static inline void freeze_wake(void) {}
+static inline void s2idle_set_ops(const struct platform_s2idle_ops *ops) {}
+static inline void s2idle_wake(void) {}
 #endif /* !CONFIG_SUSPEND */

 /* struct pbe is used for creating lists of pages that should be restored
@@ -427,6 +430,7 @@ extern int unregister_pm_notifier(struct notifier_block *nb);
 /* drivers/base/power/wakeup.c */
 extern bool events_check_enabled;
 extern unsigned int pm_wakeup_irq;
+extern suspend_state_t pm_suspend_target_state;

 extern bool pm_wakeup_pending(void);
 extern void pm_system_wakeup(void);
@@ -491,10 +495,24 @@ static inline void unlock_system_sleep(void) {}

 #ifdef CONFIG_PM_SLEEP_DEBUG
 extern bool pm_print_times_enabled;
+extern bool pm_debug_messages_on;
+extern __printf(2, 3) void __pm_pr_dbg(bool defer, const char *fmt, ...);
 #else
 #define pm_print_times_enabled	(false)
+#define pm_debug_messages_on	(false)
+
+#include <linux/printk.h>
+
+#define __pm_pr_dbg(defer, fmt, ...) \
+	no_printk(KERN_DEBUG fmt, ##__VA_ARGS__)
 #endif

+#define pm_pr_dbg(fmt, ...) \
+	__pm_pr_dbg(false, fmt, ##__VA_ARGS__)
+
+#define pm_deferred_pr_dbg(fmt, ...) \
+	__pm_pr_dbg(true, fmt, ##__VA_ARGS__)
+
 #ifdef CONFIG_PM_AUTOSLEEP

 /* kernel/power/autosleep.c */
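Unlike pr_debug(), which is gated at build time or by dynamic debug, pm_pr_dbg() is gated at runtime by the new pm_debug_messages_on switch exposed through sysfs, so suspend diagnostics can be turned on without rebuilding. A hedged usage sketch ("my_suspend_step" is illustrative):

/* Hedged sketch: runtime-gated PM debug messages. */
static int my_suspend_step(void)
{
	pm_pr_dbg("entering my suspend step\n");	/* only printed when
							 * PM debug messages
							 * are enabled */
	pm_deferred_pr_dbg("same, but printed via deferred printk\n");
	return 0;
}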

kernel/cpu_pm.c
@@ -22,15 +22,21 @@
 #include <linux/spinlock.h>
 #include <linux/syscore_ops.h>

-static DEFINE_RWLOCK(cpu_pm_notifier_lock);
-static RAW_NOTIFIER_HEAD(cpu_pm_notifier_chain);
+static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);

 static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
 {
 	int ret;

-	ret = __raw_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
+	/*
+	 * __atomic_notifier_call_chain has a RCU read critical section, which
+	 * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
+	 * RCU know this.
+	 */
+	rcu_irq_enter_irqson();
+	ret = __atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
 		nr_to_call, nr_calls);
+	rcu_irq_exit_irqson();

 	return notifier_to_errno(ret);
 }
@@ -47,14 +53,7 @@ static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
  */
 int cpu_pm_register_notifier(struct notifier_block *nb)
 {
-	unsigned long flags;
-	int ret;
-
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
-	ret = raw_notifier_chain_register(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
-
-	return ret;
+	return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
@@ -69,14 +68,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
  */
 int cpu_pm_unregister_notifier(struct notifier_block *nb)
 {
-	unsigned long flags;
-	int ret;
-
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
-	ret = raw_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
-
-	return ret;
+	return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
@@ -100,7 +92,6 @@ int cpu_pm_enter(void)
 	int nr_calls;
 	int ret = 0;

-	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
@@ -108,7 +99,6 @@ int cpu_pm_enter(void)
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);

 	return ret;
 }
@@ -128,13 +118,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
 */
 int cpu_pm_exit(void)
 {
-	int ret;
-
-	read_lock(&cpu_pm_notifier_lock);
-	ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
-
-	return ret;
+	return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_exit);
@@ -159,7 +143,6 @@ int cpu_cluster_pm_enter(void)
 	int nr_calls;
 	int ret = 0;

-	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
@@ -167,7 +150,6 @@ int cpu_cluster_pm_enter(void)
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);

 	return ret;
 }
@@ -190,13 +172,7 @@ EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter);
 */
 int cpu_cluster_pm_exit(void)
 {
-	int ret;
-
-	read_lock(&cpu_pm_notifier_lock);
-	ret = cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
-
-	return ret;
+	return cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
 }
 EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit);
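The conversion drops the rwlock entirely: atomic notifier chains synchronize with RCU, so the enter/exit paths no longer take a lock in the idle hot path, and registration becomes safe from any context. Clients are unaffected; a hedged sketch of a typical cpu_pm client (the save/restore callbacks are illustrative):

/* Hedged sketch: a per-CPU context save/restore notifier. */
#include <linux/cpu_pm.h>
#include <linux/notifier.h>

static int my_cpu_pm_notify(struct notifier_block *nb,
			    unsigned long action, void *data)
{
	switch (action) {
	case CPU_PM_ENTER:
		/* save per-CPU hardware context before a deep idle state */
		break;
	case CPU_PM_EXIT:
	case CPU_PM_ENTER_FAILED:
		/* restore context on the way back out */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block my_cpu_pm_nb = {
	.notifier_call = my_cpu_pm_notify,
};

static int __init my_init(void)
{
	return cpu_pm_register_notifier(&my_cpu_pm_nb);
}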

kernel/power/hibernate.c
@@ -651,7 +651,7 @@ static int load_image_and_restore(void)
 	int error;
 	unsigned int flags;

-	pr_debug("Loading hibernation image.\n");
+	pm_pr_dbg("Loading hibernation image.\n");

 	lock_device_hotplug();
 	error = create_basic_memory_bitmaps();
@@ -681,7 +681,7 @@ int hibernate(void)
 	bool snapshot_test = false;

 	if (!hibernation_available()) {
-		pr_debug("Hibernation not available.\n");
+		pm_pr_dbg("Hibernation not available.\n");
 		return -EPERM;
 	}
@@ -692,6 +692,7 @@ int hibernate(void)
 		goto Unlock;
 	}

+	pr_info("hibernation entry\n");
 	pm_prepare_console();
 	error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
 	if (error) {
@@ -727,7 +728,7 @@ int hibernate(void)
 		else
 			flags |= SF_CRC32_MODE;

-		pr_debug("Writing image.\n");
+		pm_pr_dbg("Writing image.\n");
 		error = swsusp_write(flags);
 		swsusp_free();
 		if (!error) {
@@ -739,7 +740,7 @@ int hibernate(void)
 		in_suspend = 0;
 		pm_restore_gfp_mask();
 	} else {
-		pr_debug("Image restored successfully.\n");
+		pm_pr_dbg("Image restored successfully.\n");
 	}

 Free_bitmaps:
@@ -747,7 +748,7 @@ int hibernate(void)
 Thaw:
 	unlock_device_hotplug();
 	if (snapshot_test) {
-		pr_debug("Checking hibernation image\n");
+		pm_pr_dbg("Checking hibernation image\n");
 		error = swsusp_check();
 		if (!error)
 			error = load_image_and_restore();
@@ -762,6 +763,8 @@ int hibernate(void)
 	atomic_inc(&snapshot_device_available);
 Unlock:
 	unlock_system_sleep();
+	pr_info("hibernation exit\n");
+
 	return error;
 }
@@ -811,7 +814,7 @@ static int software_resume(void)
 		goto Unlock;
 	}

-	pr_debug("Checking hibernation image partition %s\n", resume_file);
+	pm_pr_dbg("Checking hibernation image partition %s\n", resume_file);

 	if (resume_delay) {
 		pr_info("Waiting %dsec before reading resume device ...\n",
@@ -853,10 +856,10 @@ static int software_resume(void)
 	}

 Check_image:
-	pr_debug("Hibernation image partition %d:%d present\n",
+	pm_pr_dbg("Hibernation image partition %d:%d present\n",
 		MAJOR(swsusp_resume_device), MINOR(swsusp_resume_device));

-	pr_debug("Looking for hibernation image.\n");
+	pm_pr_dbg("Looking for hibernation image.\n");
 	error = swsusp_check();
 	if (error)
 		goto Unlock;
@@ -868,6 +871,7 @@ static int software_resume(void)
 		goto Unlock;
 	}

+	pr_info("resume from hibernation\n");
 	pm_prepare_console();
 	error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
 	if (error) {
@@ -875,7 +879,7 @@ static int software_resume(void)
 		goto Close_Finish;
 	}

-	pr_debug("Preparing processes for restore.\n");
+	pm_pr_dbg("Preparing processes for restore.\n");
 	error = freeze_processes();
 	if (error)
 		goto Close_Finish;
@@ -884,11 +888,12 @@ static int software_resume(void)
 Finish:
 	__pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
 	pm_restore_console();
+	pr_info("resume from hibernation failed (%d)\n", error);
 	atomic_inc(&snapshot_device_available);
 	/* For success case, the suspend path will release the lock */
 Unlock:
 	mutex_unlock(&pm_mutex);
-	pr_debug("Hibernation image not present or could not be loaded.\n");
+	pm_pr_dbg("Hibernation image not present or could not be loaded.\n");
 	return error;

 Close_Finish:
 	swsusp_close(FMODE_READ);
@@ -1012,8 +1017,8 @@ static ssize_t disk_store(struct kobject *kobj, struct kobj_attribute *attr,
 		error = -EINVAL;

 	if (!error)
-		pr_debug("Hibernation mode set to '%s'\n",
-			 hibernation_modes[mode]);
+		pm_pr_dbg("Hibernation mode set to '%s'\n",
+			  hibernation_modes[mode]);
 	unlock_system_sleep();
 	return error ? error : n;
 }

kernel/power/power.h
@@ -192,7 +192,6 @@ extern void swsusp_show_speed(ktime_t, ktime_t, unsigned int, char *);
 extern const char * const pm_labels[];
 extern const char *pm_states[];
 extern const char *mem_sleep_states[];
-extern suspend_state_t mem_sleep_current;

 extern int suspend_devices_and_enter(suspend_state_t state);
 #else /* !CONFIG_SUSPEND */
@@ -245,7 +244,11 @@ enum {
 #define TEST_FIRST	TEST_NONE
 #define TEST_MAX	(__TEST_AFTER_LAST - 1)

+#ifdef CONFIG_PM_SLEEP_DEBUG
 extern int pm_test_level;
+#else
+#define pm_test_level	(TEST_NONE)
+#endif

 #ifdef CONFIG_SUSPEND_FREEZER
 static inline int suspend_freeze_processes(void)