Commit 77dcfe2b authored by Linus Torvalds

Merge tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These include a rework of the main suspend-to-idle code flow (related
  to the handling of spurious wakeups), a switch over of several users
  of cpufreq notifiers to QoS-based limits, a new devfreq driver for
  Tegra20, a new cpuidle driver and governor for virtualized guests, an
  extension of the wakeup sources framework to expose wakeup sources as
  device objects in sysfs, and more.

  Specifics:

   - Rework the main suspend-to-idle control flow to avoid repeating
     "noirq" device resume and suspend operations in case of spurious
     wakeups from the ACPI EC and decouple the ACPI EC wakeups support
     from the LPS0 _DSM support (Rafael Wysocki).

   - Extend the wakeup sources framework to expose wakeup sources as
     device objects in sysfs (Tri Vo, Stephen Boyd).

   - Expose system suspend statistics in sysfs (Kalesh Singh).

   - Introduce a new haltpoll cpuidle driver and a new matching governor
     for virtualized guests wanting to do guest-side polling in the idle
     loop (Marcelo Tosatti, Joao Martins, Wanpeng Li, Stephen Rothwell).

   - Fix the menu and teo cpuidle governors to allow the scheduler tick
     to be stopped if PM QoS is used to limit the CPU idle state exit
     latency in some cases (Rafael Wysocki).

   - Increase the resolution of the play_idle() argument to microseconds
     for more fine-grained injection of CPU idle cycles (Daniel
     Lezcano).

   - Switch over some users of cpufreq notifiers to the new QoS-based
     frequency limits and drop the CPUFREQ_ADJUST and CPUFREQ_NOTIFY
     policy notifier events (Viresh Kumar).

   - Add new cpufreq driver based on nvmem for sun50i (Yangtao Li).

   - Add support for MT8183 and MT8516 to the mediatek cpufreq driver
     (Andrew-sh.Cheng, Fabien Parent).

   - Add i.MX8MN support to the imx-cpufreq-dt cpufreq driver (Anson
     Huang).

   - Add qcs404 to cpufreq-dt-platdev blacklist (Jorge Ramirez-Ortiz).

   - Update the qcom cpufreq driver (among other things, to make it
     easier to extend and to use kryo cpufreq for other nvmem-based
     SoCs) and add qcs404 support to it (Niklas Cassel, Douglas
     RAILLARD, Sibi Sankar, Sricharan R).

   - Fix assorted issues and make assorted minor improvements in the
     cpufreq code (Colin Ian King, Douglas RAILLARD, Florian Fainelli,
     Gustavo Silva, Hariprasad Kelam).

   - Add new devfreq driver for NVidia Tegra20 (Dmitry Osipenko, Arnd
     Bergmann).

   - Add new Exynos PPMU events to devfreq events and extend that
     mechanism (Lukasz Luba).

   - Fix and clean up the exynos-bus devfreq driver (Kamil Konieczny).

   - Improve devfreq documentation and governor code, fix spelling typos
     in devfreq (Ezequiel Garcia, Krzysztof Kozlowski, Leonard Crestez,
     MyungJoo Ham, Gaël PORTAY).

   - Add regulators enable and disable to the OPP (operating performance
     points) framework (Kamil Konieczny).

   - Update the OPP framework to support multiple opp-suspend properties
     (Anson Huang).

   - Fix assorted issues and make assorted minor improvements in the OPP
     code (Niklas Cassel, Viresh Kumar, Yue Hu).

   - Clean up the generic power domains (genpd) framework (Ulf Hansson).

   - Clean up assorted pieces of power management code and documentation
     (Akinobu Mita, Amit Kucheria, Chuhong Yuan).

   - Update the pm-graph tool to version 5.5 including multiple fixes
     and improvements (Todd Brandt).

   - Update the cpupower utility (Benjamin Weis, Geert Uytterhoeven,
     Sébastien Szymanski)"

* tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (126 commits)
  cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available
  cpuidle-haltpoll: do not set an owner to allow modunload
  cpuidle-haltpoll: return -ENODEV on modinit failure
  cpuidle-haltpoll: set haltpoll as preferred governor
  cpuidle: allow governor switch on cpuidle_register_driver()
  PM: runtime: Documentation: add runtime_status ABI document
  pm-graph: make setVal unbuffered again for python2 and python3
  powercap: idle_inject: Use higher resolution for idle injection
  cpuidle: play_idle: Increase the resolution to usec
  cpuidle-haltpoll: vcpu hotplug support
  cpufreq: Add qcs404 to cpufreq-dt-platdev blacklist
  cpufreq: qcom: Add support for qcs404 on nvmem driver
  cpufreq: qcom: Refactor the driver to make it easier to extend
  cpufreq: qcom: Re-organise kryo cpufreq to use it for other nvmem based qcom socs
  dt-bindings: opp: Add qcom-opp bindings with properties needed for CPR
  dt-bindings: opp: qcom-nvmem: Support pstates provided by a power domain
  Documentation: cpufreq: Update policy notifier documentation
  cpufreq: Remove CPUFREQ_ADJUST and CPUFREQ_NOTIFY policy notifier events
  PM / Domains: Verify PM domain type in dev_pm_genpd_set_performance_state()
  PM / Domains: Simplify genpd_lookup_dev()
  ...
parents 04cbfba6 fc6763a2
What: /sys/class/wakeup/
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
The /sys/class/wakeup/ directory contains pointers to all
wakeup sources in the kernel at that moment in time.
What: /sys/class/wakeup/.../name
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the name of the wakeup source.
What: /sys/class/wakeup/.../active_count
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the number of times the wakeup source was
activated.
What: /sys/class/wakeup/.../event_count
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the number of signaled wakeup events
associated with the wakeup source.
What: /sys/class/wakeup/.../wakeup_count
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the number of times the wakeup source might
abort suspend.
What: /sys/class/wakeup/.../expire_count
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the number of times the wakeup source's
timeout has expired.
What: /sys/class/wakeup/.../active_time_ms
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the amount of time the wakeup source has
been continuously active, in milliseconds. If the wakeup
source is not active, this file contains '0'.
What: /sys/class/wakeup/.../total_time_ms
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the total amount of time this wakeup source
has been active, in milliseconds.
What: /sys/class/wakeup/.../max_time_ms
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the maximum amount of time this wakeup
source has been continuously active, in milliseconds.
What: /sys/class/wakeup/.../last_change_ms
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
This file contains the monotonic clock time when the wakeup
source was last touched, in milliseconds.
What: /sys/class/wakeup/.../prevent_suspend_time_ms
Date: June 2019
Contact: Tri Vo <trong@android.com>
Description:
The file contains the total amount of time this wakeup source
has been preventing autosleep, in milliseconds.
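
As a rough illustration of where these counters come from, here is a
minimal, hypothetical driver sketch (the my_* names are placeholders)
using the reworked registration API from this merge; the two-argument
wakeup_source_register() visible in the drivers/acpi/device_pm.c hunk
further below ties the wakeup source to a device so that it shows up
under /sys/class/wakeup/:

#include <linux/device.h>
#include <linux/pm_wakeup.h>

static struct wakeup_source *my_ws;

static int my_driver_wakeup_init(struct device *dev)
{
	/* Registering against a device exposes the source in sysfs. */
	my_ws = wakeup_source_register(dev, dev_name(dev));
	return my_ws ? 0 : -ENOMEM;
}

static void my_driver_handle_hw_event(void)
{
	/* Reports a wakeup event; bumps event_count and active_count. */
	pm_wakeup_ws_event(my_ws, 0, false);
}

static void my_driver_wakeup_exit(void)
{
	wakeup_source_unregister(my_ws);
}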
...@@ -260,3 +260,12 @@ Description:
This attribute has no effect on system-wide suspend/resume and
hibernation.
What: /sys/devices/.../power/runtime_status
Date: April 2010
Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power/runtime_status attribute contains
the current runtime PM status of the device, which may be
"suspended", "suspending", "resuming", "active", "error" (fatal
error), or "unsupported" (runtime PM is disabled).
...@@ -301,3 +301,109 @@ Description:
Using this sysfs file will override any values that were
set using the kernel command line for disk offset.
What: /sys/power/suspend_stats
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats directory contains suspend related
statistics.
What: /sys/power/suspend_stats/success
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/success file contains the number
of times entering system sleep state succeeded.
What: /sys/power/suspend_stats/fail
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/fail file contains the number
of times entering system sleep state failed.
What: /sys/power/suspend_stats/failed_freeze
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_freeze file contains the
number of times freezing processes failed.
What: /sys/power/suspend_stats/failed_prepare
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_prepare file contains the
number of times preparing all non-sysdev devices for
a system PM transition failed.
What: /sys/power/suspend_stats/failed_resume
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_resume file contains the
number of times executing "resume" callbacks of
non-sysdev devices failed.
What: /sys/power/suspend_stats/failed_resume_early
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_resume_early file contains
the number of times executing "early resume" callbacks
of devices failed.
What: /sys/power/suspend_stats/failed_resume_noirq
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_resume_noirq file contains
the number of times executing "noirq resume" callbacks
of devices failed.
What: /sys/power/suspend_stats/failed_suspend
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_suspend file contains
the number of times executing "suspend" callbacks
of all non-sysdev devices failed.
What: /sys/power/suspend_stats/failed_suspend_late
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_suspend_late file contains
the number of times executing "late suspend" callbacks
of all devices failed.
What: /sys/power/suspend_stats/failed_suspend_noirq
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/failed_suspend_noirq file contains
the number of times executing "noirq suspend" callbacks
of all devices failed.
What: /sys/power/suspend_stats/last_failed_dev
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/last_failed_dev file contains
the last device for which a suspend/resume callback failed.
What: /sys/power/suspend_stats/last_failed_errno
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/last_failed_errno file contains
the errno of the last failed attempt at entering
system sleep state.
What: /sys/power/suspend_stats/last_failed_step
Date: July 2019
Contact: Kalesh Singh <kaleshsingh96@gmail.com>
Description:
The /sys/power/suspend_stats/last_failed_step file contains
the last failed step in the suspend/resume path.
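
A small userspace sketch (not part of the merge) that reads the numeric
counters above; the two string attributes, last_failed_dev and
last_failed_step, would need to be read as text instead:

#include <stdio.h>

static long read_stat(const char *name)
{
	char path[128];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/power/suspend_stats/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	printf("successful suspends: %ld\n", read_stat("success"));
	printf("failed suspends:     %ld\n", read_stat("fail"));
	return 0;
}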
...@@ -57,19 +57,11 @@ transition notifiers.
2.1 CPUFreq policy notifiers
----------------------------
These are notified when a new policy is created or removed.

The phase is specified in the second argument to the notifier. The phase is
CPUFREQ_CREATE_POLICY when the policy is first created and it is
CPUFREQ_REMOVE_POLICY when the policy is removed.

The third argument, a void *pointer, points to a struct cpufreq_policy
consisting of several values, including min, max (the lower and upper
...
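
The ACPI processor driver rework later in this merge uses exactly this
pattern; a stripped-down sketch of such a notifier (the my_* hooks are
placeholders for driver-specific setup and teardown) might look like:

#include <linux/cpufreq.h>
#include <linux/notifier.h>

static void my_policy_setup(unsigned int cpu) { /* per-policy init */ }
static void my_policy_teardown(unsigned int cpu) { /* per-policy cleanup */ }

static int my_policy_notifier(struct notifier_block *nb,
			      unsigned long event, void *data)
{
	struct cpufreq_policy *policy = data;

	if (event == CPUFREQ_CREATE_POLICY)
		my_policy_setup(policy->cpu);
	else if (event == CPUFREQ_REMOVE_POLICY)
		my_policy_teardown(policy->cpu);

	return 0;
}

static struct notifier_block my_policy_nb = {
	.notifier_call = my_policy_notifier,
};

/* Registered, e.g. from module init, with:
 * cpufreq_register_notifier(&my_policy_nb, CPUFREQ_POLICY_NOTIFIER);
 */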
...@@ -140,8 +140,8 @@ Optional properties:
frequency for a short duration of time limited by the device's power, current
and thermal limits.

- opp-suspend: Marks the OPP to be used during device suspend. If multiple OPPs
  in the table have this, the OPP with highest opp-hz will be used.

- opp-supported-hw: This enables us to select only a subset of OPPs from the
  larger OPP table, based on what version of the hardware we are running on. We
...
Qualcomm Technologies, Inc. NVMEM CPUFreq and OPP bindings
==========================================================

In certain Qualcomm Technologies, Inc. SoCs like apq8096 and msm8996,
the CPU frequencies subset and voltage value of each OPP varies based on
the silicon variant in use.

Qualcomm Technologies, Inc. Process Voltage Scaling Tables
defines the voltage and frequency value based on the msm-id in SMEM
and speedbin blown in the efuse combination.
The qcom-cpufreq-nvmem driver reads the msm-id and efuse value from the SoC
to provide the OPP framework with required information (existing HW bitmap).
This is used to determine the voltage and frequency value for each OPP of
the operating-points-v2 table when it is parsed by the OPP framework.

Required properties:
--------------------
In 'cpu' nodes:
- operating-points-v2: Phandle to the operating-points-v2 table to use.

In 'operating-points-v2' table:
- compatible: Should be
	- 'operating-points-v2-kryo-cpu' for apq8096 and msm8996.
Optional properties:
--------------------
In 'cpu' nodes:
- power-domains: A phandle pointing to the PM domain specifier which provides
the performance states available for active state management.
Please refer to the power-domains bindings
Documentation/devicetree/bindings/power/power_domain.txt
and also examples below.
- power-domain-names: Should be
- 'cpr' for qcs404.
In 'operating-points-v2' table:
- nvmem-cells: A phandle pointing to a nvmem-cells node representing the
	       efuse registers that has information about the
	       speedbin that is used to select the right frequency/voltage
...@@ -678,3 +691,105 @@ soc {
};
};
};
Example 2:
---------
cpus {
#address-cells = <1>;
#size-cells = <0>;
CPU0: cpu@100 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x100>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
CPU1: cpu@101 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x101>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
CPU2: cpu@102 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x102>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
CPU3: cpu@103 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x103>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
};
cpu_opp_table: cpu-opp-table {
compatible = "operating-points-v2-kryo-cpu";
opp-shared;
opp-1094400000 {
opp-hz = /bits/ 64 <1094400000>;
required-opps = <&cpr_opp1>;
};
opp-1248000000 {
opp-hz = /bits/ 64 <1248000000>;
required-opps = <&cpr_opp2>;
};
opp-1401600000 {
opp-hz = /bits/ 64 <1401600000>;
required-opps = <&cpr_opp3>;
};
};
cpr_opp_table: cpr-opp-table {
compatible = "operating-points-v2-qcom-level";
cpr_opp1: opp1 {
opp-level = <1>;
qcom,opp-fuse-level = <1>;
};
cpr_opp2: opp2 {
opp-level = <2>;
qcom,opp-fuse-level = <2>;
};
cpr_opp3: opp3 {
opp-level = <3>;
qcom,opp-fuse-level = <3>;
};
};
....
soc {
....
cpr: power-controller@b018000 {
compatible = "qcom,qcs404-cpr", "qcom,cpr";
reg = <0x0b018000 0x1000>;
....
vdd-apc-supply = <&pms405_s3>;
#power-domain-cells = <0>;
operating-points-v2 = <&cpr_opp_table>;
....
};
};
Qualcomm OPP bindings to describe OPP nodes
The bindings are based on top of the operating-points-v2 bindings
described in Documentation/devicetree/bindings/opp/opp.txt
Additional properties are described below.
* OPP Table Node
Required properties:
- compatible: Allow OPPs to express their compatibility. It should be:
"operating-points-v2-qcom-level"
* OPP Node
Required properties:
- qcom,opp-fuse-level: A positive value representing the fuse corner/level
  associated with this OPP node. Sometimes several corners/levels share
  a certain fuse corner/level. A fuse corner/level contains e.g. ref uV,
  min uV, and max uV.
Allwinner Technologies, Inc. NVMEM CPUFreq and OPP bindings
===================================
For some SoCs, the CPU frequency subset and voltage value of each OPP
varies based on the silicon variant in use. Allwinner Process Voltage
Scaling Tables defines the voltage and frequency value based on the
speedbin blown in the efuse combination. The sun50i-cpufreq-nvmem driver
reads the efuse value from the SoC to provide the OPP framework with
required information.
Required properties:
--------------------
In 'cpus' nodes:
- operating-points-v2: Phandle to the operating-points-v2 table to use.
In 'operating-points-v2' table:
- compatible: Should be
- 'allwinner,sun50i-h6-operating-points'.
- nvmem-cells: A phandle pointing to a nvmem-cells node representing the
efuse registers that has information about the speedbin
that is used to select the right frequency/voltage value
pair. Please refer to the nvmem-cells bindings in
Documentation/devicetree/bindings/nvmem/nvmem.txt and
also the examples below.
In every OPP node:
- opp-microvolt-<name>: Voltage in micro Volts.
At runtime, the platform can pick a <name> and
matching opp-microvolt-<name> property.
[See: opp.txt]
HW:		<name>:
sun50i-h6	speed0 speed1 speed2
Example 1:
---------
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu0: cpu@0 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <0>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
cpu1: cpu@1 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <1>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
cpu2: cpu@2 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <2>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
cpu3: cpu@3 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <3>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
};
cpu_opp_table: opp_table {
compatible = "allwinner,sun50i-h6-operating-points";
nvmem-cells = <&speedbin_efuse>;
opp-shared;
opp@480000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <480000000>;
opp-microvolt-speed0 = <880000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@720000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <720000000>;
opp-microvolt-speed0 = <880000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@816000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <816000000>;
opp-microvolt-speed0 = <880000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@888000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <888000000>;
opp-microvolt-speed0 = <940000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@1080000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <1080000000>;
opp-microvolt-speed0 = <1060000>;
opp-microvolt-speed1 = <880000>;
opp-microvolt-speed2 = <840000>;
};
opp@1320000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <1320000000>;
opp-microvolt-speed0 = <1160000>;
opp-microvolt-speed1 = <940000>;
opp-microvolt-speed2 = <900000>;
};
opp@1488000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <1488000000>;
opp-microvolt-speed0 = <1160000>;
opp-microvolt-speed1 = <1000000>;
opp-microvolt-speed2 = <960000>;
};
};
....
soc {
....
sid: sid@3006000 {
compatible = "allwinner,sun50i-h6-sid";
reg = <0x03006000 0x400>;
#address-cells = <1>;
#size-cells = <1>;
....
speedbin_efuse: speed@1c {
reg = <0x1c 4>;
};
};
};
...@@ -46,7 +46,7 @@ We can represent these as three OPPs as the following {Hz, uV} tuples:
----------------------------------------

OPP library provides a set of helper functions to organize and query the OPP
information. The library is located in drivers/opp/ directory and the header
is located in include/linux/pm_opp.h. OPP library can be enabled by enabling
CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on
CONFIG_PM as certain SoCs such as Texas Instrument's OMAP framework allows to
...
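
As an illustrative sketch of the query helpers (not taken from the
library sources), a consumer device with a registered OPP table could
look up the OPP at or above a target frequency and read its voltage
like this:

#include <linux/err.h>
#include <linux/pm_opp.h>

static int my_get_opp_voltage(struct device *dev, unsigned long target_freq,
			      unsigned long *volt_uv)
{
	struct dev_pm_opp *opp;

	/* Finds the lowest OPP >= target_freq; updates target_freq. */
	opp = dev_pm_opp_find_freq_ceil(dev, &target_freq);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	*volt_uv = dev_pm_opp_get_voltage(opp);
	dev_pm_opp_put(opp);	/* drop the reference taken by the lookup */
	return 0;
}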
...@@ -7,8 +7,7 @@ performance expectations by drivers, subsystems and user space applications on
one of the parameters.

Two different PM QoS frameworks are available:
1. PM QoS classes for cpu_dma_latency
2. the per-device PM QoS framework provides the API to manage the per-device latency
constraints and PM QoS flags.
...@@ -79,7 +78,7 @@ cleanup of a process, the interface requires the process to register its
parameter requests in the following way:

To register the default pm_qos target for the specific parameter, the process
must open /dev/cpu_dma_latency

As long as the device node is held open that process has a registered
request on the parameter.
...
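
A minimal userspace sketch of that registration pattern, assuming the
usual /dev/cpu_dma_latency semantics (a signed 32-bit latency value in
microseconds, with the request active for as long as the file stays
open):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int32_t latency_us = 10;	/* requested max CPU/DMA latency */
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0) {
		perror("open /dev/cpu_dma_latency");
		return 1;
	}
	if (write(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us)) {
		perror("write");
		close(fd);
		return 1;
	}
	pause();	/* hold the request until the process is signaled */
	close(fd);	/* closing the fd removes the request */
	return 0;
}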
Guest halt polling
==================
The cpuidle_haltpoll driver, with the haltpoll governor, allows
the guest vcpus to poll for a specified amount of time before
halting.
This provides the following benefits over host side polling:
1) The POLL flag is set while polling is performed, which allows
a remote vCPU to avoid sending an IPI (and the associated
cost of handling the IPI) when performing a wakeup.
2) The VM-exit cost can be avoided.
The downside of guest side polling is that polling is performed
even with other runnable tasks in the host.
The basic logic is as follows: A global value, guest_halt_poll_ns,
is configured by the user, indicating the maximum amount of
time polling is allowed. This value is fixed.
Each vcpu has an adjustable guest_halt_poll_ns
("per-cpu guest_halt_poll_ns"), which is adjusted by the algorithm
in response to events (explained below).
Module Parameters
=================
The haltpoll governor has 5 tunable module parameters:
1) guest_halt_poll_ns:
Maximum amount of time, in nanoseconds, that polling is
performed before halting.
Default: 200000
2) guest_halt_poll_shrink:
Division factor used to shrink per-cpu guest_halt_poll_ns when
a wakeup event occurs after the global guest_halt_poll_ns.
Default: 2
3) guest_halt_poll_grow:
Multiplication factor used to grow per-cpu guest_halt_poll_ns
when an event occurs after the per-cpu guest_halt_poll_ns
but before the global guest_halt_poll_ns.
Default: 2
4) guest_halt_poll_grow_start:
The per-cpu guest_halt_poll_ns eventually reaches zero
in case of an idle system. This value sets the initial
per-cpu guest_halt_poll_ns when growing. This can
be increased from 10000, to avoid misses during the initial
growth stage:
10k, 20k, 40k, ... (example assumes guest_halt_poll_grow=2).
Default: 50000
5) guest_halt_poll_allow_shrink:
Bool parameter which allows shrinking. Set to N
to avoid it (per-cpu guest_halt_poll_ns will remain
high once it reaches the global guest_halt_poll_ns value).
Default: Y
The module parameters can be set from the sysfs files in:
/sys/module/haltpoll/parameters/
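
For illustration only, the grow/shrink behaviour these parameters
describe can be sketched as follows; this mirrors the documented
algorithm, not the governor's actual source, and the treatment of a
zero shrink factor is an assumption:

#include <linux/types.h>

static unsigned int guest_halt_poll_ns = 200000;
static unsigned int guest_halt_poll_shrink = 2;
static unsigned int guest_halt_poll_grow = 2;
static unsigned int guest_halt_poll_grow_start = 50000;
static bool guest_halt_poll_allow_shrink = true;

static unsigned int adjust_poll_ns(unsigned int poll_ns,
				   unsigned int wakeup_delay_ns)
{
	if (wakeup_delay_ns > guest_halt_poll_ns) {
		/* Woke up after the global limit: polling did not help. */
		if (guest_halt_poll_allow_shrink) {
			if (guest_halt_poll_shrink)
				poll_ns /= guest_halt_poll_shrink;
			else
				poll_ns = 0;	/* assumed reset-to-zero */
		}
	} else if (wakeup_delay_ns > poll_ns) {
		/* Event arrived after the per-cpu window but within the
		 * global limit: grow the window. */
		if (!poll_ns)
			poll_ns = guest_halt_poll_grow_start;
		else
			poll_ns *= guest_halt_poll_grow;
		if (poll_ns > guest_halt_poll_ns)
			poll_ns = guest_halt_poll_ns;
	}
	return poll_ns;
}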
Further Notes
=============
- Care should be taken when setting the guest_halt_poll_ns parameter as a
large value has the potential to drive the cpu usage to 100% on a machine which
would be almost entirely idle otherwise.
...@@ -668,6 +668,13 @@ L: linux-media@vger.kernel.org
S: Maintained
F: drivers/staging/media/allegro-dvt/
ALLWINNER CPUFREQ DRIVER
M: Yangtao Li <tiny.windzz@gmail.com>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt
F: drivers/cpufreq/sun50i-cpufreq-nvmem.c
ALLWINNER SECURITY SYSTEM
M: Corentin Labbe <clabbe.montjoie@gmail.com>
L: linux-crypto@vger.kernel.org
...@@ -13311,8 +13318,8 @@ QUALCOMM CPUFREQ DRIVER MSM8996/APQ8096
M: Ilia Lin <ilia.lin@kernel.org>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
F: drivers/cpufreq/qcom-cpufreq-nvmem.c

QUALCOMM EMAC GIGABIT ETHERNET DRIVER
M: Timur Tabi <timur@kernel.org>
...
...@@ -794,6 +794,7 @@ config KVM_GUEST
	bool "KVM Guest support (including kvmclock)"
	depends on PARAVIRT
	select PARAVIRT_CLOCK
	select ARCH_CPUIDLE_HALTPOLL
	default y
	---help---
	  This option enables various optimizations for running under the KVM
...@@ -802,6 +803,12 @@ config KVM_GUEST
	  underlying device model, the host provides the guest with
	  timing infrastructure such as time of day, and system time

config ARCH_CPUIDLE_HALTPOLL
	def_bool n
	prompt "Disable host haltpoll when loading haltpoll driver"
	help
	  If virtualized under KVM, disable host haltpoll.

config PVH
	bool "Support for running PVH guests"
	---help---
...
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ARCH_HALTPOLL_H
#define _ARCH_HALTPOLL_H
void arch_haltpoll_enable(unsigned int cpu);
void arch_haltpoll_disable(unsigned int cpu);
#endif
...@@ -705,6 +705,7 @@ unsigned int kvm_arch_para_hints(void)
{
	return cpuid_edx(kvm_cpuid_base() | KVM_CPUID_FEATURES);
}
EXPORT_SYMBOL_GPL(kvm_arch_para_hints);

static uint32_t __init kvm_detect(void)
{
...@@ -867,3 +868,39 @@ void __init kvm_spinlock_init(void)
}
#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
static void kvm_disable_host_haltpoll(void *i)
{
wrmsrl(MSR_KVM_POLL_CONTROL, 0);
}
static void kvm_enable_host_haltpoll(void *i)
{
wrmsrl(MSR_KVM_POLL_CONTROL, 1);
}
void arch_haltpoll_enable(unsigned int cpu)
{
if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL)) {
pr_err_once("kvm: host does not support poll control\n");
pr_err_once("kvm: host upgrade recommended\n");
return;
}
/* Enable guest halt poll disables host halt poll */
smp_call_function_single(cpu, kvm_disable_host_haltpoll, NULL, 1);
}
EXPORT_SYMBOL_GPL(arch_haltpoll_enable);
void arch_haltpoll_disable(unsigned int cpu)
{
if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL))
return;
/* Enable guest halt poll disables host halt poll */
smp_call_function_single(cpu, kvm_enable_host_haltpoll, NULL, 1);
}
EXPORT_SYMBOL_GPL(arch_haltpoll_disable);
#endif
...@@ -580,7 +580,7 @@ void __cpuidle default_idle(void)
	safe_halt();
	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
}
#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
EXPORT_SYMBOL(default_idle);
#endif
...
...@@ -644,17 +644,17 @@ ACPI_EXPORT_SYMBOL(acpi_get_gpe_status)
 * PARAMETERS:  gpe_device          - Parent GPE Device. NULL for GPE0/GPE1
 *              gpe_number          - GPE level within the GPE block
 *
 * RETURN:      INTERRUPT_HANDLED or INTERRUPT_NOT_HANDLED
 *
 * DESCRIPTION: Detect and dispatch a General Purpose Event to either a function
 *              (e.g. EC) or method (e.g. _Lxx/_Exx) handler.
 *
 ******************************************************************************/
u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number)
{
	ACPI_FUNCTION_TRACE(acpi_dispatch_gpe);

	return acpi_ev_detect_gpe(gpe_device, NULL, gpe_number);
}

ACPI_EXPORT_SYMBOL(acpi_dispatch_gpe)
...
...@@ -166,6 +166,10 @@ int acpi_device_set_power(struct acpi_device *device, int state)
	    || (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD))
		return -EINVAL;

	acpi_handle_debug(device->handle, "Power state change: %s -> %s\n",
			  acpi_power_state_string(device->power.state),
			  acpi_power_state_string(state));

	/* Make sure this is a valid target state */
	/* There is a special case for D0 addressed below. */
...@@ -497,7 +501,8 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
		goto out;

	mutex_lock(&acpi_pm_notifier_lock);

	adev->wakeup.ws = wakeup_source_register(&adev->dev,
						 dev_name(&adev->dev));
	adev->wakeup.context.dev = dev;
	adev->wakeup.context.func = func;
	adev->wakeup.flags.notifier_present = true;
...
...@@ -25,6 +25,7 @@
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/suspend.h>
#include <linux/acpi.h>
#include <linux/dmi.h>
#include <asm/io.h>
...@@ -1048,24 +1049,6 @@ void acpi_ec_unblock_transactions(void)
	acpi_ec_start(first_ec, true);
}
void acpi_ec_mark_gpe_for_wake(void)
{
if (first_ec && !ec_no_wakeup)
acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
}
void acpi_ec_set_gpe_wake_mask(u8 action)
{
if (first_ec && !ec_no_wakeup)
acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
}
void acpi_ec_dispatch_gpe(void)
{
if (first_ec)
acpi_dispatch_gpe(NULL, first_ec->gpe);
}
/* --------------------------------------------------------------------------
                                Event Management
   -------------------------------------------------------------------------- */
...@@ -1931,7 +1914,7 @@ static int acpi_ec_suspend(struct device *dev)
	struct acpi_ec *ec =
		acpi_driver_data(to_acpi_device(dev));

	if (!pm_suspend_no_platform() && ec_freeze_events)
		acpi_ec_disable_event(ec);
	return 0;
}
...@@ -1948,8 +1931,7 @@ static int acpi_ec_suspend_noirq(struct device *dev)
	    ec->reference_count >= 1)
		acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);

	acpi_ec_enter_noirq(ec);
	return 0;
}
...@@ -1958,8 +1940,7 @@ static int acpi_ec_resume_noirq(struct device *dev)
{
	struct acpi_ec *ec = acpi_driver_data(to_acpi_device(dev));

	acpi_ec_leave_noirq(ec);

	if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) &&
	    ec->reference_count >= 1)
...@@ -1976,7 +1957,35 @@ static int acpi_ec_resume(struct device *dev)
	acpi_ec_enable_event(ec);
	return 0;
}
#endif
void acpi_ec_mark_gpe_for_wake(void)
{
if (first_ec && !ec_no_wakeup)
acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
}
EXPORT_SYMBOL_GPL(acpi_ec_mark_gpe_for_wake);
void acpi_ec_set_gpe_wake_mask(u8 action)
{
if (pm_suspend_no_platform() && first_ec && !ec_no_wakeup)
acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
}
bool acpi_ec_dispatch_gpe(void)
{
u32 ret;
if (!first_ec)
return false;
ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
if (ret == ACPI_INTERRUPT_HANDLED) {
pm_pr_dbg("EC GPE dispatched\n");
return true;
}
return false;
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops acpi_ec_pm = {
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend_noirq, acpi_ec_resume_noirq)
...
...@@ -194,9 +194,6 @@ void acpi_ec_ecdt_probe(void);
void acpi_ec_dsdt_probe(void);
void acpi_ec_block_transactions(void);
void acpi_ec_unblock_transactions(void);
void acpi_ec_mark_gpe_for_wake(void);
void acpi_ec_set_gpe_wake_mask(u8 action);
void acpi_ec_dispatch_gpe(void);
int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
			      acpi_handle handle, acpi_ec_query_func func,
			      void *data);
...@@ -204,6 +201,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
#ifdef CONFIG_PM_SLEEP
void acpi_ec_flush_work(void);
bool acpi_ec_dispatch_gpe(void);
#endif
...@@ -212,11 +210,9 @@ void acpi_ec_flush_work(void);
   -------------------------------------------------------------------------- */
#ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT
extern bool acpi_s2idle_wakeup(void);
extern bool acpi_sleep_no_ec_events(void);
extern int acpi_sleep_init(void);
#else
static inline bool acpi_s2idle_wakeup(void) { return false; }
static inline bool acpi_sleep_no_ec_events(void) { return true; }
static inline int acpi_sleep_init(void) { return -ENXIO; }
#endif
...
...@@ -284,6 +284,29 @@ static int acpi_processor_stop(struct device *dev)
	return 0;
}
bool acpi_processor_cpufreq_init;
static int acpi_processor_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
int cpu = policy->cpu;
if (event == CPUFREQ_CREATE_POLICY) {
acpi_thermal_cpufreq_init(cpu);
acpi_processor_ppc_init(cpu);
} else if (event == CPUFREQ_REMOVE_POLICY) {
acpi_processor_ppc_exit(cpu);
acpi_thermal_cpufreq_exit(cpu);
}
return 0;
}
static struct notifier_block acpi_processor_notifier_block = {
.notifier_call = acpi_processor_notifier,
};
/*
 * We keep the driver loaded even when ACPI is not running.
 * This is needed for the powernow-k8 driver, that works even without
...@@ -310,8 +333,12 @@ static int __init acpi_processor_driver_init(void)
	cpuhp_setup_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD, "acpi/cpu-drv:dead",
				  NULL, acpi_soft_cpu_dead);

	if (!cpufreq_register_notifier(&acpi_processor_notifier_block,
				       CPUFREQ_POLICY_NOTIFIER)) {
		acpi_processor_cpufreq_init = true;
		acpi_processor_ignore_ppc_init();
	}

	acpi_processor_throttling_init();
	return 0;
err:
...@@ -324,8 +351,12 @@ static void __exit acpi_processor_driver_exit(void)
	if (acpi_disabled)
		return;

	if (acpi_processor_cpufreq_init) {
		cpufreq_unregister_notifier(&acpi_processor_notifier_block,
					    CPUFREQ_POLICY_NOTIFIER);
		acpi_processor_cpufreq_init = false;
	}

	cpuhp_remove_state_nocalls(hp_online);
	cpuhp_remove_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD);
	driver_unregister(&acpi_processor_driver);
...
...@@ -50,57 +50,13 @@ module_param(ignore_ppc, int, 0644);
MODULE_PARM_DESC(ignore_ppc, "If the frequency of your machine gets wrongly" \
		 "limited by BIOS, this should help");

static bool acpi_processor_ppc_in_use;
#define PPC_IN_USE 2
static int acpi_processor_ppc_status;
static int acpi_processor_ppc_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
struct acpi_processor *pr;
unsigned int ppc = 0;
if (ignore_ppc < 0)
ignore_ppc = 0;
if (ignore_ppc)
return 0;
if (event != CPUFREQ_ADJUST)
return 0;
mutex_lock(&performance_mutex);
pr = per_cpu(processors, policy->cpu);
if (!pr || !pr->performance)
goto out;
ppc = (unsigned int)pr->performance_platform_limit;
if (ppc >= pr->performance->state_count)
goto out;
cpufreq_verify_within_limits(policy, 0,
pr->performance->states[ppc].
core_frequency * 1000);
out:
mutex_unlock(&performance_mutex);
return 0;
}
static struct notifier_block acpi_ppc_notifier_block = {
.notifier_call = acpi_processor_ppc_notifier,
};
static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
{
	acpi_status status = 0;
	unsigned long long ppc = 0;
	int ret;

	if (!pr)
		return -EINVAL;
...@@ -112,7 +68,7 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
	status = acpi_evaluate_integer(pr->handle, "_PPC", NULL, &ppc);

	if (status != AE_NOT_FOUND)
		acpi_processor_ppc_in_use = true;

	if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
		ACPI_EXCEPTION((AE_INFO, status, "Evaluating _PPC"));
...@@ -124,6 +80,17 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
	pr->performance_platform_limit = (int)ppc;
if (ppc >= pr->performance->state_count ||
unlikely(!dev_pm_qos_request_active(&pr->perflib_req)))
return 0;
ret = dev_pm_qos_update_request(&pr->perflib_req,
pr->performance->states[ppc].core_frequency * 1000);
if (ret < 0) {
pr_warn("Failed to update perflib freq constraint: CPU%d (%d)\n",
pr->id, ret);
}
	return 0;
}
...@@ -184,23 +151,32 @@ int acpi_processor_get_bios_limit(int cpu, unsigned int *limit)
}
EXPORT_SYMBOL(acpi_processor_get_bios_limit);

void acpi_processor_ignore_ppc_init(void)
{
	if (ignore_ppc < 0)
		ignore_ppc = 0;
}

void acpi_processor_ppc_init(int cpu)
{
	struct acpi_processor *pr = per_cpu(processors, cpu);
	int ret;

	ret = dev_pm_qos_add_request(get_cpu_device(cpu),
				     &pr->perflib_req, DEV_PM_QOS_MAX_FREQUENCY,
				     INT_MAX);
	if (ret < 0) {
		pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
		       ret);
		return;
	}
}

void acpi_processor_ppc_exit(int cpu)
{
	struct acpi_processor *pr = per_cpu(processors, cpu);

	dev_pm_qos_remove_request(&pr->perflib_req);
}

static int acpi_processor_get_performance_control(struct acpi_processor *pr)
...@@ -477,7 +453,7 @@ int acpi_processor_notify_smm(struct module *calling_module)
	static int is_done = 0;
	int result;

	if (!acpi_processor_cpufreq_init)
		return -EBUSY;

	if (!try_module_get(calling_module))
...@@ -513,7 +489,7 @@ int acpi_processor_notify_smm(struct module *calling_module)
	 * we can allow the cpufreq driver to be rmmod'ed. */
	is_done = 1;

	if (!acpi_processor_ppc_in_use)
		module_put(calling_module);

	return 0;
...@@ -742,7 +718,7 @@ acpi_processor_register_performance(struct acpi_processor_performance
{
	struct acpi_processor *pr;

	if (!acpi_processor_cpufreq_init)
		return -EINVAL;

	mutex_lock(&performance_mutex);
...
...@@ -35,7 +35,6 @@ ACPI_MODULE_NAME("processor_thermal");
#define CPUFREQ_THERMAL_MAX_STEP 3

static DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg);
static unsigned int acpi_thermal_cpufreq_is_init = 0;
#define reduction_pctg(cpu) \
	per_cpu(cpufreq_thermal_reduction_pctg, phys_package_first_cpu(cpu))
...@@ -61,35 +60,11 @@ static int phys_package_first_cpu(int cpu)
static int cpu_has_cpufreq(unsigned int cpu)
{
	struct cpufreq_policy policy;

	if (!acpi_processor_cpufreq_init || cpufreq_get_policy(&policy, cpu))
		return 0;
	return 1;
}
static int acpi_thermal_cpufreq_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
unsigned long max_freq = 0;
if (event != CPUFREQ_ADJUST)
goto out;
max_freq = (
policy->cpuinfo.max_freq *
(100 - reduction_pctg(policy->cpu) * 20)
) / 100;
cpufreq_verify_within_limits(policy, 0, max_freq);
out:
return 0;
}
static struct notifier_block acpi_thermal_cpufreq_notifier_block = {
.notifier_call = acpi_thermal_cpufreq_notifier,
};
static int cpufreq_get_max_state(unsigned int cpu)
{
	if (!cpu_has_cpufreq(cpu))
...@@ -108,7 +83,10 @@ static int cpufreq_get_cur_state(unsigned int cpu)
static int cpufreq_set_cur_state(unsigned int cpu, int state)
{
	struct cpufreq_policy *policy;
	struct acpi_processor *pr;
	unsigned long max_freq;
	int i, ret;

	if (!cpu_has_cpufreq(cpu))
		return 0;
...@@ -121,33 +99,53 @@ static int cpufreq_set_cur_state(unsigned int cpu, int state)
	 * frequency.
	 */
	for_each_online_cpu(i) {
		if (topology_physical_package_id(i) !=
		    topology_physical_package_id(cpu))
			continue;

		pr = per_cpu(processors, i);

		if (unlikely(!dev_pm_qos_request_active(&pr->thermal_req)))
			continue;

		policy = cpufreq_cpu_get(i);
		if (!policy)
			return -EINVAL;

		max_freq = (policy->cpuinfo.max_freq * (100 - reduction_pctg(i) * 20)) / 100;

		cpufreq_cpu_put(policy);

		ret = dev_pm_qos_update_request(&pr->thermal_req, max_freq);
		if (ret < 0) {
			pr_warn("Failed to update thermal freq constraint: CPU%d (%d)\n",
				pr->id, ret);
		}
	}
	return 0;
}

void acpi_thermal_cpufreq_init(int cpu)
{
	struct acpi_processor *pr = per_cpu(processors, cpu);
	int ret;

	ret = dev_pm_qos_add_request(get_cpu_device(cpu),
				     &pr->thermal_req, DEV_PM_QOS_MAX_FREQUENCY,
				     INT_MAX);
	if (ret < 0) {
		pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
		       ret);
		return;
	}
}

void acpi_thermal_cpufreq_exit(int cpu)
{
	struct acpi_processor *pr = per_cpu(processors, cpu);

	dev_pm_qos_remove_request(&pr->thermal_req);
}

#else /* ! CONFIG_CPU_FREQ */
static int cpufreq_get_max_state(unsigned int cpu)
{
...
...@@ -89,6 +89,10 @@ bool acpi_sleep_state_supported(u8 sleep_state)
}

#ifdef CONFIG_ACPI_SLEEP
static bool sleep_no_lps0 __read_mostly;
module_param(sleep_no_lps0, bool, 0644);
MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface");
static u32 acpi_target_sleep_state = ACPI_STATE_S0;

u32 acpi_target_system_state(void)
...@@ -158,11 +162,11 @@ static int __init init_nvs_nosave(const struct dmi_system_id *d)
	return 0;
}

static bool acpi_sleep_default_s3;

static int __init init_default_s3(const struct dmi_system_id *d)
{
	acpi_sleep_default_s3 = true;
	return 0;
}
...@@ -363,7 +367,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
	 * S0 Idle firmware interface.
	 */
	{
	.callback = init_default_s3,
	.ident = "Dell XPS13 9360",
	.matches = {
		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
...@@ -376,7 +380,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
	 * https://bugzilla.kernel.org/show_bug.cgi?id=199057).
	 */
	{
	.callback = init_default_s3,
	.ident = "ThinkPad X1 Tablet(2016)",
	.matches = {
		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
...@@ -524,8 +528,9 @@ static void acpi_pm_end(void)
	acpi_sleep_tts_switch(acpi_target_sleep_state);
}
#else /* !CONFIG_ACPI_SLEEP */
#define sleep_no_lps0 (1)
#define acpi_target_sleep_state ACPI_STATE_S0
#define acpi_sleep_default_s3 (1)
static inline void acpi_sleep_dmi_check(void) {}
#endif /* CONFIG_ACPI_SLEEP */
...@@ -691,7 +696,6 @@ static const struct platform_suspend_ops acpi_suspend_ops_old = {
	.recover = acpi_pm_finish,
};
static bool s2idle_in_progress;
static bool s2idle_wakeup;

/*
...@@ -904,42 +908,43 @@ static int lps0_device_attach(struct acpi_device *adev,
if (lps0_device_handle) if (lps0_device_handle)
return 0; return 0;
if (acpi_sleep_no_lps0) {
acpi_handle_info(adev->handle,
"Low Power S0 Idle interface disabled\n");
return 0;
}
if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
return 0; return 0;
guid_parse(ACPI_LPS0_DSM_UUID, &lps0_dsm_guid); guid_parse(ACPI_LPS0_DSM_UUID, &lps0_dsm_guid);
/* Check if the _DSM is present and as expected. */ /* Check if the _DSM is present and as expected. */
out_obj = acpi_evaluate_dsm(adev->handle, &lps0_dsm_guid, 1, 0, NULL); out_obj = acpi_evaluate_dsm(adev->handle, &lps0_dsm_guid, 1, 0, NULL);
if (out_obj && out_obj->type == ACPI_TYPE_BUFFER) { if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER) {
char bitmask = *(char *)out_obj->buffer.pointer;
lps0_dsm_func_mask = bitmask;
lps0_device_handle = adev->handle;
/*
* Use suspend-to-idle by default if the default
* suspend mode was not set from the command line.
*/
if (mem_sleep_default > PM_SUSPEND_MEM)
mem_sleep_current = PM_SUSPEND_TO_IDLE;
acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
bitmask);
acpi_ec_mark_gpe_for_wake();
} else {
acpi_handle_debug(adev->handle, acpi_handle_debug(adev->handle,
"_DSM function 0 evaluation failed\n"); "_DSM function 0 evaluation failed\n");
return 0;
} }
lps0_dsm_func_mask = *(char *)out_obj->buffer.pointer;
ACPI_FREE(out_obj); ACPI_FREE(out_obj);
acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
lps0_dsm_func_mask);
lps0_device_handle = adev->handle;
lpi_device_get_constraints(); lpi_device_get_constraints();
/*
* Use suspend-to-idle by default if the default suspend mode was not
* set from the command line.
*/
if (mem_sleep_default > PM_SUSPEND_MEM && !acpi_sleep_default_s3)
mem_sleep_current = PM_SUSPEND_TO_IDLE;
/*
* Some LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U, require the
* EC GPE to be enabled while suspended for certain wakeup devices to
* work, so mark it as wakeup-capable.
*/
acpi_ec_mark_gpe_for_wake();
return 0; return 0;
} }
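For context: function 0 of the LPS0 _DSM returns a bitmask of the functions the firmware implements, and the cached lps0_dsm_func_mask then gates every later invocation. A minimal sketch of that gating, modeled on acpi_sleep_run_lps0_dsm() in the same file (simplified, not the full function):

    /* Sketch: invoke LPS0 _DSM function "func" only if firmware advertises it. */
    static void lps0_dsm_run_sketch(unsigned int func)
    {
            union acpi_object *out_obj;

            if (!(lps0_dsm_func_mask & (1 << func)))
                    return; /* function absent from the mask returned by function 0 */

            out_obj = acpi_evaluate_dsm(lps0_device_handle, &lps0_dsm_guid,
                                        1 /* revision */, func, NULL);
            ACPI_FREE(out_obj);
    }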
@@ -951,98 +956,110 @@ static struct acpi_scan_handler lps0_handler = {
 static int acpi_s2idle_begin(void)
 {
	acpi_scan_lock_acquire();
-	s2idle_in_progress = true;
	return 0;
 }

 static int acpi_s2idle_prepare(void)
 {
-	if (lps0_device_handle) {
-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
+	if (acpi_sci_irq_valid()) {
+		enable_irq_wake(acpi_sci_irq);
		acpi_ec_set_gpe_wake_mask(ACPI_GPE_ENABLE);
	}

-	if (acpi_sci_irq_valid())
-		enable_irq_wake(acpi_sci_irq);
-
	acpi_enable_wakeup_devices(ACPI_STATE_S0);

	/* Change the configuration of GPEs to avoid spurious wakeup. */
	acpi_enable_all_wakeup_gpes();
	acpi_os_wait_events_complete();
+
+	s2idle_wakeup = true;
	return 0;
 }

-static void acpi_s2idle_wake(void)
+static int acpi_s2idle_prepare_late(void)
 {
-	if (!lps0_device_handle)
-		return;
+	if (!lps0_device_handle || sleep_no_lps0)
+		return 0;

	if (pm_debug_messages_on)
		lpi_check_constraints();

+	acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
+	acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
+
+	return 0;
+}
+
+static void acpi_s2idle_wake(void)
+{
+	/*
+	 * If IRQD_WAKEUP_ARMED is set for the SCI at this point, the SCI has
+	 * not triggered while suspended, so bail out.
+	 */
+	if (!acpi_sci_irq_valid() ||
+	    irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq)))
+		return;
+
	/*
-	 * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means
-	 * that the SCI has triggered while suspended, so cancel the wakeup in
-	 * case it has not been a wakeup event (the GPEs will be checked later).
+	 * If there are EC events to process, the wakeup may be a spurious one
+	 * coming from the EC.
	 */
-	if (acpi_sci_irq_valid() &&
-	    !irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq))) {
+	if (acpi_ec_dispatch_gpe()) {
+		/*
+		 * Cancel the wakeup and process all pending events in case
+		 * there are any wakeup ones in there.
+		 *
+		 * Note that if any non-EC GPEs are active at this point, the
+		 * SCI will retrigger after the rearming below, so no events
+		 * should be missed by canceling the wakeup here.
+		 */
		pm_system_cancel_wakeup();
-		s2idle_wakeup = true;
		/*
-		 * On some platforms with the LPS0 _DSM device noirq resume
-		 * takes too much time for EC wakeup events to survive, so look
-		 * for them now.
+		 * The EC driver uses the system workqueue and an additional
+		 * special one, so those need to be flushed too.
		 */
-		acpi_ec_dispatch_gpe();
+		acpi_os_wait_events_complete(); /* synchronize EC GPE processing */
+		acpi_ec_flush_work();
+		acpi_os_wait_events_complete(); /* synchronize Notify handling */
+
+		rearm_wake_irq(acpi_sci_irq);
	}
 }

-static void acpi_s2idle_sync(void)
+static void acpi_s2idle_restore_early(void)
 {
-	/*
-	 * Process all pending events in case there are any wakeup ones.
-	 *
-	 * The EC driver uses the system workqueue and an additional special
-	 * one, so those need to be flushed too.
-	 */
-	acpi_os_wait_events_complete(); /* synchronize SCI IRQ handling */
-	acpi_ec_flush_work();
-	acpi_os_wait_events_complete(); /* synchronize Notify handling */
-	s2idle_wakeup = false;
+	if (!lps0_device_handle || sleep_no_lps0)
+		return;
+
+	acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
+	acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
 }

 static void acpi_s2idle_restore(void)
 {
+	s2idle_wakeup = false;
+
	acpi_enable_all_runtime_gpes();

	acpi_disable_wakeup_devices(ACPI_STATE_S0);

-	if (acpi_sci_irq_valid())
-		disable_irq_wake(acpi_sci_irq);
-
-	if (lps0_device_handle) {
+	if (acpi_sci_irq_valid()) {
		acpi_ec_set_gpe_wake_mask(ACPI_GPE_DISABLE);
-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
+		disable_irq_wake(acpi_sci_irq);
	}
 }

 static void acpi_s2idle_end(void)
 {
-	s2idle_in_progress = false;
	acpi_scan_lock_release();
 }

 static const struct platform_s2idle_ops acpi_s2idle_ops = {
	.begin = acpi_s2idle_begin,
	.prepare = acpi_s2idle_prepare,
+	.prepare_late = acpi_s2idle_prepare_late,
	.wake = acpi_s2idle_wake,
-	.sync = acpi_s2idle_sync,
+	.restore_early = acpi_s2idle_restore_early,
	.restore = acpi_s2idle_restore,
	.end = acpi_s2idle_end,
 };
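Taken together, the new prepare_late/restore_early callbacks move the LPS0 _DSM transitions out of ->prepare()/->restore() so they bracket the "noirq" phases. A rough sketch of the order in which the suspend-to-idle core invokes these ops (an approximation, not the exact kernel/power/suspend.c code; error paths omitted):

    /* Rough sketch of the s2idle control flow using platform_s2idle_ops. */
    static void s2idle_flow_sketch(const struct platform_s2idle_ops *ops)
    {
            ops->begin();
            /* suspend devices down through the "late" phase */
            ops->prepare();                 /* before "noirq" device suspend */
            /* "noirq" device suspend */
            ops->prepare_late();            /* LPS0 entry, right before idling */

            do {
                    s2idle_enter();         /* idle until a wakeup event fires */
                    ops->wake();            /* may cancel a spurious EC wakeup */
            } while (!pm_wakeup_pending());

            ops->restore_early();           /* LPS0 exit, before "noirq" resume */
            /* "noirq" device resume, then the remaining resume phases */
            ops->restore();
            ops->end();
    }

The point of the loop is that a spurious EC wakeup no longer forces a full "noirq" resume/suspend cycle: ->wake() can cancel it and the system goes straight back to idle.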
@@ -1063,7 +1080,6 @@ static void acpi_sleep_suspend_setup(void)
 }

 #else /* !CONFIG_SUSPEND */
-#define s2idle_in_progress	(false)
 #define s2idle_wakeup		(false)
 #define lps0_device_handle	(NULL)
 static inline void acpi_sleep_suspend_setup(void) {}
@@ -1074,11 +1090,6 @@ bool acpi_s2idle_wakeup(void)
	return s2idle_wakeup;
 }

-bool acpi_sleep_no_ec_events(void)
-{
-	return !s2idle_in_progress || !lps0_device_handle;
-}
-
 #ifdef CONFIG_PM_SLEEP
 static u32 saved_bm_rld;
...
@@ -179,7 +179,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
	if (!raw_capacity)
		return 0;

-	if (val != CPUFREQ_NOTIFY)
+	if (val != CPUFREQ_CREATE_POLICY)
		return 0;

	pr_debug("cpu_capacity: init cpu capacity for CPUs [%*pbl] (to_visit=%*pbl)\n",
...
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
-obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
+obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o wakeup_stats.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
...
@@ -149,29 +149,24 @@ static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
	return ret;
 }

+static int genpd_runtime_suspend(struct device *dev);
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
  * and checks that the PM domain pointer is a real generic PM domain.
  * Any failure results in NULL being returned.
  */
-static struct generic_pm_domain *genpd_lookup_dev(struct device *dev)
+static struct generic_pm_domain *dev_to_genpd_safe(struct device *dev)
 {
-	struct generic_pm_domain *genpd = NULL, *gpd;
-
	if (IS_ERR_OR_NULL(dev) || IS_ERR_OR_NULL(dev->pm_domain))
		return NULL;

-	mutex_lock(&gpd_list_lock);
-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-		if (&gpd->domain == dev->pm_domain) {
-			genpd = gpd;
-			break;
-		}
-	}
-	mutex_unlock(&gpd_list_lock);
+	/* A genpd always has its ->runtime_suspend() callback assigned. */
+	if (dev->pm_domain->ops.runtime_suspend == genpd_runtime_suspend)
+		return pd_to_genpd(dev->pm_domain);

-	return genpd;
+	return NULL;
 }

 /*
@@ -385,8 +380,8 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
	unsigned int prev;
	int ret;

-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
+	genpd = dev_to_genpd_safe(dev);
+	if (!genpd)
		return -ENODEV;

	if (unlikely(!genpd->set_performance_state))
@@ -1610,7 +1605,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
  */
 int pm_genpd_remove_device(struct device *dev)
 {
-	struct generic_pm_domain *genpd = genpd_lookup_dev(dev);
+	struct generic_pm_domain *genpd = dev_to_genpd_safe(dev);

	if (!genpd)
		return -EINVAL;
...
@@ -716,7 +716,7 @@ static void async_resume_noirq(void *data, async_cookie_t cookie)
	put_device(dev);
 }

-void dpm_noirq_resume_devices(pm_message_t state)
+static void dpm_noirq_resume_devices(pm_message_t state)
 {
	struct device *dev;
	ktime_t starttime = ktime_get();
@@ -760,13 +760,6 @@ void dpm_noirq_resume_devices(pm_message_t state)
	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
 }

-void dpm_noirq_end(void)
-{
-	resume_device_irqs();
-	device_wakeup_disarm_wake_irqs();
-	cpuidle_resume();
-}
-
 /**
  * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -777,7 +770,11 @@ void dpm_noirq_end(void)
 void dpm_resume_noirq(pm_message_t state)
 {
	dpm_noirq_resume_devices(state);
-	dpm_noirq_end();
+
+	resume_device_irqs();
+	device_wakeup_disarm_wake_irqs();
+
+	cpuidle_resume();
 }

 static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev,
@@ -1291,11 +1288,6 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
	if (async_error)
		goto Complete;

-	if (pm_wakeup_pending()) {
-		async_error = -EBUSY;
-		goto Complete;
-	}
-
	if (dev->power.syscore || dev->power.direct_complete)
		goto Complete;
@@ -1362,14 +1354,7 @@ static int device_suspend_noirq(struct device *dev)
	return __device_suspend_noirq(dev, pm_transition, false);
 }

-void dpm_noirq_begin(void)
-{
-	cpuidle_pause();
-	device_wakeup_arm_wake_irqs();
-	suspend_device_irqs();
-}
-
-int dpm_noirq_suspend_devices(pm_message_t state)
+static int dpm_noirq_suspend_devices(pm_message_t state)
 {
	ktime_t starttime = ktime_get();
	int error = 0;
@@ -1426,7 +1411,11 @@ int dpm_suspend_noirq(pm_message_t state)
 {
	int ret;

-	dpm_noirq_begin();
+	cpuidle_pause();
+	device_wakeup_arm_wake_irqs();
+	suspend_device_irqs();
+
	ret = dpm_noirq_suspend_devices(state);
	if (ret)
		dpm_resume_noirq(resume_event(state));
...
@@ -149,3 +149,21 @@ static inline void device_pm_init(struct device *dev)
	device_pm_sleep_init(dev);
	pm_runtime_init(dev);
 }
+
+#ifdef CONFIG_PM_SLEEP
+
+/* drivers/base/power/wakeup_stats.c */
+extern int wakeup_source_sysfs_add(struct device *parent,
+				   struct wakeup_source *ws);
+extern void wakeup_source_sysfs_remove(struct wakeup_source *ws);
+
+extern int pm_wakeup_source_sysfs_add(struct device *parent);
+
+#else /* !CONFIG_PM_SLEEP */
+
+static inline int pm_wakeup_source_sysfs_add(struct device *parent)
+{
+	return 0;
+}
+
+#endif /* CONFIG_PM_SLEEP */
@@ -5,6 +5,7 @@
 #include <linux/export.h>
 #include <linux/pm_qos.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_wakeup.h>
 #include <linux/atomic.h>
 #include <linux/jiffies.h>
 #include "power.h"
@@ -667,8 +668,13 @@ int dpm_sysfs_add(struct device *dev)
		if (rc)
			goto err_wakeup;
	}
+	rc = pm_wakeup_source_sysfs_add(dev);
+	if (rc)
+		goto err_latency;
	return 0;

+ err_latency:
+	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
 err_wakeup:
	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
 err_runtime:
...
@@ -72,22 +72,7 @@ static struct wakeup_source deleted_ws = {
	.lock =  __SPIN_LOCK_UNLOCKED(deleted_ws.lock),
 };

-/**
- * wakeup_source_prepare - Prepare a new wakeup source for initialization.
- * @ws: Wakeup source to prepare.
- * @name: Pointer to the name of the new wakeup source.
- *
- * Callers must ensure that the @name string won't be freed when @ws is still in
- * use.
- */
-void wakeup_source_prepare(struct wakeup_source *ws, const char *name)
-{
-	if (ws) {
-		memset(ws, 0, sizeof(*ws));
-		ws->name = name;
-	}
-}
-EXPORT_SYMBOL_GPL(wakeup_source_prepare);
+static DEFINE_IDA(wakeup_ida);

 /**
  * wakeup_source_create - Create a struct wakeup_source object.
@@ -96,13 +81,31 @@ EXPORT_SYMBOL_GPL(wakeup_source_prepare);
 struct wakeup_source *wakeup_source_create(const char *name)
 {
	struct wakeup_source *ws;
+	const char *ws_name;
+	int id;

-	ws = kmalloc(sizeof(*ws), GFP_KERNEL);
+	ws = kzalloc(sizeof(*ws), GFP_KERNEL);
	if (!ws)
-		return NULL;
+		goto err_ws;
+
+	ws_name = kstrdup_const(name, GFP_KERNEL);
+	if (!ws_name)
+		goto err_name;
+	ws->name = ws_name;
+
+	id = ida_alloc(&wakeup_ida, GFP_KERNEL);
+	if (id < 0)
+		goto err_id;
+	ws->id = id;

-	wakeup_source_prepare(ws, name ? kstrdup_const(name, GFP_KERNEL) : NULL);
	return ws;
+
+err_id:
+	kfree_const(ws->name);
+err_name:
+	kfree(ws);
+err_ws:
+	return NULL;
 }
 EXPORT_SYMBOL_GPL(wakeup_source_create);
@@ -134,6 +137,13 @@ static void wakeup_source_record(struct wakeup_source *ws)
	spin_unlock_irqrestore(&deleted_ws.lock, flags);
 }

+static void wakeup_source_free(struct wakeup_source *ws)
+{
+	ida_free(&wakeup_ida, ws->id);
+	kfree_const(ws->name);
+	kfree(ws);
+}
+
 /**
  * wakeup_source_destroy - Destroy a struct wakeup_source object.
  * @ws: Wakeup source to destroy.
@@ -147,8 +157,7 @@ void wakeup_source_destroy(struct wakeup_source *ws)
	__pm_relax(ws);
	wakeup_source_record(ws);
-	kfree_const(ws->name);
-	kfree(ws);
+	wakeup_source_free(ws);
 }
 EXPORT_SYMBOL_GPL(wakeup_source_destroy);
@@ -200,16 +209,26 @@ EXPORT_SYMBOL_GPL(wakeup_source_remove);
 /**
  * wakeup_source_register - Create wakeup source and add it to the list.
+ * @dev: Device this wakeup source is associated with (or NULL if virtual).
  * @name: Name of the wakeup source to register.
  */
-struct wakeup_source *wakeup_source_register(const char *name)
+struct wakeup_source *wakeup_source_register(struct device *dev,
+					     const char *name)
 {
	struct wakeup_source *ws;
+	int ret;

	ws = wakeup_source_create(name);
-	if (ws)
+	if (ws) {
+		if (!dev || device_is_registered(dev)) {
+			ret = wakeup_source_sysfs_add(dev, ws);
+			if (ret) {
+				wakeup_source_free(ws);
+				return NULL;
+			}
+		}
		wakeup_source_add(ws);
+	}
	return ws;
 }
 EXPORT_SYMBOL_GPL(wakeup_source_register);
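With the added @dev argument the source also gets a device node in sysfs (see wakeup_stats.c below); passing NULL keeps it a purely virtual source. A minimal usage sketch, assuming "dev" is an already-registered struct device of a hypothetical driver:

    static int example_use_wakeup_source(struct device *dev)
    {
            struct wakeup_source *ws;

            ws = wakeup_source_register(dev, "example-ws");
            if (!ws)
                    return -ENOMEM;

            __pm_stay_awake(ws);            /* block system suspend */
            /* ... handle the wakeup event ... */
            __pm_relax(ws);                 /* allow suspend again */

            wakeup_source_unregister(ws);   /* also removes the sysfs node */
            return 0;
    }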
@@ -222,6 +241,7 @@ void wakeup_source_unregister(struct wakeup_source *ws)
 {
	if (ws) {
		wakeup_source_remove(ws);
+		wakeup_source_sysfs_remove(ws);
		wakeup_source_destroy(ws);
	}
 }
@@ -265,7 +285,7 @@ int device_wakeup_enable(struct device *dev)
	if (pm_suspend_target_state != PM_SUSPEND_ON)
		dev_dbg(dev, "Suspicious %s() during system transition!\n", __func__);

-	ws = wakeup_source_register(dev_name(dev));
+	ws = wakeup_source_register(dev, dev_name(dev));
	if (!ws)
		return -ENOMEM;
@@ -859,7 +879,7 @@ EXPORT_SYMBOL_GPL(pm_system_wakeup);
 void pm_system_cancel_wakeup(void)
 {
-	atomic_dec(&pm_abort_suspend);
+	atomic_dec_if_positive(&pm_abort_suspend);
 }

 void pm_wakeup_clear(bool reset)
...
// SPDX-License-Identifier: GPL-2.0
/*
* Wakeup statistics in sysfs
*
* Copyright (c) 2019 Linux Foundation
* Copyright (c) 2019 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Copyright (c) 2019 Google Inc.
*/
#include <linux/device.h>
#include <linux/idr.h>
#include <linux/init.h>
#include <linux/kdev_t.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/slab.h>
#include <linux/timekeeping.h>
#include "power.h"
static struct class *wakeup_class;
#define wakeup_attr(_name) \
static ssize_t _name##_show(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
struct wakeup_source *ws = dev_get_drvdata(dev); \
\
return sprintf(buf, "%lu\n", ws->_name); \
} \
static DEVICE_ATTR_RO(_name)
wakeup_attr(active_count);
wakeup_attr(event_count);
wakeup_attr(wakeup_count);
wakeup_attr(expire_count);
static ssize_t active_time_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t active_time =
ws->active ? ktime_sub(ktime_get(), ws->last_time) : 0;
return sprintf(buf, "%lld\n", ktime_to_ms(active_time));
}
static DEVICE_ATTR_RO(active_time_ms);
static ssize_t total_time_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t active_time;
ktime_t total_time = ws->total_time;
if (ws->active) {
active_time = ktime_sub(ktime_get(), ws->last_time);
total_time = ktime_add(total_time, active_time);
}
return sprintf(buf, "%lld\n", ktime_to_ms(total_time));
}
static DEVICE_ATTR_RO(total_time_ms);
static ssize_t max_time_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t active_time;
ktime_t max_time = ws->max_time;
if (ws->active) {
active_time = ktime_sub(ktime_get(), ws->last_time);
if (active_time > max_time)
max_time = active_time;
}
return sprintf(buf, "%lld\n", ktime_to_ms(max_time));
}
static DEVICE_ATTR_RO(max_time_ms);
static ssize_t last_change_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
return sprintf(buf, "%lld\n", ktime_to_ms(ws->last_time));
}
static DEVICE_ATTR_RO(last_change_ms);
static ssize_t name_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", ws->name);
}
static DEVICE_ATTR_RO(name);
static ssize_t prevent_suspend_time_ms_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t prevent_sleep_time = ws->prevent_sleep_time;
if (ws->active && ws->autosleep_enabled) {
prevent_sleep_time = ktime_add(prevent_sleep_time,
ktime_sub(ktime_get(), ws->start_prevent_time));
}
return sprintf(buf, "%lld\n", ktime_to_ms(prevent_sleep_time));
}
static DEVICE_ATTR_RO(prevent_suspend_time_ms);
static struct attribute *wakeup_source_attrs[] = {
&dev_attr_name.attr,
&dev_attr_active_count.attr,
&dev_attr_event_count.attr,
&dev_attr_wakeup_count.attr,
&dev_attr_expire_count.attr,
&dev_attr_active_time_ms.attr,
&dev_attr_total_time_ms.attr,
&dev_attr_max_time_ms.attr,
&dev_attr_last_change_ms.attr,
&dev_attr_prevent_suspend_time_ms.attr,
NULL,
};
ATTRIBUTE_GROUPS(wakeup_source);
static void device_create_release(struct device *dev)
{
kfree(dev);
}
static struct device *wakeup_source_device_create(struct device *parent,
struct wakeup_source *ws)
{
struct device *dev = NULL;
int retval = -ENODEV;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev) {
retval = -ENOMEM;
goto error;
}
device_initialize(dev);
dev->devt = MKDEV(0, 0);
dev->class = wakeup_class;
dev->parent = parent;
dev->groups = wakeup_source_groups;
dev->release = device_create_release;
dev_set_drvdata(dev, ws);
device_set_pm_not_required(dev);
retval = kobject_set_name(&dev->kobj, "wakeup%d", ws->id);
if (retval)
goto error;
retval = device_add(dev);
if (retval)
goto error;
return dev;
error:
put_device(dev);
return ERR_PTR(retval);
}
/**
* wakeup_source_sysfs_add - Add wakeup_source attributes to sysfs.
* @parent: Device given wakeup source is associated with (or NULL if virtual).
* @ws: Wakeup source to be added in sysfs.
*/
int wakeup_source_sysfs_add(struct device *parent, struct wakeup_source *ws)
{
struct device *dev;
dev = wakeup_source_device_create(parent, ws);
if (IS_ERR(dev))
return PTR_ERR(dev);
ws->dev = dev;
return 0;
}
/**
* pm_wakeup_source_sysfs_add - Add wakeup_source attributes to sysfs
* for a device if they're missing.
* @parent: Device given wakeup source is associated with
*/
int pm_wakeup_source_sysfs_add(struct device *parent)
{
if (!parent->power.wakeup || parent->power.wakeup->dev)
return 0;
return wakeup_source_sysfs_add(parent, parent->power.wakeup);
}
/**
* wakeup_source_sysfs_remove - Remove wakeup_source attributes from sysfs.
* @ws: Wakeup source to be removed from sysfs.
*/
void wakeup_source_sysfs_remove(struct wakeup_source *ws)
{
device_unregister(ws->dev);
}
static int __init wakeup_sources_sysfs_init(void)
{
wakeup_class = class_create(THIS_MODULE, "wakeup");
return PTR_ERR_OR_ZERO(wakeup_class);
}
postcore_initcall(wakeup_sources_sysfs_init);
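Each registered wakeup source now appears as /sys/class/wakeup/wakeup<id> with the attributes defined above. A small userspace sketch reading one of the counters ("wakeup0" is illustrative; real code would enumerate /sys/class/wakeup/):

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/class/wakeup/wakeup0/event_count", "r");
            unsigned long count;

            if (!f)
                    return 1;
            if (fscanf(f, "%lu", &count) == 1)
                    printf("event_count: %lu\n", count);
            fclose(f);
            return 0;
    }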
@@ -19,6 +19,18 @@ config ACPI_CPPC_CPUFREQ
	  If in doubt, say N.

+config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
+	tristate "Allwinner nvmem based SUN50I CPUFreq driver"
+	depends on ARCH_SUNXI
+	depends on NVMEM_SUNXI_SID
+	select PM_OPP
+	help
+	  This adds the nvmem based CPUFreq driver for the Allwinner
+	  H6 SoC.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called sun50i-cpufreq-nvmem.
+
 config ARM_ARMADA_37XX_CPUFREQ
	tristate "Armada 37xx CPUFreq support"
	depends on ARCH_MVEBU && CPUFREQ_DT
@@ -120,8 +132,8 @@ config ARM_OMAP2PLUS_CPUFREQ
	depends on ARCH_OMAP2PLUS
	default ARCH_OMAP2PLUS

-config ARM_QCOM_CPUFREQ_KRYO
-	tristate "Qualcomm Kryo based CPUFreq"
+config ARM_QCOM_CPUFREQ_NVMEM
+	tristate "Qualcomm nvmem based CPUFreq"
	depends on ARM64
	depends on QCOM_QFPROM
	depends on QCOM_SMEM
...
@@ -64,7 +64,7 @@ obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ)	+= omap-cpufreq.o
 obj-$(CONFIG_ARM_PXA2xx_CPUFREQ)	+= pxa2xx-cpufreq.o
 obj-$(CONFIG_PXA3xx)			+= pxa3xx-cpufreq.o
 obj-$(CONFIG_ARM_QCOM_CPUFREQ_HW)	+= qcom-cpufreq-hw.o
-obj-$(CONFIG_ARM_QCOM_CPUFREQ_KRYO)	+= qcom-cpufreq-kryo.o
+obj-$(CONFIG_ARM_QCOM_CPUFREQ_NVMEM)	+= qcom-cpufreq-nvmem.o
 obj-$(CONFIG_ARM_RASPBERRYPI_CPUFREQ)	+= raspberrypi-cpufreq.o
 obj-$(CONFIG_ARM_S3C2410_CPUFREQ)	+= s3c2410-cpufreq.o
 obj-$(CONFIG_ARM_S3C2412_CPUFREQ)	+= s3c2412-cpufreq.o
@@ -80,6 +80,7 @@ obj-$(CONFIG_ARM_SCMI_CPUFREQ)		+= scmi-cpufreq.o
 obj-$(CONFIG_ARM_SCPI_CPUFREQ)		+= scpi-cpufreq.o
 obj-$(CONFIG_ARM_SPEAR_CPUFREQ)		+= spear-cpufreq.o
 obj-$(CONFIG_ARM_STI_CPUFREQ)		+= sti-cpufreq.o
+obj-$(CONFIG_ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM) += sun50i-cpufreq-nvmem.o
 obj-$(CONFIG_ARM_TANGO_CPUFREQ)		+= tango-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA20_CPUFREQ)	+= tegra20-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA124_CPUFREQ)	+= tegra124-cpufreq.o
...
@@ -136,6 +136,8 @@ static int __init armada_8k_cpufreq_init(void)
	nb_cpus = num_possible_cpus();
	freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL);
+	if (!freq_tables)
+		return -ENOMEM;
	cpumask_copy(&cpus, cpu_possible_mask);

	/*
...
@@ -101,12 +101,15 @@ static const struct of_device_id whitelist[] __initconst = {
  * platforms using "operating-points-v2" property.
  */
 static const struct of_device_id blacklist[] __initconst = {
+	{ .compatible = "allwinner,sun50i-h6", },
+
	{ .compatible = "calxeda,highbank", },
	{ .compatible = "calxeda,ecx-2000", },

	{ .compatible = "fsl,imx7d", },
	{ .compatible = "fsl,imx8mq", },
	{ .compatible = "fsl,imx8mm", },
+	{ .compatible = "fsl,imx8mn", },

	{ .compatible = "marvell,armadaxp", },
@@ -117,12 +120,14 @@ static const struct of_device_id blacklist[] __initconst = {
	{ .compatible = "mediatek,mt817x", },
	{ .compatible = "mediatek,mt8173", },
	{ .compatible = "mediatek,mt8176", },
+	{ .compatible = "mediatek,mt8183", },

	{ .compatible = "nvidia,tegra124", },
	{ .compatible = "nvidia,tegra210", },

	{ .compatible = "qcom,apq8096", },
	{ .compatible = "qcom,msm8996", },
+	{ .compatible = "qcom,qcs404", },

	{ .compatible = "st,stih407", },
	{ .compatible = "st,stih410", },
...
@@ -1266,7 +1266,17 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy)
					   DEV_PM_QOS_MAX_FREQUENCY);
	dev_pm_qos_remove_notifier(dev, &policy->nb_min,
				   DEV_PM_QOS_MIN_FREQUENCY);
-	dev_pm_qos_remove_request(policy->max_freq_req);
+
+	if (policy->max_freq_req) {
+		/*
+		 * CPUFREQ_CREATE_POLICY notification is sent only after
+		 * successfully adding max_freq_req request.
+		 */
+		blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
+					     CPUFREQ_REMOVE_POLICY, policy);
+		dev_pm_qos_remove_request(policy->max_freq_req);
+	}
+
	dev_pm_qos_remove_request(policy->min_freq_req);
	kfree(policy->min_freq_req);
@@ -1391,6 +1401,9 @@ static int cpufreq_online(unsigned int cpu)
				ret);
			goto out_destroy_policy;
		}
+
+		blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
+				CPUFREQ_CREATE_POLICY, policy);
	}

	if (cpufreq_driver->get && has_target()) {
@@ -1807,8 +1820,8 @@ void cpufreq_suspend(void)
		}

		if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy))
-			pr_err("%s: Failed to suspend driver: %p\n", __func__,
-			       policy);
+			pr_err("%s: Failed to suspend driver: %s\n", __func__,
+			       cpufreq_driver->name);
	}

 suspend:
@@ -2140,7 +2153,7 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
			    unsigned int target_freq,
			    unsigned int relation)
 {
-	int ret = -EINVAL;
+	int ret;

	down_write(&policy->rwsem);
@@ -2347,15 +2360,13 @@ EXPORT_SYMBOL(cpufreq_get_policy);
  * @policy: Policy object to modify.
  * @new_policy: New policy data.
  *
- * Pass @new_policy to the cpufreq driver's ->verify() callback, run the
- * installed policy notifiers for it with the CPUFREQ_ADJUST value, pass it to
- * the driver's ->verify() callback again and run the notifiers for it again
- * with the CPUFREQ_NOTIFY value. Next, copy the min and max parameters
- * of @new_policy to @policy and either invoke the driver's ->setpolicy()
- * callback (if present) or carry out a governor update for @policy. That is,
- * run the current governor's ->limits() callback (if the governor field in
- * @new_policy points to the same object as the one in @policy) or replace the
- * governor for @policy with the new one stored in @new_policy.
+ * Pass @new_policy to the cpufreq driver's ->verify() callback. Next, copy the
+ * min and max parameters of @new_policy to @policy and either invoke the
+ * driver's ->setpolicy() callback (if present) or carry out a governor update
+ * for @policy. That is, run the current governor's ->limits() callback (if the
+ * governor field in @new_policy points to the same object as the one in
+ * @policy) or replace the governor for @policy with the new one stored in
+ * @new_policy.
  *
  * The cpuinfo part of @policy is not updated by this function.
  */
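Code that used a CPUFREQ_ADJUST notifier to clamp policy->min/max is expected to express the limit as a device PM QoS frequency request instead, which cpufreq aggregates with all other requests. A minimal sketch (the function name and the kHz value are illustrative):

    #include <linux/cpu.h>
    #include <linux/pm_qos.h>

    static struct dev_pm_qos_request example_max_req;

    static int example_cap_cpu0(s32 max_khz)
    {
            struct device *cpu_dev = get_cpu_device(0);

            if (!cpu_dev)
                    return -ENODEV;

            /* cpufreq observes the aggregate of all max-frequency requests */
            return dev_pm_qos_add_request(cpu_dev, &example_max_req,
                                          DEV_PM_QOS_MAX_FREQUENCY, max_khz);
    }

    /* later: dev_pm_qos_update_request(&example_max_req, new_khz);
     *        dev_pm_qos_remove_request(&example_max_req);
     */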
@@ -2383,26 +2394,6 @@ int cpufreq_set_policy(struct cpufreq_policy *policy,
	if (ret)
		return ret;

-	/*
-	 * The notifier-chain shall be removed once all the users of
-	 * CPUFREQ_ADJUST are moved to use the QoS framework.
-	 */
-	/* adjust if necessary - all reasons */
-	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-			CPUFREQ_ADJUST, new_policy);
-
-	/*
-	 * verify the cpu speed can be set within this limit, which might be
-	 * different to the first one
-	 */
-	ret = cpufreq_driver->verify(new_policy);
-	if (ret)
-		return ret;
-
-	/* notification of the new policy */
-	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-			CPUFREQ_NOTIFY, new_policy);
-
	policy->min = new_policy->min;
	policy->max = new_policy->max;
	trace_cpu_frequency_limits(policy);
...
@@ -16,6 +16,7 @@

 #define OCOTP_CFG3_SPEED_GRADE_SHIFT	8
 #define OCOTP_CFG3_SPEED_GRADE_MASK	(0x3 << 8)
+#define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK	(0xf << 8)
 #define OCOTP_CFG3_MKT_SEGMENT_SHIFT	6
 #define OCOTP_CFG3_MKT_SEGMENT_MASK	(0x3 << 6)
@@ -34,7 +35,12 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
	if (ret)
		return ret;

-	speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK) >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
+	if (of_machine_is_compatible("fsl,imx8mn"))
+		speed_grade = (cell_value & IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK)
+			      >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
+	else
+		speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK)
+			      >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
	mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;

	/*
...
@@ -24,6 +24,7 @@
 #include <linux/fs.h>
 #include <linux/acpi.h>
 #include <linux/vmalloc.h>
+#include <linux/pm_qos.h>
 #include <trace/events/power.h>

 #include <asm/div64.h>
@@ -1085,6 +1086,47 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
	return count;
 }

+static struct cpufreq_driver intel_pstate;
+
+static void update_qos_request(enum dev_pm_qos_req_type type)
+{
+	int max_state, turbo_max, freq, i, perf_pct;
+	struct dev_pm_qos_request *req;
+	struct cpufreq_policy *policy;
+
+	for_each_possible_cpu(i) {
+		struct cpudata *cpu = all_cpu_data[i];
+
+		policy = cpufreq_cpu_get(i);
+		if (!policy)
+			continue;
+
+		req = policy->driver_data;
+		cpufreq_cpu_put(policy);
+
+		if (!req)
+			continue;
+
+		if (hwp_active)
+			intel_pstate_get_hwp_max(i, &turbo_max, &max_state);
+		else
+			turbo_max = cpu->pstate.turbo_pstate;
+
+		if (type == DEV_PM_QOS_MIN_FREQUENCY) {
+			perf_pct = global.min_perf_pct;
+		} else {
+			req++;
+			perf_pct = global.max_perf_pct;
+		}
+
+		freq = DIV_ROUND_UP(turbo_max * perf_pct, 100);
+		freq *= cpu->pstate.scaling;
+
+		if (dev_pm_qos_update_request(req, freq) < 0)
+			pr_warn("Failed to update freq constraint: CPU%d\n", i);
+	}
+}
+
 static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
				  const char *buf, size_t count)
 {
@@ -1108,7 +1150,10 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
	mutex_unlock(&intel_pstate_limits_lock);

-	intel_pstate_update_policies();
+	if (intel_pstate_driver == &intel_pstate)
+		intel_pstate_update_policies();
+	else
+		update_qos_request(DEV_PM_QOS_MAX_FREQUENCY);

	mutex_unlock(&intel_pstate_driver_lock);
@@ -1139,7 +1184,10 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
	mutex_unlock(&intel_pstate_limits_lock);

-	intel_pstate_update_policies();
+	if (intel_pstate_driver == &intel_pstate)
+		intel_pstate_update_policies();
+	else
+		update_qos_request(DEV_PM_QOS_MIN_FREQUENCY);

	mutex_unlock(&intel_pstate_driver_lock);
@@ -2332,8 +2380,16 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy,

 static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
-	int ret = __intel_pstate_cpu_init(policy);
+	int max_state, turbo_max, min_freq, max_freq, ret;
+	struct dev_pm_qos_request *req;
+	struct cpudata *cpu;
+	struct device *dev;
+
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;

+	ret = __intel_pstate_cpu_init(policy);
	if (ret)
		return ret;
@@ -2342,7 +2398,63 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
	/* This reflects the intel_pstate_get_cpu_pstates() setting. */
	policy->cur = policy->cpuinfo.min_freq;

+	req = kcalloc(2, sizeof(*req), GFP_KERNEL);
+	if (!req) {
+		ret = -ENOMEM;
+		goto pstate_exit;
+	}
+
+	cpu = all_cpu_data[policy->cpu];
+
+	if (hwp_active)
+		intel_pstate_get_hwp_max(policy->cpu, &turbo_max, &max_state);
+	else
+		turbo_max = cpu->pstate.turbo_pstate;
+
+	min_freq = DIV_ROUND_UP(turbo_max * global.min_perf_pct, 100);
+	min_freq *= cpu->pstate.scaling;
+	max_freq = DIV_ROUND_UP(turbo_max * global.max_perf_pct, 100);
+	max_freq *= cpu->pstate.scaling;
+
+	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_MIN_FREQUENCY,
+				     min_freq);
+	if (ret < 0) {
+		dev_err(dev, "Failed to add min-freq constraint (%d)\n", ret);
+		goto free_req;
+	}
+
+	ret = dev_pm_qos_add_request(dev, req + 1, DEV_PM_QOS_MAX_FREQUENCY,
+				     max_freq);
+	if (ret < 0) {
+		dev_err(dev, "Failed to add max-freq constraint (%d)\n", ret);
+		goto remove_min_req;
+	}
+
+	policy->driver_data = req;
+
	return 0;
+
+remove_min_req:
+	dev_pm_qos_remove_request(req);
+free_req:
+	kfree(req);
+pstate_exit:
+	intel_pstate_exit_perf_limits(policy);
+
+	return ret;
+}
+
+static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+	struct dev_pm_qos_request *req;
+
+	req = policy->driver_data;
+	dev_pm_qos_remove_request(req + 1);
+	dev_pm_qos_remove_request(req);
+	kfree(req);
+
+	return intel_pstate_cpu_exit(policy);
 }
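In passive mode each policy thus carries a two-element request array in policy->driver_data: element 0 holds the DEV_PM_QOS_MIN_FREQUENCY request and element 1 the DEV_PM_QOS_MAX_FREQUENCY one, which is why update_qos_request() above advances the pointer for the max case. A sketch of that convention (the helper name is illustrative):

    /* Sketch: how the per-policy request pair is laid out and selected. */
    static struct dev_pm_qos_request *
    example_pick_req(struct cpufreq_policy *policy, enum dev_pm_qos_req_type type)
    {
            struct dev_pm_qos_request *req = policy->driver_data;

            /* req[0]: min-frequency request, req[1]: max-frequency request */
            return type == DEV_PM_QOS_MIN_FREQUENCY ? req : req + 1;
    }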
 static struct cpufreq_driver intel_cpufreq = {
@@ -2351,7 +2463,7 @@ static struct cpufreq_driver intel_cpufreq = {
	.target = intel_cpufreq_target,
	.fast_switch = intel_cpufreq_fast_switch,
	.init = intel_cpufreq_cpu_init,
-	.exit = intel_pstate_cpu_exit,
+	.exit = intel_cpufreq_cpu_exit,
	.stop_cpu = intel_cpufreq_stop_cpu,
	.update_limits = intel_pstate_update_limits,
	.name = "intel_cpufreq",
...
@@ -338,7 +338,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
		goto out_free_resources;
	}

-	proc_reg = regulator_get_exclusive(cpu_dev, "proc");
+	proc_reg = regulator_get_optional(cpu_dev, "proc");
	if (IS_ERR(proc_reg)) {
		if (PTR_ERR(proc_reg) == -EPROBE_DEFER)
			pr_warn("proc regulator for cpu%d not ready, retry.\n",
@@ -535,6 +535,8 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
	{ .compatible = "mediatek,mt817x", },
	{ .compatible = "mediatek,mt8173", },
	{ .compatible = "mediatek,mt8176", },
+	{ .compatible = "mediatek,mt8183", },
+	{ .compatible = "mediatek,mt8516", },

	{ }
 };
...
@@ -110,6 +110,13 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
 #endif

	policy->freq_table = cbe_freqs;
+	cbe_cpufreq_pmi_policy_init(policy);
+	return 0;
+}
+
+static int cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+	cbe_cpufreq_pmi_policy_exit(policy);
	return 0;
 }
@@ -129,6 +136,7 @@ static struct cpufreq_driver cbe_cpufreq_driver = {
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= cbe_cpufreq_target,
	.init		= cbe_cpufreq_cpu_init,
+	.exit		= cbe_cpufreq_cpu_exit,
	.name		= "cbe-cpufreq",
	.flags		= CPUFREQ_CONST_LOOPS,
 };
@@ -139,15 +147,24 @@ static struct cpufreq_driver cbe_cpufreq_driver = {

 static int __init cbe_cpufreq_init(void)
 {
+	int ret;
+
	if (!machine_is(cell))
		return -ENODEV;

-	return cpufreq_register_driver(&cbe_cpufreq_driver);
+	cbe_cpufreq_pmi_init();
+
+	ret = cpufreq_register_driver(&cbe_cpufreq_driver);
+	if (ret)
+		cbe_cpufreq_pmi_exit();
+
+	return ret;
 }

 static void __exit cbe_cpufreq_exit(void)
 {
	cpufreq_unregister_driver(&cbe_cpufreq_driver);
+	cbe_cpufreq_pmi_exit();
 }

 module_init(cbe_cpufreq_init);
...
@@ -20,6 +20,14 @@ int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode);

 #if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI)
 extern bool cbe_cpufreq_has_pmi;
+void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy);
+void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy);
+void cbe_cpufreq_pmi_init(void);
+void cbe_cpufreq_pmi_exit(void);
 #else
 #define cbe_cpufreq_has_pmi (0)
+static inline void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) {}
+static inline void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) {}
+static inline void cbe_cpufreq_pmi_init(void) {}
+static inline void cbe_cpufreq_pmi_exit(void) {}
 #endif
@@ -12,6 +12,7 @@
 #include <linux/timer.h>
 #include <linux/init.h>
 #include <linux/of_platform.h>
+#include <linux/pm_qos.h>

 #include <asm/processor.h>
 #include <asm/prom.h>
@@ -24,8 +25,6 @@
 #include "ppc_cbe_cpufreq.h"

-static u8 pmi_slow_mode_limit[MAX_CBE];
-
 bool cbe_cpufreq_has_pmi = false;
 EXPORT_SYMBOL_GPL(cbe_cpufreq_has_pmi);
@@ -65,64 +64,89 @@ EXPORT_SYMBOL_GPL(cbe_cpufreq_set_pmode_pmi);

 static void cbe_cpufreq_handle_pmi(pmi_message_t pmi_msg)
 {
+	struct cpufreq_policy *policy;
+	struct dev_pm_qos_request *req;
	u8 node, slow_mode;
+	int cpu, ret;

	BUG_ON(pmi_msg.type != PMI_TYPE_FREQ_CHANGE);

	node = pmi_msg.data1;
	slow_mode = pmi_msg.data2;

-	pmi_slow_mode_limit[node] = slow_mode;
+	cpu = cbe_node_to_cpu(node);

	pr_debug("cbe_handle_pmi: node: %d max_freq: %d\n", node, slow_mode);
-}
-
-static int pmi_notifier(struct notifier_block *nb,
-			unsigned long event, void *data)
-{
-	struct cpufreq_policy *policy = data;
-	struct cpufreq_frequency_table *cbe_freqs = policy->freq_table;
-	u8 node;
-
-	/* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY
-	 * policy events?)
-	 */
-	node = cbe_cpu_to_node(policy->cpu);
-
-	pr_debug("got notified, event=%lu, node=%u\n", event, node);

-	if (pmi_slow_mode_limit[node] != 0) {
-		pr_debug("limiting node %d to slow mode %d\n",
-			 node, pmi_slow_mode_limit[node]);
+	policy = cpufreq_cpu_get(cpu);
+	if (!policy) {
+		pr_warn("cpufreq policy not found cpu%d\n", cpu);
+		return;
+	}

-		cpufreq_verify_within_limits(policy, 0,
-			cbe_freqs[pmi_slow_mode_limit[node]].frequency);
-	}
+	req = policy->driver_data;

-	return 0;
+	ret = dev_pm_qos_update_request(req,
+			policy->freq_table[slow_mode].frequency);
+	if (ret < 0)
+		pr_warn("Failed to update freq constraint: %d\n", ret);
+	else
+		pr_debug("limiting node %d to slow mode %d\n", node, slow_mode);
+
+	cpufreq_cpu_put(policy);
 }

-static struct notifier_block pmi_notifier_block = {
-	.notifier_call = pmi_notifier,
-};
-
 static struct pmi_handler cbe_pmi_handler = {
	.type			= PMI_TYPE_FREQ_CHANGE,
	.handle_pmi_message	= cbe_cpufreq_handle_pmi,
 };

+void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy)
+{
+	struct dev_pm_qos_request *req;
+	int ret;
+
+	if (!cbe_cpufreq_has_pmi)
+		return;
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return;
+
+	ret = dev_pm_qos_add_request(get_cpu_device(policy->cpu), req,
+				     DEV_PM_QOS_MAX_FREQUENCY,
+				     policy->freq_table[0].frequency);
+	if (ret < 0) {
+		pr_err("Failed to add freq constraint (%d)\n", ret);
+		kfree(req);
+		return;
+	}
+
+	policy->driver_data = req;
+}
+EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_init);

-static int __init cbe_cpufreq_pmi_init(void)
+void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy)
 {
-	cbe_cpufreq_has_pmi = pmi_register_handler(&cbe_pmi_handler) == 0;
+	struct dev_pm_qos_request *req = policy->driver_data;

-	if (!cbe_cpufreq_has_pmi)
-		return -ENODEV;
+	if (cbe_cpufreq_has_pmi) {
+		dev_pm_qos_remove_request(req);
+		kfree(req);
+	}
+}
+EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_exit);

-	cpufreq_register_notifier(&pmi_notifier_block, CPUFREQ_POLICY_NOTIFIER);
+void cbe_cpufreq_pmi_init(void)
+{
+	if (!pmi_register_handler(&cbe_pmi_handler))
+		cbe_cpufreq_has_pmi = true;
+}
+EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_init);

-	return 0;
+void cbe_cpufreq_pmi_exit(void)
+{
+	pmi_unregister_handler(&cbe_pmi_handler);
+	cbe_cpufreq_has_pmi = false;
 }
-device_initcall(cbe_cpufreq_pmi_init);
+EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_exit);
@@ -20,6 +20,7 @@
 #define LUT_VOLT			GENMASK(11, 0)
 #define LUT_ROW_SIZE			32
 #define CLK_HW_DIV			2
+#define LUT_TURBO_IND			1

 /* Register offsets */
 #define REG_ENABLE			0x0
@@ -34,9 +35,12 @@ static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
					unsigned int index)
 {
	void __iomem *perf_state_reg = policy->driver_data;
+	unsigned long freq = policy->freq_table[index].frequency;

	writel_relaxed(index, perf_state_reg);

+	arch_set_freq_scale(policy->related_cpus, freq,
+			    policy->cpuinfo.max_freq);
	return 0;
 }
@@ -63,6 +67,7 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
 {
	void __iomem *perf_state_reg = policy->driver_data;
	int index;
+	unsigned long freq;

	index = policy->cached_resolved_idx;
	if (index < 0)
@@ -70,16 +75,19 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,

	writel_relaxed(index, perf_state_reg);

-	return policy->freq_table[index].frequency;
+	freq = policy->freq_table[index].frequency;
+	arch_set_freq_scale(policy->related_cpus, freq,
+			    policy->cpuinfo.max_freq);
+
+	return freq;
 }

 static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
				    struct cpufreq_policy *policy,
				    void __iomem *base)
 {
-	u32 data, src, lval, i, core_count, prev_cc = 0, prev_freq = 0, freq;
+	u32 data, src, lval, i, core_count, prev_freq = 0, freq;
	u32 volt;
-	unsigned int max_cores = cpumask_weight(policy->cpus);
	struct cpufreq_frequency_table	*table;

	table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL);
@@ -102,12 +110,12 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
		else
			freq = cpu_hw_rate / 1000;

-		if (freq != prev_freq && core_count == max_cores) {
+		if (freq != prev_freq && core_count != LUT_TURBO_IND) {
			table[i].frequency = freq;
			dev_pm_opp_add(cpu_dev, freq * 1000, volt);
			dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
				freq, core_count);
-		} else {
+		} else if (core_count == LUT_TURBO_IND) {
			table[i].frequency = CPUFREQ_ENTRY_INVALID;
		}
@@ -115,14 +123,14 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
		 * Two of the same frequencies with the same core counts means
		 * end of table
		 */
-		if (i > 0 && prev_freq == freq && prev_cc == core_count) {
+		if (i > 0 && prev_freq == freq) {
			struct cpufreq_frequency_table	*prev = &table[i - 1];

			/*
			 * Only treat the last frequency that might be a boost
			 * as the boost frequency
			 */
-			if (prev_cc != max_cores) {
+			if (prev->frequency == CPUFREQ_ENTRY_INVALID) {
				prev->frequency = prev_freq;
				prev->flags = CPUFREQ_BOOST_FREQ;
				dev_pm_opp_add(cpu_dev, prev_freq * 1000, volt);
@@ -131,7 +139,6 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
			break;
		}

-		prev_cc = core_count;
		prev_freq = freq;
	}
...
// SPDX-License-Identifier: GPL-2.0
/*
* Allwinner CPUFreq nvmem based driver
*
* The sun50i-cpufreq-nvmem driver reads the efuse value from the SoC to
* provide the OPP framework with required information.
*
* Copyright (C) 2019 Yangtao Li <tiny.windzz@gmail.com>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/nvmem-consumer.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#define MAX_NAME_LEN 7
#define NVMEM_MASK 0x7
#define NVMEM_SHIFT 5
static struct platform_device *cpufreq_dt_pdev, *sun50i_cpufreq_pdev;
/**
* sun50i_cpufreq_get_efuse() - Parse and return efuse value present on SoC
* @versions: Set to the value parsed from efuse
*
 * Returns 0 on success.
*/
static int sun50i_cpufreq_get_efuse(u32 *versions)
{
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
u32 *speedbin, efuse_value;
size_t len;
int ret;
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return -ENODEV;
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return -ENOENT;
ret = of_device_is_compatible(np,
"allwinner,sun50i-h6-operating-points");
if (!ret) {
of_node_put(np);
return -ENOENT;
}
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
of_node_put(np);
if (IS_ERR(speedbin_nvmem)) {
if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
pr_err("Could not get nvmem cell: %ld\n",
PTR_ERR(speedbin_nvmem));
return PTR_ERR(speedbin_nvmem);
}
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
nvmem_cell_put(speedbin_nvmem);
if (IS_ERR(speedbin))
return PTR_ERR(speedbin);
efuse_value = (*speedbin >> NVMEM_SHIFT) & NVMEM_MASK;
switch (efuse_value) {
case 0b0001:
*versions = 1;
break;
case 0b0011:
*versions = 2;
break;
default:
/*
		 * For other values, treat the chip as bin0.
		 * The bin0 V/F table is safe to use on any good CPU.
*/
*versions = 0;
break;
}
kfree(speedbin);
return 0;
};
static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
{
struct opp_table **opp_tables;
char name[MAX_NAME_LEN];
unsigned int cpu;
u32 speed = 0;
int ret;
opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables),
GFP_KERNEL);
if (!opp_tables)
return -ENOMEM;
ret = sun50i_cpufreq_get_efuse(&speed);
if (ret)
return ret;
snprintf(name, MAX_NAME_LEN, "speed%d", speed);
for_each_possible_cpu(cpu) {
struct device *cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
ret = -ENODEV;
goto free_opp;
}
opp_tables[cpu] = dev_pm_opp_set_prop_name(cpu_dev, name);
if (IS_ERR(opp_tables[cpu])) {
ret = PTR_ERR(opp_tables[cpu]);
pr_err("Failed to set prop name\n");
goto free_opp;
}
}
cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
NULL, 0);
if (!IS_ERR(cpufreq_dt_pdev)) {
platform_set_drvdata(pdev, opp_tables);
return 0;
}
ret = PTR_ERR(cpufreq_dt_pdev);
pr_err("Failed to register platform device\n");
free_opp:
for_each_possible_cpu(cpu) {
if (IS_ERR_OR_NULL(opp_tables[cpu]))
break;
dev_pm_opp_put_prop_name(opp_tables[cpu]);
}
kfree(opp_tables);
return ret;
}
static int sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
{
struct opp_table **opp_tables = platform_get_drvdata(pdev);
unsigned int cpu;
platform_device_unregister(cpufreq_dt_pdev);
for_each_possible_cpu(cpu)
dev_pm_opp_put_prop_name(opp_tables[cpu]);
kfree(opp_tables);
return 0;
}
static struct platform_driver sun50i_cpufreq_driver = {
.probe = sun50i_cpufreq_nvmem_probe,
.remove = sun50i_cpufreq_nvmem_remove,
.driver = {
.name = "sun50i-cpufreq-nvmem",
},
};
static const struct of_device_id sun50i_cpufreq_match_list[] = {
{ .compatible = "allwinner,sun50i-h6" },
{}
};
static const struct of_device_id *sun50i_cpufreq_match_node(void)
{
const struct of_device_id *match;
struct device_node *np;
np = of_find_node_by_path("/");
match = of_match_node(sun50i_cpufreq_match_list, np);
of_node_put(np);
return match;
}
/*
* Since the driver depends on nvmem drivers, which may return EPROBE_DEFER,
* all the real activity is done in the probe, which may be deferred as well.
* The init here only registers the driver and the platform device.
*/
static int __init sun50i_cpufreq_init(void)
{
const struct of_device_id *match;
int ret;
match = sun50i_cpufreq_match_node();
if (!match)
return -ENODEV;
ret = platform_driver_register(&sun50i_cpufreq_driver);
if (unlikely(ret < 0))
return ret;
sun50i_cpufreq_pdev =
platform_device_register_simple("sun50i-cpufreq-nvmem",
-1, NULL, 0);
ret = PTR_ERR_OR_ZERO(sun50i_cpufreq_pdev);
if (ret == 0)
return 0;
platform_driver_unregister(&sun50i_cpufreq_driver);
return ret;
}
module_init(sun50i_cpufreq_init);
static void __exit sun50i_cpufreq_exit(void)
{
platform_device_unregister(sun50i_cpufreq_pdev);
platform_driver_unregister(&sun50i_cpufreq_driver);
}
module_exit(sun50i_cpufreq_exit);
MODULE_DESCRIPTION("Sun50i-h6 cpufreq driver");
MODULE_LICENSE("GPL v2");
@@ -77,6 +77,7 @@ static unsigned long dra7_efuse_xlate(struct ti_cpufreq_data *opp_data,
 	case DRA7_EFUSE_HAS_ALL_MPU_OPP:
 	case DRA7_EFUSE_HAS_HIGH_MPU_OPP:
 		calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP;
+		/* Fall through */
 	case DRA7_EFUSE_HAS_OD_MPU_OPP:
 		calculated_efuse |= DRA7_EFUSE_OD_MPU_OPP;
 	}
...
@@ -33,6 +33,17 @@ config CPU_IDLE_GOV_TEO
 	  Some workloads benefit from using it and it generally should be safe
 	  to use. Say Y here if you are not happy with the alternatives.
 
+config CPU_IDLE_GOV_HALTPOLL
+	bool "Haltpoll governor (for virtualized systems)"
+	depends on KVM_GUEST
+	help
+	  This governor implements haltpoll idle state selection, to be
+	  used in conjunction with the haltpoll cpuidle driver, allowing
+	  the guest to poll for a certain amount of time before entering
+	  an idle state.
+
+	  Some virtualized workloads benefit from using it.
+
 config DT_IDLE_STATES
 	bool
 
@@ -51,6 +62,15 @@ depends on PPC
 source "drivers/cpuidle/Kconfig.powerpc"
 endmenu
 
+config HALTPOLL_CPUIDLE
+	tristate "Halt poll cpuidle driver"
+	depends on X86 && KVM_GUEST
+	default y
+	help
+	  This option enables the haltpoll cpuidle driver, which allows
+	  polling before halting in the guest (more efficient than polling
+	  in the host via halt_poll_ns for some scenarios).
+
 endif
 
 config ARCH_NEEDS_CPU_IDLE_COUPLED
...
@@ -7,6 +7,7 @@ obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
 obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o
 obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o
+obj-$(CONFIG_HALTPOLL_CPUIDLE) += cpuidle-haltpoll.o
 
 ##################################################################################
 # ARM SoC drivers
...
// SPDX-License-Identifier: GPL-2.0
/*
* cpuidle driver for haltpoll governor.
*
* Copyright 2019 Red Hat, Inc. and/or its affiliates.
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Authors: Marcelo Tosatti <mtosatti@redhat.com>
*/
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/module.h>
#include <linux/sched/idle.h>
#include <linux/kvm_para.h>
#include <linux/cpuidle_haltpoll.h>
static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
static enum cpuhp_state haltpoll_hp_state;
static int default_enter_idle(struct cpuidle_device *dev,
struct cpuidle_driver *drv, int index)
{
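/*
 * If a reschedule became pending while this CPU was polling,
 * skip the halt: re-enable interrupts and let the idle loop
 * handle the wakeup.
 */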
if (current_clr_polling_and_test()) {
local_irq_enable();
return index;
}
default_idle();
return index;
}
static struct cpuidle_driver haltpoll_driver = {
.name = "haltpoll",
.governor = "haltpoll",
.states = {
{ /* entry 0 is for polling */ },
{
.enter = default_enter_idle,
.exit_latency = 1,
.target_residency = 1,
.power_usage = -1,
.name = "haltpoll idle",
.desc = "default architecture idle",
},
},
.safe_state_index = 0,
.state_count = 2,
};
static int haltpoll_cpu_online(unsigned int cpu)
{
struct cpuidle_device *dev;
dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);
if (!dev->registered) {
dev->cpu = cpu;
if (cpuidle_register_device(dev)) {
pr_notice("cpuidle_register_device %d failed!\n", cpu);
return -EIO;
}
arch_haltpoll_enable(cpu);
}
return 0;
}
static int haltpoll_cpu_offline(unsigned int cpu)
{
struct cpuidle_device *dev;
dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);
if (dev->registered) {
arch_haltpoll_disable(cpu);
cpuidle_unregister_device(dev);
}
return 0;
}
static void haltpoll_uninit(void)
{
if (haltpoll_hp_state)
cpuhp_remove_state(haltpoll_hp_state);
cpuidle_unregister_driver(&haltpoll_driver);
free_percpu(haltpoll_cpuidle_devices);
haltpoll_cpuidle_devices = NULL;
}
static int __init haltpoll_init(void)
{
int ret;
struct cpuidle_driver *drv = &haltpoll_driver;
cpuidle_poll_state_init(drv);
if (!kvm_para_available() ||
!kvm_para_has_hint(KVM_HINTS_REALTIME))
return -ENODEV;
ret = cpuidle_register_driver(drv);
if (ret < 0)
return ret;
haltpoll_cpuidle_devices = alloc_percpu(struct cpuidle_device);
if (haltpoll_cpuidle_devices == NULL) {
cpuidle_unregister_driver(drv);
return -ENOMEM;
}
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpuidle/haltpoll:online",
haltpoll_cpu_online, haltpoll_cpu_offline);
if (ret < 0) {
haltpoll_uninit();
} else {
haltpoll_hp_state = ret;
ret = 0;
}
return ret;
}
static void __exit haltpoll_exit(void)
{
haltpoll_uninit();
}
module_init(haltpoll_init);
module_exit(haltpoll_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Marcelo Tosatti <mtosatti@redhat.com>");
@@ -361,6 +361,36 @@ void cpuidle_reflect(struct cpuidle_device *dev, int index)
 		cpuidle_curr_governor->reflect(dev, index);
 }
 
+/**
+ * cpuidle_poll_time - return amount of time to poll for,
+ *                     governors can override dev->poll_limit_ns if necessary
+ *
+ * @drv:   the cpuidle driver tied with the cpu
+ * @dev:   the cpuidle device
+ *
+ */
+u64 cpuidle_poll_time(struct cpuidle_driver *drv,
+		      struct cpuidle_device *dev)
+{
+	int i;
+	u64 limit_ns;
+
+	if (dev->poll_limit_ns)
+		return dev->poll_limit_ns;
+
+	limit_ns = TICK_NSEC;
+	for (i = 1; i < drv->state_count; i++) {
+		if (drv->states[i].disabled || dev->states_usage[i].disable)
+			continue;
+
+		limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
+	}
+
+	dev->poll_limit_ns = limit_ns;
+
+	return dev->poll_limit_ns;
+}
+
 /**
  * cpuidle_install_idle_handler - installs the cpuidle idle loop handler
  */
...
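Note that the loop in cpuidle_poll_time() above has no early break, so the cached limit ends up as the target residency of the deepest enabled state (or TICK_NSEC if none is enabled). A standalone sketch with invented residencies:
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_USEC 1000ULL
#define TICK_NSEC     1000000ULL	/* assumes HZ=1000 */

int main(void)
{
	/* hypothetical target residencies (us) of states 1..3; state 2 disabled */
	unsigned int residency[] = { 0, 2, 20, 200 };
	int disabled[] = { 0, 0, 1, 0 };
	uint64_t limit_ns = TICK_NSEC;
	int i;

	for (i = 1; i < 4; i++) {
		if (disabled[i])
			continue;
		limit_ns = (uint64_t)residency[i] * NSEC_PER_USEC;
	}

	/* no break in the loop, so the deepest enabled state wins: 200000 ns */
	printf("poll limit = %llu ns\n", (unsigned long long)limit_ns);
	return 0;
}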
@@ -9,6 +9,7 @@
 /* For internal use only */
 extern char param_governor[];
 extern struct cpuidle_governor *cpuidle_curr_governor;
+extern struct cpuidle_governor *cpuidle_prev_governor;
 extern struct list_head cpuidle_governors;
 extern struct list_head cpuidle_detected_devices;
 extern struct mutex cpuidle_lock;
@@ -22,6 +23,7 @@ extern void cpuidle_install_idle_handler(void);
 extern void cpuidle_uninstall_idle_handler(void);
 
 /* governors */
+extern struct cpuidle_governor *cpuidle_find_governor(const char *str);
 extern int cpuidle_switch_governor(struct cpuidle_governor *gov);
 
 /* sysfs */
...
@@ -254,12 +254,25 @@ static void __cpuidle_unregister_driver(struct cpuidle_driver *drv)
  */
 int cpuidle_register_driver(struct cpuidle_driver *drv)
 {
+	struct cpuidle_governor *gov;
 	int ret;
 
 	spin_lock(&cpuidle_driver_lock);
 	ret = __cpuidle_register_driver(drv);
 	spin_unlock(&cpuidle_driver_lock);
 
+	if (!ret && !strlen(param_governor) && drv->governor &&
+	    (cpuidle_get_driver() == drv)) {
+		mutex_lock(&cpuidle_lock);
+		gov = cpuidle_find_governor(drv->governor);
+		if (gov) {
+			cpuidle_prev_governor = cpuidle_curr_governor;
+			if (cpuidle_switch_governor(gov) < 0)
+				cpuidle_prev_governor = NULL;
+		}
+		mutex_unlock(&cpuidle_lock);
+	}
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(cpuidle_register_driver);
@@ -274,9 +287,21 @@ EXPORT_SYMBOL_GPL(cpuidle_register_driver);
  */
 void cpuidle_unregister_driver(struct cpuidle_driver *drv)
 {
+	bool enabled = (cpuidle_get_driver() == drv);
+
 	spin_lock(&cpuidle_driver_lock);
 	__cpuidle_unregister_driver(drv);
 	spin_unlock(&cpuidle_driver_lock);
+
+	if (!enabled)
+		return;
+
+	mutex_lock(&cpuidle_lock);
+	if (cpuidle_prev_governor) {
+		if (!cpuidle_switch_governor(cpuidle_prev_governor))
+			cpuidle_prev_governor = NULL;
+	}
+	mutex_unlock(&cpuidle_lock);
 }
 EXPORT_SYMBOL_GPL(cpuidle_unregister_driver);
...
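The two hunks above implement a save/restore policy for the current governor. A minimal sketch of that policy with the kernel types stubbed out; all names here are illustrative, not the kernel's:
#include <stdio.h>
#include <stddef.h>

static const char *curr_governor = "menu";
static const char *prev_governor;

/* Driver registration prefers the driver's governor, remembering the old one. */
static void register_driver(const char *drv_governor)
{
	if (drv_governor) {
		prev_governor = curr_governor;
		curr_governor = drv_governor;
	}
}

/* Unregistration restores whatever was in use before. */
static void unregister_driver(void)
{
	if (prev_governor) {
		curr_governor = prev_governor;
		prev_governor = NULL;
	}
}

int main(void)
{
	register_driver("haltpoll");
	printf("after register: %s\n", curr_governor);	/* haltpoll */
	unregister_driver();
	printf("after unregister: %s\n", curr_governor);	/* menu */
	return 0;
}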
@@ -20,14 +20,15 @@ char param_governor[CPUIDLE_NAME_LEN];
 
 LIST_HEAD(cpuidle_governors);
 struct cpuidle_governor *cpuidle_curr_governor;
+struct cpuidle_governor *cpuidle_prev_governor;
 
 /**
- * __cpuidle_find_governor - finds a governor of the specified name
+ * cpuidle_find_governor - finds a governor of the specified name
  * @str: the name
  *
  * Must be called with cpuidle_lock acquired.
  */
-static struct cpuidle_governor * __cpuidle_find_governor(const char *str)
+struct cpuidle_governor *cpuidle_find_governor(const char *str)
 {
 	struct cpuidle_governor *gov;
 
@@ -87,7 +88,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
 		return -ENODEV;
 
 	mutex_lock(&cpuidle_lock);
-	if (__cpuidle_find_governor(gov->name) == NULL) {
+	if (cpuidle_find_governor(gov->name) == NULL) {
 		ret = 0;
 		list_add_tail(&gov->governor_list, &cpuidle_governors);
 		if (!cpuidle_curr_governor ||
...
@@ -6,3 +6,4 @@
 obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
 obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
 obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
+obj-$(CONFIG_CPU_IDLE_GOV_HALTPOLL) += haltpoll.o
// SPDX-License-Identifier: GPL-2.0
/*
* haltpoll.c - haltpoll idle governor
*
* Copyright 2019 Red Hat, Inc. and/or its affiliates.
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Authors: Marcelo Tosatti <mtosatti@redhat.com>
*/
#include <linux/kernel.h>
#include <linux/cpuidle.h>
#include <linux/time.h>
#include <linux/ktime.h>
#include <linux/hrtimer.h>
#include <linux/tick.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/kvm_para.h>
static unsigned int guest_halt_poll_ns __read_mostly = 200000;
module_param(guest_halt_poll_ns, uint, 0644);
/* division factor to shrink halt_poll_ns */
static unsigned int guest_halt_poll_shrink __read_mostly = 2;
module_param(guest_halt_poll_shrink, uint, 0644);
/* multiplication factor to grow per-cpu poll_limit_ns */
static unsigned int guest_halt_poll_grow __read_mostly = 2;
module_param(guest_halt_poll_grow, uint, 0644);
/* value in us to start growing per-cpu halt_poll_ns */
static unsigned int guest_halt_poll_grow_start __read_mostly = 50000;
module_param(guest_halt_poll_grow_start, uint, 0644);
/* allow shrinking guest halt poll */
static bool guest_halt_poll_allow_shrink __read_mostly = true;
module_param(guest_halt_poll_allow_shrink, bool, 0644);
/**
* haltpoll_select - selects the next idle state to enter
* @drv: cpuidle driver containing state data
* @dev: the CPU
* @stop_tick: indication on whether or not to stop the tick
*/
static int haltpoll_select(struct cpuidle_driver *drv,
struct cpuidle_device *dev,
bool *stop_tick)
{
int latency_req = cpuidle_governor_latency_req(dev->cpu);
if (!drv->state_count || latency_req == 0) {
*stop_tick = false;
return 0;
}
if (dev->poll_limit_ns == 0)
return 1;
/* Last state was poll? */
if (dev->last_state_idx == 0) {
/* Halt if no event occurred on poll window */
if (dev->poll_time_limit)
return 1;
*stop_tick = false;
/* Otherwise, poll again */
return 0;
}
*stop_tick = false;
/* Last state was halt: poll */
return 0;
}
static void adjust_poll_limit(struct cpuidle_device *dev, unsigned int block_us)
{
unsigned int val;
u64 block_ns = block_us * NSEC_PER_USEC;

/* Grow poll_limit_ns if
 * poll_limit_ns < block_ns <= guest_halt_poll_ns
 */
if (block_ns > dev->poll_limit_ns && block_ns <= guest_halt_poll_ns) {
val = dev->poll_limit_ns * guest_halt_poll_grow;
if (val < guest_halt_poll_grow_start)
val = guest_halt_poll_grow_start;
if (val > guest_halt_poll_ns)
val = guest_halt_poll_ns;
dev->poll_limit_ns = val;
} else if (block_ns > guest_halt_poll_ns &&
guest_halt_poll_allow_shrink) {
unsigned int shrink = guest_halt_poll_shrink;
val = dev->poll_limit_ns;
if (shrink == 0)
val = 0;
else
val /= shrink;
dev->poll_limit_ns = val;
}
}
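To make the grow/shrink arithmetic above concrete, here is a standalone simulation of adjust_poll_limit() using the default module parameters, with shrinking allowed (the default); the wakeup pattern is invented:
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_USEC 1000ULL

static unsigned int guest_halt_poll_ns = 200000;
static unsigned int guest_halt_poll_shrink = 2;
static unsigned int guest_halt_poll_grow = 2;
static unsigned int guest_halt_poll_grow_start = 50000;

static unsigned int poll_limit_ns;	/* per-cpu in the kernel */

static void adjust(unsigned int block_us)
{
	uint64_t block_ns = block_us * NSEC_PER_USEC;
	unsigned int val;

	if (block_ns > poll_limit_ns && block_ns <= guest_halt_poll_ns) {
		val = poll_limit_ns * guest_halt_poll_grow;
		if (val < guest_halt_poll_grow_start)
			val = guest_halt_poll_grow_start;
		if (val > guest_halt_poll_ns)
			val = guest_halt_poll_ns;
		poll_limit_ns = val;
	} else if (block_ns > guest_halt_poll_ns) {
		poll_limit_ns /= guest_halt_poll_shrink;
	}
}

int main(void)
{
	/* short halts grow the window: 0 -> 50000 -> 100000 -> 200000 ns */
	unsigned int pattern[] = { 60, 120, 190, 5000 };
	int i;

	for (i = 0; i < 4; i++) {
		adjust(pattern[i]);
		printf("block %uus -> poll_limit %u ns\n",
		       pattern[i], poll_limit_ns);
	}
	/* the long 5000us block shrinks the window back to 100000 ns */
	return 0;
}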
/**
* haltpoll_reflect - update variables and update poll time
* @dev: the CPU
* @index: the index of actual entered state
*/
static void haltpoll_reflect(struct cpuidle_device *dev, int index)
{
dev->last_state_idx = index;
if (index != 0)
adjust_poll_limit(dev, dev->last_residency);
}
/**
* haltpoll_enable_device - scans a CPU's states and does setup
* @drv: cpuidle driver
* @dev: the CPU
*/
static int haltpoll_enable_device(struct cpuidle_driver *drv,
struct cpuidle_device *dev)
{
dev->poll_limit_ns = 0;
return 0;
}
static struct cpuidle_governor haltpoll_governor = {
.name = "haltpoll",
.rating = 9,
.enable = haltpoll_enable_device,
.select = haltpoll_select,
.reflect = haltpoll_reflect,
};
static int __init init_haltpoll(void)
{
if (kvm_para_available())
return cpuidle_register_governor(&haltpoll_governor);
return 0;
}
postcore_initcall(init_haltpoll);
@@ -38,7 +38,6 @@ struct ladder_device_state {
 
 struct ladder_device {
 	struct ladder_device_state states[CPUIDLE_STATE_MAX];
-	int last_state_idx;
 };
 
 static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
@@ -49,12 +48,13 @@ static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
  * @old_idx: the current state index
  * @new_idx: the new target state index
  */
-static inline void ladder_do_selection(struct ladder_device *ldev,
+static inline void ladder_do_selection(struct cpuidle_device *dev,
+				       struct ladder_device *ldev,
 				       int old_idx, int new_idx)
 {
 	ldev->states[old_idx].stats.promotion_count = 0;
 	ldev->states[old_idx].stats.demotion_count = 0;
-	ldev->last_state_idx = new_idx;
+	dev->last_state_idx = new_idx;
 }
 
 /**
@@ -68,13 +68,13 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 {
 	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
 	struct ladder_device_state *last_state;
-	int last_residency, last_idx = ldev->last_state_idx;
+	int last_residency, last_idx = dev->last_state_idx;
 	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	int latency_req = cpuidle_governor_latency_req(dev->cpu);
 
 	/* Special case when user has set very strict latency requirement */
 	if (unlikely(latency_req == 0)) {
-		ladder_do_selection(ldev, last_idx, 0);
+		ladder_do_selection(dev, ldev, last_idx, 0);
 		return 0;
 	}
@@ -91,7 +91,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 		last_state->stats.promotion_count++;
 		last_state->stats.demotion_count = 0;
 		if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) {
-			ladder_do_selection(ldev, last_idx, last_idx + 1);
+			ladder_do_selection(dev, ldev, last_idx, last_idx + 1);
 			return last_idx + 1;
 		}
 	}
@@ -107,7 +107,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 			if (drv->states[i].exit_latency <= latency_req)
 				break;
 		}
-		ladder_do_selection(ldev, last_idx, i);
+		ladder_do_selection(dev, ldev, last_idx, i);
 		return i;
 	}
@@ -116,7 +116,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 		last_state->stats.demotion_count++;
 		last_state->stats.promotion_count = 0;
 		if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) {
-			ladder_do_selection(ldev, last_idx, last_idx - 1);
+			ladder_do_selection(dev, ldev, last_idx, last_idx - 1);
 			return last_idx - 1;
 		}
 	}
@@ -139,7 +139,7 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
 	struct ladder_device_state *lstate;
 	struct cpuidle_state *state;
 
-	ldev->last_state_idx = first_idx;
+	dev->last_state_idx = first_idx;
 
 	for (i = first_idx; i < drv->state_count; i++) {
 		state = &drv->states[i];
@@ -167,9 +167,8 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
  */
 static void ladder_reflect(struct cpuidle_device *dev, int index)
 {
-	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
-
 	if (index > 0)
-		ldev->last_state_idx = index;
+		dev->last_state_idx = index;
 }
 
 static struct cpuidle_governor ladder_governor = {
...
@@ -117,7 +117,6 @@
  */
 struct menu_device {
-	int		last_state_idx;
 	int		needs_update;
 	int		tick_wakeup;
 
@@ -302,9 +301,10 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 	    !drv->states[0].disabled && !dev->states_usage[0].disable)) {
 		/*
 		 * In this case state[0] will be used no matter what, so return
-		 * it right away and keep the tick running.
+		 * it right away and keep the tick running if state[0] is a
+		 * polling one.
 		 */
-		*stop_tick = false;
+		*stop_tick = !(drv->states[0].flags & CPUIDLE_FLAG_POLLING);
 		return 0;
 	}
 
@@ -395,16 +395,9 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 			return idx;
 		}
 
-		if (s->exit_latency > latency_req) {
-			/*
-			 * If we break out of the loop for latency reasons, use
-			 * the target residency of the selected state as the
-			 * expected idle duration so that the tick is retained
-			 * as long as that target residency is low enough.
-			 */
-			predicted_us = drv->states[idx].target_residency;
+		if (s->exit_latency > latency_req)
 			break;
-		}
+
 		idx = i;
 	}
 
@@ -455,7 +448,7 @@ static void menu_reflect(struct cpuidle_device *dev, int index)
 {
 	struct menu_device *data = this_cpu_ptr(&menu_devices);
 
-	data->last_state_idx = index;
+	dev->last_state_idx = index;
 	data->needs_update = 1;
 	data->tick_wakeup = tick_nohz_idle_got_tick();
 }
 
@@ -468,7 +461,7 @@ static void menu_reflect(struct cpuidle_device *dev, int index)
 static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
 	struct menu_device *data = this_cpu_ptr(&menu_devices);
-	int last_idx = data->last_state_idx;
+	int last_idx = dev->last_state_idx;
 	struct cpuidle_state *target = &drv->states[last_idx];
 	unsigned int measured_us;
 	unsigned int new_factor;
...
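The new *stop_tick assignment above keeps the scheduler tick running only when state[0] is a polling state. A toy illustration; the flag value matches the kernel's BIT(0) definition:
#include <stdbool.h>
#include <stdio.h>

#define CPUIDLE_FLAG_POLLING 0x1

int main(void)
{
	unsigned int state0_flags = CPUIDLE_FLAG_POLLING;
	bool stop_tick = !(state0_flags & CPUIDLE_FLAG_POLLING);

	/* polling state 0: keep the tick ticking -> "false" */
	printf("stop_tick = %s\n", stop_tick ? "true" : "false");

	state0_flags = 0;	/* a real (halting) shallowest state */
	stop_tick = !(state0_flags & CPUIDLE_FLAG_POLLING);

	/* halting state 0: the tick may be stopped -> "true" */
	printf("stop_tick = %s\n", stop_tick ? "true" : "false");
	return 0;
}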
@@ -96,7 +96,6 @@ struct teo_idle_state {
 * @time_span_ns:      Time between idle state selection and post-wakeup update.
 * @sleep_length_ns:   Time till the closest timer event (at the selection time).
 * @states:            Idle states data corresponding to this CPU.
- * @last_state:        Idle state entered by the CPU last time.
 * @interval_idx:      Index of the most recent saved idle interval.
 * @intervals:         Saved idle duration values.
 */
@@ -104,7 +103,6 @@ struct teo_cpu {
 	u64 time_span_ns;
 	u64 sleep_length_ns;
 	struct teo_idle_state states[CPUIDLE_STATE_MAX];
-	int last_state;
 	int interval_idx;
 	unsigned int intervals[INTERVALS];
 };
@@ -125,12 +123,15 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 
 	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
 		/*
-		 * One of the safety nets has triggered or this was a timer
-		 * wakeup (or equivalent).
+		 * One of the safety nets has triggered or the wakeup was close
+		 * enough to the closest timer event expected at the idle state
+		 * selection time to be discarded.
 		 */
-		measured_us = sleep_length_us;
+		measured_us = UINT_MAX;
 	} else {
-		unsigned int lat = drv->states[cpu_data->last_state].exit_latency;
+		unsigned int lat;
+
+		lat = drv->states[dev->last_state_idx].exit_latency;
 
 		measured_us = ktime_to_us(cpu_data->time_span_ns);
 
 		/*
@@ -188,15 +189,6 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 		cpu_data->states[idx_timer].hits = hits;
 	}
 
-	/*
-	 * If the total time span between idle state selection and the "reflect"
-	 * callback is greater than or equal to the sleep length determined at
-	 * the idle state selection time, the wakeup is likely to be due to a
-	 * timer event.
-	 */
-	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
-		measured_us = UINT_MAX;
-
 	/*
 	 * Save idle duration values corresponding to non-timer wakeups for
 	 * pattern detection.
@@ -242,12 +234,12 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
 	int latency_req = cpuidle_governor_latency_req(dev->cpu);
 	unsigned int duration_us, count;
-	int max_early_idx, idx, i;
+	int max_early_idx, constraint_idx, idx, i;
 	ktime_t delta_tick;
 
-	if (cpu_data->last_state >= 0) {
+	if (dev->last_state_idx >= 0) {
 		teo_update(drv, dev);
-		cpu_data->last_state = -1;
+		dev->last_state_idx = -1;
 	}
 
 	cpu_data->time_span_ns = local_clock();
@@ -257,6 +249,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 
 	count = 0;
 	max_early_idx = -1;
+	constraint_idx = drv->state_count;
 	idx = -1;
 
 	for (i = 0; i < drv->state_count; i++) {
@@ -286,16 +279,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 		if (s->target_residency > duration_us)
 			break;
 
-		if (s->exit_latency > latency_req) {
-			/*
-			 * If we break out of the loop for latency reasons, use
-			 * the target residency of the selected state as the
-			 * expected idle duration to avoid stopping the tick
-			 * as long as that target residency is low enough.
-			 */
-			duration_us = drv->states[idx].target_residency;
-			goto refine;
-		}
+		if (s->exit_latency > latency_req && constraint_idx > i)
+			constraint_idx = i;
 
 		idx = i;
@@ -321,7 +306,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 			duration_us = drv->states[idx].target_residency;
 	}
 
-refine:
+	/*
+	 * If there is a latency constraint, it may be necessary to use a
+	 * shallower idle state than the one selected so far.
+	 */
+	if (constraint_idx < idx)
+		idx = constraint_idx;
+
 	if (idx < 0) {
 		idx = 0; /* No states enabled. Must use 0. */
 	} else if (idx > 0) {
@@ -331,13 +322,12 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 
 		/*
 		 * Count and sum the most recent idle duration values less than
-		 * the target residency of the state selected so far, find the
-		 * max.
+		 * the current expected idle duration value.
 		 */
 		for (i = 0; i < INTERVALS; i++) {
 			unsigned int val = cpu_data->intervals[i];
 
-			if (val >= drv->states[idx].target_residency)
+			if (val >= duration_us)
 				continue;
 
 			count++;
@@ -356,8 +346,10 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
 		 * would be too shallow.
 		 */
 		if (!(tick_nohz_tick_stopped() && avg_us < TICK_USEC)) {
-			idx = teo_find_shallower_state(drv, dev, idx, avg_us);
 			duration_us = avg_us;
+			if (drv->states[idx].target_residency > avg_us)
+				idx = teo_find_shallower_state(drv, dev,
+							       idx, avg_us);
 		}
 	}
 
@@ -394,7 +386,7 @@ static void teo_reflect(struct cpuidle_device *dev, int state)
 {
 	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
 
-	cpu_data->last_state = state;
+	dev->last_state_idx = state;
 
 	/*
 	 * If the wakeup was not "natural", but triggered by one of the safety
 	 * nets, assume that the CPU might have been idle for the entire sleep
...
@@ -20,16 +20,9 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,
 	local_irq_enable();
 	if (!current_set_polling_and_test()) {
 		unsigned int loop_count = 0;
-		u64 limit = TICK_NSEC;
-		int i;
+		u64 limit;
 
-		for (i = 1; i < drv->state_count; i++) {
-			if (drv->states[i].disabled || dev->states_usage[i].disable)
-				continue;
-
-			limit = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
-			break;
-		}
+		limit = cpuidle_poll_time(drv, dev);
 
 		while (!need_resched()) {
 			cpu_relax();
...
@@ -334,6 +334,7 @@ struct cpuidle_state_kobj {
 	struct cpuidle_state_usage *state_usage;
 	struct completion kobj_unregister;
 	struct kobject kobj;
+	struct cpuidle_device *device;
 };
 
 #ifdef CONFIG_SUSPEND
@@ -391,6 +392,7 @@ static inline void cpuidle_remove_s2idle_attr_group(struct cpuidle_state_kobj *k
 #define kobj_to_state_obj(k) container_of(k, struct cpuidle_state_kobj, kobj)
 #define kobj_to_state(k) (kobj_to_state_obj(k)->state)
 #define kobj_to_state_usage(k) (kobj_to_state_obj(k)->state_usage)
+#define kobj_to_device(k) (kobj_to_state_obj(k)->device)
 #define attr_to_stateattr(a) container_of(a, struct cpuidle_state_attr, attr)
 
 static ssize_t cpuidle_state_show(struct kobject *kobj, struct attribute *attr,
@@ -414,10 +416,14 @@ static ssize_t cpuidle_state_store(struct kobject *kobj, struct attribute *attr,
 	struct cpuidle_state *state = kobj_to_state(kobj);
 	struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj);
 	struct cpuidle_state_attr *cattr = attr_to_stateattr(attr);
+	struct cpuidle_device *dev = kobj_to_device(kobj);
 
 	if (cattr->store)
 		ret = cattr->store(state, state_usage, buf, size);
 
+	/* reset poll time cache */
+	dev->poll_limit_ns = 0;
+
 	return ret;
 }
 
@@ -468,6 +474,7 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
 		}
 		kobj->state = &drv->states[i];
 		kobj->state_usage = &device->states_usage[i];
+		kobj->device = device;
 		init_completion(&kobj->kobj_unregister);
 
 		ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle,
...
@@ -93,15 +93,28 @@ config ARM_EXYNOS_BUS_DEVFREQ
 	  This does not yet operate with optimal voltages.
 
 config ARM_TEGRA_DEVFREQ
-	tristate "Tegra DEVFREQ Driver"
-	depends on ARCH_TEGRA_124_SOC
-	select DEVFREQ_GOV_SIMPLE_ONDEMAND
+	tristate "NVIDIA Tegra30/114/124/210 DEVFREQ Driver"
+	depends on ARCH_TEGRA_3x_SOC || ARCH_TEGRA_114_SOC || \
+		ARCH_TEGRA_132_SOC || ARCH_TEGRA_124_SOC || \
+		ARCH_TEGRA_210_SOC || \
+		COMPILE_TEST
 	select PM_OPP
 	help
 	  This adds the DEVFREQ driver for the Tegra family of SoCs.
 	  It reads ACTMON counters of memory controllers and adjusts the
 	  operating frequencies and voltages with OPP support.
 
+config ARM_TEGRA20_DEVFREQ
+	tristate "NVIDIA Tegra20 DEVFREQ Driver"
+	depends on (TEGRA_MC && TEGRA20_EMC) || COMPILE_TEST
+	depends on COMMON_CLK
+	select DEVFREQ_GOV_SIMPLE_ONDEMAND
+	select PM_OPP
+	help
+	  This adds the DEVFREQ driver for the Tegra20 family of SoCs.
+	  It reads Memory Controller counters and adjusts the operating
+	  frequencies and voltages with OPP support.
+
 config ARM_RK3399_DMC_DEVFREQ
 	tristate "ARM RK3399 DMC DEVFREQ Driver"
 	depends on ARCH_ROCKCHIP
...
@@ -10,7 +10,8 @@ obj-$(CONFIG_DEVFREQ_GOV_PASSIVE) += governor_passive.o
 
 # DEVFREQ Drivers
 obj-$(CONFIG_ARM_EXYNOS_BUS_DEVFREQ) += exynos-bus.o
 obj-$(CONFIG_ARM_RK3399_DMC_DEVFREQ) += rk3399_dmc.o
-obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra-devfreq.o
+obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra30-devfreq.o
+obj-$(CONFIG_ARM_TEGRA20_DEVFREQ) += tegra20-devfreq.o
 
 # DEVFREQ Event Drivers
 obj-$(CONFIG_PM_DEVFREQ_EVENT) += event/
@@ -254,7 +254,7 @@ static struct devfreq_governor *try_then_request_governor(const char *name)
 		/* Restore previous state before return */
 		mutex_lock(&devfreq_list_lock);
 		if (err)
-			return ERR_PTR(err);
+			return (err < 0) ? ERR_PTR(err) : ERR_PTR(-EINVAL);
 
 		governor = find_devfreq_governor(name);
 	}
@@ -402,7 +402,7 @@ static void devfreq_monitor(struct work_struct *work)
 * devfreq_monitor_start() - Start load monitoring of devfreq instance
 * @devfreq: the devfreq instance.
 *
- * Helper function for starting devfreq device load monitoing. By
+ * Helper function for starting devfreq device load monitoring. By
 * default delayed work based monitoring is supported. Function
 * to be called from governor in response to DEVFREQ_GOV_START
 * event when device is added to devfreq framework.
@@ -420,7 +420,7 @@ EXPORT_SYMBOL(devfreq_monitor_start);
 * devfreq_monitor_stop() - Stop load monitoring of a devfreq instance
 * @devfreq: the devfreq instance.
 *
- * Helper function to stop devfreq device load monitoing. Function
+ * Helper function to stop devfreq device load monitoring. Function
 * to be called from governor in response to DEVFREQ_GOV_STOP
 * event when device is removed from devfreq framework.
 */
@@ -434,7 +434,7 @@ EXPORT_SYMBOL(devfreq_monitor_stop);
 * devfreq_monitor_suspend() - Suspend load monitoring of a devfreq instance
 * @devfreq: the devfreq instance.
 *
- * Helper function to suspend devfreq device load monitoing. Function
+ * Helper function to suspend devfreq device load monitoring. Function
 * to be called from governor in response to DEVFREQ_GOV_SUSPEND
 * event or when polling interval is set to zero.
 *
@@ -461,7 +461,7 @@ EXPORT_SYMBOL(devfreq_monitor_suspend);
 * devfreq_monitor_resume() - Resume load monitoring of a devfreq instance
 * @devfreq: the devfreq instance.
 *
- * Helper function to resume devfreq device load monitoing. Function
+ * Helper function to resume devfreq device load monitoring. Function
 * to be called from governor in response to DEVFREQ_GOV_RESUME
 * event or when polling interval is set to non-zero.
 */
@@ -867,7 +867,7 @@ EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_phandle);
 /**
 * devm_devfreq_remove_device() - Resource-managed devfreq_remove_device()
- * @dev:	the device to add devfreq feature.
+ * @dev:	the device from which to remove devfreq feature.
 * @devfreq:	the devfreq instance to be removed
 */
 void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq)
...
@@ -13,6 +13,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <linux/suspend.h>
@@ -20,6 +21,11 @@
 
 #include "exynos-ppmu.h"
 
+enum exynos_ppmu_type {
+	EXYNOS_TYPE_PPMU,
+	EXYNOS_TYPE_PPMU_V2,
+};
+
 struct exynos_ppmu_data {
 	struct clk *clk;
 };
@@ -33,6 +39,7 @@ struct exynos_ppmu {
 	struct regmap *regmap;
 
 	struct exynos_ppmu_data ppmu;
+	enum exynos_ppmu_type ppmu_type;
 };
 
 #define PPMU_EVENT(name) \
@@ -86,6 +93,12 @@ static struct __exynos_ppmu_events {
 	PPMU_EVENT(d1-cpu),
 	PPMU_EVENT(d1-general),
 	PPMU_EVENT(d1-rt),
+
+	/* For Exynos5422 SoC */
+	PPMU_EVENT(dmc0_0),
+	PPMU_EVENT(dmc0_1),
+	PPMU_EVENT(dmc1_0),
+	PPMU_EVENT(dmc1_1),
 };
 
 static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev)
@@ -151,9 +164,9 @@ static int exynos_ppmu_set_event(struct devfreq_event_dev *edev)
 	if (ret < 0)
 		return ret;
 
-	/* Set the event of Read/Write data count */
+	/* Set the event of proper data type monitoring */
 	ret = regmap_write(info->regmap, PPMU_BEVTxSEL(id),
-			   PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT);
+			   edev->desc->event_type);
 	if (ret < 0)
 		return ret;
 
@@ -365,23 +378,11 @@ static int exynos_ppmu_v2_set_event(struct devfreq_event_dev *edev)
 	if (ret < 0)
 		return ret;
 
-	/* Set the event of Read/Write data count */
-	switch (id) {
-	case PPMU_PMNCNT0:
-	case PPMU_PMNCNT1:
-	case PPMU_PMNCNT2:
-		ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
-				PPMU_V2_RO_DATA_CNT | PPMU_V2_WO_DATA_CNT);
-		if (ret < 0)
-			return ret;
-		break;
-	case PPMU_PMNCNT3:
-		ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
-				PPMU_V2_EVT3_RW_DATA_CNT);
-		if (ret < 0)
-			return ret;
-		break;
-	}
+	/* Set the event of proper data type monitoring */
+	ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
+			   edev->desc->event_type);
+	if (ret < 0)
+		return ret;
 
 	/* Reset cycle counter/performance counter and enable PPMU */
 	ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc);
@@ -480,31 +481,24 @@ static const struct devfreq_event_ops exynos_ppmu_v2_ops = {
 static const struct of_device_id exynos_ppmu_id_match[] = {
 	{
 		.compatible = "samsung,exynos-ppmu",
-		.data = (void *)&exynos_ppmu_ops,
+		.data = (void *)EXYNOS_TYPE_PPMU,
 	}, {
 		.compatible = "samsung,exynos-ppmu-v2",
-		.data = (void *)&exynos_ppmu_v2_ops,
+		.data = (void *)EXYNOS_TYPE_PPMU_V2,
 	},
 	{ /* sentinel */ },
 };
 MODULE_DEVICE_TABLE(of, exynos_ppmu_id_match);
 
-static struct devfreq_event_ops *exynos_bus_get_ops(struct device_node *np)
-{
-	const struct of_device_id *match;
-
-	match = of_match_node(exynos_ppmu_id_match, np);
-	return (struct devfreq_event_ops *)match->data;
-}
-
 static int of_get_devfreq_events(struct device_node *np,
 				 struct exynos_ppmu *info)
 {
 	struct devfreq_event_desc *desc;
-	struct devfreq_event_ops *event_ops;
 	struct device *dev = info->dev;
 	struct device_node *events_np, *node;
 	int i, j, count;
+	const struct of_device_id *of_id;
+	int ret;
 
 	events_np = of_get_child_by_name(np, "events");
 	if (!events_np) {
@@ -512,7 +506,6 @@ static int of_get_devfreq_events(struct device_node *np,
 			"failed to get child node of devfreq-event devices\n");
 		return -EINVAL;
 	}
-	event_ops = exynos_bus_get_ops(np);
 
 	count = of_get_child_count(events_np);
 	desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
@@ -520,6 +513,12 @@ static int of_get_devfreq_events(struct device_node *np,
 		return -ENOMEM;
 	info->num_events = count;
 
+	of_id = of_match_device(exynos_ppmu_id_match, dev);
+	if (of_id)
+		info->ppmu_type = (enum exynos_ppmu_type)of_id->data;
+	else
+		return -EINVAL;
+
 	j = 0;
 	for_each_child_of_node(events_np, node) {
 		for (i = 0; i < ARRAY_SIZE(ppmu_events); i++) {
@@ -537,10 +536,51 @@ static int of_get_devfreq_events(struct device_node *np,
 			continue;
 		}
 
-		desc[j].ops = event_ops;
+		switch (info->ppmu_type) {
+		case EXYNOS_TYPE_PPMU:
+			desc[j].ops = &exynos_ppmu_ops;
+			break;
+		case EXYNOS_TYPE_PPMU_V2:
+			desc[j].ops = &exynos_ppmu_v2_ops;
+			break;
+		}
+
 		desc[j].driver_data = info;
 
 		of_property_read_string(node, "event-name", &desc[j].name);
+		ret = of_property_read_u32(node, "event-data-type",
+					   &desc[j].event_type);
+		if (ret) {
+			/* Set the event of proper data type counting.
+			 * Check if the data type has been defined in DT,
+			 * use default if not.
+			 */
+			if (info->ppmu_type == EXYNOS_TYPE_PPMU_V2) {
+				struct devfreq_event_dev edev;
+				int id;
+				/* Not all registers take the same value for
+				 * read+write data count.
+				 */
+				edev.desc = &desc[j];
+				id = exynos_ppmu_find_ppmu_id(&edev);
+
+				switch (id) {
+				case PPMU_PMNCNT0:
+				case PPMU_PMNCNT1:
+				case PPMU_PMNCNT2:
+					desc[j].event_type = PPMU_V2_RO_DATA_CNT
+						| PPMU_V2_WO_DATA_CNT;
+					break;
+				case PPMU_PMNCNT3:
+					desc[j].event_type =
+						PPMU_V2_EVT3_RW_DATA_CNT;
+					break;
+				}
+			} else {
+				desc[j].event_type = PPMU_RO_DATA_CNT |
+					PPMU_WO_DATA_CNT;
+			}
+		}
 
 		j++;
 	}
...
@@ -22,7 +22,6 @@
 #include <linux/slab.h>
 
 #define DEFAULT_SATURATION_RATIO	40
-#define DEFAULT_VOLTAGE_TOLERANCE	2
 
 struct exynos_bus {
 	struct device *dev;
@@ -34,9 +33,8 @@ struct exynos_bus {
 	unsigned long curr_freq;
 
-	struct regulator *regulator;
+	struct opp_table *opp_table;
 	struct clk *clk;
-	unsigned int voltage_tolerance;
 	unsigned int ratio;
 };
 
@@ -90,62 +88,29 @@ static int exynos_bus_get_event(struct exynos_bus *bus,
 }
 
 /*
- * Must necessary function for devfreq simple-ondemand governor
+ * devfreq function for both simple-ondemand and passive governor
 */
 static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
 {
 	struct exynos_bus *bus = dev_get_drvdata(dev);
 	struct dev_pm_opp *new_opp;
-	unsigned long old_freq, new_freq, new_volt, tol;
 	int ret = 0;
 
-	/* Get new opp-bus instance according to new bus clock */
+	/* Get correct frequency for bus. */
 	new_opp = devfreq_recommended_opp(dev, freq, flags);
 	if (IS_ERR(new_opp)) {
 		dev_err(dev, "failed to get recommended opp instance\n");
 		return PTR_ERR(new_opp);
 	}
 
-	new_freq = dev_pm_opp_get_freq(new_opp);
-	new_volt = dev_pm_opp_get_voltage(new_opp);
 	dev_pm_opp_put(new_opp);
 
-	old_freq = bus->curr_freq;
-
-	if (old_freq == new_freq)
-		return 0;
-	tol = new_volt * bus->voltage_tolerance / 100;
-
 	/* Change voltage and frequency according to new OPP level */
 	mutex_lock(&bus->lock);
+	ret = dev_pm_opp_set_rate(dev, *freq);
+	if (!ret)
+		bus->curr_freq = *freq;
 
-	if (old_freq < new_freq) {
-		ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
-		if (ret < 0) {
-			dev_err(bus->dev, "failed to set voltage\n");
-			goto out;
-		}
-	}
-
-	ret = clk_set_rate(bus->clk, new_freq);
-	if (ret < 0) {
-		dev_err(dev, "failed to change clock of bus\n");
-		clk_set_rate(bus->clk, old_freq);
-		goto out;
-	}
-
-	if (old_freq > new_freq) {
-		ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
-		if (ret < 0) {
-			dev_err(bus->dev, "failed to set voltage\n");
-			goto out;
-		}
-	}
-	bus->curr_freq = new_freq;
-
-	dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
-			old_freq, new_freq, clk_get_rate(bus->clk));
-out:
 	mutex_unlock(&bus->lock);
 
 	return ret;
@@ -191,57 +156,12 @@ static void exynos_bus_exit(struct device *dev)
 	if (ret < 0)
 		dev_warn(dev, "failed to disable the devfreq-event devices\n");
 
-	if (bus->regulator)
-		regulator_disable(bus->regulator);
-
 	dev_pm_opp_of_remove_table(dev);
 	clk_disable_unprepare(bus->clk);
-}
-
-/*
- * Must necessary function for devfreq passive governor
- */
-static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
-					u32 flags)
-{
-	struct exynos_bus *bus = dev_get_drvdata(dev);
-	struct dev_pm_opp *new_opp;
-	unsigned long old_freq, new_freq;
-	int ret = 0;
-
-	/* Get new opp-bus instance according to new bus clock */
-	new_opp = devfreq_recommended_opp(dev, freq, flags);
-	if (IS_ERR(new_opp)) {
-		dev_err(dev, "failed to get recommended opp instance\n");
-		return PTR_ERR(new_opp);
-	}
-
-	new_freq = dev_pm_opp_get_freq(new_opp);
-	dev_pm_opp_put(new_opp);
-
-	old_freq = bus->curr_freq;
-	if (old_freq == new_freq)
-		return 0;
-
-	/* Change the frequency according to new OPP level */
-	mutex_lock(&bus->lock);
-
-	ret = clk_set_rate(bus->clk, new_freq);
-	if (ret < 0) {
-		dev_err(dev, "failed to set the clock of bus\n");
-		goto out;
-	}
-
-	*freq = new_freq;
-	bus->curr_freq = new_freq;
-
-	dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
-			old_freq, new_freq, clk_get_rate(bus->clk));
-out:
-	mutex_unlock(&bus->lock);
-
-	return ret;
+	if (bus->opp_table) {
+		dev_pm_opp_put_regulators(bus->opp_table);
+		bus->opp_table = NULL;
+	}
 }
 
 static void exynos_bus_passive_exit(struct device *dev)
@@ -256,21 +176,19 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
 			struct exynos_bus *bus)
 {
 	struct device *dev = bus->dev;
+	struct opp_table *opp_table;
+	const char *vdd = "vdd";
 	int i, ret, count, size;
 
-	/* Get the regulator to provide each bus with the power */
-	bus->regulator = devm_regulator_get(dev, "vdd");
-	if (IS_ERR(bus->regulator)) {
-		dev_err(dev, "failed to get VDD regulator\n");
-		return PTR_ERR(bus->regulator);
-	}
-
-	ret = regulator_enable(bus->regulator);
-	if (ret < 0) {
-		dev_err(dev, "failed to enable VDD regulator\n");
+	opp_table = dev_pm_opp_set_regulators(dev, &vdd, 1);
+	if (IS_ERR(opp_table)) {
+		ret = PTR_ERR(opp_table);
+		dev_err(dev, "failed to set regulators %d\n", ret);
 		return ret;
 	}
 
+	bus->opp_table = opp_table;
+
 	/*
 	 * Get the devfreq-event devices to get the current utilization of
 	 * buses. This raw data will be used in devfreq ondemand governor.
@@ -311,14 +229,11 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
 	if (of_property_read_u32(np, "exynos,saturation-ratio", &bus->ratio))
 		bus->ratio = DEFAULT_SATURATION_RATIO;
 
-	if (of_property_read_u32(np, "exynos,voltage-tolerance",
-					&bus->voltage_tolerance))
-		bus->voltage_tolerance = DEFAULT_VOLTAGE_TOLERANCE;
-
 	return 0;
 
 err_regulator:
-	regulator_disable(bus->regulator);
+	dev_pm_opp_put_regulators(bus->opp_table);
+	bus->opp_table = NULL;
 
 	return ret;
 }
@@ -383,6 +298,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
 	struct exynos_bus *bus;
 	int ret, max_state;
 	unsigned long min_freq, max_freq;
+	bool passive = false;
 
 	if (!np) {
 		dev_err(dev, "failed to find devicetree node\n");
@@ -396,27 +312,27 @@ static int exynos_bus_probe(struct platform_device *pdev)
 	bus->dev = &pdev->dev;
 	platform_set_drvdata(pdev, bus);
 
-	/* Parse the device-tree to get the resource information */
-	ret = exynos_bus_parse_of(np, bus);
-	if (ret < 0)
-		return ret;
-
 	profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
-	if (!profile) {
-		ret = -ENOMEM;
-		goto err;
-	}
+	if (!profile)
+		return -ENOMEM;
 
 	node = of_parse_phandle(dev->of_node, "devfreq", 0);
 	if (node) {
 		of_node_put(node);
-		goto passive;
+		passive = true;
 	} else {
 		ret = exynos_bus_parent_parse_of(np, bus);
+		if (ret < 0)
+			return ret;
 	}
 
+	/* Parse the device-tree to get the resource information */
+	ret = exynos_bus_parse_of(np, bus);
 	if (ret < 0)
-		goto err;
+		goto err_reg;
+
+	if (passive)
+		goto passive;
 
 	/* Initialize the struct profile and governor data for parent device */
 	profile->polling_ms = 50;
@@ -468,7 +384,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
 	goto out;
 
 passive:
 	/* Initialize the struct profile and governor data for passive device */
-	profile->target = exynos_bus_passive_target;
+	profile->target = exynos_bus_target;
 	profile->exit = exynos_bus_passive_exit;
 
 	/* Get the instance of parent devfreq device */
@@ -507,6 +423,11 @@ static int exynos_bus_probe(struct platform_device *pdev)
 err:
 	dev_pm_opp_of_remove_table(dev);
 	clk_disable_unprepare(bus->clk);
+err_reg:
+	if (!passive) {
+		dev_pm_opp_put_regulators(bus->opp_table);
+		bus->opp_table = NULL;
+	}
 
 	return ret;
 }
...
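The open-coded sequence removed above enforced the classic DVFS ordering rule, which dev_pm_opp_set_rate() now applies internally. A schematic illustration; the functions here are stand-ins, not kernel APIs:
#include <stdio.h>

static void set_voltage(unsigned long uv) { printf("volt -> %lu uV\n", uv); }
static void set_clock(unsigned long hz)  { printf("clk  -> %lu Hz\n", hz); }

/* order matters: the rail must always satisfy the current frequency */
static void opp_set_rate(unsigned long old_hz, unsigned long new_hz,
			 unsigned long new_uv)
{
	if (new_hz > old_hz) {
		set_voltage(new_uv);	/* scaling up: raise voltage first */
		set_clock(new_hz);
	} else {
		set_clock(new_hz);	/* scaling down: lower frequency first */
		set_voltage(new_uv);
	}
}

int main(void)
{
	opp_set_rate(100000000, 200000000, 1100000);	/* up */
	opp_set_rate(200000000, 100000000, 1000000);	/* down */
	return 0;
}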
@@ -149,7 +149,6 @@ static int devfreq_passive_notifier_call(struct notifier_block *nb,
 static int devfreq_passive_event_handler(struct devfreq *devfreq,
 				unsigned int event, void *data)
 {
-	struct device *dev = devfreq->dev.parent;
 	struct devfreq_passive_data *p_data
 			= (struct devfreq_passive_data *)devfreq->data;
 	struct devfreq *parent = (struct devfreq *)p_data->parent;
@@ -165,12 +164,12 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
 		p_data->this = devfreq;
 
 		nb->notifier_call = devfreq_passive_notifier_call;
-		ret = devm_devfreq_register_notifier(dev, parent, nb,
+		ret = devfreq_register_notifier(parent, nb,
 					DEVFREQ_TRANSITION_NOTIFIER);
 		break;
 	case DEVFREQ_GOV_STOP:
-		devm_devfreq_unregister_notifier(dev, parent, nb,
-					DEVFREQ_TRANSITION_NOTIFIER);
+		WARN_ON(devfreq_unregister_notifier(parent, nb,
+					DEVFREQ_TRANSITION_NOTIFIER));
 		break;
 	default:
 		break;
...
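Dropping the devm_ variants makes the notifier registration strictly balanced across DEVFREQ_GOV_START and DEVFREQ_GOV_STOP rather than tied to the parent device's lifetime. For reference, a transition-notifier callback generally looks like this (an illustrative sketch, not code from this series):

	/* Log another devfreq device's completed frequency transitions. */
	#include <linux/devfreq.h>

	static int my_transition_notifier_call(struct notifier_block *nb,
					       unsigned long event, void *ptr)
	{
		struct devfreq_freqs *freqs = ptr;

		if (event == DEVFREQ_POSTCHANGE)
			pr_debug("devfreq transition: %lu -> %lu Hz\n",
				 freqs->old, freqs->new);

		return NOTIFY_OK;
	}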
@@ -351,7 +351,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
 	/*
 	 * Get dram timing and pass it to arm trust firmware,
-	 * the dram drvier in arm trust firmware will get these
+	 * the dram driver in arm trust firmware will get these
 	 * timing and to do dram initial.
 	 */
 	if (!of_get_ddr_timings(&data->timing, np)) {
...
// SPDX-License-Identifier: GPL-2.0
/*
 * NVIDIA Tegra20 devfreq driver
 *
 * Copyright (C) 2019 GRATE-DRIVER project
 */

#include <linux/clk.h>
#include <linux/devfreq.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>

#include <soc/tegra/mc.h>

#include "governor.h"

#define MC_STAT_CONTROL			0x90
#define MC_STAT_EMC_CLOCK_LIMIT		0xa0
#define MC_STAT_EMC_CLOCKS		0xa4
#define MC_STAT_EMC_CONTROL		0xa8
#define MC_STAT_EMC_COUNT		0xb8

#define EMC_GATHER_CLEAR		(1 << 8)
#define EMC_GATHER_ENABLE		(3 << 8)

struct tegra_devfreq {
	struct devfreq *devfreq;
	struct clk *emc_clock;
	void __iomem *regs;
};

static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
				u32 flags)
{
	struct tegra_devfreq *tegra = dev_get_drvdata(dev);
	struct devfreq *devfreq = tegra->devfreq;
	struct dev_pm_opp *opp;
	unsigned long rate;
	int err;

	opp = devfreq_recommended_opp(dev, freq, flags);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	rate = dev_pm_opp_get_freq(opp);
	dev_pm_opp_put(opp);

	err = clk_set_min_rate(tegra->emc_clock, rate);
	if (err)
		return err;

	err = clk_set_rate(tegra->emc_clock, 0);
	if (err)
		goto restore_min_rate;

	return 0;

restore_min_rate:
	clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);

	return err;
}

static int tegra_devfreq_get_dev_status(struct device *dev,
					struct devfreq_dev_status *stat)
{
	struct tegra_devfreq *tegra = dev_get_drvdata(dev);

	/*
	 * EMC_COUNT returns number of memory events, that number is lower
	 * than the number of clocks. Conversion ratio of 1/8 results in a
	 * bit higher bandwidth than actually needed, it is good enough for
	 * the time being because drivers don't support requesting minimum
	 * needed memory bandwidth yet.
	 *
	 * TODO: adjust the ratio value once relevant drivers will support
	 * memory bandwidth management.
	 */
	stat->busy_time = readl_relaxed(tegra->regs + MC_STAT_EMC_COUNT);
	stat->total_time = readl_relaxed(tegra->regs + MC_STAT_EMC_CLOCKS) / 8;
	stat->current_frequency = clk_get_rate(tegra->emc_clock);

	writel_relaxed(EMC_GATHER_CLEAR, tegra->regs + MC_STAT_CONTROL);
	writel_relaxed(EMC_GATHER_ENABLE, tegra->regs + MC_STAT_CONTROL);

	return 0;
}

static struct devfreq_dev_profile tegra_devfreq_profile = {
	.polling_ms	= 500,
	.target		= tegra_devfreq_target,
	.get_dev_status	= tegra_devfreq_get_dev_status,
};

static struct tegra_mc *tegra_get_memory_controller(void)
{
	struct platform_device *pdev;
	struct device_node *np;
	struct tegra_mc *mc;

	np = of_find_compatible_node(NULL, NULL, "nvidia,tegra20-mc-gart");
	if (!np)
		return ERR_PTR(-ENOENT);

	pdev = of_find_device_by_node(np);
	of_node_put(np);
	if (!pdev)
		return ERR_PTR(-ENODEV);

	mc = platform_get_drvdata(pdev);
	if (!mc)
		return ERR_PTR(-EPROBE_DEFER);

	return mc;
}

static int tegra_devfreq_probe(struct platform_device *pdev)
{
	struct tegra_devfreq *tegra;
	struct tegra_mc *mc;
	unsigned long max_rate;
	unsigned long rate;
	int err;

	mc = tegra_get_memory_controller();
	if (IS_ERR(mc)) {
		err = PTR_ERR(mc);
		dev_err(&pdev->dev, "failed to get memory controller: %d\n",
			err);
		return err;
	}

	tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
	if (!tegra)
		return -ENOMEM;

	/* EMC is a system-critical clock that is always enabled */
	tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
	if (IS_ERR(tegra->emc_clock)) {
		err = PTR_ERR(tegra->emc_clock);
		dev_err(&pdev->dev, "failed to get emc clock: %d\n", err);
		return err;
	}

	tegra->regs = mc->regs;

	max_rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);

	for (rate = 0; rate <= max_rate; rate++) {
		rate = clk_round_rate(tegra->emc_clock, rate);

		err = dev_pm_opp_add(&pdev->dev, rate, 0);
		if (err) {
			dev_err(&pdev->dev, "failed to add opp: %d\n", err);
			goto remove_opps;
		}
	}

	/*
	 * Reset statistic gathers state, select global bandwidth for the
	 * statistics collection mode and set clocks counter saturation
	 * limit to maximum.
	 */
	writel_relaxed(0x00000000, tegra->regs + MC_STAT_CONTROL);
	writel_relaxed(0x00000000, tegra->regs + MC_STAT_EMC_CONTROL);
	writel_relaxed(0xffffffff, tegra->regs + MC_STAT_EMC_CLOCK_LIMIT);

	platform_set_drvdata(pdev, tegra);

	tegra->devfreq = devfreq_add_device(&pdev->dev, &tegra_devfreq_profile,
					    DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
	if (IS_ERR(tegra->devfreq)) {
		err = PTR_ERR(tegra->devfreq);
		goto remove_opps;
	}

	return 0;

remove_opps:
	dev_pm_opp_remove_all_dynamic(&pdev->dev);

	return err;
}

static int tegra_devfreq_remove(struct platform_device *pdev)
{
	struct tegra_devfreq *tegra = platform_get_drvdata(pdev);

	devfreq_remove_device(tegra->devfreq);
	dev_pm_opp_remove_all_dynamic(&pdev->dev);

	return 0;
}

static struct platform_driver tegra_devfreq_driver = {
	.probe		= tegra_devfreq_probe,
	.remove		= tegra_devfreq_remove,
	.driver		= {
		.name = "tegra20-devfreq",
	},
};
module_platform_driver(tegra_devfreq_driver);

MODULE_ALIAS("platform:tegra20-devfreq");
MODULE_AUTHOR("Dmitry Osipenko <digetx@gmail.com>");
MODULE_DESCRIPTION("NVIDIA Tegra20 devfreq driver");
MODULE_LICENSE("GPL v2");
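The probe loop above populates the OPP table by enumerating the EMC clock's discrete rates: clk_round_rate() snaps the requested rate to a supported one (rounding up is assumed here, as it is for this clock), and the loop's rate++ then steps one Hz past it so the next query lands on the next step. A standalone sketch of the same trick, assuming a valid, already-acquired struct clk:

	/* Illustrative sketch: walk every discrete rate a clock supports. */
	static void example_enumerate_rates(struct clk *clk)
	{
		unsigned long rate, max_rate;

		max_rate = clk_round_rate(clk, ULONG_MAX);

		for (rate = 0; rate <= max_rate; rate++) {
			/* Snap to the nearest supported rate... */
			rate = clk_round_rate(clk, rate);
			pr_info("supported rate: %lu Hz\n", rate);
			/* ...then rate++ moves just past it. */
		}
	}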
@@ -3,9 +3,11 @@
 #include <linux/errno.h>
 #include <linux/kernel.h>
 #include <linux/delay.h>
+#include <linux/pm_qos.h>
 #include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/wait.h>
+#include <linux/cpu.h>
 #include <linux/cpufreq.h>
 
 #include <asm/prom.h>
@@ -16,36 +18,24 @@
 static int clamped;
 static struct wf_control *clamp_control;
 
+static struct dev_pm_qos_request qos_req;
+static unsigned int min_freq, max_freq;
 
-static int clamp_notifier_call(struct notifier_block *self,
-			       unsigned long event, void *data)
-{
-	struct cpufreq_policy *p = data;
-	unsigned long max_freq;
-
-	if (event != CPUFREQ_ADJUST)
-		return 0;
-
-	max_freq = clamped ? (p->cpuinfo.min_freq) : (p->cpuinfo.max_freq);
-	cpufreq_verify_within_limits(p, 0, max_freq);
-
-	return 0;
-}
-
-static struct notifier_block clamp_notifier = {
-	.notifier_call = clamp_notifier_call,
-};
-
 static int clamp_set(struct wf_control *ct, s32 value)
 {
-	if (value)
+	unsigned int freq;
+
+	if (value) {
+		freq = min_freq;
 		printk(KERN_INFO "windfarm: Clamping CPU frequency to "
 		       "minimum !\n");
-	else
+	} else {
+		freq = max_freq;
 		printk(KERN_INFO "windfarm: CPU frequency unclamped !\n");
+	}
+
 	clamped = value;
-	cpufreq_update_policy(0);
-	return 0;
+
+	return dev_pm_qos_update_request(&qos_req, freq);
 }
 
 static int clamp_get(struct wf_control *ct, s32 *value)
@@ -74,27 +64,60 @@ static const struct wf_control_ops clamp_ops = {
 
 static int __init wf_cpufreq_clamp_init(void)
 {
+	struct cpufreq_policy *policy;
 	struct wf_control *clamp;
+	struct device *dev;
+	int ret;
+
+	policy = cpufreq_cpu_get(0);
+	if (!policy) {
+		pr_warn("%s: cpufreq policy not found cpu0\n", __func__);
+		return -EPROBE_DEFER;
+	}
+
+	min_freq = policy->cpuinfo.min_freq;
+	max_freq = policy->cpuinfo.max_freq;
+	cpufreq_cpu_put(policy);
+
+	dev = get_cpu_device(0);
+	if (unlikely(!dev)) {
+		pr_warn("%s: No cpu device for cpu0\n", __func__);
+		return -ENODEV;
+	}
 
 	clamp = kmalloc(sizeof(struct wf_control), GFP_KERNEL);
 	if (clamp == NULL)
 		return -ENOMEM;
-	cpufreq_register_notifier(&clamp_notifier, CPUFREQ_POLICY_NOTIFIER);
+
+	ret = dev_pm_qos_add_request(dev, &qos_req, DEV_PM_QOS_MAX_FREQUENCY,
+				     max_freq);
+	if (ret < 0) {
+		pr_err("%s: Failed to add freq constraint (%d)\n", __func__,
+		       ret);
+		goto free;
+	}
+
 	clamp->ops = &clamp_ops;
 	clamp->name = "cpufreq-clamp";
-	if (wf_register_control(clamp))
+	ret = wf_register_control(clamp);
+	if (ret)
 		goto fail;
+
 	clamp_control = clamp;
+
 	return 0;
+
 fail:
+	dev_pm_qos_remove_request(&qos_req);
+free:
 	kfree(clamp);
-	return -ENODEV;
+
+	return ret;
 }
 
 static void __exit wf_cpufreq_clamp_exit(void)
 {
-	if (clamp_control)
+	if (clamp_control) {
 		wf_unregister_control(clamp_control);
+		dev_pm_qos_remove_request(&qos_req);
+	}
 }
...
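This is one of several drivers in the series moved from CPUFREQ_ADJUST policy notifiers to device PM QoS frequency limits: instead of vetoing policy updates from a notifier, the driver now expresses a per-device maximum-frequency constraint that the cpufreq core aggregates. A rough standalone sketch of that lifecycle (names are illustrative; it assumes the DEV_PM_QOS_MAX_FREQUENCY request type used by this series, with values in kHz as cpufreq reports them):

	#include <linux/cpu.h>
	#include <linux/pm_qos.h>

	static struct dev_pm_qos_request example_req;

	/* Cap CPU0's frequency at max_khz; undo with example_uncap(). */
	static int example_cap_cpu0(s32 max_khz)
	{
		struct device *dev = get_cpu_device(0);

		if (!dev)
			return -ENODEV;

		return dev_pm_qos_add_request(dev, &example_req,
					      DEV_PM_QOS_MAX_FREQUENCY,
					      max_khz);
	}

	static void example_uncap(void)
	{
		dev_pm_qos_remove_request(&example_req);
	}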