Commit 16642a2e authored by Linus Torvalds

Merge tag 'pm-for-3.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael J Wysocki:

 - Improved system suspend/resume and runtime PM handling for the SH
   TMU, CMT and MTU2 clock event devices (also used by ARM/shmobile).

 - Generic PM domains framework extensions related to cpuidle support
   and domain objects lookup using names.

 - ARM/shmobile power management updates including improved support for
   the SH7372's A4S power domain containing the CPU core.

 - cpufreq changes related to AMD CPU support from Matthew Garrett,
   Andre Przywara and Borislav Petkov.

 - cpu0 cpufreq driver from Shawn Guo.

 - cpufreq governor fixes related to the relaxing of limits from Michal
   Pecio.

 - OMAP cpufreq updates from Axel Lin and Richard Zhao.

 - cpuidle ladder governor fixes related to the disabling of states from
   Carsten Emde and me.

 - Runtime PM core updates related to the interactions with the system
   suspend core from Alan Stern and Kevin Hilman.

 - Wakeup sources modification allowing more helper functions to be
   called from interrupt context from John Stultz and additional
   diagnostic code from Todd Poynor.

 - System suspend error code path fix from Feng Hong.

Fixed up conflicts in cpufreq/powernow-k8 that stemmed from the
workqueue fixes conflicting fairly badly with the removal of support for
hardware P-state chips.  The changes were independent but somewhat
intertwined.

* tag 'pm-for-3.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (76 commits)
  Revert "PM QoS: Use spinlock in the per-device PM QoS constraints code"
  PM / Runtime: let rpm_resume() succeed if RPM_ACTIVE, even when disabled, v2
  cpuidle: rename function name "__cpuidle_register_driver", v2
  cpufreq: OMAP: Check IS_ERR() instead of NULL for omap_device_get_by_hwmod_name
  cpuidle: remove some empty lines
  PM: Prevent runtime suspend during system resume
  PM QoS: Use spinlock in the per-device PM QoS constraints code
  PM / Sleep: use resume event when call dpm_resume_early
  cpuidle / ACPI : move cpuidle_device field out of the acpi_processor_power structure
  ACPI / processor: remove pointless variable initialization
  ACPI / processor: remove unused function parameter
  cpufreq: OMAP: remove loops_per_jiffy recalculate for smp
  sections: fix section conflicts in drivers/cpufreq
  cpufreq: conservative: update frequency when limits are relaxed
  cpufreq / ondemand: update frequency when limits are relaxed
  properly __init-annotate pm_sysrq_init()
  cpufreq: Add a generic cpufreq-cpu0 driver
  PM / OPP: Initialize OPP table from device tree
  ARM: add cpufreq transiton notifier to adjust loops_per_jiffy for smp
  cpufreq: Remove support for hardware P-state chips from powernow-k8
  ...
parents 51562cba b9142167
@@ -176,3 +176,14 @@ Description: Disable L3 cache indices
 		All AMD processors with L3 caches provide this functionality.
 		For details, see BKDGs at
 		http://developer.amd.com/documentation/guides/Pages/default.aspx
What:		/sys/devices/system/cpu/cpufreq/boost
Date:		August 2012
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Processor frequency boosting control

		This switch controls the boost setting for the whole system.
		Boosting allows the CPU and the firmware to run at a frequency
		beyond its nominal limit.
		More details can be found in Documentation/cpu-freq/boost.txt
Processor boosting control
- information for users -
Quick guide for the impatient:
--------------------
/sys/devices/system/cpu/cpufreq/boost
controls the boost setting for the whole system. You can read and write
that file with either "0" (boosting disabled) or "1" (boosting allowed).
Reading or writing 1 does not mean that the system is boosting at this
very moment, but only that the CPU _may_ raise the frequency at its
discretion.
--------------------
Introduction
-------------
Some CPUs support a functionality to raise the operating frequency of
some cores in a multi-core package if certain conditions apply, mostly
if the whole chip is not fully utilized and below its intended thermal
budget. This is done without operating system control by a combination
of hardware and firmware.
On Intel CPUs this is called "Turbo Boost", AMD calls it "Turbo-Core",
in technical documentation "Core performance boost". In Linux we use
the term "boost" for convenience.
Rationale for disable switch
----------------------------
Though the idea is to just give better performance without any user
intervention, sometimes the need arises to disable this functionality.
Most systems offer a switch in the (BIOS) firmware to disable the
functionality entirely, but more fine-grained and dynamic control would
be desirable:
1. While running benchmarks, reproducible results are important. Since
the boosting functionality depends on the load of the whole package,
single thread performance can vary. By explicitly disabling the boost
functionality at least for the benchmark's run-time the system will run
at a fixed frequency and results are reproducible again.
2. To examine the impact of the boosting functionality it is helpful
to do tests with and without boosting.
3. Boosting means overclocking the processor, though under controlled
conditions. By raising the frequency and the voltage the processor
will consume more power than without the boosting, which may be
undesirable for instance for mobile users. Disabling boosting may
save power here, though this depends on the workload.
User controlled switch
----------------------
To allow the user to toggle the boosting functionality, the acpi-cpufreq
driver exports a sysfs knob to disable it. There is a file:
/sys/devices/system/cpu/cpufreq/boost
which can either read "0" (boosting disabled) or "1" (boosting enabled).
Reading the file is always supported, even if the processor does not
support boosting. In this case the file will be read-only and always
reads as "0". Explicitly changing the permissions and writing to that
file anyway will return EINVAL.
On supported CPUs one can write either a "0" or a "1" into this file.
This will either disable the boost functionality on all cores in the
whole system (0) or will allow the hardware to boost at will (1).
Writing a "1" does not explicitly boost the system, but just allows the
CPU (and the firmware) to boost at their discretion. Some implementations
take external factors like the chip's temperature into account, so
boosting once does not necessarily mean that it will occur every time
even using the exact same software setup.
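As a quick, illustrative session (the output values are made up; what
you actually read depends on the processor and the driver in use):

  # cat /sys/devices/system/cpu/cpufreq/boost
  1
  # echo 0 > /sys/devices/system/cpu/cpufreq/boost    (disable boosting)
  # echo 1 > /sys/devices/system/cpu/cpufreq/boost    (allow boosting again)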
AMD legacy cpb switch
---------------------
The AMD powernow-k8 driver used to support a very similar switch to
disable or enable the "Core Performance Boost" feature of some AMD CPUs.
This switch was instantiated in each CPU's cpufreq directory
(/sys/devices/system/cpu[0-9]*/cpufreq) and was called "cpb".
Though the per-CPU existence hints at more fine-grained control, the
actual implementation only supported system-global switch semantics,
which was simply reflected into each CPU's file. Writing a 0 or 1 into it
would pull the other CPUs to the same state.
For compatibility reasons this file and its behavior are still supported
on AMD CPUs, though it is now protected by a config switch
(X86_ACPI_CPUFREQ_CPB). On Intel CPUs this file will never be created,
even with the config option set.
This functionality is considered legacy and will be removed in some future
kernel version.
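A hypothetical session showing the mirrored semantics described above
(CPU numbers and values are illustrative only):

  # echo 0 > /sys/devices/system/cpu/cpu0/cpufreq/cpb
  # cat /sys/devices/system/cpu/cpu1/cpufreq/cpb
  0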
More fine grained boosting control
----------------------------------
Technically it is possible to switch the boosting functionality at least
on a per package basis, for some CPUs even per core. Currently the driver
does not support it, but this may be implemented in the future.
@@ -76,9 +76,17 @@ total 0
 * desc : Small description about the idle state (string)
-* disable : Option to disable this idle state (bool)
+* disable : Option to disable this idle state (bool) -> see note below
 * latency : Latency to exit out of this idle state (in microseconds)
 * name : Name of the idle state (string)
 * power : Power consumed while in this idle state (in milliwatts)
 * time : Total time spent in this idle state (in microseconds)
 * usage : Number of times this state was entered (count)
Note:
The behavior and the effect of the disable variable depend on the
implementation of a particular governor. In the ladder governor, for
example, it is not coherent, i.e. if one disables a light state, then
all deeper states are disabled as well, but the disable variable does
not reflect it. Likewise, if one enables a deep state but a lighter
state is still disabled, then this has no effect.
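For example, with the ladder governor and the per-state directories
described above (the state numbers are illustrative), disabling a light
state effectively keeps the deeper states from being entered as well,
while their disable files keep reading 0:

  # echo 1 > /sys/devices/system/cpu/cpu0/cpuidle/state1/disable
  # cat /sys/devices/system/cpu/cpu0/cpuidle/state2/disable
  0                        (state2 will nevertheless not be entered)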
Generic CPU0 cpufreq driver
It is a generic cpufreq driver for CPU0 frequency management. It
supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
systems which share clock and voltage across all CPUs.
Both required and optional properties listed below must be defined
under node /cpus/cpu@0.
Required properties:
- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt
for details
Optional properties:
- clock-latency: Specify the possible maximum transition latency for clock,
  in units of nanoseconds.
- voltage-tolerance: Specify the CPU voltage tolerance in percentage.
Examples:

cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu@0 {
		compatible = "arm,cortex-a9";
		reg = <0>;
		next-level-cache = <&L2>;
		operating-points = <
			/* kHz    uV */
			792000  1100000
			396000   950000
			198000   850000
		>;
		clock-latency = <61036>; /* two CLK32 periods */
	};

	cpu@1 {
		compatible = "arm,cortex-a9";
		reg = <1>;
		next-level-cache = <&L2>;
	};

	cpu@2 {
		compatible = "arm,cortex-a9";
		reg = <2>;
		next-level-cache = <&L2>;
	};

	cpu@3 {
		compatible = "arm,cortex-a9";
		reg = <3>;
		next-level-cache = <&L2>;
	};
};
* Generic OPP Interface
SoCs have a standard set of tuples consisting of frequency and
voltage pairs that the device will support per voltage domain. These
are called Operating Performance Points or OPPs.
Properties:
- operating-points: An array of 2-tuples, each consisting of a frequency
  and a voltage in the form <freq-kHz vol-uV>.
  freq: clock frequency in kHz
  vol: voltage in microvolts
Examples:

cpu@0 {
	compatible = "arm,cortex-a9";
	reg = <0>;
	next-level-cache = <&L2>;
	operating-points = <
		/* kHz    uV */
		792000  1100000
		396000   950000
		198000   850000
	>;
};
@@ -24,6 +24,7 @@
 #include <linux/percpu.h>
 #include <linux/clockchips.h>
 #include <linux/completion.h>
+#include <linux/cpufreq.h>
 #include <linux/atomic.h>
 #include <asm/smp.h>
@@ -650,3 +651,56 @@ int setup_profiling_timer(unsigned int multiplier)
 {
 	return -EINVAL;
 }
#ifdef CONFIG_CPU_FREQ

static DEFINE_PER_CPU(unsigned long, l_p_j_ref);
static DEFINE_PER_CPU(unsigned long, l_p_j_ref_freq);
static unsigned long global_l_p_j_ref;
static unsigned long global_l_p_j_ref_freq;

static int cpufreq_callback(struct notifier_block *nb,
			    unsigned long val, void *data)
{
	struct cpufreq_freqs *freq = data;
	int cpu = freq->cpu;

	if (freq->flags & CPUFREQ_CONST_LOOPS)
		return NOTIFY_OK;

	/* Record the reference loops_per_jiffy and the frequency it was
	 * calibrated at, the first time this CPU changes frequency. */
	if (!per_cpu(l_p_j_ref, cpu)) {
		per_cpu(l_p_j_ref, cpu) =
			per_cpu(cpu_data, cpu).loops_per_jiffy;
		per_cpu(l_p_j_ref_freq, cpu) = freq->old;
		if (!global_l_p_j_ref) {
			global_l_p_j_ref = loops_per_jiffy;
			global_l_p_j_ref_freq = freq->old;
		}
	}

	/* Rescale before raising the frequency and after lowering it, so
	 * loops_per_jiffy is never too small for the current frequency and
	 * udelay() never returns early: new_lpj = ref_lpj * new / ref. */
	if ((val == CPUFREQ_PRECHANGE  && freq->old < freq->new) ||
	    (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) ||
	    (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) {
		loops_per_jiffy = cpufreq_scale(global_l_p_j_ref,
						global_l_p_j_ref_freq,
						freq->new);
		per_cpu(cpu_data, cpu).loops_per_jiffy =
			cpufreq_scale(per_cpu(l_p_j_ref, cpu),
				      per_cpu(l_p_j_ref_freq, cpu),
				      freq->new);
	}
	return NOTIFY_OK;
}

static struct notifier_block cpufreq_notifier = {
	.notifier_call = cpufreq_callback,
};

static int __init register_cpufreq_notifier(void)
{
	return cpufreq_register_notifier(&cpufreq_notifier,
					 CPUFREQ_TRANSITION_NOTIFIER);
}
core_initcall(register_cpufreq_notifier);

#endif
@@ -3,7 +3,7 @@
 #
 # Common objects
-obj-y				:= timer.o console.o clock.o common.o
+obj-y				:= timer.o console.o clock.o
 # CPU objects
 obj-$(CONFIG_ARCH_SH7367)	+= setup-sh7367.o clock-sh7367.o intc-sh7367.o
......
@@ -1231,6 +1231,15 @@ static struct i2c_board_info i2c1_devices[] = {
 #define USCCR1 IOMEM(0xE6058144)
 static void __init ap4evb_init(void)
 {
+	struct pm_domain_device domain_devices[] = {
+		{ "A4LC", &lcdc1_device, },
+		{ "A4LC", &lcdc_device, },
+		{ "A4MP", &fsi_device, },
+		{ "A3SP", &sh_mmcif_device, },
+		{ "A3SP", &sdhi0_device, },
+		{ "A3SP", &sdhi1_device, },
+		{ "A4R", &ceu_device, },
+	};
 	u32 srcr4;
 	struct clk *clk;
@@ -1463,14 +1472,8 @@ static void __init ap4evb_init(void)
 	platform_add_devices(ap4evb_devices, ARRAY_SIZE(ap4evb_devices));
-	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &lcdc1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &lcdc_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &fsi_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sh_mmcif_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &ceu_device);
+	rmobile_add_devices_to_domains(domain_devices,
+				       ARRAY_SIZE(domain_devices));
 	hdmi_init_pm_clock();
 	fsi_init_pm_clock();
@@ -1485,6 +1488,6 @@ MACHINE_START(AP4EVB, "ap4evb")
 	.init_irq	= sh7372_init_irq,
 	.handle_irq	= shmobile_handle_irq_intc,
 	.init_machine	= ap4evb_init,
-	.init_late	= shmobile_init_late,
+	.init_late	= sh7372_pm_init_late,
 	.timer		= &shmobile_timer,
MACHINE_END
@@ -1209,10 +1209,10 @@ static void __init eva_init(void)
 	eva_clock_init();
-	rmobile_add_device_to_domain(&r8a7740_pd_a4lc, &lcdc0_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a4lc, &hdmi_lcdc_device);
+	rmobile_add_device_to_domain("A4LC", &lcdc0_device);
+	rmobile_add_device_to_domain("A4LC", &hdmi_lcdc_device);
 	if (usb)
-		rmobile_add_device_to_domain(&r8a7740_pd_a3sp, usb);
+		rmobile_add_device_to_domain("A3SP", usb);
 }
 static void __init eva_earlytimer_init(void)
......
@@ -1412,6 +1412,22 @@ static struct i2c_board_info i2c1_devices[] = {
 #define USCCR1 IOMEM(0xE6058144)
 static void __init mackerel_init(void)
 {
+	struct pm_domain_device domain_devices[] = {
+		{ "A4LC", &lcdc_device, },
+		{ "A4LC", &hdmi_lcdc_device, },
+		{ "A4LC", &meram_device, },
+		{ "A4MP", &fsi_device, },
+		{ "A3SP", &usbhs0_device, },
+		{ "A3SP", &usbhs1_device, },
+		{ "A3SP", &nand_flash_device, },
+		{ "A3SP", &sh_mmcif_device, },
+		{ "A3SP", &sdhi0_device, },
+#if !defined(CONFIG_MMC_SH_MMCIF) && !defined(CONFIG_MMC_SH_MMCIF_MODULE)
+		{ "A3SP", &sdhi1_device, },
+#endif
+		{ "A3SP", &sdhi2_device, },
+		{ "A4R", &ceu_device, },
+	};
 	u32 srcr4;
 	struct clk *clk;
@@ -1626,20 +1642,8 @@ static void __init mackerel_init(void)
 	platform_add_devices(mackerel_devices, ARRAY_SIZE(mackerel_devices));
-	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &lcdc_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &hdmi_lcdc_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &meram_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &fsi_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usbhs0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usbhs1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &nand_flash_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sh_mmcif_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi0_device);
-#if !defined(CONFIG_MMC_SH_MMCIF) && !defined(CONFIG_MMC_SH_MMCIF_MODULE)
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi1_device);
-#endif
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi2_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &ceu_device);
+	rmobile_add_devices_to_domains(domain_devices,
+				       ARRAY_SIZE(domain_devices));
 	hdmi_init_pm_clock();
 	sh7372_pm_init();
@@ -1653,6 +1657,6 @@ MACHINE_START(MACKEREL, "mackerel")
 	.init_irq	= sh7372_init_irq,
 	.handle_irq	= shmobile_handle_irq_intc,
 	.init_machine	= mackerel_init,
-	.init_late	= shmobile_init_late,
+	.init_late	= sh7372_pm_init_late,
 	.timer		= &shmobile_timer,
MACHINE_END
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <mach/common.h>
void __init shmobile_init_late(void)
{
	shmobile_suspend_init();
	shmobile_cpuidle_init();
}
@@ -16,51 +16,38 @@
 #include <asm/cpuidle.h>
 #include <asm/io.h>
-static void shmobile_enter_wfi(void)
+int shmobile_enter_wfi(struct cpuidle_device *dev, struct cpuidle_driver *drv,
+		       int index)
 {
 	cpu_do_idle();
-}
-
-void (*shmobile_cpuidle_modes[CPUIDLE_STATE_MAX])(void) = {
-	shmobile_enter_wfi, /* regular sleep mode */
-};
-
-static int shmobile_cpuidle_enter(struct cpuidle_device *dev,
-				  struct cpuidle_driver *drv,
-				  int index)
-{
-	shmobile_cpuidle_modes[index]();
-
-	return index;
+	return 0;
 }
 static struct cpuidle_device shmobile_cpuidle_dev;
-static struct cpuidle_driver shmobile_cpuidle_driver = {
+static struct cpuidle_driver shmobile_cpuidle_default_driver = {
 	.name			= "shmobile_cpuidle",
 	.owner			= THIS_MODULE,
 	.en_core_tk_irqen	= 1,
 	.states[0]		= ARM_CPUIDLE_WFI_STATE,
+	.states[0].enter	= shmobile_enter_wfi,
 	.safe_state_index	= 0, /* C1 */
 	.state_count		= 1,
 };
-void (*shmobile_cpuidle_setup)(struct cpuidle_driver *drv);
+static struct cpuidle_driver *cpuidle_drv = &shmobile_cpuidle_default_driver;
+
+void shmobile_cpuidle_set_driver(struct cpuidle_driver *drv)
+{
+	cpuidle_drv = drv;
+}
 int shmobile_cpuidle_init(void)
 {
 	struct cpuidle_device *dev = &shmobile_cpuidle_dev;
-	struct cpuidle_driver *drv = &shmobile_cpuidle_driver;
-	int i;
-
-	for (i = 0; i < CPUIDLE_STATE_MAX; i++)
-		drv->states[i].enter = shmobile_cpuidle_enter;
-
-	if (shmobile_cpuidle_setup)
-		shmobile_cpuidle_setup(drv);
-
-	cpuidle_register_driver(drv);
+	cpuidle_register_driver(cpuidle_drv);
-	dev->state_count = drv->state_count;
+	dev->state_count = cpuidle_drv->state_count;
 	cpuidle_register_device(dev);
 	return 0;
......
@@ -13,8 +13,10 @@ extern int shmobile_clk_init(void);
 extern void shmobile_handle_irq_intc(struct pt_regs *);
 extern struct platform_suspend_ops shmobile_suspend_ops;
 struct cpuidle_driver;
-extern void (*shmobile_cpuidle_modes[])(void);
-extern void (*shmobile_cpuidle_setup)(struct cpuidle_driver *drv);
+struct cpuidle_device;
+extern int shmobile_enter_wfi(struct cpuidle_device *dev,
+			      struct cpuidle_driver *drv, int index);
+extern void shmobile_cpuidle_set_driver(struct cpuidle_driver *drv);
 extern void sh7367_init_irq(void);
 extern void sh7367_map_io(void);
@@ -75,8 +77,6 @@ extern void r8a7740_meram_workaround(void);
 extern void r8a7779_register_twd(void);
-extern void shmobile_init_late(void);
-
 #ifdef CONFIG_SUSPEND
 int shmobile_suspend_init(void);
 #else
@@ -100,4 +100,10 @@ static inline int shmobile_cpu_is_dead(unsigned int cpu) { return 1; }
 extern void shmobile_smp_init_cpus(unsigned int ncores);
+static inline void shmobile_init_late(void)
+{
+	shmobile_suspend_init();
+	shmobile_cpuidle_init();
+}
+
 #endif /* __ARCH_MACH_COMMON_H */
@@ -12,6 +12,8 @@
 #include <linux/pm_domain.h>
+#define DEFAULT_DEV_LATENCY_NS	250000
+
 struct platform_device;
 struct rmobile_pm_domain {
@@ -29,16 +31,33 @@ struct rmobile_pm_domain *to_rmobile_pd(struct generic_pm_domain *d)
 	return container_of(d, struct rmobile_pm_domain, genpd);
 }
+struct pm_domain_device {
+	const char *domain_name;
+	struct platform_device *pdev;
+};
+
 #ifdef CONFIG_PM
-extern void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd);
-extern void rmobile_add_device_to_domain(struct rmobile_pm_domain *rmobile_pd,
-					 struct platform_device *pdev);
-extern void rmobile_pm_add_subdomain(struct rmobile_pm_domain *rmobile_pd,
-				     struct rmobile_pm_domain *rmobile_sd);
+extern void rmobile_init_domains(struct rmobile_pm_domain domains[], int num);
+extern void rmobile_add_device_to_domain_td(const char *domain_name,
+					    struct platform_device *pdev,
+					    struct gpd_timing_data *td);
+
+static inline void rmobile_add_device_to_domain(const char *domain_name,
+						struct platform_device *pdev)
+{
+	rmobile_add_device_to_domain_td(domain_name, pdev, NULL);
+}
+
+extern void rmobile_add_devices_to_domains(struct pm_domain_device data[],
+					   int size);
 #else
-#define rmobile_init_pm_domain(pd) do { } while (0)
-#define rmobile_add_device_to_domain(pd, pdev) do { } while (0)
-#define rmobile_pm_add_subdomain(pd, sd) do { } while (0)
+#define rmobile_init_domains(domains, num) do { } while (0)
+#define rmobile_add_device_to_domain_td(name, pdev, td) do { } while (0)
+#define rmobile_add_device_to_domain(name, pdev) do { } while (0)
+
+static inline void rmobile_add_devices_to_domains(struct pm_domain_device d[],
+						  int size) {}
 #endif /* CONFIG_PM */
 #endif /* PM_RMOBILE_H */
@@ -607,9 +607,9 @@ enum {
 };
 #ifdef CONFIG_PM
-extern struct rmobile_pm_domain r8a7740_pd_a4s;
-extern struct rmobile_pm_domain r8a7740_pd_a3sp;
-extern struct rmobile_pm_domain r8a7740_pd_a4lc;
+extern void __init r8a7740_init_pm_domains(void);
+#else
+static inline void r8a7740_init_pm_domains(void) {}
 #endif /* CONFIG_PM */
 #endif /* __ASM_R8A7740_H__ */
@@ -347,17 +347,9 @@ extern int r8a7779_sysc_power_down(struct r8a7779_pm_ch *r8a7779_ch);
 extern int r8a7779_sysc_power_up(struct r8a7779_pm_ch *r8a7779_ch);
 #ifdef CONFIG_PM
-extern struct r8a7779_pm_domain r8a7779_sh4a;
-extern struct r8a7779_pm_domain r8a7779_sgx;
-extern struct r8a7779_pm_domain r8a7779_vdp1;
-extern struct r8a7779_pm_domain r8a7779_impx3;
-
-extern void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd);
-extern void r8a7779_add_device_to_domain(struct r8a7779_pm_domain *r8a7779_pd,
-					 struct platform_device *pdev);
+extern void __init r8a7779_init_pm_domains(void);
 #else
-#define r8a7779_init_pm_domain(pd) do { } while (0)
-#define r8a7779_add_device_to_domain(pd, pdev) do { } while (0)
+static inline void r8a7779_init_pm_domains(void) {}
 #endif /* CONFIG_PM */
 extern struct smp_operations r8a7779_smp_ops;
......
@@ -478,21 +478,17 @@ extern struct clk sh7372_fsibck_clk;
 extern struct clk sh7372_fsidiva_clk;
 extern struct clk sh7372_fsidivb_clk;
-#ifdef CONFIG_PM
-extern struct rmobile_pm_domain sh7372_pd_a4lc;
-extern struct rmobile_pm_domain sh7372_pd_a4mp;
-extern struct rmobile_pm_domain sh7372_pd_d4;
-extern struct rmobile_pm_domain sh7372_pd_a4r;
-extern struct rmobile_pm_domain sh7372_pd_a3rv;
-extern struct rmobile_pm_domain sh7372_pd_a3ri;
-extern struct rmobile_pm_domain sh7372_pd_a4s;
-extern struct rmobile_pm_domain sh7372_pd_a3sp;
-extern struct rmobile_pm_domain sh7372_pd_a3sg;
-#endif /* CONFIG_PM */
-
 extern void sh7372_intcs_suspend(void);
 extern void sh7372_intcs_resume(void);
 extern void sh7372_intca_suspend(void);
 extern void sh7372_intca_resume(void);
+#ifdef CONFIG_PM
+extern void __init sh7372_init_pm_domains(void);
+#else
+static inline void sh7372_init_pm_domains(void) {}
+#endif
+
+extern void __init sh7372_pm_init_late(void);
+
 #endif /* __ASM_SH7372_H__ */
@@ -21,14 +21,6 @@ static int r8a7740_pd_a4s_suspend(void)
 	return -EBUSY;
 }
-struct rmobile_pm_domain r8a7740_pd_a4s = {
-	.genpd.name	= "A4S",
-	.bit_shift	= 10,
-	.gov		= &pm_domain_always_on_gov,
-	.no_debug	= true,
-	.suspend	= r8a7740_pd_a4s_suspend,
-};
-
 static int r8a7740_pd_a3sp_suspend(void)
 {
 	/*
@@ -38,17 +30,31 @@ static int r8a7740_pd_a3sp_suspend(void)
 	return console_suspend_enabled ? 0 : -EBUSY;
 }
-struct rmobile_pm_domain r8a7740_pd_a3sp = {
-	.genpd.name	= "A3SP",
-	.bit_shift	= 11,
-	.gov		= &pm_domain_always_on_gov,
-	.no_debug	= true,
-	.suspend	= r8a7740_pd_a3sp_suspend,
+static struct rmobile_pm_domain r8a7740_pm_domains[] = {
+	{
+		.genpd.name	= "A4S",
+		.bit_shift	= 10,
+		.gov		= &pm_domain_always_on_gov,
+		.no_debug	= true,
+		.suspend	= r8a7740_pd_a4s_suspend,
+	},
+	{
+		.genpd.name	= "A3SP",
+		.bit_shift	= 11,
+		.gov		= &pm_domain_always_on_gov,
+		.no_debug	= true,
+		.suspend	= r8a7740_pd_a3sp_suspend,
+	},
+	{
+		.genpd.name	= "A4LC",
+		.bit_shift	= 1,
+	},
 };
-struct rmobile_pm_domain r8a7740_pd_a4lc = {
-	.genpd.name	= "A4LC",
-	.bit_shift	= 1,
-};
+void __init r8a7740_init_pm_domains(void)
+{
+	rmobile_init_domains(r8a7740_pm_domains, ARRAY_SIZE(r8a7740_pm_domains));
+	pm_genpd_add_subdomain_names("A4S", "A3SP");
+}
 #endif /* CONFIG_PM */
@@ -183,7 +183,7 @@ static bool pd_active_wakeup(struct device *dev)
 	return true;
 }
-void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
+static void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
 {
 	struct generic_pm_domain *genpd = &r8a7779_pd->genpd;
@@ -199,43 +199,44 @@ void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
 	pd_power_up(&r8a7779_pd->genpd);
 }
-void r8a7779_add_device_to_domain(struct r8a7779_pm_domain *r8a7779_pd,
-				  struct platform_device *pdev)
-{
-	struct device *dev = &pdev->dev;
-
-	pm_genpd_add_device(&r8a7779_pd->genpd, dev);
-	if (pm_clk_no_clocks(dev))
-		pm_clk_add(dev, NULL);
-}
-
-struct r8a7779_pm_domain r8a7779_sh4a = {
-	.ch = {
-		.chan_offs = 0x80, /* PWRSR1 .. PWRER1 */
-		.isr_bit = 16, /* SH4A */
-	}
-};
-
-struct r8a7779_pm_domain r8a7779_sgx = {
-	.ch = {
-		.chan_offs = 0xc0, /* PWRSR2 .. PWRER2 */
-		.isr_bit = 20, /* SGX */
-	}
-};
-
-struct r8a7779_pm_domain r8a7779_vdp1 = {
-	.ch = {
-		.chan_offs = 0x100, /* PWRSR3 .. PWRER3 */
-		.isr_bit = 21, /* VDP */
-	}
-};
-
-struct r8a7779_pm_domain r8a7779_impx3 = {
-	.ch = {
-		.chan_offs = 0x140, /* PWRSR4 .. PWRER4 */
-		.isr_bit = 24, /* IMP */
-	}
-};
+static struct r8a7779_pm_domain r8a7779_pm_domains[] = {
+	{
+		.genpd.name = "SH4A",
+		.ch = {
+			.chan_offs = 0x80, /* PWRSR1 .. PWRER1 */
+			.isr_bit = 16, /* SH4A */
+		},
+	},
+	{
+		.genpd.name = "SGX",
+		.ch = {
+			.chan_offs = 0xc0, /* PWRSR2 .. PWRER2 */
+			.isr_bit = 20, /* SGX */
+		},
+	},
+	{
+		.genpd.name = "VDP1",
+		.ch = {
+			.chan_offs = 0x100, /* PWRSR3 .. PWRER3 */
+			.isr_bit = 21, /* VDP */
+		},
+	},
+	{
+		.genpd.name = "IMPX3",
+		.ch = {
+			.chan_offs = 0x140, /* PWRSR4 .. PWRER4 */
+			.isr_bit = 24, /* IMP */
+		},
+	},
+};
+
+void __init r8a7779_init_pm_domains(void)
+{
+	int j;
+
+	for (j = 0; j < ARRAY_SIZE(r8a7779_pm_domains); j++)
+		r8a7779_init_pm_domain(&r8a7779_pm_domains[j]);
+}
 #endif /* CONFIG_PM */
......
@@ -134,7 +134,7 @@ static int rmobile_pd_start_dev(struct device *dev)
 	return ret;
 }
-void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
+static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
 {
 	struct generic_pm_domain *genpd = &rmobile_pd->genpd;
 	struct dev_power_governor *gov = rmobile_pd->gov;
@@ -149,19 +149,38 @@ void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
 	__rmobile_pd_power_up(rmobile_pd, false);
 }
-void rmobile_add_device_to_domain(struct rmobile_pm_domain *rmobile_pd,
-				  struct platform_device *pdev)
+void rmobile_init_domains(struct rmobile_pm_domain domains[], int num)
+{
+	int j;
+
+	for (j = 0; j < num; j++)
+		rmobile_init_pm_domain(&domains[j]);
+}
+
+void rmobile_add_device_to_domain_td(const char *domain_name,
+				     struct platform_device *pdev,
+				     struct gpd_timing_data *td)
 {
 	struct device *dev = &pdev->dev;
-	pm_genpd_add_device(&rmobile_pd->genpd, dev);
+	__pm_genpd_name_add_device(domain_name, dev, td);
 	if (pm_clk_no_clocks(dev))
 		pm_clk_add(dev, NULL);
 }
-void rmobile_pm_add_subdomain(struct rmobile_pm_domain *rmobile_pd,
-			      struct rmobile_pm_domain *rmobile_sd)
+void rmobile_add_devices_to_domains(struct pm_domain_device data[],
+				    int size)
 {
-	pm_genpd_add_subdomain(&rmobile_pd->genpd, &rmobile_sd->genpd);
+	struct gpd_timing_data latencies = {
+		.stop_latency_ns = DEFAULT_DEV_LATENCY_NS,
+		.start_latency_ns = DEFAULT_DEV_LATENCY_NS,
+		.save_state_latency_ns = DEFAULT_DEV_LATENCY_NS,
+		.restore_state_latency_ns = DEFAULT_DEV_LATENCY_NS,
+	};
+	int j;
+
+	for (j = 0; j < size; j++)
+		rmobile_add_device_to_domain_td(data[j].domain_name,
+						data[j].pdev, &latencies);
 }
 #endif /* CONFIG_PM */
@@ -673,12 +673,7 @@ void __init r8a7740_add_standard_devices(void)
 	r8a7740_i2c_workaround(&i2c0_device);
 	r8a7740_i2c_workaround(&i2c1_device);
-	/* PM domain */
-	rmobile_init_pm_domain(&r8a7740_pd_a4s);
-	rmobile_init_pm_domain(&r8a7740_pd_a3sp);
-	rmobile_init_pm_domain(&r8a7740_pd_a4lc);
-
-	rmobile_pm_add_subdomain(&r8a7740_pd_a4s, &r8a7740_pd_a3sp);
+	r8a7740_init_pm_domains();
 	/* add devices */
 	platform_add_devices(r8a7740_early_devices,
@@ -688,16 +683,16 @@ void __init r8a7740_add_standard_devices(void)
 	/* add devices to PM domain */
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif0_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif1_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif2_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif3_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif4_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif5_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif6_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif7_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scifb_device);
-	rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &i2c1_device);
+	rmobile_add_device_to_domain("A3SP", &scif0_device);
+	rmobile_add_device_to_domain("A3SP", &scif1_device);
+	rmobile_add_device_to_domain("A3SP", &scif2_device);
+	rmobile_add_device_to_domain("A3SP", &scif3_device);
+	rmobile_add_device_to_domain("A3SP", &scif4_device);
+	rmobile_add_device_to_domain("A3SP", &scif5_device);
+	rmobile_add_device_to_domain("A3SP", &scif6_device);
+	rmobile_add_device_to_domain("A3SP", &scif7_device);
+	rmobile_add_device_to_domain("A3SP", &scifb_device);
+	rmobile_add_device_to_domain("A3SP", &i2c1_device);
 }
 static void __init r8a7740_earlytimer_init(void)
......
@@ -251,10 +251,7 @@ void __init r8a7779_add_standard_devices(void)
 #endif
 	r8a7779_pm_init();
-	r8a7779_init_pm_domain(&r8a7779_sh4a);
-	r8a7779_init_pm_domain(&r8a7779_sgx);
-	r8a7779_init_pm_domain(&r8a7779_vdp1);
-	r8a7779_init_pm_domain(&r8a7779_impx3);
+	r8a7779_init_pm_domains();
 	platform_add_devices(r8a7779_early_devices,
			     ARRAY_SIZE(r8a7779_early_devices));
......
@@ -1001,21 +1001,34 @@ static struct platform_device *sh7372_late_devices[] __initdata = {
 void __init sh7372_add_standard_devices(void)
 {
-	rmobile_init_pm_domain(&sh7372_pd_a4lc);
-	rmobile_init_pm_domain(&sh7372_pd_a4mp);
-	rmobile_init_pm_domain(&sh7372_pd_d4);
-	rmobile_init_pm_domain(&sh7372_pd_a4r);
-	rmobile_init_pm_domain(&sh7372_pd_a3rv);
-	rmobile_init_pm_domain(&sh7372_pd_a3ri);
-	rmobile_init_pm_domain(&sh7372_pd_a4s);
-	rmobile_init_pm_domain(&sh7372_pd_a3sp);
-	rmobile_init_pm_domain(&sh7372_pd_a3sg);
-
-	rmobile_pm_add_subdomain(&sh7372_pd_a4lc, &sh7372_pd_a3rv);
-	rmobile_pm_add_subdomain(&sh7372_pd_a4r, &sh7372_pd_a4lc);
-
-	rmobile_pm_add_subdomain(&sh7372_pd_a4s, &sh7372_pd_a3sg);
-	rmobile_pm_add_subdomain(&sh7372_pd_a4s, &sh7372_pd_a3sp);
+	struct pm_domain_device domain_devices[] = {
+		{ "A3RV", &vpu_device, },
+		{ "A4MP", &spu0_device, },
+		{ "A4MP", &spu1_device, },
+		{ "A3SP", &scif0_device, },
+		{ "A3SP", &scif1_device, },
+		{ "A3SP", &scif2_device, },
+		{ "A3SP", &scif3_device, },
+		{ "A3SP", &scif4_device, },
+		{ "A3SP", &scif5_device, },
+		{ "A3SP", &scif6_device, },
+		{ "A3SP", &iic1_device, },
+		{ "A3SP", &dma0_device, },
+		{ "A3SP", &dma1_device, },
+		{ "A3SP", &dma2_device, },
+		{ "A3SP", &usb_dma0_device, },
+		{ "A3SP", &usb_dma1_device, },
+		{ "A4R", &iic0_device, },
+		{ "A4R", &veu0_device, },
+		{ "A4R", &veu1_device, },
+		{ "A4R", &veu2_device, },
+		{ "A4R", &veu3_device, },
+		{ "A4R", &jpu_device, },
+		{ "A4R", &tmu00_device, },
+		{ "A4R", &tmu01_device, },
+	};
+
+	sh7372_init_pm_domains();
 	platform_add_devices(sh7372_early_devices,
			     ARRAY_SIZE(sh7372_early_devices));
@@ -1023,30 +1036,8 @@ void __init sh7372_add_standard_devices(void)
 	platform_add_devices(sh7372_late_devices,
			     ARRAY_SIZE(sh7372_late_devices));
-	rmobile_add_device_to_domain(&sh7372_pd_a3rv, &vpu_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &spu0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &spu1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif2_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif3_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif4_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif5_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif6_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &iic1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &dma0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &dma1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &dma2_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usb_dma0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usb_dma1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &iic0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu0_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu1_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu2_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu3_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &jpu_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &tmu00_device);
-	rmobile_add_device_to_domain(&sh7372_pd_a4r, &tmu01_device);
+	rmobile_add_devices_to_domains(domain_devices,
+				       ARRAY_SIZE(domain_devices));
 }
 static void __init sh7372_earlytimer_init(void)
......
@@ -248,6 +248,9 @@
 #define MSR_IA32_PERF_STATUS		0x00000198
 #define MSR_IA32_PERF_CTL		0x00000199
+#define MSR_AMD_PSTATE_DEF_BASE		0xc0010064
+#define MSR_AMD_PERF_STATUS		0xc0010063
+#define MSR_AMD_PERF_CTL		0xc0010062
 #define MSR_IA32_MPERF			0x000000e7
 #define MSR_IA32_APERF			0x000000e8
......
@@ -475,7 +475,7 @@ static __ref int acpi_processor_start(struct acpi_processor *pr)
 	acpi_processor_get_limit_info(pr);
 	if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver)
-		acpi_processor_power_init(pr, device);
+		acpi_processor_power_init(pr);
 	pr->cdev = thermal_cooling_device_register("Processor", device,
						   &processor_cooling_ops);
@@ -509,7 +509,7 @@ static __ref int acpi_processor_start(struct acpi_processor *pr)
 err_thermal_unregister:
 	thermal_cooling_device_unregister(pr->cdev);
 err_power_exit:
-	acpi_processor_power_exit(pr, device);
+	acpi_processor_power_exit(pr);
 	return result;
 }
@@ -620,7 +620,7 @@ static int acpi_processor_remove(struct acpi_device *device, int type)
 		return -EINVAL;
 	}
-	acpi_processor_power_exit(pr, device);
+	acpi_processor_power_exit(pr);
 	sysfs_remove_link(&device->dev.kobj, "sysdev");
@@ -905,8 +905,6 @@ static int __init acpi_processor_init(void)
 	if (acpi_disabled)
 		return 0;
-	memset(&errata, 0, sizeof(errata));
-
 	result = acpi_bus_register_driver(&acpi_processor_driver);
 	if (result < 0)
 		return result;
......
@@ -79,6 +79,8 @@ module_param(bm_check_disable, uint, 0000);
 static unsigned int latency_factor __read_mostly = 2;
 module_param(latency_factor, uint, 0644);
+static DEFINE_PER_CPU(struct cpuidle_device *, acpi_cpuidle_device);
+
 static int disabled_by_idle_boot_param(void)
 {
 	return boot_option_idle_override == IDLE_POLL ||
@@ -483,8 +485,6 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
 		if (obj->type != ACPI_TYPE_INTEGER)
 			continue;
-		cx.power = obj->integer.value;
-
 		current_count++;
 		memcpy(&(pr->power.states[current_count]), &cx, sizeof(cx));
@@ -1000,7 +1000,7 @@ static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr)
 	int i, count = CPUIDLE_DRIVER_STATE_START;
 	struct acpi_processor_cx *cx;
 	struct cpuidle_state_usage *state_usage;
-	struct cpuidle_device *dev = &pr->power.dev;
+	struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id);
 	if (!pr->flags.power_setup_done)
 		return -EINVAL;
@@ -1132,6 +1132,7 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
 int acpi_processor_hotplug(struct acpi_processor *pr)
 {
 	int ret = 0;
+	struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id);
 	if (disabled_by_idle_boot_param())
 		return 0;
@@ -1147,11 +1148,11 @@ int acpi_processor_hotplug(struct acpi_processor *pr)
 		return -ENODEV;
 	cpuidle_pause_and_lock();
-	cpuidle_disable_device(&pr->power.dev);
+	cpuidle_disable_device(dev);
 	acpi_processor_get_power_info(pr);
 	if (pr->flags.power) {
 		acpi_processor_setup_cpuidle_cx(pr);
-		ret = cpuidle_enable_device(&pr->power.dev);
+		ret = cpuidle_enable_device(dev);
 	}
 	cpuidle_resume_and_unlock();
@@ -1162,6 +1163,7 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 {
 	int cpu;
 	struct acpi_processor *_pr;
+	struct cpuidle_device *dev;
 	if (disabled_by_idle_boot_param())
 		return 0;
@@ -1192,7 +1194,8 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 		_pr = per_cpu(processors, cpu);
 		if (!_pr || !_pr->flags.power_setup_done)
 			continue;
-		cpuidle_disable_device(&_pr->power.dev);
+		dev = per_cpu(acpi_cpuidle_device, cpu);
+		cpuidle_disable_device(dev);
 	}
 	/* Populate Updated C-state information */
@@ -1206,7 +1209,8 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 		acpi_processor_get_power_info(_pr);
 		if (_pr->flags.power) {
 			acpi_processor_setup_cpuidle_cx(_pr);
-			cpuidle_enable_device(&_pr->power.dev);
+			dev = per_cpu(acpi_cpuidle_device, cpu);
+			cpuidle_enable_device(dev);
 		}
 	}
 	put_online_cpus();
@@ -1218,11 +1222,11 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 static int acpi_processor_registered;
-int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
-					struct acpi_device *device)
+int __cpuinit acpi_processor_power_init(struct acpi_processor *pr)
 {
 	acpi_status status = 0;
 	int retval;
+	struct cpuidle_device *dev;
 	static int first_run;
 	if (disabled_by_idle_boot_param())
@@ -1268,11 +1272,18 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
 		printk(KERN_DEBUG "ACPI: %s registered with cpuidle\n",
 		       acpi_idle_driver.name);
 	}
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+	per_cpu(acpi_cpuidle_device, pr->id) = dev;
+
+	acpi_processor_setup_cpuidle_cx(pr);
+
 	/* Register per-cpu cpuidle_device. Cpuidle driver
 	 * must already be registered before registering device
 	 */
-	acpi_processor_setup_cpuidle_cx(pr);
-	retval = cpuidle_register_device(&pr->power.dev);
+	retval = cpuidle_register_device(dev);
 	if (retval) {
 		if (acpi_processor_registered == 0)
 			cpuidle_unregister_driver(&acpi_idle_driver);
@@ -1283,14 +1294,15 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
 	return 0;
 }
-int acpi_processor_power_exit(struct acpi_processor *pr,
-			      struct acpi_device *device)
+int acpi_processor_power_exit(struct acpi_processor *pr)
 {
+	struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id);
+
 	if (disabled_by_idle_boot_param())
 		return 0;
 	if (pr->flags.power) {
-		cpuidle_unregister_device(&pr->power.dev);
+		cpuidle_unregister_device(dev);
 		acpi_processor_registered--;
 		if (acpi_processor_registered == 0)
 			cpuidle_unregister_driver(&acpi_idle_driver);
......
@@ -324,6 +324,34 @@ static int acpi_processor_get_performance_control(struct acpi_processor *pr)
 	return result;
 }
+#ifdef CONFIG_X86
+/*
+ * Some AMDs have 50MHz frequency multiples, but only provide 100MHz rounding
+ * in their ACPI data. Calculate the real values and fix up the _PSS data.
+ */
+static void amd_fixup_frequency(struct acpi_processor_px *px, int i)
+{
+	u32 hi, lo, fid, did;
+	int index = px->control & 0x00000007;
+
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+		return;
+
+	if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
+	    || boot_cpu_data.x86 == 0x11) {
+		rdmsr(MSR_AMD_PSTATE_DEF_BASE + index, lo, hi);
+		fid = lo & 0x3f;
+		did = (lo >> 6) & 7;
+
+		if (boot_cpu_data.x86 == 0x10)
+			px->core_frequency = (100 * (fid + 0x10)) >> did;
+		else
+			px->core_frequency = (100 * (fid + 8)) >> did;
+	}
+}
+#else
+static void amd_fixup_frequency(struct acpi_processor_px *px, int i) {};
+#endif
+
 static int acpi_processor_get_performance_states(struct acpi_processor *pr)
 {
 	int result = 0;
@@ -379,6 +407,8 @@ static int acpi_processor_get_performance_states(struct acpi_processor *pr)
 			goto end;
 		}
+		amd_fixup_frequency(px, i);
+
 		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 				  "State [%d]: core_frequency[%d] power[%d] transition_latency[%d] bus_master_latency[%d] control[0x%x] status[0x%x]\n",
 				  i,
......
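To make the amd_fixup_frequency() arithmetic above concrete (the numbers
are illustrative, not read from real hardware): on a family 0x10 part, a
P-state MSR reporting fid = 2 and did = 0 decodes to 100 * (2 + 0x10) =
1800 MHz, and the same fid with did = 1 gives 1800 >> 1 = 900 MHz. The
50 MHz granularity mentioned in the comment comes from adjacent fid
values being 100 MHz apart before the divisor is applied.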
@@ -23,6 +23,7 @@
 #include <linux/idr.h>
 #include "base.h"
+#include "power/power.h"
 /* For automatically allocated device IDs */
 static DEFINE_IDA(platform_devid_ida);
@@ -983,6 +984,7 @@ void __init early_platform_add_devices(struct platform_device **devs, int num)
 		dev = &devs[i]->dev;
 		if (!dev->devres_head.next) {
+			pm_runtime_early_init(dev);
 			INIT_LIST_HEAD(&dev->devres_head);
 			list_add_tail(&dev->devres_head,
				      &early_platform_device_list);
......
...@@ -57,20 +57,17 @@ static pm_message_t pm_transition; ...@@ -57,20 +57,17 @@ static pm_message_t pm_transition;
static int async_error; static int async_error;
/** /**
* device_pm_init - Initialize the PM-related part of a device object. * device_pm_sleep_init - Initialize system suspend-related device fields.
* @dev: Device object being initialized. * @dev: Device object being initialized.
*/ */
void device_pm_init(struct device *dev) void device_pm_sleep_init(struct device *dev)
{ {
dev->power.is_prepared = false; dev->power.is_prepared = false;
dev->power.is_suspended = false; dev->power.is_suspended = false;
init_completion(&dev->power.completion); init_completion(&dev->power.completion);
complete_all(&dev->power.completion); complete_all(&dev->power.completion);
dev->power.wakeup = NULL; dev->power.wakeup = NULL;
spin_lock_init(&dev->power.lock);
pm_runtime_init(dev);
INIT_LIST_HEAD(&dev->power.entry); INIT_LIST_HEAD(&dev->power.entry);
dev->power.power_state = PMSG_INVALID;
} }
/** /**
...@@ -408,6 +405,9 @@ static int device_resume_noirq(struct device *dev, pm_message_t state) ...@@ -408,6 +405,9 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
TRACE_DEVICE(dev); TRACE_DEVICE(dev);
TRACE_RESUME(0); TRACE_RESUME(0);
if (dev->power.syscore)
goto Out;
if (dev->pm_domain) { if (dev->pm_domain) {
info = "noirq power domain "; info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state); callback = pm_noirq_op(&dev->pm_domain->ops, state);
...@@ -429,6 +429,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state) ...@@ -429,6 +429,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info); error = dpm_run_callback(callback, dev, state, info);
Out:
TRACE_RESUME(error); TRACE_RESUME(error);
return error; return error;
} }
...@@ -486,6 +487,9 @@ static int device_resume_early(struct device *dev, pm_message_t state) ...@@ -486,6 +487,9 @@ static int device_resume_early(struct device *dev, pm_message_t state)
TRACE_DEVICE(dev); TRACE_DEVICE(dev);
TRACE_RESUME(0); TRACE_RESUME(0);
if (dev->power.syscore)
goto Out;
if (dev->pm_domain) { if (dev->pm_domain) {
info = "early power domain "; info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state); callback = pm_late_early_op(&dev->pm_domain->ops, state);
...@@ -507,6 +511,7 @@ static int device_resume_early(struct device *dev, pm_message_t state) ...@@ -507,6 +511,7 @@ static int device_resume_early(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info); error = dpm_run_callback(callback, dev, state, info);
Out:
TRACE_RESUME(error); TRACE_RESUME(error);
return error; return error;
} }
...@@ -565,11 +570,13 @@ static int device_resume(struct device *dev, pm_message_t state, bool async) ...@@ -565,11 +570,13 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
pm_callback_t callback = NULL; pm_callback_t callback = NULL;
char *info = NULL; char *info = NULL;
int error = 0; int error = 0;
bool put = false;
TRACE_DEVICE(dev); TRACE_DEVICE(dev);
TRACE_RESUME(0); TRACE_RESUME(0);
if (dev->power.syscore)
goto Complete;
dpm_wait(dev->parent, async); dpm_wait(dev->parent, async);
device_lock(dev); device_lock(dev);
...@@ -583,7 +590,6 @@ static int device_resume(struct device *dev, pm_message_t state, bool async) ...@@ -583,7 +590,6 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
goto Unlock; goto Unlock;
pm_runtime_enable(dev); pm_runtime_enable(dev);
put = true;
if (dev->pm_domain) { if (dev->pm_domain) {
info = "power domain "; info = "power domain ";
...@@ -632,13 +638,12 @@ static int device_resume(struct device *dev, pm_message_t state, bool async) ...@@ -632,13 +638,12 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
Unlock: Unlock:
device_unlock(dev); device_unlock(dev);
Complete:
complete_all(&dev->power.completion); complete_all(&dev->power.completion);
TRACE_RESUME(error); TRACE_RESUME(error);
if (put)
pm_runtime_put_sync(dev);
return error; return error;
} }
...@@ -722,6 +727,9 @@ static void device_complete(struct device *dev, pm_message_t state) ...@@ -722,6 +727,9 @@ static void device_complete(struct device *dev, pm_message_t state)
void (*callback)(struct device *) = NULL; void (*callback)(struct device *) = NULL;
char *info = NULL; char *info = NULL;
if (dev->power.syscore)
return;
device_lock(dev); device_lock(dev);
if (dev->pm_domain) { if (dev->pm_domain) {
...@@ -749,6 +757,8 @@ static void device_complete(struct device *dev, pm_message_t state) ...@@ -749,6 +757,8 @@ static void device_complete(struct device *dev, pm_message_t state)
} }
device_unlock(dev); device_unlock(dev);
pm_runtime_put_sync(dev);
} }
/** /**
...@@ -834,6 +844,9 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state) ...@@ -834,6 +844,9 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
pm_callback_t callback = NULL; pm_callback_t callback = NULL;
char *info = NULL; char *info = NULL;
if (dev->power.syscore)
return 0;
if (dev->pm_domain) { if (dev->pm_domain) {
info = "noirq power domain "; info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state); callback = pm_noirq_op(&dev->pm_domain->ops, state);
...@@ -917,6 +930,9 @@ static int device_suspend_late(struct device *dev, pm_message_t state) ...@@ -917,6 +930,9 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
pm_callback_t callback = NULL; pm_callback_t callback = NULL;
char *info = NULL; char *info = NULL;
if (dev->power.syscore)
return 0;
if (dev->pm_domain) { if (dev->pm_domain) {
info = "late power domain "; info = "late power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state); callback = pm_late_early_op(&dev->pm_domain->ops, state);
...@@ -996,7 +1012,7 @@ int dpm_suspend_end(pm_message_t state) ...@@ -996,7 +1012,7 @@ int dpm_suspend_end(pm_message_t state)
error = dpm_suspend_noirq(state); error = dpm_suspend_noirq(state);
if (error) { if (error) {
dpm_resume_early(state); dpm_resume_early(resume_event(state));
return error; return error;
} }
...@@ -1043,16 +1059,23 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async) ...@@ -1043,16 +1059,23 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
if (async_error) if (async_error)
goto Complete; goto Complete;
pm_runtime_get_noresume(dev); /*
* If a device configured to wake up the system from sleep states
* has been suspended at run time and there's a resume request pending
* for it, this is equivalent to the device signaling wakeup, so the
* system suspend operation should be aborted.
*/
if (pm_runtime_barrier(dev) && device_may_wakeup(dev)) if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
pm_wakeup_event(dev, 0); pm_wakeup_event(dev, 0);
if (pm_wakeup_pending()) { if (pm_wakeup_pending()) {
pm_runtime_put_sync(dev);
async_error = -EBUSY; async_error = -EBUSY;
goto Complete; goto Complete;
} }
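/*
 * Illustrative timeline of the race handled above (hypothetical device):
 * a runtime-suspended USB keyboard with remote wakeup armed gets a
 * keypress just as system suspend starts; pm_runtime_barrier() drains
 * the pending resume request and returns true, the event is recorded
 * via pm_wakeup_event(), and pm_wakeup_pending() then aborts the
 * transition with -EBUSY.
 */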
if (dev->power.syscore)
goto Complete;
device_lock(dev); device_lock(dev);
if (dev->pm_domain) { if (dev->pm_domain) {
...@@ -1111,12 +1134,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async) ...@@ -1111,12 +1134,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
Complete: Complete:
complete_all(&dev->power.completion); complete_all(&dev->power.completion);
if (error) { if (error)
pm_runtime_put_sync(dev);
async_error = error; async_error = error;
} else if (dev->power.is_suspended) { else if (dev->power.is_suspended)
__pm_runtime_disable(dev, false); __pm_runtime_disable(dev, false);
}
return error; return error;
} }
...@@ -1209,6 +1230,17 @@ static int device_prepare(struct device *dev, pm_message_t state) ...@@ -1209,6 +1230,17 @@ static int device_prepare(struct device *dev, pm_message_t state)
char *info = NULL; char *info = NULL;
int error = 0; int error = 0;
if (dev->power.syscore)
return 0;
/*
* If a device's parent goes into runtime suspend at the wrong time,
* it won't be possible to resume the device. To prevent this we
* block runtime suspend here, during the prepare phase, and allow
* it again during the complete phase.
*/
pm_runtime_get_noresume(dev);
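/*
 * The reference taken above is dropped by the pm_runtime_put_sync()
 * added to device_complete(), so runtime suspend stays blocked for the
 * entire prepare-to-complete window of the transition.
 */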
device_lock(dev); device_lock(dev);
dev->power.wakeup_path = device_may_wakeup(dev); dev->power.wakeup_path = device_may_wakeup(dev);
......
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/rculist.h> #include <linux/rculist.h>
#include <linux/rcupdate.h> #include <linux/rcupdate.h>
#include <linux/opp.h> #include <linux/opp.h>
#include <linux/of.h>
/* /*
* Internal data structure organization with the OPP layer library is as * Internal data structure organization with the OPP layer library is as
...@@ -674,3 +675,49 @@ struct srcu_notifier_head *opp_get_notifier(struct device *dev) ...@@ -674,3 +675,49 @@ struct srcu_notifier_head *opp_get_notifier(struct device *dev)
return &dev_opp->head; return &dev_opp->head;
} }
#ifdef CONFIG_OF
/**
* of_init_opp_table() - Initialize opp table from device tree
* @dev: device pointer used to lookup device OPPs.
*
* Register the initial OPP table with the OPP library for given device.
*/
int of_init_opp_table(struct device *dev)
{
const struct property *prop;
const __be32 *val;
int nr;
prop = of_find_property(dev->of_node, "operating-points", NULL);
if (!prop)
return -ENODEV;
if (!prop->value)
return -ENODATA;
/*
* Each OPP is a set of tuples consisting of frequency and
* voltage like <freq-kHz vol-uV>.
*/
nr = prop->length / sizeof(u32);
if (nr % 2) {
dev_err(dev, "%s: Invalid OPP list\n", __func__);
return -EINVAL;
}
val = prop->value;
while (nr) {
unsigned long freq = be32_to_cpup(val++) * 1000;
unsigned long volt = be32_to_cpup(val++);
/* consume the pair even when opp_add() fails so the walk terminates */
if (opp_add(dev, freq, volt))
dev_warn(dev, "%s: Failed to add OPP %ld\n",
__func__, freq);
nr -= 2;
}
return 0;
}
#endif
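A usage sketch for the new helper (hypothetical consumer; the driver name and the follow-up call are illustrative, assuming a node carrying an "operating-points" property of <kHz uV> pairs as parsed above):
#include <linux/module.h>
#include <linux/opp.h>
#include <linux/platform_device.h>
/* hypothetical probe: populate the OPP library from the device tree */
static int foo_cpufreq_probe(struct platform_device *pdev)
{
	int ret = of_init_opp_table(&pdev->dev);
	if (ret)	/* -ENODEV: no property, -ENODATA: empty, -EINVAL: odd count */
		return ret;
	/* the table can now be queried, e.g. via opp_find_freq_ceil() */
	return 0;
}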
#include <linux/pm_qos.h> #include <linux/pm_qos.h>
static inline void device_pm_init_common(struct device *dev)
{
if (!dev->power.early_init) {
spin_lock_init(&dev->power.lock);
dev->power.power_state = PMSG_INVALID;
dev->power.early_init = true;
}
}
#ifdef CONFIG_PM_RUNTIME #ifdef CONFIG_PM_RUNTIME
static inline void pm_runtime_early_init(struct device *dev)
{
dev->power.disable_depth = 1;
device_pm_init_common(dev);
}
extern void pm_runtime_init(struct device *dev); extern void pm_runtime_init(struct device *dev);
extern void pm_runtime_remove(struct device *dev); extern void pm_runtime_remove(struct device *dev);
#else /* !CONFIG_PM_RUNTIME */ #else /* !CONFIG_PM_RUNTIME */
static inline void pm_runtime_early_init(struct device *dev)
{
device_pm_init_common(dev);
}
static inline void pm_runtime_init(struct device *dev) {} static inline void pm_runtime_init(struct device *dev) {}
static inline void pm_runtime_remove(struct device *dev) {} static inline void pm_runtime_remove(struct device *dev) {}
...@@ -25,7 +45,7 @@ static inline struct device *to_device(struct list_head *entry) ...@@ -25,7 +45,7 @@ static inline struct device *to_device(struct list_head *entry)
return container_of(entry, struct device, power.entry); return container_of(entry, struct device, power.entry);
} }
extern void device_pm_init(struct device *dev); extern void device_pm_sleep_init(struct device *dev);
extern void device_pm_add(struct device *); extern void device_pm_add(struct device *);
extern void device_pm_remove(struct device *); extern void device_pm_remove(struct device *);
extern void device_pm_move_before(struct device *, struct device *); extern void device_pm_move_before(struct device *, struct device *);
...@@ -34,12 +54,7 @@ extern void device_pm_move_last(struct device *); ...@@ -34,12 +54,7 @@ extern void device_pm_move_last(struct device *);
#else /* !CONFIG_PM_SLEEP */ #else /* !CONFIG_PM_SLEEP */
static inline void device_pm_init(struct device *dev) static inline void device_pm_sleep_init(struct device *dev) {}
{
spin_lock_init(&dev->power.lock);
dev->power.power_state = PMSG_INVALID;
pm_runtime_init(dev);
}
static inline void device_pm_add(struct device *dev) static inline void device_pm_add(struct device *dev)
{ {
...@@ -60,6 +75,13 @@ static inline void device_pm_move_last(struct device *dev) {} ...@@ -60,6 +75,13 @@ static inline void device_pm_move_last(struct device *dev) {}
#endif /* !CONFIG_PM_SLEEP */ #endif /* !CONFIG_PM_SLEEP */
static inline void device_pm_init(struct device *dev)
{
device_pm_init_common(dev);
device_pm_sleep_init(dev);
pm_runtime_init(dev);
}
#ifdef CONFIG_PM #ifdef CONFIG_PM
/* /*
......
...@@ -509,6 +509,9 @@ static int rpm_resume(struct device *dev, int rpmflags) ...@@ -509,6 +509,9 @@ static int rpm_resume(struct device *dev, int rpmflags)
repeat: repeat:
if (dev->power.runtime_error) if (dev->power.runtime_error)
retval = -EINVAL; retval = -EINVAL;
else if (dev->power.disable_depth == 1 && dev->power.is_suspended
&& dev->power.runtime_status == RPM_ACTIVE)
retval = 1;
else if (dev->power.disable_depth > 0) else if (dev->power.disable_depth > 0)
retval = -EACCES; retval = -EACCES;
if (retval) if (retval)
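A hedged sketch of where the new branch applies (hypothetical driver): between __device_suspend(), which disables runtime PM once, and device_resume(), which re-enables it, disable_depth is 1 and is_suspended is set, so an early-resume callback can now resume an RPM_ACTIVE device:
#include <linux/pm_runtime.h>
static int foo_resume_early(struct device *dev)
{
	/* returns 1 ("already active") here instead of -EACCES */
	int ret = pm_runtime_resume(dev);
	return ret < 0 ? ret : 0;
}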
......
...@@ -127,6 +127,8 @@ EXPORT_SYMBOL_GPL(wakeup_source_destroy); ...@@ -127,6 +127,8 @@ EXPORT_SYMBOL_GPL(wakeup_source_destroy);
*/ */
void wakeup_source_add(struct wakeup_source *ws) void wakeup_source_add(struct wakeup_source *ws)
{ {
unsigned long flags;
if (WARN_ON(!ws)) if (WARN_ON(!ws))
return; return;
...@@ -135,9 +137,9 @@ void wakeup_source_add(struct wakeup_source *ws) ...@@ -135,9 +137,9 @@ void wakeup_source_add(struct wakeup_source *ws)
ws->active = false; ws->active = false;
ws->last_time = ktime_get(); ws->last_time = ktime_get();
spin_lock_irq(&events_lock); spin_lock_irqsave(&events_lock, flags);
list_add_rcu(&ws->entry, &wakeup_sources); list_add_rcu(&ws->entry, &wakeup_sources);
spin_unlock_irq(&events_lock); spin_unlock_irqrestore(&events_lock, flags);
} }
EXPORT_SYMBOL_GPL(wakeup_source_add); EXPORT_SYMBOL_GPL(wakeup_source_add);
...@@ -147,12 +149,14 @@ EXPORT_SYMBOL_GPL(wakeup_source_add); ...@@ -147,12 +149,14 @@ EXPORT_SYMBOL_GPL(wakeup_source_add);
*/ */
void wakeup_source_remove(struct wakeup_source *ws) void wakeup_source_remove(struct wakeup_source *ws)
{ {
unsigned long flags;
if (WARN_ON(!ws)) if (WARN_ON(!ws))
return; return;
spin_lock_irq(&events_lock); spin_lock_irqsave(&events_lock, flags);
list_del_rcu(&ws->entry); list_del_rcu(&ws->entry);
spin_unlock_irq(&events_lock); spin_unlock_irqrestore(&events_lock, flags);
synchronize_rcu(); synchronize_rcu();
} }
EXPORT_SYMBOL_GPL(wakeup_source_remove); EXPORT_SYMBOL_GPL(wakeup_source_remove);
...@@ -649,6 +653,31 @@ void pm_wakeup_event(struct device *dev, unsigned int msec) ...@@ -649,6 +653,31 @@ void pm_wakeup_event(struct device *dev, unsigned int msec)
} }
EXPORT_SYMBOL_GPL(pm_wakeup_event); EXPORT_SYMBOL_GPL(pm_wakeup_event);
static void print_active_wakeup_sources(void)
{
struct wakeup_source *ws;
int active = 0;
struct wakeup_source *last_activity_ws = NULL;
rcu_read_lock();
list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
if (ws->active) {
pr_info("active wakeup source: %s\n", ws->name);
active = 1;
} else if (!active &&
(!last_activity_ws ||
ktime_to_ns(ws->last_time) >
ktime_to_ns(last_activity_ws->last_time))) {
last_activity_ws = ws;
}
}
if (!active && last_activity_ws)
pr_info("last active wakeup source: %s\n",
last_activity_ws->name);
rcu_read_unlock();
}
/** /**
* pm_wakeup_pending - Check if power transition in progress should be aborted. * pm_wakeup_pending - Check if power transition in progress should be aborted.
* *
...@@ -671,6 +700,10 @@ bool pm_wakeup_pending(void) ...@@ -671,6 +700,10 @@ bool pm_wakeup_pending(void)
events_check_enabled = !ret; events_check_enabled = !ret;
} }
spin_unlock_irqrestore(&events_lock, flags); spin_unlock_irqrestore(&events_lock, flags);
if (ret)
print_active_wakeup_sources();
return ret; return ret;
} }
...@@ -723,15 +756,16 @@ bool pm_get_wakeup_count(unsigned int *count, bool block) ...@@ -723,15 +756,16 @@ bool pm_get_wakeup_count(unsigned int *count, bool block)
bool pm_save_wakeup_count(unsigned int count) bool pm_save_wakeup_count(unsigned int count)
{ {
unsigned int cnt, inpr; unsigned int cnt, inpr;
unsigned long flags;
events_check_enabled = false; events_check_enabled = false;
spin_lock_irq(&events_lock); spin_lock_irqsave(&events_lock, flags);
split_counters(&cnt, &inpr); split_counters(&cnt, &inpr);
if (cnt == count && inpr == 0) { if (cnt == count && inpr == 0) {
saved_count = count; saved_count = count;
events_check_enabled = true; events_check_enabled = true;
} }
spin_unlock_irq(&events_lock); spin_unlock_irqrestore(&events_lock, flags);
return events_check_enabled; return events_check_enabled;
} }
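A minimal sketch of what the irqsave conversions permit (hypothetical caller; previously the internal spin_unlock_irq() would have re-enabled interrupts behind a caller holding its own irq-disabled lock):
#include <linux/pm_wakeup.h>
#include <linux/spinlock.h>
static DEFINE_SPINLOCK(example_lock);
static void example_add_source(struct wakeup_source *ws)
{
	unsigned long flags;
	spin_lock_irqsave(&example_lock, flags);
	wakeup_source_add(ws);	/* now safe with interrupts disabled */
	spin_unlock_irqrestore(&example_lock, flags);
}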
......
...@@ -33,6 +33,7 @@ ...@@ -33,6 +33,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/pm_domain.h> #include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
struct sh_cmt_priv { struct sh_cmt_priv {
void __iomem *mapbase; void __iomem *mapbase;
...@@ -52,6 +53,7 @@ struct sh_cmt_priv { ...@@ -52,6 +53,7 @@ struct sh_cmt_priv {
struct clock_event_device ced; struct clock_event_device ced;
struct clocksource cs; struct clocksource cs;
unsigned long total_cycles; unsigned long total_cycles;
bool cs_enabled;
}; };
static DEFINE_RAW_SPINLOCK(sh_cmt_lock); static DEFINE_RAW_SPINLOCK(sh_cmt_lock);
...@@ -155,6 +157,9 @@ static int sh_cmt_enable(struct sh_cmt_priv *p, unsigned long *rate) ...@@ -155,6 +157,9 @@ static int sh_cmt_enable(struct sh_cmt_priv *p, unsigned long *rate)
{ {
int k, ret; int k, ret;
pm_runtime_get_sync(&p->pdev->dev);
dev_pm_syscore_device(&p->pdev->dev, true);
/* enable clock */ /* enable clock */
ret = clk_enable(p->clk); ret = clk_enable(p->clk);
if (ret) { if (ret) {
...@@ -221,6 +226,9 @@ static void sh_cmt_disable(struct sh_cmt_priv *p) ...@@ -221,6 +226,9 @@ static void sh_cmt_disable(struct sh_cmt_priv *p)
/* stop clock */ /* stop clock */
clk_disable(p->clk); clk_disable(p->clk);
dev_pm_syscore_device(&p->pdev->dev, false);
pm_runtime_put(&p->pdev->dev);
} }
/* private flags */ /* private flags */
...@@ -451,22 +459,42 @@ static int sh_cmt_clocksource_enable(struct clocksource *cs) ...@@ -451,22 +459,42 @@ static int sh_cmt_clocksource_enable(struct clocksource *cs)
int ret; int ret;
struct sh_cmt_priv *p = cs_to_sh_cmt(cs); struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
WARN_ON(p->cs_enabled);
p->total_cycles = 0; p->total_cycles = 0;
ret = sh_cmt_start(p, FLAG_CLOCKSOURCE); ret = sh_cmt_start(p, FLAG_CLOCKSOURCE);
if (!ret) if (!ret) {
__clocksource_updatefreq_hz(cs, p->rate); __clocksource_updatefreq_hz(cs, p->rate);
p->cs_enabled = true;
}
return ret; return ret;
} }
static void sh_cmt_clocksource_disable(struct clocksource *cs) static void sh_cmt_clocksource_disable(struct clocksource *cs)
{ {
sh_cmt_stop(cs_to_sh_cmt(cs), FLAG_CLOCKSOURCE); struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
WARN_ON(!p->cs_enabled);
sh_cmt_stop(p, FLAG_CLOCKSOURCE);
p->cs_enabled = false;
}
static void sh_cmt_clocksource_suspend(struct clocksource *cs)
{
struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
sh_cmt_stop(p, FLAG_CLOCKSOURCE);
pm_genpd_syscore_poweroff(&p->pdev->dev);
} }
static void sh_cmt_clocksource_resume(struct clocksource *cs) static void sh_cmt_clocksource_resume(struct clocksource *cs)
{ {
sh_cmt_start(cs_to_sh_cmt(cs), FLAG_CLOCKSOURCE); struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
pm_genpd_syscore_poweron(&p->pdev->dev);
sh_cmt_start(p, FLAG_CLOCKSOURCE);
} }
static int sh_cmt_register_clocksource(struct sh_cmt_priv *p, static int sh_cmt_register_clocksource(struct sh_cmt_priv *p,
...@@ -479,7 +507,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_priv *p, ...@@ -479,7 +507,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_priv *p,
cs->read = sh_cmt_clocksource_read; cs->read = sh_cmt_clocksource_read;
cs->enable = sh_cmt_clocksource_enable; cs->enable = sh_cmt_clocksource_enable;
cs->disable = sh_cmt_clocksource_disable; cs->disable = sh_cmt_clocksource_disable;
cs->suspend = sh_cmt_clocksource_disable; cs->suspend = sh_cmt_clocksource_suspend;
cs->resume = sh_cmt_clocksource_resume; cs->resume = sh_cmt_clocksource_resume;
cs->mask = CLOCKSOURCE_MASK(sizeof(unsigned long) * 8); cs->mask = CLOCKSOURCE_MASK(sizeof(unsigned long) * 8);
cs->flags = CLOCK_SOURCE_IS_CONTINUOUS; cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
...@@ -562,6 +590,16 @@ static int sh_cmt_clock_event_next(unsigned long delta, ...@@ -562,6 +590,16 @@ static int sh_cmt_clock_event_next(unsigned long delta,
return 0; return 0;
} }
static void sh_cmt_clock_event_suspend(struct clock_event_device *ced)
{
pm_genpd_syscore_poweroff(&ced_to_sh_cmt(ced)->pdev->dev);
}
static void sh_cmt_clock_event_resume(struct clock_event_device *ced)
{
pm_genpd_syscore_poweron(&ced_to_sh_cmt(ced)->pdev->dev);
}
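/*
 * Presumably these hooks bracket the timer's power domain across system
 * sleep: pm_genpd_syscore_poweroff() tells the generic PM domain core
 * that this syscore device is quiescent, so its domain may be powered
 * down, and pm_genpd_syscore_poweron() reverses that on resume.
 */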
static void sh_cmt_register_clockevent(struct sh_cmt_priv *p, static void sh_cmt_register_clockevent(struct sh_cmt_priv *p,
char *name, unsigned long rating) char *name, unsigned long rating)
{ {
...@@ -576,6 +614,8 @@ static void sh_cmt_register_clockevent(struct sh_cmt_priv *p, ...@@ -576,6 +614,8 @@ static void sh_cmt_register_clockevent(struct sh_cmt_priv *p,
ced->cpumask = cpumask_of(0); ced->cpumask = cpumask_of(0);
ced->set_next_event = sh_cmt_clock_event_next; ced->set_next_event = sh_cmt_clock_event_next;
ced->set_mode = sh_cmt_clock_event_mode; ced->set_mode = sh_cmt_clock_event_mode;
ced->suspend = sh_cmt_clock_event_suspend;
ced->resume = sh_cmt_clock_event_resume;
dev_info(&p->pdev->dev, "used for clock events\n"); dev_info(&p->pdev->dev, "used for clock events\n");
clockevents_register_device(ced); clockevents_register_device(ced);
...@@ -670,6 +710,7 @@ static int sh_cmt_setup(struct sh_cmt_priv *p, struct platform_device *pdev) ...@@ -670,6 +710,7 @@ static int sh_cmt_setup(struct sh_cmt_priv *p, struct platform_device *pdev)
dev_err(&p->pdev->dev, "registration failed\n"); dev_err(&p->pdev->dev, "registration failed\n");
goto err1; goto err1;
} }
p->cs_enabled = false;
ret = setup_irq(irq, &p->irqaction); ret = setup_irq(irq, &p->irqaction);
if (ret) { if (ret) {
...@@ -688,14 +729,17 @@ static int sh_cmt_setup(struct sh_cmt_priv *p, struct platform_device *pdev) ...@@ -688,14 +729,17 @@ static int sh_cmt_setup(struct sh_cmt_priv *p, struct platform_device *pdev)
static int __devinit sh_cmt_probe(struct platform_device *pdev) static int __devinit sh_cmt_probe(struct platform_device *pdev)
{ {
struct sh_cmt_priv *p = platform_get_drvdata(pdev); struct sh_cmt_priv *p = platform_get_drvdata(pdev);
struct sh_timer_config *cfg = pdev->dev.platform_data;
int ret; int ret;
if (!is_early_platform_device(pdev)) if (!is_early_platform_device(pdev)) {
pm_genpd_dev_always_on(&pdev->dev, true); pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
}
if (p) { if (p) {
dev_info(&pdev->dev, "kept as earlytimer\n"); dev_info(&pdev->dev, "kept as earlytimer\n");
return 0; goto out;
} }
p = kmalloc(sizeof(*p), GFP_KERNEL); p = kmalloc(sizeof(*p), GFP_KERNEL);
...@@ -708,8 +752,19 @@ static int __devinit sh_cmt_probe(struct platform_device *pdev) ...@@ -708,8 +752,19 @@ static int __devinit sh_cmt_probe(struct platform_device *pdev)
if (ret) { if (ret) {
kfree(p); kfree(p);
platform_set_drvdata(pdev, NULL); platform_set_drvdata(pdev, NULL);
pm_runtime_idle(&pdev->dev);
return ret;
} }
return ret; if (is_early_platform_device(pdev))
return 0;
out:
if (cfg->clockevent_rating || cfg->clocksource_rating)
pm_runtime_irq_safe(&pdev->dev);
else
pm_runtime_idle(&pdev->dev);
return 0;
} }
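A note on the pm_runtime_irq_safe() branch above (rationale inferred from the enable path):
/*
 * sh_cmt_enable() calls pm_runtime_get_sync() from the clock event and
 * clocksource start paths, which can run in atomic context; marking the
 * device irq-safe makes those runtime PM calls legal with interrupts
 * off.  Channels used for neither role are simply allowed to go idle.
 */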
static int __devexit sh_cmt_remove(struct platform_device *pdev) static int __devexit sh_cmt_remove(struct platform_device *pdev)
......
...@@ -32,6 +32,7 @@ ...@@ -32,6 +32,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/pm_domain.h> #include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
struct sh_mtu2_priv { struct sh_mtu2_priv {
void __iomem *mapbase; void __iomem *mapbase;
...@@ -123,6 +124,9 @@ static int sh_mtu2_enable(struct sh_mtu2_priv *p) ...@@ -123,6 +124,9 @@ static int sh_mtu2_enable(struct sh_mtu2_priv *p)
{ {
int ret; int ret;
pm_runtime_get_sync(&p->pdev->dev);
dev_pm_syscore_device(&p->pdev->dev, true);
/* enable clock */ /* enable clock */
ret = clk_enable(p->clk); ret = clk_enable(p->clk);
if (ret) { if (ret) {
...@@ -157,6 +161,9 @@ static void sh_mtu2_disable(struct sh_mtu2_priv *p) ...@@ -157,6 +161,9 @@ static void sh_mtu2_disable(struct sh_mtu2_priv *p)
/* stop clock */ /* stop clock */
clk_disable(p->clk); clk_disable(p->clk);
dev_pm_syscore_device(&p->pdev->dev, false);
pm_runtime_put(&p->pdev->dev);
} }
static irqreturn_t sh_mtu2_interrupt(int irq, void *dev_id) static irqreturn_t sh_mtu2_interrupt(int irq, void *dev_id)
...@@ -208,6 +215,16 @@ static void sh_mtu2_clock_event_mode(enum clock_event_mode mode, ...@@ -208,6 +215,16 @@ static void sh_mtu2_clock_event_mode(enum clock_event_mode mode,
} }
} }
static void sh_mtu2_clock_event_suspend(struct clock_event_device *ced)
{
pm_genpd_syscore_poweroff(&ced_to_sh_mtu2(ced)->pdev->dev);
}
static void sh_mtu2_clock_event_resume(struct clock_event_device *ced)
{
pm_genpd_syscore_poweron(&ced_to_sh_mtu2(ced)->pdev->dev);
}
static void sh_mtu2_register_clockevent(struct sh_mtu2_priv *p, static void sh_mtu2_register_clockevent(struct sh_mtu2_priv *p,
char *name, unsigned long rating) char *name, unsigned long rating)
{ {
...@@ -221,6 +238,8 @@ static void sh_mtu2_register_clockevent(struct sh_mtu2_priv *p, ...@@ -221,6 +238,8 @@ static void sh_mtu2_register_clockevent(struct sh_mtu2_priv *p,
ced->rating = rating; ced->rating = rating;
ced->cpumask = cpumask_of(0); ced->cpumask = cpumask_of(0);
ced->set_mode = sh_mtu2_clock_event_mode; ced->set_mode = sh_mtu2_clock_event_mode;
ced->suspend = sh_mtu2_clock_event_suspend;
ced->resume = sh_mtu2_clock_event_resume;
dev_info(&p->pdev->dev, "used for clock events\n"); dev_info(&p->pdev->dev, "used for clock events\n");
clockevents_register_device(ced); clockevents_register_device(ced);
...@@ -305,14 +324,17 @@ static int sh_mtu2_setup(struct sh_mtu2_priv *p, struct platform_device *pdev) ...@@ -305,14 +324,17 @@ static int sh_mtu2_setup(struct sh_mtu2_priv *p, struct platform_device *pdev)
static int __devinit sh_mtu2_probe(struct platform_device *pdev) static int __devinit sh_mtu2_probe(struct platform_device *pdev)
{ {
struct sh_mtu2_priv *p = platform_get_drvdata(pdev); struct sh_mtu2_priv *p = platform_get_drvdata(pdev);
struct sh_timer_config *cfg = pdev->dev.platform_data;
int ret; int ret;
if (!is_early_platform_device(pdev)) if (!is_early_platform_device(pdev)) {
pm_genpd_dev_always_on(&pdev->dev, true); pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
}
if (p) { if (p) {
dev_info(&pdev->dev, "kept as earlytimer\n"); dev_info(&pdev->dev, "kept as earlytimer\n");
return 0; goto out;
} }
p = kmalloc(sizeof(*p), GFP_KERNEL); p = kmalloc(sizeof(*p), GFP_KERNEL);
...@@ -325,8 +347,19 @@ static int __devinit sh_mtu2_probe(struct platform_device *pdev) ...@@ -325,8 +347,19 @@ static int __devinit sh_mtu2_probe(struct platform_device *pdev)
if (ret) { if (ret) {
kfree(p); kfree(p);
platform_set_drvdata(pdev, NULL); platform_set_drvdata(pdev, NULL);
pm_runtime_idle(&pdev->dev);
return ret;
} }
return ret; if (is_early_platform_device(pdev))
return 0;
out:
if (cfg->clockevent_rating)
pm_runtime_irq_safe(&pdev->dev);
else
pm_runtime_idle(&pdev->dev);
return 0;
} }
static int __devexit sh_mtu2_remove(struct platform_device *pdev) static int __devexit sh_mtu2_remove(struct platform_device *pdev)
......
...@@ -33,6 +33,7 @@ ...@@ -33,6 +33,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/pm_domain.h> #include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
struct sh_tmu_priv { struct sh_tmu_priv {
void __iomem *mapbase; void __iomem *mapbase;
...@@ -43,6 +44,8 @@ struct sh_tmu_priv { ...@@ -43,6 +44,8 @@ struct sh_tmu_priv {
unsigned long periodic; unsigned long periodic;
struct clock_event_device ced; struct clock_event_device ced;
struct clocksource cs; struct clocksource cs;
bool cs_enabled;
unsigned int enable_count;
}; };
static DEFINE_RAW_SPINLOCK(sh_tmu_lock); static DEFINE_RAW_SPINLOCK(sh_tmu_lock);
...@@ -107,7 +110,7 @@ static void sh_tmu_start_stop_ch(struct sh_tmu_priv *p, int start) ...@@ -107,7 +110,7 @@ static void sh_tmu_start_stop_ch(struct sh_tmu_priv *p, int start)
raw_spin_unlock_irqrestore(&sh_tmu_lock, flags); raw_spin_unlock_irqrestore(&sh_tmu_lock, flags);
} }
static int sh_tmu_enable(struct sh_tmu_priv *p) static int __sh_tmu_enable(struct sh_tmu_priv *p)
{ {
int ret; int ret;
...@@ -135,7 +138,18 @@ static int sh_tmu_enable(struct sh_tmu_priv *p) ...@@ -135,7 +138,18 @@ static int sh_tmu_enable(struct sh_tmu_priv *p)
return 0; return 0;
} }
static void sh_tmu_disable(struct sh_tmu_priv *p) static int sh_tmu_enable(struct sh_tmu_priv *p)
{
if (p->enable_count++ > 0)
return 0;
pm_runtime_get_sync(&p->pdev->dev);
dev_pm_syscore_device(&p->pdev->dev, true);
return __sh_tmu_enable(p);
}
static void __sh_tmu_disable(struct sh_tmu_priv *p)
{ {
/* disable channel */ /* disable channel */
sh_tmu_start_stop_ch(p, 0); sh_tmu_start_stop_ch(p, 0);
...@@ -147,6 +161,20 @@ static void sh_tmu_disable(struct sh_tmu_priv *p) ...@@ -147,6 +161,20 @@ static void sh_tmu_disable(struct sh_tmu_priv *p)
clk_disable(p->clk); clk_disable(p->clk);
} }
static void sh_tmu_disable(struct sh_tmu_priv *p)
{
if (WARN_ON(p->enable_count == 0))
return;
if (--p->enable_count > 0)
return;
__sh_tmu_disable(p);
dev_pm_syscore_device(&p->pdev->dev, false);
pm_runtime_put(&p->pdev->dev);
}
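The new enable_count makes enable/disable nest; an illustrative pairing when both the clocksource and the clock event device share the channel (assumed call pattern):
/*
 * sh_tmu_enable(p);   enable_count 0 -> 1: take PM reference, start channel
 * sh_tmu_enable(p);   1 -> 2: counted only, hardware untouched
 * sh_tmu_disable(p);  2 -> 1: still running
 * sh_tmu_disable(p);  1 -> 0: stop channel, drop the PM reference
 */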
static void sh_tmu_set_next(struct sh_tmu_priv *p, unsigned long delta, static void sh_tmu_set_next(struct sh_tmu_priv *p, unsigned long delta,
int periodic) int periodic)
{ {
...@@ -203,15 +231,53 @@ static int sh_tmu_clocksource_enable(struct clocksource *cs) ...@@ -203,15 +231,53 @@ static int sh_tmu_clocksource_enable(struct clocksource *cs)
struct sh_tmu_priv *p = cs_to_sh_tmu(cs); struct sh_tmu_priv *p = cs_to_sh_tmu(cs);
int ret; int ret;
if (WARN_ON(p->cs_enabled))
return 0;
ret = sh_tmu_enable(p); ret = sh_tmu_enable(p);
if (!ret) if (!ret) {
__clocksource_updatefreq_hz(cs, p->rate); __clocksource_updatefreq_hz(cs, p->rate);
p->cs_enabled = true;
}
return ret; return ret;
} }
static void sh_tmu_clocksource_disable(struct clocksource *cs) static void sh_tmu_clocksource_disable(struct clocksource *cs)
{ {
sh_tmu_disable(cs_to_sh_tmu(cs)); struct sh_tmu_priv *p = cs_to_sh_tmu(cs);
if (WARN_ON(!p->cs_enabled))
return;
sh_tmu_disable(p);
p->cs_enabled = false;
}
static void sh_tmu_clocksource_suspend(struct clocksource *cs)
{
struct sh_tmu_priv *p = cs_to_sh_tmu(cs);
if (!p->cs_enabled)
return;
if (--p->enable_count == 0) {
__sh_tmu_disable(p);
pm_genpd_syscore_poweroff(&p->pdev->dev);
}
}
static void sh_tmu_clocksource_resume(struct clocksource *cs)
{
struct sh_tmu_priv *p = cs_to_sh_tmu(cs);
if (!p->cs_enabled)
return;
if (p->enable_count++ == 0) {
pm_genpd_syscore_poweron(&p->pdev->dev);
__sh_tmu_enable(p);
}
} }
static int sh_tmu_register_clocksource(struct sh_tmu_priv *p, static int sh_tmu_register_clocksource(struct sh_tmu_priv *p,
...@@ -224,6 +290,8 @@ static int sh_tmu_register_clocksource(struct sh_tmu_priv *p, ...@@ -224,6 +290,8 @@ static int sh_tmu_register_clocksource(struct sh_tmu_priv *p,
cs->read = sh_tmu_clocksource_read; cs->read = sh_tmu_clocksource_read;
cs->enable = sh_tmu_clocksource_enable; cs->enable = sh_tmu_clocksource_enable;
cs->disable = sh_tmu_clocksource_disable; cs->disable = sh_tmu_clocksource_disable;
cs->suspend = sh_tmu_clocksource_suspend;
cs->resume = sh_tmu_clocksource_resume;
cs->mask = CLOCKSOURCE_MASK(32); cs->mask = CLOCKSOURCE_MASK(32);
cs->flags = CLOCK_SOURCE_IS_CONTINUOUS; cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
...@@ -301,6 +369,16 @@ static int sh_tmu_clock_event_next(unsigned long delta, ...@@ -301,6 +369,16 @@ static int sh_tmu_clock_event_next(unsigned long delta,
return 0; return 0;
} }
static void sh_tmu_clock_event_suspend(struct clock_event_device *ced)
{
pm_genpd_syscore_poweroff(&ced_to_sh_tmu(ced)->pdev->dev);
}
static void sh_tmu_clock_event_resume(struct clock_event_device *ced)
{
pm_genpd_syscore_poweron(&ced_to_sh_tmu(ced)->pdev->dev);
}
static void sh_tmu_register_clockevent(struct sh_tmu_priv *p, static void sh_tmu_register_clockevent(struct sh_tmu_priv *p,
char *name, unsigned long rating) char *name, unsigned long rating)
{ {
...@@ -316,6 +394,8 @@ static void sh_tmu_register_clockevent(struct sh_tmu_priv *p, ...@@ -316,6 +394,8 @@ static void sh_tmu_register_clockevent(struct sh_tmu_priv *p,
ced->cpumask = cpumask_of(0); ced->cpumask = cpumask_of(0);
ced->set_next_event = sh_tmu_clock_event_next; ced->set_next_event = sh_tmu_clock_event_next;
ced->set_mode = sh_tmu_clock_event_mode; ced->set_mode = sh_tmu_clock_event_mode;
ced->suspend = sh_tmu_clock_event_suspend;
ced->resume = sh_tmu_clock_event_resume;
dev_info(&p->pdev->dev, "used for clock events\n"); dev_info(&p->pdev->dev, "used for clock events\n");
...@@ -392,6 +472,8 @@ static int sh_tmu_setup(struct sh_tmu_priv *p, struct platform_device *pdev) ...@@ -392,6 +472,8 @@ static int sh_tmu_setup(struct sh_tmu_priv *p, struct platform_device *pdev)
ret = PTR_ERR(p->clk); ret = PTR_ERR(p->clk);
goto err1; goto err1;
} }
p->cs_enabled = false;
p->enable_count = 0;
return sh_tmu_register(p, (char *)dev_name(&p->pdev->dev), return sh_tmu_register(p, (char *)dev_name(&p->pdev->dev),
cfg->clockevent_rating, cfg->clockevent_rating,
...@@ -405,14 +487,17 @@ static int sh_tmu_setup(struct sh_tmu_priv *p, struct platform_device *pdev) ...@@ -405,14 +487,17 @@ static int sh_tmu_setup(struct sh_tmu_priv *p, struct platform_device *pdev)
static int __devinit sh_tmu_probe(struct platform_device *pdev) static int __devinit sh_tmu_probe(struct platform_device *pdev)
{ {
struct sh_tmu_priv *p = platform_get_drvdata(pdev); struct sh_tmu_priv *p = platform_get_drvdata(pdev);
struct sh_timer_config *cfg = pdev->dev.platform_data;
int ret; int ret;
if (!is_early_platform_device(pdev)) if (!is_early_platform_device(pdev)) {
pm_genpd_dev_always_on(&pdev->dev, true); pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
}
if (p) { if (p) {
dev_info(&pdev->dev, "kept as earlytimer\n"); dev_info(&pdev->dev, "kept as earlytimer\n");
return 0; goto out;
} }
p = kmalloc(sizeof(*p), GFP_KERNEL); p = kmalloc(sizeof(*p), GFP_KERNEL);
...@@ -425,8 +510,19 @@ static int __devinit sh_tmu_probe(struct platform_device *pdev) ...@@ -425,8 +510,19 @@ static int __devinit sh_tmu_probe(struct platform_device *pdev)
if (ret) { if (ret) {
kfree(p); kfree(p);
platform_set_drvdata(pdev, NULL); platform_set_drvdata(pdev, NULL);
pm_runtime_idle(&pdev->dev);
return ret;
} }
return ret; if (is_early_platform_device(pdev))
return 0;
out:
if (cfg->clockevent_rating || cfg->clocksource_rating)
pm_runtime_irq_safe(&pdev->dev);
else
pm_runtime_idle(&pdev->dev);
return 0;
} }
static int __devexit sh_tmu_remove(struct platform_device *pdev) static int __devexit sh_tmu_remove(struct platform_device *pdev)
......
...@@ -179,6 +179,17 @@ config CPU_FREQ_GOV_CONSERVATIVE ...@@ -179,6 +179,17 @@ config CPU_FREQ_GOV_CONSERVATIVE
If in doubt, say N. If in doubt, say N.
config GENERIC_CPUFREQ_CPU0
bool "Generic CPU0 cpufreq driver"
depends on HAVE_CLK && REGULATOR && PM_OPP && OF
select CPU_FREQ_TABLE
help
This adds a generic cpufreq driver for CPU0 frequency management.
It supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
systems which share clock and voltage across all CPUs.
If in doubt, say N.
menu "x86 CPU frequency scaling drivers" menu "x86 CPU frequency scaling drivers"
depends on X86 depends on X86
source "drivers/cpufreq/Kconfig.x86" source "drivers/cpufreq/Kconfig.x86"
......
...@@ -23,7 +23,8 @@ config X86_ACPI_CPUFREQ ...@@ -23,7 +23,8 @@ config X86_ACPI_CPUFREQ
help help
This driver adds a CPUFreq driver which utilizes the ACPI This driver adds a CPUFreq driver which utilizes the ACPI
Processor Performance States. Processor Performance States.
This driver also supports Intel Enhanced Speedstep. This driver also supports Intel Enhanced Speedstep and newer
AMD CPUs.
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called acpi-cpufreq. module will be called acpi-cpufreq.
...@@ -32,6 +33,18 @@ config X86_ACPI_CPUFREQ ...@@ -32,6 +33,18 @@ config X86_ACPI_CPUFREQ
If in doubt, say N. If in doubt, say N.
config X86_ACPI_CPUFREQ_CPB
default y
bool "Legacy cpb sysfs knob support for AMD CPUs"
depends on X86_ACPI_CPUFREQ && CPU_SUP_AMD
help
The powernow-k8 driver used to provide a sysfs knob called "cpb"
to disable the Core Performance Boosting feature of AMD CPUs. This
file has now been superseded by the more generic "boost" entry.
By enabling this option the acpi_cpufreq driver provides the old
entry in addition to the new boost ones, for compatibility reasons.
config ELAN_CPUFREQ config ELAN_CPUFREQ
tristate "AMD Elan SC400 and SC410" tristate "AMD Elan SC400 and SC410"
select CPU_FREQ_TABLE select CPU_FREQ_TABLE
...@@ -95,7 +108,8 @@ config X86_POWERNOW_K8 ...@@ -95,7 +108,8 @@ config X86_POWERNOW_K8
select CPU_FREQ_TABLE select CPU_FREQ_TABLE
depends on ACPI && ACPI_PROCESSOR depends on ACPI && ACPI_PROCESSOR
help help
This adds the CPUFreq driver for K8/K10 Opteron/Athlon64 processors. This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors.
Support for K10 and newer processors is now in acpi-cpufreq.
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called powernow-k8. module will be called powernow-k8.
......
...@@ -13,13 +13,15 @@ obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o ...@@ -13,13 +13,15 @@ obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o
# CPUfreq cross-arch helpers # CPUfreq cross-arch helpers
obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o
obj-$(CONFIG_GENERIC_CPUFREQ_CPU0) += cpufreq-cpu0.o
################################################################################## ##################################################################################
# x86 drivers. # x86 drivers.
# Link order matters. K8 is preferred to ACPI because of firmware bugs in early # Link order matters. K8 is preferred to ACPI because of firmware bugs in early
# K8 systems. ACPI is preferred to all other hardware-specific drivers. # K8 systems. ACPI is preferred to all other hardware-specific drivers.
# speedstep-* is preferred over p4-clockmod. # speedstep-* is preferred over p4-clockmod.
obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o mperf.o obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o
obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o
obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o
obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o
......
/*
* Copyright (C) 2012 Freescale Semiconductor, Inc.
*
* The OPP code in function cpu0_set_target() is reused from
* drivers/cpufreq/omap-cpufreq.c
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/opp.h>
#include <linux/regulator/consumer.h>
#include <linux/slab.h>
static unsigned int transition_latency;
static unsigned int voltage_tolerance; /* in percentage */
static struct device *cpu_dev;
static struct clk *cpu_clk;
static struct regulator *cpu_reg;
static struct cpufreq_frequency_table *freq_table;
static int cpu0_verify_speed(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, freq_table);
}
static unsigned int cpu0_get_speed(unsigned int cpu)
{
return clk_get_rate(cpu_clk) / 1000;
}
static int cpu0_set_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int relation)
{
struct cpufreq_freqs freqs;
struct opp *opp;
unsigned long freq_Hz, volt = 0, volt_old = 0, tol = 0;
unsigned int index, cpu;
int ret;
ret = cpufreq_frequency_table_target(policy, freq_table, target_freq,
relation, &index);
if (ret) {
pr_err("failed to match target freqency %d: %d\n",
target_freq, ret);
return ret;
}
freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000);
if ((long)freq_Hz < 0)	/* freq_Hz is unsigned; catch clk_round_rate() errors */
freq_Hz = freq_table[index].frequency * 1000;
freqs.new = freq_Hz / 1000;
freqs.old = clk_get_rate(cpu_clk) / 1000;
if (freqs.old == freqs.new)
return 0;
for_each_online_cpu(cpu) {
freqs.cpu = cpu;
cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
}
if (cpu_reg) {
opp = opp_find_freq_ceil(cpu_dev, &freq_Hz);
if (IS_ERR(opp)) {
pr_err("failed to find OPP for %ld\n", freq_Hz);
return PTR_ERR(opp);
}
volt = opp_get_voltage(opp);
tol = volt * voltage_tolerance / 100;
volt_old = regulator_get_voltage(cpu_reg);
}
pr_debug("%u MHz, %ld mV --> %u MHz, %ld mV\n",
freqs.old / 1000, volt_old ? volt_old / 1000 : -1,
freqs.new / 1000, volt ? volt / 1000 : -1);
/* scaling up? scale voltage before frequency */
if (cpu_reg && freqs.new > freqs.old) {
ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
if (ret) {
pr_err("failed to scale voltage up: %d\n", ret);
freqs.new = freqs.old;
return ret;
}
}
ret = clk_set_rate(cpu_clk, freqs.new * 1000);
if (ret) {
pr_err("failed to set clock rate: %d\n", ret);
if (cpu_reg)
regulator_set_voltage_tol(cpu_reg, volt_old, tol);
return ret;
}
/* scaling down? scale voltage after frequency */
if (cpu_reg && freqs.new < freqs.old) {
ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
if (ret) {
pr_err("failed to scale voltage down: %d\n", ret);
clk_set_rate(cpu_clk, freqs.old * 1000);
freqs.new = freqs.old;
return ret;
}
}
for_each_online_cpu(cpu) {
freqs.cpu = cpu;
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
}
return 0;
}
static int cpu0_cpufreq_init(struct cpufreq_policy *policy)
{
int ret;
if (policy->cpu != 0)
return -EINVAL;
ret = cpufreq_frequency_table_cpuinfo(policy, freq_table);
if (ret) {
pr_err("invalid frequency table: %d\n", ret);
return ret;
}
policy->cpuinfo.transition_latency = transition_latency;
policy->cur = clk_get_rate(cpu_clk) / 1000;
/*
* The driver only supports the SMP configuration where all processors
* share clock and voltage. Use the cpufreq affected_cpus
* interface to have all CPUs scaled together.
*/
policy->shared_type = CPUFREQ_SHARED_TYPE_ANY;
cpumask_setall(policy->cpus);
cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
return 0;
}
static int cpu0_cpufreq_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static struct freq_attr *cpu0_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver cpu0_cpufreq_driver = {
.flags = CPUFREQ_STICKY,
.verify = cpu0_verify_speed,
.target = cpu0_set_target,
.get = cpu0_get_speed,
.init = cpu0_cpufreq_init,
.exit = cpu0_cpufreq_exit,
.name = "generic_cpu0",
.attr = cpu0_cpufreq_attr,
};
static int __devinit cpu0_cpufreq_driver_init(void)
{
struct device_node *np;
int ret;
np = of_find_node_by_path("/cpus/cpu@0");
if (!np) {
pr_err("failed to find cpu0 node\n");
return -ENOENT;
}
cpu_dev = get_cpu_device(0);
if (!cpu_dev) {
pr_err("failed to get cpu0 device\n");
ret = -ENODEV;
goto out_put_node;
}
cpu_dev->of_node = np;
cpu_clk = clk_get(cpu_dev, NULL);
if (IS_ERR(cpu_clk)) {
ret = PTR_ERR(cpu_clk);
pr_err("failed to get cpu0 clock: %d\n", ret);
goto out_put_node;
}
cpu_reg = regulator_get(cpu_dev, "cpu0");
if (IS_ERR(cpu_reg)) {
pr_warn("failed to get cpu0 regulator\n");
cpu_reg = NULL;
}
ret = of_init_opp_table(cpu_dev);
if (ret) {
pr_err("failed to init OPP table: %d\n", ret);
goto out_put_node;
}
ret = opp_init_cpufreq_table(cpu_dev, &freq_table);
if (ret) {
pr_err("failed to init cpufreq table: %d\n", ret);
goto out_put_node;
}
of_property_read_u32(np, "voltage-tolerance", &voltage_tolerance);
if (of_property_read_u32(np, "clock-latency", &transition_latency))
transition_latency = CPUFREQ_ETERNAL;
if (cpu_reg) {
struct opp *opp;
unsigned long min_uV, max_uV;
int i;
/*
* OPP is maintained in order of increasing frequency, and
* freq_table initialised from OPP is therefore sorted in the
* same order.
*/
for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
;
opp = opp_find_freq_exact(cpu_dev,
freq_table[0].frequency * 1000, true);
min_uV = opp_get_voltage(opp);
opp = opp_find_freq_exact(cpu_dev,
freq_table[i-1].frequency * 1000, true);
max_uV = opp_get_voltage(opp);
ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
if (ret > 0)
transition_latency += ret * 1000;
}
ret = cpufreq_register_driver(&cpu0_cpufreq_driver);
if (ret) {
pr_err("failed register driver: %d\n", ret);
goto out_free_table;
}
of_node_put(np);
return 0;
out_free_table:
opp_free_cpufreq_table(cpu_dev, &freq_table);
out_put_node:
of_node_put(np);
return ret;
}
late_initcall(cpu0_cpufreq_driver_init);
MODULE_AUTHOR("Shawn Guo <shawn.guo@linaro.org>");
MODULE_DESCRIPTION("Generic CPU0 cpufreq driver");
MODULE_LICENSE("GPL");
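For context, a hypothetical /cpus/cpu@0 node matching what cpu0_cpufreq_driver_init() reads (property values are made up; the regulator is looked up as "cpu0", i.e. a "cpu0-supply" phandle under the usual devicetree regulator convention, and is optional):
/*
 *	cpu@0 {
 *		operating-points = <
 *			// kHz	uV
 *			792000	1100000
 *			396000	950000
 *		>;
 *		voltage-tolerance = <2>;	// percent
 *		clock-latency = <61036>;	// ns
 *	};
 */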
...@@ -504,6 +504,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, ...@@ -504,6 +504,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
j_dbs_info->prev_cpu_nice = j_dbs_info->prev_cpu_nice =
kcpustat_cpu(j).cpustat[CPUTIME_NICE]; kcpustat_cpu(j).cpustat[CPUTIME_NICE];
} }
this_dbs_info->cpu = cpu;
this_dbs_info->down_skip = 0; this_dbs_info->down_skip = 0;
this_dbs_info->requested_freq = policy->cur; this_dbs_info->requested_freq = policy->cur;
...@@ -583,6 +584,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, ...@@ -583,6 +584,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
__cpufreq_driver_target( __cpufreq_driver_target(
this_dbs_info->cur_policy, this_dbs_info->cur_policy,
policy->min, CPUFREQ_RELATION_L); policy->min, CPUFREQ_RELATION_L);
dbs_check_cpu(this_dbs_info);
mutex_unlock(&this_dbs_info->timer_mutex); mutex_unlock(&this_dbs_info->timer_mutex);
break; break;
......
...@@ -761,6 +761,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, ...@@ -761,6 +761,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
else if (policy->min > this_dbs_info->cur_policy->cur) else if (policy->min > this_dbs_info->cur_policy->cur)
__cpufreq_driver_target(this_dbs_info->cur_policy, __cpufreq_driver_target(this_dbs_info->cur_policy,
policy->min, CPUFREQ_RELATION_L); policy->min, CPUFREQ_RELATION_L);
dbs_check_cpu(this_dbs_info);
mutex_unlock(&this_dbs_info->timer_mutex); mutex_unlock(&this_dbs_info->timer_mutex);
break; break;
} }
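Effect of the dbs_check_cpu() calls added in both governor hunks above, sketched:
/*
 * Before: raising policy->max left the CPU at the old ceiling until the
 * next sampling period re-evaluated load.  After: CPUFREQ_GOV_LIMITS
 * re-runs the load check immediately, so a busy CPU can move to the
 * newly permitted frequency right away.
 */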
......
...@@ -56,7 +56,7 @@ union msr_longhaul { ...@@ -56,7 +56,7 @@ union msr_longhaul {
/* /*
* VIA C3 Samuel 1 & Samuel 2 (stepping 0) * VIA C3 Samuel 1 & Samuel 2 (stepping 0)
*/ */
static const int __cpuinitdata samuel1_mults[16] = { static const int __cpuinitconst samuel1_mults[16] = {
-1, /* 0000 -> RESERVED */ -1, /* 0000 -> RESERVED */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -75,7 +75,7 @@ static const int __cpuinitdata samuel1_mults[16] = { ...@@ -75,7 +75,7 @@ static const int __cpuinitdata samuel1_mults[16] = {
-1, /* 1111 -> RESERVED */ -1, /* 1111 -> RESERVED */
}; };
static const int __cpuinitdata samuel1_eblcr[16] = { static const int __cpuinitconst samuel1_eblcr[16] = {
50, /* 0000 -> RESERVED */ 50, /* 0000 -> RESERVED */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -97,7 +97,7 @@ static const int __cpuinitdata samuel1_eblcr[16] = { ...@@ -97,7 +97,7 @@ static const int __cpuinitdata samuel1_eblcr[16] = {
/* /*
* VIA C3 Samuel2 Stepping 1->15 * VIA C3 Samuel2 Stepping 1->15
*/ */
static const int __cpuinitdata samuel2_eblcr[16] = { static const int __cpuinitconst samuel2_eblcr[16] = {
50, /* 0000 -> 5.0x */ 50, /* 0000 -> 5.0x */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -119,7 +119,7 @@ static const int __cpuinitdata samuel2_eblcr[16] = { ...@@ -119,7 +119,7 @@ static const int __cpuinitdata samuel2_eblcr[16] = {
/* /*
* VIA C3 Ezra * VIA C3 Ezra
*/ */
static const int __cpuinitdata ezra_mults[16] = { static const int __cpuinitconst ezra_mults[16] = {
100, /* 0000 -> 10.0x */ 100, /* 0000 -> 10.0x */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -138,7 +138,7 @@ static const int __cpuinitdata ezra_mults[16] = { ...@@ -138,7 +138,7 @@ static const int __cpuinitdata ezra_mults[16] = {
120, /* 1111 -> 12.0x */ 120, /* 1111 -> 12.0x */
}; };
static const int __cpuinitdata ezra_eblcr[16] = { static const int __cpuinitconst ezra_eblcr[16] = {
50, /* 0000 -> 5.0x */ 50, /* 0000 -> 5.0x */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -160,7 +160,7 @@ static const int __cpuinitdata ezra_eblcr[16] = { ...@@ -160,7 +160,7 @@ static const int __cpuinitdata ezra_eblcr[16] = {
/* /*
* VIA C3 (Ezra-T) [C5M]. * VIA C3 (Ezra-T) [C5M].
*/ */
static const int __cpuinitdata ezrat_mults[32] = { static const int __cpuinitconst ezrat_mults[32] = {
100, /* 0000 -> 10.0x */ 100, /* 0000 -> 10.0x */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -196,7 +196,7 @@ static const int __cpuinitdata ezrat_mults[32] = { ...@@ -196,7 +196,7 @@ static const int __cpuinitdata ezrat_mults[32] = {
-1, /* 1111 -> RESERVED (12.0x) */ -1, /* 1111 -> RESERVED (12.0x) */
}; };
static const int __cpuinitdata ezrat_eblcr[32] = { static const int __cpuinitconst ezrat_eblcr[32] = {
50, /* 0000 -> 5.0x */ 50, /* 0000 -> 5.0x */
30, /* 0001 -> 3.0x */ 30, /* 0001 -> 3.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -235,7 +235,7 @@ static const int __cpuinitdata ezrat_eblcr[32] = { ...@@ -235,7 +235,7 @@ static const int __cpuinitdata ezrat_eblcr[32] = {
/* /*
* VIA C3 Nehemiah */ * VIA C3 Nehemiah */
static const int __cpuinitdata nehemiah_mults[32] = { static const int __cpuinitconst nehemiah_mults[32] = {
100, /* 0000 -> 10.0x */ 100, /* 0000 -> 10.0x */
-1, /* 0001 -> 16.0x */ -1, /* 0001 -> 16.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -270,7 +270,7 @@ static const int __cpuinitdata nehemiah_mults[32] = { ...@@ -270,7 +270,7 @@ static const int __cpuinitdata nehemiah_mults[32] = {
-1, /* 1111 -> 12.0x */ -1, /* 1111 -> 12.0x */
}; };
static const int __cpuinitdata nehemiah_eblcr[32] = { static const int __cpuinitconst nehemiah_eblcr[32] = {
50, /* 0000 -> 5.0x */ 50, /* 0000 -> 5.0x */
160, /* 0001 -> 16.0x */ 160, /* 0001 -> 16.0x */
40, /* 0010 -> 4.0x */ 40, /* 0010 -> 4.0x */
...@@ -315,7 +315,7 @@ struct mV_pos { ...@@ -315,7 +315,7 @@ struct mV_pos {
unsigned short pos; unsigned short pos;
}; };
static const struct mV_pos __cpuinitdata vrm85_mV[32] = { static const struct mV_pos __cpuinitconst vrm85_mV[32] = {
{1250, 8}, {1200, 6}, {1150, 4}, {1100, 2}, {1250, 8}, {1200, 6}, {1150, 4}, {1100, 2},
{1050, 0}, {1800, 30}, {1750, 28}, {1700, 26}, {1050, 0}, {1800, 30}, {1750, 28}, {1700, 26},
{1650, 24}, {1600, 22}, {1550, 20}, {1500, 18}, {1650, 24}, {1600, 22}, {1550, 20}, {1500, 18},
...@@ -326,14 +326,14 @@ static const struct mV_pos __cpuinitdata vrm85_mV[32] = { ...@@ -326,14 +326,14 @@ static const struct mV_pos __cpuinitdata vrm85_mV[32] = {
{1475, 17}, {1425, 15}, {1375, 13}, {1325, 11} {1475, 17}, {1425, 15}, {1375, 13}, {1325, 11}
}; };
static const unsigned char __cpuinitdata mV_vrm85[32] = { static const unsigned char __cpuinitconst mV_vrm85[32] = {
0x04, 0x14, 0x03, 0x13, 0x02, 0x12, 0x01, 0x11, 0x04, 0x14, 0x03, 0x13, 0x02, 0x12, 0x01, 0x11,
0x00, 0x10, 0x0f, 0x1f, 0x0e, 0x1e, 0x0d, 0x1d, 0x00, 0x10, 0x0f, 0x1f, 0x0e, 0x1e, 0x0d, 0x1d,
0x0c, 0x1c, 0x0b, 0x1b, 0x0a, 0x1a, 0x09, 0x19, 0x0c, 0x1c, 0x0b, 0x1b, 0x0a, 0x1a, 0x09, 0x19,
0x08, 0x18, 0x07, 0x17, 0x06, 0x16, 0x05, 0x15 0x08, 0x18, 0x07, 0x17, 0x06, 0x16, 0x05, 0x15
}; };
static const struct mV_pos __cpuinitdata mobilevrm_mV[32] = { static const struct mV_pos __cpuinitconst mobilevrm_mV[32] = {
{1750, 31}, {1700, 30}, {1650, 29}, {1600, 28}, {1750, 31}, {1700, 30}, {1650, 29}, {1600, 28},
{1550, 27}, {1500, 26}, {1450, 25}, {1400, 24}, {1550, 27}, {1500, 26}, {1450, 25}, {1400, 24},
{1350, 23}, {1300, 22}, {1250, 21}, {1200, 20}, {1350, 23}, {1300, 22}, {1250, 21}, {1200, 20},
...@@ -344,7 +344,7 @@ static const struct mV_pos __cpuinitdata mobilevrm_mV[32] = { ...@@ -344,7 +344,7 @@ static const struct mV_pos __cpuinitdata mobilevrm_mV[32] = {
{675, 3}, {650, 2}, {625, 1}, {600, 0} {675, 3}, {650, 2}, {625, 1}, {600, 0}
}; };
static const unsigned char __cpuinitdata mV_mobilevrm[32] = { static const unsigned char __cpuinitconst mV_mobilevrm[32] = {
0x1f, 0x1e, 0x1d, 0x1c, 0x1b, 0x1a, 0x19, 0x18, 0x1f, 0x1e, 0x1d, 0x1c, 0x1b, 0x1a, 0x19, 0x18,
0x17, 0x16, 0x15, 0x14, 0x13, 0x12, 0x11, 0x10, 0x17, 0x16, 0x15, 0x14, 0x13, 0x12, 0x11, 0x10,
0x0f, 0x0e, 0x0d, 0x0c, 0x0b, 0x0a, 0x09, 0x08, 0x0f, 0x0e, 0x0d, 0x0c, 0x0b, 0x0a, 0x09, 0x08,
......
...@@ -40,16 +40,6 @@ ...@@ -40,16 +40,6 @@
/* OPP tolerance in percentage */ /* OPP tolerance in percentage */
#define OPP_TOLERANCE 4 #define OPP_TOLERANCE 4
#ifdef CONFIG_SMP
struct lpj_info {
unsigned long ref;
unsigned int freq;
};
static DEFINE_PER_CPU(struct lpj_info, lpj_ref);
static struct lpj_info global_lpj_ref;
#endif
static struct cpufreq_frequency_table *freq_table; static struct cpufreq_frequency_table *freq_table;
static atomic_t freq_table_users = ATOMIC_INIT(0); static atomic_t freq_table_users = ATOMIC_INIT(0);
static struct clk *mpu_clk; static struct clk *mpu_clk;
...@@ -161,31 +151,6 @@ static int omap_target(struct cpufreq_policy *policy, ...@@ -161,31 +151,6 @@ static int omap_target(struct cpufreq_policy *policy,
} }
freqs.new = omap_getspeed(policy->cpu); freqs.new = omap_getspeed(policy->cpu);
#ifdef CONFIG_SMP
/*
* Note that loops_per_jiffy is not updated on SMP systems in
* cpufreq driver. So, update the per-CPU loops_per_jiffy value
* on frequency transition. We need to update all dependent CPUs.
*/
for_each_cpu(i, policy->cpus) {
struct lpj_info *lpj = &per_cpu(lpj_ref, i);
if (!lpj->freq) {
lpj->ref = per_cpu(cpu_data, i).loops_per_jiffy;
lpj->freq = freqs.old;
}
per_cpu(cpu_data, i).loops_per_jiffy =
cpufreq_scale(lpj->ref, lpj->freq, freqs.new);
}
/* And don't forget to adjust the global one */
if (!global_lpj_ref.freq) {
global_lpj_ref.ref = loops_per_jiffy;
global_lpj_ref.freq = freqs.old;
}
loops_per_jiffy = cpufreq_scale(global_lpj_ref.ref, global_lpj_ref.freq,
freqs.new);
#endif
done: done:
/* notifiers */ /* notifiers */
...@@ -301,9 +266,9 @@ static int __init omap_cpufreq_init(void) ...@@ -301,9 +266,9 @@ static int __init omap_cpufreq_init(void)
} }
mpu_dev = omap_device_get_by_hwmod_name("mpu"); mpu_dev = omap_device_get_by_hwmod_name("mpu");
if (!mpu_dev) { if (IS_ERR(mpu_dev)) {
pr_warning("%s: unable to get the mpu device\n", __func__); pr_warning("%s: unable to get the mpu device\n", __func__);
return -EINVAL; return PTR_ERR(mpu_dev);
} }
mpu_reg = regulator_get(mpu_dev, "vcc"); mpu_reg = regulator_get(mpu_dev, "vcc");
......
...@@ -5,24 +5,11 @@ ...@@ -5,24 +5,11 @@
* http://www.gnu.org/licenses/gpl.html * http://www.gnu.org/licenses/gpl.html
*/ */
enum pstate {
HW_PSTATE_INVALID = 0xff,
HW_PSTATE_0 = 0,
HW_PSTATE_1 = 1,
HW_PSTATE_2 = 2,
HW_PSTATE_3 = 3,
HW_PSTATE_4 = 4,
HW_PSTATE_5 = 5,
HW_PSTATE_6 = 6,
HW_PSTATE_7 = 7,
};
struct powernow_k8_data { struct powernow_k8_data {
unsigned int cpu; unsigned int cpu;
u32 numps; /* number of p-states */ u32 numps; /* number of p-states */
u32 batps; /* number of p-states supported on battery */ u32 batps; /* number of p-states supported on battery */
u32 max_hw_pstate; /* maximum legal hardware pstate */
/* these values are constant when the PSB is used to determine /* these values are constant when the PSB is used to determine
* vid/fid pairings, but are modified during the ->target() call * vid/fid pairings, but are modified during the ->target() call
...@@ -37,7 +24,6 @@ struct powernow_k8_data { ...@@ -37,7 +24,6 @@ struct powernow_k8_data {
/* keep track of the current fid / vid or pstate */ /* keep track of the current fid / vid or pstate */
u32 currvid; u32 currvid;
u32 currfid; u32 currfid;
enum pstate currpstate;
/* the powernow_table includes all frequency and vid/fid pairings: /* the powernow_table includes all frequency and vid/fid pairings:
* fid are the lower 8 bits of the index, vid are the upper 8 bits. * fid are the lower 8 bits of the index, vid are the upper 8 bits.
...@@ -97,23 +83,6 @@ struct powernow_k8_data { ...@@ -97,23 +83,6 @@ struct powernow_k8_data {
#define MSR_S_HI_CURRENT_VID 0x0000003f #define MSR_S_HI_CURRENT_VID 0x0000003f
#define MSR_C_HI_STP_GNT_BENIGN 0x00000001 #define MSR_C_HI_STP_GNT_BENIGN 0x00000001
/* Hardware Pstate _PSS and MSR definitions */
#define USE_HW_PSTATE 0x00000080
#define HW_PSTATE_MASK 0x00000007
#define HW_PSTATE_VALID_MASK 0x80000000
#define HW_PSTATE_MAX_MASK 0x000000f0
#define HW_PSTATE_MAX_SHIFT 4
#define MSR_PSTATE_DEF_BASE 0xc0010064 /* base of Pstate MSRs */
#define MSR_PSTATE_STATUS 0xc0010063 /* Pstate Status MSR */
#define MSR_PSTATE_CTRL 0xc0010062 /* Pstate control MSR */
#define MSR_PSTATE_CUR_LIMIT 0xc0010061 /* pstate current limit MSR */
/* define the two driver architectures */
#define CPU_OPTERON 0
#define CPU_HW_PSTATE 1
/* /*
* There are restrictions frequencies have to follow: * There are restrictions frequencies have to follow:
* - only 1 entry in the low fid table ( <=1.4GHz ) * - only 1 entry in the low fid table ( <=1.4GHz )
...@@ -218,5 +187,4 @@ static int core_frequency_transition(struct powernow_k8_data *data, u32 reqfid); ...@@ -218,5 +187,4 @@ static int core_frequency_transition(struct powernow_k8_data *data, u32 reqfid);
static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, unsigned int index); static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, unsigned int index);
static int fill_powernow_table_pstate(struct powernow_k8_data *data, struct cpufreq_frequency_table *powernow_table);
static int fill_powernow_table_fidvid(struct powernow_k8_data *data, struct cpufreq_frequency_table *powernow_table); static int fill_powernow_table_fidvid(struct powernow_k8_data *data, struct cpufreq_frequency_table *powernow_table);
...@@ -18,9 +18,10 @@ static struct cpuidle_driver *cpuidle_curr_driver; ...@@ -18,9 +18,10 @@ static struct cpuidle_driver *cpuidle_curr_driver;
DEFINE_SPINLOCK(cpuidle_driver_lock); DEFINE_SPINLOCK(cpuidle_driver_lock);
int cpuidle_driver_refcount; int cpuidle_driver_refcount;
static void __cpuidle_register_driver(struct cpuidle_driver *drv) static void set_power_states(struct cpuidle_driver *drv)
{ {
int i; int i;
/* /*
* cpuidle driver should set the drv->power_specified bit * cpuidle driver should set the drv->power_specified bit
* before registering if the driver provides * before registering if the driver provides
...@@ -35,13 +36,10 @@ static void __cpuidle_register_driver(struct cpuidle_driver *drv) ...@@ -35,13 +36,10 @@ static void __cpuidle_register_driver(struct cpuidle_driver *drv)
* a power value of -1. So we use -2, -3, etc., for other * a power value of -1. So we use -2, -3, etc., for other
* c-states. * c-states.
*/ */
if (!drv->power_specified) { for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++)
for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) drv->states[i].power_usage = -1 - i;
drv->states[i].power_usage = -1 - i;
}
} }
/** /**
* cpuidle_register_driver - registers a driver * cpuidle_register_driver - registers a driver
* @drv: the driver * @drv: the driver
...@@ -59,13 +57,16 @@ int cpuidle_register_driver(struct cpuidle_driver *drv) ...@@ -59,13 +57,16 @@ int cpuidle_register_driver(struct cpuidle_driver *drv)
spin_unlock(&cpuidle_driver_lock); spin_unlock(&cpuidle_driver_lock);
return -EBUSY; return -EBUSY;
} }
__cpuidle_register_driver(drv);
if (!drv->power_specified)
set_power_states(drv);
cpuidle_curr_driver = drv; cpuidle_curr_driver = drv;
spin_unlock(&cpuidle_driver_lock); spin_unlock(&cpuidle_driver_lock);
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(cpuidle_register_driver); EXPORT_SYMBOL_GPL(cpuidle_register_driver);
/** /**
...@@ -96,7 +97,6 @@ void cpuidle_unregister_driver(struct cpuidle_driver *drv) ...@@ -96,7 +97,6 @@ void cpuidle_unregister_driver(struct cpuidle_driver *drv)
spin_unlock(&cpuidle_driver_lock); spin_unlock(&cpuidle_driver_lock);
} }
EXPORT_SYMBOL_GPL(cpuidle_unregister_driver); EXPORT_SYMBOL_GPL(cpuidle_unregister_driver);
struct cpuidle_driver *cpuidle_driver_ref(void) struct cpuidle_driver *cpuidle_driver_ref(void)
......
@@ -88,6 +88,8 @@ static int ladder_select_state(struct cpuidle_driver *drv,
         /* consider promotion */
         if (last_idx < drv->state_count - 1 &&
+            !drv->states[last_idx + 1].disabled &&
+            !dev->states_usage[last_idx + 1].disable &&
             last_residency > last_state->threshold.promotion_time &&
             drv->states[last_idx + 1].exit_latency <= latency_req) {
                 last_state->stats.promotion_count++;
@@ -100,7 +102,9 @@ static int ladder_select_state(struct cpuidle_driver *drv,
         /* consider demotion */
         if (last_idx > CPUIDLE_DRIVER_STATE_START &&
-            drv->states[last_idx].exit_latency > latency_req) {
+            (drv->states[last_idx].disabled ||
+            dev->states_usage[last_idx].disable ||
+            drv->states[last_idx].exit_latency > latency_req)) {
                 int i;
 
                 for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) {
...
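With these two hunks the ladder governor respects both ways a C-state can be switched off: drv->states[i].disabled (set by the driver for all CPUs) and dev->states_usage[i].disable (set per CPU from sysfs). Promotion never steps into a disabled state, and demotion is forced out of one. The combined test, pulled out as a sketch (not a kernel helper; 'drv'/'dev' as in ladder_select_state()):

static bool ladder_state_unusable(struct cpuidle_driver *drv,
                                  struct cpuidle_device *dev, int idx)
{
        /* Disabled driver-wide, or disabled for this CPU via
         * /sys/devices/system/cpu/cpuN/cpuidle/stateN/disable. */
        return drv->states[idx].disabled || dev->states_usage[idx].disable;
}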
@@ -606,21 +606,6 @@ static int pci_pm_prepare(struct device *dev)
         struct device_driver *drv = dev->driver;
         int error = 0;
 
-        /*
-         * If a PCI device configured to wake up the system from sleep states
-         * has been suspended at run time and there's a resume request pending
-         * for it, this is equivalent to the device signaling wakeup, so the
-         * system suspend operation should be aborted.
-         */
-        pm_runtime_get_noresume(dev);
-        if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
-                pm_wakeup_event(dev, 0);
-
-        if (pm_wakeup_pending()) {
-                pm_runtime_put_sync(dev);
-                return -EBUSY;
-        }
-
         /*
          * PCI devices suspended at run time need to be resumed at this
          * point, because in general it is necessary to reconfigure them for
@@ -644,8 +629,6 @@ static void pci_pm_complete(struct device *dev)
 
         if (drv && drv->pm && drv->pm->complete)
                 drv->pm->complete(dev);
-
-        pm_runtime_put_sync(dev);
 }
 
 #else /* !CONFIG_PM_SLEEP */
...
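Note: the wakeup check deleted from pci_pm_prepare() is not simply dropped. Elsewhere in this pull ("PM: Prevent runtime suspend during system resume"), the PM core takes over the pm_runtime_get_noresume()/pm_runtime_put() pairing around the whole suspend-resume cycle, which, as far as this series shows, is what makes the PCI-local copy (and the matching pm_runtime_put_sync() in pci_pm_complete()) redundant.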
@@ -98,7 +98,6 @@ static int push_cxx_to_hypervisor(struct acpi_processor *_pr)
                 dst_cx->type = cx->type;
                 dst_cx->latency = cx->latency;
-                dst_cx->power = cx->power;
 
                 dst_cx->dpcnt = 0;
                 set_xen_guest_handle(dst_cx->dp, NULL);
...
@@ -3,7 +3,6 @@
 #include <linux/kernel.h>
 #include <linux/cpu.h>
-#include <linux/cpuidle.h>
 #include <linux/thermal.h>
 #include <asm/acpi.h>
@@ -59,13 +58,11 @@ struct acpi_processor_cx {
         u8 entry_method;
         u8 index;
         u32 latency;
-        u32 power;
         u8 bm_sts_skip;
         char desc[ACPI_CX_DESC_LEN];
 };
 
 struct acpi_processor_power {
-        struct cpuidle_device dev;
         struct acpi_processor_cx *state;
         unsigned long bm_check_timestamp;
         u32 default_state;
@@ -325,12 +322,10 @@ extern void acpi_processor_reevaluate_tstate(struct acpi_processor *pr,
 extern const struct file_operations acpi_processor_throttling_fops;
 extern void acpi_processor_throttling_init(void);
 /* in processor_idle.c */
-int acpi_processor_power_init(struct acpi_processor *pr,
-                              struct acpi_device *device);
+int acpi_processor_power_init(struct acpi_processor *pr);
+int acpi_processor_power_exit(struct acpi_processor *pr);
 int acpi_processor_cst_has_changed(struct acpi_processor *pr);
 int acpi_processor_hotplug(struct acpi_processor *pr);
-int acpi_processor_power_exit(struct acpi_processor *pr,
-                              struct acpi_device *device);
 int acpi_processor_suspend(struct device *dev);
 int acpi_processor_resume(struct device *dev);
 extern struct cpuidle_driver acpi_idle_driver;
...
@@ -97,6 +97,8 @@ struct clock_event_device {
         void (*broadcast)(const struct cpumask *mask);
         void (*set_mode)(enum clock_event_mode mode,
                          struct clock_event_device *);
+        void (*suspend)(struct clock_event_device *);
+        void (*resume)(struct clock_event_device *);
         unsigned long min_delta_ticks;
         unsigned long max_delta_ticks;
@@ -156,6 +158,9 @@ clockevents_calc_mult_shift(struct clock_event_device *ce, u32 freq, u32 minsec)
                                           freq, minsec);
 }
 
+extern void clockevents_suspend(void);
+extern void clockevents_resume(void);
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 extern void clockevents_notify(unsigned long reason, void *arg);
 #else
@@ -164,6 +169,9 @@ extern void clockevents_notify(unsigned long reason, void *arg);
 #else /* CONFIG_GENERIC_CLOCKEVENTS_BUILD */
 
+static inline void clockevents_suspend(void) {}
+static inline void clockevents_resume(void) {}
+
 #define clockevents_notify(reason, arg) do { } while (0)
 #endif
...
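The two new per-device callbacks let a clock event driver save and restore its hardware around system sleep instead of relying on mode reprogramming. A hypothetical driver would wire them up roughly like this (all names illustrative; real drivers also provide set_mode/set_next_event):

static void demo_clkevt_suspend(struct clock_event_device *ced)
{
        /* quiesce the timer and save any volatile register state */
}

static void demo_clkevt_resume(struct clock_event_device *ced)
{
        /* reprogram hardware state lost while suspended */
}

static struct clock_event_device demo_clkevt = {
        .name     = "demo-timer",
        .features = CLOCK_EVT_FEAT_ONESHOT,
        .suspend  = demo_clkevt_suspend,
        .resume   = demo_clkevt_resume,
};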
@@ -776,6 +776,13 @@ static inline void pm_suspend_ignore_children(struct device *dev, bool enable)
         dev->power.ignore_children = enable;
 }
 
+static inline void dev_pm_syscore_device(struct device *dev, bool val)
+{
+#ifdef CONFIG_PM_SLEEP
+        dev->power.syscore = val;
+#endif
+}
+
 static inline void device_lock(struct device *dev)
 {
         mutex_lock(&dev->mutex);
...
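dev_pm_syscore_device() is the new opt-out for devices that must keep running while the system sleeps (the SH TMU/CMT/MTU2 timers in this pull are the motivating case): with power.syscore set, the PM core skips the device's ordinary suspend/resume callbacks and leaves it to syscore-style handling. A hypothetical probe:

static int demo_timer_probe(struct platform_device *pdev)
{
        /* This timer keeps ticking through suspend, so exempt it
         * from regular device suspend/resume. */
        dev_pm_syscore_device(&pdev->dev, true);
        return 0;
}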
@@ -48,6 +48,14 @@ int opp_disable(struct device *dev, unsigned long freq);
 
 struct srcu_notifier_head *opp_get_notifier(struct device *dev);
 
+#ifdef CONFIG_OF
+int of_init_opp_table(struct device *dev);
+#else
+static inline int of_init_opp_table(struct device *dev)
+{
+        return -EINVAL;
+}
+#endif /* CONFIG_OF */
 #else
 static inline unsigned long opp_get_voltage(struct opp *opp)
 {
...
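of_init_opp_table() backs the new cpufreq-cpu0 driver: it parses the device tree "operating-points" property into the regular OPP table, after which the existing opp_* lookups work unchanged. A sketch of a consumer, assuming CPU0's DT node carries the property (error handling trimmed):

static int demo_cpufreq_init(void)
{
        struct device *cpu_dev = get_cpu_device(0);
        unsigned long freq = 0;
        struct opp *opp;
        int ret;

        ret = of_init_opp_table(cpu_dev);       /* -EINVAL without CONFIG_OF */
        if (ret)
                return ret;

        rcu_read_lock();                        /* OPP lookups are RCU-protected */
        opp = opp_find_freq_ceil(cpu_dev, &freq);       /* lowest available OPP */
        rcu_read_unlock();

        return IS_ERR(opp) ? PTR_ERR(opp) : 0;
}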
@@ -510,12 +510,14 @@ struct dev_pm_info {
         bool is_prepared:1;     /* Owned by the PM core */
         bool is_suspended:1;    /* Ditto */
         bool ignore_children:1;
+        bool early_init:1;      /* Owned by the PM core */
         spinlock_t lock;
 #ifdef CONFIG_PM_SLEEP
         struct list_head entry;
         struct completion completion;
         struct wakeup_source *wakeup;
         bool wakeup_path:1;
+        bool syscore:1;
 #else
         unsigned int should_wakeup:1;
 #endif
...
@@ -114,7 +114,6 @@ struct generic_pm_domain_data {
         struct mutex lock;
         unsigned int refcount;
         bool need_restore;
-        bool always_on;
 };
 
 #ifdef CONFIG_PM_GENERIC_DOMAINS
@@ -139,36 +138,32 @@ extern int __pm_genpd_of_add_device(struct device_node *genpd_node,
                                     struct device *dev,
                                     struct gpd_timing_data *td);
 
-static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
-                                      struct device *dev)
-{
-        return __pm_genpd_add_device(genpd, dev, NULL);
-}
-
-static inline int pm_genpd_of_add_device(struct device_node *genpd_node,
-                                         struct device *dev)
-{
-        return __pm_genpd_of_add_device(genpd_node, dev, NULL);
-}
+extern int __pm_genpd_name_add_device(const char *domain_name,
+                                      struct device *dev,
+                                      struct gpd_timing_data *td);
 
 extern int pm_genpd_remove_device(struct generic_pm_domain *genpd,
                                   struct device *dev);
-extern void pm_genpd_dev_always_on(struct device *dev, bool val);
 extern void pm_genpd_dev_need_restore(struct device *dev, bool val);
 extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
                                   struct generic_pm_domain *new_subdomain);
+extern int pm_genpd_add_subdomain_names(const char *master_name,
+                                        const char *subdomain_name);
 extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
                                      struct generic_pm_domain *target);
 extern int pm_genpd_add_callbacks(struct device *dev,
                                   struct gpd_dev_ops *ops,
                                   struct gpd_timing_data *td);
 extern int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td);
-extern int genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state);
-extern int genpd_detach_cpuidle(struct generic_pm_domain *genpd);
+extern int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state);
+extern int pm_genpd_name_attach_cpuidle(const char *name, int state);
+extern int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd);
+extern int pm_genpd_name_detach_cpuidle(const char *name);
 extern void pm_genpd_init(struct generic_pm_domain *genpd,
                           struct dev_power_governor *gov, bool is_off);
 
 extern int pm_genpd_poweron(struct generic_pm_domain *genpd);
+extern int pm_genpd_name_poweron(const char *domain_name);
 
 extern bool default_stop_ok(struct device *dev);
@@ -189,8 +184,15 @@ static inline int __pm_genpd_add_device(struct generic_pm_domain *genpd,
 {
         return -ENOSYS;
 }
-static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
-                                      struct device *dev)
+static inline int __pm_genpd_of_add_device(struct device_node *genpd_node,
+                                           struct device *dev,
+                                           struct gpd_timing_data *td)
+{
+        return -ENOSYS;
+}
+static inline int __pm_genpd_name_add_device(const char *domain_name,
+                                             struct device *dev,
+                                             struct gpd_timing_data *td)
 {
         return -ENOSYS;
 }
@@ -199,13 +201,17 @@ static inline int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 {
         return -ENOSYS;
 }
-static inline void pm_genpd_dev_always_on(struct device *dev, bool val) {}
 static inline void pm_genpd_dev_need_restore(struct device *dev, bool val) {}
 static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
                                          struct generic_pm_domain *new_sd)
 {
         return -ENOSYS;
 }
+static inline int pm_genpd_add_subdomain_names(const char *master_name,
+                                               const char *subdomain_name)
+{
+        return -ENOSYS;
+}
 static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
                                             struct generic_pm_domain *target)
 {
@@ -221,11 +227,19 @@ static inline int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td)
 {
         return -ENOSYS;
 }
-static inline int genpd_attach_cpuidle(struct generic_pm_domain *genpd, int st)
+static inline int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int st)
 {
         return -ENOSYS;
 }
-static inline int genpd_detach_cpuidle(struct generic_pm_domain *genpd)
+static inline int pm_genpd_name_attach_cpuidle(const char *name, int state)
+{
+        return -ENOSYS;
+}
+static inline int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
+{
+        return -ENOSYS;
+}
+static inline int pm_genpd_name_detach_cpuidle(const char *name)
 {
         return -ENOSYS;
 }
@@ -237,6 +251,10 @@ static inline int pm_genpd_poweron(struct generic_pm_domain *genpd)
 {
         return -ENOSYS;
 }
+static inline int pm_genpd_name_poweron(const char *domain_name)
+{
+        return -ENOSYS;
+}
 static inline bool default_stop_ok(struct device *dev)
 {
         return false;
@@ -245,6 +263,24 @@ static inline bool default_stop_ok(struct device *dev)
 #define pm_domain_always_on_gov NULL
 #endif
 
+static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
+                                      struct device *dev)
+{
+        return __pm_genpd_add_device(genpd, dev, NULL);
+}
+
+static inline int pm_genpd_of_add_device(struct device_node *genpd_node,
+                                         struct device *dev)
+{
+        return __pm_genpd_of_add_device(genpd_node, dev, NULL);
+}
+
+static inline int pm_genpd_name_add_device(const char *domain_name,
+                                           struct device *dev)
+{
+        return __pm_genpd_name_add_device(domain_name, dev, NULL);
+}
+
 static inline int pm_genpd_remove_callbacks(struct device *dev)
 {
         return __pm_genpd_remove_callbacks(dev, true);
@@ -258,4 +294,20 @@ static inline void genpd_queue_power_off_work(struct generic_pm_domain *gpd) {}
 static inline void pm_genpd_poweroff_unused(void) {}
 #endif
 
+#ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP
+extern void pm_genpd_syscore_switch(struct device *dev, bool suspend);
+#else
+static inline void pm_genpd_syscore_switch(struct device *dev, bool suspend) {}
+#endif
+
+static inline void pm_genpd_syscore_poweroff(struct device *dev)
+{
+        pm_genpd_syscore_switch(dev, true);
+}
+
+static inline void pm_genpd_syscore_poweron(struct device *dev)
+{
+        pm_genpd_syscore_switch(dev, false);
+}
+
 #endif /* _LINUX_PM_DOMAIN_H */
...
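The name-based additions mean platform code no longer needs a pointer to the struct generic_pm_domain to hook devices, subdomains, or cpuidle states up to it. A sketch (domain names "A4S"/"A3SM" are illustrative stand-ins; every call returns -ENOSYS when genpd support is compiled out):

static int __init demo_domain_setup(struct device *dev)
{
        int ret;

        ret = pm_genpd_name_add_device("A4S", dev);
        if (ret)
                return ret;

        /* Couple the domain to cpuidle state index 1 so idling into
         * that C-state can power the whole domain off. */
        ret = pm_genpd_name_attach_cpuidle("A4S", 1);
        if (ret)
                return ret;

        return pm_genpd_add_subdomain_names("A4S", "A3SM");
}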
@@ -263,6 +263,10 @@ config PM_GENERIC_DOMAINS
         bool
         depends on PM
 
+config PM_GENERIC_DOMAINS_SLEEP
+        def_bool y
+        depends on PM_SLEEP && PM_GENERIC_DOMAINS
+
 config PM_GENERIC_DOMAINS_RUNTIME
         def_bool y
         depends on PM_RUNTIME && PM_GENERIC_DOMAINS
...
@@ -37,7 +37,7 @@ static struct sysrq_key_op sysrq_poweroff_op = {
         .enable_mask = SYSRQ_ENABLE_BOOT,
 };
 
-static int pm_sysrq_init(void)
+static int __init pm_sysrq_init(void)
 {
         register_sysrq_key('o', &sysrq_poweroff_op);
         return 0;
...
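pm_sysrq_init() runs exactly once at boot, so annotating it __init lets its text be discarded with the rest of the init sections. The registration line is outside this hunk; presumably it is the usual initcall wiring, something like:

subsys_initcall(pm_sysrq_init);         /* assumed; not shown in this diff */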
@@ -79,7 +79,7 @@ static int try_to_freeze_tasks(bool user_only)
                 /*
                  * We need to retry, but first give the freezing tasks some
-                 * time to enter the regrigerator.
+                 * time to enter the refrigerator.
                  */
                 msleep(10);
         }
...
@@ -139,6 +139,7 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
         default:
                 /* runtime check for not using enum */
                 BUG();
+                return PM_QOS_DEFAULT_VALUE;
         }
 }
...
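The added return is unreachable when BUG() is a real trap, but BUG() is not guaranteed to be a noreturn on every configuration (and can compile to nothing with CONFIG_BUG=n), so without it pm_qos_get_value() could fall off the end of a non-void function. The explicit PM_QOS_DEFAULT_VALUE both silences the resulting compiler warning and yields a sane answer in those builds.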
@@ -397,6 +397,30 @@ void clockevents_exchange_device(struct clock_event_device *old,
         local_irq_restore(flags);
 }
 
+/**
+ * clockevents_suspend - suspend clock devices
+ */
+void clockevents_suspend(void)
+{
+        struct clock_event_device *dev;
+
+        list_for_each_entry_reverse(dev, &clockevent_devices, list)
+                if (dev->suspend)
+                        dev->suspend(dev);
+}
+
+/**
+ * clockevents_resume - resume clock devices
+ */
+void clockevents_resume(void)
+{
+        struct clock_event_device *dev;
+
+        list_for_each_entry(dev, &clockevent_devices, list)
+                if (dev->resume)
+                        dev->resume(dev);
+}
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 /**
  * clockevents_notify - notification about relevant events
...
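Design note: clockevents_suspend() walks clockevent_devices in reverse while clockevents_resume() walks it forward, mirroring the usual PM convention of quiescing the most recently added (and potentially dependent) devices first and bringing them back last. Both are called from the timekeeping suspend/resume path with interrupts disabled, which is presumably why no lock is taken around the list here.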
@@ -776,6 +776,7 @@ static void timekeeping_resume(void)
 
         read_persistent_clock(&ts);
 
+        clockevents_resume();
         clocksource_resume();
 
         write_seqlock_irqsave(&tk->lock, flags);
@@ -835,6 +836,7 @@ static int timekeeping_suspend(void)
 
         clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL);
         clocksource_suspend();
+        clockevents_suspend();
 
         return 0;
 }
...