Commit e6c81cce authored by Linus Torvalds

Merge tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC platform updates from Olof Johansson:
 "Our SoC branch usually contains expanded support for new SoCs and
  other core platform code.  In this case, that includes:

   - support for the new Annapurna Labs "Alpine" platform

   - a rework greatly simplifying adding new platform support to the
     MCPM subsystem (Multi-cluster power management)

   - cpuidle and PM improvements for Exynos3250

   - misc updates for Renesas, OMAP, Meson, i.MX.  Some of these could
     have gone in other branches but ended up here for various reasons"

* tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (53 commits)
  ARM: alpine: add support for generic pci
  ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
  ARM: vexpress: migrate DCSCB to the new MCPM backend abstraction
  ARM: vexpress: DCSCB: tighten CPU validity assertion
  ARM: vexpress: migrate TC2 to the new MCPM backend abstraction
  ARM: MCPM: move the algorithmic complexity to the core code
  ARM: EXYNOS: allow cpuidle driver usage on Exynos3250 SoC
  ARM: EXYNOS: add AFTR mode support for Exynos3250
  ARM: EXYNOS: add code for setting/clearing boot flag
  ARM: EXYNOS: fix CPU1 hotplug on Exynos3250
  ARM: S3C64XX: Use fixed IRQ bases to avoid conflicts on Cragganmore
  ARM: cygnus: fix const declaration bcm_cygnus_dt_compat
  ARM: DRA7: hwmod: Fix the hwmod class for GPTimer4
  ARM: DRA7: hwmod: Add data for GPTimers 13 through 16
  ARM: EXYNOS: Remove left over 'extra_save'
  ARM: EXYNOS: Constify exynos_pm_data array
  ARM: EXYNOS: use static in suspend.c
  ARM: EXYNOS: Use platform device name as power domain name
  ARM: EXYNOS: add support for async-bridge clocks for pm_domains
  ARM: omap-device: add missed callback for suspend-to-disk
  ...
parents d0440c59 a018bb2f
@@ -96,6 +96,11 @@ EBU Armada family
88F6820
88F6828
Armada 390/398 Flavors:
88F6920
88F6928
Product infos: http://www.marvell.com/embedded-processors/armada-39x/
Armada XP Flavors:
MV78230
MV78260
...
Annapurna Labs Alpine Platform Device Tree Bindings
---------------------------------------------------------------
Boards in the Alpine family shall have the following properties:
* Required root node properties:
compatible: must contain "al,alpine"
* Example:
/ {
model = "Annapurna Labs Alpine Dev Board";
compatible = "al,alpine";
...
}
* CPU node:
The Alpine platform includes Cortex-A15 cores.
enable-method: must be "al,alpine-smp" to allow SMP [1]
Example:
cpus {
#address-cells = <1>;
#size-cells = <0>;
enable-method = "al,alpine-smp";
cpu@0 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <0>;
};
cpu@1 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <1>;
};
cpu@2 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <2>;
};
cpu@3 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <3>;
};
};
* Alpine CPU resume registers
The CPU resume registers are used to define the required resume address after
reset.
Properties:
- compatible : Should contain "al,alpine-cpu-resume".
- reg : Offset and length of the register set for the device
Example:
cpu_resume {
compatible = "al,alpine-cpu-resume";
reg = <0xfbff5ed0 0x30>;
};
* Alpine System-Fabric Service Registers
The System-Fabric Service Registers allow various operations on the CPU and
system fabric, such as powering CPUs off.
Properties:
- compatible : Should contain "al,alpine-sysfabric-service" and "syscon".
- reg : Offset and length of the register set for the device
Example:
nb_service {
compatible = "al,alpine-sysfabric-service", "syscon";
reg = <0xfb070000 0x10000>;
};
[1] arm/cpu-enable-method/al,alpine-smp
@@ -8,3 +8,7 @@ Boards with the Amlogic Meson6 SoC shall have the following properties:
Boards with the Amlogic Meson8 SoC shall have the following properties:
Required root node property:
compatible: "amlogic,meson8";
Board compatible values:
- "geniatech,atv1200"
- "minix,neo-x8"
Marvell Armada 39x Platforms Device Tree Bindings
-------------------------------------------------
Boards with a SoC of the Marvell Armada 39x family shall have the
following property:
Required root node property:
- compatible: must contain "marvell,armada390"
In addition, boards using the Marvell Armada 398 SoC shall have the
following property, listed before the previous one:
Required root node property:
compatible: must contain "marvell,armada398"
Example:
compatible = "marvell,a398-db", "marvell,armada398", "marvell,armada390";
========================================================
Secondary CPU enable-method "al,alpine-smp" binding
========================================================
This document describes the "al,alpine-smp" method for
enabling secondary CPUs. To apply to all CPUs, a single
"al,alpine-smp" enable method should be defined in the
"cpus" node.
Enable method name: "al,alpine-smp"
Compatible machines: "al,alpine"
Compatible CPUs: "arm,cortex-a15"
Related properties: (none)
Note:
This enable method requires valid nodes compatible with
"al,alpine-cpu-resume" and "al,alpine-nb-service"[1].
Example:
cpus {
#address-cells = <1>;
#size-cells = <0>;
enable-method = "al,alpine-smp";
cpu@0 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <0>;
};
cpu@1 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <1>;
};
cpu@2 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <2>;
};
cpu@3 {
compatible = "arm,cortex-a15";
device_type = "cpu";
reg = <3>;
};
};
--
[1] arm/al,alpine.txt
@@ -192,6 +192,7 @@ nodes to be present and contain the properties described below.
"brcm,brahma-b15"
"marvell,armada-375-smp"
"marvell,armada-380-smp"
"marvell,armada-390-smp"
"marvell,armada-xp-smp"
"qcom,gcc-msm8660"
"qcom,kpss-acc-v1"
...
Geniatech platforms device tree bindings
-------------------------------------------
Geniatech ATV1200
- compatible = "geniatech,atv1200"
Freescale i.MX General Power Controller
=======================================
The i.MX6Q General Power Control (GPC) block contains DVFS load tracking
counters and Power Gating Control (PGC) for the CPU and PU (GPU/VPU) power
domains.
Required properties:
- compatible: Should be "fsl,imx6q-gpc" or "fsl,imx6sl-gpc"
- reg: should be register base and length as documented in the
datasheet
- interrupts: Should contain GPC interrupt request 1
- pu-supply: Link to the LDO regulator powering the PU power domain
- clocks: Clock phandles to devices in the PU power domain that need
to be enabled during domain power-up for reset propagation.
- #power-domain-cells: Should be 1, see below:
The gpc node is a power-controller as documented by the generic power domain
bindings in Documentation/devicetree/bindings/power/power_domain.txt.
Example:
gpc: gpc@020dc000 {
compatible = "fsl,imx6q-gpc";
reg = <0x020dc000 0x4000>;
interrupts = <0 89 IRQ_TYPE_LEVEL_HIGH>,
<0 90 IRQ_TYPE_LEVEL_HIGH>;
pu-supply = <&reg_pu>;
clocks = <&clks IMX6QDL_CLK_GPU3D_CORE>,
<&clks IMX6QDL_CLK_GPU3D_SHADER>,
<&clks IMX6QDL_CLK_GPU2D_CORE>,
<&clks IMX6QDL_CLK_GPU2D_AXI>,
<&clks IMX6QDL_CLK_OPENVG_AXI>,
<&clks IMX6QDL_CLK_VPU_AXI>;
#power-domain-cells = <1>;
};
Specifying power domain for IP modules
======================================
IP cores belonging to a power domain should contain a 'power-domains' property
that is a phandle pointing to the gpc device node and a DOMAIN_INDEX specifying
the power domain the device belongs to.
Example of a device that is part of the PU power domain:
vpu: vpu@02040000 {
reg = <0x02040000 0x3c000>;
/* ... */
power-domains = <&gpc 1>;
/* ... */
};
The following DOMAIN_INDEX values are valid for i.MX6Q:
ARM_DOMAIN 0
PU_DOMAIN 1
The following additional DOMAIN_INDEX value is valid for i.MX6SL:
DISPLAY_DOMAIN 2
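For readability, the DOMAIN_INDEX values above can be captured in named
constants shared by device tree sources and drivers. A minimal sketch of such a
header, with hypothetical constant names that are not defined by this binding:

/* Hypothetical named constants for the i.MX6 GPC DOMAIN_INDEX values */
#define IMX6Q_GPC_DOMAIN_ARM      0
#define IMX6Q_GPC_DOMAIN_PU       1
#define IMX6SL_GPC_DOMAIN_DISPLAY 2

/* usage in a .dts: power-domains = <&gpc IMX6Q_GPC_DOMAIN_PU>; */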
@@ -11,6 +11,7 @@ adapteva Adapteva, Inc.
adh AD Holdings Plc.
adi Analog Devices, Inc.
aeroflexgaisler Aeroflex Gaisler AB
al Annapurna Labs
allwinner Allwinner Technology Co., Ltd.
alphascale AlphaScale Integrated Circuits Systems, Inc.
altr Altera Corp.
@@ -118,6 +119,7 @@ merrii Merrii Technology Co., Ltd.
micrel Micrel Inc.
microchip Microchip Technology Inc.
micron Micron Technology Inc.
minix MINIX Technology Ltd.
mitsubishi Mitsubishi Electric Corporation
mosaixtech Mosaix Technologies, Inc.
moxa Moxa
...
@@ -886,6 +886,11 @@ S: Maintained
F: drivers/media/rc/meson-ir.c
N: meson[x68]
ARM/Annapurna Labs ALPINE ARCHITECTURE
M: Tsahee Zidenberg <tsahee@annapurnalabs.com>
S: Maintained
F: arch/arm/mach-alpine/
ARM/ATMEL AT91RM9200 AND AT91SAM ARM ARCHITECTURES
M: Andrew Victor <linux@maxim.org.za>
M: Nicolas Ferre <nicolas.ferre@atmel.com>
...
@@ -839,6 +839,8 @@ config ARCH_VIRT
#
source "arch/arm/mach-mvebu/Kconfig"
source "arch/arm/mach-alpine/Kconfig"
source "arch/arm/mach-asm9260/Kconfig" source "arch/arm/mach-asm9260/Kconfig"
source "arch/arm/mach-at91/Kconfig" source "arch/arm/mach-at91/Kconfig"
......
@@ -93,6 +93,14 @@ choice
prompt "Kernel low-level debugging port"
depends on DEBUG_LL
config DEBUG_ALPINE_UART0
bool "Kernel low-level debugging messages via Alpine UART0"
depends on ARCH_ALPINE
select DEBUG_UART_8250
help
Say Y here if you want kernel low-level debugging support
on Alpine based platforms.
config DEBUG_ASM9260_UART
bool "Kernel low-level debugging via asm9260 UART"
depends on MACH_ASM9260
@@ -1397,6 +1405,7 @@ config DEBUG_UART_PHYS
default 0xf8b00000 if DEBUG_HIX5HD2_UART
default 0xf991e000 if DEBUG_QCOM_UARTDM
default 0xfcb00000 if DEBUG_HI3620_UART
default 0xfd883000 if DEBUG_ALPINE_UART0
default 0xfe800000 if ARCH_IOP32X
default 0xff690000 if DEBUG_RK32_UART2
default 0xffc02000 if DEBUG_SOCFPGA_UART
@@ -1462,6 +1471,7 @@ config DEBUG_UART_VIRT
default 0xfd000000 if ARCH_SPEAR3XX || ARCH_SPEAR6XX
default 0xfd000000 if ARCH_SPEAR13XX
default 0xfd012000 if ARCH_MV78XX0
default 0xfd883000 if DEBUG_ALPINE_UART0
default 0xfde12000 if ARCH_DOVE
default 0xfe012000 if ARCH_ORION5X
default 0xf31004c0 if DEBUG_MESON_UARTAO
@@ -1522,7 +1532,7 @@ config DEBUG_UART_8250_WORD
depends on DEBUG_LL_UART_8250 || DEBUG_UART_8250
depends on DEBUG_UART_8250_SHIFT >= 2
default y if DEBUG_PICOXCELL_UART || DEBUG_SOCFPGA_UART || \
ARCH_KEYSTONE || DEBUG_ALPINE_UART0 || \
DEBUG_DAVINCI_DMx_UART0 || DEBUG_DAVINCI_DA8XX_UART1 || \
DEBUG_DAVINCI_DA8XX_UART2 || \
DEBUG_BCM_KONA_UART || DEBUG_RK32_UART2 || \
...
@@ -142,6 +142,7 @@ textofs-$(CONFIG_ARCH_AXXIA) := 0x00308000
# Machine directory name. This list is sorted alphanumerically
# by CONFIG_* macro name.
machine-$(CONFIG_ARCH_ALPINE) += alpine
machine-$(CONFIG_ARCH_AT91) += at91
machine-$(CONFIG_ARCH_AXXIA) += axxia
machine-$(CONFIG_ARCH_BCM) += bcm
...
@@ -513,9 +513,27 @@ &iic3 {
pinctrl-0 = <&iic3_pins>;
status = "okay";
pmic@58 {
compatible = "dlg,da9063";
reg = <0x58>;
interrupt-parent = <&irqc0>;
interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
interrupt-controller;
rtc {
compatible = "dlg,da9063-rtc";
};
wdt {
compatible = "dlg,da9063-watchdog";
};
};
vdd_dvfs: regulator@68 {
compatible = "dlg,da9210";
reg = <0x68>;
interrupt-parent = <&irqc0>;
interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <1000000>;
...
@@ -517,9 +517,27 @@ &i2c6 {
status = "okay";
clock-frequency = <100000>;
pmic@58 {
compatible = "dlg,da9063";
reg = <0x58>;
interrupt-parent = <&irqc0>;
interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
interrupt-controller;
rtc {
compatible = "dlg,da9063-rtc";
};
wdt {
compatible = "dlg,da9063-watchdog";
};
};
vdd_dvfs: regulator@68 {
compatible = "dlg,da9210";
reg = <0x68>;
interrupt-parent = <&irqc0>;
interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <1000000>;
...
@@ -55,21 +55,80 @@ bool mcpm_is_available(void)
return (platform_ops) ? true : false;
}
/*
* We can't use regular spinlocks. In the switcher case, it is possible
* for an outbound CPU to call power_down() after its inbound counterpart
* is already live using the same logical CPU number which trips lockdep
* debugging.
*/
static arch_spinlock_t mcpm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
static int mcpm_cpu_use_count[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
static inline bool mcpm_cluster_unused(unsigned int cluster)
{
int i, cnt;
for (i = 0, cnt = 0; i < MAX_CPUS_PER_CLUSTER; i++)
cnt |= mcpm_cpu_use_count[cluster][i];
return !cnt;
}
int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
{
bool cpu_is_down, cluster_is_down;
int ret = 0;
if (!platform_ops)
return -EUNATCH; /* try not to shadow power_up errors */
might_sleep();
/* backward compatibility callback */
if (platform_ops->power_up)
return platform_ops->power_up(cpu, cluster);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
/*
* Since this is called with IRQs enabled, and no arch_spin_lock_irq
* variant exists, we need to disable IRQs manually here.
*/
local_irq_disable();
arch_spin_lock(&mcpm_lock);
cpu_is_down = !mcpm_cpu_use_count[cluster][cpu];
cluster_is_down = mcpm_cluster_unused(cluster);
mcpm_cpu_use_count[cluster][cpu]++;
/*
* The only possible values are:
* 0 = CPU down
* 1 = CPU (still) up
* 2 = CPU requested to be up before it had a chance
* to actually make itself down.
* Any other value is a bug.
*/
BUG_ON(mcpm_cpu_use_count[cluster][cpu] != 1 &&
mcpm_cpu_use_count[cluster][cpu] != 2);
if (cluster_is_down)
ret = platform_ops->cluster_powerup(cluster);
if (cpu_is_down && !ret)
ret = platform_ops->cpu_powerup(cpu, cluster);
arch_spin_unlock(&mcpm_lock);
local_irq_enable();
return ret;
}
typedef void (*phys_reset_t)(unsigned long);
void mcpm_cpu_power_down(void)
{
unsigned int mpidr, cpu, cluster;
bool cpu_going_down, last_man;
phys_reset_t phys_reset;
if (WARN_ON_ONCE(!platform_ops))
return;
BUG_ON(!irqs_disabled());
@@ -79,28 +138,65 @@ void mcpm_cpu_power_down(void)
*/
setup_mm_for_reboot();
/* backward compatibility callback */
if (platform_ops->power_down) {
platform_ops->power_down();
goto not_dead;
}
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
__mcpm_cpu_going_down(cpu, cluster);
arch_spin_lock(&mcpm_lock);
BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
mcpm_cpu_use_count[cluster][cpu]--;
BUG_ON(mcpm_cpu_use_count[cluster][cpu] != 0 &&
mcpm_cpu_use_count[cluster][cpu] != 1);
cpu_going_down = !mcpm_cpu_use_count[cluster][cpu];
last_man = mcpm_cluster_unused(cluster);
if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
platform_ops->cpu_powerdown_prepare(cpu, cluster);
platform_ops->cluster_powerdown_prepare(cluster);
arch_spin_unlock(&mcpm_lock);
platform_ops->cluster_cache_disable();
__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
} else {
if (cpu_going_down)
platform_ops->cpu_powerdown_prepare(cpu, cluster);
arch_spin_unlock(&mcpm_lock);
/*
* If cpu_going_down is false here, that means a power_up
* request raced ahead of us. Even if we do not want to
* shut this CPU down, the caller still expects execution
* to return through the system resume entry path, like
* when the WFI is aborted due to a new IRQ or the like..
* So let's continue with cache cleaning in all cases.
*/
platform_ops->cpu_cache_disable();
}
__mcpm_cpu_down(cpu, cluster);
/* Now we are prepared for power-down, do it: */
if (cpu_going_down)
wfi();
not_dead:
/*
* It is possible for a power_up request to happen concurrently
* with a power_down request for the same CPU. In this case the
* CPU might not be able to actually enter a powered down state
* with the WFI instruction if the power_up request has removed
* the required reset condition. We must perform a re-entry in
* the kernel as if the power_up method just had deasserted reset
* on the CPU.
*/
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset(virt_to_phys(mcpm_entry_point));
@@ -125,26 +221,66 @@ int mcpm_wait_for_cpu_powerdown(unsigned int cpu, unsigned int cluster)
void mcpm_cpu_suspend(u64 expected_residency)
{
if (WARN_ON_ONCE(!platform_ops))
return;
/* backward compatibility callback */
if (platform_ops->suspend) {
phys_reset_t phys_reset;
BUG_ON(!irqs_disabled());
setup_mm_for_reboot();
platform_ops->suspend(expected_residency);
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset(virt_to_phys(mcpm_entry_point));
BUG();
}
/* Some platforms might have to enable special resume modes, etc. */
if (platform_ops->cpu_suspend_prepare) {
unsigned int mpidr = read_cpuid_mpidr();
unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
arch_spin_lock(&mcpm_lock);
platform_ops->cpu_suspend_prepare(cpu, cluster);
arch_spin_unlock(&mcpm_lock);
}
mcpm_cpu_power_down();
}
int mcpm_cpu_powered_up(void)
{
unsigned int mpidr, cpu, cluster;
bool cpu_was_down, first_man;
unsigned long flags;
if (!platform_ops)
return -EUNATCH;
/* backward compatibility callback */
if (platform_ops->powered_up) {
platform_ops->powered_up();
return 0;
}
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
local_irq_save(flags);
arch_spin_lock(&mcpm_lock);
cpu_was_down = !mcpm_cpu_use_count[cluster][cpu];
first_man = mcpm_cluster_unused(cluster);
if (first_man && platform_ops->cluster_is_up)
platform_ops->cluster_is_up(cluster);
if (cpu_was_down)
mcpm_cpu_use_count[cluster][cpu] = 1;
if (platform_ops->cpu_is_up)
platform_ops->cpu_is_up(cpu, cluster);
arch_spin_unlock(&mcpm_lock);
local_irq_restore(flags);
return 0;
}
@@ -334,8 +470,10 @@ int __init mcpm_sync_init(
}
mpidr = read_cpuid_mpidr();
this_cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
for_each_online_cpu(i) {
mcpm_cpu_use_count[this_cluster][i] = 1;
mcpm_sync.clusters[this_cluster].cpus[i].cpu = CPU_UP;
}
mcpm_sync.clusters[this_cluster].cluster = CLUSTER_UP;
sync_cache_w(&mcpm_sync);
...
@@ -171,12 +171,73 @@ void mcpm_cpu_suspend(u64 expected_residency);
int mcpm_cpu_powered_up(void);
/*
* Platform specific callbacks used in the implementation of the above API.
*
* cpu_powerup:
* Make given CPU runnable. Called with MCPM lock held and IRQs disabled.
* The given cluster is assumed to be set up (cluster_powerup would have
* been called beforehand). Must return 0 for success or negative error code.
*
* cluster_powerup:
* Set up power for given cluster. Called with MCPM lock held and IRQs
* disabled. Called before first cpu_powerup when cluster is down. Must
* return 0 for success or negative error code.
*
* cpu_suspend_prepare:
* Special suspend configuration. Called on target CPU with MCPM lock held
* and IRQs disabled. This callback is optional. If provided, it is called
* before cpu_powerdown_prepare.
*
* cpu_powerdown_prepare:
* Configure given CPU for power down. Called on target CPU with MCPM lock
* held and IRQs disabled. Power down must be effective only at the next
* WFI instruction.
*
* cluster_powerdown_prepare:
* Configure given cluster for power down. Called on one CPU from target
* cluster with MCPM lock held and IRQs disabled. A cpu_powerdown_prepare
* for each CPU in the cluster has happened when this occurs.
*
* cpu_cache_disable:
* Clean and disable CPU level cache for the calling CPU. Called with IRQs
* disabled only. The CPU is no longer cache coherent with the rest of the
* system when this returns.
*
* cluster_cache_disable:
* Clean and disable the cluster wide cache as well as the CPU level cache
* for the calling CPU. No call to cpu_cache_disable will happen for this
* CPU. Called with IRQs disabled and only when all the other CPUs are done
* with their own cpu_cache_disable. The cluster is no longer cache coherent
* with the rest of the system when this returns.
*
* cpu_is_up:
* Called on given CPU after it has been powered up or resumed. The MCPM lock
* is held and IRQs disabled. This callback is optional.
*
* cluster_is_up:
* Called by the first CPU to be powered up or resumed in given cluster.
* The MCPM lock is held and IRQs disabled. This callback is optional. If
* provided, it is called before cpu_is_up for that CPU.
*
* wait_for_powerdown:
* Wait until given CPU is powered down. This is called in sleeping context.
* Some reasonable timeout must be considered. Must return 0 for success or
* negative error code.
*/
struct mcpm_platform_ops {
int (*cpu_powerup)(unsigned int cpu, unsigned int cluster);
int (*cluster_powerup)(unsigned int cluster);
void (*cpu_suspend_prepare)(unsigned int cpu, unsigned int cluster);
void (*cpu_powerdown_prepare)(unsigned int cpu, unsigned int cluster);
void (*cluster_powerdown_prepare)(unsigned int cluster);
void (*cpu_cache_disable)(void);
void (*cluster_cache_disable)(void);
void (*cpu_is_up)(unsigned int cpu, unsigned int cluster);
void (*cluster_is_up)(unsigned int cluster);
int (*wait_for_powerdown)(unsigned int cpu, unsigned int cluster);
/* deprecated callbacks */
int (*power_up)(unsigned int cpu, unsigned int cluster);
void (*power_down)(void);
void (*suspend)(u64);
void (*powered_up)(void);
};
...
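The net effect of this rework shows in what a backend now has to provide. Below
is a minimal sketch of a platform backend written against the new callbacks;
the my_* functions are illustrative stubs, not code from this series, and a
real backend would program its platform's power controller where the comments
indicate:

/*
 * Minimal illustrative MCPM backend using the new callback API.
 * All my_* names are hypothetical stubs for a fictional platform.
 */
#include <linux/init.h>
#include <asm/mcpm.h>

static int my_cpu_powerup(unsigned int cpu, unsigned int cluster)
{
        /* assert power to the CPU; the cluster is already up here */
        return 0;
}

static int my_cluster_powerup(unsigned int cluster)
{
        /* power on cluster-level resources (L2, interconnect) */
        return 0;
}

static void my_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
{
        /* arm the controller so the next WFI powers this CPU off */
}

static void my_cluster_powerdown_prepare(unsigned int cluster)
{
        /* arm cluster-level power-off */
}

static void my_cpu_cache_disable(void)
{
        /* clean and disable the CPU-level cache, exit coherency */
}

static void my_cluster_cache_disable(void)
{
        /* clean and disable cluster-wide caches, exit coherency */
}

static const struct mcpm_platform_ops my_power_ops = {
        .cpu_powerup               = my_cpu_powerup,
        .cluster_powerup           = my_cluster_powerup,
        .cpu_powerdown_prepare     = my_cpu_powerdown_prepare,
        .cluster_powerdown_prepare = my_cluster_powerdown_prepare,
        .cpu_cache_disable         = my_cpu_cache_disable,
        .cluster_cache_disable     = my_cluster_cache_disable,
};

static int __init my_mcpm_init(void)
{
        /* use counting, locking and last-man election live in the core */
        return mcpm_platform_register(&my_power_ops);
}
early_initcall(my_mcpm_init);

All the use-count bookkeeping, race handling and last-man election that
backends previously open-coded now lives in mcpm_entry.c; the Exynos
conversion later in this diff deletes exactly that code from the platform side.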
config ARCH_ALPINE
bool "Annapurna Labs Alpine platform" if ARCH_MULTI_V7
select ARM_AMBA
select ARM_GIC
select GENERIC_IRQ_CHIP
select HAVE_ARM_ARCH_TIMER
select HAVE_SMP
select MFD_SYSCON
select PCI
select PCI_HOST_GENERIC
help
This enables support for the Annapurna Labs Alpine V1 boards.
obj-y += alpine_machine.o
obj-$(CONFIG_SMP) += platsmp.o alpine_cpu_pm.o
/*
* Low-level power-management support for Alpine platform.
*
* Copyright (C) 2015 Annapurna Labs Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include "alpine_cpu_pm.h"
#include "alpine_cpu_resume.h"
/* NB registers */
#define AL_SYSFAB_POWER_CONTROL(cpu) (0x2000 + (cpu)*0x100 + 0x20)
static struct regmap *al_sysfabric;
static struct al_cpu_resume_regs __iomem *al_cpu_resume_regs;
static int wakeup_supported;
int alpine_cpu_wakeup(unsigned int phys_cpu, uint32_t phys_resume_addr)
{
if (!wakeup_supported)
return -ENOSYS;
/*
* Set CPU resume address -
* secure firmware running on boot will jump to this address
* after setting proper CPU mode, and initializing e.g. secure
* regs (the same mode all CPUs are booted to - usually HYP)
*/
writel(phys_resume_addr,
&al_cpu_resume_regs->per_cpu[phys_cpu].resume_addr);
/* Power-up the CPU */
regmap_write(al_sysfabric, AL_SYSFAB_POWER_CONTROL(phys_cpu), 0);
return 0;
}
void __init alpine_cpu_pm_init(void)
{
struct device_node *np;
uint32_t watermark;
al_sysfabric = syscon_regmap_lookup_by_compatible("al,alpine-sysfabric-service");
np = of_find_compatible_node(NULL, NULL, "al,alpine-cpu-resume");
al_cpu_resume_regs = of_iomap(np, 0);
wakeup_supported = !IS_ERR(al_sysfabric) && al_cpu_resume_regs;
if (wakeup_supported) {
watermark = readl(&al_cpu_resume_regs->watermark);
wakeup_supported = (watermark & AL_CPU_RESUME_MAGIC_NUM_MASK)
== AL_CPU_RESUME_MAGIC_NUM;
}
}
/*
* Low-level power-management support for Alpine platform.
*
* Copyright (C) 2015 Annapurna Labs Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __ALPINE_CPU_PM_H__
#define __ALPINE_CPU_PM_H__
/* Alpine CPU Power Management Services Initialization */
void alpine_cpu_pm_init(void);
/* Wake-up a CPU */
int alpine_cpu_wakeup(unsigned int phys_cpu, uint32_t phys_resume_addr);
#endif /* __ALPINE_CPU_PM_H__ */
/*
* Annapurna Labs cpu-resume register structure.
*
* Copyright (C) 2015 Annapurna Labs Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef ALPINE_CPU_RESUME_H_
#define ALPINE_CPU_RESUME_H_
/* Per-cpu regs */
struct al_cpu_resume_regs_per_cpu {
uint32_t flags;
uint32_t resume_addr;
};
/* general regs */
struct al_cpu_resume_regs {
/* Watermark for validating the CPU resume struct */
uint32_t watermark;
uint32_t flags;
struct al_cpu_resume_regs_per_cpu per_cpu[];
};
/* The expected magic number for validating the resume addresses */
#define AL_CPU_RESUME_MAGIC_NUM 0xf0e1d200
#define AL_CPU_RESUME_MAGIC_NUM_MASK 0xffffff00
#endif /* ALPINE_CPU_RESUME_H_ */
/*
* Machine declaration for Alpine platforms.
*
* Copyright (C) 2015 Annapurna Labs Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/of_platform.h>
#include <asm/mach/arch.h>
static const char * const al_match[] __initconst = {
"al,alpine",
NULL,
};
DT_MACHINE_START(AL_DT, "Annapurna Labs Alpine")
.dt_compat = al_match,
MACHINE_END
/*
* SMP operations for Alpine platform.
*
* Copyright (C) 2015 Annapurna Labs Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/of.h>
#include <asm/smp_plat.h>
#include "alpine_cpu_pm.h"
static int alpine_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
phys_addr_t addr;
addr = virt_to_phys(secondary_startup);
if (addr > (phys_addr_t)(uint32_t)(-1)) {
pr_err("FAIL: resume address over 32bit (%pa)", &addr);
return -EINVAL;
}
return alpine_cpu_wakeup(cpu_logical_map(cpu), (uint32_t)addr);
}
static void __init alpine_smp_prepare_cpus(unsigned int max_cpus)
{
alpine_cpu_pm_init();
}
static struct smp_operations alpine_smp_ops __initdata = {
.smp_prepare_cpus = alpine_smp_prepare_cpus,
.smp_boot_secondary = alpine_boot_secondary,
};
CPU_METHOD_OF_DECLARE(alpine_smp, "al,alpine-smp", &alpine_smp_ops);
@@ -13,7 +13,7 @@
#include <asm/mach/arch.h>
static const char * const bcm_cygnus_dt_compat[] __initconst = {
"brcm,cygnus",
NULL,
};
...
@@ -126,6 +126,12 @@ enum {
void exynos_firmware_init(void);
/* CPU BOOT mode flag for Exynos3250 SoC bootloader */
#define C2_STATE (1 << 3)
void exynos_set_boot_flag(unsigned int cpu, unsigned int mode);
void exynos_clear_boot_flag(unsigned int cpu, unsigned int mode);
extern u32 exynos_get_eint_wake_mask(void);
#ifdef CONFIG_PM_SLEEP
...
@@ -214,6 +214,7 @@ static void __init exynos_dt_machine_init(void)
of_machine_is_compatible("samsung,exynos4212") ||
(of_machine_is_compatible("samsung,exynos4412") &&
of_machine_is_compatible("samsung,trats2")) ||
of_machine_is_compatible("samsung,exynos3250") ||
of_machine_is_compatible("samsung,exynos5250")) of_machine_is_compatible("samsung,exynos5250"))
platform_device_register(&exynos_cpuidle); platform_device_register(&exynos_cpuidle);
......
@@ -48,6 +48,12 @@ static int exynos_do_idle(unsigned long mode)
__raw_writel(virt_to_phys(exynos_cpu_resume_ns),
sysram_ns_base_addr + 0x24);
__raw_writel(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20);
if (soc_is_exynos3250()) {
exynos_smc(SMC_CMD_SAVE, OP_TYPE_CORE,
SMC_POWERSTATE_IDLE, 0);
exynos_smc(SMC_CMD_SHUTDOWN, OP_TYPE_CLUSTER,
SMC_POWERSTATE_IDLE, 0);
} else
exynos_smc(SMC_CMD_CPU0AFTR, 0, 0, 0);
break;
case FW_DO_IDLE_SLEEP:
@@ -206,3 +212,28 @@ void __init exynos_firmware_init(void)
outer_cache.configure = exynos_l2_configure;
}
}
#define REG_CPU_STATE_ADDR (sysram_ns_base_addr + 0x28)
#define BOOT_MODE_MASK 0x1f
void exynos_set_boot_flag(unsigned int cpu, unsigned int mode)
{
unsigned int tmp;
tmp = __raw_readl(REG_CPU_STATE_ADDR + cpu * 4);
if (mode & BOOT_MODE_MASK)
tmp &= ~BOOT_MODE_MASK;
tmp |= mode;
__raw_writel(tmp, REG_CPU_STATE_ADDR + cpu * 4);
}
void exynos_clear_boot_flag(unsigned int cpu, unsigned int mode)
{
unsigned int tmp;
tmp = __raw_readl(REG_CPU_STATE_ADDR + cpu * 4);
tmp &= ~mode;
__raw_writel(tmp, REG_CPU_STATE_ADDR + cpu * 4);
}
@@ -61,25 +61,7 @@ static void __iomem *ns_sram_base_addr;
: "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", \
"r9", "r10", "lr", "memory")
static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
{
unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
@@ -88,91 +70,45 @@ static int exynos_power_up(unsigned int cpu, unsigned int cluster)
cluster >= EXYNOS5420_NR_CLUSTERS)
return -EINVAL;
exynos_cpu_power_up(cpunr);
return 0;
}
static int exynos_cluster_powerup(unsigned int cluster)
{
pr_debug("%s: cluster %u\n", __func__, cluster);
if (cluster >= EXYNOS5420_NR_CLUSTERS)
return -EINVAL;
exynos_cluster_power_up(cluster);
return 0;
}
static void exynos_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
{
unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
BUG_ON(cpu >= EXYNOS5420_CPUS_PER_CLUSTER ||
cluster >= EXYNOS5420_NR_CLUSTERS);
exynos_cpu_power_down(cpunr);
}
static void exynos_cluster_powerdown_prepare(unsigned int cluster)
{
pr_debug("%s: cluster %u\n", __func__, cluster);
BUG_ON(cluster >= EXYNOS5420_NR_CLUSTERS);
exynos_cluster_power_down(cluster);
}
static void exynos_cpu_cache_disable(void)
{
/* Disable and flush the local CPU cache. */
exynos_v7_exit_coherency_flush(louis);
}
static void exynos_cluster_cache_disable(void)
{
if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
/*
* On the Cortex-A15 we need to disable
@@ -192,23 +128,7 @@ static void exynos_power_down(void)
* Disable cluster-level coherency by masking
* incoming snoops and DVM messages:
*/
cci_disable_port_by_cpu(read_cpuid_mpidr());
}
}
static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
@@ -222,10 +142,8 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
/* Wait for the core state to be OFF */
while (tries--) {
if ((exynos_cpu_power_state(cpunr) == 0))
return 0; /* success: the CPU is halted */
/* Otherwise, wait and retry: */
msleep(1);
@@ -234,63 +152,23 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
return -ETIMEDOUT; /* timeout */
}
static void exynos_cpu_is_up(unsigned int cpu, unsigned int cluster)
{
/* especially when resuming: make sure power control is set */
exynos_cpu_powerup(cpu, cluster);
}
static const struct mcpm_platform_ops exynos_power_ops = {
.cpu_powerup = exynos_cpu_powerup,
.cluster_powerup = exynos_cluster_powerup,
.cpu_powerdown_prepare = exynos_cpu_powerdown_prepare,
.cluster_powerdown_prepare = exynos_cluster_powerdown_prepare,
.cpu_cache_disable = exynos_cpu_cache_disable,
.cluster_cache_disable = exynos_cluster_cache_disable,
.wait_for_powerdown = exynos_wait_for_powerdown,
.cpu_is_up = exynos_cpu_is_up,
};
/*
* Enable cluster-level coherency, in preparation for turning on the MMU.
*/
@@ -302,19 +180,6 @@ static void __naked exynos_pm_power_up_setup(unsigned int affinity_level)
"b cci_enable_port_for_self");
}
static const struct of_device_id exynos_dt_mcpm_match[] = {
{ .compatible = "samsung,exynos5420" },
{ .compatible = "samsung,exynos5800" },
@@ -370,13 +235,11 @@ static int __init exynos_mcpm_init(void)
*/
pmu_raw_writel(EXYNOS5420_SWRESET_KFC_SEL, S5P_PMU_SPARE3);
ret = mcpm_platform_register(&exynos_power_ops);
if (!ret)
ret = mcpm_sync_init(exynos_pm_power_up_setup);
if (!ret)
ret = mcpm_loopback(exynos_cluster_cache_disable); /* turn on the CCI */
if (ret) {
iounmap(ns_sram_base_addr);
return ret;
...
@@ -126,6 +126,8 @@ static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
*/
void exynos_cpu_power_down(int cpu)
{
u32 core_conf;
if (cpu == 0 && (soc_is_exynos5420() || soc_is_exynos5800())) {
/*
* Bypass power down for CPU0 during suspend. Check for
@@ -137,7 +139,10 @@ void exynos_cpu_power_down(int cpu)
if (!(val & S5P_CORE_LOCAL_PWR_EN))
return;
}
core_conf = pmu_raw_readl(EXYNOS_ARM_CORE_CONFIGURATION(cpu));
core_conf &= ~S5P_CORE_LOCAL_PWR_EN;
pmu_raw_writel(core_conf, EXYNOS_ARM_CORE_CONFIGURATION(cpu));
}
/**
@@ -148,7 +153,12 @@ void exynos_cpu_power_down(int cpu)
*/
void exynos_cpu_power_up(int cpu)
{
u32 core_conf = S5P_CORE_LOCAL_PWR_EN;
if (soc_is_exynos3250())
core_conf |= S5P_CORE_AUTOWAKEUP_EN;
pmu_raw_writel(core_conf,
EXYNOS_ARM_CORE_CONFIGURATION(cpu));
}
@@ -226,6 +236,10 @@ static void exynos_core_restart(u32 core_id)
if (!of_machine_is_compatible("samsung,exynos3250"))
return;
while (!pmu_raw_readl(S5P_PMU_SPARE2))
udelay(10);
udelay(10);
val = pmu_raw_readl(EXYNOS_ARM_CORE_STATUS(core_id));
val |= S5P_CORE_WAKEUP_FROM_LOCAL_CFG;
pmu_raw_writel(val, EXYNOS_ARM_CORE_STATUS(core_id));
@@ -346,6 +360,9 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
call_firmware_op(cpu_boot, core_id);
if (soc_is_exynos3250())
dsb_sev();
else
arch_send_wakeup_ipi_mask(cpumask_of(cpu));
if (pen_release == -1)
...
@@ -127,6 +127,8 @@ int exynos_pm_central_resume(void)
static void exynos_set_wakeupmask(long mask)
{
pmu_raw_writel(mask, S5P_WAKEUP_MASK);
if (soc_is_exynos3250())
pmu_raw_writel(0x0, S5P_WAKEUP_MASK2);
}
static void exynos_cpu_set_boot_vector(long flags)
@@ -140,7 +142,7 @@ static int exynos_aftr_finisher(unsigned long flags)
{
int ret;
exynos_set_wakeupmask(soc_is_exynos3250() ? 0x40003ffe : 0x0000ff3e);
/* Set value of power down register for aftr mode */
exynos_sys_powerdown_conf(SYS_AFTR);
@@ -157,8 +159,13 @@ static int exynos_aftr_finisher(unsigned long flags)
void exynos_enter_aftr(void)
{
unsigned int cpuid = smp_processor_id();
cpu_pm_enter();
if (soc_is_exynos3250())
exynos_set_boot_flag(cpuid, C2_STATE);
exynos_pm_central_suspend();
if (of_machine_is_compatible("samsung,exynos4212") ||
@@ -178,6 +185,9 @@ void exynos_enter_aftr(void)
exynos_pm_central_resume();
if (soc_is_exynos3250())
exynos_clear_boot_flag(cpuid, C2_STATE);
cpu_pm_exit();
}
...
@@ -37,6 +37,7 @@ struct exynos_pm_domain {
struct clk *oscclk;
struct clk *clk[MAX_CLK_PER_DOMAIN];
struct clk *pclk[MAX_CLK_PER_DOMAIN];
struct clk *asb_clk[MAX_CLK_PER_DOMAIN];
};
static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
@@ -45,14 +46,19 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
void __iomem *base;
u32 timeout, pwr;
char *op;
int i;
pd = container_of(domain, struct exynos_pm_domain, pd);
base = pd->base;
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->asb_clk[i]))
break;
clk_prepare_enable(pd->asb_clk[i]);
}
/* Set oscclk before powering off a domain */
if (!power_on) {
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->clk[i]))
break;
@@ -81,8 +87,6 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
/* Restore clocks after powering on a domain */
if (power_on) {
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->clk[i]))
break;
@@ -92,6 +96,12 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
}
}
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->asb_clk[i]))
break;
clk_disable_unprepare(pd->asb_clk[i]);
}
return 0;
}
@@ -125,12 +135,21 @@ static __init int exynos4_pm_init_power_domain(void)
return -ENOMEM;
}
pd->pd.name = kstrdup(dev_name(dev), GFP_KERNEL);
pd->name = pd->pd.name;
pd->base = of_iomap(np, 0);
pd->pd.power_off = exynos_pd_power_off;
pd->pd.power_on = exynos_pd_power_on;
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
char clk_name[8];
snprintf(clk_name, sizeof(clk_name), "asb%d", i);
pd->asb_clk[i] = clk_get(dev, clk_name);
if (IS_ERR(pd->asb_clk[i]))
break;
}
pd->oscclk = clk_get(dev, "oscclk");
if (IS_ERR(pd->oscclk))
goto no_clk;
...
@@ -43,12 +43,14 @@
#define S5P_WAKEUP_STAT 0x0600
#define S5P_EINT_WAKEUP_MASK 0x0604
#define S5P_WAKEUP_MASK 0x0608
#define S5P_WAKEUP_MASK2 0x0614
#define S5P_INFORM0 0x0800
#define S5P_INFORM1 0x0804
#define S5P_INFORM5 0x0814
#define S5P_INFORM6 0x0818
#define S5P_INFORM7 0x081C
#define S5P_PMU_SPARE2 0x0908
#define S5P_PMU_SPARE3 0x090C
#define EXYNOS_IROM_DATA2 0x0988
@@ -182,6 +184,7 @@
#define S5P_CORE_LOCAL_PWR_EN 0x3
#define S5P_CORE_WAKEUP_FROM_LOCAL_CFG (0x3 << 8)
#define S5P_CORE_AUTOWAKEUP_EN (1 << 31)
/* Only for EXYNOS4210 */
#define S5P_CMU_CLKSTOP_LCD1_LOWPWR 0x1154
...
@@ -17,6 +17,8 @@
#define SMC_CMD_SLEEP (-3)
#define SMC_CMD_CPU1BOOT (-4)
#define SMC_CMD_CPU0AFTR (-5)
#define SMC_CMD_SAVE (-6)
#define SMC_CMD_SHUTDOWN (-7)
/* For CP15 Access */
#define SMC_CMD_C15RESUME (-11)
/* For L2 Cache Access */
@@ -32,4 +34,11 @@ extern void exynos_smc(u32 cmd, u32 arg1, u32 arg2, u32 arg3);
#endif /* __ASSEMBLY__ */
/* op type for SMC_CMD_SAVE and SMC_CMD_SHUTDOWN */
#define OP_TYPE_CORE 0x0
#define OP_TYPE_CLUSTER 0x1
/* Power State required for SMC_CMD_SAVE and SMC_CMD_SHUTDOWN */
#define SMC_POWERSTATE_IDLE 0x1
#endif
@@ -65,8 +65,6 @@ static struct sleep_save exynos_core_save[] = {
struct exynos_pm_data {
const struct exynos_wkup_irq *wkup_irq;
unsigned int wake_disable_mask;
unsigned int *release_ret_regs;
@@ -77,7 +75,7 @@ struct exynos_pm_data {
int (*cpu_suspend)(unsigned long);
};
static const struct exynos_pm_data *pm_data;
static int exynos5420_cpu_state;
static unsigned int exynos_pmu_spare3;
@@ -106,7 +104,7 @@ static const struct exynos_wkup_irq exynos5250_wkup_irq[] = {
{ /* sentinel */ },
};
unsigned int exynos_release_ret_regs[] = { static unsigned int exynos_release_ret_regs[] = {
S5P_PAD_RET_MAUDIO_OPTION, S5P_PAD_RET_MAUDIO_OPTION,
S5P_PAD_RET_GPIO_OPTION, S5P_PAD_RET_GPIO_OPTION,
S5P_PAD_RET_UART_OPTION, S5P_PAD_RET_UART_OPTION,
...@@ -117,7 +115,7 @@ unsigned int exynos_release_ret_regs[] = { ...@@ -117,7 +115,7 @@ unsigned int exynos_release_ret_regs[] = {
REG_TABLE_END, REG_TABLE_END,
}; };
unsigned int exynos3250_release_ret_regs[] = { static unsigned int exynos3250_release_ret_regs[] = {
S5P_PAD_RET_MAUDIO_OPTION, S5P_PAD_RET_MAUDIO_OPTION,
S5P_PAD_RET_GPIO_OPTION, S5P_PAD_RET_GPIO_OPTION,
S5P_PAD_RET_UART_OPTION, S5P_PAD_RET_UART_OPTION,
...@@ -130,7 +128,7 @@ unsigned int exynos3250_release_ret_regs[] = { ...@@ -130,7 +128,7 @@ unsigned int exynos3250_release_ret_regs[] = {
REG_TABLE_END, REG_TABLE_END,
}; };
unsigned int exynos5420_release_ret_regs[] = { static unsigned int exynos5420_release_ret_regs[] = {
EXYNOS_PAD_RET_DRAM_OPTION, EXYNOS_PAD_RET_DRAM_OPTION,
EXYNOS_PAD_RET_MAUDIO_OPTION, EXYNOS_PAD_RET_MAUDIO_OPTION,
EXYNOS_PAD_RET_JTAG_OPTION, EXYNOS_PAD_RET_JTAG_OPTION,
...@@ -349,10 +347,6 @@ static void exynos_pm_prepare(void) ...@@ -349,10 +347,6 @@ static void exynos_pm_prepare(void)
s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save)); s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save));
if (pm_data->extra_save)
s3c_pm_do_save(pm_data->extra_save,
pm_data->num_extra_save);
exynos_pm_enter_sleep_mode(); exynos_pm_enter_sleep_mode();
/* ensure at least INFORM0 has the resume address */ /* ensure at least INFORM0 has the resume address */
...@@ -475,10 +469,6 @@ static void exynos_pm_resume(void) ...@@ -475,10 +469,6 @@ static void exynos_pm_resume(void)
/* For release retention */ /* For release retention */
exynos_pm_release_retention(); exynos_pm_release_retention();
if (pm_data->extra_save)
s3c_pm_do_restore_core(pm_data->extra_save,
pm_data->num_extra_save);
s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save)); s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save));
if (cpuid == ARM_CPU_PART_CORTEX_A9) if (cpuid == ARM_CPU_PART_CORTEX_A9)
...@@ -685,7 +675,7 @@ static const struct exynos_pm_data exynos5250_pm_data = { ...@@ -685,7 +675,7 @@ static const struct exynos_pm_data exynos5250_pm_data = {
.cpu_suspend = exynos_cpu_suspend, .cpu_suspend = exynos_cpu_suspend,
}; };
static struct exynos_pm_data exynos5420_pm_data = { static const struct exynos_pm_data exynos5420_pm_data = {
.wkup_irq = exynos5250_wkup_irq, .wkup_irq = exynos5250_wkup_irq,
.wake_disable_mask = (0x7F << 7) | (0x1F << 1), .wake_disable_mask = (0x7F << 7) | (0x1F << 1),
.release_ret_regs = exynos5420_release_ret_regs, .release_ret_regs = exynos5420_release_ret_regs,
...@@ -736,7 +726,7 @@ void __init exynos_pm_init(void) ...@@ -736,7 +726,7 @@ void __init exynos_pm_init(void)
if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL)))
pr_warn("Outdated DT detected, suspend/resume will NOT work\n"); pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
pm_data = (struct exynos_pm_data *) match->data; pm_data = (const struct exynos_pm_data *) match->data;
/* All wakeup disable */ /* All wakeup disable */
tmp = pmu_raw_readl(S5P_WAKEUP_MASK); tmp = pmu_raw_readl(S5P_WAKEUP_MASK);
......
...@@ -21,6 +21,7 @@ config MXC_AVIC ...@@ -21,6 +21,7 @@ config MXC_AVIC
config MXC_DEBUG_BOARD config MXC_DEBUG_BOARD
bool "Enable MXC debug board(for 3-stack)" bool "Enable MXC debug board(for 3-stack)"
depends on MACH_MX27_3DS || MACH_MX31_3DS || MACH_MX35_3DS
help help
The debug board is an integral part of the MXC 3-stack(PDK) The debug board is an integral part of the MXC 3-stack(PDK)
platforms; it can be attached to or removed from the peripheral platforms; it can be attached to or removed from the peripheral

...@@ -50,6 +51,7 @@ config HAVE_IMX_ANATOP ...@@ -50,6 +51,7 @@ config HAVE_IMX_ANATOP
config HAVE_IMX_GPC config HAVE_IMX_GPC
bool bool
select PM_GENERIC_DOMAINS if PM
config HAVE_IMX_MMDC config HAVE_IMX_MMDC
bool bool
...@@ -586,6 +588,7 @@ config SOC_VF610 ...@@ -586,6 +588,7 @@ config SOC_VF610
select ARM_GIC select ARM_GIC
select PINCTRL_VF610 select PINCTRL_VF610
select PL310_ERRATA_769419 if CACHE_L2X0 select PL310_ERRATA_769419 if CACHE_L2X0
select SMP_ON_UP if SMP
help help
This enables support for Freescale Vybrid VF610 processor. This enables support for Freescale Vybrid VF610 processor.
......
...@@ -119,6 +119,7 @@ static unsigned int share_count_asrc; ...@@ -119,6 +119,7 @@ static unsigned int share_count_asrc;
static unsigned int share_count_ssi1; static unsigned int share_count_ssi1;
static unsigned int share_count_ssi2; static unsigned int share_count_ssi2;
static unsigned int share_count_ssi3; static unsigned int share_count_ssi3;
static unsigned int share_count_mipi_core_cfg;
static void __init imx6q_clocks_init(struct device_node *ccm_node) static void __init imx6q_clocks_init(struct device_node *ccm_node)
{ {
...@@ -246,6 +247,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node) ...@@ -246,6 +247,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
clk[IMX6QDL_CLK_PLL3_60M] = imx_clk_fixed_factor("pll3_60m", "pll3_usb_otg", 1, 8); clk[IMX6QDL_CLK_PLL3_60M] = imx_clk_fixed_factor("pll3_60m", "pll3_usb_otg", 1, 8);
clk[IMX6QDL_CLK_TWD] = imx_clk_fixed_factor("twd", "arm", 1, 2); clk[IMX6QDL_CLK_TWD] = imx_clk_fixed_factor("twd", "arm", 1, 2);
clk[IMX6QDL_CLK_GPT_3M] = imx_clk_fixed_factor("gpt_3m", "osc", 1, 8); clk[IMX6QDL_CLK_GPT_3M] = imx_clk_fixed_factor("gpt_3m", "osc", 1, 8);
clk[IMX6QDL_CLK_VIDEO_27M] = imx_clk_fixed_factor("video_27m", "pll3_pfd1_540m", 1, 20);
if (cpu_is_imx6dl()) { if (cpu_is_imx6dl()) {
clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_fixed_factor("gpu2d_axi", "mmdc_ch0_axi_podf", 1, 1); clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_fixed_factor("gpu2d_axi", "mmdc_ch0_axi_podf", 1, 1);
clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_fixed_factor("gpu3d_axi", "mmdc_ch0_axi_podf", 1, 1); clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_fixed_factor("gpu3d_axi", "mmdc_ch0_axi_podf", 1, 1);
...@@ -400,7 +402,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node) ...@@ -400,7 +402,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
clk[IMX6QDL_CLK_GPU2D_CORE] = imx_clk_gate2("gpu2d_core", "gpu2d_core_podf", base + 0x6c, 24); clk[IMX6QDL_CLK_GPU2D_CORE] = imx_clk_gate2("gpu2d_core", "gpu2d_core_podf", base + 0x6c, 24);
clk[IMX6QDL_CLK_GPU3D_CORE] = imx_clk_gate2("gpu3d_core", "gpu3d_core_podf", base + 0x6c, 26); clk[IMX6QDL_CLK_GPU3D_CORE] = imx_clk_gate2("gpu3d_core", "gpu3d_core_podf", base + 0x6c, 26);
clk[IMX6QDL_CLK_HDMI_IAHB] = imx_clk_gate2("hdmi_iahb", "ahb", base + 0x70, 0); clk[IMX6QDL_CLK_HDMI_IAHB] = imx_clk_gate2("hdmi_iahb", "ahb", base + 0x70, 0);
clk[IMX6QDL_CLK_HDMI_ISFR] = imx_clk_gate2("hdmi_isfr", "pll3_pfd1_540m", base + 0x70, 4); clk[IMX6QDL_CLK_HDMI_ISFR] = imx_clk_gate2("hdmi_isfr", "video_27m", base + 0x70, 4);
clk[IMX6QDL_CLK_I2C1] = imx_clk_gate2("i2c1", "ipg_per", base + 0x70, 6); clk[IMX6QDL_CLK_I2C1] = imx_clk_gate2("i2c1", "ipg_per", base + 0x70, 6);
clk[IMX6QDL_CLK_I2C2] = imx_clk_gate2("i2c2", "ipg_per", base + 0x70, 8); clk[IMX6QDL_CLK_I2C2] = imx_clk_gate2("i2c2", "ipg_per", base + 0x70, 8);
clk[IMX6QDL_CLK_I2C3] = imx_clk_gate2("i2c3", "ipg_per", base + 0x70, 10); clk[IMX6QDL_CLK_I2C3] = imx_clk_gate2("i2c3", "ipg_per", base + 0x70, 10);
...@@ -415,7 +417,9 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node) ...@@ -415,7 +417,9 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
clk[IMX6QDL_CLK_LDB_DI0] = imx_clk_gate2("ldb_di0", "ldb_di0_podf", base + 0x74, 12); clk[IMX6QDL_CLK_LDB_DI0] = imx_clk_gate2("ldb_di0", "ldb_di0_podf", base + 0x74, 12);
clk[IMX6QDL_CLK_LDB_DI1] = imx_clk_gate2("ldb_di1", "ldb_di1_podf", base + 0x74, 14); clk[IMX6QDL_CLK_LDB_DI1] = imx_clk_gate2("ldb_di1", "ldb_di1_podf", base + 0x74, 14);
clk[IMX6QDL_CLK_IPU2_DI1] = imx_clk_gate2("ipu2_di1", "ipu2_di1_sel", base + 0x74, 10); clk[IMX6QDL_CLK_IPU2_DI1] = imx_clk_gate2("ipu2_di1", "ipu2_di1_sel", base + 0x74, 10);
clk[IMX6QDL_CLK_HSI_TX] = imx_clk_gate2("hsi_tx", "hsi_tx_podf", base + 0x74, 16); clk[IMX6QDL_CLK_HSI_TX] = imx_clk_gate2_shared("hsi_tx", "hsi_tx_podf", base + 0x74, 16, &share_count_mipi_core_cfg);
clk[IMX6QDL_CLK_MIPI_CORE_CFG] = imx_clk_gate2_shared("mipi_core_cfg", "video_27m", base + 0x74, 16, &share_count_mipi_core_cfg);
clk[IMX6QDL_CLK_MIPI_IPG] = imx_clk_gate2_shared("mipi_ipg", "ipg", base + 0x74, 16, &share_count_mipi_core_cfg);
if (cpu_is_imx6dl()) if (cpu_is_imx6dl())
/* /*
* The multiplexer and divider of the imx6q clock gpu2d get * The multiplexer and divider of the imx6q clock gpu2d get
......
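The hsi_tx, mipi_core_cfg and mipi_ipg gates above share a single enable field (shift 16 at CCM offset 0x74), refcounted through share_count_mipi_core_cfg: imx_clk_gate2_shared() only touches the hardware bit when the shared count crosses zero. A stand-alone toy model of that behaviour (user-space C with hypothetical names, not kernel code):

#include <stdio.h>

static unsigned int share_count;	/* mirrors share_count_mipi_core_cfg */
static int gate_open;			/* stands in for the CCGR bit field */

static void shared_gate_enable(const char *clk)
{
	if (share_count++ == 0)
		gate_open = 1;		/* first user ungates the clock */
	printf("enable  %-14s count=%u gate=%d\n", clk, share_count, gate_open);
}

static void shared_gate_disable(const char *clk)
{
	if (--share_count == 0)
		gate_open = 0;		/* last user gates the clock again */
	printf("disable %-14s count=%u gate=%d\n", clk, share_count, gate_open);
}

int main(void)
{
	shared_gate_enable("hsi_tx");
	shared_gate_enable("mipi_core_cfg");
	shared_gate_disable("hsi_tx");	/* gate stays open: mipi_core_cfg still on */
	shared_gate_disable("mipi_core_cfg");
	return 0;
}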
...@@ -10,15 +10,25 @@ ...@@ -10,15 +10,25 @@
* http://www.gnu.org/copyleft/gpl.html * http://www.gnu.org/copyleft/gpl.html
*/ */
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/irq.h> #include <linux/irq.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_address.h> #include <linux/of_address.h>
#include <linux/of_irq.h> #include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/regulator/consumer.h>
#include <linux/irqchip/arm-gic.h> #include <linux/irqchip/arm-gic.h>
#include "common.h" #include "common.h"
#include "hardware.h"
#define GPC_CNTR 0x000
#define GPC_IMR1 0x008 #define GPC_IMR1 0x008
#define GPC_PGC_GPU_PDN 0x260
#define GPC_PGC_GPU_PUPSCR 0x264
#define GPC_PGC_GPU_PDNSCR 0x268
#define GPC_PGC_CPU_PDN 0x2a0 #define GPC_PGC_CPU_PDN 0x2a0
#define GPC_PGC_CPU_PUPSCR 0x2a4 #define GPC_PGC_CPU_PUPSCR 0x2a4
#define GPC_PGC_CPU_PDNSCR 0x2a8 #define GPC_PGC_CPU_PDNSCR 0x2a8
...@@ -27,6 +37,18 @@ ...@@ -27,6 +37,18 @@
#define IMR_NUM 4 #define IMR_NUM 4
#define GPU_VPU_PUP_REQ BIT(1)
#define GPU_VPU_PDN_REQ BIT(0)
#define GPC_CLK_MAX 6
struct pu_domain {
struct generic_pm_domain base;
struct regulator *reg;
struct clk *clk[GPC_CLK_MAX];
int num_clks;
};
static void __iomem *gpc_base; static void __iomem *gpc_base;
static u32 gpc_wake_irqs[IMR_NUM]; static u32 gpc_wake_irqs[IMR_NUM];
static u32 gpc_saved_imrs[IMR_NUM]; static u32 gpc_saved_imrs[IMR_NUM];
...@@ -170,3 +192,194 @@ void __init imx_gpc_init(void) ...@@ -170,3 +192,194 @@ void __init imx_gpc_init(void)
gic_arch_extn.irq_unmask = imx_gpc_irq_unmask; gic_arch_extn.irq_unmask = imx_gpc_irq_unmask;
gic_arch_extn.irq_set_wake = imx_gpc_irq_set_wake; gic_arch_extn.irq_set_wake = imx_gpc_irq_set_wake;
} }
#ifdef CONFIG_PM_GENERIC_DOMAINS
static void _imx6q_pm_pu_power_off(struct generic_pm_domain *genpd)
{
int iso, iso2sw;
u32 val;
/* Read ISO and ISO2SW power down delays */
val = readl_relaxed(gpc_base + GPC_PGC_GPU_PDNSCR);
iso = val & 0x3f;
iso2sw = (val >> 8) & 0x3f;
/* Gate off the PU domain while the GPU/VPU are powered down */
writel_relaxed(0x1, gpc_base + GPC_PGC_GPU_PDN);
/* Request GPC to power down GPU/VPU */
val = readl_relaxed(gpc_base + GPC_CNTR);
val |= GPU_VPU_PDN_REQ;
writel_relaxed(val, gpc_base + GPC_CNTR);
/* Wait ISO + ISO2SW IPG clock cycles */
ndelay((iso + iso2sw) * 1000 / 66);
}
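A worked example of the delay above, assuming the customary 66 MHz i.MX6 IPG clock (an assumption; the rate is not stated in this file): one IPG cycle is 1000/66 ns ≈ 15.2 ns, so ndelay((iso + iso2sw) * 1000 / 66) waits (iso + iso2sw) IPG cycles. With both 6-bit fields at their maximum of 0x3f, that is 126 * 15.2 ns ≈ 1.9 us. The power-up path below applies the same conversion to sw and sw2iso.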
static int imx6q_pm_pu_power_off(struct generic_pm_domain *genpd)
{
struct pu_domain *pu = container_of(genpd, struct pu_domain, base);
_imx6q_pm_pu_power_off(genpd);
if (pu->reg)
regulator_disable(pu->reg);
return 0;
}
static int imx6q_pm_pu_power_on(struct generic_pm_domain *genpd)
{
struct pu_domain *pu = container_of(genpd, struct pu_domain, base);
int i, ret, sw, sw2iso;
u32 val;
if (pu->reg)
ret = regulator_enable(pu->reg);
if (pu->reg && ret) {
pr_err("%s: failed to enable regulator: %d\n", __func__, ret);
return ret;
}
/* Enable reset clocks for all devices in the PU domain */
for (i = 0; i < pu->num_clks; i++)
clk_prepare_enable(pu->clk[i]);
/* Gate off the PU domain while the GPU/VPU are powered down */
writel_relaxed(0x1, gpc_base + GPC_PGC_GPU_PDN);
/* Read ISO and ISO2SW power down delays */
val = readl_relaxed(gpc_base + GPC_PGC_GPU_PUPSCR);
sw = val & 0x3f;
sw2iso = (val >> 8) & 0x3f;
/* Request GPC to power up GPU/VPU */
val = readl_relaxed(gpc_base + GPC_CNTR);
val |= GPU_VPU_PUP_REQ;
writel_relaxed(val, gpc_base + GPC_CNTR);
/* Wait ISO + ISO2SW IPG clock cycles */
ndelay((sw + sw2iso) * 1000 / 66);
/* Disable reset clocks for all devices in the PU domain */
for (i = 0; i < pu->num_clks; i++)
clk_disable_unprepare(pu->clk[i]);
return 0;
}
static struct generic_pm_domain imx6q_arm_domain = {
.name = "ARM",
};
static struct pu_domain imx6q_pu_domain = {
.base = {
.name = "PU",
.power_off = imx6q_pm_pu_power_off,
.power_on = imx6q_pm_pu_power_on,
.power_off_latency_ns = 25000,
.power_on_latency_ns = 2000000,
},
};
static struct generic_pm_domain imx6sl_display_domain = {
.name = "DISPLAY",
};
static struct generic_pm_domain *imx_gpc_domains[] = {
&imx6q_arm_domain,
&imx6q_pu_domain.base,
&imx6sl_display_domain,
};
static struct genpd_onecell_data imx_gpc_onecell_data = {
.domains = imx_gpc_domains,
.num_domains = ARRAY_SIZE(imx_gpc_domains),
};
static int imx_gpc_genpd_init(struct device *dev, struct regulator *pu_reg)
{
struct clk *clk;
bool is_off;
int i;
imx6q_pu_domain.reg = pu_reg;
for (i = 0; ; i++) {
clk = of_clk_get(dev->of_node, i);
if (IS_ERR(clk))
break;
if (i >= GPC_CLK_MAX) {
dev_err(dev, "more than %d clocks\n", GPC_CLK_MAX);
goto clk_err;
}
imx6q_pu_domain.clk[i] = clk;
}
imx6q_pu_domain.num_clks = i;
is_off = IS_ENABLED(CONFIG_PM);
if (is_off) {
_imx6q_pm_pu_power_off(&imx6q_pu_domain.base);
} else {
/*
* Enable power if compiled without CONFIG_PM in case the
* bootloader disabled it.
*/
imx6q_pm_pu_power_on(&imx6q_pu_domain.base);
}
pm_genpd_init(&imx6q_pu_domain.base, NULL, is_off);
return of_genpd_add_provider_onecell(dev->of_node,
&imx_gpc_onecell_data);
clk_err:
while (i--)
clk_put(imx6q_pu_domain.clk[i]);
return -EINVAL;
}
#else
static inline int imx_gpc_genpd_init(struct device *dev, struct regulator *reg)
{
return 0;
}
#endif /* CONFIG_PM_GENERIC_DOMAINS */
static int imx_gpc_probe(struct platform_device *pdev)
{
struct regulator *pu_reg;
int ret;
pu_reg = devm_regulator_get_optional(&pdev->dev, "pu");
if (PTR_ERR(pu_reg) == -ENODEV)
pu_reg = NULL;
if (IS_ERR(pu_reg)) {
ret = PTR_ERR(pu_reg);
dev_err(&pdev->dev, "failed to get pu regulator: %d\n", ret);
return ret;
}
return imx_gpc_genpd_init(&pdev->dev, pu_reg);
}
static const struct of_device_id imx_gpc_dt_ids[] = {
{ .compatible = "fsl,imx6q-gpc" },
{ .compatible = "fsl,imx6sl-gpc" },
{ }
};
static struct platform_driver imx_gpc_driver = {
.driver = {
.name = "imx-gpc",
.owner = THIS_MODULE,
.of_match_table = imx_gpc_dt_ids,
},
.probe = imx_gpc_probe,
};
static int __init imx_pgc_init(void)
{
return platform_driver_register(&imx_gpc_driver);
}
subsys_initcall(imx_pgc_init);
menuconfig ARCH_MESON menuconfig ARCH_MESON
bool "Amlogic Meson SoCs" if ARCH_MULTI_V7 bool "Amlogic Meson SoCs" if ARCH_MULTI_V7
select ARCH_REQUIRE_GPIOLIB
select GENERIC_IRQ_CHIP select GENERIC_IRQ_CHIP
select ARM_GIC select ARM_GIC
select CACHE_L2X0 select CACHE_L2X0
select PINCTRL
select PINCTRL_MESON
if ARCH_MESON if ARCH_MESON
......
...@@ -64,6 +64,20 @@ config MACH_ARMADA_38X ...@@ -64,6 +64,20 @@ config MACH_ARMADA_38X
Say 'Y' here if you want your kernel to support boards based Say 'Y' here if you want your kernel to support boards based
on the Marvell Armada 380/385 SoC with device tree. on the Marvell Armada 380/385 SoC with device tree.
config MACH_ARMADA_39X
bool "Marvell Armada 39x boards" if ARCH_MULTI_V7
select ARM_GIC
select ARMADA_39X_CLK
select CACHE_L2X0
select HAVE_ARM_SCU
select HAVE_ARM_TWD if SMP
select HAVE_SMP
select MACH_MVEBU_V7
select PINCTRL_ARMADA_39X
help
Say 'Y' here if you want your kernel to support boards based
on the Marvell Armada 39x SoC with device tree.
config MACH_ARMADA_XP config MACH_ARMADA_XP
bool "Marvell Armada XP boards" if ARCH_MULTI_V7 bool "Marvell Armada XP boards" if ARCH_MULTI_V7
select ARMADA_XP_CLK select ARMADA_XP_CLK
......
...@@ -232,3 +232,17 @@ DT_MACHINE_START(ARMADA_38X_DT, "Marvell Armada 380/385 (Device Tree)") ...@@ -232,3 +232,17 @@ DT_MACHINE_START(ARMADA_38X_DT, "Marvell Armada 380/385 (Device Tree)")
.restart = mvebu_restart, .restart = mvebu_restart,
.dt_compat = armada_38x_dt_compat, .dt_compat = armada_38x_dt_compat,
MACHINE_END MACHINE_END
static const char * const armada_39x_dt_compat[] __initconst = {
"marvell,armada390",
"marvell,armada398",
NULL,
};
DT_MACHINE_START(ARMADA_39X_DT, "Marvell Armada 39x (Device Tree)")
.l2c_aux_val = 0,
.l2c_aux_mask = ~0,
.init_irq = mvebu_init_irq,
.restart = mvebu_restart,
.dt_compat = armada_39x_dt_compat,
MACHINE_END
...@@ -110,3 +110,5 @@ CPU_METHOD_OF_DECLARE(mvebu_armada_375_smp, "marvell,armada-375-smp", ...@@ -110,3 +110,5 @@ CPU_METHOD_OF_DECLARE(mvebu_armada_375_smp, "marvell,armada-375-smp",
&mvebu_cortex_a9_smp_ops); &mvebu_cortex_a9_smp_ops);
CPU_METHOD_OF_DECLARE(mvebu_armada_380_smp, "marvell,armada-380-smp", CPU_METHOD_OF_DECLARE(mvebu_armada_380_smp, "marvell,armada-380-smp",
&armada_38x_smp_ops); &armada_38x_smp_ops);
CPU_METHOD_OF_DECLARE(mvebu_armada_390_smp, "marvell,armada-390-smp",
&armada_38x_smp_ops);
...@@ -690,6 +690,9 @@ struct dev_pm_domain omap_device_pm_domain = { ...@@ -690,6 +690,9 @@ struct dev_pm_domain omap_device_pm_domain = {
USE_PLATFORM_PM_SLEEP_OPS USE_PLATFORM_PM_SLEEP_OPS
.suspend_noirq = _od_suspend_noirq, .suspend_noirq = _od_suspend_noirq,
.resume_noirq = _od_resume_noirq, .resume_noirq = _od_resume_noirq,
.freeze_noirq = _od_suspend_noirq,
.thaw_noirq = _od_resume_noirq,
.restore_noirq = _od_resume_noirq,
} }
}; };
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include "omap_hwmod_33xx_43xx_common_data.h" #include "omap_hwmod_33xx_43xx_common_data.h"
#include "prcm43xx.h" #include "prcm43xx.h"
#include "omap_hwmod_common_data.h" #include "omap_hwmod_common_data.h"
#include "hdq1w.h"
/* IP blocks */ /* IP blocks */
...@@ -516,6 +517,33 @@ static struct omap_hwmod am43xx_dss_rfbi_hwmod = { ...@@ -516,6 +517,33 @@ static struct omap_hwmod am43xx_dss_rfbi_hwmod = {
.parent_hwmod = &am43xx_dss_core_hwmod, .parent_hwmod = &am43xx_dss_core_hwmod,
}; };
/* HDQ1W */
static struct omap_hwmod_class_sysconfig am43xx_hdq1w_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0014,
.syss_offs = 0x0018,
.sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
.sysc_fields = &omap_hwmod_sysc_type1,
};
static struct omap_hwmod_class am43xx_hdq1w_hwmod_class = {
.name = "hdq1w",
.sysc = &am43xx_hdq1w_sysc,
.reset = &omap_hdq1w_reset,
};
static struct omap_hwmod am43xx_hdq1w_hwmod = {
.name = "hdq1w",
.class = &am43xx_hdq1w_hwmod_class,
.clkdm_name = "l4ls_clkdm",
.prcm = {
.omap4 = {
.clkctrl_offs = AM43XX_CM_PER_HDQ1W_CLKCTRL_OFFSET,
.modulemode = MODULEMODE_SWCTRL,
},
},
};
/* Interfaces */ /* Interfaces */
static struct omap_hwmod_ocp_if am43xx_l3_main__l4_hs = { static struct omap_hwmod_ocp_if am43xx_l3_main__l4_hs = {
.master = &am33xx_l3_main_hwmod, .master = &am33xx_l3_main_hwmod,
...@@ -790,6 +818,13 @@ static struct omap_hwmod_ocp_if am43xx_l4_ls__dss_rfbi = { ...@@ -790,6 +818,13 @@ static struct omap_hwmod_ocp_if am43xx_l4_ls__dss_rfbi = {
.user = OCP_USER_MPU | OCP_USER_SDMA, .user = OCP_USER_MPU | OCP_USER_SDMA,
}; };
static struct omap_hwmod_ocp_if am43xx_l4_ls__hdq1w = {
.master = &am33xx_l4_ls_hwmod,
.slave = &am43xx_hdq1w_hwmod,
.clk = "l4ls_gclk",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = { static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = {
&am33xx_l4_wkup__synctimer, &am33xx_l4_wkup__synctimer,
&am43xx_l4_ls__timer8, &am43xx_l4_ls__timer8,
...@@ -889,6 +924,7 @@ static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = { ...@@ -889,6 +924,7 @@ static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = {
&am43xx_l4_ls__dss, &am43xx_l4_ls__dss,
&am43xx_l4_ls__dss_dispc, &am43xx_l4_ls__dss_dispc,
&am43xx_l4_ls__dss_rfbi, &am43xx_l4_ls__dss_rfbi,
&am43xx_l4_ls__hdq1w,
NULL, NULL,
}; };
......
...@@ -1726,21 +1726,6 @@ static struct omap_hwmod_class dra7xx_timer_1ms_hwmod_class = { ...@@ -1726,21 +1726,6 @@ static struct omap_hwmod_class dra7xx_timer_1ms_hwmod_class = {
.sysc = &dra7xx_timer_1ms_sysc, .sysc = &dra7xx_timer_1ms_sysc,
}; };
static struct omap_hwmod_class_sysconfig dra7xx_timer_secure_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.sysc_flags = (SYSC_HAS_EMUFREE | SYSC_HAS_RESET_STATUS |
SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP),
.sysc_fields = &omap_hwmod_sysc_type2,
};
static struct omap_hwmod_class dra7xx_timer_secure_hwmod_class = {
.name = "timer",
.sysc = &dra7xx_timer_secure_sysc,
};
static struct omap_hwmod_class_sysconfig dra7xx_timer_sysc = { static struct omap_hwmod_class_sysconfig dra7xx_timer_sysc = {
.rev_offs = 0x0000, .rev_offs = 0x0000,
.sysc_offs = 0x0010, .sysc_offs = 0x0010,
...@@ -1804,7 +1789,7 @@ static struct omap_hwmod dra7xx_timer3_hwmod = { ...@@ -1804,7 +1789,7 @@ static struct omap_hwmod dra7xx_timer3_hwmod = {
/* timer4 */ /* timer4 */
static struct omap_hwmod dra7xx_timer4_hwmod = { static struct omap_hwmod dra7xx_timer4_hwmod = {
.name = "timer4", .name = "timer4",
.class = &dra7xx_timer_secure_hwmod_class, .class = &dra7xx_timer_hwmod_class,
.clkdm_name = "l4per_clkdm", .clkdm_name = "l4per_clkdm",
.main_clk = "timer4_gfclk_mux", .main_clk = "timer4_gfclk_mux",
.prcm = { .prcm = {
...@@ -1921,6 +1906,66 @@ static struct omap_hwmod dra7xx_timer11_hwmod = { ...@@ -1921,6 +1906,66 @@ static struct omap_hwmod dra7xx_timer11_hwmod = {
}, },
}; };
/* timer13 */
static struct omap_hwmod dra7xx_timer13_hwmod = {
.name = "timer13",
.class = &dra7xx_timer_hwmod_class,
.clkdm_name = "l4per3_clkdm",
.main_clk = "timer13_gfclk_mux",
.prcm = {
.omap4 = {
.clkctrl_offs = DRA7XX_CM_L4PER3_TIMER13_CLKCTRL_OFFSET,
.context_offs = DRA7XX_RM_L4PER3_TIMER13_CONTEXT_OFFSET,
.modulemode = MODULEMODE_SWCTRL,
},
},
};
/* timer14 */
static struct omap_hwmod dra7xx_timer14_hwmod = {
.name = "timer14",
.class = &dra7xx_timer_hwmod_class,
.clkdm_name = "l4per3_clkdm",
.main_clk = "timer14_gfclk_mux",
.prcm = {
.omap4 = {
.clkctrl_offs = DRA7XX_CM_L4PER3_TIMER14_CLKCTRL_OFFSET,
.context_offs = DRA7XX_RM_L4PER3_TIMER14_CONTEXT_OFFSET,
.modulemode = MODULEMODE_SWCTRL,
},
},
};
/* timer15 */
static struct omap_hwmod dra7xx_timer15_hwmod = {
.name = "timer15",
.class = &dra7xx_timer_hwmod_class,
.clkdm_name = "l4per3_clkdm",
.main_clk = "timer15_gfclk_mux",
.prcm = {
.omap4 = {
.clkctrl_offs = DRA7XX_CM_L4PER3_TIMER15_CLKCTRL_OFFSET,
.context_offs = DRA7XX_RM_L4PER3_TIMER15_CONTEXT_OFFSET,
.modulemode = MODULEMODE_SWCTRL,
},
},
};
/* timer16 */
static struct omap_hwmod dra7xx_timer16_hwmod = {
.name = "timer16",
.class = &dra7xx_timer_hwmod_class,
.clkdm_name = "l4per3_clkdm",
.main_clk = "timer16_gfclk_mux",
.prcm = {
.omap4 = {
.clkctrl_offs = DRA7XX_CM_L4PER3_TIMER16_CLKCTRL_OFFSET,
.context_offs = DRA7XX_RM_L4PER3_TIMER16_CONTEXT_OFFSET,
.modulemode = MODULEMODE_SWCTRL,
},
},
};
/* /*
* 'uart' class * 'uart' class
* *
...@@ -3059,6 +3104,38 @@ static struct omap_hwmod_ocp_if dra7xx_l4_per1__timer11 = { ...@@ -3059,6 +3104,38 @@ static struct omap_hwmod_ocp_if dra7xx_l4_per1__timer11 = {
.user = OCP_USER_MPU | OCP_USER_SDMA, .user = OCP_USER_MPU | OCP_USER_SDMA,
}; };
/* l4_per3 -> timer13 */
static struct omap_hwmod_ocp_if dra7xx_l4_per3__timer13 = {
.master = &dra7xx_l4_per3_hwmod,
.slave = &dra7xx_timer13_hwmod,
.clk = "l3_iclk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
/* l4_per3 -> timer14 */
static struct omap_hwmod_ocp_if dra7xx_l4_per3__timer14 = {
.master = &dra7xx_l4_per3_hwmod,
.slave = &dra7xx_timer14_hwmod,
.clk = "l3_iclk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
/* l4_per3 -> timer15 */
static struct omap_hwmod_ocp_if dra7xx_l4_per3__timer15 = {
.master = &dra7xx_l4_per3_hwmod,
.slave = &dra7xx_timer15_hwmod,
.clk = "l3_iclk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
/* l4_per3 -> timer16 */
static struct omap_hwmod_ocp_if dra7xx_l4_per3__timer16 = {
.master = &dra7xx_l4_per3_hwmod,
.slave = &dra7xx_timer16_hwmod,
.clk = "l3_iclk_div",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
/* l4_per1 -> uart1 */ /* l4_per1 -> uart1 */
static struct omap_hwmod_ocp_if dra7xx_l4_per1__uart1 = { static struct omap_hwmod_ocp_if dra7xx_l4_per1__uart1 = {
.master = &dra7xx_l4_per1_hwmod, .master = &dra7xx_l4_per1_hwmod,
...@@ -3295,6 +3372,10 @@ static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = { ...@@ -3295,6 +3372,10 @@ static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l4_per1__timer9, &dra7xx_l4_per1__timer9,
&dra7xx_l4_per1__timer10, &dra7xx_l4_per1__timer10,
&dra7xx_l4_per1__timer11, &dra7xx_l4_per1__timer11,
&dra7xx_l4_per3__timer13,
&dra7xx_l4_per3__timer14,
&dra7xx_l4_per3__timer15,
&dra7xx_l4_per3__timer16,
&dra7xx_l4_per1__uart1, &dra7xx_l4_per1__uart1,
&dra7xx_l4_per1__uart2, &dra7xx_l4_per1__uart2,
&dra7xx_l4_per1__uart3, &dra7xx_l4_per1__uart3,
......
...@@ -143,5 +143,6 @@ ...@@ -143,5 +143,6 @@
#define AM43XX_CM_PER_USB_OTG_SS1_CLKCTRL_OFFSET 0x0268 #define AM43XX_CM_PER_USB_OTG_SS1_CLKCTRL_OFFSET 0x0268
#define AM43XX_CM_PER_USBPHYOCP2SCP1_CLKCTRL_OFFSET 0x05C0 #define AM43XX_CM_PER_USBPHYOCP2SCP1_CLKCTRL_OFFSET 0x05C0
#define AM43XX_CM_PER_DSS_CLKCTRL_OFFSET 0x0a20 #define AM43XX_CM_PER_DSS_CLKCTRL_OFFSET 0x0a20
#define AM43XX_CM_PER_HDQ1W_CLKCTRL_OFFSET 0x04a0
#endif #endif
...@@ -55,7 +55,7 @@ static int pmu_power_domain_is_on(int pd) ...@@ -55,7 +55,7 @@ static int pmu_power_domain_is_on(int pd)
return !(val & BIT(pd)); return !(val & BIT(pd));
} }
struct reset_control *rockchip_get_core_reset(int cpu) static struct reset_control *rockchip_get_core_reset(int cpu)
{ {
struct device *dev = get_cpu_device(cpu); struct device *dev = get_cpu_device(cpu);
struct device_node *np; struct device_node *np;
...@@ -201,7 +201,7 @@ static int __init rockchip_smp_prepare_sram(struct device_node *node) ...@@ -201,7 +201,7 @@ static int __init rockchip_smp_prepare_sram(struct device_node *node)
return 0; return 0;
} }
static struct regmap_config rockchip_pmu_regmap_config = { static const struct regmap_config rockchip_pmu_regmap_config = {
.reg_bits = 32, .reg_bits = 32,
.val_bits = 32, .val_bits = 32,
.reg_stride = 4, .reg_stride = 4,
......
...@@ -75,9 +75,13 @@ static void rk3288_slp_mode_set(int level) ...@@ -75,9 +75,13 @@ static void rk3288_slp_mode_set(int level)
regmap_read(pmu_regmap, RK3288_PMU_PWRMODE_CON, regmap_read(pmu_regmap, RK3288_PMU_PWRMODE_CON,
&rk3288_pmu_pwr_mode_con); &rk3288_pmu_pwr_mode_con);
/* set bit 8 so that system will resume to FAST_BOOT_ADDR */ /*
* SGRF_FAST_BOOT_EN - system to boot from FAST_BOOT_ADDR
* PCLK_WDT_GATE - disable WDT during suspend.
*/
regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0, regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0,
SGRF_FAST_BOOT_EN | SGRF_FAST_BOOT_EN_WRITE); SGRF_PCLK_WDT_GATE | SGRF_FAST_BOOT_EN
| SGRF_PCLK_WDT_GATE_WRITE | SGRF_FAST_BOOT_EN_WRITE);
/* booting address of resuming system is from this register value */ /* booting address of resuming system is from this register value */
regmap_write(sgrf_regmap, RK3288_SGRF_FAST_BOOT_ADDR, regmap_write(sgrf_regmap, RK3288_SGRF_FAST_BOOT_ADDR,
...@@ -122,7 +126,8 @@ static void rk3288_slp_mode_set_resume(void) ...@@ -122,7 +126,8 @@ static void rk3288_slp_mode_set_resume(void)
rk3288_pmu_pwr_mode_con); rk3288_pmu_pwr_mode_con);
regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0, regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0,
rk3288_sgrf_soc_con0 | SGRF_FAST_BOOT_EN_WRITE); rk3288_sgrf_soc_con0 | SGRF_PCLK_WDT_GATE_WRITE
| SGRF_FAST_BOOT_EN_WRITE);
} }
static int rockchip_lpmode_enter(unsigned long arg) static int rockchip_lpmode_enter(unsigned long arg)
...@@ -209,6 +214,9 @@ static int rk3288_suspend_init(struct device_node *np) ...@@ -209,6 +214,9 @@ static int rk3288_suspend_init(struct device_node *np)
memcpy(rk3288_bootram_base, rockchip_slp_cpu_resume, memcpy(rk3288_bootram_base, rockchip_slp_cpu_resume,
rk3288_bootram_sz); rk3288_bootram_sz);
regmap_write(pmu_regmap, RK3288_PMU_OSC_CNT, OSC_STABL_CNT_THRESH);
regmap_write(pmu_regmap, RK3288_PMU_STABL_CNT, PMU_STABL_CNT_THRESH);
return 0; return 0;
} }
......
...@@ -50,6 +50,8 @@ static inline void rockchip_suspend_init(void) ...@@ -50,6 +50,8 @@ static inline void rockchip_suspend_init(void)
#define RK3288_SGRF_SOC_CON0 (0x0000) #define RK3288_SGRF_SOC_CON0 (0x0000)
#define RK3288_SGRF_FAST_BOOT_ADDR (0x0120) #define RK3288_SGRF_FAST_BOOT_ADDR (0x0120)
#define SGRF_PCLK_WDT_GATE BIT(6)
#define SGRF_PCLK_WDT_GATE_WRITE BIT(22)
#define SGRF_FAST_BOOT_EN BIT(8) #define SGRF_FAST_BOOT_EN BIT(8)
#define SGRF_FAST_BOOT_EN_WRITE BIT(24) #define SGRF_FAST_BOOT_EN_WRITE BIT(24)
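These paired defines follow the usual Rockchip GRF/SGRF convention: bit n carries the value and bit n+16 is its write-enable, so one regmap_write() can update selected bits without a read-modify-write. A small illustrative helper (hypothetical name, not part of this commit):

/* Set or clear one SGRF bit: the value bit plus its write-enable at n+16. */
static inline u32 sgrf_wmsk(u32 bit, bool set)
{
	return (set ? bit : 0) | (bit << 16);
}

For example, sgrf_wmsk(SGRF_PCLK_WDT_GATE, true) equals SGRF_PCLK_WDT_GATE | SGRF_PCLK_WDT_GATE_WRITE, matching the write in rk3288_slp_mode_set() above.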
...@@ -63,6 +65,10 @@ static inline void rockchip_suspend_init(void) ...@@ -63,6 +65,10 @@ static inline void rockchip_suspend_init(void)
/* PMU_WAKEUP_CFG1 bits */ /* PMU_WAKEUP_CFG1 bits */
#define PMU_ARMINT_WAKEUP_EN BIT(0) #define PMU_ARMINT_WAKEUP_EN BIT(0)
/* wait 30 ms for the OSC to stabilize and 30 ms for the PMIC to stabilize */
#define OSC_STABL_CNT_THRESH (32 * 30)
#define PMU_STABL_CNT_THRESH (32 * 30)
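Both thresholds count ticks of the 32 kHz always-on clock (inferred from the comment above, not stated explicitly): 32 ticks/ms * 30 ms = 960 = 32 * 30.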
enum rk3288_pwr_mode_con { enum rk3288_pwr_mode_con {
PMU_PWR_MODE_EN = 0, PMU_PWR_MODE_EN = 0,
PMU_CLK_CORE_SRC_GATE_EN, PMU_CLK_CORE_SRC_GATE_EN,
......
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <mach/gpio-samsung.h> #include <mach/gpio-samsung.h>
#define GLENFARCLAS_PMIC_IRQ_BASE IRQ_BOARD_START #define GLENFARCLAS_PMIC_IRQ_BASE IRQ_BOARD_START
#define BANFF_PMIC_IRQ_BASE (IRQ_BOARD_START + 64)
#define PCA935X_GPIO_BASE GPIO_BOARD_START #define PCA935X_GPIO_BASE GPIO_BOARD_START
#define CODEC_GPIO_BASE (GPIO_BOARD_START + 8) #define CODEC_GPIO_BASE (GPIO_BOARD_START + 8)
......
...@@ -554,6 +554,7 @@ static struct wm831x_touch_pdata touch_pdata = { ...@@ -554,6 +554,7 @@ static struct wm831x_touch_pdata touch_pdata = {
static struct wm831x_pdata crag_pmic_pdata = { static struct wm831x_pdata crag_pmic_pdata = {
.wm831x_num = 1, .wm831x_num = 1,
.irq_base = BANFF_PMIC_IRQ_BASE,
.gpio_base = BANFF_PMIC_GPIO_BASE, .gpio_base = BANFF_PMIC_GPIO_BASE,
.soft_shutdown = true, .soft_shutdown = true,
......
...@@ -69,10 +69,12 @@ config ARCH_R8A7779 ...@@ -69,10 +69,12 @@ config ARCH_R8A7779
config ARCH_R8A7790 config ARCH_R8A7790
bool "R-Car H2 (R8A77900)" bool "R-Car H2 (R8A77900)"
select ARCH_RCAR_GEN2 select ARCH_RCAR_GEN2
select I2C
config ARCH_R8A7791 config ARCH_R8A7791
bool "R-Car M2-W (R8A77910)" bool "R-Car M2-W (R8A77910)"
select ARCH_RCAR_GEN2 select ARCH_RCAR_GEN2
select I2C
config ARCH_R8A7794 config ARCH_R8A7794
bool "R-Car E2 (R8A77940)" bool "R-Car E2 (R8A77940)"
......
...@@ -35,6 +35,8 @@ cpu-y := platsmp.o headsmp.o ...@@ -35,6 +35,8 @@ cpu-y := platsmp.o headsmp.o
# Shared SoC family objects # Shared SoC family objects
obj-$(CONFIG_ARCH_RCAR_GEN2) += setup-rcar-gen2.o platsmp-apmu.o $(cpu-y) obj-$(CONFIG_ARCH_RCAR_GEN2) += setup-rcar-gen2.o platsmp-apmu.o $(cpu-y)
CFLAGS_setup-rcar-gen2.o += -march=armv7-a CFLAGS_setup-rcar-gen2.o += -march=armv7-a
obj-$(CONFIG_ARCH_R8A7790) += regulator-quirk-rcar-gen2.o
obj-$(CONFIG_ARCH_R8A7791) += regulator-quirk-rcar-gen2.o
# SMP objects # SMP objects
smp-y := $(cpu-y) smp-y := $(cpu-y)
......
/*
* R-Car Generation 2 da9063/da9210 regulator quirk
*
* The r8a7790/lager and r8a7791/koelsch development boards have da9063 and
* da9210 regulators. Both regulators have their interrupt request lines tied
* to the same interrupt pin (IRQ2) on the SoC.
*
* After cold boot or da9063-induced restart, both the da9063 and da9210 seem
* to assert their interrupt request lines. Hence as soon as one driver
* requests this irq, it gets stuck in an interrupt storm, as it only manages
* to deassert its own interrupt request line, and the other driver hasn't
* installed an interrupt handler yet.
*
* To handle this, install a quirk that masks the interrupts in both the
* da9063 and da9210. This quirk has to run after the i2c master driver has
* been initialized, but before the i2c slave drivers are initialized.
*
* Copyright (C) 2015 Glider bvba
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/device.h>
#include <linux/i2c.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/mfd/da9063/registers.h>
#define IRQC_BASE 0xe61c0000
#define IRQC_MONITOR 0x104 /* IRQn Signal Level Monitor Register */
#define REGULATOR_IRQ_MASK BIT(2) /* IRQ2, active low */
static void __iomem *irqc;
static const u8 da9063_mask_regs[] = {
DA9063_REG_IRQ_MASK_A,
DA9063_REG_IRQ_MASK_B,
DA9063_REG_IRQ_MASK_C,
DA9063_REG_IRQ_MASK_D,
};
/* DA9210 System Control and Event Registers */
#define DA9210_REG_MASK_A 0x54
#define DA9210_REG_MASK_B 0x55
static const u8 da9210_mask_regs[] = {
DA9210_REG_MASK_A,
DA9210_REG_MASK_B,
};
static void da9xxx_mask_irqs(struct i2c_client *client, const u8 regs[],
unsigned int nregs)
{
unsigned int i;
dev_info(&client->dev, "Masking %s interrupt sources\n", client->name);
for (i = 0; i < nregs; i++) {
int error = i2c_smbus_write_byte_data(client, regs[i], ~0);
if (error) {
dev_err(&client->dev, "i2c error %d\n", error);
return;
}
}
}
static int regulator_quirk_notify(struct notifier_block *nb,
unsigned long action, void *data)
{
struct device *dev = data;
struct i2c_client *client;
u32 mon;
mon = ioread32(irqc + IRQC_MONITOR);
dev_dbg(dev, "%s: %ld, IRQC_MONITOR = 0x%x\n", __func__, action, mon);
if (mon & REGULATOR_IRQ_MASK)
goto remove;
if (action != BUS_NOTIFY_ADD_DEVICE || dev->type == &i2c_adapter_type)
return 0;
client = to_i2c_client(dev);
dev_dbg(dev, "Detected %s\n", client->name);
if (client->addr == 0x58 && !strcmp(client->name, "da9063"))
da9xxx_mask_irqs(client, da9063_mask_regs,
ARRAY_SIZE(da9063_mask_regs));
else if (client->addr == 0x68 && !strcmp(client->name, "da9210"))
da9xxx_mask_irqs(client, da9210_mask_regs,
ARRAY_SIZE(da9210_mask_regs));
mon = ioread32(irqc + IRQC_MONITOR);
if (mon & REGULATOR_IRQ_MASK)
goto remove;
return 0;
remove:
dev_info(dev, "IRQ2 is not asserted, removing quirk\n");
bus_unregister_notifier(&i2c_bus_type, nb);
iounmap(irqc);
return 0;
}
static struct notifier_block regulator_quirk_nb = {
.notifier_call = regulator_quirk_notify
};
static int __init rcar_gen2_regulator_quirk(void)
{
u32 mon;
if (!of_machine_is_compatible("renesas,koelsch") &&
!of_machine_is_compatible("renesas,lager"))
return -ENODEV;
irqc = ioremap(IRQC_BASE, PAGE_SIZE);
if (!irqc)
return -ENOMEM;
mon = ioread32(irqc + IRQC_MONITOR);
if (mon & REGULATOR_IRQ_MASK) {
pr_debug("%s: IRQ2 is not asserted, not installing quirk\n",
__func__);
iounmap(irqc);
return 0;
}
pr_info("IRQ2 is asserted, installing da9063/da9210 regulator quirk\n");
bus_register_notifier(&i2c_bus_type, &regulator_quirk_nb);
return 0;
}
arch_initcall(rcar_gen2_regulator_quirk);
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include <linux/dma-contiguous.h> #include <linux/dma-contiguous.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/memblock.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_fdt.h> #include <linux/of_fdt.h>
#include <asm/mach/arch.h> #include <asm/mach/arch.h>
...@@ -195,7 +196,7 @@ void __init rcar_gen2_reserve(void) ...@@ -195,7 +196,7 @@ void __init rcar_gen2_reserve(void)
of_scan_flat_dt(rcar_gen2_scan_mem, &mrc); of_scan_flat_dt(rcar_gen2_scan_mem, &mrc);
#ifdef CONFIG_DMA_CMA #ifdef CONFIG_DMA_CMA
if (mrc.size) if (mrc.size && memblock_is_region_memory(mrc.base, mrc.size))
dma_contiguous_reserve_area(mrc.size, mrc.base, 0, dma_contiguous_reserve_area(mrc.size, mrc.base, 0,
&rcar_gen2_dma_contiguous, true); &rcar_gen2_dma_contiguous, true);
#endif #endif
......
...@@ -12,7 +12,6 @@ ...@@ -12,7 +12,6 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/of_address.h> #include <linux/of_address.h>
#include <linux/vexpress.h> #include <linux/vexpress.h>
...@@ -36,106 +35,71 @@ ...@@ -36,106 +35,71 @@
#define KFC_CFG_W 0x2c #define KFC_CFG_W 0x2c
#define DCS_CFG_R 0x30 #define DCS_CFG_R 0x30
/*
* We can't use regular spinlocks. In the switcher case, it is possible
* for an outbound CPU to call power_down() while its inbound counterpart
* is already live using the same logical CPU number which trips lockdep
* debugging.
*/
static arch_spinlock_t dcscb_lock = __ARCH_SPIN_LOCK_UNLOCKED;
static void __iomem *dcscb_base; static void __iomem *dcscb_base;
static int dcscb_use_count[4][2];
static int dcscb_allcpus_mask[2]; static int dcscb_allcpus_mask[2];
static int dcscb_power_up(unsigned int cpu, unsigned int cluster) static int dcscb_cpu_powerup(unsigned int cpu, unsigned int cluster)
{ {
unsigned int rst_hold, cpumask = (1 << cpu); unsigned int rst_hold, cpumask = (1 << cpu);
unsigned int all_mask;
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster); pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
if (cpu >= 4 || cluster >= 2) if (cluster >= 2 || !(cpumask & dcscb_allcpus_mask[cluster]))
return -EINVAL; return -EINVAL;
all_mask = dcscb_allcpus_mask[cluster];
/*
* Since this is called with IRQs enabled, and no arch_spin_lock_irq
* variant exists, we need to disable IRQs manually here.
*/
local_irq_disable();
arch_spin_lock(&dcscb_lock);
dcscb_use_count[cpu][cluster]++;
if (dcscb_use_count[cpu][cluster] == 1) {
rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4); rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
if (rst_hold & (1 << 8)) {
/* remove cluster reset and add individual CPU's reset */
rst_hold &= ~(1 << 8);
rst_hold |= all_mask;
}
rst_hold &= ~(cpumask | (cpumask << 4)); rst_hold &= ~(cpumask | (cpumask << 4));
writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4); writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
} else if (dcscb_use_count[cpu][cluster] != 2) { return 0;
/* }
* The only possible values are:
* 0 = CPU down
* 1 = CPU (still) up
* 2 = CPU requested to be up before it had a chance
* to actually make itself down.
* Any other value is a bug.
*/
BUG();
}
arch_spin_unlock(&dcscb_lock); static int dcscb_cluster_powerup(unsigned int cluster)
local_irq_enable(); {
unsigned int rst_hold;
pr_debug("%s: cluster %u\n", __func__, cluster);
if (cluster >= 2)
return -EINVAL;
/* remove cluster reset and add individual CPU's reset */
rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
rst_hold &= ~(1 << 8);
rst_hold |= dcscb_allcpus_mask[cluster];
writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
return 0; return 0;
} }
static void dcscb_power_down(void) static void dcscb_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
{ {
unsigned int mpidr, cpu, cluster, rst_hold, cpumask, all_mask; unsigned int rst_hold;
bool last_man = false, skip_wfi = false;
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
cpumask = (1 << cpu);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster); pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
BUG_ON(cpu >= 4 || cluster >= 2); BUG_ON(cluster >= 2 || !((1 << cpu) & dcscb_allcpus_mask[cluster]));
all_mask = dcscb_allcpus_mask[cluster]; rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
rst_hold |= (1 << cpu);
writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
}
__mcpm_cpu_going_down(cpu, cluster); static void dcscb_cluster_powerdown_prepare(unsigned int cluster)
{
unsigned int rst_hold;
pr_debug("%s: cluster %u\n", __func__, cluster);
BUG_ON(cluster >= 2);
arch_spin_lock(&dcscb_lock);
BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
dcscb_use_count[cpu][cluster]--;
if (dcscb_use_count[cpu][cluster] == 0) {
rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4); rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
rst_hold |= cpumask;
if (((rst_hold | (rst_hold >> 4)) & all_mask) == all_mask) {
rst_hold |= (1 << 8); rst_hold |= (1 << 8);
last_man = true;
}
writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4); writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
} else if (dcscb_use_count[cpu][cluster] == 1) { }
/*
* A power_up request went ahead of us.
* Even if we do not want to shut this CPU down,
* the caller expects a certain state as if the WFI
* was aborted. So let's continue with cache cleaning.
*/
skip_wfi = true;
} else
BUG();
if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) { static void dcscb_cpu_cache_disable(void)
arch_spin_unlock(&dcscb_lock); {
/* Disable and flush the local CPU cache. */
v7_exit_coherency_flush(louis);
}
static void dcscb_cluster_cache_disable(void)
{
/* Flush all cache levels for this cluster. */ /* Flush all cache levels for this cluster. */
v7_exit_coherency_flush(all); v7_exit_coherency_flush(all);
...@@ -155,44 +119,18 @@ static void dcscb_power_down(void) ...@@ -155,44 +119,18 @@ static void dcscb_power_down(void)
* Disable cluster-level coherency by masking * Disable cluster-level coherency by masking
* incoming snoops and DVM messages: * incoming snoops and DVM messages:
*/ */
cci_disable_port_by_cpu(mpidr); cci_disable_port_by_cpu(read_cpuid_mpidr());
__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
} else {
arch_spin_unlock(&dcscb_lock);
/* Disable and flush the local CPU cache. */
v7_exit_coherency_flush(louis);
}
__mcpm_cpu_down(cpu, cluster);
/* Now we are prepared for power-down, do it: */
dsb();
if (!skip_wfi)
wfi();
/* Not dead at this point? Let our caller cope. */
} }
static const struct mcpm_platform_ops dcscb_power_ops = { static const struct mcpm_platform_ops dcscb_power_ops = {
.power_up = dcscb_power_up, .cpu_powerup = dcscb_cpu_powerup,
.power_down = dcscb_power_down, .cluster_powerup = dcscb_cluster_powerup,
.cpu_powerdown_prepare = dcscb_cpu_powerdown_prepare,
.cluster_powerdown_prepare = dcscb_cluster_powerdown_prepare,
.cpu_cache_disable = dcscb_cpu_cache_disable,
.cluster_cache_disable = dcscb_cluster_cache_disable,
}; };
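With the refcounting, locking and last-man logic now owned by the MCPM core, a backend only supplies mechanism. A skeletal backend under the new abstraction might look like this (names hypothetical; see dcscb_power_ops above for a real instance):

static int my_cpu_powerup(unsigned int cpu, unsigned int cluster)
{
	/* poke the power controller; the core handles locking and use counts */
	return 0;
}

static const struct mcpm_platform_ops my_power_ops = {
	.cpu_powerup	= my_cpu_powerup,
	/* fill in cluster_powerup, *_powerdown_prepare and *_cache_disable as needed */
};

followed at init time by mcpm_platform_register(&my_power_ops).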
static void __init dcscb_usage_count_init(void)
{
unsigned int mpidr, cpu, cluster;
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
BUG_ON(cpu >= 4 || cluster >= 2);
dcscb_use_count[cpu][cluster] = 1;
}
extern void dcscb_power_up_setup(unsigned int affinity_level); extern void dcscb_power_up_setup(unsigned int affinity_level);
static int __init dcscb_init(void) static int __init dcscb_init(void)
...@@ -213,7 +151,6 @@ static int __init dcscb_init(void) ...@@ -213,7 +151,6 @@ static int __init dcscb_init(void)
cfg = readl_relaxed(dcscb_base + DCS_CFG_R); cfg = readl_relaxed(dcscb_base + DCS_CFG_R);
dcscb_allcpus_mask[0] = (1 << (((cfg >> 16) >> (0 << 2)) & 0xf)) - 1; dcscb_allcpus_mask[0] = (1 << (((cfg >> 16) >> (0 << 2)) & 0xf)) - 1;
dcscb_allcpus_mask[1] = (1 << (((cfg >> 16) >> (1 << 2)) & 0xf)) - 1; dcscb_allcpus_mask[1] = (1 << (((cfg >> 16) >> (1 << 2)) & 0xf)) - 1;
dcscb_usage_count_init();
ret = mcpm_platform_register(&dcscb_power_ops); ret = mcpm_platform_register(&dcscb_power_ops);
if (!ret) if (!ret)
......
...@@ -18,7 +18,6 @@ ...@@ -18,7 +18,6 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/of_address.h> #include <linux/of_address.h>
#include <linux/of_irq.h> #include <linux/of_irq.h>
#include <linux/spinlock.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/irqchip/arm-gic.h> #include <linux/irqchip/arm-gic.h>
...@@ -44,101 +43,36 @@ ...@@ -44,101 +43,36 @@
static void __iomem *scc; static void __iomem *scc;
/*
* We can't use regular spinlocks. In the switcher case, it is possible
* for an outbound CPU to call power_down() after its inbound counterpart
* is already live using the same logical CPU number which trips lockdep
* debugging.
*/
static arch_spinlock_t tc2_pm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
#define TC2_CLUSTERS 2 #define TC2_CLUSTERS 2
#define TC2_MAX_CPUS_PER_CLUSTER 3 #define TC2_MAX_CPUS_PER_CLUSTER 3
static unsigned int tc2_nr_cpus[TC2_CLUSTERS]; static unsigned int tc2_nr_cpus[TC2_CLUSTERS];
/* Keep per-cpu usage count to cope with unordered up/down requests */ static int tc2_pm_cpu_powerup(unsigned int cpu, unsigned int cluster)
static int tc2_pm_use_count[TC2_MAX_CPUS_PER_CLUSTER][TC2_CLUSTERS];
#define tc2_cluster_unused(cluster) \
(!tc2_pm_use_count[0][cluster] && \
!tc2_pm_use_count[1][cluster] && \
!tc2_pm_use_count[2][cluster])
static int tc2_pm_power_up(unsigned int cpu, unsigned int cluster)
{ {
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster); pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster])
return -EINVAL; return -EINVAL;
/*
* Since this is called with IRQs enabled, and no arch_spin_lock_irq
* variant exists, we need to disable IRQs manually here.
*/
local_irq_disable();
arch_spin_lock(&tc2_pm_lock);
if (tc2_cluster_unused(cluster))
ve_spc_powerdown(cluster, false);
tc2_pm_use_count[cpu][cluster]++;
if (tc2_pm_use_count[cpu][cluster] == 1) {
ve_spc_set_resume_addr(cluster, cpu, ve_spc_set_resume_addr(cluster, cpu,
virt_to_phys(mcpm_entry_point)); virt_to_phys(mcpm_entry_point));
ve_spc_cpu_wakeup_irq(cluster, cpu, true); ve_spc_cpu_wakeup_irq(cluster, cpu, true);
} else if (tc2_pm_use_count[cpu][cluster] != 2) {
/*
* The only possible values are:
* 0 = CPU down
* 1 = CPU (still) up
* 2 = CPU requested to be up before it had a chance
* to actually make itself down.
* Any other value is a bug.
*/
BUG();
}
arch_spin_unlock(&tc2_pm_lock);
local_irq_enable();
return 0; return 0;
} }
static void tc2_pm_down(u64 residency) static int tc2_pm_cluster_powerup(unsigned int cluster)
{ {
unsigned int mpidr, cpu, cluster; pr_debug("%s: cluster %u\n", __func__, cluster);
bool last_man = false, skip_wfi = false; if (cluster >= TC2_CLUSTERS)
return -EINVAL;
mpidr = read_cpuid_mpidr(); ve_spc_powerdown(cluster, false);
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); return 0;
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); }
static void tc2_pm_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
{
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster); pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER); BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
__mcpm_cpu_going_down(cpu, cluster);
arch_spin_lock(&tc2_pm_lock);
BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
tc2_pm_use_count[cpu][cluster]--;
if (tc2_pm_use_count[cpu][cluster] == 0) {
ve_spc_cpu_wakeup_irq(cluster, cpu, true); ve_spc_cpu_wakeup_irq(cluster, cpu, true);
if (tc2_cluster_unused(cluster)) {
ve_spc_powerdown(cluster, true);
ve_spc_global_wakeup_irq(true);
last_man = true;
}
} else if (tc2_pm_use_count[cpu][cluster] == 1) {
/*
* A power_up request went ahead of us.
* Even if we do not want to shut this CPU down,
* the caller expects a certain state as if the WFI
* was aborted. So let's continue with cache cleaning.
*/
skip_wfi = true;
} else
BUG();
/* /*
* If the CPU is committed to power down, make sure * If the CPU is committed to power down, make sure
* the power controller will be in charge of waking it * the power controller will be in charge of waking it
...@@ -146,12 +80,24 @@ static void tc2_pm_down(u64 residency) ...@@ -146,12 +80,24 @@ static void tc2_pm_down(u64 residency)
* to the CPU by disabling the GIC CPU IF to prevent wfi * to the CPU by disabling the GIC CPU IF to prevent wfi
* from completing execution behind power controller back * from completing execution behind power controller back
*/ */
if (!skip_wfi)
gic_cpu_if_down(); gic_cpu_if_down();
}
if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) { static void tc2_pm_cluster_powerdown_prepare(unsigned int cluster)
arch_spin_unlock(&tc2_pm_lock); {
pr_debug("%s: cluster %u\n", __func__, cluster);
BUG_ON(cluster >= TC2_CLUSTERS);
ve_spc_powerdown(cluster, true);
ve_spc_global_wakeup_irq(true);
}
static void tc2_pm_cpu_cache_disable(void)
{
v7_exit_coherency_flush(louis);
}
static void tc2_pm_cluster_cache_disable(void)
{
if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) { if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
/* /*
* On the Cortex-A15 we need to disable * On the Cortex-A15 we need to disable
...@@ -165,36 +111,7 @@ static void tc2_pm_down(u64 residency) ...@@ -165,36 +111,7 @@ static void tc2_pm_down(u64 residency)
} }
v7_exit_coherency_flush(all); v7_exit_coherency_flush(all);
cci_disable_port_by_cpu(read_cpuid_mpidr());
cci_disable_port_by_cpu(mpidr);
__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
} else {
/*
* If last man then undo any setup done previously.
*/
if (last_man) {
ve_spc_powerdown(cluster, false);
ve_spc_global_wakeup_irq(false);
}
arch_spin_unlock(&tc2_pm_lock);
v7_exit_coherency_flush(louis);
}
__mcpm_cpu_down(cpu, cluster);
/* Now we are prepared for power-down, do it: */
if (!skip_wfi)
wfi();
/* Not dead at this point? Let our caller cope. */
}
static void tc2_pm_power_down(void)
{
tc2_pm_down(0);
} }
static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster) static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster)
...@@ -217,11 +134,6 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) ...@@ -217,11 +134,6 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER); BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) { for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) {
/*
* Only examine the hardware state if the target CPU has
* caught up at least as far as tc2_pm_down():
*/
if (ACCESS_ONCE(tc2_pm_use_count[cpu][cluster]) == 0) {
pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n", pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n",
__func__, cpu, cluster, __func__, cpu, cluster,
readl_relaxed(scc + RESET_CTRL)); readl_relaxed(scc + RESET_CTRL));
...@@ -237,7 +149,6 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) ...@@ -237,7 +149,6 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
if (tc2_core_in_reset(cpu, cluster) || if (tc2_core_in_reset(cpu, cluster) ||
ve_spc_cpu_in_wfi(cpu, cluster)) ve_spc_cpu_in_wfi(cpu, cluster))
return 0; /* success: the CPU is halted */ return 0; /* success: the CPU is halted */
}
/* Otherwise, wait and retry: */ /* Otherwise, wait and retry: */
msleep(POLL_MSEC); msleep(POLL_MSEC);
...@@ -246,72 +157,40 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) ...@@ -246,72 +157,40 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
return -ETIMEDOUT; /* timeout */ return -ETIMEDOUT; /* timeout */
} }
static void tc2_pm_suspend(u64 residency) static void tc2_pm_cpu_suspend_prepare(unsigned int cpu, unsigned int cluster)
{ {
unsigned int mpidr, cpu, cluster;
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point)); ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point));
tc2_pm_down(residency);
} }
static void tc2_pm_powered_up(void) static void tc2_pm_cpu_is_up(unsigned int cpu, unsigned int cluster)
{ {
unsigned int mpidr, cpu, cluster;
unsigned long flags;
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster); pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER); BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
local_irq_save(flags);
arch_spin_lock(&tc2_pm_lock);
if (tc2_cluster_unused(cluster)) {
ve_spc_powerdown(cluster, false);
ve_spc_global_wakeup_irq(false);
}
if (!tc2_pm_use_count[cpu][cluster])
tc2_pm_use_count[cpu][cluster] = 1;
ve_spc_cpu_wakeup_irq(cluster, cpu, false); ve_spc_cpu_wakeup_irq(cluster, cpu, false);
ve_spc_set_resume_addr(cluster, cpu, 0); ve_spc_set_resume_addr(cluster, cpu, 0);
}
arch_spin_unlock(&tc2_pm_lock); static void tc2_pm_cluster_is_up(unsigned int cluster)
local_irq_restore(flags); {
pr_debug("%s: cluster %u\n", __func__, cluster);
BUG_ON(cluster >= TC2_CLUSTERS);
ve_spc_powerdown(cluster, false);
ve_spc_global_wakeup_irq(false);
} }
static const struct mcpm_platform_ops tc2_pm_power_ops = { static const struct mcpm_platform_ops tc2_pm_power_ops = {
.power_up = tc2_pm_power_up, .cpu_powerup = tc2_pm_cpu_powerup,
.power_down = tc2_pm_power_down, .cluster_powerup = tc2_pm_cluster_powerup,
.cpu_suspend_prepare = tc2_pm_cpu_suspend_prepare,
.cpu_powerdown_prepare = tc2_pm_cpu_powerdown_prepare,
.cluster_powerdown_prepare = tc2_pm_cluster_powerdown_prepare,
.cpu_cache_disable = tc2_pm_cpu_cache_disable,
.cluster_cache_disable = tc2_pm_cluster_cache_disable,
.wait_for_powerdown = tc2_pm_wait_for_powerdown, .wait_for_powerdown = tc2_pm_wait_for_powerdown,
.suspend = tc2_pm_suspend, .cpu_is_up = tc2_pm_cpu_is_up,
.powered_up = tc2_pm_powered_up, .cluster_is_up = tc2_pm_cluster_is_up,
}; };
static bool __init tc2_pm_usage_count_init(void)
{
unsigned int mpidr, cpu, cluster;
mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) {
pr_err("%s: boot CPU is out of bound!\n", __func__);
return false;
}
tc2_pm_use_count[cpu][cluster] = 1;
return true;
}
/* /*
* Enable cluster-level coherency, in preparation for turning on the MMU. * Enable cluster-level coherency, in preparation for turning on the MMU.
*/ */
...@@ -323,23 +202,9 @@ static void __naked tc2_pm_power_up_setup(unsigned int affinity_level) ...@@ -323,23 +202,9 @@ static void __naked tc2_pm_power_up_setup(unsigned int affinity_level)
" b cci_enable_port_for_self "); " b cci_enable_port_for_self ");
} }
static void __init tc2_cache_off(void)
{
pr_info("TC2: disabling cache during MCPM loopback test\n");
if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
/* disable L2 prefetching on the Cortex-A15 */
asm volatile(
"mcr p15, 1, %0, c15, c0, 3 \n\t"
"isb \n\t"
"dsb "
: : "r" (0x400) );
}
v7_exit_coherency_flush(all);
cci_disable_port_by_cpu(read_cpuid_mpidr());
}
static int __init tc2_pm_init(void) static int __init tc2_pm_init(void)
{ {
unsigned int mpidr, cpu, cluster;
int ret, irq; int ret, irq;
u32 a15_cluster_id, a7_cluster_id, sys_info; u32 a15_cluster_id, a7_cluster_id, sys_info;
struct device_node *np; struct device_node *np;
...@@ -379,14 +244,20 @@ static int __init tc2_pm_init(void) ...@@ -379,14 +244,20 @@ static int __init tc2_pm_init(void)
if (!cci_probed()) if (!cci_probed())
return -ENODEV; return -ENODEV;
if (!tc2_pm_usage_count_init()) mpidr = read_cpuid_mpidr();
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) {
pr_err("%s: boot CPU is out of bound!\n", __func__);
return -EINVAL; return -EINVAL;
}
ret = mcpm_platform_register(&tc2_pm_power_ops); ret = mcpm_platform_register(&tc2_pm_power_ops);
if (!ret) { if (!ret) {
mcpm_sync_init(tc2_pm_power_up_setup); mcpm_sync_init(tc2_pm_power_up_setup);
/* test if we can (re)enable the CCI on our own */ /* test if we can (re)enable the CCI on our own */
BUG_ON(mcpm_loopback(tc2_cache_off) != 0); BUG_ON(mcpm_loopback(tc2_pm_cluster_cache_disable) != 0);
pr_info("TC2 power management initialized\n"); pr_info("TC2 power management initialized\n");
} }
return ret; return ret;
......
...@@ -142,7 +142,7 @@ static int __init weim_parse_dt(struct platform_device *pdev, ...@@ -142,7 +142,7 @@ static int __init weim_parse_dt(struct platform_device *pdev,
&pdev->dev); &pdev->dev);
const struct imx_weim_devtype *devtype = of_id->data; const struct imx_weim_devtype *devtype = of_id->data;
struct device_node *child; struct device_node *child;
int ret; int ret, have_child = 0;
if (devtype == &imx50_weim_devtype) { if (devtype == &imx50_weim_devtype) {
ret = imx_weim_gpr_setup(pdev); ret = imx_weim_gpr_setup(pdev);
...@@ -155,13 +155,14 @@ static int __init weim_parse_dt(struct platform_device *pdev, ...@@ -155,13 +155,14 @@ static int __init weim_parse_dt(struct platform_device *pdev,
continue; continue;
ret = weim_timing_setup(child, base, devtype); ret = weim_timing_setup(child, base, devtype);
if (ret) { if (ret)
dev_err(&pdev->dev, "%s set timing failed.\n", dev_warn(&pdev->dev, "%s set timing failed.\n",
child->full_name); child->full_name);
return ret; else
} have_child = 1;
} }
if (have_child)
ret = of_platform_populate(pdev->dev.of_node, ret = of_platform_populate(pdev->dev.of_node,
of_default_bus_match_table, of_default_bus_match_table,
NULL, &pdev->dev); NULL, &pdev->dev);
......
...@@ -248,6 +248,9 @@ ...@@ -248,6 +248,9 @@
#define IMX6QDL_PLL6_BYPASS 235 #define IMX6QDL_PLL6_BYPASS 235
#define IMX6QDL_PLL7_BYPASS 236 #define IMX6QDL_PLL7_BYPASS 236
#define IMX6QDL_CLK_GPT_3M 237 #define IMX6QDL_CLK_GPT_3M 237
#define IMX6QDL_CLK_END 238 #define IMX6QDL_CLK_VIDEO_27M 238
#define IMX6QDL_CLK_MIPI_CORE_CFG 239
#define IMX6QDL_CLK_MIPI_IPG 240
#define IMX6QDL_CLK_END 241
#endif /* __DT_BINDINGS_CLOCK_IMX6QDL_H */ #endif /* __DT_BINDINGS_CLOCK_IMX6QDL_H */
...@@ -207,6 +207,7 @@ ...@@ -207,6 +207,7 @@
#define IMX6Q_GPR3_LVDS0_MUX_CTL_IPU1_DI1 (0x1 << 6) #define IMX6Q_GPR3_LVDS0_MUX_CTL_IPU1_DI1 (0x1 << 6)
#define IMX6Q_GPR3_LVDS0_MUX_CTL_IPU2_DI0 (0x2 << 6) #define IMX6Q_GPR3_LVDS0_MUX_CTL_IPU2_DI0 (0x2 << 6)
#define IMX6Q_GPR3_LVDS0_MUX_CTL_IPU2_DI1 (0x3 << 6) #define IMX6Q_GPR3_LVDS0_MUX_CTL_IPU2_DI1 (0x3 << 6)
#define IMX6Q_GPR3_MIPI_MUX_CTL_SHIFT 4
#define IMX6Q_GPR3_MIPI_MUX_CTL_MASK (0x3 << 4) #define IMX6Q_GPR3_MIPI_MUX_CTL_MASK (0x3 << 4)
#define IMX6Q_GPR3_MIPI_MUX_CTL_IPU1_DI0 (0x0 << 4) #define IMX6Q_GPR3_MIPI_MUX_CTL_IPU1_DI0 (0x0 << 4)
#define IMX6Q_GPR3_MIPI_MUX_CTL_IPU1_DI1 (0x1 << 4) #define IMX6Q_GPR3_MIPI_MUX_CTL_IPU1_DI1 (0x1 << 4)
......