Commit 4d23a5e8 authored by Ulf Hansson, committed by Rafael J. Wysocki

PM / Domains: Allow runtime PM during system PM phases

In cases when a PM domain isn't powered off when genpd's ->prepare()
callback is invoked, genpd runtime resumes and disables runtime PM for the
device. This behaviour was needed when genpd managed intermediate states
during the power off sequence, so as to maintain proper low power states
of devices during system PM suspend/resume.

Commit ba2bbfbf ("PM / Domains: Remove intermediate states from the
power off sequence") enables genpd to improve its behaviour in that
respect.

The PM core disables runtime PM at __device_suspend_late() before it calls
a system PM "late" callback for a device. When resuming a device, after a
corresponding "early" callback has been invoked, the PM core re-enables
runtime PM.

By changing genpd to allow runtime PM according to the same system PM
phases as the PM core, devices can be runtime resumed by their
corresponding subsystem/driver when really needed.

In this way, genpd no longer needs to runtime resume the device from its
->prepare() callback. In most cases that avoids unnecessary and
energy-wasting operations of runtime resuming devices that have nothing
to do, only to runtime suspend them shortly after.

However, because of this change in genpd's behaviour, and because genpd
powers on the PM domain unconditionally in the system PM resume "noirq"
phase, a PM domain could potentially stay powered on even if it's unused
after the system has resumed. To avoid this, schedule a power off work
when genpd's system PM ->complete() callback has been invoked for the
last device in the PM domain.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
parent 9f5b5274
@@ -739,21 +739,6 @@ static int pm_genpd_prepare(struct device *dev)
 
 	mutex_unlock(&genpd->lock);
 
-	/*
-	 * Even if the PM domain is powered off at this point, we can't expect
-	 * it to remain in that state during the entire system PM suspend
-	 * phase. Any subsystem/driver for a device in the PM domain, may still
-	 * need to serve a request which may require the device to be runtime
-	 * resumed and its PM domain to be powered.
-	 *
-	 * As we are disabling runtime PM at this point, we are preventing the
-	 * subsystem/driver to decide themselves. For that reason, we need to
-	 * make sure the device is operational as it may be required in some
-	 * cases.
-	 */
-	pm_runtime_resume(dev);
-	__pm_runtime_disable(dev, false);
-
 	ret = pm_generic_prepare(dev);
 	if (ret) {
 		mutex_lock(&genpd->lock);
@@ -761,7 +746,6 @@ static int pm_genpd_prepare(struct device *dev)
 		genpd->prepared_count--;
 
 		mutex_unlock(&genpd->lock);
-		pm_runtime_enable(dev);
 	}
 
 	return ret;
@@ -787,8 +771,6 @@ static int pm_genpd_suspend_noirq(struct device *dev)
 	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
 		return 0;
 
-	genpd_stop_dev(genpd, dev);
-
 	/*
 	 * Since all of the "noirq" callbacks are executed sequentially, it is
 	 * guaranteed that this function will never run twice in parallel for
@@ -827,7 +809,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
 	pm_genpd_sync_poweron(genpd, true);
 	genpd->suspended_count--;
 
-	return genpd_start_dev(genpd, dev);
+	return 0;
 }
 
 /**
@@ -849,7 +831,7 @@ static int pm_genpd_freeze_noirq(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd_stop_dev(genpd, dev);
+	return 0;
 }
 
 /**
@@ -869,7 +851,7 @@ static int pm_genpd_thaw_noirq(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd_start_dev(genpd, dev);
+	return 0;
 }
 
 /**
@@ -907,7 +889,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
 	pm_genpd_sync_poweron(genpd, true);
 
-	return genpd_start_dev(genpd, dev);
+	return 0;
 }
 
 /**
@@ -929,15 +911,15 @@ static void pm_genpd_complete(struct device *dev)
 	if (IS_ERR(genpd))
 		return;
 
+	pm_generic_complete(dev);
+
 	mutex_lock(&genpd->lock);
 
 	genpd->prepared_count--;
+	if (!genpd->prepared_count)
+		genpd_queue_power_off_work(genpd);
 
 	mutex_unlock(&genpd->lock);
-
-	pm_generic_complete(dev);
-	pm_runtime_set_active(dev);
-	pm_runtime_enable(dev);
 }