Commit 43c9fad9 authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'pm+acpi-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI updates from Rafael Wysocki:
 "The rework of backlight interface selection API from Hans de Goede
  stands out from the number of commits and the number of affected
  places perspective.  The cpufreq core fixes from Viresh Kumar are
  quite significant too as far as the number of commits goes and because
  they should reduce CPU online/offline overhead quite a bit in the
  majority of cases.

  From the new features point of view, the ACPICA update (to upstream
  revision 20150515) adding support for new ACPI 6 material to ACPICA is
  the one that matters the most, as some significant new features will
  be based on it going forward.  Also included is an update of the ACPI
  device power management core to follow ACPI 6 (which in turn reflects
  the Windows device PM implementation), a PM core extension to support
  wakeup interrupts in a more generic way, and support for the ACPI _CCA
  device configuration object.

  The rest is mostly fixes and cleanups all over and some documentation
  updates, including new DT bindings for Operating Performance Points.

  There is one fix for a regression introduced in the 4.1 cycle, but it
  adds quite a number of lines of code, it wasn't really ready before
  Thursday, and you were on vacation, so I refrained from pushing it at
  the last minute for 4.1.

  Specifics:

   - ACPICA update to upstream revision 20150515 including basic support
     for ACPI 6 features: new ACPI tables introduced by ACPI 6 (STAO,
     XENV, WPBT, NFIT, IORT), changes related to the other tables (DRTM,
     FADT, LPIT, MADT), new predefined names (_BTH, _CR3, _DSD, _LPI,
     _MTL, _PRR, _RDI, _RST, _TFP, _TSN), fixes and cleanups (Bob Moore,
     Lv Zheng).

   - ACPI device power management core code update to follow ACPI 6
     which reflects the ACPI device power management implementation in
     Windows (Rafael J Wysocki).

   - rework of the backlight interface selection logic to reduce the
     number of kernel command line options and improve the handling of
     DMI quirks that may be involved in that and to make the code
     generally more straightforward (Hans de Goede).

   - fixes for the ACPI Embedded Controller (EC) driver related to the
     handling of EC transactions (Lv Zheng).

   - fix for a regression related to the ACPI resources management and
     resulting from a recent change of ACPI initialization code ordering
     (Rafael J Wysocki).

   - fix for a system initialization regression related to ACPI
     introduced during the 3.14 cycle and caused by running the code
     that switches the platform over to the ACPI mode too early in the
     initialization sequence (Rafael J Wysocki).

   - support for the ACPI _CCA device configuration object related to
     DMA cache coherence (Suravee Suthikulpanit).

   - ACPI/APEI fixes and cleanups (Jiri Kosina, Borislav Petkov).

   - ACPI battery driver cleanups (Luis Henriques, Mathias Krause).

   - ACPI processor driver cleanups (Hanjun Guo).

   - cleanups and documentation update related to the ACPI device
     properties interface based on _DSD (Rafael J Wysocki).

   - ACPI device power management fixes (Rafael J Wysocki).

   - assorted cleanups related to ACPI (Dominik Brodowski, Fabian
     Frederick, Lorenzo Pieralisi, Mathias Krause, Rafael J Wysocki).

   - fix for a long-standing issue causing General Protection Faults to
     be generated occasionally on return to user space after resume from
     ACPI-based suspend-to-RAM on 32-bit x86 (Ingo Molnar).

   - fix to make the suspend core code return -EBUSY consistently in all
     cases when system suspend is aborted due to wakeup detection (Ruchi
     Kandoi).

   - support for automated device wakeup IRQ handling allowing drivers
     to make their PM support more straightforward (Tony Lindgren).

   - new tracepoints for suspend-to-idle tracing and rework of the
     prepare/complete callbacks tracing in the PM core (Todd E Brandt,
     Rafael J Wysocki).

   - wakeup sources framework enhancements (Jin Qian).

   - new macro for noirq system PM callbacks (Grygorii Strashko).

   - assorted cleanups related to system suspend (Rafael J Wysocki).

   - cpuidle core cleanups to make the code more efficient (Rafael J
     Wysocki).

   - powernv/pseries cpuidle driver update (Shilpasri G Bhat).

   - cpufreq core fixes related to CPU online/offline that should reduce
     the overhead of these operations quite a bit, unless the CPU in
     question is physically going away (Viresh Kumar, Saravana Kannan).

   - serialization of cpufreq governor callbacks to avoid race
     conditions in some cases (Viresh Kumar).

   - intel_pstate driver fixes and cleanups (Doug Smythies, Prarit
     Bhargava, Joe Konno).

   - cpufreq driver (arm_big_little, cpufreq-dt, qoriq) updates (Sudeep
     Holla, Felipe Balbi, Tang Yuantian).

   - assorted cleanups in cpufreq drivers and core (Shailendra Verma,
     Fabian Frederick, Wang Long).

   - new Device Tree bindings for representing Operating Performance
     Points (Viresh Kumar).

   - updates for the common clock operations support code in the PM core
     (Rajendra Nayak, Geert Uytterhoeven).

   - PM domains core code update (Geert Uytterhoeven).

   - Intel Knights Landing support for the RAPL (Running Average Power
     Limit) power capping driver (Dasaratharaman Chandramouli).

   - fixes related to the floor frequency setting on Atom SoCs in the
     RAPL power capping driver (Ajay Thomas).

   - runtime PM framework documentation update (Ben Dooks).

   - cpupower tool fix (Herton R Krzesinski)"

* tag 'pm+acpi-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (194 commits)
  cpuidle: powernv/pseries: Auto-promotion of snooze to deeper idle state
  x86: Load __USER_DS into DS/ES after resume
  PM / OPP: Add binding for 'opp-suspend'
  PM / OPP: Allow multiple OPP tables to be passed via DT
  PM / OPP: Add new bindings to address shortcomings of existing bindings
  ACPI: Constify ACPI device IDs in documentation
  ACPI / enumeration: Document the rules regarding the PRP0001 device ID
  ACPI / video: Make acpi_video_unregister_backlight() private
  acpi-video-detect: Remove old API
  toshiba-acpi: Port to new backlight interface selection API
  thinkpad-acpi: Port to new backlight interface selection API
  sony-laptop: Port to new backlight interface selection API
  samsung-laptop: Port to new backlight interface selection API
  msi-wmi: Port to new backlight interface selection API
  msi-laptop: Port to new backlight interface selection API
  intel-oaktrail: Port to new backlight interface selection API
  ideapad-laptop: Port to new backlight interface selection API
  fujitsu-laptop: Port to new backlight interface selection API
  eeepc-laptop: Port to new backlight interface selection API
  dell-wmi: Port to new backlight interface selection API
  ...
parents cb8a4dea d4610035
@@ -42,7 +42,7 @@ Adding ACPI support for an existing driver should be pretty
straightforward. Here is the simplest example:
#ifdef CONFIG_ACPI
static struct acpi_device_id mydrv_acpi_match[] = {
static const struct acpi_device_id mydrv_acpi_match[] = {
/* ACPI IDs here */
{ }
};
@@ -166,7 +166,7 @@ the platform device drivers. Below is an example where we add ACPI support
to at25 SPI eeprom driver (this is meant for the above ACPI snippet):
#ifdef CONFIG_ACPI
static struct acpi_device_id at25_acpi_match[] = {
static const struct acpi_device_id at25_acpi_match[] = {
{ "AT25", 0 },
{ },
};
@@ -230,7 +230,7 @@ Below is an example of how to add ACPI support to the existing mpu3050
input driver:
#ifdef CONFIG_ACPI
static struct acpi_device_id mpu3050_acpi_match[] = {
static const struct acpi_device_id mpu3050_acpi_match[] = {
{ "MPU3050", 0 },
{ },
};
@@ -359,3 +359,54 @@ the id should be set like:
The ACPI id "XYZ0001" is then used to lookup an ACPI device directly under
the MFD device and if found, that ACPI companion device is bound to the
resulting child platform device.
Device Tree namespace link device ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Device Tree protocol uses device identification based on the "compatible"
property whose value is a string or an array of strings recognized as device
identifiers by drivers and the driver core. The set of all those strings may be
regarded as a device identification namespace analogous to the ACPI/PNP device
ID namespace. Consequently, in principle it should not be necessary to allocate
a new (and arguably redundant) ACPI/PNP device ID for a device with an existing
identification string in the Device Tree (DT) namespace, especially if that ID
is only needed to indicate that a given device is compatible with another one,
presumably having a matching driver in the kernel already.

In ACPI, the device identification object called _CID (Compatible ID) is used to
list the IDs of devices the given one is compatible with, but those IDs must
belong to one of the namespaces prescribed by the ACPI specification (see
Section 6.1.2 of ACPI 6.0 for details) and the DT namespace is not one of them.
Moreover, the specification mandates that either a _HID or an _ADR identification
object be present for all ACPI objects representing devices (Section 6.1 of ACPI
6.0). For non-enumerable bus types that object must be _HID and its value must
be a device ID from one of the namespaces prescribed by the specification too.

The special DT namespace link device ID, PRP0001, provides a means to use the
existing DT-compatible device identification in ACPI and to satisfy the above
requirements following from the ACPI specification at the same time. Namely,
if PRP0001 is returned by _HID, the ACPI subsystem will look for the
"compatible" property in the device object's _DSD and will use the value of that
property to identify the corresponding device in analogy with the original DT
device identification algorithm. If the "compatible" property is not present
or its value is not valid, the device will not be enumerated by the ACPI
subsystem. Otherwise, it will be enumerated automatically as a platform device
(except when an I2C or SPI link from the device to its parent is present, in
which case the ACPI core will leave the device enumeration to the parent's
driver) and the identification strings from the "compatible" property value will
be used to find a driver for the device along with the device IDs listed by _CID
(if present).

Analogously, if PRP0001 is present in the list of device IDs returned by _CID,
the identification strings listed by the "compatible" property value (if present
and valid) will be used to look for a driver matching the device, but in that
case their relative priority with respect to the other device IDs listed by
_HID and _CID depends on the position of PRP0001 in the _CID return package.
Specifically, the device IDs returned by _HID and preceding PRP0001 in the _CID
return package will be checked first. Also in that case the bus type the device
will be enumerated to depends on the device ID returned by _HID.

It is valid to define device objects with a _HID returning PRP0001 and without
the "compatible" property in the _DSD or a _CID as long as one of their
ancestors provides a _DSD with a valid "compatible" property. Such device
objects are then simply regarded as additional "blocks" providing hierarchical
configuration information to the driver of the composite ancestor device.
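The following sketch is an editorial illustration only, not part of the patch:
a hypothetical platform driver for a device whose _HID returns PRP0001 and
whose _DSD "compatible" property contains the assumed string
"vendor,foo-sensor". The strings in the driver's of_match_table are what the
driver core is expected to compare against the "compatible" property value in
this case, and properties from _DSD can be read with the device_property_*()
helpers.

	#include <linux/module.h>
	#include <linux/of.h>
	#include <linux/platform_device.h>

	static const struct of_device_id foo_of_match[] = {
		{ .compatible = "vendor,foo-sensor" },	/* assumed example ID */
		{ }
	};
	MODULE_DEVICE_TABLE(of, foo_of_match);

	static int foo_probe(struct platform_device *pdev)
	{
		/* Device properties from _DSD can be read via device_property_*() */
		return 0;
	}

	static struct platform_driver foo_driver = {
		.probe = foo_probe,
		.driver = {
			.name = "foo-sensor",
			.of_match_table = foo_of_match,
		},
	};
	module_platform_driver(foo_driver);
	MODULE_LICENSE("GPL");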
@@ -196,8 +196,6 @@ affected_cpus : List of Online CPUs that require software
related_cpus : List of Online + Offline CPUs that need software
coordination of frequency.
scaling_driver : Hardware driver for cpufreq.
scaling_cur_freq : Current frequency of the CPU as determined by
the governor and cpufreq core, in KHz. This is
the frequency the kernel thinks the CPU runs
......
@@ -179,11 +179,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
See also Documentation/power/runtime_pm.txt, pci=noacpi
acpi_rsdp= [ACPI,EFI,KEXEC]
Pass the RSDP address to the kernel, mostly used
on machines running EFI runtime service to boot the
second kernel for kdump.
acpi_apic_instance= [ACPI, IOAPIC]
Format: <int>
2: use 2nd APIC table, if available
@@ -197,6 +192,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
(e.g. thinkpad_acpi, sony_acpi, etc.) instead
of the ACPI video.ko driver.
acpica_no_return_repair [HW, ACPI]
Disable AML predefined validation mechanism
This mechanism can repair the evaluation result to make
the return objects more ACPI specification compliant.
This option is useful for developers to identify the
root cause of an AML interpreter issue when the issue
has something to do with the repair mechanism.
acpi.debug_layer= [HW,ACPI,ACPI_DEBUG]
acpi.debug_level= [HW,ACPI,ACPI_DEBUG]
Format: <int>
@@ -225,6 +228,22 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
unusable. The "log_buf_len" parameter may be useful
if you need to capture more output.
acpi_enforce_resources= [ACPI]
{ strict | lax | no }
Check for resource conflicts between native drivers
and ACPI OperationRegions (SystemIO and SystemMemory
only). IO ports and memory declared in ACPI might be
used by the ACPI subsystem in arbitrary AML code and
can interfere with legacy drivers.
strict (default): access to resources claimed by ACPI
is denied; legacy drivers trying to access reserved
resources will fail to bind to device using them.
lax: access to resources claimed by ACPI is allowed;
legacy drivers trying to access reserved resources
will bind successfully but a warning message is logged.
no: ACPI OperationRegions are not marked as reserved,
no further checks are performed.
acpi_force_table_verification [HW,ACPI]
Enable table checksum verification during early stage.
By default, this is disabled due to x86 early mapping
@@ -253,6 +272,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
This feature is enabled by default.
This option allows to turn off the feature.
acpi_no_memhotplug [ACPI] Disable memory hotplug. Useful for kdump
kernels.
acpi_no_static_ssdt [HW,ACPI]
Disable installation of static SSDTs at early boot time
By default, SSDTs contained in the RSDT/XSDT will be
@@ -263,13 +285,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
dynamic table installation which will install SSDT
tables to /sys/firmware/acpi/tables/dynamic.
acpica_no_return_repair [HW, ACPI]
Disable AML predefined validation mechanism
This mechanism can repair the evaluation result to make
the return objects more ACPI specification compliant.
This option is useful for developers to identify the
root cause of an AML interpreter issue when the issue
has something to do with the repair mechanism.
acpi_rsdp= [ACPI,EFI,KEXEC]
Pass the RSDP address to the kernel, mostly used
on machines running EFI runtime service to boot the
second kernel for kdump.
acpi_os_name= [HW,ACPI] Tell ACPI BIOS the name of the OS
Format: To spoof as Windows 98: ="Microsoft Windows"
@@ -365,25 +384,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Use timer override. For some broken Nvidia NF5 boards
that require a timer override, but don't have HPET
acpi_enforce_resources= [ACPI]
{ strict | lax | no }
Check for resource conflicts between native drivers
and ACPI OperationRegions (SystemIO and SystemMemory
only). IO ports and memory declared in ACPI might be
used by the ACPI subsystem in arbitrary AML code and
can interfere with legacy drivers.
strict (default): access to resources claimed by ACPI
is denied; legacy drivers trying to access reserved
resources will fail to bind to device using them.
lax: access to resources claimed by ACPI is allowed;
legacy drivers trying to access reserved resources
will bind successfully but a warning message is logged.
no: ACPI OperationRegions are not marked as reserved,
no further checks are performed.
acpi_no_memhotplug [ACPI] Disable memory hotplug. Useful for kdump
kernels.
add_efi_memmap [EFI; X86] Include EFI memory map in
kernel's map of available physical RAM.
......
@@ -556,6 +556,12 @@ helper functions described in Section 4. In that case, pm_runtime_resume()
should be used. Of course, for this purpose the device's runtime PM has to be
enabled earlier by calling pm_runtime_enable().
Note, if the device may execute pm_runtime calls during the probe (such as
if it registers with a subsystem that may call back in) then the
pm_runtime_get_sync() call paired with a pm_runtime_put() call will be
appropriate to ensure that the device is not put back to sleep during the
probe. This can happen with systems such as the network device layer.
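As an editorial illustration only (not part of the patch), a minimal ->probe()
sketch of the pattern described above, for a hypothetical platform driver,
might look like this:

	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>

	static int foo_probe(struct platform_device *pdev)
	{
		int error;

		pm_runtime_enable(&pdev->dev);

		/* Keep the device active while subsystems may call back in. */
		error = pm_runtime_get_sync(&pdev->dev);
		if (error < 0) {
			pm_runtime_put_noidle(&pdev->dev);
			pm_runtime_disable(&pdev->dev);
			return error;
		}

		/* ... register with subsystems (e.g. a network stack) ... */

		/* Allow the device to be suspended again when it goes idle. */
		pm_runtime_put(&pdev->dev);
		return 0;
	}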
It may be desirable to suspend the device once ->probe() has finished.
Therefore the driver core uses the asynchronous pm_request_idle() to submit a
request to execute the subsystem-level idle callback for the device at that
......
@@ -14,39 +14,9 @@
#include <linux/pm_clock.h>
#include <linux/platform_device.h>
#ifdef CONFIG_PM
static int davinci_pm_runtime_suspend(struct device *dev)
{
int ret;
dev_dbg(dev, "%s\n", __func__);
ret = pm_generic_runtime_suspend(dev);
if (ret)
return ret;
ret = pm_clk_suspend(dev);
if (ret) {
pm_generic_runtime_resume(dev);
return ret;
}
return 0;
}
static int davinci_pm_runtime_resume(struct device *dev)
{
dev_dbg(dev, "%s\n", __func__);
pm_clk_resume(dev);
return pm_generic_runtime_resume(dev);
}
#endif
static struct dev_pm_domain davinci_pm_domain = {
.ops = {
SET_RUNTIME_PM_OPS(davinci_pm_runtime_suspend,
davinci_pm_runtime_resume, NULL)
USE_PM_CLK_RUNTIME_OPS
USE_PLATFORM_PM_SLEEP_OPS
},
};
......
@@ -19,40 +19,9 @@
#include <linux/clk-provider.h>
#include <linux/of.h>
#ifdef CONFIG_PM
static int keystone_pm_runtime_suspend(struct device *dev)
{
int ret;
dev_dbg(dev, "%s\n", __func__);
ret = pm_generic_runtime_suspend(dev);
if (ret)
return ret;
ret = pm_clk_suspend(dev);
if (ret) {
pm_generic_runtime_resume(dev);
return ret;
}
return 0;
}
static int keystone_pm_runtime_resume(struct device *dev)
{
dev_dbg(dev, "%s\n", __func__);
pm_clk_resume(dev);
return pm_generic_runtime_resume(dev);
}
#endif
static struct dev_pm_domain keystone_pm_domain = {
.ops = {
SET_RUNTIME_PM_OPS(keystone_pm_runtime_suspend,
keystone_pm_runtime_resume, NULL)
USE_PM_CLK_RUNTIME_OPS
USE_PLATFORM_PM_SLEEP_OPS
},
};
......
@@ -21,48 +21,15 @@
#include "soc.h"
#ifdef CONFIG_PM
static int omap1_pm_runtime_suspend(struct device *dev)
{
int ret;
dev_dbg(dev, "%s\n", __func__);
ret = pm_generic_runtime_suspend(dev);
if (ret)
return ret;
ret = pm_clk_suspend(dev);
if (ret) {
pm_generic_runtime_resume(dev);
return ret;
}
return 0;
}
static int omap1_pm_runtime_resume(struct device *dev)
{
dev_dbg(dev, "%s\n", __func__);
pm_clk_resume(dev);
return pm_generic_runtime_resume(dev);
}
static struct dev_pm_domain default_pm_domain = {
.ops = {
.runtime_suspend = omap1_pm_runtime_suspend,
.runtime_resume = omap1_pm_runtime_resume,
USE_PM_CLK_RUNTIME_OPS
USE_PLATFORM_PM_SLEEP_OPS
},
};
#define OMAP1_PM_DOMAIN (&default_pm_domain)
#else
#define OMAP1_PM_DOMAIN NULL
#endif /* CONFIG_PM */
static struct pm_clk_notifier_block platform_bus_notifier = {
.pm_domain = OMAP1_PM_DOMAIN,
.pm_domain = &default_pm_domain,
.con_ids = { "ick", "fck", NULL, },
};
......
@@ -688,11 +688,8 @@ struct dev_pm_domain omap_device_pm_domain = {
SET_RUNTIME_PM_OPS(_od_runtime_suspend, _od_runtime_resume,
NULL)
USE_PLATFORM_PM_SLEEP_OPS
.suspend_noirq = _od_suspend_noirq,
.resume_noirq = _od_resume_noirq,
.freeze_noirq = _od_suspend_noirq,
.thaw_noirq = _od_resume_noirq,
.restore_noirq = _od_resume_noirq,
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(_od_suspend_noirq,
_od_resume_noirq)
}
};
......
config ARM64
def_bool y
select ACPI_CCA_REQUIRED if ACPI
select ACPI_GENERIC_GSI if ACPI
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
......
@@ -18,6 +18,7 @@
#ifdef __KERNEL__
#include <linux/acpi.h>
#include <linux/types.h>
#include <linux/vmalloc.h>
@@ -28,13 +29,23 @@
#define DMA_ERROR_CODE (~(dma_addr_t)0)
extern struct dma_map_ops *dma_ops;
extern struct dma_map_ops dummy_dma_ops;
static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
{
if (unlikely(!dev) || !dev->archdata.dma_ops)
return dma_ops;
else
return dev->archdata.dma_ops;
if (unlikely(!dev))
return dma_ops;
else if (dev->archdata.dma_ops)
return dev->archdata.dma_ops;
else if (acpi_disabled)
return dma_ops;
/*
* When ACPI is enabled, if arch_set_dma_ops is not called,
* we will disable device DMA capability by setting it
* to dummy_dma_ops.
*/
return &dummy_dma_ops;
}
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
@@ -48,6 +59,9 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
struct iommu_ops *iommu, bool coherent)
{
if (!acpi_disabled && !dev->archdata.dma_ops)
dev->archdata.dma_ops = dma_ops;
dev->archdata.dma_coherent = coherent;
}
#define arch_setup_dma_ops arch_setup_dma_ops
......
@@ -414,6 +414,98 @@ static int __init atomic_pool_init(void)
return -ENOMEM;
}
/********************************************
* The following APIs are for dummy DMA ops *
********************************************/
static void *__dummy_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
struct dma_attrs *attrs)
{
return NULL;
}
static void __dummy_free(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle,
struct dma_attrs *attrs)
{
}
static int __dummy_mmap(struct device *dev,
struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
struct dma_attrs *attrs)
{
return -ENXIO;
}
static dma_addr_t __dummy_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
struct dma_attrs *attrs)
{
return DMA_ERROR_CODE;
}
static void __dummy_unmap_page(struct device *dev, dma_addr_t dev_addr,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
}
static int __dummy_map_sg(struct device *dev, struct scatterlist *sgl,
int nelems, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
return 0;
}
static void __dummy_unmap_sg(struct device *dev,
struct scatterlist *sgl, int nelems,
enum dma_data_direction dir,
struct dma_attrs *attrs)
{
}
static void __dummy_sync_single(struct device *dev,
dma_addr_t dev_addr, size_t size,
enum dma_data_direction dir)
{
}
static void __dummy_sync_sg(struct device *dev,
struct scatterlist *sgl, int nelems,
enum dma_data_direction dir)
{
}
static int __dummy_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
{
return 1;
}
static int __dummy_dma_supported(struct device *hwdev, u64 mask)
{
return 0;
}
struct dma_map_ops dummy_dma_ops = {
.alloc = __dummy_alloc,
.free = __dummy_free,
.mmap = __dummy_mmap,
.map_page = __dummy_map_page,
.unmap_page = __dummy_unmap_page,
.map_sg = __dummy_map_sg,
.unmap_sg = __dummy_unmap_sg,
.sync_single_for_cpu = __dummy_sync_single,
.sync_single_for_device = __dummy_sync_single,
.sync_sg_for_cpu = __dummy_sync_sg,
.sync_sg_for_device = __dummy_sync_sg,
.mapping_error = __dummy_mapping_error,
.dma_supported = __dummy_dma_supported,
};
EXPORT_SYMBOL(dummy_dma_ops);
static int __init arm64_dma_init(void)
{
int ret;
......
@@ -12,11 +12,13 @@ ENTRY(wakeup_pmode_return)
wakeup_pmode_return:
movw $__KERNEL_DS, %ax
movw %ax, %ss
movw %ax, %ds
movw %ax, %es
movw %ax, %fs
movw %ax, %gs
movw $__USER_DS, %ax
movw %ax, %ds
movw %ax, %es
# reload the gdt, as we need the full 32 bit address
lidt saved_idt
lldt saved_ldt
......
@@ -54,6 +54,9 @@ config ACPI_GENERIC_GSI
config ACPI_SYSTEM_POWER_STATES_SUPPORT
bool
config ACPI_CCA_REQUIRED
bool
config ACPI_SLEEP
bool
depends on SUSPEND || HIBERNATION
@@ -62,7 +65,7 @@ config ACPI_SLEEP
config ACPI_PROCFS_POWER
bool "Deprecated power /proc/acpi directories"
depends on PROC_FS
depends on X86 && PROC_FS
help
For backwards compatibility, this option allows
deprecated power /proc/acpi/ directories to exist, even when
......
@@ -52,9 +52,6 @@ acpi-$(CONFIG_X86) += acpi_cmos_rtc.o
acpi-$(CONFIG_DEBUG_FS) += debugfs.o
acpi-$(CONFIG_ACPI_NUMA) += numa.o
acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o
ifdef CONFIG_ACPI_VIDEO
acpi-y += video_detect.o
endif
acpi-y += acpi_lpat.o
acpi-$(CONFIG_ACPI_GENERIC_GSI) += gsi.o
@@ -95,3 +92,5 @@ obj-$(CONFIG_ACPI_EXTLOG) += acpi_extlog.o
obj-$(CONFIG_PMIC_OPREGION) += pmic/intel_pmic.o
obj-$(CONFIG_CRC_PMIC_OPREGION) += pmic/intel_pmic_crc.o
obj-$(CONFIG_XPOWER_PMIC_OPREGION) += pmic/intel_pmic_xpower.o
video-objs += acpi_video.o video_detect.o
@@ -308,7 +308,7 @@ static int thinkpad_e530_quirk(const struct dmi_system_id *d)
return 0;
}
static struct dmi_system_id ac_dmi_table[] = {
static const struct dmi_system_id ac_dmi_table[] = {
{
.callback = thinkpad_e530_quirk,
.ident = "thinkpad e530",
......
@@ -129,50 +129,50 @@ static void byt_i2c_setup(struct lpss_private_data *pdata)
writel(0, pdata->mmio_base + LPSS_I2C_ENABLE);
}
static struct lpss_device_desc lpt_dev_desc = {
static const struct lpss_device_desc lpt_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
.prv_offset = 0x800,
};
static struct lpss_device_desc lpt_i2c_dev_desc = {
static const struct lpss_device_desc lpt_i2c_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR,
.prv_offset = 0x800,
};
static struct lpss_device_desc lpt_uart_dev_desc = {
static const struct lpss_device_desc lpt_uart_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
.clk_con_id = "baudclk",
.prv_offset = 0x800,
.setup = lpss_uart_setup,
};
static struct lpss_device_desc lpt_sdio_dev_desc = {
static const struct lpss_device_desc lpt_sdio_dev_desc = {
.flags = LPSS_LTR,
.prv_offset = 0x1000,
.prv_size_override = 0x1018,
};
static struct lpss_device_desc byt_pwm_dev_desc = {
static const struct lpss_device_desc byt_pwm_dev_desc = {
.flags = LPSS_SAVE_CTX,
};
static struct lpss_device_desc byt_uart_dev_desc = {
static const struct lpss_device_desc byt_uart_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
.clk_con_id = "baudclk",
.prv_offset = 0x800,
.setup = lpss_uart_setup,
};
static struct lpss_device_desc byt_spi_dev_desc = {
static const struct lpss_device_desc byt_spi_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
.prv_offset = 0x400,
};
static struct lpss_device_desc byt_sdio_dev_desc = {
static const struct lpss_device_desc byt_sdio_dev_desc = {
.flags = LPSS_CLK,
};
static struct lpss_device_desc byt_i2c_dev_desc = {
static const struct lpss_device_desc byt_i2c_dev_desc = {
.flags = LPSS_CLK | LPSS_SAVE_CTX,
.prv_offset = 0x800,
.setup = byt_i2c_setup,
@@ -323,14 +323,14 @@ static int register_device_clock(struct acpi_device *adev,
static int acpi_lpss_create_device(struct acpi_device *adev,
const struct acpi_device_id *id)
{
struct lpss_device_desc *dev_desc;
const struct lpss_device_desc *dev_desc;
struct lpss_private_data *pdata;
struct resource_entry *rentry;
struct list_head resource_list;
struct platform_device *pdev;
int ret;
dev_desc = (struct lpss_device_desc *)id->driver_data;
dev_desc = (const struct lpss_device_desc *)id->driver_data;
if (!dev_desc) {
pdev = acpi_create_platform_device(adev);
return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
......
@@ -103,7 +103,7 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
pdevinfo.res = resources;
pdevinfo.num_res = count;
pdevinfo.fwnode = acpi_fwnode_handle(adev);
pdevinfo.dma_mask = DMA_BIT_MASK(32);
pdevinfo.dma_mask = acpi_check_dma(adev, NULL) ? DMA_BIT_MASK(32) : 0;
pdev = platform_device_register_full(&pdevinfo);
if (IS_ERR(pdev))
dev_err(&adev->dev, "platform device creation failed: %ld\n",
......
...@@ -170,7 +170,7 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr) ...@@ -170,7 +170,7 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
acpi_status status; acpi_status status;
int ret; int ret;
if (pr->phys_id == PHYS_CPUID_INVALID) if (invalid_phys_cpuid(pr->phys_id))
return -ENODEV; return -ENODEV;
status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta); status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
...@@ -215,8 +215,7 @@ static int acpi_processor_get_info(struct acpi_device *device) ...@@ -215,8 +215,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
union acpi_object object = { 0 }; union acpi_object object = { 0 };
struct acpi_buffer buffer = { sizeof(union acpi_object), &object }; struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
struct acpi_processor *pr = acpi_driver_data(device); struct acpi_processor *pr = acpi_driver_data(device);
phys_cpuid_t phys_id; int device_declaration = 0;
int cpu_index, device_declaration = 0;
acpi_status status = AE_OK; acpi_status status = AE_OK;
static int cpu0_initialized; static int cpu0_initialized;
unsigned long long value; unsigned long long value;
...@@ -263,29 +262,28 @@ static int acpi_processor_get_info(struct acpi_device *device) ...@@ -263,29 +262,28 @@ static int acpi_processor_get_info(struct acpi_device *device)
pr->acpi_id = value; pr->acpi_id = value;
} }
phys_id = acpi_get_phys_id(pr->handle, device_declaration, pr->acpi_id); pr->phys_id = acpi_get_phys_id(pr->handle, device_declaration,
if (phys_id == PHYS_CPUID_INVALID) pr->acpi_id);
if (invalid_phys_cpuid(pr->phys_id))
acpi_handle_debug(pr->handle, "failed to get CPU physical ID.\n"); acpi_handle_debug(pr->handle, "failed to get CPU physical ID.\n");
pr->phys_id = phys_id;
cpu_index = acpi_map_cpuid(pr->phys_id, pr->acpi_id); pr->id = acpi_map_cpuid(pr->phys_id, pr->acpi_id);
if (!cpu0_initialized && !acpi_has_cpu_in_madt()) { if (!cpu0_initialized && !acpi_has_cpu_in_madt()) {
cpu0_initialized = 1; cpu0_initialized = 1;
/* /*
* Handle UP system running SMP kernel, with no CPU * Handle UP system running SMP kernel, with no CPU
* entry in MADT * entry in MADT
*/ */
if ((cpu_index == -1) && (num_online_cpus() == 1)) if (invalid_logical_cpuid(pr->id) && (num_online_cpus() == 1))
cpu_index = 0; pr->id = 0;
} }
pr->id = cpu_index;
/* /*
* Extra Processor objects may be enumerated on MP systems with * Extra Processor objects may be enumerated on MP systems with
* less than the max # of CPUs. They should be ignored _iff * less than the max # of CPUs. They should be ignored _iff
* they are physically not present. * they are physically not present.
*/ */
if (pr->id == -1) { if (invalid_logical_cpuid(pr->id)) {
int ret = acpi_processor_hotadd_init(pr); int ret = acpi_processor_hotadd_init(pr);
if (ret) if (ret)
return ret; return ret;
......
@@ -231,7 +231,9 @@ void acpi_db_open_debug_file(char *name);
acpi_status acpi_db_load_acpi_table(char *filename);
acpi_status
acpi_db_get_table_from_file(char *filename, struct acpi_table_header **table);
acpi_db_get_table_from_file(char *filename,
struct acpi_table_header **table,
u8 must_be_aml_table);
/*
* dbhistry - debugger HISTORY command
*/
@@ -352,11 +352,21 @@ struct acpi_package_info3 {
u16 reserved;
};
struct acpi_package_info4 {
u8 type;
u8 object_type1;
u8 count1;
u8 sub_object_types;
u8 pkg_count;
u16 reserved;
};
union acpi_predefined_info {
struct acpi_name_info info;
struct acpi_package_info ret_info;
struct acpi_package_info2 ret_info2;
struct acpi_package_info3 ret_info3;
struct acpi_package_info4 ret_info4;
};
/* Reset to default packing */
@@ -1165,4 +1175,9 @@ struct ah_uuid {
char *string;
};
struct ah_table {
char *signature;
char *description;
};
#endif /* __ACLOCAL_H__ */
@@ -70,6 +70,9 @@
*
*****************************************************************************/
extern const u8 acpi_gbl_short_op_index[];
extern const u8 acpi_gbl_long_op_index[];
/*
* psxface - Parser external interfaces
*/
......
...@@ -105,6 +105,11 @@ ...@@ -105,6 +105,11 @@
* count = 0 (optional) * count = 0 (optional)
* (Used for _DLM) * (Used for _DLM)
* *
* ACPI_PTYPE2_VAR_VAR: Variable number of subpackages, each of either a
* constant or variable length. The subpackages are preceded by a
* constant number of objects.
* (Used for _LPI, _RDI)
*
* ACPI_PTYPE2_UUID_PAIR: Each subpackage is preceded by a UUID Buffer. The UUID * ACPI_PTYPE2_UUID_PAIR: Each subpackage is preceded by a UUID Buffer. The UUID
* defines the format of the package. Zero-length parent package is * defines the format of the package. Zero-length parent package is
* allowed. * allowed.
...@@ -123,7 +128,8 @@ enum acpi_return_package_types { ...@@ -123,7 +128,8 @@ enum acpi_return_package_types {
ACPI_PTYPE2_MIN = 8, ACPI_PTYPE2_MIN = 8,
ACPI_PTYPE2_REV_FIXED = 9, ACPI_PTYPE2_REV_FIXED = 9,
ACPI_PTYPE2_FIX_VAR = 10, ACPI_PTYPE2_FIX_VAR = 10,
ACPI_PTYPE2_UUID_PAIR = 11 ACPI_PTYPE2_VAR_VAR = 11,
ACPI_PTYPE2_UUID_PAIR = 12
}; };
/* Support macros for users of the predefined info table */ /* Support macros for users of the predefined info table */
...@@ -172,7 +178,7 @@ enum acpi_return_package_types { ...@@ -172,7 +178,7 @@ enum acpi_return_package_types {
* These are the names that can actually be evaluated via acpi_evaluate_object. * These are the names that can actually be evaluated via acpi_evaluate_object.
* Not present in this table are the following: * Not present in this table are the following:
* *
* 1) Predefined/Reserved names that are never evaluated via * 1) Predefined/Reserved names that are not usually evaluated via
* acpi_evaluate_object: * acpi_evaluate_object:
* _Lxx and _Exx GPE methods * _Lxx and _Exx GPE methods
* _Qxx EC methods * _Qxx EC methods
...@@ -361,6 +367,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -361,6 +367,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Fixed-length (4 Int) */ METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Fixed-length (4 Int) */
PACKAGE_INFO(ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 4, 0, 0, 0), PACKAGE_INFO(ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 4, 0, 0, 0),
{{"_BTH", METHOD_1ARGS(ACPI_TYPE_INTEGER), /* ACPI 6.0 */
METHOD_NO_RETURN_VALUE}},
{{"_BTM", METHOD_1ARGS(ACPI_TYPE_INTEGER), {{"_BTM", METHOD_1ARGS(ACPI_TYPE_INTEGER),
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
...@@ -390,6 +399,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -390,6 +399,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER, 0, PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER, 0,
0, 0, 0), 0, 0, 0),
{{"_CR3", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
{{"_CRS", METHOD_0ARGS, {{"_CRS", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_BUFFER)}}, METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
...@@ -445,7 +457,7 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -445,7 +457,7 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_DOS", METHOD_1ARGS(ACPI_TYPE_INTEGER), {{"_DOS", METHOD_1ARGS(ACPI_TYPE_INTEGER),
METHOD_NO_RETURN_VALUE}}, METHOD_NO_RETURN_VALUE}},
{{"_DSD", METHOD_0ARGS, {{"_DSD", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Pkgs) each: 1 Buf, 1 Pkg */ METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Pkgs) each: 1 Buf, 1 Pkg */
PACKAGE_INFO(ACPI_PTYPE2_UUID_PAIR, ACPI_RTYPE_BUFFER, 1, PACKAGE_INFO(ACPI_PTYPE2_UUID_PAIR, ACPI_RTYPE_BUFFER, 1,
ACPI_RTYPE_PACKAGE, 1, 0), ACPI_RTYPE_PACKAGE, 1, 0),
...@@ -604,6 +616,12 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -604,6 +616,12 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (1 Int(rev), n Pkg (2 Int) */ METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (1 Int(rev), n Pkg (2 Int) */
PACKAGE_INFO(ACPI_PTYPE2_REV_FIXED, ACPI_RTYPE_INTEGER, 2, 0, 0, 0), PACKAGE_INFO(ACPI_PTYPE2_REV_FIXED, ACPI_RTYPE_INTEGER, 2, 0, 0, 0),
{{"_LPI", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (3 Int, n Pkg (10 Int/Buf) */
PACKAGE_INFO(ACPI_PTYPE2_VAR_VAR, ACPI_RTYPE_INTEGER, 3,
ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER | ACPI_RTYPE_STRING,
10, 0),
{{"_MAT", METHOD_0ARGS, {{"_MAT", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_BUFFER)}}, METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
...@@ -624,6 +642,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -624,6 +642,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
ACPI_TYPE_INTEGER), ACPI_TYPE_INTEGER),
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
{{"_MTL", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
{{"_NTT", METHOD_0ARGS, {{"_NTT", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
...@@ -716,6 +737,10 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -716,6 +737,10 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Refs) */ METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Refs) */
PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_REFERENCE, 0, 0, 0, 0), PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_REFERENCE, 0, 0, 0, 0),
{{"_PRR", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Fixed-length (1 Ref) */
PACKAGE_INFO(ACPI_PTYPE1_FIXED, ACPI_RTYPE_REFERENCE, 1, 0, 0, 0),
{{"_PRS", METHOD_0ARGS, {{"_PRS", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_BUFFER)}}, METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
...@@ -796,6 +821,11 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -796,6 +821,11 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_PXM", METHOD_0ARGS, {{"_PXM", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
{{"_RDI", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (1 Int, n Pkg (m Ref)) */
PACKAGE_INFO(ACPI_PTYPE2_VAR_VAR, ACPI_RTYPE_INTEGER, 1,
ACPI_RTYPE_REFERENCE, 0, 0),
{{"_REG", METHOD_2ARGS(ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER), {{"_REG", METHOD_2ARGS(ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER),
METHOD_NO_RETURN_VALUE}}, METHOD_NO_RETURN_VALUE}},
...@@ -808,6 +838,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -808,6 +838,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_ROM", METHOD_2ARGS(ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER), {{"_ROM", METHOD_2ARGS(ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER),
METHOD_RETURNS(ACPI_RTYPE_BUFFER)}}, METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
{{"_RST", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_NO_RETURN_VALUE}},
{{"_RTV", METHOD_0ARGS, {{"_RTV", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
...@@ -935,6 +968,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -935,6 +968,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_TDL", METHOD_0ARGS, {{"_TDL", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
{{"_TFP", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
{{"_TIP", METHOD_1ARGS(ACPI_TYPE_INTEGER), {{"_TIP", METHOD_1ARGS(ACPI_TYPE_INTEGER),
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
...@@ -959,6 +995,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = { ...@@ -959,6 +995,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Pkgs) each 5 Int with count */ METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Pkgs) each 5 Int with count */
PACKAGE_INFO(ACPI_PTYPE2_COUNT, ACPI_RTYPE_INTEGER, 5, 0, 0, 0), PACKAGE_INFO(ACPI_PTYPE2_COUNT, ACPI_RTYPE_INTEGER, 5, 0, 0, 0),
{{"_TSN", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_REFERENCE)}},
{{"_TSP", METHOD_0ARGS, {{"_TSP", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
......
@@ -251,7 +251,7 @@ extern const u8 _acpi_ctype[];
#define _ACPI_DI 0x04 /* '0'-'9' */
#define _ACPI_LO 0x02 /* 'a'-'z' */
#define _ACPI_PU 0x10 /* punctuation */
#define _ACPI_SP 0x08 /* space */
#define _ACPI_SP 0x08 /* space, tab, CR, LF, VT, FF */
#define _ACPI_UP 0x01 /* 'A'-'Z' */
#define _ACPI_XD 0x80 /* '0'-'9', 'A'-'F', 'a'-'f' */
......
@@ -116,6 +116,7 @@ acpi_ds_auto_serialize_method(struct acpi_namespace_node *node,
walk_state =
acpi_ds_create_walk_state(node->owner_id, NULL, NULL, NULL);
if (!walk_state) {
acpi_ps_free_op(op);
return_ACPI_STATUS(AE_NO_MEMORY);
}
@@ -125,6 +126,7 @@ acpi_ds_auto_serialize_method(struct acpi_namespace_node *node,
obj_desc->method.aml_length, NULL, 0);
if (ACPI_FAILURE(status)) {
acpi_ds_delete_walk_state(walk_state);
acpi_ps_free_op(op);
return_ACPI_STATUS(status);
}
@@ -133,9 +135,6 @@ acpi_ds_auto_serialize_method(struct acpi_namespace_node *node,
/* Parse the method, scan for creation of named objects */
status = acpi_ps_parse_aml(walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
acpi_ps_delete_parse_tree(op);
return_ACPI_STATUS(status);
......
...@@ -123,7 +123,7 @@ acpi_hw_derive_pci_id(struct acpi_pci_id *pci_id, ...@@ -123,7 +123,7 @@ acpi_hw_derive_pci_id(struct acpi_pci_id *pci_id,
acpi_handle root_pci_device, acpi_handle pci_region) acpi_handle root_pci_device, acpi_handle pci_region)
{ {
acpi_status status; acpi_status status;
struct acpi_pci_device *list_head = NULL; struct acpi_pci_device *list_head;
ACPI_FUNCTION_TRACE(hw_derive_pci_id); ACPI_FUNCTION_TRACE(hw_derive_pci_id);
...@@ -177,13 +177,13 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device, ...@@ -177,13 +177,13 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device,
acpi_handle parent_device; acpi_handle parent_device;
acpi_status status; acpi_status status;
struct acpi_pci_device *list_element; struct acpi_pci_device *list_element;
struct acpi_pci_device *list_head = NULL;
/* /*
* Ascend namespace branch until the root_pci_device is reached, building * Ascend namespace branch until the root_pci_device is reached, building
* a list of device nodes. Loop will exit when either the PCI device is * a list of device nodes. Loop will exit when either the PCI device is
* found, or the root of the namespace is reached. * found, or the root of the namespace is reached.
*/ */
*return_list_head = NULL;
current_device = pci_region; current_device = pci_region;
while (1) { while (1) {
status = acpi_get_parent(current_device, &parent_device); status = acpi_get_parent(current_device, &parent_device);
...@@ -198,7 +198,6 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device, ...@@ -198,7 +198,6 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device,
/* Finished when we reach the PCI root device (PNP0A03 or PNP0A08) */ /* Finished when we reach the PCI root device (PNP0A03 or PNP0A08) */
if (parent_device == root_pci_device) { if (parent_device == root_pci_device) {
*return_list_head = list_head;
return (AE_OK); return (AE_OK);
} }
...@@ -213,9 +212,9 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device, ...@@ -213,9 +212,9 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device,
/* Put new element at the head of the list */ /* Put new element at the head of the list */
list_element->next = list_head; list_element->next = *return_list_head;
list_element->device = parent_device; list_element->device = parent_device;
list_head = list_element; *return_list_head = list_element;
current_device = parent_device; current_device = parent_device;
} }
......
@@ -316,6 +316,13 @@ acpi_ns_check_package(struct acpi_evaluate_info *info,
acpi_ns_check_package_list(info, package, elements, count);
break;
case ACPI_PTYPE2_VAR_VAR:
/*
* Returns a variable list of packages, each with a variable list
* of objects.
*/
break;
case ACPI_PTYPE2_UUID_PAIR:
/* The package must contain pairs of (UUID + type) */
@@ -487,6 +494,12 @@ acpi_ns_check_package_list(struct acpi_evaluate_info *info,
}
break;
case ACPI_PTYPE2_VAR_VAR:
/*
* Each subpackage has a fixed or variable number of elements
*/
break;
case ACPI_PTYPE2_FIXED:
/* Each subpackage has a fixed length */
......
@@ -497,10 +497,10 @@ acpi_ns_remove_null_elements(struct acpi_evaluate_info *info,
case ACPI_PTYPE2_MIN:
case ACPI_PTYPE2_REV_FIXED:
case ACPI_PTYPE2_FIX_VAR:
break;
default:
case ACPI_PTYPE2_VAR_VAR:
case ACPI_PTYPE1_FIXED:
case ACPI_PTYPE1_OPTION:
return;
......
@@ -50,9 +50,6 @@
#define _COMPONENT ACPI_PARSER
ACPI_MODULE_NAME("psopinfo")
extern const u8 acpi_gbl_short_op_index[];
extern const u8 acpi_gbl_long_op_index[];
static const u8 acpi_gbl_argument_count[] =
{ 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 6 };
......
@@ -198,11 +198,8 @@ acpi_ut_read_table(FILE * fp,
table_header.length, file_size);
#ifdef ACPI_ASL_COMPILER
status = fl_check_for_ascii(fp, NULL, FALSE);
if (ACPI_SUCCESS(status)) {
acpi_os_printf
("File appears to be ASCII only, must be binary\n");
}
acpi_os_printf("File is corrupt or is ASCII text -- "
"it must be a binary file\n");
#endif
return (AE_BAD_HEADER);
}
@@ -315,7 +312,7 @@ acpi_ut_read_table_from_file(char *filename, struct acpi_table_header ** table)
/* Get the entire file */
fprintf(stderr,
"Loading Acpi table from file %10s - Length %.8u (%06X)\n",
"Reading ACPI table from file %10s - Length %.8u (0x%06X)\n",
filename, file_size, file_size);
status = acpi_ut_read_table(file, table, &table_length);
......
@@ -75,9 +75,9 @@ char acpi_ut_hex_to_ascii_char(u64 integer, u32 position)
/*******************************************************************************
*
* FUNCTION: acpi_ut_hex_char_to_value
* FUNCTION: acpi_ut_ascii_char_to_hex
*
* PARAMETERS: ascii_char - Hex character in Ascii
* PARAMETERS: hex_char - Hex character in Ascii
*
* RETURN: The binary value of the ascii/hex character
*
......
...@@ -107,9 +107,16 @@ acpi_exception(const char *module_name, ...@@ -107,9 +107,16 @@ acpi_exception(const char *module_name,
va_list arg_list; va_list arg_list;
ACPI_MSG_REDIRECT_BEGIN; ACPI_MSG_REDIRECT_BEGIN;
acpi_os_printf(ACPI_MSG_EXCEPTION "%s, ",
acpi_format_exception(status));
/* For AE_OK, just print the message */
if (ACPI_SUCCESS(status)) {
acpi_os_printf(ACPI_MSG_EXCEPTION);
} else {
acpi_os_printf(ACPI_MSG_EXCEPTION "%s, ",
acpi_format_exception(status));
}
va_start(arg_list, format); va_start(arg_list, format);
acpi_os_vprintf(format, arg_list); acpi_os_vprintf(format, arg_list);
ACPI_MSG_SUFFIX; ACPI_MSG_SUFFIX;
......
...@@ -729,10 +729,10 @@ static struct llist_head ghes_estatus_llist; ...@@ -729,10 +729,10 @@ static struct llist_head ghes_estatus_llist;
static struct irq_work ghes_proc_irq_work; static struct irq_work ghes_proc_irq_work;
/* /*
-* NMI may be triggered on any CPU, so ghes_nmi_lock is used for
-* mutual exclusion.
+* NMI may be triggered on any CPU, so ghes_in_nmi is used for
+* having only one concurrent reader.
*/
-static DEFINE_RAW_SPINLOCK(ghes_nmi_lock);
+static atomic_t ghes_in_nmi = ATOMIC_INIT(0);
static LIST_HEAD(ghes_nmi); static LIST_HEAD(ghes_nmi);
...@@ -797,73 +797,75 @@ static void ghes_print_queued_estatus(void) ...@@ -797,73 +797,75 @@ static void ghes_print_queued_estatus(void)
} }
} }
/* Save estatus for further processing in IRQ context */
static void __process_error(struct ghes *ghes)
{
#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
u32 len, node_len;
struct ghes_estatus_node *estatus_node;
struct acpi_hest_generic_status *estatus;
if (ghes_estatus_cached(ghes->estatus))
return;
len = cper_estatus_len(ghes->estatus);
node_len = GHES_ESTATUS_NODE_LEN(len);
estatus_node = (void *)gen_pool_alloc(ghes_estatus_pool, node_len);
if (!estatus_node)
return;
estatus_node->ghes = ghes;
estatus_node->generic = ghes->generic;
estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
memcpy(estatus, ghes->estatus, len);
llist_add(&estatus_node->llnode, &ghes_estatus_llist);
#endif
}
static void __ghes_panic(struct ghes *ghes)
{
oops_begin();
ghes_print_queued_estatus();
__ghes_print_estatus(KERN_EMERG, ghes->generic, ghes->estatus);
/* reboot to log the error! */
if (panic_timeout == 0)
panic_timeout = ghes_panic_timeout;
panic("Fatal hardware error!");
}
static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs) static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
{ {
-struct ghes *ghes, *ghes_global = NULL;
-int sev, sev_global = -1;
-int ret = NMI_DONE;
-raw_spin_lock(&ghes_nmi_lock);
+struct ghes *ghes;
+int sev, ret = NMI_DONE;
+if (!atomic_add_unless(&ghes_in_nmi, 1, 1))
+return ret;
list_for_each_entry_rcu(ghes, &ghes_nmi, list) { list_for_each_entry_rcu(ghes, &ghes_nmi, list) {
if (ghes_read_estatus(ghes, 1)) { if (ghes_read_estatus(ghes, 1)) {
ghes_clear_estatus(ghes); ghes_clear_estatus(ghes);
continue; continue;
} }
sev = ghes_severity(ghes->estatus->error_severity);
if (sev > sev_global) {
sev_global = sev;
ghes_global = ghes;
}
ret = NMI_HANDLED;
}
if (ret == NMI_DONE)
goto out;
-if (sev_global >= GHES_SEV_PANIC) {
-oops_begin();
-ghes_print_queued_estatus();
+sev = ghes_severity(ghes->estatus->error_severity);
+if (sev >= GHES_SEV_PANIC)
+__ghes_panic(ghes);
__ghes_print_estatus(KERN_EMERG, ghes_global->generic,
ghes_global->estatus);
/* reboot to log the error! */
if (panic_timeout == 0)
panic_timeout = ghes_panic_timeout;
panic("Fatal hardware error!");
}
list_for_each_entry_rcu(ghes, &ghes_nmi, list) {
#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
u32 len, node_len;
struct ghes_estatus_node *estatus_node;
struct acpi_hest_generic_status *estatus;
#endif
if (!(ghes->flags & GHES_TO_CLEAR)) if (!(ghes->flags & GHES_TO_CLEAR))
continue; continue;
-#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
-if (ghes_estatus_cached(ghes->estatus))
+__process_error(ghes);
goto next;
/* Save estatus for further processing in IRQ context */
len = cper_estatus_len(ghes->estatus);
node_len = GHES_ESTATUS_NODE_LEN(len);
estatus_node = (void *)gen_pool_alloc(ghes_estatus_pool,
node_len);
if (estatus_node) {
estatus_node->ghes = ghes;
estatus_node->generic = ghes->generic;
estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
memcpy(estatus, ghes->estatus, len);
llist_add(&estatus_node->llnode, &ghes_estatus_llist);
}
next:
#endif
ghes_clear_estatus(ghes); ghes_clear_estatus(ghes);
ret = NMI_HANDLED;
} }
#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG #ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
irq_work_queue(&ghes_proc_irq_work); irq_work_queue(&ghes_proc_irq_work);
#endif #endif
+atomic_dec(&ghes_in_nmi);
-out:
-raw_spin_unlock(&ghes_nmi_lock);
return ret;
}
......
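The ghes.c hunk above replaces ghes_nmi_lock with the ghes_in_nmi atomic so that a CPU which loses the race simply leaves the NMI handler instead of spinning on a lock. A minimal stand-alone sketch of that guard, written with C11 atomics rather than the kernel's atomic_t API (all names below are illustrative, not kernel symbols):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int in_handler;	/* plays the role of ghes_in_nmi */

	static bool try_enter(void)
	{
		int expected = 0;
		/* like atomic_add_unless(&v, 1, 1): only the 0 -> 1 transition wins */
		return atomic_compare_exchange_strong(&in_handler, &expected, 1);
	}

	static void nmi_like_handler(void)
	{
		if (!try_enter())
			return;			/* another CPU is already processing */

		puts("processing error status");	/* stand-in for the real work */

		atomic_store(&in_handler, 0);	/* counterpart of atomic_dec() */
	}

	int main(void)
	{
		nmi_like_handler();
		return 0;
	}

The point of the design is that losing CPUs do no work at all, which is what makes the handler safe to run in NMI context without a lock.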
...@@ -70,6 +70,7 @@ MODULE_AUTHOR("Alexey Starikovskiy <astarikovskiy@suse.de>"); ...@@ -70,6 +70,7 @@ MODULE_AUTHOR("Alexey Starikovskiy <astarikovskiy@suse.de>");
MODULE_DESCRIPTION("ACPI Battery Driver"); MODULE_DESCRIPTION("ACPI Battery Driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
static async_cookie_t async_cookie;
static int battery_bix_broken_package; static int battery_bix_broken_package;
static int battery_notification_delay_ms; static int battery_notification_delay_ms;
static unsigned int cache_time = 1000; static unsigned int cache_time = 1000;
...@@ -338,14 +339,6 @@ static enum power_supply_property energy_battery_props[] = { ...@@ -338,14 +339,6 @@ static enum power_supply_property energy_battery_props[] = {
POWER_SUPPLY_PROP_SERIAL_NUMBER, POWER_SUPPLY_PROP_SERIAL_NUMBER,
}; };
#ifdef CONFIG_ACPI_PROCFS_POWER
inline char *acpi_battery_units(struct acpi_battery *battery)
{
return (battery->power_unit == ACPI_BATTERY_POWER_UNIT_MA) ?
"mA" : "mW";
}
#endif
/* -------------------------------------------------------------------------- /* --------------------------------------------------------------------------
Battery Management Battery Management
-------------------------------------------------------------------------- */ -------------------------------------------------------------------------- */
...@@ -354,14 +347,14 @@ struct acpi_offsets { ...@@ -354,14 +347,14 @@ struct acpi_offsets {
u8 mode; /* int or string? */ u8 mode; /* int or string? */
}; };
-static struct acpi_offsets state_offsets[] = {
+static const struct acpi_offsets state_offsets[] = {
{offsetof(struct acpi_battery, state), 0}, {offsetof(struct acpi_battery, state), 0},
{offsetof(struct acpi_battery, rate_now), 0}, {offsetof(struct acpi_battery, rate_now), 0},
{offsetof(struct acpi_battery, capacity_now), 0}, {offsetof(struct acpi_battery, capacity_now), 0},
{offsetof(struct acpi_battery, voltage_now), 0}, {offsetof(struct acpi_battery, voltage_now), 0},
}; };
-static struct acpi_offsets info_offsets[] = {
+static const struct acpi_offsets info_offsets[] = {
{offsetof(struct acpi_battery, power_unit), 0}, {offsetof(struct acpi_battery, power_unit), 0},
{offsetof(struct acpi_battery, design_capacity), 0}, {offsetof(struct acpi_battery, design_capacity), 0},
{offsetof(struct acpi_battery, full_charge_capacity), 0}, {offsetof(struct acpi_battery, full_charge_capacity), 0},
...@@ -377,7 +370,7 @@ static struct acpi_offsets info_offsets[] = { ...@@ -377,7 +370,7 @@ static struct acpi_offsets info_offsets[] = {
{offsetof(struct acpi_battery, oem_info), 1}, {offsetof(struct acpi_battery, oem_info), 1},
}; };
-static struct acpi_offsets extended_info_offsets[] = {
+static const struct acpi_offsets extended_info_offsets[] = {
{offsetof(struct acpi_battery, revision), 0}, {offsetof(struct acpi_battery, revision), 0},
{offsetof(struct acpi_battery, power_unit), 0}, {offsetof(struct acpi_battery, power_unit), 0},
{offsetof(struct acpi_battery, design_capacity), 0}, {offsetof(struct acpi_battery, design_capacity), 0},
...@@ -402,7 +395,7 @@ static struct acpi_offsets extended_info_offsets[] = { ...@@ -402,7 +395,7 @@ static struct acpi_offsets extended_info_offsets[] = {
static int extract_package(struct acpi_battery *battery, static int extract_package(struct acpi_battery *battery,
union acpi_object *package, union acpi_object *package,
-struct acpi_offsets *offsets, int num)
+const struct acpi_offsets *offsets, int num)
{ {
int i; int i;
union acpi_object *element; union acpi_object *element;
...@@ -792,6 +785,12 @@ static void acpi_battery_refresh(struct acpi_battery *battery) ...@@ -792,6 +785,12 @@ static void acpi_battery_refresh(struct acpi_battery *battery)
#ifdef CONFIG_ACPI_PROCFS_POWER #ifdef CONFIG_ACPI_PROCFS_POWER
static struct proc_dir_entry *acpi_battery_dir; static struct proc_dir_entry *acpi_battery_dir;
static const char *acpi_battery_units(const struct acpi_battery *battery)
{
return (battery->power_unit == ACPI_BATTERY_POWER_UNIT_MA) ?
"mA" : "mW";
}
static int acpi_battery_print_info(struct seq_file *seq, int result) static int acpi_battery_print_info(struct seq_file *seq, int result)
{ {
struct acpi_battery *battery = seq->private; struct acpi_battery *battery = seq->private;
...@@ -1125,19 +1124,21 @@ static int battery_notify(struct notifier_block *nb, ...@@ -1125,19 +1124,21 @@ static int battery_notify(struct notifier_block *nb,
return 0; return 0;
} }
-static int battery_bix_broken_package_quirk(const struct dmi_system_id *d)
+static int __init
battery_bix_broken_package_quirk(const struct dmi_system_id *d)
{ {
battery_bix_broken_package = 1; battery_bix_broken_package = 1;
return 0; return 0;
} }
-static int battery_notification_delay_quirk(const struct dmi_system_id *d)
+static int __init
battery_notification_delay_quirk(const struct dmi_system_id *d)
{ {
battery_notification_delay_ms = 1000; battery_notification_delay_ms = 1000;
return 0; return 0;
} }
-static struct dmi_system_id bat_dmi_table[] = {
+static const struct dmi_system_id bat_dmi_table[] __initconst = {
{ {
.callback = battery_bix_broken_package_quirk, .callback = battery_bix_broken_package_quirk,
.ident = "NEC LZ750/LS", .ident = "NEC LZ750/LS",
...@@ -1292,33 +1293,34 @@ static struct acpi_driver acpi_battery_driver = { ...@@ -1292,33 +1293,34 @@ static struct acpi_driver acpi_battery_driver = {
static void __init acpi_battery_init_async(void *unused, async_cookie_t cookie) static void __init acpi_battery_init_async(void *unused, async_cookie_t cookie)
{ {
-if (acpi_disabled)
-return;
+int result;
dmi_check_system(bat_dmi_table); dmi_check_system(bat_dmi_table);
#ifdef CONFIG_ACPI_PROCFS_POWER #ifdef CONFIG_ACPI_PROCFS_POWER
acpi_battery_dir = acpi_lock_battery_dir(); acpi_battery_dir = acpi_lock_battery_dir();
if (!acpi_battery_dir) if (!acpi_battery_dir)
return; return;
#endif #endif
-if (acpi_bus_register_driver(&acpi_battery_driver) < 0) {
+result = acpi_bus_register_driver(&acpi_battery_driver);
#ifdef CONFIG_ACPI_PROCFS_POWER #ifdef CONFIG_ACPI_PROCFS_POWER
if (result < 0)
acpi_unlock_battery_dir(acpi_battery_dir); acpi_unlock_battery_dir(acpi_battery_dir);
#endif #endif
return;
}
return;
} }
static int __init acpi_battery_init(void) static int __init acpi_battery_init(void)
{ {
-async_schedule(acpi_battery_init_async, NULL);
+if (acpi_disabled)
return -ENODEV;
async_cookie = async_schedule(acpi_battery_init_async, NULL);
return 0; return 0;
} }
static void __exit acpi_battery_exit(void) static void __exit acpi_battery_exit(void)
{ {
async_synchronize_cookie(async_cookie);
acpi_bus_unregister_driver(&acpi_battery_driver); acpi_bus_unregister_driver(&acpi_battery_driver);
#ifdef CONFIG_ACPI_PROCFS_POWER #ifdef CONFIG_ACPI_PROCFS_POWER
acpi_unlock_battery_dir(acpi_battery_dir); acpi_unlock_battery_dir(acpi_battery_dir);
......
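The battery.c changes above schedule the driver registration asynchronously and make the exit path wait on the stored async_cookie before unregistering anything. A rough user-space analogue of that init/exit ordering, using a pthread handle where the kernel code uses the async cookie (the names below are illustrative only):

	#include <pthread.h>
	#include <stdio.h>

	static pthread_t init_thread;		/* stands in for async_cookie */

	static void *init_async(void *unused)
	{
		(void)unused;
		puts("register driver");	/* slow initialization work */
		return NULL;
	}

	static int module_init_sketch(void)
	{
		/* kick off the slow part asynchronously, keep a handle to it */
		return pthread_create(&init_thread, NULL, init_async, NULL);
	}

	static void module_exit_sketch(void)
	{
		/* wait for init to finish first, as async_synchronize_cookie() does */
		pthread_join(init_thread, NULL);
		puts("unregister driver");
	}

	int main(void)
	{
		if (module_init_sketch() == 0)
			module_exit_sketch();
		return 0;
	}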
...@@ -470,6 +470,16 @@ static int __init acpi_bus_init_irq(void) ...@@ -470,6 +470,16 @@ static int __init acpi_bus_init_irq(void)
return 0; return 0;
} }
/**
* acpi_early_init - Initialize ACPICA and populate the ACPI namespace.
*
* The ACPI tables are accessible after this, but the handling of events has not
* been initialized and the global lock is not available yet, so AML should not
* be executed at this point.
*
* Doing this before switching the EFI runtime services to virtual mode allows
* the EfiBootServices memory to be freed slightly earlier on boot.
*/
void __init acpi_early_init(void) void __init acpi_early_init(void)
{ {
acpi_status status; acpi_status status;
...@@ -533,26 +543,42 @@ void __init acpi_early_init(void) ...@@ -533,26 +543,42 @@ void __init acpi_early_init(void)
acpi_gbl_FADT.sci_interrupt = acpi_sci_override_gsi; acpi_gbl_FADT.sci_interrupt = acpi_sci_override_gsi;
} }
#endif #endif
return;
error0:
disable_acpi();
}
/**
* acpi_subsystem_init - Finalize the early initialization of ACPI.
*
* Switch over the platform to the ACPI mode (if possible), initialize the
* handling of ACPI events, install the interrupt and global lock handlers.
*
* Doing this too early is generally unsafe, but at the same time it needs to be
* done before all things that really depend on ACPI. The right spot appears to
* be before finalizing the EFI initialization.
*/
void __init acpi_subsystem_init(void)
{
acpi_status status;
if (acpi_disabled)
return;
status = acpi_enable_subsystem(~ACPI_NO_ACPI_ENABLE); status = acpi_enable_subsystem(~ACPI_NO_ACPI_ENABLE);
if (ACPI_FAILURE(status)) { if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "Unable to enable ACPI\n"); printk(KERN_ERR PREFIX "Unable to enable ACPI\n");
-goto error0;
+disable_acpi();
} else {
/*
* If the system is using ACPI then we can be reasonably
* confident that any regulators are managed by the firmware
* so tell the regulator core it has everything it needs to
* know.
*/
regulator_has_full_constraints();
} }
/*
* If the system is using ACPI then we can be reasonably
* confident that any regulators are managed by the firmware
* so tell the regulator core it has everything it needs to
* know.
*/
regulator_has_full_constraints();
return;
error0:
disable_acpi();
return;
} }
static int __init acpi_bus_init(void) static int __init acpi_bus_init(void)
......
...@@ -98,17 +98,16 @@ int acpi_device_get_power(struct acpi_device *device, int *state) ...@@ -98,17 +98,16 @@ int acpi_device_get_power(struct acpi_device *device, int *state)
/* /*
* The power resources settings may indicate a power state * The power resources settings may indicate a power state
-* shallower than the actual power state of the device.
+* shallower than the actual power state of the device, because
* the same power resources may be referenced by other devices.
* *
-* Moreover, on systems predating ACPI 4.0, if the device
-* doesn't depend on any power resources and _PSC returns 3,
-* that means "power off". We need to maintain compatibility
-* with those systems.
+* For systems predating ACPI 4.0 we assume that D3hot is the
+* deepest state that can be supported.
*/ */
if (psc > result && psc < ACPI_STATE_D3_COLD) if (psc > result && psc < ACPI_STATE_D3_COLD)
result = psc; result = psc;
else if (result == ACPI_STATE_UNKNOWN) else if (result == ACPI_STATE_UNKNOWN)
-result = psc > ACPI_STATE_D2 ? ACPI_STATE_D3_COLD : psc;
+result = psc > ACPI_STATE_D2 ? ACPI_STATE_D3_HOT : psc;
} }
/* /*
...@@ -153,8 +152,8 @@ static int acpi_dev_pm_explicit_set(struct acpi_device *adev, int state) ...@@ -153,8 +152,8 @@ static int acpi_dev_pm_explicit_set(struct acpi_device *adev, int state)
*/ */
int acpi_device_set_power(struct acpi_device *device, int state) int acpi_device_set_power(struct acpi_device *device, int state)
{ {
int target_state = state;
int result = 0; int result = 0;
bool cut_power = false;
if (!device || !device->flags.power_manageable if (!device || !device->flags.power_manageable
|| (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD)) || (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD))
...@@ -169,11 +168,21 @@ int acpi_device_set_power(struct acpi_device *device, int state) ...@@ -169,11 +168,21 @@ int acpi_device_set_power(struct acpi_device *device, int state)
return 0; return 0;
} }
-if (!device->power.states[state].flags.valid) {
+if (state == ACPI_STATE_D3_COLD) {
/*
* For transitions to D3cold we need to execute _PS3 and then
* possibly drop references to the power resources in use.
*/
state = ACPI_STATE_D3_HOT;
/* If _PR3 is not available, use D3hot as the target state. */
if (!device->power.states[ACPI_STATE_D3_COLD].flags.valid)
target_state = state;
} else if (!device->power.states[state].flags.valid) {
dev_warn(&device->dev, "Power state %s not supported\n", dev_warn(&device->dev, "Power state %s not supported\n",
acpi_power_state_string(state)); acpi_power_state_string(state));
return -ENODEV; return -ENODEV;
} }
if (!device->power.flags.ignore_parent && if (!device->power.flags.ignore_parent &&
device->parent && (state < device->parent->power.state)) { device->parent && (state < device->parent->power.state)) {
dev_warn(&device->dev, dev_warn(&device->dev,
...@@ -183,39 +192,38 @@ int acpi_device_set_power(struct acpi_device *device, int state) ...@@ -183,39 +192,38 @@ int acpi_device_set_power(struct acpi_device *device, int state)
return -ENODEV; return -ENODEV;
} }
/* For D3cold we should first transition into D3hot. */
if (state == ACPI_STATE_D3_COLD
&& device->power.states[ACPI_STATE_D3_COLD].flags.os_accessible) {
state = ACPI_STATE_D3_HOT;
cut_power = true;
}
if (state < device->power.state && state != ACPI_STATE_D0
&& device->power.state >= ACPI_STATE_D3_HOT) {
dev_warn(&device->dev,
"Cannot transition to non-D0 state from D3\n");
return -ENODEV;
}
/* /*
* Transition Power * Transition Power
* ---------------- * ----------------
-* In accordance with the ACPI specification first apply power (via
-* power resources) and then evaluate _PSx.
+* In accordance with ACPI 6, _PSx is executed before manipulating power
+* resources, unless the target state is D0, in which case _PS0 is
+* supposed to be executed after turning the power resources on.
*/
-if (device->power.flags.power_resources) {
-result = acpi_power_transition(device, state);
+if (state > ACPI_STATE_D0) {
+/*
* According to ACPI 6, devices cannot go from lower-power
* (deeper) states to higher-power (shallower) states.
*/
if (state < device->power.state) {
dev_warn(&device->dev, "Cannot transition from %s to %s\n",
acpi_power_state_string(device->power.state),
acpi_power_state_string(state));
return -ENODEV;
}
result = acpi_dev_pm_explicit_set(device, state);
if (result) if (result)
goto end; goto end;
-}
-result = acpi_dev_pm_explicit_set(device, state);
-if (result)
-goto end;
-if (cut_power) {
-device->power.state = state;
-state = ACPI_STATE_D3_COLD;
-result = acpi_power_transition(device, state);
+if (device->power.flags.power_resources)
+result = acpi_power_transition(device, target_state);
+} else {
+if (device->power.flags.power_resources) {
result = acpi_power_transition(device, ACPI_STATE_D0);
if (result)
goto end;
}
result = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0);
} }
end: end:
...@@ -264,13 +272,24 @@ int acpi_bus_init_power(struct acpi_device *device) ...@@ -264,13 +272,24 @@ int acpi_bus_init_power(struct acpi_device *device)
return result; return result;
if (state < ACPI_STATE_D3_COLD && device->power.flags.power_resources) { if (state < ACPI_STATE_D3_COLD && device->power.flags.power_resources) {
/* Reference count the power resources. */
result = acpi_power_on_resources(device, state); result = acpi_power_on_resources(device, state);
if (result) if (result)
return result; return result;
-result = acpi_dev_pm_explicit_set(device, state);
-if (result)
-return result;
+if (state == ACPI_STATE_D0) {
+/*
+* If _PSC is not present and the state inferred from
* power resources appears to be D0, it still may be
* necessary to execute _PS0 at this point, because
* another device using the same power resources may
* have been put into D0 previously and that's why we
* see D0 here.
*/
result = acpi_dev_pm_explicit_set(device, state);
if (result)
return result;
}
} else if (state == ACPI_STATE_UNKNOWN) { } else if (state == ACPI_STATE_UNKNOWN) {
/* /*
* No power resources and missing _PSC? Cross fingers and make * No power resources and missing _PSC? Cross fingers and make
...@@ -603,12 +622,12 @@ int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p, int d_max_in) ...@@ -603,12 +622,12 @@ int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p, int d_max_in)
if (d_max_in < ACPI_STATE_D0 || d_max_in > ACPI_STATE_D3_COLD) if (d_max_in < ACPI_STATE_D0 || d_max_in > ACPI_STATE_D3_COLD)
return -EINVAL; return -EINVAL;
-if (d_max_in > ACPI_STATE_D3_HOT) {
+if (d_max_in > ACPI_STATE_D2) {
enum pm_qos_flags_status stat; enum pm_qos_flags_status stat;
stat = dev_pm_qos_flags(dev, PM_QOS_FLAG_NO_POWER_OFF); stat = dev_pm_qos_flags(dev, PM_QOS_FLAG_NO_POWER_OFF);
if (stat == PM_QOS_FLAGS_ALL) if (stat == PM_QOS_FLAGS_ALL)
-d_max_in = ACPI_STATE_D3_HOT;
+d_max_in = ACPI_STATE_D2;
} }
adev = ACPI_COMPANION(dev); adev = ACPI_COMPANION(dev);
...@@ -953,6 +972,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_prepare); ...@@ -953,6 +972,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
*/ */
void acpi_subsys_complete(struct device *dev) void acpi_subsys_complete(struct device *dev)
{ {
pm_generic_complete(dev);
/* /*
* If the device had been runtime-suspended before the system went into * If the device had been runtime-suspended before the system went into
* the sleep state it is going out of and it has never been resumed till * the sleep state it is going out of and it has never been resumed till
......
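The device_pm.c rework above encodes the ACPI 6 ordering rule: for transitions out of D0 the _PSx control method runs before the power resources are manipulated, while for transitions into D0 the power resources are switched on first and _PS0 runs afterwards. A compilable toy restatement of just that ordering, with stub helpers standing in for _PSx evaluation and power-resource handling (nothing below is kernel code):

	#include <stdio.h>

	enum dstate { D0, D1, D2, D3HOT, D3COLD };

	static void run_psx(enum dstate s)		{ printf("_PS%d\n", s); }
	static void set_power_resources(enum dstate s)	{ printf("power resources -> D%d\n", s); }

	static void set_power(enum dstate target)
	{
		if (target > D0) {
			run_psx(target);		/* _PSx first ...          */
			set_power_resources(target);	/* ... then the resources  */
		} else {
			set_power_resources(D0);	/* resources on first ...  */
			run_psx(D0);			/* ... then _PS0           */
		}
	}

	int main(void)
	{
		set_power(D3HOT);
		set_power(D0);
		return 0;
	}

This mirrors the structure of the reworked acpi_device_set_power() above, minus the D3cold handling and error paths.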
...@@ -158,8 +158,9 @@ static int fan_get_state(struct acpi_device *device, unsigned long *state) ...@@ -158,8 +158,9 @@ static int fan_get_state(struct acpi_device *device, unsigned long *state)
if (result) if (result)
return result; return result;
-*state = (acpi_state == ACPI_STATE_D3_COLD ? 0 :
-(acpi_state == ACPI_STATE_D0 ? 1 : -1));
+*state = acpi_state == ACPI_STATE_D3_COLD
+|| acpi_state == ACPI_STATE_D3_HOT ?
+0 : (acpi_state == ACPI_STATE_D0 ? 1 : -1);
return 0; return 0;
} }
......
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/rwsem.h> #include <linux/rwsem.h>
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/dma-mapping.h>
#include "internal.h" #include "internal.h"
...@@ -167,6 +168,7 @@ int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev) ...@@ -167,6 +168,7 @@ int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev)
struct list_head *physnode_list; struct list_head *physnode_list;
unsigned int node_id; unsigned int node_id;
int retval = -EINVAL; int retval = -EINVAL;
bool coherent;
if (has_acpi_companion(dev)) { if (has_acpi_companion(dev)) {
if (acpi_dev) { if (acpi_dev) {
...@@ -223,6 +225,9 @@ int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev) ...@@ -223,6 +225,9 @@ int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev)
if (!has_acpi_companion(dev)) if (!has_acpi_companion(dev))
ACPI_COMPANION_SET(dev, acpi_dev); ACPI_COMPANION_SET(dev, acpi_dev);
if (acpi_check_dma(acpi_dev, &coherent))
arch_setup_dma_ops(dev, 0, 0, NULL, coherent);
acpi_physnode_link_name(physical_node_name, node_id); acpi_physnode_link_name(physical_node_name, node_id);
retval = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj, retval = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj,
physical_node_name); physical_node_name);
......
...@@ -27,7 +27,7 @@ ...@@ -27,7 +27,7 @@
#include <linux/acpi.h> #include <linux/acpi.h>
#include <acpi/hed.h> #include <acpi/hed.h>
-static struct acpi_device_id acpi_hed_ids[] = {
+static const struct acpi_device_id acpi_hed_ids[] = {
{"PNP0C33", 0}, {"PNP0C33", 0},
{"", 0}, {"", 0},
}; };
......
...@@ -138,6 +138,8 @@ struct acpi_ec { ...@@ -138,6 +138,8 @@ struct acpi_ec {
struct transaction *curr; struct transaction *curr;
spinlock_t lock; spinlock_t lock;
struct work_struct work; struct work_struct work;
unsigned long timestamp;
unsigned long nr_pending_queries;
}; };
extern struct acpi_ec *first_ec; extern struct acpi_ec *first_ec;
...@@ -181,16 +183,11 @@ static inline int suspend_nvs_save(void) { return 0; } ...@@ -181,16 +183,11 @@ static inline int suspend_nvs_save(void) { return 0; }
static inline void suspend_nvs_restore(void) {} static inline void suspend_nvs_restore(void) {}
#endif #endif
/*--------------------------------------------------------------------------
Video
-------------------------------------------------------------------------- */
#if defined(CONFIG_ACPI_VIDEO) || defined(CONFIG_ACPI_VIDEO_MODULE)
bool acpi_osi_is_win8(void);
#endif
/*-------------------------------------------------------------------------- /*--------------------------------------------------------------------------
Device properties Device properties
-------------------------------------------------------------------------- */ -------------------------------------------------------------------------- */
#define ACPI_DT_NAMESPACE_HID "PRP0001"
void acpi_init_properties(struct acpi_device *adev); void acpi_init_properties(struct acpi_device *adev);
void acpi_free_properties(struct acpi_device *adev); void acpi_free_properties(struct acpi_device *adev);
......
...@@ -175,11 +175,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas, ...@@ -175,11 +175,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
if (!addr || !length) if (!addr || !length)
return; return;
-/* Resources are never freed */
-if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO)
-request_region(addr, length, desc);
-else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
-request_mem_region(addr, length, desc);
+acpi_reserve_region(addr, length, gas->space_id, 0, desc);
} }
static void __init acpi_reserve_resources(void) static void __init acpi_reserve_resources(void)
...@@ -540,7 +536,7 @@ static char acpi_os_name[ACPI_MAX_OVERRIDE_LEN]; ...@@ -540,7 +536,7 @@ static char acpi_os_name[ACPI_MAX_OVERRIDE_LEN];
acpi_status acpi_status
acpi_os_predefined_override(const struct acpi_predefined_names *init_val, acpi_os_predefined_override(const struct acpi_predefined_names *init_val,
-acpi_string * new_val)
+char **new_val)
{ {
if (!init_val || !new_val) if (!init_val || !new_val)
return AE_BAD_PARAMETER; return AE_BAD_PARAMETER;
...@@ -1684,6 +1680,12 @@ int acpi_resources_are_enforced(void) ...@@ -1684,6 +1680,12 @@ int acpi_resources_are_enforced(void)
} }
EXPORT_SYMBOL(acpi_resources_are_enforced); EXPORT_SYMBOL(acpi_resources_are_enforced);
bool acpi_osi_is_win8(void)
{
return acpi_gbl_osi_data >= ACPI_OSI_WIN_8;
}
EXPORT_SYMBOL(acpi_osi_is_win8);
/* /*
* Deallocate the memory for a spinlock. * Deallocate the memory for a spinlock.
*/ */
......
...@@ -44,7 +44,6 @@ ...@@ -44,7 +44,6 @@
ACPI_MODULE_NAME("pci_irq"); ACPI_MODULE_NAME("pci_irq");
struct acpi_prt_entry { struct acpi_prt_entry {
struct list_head list;
struct acpi_pci_id id; struct acpi_pci_id id;
u8 pin; u8 pin;
acpi_handle link; acpi_handle link;
......
...@@ -684,7 +684,8 @@ int acpi_power_get_inferred_state(struct acpi_device *device, int *state) ...@@ -684,7 +684,8 @@ int acpi_power_get_inferred_state(struct acpi_device *device, int *state)
} }
} }
-*state = ACPI_STATE_D3_COLD;
+*state = device->power.states[ACPI_STATE_D3_COLD].flags.valid ?
+ACPI_STATE_D3_COLD : ACPI_STATE_D3_HOT;
return 0; return 0;
} }
...@@ -710,8 +711,6 @@ int acpi_power_transition(struct acpi_device *device, int state) ...@@ -710,8 +711,6 @@ int acpi_power_transition(struct acpi_device *device, int state)
|| (device->power.state > ACPI_STATE_D3_COLD)) || (device->power.state > ACPI_STATE_D3_COLD))
return -ENODEV; return -ENODEV;
/* TBD: Resources must be ordered. */
/* /*
* First we reference all power resources required in the target list * First we reference all power resources required in the target list
* (e.g. so the device doesn't lose power while transitioning). Then, * (e.g. so the device doesn't lose power while transitioning). Then,
...@@ -761,6 +760,25 @@ static void acpi_power_sysfs_remove(struct acpi_device *device) ...@@ -761,6 +760,25 @@ static void acpi_power_sysfs_remove(struct acpi_device *device)
device_remove_file(&device->dev, &dev_attr_resource_in_use); device_remove_file(&device->dev, &dev_attr_resource_in_use);
} }
static void acpi_power_add_resource_to_list(struct acpi_power_resource *resource)
{
mutex_lock(&power_resource_list_lock);
if (!list_empty(&acpi_power_resource_list)) {
struct acpi_power_resource *r;
list_for_each_entry(r, &acpi_power_resource_list, list_node)
if (r->order > resource->order) {
list_add_tail(&resource->list_node, &r->list_node);
goto out;
}
}
list_add_tail(&resource->list_node, &acpi_power_resource_list);
out:
mutex_unlock(&power_resource_list_lock);
}
int acpi_add_power_resource(acpi_handle handle) int acpi_add_power_resource(acpi_handle handle)
{ {
struct acpi_power_resource *resource; struct acpi_power_resource *resource;
...@@ -811,9 +829,7 @@ int acpi_add_power_resource(acpi_handle handle) ...@@ -811,9 +829,7 @@ int acpi_add_power_resource(acpi_handle handle)
if (!device_create_file(&device->dev, &dev_attr_resource_in_use)) if (!device_create_file(&device->dev, &dev_attr_resource_in_use))
device->remove = acpi_power_sysfs_remove; device->remove = acpi_power_sysfs_remove;
-mutex_lock(&power_resource_list_lock);
-list_add(&resource->list_node, &acpi_power_resource_list);
-mutex_unlock(&power_resource_list_lock);
+acpi_power_add_resource_to_list(resource);
acpi_device_add_finalize(device); acpi_device_add_finalize(device);
return 0; return 0;
...@@ -844,7 +860,22 @@ void acpi_resume_power_resources(void) ...@@ -844,7 +860,22 @@ void acpi_resume_power_resources(void)
&& resource->ref_count) { && resource->ref_count) {
dev_info(&resource->device.dev, "Turning ON\n"); dev_info(&resource->device.dev, "Turning ON\n");
__acpi_power_on(resource); __acpi_power_on(resource);
-} else if (state == ACPI_POWER_RESOURCE_STATE_ON
+}
mutex_unlock(&resource->resource_lock);
}
list_for_each_entry_reverse(resource, &acpi_power_resource_list, list_node) {
int result, state;
mutex_lock(&resource->resource_lock);
result = acpi_power_get_state(resource->device.handle, &state);
if (result) {
mutex_unlock(&resource->resource_lock);
continue;
}
if (state == ACPI_POWER_RESOURCE_STATE_ON
&& !resource->ref_count) { && !resource->ref_count) {
dev_info(&resource->device.dev, "Turning OFF\n"); dev_info(&resource->device.dev, "Turning OFF\n");
__acpi_power_off(resource); __acpi_power_off(resource);
......
...@@ -184,7 +184,7 @@ phys_cpuid_t acpi_get_phys_id(acpi_handle handle, int type, u32 acpi_id) ...@@ -184,7 +184,7 @@ phys_cpuid_t acpi_get_phys_id(acpi_handle handle, int type, u32 acpi_id)
phys_cpuid_t phys_id; phys_cpuid_t phys_id;
phys_id = map_mat_entry(handle, type, acpi_id); phys_id = map_mat_entry(handle, type, acpi_id);
-if (phys_id == PHYS_CPUID_INVALID)
+if (invalid_phys_cpuid(phys_id))
phys_id = map_madt_entry(type, acpi_id); phys_id = map_madt_entry(type, acpi_id);
return phys_id; return phys_id;
...@@ -196,7 +196,7 @@ int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id) ...@@ -196,7 +196,7 @@ int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id)
int i; int i;
#endif #endif
-if (phys_id == PHYS_CPUID_INVALID) {
+if (invalid_phys_cpuid(phys_id)) {
/* /*
* On UP processor, there is no _MAT or MADT table. * On UP processor, there is no _MAT or MADT table.
* So above phys_id is always set to PHYS_CPUID_INVALID. * So above phys_id is always set to PHYS_CPUID_INVALID.
...@@ -215,12 +215,12 @@ int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id) ...@@ -215,12 +215,12 @@ int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id)
* Ignores phys_id and always returns 0 for the processor * Ignores phys_id and always returns 0 for the processor
* handle with acpi id 0 if nr_cpu_ids is 1. * handle with acpi id 0 if nr_cpu_ids is 1.
* This should be the case if SMP tables are not found. * This should be the case if SMP tables are not found.
-* Return -1 for other CPU's handle.
+* Return -EINVAL for other CPU's handle.
*/ */
if (nr_cpu_ids <= 1 && acpi_id == 0) if (nr_cpu_ids <= 1 && acpi_id == 0)
return acpi_id; return acpi_id;
else else
-return -1;
+return -EINVAL;
} }
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
...@@ -233,7 +233,7 @@ int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id) ...@@ -233,7 +233,7 @@ int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id)
if (phys_id == 0) if (phys_id == 0)
return phys_id; return phys_id;
#endif #endif
-return -1;
+return -ENODEV;
} }
int acpi_get_cpuid(acpi_handle handle, int type, u32 acpi_id) int acpi_get_cpuid(acpi_handle handle, int type, u32 acpi_id)
......
...@@ -94,7 +94,7 @@ static int set_max_cstate(const struct dmi_system_id *id) ...@@ -94,7 +94,7 @@ static int set_max_cstate(const struct dmi_system_id *id)
return 0; return 0;
} }
-static struct dmi_system_id processor_power_dmi_table[] = {
+static const struct dmi_system_id processor_power_dmi_table[] = {
{ set_max_cstate, "Clevo 5600D", { { set_max_cstate, "Clevo 5600D", {
DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"), DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
DMI_MATCH(DMI_BIOS_VERSION,"SHE845M0.86C.0013.D.0302131307")}, DMI_MATCH(DMI_BIOS_VERSION,"SHE845M0.86C.0013.D.0302131307")},
......
...@@ -52,10 +52,7 @@ static bool __init processor_physically_present(acpi_handle handle) ...@@ -52,10 +52,7 @@ static bool __init processor_physically_present(acpi_handle handle)
type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0; type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
cpuid = acpi_get_cpuid(handle, type, acpi_id); cpuid = acpi_get_cpuid(handle, type, acpi_id);
-if (cpuid == -1)
-return false;
-return true;
+return !invalid_logical_cpuid(cpuid);
} }
static void acpi_set_pdc_bits(u32 *buf) static void acpi_set_pdc_bits(u32 *buf)
......
...@@ -79,50 +79,51 @@ static bool acpi_properties_format_valid(const union acpi_object *properties) ...@@ -79,50 +79,51 @@ static bool acpi_properties_format_valid(const union acpi_object *properties)
static void acpi_init_of_compatible(struct acpi_device *adev) static void acpi_init_of_compatible(struct acpi_device *adev)
{ {
const union acpi_object *of_compatible; const union acpi_object *of_compatible;
struct acpi_hardware_id *hwid;
bool acpi_of = false;
int ret; int ret;
/*
* Check if the special PRP0001 ACPI ID is present and in that
* case we fill in Device Tree compatible properties for this
* device.
*/
list_for_each_entry(hwid, &adev->pnp.ids, list) {
if (!strcmp(hwid->id, "PRP0001")) {
acpi_of = true;
break;
}
}
if (!acpi_of)
return;
ret = acpi_dev_get_property_array(adev, "compatible", ACPI_TYPE_STRING, ret = acpi_dev_get_property_array(adev, "compatible", ACPI_TYPE_STRING,
&of_compatible); &of_compatible);
if (ret) { if (ret) {
ret = acpi_dev_get_property(adev, "compatible", ret = acpi_dev_get_property(adev, "compatible",
ACPI_TYPE_STRING, &of_compatible); ACPI_TYPE_STRING, &of_compatible);
if (ret) { if (ret) {
-acpi_handle_warn(adev->handle,
-"PRP0001 requires compatible property\n");
+if (adev->parent
+&& adev->parent->flags.of_compatible_ok)
+goto out;
return; return;
} }
} }
adev->data.of_compatible = of_compatible; adev->data.of_compatible = of_compatible;
out:
adev->flags.of_compatible_ok = 1;
} }
void acpi_init_properties(struct acpi_device *adev) void acpi_init_properties(struct acpi_device *adev)
{ {
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER }; struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
bool acpi_of = false;
struct acpi_hardware_id *hwid;
const union acpi_object *desc; const union acpi_object *desc;
acpi_status status; acpi_status status;
int i; int i;
/*
* Check if ACPI_DT_NAMESPACE_HID is present and in that case we fill in
* Device Tree compatible properties for this device.
*/
list_for_each_entry(hwid, &adev->pnp.ids, list) {
if (!strcmp(hwid->id, ACPI_DT_NAMESPACE_HID)) {
acpi_of = true;
break;
}
}
status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf, status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf,
ACPI_TYPE_PACKAGE); ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status)) if (ACPI_FAILURE(status))
-return;
+goto out;
desc = buf.pointer; desc = buf.pointer;
if (desc->package.count % 2) if (desc->package.count % 2)
...@@ -156,13 +157,20 @@ void acpi_init_properties(struct acpi_device *adev) ...@@ -156,13 +157,20 @@ void acpi_init_properties(struct acpi_device *adev)
adev->data.pointer = buf.pointer; adev->data.pointer = buf.pointer;
adev->data.properties = properties; adev->data.properties = properties;
-acpi_init_of_compatible(adev);
-return;
+if (acpi_of)
+acpi_init_of_compatible(adev);
+goto out;
} }
fail: fail:
-dev_warn(&adev->dev, "Returned _DSD data is not valid, skipping\n");
+dev_dbg(&adev->dev, "Returned _DSD data is not valid, skipping\n");
ACPI_FREE(buf.pointer); ACPI_FREE(buf.pointer);
out:
if (acpi_of && !adev->flags.of_compatible_ok)
acpi_handle_info(adev->handle,
ACPI_DT_NAMESPACE_HID " requires 'compatible' property\n");
} }
void acpi_free_properties(struct acpi_device *adev) void acpi_free_properties(struct acpi_device *adev)
......
...@@ -26,6 +26,7 @@ ...@@ -26,6 +26,7 @@
#include <linux/device.h> #include <linux/device.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/ioport.h> #include <linux/ioport.h>
#include <linux/list.h>
#include <linux/slab.h> #include <linux/slab.h>
#ifdef CONFIG_X86 #ifdef CONFIG_X86
...@@ -621,3 +622,162 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares, ...@@ -621,3 +622,162 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
return (type & types) ? 0 : 1; return (type & types) ? 0 : 1;
} }
EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type); EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
struct reserved_region {
struct list_head node;
u64 start;
u64 end;
};
static LIST_HEAD(reserved_io_regions);
static LIST_HEAD(reserved_mem_regions);
static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags,
char *desc)
{
unsigned int length = end - start + 1;
struct resource *res;
res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ?
request_region(start, length, desc) :
request_mem_region(start, length, desc);
if (!res)
return -EIO;
res->flags &= ~flags;
return 0;
}
static int add_region_before(u64 start, u64 end, u8 space_id,
unsigned long flags, char *desc,
struct list_head *head)
{
struct reserved_region *reg;
int error;
reg = kmalloc(sizeof(*reg), GFP_KERNEL);
if (!reg)
return -ENOMEM;
error = request_range(start, end, space_id, flags, desc);
if (error)
return error;
reg->start = start;
reg->end = end;
list_add_tail(&reg->node, head);
return 0;
}
/**
* acpi_reserve_region - Reserve an I/O or memory region as a system resource.
* @start: Starting address of the region.
* @length: Length of the region.
* @space_id: Identifier of address space to reserve the region from.
* @flags: Resource flags to clear for the region after requesting it.
* @desc: Region description (for messages).
*
* Reserve an I/O or memory region as a system resource to prevent others from
* using it. If the new region overlaps with one of the regions (in the given
* address space) already reserved by this routine, only the non-overlapping
* parts of it will be reserved.
*
* Returned is either 0 (success) or a negative error code indicating a resource
* reservation problem. It is the code of the first encountered error, but the
* routine doesn't abort until it has attempted to request all of the parts of
* the new region that don't overlap with other regions reserved previously.
*
* The resources requested by this routine are never released.
*/
int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
unsigned long flags, char *desc)
{
struct list_head *regions;
struct reserved_region *reg;
u64 end = start + length - 1;
int ret = 0, error = 0;
if (space_id == ACPI_ADR_SPACE_SYSTEM_IO)
regions = &reserved_io_regions;
else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
regions = &reserved_mem_regions;
else
return -EINVAL;
if (list_empty(regions))
return add_region_before(start, end, space_id, flags, desc, regions);
list_for_each_entry(reg, regions, node)
if (reg->start == end + 1) {
/* The new region can be prepended to this one. */
ret = request_range(start, end, space_id, flags, desc);
if (!ret)
reg->start = start;
return ret;
} else if (reg->start > end) {
/* No overlap. Add the new region here and get out. */
return add_region_before(start, end, space_id, flags,
desc, &reg->node);
} else if (reg->end == start - 1) {
goto combine;
} else if (reg->end >= start) {
goto overlap;
}
/* The new region goes after the last existing one. */
return add_region_before(start, end, space_id, flags, desc, regions);
overlap:
/*
* The new region overlaps an existing one.
*
* The head part of the new region immediately preceding the existing
* overlapping one can be combined with it right away.
*/
if (reg->start > start) {
error = request_range(start, reg->start - 1, space_id, flags, desc);
if (error)
ret = error;
else
reg->start = start;
}
combine:
/*
* The new region is adjacent to an existing one. If it extends beyond
* that region all the way to the next one, it is possible to combine
* all three of them.
*/
while (reg->end < end) {
struct reserved_region *next = NULL;
u64 a = reg->end + 1, b = end;
if (!list_is_last(&reg->node, regions)) {
next = list_next_entry(reg, node);
if (next->start <= end)
b = next->start - 1;
}
error = request_range(a, b, space_id, flags, desc);
if (!error) {
if (next && next->start == b + 1) {
reg->end = next->end;
list_del(&next->node);
kfree(next);
} else {
reg->end = end;
break;
}
} else if (next) {
if (!ret)
ret = error;
reg = next;
} else {
break;
}
}
return ret ? ret : error;
}
EXPORT_SYMBOL_GPL(acpi_reserve_region);
...@@ -11,6 +11,7 @@ ...@@ -11,6 +11,7 @@
#include <linux/kthread.h> #include <linux/kthread.h>
#include <linux/dmi.h> #include <linux/dmi.h>
#include <linux/nls.h> #include <linux/nls.h>
#include <linux/dma-mapping.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
...@@ -135,12 +136,13 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias, ...@@ -135,12 +136,13 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
struct acpi_hardware_id *id; struct acpi_hardware_id *id;
/* /*
-* Since we skip PRP0001 from the modalias below, 0 should be returned
-* if PRP0001 is the only ACPI/PNP ID in the device's list.
+* Since we skip ACPI_DT_NAMESPACE_HID from the modalias below, 0 should
+* be returned if ACPI_DT_NAMESPACE_HID is the only ACPI/PNP ID in the
+* device's list.
*/ */
count = 0; count = 0;
list_for_each_entry(id, &acpi_dev->pnp.ids, list) list_for_each_entry(id, &acpi_dev->pnp.ids, list)
-if (strcmp(id->id, "PRP0001"))
+if (strcmp(id->id, ACPI_DT_NAMESPACE_HID))
count++; count++;
if (!count) if (!count)
...@@ -153,7 +155,7 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias, ...@@ -153,7 +155,7 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
size -= len; size -= len;
list_for_each_entry(id, &acpi_dev->pnp.ids, list) { list_for_each_entry(id, &acpi_dev->pnp.ids, list) {
-if (!strcmp(id->id, "PRP0001"))
+if (!strcmp(id->id, ACPI_DT_NAMESPACE_HID))
continue; continue;
count = snprintf(&modalias[len], size, "%s:", id->id); count = snprintf(&modalias[len], size, "%s:", id->id);
...@@ -177,7 +179,8 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias, ...@@ -177,7 +179,8 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
* @size: Size of the buffer. * @size: Size of the buffer.
* *
* Expose DT compatible modalias as of:NnameTCcompatible. This function should * Expose DT compatible modalias as of:NnameTCcompatible. This function should
-* only be called for devices having PRP0001 in their list of ACPI/PNP IDs.
+* only be called for devices having ACPI_DT_NAMESPACE_HID in their list of
+* ACPI/PNP IDs.
*/ */
static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias, static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
int size) int size)
...@@ -980,9 +983,9 @@ static void acpi_device_remove_files(struct acpi_device *dev) ...@@ -980,9 +983,9 @@ static void acpi_device_remove_files(struct acpi_device *dev)
* @adev: ACPI device object to match. * @adev: ACPI device object to match.
* @of_match_table: List of device IDs to match against. * @of_match_table: List of device IDs to match against.
* *
-* If @dev has an ACPI companion which has the special PRP0001 device ID in its
-* list of identifiers and a _DSD object with the "compatible" property, use
-* that property to match against the given list of identifiers.
+* If @dev has an ACPI companion which has ACPI_DT_NAMESPACE_HID in its list of
+* identifiers and a _DSD object with the "compatible" property, use that
+* property to match against the given list of identifiers.
*/ */
static bool acpi_of_match_device(struct acpi_device *adev, static bool acpi_of_match_device(struct acpi_device *adev,
const struct of_device_id *of_match_table) const struct of_device_id *of_match_table)
...@@ -1038,14 +1041,14 @@ static const struct acpi_device_id *__acpi_match_device( ...@@ -1038,14 +1041,14 @@ static const struct acpi_device_id *__acpi_match_device(
return id; return id;
/* /*
-* Next, check the special "PRP0001" ID and try to match the
+* Next, check ACPI_DT_NAMESPACE_HID and try to match the
* "compatible" property if found. * "compatible" property if found.
* *
* The id returned by the below is not valid, but the only * The id returned by the below is not valid, but the only
* caller passing non-NULL of_ids here is only interested in * caller passing non-NULL of_ids here is only interested in
* whether or not the return value is NULL. * whether or not the return value is NULL.
*/ */
-if (!strcmp("PRP0001", hwid->id)
+if (!strcmp(ACPI_DT_NAMESPACE_HID, hwid->id)
&& acpi_of_match_device(device, of_ids)) && acpi_of_match_device(device, of_ids))
return id; return id;
} }
...@@ -1671,7 +1674,7 @@ static int acpi_bus_extract_wakeup_device_power_package(acpi_handle handle, ...@@ -1671,7 +1674,7 @@ static int acpi_bus_extract_wakeup_device_power_package(acpi_handle handle,
static void acpi_wakeup_gpe_init(struct acpi_device *device) static void acpi_wakeup_gpe_init(struct acpi_device *device)
{ {
-struct acpi_device_id button_device_ids[] = {
+static const struct acpi_device_id button_device_ids[] = {
{"PNP0C0C", 0}, {"PNP0C0C", 0},
{"PNP0C0D", 0}, {"PNP0C0D", 0},
{"PNP0C0E", 0}, {"PNP0C0E", 0},
...@@ -1766,15 +1769,9 @@ static void acpi_bus_init_power_state(struct acpi_device *device, int state) ...@@ -1766,15 +1769,9 @@ static void acpi_bus_init_power_state(struct acpi_device *device, int state)
if (acpi_has_method(device->handle, pathname)) if (acpi_has_method(device->handle, pathname))
ps->flags.explicit_set = 1; ps->flags.explicit_set = 1;
-/*
-* State is valid if there are means to put the device into it.
-* D3hot is only valid if _PR3 present.
-*/
-if (!list_empty(&ps->resources)
-|| (ps->flags.explicit_set && state < ACPI_STATE_D3_HOT)) {
+/* State is valid if there are means to put the device into it. */
+if (!list_empty(&ps->resources) || ps->flags.explicit_set)
ps->flags.valid = 1;
-ps->flags.os_accessible = 1;
-}
ps->power = -1; /* Unknown - driver assigned */ ps->power = -1; /* Unknown - driver assigned */
ps->latency = -1; /* Unknown - driver assigned */ ps->latency = -1; /* Unknown - driver assigned */
...@@ -1810,21 +1807,13 @@ static void acpi_bus_get_power_flags(struct acpi_device *device) ...@@ -1810,21 +1807,13 @@ static void acpi_bus_get_power_flags(struct acpi_device *device)
acpi_bus_init_power_state(device, i); acpi_bus_init_power_state(device, i);
INIT_LIST_HEAD(&device->power.states[ACPI_STATE_D3_COLD].resources); INIT_LIST_HEAD(&device->power.states[ACPI_STATE_D3_COLD].resources);
if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
-/* Set defaults for D0 and D3 states (always valid) */
+/* Set defaults for D0 and D3hot states (always valid) */
device->power.states[ACPI_STATE_D0].flags.valid = 1; device->power.states[ACPI_STATE_D0].flags.valid = 1;
device->power.states[ACPI_STATE_D0].power = 100; device->power.states[ACPI_STATE_D0].power = 100;
-device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
-device->power.states[ACPI_STATE_D3_COLD].power = 0;
+device->power.states[ACPI_STATE_D3_HOT].flags.valid = 1;
/* Set D3cold's explicit_set flag if _PS3 exists. */
if (device->power.states[ACPI_STATE_D3_HOT].flags.explicit_set)
device->power.states[ACPI_STATE_D3_COLD].flags.explicit_set = 1;
/* Presence of _PS3 or _PRx means we can put the device into D3 cold */
if (device->power.states[ACPI_STATE_D3_HOT].flags.explicit_set ||
device->power.flags.power_resources)
device->power.states[ACPI_STATE_D3_COLD].flags.os_accessible = 1;
if (acpi_bus_init_power(device)) if (acpi_bus_init_power(device))
device->flags.power_manageable = 0; device->flags.power_manageable = 0;
...@@ -1947,6 +1936,62 @@ bool acpi_dock_match(acpi_handle handle) ...@@ -1947,6 +1936,62 @@ bool acpi_dock_match(acpi_handle handle)
return acpi_has_method(handle, "_DCK"); return acpi_has_method(handle, "_DCK");
} }
static acpi_status
acpi_backlight_cap_match(acpi_handle handle, u32 level, void *context,
void **return_value)
{
long *cap = context;
if (acpi_has_method(handle, "_BCM") &&
acpi_has_method(handle, "_BCL")) {
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found generic backlight "
"support\n"));
*cap |= ACPI_VIDEO_BACKLIGHT;
if (!acpi_has_method(handle, "_BQC"))
printk(KERN_WARNING FW_BUG PREFIX "No _BQC method, "
"cannot determine initial brightness\n");
/* We have backlight support, no need to scan further */
return AE_CTRL_TERMINATE;
}
return 0;
}
/* Returns true if the ACPI object is a video device which can be
* handled by video.ko.
* The device will get a Linux specific CID added in scan.c to
* identify the device as an ACPI graphics device
* Be aware that the graphics device may not be physically present
* Use acpi_video_get_capabilities() to detect general ACPI video
* capabilities of present cards
*/
long acpi_is_video_device(acpi_handle handle)
{
long video_caps = 0;
/* Is this device able to support video switching ? */
if (acpi_has_method(handle, "_DOD") || acpi_has_method(handle, "_DOS"))
video_caps |= ACPI_VIDEO_OUTPUT_SWITCHING;
/* Is this device able to retrieve a video ROM ? */
if (acpi_has_method(handle, "_ROM"))
video_caps |= ACPI_VIDEO_ROM_AVAILABLE;
/* Is this device able to configure which video head to be POSTed ? */
if (acpi_has_method(handle, "_VPO") &&
acpi_has_method(handle, "_GPD") &&
acpi_has_method(handle, "_SPD"))
video_caps |= ACPI_VIDEO_DEVICE_POSTING;
/* Only check for backlight functionality if one of the above hit. */
if (video_caps)
acpi_walk_namespace(ACPI_TYPE_DEVICE, handle,
ACPI_UINT32_MAX, acpi_backlight_cap_match, NULL,
&video_caps, NULL);
return video_caps;
}
EXPORT_SYMBOL(acpi_is_video_device);
const char *acpi_device_hid(struct acpi_device *device) const char *acpi_device_hid(struct acpi_device *device)
{ {
struct acpi_hardware_id *hid; struct acpi_hardware_id *hid;
...@@ -2109,6 +2154,39 @@ void acpi_free_pnp_ids(struct acpi_device_pnp *pnp) ...@@ -2109,6 +2154,39 @@ void acpi_free_pnp_ids(struct acpi_device_pnp *pnp)
kfree(pnp->unique_id); kfree(pnp->unique_id);
} }
static void acpi_init_coherency(struct acpi_device *adev)
{
unsigned long long cca = 0;
acpi_status status;
struct acpi_device *parent = adev->parent;
if (parent && parent->flags.cca_seen) {
/*
* From ACPI spec, OSPM will ignore _CCA if an ancestor
* already saw one.
*/
adev->flags.cca_seen = 1;
cca = parent->flags.coherent_dma;
} else {
status = acpi_evaluate_integer(adev->handle, "_CCA",
NULL, &cca);
if (ACPI_SUCCESS(status))
adev->flags.cca_seen = 1;
else if (!IS_ENABLED(CONFIG_ACPI_CCA_REQUIRED))
/*
* If architecture does not specify that _CCA is
* required for DMA-able devices (e.g. x86),
* we default to _CCA=1.
*/
cca = 1;
else
acpi_handle_debug(adev->handle,
"ACPI device is missing _CCA.\n");
}
adev->flags.coherent_dma = cca;
}
void acpi_init_device_object(struct acpi_device *device, acpi_handle handle, void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,
int type, unsigned long long sta) int type, unsigned long long sta)
{ {
...@@ -2127,6 +2205,7 @@ void acpi_init_device_object(struct acpi_device *device, acpi_handle handle, ...@@ -2127,6 +2205,7 @@ void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,
device->flags.visited = false; device->flags.visited = false;
device_initialize(&device->dev); device_initialize(&device->dev);
dev_set_uevent_suppress(&device->dev, true); dev_set_uevent_suppress(&device->dev, true);
acpi_init_coherency(device);
} }
void acpi_device_add_finalize(struct acpi_device *device) void acpi_device_add_finalize(struct acpi_device *device)
...@@ -2405,7 +2484,7 @@ static void acpi_default_enumeration(struct acpi_device *device) ...@@ -2405,7 +2484,7 @@ static void acpi_default_enumeration(struct acpi_device *device)
} }
static const struct acpi_device_id generic_device_ids[] = {
-{"PRP0001", },
+{ACPI_DT_NAMESPACE_HID, },
{"", }, {"", },
}; };
...@@ -2413,8 +2492,8 @@ static int acpi_generic_device_attach(struct acpi_device *adev, ...@@ -2413,8 +2492,8 @@ static int acpi_generic_device_attach(struct acpi_device *adev,
const struct acpi_device_id *not_used) const struct acpi_device_id *not_used)
{ {
/* /*
-* Since PRP0001 is the only ID handled here, the test below can be
-* unconditional.
+* Since ACPI_DT_NAMESPACE_HID is the only ID handled here, the test
+* below can be unconditional.
*/ */
if (adev->data.of_compatible) if (adev->data.of_compatible)
acpi_default_enumeration(adev); acpi_default_enumeration(adev);
......
...@@ -712,3 +712,18 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs) ...@@ -712,3 +712,18 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs)
return false; return false;
} }
EXPORT_SYMBOL(acpi_check_dsm); EXPORT_SYMBOL(acpi_check_dsm);
/*
* acpi_backlight= handling, this is done here rather then in video_detect.c
* because __setup cannot be used in modules.
*/
char acpi_video_backlight_string[16];
EXPORT_SYMBOL(acpi_video_backlight_string);
static int __init acpi_backlight(char *str)
{
strlcpy(acpi_video_backlight_string, str,
sizeof(acpi_video_backlight_string));
return 1;
}
__setup("acpi_backlight=", acpi_backlight);
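The __setup() handler just above wires the acpi_backlight= kernel command line option into the backlight-selection rework described in the changelog. Based on that rework, a boot-time override would be written, for example, as:

	acpi_backlight=vendor

with video, native and none being the other values the video detection code is expected to accept; treat that exact list as an assumption here, since the parsing of the string lives outside this hunk.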
-obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o
+obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o obj-$(CONFIG_PM_TRACE_RTC) += trace.o
obj-$(CONFIG_PM_OPP) += opp.o obj-$(CONFIG_PM_OPP) += opp.o
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <linux/clkdev.h> #include <linux/clkdev.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/pm_runtime.h>
#ifdef CONFIG_PM #ifdef CONFIG_PM
...@@ -67,7 +68,8 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce) ...@@ -67,7 +68,8 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
} else { } else {
clk_prepare(ce->clk); clk_prepare(ce->clk);
ce->status = PCE_STATUS_ACQUIRED; ce->status = PCE_STATUS_ACQUIRED;
-dev_dbg(dev, "Clock %s managed by runtime PM.\n", ce->con_id);
+dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
+ce->clk, ce->con_id);
} }
} }
...@@ -93,7 +95,7 @@ static int __pm_clk_add(struct device *dev, const char *con_id, ...@@ -93,7 +95,7 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
return -ENOMEM; return -ENOMEM;
} }
} else { } else {
-if (IS_ERR(ce->clk) || !__clk_get(clk)) {
+if (IS_ERR(clk) || !__clk_get(clk)) {
kfree(ce); kfree(ce);
return -ENOENT; return -ENOENT;
} }
...@@ -367,6 +369,43 @@ static int pm_clk_notify(struct notifier_block *nb, ...@@ -367,6 +369,43 @@ static int pm_clk_notify(struct notifier_block *nb,
return 0; return 0;
} }
int pm_clk_runtime_suspend(struct device *dev)
{
int ret;
dev_dbg(dev, "%s\n", __func__);
ret = pm_generic_runtime_suspend(dev);
if (ret) {
dev_err(dev, "failed to suspend device\n");
return ret;
}
ret = pm_clk_suspend(dev);
if (ret) {
dev_err(dev, "failed to suspend clock\n");
pm_generic_runtime_resume(dev);
return ret;
}
return 0;
}
int pm_clk_runtime_resume(struct device *dev)
{
int ret;
dev_dbg(dev, "%s\n", __func__);
ret = pm_clk_resume(dev);
if (ret) {
dev_err(dev, "failed to resume clock\n");
return ret;
}
return pm_generic_runtime_resume(dev);
}
#else /* !CONFIG_PM */ #else /* !CONFIG_PM */
/** /**
......
@@ -181,7 +181,7 @@ static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
 	genpd->cpuidle_data->idle_state->exit_latency = usecs64;
 }

-static int genpd_power_on(struct generic_pm_domain *genpd)
+static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	ktime_t time_start;
 	s64 elapsed_ns;

@@ -190,6 +190,9 @@ static int genpd_power_on(struct generic_pm_domain *genpd)
 	if (!genpd->power_on)
 		return 0;

+	if (!timed)
+		return genpd->power_on(genpd);
+
 	time_start = ktime_get();
 	ret = genpd->power_on(genpd);
 	if (ret)

@@ -208,7 +211,7 @@ static int genpd_power_on(struct generic_pm_domain *genpd)
 	return ret;
 }

-static int genpd_power_off(struct generic_pm_domain *genpd)
+static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
 {
 	ktime_t time_start;
 	s64 elapsed_ns;

@@ -217,6 +220,9 @@ static int genpd_power_off(struct generic_pm_domain *genpd)
 	if (!genpd->power_off)
 		return 0;

+	if (!timed)
+		return genpd->power_off(genpd);
+
 	time_start = ktime_get();
 	ret = genpd->power_off(genpd);
 	if (ret == -EBUSY)

@@ -305,7 +311,7 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
 		}
 	}

-	ret = genpd_power_on(genpd);
+	ret = genpd_power_on(genpd, true);
 	if (ret)
 		goto err;

@@ -615,7 +621,7 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
	 * the pm_genpd_poweron() restore power for us (this shouldn't
	 * happen very often).
	 */
-	ret = genpd_power_off(genpd);
+	ret = genpd_power_off(genpd, true);
 	if (ret == -EBUSY) {
 		genpd_set_active(genpd);
 		goto out;

@@ -827,6 +833,7 @@ static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd,
 /**
  * pm_genpd_sync_poweroff - Synchronously power off a PM domain and its masters.
  * @genpd: PM domain to power off, if possible.
+ * @timed: True if latency measurements are allowed.
  *
  * Check if the given PM domain can be powered off (during system suspend or
  * hibernation) and do that if so. Also, in that case propagate to its masters.

@@ -836,7 +843,8 @@ static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd,
  * executed sequentially, so it is guaranteed that it will never run twice in
  * parallel).
  */
-static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd)
+static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd,
+				   bool timed)
 {
 	struct gpd_link *link;

@@ -847,26 +855,28 @@ static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd)
	    || atomic_read(&genpd->sd_count) > 0)
 		return;

-	genpd_power_off(genpd);
+	genpd_power_off(genpd, timed);

 	genpd->status = GPD_STATE_POWER_OFF;

 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
 		genpd_sd_counter_dec(link->master);
-		pm_genpd_sync_poweroff(link->master);
+		pm_genpd_sync_poweroff(link->master, timed);
 	}
 }

 /**
  * pm_genpd_sync_poweron - Synchronously power on a PM domain and its masters.
  * @genpd: PM domain to power on.
+ * @timed: True if latency measurements are allowed.
  *
  * This function is only called in "noirq" and "syscore" stages of system power
  * transitions, so it need not acquire locks (all of the "noirq" callbacks are
  * executed sequentially, so it is guaranteed that it will never run twice in
  * parallel).
  */
-static void pm_genpd_sync_poweron(struct generic_pm_domain *genpd)
+static void pm_genpd_sync_poweron(struct generic_pm_domain *genpd,
+				  bool timed)
 {
 	struct gpd_link *link;

@@ -874,11 +884,11 @@ static void pm_genpd_sync_poweron(struct generic_pm_domain *genpd)
 		return;

 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
-		pm_genpd_sync_poweron(link->master);
+		pm_genpd_sync_poweron(link->master, timed);
 		genpd_sd_counter_inc(link->master);
 	}

-	genpd_power_on(genpd);
+	genpd_power_on(genpd, timed);

 	genpd->status = GPD_STATE_ACTIVE;
 }

@@ -1056,7 +1066,7 @@ static int pm_genpd_suspend_noirq(struct device *dev)
	 * the same PM domain, so it is not necessary to use locking here.
	 */
 	genpd->suspended_count++;
-	pm_genpd_sync_poweroff(genpd);
+	pm_genpd_sync_poweroff(genpd, true);

 	return 0;
 }

@@ -1086,7 +1096,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
	 * guaranteed that this function will never run twice in parallel for
	 * the same PM domain, so it is not necessary to use locking here.
	 */
-	pm_genpd_sync_poweron(genpd);
+	pm_genpd_sync_poweron(genpd, true);
 	genpd->suspended_count--;

 	return genpd_start_dev(genpd, dev);

@@ -1300,7 +1310,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
			 * If the domain was off before the hibernation, make
			 * sure it will be off going forward.
			 */
-			genpd_power_off(genpd);
+			genpd_power_off(genpd, true);

			return 0;
		}

@@ -1309,7 +1319,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
 	if (genpd->suspend_power_off)
 		return 0;

-	pm_genpd_sync_poweron(genpd);
+	pm_genpd_sync_poweron(genpd, true);

 	return genpd_start_dev(genpd, dev);
 }

@@ -1367,9 +1377,9 @@ static void genpd_syscore_switch(struct device *dev, bool suspend)
 	if (suspend) {
 		genpd->suspended_count++;
-		pm_genpd_sync_poweroff(genpd);
+		pm_genpd_sync_poweroff(genpd, false);
 	} else {
-		pm_genpd_sync_poweron(genpd);
+		pm_genpd_sync_poweron(genpd, false);
 		genpd->suspended_count--;
 	}
 }
@@ -24,6 +24,7 @@
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
 #include <linux/pm-trace.h>
+#include <linux/pm_wakeirq.h>
 #include <linux/interrupt.h>
 #include <linux/sched.h>
 #include <linux/async.h>

@@ -587,6 +588,7 @@ void dpm_resume_noirq(pm_message_t state)
 	async_synchronize_full();
 	dpm_show_time(starttime, state, "noirq");
 	resume_device_irqs();
+	device_wakeup_disarm_wake_irqs();
 	cpuidle_resume();
 	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
 }

@@ -920,9 +922,7 @@ static void device_complete(struct device *dev, pm_message_t state)
 	if (callback) {
 		pm_dev_dbg(dev, state, info);
-		trace_device_pm_callback_start(dev, info, state.event);
 		callback(dev);
-		trace_device_pm_callback_end(dev, 0);
 	}

 	device_unlock(dev);

@@ -954,7 +954,9 @@ void dpm_complete(pm_message_t state)
 		list_move(&dev->power.entry, &list);
 		mutex_unlock(&dpm_list_mtx);

+		trace_device_pm_callback_start(dev, "", state.event);
 		device_complete(dev, state);
+		trace_device_pm_callback_end(dev, 0);

 		mutex_lock(&dpm_list_mtx);
 		put_device(dev);

@@ -1104,6 +1106,7 @@ int dpm_suspend_noirq(pm_message_t state)
 	trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true);
 	cpuidle_pause();
+	device_wakeup_arm_wake_irqs();
 	suspend_device_irqs();
 	mutex_lock(&dpm_list_mtx);
 	pm_transition = state;

@@ -1585,11 +1588,8 @@ static int device_prepare(struct device *dev, pm_message_t state)
 			callback = dev->driver->pm->prepare;
 	}

-	if (callback) {
-		trace_device_pm_callback_start(dev, info, state.event);
+	if (callback)
 		ret = callback(dev);
-		trace_device_pm_callback_end(dev, ret);
-	}

 	device_unlock(dev);

@@ -1631,7 +1631,9 @@ int dpm_prepare(pm_message_t state)
 		get_device(dev);
 		mutex_unlock(&dpm_list_mtx);

+		trace_device_pm_callback_start(dev, "", state.event);
 		error = device_prepare(dev, state);
+		trace_device_pm_callback_end(dev, error);

 		mutex_lock(&dpm_list_mtx);
 		if (error) {
@@ -20,6 +20,46 @@ static inline void pm_runtime_early_init(struct device *dev)
 extern void pm_runtime_init(struct device *dev);
 extern void pm_runtime_remove(struct device *dev);

+struct wake_irq {
+	struct device *dev;
+	int irq;
+	bool dedicated_irq:1;
+};
+
+extern void dev_pm_arm_wake_irq(struct wake_irq *wirq);
+extern void dev_pm_disarm_wake_irq(struct wake_irq *wirq);
+
+#ifdef CONFIG_PM_SLEEP
+
+extern int device_wakeup_attach_irq(struct device *dev,
+				    struct wake_irq *wakeirq);
+extern void device_wakeup_detach_irq(struct device *dev);
+extern void device_wakeup_arm_wake_irqs(void);
+extern void device_wakeup_disarm_wake_irqs(void);
+
+#else
+
+static inline int
+device_wakeup_attach_irq(struct device *dev,
+			 struct wake_irq *wakeirq)
+{
+	return 0;
+}
+
+static inline void device_wakeup_detach_irq(struct device *dev)
+{
+}
+
+static inline void device_wakeup_arm_wake_irqs(void)
+{
+}
+
+static inline void device_wakeup_disarm_wake_irqs(void)
+{
+}
+
+#endif /* CONFIG_PM_SLEEP */
+
 /*
  * sysfs.c
  */

@@ -52,6 +92,14 @@ static inline void wakeup_sysfs_remove(struct device *dev) {}
 static inline int pm_qos_sysfs_add(struct device *dev) { return 0; }
 static inline void pm_qos_sysfs_remove(struct device *dev) {}

+static inline void dev_pm_arm_wake_irq(struct wake_irq *wirq)
+{
+}
+
+static inline void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
+{
+}
+
 #endif

 #ifdef CONFIG_PM_SLEEP
@@ -10,6 +10,7 @@
 #include <linux/sched.h>
 #include <linux/export.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_wakeirq.h>
 #include <trace/events/rpm.h>
 #include "power.h"

@@ -514,6 +515,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 	callback = RPM_GET_CALLBACK(dev, runtime_suspend);

+	dev_pm_enable_wake_irq(dev);
 	retval = rpm_callback(callback, dev);
 	if (retval)
 		goto fail;

@@ -552,6 +554,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 	return retval;

 fail:
+	dev_pm_disable_wake_irq(dev);
 	__update_runtime_status(dev, RPM_ACTIVE);
 	dev->power.deferred_resume = false;
 	wake_up_all(&dev->power.wait_queue);

@@ -734,13 +737,16 @@ static int rpm_resume(struct device *dev, int rpmflags)
 	callback = RPM_GET_CALLBACK(dev, runtime_resume);

+	dev_pm_disable_wake_irq(dev);
 	retval = rpm_callback(callback, dev);
 	if (retval) {
 		__update_runtime_status(dev, RPM_SUSPENDED);
 		pm_runtime_cancel_pending(dev);
+		dev_pm_enable_wake_irq(dev);
 	} else {
  no_callback:
 		__update_runtime_status(dev, RPM_ACTIVE);
+		pm_runtime_mark_last_busy(dev);
 		if (parent)
 			atomic_inc(&parent->power.child_count);
 	}
/*
* wakeirq.c - Device wakeirq helper functions
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/pm_wakeirq.h>
#include "power.h"
/**
* dev_pm_attach_wake_irq - Attach device interrupt as a wake IRQ
* @dev: Device entry
* @irq: Device wake-up capable interrupt
* @wirq: Wake irq specific data
*
* Internal function to attach either a device IO interrupt or a
* dedicated wake-up interrupt as a wake IRQ.
*/
static int dev_pm_attach_wake_irq(struct device *dev, int irq,
struct wake_irq *wirq)
{
unsigned long flags;
int err;
if (!dev || !wirq)
return -EINVAL;
spin_lock_irqsave(&dev->power.lock, flags);
if (dev_WARN_ONCE(dev, dev->power.wakeirq,
"wake irq already initialized\n")) {
spin_unlock_irqrestore(&dev->power.lock, flags);
return -EEXIST;
}
dev->power.wakeirq = wirq;
spin_unlock_irqrestore(&dev->power.lock, flags);
err = device_wakeup_attach_irq(dev, wirq);
if (err)
return err;
return 0;
}
/**
* dev_pm_set_wake_irq - Attach device IO interrupt as wake IRQ
* @dev: Device entry
* @irq: Device IO interrupt
*
* Attach a device IO interrupt as a wake IRQ. The wake IRQ gets
* automatically configured for wake-up from suspend based
* on the device specific sysfs wakeup entry. Typically called
* during driver probe after calling device_init_wakeup().
*/
int dev_pm_set_wake_irq(struct device *dev, int irq)
{
struct wake_irq *wirq;
int err;
wirq = kzalloc(sizeof(*wirq), GFP_KERNEL);
if (!wirq)
return -ENOMEM;
wirq->dev = dev;
wirq->irq = irq;
err = dev_pm_attach_wake_irq(dev, irq, wirq);
if (err)
kfree(wirq);
return err;
}
EXPORT_SYMBOL_GPL(dev_pm_set_wake_irq);
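As a usage illustration only (a hypothetical driver, not part of this commit), a probe path that wires the device's IO interrupt up as its wake IRQ could look roughly like this:

#include <linux/platform_device.h>
#include <linux/pm_wakeup.h>
#include <linux/pm_wakeirq.h>

static int foo_probe(struct platform_device *pdev)
{
	int irq, err;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	/* Enable wakeup for the device, then attach its IO interrupt. */
	device_init_wakeup(&pdev->dev, true);

	err = dev_pm_set_wake_irq(&pdev->dev, irq);
	if (err) {
		device_init_wakeup(&pdev->dev, false);
		return err;
	}

	/* ... remainder of probe ... */
	return 0;
}

The matching dev_pm_clear_wake_irq() below is safe to call unconditionally from the remove path, as its kernel-doc notes.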
/**
* dev_pm_clear_wake_irq - Detach a device IO interrupt wake IRQ
* @dev: Device entry
*
* Detach a device wake IRQ and free resources.
*
* Note that it's OK for drivers to call this without calling
* dev_pm_set_wake_irq() as all the driver instances may not have
* a wake IRQ configured. This avoid adding wake IRQ specific
* checks into the drivers.
*/
void dev_pm_clear_wake_irq(struct device *dev)
{
struct wake_irq *wirq = dev->power.wakeirq;
unsigned long flags;
if (!wirq)
return;
spin_lock_irqsave(&dev->power.lock, flags);
dev->power.wakeirq = NULL;
spin_unlock_irqrestore(&dev->power.lock, flags);
device_wakeup_detach_irq(dev);
if (wirq->dedicated_irq)
free_irq(wirq->irq, wirq);
kfree(wirq);
}
EXPORT_SYMBOL_GPL(dev_pm_clear_wake_irq);
/**
* handle_threaded_wake_irq - Handler for dedicated wake-up interrupts
* @irq: Device specific dedicated wake-up interrupt
* @_wirq: Wake IRQ data
*
* Some devices have a separate wake-up interrupt in addition to the
* device IO interrupt. The wake-up interrupt signals that a device
* should be woken up from it's idle state. This handler uses device
* specific pm_runtime functions to wake the device, and then it's
* up to the device to do whatever it needs to. Note that as the
* device may need to restore context and start up regulators, we
* use a threaded IRQ.
*
* Also note that we are not resending the lost device interrupts.
* We assume that the wake-up interrupt just needs to wake-up the
* device, and then device's pm_runtime_resume() can deal with the
* situation.
*/
static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq)
{
struct wake_irq *wirq = _wirq;
int res;
/* We don't want RPM_ASYNC or RPM_NOWAIT here */
res = pm_runtime_resume(wirq->dev);
if (res < 0)
dev_warn(wirq->dev,
"wake IRQ with no resume: %i\n", res);
return IRQ_HANDLED;
}
/**
* dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt
* @dev: Device entry
* @irq: Device wake-up interrupt
*
* Unless your hardware has separate wake-up interrupts in addition
* to the device IO interrupts, you don't need this.
*
* Sets up a threaded interrupt handler for a device that has
* a dedicated wake-up interrupt in addition to the device IO
* interrupt.
*
* The interrupt starts disabled, and needs to be managed for
* the device by the bus code or the device driver using
* dev_pm_enable_wake_irq() and dev_pm_disable_wake_irq()
* functions.
*/
int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
{
struct wake_irq *wirq;
int err;
wirq = kzalloc(sizeof(*wirq), GFP_KERNEL);
if (!wirq)
return -ENOMEM;
wirq->dev = dev;
wirq->irq = irq;
wirq->dedicated_irq = true;
irq_set_status_flags(irq, IRQ_NOAUTOEN);
/*
* Consumer device may need to power up and restore state
* so we use a threaded irq.
*/
err = request_threaded_irq(irq, NULL, handle_threaded_wake_irq,
IRQF_ONESHOT, dev_name(dev), wirq);
if (err)
goto err_free;
err = dev_pm_attach_wake_irq(dev, irq, wirq);
if (err)
goto err_free_irq;
return err;
err_free_irq:
free_irq(irq, wirq);
err_free:
kfree(wirq);
return err;
}
EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq);
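For hardware with a separate wake-up line the setup is nearly identical; once attached, the runtime PM core enables and disables the dedicated IRQ around the driver's runtime callbacks (see the rpm_suspend()/rpm_resume() hunks earlier), and the system suspend path arms it via device_wakeup_arm_wake_irqs(). A hedged sketch with hypothetical names and interrupt indices:

#include <linux/platform_device.h>
#include <linux/pm_wakeup.h>
#include <linux/pm_wakeirq.h>
#include <linux/pm_runtime.h>

static int bar_probe(struct platform_device *pdev)
{
	int wake_irq, err;

	/* Assumption: the second interrupt resource is the wake-up line. */
	wake_irq = platform_get_irq(pdev, 1);
	if (wake_irq < 0)
		return wake_irq;

	device_init_wakeup(&pdev->dev, true);

	err = dev_pm_set_dedicated_wake_irq(&pdev->dev, wake_irq);
	if (err)
		return err;

	pm_runtime_enable(&pdev->dev);
	/* ... request the IO interrupt, remainder of probe ... */
	return 0;
}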
/**
* dev_pm_enable_wake_irq - Enable device wake-up interrupt
* @dev: Device
*
* Called from the bus code or the device driver for
* runtime_suspend() to enable the wake-up interrupt while
* the device is running.
*
* Note that for runtime_suspend()) the wake-up interrupts
* should be unconditionally enabled unlike for suspend()
* that is conditional.
*/
void dev_pm_enable_wake_irq(struct device *dev)
{
struct wake_irq *wirq = dev->power.wakeirq;
if (wirq && wirq->dedicated_irq)
enable_irq(wirq->irq);
}
EXPORT_SYMBOL_GPL(dev_pm_enable_wake_irq);
/**
* dev_pm_disable_wake_irq - Disable device wake-up interrupt
* @dev: Device
*
* Called from the bus code or the device driver for
* runtime_resume() to disable the wake-up interrupt while
* the device is running.
*/
void dev_pm_disable_wake_irq(struct device *dev)
{
struct wake_irq *wirq = dev->power.wakeirq;
if (wirq && wirq->dedicated_irq)
disable_irq_nosync(wirq->irq);
}
EXPORT_SYMBOL_GPL(dev_pm_disable_wake_irq);
/**
* dev_pm_arm_wake_irq - Arm device wake-up
* @wirq: Device wake-up interrupt
*
* Sets up the wake-up event conditionally based on the
* device_may_wake().
*/
void dev_pm_arm_wake_irq(struct wake_irq *wirq)
{
if (!wirq)
return;
if (device_may_wakeup(wirq->dev))
enable_irq_wake(wirq->irq);
}
/**
* dev_pm_disarm_wake_irq - Disarm device wake-up
* @wirq: Device wake-up interrupt
*
* Clears up the wake-up event conditionally based on the
* device_may_wake().
*/
void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
{
if (!wirq)
return;
if (device_may_wakeup(wirq->dev))
disable_irq_wake(wirq->irq);
}
@@ -14,6 +14,7 @@
 #include <linux/export.h>
 #include <linux/kernel.h>
 #include <linux/of.h>
+#include <linux/of_address.h>
 #include <linux/property.h>

 /**

@@ -519,3 +520,16 @@ unsigned int device_get_child_node_count(struct device *dev)
 	return count;
 }
 EXPORT_SYMBOL_GPL(device_get_child_node_count);
+
+bool device_dma_is_coherent(struct device *dev)
+{
+	bool coherent = false;
+
+	if (IS_ENABLED(CONFIG_OF) && dev->of_node)
+		coherent = of_dma_is_coherent(dev->of_node);
+	else
+		acpi_check_dma(ACPI_COMPANION(dev), &coherent);
+
+	return coherent;
+}
+EXPORT_SYMBOL_GPL(device_dma_is_coherent);
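The helper gives drivers a single call that honours both the DT "dma-coherent" property and the ACPI _CCA object handled elsewhere in this series. A hedged usage sketch (hypothetical driver; device_dma_is_coherent() is assumed to be declared in <linux/property.h> alongside the other firmware-node helpers):

#include <linux/device.h>
#include <linux/property.h>

static void foo_configure_dma(struct device *dev)
{
	if (device_dma_is_coherent(dev))
		dev_dbg(dev, "coherent DMA: skipping explicit cache maintenance\n");
	else
		dev_dbg(dev, "non-coherent DMA: cache maintenance required\n");
}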
@@ -301,7 +301,7 @@ static int omap_l3_probe(struct platform_device *pdev)
 	return ret;
 }

-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP

 /**
  * l3_resume_noirq() - resume function for l3_noc

@@ -347,7 +347,7 @@ static int l3_resume_noirq(struct device *dev)
 }

 static const struct dev_pm_ops l3_dev_pm_ops = {
-	.resume_noirq		= l3_resume_noirq,
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(NULL, l3_resume_noirq)
 };

 #define L3_DEV_PM_OPS (&l3_dev_pm_ops)
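Using the helper macro rather than an open-coded .resume_noirq assignment also covers the freeze/thaw and poweroff/restore hibernation callbacks, and it compiles away entirely when CONFIG_PM_SLEEP is off, which is why the guard above changes from CONFIG_PM to CONFIG_PM_SLEEP. From memory, and only as an approximation, the macro in <linux/pm.h> expands along these lines:

#ifdef CONFIG_PM_SLEEP
#define SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
	.suspend_noirq = suspend_fn, \
	.resume_noirq = resume_fn, \
	.freeze_noirq = suspend_fn, \
	.thaw_noirq = resume_fn, \
	.poweroff_noirq = suspend_fn, \
	.restore_noirq = resume_fn,
#else
#define SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
#endif

Passing NULL for suspend_fn, as this driver does, simply leaves the noirq suspend-side callbacks unset.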
@@ -5,7 +5,7 @@
 # big LITTLE core layer and glue drivers
 config ARM_BIG_LITTLE_CPUFREQ
 	tristate "Generic ARM big LITTLE CPUfreq driver"
-	depends on ARM && BIG_LITTLE && ARM_CPU_TOPOLOGY && HAVE_CLK
+	depends on (ARM_CPU_TOPOLOGY || ARM64) && HAVE_CLK
 	select PM_OPP
 	help
 	  This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
@@ -416,6 +416,7 @@ static struct platform_driver dt_cpufreq_platdrv = {
 };
 module_platform_driver(dt_cpufreq_platdrv);

+MODULE_ALIAS("platform:cpufreq-dt");
 MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
 MODULE_AUTHOR("Shawn Guo <shawn.guo@linaro.org>");
 MODULE_DESCRIPTION("Generic cpufreq driver");
@@ -414,7 +414,7 @@ static int nforce2_detect_chipset(void)
  * nforce2_init - initializes the nForce2 CPUFreq driver
  *
  * Initializes the nForce2 FSB support. Returns -ENODEV on unsupported
- * devices, -EINVAL on problems during initiatization, and zero on
+ * devices, -EINVAL on problems during initialization, and zero on
  * success.
  */
 static int __init nforce2_init(void)
@@ -208,11 +208,16 @@ struct common_dbs_data {
 	void *(*get_cpu_dbs_info_s)(int cpu);
 	void (*gov_dbs_timer)(struct work_struct *work);
 	void (*gov_check_cpu)(int cpu, unsigned int load);
-	int (*init)(struct dbs_data *dbs_data);
-	void (*exit)(struct dbs_data *dbs_data);
+	int (*init)(struct dbs_data *dbs_data, bool notify);
+	void (*exit)(struct dbs_data *dbs_data, bool notify);

 	/* Governor specific ops, see below */
 	void *gov_ops;
+
+	/*
+	 * Protects governor's data (struct dbs_data and struct common_dbs_data)
+	 */
+	struct mutex mutex;
 };

 /* Governor Per policy data */

@@ -221,9 +226,6 @@ struct dbs_data {
 	unsigned int min_sampling_rate;
 	int usage_count;
 	void *tuners;
-
-	/* dbs_mutex protects dbs_enable in governor start/stop */
-	struct mutex mutex;
 };

 /* Governor specific ops, will be passed to dbs_data->gov_ops */

@@ -234,10 +236,6 @@ struct od_ops {
 	void (*freq_increase)(struct cpufreq_policy *policy, unsigned int freq);
 };

-struct cs_ops {
-	struct notifier_block *notifier_block;
-};
-
 static inline int delay_for_sampling_rate(unsigned int sampling_rate)
 {
 	int delay = usecs_to_jiffies(sampling_rate);
(The remaining file diffs in this merge are collapsed and not shown here.)