Commit 4dc4226f authored by Linus Torvalds


Merge tag 'pm+acpi-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm into next

Pull ACPI and power management updates from Rafael Wysocki:
 "ACPICA is the leader this time (63 commits), followed by cpufreq (28
  commits), devfreq (15 commits), system suspend/hibernation (12
  commits), ACPI video and ACPI device enumeration (10 commits each).

  We have no major new features this time, but there are a few
  significant changes of how things work.  The most visible one will
  probably be that we are now going to create platform devices rather
  than PNP devices by default for ACPI device objects with _HID.  That
  was long overdue and will be really necessary to be able to use the
  same drivers for the same hardware blocks on ACPI and DT-based systems
  going forward.  We're not expecting fallout from this one (as usual),
  but it's something to watch nevertheless.

  The second change having a chance to be visible is that ACPI video
  will now default to using native backlight rather than the ACPI
  backlight interface which should generally help systems with broken
  Win8 BIOSes.  We're hoping that all problems with the native backlight
  handling that we had previously have been addressed and we are in a
  good enough shape to flip the default, but this change should be easy
  enough to revert if need be.

  In addition to that, the system suspend core has a new mechanism to
  allow runtime-suspended devices to stay suspended throughout system
  suspend/resume transitions if some extra conditions are met
  (generally, they are related to coordination within device hierarchy).
  However, enabling this feature requires cooperation from the bus type
  layer and for now it has only been implemented for the ACPI PM domain
  (used by ACPI-enumerated platform devices mostly today).

  Also, the acpidump utility that was previously shipped as a separate
  tool will now be provided by the upstream ACPICA along with the rest
  of ACPICA code, which will allow it to be more up to date and better
  supported, and we have one new cpuidle driver (ARM clps711x).

  The rest is improvements related to certain specific use cases,
  cleanups and fixes all over the place.

  Specifics:

   - ACPICA update to upstream version 20140424.  That includes a number
     of fixes and improvements related to things like GPE handling,
     table loading, headers, memory mapping and unmapping, DSDT/SSDT
     overriding, and the Unload() operator.  The acpidump utility from
     upstream ACPICA is included too.  From Bob Moore, Lv Zheng, David
     Box, David Binderman, and Colin Ian King.

   - Fixes and cleanups related to ACPI video and backlight interfaces
     from Hans de Goede.  That includes blacklist entries for some new
     machines and using native backlight by default.

   - ACPI device enumeration changes to create platform devices rather
     than PNP devices for ACPI device objects with _HID by default.  PNP
     devices will still be created for the ACPI device object with
     device IDs corresponding to real PNP devices, so that change should
     not break things left and right, and we're expecting to see more
     and more ACPI-enumerated platform devices in the future.  From
     Zhang Rui and Rafael J Wysocki.

   - Updates for the ACPI LPSS (Low-Power Subsystem) driver allowing it
     to handle system suspend/resume on Asus T100 correctly.  From
     Heikki Krogerus and Rafael J Wysocki.

   - PM core update introducing a mechanism to allow runtime-suspended
     devices to stay suspended over system suspend/resume transitions if
     certain additional conditions related to coordination within device
     hierarchy are met.  Related PM documentation update and ACPI PM
     domain support for the new feature.  From Rafael J Wysocki.

   - Fixes and improvements related to the "freeze" sleep state.  They
     affect several places including cpuidle, PM core, ACPI core, and
     the ACPI battery driver.  From Rafael J Wysocki and Zhang Rui.

   - Miscellaneous fixes and updates of the ACPI core from Aaron Lu,
     Bjørn Mork, Hanjun Guo, Lan Tianyu, and Rafael J Wysocki.

   - Fixes and cleanups for the ACPI processor and ACPI PAD (Processor
     Aggregator Device) drivers from Baoquan He, Manuel Schölling, Tony
     Camuso, and Toshi Kani.

   - System suspend/resume optimization in the ACPI battery driver from
     Lan Tianyu.

   - OPP (Operating Performance Points) subsystem updates from Chander
     Kashyap, Mark Brown, and Nishanth Menon.

   - cpufreq core fixes, updates and cleanups from Srivatsa S Bhat,
     Stratos Karafotis, and Viresh Kumar.

   - Updates, fixes and cleanups for the Tegra, powernow-k8, imx6q,
     s5pv210, nforce2, and powernv cpufreq drivers from Brian Norris,
     Jingoo Han, Paul Bolle, Philipp Zabel, Stratos Karafotis, and
     Viresh Kumar.

   - intel_pstate driver fixes and cleanups from Dirk Brandewie, Doug
     Smythies, and Stratos Karafotis.

   - Enabling the big.LITTLE cpufreq driver on arm64 from Mark Brown.

   - Fix for the cpuidle menu governor from Chander Kashyap.

   - New ARM clps711x cpuidle driver from Alexander Shiyan.

   - Hibernate core fixes and cleanups from Chen Gang, Dan Carpenter,
     Fabian Frederick, Pali Rohár, and Sebastian Capella.

   - Intel RAPL (Running Average Power Limit) driver updates from Jacob
     Pan.

   - PNP subsystem updates from Bjorn Helgaas and Fabian Frederick.

   - devfreq core updates from Chanwoo Choi and Paul Bolle.

   - devfreq updates for exynos4 and exynos5 from Chanwoo Choi and
     Bartlomiej Zolnierkiewicz.

   - turbostat tool fix from Jean Delvare.

   - cpupower tool updates from Prarit Bhargava, Ramkumar Ramachandra
     and Thomas Renninger.

   - New ACPI ec_access.c tool for poking at the EC in a safe way from
     Thomas Renninger"

* tag 'pm+acpi-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (187 commits)
  ACPICA: Namespace: Remove _PRP method support.
  intel_pstate: Improve initial busy calculation
  intel_pstate: add sample time scaling
  intel_pstate: Correct rounding in busy calculation
  intel_pstate: Remove C0 tracking
  PM / hibernate: fixed typo in comment
  ACPI: Fix x86 regression related to early mapping size limitation
  ACPICA: Tables: Add mechanism to control early table checksum verification.
  ACPI / scan: use platform bus type by default for _HID enumeration
  ACPI / scan: always register ACPI LPSS scan handler
  ACPI / scan: always register memory hotplug scan handler
  ACPI / scan: always register container scan handler
  ACPI / scan: Change the meaning of missing .attach() in scan handlers
  ACPI / scan: introduce platform_id device PNP type flag
  ACPI / scan: drop unsupported serial IDs from PNP ACPI scan handler ID list
  ACPI / scan: drop IDs that do not comply with the ACPI PNP ID rule
  ACPI / PNP: use device ID list for PNPACPI device enumeration
  ACPI / scan: .match() callback for ACPI scan handlers
  ACPI / battery: wakeup the system only when necessary
  power_supply: allow power supply devices registered w/o wakeup source
  ...
parents d6b92c2c 2e30baad
@@ -128,7 +128,7 @@ Description: Discover cpuidle policy and mechanism
 What:		/sys/devices/system/cpu/cpu#/cpufreq/*
 Date:		pre-git history
-Contact:	cpufreq@vger.kernel.org
+Contact:	linux-pm@vger.kernel.org
 Description:	Discover and change clock speed of CPUs
 		Clock scaling allows you to change the clock speed of the
@@ -146,7 +146,7 @@ Description: Discover and change clock speed of CPUs
 What:		/sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus
 Date:		June 2013
-Contact:	cpufreq@vger.kernel.org
+Contact:	linux-pm@vger.kernel.org
 Description:	Discover CPUs in the same CPU frequency coordination domain
 		freqdomain_cpus is the list of CPUs (online+offline) that share
......
@@ -7,19 +7,30 @@ Description:
 		subsystem.
 What:		/sys/power/state
-Date:		August 2006
+Date:		May 2014
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
-		The /sys/power/state file controls the system power state.
-		Reading from this file returns what states are supported,
-		which is hard-coded to 'freeze' (Low-Power Idle), 'standby'
-		(Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
-		(Suspend-to-Disk).
+		The /sys/power/state file controls system sleep states.
+		Reading from this file returns the available sleep state
+		labels, which may be "mem", "standby", "freeze" and "disk"
+		(hibernation).  The meanings of the first three labels depend on
+		the relative_sleep_states command line argument as follows:
+		1) relative_sleep_states = 1
+		   "mem", "standby", "freeze" represent non-hibernation sleep
+		   states from the deepest ("mem", always present) to the
+		   shallowest ("freeze").  "standby" and "freeze" may or may
+		   not be present depending on the capabilities of the
+		   platform.  "freeze" can only be present if "standby" is
+		   present.
+		2) relative_sleep_states = 0 (default)
+		   "mem" - "suspend-to-RAM", present if supported.
+		   "standby" - "power-on suspend", present if supported.
+		   "freeze" - "suspend-to-idle", always present.
 		Writing to this file one of these strings causes the system to
-		transition into that state. Please see the file
-		Documentation/power/states.txt for a description of each of
-		these states.
+		transition into the corresponding state, if available.  See
+		Documentation/power/states.txt for a description of what
+		"suspend-to-RAM", "power-on suspend" and "suspend-to-idle" mean.

 What:		/sys/power/disk
 Date:		September 2006
......
@@ -20,6 +20,7 @@ Contents:
 ---------
 1. CPUFreq core and interfaces
 2. CPUFreq notifiers
+3. CPUFreq Table Generation with Operating Performance Point (OPP)

 1. General Information
 =======================
@@ -92,3 +93,31 @@ values:
 	cpu	- number of the affected CPU
 	old	- old frequency
 	new	- new frequency
+
+3. CPUFreq Table Generation with Operating Performance Point (OPP)
+==================================================================
+For details about OPP, see Documentation/power/opp.txt
+
+dev_pm_opp_init_cpufreq_table -
+	cpufreq framework typically is initialized with
+	cpufreq_frequency_table_cpuinfo which is provided with the list of
+	frequencies that are available for operation. This function provides
+	a ready to use conversion routine to translate the OPP layer's internal
+	information about the available frequencies into a format readily
+	providable to cpufreq.
+
+	WARNING: Do not use this function in interrupt context.
+
+	Example:
+	 soc_pm_init()
+	 {
+		/* Do things */
+		r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
+		if (!r)
+			cpufreq_frequency_table_cpuinfo(policy, freq_table);
+		/* Do other things */
+	 }
+
+	NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in
+	addition to CONFIG_PM_OPP.
+
+dev_pm_opp_free_cpufreq_table -
+	Free up the table allocated by dev_pm_opp_init_cpufreq_table
@@ -228,3 +228,22 @@ is the corresponding frequency table helper for the ->target
 stage. Just pass the values to this function, and the unsigned int
 index returns the number of the frequency table entry which contains
 the frequency the CPU shall be set to.
+
+The following macros can be used as iterators over cpufreq_frequency_table:
+
+cpufreq_for_each_entry(pos, table) - iterates over all entries of frequency
+table.
+
+cpufreq_for_each_valid_entry(pos, table) - iterates over all entries,
+excluding CPUFREQ_ENTRY_INVALID frequencies.
+
+Use arguments "pos" - a cpufreq_frequency_table * as a loop cursor and
+"table" - the cpufreq_frequency_table * you want to iterate over.
+
+For example:
+
+	struct cpufreq_frequency_table *pos, *driver_freq_table;
+
+	cpufreq_for_each_entry(pos, driver_freq_table) {
+		/* Do something with pos */
+		pos->frequency = ...
+	}
@@ -35,8 +35,8 @@ Mailing List
 ------------
 There is a CPU frequency changing CVS commit and general list where
 you can report bugs, problems or submit patches. To post a message,
-send an email to cpufreq@vger.kernel.org, to subscribe go to
-http://vger.kernel.org/vger-lists.html#cpufreq and follow the
+send an email to linux-pm@vger.kernel.org, to subscribe go to
+http://vger.kernel.org/vger-lists.html#linux-pm and follow the
 instructions there.

 Links
......
@@ -214,6 +214,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			unusable.  The "log_buf_len" parameter may be useful
 			if you need to capture more output.

+	acpi_force_table_verification	[HW,ACPI]
+			Enable table checksum verification during early stage.
+			By default, this is disabled due to x86 early mapping
+			size limitation.
+
 	acpi_irq_balance [HW,ACPI]
 			ACPI will balance active IRQs
 			default in APIC mode
@@ -237,7 +242,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			This feature is enabled by default.
 			This option allows to turn off the feature.

-	acpi_no_auto_ssdt	[HW,ACPI] Disable automatic loading of SSDT
+	acpi_no_static_ssdt	[HW,ACPI]
+			Disable installation of static SSDTs at early boot time
+			By default, SSDTs contained in the RSDT/XSDT will be
+			installed automatically and they will appear under
+			/sys/firmware/acpi/tables.
+			This option turns off this feature.
+			Note that specifying this option does not affect
+			dynamic table installation which will install SSDT
+			tables to /sys/firmware/acpi/tables/dynamic.

 	acpica_no_return_repair [HW, ACPI]
 			Disable AML predefined validation mechanism
@@ -2898,6 +2911,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			[KNL, SMP] Set scheduler's default relax_domain_level.
 			See Documentation/cgroups/cpusets.txt.

+	relative_sleep_states=
+			[SUSPEND] Use sleep state labeling where the deepest
+			state available other than hibernation is always "mem".
+			Format: { "0" | "1" }
+			0 -- Traditional sleep state labels.
+			1 -- Relative sleep state labels.
+
 	reserve=	[KNL,BUGS] Force the kernel to ignore some iomem area

 	reservetop=	[X86-32]
@@ -3470,7 +3490,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			the allocated input device; If set to 0, video driver
 			will only send out the event without touching backlight
 			brightness level.
-			default: 1
+			default: 0

 	virtio_mmio.device=
 			[VMMIO] Memory mapped virtio (platform) device.
......
@@ -2,6 +2,7 @@ Device Power Management

 Copyright (c) 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 Copyright (c) 2010 Alan Stern <stern@rowland.harvard.edu>
+Copyright (c) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>

 Most of the code in Linux is device drivers, so most of the Linux power
@@ -326,6 +327,20 @@ the phases are:
 	driver in some way for the upcoming system power transition, but it
 	should not put the device into a low-power state.

+	For devices supporting runtime power management, the return value of the
+	prepare callback can be used to indicate to the PM core that it may
+	safely leave the device in runtime suspend (if runtime-suspended
+	already), provided that all of the device's descendants are also left in
+	runtime suspend.  Namely, if the prepare callback returns a positive
+	number and that happens for all of the descendants of the device too,
+	and all of them (including the device itself) are runtime-suspended, the
+	PM core will skip the suspend, suspend_late and suspend_noirq suspend
+	phases as well as the resume_noirq, resume_early and resume phases of
+	the following system resume for all of these devices.  In that case,
+	the complete callback will be called directly after the prepare callback
+	and is entirely responsible for bringing the device back to the
+	functional state as appropriate.
+
     2.	The suspend methods should quiesce the device to stop it from performing
 	I/O.  They also may save the device registers and put it into the
 	appropriate low-power state, depending on the bus type the device is on,
@@ -400,12 +415,23 @@ When resuming from freeze, standby or memory sleep, the phases are:
 	the resume callbacks occur; it's not necessary to wait until the
 	complete phase.

+	Moreover, if the preceding prepare callback returned a positive number,
+	the device may have been left in runtime suspend throughout the whole
+	system suspend and resume (the suspend, suspend_late, suspend_noirq
+	phases of system suspend and the resume_noirq, resume_early, resume
+	phases of system resume may have been skipped for it).  In that case,
+	the complete callback is entirely responsible for bringing the device
+	back to the functional state after system suspend if necessary.  [For
+	example, it may need to queue up a runtime resume request for the device
+	for this purpose.]  To check if that is the case, the complete callback
+	can consult the device's power.direct_complete flag.  Namely, if that
+	flag is set when the complete callback is being run, it has been called
+	directly after the preceding prepare and special action may be required
+	to make the device work correctly afterward.
+
 At the end of these phases, drivers should be as functional as they were before
 suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are
-gated on. Even if the device was in a low-power state before the system sleep
-because of runtime power management, afterwards it should be back in its
-full-power state. There are multiple reasons why it's best to do this; they are
-discussed in more detail in Documentation/power/runtime_pm.txt.
+gated on.

 However, the details here may again be platform-specific.  For example,
 some systems support multiple "run" states, and the mode in effect at
......
@@ -10,8 +10,7 @@ Contents
 3. OPP Search Functions
 4. OPP Availability Control Functions
 5. OPP Data Retrieval Functions
-6. Cpufreq Table Generation
-7. Data Structures
+6. Data Structures

 1. Introduction
 ===============
@@ -72,7 +71,6 @@ operations until that OPP could be re-enabled if possible.
 OPP library facilitates this concept in it's implementation. The following
 operational functions operate only on available opps:
 opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq, dev_pm_opp_get_opp_count
-and dev_pm_opp_init_cpufreq_table

 dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer which can then
 be used for dev_pm_opp_enable/disable functions to make an opp available as required.
@@ -96,10 +94,9 @@ using RCU read locks. The opp_find_freq_{exact,ceil,floor},
 opp_get_{voltage, freq, opp_count} fall into this category.

 opp_{add,enable,disable} are updaters which use mutex and implement it's own
-RCU locking mechanisms. dev_pm_opp_init_cpufreq_table acts as an updater and uses
-mutex to implment RCU updater strategy. These functions should *NOT* be called
-under RCU locks and other contexts that prevent blocking functions in RCU or
-mutex operations from working.
+RCU locking mechanisms. These functions should *NOT* be called under RCU locks
+and other contexts that prevent blocking functions in RCU or mutex operations
+from working.

 2. Initial OPP List Registration
 ================================
@@ -311,34 +308,7 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
	 /* Do other things */
	 }

-6. Cpufreq Table Generation
-===========================
-dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
-	cpufreq_frequency_table_cpuinfo which is provided with the list of
-	frequencies that are available for operation. This function provides
-	a ready to use conversion routine to translate the OPP layer's internal
-	information about the available frequencies into a format readily
-	providable to cpufreq.
-
-	WARNING: Do not use this function in interrupt context.
-
-	Example:
-	 soc_pm_init()
-	 {
-		/* Do things */
-		r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
-		if (!r)
-			cpufreq_frequency_table_cpuinfo(policy, freq_table);
-		/* Do other things */
-	 }
-
-	NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in
-	addition to CONFIG_PM as power management feature is required to
-	dynamically scale voltage and frequency in a system.
-
-dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table
-
-7. Data Structures
+6. Data Structures
 ==================

 Typically an SoC contains multiple voltage domains which are variable. Each
 domain is represented by a device pointer. The relationship to OPP can be
......
@@ -2,6 +2,7 @@ Runtime Power Management Framework for I/O Devices

 (C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 (C) 2010 Alan Stern <stern@rowland.harvard.edu>
+(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>

 1. Introduction
@@ -444,6 +445,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
   bool pm_runtime_status_suspended(struct device *dev);
     - return true if the device's runtime PM status is 'suspended'

+  bool pm_runtime_suspended_if_enabled(struct device *dev);
+    - return true if the device's runtime PM status is 'suspended' and its
+      'power.disable_depth' field is equal to 1
+
   void pm_runtime_allow(struct device *dev);
     - set the power.runtime_auto flag for the device and decrease its usage
       counter (used by the /sys/devices/.../power/control interface to
@@ -644,19 +649,33 @@ place (in particular, if the system is not waking up from hibernation), it may
 be more efficient to leave the devices that had been suspended before the system
 suspend began in the suspended state.

+To this end, the PM core provides a mechanism allowing some coordination between
+different levels of device hierarchy.  Namely, if a system suspend .prepare()
+callback returns a positive number for a device, that indicates to the PM core
+that the device appears to be runtime-suspended and its state is fine, so it
+may be left in runtime suspend provided that all of its descendants are also
+left in runtime suspend.  If that happens, the PM core will not execute any
+system suspend and resume callbacks for all of those devices, except for the
+complete callback, which is then entirely responsible for handling the device
+as appropriate.  This only applies to system suspend transitions that are not
+related to hibernation (see Documentation/power/devices.txt for more
+information).
+
 The PM core does its best to reduce the probability of race conditions between
 the runtime PM and system suspend/resume (and hibernation) callbacks by carrying
 out the following operations:

-  * During system suspend it calls pm_runtime_get_noresume() and
-    pm_runtime_barrier() for every device right before executing the
-    subsystem-level .suspend() callback for it.  In addition to that it calls
-    __pm_runtime_disable() with 'false' as the second argument for every device
-    right before executing the subsystem-level .suspend_late() callback for it.
+  * During system suspend pm_runtime_get_noresume() is called for every device
+    right before executing the subsystem-level .prepare() callback for it and
+    pm_runtime_barrier() is called for every device right before executing the
+    subsystem-level .suspend() callback for it.  In addition to that the PM core
+    calls __pm_runtime_disable() with 'false' as the second argument for every
+    device right before executing the subsystem-level .suspend_late() callback
+    for it.

-  * During system resume it calls pm_runtime_enable() and pm_runtime_put()
-    for every device right after executing the subsystem-level .resume_early()
-    callback and right after executing the subsystem-level .resume() callback
-    for it, respectively.
+  * During system resume pm_runtime_enable() and pm_runtime_put() are called for
+    every device right after executing the subsystem-level .resume_early()
+    callback and right after executing the subsystem-level .complete() callback
+    for it, respectively.

 7. Generic subsystem callbacks
......
System Power Management Sleep States
System Power Management States (C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The kernel supports up to four system sleep states generically, although three
of them depend on the platform support code to implement the low-level details
for each state.
The kernel supports four power management states generically, though The states are represented by strings that can be read or written to the
one is generic and the other three are dependent on platform support /sys/power/state file. Those strings may be "mem", "standby", "freeze" and
code to implement the low-level details for each state. "disk", where the last one always represents hibernation (Suspend-To-Disk) and
This file describes each state, what they are the meaning of the remaining ones depends on the relative_sleep_states command
commonly called, what ACPI state they map to, and what string to write line argument.
to /sys/power/state to enter that state
state: Freeze / Low-Power Idle For relative_sleep_states=1, the strings "mem", "standby" and "freeze" label the
available non-hibernation sleep states from the deepest to the shallowest,
respectively. In that case, "mem" is always present in /sys/power/state,
because there is at least one non-hibernation sleep state in every system. If
the given system supports two non-hibernation sleep states, "standby" is present
in /sys/power/state in addition to "mem". If the system supports three
non-hibernation sleep states, "freeze" will be present in /sys/power/state in
addition to "mem" and "standby".
For relative_sleep_states=0, which is the default, the following descriptions
apply.
State: Suspend-To-Idle
ACPI state: S0
Label: "freeze"

This state is a generic, pure software, light-weight, system sleep state.
It allows more energy to be saved relative to runtime idle by freezing user
space and putting all I/O devices into low-power states (possibly
lower-power than available at run time), such that the processors can
spend more time in their idle states.

This state can be used for platforms without Power-On Suspend/Suspend-to-RAM
support, or it can be used in addition to Suspend-to-RAM (memory sleep)
to provide reduced resume latency. It is always supported.
State: Standby / Power-On Suspend
ACPI State: S1
Label: "standby"

This state, if supported, offers moderate, though real, power savings, while
providing a relatively low-latency transition back to a working system. No
operating state is lost (the CPU retains power), so the system easily starts up
again where it left off.

In addition to freezing user space and putting all I/O devices into low-power
states, which is done for Suspend-To-Idle too, nonboot CPUs are taken offline
and all low-level system functions are suspended during transitions into this
state. For this reason, it should allow more energy to be saved relative to
Suspend-To-Idle, but the resume latency will generally be greater than for that
state.
State: Suspend-to-RAM
ACPI State: S3
Label: "mem"

This state, if supported, offers significant power savings as everything in the
system is put into a low-power state, except for memory, which should be placed
into the self-refresh mode to retain its contents. All of the steps carried out
when entering Power-On Suspend are also carried out during transitions to STR.
Additional operations may take place depending on the platform capabilities. In
particular, on ACPI systems the kernel passes control to the BIOS (platform
firmware) as the last step during STR transitions and that usually results in
powering down some more low-level components that aren't directly controlled by
the kernel.

System and device state is saved and kept in memory. All devices are suspended
and put into low-power states. In many cases, all peripheral buses lose power
when entering STR, so devices must be able to handle the transition back to the
"on" state.

For at least ACPI, STR requires some minimal boot-strapping code to resume the
system from it. This may be the case on other platforms too.
State: Suspend-to-disk
ACPI State: S4
Label: "disk"

This state offers the greatest power savings, and can be used even in
the absence of low-level platform support for power management. This
......
...@@ -220,7 +220,10 @@ Q: After resuming, system is paging heavily, leading to very bad interactivity.

A: Try running

cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u | while read file
do
	test -f "$file" && cat "$file" > /dev/null
done

after resume. swapoff -a; swapon -a may also be useful.
......
...@@ -2405,7 +2405,6 @@ F: drivers/net/ethernet/ti/cpmac.c
CPU FREQUENCY DRIVERS
M: Rafael J. Wysocki <rjw@rjwysocki.net>
M: Viresh Kumar <viresh.kumar@linaro.org>
L: cpufreq@vger.kernel.org
L: linux-pm@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
...@@ -2416,7 +2415,6 @@ F: include/linux/cpufreq.h
CPU FREQUENCY DRIVERS - ARM BIG LITTLE
M: Viresh Kumar <viresh.kumar@linaro.org>
M: Sudeep Holla <sudeep.holla@arm.com>
L: cpufreq@vger.kernel.org
L: linux-pm@vger.kernel.org
W: http://www.arm.com/products/processors/technologies/biglittleprocessing.php
S: Maintained
...@@ -6947,7 +6945,6 @@ F: drivers/power/
PNP SUPPORT
M: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
M: Bjorn Helgaas <bhelgaas@google.com>
S: Maintained
F: drivers/pnp/
......
...@@ -1092,20 +1092,21 @@ int da850_register_cpufreq(char *async_clk)
static int da850_round_armrate(struct clk *clk, unsigned long rate)
{
	int ret = 0, diff;
	unsigned int best = (unsigned int) -1;
	struct cpufreq_frequency_table *table = cpufreq_info.freq_table;
	struct cpufreq_frequency_table *pos;

	rate /= 1000; /* convert to kHz */

	cpufreq_for_each_entry(pos, table) {
		diff = pos->frequency - rate;
		if (diff < 0)
			diff = -diff;

		if (diff < best) {
			best = diff;
			ret = pos->frequency;
		}
	}
......
/*
* IA64 specific ACPICA environments and implementation
*
* Copyright (C) 2014, Intel Corporation
* Author: Lv Zheng <lv.zheng@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_IA64_ACENV_H
#define _ASM_IA64_ACENV_H
#include <asm/intrinsics.h>
#define COMPILER_DEPENDENT_INT64 long
#define COMPILER_DEPENDENT_UINT64 unsigned long
/* Asm macros */
#ifdef CONFIG_ACPI
static inline int
ia64_acpi_acquire_global_lock(unsigned int *lock)
{
	unsigned int old, new, val;

	do {
		old = *lock;
		new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
		val = ia64_cmpxchg4_acq(lock, new, old);
	} while (unlikely (val != old));
	return (new < 3) ? -1 : 0;
}

static inline int
ia64_acpi_release_global_lock(unsigned int *lock)
{
	unsigned int old, new, val;

	do {
		old = *lock;
		new = old & ~0x3;
		val = ia64_cmpxchg4_acq(lock, new, old);
	} while (unlikely (val != old));
	return old & 0x1;
}
#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
((Acq) = ia64_acpi_acquire_global_lock(&facs->global_lock))
#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
((Acq) = ia64_acpi_release_global_lock(&facs->global_lock))
#endif
#endif /* _ASM_IA64_ACENV_H */
...@@ -34,57 +34,8 @@
#include <linux/numa.h>
#include <asm/numa.h>
#define COMPILER_DEPENDENT_INT64 long
#define COMPILER_DEPENDENT_UINT64 unsigned long
/*
* Calling conventions:
*
* ACPI_SYSTEM_XFACE - Interfaces to host OS (handlers, threads)
* ACPI_EXTERNAL_XFACE - External ACPI interfaces
* ACPI_INTERNAL_XFACE - Internal ACPI interfaces
* ACPI_INTERNAL_VAR_XFACE - Internal variable-parameter list interfaces
*/
#define ACPI_SYSTEM_XFACE
#define ACPI_EXTERNAL_XFACE
#define ACPI_INTERNAL_XFACE
#define ACPI_INTERNAL_VAR_XFACE
/* Asm macros */
#define ACPI_FLUSH_CPU_CACHE()
static inline int
ia64_acpi_acquire_global_lock (unsigned int *lock)
{
unsigned int old, new, val;
do {
old = *lock;
new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
val = ia64_cmpxchg4_acq(lock, new, old);
} while (unlikely (val != old));
return (new < 3) ? -1 : 0;
}
static inline int
ia64_acpi_release_global_lock (unsigned int *lock)
{
unsigned int old, new, val;
do {
old = *lock;
new = old & ~0x3;
val = ia64_cmpxchg4_acq(lock, new, old);
} while (unlikely (val != old));
return old & 0x1;
}
#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
((Acq) = ia64_acpi_acquire_global_lock(&facs->global_lock))
#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
((Acq) = ia64_acpi_release_global_lock(&facs->global_lock))
#ifdef CONFIG_ACPI
extern int acpi_lapic;
#define acpi_disabled 0	/* ACPI always enabled on IA64 */
#define acpi_noirq 0	/* ACPI always enabled on IA64 */
#define acpi_pci_disabled 0	/* ACPI PCI always enabled on IA64 */
...@@ -92,7 +43,6 @@ ia64_acpi_release_global_lock (unsigned int *lock)
#endif
#define acpi_processor_cstate_check(x) (x)	/* no idle limits on IA64 :) */
static inline void disable_acpi(void) { }
static inline void pci_acpi_crs_quirks(void) { }
#ifdef CONFIG_IA64_GENERIC
const char *acpi_get_sysname (void);
......
...@@ -56,6 +56,7 @@
#define PREFIX "ACPI: "

int acpi_lapic;
unsigned int acpi_cpei_override;
unsigned int acpi_cpei_phys_cpuid;
...@@ -676,6 +677,8 @@ int __init early_acpi_boot_init(void)
	if (ret < 1)
		printk(KERN_ERR PREFIX
		       "Error parsing MADT - no LAPIC entries\n");
	else
		acpi_lapic = 1;

#ifdef CONFIG_SMP
	if (available_cpus == 0) {
......
...@@ -91,10 +91,9 @@ EXPORT_SYMBOL(clk_put);
int clk_set_rate(struct clk *clk, unsigned long rate)
{
	struct cpufreq_frequency_table *pos;
	int ret = 0;
	int regval;

	if (likely(clk->ops && clk->ops->set_rate)) {
		unsigned long flags;
...@@ -107,22 +106,16 @@ int clk_set_rate(struct clk *clk, unsigned long rate)
	if (unlikely(clk->flags & CLK_RATE_PROPAGATES))
		propagate_rate(clk);

	cpufreq_for_each_valid_entry(pos, loongson2_clockmod_table)
		if (rate == pos->frequency)
			break;
	if (rate != pos->frequency)
		return -ENOTSUPP;

	clk->rate = rate;

	regval = LOONGSON_CHIPCFG0;
	regval = (regval & ~0x7) | (pos->driver_data - 1);
	LOONGSON_CHIPCFG0 = regval;

	return ret;
......
/*
* X86 specific ACPICA environments and implementation
*
* Copyright (C) 2014, Intel Corporation
* Author: Lv Zheng <lv.zheng@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_X86_ACENV_H
#define _ASM_X86_ACENV_H
#include <asm/special_insns.h>
/* Asm macros */
#define ACPI_FLUSH_CPU_CACHE() wbinvd()
#ifdef CONFIG_ACPI
int __acpi_acquire_global_lock(unsigned int *lock);
int __acpi_release_global_lock(unsigned int *lock);
#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
((Acq) = __acpi_release_global_lock(&facs->global_lock))
/*
* Math helper asm macros
*/
#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
asm("divl %2;" \
: "=a"(q32), "=d"(r32) \
: "r"(d32), \
"0"(n_lo), "1"(n_hi))
#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
asm("shrl $1,%2 ;" \
"rcrl $1,%3;" \
: "=r"(n_hi), "=r"(n_lo) \
: "0"(n_hi), "1"(n_lo))
#endif
#endif /* _ASM_X86_ACENV_H */
...@@ -32,51 +32,6 @@
#include <asm/mpspec.h>
#include <asm/realmode.h>
#define COMPILER_DEPENDENT_INT64 long long
#define COMPILER_DEPENDENT_UINT64 unsigned long long
/*
* Calling conventions:
*
* ACPI_SYSTEM_XFACE - Interfaces to host OS (handlers, threads)
* ACPI_EXTERNAL_XFACE - External ACPI interfaces
* ACPI_INTERNAL_XFACE - Internal ACPI interfaces
* ACPI_INTERNAL_VAR_XFACE - Internal variable-parameter list interfaces
*/
#define ACPI_SYSTEM_XFACE
#define ACPI_EXTERNAL_XFACE
#define ACPI_INTERNAL_XFACE
#define ACPI_INTERNAL_VAR_XFACE
/* Asm macros */
#define ACPI_FLUSH_CPU_CACHE() wbinvd()
int __acpi_acquire_global_lock(unsigned int *lock);
int __acpi_release_global_lock(unsigned int *lock);
#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
((Acq) = __acpi_release_global_lock(&facs->global_lock))
/*
* Math helper asm macros
*/
#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
asm("divl %2;" \
: "=a"(q32), "=d"(r32) \
: "r"(d32), \
"0"(n_lo), "1"(n_hi))
#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
asm("shrl $1,%2 ;" \
"rcrl $1,%3;" \
: "=r"(n_hi), "=r"(n_lo) \
: "0"(n_hi), "1"(n_lo))
#ifdef CONFIG_ACPI
extern int acpi_lapic;
extern int acpi_ioapic;
......
...@@ -39,8 +39,9 @@ acpi-y += processor_core.o
acpi-y += ec.o
acpi-$(CONFIG_ACPI_DOCK) += dock.o
acpi-y += pci_root.o pci_link.o pci_irq.o
acpi-y += acpi_lpss.o
acpi-y += acpi_platform.o
acpi-y += acpi_pnp.o
acpi-y += power.o
acpi-y += event.o
acpi-y += sysfs.o
...@@ -63,9 +64,9 @@ obj-$(CONFIG_ACPI_FAN) += fan.o
obj-$(CONFIG_ACPI_VIDEO) += video.o
obj-$(CONFIG_ACPI_PCI_SLOT) += pci_slot.o
obj-$(CONFIG_ACPI_PROCESSOR) += processor.o
obj-y += container.o
obj-$(CONFIG_ACPI_THERMAL) += thermal.o
obj-y += acpi_memhotplug.o
obj-$(CONFIG_ACPI_BATTERY) += battery.o
obj-$(CONFIG_ACPI_SBS) += sbshc.o
obj-$(CONFIG_ACPI_SBS) += sbs.o
......
...@@ -68,7 +68,7 @@ static int acpi_install_cmos_rtc_space_handler(struct acpi_device *adev,
		return -ENODEV;
	}

	return 1;
}

static void acpi_remove_cmos_rtc_space_handler(struct acpi_device *adev)
......
...@@ -220,13 +220,13 @@ static int __init extlog_init(void)
		goto err;
	}

	extlog_l1_hdr = acpi_os_map_iomem(l1_dirbase, l1_hdr_size);
	l1_head = (struct extlog_l1_head *)extlog_l1_hdr;
	l1_size = l1_head->total_len;
	l1_percpu_entry = l1_head->entries;
	elog_base = l1_head->elog_base;
	elog_size = l1_head->elog_len;
	acpi_os_unmap_iomem(extlog_l1_hdr, l1_hdr_size);
	release_mem_region(l1_dirbase, l1_hdr_size);

	/* remap L1 header again based on completed information */
...@@ -237,7 +237,7 @@ static int __init extlog_init(void)
		       (unsigned long long)l1_dirbase + l1_size);
		goto err;
	}
	extlog_l1_addr = acpi_os_map_iomem(l1_dirbase, l1_size);
	l1_entry_base = (u64 *)((u8 *)extlog_l1_addr + l1_hdr_size);

	/* remap elog table */
...@@ -248,7 +248,7 @@ static int __init extlog_init(void)
		       (unsigned long long)elog_base + elog_size);
		goto err_release_l1_dir;
	}
	elog_addr = acpi_os_map_iomem(elog_base, elog_size);

	rc = -ENOMEM;
	/* allocate buffer to save elog record */
...@@ -270,11 +270,11 @@ static int __init extlog_init(void)
err_release_elog:
	if (elog_addr)
		acpi_os_unmap_iomem(elog_addr, elog_size);
	release_mem_region(elog_base, elog_size);
err_release_l1_dir:
	if (extlog_l1_addr)
		acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
	release_mem_region(l1_dirbase, l1_size);
err:
	pr_warn(FW_BUG "Extended error log disabled because of problems parsing f/w tables\n");
...@@ -287,9 +287,9 @@ static void __exit extlog_exit(void)
	mce_unregister_decode_chain(&extlog_mce_dec);
	((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
	if (extlog_l1_addr)
		acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
	if (elog_addr)
		acpi_os_unmap_iomem(elog_addr, elog_size);
	release_mem_region(elog_base, elog_size);
	release_mem_region(l1_dirbase, l1_size);
	kfree(elog_buf);
......
...@@ -44,6 +44,13 @@
ACPI_MODULE_NAME("acpi_memhotplug");

static const struct acpi_device_id memory_device_ids[] = {
	{ACPI_MEMORY_DEVICE_HID, 0},
	{"", 0},
};

#ifdef CONFIG_ACPI_HOTPLUG_MEMORY

/* Memory Device States */
#define MEMORY_INVALID_STATE 0
#define MEMORY_POWER_ON_STATE 1
...@@ -53,11 +60,6 @@ static int acpi_memory_device_add(struct acpi_device *device,
			const struct acpi_device_id *not_used);
static void acpi_memory_device_remove(struct acpi_device *device);

static struct acpi_scan_handler memory_device_handler = {
	.ids = memory_device_ids,
	.attach = acpi_memory_device_add,
...@@ -364,9 +366,11 @@ static bool __initdata acpi_no_memhotplug;

void __init acpi_memory_hotplug_init(void)
{
	if (acpi_no_memhotplug) {
		memory_device_handler.attach = NULL;
		acpi_scan_add_handler(&memory_device_handler);
		return;
	}

	acpi_scan_add_handler_with_hotplug(&memory_device_handler, "memory");
}
...@@ -376,3 +380,16 @@ static int __init disable_acpi_memory_hotplug(char *str)
	return 1;
}
__setup("acpi_no_memhotplug", disable_acpi_memory_hotplug);
#else
static struct acpi_scan_handler memory_device_handler = {
.ids = memory_device_ids,
};
void __init acpi_memory_hotplug_init(void)
{
acpi_scan_add_handler(&memory_device_handler);
}
#endif /* CONFIG_ACPI_HOTPLUG_MEMORY */
...@@ -156,12 +156,13 @@ static int power_saving_thread(void *data)
	while (!kthread_should_stop()) {
		int cpu;
		unsigned long expire_time;

		try_to_freeze();

		/* round robin to cpus */
		expire_time = last_jiffies + round_robin_time * HZ;
		if (time_before(expire_time, jiffies)) {
			last_jiffies = jiffies;
			round_robin_cpu(tsk_index);
		}
...@@ -200,7 +201,7 @@ static int power_saving_thread(void *data)
				CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
			local_irq_enable();

			if (time_before(expire_time, jiffies)) {
				do_sleep = 1;
				break;
			}
...@@ -215,8 +216,15 @@ static int power_saving_thread(void *data)
		 * borrow CPU time from this CPU and cause RT task use > 95%
		 * CPU time. To make 'avoid starvation' work, takes a nap here.
		 */
		if (unlikely(do_sleep))
			schedule_timeout_killable(HZ * idle_pct / 100);

		/* If an external event has set the need_resched flag, then
		 * we need to deal with it, or this loop will continue to
		 * spin without calling __mwait().
		 */
		if (unlikely(need_resched()))
			schedule();
	}

	exit_round_robin(tsk_index);
......
...@@ -22,27 +22,16 @@
ACPI_MODULE_NAME("platform");

static const struct acpi_device_id forbidden_id_list[] = {
	{"PNP0000", 0},	/* PIC */
	{"PNP0100", 0},	/* Timer */
	{"PNP0200", 0},	/* AT DMA Controller */
	{"", 0},
};

/**
 * acpi_create_platform_device - Create platform device for ACPI device node
 * @adev: ACPI device node to create a platform device for.
 *
 * Check if the given @adev can be represented as a platform device and, if
 * that's the case, create and register a platform device, populate its common
...@@ -50,8 +39,7 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {
 *
 * Name of the platform device will be the same as @adev's.
 */
struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
{
	struct platform_device *pdev = NULL;
	struct acpi_device *acpi_parent;
...@@ -63,19 +51,22 @@ int acpi_create_platform_device(struct acpi_device *adev,

	/* If the ACPI node already has a physical device attached, skip it. */
	if (adev->physical_node_count)
		return NULL;

	if (!acpi_match_device_ids(adev, forbidden_id_list))
		return ERR_PTR(-EINVAL);

	INIT_LIST_HEAD(&resource_list);
	count = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
	if (count < 0) {
		return NULL;
	} else if (count > 0) {
		resources = kmalloc(count * sizeof(struct resource),
				    GFP_KERNEL);
		if (!resources) {
			dev_err(&adev->dev, "No memory for resources\n");
			acpi_dev_free_resource_list(&resource_list);
			return ERR_PTR(-ENOMEM);
		}
		count = 0;
		list_for_each_entry(rentry, &resource_list, node)
...@@ -112,25 +103,13 @@ int acpi_create_platform_device(struct acpi_device *adev,
	pdevinfo.num_res = count;
	pdevinfo.acpi_node.companion = adev;
	pdev = platform_device_register_full(&pdevinfo);
	if (IS_ERR(pdev))
		dev_err(&adev->dev, "platform device creation failed: %ld\n",
			PTR_ERR(pdev));
	else
		dev_dbg(&adev->dev, "created platform device %s\n",
			dev_name(&pdev->dev));

	kfree(resources);
	return pdev;
}
...@@ -268,7 +268,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
	pr->apic_id = apic_id;

	cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
	if (!cpu0_initialized && !acpi_lapic) {
		cpu0_initialized = 1;
		/* Handle UP system running SMP kernel, with no LAPIC in MADT */
		if ((cpu_index == -1) && (num_online_cpus() == 1))
......
...@@ -135,6 +135,7 @@ acpi-y += \
	rsxface.o

acpi-y += \
	tbdata.o \
	tbfadt.o \
	tbfind.o \
	tbinstal.o \
......
/******************************************************************************
*
* Module Name: acapps - common include for ACPI applications/tools
*
*****************************************************************************/
/*
* Copyright (C) 2000 - 2014, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#ifndef _ACAPPS
#define _ACAPPS
/* Common info for tool signons */
#define ACPICA_NAME "Intel ACPI Component Architecture"
#define ACPICA_COPYRIGHT "Copyright (c) 2000 - 2014 Intel Corporation"
#if ACPI_MACHINE_WIDTH == 64
#define ACPI_WIDTH "-64"
#elif ACPI_MACHINE_WIDTH == 32
#define ACPI_WIDTH "-32"
#else
#error unknown ACPI_MACHINE_WIDTH
#define ACPI_WIDTH "-??"
#endif
/* Macros for signons and file headers */
#define ACPI_COMMON_SIGNON(utility_name) \
"\n%s\n%s version %8.8X%s [%s]\n%s\n\n", \
ACPICA_NAME, \
utility_name, ((u32) ACPI_CA_VERSION), ACPI_WIDTH, __DATE__, \
ACPICA_COPYRIGHT
#define ACPI_COMMON_HEADER(utility_name, prefix) \
"%s%s\n%s%s version %8.8X%s [%s]\n%s%s\n%s\n", \
prefix, ACPICA_NAME, \
prefix, utility_name, ((u32) ACPI_CA_VERSION), ACPI_WIDTH, __DATE__, \
prefix, ACPICA_COPYRIGHT, \
prefix
/* Macros for usage messages */
#define ACPI_USAGE_HEADER(usage) \
printf ("Usage: %s\nOptions:\n", usage);
#define ACPI_OPTION(name, description) \
printf (" %-18s%s\n", name, description);
#define FILE_SUFFIX_DISASSEMBLY "dsl"
#define ACPI_TABLE_FILE_SUFFIX ".dat"
/*
* getopt
*/
int acpi_getopt(int argc, char **argv, char *opts);
int acpi_getopt_argument(int argc, char **argv);
extern int acpi_gbl_optind;
extern int acpi_gbl_opterr;
extern int acpi_gbl_sub_opt_char;
extern char *acpi_gbl_optarg;
/*
* cmfsize - Common get file size function
*/
u32 cm_get_file_size(FILE * file);
#ifndef ACPI_DUMP_APP
/*
* adisasm
*/
acpi_status
ad_aml_disassemble(u8 out_to_file,
char *filename, char *prefix, char **out_filename);
void ad_print_statistics(void);
acpi_status ad_find_dsdt(u8 **dsdt_ptr, u32 *dsdt_length);
void ad_dump_tables(void);
acpi_status ad_get_local_tables(void);
acpi_status
ad_parse_table(struct acpi_table_header *table,
acpi_owner_id * owner_id, u8 load_table, u8 external);
acpi_status ad_display_tables(char *filename, struct acpi_table_header *table);
acpi_status ad_display_statistics(void);
/*
* adwalk
*/
void
acpi_dm_cross_reference_namespace(union acpi_parse_object *parse_tree_root,
struct acpi_namespace_node *namespace_root,
acpi_owner_id owner_id);
void acpi_dm_dump_tree(union acpi_parse_object *origin);
void acpi_dm_find_orphan_methods(union acpi_parse_object *origin);
void
acpi_dm_finish_namespace_load(union acpi_parse_object *parse_tree_root,
struct acpi_namespace_node *namespace_root,
acpi_owner_id owner_id);
void
acpi_dm_convert_resource_indexes(union acpi_parse_object *parse_tree_root,
struct acpi_namespace_node *namespace_root);
/*
* adfile
*/
acpi_status ad_initialize(void);
char *fl_generate_filename(char *input_filename, char *suffix);
acpi_status
fl_split_input_pathname(char *input_path,
char **out_directory_path, char **out_filename);
char *ad_generate_filename(char *prefix, char *table_id);
void
ad_write_table(struct acpi_table_header *table,
u32 length, char *table_name, char *oem_table_id);
#endif
#endif /* _ACAPPS */
@@ -104,9 +104,10 @@ acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info *gpe_event_info);
  */
 acpi_status
 acpi_ev_create_gpe_block(struct acpi_namespace_node *gpe_device,
-			 struct acpi_generic_address *gpe_block_address,
+			 u64 address,
+			 u8 space_id,
 			 u32 register_count,
-			 u8 gpe_block_base_number,
+			 u16 gpe_block_base_number,
 			 u32 interrupt_number,
 			 struct acpi_gpe_block_info **return_gpe_block);
...
@@ -44,144 +44,14 @@
 #ifndef __ACGLOBAL_H__
 #define __ACGLOBAL_H__
 
-/*
- * Ensure that the globals are actually defined and initialized only once.
- *
- * The use of these macros allows a single list of globals (here) in order
- * to simplify maintenance of the code.
- */
-#ifdef DEFINE_ACPI_GLOBALS
-#define ACPI_GLOBAL(type,name) \
-	extern type name; \
-	type name
-
-#define ACPI_INIT_GLOBAL(type,name,value) \
-	type name=value
-#else
-#define ACPI_GLOBAL(type,name) \
-	extern type name
-
-#define ACPI_INIT_GLOBAL(type,name,value) \
-	extern type name
-#endif
-
-#ifdef DEFINE_ACPI_GLOBALS
-
-/* Public globals, available from outside ACPICA subsystem */
-
 /*****************************************************************************
  *
- * Runtime configuration (static defaults that can be overriden at runtime)
+ * Globals related to the ACPI tables
  *
  ****************************************************************************/
 
-/*
- * Enable "slack" in the AML interpreter? Default is FALSE, and the
- * interpreter strictly follows the ACPI specification. Setting to TRUE
- * allows the interpreter to ignore certain errors and/or bad AML constructs.
- *
- * Currently, these features are enabled by this flag:
- *
- * 1) Allow "implicit return" of last value in a control method
- * 2) Allow access beyond the end of an operation region
- * 3) Allow access to uninitialized locals/args (auto-init to integer 0)
- * 4) Allow ANY object type to be a source operand for the Store() operator
- * 5) Allow unresolved references (invalid target name) in package objects
- * 6) Enable warning messages for behavior that is not ACPI spec compliant
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_interpreter_slack, FALSE);
-
-/*
- * Automatically serialize all methods that create named objects? Default
- * is TRUE, meaning that all non_serialized methods are scanned once at
- * table load time to determine those that create named objects. Methods
- * that create named objects are marked Serialized in order to prevent
- * possible run-time problems if they are entered by more than one thread.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_auto_serialize_methods, TRUE);
-
-/*
- * Create the predefined _OSI method in the namespace? Default is TRUE
- * because ACPI CA is fully compatible with other ACPI implementations.
- * Changing this will revert ACPI CA (and machine ASL) to pre-OSI behavior.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_create_osi_method, TRUE);
-
-/*
- * Optionally use default values for the ACPI register widths. Set this to
- * TRUE to use the defaults, if an FADT contains incorrect widths/lengths.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_use_default_register_widths, TRUE);
-
-/*
- * Optionally enable output from the AML Debug Object.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_aml_debug_object, FALSE);
-
-/*
- * Optionally copy the entire DSDT to local memory (instead of simply
- * mapping it.) There are some BIOSs that corrupt or replace the original
- * DSDT, creating the need for this option. Default is FALSE, do not copy
- * the DSDT.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_copy_dsdt_locally, FALSE);
-
-/*
- * Optionally ignore an XSDT if present and use the RSDT instead.
- * Although the ACPI specification requires that an XSDT be used instead
- * of the RSDT, the XSDT has been found to be corrupt or ill-formed on
- * some machines. Default behavior is to use the XSDT if present.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_do_not_use_xsdt, FALSE);
-
-/*
- * Optionally use 32-bit FADT addresses if and when there is a conflict
- * (address mismatch) between the 32-bit and 64-bit versions of the
- * address. Although ACPICA adheres to the ACPI specification which
- * requires the use of the corresponding 64-bit address if it is non-zero,
- * some machines have been found to have a corrupted non-zero 64-bit
- * address. Default is TRUE, favor the 32-bit addresses.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_use32_bit_fadt_addresses, TRUE);
-
-/*
- * Optionally truncate I/O addresses to 16 bits. Provides compatibility
- * with other ACPI implementations. NOTE: During ACPICA initialization,
- * this value is set to TRUE if any Windows OSI strings have been
- * requested by the BIOS.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_truncate_io_addresses, FALSE);
-
-/*
- * Disable runtime checking and repair of values returned by control methods.
- * Use only if the repair is causing a problem on a particular machine.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_disable_auto_repair, FALSE);
-
-/*
- * Optionally do not load any SSDTs from the RSDT/XSDT during initialization.
- * This can be useful for debugging ACPI problems on some machines.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_disable_ssdt_table_load, FALSE);
-
-/*
- * We keep track of the latest version of Windows that has been requested by
- * the BIOS.
- */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_osi_data, 0);
-
-#endif				/* DEFINE_ACPI_GLOBALS */
-
-/*****************************************************************************
- *
- * ACPI Table globals
- *
- ****************************************************************************/
-
-/*
- * Master list of all ACPI tables that were found in the RSDT/XSDT.
- */
+/* Master list of all ACPI tables that were found in the RSDT/XSDT */
+
 ACPI_GLOBAL(struct acpi_table_list, acpi_gbl_root_table_list);
 
 /* DSDT information. Used to check for DSDT corruption */
@@ -279,7 +149,6 @@ ACPI_GLOBAL(acpi_exception_handler, acpi_gbl_exception_handler);
 ACPI_GLOBAL(acpi_init_handler, acpi_gbl_init_handler);
 ACPI_GLOBAL(acpi_table_handler, acpi_gbl_table_handler);
 ACPI_GLOBAL(void *, acpi_gbl_table_handler_context);
-ACPI_GLOBAL(struct acpi_walk_state *, acpi_gbl_breakpoint_walk);
 ACPI_GLOBAL(acpi_interface_handler, acpi_gbl_interface_handler);
 ACPI_GLOBAL(struct acpi_sci_handler_info *, acpi_gbl_sci_handler_list);
@@ -296,7 +165,6 @@ ACPI_GLOBAL(u8, acpi_gbl_reg_methods_executed);
 /* Misc */
 
 ACPI_GLOBAL(u32, acpi_gbl_original_mode);
-ACPI_GLOBAL(u32, acpi_gbl_rsdp_original_location);
 ACPI_GLOBAL(u32, acpi_gbl_ns_lookup_count);
 ACPI_GLOBAL(u32, acpi_gbl_ps_find_count);
 ACPI_GLOBAL(u16, acpi_gbl_pm1_enable_register_save);
@@ -483,11 +351,6 @@ ACPI_GLOBAL(u16, acpi_gbl_node_type_count_misc);
 ACPI_GLOBAL(u32, acpi_gbl_num_nodes);
 ACPI_GLOBAL(u32, acpi_gbl_num_objects);
 
-ACPI_GLOBAL(u32, acpi_gbl_size_of_parse_tree);
-ACPI_GLOBAL(u32, acpi_gbl_size_of_method_trees);
-ACPI_GLOBAL(u32, acpi_gbl_size_of_node_entries);
-ACPI_GLOBAL(u32, acpi_gbl_size_of_acpi_objects);
-
 #endif				/* ACPI_DEBUGGER */
 
 /*****************************************************************************
@@ -509,5 +372,6 @@ ACPI_INIT_GLOBAL(ACPI_FILE, acpi_gbl_debug_file, NULL);
  ****************************************************************************/
 
 extern const struct ah_predefined_name asl_predefined_info[];
+extern const struct ah_device_id asl_device_ids[];
 
 #endif				/* __ACGLOBAL_H__ */
@@ -450,9 +450,9 @@ struct acpi_gpe_event_info {
 struct acpi_gpe_register_info {
 	struct acpi_generic_address status_address;	/* Address of status reg */
 	struct acpi_generic_address enable_address;	/* Address of enable reg */
+	u16 base_gpe_number;	/* Base GPE number for this register */
 	u8 enable_for_wake;	/* GPEs to keep enabled when sleeping */
 	u8 enable_for_run;	/* GPEs to keep enabled when running */
-	u8 base_gpe_number;	/* Base GPE number for this register */
 };
 
 /*
@@ -466,10 +466,11 @@ struct acpi_gpe_block_info {
 	struct acpi_gpe_xrupt_info *xrupt_block;	/* Backpointer to interrupt block */
 	struct acpi_gpe_register_info *register_info;	/* One per GPE register pair */
 	struct acpi_gpe_event_info *event_info;	/* One for each GPE */
-	struct acpi_generic_address block_address;	/* Base address of the block */
+	u64 address;		/* Base address of the block */
 	u32 register_count;	/* Number of register pairs in block */
 	u16 gpe_count;		/* Number of individual GPEs in block */
-	u8 block_base_number;	/* Base GPE number for this block */
+	u16 block_base_number;	/* Base GPE number for this block */
+	u8 space_id;
 	u8 initialized;		/* TRUE if this block is initialized */
 };
@@ -733,7 +734,8 @@ union acpi_parse_value {
 #define ACPI_DASM_MATCHOP       0x06	/* Parent opcode is a Match() operator */
 #define ACPI_DASM_LNOT_PREFIX   0x07	/* Start of a Lnot_equal (etc.) pair of opcodes */
 #define ACPI_DASM_LNOT_SUFFIX   0x08	/* End of a Lnot_equal (etc.) pair of opcodes */
-#define ACPI_DASM_IGNORE        0x09	/* Not used at this time */
+#define ACPI_DASM_HID_STRING    0x09	/* String is a _HID or _CID */
+#define ACPI_DASM_IGNORE        0x0A	/* Not used at this time */
 
 /*
  * Generic operation (for example: If, While, Store)
@@ -1147,4 +1149,9 @@ struct ah_predefined_name {
 #endif
 };
 
+struct ah_device_id {
+	char *name;
+	char *description;
+};
+
 #endif				/* __ACLOCAL_H__ */
@@ -586,6 +586,10 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
 	{{"_LID", METHOD_0ARGS,
 	  METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
 
+	{{"_LPD", METHOD_0ARGS,
+	  METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}},	/* Variable-length (1 Int(rev), n Pkg (2 Int) */
+	PACKAGE_INFO(ACPI_PTYPE2_REV_FIXED, ACPI_RTYPE_INTEGER, 2, 0, 0, 0),
+
 	{{"_MAT", METHOD_0ARGS,
 	  METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},

@@ -698,12 +702,6 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
 	  METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}},	/* Variable-length (Refs) */
 	PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_REFERENCE, 0, 0, 0, 0),
 
-	{{"_PRP", METHOD_0ARGS,
-	  METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}},	/* Variable-length (Pkgs) each: 1 Str, 1 Int/Str/Pkg */
-	PACKAGE_INFO(ACPI_PTYPE2, ACPI_RTYPE_STRING, 1,
-		     ACPI_RTYPE_INTEGER | ACPI_RTYPE_STRING |
-		     ACPI_RTYPE_PACKAGE | ACPI_RTYPE_REFERENCE, 1, 0),
-
 	{{"_PRS", METHOD_0ARGS,
 	  METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
...
@@ -53,6 +53,31 @@ acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp *rsdp);
 
 u8 *acpi_tb_scan_memory_for_rsdp(u8 *start_address, u32 length);
 
+/*
+ * tbdata - table data structure management
+ */
+acpi_status acpi_tb_get_next_root_index(u32 *table_index);
+
+void
+acpi_tb_init_table_descriptor(struct acpi_table_desc *table_desc,
+			      acpi_physical_address address,
+			      u8 flags, struct acpi_table_header *table);
+
+acpi_status
+acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc,
+			   acpi_physical_address address, u8 flags);
+
+void acpi_tb_release_temp_table(struct acpi_table_desc *table_desc);
+
+acpi_status acpi_tb_validate_temp_table(struct acpi_table_desc *table_desc);
+
+acpi_status
+acpi_tb_verify_temp_table(struct acpi_table_desc *table_desc, char *signature);
+
+u8 acpi_tb_is_table_loaded(u32 table_index);
+
+void acpi_tb_set_table_loaded_flag(u32 table_index, u8 is_loaded);
+
 /*
  * tbfadt - FADT parse/convert/validate
  */
@@ -72,22 +97,32 @@ acpi_tb_find_table(char *signature,
  */
 acpi_status acpi_tb_resize_root_table_list(void);
 
-acpi_status acpi_tb_verify_table(struct acpi_table_desc *table_desc);
+acpi_status acpi_tb_validate_table(struct acpi_table_desc *table_desc);
+
+void acpi_tb_invalidate_table(struct acpi_table_desc *table_desc);
+
+void acpi_tb_override_table(struct acpi_table_desc *old_table_desc);
 
-struct acpi_table_header *acpi_tb_table_override(struct acpi_table_header
-						 *table_header,
-						 struct acpi_table_desc
-						 *table_desc);
+acpi_status
+acpi_tb_acquire_table(struct acpi_table_desc *table_desc,
+		      struct acpi_table_header **table_ptr,
+		      u32 *table_length, u8 *table_flags);
+
+void
+acpi_tb_release_table(struct acpi_table_header *table,
+		      u32 table_length, u8 table_flags);
 
 acpi_status
-acpi_tb_add_table(struct acpi_table_desc *table_desc, u32 *table_index);
+acpi_tb_install_standard_table(acpi_physical_address address,
+			       u8 flags,
+			       u8 reload, u8 override, u32 *table_index);
 
 acpi_status
 acpi_tb_store_table(acpi_physical_address address,
 		    struct acpi_table_header *table,
 		    u32 length, u8 flags, u32 *table_index);
 
-void acpi_tb_delete_table(struct acpi_table_desc *table_desc);
+void acpi_tb_uninstall_table(struct acpi_table_desc *table_desc);
 
 void acpi_tb_terminate(void);
@@ -99,10 +134,6 @@ acpi_status acpi_tb_release_owner_id(u32 table_index);
 acpi_status acpi_tb_get_owner_id(u32 table_index, acpi_owner_id *owner_id);
 
-u8 acpi_tb_is_table_loaded(u32 table_index);
-
-void acpi_tb_set_table_loaded_flag(u32 table_index, u8 is_loaded);
-
 /*
  * tbutils - table manager utilities
  */
@@ -124,7 +155,12 @@ void acpi_tb_check_dsdt_header(void);
 struct acpi_table_header *acpi_tb_copy_dsdt(u32 table_index);
 
 void
-acpi_tb_install_table(acpi_physical_address address,
-		      char *signature, u32 table_index);
+acpi_tb_install_table_with_override(u32 table_index,
+				    struct acpi_table_desc *new_table_desc,
+				    u8 override);
+
+acpi_status
+acpi_tb_install_fixed_table(acpi_physical_address address,
+			    char *signature, u32 table_index);
 
 acpi_status acpi_tb_parse_root_table(acpi_physical_address rsdp_address);
...
@@ -176,8 +176,7 @@ acpi_status acpi_ut_init_globals(void);
 
 char *acpi_ut_get_mutex_name(u32 mutex_id);
 
-const char *acpi_ut_get_notify_name(u32 notify_value);
-
+const char *acpi_ut_get_notify_name(u32 notify_value, acpi_object_type type);
 #endif
 
 char *acpi_ut_get_type_name(acpi_object_type type);

@@ -737,4 +736,11 @@ acpi_ut_method_error(const char *module_name,
 		     struct acpi_namespace_node *node,
 		     const char *path, acpi_status lookup_status);
 
+/*
+ * Utility functions for ACPI names and IDs
+ */
+const struct ah_predefined_name *acpi_ah_match_predefined_name(char *nameseg);
+
+const struct ah_device_id *acpi_ah_match_hardware_id(char *hid);
+
 #endif				/* _ACUTILS_H */
@@ -383,7 +383,7 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info * gpe_xrupt_list)
 		if (!(gpe_register_info->enable_for_run |
 		      gpe_register_info->enable_for_wake)) {
 			ACPI_DEBUG_PRINT((ACPI_DB_INTERRUPTS,
-					  "Ignore disabled registers for GPE%02X-GPE%02X: "
+					  "Ignore disabled registers for GPE %02X-%02X: "
 					  "RunEnable=%02X, WakeEnable=%02X\n",
 					  gpe_register_info->
 					  base_gpe_number,

@@ -416,7 +416,7 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info * gpe_xrupt_list)
 		}
 
 		ACPI_DEBUG_PRINT((ACPI_DB_INTERRUPTS,
-				  "Read registers for GPE%02X-GPE%02X: Status=%02X, Enable=%02X, "
+				  "Read registers for GPE %02X-%02X: Status=%02X, Enable=%02X, "
 				  "RunEnable=%02X, WakeEnable=%02X\n",
 				  gpe_register_info->base_gpe_number,
 				  gpe_register_info->base_gpe_number +

@@ -706,7 +706,8 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
 		status = acpi_hw_clear_gpe(gpe_event_info);
 		if (ACPI_FAILURE(status)) {
 			ACPI_EXCEPTION((AE_INFO, status,
-					"Unable to clear GPE%02X", gpe_number));
+					"Unable to clear GPE %02X",
+					gpe_number));
 			return_UINT32(ACPI_INTERRUPT_NOT_HANDLED);
 		}
 	}

@@ -723,7 +724,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
 	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_DISABLE);
 	if (ACPI_FAILURE(status)) {
 		ACPI_EXCEPTION((AE_INFO, status,
-				"Unable to disable GPE%02X", gpe_number));
+				"Unable to disable GPE %02X", gpe_number));
 		return_UINT32(ACPI_INTERRUPT_NOT_HANDLED);
 	}

@@ -764,7 +765,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
 						 gpe_event_info);
 		if (ACPI_FAILURE(status)) {
 			ACPI_EXCEPTION((AE_INFO, status,
-					"Unable to queue handler for GPE%02X - event disabled",
+					"Unable to queue handler for GPE %02X - event disabled",
 					gpe_number));
 		}
 		break;

@@ -776,7 +777,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
 		 * a GPE to be enabled if it has no handler or method.
 		 */
 		ACPI_ERROR((AE_INFO,
-			    "No handler or method for GPE%02X, disabling event",
+			    "No handler or method for GPE %02X, disabling event",
 			    gpe_number));
 		break;
...
@@ -252,21 +252,17 @@ acpi_ev_create_gpe_info_blocks(struct acpi_gpe_block_info *gpe_block)
 		/* Init the register_info for this GPE register (8 GPEs) */
 
-		this_register->base_gpe_number =
-		    (u8) (gpe_block->block_base_number +
-			  (i * ACPI_GPE_REGISTER_WIDTH));
+		this_register->base_gpe_number = (u16)
+		    (gpe_block->block_base_number +
+		     (i * ACPI_GPE_REGISTER_WIDTH));
 
-		this_register->status_address.address =
-		    gpe_block->block_address.address + i;
+		this_register->status_address.address = gpe_block->address + i;
 
 		this_register->enable_address.address =
-		    gpe_block->block_address.address + i +
-		    gpe_block->register_count;
+		    gpe_block->address + i + gpe_block->register_count;
 
-		this_register->status_address.space_id =
-		    gpe_block->block_address.space_id;
-		this_register->enable_address.space_id =
-		    gpe_block->block_address.space_id;
+		this_register->status_address.space_id = gpe_block->space_id;
+		this_register->enable_address.space_id = gpe_block->space_id;
+
 		this_register->status_address.bit_width =
 		    ACPI_GPE_REGISTER_WIDTH;
 		this_register->enable_address.bit_width =

@@ -334,9 +330,10 @@ acpi_ev_create_gpe_info_blocks(struct acpi_gpe_block_info *gpe_block)
 acpi_status
 acpi_ev_create_gpe_block(struct acpi_namespace_node *gpe_device,
-			 struct acpi_generic_address *gpe_block_address,
+			 u64 address,
+			 u8 space_id,
 			 u32 register_count,
-			 u8 gpe_block_base_number,
+			 u16 gpe_block_base_number,
 			 u32 interrupt_number,
 			 struct acpi_gpe_block_info **return_gpe_block)
 {

@@ -359,15 +356,14 @@ acpi_ev_create_gpe_block(struct acpi_namespace_node *gpe_device,
 	/* Initialize the new GPE block */
 
+	gpe_block->address = address;
+	gpe_block->space_id = space_id;
 	gpe_block->node = gpe_device;
 	gpe_block->gpe_count = (u16)(register_count * ACPI_GPE_REGISTER_WIDTH);
 	gpe_block->initialized = FALSE;
 	gpe_block->register_count = register_count;
 	gpe_block->block_base_number = gpe_block_base_number;
 
-	ACPI_MEMCPY(&gpe_block->block_address, gpe_block_address,
-		    sizeof(struct acpi_generic_address));
-
 	/*
 	 * Create the register_info and event_info sub-structures
 	 * Note: disables and clears all GPEs in the block

@@ -408,12 +404,14 @@ acpi_ev_create_gpe_block(struct acpi_namespace_node *gpe_device,
 	}
 
 	ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
-			      "    Initialized GPE %02X to %02X [%4.4s] %u regs on interrupt 0x%X\n",
+			      "    Initialized GPE %02X to %02X [%4.4s] %u regs on interrupt 0x%X%s\n",
 			      (u32)gpe_block->block_base_number,
 			      (u32)(gpe_block->block_base_number +
 				    (gpe_block->gpe_count - 1)),
 			      gpe_device->name.ascii, gpe_block->register_count,
-			      interrupt_number));
+			      interrupt_number,
+			      interrupt_number ==
+			      acpi_gbl_FADT.sci_interrupt ? " (SCI)" : ""));
 
 	/* Update global count of currently available GPEs */
...
@@ -131,8 +131,10 @@ acpi_status acpi_ev_gpe_initialize(void)
 		/* Install GPE Block 0 */
 
 		status = acpi_ev_create_gpe_block(acpi_gbl_fadt_gpe_device,
-						  &acpi_gbl_FADT.xgpe0_block,
-						  register_count0, 0,
+						  acpi_gbl_FADT.xgpe0_block.
+						  address,
+						  acpi_gbl_FADT.xgpe0_block.
+						  space_id, register_count0, 0,
 						  acpi_gbl_FADT.sci_interrupt,
 						  &acpi_gbl_gpe_fadt_blocks[0]);

@@ -169,8 +171,10 @@ acpi_status acpi_ev_gpe_initialize(void)
 			status =
 			    acpi_ev_create_gpe_block(acpi_gbl_fadt_gpe_device,
-						     &acpi_gbl_FADT.xgpe1_block,
-						     register_count1,
+						     acpi_gbl_FADT.xgpe1_block.
+						     address,
+						     acpi_gbl_FADT.xgpe1_block.
+						     space_id, register_count1,
 						     acpi_gbl_FADT.gpe1_base,
 						     acpi_gbl_FADT.
 						     sci_interrupt,
...
@@ -167,7 +167,8 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
 			  "Dispatching Notify on [%4.4s] (%s) Value 0x%2.2X (%s) Node %p\n",
 			  acpi_ut_get_node_name(node),
 			  acpi_ut_get_type_name(node->type), notify_value,
-			  acpi_ut_get_notify_name(notify_value), node));
+			  acpi_ut_get_notify_name(notify_value, ACPI_TYPE_ANY),
+			  node));
 
 	status = acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_ev_notify_dispatch,
 				 info);
...
@@ -117,7 +117,7 @@ static u32 ACPI_SYSTEM_XFACE acpi_ev_sci_xrupt_handler(void *context)
 	ACPI_FUNCTION_TRACE(ev_sci_xrupt_handler);
 
 	/*
-	 * We are guaranteed by the ACPI CA initialization/shutdown code that
+	 * We are guaranteed by the ACPICA initialization/shutdown code that
 	 * if this interrupt handler is installed, ACPI is enabled.
 	 */
...
@@ -239,7 +239,7 @@ acpi_remove_notify_handler(acpi_handle device,
 	union acpi_operand_object *obj_desc;
 	union acpi_operand_object *handler_obj;
 	union acpi_operand_object *previous_handler_obj;
-	acpi_status status;
+	acpi_status status = AE_OK;
 	u32 i;

 	ACPI_FUNCTION_TRACE(acpi_remove_notify_handler);
@@ -251,20 +251,17 @@ acpi_remove_notify_handler(acpi_handle device,
 		return_ACPI_STATUS(AE_BAD_PARAMETER);
 	}

-	/* Make sure all deferred notify tasks are completed */
-
-	acpi_os_wait_events_complete();
-
-	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
-	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
-	}
-
 	/* Root Object. Global handlers are removed here */

 	if (device == ACPI_ROOT_OBJECT) {
 		for (i = 0; i < ACPI_NUM_NOTIFY_TYPES; i++) {
 			if (handler_type & (i + 1)) {
+				status =
+				    acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+				if (ACPI_FAILURE(status)) {
+					return_ACPI_STATUS(status);
+				}
+
 				if (!acpi_gbl_global_notify[i].handler ||
 				    (acpi_gbl_global_notify[i].handler !=
 				     handler)) {
@@ -277,31 +274,40 @@ acpi_remove_notify_handler(acpi_handle device,
 				acpi_gbl_global_notify[i].handler = NULL;
 				acpi_gbl_global_notify[i].context = NULL;
+
+				(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+				/* Make sure all deferred notify tasks are completed */
+
+				acpi_os_wait_events_complete();
 			}
 		}

-		goto unlock_and_exit;
+		return_ACPI_STATUS(AE_OK);
 	}

 	/* All other objects: Are Notifies allowed on this object? */

 	if (!acpi_ev_is_notify_object(node)) {
-		status = AE_TYPE;
-		goto unlock_and_exit;
+		return_ACPI_STATUS(AE_TYPE);
 	}

 	/* Must have an existing internal object */

 	obj_desc = acpi_ns_get_attached_object(node);
 	if (!obj_desc) {
-		status = AE_NOT_EXIST;
-		goto unlock_and_exit;
+		return_ACPI_STATUS(AE_NOT_EXIST);
 	}

 	/* Internal object exists. Find the handler and remove it */

 	for (i = 0; i < ACPI_NUM_NOTIFY_TYPES; i++) {
 		if (handler_type & (i + 1)) {
+			status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+			if (ACPI_FAILURE(status)) {
+				return_ACPI_STATUS(status);
+			}
+
 			handler_obj = obj_desc->common_notify.notify_list[i];
 			previous_handler_obj = NULL;
@@ -329,10 +335,17 @@ acpi_remove_notify_handler(acpi_handle device,
 				    handler_obj->notify.next[i];
 			}

+			(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+			/* Make sure all deferred notify tasks are completed */
+
+			acpi_os_wait_events_complete();
+
 			acpi_ut_remove_reference(handler_obj);
 		}
 	}

+	return_ACPI_STATUS(status);
+
 unlock_and_exit:
 	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
 	return_ACPI_STATUS(status);
@@ -457,6 +470,8 @@ acpi_status acpi_install_sci_handler(acpi_sci_handler address, void *context)
 	return_ACPI_STATUS(status);
 }

+ACPI_EXPORT_SYMBOL(acpi_install_sci_handler)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_remove_sci_handler
@@ -468,7 +483,6 @@ acpi_status acpi_install_sci_handler(acpi_sci_handler address, void *context)
  * DESCRIPTION: Remove a handler for a System Control Interrupt.
  *
  ******************************************************************************/
-
 acpi_status acpi_remove_sci_handler(acpi_sci_handler address)
 {
 	struct acpi_sci_handler_info *prev_sci_handler;
@@ -522,6 +536,8 @@ acpi_status acpi_remove_sci_handler(acpi_sci_handler address)
 	return_ACPI_STATUS(status);
 }

+ACPI_EXPORT_SYMBOL(acpi_remove_sci_handler)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_install_global_event_handler
@@ -537,7 +553,6 @@ acpi_status acpi_remove_sci_handler(acpi_sci_handler address)
  *              Can be used to update event counters, etc.
  *
  ******************************************************************************/
-
 acpi_status
 acpi_install_global_event_handler(acpi_gbl_event_handler handler, void *context)
 {
@@ -840,10 +855,6 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
 		return_ACPI_STATUS(AE_BAD_PARAMETER);
 	}

-	/* Make sure all deferred GPE tasks are completed */
-
-	acpi_os_wait_events_complete();
-
 	status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
 	if (ACPI_FAILURE(status)) {
 		return_ACPI_STATUS(status);
@@ -895,9 +906,17 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
 		(void)acpi_ev_add_gpe_reference(gpe_event_info);
 	}

+	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+	(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
+
+	/* Make sure all deferred GPE tasks are completed */
+
+	acpi_os_wait_events_complete();
+
 	/* Now we can free the handler object */

 	ACPI_FREE(handler);

+	return_ACPI_STATUS(status);
+
 unlock_and_exit:
 	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
...
@@ -599,9 +599,10 @@ acpi_install_gpe_block(acpi_handle gpe_device,
 	 * For user-installed GPE Block Devices, the gpe_block_base_number
 	 * is always zero
 	 */
-	status =
-	    acpi_ev_create_gpe_block(node, gpe_block_address,
-				     register_count, 0, interrupt_number,
-				     &gpe_block);
+	status = acpi_ev_create_gpe_block(node, gpe_block_address->address,
+					  gpe_block_address->space_id,
+					  register_count, 0, interrupt_number,
+					  &gpe_block);
 	if (ACPI_FAILURE(status)) {
 		goto unlock_and_exit;
 	}
...
@@ -343,16 +343,14 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		struct acpi_walk_state *walk_state)
 {
 	union acpi_operand_object *ddb_handle;
+	struct acpi_table_header *table_header;
 	struct acpi_table_header *table;
-	struct acpi_table_desc table_desc;
 	u32 table_index;
 	acpi_status status;
 	u32 length;

 	ACPI_FUNCTION_TRACE(ex_load_op);

-	ACPI_MEMSET(&table_desc, 0, sizeof(struct acpi_table_desc));
-
 	/* Source Object can be either an op_region or a Buffer/Field */

 	switch (obj_desc->common.type) {
@@ -380,17 +378,17 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		/* Get the table header first so we can get the table length */

-		table = ACPI_ALLOCATE(sizeof(struct acpi_table_header));
-		if (!table) {
+		table_header = ACPI_ALLOCATE(sizeof(struct acpi_table_header));
+		if (!table_header) {
 			return_ACPI_STATUS(AE_NO_MEMORY);
 		}

 		status =
 		    acpi_ex_region_read(obj_desc,
 					sizeof(struct acpi_table_header),
-					ACPI_CAST_PTR(u8, table));
-		length = table->length;
-		ACPI_FREE(table);
+					ACPI_CAST_PTR(u8, table_header));
+		length = table_header->length;
+		ACPI_FREE(table_header);

 		if (ACPI_FAILURE(status)) {
 			return_ACPI_STATUS(status);
@@ -420,22 +418,19 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		/* Allocate a buffer for the table */

-		table_desc.pointer = ACPI_ALLOCATE(length);
-		if (!table_desc.pointer) {
+		table = ACPI_ALLOCATE(length);
+		if (!table) {
 			return_ACPI_STATUS(AE_NO_MEMORY);
 		}

 		/* Read the entire table */

 		status = acpi_ex_region_read(obj_desc, length,
-					     ACPI_CAST_PTR(u8,
-							   table_desc.pointer));
+					     ACPI_CAST_PTR(u8, table));
 		if (ACPI_FAILURE(status)) {
-			ACPI_FREE(table_desc.pointer);
+			ACPI_FREE(table);
 			return_ACPI_STATUS(status);
 		}
-
-		table_desc.address = obj_desc->region.address;
 		break;

 	case ACPI_TYPE_BUFFER:	/* Buffer or resolved region_field */
@@ -452,10 +447,10 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		/* Get the actual table length from the table header */

-		table =
+		table_header =
 		    ACPI_CAST_PTR(struct acpi_table_header,
 				  obj_desc->buffer.pointer);
-		length = table->length;
+		length = table_header->length;

 		/* Table cannot extend beyond the buffer */

@@ -470,13 +465,12 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		 * Copy the table from the buffer because the buffer could be modified
 		 * or even deleted in the future
 		 */
-		table_desc.pointer = ACPI_ALLOCATE(length);
-		if (!table_desc.pointer) {
+		table = ACPI_ALLOCATE(length);
+		if (!table) {
 			return_ACPI_STATUS(AE_NO_MEMORY);
 		}

-		ACPI_MEMCPY(table_desc.pointer, table, length);
-		table_desc.address = ACPI_TO_INTEGER(table_desc.pointer);
+		ACPI_MEMCPY(table, table_header, length);
 		break;

 	default:
@@ -484,27 +478,32 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
 	}

-	/* Validate table checksum (will not get validated in tb_add_table) */
-
-	status = acpi_tb_verify_checksum(table_desc.pointer, length);
-	if (ACPI_FAILURE(status)) {
-		ACPI_FREE(table_desc.pointer);
-		return_ACPI_STATUS(status);
-	}
-
-	/* Complete the table descriptor */
-
-	table_desc.length = length;
-	table_desc.flags = ACPI_TABLE_ORIGIN_ALLOCATED;
-
 	/* Install the new table into the local data structures */

-	status = acpi_tb_add_table(&table_desc, &table_index);
+	ACPI_INFO((AE_INFO, "Dynamic OEM Table Load:"));
+	(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
+
+	status = acpi_tb_install_standard_table(ACPI_PTR_TO_PHYSADDR(table),
+						ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL,
+						TRUE, TRUE, &table_index);
+
+	(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
 	if (ACPI_FAILURE(status)) {

 		/* Delete allocated table buffer */

-		acpi_tb_delete_table(&table_desc);
+		ACPI_FREE(table);
+		return_ACPI_STATUS(status);
+	}
+
+	/*
+	 * Note: Now table is "INSTALLED", it must be validated before
+	 * loading.
+	 */
+	status =
+	    acpi_tb_validate_table(&acpi_gbl_root_table_list.
+				   tables[table_index]);
+	if (ACPI_FAILURE(status)) {
 		return_ACPI_STATUS(status);
 	}

@@ -536,9 +535,6 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 		return_ACPI_STATUS(status);
 	}

-	ACPI_INFO((AE_INFO, "Dynamic OEM Table Load:"));
-	acpi_tb_print_table_header(0, table_desc.pointer);
-
 	/* Remove the reference by added by acpi_ex_store above */

 	acpi_ut_remove_reference(ddb_handle);
@@ -546,8 +542,7 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
 	/* Invoke table handler if present */

 	if (acpi_gbl_table_handler) {
-		(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD,
-					     table_desc.pointer,
+		(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD, table,
 					     acpi_gbl_table_handler_context);
 	}

@@ -575,6 +570,13 @@ acpi_status acpi_ex_unload_table(union acpi_operand_object *ddb_handle)

 	ACPI_FUNCTION_TRACE(ex_unload_table);

+	/*
+	 * Temporarily emit a warning so that the ASL for the machine can be
+	 * hopefully obtained. This is to say that the Unload() operator is
+	 * extremely rare if not completely unused.
+	 */
+	ACPI_WARNING((AE_INFO, "Received request to unload an ACPI table"));
+
 	/*
 	 * Validate the handle
 	 * Although the handle is partially validated in acpi_ex_reconfiguration()
...
@@ -134,9 +134,11 @@ static struct acpi_exdump_info acpi_ex_dump_method[9] = {
 	{ACPI_EXD_POINTER, ACPI_EXD_OFFSET(method.aml_start), "Aml Start"}
 };

-static struct acpi_exdump_info acpi_ex_dump_mutex[5] = {
+static struct acpi_exdump_info acpi_ex_dump_mutex[6] = {
 	{ACPI_EXD_INIT, ACPI_EXD_TABLE_SIZE(acpi_ex_dump_mutex), NULL},
 	{ACPI_EXD_UINT8, ACPI_EXD_OFFSET(mutex.sync_level), "Sync Level"},
+	{ACPI_EXD_UINT8, ACPI_EXD_OFFSET(mutex.original_sync_level),
+	 "Original Sync Level"},
 	{ACPI_EXD_POINTER, ACPI_EXD_OFFSET(mutex.owner_thread), "Owner Thread"},
 	{ACPI_EXD_UINT16, ACPI_EXD_OFFSET(mutex.acquisition_depth),
 	 "Acquire Depth"},
...
@@ -140,11 +140,12 @@ acpi_hw_derive_pci_id(struct acpi_pci_id *pci_id,
 		/* Walk the list, updating the PCI device/function/bus numbers */

 		status = acpi_hw_process_pci_list(pci_id, list_head);
-	}

-	/* Always delete the list */
+		/* Delete the list */

-	acpi_hw_delete_pci_list(list_head);
+		acpi_hw_delete_pci_list(list_head);
+	}

 	return_ACPI_STATUS(status);
 }
@@ -187,6 +188,10 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device,
 	while (1) {
 		status = acpi_get_parent(current_device, &parent_device);
 		if (ACPI_FAILURE(status)) {
+
+			/* Must delete the list before exit */
+
+			acpi_hw_delete_pci_list(*return_list_head);
 			return (status);
 		}
@@ -199,6 +204,10 @@ acpi_hw_build_pci_list(acpi_handle root_pci_device,
 		list_element = ACPI_ALLOCATE(sizeof(struct acpi_pci_device));
 		if (!list_element) {
+
+			/* Must delete the list before exit */
+
+			acpi_hw_delete_pci_list(*return_list_head);
 			return (AE_NO_MEMORY);
 		}
...
@@ -72,6 +72,8 @@ acpi_buffer_to_resource(u8 *aml_buffer,
 	void *resource;
 	void *current_resource_ptr;

+	ACPI_FUNCTION_TRACE(acpi_buffer_to_resource);
+
 	/*
 	 * Note: we allow AE_AML_NO_RESOURCE_END_TAG, since an end tag
 	 * is not required here.
@@ -85,7 +87,7 @@ acpi_buffer_to_resource(u8 *aml_buffer,
 		status = AE_OK;
 	}
 	if (ACPI_FAILURE(status)) {
-		return (status);
+		return_ACPI_STATUS(status);
 	}

 	/* Allocate a buffer for the converted resource */
@@ -93,7 +95,7 @@ acpi_buffer_to_resource(u8 *aml_buffer,
 	resource = ACPI_ALLOCATE_ZEROED(list_size_needed);
 	current_resource_ptr = resource;
 	if (!resource) {
-		return (AE_NO_MEMORY);
+		return_ACPI_STATUS(AE_NO_MEMORY);
 	}

 	/* Perform the AML-to-Resource conversion */
@@ -110,9 +112,11 @@ acpi_buffer_to_resource(u8 *aml_buffer,
 		*resource_ptr = resource;
 	}

-	return (status);
+	return_ACPI_STATUS(status);
 }

+ACPI_EXPORT_SYMBOL(acpi_buffer_to_resource)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_rs_create_resource_list
@@ -130,10 +134,9 @@ acpi_buffer_to_resource(u8 *aml_buffer,
  *              of device resources.
  *
  ******************************************************************************/
-
 acpi_status
 acpi_rs_create_resource_list(union acpi_operand_object *aml_buffer,
-			     struct acpi_buffer * output_buffer)
+			     struct acpi_buffer *output_buffer)
 {
 	acpi_status status;
...
@@ -52,7 +52,8 @@ ACPI_MODULE_NAME("tbfadt")
 static void
 acpi_tb_init_generic_address(struct acpi_generic_address *generic_address,
 			     u8 space_id,
-			     u8 byte_width, u64 address, char *register_name);
+			     u8 byte_width,
+			     u64 address, char *register_name, u8 flags);

 static void acpi_tb_convert_fadt(void);
@@ -69,13 +70,14 @@ typedef struct acpi_fadt_info {
 	u16 address32;
 	u16 length;
 	u8 default_length;
-	u8 type;
+	u8 flags;
 } acpi_fadt_info;

 #define ACPI_FADT_OPTIONAL          0
 #define ACPI_FADT_REQUIRED          1
 #define ACPI_FADT_SEPARATE_LENGTH   2
+#define ACPI_FADT_GPE_REGISTER      4

 static struct acpi_fadt_info fadt_info_table[] = {
 	{"Pm1aEventBlock",
@@ -125,14 +127,14 @@ static struct acpi_fadt_info fadt_info_table[] = {
 	 ACPI_FADT_OFFSET(gpe0_block),
 	 ACPI_FADT_OFFSET(gpe0_block_length),
 	 0,
-	 ACPI_FADT_SEPARATE_LENGTH},
+	 ACPI_FADT_SEPARATE_LENGTH | ACPI_FADT_GPE_REGISTER},

 	{"Gpe1Block",
 	 ACPI_FADT_OFFSET(xgpe1_block),
 	 ACPI_FADT_OFFSET(gpe1_block),
 	 ACPI_FADT_OFFSET(gpe1_block_length),
 	 0,
-	 ACPI_FADT_SEPARATE_LENGTH}
+	 ACPI_FADT_SEPARATE_LENGTH | ACPI_FADT_GPE_REGISTER}
 };

 #define ACPI_FADT_INFO_ENTRIES \
@@ -189,19 +191,29 @@ static struct acpi_fadt_pm_info fadt_pm_info_table[] = {
 static void
 acpi_tb_init_generic_address(struct acpi_generic_address *generic_address,
 			     u8 space_id,
-			     u8 byte_width, u64 address, char *register_name)
+			     u8 byte_width,
+			     u64 address, char *register_name, u8 flags)
 {
 	u8 bit_width;

-	/* Bit width field in the GAS is only one byte long, 255 max */
-
+	/*
+	 * Bit width field in the GAS is only one byte long, 255 max.
+	 * Check for bit_width overflow in GAS.
+	 */
 	bit_width = (u8)(byte_width * 8);
-
-	if (byte_width > 31) {	/* (31*8)=248 */
-		ACPI_ERROR((AE_INFO,
-			    "%s - 32-bit FADT register is too long (%u bytes, %u bits) "
-			    "to convert to GAS struct - 255 bits max, truncating",
-			    register_name, byte_width, (byte_width * 8)));
+	if (byte_width > 31) {	/* (31*8)=248, (32*8)=256 */
+		/*
+		 * No error for GPE blocks, because we do not use the bit_width
+		 * for GPEs, the legacy length (byte_width) is used instead to
+		 * allow for a large number of GPEs.
+		 */
+		if (!(flags & ACPI_FADT_GPE_REGISTER)) {
+			ACPI_ERROR((AE_INFO,
+				    "%s - 32-bit FADT register is too long (%u bytes, %u bits) "
+				    "to convert to GAS struct - 255 bits max, truncating",
+				    register_name, byte_width,
+				    (byte_width * 8)));
+		}

 		bit_width = 255;
 	}
@@ -332,14 +344,14 @@ void acpi_tb_parse_fadt(u32 table_index)
 	/* Obtain the DSDT and FACS tables via their addresses within the FADT */

-	acpi_tb_install_table((acpi_physical_address) acpi_gbl_FADT.Xdsdt,
-			      ACPI_SIG_DSDT, ACPI_TABLE_INDEX_DSDT);
+	acpi_tb_install_fixed_table((acpi_physical_address) acpi_gbl_FADT.Xdsdt,
+				    ACPI_SIG_DSDT, ACPI_TABLE_INDEX_DSDT);

 	/* If Hardware Reduced flag is set, there is no FACS */

 	if (!acpi_gbl_reduced_hardware) {
-		acpi_tb_install_table((acpi_physical_address) acpi_gbl_FADT.
-				      Xfacs, ACPI_SIG_FACS,
-				      ACPI_TABLE_INDEX_FACS);
+		acpi_tb_install_fixed_table((acpi_physical_address)
+					    acpi_gbl_FADT.Xfacs, ACPI_SIG_FACS,
+					    ACPI_TABLE_INDEX_FACS);
 	}
 }
@@ -450,6 +462,7 @@ static void acpi_tb_convert_fadt(void)
 	struct acpi_generic_address *address64;
 	u32 address32;
 	u8 length;
+	u8 flags;
 	u32 i;

 	/*
@@ -515,6 +528,7 @@ static void acpi_tb_convert_fadt(void)
 						    fadt_info_table[i].length);

 		name = fadt_info_table[i].name;
+		flags = fadt_info_table[i].flags;

 		/*
 		 * Expand the ACPI 1.0 32-bit addresses to the ACPI 2.0 64-bit "X"
@@ -554,7 +568,7 @@ static void acpi_tb_convert_fadt(void)
							     [i].
							     length),
					     (u64)address32,
-					     name);
+					     name, flags);
 		} else if (address64->address != (u64)address32) {

 			/* Address mismatch */
@@ -582,7 +596,8 @@ static void acpi_tb_convert_fadt(void)
								      length),
							     (u64)
							     address32,
-							     name);
+							     name,
+							     flags);
 			}
 		}
@@ -603,7 +618,7 @@ static void acpi_tb_convert_fadt(void)
				    address64->bit_width));
 		}

-		if (fadt_info_table[i].type & ACPI_FADT_REQUIRED) {
+		if (fadt_info_table[i].flags & ACPI_FADT_REQUIRED) {
 			/*
 			 * Field is required (Pm1a_event, Pm1a_control).
 			 * Both the address and length must be non-zero.
@@ -617,7 +632,7 @@ static void acpi_tb_convert_fadt(void)
							  address),
					   length));
 			}
-		} else if (fadt_info_table[i].type & ACPI_FADT_SEPARATE_LENGTH) {
+		} else if (fadt_info_table[i].flags & ACPI_FADT_SEPARATE_LENGTH) {
 			/*
 			 * Field is optional (Pm2_control, GPE0, GPE1) AND has its own
 			 * length field. If present, both the address and length must
@@ -726,7 +741,7 @@ static void acpi_tb_setup_fadt_registers(void)
						     (fadt_pm_info_table[i].
						      register_num *
						      pm1_register_byte_width),
-					     "PmRegisters");
+					     "PmRegisters", 0);
 		}
 	}
 }
@@ -99,7 +99,7 @@ acpi_tb_find_table(char *signature,
 			/* Table is not currently mapped, map it */

 			status =
-			    acpi_tb_verify_table(&acpi_gbl_root_table_list.
+			    acpi_tb_validate_table(&acpi_gbl_root_table_list.
						   tables[i]);
 			if (ACPI_FAILURE(status)) {
 				return_ACPI_STATUS(status);
...
@@ -233,7 +233,7 @@ acpi_get_table_header(char *signature,
 		if (!acpi_gbl_root_table_list.tables[i].pointer) {
 			if ((acpi_gbl_root_table_list.tables[i].flags &
 			     ACPI_TABLE_ORIGIN_MASK) ==
-			    ACPI_TABLE_ORIGIN_MAPPED) {
+			    ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL) {
 				header =
 				    acpi_os_map_memory(acpi_gbl_root_table_list.
						       tables[i].address,
@@ -346,7 +346,7 @@ acpi_get_table_with_size(char *signature,
 		}

 		status =
-		    acpi_tb_verify_table(&acpi_gbl_root_table_list.tables[i]);
+		    acpi_tb_validate_table(&acpi_gbl_root_table_list.tables[i]);
 		if (ACPI_SUCCESS(status)) {
 			*out_table = acpi_gbl_root_table_list.tables[i].pointer;
 			*tbl_size = acpi_gbl_root_table_list.tables[i].length;
@@ -390,7 +390,7 @@ ACPI_EXPORT_SYMBOL(acpi_get_table)
  *
  ******************************************************************************/
 acpi_status
-acpi_get_table_by_index(u32 table_index, struct acpi_table_header **table)
+acpi_get_table_by_index(u32 table_index, struct acpi_table_header ** table)
 {
 	acpi_status status;
@@ -416,7 +416,7 @@ acpi_get_table_by_index(u32 table_index, struct acpi_table_header **table)
 		/* Table is not mapped, map it */

 		status =
-		    acpi_tb_verify_table(&acpi_gbl_root_table_list.
+		    acpi_tb_validate_table(&acpi_gbl_root_table_list.
					   tables[table_index]);
 		if (ACPI_FAILURE(status)) {
 			(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
...
@@ -353,7 +353,7 @@ void acpi_ut_print_string(char *string, u16 max_length)
 	}

 	acpi_os_printf("\"");
-	for (i = 0; string[i] && (i < max_length); i++) {
+	for (i = 0; (i < max_length) && string[i]; i++) {

 		/* Escape sequences */
...
@@ -53,6 +53,7 @@ ACPI_MODULE_NAME("utxferror")
  * This module is used for the in-kernel ACPICA as well as the ACPICA
  * tools/applications.
  */
+#ifndef ACPI_NO_ERROR_MESSAGES	/* Entire module */

 /*******************************************************************************
  *
  * FUNCTION:    acpi_error
@@ -249,3 +250,4 @@ acpi_bios_warning(const char *module_name,
 }

 ACPI_EXPORT_SYMBOL(acpi_bios_warning)
+#endif				/* ACPI_NO_ERROR_MESSAGES */
@@ -8,6 +8,7 @@ obj-$(CONFIG_COMMON_CLK)	+= clk-fixed-rate.o
 obj-$(CONFIG_COMMON_CLK)	+= clk-gate.o
 obj-$(CONFIG_COMMON_CLK)	+= clk-mux.o
 obj-$(CONFIG_COMMON_CLK)	+= clk-composite.o
+obj-$(CONFIG_COMMON_CLK)	+= clk-fractional-divider.o

 # hardware specific clock types
 # please keep this section sorted lexicographically by file/directory path name
...