Commit 1c45d9a9 authored by Linus Torvalds

Merge tag 'pm+acpi-3.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael Wysocki:
 "This is material that didn't make it to my 3.18-rc1 pull request for
  various reasons, mostly related to timing and travel (LinuxCon EU /
  LPC) plus a couple of fixes for recent bugs.

  The only really new thing here is the PM QoS class for memory
  bandwidth, but it is simple enough and users of it will be added in
  the next cycle.  One major change in behavior is that platform devices
  enumerated by ACPI will use 32-bit DMA mask by default.  Also included
  is an ACPICA update to a new upstream release, but that's mostly
  cleanups, changes in tools and similar.  The rest is fixes and
  cleanups mostly.

  Specifics:

   - Fix for a recent PCI power management change that overlooked the
     fact that some IRQ chips might not be able to configure PCIe PME
     for system wakeup from Lucas Stach.

   - Fix for a bug introduced in 3.17 where acpi_device_wakeup() is
     called with a wrong ordering of arguments from Zhang Rui.

   - A bunch of intel_pstate driver fixes (all -stable candidates) from
     Dirk Brandewie, Gabriele Mazzotta and Pali Rohár.

   - Fixes for a rather long-standing problem with the OOM killer and
     the freezer, whereby frozen processes killed by the OOM killer do
     not actually release any memory until they are thawed, so
     OOM-killing them is rather pointless, with a couple of cleanups on
     top (Michal Hocko, Cong Wang, Rafael J Wysocki).

   - ACPICA update to upstream release 20140926, including mostly
     cleanups reducing differences between the upstream ACPICA and the
     kernel code, tools changes (acpidump, acpiexec) and support for the
     _DDN object (Bob Moore, Lv Zheng).

   - New PM QoS class for memory bandwidth from Tomeu Vizoso.

   - Default 32-bit DMA mask for platform devices enumerated by ACPI
     (this change is mostly needed for some driver development in
     progress, targeted at 3.19) from Heikki Krogerus.

   - ACPI EC driver cleanups, mostly related to debugging, from Lv
     Zheng.

   - cpufreq-dt driver updates from Thomas Petazzoni.

   - powernv cpuidle driver update from Preeti U Murthy"

* tag 'pm+acpi-3.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (34 commits)
  intel_pstate: Correct BYT VID values.
  intel_pstate: Fix BYT frequency reporting
  intel_pstate: Don't lose sysfs settings during cpu offline
  cpufreq: intel_pstate: Reflect current no_turbo state correctly
  cpufreq: expose scaling_cur_freq sysfs file for set_policy() drivers
  cpufreq: intel_pstate: Fix setting max_perf_pct in performance policy
  PCI / PM: handle failure to enable wakeup on PCIe PME
  ACPI: invoke acpi_device_wakeup() with correct parameters
  PM / freezer: Clean up code after recent fixes
  PM: convert do_each_thread to for_each_process_thread
  OOM, PM: OOM killed task shouldn't escape PM suspend
  freezer: remove obsolete comments in __thaw_task()
  freezer: Do not freeze tasks killed by OOM killer
  ACPI / platform: provide default DMA mask
  cpuidle: powernv: Populate cpuidle state details by querying the device-tree
  cpufreq: cpufreq-dt: adjust message related to regulators
  cpufreq: cpufreq-dt: extend with platform_data
  cpufreq: allow driver-specific data
  ACPI / EC: Cleanup coding style.
  ACPI / EC: Refine event/query debugging messages.
  ...
parents 8264fce6 a91e99e2
@@ -5,7 +5,8 @@ performance expectations by drivers, subsystems and user space applications on
 one of the parameters.
 Two different PM QoS frameworks are available:
-1. PM QoS classes for cpu_dma_latency, network_latency, network_throughput.
+1. PM QoS classes for cpu_dma_latency, network_latency, network_throughput,
+memory_bandwidth.
 2. the per-device PM QoS framework provides the API to manage the per-device latency
 constraints and PM QoS flags.
@@ -13,6 +14,7 @@ Each parameters have defined units:
 * latency: usec
 * timeout: usec
 * throughput: kbs (kilo bit / sec)
+* memory bandwidth: mbs (mega bit / sec)
 1. PM QoS framework
...
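The new class slots into the existing PM QoS request API. A minimal consumer sketch, not part of the diff above; it assumes the PM_QOS_MEMORY_BANDWIDTH constant added by this series together with the long-standing pm_qos_add_request()/pm_qos_remove_request() calls (requests in this class are aggregated by summing):

#include <linux/pm_qos.h>

static struct pm_qos_request example_mem_bw_req;

/* Ask for roughly 200 Mbit/s of memory bandwidth while a transfer runs. */
static void example_transfer_start(void)
{
        pm_qos_add_request(&example_mem_bw_req, PM_QOS_MEMORY_BANDWIDTH, 200);
}

static void example_transfer_stop(void)
{
        pm_qos_remove_request(&example_mem_bw_req);
}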
@@ -16,6 +16,7 @@
 #include <linux/err.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/dma-mapping.h>
 #include <linux/platform_device.h>
 #include "internal.h"
@@ -102,6 +103,7 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
        pdevinfo.res = resources;
        pdevinfo.num_res = count;
        pdevinfo.acpi_node.companion = adev;
+       pdevinfo.dma_mask = DMA_BIT_MASK(32);
        pdev = platform_device_register_full(&pdevinfo);
        if (IS_ERR(pdev))
                dev_err(&adev->dev, "platform device creation failed: %ld\n",
...
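With the core now pre-setting a 32-bit mask, a driver that can address more than 32 bits raises the mask itself. A hedged sketch (driver and function names are hypothetical; dma_set_mask_and_coherent() is the standard helper):

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        /* The ACPI core defaults ACPI-enumerated platform devices to
         * DMA_BIT_MASK(32); widening the mask is an explicit opt-in. */
        return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
}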
@@ -127,7 +127,7 @@ acpi_hw_clear_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 acpi_status
 acpi_hw_get_gpe_status(struct acpi_gpe_event_info *gpe_event_info,
-                      acpi_event_status * event_status);
+                      acpi_event_status *event_status);
 acpi_status acpi_hw_disable_all_gpes(void);
...
@@ -413,8 +413,8 @@ struct acpi_gpe_handler_info {
        acpi_gpe_handler address;       /* Address of handler, if any */
        void *context;                  /* Context to be passed to handler */
        struct acpi_namespace_node *method_node;        /* Method node for this GPE level (saved) */
        u8 original_flags;              /* Original (pre-handler) GPE info */
        u8 originally_enabled;          /* True if GPE was originally enabled */
 };
 /* Notify info for implicit notify, multiple device objects */
...
@@ -49,6 +49,8 @@ acpi_status acpi_allocate_root_table(u32 initial_table_count);
 /*
  * tbxfroot - Root pointer utilities
  */
+u32 acpi_tb_get_rsdp_length(struct acpi_table_rsdp *rsdp);
+
 acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp *rsdp);
 u8 *acpi_tb_scan_memory_for_rsdp(u8 *start_address, u32 length);
...
@@ -117,6 +117,12 @@ struct asl_resource_node {
        struct asl_resource_node *next;
 };
+struct asl_resource_info {
+       union acpi_parse_object *descriptor_type_op;    /* Resource descriptor parse node */
+       union acpi_parse_object *mapping_op;            /* Used for mapfile support */
+       u32 current_byte_offset;                        /* Offset in resource template */
+};
 /* Macros used to generate AML resource length fields */
 #define ACPI_AML_SIZE_LARGE(r)      (sizeof (r) - sizeof (struct aml_resource_large_header))
@@ -449,4 +455,32 @@ union aml_resource {
        u8 byte_item;
 };
+/* Interfaces used by both the disassembler and compiler */
+void
+mp_save_gpio_info(union acpi_parse_object *op,
+                 union aml_resource *resource,
+                 u32 pin_count, u16 *pin_list, char *device_name);
+void
+mp_save_serial_info(union acpi_parse_object *op,
+                   union aml_resource *resource, char *device_name);
+char *mp_get_hid_from_parse_tree(struct acpi_namespace_node *hid_node);
+char *mp_get_hid_via_namestring(char *device_name);
+char *mp_get_connection_info(union acpi_parse_object *op,
+                            u32 pin_index,
+                            struct acpi_namespace_node **target_node,
+                            char **target_name);
+char *mp_get_parent_device_hid(union acpi_parse_object *op,
+                              struct acpi_namespace_node **target_node,
+                              char **parent_device_name);
+char *mp_get_ddn_value(char *device_name);
+char *mp_get_hid_value(struct acpi_namespace_node *device_node);
 #endif
@@ -100,13 +100,14 @@ acpi_ev_update_gpe_enable_mask(struct acpi_gpe_event_info *gpe_event_info)
  *
  * FUNCTION:    acpi_ev_enable_gpe
  *
  * PARAMETERS:  gpe_event_info          - GPE to enable
  *
  * RETURN:      Status
  *
  * DESCRIPTION: Clear a GPE of stale events and enable it.
  *
  ******************************************************************************/
+
 acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
 {
        acpi_status status;
@@ -125,6 +126,7 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
        }
        /* Clear the GPE (of stale events) */
+
        status = acpi_hw_clear_gpe(gpe_event_info);
        if (ACPI_FAILURE(status)) {
                return_ACPI_STATUS(status);
@@ -136,7 +138,6 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
        return_ACPI_STATUS(status);
 }
-
 /*******************************************************************************
  *
  * FUNCTION:    acpi_ev_add_gpe_reference
@@ -212,7 +213,7 @@ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
        if (ACPI_SUCCESS(status)) {
                status =
                    acpi_hw_low_set_gpe(gpe_event_info,
                                        ACPI_GPE_DISABLE);
        }
        if (ACPI_FAILURE(status)) {
@@ -334,7 +335,7 @@ struct acpi_gpe_event_info *acpi_ev_get_gpe_event_info(acpi_handle gpe_device,
  *
  ******************************************************************************/
-u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info * gpe_xrupt_list)
+u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
 {
        acpi_status status;
        struct acpi_gpe_block_info *gpe_block;
@@ -427,7 +428,7 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
        /* Check if there is anything active at all in this register */
-       enabled_status_byte = (u8) (status_reg & enable_reg);
+       enabled_status_byte = (u8)(status_reg & enable_reg);
        if (!enabled_status_byte) {
                /* No active GPEs in this register, move on */
@@ -450,7 +451,7 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
                                acpi_ev_gpe_dispatch(gpe_block->
                                                     node,
                                                     &gpe_block->
                                                     event_info[((acpi_size) i * ACPI_GPE_REGISTER_WIDTH) + j], j + gpe_register_info->base_gpe_number);
                        }
                }
        }
@@ -636,7 +637,7 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_enable_gpe(void *context)
  *
  ******************************************************************************/
-acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info *gpe_event_info)
+acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info * gpe_event_info)
 {
        acpi_status status;
@@ -666,9 +667,9 @@ acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info *gpe_event_info)
  *
  * FUNCTION:    acpi_ev_gpe_dispatch
  *
  * PARAMETERS:  gpe_device          - Device node. NULL for GPE0/GPE1
  *              gpe_event_info      - Info for this GPE
  *              gpe_number          - Number relative to the parent GPE block
  *
  * RETURN:      INTERRUPT_HANDLED or INTERRUPT_NOT_HANDLED
  *
@@ -681,7 +682,7 @@ acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info *gpe_event_info)
 u32
 acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
                     struct acpi_gpe_event_info *gpe_event_info, u32 gpe_number)
 {
        acpi_status status;
        u32 return_value;
...
@@ -424,6 +424,7 @@ acpi_ev_match_gpe_method(acpi_handle obj_handle,
        }
        /* Disable the GPE in case it's been enabled already. */
+
        (void)acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_DISABLE);
        /*
...
@@ -786,18 +786,26 @@ acpi_install_gpe_handler(acpi_handle gpe_device,
        handler->method_node = gpe_event_info->dispatch.method_node;
        handler->original_flags = (u8)(gpe_event_info->flags &
                                       (ACPI_GPE_XRUPT_TYPE_MASK |
                                        ACPI_GPE_DISPATCH_MASK));
        /*
         * If the GPE is associated with a method, it may have been enabled
         * automatically during initialization, in which case it has to be
         * disabled now to avoid spurious execution of the handler.
         */
-       if ((handler->original_flags & ACPI_GPE_DISPATCH_METHOD)
-           && gpe_event_info->runtime_count) {
-               handler->originally_enabled = 1;
+       if (((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
+            (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
+           gpe_event_info->runtime_count) {
+               handler->originally_enabled = TRUE;
                (void)acpi_ev_remove_gpe_reference(gpe_event_info);
+
+               /* Sanity check of original type against new type */
+
+               if (type !=
+                   (u32)(gpe_event_info->flags & ACPI_GPE_XRUPT_TYPE_MASK)) {
+                       ACPI_WARNING((AE_INFO,
+                                     "GPE type mismatch (level/edge)"));
+               }
        }
        /* Install the handler */
@@ -808,7 +816,7 @@ acpi_install_gpe_handler(acpi_handle gpe_device,
        gpe_event_info->flags &=
            ~(ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK);
-       gpe_event_info->flags |= (u8) (type | ACPI_GPE_DISPATCH_HANDLER);
+       gpe_event_info->flags |= (u8)(type | ACPI_GPE_DISPATCH_HANDLER);
        acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
@@ -893,7 +901,7 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
        gpe_event_info->dispatch.method_node = handler->method_node;
        gpe_event_info->flags &=
            ~(ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK);
        gpe_event_info->flags |= handler->original_flags;
        /*
@@ -901,7 +909,8 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
         * enabled, it should be enabled at this point to restore the
         * post-initialization configuration.
         */
-       if ((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) &&
+       if (((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
+            (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
            handler->originally_enabled) {
                (void)acpi_ev_add_gpe_reference(gpe_event_info);
        }
@@ -946,7 +955,7 @@ ACPI_EXPORT_SYMBOL(acpi_remove_gpe_handler)
  * handle is returned.
  *
  ******************************************************************************/
-acpi_status acpi_acquire_global_lock(u16 timeout, u32 * handle)
+acpi_status acpi_acquire_global_lock(u16 timeout, u32 *handle)
 {
        acpi_status status;
...
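For reference, the caller side of the interface being hardened above looks like this (a sketch with an arbitrary GPE number; the updated code additionally lets a handler replace an implicit-notify setup and warns on a level/edge type mismatch):

#include <linux/acpi.h>

static u32 example_gpe_handler(acpi_handle gpe_device, u32 gpe_number,
                               void *context)
{
        /* Handle the event, then ask ACPICA to re-enable the GPE. */
        return ACPI_REENABLE_GPE | ACPI_INTERRUPT_HANDLED;
}

static acpi_status example_install(void)
{
        return acpi_install_gpe_handler(NULL, 0x16, ACPI_GPE_LEVEL_TRIGGERED,
                                        example_gpe_handler, NULL);
}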
@@ -324,8 +324,9 @@ ACPI_EXPORT_SYMBOL(acpi_clear_event)
  ******************************************************************************/
 acpi_status acpi_get_event_status(u32 event, acpi_event_status * event_status)
 {
-       acpi_status status = AE_OK;
-       u32 value;
+       acpi_status status;
+       acpi_event_status local_event_status = 0;
+       u32 in_byte;
        ACPI_FUNCTION_TRACE(acpi_get_event_status);
@@ -339,29 +340,40 @@ acpi_status acpi_get_event_status(u32 event, acpi_event_status * event_status)
                return_ACPI_STATUS(AE_BAD_PARAMETER);
        }
-       /* Get the status of the requested fixed event */
+       /* Fixed event currently can be dispatched? */
+
+       if (acpi_gbl_fixed_event_handlers[event].handler) {
+               local_event_status |= ACPI_EVENT_FLAG_HAS_HANDLER;
+       }
+
+       /* Fixed event currently enabled? */
        status =
            acpi_read_bit_register(acpi_gbl_fixed_event_info[event].
-                                  enable_register_id, &value);
-       if (ACPI_FAILURE(status))
+                                  enable_register_id, &in_byte);
+       if (ACPI_FAILURE(status)) {
                return_ACPI_STATUS(status);
+       }
-       *event_status = value;
+       if (in_byte) {
+               local_event_status |= ACPI_EVENT_FLAG_ENABLED;
+       }
+
+       /* Fixed event currently active? */
        status =
            acpi_read_bit_register(acpi_gbl_fixed_event_info[event].
-                                  status_register_id, &value);
-       if (ACPI_FAILURE(status))
+                                  status_register_id, &in_byte);
+       if (ACPI_FAILURE(status)) {
                return_ACPI_STATUS(status);
+       }
-       if (value)
-               *event_status |= ACPI_EVENT_FLAG_SET;
+       if (in_byte) {
+               local_event_status |= ACPI_EVENT_FLAG_SET;
+       }
-       if (acpi_gbl_fixed_event_handlers[event].handler)
-               *event_status |= ACPI_EVENT_FLAG_HANDLE;
-       return_ACPI_STATUS(status);
+       (*event_status) = local_event_status;
+       return_ACPI_STATUS(AE_OK);
 }
 ACPI_EXPORT_SYMBOL(acpi_get_event_status)
...
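After this rework, one call reports enablement, pending status, and handler presence together. A caller-side sketch (the function is hypothetical; the flag names are the ones used by this series):

#include <linux/acpi.h>

static bool example_power_button_armed(void)
{
        acpi_event_status st;

        if (ACPI_FAILURE(acpi_get_event_status(ACPI_EVENT_POWER_BUTTON, &st)))
                return false;

        /* Enabled and dispatchable: both bits now come from one query. */
        return (st & ACPI_EVENT_FLAG_ENABLED) &&
               (st & ACPI_EVENT_FLAG_HAS_HANDLER);
}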
@@ -106,8 +106,8 @@ ACPI_EXPORT_SYMBOL(acpi_update_all_gpes)
  *
  * FUNCTION:    acpi_enable_gpe
  *
  * PARAMETERS:  gpe_device          - Parent GPE Device. NULL for GPE0/GPE1
  *              gpe_number          - GPE level within the GPE block
  *
  * RETURN:      Status
  *
@@ -115,7 +115,6 @@ ACPI_EXPORT_SYMBOL(acpi_update_all_gpes)
  * hardware-enabled.
  *
  ******************************************************************************/
-
 acpi_status acpi_enable_gpe(acpi_handle gpe_device, u32 gpe_number)
 {
        acpi_status status = AE_BAD_PARAMETER;
@@ -490,8 +489,8 @@ ACPI_EXPORT_SYMBOL(acpi_clear_gpe)
  *
  * FUNCTION:    acpi_get_gpe_status
  *
  * PARAMETERS:  gpe_device          - Parent GPE Device. NULL for GPE0/GPE1
  *              gpe_number          - GPE level within the GPE block
  *              event_status        - Where the current status of the event
  *                                    will be returned
  *
@@ -524,9 +523,6 @@ acpi_get_gpe_status(acpi_handle gpe_device,
        status = acpi_hw_get_gpe_status(gpe_event_info, event_status);
-       if (gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK)
-               *event_status |= ACPI_EVENT_FLAG_HANDLE;
 unlock_and_exit:
        acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
        return_ACPI_STATUS(status);
...
@@ -202,7 +202,7 @@ acpi_status acpi_hw_clear_gpe(struct acpi_gpe_event_info * gpe_event_info)
 acpi_status
 acpi_hw_get_gpe_status(struct acpi_gpe_event_info * gpe_event_info,
-                      acpi_event_status * event_status)
+                      acpi_event_status *event_status)
 {
        u32 in_byte;
        u32 register_bit;
@@ -216,6 +216,13 @@ acpi_hw_get_gpe_status(struct acpi_gpe_event_info * gpe_event_info,
                return (AE_BAD_PARAMETER);
        }
+       /* GPE currently handled? */
+
+       if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
+           ACPI_GPE_DISPATCH_NONE) {
+               local_event_status |= ACPI_EVENT_FLAG_HAS_HANDLER;
+       }
+
        /* Get the info block for the entire GPE register */
        gpe_register_info = gpe_event_info->register_info;
...
@@ -48,6 +48,36 @@
 #define _COMPONENT          ACPI_TABLES
 ACPI_MODULE_NAME("tbxfroot")
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_tb_get_rsdp_length
+ *
+ * PARAMETERS:  rsdp                - Pointer to RSDP
+ *
+ * RETURN:      Table length
+ *
+ * DESCRIPTION: Get the length of the RSDP
+ *
+ ******************************************************************************/
+u32 acpi_tb_get_rsdp_length(struct acpi_table_rsdp *rsdp)
+{
+       if (!ACPI_VALIDATE_RSDP_SIG(rsdp->signature)) {
+
+               /* BAD Signature */
+
+               return (0);
+       }
+
+       /* "Length" field is available if table version >= 2 */
+
+       if (rsdp->revision >= 2) {
+               return (rsdp->length);
+       } else {
+               return (ACPI_RSDP_CHECKSUM_LENGTH);
+       }
+}
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_tb_validate_rsdp
@@ -59,7 +89,8 @@ ACPI_MODULE_NAME("tbxfroot")
  * DESCRIPTION: Validate the RSDP (ptr)
  *
  ******************************************************************************/
-acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp * rsdp)
+
+acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp *rsdp)
 {
        /*
...
@@ -711,7 +711,7 @@ int acpi_pm_device_run_wake(struct device *phys_dev, bool enable)
                return -ENODEV;
        }
-       return acpi_device_wakeup(adev, enable, ACPI_STATE_S0);
+       return acpi_device_wakeup(adev, ACPI_STATE_S0, enable);
 }
 EXPORT_SYMBOL(acpi_pm_device_run_wake);
 #endif /* CONFIG_PM_RUNTIME */
...
@@ -128,12 +128,13 @@ static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */
 static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
 /* --------------------------------------------------------------------------
-                             Transaction Management
-   -------------------------------------------------------------------------- */
+ *                           Transaction Management
+ * -------------------------------------------------------------------------- */
 static inline u8 acpi_ec_read_status(struct acpi_ec *ec)
 {
        u8 x = inb(ec->command_addr);
+
        pr_debug("EC_SC(R) = 0x%2.2x "
                 "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n",
                 x,
@@ -148,6 +149,7 @@ static inline u8 acpi_ec_read_status(struct acpi_ec *ec)
 static inline u8 acpi_ec_read_data(struct acpi_ec *ec)
 {
        u8 x = inb(ec->data_addr);
+
        pr_debug("EC_DATA(R) = 0x%2.2x\n", x);
        return x;
 }
@@ -164,10 +166,32 @@ static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data)
        outb(data, ec->data_addr);
 }
+#ifdef DEBUG
+static const char *acpi_ec_cmd_string(u8 cmd)
+{
+       switch (cmd) {
+       case 0x80:
+               return "RD_EC";
+       case 0x81:
+               return "WR_EC";
+       case 0x82:
+               return "BE_EC";
+       case 0x83:
+               return "BD_EC";
+       case 0x84:
+               return "QR_EC";
+       }
+       return "UNKNOWN";
+}
+#else
+#define acpi_ec_cmd_string(cmd)        "UNDEF"
+#endif
+
 static int ec_transaction_completed(struct acpi_ec *ec)
 {
        unsigned long flags;
        int ret = 0;
+
        spin_lock_irqsave(&ec->lock, flags);
        if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE))
                ret = 1;
@@ -181,7 +205,8 @@ static bool advance_transaction(struct acpi_ec *ec)
        u8 status;
        bool wakeup = false;
-       pr_debug("===== %s =====\n", in_interrupt() ? "IRQ" : "TASK");
+       pr_debug("===== %s (%d) =====\n",
+                in_interrupt() ? "IRQ" : "TASK", smp_processor_id());
        status = acpi_ec_read_status(ec);
        t = ec->curr;
        if (!t)
@@ -198,7 +223,8 @@ static bool advance_transaction(struct acpi_ec *ec)
                if (t->rlen == t->ri) {
                        t->flags |= ACPI_EC_COMMAND_COMPLETE;
                        if (t->command == ACPI_EC_COMMAND_QUERY)
-                               pr_debug("hardware QR_EC completion\n");
+                               pr_debug("***** Command(%s) hardware completion *****\n",
+                                        acpi_ec_cmd_string(t->command));
                        wakeup = true;
                }
        } else
@@ -221,7 +247,8 @@ static bool advance_transaction(struct acpi_ec *ec)
                t->flags |= ACPI_EC_COMMAND_POLL;
                t->rdata[t->ri++] = 0x00;
                t->flags |= ACPI_EC_COMMAND_COMPLETE;
-               pr_debug("software QR_EC completion\n");
+               pr_debug("***** Command(%s) software completion *****\n",
+                        acpi_ec_cmd_string(t->command));
                wakeup = true;
        } else if ((status & ACPI_EC_FLAG_IBF) == 0) {
                acpi_ec_write_cmd(ec, t->command);
@@ -264,6 +291,7 @@ static int ec_poll(struct acpi_ec *ec)
 {
        unsigned long flags;
        int repeat = 5; /* number of command restarts */
+
        while (repeat--) {
                unsigned long delay = jiffies +
                        msecs_to_jiffies(ec_delay);
@@ -296,18 +324,25 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec,
 {
        unsigned long tmp;
        int ret = 0;
+
        if (EC_FLAGS_MSI)
                udelay(ACPI_EC_MSI_UDELAY);
        /* start transaction */
        spin_lock_irqsave(&ec->lock, tmp);
        /* following two actions should be kept atomic */
        ec->curr = t;
+       pr_debug("***** Command(%s) started *****\n",
+                acpi_ec_cmd_string(t->command));
        start_transaction(ec);
        spin_unlock_irqrestore(&ec->lock, tmp);
        ret = ec_poll(ec);
        spin_lock_irqsave(&ec->lock, tmp);
-       if (ec->curr->command == ACPI_EC_COMMAND_QUERY)
+       if (ec->curr->command == ACPI_EC_COMMAND_QUERY) {
                clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
+               pr_debug("***** Event stopped *****\n");
+       }
+       pr_debug("***** Command(%s) stopped *****\n",
+                acpi_ec_cmd_string(t->command));
        ec->curr = NULL;
        spin_unlock_irqrestore(&ec->lock, tmp);
        return ret;
@@ -317,6 +352,7 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
 {
        int status;
        u32 glk;
+
        if (!ec || (!t) || (t->wlen && !t->wdata) || (t->rlen && !t->rdata))
                return -EINVAL;
        if (t->rdata)
@@ -333,8 +369,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
                        goto unlock;
                }
        }
-       pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n",
-                t->command, t->wdata ? t->wdata[0] : 0);
        /* disable GPE during transaction if storm is detected */
        if (test_bit(EC_FLAGS_GPE_STORM, &ec->flags)) {
                /* It has to be disabled, so that it doesn't trigger. */
@@ -355,7 +389,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
                         t->irq_count);
                set_bit(EC_FLAGS_GPE_STORM, &ec->flags);
        }
-       pr_debug("transaction end\n");
        if (ec->global_lock)
                acpi_release_global_lock(glk);
 unlock:
@@ -383,7 +416,7 @@ static int acpi_ec_burst_disable(struct acpi_ec *ec)
                                acpi_ec_transaction(ec, &t) : 0;
 }
-static int acpi_ec_read(struct acpi_ec *ec, u8 address, u8 * data)
+static int acpi_ec_read(struct acpi_ec *ec, u8 address, u8 *data)
 {
        int result;
        u8 d;
@@ -419,10 +452,9 @@ int ec_read(u8 addr, u8 *val)
        if (!err) {
                *val = temp_data;
                return 0;
-       } else
-               return err;
+       }
+       return err;
 }
 EXPORT_SYMBOL(ec_read);
 int ec_write(u8 addr, u8 val)
@@ -436,22 +468,21 @@ int ec_write(u8 addr, u8 val)
        return err;
 }
-
 EXPORT_SYMBOL(ec_write);
 int ec_transaction(u8 command,
-                  const u8 * wdata, unsigned wdata_len,
-                  u8 * rdata, unsigned rdata_len)
+                  const u8 *wdata, unsigned wdata_len,
+                  u8 *rdata, unsigned rdata_len)
 {
        struct transaction t = {.command = command,
                                .wdata = wdata, .rdata = rdata,
                                .wlen = wdata_len, .rlen = rdata_len};
+
        if (!first_ec)
                return -ENODEV;
        return acpi_ec_transaction(first_ec, &t);
 }
 EXPORT_SYMBOL(ec_transaction);
 /* Get the handle to the EC device */
@@ -461,7 +492,6 @@ acpi_handle ec_get_handle(void)
                return NULL;
        return first_ec->handle;
 }
-
 EXPORT_SYMBOL(ec_get_handle);
 /*
@@ -525,13 +555,14 @@ void acpi_ec_unblock_transactions_early(void)
                clear_bit(EC_FLAGS_BLOCKED, &first_ec->flags);
 }
-static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 * data)
+static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data)
 {
        int result;
        u8 d;
        struct transaction t = {.command = ACPI_EC_COMMAND_QUERY,
                                .wdata = NULL, .rdata = &d,
                                .wlen = 0, .rlen = 1};
+
        if (!ec || !data)
                return -EINVAL;
        /*
@@ -557,6 +588,7 @@ int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
 {
        struct acpi_ec_query_handler *handler =
            kzalloc(sizeof(struct acpi_ec_query_handler), GFP_KERNEL);
+
        if (!handler)
                return -ENOMEM;
@@ -569,12 +601,12 @@ int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
        mutex_unlock(&ec->mutex);
        return 0;
 }
-
 EXPORT_SYMBOL_GPL(acpi_ec_add_query_handler);
 void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
 {
        struct acpi_ec_query_handler *handler, *tmp;
+
        mutex_lock(&ec->mutex);
        list_for_each_entry_safe(handler, tmp, &ec->list, node) {
                if (query_bit == handler->query_bit) {
@@ -584,20 +616,20 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
                }
        }
        mutex_unlock(&ec->mutex);
 }
-
 EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
 static void acpi_ec_run(void *cxt)
 {
        struct acpi_ec_query_handler *handler = cxt;
+
        if (!handler)
                return;
-       pr_debug("start query execution\n");
+       pr_debug("##### Query(0x%02x) started #####\n", handler->query_bit);
        if (handler->func)
                handler->func(handler->data);
        else if (handler->handle)
                acpi_evaluate_object(handler->handle, NULL, NULL, NULL);
-       pr_debug("stop query execution\n");
+       pr_debug("##### Query(0x%02x) stopped #####\n", handler->query_bit);
        kfree(handler);
 }
@@ -620,8 +652,8 @@ static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data)
                        if (!copy)
                                return -ENOMEM;
                        memcpy(copy, handler, sizeof(*copy));
-                       pr_debug("push query execution (0x%2x) on queue\n",
-                                value);
+                       pr_debug("##### Query(0x%02x) scheduled #####\n",
+                                handler->query_bit);
                        return acpi_os_execute((copy->func) ?
                                OSL_NOTIFY_HANDLER : OSL_GPE_HANDLER,
                                acpi_ec_run, copy);
@@ -633,6 +665,7 @@ static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data)
 static void acpi_ec_gpe_query(void *ec_cxt)
 {
        struct acpi_ec *ec = ec_cxt;
+
        if (!ec)
                return;
        mutex_lock(&ec->mutex);
@@ -644,7 +677,7 @@ static int ec_check_sci(struct acpi_ec *ec, u8 state)
 {
        if (state & ACPI_EC_FLAG_SCI) {
                if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) {
-                       pr_debug("push gpe query to the queue\n");
+                       pr_debug("***** Event started *****\n");
                        return acpi_os_execute(OSL_NOTIFY_HANDLER,
                                acpi_ec_gpe_query, ec);
                }
@@ -667,8 +700,8 @@ static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
 }
 /* --------------------------------------------------------------------------
-                             Address Space Management
-   -------------------------------------------------------------------------- */
+ *                           Address Space Management
+ * -------------------------------------------------------------------------- */
 static acpi_status
 acpi_ec_space_handler(u32 function, acpi_physical_address address,
@@ -699,27 +732,26 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
        switch (result) {
        case -EINVAL:
                return AE_BAD_PARAMETER;
-               break;
        case -ENODEV:
                return AE_NOT_FOUND;
-               break;
        case -ETIME:
                return AE_TIME;
-               break;
        default:
                return AE_OK;
        }
 }
 /* --------------------------------------------------------------------------
-                                Driver Interface
-   -------------------------------------------------------------------------- */
+ *                           Driver Interface
+ * -------------------------------------------------------------------------- */
 static acpi_status
 ec_parse_io_ports(struct acpi_resource *resource, void *context);
 static struct acpi_ec *make_acpi_ec(void)
 {
        struct acpi_ec *ec = kzalloc(sizeof(struct acpi_ec), GFP_KERNEL);
+
        if (!ec)
                return NULL;
        ec->flags = 1 << EC_FLAGS_QUERY_PENDING;
@@ -742,9 +774,8 @@ acpi_ec_register_query_methods(acpi_handle handle, u32 level,
        status = acpi_get_name(handle, ACPI_SINGLE_NAME, &buffer);
-       if (ACPI_SUCCESS(status) && sscanf(node_name, "_Q%x", &value) == 1) {
+       if (ACPI_SUCCESS(status) && sscanf(node_name, "_Q%x", &value) == 1)
                acpi_ec_add_query_handler(ec, value, handle, NULL, NULL);
-       }
        return AE_OK;
 }
@@ -753,7 +784,6 @@ ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
 {
        acpi_status status;
        unsigned long long tmp = 0;
-
        struct acpi_ec *ec = context;
        /* clear addr values, ec_parse_io_ports depend on it */
@@ -781,6 +811,7 @@ ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
 static int ec_install_handlers(struct acpi_ec *ec)
 {
        acpi_status status;
+
        if (test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
                return 0;
        status = acpi_install_gpe_handler(NULL, ec->gpe,
@@ -1078,7 +1109,8 @@ int __init acpi_ec_ecdt_probe(void)
                boot_ec->data_addr = ecdt_ptr->data.address;
                boot_ec->gpe = ecdt_ptr->gpe;
                boot_ec->handle = ACPI_ROOT_OBJECT;
-               acpi_get_handle(ACPI_ROOT_OBJECT, ecdt_ptr->id, &boot_ec->handle);
+               acpi_get_handle(ACPI_ROOT_OBJECT, ecdt_ptr->id,
+                               &boot_ec->handle);
                /* Don't trust ECDT, which comes from ASUSTek */
                if (!EC_FLAGS_VALIDATE_ECDT)
                        goto install;
@@ -1162,6 +1194,5 @@ static void __exit acpi_ec_exit(void)
 {
        acpi_bus_unregister_driver(&acpi_ec_driver);
-       return;
 }
 #endif /* 0 */
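The exported EC helpers keep their contracts while gaining the command-name trace output above. A quick consumer sketch (the EC address is arbitrary; ec_read()/ec_write() are the existing helpers declared in <linux/acpi.h>):

#include <linux/acpi.h>

static int example_read_ec_byte(void)
{
        u8 val;
        int err = ec_read(0x50, &val);  /* address 0x50 chosen for illustration */

        return err ? err : val;
}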
@@ -1470,7 +1470,7 @@ static void acpi_wakeup_gpe_init(struct acpi_device *device)
        if (ACPI_FAILURE(status))
                return;
-       wakeup->flags.run_wake = !!(event_status & ACPI_EVENT_FLAG_HANDLE);
+       wakeup->flags.run_wake = !!(event_status & ACPI_EVENT_FLAG_HAS_HANDLER);
 }
 static void acpi_bus_get_wakeup_device_flags(struct acpi_device *device)
...
@@ -537,7 +537,7 @@ static ssize_t counter_show(struct kobject *kobj,
        if (result)
                goto end;
-       if (!(status & ACPI_EVENT_FLAG_HANDLE))
+       if (!(status & ACPI_EVENT_FLAG_HAS_HANDLER))
                size += sprintf(buf + size, " invalid");
        else if (status & ACPI_EVENT_FLAG_ENABLED)
                size += sprintf(buf + size, " enabled");
@@ -581,7 +581,7 @@ static ssize_t counter_set(struct kobject *kobj,
        if (result)
                goto end;
-       if (!(status & ACPI_EVENT_FLAG_HANDLE)) {
+       if (!(status & ACPI_EVENT_FLAG_HAS_HANDLER)) {
                printk(KERN_WARNING PREFIX
                       "Can not change Invalid GPE/Fixed Event status\n");
                return -EINVAL;
...
@@ -18,6 +18,7 @@
 #include <linux/cpu.h>
 #include <linux/cpu_cooling.h>
 #include <linux/cpufreq.h>
+#include <linux/cpufreq-dt.h>
 #include <linux/cpumask.h>
 #include <linux/err.h>
 #include <linux/module.h>
@@ -146,8 +147,8 @@ static int allocate_resources(int cpu, struct device **cdev,
                        goto try_again;
                }
-               dev_warn(cpu_dev, "failed to get cpu%d regulator: %ld\n",
-                        cpu, PTR_ERR(cpu_reg));
+               dev_dbg(cpu_dev, "no regulator for cpu%d: %ld\n",
+                       cpu, PTR_ERR(cpu_reg));
        }
        cpu_clk = clk_get(cpu_dev, NULL);
@@ -178,6 +179,7 @@ static int allocate_resources(int cpu, struct device **cdev,
 static int cpufreq_init(struct cpufreq_policy *policy)
 {
+       struct cpufreq_dt_platform_data *pd;
        struct cpufreq_frequency_table *freq_table;
        struct thermal_cooling_device *cdev;
        struct device_node *np;
@@ -265,9 +267,18 @@ static int cpufreq_init(struct cpufreq_policy *policy)
        policy->driver_data = priv;
        policy->clk = cpu_clk;
-       ret = cpufreq_generic_init(policy, freq_table, transition_latency);
-       if (ret)
+       ret = cpufreq_table_validate_and_show(policy, freq_table);
+       if (ret) {
+               dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__,
+                       ret);
                goto out_cooling_unregister;
+       }
+
+       policy->cpuinfo.transition_latency = transition_latency;
+
+       pd = cpufreq_get_driver_data();
+       if (pd && !pd->independent_clocks)
+               cpumask_setall(policy->cpus);
        of_node_put(np);
@@ -335,6 +346,8 @@ static int dt_cpufreq_probe(struct platform_device *pdev)
        if (!IS_ERR(cpu_reg))
                regulator_put(cpu_reg);
+       dt_cpufreq_driver.driver_data = dev_get_platdata(&pdev->dev);
+
        ret = cpufreq_register_driver(&dt_cpufreq_driver);
        if (ret)
                dev_err(cpu_dev, "failed register driver: %d\n", ret);
...
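Since the probe path now picks up dev_get_platdata(), board code can steer cpufreq-dt at registration time. A sketch under the assumption that struct cpufreq_dt_platform_data and its independent_clocks flag are as introduced by this series:

#include <linux/cpufreq-dt.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/platform_device.h>

static struct cpufreq_dt_platform_data example_pd = {
        /* Each CPU (or cluster) has its own clock, so one policy must
         * not be forced to span every CPU in the system. */
        .independent_clocks = true,
};

static int __init example_cpufreq_register(void)
{
        return PTR_ERR_OR_ZERO(platform_device_register_data(NULL,
                        "cpufreq-dt", -1, &example_pd, sizeof(example_pd)));
}
device_initcall(example_cpufreq_register);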
@@ -512,7 +512,18 @@ show_one(cpuinfo_max_freq, cpuinfo.max_freq);
 show_one(cpuinfo_transition_latency, cpuinfo.transition_latency);
 show_one(scaling_min_freq, min);
 show_one(scaling_max_freq, max);
-show_one(scaling_cur_freq, cur);
+
+static ssize_t show_scaling_cur_freq(
+       struct cpufreq_policy *policy, char *buf)
+{
+       ssize_t ret;
+
+       if (cpufreq_driver && cpufreq_driver->setpolicy && cpufreq_driver->get)
+               ret = sprintf(buf, "%u\n", cpufreq_driver->get(policy->cpu));
+       else
+               ret = sprintf(buf, "%u\n", policy->cur);
+       return ret;
+}
 static int cpufreq_set_policy(struct cpufreq_policy *policy,
                              struct cpufreq_policy *new_policy);
@@ -906,11 +917,11 @@ static int cpufreq_add_dev_interface(struct cpufreq_policy *policy,
                if (ret)
                        goto err_out_kobj_put;
        }
-       if (has_target()) {
-               ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
-               if (ret)
-                       goto err_out_kobj_put;
-       }
+
+       ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
+       if (ret)
+               goto err_out_kobj_put;
+
        if (cpufreq_driver->bios_limit) {
                ret = sysfs_create_file(&policy->kobj, &bios_limit.attr);
                if (ret)
@@ -1731,6 +1742,21 @@ const char *cpufreq_get_current_driver(void)
 }
 EXPORT_SYMBOL_GPL(cpufreq_get_current_driver);
+/**
+ * cpufreq_get_driver_data - return current driver data
+ *
+ * Return the private data of the currently loaded cpufreq
+ * driver, or NULL if no cpufreq driver is loaded.
+ */
+void *cpufreq_get_driver_data(void)
+{
+       if (cpufreq_driver)
+               return cpufreq_driver->driver_data;
+
+       return NULL;
+}
+EXPORT_SYMBOL_GPL(cpufreq_get_driver_data);
+
 /*********************************************************************
  *               NOTIFIER LISTS INTERFACE                            *
  *********************************************************************/
...
@@ -52,6 +52,17 @@ static inline int32_t div_fp(int32_t x, int32_t y)
        return div_s64((int64_t)x << FRAC_BITS, y);
 }
+static inline int ceiling_fp(int32_t x)
+{
+       int mask, ret;
+
+       ret = fp_toint(x);
+       mask = (1 << FRAC_BITS) - 1;
+       if (x & mask)
+               ret += 1;
+       return ret;
+}
+
 struct sample {
        int32_t core_pct_busy;
        u64 aperf;
@@ -64,6 +75,7 @@ struct pstate_data {
        int current_pstate;
        int min_pstate;
        int max_pstate;
+       int scaling;
        int turbo_pstate;
 };
@@ -113,6 +125,7 @@ struct pstate_funcs {
        int (*get_max)(void);
        int (*get_min)(void);
        int (*get_turbo)(void);
+       int (*get_scaling)(void);
        void (*set)(struct cpudata*, int pstate);
        void (*get_vid)(struct cpudata *);
 };
@@ -138,6 +151,7 @@ struct perf_limits {
 static struct perf_limits limits = {
        .no_turbo = 0,
+       .turbo_disabled = 0,
        .max_perf_pct = 100,
        .max_perf = int_tofp(1),
        .min_perf_pct = 0,
@@ -218,6 +232,18 @@ static inline void intel_pstate_reset_all_pid(void)
        }
 }
+static inline void update_turbo_state(void)
+{
+       u64 misc_en;
+       struct cpudata *cpu;
+
+       cpu = all_cpu_data[0];
+       rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);
+       limits.turbo_disabled =
+               (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE ||
+                cpu->pstate.max_pstate == cpu->pstate.turbo_pstate);
+}
+
 /************************** debugfs begin ************************/
 static int pid_param_set(void *data, u64 val)
 {
@@ -274,6 +300,20 @@ static void __init intel_pstate_debug_expose_params(void)
                return sprintf(buf, "%u\n", limits.object);     \
 }
+static ssize_t show_no_turbo(struct kobject *kobj,
+                            struct attribute *attr, char *buf)
+{
+       ssize_t ret;
+
+       update_turbo_state();
+       if (limits.turbo_disabled)
+               ret = sprintf(buf, "%u\n", limits.turbo_disabled);
+       else
+               ret = sprintf(buf, "%u\n", limits.no_turbo);
+
+       return ret;
+}
+
 static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
                              const char *buf, size_t count)
 {
@@ -283,11 +323,14 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
        ret = sscanf(buf, "%u", &input);
        if (ret != 1)
                return -EINVAL;
-       limits.no_turbo = clamp_t(int, input, 0 , 1);
+
+       update_turbo_state();
        if (limits.turbo_disabled) {
                pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
-               limits.no_turbo = limits.turbo_disabled;
+               return -EPERM;
        }
+       limits.no_turbo = clamp_t(int, input, 0, 1);
+
        return count;
 }
@@ -323,7 +366,6 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
        return count;
 }
-show_one(no_turbo, no_turbo);
 show_one(max_perf_pct, max_perf_pct);
 show_one(min_perf_pct, min_perf_pct);
...@@ -394,7 +436,7 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate) ...@@ -394,7 +436,7 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate)
cpudata->vid.ratio); cpudata->vid.ratio);
vid_fp = clamp_t(int32_t, vid_fp, cpudata->vid.min, cpudata->vid.max); vid_fp = clamp_t(int32_t, vid_fp, cpudata->vid.min, cpudata->vid.max);
vid = fp_toint(vid_fp); vid = ceiling_fp(vid_fp);
if (pstate > cpudata->pstate.max_pstate) if (pstate > cpudata->pstate.max_pstate)
vid = cpudata->vid.turbo; vid = cpudata->vid.turbo;
@@ -404,6 +446,22 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate)
 	wrmsrl(MSR_IA32_PERF_CTL, val);
 }
 
+#define BYT_BCLK_FREQS 5
+static int byt_freq_table[BYT_BCLK_FREQS] = { 833, 1000, 1333, 1167, 800};
+
+static int byt_get_scaling(void)
+{
+	u64 value;
+	int i;
+
+	rdmsrl(MSR_FSB_FREQ, value);
+	i = value & 0x3;
+
+	BUG_ON(i > BYT_BCLK_FREQS);
+
+	return byt_freq_table[i] * 100;
+}
+
 static void byt_get_vid(struct cpudata *cpudata)
 {
 	u64 value;
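A standalone sketch of the scaling math above (values assumed): the low two bits of MSR_FSB_FREQ select a bus clock from the table, and one P-state step is worth that clock in units of 100 kHz.

#include <stdint.h>
#include <stdio.h>

/* Copy of the BYT bus-clock table above, for illustration only. */
static const int byt_freq_table[] = { 833, 1000, 1333, 1167, 800 };

int main(void)
{
        uint64_t msr = 0;       /* pretend MSR_FSB_FREQ read 0: 83.3 MHz */
        int scaling = byt_freq_table[msr & 0x3] * 100;  /* kHz per pstate */
        int pstate = 20;

        /* P-state 20 on an 83.3 MHz bus clock is about 1.67 GHz. */
        printf("%d kHz\n", pstate * scaling);   /* prints 1666000 */
        return 0;
}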
@@ -449,6 +507,11 @@ static int core_get_turbo_pstate(void)
 	return ret;
 }
 
+static inline int core_get_scaling(void)
+{
+	return 100000;
+}
+
 static void core_set_pstate(struct cpudata *cpudata, int pstate)
 {
 	u64 val;
@@ -473,6 +536,7 @@ static struct cpu_defaults core_params = {
 		.get_max = core_get_max_pstate,
 		.get_min = core_get_min_pstate,
 		.get_turbo = core_get_turbo_pstate,
+		.get_scaling = core_get_scaling,
 		.set = core_set_pstate,
 	},
 };
@@ -491,6 +555,7 @@ static struct cpu_defaults byt_params = {
 		.get_min = byt_get_min_pstate,
 		.get_turbo = byt_get_turbo_pstate,
 		.set = byt_set_pstate,
+		.get_scaling = byt_get_scaling,
 		.get_vid = byt_get_vid,
 	},
 };
@@ -501,7 +566,7 @@ static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
 	int max_perf_adj;
 	int min_perf;
 
-	if (limits.no_turbo)
+	if (limits.no_turbo || limits.turbo_disabled)
 		max_perf = cpu->pstate.max_pstate;
 
 	max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf));
@@ -516,6 +581,8 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
 {
 	int max_perf, min_perf;
 
+	update_turbo_state();
+
 	intel_pstate_get_min_max(cpu, &min_perf, &max_perf);
 
 	pstate = clamp_t(int, pstate, min_perf, max_perf);
@@ -523,7 +590,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
 	if (pstate == cpu->pstate.current_pstate)
 		return;
 
-	trace_cpu_frequency(pstate * 100000, cpu->cpu);
+	trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
 
 	cpu->pstate.current_pstate = pstate;
@@ -535,6 +602,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 	cpu->pstate.min_pstate = pstate_funcs.get_min();
 	cpu->pstate.max_pstate = pstate_funcs.get_max();
 	cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
+	cpu->pstate.scaling = pstate_funcs.get_scaling();
 
 	if (pstate_funcs.get_vid)
 		pstate_funcs.get_vid(cpu);
@@ -550,7 +618,9 @@ static inline void intel_pstate_calc_busy(struct cpudata *cpu)
 	core_pct = div64_u64(core_pct, int_tofp(sample->mperf));
 
 	sample->freq = fp_toint(
-		mul_fp(int_tofp(cpu->pstate.max_pstate * 1000), core_pct));
+		mul_fp(int_tofp(
+			cpu->pstate.max_pstate * cpu->pstate.scaling / 100),
+			core_pct));
 
 	sample->core_pct_busy = (int32_t)core_pct;
 }
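A worked example of the corrected busy-to-frequency formula, with assumed inputs and plain integer math standing in for the driver's fixed-point helpers:

#include <stdio.h>

int main(void)
{
        int max_pstate = 24;    /* assumed */
        int scaling = 83300;    /* BYT, 83.3 MHz bus; was hard-coded 100000 */
        int core_pct = 50;      /* core busy, integer percent for clarity */

        /* freq = max_pstate * (scaling / 100) * core_pct */
        long freq = (long)max_pstate * (scaling / 100) * core_pct;

        printf("%ld kHz\n", freq);      /* 999600 kHz, half of the full
                                           24 * 83300 = 1999200 kHz */
        return 0;
}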
@@ -671,7 +741,9 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
 {
 	struct cpudata *cpu;
 
-	all_cpu_data[cpunum] = kzalloc(sizeof(struct cpudata), GFP_KERNEL);
+	if (!all_cpu_data[cpunum])
+		all_cpu_data[cpunum] = kzalloc(sizeof(struct cpudata),
+					       GFP_KERNEL);
 	if (!all_cpu_data[cpunum])
 		return -ENOMEM;
@@ -714,9 +786,10 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
 	if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
 		limits.min_perf_pct = 100;
 		limits.min_perf = int_tofp(1);
+		limits.max_policy_pct = 100;
 		limits.max_perf_pct = 100;
 		limits.max_perf = int_tofp(1);
-		limits.no_turbo = limits.turbo_disabled;
+		limits.no_turbo = 0;
 		return 0;
 	}
 	limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
@@ -751,15 +824,12 @@ static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
 
 	del_timer_sync(&all_cpu_data[cpu_num]->timer);
 	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
-	kfree(all_cpu_data[cpu_num]);
-	all_cpu_data[cpu_num] = NULL;
 }
 
 static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
 {
 	struct cpudata *cpu;
 	int rc;
-	u64 misc_en;
 
 	rc = intel_pstate_init_cpu(policy->cpu);
 	if (rc)
@@ -767,23 +837,18 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
 
 	cpu = all_cpu_data[policy->cpu];
 
-	rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);
-	if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE ||
-	    cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) {
-		limits.turbo_disabled = 1;
-		limits.no_turbo = 1;
-	}
-
 	if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100)
 		policy->policy = CPUFREQ_POLICY_PERFORMANCE;
 	else
 		policy->policy = CPUFREQ_POLICY_POWERSAVE;
 
-	policy->min = cpu->pstate.min_pstate * 100000;
-	policy->max = cpu->pstate.turbo_pstate * 100000;
+	policy->min = cpu->pstate.min_pstate * cpu->pstate.scaling;
+	policy->max = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
 
 	/* cpuinfo and default policy values */
-	policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000;
-	policy->cpuinfo.max_freq = cpu->pstate.turbo_pstate * 100000;
+	policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
+	policy->cpuinfo.max_freq =
+		cpu->pstate.turbo_pstate * cpu->pstate.scaling;
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
 	cpumask_set_cpu(policy->cpu, policy->cpus);
@@ -841,6 +906,7 @@ static void copy_cpu_funcs(struct pstate_funcs *funcs)
 	pstate_funcs.get_max = funcs->get_max;
 	pstate_funcs.get_min = funcs->get_min;
 	pstate_funcs.get_turbo = funcs->get_turbo;
+	pstate_funcs.get_scaling = funcs->get_scaling;
 	pstate_funcs.set = funcs->set;
 	pstate_funcs.get_vid = funcs->get_vid;
 }
......
@@ -163,7 +163,8 @@ static int powernv_add_idle_states(void)
 	int nr_idle_states = 1; /* Snooze */
 	int dt_idle_states;
 	const __be32 *idle_state_flags;
-	u32 len_flags, flags;
+	const __be32 *idle_state_latency;
+	u32 len_flags, flags, latency_ns;
 	int i;
 
 	/* Currently we have snooze statically defined */
@@ -180,18 +181,32 @@ static int powernv_add_idle_states(void)
 		return nr_idle_states;
 	}
 
+	idle_state_latency = of_get_property(power_mgt,
+			"ibm,cpu-idle-state-latencies-ns", NULL);
+	if (!idle_state_latency) {
+		pr_warn("DT-PowerMgmt: missing ibm,cpu-idle-state-latencies-ns\n");
+		return nr_idle_states;
+	}
+
 	dt_idle_states = len_flags / sizeof(u32);
 
 	for (i = 0; i < dt_idle_states; i++) {
 
 		flags = be32_to_cpu(idle_state_flags[i]);
+
+		/* Cpuidle accepts exit_latency in us and we estimate
+		 * target residency to be 10x exit_latency
+		 */
+		latency_ns = be32_to_cpu(idle_state_latency[i]);
 		if (flags & IDLE_USE_INST_NAP) {
 			/* Add NAP state */
 			strcpy(powernv_states[nr_idle_states].name, "Nap");
 			strcpy(powernv_states[nr_idle_states].desc, "Nap");
 			powernv_states[nr_idle_states].flags = CPUIDLE_FLAG_TIME_VALID;
-			powernv_states[nr_idle_states].exit_latency = 10;
-			powernv_states[nr_idle_states].target_residency = 100;
+			powernv_states[nr_idle_states].exit_latency =
+					((unsigned int)latency_ns) / 1000;
+			powernv_states[nr_idle_states].target_residency =
+					((unsigned int)latency_ns / 100);
 			powernv_states[nr_idle_states].enter = &nap_loop;
 			nr_idle_states++;
 		}
@@ -202,8 +217,10 @@ static int powernv_add_idle_states(void)
 			strcpy(powernv_states[nr_idle_states].desc, "FastSleep");
 			powernv_states[nr_idle_states].flags =
 				CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TIMER_STOP;
-			powernv_states[nr_idle_states].exit_latency = 300;
-			powernv_states[nr_idle_states].target_residency = 1000000;
+			powernv_states[nr_idle_states].exit_latency =
+					((unsigned int)latency_ns) / 1000;
+			powernv_states[nr_idle_states].target_residency =
+					((unsigned int)latency_ns / 100);
 			powernv_states[nr_idle_states].enter = &fastsleep_loop;
 			nr_idle_states++;
 		}
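The conversion both states now share, as a runnable sketch (the device-tree value is assumed): dividing nanoseconds by 1000 gives the exit latency in microseconds, and dividing by 100 gives the 10x-latency residency estimate mentioned in the comment.

#include <stdio.h>

int main(void)
{
        unsigned int latency_ns = 250000;       /* assumed DT entry */

        unsigned int exit_latency_us = latency_ns / 1000;       /* 250 */
        unsigned int target_residency_us = latency_ns / 100;    /* 2500 */

        printf("exit=%u us, residency=%u us\n",
               exit_latency_us, target_residency_us);
        return 0;
}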
......
@@ -397,6 +397,7 @@ static int pcie_pme_suspend(struct pcie_device *srv)
 	struct pcie_pme_service_data *data = get_service_data(srv);
 	struct pci_dev *port = srv->port;
 	bool wakeup;
+	int ret;
 
 	if (device_may_wakeup(&port->dev)) {
 		wakeup = true;
@@ -407,9 +408,10 @@ static int pcie_pme_suspend(struct pcie_device *srv)
 	}
 
 	spin_lock_irq(&data->lock);
 	if (wakeup) {
-		enable_irq_wake(srv->irq);
+		ret = enable_irq_wake(srv->irq);
 		data->suspend_level = PME_SUSPEND_WAKEUP;
-	} else {
+	}
+	if (!wakeup || ret) {
 		struct pci_dev *port = srv->port;
 
 		pcie_pme_interrupt_enable(port, false);
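The shape of the fix, reduced to a runnable sketch with stand-in stubs (none of these names are from the commit): when arming the IRQ for wakeup fails, fall through to the non-wakeup path so the PME interrupt is not left enabled with nobody able to act on it.

#include <stdio.h>

/* Stubs so the control flow can run outside the kernel. */
static int enable_irq_wake(int irq) { (void)irq; return -1; /* assume failure */ }
static void pme_interrupt_enable(int on)
{
        printf("PME interrupt %s\n", on ? "enabled" : "disabled");
}

int main(void)
{
        int wakeup = 1, ret = 0;

        if (wakeup)
                ret = enable_irq_wake(42);
        if (!wakeup || ret)             /* the new fallback condition */
                pme_interrupt_enable(0);
        return 0;
}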
......
@@ -52,6 +52,7 @@
 #define METHOD_NAME__CBA        "_CBA"
 #define METHOD_NAME__CID        "_CID"
 #define METHOD_NAME__CRS        "_CRS"
+#define METHOD_NAME__DDN        "_DDN"
 #define METHOD_NAME__HID        "_HID"
 #define METHOD_NAME__INI        "_INI"
 #define METHOD_NAME__PLD        "_PLD"
......
@@ -46,7 +46,7 @@
 
 /* Current ACPICA subsystem version in YYYYMMDD format */
 
-#define ACPI_CA_VERSION                 0x20140828
+#define ACPI_CA_VERSION                 0x20140926
 
 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
......
@@ -721,7 +721,7 @@ typedef u32 acpi_event_type;
  *          |  |  |  +--- Enabled for wake?
  *          |  |  +----- Set?
  *          |  +------- Has a handler?
- *          +----------- <Reserved>
+ *          +------------- <Reserved>
  */
 typedef u32 acpi_event_status;
@@ -729,7 +729,7 @@ typedef u32 acpi_event_status;
 #define ACPI_EVENT_FLAG_ENABLED         (acpi_event_status) 0x01
 #define ACPI_EVENT_FLAG_WAKE_ENABLED    (acpi_event_status) 0x02
 #define ACPI_EVENT_FLAG_SET             (acpi_event_status) 0x04
-#define ACPI_EVENT_FLAG_HANDLE          (acpi_event_status) 0x08
+#define ACPI_EVENT_FLAG_HAS_HANDLER     (acpi_event_status) 0x08
 
 /* Actions for acpi_set_gpe, acpi_gpe_wakeup, acpi_hw_low_set_gpe */
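A self-contained sketch of testing the renamed flag (the status word is made up; only the names and bit values come from the header):

#include <stdio.h>

typedef unsigned int acpi_event_status;

#define ACPI_EVENT_FLAG_ENABLED         ((acpi_event_status)0x01)
#define ACPI_EVENT_FLAG_WAKE_ENABLED    ((acpi_event_status)0x02)
#define ACPI_EVENT_FLAG_SET             ((acpi_event_status)0x04)
#define ACPI_EVENT_FLAG_HAS_HANDLER     ((acpi_event_status)0x08)

int main(void)
{
        acpi_event_status status = ACPI_EVENT_FLAG_SET |
                                   ACPI_EVENT_FLAG_HAS_HANDLER;

        /* Only the name changed; bit 3 still means "has a handler". */
        if (status & ACPI_EVENT_FLAG_HAS_HANDLER)
                printf("event has a handler\n");
        return 0;
}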
......
/*
 * Copyright (C) 2014 Marvell
 * Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef __CPUFREQ_DT_H__
#define __CPUFREQ_DT_H__

#include <linux/types.h>

struct cpufreq_dt_platform_data {
	/*
	 * True when each CPU has its own clock to control its
	 * frequency, false when all CPUs are controlled by a single
	 * clock.
	 */
	bool independent_clocks;
};

#endif /* __CPUFREQ_DT_H__ */
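A hedged sketch of how a platform might feed this structure to the cpufreq-dt driver; the "cpufreq-dt" device name matches the driver, everything else here is assumed:

#include <linux/err.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/cpufreq-dt.h>

static int __init my_board_cpufreq_init(void)
{
        static struct cpufreq_dt_platform_data pdata = {
                .independent_clocks = true,     /* one clock per CPU */
        };

        /* platform_device_register_data() copies pdata for the driver. */
        return PTR_ERR_OR_ZERO(platform_device_register_data(NULL,
                        "cpufreq-dt", -1, &pdata, sizeof(pdata)));
}
device_initcall(my_board_cpufreq_init);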
@@ -219,6 +219,7 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
 struct cpufreq_driver {
 	char name[CPUFREQ_NAME_LEN];
 	u8 flags;
+	void *driver_data;
 
 	/* needed by all drivers */
 	int (*init) (struct cpufreq_policy *policy);
@@ -312,6 +313,7 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data);
 int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
 
 const char *cpufreq_get_current_driver(void);
+void *cpufreq_get_driver_data(void);
 
 static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy,
 		unsigned int min, unsigned int max)
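Together, the two additions let a driver pass its own configuration through registration, as in this hedged sketch (names are illustrative, and a real init callback would also have to set up the frequency table):

#include <linux/cpufreq.h>
#include <linux/cpumask.h>

struct my_pdata {
        bool independent_clocks;
};

static struct my_pdata my_pdata;

static int my_cpufreq_init(struct cpufreq_policy *policy)
{
        struct my_pdata *pd = cpufreq_get_driver_data();

        /* Share the policy across CPUs only when they share a clock. */
        if (pd && pd->independent_clocks)
                cpumask_copy(policy->cpus, cpumask_of(policy->cpu));
        else
                cpumask_setall(policy->cpus);
        return 0;
}

static struct cpufreq_driver my_cpufreq_driver = {
        .name           = "my-cpufreq",
        .init           = my_cpufreq_init,
        .driver_data    = &my_pdata,    /* retrieved by the init callback */
};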
......
@@ -50,6 +50,9 @@ static inline bool oom_task_origin(const struct task_struct *p)
 extern unsigned long oom_badness(struct task_struct *p,
 		struct mem_cgroup *memcg, const nodemask_t *nodemask,
 		unsigned long totalpages);
+
+extern int oom_kills_count(void);
+extern void note_oom_kill(void);
 extern void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 			     unsigned int points, unsigned long totalpages,
 			     struct mem_cgroup *memcg, nodemask_t *nodemask,
......
@@ -15,6 +15,7 @@ enum {
 	PM_QOS_CPU_DMA_LATENCY,
 	PM_QOS_NETWORK_LATENCY,
 	PM_QOS_NETWORK_THROUGHPUT,
+	PM_QOS_MEMORY_BANDWIDTH,
 
 	/* insert new class ID */
 	PM_QOS_NUM_CLASSES,
@@ -32,6 +33,7 @@ enum pm_qos_flags_status {
 #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE	0
+#define PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE	0
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE	0
 #define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE	0
 #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT	(-1)
@@ -69,7 +71,8 @@ struct dev_pm_qos_request {
 enum pm_qos_type {
 	PM_QOS_UNITIALIZED,
 	PM_QOS_MAX,		/* return the largest value */
-	PM_QOS_MIN		/* return the smallest value */
+	PM_QOS_MIN,		/* return the smallest value */
+	PM_QOS_SUM		/* return the sum */
 };
 
 /*
......
@@ -42,6 +42,9 @@ bool freezing_slow_path(struct task_struct *p)
 	if (p->flags & (PF_NOFREEZE | PF_SUSPEND_TASK))
 		return false;
 
+	if (test_thread_flag(TIF_MEMDIE))
+		return false;
+
 	if (pm_nosig_freezing || cgroup_freezing(p))
 		return true;
@@ -147,12 +150,6 @@ void __thaw_task(struct task_struct *p)
 {
 	unsigned long flags;
 
-	/*
-	 * Clear freezing and kick @p if FROZEN. Clearing is guaranteed to
-	 * be visible to @p as waking up implies wmb. Waking up inside
-	 * freezer_lock also prevents wakeups from leaking outside
-	 * refrigerator.
-	 */
 	spin_lock_irqsave(&freezer_lock, flags);
 	if (frozen(p))
 		wake_up_process(p);
......
@@ -46,13 +46,13 @@ static int try_to_freeze_tasks(bool user_only)
 	while (true) {
 		todo = 0;
 		read_lock(&tasklist_lock);
-		do_each_thread(g, p) {
+		for_each_process_thread(g, p) {
 			if (p == current || !freeze_task(p))
 				continue;
 
 			if (!freezer_should_skip(p))
 				todo++;
-		} while_each_thread(g, p);
+		}
 		read_unlock(&tasklist_lock);
 
 		if (!user_only) {
@@ -93,11 +93,11 @@ static int try_to_freeze_tasks(bool user_only)
 		if (!wakeup) {
 			read_lock(&tasklist_lock);
-			do_each_thread(g, p) {
+			for_each_process_thread(g, p) {
 				if (p != current && !freezer_should_skip(p)
 				    && freezing(p) && !frozen(p))
 					sched_show_task(p);
-			} while_each_thread(g, p);
+			}
 			read_unlock(&tasklist_lock);
 		}
 	} else {
@@ -108,6 +108,30 @@ static int try_to_freeze_tasks(bool user_only)
 	return todo ? -EBUSY : 0;
 }
 
+static bool __check_frozen_processes(void)
+{
+	struct task_struct *g, *p;
+
+	for_each_process_thread(g, p)
+		if (p != current && !freezer_should_skip(p) && !frozen(p))
+			return false;
+
+	return true;
+}
+
+/*
+ * Returns true if all freezable tasks (except for current) are frozen already
+ */
+static bool check_frozen_processes(void)
+{
+	bool ret;
+
+	read_lock(&tasklist_lock);
+	ret = __check_frozen_processes();
+	read_unlock(&tasklist_lock);
+	return ret;
+}
+
 /**
  * freeze_processes - Signal user space processes to enter the refrigerator.
  * The current thread will not be frozen. The same process that calls
@@ -118,6 +142,7 @@ static int try_to_freeze_tasks(bool user_only)
 int freeze_processes(void)
 {
 	int error;
+	int oom_kills_saved;
 
 	error = __usermodehelper_disable(UMH_FREEZING);
 	if (error)
@@ -132,11 +157,25 @@ int freeze_processes(void)
 	pm_wakeup_clear();
 	printk("Freezing user space processes ... ");
 	pm_freezing = true;
+	oom_kills_saved = oom_kills_count();
 	error = try_to_freeze_tasks(true);
 	if (!error) {
-		printk("done.");
 		__usermodehelper_set_disable_depth(UMH_DISABLED);
 		oom_killer_disable();
+
+		/*
+		 * There might have been an OOM kill while we were
+		 * freezing tasks and the killed task might be still
+		 * on the way out so we have to double check for race.
+		 */
+		if (oom_kills_count() != oom_kills_saved &&
+		    !check_frozen_processes()) {
+			__usermodehelper_set_disable_depth(UMH_ENABLED);
+			printk("OOM in progress.");
+			error = -EBUSY;
+		} else {
+			printk("done.");
+		}
 	}
 	printk("\n");
 	BUG_ON(in_atomic());
@@ -191,11 +230,11 @@ void thaw_processes(void)
 	thaw_workqueues();
 
 	read_lock(&tasklist_lock);
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		/* No other threads should have PF_SUSPEND_TASK set */
 		WARN_ON((p != curr) && (p->flags & PF_SUSPEND_TASK));
 		__thaw_task(p);
-	} while_each_thread(g, p);
+	}
 	read_unlock(&tasklist_lock);
 
 	WARN_ON(!(curr->flags & PF_SUSPEND_TASK));
@@ -218,10 +257,10 @@ void thaw_kernel_threads(void)
 	thaw_workqueues();
 
 	read_lock(&tasklist_lock);
-	do_each_thread(g, p) {
+	for_each_process_thread(g, p) {
 		if (p->flags & (PF_KTHREAD | PF_WQ_WORKER))
 			__thaw_task(p);
-	} while_each_thread(g, p);
+	}
 	read_unlock(&tasklist_lock);
 
 	schedule();
......
@@ -105,11 +105,27 @@ static struct pm_qos_object network_throughput_pm_qos = {
 };
 
+static BLOCKING_NOTIFIER_HEAD(memory_bandwidth_notifier);
+static struct pm_qos_constraints memory_bw_constraints = {
+	.list = PLIST_HEAD_INIT(memory_bw_constraints.list),
+	.target_value = PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE,
+	.default_value = PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE,
+	.type = PM_QOS_SUM,
+	.notifiers = &memory_bandwidth_notifier,
+};
+
+static struct pm_qos_object memory_bandwidth_pm_qos = {
+	.constraints = &memory_bw_constraints,
+	.name = "memory_bandwidth",
+};
+
 static struct pm_qos_object *pm_qos_array[] = {
 	&null_pm_qos,
 	&cpu_dma_pm_qos,
 	&network_lat_pm_qos,
-	&network_throughput_pm_qos
+	&network_throughput_pm_qos,
+	&memory_bandwidth_pm_qos,
 };
 
 static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
@@ -130,6 +146,9 @@ static const struct file_operations pm_qos_power_fops = {
 /* unlocked internal variant */
 static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 {
+	struct plist_node *node;
+	int total_value = 0;
+
 	if (plist_head_empty(&c->list))
 		return c->no_constraint_value;
@@ -140,6 +159,12 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 	case PM_QOS_MAX:
 		return plist_last(&c->list)->prio;
 
+	case PM_QOS_SUM:
+		plist_for_each(node, &c->list)
+			total_value += node->prio;
+
+		return total_value;
+
 	default:
 		/* runtime check for not using enum */
 		BUG();
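Usage sketch for the new aggregation (values assumed): with PM_QOS_SUM, concurrent memory-bandwidth requests add up rather than competing for the min or max.

#include <linux/kernel.h>
#include <linux/pm_qos.h>

static struct pm_qos_request req_a, req_b;

static void memory_bw_demo(void)
{
        pm_qos_add_request(&req_a, PM_QOS_MEMORY_BANDWIDTH, 300);
        pm_qos_add_request(&req_b, PM_QOS_MEMORY_BANDWIDTH, 250);

        /* With PM_QOS_SUM the class target is now 300 + 250 = 550. */
        pr_info("memory bw target: %d\n",
                pm_qos_request(PM_QOS_MEMORY_BANDWIDTH));

        pm_qos_remove_request(&req_b);  /* target drops back to 300 */
        pm_qos_remove_request(&req_a);
}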
......
@@ -404,6 +404,23 @@ static void dump_header(struct task_struct *p, gfp_t gfp_mask, int order,
 		dump_tasks(memcg, nodemask);
 }
 
+/*
+ * Number of OOM killer invocations (including memcg OOM killer).
+ * Primarily used by PM freezer to check for potential races with
+ * OOM killed frozen task.
+ */
+static atomic_t oom_kills = ATOMIC_INIT(0);
+
+int oom_kills_count(void)
+{
+	return atomic_read(&oom_kills);
+}
+
+void note_oom_kill(void)
+{
+	atomic_inc(&oom_kills);
+}
+
 #define K(x) ((x) << (PAGE_SHIFT-10))
 /*
  * Must be called while holding a reference to p, which will be released upon
......
@@ -2251,6 +2251,14 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		return NULL;
 	}
 
+	/*
+	 * PM-freezer should be notified that there might be an OOM killer on
+	 * its way to kill and wake somebody up. This is too early and we might
+	 * end up not killing anything but false positives are acceptable.
+	 * See freeze_processes.
+	 */
+	note_oom_kill();
+
 	/*
 	 * Go through the zonelist yet one more time, keep very high watermark
 	 * here, this is only to catch a parallel oom killing, we must fail if
......
@@ -122,6 +122,14 @@ static void os_enter_line_edit_mode(void)
 {
 	struct termios local_term_attributes;
 
+	term_attributes_were_set = 0;
+
+	/* STDIN must be a terminal */
+
+	if (!isatty(STDIN_FILENO)) {
+		return;
+	}
+
 	/* Get and keep the original attributes */
 
 	if (tcgetattr(STDIN_FILENO, &original_term_attributes)) {
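The guard in isolation, as a runnable user-space example of the same pattern: never touch terminal modes unless stdin really is a tty, as when the tool's input is redirected from a file.

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
        struct termios attrs;

        if (!isatty(STDIN_FILENO)) {
                fprintf(stderr, "stdin is not a terminal, skipping\n");
                return 0;
        }
        if (tcgetattr(STDIN_FILENO, &attrs) == 0)
                printf("terminal attributes fetched\n");
        return 0;
}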
......
@@ -146,7 +146,7 @@ u32 ap_get_table_length(struct acpi_table_header *table)
 
 	if (ACPI_VALIDATE_RSDP_SIG(table->signature)) {
 		rsdp = ACPI_CAST_PTR(struct acpi_table_rsdp, table);
-		return (rsdp->length);
+		return (acpi_tb_get_rsdp_length(rsdp));
 	}
 
 	/* Normal ACPI table */