Commit 74b51ee1 authored by Konstantin Khlebnikov, committed by Rafael J. Wysocki

ACPI / osl: speedup grace period in acpi_os_map_cleanup

ACPI maintains a cache of ioremap regions to speed up operations and to
allow access from irq context, where ioremap() calls aren't allowed.
The unmap path relies on synchronize_rcu() to synchronize with the
fast path in acpi_os_read/write_memory(), which uses this cache.
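
The fast-path/slow-path split described above looks roughly like the
following simplified sketch, loosely modeled on drivers/acpi/osl.c
(names and fields are abbreviated for illustration; this is not a
verbatim copy of the kernel source and will not compile standalone):

```c
/*
 * Sketch of the RCU-protected ioremap cache. Readers (usable from irq
 * context) take only rcu_read_lock(), which never sleeps; the unmap
 * path must wait for all such readers to drain before freeing the
 * mapping -- that wait is the synchronize_rcu() this patch replaces
 * with synchronize_rcu_expedited().
 */

/* Fast path: find an existing mapping without sleeping. */
static void __iomem *acpi_map_lookup_virt_sketch(acpi_physical_address phys)
{
	struct acpi_ioremap *map;
	void __iomem *virt = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(map, &acpi_ioremaps, list) {
		if (phys >= map->phys && phys < map->phys + map->size) {
			virt = map->virt + (phys - map->phys);
			break;
		}
	}
	rcu_read_unlock();
	return virt;
}

/* Slow path: tear down a mapping once its refcount drops to zero. */
static void acpi_os_map_cleanup_sketch(struct acpi_ioremap *map)
{
	if (!map->refcount) {
		/* Wait for all rcu_read_lock() readers before freeing. */
		synchronize_rcu_expedited();
		acpi_unmap(map->phys, map->virt);
		kfree(map);
	}
}
```

Because the grace-period wait sits on the unmap path rather than the
read path, shortening it with synchronize_rcu_expedited() speeds up
teardown without adding any cost to the irq-safe readers.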

Since v3.10 CPUs are allowed to enter idle state even if they have RCU
callbacks queued, see commit c0f4dfd4
("rcu: Make RCU_FAST_NO_HZ take advantage of numbered callbacks").
That change caused problems with the nvidia proprietary driver, which
calls acpi_os_map/unmap_generic_address several times during
initialization. Each unmap calls synchronize_rcu() and adds a
significant delay. In total, initialization is slowed by a couple of
seconds, which is enough to trigger a timeout in the hardware: the GPU
appears to have "fallen off the bus". A widespread workaround is
reducing "rcu_idle_gp_delay" from 4 jiffies to 1.

This patch replaces synchronize_rcu() with synchronize_rcu_expedited(),
which is much faster.

Link: https://devtalk.nvidia.com/default/topic/567297/linux/linux-3-10-driver-crash/
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Reported-and-tested-by: Alexander Monakov <amonakov@gmail.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
parent 206c5f60
@@ -436,7 +436,7 @@ static void acpi_os_drop_map_ref(struct acpi_ioremap *map)
 static void acpi_os_map_cleanup(struct acpi_ioremap *map)
 {
 	if (!map->refcount) {
-		synchronize_rcu();
+		synchronize_rcu_expedited();
 		acpi_unmap(map->phys, map->virt);
 		kfree(map);
 	}