path: root/drivers/base/power/runtime.c
author		Linus Torvalds <torvalds@linux-foundation.org>	2014-04-01 15:48:54 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-04-01 15:48:54 -0400
commit		4dedde7c7a18f55180574f934dbc1be84ca0400b (patch)
tree		d7cc511e8ba8ffceadf3f45b9a63395c4e4183c5 /drivers/base/power/runtime.c
parent		683b6c6f82a60fabf47012581c2cfbf1b037ab95 (diff)
parent		0ecfe310f4517d7505599be738158087c165be7c (diff)
Merge tag 'pm+acpi-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI and power management updates from Rafael Wysocki:
 "The majority of this material spent some time in linux-next, some of it even
  several weeks. There are a few relatively fresh commits in it, but they are
  mostly fixes and simple cleanups. ACPI took the lead this time, both in terms
  of the number of commits and the number of modified lines of code; cpufreq
  follows, and there are a few changes in the PM core and in cpuidle too.

  A new feature that already got some LWN.net attention is the device PM QoS
  extension allowing latency tolerance requirements to be propagated from leaf
  devices to their ancestors with hardware interfaces for specifying latency
  tolerance. That should help systems with hardware-driven power management to
  avoid going too far with it in cases when there are latency tolerance
  constraints.

  There also are some significant changes in the ACPI core related to the way
  in which hotplug notifications are handled. They affect PCI hotplug (ACPIPHP)
  and the ACPI dock station code too. The bottom line is that all those
  notifications now go through the root notify handler and are propagated to
  the interested subsystems by means of callbacks instead of having to install
  a notify handler for each device object that we can potentially get hotplug
  notifications for.

  In addition to that, ACPICA will now advertise "Windows 2013" compatibility
  for _OSI, because some systems out there don't work correctly if that is not
  done (some of them don't even boot).

  On the system suspend side of things, all of the device suspend and resume
  callbacks, except for ->prepare() and ->complete(), are now going to be
  executed asynchronously, as that turns out to speed up system suspend and
  resume on some platforms quite significantly, and we have a few more
  optimizations in that area.

  Apart from that, there are some new device IDs and fixes and cleanups all
  over. In particular, the system suspend and resume handling by cpufreq
  should be improved and the cpuidle menu governor should be a bit more robust
  now.

  Specifics:

   - Device PM QoS support for latency tolerance constraints on systems with
     hardware interfaces allowing such constraints to be specified. That is
     necessary to prevent hardware-driven power management from becoming
     overly aggressive on some systems and to prevent power management
     features leading to excessive latencies from being used in some cases.

   - Consolidation of the handling of ACPI hotplug notifications for device
     objects. This causes all device hotplug notifications to go through the
     root notify handler (that was executed for all of them anyway before)
     that propagates them to individual subsystems, if necessary, by executing
     callbacks provided by those subsystems (those callbacks are associated
     with struct acpi_device objects during device enumeration). As a result,
     the code in question becomes both smaller in size and more
     straightforward and all of those changes should not affect users.

   - ACPICA update, including fixes related to the handling of _PRT in cases
     when it is broken and the addition of "Windows 2013" to the list of
     supported "features" for _OSI (which is necessary to support systems that
     work incorrectly or don't even boot without it). Changes from Bob Moore
     and Lv Zheng.

   - Consolidation of ACPI _OST handling from Jiang Liu.

   - ACPI battery and AC fixes allowing unusual system configurations to be
     handled by that code from Alexander Mezin.

   - New device IDs for the ACPI LPSS driver from Chiau Ee Chew.

   - ACPI fan and thermal optimizations related to system suspend and resume
     from Aaron Lu.

   - Cleanups related to ACPI video from Jean Delvare.

   - Assorted ACPI fixes and cleanups from Al Stone, Hanjun Guo, Lan Tianyu,
     Paul Bolle, Tomasz Nowicki.

   - Intel RAPL (Running Average Power Limits) driver cleanups from Jacob Pan.

   - intel_pstate fixes and cleanups from Dirk Brandewie.

   - cpufreq fixes related to system suspend/resume handling from Viresh
     Kumar.

   - cpufreq core fixes and cleanups from Viresh Kumar, Stratos Karafotis,
     Saravana Kannan, Rashika Kheria, Joe Perches.

   - cpufreq drivers updates from Viresh Kumar, Zhuoyu Zhang, Rob Herring.

   - cpuidle fixes related to the menu governor from Tuukka Tikkanen.

   - cpuidle fix related to coupled CPUs handling from Paul Burton.

   - Asynchronous execution of all device suspend and resume callbacks, except
     for ->prepare and ->complete, during system suspend and resume from
     Chuansheng Liu.

   - Delayed resuming of runtime-suspended devices during system suspend for
     the PCI bus type and ACPI PM domain.

   - New set of PM helper routines to allow device runtime PM callbacks to be
     used during system suspend and resume more easily from Ulf Hansson (see
     the usage sketch after this message).

   - Assorted fixes and cleanups in the PM core from Geert Uytterhoeven,
     Prabhakar Lad, Philipp Zabel, Rashika Kheria, Sebastian Capella.

   - devfreq fix from Saravana Kannan"

* tag 'pm+acpi-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (162 commits)
  PM / devfreq: Rewrite devfreq_update_status() to fix multiple bugs
  PM / sleep: Correct whitespace errors in <linux/pm.h>
  intel_pstate: Set core to min P state during core offline
  cpufreq: Add stop CPU callback to cpufreq_driver interface
  cpufreq: Remove unnecessary braces
  cpufreq: Fix checkpatch errors and warnings
  cpufreq: powerpc: add cpufreq transition latency for FSL e500mc SoCs
  MAINTAINERS: Reorder maintainer addresses for PM and ACPI
  PM / Runtime: Update runtime_idle() documentation for return value meaning
  video / output: Drop display output class support
  fujitsu-laptop: Drop unneeded include
  acer-wmi: Stop selecting VIDEO_OUTPUT_CONTROL
  ACPI / gpu / drm: Stop selecting VIDEO_OUTPUT_CONTROL
  ACPI / video: fix ACPI_VIDEO dependencies
  cpufreq: remove unused notifier: CPUFREQ_{SUSPENDCHANGE|RESUMECHANGE}
  cpufreq: Do not allow ->setpolicy drivers to provide ->target
  cpufreq: arm_big_little: set 'physical_cluster' for each CPU
  cpufreq: arm_big_little: make vexpress driver depend on bL core driver
  ACPI / button: Add ACPI Button event via netlink routine
  ACPI: Remove duplicate definitions of PREFIX
  ...
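As a quick illustration of the new runtime PM helpers exported by this merge, a driver can reuse its runtime PM callbacks for system sleep by pointing its dev_pm_ops at pm_runtime_force_suspend() and pm_runtime_force_resume(). This is a minimal sketch only: the foo_* names and the empty callback bodies are hypothetical placeholders, not code from the tree.

#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* Hypothetical runtime PM callbacks; a real driver would save/restore its
 * device context and gate clocks here. */
static int foo_runtime_suspend(struct device *dev)
{
	return 0;	/* put the device into its low power state */
}

static int foo_runtime_resume(struct device *dev)
{
	return 0;	/* bring the device back to full power */
}

static const struct dev_pm_ops foo_pm_ops = {
	/* Reuse the runtime PM path for system suspend/resume. */
	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				pm_runtime_force_resume)
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};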
Diffstat (limited to 'drivers/base/power/runtime.c')
-rw-r--r--	drivers/base/power/runtime.c	164
1 file changed, 124 insertions(+), 40 deletions(-)
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 72e00e66ecc5..67c7938e430b 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -13,6 +13,43 @@
 #include <trace/events/rpm.h>
 #include "power.h"
 
+#define RPM_GET_CALLBACK(dev, cb)				\
+({								\
+	int (*__rpm_cb)(struct device *__d);			\
+								\
+	if (dev->pm_domain)					\
+		__rpm_cb = dev->pm_domain->ops.cb;		\
+	else if (dev->type && dev->type->pm)			\
+		__rpm_cb = dev->type->pm->cb;			\
+	else if (dev->class && dev->class->pm)			\
+		__rpm_cb = dev->class->pm->cb;			\
+	else if (dev->bus && dev->bus->pm)			\
+		__rpm_cb = dev->bus->pm->cb;			\
+	else							\
+		__rpm_cb = NULL;				\
+								\
+	if (!__rpm_cb && dev->driver && dev->driver->pm)	\
+		__rpm_cb = dev->driver->pm->cb;			\
+								\
+	__rpm_cb;						\
+})
+
+static int (*rpm_get_suspend_cb(struct device *dev))(struct device *)
+{
+	return RPM_GET_CALLBACK(dev, runtime_suspend);
+}
+
+static int (*rpm_get_resume_cb(struct device *dev))(struct device *)
+{
+	return RPM_GET_CALLBACK(dev, runtime_resume);
+}
+
+#ifdef CONFIG_PM_RUNTIME
+static int (*rpm_get_idle_cb(struct device *dev))(struct device *)
+{
+	return RPM_GET_CALLBACK(dev, runtime_idle);
+}
+
 static int rpm_resume(struct device *dev, int rpmflags);
 static int rpm_suspend(struct device *dev, int rpmflags);
 
@@ -310,19 +347,7 @@ static int rpm_idle(struct device *dev, int rpmflags)
 
 	dev->power.idle_notification = true;
 
-	if (dev->pm_domain)
-		callback = dev->pm_domain->ops.runtime_idle;
-	else if (dev->type && dev->type->pm)
-		callback = dev->type->pm->runtime_idle;
-	else if (dev->class && dev->class->pm)
-		callback = dev->class->pm->runtime_idle;
-	else if (dev->bus && dev->bus->pm)
-		callback = dev->bus->pm->runtime_idle;
-	else
-		callback = NULL;
-
-	if (!callback && dev->driver && dev->driver->pm)
-		callback = dev->driver->pm->runtime_idle;
+	callback = rpm_get_idle_cb(dev);
 
 	if (callback)
 		retval = __rpm_callback(callback, dev);
@@ -492,19 +517,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 
 	__update_runtime_status(dev, RPM_SUSPENDING);
 
-	if (dev->pm_domain)
-		callback = dev->pm_domain->ops.runtime_suspend;
-	else if (dev->type && dev->type->pm)
-		callback = dev->type->pm->runtime_suspend;
-	else if (dev->class && dev->class->pm)
-		callback = dev->class->pm->runtime_suspend;
-	else if (dev->bus && dev->bus->pm)
-		callback = dev->bus->pm->runtime_suspend;
-	else
-		callback = NULL;
-
-	if (!callback && dev->driver && dev->driver->pm)
-		callback = dev->driver->pm->runtime_suspend;
+	callback = rpm_get_suspend_cb(dev);
 
 	retval = rpm_callback(callback, dev);
 	if (retval)
@@ -724,19 +737,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
 
 	__update_runtime_status(dev, RPM_RESUMING);
 
-	if (dev->pm_domain)
-		callback = dev->pm_domain->ops.runtime_resume;
-	else if (dev->type && dev->type->pm)
-		callback = dev->type->pm->runtime_resume;
-	else if (dev->class && dev->class->pm)
-		callback = dev->class->pm->runtime_resume;
-	else if (dev->bus && dev->bus->pm)
-		callback = dev->bus->pm->runtime_resume;
-	else
-		callback = NULL;
-
-	if (!callback && dev->driver && dev->driver->pm)
-		callback = dev->driver->pm->runtime_resume;
+	callback = rpm_get_resume_cb(dev);
 
 	retval = rpm_callback(callback, dev);
 	if (retval) {
@@ -1130,7 +1131,7 @@ EXPORT_SYMBOL_GPL(pm_runtime_barrier);
  * @dev: Device to handle.
  * @check_resume: If set, check if there's a resume request for the device.
  *
- * Increment power.disable_depth for the device and if was zero previously,
+ * Increment power.disable_depth for the device and if it was zero previously,
  * cancel all pending runtime PM requests for the device and wait for all
  * operations in progress to complete. The device can be either active or
  * suspended after its runtime PM has been disabled.
@@ -1401,3 +1402,86 @@ void pm_runtime_remove(struct device *dev)
 	if (dev->power.irq_safe && dev->parent)
 		pm_runtime_put(dev->parent);
 }
+#endif
+
+/**
+ * pm_runtime_force_suspend - Force a device into suspend state if needed.
+ * @dev: Device to suspend.
+ *
+ * Disable runtime PM so we safely can check the device's runtime PM status
+ * and, if it is active, invoke its .runtime_suspend callback to bring it into
+ * suspend state. Keep runtime PM disabled to preserve the state unless we
+ * encounter errors.
+ *
+ * Typically this function may be invoked from a system suspend callback to
+ * make sure the device is put into low power state.
+ */
+int pm_runtime_force_suspend(struct device *dev)
+{
+	int (*callback)(struct device *);
+	int ret = 0;
+
+	pm_runtime_disable(dev);
+
+	/*
+	 * Note that pm_runtime_status_suspended() returns false while
+	 * !CONFIG_PM_RUNTIME, which means the device will be put into low
+	 * power state.
+	 */
+	if (pm_runtime_status_suspended(dev))
+		return 0;
+
+	callback = rpm_get_suspend_cb(dev);
+
+	if (!callback) {
+		ret = -ENOSYS;
+		goto err;
+	}
+
+	ret = callback(dev);
+	if (ret)
+		goto err;
+
+	pm_runtime_set_suspended(dev);
+	return 0;
+err:
+	pm_runtime_enable(dev);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pm_runtime_force_suspend);
+
+/**
+ * pm_runtime_force_resume - Force a device into resume state.
+ * @dev: Device to resume.
+ *
+ * Prior to invoking this function we expect the user to have brought the
+ * device into low power state by a call to pm_runtime_force_suspend(). Here
+ * we reverse those actions and bring the device back to full power. We update
+ * the runtime PM status and re-enable runtime PM.
+ *
+ * Typically this function may be invoked from a system resume callback to
+ * make sure the device is put into full power state.
+ */
+int pm_runtime_force_resume(struct device *dev)
+{
+	int (*callback)(struct device *);
+	int ret = 0;
+
+	callback = rpm_get_resume_cb(dev);
+
+	if (!callback) {
+		ret = -ENOSYS;
+		goto out;
+	}
+
+	ret = callback(dev);
+	if (ret)
+		goto out;
+
+	pm_runtime_set_active(dev);
+	pm_runtime_mark_last_busy(dev);
+out:
+	pm_runtime_enable(dev);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pm_runtime_force_resume);
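The rpm_get_*_cb() accessors added above use the function-pointer-returning-function declaration form; for readers skimming the macro, an equivalent open-coded lookup for the suspend case would read roughly as below. This is an illustrative sketch only: the rpm_callback_t typedef and the example_get_suspend_cb() name are not part of the patch.

typedef int (*rpm_callback_t)(struct device *);

/* Mirrors RPM_GET_CALLBACK(dev, runtime_suspend): PM domain, device type,
 * class and bus callbacks take precedence, in that order; the driver's own
 * callback is used only when none of them provides one. */
static rpm_callback_t example_get_suspend_cb(struct device *dev)
{
	rpm_callback_t cb = NULL;

	if (dev->pm_domain)
		cb = dev->pm_domain->ops.runtime_suspend;
	else if (dev->type && dev->type->pm)
		cb = dev->type->pm->runtime_suspend;
	else if (dev->class && dev->class->pm)
		cb = dev->class->pm->runtime_suspend;
	else if (dev->bus && dev->bus->pm)
		cb = dev->bus->pm->runtime_suspend;

	if (!cb && dev->driver && dev->driver->pm)
		cb = dev->driver->pm->runtime_suspend;

	return cb;
}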