author    Linus Torvalds <torvalds@linux-foundation.org>  2017-11-13 22:43:50 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-11-13 22:43:50 -0500
commit    bd2cd7d5a8f83ddc761025f42a3ca8e56351a6cc (patch)
tree      6ea70f09f32544f895020e198dac632145332cc2 /drivers
parent    b29c6ef7bb1257853c1e31616d84f55e561cf631 (diff)
parent    990a848d537e4da966907c8ccec95bc568f2911c (diff)
Merge tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
 "There are no real big ticket items here this time.

  The most noticeable change is probably the relocation of the OPP
  (Operating Performance Points) framework to its own directory under
  drivers/ as it has grown big enough for that. Also Viresh is now going
  to maintain it and send pull requests for it to me, so you will see
  this change in the git history going forward (but still not right
  now).

  Another noticeable set of changes is the modifications of the PM core,
  the PCI subsystem and the ACPI PM domain to allow more integration
  between system-wide suspend/resume and runtime PM. For now it's just a
  way to avoid resuming devices from runtime suspend unnecessarily
  during system suspend (if the driver sets a flag to indicate its
  readiness for that) and in the works is an analogous mechanism to
  allow devices to stay suspended after system resume.

  In addition to that, we have some changes related to supporting
  frequency-invariant CPU utilization metrics in the scheduler and in
  the schedutil cpufreq governor on ARM and changes to add support for
  device performance states to the generic power domains (genpd)
  framework.

  The rest is mostly fixes and cleanups of various sorts.

  Specifics:

   - Relocate the OPP (Operating Performance Points) framework to its
     own directory under drivers/ and add support for power domain
     performance states to it (Viresh Kumar).

   - Modify the PM core, the PCI bus type and the ACPI PM domain to
     support power management driver flags allowing device drivers to
     specify their capabilities and preferences regarding the handling
     of devices with enabled runtime PM during system suspend/resume
     and clean up that code somewhat (Rafael Wysocki, Ulf Hansson).

   - Add frequency-invariant accounting support to the task scheduler
     on ARM and ARM64 (Dietmar Eggemann).

   - Fix PM QoS device resume latency framework to prevent "no
     restriction" requests from overriding requests with specific
     requirements and drop the confusing PM_QOS_FLAG_REMOTE_WAKEUP
     device PM QoS flag (Rafael Wysocki).

   - Drop legacy class suspend/resume operations from the PM core and
     drop legacy bus type suspend and resume callbacks from ARM/locomo
     (Rafael Wysocki).

   - Add min/max frequency support to devfreq and clean it up somewhat
     (Chanwoo Choi).

   - Rework wakeup support in the generic power domains (genpd)
     framework and update some of its users accordingly (Geert
     Uytterhoeven).

   - Convert timers in the PM core to use timer_setup() (Kees Cook).

   - Add support for exposing the SLP_S0 (Low Power S0 Idle) residency
     counter based on the LPIT ACPI table on Intel platforms (Srinivas
     Pandruvada).

   - Add per-CPU PM QoS resume latency support to the ladder cpuidle
     governor (Ramesh Thomas).

   - Fix a deadlock between the wakeup notify handler and the notifier
     removal in the ACPI core (Ville Syrjälä).

   - Fix a cpufreq schedutil governor issue causing it to use stale
     cached frequency values sometimes (Viresh Kumar).

   - Fix an issue in the system suspend core support code causing
     wakeup events detection to fail in some cases (Rajat Jain).

   - Fix the generic power domains (genpd) framework to prevent the PM
     core from using the direct-complete optimization with it as that
     is guaranteed to fail (Ulf Hansson).

   - Fix a minor issue in the cpuidle core and clean it up a bit
     (Gaurav Jindal, Nicholas Piggin).

   - Fix and clean up the intel_idle and ARM cpuidle drivers (Jason
     Baron, Len Brown, Leo Yan).

   - Fix a couple of minor issues in the OPP framework and clean it up
     (Arvind Yadav, Fabio Estevam, Sudeep Holla, Tobias Jordan).

   - Fix and clean up some cpufreq drivers and fix a minor issue in the
     cpufreq statistics code (Arvind Yadav, Bhumika Goyal, Fabio
     Estevam, Gautham Shenoy, Gustavo Silva, Marek Szyprowski, Masahiro
     Yamada, Robert Jarzmik, Zumeng Chen).

   - Fix minor issues in the system suspend and hibernation core, in
     power management documentation and in the AVS (Adaptive Voltage
     Scaling) framework (Helge Deller, Himanshu Jha, Joe Perches,
     Rafael Wysocki).

   - Fix some issues in the cpupower utility and document that Shuah
     Khan is going to maintain it going forward (Prarit Bhargava, Shuah
     Khan)"

* tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (88 commits)
  tools/power/cpupower: add libcpupower.so.0.0.1 to .gitignore
  tools/power/cpupower: Add 64 bit library detection
  intel_idle: Graceful probe failure when MWAIT is disabled
  cpufreq: schedutil: Reset cached_raw_freq when not in sync with next_freq
  freezer: Fix typo in freezable_schedule_timeout() comment
  PM / s2idle: Clear the events_check_enabled flag
  cpufreq: stats: Handle the case when trans_table goes beyond PAGE_SIZE
  cpufreq: arm_big_little: make cpufreq_arm_bL_ops structures const
  cpufreq: arm_big_little: make function arguments and structure pointer const
  cpuidle: Avoid assignment in if () argument
  cpuidle: Clean up cpuidle_enable_device() error handling a bit
  ACPI / PM: Fix acpi_pm_notifier_lock vs flush_workqueue() deadlock
  PM / Domains: Fix genpd to deal with drivers returning 1 from ->prepare()
  cpuidle: ladder: Add per CPU PM QoS resume latency support
  PM / QoS: Fix device resume latency framework
  PM / domains: Rework governor code to be more consistent
  PM / Domains: Remove gpd_dev_ops.active_wakeup() callback
  soc: rockchip: power-domain: Use GENPD_FLAG_ACTIVE_WAKEUP
  soc: mediatek: Use GENPD_FLAG_ACTIVE_WAKEUP
  ARM: shmobile: pm-rmobile: Use GENPD_FLAG_ACTIVE_WAKEUP
  ...
Diffstat (limited to 'drivers')
-rw-r--r--  drivers/Kconfig | 2
-rw-r--r--  drivers/Makefile | 1
-rw-r--r--  drivers/acpi/Kconfig | 5
-rw-r--r--  drivers/acpi/Makefile | 1
-rw-r--r--  drivers/acpi/acpi_lpit.c | 162
-rw-r--r--  drivers/acpi/acpi_lpss.c | 95
-rw-r--r--  drivers/acpi/device_pm.c | 277
-rw-r--r--  drivers/acpi/internal.h | 6
-rw-r--r--  drivers/acpi/osl.c | 42
-rw-r--r--  drivers/acpi/scan.c | 1
-rw-r--r--  drivers/base/arch_topology.c | 29
-rw-r--r--  drivers/base/cpu.c | 3
-rw-r--r--  drivers/base/dd.c | 2
-rw-r--r--  drivers/base/power/Makefile | 1
-rw-r--r--  drivers/base/power/domain.c | 226
-rw-r--r--  drivers/base/power/domain_governor.c | 73
-rw-r--r--  drivers/base/power/generic_ops.c | 23
-rw-r--r--  drivers/base/power/main.c | 53
-rw-r--r--  drivers/base/power/qos.c | 5
-rw-r--r--  drivers/base/power/runtime.c | 9
-rw-r--r--  drivers/base/power/sysfs.c | 53
-rw-r--r--  drivers/base/power/wakeup.c | 11
-rw-r--r--  drivers/cpufreq/arm_big_little.c | 16
-rw-r--r--  drivers/cpufreq/arm_big_little.h | 4
-rw-r--r--  drivers/cpufreq/arm_big_little_dt.c | 2
-rw-r--r--  drivers/cpufreq/cpufreq-dt-platdev.c | 3
-rw-r--r--  drivers/cpufreq/cpufreq-dt.c | 12
-rw-r--r--  drivers/cpufreq/cpufreq.c | 6
-rw-r--r--  drivers/cpufreq/cpufreq_stats.c | 7
-rw-r--r--  drivers/cpufreq/imx6q-cpufreq.c | 85
-rw-r--r--  drivers/cpufreq/powernow-k8.c | 2
-rw-r--r--  drivers/cpufreq/pxa2xx-cpufreq.c | 191
-rw-r--r--  drivers/cpufreq/scpi-cpufreq.c | 2
-rw-r--r--  drivers/cpufreq/spear-cpufreq.c | 4
-rw-r--r--  drivers/cpufreq/speedstep-lib.c | 2
-rw-r--r--  drivers/cpufreq/ti-cpufreq.c | 6
-rw-r--r--  drivers/cpufreq/vexpress-spc-cpufreq.c | 2
-rw-r--r--  drivers/cpuidle/cpuidle-arm.c | 153
-rw-r--r--  drivers/cpuidle/cpuidle.c | 14
-rw-r--r--  drivers/cpuidle/governors/ladder.c | 7
-rw-r--r--  drivers/cpuidle/governors/menu.c | 4
-rw-r--r--  drivers/devfreq/devfreq.c | 139
-rw-r--r--  drivers/devfreq/exynos-bus.c | 5
-rw-r--r--  drivers/devfreq/governor_passive.c | 2
-rw-r--r--  drivers/devfreq/governor_performance.c | 2
-rw-r--r--  drivers/devfreq/governor_powersave.c | 2
-rw-r--r--  drivers/devfreq/governor_simpleondemand.c | 2
-rw-r--r--  drivers/devfreq/governor_userspace.c | 2
-rw-r--r--  drivers/devfreq/rk3399_dmc.c | 2
-rw-r--r--  drivers/gpu/drm/i915/i915_drv.c | 2
-rw-r--r--  drivers/idle/intel_idle.c | 23
-rw-r--r--  drivers/misc/mei/pci-me.c | 2
-rw-r--r--  drivers/misc/mei/pci-txe.c | 2
-rw-r--r--  drivers/opp/Kconfig | 13
-rw-r--r--  drivers/opp/Makefile (renamed from drivers/base/power/opp/Makefile) | 0
-rw-r--r--  drivers/opp/core.c (renamed from drivers/base/power/opp/core.c) | 143
-rw-r--r--  drivers/opp/cpu.c (renamed from drivers/base/power/opp/cpu.c) | 0
-rw-r--r--  drivers/opp/debugfs.c (renamed from drivers/base/power/opp/debugfs.c) | 10
-rw-r--r--  drivers/opp/of.c (renamed from drivers/base/power/opp/of.c) | 6
-rw-r--r--  drivers/opp/opp.h (renamed from drivers/base/power/opp/opp.h) | 6
-rw-r--r--  drivers/pci/pci-driver.c | 134
-rw-r--r--  drivers/pci/pci.c | 3
-rw-r--r--  drivers/power/avs/smartreflex.c | 10
-rw-r--r--  drivers/soc/mediatek/mtk-scpsys.c | 14
-rw-r--r--  drivers/soc/rockchip/pm_domains.c | 14
65 files changed, 1373 insertions, 767 deletions
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 1d7af3c2ff27..152744c5ef0f 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -209,4 +209,6 @@ source "drivers/tee/Kconfig"
 
 source "drivers/mux/Kconfig"
 
+source "drivers/opp/Kconfig"
+
 endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index d242d3514d30..1d034b680431 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -126,6 +126,7 @@ obj-$(CONFIG_ACCESSIBILITY) += accessibility/
 obj-$(CONFIG_ISDN)		+= isdn/
 obj-$(CONFIG_EDAC)		+= edac/
 obj-$(CONFIG_EISA)		+= eisa/
+obj-$(CONFIG_PM_OPP)		+= opp/
 obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_CPU_IDLE)		+= cpuidle/
 obj-y				+= mmc/
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 5b1938f4b626..4cb763a01f4d 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -81,6 +81,11 @@ endif
 config ACPI_SPCR_TABLE
 	bool
 
+config ACPI_LPIT
+	bool
+	depends on X86_64
+	default y
+
 config ACPI_SLEEP
 	bool
 	depends on SUSPEND || HIBERNATION
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index cd1abc9bc325..168e14d29d31 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -57,6 +57,7 @@ acpi-$(CONFIG_DEBUG_FS) += debugfs.o
 acpi-$(CONFIG_ACPI_NUMA)	+= numa.o
 acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o
 acpi-y				+= acpi_lpat.o
+acpi-$(CONFIG_ACPI_LPIT)	+= acpi_lpit.o
 acpi-$(CONFIG_ACPI_GENERIC_GSI) += irq.o
 acpi-$(CONFIG_ACPI_WATCHDOG)	+= acpi_watchdog.o
 
diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
new file mode 100644
index 000000000000..e94e478dd18b
--- /dev/null
+++ b/drivers/acpi/acpi_lpit.c
@@ -0,0 +1,162 @@
+
+/*
+ * acpi_lpit.c - LPIT table processing functions
+ *
+ * Copyright (C) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/acpi.h>
+#include <asm/msr.h>
+#include <asm/tsc.h>
+
+struct lpit_residency_info {
+	struct acpi_generic_address gaddr;
+	u64 frequency;
+	void __iomem *iomem_addr;
+};
+
+/* Storage for memory-mapped and FFH based entries */
+static struct lpit_residency_info residency_info_mem;
+static struct lpit_residency_info residency_info_ffh;
+
+static int lpit_read_residency_counter_us(u64 *counter, bool io_mem)
+{
+	int err;
+
+	if (io_mem) {
+		u64 count = 0;
+		int error;
+
+		error = acpi_os_read_iomem(residency_info_mem.iomem_addr, &count,
+					   residency_info_mem.gaddr.bit_width);
+		if (error)
+			return error;
+
+		*counter = div64_u64(count * 1000000ULL, residency_info_mem.frequency);
+		return 0;
+	}
+
+	err = rdmsrl_safe(residency_info_ffh.gaddr.address, counter);
+	if (!err) {
+		u64 mask = GENMASK_ULL(residency_info_ffh.gaddr.bit_offset +
+				       residency_info_ffh.gaddr.bit_width - 1,
+				       residency_info_ffh.gaddr.bit_offset);
+
+		*counter &= mask;
+		*counter >>= residency_info_ffh.gaddr.bit_offset;
+		*counter = div64_u64(*counter * 1000000ULL, residency_info_ffh.frequency);
+		return 0;
+	}
+
+	return -ENODATA;
+}
+
+static ssize_t low_power_idle_system_residency_us_show(struct device *dev,
+						       struct device_attribute *attr,
+						       char *buf)
+{
+	u64 counter;
+	int ret;
+
+	ret = lpit_read_residency_counter_us(&counter, true);
+	if (ret)
+		return ret;
+
+	return sprintf(buf, "%llu\n", counter);
+}
+static DEVICE_ATTR_RO(low_power_idle_system_residency_us);
+
+static ssize_t low_power_idle_cpu_residency_us_show(struct device *dev,
+						    struct device_attribute *attr,
+						    char *buf)
+{
+	u64 counter;
+	int ret;
+
+	ret = lpit_read_residency_counter_us(&counter, false);
+	if (ret)
+		return ret;
+
+	return sprintf(buf, "%llu\n", counter);
+}
+static DEVICE_ATTR_RO(low_power_idle_cpu_residency_us);
+
+int lpit_read_residency_count_address(u64 *address)
+{
+	if (!residency_info_mem.gaddr.address)
+		return -EINVAL;
+
+	*address = residency_info_mem.gaddr.address;
+
+	return 0;
+}
+
+static void lpit_update_residency(struct lpit_residency_info *info,
+				  struct acpi_lpit_native *lpit_native)
+{
+	info->frequency = lpit_native->counter_frequency ?
+				lpit_native->counter_frequency : tsc_khz * 1000;
+	if (!info->frequency)
+		info->frequency = 1;
+
+	info->gaddr = lpit_native->residency_counter;
+	if (info->gaddr.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+		info->iomem_addr = ioremap_nocache(info->gaddr.address,
+						   info->gaddr.bit_width / 8);
+		if (!info->iomem_addr)
+			return;
+
+		/* Silently fail, if cpuidle attribute group is not present */
+		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+					&dev_attr_low_power_idle_system_residency_us.attr,
+					"cpuidle");
+	} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
+		/* Silently fail, if cpuidle attribute group is not present */
+		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+					&dev_attr_low_power_idle_cpu_residency_us.attr,
+					"cpuidle");
+	}
+}
+
+static void lpit_process(u64 begin, u64 end)
+{
+	while (begin + sizeof(struct acpi_lpit_native) < end) {
+		struct acpi_lpit_native *lpit_native = (struct acpi_lpit_native *)begin;
+
+		if (!lpit_native->header.type && !lpit_native->header.flags) {
+			if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY &&
+			    !residency_info_mem.gaddr.address) {
+				lpit_update_residency(&residency_info_mem, lpit_native);
+			} else if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE &&
+				   !residency_info_ffh.gaddr.address) {
+				lpit_update_residency(&residency_info_ffh, lpit_native);
+			}
+		}
+		begin += lpit_native->header.length;
+	}
+}
+
+void acpi_init_lpit(void)
+{
+	acpi_status status;
+	u64 lpit_begin;
+	struct acpi_table_lpit *lpit;
+
+	status = acpi_get_table(ACPI_SIG_LPIT, 0, (struct acpi_table_header **)&lpit);
+
+	if (ACPI_FAILURE(status))
+		return;
+
+	lpit_begin = (u64)lpit + sizeof(*lpit);
+	lpit_process(lpit_begin, lpit_begin + lpit->header.length);
+}
diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
index 032ae44710e5..de7385b824e1 100644
--- a/drivers/acpi/acpi_lpss.c
+++ b/drivers/acpi/acpi_lpss.c
@@ -693,7 +693,7 @@ static int acpi_lpss_activate(struct device *dev)
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 	int ret;
 
-	ret = acpi_dev_runtime_resume(dev);
+	ret = acpi_dev_resume(dev);
 	if (ret)
 		return ret;
 
@@ -713,43 +713,9 @@ static int acpi_lpss_activate(struct device *dev)
 
 static void acpi_lpss_dismiss(struct device *dev)
 {
-	acpi_dev_runtime_suspend(dev);
+	acpi_dev_suspend(dev, false);
 }
 
-#ifdef CONFIG_PM_SLEEP
-static int acpi_lpss_suspend_late(struct device *dev)
-{
-	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
-	int ret;
-
-	ret = pm_generic_suspend_late(dev);
-	if (ret)
-		return ret;
-
-	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
-		acpi_lpss_save_ctx(dev, pdata);
-
-	return acpi_dev_suspend_late(dev);
-}
-
-static int acpi_lpss_resume_early(struct device *dev)
-{
-	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
-	int ret;
-
-	ret = acpi_dev_resume_early(dev);
-	if (ret)
-		return ret;
-
-	acpi_lpss_d3_to_d0_delay(pdata);
-
-	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
-		acpi_lpss_restore_ctx(dev, pdata);
-
-	return pm_generic_resume_early(dev);
-}
-#endif /* CONFIG_PM_SLEEP */
-
 /* IOSF SB for LPSS island */
 #define LPSS_IOSF_UNIT_LPIOEP		0xA0
 #define LPSS_IOSF_UNIT_LPIO1		0xAB
@@ -835,19 +801,15 @@ static void lpss_iosf_exit_d3_state(void)
 	mutex_unlock(&lpss_iosf_mutex);
 }
 
-static int acpi_lpss_runtime_suspend(struct device *dev)
+static int acpi_lpss_suspend(struct device *dev, bool wakeup)
 {
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 	int ret;
 
-	ret = pm_generic_runtime_suspend(dev);
-	if (ret)
-		return ret;
-
 	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 		acpi_lpss_save_ctx(dev, pdata);
 
-	ret = acpi_dev_runtime_suspend(dev);
+	ret = acpi_dev_suspend(dev, wakeup);
 
 	/*
 	 * This call must be last in the sequence, otherwise PMC will return
@@ -860,7 +822,7 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
 	return ret;
 }
 
-static int acpi_lpss_runtime_resume(struct device *dev)
+static int acpi_lpss_resume(struct device *dev)
 {
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 	int ret;
@@ -872,7 +834,7 @@ static int acpi_lpss_runtime_resume(struct device *dev)
 	if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
 		lpss_iosf_exit_d3_state();
 
-	ret = acpi_dev_runtime_resume(dev);
+	ret = acpi_dev_resume(dev);
 	if (ret)
 		return ret;
 
@@ -881,7 +843,41 @@ static int acpi_lpss_runtime_resume(struct device *dev)
 	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 		acpi_lpss_restore_ctx(dev, pdata);
 
-	return pm_generic_runtime_resume(dev);
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int acpi_lpss_suspend_late(struct device *dev)
+{
+	int ret;
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	ret = pm_generic_suspend_late(dev);
+	return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
+}
+
+static int acpi_lpss_resume_early(struct device *dev)
+{
+	int ret = acpi_lpss_resume(dev);
+
+	return ret ? ret : pm_generic_resume_early(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+static int acpi_lpss_runtime_suspend(struct device *dev)
+{
+	int ret = pm_generic_runtime_suspend(dev);
+
+	return ret ? ret : acpi_lpss_suspend(dev, true);
+}
+
+static int acpi_lpss_runtime_resume(struct device *dev)
+{
+	int ret = acpi_lpss_resume(dev);
+
+	return ret ? ret : pm_generic_runtime_resume(dev);
 }
 #endif /* CONFIG_PM */
 
@@ -894,13 +890,20 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
 #ifdef CONFIG_PM
 #ifdef CONFIG_PM_SLEEP
 	.prepare = acpi_subsys_prepare,
-	.complete = pm_complete_with_resume_check,
+	.complete = acpi_subsys_complete,
 	.suspend = acpi_subsys_suspend,
 	.suspend_late = acpi_lpss_suspend_late,
+	.suspend_noirq = acpi_subsys_suspend_noirq,
+	.resume_noirq = acpi_subsys_resume_noirq,
 	.resume_early = acpi_lpss_resume_early,
 	.freeze = acpi_subsys_freeze,
+	.freeze_late = acpi_subsys_freeze_late,
+	.freeze_noirq = acpi_subsys_freeze_noirq,
+	.thaw_noirq = acpi_subsys_thaw_noirq,
 	.poweroff = acpi_subsys_suspend,
 	.poweroff_late = acpi_lpss_suspend_late,
+	.poweroff_noirq = acpi_subsys_suspend_noirq,
+	.restore_noirq = acpi_subsys_resume_noirq,
 	.restore_early = acpi_lpss_resume_early,
 #endif
 	.runtime_suspend = acpi_lpss_runtime_suspend,
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index fbcc73f7a099..e4ffaeec9ec2 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -387,6 +387,7 @@ EXPORT_SYMBOL(acpi_bus_power_manageable);
 
 #ifdef CONFIG_PM
 static DEFINE_MUTEX(acpi_pm_notifier_lock);
+static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
 
 void acpi_pm_wakeup_event(struct device *dev)
 {
@@ -443,24 +444,25 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
 	if (!dev && !func)
 		return AE_BAD_PARAMETER;
 
-	mutex_lock(&acpi_pm_notifier_lock);
+	mutex_lock(&acpi_pm_notifier_install_lock);
 
 	if (adev->wakeup.flags.notifier_present)
 		goto out;
 
-	adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
-	adev->wakeup.context.dev = dev;
-	adev->wakeup.context.func = func;
-
 	status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
 					     acpi_pm_notify_handler, NULL);
 	if (ACPI_FAILURE(status))
 		goto out;
 
+	mutex_lock(&acpi_pm_notifier_lock);
+	adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
+	adev->wakeup.context.dev = dev;
+	adev->wakeup.context.func = func;
 	adev->wakeup.flags.notifier_present = true;
+	mutex_unlock(&acpi_pm_notifier_lock);
 
  out:
-	mutex_unlock(&acpi_pm_notifier_lock);
+	mutex_unlock(&acpi_pm_notifier_install_lock);
 	return status;
 }
 
466 468
@@ -472,7 +474,7 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
 {
 	acpi_status status = AE_BAD_PARAMETER;
 
-	mutex_lock(&acpi_pm_notifier_lock);
+	mutex_lock(&acpi_pm_notifier_install_lock);
 
 	if (!adev->wakeup.flags.notifier_present)
 		goto out;
@@ -483,14 +485,15 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
 	if (ACPI_FAILURE(status))
 		goto out;
 
+	mutex_lock(&acpi_pm_notifier_lock);
 	adev->wakeup.context.func = NULL;
 	adev->wakeup.context.dev = NULL;
 	wakeup_source_unregister(adev->wakeup.ws);
-
 	adev->wakeup.flags.notifier_present = false;
+	mutex_unlock(&acpi_pm_notifier_lock);
 
  out:
-	mutex_unlock(&acpi_pm_notifier_lock);
+	mutex_unlock(&acpi_pm_notifier_install_lock);
 	return status;
 }
 
@@ -581,8 +584,7 @@ static int acpi_dev_pm_get_state(struct device *dev, struct acpi_device *adev,
 		d_min = ret;
 		wakeup = device_may_wakeup(dev) && adev->wakeup.flags.valid
 			 && adev->wakeup.sleep_state >= target_state;
-	} else if (dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) !=
-			PM_QOS_FLAGS_NONE) {
+	} else {
 		wakeup = adev->wakeup.flags.valid;
 	}
 
@@ -848,48 +850,48 @@ static int acpi_dev_pm_full_power(struct acpi_device *adev)
 }
 
 /**
- * acpi_dev_runtime_suspend - Put device into a low-power state using ACPI.
+ * acpi_dev_suspend - Put device into a low-power state using ACPI.
  * @dev: Device to put into a low-power state.
+ * @wakeup: Whether or not to enable wakeup for the device.
  *
- * Put the given device into a runtime low-power state using the standard ACPI
+ * Put the given device into a low-power state using the standard ACPI
  * mechanism. Set up remote wakeup if desired, choose the state to put the
  * device into (this checks if remote wakeup is expected to work too), and set
  * the power state of the device.
  */
-int acpi_dev_runtime_suspend(struct device *dev)
+int acpi_dev_suspend(struct device *dev, bool wakeup)
 {
 	struct acpi_device *adev = ACPI_COMPANION(dev);
-	bool remote_wakeup;
+	u32 target_state = acpi_target_system_state();
 	int error;
 
 	if (!adev)
 		return 0;
 
-	remote_wakeup = dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) >
-				PM_QOS_FLAGS_NONE;
-	if (remote_wakeup) {
-		error = acpi_device_wakeup_enable(adev, ACPI_STATE_S0);
+	if (wakeup && acpi_device_can_wakeup(adev)) {
+		error = acpi_device_wakeup_enable(adev, target_state);
 		if (error)
 			return -EAGAIN;
+	} else {
+		wakeup = false;
 	}
 
-	error = acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
-	if (error && remote_wakeup)
+	error = acpi_dev_pm_low_power(dev, adev, target_state);
+	if (error && wakeup)
 		acpi_device_wakeup_disable(adev);
 
 	return error;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_runtime_suspend);
+EXPORT_SYMBOL_GPL(acpi_dev_suspend);
 
 /**
- * acpi_dev_runtime_resume - Put device into the full-power state using ACPI.
+ * acpi_dev_resume - Put device into the full-power state using ACPI.
  * @dev: Device to put into the full-power state.
  *
  * Put the given device into the full-power state using the standard ACPI
- * mechanism at run time. Set the power state of the device to ACPI D0 and
- * disable remote wakeup.
+ * mechanism. Set the power state of the device to ACPI D0 and disable wakeup.
  */
-int acpi_dev_runtime_resume(struct device *dev)
+int acpi_dev_resume(struct device *dev)
 {
 	struct acpi_device *adev = ACPI_COMPANION(dev);
 	int error;
@@ -901,7 +903,7 @@ int acpi_dev_runtime_resume(struct device *dev)
 	acpi_device_wakeup_disable(adev);
 	return error;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
+EXPORT_SYMBOL_GPL(acpi_dev_resume);
 
 /**
  * acpi_subsys_runtime_suspend - Suspend device using ACPI.
@@ -913,7 +915,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
 int acpi_subsys_runtime_suspend(struct device *dev)
 {
 	int ret = pm_generic_runtime_suspend(dev);
-	return ret ? ret : acpi_dev_runtime_suspend(dev);
+	return ret ? ret : acpi_dev_suspend(dev, true);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
 
@@ -926,68 +928,33 @@ EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
  */
 int acpi_subsys_runtime_resume(struct device *dev)
 {
-	int ret = acpi_dev_runtime_resume(dev);
+	int ret = acpi_dev_resume(dev);
 	return ret ? ret : pm_generic_runtime_resume(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_runtime_resume);
 
 #ifdef CONFIG_PM_SLEEP
-/**
- * acpi_dev_suspend_late - Put device into a low-power state using ACPI.
- * @dev: Device to put into a low-power state.
- *
- * Put the given device into a low-power state during system transition to a
- * sleep state using the standard ACPI mechanism. Set up system wakeup if
- * desired, choose the state to put the device into (this checks if system
- * wakeup is expected to work too), and set the power state of the device.
- */
-int acpi_dev_suspend_late(struct device *dev)
+static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
 {
-	struct acpi_device *adev = ACPI_COMPANION(dev);
-	u32 target_state;
-	bool wakeup;
-	int error;
-
-	if (!adev)
-		return 0;
-
-	target_state = acpi_target_system_state();
-	wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev);
-	if (wakeup) {
-		error = acpi_device_wakeup_enable(adev, target_state);
-		if (error)
-			return error;
-	}
+	u32 sys_target = acpi_target_system_state();
+	int ret, state;
 
-	error = acpi_dev_pm_low_power(dev, adev, target_state);
-	if (error && wakeup)
-		acpi_device_wakeup_disable(adev);
+	if (!pm_runtime_suspended(dev) || !adev ||
+	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
+		return true;
 
-	return error;
-}
-EXPORT_SYMBOL_GPL(acpi_dev_suspend_late);
+	if (sys_target == ACPI_STATE_S0)
+		return false;
 
-/**
- * acpi_dev_resume_early - Put device into the full-power state using ACPI.
- * @dev: Device to put into the full-power state.
- *
- * Put the given device into the full-power state using the standard ACPI
- * mechanism during system transition to the working state. Set the power
- * state of the device to ACPI D0 and disable remote wakeup.
- */
-int acpi_dev_resume_early(struct device *dev)
-{
-	struct acpi_device *adev = ACPI_COMPANION(dev);
-	int error;
+	if (adev->power.flags.dsw_present)
+		return true;
 
-	if (!adev)
-		return 0;
+	ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
+	if (ret)
+		return true;
 
-	error = acpi_dev_pm_full_power(adev);
-	acpi_device_wakeup_disable(adev);
-	return error;
+	return state != adev->power.state;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
 
 /**
  * acpi_subsys_prepare - Prepare device for system transition to a sleep state.
@@ -996,39 +963,53 @@ EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
 int acpi_subsys_prepare(struct device *dev)
 {
 	struct acpi_device *adev = ACPI_COMPANION(dev);
-	u32 sys_target;
-	int ret, state;
 
-	ret = pm_generic_prepare(dev);
-	if (ret < 0)
-		return ret;
-
-	if (!adev || !pm_runtime_suspended(dev)
-	    || device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
-		return 0;
+	if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) {
+		int ret = dev->driver->pm->prepare(dev);
 
-	sys_target = acpi_target_system_state();
-	if (sys_target == ACPI_STATE_S0)
-		return 1;
+		if (ret < 0)
+			return ret;
 
-	if (adev->power.flags.dsw_present)
-		return 0;
+		if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+			return 0;
+	}
 
-	ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
-	return !ret && state == adev->power.state;
+	return !acpi_dev_needs_resume(dev, adev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
 
+/**
+ * acpi_subsys_complete - Finalize device's resume during system resume.
+ * @dev: Device to handle.
+ */
+void acpi_subsys_complete(struct device *dev)
+{
+	pm_generic_complete(dev);
+	/*
+	 * If the device had been runtime-suspended before the system went into
+	 * the sleep state it is going out of and it has never been resumed till
+	 * now, resume it in case the firmware powered it up.
+	 */
+	if (dev->power.direct_complete && pm_resume_via_firmware())
+		pm_request_resume(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_complete);
+
 /**
  * acpi_subsys_suspend - Run the device driver's suspend callback.
  * @dev: Device to handle.
  *
- * Follow PCI and resume devices suspended at run time before running their
- * system suspend callbacks.
+ * Follow PCI and resume devices from runtime suspend before running their
+ * system suspend callbacks, unless the driver can cope with runtime-suspended
+ * devices during system suspend and there are no ACPI-specific reasons for
+ * resuming them.
  */
 int acpi_subsys_suspend(struct device *dev)
 {
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
+		pm_runtime_resume(dev);
+
 	return pm_generic_suspend(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
@@ -1042,12 +1023,48 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
  */
 int acpi_subsys_suspend_late(struct device *dev)
 {
-	int ret = pm_generic_suspend_late(dev);
-	return ret ? ret : acpi_dev_suspend_late(dev);
+	int ret;
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	ret = pm_generic_suspend_late(dev);
+	return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev));
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
 
 /**
+ * acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback.
+ * @dev: Device to suspend.
+ */
+int acpi_subsys_suspend_noirq(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_suspend_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
+
+/**
+ * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_resume_noirq(struct device *dev)
+{
+	/*
+	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+	 * during system suspend, so update their runtime PM status to "active"
+	 * as they will be put into D0 going forward.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
+	return pm_generic_resume_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);
+
+/**
  * acpi_subsys_resume_early - Resume device using ACPI.
  * @dev: Device to Resume.
  *
@@ -1057,7 +1074,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
  */
 int acpi_subsys_resume_early(struct device *dev)
 {
-	int ret = acpi_dev_resume_early(dev);
+	int ret = acpi_dev_resume(dev);
 	return ret ? ret : pm_generic_resume_early(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_resume_early);
@@ -1074,11 +1091,60 @@ int acpi_subsys_freeze(struct device *dev)
 	 * runtime-suspended devices should not be touched during freeze/thaw
 	 * transitions.
 	 */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+		pm_runtime_resume(dev);
+
 	return pm_generic_freeze(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
 
+/**
+ * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_freeze_late(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late);
+
+/**
+ * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_noirq(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_freeze_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq);
+
+/**
+ * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_thaw_noirq(struct device *dev)
+{
+	/*
+	 * If the device is in runtime suspend, the "thaw" code may not work
+	 * correctly with it, so skip the driver callback and make the PM core
+	 * skip all of the subsequent "thaw" callbacks for the device.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.direct_complete = true;
+		return 0;
+	}
+
+	return pm_generic_thaw_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq);
 #endif /* CONFIG_PM_SLEEP */
 
 static struct dev_pm_domain acpi_general_pm_domain = {
@@ -1087,13 +1153,20 @@ static struct dev_pm_domain acpi_general_pm_domain = {
 		.runtime_resume = acpi_subsys_runtime_resume,
 #ifdef CONFIG_PM_SLEEP
 		.prepare = acpi_subsys_prepare,
-		.complete = pm_complete_with_resume_check,
+		.complete = acpi_subsys_complete,
 		.suspend = acpi_subsys_suspend,
 		.suspend_late = acpi_subsys_suspend_late,
+		.suspend_noirq = acpi_subsys_suspend_noirq,
+		.resume_noirq = acpi_subsys_resume_noirq,
 		.resume_early = acpi_subsys_resume_early,
 		.freeze = acpi_subsys_freeze,
+		.freeze_late = acpi_subsys_freeze_late,
+		.freeze_noirq = acpi_subsys_freeze_noirq,
+		.thaw_noirq = acpi_subsys_thaw_noirq,
 		.poweroff = acpi_subsys_suspend,
 		.poweroff_late = acpi_subsys_suspend_late,
+		.poweroff_noirq = acpi_subsys_suspend_noirq,
+		.restore_noirq = acpi_subsys_resume_noirq,
 		.restore_early = acpi_subsys_resume_early,
 #endif
 	},
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index 4361c4415b4f..fc8c43e76707 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -248,4 +248,10 @@ void acpi_watchdog_init(void);
 static inline void acpi_watchdog_init(void) {}
 #endif
 
+#ifdef CONFIG_ACPI_LPIT
+void acpi_init_lpit(void);
+#else
+static inline void acpi_init_lpit(void) { }
+#endif
+
 #endif /* _ACPI_INTERNAL_H_ */
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index db78d353bab1..3bb46cb24a99 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -663,6 +663,29 @@ acpi_status acpi_os_write_port(acpi_io_address port, u32 value, u32 width)
 
 EXPORT_SYMBOL(acpi_os_write_port);
 
+int acpi_os_read_iomem(void __iomem *virt_addr, u64 *value, u32 width)
+{
+	switch (width) {
+	case 8:
+		*(u8 *) value = readb(virt_addr);
+		break;
+	case 16:
+		*(u16 *) value = readw(virt_addr);
+		break;
+	case 32:
+		*(u32 *) value = readl(virt_addr);
+		break;
+	case 64:
+		*(u64 *) value = readq(virt_addr);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 acpi_status
 acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 {
@@ -670,6 +693,7 @@ acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 	unsigned int size = width / 8;
 	bool unmap = false;
 	u64 dummy;
+	int error;
 
 	rcu_read_lock();
 	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
@@ -684,22 +708,8 @@ acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 	if (!value)
 		value = &dummy;
 
-	switch (width) {
-	case 8:
-		*(u8 *) value = readb(virt_addr);
-		break;
-	case 16:
-		*(u16 *) value = readw(virt_addr);
-		break;
-	case 32:
-		*(u32 *) value = readl(virt_addr);
-		break;
-	case 64:
-		*(u64 *) value = readq(virt_addr);
-		break;
-	default:
-		BUG();
-	}
+	error = acpi_os_read_iomem(virt_addr, value, width);
+	BUG_ON(error);
 
 	if (unmap)
 		iounmap(virt_addr);
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 602f8ff212f2..81367edc8a10 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -2122,6 +2122,7 @@ int __init acpi_scan_init(void)
 	acpi_int340x_thermal_init();
 	acpi_amba_init();
 	acpi_watchdog_init();
+	acpi_init_lpit();
 
 	acpi_scan_add_handler(&generic_device_handler);
 
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 6df7d6676a48..0739c5b953bf 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -22,14 +22,23 @@
 #include <linux/string.h>
 #include <linux/sched/topology.h>
 
-static DEFINE_MUTEX(cpu_scale_mutex);
-static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
 
-unsigned long topology_get_cpu_scale(struct sched_domain *sd, int cpu)
+void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
+			 unsigned long max_freq)
 {
-	return per_cpu(cpu_scale, cpu);
+	unsigned long scale;
+	int i;
+
+	scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
+
+	for_each_cpu(i, cpus)
+		per_cpu(freq_scale, i) = scale;
 }
 
+static DEFINE_MUTEX(cpu_scale_mutex);
+DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
 {
 	per_cpu(cpu_scale, cpu) = capacity;
@@ -212,6 +221,8 @@ static struct notifier_block init_cpu_capacity_notifier __initdata = {
 
 static int __init register_cpufreq_notifier(void)
 {
+	int ret;
+
 	/*
 	 * on ACPI-based systems we need to use the default cpu capacity
 	 * until we have the necessary code to parse the cpu capacity, so
@@ -227,8 +238,13 @@ static int __init register_cpufreq_notifier(void)
 
 	cpumask_copy(cpus_to_visit, cpu_possible_mask);
 
-	return cpufreq_register_notifier(&init_cpu_capacity_notifier,
-					 CPUFREQ_POLICY_NOTIFIER);
+	ret = cpufreq_register_notifier(&init_cpu_capacity_notifier,
+					CPUFREQ_POLICY_NOTIFIER);
+
+	if (ret)
+		free_cpumask_var(cpus_to_visit);
+
+	return ret;
 }
 core_initcall(register_cpufreq_notifier);
 
234 250
@@ -236,6 +252,7 @@ static void __init parsing_done_workfn(struct work_struct *work)
 {
 	cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
 				    CPUFREQ_POLICY_NOTIFIER);
+	free_cpumask_var(cpus_to_visit);
 }
 
 #else
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index a73ab95558f5..58a9b608d821 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -386,7 +386,8 @@ int register_cpu(struct cpu *cpu, int num)
 
 	per_cpu(cpu_sys_devices, num) = &cpu->dev;
 	register_cpu_under_node(num, cpu_to_node(num));
-	dev_pm_qos_expose_latency_limit(&cpu->dev, 0);
+	dev_pm_qos_expose_latency_limit(&cpu->dev,
+					PM_QOS_RESUME_LATENCY_NO_CONSTRAINT);
 
 	return 0;
 }
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index ad44b40fe284..45575e134696 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -464,6 +464,7 @@ pinctrl_bind_failed:
 	if (dev->pm_domain && dev->pm_domain->dismiss)
 		dev->pm_domain->dismiss(dev);
 	pm_runtime_reinit(dev);
+	dev_pm_set_driver_flags(dev, 0);
 
 	switch (ret) {
 	case -EPROBE_DEFER:
@@ -869,6 +870,7 @@ static void __device_release_driver(struct device *dev, struct device *parent)
 	if (dev->pm_domain && dev->pm_domain->dismiss)
 		dev->pm_domain->dismiss(dev);
 	pm_runtime_reinit(dev);
+	dev_pm_set_driver_flags(dev, 0);
 
 	klist_remove(&dev->p->knode_driver);
 	device_pm_check_callbacks(dev);
diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 29cd71d8b360..e1bb691cf8f1 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -2,7 +2,6 @@
 obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
 obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
-obj-$(CONFIG_PM_OPP)	+= opp/
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index e8ca5e2cf1e5..0c80bea05bcb 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -124,6 +124,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 #define genpd_status_on(genpd)		(genpd->status == GPD_STATE_ACTIVE)
 #define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd)	(genpd->flags & GENPD_FLAG_ALWAYS_ON)
+#define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
 
 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
 					const struct generic_pm_domain *genpd)
@@ -237,6 +238,95 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
 static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
 #endif
 
+/**
+ * dev_pm_genpd_set_performance_state - Set performance state of device's power
+ * domain.
+ *
+ * @dev: Device for which the performance-state needs to be set.
+ * @state: Target performance state of the device. This can be set as 0 when
+ *	   the device doesn't have any performance state constraints left (and
+ *	   so the device wouldn't participate anymore to find the target
+ *	   performance state of the genpd).
+ *
+ * It is assumed that the users guarantee that the genpd wouldn't be detached
+ * while this routine is getting called.
+ *
+ * Returns 0 on success and negative error values on failures.
+ */
+int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
+{
+	struct generic_pm_domain *genpd;
+	struct generic_pm_domain_data *gpd_data, *pd_data;
+	struct pm_domain_data *pdd;
+	unsigned int prev;
+	int ret = 0;
+
+	genpd = dev_to_genpd(dev);
+	if (IS_ERR(genpd))
+		return -ENODEV;
+
+	if (unlikely(!genpd->set_performance_state))
+		return -EINVAL;
+
+	if (unlikely(!dev->power.subsys_data ||
+		     !dev->power.subsys_data->domain_data)) {
+		WARN_ON(1);
+		return -EINVAL;
+	}
+
+	genpd_lock(genpd);
+
+	gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+	prev = gpd_data->performance_state;
+	gpd_data->performance_state = state;
+
+	/* New requested state is same as Max requested state */
+	if (state == genpd->performance_state)
+		goto unlock;
+
+	/* New requested state is higher than Max requested state */
+	if (state > genpd->performance_state)
+		goto update_state;
+
+	/* Traverse all devices within the domain */
+	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
+		pd_data = to_gpd_data(pdd);
+
+		if (pd_data->performance_state > state)
+			state = pd_data->performance_state;
+	}
+
+	if (state == genpd->performance_state)
+		goto unlock;
+
+	/*
+	 * We aren't propagating performance state changes of a subdomain to its
+	 * masters as we don't have hardware that needs it. Over that, the
+	 * performance states of subdomain and its masters may not have
+	 * one-to-one mapping and would require additional information. We can
+	 * get back to this once we have hardware that needs it. For that
+	 * reason, we don't have to consider performance state of the subdomains
+	 * of genpd here.
+	 */
+
+update_state:
+	if (genpd_status_on(genpd)) {
+		ret = genpd->set_performance_state(genpd, state);
+		if (ret) {
+			gpd_data->performance_state = prev;
+			goto unlock;
+		}
+	}
+
+	genpd->performance_state = state;
+
+unlock:
+	genpd_unlock(genpd);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_genpd_set_performance_state);
+
 static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	unsigned int state_idx = genpd->state_idx;
@@ -256,6 +346,15 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 		return ret;
 
 	elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
+
+	if (unlikely(genpd->set_performance_state)) {
+		ret = genpd->set_performance_state(genpd, genpd->performance_state);
+		if (ret) {
+			pr_warn("%s: Failed to set performance state %d (%d)\n",
+				genpd->name, genpd->performance_state, ret);
+		}
+	}
+
 	if (elapsed_ns <= genpd->states[state_idx].power_on_latency_ns)
 		return ret;
 
@@ -346,9 +445,7 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
 	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
 		enum pm_qos_flags_status stat;
 
-		stat = dev_pm_qos_flags(pdd->dev,
-					PM_QOS_FLAG_NO_POWER_OFF
-					| PM_QOS_FLAG_REMOTE_WAKEUP);
+		stat = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_NO_POWER_OFF);
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
@@ -749,11 +846,7 @@ late_initcall(genpd_power_off_unused);
 
 #if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_GENERIC_DOMAINS_OF)
 
-/**
- * pm_genpd_present - Check if the given PM domain has been initialized.
- * @genpd: PM domain to check.
- */
-static bool pm_genpd_present(const struct generic_pm_domain *genpd)
+static bool genpd_present(const struct generic_pm_domain *genpd)
 {
 	const struct generic_pm_domain *gpd;
 
@@ -771,12 +864,6 @@ static bool pm_genpd_present(const struct generic_pm_domain *genpd)
 
 #ifdef CONFIG_PM_SLEEP
 
-static bool genpd_dev_active_wakeup(const struct generic_pm_domain *genpd,
-				    struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, bool, active_wakeup, dev);
-}
-
 /**
  * genpd_sync_power_off - Synchronously power off a PM domain and its masters.
  * @genpd: PM domain to power off, if possible.
@@ -863,7 +950,7 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
  * @genpd: PM domain the device belongs to.
  *
  * There are two cases in which a device that can wake up the system from sleep
- * states should be resumed by pm_genpd_prepare(): (1) if the device is enabled
+ * states should be resumed by genpd_prepare(): (1) if the device is enabled
  * to wake up the system and it has to remain active for this purpose while the
  * system is in the sleep state and (2) if the device is not enabled to wake up
  * the system from sleep states and it generally doesn't generate wakeup signals
@@ -881,12 +968,12 @@ static bool resume_needed(struct device *dev,
 	if (!device_can_wakeup(dev))
 		return false;
 
-	active_wakeup = genpd_dev_active_wakeup(genpd, dev);
+	active_wakeup = genpd_is_active_wakeup(genpd);
 	return device_may_wakeup(dev) ? active_wakeup : !active_wakeup;
 }
 
 /**
- * pm_genpd_prepare - Start power transition of a device in a PM domain.
+ * genpd_prepare - Start power transition of a device in a PM domain.
  * @dev: Device to start the transition of.
  *
  * Start a power transition of a device (during a system-wide power transition)
@@ -894,7 +981,7 @@ static bool resume_needed(struct device *dev,
  * an object of type struct generic_pm_domain representing a PM domain
  * consisting of I/O devices.
  */
-static int pm_genpd_prepare(struct device *dev)
+static int genpd_prepare(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
 	int ret;
@@ -921,7 +1008,7 @@ static int pm_genpd_prepare(struct device *dev)
 	genpd_unlock(genpd);
 
 	ret = pm_generic_prepare(dev);
-	if (ret) {
+	if (ret < 0) {
 		genpd_lock(genpd);
 
 		genpd->prepared_count--;
@@ -929,7 +1016,8 @@ static int pm_genpd_prepare(struct device *dev)
 		genpd_unlock(genpd);
 	}
 
-	return ret;
+	/* Never return 1, as genpd doesn't cope with the direct_complete path. */
+	return ret >= 0 ? 0 : ret;
 }
 
 /**
@@ -950,7 +1038,7 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
+	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
 		return 0;
 
 	if (poweroff)
@@ -975,13 +1063,13 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
 }
 
 /**
- * pm_genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
+ * genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
  * @dev: Device to suspend.
  *
  * Stop the device and remove power from the domain if all devices in it have
  * been stopped.
  */
-static int pm_genpd_suspend_noirq(struct device *dev)
+static int genpd_suspend_noirq(struct device *dev)
 {
 	dev_dbg(dev, "%s()\n", __func__);
 
@@ -989,12 +1077,12 @@ static int pm_genpd_suspend_noirq(struct device *dev)
 }
 
 /**
- * pm_genpd_resume_noirq - Start of resume of device in an I/O PM domain.
+ * genpd_resume_noirq - Start of resume of device in an I/O PM domain.
  * @dev: Device to resume.
  *
  * Restore power to the device's PM domain, if necessary, and start the device.
  */
-static int pm_genpd_resume_noirq(struct device *dev)
+static int genpd_resume_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
 	int ret = 0;
@@ -1005,7 +1093,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
+	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
 		return 0;
 
 	genpd_lock(genpd);
@@ -1024,7 +1112,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
 }
 
 /**
- * pm_genpd_freeze_noirq - Completion of freezing a device in an I/O PM domain.
+ * genpd_freeze_noirq - Completion of freezing a device in an I/O PM domain.
  * @dev: Device to freeze.
  *
  * Carry out a late freeze of a device under the assumption that its
@@ -1032,7 +1120,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
  * struct generic_pm_domain representing a power domain consisting of I/O
  * devices.
  */
-static int pm_genpd_freeze_noirq(struct device *dev)
+static int genpd_freeze_noirq(struct device *dev)
 {
 	const struct generic_pm_domain *genpd;
 	int ret = 0;
@@ -1054,13 +1142,13 @@ static int pm_genpd_freeze_noirq(struct device *dev)
 }
 
 /**
- * pm_genpd_thaw_noirq - Early thaw of device in an I/O PM domain.
+ * genpd_thaw_noirq - Early thaw of device in an I/O PM domain.
  * @dev: Device to thaw.
  *
  * Start the device, unless power has been removed from the domain already
  * before the system transition.
  */
-static int pm_genpd_thaw_noirq(struct device *dev)
+static int genpd_thaw_noirq(struct device *dev)
 {
 	const struct generic_pm_domain *genpd;
 	int ret = 0;
@@ -1081,14 +1169,14 @@ static int pm_genpd_thaw_noirq(struct device *dev)
 }
 
 /**
- * pm_genpd_poweroff_noirq - Completion of hibernation of device in an
+ * genpd_poweroff_noirq - Completion of hibernation of device in an
  * I/O PM domain.
  * @dev: Device to poweroff.
  *
  * Stop the device and remove power from the domain if all devices in it have
  * been stopped.
  */
-static int pm_genpd_poweroff_noirq(struct device *dev)
+static int genpd_poweroff_noirq(struct device *dev)
 {
 	dev_dbg(dev, "%s()\n", __func__);
 
@@ -1096,13 +1184,13 @@ static int pm_genpd_poweroff_noirq(struct device *dev)
1096} 1184}
1097 1185
1098/** 1186/**
1099 * pm_genpd_restore_noirq - Start of restore of device in an I/O PM domain. 1187 * genpd_restore_noirq - Start of restore of device in an I/O PM domain.
1100 * @dev: Device to resume. 1188 * @dev: Device to resume.
1101 * 1189 *
1102 * Make sure the domain will be in the same power state as before the 1190 * Make sure the domain will be in the same power state as before the
1103 * hibernation the system is resuming from and start the device if necessary. 1191 * hibernation the system is resuming from and start the device if necessary.
1104 */ 1192 */
1105static int pm_genpd_restore_noirq(struct device *dev) 1193static int genpd_restore_noirq(struct device *dev)
1106{ 1194{
1107 struct generic_pm_domain *genpd; 1195 struct generic_pm_domain *genpd;
1108 int ret = 0; 1196 int ret = 0;
@@ -1139,7 +1227,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
1139} 1227}
1140 1228
1141/** 1229/**
1142 * pm_genpd_complete - Complete power transition of a device in a power domain. 1230 * genpd_complete - Complete power transition of a device in a power domain.
1143 * @dev: Device to complete the transition of. 1231 * @dev: Device to complete the transition of.
1144 * 1232 *
1145 * Complete a power transition of a device (during a system-wide power 1233 * Complete a power transition of a device (during a system-wide power
@@ -1147,7 +1235,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
1147 * domain member of an object of type struct generic_pm_domain representing 1235 * domain member of an object of type struct generic_pm_domain representing
1148 * a power domain consisting of I/O devices. 1236 * a power domain consisting of I/O devices.
1149 */ 1237 */
1150static void pm_genpd_complete(struct device *dev) 1238static void genpd_complete(struct device *dev)
1151{ 1239{
1152 struct generic_pm_domain *genpd; 1240 struct generic_pm_domain *genpd;
1153 1241
@@ -1180,7 +1268,7 @@ static void genpd_syscore_switch(struct device *dev, bool suspend)
1180 struct generic_pm_domain *genpd; 1268 struct generic_pm_domain *genpd;
1181 1269
1182 genpd = dev_to_genpd(dev); 1270 genpd = dev_to_genpd(dev);
1183 if (!pm_genpd_present(genpd)) 1271 if (!genpd_present(genpd))
1184 return; 1272 return;
1185 1273
1186 if (suspend) { 1274 if (suspend) {
@@ -1206,14 +1294,14 @@ EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
1206 1294
1207#else /* !CONFIG_PM_SLEEP */ 1295#else /* !CONFIG_PM_SLEEP */
1208 1296
1209#define pm_genpd_prepare NULL 1297#define genpd_prepare NULL
1210#define pm_genpd_suspend_noirq NULL 1298#define genpd_suspend_noirq NULL
1211#define pm_genpd_resume_noirq NULL 1299#define genpd_resume_noirq NULL
1212#define pm_genpd_freeze_noirq NULL 1300#define genpd_freeze_noirq NULL
1213#define pm_genpd_thaw_noirq NULL 1301#define genpd_thaw_noirq NULL
1214#define pm_genpd_poweroff_noirq NULL 1302#define genpd_poweroff_noirq NULL
1215#define pm_genpd_restore_noirq NULL 1303#define genpd_restore_noirq NULL
1216#define pm_genpd_complete NULL 1304#define genpd_complete NULL
1217 1305
1218#endif /* CONFIG_PM_SLEEP */ 1306#endif /* CONFIG_PM_SLEEP */
1219 1307
@@ -1239,7 +1327,7 @@ static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev,
1239 1327
1240 gpd_data->base.dev = dev; 1328 gpd_data->base.dev = dev;
1241 gpd_data->td.constraint_changed = true; 1329 gpd_data->td.constraint_changed = true;
1242 gpd_data->td.effective_constraint_ns = -1; 1330 gpd_data->td.effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
1243 gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier; 1331 gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier;
1244 1332
1245 spin_lock_irq(&dev->power.lock); 1333 spin_lock_irq(&dev->power.lock);
@@ -1574,14 +1662,14 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
1574 genpd->accounting_time = ktime_get(); 1662 genpd->accounting_time = ktime_get();
1575 genpd->domain.ops.runtime_suspend = genpd_runtime_suspend; 1663 genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
1576 genpd->domain.ops.runtime_resume = genpd_runtime_resume; 1664 genpd->domain.ops.runtime_resume = genpd_runtime_resume;
1577 genpd->domain.ops.prepare = pm_genpd_prepare; 1665 genpd->domain.ops.prepare = genpd_prepare;
1578 genpd->domain.ops.suspend_noirq = pm_genpd_suspend_noirq; 1666 genpd->domain.ops.suspend_noirq = genpd_suspend_noirq;
1579 genpd->domain.ops.resume_noirq = pm_genpd_resume_noirq; 1667 genpd->domain.ops.resume_noirq = genpd_resume_noirq;
1580 genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq; 1668 genpd->domain.ops.freeze_noirq = genpd_freeze_noirq;
1581 genpd->domain.ops.thaw_noirq = pm_genpd_thaw_noirq; 1669 genpd->domain.ops.thaw_noirq = genpd_thaw_noirq;
1582 genpd->domain.ops.poweroff_noirq = pm_genpd_poweroff_noirq; 1670 genpd->domain.ops.poweroff_noirq = genpd_poweroff_noirq;
1583 genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq; 1671 genpd->domain.ops.restore_noirq = genpd_restore_noirq;
1584 genpd->domain.ops.complete = pm_genpd_complete; 1672 genpd->domain.ops.complete = genpd_complete;
1585 1673
1586 if (genpd->flags & GENPD_FLAG_PM_CLK) { 1674 if (genpd->flags & GENPD_FLAG_PM_CLK) {
1587 genpd->dev_ops.stop = pm_clk_suspend; 1675 genpd->dev_ops.stop = pm_clk_suspend;
@@ -1795,7 +1883,7 @@ int of_genpd_add_provider_simple(struct device_node *np,
1795 1883
1796 mutex_lock(&gpd_list_lock); 1884 mutex_lock(&gpd_list_lock);
1797 1885
1798 if (pm_genpd_present(genpd)) { 1886 if (genpd_present(genpd)) {
1799 ret = genpd_add_provider(np, genpd_xlate_simple, genpd); 1887 ret = genpd_add_provider(np, genpd_xlate_simple, genpd);
1800 if (!ret) { 1888 if (!ret) {
1801 genpd->provider = &np->fwnode; 1889 genpd->provider = &np->fwnode;
@@ -1831,7 +1919,7 @@ int of_genpd_add_provider_onecell(struct device_node *np,
1831 for (i = 0; i < data->num_domains; i++) { 1919 for (i = 0; i < data->num_domains; i++) {
1832 if (!data->domains[i]) 1920 if (!data->domains[i])
1833 continue; 1921 continue;
1834 if (!pm_genpd_present(data->domains[i])) 1922 if (!genpd_present(data->domains[i]))
1835 goto error; 1923 goto error;
1836 1924
1837 data->domains[i]->provider = &np->fwnode; 1925 data->domains[i]->provider = &np->fwnode;
@@ -2274,7 +2362,7 @@ EXPORT_SYMBOL_GPL(of_genpd_parse_idle_states);
2274#include <linux/seq_file.h> 2362#include <linux/seq_file.h>
2275#include <linux/init.h> 2363#include <linux/init.h>
2276#include <linux/kobject.h> 2364#include <linux/kobject.h>
2277static struct dentry *pm_genpd_debugfs_dir; 2365static struct dentry *genpd_debugfs_dir;
2278 2366
2279/* 2367/*
2280 * TODO: This function is a slightly modified version of rtpm_status_show 2368 * TODO: This function is a slightly modified version of rtpm_status_show
@@ -2302,8 +2390,8 @@ static void rtpm_status_str(struct seq_file *s, struct device *dev)
2302 seq_puts(s, p); 2390 seq_puts(s, p);
2303} 2391}
2304 2392
2305static int pm_genpd_summary_one(struct seq_file *s, 2393static int genpd_summary_one(struct seq_file *s,
2306 struct generic_pm_domain *genpd) 2394 struct generic_pm_domain *genpd)
2307{ 2395{
2308 static const char * const status_lookup[] = { 2396 static const char * const status_lookup[] = {
2309 [GPD_STATE_ACTIVE] = "on", 2397 [GPD_STATE_ACTIVE] = "on",
@@ -2373,7 +2461,7 @@ static int genpd_summary_show(struct seq_file *s, void *data)
2373 return -ERESTARTSYS; 2461 return -ERESTARTSYS;
2374 2462
2375 list_for_each_entry(genpd, &gpd_list, gpd_list_node) { 2463 list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
2376 ret = pm_genpd_summary_one(s, genpd); 2464 ret = genpd_summary_one(s, genpd);
2377 if (ret) 2465 if (ret)
2378 break; 2466 break;
2379 } 2467 }
@@ -2559,23 +2647,23 @@ define_genpd_debugfs_fops(active_time);
2559define_genpd_debugfs_fops(total_idle_time); 2647define_genpd_debugfs_fops(total_idle_time);
2560define_genpd_debugfs_fops(devices); 2648define_genpd_debugfs_fops(devices);
2561 2649
2562static int __init pm_genpd_debug_init(void) 2650static int __init genpd_debug_init(void)
2563{ 2651{
2564 struct dentry *d; 2652 struct dentry *d;
2565 struct generic_pm_domain *genpd; 2653 struct generic_pm_domain *genpd;
2566 2654
2567 pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL); 2655 genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
2568 2656
2569 if (!pm_genpd_debugfs_dir) 2657 if (!genpd_debugfs_dir)
2570 return -ENOMEM; 2658 return -ENOMEM;
2571 2659
2572 d = debugfs_create_file("pm_genpd_summary", S_IRUGO, 2660 d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
2573 pm_genpd_debugfs_dir, NULL, &genpd_summary_fops); 2661 genpd_debugfs_dir, NULL, &genpd_summary_fops);
2574 if (!d) 2662 if (!d)
2575 return -ENOMEM; 2663 return -ENOMEM;
2576 2664
2577 list_for_each_entry(genpd, &gpd_list, gpd_list_node) { 2665 list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
2578 d = debugfs_create_dir(genpd->name, pm_genpd_debugfs_dir); 2666 d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);
2579 if (!d) 2667 if (!d)
2580 return -ENOMEM; 2668 return -ENOMEM;
2581 2669
@@ -2595,11 +2683,11 @@ static int __init pm_genpd_debug_init(void)
2595 2683
2596 return 0; 2684 return 0;
2597} 2685}
2598late_initcall(pm_genpd_debug_init); 2686late_initcall(genpd_debug_init);
2599 2687
2600static void __exit pm_genpd_debug_exit(void) 2688static void __exit genpd_debug_exit(void)
2601{ 2689{
2602 debugfs_remove_recursive(pm_genpd_debugfs_dir); 2690 debugfs_remove_recursive(genpd_debugfs_dir);
2603} 2691}
2604__exitcall(pm_genpd_debug_exit); 2692__exitcall(genpd_debug_exit);
2605#endif /* CONFIG_DEBUG_FS */ 2693#endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index 281f949c5ffe..99896fbf18e4 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -14,23 +14,29 @@
 static int dev_update_qos_constraint(struct device *dev, void *data)
 {
 	s64 *constraint_ns_p = data;
-	s32 constraint_ns = -1;
+	s64 constraint_ns;
 
-	if (dev->power.subsys_data && dev->power.subsys_data->domain_data)
+	if (dev->power.subsys_data && dev->power.subsys_data->domain_data) {
+		/*
+		 * Only take suspend-time QoS constraints of devices into
+		 * account, because constraints updated after the device has
+		 * been suspended are not guaranteed to be taken into account
+		 * anyway. In order for them to take effect, the device has to
+		 * be resumed and suspended again.
+		 */
 		constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns;
-
-	if (constraint_ns < 0) {
+	} else {
+		/*
+		 * The child is not in a domain and there's no info on its
+		 * suspend/resume latencies, so assume them to be negligible and
+		 * take its current PM QoS constraint (that's the only thing
+		 * known at this point anyway).
+		 */
 		constraint_ns = dev_pm_qos_read_value(dev);
 		constraint_ns *= NSEC_PER_USEC;
 	}
-	if (constraint_ns == 0)
-		return 0;
 
-	/*
-	 * constraint_ns cannot be negative here, because the device has been
-	 * suspended.
-	 */
-	if (constraint_ns < *constraint_ns_p || *constraint_ns_p == 0)
+	if (constraint_ns < *constraint_ns_p)
 		*constraint_ns_p = constraint_ns;
 
 	return 0;
@@ -58,12 +64,12 @@ static bool default_suspend_ok(struct device *dev)
 	}
 	td->constraint_changed = false;
 	td->cached_suspend_ok = false;
-	td->effective_constraint_ns = -1;
+	td->effective_constraint_ns = 0;
 	constraint_ns = __dev_pm_qos_read_value(dev);
 
 	spin_unlock_irqrestore(&dev->power.lock, flags);
 
-	if (constraint_ns < 0)
+	if (constraint_ns == 0)
 		return false;
 
 	constraint_ns *= NSEC_PER_USEC;
@@ -76,14 +82,32 @@ static bool default_suspend_ok(struct device *dev)
 	device_for_each_child(dev, &constraint_ns,
 			      dev_update_qos_constraint);
 
-	if (constraint_ns > 0) {
+	if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS) {
+		/* "No restriction", so the device is allowed to suspend. */
+		td->effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
+		td->cached_suspend_ok = true;
+	} else if (constraint_ns == 0) {
+		/*
+		 * This triggers if one of the children that don't belong to a
+		 * domain has a zero PM QoS constraint and it's better not to
+		 * suspend then. effective_constraint_ns is zero already and
+		 * cached_suspend_ok is false, so bail out.
+		 */
+		return false;
+	} else {
 		constraint_ns -= td->suspend_latency_ns +
 				 td->resume_latency_ns;
-		if (constraint_ns == 0)
+		/*
+		 * effective_constraint_ns is zero already and cached_suspend_ok
+		 * is false, so if the computed value is not positive, return
+		 * right away.
+		 */
+		if (constraint_ns <= 0)
 			return false;
+
+		td->effective_constraint_ns = constraint_ns;
+		td->cached_suspend_ok = true;
 	}
-	td->effective_constraint_ns = constraint_ns;
-	td->cached_suspend_ok = constraint_ns >= 0;
 
 	/*
 	 * The children have been suspended already, so we don't need to take
@@ -144,18 +168,13 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
 		 */
 		td = &to_gpd_data(pdd)->td;
 		constraint_ns = td->effective_constraint_ns;
-		/* default_suspend_ok() need not be called before us. */
-		if (constraint_ns < 0) {
-			constraint_ns = dev_pm_qos_read_value(pdd->dev);
-			constraint_ns *= NSEC_PER_USEC;
-		}
-		if (constraint_ns == 0)
-			continue;
-
 		/*
-		 * constraint_ns cannot be negative here, because the device has
-		 * been suspended.
+		 * Zero means "no suspend at all" and this runs only when all
+		 * devices in the domain are suspended, so it must be positive.
 		 */
+		if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS)
+			continue;
+
 		if (constraint_ns <= off_on_time_ns)
 			return false;
 
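The governor hunks above replace the old "negative means no constraint" convention with three explicit cases: PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS always permits suspend, zero always forbids it, and any other value must cover the device's suspend plus resume latencies. A minimal userspace sketch of that rule, not kernel code (the constant mirrors PM_QOS_RESUME_LATENCY_NO_CONSTRAINT, i.e. S32_MAX, from include/linux/pm_qos.h; everything else is a simplified stand-in):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's PM QoS constants. */
#define NSEC_PER_USEC 1000LL
#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT 0x7fffffff
#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS \
	(PM_QOS_RESUME_LATENCY_NO_CONSTRAINT * NSEC_PER_USEC)

/* Fold one child's constraint into the running minimum. */
void update_constraint(int64_t child_ns, int64_t *aggregate_ns)
{
	if (child_ns < *aggregate_ns)
		*aggregate_ns = child_ns;
}

/*
 * After aggregation: "no constraint" always allows suspend, zero always
 * forbids it, and any other value must leave room for the combined
 * suspend and resume latencies of the device.
 */
int suspend_ok(int64_t constraint_ns, int64_t suspend_lat_ns,
	       int64_t resume_lat_ns)
{
	if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS)
		return 1;
	if (constraint_ns == 0)
		return 0;
	return constraint_ns - (suspend_lat_ns + resume_lat_ns) > 0;
}
```

Because the aggregate starts at the "no constraint" value and only ever decreases, a single zero-constrained child forces the whole subtree to stay powered.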
diff --git a/drivers/base/power/generic_ops.c b/drivers/base/power/generic_ops.c
index 07c3c4a9522d..b2ed606265a8 100644
--- a/drivers/base/power/generic_ops.c
+++ b/drivers/base/power/generic_ops.c
@@ -9,7 +9,6 @@
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
 #include <linux/export.h>
-#include <linux/suspend.h>
 
 #ifdef CONFIG_PM
 /**
@@ -298,26 +297,4 @@ void pm_generic_complete(struct device *dev)
 	if (drv && drv->pm && drv->pm->complete)
 		drv->pm->complete(dev);
 }
-
-/**
- * pm_complete_with_resume_check - Complete a device power transition.
- * @dev: Device to handle.
- *
- * Complete a device power transition during a system-wide power transition and
- * optionally schedule a runtime resume of the device if the system resume in
- * progress has been initated by the platform firmware and the device had its
- * power.direct_complete flag set.
- */
-void pm_complete_with_resume_check(struct device *dev)
-{
-	pm_generic_complete(dev);
-	/*
-	 * If the device had been runtime-suspended before the system went into
-	 * the sleep state it is going out of and it has never been resumed till
-	 * now, resume it in case the firmware powered it up.
-	 */
-	if (dev->power.direct_complete && pm_resume_via_firmware())
-		pm_request_resume(dev);
-}
-EXPORT_SYMBOL_GPL(pm_complete_with_resume_check);
 #endif /* CONFIG_PM_SLEEP */
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index ae47b2ec84b4..db2f04415927 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -526,7 +526,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
 /*------------------------- Resume routines -------------------------*/
 
 /**
- * device_resume_noirq - Execute an "early resume" callback for given device.
+ * device_resume_noirq - Execute a "noirq resume" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being resumed asynchronously.
@@ -846,16 +846,10 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
 		goto Driver;
 	}
 
-	if (dev->class) {
-		if (dev->class->pm) {
-			info = "class ";
-			callback = pm_op(dev->class->pm, state);
-			goto Driver;
-		} else if (dev->class->resume) {
-			info = "legacy class ";
-			callback = dev->class->resume;
-			goto End;
-		}
+	if (dev->class && dev->class->pm) {
+		info = "class ";
+		callback = pm_op(dev->class->pm, state);
+		goto Driver;
 	}
 
 	if (dev->bus) {
@@ -1081,7 +1075,7 @@ static pm_message_t resume_event(pm_message_t sleep_state)
 }
 
 /**
- * device_suspend_noirq - Execute a "late suspend" callback for given device.
+ * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
  * @dev: Device to handle.
 * @state: PM transition of the system being carried out.
 * @async: If true, the device is being suspended asynchronously.
@@ -1241,7 +1235,7 @@ int dpm_suspend_noirq(pm_message_t state)
 }
 
 /**
- * device_suspend_late - Execute a "late suspend" callback for given device.
+ * __device_suspend_late - Execute a "late suspend" callback for given device.
  * @dev: Device to handle.
 * @state: PM transition of the system being carried out.
 * @async: If true, the device is being suspended asynchronously.
@@ -1443,7 +1437,7 @@ static void dpm_clear_suppliers_direct_complete(struct device *dev)
 }
 
 /**
- * device_suspend - Execute "suspend" callbacks for given device.
+ * __device_suspend - Execute "suspend" callbacks for given device.
  * @dev: Device to handle.
 * @state: PM transition of the system being carried out.
 * @async: If true, the device is being suspended asynchronously.
@@ -1506,17 +1500,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 		goto Run;
 	}
 
-	if (dev->class) {
-		if (dev->class->pm) {
-			info = "class ";
-			callback = pm_op(dev->class->pm, state);
-			goto Run;
-		} else if (dev->class->suspend) {
-			pm_dev_dbg(dev, state, "legacy class ");
-			error = legacy_suspend(dev, state, dev->class->suspend,
-						"legacy class ");
-			goto End;
-		}
+	if (dev->class && dev->class->pm) {
+		info = "class ";
+		callback = pm_op(dev->class->pm, state);
+		goto Run;
 	}
 
 	if (dev->bus) {
@@ -1663,6 +1650,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
 	if (dev->power.syscore)
 		return 0;
 
+	WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
+		!pm_runtime_enabled(dev));
+
 	/*
 	 * If a device's parent goes into runtime suspend at the wrong time,
 	 * it won't be possible to resume the device. To prevent this we
@@ -1711,7 +1701,9 @@ unlock:
 	 * applies to suspend transitions, however.
 	 */
 	spin_lock_irq(&dev->power.lock);
-	dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND;
+	dev->power.direct_complete = state.event == PM_EVENT_SUSPEND &&
+		pm_runtime_suspended(dev) && ret > 0 &&
+		!dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP);
 	spin_unlock_irq(&dev->power.lock);
 	return 0;
 }
@@ -1860,11 +1852,16 @@ void device_pm_check_callbacks(struct device *dev)
 	dev->power.no_pm_callbacks =
 		(!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
 		 !dev->bus->suspend && !dev->bus->resume)) &&
-		(!dev->class || (pm_ops_is_empty(dev->class->pm) &&
-		 !dev->class->suspend && !dev->class->resume)) &&
+		(!dev->class || pm_ops_is_empty(dev->class->pm)) &&
		(!dev->type || pm_ops_is_empty(dev->type->pm)) &&
 		(!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
 		(!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
 		 !dev->driver->suspend && !dev->driver->resume));
 	spin_unlock_irq(&dev->power.lock);
 }
+
+bool dev_pm_smart_suspend_and_suspended(struct device *dev)
+{
+	return dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
+		pm_runtime_status_suspended(dev);
+}
diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 277d43a83f53..3382542b39b7 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -139,6 +139,9 @@ static int apply_constraint(struct dev_pm_qos_request *req,
 
 	switch(req->type) {
 	case DEV_PM_QOS_RESUME_LATENCY:
+		if (WARN_ON(action != PM_QOS_REMOVE_REQ && value < 0))
+			value = 0;
+
 		ret = pm_qos_update_target(&qos->resume_latency,
 					   &req->data.pnode, action, value);
 		break;
@@ -189,7 +192,7 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)
 	plist_head_init(&c->list);
 	c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
 	c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
-	c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
 	c->type = PM_QOS_MIN;
 	c->notifiers = n;
 
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 41d7c2b99f69..2362b9e9701e 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -253,7 +253,7 @@ static int rpm_check_suspend_allowed(struct device *dev)
 	    || (dev->power.request_pending
 	        && dev->power.request == RPM_REQ_RESUME))
 		retval = -EAGAIN;
-	else if (__dev_pm_qos_read_value(dev) < 0)
+	else if (__dev_pm_qos_read_value(dev) == 0)
 		retval = -EPERM;
 	else if (dev->power.runtime_status == RPM_SUSPENDED)
 		retval = 1;
@@ -894,9 +894,9 @@ static void pm_runtime_work(struct work_struct *work)
  *
  * Check if the time is right and queue a suspend request.
  */
-static void pm_suspend_timer_fn(unsigned long data)
+static void pm_suspend_timer_fn(struct timer_list *t)
 {
-	struct device *dev = (struct device *)data;
+	struct device *dev = from_timer(dev, t, power.suspend_timer);
 	unsigned long flags;
 	unsigned long expires;
 
@@ -1499,8 +1499,7 @@ void pm_runtime_init(struct device *dev)
 	INIT_WORK(&dev->power.work, pm_runtime_work);
 
 	dev->power.timer_expires = 0;
-	setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn,
-			(unsigned long)dev);
+	timer_setup(&dev->power.suspend_timer, pm_suspend_timer_fn, 0);
 
 	init_waitqueue_head(&dev->power.wait_queue);
 }
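The runtime.c hunks above (and the wakeup.c hunks below) convert from the old setup_timer() interface, which passed an unsigned long cookie, to timer_setup(): the callback now receives the timer itself and recovers the enclosing object with from_timer(), a container_of() wrapper. A userspace illustration of the pattern, where struct timer_list, container_of() and from_timer() are simplified stand-ins rather than the kernel's definitions:

```c
#include <stddef.h>

/* Stand-in for the kernel's timer: only the callback pointer matters here. */
struct timer_list {
	void (*function)(struct timer_list *t);
};

/* Recover the address of the structure that embeds @ptr as @member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define from_timer(var, timer, field) \
	container_of(timer, __typeof__(*(var)), field)

struct wakeup_source {
	int expire_count;
	struct timer_list timer;
};

void pm_wakeup_timer_fn(struct timer_list *t)
{
	/* Recover the wakeup source that embeds this timer. */
	struct wakeup_source *ws = from_timer(ws, t, timer);

	ws->expire_count++;
}
```

Dropping the cast from unsigned long is what lets wakeup_source_not_registered() above shrink to a single function-pointer comparison: the timer no longer carries a separate data cookie to validate.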
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index 156ab57bca77..e153e28b1857 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -218,7 +218,14 @@ static ssize_t pm_qos_resume_latency_show(struct device *dev,
 				       struct device_attribute *attr,
 				       char *buf)
 {
-	return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
+	s32 value = dev_pm_qos_requested_resume_latency(dev);
+
+	if (value == 0)
+		return sprintf(buf, "n/a\n");
+	else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+		value = 0;
+
+	return sprintf(buf, "%d\n", value);
 }
 
 static ssize_t pm_qos_resume_latency_store(struct device *dev,
@@ -228,11 +235,21 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
 	s32 value;
 	int ret;
 
-	if (kstrtos32(buf, 0, &value))
-		return -EINVAL;
+	if (!kstrtos32(buf, 0, &value)) {
+		/*
+		 * Prevent users from writing negative or "no constraint" values
+		 * directly.
+		 */
+		if (value < 0 || value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+			return -EINVAL;
 
-	if (value < 0)
+		if (value == 0)
+			value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
+	} else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
+		value = 0;
+	} else {
 		return -EINVAL;
+	}
 
 	ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
 					value);
@@ -309,33 +326,6 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
 static DEVICE_ATTR(pm_qos_no_power_off, 0644,
 		   pm_qos_no_power_off_show, pm_qos_no_power_off_store);
 
-static ssize_t pm_qos_remote_wakeup_show(struct device *dev,
-					 struct device_attribute *attr,
-					 char *buf)
-{
-	return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
-					& PM_QOS_FLAG_REMOTE_WAKEUP));
-}
-
-static ssize_t pm_qos_remote_wakeup_store(struct device *dev,
-					  struct device_attribute *attr,
-					  const char *buf, size_t n)
-{
-	int ret;
-
-	if (kstrtoint(buf, 0, &ret))
-		return -EINVAL;
-
-	if (ret != 0 && ret != 1)
-		return -EINVAL;
-
-	ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP, ret);
-	return ret < 0 ? ret : n;
-}
-
-static DEVICE_ATTR(pm_qos_remote_wakeup, 0644,
-		   pm_qos_remote_wakeup_show, pm_qos_remote_wakeup_store);
-
 #ifdef CONFIG_PM_SLEEP
 static const char _enabled[] = "enabled";
 static const char _disabled[] = "disabled";
@@ -671,7 +661,6 @@ static const struct attribute_group pm_qos_latency_tolerance_attr_group = {
 
 static struct attribute *pm_qos_flags_attrs[] = {
 	&dev_attr_pm_qos_no_power_off.attr,
-	&dev_attr_pm_qos_remote_wakeup.attr,
 	NULL,
 };
 static const struct attribute_group pm_qos_flags_attr_group = {
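The sysfs hunks above change the user-visible encoding of the resume-latency attribute: a written "0" now means "no constraint" internally, and the new "n/a" token maps to internal zero ("no suspend at all"). A sketch of that mapping as plain userspace helpers, not the kernel implementation (the constant mirrors the kernel's PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; the helper names are invented for illustration):

```c
#include <stdio.h>

/* Stand-in for the kernel's "no constraint" sentinel (S32_MAX). */
#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT 0x7fffffff

/* Map a user-visible value to the internal constraint; -1 on rejection. */
int resume_latency_decode(int value)
{
	if (value < 0 || value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
		return -1;	/* rejected, like -EINVAL in the kernel */
	return value == 0 ? PM_QOS_RESUME_LATENCY_NO_CONSTRAINT : value;
}

/* Map the internal constraint back to the user-visible string. */
const char *resume_latency_encode(int internal, char *buf, size_t len)
{
	if (internal == 0)
		snprintf(buf, len, "n/a");
	else if (internal == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
		snprintf(buf, len, "0");
	else
		snprintf(buf, len, "%d", internal);
	return buf;
}
```

The two directions are deliberately symmetric, so reading the attribute back always shows what the user conceptually wrote, never the internal sentinel.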
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index cdd6f256da59..680ee1d36ac9 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -54,7 +54,7 @@ static unsigned int saved_count;
 
 static DEFINE_SPINLOCK(events_lock);
 
-static void pm_wakeup_timer_fn(unsigned long data);
+static void pm_wakeup_timer_fn(struct timer_list *t);
 
 static LIST_HEAD(wakeup_sources);
 
@@ -176,7 +176,7 @@ void wakeup_source_add(struct wakeup_source *ws)
 		return;
 
 	spin_lock_init(&ws->lock);
-	setup_timer(&ws->timer, pm_wakeup_timer_fn, (unsigned long)ws);
+	timer_setup(&ws->timer, pm_wakeup_timer_fn, 0);
 	ws->active = false;
 	ws->last_time = ktime_get();
 
@@ -481,8 +481,7 @@ static bool wakeup_source_not_registered(struct wakeup_source *ws)
 	 * Use timer struct to check if the given source is initialized
 	 * by wakeup_source_add.
 	 */
-	return ws->timer.function != pm_wakeup_timer_fn ||
-		   ws->timer.data != (unsigned long)ws;
+	return ws->timer.function != (TIMER_FUNC_TYPE)pm_wakeup_timer_fn;
 }
 
 /*
@@ -724,9 +723,9 @@ EXPORT_SYMBOL_GPL(pm_relax);
  * in @data if it is currently active and its timer has not been canceled and
  * the expiration time of the timer is not in future.
  */
-static void pm_wakeup_timer_fn(unsigned long data)
+static void pm_wakeup_timer_fn(struct timer_list *t)
 {
-	struct wakeup_source *ws = (struct wakeup_source *)data;
+	struct wakeup_source *ws = from_timer(ws, t, timer);
 	unsigned long flags;
 
 	spin_lock_irqsave(&ws->lock, flags);
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c
index 17504129fd77..65ec5f01aa8d 100644
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -57,7 +57,7 @@ static bool bL_switching_enabled;
 #define VIRT_FREQ(cluster, freq)    ((cluster == A7_CLUSTER) ? freq >> 1 : freq)
 
 static struct thermal_cooling_device *cdev[MAX_CLUSTERS];
-static struct cpufreq_arm_bL_ops *arm_bL_ops;
+static const struct cpufreq_arm_bL_ops *arm_bL_ops;
 static struct clk *clk[MAX_CLUSTERS];
 static struct cpufreq_frequency_table *freq_table[MAX_CLUSTERS + 1];
 static atomic_t cluster_usage[MAX_CLUSTERS + 1];
@@ -213,6 +213,7 @@ static int bL_cpufreq_set_target(struct cpufreq_policy *policy,
 {
 	u32 cpu = policy->cpu, cur_cluster, new_cluster, actual_cluster;
 	unsigned int freqs_new;
+	int ret;
 
 	cur_cluster = cpu_to_cluster(cpu);
 	new_cluster = actual_cluster = per_cpu(physical_cluster, cpu);
@@ -229,7 +230,14 @@ static int bL_cpufreq_set_target(struct cpufreq_policy *policy,
 		}
 	}
 
-	return bL_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new);
+	ret = bL_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new);
+
+	if (!ret) {
+		arch_set_freq_scale(policy->related_cpus, freqs_new,
+				    policy->cpuinfo.max_freq);
+	}
+
+	return ret;
 }
 
 static inline u32 get_table_count(struct cpufreq_frequency_table *table)
@@ -609,7 +617,7 @@ static int __bLs_register_notifier(void) { return 0; }
 static int __bLs_unregister_notifier(void) { return 0; }
 #endif
 
-int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
+int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops)
 {
 	int ret, i;
 
@@ -653,7 +661,7 @@ int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
 }
 EXPORT_SYMBOL_GPL(bL_cpufreq_register);
 
-void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops)
+void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops)
 {
 	if (arm_bL_ops != ops) {
 		pr_err("%s: Registered with: %s, can't unregister, exiting\n",
diff --git a/drivers/cpufreq/arm_big_little.h b/drivers/cpufreq/arm_big_little.h
index 184d7c3a112a..88a176e466c8 100644
--- a/drivers/cpufreq/arm_big_little.h
+++ b/drivers/cpufreq/arm_big_little.h
@@ -37,7 +37,7 @@ struct cpufreq_arm_bL_ops {
 	void (*free_opp_table)(const struct cpumask *cpumask);
 };
 
-int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);
-void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops);
+int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops);
+void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops);
 
 #endif /* CPUFREQ_ARM_BIG_LITTLE_H */
diff --git a/drivers/cpufreq/arm_big_little_dt.c b/drivers/cpufreq/arm_big_little_dt.c
index 39b3f51d9a30..b944f290c8a4 100644
--- a/drivers/cpufreq/arm_big_little_dt.c
+++ b/drivers/cpufreq/arm_big_little_dt.c
@@ -61,7 +61,7 @@ static int dt_get_transition_latency(struct device *cpu_dev)
 	return transition_latency;
 }
 
-static struct cpufreq_arm_bL_ops dt_bL_ops = {
+static const struct cpufreq_arm_bL_ops dt_bL_ops = {
 	.name	= "dt-bl",
 	.get_transition_latency = dt_get_transition_latency,
 	.init_opp_table = dev_pm_opp_of_cpumask_add_table,
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index a753c50e9e41..ecc56e26f8f6 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -48,7 +48,6 @@ static const struct of_device_id whitelist[] __initconst = {
 
 	{ .compatible = "samsung,exynos3250", },
 	{ .compatible = "samsung,exynos4210", },
-	{ .compatible = "samsung,exynos4212", },
 	{ .compatible = "samsung,exynos5250", },
 #ifndef CONFIG_BL_SWITCHER
 	{ .compatible = "samsung,exynos5800", },
@@ -83,8 +82,6 @@ static const struct of_device_id whitelist[] __initconst = {
 	{ .compatible = "rockchip,rk3368", },
 	{ .compatible = "rockchip,rk3399", },
 
-	{ .compatible = "socionext,uniphier-ld6b", },
-
 	{ .compatible = "st-ericsson,u8500", },
 	{ .compatible = "st-ericsson,u8540", },
 	{ .compatible = "st-ericsson,u9500", },
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index d83ab94d041a..545946ad0752 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -43,9 +43,17 @@ static struct freq_attr *cpufreq_dt_attr[] = {
 static int set_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct private_data *priv = policy->driver_data;
+	unsigned long freq = policy->freq_table[index].frequency;
+	int ret;
+
+	ret = dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000);
 
-	return dev_pm_opp_set_rate(priv->cpu_dev,
-				   policy->freq_table[index].frequency * 1000);
+	if (!ret) {
+		arch_set_freq_scale(policy->related_cpus, freq,
+				    policy->cpuinfo.max_freq);
+	}
+
+	return ret;
 }
 
 /*
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index ea43b147a7fe..41d148af7748 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -161,6 +161,12 @@ u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
 }
 EXPORT_SYMBOL_GPL(get_cpu_idle_time);
 
+__weak void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
+				unsigned long max_freq)
+{
+}
+EXPORT_SYMBOL_GPL(arch_set_freq_scale);
+
 /*
  * This is a generic cpufreq init() routine which can be used by cpufreq
  * drivers of SMP systems. It will do following:
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index e75880eb037d..1e55b5790853 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -118,8 +118,11 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
 			break;
 		len += snprintf(buf + len, PAGE_SIZE - len, "\n");
 	}
-	if (len >= PAGE_SIZE)
-		return PAGE_SIZE;
+
+	if (len >= PAGE_SIZE) {
+		pr_warn_once("cpufreq transition table exceeds PAGE_SIZE. Disabling\n");
+		return -EFBIG;
+	}
 	return len;
 }
 cpufreq_freq_attr_ro(trans_table);
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
index 14466a9b01c0..628fe899cb48 100644
--- a/drivers/cpufreq/imx6q-cpufreq.c
+++ b/drivers/cpufreq/imx6q-cpufreq.c
@@ -12,6 +12,7 @@
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/of_address.h>
 #include <linux/pm_opp.h>
 #include <linux/platform_device.h>
 #include <linux/regulator/consumer.h>
@@ -191,6 +192,57 @@ static struct cpufreq_driver imx6q_cpufreq_driver = {
 	.suspend = cpufreq_generic_suspend,
 };
 
+#define OCOTP_CFG3			0x440
+#define OCOTP_CFG3_SPEED_SHIFT		16
+#define OCOTP_CFG3_SPEED_1P2GHZ		0x3
+#define OCOTP_CFG3_SPEED_996MHZ		0x2
+#define OCOTP_CFG3_SPEED_852MHZ		0x1
+
+static void imx6q_opp_check_speed_grading(struct device *dev)
+{
+	struct device_node *np;
+	void __iomem *base;
+	u32 val;
+
+	np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp");
+	if (!np)
+		return;
+
+	base = of_iomap(np, 0);
+	if (!base) {
+		dev_err(dev, "failed to map ocotp\n");
+		goto put_node;
+	}
+
+	/*
+	 * SPEED_GRADING[1:0] defines the max speed of ARM:
+	 * 2b'11: 1200000000Hz;
+	 * 2b'10: 996000000Hz;
+	 * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz.
+	 * 2b'00: 792000000Hz;
+	 * We need to set the max speed of ARM according to fuse map.
+	 */
+	val = readl_relaxed(base + OCOTP_CFG3);
+	val >>= OCOTP_CFG3_SPEED_SHIFT;
+	val &= 0x3;
+
+	if ((val != OCOTP_CFG3_SPEED_1P2GHZ) &&
+	     of_machine_is_compatible("fsl,imx6q"))
+		if (dev_pm_opp_disable(dev, 1200000000))
+			dev_warn(dev, "failed to disable 1.2GHz OPP\n");
+	if (val < OCOTP_CFG3_SPEED_996MHZ)
+		if (dev_pm_opp_disable(dev, 996000000))
+			dev_warn(dev, "failed to disable 996MHz OPP\n");
+	if (of_machine_is_compatible("fsl,imx6q")) {
+		if (val != OCOTP_CFG3_SPEED_852MHZ)
+			if (dev_pm_opp_disable(dev, 852000000))
+				dev_warn(dev, "failed to disable 852MHz OPP\n");
+	}
+	iounmap(base);
+put_node:
+	of_node_put(np);
+}
+
 static int imx6q_cpufreq_probe(struct platform_device *pdev)
 {
 	struct device_node *np;
@@ -252,28 +304,21 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 		goto put_reg;
 	}
 
-	/*
-	 * We expect an OPP table supplied by platform.
-	 * Just, incase the platform did not supply the OPP
-	 * table, it will try to get it.
-	 */
-	num = dev_pm_opp_get_opp_count(cpu_dev);
-	if (num < 0) {
-		ret = dev_pm_opp_of_add_table(cpu_dev);
-		if (ret < 0) {
-			dev_err(cpu_dev, "failed to init OPP table: %d\n", ret);
-			goto put_reg;
-		}
+	ret = dev_pm_opp_of_add_table(cpu_dev);
+	if (ret < 0) {
+		dev_err(cpu_dev, "failed to init OPP table: %d\n", ret);
+		goto put_reg;
+	}
 
-		/* Because we have added the OPPs here, we must free them */
-		free_opp = true;
+	imx6q_opp_check_speed_grading(cpu_dev);
 
-		num = dev_pm_opp_get_opp_count(cpu_dev);
-		if (num < 0) {
-			ret = num;
-			dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
-			goto out_free_opp;
-		}
+	/* Because we have added the OPPs here, we must free them */
+	free_opp = true;
+	num = dev_pm_opp_get_opp_count(cpu_dev);
+	if (num < 0) {
+		ret = num;
+		dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
+		goto out_free_opp;
 	}
 
 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
index 062d71434e47..b01e31db5f83 100644
--- a/drivers/cpufreq/powernow-k8.c
+++ b/drivers/cpufreq/powernow-k8.c
@@ -1043,7 +1043,7 @@ static int powernowk8_cpu_init(struct cpufreq_policy *pol)
 
 	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data) {
-		pr_err("unable to alloc powernow_k8_data");
+		pr_err("unable to alloc powernow_k8_data\n");
 		return -ENOMEM;
 	}
 
diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
index ce345bf34d5d..06b024a3e474 100644
--- a/drivers/cpufreq/pxa2xx-cpufreq.c
+++ b/drivers/cpufreq/pxa2xx-cpufreq.c
@@ -58,56 +58,40 @@ module_param(pxa27x_maxfreq, uint, 0);
 MODULE_PARM_DESC(pxa27x_maxfreq, "Set the pxa27x maxfreq in MHz"
 		 "(typically 624=>pxa270, 416=>pxa271, 520=>pxa272)");
 
+struct pxa_cpufreq_data {
+	struct clk *clk_core;
+};
+static struct pxa_cpufreq_data pxa_cpufreq_data;
+
 struct pxa_freqs {
 	unsigned int khz;
-	unsigned int membus;
-	unsigned int cccr;
-	unsigned int div2;
-	unsigned int cclkcfg;
 	int vmin;
 	int vmax;
 };
 
-/* Define the refresh period in mSec for the SDRAM and the number of rows */
-#define SDRAM_TREF	64	/* standard 64ms SDRAM */
-static unsigned int sdram_rows;
-
-#define CCLKCFG_TURBO		0x1
-#define CCLKCFG_FCS		0x2
-#define CCLKCFG_HALFTURBO	0x4
-#define CCLKCFG_FASTBUS		0x8
-#define MDREFR_DB2_MASK		(MDREFR_K2DB2 | MDREFR_K1DB2)
-#define MDREFR_DRI_MASK		0xFFF
-
-#define MDCNFG_DRAC2(mdcnfg)	(((mdcnfg) >> 21) & 0x3)
-#define MDCNFG_DRAC0(mdcnfg)	(((mdcnfg) >> 5) & 0x3)
-
 /*
  * PXA255 definitions
  */
-/* Use the run mode frequencies for the CPUFREQ_POLICY_PERFORMANCE policy */
-#define CCLKCFG			CCLKCFG_TURBO | CCLKCFG_FCS
-
 static const struct pxa_freqs pxa255_run_freqs[] =
 {
-	/* CPU   MEMBUS  CCCR  DIV2 CCLKCFG	   run  turbo PXbus SDRAM */
-	{ 99500,  99500, 0x121, 1,  CCLKCFG, -1, -1},	/*  99,   99,   50,   50  */
-	{132700, 132700, 0x123, 1,  CCLKCFG, -1, -1},	/* 133,  133,   66,   66  */
-	{199100,  99500, 0x141, 0,  CCLKCFG, -1, -1},	/* 199,  199,   99,   99  */
-	{265400, 132700, 0x143, 1,  CCLKCFG, -1, -1},	/* 265,  265,  133,   66  */
-	{331800, 165900, 0x145, 1,  CCLKCFG, -1, -1},	/* 331,  331,  166,   83  */
-	{398100,  99500, 0x161, 0,  CCLKCFG, -1, -1},	/* 398,  398,  196,   99  */
+	/* CPU   MEMBUS		   run  turbo PXbus SDRAM */
+	{ 99500, -1, -1},	/*  99,   99,   50,   50  */
+	{132700, -1, -1},	/* 133,  133,   66,   66  */
+	{199100, -1, -1},	/* 199,  199,   99,   99  */
+	{265400, -1, -1},	/* 265,  265,  133,   66  */
+	{331800, -1, -1},	/* 331,  331,  166,   83  */
+	{398100, -1, -1},	/* 398,  398,  196,   99  */
 };
 
 /* Use the turbo mode frequencies for the CPUFREQ_POLICY_POWERSAVE policy */
 static const struct pxa_freqs pxa255_turbo_freqs[] =
 {
-	/* CPU   MEMBUS  CCCR  DIV2 CCLKCFG	   run  turbo PXbus SDRAM */
-	{ 99500, 99500,  0x121, 1,  CCLKCFG, -1, -1},	/*  99,   99,   50,   50  */
-	{199100, 99500,  0x221, 0,  CCLKCFG, -1, -1},	/*  99,  199,   50,   99  */
-	{298500, 99500,  0x321, 0,  CCLKCFG, -1, -1},	/*  99,  287,   50,   99  */
-	{298600, 99500,  0x1c1, 0,  CCLKCFG, -1, -1},	/* 199,  287,   99,   99  */
-	{398100, 99500,  0x241, 0,  CCLKCFG, -1, -1},	/* 199,  398,   99,   99  */
+	/* CPU			   run  turbo PXbus SDRAM */
+	{ 99500, -1, -1},	/*  99,   99,   50,   50  */
+	{199100, -1, -1},	/*  99,  199,   50,   99  */
+	{298500, -1, -1},	/*  99,  287,   50,   99  */
+	{298600, -1, -1},	/* 199,  287,   99,   99  */
+	{398100, -1, -1},	/* 199,  398,   99,   99  */
 };
 
 #define NUM_PXA25x_RUN_FREQS ARRAY_SIZE(pxa255_run_freqs)
@@ -122,47 +106,14 @@ static unsigned int pxa255_turbo_table;
 module_param(pxa255_turbo_table, uint, 0);
 MODULE_PARM_DESC(pxa255_turbo_table, "Selects the frequency table (0 = run table, !0 = turbo table)");
 
-/*
- * PXA270 definitions
- *
- * For the PXA27x:
- * Control variables are A, L, 2N for CCCR; B, HT, T for CLKCFG.
- *
- * A = 0 => memory controller clock from table 3-7,
- * A = 1 => memory controller clock = system bus clock
- * Run mode frequency	= 13 MHz * L
- * Turbo mode frequency = 13 MHz * L * N
- * System bus frequency = 13 MHz * L / (B + 1)
- *
- * In CCCR:
- * A = 1
- * L = 16	  oscillator to run mode ratio
- * 2N = 6	  2 * (turbo mode to run mode ratio)
- *
- * In CCLKCFG:
- * B = 1	  Fast bus mode
- * HT = 0	  Half-Turbo mode
- * T = 1	  Turbo mode
- *
- * For now, just support some of the combinations in table 3-7 of
- * PXA27x Processor Family Developer's Manual to simplify frequency
- * change sequences.
- */
-#define PXA27x_CCCR(A, L, N2) (A << 25 | N2 << 7 | L)
-#define CCLKCFG2(B, HT, T) \
-  (CCLKCFG_FCS | \
-   ((B)  ? CCLKCFG_FASTBUS  : 0) | \
-   ((HT) ? CCLKCFG_HALFTURBO : 0) | \
-   ((T)  ? CCLKCFG_TURBO : 0))
-
 static struct pxa_freqs pxa27x_freqs[] = {
-	{104000, 104000, PXA27x_CCCR(1,	 8, 2), 0, CCLKCFG2(1, 0, 1),  900000, 1705000 },
-	{156000, 104000, PXA27x_CCCR(1,	 8, 3), 0, CCLKCFG2(1, 0, 1), 1000000, 1705000 },
-	{208000, 208000, PXA27x_CCCR(0, 16, 2), 1, CCLKCFG2(0, 0, 1), 1180000, 1705000 },
-	{312000, 208000, PXA27x_CCCR(1, 16, 3), 1, CCLKCFG2(1, 0, 1), 1250000, 1705000 },
-	{416000, 208000, PXA27x_CCCR(1, 16, 4), 1, CCLKCFG2(1, 0, 1), 1350000, 1705000 },
-	{520000, 208000, PXA27x_CCCR(1, 16, 5), 1, CCLKCFG2(1, 0, 1), 1450000, 1705000 },
-	{624000, 208000, PXA27x_CCCR(1, 16, 6), 1, CCLKCFG2(1, 0, 1), 1550000, 1705000 }
+	{104000,  900000, 1705000 },
+	{156000, 1000000, 1705000 },
+	{208000, 1180000, 1705000 },
+	{312000, 1250000, 1705000 },
+	{416000, 1350000, 1705000 },
+	{520000, 1450000, 1705000 },
+	{624000, 1550000, 1705000 }
 };
 
 #define NUM_PXA27x_FREQS ARRAY_SIZE(pxa27x_freqs)
@@ -241,51 +192,29 @@ static void pxa27x_guess_max_freq(void)
 	}
 }
 
-static void init_sdram_rows(void)
-{
-	uint32_t mdcnfg = __raw_readl(MDCNFG);
-	unsigned int drac2 = 0, drac0 = 0;
-
-	if (mdcnfg & (MDCNFG_DE2 | MDCNFG_DE3))
-		drac2 = MDCNFG_DRAC2(mdcnfg);
-
-	if (mdcnfg & (MDCNFG_DE0 | MDCNFG_DE1))
-		drac0 = MDCNFG_DRAC0(mdcnfg);
-
-	sdram_rows = 1 << (11 + max(drac0, drac2));
-}
-
-static u32 mdrefr_dri(unsigned int freq)
-{
-	u32 interval = freq * SDRAM_TREF / sdram_rows;
-
-	return (interval - (cpu_is_pxa27x() ? 31 : 0)) / 32;
-}
-
 static unsigned int pxa_cpufreq_get(unsigned int cpu)
 {
-	return get_clk_frequency_khz(0);
+	struct pxa_cpufreq_data *data = cpufreq_get_driver_data();
+
+	return (unsigned int) clk_get_rate(data->clk_core) / 1000;
 }
 
 static int pxa_set_target(struct cpufreq_policy *policy, unsigned int idx)
 {
 	struct cpufreq_frequency_table *pxa_freqs_table;
 	const struct pxa_freqs *pxa_freq_settings;
-	unsigned long flags;
-	unsigned int new_freq_cpu, new_freq_mem;
-	unsigned int unused, preset_mdrefr, postset_mdrefr, cclkcfg;
+	struct pxa_cpufreq_data *data = cpufreq_get_driver_data();
+	unsigned int new_freq_cpu;
 	int ret = 0;
 
 	/* Get the current policy */
 	find_freq_tables(&pxa_freqs_table, &pxa_freq_settings);
 
 	new_freq_cpu = pxa_freq_settings[idx].khz;
-	new_freq_mem = pxa_freq_settings[idx].membus;
 
 	if (freq_debug)
-		pr_debug("Changing CPU frequency to %d Mhz, (SDRAM %d Mhz)\n",
-			 new_freq_cpu / 1000, (pxa_freq_settings[idx].div2) ?
-			 (new_freq_mem / 2000) : (new_freq_mem / 1000));
+		pr_debug("Changing CPU frequency from %d Mhz to %d Mhz\n",
+			 policy->cur / 1000, new_freq_cpu / 1000);
 
 	if (vcc_core && new_freq_cpu > policy->cur) {
 		ret = pxa_cpufreq_change_voltage(&pxa_freq_settings[idx]);
@@ -293,53 +222,7 @@ static int pxa_set_target(struct cpufreq_policy *policy, unsigned int idx)
 			return ret;
 	}
 
-	/* Calculate the next MDREFR.  If we're slowing down the SDRAM clock
-	 * we need to preset the smaller DRI before the change.	 If we're
-	 * speeding up we need to set the larger DRI value after the change.
-	 */
-	preset_mdrefr = postset_mdrefr = __raw_readl(MDREFR);
-	if ((preset_mdrefr & MDREFR_DRI_MASK) > mdrefr_dri(new_freq_mem)) {
-		preset_mdrefr = (preset_mdrefr & ~MDREFR_DRI_MASK);
-		preset_mdrefr |= mdrefr_dri(new_freq_mem);
-	}
-	postset_mdrefr =
-		(postset_mdrefr & ~MDREFR_DRI_MASK) | mdrefr_dri(new_freq_mem);
-
-	/* If we're dividing the memory clock by two for the SDRAM clock, this
-	 * must be set prior to the change.  Clearing the divide must be done
-	 * after the change.
-	 */
-	if (pxa_freq_settings[idx].div2) {
-		preset_mdrefr  |= MDREFR_DB2_MASK;
-		postset_mdrefr |= MDREFR_DB2_MASK;
-	} else {
-		postset_mdrefr &= ~MDREFR_DB2_MASK;
-	}
-
-	local_irq_save(flags);
-
-	/* Set new the CCCR and prepare CCLKCFG */
-	writel(pxa_freq_settings[idx].cccr, CCCR);
-	cclkcfg = pxa_freq_settings[idx].cclkcfg;
-
-	asm volatile("							\n\
-		ldr	r4, [%1]		/* load MDREFR */	\n\
-		b	2f						\n\
-		.align	5						\n\
-1:									\n\
-		str	%3, [%1]		/* preset the MDREFR */	\n\
-		mcr	p14, 0, %2, c6, c0, 0	/* set CCLKCFG[FCS] */	\n\
-		str	%4, [%1]		/* postset the MDREFR */ \n\
-									\n\
-		b	3f						\n\
-2:		b	1b						\n\
-3:		nop							\n\
-	  "
-		     : "=&r" (unused)
-		     : "r" (MDREFR), "r" (cclkcfg),
-		       "r" (preset_mdrefr), "r" (postset_mdrefr)
-		     : "r4", "r5");
-	local_irq_restore(flags);
+	clk_set_rate(data->clk_core, new_freq_cpu * 1000);
 
 	/*
 	 * Even if voltage setting fails, we don't report it, as the frequency
@@ -369,8 +252,6 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
 
 	pxa_cpufreq_init_voltages();
 
-	init_sdram_rows();
-
 	/* set default policy and cpuinfo */
 	policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */
 
@@ -429,11 +310,17 @@ static struct cpufreq_driver pxa_cpufreq_driver = {
 	.init	= pxa_cpufreq_init,
 	.get	= pxa_cpufreq_get,
 	.name	= "PXA2xx",
+	.driver_data = &pxa_cpufreq_data,
 };
 
 static int __init pxa_cpu_init(void)
 {
 	int ret = -ENODEV;
+
+	pxa_cpufreq_data.clk_core = clk_get_sys(NULL, "core");
+	if (IS_ERR(pxa_cpufreq_data.clk_core))
+		return PTR_ERR(pxa_cpufreq_data.clk_core);
+
 	if (cpu_is_pxa25x() || cpu_is_pxa27x())
 		ret = cpufreq_register_driver(&pxa_cpufreq_driver);
 	return ret;
diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
index 8de2364b5995..05d299052c5c 100644
--- a/drivers/cpufreq/scpi-cpufreq.c
+++ b/drivers/cpufreq/scpi-cpufreq.c
@@ -53,7 +53,7 @@ static int scpi_init_opp_table(const struct cpumask *cpumask)
 	return ret;
 }
 
-static struct cpufreq_arm_bL_ops scpi_cpufreq_ops = {
+static const struct cpufreq_arm_bL_ops scpi_cpufreq_ops = {
 	.name	= "scpi",
 	.get_transition_latency = scpi_get_transition_latency,
 	.init_opp_table = scpi_init_opp_table,
diff --git a/drivers/cpufreq/spear-cpufreq.c b/drivers/cpufreq/spear-cpufreq.c
index 4894924a3ca2..195f27f9c1cb 100644
--- a/drivers/cpufreq/spear-cpufreq.c
+++ b/drivers/cpufreq/spear-cpufreq.c
@@ -177,7 +177,7 @@ static int spear_cpufreq_probe(struct platform_device *pdev)
 
 	np = of_cpu_device_node_get(0);
 	if (!np) {
-		pr_err("No cpu node found");
+		pr_err("No cpu node found\n");
 		return -ENODEV;
 	}
 
@@ -187,7 +187,7 @@ static int spear_cpufreq_probe(struct platform_device *pdev)
 
 	prop = of_find_property(np, "cpufreq_tbl", NULL);
 	if (!prop || !prop->value) {
-		pr_err("Invalid cpufreq_tbl");
+		pr_err("Invalid cpufreq_tbl\n");
 		ret = -ENODEV;
 		goto out_put_node;
 	}
diff --git a/drivers/cpufreq/speedstep-lib.c b/drivers/cpufreq/speedstep-lib.c
index ccab452a4ef5..8085ec9000d1 100644
--- a/drivers/cpufreq/speedstep-lib.c
+++ b/drivers/cpufreq/speedstep-lib.c
@@ -367,7 +367,7 @@ unsigned int speedstep_detect_processor(void)
 		} else
 			return SPEEDSTEP_CPU_PIII_C;
 	}
-
+	/* fall through */
 	default:
 		return 0;
 	}
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
index 4bf47de6101f..923317f03b4b 100644
--- a/drivers/cpufreq/ti-cpufreq.c
+++ b/drivers/cpufreq/ti-cpufreq.c
@@ -205,6 +205,7 @@ static int ti_cpufreq_init(void)
 
 	np = of_find_node_by_path("/");
 	match = of_match_node(ti_cpufreq_of_match, np);
+	of_node_put(np);
 	if (!match)
 		return -ENODEV;
 
@@ -217,7 +218,8 @@ static int ti_cpufreq_init(void)
 	opp_data->cpu_dev = get_cpu_device(0);
 	if (!opp_data->cpu_dev) {
 		pr_err("%s: Failed to get device for CPU0\n", __func__);
-		return -ENODEV;
+		ret = ENODEV;
+		goto free_opp_data;
 	}
 
 	opp_data->opp_node = dev_pm_opp_of_get_opp_desc_node(opp_data->cpu_dev);
@@ -262,6 +264,8 @@ register_cpufreq_dt:
 
 fail_put_node:
 	of_node_put(opp_data->opp_node);
+free_opp_data:
+	kfree(opp_data);
 
 	return ret;
 }
diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
index 87e5bdc5ec74..53237289e606 100644
--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -42,7 +42,7 @@ static int ve_spc_get_transition_latency(struct device *cpu_dev)
 	return 1000000; /* 1 ms */
 }
 
-static struct cpufreq_arm_bL_ops ve_spc_cpufreq_ops = {
+static const struct cpufreq_arm_bL_ops ve_spc_cpufreq_ops = {
 	.name	= "vexpress-spc",
 	.get_transition_latency = ve_spc_get_transition_latency,
 	.init_opp_table = ve_spc_init_opp_table,
diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
index 52a75053ee03..ddee1b601b89 100644
--- a/drivers/cpuidle/cpuidle-arm.c
+++ b/drivers/cpuidle/cpuidle-arm.c
@@ -72,12 +72,94 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
 };
 
 /*
- * arm_idle_init
+ * arm_idle_init_cpu
  *
  * Registers the arm specific cpuidle driver with the cpuidle
  * framework. It relies on core code to parse the idle states
  * and initialize them using driver data structures accordingly.
  */
+static int __init arm_idle_init_cpu(int cpu)
+{
+	int ret;
+	struct cpuidle_driver *drv;
+	struct cpuidle_device *dev;
+
+	drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);
+	if (!drv)
+		return -ENOMEM;
+
+	drv->cpumask = (struct cpumask *)cpumask_of(cpu);
+
+	/*
+	 * Initialize idle states data, starting at index 1. This
+	 * driver is DT only, if no DT idle states are detected (ret
+	 * == 0) let the driver initialization fail accordingly since
+	 * there is no reason to initialize the idle driver if only
+	 * wfi is supported.
+	 */
+	ret = dt_init_idle_driver(drv, arm_idle_state_match, 1);
+	if (ret <= 0) {
+		ret = ret ? : -ENODEV;
+		goto out_kfree_drv;
+	}
+
+	ret = cpuidle_register_driver(drv);
+	if (ret) {
+		pr_err("Failed to register cpuidle driver\n");
+		goto out_kfree_drv;
+	}
+
+	/*
+	 * Call arch CPU operations in order to initialize
+	 * idle states suspend back-end specific data
+	 */
+	ret = arm_cpuidle_init(cpu);
+
+	/*
+	 * Skip the cpuidle device initialization if the reported
+	 * failure is a HW misconfiguration/breakage (-ENXIO).
+	 */
+	if (ret == -ENXIO)
+		return 0;
+
+	if (ret) {
+		pr_err("CPU %d failed to init idle CPU ops\n", cpu);
+		goto out_unregister_drv;
+	}
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev) {
+		pr_err("Failed to allocate cpuidle device\n");
+		ret = -ENOMEM;
+		goto out_unregister_drv;
+	}
+	dev->cpu = cpu;
+
+	ret = cpuidle_register_device(dev);
+	if (ret) {
+		pr_err("Failed to register cpuidle device for CPU %d\n",
+		       cpu);
+		goto out_kfree_dev;
+	}
+
+	return 0;
+
+out_kfree_dev:
+	kfree(dev);
+out_unregister_drv:
+	cpuidle_unregister_driver(drv);
+out_kfree_drv:
+	kfree(drv);
+	return ret;
+}
+
+/*
+ * arm_idle_init - Initializes arm cpuidle driver
+ *
+ * Initializes arm cpuidle driver for all CPUs, if any CPU fails
+ * to register cpuidle driver then rollback to cancel all CPUs
+ * registeration.
+ */
 static int __init arm_idle_init(void)
 {
 	int cpu, ret;
@@ -85,79 +167,20 @@ static int __init arm_idle_init(void)
 	struct cpuidle_device *dev;
 
 	for_each_possible_cpu(cpu) {
-
-		drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);
-		if (!drv) {
-			ret = -ENOMEM;
-			goto out_fail;
-		}
-
-		drv->cpumask = (struct cpumask *)cpumask_of(cpu);
-
-		/*
-		 * Initialize idle states data, starting at index 1. This
-		 * driver is DT only, if no DT idle states are detected (ret
-		 * == 0) let the driver initialization fail accordingly since
-		 * there is no reason to initialize the idle driver if only
-		 * wfi is supported.
-		 */
-		ret = dt_init_idle_driver(drv, arm_idle_state_match, 1);
-		if (ret <= 0) {
-			ret = ret ? : -ENODEV;
-			goto init_fail;
-		}
-
-		ret = cpuidle_register_driver(drv);
-		if (ret) {
-			pr_err("Failed to register cpuidle driver\n");
-			goto init_fail;
-		}
-
-		/*
-		 * Call arch CPU operations in order to initialize
-		 * idle states suspend back-end specific data
-		 */
-		ret = arm_cpuidle_init(cpu);
-
-		/*
-		 * Skip the cpuidle device initialization if the reported
-		 * failure is a HW misconfiguration/breakage (-ENXIO).
-		 */
-		if (ret == -ENXIO)
-			continue;
-
-		if (ret) {
-			pr_err("CPU %d failed to init idle CPU ops\n", cpu);
-			goto out_fail;
-		}
-
-		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
-		if (!dev) {
-			pr_err("Failed to allocate cpuidle device\n");
-			ret = -ENOMEM;
+		ret = arm_idle_init_cpu(cpu);
+		if (ret)
 			goto out_fail;
-		}
-		dev->cpu = cpu;
-
-		ret = cpuidle_register_device(dev);
-		if (ret) {
-			pr_err("Failed to register cpuidle device for CPU %d\n",
-			       cpu);
-			kfree(dev);
-			goto out_fail;
-		}
 	}
 
 	return 0;
-init_fail:
-	kfree(drv);
+
 out_fail:
 	while (--cpu >= 0) {
 		dev = per_cpu(cpuidle_devices, cpu);
+		drv = cpuidle_get_cpu_driver(dev);
 		cpuidle_unregister_device(dev);
-		kfree(dev);
-		drv = cpuidle_get_driver();
 		cpuidle_unregister_driver(drv);
+		kfree(dev);
 		kfree(drv);
 	}
 
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 484cc8909d5c..68a16827f45f 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -208,6 +208,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
 			return -EBUSY;
 		}
 		target_state = &drv->states[index];
+		broadcast = false;
 	}
 
 	/* Take note of the planned idle state. */
@@ -387,9 +388,12 @@ int cpuidle_enable_device(struct cpuidle_device *dev)
 	if (dev->enabled)
 		return 0;
 
+	if (!cpuidle_curr_governor)
+		return -EIO;
+
 	drv = cpuidle_get_cpu_driver(dev);
 
-	if (!drv || !cpuidle_curr_governor)
+	if (!drv)
 		return -EIO;
 
 	if (!dev->registered)
@@ -399,9 +403,11 @@ int cpuidle_enable_device(struct cpuidle_device *dev)
 	if (ret)
 		return ret;
 
-	if (cpuidle_curr_governor->enable &&
-	    (ret = cpuidle_curr_governor->enable(drv, dev)))
-		goto fail_sysfs;
+	if (cpuidle_curr_governor->enable) {
+		ret = cpuidle_curr_governor->enable(drv, dev);
+		if (ret)
+			goto fail_sysfs;
+	}
 
 	smp_wmb();
 
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index ce1a2ffffb2a..1ad8745fd6d6 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -17,6 +17,7 @@
 #include <linux/pm_qos.h>
 #include <linux/jiffies.h>
 #include <linux/tick.h>
+#include <linux/cpu.h>
 
 #include <asm/io.h>
 #include <linux/uaccess.h>
@@ -67,10 +68,16 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 			       struct cpuidle_device *dev)
 {
 	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
+	struct device *device = get_cpu_device(dev->cpu);
 	struct ladder_device_state *last_state;
 	int last_residency, last_idx = ldev->last_state_idx;
 	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	int resume_latency = dev_pm_qos_raw_read_value(device);
+
+	if (resume_latency < latency_req &&
+	    resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+		latency_req = resume_latency;
 
 	/* Special case when user has set very strict latency requirement */
 	if (unlikely(latency_req == 0)) {
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 48eaf2879228..aa390404e85f 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -298,8 +298,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 		data->needs_update = 0;
 	}
 
-	/* resume_latency is 0 means no restriction */
-	if (resume_latency && resume_latency < latency_req)
+	if (resume_latency < latency_req &&
+	    resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
 		latency_req = resume_latency;
 
 	/* Special case when user has set very strict latency requirement */
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index a1c4ee818614..78fb496ecb4e 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -28,6 +28,9 @@
 #include <linux/of.h>
 #include "governor.h"
 
+#define MAX(a,b)	((a > b) ? a : b)
+#define MIN(a,b)	((a < b) ? a : b)
+
 static struct class *devfreq_class;
 
 /*
@@ -69,6 +72,34 @@ static struct devfreq *find_device_devfreq(struct device *dev)
 	return ERR_PTR(-ENODEV);
 }
 
+static unsigned long find_available_min_freq(struct devfreq *devfreq)
+{
+	struct dev_pm_opp *opp;
+	unsigned long min_freq = 0;
+
+	opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &min_freq);
+	if (IS_ERR(opp))
+		min_freq = 0;
+	else
+		dev_pm_opp_put(opp);
+
+	return min_freq;
+}
+
+static unsigned long find_available_max_freq(struct devfreq *devfreq)
+{
+	struct dev_pm_opp *opp;
+	unsigned long max_freq = ULONG_MAX;
+
+	opp = dev_pm_opp_find_freq_floor(devfreq->dev.parent, &max_freq);
+	if (IS_ERR(opp))
+		max_freq = 0;
+	else
+		dev_pm_opp_put(opp);
+
+	return max_freq;
+}
+
 /**
  * devfreq_get_freq_level() - Lookup freq_table for the frequency
  * @devfreq:	the devfreq instance
@@ -85,11 +116,7 @@ static int devfreq_get_freq_level(struct devfreq *devfreq, unsigned long freq)
 	return -EINVAL;
 }
 
-/**
- * devfreq_set_freq_table() - Initialize freq_table for the frequency
- * @devfreq:	the devfreq instance
- */
-static void devfreq_set_freq_table(struct devfreq *devfreq)
+static int set_freq_table(struct devfreq *devfreq)
 {
 	struct devfreq_dev_profile *profile = devfreq->profile;
 	struct dev_pm_opp *opp;
@@ -99,7 +126,7 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
 	/* Initialize the freq_table from OPP table */
 	count = dev_pm_opp_get_opp_count(devfreq->dev.parent);
 	if (count <= 0)
-		return;
+		return -EINVAL;
 
 	profile->max_state = count;
 	profile->freq_table = devm_kcalloc(devfreq->dev.parent,
@@ -108,7 +135,7 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
 					GFP_KERNEL);
 	if (!profile->freq_table) {
 		profile->max_state = 0;
-		return;
+		return -ENOMEM;
 	}
 
 	for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
@@ -116,11 +143,13 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
 		if (IS_ERR(opp)) {
 			devm_kfree(devfreq->dev.parent, profile->freq_table);
 			profile->max_state = 0;
-			return;
+			return PTR_ERR(opp);
 		}
 		dev_pm_opp_put(opp);
 		profile->freq_table[i] = freq;
 	}
+
+	return 0;
 }
 
 /**
@@ -227,7 +256,7 @@ static int devfreq_notify_transition(struct devfreq *devfreq,
 int update_devfreq(struct devfreq *devfreq)
 {
 	struct devfreq_freqs freqs;
-	unsigned long freq, cur_freq;
+	unsigned long freq, cur_freq, min_freq, max_freq;
 	int err = 0;
 	u32 flags = 0;
 
@@ -245,19 +274,21 @@ int update_devfreq(struct devfreq *devfreq)
 		return err;
 
 	/*
-	 * Adjust the frequency with user freq and QoS.
+	 * Adjust the frequency with user freq, QoS and available freq.
 	 *
 	 * List from the highest priority
 	 * max_freq
 	 * min_freq
 	 */
+	max_freq = MIN(devfreq->scaling_max_freq, devfreq->max_freq);
+	min_freq = MAX(devfreq->scaling_min_freq, devfreq->min_freq);
 
-	if (devfreq->min_freq && freq < devfreq->min_freq) {
-		freq = devfreq->min_freq;
+	if (min_freq && freq < min_freq) {
+		freq = min_freq;
 		flags &= ~DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use GLB */
 	}
-	if (devfreq->max_freq && freq > devfreq->max_freq) {
-		freq = devfreq->max_freq;
+	if (max_freq && freq > max_freq) {
+		freq = max_freq;
 		flags |= DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use LUB */
 	}
 
@@ -280,10 +311,9 @@ int update_devfreq(struct devfreq *devfreq)
 	freqs.new = freq;
 	devfreq_notify_transition(devfreq, &freqs, DEVFREQ_POSTCHANGE);
 
-	if (devfreq->profile->freq_table)
-		if (devfreq_update_status(devfreq, freq))
-			dev_err(&devfreq->dev,
-				"Couldn't update frequency transition information.\n");
+	if (devfreq_update_status(devfreq, freq))
+		dev_err(&devfreq->dev,
+			"Couldn't update frequency transition information.\n");
 
 	devfreq->previous_freq = freq;
 	return err;
@@ -466,6 +496,19 @@ static int devfreq_notifier_call(struct notifier_block *nb, unsigned long type,
 	int ret;
 
 	mutex_lock(&devfreq->lock);
+
+	devfreq->scaling_min_freq = find_available_min_freq(devfreq);
+	if (!devfreq->scaling_min_freq) {
+		mutex_unlock(&devfreq->lock);
+		return -EINVAL;
+	}
+
+	devfreq->scaling_max_freq = find_available_max_freq(devfreq);
+	if (!devfreq->scaling_max_freq) {
+		mutex_unlock(&devfreq->lock);
+		return -EINVAL;
+	}
+
 	ret = update_devfreq(devfreq);
 	mutex_unlock(&devfreq->lock);
 
@@ -555,10 +598,28 @@ struct devfreq *devfreq_add_device(struct device *dev,
 
 	if (!devfreq->profile->max_state && !devfreq->profile->freq_table) {
 		mutex_unlock(&devfreq->lock);
-		devfreq_set_freq_table(devfreq);
+		err = set_freq_table(devfreq);
+		if (err < 0)
+			goto err_out;
 		mutex_lock(&devfreq->lock);
 	}
 
+	devfreq->min_freq = find_available_min_freq(devfreq);
+	if (!devfreq->min_freq) {
+		mutex_unlock(&devfreq->lock);
+		err = -EINVAL;
+		goto err_dev;
+	}
+	devfreq->scaling_min_freq = devfreq->min_freq;
+
+	devfreq->max_freq = find_available_max_freq(devfreq);
+	if (!devfreq->max_freq) {
+		mutex_unlock(&devfreq->lock);
+		err = -EINVAL;
+		goto err_dev;
+	}
+	devfreq->scaling_max_freq = devfreq->max_freq;
+
 	dev_set_name(&devfreq->dev, "devfreq%d",
 				atomic_inc_return(&devfreq_no));
 	err = device_register(&devfreq->dev);
@@ -1082,6 +1143,14 @@ unlock:
 	return ret;
 }
 
+static ssize_t min_freq_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct devfreq *df = to_devfreq(dev);
+
+	return sprintf(buf, "%lu\n", MAX(df->scaling_min_freq, df->min_freq));
+}
+
 static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr,
 			      const char *buf, size_t count)
 {
@@ -1108,17 +1177,15 @@ unlock:
 	mutex_unlock(&df->lock);
 	return ret;
 }
+static DEVICE_ATTR_RW(min_freq);
 
-#define show_one(name)						\
-static ssize_t name##_show					\
-(struct device *dev, struct device_attribute *attr, char *buf)	\
-{								\
-	return sprintf(buf, "%lu\n", to_devfreq(dev)->name);	\
-}
-show_one(min_freq);
-show_one(max_freq);
+static ssize_t max_freq_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct devfreq *df = to_devfreq(dev);
 
-static DEVICE_ATTR_RW(min_freq);
+	return sprintf(buf, "%lu\n", MIN(df->scaling_max_freq, df->max_freq));
+}
 static DEVICE_ATTR_RW(max_freq);
 
 static ssize_t available_frequencies_show(struct device *d,
@@ -1126,22 +1193,16 @@ static ssize_t available_frequencies_show(struct device *d,
 					  char *buf)
 {
 	struct devfreq *df = to_devfreq(d);
-	struct device *dev = df->dev.parent;
-	struct dev_pm_opp *opp;
 	ssize_t count = 0;
-	unsigned long freq = 0;
+	int i;
 
-	do {
-		opp = dev_pm_opp_find_freq_ceil(dev, &freq);
-		if (IS_ERR(opp))
-			break;
+	mutex_lock(&df->lock);
 
-		dev_pm_opp_put(opp);
-		count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
-				   "%lu ", freq);
-		freq++;
-	} while (1);
+	for (i = 0; i < df->profile->max_state; i++)
+		count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
+				"%lu ", df->profile->freq_table[i]);
 
+	mutex_unlock(&df->lock);
 	/* Truncate the trailing space */
 	if (count)
 		count--;
diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
index 49f68929e024..c25658b26598 100644
--- a/drivers/devfreq/exynos-bus.c
+++ b/drivers/devfreq/exynos-bus.c
@@ -436,7 +436,8 @@ static int exynos_bus_probe(struct platform_device *pdev)
 	ondemand_data->downdifferential = 5;
 
 	/* Add devfreq device to monitor and handle the exynos bus */
-	bus->devfreq = devm_devfreq_add_device(dev, profile, "simple_ondemand",
+	bus->devfreq = devm_devfreq_add_device(dev, profile,
+						DEVFREQ_GOV_SIMPLE_ONDEMAND,
 						ondemand_data);
 	if (IS_ERR(bus->devfreq)) {
 		dev_err(dev, "failed to add devfreq device\n");
@@ -488,7 +489,7 @@ passive:
 	passive_data->parent = parent_devfreq;
 
 	/* Add devfreq device for exynos bus with passive governor */
-	bus->devfreq = devm_devfreq_add_device(dev, profile, "passive",
+	bus->devfreq = devm_devfreq_add_device(dev, profile, DEVFREQ_GOV_PASSIVE,
 						passive_data);
 	if (IS_ERR(bus->devfreq)) {
 		dev_err(dev,
diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
index 673ad8cc9a1d..3bc29acbd54e 100644
--- a/drivers/devfreq/governor_passive.c
+++ b/drivers/devfreq/governor_passive.c
@@ -183,7 +183,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
 }
 
 static struct devfreq_governor devfreq_passive = {
-	.name = "passive",
+	.name = DEVFREQ_GOV_PASSIVE,
 	.immutable = 1,
 	.get_target_freq = devfreq_passive_get_target_freq,
 	.event_handler = devfreq_passive_event_handler,
diff --git a/drivers/devfreq/governor_performance.c b/drivers/devfreq/governor_performance.c
index c72f942f30a8..4d23ecfbd948 100644
--- a/drivers/devfreq/governor_performance.c
+++ b/drivers/devfreq/governor_performance.c
@@ -42,7 +42,7 @@ static int devfreq_performance_handler(struct devfreq *devfreq,
 }
 
 static struct devfreq_governor devfreq_performance = {
-	.name = "performance",
+	.name = DEVFREQ_GOV_PERFORMANCE,
 	.get_target_freq = devfreq_performance_func,
 	.event_handler = devfreq_performance_handler,
 };
diff --git a/drivers/devfreq/governor_powersave.c b/drivers/devfreq/governor_powersave.c
index 0c6bed567e6d..0c42f23249ef 100644
--- a/drivers/devfreq/governor_powersave.c
+++ b/drivers/devfreq/governor_powersave.c
@@ -39,7 +39,7 @@ static int devfreq_powersave_handler(struct devfreq *devfreq,
 }
 
 static struct devfreq_governor devfreq_powersave = {
-	.name = "powersave",
+	.name = DEVFREQ_GOV_POWERSAVE,
 	.get_target_freq = devfreq_powersave_func,
 	.event_handler = devfreq_powersave_handler,
 };
diff --git a/drivers/devfreq/governor_simpleondemand.c b/drivers/devfreq/governor_simpleondemand.c
index ae72ba5e78df..28e0f2de7100 100644
--- a/drivers/devfreq/governor_simpleondemand.c
+++ b/drivers/devfreq/governor_simpleondemand.c
@@ -125,7 +125,7 @@ static int devfreq_simple_ondemand_handler(struct devfreq *devfreq,
 }
 
 static struct devfreq_governor devfreq_simple_ondemand = {
-	.name = "simple_ondemand",
+	.name = DEVFREQ_GOV_SIMPLE_ONDEMAND,
 	.get_target_freq = devfreq_simple_ondemand_func,
 	.event_handler = devfreq_simple_ondemand_handler,
 };
diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
index 77028c27593c..080607c3f34d 100644
--- a/drivers/devfreq/governor_userspace.c
+++ b/drivers/devfreq/governor_userspace.c
@@ -87,7 +87,7 @@ static struct attribute *dev_entries[] = {
 	NULL,
 };
 static const struct attribute_group dev_attr_group = {
-	.name	= "userspace",
+	.name	= DEVFREQ_GOV_USERSPACE,
 	.attrs	= dev_entries,
 };
 
diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
index 1b89ebbad02c..5dfbfa3cc878 100644
--- a/drivers/devfreq/rk3399_dmc.c
+++ b/drivers/devfreq/rk3399_dmc.c
@@ -431,7 +431,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
 
 	data->devfreq = devm_devfreq_add_device(dev,
 					   &rk3399_devfreq_dmc_profile,
-					   "simple_ondemand",
+					   DEVFREQ_GOV_SIMPLE_ONDEMAND,
 					   &data->ondemand_data);
 	if (IS_ERR(data->devfreq))
 		return PTR_ERR(data->devfreq);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 9f45cfeae775..f124de3a0668 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1304,7 +1304,7 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * becaue the HDA driver may require us to enable the audio power
 	 * domain during system suspend.
 	 */
-	pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	ret = i915_driver_init_early(dev_priv, ent);
 	if (ret < 0)
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index f0b06b14e782..b2ccce5fb071 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -913,10 +913,9 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
 	struct cpuidle_state *state = &drv->states[index];
 	unsigned long eax = flg2MWAIT(state->flags);
 	unsigned int cstate;
+	bool uninitialized_var(tick);
 	int cpu = smp_processor_id();
 
-	cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
-
 	/*
 	 * leave_mm() to avoid costly and often unnecessary wakeups
 	 * for flushing the user TLB's associated with the active mm.
@@ -924,12 +923,19 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
 	if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
 		leave_mm(cpu);
 
-	if (!(lapic_timer_reliable_states & (1 << (cstate))))
-		tick_broadcast_enter();
+	if (!static_cpu_has(X86_FEATURE_ARAT)) {
+		cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) &
+				MWAIT_CSTATE_MASK) + 1;
+		tick = false;
+		if (!(lapic_timer_reliable_states & (1 << (cstate)))) {
+			tick = true;
+			tick_broadcast_enter();
+		}
+	}
 
 	mwait_idle_with_hints(eax, ecx);
 
-	if (!(lapic_timer_reliable_states & (1 << (cstate))))
+	if (!static_cpu_has(X86_FEATURE_ARAT) && tick)
 		tick_broadcast_exit();
 
 	return index;
@@ -1061,7 +1067,7 @@ static const struct idle_cpu idle_cpu_dnv = {
 };
 
 #define ICPU(model, cpu) \
-	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu }
+	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&cpu }
 
 static const struct x86_cpu_id intel_idle_ids[] __initconst = {
 	ICPU(INTEL_FAM6_NEHALEM_EP, idle_cpu_nehalem),
@@ -1125,6 +1131,11 @@ static int __init intel_idle_probe(void)
 		return -ENODEV;
 	}
 
+	if (!boot_cpu_has(X86_FEATURE_MWAIT)) {
+		pr_debug("Please enable MWAIT in BIOS SETUP\n");
+		return -ENODEV;
+	}
+
 	if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
 		return -ENODEV;
 
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index 78b3172c8e6e..f4f17552c9b8 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -225,7 +225,7 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * MEI requires to resume from runtime suspend mode
 	 * in order to perform link reset flow upon system suspend.
 	 */
-	pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	/*
 	 * ME maps runtime suspend/resume to D0i states,
diff --git a/drivers/misc/mei/pci-txe.c b/drivers/misc/mei/pci-txe.c
index 0566f9bfa7de..e1b909123fb0 100644
--- a/drivers/misc/mei/pci-txe.c
+++ b/drivers/misc/mei/pci-txe.c
@@ -141,7 +141,7 @@ static int mei_txe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * MEI requires to resume from runtime suspend mode
 	 * in order to perform link reset flow upon system suspend.
 	 */
-	pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	/*
 	 * TXE maps runtime suspend/resume to own power gating states,
diff --git a/drivers/opp/Kconfig b/drivers/opp/Kconfig
new file mode 100644
index 000000000000..a7fbb93f302c
--- /dev/null
+++ b/drivers/opp/Kconfig
@@ -0,0 +1,13 @@
+config PM_OPP
+	bool
+	select SRCU
+	---help---
+	  SOCs have a standard set of tuples consisting of frequency and
+	  voltage pairs that the device will support per voltage domain. This
+	  is called Operating Performance Point or OPP. The actual definitions
+	  of OPP varies over silicon within the same family of devices.
+
+	  OPP layer organizes the data internally using device pointers
+	  representing individual voltage domains and provides SOC
+	  implementations a ready to use framework to manage OPPs.
+	  For more information, read <file:Documentation/power/opp.txt>
diff --git a/drivers/base/power/opp/Makefile b/drivers/opp/Makefile
index e70ceb406fe9..e70ceb406fe9 100644
--- a/drivers/base/power/opp/Makefile
+++ b/drivers/opp/Makefile
diff --git a/drivers/base/power/opp/core.c b/drivers/opp/core.c
index a6de32530693..92fa94a6dcc1 100644
--- a/drivers/base/power/opp/core.c
+++ b/drivers/opp/core.c
@@ -19,6 +19,7 @@
 #include <linux/slab.h>
 #include <linux/device.h>
 #include <linux/export.h>
+#include <linux/pm_domain.h>
 #include <linux/regulator/consumer.h>
 
 #include "opp.h"
@@ -296,7 +297,7 @@ int dev_pm_opp_get_opp_count(struct device *dev)
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table)) {
 		count = PTR_ERR(opp_table);
-		dev_err(dev, "%s: OPP table not found (%d)\n",
+		dev_dbg(dev, "%s: OPP table not found (%d)\n",
 			__func__, count);
 		return count;
 	}
@@ -535,6 +536,44 @@ _generic_set_opp_clk_only(struct device *dev, struct clk *clk,
 	return ret;
 }
 
+static inline int
+_generic_set_opp_domain(struct device *dev, struct clk *clk,
+			unsigned long old_freq, unsigned long freq,
+			unsigned int old_pstate, unsigned int new_pstate)
+{
+	int ret;
+
+	/* Scaling up? Scale domain performance state before frequency */
+	if (freq > old_freq) {
+		ret = dev_pm_genpd_set_performance_state(dev, new_pstate);
+		if (ret)
+			return ret;
+	}
+
+	ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+	if (ret)
+		goto restore_domain_state;
+
+	/* Scaling down? Scale domain performance state after frequency */
+	if (freq < old_freq) {
+		ret = dev_pm_genpd_set_performance_state(dev, new_pstate);
+		if (ret)
+			goto restore_freq;
+	}
+
+	return 0;
+
+restore_freq:
+	if (_generic_set_opp_clk_only(dev, clk, freq, old_freq))
+		dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n",
+			__func__, old_freq);
+restore_domain_state:
+	if (freq > old_freq)
+		dev_pm_genpd_set_performance_state(dev, old_pstate);
+
+	return ret;
+}
+
 static int _generic_set_opp_regulator(const struct opp_table *opp_table,
 				      struct device *dev,
 				      unsigned long old_freq,
@@ -653,7 +692,16 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 
 	/* Only frequency scaling */
 	if (!opp_table->regulators) {
-		ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+		/*
+		 * We don't support devices with both regulator and
+		 * domain performance-state for now.
+		 */
+		if (opp_table->genpd_performance_state)
+			ret = _generic_set_opp_domain(dev, clk, old_freq, freq,
+						      IS_ERR(old_opp) ? 0 : old_opp->pstate,
+						      opp->pstate);
+		else
+			ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
 	} else if (!opp_table->set_opp) {
 		ret = _generic_set_opp_regulator(opp_table, dev, old_freq, freq,
 						 IS_ERR(old_opp) ? NULL : old_opp->supplies,
@@ -988,6 +1036,9 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 		return ret;
 	}
 
+	if (opp_table->get_pstate)
+		new_opp->pstate = opp_table->get_pstate(dev, new_opp->rate);
+
 	list_add(&new_opp->node, head);
 	mutex_unlock(&opp_table->lock);
 
@@ -1476,13 +1527,13 @@ err:
1476EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper); 1527EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper);
1477 1528
1478/** 1529/**
1479 * dev_pm_opp_register_put_opp_helper() - Releases resources blocked for 1530 * dev_pm_opp_unregister_set_opp_helper() - Releases resources blocked for
1480 * set_opp helper 1531 * set_opp helper
1481 * @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper(). 1532 * @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper().
1482 * 1533 *
1483 * Release resources blocked for platform specific set_opp helper. 1534 * Release resources blocked for platform specific set_opp helper.
1484 */ 1535 */
1485void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table) 1536void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table)
1486{ 1537{
1487 if (!opp_table->set_opp) { 1538 if (!opp_table->set_opp) {
1488 pr_err("%s: Doesn't have custom set_opp helper set\n", 1539 pr_err("%s: Doesn't have custom set_opp helper set\n",
@@ -1497,7 +1548,82 @@ void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table)
 
 	dev_pm_opp_put_opp_table(opp_table);
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
+EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_set_opp_helper);
+
+/**
+ * dev_pm_opp_register_get_pstate_helper() - Register get_pstate() helper.
+ * @dev: Device for which the helper is getting registered.
+ * @get_pstate: Helper.
+ *
+ * TODO: Remove this callback after the same information is available via Device
+ * Tree.
+ *
+ * This allows a platform to initialize the performance states of individual
+ * OPPs for its devices, until we get similar information directly from DT.
+ *
+ * This must be called before the OPPs are initialized for the device.
+ */
+struct opp_table *dev_pm_opp_register_get_pstate_helper(struct device *dev,
+		int (*get_pstate)(struct device *dev, unsigned long rate))
+{
+	struct opp_table *opp_table;
+	int ret;
+
+	if (!get_pstate)
+		return ERR_PTR(-EINVAL);
+
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);
+
+	/* This should be called before OPPs are initialized */
+	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
+		ret = -EBUSY;
+		goto err;
+	}
+
+	/* Already have genpd_performance_state set */
+	if (WARN_ON(opp_table->genpd_performance_state)) {
+		ret = -EBUSY;
+		goto err;
+	}
+
+	opp_table->genpd_performance_state = true;
+	opp_table->get_pstate = get_pstate;
+
+	return opp_table;
+
+err:
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_register_get_pstate_helper);
+
+/**
+ * dev_pm_opp_unregister_get_pstate_helper() - Releases resources blocked for
+ *					       get_pstate() helper
+ * @opp_table: OPP table returned from dev_pm_opp_register_get_pstate_helper().
+ *
+ * Release resources blocked for platform specific get_pstate() helper.
+ */
+void dev_pm_opp_unregister_get_pstate_helper(struct opp_table *opp_table)
+{
+	if (!opp_table->genpd_performance_state) {
+		pr_err("%s: Doesn't have performance states set\n",
+		       __func__);
+		return;
+	}
+
+	/* Make sure there are no concurrent readers while updating opp_table */
+	WARN_ON(!list_empty(&opp_table->opp_list));
+
+	opp_table->genpd_performance_state = false;
+	opp_table->get_pstate = NULL;
+
+	dev_pm_opp_put_opp_table(opp_table);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_get_pstate_helper);
 
 /**
  * dev_pm_opp_add() - Add an OPP table from a table definitions
@@ -1706,6 +1832,13 @@ void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev,
 			if (remove_all || !opp->dynamic)
 				dev_pm_opp_put(opp);
 		}
+
+		/*
+		 * The OPP table is getting removed, drop the performance state
+		 * constraints.
+		 */
+		if (opp_table->genpd_performance_state)
+			dev_pm_genpd_set_performance_state(dev, 0);
 	} else {
 		_remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
 	}
diff --git a/drivers/base/power/opp/cpu.c b/drivers/opp/cpu.c
index 2d87bc1adf38..2d87bc1adf38 100644
--- a/drivers/base/power/opp/cpu.c
+++ b/drivers/opp/cpu.c
diff --git a/drivers/base/power/opp/debugfs.c b/drivers/opp/debugfs.c
index 81cf120fcf43..b03c03576a62 100644
--- a/drivers/base/power/opp/debugfs.c
+++ b/drivers/opp/debugfs.c
@@ -41,16 +41,15 @@ static bool opp_debug_create_supplies(struct dev_pm_opp *opp,
 {
 	struct dentry *d;
 	int i;
-	char *name;
 
 	for (i = 0; i < opp_table->regulator_count; i++) {
-		name = kasprintf(GFP_KERNEL, "supply-%d", i);
+		char name[15];
+
+		snprintf(name, sizeof(name), "supply-%d", i);
 
 		/* Create per-opp directory */
 		d = debugfs_create_dir(name, pdentry);
 
-		kfree(name);
-
 		if (!d)
 			return false;
 
@@ -100,6 +99,9 @@ int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
 	if (!debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend))
 		return -ENOMEM;
 
+	if (!debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate))
+		return -ENOMEM;
+
 	if (!debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate))
 		return -ENOMEM;
 
diff --git a/drivers/base/power/opp/of.c b/drivers/opp/of.c
index 0b718886479b..cb716aa2f44b 100644
--- a/drivers/base/power/opp/of.c
+++ b/drivers/opp/of.c
@@ -16,7 +16,7 @@
 #include <linux/cpu.h>
 #include <linux/errno.h>
 #include <linux/device.h>
-#include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/slab.h>
 #include <linux/export.h>
 
@@ -397,6 +397,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
 				ret);
 			_dev_pm_opp_remove_table(opp_table, dev, false);
+			of_node_put(np);
 			goto put_opp_table;
 		}
 	}
@@ -603,7 +604,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 		if (cpu == cpu_dev->id)
 			continue;
 
-		cpu_np = of_get_cpu_node(cpu, NULL);
+		cpu_np = of_cpu_device_node_get(cpu);
 		if (!cpu_np) {
 			dev_err(cpu_dev, "%s: failed to get cpu%d node\n",
 				__func__, cpu);
@@ -613,6 +614,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 
 		/* Get OPP descriptor node */
 		tmp_np = _opp_of_get_opp_desc_node(cpu_np);
+		of_node_put(cpu_np);
 		if (!tmp_np) {
 			pr_err("%pOF: Couldn't find opp node\n", cpu_np);
 			ret = -ENOENT;
diff --git a/drivers/base/power/opp/opp.h b/drivers/opp/opp.h
index 166eef990599..4d00061648a3 100644
--- a/drivers/base/power/opp/opp.h
+++ b/drivers/opp/opp.h
@@ -58,6 +58,7 @@ extern struct list_head opp_tables;
  * @dynamic:	not-created from static DT entries.
  * @turbo:	true if turbo (boost) OPP
  * @suspend:	true if suspend OPP
+ * @pstate: Device's power domain's performance state.
  * @rate:	Frequency in hertz
  * @supplies:	Power supplies voltage/current values
  * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
@@ -76,6 +77,7 @@ struct dev_pm_opp {
 	bool dynamic;
 	bool turbo;
 	bool suspend;
+	unsigned int pstate;
 	unsigned long rate;
 
 	struct dev_pm_opp_supply *supplies;
@@ -135,8 +137,10 @@ enum opp_table_access {
  * @clk: Device's clock handle
  * @regulators: Supply regulators
  * @regulator_count: Number of power supply regulators
+ * @genpd_performance_state: Device's power domain support performance state.
  * @set_opp: Platform specific set_opp callback
 * @set_opp_data: Data to be passed to set_opp callback
+ * @get_pstate: Platform specific get_pstate callback
 * @dentry:	debugfs dentry pointer of the real device directory (not links).
 * @dentry_name: Name of the real dentry.
 *
@@ -170,9 +174,11 @@ struct opp_table {
 	struct clk *clk;
 	struct regulator **regulators;
 	unsigned int regulator_count;
+	bool genpd_performance_state;
 
 	int (*set_opp)(struct dev_pm_set_opp_data *data);
 	struct dev_pm_set_opp_data *set_opp_data;
+	int (*get_pstate)(struct device *dev, unsigned long rate);
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *dentry;
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 11bd267fc137..07b8a9b385ab 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -680,17 +680,13 @@ static int pci_pm_prepare(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
 
-	/*
-	 * Devices having power.ignore_children set may still be necessary for
-	 * suspending their children in the next phase of device suspend.
-	 */
-	if (dev->power.ignore_children)
-		pm_runtime_resume(dev);
-
 	if (drv && drv->pm && drv->pm->prepare) {
 		int error = drv->pm->prepare(dev);
-		if (error)
+
+		if (error < 0)
 			return error;
+
+		if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+			return 0;
 	}
 	return pci_dev_keep_suspended(to_pci_dev(dev));
 }
@@ -731,18 +727,25 @@ static int pci_pm_suspend(struct device *dev)
 
 	if (!pm) {
 		pci_pm_default_suspend(pci_dev);
-		goto Fixup;
+		return 0;
 	}
 
 	/*
-	 * PCI devices suspended at run time need to be resumed at this point,
-	 * because in general it is necessary to reconfigure them for system
-	 * suspend. Namely, if the device is supposed to wake up the system
-	 * from the sleep state, we may need to reconfigure it for this purpose.
-	 * In turn, if the device is not supposed to wake up the system from the
-	 * sleep state, we'll have to prevent it from signaling wake-up.
+	 * PCI devices suspended at run time may need to be resumed at this
+	 * point, because in general it may be necessary to reconfigure them for
+	 * system suspend. Namely, if the device is expected to wake up the
+	 * system from the sleep state, it may have to be reconfigured for this
+	 * purpose, or if the device is not expected to wake up the system from
+	 * the sleep state, it should be prevented from signaling wakeup events
+	 * going forward.
+	 *
+	 * Also if the driver of the device does not indicate that its system
+	 * suspend callbacks can cope with runtime-suspended devices, it is
+	 * better to resume the device from runtime suspend here.
 	 */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    !pci_dev_keep_suspended(pci_dev))
+		pm_runtime_resume(dev);
 
 	pci_dev->state_saved = false;
 	if (pm->suspend) {
@@ -762,17 +765,27 @@ static int pci_pm_suspend(struct device *dev)
 		}
 	}
 
- Fixup:
-	pci_fixup_device(pci_fixup_suspend, pci_dev);
-
 	return 0;
 }
 
+static int pci_pm_suspend_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev));
+
+	return pm_generic_suspend_late(dev);
+}
+
 static int pci_pm_suspend_noirq(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
 
@@ -805,6 +818,9 @@ static int pci_pm_suspend_noirq(struct device *dev)
 			pci_prepare_to_sleep(pci_dev);
 	}
 
+	dev_dbg(dev, "PCI PM: Suspend power state: %s\n",
+		pci_power_name(pci_dev->current_state));
+
 	pci_pm_set_unknown_state(pci_dev);
 
 	/*
@@ -831,6 +847,14 @@ static int pci_pm_resume_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	/*
+	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+	 * during system suspend, so update their runtime PM status to "active"
+	 * as they are going to be put into D0 shortly.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
 	pci_pm_default_resume_early(pci_dev);
 
 	if (pci_has_legacy_pm_support(pci_dev))
@@ -873,6 +897,7 @@ static int pci_pm_resume(struct device *dev)
 #else /* !CONFIG_SUSPEND */
 
 #define pci_pm_suspend		NULL
+#define pci_pm_suspend_late	NULL
 #define pci_pm_suspend_noirq	NULL
 #define pci_pm_resume		NULL
 #define pci_pm_resume_noirq	NULL
@@ -907,7 +932,8 @@ static int pci_pm_freeze(struct device *dev)
 	 * devices should not be touched during freeze/thaw transitions,
 	 * however.
 	 */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+		pm_runtime_resume(dev);
 
 	pci_dev->state_saved = false;
 	if (pm->freeze) {
@@ -919,17 +945,25 @@ static int pci_pm_freeze(struct device *dev)
 			return error;
 	}
 
-	if (pcibios_pm_ops.freeze)
-		return pcibios_pm_ops.freeze(dev);
-
 	return 0;
 }
 
+static int pci_pm_freeze_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_freeze_late(dev);
+}
+
 static int pci_pm_freeze_noirq(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct device_driver *drv = dev->driver;
 
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_suspend_late(dev, PMSG_FREEZE);
 
@@ -959,6 +993,16 @@ static int pci_pm_thaw_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	/*
+	 * If the device is in runtime suspend, the code below may not work
+	 * correctly with it, so skip that code and make the PM core skip all of
+	 * the subsequent "thaw" callbacks for the device.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.direct_complete = true;
+		return 0;
+	}
+
 	if (pcibios_pm_ops.thaw_noirq) {
 		error = pcibios_pm_ops.thaw_noirq(dev);
 		if (error)
@@ -983,12 +1027,6 @@ static int pci_pm_thaw(struct device *dev)
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int error = 0;
 
-	if (pcibios_pm_ops.thaw) {
-		error = pcibios_pm_ops.thaw(dev);
-		if (error)
-			return error;
-	}
-
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_resume(dev);
 
@@ -1014,11 +1052,13 @@ static int pci_pm_poweroff(struct device *dev)
 
 	if (!pm) {
 		pci_pm_default_suspend(pci_dev);
-		goto Fixup;
+		return 0;
 	}
 
 	/* The reason to do that is the same as in pci_pm_suspend(). */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    !pci_dev_keep_suspended(pci_dev))
+		pm_runtime_resume(dev);
 
 	pci_dev->state_saved = false;
 	if (pm->poweroff) {
@@ -1030,13 +1070,17 @@ static int pci_pm_poweroff(struct device *dev)
 			return error;
 	}
 
- Fixup:
-	pci_fixup_device(pci_fixup_suspend, pci_dev);
+	return 0;
+}
 
-	if (pcibios_pm_ops.poweroff)
-		return pcibios_pm_ops.poweroff(dev);
+static int pci_pm_poweroff_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
 
-	return 0;
+	pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev));
+
+	return pm_generic_poweroff_late(dev);
 }
 
 static int pci_pm_poweroff_noirq(struct device *dev)
@@ -1044,6 +1088,9 @@ static int pci_pm_poweroff_noirq(struct device *dev)
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct device_driver *drv = dev->driver;
 
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
 	if (pci_has_legacy_pm_support(to_pci_dev(dev)))
 		return pci_legacy_suspend_late(dev, PMSG_HIBERNATE);
 
@@ -1085,6 +1132,10 @@ static int pci_pm_restore_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	/* This is analogous to the pci_pm_resume_noirq() case. */
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
 	if (pcibios_pm_ops.restore_noirq) {
 		error = pcibios_pm_ops.restore_noirq(dev);
 		if (error)
@@ -1108,12 +1159,6 @@ static int pci_pm_restore(struct device *dev)
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int error = 0;
 
-	if (pcibios_pm_ops.restore) {
-		error = pcibios_pm_ops.restore(dev);
-		if (error)
-			return error;
-	}
-
 	/*
 	 * This is necessary for the hibernation error path in which restore is
 	 * called without restoring the standard config registers of the device.
@@ -1139,10 +1184,12 @@ static int pci_pm_restore(struct device *dev)
 #else /* !CONFIG_HIBERNATE_CALLBACKS */
 
 #define pci_pm_freeze		NULL
+#define pci_pm_freeze_late	NULL
 #define pci_pm_freeze_noirq	NULL
 #define pci_pm_thaw		NULL
 #define pci_pm_thaw_noirq	NULL
 #define pci_pm_poweroff		NULL
+#define pci_pm_poweroff_late	NULL
 #define pci_pm_poweroff_noirq	NULL
 #define pci_pm_restore		NULL
 #define pci_pm_restore_noirq	NULL
@@ -1258,10 +1305,13 @@ static const struct dev_pm_ops pci_dev_pm_ops = {
 	.prepare = pci_pm_prepare,
 	.complete = pci_pm_complete,
 	.suspend = pci_pm_suspend,
+	.suspend_late = pci_pm_suspend_late,
 	.resume = pci_pm_resume,
 	.freeze = pci_pm_freeze,
+	.freeze_late = pci_pm_freeze_late,
 	.thaw = pci_pm_thaw,
 	.poweroff = pci_pm_poweroff,
+	.poweroff_late = pci_pm_poweroff_late,
 	.restore = pci_pm_restore,
 	.suspend_noirq = pci_pm_suspend_noirq,
 	.resume_noirq = pci_pm_resume_noirq,
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 6078dfc11b11..374f5686e2bc 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -2166,8 +2166,7 @@ bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
 
 	if (!pm_runtime_suspended(dev)
 	    || pci_target_state(pci_dev, wakeup) != pci_dev->current_state
-	    || platform_pci_need_resume(pci_dev)
-	    || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME))
+	    || platform_pci_need_resume(pci_dev))
 		return false;
 
 	/*
diff --git a/drivers/power/avs/smartreflex.c b/drivers/power/avs/smartreflex.c
index 974fd684bab2..89bf4d6cb486 100644
--- a/drivers/power/avs/smartreflex.c
+++ b/drivers/power/avs/smartreflex.c
@@ -355,7 +355,7 @@ int sr_configure_errgen(struct omap_sr *sr)
 	u8 senp_shift, senn_shift;
 
 	if (!sr) {
-		pr_warn("%s: NULL omap_sr from %pF\n",
+		pr_warn("%s: NULL omap_sr from %pS\n",
 			__func__, (void *)_RET_IP_);
 		return -EINVAL;
 	}
@@ -422,7 +422,7 @@ int sr_disable_errgen(struct omap_sr *sr)
 	u32 vpboundint_en, vpboundint_st;
 
 	if (!sr) {
-		pr_warn("%s: NULL omap_sr from %pF\n",
+		pr_warn("%s: NULL omap_sr from %pS\n",
 			__func__, (void *)_RET_IP_);
 		return -EINVAL;
 	}
@@ -477,7 +477,7 @@ int sr_configure_minmax(struct omap_sr *sr)
 	u8 senp_shift, senn_shift;
 
 	if (!sr) {
-		pr_warn("%s: NULL omap_sr from %pF\n",
+		pr_warn("%s: NULL omap_sr from %pS\n",
 			__func__, (void *)_RET_IP_);
 		return -EINVAL;
 	}
@@ -562,7 +562,7 @@ int sr_enable(struct omap_sr *sr, unsigned long volt)
 	int ret;
 
 	if (!sr) {
-		pr_warn("%s: NULL omap_sr from %pF\n",
+		pr_warn("%s: NULL omap_sr from %pS\n",
 			__func__, (void *)_RET_IP_);
 		return -EINVAL;
 	}
@@ -614,7 +614,7 @@ int sr_enable(struct omap_sr *sr, unsigned long volt)
 void sr_disable(struct omap_sr *sr)
 {
 	if (!sr) {
-		pr_warn("%s: NULL omap_sr from %pF\n",
+		pr_warn("%s: NULL omap_sr from %pS\n",
 			__func__, (void *)_RET_IP_);
 		return;
 	}
diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
index e1ce8b1b5090..e570b6af2e6f 100644
--- a/drivers/soc/mediatek/mtk-scpsys.c
+++ b/drivers/soc/mediatek/mtk-scpsys.c
@@ -361,17 +361,6 @@ out:
 	return ret;
 }
 
-static bool scpsys_active_wakeup(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-	struct scp_domain *scpd;
-
-	genpd = pd_to_genpd(dev->pm_domain);
-	scpd = container_of(genpd, struct scp_domain, genpd);
-
-	return scpd->data->active_wakeup;
-}
-
 static void init_clks(struct platform_device *pdev, struct clk **clk)
 {
 	int i;
@@ -466,7 +455,8 @@ static struct scp *init_scp(struct platform_device *pdev,
 		genpd->name = data->name;
 		genpd->power_off = scpsys_power_off;
 		genpd->power_on = scpsys_power_on;
-		genpd->dev_ops.active_wakeup = scpsys_active_wakeup;
+		if (scpd->data->active_wakeup)
+			genpd->flags |= GENPD_FLAG_ACTIVE_WAKEUP;
 	}
 
 	return scp;
diff --git a/drivers/soc/rockchip/pm_domains.c b/drivers/soc/rockchip/pm_domains.c
index 40b75748835f..5c342167b9db 100644
--- a/drivers/soc/rockchip/pm_domains.c
+++ b/drivers/soc/rockchip/pm_domains.c
@@ -358,17 +358,6 @@ static void rockchip_pd_detach_dev(struct generic_pm_domain *genpd,
 	pm_clk_destroy(dev);
 }
 
-static bool rockchip_active_wakeup(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-	struct rockchip_pm_domain *pd;
-
-	genpd = pd_to_genpd(dev->pm_domain);
-	pd = container_of(genpd, struct rockchip_pm_domain, genpd);
-
-	return pd->info->active_wakeup;
-}
-
 static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 				      struct device_node *node)
 {
@@ -489,8 +478,9 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 	pd->genpd.power_on = rockchip_pd_power_on;
 	pd->genpd.attach_dev = rockchip_pd_attach_dev;
 	pd->genpd.detach_dev = rockchip_pd_detach_dev;
-	pd->genpd.dev_ops.active_wakeup = rockchip_active_wakeup;
 	pd->genpd.flags = GENPD_FLAG_PM_CLK;
+	if (pd_info->active_wakeup)
+		pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP;
 	pm_genpd_init(&pd->genpd, NULL, false);
 
 	pmu->genpd_data.domains[id] = &pd->genpd;