author    Linus Torvalds <torvalds@linux-foundation.org>  2015-11-04 21:10:13 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2015-11-04 21:10:13 -0500
commit    0d51ce9ca1116e8f4dc87cb51db8dd250327e9bb (patch)
tree      f845ff44f40f102c5143f94d3c9734e65544712d /drivers/base
parent    41ecf1404b34d9975eb97f5005d9e4274eaeb76a (diff)
parent    1ab68460b1d0671968b35e04f21efcf1ce051916 (diff)
Merge tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
 "Quite a few new features are included this time.

  First off, the Collaborative Processor Performance Control interface
  (version 2) defined by ACPI will now be supported on ARM64 along with
  a cpufreq frontend for CPU performance scaling.

  Second, ACPI gets a new infrastructure for the early probing of IRQ
  chips and clock sources (along the lines of the existing similar
  mechanism for DT).

  Next, the ACPI core and the generic device properties API will now
  support a recently introduced hierarchical properties extension of
  the _DSD (Device Specific Data) ACPI device configuration object.  If
  the ACPI platform firmware uses that extension to organize device
  properties in a hierarchical way, the kernel will automatically
  handle it and make those properties available to device drivers via
  the generic device properties API.

  It also will be possible to build the ACPICA's AML interpreter
  debugger into the kernel now and use that to diagnose AML-related
  problems more efficiently.  In the future, this should make it
  possible to single-step AML execution and do similar things.
  Interesting stuff, although somewhat experimental at this point.

  Finally, the PM core gets a new mechanism that can be used by device
  drivers to distinguish between suspend-to-RAM (based on platform
  firmware support) and suspend-to-idle (or other variants of system
  suspend the platform firmware is not involved in) and possibly
  optimize their device suspend/resume handling accordingly.

  In addition to that, some existing features are re-organized quite
  substantially.

  First, the ACPI-based handling of PCI host bridges on x86 and ia64 is
  unified and the common code goes into the ACPI core (so as to reduce
  code duplication and eliminate non-essential differences between the
  two architectures in that area).

  Second, the Operating Performance Points (OPP) framework is
  reorganized to make the code easier to find and follow.

  Next, the cpufreq core's sysfs interface is reorganized to get rid of
  the "primary CPU" concept for configurations in which the same
  performance scaling settings are shared between multiple CPUs.

  Finally, some interfaces that aren't necessary any more are dropped
  from the generic power domains framework.

  On top of the above we have some minor extensions, cleanups and bug
  fixes in multiple places, as usual.

  Specifics:

   - ACPICA update to upstream revision 20150930 (Bob Moore, Lv Zheng).
     The most significant change is to allow the AML debugger to be
     built into the kernel.  On top of that there is an update related
     to the NFIT table (the ACPI persistent memory interface) and a few
     fixes and cleanups.

   - ACPI CPPC2 (Collaborative Processor Performance Control v2)
     support along with a cpufreq frontend (Ashwin Chaugule).  This can
     only be enabled on ARM64 at this point.

   - New ACPI infrastructure for the early probing of IRQ chips and
     clock sources (Marc Zyngier).

   - Support for a new hierarchical properties extension of the ACPI
     _DSD (Device Specific Data) device configuration object allowing
     the kernel to handle hierarchical properties (provided by the
     platform firmware this way) automatically and make them available
     to device drivers via the generic device properties interface
     (Rafael Wysocki).

   - Generic device properties API extension to obtain an index of
     certain string value in an array of strings, along the lines of
     of_property_match_string(), but working for all of the supported
     firmware node types, and support for the "dma-names" device
     property based on it (Mika Westerberg).

   - ACPI core fix to parse the MADT (Multiple APIC Description Table)
     entries in the order expected by platform firmware (and mandated
     by the specification) to avoid confusion on systems with more than
     255 logical CPUs (Lukasz Anaczkowski).

   - Consolidation of the ACPI-based handling of PCI host bridges on
     x86 and ia64 (Jiang Liu).

   - ACPI core fixes to ensure that the correct IRQ number is used to
     represent the SCI (System Control Interrupt) in the cases when it
     has been re-mapped (Chen Yu).

   - New ACPI backlight quirk for Lenovo IdeaPad S405 (Hans de Goede).

   - ACPI EC driver fixes (Lv Zheng).

   - Assorted ACPI fixes and cleanups (Dan Carpenter, Insu Yun, Jiri
     Kosina, Rami Rosen, Rasmus Villemoes).

   - New mechanism in the PM core allowing drivers to check if the
     platform firmware is going to be involved in the upcoming system
     suspend or if it has been involved in the suspend the system is
     resuming from at the moment (Rafael Wysocki).  This should allow
     drivers to optimize their suspend/resume handling in some cases
     and the changes include a couple of users of it (the i8042 input
     driver, PCI PM).

   - PCI PM fix to prevent runtime-suspended devices with PME enabled
     from being resumed during system suspend even if they aren't
     configured to wake up the system from sleep (Rafael Wysocki).

   - New mechanism to report the number of a wakeup IRQ that woke up
     the system from sleep last time (Alexandra Yates).

   - Removal of unused interfaces from the generic power domains
     framework and fixes related to latency measurements in that code
     (Ulf Hansson, Daniel Lezcano).

   - cpufreq core sysfs interface rework to make it handle CPUs that
     share performance scaling settings (represented by a common
     cpufreq policy object) more symmetrically (Viresh Kumar).  This
     should help to simplify the CPU offline/online handling among
     other things.

   - cpufreq core fixes and cleanups (Viresh Kumar).

   - intel_pstate fixes related to the Turbo Activation Ratio (TAR)
     mechanism on client platforms which causes the turbo P-states
     range to vary depending on platform firmware settings (Srinivas
     Pandruvada).

   - intel_pstate sysfs interface fix (Prarit Bhargava).

   - Assorted cpufreq driver (imx, tegra20, powernv, integrator) fixes
     and cleanups (Bai Ping, Bartlomiej Zolnierkiewicz, Shilpasri G
     Bhat, Luis de Bethencourt).

   - cpuidle mvebu driver cleanups (Russell King).

   - OPP (Operating Performance Points) framework code reorganization
     to make it more maintainable (Viresh Kumar).

   - Intel Broxton support for the RAPL (Running Average Power Limits)
     power capping driver (Amy Wiles).

   - Assorted power management code fixes and cleanups (Dan Carpenter,
     Geert Uytterhoeven, Geliang Tang, Luis de Bethencourt, Rasmus
     Villemoes)"

* tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (108 commits)
  cpufreq: postfix policy directory with the first CPU in related_cpus
  cpufreq: create cpu/cpufreq/policyX directories
  cpufreq: remove cpufreq_sysfs_{create|remove}_file()
  cpufreq: create cpu/cpufreq at boot time
  cpufreq: Use cpumask_copy instead of cpumask_or to copy a mask
  cpufreq: ondemand: Drop unnecessary locks from update_sampling_rate()
  PM / Domains: Merge measurements for PM QoS device latencies
  PM / Domains: Don't measure ->start|stop() latency in system PM callbacks
  PM / clk: Fix broken build due to non-matching code and header #ifdefs
  ACPI / Documentation: add copy_dsdt to ACPI format options
  ACPI / sysfs: correctly check failing memory allocation
  ACPI / video: Add a quirk to force native backlight on Lenovo IdeaPad S405
  ACPI / CPPC: Fix potential memory leak
  ACPI / CPPC: signedness bug in register_pcc_channel()
  ACPI / PAD: power_saving_thread() is not freezable
  ACPI / PM: Fix incorrect wakeup IRQ setting during suspend-to-idle
  ACPI: Using correct irq when waiting for events
  ACPI: Use correct IRQ when uninstalling ACPI interrupt handler
  cpuidle: mvebu: disable the bind/unbind attributes and use builtin_platform_driver
  cpuidle: mvebu: clean up multiple platform drivers
  ...
Diffstat (limited to 'drivers/base')
-rw-r--r--  drivers/base/power/Makefile                                           |   2
-rw-r--r--  drivers/base/power/clock_ops.c                                        |   6
-rw-r--r--  drivers/base/power/domain.c                                           | 368
-rw-r--r--  drivers/base/power/domain_governor.c                                  |   6
-rw-r--r--  drivers/base/power/generic_ops.c                                      |  23
-rw-r--r--  drivers/base/power/opp/Makefile                                       |   2
-rw-r--r--  drivers/base/power/opp/core.c (renamed from drivers/base/power/opp.c) | 363
-rw-r--r--  drivers/base/power/opp/cpu.c                                          | 267
-rw-r--r--  drivers/base/power/opp/opp.h                                          | 143
-rw-r--r--  drivers/base/power/wakeup.c                                           |  16
-rw-r--r--  drivers/base/property.c                                               |  88
11 files changed, 631 insertions(+), 653 deletions(-)
diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index f94a6ccfe787..5998c53280f5 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -1,7 +1,7 @@
 obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
 obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
 obj-$(CONFIG_PM_TRACE_RTC) += trace.o
-obj-$(CONFIG_PM_OPP) += opp.o
+obj-$(CONFIG_PM_OPP) += opp/
 obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o
 obj-$(CONFIG_HAVE_CLK) += clock_ops.o
 
diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index 652b5a367c1f..fd0973b922a7 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -17,7 +17,7 @@
 #include <linux/err.h>
 #include <linux/pm_runtime.h>
 
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_CLK
 
 enum pce_status {
 	PCE_STATUS_NONE = 0,
@@ -404,7 +404,7 @@ int pm_clk_runtime_resume(struct device *dev)
 	return pm_generic_runtime_resume(dev);
 }
 
-#else /* !CONFIG_PM */
+#else /* !CONFIG_PM_CLK */
 
 /**
  * enable_clock - Enable a device clock.
@@ -484,7 +484,7 @@ static int pm_clk_notify(struct notifier_block *nb,
 	return 0;
 }
 
-#endif /* !CONFIG_PM */
+#endif /* !CONFIG_PM_CLK */
 
 /**
  * pm_clk_add_notifier - Add bus type notifier for power management clocks.
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 16550c63d611..a7dfdf9f15ba 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -34,43 +34,9 @@
 	__ret; \
 })
 
-#define GENPD_DEV_TIMED_CALLBACK(genpd, type, callback, dev, field, name)	\
-({										\
-	ktime_t __start = ktime_get();						\
-	type __retval = GENPD_DEV_CALLBACK(genpd, type, callback, dev);		\
-	s64 __elapsed = ktime_to_ns(ktime_sub(ktime_get(), __start));		\
-	struct gpd_timing_data *__td = &dev_gpd_data(dev)->td;			\
-	if (!__retval && __elapsed > __td->field) {				\
-		__td->field = __elapsed;					\
-		dev_dbg(dev, name " latency exceeded, new value %lld ns\n",	\
-			__elapsed);						\
-		genpd->max_off_time_changed = true;				\
-		__td->constraint_changed = true;				\
-	}									\
-	__retval;								\
-})
-
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
-static struct generic_pm_domain *pm_genpd_lookup_name(const char *domain_name)
-{
-	struct generic_pm_domain *genpd = NULL, *gpd;
-
-	if (IS_ERR_OR_NULL(domain_name))
-		return NULL;
-
-	mutex_lock(&gpd_list_lock);
-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-		if (!strcmp(gpd->name, domain_name)) {
-			genpd = gpd;
-			break;
-		}
-	}
-	mutex_unlock(&gpd_list_lock);
-	return genpd;
-}
-
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -110,18 +76,12 @@ static struct generic_pm_domain *dev_to_genpd(struct device *dev)
 
 static int genpd_stop_dev(struct generic_pm_domain *genpd, struct device *dev)
 {
-	return GENPD_DEV_TIMED_CALLBACK(genpd, int, stop, dev,
-					stop_latency_ns, "stop");
+	return GENPD_DEV_CALLBACK(genpd, int, stop, dev);
 }
 
-static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev,
-			   bool timed)
+static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev)
 {
-	if (!timed)
-		return GENPD_DEV_CALLBACK(genpd, int, start, dev);
-
-	return GENPD_DEV_TIMED_CALLBACK(genpd, int, start, dev,
-					start_latency_ns, "start");
+	return GENPD_DEV_CALLBACK(genpd, int, start, dev);
 }
 
 static bool genpd_sd_counter_dec(struct generic_pm_domain *genpd)
@@ -140,19 +100,6 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
 	smp_mb__after_atomic();
 }
 
-static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
-{
-	s64 usecs64;
-
-	if (!genpd->cpuidle_data)
-		return;
-
-	usecs64 = genpd->power_on_latency_ns;
-	do_div(usecs64, NSEC_PER_USEC);
-	usecs64 += genpd->cpuidle_data->saved_exit_latency;
-	genpd->cpuidle_data->idle_state->exit_latency = usecs64;
-}
-
 static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	ktime_t time_start;
@@ -176,7 +123,6 @@ static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 
 	genpd->power_on_latency_ns = elapsed_ns;
 	genpd->max_off_time_changed = true;
-	genpd_recalc_cpu_exit_latency(genpd);
 	pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
 		 genpd->name, "on", elapsed_ns);
 
@@ -213,10 +159,10 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
 }
 
 /**
- * genpd_queue_power_off_work - Queue up the execution of pm_genpd_poweroff().
+ * genpd_queue_power_off_work - Queue up the execution of genpd_poweroff().
  * @genpd: PM domait to power off.
  *
- * Queue up the execution of pm_genpd_poweroff() unless it's already been done
+ * Queue up the execution of genpd_poweroff() unless it's already been done
  * before.
  */
 static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
@@ -224,14 +170,16 @@ static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
 	queue_work(pm_wq, &genpd->power_off_work);
 }
 
+static int genpd_poweron(struct generic_pm_domain *genpd);
+
 /**
- * __pm_genpd_poweron - Restore power to a given PM domain and its masters.
+ * __genpd_poweron - Restore power to a given PM domain and its masters.
  * @genpd: PM domain to power up.
  *
  * Restore power to @genpd and all of its masters so that it is possible to
  * resume a device belonging to it.
  */
-static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
+static int __genpd_poweron(struct generic_pm_domain *genpd)
 {
 	struct gpd_link *link;
 	int ret = 0;
@@ -240,13 +188,6 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
 	    || (genpd->prepared_count > 0 && genpd->suspend_power_off))
 		return 0;
 
-	if (genpd->cpuidle_data) {
-		cpuidle_pause_and_lock();
-		genpd->cpuidle_data->idle_state->disabled = true;
-		cpuidle_resume_and_unlock();
-		goto out;
-	}
-
 	/*
 	 * The list is guaranteed not to change while the loop below is being
 	 * executed, unless one of the masters' .power_on() callbacks fiddles
@@ -255,7 +196,7 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
 		genpd_sd_counter_inc(link->master);
 
-		ret = pm_genpd_poweron(link->master);
+		ret = genpd_poweron(link->master);
 		if (ret) {
 			genpd_sd_counter_dec(link->master);
 			goto err;
@@ -266,7 +207,6 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
 	if (ret)
 		goto err;
 
- out:
 	genpd->status = GPD_STATE_ACTIVE;
 	return 0;
 
@@ -282,46 +222,28 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
 }
 
 /**
- * pm_genpd_poweron - Restore power to a given PM domain and its masters.
+ * genpd_poweron - Restore power to a given PM domain and its masters.
  * @genpd: PM domain to power up.
  */
-int pm_genpd_poweron(struct generic_pm_domain *genpd)
+static int genpd_poweron(struct generic_pm_domain *genpd)
 {
 	int ret;
 
 	mutex_lock(&genpd->lock);
-	ret = __pm_genpd_poweron(genpd);
+	ret = __genpd_poweron(genpd);
 	mutex_unlock(&genpd->lock);
 	return ret;
 }
 
-/**
- * pm_genpd_name_poweron - Restore power to a given PM domain and its masters.
- * @domain_name: Name of the PM domain to power up.
- */
-int pm_genpd_name_poweron(const char *domain_name)
-{
-	struct generic_pm_domain *genpd;
-
-	genpd = pm_genpd_lookup_name(domain_name);
-	return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
-}
-
 static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
 {
-	return GENPD_DEV_TIMED_CALLBACK(genpd, int, save_state, dev,
-					save_state_latency_ns, "state save");
+	return GENPD_DEV_CALLBACK(genpd, int, save_state, dev);
 }
 
 static int genpd_restore_dev(struct generic_pm_domain *genpd,
-			     struct device *dev, bool timed)
+			     struct device *dev)
 {
-	if (!timed)
-		return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
-
-	return GENPD_DEV_TIMED_CALLBACK(genpd, int, restore_state, dev,
-					restore_state_latency_ns,
-					"state restore");
+	return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
 }
 
 static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
@@ -365,13 +287,14 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 }
 
 /**
- * pm_genpd_poweroff - Remove power from a given PM domain.
+ * genpd_poweroff - Remove power from a given PM domain.
  * @genpd: PM domain to power down.
+ * @is_async: PM domain is powered down from a scheduled work
  *
  * If all of the @genpd's devices have been suspended and all of its subdomains
  * have been powered down, remove power from @genpd.
  */
-static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
+static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 {
 	struct pm_domain_data *pdd;
 	struct gpd_link *link;
@@ -403,7 +326,7 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
 			not_suspended++;
 	}
 
-	if (not_suspended > genpd->in_progress)
+	if (not_suspended > 1 || (not_suspended == 1 && is_async))
 		return -EBUSY;
 
 	if (genpd->gov && genpd->gov->power_down_ok) {
@@ -411,21 +334,6 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
 		return -EAGAIN;
 	}
 
-	if (genpd->cpuidle_data) {
-		/*
-		 * If cpuidle_data is set, cpuidle should turn the domain off
-		 * when the CPU in it is idle.  In that case we don't decrement
-		 * the subdomain counts of the master domains, so that power is
-		 * not removed from the current domain prematurely as a result
-		 * of cutting off the masters' power.
-		 */
-		genpd->status = GPD_STATE_POWER_OFF;
-		cpuidle_pause_and_lock();
-		genpd->cpuidle_data->idle_state->disabled = false;
-		cpuidle_resume_and_unlock();
-		return 0;
-	}
-
 	if (genpd->power_off) {
 		int ret;
 
@@ -434,10 +342,10 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
 
 	/*
 	 * If sd_count > 0 at this point, one of the subdomains hasn't
-	 * managed to call pm_genpd_poweron() for the master yet after
-	 * incrementing it.  In that case pm_genpd_poweron() will wait
+	 * managed to call genpd_poweron() for the master yet after
+	 * incrementing it.  In that case genpd_poweron() will wait
 	 * for us to drop the lock, so we can call .power_off() and let
-	 * the pm_genpd_poweron() restore power for us (this shouldn't
+	 * the genpd_poweron() restore power for us (this shouldn't
 	 * happen very often).
 	 */
 	ret = genpd_power_off(genpd, true);
@@ -466,7 +374,7 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
 	mutex_lock(&genpd->lock);
-	pm_genpd_poweroff(genpd);
+	genpd_poweroff(genpd, true);
 	mutex_unlock(&genpd->lock);
 }
 
@@ -482,6 +390,9 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
 	bool (*stop_ok)(struct device *__dev);
+	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
+	ktime_t time_start;
+	s64 elapsed_ns;
 	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
@@ -494,16 +405,29 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	if (stop_ok && !stop_ok(dev))
 		return -EBUSY;
 
+	/* Measure suspend latency. */
+	time_start = ktime_get();
+
 	ret = genpd_save_dev(genpd, dev);
 	if (ret)
 		return ret;
 
 	ret = genpd_stop_dev(genpd, dev);
 	if (ret) {
-		genpd_restore_dev(genpd, dev, true);
+		genpd_restore_dev(genpd, dev);
 		return ret;
 	}
 
+	/* Update suspend latency value if the measured time exceeds it. */
+	elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
+	if (elapsed_ns > td->suspend_latency_ns) {
+		td->suspend_latency_ns = elapsed_ns;
+		dev_dbg(dev, "suspend latency exceeded, %lld ns\n",
+			elapsed_ns);
+		genpd->max_off_time_changed = true;
+		td->constraint_changed = true;
+	}
+
 	/*
 	 * If power.irq_safe is set, this routine will be run with interrupts
 	 * off, so it can't use mutexes.
@@ -512,9 +436,7 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 		return 0;
 
 	mutex_lock(&genpd->lock);
-	genpd->in_progress++;
-	pm_genpd_poweroff(genpd);
-	genpd->in_progress--;
+	genpd_poweroff(genpd, false);
 	mutex_unlock(&genpd->lock);
 
 	return 0;
@@ -531,6 +453,9 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 static int pm_genpd_runtime_resume(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
+	ktime_t time_start;
+	s64 elapsed_ns;
 	int ret;
 	bool timed = true;
 
@@ -547,15 +472,31 @@ static int pm_genpd_runtime_resume(struct device *dev)
 	}
 
 	mutex_lock(&genpd->lock);
-	ret = __pm_genpd_poweron(genpd);
+	ret = __genpd_poweron(genpd);
 	mutex_unlock(&genpd->lock);
 
 	if (ret)
 		return ret;
 
  out:
-	genpd_start_dev(genpd, dev, timed);
-	genpd_restore_dev(genpd, dev, timed);
+	/* Measure resume latency. */
+	if (timed)
+		time_start = ktime_get();
+
+	genpd_start_dev(genpd, dev);
+	genpd_restore_dev(genpd, dev);
+
+	/* Update resume latency value if the measured time exceeds it. */
+	if (timed) {
+		elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
+		if (elapsed_ns > td->resume_latency_ns) {
+			td->resume_latency_ns = elapsed_ns;
+			dev_dbg(dev, "resume latency exceeded, %lld ns\n",
+				elapsed_ns);
+			genpd->max_off_time_changed = true;
+			td->constraint_changed = true;
+		}
+	}
 
 	return 0;
 }
@@ -569,15 +510,15 @@ static int __init pd_ignore_unused_setup(char *__unused)
 __setup("pd_ignore_unused", pd_ignore_unused_setup);
 
 /**
- * pm_genpd_poweroff_unused - Power off all PM domains with no devices in use.
+ * genpd_poweroff_unused - Power off all PM domains with no devices in use.
  */
-void pm_genpd_poweroff_unused(void)
+static int __init genpd_poweroff_unused(void)
 {
 	struct generic_pm_domain *genpd;
 
 	if (pd_ignore_unused) {
 		pr_warn("genpd: Not disabling unused power domains\n");
-		return;
+		return 0;
 	}
 
 	mutex_lock(&gpd_list_lock);
@@ -586,11 +527,7 @@ void pm_genpd_poweroff_unused(void)
 		genpd_queue_power_off_work(genpd);
 
 	mutex_unlock(&gpd_list_lock);
-}
 
-static int __init genpd_poweroff_unused(void)
-{
-	pm_genpd_poweroff_unused();
 	return 0;
 }
 late_initcall(genpd_poweroff_unused);
@@ -764,7 +701,7 @@ static int pm_genpd_prepare(struct device *dev)
 
 	/*
 	 * The PM domain must be in the GPD_STATE_ACTIVE state at this point,
-	 * so pm_genpd_poweron() will return immediately, but if the device
+	 * so genpd_poweron() will return immediately, but if the device
 	 * is suspended (e.g. it's been stopped by genpd_stop_dev()), we need
 	 * to make it operational.
 	 */
@@ -890,7 +827,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
 	pm_genpd_sync_poweron(genpd, true);
 	genpd->suspended_count--;
 
-	return genpd_start_dev(genpd, dev, true);
+	return genpd_start_dev(genpd, dev);
 }
 
 /**
@@ -1018,7 +955,8 @@ static int pm_genpd_thaw_noirq(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_start_dev(genpd, dev, true);
+	return genpd->suspend_power_off ?
+		0 : genpd_start_dev(genpd, dev);
 }
 
 /**
@@ -1112,7 +1050,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
 
 	pm_genpd_sync_poweron(genpd, true);
 
-	return genpd_start_dev(genpd, dev, true);
+	return genpd_start_dev(genpd, dev);
 }
 
 /**
@@ -1317,18 +1255,6 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 }
 
 /**
- * __pm_genpd_name_add_device - Find I/O PM domain and add a device to it.
- * @domain_name: Name of the PM domain to add the device to.
- * @dev: Device to be added.
- * @td: Set of PM QoS timing parameters to attach to the device.
- */
-int __pm_genpd_name_add_device(const char *domain_name, struct device *dev,
-			       struct gpd_timing_data *td)
-{
-	return __pm_genpd_add_device(pm_genpd_lookup_name(domain_name), dev, td);
-}
-
-/**
  * pm_genpd_remove_device - Remove a device from an I/O PM domain.
  * @genpd: PM domain to remove the device from.
  * @dev: Device to be removed.
@@ -1429,35 +1355,6 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 }
 
 /**
- * pm_genpd_add_subdomain_names - Add a subdomain to an I/O PM domain.
- * @master_name: Name of the master PM domain to add the subdomain to.
- * @subdomain_name: Name of the subdomain to be added.
- */
-int pm_genpd_add_subdomain_names(const char *master_name,
-				 const char *subdomain_name)
-{
-	struct generic_pm_domain *master = NULL, *subdomain = NULL, *gpd;
-
-	if (IS_ERR_OR_NULL(master_name) || IS_ERR_OR_NULL(subdomain_name))
-		return -EINVAL;
-
-	mutex_lock(&gpd_list_lock);
-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-		if (!master && !strcmp(gpd->name, master_name))
-			master = gpd;
-
-		if (!subdomain && !strcmp(gpd->name, subdomain_name))
-			subdomain = gpd;
-
-		if (master && subdomain)
-			break;
-	}
-	mutex_unlock(&gpd_list_lock);
-
-	return pm_genpd_add_subdomain(master, subdomain);
-}
-
-/**
  * pm_genpd_remove_subdomain - Remove a subdomain from an I/O PM domain.
  * @genpd: Master PM domain to remove the subdomain from.
  * @subdomain: Subdomain to be removed.
@@ -1504,124 +1401,6 @@ out:
 	return ret;
 }

-/**
- * pm_genpd_attach_cpuidle - Connect the given PM domain with cpuidle.
- * @genpd: PM domain to be connected with cpuidle.
- * @state: cpuidle state this domain can disable/enable.
- *
- * Make a PM domain behave as though it contained a CPU core, that is, instead
- * of calling its power down routine it will enable the given cpuidle state so
- * that the cpuidle subsystem can power it down (if possible and desirable).
- */
-int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
-{
-	struct cpuidle_driver *cpuidle_drv;
-	struct gpd_cpuidle_data *cpuidle_data;
-	struct cpuidle_state *idle_state;
-	int ret = 0;
-
-	if (IS_ERR_OR_NULL(genpd) || state < 0)
-		return -EINVAL;
-
-	mutex_lock(&genpd->lock);
-
-	if (genpd->cpuidle_data) {
-		ret = -EEXIST;
-		goto out;
-	}
-	cpuidle_data = kzalloc(sizeof(*cpuidle_data), GFP_KERNEL);
-	if (!cpuidle_data) {
-		ret = -ENOMEM;
-		goto out;
-	}
-	cpuidle_drv = cpuidle_driver_ref();
-	if (!cpuidle_drv) {
-		ret = -ENODEV;
-		goto err_drv;
-	}
-	if (cpuidle_drv->state_count <= state) {
-		ret = -EINVAL;
-		goto err;
-	}
-	idle_state = &cpuidle_drv->states[state];
-	if (!idle_state->disabled) {
-		ret = -EAGAIN;
-		goto err;
-	}
-	cpuidle_data->idle_state = idle_state;
-	cpuidle_data->saved_exit_latency = idle_state->exit_latency;
-	genpd->cpuidle_data = cpuidle_data;
-	genpd_recalc_cpu_exit_latency(genpd);
-
- out:
-	mutex_unlock(&genpd->lock);
-	return ret;
-
- err:
-	cpuidle_driver_unref();
-
- err_drv:
-	kfree(cpuidle_data);
-	goto out;
-}
-
-/**
- * pm_genpd_name_attach_cpuidle - Find PM domain and connect cpuidle to it.
- * @name: Name of the domain to connect to cpuidle.
- * @state: cpuidle state this domain can manipulate.
- */
-int pm_genpd_name_attach_cpuidle(const char *name, int state)
-{
-	return pm_genpd_attach_cpuidle(pm_genpd_lookup_name(name), state);
-}
-
-/**
- * pm_genpd_detach_cpuidle - Remove the cpuidle connection from a PM domain.
- * @genpd: PM domain to remove the cpuidle connection from.
- *
- * Remove the cpuidle connection set up by pm_genpd_attach_cpuidle() from the
- * given PM domain.
- */
-int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
-{
-	struct gpd_cpuidle_data *cpuidle_data;
-	struct cpuidle_state *idle_state;
-	int ret = 0;
-
-	if (IS_ERR_OR_NULL(genpd))
-		return -EINVAL;
-
-	mutex_lock(&genpd->lock);
-
-	cpuidle_data = genpd->cpuidle_data;
-	if (!cpuidle_data) {
-		ret = -ENODEV;
-		goto out;
-	}
-	idle_state = cpuidle_data->idle_state;
-	if (!idle_state->disabled) {
-		ret = -EAGAIN;
-		goto out;
-	}
-	idle_state->exit_latency = cpuidle_data->saved_exit_latency;
-	cpuidle_driver_unref();
-	genpd->cpuidle_data = NULL;
-	kfree(cpuidle_data);
-
- out:
-	mutex_unlock(&genpd->lock);
-	return ret;
-}
-
-/**
- * pm_genpd_name_detach_cpuidle - Find PM domain and disconnect cpuidle from it.
- * @name: Name of the domain to disconnect cpuidle from.
- */
-int pm_genpd_name_detach_cpuidle(const char *name)
-{
-	return pm_genpd_detach_cpuidle(pm_genpd_lookup_name(name));
-}
-
 /* Default device callbacks for generic PM domains. */

 /**
@@ -1688,7 +1467,6 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	mutex_init(&genpd->lock);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
-	genpd->in_progress = 0;
 	atomic_set(&genpd->sd_count, 0);
 	genpd->status = is_off ? GPD_STATE_POWER_OFF : GPD_STATE_ACTIVE;
 	genpd->device_count = 0;
@@ -2023,7 +1801,7 @@ int genpd_dev_pm_attach(struct device *dev)

 	dev->pm_domain->detach = genpd_dev_pm_detach;
 	dev->pm_domain->sync = genpd_dev_pm_sync;
-	ret = pm_genpd_poweron(pd);
+	ret = genpd_poweron(pd);

 out:
 	return ret ? -EPROBE_DEFER : 0;
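The removed pm_genpd_attach_cpuidle()/pm_genpd_detach_cpuidle() pair above encodes a small state machine: attach succeeds only when no state is already attached and the target cpuidle state is currently disabled, and it saves the state's exit latency so detach can restore it. A minimal userspace model of that decision logic (all `toy_*` names are illustrative, not kernel API):

```c
#include <stddef.h>

/* Toy model of the removed genpd<->cpuidle coupling (names are illustrative). */
struct toy_idle_state { int disabled; long exit_latency; };

struct toy_genpd {
	struct toy_idle_state *attached;	/* NULL when nothing attached */
	long saved_exit_latency;
};

/* Mirrors the error cases of pm_genpd_attach_cpuidle(): -EEXIST (-17) if a
 * state is already attached, -EAGAIN (-11) if the state is not disabled. */
int toy_attach(struct toy_genpd *pd, struct toy_idle_state *st)
{
	if (pd->attached)
		return -17;			/* -EEXIST */
	if (!st->disabled)
		return -11;			/* -EAGAIN */
	pd->saved_exit_latency = st->exit_latency;
	pd->attached = st;
	return 0;
}

/* Mirrors pm_genpd_detach_cpuidle(): restores the saved exit latency. */
int toy_detach(struct toy_genpd *pd)
{
	if (!pd->attached)
		return -19;			/* -ENODEV */
	if (!pd->attached->disabled)
		return -11;			/* -EAGAIN */
	pd->attached->exit_latency = pd->saved_exit_latency;
	pd->attached = NULL;
	return 0;
}
```

The real functions additionally take genpd->lock and hold a reference on the cpuidle driver; the sketch keeps only the ordering of checks.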
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index 85e17bacc834..e60dd12e23aa 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -77,10 +77,8 @@ static bool default_stop_ok(struct device *dev)
 					     dev_update_qos_constraint);

 	if (constraint_ns > 0) {
-		constraint_ns -= td->save_state_latency_ns +
-				td->stop_latency_ns +
-				td->start_latency_ns +
-				td->restore_state_latency_ns;
+		constraint_ns -= td->suspend_latency_ns +
+				td->resume_latency_ns;
 		if (constraint_ns == 0)
 			return false;
 	}
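The hunk above collapses four timing fields (start/stop/save-state/restore-state) into two: after the change only one suspend plus one resume latency is charged against the device's QoS constraint. A standalone sketch of just that budget arithmetic (not the full default_stop_ok() governor, which also propagates an effective constraint across the domain; it assumes constraint_ns > 0, since the non-positive cases take other paths in the real code):

```c
/* Returns nonzero when stopping the device still fits in its QoS budget.
 * An exactly exhausted budget (remainder == 0) fails, matching the
 * `if (constraint_ns == 0) return false;` check in the patched code. */
int toy_stop_ok(long long constraint_ns,
		long long suspend_latency_ns, long long resume_latency_ns)
{
	constraint_ns -= suspend_latency_ns + resume_latency_ns;
	return constraint_ns > 0;
}
```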
diff --git a/drivers/base/power/generic_ops.c b/drivers/base/power/generic_ops.c
index 96a92db83cad..07c3c4a9522d 100644
--- a/drivers/base/power/generic_ops.c
+++ b/drivers/base/power/generic_ops.c
@@ -9,6 +9,7 @@
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
 #include <linux/export.h>
+#include <linux/suspend.h>

 #ifdef CONFIG_PM
 /**
@@ -296,11 +297,27 @@ void pm_generic_complete(struct device *dev)

 	if (drv && drv->pm && drv->pm->complete)
 		drv->pm->complete(dev);
+}
+
+/**
+ * pm_complete_with_resume_check - Complete a device power transition.
+ * @dev: Device to handle.
+ *
+ * Complete a device power transition during a system-wide power transition and
+ * optionally schedule a runtime resume of the device if the system resume in
+ * progress has been initiated by the platform firmware and the device had its
+ * power.direct_complete flag set.
+ */
+void pm_complete_with_resume_check(struct device *dev)
+{
+	pm_generic_complete(dev);

 	/*
-	 * Let runtime PM try to suspend devices that haven't been in use before
-	 * going into the system-wide sleep state we're resuming from.
+	 * If the device had been runtime-suspended before the system went into
+	 * the sleep state it is going out of and it has never been resumed till
+	 * now, resume it in case the firmware powered it up.
 	 */
-	pm_request_idle(dev);
+	if (dev->power.direct_complete && pm_resume_via_firmware())
+		pm_request_resume(dev);
 }
+EXPORT_SYMBOL_GPL(pm_complete_with_resume_check);
 #endif /* CONFIG_PM_SLEEP */
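The new pm_complete_with_resume_check() above queues a runtime resume only when two conditions hold at once: the device stayed suspended via the direct_complete optimization, and the system resume was driven by platform firmware (the pm_resume_via_firmware() check that motivates the new <linux/suspend.h> include). A plain-bool sketch of that predicate:

```c
/* Decision behind pm_complete_with_resume_check(): resume only a device that
 * was left suspended through direct_complete during a firmware-driven resume,
 * since only then may the firmware have powered it back up behind our back. */
int toy_needs_resume(int direct_complete, int resume_via_firmware)
{
	return direct_complete && resume_via_firmware;
}
```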
diff --git a/drivers/base/power/opp/Makefile b/drivers/base/power/opp/Makefile
new file mode 100644
index 000000000000..33c1e18c41a4
--- /dev/null
+++ b/drivers/base/power/opp/Makefile
@@ -0,0 +1,2 @@
+ccflags-$(CONFIG_DEBUG_DRIVER)	:= -DDEBUG
+obj-y				+= core.o cpu.o
diff --git a/drivers/base/power/opp.c b/drivers/base/power/opp/core.c
index 7ae7cd990fbf..d5c1149ff123 100644
--- a/drivers/base/power/opp.c
+++ b/drivers/base/power/opp/core.c
@@ -11,131 +11,14 @@
  * published by the Free Software Foundation.
  */

-#include <linux/cpu.h>
-#include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/slab.h>
 #include <linux/device.h>
-#include <linux/list.h>
-#include <linux/rculist.h>
-#include <linux/rcupdate.h>
-#include <linux/pm_opp.h>
 #include <linux/of.h>
 #include <linux/export.h>

-/*
- * Internal data structure organization with the OPP layer library is as
- * follows:
- * dev_opp_list (root)
- *	|- device 1 (represents voltage domain 1)
- *	|	|- opp 1 (availability, freq, voltage)
- *	|	|- opp 2 ..
- *	...	...
- *	|	`- opp n ..
- *	|- device 2 (represents the next voltage domain)
- *	...
- *	`- device m (represents mth voltage domain)
- * device 1, 2.. are represented by dev_opp structure while each opp
- * is represented by the opp structure.
- */
-
-/**
- * struct dev_pm_opp - Generic OPP description structure
- * @node:	opp list node. The nodes are maintained throughout the lifetime
- *		of boot. It is expected only an optimal set of OPPs are
- *		added to the library by the SoC framework.
- *		RCU usage: opp list is traversed with RCU locks. node
- *		modification is possible realtime, hence the modifications
- *		are protected by the dev_opp_list_lock for integrity.
- *		IMPORTANT: the opp nodes should be maintained in increasing
- *		order.
- * @dynamic:	not-created from static DT entries.
- * @available:	true/false - marks if this OPP as available or not
- * @turbo:	true if turbo (boost) OPP
- * @rate:	Frequency in hertz
- * @u_volt:	Target voltage in microvolts corresponding to this OPP
- * @u_volt_min:	Minimum voltage in microvolts corresponding to this OPP
- * @u_volt_max:	Maximum voltage in microvolts corresponding to this OPP
- * @u_amp:	Maximum current drawn by the device in microamperes
- * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
- *		frequency from any other OPP's frequency.
- * @dev_opp:	points back to the device_opp struct this opp belongs to
- * @rcu_head:	RCU callback head used for deferred freeing
- * @np:		OPP's device node.
- *
- * This structure stores the OPP information for a given device.
- */
-struct dev_pm_opp {
-	struct list_head node;
-
-	bool available;
-	bool dynamic;
-	bool turbo;
-	unsigned long rate;
-
-	unsigned long u_volt;
-	unsigned long u_volt_min;
-	unsigned long u_volt_max;
-	unsigned long u_amp;
-	unsigned long clock_latency_ns;
-
-	struct device_opp *dev_opp;
-	struct rcu_head rcu_head;
-
-	struct device_node *np;
-};
-
-/**
- * struct device_list_opp - devices managed by 'struct device_opp'
- * @node:	list node
- * @dev:	device to which the struct object belongs
- * @rcu_head:	RCU callback head used for deferred freeing
- *
- * This is an internal data structure maintaining the list of devices that are
- * managed by 'struct device_opp'.
- */
-struct device_list_opp {
-	struct list_head node;
-	const struct device *dev;
-	struct rcu_head rcu_head;
-};
-
-/**
- * struct device_opp - Device opp structure
- * @node:	list node - contains the devices with OPPs that
- *		have been registered. Nodes once added are not modified in this
- *		list.
- *		RCU usage: nodes are not modified in the list of device_opp,
- *		however addition is possible and is secured by dev_opp_list_lock
- * @srcu_head:	notifier head to notify the OPP availability changes.
- * @rcu_head:	RCU callback head used for deferred freeing
- * @dev_list:	list of devices that share these OPPs
- * @opp_list:	list of opps
- * @np:		struct device_node pointer for opp's DT node.
- * @shared_opp: OPP is shared between multiple devices.
- *
- * This is an internal data structure maintaining the link to opps attached to
- * a device. This structure is not meant to be shared to users as it is
- * meant for book keeping and private to OPP library.
- *
- * Because the opp structures can be used from both rcu and srcu readers, we
- * need to wait for the grace period of both of them before freeing any
- * resources. And so we have used kfree_rcu() from within call_srcu() handlers.
- */
-struct device_opp {
-	struct list_head node;
-
-	struct srcu_notifier_head srcu_head;
-	struct rcu_head rcu_head;
-	struct list_head dev_list;
-	struct list_head opp_list;
-
-	struct device_node *np;
-	unsigned long clock_latency_ns_max;
-	bool shared_opp;
-	struct dev_pm_opp *suspend_opp;
-};
+#include "opp.h"

 /*
  * The root of the list of all devices. All device_opp structures branch off
@@ -200,7 +83,7 @@ static struct device_opp *_managed_opp(const struct device_node *np)
  * is a RCU protected pointer. This means that device_opp is valid as long
  * as we are under RCU lock.
  */
-static struct device_opp *_find_device_opp(struct device *dev)
+struct device_opp *_find_device_opp(struct device *dev)
 {
 	struct device_opp *dev_opp;

@@ -579,8 +462,8 @@ static void _remove_list_dev(struct device_list_opp *list_dev,
 		   _kfree_list_dev_rcu);
 }

-static struct device_list_opp *_add_list_dev(const struct device *dev,
-					     struct device_opp *dev_opp)
+struct device_list_opp *_add_list_dev(const struct device *dev,
+				      struct device_opp *dev_opp)
 {
 	struct device_list_opp *list_dev;

@@ -828,8 +711,8 @@ static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
  * The opp is made available by default and it can be controlled using
  * dev_pm_opp_enable/disable functions and may be removed by dev_pm_opp_remove.
  *
- * NOTE: "dynamic" parameter impacts OPPs added by the of_init_opp_table and
- * freed by of_free_opp_table.
+ * NOTE: "dynamic" parameter impacts OPPs added by the dev_pm_opp_of_add_table
+ * and freed by dev_pm_opp_of_remove_table.
  *
  * Locking: The internal device_opp and opp structures are RCU protected.
  * Hence this function internally uses RCU updater strategy with mutex locks
@@ -1220,7 +1103,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);

 #ifdef CONFIG_OF
 /**
- * of_free_opp_table() - Free OPP table entries created from static DT entries
+ * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
+ *				  entries
  * @dev: device pointer used to lookup device OPPs.
  *
  * Free OPPs created using static entries present in DT.
@@ -1231,7 +1115,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);
  * that this function is *NOT* called under RCU protection or in contexts where
  * mutex cannot be locked.
  */
-void of_free_opp_table(struct device *dev)
+void dev_pm_opp_of_remove_table(struct device *dev)
 {
 	struct device_opp *dev_opp;
 	struct dev_pm_opp *opp, *tmp;
@@ -1266,91 +1150,34 @@ void of_free_opp_table(struct device *dev)
 unlock:
 	mutex_unlock(&dev_opp_list_lock);
 }
-EXPORT_SYMBOL_GPL(of_free_opp_table);
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);

-void of_cpumask_free_opp_table(cpumask_var_t cpumask)
+/* Returns opp descriptor node for a device, caller must do of_node_put() */
+struct device_node *_of_get_opp_desc_node(struct device *dev)
 {
-	struct device *cpu_dev;
-	int cpu;
-
-	WARN_ON(cpumask_empty(cpumask));
-
-	for_each_cpu(cpu, cpumask) {
-		cpu_dev = get_cpu_device(cpu);
-		if (!cpu_dev) {
-			pr_err("%s: failed to get cpu%d device\n", __func__,
-			       cpu);
-			continue;
-		}
-
-		of_free_opp_table(cpu_dev);
-	}
-}
-EXPORT_SYMBOL_GPL(of_cpumask_free_opp_table);
-
-/* Returns opp descriptor node from its phandle. Caller must do of_node_put() */
-static struct device_node *
-_of_get_opp_desc_node_from_prop(struct device *dev, const struct property *prop)
-{
-	struct device_node *opp_np;
-
-	opp_np = of_find_node_by_phandle(be32_to_cpup(prop->value));
-	if (!opp_np) {
-		dev_err(dev, "%s: Prop: %s contains invalid opp desc phandle\n",
-			__func__, prop->name);
-		return ERR_PTR(-EINVAL);
-	}
-
-	return opp_np;
-}
-
-/* Returns opp descriptor node for a device. Caller must do of_node_put() */
-static struct device_node *_of_get_opp_desc_node(struct device *dev)
-{
-	const struct property *prop;
-
-	prop = of_find_property(dev->of_node, "operating-points-v2", NULL);
-	if (!prop)
-		return ERR_PTR(-ENODEV);
-	if (!prop->value)
-		return ERR_PTR(-ENODATA);
-
 	/*
 	 * TODO: Support for multiple OPP tables.
 	 *
 	 * There should be only ONE phandle present in "operating-points-v2"
 	 * property.
 	 */
-	if (prop->length != sizeof(__be32)) {
-		dev_err(dev, "%s: Invalid opp desc phandle\n", __func__);
-		return ERR_PTR(-EINVAL);
-	}

-	return _of_get_opp_desc_node_from_prop(dev, prop);
+	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
 }

 /* Initializes OPP tables based on new bindings */
-static int _of_init_opp_table_v2(struct device *dev,
-				 const struct property *prop)
+static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 {
-	struct device_node *opp_np, *np;
+	struct device_node *np;
 	struct device_opp *dev_opp;
 	int ret = 0, count = 0;

-	if (!prop->value)
-		return -ENODATA;
-
-	/* Get opp node */
-	opp_np = _of_get_opp_desc_node_from_prop(dev, prop);
-	if (IS_ERR(opp_np))
-		return PTR_ERR(opp_np);
-
 	dev_opp = _managed_opp(opp_np);
 	if (dev_opp) {
 		/* OPPs are already managed */
 		if (!_add_list_dev(dev, dev_opp))
 			ret = -ENOMEM;
-		goto put_opp_np;
+		return ret;
 	}

 	/* We have opp-list node now, iterate over it and add OPPs */
@@ -1366,10 +1193,8 @@ static int _of_init_opp_table_v2(struct device *dev,
 	}

 	/* There should be one of more OPP defined */
-	if (WARN_ON(!count)) {
-		ret = -ENOENT;
-		goto put_opp_np;
-	}
+	if (WARN_ON(!count))
+		return -ENOENT;

 	dev_opp = _find_device_opp(dev);
 	if (WARN_ON(IS_ERR(dev_opp))) {
@@ -1380,19 +1205,16 @@ static int _of_init_opp_table_v2(struct device *dev,
 	dev_opp->np = opp_np;
 	dev_opp->shared_opp = of_property_read_bool(opp_np, "opp-shared");

-	of_node_put(opp_np);
 	return 0;

 free_table:
-	of_free_opp_table(dev);
-put_opp_np:
-	of_node_put(opp_np);
+	dev_pm_opp_of_remove_table(dev);

 	return ret;
 }

 /* Initializes OPP tables based on old-deprecated bindings */
-static int _of_init_opp_table_v1(struct device *dev)
+static int _of_add_opp_table_v1(struct device *dev)
 {
 	const struct property *prop;
 	const __be32 *val;
@@ -1429,7 +1251,7 @@ static int _of_init_opp_table_v1(struct device *dev)
 }

 /**
- * of_init_opp_table() - Initialize opp table from device tree
+ * dev_pm_opp_of_add_table() - Initialize opp table from device tree
  * @dev: device pointer used to lookup device OPPs.
  *
  * Register the initial OPP table with the OPP library for given device.
@@ -1451,153 +1273,28 @@ static int _of_init_opp_table_v1(struct device *dev)
  * -ENODATA	when empty 'operating-points' property is found
  * -EINVAL	when invalid entries are found in opp-v2 table
  */
-int of_init_opp_table(struct device *dev)
+int dev_pm_opp_of_add_table(struct device *dev)
 {
-	const struct property *prop;
+	struct device_node *opp_np;
+	int ret;

 	/*
 	 * OPPs have two version of bindings now. The older one is deprecated,
 	 * try for the new binding first.
 	 */
-	prop = of_find_property(dev->of_node, "operating-points-v2", NULL);
-	if (!prop) {
+	opp_np = _of_get_opp_desc_node(dev);
+	if (!opp_np) {
 		/*
 		 * Try old-deprecated bindings for backward compatibility with
 		 * older dtbs.
 		 */
-		return _of_init_opp_table_v1(dev);
+		return _of_add_opp_table_v1(dev);
 	}

-	return _of_init_opp_table_v2(dev, prop);
-}
-EXPORT_SYMBOL_GPL(of_init_opp_table);
-
-int of_cpumask_init_opp_table(cpumask_var_t cpumask)
-{
-	struct device *cpu_dev;
-	int cpu, ret = 0;
-
-	WARN_ON(cpumask_empty(cpumask));
-
-	for_each_cpu(cpu, cpumask) {
-		cpu_dev = get_cpu_device(cpu);
-		if (!cpu_dev) {
-			pr_err("%s: failed to get cpu%d device\n", __func__,
-			       cpu);
-			continue;
-		}
-
-		ret = of_init_opp_table(cpu_dev);
-		if (ret) {
-			pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
-			       __func__, cpu, ret);
-
-			/* Free all other OPPs */
-			of_cpumask_free_opp_table(cpumask);
-			break;
-		}
-	}
-
-	return ret;
-}
-EXPORT_SYMBOL_GPL(of_cpumask_init_opp_table);
-
-/* Required only for V1 bindings, as v2 can manage it from DT itself */
-int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
-{
-	struct device_list_opp *list_dev;
-	struct device_opp *dev_opp;
-	struct device *dev;
-	int cpu, ret = 0;
-
-	rcu_read_lock();
-
-	dev_opp = _find_device_opp(cpu_dev);
-	if (IS_ERR(dev_opp)) {
-		ret = -EINVAL;
-		goto out_rcu_read_unlock;
-	}
-
-	for_each_cpu(cpu, cpumask) {
-		if (cpu == cpu_dev->id)
-			continue;
-
-		dev = get_cpu_device(cpu);
-		if (!dev) {
-			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
-				__func__, cpu);
-			continue;
-		}
-
-		list_dev = _add_list_dev(dev, dev_opp);
-		if (!list_dev) {
-			dev_err(dev, "%s: failed to add list-dev for cpu%d device\n",
-				__func__, cpu);
-			continue;
-		}
-	}
-out_rcu_read_unlock:
-	rcu_read_unlock();
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(set_cpus_sharing_opps);
-
-/*
- * Works only for OPP v2 bindings.
- *
- * cpumask should be already set to mask of cpu_dev->id.
- * Returns -ENOENT if operating-points-v2 bindings aren't supported.
- */
-int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
-{
-	struct device_node *np, *tmp_np;
-	struct device *tcpu_dev;
-	int cpu, ret = 0;
-
-	/* Get OPP descriptor node */
-	np = _of_get_opp_desc_node(cpu_dev);
-	if (IS_ERR(np)) {
-		dev_dbg(cpu_dev, "%s: Couldn't find opp node: %ld\n", __func__,
-			PTR_ERR(np));
-		return -ENOENT;
-	}
-
-	/* OPPs are shared ? */
-	if (!of_property_read_bool(np, "opp-shared"))
-		goto put_cpu_node;
-
-	for_each_possible_cpu(cpu) {
-		if (cpu == cpu_dev->id)
-			continue;
-
-		tcpu_dev = get_cpu_device(cpu);
-		if (!tcpu_dev) {
-			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
-				__func__, cpu);
-			ret = -ENODEV;
-			goto put_cpu_node;
-		}
-
-		/* Get OPP descriptor node */
-		tmp_np = _of_get_opp_desc_node(tcpu_dev);
-		if (IS_ERR(tmp_np)) {
-			dev_err(tcpu_dev, "%s: Couldn't find opp node: %ld\n",
-				__func__, PTR_ERR(tmp_np));
-			ret = PTR_ERR(tmp_np);
-			goto put_cpu_node;
-		}
-
-		/* CPUs are sharing opp node */
-		if (np == tmp_np)
-			cpumask_set_cpu(cpu, cpumask);
-
-		of_node_put(tmp_np);
-	}

-put_cpu_node:
-	of_node_put(np);
+	ret = _of_add_opp_table_v2(dev, opp_np);
+	of_node_put(opp_np);
+
 	return ret;
 }
-EXPORT_SYMBOL_GPL(of_get_cpus_sharing_opps);
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
 #endif
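The renamed dev_pm_opp_of_add_table() above keeps a fixed lookup order: the "operating-points-v2" phandle wins, and the deprecated v1 "operating-points" property is tried only when no v2 node exists. A sketch of that selection with hypothetical boolean flags standing in for the DT lookups:

```c
/* Which OPP binding dev_pm_opp_of_add_table() ends up parsing (sketch). */
enum toy_opp_binding { TOY_OPP_NONE, TOY_OPP_V1, TOY_OPP_V2 };

enum toy_opp_binding toy_pick_binding(int has_v2_node, int has_v1_prop)
{
	if (has_v2_node)
		return TOY_OPP_V2;	/* _of_add_opp_table_v2() path */
	if (has_v1_prop)
		return TOY_OPP_V1;	/* _of_add_opp_table_v1() fallback */
	return TOY_OPP_NONE;		/* the v1 parser reports an error */
}
```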
diff --git a/drivers/base/power/opp/cpu.c b/drivers/base/power/opp/cpu.c
new file mode 100644
index 000000000000..7654c5606307
--- /dev/null
+++ b/drivers/base/power/opp/cpu.c
@@ -0,0 +1,267 @@
+/*
+ * Generic OPP helper interface for CPU device
+ *
+ * Copyright (C) 2009-2014 Texas Instruments Incorporated.
+ *	Nishanth Menon
+ *	Romit Dasgupta
+ *	Kevin Hilman
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/cpu.h>
+#include <linux/cpufreq.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/export.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+
+#include "opp.h"
+
+#ifdef CONFIG_CPU_FREQ
+
+/**
+ * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device
+ * @dev:	device for which we do this operation
+ * @table:	Cpufreq table returned back to caller
+ *
+ * Generate a cpufreq table for a provided device- this assumes that the
+ * opp list is already initialized and ready for usage.
+ *
+ * This function allocates required memory for the cpufreq table. It is
+ * expected that the caller does the required maintenance such as freeing
+ * the table as required.
+ *
+ * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM
+ * if no memory available for the operation (table is not populated), returns 0
+ * if successful and table is populated.
+ *
+ * WARNING: It is important for the callers to ensure refreshing their copy of
+ * the table if any of the mentioned functions have been invoked in the interim.
+ *
+ * Locking: The internal device_opp and opp structures are RCU protected.
+ * Since we just use the regular accessor functions to access the internal data
+ * structures, we use RCU read lock inside this function. As a result, users of
+ * this function DONOT need to use explicit locks for invoking.
+ */
+int dev_pm_opp_init_cpufreq_table(struct device *dev,
+				  struct cpufreq_frequency_table **table)
+{
+	struct dev_pm_opp *opp;
+	struct cpufreq_frequency_table *freq_table = NULL;
+	int i, max_opps, ret = 0;
+	unsigned long rate;
+
+	rcu_read_lock();
+
+	max_opps = dev_pm_opp_get_opp_count(dev);
+	if (max_opps <= 0) {
+		ret = max_opps ? max_opps : -ENODATA;
+		goto out;
+	}
+
+	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
+	if (!freq_table) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0, rate = 0; i < max_opps; i++, rate++) {
+		/* find next rate */
+		opp = dev_pm_opp_find_freq_ceil(dev, &rate);
+		if (IS_ERR(opp)) {
+			ret = PTR_ERR(opp);
+			goto out;
+		}
+		freq_table[i].driver_data = i;
+		freq_table[i].frequency = rate / 1000;
+
+		/* Is Boost/turbo opp ? */
+		if (dev_pm_opp_is_turbo(opp))
+			freq_table[i].flags = CPUFREQ_BOOST_FREQ;
+	}
+
+	freq_table[i].driver_data = i;
+	freq_table[i].frequency = CPUFREQ_TABLE_END;
+
+	*table = &freq_table[0];
+
+out:
+	rcu_read_unlock();
+	if (ret)
+		kfree(freq_table);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table);
+
+/**
+ * dev_pm_opp_free_cpufreq_table() - free the cpufreq table
+ * @dev:	device for which we do this operation
+ * @table:	table to free
+ *
+ * Free up the table allocated by dev_pm_opp_init_cpufreq_table
+ */
+void dev_pm_opp_free_cpufreq_table(struct device *dev,
+				   struct cpufreq_frequency_table **table)
+{
+	if (!table)
+		return;
+
+	kfree(*table);
+	*table = NULL;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
+#endif	/* CONFIG_CPU_FREQ */
+
+/* Required only for V1 bindings, as v2 can manage it from DT itself */
+int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+{
+	struct device_list_opp *list_dev;
+	struct device_opp *dev_opp;
+	struct device *dev;
+	int cpu, ret = 0;
+
+	rcu_read_lock();
+
+	dev_opp = _find_device_opp(cpu_dev);
+	if (IS_ERR(dev_opp)) {
+		ret = -EINVAL;
+		goto out_rcu_read_unlock;
+	}
+
+	for_each_cpu(cpu, cpumask) {
+		if (cpu == cpu_dev->id)
+			continue;
+
+		dev = get_cpu_device(cpu);
+		if (!dev) {
+			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
+				__func__, cpu);
+			continue;
+		}
+
+		list_dev = _add_list_dev(dev, dev_opp);
+		if (!list_dev) {
+			dev_err(dev, "%s: failed to add list-dev for cpu%d device\n",
+				__func__, cpu);
+			continue;
+		}
+	}
+out_rcu_read_unlock:
+	rcu_read_unlock();
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
+
+#ifdef CONFIG_OF
+void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask)
+{
+	struct device *cpu_dev;
+	int cpu;
+
+	WARN_ON(cpumask_empty(cpumask));
+
+	for_each_cpu(cpu, cpumask) {
+		cpu_dev = get_cpu_device(cpu);
+		if (!cpu_dev) {
+			pr_err("%s: failed to get cpu%d device\n", __func__,
+			       cpu);
+			continue;
+		}
+
+		dev_pm_opp_of_remove_table(cpu_dev);
+	}
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
+
+int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask)
+{
+	struct device *cpu_dev;
+	int cpu, ret = 0;
+
+	WARN_ON(cpumask_empty(cpumask));
+
+	for_each_cpu(cpu, cpumask) {
+		cpu_dev = get_cpu_device(cpu);
+		if (!cpu_dev) {
+			pr_err("%s: failed to get cpu%d device\n", __func__,
+			       cpu);
+			continue;
+		}
+
+		ret = dev_pm_opp_of_add_table(cpu_dev);
+		if (ret) {
+			pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
+			       __func__, cpu, ret);
+
+			/* Free all other OPPs */
+			dev_pm_opp_of_cpumask_remove_table(cpumask);
+			break;
+		}
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
+
+/*
+ * Works only for OPP v2 bindings.
+ *
+ * cpumask should be already set to mask of cpu_dev->id.
+ * Returns -ENOENT if operating-points-v2 bindings aren't supported.
+ */
+int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+{
+	struct device_node *np, *tmp_np;
+	struct device *tcpu_dev;
+	int cpu, ret = 0;
+
+	/* Get OPP descriptor node */
+	np = _of_get_opp_desc_node(cpu_dev);
+	if (!np) {
+		dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
+		return -ENOENT;
+	}
+
+	/* OPPs are shared ? */
+	if (!of_property_read_bool(np, "opp-shared"))
+		goto put_cpu_node;
+
+	for_each_possible_cpu(cpu) {
+		if (cpu == cpu_dev->id)
+			continue;
+
+		tcpu_dev = get_cpu_device(cpu);
+		if (!tcpu_dev) {
+			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
+				__func__, cpu);
+			ret = -ENODEV;
+			goto put_cpu_node;
+		}
+
+		/* Get OPP descriptor node */
+		tmp_np = _of_get_opp_desc_node(tcpu_dev);
+		if (!tmp_np) {
+			dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
+				__func__);
+			ret = -ENOENT;
+			goto put_cpu_node;
+		}
+
+		/* CPUs are sharing opp node */
+		if (np == tmp_np)
+			cpumask_set_cpu(cpu, cpumask);
+
+		of_node_put(tmp_np);
+	}
+
+put_cpu_node:
+	of_node_put(np);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus);
+#endif
diff --git a/drivers/base/power/opp/opp.h b/drivers/base/power/opp/opp.h
new file mode 100644
index 000000000000..dcb38f78dae4
--- /dev/null
+++ b/drivers/base/power/opp/opp.h
@@ -0,0 +1,143 @@
+/*
+ * Generic OPP Interface
+ *
+ * Copyright (C) 2009-2010 Texas Instruments Incorporated.
+ *	Nishanth Menon
+ *	Romit Dasgupta
+ *	Kevin Hilman
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __DRIVER_OPP_H__
+#define __DRIVER_OPP_H__
+
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/pm_opp.h>
+#include <linux/rculist.h>
+#include <linux/rcupdate.h>
+
+/*
+ * Internal data structure organization with the OPP layer library is as
+ * follows:
+ * dev_opp_list (root)
+ *	|- device 1 (represents voltage domain 1)
+ *	|	|- opp 1 (availability, freq, voltage)
+ *	|	|- opp 2 ..
+ *	...	...
+ *	|	`- opp n ..
+ *	|- device 2 (represents the next voltage domain)
+ *	...
+ *	`- device m (represents mth voltage domain)
+ * device 1, 2.. are represented by dev_opp structure while each opp
+ * is represented by the opp structure.
+ */
+
+/**
+ * struct dev_pm_opp - Generic OPP description structure
+ * @node:	opp list node. The nodes are maintained throughout the lifetime
+ *		of boot. It is expected only an optimal set of OPPs are
+ *		added to the library by the SoC framework.
+ *		RCU usage: opp list is traversed with RCU locks. node
+ *		modification is possible realtime, hence the modifications
+ *		are protected by the dev_opp_list_lock for integrity.
+ *		IMPORTANT: the opp nodes should be maintained in increasing
+ *		order.
+ * @dynamic:	not-created from static DT entries.
+ * @available:	true/false - marks if this OPP as available or not
+ * @turbo:	true if turbo (boost) OPP
+ * @rate:	Frequency in hertz
+ * @u_volt:	Target voltage in microvolts corresponding to this OPP
+ * @u_volt_min:	Minimum voltage in microvolts corresponding to this OPP
+ * @u_volt_max:	Maximum voltage in microvolts corresponding to this OPP
+ * @u_amp:	Maximum current drawn by the device in microamperes
+ * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
+ *		frequency from any other OPP's frequency.
+ * @dev_opp:	points back to the device_opp struct this opp belongs to
+ * @rcu_head:	RCU callback head used for deferred freeing
+ * @np:		OPP's device node.
+ *
+ * This structure stores the OPP information for a given device.
+ */
+struct dev_pm_opp {
+	struct list_head node;
+
+	bool available;
+	bool dynamic;
+	bool turbo;
+	unsigned long rate;
+
+	unsigned long u_volt;
+	unsigned long u_volt_min;
+	unsigned long u_volt_max;
+	unsigned long u_amp;
+	unsigned long clock_latency_ns;
+
+	struct device_opp *dev_opp;
+	struct rcu_head rcu_head;
+
+	struct device_node *np;
+};
+
+/**
+ * struct device_list_opp - devices managed by 'struct device_opp'
+ * @node:	list node
+ * @dev:	device to which the struct object belongs
+ * @rcu_head:	RCU callback head used for deferred freeing
+ *
+ * This is an internal data structure maintaining the list of devices that are
+ * managed by 'struct device_opp'.
+ */
+struct device_list_opp {
+	struct list_head node;
+	const struct device *dev;
+	struct rcu_head rcu_head;
+};
+
+/**
+ * struct device_opp - Device opp structure
+ * @node:	list node - contains the devices with OPPs that
+ *		have been registered. Nodes once added are not modified in this
+ *		list.
+ *		RCU usage: nodes are not modified in the list of device_opp,
+ *		however addition is possible and is secured by dev_opp_list_lock
+ * @srcu_head:	notifier head to notify the OPP availability changes.
+ * @rcu_head:	RCU callback head used for deferred freeing
+ * @dev_list:	list of devices that share these OPPs
+ * @opp_list:	list of opps
+ * @np:		struct device_node pointer for opp's DT node.
+ * @shared_opp: OPP is shared between multiple devices.
+ *
+ * This is an internal data structure maintaining the link to opps attached to
+ * a device. This structure is not meant to be shared to users as it is
+ * meant for book keeping and private to OPP library.
+ *
+ * Because the opp structures can be used from both rcu and srcu readers, we
+ * need to wait for the grace period of both of them before freeing any
+ * resources. And so we have used kfree_rcu() from within call_srcu() handlers.
+ */
+struct device_opp {
+	struct list_head node;
+
+	struct srcu_notifier_head srcu_head;
+	struct rcu_head rcu_head;
+	struct list_head dev_list;
+	struct list_head opp_list;
+
+	struct device_node *np;
+	unsigned long clock_latency_ns_max;
+	bool shared_opp;
+	struct dev_pm_opp *suspend_opp;
+};
+
+/* Routines internal to opp core */
+struct device_opp *_find_device_opp(struct device *dev);
+struct device_list_opp *_add_list_dev(const struct device *dev,
+				      struct device_opp *dev_opp);
+struct device_node *_of_get_opp_desc_node(struct device *dev);
+
+#endif /* __DRIVER_OPP_H__ */
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index 51f15bc15774..a1e0b9ab847a 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -25,6 +25,9 @@
  */
 bool events_check_enabled __read_mostly;
 
+/* First wakeup IRQ seen by the kernel in the last cycle. */
+unsigned int pm_wakeup_irq __read_mostly;
+
 /* If set and the system is suspending, terminate the suspend. */
 static bool pm_abort_suspend __read_mostly;
 
@@ -91,7 +94,7 @@ struct wakeup_source *wakeup_source_create(const char *name)
 	if (!ws)
 		return NULL;
 
-	wakeup_source_prepare(ws, name ? kstrdup(name, GFP_KERNEL) : NULL);
+	wakeup_source_prepare(ws, name ? kstrdup_const(name, GFP_KERNEL) : NULL);
 	return ws;
 }
 EXPORT_SYMBOL_GPL(wakeup_source_create);
@@ -154,7 +157,7 @@ void wakeup_source_destroy(struct wakeup_source *ws)
 
 	wakeup_source_drop(ws);
 	wakeup_source_record(ws);
-	kfree(ws->name);
+	kfree_const(ws->name);
 	kfree(ws);
 }
 EXPORT_SYMBOL_GPL(wakeup_source_destroy);
@@ -868,6 +871,15 @@ EXPORT_SYMBOL_GPL(pm_system_wakeup);
 void pm_wakeup_clear(void)
 {
 	pm_abort_suspend = false;
+	pm_wakeup_irq = 0;
+}
+
+void pm_system_irq_wakeup(unsigned int irq_number)
+{
+	if (pm_wakeup_irq == 0) {
+		pm_wakeup_irq = irq_number;
+		pm_system_wakeup();
+	}
 }
 
 /**
diff --git a/drivers/base/property.c b/drivers/base/property.c
index 2d75366c61e0..de40623bbd8a 100644
--- a/drivers/base/property.c
+++ b/drivers/base/property.c
@@ -134,7 +134,7 @@ bool fwnode_property_present(struct fwnode_handle *fwnode, const char *propname)
 	if (is_of_node(fwnode))
 		return of_property_read_bool(to_of_node(fwnode), propname);
 	else if (is_acpi_node(fwnode))
-		return !acpi_dev_prop_get(to_acpi_node(fwnode), propname, NULL);
+		return !acpi_node_prop_get(fwnode, propname, NULL);
 
 	return !!pset_prop_get(to_pset(fwnode), propname);
 }
@@ -287,6 +287,28 @@ int device_property_read_string(struct device *dev, const char *propname,
 }
 EXPORT_SYMBOL_GPL(device_property_read_string);
 
+/**
+ * device_property_match_string - find a string in an array and return index
+ * @dev: Device to get the property of
+ * @propname: Name of the property holding the array
+ * @string: String to look for
+ *
+ * Find a given string in a string array and if it is found return the
+ * index back.
+ *
+ * Return: %0 if the property was found (success),
+ *	   %-EINVAL if given arguments are not valid,
+ *	   %-ENODATA if the property does not have a value,
+ *	   %-EPROTO if the property is not an array of strings,
+ *	   %-ENXIO if no suitable firmware interface is present.
+ */
+int device_property_match_string(struct device *dev, const char *propname,
+				 const char *string)
+{
+	return fwnode_property_match_string(dev_fwnode(dev), propname, string);
+}
+EXPORT_SYMBOL_GPL(device_property_match_string);
+
 #define OF_DEV_PROP_READ_ARRAY(node, propname, type, val, nval) \
 	(val) ? of_property_read_##type##_array((node), (propname), (val), (nval)) \
 	: of_property_count_elems_of_size((node), (propname), sizeof(type))
@@ -298,8 +320,8 @@ EXPORT_SYMBOL_GPL(device_property_read_string);
 		_ret_ = OF_DEV_PROP_READ_ARRAY(to_of_node(_fwnode_), _propname_, \
 					       _type_, _val_, _nval_);	\
 	else if (is_acpi_node(_fwnode_))				\
-		_ret_ = acpi_dev_prop_read(to_acpi_node(_fwnode_), _propname_, \
-					   _proptype_, _val_, _nval_);	\
+		_ret_ = acpi_node_prop_read(_fwnode_, _propname_, _proptype_, \
+					    _val_, _nval_);		\
 	else if (is_pset(_fwnode_))					\
 		_ret_ = pset_prop_read_array(to_pset(_fwnode_), _propname_, \
 					     _proptype_, _val_, _nval_); \
@@ -440,8 +462,8 @@ int fwnode_property_read_string_array(struct fwnode_handle *fwnode,
 					       propname, val, nval) :
 		       of_property_count_strings(to_of_node(fwnode), propname);
 	else if (is_acpi_node(fwnode))
-		return acpi_dev_prop_read(to_acpi_node(fwnode), propname,
-					  DEV_PROP_STRING, val, nval);
+		return acpi_node_prop_read(fwnode, propname, DEV_PROP_STRING,
+					   val, nval);
 	else if (is_pset(fwnode))
 		return pset_prop_read_array(to_pset(fwnode), propname,
 					    DEV_PROP_STRING, val, nval);
@@ -470,8 +492,8 @@ int fwnode_property_read_string(struct fwnode_handle *fwnode,
 	if (is_of_node(fwnode))
 		return of_property_read_string(to_of_node(fwnode), propname, val);
 	else if (is_acpi_node(fwnode))
-		return acpi_dev_prop_read(to_acpi_node(fwnode), propname,
-					  DEV_PROP_STRING, val, 1);
+		return acpi_node_prop_read(fwnode, propname, DEV_PROP_STRING,
+					   val, 1);
 
 	return pset_prop_read_array(to_pset(fwnode), propname,
 				    DEV_PROP_STRING, val, 1);
@@ -479,6 +501,52 @@ int fwnode_property_read_string(struct fwnode_handle *fwnode,
 EXPORT_SYMBOL_GPL(fwnode_property_read_string);
 
 /**
+ * fwnode_property_match_string - find a string in an array and return index
+ * @fwnode: Firmware node to get the property of
+ * @propname: Name of the property holding the array
+ * @string: String to look for
+ *
+ * Find a given string in a string array and if it is found return the
+ * index back.
+ *
+ * Return: %0 if the property was found (success),
+ *	   %-EINVAL if given arguments are not valid,
+ *	   %-ENODATA if the property does not have a value,
+ *	   %-EPROTO if the property is not an array of strings,
+ *	   %-ENXIO if no suitable firmware interface is present.
+ */
+int fwnode_property_match_string(struct fwnode_handle *fwnode,
+				 const char *propname, const char *string)
+{
+	const char **values;
+	int nval, ret, i;
+
+	nval = fwnode_property_read_string_array(fwnode, propname, NULL, 0);
+	if (nval < 0)
+		return nval;
+
+	values = kcalloc(nval, sizeof(*values), GFP_KERNEL);
+	if (!values)
+		return -ENOMEM;
+
+	ret = fwnode_property_read_string_array(fwnode, propname, values, nval);
+	if (ret < 0)
+		goto out;
+
+	ret = -ENODATA;
+	for (i = 0; i < nval; i++) {
+		if (!strcmp(values[i], string)) {
+			ret = i;
+			break;
+		}
+	}
+out:
+	kfree(values);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(fwnode_property_match_string);
+
+/**
  * device_get_next_child_node - Return the next child node handle for a device
  * @dev: Device to find the next child node for.
  * @child: Handle to one of the device's child nodes or a null handle.
@@ -493,11 +561,7 @@ struct fwnode_handle *device_get_next_child_node(struct device *dev,
 		if (node)
 			return &node->fwnode;
 	} else if (IS_ENABLED(CONFIG_ACPI)) {
-		struct acpi_device *node;
-
-		node = acpi_get_next_child(dev, to_acpi_node(child));
-		if (node)
-			return acpi_fwnode_handle(node);
+		return acpi_get_next_subnode(dev, child);
 	}
 	return NULL;
 }