author     Linus Torvalds <torvalds@linux-foundation.org>  2016-07-26 20:29:07 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2016-07-26 20:29:07 -0400
commit     6453dbdda30428a3c56568c96fe70ea3612f07e2 (patch)
tree       9a3c6087a2832c36e8c49296fb05f95b877e0111 /drivers/base
parent     27b79027bc112a63ad4004eb83c6acacae08a0de (diff)
parent     bc841e260c95608921809a2c7481cf6f03bec21a (diff)
Merge tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
 "Again, the majority of changes go into the cpufreq subsystem, but
  there are no big features this time. The cpufreq changes that stand
  out somewhat are the governor interface rework and improvements
  related to the handling of frequency tables. Apart from those, there
  are fixes and new device/CPU IDs in drivers, cleanups and an
  improvement of the new schedutil governor.

  Next, there are some changes in the hibernation core, including a fix
  for a nasty problem related to the MONITOR/MWAIT usage by CPU offline
  during resume from hibernation, a few core improvements related to
  memory management during resume, a couple of additional debug
  features and cleanups.

  Finally, we have some fixes and cleanups in the devfreq subsystem,
  generic power domains framework improvements related to system
  suspend/resume, support for some new chips in intel_idle and in the
  power capping RAPL driver, a new version of the AnalyzeSuspend
  utility and some assorted fixes and cleanups.

  Specifics:

   - Rework the cpufreq governor interface to make it more
     straightforward and modify the conservative governor to avoid
     using transition notifications (Rafael Wysocki).

   - Rework the handling of frequency tables by the cpufreq core to
     make it more efficient (Viresh Kumar).

   - Modify the schedutil governor to reduce the number of wakeups it
     causes to occur in cases when the CPU frequency doesn't need to be
     changed (Steve Muckle, Viresh Kumar).

   - Fix some minor issues and clean up code in the cpufreq core and
     governors (Rafael Wysocki, Viresh Kumar).

   - Add Intel Broxton support to the intel_pstate driver (Srinivas
     Pandruvada).

   - Fix problems related to the config TDP feature and to the validity
     of the MSR_HWP_INTERRUPT register in intel_pstate (Jan Kiszka,
     Srinivas Pandruvada).

   - Make intel_pstate update the cpu_frequency tracepoint even if the
     frequency doesn't change to avoid confusing powertop (Rafael
     Wysocki).

   - Clean up the usage of __init/__initdata in intel_pstate, mark some
     of its internal variables as __read_mostly and drop an unused
     structure element from it (Jisheng Zhang, Carsten Emde).

   - Clean up the usage of some duplicate MSR symbols in intel_pstate
     and turbostat (Srinivas Pandruvada).

   - Update/fix the powernv, s3c24xx and mvebu cpufreq drivers (Akshay
     Adiga, Viresh Kumar, Ben Dooks).

   - Fix a regression (introduced during the 4.5 cycle) in the
     pcc-cpufreq driver by reverting the problematic commit (Andreas
     Herrmann).

   - Add support for Intel Denverton to intel_idle, clean up Broxton
     support in it and make it explicitly non-modular (Jacob Pan, Jan
     Beulich, Paul Gortmaker).

   - Add support for Denverton and Ivy Bridge server to the Intel RAPL
     power capping driver and make it more careful about the handling
     of MSRs that may not be present (Jacob Pan, Xiaolong Wang).

   - Fix resume from hibernation on x86-64 by making the CPU offline
     during resume avoid using MONITOR/MWAIT in the "play dead" loop
     which may lead to an inadvertent "revival" of a "dead" CPU and a
     page fault leading to a kernel crash from it (Rafael Wysocki).

   - Make memory management during resume from hibernation more
     straightforward (Rafael Wysocki).

   - Add debug features that should help to detect problems related to
     hibernation and resume from it (Rafael Wysocki, Chen Yu).

   - Clean up hibernation core somewhat (Rafael Wysocki).

   - Prevent KASAN from instrumenting the hibernation core which leads
     to large numbers of false-positives from it (James Morse).

   - Prevent PM (hibernate and suspend) notifiers from being called
     during the cleanup phase if they have not been called during the
     corresponding preparation phase which is possible if one of the
     other notifiers returns an error at that time (Lianwei Wang).

   - Improve suspend-related debug printout in the tasks freezer and
     clean up suspend-related console handling (Roger Lu, Borislav
     Petkov).

   - Update the AnalyzeSuspend script in the kernel sources to version
     4.2 (Todd Brandt).

   - Modify the generic power domains framework to make it handle
     system suspend/resume better (Ulf Hansson).

   - Make the runtime PM framework avoid resuming devices synchronously
     when user space changes the runtime PM settings for them and
     improve its error reporting (Rafael Wysocki, Linus Walleij).

   - Fix error paths in devfreq drivers (exynos, exynos-ppmu,
     exynos-bus) and in the core, make some devfreq code explicitly
     non-modular and change some of it into tristate (Bartlomiej
     Zolnierkiewicz, Peter Chen, Paul Gortmaker).

   - Add DT support to the generic PM clocks management code and make
     it export some more symbols (Jon Hunter, Paul Gortmaker).

   - Make the PCI PM core code slightly more robust against possible
     driver errors (Andy Shevchenko).

   - Make it possible to change DESTDIR and PREFIX in turbostat (Andy
     Shevchenko)"

* tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (89 commits)
  Revert "cpufreq: pcc-cpufreq: update default value of cpuinfo_transition_latency"
  PM / hibernate: Introduce test_resume mode for hibernation
  cpufreq: export cpufreq_driver_resolve_freq()
  cpufreq: Disallow ->resolve_freq() for drivers providing ->target_index()
  PCI / PM: check all fields in pci_set_platform_pm()
  cpufreq: acpi-cpufreq: use cached frequency mapping when possible
  cpufreq: schedutil: map raw required frequency to driver frequency
  cpufreq: add cpufreq_driver_resolve_freq()
  cpufreq: intel_pstate: Check cpuid for MSR_HWP_INTERRUPT
  intel_pstate: Update cpu_frequency tracepoint every time
  cpufreq: intel_pstate: clean remnant struct element
  PM / tools: scripts: AnalyzeSuspend v4.2
  x86 / hibernate: Use hlt_play_dead() when resuming from hibernation
  cpufreq: powernv: Replacing pstate_id with frequency table index
  intel_pstate: Fix MSR_CONFIG_TDP_x addressing in core_get_max_pstate()
  PM / hibernate: Image data protection during restoration
  PM / hibernate: Add missing braces in __register_nosave_region()
  PM / hibernate: Clean up comments in snapshot.c
  PM / hibernate: Clean up function headers in snapshot.c
  PM / hibernate: Add missing braces in hibernate_setup()
  ...
Diffstat (limited to 'drivers/base')
-rw-r--r--  drivers/base/power/clock_ops.c   45
-rw-r--r--  drivers/base/power/domain.c     301
-rw-r--r--  drivers/base/power/runtime.c     13
3 files changed, 109 insertions, 250 deletions
diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index 3657ac1cb801..8e2e4757adcb 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -121,6 +121,7 @@ int pm_clk_add(struct device *dev, const char *con_id)
 {
         return __pm_clk_add(dev, con_id, NULL);
 }
+EXPORT_SYMBOL_GPL(pm_clk_add);
 
 /**
  * pm_clk_add_clk - Start using a device clock for power management.
@@ -136,9 +137,42 @@ int pm_clk_add_clk(struct device *dev, struct clk *clk)
 {
         return __pm_clk_add(dev, NULL, clk);
 }
+EXPORT_SYMBOL_GPL(pm_clk_add_clk);
 
 
 /**
+ * of_pm_clk_add_clk - Start using a device clock for power management.
+ * @dev: Device whose clock is going to be used for power management.
+ * @name: Name of clock that is going to be used for power management.
+ *
+ * Add the clock described in the 'clocks' device-tree node that matches
+ * with the 'name' provided, to the list of clocks used for the power
+ * management of @dev. On success, returns 0. Returns a negative error
+ * code if the clock is not found or cannot be added.
+ */
+int of_pm_clk_add_clk(struct device *dev, const char *name)
+{
+        struct clk *clk;
+        int ret;
+
+        if (!dev || !dev->of_node || !name)
+                return -EINVAL;
+
+        clk = of_clk_get_by_name(dev->of_node, name);
+        if (IS_ERR(clk))
+                return PTR_ERR(clk);
+
+        ret = pm_clk_add_clk(dev, clk);
+        if (ret) {
+                clk_put(clk);
+                return ret;
+        }
+
+        return 0;
+}
+EXPORT_SYMBOL_GPL(of_pm_clk_add_clk);
+
+/**
  * of_pm_clk_add_clks - Start using device clock(s) for power management.
  * @dev: Device whose clock(s) is going to be used for power management.
  *
@@ -192,6 +226,7 @@ error:
 
         return ret;
 }
+EXPORT_SYMBOL_GPL(of_pm_clk_add_clks);
 
 /**
  * __pm_clk_remove - Destroy PM clock entry.
@@ -252,6 +287,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 
         __pm_clk_remove(ce);
 }
+EXPORT_SYMBOL_GPL(pm_clk_remove);
 
 /**
  * pm_clk_remove_clk - Stop using a device clock for power management.
@@ -285,6 +321,7 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
 
         __pm_clk_remove(ce);
 }
+EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
 
 /**
  * pm_clk_init - Initialize a device's list of power management clocks.
@@ -299,6 +336,7 @@ void pm_clk_init(struct device *dev)
         if (psd)
                 INIT_LIST_HEAD(&psd->clock_list);
 }
+EXPORT_SYMBOL_GPL(pm_clk_init);
 
 /**
  * pm_clk_create - Create and initialize a device's list of PM clocks.
@@ -311,6 +349,7 @@ int pm_clk_create(struct device *dev)
 {
         return dev_pm_get_subsys_data(dev);
 }
+EXPORT_SYMBOL_GPL(pm_clk_create);
 
 /**
  * pm_clk_destroy - Destroy a device's list of power management clocks.
@@ -345,6 +384,7 @@ void pm_clk_destroy(struct device *dev)
                 __pm_clk_remove(ce);
         }
 }
+EXPORT_SYMBOL_GPL(pm_clk_destroy);
 
 /**
  * pm_clk_suspend - Disable clocks in a device's PM clock list.
@@ -375,6 +415,7 @@ int pm_clk_suspend(struct device *dev)
 
         return 0;
 }
+EXPORT_SYMBOL_GPL(pm_clk_suspend);
 
 /**
  * pm_clk_resume - Enable clocks in a device's PM clock list.
@@ -400,6 +441,7 @@ int pm_clk_resume(struct device *dev)
 
         return 0;
 }
+EXPORT_SYMBOL_GPL(pm_clk_resume);
 
 /**
  * pm_clk_notify - Notify routine for device addition and removal.
@@ -480,6 +522,7 @@ int pm_clk_runtime_suspend(struct device *dev)
 
         return 0;
 }
+EXPORT_SYMBOL_GPL(pm_clk_runtime_suspend);
 
 int pm_clk_runtime_resume(struct device *dev)
 {
@@ -495,6 +538,7 @@ int pm_clk_runtime_resume(struct device *dev)
 
         return pm_generic_runtime_resume(dev);
 }
+EXPORT_SYMBOL_GPL(pm_clk_runtime_resume);
 
 #else /* !CONFIG_PM_CLK */
 
@@ -598,3 +642,4 @@ void pm_clk_add_notifier(struct bus_type *bus,
         clknb->nb.notifier_call = pm_clk_notify;
         bus_register_notifier(bus, &clknb->nb);
 }
+EXPORT_SYMBOL_GPL(pm_clk_add_notifier);
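The new `of_pm_clk_add_clk()` above follows a common acquire-then-register shape: look the clock up by name, try to add it to the device's PM clock list, and drop the reference again if the add step fails. A minimal userspace sketch of that error-handling pattern (all names here are stand-ins for the kernel APIs, not the real ones):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel types and calls. */
struct clk { int id; };

static int add_should_fail;   /* test knob: force the "add" step to fail */
static int clk_put_calls;     /* counts releases, to verify the rollback  */

static struct clk fake_clk = { 42 };

static struct clk *of_clk_get_by_name_stub(const char *name)
{
	return name ? &fake_clk : NULL;   /* NULL models a lookup failure */
}

static int pm_clk_add_clk_stub(struct clk *clk)
{
	(void)clk;
	return add_should_fail ? -1 : 0;
}

static void clk_put_stub(struct clk *clk)
{
	(void)clk;
	clk_put_calls++;
}

/* Mirrors of_pm_clk_add_clk(): look the clock up, try to add it, and
 * release the reference again if the add step fails. */
static int of_pm_clk_add_clk_sketch(const char *name)
{
	struct clk *clk;
	int ret;

	if (!name)
		return -22;   /* -EINVAL */

	clk = of_clk_get_by_name_stub(name);
	if (!clk)
		return -2;    /* lookup error */

	ret = pm_clk_add_clk_stub(clk);
	if (ret) {
		clk_put_stub(clk);   /* rollback: don't leak the reference */
		return ret;
	}

	return 0;
}
```

The point of the pattern is that every failure path leaves the reference count where it started, so the caller never has to guess whether a failed call consumed the clock.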
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index de23b648fce3..a1f2aff33997 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -187,8 +187,7 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
         struct gpd_link *link;
         int ret = 0;
 
-        if (genpd->status == GPD_STATE_ACTIVE
-            || (genpd->prepared_count > 0 && genpd->suspend_power_off))
+        if (genpd->status == GPD_STATE_ACTIVE)
                 return 0;
 
         /*
@@ -735,82 +734,24 @@ static int pm_genpd_prepare(struct device *dev)
 
         mutex_lock(&genpd->lock);
 
-        if (genpd->prepared_count++ == 0) {
+        if (genpd->prepared_count++ == 0)
                 genpd->suspended_count = 0;
-                genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
-        }
 
         mutex_unlock(&genpd->lock);
 
-        if (genpd->suspend_power_off)
-                return 0;
-
-        /*
-         * The PM domain must be in the GPD_STATE_ACTIVE state at this point,
-         * so genpd_poweron() will return immediately, but if the device
-         * is suspended (e.g. it's been stopped by genpd_stop_dev()), we need
-         * to make it operational.
-         */
-        pm_runtime_resume(dev);
-        __pm_runtime_disable(dev, false);
-
         ret = pm_generic_prepare(dev);
         if (ret) {
                 mutex_lock(&genpd->lock);
 
-                if (--genpd->prepared_count == 0)
-                        genpd->suspend_power_off = false;
+                genpd->prepared_count--;
 
                 mutex_unlock(&genpd->lock);
-                pm_runtime_enable(dev);
         }
 
         return ret;
 }
 
 /**
- * pm_genpd_suspend - Suspend a device belonging to an I/O PM domain.
- * @dev: Device to suspend.
- *
- * Suspend a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a PM domain consisting of I/O devices.
- */
-static int pm_genpd_suspend(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_suspend(dev);
-}
-
-/**
- * pm_genpd_suspend_late - Late suspend of a device from an I/O PM domain.
- * @dev: Device to suspend.
- *
- * Carry out a late suspend of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a PM domain consisting of I/O devices.
- */
-static int pm_genpd_suspend_late(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_suspend_late(dev);
-}
-
-/**
  * pm_genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
  * @dev: Device to suspend.
  *
@@ -820,6 +761,7 @@ static int pm_genpd_suspend_late(struct device *dev)
 static int pm_genpd_suspend_noirq(struct device *dev)
 {
         struct generic_pm_domain *genpd;
+        int ret;
 
         dev_dbg(dev, "%s()\n", __func__);
 
@@ -827,11 +769,14 @@ static int pm_genpd_suspend_noirq(struct device *dev)
         if (IS_ERR(genpd))
                 return -EINVAL;
 
-        if (genpd->suspend_power_off
-            || (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)))
+        if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
                 return 0;
 
-        genpd_stop_dev(genpd, dev);
+        if (genpd->dev_ops.stop && genpd->dev_ops.start) {
+                ret = pm_runtime_force_suspend(dev);
+                if (ret)
+                        return ret;
+        }
 
         /*
          * Since all of the "noirq" callbacks are executed sequentially, it is
@@ -853,6 +798,7 @@ static int pm_genpd_suspend_noirq(struct device *dev)
 static int pm_genpd_resume_noirq(struct device *dev)
 {
         struct generic_pm_domain *genpd;
+        int ret = 0;
 
         dev_dbg(dev, "%s()\n", __func__);
 
@@ -860,8 +806,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
         if (IS_ERR(genpd))
                 return -EINVAL;
 
-        if (genpd->suspend_power_off
-            || (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)))
+        if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
                 return 0;
 
         /*
@@ -872,93 +817,10 @@ static int pm_genpd_resume_noirq(struct device *dev)
         pm_genpd_sync_poweron(genpd, true);
         genpd->suspended_count--;
 
-        return genpd_start_dev(genpd, dev);
-}
-
-/**
- * pm_genpd_resume_early - Early resume of a device in an I/O PM domain.
- * @dev: Device to resume.
- *
- * Carry out an early resume of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a power domain consisting of I/O
- * devices.
- */
-static int pm_genpd_resume_early(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_resume_early(dev);
-}
-
-/**
- * pm_genpd_resume - Resume of device in an I/O PM domain.
- * @dev: Device to resume.
- *
- * Resume a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a power domain consisting of I/O devices.
- */
-static int pm_genpd_resume(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_resume(dev);
-}
-
-/**
- * pm_genpd_freeze - Freezing a device in an I/O PM domain.
- * @dev: Device to freeze.
- *
- * Freeze a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a power domain consisting of I/O devices.
- */
-static int pm_genpd_freeze(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_freeze(dev);
-}
-
-/**
- * pm_genpd_freeze_late - Late freeze of a device in an I/O PM domain.
- * @dev: Device to freeze.
- *
- * Carry out a late freeze of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a power domain consisting of I/O
- * devices.
- */
-static int pm_genpd_freeze_late(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_freeze_late(dev);
+        if (genpd->dev_ops.stop && genpd->dev_ops.start)
+                ret = pm_runtime_force_resume(dev);
+
+        return ret;
 }
 
 /**
@@ -973,6 +835,7 @@ static int pm_genpd_freeze_late(struct device *dev)
 static int pm_genpd_freeze_noirq(struct device *dev)
 {
         struct generic_pm_domain *genpd;
+        int ret = 0;
 
         dev_dbg(dev, "%s()\n", __func__);
 
@@ -980,7 +843,10 @@ static int pm_genpd_freeze_noirq(struct device *dev)
         if (IS_ERR(genpd))
                 return -EINVAL;
 
-        return genpd->suspend_power_off ? 0 : genpd_stop_dev(genpd, dev);
+        if (genpd->dev_ops.stop && genpd->dev_ops.start)
+                ret = pm_runtime_force_suspend(dev);
+
+        return ret;
 }
 
 /**
@@ -993,6 +859,7 @@ static int pm_genpd_freeze_noirq(struct device *dev)
 static int pm_genpd_thaw_noirq(struct device *dev)
 {
         struct generic_pm_domain *genpd;
+        int ret = 0;
 
         dev_dbg(dev, "%s()\n", __func__);
 
@@ -1000,51 +867,10 @@ static int pm_genpd_thaw_noirq(struct device *dev)
         if (IS_ERR(genpd))
                 return -EINVAL;
 
-        return genpd->suspend_power_off ?
-                0 : genpd_start_dev(genpd, dev);
-}
-
-/**
- * pm_genpd_thaw_early - Early thaw of device in an I/O PM domain.
- * @dev: Device to thaw.
- *
- * Carry out an early thaw of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a power domain consisting of I/O
- * devices.
- */
-static int pm_genpd_thaw_early(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_thaw_early(dev);
-}
-
-/**
- * pm_genpd_thaw - Thaw a device belonging to an I/O power domain.
- * @dev: Device to thaw.
- *
- * Thaw a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a power domain consisting of I/O devices.
- */
-static int pm_genpd_thaw(struct device *dev)
-{
-        struct generic_pm_domain *genpd;
-
-        dev_dbg(dev, "%s()\n", __func__);
-
-        genpd = dev_to_genpd(dev);
-        if (IS_ERR(genpd))
-                return -EINVAL;
-
-        return genpd->suspend_power_off ? 0 : pm_generic_thaw(dev);
+        if (genpd->dev_ops.stop && genpd->dev_ops.start)
+                ret = pm_runtime_force_resume(dev);
+
+        return ret;
 }
 
 /**
@@ -1057,6 +883,7 @@ static int pm_genpd_thaw(struct device *dev)
 static int pm_genpd_restore_noirq(struct device *dev)
 {
         struct generic_pm_domain *genpd;
+        int ret = 0;
 
         dev_dbg(dev, "%s()\n", __func__);
 
@@ -1072,30 +899,20 @@ static int pm_genpd_restore_noirq(struct device *dev)
          * At this point suspended_count == 0 means we are being run for the
          * first time for the given domain in the present cycle.
          */
-        if (genpd->suspended_count++ == 0) {
+        if (genpd->suspended_count++ == 0)
                 /*
                  * The boot kernel might put the domain into arbitrary state,
                  * so make it appear as powered off to pm_genpd_sync_poweron(),
                  * so that it tries to power it on in case it was really off.
                  */
                 genpd->status = GPD_STATE_POWER_OFF;
-                if (genpd->suspend_power_off) {
-                        /*
-                         * If the domain was off before the hibernation, make
-                         * sure it will be off going forward.
-                         */
-                        genpd_power_off(genpd, true);
-
-                        return 0;
-                }
-        }
-
-        if (genpd->suspend_power_off)
-                return 0;
 
         pm_genpd_sync_poweron(genpd, true);
 
-        return genpd_start_dev(genpd, dev);
+        if (genpd->dev_ops.stop && genpd->dev_ops.start)
+                ret = pm_runtime_force_resume(dev);
+
+        return ret;
 }
 
 /**
@@ -1110,7 +927,6 @@ static int pm_genpd_restore_noirq(struct device *dev)
 static void pm_genpd_complete(struct device *dev)
 {
         struct generic_pm_domain *genpd;
-        bool run_complete;
 
         dev_dbg(dev, "%s()\n", __func__);
 
@@ -1118,20 +934,15 @@ static void pm_genpd_complete(struct device *dev)
         if (IS_ERR(genpd))
                 return;
 
+        pm_generic_complete(dev);
+
         mutex_lock(&genpd->lock);
 
-        run_complete = !genpd->suspend_power_off;
-        if (--genpd->prepared_count == 0)
-                genpd->suspend_power_off = false;
+        genpd->prepared_count--;
+        if (!genpd->prepared_count)
+                genpd_queue_power_off_work(genpd);
 
         mutex_unlock(&genpd->lock);
-
-        if (run_complete) {
-                pm_generic_complete(dev);
-                pm_runtime_set_active(dev);
-                pm_runtime_enable(dev);
-                pm_request_idle(dev);
-        }
 }
 
 /**
@@ -1173,18 +984,10 @@ EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
 #else /* !CONFIG_PM_SLEEP */
 
 #define pm_genpd_prepare                NULL
-#define pm_genpd_suspend                NULL
-#define pm_genpd_suspend_late           NULL
 #define pm_genpd_suspend_noirq          NULL
-#define pm_genpd_resume_early           NULL
 #define pm_genpd_resume_noirq           NULL
-#define pm_genpd_resume                 NULL
-#define pm_genpd_freeze                 NULL
-#define pm_genpd_freeze_late            NULL
 #define pm_genpd_freeze_noirq           NULL
-#define pm_genpd_thaw_early             NULL
 #define pm_genpd_thaw_noirq             NULL
-#define pm_genpd_thaw                   NULL
 #define pm_genpd_restore_noirq          NULL
 #define pm_genpd_complete               NULL
 
@@ -1455,12 +1258,14 @@ EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
  * @genpd: PM domain object to initialize.
  * @gov: PM domain governor to associate with the domain (may be NULL).
  * @is_off: Initial value of the domain's power_is_off field.
+ *
+ * Returns 0 on successful initialization, else a negative error code.
  */
-void pm_genpd_init(struct generic_pm_domain *genpd,
-                   struct dev_power_governor *gov, bool is_off)
+int pm_genpd_init(struct generic_pm_domain *genpd,
+                  struct dev_power_governor *gov, bool is_off)
 {
         if (IS_ERR_OR_NULL(genpd))
-                return;
+                return -EINVAL;
 
         INIT_LIST_HEAD(&genpd->master_links);
         INIT_LIST_HEAD(&genpd->slave_links);
@@ -1476,24 +1281,24 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
         genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
         genpd->domain.ops.runtime_resume = genpd_runtime_resume;
         genpd->domain.ops.prepare = pm_genpd_prepare;
-        genpd->domain.ops.suspend = pm_genpd_suspend;
-        genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
+        genpd->domain.ops.suspend = pm_generic_suspend;
+        genpd->domain.ops.suspend_late = pm_generic_suspend_late;
         genpd->domain.ops.suspend_noirq = pm_genpd_suspend_noirq;
         genpd->domain.ops.resume_noirq = pm_genpd_resume_noirq;
-        genpd->domain.ops.resume_early = pm_genpd_resume_early;
-        genpd->domain.ops.resume = pm_genpd_resume;
-        genpd->domain.ops.freeze = pm_genpd_freeze;
-        genpd->domain.ops.freeze_late = pm_genpd_freeze_late;
+        genpd->domain.ops.resume_early = pm_generic_resume_early;
+        genpd->domain.ops.resume = pm_generic_resume;
+        genpd->domain.ops.freeze = pm_generic_freeze;
+        genpd->domain.ops.freeze_late = pm_generic_freeze_late;
         genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq;
         genpd->domain.ops.thaw_noirq = pm_genpd_thaw_noirq;
-        genpd->domain.ops.thaw_early = pm_genpd_thaw_early;
-        genpd->domain.ops.thaw = pm_genpd_thaw;
-        genpd->domain.ops.poweroff = pm_genpd_suspend;
-        genpd->domain.ops.poweroff_late = pm_genpd_suspend_late;
+        genpd->domain.ops.thaw_early = pm_generic_thaw_early;
+        genpd->domain.ops.thaw = pm_generic_thaw;
+        genpd->domain.ops.poweroff = pm_generic_poweroff;
+        genpd->domain.ops.poweroff_late = pm_generic_poweroff_late;
         genpd->domain.ops.poweroff_noirq = pm_genpd_suspend_noirq;
         genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq;
-        genpd->domain.ops.restore_early = pm_genpd_resume_early;
-        genpd->domain.ops.restore = pm_genpd_resume;
+        genpd->domain.ops.restore_early = pm_generic_restore_early;
+        genpd->domain.ops.restore = pm_generic_restore;
         genpd->domain.ops.complete = pm_genpd_complete;
 
         if (genpd->flags & GENPD_FLAG_PM_CLK) {
@@ -1518,6 +1323,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
         mutex_lock(&gpd_list_lock);
         list_add(&genpd->gpd_list_node, &gpd_list);
         mutex_unlock(&gpd_list_lock);
+
+        return 0;
 }
 EXPORT_SYMBOL_GPL(pm_genpd_init);
 
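Note the API change in the last hunks above: `pm_genpd_init()` now returns `int` (0 on success, `-EINVAL` for a bad domain pointer) instead of `void`, so callers can fail cleanly rather than continue with a half-initialized domain. A toy userspace sketch of the caller side after the change (stub types and names, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

#define EINVAL 22

/* Hypothetical stand-in for struct generic_pm_domain. */
struct generic_pm_domain { int initialized; };

/* Models the new contract: reject an invalid pointer with -EINVAL
 * instead of the old silent early return. */
static int pm_genpd_init_sketch(struct generic_pm_domain *genpd)
{
	if (!genpd)
		return -EINVAL;

	genpd->initialized = 1;
	return 0;
}

/* A platform-driver probe path can now propagate the error to the
 * driver core instead of ignoring it. */
static int probe_sketch(struct generic_pm_domain *pd)
{
	int ret = pm_genpd_init_sketch(pd);

	if (ret)
		return ret;   /* bail out before registering anything else */

	return 0;
}
```

Callers that previously ignored the `void` return need no change beyond checking the result, which is why the conversion could be done without touching every user at once.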
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index b74690418504..e097d355cc04 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1045,10 +1045,14 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
                 */
                if (!parent->power.disable_depth
                    && !parent->power.ignore_children
-                   && parent->power.runtime_status != RPM_ACTIVE)
+                   && parent->power.runtime_status != RPM_ACTIVE) {
+                        dev_err(dev, "runtime PM trying to activate child device %s but parent (%s) is not active\n",
+                                dev_name(dev),
+                                dev_name(parent));
                        error = -EBUSY;
-                else if (dev->power.runtime_status == RPM_SUSPENDED)
+                } else if (dev->power.runtime_status == RPM_SUSPENDED) {
                        atomic_inc(&parent->power.child_count);
+                }
 
                spin_unlock(&parent->power.lock);
 
@@ -1256,7 +1260,7 @@ void pm_runtime_allow(struct device *dev)
 
        dev->power.runtime_auto = true;
        if (atomic_dec_and_test(&dev->power.usage_count))
-               rpm_idle(dev, RPM_AUTO);
+               rpm_idle(dev, RPM_AUTO | RPM_ASYNC);
 
  out:
        spin_unlock_irq(&dev->power.lock);
@@ -1506,6 +1510,9 @@ int pm_runtime_force_resume(struct device *dev)
                goto out;
        }
 
+       if (!pm_runtime_status_suspended(dev))
+               goto out;
+
        ret = pm_runtime_set_active(dev);
        if (ret)
                goto out;
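The last hunk adds an early exit to `pm_runtime_force_resume()`: the resume callback now only runs when the device was actually left runtime-suspended (i.e. to undo a preceding `pm_runtime_force_suspend()`), not on a device that is already active. A self-contained sketch of that guard (a simplified model, not the kernel implementation):

```c
#include <assert.h>

enum rpm_status { RPM_ACTIVE, RPM_SUSPENDED };

/* Hypothetical stand-in for the bits of struct device we need. */
struct device_sketch {
	enum rpm_status status;
	int resume_cb_calls;   /* counts ->runtime_resume invocations */
};

/* Mirrors the new guard: if the device was not left in RPM_SUSPENDED
 * (e.g. force_suspend was never called, or runtime PM kept it active),
 * skip the resume callback instead of running it redundantly. */
static int pm_runtime_force_resume_sketch(struct device_sketch *dev)
{
	if (dev->status != RPM_SUSPENDED)
		return 0;   /* nothing to undo */

	dev->resume_cb_calls++;   /* would invoke ->runtime_resume() here */
	dev->status = RPM_ACTIVE;
	return 0;
}
```

The benefit is symmetry: force_resume becomes a strict inverse of force_suspend, so system-sleep paths that pair the two cannot resume hardware that was never suspended by them.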