author    Linus Torvalds <torvalds@linux-foundation.org>  2018-01-29 12:47:41 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2018-01-29 12:47:41 -0500
commit    7f3fdd40a7dfaa7405185250974b0fabd08c1f8b
tree      7451aae06a883478d380fe03f7d817d0e3232e94
parent    1c1f395b2873f59830979cf82324fbf00edfb80c
parent    ee43730d65155c5b3c3d0531f11daf59f8f42a73
Merge tag 'pm-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
 "This includes some infrastructure changes in the PM core, mostly
  related to integration between runtime PM and system-wide suspend and
  hibernation, plus some driver changes depending on them and fixes for
  issues in that area which have become quite apparent recently.

  Also included are changes making more x86-based systems use the Low
  Power Sleep S0 _DSM interface by default, which turned out to be
  necessary to handle power button wakeups from suspend-to-idle on
  Surface Pro3.

  On the cpufreq front we have fixes and cleanups in the core, some new
  hardware support, driver updates and the removal of some unused code
  from the CPU cooling thermal driver.

  Apart from this, the Operating Performance Points (OPP) framework is
  prepared to be used with power domains in the future and there is the
  usual batch of assorted fixes and cleanups.

  Specifics:

   - Define a PM driver flag allowing drivers to request that their
     devices be left in suspend after system-wide transitions to the
     working state if possible and add support for it to the PCI bus
     type and the ACPI PM domain (Rafael Wysocki).

   - Make the PM core carry out optimizations for devices with driver
     PM flags set in some cases and make a few drivers set those flags
     (Rafael Wysocki).

   - Fix and clean up wrapper routines allowing runtime PM device
     callbacks to be re-used for system-wide PM, change the generic
     power domains (genpd) framework to stop using those routines
     incorrectly and fix up a driver depending on that behavior of
     genpd (Rafael Wysocki, Ulf Hansson, Geert Uytterhoeven).

   - Fix and clean up the PM core's device wakeup framework and
     re-factor system-wide PM core code related to device wakeup
     (Rafael Wysocki, Ulf Hansson, Brian Norris).

   - Make more x86-based systems use the Low Power Sleep S0 _DSM
     interface by default (to fix power button wakeup from
     suspend-to-idle on Surface Pro3) and add a kernel command line
     switch to tell it to ignore the system sleep blacklist in the
     ACPI core (Rafael Wysocki).

   - Fix a race condition related to cpufreq governor module removal
     and clean up the governor management code in the cpufreq core
     (Rafael Wysocki).

   - Drop the unused generic code related to the handling of the
     static power energy usage model in the CPU cooling thermal driver
     along with the corresponding documentation (Viresh Kumar).

   - Add mt2712 support to the Mediatek cpufreq driver (Andrew-sh
     Cheng).

   - Add a new operating point to the imx6ul and imx6q cpufreq drivers
     and switch the latter to using clk_bulk_get() (Anson Huang, Dong
     Aisheng).

   - Add support for multiple regulators to the TI cpufreq driver
     along with a new DT binding related to that and clean up that
     driver somewhat (Dave Gerlach).

   - Fix a powernv cpufreq driver regression leading to incorrect CPU
     frequency reporting, fix that driver to deal with non-contiguous
     P-states correctly and clean it up (Gautham Shenoy, Shilpasri
     Bhat).

   - Add support for frequency scaling on Armada 37xx SoCs through the
     generic DT cpufreq driver (Gregory CLEMENT).

   - Fix error code paths in the mvebu cpufreq driver (Gregory
     CLEMENT).

   - Fix a transition delay setting regression in the longhaul cpufreq
     driver (Viresh Kumar).

   - Add Skylake X (server) support to the intel_pstate cpufreq driver
     and clean up that driver somewhat (Srinivas Pandruvada).

   - Clean up the cpufreq statistics collection code (Viresh Kumar).

   - Drop cluster terminology and dependency on physical_package_id
     from the PSCI driver and drop dependency on arm_big_little from
     the SCPI cpufreq driver (Sudeep Holla).

   - Add support for system-wide suspend and resume to the RAPL power
     capping driver and drop a redundant semicolon from it (Zhen Han,
     Luis de Bethencourt).

   - Make SPI domain validation (in the SCSI SPI transport driver) and
     system-wide suspend mutually exclusive as they rely on the same
     underlying mechanism and cannot be carried out at the same time
     (Bart Van Assche).

   - Fix the computation of the amount of memory to preallocate in the
     hibernation core and clean up one function in there (Rainer
     Fiebig, Kyungsik Lee).

   - Prepare the Operating Performance Points (OPP) framework for
     being used with power domains and clean up one function in it
     (Viresh Kumar, Wei Yongjun).

   - Clean up the generic sysfs interface for device PM (Andy
     Shevchenko).

   - Fix several minor issues in power management frameworks and clean
     them up a bit (Arvind Yadav, Bjorn Andersson, Geert Uytterhoeven,
     Gustavo Silva, Julia Lawall, Luis de Bethencourt, Paul Gortmaker,
     Sergey Senozhatsky, gaurav jindal).

   - Make it easier to disable PM via Kconfig (Mark Brown).

   - Clean up the cpupower and intel_pstate_tracer utilities (Doug
     Smythies, Laura Abbott)"

* tag 'pm-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (89 commits)
  PCI / PM: Remove spurious semicolon
  cpufreq: scpi: remove arm_big_little dependency
  drivers: psci: remove cluster terminology and dependency on physical_package_id
  powercap: intel_rapl: Fix trailing semicolon
  dmaengine: rcar-dmac: Make DMAC reinit during system resume explicit
  PM / runtime: Allow no callbacks in pm_runtime_force_suspend|resume()
  PM / hibernate: Drop unused parameter of enough_swap
  PM / runtime: Check ignore_children in pm_runtime_need_not_resume()
  PM / runtime: Rework pm_runtime_force_suspend/resume()
  PM / genpd: Stop/start devices without pm_runtime_force_suspend/resume()
  cpufreq: powernv: Dont assume distinct pstate values for nominal and pmin
  cpufreq: intel_pstate: Add Skylake servers support
  cpufreq: intel_pstate: Replace bxt_funcs with core_funcs
  platform/x86: surfacepro3: Support for wakeup from suspend-to-idle
  ACPI / PM: Use Low Power S0 Idle on more systems
  PM / wakeup: Print warn if device gets enabled as wakeup source during sleep
  PM / domains: Don't skip driver's ->suspend|resume_noirq() callbacks
  PM / core: Propagate wakeup_path status flag in __device_suspend_late()
  PM / core: Re-structure code for clearing the direct_complete flag
  powercap: add suspend and resume mechanism for SOC power limit
  ...
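For illustration, a driver would opt in to the new flags roughly as in the
sketch below; the foo_probe() function and its device are hypothetical, while
dev_pm_set_driver_flags(), DPM_FLAG_SMART_SUSPEND and DPM_FLAG_LEAVE_SUSPENDED
are the interfaces this series deals with:

#include <linux/pm.h>
#include <linux/platform_device.h>

/* Hypothetical probe: the device can stay powered down across system
 * suspend/resume, so ask the PM core to skip redundant callbacks. */
static int foo_probe(struct platform_device *pdev)
{
	/*
	 * DPM_FLAG_SMART_SUSPEND: if the device is already runtime-suspended
	 * when system suspend starts, leave it alone.
	 * DPM_FLAG_LEAVE_SUSPENDED: prefer to keep the device suspended after
	 * the subsequent transition back to the working state.
	 */
	dev_pm_set_driver_flags(&pdev->dev,
				DPM_FLAG_SMART_SUSPEND |
				DPM_FLAG_LEAVE_SUSPENDED);
	return 0;
}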
-rw-r--r--  Documentation/admin-guide/kernel-parameters.txt | 5
-rw-r--r--  Documentation/devicetree/bindings/arm/marvell/armada-37xx.txt | 19
-rw-r--r--  Documentation/devicetree/bindings/opp/opp.txt | 13
-rw-r--r--  Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt | 63
-rw-r--r--  Documentation/devicetree/bindings/power/power_domain.txt | 65
-rw-r--r--  Documentation/driver-api/pm/devices.rst | 54
-rw-r--r--  Documentation/power/pci.txt | 11
-rw-r--r--  Documentation/thermal/cpu-cooling-api.txt | 115
-rw-r--r--  MAINTAINERS | 2
-rw-r--r--  arch/arm/boot/dts/imx6ul.dtsi | 2
-rw-r--r--  arch/x86/kernel/acpi/sleep.c | 2
-rw-r--r--  drivers/acpi/device_pm.c | 27
-rw-r--r--  drivers/acpi/sleep.c | 16
-rw-r--r--  drivers/base/power/domain.c | 69
-rw-r--r--  drivers/base/power/main.c | 415
-rw-r--r--  drivers/base/power/power.h | 11
-rw-r--r--  drivers/base/power/runtime.c | 84
-rw-r--r--  drivers/base/power/sysfs.c | 182
-rw-r--r--  drivers/base/power/wakeirq.c | 8
-rw-r--r--  drivers/base/power/wakeup.c | 26
-rw-r--r--  drivers/bus/Kconfig | 2
-rw-r--r--  drivers/cpufreq/Kconfig.arm | 88
-rw-r--r--  drivers/cpufreq/Makefile | 9
-rw-r--r--  drivers/cpufreq/arm_big_little.c | 23
-rw-r--r--  drivers/cpufreq/armada-37xx-cpufreq.c | 241
-rw-r--r--  drivers/cpufreq/cpufreq-dt-platdev.c | 8
-rw-r--r--  drivers/cpufreq/cpufreq-dt.c | 27
-rw-r--r--  drivers/cpufreq/cpufreq.c | 55
-rw-r--r--  drivers/cpufreq/cpufreq_stats.c | 3
-rw-r--r--  drivers/cpufreq/imx6q-cpufreq.c | 171
-rw-r--r--  drivers/cpufreq/intel_pstate.c | 14
-rw-r--r--  drivers/cpufreq/longhaul.c | 2
-rw-r--r--  drivers/cpufreq/mediatek-cpufreq.c | 23
-rw-r--r--  drivers/cpufreq/mvebu-cpufreq.c | 16
-rw-r--r--  drivers/cpufreq/powernv-cpufreq.c | 143
-rw-r--r--  drivers/cpufreq/qoriq-cpufreq.c | 14
-rw-r--r--  drivers/cpufreq/scpi-cpufreq.c | 193
-rw-r--r--  drivers/cpufreq/ti-cpufreq.c | 51
-rw-r--r--  drivers/cpuidle/governor.c | 5
-rw-r--r--  drivers/devfreq/devfreq.c | 5
-rw-r--r--  drivers/dma/sh/rcar-dmac.c | 24
-rw-r--r--  drivers/firmware/psci_checker.c | 46
-rw-r--r--  drivers/i2c/busses/i2c-designware-core.h | 2
-rw-r--r--  drivers/i2c/busses/i2c-designware-platdrv.c | 39
-rw-r--r--  drivers/mfd/intel-lpss.c | 6
-rw-r--r--  drivers/opp/Makefile | 1
-rw-r--r--  drivers/opp/ti-opp-supply.c | 425
-rw-r--r--  drivers/pci/pci-driver.c | 21
-rw-r--r--  drivers/pci/pcie/portdrv_pci.c | 3
-rw-r--r--  drivers/platform/x86/surfacepro3_button.c | 4
-rw-r--r--  drivers/power/avs/rockchip-io-domain.c | 24
-rw-r--r--  drivers/powercap/intel_rapl.c | 99
-rw-r--r--  drivers/powercap/powercap_sys.c | 6
-rw-r--r--  drivers/scsi/scsi_transport_spi.c | 16
-rw-r--r--  drivers/thermal/cpu_cooling.c | 201
-rw-r--r--  include/linux/acpi.h | 1
-rw-r--r--  include/linux/cpu_cooling.h | 75
-rw-r--r--  include/linux/pm.h | 16
-rw-r--r--  include/linux/pm_wakeup.h | 7
-rw-r--r--  include/linux/suspend.h | 28
-rw-r--r--  include/trace/events/thermal.h | 10
-rw-r--r--  kernel/configs/nopm.config | 15
-rw-r--r--  kernel/power/main.c | 29
-rw-r--r--  kernel/power/snapshot.c | 6
-rw-r--r--  kernel/power/swap.c | 4
-rw-r--r--  tools/power/cpupower/lib/cpufreq.h | 4
-rwxr-xr-x  tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py | 15
67 files changed, 2310 insertions, 1099 deletions
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 46b26bfee27b..281c85839c17 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -223,7 +223,7 @@
 
 	acpi_sleep=	[HW,ACPI] Sleep options
 			Format: { s3_bios, s3_mode, s3_beep, s4_nohwsig,
-			old_ordering, nonvs, sci_force_enable }
+			old_ordering, nonvs, sci_force_enable, nobl }
 			See Documentation/power/video.txt for information on
 			s3_bios and s3_mode.
 			s3_beep is for debugging; it makes the PC's speaker beep
@@ -239,6 +239,9 @@
 			sci_force_enable causes the kernel to set SCI_EN directly
 			on resume from S1/S3 (which is against the ACPI spec,
 			but some broken systems don't work without it).
+			nobl causes the internal blacklist of systems known to
+			behave incorrectly in some ways with respect to system
+			suspend and resume to be ignored (use wisely).
 
 	acpi_use_timer_override [HW,ACPI]
 			Use timer override. For some broken Nvidia NF5 boards
diff --git a/Documentation/devicetree/bindings/arm/marvell/armada-37xx.txt b/Documentation/devicetree/bindings/arm/marvell/armada-37xx.txt
index 51336e5fc761..35c3c3460d17 100644
--- a/Documentation/devicetree/bindings/arm/marvell/armada-37xx.txt
+++ b/Documentation/devicetree/bindings/arm/marvell/armada-37xx.txt
@@ -14,3 +14,22 @@ following property before the previous one:
 Example:
 
 compatible = "marvell,armada-3720-db", "marvell,armada3720", "marvell,armada3710";
+
+
+Power management
+----------------
+
+For power management (particularly DVFS and AVS), the North Bridge
+Power Management component is needed:
+
+Required properties:
+- compatible     : should contain "marvell,armada-3700-nb-pm", "syscon";
+- reg            : the register start and length for the North Bridge
+                   Power Management
+
+Example:
+
+nb_pm: syscon@14000 {
+	compatible = "marvell,armada-3700-nb-pm", "syscon";
+	reg = <0x14000 0x60>;
+}
diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
index 9d733af26be7..4e4f30288c8b 100644
--- a/Documentation/devicetree/bindings/opp/opp.txt
+++ b/Documentation/devicetree/bindings/opp/opp.txt
@@ -45,6 +45,11 @@ Devices supporting OPPs must set their "operating-points-v2" property with
 phandle to a OPP table in their DT node. The OPP core will use this phandle to
 find the operating points for the device.
 
+This can contain more than one phandle for power domain providers that provide
+multiple power domains. That is, one phandle for each power domain. If only one
+phandle is available, then the same OPP table will be used for all power domains
+provided by the power domain provider.
+
 If required, this can be extended for SoC vendor specific bindings. Such bindings
 should be documented as Documentation/devicetree/bindings/power/<vendor>-opp.txt
 and should have a compatible description like: "operating-points-v2-<vendor>".
@@ -154,6 +159,14 @@ Optional properties:
 
 - status: Marks the node enabled/disabled.
 
+- required-opp: This contains phandle to an OPP node in another device's OPP
+  table. It may contain an array of phandles, where each phandle points to an
+  OPP of a different device. It should not contain multiple phandles to the OPP
+  nodes in the same OPP table. This specifies the minimum required OPP of the
+  device(s), whose OPP's phandle is present in this property, for the
+  functioning of the current device at the current OPP (where this property is
+  present).
+
 Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states together.
 
 / {
diff --git a/Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt b/Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt
new file mode 100644
index 000000000000..832346e489a3
--- /dev/null
+++ b/Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt
@@ -0,0 +1,63 @@
+Texas Instruments OMAP compatible OPP supply description
+
+OMAP5, DRA7, and AM57 family of SoCs have Class0 AVS eFuse registers which
+contain data that can be used to adjust voltages programmed for some of their
+supplies for more efficient operation. This binding provides the information
+needed to read these values and use them to program the main regulator during
+an OPP transition.
+
+Also, some supplies may have an associated vbb-supply, an Adaptive Body
+Bias regulator, which must be transitioned in a specific sequence with regards
+to the vdd-supply and clk when making an OPP transition. By supplying two
+regulators to the device that will undergo OPP transitions we can make use
+of the multi regulator binding that is part of the OPP core described here [1]
+to describe both regulators needed by the platform.
+
+[1] Documentation/devicetree/bindings/opp/opp.txt
+
+Required Properties for Device Node:
+- vdd-supply: phandle to regulator controlling VDD supply
+- vbb-supply: phandle to regulator controlling Body Bias supply
+  (Usually Adaptive Body Bias regulator)
+
+Required Properties for opp-supply node:
+- compatible: Should be one of:
+	"ti,omap-opp-supply" - basic OPP supply controlling VDD and VBB
+	"ti,omap5-opp-supply" - OMAP5+ optimized voltages in efuse(class0) VDD
+	    along with VBB
+	"ti,omap5-core-opp-supply" - OMAP5+ optimized voltages in efuse(class0)
+	    VDD but no VBB.
+- reg: Address and length of the efuse register set for the device (mandatory
+  only for "ti,omap5-opp-supply")
+- ti,efuse-settings: An array of u32 tuple items providing information about
+  optimized efuse configuration. Each item consists of the following:
+	volt: voltage in uV - reference voltage (OPP voltage)
+	efuse_offset: efuse offset from reg where the optimized voltage is stored.
+- ti,absolute-max-voltage-uv: absolute maximum voltage for the OPP supply.
+
+Example:
+
+/* Device Node (CPU) */
+cpus {
+	cpu0: cpu@0 {
+		device_type = "cpu";
+
+		...
+
+		vdd-supply = <&vcc>;
+		vbb-supply = <&abb_mpu>;
+	};
+};
+
+/* OMAP OPP Supply with Class0 registers */
+opp_supply_mpu: opp_supply@4a003b20 {
+	compatible = "ti,omap5-opp-supply";
+	reg = <0x4a003b20 0x8>;
+	ti,efuse-settings = <
+		/* uV   offset */
+		1060000 0x0
+		1160000 0x4
+		1210000 0x8
+	>;
+	ti,absolute-max-voltage-uv = <1500000>;
+};
diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
index 14bd9e945ff6..f3355313c020 100644
--- a/Documentation/devicetree/bindings/power/power_domain.txt
+++ b/Documentation/devicetree/bindings/power/power_domain.txt
@@ -40,6 +40,12 @@ Optional properties:
   domain's idle states. In the absence of this property, the domain would be
   considered as capable of being powered-on or powered-off.
 
+- operating-points-v2 : Phandles to the OPP tables of power domains provided by
+  a power domain provider. If the provider provides a single power domain only
+  or all the power domains provided by the provider have identical OPP tables,
+  then this shall contain a single phandle. Refer to ../opp/opp.txt for more
+  information.
+
 Example:
 
 	power: power-controller@12340000 {
@@ -120,4 +126,63 @@ The node above defines a typical PM domain consumer device, which is located
 inside a PM domain with index 0 of a power controller represented by a node
 with the label "power".
 
+Optional properties:
+- required-opp: This contains phandle to an OPP node in another device's OPP
+  table. It may contain an array of phandles, where each phandle points to an
+  OPP of a different device. It should not contain multiple phandles to the OPP
+  nodes in the same OPP table. This specifies the minimum required OPP of the
+  device(s), whose OPP's phandle is present in this property, for the
+  functioning of the current device at the current OPP (where this property is
+  present).
+
+Example:
+- OPP table for domain provider that provides two domains.
+
+	domain0_opp_table: opp-table0 {
+		compatible = "operating-points-v2";
+
+		domain0_opp_0: opp-1000000000 {
+			opp-hz = /bits/ 64 <1000000000>;
+			opp-microvolt = <975000 970000 985000>;
+		};
+		domain0_opp_1: opp-1100000000 {
+			opp-hz = /bits/ 64 <1100000000>;
+			opp-microvolt = <1000000 980000 1010000>;
+		};
+	};
+
+	domain1_opp_table: opp-table1 {
+		compatible = "operating-points-v2";
+
+		domain1_opp_0: opp-1200000000 {
+			opp-hz = /bits/ 64 <1200000000>;
+			opp-microvolt = <975000 970000 985000>;
+		};
+		domain1_opp_1: opp-1300000000 {
+			opp-hz = /bits/ 64 <1300000000>;
+			opp-microvolt = <1000000 980000 1010000>;
+		};
+	};
+
+	power: power-controller@12340000 {
+		compatible = "foo,power-controller";
+		reg = <0x12340000 0x1000>;
+		#power-domain-cells = <1>;
+		operating-points-v2 = <&domain0_opp_table>, <&domain1_opp_table>;
+	};
+
+	leaky-device0@12350000 {
+		compatible = "foo,i-leak-current";
+		reg = <0x12350000 0x1000>;
+		power-domains = <&power 0>;
+		required-opp = <&domain0_opp_0>;
+	};
+
+	leaky-device1@12350000 {
+		compatible = "foo,i-leak-current";
+		reg = <0x12350000 0x1000>;
+		power-domains = <&power 1>;
+		required-opp = <&domain1_opp_1>;
+	};
+
 [1]. Documentation/devicetree/bindings/power/domain-idle-state.txt
diff --git a/Documentation/driver-api/pm/devices.rst b/Documentation/driver-api/pm/devices.rst
index 53c1b0b06da5..1128705a5731 100644
--- a/Documentation/driver-api/pm/devices.rst
+++ b/Documentation/driver-api/pm/devices.rst
@@ -777,17 +777,51 @@ The driver can indicate that by setting ``DPM_FLAG_SMART_SUSPEND`` in
 runtime suspend at the beginning of the ``suspend_late`` phase of system-wide
 suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM
 has been disabled for it, under the assumption that its state should not change
-after that point until the system-wide transition is over. If that happens, the
-driver's system-wide resume callbacks, if present, may still be invoked during
-the subsequent system-wide resume transition and the device's runtime power
-management status may be set to "active" before enabling runtime PM for it,
-so the driver must be prepared to cope with the invocation of its system-wide
-resume callbacks back-to-back with its ``->runtime_suspend`` one (without the
-intervening ``->runtime_resume`` and so on) and the final state of the device
-must reflect the "active" status for runtime PM in that case.
+after that point until the system-wide transition is over (the PM core itself
+does that for devices whose "noirq", "late" and "early" system-wide PM callbacks
+are executed directly by it). If that happens, the driver's system-wide resume
+callbacks, if present, may still be invoked during the subsequent system-wide
+resume transition and the device's runtime power management status may be set
+to "active" before enabling runtime PM for it, so the driver must be prepared to
+cope with the invocation of its system-wide resume callbacks back-to-back with
+its ``->runtime_suspend`` one (without the intervening ``->runtime_resume`` and
+so on) and the final state of the device must reflect the "active" runtime PM
+status in that case.
 
 During system-wide resume from a sleep state it's easiest to put devices into
 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
-Refer to that document for more information regarding this particular issue as
-well as for information on the device runtime power management framework in
-general.
+[Refer to that document for more information regarding this particular issue as
+well as for information on the device runtime power management framework in
+general.]
+
+However, it often is desirable to leave devices in suspend after system
+transitions to the working state, especially if those devices had been in
+runtime suspend before the preceding system-wide suspend (or analogous)
+transition. Device drivers can use the ``DPM_FLAG_LEAVE_SUSPENDED`` flag to
+indicate to the PM core (and middle-layer code) that they prefer the specific
+devices handled by them to be left suspended and they have no problems with
+skipping their system-wide resume callbacks for this reason. Whether or not the
+devices will actually be left in suspend may depend on their state before the
+given system suspend-resume cycle and on the type of the system transition under
+way. In particular, devices are not left suspended if that transition is a
+restore from hibernation, as device states are not guaranteed to be reflected
+by the information stored in the hibernation image in that case.
+
+The middle-layer code involved in the handling of the device is expected to
+indicate to the PM core if the device may be left in suspend by setting its
+:c:member:`power.may_skip_resume` status bit which is checked by the PM core
+during the "noirq" phase of the preceding system-wide suspend (or analogous)
+transition. The middle layer is then responsible for handling the device as
+appropriate in its "noirq" resume callback, which is executed regardless of
+whether or not the device is left suspended, but the other resume callbacks
+(except for ``->complete``) will be skipped automatically by the PM core if the
+device really can be left in suspend.
+
+For devices whose "noirq", "late" and "early" driver callbacks are invoked
+directly by the PM core, all of the system-wide resume callbacks are skipped if
+``DPM_FLAG_LEAVE_SUSPENDED`` is set and the device is in runtime suspend during
+the ``suspend_noirq`` (or analogous) phase or the transition under way is a
+proper system suspend (rather than anything related to hibernation) and the
+device's wakeup settings are suitable for runtime PM (that is, it cannot
+generate wakeup signals at all or it is allowed to wake up the system from
+sleep).
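A minimal sketch of the middle-layer contract described above, modeled on the
ACPI PM domain changes further down in this patch (the foo_* callbacks are
hypothetical; the helpers are the ones named in the text):

static int foo_suspend_noirq(struct device *dev)
{
	/* Suspend callbacks were skipped on the way down, so resume
	 * callbacks may be skipped on the way up as well. */
	if (dev_pm_smart_suspend_and_suspended(dev)) {
		dev->power.may_skip_resume = true;
		return 0;
	}
	return pm_generic_suspend_noirq(dev);
}

static int foo_resume_noirq(struct device *dev)
{
	/* The PM core decided the device can stay in suspend; the other
	 * resume callbacks will be skipped automatically. */
	if (dev_pm_may_skip_resume(dev))
		return 0;
	return pm_generic_resume_noirq(dev);
}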
diff --git a/Documentation/power/pci.txt b/Documentation/power/pci.txt
index 704cd36079b8..8eaf9ee24d43 100644
--- a/Documentation/power/pci.txt
+++ b/Documentation/power/pci.txt
@@ -994,6 +994,17 @@ into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(),
 the function will set the power.direct_complete flag for it (to make the PM core
 skip the subsequent "thaw" callbacks for it) and return.
 
+Setting the DPM_FLAG_LEAVE_SUSPENDED flag means that the driver prefers the
+device to be left in suspend after system-wide transitions to the working state.
+This flag is checked by the PM core, but the PCI bus type informs the PM core
+which devices may be left in suspend from its perspective (that happens during
+the "noirq" phase of system-wide suspend and analogous transitions) and next it
+uses the dev_pm_may_skip_resume() helper to decide whether or not to return from
+pci_pm_resume_noirq() early, as the PM core will skip the remaining resume
+callbacks for the device during the transition under way and will set its
+runtime PM status to "suspended" if dev_pm_may_skip_resume() returns "true" for
+it.
+
 3.2. Device Runtime Power Management
 ------------------------------------
 In addition to providing device power management callbacks PCI device drivers
diff --git a/Documentation/thermal/cpu-cooling-api.txt b/Documentation/thermal/cpu-cooling-api.txt
index 71653584cd03..7df567eaea1a 100644
--- a/Documentation/thermal/cpu-cooling-api.txt
+++ b/Documentation/thermal/cpu-cooling-api.txt
@@ -26,39 +26,16 @@ the user. The registration APIs returns the cooling device pointer.
     clip_cpus: cpumask of cpus where the frequency constraints will happen.
 
 1.1.2 struct thermal_cooling_device *of_cpufreq_cooling_register(
-	struct device_node *np, const struct cpumask *clip_cpus)
+	struct cpufreq_policy *policy)
 
     This interface function registers the cpufreq cooling device with
     the name "thermal-cpufreq-%x" linking it with a device tree node, in
     order to bind it via the thermal DT code. This api can support multiple
     instances of cpufreq cooling devices.
 
-    np: pointer to the cooling device device tree node
-    clip_cpus: cpumask of cpus where the frequency constraints will happen.
+    policy: CPUFreq policy.
 
-1.1.3 struct thermal_cooling_device *cpufreq_power_cooling_register(
-	const struct cpumask *clip_cpus, u32 capacitance,
-	get_static_t plat_static_func)
-
-Similar to cpufreq_cooling_register, this function registers a cpufreq
-cooling device. Using this function, the cooling device will
-implement the power extensions by using a simple cpu power model. The
-cpus must have registered their OPPs using the OPP library.
-
-The additional parameters are needed for the power model (See 2. Power
-models). "capacitance" is the dynamic power coefficient (See 2.1
-Dynamic power). "plat_static_func" is a function to calculate the
-static power consumed by these cpus (See 2.2 Static power).
-
-1.1.4 struct thermal_cooling_device *of_cpufreq_power_cooling_register(
-	struct device_node *np, const struct cpumask *clip_cpus, u32 capacitance,
-	get_static_t plat_static_func)
-
-Similar to cpufreq_power_cooling_register, this function register a
-cpufreq cooling device with power extensions using the device tree
-information supplied by the np parameter.
-
-1.1.5 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
+1.1.3 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
 
     This interface function unregisters the "thermal-cpufreq-%x" cooling device.
 
@@ -67,20 +44,14 @@ information supplied by the np parameter.
 2. Power models
 
 The power API registration functions provide a simple power model for
-CPUs. The current power is calculated as dynamic + (optionally)
-static power. This power model requires that the operating-points of
+CPUs. The current power is calculated as dynamic power (static power isn't
+supported currently). This power model requires that the operating-points of
 the CPUs are registered using the kernel's opp library and the
 `cpufreq_frequency_table` is assigned to the `struct device` of the
 cpu. If you are using CONFIG_CPUFREQ_DT then the
 `cpufreq_frequency_table` should already be assigned to the cpu
 device.
 
-The `plat_static_func` parameter of `cpufreq_power_cooling_register()`
-and `of_cpufreq_power_cooling_register()` is optional. If you don't
-provide it, only dynamic power will be considered.
-
-2.1 Dynamic power
-
 The dynamic power consumption of a processor depends on many factors.
 For a given processor implementation the primary factors are:
 
@@ -119,79 +90,3 @@ mW/MHz/uVolt^2. Typical values for mobile CPUs might lie in range
 from 100 to 500. For reference, the approximate values for the SoC in
 ARM's Juno Development Platform are 530 for the Cortex-A57 cluster and
 140 for the Cortex-A53 cluster.
-
-
-2.2 Static power
-
-Static leakage power consumption depends on a number of factors. For a
-given circuit implementation the primary factors are:
-
-- Time the circuit spends in each 'power state'
-- Temperature
-- Operating voltage
-- Process grade
-
-The time the circuit spends in each 'power state' for a given
-evaluation period at first order means OFF or ON. However,
-'retention' states can also be supported that reduce power during
-inactive periods without loss of context.
-
-Note: The visibility of state entries to the OS can vary, according to
-platform specifics, and this can then impact the accuracy of a model
-based on OS state information alone. It might be possible in some
-cases to extract more accurate information from system resources.
-
-The temperature, operating voltage and process 'grade' (slow to fast)
-of the circuit are all significant factors in static leakage power
-consumption. All of these have complex relationships to static power.
-
-Circuit implementation specific factors include the chosen silicon
-process as well as the type, number and size of transistors in both
-the logic gates and any RAM elements included.
-
-The static power consumption modelling must take into account the
-power managed regions that are implemented. Taking the example of an
-ARM processor cluster, the modelling would take into account whether
-each CPU can be powered OFF separately or if only a single power
-region is implemented for the complete cluster.
-
-In one view, there are others, a static power consumption model can
-then start from a set of reference values for each power managed
-region (e.g. CPU, Cluster/L2) in each state (e.g. ON, OFF) at an
-arbitrary process grade, voltage and temperature point. These values
-are then scaled for all of the following: the time in each state, the
-process grade, the current temperature and the operating voltage.
-However, since both implementation specific and complex relationships
-dominate the estimate, the appropriate interface to the model from the
-cpu cooling device is to provide a function callback that calculates
-the static power in this platform. When registering the cpu cooling
-device pass a function pointer that follows the `get_static_t`
-prototype:
-
-	int plat_get_static(cpumask_t *cpumask, int interval,
-			    unsigned long voltage, u32 &power);
-
-`cpumask` is the cpumask of the cpus involved in the calculation.
-`voltage` is the voltage at which they are operating. The function
-should calculate the average static power for the last `interval`
-milliseconds. It returns 0 on success, -E* on error. If it
-succeeds, it should store the static power in `power`. Reading the
-temperature of the cpus described by `cpumask` is left for
-plat_get_static() to do as the platform knows best which thermal
-sensor is closest to the cpu.
-
-If `plat_static_func` is NULL, static power is considered to be
-negligible for this platform and only dynamic power is considered.
-
-The platform specific callback can then use any combination of tables
-and/or equations to permute the estimated value. Process grade
-information is not passed to the model since access to such data, from
-on-chip measurement capability or manufacture time data, is platform
-specific.
-
-Note: the significance of static power for CPUs in comparison to
-dynamic power is highly dependent on implementation. Given the
-potential complexity in implementation, the importance and accuracy of
-its inclusion when using cpu cooling devices should be assessed on a
-case by case basis.
-
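With the reworked API above, registration from a cpufreq driver's ->ready()
callback reduces to a single call; a rough sketch assuming a hypothetical foo_
driver (this mirrors how the generic DT cpufreq driver uses it, but is not
verbatim kernel code):

#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>

static struct thermal_cooling_device *foo_cdev;

static void foo_cpufreq_ready(struct cpufreq_policy *policy)
{
	/* The helper now takes only the policy; it returns NULL when no
	 * cooling device is set up (e.g. no #cooling-cells property). */
	foo_cdev = of_cpufreq_cooling_register(policy);
}

static int foo_cpufreq_exit(struct cpufreq_policy *policy)
{
	cpufreq_cooling_unregister(foo_cdev);	/* tolerates NULL */
	return 0;
}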
diff --git a/MAINTAINERS b/MAINTAINERS
index 5046003ea162..cf729942f33d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1591,6 +1591,7 @@ F: arch/arm/boot/dts/kirkwood*
 F:	arch/arm/configs/mvebu_*_defconfig
 F:	arch/arm/mach-mvebu/
 F:	arch/arm64/boot/dts/marvell/armada*
+F:	drivers/cpufreq/armada-37xx-cpufreq.c
 F:	drivers/cpufreq/mvebu-cpufreq.c
 F:	drivers/irqchip/irq-armada-370-xp.c
 F:	drivers/irqchip/irq-mvebu-*
@@ -10889,6 +10890,7 @@ F: include/linux/pm.h
 F:	include/linux/pm_*
 F:	include/linux/powercap.h
 F:	drivers/powercap/
+F:	kernel/configs/nopm.config
 
 POWER STATE COORDINATION INTERFACE (PSCI)
 M:	Mark Rutland <mark.rutland@arm.com>
diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
index d5181f85ca9c..963e1698fe1d 100644
--- a/arch/arm/boot/dts/imx6ul.dtsi
+++ b/arch/arm/boot/dts/imx6ul.dtsi
@@ -68,12 +68,14 @@
 		clock-latency = <61036>; /* two CLK32 periods */
 		operating-points = <
 			/* kHz	uV */
+			696000	1275000
 			528000	1175000
 			396000	1025000
 			198000	950000
 		>;
 		fsl,soc-operating-points = <
 			/* KHz	uV */
+			696000	1275000
 			528000	1175000
 			396000	1175000
 			198000	1175000
diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
index 7188aea91549..f1915b744052 100644
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -138,6 +138,8 @@ static int __init acpi_sleep_setup(char *str)
 		acpi_nvs_nosave_s3();
 	if (strncmp(str, "old_ordering", 12) == 0)
 		acpi_old_suspend_ordering();
+	if (strncmp(str, "nobl", 4) == 0)
+		acpi_sleep_no_blacklist();
 	str = strchr(str, ',');
 	if (str != NULL)
 		str += strspn(str, ", \t");
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index a4c8ad98560d..c4d0a1c912f0 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -990,7 +990,7 @@ void acpi_subsys_complete(struct device *dev)
 	 * the sleep state it is going out of and it has never been resumed till
 	 * now, resume it in case the firmware powered it up.
 	 */
-	if (dev->power.direct_complete && pm_resume_via_firmware())
+	if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
 		pm_request_resume(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_complete);
@@ -1039,10 +1039,28 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
  */
 int acpi_subsys_suspend_noirq(struct device *dev)
 {
-	if (dev_pm_smart_suspend_and_suspended(dev))
+	int ret;
+
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.may_skip_resume = true;
 		return 0;
+	}
+
+	ret = pm_generic_suspend_noirq(dev);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the target system sleep state is suspend-to-idle, it is sufficient
+	 * to check whether or not the device's wakeup settings are good for
+	 * runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
+	 * acpi_subsys_complete() to take care of fixing up the device's state
+	 * anyway, if need be.
+	 */
+	dev->power.may_skip_resume = device_may_wakeup(dev) ||
+					!device_can_wakeup(dev);
 
-	return pm_generic_suspend_noirq(dev);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
 
@@ -1052,6 +1070,9 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
  */
 int acpi_subsys_resume_noirq(struct device *dev)
 {
+	if (dev_pm_may_skip_resume(dev))
+		return 0;
+
 	/*
 	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
 	 * during system suspend, so update their runtime PM status to "active"
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index 8082871b409a..46cde0912762 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -367,10 +367,20 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
 	{},
 };
 
+static bool ignore_blacklist;
+
+void __init acpi_sleep_no_blacklist(void)
+{
+	ignore_blacklist = true;
+}
+
 static void __init acpi_sleep_dmi_check(void)
 {
 	int year;
 
+	if (ignore_blacklist)
+		return;
+
 	if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) && year >= 2012)
 		acpi_nvs_nosave_s3();
 
@@ -697,7 +707,8 @@ static const struct acpi_device_id lps0_device_ids[] = {
 #define ACPI_LPS0_ENTRY		5
 #define ACPI_LPS0_EXIT		6
 
-#define ACPI_S2IDLE_FUNC_MASK	((1 << ACPI_LPS0_ENTRY) | (1 << ACPI_LPS0_EXIT))
+#define ACPI_LPS0_SCREEN_MASK	((1 << ACPI_LPS0_SCREEN_OFF) | (1 << ACPI_LPS0_SCREEN_ON))
+#define ACPI_LPS0_PLATFORM_MASK	((1 << ACPI_LPS0_ENTRY) | (1 << ACPI_LPS0_EXIT))
 
 static acpi_handle lps0_device_handle;
 static guid_t lps0_dsm_guid;
@@ -900,7 +911,8 @@ static int lps0_device_attach(struct acpi_device *adev,
 	if (out_obj && out_obj->type == ACPI_TYPE_BUFFER) {
 		char bitmask = *(char *)out_obj->buffer.pointer;
 
-		if ((bitmask & ACPI_S2IDLE_FUNC_MASK) == ACPI_S2IDLE_FUNC_MASK) {
+		if ((bitmask & ACPI_LPS0_PLATFORM_MASK) == ACPI_LPS0_PLATFORM_MASK ||
+		    (bitmask & ACPI_LPS0_SCREEN_MASK) == ACPI_LPS0_SCREEN_MASK) {
 			lps0_dsm_func_mask = bitmask;
 			lps0_device_handle = adev->handle;
 			/*
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 0c80bea05bcb..528b24149bc7 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -1032,15 +1032,12 @@ static int genpd_prepare(struct device *dev)
 static int genpd_finish_suspend(struct device *dev, bool poweroff)
 {
 	struct generic_pm_domain *genpd;
-	int ret;
+	int ret = 0;
 
 	genpd = dev_to_genpd(dev);
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
-		return 0;
-
 	if (poweroff)
 		ret = pm_generic_poweroff_noirq(dev);
 	else
@@ -1048,10 +1045,19 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
 	if (ret)
 		return ret;
 
-	if (genpd->dev_ops.stop && genpd->dev_ops.start) {
-		ret = pm_runtime_force_suspend(dev);
-		if (ret)
+	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
+		return 0;
+
+	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
+	    !pm_runtime_status_suspended(dev)) {
+		ret = genpd_stop_dev(genpd, dev);
+		if (ret) {
+			if (poweroff)
+				pm_generic_restore_noirq(dev);
+			else
+				pm_generic_resume_noirq(dev);
 			return ret;
+		}
 	}
 
 	genpd_lock(genpd);
@@ -1085,7 +1091,7 @@ static int genpd_suspend_noirq(struct device *dev)
 static int genpd_resume_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
-	int ret = 0;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
@@ -1094,21 +1100,21 @@ static int genpd_resume_noirq(struct device *dev)
 		return -EINVAL;
 
 	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
-		return 0;
+		return pm_generic_resume_noirq(dev);
 
 	genpd_lock(genpd);
 	genpd_sync_power_on(genpd, true, 0);
 	genpd->suspended_count--;
 	genpd_unlock(genpd);
 
-	if (genpd->dev_ops.stop && genpd->dev_ops.start)
-		ret = pm_runtime_force_resume(dev);
-
-	ret = pm_generic_resume_noirq(dev);
-	if (ret)
-		return ret;
+	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
+	    !pm_runtime_status_suspended(dev)) {
+		ret = genpd_start_dev(genpd, dev);
+		if (ret)
+			return ret;
+	}
 
-	return ret;
+	return pm_generic_resume_noirq(dev);
 }
 
 /**
@@ -1135,8 +1141,9 @@ static int genpd_freeze_noirq(struct device *dev)
 	if (ret)
 		return ret;
 
-	if (genpd->dev_ops.stop && genpd->dev_ops.start)
-		ret = pm_runtime_force_suspend(dev);
+	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
+	    !pm_runtime_status_suspended(dev))
+		ret = genpd_stop_dev(genpd, dev);
 
 	return ret;
 }
@@ -1159,8 +1166,9 @@ static int genpd_thaw_noirq(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	if (genpd->dev_ops.stop && genpd->dev_ops.start) {
-		ret = pm_runtime_force_resume(dev);
+	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
+	    !pm_runtime_status_suspended(dev)) {
+		ret = genpd_start_dev(genpd, dev);
 		if (ret)
 			return ret;
 	}
@@ -1217,8 +1225,9 @@ static int genpd_restore_noirq(struct device *dev)
 	genpd_sync_power_on(genpd, true, 0);
 	genpd_unlock(genpd);
 
-	if (genpd->dev_ops.stop && genpd->dev_ops.start) {
-		ret = pm_runtime_force_resume(dev);
+	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
+	    !pm_runtime_status_suspended(dev)) {
+		ret = genpd_start_dev(genpd, dev);
 		if (ret)
 			return ret;
 	}
@@ -2199,20 +2208,8 @@ int genpd_dev_pm_attach(struct device *dev)
 
 	ret = of_parse_phandle_with_args(dev->of_node, "power-domains",
 					 "#power-domain-cells", 0, &pd_args);
-	if (ret < 0) {
-		if (ret != -ENOENT)
-			return ret;
-
-		/*
-		 * Try legacy Samsung-specific bindings
-		 * (for backwards compatibility of DT ABI)
-		 */
-		pd_args.args_count = 0;
-		pd_args.np = of_parse_phandle(dev->of_node,
-					      "samsung,power-domain", 0);
-		if (!pd_args.np)
-			return -ENOENT;
-	}
+	if (ret < 0)
+		return ret;
 
 	mutex_lock(&gpd_list_lock);
 	pd = genpd_get_from_provider(&pd_args);
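Condensed restatement of the genpd pattern introduced above: only stop a
device that is not already runtime-suspended, and undo the generic suspend on
failure (a sketch of code internal to drivers/base/power/domain.c, not a
standalone example; genpd_stop_dev() is private to that file):

static int genpd_finish_suspend_sketch(struct device *dev, bool poweroff)
{
	struct generic_pm_domain *genpd = dev_to_genpd(dev);
	int ret = poweroff ? pm_generic_poweroff_noirq(dev)
			   : pm_generic_suspend_noirq(dev);

	if (ret)
		return ret;

	/* A wakeup path through an active domain needs the device kept up. */
	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
		return 0;

	/* Stop the device only if it is not already runtime-suspended. */
	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
	    !pm_runtime_status_suspended(dev)) {
		ret = genpd_stop_dev(genpd, dev);
		if (ret) {
			if (poweroff)
				pm_generic_restore_noirq(dev);
			else
				pm_generic_resume_noirq(dev);
		}
	}

	return ret;
}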
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 08744b572af6..02a497e7c785 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -18,7 +18,6 @@
  */
 
 #include <linux/device.h>
-#include <linux/kallsyms.h>
 #include <linux/export.h>
 #include <linux/mutex.h>
 #include <linux/pm.h>
@@ -541,6 +540,73 @@ void dev_pm_skip_next_resume_phases(struct device *dev)
 }
 
 /**
+ * suspend_event - Return a "suspend" message for given "resume" one.
+ * @resume_msg: PM message representing a system-wide resume transition.
+ */
+static pm_message_t suspend_event(pm_message_t resume_msg)
+{
+	switch (resume_msg.event) {
+	case PM_EVENT_RESUME:
+		return PMSG_SUSPEND;
+	case PM_EVENT_THAW:
+	case PM_EVENT_RESTORE:
+		return PMSG_FREEZE;
+	case PM_EVENT_RECOVER:
+		return PMSG_HIBERNATE;
+	}
+	return PMSG_ON;
+}
+
+/**
+ * dev_pm_may_skip_resume - System-wide device resume optimization check.
+ * @dev: Target device.
+ *
+ * Checks whether or not the device may be left in suspend after a system-wide
+ * transition to the working state.
+ */
+bool dev_pm_may_skip_resume(struct device *dev)
+{
+	return !dev->power.must_resume && pm_transition.event != PM_EVENT_RESTORE;
+}
+
+static pm_callback_t dpm_subsys_resume_noirq_cb(struct device *dev,
+						pm_message_t state,
+						const char **info_p)
+{
+	pm_callback_t callback;
+	const char *info;
+
+	if (dev->pm_domain) {
+		info = "noirq power domain ";
+		callback = pm_noirq_op(&dev->pm_domain->ops, state);
+	} else if (dev->type && dev->type->pm) {
+		info = "noirq type ";
+		callback = pm_noirq_op(dev->type->pm, state);
+	} else if (dev->class && dev->class->pm) {
+		info = "noirq class ";
+		callback = pm_noirq_op(dev->class->pm, state);
+	} else if (dev->bus && dev->bus->pm) {
+		info = "noirq bus ";
+		callback = pm_noirq_op(dev->bus->pm, state);
+	} else {
+		return NULL;
+	}
+
+	if (info_p)
+		*info_p = info;
+
+	return callback;
+}
+
+static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev,
+						 pm_message_t state,
+						 const char **info_p);
+
+static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev,
+						pm_message_t state,
+						const char **info_p);
+
+/**
  * device_resume_noirq - Execute a "noirq resume" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
@@ -551,8 +617,9 @@ void dev_pm_skip_next_resume_phases(struct device *dev)
  */
 static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 {
-	pm_callback_t callback = NULL;
-	const char *info = NULL;
+	pm_callback_t callback;
+	const char *info;
+	bool skip_resume;
 	int error = 0;
 
 	TRACE_DEVICE(dev);
@@ -566,29 +633,61 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool asyn
 
 	dpm_wait_for_superior(dev, async);
 
-	if (dev->pm_domain) {
-		info = "noirq power domain ";
-		callback = pm_noirq_op(&dev->pm_domain->ops, state);
-	} else if (dev->type && dev->type->pm) {
-		info = "noirq type ";
-		callback = pm_noirq_op(dev->type->pm, state);
-	} else if (dev->class && dev->class->pm) {
-		info = "noirq class ";
-		callback = pm_noirq_op(dev->class->pm, state);
-	} else if (dev->bus && dev->bus->pm) {
-		info = "noirq bus ";
-		callback = pm_noirq_op(dev->bus->pm, state);
+	skip_resume = dev_pm_may_skip_resume(dev);
+
+	callback = dpm_subsys_resume_noirq_cb(dev, state, &info);
+	if (callback)
+		goto Run;
+
+	if (skip_resume)
+		goto Skip;
+
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		pm_message_t suspend_msg = suspend_event(state);
+
+		/*
+		 * If "freeze" callbacks have been skipped during a transition
+		 * related to hibernation, the subsequent "thaw" callbacks must
+		 * be skipped too or bad things may happen. Otherwise, resume
+		 * callbacks are going to be run for the device, so its runtime
+		 * PM status must be changed to reflect the new state after the
+		 * transition under way.
+		 */
+		if (!dpm_subsys_suspend_late_cb(dev, suspend_msg, NULL) &&
+		    !dpm_subsys_suspend_noirq_cb(dev, suspend_msg, NULL)) {
+			if (state.event == PM_EVENT_THAW) {
+				skip_resume = true;
+				goto Skip;
+			} else {
+				pm_runtime_set_active(dev);
+			}
+		}
 	}
 
-	if (!callback && dev->driver && dev->driver->pm) {
+	if (dev->driver && dev->driver->pm) {
 		info = "noirq driver ";
 		callback = pm_noirq_op(dev->driver->pm, state);
 	}
 
+Run:
 	error = dpm_run_callback(callback, dev, state, info);
+
+Skip:
 	dev->power.is_noirq_suspended = false;
 
- Out:
+	if (skip_resume) {
+		/*
+		 * The device is going to be left in suspend, but it might not
+		 * have been in runtime suspend before the system suspended, so
+		 * its runtime PM status needs to be updated to avoid confusing
+		 * the runtime PM framework when runtime PM is enabled for the
+		 * device again.
+		 */
+		pm_runtime_set_suspended(dev);
+		dev_pm_skip_next_resume_phases(dev);
+	}
+
+Out:
 	complete_all(&dev->power.completion);
 	TRACE_RESUME(error);
 	return error;
@@ -681,6 +780,35 @@ void dpm_resume_noirq(pm_message_t state)
 	dpm_noirq_end();
 }
 
+static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev,
+						pm_message_t state,
+						const char **info_p)
+{
+	pm_callback_t callback;
+	const char *info;
+
+	if (dev->pm_domain) {
+		info = "early power domain ";
+		callback = pm_late_early_op(&dev->pm_domain->ops, state);
+	} else if (dev->type && dev->type->pm) {
+		info = "early type ";
+		callback = pm_late_early_op(dev->type->pm, state);
+	} else if (dev->class && dev->class->pm) {
+		info = "early class ";
+		callback = pm_late_early_op(dev->class->pm, state);
+	} else if (dev->bus && dev->bus->pm) {
+		info = "early bus ";
+		callback = pm_late_early_op(dev->bus->pm, state);
+	} else {
+		return NULL;
+	}
+
+	if (info_p)
+		*info_p = info;
+
+	return callback;
+}
+
 /**
  * device_resume_early - Execute an "early resume" callback for given device.
  * @dev: Device to handle.
@@ -691,8 +819,8 @@ void dpm_resume_noirq(pm_message_t state)
  */
 static int device_resume_early(struct device *dev, pm_message_t state, bool async)
 {
-	pm_callback_t callback = NULL;
-	const char *info = NULL;
+	pm_callback_t callback;
+	const char *info;
 	int error = 0;
 
 	TRACE_DEVICE(dev);
@@ -706,19 +834,7 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool asyn
 
 	dpm_wait_for_superior(dev, async);
 
-	if (dev->pm_domain) {
-		info = "early power domain ";
-		callback = pm_late_early_op(&dev->pm_domain->ops, state);
-	} else if (dev->type && dev->type->pm) {
-		info = "early type ";
-		callback = pm_late_early_op(dev->type->pm, state);
-	} else if (dev->class && dev->class->pm) {
-		info = "early class ";
-		callback = pm_late_early_op(dev->class->pm, state);
-	} else if (dev->bus && dev->bus->pm) {
-		info = "early bus ";
-		callback = pm_late_early_op(dev->bus->pm, state);
-	}
+	callback = dpm_subsys_resume_early_cb(dev, state, &info);
 
 	if (!callback && dev->driver && dev->driver->pm) {
 		info = "early driver ";
@@ -1089,6 +1205,77 @@ static pm_message_t resume_event(pm_message_t sleep_state)
1089 return PMSG_ON; 1205 return PMSG_ON;
1090} 1206}
1091 1207
1208static void dpm_superior_set_must_resume(struct device *dev)
1209{
1210 struct device_link *link;
1211 int idx;
1212
1213 if (dev->parent)
1214 dev->parent->power.must_resume = true;
1215
1216 idx = device_links_read_lock();
1217
1218 list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
1219 link->supplier->power.must_resume = true;
1220
1221 device_links_read_unlock(idx);
1222}
1223
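
The supplier list walked above is populated by device links; a sketch of how such a link is created elsewhere, with foo_bind and its arguments standing in for real code:

    #include <linux/device.h>
    #include <linux/errno.h>

    static int foo_bind(struct device *consumer, struct device *supplier)
    {
            struct device_link *link;

            /* If the consumer ends up with must_resume set, the helper
             * above marks this supplier (and the consumer's parent) as
             * must_resume too. */
            link = device_link_add(consumer, supplier, DL_FLAG_PM_RUNTIME);
            return link ? 0 : -ENODEV;
    }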
1224static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev,
1225 pm_message_t state,
1226 const char **info_p)
1227{
1228 pm_callback_t callback;
1229 const char *info;
1230
1231 if (dev->pm_domain) {
1232 info = "noirq power domain ";
1233 callback = pm_noirq_op(&dev->pm_domain->ops, state);
1234 } else if (dev->type && dev->type->pm) {
1235 info = "noirq type ";
1236 callback = pm_noirq_op(dev->type->pm, state);
1237 } else if (dev->class && dev->class->pm) {
1238 info = "noirq class ";
1239 callback = pm_noirq_op(dev->class->pm, state);
1240 } else if (dev->bus && dev->bus->pm) {
1241 info = "noirq bus ";
1242 callback = pm_noirq_op(dev->bus->pm, state);
1243 } else {
1244 return NULL;
1245 }
1246
1247 if (info_p)
1248 *info_p = info;
1249
1250 return callback;
1251}
1252
1253static bool device_must_resume(struct device *dev, pm_message_t state,
1254 bool no_subsys_suspend_noirq)
1255{
1256 pm_message_t resume_msg = resume_event(state);
1257
1258 /*
1259 * If all of the device driver's "noirq", "late" and "early" callbacks
1260 * are invoked directly by the core, the decision to allow the device to
1261 * stay in suspend can be based on its current runtime PM status and its
1262 * wakeup settings.
1263 */
1264 if (no_subsys_suspend_noirq &&
1265 !dpm_subsys_suspend_late_cb(dev, state, NULL) &&
1266 !dpm_subsys_resume_early_cb(dev, resume_msg, NULL) &&
1267 !dpm_subsys_resume_noirq_cb(dev, resume_msg, NULL))
1268 return !pm_runtime_status_suspended(dev) &&
1269 (resume_msg.event != PM_EVENT_RESUME ||
1270 (device_can_wakeup(dev) && !device_may_wakeup(dev)));
1271
1272 /*
1273 * The only safe strategy here is to require that if the device may not
1274 * be left in suspend, resume callbacks must be invoked for it.
1275 */
1276 return !dev->power.may_skip_resume;
1277}
1278
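
One way to read the fast-path condition: a device that stayed runtime-active through suspend must be resumed, except on a plain resume event where a wakeup-enabled device may still be left alone. A hedged illustration of the wakeup clause, as a probe/sysfs-time fragment:

    #include <linux/pm_wakeup.h>

    device_init_wakeup(dev, true);          /* capable and enabled */
    device_set_wakeup_enable(dev, false);   /* capable, user-disabled */
    /* device_can_wakeup(dev) && !device_may_wakeup(dev) now holds, so
     * on PM_EVENT_RESUME a runtime-active device in this state cannot
     * be left in suspend: it has no way to signal wakeup events. */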
1092/** 1279/**
1093 * __device_suspend_noirq - Execute a "noirq suspend" callback for given device. 1280 * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
1094 * @dev: Device to handle. 1281 * @dev: Device to handle.
@@ -1100,8 +1287,9 @@ static pm_message_t resume_event(pm_message_t sleep_state)
1100 */ 1287 */
1101static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async) 1288static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
1102{ 1289{
1103 pm_callback_t callback = NULL; 1290 pm_callback_t callback;
1104 const char *info = NULL; 1291 const char *info;
1292 bool no_subsys_cb = false;
1105 int error = 0; 1293 int error = 0;
1106 1294
1107 TRACE_DEVICE(dev); 1295 TRACE_DEVICE(dev);
@@ -1120,30 +1308,40 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
1120 if (dev->power.syscore || dev->power.direct_complete) 1308 if (dev->power.syscore || dev->power.direct_complete)
1121 goto Complete; 1309 goto Complete;
1122 1310
1123 if (dev->pm_domain) { 1311 callback = dpm_subsys_suspend_noirq_cb(dev, state, &info);
1124 info = "noirq power domain "; 1312 if (callback)
1125 callback = pm_noirq_op(&dev->pm_domain->ops, state); 1313 goto Run;
1126 } else if (dev->type && dev->type->pm) {
1127 info = "noirq type ";
1128 callback = pm_noirq_op(dev->type->pm, state);
1129 } else if (dev->class && dev->class->pm) {
1130 info = "noirq class ";
1131 callback = pm_noirq_op(dev->class->pm, state);
1132 } else if (dev->bus && dev->bus->pm) {
1133 info = "noirq bus ";
1134 callback = pm_noirq_op(dev->bus->pm, state);
1135 }
1136 1314
1137 if (!callback && dev->driver && dev->driver->pm) { 1315 no_subsys_cb = !dpm_subsys_suspend_late_cb(dev, state, NULL);
1316
1317 if (dev_pm_smart_suspend_and_suspended(dev) && no_subsys_cb)
1318 goto Skip;
1319
1320 if (dev->driver && dev->driver->pm) {
1138 info = "noirq driver "; 1321 info = "noirq driver ";
1139 callback = pm_noirq_op(dev->driver->pm, state); 1322 callback = pm_noirq_op(dev->driver->pm, state);
1140 } 1323 }
1141 1324
1325Run:
1142 error = dpm_run_callback(callback, dev, state, info); 1326 error = dpm_run_callback(callback, dev, state, info);
1143 if (!error) 1327 if (error) {
1144 dev->power.is_noirq_suspended = true;
1145 else
1146 async_error = error; 1328 async_error = error;
1329 goto Complete;
1330 }
1331
1332Skip:
1333 dev->power.is_noirq_suspended = true;
1334
1335 if (dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED)) {
1336 dev->power.must_resume = dev->power.must_resume ||
1337 atomic_read(&dev->power.usage_count) > 1 ||
1338 device_must_resume(dev, state, no_subsys_cb);
1339 } else {
1340 dev->power.must_resume = true;
1341 }
1342
1343 if (dev->power.must_resume)
1344 dpm_superior_set_must_resume(dev);
1147 1345
1148Complete: 1346Complete:
1149 complete_all(&dev->power.completion); 1347 complete_all(&dev->power.completion);
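
The dev_pm_smart_suspend_and_suspended() test used above is also available to drivers; a sketch of the mirrored driver-side shortcut, with the foo_* names hypothetical:

    #include <linux/pm_runtime.h>

    static int foo_power_down(struct device *dev)
    {
            return 0;       /* hypothetical hardware shutdown */
    }

    static int foo_suspend_noirq(struct device *dev)
    {
            /* Nothing to do if runtime PM already powered us down. */
            if (dev_pm_smart_suspend_and_suspended(dev))
                    return 0;

            return foo_power_down(dev);
    }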
@@ -1249,6 +1447,50 @@ int dpm_suspend_noirq(pm_message_t state)
1249 return ret; 1447 return ret;
1250} 1448}
1251 1449
1450static void dpm_propagate_wakeup_to_parent(struct device *dev)
1451{
1452 struct device *parent = dev->parent;
1453
1454 if (!parent)
1455 return;
1456
1457 spin_lock_irq(&parent->power.lock);
1458
1459 if (dev->power.wakeup_path && !parent->power.ignore_children)
1460 parent->power.wakeup_path = true;
1461
1462 spin_unlock_irq(&parent->power.lock);
1463}
1464
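
power.wakeup_path is now set by the core for wakeup-enabled devices and propagated to the parent here; the driver side typically just arms its wakeup interrupt. A rough sketch, with struct foo and its irq field invented for the example:

    #include <linux/device.h>
    #include <linux/interrupt.h>
    #include <linux/pm_wakeup.h>

    struct foo { int irq; };

    static int foo_suspend(struct device *dev)
    {
            struct foo *foo = dev_get_drvdata(dev);

            if (device_may_wakeup(dev))
                    enable_irq_wake(foo->irq);  /* keep the wakeup path armed */
            return 0;
    }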
1465static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev,
1466 pm_message_t state,
1467 const char **info_p)
1468{
1469 pm_callback_t callback;
1470 const char *info;
1471
1472 if (dev->pm_domain) {
1473 info = "late power domain ";
1474 callback = pm_late_early_op(&dev->pm_domain->ops, state);
1475 } else if (dev->type && dev->type->pm) {
1476 info = "late type ";
1477 callback = pm_late_early_op(dev->type->pm, state);
1478 } else if (dev->class && dev->class->pm) {
1479 info = "late class ";
1480 callback = pm_late_early_op(dev->class->pm, state);
1481 } else if (dev->bus && dev->bus->pm) {
1482 info = "late bus ";
1483 callback = pm_late_early_op(dev->bus->pm, state);
1484 } else {
1485 return NULL;
1486 }
1487
1488 if (info_p)
1489 *info_p = info;
1490
1491 return callback;
1492}
1493
1252/** 1494/**
1253 * __device_suspend_late - Execute a "late suspend" callback for given device. 1495 * __device_suspend_late - Execute a "late suspend" callback for given device.
1254 * @dev: Device to handle. 1496 * @dev: Device to handle.
@@ -1259,8 +1501,8 @@ int dpm_suspend_noirq(pm_message_t state)
1259 */ 1501 */
1260static int __device_suspend_late(struct device *dev, pm_message_t state, bool async) 1502static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
1261{ 1503{
1262 pm_callback_t callback = NULL; 1504 pm_callback_t callback;
1263 const char *info = NULL; 1505 const char *info;
1264 int error = 0; 1506 int error = 0;
1265 1507
1266 TRACE_DEVICE(dev); 1508 TRACE_DEVICE(dev);
@@ -1281,30 +1523,29 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
1281 if (dev->power.syscore || dev->power.direct_complete) 1523 if (dev->power.syscore || dev->power.direct_complete)
1282 goto Complete; 1524 goto Complete;
1283 1525
1284 if (dev->pm_domain) { 1526 callback = dpm_subsys_suspend_late_cb(dev, state, &info);
1285 info = "late power domain "; 1527 if (callback)
1286 callback = pm_late_early_op(&dev->pm_domain->ops, state); 1528 goto Run;
1287 } else if (dev->type && dev->type->pm) {
1288 info = "late type ";
1289 callback = pm_late_early_op(dev->type->pm, state);
1290 } else if (dev->class && dev->class->pm) {
1291 info = "late class ";
1292 callback = pm_late_early_op(dev->class->pm, state);
1293 } else if (dev->bus && dev->bus->pm) {
1294 info = "late bus ";
1295 callback = pm_late_early_op(dev->bus->pm, state);
1296 }
1297 1529
1298 if (!callback && dev->driver && dev->driver->pm) { 1530 if (dev_pm_smart_suspend_and_suspended(dev) &&
1531 !dpm_subsys_suspend_noirq_cb(dev, state, NULL))
1532 goto Skip;
1533
1534 if (dev->driver && dev->driver->pm) {
1299 info = "late driver "; 1535 info = "late driver ";
1300 callback = pm_late_early_op(dev->driver->pm, state); 1536 callback = pm_late_early_op(dev->driver->pm, state);
1301 } 1537 }
1302 1538
1539Run:
1303 error = dpm_run_callback(callback, dev, state, info); 1540 error = dpm_run_callback(callback, dev, state, info);
1304 if (!error) 1541 if (error) {
1305 dev->power.is_late_suspended = true;
1306 else
1307 async_error = error; 1542 async_error = error;
1543 goto Complete;
1544 }
1545 dpm_propagate_wakeup_to_parent(dev);
1546
1547Skip:
1548 dev->power.is_late_suspended = true;
1308 1549
1309Complete: 1550Complete:
1310 TRACE_SUSPEND(error); 1551 TRACE_SUSPEND(error);
@@ -1435,11 +1676,17 @@ static int legacy_suspend(struct device *dev, pm_message_t state,
1435 return error; 1676 return error;
1436} 1677}
1437 1678
1438static void dpm_clear_suppliers_direct_complete(struct device *dev) 1679static void dpm_clear_superiors_direct_complete(struct device *dev)
1439{ 1680{
1440 struct device_link *link; 1681 struct device_link *link;
1441 int idx; 1682 int idx;
1442 1683
1684 if (dev->parent) {
1685 spin_lock_irq(&dev->parent->power.lock);
1686 dev->parent->power.direct_complete = false;
1687 spin_unlock_irq(&dev->parent->power.lock);
1688 }
1689
1443 idx = device_links_read_lock(); 1690 idx = device_links_read_lock();
1444 1691
1445 list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) { 1692 list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
@@ -1500,6 +1747,9 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
1500 dev->power.direct_complete = false; 1747 dev->power.direct_complete = false;
1501 } 1748 }
1502 1749
1750 dev->power.may_skip_resume = false;
1751 dev->power.must_resume = false;
1752
1503 dpm_watchdog_set(&wd, dev); 1753 dpm_watchdog_set(&wd, dev);
1504 device_lock(dev); 1754 device_lock(dev);
1505 1755
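
Both new flags start out clear for every device on each transition. direct_complete, by contrast, is requested one phase earlier, from ->prepare(); a hedged sketch of a driver opting in (foo_prepare is hypothetical):

    #include <linux/pm_runtime.h>

    static int foo_prepare(struct device *dev)
    {
            /* A positive return value lets the core take the
             * direct_complete path for a runtime-suspended device. */
            return pm_runtime_suspended(dev) ? 1 : 0;
    }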
@@ -1543,20 +1793,12 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
1543 1793
1544 End: 1794 End:
1545 if (!error) { 1795 if (!error) {
1546 struct device *parent = dev->parent;
1547
1548 dev->power.is_suspended = true; 1796 dev->power.is_suspended = true;
1549 if (parent) { 1797 if (device_may_wakeup(dev))
1550 spin_lock_irq(&parent->power.lock); 1798 dev->power.wakeup_path = true;
1551
1552 dev->parent->power.direct_complete = false;
1553 if (dev->power.wakeup_path
1554 && !dev->parent->power.ignore_children)
1555 dev->parent->power.wakeup_path = true;
1556 1799
1557 spin_unlock_irq(&parent->power.lock); 1800 dpm_propagate_wakeup_to_parent(dev);
1558 } 1801 dpm_clear_superiors_direct_complete(dev);
1559 dpm_clear_suppliers_direct_complete(dev);
1560 } 1802 }
1561 1803
1562 device_unlock(dev); 1804 device_unlock(dev);
@@ -1665,8 +1907,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
1665 if (dev->power.syscore) 1907 if (dev->power.syscore)
1666 return 0; 1908 return 0;
1667 1909
1668 WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) && 1910 WARN_ON(!pm_runtime_enabled(dev) &&
1669 !pm_runtime_enabled(dev)); 1911 dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
1912 DPM_FLAG_LEAVE_SUSPENDED));
1670 1913
1671 /* 1914 /*
1672 * If a device's parent goes into runtime suspend at the wrong time, 1915 * If a device's parent goes into runtime suspend at the wrong time,
@@ -1678,7 +1921,7 @@ static int device_prepare(struct device *dev, pm_message_t state)
1678 1921
1679 device_lock(dev); 1922 device_lock(dev);
1680 1923
1681 dev->power.wakeup_path = device_may_wakeup(dev); 1924 dev->power.wakeup_path = false;
1682 1925
1683 if (dev->power.no_pm_callbacks) { 1926 if (dev->power.no_pm_callbacks) {
1684 ret = 1; /* Let device go direct_complete */ 1927 ret = 1; /* Let device go direct_complete */
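
The reworked WARN_ON covers both flags with the same precondition: they are only meaningful when runtime PM is enabled for the device. A minimal ordering sketch for a probe path (a fragment, names illustrative):

    pm_runtime_set_active(dev);     /* hardware is up after probe */
    pm_runtime_enable(dev);
    dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
                                 DPM_FLAG_LEAVE_SUSPENDED);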
diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
index 7beee75399d4..21244c53e377 100644
--- a/drivers/base/power/power.h
+++ b/drivers/base/power/power.h
@@ -41,20 +41,15 @@ extern void dev_pm_disable_wake_irq_check(struct device *dev);
41 41
42#ifdef CONFIG_PM_SLEEP 42#ifdef CONFIG_PM_SLEEP
43 43
44extern int device_wakeup_attach_irq(struct device *dev, 44extern void device_wakeup_attach_irq(struct device *dev, struct wake_irq *wakeirq);
45 struct wake_irq *wakeirq);
46extern void device_wakeup_detach_irq(struct device *dev); 45extern void device_wakeup_detach_irq(struct device *dev);
47extern void device_wakeup_arm_wake_irqs(void); 46extern void device_wakeup_arm_wake_irqs(void);
48extern void device_wakeup_disarm_wake_irqs(void); 47extern void device_wakeup_disarm_wake_irqs(void);
49 48
50#else 49#else
51 50
52static inline int 51static inline void device_wakeup_attach_irq(struct device *dev,
53device_wakeup_attach_irq(struct device *dev, 52 struct wake_irq *wakeirq) {}
54 struct wake_irq *wakeirq)
55{
56 return 0;
57}
58 53
59static inline void device_wakeup_detach_irq(struct device *dev) 54static inline void device_wakeup_detach_irq(struct device *dev)
60{ 55{
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 6e89b51ea3d9..8bef3cb2424d 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1613,22 +1613,34 @@ void pm_runtime_drop_link(struct device *dev)
1613 spin_unlock_irq(&dev->power.lock); 1613 spin_unlock_irq(&dev->power.lock);
1614} 1614}
1615 1615
1616static bool pm_runtime_need_not_resume(struct device *dev)
1617{
1618 return atomic_read(&dev->power.usage_count) <= 1 &&
1619 (atomic_read(&dev->power.child_count) == 0 ||
1620 dev->power.ignore_children);
1621}
1622
1616/** 1623/**
1617 * pm_runtime_force_suspend - Force a device into suspend state if needed. 1624 * pm_runtime_force_suspend - Force a device into suspend state if needed.
1618 * @dev: Device to suspend. 1625 * @dev: Device to suspend.
1619 * 1626 *
1620 * Disable runtime PM so we safely can check the device's runtime PM status and 1627 * Disable runtime PM so we safely can check the device's runtime PM status and
1621 * if it is active, invoke it's .runtime_suspend callback to bring it into 1628 * if it is active, invoke its ->runtime_suspend callback to suspend it and
1622 * suspend state. Keep runtime PM disabled to preserve the state unless we 1629 * change its runtime PM status field to RPM_SUSPENDED. Also, if the device's
1623 * encounter errors. 1630 * usage and children counters don't indicate that the device was in use before
1631 * the system-wide transition under way, decrement its parent's children counter
1632 * (if there is a parent). Keep runtime PM disabled to preserve the state
1633 * unless we encounter errors.
1624 * 1634 *
1625 * Typically this function may be invoked from a system suspend callback to make 1635 * Typically this function may be invoked from a system suspend callback to make
1626 * sure the device is put into low power state. 1636 * sure the device is put into low power state and it should only be used during
1637 * system-wide PM transitions to sleep states. It assumes that the analogous
1638 * pm_runtime_force_resume() will be used to resume the device.
1627 */ 1639 */
1628int pm_runtime_force_suspend(struct device *dev) 1640int pm_runtime_force_suspend(struct device *dev)
1629{ 1641{
1630 int (*callback)(struct device *); 1642 int (*callback)(struct device *);
1631 int ret = 0; 1643 int ret;
1632 1644
1633 pm_runtime_disable(dev); 1645 pm_runtime_disable(dev);
1634 if (pm_runtime_status_suspended(dev)) 1646 if (pm_runtime_status_suspended(dev))
@@ -1636,27 +1648,23 @@ int pm_runtime_force_suspend(struct device *dev)
1636 1648
1637 callback = RPM_GET_CALLBACK(dev, runtime_suspend); 1649 callback = RPM_GET_CALLBACK(dev, runtime_suspend);
1638 1650
1639 if (!callback) { 1651 ret = callback ? callback(dev) : 0;
1640 ret = -ENOSYS;
1641 goto err;
1642 }
1643
1644 ret = callback(dev);
1645 if (ret) 1652 if (ret)
1646 goto err; 1653 goto err;
1647 1654
1648 /* 1655 /*
1649 * Increase the runtime PM usage count for the device's parent, in case 1656 * If the device can stay in suspend after the system-wide transition
1650 * when we find the device being used when system suspend was invoked. 1657 * to the working state that will follow, drop the children counter of
1651 * This informs pm_runtime_force_resume() to resume the parent 1658 * its parent, but set its status to RPM_SUSPENDED anyway in case this
1652 * immediately, which is needed to be able to resume its children, 1659 * function will be called again for it in the meantime.
1653 * when not deferring the resume to be managed via runtime PM.
1654 */ 1660 */
1655 if (dev->parent && atomic_read(&dev->power.usage_count) > 1) 1661 if (pm_runtime_need_not_resume(dev))
1656 pm_runtime_get_noresume(dev->parent); 1662 pm_runtime_set_suspended(dev);
1663 else
1664 __update_runtime_status(dev, RPM_SUSPENDED);
1657 1665
1658 pm_runtime_set_suspended(dev);
1659 return 0; 1666 return 0;
1667
1660err: 1668err:
1661 pm_runtime_enable(dev); 1669 pm_runtime_enable(dev);
1662 return ret; 1670 return ret;
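
The usage these helpers have to keep supporting is drivers wiring them in verbatim as system sleep callbacks next to their runtime PM ops; a sketch with the foo_runtime_* stubs hypothetical:

    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    static int foo_runtime_suspend(struct device *dev) { return 0; }
    static int foo_runtime_resume(struct device *dev) { return 0; }

    static const struct dev_pm_ops foo_pm_ops = {
            SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                    pm_runtime_force_resume)
            SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
    };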
@@ -1669,13 +1677,9 @@ EXPORT_SYMBOL_GPL(pm_runtime_force_suspend);
1669 * 1677 *
1670 * Prior invoking this function we expect the user to have brought the device 1678 * Prior invoking this function we expect the user to have brought the device
1671 * into low power state by a call to pm_runtime_force_suspend(). Here we reverse 1679 * into low power state by a call to pm_runtime_force_suspend(). Here we reverse
1672 * those actions and brings the device into full power, if it is expected to be 1680 * those actions and bring the device into full power, if it is expected to be
1673 * used on system resume. To distinguish that, we check whether the runtime PM 1681 * used on system resume. In the other case, we defer the resume to be managed
1674 * usage count is greater than 1 (the PM core increases the usage count in the 1682 * via runtime PM.
1675 * system PM prepare phase), as that indicates a real user (such as a subsystem,
1676 * driver, userspace, etc.) is using it. If that is the case, the device is
1677 * expected to be used on system resume as well, so then we resume it. In the
1678 * other case, we defer the resume to be managed via runtime PM.
1679 * 1683 *
1680 * Typically this function may be invoked from a system resume callback. 1684 * Typically this function may be invoked from a system resume callback.
1681 */ 1685 */
@@ -1684,32 +1688,18 @@ int pm_runtime_force_resume(struct device *dev)
1684 int (*callback)(struct device *); 1688 int (*callback)(struct device *);
1685 int ret = 0; 1689 int ret = 0;
1686 1690
1687 callback = RPM_GET_CALLBACK(dev, runtime_resume); 1691 if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev))
1688
1689 if (!callback) {
1690 ret = -ENOSYS;
1691 goto out;
1692 }
1693
1694 if (!pm_runtime_status_suspended(dev))
1695 goto out; 1692 goto out;
1696 1693
1697 /* 1694 /*
1698 * Decrease the parent's runtime PM usage count, if we increased it 1695 * The value of the parent's children counter is correct already, so
1699 * during system suspend in pm_runtime_force_suspend(). 1696 * just update the status of the device.
1700 */ 1697 */
1701 if (atomic_read(&dev->power.usage_count) > 1) { 1698 __update_runtime_status(dev, RPM_ACTIVE);
1702 if (dev->parent)
1703 pm_runtime_put_noidle(dev->parent);
1704 } else {
1705 goto out;
1706 }
1707 1699
1708 ret = pm_runtime_set_active(dev); 1700 callback = RPM_GET_CALLBACK(dev, runtime_resume);
1709 if (ret)
1710 goto out;
1711 1701
1712 ret = callback(dev); 1702 ret = callback ? callback(dev) : 0;
1713 if (ret) { 1703 if (ret) {
1714 pm_runtime_set_suspended(dev); 1704 pm_runtime_set_suspended(dev);
1715 goto out; 1705 goto out;
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index e153e28b1857..0f651efc58a1 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -108,16 +108,10 @@ static ssize_t control_show(struct device *dev, struct device_attribute *attr,
108static ssize_t control_store(struct device * dev, struct device_attribute *attr, 108static ssize_t control_store(struct device * dev, struct device_attribute *attr,
109 const char * buf, size_t n) 109 const char * buf, size_t n)
110{ 110{
111 char *cp;
112 int len = n;
113
114 cp = memchr(buf, '\n', n);
115 if (cp)
116 len = cp - buf;
117 device_lock(dev); 111 device_lock(dev);
118 if (len == sizeof ctrl_auto - 1 && strncmp(buf, ctrl_auto, len) == 0) 112 if (sysfs_streq(buf, ctrl_auto))
119 pm_runtime_allow(dev); 113 pm_runtime_allow(dev);
120 else if (len == sizeof ctrl_on - 1 && strncmp(buf, ctrl_on, len) == 0) 114 else if (sysfs_streq(buf, ctrl_on))
121 pm_runtime_forbid(dev); 115 pm_runtime_forbid(dev);
122 else 116 else
123 n = -EINVAL; 117 n = -EINVAL;
@@ -125,9 +119,9 @@ static ssize_t control_store(struct device * dev, struct device_attribute *attr,
125 return n; 119 return n;
126} 120}
127 121
128static DEVICE_ATTR(control, 0644, control_show, control_store); 122static DEVICE_ATTR_RW(control);
129 123
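
sysfs_streq() treats its arguments as equal with or without a trailing newline, which is exactly the case the removed memchr()/strncmp() code handled by hand:

    sysfs_streq("auto\n", ctrl_auto);   /* true  */
    sysfs_streq("auto",   ctrl_auto);   /* true  */
    sysfs_streq("autoX",  ctrl_auto);   /* false */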
130static ssize_t rtpm_active_time_show(struct device *dev, 124static ssize_t runtime_active_time_show(struct device *dev,
131 struct device_attribute *attr, char *buf) 125 struct device_attribute *attr, char *buf)
132{ 126{
133 int ret; 127 int ret;
@@ -138,9 +132,9 @@ static ssize_t rtpm_active_time_show(struct device *dev,
138 return ret; 132 return ret;
139} 133}
140 134
141static DEVICE_ATTR(runtime_active_time, 0444, rtpm_active_time_show, NULL); 135static DEVICE_ATTR_RO(runtime_active_time);
142 136
143static ssize_t rtpm_suspended_time_show(struct device *dev, 137static ssize_t runtime_suspended_time_show(struct device *dev,
144 struct device_attribute *attr, char *buf) 138 struct device_attribute *attr, char *buf)
145{ 139{
146 int ret; 140 int ret;
@@ -152,9 +146,9 @@ static ssize_t rtpm_suspended_time_show(struct device *dev,
152 return ret; 146 return ret;
153} 147}
154 148
155static DEVICE_ATTR(runtime_suspended_time, 0444, rtpm_suspended_time_show, NULL); 149static DEVICE_ATTR_RO(runtime_suspended_time);
156 150
157static ssize_t rtpm_status_show(struct device *dev, 151static ssize_t runtime_status_show(struct device *dev,
158 struct device_attribute *attr, char *buf) 152 struct device_attribute *attr, char *buf)
159{ 153{
160 const char *p; 154 const char *p;
@@ -184,7 +178,7 @@ static ssize_t rtpm_status_show(struct device *dev,
184 return sprintf(buf, p); 178 return sprintf(buf, p);
185} 179}
186 180
187static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL); 181static DEVICE_ATTR_RO(runtime_status);
188 182
189static ssize_t autosuspend_delay_ms_show(struct device *dev, 183static ssize_t autosuspend_delay_ms_show(struct device *dev,
190 struct device_attribute *attr, char *buf) 184 struct device_attribute *attr, char *buf)
@@ -211,26 +205,25 @@ static ssize_t autosuspend_delay_ms_store(struct device *dev,
211 return n; 205 return n;
212} 206}
213 207
214static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show, 208static DEVICE_ATTR_RW(autosuspend_delay_ms);
215 autosuspend_delay_ms_store);
216 209
217static ssize_t pm_qos_resume_latency_show(struct device *dev, 210static ssize_t pm_qos_resume_latency_us_show(struct device *dev,
218 struct device_attribute *attr, 211 struct device_attribute *attr,
219 char *buf) 212 char *buf)
220{ 213{
221 s32 value = dev_pm_qos_requested_resume_latency(dev); 214 s32 value = dev_pm_qos_requested_resume_latency(dev);
222 215
223 if (value == 0) 216 if (value == 0)
224 return sprintf(buf, "n/a\n"); 217 return sprintf(buf, "n/a\n");
225 else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 218 if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
226 value = 0; 219 value = 0;
227 220
228 return sprintf(buf, "%d\n", value); 221 return sprintf(buf, "%d\n", value);
229} 222}
230 223
231static ssize_t pm_qos_resume_latency_store(struct device *dev, 224static ssize_t pm_qos_resume_latency_us_store(struct device *dev,
232 struct device_attribute *attr, 225 struct device_attribute *attr,
233 const char *buf, size_t n) 226 const char *buf, size_t n)
234{ 227{
235 s32 value; 228 s32 value;
236 int ret; 229 int ret;
@@ -245,7 +238,7 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
245 238
246 if (value == 0) 239 if (value == 0)
247 value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 240 value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
248 } else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) { 241 } else if (sysfs_streq(buf, "n/a")) {
249 value = 0; 242 value = 0;
250 } else { 243 } else {
251 return -EINVAL; 244 return -EINVAL;
@@ -256,26 +249,25 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
256 return ret < 0 ? ret : n; 249 return ret < 0 ? ret : n;
257} 250}
258 251
259static DEVICE_ATTR(pm_qos_resume_latency_us, 0644, 252static DEVICE_ATTR_RW(pm_qos_resume_latency_us);
260 pm_qos_resume_latency_show, pm_qos_resume_latency_store);
261 253
262static ssize_t pm_qos_latency_tolerance_show(struct device *dev, 254static ssize_t pm_qos_latency_tolerance_us_show(struct device *dev,
263 struct device_attribute *attr, 255 struct device_attribute *attr,
264 char *buf) 256 char *buf)
265{ 257{
266 s32 value = dev_pm_qos_get_user_latency_tolerance(dev); 258 s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
267 259
268 if (value < 0) 260 if (value < 0)
269 return sprintf(buf, "auto\n"); 261 return sprintf(buf, "auto\n");
270 else if (value == PM_QOS_LATENCY_ANY) 262 if (value == PM_QOS_LATENCY_ANY)
271 return sprintf(buf, "any\n"); 263 return sprintf(buf, "any\n");
272 264
273 return sprintf(buf, "%d\n", value); 265 return sprintf(buf, "%d\n", value);
274} 266}
275 267
276static ssize_t pm_qos_latency_tolerance_store(struct device *dev, 268static ssize_t pm_qos_latency_tolerance_us_store(struct device *dev,
277 struct device_attribute *attr, 269 struct device_attribute *attr,
278 const char *buf, size_t n) 270 const char *buf, size_t n)
279{ 271{
280 s32 value; 272 s32 value;
281 int ret; 273 int ret;
@@ -285,9 +277,9 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
285 if (value < 0) 277 if (value < 0)
286 return -EINVAL; 278 return -EINVAL;
287 } else { 279 } else {
288 if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n")) 280 if (sysfs_streq(buf, "auto"))
289 value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; 281 value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
290 else if (!strcmp(buf, "any") || !strcmp(buf, "any\n")) 282 else if (sysfs_streq(buf, "any"))
291 value = PM_QOS_LATENCY_ANY; 283 value = PM_QOS_LATENCY_ANY;
292 else 284 else
293 return -EINVAL; 285 return -EINVAL;
@@ -296,8 +288,7 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
296 return ret < 0 ? ret : n; 288 return ret < 0 ? ret : n;
297} 289}
298 290
299static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644, 291static DEVICE_ATTR_RW(pm_qos_latency_tolerance_us);
300 pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
301 292
302static ssize_t pm_qos_no_power_off_show(struct device *dev, 293static ssize_t pm_qos_no_power_off_show(struct device *dev,
303 struct device_attribute *attr, 294 struct device_attribute *attr,
@@ -323,49 +314,39 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
323 return ret < 0 ? ret : n; 314 return ret < 0 ? ret : n;
324} 315}
325 316
326static DEVICE_ATTR(pm_qos_no_power_off, 0644, 317static DEVICE_ATTR_RW(pm_qos_no_power_off);
327 pm_qos_no_power_off_show, pm_qos_no_power_off_store);
328 318
329#ifdef CONFIG_PM_SLEEP 319#ifdef CONFIG_PM_SLEEP
330static const char _enabled[] = "enabled"; 320static const char _enabled[] = "enabled";
331static const char _disabled[] = "disabled"; 321static const char _disabled[] = "disabled";
332 322
333static ssize_t 323static ssize_t wakeup_show(struct device *dev, struct device_attribute *attr,
334wake_show(struct device * dev, struct device_attribute *attr, char * buf) 324 char *buf)
335{ 325{
336 return sprintf(buf, "%s\n", device_can_wakeup(dev) 326 return sprintf(buf, "%s\n", device_can_wakeup(dev)
337 ? (device_may_wakeup(dev) ? _enabled : _disabled) 327 ? (device_may_wakeup(dev) ? _enabled : _disabled)
338 : ""); 328 : "");
339} 329}
340 330
341static ssize_t 331static ssize_t wakeup_store(struct device *dev, struct device_attribute *attr,
342wake_store(struct device * dev, struct device_attribute *attr, 332 const char *buf, size_t n)
343 const char * buf, size_t n)
344{ 333{
345 char *cp;
346 int len = n;
347
348 if (!device_can_wakeup(dev)) 334 if (!device_can_wakeup(dev))
349 return -EINVAL; 335 return -EINVAL;
350 336
351 cp = memchr(buf, '\n', n); 337 if (sysfs_streq(buf, _enabled))
352 if (cp)
353 len = cp - buf;
354 if (len == sizeof _enabled - 1
355 && strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
356 device_set_wakeup_enable(dev, 1); 338 device_set_wakeup_enable(dev, 1);
357 else if (len == sizeof _disabled - 1 339 else if (sysfs_streq(buf, _disabled))
358 && strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
359 device_set_wakeup_enable(dev, 0); 340 device_set_wakeup_enable(dev, 0);
360 else 341 else
361 return -EINVAL; 342 return -EINVAL;
362 return n; 343 return n;
363} 344}
364 345
365static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store); 346static DEVICE_ATTR_RW(wakeup);
366 347
367static ssize_t wakeup_count_show(struct device *dev, 348static ssize_t wakeup_count_show(struct device *dev,
368 struct device_attribute *attr, char *buf) 349 struct device_attribute *attr, char *buf)
369{ 350{
370 unsigned long count = 0; 351 unsigned long count = 0;
371 bool enabled = false; 352 bool enabled = false;
@@ -379,10 +360,11 @@ static ssize_t wakeup_count_show(struct device *dev,
379 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n"); 360 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
380} 361}
381 362
382static DEVICE_ATTR(wakeup_count, 0444, wakeup_count_show, NULL); 363static DEVICE_ATTR_RO(wakeup_count);
383 364
384static ssize_t wakeup_active_count_show(struct device *dev, 365static ssize_t wakeup_active_count_show(struct device *dev,
385 struct device_attribute *attr, char *buf) 366 struct device_attribute *attr,
367 char *buf)
386{ 368{
387 unsigned long count = 0; 369 unsigned long count = 0;
388 bool enabled = false; 370 bool enabled = false;
@@ -396,11 +378,11 @@ static ssize_t wakeup_active_count_show(struct device *dev,
396 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n"); 378 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
397} 379}
398 380
399static DEVICE_ATTR(wakeup_active_count, 0444, wakeup_active_count_show, NULL); 381static DEVICE_ATTR_RO(wakeup_active_count);
400 382
401static ssize_t wakeup_abort_count_show(struct device *dev, 383static ssize_t wakeup_abort_count_show(struct device *dev,
402 struct device_attribute *attr, 384 struct device_attribute *attr,
403 char *buf) 385 char *buf)
404{ 386{
405 unsigned long count = 0; 387 unsigned long count = 0;
406 bool enabled = false; 388 bool enabled = false;
@@ -414,7 +396,7 @@ static ssize_t wakeup_abort_count_show(struct device *dev,
414 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n"); 396 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
415} 397}
416 398
417static DEVICE_ATTR(wakeup_abort_count, 0444, wakeup_abort_count_show, NULL); 399static DEVICE_ATTR_RO(wakeup_abort_count);
418 400
419static ssize_t wakeup_expire_count_show(struct device *dev, 401static ssize_t wakeup_expire_count_show(struct device *dev,
420 struct device_attribute *attr, 402 struct device_attribute *attr,
@@ -432,10 +414,10 @@ static ssize_t wakeup_expire_count_show(struct device *dev,
432 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n"); 414 return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
433} 415}
434 416
435static DEVICE_ATTR(wakeup_expire_count, 0444, wakeup_expire_count_show, NULL); 417static DEVICE_ATTR_RO(wakeup_expire_count);
436 418
437static ssize_t wakeup_active_show(struct device *dev, 419static ssize_t wakeup_active_show(struct device *dev,
438 struct device_attribute *attr, char *buf) 420 struct device_attribute *attr, char *buf)
439{ 421{
440 unsigned int active = 0; 422 unsigned int active = 0;
441 bool enabled = false; 423 bool enabled = false;
@@ -449,10 +431,11 @@ static ssize_t wakeup_active_show(struct device *dev,
449 return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n"); 431 return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n");
450} 432}
451 433
452static DEVICE_ATTR(wakeup_active, 0444, wakeup_active_show, NULL); 434static DEVICE_ATTR_RO(wakeup_active);
453 435
454static ssize_t wakeup_total_time_show(struct device *dev, 436static ssize_t wakeup_total_time_ms_show(struct device *dev,
455 struct device_attribute *attr, char *buf) 437 struct device_attribute *attr,
438 char *buf)
456{ 439{
457 s64 msec = 0; 440 s64 msec = 0;
458 bool enabled = false; 441 bool enabled = false;
@@ -466,10 +449,10 @@ static ssize_t wakeup_total_time_show(struct device *dev,
466 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n"); 449 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
467} 450}
468 451
469static DEVICE_ATTR(wakeup_total_time_ms, 0444, wakeup_total_time_show, NULL); 452static DEVICE_ATTR_RO(wakeup_total_time_ms);
470 453
471static ssize_t wakeup_max_time_show(struct device *dev, 454static ssize_t wakeup_max_time_ms_show(struct device *dev,
472 struct device_attribute *attr, char *buf) 455 struct device_attribute *attr, char *buf)
473{ 456{
474 s64 msec = 0; 457 s64 msec = 0;
475 bool enabled = false; 458 bool enabled = false;
@@ -483,10 +466,11 @@ static ssize_t wakeup_max_time_show(struct device *dev,
483 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n"); 466 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
484} 467}
485 468
486static DEVICE_ATTR(wakeup_max_time_ms, 0444, wakeup_max_time_show, NULL); 469static DEVICE_ATTR_RO(wakeup_max_time_ms);
487 470
488static ssize_t wakeup_last_time_show(struct device *dev, 471static ssize_t wakeup_last_time_ms_show(struct device *dev,
489 struct device_attribute *attr, char *buf) 472 struct device_attribute *attr,
473 char *buf)
490{ 474{
491 s64 msec = 0; 475 s64 msec = 0;
492 bool enabled = false; 476 bool enabled = false;
@@ -500,12 +484,12 @@ static ssize_t wakeup_last_time_show(struct device *dev,
500 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n"); 484 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
501} 485}
502 486
503static DEVICE_ATTR(wakeup_last_time_ms, 0444, wakeup_last_time_show, NULL); 487static DEVICE_ATTR_RO(wakeup_last_time_ms);
504 488
505#ifdef CONFIG_PM_AUTOSLEEP 489#ifdef CONFIG_PM_AUTOSLEEP
506static ssize_t wakeup_prevent_sleep_time_show(struct device *dev, 490static ssize_t wakeup_prevent_sleep_time_ms_show(struct device *dev,
507 struct device_attribute *attr, 491 struct device_attribute *attr,
508 char *buf) 492 char *buf)
509{ 493{
510 s64 msec = 0; 494 s64 msec = 0;
511 bool enabled = false; 495 bool enabled = false;
@@ -519,40 +503,39 @@ static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
519 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n"); 503 return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
520} 504}
521 505
522static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444, 506static DEVICE_ATTR_RO(wakeup_prevent_sleep_time_ms);
523 wakeup_prevent_sleep_time_show, NULL);
524#endif /* CONFIG_PM_AUTOSLEEP */ 507#endif /* CONFIG_PM_AUTOSLEEP */
525#endif /* CONFIG_PM_SLEEP */ 508#endif /* CONFIG_PM_SLEEP */
526 509
527#ifdef CONFIG_PM_ADVANCED_DEBUG 510#ifdef CONFIG_PM_ADVANCED_DEBUG
528static ssize_t rtpm_usagecount_show(struct device *dev, 511static ssize_t runtime_usage_show(struct device *dev,
529 struct device_attribute *attr, char *buf) 512 struct device_attribute *attr, char *buf)
530{ 513{
531 return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count)); 514 return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
532} 515}
516static DEVICE_ATTR_RO(runtime_usage);
533 517
534static ssize_t rtpm_children_show(struct device *dev, 518static ssize_t runtime_active_kids_show(struct device *dev,
535 struct device_attribute *attr, char *buf) 519 struct device_attribute *attr,
520 char *buf)
536{ 521{
537 return sprintf(buf, "%d\n", dev->power.ignore_children ? 522 return sprintf(buf, "%d\n", dev->power.ignore_children ?
538 0 : atomic_read(&dev->power.child_count)); 523 0 : atomic_read(&dev->power.child_count));
539} 524}
525static DEVICE_ATTR_RO(runtime_active_kids);
540 526
541static ssize_t rtpm_enabled_show(struct device *dev, 527static ssize_t runtime_enabled_show(struct device *dev,
542 struct device_attribute *attr, char *buf) 528 struct device_attribute *attr, char *buf)
543{ 529{
544 if ((dev->power.disable_depth) && (dev->power.runtime_auto == false)) 530 if (dev->power.disable_depth && (dev->power.runtime_auto == false))
545 return sprintf(buf, "disabled & forbidden\n"); 531 return sprintf(buf, "disabled & forbidden\n");
546 else if (dev->power.disable_depth) 532 if (dev->power.disable_depth)
547 return sprintf(buf, "disabled\n"); 533 return sprintf(buf, "disabled\n");
548 else if (dev->power.runtime_auto == false) 534 if (dev->power.runtime_auto == false)
549 return sprintf(buf, "forbidden\n"); 535 return sprintf(buf, "forbidden\n");
550 return sprintf(buf, "enabled\n"); 536 return sprintf(buf, "enabled\n");
551} 537}
552 538static DEVICE_ATTR_RO(runtime_enabled);
553static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
554static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
555static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
556 539
557#ifdef CONFIG_PM_SLEEP 540#ifdef CONFIG_PM_SLEEP
558static ssize_t async_show(struct device *dev, struct device_attribute *attr, 541static ssize_t async_show(struct device *dev, struct device_attribute *attr,
@@ -566,23 +549,16 @@ static ssize_t async_show(struct device *dev, struct device_attribute *attr,
566static ssize_t async_store(struct device *dev, struct device_attribute *attr, 549static ssize_t async_store(struct device *dev, struct device_attribute *attr,
567 const char *buf, size_t n) 550 const char *buf, size_t n)
568{ 551{
569 char *cp; 552 if (sysfs_streq(buf, _enabled))
570 int len = n;
571
572 cp = memchr(buf, '\n', n);
573 if (cp)
574 len = cp - buf;
575 if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
576 device_enable_async_suspend(dev); 553 device_enable_async_suspend(dev);
577 else if (len == sizeof _disabled - 1 && 554 else if (sysfs_streq(buf, _disabled))
578 strncmp(buf, _disabled, len) == 0)
579 device_disable_async_suspend(dev); 555 device_disable_async_suspend(dev);
580 else 556 else
581 return -EINVAL; 557 return -EINVAL;
582 return n; 558 return n;
583} 559}
584 560
585static DEVICE_ATTR(async, 0644, async_show, async_store); 561static DEVICE_ATTR_RW(async);
586 562
587#endif /* CONFIG_PM_SLEEP */ 563#endif /* CONFIG_PM_SLEEP */
588#endif /* CONFIG_PM_ADVANCED_DEBUG */ 564#endif /* CONFIG_PM_ADVANCED_DEBUG */
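
The wave of renames above is driven by the helper macros: DEVICE_ATTR_RW(x) and friends expect x_show()/x_store() functions, so the show routines had to match the attribute names. The pattern, with example_* invented:

    static ssize_t example_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            return sprintf(buf, "%d\n", 42);
    }
    static DEVICE_ATTR_RO(example);     /* defines dev_attr_example */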
diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
index ae0429827f31..a8ac86e4d79e 100644
--- a/drivers/base/power/wakeirq.c
+++ b/drivers/base/power/wakeirq.c
@@ -33,7 +33,6 @@ static int dev_pm_attach_wake_irq(struct device *dev, int irq,
33 struct wake_irq *wirq) 33 struct wake_irq *wirq)
34{ 34{
35 unsigned long flags; 35 unsigned long flags;
36 int err;
37 36
38 if (!dev || !wirq) 37 if (!dev || !wirq)
39 return -EINVAL; 38 return -EINVAL;
@@ -45,12 +44,11 @@ static int dev_pm_attach_wake_irq(struct device *dev, int irq,
45 return -EEXIST; 44 return -EEXIST;
46 } 45 }
47 46
48 err = device_wakeup_attach_irq(dev, wirq); 47 dev->power.wakeirq = wirq;
49 if (!err) 48 device_wakeup_attach_irq(dev, wirq);
50 dev->power.wakeirq = wirq;
51 49
52 spin_unlock_irqrestore(&dev->power.lock, flags); 50 spin_unlock_irqrestore(&dev->power.lock, flags);
53 return err; 51 return 0;
54} 52}
55 53
56/** 54/**
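
For reference, the driver-facing sequence that reaches dev_pm_attach_wake_irq() looks roughly like this (error handling elided, pdev and irq assumed from probe):

    device_init_wakeup(&pdev->dev, true);
    ret = dev_pm_set_wake_irq(&pdev->dev, irq); /* attach the wake IRQ */

With device_wakeup_attach_irq() made void below, a leftover wake IRQ is now reported and overridden instead of failing this path with -EEXIST.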
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index 38559f04db2c..ea01621ed769 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -19,6 +19,11 @@
19 19
20#include "power.h" 20#include "power.h"
21 21
22#ifndef CONFIG_SUSPEND
23suspend_state_t pm_suspend_target_state;
24#define pm_suspend_target_state (PM_SUSPEND_ON)
25#endif
26
22/* 27/*
23 * If set, the suspend/hibernate code will abort transitions to a sleep state 28 * If set, the suspend/hibernate code will abort transitions to a sleep state
24 * if wakeup events are registered during or immediately before the transition. 29 * if wakeup events are registered during or immediately before the transition.
@@ -268,6 +273,9 @@ int device_wakeup_enable(struct device *dev)
268 if (!dev || !dev->power.can_wakeup) 273 if (!dev || !dev->power.can_wakeup)
269 return -EINVAL; 274 return -EINVAL;
270 275
276 if (pm_suspend_target_state != PM_SUSPEND_ON)
277 dev_dbg(dev, "Suspicious %s() during system transition!\n", __func__);
278
271 ws = wakeup_source_register(dev_name(dev)); 279 ws = wakeup_source_register(dev_name(dev));
272 if (!ws) 280 if (!ws)
273 return -ENOMEM; 281 return -ENOMEM;
@@ -291,22 +299,19 @@ EXPORT_SYMBOL_GPL(device_wakeup_enable);
291 * 299 *
292 * Call under the device's power.lock lock. 300 * Call under the device's power.lock lock.
293 */ 301 */
294int device_wakeup_attach_irq(struct device *dev, 302void device_wakeup_attach_irq(struct device *dev,
295 struct wake_irq *wakeirq) 303 struct wake_irq *wakeirq)
296{ 304{
297 struct wakeup_source *ws; 305 struct wakeup_source *ws;
298 306
299 ws = dev->power.wakeup; 307 ws = dev->power.wakeup;
300 if (!ws) { 308 if (!ws)
301 dev_err(dev, "forgot to call call device_init_wakeup?\n"); 309 return;
302 return -EINVAL;
303 }
304 310
305 if (ws->wakeirq) 311 if (ws->wakeirq)
306 return -EEXIST; 312 dev_err(dev, "Leftover wakeup IRQ found, overriding\n");
307 313
308 ws->wakeirq = wakeirq; 314 ws->wakeirq = wakeirq;
309 return 0;
310} 315}
311 316
312/** 317/**
@@ -448,9 +453,7 @@ int device_init_wakeup(struct device *dev, bool enable)
448 device_set_wakeup_capable(dev, true); 453 device_set_wakeup_capable(dev, true);
449 ret = device_wakeup_enable(dev); 454 ret = device_wakeup_enable(dev);
450 } else { 455 } else {
451 if (dev->power.can_wakeup) 456 device_wakeup_disable(dev);
452 device_wakeup_disable(dev);
453
454 device_set_wakeup_capable(dev, false); 457 device_set_wakeup_capable(dev, false);
455 } 458 }
456 459
@@ -464,9 +467,6 @@ EXPORT_SYMBOL_GPL(device_init_wakeup);
464 */ 467 */
465int device_set_wakeup_enable(struct device *dev, bool enable) 468int device_set_wakeup_enable(struct device *dev, bool enable)
466{ 469{
467 if (!dev || !dev->power.can_wakeup)
468 return -EINVAL;
469
470 return enable ? device_wakeup_enable(dev) : device_wakeup_disable(dev); 470 return enable ? device_wakeup_enable(dev) : device_wakeup_disable(dev);
471} 471}
472EXPORT_SYMBOL_GPL(device_set_wakeup_enable); 472EXPORT_SYMBOL_GPL(device_set_wakeup_enable);
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index dc7b3c7b7d42..57e011d36a79 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -120,7 +120,7 @@ config QCOM_EBI2
120 SRAM, ethernet adapters, FPGAs and LCD displays. 120 SRAM, ethernet adapters, FPGAs and LCD displays.
121 121
122config SIMPLE_PM_BUS 122config SIMPLE_PM_BUS
123 bool "Simple Power-Managed Bus Driver" 123 tristate "Simple Power-Managed Bus Driver"
124 depends on OF && PM 124 depends on OF && PM
125 help 125 help
126 Driver for transparent busses that don't need a real driver, but 126 Driver for transparent busses that don't need a real driver, but
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index bdce4488ded1..3a88e33b0cfe 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -2,6 +2,29 @@
2# ARM CPU Frequency scaling drivers 2# ARM CPU Frequency scaling drivers
3# 3#
4 4
5config ACPI_CPPC_CPUFREQ
6 tristate "CPUFreq driver based on the ACPI CPPC spec"
7 depends on ACPI_PROCESSOR
8 select ACPI_CPPC_LIB
9 help
10 This adds a CPUFreq driver which uses CPPC methods
11 as described in the ACPIv5.1 spec. CPPC stands for
12 Collaborative Processor Performance Controls. It
13 is based on an abstract continuous scale of CPU
14 performance values which allows the remote power
15 processor to flexibly optimize for power and
16 performance. CPPC relies on power management firmware
17 support for its operation.
18
19 If in doubt, say N.
20
21config ARM_ARMADA_37XX_CPUFREQ
22 tristate "Armada 37xx CPUFreq support"
23 depends on ARCH_MVEBU
24 help
25 This adds the CPUFreq driver support for Marvell Armada 37xx SoCs.
26 The Armada 37xx PMU supports 4 frequency and VDD levels.
27
5# big LITTLE core layer and glue drivers 28# big LITTLE core layer and glue drivers
6config ARM_BIG_LITTLE_CPUFREQ 29config ARM_BIG_LITTLE_CPUFREQ
7 tristate "Generic ARM big LITTLE CPUfreq driver" 30 tristate "Generic ARM big LITTLE CPUfreq driver"
@@ -12,6 +35,30 @@ config ARM_BIG_LITTLE_CPUFREQ
12 help 35 help
13 This enables the Generic CPUfreq driver for ARM big.LITTLE platforms. 36 This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
14 37
38config ARM_DT_BL_CPUFREQ
39 tristate "Generic probing via DT for ARM big LITTLE CPUfreq driver"
40 depends on ARM_BIG_LITTLE_CPUFREQ && OF
41 help
42 This enables probing via DT for Generic CPUfreq driver for ARM
43 big.LITTLE platform. This gets frequency tables from DT.
44
45config ARM_SCPI_CPUFREQ
46 tristate "SCPI based CPUfreq driver"
47 depends on ARM_BIG_LITTLE_CPUFREQ && ARM_SCPI_PROTOCOL && COMMON_CLK_SCPI
48 help
49 This adds the CPUfreq driver support for ARM big.LITTLE platforms
50 using SCPI protocol for CPU power management.
51
52 This driver uses SCPI Message Protocol driver to interact with the
53 firmware providing the CPU DVFS functionality.
54
55config ARM_VEXPRESS_SPC_CPUFREQ
56 tristate "Versatile Express SPC based CPUfreq driver"
57 depends on ARM_BIG_LITTLE_CPUFREQ && ARCH_VEXPRESS_SPC
58 help
59 This adds the CPUfreq driver support for Versatile Express
60 big.LITTLE platforms using SPC for power management.
61
15config ARM_BRCMSTB_AVS_CPUFREQ 62config ARM_BRCMSTB_AVS_CPUFREQ
16 tristate "Broadcom STB AVS CPUfreq driver" 63 tristate "Broadcom STB AVS CPUfreq driver"
17 depends on ARCH_BRCMSTB || COMPILE_TEST 64 depends on ARCH_BRCMSTB || COMPILE_TEST
@@ -33,20 +80,6 @@ config ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
33 80
34 If in doubt, say N. 81 If in doubt, say N.
35 82
36config ARM_DT_BL_CPUFREQ
37 tristate "Generic probing via DT for ARM big LITTLE CPUfreq driver"
38 depends on ARM_BIG_LITTLE_CPUFREQ && OF
39 help
40 This enables probing via DT for Generic CPUfreq driver for ARM
41 big.LITTLE platform. This gets frequency tables from DT.
42
43config ARM_VEXPRESS_SPC_CPUFREQ
44 tristate "Versatile Express SPC based CPUfreq driver"
45 depends on ARM_BIG_LITTLE_CPUFREQ && ARCH_VEXPRESS_SPC
46 help
47 This add the CPUfreq driver support for Versatile Express
48 big.LITTLE platforms using SPC for power management.
49
50config ARM_EXYNOS5440_CPUFREQ 83config ARM_EXYNOS5440_CPUFREQ
51 tristate "SAMSUNG EXYNOS5440" 84 tristate "SAMSUNG EXYNOS5440"
52 depends on SOC_EXYNOS5440 85 depends on SOC_EXYNOS5440
@@ -205,16 +238,6 @@ config ARM_SA1100_CPUFREQ
205config ARM_SA1110_CPUFREQ 238config ARM_SA1110_CPUFREQ
206 bool 239 bool
207 240
208config ARM_SCPI_CPUFREQ
209 tristate "SCPI based CPUfreq driver"
210 depends on ARM_BIG_LITTLE_CPUFREQ && ARM_SCPI_PROTOCOL && COMMON_CLK_SCPI
211 help
212 This adds the CPUfreq driver support for ARM big.LITTLE platforms
213 using SCPI protocol for CPU power management.
214
215 This driver uses SCPI Message Protocol driver to interact with the
216 firmware providing the CPU DVFS functionality.
217
218config ARM_SPEAR_CPUFREQ 241config ARM_SPEAR_CPUFREQ
219 bool "SPEAr CPUFreq support" 242 bool "SPEAr CPUFreq support"
220 depends on PLAT_SPEAR 243 depends on PLAT_SPEAR
@@ -275,20 +298,3 @@ config ARM_PXA2xx_CPUFREQ
275 This add the CPUFreq driver support for Intel PXA2xx SOCs. 298 This add the CPUFreq driver support for Intel PXA2xx SOCs.
276 299
277 If in doubt, say N. 300 If in doubt, say N.
278
279config ACPI_CPPC_CPUFREQ
280 tristate "CPUFreq driver based on the ACPI CPPC spec"
281 depends on ACPI_PROCESSOR
282 select ACPI_CPPC_LIB
283 default n
284 help
285 This adds a CPUFreq driver which uses CPPC methods
286 as described in the ACPIv5.1 spec. CPPC stands for
287 Collaborative Processor Performance Controls. It
288 is based on an abstract continuous scale of CPU
289 performance values which allows the remote power
290 processor to flexibly optimize for power and
291 performance. CPPC relies on power management firmware
292 support for its operation.
293
294 If in doubt, say N.
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
index 812f9e0d01a3..e07715ce8844 100644
--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -52,23 +52,26 @@ obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) += arm_big_little.o
52# LITTLE drivers, so that it is probed last. 52# LITTLE drivers, so that it is probed last.
53obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o 53obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o
54 54
55obj-$(CONFIG_ARM_ARMADA_37XX_CPUFREQ) += armada-37xx-cpufreq.o
55obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o 56obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o
57obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o
56obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 58obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
57obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o 59obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o
58obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o 60obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o
59obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o 61obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
60obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o 62obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o
61obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ) += mediatek-cpufreq.o 63obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ) += mediatek-cpufreq.o
64obj-$(CONFIG_MACH_MVEBU_V7) += mvebu-cpufreq.o
62obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o 65obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o
63obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o 66obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o
64obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o 67obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o
65obj-$(CONFIG_ARM_S3C24XX_CPUFREQ) += s3c24xx-cpufreq.o
66obj-$(CONFIG_ARM_S3C24XX_CPUFREQ_DEBUGFS) += s3c24xx-cpufreq-debugfs.o
67obj-$(CONFIG_ARM_S3C2410_CPUFREQ) += s3c2410-cpufreq.o 68obj-$(CONFIG_ARM_S3C2410_CPUFREQ) += s3c2410-cpufreq.o
68obj-$(CONFIG_ARM_S3C2412_CPUFREQ) += s3c2412-cpufreq.o 69obj-$(CONFIG_ARM_S3C2412_CPUFREQ) += s3c2412-cpufreq.o
69obj-$(CONFIG_ARM_S3C2416_CPUFREQ) += s3c2416-cpufreq.o 70obj-$(CONFIG_ARM_S3C2416_CPUFREQ) += s3c2416-cpufreq.o
70obj-$(CONFIG_ARM_S3C2440_CPUFREQ) += s3c2440-cpufreq.o 71obj-$(CONFIG_ARM_S3C2440_CPUFREQ) += s3c2440-cpufreq.o
71obj-$(CONFIG_ARM_S3C64XX_CPUFREQ) += s3c64xx-cpufreq.o 72obj-$(CONFIG_ARM_S3C64XX_CPUFREQ) += s3c64xx-cpufreq.o
73obj-$(CONFIG_ARM_S3C24XX_CPUFREQ) += s3c24xx-cpufreq.o
74obj-$(CONFIG_ARM_S3C24XX_CPUFREQ_DEBUGFS) += s3c24xx-cpufreq-debugfs.o
72obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o 75obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o
73obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o 76obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o
74obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o 77obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o
@@ -81,8 +84,6 @@ obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
81obj-$(CONFIG_ARM_TEGRA186_CPUFREQ) += tegra186-cpufreq.o 84obj-$(CONFIG_ARM_TEGRA186_CPUFREQ) += tegra186-cpufreq.o
82obj-$(CONFIG_ARM_TI_CPUFREQ) += ti-cpufreq.o 85obj-$(CONFIG_ARM_TI_CPUFREQ) += ti-cpufreq.o
83obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o 86obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o
84obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o
85obj-$(CONFIG_MACH_MVEBU_V7) += mvebu-cpufreq.o
86 87
87 88
88################################################################################## 89##################################################################################
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c
index 65ec5f01aa8d..c56b57dcfda5 100644
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -526,34 +526,13 @@ static int bL_cpufreq_exit(struct cpufreq_policy *policy)
526 526
527static void bL_cpufreq_ready(struct cpufreq_policy *policy) 527static void bL_cpufreq_ready(struct cpufreq_policy *policy)
528{ 528{
529 struct device *cpu_dev = get_cpu_device(policy->cpu);
530 int cur_cluster = cpu_to_cluster(policy->cpu); 529 int cur_cluster = cpu_to_cluster(policy->cpu);
531 struct device_node *np;
532 530
533 /* Do not register a cpu_cooling device if we are in IKS mode */ 531 /* Do not register a cpu_cooling device if we are in IKS mode */
534 if (cur_cluster >= MAX_CLUSTERS) 532 if (cur_cluster >= MAX_CLUSTERS)
535 return; 533 return;
536 534
537 np = of_node_get(cpu_dev->of_node); 535 cdev[cur_cluster] = of_cpufreq_cooling_register(policy);
538 if (WARN_ON(!np))
539 return;
540
541 if (of_find_property(np, "#cooling-cells", NULL)) {
542 u32 power_coefficient = 0;
543
544 of_property_read_u32(np, "dynamic-power-coefficient",
545 &power_coefficient);
546
547 cdev[cur_cluster] = of_cpufreq_power_cooling_register(np,
548 policy, power_coefficient, NULL);
549 if (IS_ERR(cdev[cur_cluster])) {
550 dev_err(cpu_dev,
551 "running cpufreq without cooling device: %ld\n",
552 PTR_ERR(cdev[cur_cluster]));
553 cdev[cur_cluster] = NULL;
554 }
555 }
556 of_node_put(np);
557} 536}
558 537
559static struct cpufreq_driver bL_cpufreq_driver = { 538static struct cpufreq_driver bL_cpufreq_driver = {
diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
new file mode 100644
index 000000000000..c6ebc88a7d8d
--- /dev/null
+++ b/drivers/cpufreq/armada-37xx-cpufreq.c
@@ -0,0 +1,241 @@
1// SPDX-License-Identifier: GPL-2.0+
2/*
3 * CPU frequency scaling support for Armada 37xx platform.
4 *
5 * Copyright (C) 2017 Marvell
6 *
7 * Gregory CLEMENT <gregory.clement@free-electrons.com>
8 */
9
10#include <linux/clk.h>
11#include <linux/cpu.h>
12#include <linux/cpufreq.h>
13#include <linux/err.h>
14#include <linux/interrupt.h>
15#include <linux/io.h>
16#include <linux/mfd/syscon.h>
17#include <linux/module.h>
18#include <linux/of_address.h>
19#include <linux/of_device.h>
20#include <linux/of_irq.h>
21#include <linux/platform_device.h>
22#include <linux/pm_opp.h>
23#include <linux/regmap.h>
24#include <linux/slab.h>
25
26/* Power management in North Bridge register set */
27#define ARMADA_37XX_NB_L0L1 0x18
28#define ARMADA_37XX_NB_L2L3 0x1C
29#define ARMADA_37XX_NB_TBG_DIV_OFF 13
30#define ARMADA_37XX_NB_TBG_DIV_MASK 0x7
31#define ARMADA_37XX_NB_CLK_SEL_OFF 11
32#define ARMADA_37XX_NB_CLK_SEL_MASK 0x1
33#define ARMADA_37XX_NB_CLK_SEL_TBG 0x1
34#define ARMADA_37XX_NB_TBG_SEL_OFF 9
35#define ARMADA_37XX_NB_TBG_SEL_MASK 0x3
36#define ARMADA_37XX_NB_VDD_SEL_OFF 6
37#define ARMADA_37XX_NB_VDD_SEL_MASK 0x3
38#define ARMADA_37XX_NB_CONFIG_SHIFT 16
39#define ARMADA_37XX_NB_DYN_MOD 0x24
40#define ARMADA_37XX_NB_CLK_SEL_EN BIT(26)
41#define ARMADA_37XX_NB_TBG_EN BIT(28)
42#define ARMADA_37XX_NB_DIV_EN BIT(29)
43#define ARMADA_37XX_NB_VDD_EN BIT(30)
44#define ARMADA_37XX_NB_DFS_EN BIT(31)
45#define ARMADA_37XX_NB_CPU_LOAD 0x30
46#define ARMADA_37XX_NB_CPU_LOAD_MASK 0x3
47#define ARMADA_37XX_DVFS_LOAD_0 0
48#define ARMADA_37XX_DVFS_LOAD_1 1
49#define ARMADA_37XX_DVFS_LOAD_2 2
50#define ARMADA_37XX_DVFS_LOAD_3 3
51
52/*
53 * On Armada 37xx, the power management unit manages 4 levels of CPU
54 * load; each level can be associated with a CPU clock source, a CPU
55 * divider, a VDD level, etc.
56 */
57#define LOAD_LEVEL_NR 4
58
59struct armada_37xx_dvfs {
60 u32 cpu_freq_max;
61 u8 divider[LOAD_LEVEL_NR];
62};
63
64static struct armada_37xx_dvfs armada_37xx_dvfs[] = {
65 {.cpu_freq_max = 1200*1000*1000, .divider = {1, 2, 4, 6} },
66 {.cpu_freq_max = 1000*1000*1000, .divider = {1, 2, 4, 5} },
67 {.cpu_freq_max = 800*1000*1000, .divider = {1, 2, 3, 4} },
68 {.cpu_freq_max = 600*1000*1000, .divider = {2, 4, 5, 6} },
69};
70
71static struct armada_37xx_dvfs *armada_37xx_cpu_freq_info_get(u32 freq)
72{
73 int i;
74
75 for (i = 0; i < ARRAY_SIZE(armada_37xx_dvfs); i++) {
76 if (freq == armada_37xx_dvfs[i].cpu_freq_max)
77 return &armada_37xx_dvfs[i];
78 }
79
80 pr_err("Unsupported CPU frequency %d MHz\n", freq/1000000);
81 return NULL;
82}
83
84/*
85 * Setup the four level managed by the hardware. Once the four level
86 * will be configured then the DVFS will be enabled.
87 */
88static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
89 struct clk *clk, u8 *divider)
90{
91 int load_lvl;
92 struct clk *parent;
93
94 for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) {
95 unsigned int reg, mask, val, offset = 0;
96
97 if (load_lvl <= ARMADA_37XX_DVFS_LOAD_1)
98 reg = ARMADA_37XX_NB_L0L1;
99 else
100 reg = ARMADA_37XX_NB_L2L3;
101
102 if (load_lvl == ARMADA_37XX_DVFS_LOAD_0 ||
103 load_lvl == ARMADA_37XX_DVFS_LOAD_2)
104 offset += ARMADA_37XX_NB_CONFIG_SHIFT;
105
106 /* Set cpu clock source, for all the level we use TBG */
107 val = ARMADA_37XX_NB_CLK_SEL_TBG << ARMADA_37XX_NB_CLK_SEL_OFF;
108 mask = (ARMADA_37XX_NB_CLK_SEL_MASK
109 << ARMADA_37XX_NB_CLK_SEL_OFF);
110
111 /*
112 * Set cpu divider based on the pre-computed array in
113 * order to have balanced step.
114 */
115 val |= divider[load_lvl] << ARMADA_37XX_NB_TBG_DIV_OFF;
116 mask |= (ARMADA_37XX_NB_TBG_DIV_MASK
117 << ARMADA_37XX_NB_TBG_DIV_OFF);
118
119 /* Set VDD divider which is actually the load level. */
120 val |= load_lvl << ARMADA_37XX_NB_VDD_SEL_OFF;
121 mask |= (ARMADA_37XX_NB_VDD_SEL_MASK
122 << ARMADA_37XX_NB_VDD_SEL_OFF);
123
124 val <<= offset;
125 mask <<= offset;
126
127 regmap_update_bits(base, reg, mask, val);
128 }
129
130 /*
131 * Set cpu clock source, for all the level we keep the same
132 * clock source that the one already configured. For this one
133 * we need to use the clock framework
134 */
135 parent = clk_get_parent(clk);
136 clk_set_parent(clk, parent);
137}
138
139static void __init armada37xx_cpufreq_disable_dvfs(struct regmap *base)
140{
141 unsigned int reg = ARMADA_37XX_NB_DYN_MOD,
142 mask = ARMADA_37XX_NB_DFS_EN;
143
144 regmap_update_bits(base, reg, mask, 0);
145}
146
147static void __init armada37xx_cpufreq_enable_dvfs(struct regmap *base)
148{
149 unsigned int val, reg = ARMADA_37XX_NB_CPU_LOAD,
150 mask = ARMADA_37XX_NB_CPU_LOAD_MASK;
151
152 /* Start with the highest load (0) */
153 val = ARMADA_37XX_DVFS_LOAD_0;
154 regmap_update_bits(base, reg, mask, val);
155
156 /* Now enable DVFS for the CPUs */
157 reg = ARMADA_37XX_NB_DYN_MOD;
158 mask = ARMADA_37XX_NB_CLK_SEL_EN | ARMADA_37XX_NB_TBG_EN |
159 ARMADA_37XX_NB_DIV_EN | ARMADA_37XX_NB_VDD_EN |
160 ARMADA_37XX_NB_DFS_EN;
161
162 regmap_update_bits(base, reg, mask, mask);
163}
164
165static int __init armada37xx_cpufreq_driver_init(void)
166{
167 struct armada_37xx_dvfs *dvfs;
168 struct platform_device *pdev;
169 unsigned int cur_frequency;
170 struct regmap *nb_pm_base;
171 struct device *cpu_dev;
172 int load_lvl, ret;
173 struct clk *clk;
174
175 nb_pm_base =
176 syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");
177
178 if (IS_ERR(nb_pm_base))
179 return -ENODEV;
180
181 /* Before doing any configuration on the DVFS first, disable it */
182 armada37xx_cpufreq_disable_dvfs(nb_pm_base);
183
184 /*
185 * On CPU 0 register the operating points supported (which are
186 * the nominal CPU frequency and full integer divisions of
187 * it).
188 */
189 cpu_dev = get_cpu_device(0);
190 if (!cpu_dev) {
191 dev_err(cpu_dev, "Cannot get CPU\n");
192 return -ENODEV;
193 }
194
195 clk = clk_get(cpu_dev, 0);
196 if (IS_ERR(clk)) {
197 dev_err(cpu_dev, "Cannot get clock for CPU0\n");
198 return PTR_ERR(clk);
199 }
200
201 /* Get nominal (current) CPU frequency */
202 cur_frequency = clk_get_rate(clk);
203 if (!cur_frequency) {
204 dev_err(cpu_dev, "Failed to get clock rate for CPU\n");
205 return -EINVAL;
206 }
207
208 dvfs = armada_37xx_cpu_freq_info_get(cur_frequency);
209 if (!dvfs)
210 return -EINVAL;
211
212 armada37xx_cpufreq_dvfs_setup(nb_pm_base, clk, dvfs->divider);
213
214 for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;
215 load_lvl++) {
216 unsigned long freq = cur_frequency / dvfs->divider[load_lvl];
217
218 ret = dev_pm_opp_add(cpu_dev, freq, 0);
219 if (ret) {
220 /* clean-up the already added opp before leaving */
221 while (load_lvl-- > ARMADA_37XX_DVFS_LOAD_0) {
222 freq = cur_frequency / dvfs->divider[load_lvl];
223 dev_pm_opp_remove(cpu_dev, freq);
224 }
225 return ret;
226 }
227 }
228
229 /* Now that everything is setup, enable the DVFS at hardware level */
230 armada37xx_cpufreq_enable_dvfs(nb_pm_base);
231
232 pdev = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
233
234 return PTR_ERR_OR_ZERO(pdev);
235}
236/* late_initcall, to guarantee the driver is loaded after A37xx clock driver */
237late_initcall(armada37xx_cpufreq_driver_init);
238
239MODULE_AUTHOR("Gregory CLEMENT <gregory.clement@free-electrons.com>");
240MODULE_DESCRIPTION("Armada 37xx cpufreq driver");
241MODULE_LICENSE("GPL");
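The new driver derives all of its OPPs from the nominal clock rate and the per-load-level divider row selected above. For a board whose CPU clock comes up at 1000 MHz, the registration loop boils down to the following worked sketch (not driver code; values taken from the 1000 MHz row of armada_37xx_dvfs):

	unsigned int cur_frequency = 1000 * 1000 * 1000; /* Hz, from clk_get_rate() */
	u8 divider[LOAD_LEVEL_NR] = {1, 2, 4, 5};        /* 1000 MHz row */
	int load_lvl;

	for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++)
		/* L0..L3 -> 1000 MHz, 500 MHz, 250 MHz, 200 MHz */
		dev_pm_opp_add(cpu_dev, cur_frequency / divider[load_lvl], 0);

The "cpufreq-dt" platform device registered at the end then drives these OPPs through the generic DT cpufreq driver.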
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index ecc56e26f8f6..3b585e4bfac5 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -108,6 +108,14 @@ static const struct of_device_id blacklist[] __initconst = {

 	{ .compatible = "marvell,armadaxp", },
 
+	{ .compatible = "mediatek,mt2701", },
+	{ .compatible = "mediatek,mt2712", },
+	{ .compatible = "mediatek,mt7622", },
+	{ .compatible = "mediatek,mt7623", },
+	{ .compatible = "mediatek,mt817x", },
+	{ .compatible = "mediatek,mt8173", },
+	{ .compatible = "mediatek,mt8176", },
+
 	{ .compatible = "nvidia,tegra124", },
 
 	{ .compatible = "st,stih407", },
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index 545946ad0752..de3d104c25d7 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -319,33 +319,8 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
 static void cpufreq_ready(struct cpufreq_policy *policy)
 {
 	struct private_data *priv = policy->driver_data;
-	struct device_node *np = of_node_get(priv->cpu_dev->of_node);
 
-	if (WARN_ON(!np))
-		return;
-
-	/*
-	 * For now, just loading the cooling device;
-	 * thermal DT code takes care of matching them.
-	 */
-	if (of_find_property(np, "#cooling-cells", NULL)) {
-		u32 power_coefficient = 0;
-
-		of_property_read_u32(np, "dynamic-power-coefficient",
-				     &power_coefficient);
-
-		priv->cdev = of_cpufreq_power_cooling_register(np,
-				policy, power_coefficient, NULL);
-		if (IS_ERR(priv->cdev)) {
-			dev_err(priv->cpu_dev,
-				"running cpufreq without cooling device: %ld\n",
-				PTR_ERR(priv->cdev));
-
-			priv->cdev = NULL;
-		}
-	}
-
-	of_node_put(np);
+	priv->cdev = of_cpufreq_cooling_register(policy);
 }
 
 static struct cpufreq_driver dt_cpufreq_driver = {
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 41d148af7748..421f318c0e66 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -601,19 +601,18 @@ static struct cpufreq_governor *find_governor(const char *str_governor)
 /**
  * cpufreq_parse_governor - parse a governor string
  */
-static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
-				struct cpufreq_governor **governor)
+static int cpufreq_parse_governor(char *str_governor,
+				  struct cpufreq_policy *policy)
 {
-	int err = -EINVAL;
-
 	if (cpufreq_driver->setpolicy) {
 		if (!strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN)) {
-			*policy = CPUFREQ_POLICY_PERFORMANCE;
-			err = 0;
-		} else if (!strncasecmp(str_governor, "powersave",
-						CPUFREQ_NAME_LEN)) {
-			*policy = CPUFREQ_POLICY_POWERSAVE;
-			err = 0;
+			policy->policy = CPUFREQ_POLICY_PERFORMANCE;
+			return 0;
+		}
+
+		if (!strncasecmp(str_governor, "powersave", CPUFREQ_NAME_LEN)) {
+			policy->policy = CPUFREQ_POLICY_POWERSAVE;
+			return 0;
 		}
 	} else {
 		struct cpufreq_governor *t;
@@ -621,26 +620,31 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
 		mutex_lock(&cpufreq_governor_mutex);
 
 		t = find_governor(str_governor);
-
-		if (t == NULL) {
+		if (!t) {
 			int ret;
 
 			mutex_unlock(&cpufreq_governor_mutex);
+
 			ret = request_module("cpufreq_%s", str_governor);
-			mutex_lock(&cpufreq_governor_mutex);
+			if (ret)
+				return -EINVAL;
 
-			if (ret == 0)
-				t = find_governor(str_governor);
-		}
+			mutex_lock(&cpufreq_governor_mutex);
 
-		if (t != NULL) {
-			*governor = t;
-			err = 0;
+			t = find_governor(str_governor);
 		}
+		if (t && !try_module_get(t->owner))
+			t = NULL;
 
 		mutex_unlock(&cpufreq_governor_mutex);
+
+		if (t) {
+			policy->governor = t;
+			return 0;
+		}
 	}
-	return err;
+
+	return -EINVAL;
 }
 
 /**
@@ -760,11 +764,14 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
 	if (ret != 1)
 		return -EINVAL;
 
-	if (cpufreq_parse_governor(str_governor, &new_policy.policy,
-						&new_policy.governor))
+	if (cpufreq_parse_governor(str_governor, &new_policy))
 		return -EINVAL;
 
 	ret = cpufreq_set_policy(policy, &new_policy);
+
+	if (new_policy.governor)
+		module_put(new_policy.governor->owner);
+
 	return ret ? ret : count;
 }
 
@@ -1044,8 +1051,7 @@ static int cpufreq_init_policy(struct cpufreq_policy *policy)
 		if (policy->last_policy)
 			new_policy.policy = policy->last_policy;
 		else
-			cpufreq_parse_governor(gov->name, &new_policy.policy,
-					       NULL);
+			cpufreq_parse_governor(gov->name, &new_policy);
 	}
 	/* set default policy */
 	return cpufreq_set_policy(policy, &new_policy);
@@ -2160,7 +2166,6 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor)
 	mutex_lock(&cpufreq_governor_mutex);
 	list_del(&governor->governor_list);
 	mutex_unlock(&cpufreq_governor_mutex);
-	return;
 }
 EXPORT_SYMBOL_GPL(cpufreq_unregister_governor);
 
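The core of the governor-removal race fix is the try_module_get() taken while cpufreq_governor_mutex is still held: previously the governor could be rmmod'ed between the lookup and its first use. The pattern in isolation (a sketch; find_governor() and the mutex are the ones from the hunks above):

	struct cpufreq_governor *t;

	mutex_lock(&cpufreq_governor_mutex);
	t = find_governor(str_governor);
	if (t && !try_module_get(t->owner))
		t = NULL;	/* module already unloading: treat as not found */
	mutex_unlock(&cpufreq_governor_mutex);

	/*
	 * Between here and the matching module_put() the governor module
	 * cannot be unloaded, so the pointer stays valid even if the user
	 * removes the governor module concurrently.
	 */
	if (t)
		module_put(t->owner);

This is why store_scaling_governor() now drops a reference after cpufreq_set_policy() has taken its own.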
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index 1e55b5790853..1572129844a5 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -27,7 +27,7 @@ struct cpufreq_stats {
 	unsigned int *trans_table;
 };
 
-static int cpufreq_stats_update(struct cpufreq_stats *stats)
+static void cpufreq_stats_update(struct cpufreq_stats *stats)
 {
 	unsigned long long cur_time = get_jiffies_64();
 
@@ -35,7 +35,6 @@ static int cpufreq_stats_update(struct cpufreq_stats *stats)
 	stats->time_in_state[stats->last_index] += cur_time - stats->last_time;
 	stats->last_time = cur_time;
 	spin_unlock(&cpufreq_stats_lock);
-	return 0;
 }
 
 static void cpufreq_stats_clear_table(struct cpufreq_stats *stats)
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
index d9b2c2de49c4..741f22e5cee3 100644
--- a/drivers/cpufreq/imx6q-cpufreq.c
+++ b/drivers/cpufreq/imx6q-cpufreq.c
@@ -25,15 +25,29 @@ static struct regulator *arm_reg;
 static struct regulator *pu_reg;
 static struct regulator *soc_reg;
 
-static struct clk *arm_clk;
-static struct clk *pll1_sys_clk;
-static struct clk *pll1_sw_clk;
-static struct clk *step_clk;
-static struct clk *pll2_pfd2_396m_clk;
-
-/* clk used by i.MX6UL */
-static struct clk *pll2_bus_clk;
-static struct clk *secondary_sel_clk;
+enum IMX6_CPUFREQ_CLKS {
+	ARM,
+	PLL1_SYS,
+	STEP,
+	PLL1_SW,
+	PLL2_PFD2_396M,
+	/* MX6UL requires two more clks */
+	PLL2_BUS,
+	SECONDARY_SEL,
+};
+#define IMX6Q_CPUFREQ_CLK_NUM		5
+#define IMX6UL_CPUFREQ_CLK_NUM		7
+
+static int num_clks;
+static struct clk_bulk_data clks[] = {
+	{ .id = "arm" },
+	{ .id = "pll1_sys" },
+	{ .id = "step" },
+	{ .id = "pll1_sw" },
+	{ .id = "pll2_pfd2_396m" },
+	{ .id = "pll2_bus" },
+	{ .id = "secondary_sel" },
+};
 
 static struct device *cpu_dev;
 static bool free_opp;
@@ -53,7 +67,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
 
 	new_freq = freq_table[index].frequency;
 	freq_hz = new_freq * 1000;
-	old_freq = clk_get_rate(arm_clk) / 1000;
+	old_freq = clk_get_rate(clks[ARM].clk) / 1000;
 
 	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
 	if (IS_ERR(opp)) {
@@ -112,29 +126,35 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
 		 * voltage of 528MHz, so lower the CPU frequency to one
 		 * half before changing CPU frequency.
 		 */
-		clk_set_rate(arm_clk, (old_freq >> 1) * 1000);
-		clk_set_parent(pll1_sw_clk, pll1_sys_clk);
-		if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk))
-			clk_set_parent(secondary_sel_clk, pll2_bus_clk);
+		clk_set_rate(clks[ARM].clk, (old_freq >> 1) * 1000);
+		clk_set_parent(clks[PLL1_SW].clk, clks[PLL1_SYS].clk);
+		if (freq_hz > clk_get_rate(clks[PLL2_PFD2_396M].clk))
+			clk_set_parent(clks[SECONDARY_SEL].clk,
+				       clks[PLL2_BUS].clk);
 		else
-			clk_set_parent(secondary_sel_clk, pll2_pfd2_396m_clk);
-		clk_set_parent(step_clk, secondary_sel_clk);
-		clk_set_parent(pll1_sw_clk, step_clk);
+			clk_set_parent(clks[SECONDARY_SEL].clk,
+				       clks[PLL2_PFD2_396M].clk);
+		clk_set_parent(clks[STEP].clk, clks[SECONDARY_SEL].clk);
+		clk_set_parent(clks[PLL1_SW].clk, clks[STEP].clk);
+		if (freq_hz > clk_get_rate(clks[PLL2_BUS].clk)) {
+			clk_set_rate(clks[PLL1_SYS].clk, new_freq * 1000);
+			clk_set_parent(clks[PLL1_SW].clk, clks[PLL1_SYS].clk);
+		}
 	} else {
-		clk_set_parent(step_clk, pll2_pfd2_396m_clk);
-		clk_set_parent(pll1_sw_clk, step_clk);
-		if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) {
-			clk_set_rate(pll1_sys_clk, new_freq * 1000);
-			clk_set_parent(pll1_sw_clk, pll1_sys_clk);
+		clk_set_parent(clks[STEP].clk, clks[PLL2_PFD2_396M].clk);
+		clk_set_parent(clks[PLL1_SW].clk, clks[STEP].clk);
+		if (freq_hz > clk_get_rate(clks[PLL2_PFD2_396M].clk)) {
+			clk_set_rate(clks[PLL1_SYS].clk, new_freq * 1000);
+			clk_set_parent(clks[PLL1_SW].clk, clks[PLL1_SYS].clk);
 		} else {
 			/* pll1_sys needs to be enabled for divider rate change to work. */
 			pll1_sys_temp_enabled = true;
-			clk_prepare_enable(pll1_sys_clk);
+			clk_prepare_enable(clks[PLL1_SYS].clk);
 		}
 	}
 
 	/* Ensure the arm clock divider is what we expect */
-	ret = clk_set_rate(arm_clk, new_freq * 1000);
+	ret = clk_set_rate(clks[ARM].clk, new_freq * 1000);
 	if (ret) {
 		dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
 		regulator_set_voltage_tol(arm_reg, volt_old, 0);
@@ -143,7 +163,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
 
 	/* PLL1 is only needed until after ARM-PODF is set. */
 	if (pll1_sys_temp_enabled)
-		clk_disable_unprepare(pll1_sys_clk);
+		clk_disable_unprepare(clks[PLL1_SYS].clk);
 
 	/* scaling down? scale voltage after frequency */
 	if (new_freq < old_freq) {
@@ -174,7 +194,7 @@ static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
 {
 	int ret;
 
-	policy->clk = arm_clk;
+	policy->clk = clks[ARM].clk;
 	ret = cpufreq_generic_init(policy, freq_table, transition_latency);
 	policy->suspend_freq = policy->max;
 
@@ -244,6 +264,43 @@ put_node:
 	of_node_put(np);
 }
 
+#define OCOTP_CFG3_6UL_SPEED_696MHZ	0x2
+
+static void imx6ul_opp_check_speed_grading(struct device *dev)
+{
+	struct device_node *np;
+	void __iomem *base;
+	u32 val;
+
+	np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp");
+	if (!np)
+		return;
+
+	base = of_iomap(np, 0);
+	if (!base) {
+		dev_err(dev, "failed to map ocotp\n");
+		goto put_node;
+	}
+
+	/*
+	 * Speed GRADING[1:0] defines the max speed of ARM:
+	 * 2b'00: Reserved;
+	 * 2b'01: 528000000Hz;
+	 * 2b'10: 696000000Hz;
+	 * 2b'11: Reserved;
+	 * We need to set the max speed of ARM according to fuse map.
+	 */
+	val = readl_relaxed(base + OCOTP_CFG3);
+	val >>= OCOTP_CFG3_SPEED_SHIFT;
+	val &= 0x3;
+	if (val != OCOTP_CFG3_6UL_SPEED_696MHZ)
+		if (dev_pm_opp_disable(dev, 696000000))
+			dev_warn(dev, "failed to disable 696MHz OPP\n");
+	iounmap(base);
+put_node:
+	of_node_put(np);
+}
+
 static int imx6q_cpufreq_probe(struct platform_device *pdev)
 {
 	struct device_node *np;
@@ -266,28 +323,15 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 		return -ENOENT;
 	}
 
-	arm_clk = clk_get(cpu_dev, "arm");
-	pll1_sys_clk = clk_get(cpu_dev, "pll1_sys");
-	pll1_sw_clk = clk_get(cpu_dev, "pll1_sw");
-	step_clk = clk_get(cpu_dev, "step");
-	pll2_pfd2_396m_clk = clk_get(cpu_dev, "pll2_pfd2_396m");
-	if (IS_ERR(arm_clk) || IS_ERR(pll1_sys_clk) || IS_ERR(pll1_sw_clk) ||
-	    IS_ERR(step_clk) || IS_ERR(pll2_pfd2_396m_clk)) {
-		dev_err(cpu_dev, "failed to get clocks\n");
-		ret = -ENOENT;
-		goto put_clk;
-	}
-
 	if (of_machine_is_compatible("fsl,imx6ul") ||
-	    of_machine_is_compatible("fsl,imx6ull")) {
-		pll2_bus_clk = clk_get(cpu_dev, "pll2_bus");
-		secondary_sel_clk = clk_get(cpu_dev, "secondary_sel");
-		if (IS_ERR(pll2_bus_clk) || IS_ERR(secondary_sel_clk)) {
-			dev_err(cpu_dev, "failed to get clocks specific to imx6ul\n");
-			ret = -ENOENT;
-			goto put_clk;
-		}
-	}
+	    of_machine_is_compatible("fsl,imx6ull"))
+		num_clks = IMX6UL_CPUFREQ_CLK_NUM;
+	else
+		num_clks = IMX6Q_CPUFREQ_CLK_NUM;
+
+	ret = clk_bulk_get(cpu_dev, num_clks, clks);
+	if (ret)
+		goto put_node;
 
 	arm_reg = regulator_get(cpu_dev, "arm");
 	pu_reg = regulator_get_optional(cpu_dev, "pu");
@@ -311,7 +355,10 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 		goto put_reg;
 	}
 
-	imx6q_opp_check_speed_grading(cpu_dev);
+	if (of_machine_is_compatible("fsl,imx6ul"))
+		imx6ul_opp_check_speed_grading(cpu_dev);
+	else
+		imx6q_opp_check_speed_grading(cpu_dev);
 
 	/* Because we have added the OPPs here, we must free them */
 	free_opp = true;
@@ -424,22 +471,11 @@ put_reg:
 		regulator_put(pu_reg);
 	if (!IS_ERR(soc_reg))
 		regulator_put(soc_reg);
-put_clk:
-	if (!IS_ERR(arm_clk))
-		clk_put(arm_clk);
-	if (!IS_ERR(pll1_sys_clk))
-		clk_put(pll1_sys_clk);
-	if (!IS_ERR(pll1_sw_clk))
-		clk_put(pll1_sw_clk);
-	if (!IS_ERR(step_clk))
-		clk_put(step_clk);
-	if (!IS_ERR(pll2_pfd2_396m_clk))
-		clk_put(pll2_pfd2_396m_clk);
-	if (!IS_ERR(pll2_bus_clk))
-		clk_put(pll2_bus_clk);
-	if (!IS_ERR(secondary_sel_clk))
-		clk_put(secondary_sel_clk);
+
+	clk_bulk_put(num_clks, clks);
+put_node:
 	of_node_put(np);
+
 	return ret;
 }
 
@@ -453,13 +489,8 @@ static int imx6q_cpufreq_remove(struct platform_device *pdev)
 	if (!IS_ERR(pu_reg))
 		regulator_put(pu_reg);
 	regulator_put(soc_reg);
-	clk_put(arm_clk);
-	clk_put(pll1_sys_clk);
-	clk_put(pll1_sw_clk);
-	clk_put(step_clk);
-	clk_put(pll2_pfd2_396m_clk);
-	clk_put(pll2_bus_clk);
-	clk_put(secondary_sel_clk);
+
+	clk_bulk_put(num_clks, clks);
 
 	return 0;
 }
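clk_bulk_get() collapses the seven individual clk_get()/clk_put() pairs above into one call each way, and cleans up after itself if any clock in the set is missing. A minimal consumer sketch (device and clock names here are illustrative):

	#include <linux/clk.h>

	static struct clk_bulk_data mydev_clks[] = {
		{ .id = "arm" },
		{ .id = "pll1_sys" },
	};

	static int mydev_setup(struct device *dev)
	{
		int ret;

		/* One call fetches every clock; on error none are held. */
		ret = clk_bulk_get(dev, ARRAY_SIZE(mydev_clks), mydev_clks);
		if (ret)
			return ret;

		/* Individual clocks stay addressable via the .clk member. */
		clk_set_rate(mydev_clks[0].clk, 996000000);

		clk_bulk_put(ARRAY_SIZE(mydev_clks), mydev_clks);
		return 0;
	}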
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 93a0e88bef76..7edf7a0e5a96 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -1595,15 +1595,6 @@ static const struct pstate_funcs knl_funcs = {
 	.get_val = core_get_val,
 };
 
-static const struct pstate_funcs bxt_funcs = {
-	.get_max = core_get_max_pstate,
-	.get_max_physical = core_get_max_pstate_physical,
-	.get_min = core_get_min_pstate,
-	.get_turbo = core_get_turbo_pstate,
-	.get_scaling = core_get_scaling,
-	.get_val = core_get_val,
-};
-
 #define ICPU(model, policy) \
 	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_APERFMPERF,\
 			(unsigned long)&policy }
@@ -1627,8 +1618,9 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
 	ICPU(INTEL_FAM6_BROADWELL_XEON_D, core_funcs),
 	ICPU(INTEL_FAM6_XEON_PHI_KNL, knl_funcs),
 	ICPU(INTEL_FAM6_XEON_PHI_KNM, knl_funcs),
-	ICPU(INTEL_FAM6_ATOM_GOLDMONT, bxt_funcs),
-	ICPU(INTEL_FAM6_ATOM_GEMINI_LAKE, bxt_funcs),
+	ICPU(INTEL_FAM6_ATOM_GOLDMONT, core_funcs),
+	ICPU(INTEL_FAM6_ATOM_GEMINI_LAKE, core_funcs),
+	ICPU(INTEL_FAM6_SKYLAKE_X, core_funcs),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c
index c46a12df40dd..5faa37c5b091 100644
--- a/drivers/cpufreq/longhaul.c
+++ b/drivers/cpufreq/longhaul.c
@@ -894,7 +894,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
 	if ((longhaul_version != TYPE_LONGHAUL_V1) && (scale_voltage != 0))
 		longhaul_setup_voltagescaling();
 
-	policy->cpuinfo.transition_latency = 200000;	/* nsec */
+	policy->transition_delay_us = 200000;	/* usec */
 
 	return cpufreq_table_validate_and_show(policy, longhaul_table);
 }
diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
index e0d5090b303d..8c04dddd3c28 100644
--- a/drivers/cpufreq/mediatek-cpufreq.c
+++ b/drivers/cpufreq/mediatek-cpufreq.c
@@ -310,28 +310,8 @@ static int mtk_cpufreq_set_target(struct cpufreq_policy *policy,
 static void mtk_cpufreq_ready(struct cpufreq_policy *policy)
 {
 	struct mtk_cpu_dvfs_info *info = policy->driver_data;
-	struct device_node *np = of_node_get(info->cpu_dev->of_node);
-	u32 capacitance = 0;
 
-	if (WARN_ON(!np))
-		return;
-
-	if (of_find_property(np, "#cooling-cells", NULL)) {
-		of_property_read_u32(np, DYNAMIC_POWER, &capacitance);
-
-		info->cdev = of_cpufreq_power_cooling_register(np,
-						policy, capacitance, NULL);
-
-		if (IS_ERR(info->cdev)) {
-			dev_err(info->cpu_dev,
-				"running cpufreq without cooling device: %ld\n",
-				PTR_ERR(info->cdev));
-
-			info->cdev = NULL;
-		}
-	}
-
-	of_node_put(np);
+	info->cdev = of_cpufreq_cooling_register(policy);
 }
 
 static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
@@ -574,6 +554,7 @@ static struct platform_driver mtk_cpufreq_platdrv = {
 /* List of machines supported by this driver */
 static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
 	{ .compatible = "mediatek,mt2701", },
+	{ .compatible = "mediatek,mt2712", },
 	{ .compatible = "mediatek,mt7622", },
 	{ .compatible = "mediatek,mt7623", },
 	{ .compatible = "mediatek,mt817x", },
diff --git a/drivers/cpufreq/mvebu-cpufreq.c b/drivers/cpufreq/mvebu-cpufreq.c
index ed915ee85dd9..31513bd42705 100644
--- a/drivers/cpufreq/mvebu-cpufreq.c
+++ b/drivers/cpufreq/mvebu-cpufreq.c
@@ -76,12 +76,6 @@ static int __init armada_xp_pmsu_cpufreq_init(void)
 			return PTR_ERR(clk);
 		}
 
-		/*
-		 * In case of a failure of dev_pm_opp_add(), we don't
-		 * bother with cleaning up the registered OPP (there's
-		 * no function to do so), and simply cancel the
-		 * registration of the cpufreq device.
-		 */
 		ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk), 0);
 		if (ret) {
 			clk_put(clk);
@@ -91,7 +85,8 @@ static int __init armada_xp_pmsu_cpufreq_init(void)
 		ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk) / 2, 0);
 		if (ret) {
 			clk_put(clk);
-			return ret;
+			dev_err(cpu_dev, "Failed to register OPPs\n");
+			goto opp_register_failed;
 		}
 
 		ret = dev_pm_opp_set_sharing_cpus(cpu_dev,
@@ -99,9 +94,16 @@ static int __init armada_xp_pmsu_cpufreq_init(void)
 		if (ret)
 			dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
 				__func__, ret);
+		clk_put(clk);
 	}
 
 	platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
 	return 0;
+
+opp_register_failed:
+	/* As registering has failed remove all the opp for all cpus */
+	dev_pm_opp_cpumask_remove_table(cpu_possible_mask);
+
+	return ret;
 }
 device_initcall(armada_xp_pmsu_cpufreq_init);
diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
index b6d7c4c98d0a..29cdec198657 100644
--- a/drivers/cpufreq/powernv-cpufreq.c
+++ b/drivers/cpufreq/powernv-cpufreq.c
@@ -29,6 +29,7 @@
 #include <linux/reboot.h>
 #include <linux/slab.h>
 #include <linux/cpu.h>
+#include <linux/hashtable.h>
 #include <trace/events/power.h>
 
 #include <asm/cputhreads.h>
@@ -38,14 +39,13 @@
 #include <asm/opal.h>
 #include <linux/timer.h>
 
-#define POWERNV_MAX_PSTATES	256
+#define POWERNV_MAX_PSTATES_ORDER  8
+#define POWERNV_MAX_PSTATES	(1UL << (POWERNV_MAX_PSTATES_ORDER))
 #define PMSR_PSAFE_ENABLE	(1UL << 30)
 #define PMSR_SPR_EM_DISABLE	(1UL << 31)
-#define PMSR_MAX(x)		((x >> 32) & 0xFF)
+#define MAX_PSTATE_SHIFT	32
 #define LPSTATE_SHIFT		48
 #define GPSTATE_SHIFT		56
-#define GET_LPSTATE(x)		(((x) >> LPSTATE_SHIFT) & 0xFF)
-#define GET_GPSTATE(x)		(((x) >> GPSTATE_SHIFT) & 0xFF)
 
 #define MAX_RAMP_DOWN_TIME	5120
 /*
@@ -94,6 +94,27 @@ struct global_pstate_info {
 };
 
 static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1];
+
+DEFINE_HASHTABLE(pstate_revmap, POWERNV_MAX_PSTATES_ORDER);
+/**
+ * struct pstate_idx_revmap_data: Entry in the hashmap pstate_revmap
+ *				  indexed by a function of pstate id.
+ *
+ * @pstate_id : pstate id for this entry.
+ *
+ * @cpufreq_table_idx : Index into the powernv_freqs
+ *			cpufreq_frequency_table for frequency
+ *			corresponding to pstate_id.
+ *
+ * @hentry : hlist_node that hooks this entry into the pstate_revmap
+ *	     hashtable
+ */
+struct pstate_idx_revmap_data {
+	u8 pstate_id;
+	unsigned int cpufreq_table_idx;
+	struct hlist_node hentry;
+};
+
 static bool rebooting, throttled, occ_reset;
 
 static const char * const throttle_reason[] = {
@@ -148,39 +169,56 @@ static struct powernv_pstate_info {
 	bool wof_enabled;
 } powernv_pstate_info;
 
-/* Use following macros for conversions between pstate_id and index */
-static inline int idx_to_pstate(unsigned int i)
+static inline u8 extract_pstate(u64 pmsr_val, unsigned int shift)
+{
+	return ((pmsr_val >> shift) & 0xFF);
+}
+
+#define extract_local_pstate(x) extract_pstate(x, LPSTATE_SHIFT)
+#define extract_global_pstate(x) extract_pstate(x, GPSTATE_SHIFT)
+#define extract_max_pstate(x)  extract_pstate(x, MAX_PSTATE_SHIFT)
+
+/* Use following functions for conversions between pstate_id and index */
+
+/**
+ * idx_to_pstate : Returns the pstate id corresponding to the
+ *		   frequency in the cpufreq frequency table
+ *		   powernv_freqs indexed by @i.
+ *
+ *		   If @i is out of bound, this will return the pstate
+ *		   corresponding to the nominal frequency.
+ */
+static inline u8 idx_to_pstate(unsigned int i)
 {
 	if (unlikely(i >= powernv_pstate_info.nr_pstates)) {
-		pr_warn_once("index %u is out of bound\n", i);
+		pr_warn_once("idx_to_pstate: index %u is out of bound\n", i);
 		return powernv_freqs[powernv_pstate_info.nominal].driver_data;
 	}
 
 	return powernv_freqs[i].driver_data;
 }
 
-static inline unsigned int pstate_to_idx(int pstate)
+/**
+ * pstate_to_idx : Returns the index in the cpufreq frequencytable
+ *		   powernv_freqs for the frequency whose corresponding
+ *		   pstate id is @pstate.
+ *
+ *		   If no frequency corresponding to @pstate is found,
+ *		   this will return the index of the nominal
+ *		   frequency.
+ */
+static unsigned int pstate_to_idx(u8 pstate)
 {
-	int min = powernv_freqs[powernv_pstate_info.min].driver_data;
-	int max = powernv_freqs[powernv_pstate_info.max].driver_data;
+	unsigned int key = pstate % POWERNV_MAX_PSTATES;
+	struct pstate_idx_revmap_data *revmap_data;
 
-	if (min > 0) {
-		if (unlikely((pstate < max) || (pstate > min))) {
-			pr_warn_once("pstate %d is out of bound\n", pstate);
-			return powernv_pstate_info.nominal;
-		}
-	} else {
-		if (unlikely((pstate > max) || (pstate < min))) {
-			pr_warn_once("pstate %d is out of bound\n", pstate);
-			return powernv_pstate_info.nominal;
-		}
+	hash_for_each_possible(pstate_revmap, revmap_data, hentry, key) {
+		if (revmap_data->pstate_id == pstate)
+			return revmap_data->cpufreq_table_idx;
 	}
-	/*
-	 * abs() is deliberately used so that is works with
-	 * both monotonically increasing and decreasing
-	 * pstate values
-	 */
-	return abs(pstate - idx_to_pstate(powernv_pstate_info.max));
+
+	pr_warn_once("pstate_to_idx: pstate 0x%x not found\n", pstate);
+	return powernv_pstate_info.nominal;
 }
 
 static inline void reset_gpstates(struct cpufreq_policy *policy)
@@ -247,7 +285,7 @@ static int init_powernv_pstates(void)
 		powernv_pstate_info.wof_enabled = true;
 
 next:
-	pr_info("cpufreq pstate min %d nominal %d max %d\n", pstate_min,
+	pr_info("cpufreq pstate min 0x%x nominal 0x%x max 0x%x\n", pstate_min,
 		pstate_nominal, pstate_max);
 	pr_info("Workload Optimized Frequency is %s in the platform\n",
 		(powernv_pstate_info.wof_enabled) ? "enabled" : "disabled");
@@ -278,19 +316,30 @@ next:
 
 	powernv_pstate_info.nr_pstates = nr_pstates;
 	pr_debug("NR PStates %d\n", nr_pstates);
+
 	for (i = 0; i < nr_pstates; i++) {
 		u32 id = be32_to_cpu(pstate_ids[i]);
 		u32 freq = be32_to_cpu(pstate_freqs[i]);
+		struct pstate_idx_revmap_data *revmap_data;
+		unsigned int key;
 
 		pr_debug("PState id %d freq %d MHz\n", id, freq);
 		powernv_freqs[i].frequency = freq * 1000; /* kHz */
-		powernv_freqs[i].driver_data = id;
+		powernv_freqs[i].driver_data = id & 0xFF;
+
+		revmap_data = (struct pstate_idx_revmap_data *)
+			      kmalloc(sizeof(*revmap_data), GFP_KERNEL);
+
+		revmap_data->pstate_id = id & 0xFF;
+		revmap_data->cpufreq_table_idx = i;
+		key = (revmap_data->pstate_id) % POWERNV_MAX_PSTATES;
+		hash_add(pstate_revmap, &revmap_data->hentry, key);
 
 		if (id == pstate_max)
 			powernv_pstate_info.max = i;
-		else if (id == pstate_nominal)
+		if (id == pstate_nominal)
 			powernv_pstate_info.nominal = i;
-		else if (id == pstate_min)
+		if (id == pstate_min)
 			powernv_pstate_info.min = i;
 
 		if (powernv_pstate_info.wof_enabled && id == pstate_turbo) {
@@ -307,14 +356,13 @@ next:
 }
 
 /* Returns the CPU frequency corresponding to the pstate_id. */
-static unsigned int pstate_id_to_freq(int pstate_id)
+static unsigned int pstate_id_to_freq(u8 pstate_id)
 {
 	int i;
 
 	i = pstate_to_idx(pstate_id);
 	if (i >= powernv_pstate_info.nr_pstates || i < 0) {
-		pr_warn("PState id %d outside of PState table, "
-			"reporting nominal id %d instead\n",
+		pr_warn("PState id 0x%x outside of PState table, reporting nominal id 0x%x instead\n",
 			pstate_id, idx_to_pstate(powernv_pstate_info.nominal));
 		i = powernv_pstate_info.nominal;
 	}
@@ -420,8 +468,8 @@ static inline void set_pmspr(unsigned long sprn, unsigned long val)
  */
 struct powernv_smp_call_data {
 	unsigned int freq;
-	int pstate_id;
-	int gpstate_id;
+	u8 pstate_id;
+	u8 gpstate_id;
 };
 
 /*
@@ -438,22 +486,15 @@ struct powernv_smp_call_data {
 static void powernv_read_cpu_freq(void *arg)
 {
 	unsigned long pmspr_val;
-	s8 local_pstate_id;
 	struct powernv_smp_call_data *freq_data = arg;
 
 	pmspr_val = get_pmspr(SPRN_PMSR);
-
-	/*
-	 * The local pstate id corresponds bits 48..55 in the PMSR.
-	 * Note: Watch out for the sign!
-	 */
-	local_pstate_id = (pmspr_val >> 48) & 0xFF;
-	freq_data->pstate_id = local_pstate_id;
+	freq_data->pstate_id = extract_local_pstate(pmspr_val);
 	freq_data->freq = pstate_id_to_freq(freq_data->pstate_id);
 
-	pr_debug("cpu %d pmsr %016lX pstate_id %d frequency %d kHz\n",
+	pr_debug("cpu %d pmsr %016lX pstate_id 0x%x frequency %d kHz\n",
 		raw_smp_processor_id(), pmspr_val, freq_data->pstate_id,
 		freq_data->freq);
 }
 
 /*
@@ -515,21 +556,21 @@ static void powernv_cpufreq_throttle_check(void *data)
 	struct chip *chip;
 	unsigned int cpu = smp_processor_id();
 	unsigned long pmsr;
-	int pmsr_pmax;
+	u8 pmsr_pmax;
 	unsigned int pmsr_pmax_idx;
 
 	pmsr = get_pmspr(SPRN_PMSR);
 	chip = this_cpu_read(chip_info);
 
 	/* Check for Pmax Capping */
-	pmsr_pmax = (s8)PMSR_MAX(pmsr);
+	pmsr_pmax = extract_max_pstate(pmsr);
 	pmsr_pmax_idx = pstate_to_idx(pmsr_pmax);
 	if (pmsr_pmax_idx != powernv_pstate_info.max) {
 		if (chip->throttled)
 			goto next;
 		chip->throttled = true;
 		if (pmsr_pmax_idx > powernv_pstate_info.nominal) {
-			pr_warn_once("CPU %d on Chip %u has Pmax(%d) reduced below nominal frequency(%d)\n",
+			pr_warn_once("CPU %d on Chip %u has Pmax(0x%x) reduced below that of nominal frequency(0x%x)\n",
 				     cpu, chip->id, pmsr_pmax,
 				     idx_to_pstate(powernv_pstate_info.nominal));
 			chip->throttle_sub_turbo++;
@@ -645,8 +686,8 @@ void gpstate_timer_handler(struct timer_list *t)
 	 * value. Hence, read from PMCR to get correct data.
 	 */
 	val = get_pmspr(SPRN_PMCR);
-	freq_data.gpstate_id = (s8)GET_GPSTATE(val);
-	freq_data.pstate_id = (s8)GET_LPSTATE(val);
+	freq_data.gpstate_id = extract_global_pstate(val);
+	freq_data.pstate_id = extract_local_pstate(val);
 	if (freq_data.gpstate_id == freq_data.pstate_id) {
 		reset_gpstates(policy);
 		spin_unlock(&gpstates->gpstate_lock);
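The reverse map replaces the old assumption that pstate ids fall monotonically from the maximum; newer POWER hardware exposes non-contiguous pstates, which made the previous abs()-based index arithmetic report wrong frequencies. The mechanism in isolation, as a stripped-down sketch of the hunks above (names shortened):

	#include <linux/hashtable.h>
	#include <linux/slab.h>

	static DEFINE_HASHTABLE(revmap, 8);	/* 2^8 buckets */

	struct revmap_entry {
		u8 pstate_id;
		unsigned int table_idx;
		struct hlist_node hentry;
	};

	/* Populate once at init time, one entry per pstate. */
	static void revmap_add(u8 pstate, unsigned int idx)
	{
		struct revmap_entry *e = kmalloc(sizeof(*e), GFP_KERNEL);

		if (!e)
			return;
		e->pstate_id = pstate;
		e->table_idx = idx;
		hash_add(revmap, &e->hentry, pstate);
	}

	/* Lookup walks only the bucket that "pstate" hashes to. */
	static unsigned int revmap_lookup(u8 pstate, unsigned int fallback)
	{
		struct revmap_entry *e;

		hash_for_each_possible(revmap, e, hentry, pstate) {
			if (e->pstate_id == pstate)
				return e->table_idx;
		}
		return fallback;	/* e.g. the nominal index, as above */
	}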
diff --git a/drivers/cpufreq/qoriq-cpufreq.c b/drivers/cpufreq/qoriq-cpufreq.c
index 4ada55b8856e..0562761a3dec 100644
--- a/drivers/cpufreq/qoriq-cpufreq.c
+++ b/drivers/cpufreq/qoriq-cpufreq.c
@@ -275,20 +275,8 @@ static int qoriq_cpufreq_target(struct cpufreq_policy *policy,
 static void qoriq_cpufreq_ready(struct cpufreq_policy *policy)
 {
 	struct cpu_data *cpud = policy->driver_data;
-	struct device_node *np = of_get_cpu_node(policy->cpu, NULL);
 
-	if (of_find_property(np, "#cooling-cells", NULL)) {
-		cpud->cdev = of_cpufreq_cooling_register(np, policy);
-
-		if (IS_ERR(cpud->cdev) && PTR_ERR(cpud->cdev) != -ENOSYS) {
-			pr_err("cpu%d is not running as cooling device: %ld\n",
-					policy->cpu, PTR_ERR(cpud->cdev));
-
-			cpud->cdev = NULL;
-		}
-	}
-
-	of_node_put(np);
+	cpud->cdev = of_cpufreq_cooling_register(policy);
 }
 
 static struct cpufreq_driver qoriq_cpufreq_driver = {
diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
index 05d299052c5c..247fcbfa4cb5 100644
--- a/drivers/cpufreq/scpi-cpufreq.c
+++ b/drivers/cpufreq/scpi-cpufreq.c
@@ -18,27 +18,89 @@

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/clk.h>
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
+#include <linux/cpumask.h>
+#include <linux/cpu_cooling.h>
+#include <linux/export.h>
 #include <linux/module.h>
-#include <linux/platform_device.h>
+#include <linux/of_platform.h>
 #include <linux/pm_opp.h>
 #include <linux/scpi_protocol.h>
+#include <linux/slab.h>
 #include <linux/types.h>
 
-#include "arm_big_little.h"
+struct scpi_data {
+	struct clk *clk;
+	struct device *cpu_dev;
+	struct thermal_cooling_device *cdev;
+};
 
 static struct scpi_ops *scpi_ops;
 
-static int scpi_get_transition_latency(struct device *cpu_dev)
+static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
+{
+	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
+	struct scpi_data *priv = policy->driver_data;
+	unsigned long rate = clk_get_rate(priv->clk);
+
+	return rate / 1000;
+}
+
+static int
+scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
+{
+	struct scpi_data *priv = policy->driver_data;
+	u64 rate = policy->freq_table[index].frequency * 1000;
+	int ret;
+
+	ret = clk_set_rate(priv->clk, rate);
+	if (!ret && (clk_get_rate(priv->clk) != rate))
+		ret = -EIO;
+
+	return ret;
+}
+
+static int
+scpi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
 {
-	return scpi_ops->get_transition_latency(cpu_dev);
+	int cpu, domain, tdomain;
+	struct device *tcpu_dev;
+
+	domain = scpi_ops->device_domain_id(cpu_dev);
+	if (domain < 0)
+		return domain;
+
+	for_each_possible_cpu(cpu) {
+		if (cpu == cpu_dev->id)
+			continue;
+
+		tcpu_dev = get_cpu_device(cpu);
+		if (!tcpu_dev)
+			continue;
+
+		tdomain = scpi_ops->device_domain_id(tcpu_dev);
+		if (tdomain == domain)
+			cpumask_set_cpu(cpu, cpumask);
+	}
+
+	return 0;
 }
 
-static int scpi_init_opp_table(const struct cpumask *cpumask)
+static int scpi_cpufreq_init(struct cpufreq_policy *policy)
 {
 	int ret;
-	struct device *cpu_dev = get_cpu_device(cpumask_first(cpumask));
+	unsigned int latency;
+	struct device *cpu_dev;
+	struct scpi_data *priv;
+	struct cpufreq_frequency_table *freq_table;
+
+	cpu_dev = get_cpu_device(policy->cpu);
+	if (!cpu_dev) {
+		pr_err("failed to get cpu%d device\n", policy->cpu);
+		return -ENODEV;
+	}
 
 	ret = scpi_ops->add_opps_to_device(cpu_dev);
 	if (ret) {
@@ -46,32 +108,133 @@ static int scpi_init_opp_table(const struct cpumask *cpumask)
 		return ret;
 	}
 
-	ret = dev_pm_opp_set_sharing_cpus(cpu_dev, cpumask);
-	if (ret)
+	ret = scpi_get_sharing_cpus(cpu_dev, policy->cpus);
+	if (ret) {
+		dev_warn(cpu_dev, "failed to get sharing cpumask\n");
+		return ret;
+	}
+
+	ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
+	if (ret) {
 		dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
 			__func__, ret);
+		return ret;
+	}
+
+	ret = dev_pm_opp_get_opp_count(cpu_dev);
+	if (ret <= 0) {
+		dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
+		ret = -EPROBE_DEFER;
+		goto out_free_opp;
+	}
+
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		ret = -ENOMEM;
+		goto out_free_opp;
+	}
+
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+	if (ret) {
+		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+		goto out_free_priv;
+	}
+
+	priv->cpu_dev = cpu_dev;
+	priv->clk = clk_get(cpu_dev, NULL);
+	if (IS_ERR(priv->clk)) {
+		dev_err(cpu_dev, "%s: Failed to get clk for cpu: %d\n",
+			__func__, cpu_dev->id);
+		goto out_free_cpufreq_table;
+	}
+
+	policy->driver_data = priv;
+
+	ret = cpufreq_table_validate_and_show(policy, freq_table);
+	if (ret) {
+		dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__,
+			ret);
+		goto out_put_clk;
+	}
+
+	/* scpi allows DVFS request for any domain from any CPU */
+	policy->dvfs_possible_from_any_cpu = true;
+
+	latency = scpi_ops->get_transition_latency(cpu_dev);
+	if (!latency)
+		latency = CPUFREQ_ETERNAL;
+
+	policy->cpuinfo.transition_latency = latency;
+
+	policy->fast_switch_possible = false;
+	return 0;
+
+out_put_clk:
+	clk_put(priv->clk);
+out_free_cpufreq_table:
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+out_free_priv:
+	kfree(priv);
+out_free_opp:
+	dev_pm_opp_cpumask_remove_table(policy->cpus);
+
 	return ret;
 }
 
-static const struct cpufreq_arm_bL_ops scpi_cpufreq_ops = {
-	.name	= "scpi",
-	.get_transition_latency = scpi_get_transition_latency,
-	.init_opp_table = scpi_init_opp_table,
-	.free_opp_table = dev_pm_opp_cpumask_remove_table,
+static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
+{
+	struct scpi_data *priv = policy->driver_data;
+
+	cpufreq_cooling_unregister(priv->cdev);
+	clk_put(priv->clk);
+	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+	kfree(priv);
+	dev_pm_opp_cpumask_remove_table(policy->related_cpus);
+
+	return 0;
+}
+
+static void scpi_cpufreq_ready(struct cpufreq_policy *policy)
+{
+	struct scpi_data *priv = policy->driver_data;
+	struct thermal_cooling_device *cdev;
+
+	cdev = of_cpufreq_cooling_register(policy);
+	if (!IS_ERR(cdev))
+		priv->cdev = cdev;
+}
+
+static struct cpufreq_driver scpi_cpufreq_driver = {
+	.name	= "scpi-cpufreq",
+	.flags	= CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
+		  CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+	.verify	= cpufreq_generic_frequency_table_verify,
+	.attr	= cpufreq_generic_attr,
+	.get	= scpi_cpufreq_get_rate,
+	.init	= scpi_cpufreq_init,
+	.exit	= scpi_cpufreq_exit,
+	.ready	= scpi_cpufreq_ready,
+	.target_index	= scpi_cpufreq_set_target,
 };
 
 static int scpi_cpufreq_probe(struct platform_device *pdev)
 {
+	int ret;
+
 	scpi_ops = get_scpi_ops();
 	if (!scpi_ops)
 		return -EIO;
 
-	return bL_cpufreq_register(&scpi_cpufreq_ops);
+	ret = cpufreq_register_driver(&scpi_cpufreq_driver);
+	if (ret)
+		dev_err(&pdev->dev, "%s: registering cpufreq failed, err: %d\n",
+			__func__, ret);
+	return ret;
 }
 
 static int scpi_cpufreq_remove(struct platform_device *pdev)
 {
-	bL_cpufreq_unregister(&scpi_cpufreq_ops);
+	cpufreq_unregister_driver(&scpi_cpufreq_driver);
 	scpi_ops = NULL;
 	return 0;
 }
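Note how scpi_cpufreq_set_target() deliberately reads the rate back after clk_set_rate(): the SCP firmware may quietly round a request, and cpufreq must not report a frequency the hardware is not actually running at. The idiom on its own (a sketch):

	#include <linux/clk.h>

	static int set_rate_checked(struct clk *clk, unsigned long rate)
	{
		int ret = clk_set_rate(clk, rate);

		/* Accept only an exact match; a rounded rate becomes -EIO. */
		if (!ret && clk_get_rate(clk) != rate)
			ret = -EIO;

		return ret;
	}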
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
index 923317f03b4b..a099b7bf74cd 100644
--- a/drivers/cpufreq/ti-cpufreq.c
+++ b/drivers/cpufreq/ti-cpufreq.c
@@ -17,6 +17,7 @@
 #include <linux/cpu.h>
 #include <linux/io.h>
 #include <linux/mfd/syscon.h>
+#include <linux/module.h>
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/of_platform.h>
@@ -50,6 +51,7 @@ struct ti_cpufreq_soc_data {
 	unsigned long efuse_mask;
 	unsigned long efuse_shift;
 	unsigned long rev_offset;
+	bool multi_regulator;
 };
 
 struct ti_cpufreq_data {
@@ -57,6 +59,7 @@ struct ti_cpufreq_data {
 	struct device_node *opp_node;
 	struct regmap *syscon;
 	const struct ti_cpufreq_soc_data *soc_data;
+	struct opp_table *opp_table;
 };
 
 static unsigned long amx3_efuse_xlate(struct ti_cpufreq_data *opp_data,
@@ -95,6 +98,7 @@ static struct ti_cpufreq_soc_data am3x_soc_data = {
 	.efuse_offset = 0x07fc,
 	.efuse_mask = 0x1fff,
 	.rev_offset = 0x600,
+	.multi_regulator = false,
 };
 
 static struct ti_cpufreq_soc_data am4x_soc_data = {
@@ -103,6 +107,7 @@ static struct ti_cpufreq_soc_data am4x_soc_data = {
 	.efuse_offset = 0x0610,
 	.efuse_mask = 0x3f,
 	.rev_offset = 0x600,
+	.multi_regulator = false,
 };
 
 static struct ti_cpufreq_soc_data dra7_soc_data = {
@@ -111,6 +116,7 @@ static struct ti_cpufreq_soc_data dra7_soc_data = {
 	.efuse_mask = 0xf80000,
 	.efuse_shift = 19,
 	.rev_offset = 0x204,
+	.multi_regulator = true,
 };
 
 /**
@@ -195,12 +201,14 @@ static const struct of_device_id ti_cpufreq_of_match[] = {
 	{},
 };
 
-static int ti_cpufreq_init(void)
+static int ti_cpufreq_probe(struct platform_device *pdev)
 {
 	u32 version[VERSION_COUNT];
 	struct device_node *np;
 	const struct of_device_id *match;
+	struct opp_table *ti_opp_table;
 	struct ti_cpufreq_data *opp_data;
+	const char * const reg_names[] = {"vdd", "vbb"};
 	int ret;
 
 	np = of_find_node_by_path("/");
@@ -247,16 +255,29 @@ static int ti_cpufreq_init(void)
 	if (ret)
 		goto fail_put_node;
 
-	ret = PTR_ERR_OR_ZERO(dev_pm_opp_set_supported_hw(opp_data->cpu_dev,
-							  version, VERSION_COUNT));
-	if (ret) {
+	ti_opp_table = dev_pm_opp_set_supported_hw(opp_data->cpu_dev,
+						   version, VERSION_COUNT);
+	if (IS_ERR(ti_opp_table)) {
 		dev_err(opp_data->cpu_dev,
 			"Failed to set supported hardware\n");
+		ret = PTR_ERR(ti_opp_table);
 		goto fail_put_node;
 	}
 
-	of_node_put(opp_data->opp_node);
+	opp_data->opp_table = ti_opp_table;
+
+	if (opp_data->soc_data->multi_regulator) {
+		ti_opp_table = dev_pm_opp_set_regulators(opp_data->cpu_dev,
+							 reg_names,
+							 ARRAY_SIZE(reg_names));
+		if (IS_ERR(ti_opp_table)) {
+			dev_pm_opp_put_supported_hw(opp_data->opp_table);
+			ret = PTR_ERR(ti_opp_table);
+			goto fail_put_node;
+		}
+	}
 
+	of_node_put(opp_data->opp_node);
 register_cpufreq_dt:
 	platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
 
@@ -269,4 +290,22 @@ free_opp_data:
 
 	return ret;
 }
-device_initcall(ti_cpufreq_init);
+
+static int ti_cpufreq_init(void)
+{
+	platform_device_register_simple("ti-cpufreq", -1, NULL, 0);
+	return 0;
+}
+module_init(ti_cpufreq_init);
+
+static struct platform_driver ti_cpufreq_driver = {
+	.probe = ti_cpufreq_probe,
+	.driver = {
+		.name = "ti-cpufreq",
+	},
+};
+module_platform_driver(ti_cpufreq_driver);
+
+MODULE_DESCRIPTION("TI CPUFreq/OPP hw-supported driver");
+MODULE_AUTHOR("Dave Gerlach <d-gerlach@ti.com>");
+MODULE_LICENSE("GPL v2");
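With multi_regulator set, the dra7 variant hands both supplies to the OPP core rather than scaling voltages by hand; the "vdd"/"vbb" names come from the new DT binding this series adds. The OPP-core handshake reduced to its essentials (a sketch with error handling trimmed):

	#include <linux/pm_opp.h>

	static int example_hook_regulators(struct device *cpu_dev)
	{
		static const char * const reg_names[] = { "vdd", "vbb" };
		struct opp_table *opp_table;

		opp_table = dev_pm_opp_set_regulators(cpu_dev, reg_names,
						      ARRAY_SIZE(reg_names));
		if (IS_ERR(opp_table))
			return PTR_ERR(opp_table);

		/*
		 * From now on the OPP core adjusts both supplies on every
		 * frequency transition; dev_pm_opp_put_regulators(opp_table)
		 * undoes the registration.
		 */
		return 0;
	}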
diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
index 4e78263e34a4..5d359aff3cc5 100644
--- a/drivers/cpuidle/governor.c
+++ b/drivers/cpuidle/governor.c
@@ -36,14 +36,15 @@ static struct cpuidle_governor * __cpuidle_find_governor(const char *str)
 /**
  * cpuidle_switch_governor - changes the governor
  * @gov: the new target governor
- *
- * NOTE: "gov" can be NULL to specify disabled
  * Must be called with cpuidle_lock acquired.
  */
 int cpuidle_switch_governor(struct cpuidle_governor *gov)
 {
 	struct cpuidle_device *dev;
 
+	if (!gov)
+		return -EINVAL;
+
 	if (gov == cpuidle_curr_governor)
 		return 0;
 
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index 78fb496ecb4e..fe2af6aa88fc 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -737,7 +737,7 @@ struct devfreq *devm_devfreq_add_device(struct device *dev,
 	devfreq = devfreq_add_device(dev, profile, governor_name, data);
 	if (IS_ERR(devfreq)) {
 		devres_free(ptr);
-		return ERR_PTR(-ENOMEM);
+		return devfreq;
 	}
 
 	*ptr = devfreq;
@@ -996,7 +996,8 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
 	if (df->governor == governor) {
 		ret = 0;
 		goto out;
-	} else if (df->governor->immutable || governor->immutable) {
+	} else if ((df->governor && df->governor->immutable) ||
+		   governor->immutable) {
 		ret = -EINVAL;
 		goto out;
 	}
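The devm_devfreq_add_device() fix restores the standard devres wrapper shape: allocate the devres node first, and on failure of the wrapped call free the node and propagate the original ERR_PTR instead of a blanket -ENOMEM. As a generic template (a sketch; the foo_* names are placeholders, not a real API):

	static void devm_foo_release(struct device *dev, void *res)
	{
		foo_put(*(struct foo **)res);	/* placeholder release */
	}

	static struct foo *devm_foo_get(struct device *dev)
	{
		struct foo **ptr, *foo;

		ptr = devres_alloc(devm_foo_release, sizeof(*ptr), GFP_KERNEL);
		if (!ptr)
			return ERR_PTR(-ENOMEM);

		foo = foo_get(dev);		/* placeholder acquire */
		if (IS_ERR(foo)) {
			devres_free(ptr);
			return foo;	/* keep the real error code */
		}

		*ptr = foo;
		devres_add(dev, ptr);
		return foo;
	}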
diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
index 2b2c7db3e480..35c3936edc45 100644
--- a/drivers/dma/sh/rcar-dmac.c
+++ b/drivers/dma/sh/rcar-dmac.c
@@ -1615,22 +1615,6 @@ static struct dma_chan *rcar_dmac_of_xlate(struct of_phandle_args *dma_spec,
1615 * Power management 1615 * Power management
1616 */ 1616 */
1617 1617
1618#ifdef CONFIG_PM_SLEEP
1619static int rcar_dmac_sleep_suspend(struct device *dev)
1620{
1621 /*
1622 * TODO: Wait for the current transfer to complete and stop the device.
1623 */
1624 return 0;
1625}
1626
1627static int rcar_dmac_sleep_resume(struct device *dev)
1628{
1629 /* TODO: Resume transfers, if any. */
1630 return 0;
1631}
1632#endif
1633
1634#ifdef CONFIG_PM 1618#ifdef CONFIG_PM
1635static int rcar_dmac_runtime_suspend(struct device *dev) 1619static int rcar_dmac_runtime_suspend(struct device *dev)
1636{ 1620{
@@ -1646,7 +1630,13 @@ static int rcar_dmac_runtime_resume(struct device *dev)
1646#endif 1630#endif
1647 1631
1648static const struct dev_pm_ops rcar_dmac_pm = { 1632static const struct dev_pm_ops rcar_dmac_pm = {
1649 SET_SYSTEM_SLEEP_PM_OPS(rcar_dmac_sleep_suspend, rcar_dmac_sleep_resume) 1633 /*
1634 * TODO for system sleep/resume:
1635 * - Wait for the current transfer to complete and stop the device,
1636 * - Resume transfers, if any.
1637 */
1638 SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
1639 pm_runtime_force_resume)
1650 SET_RUNTIME_PM_OPS(rcar_dmac_runtime_suspend, rcar_dmac_runtime_resume, 1640 SET_RUNTIME_PM_OPS(rcar_dmac_runtime_suspend, rcar_dmac_runtime_resume,
1651 NULL) 1641 NULL)
1652}; 1642};
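
The rcar-dmac change drops the stubbed-out sleep callbacks in favor of reusing the runtime PM path for system sleep, one of the wrapper cleanups mentioned in this pull. A reduced sketch with placeholder foo callbacks:

#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int foo_runtime_suspend(struct device *dev)
{
        /* Put the device into its low-power state. */
        return 0;
}

static int foo_runtime_resume(struct device *dev)
{
        /* Re-initialize the device. */
        return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
        /* Run late so wakeup settings and children are already settled. */
        SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                     pm_runtime_force_resume)
        SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};
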
diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c
index f3f4f810e5df..bb1c068bff19 100644
--- a/drivers/firmware/psci_checker.c
+++ b/drivers/firmware/psci_checker.c
@@ -77,8 +77,8 @@ static int psci_ops_check(void)
77 return 0; 77 return 0;
78} 78}
79 79
80static int find_clusters(const struct cpumask *cpus, 80static int find_cpu_groups(const struct cpumask *cpus,
81 const struct cpumask **clusters) 81 const struct cpumask **cpu_groups)
82{ 82{
83 unsigned int nb = 0; 83 unsigned int nb = 0;
84 cpumask_var_t tmp; 84 cpumask_var_t tmp;
@@ -88,11 +88,11 @@ static int find_clusters(const struct cpumask *cpus,
88 cpumask_copy(tmp, cpus); 88 cpumask_copy(tmp, cpus);
89 89
90 while (!cpumask_empty(tmp)) { 90 while (!cpumask_empty(tmp)) {
91 const struct cpumask *cluster = 91 const struct cpumask *cpu_group =
92 topology_core_cpumask(cpumask_any(tmp)); 92 topology_core_cpumask(cpumask_any(tmp));
93 93
94 clusters[nb++] = cluster; 94 cpu_groups[nb++] = cpu_group;
95 cpumask_andnot(tmp, tmp, cluster); 95 cpumask_andnot(tmp, tmp, cpu_group);
96 } 96 }
97 97
98 free_cpumask_var(tmp); 98 free_cpumask_var(tmp);
@@ -170,24 +170,24 @@ static int hotplug_tests(void)
170{ 170{
171 int err; 171 int err;
172 cpumask_var_t offlined_cpus; 172 cpumask_var_t offlined_cpus;
173 int i, nb_cluster; 173 int i, nb_cpu_group;
174 const struct cpumask **clusters; 174 const struct cpumask **cpu_groups;
175 char *page_buf; 175 char *page_buf;
176 176
177 err = -ENOMEM; 177 err = -ENOMEM;
178 if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL)) 178 if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL))
179 return err; 179 return err;
180 /* We may have up to nb_available_cpus clusters. */ 180 /* We may have up to nb_available_cpus cpu_groups. */
181 clusters = kmalloc_array(nb_available_cpus, sizeof(*clusters), 181 cpu_groups = kmalloc_array(nb_available_cpus, sizeof(*cpu_groups),
182 GFP_KERNEL); 182 GFP_KERNEL);
183 if (!clusters) 183 if (!cpu_groups)
184 goto out_free_cpus; 184 goto out_free_cpus;
185 page_buf = (char *)__get_free_page(GFP_KERNEL); 185 page_buf = (char *)__get_free_page(GFP_KERNEL);
186 if (!page_buf) 186 if (!page_buf)
187 goto out_free_clusters; 187 goto out_free_cpu_groups;
188 188
189 err = 0; 189 err = 0;
190 nb_cluster = find_clusters(cpu_online_mask, clusters); 190 nb_cpu_group = find_cpu_groups(cpu_online_mask, cpu_groups);
191 191
192 /* 192 /*
193 * Of course the last CPU cannot be powered down and cpu_down() should 193 * Of course the last CPU cannot be powered down and cpu_down() should
@@ -197,24 +197,22 @@ static int hotplug_tests(void)
197 err += down_and_up_cpus(cpu_online_mask, offlined_cpus); 197 err += down_and_up_cpus(cpu_online_mask, offlined_cpus);
198 198
199 /* 199 /*
200 * Take down CPUs by cluster this time. When the last CPU is turned 200 * Take down CPUs by cpu group this time. When the last CPU is turned
201 * off, the cluster itself should shut down. 201 * off, the cpu group itself should shut down.
202 */ 202 */
203 for (i = 0; i < nb_cluster; ++i) { 203 for (i = 0; i < nb_cpu_group; ++i) {
204 int cluster_id =
205 topology_physical_package_id(cpumask_any(clusters[i]));
206 ssize_t len = cpumap_print_to_pagebuf(true, page_buf, 204 ssize_t len = cpumap_print_to_pagebuf(true, page_buf,
207 clusters[i]); 205 cpu_groups[i]);
208 /* Remove trailing newline. */ 206 /* Remove trailing newline. */
209 page_buf[len - 1] = '\0'; 207 page_buf[len - 1] = '\0';
210 pr_info("Trying to turn off and on again cluster %d " 208 pr_info("Trying to turn off and on again group %d (CPUs %s)\n",
211 "(CPUs %s)\n", cluster_id, page_buf); 209 i, page_buf);
212 err += down_and_up_cpus(clusters[i], offlined_cpus); 210 err += down_and_up_cpus(cpu_groups[i], offlined_cpus);
213 } 211 }
214 212
215 free_page((unsigned long)page_buf); 213 free_page((unsigned long)page_buf);
216out_free_clusters: 214out_free_cpu_groups:
217 kfree(clusters); 215 kfree(cpu_groups);
218out_free_cpus: 216out_free_cpus:
219 free_cpumask_var(offlined_cpus); 217 free_cpumask_var(offlined_cpus);
220 return err; 218 return err;
diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
index 21bf619a86c5..9fee4c054d3d 100644
--- a/drivers/i2c/busses/i2c-designware-core.h
+++ b/drivers/i2c/busses/i2c-designware-core.h
@@ -280,8 +280,6 @@ struct dw_i2c_dev {
280 int (*acquire_lock)(struct dw_i2c_dev *dev); 280 int (*acquire_lock)(struct dw_i2c_dev *dev);
281 void (*release_lock)(struct dw_i2c_dev *dev); 281 void (*release_lock)(struct dw_i2c_dev *dev);
282 bool pm_disabled; 282 bool pm_disabled;
283 bool suspended;
284 bool skip_resume;
285 void (*disable)(struct dw_i2c_dev *dev); 283 void (*disable)(struct dw_i2c_dev *dev);
286 void (*disable_int)(struct dw_i2c_dev *dev); 284 void (*disable_int)(struct dw_i2c_dev *dev);
287 int (*init)(struct dw_i2c_dev *dev); 285 int (*init)(struct dw_i2c_dev *dev);
diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
index 58add69a441c..153b947702c5 100644
--- a/drivers/i2c/busses/i2c-designware-platdrv.c
+++ b/drivers/i2c/busses/i2c-designware-platdrv.c
@@ -42,6 +42,7 @@
42#include <linux/reset.h> 42#include <linux/reset.h>
43#include <linux/sched.h> 43#include <linux/sched.h>
44#include <linux/slab.h> 44#include <linux/slab.h>
45#include <linux/suspend.h>
45 46
46#include "i2c-designware-core.h" 47#include "i2c-designware-core.h"
47 48
@@ -372,6 +373,11 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
372 ACPI_COMPANION_SET(&adap->dev, ACPI_COMPANION(&pdev->dev)); 373 ACPI_COMPANION_SET(&adap->dev, ACPI_COMPANION(&pdev->dev));
373 adap->dev.of_node = pdev->dev.of_node; 374 adap->dev.of_node = pdev->dev.of_node;
374 375
376 dev_pm_set_driver_flags(&pdev->dev,
377 DPM_FLAG_SMART_PREPARE |
378 DPM_FLAG_SMART_SUSPEND |
379 DPM_FLAG_LEAVE_SUSPENDED);
380
375 /* The code below assumes runtime PM to be disabled. */ 381 /* The code below assumes runtime PM to be disabled. */
376 WARN_ON(pm_runtime_enabled(&pdev->dev)); 382 WARN_ON(pm_runtime_enabled(&pdev->dev));
377 383
@@ -435,12 +441,24 @@ MODULE_DEVICE_TABLE(of, dw_i2c_of_match);
435#ifdef CONFIG_PM_SLEEP 441#ifdef CONFIG_PM_SLEEP
436static int dw_i2c_plat_prepare(struct device *dev) 442static int dw_i2c_plat_prepare(struct device *dev)
437{ 443{
438 return pm_runtime_suspended(dev); 444 /*
445 * If the ACPI companion device object is present for this device, it
446 * may be accessed during suspend and resume of other devices via I2C
447 * operation regions, so tell the PM core and middle layers to avoid
448 * skipping system suspend/resume callbacks for it in that case.
449 */
450 return !has_acpi_companion(dev);
439} 451}
440 452
441static void dw_i2c_plat_complete(struct device *dev) 453static void dw_i2c_plat_complete(struct device *dev)
442{ 454{
443 if (dev->power.direct_complete) 455 /*
456 * The device can only be in runtime suspend at this point if it has not
457 * been resumed throughout the ending system suspend/resume cycle, so if
458 * the platform firmware might have messed with it, request the runtime PM
459 * framework to resume it.
460 */
461 if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
444 pm_request_resume(dev); 462 pm_request_resume(dev);
445} 463}
446#else 464#else
@@ -453,16 +471,9 @@ static int dw_i2c_plat_suspend(struct device *dev)
453{ 471{
454 struct dw_i2c_dev *i_dev = dev_get_drvdata(dev); 472 struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
455 473
456 if (i_dev->suspended) {
457 i_dev->skip_resume = true;
458 return 0;
459 }
460
461 i_dev->disable(i_dev); 474 i_dev->disable(i_dev);
462 i2c_dw_plat_prepare_clk(i_dev, false); 475 i2c_dw_plat_prepare_clk(i_dev, false);
463 476
464 i_dev->suspended = true;
465
466 return 0; 477 return 0;
467} 478}
468 479
@@ -470,19 +481,9 @@ static int dw_i2c_plat_resume(struct device *dev)
470{ 481{
471 struct dw_i2c_dev *i_dev = dev_get_drvdata(dev); 482 struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
472 483
473 if (!i_dev->suspended)
474 return 0;
475
476 if (i_dev->skip_resume) {
477 i_dev->skip_resume = false;
478 return 0;
479 }
480
481 i2c_dw_plat_prepare_clk(i_dev, true); 484 i2c_dw_plat_prepare_clk(i_dev, true);
482 i_dev->init(i_dev); 485 i_dev->init(i_dev);
483 486
484 i_dev->suspended = false;
485
486 return 0; 487 return 0;
487} 488}
488 489
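
Together, the designware changes exercise the new driver-flags contract: flags set at probe time, a ->prepare() whose positive return opts a runtime-suspended device into having its callbacks skipped, and a ->complete() that only resumes the device when firmware may have touched it. A condensed sketch of the same shape for a hypothetical driver:

#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/suspend.h>

static int foo_probe(struct platform_device *pdev)
{
        dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_SMART_PREPARE |
                                            DPM_FLAG_SMART_SUSPEND |
                                            DPM_FLAG_LEAVE_SUSPENDED);
        return 0;
}

static int foo_prepare(struct device *dev)
{
        /*
         * > 0: the runtime-suspended state may be carried through the
         * whole cycle; 0: the device may be needed mid-suspend, so run
         * the suspend/resume callbacks as usual.
         */
        return !has_acpi_companion(dev);
}

static void foo_complete(struct device *dev)
{
        /*
         * Still runtime-suspended here means the device stayed suspended
         * for the whole cycle; resume it if firmware took part in resume.
         */
        if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
                pm_request_resume(dev);
}
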
diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
index 0e0ab9bb1530..9e545eb6e8b4 100644
--- a/drivers/mfd/intel-lpss.c
+++ b/drivers/mfd/intel-lpss.c
@@ -450,6 +450,8 @@ int intel_lpss_probe(struct device *dev,
450 if (ret) 450 if (ret)
451 goto err_remove_ltr; 451 goto err_remove_ltr;
452 452
453 dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND);
454
453 return 0; 455 return 0;
454 456
455err_remove_ltr: 457err_remove_ltr:
@@ -478,7 +480,9 @@ EXPORT_SYMBOL_GPL(intel_lpss_remove);
478 480
479static int resume_lpss_device(struct device *dev, void *data) 481static int resume_lpss_device(struct device *dev, void *data)
480{ 482{
481 pm_runtime_resume(dev); 483 if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
484 pm_runtime_resume(dev);
485
482 return 0; 486 return 0;
483} 487}
484 488
diff --git a/drivers/opp/Makefile b/drivers/opp/Makefile
index e70ceb406fe9..6ce6aefacc81 100644
--- a/drivers/opp/Makefile
+++ b/drivers/opp/Makefile
@@ -2,3 +2,4 @@ ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
2obj-y += core.o cpu.o 2obj-y += core.o cpu.o
3obj-$(CONFIG_OF) += of.o 3obj-$(CONFIG_OF) += of.o
4obj-$(CONFIG_DEBUG_FS) += debugfs.o 4obj-$(CONFIG_DEBUG_FS) += debugfs.o
5obj-$(CONFIG_ARM_TI_CPUFREQ) += ti-opp-supply.o
diff --git a/drivers/opp/ti-opp-supply.c b/drivers/opp/ti-opp-supply.c
new file mode 100644
index 000000000000..370eff3acd8a
--- /dev/null
+++ b/drivers/opp/ti-opp-supply.c
@@ -0,0 +1,425 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * Copyright (C) 2016-2017 Texas Instruments Incorporated - http://www.ti.com/
4 * Nishanth Menon <nm@ti.com>
5 * Dave Gerlach <d-gerlach@ti.com>
6 *
7 * TI OPP supply driver that provides an override of the regulator control
8 * for the generic OPP core, to handle devices with an ABB regulator and/or
9 * SmartReflex Class0.
10 */
11#include <linux/clk.h>
12#include <linux/cpufreq.h>
13#include <linux/device.h>
14#include <linux/io.h>
15#include <linux/module.h>
16#include <linux/notifier.h>
17#include <linux/of_device.h>
18#include <linux/of.h>
19#include <linux/platform_device.h>
20#include <linux/pm_opp.h>
21#include <linux/regulator/consumer.h>
22#include <linux/slab.h>
23
24/**
25 * struct ti_opp_supply_optimum_voltage_table - optimized voltage table
26 * @reference_uv: reference voltage (usually the nominal voltage)
27 * @optimized_uv: Optimized voltage from efuse
28 */
29struct ti_opp_supply_optimum_voltage_table {
30 unsigned int reference_uv;
31 unsigned int optimized_uv;
32};
33
34/**
35 * struct ti_opp_supply_data - OMAP specific opp supply data
36 * @vdd_table: Optimized voltage mapping table
37 * @num_vdd_table: number of entries in vdd_table
38 * @vdd_absolute_max_voltage_uv: absolute maximum voltage in uV for the supply
39 */
40struct ti_opp_supply_data {
41 struct ti_opp_supply_optimum_voltage_table *vdd_table;
42 u32 num_vdd_table;
43 u32 vdd_absolute_max_voltage_uv;
44};
45
46static struct ti_opp_supply_data opp_data;
47
48/**
49 * struct ti_opp_supply_of_data - device tree match data
50 * @flags: specific type of opp supply
51 * @efuse_voltage_mask: mask required for efuse register representing voltage
52 * @efuse_voltage_uv: Are the efuse entries in micro-volts? If not, assume
53 * milli-volts.
54 */
55struct ti_opp_supply_of_data {
56#define OPPDM_EFUSE_CLASS0_OPTIMIZED_VOLTAGE BIT(1)
57#define OPPDM_HAS_NO_ABB BIT(2)
58 const u8 flags;
59 const u32 efuse_voltage_mask;
60 const bool efuse_voltage_uv;
61};
62
63/**
64 * _store_optimized_voltages() - store optimized voltages
65 * @dev: ti opp supply device for which we need to store info
66 * @data: data specific to the device
67 *
68 * Picks up the efuse-based optimized voltages for VDD (unique per device)
69 * and stores them in an internal data structure for transition requests.
70 *
71 * Return: If successful, 0, else appropriate error value.
72 */
73static int _store_optimized_voltages(struct device *dev,
74 struct ti_opp_supply_data *data)
75{
76 void __iomem *base;
77 struct property *prop;
78 struct resource *res;
79 const __be32 *val;
80 int proplen, i;
81 int ret = 0;
82 struct ti_opp_supply_optimum_voltage_table *table;
83 const struct ti_opp_supply_of_data *of_data = dev_get_drvdata(dev);
84
85 /* pick up Efuse based voltages */
86 res = platform_get_resource(to_platform_device(dev), IORESOURCE_MEM, 0);
87 if (!res) {
88 dev_err(dev, "Unable to get IO resource\n");
89 ret = -ENODEV;
90 goto out_map;
91 }
92
93 base = ioremap_nocache(res->start, resource_size(res));
94 if (!base) {
95 dev_err(dev, "Unable to map Efuse registers\n");
96 ret = -ENOMEM;
97 goto out_map;
98 }
99
100 /* Fetch efuse-settings. */
101 prop = of_find_property(dev->of_node, "ti,efuse-settings", NULL);
102 if (!prop) {
103 dev_err(dev, "No 'ti,efuse-settings' property found\n");
104 ret = -EINVAL;
105 goto out;
106 }
107
108 proplen = prop->length / sizeof(int);
109 data->num_vdd_table = proplen / 2;
110 /* Verify for corrupted OPP entries in dt */
111 if (data->num_vdd_table * 2 * sizeof(int) != prop->length) {
112 dev_err(dev, "Invalid 'ti,efuse-settings'\n");
113 ret = -EINVAL;
114 goto out;
115 }
116
117 ret = of_property_read_u32(dev->of_node, "ti,absolute-max-voltage-uv",
118 &data->vdd_absolute_max_voltage_uv);
119 if (ret) {
120 dev_err(dev, "ti,absolute-max-voltage-uv is missing\n");
121 ret = -EINVAL;
122 goto out;
123 }
124
125 table = kzalloc(sizeof(*data->vdd_table) *
126 data->num_vdd_table, GFP_KERNEL);
127 if (!table) {
128 ret = -ENOMEM;
129 goto out;
130 }
131 data->vdd_table = table;
132
133 val = prop->value;
134 for (i = 0; i < data->num_vdd_table; i++, table++) {
135 u32 efuse_offset;
136 u32 tmp;
137
138 table->reference_uv = be32_to_cpup(val++);
139 efuse_offset = be32_to_cpup(val++);
140
141 tmp = readl(base + efuse_offset);
142 tmp &= of_data->efuse_voltage_mask;
143 tmp >>= __ffs(of_data->efuse_voltage_mask);
144
145 table->optimized_uv = of_data->efuse_voltage_uv ? tmp :
146 tmp * 1000;
147
148 dev_dbg(dev, "[%d] efuse=0x%08x volt_table=%d vset=%d\n",
149 i, efuse_offset, table->reference_uv,
150 table->optimized_uv);
151
152 /*
153 * Some older samples might not have an optimized efuse.
154 * Use the reference voltage for those - just log a debug
155 * message for them.
156 */
157 if (!table->optimized_uv) {
158 dev_dbg(dev, "[%d] efuse=0x%08x volt_table=%d:vset0\n",
159 i, efuse_offset, table->reference_uv);
160 table->optimized_uv = table->reference_uv;
161 }
162 }
163out:
164 iounmap(base);
165out_map:
166 return ret;
167}
168
169/**
170 * _free_optimized_voltages() - free resources for optvoltages
171 * @dev: device for which we need to free info
172 * @data: data specific to the device
173 */
174static void _free_optimized_voltages(struct device *dev,
175 struct ti_opp_supply_data *data)
176{
177 kfree(data->vdd_table);
178 data->vdd_table = NULL;
179 data->num_vdd_table = 0;
180}
181
182/**
183 * _get_optimal_vdd_voltage() - Finds optimal voltage for the supply
184 * @dev: device for which we need to find info
185 * @data: data specific to the device
186 * @reference_uv: reference voltage (OPP voltage) for which we need value
187 *
188 * Return: the optimized voltage if a match is found, else reference_uv;
189 * reference_uv is also returned if no optimization is needed.
190 */
191static int _get_optimal_vdd_voltage(struct device *dev,
192 struct ti_opp_supply_data *data,
193 int reference_uv)
194{
195 int i;
196 struct ti_opp_supply_optimum_voltage_table *table;
197
198 if (!data->num_vdd_table)
199 return reference_uv;
200
201 table = data->vdd_table;
202 if (!table)
203 return -EINVAL;
204
205 /* Find an exact match - this list is usually very small */
206 for (i = 0; i < data->num_vdd_table; i++, table++)
207 if (table->reference_uv == reference_uv)
208 return table->optimized_uv;
209
210 /* If things are screwed up, we'd make a mess on the console - ratelimit */
211 dev_err_ratelimited(dev, "%s: Failed optimized voltage match for %d\n",
212 __func__, reference_uv);
213 return reference_uv;
214}
215
216static int _opp_set_voltage(struct device *dev,
217 struct dev_pm_opp_supply *supply,
218 int new_target_uv, struct regulator *reg,
219 char *reg_name)
220{
221 int ret;
222 unsigned long vdd_uv, uv_max;
223
224 if (new_target_uv)
225 vdd_uv = new_target_uv;
226 else
227 vdd_uv = supply->u_volt;
228
229 /*
230 * If we do have an absolute max voltage specified, then we should
231 * use that voltage instead to allow for cases where the voltage rails
232 * are ganged (example: if we set the max for an OPP as 1.12 V and
233 * the absolute max is 1.5 V, another rail cannot reach 1.25 V while
234 * the regulator is constrained to a max of 1.12 V, even though it
235 * can function at 1.25 V).
236 */
237 if (opp_data.vdd_absolute_max_voltage_uv)
238 uv_max = opp_data.vdd_absolute_max_voltage_uv;
239 else
240 uv_max = supply->u_volt_max;
241
242 if (vdd_uv > uv_max ||
243 vdd_uv < supply->u_volt_min ||
244 supply->u_volt_min > uv_max) {
245 dev_warn(dev,
246 "Invalid range voltages [Min:%lu target:%lu Max:%lu]\n",
247 supply->u_volt_min, vdd_uv, uv_max);
248 return -EINVAL;
249 }
250
251 dev_dbg(dev, "%s scaling to %luuV[min %luuV max %luuV]\n", reg_name,
252 vdd_uv, supply->u_volt_min,
253 uv_max);
254
255 ret = regulator_set_voltage_triplet(reg,
256 supply->u_volt_min,
257 vdd_uv,
258 uv_max);
259 if (ret) {
260 dev_err(dev, "%s failed for %luuV[min %luuV max %luuV]\n",
261 reg_name, vdd_uv, supply->u_volt_min,
262 uv_max);
263 return ret;
264 }
265
266 return 0;
267}
268
269/**
270 * ti_opp_supply_set_opp() - do the opp supply transition
271 * @data: information on regulators and new and old opps provided by
272 * opp core to use in transition
273 *
274 * Return: If successful, 0, else appropriate error value.
275 */
276static int ti_opp_supply_set_opp(struct dev_pm_set_opp_data *data)
277{
278 struct dev_pm_opp_supply *old_supply_vdd = &data->old_opp.supplies[0];
279 struct dev_pm_opp_supply *old_supply_vbb = &data->old_opp.supplies[1];
280 struct dev_pm_opp_supply *new_supply_vdd = &data->new_opp.supplies[0];
281 struct dev_pm_opp_supply *new_supply_vbb = &data->new_opp.supplies[1];
282 struct device *dev = data->dev;
283 unsigned long old_freq = data->old_opp.rate, freq = data->new_opp.rate;
284 struct clk *clk = data->clk;
285 struct regulator *vdd_reg = data->regulators[0];
286 struct regulator *vbb_reg = data->regulators[1];
287 int vdd_uv;
288 int ret;
289
290 vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data,
291 new_supply_vbb->u_volt);
292
293 /* Scaling up? Scale voltage before frequency */
294 if (freq > old_freq) {
295 ret = _opp_set_voltage(dev, new_supply_vdd, vdd_uv, vdd_reg,
296 "vdd");
297 if (ret)
298 goto restore_voltage;
299
300 ret = _opp_set_voltage(dev, new_supply_vbb, 0, vbb_reg, "vbb");
301 if (ret)
302 goto restore_voltage;
303 }
304
305 /* Change frequency */
306 dev_dbg(dev, "%s: switching OPP: %lu Hz --> %lu Hz\n",
307 __func__, old_freq, freq);
308
309 ret = clk_set_rate(clk, freq);
310 if (ret) {
311 dev_err(dev, "%s: failed to set clock rate: %d\n", __func__,
312 ret);
313 goto restore_voltage;
314 }
315
316 /* Scaling down? Scale voltage after frequency */
317 if (freq < old_freq) {
318 ret = _opp_set_voltage(dev, new_supply_vbb, 0, vbb_reg, "vbb");
319 if (ret)
320 goto restore_freq;
321
322 ret = _opp_set_voltage(dev, new_supply_vdd, vdd_uv, vdd_reg,
323 "vdd");
324 if (ret)
325 goto restore_freq;
326 }
327
328 return 0;
329
330restore_freq:
331 ret = clk_set_rate(clk, old_freq);
332 if (ret)
333 dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n",
334 __func__, old_freq);
335restore_voltage:
336 /* This shouldn't harm even if the voltages weren't updated earlier */
337 if (old_supply_vdd->u_volt) {
338 ret = _opp_set_voltage(dev, old_supply_vbb, 0, vbb_reg, "vbb");
339 if (ret)
340 return ret;
341
342 ret = _opp_set_voltage(dev, old_supply_vdd, 0, vdd_reg,
343 "vdd");
344 if (ret)
345 return ret;
346 }
347
348 return ret;
349}
350
351static const struct ti_opp_supply_of_data omap_generic_of_data = {
352};
353
354static const struct ti_opp_supply_of_data omap_omap5_of_data = {
355 .flags = OPPDM_EFUSE_CLASS0_OPTIMIZED_VOLTAGE,
356 .efuse_voltage_mask = 0xFFF,
357 .efuse_voltage_uv = false,
358};
359
360static const struct ti_opp_supply_of_data omap_omap5core_of_data = {
361 .flags = OPPDM_EFUSE_CLASS0_OPTIMIZED_VOLTAGE | OPPDM_HAS_NO_ABB,
362 .efuse_voltage_mask = 0xFFF,
363 .efuse_voltage_uv = false,
364};
365
366static const struct of_device_id ti_opp_supply_of_match[] = {
367 {.compatible = "ti,omap-opp-supply", .data = &omap_generic_of_data},
368 {.compatible = "ti,omap5-opp-supply", .data = &omap_omap5_of_data},
369 {.compatible = "ti,omap5-core-opp-supply",
370 .data = &omap_omap5core_of_data},
371 {},
372};
373MODULE_DEVICE_TABLE(of, ti_opp_supply_of_match);
374
375static int ti_opp_supply_probe(struct platform_device *pdev)
376{
377 struct device *dev = &pdev->dev;
378 struct device *cpu_dev = get_cpu_device(0);
379 const struct of_device_id *match;
380 const struct ti_opp_supply_of_data *of_data;
381 int ret = 0;
382
383 match = of_match_device(ti_opp_supply_of_match, dev);
384 if (!match) {
385 /* We do not expect this to happen */
386 dev_err(dev, "%s: Unable to match device\n", __func__);
387 return -ENODEV;
388 }
389 if (!match->data) {
390 /* Again, unlikely.. but mistakes do happen */
391 dev_err(dev, "%s: Bad data in match\n", __func__);
392 return -EINVAL;
393 }
394 of_data = match->data;
395
396 dev_set_drvdata(dev, (void *)of_data);
397
398 /* If we need optimized voltage */
399 if (of_data->flags & OPPDM_EFUSE_CLASS0_OPTIMIZED_VOLTAGE) {
400 ret = _store_optimized_voltages(dev, &opp_data);
401 if (ret)
402 return ret;
403 }
404
405 ret = PTR_ERR_OR_ZERO(dev_pm_opp_register_set_opp_helper(cpu_dev,
406 ti_opp_supply_set_opp));
407 if (ret)
408 _free_optimized_voltages(dev, &opp_data);
409
410 return ret;
411}
412
413static struct platform_driver ti_opp_supply_driver = {
414 .probe = ti_opp_supply_probe,
415 .driver = {
416 .name = "ti_opp_supply",
417 .owner = THIS_MODULE,
418 .of_match_table = of_match_ptr(ti_opp_supply_of_match),
419 },
420};
421module_platform_driver(ti_opp_supply_driver);
422
423MODULE_DESCRIPTION("Texas Instruments OMAP OPP Supply driver");
424MODULE_AUTHOR("Texas Instruments Inc.");
425MODULE_LICENSE("GPL v2");
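
For reference, the mask-and-shift decode in _store_optimized_voltages() reduces to the following; the mask and raw value here are invented for illustration (the OMAP5 match data above uses efuse_voltage_mask = 0xFFF with millivolt entries):

#include <linux/bitops.h>
#include <linux/types.h>

/* Hypothetical field: 12 bits wide, starting at bit 8. */
static unsigned int decode_efuse_uv(u32 raw)
{
        const u32 mask = 0x000FFF00;
        u32 val = (raw & mask) >> __ffs(mask);  /* __ffs(mask) == 8 */

        /*
         * Example: raw = 0x00A50C80 -> val = 0x50C = 1292, i.e. 1292 mV,
         * scaled to 1292000 uV because efuse_voltage_uv is false here.
         */
        return val * 1000;
}
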
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 14fd865a5120..5958c8dda4e3 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -699,7 +699,7 @@ static void pci_pm_complete(struct device *dev)
699 pm_generic_complete(dev); 699 pm_generic_complete(dev);
700 700
701 /* Resume device if platform firmware has put it in reset-power-on */ 701 /* Resume device if platform firmware has put it in reset-power-on */
702 if (dev->power.direct_complete && pm_resume_via_firmware()) { 702 if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) {
703 pci_power_t pre_sleep_state = pci_dev->current_state; 703 pci_power_t pre_sleep_state = pci_dev->current_state;
704 704
705 pci_update_current_state(pci_dev, pci_dev->current_state); 705 pci_update_current_state(pci_dev, pci_dev->current_state);
@@ -783,8 +783,10 @@ static int pci_pm_suspend_noirq(struct device *dev)
783 struct pci_dev *pci_dev = to_pci_dev(dev); 783 struct pci_dev *pci_dev = to_pci_dev(dev);
784 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 784 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
785 785
786 if (dev_pm_smart_suspend_and_suspended(dev)) 786 if (dev_pm_smart_suspend_and_suspended(dev)) {
787 dev->power.may_skip_resume = true;
787 return 0; 788 return 0;
789 }
788 790
789 if (pci_has_legacy_pm_support(pci_dev)) 791 if (pci_has_legacy_pm_support(pci_dev))
790 return pci_legacy_suspend_late(dev, PMSG_SUSPEND); 792 return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
@@ -838,6 +840,16 @@ static int pci_pm_suspend_noirq(struct device *dev)
838Fixup: 840Fixup:
839 pci_fixup_device(pci_fixup_suspend_late, pci_dev); 841 pci_fixup_device(pci_fixup_suspend_late, pci_dev);
840 842
843 /*
844 * If the target system sleep state is suspend-to-idle, it is sufficient
845 * to check whether or not the device's wakeup settings are good for
846 * runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
847 * pci_pm_complete() to take care of fixing up the device's state
848 * anyway, if need be.
849 */
850 dev->power.may_skip_resume = device_may_wakeup(dev) ||
851 !device_can_wakeup(dev);
852
841 return 0; 853 return 0;
842} 854}
843 855
@@ -847,6 +859,9 @@ static int pci_pm_resume_noirq(struct device *dev)
847 struct device_driver *drv = dev->driver; 859 struct device_driver *drv = dev->driver;
848 int error = 0; 860 int error = 0;
849 861
862 if (dev_pm_may_skip_resume(dev))
863 return 0;
864
850 /* 865 /*
851 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend 866 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
852 * during system suspend, so update their runtime PM status to "active" 867 * during system suspend, so update their runtime PM status to "active"
@@ -953,7 +968,7 @@ static int pci_pm_freeze_late(struct device *dev)
953 if (dev_pm_smart_suspend_and_suspended(dev)) 968 if (dev_pm_smart_suspend_and_suspended(dev))
954 return 0; 969 return 0;
955 970
956 return pm_generic_freeze_late(dev);; 971 return pm_generic_freeze_late(dev);
957} 972}
958 973
959static int pci_pm_freeze_noirq(struct device *dev) 974static int pci_pm_freeze_noirq(struct device *dev)
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index ffbf4e723527..fb1c1bb87316 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -150,6 +150,9 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
150 150
151 pci_save_state(dev); 151 pci_save_state(dev);
152 152
153 dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_SMART_SUSPEND |
154 DPM_FLAG_LEAVE_SUSPENDED);
155
153 if (pci_bridge_d3_possible(dev)) { 156 if (pci_bridge_d3_possible(dev)) {
154 /* 157 /*
155 * Keep the port resumed 100ms to make sure things like 158 * Keep the port resumed 100ms to make sure things like
diff --git a/drivers/platform/x86/surfacepro3_button.c b/drivers/platform/x86/surfacepro3_button.c
index 6505c97705e1..1b491690ce07 100644
--- a/drivers/platform/x86/surfacepro3_button.c
+++ b/drivers/platform/x86/surfacepro3_button.c
@@ -119,7 +119,7 @@ static void surface_button_notify(struct acpi_device *device, u32 event)
119 if (key_code == KEY_RESERVED) 119 if (key_code == KEY_RESERVED)
120 return; 120 return;
121 if (pressed) 121 if (pressed)
122 pm_wakeup_event(&device->dev, 0); 122 pm_wakeup_dev_event(&device->dev, 0, button->suspended);
123 if (button->suspended) 123 if (button->suspended)
124 return; 124 return;
125 input_report_key(input, key_code, pressed?1:0); 125 input_report_key(input, key_code, pressed?1:0);
@@ -185,6 +185,8 @@ static int surface_button_add(struct acpi_device *device)
185 error = input_register_device(input); 185 error = input_register_device(input);
186 if (error) 186 if (error)
187 goto err_free_input; 187 goto err_free_input;
188
189 device_init_wakeup(&device->dev, true);
188 dev_info(&device->dev, 190 dev_info(&device->dev,
189 "%s [%s]\n", name, acpi_device_bid(device)); 191 "%s [%s]\n", name, acpi_device_bid(device));
190 return 0; 192 return 0;
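
The surface button change is a matched pair: declare the device wakeup-capable at probe, then report presses with the three-argument pm_wakeup_dev_event(), whose final "hard" flag aborts suspend-to-idle when set. Sketched with hypothetical foo names:

#include <linux/acpi.h>
#include <linux/pm_wakeup.h>

static bool foo_suspended;      /* tracked by the driver's PM callbacks */

static int foo_add(struct acpi_device *adev)
{
        /* Declare the device wakeup-capable once it is set up. */
        device_init_wakeup(&adev->dev, true);
        return 0;
}

static void foo_notify(struct acpi_device *adev, u32 event)
{
        /*
         * Report a wakeup event; the third ("hard") argument aborts
         * suspend-to-idle while suspended, when the input event itself
         * would otherwise be discarded.
         */
        pm_wakeup_dev_event(&adev->dev, 0, foo_suspended);
}
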
diff --git a/drivers/power/avs/rockchip-io-domain.c b/drivers/power/avs/rockchip-io-domain.c
index 75f63e38a8d1..ed2b109ae8fc 100644
--- a/drivers/power/avs/rockchip-io-domain.c
+++ b/drivers/power/avs/rockchip-io-domain.c
@@ -76,7 +76,7 @@ struct rockchip_iodomain_supply {
76struct rockchip_iodomain { 76struct rockchip_iodomain {
77 struct device *dev; 77 struct device *dev;
78 struct regmap *grf; 78 struct regmap *grf;
79 struct rockchip_iodomain_soc_data *soc_data; 79 const struct rockchip_iodomain_soc_data *soc_data;
80 struct rockchip_iodomain_supply supplies[MAX_SUPPLIES]; 80 struct rockchip_iodomain_supply supplies[MAX_SUPPLIES];
81}; 81};
82 82
@@ -382,43 +382,43 @@ static const struct rockchip_iodomain_soc_data soc_data_rv1108_pmu = {
382static const struct of_device_id rockchip_iodomain_match[] = { 382static const struct of_device_id rockchip_iodomain_match[] = {
383 { 383 {
384 .compatible = "rockchip,rk3188-io-voltage-domain", 384 .compatible = "rockchip,rk3188-io-voltage-domain",
385 .data = (void *)&soc_data_rk3188 385 .data = &soc_data_rk3188
386 }, 386 },
387 { 387 {
388 .compatible = "rockchip,rk3228-io-voltage-domain", 388 .compatible = "rockchip,rk3228-io-voltage-domain",
389 .data = (void *)&soc_data_rk3228 389 .data = &soc_data_rk3228
390 }, 390 },
391 { 391 {
392 .compatible = "rockchip,rk3288-io-voltage-domain", 392 .compatible = "rockchip,rk3288-io-voltage-domain",
393 .data = (void *)&soc_data_rk3288 393 .data = &soc_data_rk3288
394 }, 394 },
395 { 395 {
396 .compatible = "rockchip,rk3328-io-voltage-domain", 396 .compatible = "rockchip,rk3328-io-voltage-domain",
397 .data = (void *)&soc_data_rk3328 397 .data = &soc_data_rk3328
398 }, 398 },
399 { 399 {
400 .compatible = "rockchip,rk3368-io-voltage-domain", 400 .compatible = "rockchip,rk3368-io-voltage-domain",
401 .data = (void *)&soc_data_rk3368 401 .data = &soc_data_rk3368
402 }, 402 },
403 { 403 {
404 .compatible = "rockchip,rk3368-pmu-io-voltage-domain", 404 .compatible = "rockchip,rk3368-pmu-io-voltage-domain",
405 .data = (void *)&soc_data_rk3368_pmu 405 .data = &soc_data_rk3368_pmu
406 }, 406 },
407 { 407 {
408 .compatible = "rockchip,rk3399-io-voltage-domain", 408 .compatible = "rockchip,rk3399-io-voltage-domain",
409 .data = (void *)&soc_data_rk3399 409 .data = &soc_data_rk3399
410 }, 410 },
411 { 411 {
412 .compatible = "rockchip,rk3399-pmu-io-voltage-domain", 412 .compatible = "rockchip,rk3399-pmu-io-voltage-domain",
413 .data = (void *)&soc_data_rk3399_pmu 413 .data = &soc_data_rk3399_pmu
414 }, 414 },
415 { 415 {
416 .compatible = "rockchip,rv1108-io-voltage-domain", 416 .compatible = "rockchip,rv1108-io-voltage-domain",
417 .data = (void *)&soc_data_rv1108 417 .data = &soc_data_rv1108
418 }, 418 },
419 { 419 {
420 .compatible = "rockchip,rv1108-pmu-io-voltage-domain", 420 .compatible = "rockchip,rv1108-pmu-io-voltage-domain",
421 .data = (void *)&soc_data_rv1108_pmu 421 .data = &soc_data_rv1108_pmu
422 }, 422 },
423 { /* sentinel */ }, 423 { /* sentinel */ },
424}; 424};
@@ -443,7 +443,7 @@ static int rockchip_iodomain_probe(struct platform_device *pdev)
443 platform_set_drvdata(pdev, iod); 443 platform_set_drvdata(pdev, iod);
444 444
445 match = of_match_node(rockchip_iodomain_match, np); 445 match = of_match_node(rockchip_iodomain_match, np);
446 iod->soc_data = (struct rockchip_iodomain_soc_data *)match->data; 446 iod->soc_data = match->data;
447 447
448 parent = pdev->dev.parent; 448 parent = pdev->dev.parent;
449 if (parent && parent->of_node) { 449 if (parent && parent->of_node) {
diff --git a/drivers/powercap/intel_rapl.c b/drivers/powercap/intel_rapl.c
index d1694f1def72..35636e1d8a3d 100644
--- a/drivers/powercap/intel_rapl.c
+++ b/drivers/powercap/intel_rapl.c
@@ -29,6 +29,7 @@
29#include <linux/sysfs.h> 29#include <linux/sysfs.h>
30#include <linux/cpu.h> 30#include <linux/cpu.h>
31#include <linux/powercap.h> 31#include <linux/powercap.h>
32#include <linux/suspend.h>
32#include <asm/iosf_mbi.h> 33#include <asm/iosf_mbi.h>
33 34
34#include <asm/processor.h> 35#include <asm/processor.h>
@@ -155,6 +156,7 @@ struct rapl_power_limit {
155 int prim_id; /* primitive ID used to enable */ 156 int prim_id; /* primitive ID used to enable */
156 struct rapl_domain *domain; 157 struct rapl_domain *domain;
157 const char *name; 158 const char *name;
159 u64 last_power_limit;
158}; 160};
159 161
160static const char pl1_name[] = "long_term"; 162static const char pl1_name[] = "long_term";
@@ -1209,7 +1211,7 @@ static int rapl_package_register_powercap(struct rapl_package *rp)
1209 struct rapl_domain *rd; 1211 struct rapl_domain *rd;
1210 char dev_name[17]; /* max domain name = 7 + 1 + 8 for int + 1 for null*/ 1212 char dev_name[17]; /* max domain name = 7 + 1 + 8 for int + 1 for null*/
1211 struct powercap_zone *power_zone = NULL; 1213 struct powercap_zone *power_zone = NULL;
1212 int nr_pl, ret;; 1214 int nr_pl, ret;
1213 1215
1214 /* Update the domain data of the new package */ 1216 /* Update the domain data of the new package */
1215 rapl_update_domain_data(rp); 1217 rapl_update_domain_data(rp);
@@ -1533,6 +1535,92 @@ static int rapl_cpu_down_prep(unsigned int cpu)
1533 1535
1534static enum cpuhp_state pcap_rapl_online; 1536static enum cpuhp_state pcap_rapl_online;
1535 1537
1538static void power_limit_state_save(void)
1539{
1540 struct rapl_package *rp;
1541 struct rapl_domain *rd;
1542 int nr_pl, ret, i;
1543
1544 get_online_cpus();
1545 list_for_each_entry(rp, &rapl_packages, plist) {
1546 if (!rp->power_zone)
1547 continue;
1548 rd = power_zone_to_rapl_domain(rp->power_zone);
1549 nr_pl = find_nr_power_limit(rd);
1550 for (i = 0; i < nr_pl; i++) {
1551 switch (rd->rpl[i].prim_id) {
1552 case PL1_ENABLE:
1553 ret = rapl_read_data_raw(rd,
1554 POWER_LIMIT1,
1555 true,
1556 &rd->rpl[i].last_power_limit);
1557 if (ret)
1558 rd->rpl[i].last_power_limit = 0;
1559 break;
1560 case PL2_ENABLE:
1561 ret = rapl_read_data_raw(rd,
1562 POWER_LIMIT2,
1563 true,
1564 &rd->rpl[i].last_power_limit);
1565 if (ret)
1566 rd->rpl[i].last_power_limit = 0;
1567 break;
1568 }
1569 }
1570 }
1571 put_online_cpus();
1572}
1573
1574static void power_limit_state_restore(void)
1575{
1576 struct rapl_package *rp;
1577 struct rapl_domain *rd;
1578 int nr_pl, i;
1579
1580 get_online_cpus();
1581 list_for_each_entry(rp, &rapl_packages, plist) {
1582 if (!rp->power_zone)
1583 continue;
1584 rd = power_zone_to_rapl_domain(rp->power_zone);
1585 nr_pl = find_nr_power_limit(rd);
1586 for (i = 0; i < nr_pl; i++) {
1587 switch (rd->rpl[i].prim_id) {
1588 case PL1_ENABLE:
1589 if (rd->rpl[i].last_power_limit)
1590 rapl_write_data_raw(rd,
1591 POWER_LIMIT1,
1592 rd->rpl[i].last_power_limit);
1593 break;
1594 case PL2_ENABLE:
1595 if (rd->rpl[i].last_power_limit)
1596 rapl_write_data_raw(rd,
1597 POWER_LIMIT2,
1598 rd->rpl[i].last_power_limit);
1599 break;
1600 }
1601 }
1602 }
1603 put_online_cpus();
1604}
1605
1606static int rapl_pm_callback(struct notifier_block *nb,
1607 unsigned long mode, void *_unused)
1608{
1609 switch (mode) {
1610 case PM_SUSPEND_PREPARE:
1611 power_limit_state_save();
1612 break;
1613 case PM_POST_SUSPEND:
1614 power_limit_state_restore();
1615 break;
1616 }
1617 return NOTIFY_OK;
1618}
1619
1620static struct notifier_block rapl_pm_notifier = {
1621 .notifier_call = rapl_pm_callback,
1622};
1623
1536static int __init rapl_init(void) 1624static int __init rapl_init(void)
1537{ 1625{
1538 const struct x86_cpu_id *id; 1626 const struct x86_cpu_id *id;
@@ -1560,8 +1648,16 @@ static int __init rapl_init(void)
1560 1648
1561 /* Don't bail out if PSys is not supported */ 1649 /* Don't bail out if PSys is not supported */
1562 rapl_register_psys(); 1650 rapl_register_psys();
1651
1652 ret = register_pm_notifier(&rapl_pm_notifier);
1653 if (ret)
1654 goto err_unreg_all;
1655
1563 return 0; 1656 return 0;
1564 1657
1658err_unreg_all:
1659 cpuhp_remove_state(pcap_rapl_online);
1660
1565err_unreg: 1661err_unreg:
1566 rapl_unregister_powercap(); 1662 rapl_unregister_powercap();
1567 return ret; 1663 return ret;
@@ -1569,6 +1665,7 @@ err_unreg:
1569 1665
1570static void __exit rapl_exit(void) 1666static void __exit rapl_exit(void)
1571{ 1667{
1668 unregister_pm_notifier(&rapl_pm_notifier);
1572 cpuhp_remove_state(pcap_rapl_online); 1669 cpuhp_remove_state(pcap_rapl_online);
1573 rapl_unregister_powercap(); 1670 rapl_unregister_powercap();
1574} 1671}
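
The RAPL addition is a standard PM-notifier save/restore for state that firmware can clobber across suspend. Stripped to its skeleton (foo_save/foo_restore stand in for the MSR accesses above):

#include <linux/notifier.h>
#include <linux/suspend.h>

static void foo_save(void)    { /* read and stash the limits */ }
static void foo_restore(void) { /* write them back */ }

static int foo_pm_callback(struct notifier_block *nb, unsigned long mode,
                           void *unused)
{
        switch (mode) {
        case PM_SUSPEND_PREPARE:
                foo_save();             /* before devices suspend */
                break;
        case PM_POST_SUSPEND:
                foo_restore();          /* after resume, even a failed one */
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block foo_pm_notifier = {
        .notifier_call = foo_pm_callback,
};

/*
 * register_pm_notifier(&foo_pm_notifier) at init time; pair it with
 * unregister_pm_notifier() on the exit and error paths, as the hunk does.
 */
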
diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
index 5b10b50f8686..64b2b2501a79 100644
--- a/drivers/powercap/powercap_sys.c
+++ b/drivers/powercap/powercap_sys.c
@@ -673,15 +673,13 @@ EXPORT_SYMBOL_GPL(powercap_unregister_control_type);
673 673
674static int __init powercap_init(void) 674static int __init powercap_init(void)
675{ 675{
676 int result = 0; 676 int result;
677 677
678 result = seed_constraint_attributes(); 678 result = seed_constraint_attributes();
679 if (result) 679 if (result)
680 return result; 680 return result;
681 681
682 result = class_register(&powercap_class); 682 return class_register(&powercap_class);
683
684 return result;
685} 683}
686 684
687device_initcall(powercap_init); 685device_initcall(powercap_init);
diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
index 10ebb213ddb3..871ea582029e 100644
--- a/drivers/scsi/scsi_transport_spi.c
+++ b/drivers/scsi/scsi_transport_spi.c
@@ -26,6 +26,7 @@
26#include <linux/mutex.h> 26#include <linux/mutex.h>
27#include <linux/sysfs.h> 27#include <linux/sysfs.h>
28#include <linux/slab.h> 28#include <linux/slab.h>
29#include <linux/suspend.h>
29#include <scsi/scsi.h> 30#include <scsi/scsi.h>
30#include "scsi_priv.h" 31#include "scsi_priv.h"
31#include <scsi/scsi_device.h> 32#include <scsi/scsi_device.h>
@@ -1009,11 +1010,20 @@ spi_dv_device(struct scsi_device *sdev)
1009 u8 *buffer; 1010 u8 *buffer;
1010 const int len = SPI_MAX_ECHO_BUFFER_SIZE*2; 1011 const int len = SPI_MAX_ECHO_BUFFER_SIZE*2;
1011 1012
1013 /*
1014 * Because this function and the power management code both call
1015 * scsi_device_quiesce(), it is not safe to perform domain validation
1016 * while suspend or resume is in progress. Hence the
1017 * lock/unlock_system_sleep() calls.
1018 */
1019 lock_system_sleep();
1020
1012 if (unlikely(spi_dv_in_progress(starget))) 1021 if (unlikely(spi_dv_in_progress(starget)))
1013 return; 1022 goto unlock;
1014 1023
1015 if (unlikely(scsi_device_get(sdev))) 1024 if (unlikely(scsi_device_get(sdev)))
1016 return; 1025 goto unlock;
1026
1017 spi_dv_in_progress(starget) = 1; 1027 spi_dv_in_progress(starget) = 1;
1018 1028
1019 buffer = kzalloc(len, GFP_KERNEL); 1029 buffer = kzalloc(len, GFP_KERNEL);
@@ -1049,6 +1059,8 @@ spi_dv_device(struct scsi_device *sdev)
1049 out_put: 1059 out_put:
1050 spi_dv_in_progress(starget) = 0; 1060 spi_dv_in_progress(starget) = 0;
1051 scsi_device_put(sdev); 1061 scsi_device_put(sdev);
1062unlock:
1063 unlock_system_sleep();
1052} 1064}
1053EXPORT_SYMBOL(spi_dv_device); 1065EXPORT_SYMBOL(spi_dv_device);
1054 1066
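
The rule the spi_dv_device() change enforces: a non-PM caller of scsi_device_quiesce() must hold the system sleep lock, and every early exit has to funnel through unlock_system_sleep(). Schematically, with a hypothetical operation of the same shape:

#include <linux/suspend.h>

struct foo_dev;
bool foo_busy(struct foo_dev *fdev);
void foo_do_validation(struct foo_dev *fdev);

static void foo_validate(struct foo_dev *fdev)
{
        lock_system_sleep();

        if (foo_busy(fdev))
                goto unlock;    /* no bare "return" past this point */

        foo_do_validation(fdev);        /* may quiesce the device */

unlock:
        unlock_system_sleep();
}
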
diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c
index dc63aba092e4..dfd23245f778 100644
--- a/drivers/thermal/cpu_cooling.c
+++ b/drivers/thermal/cpu_cooling.c
@@ -88,7 +88,6 @@ struct time_in_idle {
88 * @policy: cpufreq policy. 88 * @policy: cpufreq policy.
89 * @node: list_head to link all cpufreq_cooling_device together. 89 * @node: list_head to link all cpufreq_cooling_device together.
90 * @idle_time: idle time stats 90 * @idle_time: idle time stats
91 * @plat_get_static_power: callback to calculate the static power
92 * 91 *
93 * This structure is required for keeping information of each registered 92 * This structure is required for keeping information of each registered
94 * cpufreq_cooling_device. 93 * cpufreq_cooling_device.
@@ -104,7 +103,6 @@ struct cpufreq_cooling_device {
104 struct cpufreq_policy *policy; 103 struct cpufreq_policy *policy;
105 struct list_head node; 104 struct list_head node;
106 struct time_in_idle *idle_time; 105 struct time_in_idle *idle_time;
107 get_static_t plat_get_static_power;
108}; 106};
109 107
110static DEFINE_IDA(cpufreq_ida); 108static DEFINE_IDA(cpufreq_ida);
@@ -319,60 +317,6 @@ static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
319} 317}
320 318
321/** 319/**
322 * get_static_power() - calculate the static power consumed by the cpus
323 * @cpufreq_cdev: struct &cpufreq_cooling_device for this cpu cdev
324 * @tz: thermal zone device in which we're operating
325 * @freq: frequency in KHz
326 * @power: pointer in which to store the calculated static power
327 *
328 * Calculate the static power consumed by the cpus described by
329 * @cpu_actor running at frequency @freq. This function relies on a
330 * platform specific function that should have been provided when the
331 * actor was registered. If it wasn't, the static power is assumed to
332 * be negligible. The calculated static power is stored in @power.
333 *
334 * Return: 0 on success, -E* on failure.
335 */
336static int get_static_power(struct cpufreq_cooling_device *cpufreq_cdev,
337 struct thermal_zone_device *tz, unsigned long freq,
338 u32 *power)
339{
340 struct dev_pm_opp *opp;
341 unsigned long voltage;
342 struct cpufreq_policy *policy = cpufreq_cdev->policy;
343 struct cpumask *cpumask = policy->related_cpus;
344 unsigned long freq_hz = freq * 1000;
345 struct device *dev;
346
347 if (!cpufreq_cdev->plat_get_static_power) {
348 *power = 0;
349 return 0;
350 }
351
352 dev = get_cpu_device(policy->cpu);
353 WARN_ON(!dev);
354
355 opp = dev_pm_opp_find_freq_exact(dev, freq_hz, true);
356 if (IS_ERR(opp)) {
357 dev_warn_ratelimited(dev, "Failed to find OPP for frequency %lu: %ld\n",
358 freq_hz, PTR_ERR(opp));
359 return -EINVAL;
360 }
361
362 voltage = dev_pm_opp_get_voltage(opp);
363 dev_pm_opp_put(opp);
364
365 if (voltage == 0) {
366 dev_err_ratelimited(dev, "Failed to get voltage for frequency %lu\n",
367 freq_hz);
368 return -EINVAL;
369 }
370
371 return cpufreq_cdev->plat_get_static_power(cpumask, tz->passive_delay,
372 voltage, power);
373}
374
375/**
376 * get_dynamic_power() - calculate the dynamic power 320 * get_dynamic_power() - calculate the dynamic power
377 * @cpufreq_cdev: &cpufreq_cooling_device for this cdev 321 * @cpufreq_cdev: &cpufreq_cooling_device for this cdev
378 * @freq: current frequency 322 * @freq: current frequency
@@ -491,8 +435,8 @@ static int cpufreq_get_requested_power(struct thermal_cooling_device *cdev,
491 u32 *power) 435 u32 *power)
492{ 436{
493 unsigned long freq; 437 unsigned long freq;
494 int i = 0, cpu, ret; 438 int i = 0, cpu;
495 u32 static_power, dynamic_power, total_load = 0; 439 u32 total_load = 0;
496 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata; 440 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
497 struct cpufreq_policy *policy = cpufreq_cdev->policy; 441 struct cpufreq_policy *policy = cpufreq_cdev->policy;
498 u32 *load_cpu = NULL; 442 u32 *load_cpu = NULL;
@@ -522,22 +466,15 @@ static int cpufreq_get_requested_power(struct thermal_cooling_device *cdev,
522 466
523 cpufreq_cdev->last_load = total_load; 467 cpufreq_cdev->last_load = total_load;
524 468
525 dynamic_power = get_dynamic_power(cpufreq_cdev, freq); 469 *power = get_dynamic_power(cpufreq_cdev, freq);
526 ret = get_static_power(cpufreq_cdev, tz, freq, &static_power);
527 if (ret) {
528 kfree(load_cpu);
529 return ret;
530 }
531 470
532 if (load_cpu) { 471 if (load_cpu) {
533 trace_thermal_power_cpu_get_power(policy->related_cpus, freq, 472 trace_thermal_power_cpu_get_power(policy->related_cpus, freq,
534 load_cpu, i, dynamic_power, 473 load_cpu, i, *power);
535 static_power);
536 474
537 kfree(load_cpu); 475 kfree(load_cpu);
538 } 476 }
539 477
540 *power = static_power + dynamic_power;
541 return 0; 478 return 0;
542} 479}
543 480
@@ -561,8 +498,6 @@ static int cpufreq_state2power(struct thermal_cooling_device *cdev,
561 unsigned long state, u32 *power) 498 unsigned long state, u32 *power)
562{ 499{
563 unsigned int freq, num_cpus; 500 unsigned int freq, num_cpus;
564 u32 static_power, dynamic_power;
565 int ret;
566 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata; 501 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
567 502
568 /* Request state should be less than max_level */ 503 /* Request state should be less than max_level */
@@ -572,13 +507,9 @@ static int cpufreq_state2power(struct thermal_cooling_device *cdev,
572 num_cpus = cpumask_weight(cpufreq_cdev->policy->cpus); 507 num_cpus = cpumask_weight(cpufreq_cdev->policy->cpus);
573 508
574 freq = cpufreq_cdev->freq_table[state].frequency; 509 freq = cpufreq_cdev->freq_table[state].frequency;
575 dynamic_power = cpu_freq_to_power(cpufreq_cdev, freq) * num_cpus; 510 *power = cpu_freq_to_power(cpufreq_cdev, freq) * num_cpus;
576 ret = get_static_power(cpufreq_cdev, tz, freq, &static_power);
577 if (ret)
578 return ret;
579 511
580 *power = static_power + dynamic_power; 512 return 0;
581 return ret;
582} 513}
583 514
584/** 515/**
@@ -606,21 +537,14 @@ static int cpufreq_power2state(struct thermal_cooling_device *cdev,
606 unsigned long *state) 537 unsigned long *state)
607{ 538{
608 unsigned int cur_freq, target_freq; 539 unsigned int cur_freq, target_freq;
609 int ret; 540 u32 last_load, normalised_power;
610 s32 dyn_power;
611 u32 last_load, normalised_power, static_power;
612 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata; 541 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
613 struct cpufreq_policy *policy = cpufreq_cdev->policy; 542 struct cpufreq_policy *policy = cpufreq_cdev->policy;
614 543
615 cur_freq = cpufreq_quick_get(policy->cpu); 544 cur_freq = cpufreq_quick_get(policy->cpu);
616 ret = get_static_power(cpufreq_cdev, tz, cur_freq, &static_power); 545 power = power > 0 ? power : 0;
617 if (ret)
618 return ret;
619
620 dyn_power = power - static_power;
621 dyn_power = dyn_power > 0 ? dyn_power : 0;
622 last_load = cpufreq_cdev->last_load ?: 1; 546 last_load = cpufreq_cdev->last_load ?: 1;
623 normalised_power = (dyn_power * 100) / last_load; 547 normalised_power = (power * 100) / last_load;
624 target_freq = cpu_power_to_freq(cpufreq_cdev, normalised_power); 548 target_freq = cpu_power_to_freq(cpufreq_cdev, normalised_power);
625 549
626 *state = get_level(cpufreq_cdev, target_freq); 550 *state = get_level(cpufreq_cdev, target_freq);
@@ -671,8 +595,6 @@ static unsigned int find_next_max(struct cpufreq_frequency_table *table,
671 * @policy: cpufreq policy 595 * @policy: cpufreq policy
672 * Normally this should be same as cpufreq policy->related_cpus. 596 * Normally this should be same as cpufreq policy->related_cpus.
673 * @capacitance: dynamic power coefficient for these cpus 597 * @capacitance: dynamic power coefficient for these cpus
674 * @plat_static_func: function to calculate the static power consumed by these
675 * cpus (optional)
676 * 598 *
677 * This interface function registers the cpufreq cooling device with the name 599 * This interface function registers the cpufreq cooling device with the name
678 * "thermal-cpufreq-%x". This api can support multiple instances of cpufreq 600 * "thermal-cpufreq-%x". This api can support multiple instances of cpufreq
@@ -684,8 +606,7 @@ static unsigned int find_next_max(struct cpufreq_frequency_table *table,
684 */ 606 */
685static struct thermal_cooling_device * 607static struct thermal_cooling_device *
686__cpufreq_cooling_register(struct device_node *np, 608__cpufreq_cooling_register(struct device_node *np,
687 struct cpufreq_policy *policy, u32 capacitance, 609 struct cpufreq_policy *policy, u32 capacitance)
688 get_static_t plat_static_func)
689{ 610{
690 struct thermal_cooling_device *cdev; 611 struct thermal_cooling_device *cdev;
691 struct cpufreq_cooling_device *cpufreq_cdev; 612 struct cpufreq_cooling_device *cpufreq_cdev;
@@ -755,8 +676,6 @@ __cpufreq_cooling_register(struct device_node *np,
755 } 676 }
756 677
757 if (capacitance) { 678 if (capacitance) {
758 cpufreq_cdev->plat_get_static_power = plat_static_func;
759
760 ret = update_freq_table(cpufreq_cdev, capacitance); 679 ret = update_freq_table(cpufreq_cdev, capacitance);
761 if (ret) { 680 if (ret) {
762 cdev = ERR_PTR(ret); 681 cdev = ERR_PTR(ret);
@@ -813,13 +732,12 @@ free_cdev:
813struct thermal_cooling_device * 732struct thermal_cooling_device *
814cpufreq_cooling_register(struct cpufreq_policy *policy) 733cpufreq_cooling_register(struct cpufreq_policy *policy)
815{ 734{
816 return __cpufreq_cooling_register(NULL, policy, 0, NULL); 735 return __cpufreq_cooling_register(NULL, policy, 0);
817} 736}
818EXPORT_SYMBOL_GPL(cpufreq_cooling_register); 737EXPORT_SYMBOL_GPL(cpufreq_cooling_register);
819 738
820/** 739/**
821 * of_cpufreq_cooling_register - function to create cpufreq cooling device. 740 * of_cpufreq_cooling_register - function to create cpufreq cooling device.
822 * @np: a valid struct device_node to the cooling device device tree node
823 * @policy: cpufreq policy 741 * @policy: cpufreq policy
824 * 742 *
825 * This interface function registers the cpufreq cooling device with the name 743 * This interface function registers the cpufreq cooling device with the name
@@ -827,86 +745,45 @@ EXPORT_SYMBOL_GPL(cpufreq_cooling_register);
827 * cooling devices. Using this API, the cpufreq cooling device will be 745 * cooling devices. Using this API, the cpufreq cooling device will be
828 * linked to the device tree node provided. 746 * linked to the device tree node provided.
829 * 747 *
830 * Return: a valid struct thermal_cooling_device pointer on success,
831 * on failure, it returns a corresponding ERR_PTR().
832 */
833struct thermal_cooling_device *
834of_cpufreq_cooling_register(struct device_node *np,
835 struct cpufreq_policy *policy)
836{
837 if (!np)
838 return ERR_PTR(-EINVAL);
839
840 return __cpufreq_cooling_register(np, policy, 0, NULL);
841}
842EXPORT_SYMBOL_GPL(of_cpufreq_cooling_register);
843
844/**
845 * cpufreq_power_cooling_register() - create cpufreq cooling device with power extensions
846 * @policy: cpufreq policy
847 * @capacitance: dynamic power coefficient for these cpus
848 * @plat_static_func: function to calculate the static power consumed by these
849 * cpus (optional)
850 *
851 * This interface function registers the cpufreq cooling device with
852 * the name "thermal-cpufreq-%x". This api can support multiple
853 * instances of cpufreq cooling devices. Using this function, the
854 * cooling device will implement the power extensions by using a
855 * simple cpu power model. The cpus must have registered their OPPs
856 * using the OPP library.
857 *
858 * An optional @plat_static_func may be provided to calculate the
859 * static power consumed by these cpus. If the platform's static
860 * power consumption is unknown or negligible, make it NULL.
861 *
862 * Return: a valid struct thermal_cooling_device pointer on success,
863 * on failure, it returns a corresponding ERR_PTR().
864 */
865struct thermal_cooling_device *
866cpufreq_power_cooling_register(struct cpufreq_policy *policy, u32 capacitance,
867 get_static_t plat_static_func)
868{
869 return __cpufreq_cooling_register(NULL, policy, capacitance,
870 plat_static_func);
871}
872EXPORT_SYMBOL(cpufreq_power_cooling_register);
873
874/**
875 * of_cpufreq_power_cooling_register() - create cpufreq cooling device with power extensions
876 * @np: a valid struct device_node to the cooling device device tree node
877 * @policy: cpufreq policy
878 * @capacitance: dynamic power coefficient for these cpus
879 * @plat_static_func: function to calculate the static power consumed by these
880 * cpus (optional)
881 *
882 * This interface function registers the cpufreq cooling device with
883 * the name "thermal-cpufreq-%x". This api can support multiple
884 * instances of cpufreq cooling devices. Using this API, the cpufreq
885 * cooling device will be linked to the device tree node provided.
886 * Using this function, the cooling device will implement the power 748 * Using this function, the cooling device will implement the power
887 * extensions by using a simple cpu power model. The cpus must have 749 * extensions by using a simple cpu power model. The cpus must have
888 * registered their OPPs using the OPP library. 750 * registered their OPPs using the OPP library.
889 * 751 *
890 * An optional @plat_static_func may be provided to calculate the 752 * It also takes into account, if the property is present in the policy's
891 * static power consumed by these cpus. If the platform's static 753 * CPU node, the static power consumed by the CPU.
892 * power consumption is unknown or negligible, make it NULL.
893 * 754 *
894 * Return: a valid struct thermal_cooling_device pointer on success, 755 * Return: a valid struct thermal_cooling_device pointer on success,
895 * on failure, it returns a corresponding ERR_PTR(). 756 * and NULL on failure.
896 */ 757 */
897struct thermal_cooling_device * 758struct thermal_cooling_device *
898of_cpufreq_power_cooling_register(struct device_node *np, 759of_cpufreq_cooling_register(struct cpufreq_policy *policy)
899 struct cpufreq_policy *policy,
900 u32 capacitance,
901 get_static_t plat_static_func)
902{ 760{
903 if (!np) 761 struct device_node *np = of_get_cpu_node(policy->cpu, NULL);
904 return ERR_PTR(-EINVAL); 762 struct thermal_cooling_device *cdev = NULL;
763 u32 capacitance = 0;
764
765 if (!np) {
766 pr_err("cpu_cooling: OF node not available for cpu%d\n",
767 policy->cpu);
768 return NULL;
769 }
770
771 if (of_find_property(np, "#cooling-cells", NULL)) {
772 of_property_read_u32(np, "dynamic-power-coefficient",
773 &capacitance);
905 774
906 return __cpufreq_cooling_register(np, policy, capacitance, 775 cdev = __cpufreq_cooling_register(np, policy, capacitance);
907 plat_static_func); 776 if (IS_ERR(cdev)) {
777 pr_err("cpu_cooling: cpu%d is not running as cooling device: %ld\n",
778 policy->cpu, PTR_ERR(cdev));
779 cdev = NULL;
780 }
781 }
782
783 of_node_put(np);
784 return cdev;
908} 785}
909EXPORT_SYMBOL(of_cpufreq_power_cooling_register); 786EXPORT_SYMBOL_GPL(of_cpufreq_cooling_register);
910 787
911/** 788/**
912 * cpufreq_cooling_unregister - function to remove cpufreq cooling device. 789 * cpufreq_cooling_unregister - function to remove cpufreq cooling device.
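
With this rework a cpufreq driver passes only the policy; the helper finds the CPU's DT node and reads "dynamic-power-coefficient" itself, and returns NULL (not an ERR_PTR) on failure. A plausible caller, sketched as a hypothetical driver's ->ready() hook:

#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>

static struct thermal_cooling_device *foo_cdev;

static void foo_cpufreq_ready(struct cpufreq_policy *policy)
{
        /*
         * NULL on failure with the new API; the driver can simply carry
         * on without a cooling device in that case.
         */
        foo_cdev = of_cpufreq_cooling_register(policy);
}

static void foo_cpufreq_teardown(struct cpufreq_policy *policy)
{
        if (foo_cdev)
                cpufreq_cooling_unregister(foo_cdev);
        foo_cdev = NULL;
}
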
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index d918f1ea84e6..b8f4c3c776e5 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -451,6 +451,7 @@ void __init acpi_no_s4_hw_signature(void);
451void __init acpi_old_suspend_ordering(void); 451void __init acpi_old_suspend_ordering(void);
452void __init acpi_nvs_nosave(void); 452void __init acpi_nvs_nosave(void);
453void __init acpi_nvs_nosave_s3(void); 453void __init acpi_nvs_nosave_s3(void);
454void __init acpi_sleep_no_blacklist(void);
454#endif /* CONFIG_PM_SLEEP */ 455#endif /* CONFIG_PM_SLEEP */
455 456
456struct acpi_osc_context { 457struct acpi_osc_context {
diff --git a/include/linux/cpu_cooling.h b/include/linux/cpu_cooling.h
index d4292ebc5c8b..de0dafb9399d 100644
--- a/include/linux/cpu_cooling.h
+++ b/include/linux/cpu_cooling.h
@@ -30,9 +30,6 @@
30 30
31struct cpufreq_policy; 31struct cpufreq_policy;
32 32
33typedef int (*get_static_t)(cpumask_t *cpumask, int interval,
34 unsigned long voltage, u32 *power);
35
36#ifdef CONFIG_CPU_THERMAL 33#ifdef CONFIG_CPU_THERMAL
37/** 34/**
38 * cpufreq_cooling_register - function to create cpufreq cooling device. 35 * cpufreq_cooling_register - function to create cpufreq cooling device.
@@ -41,43 +38,6 @@ typedef int (*get_static_t)(cpumask_t *cpumask, int interval,
 struct thermal_cooling_device *
 cpufreq_cooling_register(struct cpufreq_policy *policy);
 
-struct thermal_cooling_device *
-cpufreq_power_cooling_register(struct cpufreq_policy *policy,
-			       u32 capacitance, get_static_t plat_static_func);
-
-/**
- * of_cpufreq_cooling_register - create cpufreq cooling device based on DT.
- * @np: a valid struct device_node to the cooling device device tree node.
- * @policy: cpufreq policy.
- */
-#ifdef CONFIG_THERMAL_OF
-struct thermal_cooling_device *
-of_cpufreq_cooling_register(struct device_node *np,
-			    struct cpufreq_policy *policy);
-
-struct thermal_cooling_device *
-of_cpufreq_power_cooling_register(struct device_node *np,
-				  struct cpufreq_policy *policy,
-				  u32 capacitance,
-				  get_static_t plat_static_func);
-#else
-static inline struct thermal_cooling_device *
-of_cpufreq_cooling_register(struct device_node *np,
-			    struct cpufreq_policy *policy)
-{
-	return ERR_PTR(-ENOSYS);
-}
-
-static inline struct thermal_cooling_device *
-of_cpufreq_power_cooling_register(struct device_node *np,
-				  struct cpufreq_policy *policy,
-				  u32 capacitance,
-				  get_static_t plat_static_func)
-{
-	return NULL;
-}
-#endif
-
 /**
  * cpufreq_cooling_unregister - function to remove cpufreq cooling device.
  * @cdev: thermal cooling device pointer.
@@ -90,34 +50,27 @@ cpufreq_cooling_register(struct cpufreq_policy *policy)
 {
 	return ERR_PTR(-ENOSYS);
 }
-static inline struct thermal_cooling_device *
-cpufreq_power_cooling_register(struct cpufreq_policy *policy,
-			       u32 capacitance, get_static_t plat_static_func)
-{
-	return NULL;
-}
 
-static inline struct thermal_cooling_device *
-of_cpufreq_cooling_register(struct device_node *np,
-			    struct cpufreq_policy *policy)
-{
-	return ERR_PTR(-ENOSYS);
-}
-
-static inline struct thermal_cooling_device *
-of_cpufreq_power_cooling_register(struct device_node *np,
-				  struct cpufreq_policy *policy,
-				  u32 capacitance,
-				  get_static_t plat_static_func)
-{
-	return NULL;
-}
-
 static inline
 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
 {
 	return;
 }
 #endif /* CONFIG_CPU_THERMAL */
 
+#if defined(CONFIG_THERMAL_OF) && defined(CONFIG_CPU_THERMAL)
+/**
+ * of_cpufreq_cooling_register - create cpufreq cooling device based on DT.
+ * @policy: cpufreq policy.
+ */
+struct thermal_cooling_device *
+of_cpufreq_cooling_register(struct cpufreq_policy *policy);
+#else
+static inline struct thermal_cooling_device *
+of_cpufreq_cooling_register(struct cpufreq_policy *policy)
+{
+	return NULL;
+}
+#endif /* defined(CONFIG_THERMAL_OF) && defined(CONFIG_CPU_THERMAL) */
 
 #endif /* __CPU_COOLING_H__ */
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 492ed473ba7e..e723b78d8357 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -556,9 +556,10 @@ struct pm_subsys_data {
  * These flags can be set by device drivers at the probe time. They need not be
  * cleared by the drivers as the driver core will take care of that.
  *
- * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device.
+ * NEVER_SKIP: Do not skip all system suspend/resume callbacks for the device.
  * SMART_PREPARE: Check the return value of the driver's ->prepare callback.
  * SMART_SUSPEND: No need to resume the device from runtime suspend.
+ * LEAVE_SUSPENDED: Avoid resuming the device during system resume if possible.
  *
  * Setting SMART_PREPARE instructs bus types and PM domains which may want
  * system suspend/resume callbacks to be skipped for the device to return 0 from
@@ -572,10 +573,14 @@ struct pm_subsys_data {
  * necessary from the driver's perspective. It also may cause them to skip
  * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by
  * the driver if they decide to leave the device in runtime suspend.
+ *
+ * Setting LEAVE_SUSPENDED informs the PM core and middle-layer code that the
+ * driver prefers the device to be left in suspend after system resume.
  */
 #define DPM_FLAG_NEVER_SKIP		BIT(0)
 #define DPM_FLAG_SMART_PREPARE		BIT(1)
 #define DPM_FLAG_SMART_SUSPEND		BIT(2)
+#define DPM_FLAG_LEAVE_SUSPENDED	BIT(3)
 
 struct dev_pm_info {
 	pm_message_t power_state;
@@ -597,6 +602,8 @@ struct dev_pm_info {
 	bool wakeup_path:1;
 	bool syscore:1;
 	bool no_pm_callbacks:1;	/* Owned by the PM core */
+	unsigned int must_resume:1;	/* Owned by the PM core */
+	unsigned int may_skip_resume:1;	/* Set by subsystems */
 #else
 	unsigned int should_wakeup:1;
 #endif
@@ -766,6 +773,7 @@ extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
 
 extern void dev_pm_skip_next_resume_phases(struct device *dev);
+extern bool dev_pm_may_skip_resume(struct device *dev);
 extern bool dev_pm_smart_suspend_and_suspended(struct device *dev);
 
 #else /* !CONFIG_PM_SLEEP */
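
A driver would typically opt in to the new flag at probe time, together with the other DPM flags. A minimal sketch, assuming a platform driver (the probe function and device are placeholders; dev_pm_set_driver_flags() is the existing helper for setting these bits):

	static int my_probe(struct platform_device *pdev)
	{
		/*
		 * Let the device stay in runtime suspend across system
		 * suspend, and leave it suspended after system resume
		 * when nothing requires it to be up.
		 */
		dev_pm_set_driver_flags(&pdev->dev,
					DPM_FLAG_SMART_SUSPEND |
					DPM_FLAG_LEAVE_SUSPENDED);
		return 0;
	}
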
diff --git a/include/linux/pm_wakeup.h b/include/linux/pm_wakeup.h
index 4c2cba7ec1d4..4238dde0aaf0 100644
--- a/include/linux/pm_wakeup.h
+++ b/include/linux/pm_wakeup.h
@@ -88,6 +88,11 @@ static inline bool device_may_wakeup(struct device *dev)
 	return dev->power.can_wakeup && !!dev->power.wakeup;
 }
 
+static inline void device_set_wakeup_path(struct device *dev)
+{
+	dev->power.wakeup_path = true;
+}
+
 /* drivers/base/power/wakeup.c */
 extern void wakeup_source_prepare(struct wakeup_source *ws, const char *name);
 extern struct wakeup_source *wakeup_source_create(const char *name);
@@ -174,6 +179,8 @@ static inline bool device_may_wakeup(struct device *dev)
 	return dev->power.can_wakeup && dev->power.should_wakeup;
 }
 
+static inline void device_set_wakeup_path(struct device *dev) {}
+
 static inline void __pm_stay_awake(struct wakeup_source *ws) {}
 
 static inline void pm_stay_awake(struct device *dev) {}
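
device_set_wakeup_path() gives subsystems and drivers a sanctioned accessor for power.wakeup_path instead of poking the bit directly. Roughly, in a system suspend callback (the callback name below is illustrative):

	static int my_suspend(struct device *dev)
	{
		/* Keep the path to this wakeup device powered. */
		if (device_may_wakeup(dev))
			device_set_wakeup_path(dev);

		return 0;
	}
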
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index d60b0f5c38d5..cc22a24516d6 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -443,32 +443,8 @@ extern bool pm_save_wakeup_count(unsigned int count);
 extern void pm_wakep_autosleep_enabled(bool set);
 extern void pm_print_active_wakeup_sources(void);
 
-static inline void lock_system_sleep(void)
-{
-	current->flags |= PF_FREEZER_SKIP;
-	mutex_lock(&pm_mutex);
-}
-
-static inline void unlock_system_sleep(void)
-{
-	/*
-	 * Don't use freezer_count() because we don't want the call to
-	 * try_to_freeze() here.
-	 *
-	 * Reason:
-	 * Fundamentally, we just don't need it, because freezing condition
-	 * doesn't come into effect until we release the pm_mutex lock,
-	 * since the freezer always works with pm_mutex held.
-	 *
-	 * More importantly, in the case of hibernation,
-	 * unlock_system_sleep() gets called in snapshot_read() and
-	 * snapshot_write() when the freezing condition is still in effect.
-	 * Which means, if we use try_to_freeze() here, it would make them
-	 * enter the refrigerator, thus causing hibernation to lockup.
-	 */
-	current->flags &= ~PF_FREEZER_SKIP;
-	mutex_unlock(&pm_mutex);
-}
+extern void lock_system_sleep(void);
+extern void unlock_system_sleep(void);
 
 #else /* !CONFIG_PM_SLEEP */
 
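
Callers are unaffected by the move out of line; the usual pattern stays the same (the operation below is a placeholder for whatever must not race with a sleep transition):

	int ret;

	lock_system_sleep();	/* takes pm_mutex, marks caller PF_FREEZER_SKIP */
	ret = do_work_that_must_not_race_with_suspend();	/* hypothetical */
	unlock_system_sleep();
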
diff --git a/include/trace/events/thermal.h b/include/trace/events/thermal.h
index 78946640fe03..135e5421f003 100644
--- a/include/trace/events/thermal.h
+++ b/include/trace/events/thermal.h
@@ -94,9 +94,9 @@ TRACE_EVENT(thermal_zone_trip,
 #ifdef CONFIG_CPU_THERMAL
 TRACE_EVENT(thermal_power_cpu_get_power,
 	TP_PROTO(const struct cpumask *cpus, unsigned long freq, u32 *load,
-		size_t load_len, u32 dynamic_power, u32 static_power),
+		size_t load_len, u32 dynamic_power),
 
-	TP_ARGS(cpus, freq, load, load_len, dynamic_power, static_power),
+	TP_ARGS(cpus, freq, load, load_len, dynamic_power),
 
 	TP_STRUCT__entry(
 		__bitmask(cpumask, num_possible_cpus())
@@ -104,7 +104,6 @@ TRACE_EVENT(thermal_power_cpu_get_power,
 		__dynamic_array(u32,   load, load_len)
 		__field(size_t,        load_len      )
 		__field(u32,           dynamic_power )
-		__field(u32,           static_power  )
 	),
 
 	TP_fast_assign(
@@ -115,13 +114,12 @@ TRACE_EVENT(thermal_power_cpu_get_power,
 			load_len * sizeof(*load));
 		__entry->load_len = load_len;
 		__entry->dynamic_power = dynamic_power;
-		__entry->static_power = static_power;
 	),
 
-	TP_printk("cpus=%s freq=%lu load={%s} dynamic_power=%d static_power=%d",
+	TP_printk("cpus=%s freq=%lu load={%s} dynamic_power=%d",
 		__get_bitmask(cpumask), __entry->freq,
 		__print_array(__get_dynamic_array(load), __entry->load_len, 4),
-		__entry->dynamic_power, __entry->static_power)
+		__entry->dynamic_power)
 );
 
 TRACE_EVENT(thermal_power_cpu_limit,
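
Call sites of this tracepoint simply drop the last argument. As an illustration (the variable names mirror typical usage in cpu_cooling.c and are not quoted from this patch):

	trace_thermal_power_cpu_get_power(policy->related_cpus, freq,
					  load, load_len, dynamic_power);
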
diff --git a/kernel/configs/nopm.config b/kernel/configs/nopm.config
new file mode 100644
index 000000000000..81ff07863576
--- /dev/null
+++ b/kernel/configs/nopm.config
@@ -0,0 +1,15 @@
+CONFIG_PM=n
+CONFIG_SUSPEND=n
+CONFIG_HIBERNATION=n
+
+# Triggers PM on OMAP
+CONFIG_CPU_IDLE=n
+
+# Triggers enablement via hibernate callbacks
+CONFIG_XEN=n
+
+# ARM/ARM64 architectures that select PM unconditionally
+CONFIG_ARCH_OMAP2PLUS_TYPICAL=n
+CONFIG_ARCH_RENESAS=n
+CONFIG_ARCH_TEGRA=n
+CONFIG_ARCH_VEXPRESS=n
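
Assuming the generic kconfig fragment machinery applies to this file as it does to the other kernel/configs/ snippets (an assumption; the patch itself does not spell this out), the fragment would be folded into an existing configuration with something like:

	make defconfig nopm.config
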
diff --git a/kernel/power/main.c b/kernel/power/main.c
index 3a2ca9066583..705c2366dafe 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -22,6 +22,35 @@ DEFINE_MUTEX(pm_mutex);
 
 #ifdef CONFIG_PM_SLEEP
 
+void lock_system_sleep(void)
+{
+	current->flags |= PF_FREEZER_SKIP;
+	mutex_lock(&pm_mutex);
+}
+EXPORT_SYMBOL_GPL(lock_system_sleep);
+
+void unlock_system_sleep(void)
+{
+	/*
+	 * Don't use freezer_count() because we don't want the call to
+	 * try_to_freeze() here.
+	 *
+	 * Reason:
+	 * Fundamentally, we just don't need it, because freezing condition
+	 * doesn't come into effect until we release the pm_mutex lock,
+	 * since the freezer always works with pm_mutex held.
+	 *
+	 * More importantly, in the case of hibernation,
+	 * unlock_system_sleep() gets called in snapshot_read() and
+	 * snapshot_write() when the freezing condition is still in effect.
+	 * Which means, if we use try_to_freeze() here, it would make them
+	 * enter the refrigerator, thus causing hibernation to lockup.
+	 */
+	current->flags &= ~PF_FREEZER_SKIP;
+	mutex_unlock(&pm_mutex);
+}
+EXPORT_SYMBOL_GPL(unlock_system_sleep);
+
 /* Routines for PM-transition notifications */
 
 static BLOCKING_NOTIFIER_HEAD(pm_chain_head);
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index bce0464524d8..3d37c279c090 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1645,8 +1645,7 @@ static unsigned long free_unnecessary_pages(void)
  * [number of saveable pages] - [number of pages that can be freed in theory]
  *
  * where the second term is the sum of (1) reclaimable slab pages, (2) active
- * and (3) inactive anonymous pages, (4) active and (5) inactive file pages,
- * minus mapped file pages.
+ * and (3) inactive anonymous pages, (4) active and (5) inactive file pages.
  */
 static unsigned long minimum_image_size(unsigned long saveable)
 {
@@ -1656,8 +1655,7 @@ static unsigned long minimum_image_size(unsigned long saveable)
 		+ global_node_page_state(NR_ACTIVE_ANON)
 		+ global_node_page_state(NR_INACTIVE_ANON)
 		+ global_node_page_state(NR_ACTIVE_FILE)
-		+ global_node_page_state(NR_INACTIVE_FILE)
-		- global_node_page_state(NR_FILE_MAPPED);
+		+ global_node_page_state(NR_INACTIVE_FILE);
 
 	return saveable <= size ? 0 : saveable - size;
 }
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 293ead59eccc..a46be1261c09 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -879,7 +879,7 @@ out_clean:
  * space avaiable from the resume partition.
  */
 
-static int enough_swap(unsigned int nr_pages, unsigned int flags)
+static int enough_swap(unsigned int nr_pages)
 {
 	unsigned int free_swap = count_swap_pages(root_swap, 1);
 	unsigned int required;
@@ -915,7 +915,7 @@ int swsusp_write(unsigned int flags)
 		return error;
 	}
 	if (flags & SF_NOCOMPRESS_MODE) {
-		if (!enough_swap(pages, flags)) {
+		if (!enough_swap(pages)) {
 			pr_err("Not enough free swap\n");
 			error = -ENOSPC;
 			goto out_finish;
diff --git a/tools/power/cpupower/lib/cpufreq.h b/tools/power/cpupower/lib/cpufreq.h
index 3b005c39f068..60beaf5ed2ea 100644
--- a/tools/power/cpupower/lib/cpufreq.h
+++ b/tools/power/cpupower/lib/cpufreq.h
@@ -11,10 +11,6 @@
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
  */
 
 #ifndef __CPUPOWER_CPUFREQ_H__
diff --git a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
index 0b24dd9d01ff..29f50d4cfea0 100755
--- a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
+++ b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
@@ -411,6 +411,16 @@ def set_trace_buffer_size():
         print('IO error setting trace buffer size ')
         quit()
 
+def free_trace_buffer():
+    """ Free the trace buffer memory """
+
+    try:
+        open('/sys/kernel/debug/tracing/buffer_size_kb'
+             , 'w').write("1")
+    except:
+        print('IO error setting trace buffer size ')
+        quit()
+
 def read_trace_data(filename):
     """ Read and parse trace data """
 
@@ -583,4 +593,9 @@ for root, dirs, files in os.walk('.'):
     for f in files:
         fix_ownership(f)
 
+clear_trace_file()
+# Free the memory
+if interval:
+    free_trace_buffer()
+
 os.chdir('../../')