author     Linus Torvalds <torvalds@linux-foundation.org>  2019-09-17 22:15:14 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2019-09-17 22:15:14 -0400
commit     77dcfe2b9edc98286cf18e03c243c9b999f955d9 (patch)
tree       0ba3c4002b6c26c715bf03fac81d63de13c01d96
parent     04cbfba6208592999d7bfe6609ec01dc3fde73f5 (diff)
parent     fc6763a2d7e0a7f49ccec97a46e92e9fb1f3f9dd (diff)
Merge tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
 "These include a rework of the main suspend-to-idle code flow (related to the handling of spurious wakeups), a switch over of several users of cpufreq notifiers to QoS-based limits, a new devfreq driver for Tegra20, a new cpuidle driver and governor for virtualized guests, an extension of the wakeup sources framework to expose wakeup sources as device objects in sysfs, and more.

  Specifics:

   - Rework the main suspend-to-idle control flow to avoid repeating "noirq" device resume and suspend operations in case of spurious wakeups from the ACPI EC and decouple the ACPI EC wakeups support from the LPS0 _DSM support (Rafael Wysocki).
   - Extend the wakeup sources framework to expose wakeup sources as device objects in sysfs (Tri Vo, Stephen Boyd).
   - Expose system suspend statistics in sysfs (Kalesh Singh).
   - Introduce a new haltpoll cpuidle driver and a new matching governor for virtualized guests wanting to do guest-side polling in the idle loop (Marcelo Tosatti, Joao Martins, Wanpeng Li, Stephen Rothwell).
   - Fix the menu and teo cpuidle governors to allow the scheduler tick to be stopped if PM QoS is used to limit the CPU idle state exit latency in some cases (Rafael Wysocki).
   - Increase the resolution of the play_idle() argument to microseconds for more fine-grained injection of CPU idle cycles (Daniel Lezcano).
   - Switch over some users of cpufreq notifiers to the new QoS-based frequency limits and drop the CPUFREQ_ADJUST and CPUFREQ_NOTIFY policy notifier events (Viresh Kumar).
   - Add new cpufreq driver based on nvmem for sun50i (Yangtao Li).
   - Add support for MT8183 and MT8516 to the mediatek cpufreq driver (Andrew-sh.Cheng, Fabien Parent).
   - Add i.MX8MN support to the imx-cpufreq-dt cpufreq driver (Anson Huang).
   - Add qcs404 to cpufreq-dt-platdev blacklist (Jorge Ramirez-Ortiz).
   - Update the qcom cpufreq driver (among other things, to make it easier to extend and to use kryo cpufreq for other nvmem-based SoCs) and add qcs404 support to it (Niklas Cassel, Douglas RAILLARD, Sibi Sankar, Sricharan R).
   - Fix assorted issues and make assorted minor improvements in the cpufreq code (Colin Ian King, Douglas RAILLARD, Florian Fainelli, Gustavo Silva, Hariprasad Kelam).
   - Add new devfreq driver for NVidia Tegra20 (Dmitry Osipenko, Arnd Bergmann).
   - Add new Exynos PPMU events to devfreq events and extend that mechanism (Lukasz Luba).
   - Fix and clean up the exynos-bus devfreq driver (Kamil Konieczny).
   - Improve devfreq documentation and governor code, fix spelling typos in devfreq (Ezequiel Garcia, Krzysztof Kozlowski, Leonard Crestez, MyungJoo Ham, Gaël PORTAY).
   - Add regulators enable and disable to the OPP (operating performance points) framework (Kamil Konieczny).
   - Update the OPP framework to support multiple opp-suspend properties (Anson Huang).
   - Fix assorted issues and make assorted minor improvements in the OPP code (Niklas Cassel, Viresh Kumar, Yue Hu).
   - Clean up the generic power domains (genpd) framework (Ulf Hansson).
   - Clean up assorted pieces of power management code and documentation (Akinobu Mita, Amit Kucheria, Chuhong Yuan).
   - Update the pm-graph tool to version 5.5 including multiple fixes and improvements (Todd Brandt).
   - Update the cpupower utility (Benjamin Weis, Geert Uytterhoeven, Sébastien Szymanski)"

* tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (126 commits)
  cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available
  cpuidle-haltpoll: do not set an owner to allow modunload
  cpuidle-haltpoll: return -ENODEV on modinit failure
  cpuidle-haltpoll: set haltpoll as preferred governor
  cpuidle: allow governor switch on cpuidle_register_driver()
  PM: runtime: Documentation: add runtime_status ABI document
  pm-graph: make setVal unbuffered again for python2 and python3
  powercap: idle_inject: Use higher resolution for idle injection
  cpuidle: play_idle: Increase the resolution to usec
  cpuidle-haltpoll: vcpu hotplug support
  cpufreq: Add qcs404 to cpufreq-dt-platdev blacklist
  cpufreq: qcom: Add support for qcs404 on nvmem driver
  cpufreq: qcom: Refactor the driver to make it easier to extend
  cpufreq: qcom: Re-organise kryo cpufreq to use it for other nvmem based qcom socs
  dt-bindings: opp: Add qcom-opp bindings with properties needed for CPR
  dt-bindings: opp: qcom-nvmem: Support pstates provided by a power domain
  Documentation: cpufreq: Update policy notifier documentation
  cpufreq: Remove CPUFREQ_ADJUST and CPUFREQ_NOTIFY policy notifier events
  PM / Domains: Verify PM domain type in dev_pm_genpd_set_performance_state()
  PM / Domains: Simplify genpd_lookup_dev()
  ...
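One theme above is the replacement of the CPUFREQ_ADJUST/CPUFREQ_NOTIFY policy notifiers with QoS-based frequency limits. As a rough sketch of the new pattern, modeled on the dev_pm_qos calls that appear in the drivers/acpi/processor_perflib.c hunk further down (the function and variable names here are illustrative and not taken from the patch), a driver that used to clamp policy->max from a CPUFREQ_ADJUST notifier now adds and updates a per-CPU maximum-frequency request:

	#include <linux/cpu.h>
	#include <linux/kernel.h>
	#include <linux/pm_qos.h>

	/* Illustrative request object; one per constrained CPU in practice. */
	static struct dev_pm_qos_request my_max_freq_req;

	/* Register an (initially unlimited) max-frequency constraint for a CPU. */
	static int my_driver_init_freq_limit(int cpu)
	{
		return dev_pm_qos_add_request(get_cpu_device(cpu), &my_max_freq_req,
					      DEV_PM_QOS_MAX_FREQUENCY, INT_MAX);
	}

	/* Clamp the CPU to max_khz; cpufreq applies the tightest active request. */
	static void my_driver_set_freq_limit(s32 max_khz)
	{
		dev_pm_qos_update_request(&my_max_freq_req, max_khz);
	}

	/* Drop the constraint when the driver no longer needs it. */
	static void my_driver_drop_freq_limit(void)
	{
		dev_pm_qos_remove_request(&my_max_freq_req);
	}

Compared with the old notifier approach, the limit is expressed as data that the cpufreq core aggregates, rather than as code that rewrites the policy on every notification.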
-rw-r--r--  Documentation/ABI/testing/sysfs-class-wakeup | 76
-rw-r--r--  Documentation/ABI/testing/sysfs-devices-power | 9
-rw-r--r--  Documentation/ABI/testing/sysfs-power | 106
-rw-r--r--  Documentation/cpu-freq/core.txt | 16
-rw-r--r--  Documentation/devicetree/bindings/opp/opp.txt | 4
-rw-r--r--  Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt (renamed from Documentation/devicetree/bindings/opp/kryo-cpufreq.txt) | 127
-rw-r--r--  Documentation/devicetree/bindings/opp/qcom-opp.txt | 19
-rw-r--r--  Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt | 167
-rw-r--r--  Documentation/power/opp.rst | 2
-rw-r--r--  Documentation/power/pm_qos_interface.rst | 5
-rw-r--r--  Documentation/virtual/guest-halt-polling.txt | 78
-rw-r--r--  MAINTAINERS | 11
-rw-r--r--  arch/x86/Kconfig | 7
-rw-r--r--  arch/x86/include/asm/cpuidle_haltpoll.h | 8
-rw-r--r--  arch/x86/kernel/kvm.c | 37
-rw-r--r--  arch/x86/kernel/process.c | 2
-rw-r--r--  drivers/acpi/acpica/evxfgpe.c | 6
-rw-r--r--  drivers/acpi/device_pm.c | 7
-rw-r--r--  drivers/acpi/ec.c | 57
-rw-r--r--  drivers/acpi/internal.h | 6
-rw-r--r--  drivers/acpi/processor_driver.c | 39
-rw-r--r--  drivers/acpi/processor_perflib.c | 100
-rw-r--r--  drivers/acpi/processor_thermal.c | 84
-rw-r--r--  drivers/acpi/sleep.c | 165
-rw-r--r--  drivers/base/arch_topology.c | 2
-rw-r--r--  drivers/base/power/Makefile | 2
-rw-r--r--  drivers/base/power/domain.c | 25
-rw-r--r--  drivers/base/power/main.c | 35
-rw-r--r--  drivers/base/power/power.h | 18
-rw-r--r--  drivers/base/power/sysfs.c | 6
-rw-r--r--  drivers/base/power/wakeup.c | 72
-rw-r--r--  drivers/base/power/wakeup_stats.c | 214
-rw-r--r--  drivers/cpufreq/Kconfig.arm | 16
-rw-r--r--  drivers/cpufreq/Makefile | 3
-rw-r--r--  drivers/cpufreq/armada-8k-cpufreq.c | 2
-rw-r--r--  drivers/cpufreq/cpufreq-dt-platdev.c | 5
-rw-r--r--  drivers/cpufreq/cpufreq.c | 57
-rw-r--r--  drivers/cpufreq/imx-cpufreq-dt.c | 8
-rw-r--r--  drivers/cpufreq/intel_pstate.c | 120
-rw-r--r--  drivers/cpufreq/mediatek-cpufreq.c | 4
-rw-r--r--  drivers/cpufreq/ppc_cbe_cpufreq.c | 19
-rw-r--r--  drivers/cpufreq/ppc_cbe_cpufreq.h | 8
-rw-r--r--  drivers/cpufreq/ppc_cbe_cpufreq_pmi.c | 96
-rw-r--r--  drivers/cpufreq/qcom-cpufreq-hw.c | 23
-rw-r--r--  drivers/cpufreq/qcom-cpufreq-kryo.c | 249
-rw-r--r--  drivers/cpufreq/qcom-cpufreq-nvmem.c | 352
-rw-r--r--  drivers/cpufreq/sun50i-cpufreq-nvmem.c | 226
-rw-r--r--  drivers/cpufreq/ti-cpufreq.c | 1
-rw-r--r--  drivers/cpuidle/Kconfig | 20
-rw-r--r--  drivers/cpuidle/Makefile | 1
-rw-r--r--  drivers/cpuidle/cpuidle-haltpoll.c | 134
-rw-r--r--  drivers/cpuidle/cpuidle.c | 30
-rw-r--r--  drivers/cpuidle/cpuidle.h | 2
-rw-r--r--  drivers/cpuidle/driver.c | 25
-rw-r--r--  drivers/cpuidle/governor.c | 7
-rw-r--r--  drivers/cpuidle/governors/Makefile | 1
-rw-r--r--  drivers/cpuidle/governors/haltpoll.c | 150
-rw-r--r--  drivers/cpuidle/governors/ladder.c | 21
-rw-r--r--  drivers/cpuidle/governors/menu.c | 21
-rw-r--r--  drivers/cpuidle/governors/teo.c | 60
-rw-r--r--  drivers/cpuidle/poll_state.c | 11
-rw-r--r--  drivers/cpuidle/sysfs.c | 7
-rw-r--r--  drivers/devfreq/Kconfig | 19
-rw-r--r--  drivers/devfreq/Makefile | 3
-rw-r--r--  drivers/devfreq/devfreq.c | 12
-rw-r--r--  drivers/devfreq/event/exynos-ppmu.c | 104
-rw-r--r--  drivers/devfreq/exynos-bus.c | 153
-rw-r--r--  drivers/devfreq/governor_passive.c | 7
-rw-r--r--  drivers/devfreq/rk3399_dmc.c | 2
-rw-r--r--  drivers/devfreq/tegra20-devfreq.c | 212
-rw-r--r--  drivers/devfreq/tegra30-devfreq.c (renamed from drivers/devfreq/tegra-devfreq.c) | 315
-rw-r--r--  drivers/macintosh/windfarm_cpufreq_clamp.c | 77
-rw-r--r--  drivers/opp/core.c | 85
-rw-r--r--  drivers/opp/of.c | 30
-rw-r--r--  drivers/platform/x86/intel-hid.c | 36
-rw-r--r--  drivers/platform/x86/intel-vbtn.c | 20
-rw-r--r--  drivers/powercap/idle_inject.c | 53
-rw-r--r--  drivers/thermal/cpu_cooling.c | 110
-rw-r--r--  drivers/thermal/intel/intel_powerclamp.c | 2
-rw-r--r--  drivers/video/fbdev/pxafb.c | 21
-rw-r--r--  drivers/video/fbdev/pxafb.h | 1
-rw-r--r--  drivers/video/fbdev/sa1100fb.c | 27
-rw-r--r--  drivers/video/fbdev/sa1100fb.h | 1
-rw-r--r--  fs/eventpoll.c | 4
-rw-r--r--  include/acpi/acpixf.h | 8
-rw-r--r--  include/acpi/processor.h | 26
-rw-r--r--  include/linux/acpi.h | 4
-rw-r--r--  include/linux/cpu.h | 2
-rw-r--r--  include/linux/cpufreq.h | 4
-rw-r--r--  include/linux/cpuidle.h | 10
-rw-r--r--  include/linux/cpuidle_haltpoll.h | 16
-rw-r--r--  include/linux/devfreq-event.h | 6
-rw-r--r--  include/linux/idle_inject.h | 8
-rw-r--r--  include/linux/interrupt.h | 1
-rw-r--r--  include/linux/pm.h | 4
-rw-r--r--  include/linux/pm_domain.h | 16
-rw-r--r--  include/linux/pm_opp.h | 12
-rw-r--r--  include/linux/pm_qos.h | 6
-rw-r--r--  include/linux/pm_wakeup.h | 21
-rw-r--r--  include/linux/suspend.h | 4
-rw-r--r--  include/trace/events/power.h | 8
-rw-r--r--  kernel/irq/pm.c | 20
-rw-r--r--  kernel/power/autosleep.c | 2
-rw-r--r--  kernel/power/main.c | 99
-rw-r--r--  kernel/power/qos.c | 48
-rw-r--r--  kernel/power/suspend.c | 65
-rw-r--r--  kernel/power/wakelock.c | 32
-rw-r--r--  kernel/sched/cpufreq_schedutil.c | 7
-rw-r--r--  kernel/sched/idle.c | 7
-rw-r--r--  kernel/time/alarmtimer.c | 2
-rw-r--r--  tools/power/cpupower/Makefile | 14
-rw-r--r--  tools/power/cpupower/bench/cpufreq-bench_plot.sh | 2
-rw-r--r--  tools/power/cpupower/bench/cpufreq-bench_script.sh | 2
-rw-r--r--  tools/power/cpupower/po/de.po | 344
-rw-r--r--  tools/power/pm-graph/README | 6
-rwxr-xr-x  tools/power/pm-graph/bootgraph.py | 59
-rw-r--r--  tools/power/pm-graph/sleepgraph.8 | 8
-rwxr-xr-x  tools/power/pm-graph/sleepgraph.py | 618
118 files changed, 4052 insertions, 1924 deletions
diff --git a/Documentation/ABI/testing/sysfs-class-wakeup b/Documentation/ABI/testing/sysfs-class-wakeup
new file mode 100644
index 000000000000..754aab8b6dcd
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-wakeup
@@ -0,0 +1,76 @@
1What: /sys/class/wakeup/
2Date: June 2019
3Contact: Tri Vo <trong@android.com>
4Description:
5 The /sys/class/wakeup/ directory contains pointers to all
6 wakeup sources in the kernel at that moment in time.
7
8What: /sys/class/wakeup/.../name
9Date: June 2019
10Contact: Tri Vo <trong@android.com>
11Description:
12 This file contains the name of the wakeup source.
13
14What: /sys/class/wakeup/.../active_count
15Date: June 2019
16Contact: Tri Vo <trong@android.com>
17Description:
18 This file contains the number of times the wakeup source was
19 activated.
20
21What: /sys/class/wakeup/.../event_count
22Date: June 2019
23Contact: Tri Vo <trong@android.com>
24Description:
25 This file contains the number of signaled wakeup events
26 associated with the wakeup source.
27
28What: /sys/class/wakeup/.../wakeup_count
29Date: June 2019
30Contact: Tri Vo <trong@android.com>
31Description:
32 This file contains the number of times the wakeup source might
33 abort suspend.
34
35What: /sys/class/wakeup/.../expire_count
36Date: June 2019
37Contact: Tri Vo <trong@android.com>
38Description:
39 This file contains the number of times the wakeup source's
40 timeout has expired.
41
42What: /sys/class/wakeup/.../active_time_ms
43Date: June 2019
44Contact: Tri Vo <trong@android.com>
45Description:
46 This file contains the amount of time the wakeup source has
47 been continuously active, in milliseconds. If the wakeup
48 source is not active, this file contains '0'.
49
50What: /sys/class/wakeup/.../total_time_ms
51Date: June 2019
52Contact: Tri Vo <trong@android.com>
53Description:
54 This file contains the total amount of time this wakeup source
55 has been active, in milliseconds.
56
57What: /sys/class/wakeup/.../max_time_ms
58Date: June 2019
59Contact: Tri Vo <trong@android.com>
60Description:
61 This file contains the maximum amount of time this wakeup
62 source has been continuously active, in milliseconds.
63
64What: /sys/class/wakeup/.../last_change_ms
65Date: June 2019
66Contact: Tri Vo <trong@android.com>
67Description:
68 This file contains the monotonic clock time when the wakeup
69 source was touched last time, in milliseconds.
70
71What: /sys/class/wakeup/.../prevent_suspend_time_ms
72Date: June 2019
73Contact: Tri Vo <trong@android.com>
74Description:
75 The file contains the total amount of time this wakeup source
76 has been preventing autosleep, in milliseconds.
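The attributes documented above are plain text files, so they can be inspected with ordinary file I/O. A minimal, hypothetical userspace reader (the program structure and output format are illustrative and not part of the ABI) that walks /sys/class/wakeup/ and prints a few of the documented counters might look like:

	/* Sketch: list wakeup sources and a few of their counters. */
	#include <dirent.h>
	#include <stdio.h>

	static void print_attr(const char *ws, const char *attr)
	{
		char path[512], buf[128];
		FILE *f;

		snprintf(path, sizeof(path), "/sys/class/wakeup/%s/%s", ws, attr);
		f = fopen(path, "r");
		if (!f)
			return;
		if (fgets(buf, sizeof(buf), f))
			printf("  %s: %s", attr, buf);
		fclose(f);
	}

	int main(void)
	{
		DIR *d = opendir("/sys/class/wakeup");
		struct dirent *de;

		if (!d)
			return 1;
		while ((de = readdir(d))) {
			if (de->d_name[0] == '.')
				continue;
			printf("%s\n", de->d_name);
			print_attr(de->d_name, "name");
			print_attr(de->d_name, "active_count");
			print_attr(de->d_name, "active_time_ms");
		}
		closedir(d);
		return 0;
	}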
diff --git a/Documentation/ABI/testing/sysfs-devices-power b/Documentation/ABI/testing/sysfs-devices-power
index 80a00f7b6667..1763e64dd152 100644
--- a/Documentation/ABI/testing/sysfs-devices-power
+++ b/Documentation/ABI/testing/sysfs-devices-power
@@ -260,3 +260,12 @@ Description:
260 260
261 This attribute has no effect on system-wide suspend/resume and 261 This attribute has no effect on system-wide suspend/resume and
262 hibernation. 262 hibernation.
263
264What: /sys/devices/.../power/runtime_status
265Date: April 2010
266Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
267Description:
268 The /sys/devices/.../power/runtime_status attribute contains
269 the current runtime PM status of the device, which may be
270 "suspended", "suspending", "resuming", "active", "error" (fatal
271 error), or "unsupported" (runtime PM is disabled).
diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index 3c5130355011..6f87b9dd384b 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -301,3 +301,109 @@ Description:
301 301
302 Using this sysfs file will override any values that were 302 Using this sysfs file will override any values that were
303 set using the kernel command line for disk offset. 303 set using the kernel command line for disk offset.
304
305What: /sys/power/suspend_stats
306Date: July 2019
307Contact: Kalesh Singh <kaleshsingh96@gmail.com>
308Description:
309 The /sys/power/suspend_stats directory contains suspend related
310 statistics.
311
312What: /sys/power/suspend_stats/success
313Date: July 2019
314Contact: Kalesh Singh <kaleshsingh96@gmail.com>
315Description:
316 The /sys/power/suspend_stats/success file contains the number
317 of times entering system sleep state succeeded.
318
319What: /sys/power/suspend_stats/fail
320Date: July 2019
321Contact: Kalesh Singh <kaleshsingh96@gmail.com>
322Description:
323 The /sys/power/suspend_stats/fail file contains the number
324 of times entering system sleep state failed.
325
326What: /sys/power/suspend_stats/failed_freeze
327Date: July 2019
328Contact: Kalesh Singh <kaleshsingh96@gmail.com>
329Description:
330 The /sys/power/suspend_stats/failed_freeze file contains the
331 number of times freezing processes failed.
332
333What: /sys/power/suspend_stats/failed_prepare
334Date: July 2019
335Contact: Kalesh Singh <kaleshsingh96@gmail.com>
336Description:
337 The /sys/power/suspend_stats/failed_prepare file contains the
338 number of times preparing all non-sysdev devices for
339 a system PM transition failed.
340
341What: /sys/power/suspend_stats/failed_resume
342Date: July 2019
343Contact: Kalesh Singh <kaleshsingh96@gmail.com>
344Description:
345 The /sys/power/suspend_stats/failed_resume file contains the
346 number of times executing "resume" callbacks of
347 non-sysdev devices failed.
348
349What: /sys/power/suspend_stats/failed_resume_early
350Date: July 2019
351Contact: Kalesh Singh <kaleshsingh96@gmail.com>
352Description:
353 The /sys/power/suspend_stats/failed_resume_early file contains
354 the number of times executing "early resume" callbacks
355 of devices failed.
356
357What: /sys/power/suspend_stats/failed_resume_noirq
358Date: July 2019
359Contact: Kalesh Singh <kaleshsingh96@gmail.com>
360Description:
361 The /sys/power/suspend_stats/failed_resume_noirq file contains
362 the number of times executing "noirq resume" callbacks
363 of devices failed.
364
365What: /sys/power/suspend_stats/failed_suspend
366Date: July 2019
367Contact: Kalesh Singh <kaleshsingh96@gmail.com>
368Description:
369 The /sys/power/suspend_stats/failed_suspend file contains
370 the number of times executing "suspend" callbacks
371 of all non-sysdev devices failed.
372
373What: /sys/power/suspend_stats/failed_suspend_late
374Date: July 2019
375Contact: Kalesh Singh <kaleshsingh96@gmail.com>
376Description:
377 The /sys/power/suspend_stats/failed_suspend_late file contains
378 the number of times executing "late suspend" callbacks
379 of all devices failed.
380
381What: /sys/power/suspend_stats/failed_suspend_noirq
382Date: July 2019
383Contact: Kalesh Singh <kaleshsingh96@gmail.com>
384Description:
385 The /sys/power/suspend_stats/failed_suspend_noirq file contains
386 the number of times executing "noirq suspend" callbacks
387 of all devices failed.
388
389What: /sys/power/suspend_stats/last_failed_dev
390Date: July 2019
391Contact: Kalesh Singh <kaleshsingh96@gmail.com>
392Description:
393 The /sys/power/suspend_stats/last_failed_dev file contains
394 the last device for which a suspend/resume callback failed.
395
396What: /sys/power/suspend_stats/last_failed_errno
397Date: July 2019
398Contact: Kalesh Singh <kaleshsingh96@gmail.com>
399Description:
400 The /sys/power/suspend_stats/last_failed_errno file contains
401 the errno of the last failed attempt at entering
402 system sleep state.
403
404What: /sys/power/suspend_stats/last_failed_step
405Date: July 2019
406Contact: Kalesh Singh <kaleshsingh96@gmail.com>
407Description:
408 The /sys/power/suspend_stats/last_failed_step file contains
409 the last failed step in the suspend/resume path.
diff --git a/Documentation/cpu-freq/core.txt b/Documentation/cpu-freq/core.txt
index 55193e680250..ed577d9c154b 100644
--- a/Documentation/cpu-freq/core.txt
+++ b/Documentation/cpu-freq/core.txt
@@ -57,19 +57,11 @@ transition notifiers.
572.1 CPUFreq policy notifiers 572.1 CPUFreq policy notifiers
58---------------------------- 58----------------------------
59 59
60These are notified when a new policy is intended to be set. Each 60These are notified when a new policy is created or removed.
61CPUFreq policy notifier is called twice for a policy transition:
62 61
631.) During CPUFREQ_ADJUST all CPUFreq notifiers may change the limit if 62The phase is specified in the second argument to the notifier. The phase is
64 they see a need for this - may it be thermal considerations or 63CPUFREQ_CREATE_POLICY when the policy is first created and it is
65 hardware limitations. 64CPUFREQ_REMOVE_POLICY when the policy is removed.
66
672.) And during CPUFREQ_NOTIFY all notifiers are informed of the new policy
68 - if two hardware drivers failed to agree on a new policy before this
69 stage, the incompatible hardware shall be shut down, and the user
70 informed of this.
71
72The phase is specified in the second argument to the notifier.
73 65
74The third argument, a void *pointer, points to a struct cpufreq_policy 66The third argument, a void *pointer, points to a struct cpufreq_policy
75consisting of several values, including min, max (the lower and upper 67consisting of several values, including min, max (the lower and upper
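The ACPI processor driver changes later in this series follow exactly this updated contract: they register a CPUFREQ_POLICY_NOTIFIER and act only on the create/remove events. A minimal sketch of such a notifier (all names here are illustrative, not from the patch) is:

	#include <linux/cpufreq.h>

	static int my_policy_notifier(struct notifier_block *nb,
				      unsigned long event, void *data)
	{
		struct cpufreq_policy *policy = data;

		if (event == CPUFREQ_CREATE_POLICY)
			pr_debug("policy created for CPU%u\n", policy->cpu);
		else if (event == CPUFREQ_REMOVE_POLICY)
			pr_debug("policy removed for CPU%u\n", policy->cpu);

		return 0;
	}

	static struct notifier_block my_policy_nb = {
		.notifier_call = my_policy_notifier,
	};

	/* In driver init:
	 *	cpufreq_register_notifier(&my_policy_nb, CPUFREQ_POLICY_NOTIFIER);
	 */

Per-policy setup (e.g. adding QoS requests) belongs in the CPUFREQ_CREATE_POLICY branch and its teardown in the CPUFREQ_REMOVE_POLICY branch; there is no longer a hook for adjusting the limits of an existing policy from a notifier.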
diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
index 76b6c79604a5..68592271461f 100644
--- a/Documentation/devicetree/bindings/opp/opp.txt
+++ b/Documentation/devicetree/bindings/opp/opp.txt
@@ -140,8 +140,8 @@ Optional properties:
140 frequency for a short duration of time limited by the device's power, current 140 frequency for a short duration of time limited by the device's power, current
141 and thermal limits. 141 and thermal limits.
142 142
143- opp-suspend: Marks the OPP to be used during device suspend. Only one OPP in 143- opp-suspend: Marks the OPP to be used during device suspend. If multiple OPPs
144 the table should have this. 144 in the table have this, the OPP with highest opp-hz will be used.
145 145
146- opp-supported-hw: This enables us to select only a subset of OPPs from the 146- opp-supported-hw: This enables us to select only a subset of OPPs from the
147 larger OPP table, based on what version of the hardware we are running on. We 147 larger OPP table, based on what version of the hardware we are running on. We
diff --git a/Documentation/devicetree/bindings/opp/kryo-cpufreq.txt b/Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
index c2127b96805a..4751029b9b74 100644
--- a/Documentation/devicetree/bindings/opp/kryo-cpufreq.txt
+++ b/Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
@@ -1,25 +1,38 @@
1Qualcomm Technologies, Inc. KRYO CPUFreq and OPP bindings 1Qualcomm Technologies, Inc. NVMEM CPUFreq and OPP bindings
2=================================== 2===================================
3 3
4In Certain Qualcomm Technologies, Inc. SoCs like apq8096 and msm8996 4In Certain Qualcomm Technologies, Inc. SoCs like apq8096 and msm8996,
5that have KRYO processors, the CPU ferequencies subset and voltage value 5the CPU frequencies subset and voltage value of each OPP varies based on
6of each OPP varies based on the silicon variant in use. 6the silicon variant in use.
7Qualcomm Technologies, Inc. Process Voltage Scaling Tables 7Qualcomm Technologies, Inc. Process Voltage Scaling Tables
8defines the voltage and frequency value based on the msm-id in SMEM 8defines the voltage and frequency value based on the msm-id in SMEM
9and speedbin blown in the efuse combination. 9and speedbin blown in the efuse combination.
10The qcom-cpufreq-kryo driver reads the msm-id and efuse value from the SoC 10The qcom-cpufreq-nvmem driver reads the msm-id and efuse value from the SoC
11to provide the OPP framework with required information (existing HW bitmap). 11to provide the OPP framework with required information (existing HW bitmap).
12This is used to determine the voltage and frequency value for each OPP of 12This is used to determine the voltage and frequency value for each OPP of
13operating-points-v2 table when it is parsed by the OPP framework. 13operating-points-v2 table when it is parsed by the OPP framework.
14 14
15Required properties: 15Required properties:
16-------------------- 16--------------------
17In 'cpus' nodes: 17In 'cpu' nodes:
18- operating-points-v2: Phandle to the operating-points-v2 table to use. 18- operating-points-v2: Phandle to the operating-points-v2 table to use.
19 19
20In 'operating-points-v2' table: 20In 'operating-points-v2' table:
21- compatible: Should be 21- compatible: Should be
22 - 'operating-points-v2-kryo-cpu' for apq8096 and msm8996. 22 - 'operating-points-v2-kryo-cpu' for apq8096 and msm8996.
23
24Optional properties:
25--------------------
26In 'cpu' nodes:
27- power-domains: A phandle pointing to the PM domain specifier which provides
28 the performance states available for active state management.
29 Please refer to the power-domains bindings
30 Documentation/devicetree/bindings/power/power_domain.txt
31 and also examples below.
32- power-domain-names: Should be
33 - 'cpr' for qcs404.
34
35In 'operating-points-v2' table:
23- nvmem-cells: A phandle pointing to a nvmem-cells node representing the 36- nvmem-cells: A phandle pointing to a nvmem-cells node representing the
24 efuse registers that has information about the 37 efuse registers that has information about the
25 speedbin that is used to select the right frequency/voltage 38 speedbin that is used to select the right frequency/voltage
@@ -678,3 +691,105 @@ soc {
678 }; 691 };
679 }; 692 };
680}; 693};
694
695Example 2:
696---------
697
698 cpus {
699 #address-cells = <1>;
700 #size-cells = <0>;
701
702 CPU0: cpu@100 {
703 device_type = "cpu";
704 compatible = "arm,cortex-a53";
705 reg = <0x100>;
706 ....
707 clocks = <&apcs_glb>;
708 operating-points-v2 = <&cpu_opp_table>;
709 power-domains = <&cpr>;
710 power-domain-names = "cpr";
711 };
712
713 CPU1: cpu@101 {
714 device_type = "cpu";
715 compatible = "arm,cortex-a53";
716 reg = <0x101>;
717 ....
718 clocks = <&apcs_glb>;
719 operating-points-v2 = <&cpu_opp_table>;
720 power-domains = <&cpr>;
721 power-domain-names = "cpr";
722 };
723
724 CPU2: cpu@102 {
725 device_type = "cpu";
726 compatible = "arm,cortex-a53";
727 reg = <0x102>;
728 ....
729 clocks = <&apcs_glb>;
730 operating-points-v2 = <&cpu_opp_table>;
731 power-domains = <&cpr>;
732 power-domain-names = "cpr";
733 };
734
735 CPU3: cpu@103 {
736 device_type = "cpu";
737 compatible = "arm,cortex-a53";
738 reg = <0x103>;
739 ....
740 clocks = <&apcs_glb>;
741 operating-points-v2 = <&cpu_opp_table>;
742 power-domains = <&cpr>;
743 power-domain-names = "cpr";
744 };
745 };
746
747 cpu_opp_table: cpu-opp-table {
748 compatible = "operating-points-v2-kryo-cpu";
749 opp-shared;
750
751 opp-1094400000 {
752 opp-hz = /bits/ 64 <1094400000>;
753 required-opps = <&cpr_opp1>;
754 };
755 opp-1248000000 {
756 opp-hz = /bits/ 64 <1248000000>;
757 required-opps = <&cpr_opp2>;
758 };
759 opp-1401600000 {
760 opp-hz = /bits/ 64 <1401600000>;
761 required-opps = <&cpr_opp3>;
762 };
763 };
764
765 cpr_opp_table: cpr-opp-table {
766 compatible = "operating-points-v2-qcom-level";
767
768 cpr_opp1: opp1 {
769 opp-level = <1>;
770 qcom,opp-fuse-level = <1>;
771 };
772 cpr_opp2: opp2 {
773 opp-level = <2>;
774 qcom,opp-fuse-level = <2>;
775 };
776 cpr_opp3: opp3 {
777 opp-level = <3>;
778 qcom,opp-fuse-level = <3>;
779 };
780 };
781
782....
783
784soc {
785....
786 cpr: power-controller@b018000 {
787 compatible = "qcom,qcs404-cpr", "qcom,cpr";
788 reg = <0x0b018000 0x1000>;
789 ....
790 vdd-apc-supply = <&pms405_s3>;
791 #power-domain-cells = <0>;
792 operating-points-v2 = <&cpr_opp_table>;
793 ....
794 };
795};
diff --git a/Documentation/devicetree/bindings/opp/qcom-opp.txt b/Documentation/devicetree/bindings/opp/qcom-opp.txt
new file mode 100644
index 000000000000..32eb0793c7e6
--- /dev/null
+++ b/Documentation/devicetree/bindings/opp/qcom-opp.txt
@@ -0,0 +1,19 @@
1Qualcomm OPP bindings to describe OPP nodes
2
3The bindings are based on top of the operating-points-v2 bindings
4described in Documentation/devicetree/bindings/opp/opp.txt
5Additional properties are described below.
6
7* OPP Table Node
8
9Required properties:
10- compatible: Allow OPPs to express their compatibility. It should be:
11 "operating-points-v2-qcom-level"
12
13* OPP Node
14
15Required properties:
16- qcom,opp-fuse-level: A positive value representing the fuse corner/level
17 associated with this OPP node. Sometimes several corners/levels shares
18 a certain fuse corner/level. A fuse corner/level contains e.g. ref uV,
19 min uV, and max uV.
diff --git a/Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt b/Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt
new file mode 100644
index 000000000000..7deae57a587b
--- /dev/null
+++ b/Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt
@@ -0,0 +1,167 @@
1Allwinner Technologies, Inc. NVMEM CPUFreq and OPP bindings
2===================================
3
4For some SoCs, the CPU frequency subset and voltage value of each OPP
5varies based on the silicon variant in use. Allwinner Process Voltage
6Scaling Tables defines the voltage and frequency value based on the
7speedbin blown in the efuse combination. The sun50i-cpufreq-nvmem driver
8reads the efuse value from the SoC to provide the OPP framework with
9required information.
10
11Required properties:
12--------------------
13In 'cpus' nodes:
14- operating-points-v2: Phandle to the operating-points-v2 table to use.
15
16In 'operating-points-v2' table:
17- compatible: Should be
18 - 'allwinner,sun50i-h6-operating-points'.
19- nvmem-cells: A phandle pointing to a nvmem-cells node representing the
20 efuse registers that has information about the speedbin
21 that is used to select the right frequency/voltage value
22 pair. Please refer the for nvmem-cells bindings
23 Documentation/devicetree/bindings/nvmem/nvmem.txt and
24 also examples below.
25
26In every OPP node:
27- opp-microvolt-<name>: Voltage in micro Volts.
28 At runtime, the platform can pick a <name> and
29 matching opp-microvolt-<name> property.
30 [See: opp.txt]
31 HW: <name>:
32 sun50i-h6 speed0 speed1 speed2
33
34Example 1:
35---------
36
37 cpus {
38 #address-cells = <1>;
39 #size-cells = <0>;
40
41 cpu0: cpu@0 {
42 compatible = "arm,cortex-a53";
43 device_type = "cpu";
44 reg = <0>;
45 enable-method = "psci";
46 clocks = <&ccu CLK_CPUX>;
47 clock-latency-ns = <244144>; /* 8 32k periods */
48 operating-points-v2 = <&cpu_opp_table>;
49 #cooling-cells = <2>;
50 };
51
52 cpu1: cpu@1 {
53 compatible = "arm,cortex-a53";
54 device_type = "cpu";
55 reg = <1>;
56 enable-method = "psci";
57 clocks = <&ccu CLK_CPUX>;
58 clock-latency-ns = <244144>; /* 8 32k periods */
59 operating-points-v2 = <&cpu_opp_table>;
60 #cooling-cells = <2>;
61 };
62
63 cpu2: cpu@2 {
64 compatible = "arm,cortex-a53";
65 device_type = "cpu";
66 reg = <2>;
67 enable-method = "psci";
68 clocks = <&ccu CLK_CPUX>;
69 clock-latency-ns = <244144>; /* 8 32k periods */
70 operating-points-v2 = <&cpu_opp_table>;
71 #cooling-cells = <2>;
72 };
73
74 cpu3: cpu@3 {
75 compatible = "arm,cortex-a53";
76 device_type = "cpu";
77 reg = <3>;
78 enable-method = "psci";
79 clocks = <&ccu CLK_CPUX>;
80 clock-latency-ns = <244144>; /* 8 32k periods */
81 operating-points-v2 = <&cpu_opp_table>;
82 #cooling-cells = <2>;
83 };
84 };
85
86 cpu_opp_table: opp_table {
87 compatible = "allwinner,sun50i-h6-operating-points";
88 nvmem-cells = <&speedbin_efuse>;
89 opp-shared;
90
91 opp@480000000 {
92 clock-latency-ns = <244144>; /* 8 32k periods */
93 opp-hz = /bits/ 64 <480000000>;
94
95 opp-microvolt-speed0 = <880000>;
96 opp-microvolt-speed1 = <820000>;
97 opp-microvolt-speed2 = <800000>;
98 };
99
100 opp@720000000 {
101 clock-latency-ns = <244144>; /* 8 32k periods */
102 opp-hz = /bits/ 64 <720000000>;
103
104 opp-microvolt-speed0 = <880000>;
105 opp-microvolt-speed1 = <820000>;
106 opp-microvolt-speed2 = <800000>;
107 };
108
109 opp@816000000 {
110 clock-latency-ns = <244144>; /* 8 32k periods */
111 opp-hz = /bits/ 64 <816000000>;
112
113 opp-microvolt-speed0 = <880000>;
114 opp-microvolt-speed1 = <820000>;
115 opp-microvolt-speed2 = <800000>;
116 };
117
118 opp@888000000 {
119 clock-latency-ns = <244144>; /* 8 32k periods */
120 opp-hz = /bits/ 64 <888000000>;
121
122 opp-microvolt-speed0 = <940000>;
123 opp-microvolt-speed1 = <820000>;
124 opp-microvolt-speed2 = <800000>;
125 };
126
127 opp@1080000000 {
128 clock-latency-ns = <244144>; /* 8 32k periods */
129 opp-hz = /bits/ 64 <1080000000>;
130
131 opp-microvolt-speed0 = <1060000>;
132 opp-microvolt-speed1 = <880000>;
133 opp-microvolt-speed2 = <840000>;
134 };
135
136 opp@1320000000 {
137 clock-latency-ns = <244144>; /* 8 32k periods */
138 opp-hz = /bits/ 64 <1320000000>;
139
140 opp-microvolt-speed0 = <1160000>;
141 opp-microvolt-speed1 = <940000>;
142 opp-microvolt-speed2 = <900000>;
143 };
144
145 opp@1488000000 {
146 clock-latency-ns = <244144>; /* 8 32k periods */
147 opp-hz = /bits/ 64 <1488000000>;
148
149 opp-microvolt-speed0 = <1160000>;
150 opp-microvolt-speed1 = <1000000>;
151 opp-microvolt-speed2 = <960000>;
152 };
153 };
154....
155soc {
156....
157 sid: sid@3006000 {
158 compatible = "allwinner,sun50i-h6-sid";
159 reg = <0x03006000 0x400>;
160 #address-cells = <1>;
161 #size-cells = <1>;
162 ....
163 speedbin_efuse: speed@1c {
164 reg = <0x1c 4>;
165 };
166 };
167};
diff --git a/Documentation/power/opp.rst b/Documentation/power/opp.rst
index b3cf1def9dee..209c7613f5a4 100644
--- a/Documentation/power/opp.rst
+++ b/Documentation/power/opp.rst
@@ -46,7 +46,7 @@ We can represent these as three OPPs as the following {Hz, uV} tuples:
46---------------------------------------- 46----------------------------------------
47 47
48OPP library provides a set of helper functions to organize and query the OPP 48OPP library provides a set of helper functions to organize and query the OPP
49information. The library is located in drivers/base/power/opp.c and the header 49information. The library is located in drivers/opp/ directory and the header
50is located in include/linux/pm_opp.h. OPP library can be enabled by enabling 50is located in include/linux/pm_opp.h. OPP library can be enabled by enabling
51CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on 51CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on
52CONFIG_PM as certain SoCs such as Texas Instrument's OMAP framework allows to 52CONFIG_PM as certain SoCs such as Texas Instrument's OMAP framework allows to
diff --git a/Documentation/power/pm_qos_interface.rst b/Documentation/power/pm_qos_interface.rst
index 69921f072ce1..3097694fba69 100644
--- a/Documentation/power/pm_qos_interface.rst
+++ b/Documentation/power/pm_qos_interface.rst
@@ -7,8 +7,7 @@ performance expectations by drivers, subsystems and user space applications on
7one of the parameters. 7one of the parameters.
8 8
9Two different PM QoS frameworks are available: 9Two different PM QoS frameworks are available:
101. PM QoS classes for cpu_dma_latency, network_latency, network_throughput, 101. PM QoS classes for cpu_dma_latency
11memory_bandwidth.
122. the per-device PM QoS framework provides the API to manage the per-device latency 112. the per-device PM QoS framework provides the API to manage the per-device latency
13constraints and PM QoS flags. 12constraints and PM QoS flags.
14 13
@@ -79,7 +78,7 @@ cleanup of a process, the interface requires the process to register its
79parameter requests in the following way: 78parameter requests in the following way:
80 79
81To register the default pm_qos target for the specific parameter, the process 80To register the default pm_qos target for the specific parameter, the process
82must open one of /dev/[cpu_dma_latency, network_latency, network_throughput] 81must open /dev/cpu_dma_latency
83 82
84As long as the device node is held open that process has a registered 83As long as the device node is held open that process has a registered
85request on the parameter. 84request on the parameter.
diff --git a/Documentation/virtual/guest-halt-polling.txt b/Documentation/virtual/guest-halt-polling.txt
new file mode 100644
index 000000000000..b3a2a294532d
--- /dev/null
+++ b/Documentation/virtual/guest-halt-polling.txt
@@ -0,0 +1,78 @@
1Guest halt polling
2==================
3
4The cpuidle_haltpoll driver, with the haltpoll governor, allows
5the guest vcpus to poll for a specified amount of time before
6halting.
7This provides the following benefits to host side polling:
8
9 1) The POLL flag is set while polling is performed, which allows
10 a remote vCPU to avoid sending an IPI (and the associated
11 cost of handling the IPI) when performing a wakeup.
12
13 2) The VM-exit cost can be avoided.
14
15The downside of guest side polling is that polling is performed
16even with other runnable tasks in the host.
17
18The basic logic as follows: A global value, guest_halt_poll_ns,
19is configured by the user, indicating the maximum amount of
20time polling is allowed. This value is fixed.
21
22Each vcpu has an adjustable guest_halt_poll_ns
23("per-cpu guest_halt_poll_ns"), which is adjusted by the algorithm
24in response to events (explained below).
25
26Module Parameters
27=================
28
29The haltpoll governor has 5 tunable module parameters:
30
311) guest_halt_poll_ns:
32Maximum amount of time, in nanoseconds, that polling is
33performed before halting.
34
35Default: 200000
36
372) guest_halt_poll_shrink:
38Division factor used to shrink per-cpu guest_halt_poll_ns when
39wakeup event occurs after the global guest_halt_poll_ns.
40
41Default: 2
42
433) guest_halt_poll_grow:
44Multiplication factor used to grow per-cpu guest_halt_poll_ns
45when event occurs after per-cpu guest_halt_poll_ns
46but before global guest_halt_poll_ns.
47
48Default: 2
49
504) guest_halt_poll_grow_start:
51The per-cpu guest_halt_poll_ns eventually reaches zero
52in case of an idle system. This value sets the initial
53per-cpu guest_halt_poll_ns when growing. This can
54be increased from 10000, to avoid misses during the initial
55growth stage:
56
5710k, 20k, 40k, ... (example assumes guest_halt_poll_grow=2).
58
59Default: 50000
60
615) guest_halt_poll_allow_shrink:
62
63Bool parameter which allows shrinking. Set to N
64to avoid it (per-cpu guest_halt_poll_ns will remain
65high once achieves global guest_halt_poll_ns value).
66
67Default: Y
68
69The module parameters can be set from the debugfs files in:
70
71 /sys/module/haltpoll/parameters/
72
73Further Notes
74=============
75
76- Care should be taken when setting the guest_halt_poll_ns parameter as a
77large value has the potential to drive the cpu usage to 100% on a machine which
78would be almost entirely idle otherwise.
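Since the parameters above are exposed as writable files under /sys/module/haltpoll/parameters/, a small hypothetical helper can adjust the global polling window from userspace; the path is taken from the text above, everything else is illustrative:

	#include <stdio.h>

	/* Set the global polling window (in nanoseconds) for the haltpoll governor. */
	static int set_guest_halt_poll_ns(unsigned long ns)
	{
		FILE *f = fopen("/sys/module/haltpoll/parameters/guest_halt_poll_ns", "w");

		if (!f)
			return -1;
		fprintf(f, "%lu\n", ns);
		return fclose(f);
	}

	int main(void)
	{
		/* Example: shrink the window to 100 microseconds. */
		return set_guest_halt_poll_ns(100000) ? 1 : 0;
	}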
diff --git a/MAINTAINERS b/MAINTAINERS
index 7e9a71256079..2ecc1de1910d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -668,6 +668,13 @@ L: linux-media@vger.kernel.org
668S: Maintained 668S: Maintained
669F: drivers/staging/media/allegro-dvt/ 669F: drivers/staging/media/allegro-dvt/
670 670
671ALLWINNER CPUFREQ DRIVER
672M: Yangtao Li <tiny.windzz@gmail.com>
673L: linux-pm@vger.kernel.org
674S: Maintained
675F: Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt
676F: drivers/cpufreq/sun50i-cpufreq-nvmem.c
677
671ALLWINNER SECURITY SYSTEM 678ALLWINNER SECURITY SYSTEM
672M: Corentin Labbe <clabbe.montjoie@gmail.com> 679M: Corentin Labbe <clabbe.montjoie@gmail.com>
673L: linux-crypto@vger.kernel.org 680L: linux-crypto@vger.kernel.org
@@ -13311,8 +13318,8 @@ QUALCOMM CPUFREQ DRIVER MSM8996/APQ8096
13311M: Ilia Lin <ilia.lin@kernel.org> 13318M: Ilia Lin <ilia.lin@kernel.org>
13312L: linux-pm@vger.kernel.org 13319L: linux-pm@vger.kernel.org
13313S: Maintained 13320S: Maintained
13314F: Documentation/devicetree/bindings/opp/kryo-cpufreq.txt 13321F: Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
13315F: drivers/cpufreq/qcom-cpufreq-kryo.c 13322F: drivers/cpufreq/qcom-cpufreq-nvmem.c
13316 13323
13317QUALCOMM EMAC GIGABIT ETHERNET DRIVER 13324QUALCOMM EMAC GIGABIT ETHERNET DRIVER
13318M: Timur Tabi <timur@kernel.org> 13325M: Timur Tabi <timur@kernel.org>
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 58eae28c3dd6..4195f44c6a09 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -794,6 +794,7 @@ config KVM_GUEST
794 bool "KVM Guest support (including kvmclock)" 794 bool "KVM Guest support (including kvmclock)"
795 depends on PARAVIRT 795 depends on PARAVIRT
796 select PARAVIRT_CLOCK 796 select PARAVIRT_CLOCK
797 select ARCH_CPUIDLE_HALTPOLL
797 default y 798 default y
798 ---help--- 799 ---help---
799 This option enables various optimizations for running under the KVM 800 This option enables various optimizations for running under the KVM
@@ -802,6 +803,12 @@ config KVM_GUEST
802 underlying device model, the host provides the guest with 803 underlying device model, the host provides the guest with
803 timing infrastructure such as time of day, and system time 804 timing infrastructure such as time of day, and system time
804 805
806config ARCH_CPUIDLE_HALTPOLL
807 def_bool n
808 prompt "Disable host haltpoll when loading haltpoll driver"
809 help
810 If virtualized under KVM, disable host haltpoll.
811
805config PVH 812config PVH
806 bool "Support for running PVH guests" 813 bool "Support for running PVH guests"
807 ---help--- 814 ---help---
diff --git a/arch/x86/include/asm/cpuidle_haltpoll.h b/arch/x86/include/asm/cpuidle_haltpoll.h
new file mode 100644
index 000000000000..c8b39c6716ff
--- /dev/null
+++ b/arch/x86/include/asm/cpuidle_haltpoll.h
@@ -0,0 +1,8 @@
1/* SPDX-License-Identifier: GPL-2.0 */
2#ifndef _ARCH_HALTPOLL_H
3#define _ARCH_HALTPOLL_H
4
5void arch_haltpoll_enable(unsigned int cpu);
6void arch_haltpoll_disable(unsigned int cpu);
7
8#endif
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 4cc967178bf9..b2f56602af65 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -705,6 +705,7 @@ unsigned int kvm_arch_para_hints(void)
705{ 705{
706 return cpuid_edx(kvm_cpuid_base() | KVM_CPUID_FEATURES); 706 return cpuid_edx(kvm_cpuid_base() | KVM_CPUID_FEATURES);
707} 707}
708EXPORT_SYMBOL_GPL(kvm_arch_para_hints);
708 709
709static uint32_t __init kvm_detect(void) 710static uint32_t __init kvm_detect(void)
710{ 711{
@@ -867,3 +868,39 @@ void __init kvm_spinlock_init(void)
867} 868}
868 869
869#endif /* CONFIG_PARAVIRT_SPINLOCKS */ 870#endif /* CONFIG_PARAVIRT_SPINLOCKS */
871
872#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
873
874static void kvm_disable_host_haltpoll(void *i)
875{
876 wrmsrl(MSR_KVM_POLL_CONTROL, 0);
877}
878
879static void kvm_enable_host_haltpoll(void *i)
880{
881 wrmsrl(MSR_KVM_POLL_CONTROL, 1);
882}
883
884void arch_haltpoll_enable(unsigned int cpu)
885{
886 if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL)) {
887 pr_err_once("kvm: host does not support poll control\n");
888 pr_err_once("kvm: host upgrade recommended\n");
889 return;
890 }
891
892 /* Enable guest halt poll disables host halt poll */
893 smp_call_function_single(cpu, kvm_disable_host_haltpoll, NULL, 1);
894}
895EXPORT_SYMBOL_GPL(arch_haltpoll_enable);
896
897void arch_haltpoll_disable(unsigned int cpu)
898{
899 if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL))
900 return;
901
902 /* Enable guest halt poll disables host halt poll */
903 smp_call_function_single(cpu, kvm_enable_host_haltpoll, NULL, 1);
904}
905EXPORT_SYMBOL_GPL(arch_haltpoll_disable);
906#endif
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 75fea0d48c0e..5e94c4354d4e 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -580,7 +580,7 @@ void __cpuidle default_idle(void)
580 safe_halt(); 580 safe_halt();
581 trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id()); 581 trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
582} 582}
583#ifdef CONFIG_APM_MODULE 583#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
584EXPORT_SYMBOL(default_idle); 584EXPORT_SYMBOL(default_idle);
585#endif 585#endif
586 586
diff --git a/drivers/acpi/acpica/evxfgpe.c b/drivers/acpi/acpica/evxfgpe.c
index 710488ec59e9..04a40d563dd6 100644
--- a/drivers/acpi/acpica/evxfgpe.c
+++ b/drivers/acpi/acpica/evxfgpe.c
@@ -644,17 +644,17 @@ ACPI_EXPORT_SYMBOL(acpi_get_gpe_status)
644 * PARAMETERS: gpe_device - Parent GPE Device. NULL for GPE0/GPE1 644 * PARAMETERS: gpe_device - Parent GPE Device. NULL for GPE0/GPE1
645 * gpe_number - GPE level within the GPE block 645 * gpe_number - GPE level within the GPE block
646 * 646 *
647 * RETURN: None 647 * RETURN: INTERRUPT_HANDLED or INTERRUPT_NOT_HANDLED
648 * 648 *
649 * DESCRIPTION: Detect and dispatch a General Purpose Event to either a function 649 * DESCRIPTION: Detect and dispatch a General Purpose Event to either a function
650 * (e.g. EC) or method (e.g. _Lxx/_Exx) handler. 650 * (e.g. EC) or method (e.g. _Lxx/_Exx) handler.
651 * 651 *
652 ******************************************************************************/ 652 ******************************************************************************/
653void acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number) 653u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number)
654{ 654{
655 ACPI_FUNCTION_TRACE(acpi_dispatch_gpe); 655 ACPI_FUNCTION_TRACE(acpi_dispatch_gpe);
656 656
657 acpi_ev_detect_gpe(gpe_device, NULL, gpe_number); 657 return acpi_ev_detect_gpe(gpe_device, NULL, gpe_number);
658} 658}
659 659
660ACPI_EXPORT_SYMBOL(acpi_dispatch_gpe) 660ACPI_EXPORT_SYMBOL(acpi_dispatch_gpe)
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index f616b16c1f0b..08bb9f2f2d23 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -166,6 +166,10 @@ int acpi_device_set_power(struct acpi_device *device, int state)
166 || (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD)) 166 || (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD))
167 return -EINVAL; 167 return -EINVAL;
168 168
169 acpi_handle_debug(device->handle, "Power state change: %s -> %s\n",
170 acpi_power_state_string(device->power.state),
171 acpi_power_state_string(state));
172
169 /* Make sure this is a valid target state */ 173 /* Make sure this is a valid target state */
170 174
171 /* There is a special case for D0 addressed below. */ 175 /* There is a special case for D0 addressed below. */
@@ -497,7 +501,8 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
497 goto out; 501 goto out;
498 502
499 mutex_lock(&acpi_pm_notifier_lock); 503 mutex_lock(&acpi_pm_notifier_lock);
500 adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev)); 504 adev->wakeup.ws = wakeup_source_register(&adev->dev,
505 dev_name(&adev->dev));
501 adev->wakeup.context.dev = dev; 506 adev->wakeup.context.dev = dev;
502 adev->wakeup.context.func = func; 507 adev->wakeup.context.func = func;
503 adev->wakeup.flags.notifier_present = true; 508 adev->wakeup.flags.notifier_present = true;
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index c33756ed3304..da1e5c5ce150 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -25,6 +25,7 @@
25#include <linux/list.h> 25#include <linux/list.h>
26#include <linux/spinlock.h> 26#include <linux/spinlock.h>
27#include <linux/slab.h> 27#include <linux/slab.h>
28#include <linux/suspend.h>
28#include <linux/acpi.h> 29#include <linux/acpi.h>
29#include <linux/dmi.h> 30#include <linux/dmi.h>
30#include <asm/io.h> 31#include <asm/io.h>
@@ -1048,24 +1049,6 @@ void acpi_ec_unblock_transactions(void)
1048 acpi_ec_start(first_ec, true); 1049 acpi_ec_start(first_ec, true);
1049} 1050}
1050 1051
1051void acpi_ec_mark_gpe_for_wake(void)
1052{
1053 if (first_ec && !ec_no_wakeup)
1054 acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
1055}
1056
1057void acpi_ec_set_gpe_wake_mask(u8 action)
1058{
1059 if (first_ec && !ec_no_wakeup)
1060 acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
1061}
1062
1063void acpi_ec_dispatch_gpe(void)
1064{
1065 if (first_ec)
1066 acpi_dispatch_gpe(NULL, first_ec->gpe);
1067}
1068
1069/* -------------------------------------------------------------------------- 1052/* --------------------------------------------------------------------------
1070 Event Management 1053 Event Management
1071 -------------------------------------------------------------------------- */ 1054 -------------------------------------------------------------------------- */
@@ -1931,7 +1914,7 @@ static int acpi_ec_suspend(struct device *dev)
1931 struct acpi_ec *ec = 1914 struct acpi_ec *ec =
1932 acpi_driver_data(to_acpi_device(dev)); 1915 acpi_driver_data(to_acpi_device(dev));
1933 1916
1934 if (acpi_sleep_no_ec_events() && ec_freeze_events) 1917 if (!pm_suspend_no_platform() && ec_freeze_events)
1935 acpi_ec_disable_event(ec); 1918 acpi_ec_disable_event(ec);
1936 return 0; 1919 return 0;
1937} 1920}
@@ -1948,8 +1931,7 @@ static int acpi_ec_suspend_noirq(struct device *dev)
1948 ec->reference_count >= 1) 1931 ec->reference_count >= 1)
1949 acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE); 1932 acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);
1950 1933
1951 if (acpi_sleep_no_ec_events()) 1934 acpi_ec_enter_noirq(ec);
1952 acpi_ec_enter_noirq(ec);
1953 1935
1954 return 0; 1936 return 0;
1955} 1937}
@@ -1958,8 +1940,7 @@ static int acpi_ec_resume_noirq(struct device *dev)
1958{ 1940{
1959 struct acpi_ec *ec = acpi_driver_data(to_acpi_device(dev)); 1941 struct acpi_ec *ec = acpi_driver_data(to_acpi_device(dev));
1960 1942
1961 if (acpi_sleep_no_ec_events()) 1943 acpi_ec_leave_noirq(ec);
1962 acpi_ec_leave_noirq(ec);
1963 1944
1964 if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) && 1945 if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) &&
1965 ec->reference_count >= 1) 1946 ec->reference_count >= 1)
@@ -1976,7 +1957,35 @@ static int acpi_ec_resume(struct device *dev)
1976 acpi_ec_enable_event(ec); 1957 acpi_ec_enable_event(ec);
1977 return 0; 1958 return 0;
1978} 1959}
1979#endif 1960
1961void acpi_ec_mark_gpe_for_wake(void)
1962{
1963 if (first_ec && !ec_no_wakeup)
1964 acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
1965}
1966EXPORT_SYMBOL_GPL(acpi_ec_mark_gpe_for_wake);
1967
1968void acpi_ec_set_gpe_wake_mask(u8 action)
1969{
1970 if (pm_suspend_no_platform() && first_ec && !ec_no_wakeup)
1971 acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
1972}
1973
1974bool acpi_ec_dispatch_gpe(void)
1975{
1976 u32 ret;
1977
1978 if (!first_ec)
1979 return false;
1980
1981 ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
1982 if (ret == ACPI_INTERRUPT_HANDLED) {
1983 pm_pr_dbg("EC GPE dispatched\n");
1984 return true;
1985 }
1986 return false;
1987}
1988#endif /* CONFIG_PM_SLEEP */
1980 1989
1981static const struct dev_pm_ops acpi_ec_pm = { 1990static const struct dev_pm_ops acpi_ec_pm = {
1982 SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend_noirq, acpi_ec_resume_noirq) 1991 SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend_noirq, acpi_ec_resume_noirq)
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index f4c2fe6be4f2..afe6636f9ad3 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -194,9 +194,6 @@ void acpi_ec_ecdt_probe(void);
194void acpi_ec_dsdt_probe(void); 194void acpi_ec_dsdt_probe(void);
195void acpi_ec_block_transactions(void); 195void acpi_ec_block_transactions(void);
196void acpi_ec_unblock_transactions(void); 196void acpi_ec_unblock_transactions(void);
197void acpi_ec_mark_gpe_for_wake(void);
198void acpi_ec_set_gpe_wake_mask(u8 action);
199void acpi_ec_dispatch_gpe(void);
200int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit, 197int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
201 acpi_handle handle, acpi_ec_query_func func, 198 acpi_handle handle, acpi_ec_query_func func,
202 void *data); 199 void *data);
@@ -204,6 +201,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
204 201
205#ifdef CONFIG_PM_SLEEP 202#ifdef CONFIG_PM_SLEEP
206void acpi_ec_flush_work(void); 203void acpi_ec_flush_work(void);
204bool acpi_ec_dispatch_gpe(void);
207#endif 205#endif
208 206
209 207
@@ -212,11 +210,9 @@ void acpi_ec_flush_work(void);
212 -------------------------------------------------------------------------- */ 210 -------------------------------------------------------------------------- */
213#ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT 211#ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT
214extern bool acpi_s2idle_wakeup(void); 212extern bool acpi_s2idle_wakeup(void);
215extern bool acpi_sleep_no_ec_events(void);
216extern int acpi_sleep_init(void); 213extern int acpi_sleep_init(void);
217#else 214#else
218static inline bool acpi_s2idle_wakeup(void) { return false; } 215static inline bool acpi_s2idle_wakeup(void) { return false; }
219static inline bool acpi_sleep_no_ec_events(void) { return true; }
220static inline int acpi_sleep_init(void) { return -ENXIO; } 216static inline int acpi_sleep_init(void) { return -ENXIO; }
221#endif 217#endif
222 218
diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c
index aea8d674a33d..08da9c29f1e9 100644
--- a/drivers/acpi/processor_driver.c
+++ b/drivers/acpi/processor_driver.c
@@ -284,6 +284,29 @@ static int acpi_processor_stop(struct device *dev)
284 return 0; 284 return 0;
285} 285}
286 286
287bool acpi_processor_cpufreq_init;
288
289static int acpi_processor_notifier(struct notifier_block *nb,
290 unsigned long event, void *data)
291{
292 struct cpufreq_policy *policy = data;
293 int cpu = policy->cpu;
294
295 if (event == CPUFREQ_CREATE_POLICY) {
296 acpi_thermal_cpufreq_init(cpu);
297 acpi_processor_ppc_init(cpu);
298 } else if (event == CPUFREQ_REMOVE_POLICY) {
299 acpi_processor_ppc_exit(cpu);
300 acpi_thermal_cpufreq_exit(cpu);
301 }
302
303 return 0;
304}
305
306static struct notifier_block acpi_processor_notifier_block = {
307 .notifier_call = acpi_processor_notifier,
308};
309
287/* 310/*
288 * We keep the driver loaded even when ACPI is not running. 311 * We keep the driver loaded even when ACPI is not running.
289 * This is needed for the powernow-k8 driver, that works even without 312 * This is needed for the powernow-k8 driver, that works even without
@@ -310,8 +333,12 @@ static int __init acpi_processor_driver_init(void)
310 cpuhp_setup_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD, "acpi/cpu-drv:dead", 333 cpuhp_setup_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD, "acpi/cpu-drv:dead",
311 NULL, acpi_soft_cpu_dead); 334 NULL, acpi_soft_cpu_dead);
312 335
313 acpi_thermal_cpufreq_init(); 336 if (!cpufreq_register_notifier(&acpi_processor_notifier_block,
314 acpi_processor_ppc_init(); 337 CPUFREQ_POLICY_NOTIFIER)) {
338 acpi_processor_cpufreq_init = true;
339 acpi_processor_ignore_ppc_init();
340 }
341
315 acpi_processor_throttling_init(); 342 acpi_processor_throttling_init();
316 return 0; 343 return 0;
317err: 344err:
@@ -324,8 +351,12 @@ static void __exit acpi_processor_driver_exit(void)
324 if (acpi_disabled) 351 if (acpi_disabled)
325 return; 352 return;
326 353
327 acpi_processor_ppc_exit(); 354 if (acpi_processor_cpufreq_init) {
328 acpi_thermal_cpufreq_exit(); 355 cpufreq_unregister_notifier(&acpi_processor_notifier_block,
356 CPUFREQ_POLICY_NOTIFIER);
357 acpi_processor_cpufreq_init = false;
358 }
359
329 cpuhp_remove_state_nocalls(hp_online); 360 cpuhp_remove_state_nocalls(hp_online);
330 cpuhp_remove_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD); 361 cpuhp_remove_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD);
331 driver_unregister(&acpi_processor_driver); 362 driver_unregister(&acpi_processor_driver);
diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c
index ee87cb6f6e59..2261713d1aec 100644
--- a/drivers/acpi/processor_perflib.c
+++ b/drivers/acpi/processor_perflib.c
@@ -50,57 +50,13 @@ module_param(ignore_ppc, int, 0644);
50MODULE_PARM_DESC(ignore_ppc, "If the frequency of your machine gets wrongly" \ 50MODULE_PARM_DESC(ignore_ppc, "If the frequency of your machine gets wrongly" \
51 "limited by BIOS, this should help"); 51 "limited by BIOS, this should help");
52 52
53#define PPC_REGISTERED 1 53static bool acpi_processor_ppc_in_use;
54#define PPC_IN_USE 2
55
56static int acpi_processor_ppc_status;
57
58static int acpi_processor_ppc_notifier(struct notifier_block *nb,
59 unsigned long event, void *data)
60{
61 struct cpufreq_policy *policy = data;
62 struct acpi_processor *pr;
63 unsigned int ppc = 0;
64
65 if (ignore_ppc < 0)
66 ignore_ppc = 0;
67
68 if (ignore_ppc)
69 return 0;
70
71 if (event != CPUFREQ_ADJUST)
72 return 0;
73
74 mutex_lock(&performance_mutex);
75
76 pr = per_cpu(processors, policy->cpu);
77 if (!pr || !pr->performance)
78 goto out;
79
80 ppc = (unsigned int)pr->performance_platform_limit;
81
82 if (ppc >= pr->performance->state_count)
83 goto out;
84
85 cpufreq_verify_within_limits(policy, 0,
86 pr->performance->states[ppc].
87 core_frequency * 1000);
88
89 out:
90 mutex_unlock(&performance_mutex);
91
92 return 0;
93}
94
95static struct notifier_block acpi_ppc_notifier_block = {
96 .notifier_call = acpi_processor_ppc_notifier,
97};
98 54
99static int acpi_processor_get_platform_limit(struct acpi_processor *pr) 55static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
100{ 56{
101 acpi_status status = 0; 57 acpi_status status = 0;
102 unsigned long long ppc = 0; 58 unsigned long long ppc = 0;
103 59 int ret;
104 60
105 if (!pr) 61 if (!pr)
106 return -EINVAL; 62 return -EINVAL;
@@ -112,7 +68,7 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
112 status = acpi_evaluate_integer(pr->handle, "_PPC", NULL, &ppc); 68 status = acpi_evaluate_integer(pr->handle, "_PPC", NULL, &ppc);
113 69
114 if (status != AE_NOT_FOUND) 70 if (status != AE_NOT_FOUND)
115 acpi_processor_ppc_status |= PPC_IN_USE; 71 acpi_processor_ppc_in_use = true;
116 72
117 if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { 73 if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
118 ACPI_EXCEPTION((AE_INFO, status, "Evaluating _PPC")); 74 ACPI_EXCEPTION((AE_INFO, status, "Evaluating _PPC"));
@@ -124,6 +80,17 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
124 80
125 pr->performance_platform_limit = (int)ppc; 81 pr->performance_platform_limit = (int)ppc;
126 82
83 if (ppc >= pr->performance->state_count ||
84 unlikely(!dev_pm_qos_request_active(&pr->perflib_req)))
85 return 0;
86
87 ret = dev_pm_qos_update_request(&pr->perflib_req,
88 pr->performance->states[ppc].core_frequency * 1000);
89 if (ret < 0) {
90 pr_warn("Failed to update perflib freq constraint: CPU%d (%d)\n",
91 pr->id, ret);
92 }
93
127 return 0; 94 return 0;
128} 95}
129 96
@@ -184,23 +151,32 @@ int acpi_processor_get_bios_limit(int cpu, unsigned int *limit)
184} 151}
185EXPORT_SYMBOL(acpi_processor_get_bios_limit); 152EXPORT_SYMBOL(acpi_processor_get_bios_limit);
186 153
187void acpi_processor_ppc_init(void) 154void acpi_processor_ignore_ppc_init(void)
188{ 155{
189 if (!cpufreq_register_notifier 156 if (ignore_ppc < 0)
190 (&acpi_ppc_notifier_block, CPUFREQ_POLICY_NOTIFIER)) 157 ignore_ppc = 0;
191 acpi_processor_ppc_status |= PPC_REGISTERED; 158}
192 else 159
193 printk(KERN_DEBUG 160void acpi_processor_ppc_init(int cpu)
194 "Warning: Processor Platform Limit not supported.\n"); 161{
162 struct acpi_processor *pr = per_cpu(processors, cpu);
163 int ret;
164
165 ret = dev_pm_qos_add_request(get_cpu_device(cpu),
166 &pr->perflib_req, DEV_PM_QOS_MAX_FREQUENCY,
167 INT_MAX);
168 if (ret < 0) {
169 pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
170 ret);
171 return;
172 }
195} 173}
196 174
197void acpi_processor_ppc_exit(void) 175void acpi_processor_ppc_exit(int cpu)
198{ 176{
199 if (acpi_processor_ppc_status & PPC_REGISTERED) 177 struct acpi_processor *pr = per_cpu(processors, cpu);
200 cpufreq_unregister_notifier(&acpi_ppc_notifier_block,
201 CPUFREQ_POLICY_NOTIFIER);
202 178
203 acpi_processor_ppc_status &= ~PPC_REGISTERED; 179 dev_pm_qos_remove_request(&pr->perflib_req);
204} 180}
205 181
206static int acpi_processor_get_performance_control(struct acpi_processor *pr) 182static int acpi_processor_get_performance_control(struct acpi_processor *pr)
@@ -477,7 +453,7 @@ int acpi_processor_notify_smm(struct module *calling_module)
477 static int is_done = 0; 453 static int is_done = 0;
478 int result; 454 int result;
479 455
480 if (!(acpi_processor_ppc_status & PPC_REGISTERED)) 456 if (!acpi_processor_cpufreq_init)
481 return -EBUSY; 457 return -EBUSY;
482 458
483 if (!try_module_get(calling_module)) 459 if (!try_module_get(calling_module))
@@ -513,7 +489,7 @@ int acpi_processor_notify_smm(struct module *calling_module)
513 * we can allow the cpufreq driver to be rmmod'ed. */ 489 * we can allow the cpufreq driver to be rmmod'ed. */
514 is_done = 1; 490 is_done = 1;
515 491
516 if (!(acpi_processor_ppc_status & PPC_IN_USE)) 492 if (!acpi_processor_ppc_in_use)
517 module_put(calling_module); 493 module_put(calling_module);
518 494
519 return 0; 495 return 0;
@@ -742,7 +718,7 @@ acpi_processor_register_performance(struct acpi_processor_performance
742{ 718{
743 struct acpi_processor *pr; 719 struct acpi_processor *pr;
744 720
745 if (!(acpi_processor_ppc_status & PPC_REGISTERED)) 721 if (!acpi_processor_cpufreq_init)
746 return -EINVAL; 722 return -EINVAL;
747 723
748 mutex_lock(&performance_mutex); 724 mutex_lock(&performance_mutex);
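The hunks above replace the old CPUFREQ_ADJUST-time clamping with a per-CPU PM QoS maximum-frequency request: acpi_processor_ppc_init() adds the request at INT_MAX (no limit), evaluating _PPC updates it to the frequency of the platform-limit P-state, and acpi_processor_ppc_exit() drops it. A minimal sketch of that request lifecycle, with error handling trimmed and the kHz limit value left to the caller:

#include <linux/cpu.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request perflib_req;

/* Register the constraint for one CPU, initially unlimited. */
static int example_ppc_init(int cpu)
{
	return dev_pm_qos_add_request(get_cpu_device(cpu), &perflib_req,
				      DEV_PM_QOS_MAX_FREQUENCY, INT_MAX);
}

/* Whenever _PPC changes, cpufreq picks up the new cap through PM QoS. */
static int example_ppc_update(s32 limit_khz)
{
	return dev_pm_qos_update_request(&perflib_req, limit_khz);
}

static void example_ppc_exit(void)
{
	dev_pm_qos_remove_request(&perflib_req);
}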
diff --git a/drivers/acpi/processor_thermal.c b/drivers/acpi/processor_thermal.c
index 50fb0107375e..ec2638f1df4f 100644
--- a/drivers/acpi/processor_thermal.c
+++ b/drivers/acpi/processor_thermal.c
@@ -35,7 +35,6 @@ ACPI_MODULE_NAME("processor_thermal");
35#define CPUFREQ_THERMAL_MAX_STEP 3 35#define CPUFREQ_THERMAL_MAX_STEP 3
36 36
37static DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg); 37static DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg);
38static unsigned int acpi_thermal_cpufreq_is_init = 0;
39 38
40#define reduction_pctg(cpu) \ 39#define reduction_pctg(cpu) \
41 per_cpu(cpufreq_thermal_reduction_pctg, phys_package_first_cpu(cpu)) 40 per_cpu(cpufreq_thermal_reduction_pctg, phys_package_first_cpu(cpu))
@@ -61,35 +60,11 @@ static int phys_package_first_cpu(int cpu)
61static int cpu_has_cpufreq(unsigned int cpu) 60static int cpu_has_cpufreq(unsigned int cpu)
62{ 61{
63 struct cpufreq_policy policy; 62 struct cpufreq_policy policy;
64 if (!acpi_thermal_cpufreq_is_init || cpufreq_get_policy(&policy, cpu)) 63 if (!acpi_processor_cpufreq_init || cpufreq_get_policy(&policy, cpu))
65 return 0; 64 return 0;
66 return 1; 65 return 1;
67} 66}
68 67
69static int acpi_thermal_cpufreq_notifier(struct notifier_block *nb,
70 unsigned long event, void *data)
71{
72 struct cpufreq_policy *policy = data;
73 unsigned long max_freq = 0;
74
75 if (event != CPUFREQ_ADJUST)
76 goto out;
77
78 max_freq = (
79 policy->cpuinfo.max_freq *
80 (100 - reduction_pctg(policy->cpu) * 20)
81 ) / 100;
82
83 cpufreq_verify_within_limits(policy, 0, max_freq);
84
85 out:
86 return 0;
87}
88
89static struct notifier_block acpi_thermal_cpufreq_notifier_block = {
90 .notifier_call = acpi_thermal_cpufreq_notifier,
91};
92
93static int cpufreq_get_max_state(unsigned int cpu) 68static int cpufreq_get_max_state(unsigned int cpu)
94{ 69{
95 if (!cpu_has_cpufreq(cpu)) 70 if (!cpu_has_cpufreq(cpu))
@@ -108,7 +83,10 @@ static int cpufreq_get_cur_state(unsigned int cpu)
108 83
109static int cpufreq_set_cur_state(unsigned int cpu, int state) 84static int cpufreq_set_cur_state(unsigned int cpu, int state)
110{ 85{
111 int i; 86 struct cpufreq_policy *policy;
87 struct acpi_processor *pr;
88 unsigned long max_freq;
89 int i, ret;
112 90
113 if (!cpu_has_cpufreq(cpu)) 91 if (!cpu_has_cpufreq(cpu))
114 return 0; 92 return 0;
@@ -121,33 +99,53 @@ static int cpufreq_set_cur_state(unsigned int cpu, int state)
121 * frequency. 99 * frequency.
122 */ 100 */
123 for_each_online_cpu(i) { 101 for_each_online_cpu(i) {
124 if (topology_physical_package_id(i) == 102 if (topology_physical_package_id(i) !=
125 topology_physical_package_id(cpu)) 103 topology_physical_package_id(cpu))
126 cpufreq_update_policy(i); 104 continue;
105
106 pr = per_cpu(processors, i);
107
108 if (unlikely(!dev_pm_qos_request_active(&pr->thermal_req)))
109 continue;
110
111 policy = cpufreq_cpu_get(i);
112 if (!policy)
113 return -EINVAL;
114
115 max_freq = (policy->cpuinfo.max_freq * (100 - reduction_pctg(i) * 20)) / 100;
116
117 cpufreq_cpu_put(policy);
118
119 ret = dev_pm_qos_update_request(&pr->thermal_req, max_freq);
120 if (ret < 0) {
121 pr_warn("Failed to update thermal freq constraint: CPU%d (%d)\n",
122 pr->id, ret);
123 }
127 } 124 }
128 return 0; 125 return 0;
129} 126}
130 127
131void acpi_thermal_cpufreq_init(void) 128void acpi_thermal_cpufreq_init(int cpu)
132{ 129{
133 int i; 130 struct acpi_processor *pr = per_cpu(processors, cpu);
134 131 int ret;
135 i = cpufreq_register_notifier(&acpi_thermal_cpufreq_notifier_block, 132
136 CPUFREQ_POLICY_NOTIFIER); 133 ret = dev_pm_qos_add_request(get_cpu_device(cpu),
137 if (!i) 134 &pr->thermal_req, DEV_PM_QOS_MAX_FREQUENCY,
138 acpi_thermal_cpufreq_is_init = 1; 135 INT_MAX);
136 if (ret < 0) {
137 pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
138 ret);
139 return;
140 }
139} 141}
140 142
141void acpi_thermal_cpufreq_exit(void) 143void acpi_thermal_cpufreq_exit(int cpu)
142{ 144{
143 if (acpi_thermal_cpufreq_is_init) 145 struct acpi_processor *pr = per_cpu(processors, cpu);
144 cpufreq_unregister_notifier
145 (&acpi_thermal_cpufreq_notifier_block,
146 CPUFREQ_POLICY_NOTIFIER);
147 146
148 acpi_thermal_cpufreq_is_init = 0; 147 dev_pm_qos_remove_request(&pr->thermal_req);
149} 148}
150
151#else /* ! CONFIG_CPU_FREQ */ 149#else /* ! CONFIG_CPU_FREQ */
152static int cpufreq_get_max_state(unsigned int cpu) 150static int cpufreq_get_max_state(unsigned int cpu)
153{ 151{
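The thermal cooling device follows the same QoS pattern but derives the cap from the cooling state: each step removes 20% of cpuinfo.max_freq, so state 0 leaves the CPU unthrottled and CPUFREQ_THERMAL_MAX_STEP (3) caps it at 40% of maximum. A small sketch of that mapping, bracketing the policy access with cpufreq_cpu_get()/cpufreq_cpu_put() just as the hunk above does:

#include <linux/cpufreq.h>

/* Map a cooling state (0..3) to a maximum-frequency constraint in kHz. */
static unsigned long thermal_state_to_max_freq(unsigned int cpu,
					       unsigned int state)
{
	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
	unsigned long max_freq;

	if (!policy)
		return 0;

	/* e.g. state 2 on a 2.4 GHz CPU: 2400000 * (100 - 40) / 100 = 1440000 kHz */
	max_freq = (policy->cpuinfo.max_freq * (100 - state * 20)) / 100;

	cpufreq_cpu_put(policy);

	return max_freq;
}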
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index f0fe7c15d657..9fa77d72ef27 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -89,6 +89,10 @@ bool acpi_sleep_state_supported(u8 sleep_state)
89} 89}
90 90
91#ifdef CONFIG_ACPI_SLEEP 91#ifdef CONFIG_ACPI_SLEEP
92static bool sleep_no_lps0 __read_mostly;
93module_param(sleep_no_lps0, bool, 0644);
94MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface");
95
92static u32 acpi_target_sleep_state = ACPI_STATE_S0; 96static u32 acpi_target_sleep_state = ACPI_STATE_S0;
93 97
94u32 acpi_target_system_state(void) 98u32 acpi_target_system_state(void)
@@ -158,11 +162,11 @@ static int __init init_nvs_nosave(const struct dmi_system_id *d)
158 return 0; 162 return 0;
159} 163}
160 164
161static bool acpi_sleep_no_lps0; 165static bool acpi_sleep_default_s3;
162 166
163static int __init init_no_lps0(const struct dmi_system_id *d) 167static int __init init_default_s3(const struct dmi_system_id *d)
164{ 168{
165 acpi_sleep_no_lps0 = true; 169 acpi_sleep_default_s3 = true;
166 return 0; 170 return 0;
167} 171}
168 172
@@ -363,7 +367,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
363 * S0 Idle firmware interface. 367 * S0 Idle firmware interface.
364 */ 368 */
365 { 369 {
366 .callback = init_no_lps0, 370 .callback = init_default_s3,
367 .ident = "Dell XPS13 9360", 371 .ident = "Dell XPS13 9360",
368 .matches = { 372 .matches = {
369 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 373 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
@@ -376,7 +380,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
376 * https://bugzilla.kernel.org/show_bug.cgi?id=199057). 380 * https://bugzilla.kernel.org/show_bug.cgi?id=199057).
377 */ 381 */
378 { 382 {
379 .callback = init_no_lps0, 383 .callback = init_default_s3,
380 .ident = "ThinkPad X1 Tablet(2016)", 384 .ident = "ThinkPad X1 Tablet(2016)",
381 .matches = { 385 .matches = {
382 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 386 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
@@ -524,8 +528,9 @@ static void acpi_pm_end(void)
524 acpi_sleep_tts_switch(acpi_target_sleep_state); 528 acpi_sleep_tts_switch(acpi_target_sleep_state);
525} 529}
526#else /* !CONFIG_ACPI_SLEEP */ 530#else /* !CONFIG_ACPI_SLEEP */
531#define sleep_no_lps0 (1)
527#define acpi_target_sleep_state ACPI_STATE_S0 532#define acpi_target_sleep_state ACPI_STATE_S0
528#define acpi_sleep_no_lps0 (false) 533#define acpi_sleep_default_s3 (1)
529static inline void acpi_sleep_dmi_check(void) {} 534static inline void acpi_sleep_dmi_check(void) {}
530#endif /* CONFIG_ACPI_SLEEP */ 535#endif /* CONFIG_ACPI_SLEEP */
531 536
@@ -691,7 +696,6 @@ static const struct platform_suspend_ops acpi_suspend_ops_old = {
691 .recover = acpi_pm_finish, 696 .recover = acpi_pm_finish,
692}; 697};
693 698
694static bool s2idle_in_progress;
695static bool s2idle_wakeup; 699static bool s2idle_wakeup;
696 700
697/* 701/*
@@ -904,42 +908,43 @@ static int lps0_device_attach(struct acpi_device *adev,
904 if (lps0_device_handle) 908 if (lps0_device_handle)
905 return 0; 909 return 0;
906 910
907 if (acpi_sleep_no_lps0) {
908 acpi_handle_info(adev->handle,
909 "Low Power S0 Idle interface disabled\n");
910 return 0;
911 }
912
913 if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) 911 if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
914 return 0; 912 return 0;
915 913
916 guid_parse(ACPI_LPS0_DSM_UUID, &lps0_dsm_guid); 914 guid_parse(ACPI_LPS0_DSM_UUID, &lps0_dsm_guid);
917 /* Check if the _DSM is present and as expected. */ 915 /* Check if the _DSM is present and as expected. */
918 out_obj = acpi_evaluate_dsm(adev->handle, &lps0_dsm_guid, 1, 0, NULL); 916 out_obj = acpi_evaluate_dsm(adev->handle, &lps0_dsm_guid, 1, 0, NULL);
919 if (out_obj && out_obj->type == ACPI_TYPE_BUFFER) { 917 if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER) {
920 char bitmask = *(char *)out_obj->buffer.pointer;
921
922 lps0_dsm_func_mask = bitmask;
923 lps0_device_handle = adev->handle;
924 /*
925 * Use suspend-to-idle by default if the default
926 * suspend mode was not set from the command line.
927 */
928 if (mem_sleep_default > PM_SUSPEND_MEM)
929 mem_sleep_current = PM_SUSPEND_TO_IDLE;
930
931 acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
932 bitmask);
933
934 acpi_ec_mark_gpe_for_wake();
935 } else {
936 acpi_handle_debug(adev->handle, 918 acpi_handle_debug(adev->handle,
937 "_DSM function 0 evaluation failed\n"); 919 "_DSM function 0 evaluation failed\n");
920 return 0;
938 } 921 }
922
923 lps0_dsm_func_mask = *(char *)out_obj->buffer.pointer;
924
939 ACPI_FREE(out_obj); 925 ACPI_FREE(out_obj);
940 926
927 acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
928 lps0_dsm_func_mask);
929
930 lps0_device_handle = adev->handle;
931
941 lpi_device_get_constraints(); 932 lpi_device_get_constraints();
942 933
934 /*
935 * Use suspend-to-idle by default if the default suspend mode was not
936 * set from the command line.
937 */
938 if (mem_sleep_default > PM_SUSPEND_MEM && !acpi_sleep_default_s3)
939 mem_sleep_current = PM_SUSPEND_TO_IDLE;
940
941 /*
942 * Some LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U, require the
943 * EC GPE to be enabled while suspended for certain wakeup devices to
944 * work, so mark it as wakeup-capable.
945 */
946 acpi_ec_mark_gpe_for_wake();
947
943 return 0; 948 return 0;
944} 949}
945 950
@@ -951,98 +956,110 @@ static struct acpi_scan_handler lps0_handler = {
951static int acpi_s2idle_begin(void) 956static int acpi_s2idle_begin(void)
952{ 957{
953 acpi_scan_lock_acquire(); 958 acpi_scan_lock_acquire();
954 s2idle_in_progress = true;
955 return 0; 959 return 0;
956} 960}
957 961
958static int acpi_s2idle_prepare(void) 962static int acpi_s2idle_prepare(void)
959{ 963{
960 if (lps0_device_handle) { 964 if (acpi_sci_irq_valid()) {
961 acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF); 965 enable_irq_wake(acpi_sci_irq);
962 acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
963
964 acpi_ec_set_gpe_wake_mask(ACPI_GPE_ENABLE); 966 acpi_ec_set_gpe_wake_mask(ACPI_GPE_ENABLE);
965 } 967 }
966 968
967 if (acpi_sci_irq_valid())
968 enable_irq_wake(acpi_sci_irq);
969
970 acpi_enable_wakeup_devices(ACPI_STATE_S0); 969 acpi_enable_wakeup_devices(ACPI_STATE_S0);
971 970
972 /* Change the configuration of GPEs to avoid spurious wakeup. */ 971 /* Change the configuration of GPEs to avoid spurious wakeup. */
973 acpi_enable_all_wakeup_gpes(); 972 acpi_enable_all_wakeup_gpes();
974 acpi_os_wait_events_complete(); 973 acpi_os_wait_events_complete();
974
975 s2idle_wakeup = true;
975 return 0; 976 return 0;
976} 977}
977 978
978static void acpi_s2idle_wake(void) 979static int acpi_s2idle_prepare_late(void)
979{ 980{
980 if (!lps0_device_handle) 981 if (!lps0_device_handle || sleep_no_lps0)
981 return; 982 return 0;
982 983
983 if (pm_debug_messages_on) 984 if (pm_debug_messages_on)
984 lpi_check_constraints(); 985 lpi_check_constraints();
985 986
987 acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
988 acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
989
990 return 0;
991}
992
993static void acpi_s2idle_wake(void)
994{
995 /*
996 * If IRQD_WAKEUP_ARMED is set for the SCI at this point, the SCI has
997 * not triggered while suspended, so bail out.
998 */
999 if (!acpi_sci_irq_valid() ||
1000 irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq)))
1001 return;
1002
986 /* 1003 /*
987 * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means 1004 * If there are EC events to process, the wakeup may be a spurious one
988 * that the SCI has triggered while suspended, so cancel the wakeup in 1005 * coming from the EC.
989 * case it has not been a wakeup event (the GPEs will be checked later).
990 */ 1006 */
991 if (acpi_sci_irq_valid() && 1007 if (acpi_ec_dispatch_gpe()) {
992 !irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq))) { 1008 /*
1009 * Cancel the wakeup and process all pending events in case
1010 * there are any wakeup ones in there.
1011 *
1012 * Note that if any non-EC GPEs are active at this point, the
1013 * SCI will retrigger after the rearming below, so no events
1014 * should be missed by canceling the wakeup here.
1015 */
993 pm_system_cancel_wakeup(); 1016 pm_system_cancel_wakeup();
994 s2idle_wakeup = true;
995 /* 1017 /*
996 * On some platforms with the LPS0 _DSM device noirq resume 1018 * The EC driver uses the system workqueue and an additional
997 * takes too much time for EC wakeup events to survive, so look 1019 * special one, so those need to be flushed too.
998 * for them now.
999 */ 1020 */
1000 acpi_ec_dispatch_gpe(); 1021 acpi_os_wait_events_complete(); /* synchronize EC GPE processing */
1022 acpi_ec_flush_work();
1023 acpi_os_wait_events_complete(); /* synchronize Notify handling */
1024
1025 rearm_wake_irq(acpi_sci_irq);
1001 } 1026 }
1002} 1027}
1003 1028
1004static void acpi_s2idle_sync(void) 1029static void acpi_s2idle_restore_early(void)
1005{ 1030{
1006 /* 1031 if (!lps0_device_handle || sleep_no_lps0)
1007 * Process all pending events in case there are any wakeup ones. 1032 return;
1008 * 1033
1009 * The EC driver uses the system workqueue and an additional special 1034 acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
1010 * one, so those need to be flushed too. 1035 acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
1011 */
1012 acpi_os_wait_events_complete(); /* synchronize SCI IRQ handling */
1013 acpi_ec_flush_work();
1014 acpi_os_wait_events_complete(); /* synchronize Notify handling */
1015 s2idle_wakeup = false;
1016} 1036}
1017 1037
1018static void acpi_s2idle_restore(void) 1038static void acpi_s2idle_restore(void)
1019{ 1039{
1040 s2idle_wakeup = false;
1041
1020 acpi_enable_all_runtime_gpes(); 1042 acpi_enable_all_runtime_gpes();
1021 1043
1022 acpi_disable_wakeup_devices(ACPI_STATE_S0); 1044 acpi_disable_wakeup_devices(ACPI_STATE_S0);
1023 1045
1024 if (acpi_sci_irq_valid()) 1046 if (acpi_sci_irq_valid()) {
1025 disable_irq_wake(acpi_sci_irq);
1026
1027 if (lps0_device_handle) {
1028 acpi_ec_set_gpe_wake_mask(ACPI_GPE_DISABLE); 1047 acpi_ec_set_gpe_wake_mask(ACPI_GPE_DISABLE);
1029 1048 disable_irq_wake(acpi_sci_irq);
1030 acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
1031 acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
1032 } 1049 }
1033} 1050}
1034 1051
1035static void acpi_s2idle_end(void) 1052static void acpi_s2idle_end(void)
1036{ 1053{
1037 s2idle_in_progress = false;
1038 acpi_scan_lock_release(); 1054 acpi_scan_lock_release();
1039} 1055}
1040 1056
1041static const struct platform_s2idle_ops acpi_s2idle_ops = { 1057static const struct platform_s2idle_ops acpi_s2idle_ops = {
1042 .begin = acpi_s2idle_begin, 1058 .begin = acpi_s2idle_begin,
1043 .prepare = acpi_s2idle_prepare, 1059 .prepare = acpi_s2idle_prepare,
1060 .prepare_late = acpi_s2idle_prepare_late,
1044 .wake = acpi_s2idle_wake, 1061 .wake = acpi_s2idle_wake,
1045 .sync = acpi_s2idle_sync, 1062 .restore_early = acpi_s2idle_restore_early,
1046 .restore = acpi_s2idle_restore, 1063 .restore = acpi_s2idle_restore,
1047 .end = acpi_s2idle_end, 1064 .end = acpi_s2idle_end,
1048}; 1065};
@@ -1063,7 +1080,6 @@ static void acpi_sleep_suspend_setup(void)
1063} 1080}
1064 1081
1065#else /* !CONFIG_SUSPEND */ 1082#else /* !CONFIG_SUSPEND */
1066#define s2idle_in_progress (false)
1067#define s2idle_wakeup (false) 1083#define s2idle_wakeup (false)
1068#define lps0_device_handle (NULL) 1084#define lps0_device_handle (NULL)
1069static inline void acpi_sleep_suspend_setup(void) {} 1085static inline void acpi_sleep_suspend_setup(void) {}
@@ -1074,11 +1090,6 @@ bool acpi_s2idle_wakeup(void)
1074 return s2idle_wakeup; 1090 return s2idle_wakeup;
1075} 1091}
1076 1092
1077bool acpi_sleep_no_ec_events(void)
1078{
1079 return !s2idle_in_progress || !lps0_device_handle;
1080}
1081
1082#ifdef CONFIG_PM_SLEEP 1093#ifdef CONFIG_PM_SLEEP
1083static u32 saved_bm_rld; 1094static u32 saved_bm_rld;
1084 1095
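The suspend-to-idle rework above moves the LPS0 _DSM transitions out of ->prepare()/->restore() and into the new ->prepare_late()/->restore_early() callbacks (both gated by the new sleep_no_lps0 parameter), while ->wake() now decides whether an SCI was a genuine wakeup: EC GPEs are dispatched, spurious wakeups are cancelled, EC work is flushed, and the SCI wake IRQ is rearmed. The net effect is that a false alarm loops back into idle without repeating the noirq resume and suspend of every device. The sketch below is a simplified illustration of that control flow under those assumptions, not the kernel's actual suspend core:

/*
 * Simplified sketch of how the platform_s2idle_ops callbacks above are used
 * across one sleep cycle (assumed flow; the real s2idle core differs in detail).
 */
static void s2idle_cycle_sketch(const struct platform_s2idle_ops *ops)
{
	ops->prepare_late();		/* ACPI_LPS0_SCREEN_OFF + ACPI_LPS0_ENTRY */

	for (;;) {
		if (ops->wake)
			ops->wake();	/* filter spurious EC wakeups, rearm SCI */

		if (pm_wakeup_pending())
			break;		/* a genuine wakeup source is pending */

		s2idle_enter();		/* otherwise idle until the next interrupt */
	}

	ops->restore_early();		/* ACPI_LPS0_EXIT + ACPI_LPS0_SCREEN_ON */
}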
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index b54d241a2ff5..1eb81f113786 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -179,7 +179,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
179 if (!raw_capacity) 179 if (!raw_capacity)
180 return 0; 180 return 0;
181 181
182 if (val != CPUFREQ_NOTIFY) 182 if (val != CPUFREQ_CREATE_POLICY)
183 return 0; 183 return 0;
184 184
185 pr_debug("cpu_capacity: init cpu capacity for CPUs [%*pbl] (to_visit=%*pbl)\n", 185 pr_debug("cpu_capacity: init cpu capacity for CPUs [%*pbl] (to_visit=%*pbl)\n",
diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index e1bb691cf8f1..ec5bb190b9d0 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -1,6 +1,6 @@
1# SPDX-License-Identifier: GPL-2.0 1# SPDX-License-Identifier: GPL-2.0
2obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o 2obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
3obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o 3obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o wakeup_stats.o
4obj-$(CONFIG_PM_TRACE_RTC) += trace.o 4obj-$(CONFIG_PM_TRACE_RTC) += trace.o
5obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o 5obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o
6obj-$(CONFIG_HAVE_CLK) += clock_ops.o 6obj-$(CONFIG_HAVE_CLK) += clock_ops.o
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index b063bc41b0a9..cc85e87eaf05 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -149,29 +149,24 @@ static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
149 return ret; 149 return ret;
150} 150}
151 151
152static int genpd_runtime_suspend(struct device *dev);
153
152/* 154/*
153 * Get the generic PM domain for a particular struct device. 155 * Get the generic PM domain for a particular struct device.
154 * This validates the struct device pointer, the PM domain pointer, 156 * This validates the struct device pointer, the PM domain pointer,
155 * and checks that the PM domain pointer is a real generic PM domain. 157 * and checks that the PM domain pointer is a real generic PM domain.
156 * Any failure results in NULL being returned. 158 * Any failure results in NULL being returned.
157 */ 159 */
158static struct generic_pm_domain *genpd_lookup_dev(struct device *dev) 160static struct generic_pm_domain *dev_to_genpd_safe(struct device *dev)
159{ 161{
160 struct generic_pm_domain *genpd = NULL, *gpd;
161
162 if (IS_ERR_OR_NULL(dev) || IS_ERR_OR_NULL(dev->pm_domain)) 162 if (IS_ERR_OR_NULL(dev) || IS_ERR_OR_NULL(dev->pm_domain))
163 return NULL; 163 return NULL;
164 164
165 mutex_lock(&gpd_list_lock); 165 /* A genpd's always have its ->runtime_suspend() callback assigned. */
166 list_for_each_entry(gpd, &gpd_list, gpd_list_node) { 166 if (dev->pm_domain->ops.runtime_suspend == genpd_runtime_suspend)
167 if (&gpd->domain == dev->pm_domain) { 167 return pd_to_genpd(dev->pm_domain);
168 genpd = gpd;
169 break;
170 }
171 }
172 mutex_unlock(&gpd_list_lock);
173 168
174 return genpd; 169 return NULL;
175} 170}
176 171
177/* 172/*
@@ -385,8 +380,8 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
385 unsigned int prev; 380 unsigned int prev;
386 int ret; 381 int ret;
387 382
388 genpd = dev_to_genpd(dev); 383 genpd = dev_to_genpd_safe(dev);
389 if (IS_ERR(genpd)) 384 if (!genpd)
390 return -ENODEV; 385 return -ENODEV;
391 386
392 if (unlikely(!genpd->set_performance_state)) 387 if (unlikely(!genpd->set_performance_state))
@@ -1610,7 +1605,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
1610 */ 1605 */
1611int pm_genpd_remove_device(struct device *dev) 1606int pm_genpd_remove_device(struct device *dev)
1612{ 1607{
1613 struct generic_pm_domain *genpd = genpd_lookup_dev(dev); 1608 struct generic_pm_domain *genpd = dev_to_genpd_safe(dev);
1614 1609
1615 if (!genpd) 1610 if (!genpd)
1616 return -EINVAL; 1611 return -EINVAL;
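dev_to_genpd_safe() drops the global gpd_list walk and the locking it required: every generic PM domain installs genpd_runtime_suspend() as its ->runtime_suspend callback, so comparing that pointer is enough to prove dev->pm_domain really is a genpd before converting it with pd_to_genpd(). That accessor is essentially a container_of() wrapper along these lines (shown for illustration):

/*
 * dev_pm_domain is embedded in generic_pm_domain, so once the
 * ->runtime_suspend check has confirmed the type, the conversion is a
 * plain container_of() with no list lookup or lock.
 */
static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
{
	return container_of(pd, struct generic_pm_domain, domain);
}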
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 7fb2c39bc725..134a8af51511 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -716,7 +716,7 @@ static void async_resume_noirq(void *data, async_cookie_t cookie)
716 put_device(dev); 716 put_device(dev);
717} 717}
718 718
719void dpm_noirq_resume_devices(pm_message_t state) 719static void dpm_noirq_resume_devices(pm_message_t state)
720{ 720{
721 struct device *dev; 721 struct device *dev;
722 ktime_t starttime = ktime_get(); 722 ktime_t starttime = ktime_get();
@@ -760,13 +760,6 @@ void dpm_noirq_resume_devices(pm_message_t state)
760 trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false); 760 trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
761} 761}
762 762
763void dpm_noirq_end(void)
764{
765 resume_device_irqs();
766 device_wakeup_disarm_wake_irqs();
767 cpuidle_resume();
768}
769
770/** 763/**
771 * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices. 764 * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
772 * @state: PM transition of the system being carried out. 765 * @state: PM transition of the system being carried out.
@@ -777,7 +770,11 @@ void dpm_noirq_end(void)
777void dpm_resume_noirq(pm_message_t state) 770void dpm_resume_noirq(pm_message_t state)
778{ 771{
779 dpm_noirq_resume_devices(state); 772 dpm_noirq_resume_devices(state);
780 dpm_noirq_end(); 773
774 resume_device_irqs();
775 device_wakeup_disarm_wake_irqs();
776
777 cpuidle_resume();
781} 778}
782 779
783static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev, 780static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev,
@@ -1291,11 +1288,6 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
1291 if (async_error) 1288 if (async_error)
1292 goto Complete; 1289 goto Complete;
1293 1290
1294 if (pm_wakeup_pending()) {
1295 async_error = -EBUSY;
1296 goto Complete;
1297 }
1298
1299 if (dev->power.syscore || dev->power.direct_complete) 1291 if (dev->power.syscore || dev->power.direct_complete)
1300 goto Complete; 1292 goto Complete;
1301 1293
@@ -1362,14 +1354,7 @@ static int device_suspend_noirq(struct device *dev)
1362 return __device_suspend_noirq(dev, pm_transition, false); 1354 return __device_suspend_noirq(dev, pm_transition, false);
1363} 1355}
1364 1356
1365void dpm_noirq_begin(void) 1357static int dpm_noirq_suspend_devices(pm_message_t state)
1366{
1367 cpuidle_pause();
1368 device_wakeup_arm_wake_irqs();
1369 suspend_device_irqs();
1370}
1371
1372int dpm_noirq_suspend_devices(pm_message_t state)
1373{ 1358{
1374 ktime_t starttime = ktime_get(); 1359 ktime_t starttime = ktime_get();
1375 int error = 0; 1360 int error = 0;
@@ -1426,7 +1411,11 @@ int dpm_suspend_noirq(pm_message_t state)
1426{ 1411{
1427 int ret; 1412 int ret;
1428 1413
1429 dpm_noirq_begin(); 1414 cpuidle_pause();
1415
1416 device_wakeup_arm_wake_irqs();
1417 suspend_device_irqs();
1418
1430 ret = dpm_noirq_suspend_devices(state); 1419 ret = dpm_noirq_suspend_devices(state);
1431 if (ret) 1420 if (ret)
1432 dpm_resume_noirq(resume_event(state)); 1421 dpm_resume_noirq(resume_event(state));
diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
index ec33fbdb919b..39a06a0cfdaa 100644
--- a/drivers/base/power/power.h
+++ b/drivers/base/power/power.h
@@ -149,3 +149,21 @@ static inline void device_pm_init(struct device *dev)
149 device_pm_sleep_init(dev); 149 device_pm_sleep_init(dev);
150 pm_runtime_init(dev); 150 pm_runtime_init(dev);
151} 151}
152
153#ifdef CONFIG_PM_SLEEP
154
155/* drivers/base/power/wakeup_stats.c */
156extern int wakeup_source_sysfs_add(struct device *parent,
157 struct wakeup_source *ws);
158extern void wakeup_source_sysfs_remove(struct wakeup_source *ws);
159
160extern int pm_wakeup_source_sysfs_add(struct device *parent);
161
162#else /* !CONFIG_PM_SLEEP */
163
164static inline int pm_wakeup_source_sysfs_add(struct device *parent)
165{
166 return 0;
167}
168
169#endif /* CONFIG_PM_SLEEP */
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index 1b9c281cbe41..d7d82db2e4bc 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -5,6 +5,7 @@
5#include <linux/export.h> 5#include <linux/export.h>
6#include <linux/pm_qos.h> 6#include <linux/pm_qos.h>
7#include <linux/pm_runtime.h> 7#include <linux/pm_runtime.h>
8#include <linux/pm_wakeup.h>
8#include <linux/atomic.h> 9#include <linux/atomic.h>
9#include <linux/jiffies.h> 10#include <linux/jiffies.h>
10#include "power.h" 11#include "power.h"
@@ -667,8 +668,13 @@ int dpm_sysfs_add(struct device *dev)
667 if (rc) 668 if (rc)
668 goto err_wakeup; 669 goto err_wakeup;
669 } 670 }
671 rc = pm_wakeup_source_sysfs_add(dev);
672 if (rc)
673 goto err_latency;
670 return 0; 674 return 0;
671 675
676 err_latency:
677 sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
672 err_wakeup: 678 err_wakeup:
673 sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group); 679 sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
674 err_runtime: 680 err_runtime:
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index ee31d4f8d856..5817b51d2b15 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -72,22 +72,7 @@ static struct wakeup_source deleted_ws = {
72 .lock = __SPIN_LOCK_UNLOCKED(deleted_ws.lock), 72 .lock = __SPIN_LOCK_UNLOCKED(deleted_ws.lock),
73}; 73};
74 74
75/** 75static DEFINE_IDA(wakeup_ida);
76 * wakeup_source_prepare - Prepare a new wakeup source for initialization.
77 * @ws: Wakeup source to prepare.
78 * @name: Pointer to the name of the new wakeup source.
79 *
80 * Callers must ensure that the @name string won't be freed when @ws is still in
81 * use.
82 */
83void wakeup_source_prepare(struct wakeup_source *ws, const char *name)
84{
85 if (ws) {
86 memset(ws, 0, sizeof(*ws));
87 ws->name = name;
88 }
89}
90EXPORT_SYMBOL_GPL(wakeup_source_prepare);
91 76
92/** 77/**
93 * wakeup_source_create - Create a struct wakeup_source object. 78 * wakeup_source_create - Create a struct wakeup_source object.
@@ -96,13 +81,31 @@ EXPORT_SYMBOL_GPL(wakeup_source_prepare);
96struct wakeup_source *wakeup_source_create(const char *name) 81struct wakeup_source *wakeup_source_create(const char *name)
97{ 82{
98 struct wakeup_source *ws; 83 struct wakeup_source *ws;
84 const char *ws_name;
85 int id;
99 86
100 ws = kmalloc(sizeof(*ws), GFP_KERNEL); 87 ws = kzalloc(sizeof(*ws), GFP_KERNEL);
101 if (!ws) 88 if (!ws)
102 return NULL; 89 goto err_ws;
90
91 ws_name = kstrdup_const(name, GFP_KERNEL);
92 if (!ws_name)
93 goto err_name;
94 ws->name = ws_name;
95
96 id = ida_alloc(&wakeup_ida, GFP_KERNEL);
97 if (id < 0)
98 goto err_id;
99 ws->id = id;
103 100
104 wakeup_source_prepare(ws, name ? kstrdup_const(name, GFP_KERNEL) : NULL);
105 return ws; 101 return ws;
102
103err_id:
104 kfree_const(ws->name);
105err_name:
106 kfree(ws);
107err_ws:
108 return NULL;
106} 109}
107EXPORT_SYMBOL_GPL(wakeup_source_create); 110EXPORT_SYMBOL_GPL(wakeup_source_create);
108 111
@@ -134,6 +137,13 @@ static void wakeup_source_record(struct wakeup_source *ws)
134 spin_unlock_irqrestore(&deleted_ws.lock, flags); 137 spin_unlock_irqrestore(&deleted_ws.lock, flags);
135} 138}
136 139
140static void wakeup_source_free(struct wakeup_source *ws)
141{
142 ida_free(&wakeup_ida, ws->id);
143 kfree_const(ws->name);
144 kfree(ws);
145}
146
137/** 147/**
138 * wakeup_source_destroy - Destroy a struct wakeup_source object. 148 * wakeup_source_destroy - Destroy a struct wakeup_source object.
139 * @ws: Wakeup source to destroy. 149 * @ws: Wakeup source to destroy.
@@ -147,8 +157,7 @@ void wakeup_source_destroy(struct wakeup_source *ws)
147 157
148 __pm_relax(ws); 158 __pm_relax(ws);
149 wakeup_source_record(ws); 159 wakeup_source_record(ws);
150 kfree_const(ws->name); 160 wakeup_source_free(ws);
151 kfree(ws);
152} 161}
153EXPORT_SYMBOL_GPL(wakeup_source_destroy); 162EXPORT_SYMBOL_GPL(wakeup_source_destroy);
154 163
@@ -200,16 +209,26 @@ EXPORT_SYMBOL_GPL(wakeup_source_remove);
200 209
201/** 210/**
202 * wakeup_source_register - Create wakeup source and add it to the list. 211 * wakeup_source_register - Create wakeup source and add it to the list.
212 * @dev: Device this wakeup source is associated with (or NULL if virtual).
203 * @name: Name of the wakeup source to register. 213 * @name: Name of the wakeup source to register.
204 */ 214 */
205struct wakeup_source *wakeup_source_register(const char *name) 215struct wakeup_source *wakeup_source_register(struct device *dev,
216 const char *name)
206{ 217{
207 struct wakeup_source *ws; 218 struct wakeup_source *ws;
219 int ret;
208 220
209 ws = wakeup_source_create(name); 221 ws = wakeup_source_create(name);
210 if (ws) 222 if (ws) {
223 if (!dev || device_is_registered(dev)) {
224 ret = wakeup_source_sysfs_add(dev, ws);
225 if (ret) {
226 wakeup_source_free(ws);
227 return NULL;
228 }
229 }
211 wakeup_source_add(ws); 230 wakeup_source_add(ws);
212 231 }
213 return ws; 232 return ws;
214} 233}
215EXPORT_SYMBOL_GPL(wakeup_source_register); 234EXPORT_SYMBOL_GPL(wakeup_source_register);
@@ -222,6 +241,7 @@ void wakeup_source_unregister(struct wakeup_source *ws)
222{ 241{
223 if (ws) { 242 if (ws) {
224 wakeup_source_remove(ws); 243 wakeup_source_remove(ws);
244 wakeup_source_sysfs_remove(ws);
225 wakeup_source_destroy(ws); 245 wakeup_source_destroy(ws);
226 } 246 }
227} 247}
@@ -265,7 +285,7 @@ int device_wakeup_enable(struct device *dev)
265 if (pm_suspend_target_state != PM_SUSPEND_ON) 285 if (pm_suspend_target_state != PM_SUSPEND_ON)
266 dev_dbg(dev, "Suspicious %s() during system transition!\n", __func__); 286 dev_dbg(dev, "Suspicious %s() during system transition!\n", __func__);
267 287
268 ws = wakeup_source_register(dev_name(dev)); 288 ws = wakeup_source_register(dev, dev_name(dev));
269 if (!ws) 289 if (!ws)
270 return -ENOMEM; 290 return -ENOMEM;
271 291
@@ -859,7 +879,7 @@ EXPORT_SYMBOL_GPL(pm_system_wakeup);
859 879
860void pm_system_cancel_wakeup(void) 880void pm_system_cancel_wakeup(void)
861{ 881{
862 atomic_dec(&pm_abort_suspend); 882 atomic_dec_if_positive(&pm_abort_suspend);
863} 883}
864 884
865void pm_wakeup_clear(bool reset) 885void pm_wakeup_clear(bool reset)
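wakeup_source_register() now takes the device the source belongs to (NULL is still allowed for virtual sources); the name is copied with kstrdup_const(), and an IDA-allocated id feeds the "wakeupN" sysfs entry created at registration and torn down by wakeup_source_unregister(). Typical driver usage after this change might look like the sketch below; the names and the interrupt handler are illustrative, not taken from an existing driver:

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/pm_wakeup.h>

static struct wakeup_source *example_ws;

static int example_probe(struct device *dev)
{
	example_ws = wakeup_source_register(dev, "example-events");
	return example_ws ? 0 : -ENOMEM;
}

static irqreturn_t example_irq_handler(int irq, void *data)
{
	/* Keep the system awake for up to 100 ms while the event is handled. */
	pm_wakeup_ws_event(example_ws, 100, false);
	return IRQ_HANDLED;
}

static void example_remove(struct device *dev)
{
	/* Also removes the wakeup%d entry from sysfs now. */
	wakeup_source_unregister(example_ws);
}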
diff --git a/drivers/base/power/wakeup_stats.c b/drivers/base/power/wakeup_stats.c
new file mode 100644
index 000000000000..c7734914d914
--- /dev/null
+++ b/drivers/base/power/wakeup_stats.c
@@ -0,0 +1,214 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * Wakeup statistics in sysfs
4 *
5 * Copyright (c) 2019 Linux Foundation
6 * Copyright (c) 2019 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 * Copyright (c) 2019 Google Inc.
8 */
9
10#include <linux/device.h>
11#include <linux/idr.h>
12#include <linux/init.h>
13#include <linux/kdev_t.h>
14#include <linux/kernel.h>
15#include <linux/kobject.h>
16#include <linux/slab.h>
17#include <linux/timekeeping.h>
18
19#include "power.h"
20
21static struct class *wakeup_class;
22
23#define wakeup_attr(_name) \
24static ssize_t _name##_show(struct device *dev, \
25 struct device_attribute *attr, char *buf) \
26{ \
27 struct wakeup_source *ws = dev_get_drvdata(dev); \
28 \
29 return sprintf(buf, "%lu\n", ws->_name); \
30} \
31static DEVICE_ATTR_RO(_name)
32
33wakeup_attr(active_count);
34wakeup_attr(event_count);
35wakeup_attr(wakeup_count);
36wakeup_attr(expire_count);
37
38static ssize_t active_time_ms_show(struct device *dev,
39 struct device_attribute *attr, char *buf)
40{
41 struct wakeup_source *ws = dev_get_drvdata(dev);
42 ktime_t active_time =
43 ws->active ? ktime_sub(ktime_get(), ws->last_time) : 0;
44
45 return sprintf(buf, "%lld\n", ktime_to_ms(active_time));
46}
47static DEVICE_ATTR_RO(active_time_ms);
48
49static ssize_t total_time_ms_show(struct device *dev,
50 struct device_attribute *attr, char *buf)
51{
52 struct wakeup_source *ws = dev_get_drvdata(dev);
53 ktime_t active_time;
54 ktime_t total_time = ws->total_time;
55
56 if (ws->active) {
57 active_time = ktime_sub(ktime_get(), ws->last_time);
58 total_time = ktime_add(total_time, active_time);
59 }
60 return sprintf(buf, "%lld\n", ktime_to_ms(total_time));
61}
62static DEVICE_ATTR_RO(total_time_ms);
63
64static ssize_t max_time_ms_show(struct device *dev,
65 struct device_attribute *attr, char *buf)
66{
67 struct wakeup_source *ws = dev_get_drvdata(dev);
68 ktime_t active_time;
69 ktime_t max_time = ws->max_time;
70
71 if (ws->active) {
72 active_time = ktime_sub(ktime_get(), ws->last_time);
73 if (active_time > max_time)
74 max_time = active_time;
75 }
76 return sprintf(buf, "%lld\n", ktime_to_ms(max_time));
77}
78static DEVICE_ATTR_RO(max_time_ms);
79
80static ssize_t last_change_ms_show(struct device *dev,
81 struct device_attribute *attr, char *buf)
82{
83 struct wakeup_source *ws = dev_get_drvdata(dev);
84
85 return sprintf(buf, "%lld\n", ktime_to_ms(ws->last_time));
86}
87static DEVICE_ATTR_RO(last_change_ms);
88
89static ssize_t name_show(struct device *dev, struct device_attribute *attr,
90 char *buf)
91{
92 struct wakeup_source *ws = dev_get_drvdata(dev);
93
94 return sprintf(buf, "%s\n", ws->name);
95}
96static DEVICE_ATTR_RO(name);
97
98static ssize_t prevent_suspend_time_ms_show(struct device *dev,
99 struct device_attribute *attr,
100 char *buf)
101{
102 struct wakeup_source *ws = dev_get_drvdata(dev);
103 ktime_t prevent_sleep_time = ws->prevent_sleep_time;
104
105 if (ws->active && ws->autosleep_enabled) {
106 prevent_sleep_time = ktime_add(prevent_sleep_time,
107 ktime_sub(ktime_get(), ws->start_prevent_time));
108 }
109 return sprintf(buf, "%lld\n", ktime_to_ms(prevent_sleep_time));
110}
111static DEVICE_ATTR_RO(prevent_suspend_time_ms);
112
113static struct attribute *wakeup_source_attrs[] = {
114 &dev_attr_name.attr,
115 &dev_attr_active_count.attr,
116 &dev_attr_event_count.attr,
117 &dev_attr_wakeup_count.attr,
118 &dev_attr_expire_count.attr,
119 &dev_attr_active_time_ms.attr,
120 &dev_attr_total_time_ms.attr,
121 &dev_attr_max_time_ms.attr,
122 &dev_attr_last_change_ms.attr,
123 &dev_attr_prevent_suspend_time_ms.attr,
124 NULL,
125};
126ATTRIBUTE_GROUPS(wakeup_source);
127
128static void device_create_release(struct device *dev)
129{
130 kfree(dev);
131}
132
133static struct device *wakeup_source_device_create(struct device *parent,
134 struct wakeup_source *ws)
135{
136 struct device *dev = NULL;
137 int retval = -ENODEV;
138
139 dev = kzalloc(sizeof(*dev), GFP_KERNEL);
140 if (!dev) {
141 retval = -ENOMEM;
142 goto error;
143 }
144
145 device_initialize(dev);
146 dev->devt = MKDEV(0, 0);
147 dev->class = wakeup_class;
148 dev->parent = parent;
149 dev->groups = wakeup_source_groups;
150 dev->release = device_create_release;
151 dev_set_drvdata(dev, ws);
152 device_set_pm_not_required(dev);
153
154 retval = kobject_set_name(&dev->kobj, "wakeup%d", ws->id);
155 if (retval)
156 goto error;
157
158 retval = device_add(dev);
159 if (retval)
160 goto error;
161
162 return dev;
163
164error:
165 put_device(dev);
166 return ERR_PTR(retval);
167}
168
169/**
170 * wakeup_source_sysfs_add - Add wakeup_source attributes to sysfs.
171 * @parent: Device given wakeup source is associated with (or NULL if virtual).
172 * @ws: Wakeup source to be added in sysfs.
173 */
174int wakeup_source_sysfs_add(struct device *parent, struct wakeup_source *ws)
175{
176 struct device *dev;
177
178 dev = wakeup_source_device_create(parent, ws);
179 if (IS_ERR(dev))
180 return PTR_ERR(dev);
181 ws->dev = dev;
182
183 return 0;
184}
185
186/**
187 * pm_wakeup_source_sysfs_add - Add wakeup_source attributes to sysfs
188 * for a device if they're missing.
189 * @parent: Device given wakeup source is associated with
190 */
191int pm_wakeup_source_sysfs_add(struct device *parent)
192{
193 if (!parent->power.wakeup || parent->power.wakeup->dev)
194 return 0;
195
196 return wakeup_source_sysfs_add(parent, parent->power.wakeup);
197}
198
199/**
200 * wakeup_source_sysfs_remove - Remove wakeup_source attributes from sysfs.
201 * @ws: Wakeup source to be removed from sysfs.
202 */
203void wakeup_source_sysfs_remove(struct wakeup_source *ws)
204{
205 device_unregister(ws->dev);
206}
207
208static int __init wakeup_sources_sysfs_init(void)
209{
210 wakeup_class = class_create(THIS_MODULE, "wakeup");
211
212 return PTR_ERR_OR_ZERO(wakeup_class);
213}
214postcore_initcall(wakeup_sources_sysfs_init);
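Each wakeup_attr(x) line above expands to an ordinary read-only device attribute that pulls the counter out of the wakeup_source stored as drvdata of the new class device, so the statistics appear under /sys/class/wakeup/wakeup<id>/. For instance, wakeup_attr(event_count) is equivalent to:

static ssize_t event_count_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct wakeup_source *ws = dev_get_drvdata(dev);

	return sprintf(buf, "%lu\n", ws->event_count);
}
static DEVICE_ATTR_RO(event_count);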
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index 56c31a78c692..a905796f7f85 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -19,6 +19,18 @@ config ACPI_CPPC_CPUFREQ
19 19
20 If in doubt, say N. 20 If in doubt, say N.
21 21
22config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
23 tristate "Allwinner nvmem based SUN50I CPUFreq driver"
24 depends on ARCH_SUNXI
25 depends on NVMEM_SUNXI_SID
26 select PM_OPP
27 help
28 This adds the nvmem based CPUFreq driver for Allwinner
29 h6 SoC.
30
31 To compile this driver as a module, choose M here: the
32 module will be called sun50i-cpufreq-nvmem.
33
22config ARM_ARMADA_37XX_CPUFREQ 34config ARM_ARMADA_37XX_CPUFREQ
23 tristate "Armada 37xx CPUFreq support" 35 tristate "Armada 37xx CPUFreq support"
24 depends on ARCH_MVEBU && CPUFREQ_DT 36 depends on ARCH_MVEBU && CPUFREQ_DT
@@ -120,8 +132,8 @@ config ARM_OMAP2PLUS_CPUFREQ
120 depends on ARCH_OMAP2PLUS 132 depends on ARCH_OMAP2PLUS
121 default ARCH_OMAP2PLUS 133 default ARCH_OMAP2PLUS
122 134
123config ARM_QCOM_CPUFREQ_KRYO 135config ARM_QCOM_CPUFREQ_NVMEM
124 tristate "Qualcomm Kryo based CPUFreq" 136 tristate "Qualcomm nvmem based CPUFreq"
125 depends on ARM64 137 depends on ARM64
126 depends on QCOM_QFPROM 138 depends on QCOM_QFPROM
127 depends on QCOM_SMEM 139 depends on QCOM_SMEM
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
index 5a6c70d26c98..9a9f5ccd13d9 100644
--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -64,7 +64,7 @@ obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o
64obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o 64obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o
65obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o 65obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o
66obj-$(CONFIG_ARM_QCOM_CPUFREQ_HW) += qcom-cpufreq-hw.o 66obj-$(CONFIG_ARM_QCOM_CPUFREQ_HW) += qcom-cpufreq-hw.o
67obj-$(CONFIG_ARM_QCOM_CPUFREQ_KRYO) += qcom-cpufreq-kryo.o 67obj-$(CONFIG_ARM_QCOM_CPUFREQ_NVMEM) += qcom-cpufreq-nvmem.o
68obj-$(CONFIG_ARM_RASPBERRYPI_CPUFREQ) += raspberrypi-cpufreq.o 68obj-$(CONFIG_ARM_RASPBERRYPI_CPUFREQ) += raspberrypi-cpufreq.o
69obj-$(CONFIG_ARM_S3C2410_CPUFREQ) += s3c2410-cpufreq.o 69obj-$(CONFIG_ARM_S3C2410_CPUFREQ) += s3c2410-cpufreq.o
70obj-$(CONFIG_ARM_S3C2412_CPUFREQ) += s3c2412-cpufreq.o 70obj-$(CONFIG_ARM_S3C2412_CPUFREQ) += s3c2412-cpufreq.o
@@ -80,6 +80,7 @@ obj-$(CONFIG_ARM_SCMI_CPUFREQ) += scmi-cpufreq.o
80obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o 80obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o
81obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o 81obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o
82obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o 82obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o
83obj-$(CONFIG_ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM) += sun50i-cpufreq-nvmem.o
83obj-$(CONFIG_ARM_TANGO_CPUFREQ) += tango-cpufreq.o 84obj-$(CONFIG_ARM_TANGO_CPUFREQ) += tango-cpufreq.o
84obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o 85obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o
85obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o 86obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
index 988ebc326bdb..39e34f5066d3 100644
--- a/drivers/cpufreq/armada-8k-cpufreq.c
+++ b/drivers/cpufreq/armada-8k-cpufreq.c
@@ -136,6 +136,8 @@ static int __init armada_8k_cpufreq_init(void)
136 136
137 nb_cpus = num_possible_cpus(); 137 nb_cpus = num_possible_cpus();
138 freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL); 138 freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL);
139 if (!freq_tables)
140 return -ENOMEM;
139 cpumask_copy(&cpus, cpu_possible_mask); 141 cpumask_copy(&cpus, cpu_possible_mask);
140 142
141 /* 143 /*
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index 03dc4244ab00..bca8d1f47fd2 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -101,12 +101,15 @@ static const struct of_device_id whitelist[] __initconst = {
101 * platforms using "operating-points-v2" property. 101 * platforms using "operating-points-v2" property.
102 */ 102 */
103static const struct of_device_id blacklist[] __initconst = { 103static const struct of_device_id blacklist[] __initconst = {
104 { .compatible = "allwinner,sun50i-h6", },
105
104 { .compatible = "calxeda,highbank", }, 106 { .compatible = "calxeda,highbank", },
105 { .compatible = "calxeda,ecx-2000", }, 107 { .compatible = "calxeda,ecx-2000", },
106 108
107 { .compatible = "fsl,imx7d", }, 109 { .compatible = "fsl,imx7d", },
108 { .compatible = "fsl,imx8mq", }, 110 { .compatible = "fsl,imx8mq", },
109 { .compatible = "fsl,imx8mm", }, 111 { .compatible = "fsl,imx8mm", },
112 { .compatible = "fsl,imx8mn", },
110 113
111 { .compatible = "marvell,armadaxp", }, 114 { .compatible = "marvell,armadaxp", },
112 115
@@ -117,12 +120,14 @@ static const struct of_device_id blacklist[] __initconst = {
117 { .compatible = "mediatek,mt817x", }, 120 { .compatible = "mediatek,mt817x", },
118 { .compatible = "mediatek,mt8173", }, 121 { .compatible = "mediatek,mt8173", },
119 { .compatible = "mediatek,mt8176", }, 122 { .compatible = "mediatek,mt8176", },
123 { .compatible = "mediatek,mt8183", },
120 124
121 { .compatible = "nvidia,tegra124", }, 125 { .compatible = "nvidia,tegra124", },
122 { .compatible = "nvidia,tegra210", }, 126 { .compatible = "nvidia,tegra210", },
123 127
124 { .compatible = "qcom,apq8096", }, 128 { .compatible = "qcom,apq8096", },
125 { .compatible = "qcom,msm8996", }, 129 { .compatible = "qcom,msm8996", },
130 { .compatible = "qcom,qcs404", },
126 131
127 { .compatible = "st,stih407", }, 132 { .compatible = "st,stih407", },
128 { .compatible = "st,stih410", }, 133 { .compatible = "st,stih410", },
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index c28ebf2810f1..c52d6fa32aac 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1266,7 +1266,17 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy)
1266 DEV_PM_QOS_MAX_FREQUENCY); 1266 DEV_PM_QOS_MAX_FREQUENCY);
1267 dev_pm_qos_remove_notifier(dev, &policy->nb_min, 1267 dev_pm_qos_remove_notifier(dev, &policy->nb_min,
1268 DEV_PM_QOS_MIN_FREQUENCY); 1268 DEV_PM_QOS_MIN_FREQUENCY);
1269 dev_pm_qos_remove_request(policy->max_freq_req); 1269
1270 if (policy->max_freq_req) {
1271 /*
1272 * CPUFREQ_CREATE_POLICY notification is sent only after
1273 * successfully adding max_freq_req request.
1274 */
1275 blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
1276 CPUFREQ_REMOVE_POLICY, policy);
1277 dev_pm_qos_remove_request(policy->max_freq_req);
1278 }
1279
1270 dev_pm_qos_remove_request(policy->min_freq_req); 1280 dev_pm_qos_remove_request(policy->min_freq_req);
1271 kfree(policy->min_freq_req); 1281 kfree(policy->min_freq_req);
1272 1282
@@ -1391,6 +1401,9 @@ static int cpufreq_online(unsigned int cpu)
1391 ret); 1401 ret);
1392 goto out_destroy_policy; 1402 goto out_destroy_policy;
1393 } 1403 }
1404
1405 blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
1406 CPUFREQ_CREATE_POLICY, policy);
1394 } 1407 }
1395 1408
1396 if (cpufreq_driver->get && has_target()) { 1409 if (cpufreq_driver->get && has_target()) {
@@ -1807,8 +1820,8 @@ void cpufreq_suspend(void)
1807 } 1820 }
1808 1821
1809 if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy)) 1822 if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy))
1810 pr_err("%s: Failed to suspend driver: %p\n", __func__, 1823 pr_err("%s: Failed to suspend driver: %s\n", __func__,
1811 policy); 1824 cpufreq_driver->name);
1812 } 1825 }
1813 1826
1814suspend: 1827suspend:
@@ -2140,7 +2153,7 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
2140 unsigned int target_freq, 2153 unsigned int target_freq,
2141 unsigned int relation) 2154 unsigned int relation)
2142{ 2155{
2143 int ret = -EINVAL; 2156 int ret;
2144 2157
2145 down_write(&policy->rwsem); 2158 down_write(&policy->rwsem);
2146 2159
@@ -2347,15 +2360,13 @@ EXPORT_SYMBOL(cpufreq_get_policy);
2347 * @policy: Policy object to modify. 2360 * @policy: Policy object to modify.
2348 * @new_policy: New policy data. 2361 * @new_policy: New policy data.
2349 * 2362 *
2350 * Pass @new_policy to the cpufreq driver's ->verify() callback, run the 2363 * Pass @new_policy to the cpufreq driver's ->verify() callback. Next, copy the
2351 * installed policy notifiers for it with the CPUFREQ_ADJUST value, pass it to 2364 * min and max parameters of @new_policy to @policy and either invoke the
2352 * the driver's ->verify() callback again and run the notifiers for it again 2365 * driver's ->setpolicy() callback (if present) or carry out a governor update
2353 * with the CPUFREQ_NOTIFY value. Next, copy the min and max parameters 2366 * for @policy. That is, run the current governor's ->limits() callback (if the
2354 * of @new_policy to @policy and either invoke the driver's ->setpolicy() 2367 * governor field in @new_policy points to the same object as the one in
2355 * callback (if present) or carry out a governor update for @policy. That is, 2368 * @policy) or replace the governor for @policy with the new one stored in
2356 * run the current governor's ->limits() callback (if the governor field in 2369 * @new_policy.
2357 * @new_policy points to the same object as the one in @policy) or replace the
2358 * governor for @policy with the new one stored in @new_policy.
2359 * 2370 *
2360 * The cpuinfo part of @policy is not updated by this function. 2371 * The cpuinfo part of @policy is not updated by this function.
2361 */ 2372 */
@@ -2383,26 +2394,6 @@ int cpufreq_set_policy(struct cpufreq_policy *policy,
2383 if (ret) 2394 if (ret)
2384 return ret; 2395 return ret;
2385 2396
2386 /*
2387 * The notifier-chain shall be removed once all the users of
2388 * CPUFREQ_ADJUST are moved to use the QoS framework.
2389 */
2390 /* adjust if necessary - all reasons */
2391 blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
2392 CPUFREQ_ADJUST, new_policy);
2393
2394 /*
2395 * verify the cpu speed can be set within this limit, which might be
2396 * different to the first one
2397 */
2398 ret = cpufreq_driver->verify(new_policy);
2399 if (ret)
2400 return ret;
2401
2402 /* notification of the new policy */
2403 blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
2404 CPUFREQ_NOTIFY, new_policy);
2405
2406 policy->min = new_policy->min; 2397 policy->min = new_policy->min;
2407 policy->max = new_policy->max; 2398 policy->max = new_policy->max;
2408 trace_cpu_frequency_limits(policy); 2399 trace_cpu_frequency_limits(policy);
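With CPUFREQ_ADJUST and CPUFREQ_NOTIFY gone, policy notifiers can no longer clamp min/max directly; frequency limits flow exclusively through per-device PM QoS requests. What remains of the notifier chain is lifecycle information: CPUFREQ_CREATE_POLICY is emitted once the policy's max-frequency QoS request has been added, and CPUFREQ_REMOVE_POLICY just before it is removed, which is what the arch_topology and ACPI processor changes earlier in this diff now key off. A minimal subscriber sketch (callback and names are illustrative):

#include <linux/cpufreq.h>

static int example_policy_cb(struct notifier_block *nb, unsigned long event,
			     void *data)
{
	struct cpufreq_policy *policy = data;

	switch (event) {
	case CPUFREQ_CREATE_POLICY:
		/* Set up per-policy state; limits are adjusted via PM QoS. */
		pr_debug("policy created for CPU%u\n", policy->cpu);
		break;
	case CPUFREQ_REMOVE_POLICY:
		pr_debug("policy removed for CPU%u\n", policy->cpu);
		break;
	}

	return 0;
}

static struct notifier_block example_policy_nb = {
	.notifier_call = example_policy_cb,
};

/* Registered with:
 * cpufreq_register_notifier(&example_policy_nb, CPUFREQ_POLICY_NOTIFIER);
 */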
diff --git a/drivers/cpufreq/imx-cpufreq-dt.c b/drivers/cpufreq/imx-cpufreq-dt.c
index 4f85f3112784..35db14cf3102 100644
--- a/drivers/cpufreq/imx-cpufreq-dt.c
+++ b/drivers/cpufreq/imx-cpufreq-dt.c
@@ -16,6 +16,7 @@
16 16
17#define OCOTP_CFG3_SPEED_GRADE_SHIFT 8 17#define OCOTP_CFG3_SPEED_GRADE_SHIFT 8
18#define OCOTP_CFG3_SPEED_GRADE_MASK (0x3 << 8) 18#define OCOTP_CFG3_SPEED_GRADE_MASK (0x3 << 8)
19#define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK (0xf << 8)
19#define OCOTP_CFG3_MKT_SEGMENT_SHIFT 6 20#define OCOTP_CFG3_MKT_SEGMENT_SHIFT 6
20#define OCOTP_CFG3_MKT_SEGMENT_MASK (0x3 << 6) 21#define OCOTP_CFG3_MKT_SEGMENT_MASK (0x3 << 6)
21 22
@@ -34,7 +35,12 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
34 if (ret) 35 if (ret)
35 return ret; 36 return ret;
36 37
37 speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK) >> OCOTP_CFG3_SPEED_GRADE_SHIFT; 38 if (of_machine_is_compatible("fsl,imx8mn"))
39 speed_grade = (cell_value & IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK)
40 >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
41 else
42 speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK)
43 >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
38 mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT; 44 mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;
39 45
40 /* 46 /*
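The i.MX8MN encodes its speed grade in four fuse bits instead of two, so the driver widens the mask only for that SoC. With a hypothetical OCOTP_CFG3 value of 0x0b40 (chosen here purely for illustration), the old two-bit mask reports grade 3 while the i.MX8MN mask reports the full value of 11:

#include <linux/types.h>

#define OCOTP_CFG3_SPEED_GRADE_SHIFT		8
#define OCOTP_CFG3_SPEED_GRADE_MASK		(0x3 << 8)
#define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK	(0xf << 8)

/* Hypothetical fuse word: speed-grade field 0xb, market segment 1. */
static const u32 cell_value = 0x0b40;

static u32 speed_grade_legacy(void)
{
	return (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK)
		>> OCOTP_CFG3_SPEED_GRADE_SHIFT;	/* = 3 */
}

static u32 speed_grade_imx8mn(void)
{
	return (cell_value & IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK)
		>> OCOTP_CFG3_SPEED_GRADE_SHIFT;	/* = 11 */
}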
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 886324041add..9f02de9a1b47 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -24,6 +24,7 @@
24#include <linux/fs.h> 24#include <linux/fs.h>
25#include <linux/acpi.h> 25#include <linux/acpi.h>
26#include <linux/vmalloc.h> 26#include <linux/vmalloc.h>
27#include <linux/pm_qos.h>
27#include <trace/events/power.h> 28#include <trace/events/power.h>
28 29
29#include <asm/div64.h> 30#include <asm/div64.h>
@@ -1085,6 +1086,47 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
1085 return count; 1086 return count;
1086} 1087}
1087 1088
1089static struct cpufreq_driver intel_pstate;
1090
1091static void update_qos_request(enum dev_pm_qos_req_type type)
1092{
1093 int max_state, turbo_max, freq, i, perf_pct;
1094 struct dev_pm_qos_request *req;
1095 struct cpufreq_policy *policy;
1096
1097 for_each_possible_cpu(i) {
1098 struct cpudata *cpu = all_cpu_data[i];
1099
1100 policy = cpufreq_cpu_get(i);
1101 if (!policy)
1102 continue;
1103
1104 req = policy->driver_data;
1105 cpufreq_cpu_put(policy);
1106
1107 if (!req)
1108 continue;
1109
1110 if (hwp_active)
1111 intel_pstate_get_hwp_max(i, &turbo_max, &max_state);
1112 else
1113 turbo_max = cpu->pstate.turbo_pstate;
1114
1115 if (type == DEV_PM_QOS_MIN_FREQUENCY) {
1116 perf_pct = global.min_perf_pct;
1117 } else {
1118 req++;
1119 perf_pct = global.max_perf_pct;
1120 }
1121
1122 freq = DIV_ROUND_UP(turbo_max * perf_pct, 100);
1123 freq *= cpu->pstate.scaling;
1124
1125 if (dev_pm_qos_update_request(req, freq) < 0)
1126 pr_warn("Failed to update freq constraint: CPU%d\n", i);
1127 }
1128}
1129
1088static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b, 1130static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
1089 const char *buf, size_t count) 1131 const char *buf, size_t count)
1090{ 1132{
@@ -1108,7 +1150,10 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
1108 1150
1109 mutex_unlock(&intel_pstate_limits_lock); 1151 mutex_unlock(&intel_pstate_limits_lock);
1110 1152
1111 intel_pstate_update_policies(); 1153 if (intel_pstate_driver == &intel_pstate)
1154 intel_pstate_update_policies();
1155 else
1156 update_qos_request(DEV_PM_QOS_MAX_FREQUENCY);
1112 1157
1113 mutex_unlock(&intel_pstate_driver_lock); 1158 mutex_unlock(&intel_pstate_driver_lock);
1114 1159
@@ -1139,7 +1184,10 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
1139 1184
1140 mutex_unlock(&intel_pstate_limits_lock); 1185 mutex_unlock(&intel_pstate_limits_lock);
1141 1186
1142 intel_pstate_update_policies(); 1187 if (intel_pstate_driver == &intel_pstate)
1188 intel_pstate_update_policies();
1189 else
1190 update_qos_request(DEV_PM_QOS_MIN_FREQUENCY);
1143 1191
1144 mutex_unlock(&intel_pstate_driver_lock); 1192 mutex_unlock(&intel_pstate_driver_lock);
1145 1193
@@ -2332,8 +2380,16 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy,
2332 2380
2333static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy) 2381static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
2334{ 2382{
2335 int ret = __intel_pstate_cpu_init(policy); 2383 int max_state, turbo_max, min_freq, max_freq, ret;
2384 struct dev_pm_qos_request *req;
2385 struct cpudata *cpu;
2386 struct device *dev;
2387
2388 dev = get_cpu_device(policy->cpu);
2389 if (!dev)
2390 return -ENODEV;
2336 2391
2392 ret = __intel_pstate_cpu_init(policy);
2337 if (ret) 2393 if (ret)
2338 return ret; 2394 return ret;
2339 2395
@@ -2342,7 +2398,63 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
2342 /* This reflects the intel_pstate_get_cpu_pstates() setting. */ 2398 /* This reflects the intel_pstate_get_cpu_pstates() setting. */
2343 policy->cur = policy->cpuinfo.min_freq; 2399 policy->cur = policy->cpuinfo.min_freq;
2344 2400
2401 req = kcalloc(2, sizeof(*req), GFP_KERNEL);
2402 if (!req) {
2403 ret = -ENOMEM;
2404 goto pstate_exit;
2405 }
2406
2407 cpu = all_cpu_data[policy->cpu];
2408
2409 if (hwp_active)
2410 intel_pstate_get_hwp_max(policy->cpu, &turbo_max, &max_state);
2411 else
2412 turbo_max = cpu->pstate.turbo_pstate;
2413
2414 min_freq = DIV_ROUND_UP(turbo_max * global.min_perf_pct, 100);
2415 min_freq *= cpu->pstate.scaling;
2416 max_freq = DIV_ROUND_UP(turbo_max * global.max_perf_pct, 100);
2417 max_freq *= cpu->pstate.scaling;
2418
2419 ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_MIN_FREQUENCY,
2420 min_freq);
2421 if (ret < 0) {
2422 dev_err(dev, "Failed to add min-freq constraint (%d)\n", ret);
2423 goto free_req;
2424 }
2425
2426 ret = dev_pm_qos_add_request(dev, req + 1, DEV_PM_QOS_MAX_FREQUENCY,
2427 max_freq);
2428 if (ret < 0) {
2429 dev_err(dev, "Failed to add max-freq constraint (%d)\n", ret);
2430 goto remove_min_req;
2431 }
2432
2433 policy->driver_data = req;
2434
2345 return 0; 2435 return 0;
2436
2437remove_min_req:
2438 dev_pm_qos_remove_request(req);
2439free_req:
2440 kfree(req);
2441pstate_exit:
2442 intel_pstate_exit_perf_limits(policy);
2443
2444 return ret;
2445}
2446
2447static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
2448{
2449 struct dev_pm_qos_request *req;
2450
2451 req = policy->driver_data;
2452
2453 dev_pm_qos_remove_request(req + 1);
2454 dev_pm_qos_remove_request(req);
2455 kfree(req);
2456
2457 return intel_pstate_cpu_exit(policy);
2346} 2458}
2347 2459
2348static struct cpufreq_driver intel_cpufreq = { 2460static struct cpufreq_driver intel_cpufreq = {
@@ -2351,7 +2463,7 @@ static struct cpufreq_driver intel_cpufreq = {
2351 .target = intel_cpufreq_target, 2463 .target = intel_cpufreq_target,
2352 .fast_switch = intel_cpufreq_fast_switch, 2464 .fast_switch = intel_cpufreq_fast_switch,
2353 .init = intel_cpufreq_cpu_init, 2465 .init = intel_cpufreq_cpu_init,
2354 .exit = intel_pstate_cpu_exit, 2466 .exit = intel_cpufreq_cpu_exit,
2355 .stop_cpu = intel_cpufreq_stop_cpu, 2467 .stop_cpu = intel_cpufreq_stop_cpu,
2356 .update_limits = intel_pstate_update_limits, 2468 .update_limits = intel_pstate_update_limits,
2357 .name = "intel_cpufreq", 2469 .name = "intel_cpufreq",
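A minimal sketch, not part of this patch, of the pattern the intel_cpufreq init/exit callbacks above now follow: each policy gets a pair of device PM QoS frequency requests (one MIN, one MAX) in place of the removed policy notifiers, and the new exit callback drops the pair again. The example_* name and the kHz arguments are illustrative assumptions, not code from this series.

/* Hypothetical helper: attach a min/max frequency request pair to a policy. */
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/pm_qos.h>
#include <linux/slab.h>

static int example_add_freq_limits(struct cpufreq_policy *policy,
				   s32 min_khz, s32 max_khz)
{
	struct device *dev = get_cpu_device(policy->cpu);
	struct dev_pm_qos_request *req;
	int ret;

	if (!dev)
		return -ENODEV;

	/* req[0] bounds the frequency from below, req[1] from above. */
	req = kcalloc(2, sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_MIN_FREQUENCY, min_khz);
	if (ret < 0)
		goto free_req;

	ret = dev_pm_qos_add_request(dev, req + 1, DEV_PM_QOS_MAX_FREQUENCY, max_khz);
	if (ret < 0)
		goto remove_min;

	policy->driver_data = req;	/* removed again by the ->exit() callback */
	return 0;

remove_min:
	dev_pm_qos_remove_request(req);
free_req:
	kfree(req);
	return ret;
}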
diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
index f14f3a85f2f7..0c98dd08273d 100644
--- a/drivers/cpufreq/mediatek-cpufreq.c
+++ b/drivers/cpufreq/mediatek-cpufreq.c
@@ -338,7 +338,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
338 goto out_free_resources; 338 goto out_free_resources;
339 } 339 }
340 340
341 proc_reg = regulator_get_exclusive(cpu_dev, "proc"); 341 proc_reg = regulator_get_optional(cpu_dev, "proc");
342 if (IS_ERR(proc_reg)) { 342 if (IS_ERR(proc_reg)) {
343 if (PTR_ERR(proc_reg) == -EPROBE_DEFER) 343 if (PTR_ERR(proc_reg) == -EPROBE_DEFER)
344 pr_warn("proc regulator for cpu%d not ready, retry.\n", 344 pr_warn("proc regulator for cpu%d not ready, retry.\n",
@@ -535,6 +535,8 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
535 { .compatible = "mediatek,mt817x", }, 535 { .compatible = "mediatek,mt817x", },
536 { .compatible = "mediatek,mt8173", }, 536 { .compatible = "mediatek,mt8173", },
537 { .compatible = "mediatek,mt8176", }, 537 { .compatible = "mediatek,mt8176", },
538 { .compatible = "mediatek,mt8183", },
539 { .compatible = "mediatek,mt8516", },
538 540
539 { } 541 { }
540}; 542};
diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.c b/drivers/cpufreq/ppc_cbe_cpufreq.c
index b83f36febf03..c58abb4cca3a 100644
--- a/drivers/cpufreq/ppc_cbe_cpufreq.c
+++ b/drivers/cpufreq/ppc_cbe_cpufreq.c
@@ -110,6 +110,13 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
110#endif 110#endif
111 111
112 policy->freq_table = cbe_freqs; 112 policy->freq_table = cbe_freqs;
113 cbe_cpufreq_pmi_policy_init(policy);
114 return 0;
115}
116
117static int cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
118{
119 cbe_cpufreq_pmi_policy_exit(policy);
113 return 0; 120 return 0;
114} 121}
115 122
@@ -129,6 +136,7 @@ static struct cpufreq_driver cbe_cpufreq_driver = {
129 .verify = cpufreq_generic_frequency_table_verify, 136 .verify = cpufreq_generic_frequency_table_verify,
130 .target_index = cbe_cpufreq_target, 137 .target_index = cbe_cpufreq_target,
131 .init = cbe_cpufreq_cpu_init, 138 .init = cbe_cpufreq_cpu_init,
139 .exit = cbe_cpufreq_cpu_exit,
132 .name = "cbe-cpufreq", 140 .name = "cbe-cpufreq",
133 .flags = CPUFREQ_CONST_LOOPS, 141 .flags = CPUFREQ_CONST_LOOPS,
134}; 142};
@@ -139,15 +147,24 @@ static struct cpufreq_driver cbe_cpufreq_driver = {
139 147
140static int __init cbe_cpufreq_init(void) 148static int __init cbe_cpufreq_init(void)
141{ 149{
150 int ret;
151
142 if (!machine_is(cell)) 152 if (!machine_is(cell))
143 return -ENODEV; 153 return -ENODEV;
144 154
145 return cpufreq_register_driver(&cbe_cpufreq_driver); 155 cbe_cpufreq_pmi_init();
156
157 ret = cpufreq_register_driver(&cbe_cpufreq_driver);
158 if (ret)
159 cbe_cpufreq_pmi_exit();
160
161 return ret;
146} 162}
147 163
148static void __exit cbe_cpufreq_exit(void) 164static void __exit cbe_cpufreq_exit(void)
149{ 165{
150 cpufreq_unregister_driver(&cbe_cpufreq_driver); 166 cpufreq_unregister_driver(&cbe_cpufreq_driver);
167 cbe_cpufreq_pmi_exit();
151} 168}
152 169
153module_init(cbe_cpufreq_init); 170module_init(cbe_cpufreq_init);
diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.h b/drivers/cpufreq/ppc_cbe_cpufreq.h
index 9d973519d669..00cd8633b0d9 100644
--- a/drivers/cpufreq/ppc_cbe_cpufreq.h
+++ b/drivers/cpufreq/ppc_cbe_cpufreq.h
@@ -20,6 +20,14 @@ int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode);
20 20
21#if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI) 21#if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI)
22extern bool cbe_cpufreq_has_pmi; 22extern bool cbe_cpufreq_has_pmi;
23void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy);
24void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy);
25void cbe_cpufreq_pmi_init(void);
26void cbe_cpufreq_pmi_exit(void);
23#else 27#else
24#define cbe_cpufreq_has_pmi (0) 28#define cbe_cpufreq_has_pmi (0)
29static inline void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) {}
30static inline void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) {}
31static inline void cbe_cpufreq_pmi_init(void) {}
32static inline void cbe_cpufreq_pmi_exit(void) {}
25#endif 33#endif
diff --git a/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
index 97c8ee4614b7..bc9dd30395c4 100644
--- a/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
+++ b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
@@ -12,6 +12,7 @@
12#include <linux/timer.h> 12#include <linux/timer.h>
13#include <linux/init.h> 13#include <linux/init.h>
14#include <linux/of_platform.h> 14#include <linux/of_platform.h>
15#include <linux/pm_qos.h>
15 16
16#include <asm/processor.h> 17#include <asm/processor.h>
17#include <asm/prom.h> 18#include <asm/prom.h>
@@ -24,8 +25,6 @@
24 25
25#include "ppc_cbe_cpufreq.h" 26#include "ppc_cbe_cpufreq.h"
26 27
27static u8 pmi_slow_mode_limit[MAX_CBE];
28
29bool cbe_cpufreq_has_pmi = false; 28bool cbe_cpufreq_has_pmi = false;
30EXPORT_SYMBOL_GPL(cbe_cpufreq_has_pmi); 29EXPORT_SYMBOL_GPL(cbe_cpufreq_has_pmi);
31 30
@@ -65,64 +64,89 @@ EXPORT_SYMBOL_GPL(cbe_cpufreq_set_pmode_pmi);
65 64
66static void cbe_cpufreq_handle_pmi(pmi_message_t pmi_msg) 65static void cbe_cpufreq_handle_pmi(pmi_message_t pmi_msg)
67{ 66{
67 struct cpufreq_policy *policy;
68 struct dev_pm_qos_request *req;
68 u8 node, slow_mode; 69 u8 node, slow_mode;
70 int cpu, ret;
69 71
70 BUG_ON(pmi_msg.type != PMI_TYPE_FREQ_CHANGE); 72 BUG_ON(pmi_msg.type != PMI_TYPE_FREQ_CHANGE);
71 73
72 node = pmi_msg.data1; 74 node = pmi_msg.data1;
73 slow_mode = pmi_msg.data2; 75 slow_mode = pmi_msg.data2;
74 76
75 pmi_slow_mode_limit[node] = slow_mode; 77 cpu = cbe_node_to_cpu(node);
76 78
77 pr_debug("cbe_handle_pmi: node: %d max_freq: %d\n", node, slow_mode); 79 pr_debug("cbe_handle_pmi: node: %d max_freq: %d\n", node, slow_mode);
78}
79
80static int pmi_notifier(struct notifier_block *nb,
81 unsigned long event, void *data)
82{
83 struct cpufreq_policy *policy = data;
84 struct cpufreq_frequency_table *cbe_freqs = policy->freq_table;
85 u8 node;
86
87 /* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY
88 * policy events?)
89 */
90 node = cbe_cpu_to_node(policy->cpu);
91
92 pr_debug("got notified, event=%lu, node=%u\n", event, node);
93 80
94 if (pmi_slow_mode_limit[node] != 0) { 81 policy = cpufreq_cpu_get(cpu);
95 pr_debug("limiting node %d to slow mode %d\n", 82 if (!policy) {
96 node, pmi_slow_mode_limit[node]); 83 pr_warn("cpufreq policy not found cpu%d\n", cpu);
84 return;
85 }
97 86
98 cpufreq_verify_within_limits(policy, 0, 87 req = policy->driver_data;
99 88
100 cbe_freqs[pmi_slow_mode_limit[node]].frequency); 89 ret = dev_pm_qos_update_request(req,
101 } 90 policy->freq_table[slow_mode].frequency);
91 if (ret < 0)
92 pr_warn("Failed to update freq constraint: %d\n", ret);
93 else
94 pr_debug("limiting node %d to slow mode %d\n", node, slow_mode);
102 95
103 return 0; 96 cpufreq_cpu_put(policy);
104} 97}
105 98
106static struct notifier_block pmi_notifier_block = {
107 .notifier_call = pmi_notifier,
108};
109
110static struct pmi_handler cbe_pmi_handler = { 99static struct pmi_handler cbe_pmi_handler = {
111 .type = PMI_TYPE_FREQ_CHANGE, 100 .type = PMI_TYPE_FREQ_CHANGE,
112 .handle_pmi_message = cbe_cpufreq_handle_pmi, 101 .handle_pmi_message = cbe_cpufreq_handle_pmi,
113}; 102};
114 103
104void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy)
105{
106 struct dev_pm_qos_request *req;
107 int ret;
108
109 if (!cbe_cpufreq_has_pmi)
110 return;
111
112 req = kzalloc(sizeof(*req), GFP_KERNEL);
113 if (!req)
114 return;
115
116 ret = dev_pm_qos_add_request(get_cpu_device(policy->cpu), req,
117 DEV_PM_QOS_MAX_FREQUENCY,
118 policy->freq_table[0].frequency);
119 if (ret < 0) {
120 pr_err("Failed to add freq constraint (%d)\n", ret);
121 kfree(req);
122 return;
123 }
115 124
125 policy->driver_data = req;
126}
127EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_init);
116 128
117static int __init cbe_cpufreq_pmi_init(void) 129void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy)
118{ 130{
119 cbe_cpufreq_has_pmi = pmi_register_handler(&cbe_pmi_handler) == 0; 131 struct dev_pm_qos_request *req = policy->driver_data;
120 132
121 if (!cbe_cpufreq_has_pmi) 133 if (cbe_cpufreq_has_pmi) {
122 return -ENODEV; 134 dev_pm_qos_remove_request(req);
135 kfree(req);
136 }
137}
138EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_exit);
123 139
124 cpufreq_register_notifier(&pmi_notifier_block, CPUFREQ_POLICY_NOTIFIER); 140void cbe_cpufreq_pmi_init(void)
141{
142 if (!pmi_register_handler(&cbe_pmi_handler))
143 cbe_cpufreq_has_pmi = true;
144}
145EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_init);
125 146
126 return 0; 147void cbe_cpufreq_pmi_exit(void)
148{
149 pmi_unregister_handler(&cbe_pmi_handler);
150 cbe_cpufreq_has_pmi = false;
127} 151}
128device_initcall(cbe_cpufreq_pmi_init); 152EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_exit);
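A minimal sketch, not part of this patch, of the update step the PMI handler above now performs instead of clamping the policy from the removed CPUFREQ_ADJUST notifier: look up the policy, update the max-frequency QoS request stored in policy->driver_data by the ->init() path, and drop the policy reference. The example_cap_cpu_khz() name and its arguments are illustrative assumptions.

#include <linux/cpufreq.h>
#include <linux/pm_qos.h>
#include <linux/printk.h>

static void example_cap_cpu_khz(unsigned int cpu, s32 max_khz)
{
	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
	struct dev_pm_qos_request *req;

	if (!policy)
		return;

	req = policy->driver_data;	/* added by cbe_cpufreq_pmi_policy_init() */
	if (dev_pm_qos_update_request(req, max_khz) < 0)
		pr_warn("Failed to update freq constraint for CPU%u\n", cpu);

	cpufreq_cpu_put(policy);
}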
diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index 4b0b50403901..a9ae2f84a4ef 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -20,6 +20,7 @@
20#define LUT_VOLT GENMASK(11, 0) 20#define LUT_VOLT GENMASK(11, 0)
21#define LUT_ROW_SIZE 32 21#define LUT_ROW_SIZE 32
22#define CLK_HW_DIV 2 22#define CLK_HW_DIV 2
23#define LUT_TURBO_IND 1
23 24
24/* Register offsets */ 25/* Register offsets */
25#define REG_ENABLE 0x0 26#define REG_ENABLE 0x0
@@ -34,9 +35,12 @@ static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
34 unsigned int index) 35 unsigned int index)
35{ 36{
36 void __iomem *perf_state_reg = policy->driver_data; 37 void __iomem *perf_state_reg = policy->driver_data;
38 unsigned long freq = policy->freq_table[index].frequency;
37 39
38 writel_relaxed(index, perf_state_reg); 40 writel_relaxed(index, perf_state_reg);
39 41
42 arch_set_freq_scale(policy->related_cpus, freq,
43 policy->cpuinfo.max_freq);
40 return 0; 44 return 0;
41} 45}
42 46
@@ -63,6 +67,7 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
63{ 67{
64 void __iomem *perf_state_reg = policy->driver_data; 68 void __iomem *perf_state_reg = policy->driver_data;
65 int index; 69 int index;
70 unsigned long freq;
66 71
67 index = policy->cached_resolved_idx; 72 index = policy->cached_resolved_idx;
68 if (index < 0) 73 if (index < 0)
@@ -70,16 +75,19 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
70 75
71 writel_relaxed(index, perf_state_reg); 76 writel_relaxed(index, perf_state_reg);
72 77
73 return policy->freq_table[index].frequency; 78 freq = policy->freq_table[index].frequency;
79 arch_set_freq_scale(policy->related_cpus, freq,
80 policy->cpuinfo.max_freq);
81
82 return freq;
74} 83}
75 84
76static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev, 85static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
77 struct cpufreq_policy *policy, 86 struct cpufreq_policy *policy,
78 void __iomem *base) 87 void __iomem *base)
79{ 88{
80 u32 data, src, lval, i, core_count, prev_cc = 0, prev_freq = 0, freq; 89 u32 data, src, lval, i, core_count, prev_freq = 0, freq;
81 u32 volt; 90 u32 volt;
82 unsigned int max_cores = cpumask_weight(policy->cpus);
83 struct cpufreq_frequency_table *table; 91 struct cpufreq_frequency_table *table;
84 92
85 table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL); 93 table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL);
@@ -102,12 +110,12 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
102 else 110 else
103 freq = cpu_hw_rate / 1000; 111 freq = cpu_hw_rate / 1000;
104 112
105 if (freq != prev_freq && core_count == max_cores) { 113 if (freq != prev_freq && core_count != LUT_TURBO_IND) {
106 table[i].frequency = freq; 114 table[i].frequency = freq;
107 dev_pm_opp_add(cpu_dev, freq * 1000, volt); 115 dev_pm_opp_add(cpu_dev, freq * 1000, volt);
108 dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i, 116 dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
109 freq, core_count); 117 freq, core_count);
110 } else { 118 } else if (core_count == LUT_TURBO_IND) {
111 table[i].frequency = CPUFREQ_ENTRY_INVALID; 119 table[i].frequency = CPUFREQ_ENTRY_INVALID;
112 } 120 }
113 121
@@ -115,14 +123,14 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
115 * Two of the same frequencies with the same core counts means 123 * Two of the same frequencies with the same core counts means
116 * end of table 124 * end of table
117 */ 125 */
118 if (i > 0 && prev_freq == freq && prev_cc == core_count) { 126 if (i > 0 && prev_freq == freq) {
119 struct cpufreq_frequency_table *prev = &table[i - 1]; 127 struct cpufreq_frequency_table *prev = &table[i - 1];
120 128
121 /* 129 /*
122 * Only treat the last frequency that might be a boost 130 * Only treat the last frequency that might be a boost
123 * as the boost frequency 131 * as the boost frequency
124 */ 132 */
125 if (prev_cc != max_cores) { 133 if (prev->frequency == CPUFREQ_ENTRY_INVALID) {
126 prev->frequency = prev_freq; 134 prev->frequency = prev_freq;
127 prev->flags = CPUFREQ_BOOST_FREQ; 135 prev->flags = CPUFREQ_BOOST_FREQ;
128 dev_pm_opp_add(cpu_dev, prev_freq * 1000, volt); 136 dev_pm_opp_add(cpu_dev, prev_freq * 1000, volt);
@@ -131,7 +139,6 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
131 break; 139 break;
132 } 140 }
133 141
134 prev_cc = core_count;
135 prev_freq = freq; 142 prev_freq = freq;
136 } 143 }
137 144
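The two hunks above make the driver report each hardware frequency change through arch_set_freq_scale(), so the scheduler's frequency-invariant load tracking follows what the hardware actually programs. As a rough standalone sketch, assuming the generic arch-topology implementation, the reported ratio amounts to cur_freq/max_freq expressed against the 1024-unit capacity scale; the frequencies below are invented.

#include <stdio.h>

int main(void)
{
	unsigned long cur_khz = 1804800, max_khz = 2803200;	/* illustrative values */
	unsigned long scale = (cur_khz << 10) / max_khz;	/* SCHED_CAPACITY_SHIFT == 10 */

	printf("freq_scale = %lu / 1024\n", scale);		/* about 659 */
	return 0;
}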
diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
deleted file mode 100644
index dd64dcf89c74..000000000000
--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
+++ /dev/null
@@ -1,249 +0,0 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * Copyright (c) 2018, The Linux Foundation. All rights reserved.
4 */
5
6/*
7 * In Certain QCOM SoCs like apq8096 and msm8996 that have KRYO processors,
8 * the CPU frequency subset and voltage value of each OPP varies
9 * based on the silicon variant in use. Qualcomm Process Voltage Scaling Tables
10 * defines the voltage and frequency value based on the msm-id in SMEM
11 * and speedbin blown in the efuse combination.
12 * The qcom-cpufreq-kryo driver reads the msm-id and efuse value from the SoC
13 * to provide the OPP framework with required information.
14 * This is used to determine the voltage and frequency value for each OPP of
15 * operating-points-v2 table when it is parsed by the OPP framework.
16 */
17
18#include <linux/cpu.h>
19#include <linux/err.h>
20#include <linux/init.h>
21#include <linux/kernel.h>
22#include <linux/module.h>
23#include <linux/nvmem-consumer.h>
24#include <linux/of.h>
25#include <linux/platform_device.h>
26#include <linux/pm_opp.h>
27#include <linux/slab.h>
28#include <linux/soc/qcom/smem.h>
29
30#define MSM_ID_SMEM 137
31
32enum _msm_id {
33 MSM8996V3 = 0xF6ul,
34 APQ8096V3 = 0x123ul,
35 MSM8996SG = 0x131ul,
36 APQ8096SG = 0x138ul,
37};
38
39enum _msm8996_version {
40 MSM8996_V3,
41 MSM8996_SG,
42 NUM_OF_MSM8996_VERSIONS,
43};
44
45static struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
46
47static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
48{
49 size_t len;
50 u32 *msm_id;
51 enum _msm8996_version version;
52
53 msm_id = qcom_smem_get(QCOM_SMEM_HOST_ANY, MSM_ID_SMEM, &len);
54 if (IS_ERR(msm_id))
55 return NUM_OF_MSM8996_VERSIONS;
56
57 /* The first 4 bytes are format, next to them is the actual msm-id */
58 msm_id++;
59
60 switch ((enum _msm_id)*msm_id) {
61 case MSM8996V3:
62 case APQ8096V3:
63 version = MSM8996_V3;
64 break;
65 case MSM8996SG:
66 case APQ8096SG:
67 version = MSM8996_SG;
68 break;
69 default:
70 version = NUM_OF_MSM8996_VERSIONS;
71 }
72
73 return version;
74}
75
76static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
77{
78 struct opp_table **opp_tables;
79 enum _msm8996_version msm8996_version;
80 struct nvmem_cell *speedbin_nvmem;
81 struct device_node *np;
82 struct device *cpu_dev;
83 unsigned cpu;
84 u8 *speedbin;
85 u32 versions;
86 size_t len;
87 int ret;
88
89 cpu_dev = get_cpu_device(0);
90 if (!cpu_dev)
91 return -ENODEV;
92
93 msm8996_version = qcom_cpufreq_kryo_get_msm_id();
94 if (NUM_OF_MSM8996_VERSIONS == msm8996_version) {
95 dev_err(cpu_dev, "Not Snapdragon 820/821!");
96 return -ENODEV;
97 }
98
99 np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
100 if (!np)
101 return -ENOENT;
102
103 ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu");
104 if (!ret) {
105 of_node_put(np);
106 return -ENOENT;
107 }
108
109 speedbin_nvmem = of_nvmem_cell_get(np, NULL);
110 of_node_put(np);
111 if (IS_ERR(speedbin_nvmem)) {
112 if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
113 dev_err(cpu_dev, "Could not get nvmem cell: %ld\n",
114 PTR_ERR(speedbin_nvmem));
115 return PTR_ERR(speedbin_nvmem);
116 }
117
118 speedbin = nvmem_cell_read(speedbin_nvmem, &len);
119 nvmem_cell_put(speedbin_nvmem);
120 if (IS_ERR(speedbin))
121 return PTR_ERR(speedbin);
122
123 switch (msm8996_version) {
124 case MSM8996_V3:
125 versions = 1 << (unsigned int)(*speedbin);
126 break;
127 case MSM8996_SG:
128 versions = 1 << ((unsigned int)(*speedbin) + 4);
129 break;
130 default:
131 BUG();
132 break;
133 }
134 kfree(speedbin);
135
136 opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
137 if (!opp_tables)
138 return -ENOMEM;
139
140 for_each_possible_cpu(cpu) {
141 cpu_dev = get_cpu_device(cpu);
142 if (NULL == cpu_dev) {
143 ret = -ENODEV;
144 goto free_opp;
145 }
146
147 opp_tables[cpu] = dev_pm_opp_set_supported_hw(cpu_dev,
148 &versions, 1);
149 if (IS_ERR(opp_tables[cpu])) {
150 ret = PTR_ERR(opp_tables[cpu]);
151 dev_err(cpu_dev, "Failed to set supported hardware\n");
152 goto free_opp;
153 }
154 }
155
156 cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
157 NULL, 0);
158 if (!IS_ERR(cpufreq_dt_pdev)) {
159 platform_set_drvdata(pdev, opp_tables);
160 return 0;
161 }
162
163 ret = PTR_ERR(cpufreq_dt_pdev);
164 dev_err(cpu_dev, "Failed to register platform device\n");
165
166free_opp:
167 for_each_possible_cpu(cpu) {
168 if (IS_ERR_OR_NULL(opp_tables[cpu]))
169 break;
170 dev_pm_opp_put_supported_hw(opp_tables[cpu]);
171 }
172 kfree(opp_tables);
173
174 return ret;
175}
176
177static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
178{
179 struct opp_table **opp_tables = platform_get_drvdata(pdev);
180 unsigned int cpu;
181
182 platform_device_unregister(cpufreq_dt_pdev);
183
184 for_each_possible_cpu(cpu)
185 dev_pm_opp_put_supported_hw(opp_tables[cpu]);
186
187 kfree(opp_tables);
188
189 return 0;
190}
191
192static struct platform_driver qcom_cpufreq_kryo_driver = {
193 .probe = qcom_cpufreq_kryo_probe,
194 .remove = qcom_cpufreq_kryo_remove,
195 .driver = {
196 .name = "qcom-cpufreq-kryo",
197 },
198};
199
200static const struct of_device_id qcom_cpufreq_kryo_match_list[] __initconst = {
201 { .compatible = "qcom,apq8096", },
202 { .compatible = "qcom,msm8996", },
203 {}
204};
205
206/*
207 * Since the driver depends on smem and nvmem drivers, which may
208 * return EPROBE_DEFER, all the real activity is done in the probe,
209 * which may be defered as well. The init here is only registering
210 * the driver and the platform device.
211 */
212static int __init qcom_cpufreq_kryo_init(void)
213{
214 struct device_node *np = of_find_node_by_path("/");
215 const struct of_device_id *match;
216 int ret;
217
218 if (!np)
219 return -ENODEV;
220
221 match = of_match_node(qcom_cpufreq_kryo_match_list, np);
222 of_node_put(np);
223 if (!match)
224 return -ENODEV;
225
226 ret = platform_driver_register(&qcom_cpufreq_kryo_driver);
227 if (unlikely(ret < 0))
228 return ret;
229
230 kryo_cpufreq_pdev = platform_device_register_simple(
231 "qcom-cpufreq-kryo", -1, NULL, 0);
232 ret = PTR_ERR_OR_ZERO(kryo_cpufreq_pdev);
233 if (0 == ret)
234 return 0;
235
236 platform_driver_unregister(&qcom_cpufreq_kryo_driver);
237 return ret;
238}
239module_init(qcom_cpufreq_kryo_init);
240
241static void __exit qcom_cpufreq_kryo_exit(void)
242{
243 platform_device_unregister(kryo_cpufreq_pdev);
244 platform_driver_unregister(&qcom_cpufreq_kryo_driver);
245}
246module_exit(qcom_cpufreq_kryo_exit);
247
248MODULE_DESCRIPTION("Qualcomm Technologies, Inc. Kryo CPUfreq driver");
249MODULE_LICENSE("GPL v2");
diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
new file mode 100644
index 000000000000..f0d2d5035413
--- /dev/null
+++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
@@ -0,0 +1,352 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * Copyright (c) 2018, The Linux Foundation. All rights reserved.
4 */
5
6/*
7 * In certain Qualcomm SoCs such as apq8096 and msm8996 that have Kryo processors,
8 * the CPU frequency subset and voltage value of each OPP varies
9 * based on the silicon variant in use. Qualcomm Process Voltage Scaling Tables
10 * defines the voltage and frequency value based on the msm-id in SMEM
11 * and speedbin blown in the efuse combination.
12 * The qcom-cpufreq-nvmem driver reads the msm-id and efuse value from the SoC
13 * to provide the OPP framework with required information.
14 * This is used to determine the voltage and frequency value for each OPP of
15 * operating-points-v2 table when it is parsed by the OPP framework.
16 */
17
18#include <linux/cpu.h>
19#include <linux/err.h>
20#include <linux/init.h>
21#include <linux/kernel.h>
22#include <linux/module.h>
23#include <linux/nvmem-consumer.h>
24#include <linux/of.h>
25#include <linux/of_device.h>
26#include <linux/platform_device.h>
27#include <linux/pm_domain.h>
28#include <linux/pm_opp.h>
29#include <linux/slab.h>
30#include <linux/soc/qcom/smem.h>
31
32#define MSM_ID_SMEM 137
33
34enum _msm_id {
35 MSM8996V3 = 0xF6ul,
36 APQ8096V3 = 0x123ul,
37 MSM8996SG = 0x131ul,
38 APQ8096SG = 0x138ul,
39};
40
41enum _msm8996_version {
42 MSM8996_V3,
43 MSM8996_SG,
44 NUM_OF_MSM8996_VERSIONS,
45};
46
47struct qcom_cpufreq_drv;
48
49struct qcom_cpufreq_match_data {
50 int (*get_version)(struct device *cpu_dev,
51 struct nvmem_cell *speedbin_nvmem,
52 struct qcom_cpufreq_drv *drv);
53 const char **genpd_names;
54};
55
56struct qcom_cpufreq_drv {
57 struct opp_table **opp_tables;
58 struct opp_table **genpd_opp_tables;
59 u32 versions;
60 const struct qcom_cpufreq_match_data *data;
61};
62
63static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev;
64
65static enum _msm8996_version qcom_cpufreq_get_msm_id(void)
66{
67 size_t len;
68 u32 *msm_id;
69 enum _msm8996_version version;
70
71 msm_id = qcom_smem_get(QCOM_SMEM_HOST_ANY, MSM_ID_SMEM, &len);
72 if (IS_ERR(msm_id))
73 return NUM_OF_MSM8996_VERSIONS;
74
75 /* The first 4 bytes are format, next to them is the actual msm-id */
76 msm_id++;
77
78 switch ((enum _msm_id)*msm_id) {
79 case MSM8996V3:
80 case APQ8096V3:
81 version = MSM8996_V3;
82 break;
83 case MSM8996SG:
84 case APQ8096SG:
85 version = MSM8996_SG;
86 break;
87 default:
88 version = NUM_OF_MSM8996_VERSIONS;
89 }
90
91 return version;
92}
93
94static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev,
95 struct nvmem_cell *speedbin_nvmem,
96 struct qcom_cpufreq_drv *drv)
97{
98 size_t len;
99 u8 *speedbin;
100 enum _msm8996_version msm8996_version;
101
102 msm8996_version = qcom_cpufreq_get_msm_id();
103 if (NUM_OF_MSM8996_VERSIONS == msm8996_version) {
104 dev_err(cpu_dev, "Not Snapdragon 820/821!");
105 return -ENODEV;
106 }
107
108 speedbin = nvmem_cell_read(speedbin_nvmem, &len);
109 if (IS_ERR(speedbin))
110 return PTR_ERR(speedbin);
111
112 switch (msm8996_version) {
113 case MSM8996_V3:
114 drv->versions = 1 << (unsigned int)(*speedbin);
115 break;
116 case MSM8996_SG:
117 drv->versions = 1 << ((unsigned int)(*speedbin) + 4);
118 break;
119 default:
120 BUG();
121 break;
122 }
123
124 kfree(speedbin);
125 return 0;
126}
127
128static const struct qcom_cpufreq_match_data match_data_kryo = {
129 .get_version = qcom_cpufreq_kryo_name_version,
130};
131
132static const char *qcs404_genpd_names[] = { "cpr", NULL };
133
134static const struct qcom_cpufreq_match_data match_data_qcs404 = {
135 .genpd_names = qcs404_genpd_names,
136};
137
138static int qcom_cpufreq_probe(struct platform_device *pdev)
139{
140 struct qcom_cpufreq_drv *drv;
141 struct nvmem_cell *speedbin_nvmem;
142 struct device_node *np;
143 struct device *cpu_dev;
144 unsigned cpu;
145 const struct of_device_id *match;
146 int ret;
147
148 cpu_dev = get_cpu_device(0);
149 if (!cpu_dev)
150 return -ENODEV;
151
152 np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
153 if (!np)
154 return -ENOENT;
155
156 ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu");
157 if (!ret) {
158 of_node_put(np);
159 return -ENOENT;
160 }
161
162 drv = kzalloc(sizeof(*drv), GFP_KERNEL);
163 if (!drv)
164 return -ENOMEM;
165
166 match = pdev->dev.platform_data;
167 drv->data = match->data;
168 if (!drv->data) {
169 ret = -ENODEV;
170 goto free_drv;
171 }
172
173 if (drv->data->get_version) {
174 speedbin_nvmem = of_nvmem_cell_get(np, NULL);
175 if (IS_ERR(speedbin_nvmem)) {
176 if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
177 dev_err(cpu_dev,
178 "Could not get nvmem cell: %ld\n",
179 PTR_ERR(speedbin_nvmem));
180 ret = PTR_ERR(speedbin_nvmem);
181 goto free_drv;
182 }
183
184 ret = drv->data->get_version(cpu_dev, speedbin_nvmem, drv);
185 if (ret) {
186 nvmem_cell_put(speedbin_nvmem);
187 goto free_drv;
188 }
189 nvmem_cell_put(speedbin_nvmem);
190 }
191 of_node_put(np);
192
193 drv->opp_tables = kcalloc(num_possible_cpus(), sizeof(*drv->opp_tables),
194 GFP_KERNEL);
195 if (!drv->opp_tables) {
196 ret = -ENOMEM;
197 goto free_drv;
198 }
199
200 drv->genpd_opp_tables = kcalloc(num_possible_cpus(),
201 sizeof(*drv->genpd_opp_tables),
202 GFP_KERNEL);
203 if (!drv->genpd_opp_tables) {
204 ret = -ENOMEM;
205 goto free_opp;
206 }
207
208 for_each_possible_cpu(cpu) {
209 cpu_dev = get_cpu_device(cpu);
210 if (NULL == cpu_dev) {
211 ret = -ENODEV;
212 goto free_genpd_opp;
213 }
214
215 if (drv->data->get_version) {
216 drv->opp_tables[cpu] =
217 dev_pm_opp_set_supported_hw(cpu_dev,
218 &drv->versions, 1);
219 if (IS_ERR(drv->opp_tables[cpu])) {
220 ret = PTR_ERR(drv->opp_tables[cpu]);
221 dev_err(cpu_dev,
222 "Failed to set supported hardware\n");
223 goto free_genpd_opp;
224 }
225 }
226
227 if (drv->data->genpd_names) {
228 drv->genpd_opp_tables[cpu] =
229 dev_pm_opp_attach_genpd(cpu_dev,
230 drv->data->genpd_names,
231 NULL);
232 if (IS_ERR(drv->genpd_opp_tables[cpu])) {
233 ret = PTR_ERR(drv->genpd_opp_tables[cpu]);
234 if (ret != -EPROBE_DEFER)
235 dev_err(cpu_dev,
236 "Could not attach to pm_domain: %d\n",
237 ret);
238 goto free_genpd_opp;
239 }
240 }
241 }
242
243 cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
244 NULL, 0);
245 if (!IS_ERR(cpufreq_dt_pdev)) {
246 platform_set_drvdata(pdev, drv);
247 return 0;
248 }
249
250 ret = PTR_ERR(cpufreq_dt_pdev);
251 dev_err(cpu_dev, "Failed to register platform device\n");
252
253free_genpd_opp:
254 for_each_possible_cpu(cpu) {
255 if (IS_ERR_OR_NULL(drv->genpd_opp_tables[cpu]))
256 break;
257 dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
258 }
259 kfree(drv->genpd_opp_tables);
260free_opp:
261 for_each_possible_cpu(cpu) {
262 if (IS_ERR_OR_NULL(drv->opp_tables[cpu]))
263 break;
264 dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]);
265 }
266 kfree(drv->opp_tables);
267free_drv:
268 kfree(drv);
269
270 return ret;
271}
272
273static int qcom_cpufreq_remove(struct platform_device *pdev)
274{
275 struct qcom_cpufreq_drv *drv = platform_get_drvdata(pdev);
276 unsigned int cpu;
277
278 platform_device_unregister(cpufreq_dt_pdev);
279
280 for_each_possible_cpu(cpu) {
281 if (drv->opp_tables[cpu])
282 dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]);
283 if (drv->genpd_opp_tables[cpu])
284 dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
285 }
286
287 kfree(drv->opp_tables);
288 kfree(drv->genpd_opp_tables);
289 kfree(drv);
290
291 return 0;
292}
293
294static struct platform_driver qcom_cpufreq_driver = {
295 .probe = qcom_cpufreq_probe,
296 .remove = qcom_cpufreq_remove,
297 .driver = {
298 .name = "qcom-cpufreq-nvmem",
299 },
300};
301
302static const struct of_device_id qcom_cpufreq_match_list[] __initconst = {
303 { .compatible = "qcom,apq8096", .data = &match_data_kryo },
304 { .compatible = "qcom,msm8996", .data = &match_data_kryo },
305 { .compatible = "qcom,qcs404", .data = &match_data_qcs404 },
306 {},
307};
308
309/*
310 * Since the driver depends on smem and nvmem drivers, which may
311 * return EPROBE_DEFER, all the real activity is done in the probe,
312 * which may be deferred as well. The init here is only registering
313 * the driver and the platform device.
314 */
315static int __init qcom_cpufreq_init(void)
316{
317 struct device_node *np = of_find_node_by_path("/");
318 const struct of_device_id *match;
319 int ret;
320
321 if (!np)
322 return -ENODEV;
323
324 match = of_match_node(qcom_cpufreq_match_list, np);
325 of_node_put(np);
326 if (!match)
327 return -ENODEV;
328
329 ret = platform_driver_register(&qcom_cpufreq_driver);
330 if (unlikely(ret < 0))
331 return ret;
332
333 cpufreq_pdev = platform_device_register_data(NULL, "qcom-cpufreq-nvmem",
334 -1, match, sizeof(*match));
335 ret = PTR_ERR_OR_ZERO(cpufreq_pdev);
336 if (0 == ret)
337 return 0;
338
339 platform_driver_unregister(&qcom_cpufreq_driver);
340 return ret;
341}
342module_init(qcom_cpufreq_init);
343
344static void __exit qcom_cpufreq_exit(void)
345{
346 platform_device_unregister(cpufreq_pdev);
347 platform_driver_unregister(&qcom_cpufreq_driver);
348}
349module_exit(qcom_cpufreq_exit);
350
351MODULE_DESCRIPTION("Qualcomm Technologies, Inc. CPUfreq driver");
352MODULE_LICENSE("GPL v2");
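In the probe path above, the fused speed bin is turned into a version bitmask and handed to dev_pm_opp_set_supported_hw(), which lets the OPP core keep only the operating-points-v2 entries whose opp-supported-hw mask overlaps it. A small standalone sketch of that bit arithmetic, using a made-up fuse value:

#include <stdio.h>

int main(void)
{
	unsigned int speedbin = 2;			/* hypothetical fuse value */
	unsigned int v3_mask = 1u << speedbin;		/* MSM8996_V3: bit 2 -> 0x04 */
	unsigned int sg_mask = 1u << (speedbin + 4);	/* MSM8996_SG: bit 6 -> 0x40 */

	printf("v3 versions=0x%02x, sg versions=0x%02x\n", v3_mask, sg_mask);
	return 0;
}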
diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
new file mode 100644
index 000000000000..eca32e443716
--- /dev/null
+++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
@@ -0,0 +1,226 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * Allwinner CPUFreq nvmem based driver
4 *
5 * The sun50i-cpufreq-nvmem driver reads the efuse value from the SoC to
6 * provide the OPP framework with required information.
7 *
8 * Copyright (C) 2019 Yangtao Li <tiny.windzz@gmail.com>
9 */
10
11#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
12
13#include <linux/module.h>
14#include <linux/nvmem-consumer.h>
15#include <linux/of_device.h>
16#include <linux/platform_device.h>
17#include <linux/pm_opp.h>
18#include <linux/slab.h>
19
20#define MAX_NAME_LEN 7
21
22#define NVMEM_MASK 0x7
23#define NVMEM_SHIFT 5
24
25static struct platform_device *cpufreq_dt_pdev, *sun50i_cpufreq_pdev;
26
27/**
28 * sun50i_cpufreq_get_efuse() - Parse and return efuse value present on SoC
29 * @versions: Set to the value parsed from efuse
30 *
31 * Returns 0 on success.
32 */
33static int sun50i_cpufreq_get_efuse(u32 *versions)
34{
35 struct nvmem_cell *speedbin_nvmem;
36 struct device_node *np;
37 struct device *cpu_dev;
38 u32 *speedbin, efuse_value;
39 size_t len;
40 int ret;
41
42 cpu_dev = get_cpu_device(0);
43 if (!cpu_dev)
44 return -ENODEV;
45
46 np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
47 if (!np)
48 return -ENOENT;
49
50 ret = of_device_is_compatible(np,
51 "allwinner,sun50i-h6-operating-points");
52 if (!ret) {
53 of_node_put(np);
54 return -ENOENT;
55 }
56
57 speedbin_nvmem = of_nvmem_cell_get(np, NULL);
58 of_node_put(np);
59 if (IS_ERR(speedbin_nvmem)) {
60 if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
61 pr_err("Could not get nvmem cell: %ld\n",
62 PTR_ERR(speedbin_nvmem));
63 return PTR_ERR(speedbin_nvmem);
64 }
65
66 speedbin = nvmem_cell_read(speedbin_nvmem, &len);
67 nvmem_cell_put(speedbin_nvmem);
68 if (IS_ERR(speedbin))
69 return PTR_ERR(speedbin);
70
71 efuse_value = (*speedbin >> NVMEM_SHIFT) & NVMEM_MASK;
72 switch (efuse_value) {
73 case 0b0001:
74 *versions = 1;
75 break;
76 case 0b0011:
77 *versions = 2;
78 break;
79 default:
80 /*
81 * For any other value, we treat it as bin 0.
82 * This V/F table can be run on any good CPU.
83 */
84 *versions = 0;
85 break;
86 }
87
88 kfree(speedbin);
89 return 0;
90};
91
92static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
93{
94 struct opp_table **opp_tables;
95 char name[MAX_NAME_LEN];
96 unsigned int cpu;
97 u32 speed = 0;
98 int ret;
99
100 opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables),
101 GFP_KERNEL);
102 if (!opp_tables)
103 return -ENOMEM;
104
105 ret = sun50i_cpufreq_get_efuse(&speed);
106 if (ret)
107 return ret;
108
109 snprintf(name, MAX_NAME_LEN, "speed%d", speed);
110
111 for_each_possible_cpu(cpu) {
112 struct device *cpu_dev = get_cpu_device(cpu);
113
114 if (!cpu_dev) {
115 ret = -ENODEV;
116 goto free_opp;
117 }
118
119 opp_tables[cpu] = dev_pm_opp_set_prop_name(cpu_dev, name);
120 if (IS_ERR(opp_tables[cpu])) {
121 ret = PTR_ERR(opp_tables[cpu]);
122 pr_err("Failed to set prop name\n");
123 goto free_opp;
124 }
125 }
126
127 cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
128 NULL, 0);
129 if (!IS_ERR(cpufreq_dt_pdev)) {
130 platform_set_drvdata(pdev, opp_tables);
131 return 0;
132 }
133
134 ret = PTR_ERR(cpufreq_dt_pdev);
135 pr_err("Failed to register platform device\n");
136
137free_opp:
138 for_each_possible_cpu(cpu) {
139 if (IS_ERR_OR_NULL(opp_tables[cpu]))
140 break;
141 dev_pm_opp_put_prop_name(opp_tables[cpu]);
142 }
143 kfree(opp_tables);
144
145 return ret;
146}
147
148static int sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
149{
150 struct opp_table **opp_tables = platform_get_drvdata(pdev);
151 unsigned int cpu;
152
153 platform_device_unregister(cpufreq_dt_pdev);
154
155 for_each_possible_cpu(cpu)
156 dev_pm_opp_put_prop_name(opp_tables[cpu]);
157
158 kfree(opp_tables);
159
160 return 0;
161}
162
163static struct platform_driver sun50i_cpufreq_driver = {
164 .probe = sun50i_cpufreq_nvmem_probe,
165 .remove = sun50i_cpufreq_nvmem_remove,
166 .driver = {
167 .name = "sun50i-cpufreq-nvmem",
168 },
169};
170
171static const struct of_device_id sun50i_cpufreq_match_list[] = {
172 { .compatible = "allwinner,sun50i-h6" },
173 {}
174};
175
176static const struct of_device_id *sun50i_cpufreq_match_node(void)
177{
178 const struct of_device_id *match;
179 struct device_node *np;
180
181 np = of_find_node_by_path("/");
182 match = of_match_node(sun50i_cpufreq_match_list, np);
183 of_node_put(np);
184
185 return match;
186}
187
188/*
189 * Since the driver depends on nvmem drivers, which may return EPROBE_DEFER,
190 * all the real activity is done in the probe, which may be deferred as well.
191 * The init here is only registering the driver and the platform device.
192 */
193static int __init sun50i_cpufreq_init(void)
194{
195 const struct of_device_id *match;
196 int ret;
197
198 match = sun50i_cpufreq_match_node();
199 if (!match)
200 return -ENODEV;
201
202 ret = platform_driver_register(&sun50i_cpufreq_driver);
203 if (unlikely(ret < 0))
204 return ret;
205
206 sun50i_cpufreq_pdev =
207 platform_device_register_simple("sun50i-cpufreq-nvmem",
208 -1, NULL, 0);
209 ret = PTR_ERR_OR_ZERO(sun50i_cpufreq_pdev);
210 if (ret == 0)
211 return 0;
212
213 platform_driver_unregister(&sun50i_cpufreq_driver);
214 return ret;
215}
216module_init(sun50i_cpufreq_init);
217
218static void __exit sun50i_cpufreq_exit(void)
219{
220 platform_device_unregister(sun50i_cpufreq_pdev);
221 platform_driver_unregister(&sun50i_cpufreq_driver);
222}
223module_exit(sun50i_cpufreq_exit);
224
225MODULE_DESCRIPTION("Sun50i-h6 cpufreq driver");
226MODULE_LICENSE("GPL v2");
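sun50i_cpufreq_get_efuse() above extracts a 3-bit field from the NVMEM cell and maps it onto a speed grade, and the probe then passes "speed<N>" to dev_pm_opp_set_prop_name() so the matching opp-microvolt-speed<N> values are used. A short sketch of that decode, not part of the patch, with an invented efuse word:

#include <stdio.h>

int main(void)
{
	unsigned int efuse_word = 0x60;			/* hypothetical raw cell value */
	unsigned int bin = (efuse_word >> 5) & 0x7;	/* NVMEM_SHIFT / NVMEM_MASK */
	unsigned int version = (bin == 1) ? 1 : (bin == 3) ? 2 : 0;

	/* dev_pm_opp_set_prop_name(cpu_dev, "speed2") would then select the
	 * opp-microvolt-speed2 values from the operating-points-v2 table. */
	printf("bin=%u -> prop name \"speed%u\"\n", bin, version);
	return 0;
}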
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
index 2ad1ae17932d..aeaa883a8c9d 100644
--- a/drivers/cpufreq/ti-cpufreq.c
+++ b/drivers/cpufreq/ti-cpufreq.c
@@ -77,6 +77,7 @@ static unsigned long dra7_efuse_xlate(struct ti_cpufreq_data *opp_data,
77 case DRA7_EFUSE_HAS_ALL_MPU_OPP: 77 case DRA7_EFUSE_HAS_ALL_MPU_OPP:
78 case DRA7_EFUSE_HAS_HIGH_MPU_OPP: 78 case DRA7_EFUSE_HAS_HIGH_MPU_OPP:
79 calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP; 79 calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP;
80 /* Fall through */
80 case DRA7_EFUSE_HAS_OD_MPU_OPP: 81 case DRA7_EFUSE_HAS_OD_MPU_OPP:
81 calculated_efuse |= DRA7_EFUSE_OD_MPU_OPP; 82 calculated_efuse |= DRA7_EFUSE_OD_MPU_OPP;
82 } 83 }
diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index a4ac31e4a58c..88727b7c0d59 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -33,6 +33,17 @@ config CPU_IDLE_GOV_TEO
33 Some workloads benefit from using it and it generally should be safe 33 Some workloads benefit from using it and it generally should be safe
34 to use. Say Y here if you are not happy with the alternatives. 34 to use. Say Y here if you are not happy with the alternatives.
35 35
36config CPU_IDLE_GOV_HALTPOLL
37 bool "Haltpoll governor (for virtualized systems)"
38 depends on KVM_GUEST
39 help
40 This governor implements haltpoll idle state selection, to be
41 used in conjunction with the haltpoll cpuidle driver, allowing
42 for polling for a certain amount of time before entering the
43 idle state.
44
45 Some virtualized workloads benefit from using it.
46
36config DT_IDLE_STATES 47config DT_IDLE_STATES
37 bool 48 bool
38 49
@@ -51,6 +62,15 @@ depends on PPC
51source "drivers/cpuidle/Kconfig.powerpc" 62source "drivers/cpuidle/Kconfig.powerpc"
52endmenu 63endmenu
53 64
65config HALTPOLL_CPUIDLE
66 tristate "Halt poll cpuidle driver"
67 depends on X86 && KVM_GUEST
68 default y
69 help
70 This option enables the halt poll cpuidle driver, which allows polling
71 before halting in the guest (more efficient than polling in the
72 host via halt_poll_ns for some scenarios).
73
54endif 74endif
55 75
56config ARCH_NEEDS_CPU_IDLE_COUPLED 76config ARCH_NEEDS_CPU_IDLE_COUPLED
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 40d016339b29..ee70d5cc5b99 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -7,6 +7,7 @@ obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
7obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o 7obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
8obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o 8obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o
9obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o 9obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o
10obj-$(CONFIG_HALTPOLL_CPUIDLE) += cpuidle-haltpoll.o
10 11
11################################################################################## 12##################################################################################
12# ARM SoC drivers 13# ARM SoC drivers
diff --git a/drivers/cpuidle/cpuidle-haltpoll.c b/drivers/cpuidle/cpuidle-haltpoll.c
new file mode 100644
index 000000000000..932390b028f1
--- /dev/null
+++ b/drivers/cpuidle/cpuidle-haltpoll.c
@@ -0,0 +1,134 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * cpuidle driver for haltpoll governor.
4 *
5 * Copyright 2019 Red Hat, Inc. and/or its affiliates.
6 *
7 * This work is licensed under the terms of the GNU GPL, version 2. See
8 * the COPYING file in the top-level directory.
9 *
10 * Authors: Marcelo Tosatti <mtosatti@redhat.com>
11 */
12
13#include <linux/init.h>
14#include <linux/cpu.h>
15#include <linux/cpuidle.h>
16#include <linux/module.h>
17#include <linux/sched/idle.h>
18#include <linux/kvm_para.h>
19#include <linux/cpuidle_haltpoll.h>
20
21static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
22static enum cpuhp_state haltpoll_hp_state;
23
24static int default_enter_idle(struct cpuidle_device *dev,
25 struct cpuidle_driver *drv, int index)
26{
27 if (current_clr_polling_and_test()) {
28 local_irq_enable();
29 return index;
30 }
31 default_idle();
32 return index;
33}
34
35static struct cpuidle_driver haltpoll_driver = {
36 .name = "haltpoll",
37 .governor = "haltpoll",
38 .states = {
39 { /* entry 0 is for polling */ },
40 {
41 .enter = default_enter_idle,
42 .exit_latency = 1,
43 .target_residency = 1,
44 .power_usage = -1,
45 .name = "haltpoll idle",
46 .desc = "default architecture idle",
47 },
48 },
49 .safe_state_index = 0,
50 .state_count = 2,
51};
52
53static int haltpoll_cpu_online(unsigned int cpu)
54{
55 struct cpuidle_device *dev;
56
57 dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);
58 if (!dev->registered) {
59 dev->cpu = cpu;
60 if (cpuidle_register_device(dev)) {
61 pr_notice("cpuidle_register_device %d failed!\n", cpu);
62 return -EIO;
63 }
64 arch_haltpoll_enable(cpu);
65 }
66
67 return 0;
68}
69
70static int haltpoll_cpu_offline(unsigned int cpu)
71{
72 struct cpuidle_device *dev;
73
74 dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);
75 if (dev->registered) {
76 arch_haltpoll_disable(cpu);
77 cpuidle_unregister_device(dev);
78 }
79
80 return 0;
81}
82
83static void haltpoll_uninit(void)
84{
85 if (haltpoll_hp_state)
86 cpuhp_remove_state(haltpoll_hp_state);
87 cpuidle_unregister_driver(&haltpoll_driver);
88
89 free_percpu(haltpoll_cpuidle_devices);
90 haltpoll_cpuidle_devices = NULL;
91}
92
93static int __init haltpoll_init(void)
94{
95 int ret;
96 struct cpuidle_driver *drv = &haltpoll_driver;
97
98 cpuidle_poll_state_init(drv);
99
100 if (!kvm_para_available() ||
101 !kvm_para_has_hint(KVM_HINTS_REALTIME))
102 return -ENODEV;
103
104 ret = cpuidle_register_driver(drv);
105 if (ret < 0)
106 return ret;
107
108 haltpoll_cpuidle_devices = alloc_percpu(struct cpuidle_device);
109 if (haltpoll_cpuidle_devices == NULL) {
110 cpuidle_unregister_driver(drv);
111 return -ENOMEM;
112 }
113
114 ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpuidle/haltpoll:online",
115 haltpoll_cpu_online, haltpoll_cpu_offline);
116 if (ret < 0) {
117 haltpoll_uninit();
118 } else {
119 haltpoll_hp_state = ret;
120 ret = 0;
121 }
122
123 return ret;
124}
125
126static void __exit haltpoll_exit(void)
127{
128 haltpoll_uninit();
129}
130
131module_init(haltpoll_init);
132module_exit(haltpoll_exit);
133MODULE_LICENSE("GPL");
134MODULE_AUTHOR("Marcelo Tosatti <mtosatti@redhat.com>");
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 0f4b7c45df3e..0895b988fa92 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -362,6 +362,36 @@ void cpuidle_reflect(struct cpuidle_device *dev, int index)
362} 362}
363 363
364/** 364/**
365 * cpuidle_poll_time - return the amount of time to poll for;
366 * governors can override dev->poll_limit_ns if necessary
367 *
368 * @drv: the cpuidle driver tied with the cpu
369 * @dev: the cpuidle device
370 *
371 */
372u64 cpuidle_poll_time(struct cpuidle_driver *drv,
373 struct cpuidle_device *dev)
374{
375 int i;
376 u64 limit_ns;
377
378 if (dev->poll_limit_ns)
379 return dev->poll_limit_ns;
380
381 limit_ns = TICK_NSEC;
382 for (i = 1; i < drv->state_count; i++) {
383 if (drv->states[i].disabled || dev->states_usage[i].disable)
384 continue;
385
386 limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
387 }
388
389 dev->poll_limit_ns = limit_ns;
390
391 return dev->poll_limit_ns;
392}
393
394/**
365 * cpuidle_install_idle_handler - installs the cpuidle idle loop handler 395 * cpuidle_install_idle_handler - installs the cpuidle idle loop handler
366 */ 396 */
367void cpuidle_install_idle_handler(void) 397void cpuidle_install_idle_handler(void)
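cpuidle_poll_time() above caches a per-CPU polling budget: TICK_NSEC when every non-polling state is disabled, otherwise the target residency of the deepest enabled state converted to nanoseconds. A tiny standalone replay of that loop, with invented residencies, purely for illustration:

#include <stdio.h>

int main(void)
{
	unsigned long long tick_nsec = 4000000ULL;	/* e.g. HZ=250 */
	unsigned int residency_us[] = { 0, 2, 180 };	/* state 0 is the poll state */
	int disabled[] = { 0, 0, 0 };
	unsigned long long limit_ns = tick_nsec;
	int i;

	for (i = 1; i < 3; i++) {
		if (disabled[i])
			continue;
		limit_ns = (unsigned long long)residency_us[i] * 1000;
	}
	printf("poll_limit_ns = %llu\n", limit_ns);	/* 180000 */
	return 0;
}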
diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
index d6613101af92..9f336af17fa6 100644
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -9,6 +9,7 @@
9/* For internal use only */ 9/* For internal use only */
10extern char param_governor[]; 10extern char param_governor[];
11extern struct cpuidle_governor *cpuidle_curr_governor; 11extern struct cpuidle_governor *cpuidle_curr_governor;
12extern struct cpuidle_governor *cpuidle_prev_governor;
12extern struct list_head cpuidle_governors; 13extern struct list_head cpuidle_governors;
13extern struct list_head cpuidle_detected_devices; 14extern struct list_head cpuidle_detected_devices;
14extern struct mutex cpuidle_lock; 15extern struct mutex cpuidle_lock;
@@ -22,6 +23,7 @@ extern void cpuidle_install_idle_handler(void);
22extern void cpuidle_uninstall_idle_handler(void); 23extern void cpuidle_uninstall_idle_handler(void);
23 24
24/* governors */ 25/* governors */
26extern struct cpuidle_governor *cpuidle_find_governor(const char *str);
25extern int cpuidle_switch_governor(struct cpuidle_governor *gov); 27extern int cpuidle_switch_governor(struct cpuidle_governor *gov);
26 28
27/* sysfs */ 29/* sysfs */
diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c
index dc32f34e68d9..80c1a830d991 100644
--- a/drivers/cpuidle/driver.c
+++ b/drivers/cpuidle/driver.c
@@ -254,12 +254,25 @@ static void __cpuidle_unregister_driver(struct cpuidle_driver *drv)
254 */ 254 */
255int cpuidle_register_driver(struct cpuidle_driver *drv) 255int cpuidle_register_driver(struct cpuidle_driver *drv)
256{ 256{
257 struct cpuidle_governor *gov;
257 int ret; 258 int ret;
258 259
259 spin_lock(&cpuidle_driver_lock); 260 spin_lock(&cpuidle_driver_lock);
260 ret = __cpuidle_register_driver(drv); 261 ret = __cpuidle_register_driver(drv);
261 spin_unlock(&cpuidle_driver_lock); 262 spin_unlock(&cpuidle_driver_lock);
262 263
264 if (!ret && !strlen(param_governor) && drv->governor &&
265 (cpuidle_get_driver() == drv)) {
266 mutex_lock(&cpuidle_lock);
267 gov = cpuidle_find_governor(drv->governor);
268 if (gov) {
269 cpuidle_prev_governor = cpuidle_curr_governor;
270 if (cpuidle_switch_governor(gov) < 0)
271 cpuidle_prev_governor = NULL;
272 }
273 mutex_unlock(&cpuidle_lock);
274 }
275
263 return ret; 276 return ret;
264} 277}
265EXPORT_SYMBOL_GPL(cpuidle_register_driver); 278EXPORT_SYMBOL_GPL(cpuidle_register_driver);
@@ -274,9 +287,21 @@ EXPORT_SYMBOL_GPL(cpuidle_register_driver);
274 */ 287 */
275void cpuidle_unregister_driver(struct cpuidle_driver *drv) 288void cpuidle_unregister_driver(struct cpuidle_driver *drv)
276{ 289{
290 bool enabled = (cpuidle_get_driver() == drv);
291
277 spin_lock(&cpuidle_driver_lock); 292 spin_lock(&cpuidle_driver_lock);
278 __cpuidle_unregister_driver(drv); 293 __cpuidle_unregister_driver(drv);
279 spin_unlock(&cpuidle_driver_lock); 294 spin_unlock(&cpuidle_driver_lock);
295
296 if (!enabled)
297 return;
298
299 mutex_lock(&cpuidle_lock);
300 if (cpuidle_prev_governor) {
301 if (!cpuidle_switch_governor(cpuidle_prev_governor))
302 cpuidle_prev_governor = NULL;
303 }
304 mutex_unlock(&cpuidle_lock);
280} 305}
281EXPORT_SYMBOL_GPL(cpuidle_unregister_driver); 306EXPORT_SYMBOL_GPL(cpuidle_unregister_driver);
282 307
diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
index 2e3e14192bee..e9801f26c732 100644
--- a/drivers/cpuidle/governor.c
+++ b/drivers/cpuidle/governor.c
@@ -20,14 +20,15 @@ char param_governor[CPUIDLE_NAME_LEN];
20 20
21LIST_HEAD(cpuidle_governors); 21LIST_HEAD(cpuidle_governors);
22struct cpuidle_governor *cpuidle_curr_governor; 22struct cpuidle_governor *cpuidle_curr_governor;
23struct cpuidle_governor *cpuidle_prev_governor;
23 24
24/** 25/**
25 * __cpuidle_find_governor - finds a governor of the specified name 26 * cpuidle_find_governor - finds a governor of the specified name
26 * @str: the name 27 * @str: the name
27 * 28 *
28 * Must be called with cpuidle_lock acquired. 29 * Must be called with cpuidle_lock acquired.
29 */ 30 */
30static struct cpuidle_governor * __cpuidle_find_governor(const char *str) 31struct cpuidle_governor *cpuidle_find_governor(const char *str)
31{ 32{
32 struct cpuidle_governor *gov; 33 struct cpuidle_governor *gov;
33 34
@@ -87,7 +88,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
87 return -ENODEV; 88 return -ENODEV;
88 89
89 mutex_lock(&cpuidle_lock); 90 mutex_lock(&cpuidle_lock);
90 if (__cpuidle_find_governor(gov->name) == NULL) { 91 if (cpuidle_find_governor(gov->name) == NULL) {
91 ret = 0; 92 ret = 0;
92 list_add_tail(&gov->governor_list, &cpuidle_governors); 93 list_add_tail(&gov->governor_list, &cpuidle_governors);
93 if (!cpuidle_curr_governor || 94 if (!cpuidle_curr_governor ||
diff --git a/drivers/cpuidle/governors/Makefile b/drivers/cpuidle/governors/Makefile
index 42f44cc610dd..63abb5393a4d 100644
--- a/drivers/cpuidle/governors/Makefile
+++ b/drivers/cpuidle/governors/Makefile
@@ -6,3 +6,4 @@
6obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o 6obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
7obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o 7obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
8obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o 8obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
9obj-$(CONFIG_CPU_IDLE_GOV_HALTPOLL) += haltpoll.o
diff --git a/drivers/cpuidle/governors/haltpoll.c b/drivers/cpuidle/governors/haltpoll.c
new file mode 100644
index 000000000000..7a703d2e0064
--- /dev/null
+++ b/drivers/cpuidle/governors/haltpoll.c
@@ -0,0 +1,150 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * haltpoll.c - haltpoll idle governor
4 *
5 * Copyright 2019 Red Hat, Inc. and/or its affiliates.
6 *
7 * This work is licensed under the terms of the GNU GPL, version 2. See
8 * the COPYING file in the top-level directory.
9 *
10 * Authors: Marcelo Tosatti <mtosatti@redhat.com>
11 */
12
13#include <linux/kernel.h>
14#include <linux/cpuidle.h>
15#include <linux/time.h>
16#include <linux/ktime.h>
17#include <linux/hrtimer.h>
18#include <linux/tick.h>
19#include <linux/sched.h>
20#include <linux/module.h>
21#include <linux/kvm_para.h>
22
23static unsigned int guest_halt_poll_ns __read_mostly = 200000;
24module_param(guest_halt_poll_ns, uint, 0644);
25
26/* division factor to shrink halt_poll_ns */
27static unsigned int guest_halt_poll_shrink __read_mostly = 2;
28module_param(guest_halt_poll_shrink, uint, 0644);
29
30/* multiplication factor to grow per-cpu poll_limit_ns */
31static unsigned int guest_halt_poll_grow __read_mostly = 2;
32module_param(guest_halt_poll_grow, uint, 0644);
33
34/* value in us to start growing per-cpu halt_poll_ns */
35static unsigned int guest_halt_poll_grow_start __read_mostly = 50000;
36module_param(guest_halt_poll_grow_start, uint, 0644);
37
38/* allow shrinking guest halt poll */
39static bool guest_halt_poll_allow_shrink __read_mostly = true;
40module_param(guest_halt_poll_allow_shrink, bool, 0644);
41
42/**
43 * haltpoll_select - selects the next idle state to enter
44 * @drv: cpuidle driver containing state data
45 * @dev: the CPU
46 * @stop_tick: indication on whether or not to stop the tick
47 */
48static int haltpoll_select(struct cpuidle_driver *drv,
49 struct cpuidle_device *dev,
50 bool *stop_tick)
51{
52 int latency_req = cpuidle_governor_latency_req(dev->cpu);
53
54 if (!drv->state_count || latency_req == 0) {
55 *stop_tick = false;
56 return 0;
57 }
58
59 if (dev->poll_limit_ns == 0)
60 return 1;
61
62 /* Last state was poll? */
63 if (dev->last_state_idx == 0) {
64 /* Halt if no event occurred on poll window */
65 if (dev->poll_time_limit == true)
66 return 1;
67
68 *stop_tick = false;
69 /* Otherwise, poll again */
70 return 0;
71 }
72
73 *stop_tick = false;
74 /* Last state was halt: poll */
75 return 0;
76}
77
78static void adjust_poll_limit(struct cpuidle_device *dev, unsigned int block_us)
79{
80 unsigned int val;
81 u64 block_ns = block_us*NSEC_PER_USEC;
82
83 /* Grow poll_limit_ns if
84 * poll_limit_ns < block_ns <= guest_halt_poll_ns
85 */
86 if (block_ns > dev->poll_limit_ns && block_ns <= guest_halt_poll_ns) {
87 val = dev->poll_limit_ns * guest_halt_poll_grow;
88
89 if (val < guest_halt_poll_grow_start)
90 val = guest_halt_poll_grow_start;
91 if (val > guest_halt_poll_ns)
92 val = guest_halt_poll_ns;
93
94 dev->poll_limit_ns = val;
95 } else if (block_ns > guest_halt_poll_ns &&
96 guest_halt_poll_allow_shrink) {
97 unsigned int shrink = guest_halt_poll_shrink;
98
99 val = dev->poll_limit_ns;
100 if (shrink == 0)
101 val = 0;
102 else
103 val /= shrink;
104 dev->poll_limit_ns = val;
105 }
106}
107
108/**
109 * haltpoll_reflect - update variables and update poll time
110 * @dev: the CPU
111 * @index: the index of actual entered state
112 */
113static void haltpoll_reflect(struct cpuidle_device *dev, int index)
114{
115 dev->last_state_idx = index;
116
117 if (index != 0)
118 adjust_poll_limit(dev, dev->last_residency);
119}
120
121/**
122 * haltpoll_enable_device - scans a CPU's states and does setup
123 * @drv: cpuidle driver
124 * @dev: the CPU
125 */
126static int haltpoll_enable_device(struct cpuidle_driver *drv,
127 struct cpuidle_device *dev)
128{
129 dev->poll_limit_ns = 0;
130
131 return 0;
132}
133
134static struct cpuidle_governor haltpoll_governor = {
135 .name = "haltpoll",
136 .rating = 9,
137 .enable = haltpoll_enable_device,
138 .select = haltpoll_select,
139 .reflect = haltpoll_reflect,
140};
141
142static int __init init_haltpoll(void)
143{
144 if (kvm_para_available())
145 return cpuidle_register_governor(&haltpoll_governor);
146
147 return 0;
148}
149
150postcore_initcall(init_haltpoll);
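adjust_poll_limit() above grows the per-CPU poll window geometrically up to the guest_halt_poll_ns ceiling and shrinks it by guest_halt_poll_shrink once halts outlast that ceiling. The sketch below, not part of the patch, replays the growth arithmetic with the default module parameters, assuming each halt is longer than the current limit but no longer than the ceiling:

#include <stdio.h>

int main(void)
{
	unsigned int grow = 2, grow_start = 50000, ceiling = 200000;
	unsigned int limit = 0;		/* poll_limit_ns starts at 0 after enable */
	int i;

	for (i = 1; i <= 5; i++) {
		unsigned int val = limit * grow;

		if (val < grow_start)
			val = grow_start;
		if (val > ceiling)
			val = ceiling;
		limit = val;
		printf("wakeup %d: poll_limit_ns = %u\n", i, limit);
	}
	/* prints 50000, 100000, 200000, 200000, 200000 */
	return 0;
}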
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index f0dddc66af26..428eeb832fe7 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -38,7 +38,6 @@ struct ladder_device_state {
38 38
39struct ladder_device { 39struct ladder_device {
40 struct ladder_device_state states[CPUIDLE_STATE_MAX]; 40 struct ladder_device_state states[CPUIDLE_STATE_MAX];
41 int last_state_idx;
42}; 41};
43 42
44static DEFINE_PER_CPU(struct ladder_device, ladder_devices); 43static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
@@ -49,12 +48,13 @@ static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
49 * @old_idx: the current state index 48 * @old_idx: the current state index
50 * @new_idx: the new target state index 49 * @new_idx: the new target state index
51 */ 50 */
52static inline void ladder_do_selection(struct ladder_device *ldev, 51static inline void ladder_do_selection(struct cpuidle_device *dev,
52 struct ladder_device *ldev,
53 int old_idx, int new_idx) 53 int old_idx, int new_idx)
54{ 54{
55 ldev->states[old_idx].stats.promotion_count = 0; 55 ldev->states[old_idx].stats.promotion_count = 0;
56 ldev->states[old_idx].stats.demotion_count = 0; 56 ldev->states[old_idx].stats.demotion_count = 0;
57 ldev->last_state_idx = new_idx; 57 dev->last_state_idx = new_idx;
58} 58}
59 59
60/** 60/**
@@ -68,13 +68,13 @@ static int ladder_select_state(struct cpuidle_driver *drv,
68{ 68{
69 struct ladder_device *ldev = this_cpu_ptr(&ladder_devices); 69 struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
70 struct ladder_device_state *last_state; 70 struct ladder_device_state *last_state;
71 int last_residency, last_idx = ldev->last_state_idx; 71 int last_residency, last_idx = dev->last_state_idx;
72 int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0; 72 int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
73 int latency_req = cpuidle_governor_latency_req(dev->cpu); 73 int latency_req = cpuidle_governor_latency_req(dev->cpu);
74 74
75 /* Special case when user has set very strict latency requirement */ 75 /* Special case when user has set very strict latency requirement */
76 if (unlikely(latency_req == 0)) { 76 if (unlikely(latency_req == 0)) {
77 ladder_do_selection(ldev, last_idx, 0); 77 ladder_do_selection(dev, ldev, last_idx, 0);
78 return 0; 78 return 0;
79 } 79 }
80 80
@@ -91,7 +91,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
91 last_state->stats.promotion_count++; 91 last_state->stats.promotion_count++;
92 last_state->stats.demotion_count = 0; 92 last_state->stats.demotion_count = 0;
93 if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) { 93 if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) {
94 ladder_do_selection(ldev, last_idx, last_idx + 1); 94 ladder_do_selection(dev, ldev, last_idx, last_idx + 1);
95 return last_idx + 1; 95 return last_idx + 1;
96 } 96 }
97 } 97 }
@@ -107,7 +107,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
107 if (drv->states[i].exit_latency <= latency_req) 107 if (drv->states[i].exit_latency <= latency_req)
108 break; 108 break;
109 } 109 }
110 ladder_do_selection(ldev, last_idx, i); 110 ladder_do_selection(dev, ldev, last_idx, i);
111 return i; 111 return i;
112 } 112 }
113 113
@@ -116,7 +116,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
116 last_state->stats.demotion_count++; 116 last_state->stats.demotion_count++;
117 last_state->stats.promotion_count = 0; 117 last_state->stats.promotion_count = 0;
118 if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) { 118 if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) {
119 ladder_do_selection(ldev, last_idx, last_idx - 1); 119 ladder_do_selection(dev, ldev, last_idx, last_idx - 1);
120 return last_idx - 1; 120 return last_idx - 1;
121 } 121 }
122 } 122 }
@@ -139,7 +139,7 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
139 struct ladder_device_state *lstate; 139 struct ladder_device_state *lstate;
140 struct cpuidle_state *state; 140 struct cpuidle_state *state;
141 141
142 ldev->last_state_idx = first_idx; 142 dev->last_state_idx = first_idx;
143 143
144 for (i = first_idx; i < drv->state_count; i++) { 144 for (i = first_idx; i < drv->state_count; i++) {
145 state = &drv->states[i]; 145 state = &drv->states[i];
@@ -167,9 +167,8 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
167 */ 167 */
168static void ladder_reflect(struct cpuidle_device *dev, int index) 168static void ladder_reflect(struct cpuidle_device *dev, int index)
169{ 169{
170 struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
171 if (index > 0) 170 if (index > 0)
172 ldev->last_state_idx = index; 171 dev->last_state_idx = index;
173} 172}
174 173
175static struct cpuidle_governor ladder_governor = { 174static struct cpuidle_governor ladder_governor = {
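The common thread in the ladder changes above, and in the menu and teo hunks below, is that the last selected state index no longer lives in each governor's private per-CPU data: it is tracked once on the device. The include/linux/cpuidle.h side of the change is not part of this excerpt, but the governors now rely on a field presumably along these lines:

	struct cpuidle_device {
		/* ... existing fields ... */
		int	last_state_idx;	/* state index picked by the governor last time */
		/* ... */
	};

Keeping it in struct cpuidle_device lets the core and all three governors share the same bookkeeping instead of duplicating it per governor.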
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index e9a28c7846d6..e5a5d0c8d66b 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -117,7 +117,6 @@
117 */ 117 */
118 118
119struct menu_device { 119struct menu_device {
120 int last_state_idx;
121 int needs_update; 120 int needs_update;
122 int tick_wakeup; 121 int tick_wakeup;
123 122
@@ -302,9 +301,10 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
302 !drv->states[0].disabled && !dev->states_usage[0].disable)) { 301 !drv->states[0].disabled && !dev->states_usage[0].disable)) {
303 /* 302 /*
304 * In this case state[0] will be used no matter what, so return 303 * In this case state[0] will be used no matter what, so return
305 * it right away and keep the tick running. 304 * it right away and keep the tick running if state[0] is a
305 * polling one.
306 */ 306 */
307 *stop_tick = false; 307 *stop_tick = !(drv->states[0].flags & CPUIDLE_FLAG_POLLING);
308 return 0; 308 return 0;
309 } 309 }
310 310
@@ -395,16 +395,9 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
395 395
396 return idx; 396 return idx;
397 } 397 }
398 if (s->exit_latency > latency_req) { 398 if (s->exit_latency > latency_req)
399 /*
400 * If we break out of the loop for latency reasons, use
401 * the target residency of the selected state as the
402 * expected idle duration so that the tick is retained
403 * as long as that target residency is low enough.
404 */
405 predicted_us = drv->states[idx].target_residency;
406 break; 399 break;
407 } 400
408 idx = i; 401 idx = i;
409 } 402 }
410 403
@@ -455,7 +448,7 @@ static void menu_reflect(struct cpuidle_device *dev, int index)
455{ 448{
456 struct menu_device *data = this_cpu_ptr(&menu_devices); 449 struct menu_device *data = this_cpu_ptr(&menu_devices);
457 450
458 data->last_state_idx = index; 451 dev->last_state_idx = index;
459 data->needs_update = 1; 452 data->needs_update = 1;
460 data->tick_wakeup = tick_nohz_idle_got_tick(); 453 data->tick_wakeup = tick_nohz_idle_got_tick();
461} 454}
@@ -468,7 +461,7 @@ static void menu_reflect(struct cpuidle_device *dev, int index)
468static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) 461static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
469{ 462{
470 struct menu_device *data = this_cpu_ptr(&menu_devices); 463 struct menu_device *data = this_cpu_ptr(&menu_devices);
471 int last_idx = data->last_state_idx; 464 int last_idx = dev->last_state_idx;
472 struct cpuidle_state *target = &drv->states[last_idx]; 465 struct cpuidle_state *target = &drv->states[last_idx];
473 unsigned int measured_us; 466 unsigned int measured_us;
474 unsigned int new_factor; 467 unsigned int new_factor;
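The first menu hunk changes the early-exit path taken when a tight latency limit (or disabled deeper states) forces state 0. Previously the tick was kept running unconditionally on that path; now it is kept running only when state 0 is a polling state, since busy-polling with the tick stopped could let the CPU spin for a very long time, while a genuine low-latency idle state needs no tick as a backstop. Condensed:

	/* early return with index 0, after this patch */
	*stop_tick = !(drv->states[0].flags & CPUIDLE_FLAG_POLLING);
	/* polling state 0     -> *stop_tick == false, the tick bounds the busy-wait */
	/* non-polling state 0 -> *stop_tick == true,  the tick may be stopped        */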
diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
index 7d05efdbd3c6..b5a0e498f798 100644
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -96,7 +96,6 @@ struct teo_idle_state {
96 * @time_span_ns: Time between idle state selection and post-wakeup update. 96 * @time_span_ns: Time between idle state selection and post-wakeup update.
97 * @sleep_length_ns: Time till the closest timer event (at the selection time). 97 * @sleep_length_ns: Time till the closest timer event (at the selection time).
98 * @states: Idle states data corresponding to this CPU. 98 * @states: Idle states data corresponding to this CPU.
99 * @last_state: Idle state entered by the CPU last time.
100 * @interval_idx: Index of the most recent saved idle interval. 99 * @interval_idx: Index of the most recent saved idle interval.
101 * @intervals: Saved idle duration values. 100 * @intervals: Saved idle duration values.
102 */ 101 */
@@ -104,7 +103,6 @@ struct teo_cpu {
104 u64 time_span_ns; 103 u64 time_span_ns;
105 u64 sleep_length_ns; 104 u64 sleep_length_ns;
106 struct teo_idle_state states[CPUIDLE_STATE_MAX]; 105 struct teo_idle_state states[CPUIDLE_STATE_MAX];
107 int last_state;
108 int interval_idx; 106 int interval_idx;
109 unsigned int intervals[INTERVALS]; 107 unsigned int intervals[INTERVALS];
110}; 108};
@@ -125,12 +123,15 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
125 123
126 if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) { 124 if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
127 /* 125 /*
128 * One of the safety nets has triggered or this was a timer 126 * One of the safety nets has triggered or the wakeup was close
129 * wakeup (or equivalent). 127 * enough to the closest timer event expected at the idle state
128 * selection time to be discarded.
130 */ 129 */
131 measured_us = sleep_length_us; 130 measured_us = UINT_MAX;
132 } else { 131 } else {
133 unsigned int lat = drv->states[cpu_data->last_state].exit_latency; 132 unsigned int lat;
133
134 lat = drv->states[dev->last_state_idx].exit_latency;
134 135
135 measured_us = ktime_to_us(cpu_data->time_span_ns); 136 measured_us = ktime_to_us(cpu_data->time_span_ns);
136 /* 137 /*
@@ -189,15 +190,6 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
189 } 190 }
190 191
191 /* 192 /*
192 * If the total time span between idle state selection and the "reflect"
193 * callback is greater than or equal to the sleep length determined at
194 * the idle state selection time, the wakeup is likely to be due to a
195 * timer event.
196 */
197 if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
198 measured_us = UINT_MAX;
199
200 /*
201 * Save idle duration values corresponding to non-timer wakeups for 193 * Save idle duration values corresponding to non-timer wakeups for
202 * pattern detection. 194 * pattern detection.
203 */ 195 */
@@ -242,12 +234,12 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
242 struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); 234 struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
243 int latency_req = cpuidle_governor_latency_req(dev->cpu); 235 int latency_req = cpuidle_governor_latency_req(dev->cpu);
244 unsigned int duration_us, count; 236 unsigned int duration_us, count;
245 int max_early_idx, idx, i; 237 int max_early_idx, constraint_idx, idx, i;
246 ktime_t delta_tick; 238 ktime_t delta_tick;
247 239
248 if (cpu_data->last_state >= 0) { 240 if (dev->last_state_idx >= 0) {
249 teo_update(drv, dev); 241 teo_update(drv, dev);
250 cpu_data->last_state = -1; 242 dev->last_state_idx = -1;
251 } 243 }
252 244
253 cpu_data->time_span_ns = local_clock(); 245 cpu_data->time_span_ns = local_clock();
@@ -257,6 +249,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
257 249
258 count = 0; 250 count = 0;
259 max_early_idx = -1; 251 max_early_idx = -1;
252 constraint_idx = drv->state_count;
260 idx = -1; 253 idx = -1;
261 254
262 for (i = 0; i < drv->state_count; i++) { 255 for (i = 0; i < drv->state_count; i++) {
@@ -286,16 +279,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
286 if (s->target_residency > duration_us) 279 if (s->target_residency > duration_us)
287 break; 280 break;
288 281
289 if (s->exit_latency > latency_req) { 282 if (s->exit_latency > latency_req && constraint_idx > i)
290 /* 283 constraint_idx = i;
291 * If we break out of the loop for latency reasons, use
292 * the target residency of the selected state as the
293 * expected idle duration to avoid stopping the tick
294 * as long as that target residency is low enough.
295 */
296 duration_us = drv->states[idx].target_residency;
297 goto refine;
298 }
299 284
300 idx = i; 285 idx = i;
301 286
@@ -321,7 +306,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
321 duration_us = drv->states[idx].target_residency; 306 duration_us = drv->states[idx].target_residency;
322 } 307 }
323 308
324refine: 309 /*
310 * If there is a latency constraint, it may be necessary to use a
311 * shallower idle state than the one selected so far.
312 */
313 if (constraint_idx < idx)
314 idx = constraint_idx;
315
325 if (idx < 0) { 316 if (idx < 0) {
326 idx = 0; /* No states enabled. Must use 0. */ 317 idx = 0; /* No states enabled. Must use 0. */
327 } else if (idx > 0) { 318 } else if (idx > 0) {
@@ -331,13 +322,12 @@ refine:
331 322
332 /* 323 /*
333 * Count and sum the most recent idle duration values less than 324 * Count and sum the most recent idle duration values less than
334 * the target residency of the state selected so far, find the 325 * the current expected idle duration value.
335 * max.
336 */ 326 */
337 for (i = 0; i < INTERVALS; i++) { 327 for (i = 0; i < INTERVALS; i++) {
338 unsigned int val = cpu_data->intervals[i]; 328 unsigned int val = cpu_data->intervals[i];
339 329
340 if (val >= drv->states[idx].target_residency) 330 if (val >= duration_us)
341 continue; 331 continue;
342 332
343 count++; 333 count++;
@@ -356,8 +346,10 @@ refine:
356 * would be too shallow. 346 * would be too shallow.
357 */ 347 */
358 if (!(tick_nohz_tick_stopped() && avg_us < TICK_USEC)) { 348 if (!(tick_nohz_tick_stopped() && avg_us < TICK_USEC)) {
359 idx = teo_find_shallower_state(drv, dev, idx, avg_us);
360 duration_us = avg_us; 349 duration_us = avg_us;
350 if (drv->states[idx].target_residency > avg_us)
351 idx = teo_find_shallower_state(drv, dev,
352 idx, avg_us);
361 } 353 }
362 } 354 }
363 } 355 }
@@ -394,7 +386,7 @@ static void teo_reflect(struct cpuidle_device *dev, int state)
394{ 386{
395 struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); 387 struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
396 388
397 cpu_data->last_state = state; 389 dev->last_state_idx = state;
398 /* 390 /*
399 * If the wakeup was not "natural", but triggered by one of the safety 391 * If the wakeup was not "natural", but triggered by one of the safety
400 * nets, assume that the CPU might have been idle for the entire sleep 392 * nets, assume that the CPU might have been idle for the entire sleep
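With the refine label gone, teo_select() no longer breaks out of the scan when it meets a state whose exit latency violates the PM QoS limit; it records the shallowest such state in constraint_idx and keeps scanning, then applies the constraint once after the loop. Stripped of the disabled-state and residency handling, the flow is roughly:

	constraint_idx = drv->state_count;
	for (i = 0; i < drv->state_count; i++) {
		/* ... skip disabled states, stop once target_residency > duration_us ... */
		if (drv->states[i].exit_latency > latency_req && constraint_idx > i)
			constraint_idx = i;	/* shallowest state that is too slow to exit */
		idx = i;
	}
	if (constraint_idx < idx)
		idx = constraint_idx;		/* honour the latency limit last */

This mirrors the menu change above and is what allows the tick to be stopped even when PM QoS caps the exit latency.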
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index 02b9315a9e96..c8fa5f41dfc4 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -20,16 +20,9 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,
20 local_irq_enable(); 20 local_irq_enable();
21 if (!current_set_polling_and_test()) { 21 if (!current_set_polling_and_test()) {
22 unsigned int loop_count = 0; 22 unsigned int loop_count = 0;
23 u64 limit = TICK_NSEC; 23 u64 limit;
24 int i;
25 24
26 for (i = 1; i < drv->state_count; i++) { 25 limit = cpuidle_poll_time(drv, dev);
27 if (drv->states[i].disabled || dev->states_usage[i].disable)
28 continue;
29
30 limit = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
31 break;
32 }
33 26
34 while (!need_resched()) { 27 while (!need_resched()) {
35 cpu_relax(); 28 cpu_relax();
diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
index eb20adb5de23..2bb2683b493c 100644
--- a/drivers/cpuidle/sysfs.c
+++ b/drivers/cpuidle/sysfs.c
@@ -334,6 +334,7 @@ struct cpuidle_state_kobj {
334 struct cpuidle_state_usage *state_usage; 334 struct cpuidle_state_usage *state_usage;
335 struct completion kobj_unregister; 335 struct completion kobj_unregister;
336 struct kobject kobj; 336 struct kobject kobj;
337 struct cpuidle_device *device;
337}; 338};
338 339
339#ifdef CONFIG_SUSPEND 340#ifdef CONFIG_SUSPEND
@@ -391,6 +392,7 @@ static inline void cpuidle_remove_s2idle_attr_group(struct cpuidle_state_kobj *k
391#define kobj_to_state_obj(k) container_of(k, struct cpuidle_state_kobj, kobj) 392#define kobj_to_state_obj(k) container_of(k, struct cpuidle_state_kobj, kobj)
392#define kobj_to_state(k) (kobj_to_state_obj(k)->state) 393#define kobj_to_state(k) (kobj_to_state_obj(k)->state)
393#define kobj_to_state_usage(k) (kobj_to_state_obj(k)->state_usage) 394#define kobj_to_state_usage(k) (kobj_to_state_obj(k)->state_usage)
395#define kobj_to_device(k) (kobj_to_state_obj(k)->device)
394#define attr_to_stateattr(a) container_of(a, struct cpuidle_state_attr, attr) 396#define attr_to_stateattr(a) container_of(a, struct cpuidle_state_attr, attr)
395 397
396static ssize_t cpuidle_state_show(struct kobject *kobj, struct attribute *attr, 398static ssize_t cpuidle_state_show(struct kobject *kobj, struct attribute *attr,
@@ -414,10 +416,14 @@ static ssize_t cpuidle_state_store(struct kobject *kobj, struct attribute *attr,
414 struct cpuidle_state *state = kobj_to_state(kobj); 416 struct cpuidle_state *state = kobj_to_state(kobj);
415 struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj); 417 struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj);
416 struct cpuidle_state_attr *cattr = attr_to_stateattr(attr); 418 struct cpuidle_state_attr *cattr = attr_to_stateattr(attr);
419 struct cpuidle_device *dev = kobj_to_device(kobj);
417 420
418 if (cattr->store) 421 if (cattr->store)
419 ret = cattr->store(state, state_usage, buf, size); 422 ret = cattr->store(state, state_usage, buf, size);
420 423
424 /* reset poll time cache */
425 dev->poll_limit_ns = 0;
426
421 return ret; 427 return ret;
422} 428}
423 429
@@ -468,6 +474,7 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
468 } 474 }
469 kobj->state = &drv->states[i]; 475 kobj->state = &drv->states[i];
470 kobj->state_usage = &device->states_usage[i]; 476 kobj->state_usage = &device->states_usage[i];
477 kobj->device = device;
471 init_completion(&kobj->kobj_unregister); 478 init_completion(&kobj->kobj_unregister);
472 479
473 ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle, 480 ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle,
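Together with the poll_state.c hunk above, this moves the poll-time computation out of poll_idle(): the limit now comes from cpuidle_poll_time() and is cached per device in dev->poll_limit_ns, and writing any state attribute from sysfs (for instance toggling "disable") zeroes the cache so the value is recomputed on the next idle entry. Assuming the helper mirrors the logic that was removed from poll_idle(), its shape is roughly:

	u64 cpuidle_poll_time(struct cpuidle_driver *drv, struct cpuidle_device *dev)
	{
		u64 limit_ns;
		int i;

		if (dev->poll_limit_ns)
			return dev->poll_limit_ns;	/* cached value still valid */

		limit_ns = TICK_NSEC;
		for (i = 1; i < drv->state_count; i++) {
			if (drv->states[i].disabled || dev->states_usage[i].disable)
				continue;

			limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
			break;
		}

		dev->poll_limit_ns = limit_ns;
		return dev->poll_limit_ns;
	}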
diff --git a/drivers/devfreq/Kconfig b/drivers/devfreq/Kconfig
index ba98a4e3ad33..defe1d438710 100644
--- a/drivers/devfreq/Kconfig
+++ b/drivers/devfreq/Kconfig
@@ -93,15 +93,28 @@ config ARM_EXYNOS_BUS_DEVFREQ
93 This does not yet operate with optimal voltages. 93 This does not yet operate with optimal voltages.
94 94
95config ARM_TEGRA_DEVFREQ 95config ARM_TEGRA_DEVFREQ
96 tristate "Tegra DEVFREQ Driver" 96 tristate "NVIDIA Tegra30/114/124/210 DEVFREQ Driver"
97 depends on ARCH_TEGRA_124_SOC 97 depends on ARCH_TEGRA_3x_SOC || ARCH_TEGRA_114_SOC || \
98 select DEVFREQ_GOV_SIMPLE_ONDEMAND 98 ARCH_TEGRA_132_SOC || ARCH_TEGRA_124_SOC || \
99 ARCH_TEGRA_210_SOC || \
100 COMPILE_TEST
99 select PM_OPP 101 select PM_OPP
100 help 102 help
101 This adds the DEVFREQ driver for the Tegra family of SoCs. 103 This adds the DEVFREQ driver for the Tegra family of SoCs.
102 It reads ACTMON counters of memory controllers and adjusts the 104 It reads ACTMON counters of memory controllers and adjusts the
103 operating frequencies and voltages with OPP support. 105 operating frequencies and voltages with OPP support.
104 106
107config ARM_TEGRA20_DEVFREQ
108 tristate "NVIDIA Tegra20 DEVFREQ Driver"
109 depends on (TEGRA_MC && TEGRA20_EMC) || COMPILE_TEST
110 depends on COMMON_CLK
111 select DEVFREQ_GOV_SIMPLE_ONDEMAND
112 select PM_OPP
113 help
114 This adds the DEVFREQ driver for the Tegra20 family of SoCs.
115 It reads Memory Controller counters and adjusts the operating
116 frequencies and voltages with OPP support.
117
105config ARM_RK3399_DMC_DEVFREQ 118config ARM_RK3399_DMC_DEVFREQ
106 tristate "ARM RK3399 DMC DEVFREQ Driver" 119 tristate "ARM RK3399 DMC DEVFREQ Driver"
107 depends on ARCH_ROCKCHIP 120 depends on ARCH_ROCKCHIP
diff --git a/drivers/devfreq/Makefile b/drivers/devfreq/Makefile
index 32b8d4d3f12c..338ae8440db6 100644
--- a/drivers/devfreq/Makefile
+++ b/drivers/devfreq/Makefile
@@ -10,7 +10,8 @@ obj-$(CONFIG_DEVFREQ_GOV_PASSIVE) += governor_passive.o
10# DEVFREQ Drivers 10# DEVFREQ Drivers
11obj-$(CONFIG_ARM_EXYNOS_BUS_DEVFREQ) += exynos-bus.o 11obj-$(CONFIG_ARM_EXYNOS_BUS_DEVFREQ) += exynos-bus.o
12obj-$(CONFIG_ARM_RK3399_DMC_DEVFREQ) += rk3399_dmc.o 12obj-$(CONFIG_ARM_RK3399_DMC_DEVFREQ) += rk3399_dmc.o
13obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra-devfreq.o 13obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra30-devfreq.o
14obj-$(CONFIG_ARM_TEGRA20_DEVFREQ) += tegra20-devfreq.o
14 15
15# DEVFREQ Event Drivers 16# DEVFREQ Event Drivers
16obj-$(CONFIG_PM_DEVFREQ_EVENT) += event/ 17obj-$(CONFIG_PM_DEVFREQ_EVENT) += event/
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index ab22bf8a12d6..446490c9d635 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -254,7 +254,7 @@ static struct devfreq_governor *try_then_request_governor(const char *name)
254 /* Restore previous state before return */ 254 /* Restore previous state before return */
255 mutex_lock(&devfreq_list_lock); 255 mutex_lock(&devfreq_list_lock);
256 if (err) 256 if (err)
257 return ERR_PTR(err); 257 return (err < 0) ? ERR_PTR(err) : ERR_PTR(-EINVAL);
258 258
259 governor = find_devfreq_governor(name); 259 governor = find_devfreq_governor(name);
260 } 260 }
@@ -402,7 +402,7 @@ static void devfreq_monitor(struct work_struct *work)
402 * devfreq_monitor_start() - Start load monitoring of devfreq instance 402 * devfreq_monitor_start() - Start load monitoring of devfreq instance
403 * @devfreq: the devfreq instance. 403 * @devfreq: the devfreq instance.
404 * 404 *
405 * Helper function for starting devfreq device load monitoing. By 405 * Helper function for starting devfreq device load monitoring. By
406 * default delayed work based monitoring is supported. Function 406 * default delayed work based monitoring is supported. Function
407 * to be called from governor in response to DEVFREQ_GOV_START 407 * to be called from governor in response to DEVFREQ_GOV_START
408 * event when device is added to devfreq framework. 408 * event when device is added to devfreq framework.
@@ -420,7 +420,7 @@ EXPORT_SYMBOL(devfreq_monitor_start);
420 * devfreq_monitor_stop() - Stop load monitoring of a devfreq instance 420 * devfreq_monitor_stop() - Stop load monitoring of a devfreq instance
421 * @devfreq: the devfreq instance. 421 * @devfreq: the devfreq instance.
422 * 422 *
423 * Helper function to stop devfreq device load monitoing. Function 423 * Helper function to stop devfreq device load monitoring. Function
424 * to be called from governor in response to DEVFREQ_GOV_STOP 424 * to be called from governor in response to DEVFREQ_GOV_STOP
425 * event when device is removed from devfreq framework. 425 * event when device is removed from devfreq framework.
426 */ 426 */
@@ -434,7 +434,7 @@ EXPORT_SYMBOL(devfreq_monitor_stop);
434 * devfreq_monitor_suspend() - Suspend load monitoring of a devfreq instance 434 * devfreq_monitor_suspend() - Suspend load monitoring of a devfreq instance
435 * @devfreq: the devfreq instance. 435 * @devfreq: the devfreq instance.
436 * 436 *
437 * Helper function to suspend devfreq device load monitoing. Function 437 * Helper function to suspend devfreq device load monitoring. Function
438 * to be called from governor in response to DEVFREQ_GOV_SUSPEND 438 * to be called from governor in response to DEVFREQ_GOV_SUSPEND
439 * event or when polling interval is set to zero. 439 * event or when polling interval is set to zero.
440 * 440 *
@@ -461,7 +461,7 @@ EXPORT_SYMBOL(devfreq_monitor_suspend);
461 * devfreq_monitor_resume() - Resume load monitoring of a devfreq instance 461 * devfreq_monitor_resume() - Resume load monitoring of a devfreq instance
462 * @devfreq: the devfreq instance. 462 * @devfreq: the devfreq instance.
463 * 463 *
464 * Helper function to resume devfreq device load monitoing. Function 464 * Helper function to resume devfreq device load monitoring. Function
465 * to be called from governor in response to DEVFREQ_GOV_RESUME 465 * to be called from governor in response to DEVFREQ_GOV_RESUME
466 * event or when polling interval is set to non-zero. 466 * event or when polling interval is set to non-zero.
467 */ 467 */
@@ -867,7 +867,7 @@ EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_phandle);
867 867
868/** 868/**
869 * devm_devfreq_remove_device() - Resource-managed devfreq_remove_device() 869 * devm_devfreq_remove_device() - Resource-managed devfreq_remove_device()
870 * @dev: the device to add devfreq feature. 870 * @dev: the device from which to remove devfreq feature.
871 * @devfreq: the devfreq instance to be removed 871 * @devfreq: the devfreq instance to be removed
872 */ 872 */
873void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq) 873void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq)
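The try_then_request_governor() change is a correctness fix: ERR_PTR() only encodes negative errno values, so if the module-request path were ever to return a positive value, ERR_PTR(err) would yield a pointer that IS_ERR() does not flag and the caller would treat it as a valid governor. A two-line illustration of the pitfall:

	void *p1 = ERR_PTR(-EINVAL);	/* IS_ERR(p1) == true,  PTR_ERR(p1) == -EINVAL */
	void *p2 = ERR_PTR(1);		/* IS_ERR(p2) == false: looks like a usable pointer */

Mapping any non-negative error to -EINVAL keeps the caller's IS_ERR() check meaningful.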
diff --git a/drivers/devfreq/event/exynos-ppmu.c b/drivers/devfreq/event/exynos-ppmu.c
index 3ee3dd5653aa..87b42055e6bc 100644
--- a/drivers/devfreq/event/exynos-ppmu.c
+++ b/drivers/devfreq/event/exynos-ppmu.c
@@ -13,6 +13,7 @@
13#include <linux/kernel.h> 13#include <linux/kernel.h>
14#include <linux/module.h> 14#include <linux/module.h>
15#include <linux/of_address.h> 15#include <linux/of_address.h>
16#include <linux/of_device.h>
16#include <linux/platform_device.h> 17#include <linux/platform_device.h>
17#include <linux/regmap.h> 18#include <linux/regmap.h>
18#include <linux/suspend.h> 19#include <linux/suspend.h>
@@ -20,6 +21,11 @@
20 21
21#include "exynos-ppmu.h" 22#include "exynos-ppmu.h"
22 23
24enum exynos_ppmu_type {
25 EXYNOS_TYPE_PPMU,
26 EXYNOS_TYPE_PPMU_V2,
27};
28
23struct exynos_ppmu_data { 29struct exynos_ppmu_data {
24 struct clk *clk; 30 struct clk *clk;
25}; 31};
@@ -33,6 +39,7 @@ struct exynos_ppmu {
33 struct regmap *regmap; 39 struct regmap *regmap;
34 40
35 struct exynos_ppmu_data ppmu; 41 struct exynos_ppmu_data ppmu;
42 enum exynos_ppmu_type ppmu_type;
36}; 43};
37 44
38#define PPMU_EVENT(name) \ 45#define PPMU_EVENT(name) \
@@ -86,6 +93,12 @@ static struct __exynos_ppmu_events {
86 PPMU_EVENT(d1-cpu), 93 PPMU_EVENT(d1-cpu),
87 PPMU_EVENT(d1-general), 94 PPMU_EVENT(d1-general),
88 PPMU_EVENT(d1-rt), 95 PPMU_EVENT(d1-rt),
96
97 /* For Exynos5422 SoC */
98 PPMU_EVENT(dmc0_0),
99 PPMU_EVENT(dmc0_1),
100 PPMU_EVENT(dmc1_0),
101 PPMU_EVENT(dmc1_1),
89}; 102};
90 103
91static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev) 104static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev)
@@ -151,9 +164,9 @@ static int exynos_ppmu_set_event(struct devfreq_event_dev *edev)
151 if (ret < 0) 164 if (ret < 0)
152 return ret; 165 return ret;
153 166
154 /* Set the event of Read/Write data count */ 167 /* Set the event of proper data type monitoring */
155 ret = regmap_write(info->regmap, PPMU_BEVTxSEL(id), 168 ret = regmap_write(info->regmap, PPMU_BEVTxSEL(id),
156 PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT); 169 edev->desc->event_type);
157 if (ret < 0) 170 if (ret < 0)
158 return ret; 171 return ret;
159 172
@@ -365,23 +378,11 @@ static int exynos_ppmu_v2_set_event(struct devfreq_event_dev *edev)
365 if (ret < 0) 378 if (ret < 0)
366 return ret; 379 return ret;
367 380
368 /* Set the event of Read/Write data count */ 381 /* Set the event of proper data type monitoring */
369 switch (id) { 382 ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
370 case PPMU_PMNCNT0: 383 edev->desc->event_type);
371 case PPMU_PMNCNT1: 384 if (ret < 0)
372 case PPMU_PMNCNT2: 385 return ret;
373 ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
374 PPMU_V2_RO_DATA_CNT | PPMU_V2_WO_DATA_CNT);
375 if (ret < 0)
376 return ret;
377 break;
378 case PPMU_PMNCNT3:
379 ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
380 PPMU_V2_EVT3_RW_DATA_CNT);
381 if (ret < 0)
382 return ret;
383 break;
384 }
385 386
386 /* Reset cycle counter/performance counter and enable PPMU */ 387 /* Reset cycle counter/performance counter and enable PPMU */
387 ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc); 388 ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc);
@@ -480,31 +481,24 @@ static const struct devfreq_event_ops exynos_ppmu_v2_ops = {
480static const struct of_device_id exynos_ppmu_id_match[] = { 481static const struct of_device_id exynos_ppmu_id_match[] = {
481 { 482 {
482 .compatible = "samsung,exynos-ppmu", 483 .compatible = "samsung,exynos-ppmu",
483 .data = (void *)&exynos_ppmu_ops, 484 .data = (void *)EXYNOS_TYPE_PPMU,
484 }, { 485 }, {
485 .compatible = "samsung,exynos-ppmu-v2", 486 .compatible = "samsung,exynos-ppmu-v2",
486 .data = (void *)&exynos_ppmu_v2_ops, 487 .data = (void *)EXYNOS_TYPE_PPMU_V2,
487 }, 488 },
488 { /* sentinel */ }, 489 { /* sentinel */ },
489}; 490};
490MODULE_DEVICE_TABLE(of, exynos_ppmu_id_match); 491MODULE_DEVICE_TABLE(of, exynos_ppmu_id_match);
491 492
492static struct devfreq_event_ops *exynos_bus_get_ops(struct device_node *np)
493{
494 const struct of_device_id *match;
495
496 match = of_match_node(exynos_ppmu_id_match, np);
497 return (struct devfreq_event_ops *)match->data;
498}
499
500static int of_get_devfreq_events(struct device_node *np, 493static int of_get_devfreq_events(struct device_node *np,
501 struct exynos_ppmu *info) 494 struct exynos_ppmu *info)
502{ 495{
503 struct devfreq_event_desc *desc; 496 struct devfreq_event_desc *desc;
504 struct devfreq_event_ops *event_ops;
505 struct device *dev = info->dev; 497 struct device *dev = info->dev;
506 struct device_node *events_np, *node; 498 struct device_node *events_np, *node;
507 int i, j, count; 499 int i, j, count;
500 const struct of_device_id *of_id;
501 int ret;
508 502
509 events_np = of_get_child_by_name(np, "events"); 503 events_np = of_get_child_by_name(np, "events");
510 if (!events_np) { 504 if (!events_np) {
@@ -512,7 +506,6 @@ static int of_get_devfreq_events(struct device_node *np,
512 "failed to get child node of devfreq-event devices\n"); 506 "failed to get child node of devfreq-event devices\n");
513 return -EINVAL; 507 return -EINVAL;
514 } 508 }
515 event_ops = exynos_bus_get_ops(np);
516 509
517 count = of_get_child_count(events_np); 510 count = of_get_child_count(events_np);
518 desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL); 511 desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
@@ -520,6 +513,12 @@ static int of_get_devfreq_events(struct device_node *np,
520 return -ENOMEM; 513 return -ENOMEM;
521 info->num_events = count; 514 info->num_events = count;
522 515
516 of_id = of_match_device(exynos_ppmu_id_match, dev);
517 if (of_id)
518 info->ppmu_type = (enum exynos_ppmu_type)of_id->data;
519 else
520 return -EINVAL;
521
523 j = 0; 522 j = 0;
524 for_each_child_of_node(events_np, node) { 523 for_each_child_of_node(events_np, node) {
525 for (i = 0; i < ARRAY_SIZE(ppmu_events); i++) { 524 for (i = 0; i < ARRAY_SIZE(ppmu_events); i++) {
@@ -537,10 +536,51 @@ static int of_get_devfreq_events(struct device_node *np,
537 continue; 536 continue;
538 } 537 }
539 538
540 desc[j].ops = event_ops; 539 switch (info->ppmu_type) {
540 case EXYNOS_TYPE_PPMU:
541 desc[j].ops = &exynos_ppmu_ops;
542 break;
543 case EXYNOS_TYPE_PPMU_V2:
544 desc[j].ops = &exynos_ppmu_v2_ops;
545 break;
546 }
547
541 desc[j].driver_data = info; 548 desc[j].driver_data = info;
542 549
543 of_property_read_string(node, "event-name", &desc[j].name); 550 of_property_read_string(node, "event-name", &desc[j].name);
551 ret = of_property_read_u32(node, "event-data-type",
552 &desc[j].event_type);
553 if (ret) {
554 /* Set the event of proper data type counting.
555 * Check if the data type has been defined in DT,
556 * use default if not.
557 */
558 if (info->ppmu_type == EXYNOS_TYPE_PPMU_V2) {
559 struct devfreq_event_dev edev;
560 int id;
561 /* Not all registers take the same value for
562 * read+write data count.
563 */
564 edev.desc = &desc[j];
565 id = exynos_ppmu_find_ppmu_id(&edev);
566
567 switch (id) {
568 case PPMU_PMNCNT0:
569 case PPMU_PMNCNT1:
570 case PPMU_PMNCNT2:
571 desc[j].event_type = PPMU_V2_RO_DATA_CNT
572 | PPMU_V2_WO_DATA_CNT;
573 break;
574 case PPMU_PMNCNT3:
575 desc[j].event_type =
576 PPMU_V2_EVT3_RW_DATA_CNT;
577 break;
578 }
579 } else {
580 desc[j].event_type = PPMU_RO_DATA_CNT |
581 PPMU_WO_DATA_CNT;
582 }
583 }
544 584
545 j++; 585 j++;
546 } 586 }
diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
index d9f377912c10..c832673273a2 100644
--- a/drivers/devfreq/exynos-bus.c
+++ b/drivers/devfreq/exynos-bus.c
@@ -22,7 +22,6 @@
22#include <linux/slab.h> 22#include <linux/slab.h>
23 23
24#define DEFAULT_SATURATION_RATIO 40 24#define DEFAULT_SATURATION_RATIO 40
25#define DEFAULT_VOLTAGE_TOLERANCE 2
26 25
27struct exynos_bus { 26struct exynos_bus {
28 struct device *dev; 27 struct device *dev;
@@ -34,9 +33,8 @@ struct exynos_bus {
34 33
35 unsigned long curr_freq; 34 unsigned long curr_freq;
36 35
37 struct regulator *regulator; 36 struct opp_table *opp_table;
38 struct clk *clk; 37 struct clk *clk;
39 unsigned int voltage_tolerance;
40 unsigned int ratio; 38 unsigned int ratio;
41}; 39};
42 40
@@ -90,62 +88,29 @@ static int exynos_bus_get_event(struct exynos_bus *bus,
90} 88}
91 89
92/* 90/*
93 * Must necessary function for devfreq simple-ondemand governor 91 * devfreq function for both simple-ondemand and passive governor
94 */ 92 */
95static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags) 93static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
96{ 94{
97 struct exynos_bus *bus = dev_get_drvdata(dev); 95 struct exynos_bus *bus = dev_get_drvdata(dev);
98 struct dev_pm_opp *new_opp; 96 struct dev_pm_opp *new_opp;
99 unsigned long old_freq, new_freq, new_volt, tol;
100 int ret = 0; 97 int ret = 0;
101 98
102 /* Get new opp-bus instance according to new bus clock */ 99 /* Get correct frequency for bus. */
103 new_opp = devfreq_recommended_opp(dev, freq, flags); 100 new_opp = devfreq_recommended_opp(dev, freq, flags);
104 if (IS_ERR(new_opp)) { 101 if (IS_ERR(new_opp)) {
105 dev_err(dev, "failed to get recommended opp instance\n"); 102 dev_err(dev, "failed to get recommended opp instance\n");
106 return PTR_ERR(new_opp); 103 return PTR_ERR(new_opp);
107 } 104 }
108 105
109 new_freq = dev_pm_opp_get_freq(new_opp);
110 new_volt = dev_pm_opp_get_voltage(new_opp);
111 dev_pm_opp_put(new_opp); 106 dev_pm_opp_put(new_opp);
112 107
113 old_freq = bus->curr_freq;
114
115 if (old_freq == new_freq)
116 return 0;
117 tol = new_volt * bus->voltage_tolerance / 100;
118
119 /* Change voltage and frequency according to new OPP level */ 108 /* Change voltage and frequency according to new OPP level */
120 mutex_lock(&bus->lock); 109 mutex_lock(&bus->lock);
110 ret = dev_pm_opp_set_rate(dev, *freq);
111 if (!ret)
112 bus->curr_freq = *freq;
121 113
122 if (old_freq < new_freq) {
123 ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
124 if (ret < 0) {
125 dev_err(bus->dev, "failed to set voltage\n");
126 goto out;
127 }
128 }
129
130 ret = clk_set_rate(bus->clk, new_freq);
131 if (ret < 0) {
132 dev_err(dev, "failed to change clock of bus\n");
133 clk_set_rate(bus->clk, old_freq);
134 goto out;
135 }
136
137 if (old_freq > new_freq) {
138 ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
139 if (ret < 0) {
140 dev_err(bus->dev, "failed to set voltage\n");
141 goto out;
142 }
143 }
144 bus->curr_freq = new_freq;
145
146 dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
147 old_freq, new_freq, clk_get_rate(bus->clk));
148out:
149 mutex_unlock(&bus->lock); 114 mutex_unlock(&bus->lock);
150 115
151 return ret; 116 return ret;
@@ -191,57 +156,12 @@ static void exynos_bus_exit(struct device *dev)
191 if (ret < 0) 156 if (ret < 0)
192 dev_warn(dev, "failed to disable the devfreq-event devices\n"); 157 dev_warn(dev, "failed to disable the devfreq-event devices\n");
193 158
194 if (bus->regulator)
195 regulator_disable(bus->regulator);
196
197 dev_pm_opp_of_remove_table(dev); 159 dev_pm_opp_of_remove_table(dev);
198 clk_disable_unprepare(bus->clk); 160 clk_disable_unprepare(bus->clk);
199} 161 if (bus->opp_table) {
200 162 dev_pm_opp_put_regulators(bus->opp_table);
201/* 163 bus->opp_table = NULL;
202 * Must necessary function for devfreq passive governor
203 */
204static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
205 u32 flags)
206{
207 struct exynos_bus *bus = dev_get_drvdata(dev);
208 struct dev_pm_opp *new_opp;
209 unsigned long old_freq, new_freq;
210 int ret = 0;
211
212 /* Get new opp-bus instance according to new bus clock */
213 new_opp = devfreq_recommended_opp(dev, freq, flags);
214 if (IS_ERR(new_opp)) {
215 dev_err(dev, "failed to get recommended opp instance\n");
216 return PTR_ERR(new_opp);
217 } 164 }
218
219 new_freq = dev_pm_opp_get_freq(new_opp);
220 dev_pm_opp_put(new_opp);
221
222 old_freq = bus->curr_freq;
223
224 if (old_freq == new_freq)
225 return 0;
226
227 /* Change the frequency according to new OPP level */
228 mutex_lock(&bus->lock);
229
230 ret = clk_set_rate(bus->clk, new_freq);
231 if (ret < 0) {
232 dev_err(dev, "failed to set the clock of bus\n");
233 goto out;
234 }
235
236 *freq = new_freq;
237 bus->curr_freq = new_freq;
238
239 dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
240 old_freq, new_freq, clk_get_rate(bus->clk));
241out:
242 mutex_unlock(&bus->lock);
243
244 return ret;
245} 165}
246 166
247static void exynos_bus_passive_exit(struct device *dev) 167static void exynos_bus_passive_exit(struct device *dev)
@@ -256,21 +176,19 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
256 struct exynos_bus *bus) 176 struct exynos_bus *bus)
257{ 177{
258 struct device *dev = bus->dev; 178 struct device *dev = bus->dev;
179 struct opp_table *opp_table;
180 const char *vdd = "vdd";
259 int i, ret, count, size; 181 int i, ret, count, size;
260 182
261 /* Get the regulator to provide each bus with the power */ 183 opp_table = dev_pm_opp_set_regulators(dev, &vdd, 1);
262 bus->regulator = devm_regulator_get(dev, "vdd"); 184 if (IS_ERR(opp_table)) {
263 if (IS_ERR(bus->regulator)) { 185 ret = PTR_ERR(opp_table);
264 dev_err(dev, "failed to get VDD regulator\n"); 186 dev_err(dev, "failed to set regulators %d\n", ret);
265 return PTR_ERR(bus->regulator);
266 }
267
268 ret = regulator_enable(bus->regulator);
269 if (ret < 0) {
270 dev_err(dev, "failed to enable VDD regulator\n");
271 return ret; 187 return ret;
272 } 188 }
273 189
190 bus->opp_table = opp_table;
191
274 /* 192 /*
275 * Get the devfreq-event devices to get the current utilization of 193 * Get the devfreq-event devices to get the current utilization of
276 * buses. This raw data will be used in devfreq ondemand governor. 194 * buses. This raw data will be used in devfreq ondemand governor.
@@ -311,14 +229,11 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
311 if (of_property_read_u32(np, "exynos,saturation-ratio", &bus->ratio)) 229 if (of_property_read_u32(np, "exynos,saturation-ratio", &bus->ratio))
312 bus->ratio = DEFAULT_SATURATION_RATIO; 230 bus->ratio = DEFAULT_SATURATION_RATIO;
313 231
314 if (of_property_read_u32(np, "exynos,voltage-tolerance",
315 &bus->voltage_tolerance))
316 bus->voltage_tolerance = DEFAULT_VOLTAGE_TOLERANCE;
317
318 return 0; 232 return 0;
319 233
320err_regulator: 234err_regulator:
321 regulator_disable(bus->regulator); 235 dev_pm_opp_put_regulators(bus->opp_table);
236 bus->opp_table = NULL;
322 237
323 return ret; 238 return ret;
324} 239}
@@ -383,6 +298,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
383 struct exynos_bus *bus; 298 struct exynos_bus *bus;
384 int ret, max_state; 299 int ret, max_state;
385 unsigned long min_freq, max_freq; 300 unsigned long min_freq, max_freq;
301 bool passive = false;
386 302
387 if (!np) { 303 if (!np) {
388 dev_err(dev, "failed to find devicetree node\n"); 304 dev_err(dev, "failed to find devicetree node\n");
@@ -396,27 +312,27 @@ static int exynos_bus_probe(struct platform_device *pdev)
396 bus->dev = &pdev->dev; 312 bus->dev = &pdev->dev;
397 platform_set_drvdata(pdev, bus); 313 platform_set_drvdata(pdev, bus);
398 314
399 /* Parse the device-tree to get the resource information */
400 ret = exynos_bus_parse_of(np, bus);
401 if (ret < 0)
402 return ret;
403
404 profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL); 315 profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
405 if (!profile) { 316 if (!profile)
406 ret = -ENOMEM; 317 return -ENOMEM;
407 goto err;
408 }
409 318
410 node = of_parse_phandle(dev->of_node, "devfreq", 0); 319 node = of_parse_phandle(dev->of_node, "devfreq", 0);
411 if (node) { 320 if (node) {
412 of_node_put(node); 321 of_node_put(node);
413 goto passive; 322 passive = true;
414 } else { 323 } else {
415 ret = exynos_bus_parent_parse_of(np, bus); 324 ret = exynos_bus_parent_parse_of(np, bus);
325 if (ret < 0)
326 return ret;
416 } 327 }
417 328
329 /* Parse the device-tree to get the resource information */
330 ret = exynos_bus_parse_of(np, bus);
418 if (ret < 0) 331 if (ret < 0)
419 goto err; 332 goto err_reg;
333
334 if (passive)
335 goto passive;
420 336
421 /* Initialize the struct profile and governor data for parent device */ 337 /* Initialize the struct profile and governor data for parent device */
422 profile->polling_ms = 50; 338 profile->polling_ms = 50;
@@ -468,7 +384,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
468 goto out; 384 goto out;
469passive: 385passive:
470 /* Initialize the struct profile and governor data for passive device */ 386 /* Initialize the struct profile and governor data for passive device */
471 profile->target = exynos_bus_passive_target; 387 profile->target = exynos_bus_target;
472 profile->exit = exynos_bus_passive_exit; 388 profile->exit = exynos_bus_passive_exit;
473 389
474 /* Get the instance of parent devfreq device */ 390 /* Get the instance of parent devfreq device */
@@ -507,6 +423,11 @@ out:
507err: 423err:
508 dev_pm_opp_of_remove_table(dev); 424 dev_pm_opp_of_remove_table(dev);
509 clk_disable_unprepare(bus->clk); 425 clk_disable_unprepare(bus->clk);
426err_reg:
427 if (!passive) {
428 dev_pm_opp_put_regulators(bus->opp_table);
429 bus->opp_table = NULL;
430 }
510 431
511 return ret; 432 return ret;
512} 433}
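The bus driver no longer touches the regulator directly: the "vdd" supply is handed to the OPP core with dev_pm_opp_set_regulators() when the parent device is parsed, and exynos_bus_target() shrinks to a single dev_pm_opp_set_rate() call shared by the simple-ondemand parent and the passive children. Stripped of error handling, the resulting pattern is:

	/* probe/parse time, parent device only */
	const char *vdd = "vdd";
	bus->opp_table = dev_pm_opp_set_regulators(dev, &vdd, 1);

	/* ->target() callback, both governors */
	new_opp = devfreq_recommended_opp(dev, freq, flags);
	dev_pm_opp_put(new_opp);
	ret = dev_pm_opp_set_rate(dev, *freq);	/* OPP core orders voltage and clock changes */

	/* teardown */
	dev_pm_opp_put_regulators(bus->opp_table);

This is what makes the separate exynos_bus_passive_target() and the hand-rolled voltage-tolerance handling unnecessary.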
diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
index 58308948b863..be6eeab9c814 100644
--- a/drivers/devfreq/governor_passive.c
+++ b/drivers/devfreq/governor_passive.c
@@ -149,7 +149,6 @@ static int devfreq_passive_notifier_call(struct notifier_block *nb,
149static int devfreq_passive_event_handler(struct devfreq *devfreq, 149static int devfreq_passive_event_handler(struct devfreq *devfreq,
150 unsigned int event, void *data) 150 unsigned int event, void *data)
151{ 151{
152 struct device *dev = devfreq->dev.parent;
153 struct devfreq_passive_data *p_data 152 struct devfreq_passive_data *p_data
154 = (struct devfreq_passive_data *)devfreq->data; 153 = (struct devfreq_passive_data *)devfreq->data;
155 struct devfreq *parent = (struct devfreq *)p_data->parent; 154 struct devfreq *parent = (struct devfreq *)p_data->parent;
@@ -165,12 +164,12 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
165 p_data->this = devfreq; 164 p_data->this = devfreq;
166 165
167 nb->notifier_call = devfreq_passive_notifier_call; 166 nb->notifier_call = devfreq_passive_notifier_call;
168 ret = devm_devfreq_register_notifier(dev, parent, nb, 167 ret = devfreq_register_notifier(parent, nb,
169 DEVFREQ_TRANSITION_NOTIFIER); 168 DEVFREQ_TRANSITION_NOTIFIER);
170 break; 169 break;
171 case DEVFREQ_GOV_STOP: 170 case DEVFREQ_GOV_STOP:
172 devm_devfreq_unregister_notifier(dev, parent, nb, 171 WARN_ON(devfreq_unregister_notifier(parent, nb,
173 DEVFREQ_TRANSITION_NOTIFIER); 172 DEVFREQ_TRANSITION_NOTIFIER));
174 break; 173 break;
175 default: 174 default:
176 break; 175 break;
diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
index 682465fa57e1..2e65d7279d79 100644
--- a/drivers/devfreq/rk3399_dmc.c
+++ b/drivers/devfreq/rk3399_dmc.c
@@ -351,7 +351,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
351 351
352 /* 352 /*
353 * Get dram timing and pass it to arm trust firmware, 353 * Get dram timing and pass it to arm trust firmware,
354 * the dram drvier in arm trust firmware will get these 354 * the dram driver in arm trust firmware will get these
355 * timing and to do dram initial. 355 * timing and to do dram initial.
356 */ 356 */
357 if (!of_get_ddr_timings(&data->timing, np)) { 357 if (!of_get_ddr_timings(&data->timing, np)) {
diff --git a/drivers/devfreq/tegra20-devfreq.c b/drivers/devfreq/tegra20-devfreq.c
new file mode 100644
index 000000000000..ff82bac9ee4e
--- /dev/null
+++ b/drivers/devfreq/tegra20-devfreq.c
@@ -0,0 +1,212 @@
1// SPDX-License-Identifier: GPL-2.0
2/*
3 * NVIDIA Tegra20 devfreq driver
4 *
5 * Copyright (C) 2019 GRATE-DRIVER project
6 */
7
8#include <linux/clk.h>
9#include <linux/devfreq.h>
10#include <linux/io.h>
11#include <linux/kernel.h>
12#include <linux/module.h>
13#include <linux/of_device.h>
14#include <linux/platform_device.h>
15#include <linux/pm_opp.h>
16#include <linux/slab.h>
17
18#include <soc/tegra/mc.h>
19
20#include "governor.h"
21
22#define MC_STAT_CONTROL 0x90
23#define MC_STAT_EMC_CLOCK_LIMIT 0xa0
24#define MC_STAT_EMC_CLOCKS 0xa4
25#define MC_STAT_EMC_CONTROL 0xa8
26#define MC_STAT_EMC_COUNT 0xb8
27
28#define EMC_GATHER_CLEAR (1 << 8)
29#define EMC_GATHER_ENABLE (3 << 8)
30
31struct tegra_devfreq {
32 struct devfreq *devfreq;
33 struct clk *emc_clock;
34 void __iomem *regs;
35};
36
37static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
38 u32 flags)
39{
40 struct tegra_devfreq *tegra = dev_get_drvdata(dev);
41 struct devfreq *devfreq = tegra->devfreq;
42 struct dev_pm_opp *opp;
43 unsigned long rate;
44 int err;
45
46 opp = devfreq_recommended_opp(dev, freq, flags);
47 if (IS_ERR(opp))
48 return PTR_ERR(opp);
49
50 rate = dev_pm_opp_get_freq(opp);
51 dev_pm_opp_put(opp);
52
53 err = clk_set_min_rate(tegra->emc_clock, rate);
54 if (err)
55 return err;
56
57 err = clk_set_rate(tegra->emc_clock, 0);
58 if (err)
59 goto restore_min_rate;
60
61 return 0;
62
63restore_min_rate:
64 clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);
65
66 return err;
67}
68
69static int tegra_devfreq_get_dev_status(struct device *dev,
70 struct devfreq_dev_status *stat)
71{
72 struct tegra_devfreq *tegra = dev_get_drvdata(dev);
73
74 /*
75 * EMC_COUNT returns number of memory events, that number is lower
76 * than the number of clocks. Conversion ratio of 1/8 results in a
77 * bit higher bandwidth than actually needed, it is good enough for
78 * the time being because drivers don't support requesting minimum
79 * needed memory bandwidth yet.
80 *
81 * TODO: adjust the ratio value once relevant drivers will support
82 * memory bandwidth management.
83 */
84 stat->busy_time = readl_relaxed(tegra->regs + MC_STAT_EMC_COUNT);
85 stat->total_time = readl_relaxed(tegra->regs + MC_STAT_EMC_CLOCKS) / 8;
86 stat->current_frequency = clk_get_rate(tegra->emc_clock);
87
88 writel_relaxed(EMC_GATHER_CLEAR, tegra->regs + MC_STAT_CONTROL);
89 writel_relaxed(EMC_GATHER_ENABLE, tegra->regs + MC_STAT_CONTROL);
90
91 return 0;
92}
93
94static struct devfreq_dev_profile tegra_devfreq_profile = {
95 .polling_ms = 500,
96 .target = tegra_devfreq_target,
97 .get_dev_status = tegra_devfreq_get_dev_status,
98};
99
100static struct tegra_mc *tegra_get_memory_controller(void)
101{
102 struct platform_device *pdev;
103 struct device_node *np;
104 struct tegra_mc *mc;
105
106 np = of_find_compatible_node(NULL, NULL, "nvidia,tegra20-mc-gart");
107 if (!np)
108 return ERR_PTR(-ENOENT);
109
110 pdev = of_find_device_by_node(np);
111 of_node_put(np);
112 if (!pdev)
113 return ERR_PTR(-ENODEV);
114
115 mc = platform_get_drvdata(pdev);
116 if (!mc)
117 return ERR_PTR(-EPROBE_DEFER);
118
119 return mc;
120}
121
122static int tegra_devfreq_probe(struct platform_device *pdev)
123{
124 struct tegra_devfreq *tegra;
125 struct tegra_mc *mc;
126 unsigned long max_rate;
127 unsigned long rate;
128 int err;
129
130 mc = tegra_get_memory_controller();
131 if (IS_ERR(mc)) {
132 err = PTR_ERR(mc);
133 dev_err(&pdev->dev, "failed to get memory controller: %d\n",
134 err);
135 return err;
136 }
137
138 tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
139 if (!tegra)
140 return -ENOMEM;
141
142 /* EMC is a system-critical clock that is always enabled */
143 tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
144 if (IS_ERR(tegra->emc_clock)) {
145 err = PTR_ERR(tegra->emc_clock);
146 dev_err(&pdev->dev, "failed to get emc clock: %d\n", err);
147 return err;
148 }
149
150 tegra->regs = mc->regs;
151
152 max_rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
153
154 for (rate = 0; rate <= max_rate; rate++) {
155 rate = clk_round_rate(tegra->emc_clock, rate);
156
157 err = dev_pm_opp_add(&pdev->dev, rate, 0);
158 if (err) {
159 dev_err(&pdev->dev, "failed to add opp: %d\n", err);
160 goto remove_opps;
161 }
162 }
163
164 /*
165 * Reset statistic gathers state, select global bandwidth for the
166 * statistics collection mode and set clocks counter saturation
167 * limit to maximum.
168 */
169 writel_relaxed(0x00000000, tegra->regs + MC_STAT_CONTROL);
170 writel_relaxed(0x00000000, tegra->regs + MC_STAT_EMC_CONTROL);
171 writel_relaxed(0xffffffff, tegra->regs + MC_STAT_EMC_CLOCK_LIMIT);
172
173 platform_set_drvdata(pdev, tegra);
174
175 tegra->devfreq = devfreq_add_device(&pdev->dev, &tegra_devfreq_profile,
176 DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
177 if (IS_ERR(tegra->devfreq)) {
178 err = PTR_ERR(tegra->devfreq);
179 goto remove_opps;
180 }
181
182 return 0;
183
184remove_opps:
185 dev_pm_opp_remove_all_dynamic(&pdev->dev);
186
187 return err;
188}
189
190static int tegra_devfreq_remove(struct platform_device *pdev)
191{
192 struct tegra_devfreq *tegra = platform_get_drvdata(pdev);
193
194 devfreq_remove_device(tegra->devfreq);
195 dev_pm_opp_remove_all_dynamic(&pdev->dev);
196
197 return 0;
198}
199
200static struct platform_driver tegra_devfreq_driver = {
201 .probe = tegra_devfreq_probe,
202 .remove = tegra_devfreq_remove,
203 .driver = {
204 .name = "tegra20-devfreq",
205 },
206};
207module_platform_driver(tegra_devfreq_driver);
208
209MODULE_ALIAS("platform:tegra20-devfreq");
210MODULE_AUTHOR("Dmitry Osipenko <digetx@gmail.com>");
211MODULE_DESCRIPTION("NVIDIA Tegra20 devfreq driver");
212MODULE_LICENSE("GPL v2");
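Note how tegra_devfreq_target() sets the EMC frequency: it raises the clock's minimum-rate constraint to the chosen OPP rate and then requests rate 0, presumably so the clock framework settles on the lowest rate that still satisfies every user's constraints rather than pinning the clock for this one consumer; on failure the previous floor is restored. Condensed:

	err = clk_set_min_rate(tegra->emc_clock, rate);	/* floor = wanted OPP rate */
	if (err)
		return err;
	err = clk_set_rate(tegra->emc_clock, 0);	/* as low as constraints permit */
	if (err)
		clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);	/* roll back */

The converted Tegra30 driver below adopts the same pattern in its target() callback.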
diff --git a/drivers/devfreq/tegra-devfreq.c b/drivers/devfreq/tegra30-devfreq.c
index 35c38aad8b4f..a6ba75f4106d 100644
--- a/drivers/devfreq/tegra-devfreq.c
+++ b/drivers/devfreq/tegra30-devfreq.c
@@ -132,7 +132,6 @@ static struct tegra_devfreq_device_config actmon_device_configs[] = {
132struct tegra_devfreq_device { 132struct tegra_devfreq_device {
133 const struct tegra_devfreq_device_config *config; 133 const struct tegra_devfreq_device_config *config;
134 void __iomem *regs; 134 void __iomem *regs;
135 spinlock_t lock;
136 135
137 /* Average event count sampled in the last interrupt */ 136 /* Average event count sampled in the last interrupt */
138 u32 avg_count; 137 u32 avg_count;
@@ -160,6 +159,8 @@ struct tegra_devfreq {
160 struct notifier_block rate_change_nb; 159 struct notifier_block rate_change_nb;
161 160
162 struct tegra_devfreq_device devices[ARRAY_SIZE(actmon_device_configs)]; 161 struct tegra_devfreq_device devices[ARRAY_SIZE(actmon_device_configs)];
162
163 int irq;
163}; 164};
164 165
165struct tegra_actmon_emc_ratio { 166struct tegra_actmon_emc_ratio {
@@ -179,23 +180,23 @@ static struct tegra_actmon_emc_ratio actmon_emc_ratios[] = {
179 180
180static u32 actmon_readl(struct tegra_devfreq *tegra, u32 offset) 181static u32 actmon_readl(struct tegra_devfreq *tegra, u32 offset)
181{ 182{
182 return readl(tegra->regs + offset); 183 return readl_relaxed(tegra->regs + offset);
183} 184}
184 185
185static void actmon_writel(struct tegra_devfreq *tegra, u32 val, u32 offset) 186static void actmon_writel(struct tegra_devfreq *tegra, u32 val, u32 offset)
186{ 187{
187 writel(val, tegra->regs + offset); 188 writel_relaxed(val, tegra->regs + offset);
188} 189}
189 190
190static u32 device_readl(struct tegra_devfreq_device *dev, u32 offset) 191static u32 device_readl(struct tegra_devfreq_device *dev, u32 offset)
191{ 192{
192 return readl(dev->regs + offset); 193 return readl_relaxed(dev->regs + offset);
193} 194}
194 195
195static void device_writel(struct tegra_devfreq_device *dev, u32 val, 196static void device_writel(struct tegra_devfreq_device *dev, u32 val,
196 u32 offset) 197 u32 offset)
197{ 198{
198 writel(val, dev->regs + offset); 199 writel_relaxed(val, dev->regs + offset);
199} 200}
200 201
201static unsigned long do_percent(unsigned long val, unsigned int pct) 202static unsigned long do_percent(unsigned long val, unsigned int pct)
@@ -231,18 +232,14 @@ static void tegra_devfreq_update_wmark(struct tegra_devfreq *tegra,
231static void actmon_write_barrier(struct tegra_devfreq *tegra) 232static void actmon_write_barrier(struct tegra_devfreq *tegra)
232{ 233{
233 /* ensure the update has reached the ACTMON */ 234 /* ensure the update has reached the ACTMON */
234 wmb(); 235 readl(tegra->regs + ACTMON_GLB_STATUS);
235 actmon_readl(tegra, ACTMON_GLB_STATUS);
236} 236}
237 237
238static void actmon_isr_device(struct tegra_devfreq *tegra, 238static void actmon_isr_device(struct tegra_devfreq *tegra,
239 struct tegra_devfreq_device *dev) 239 struct tegra_devfreq_device *dev)
240{ 240{
241 unsigned long flags;
242 u32 intr_status, dev_ctrl; 241 u32 intr_status, dev_ctrl;
243 242
244 spin_lock_irqsave(&dev->lock, flags);
245
246 dev->avg_count = device_readl(dev, ACTMON_DEV_AVG_COUNT); 243 dev->avg_count = device_readl(dev, ACTMON_DEV_AVG_COUNT);
247 tegra_devfreq_update_avg_wmark(tegra, dev); 244 tegra_devfreq_update_avg_wmark(tegra, dev);
248 245
@@ -291,26 +288,6 @@ static void actmon_isr_device(struct tegra_devfreq *tegra,
291 device_writel(dev, ACTMON_INTR_STATUS_CLEAR, ACTMON_DEV_INTR_STATUS); 288 device_writel(dev, ACTMON_INTR_STATUS_CLEAR, ACTMON_DEV_INTR_STATUS);
292 289
293 actmon_write_barrier(tegra); 290 actmon_write_barrier(tegra);
294
295 spin_unlock_irqrestore(&dev->lock, flags);
296}
297
298static irqreturn_t actmon_isr(int irq, void *data)
299{
300 struct tegra_devfreq *tegra = data;
301 bool handled = false;
302 unsigned int i;
303 u32 val;
304
305 val = actmon_readl(tegra, ACTMON_GLB_STATUS);
306 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
307 if (val & tegra->devices[i].config->irq_mask) {
308 actmon_isr_device(tegra, tegra->devices + i);
309 handled = true;
310 }
311 }
312
313 return handled ? IRQ_WAKE_THREAD : IRQ_NONE;
314} 291}
315 292
316static unsigned long actmon_cpu_to_emc_rate(struct tegra_devfreq *tegra, 293static unsigned long actmon_cpu_to_emc_rate(struct tegra_devfreq *tegra,
@@ -337,15 +314,12 @@ static void actmon_update_target(struct tegra_devfreq *tegra,
337 unsigned long cpu_freq = 0; 314 unsigned long cpu_freq = 0;
338 unsigned long static_cpu_emc_freq = 0; 315 unsigned long static_cpu_emc_freq = 0;
339 unsigned int avg_sustain_coef; 316 unsigned int avg_sustain_coef;
340 unsigned long flags;
341 317
342 if (dev->config->avg_dependency_threshold) { 318 if (dev->config->avg_dependency_threshold) {
343 cpu_freq = cpufreq_get(0); 319 cpu_freq = cpufreq_get(0);
344 static_cpu_emc_freq = actmon_cpu_to_emc_rate(tegra, cpu_freq); 320 static_cpu_emc_freq = actmon_cpu_to_emc_rate(tegra, cpu_freq);
345 } 321 }
346 322
347 spin_lock_irqsave(&dev->lock, flags);
348
349 dev->target_freq = dev->avg_count / ACTMON_SAMPLING_PERIOD; 323 dev->target_freq = dev->avg_count / ACTMON_SAMPLING_PERIOD;
350 avg_sustain_coef = 100 * 100 / dev->config->boost_up_threshold; 324 avg_sustain_coef = 100 * 100 / dev->config->boost_up_threshold;
351 dev->target_freq = do_percent(dev->target_freq, avg_sustain_coef); 325 dev->target_freq = do_percent(dev->target_freq, avg_sustain_coef);
@@ -353,19 +327,31 @@ static void actmon_update_target(struct tegra_devfreq *tegra,
353 327
354 if (dev->avg_count >= dev->config->avg_dependency_threshold) 328 if (dev->avg_count >= dev->config->avg_dependency_threshold)
355 dev->target_freq = max(dev->target_freq, static_cpu_emc_freq); 329 dev->target_freq = max(dev->target_freq, static_cpu_emc_freq);
356
357 spin_unlock_irqrestore(&dev->lock, flags);
358} 330}
359 331
360static irqreturn_t actmon_thread_isr(int irq, void *data) 332static irqreturn_t actmon_thread_isr(int irq, void *data)
361{ 333{
362 struct tegra_devfreq *tegra = data; 334 struct tegra_devfreq *tegra = data;
335 bool handled = false;
336 unsigned int i;
337 u32 val;
363 338
364 mutex_lock(&tegra->devfreq->lock); 339 mutex_lock(&tegra->devfreq->lock);
365 update_devfreq(tegra->devfreq); 340
341 val = actmon_readl(tegra, ACTMON_GLB_STATUS);
342 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
343 if (val & tegra->devices[i].config->irq_mask) {
344 actmon_isr_device(tegra, tegra->devices + i);
345 handled = true;
346 }
347 }
348
349 if (handled)
350 update_devfreq(tegra->devfreq);
351
366 mutex_unlock(&tegra->devfreq->lock); 352 mutex_unlock(&tegra->devfreq->lock);
367 353
368 return IRQ_HANDLED; 354 return handled ? IRQ_HANDLED : IRQ_NONE;
369} 355}
370 356
371static int tegra_actmon_rate_notify_cb(struct notifier_block *nb, 357static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
@@ -375,7 +361,6 @@ static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
375 struct tegra_devfreq *tegra; 361 struct tegra_devfreq *tegra;
376 struct tegra_devfreq_device *dev; 362 struct tegra_devfreq_device *dev;
377 unsigned int i; 363 unsigned int i;
378 unsigned long flags;
379 364
380 if (action != POST_RATE_CHANGE) 365 if (action != POST_RATE_CHANGE)
381 return NOTIFY_OK; 366 return NOTIFY_OK;
@@ -387,9 +372,7 @@ static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
387 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) { 372 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
388 dev = &tegra->devices[i]; 373 dev = &tegra->devices[i];
389 374
390 spin_lock_irqsave(&dev->lock, flags);
391 tegra_devfreq_update_wmark(tegra, dev); 375 tegra_devfreq_update_wmark(tegra, dev);
392 spin_unlock_irqrestore(&dev->lock, flags);
393 } 376 }
394 377
395 actmon_write_barrier(tegra); 378 actmon_write_barrier(tegra);
@@ -397,48 +380,6 @@ static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
397 return NOTIFY_OK; 380 return NOTIFY_OK;
398} 381}
399 382
400static void tegra_actmon_enable_interrupts(struct tegra_devfreq *tegra)
401{
402 struct tegra_devfreq_device *dev;
403 u32 val;
404 unsigned int i;
405
406 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
407 dev = &tegra->devices[i];
408
409 val = device_readl(dev, ACTMON_DEV_CTRL);
410 val |= ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN;
411 val |= ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
412 val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
413 val |= ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
414
415 device_writel(dev, val, ACTMON_DEV_CTRL);
416 }
417
418 actmon_write_barrier(tegra);
419}
420
421static void tegra_actmon_disable_interrupts(struct tegra_devfreq *tegra)
422{
423 struct tegra_devfreq_device *dev;
424 u32 val;
425 unsigned int i;
426
427 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
428 dev = &tegra->devices[i];
429
430 val = device_readl(dev, ACTMON_DEV_CTRL);
431 val &= ~ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN;
432 val &= ~ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
433 val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
434 val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
435
436 device_writel(dev, val, ACTMON_DEV_CTRL);
437 }
438
439 actmon_write_barrier(tegra);
440}
441
442static void tegra_actmon_configure_device(struct tegra_devfreq *tegra, 383static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
443 struct tegra_devfreq_device *dev) 384 struct tegra_devfreq_device *dev)
444{ 385{
@@ -462,34 +403,80 @@ static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
462 << ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT; 403 << ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT;
463 val |= (ACTMON_ABOVE_WMARK_WINDOW - 1) 404 val |= (ACTMON_ABOVE_WMARK_WINDOW - 1)
464 << ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT; 405 << ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT;
406 val |= ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN;
407 val |= ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
408 val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
409 val |= ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
465 val |= ACTMON_DEV_CTRL_ENB; 410 val |= ACTMON_DEV_CTRL_ENB;
466 411
467 device_writel(dev, val, ACTMON_DEV_CTRL); 412 device_writel(dev, val, ACTMON_DEV_CTRL);
413}
414
415static void tegra_actmon_start(struct tegra_devfreq *tegra)
416{
417 unsigned int i;
418
419 disable_irq(tegra->irq);
420
421 actmon_writel(tegra, ACTMON_SAMPLING_PERIOD - 1,
422 ACTMON_GLB_PERIOD_CTRL);
423
424 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++)
425 tegra_actmon_configure_device(tegra, &tegra->devices[i]);
426
427 actmon_write_barrier(tegra);
428
429 enable_irq(tegra->irq);
430}
431
432static void tegra_actmon_stop(struct tegra_devfreq *tegra)
433{
434 unsigned int i;
435
436 disable_irq(tegra->irq);
437
438 for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
439 device_writel(&tegra->devices[i], 0x00000000, ACTMON_DEV_CTRL);
440 device_writel(&tegra->devices[i], ACTMON_INTR_STATUS_CLEAR,
441 ACTMON_DEV_INTR_STATUS);
442 }
468 443
469 actmon_write_barrier(tegra); 444 actmon_write_barrier(tegra);
445
446 enable_irq(tegra->irq);
470} 447}
471 448
472static int tegra_devfreq_target(struct device *dev, unsigned long *freq, 449static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
473 u32 flags) 450 u32 flags)
474{ 451{
475 struct tegra_devfreq *tegra = dev_get_drvdata(dev); 452 struct tegra_devfreq *tegra = dev_get_drvdata(dev);
453 struct devfreq *devfreq = tegra->devfreq;
476 struct dev_pm_opp *opp; 454 struct dev_pm_opp *opp;
477 unsigned long rate = *freq * KHZ; 455 unsigned long rate;
456 int err;
478 457
479 opp = devfreq_recommended_opp(dev, &rate, flags); 458 opp = devfreq_recommended_opp(dev, freq, flags);
480 if (IS_ERR(opp)) { 459 if (IS_ERR(opp)) {
481 dev_err(dev, "Failed to find opp for %lu KHz\n", *freq); 460 dev_err(dev, "Failed to find opp for %lu Hz\n", *freq);
482 return PTR_ERR(opp); 461 return PTR_ERR(opp);
483 } 462 }
484 rate = dev_pm_opp_get_freq(opp); 463 rate = dev_pm_opp_get_freq(opp);
485 dev_pm_opp_put(opp); 464 dev_pm_opp_put(opp);
486 465
487 clk_set_min_rate(tegra->emc_clock, rate); 466 err = clk_set_min_rate(tegra->emc_clock, rate);
488 clk_set_rate(tegra->emc_clock, 0); 467 if (err)
468 return err;
489 469
490 *freq = rate; 470 err = clk_set_rate(tegra->emc_clock, 0);
471 if (err)
472 goto restore_min_rate;
491 473
492 return 0; 474 return 0;
475
476restore_min_rate:
477 clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);
478
479 return err;
493} 480}
494 481
495static int tegra_devfreq_get_dev_status(struct device *dev, 482static int tegra_devfreq_get_dev_status(struct device *dev,
@@ -497,13 +484,15 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
497{ 484{
498 struct tegra_devfreq *tegra = dev_get_drvdata(dev); 485 struct tegra_devfreq *tegra = dev_get_drvdata(dev);
499 struct tegra_devfreq_device *actmon_dev; 486 struct tegra_devfreq_device *actmon_dev;
487 unsigned long cur_freq;
500 488
501 stat->current_frequency = tegra->cur_freq; 489 cur_freq = READ_ONCE(tegra->cur_freq);
502 490
503 /* To be used by the tegra governor */ 491 /* To be used by the tegra governor */
504 stat->private_data = tegra; 492 stat->private_data = tegra;
505 493
506 /* The below are to be used by the other governors */ 494 /* The below are to be used by the other governors */
495 stat->current_frequency = cur_freq * KHZ;
507 496
508 actmon_dev = &tegra->devices[MCALL]; 497 actmon_dev = &tegra->devices[MCALL];
509 498
@@ -514,7 +503,7 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
514 stat->busy_time *= 100 / BUS_SATURATION_RATIO; 503 stat->busy_time *= 100 / BUS_SATURATION_RATIO;
515 504
516 /* Number of cycles in a sampling period */ 505 /* Number of cycles in a sampling period */
517 stat->total_time = ACTMON_SAMPLING_PERIOD * tegra->cur_freq; 506 stat->total_time = ACTMON_SAMPLING_PERIOD * cur_freq;
518 507
519 stat->busy_time = min(stat->busy_time, stat->total_time); 508 stat->busy_time = min(stat->busy_time, stat->total_time);
520 509
@@ -553,7 +542,7 @@ static int tegra_governor_get_target(struct devfreq *devfreq,
553 target_freq = max(target_freq, dev->target_freq); 542 target_freq = max(target_freq, dev->target_freq);
554 } 543 }
555 544
556 *freq = target_freq; 545 *freq = target_freq * KHZ;
557 546
558 return 0; 547 return 0;
559} 548}
@@ -566,22 +555,22 @@ static int tegra_governor_event_handler(struct devfreq *devfreq,
566 switch (event) { 555 switch (event) {
567 case DEVFREQ_GOV_START: 556 case DEVFREQ_GOV_START:
568 devfreq_monitor_start(devfreq); 557 devfreq_monitor_start(devfreq);
569 tegra_actmon_enable_interrupts(tegra); 558 tegra_actmon_start(tegra);
570 break; 559 break;
571 560
572 case DEVFREQ_GOV_STOP: 561 case DEVFREQ_GOV_STOP:
573 tegra_actmon_disable_interrupts(tegra); 562 tegra_actmon_stop(tegra);
574 devfreq_monitor_stop(devfreq); 563 devfreq_monitor_stop(devfreq);
575 break; 564 break;
576 565
577 case DEVFREQ_GOV_SUSPEND: 566 case DEVFREQ_GOV_SUSPEND:
578 tegra_actmon_disable_interrupts(tegra); 567 tegra_actmon_stop(tegra);
579 devfreq_monitor_suspend(devfreq); 568 devfreq_monitor_suspend(devfreq);
580 break; 569 break;
581 570
582 case DEVFREQ_GOV_RESUME: 571 case DEVFREQ_GOV_RESUME:
583 devfreq_monitor_resume(devfreq); 572 devfreq_monitor_resume(devfreq);
584 tegra_actmon_enable_interrupts(tegra); 573 tegra_actmon_start(tegra);
585 break; 574 break;
586 } 575 }
587 576
@@ -592,25 +581,22 @@ static struct devfreq_governor tegra_devfreq_governor = {
592 .name = "tegra_actmon", 581 .name = "tegra_actmon",
593 .get_target_freq = tegra_governor_get_target, 582 .get_target_freq = tegra_governor_get_target,
594 .event_handler = tegra_governor_event_handler, 583 .event_handler = tegra_governor_event_handler,
584 .immutable = true,
595}; 585};
596 586
597static int tegra_devfreq_probe(struct platform_device *pdev) 587static int tegra_devfreq_probe(struct platform_device *pdev)
598{ 588{
599 struct tegra_devfreq *tegra; 589 struct tegra_devfreq *tegra;
600 struct tegra_devfreq_device *dev; 590 struct tegra_devfreq_device *dev;
601 struct resource *res;
602 unsigned int i; 591 unsigned int i;
603 unsigned long rate; 592 unsigned long rate;
604 int irq;
605 int err; 593 int err;
606 594
607 tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL); 595 tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
608 if (!tegra) 596 if (!tegra)
609 return -ENOMEM; 597 return -ENOMEM;
610 598
611 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 599 tegra->regs = devm_platform_ioremap_resource(pdev, 0);
612
613 tegra->regs = devm_ioremap_resource(&pdev->dev, res);
614 if (IS_ERR(tegra->regs)) 600 if (IS_ERR(tegra->regs))
615 return PTR_ERR(tegra->regs); 601 return PTR_ERR(tegra->regs);
616 602
@@ -632,13 +618,10 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
632 return PTR_ERR(tegra->emc_clock); 618 return PTR_ERR(tegra->emc_clock);
633 } 619 }
634 620
635 clk_set_rate(tegra->emc_clock, ULONG_MAX); 621 tegra->irq = platform_get_irq(pdev, 0);
636 622 if (tegra->irq < 0) {
637 tegra->rate_change_nb.notifier_call = tegra_actmon_rate_notify_cb; 623 err = tegra->irq;
638 err = clk_notifier_register(tegra->emc_clock, &tegra->rate_change_nb); 624 dev_err(&pdev->dev, "Failed to get IRQ: %d\n", err);
639 if (err) {
640 dev_err(&pdev->dev,
641 "Failed to register rate change notifier\n");
642 return err; 625 return err;
643 } 626 }
644 627
@@ -656,73 +639,94 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
656 tegra->max_freq = clk_round_rate(tegra->emc_clock, ULONG_MAX) / KHZ; 639 tegra->max_freq = clk_round_rate(tegra->emc_clock, ULONG_MAX) / KHZ;
657 tegra->cur_freq = clk_get_rate(tegra->emc_clock) / KHZ; 640 tegra->cur_freq = clk_get_rate(tegra->emc_clock) / KHZ;
658 641
659 actmon_writel(tegra, ACTMON_SAMPLING_PERIOD - 1,
660 ACTMON_GLB_PERIOD_CTRL);
661
662 for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) { 642 for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) {
663 dev = tegra->devices + i; 643 dev = tegra->devices + i;
664 dev->config = actmon_device_configs + i; 644 dev->config = actmon_device_configs + i;
665 dev->regs = tegra->regs + dev->config->offset; 645 dev->regs = tegra->regs + dev->config->offset;
666 spin_lock_init(&dev->lock);
667
668 tegra_actmon_configure_device(tegra, dev);
669 } 646 }
670 647
671 for (rate = 0; rate <= tegra->max_freq * KHZ; rate++) { 648 for (rate = 0; rate <= tegra->max_freq * KHZ; rate++) {
672 rate = clk_round_rate(tegra->emc_clock, rate); 649 rate = clk_round_rate(tegra->emc_clock, rate);
673 dev_pm_opp_add(&pdev->dev, rate, 0);
674 }
675 650
676 irq = platform_get_irq(pdev, 0); 651 err = dev_pm_opp_add(&pdev->dev, rate, 0);
677 if (irq < 0) { 652 if (err) {
678 dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq); 653 dev_err(&pdev->dev, "Failed to add OPP: %d\n", err);
679 return irq; 654 goto remove_opps;
655 }
680 } 656 }
681 657
682 platform_set_drvdata(pdev, tegra); 658 platform_set_drvdata(pdev, tegra);
683 659
684 err = devm_request_threaded_irq(&pdev->dev, irq, actmon_isr, 660 tegra->rate_change_nb.notifier_call = tegra_actmon_rate_notify_cb;
685 actmon_thread_isr, IRQF_SHARED, 661 err = clk_notifier_register(tegra->emc_clock, &tegra->rate_change_nb);
686 "tegra-devfreq", tegra);
687 if (err) { 662 if (err) {
688 dev_err(&pdev->dev, "Interrupt request failed\n"); 663 dev_err(&pdev->dev,
689 return err; 664 "Failed to register rate change notifier\n");
665 goto remove_opps;
666 }
667
668 err = devfreq_add_governor(&tegra_devfreq_governor);
669 if (err) {
670 dev_err(&pdev->dev, "Failed to add governor: %d\n", err);
671 goto unreg_notifier;
690 } 672 }
691 673
692 tegra_devfreq_profile.initial_freq = clk_get_rate(tegra->emc_clock); 674 tegra_devfreq_profile.initial_freq = clk_get_rate(tegra->emc_clock);
693 tegra->devfreq = devm_devfreq_add_device(&pdev->dev, 675 tegra->devfreq = devfreq_add_device(&pdev->dev,
694 &tegra_devfreq_profile, 676 &tegra_devfreq_profile,
695 "tegra_actmon", 677 "tegra_actmon",
696 NULL); 678 NULL);
679 if (IS_ERR(tegra->devfreq)) {
680 err = PTR_ERR(tegra->devfreq);
681 goto remove_governor;
682 }
683
684 err = devm_request_threaded_irq(&pdev->dev, tegra->irq, NULL,
685 actmon_thread_isr, IRQF_ONESHOT,
686 "tegra-devfreq", tegra);
687 if (err) {
688 dev_err(&pdev->dev, "Interrupt request failed: %d\n", err);
689 goto remove_devfreq;
690 }
697 691
698 return 0; 692 return 0;
693
694remove_devfreq:
695 devfreq_remove_device(tegra->devfreq);
696
697remove_governor:
698 devfreq_remove_governor(&tegra_devfreq_governor);
699
700unreg_notifier:
701 clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb);
702
703remove_opps:
704 dev_pm_opp_remove_all_dynamic(&pdev->dev);
705
706 reset_control_reset(tegra->reset);
707 clk_disable_unprepare(tegra->clock);
708
709 return err;
699} 710}
700 711
701static int tegra_devfreq_remove(struct platform_device *pdev) 712static int tegra_devfreq_remove(struct platform_device *pdev)
702{ 713{
703 struct tegra_devfreq *tegra = platform_get_drvdata(pdev); 714 struct tegra_devfreq *tegra = platform_get_drvdata(pdev);
704 int irq = platform_get_irq(pdev, 0);
705 u32 val;
706 unsigned int i;
707 715
708 for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) { 716 devfreq_remove_device(tegra->devfreq);
709 val = device_readl(&tegra->devices[i], ACTMON_DEV_CTRL); 717 devfreq_remove_governor(&tegra_devfreq_governor);
710 val &= ~ACTMON_DEV_CTRL_ENB;
711 device_writel(&tegra->devices[i], val, ACTMON_DEV_CTRL);
712 }
713
714 actmon_write_barrier(tegra);
715
716 devm_free_irq(&pdev->dev, irq, tegra);
717 718
718 clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb); 719 clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb);
720 dev_pm_opp_remove_all_dynamic(&pdev->dev);
719 721
722 reset_control_reset(tegra->reset);
720 clk_disable_unprepare(tegra->clock); 723 clk_disable_unprepare(tegra->clock);
721 724
722 return 0; 725 return 0;
723} 726}
724 727
725static const struct of_device_id tegra_devfreq_of_match[] = { 728static const struct of_device_id tegra_devfreq_of_match[] = {
729 { .compatible = "nvidia,tegra30-actmon" },
726 { .compatible = "nvidia,tegra124-actmon" }, 730 { .compatible = "nvidia,tegra124-actmon" },
727 { }, 731 { },
728}; 732};
@@ -737,36 +741,7 @@ static struct platform_driver tegra_devfreq_driver = {
737 .of_match_table = tegra_devfreq_of_match, 741 .of_match_table = tegra_devfreq_of_match,
738 }, 742 },
739}; 743};
740 744module_platform_driver(tegra_devfreq_driver);
741static int __init tegra_devfreq_init(void)
742{
743 int ret = 0;
744
745 ret = devfreq_add_governor(&tegra_devfreq_governor);
746 if (ret) {
747 pr_err("%s: failed to add governor: %d\n", __func__, ret);
748 return ret;
749 }
750
751 ret = platform_driver_register(&tegra_devfreq_driver);
752 if (ret)
753 devfreq_remove_governor(&tegra_devfreq_governor);
754
755 return ret;
756}
757module_init(tegra_devfreq_init)
758
759static void __exit tegra_devfreq_exit(void)
760{
761 int ret = 0;
762
763 platform_driver_unregister(&tegra_devfreq_driver);
764
765 ret = devfreq_remove_governor(&tegra_devfreq_governor);
766 if (ret)
767 pr_err("%s: failed to remove governor: %d\n", __func__, ret);
768}
769module_exit(tegra_devfreq_exit)
770 745
771MODULE_LICENSE("GPL v2"); 746MODULE_LICENSE("GPL v2");
772MODULE_DESCRIPTION("Tegra devfreq driver"); 747MODULE_DESCRIPTION("Tegra devfreq driver");
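/*
 * Consolidated sketch (not taken verbatim from the patch; "example_devfreq"
 * and its fields are hypothetical) of the devfreq ->target() pattern the
 * Tegra driver converges on above: frequencies are handled in Hz, the OPP
 * core picks the rate, and the clock floor is rolled back if the rate
 * change fails.
 */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/devfreq.h>
#include <linux/err.h>
#include <linux/pm_opp.h>

struct example_devfreq {
	struct devfreq *devfreq;
	struct clk *emc_clock;
};

static int example_devfreq_target(struct device *dev, unsigned long *freq,
				  u32 flags)
{
	struct example_devfreq *priv = dev_get_drvdata(dev);
	struct dev_pm_opp *opp;
	unsigned long rate;
	int err;

	/* Let the OPP core translate the requested frequency (in Hz). */
	opp = devfreq_recommended_opp(dev, freq, flags);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	rate = dev_pm_opp_get_freq(opp);
	dev_pm_opp_put(opp);

	/* Raise the floor first, then let the clock settle onto it. */
	err = clk_set_min_rate(priv->emc_clock, rate);
	if (err)
		return err;

	err = clk_set_rate(priv->emc_clock, 0);
	if (err)
		clk_set_min_rate(priv->emc_clock, priv->devfreq->previous_freq);

	return err;
}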
diff --git a/drivers/macintosh/windfarm_cpufreq_clamp.c b/drivers/macintosh/windfarm_cpufreq_clamp.c
index 52fd5fca89a0..705c6200814b 100644
--- a/drivers/macintosh/windfarm_cpufreq_clamp.c
+++ b/drivers/macintosh/windfarm_cpufreq_clamp.c
@@ -3,9 +3,11 @@
3#include <linux/errno.h> 3#include <linux/errno.h>
4#include <linux/kernel.h> 4#include <linux/kernel.h>
5#include <linux/delay.h> 5#include <linux/delay.h>
6#include <linux/pm_qos.h>
6#include <linux/slab.h> 7#include <linux/slab.h>
7#include <linux/init.h> 8#include <linux/init.h>
8#include <linux/wait.h> 9#include <linux/wait.h>
10#include <linux/cpu.h>
9#include <linux/cpufreq.h> 11#include <linux/cpufreq.h>
10 12
11#include <asm/prom.h> 13#include <asm/prom.h>
@@ -16,36 +18,24 @@
16 18
17static int clamped; 19static int clamped;
18static struct wf_control *clamp_control; 20static struct wf_control *clamp_control;
19 21static struct dev_pm_qos_request qos_req;
20static int clamp_notifier_call(struct notifier_block *self, 22static unsigned int min_freq, max_freq;
21 unsigned long event, void *data)
22{
23 struct cpufreq_policy *p = data;
24 unsigned long max_freq;
25
26 if (event != CPUFREQ_ADJUST)
27 return 0;
28
29 max_freq = clamped ? (p->cpuinfo.min_freq) : (p->cpuinfo.max_freq);
30 cpufreq_verify_within_limits(p, 0, max_freq);
31
32 return 0;
33}
34
35static struct notifier_block clamp_notifier = {
36 .notifier_call = clamp_notifier_call,
37};
38 23
39static int clamp_set(struct wf_control *ct, s32 value) 24static int clamp_set(struct wf_control *ct, s32 value)
40{ 25{
41 if (value) 26 unsigned int freq;
27
28 if (value) {
29 freq = min_freq;
42 printk(KERN_INFO "windfarm: Clamping CPU frequency to " 30 printk(KERN_INFO "windfarm: Clamping CPU frequency to "
43 "minimum !\n"); 31 "minimum !\n");
44 else 32 } else {
33 freq = max_freq;
45 printk(KERN_INFO "windfarm: CPU frequency unclamped !\n"); 34 printk(KERN_INFO "windfarm: CPU frequency unclamped !\n");
35 }
46 clamped = value; 36 clamped = value;
47 cpufreq_update_policy(0); 37
48 return 0; 38 return dev_pm_qos_update_request(&qos_req, freq);
49} 39}
50 40
51static int clamp_get(struct wf_control *ct, s32 *value) 41static int clamp_get(struct wf_control *ct, s32 *value)
@@ -74,27 +64,60 @@ static const struct wf_control_ops clamp_ops = {
74 64
75static int __init wf_cpufreq_clamp_init(void) 65static int __init wf_cpufreq_clamp_init(void)
76{ 66{
67 struct cpufreq_policy *policy;
77 struct wf_control *clamp; 68 struct wf_control *clamp;
69 struct device *dev;
70 int ret;
71
72 policy = cpufreq_cpu_get(0);
73 if (!policy) {
74 pr_warn("%s: cpufreq policy not found cpu0\n", __func__);
75 return -EPROBE_DEFER;
76 }
77
78 min_freq = policy->cpuinfo.min_freq;
79 max_freq = policy->cpuinfo.max_freq;
80 cpufreq_cpu_put(policy);
81
82 dev = get_cpu_device(0);
83 if (unlikely(!dev)) {
84 pr_warn("%s: No cpu device for cpu0\n", __func__);
85 return -ENODEV;
86 }
78 87
79 clamp = kmalloc(sizeof(struct wf_control), GFP_KERNEL); 88 clamp = kmalloc(sizeof(struct wf_control), GFP_KERNEL);
80 if (clamp == NULL) 89 if (clamp == NULL)
81 return -ENOMEM; 90 return -ENOMEM;
82 cpufreq_register_notifier(&clamp_notifier, CPUFREQ_POLICY_NOTIFIER); 91
92 ret = dev_pm_qos_add_request(dev, &qos_req, DEV_PM_QOS_MAX_FREQUENCY,
93 max_freq);
94 if (ret < 0) {
95 pr_err("%s: Failed to add freq constraint (%d)\n", __func__,
96 ret);
97 goto free;
98 }
99
83 clamp->ops = &clamp_ops; 100 clamp->ops = &clamp_ops;
84 clamp->name = "cpufreq-clamp"; 101 clamp->name = "cpufreq-clamp";
85 if (wf_register_control(clamp)) 102 ret = wf_register_control(clamp);
103 if (ret)
86 goto fail; 104 goto fail;
87 clamp_control = clamp; 105 clamp_control = clamp;
88 return 0; 106 return 0;
89 fail: 107 fail:
108 dev_pm_qos_remove_request(&qos_req);
109
110 free:
90 kfree(clamp); 111 kfree(clamp);
91 return -ENODEV; 112 return ret;
92} 113}
93 114
94static void __exit wf_cpufreq_clamp_exit(void) 115static void __exit wf_cpufreq_clamp_exit(void)
95{ 116{
96 if (clamp_control) 117 if (clamp_control) {
97 wf_unregister_control(clamp_control); 118 wf_unregister_control(clamp_control);
119 dev_pm_qos_remove_request(&qos_req);
120 }
98} 121}
99 122
100 123
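/*
 * Minimal sketch, with a hypothetical consumer, of the dev_pm_qos pattern
 * that replaces the CPUFREQ_ADJUST policy notifier in the windfarm clamp
 * above: a DEV_PM_QOS_MAX_FREQUENCY request is added against the CPU
 * device and later updated to clamp or release the frequency.
 */
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request example_qos_req;

static int example_clamp_init(void)
{
	struct cpufreq_policy *policy;
	struct device *cpu_dev;
	unsigned int max_freq;
	int ret;

	policy = cpufreq_cpu_get(0);
	if (!policy)
		return -EPROBE_DEFER;

	max_freq = policy->cpuinfo.max_freq;
	cpufreq_cpu_put(policy);

	cpu_dev = get_cpu_device(0);
	if (!cpu_dev)
		return -ENODEV;

	/* Start unclamped: allow up to the hardware maximum (kHz). */
	ret = dev_pm_qos_add_request(cpu_dev, &example_qos_req,
				     DEV_PM_QOS_MAX_FREQUENCY, max_freq);
	return ret < 0 ? ret : 0;
}

static int example_clamp_set(unsigned int freq_khz)
{
	/* Tighten or relax the constraint; cpufreq reevaluates the policy. */
	return dev_pm_qos_update_request(&example_qos_req, freq_khz);
}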
diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index c094d5d20fd7..3b7ffd0234e9 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -401,6 +401,54 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
401} 401}
402EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact); 402EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
403 403
404/**
405 * dev_pm_opp_find_level_exact() - search for an exact level
406 * @dev: device for which we do this operation
407 * @level: level to search for
408 *
409 * Return: Searches for exact match in the opp table and returns pointer to the
410 * matching opp if found, else returns ERR_PTR in case of error and should
411 * be handled using IS_ERR. Error return values can be:
412 * EINVAL: for bad pointer
413 * ERANGE: no match found for search
414 * ENODEV: if device not found in list of registered devices
415 *
416 * The callers are required to call dev_pm_opp_put() for the returned OPP after
417 * use.
418 */
419struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
420 unsigned int level)
421{
422 struct opp_table *opp_table;
423 struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
424
425 opp_table = _find_opp_table(dev);
426 if (IS_ERR(opp_table)) {
427 int r = PTR_ERR(opp_table);
428
429 dev_err(dev, "%s: OPP table not found (%d)\n", __func__, r);
430 return ERR_PTR(r);
431 }
432
433 mutex_lock(&opp_table->lock);
434
435 list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
436 if (temp_opp->level == level) {
437 opp = temp_opp;
438
439 /* Increment the reference count of OPP */
440 dev_pm_opp_get(opp);
441 break;
442 }
443 }
444
445 mutex_unlock(&opp_table->lock);
446 dev_pm_opp_put_opp_table(opp_table);
447
448 return opp;
449}
450EXPORT_SYMBOL_GPL(dev_pm_opp_find_level_exact);
451
404static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table, 452static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
405 unsigned long *freq) 453 unsigned long *freq)
406{ 454{
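/*
 * Hypothetical usage sketch for the new dev_pm_opp_find_level_exact()
 * helper documented above: look up an OPP by its level value and drop
 * the reference once the frequency has been read out.
 */
#include <linux/err.h>
#include <linux/pm_opp.h>

static int example_get_rate_for_level(struct device *dev, unsigned int level,
				      unsigned long *rate)
{
	struct dev_pm_opp *opp;

	opp = dev_pm_opp_find_level_exact(dev, level);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	*rate = dev_pm_opp_get_freq(opp);

	/* The lookup took a reference; callers must put the OPP. */
	dev_pm_opp_put(opp);

	return 0;
}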
@@ -940,6 +988,7 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
940 BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head); 988 BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
941 INIT_LIST_HEAD(&opp_table->opp_list); 989 INIT_LIST_HEAD(&opp_table->opp_list);
942 kref_init(&opp_table->kref); 990 kref_init(&opp_table->kref);
991 kref_init(&opp_table->list_kref);
943 992
944 /* Secure the device table modification */ 993 /* Secure the device table modification */
945 list_add(&opp_table->node, &opp_tables); 994 list_add(&opp_table->node, &opp_tables);
@@ -1577,6 +1626,12 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
1577 goto free_regulators; 1626 goto free_regulators;
1578 } 1627 }
1579 1628
1629 ret = regulator_enable(reg);
1630 if (ret < 0) {
1631 regulator_put(reg);
1632 goto free_regulators;
1633 }
1634
1580 opp_table->regulators[i] = reg; 1635 opp_table->regulators[i] = reg;
1581 } 1636 }
1582 1637
@@ -1590,8 +1645,10 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
1590 return opp_table; 1645 return opp_table;
1591 1646
1592free_regulators: 1647free_regulators:
1593 while (i != 0) 1648 while (i--) {
1594 regulator_put(opp_table->regulators[--i]); 1649 regulator_disable(opp_table->regulators[i]);
1650 regulator_put(opp_table->regulators[i]);
1651 }
1595 1652
1596 kfree(opp_table->regulators); 1653 kfree(opp_table->regulators);
1597 opp_table->regulators = NULL; 1654 opp_table->regulators = NULL;
@@ -1617,8 +1674,10 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
1617 /* Make sure there are no concurrent readers while updating opp_table */ 1674 /* Make sure there are no concurrent readers while updating opp_table */
1618 WARN_ON(!list_empty(&opp_table->opp_list)); 1675 WARN_ON(!list_empty(&opp_table->opp_list));
1619 1676
1620 for (i = opp_table->regulator_count - 1; i >= 0; i--) 1677 for (i = opp_table->regulator_count - 1; i >= 0; i--) {
1678 regulator_disable(opp_table->regulators[i]);
1621 regulator_put(opp_table->regulators[i]); 1679 regulator_put(opp_table->regulators[i]);
1680 }
1622 1681
1623 _free_set_opp_data(opp_table); 1682 _free_set_opp_data(opp_table);
1624 1683
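/*
 * Consumer-side sketch (hypothetical driver and supply name) for
 * dev_pm_opp_set_regulators(): with the change above the OPP core also
 * enables the regulators it acquires, so the driver only names them and
 * releases the table on teardown.
 */
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/pm_opp.h>

static const char * const example_reg_names[] = { "vdd-core" };

static struct opp_table *example_setup_opp_regulators(struct device *dev)
{
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_set_regulators(dev, example_reg_names,
					      ARRAY_SIZE(example_reg_names));
	if (IS_ERR(opp_table))
		return opp_table;

	/* ... add OPPs and call dev_pm_opp_set_rate(); on remove: */
	/* dev_pm_opp_put_regulators(opp_table); */

	return opp_table;
}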
@@ -1771,6 +1830,7 @@ static void _opp_detach_genpd(struct opp_table *opp_table)
1771 * dev_pm_opp_attach_genpd - Attach genpd(s) for the device and save virtual device pointer 1830 * dev_pm_opp_attach_genpd - Attach genpd(s) for the device and save virtual device pointer
1772 * @dev: Consumer device for which the genpd is getting attached. 1831 * @dev: Consumer device for which the genpd is getting attached.
1773 * @names: Null terminated array of pointers containing names of genpd to attach. 1832 * @names: Null terminated array of pointers containing names of genpd to attach.
1833 * @virt_devs: Pointer to return the array of virtual devices.
1774 * 1834 *
1775 * Multiple generic power domains for a device are supported with the help of 1835 * Multiple generic power domains for a device are supported with the help of
1776 * virtual genpd devices, which are created for each consumer device - genpd 1836 * virtual genpd devices, which are created for each consumer device - genpd
@@ -1784,12 +1844,16 @@ static void _opp_detach_genpd(struct opp_table *opp_table)
1784 * 1844 *
1785 * This helper needs to be called once with a list of all genpd to attach. 1845 * This helper needs to be called once with a list of all genpd to attach.
1786 * Otherwise the original device structure will be used instead by the OPP core. 1846 * Otherwise the original device structure will be used instead by the OPP core.
1847 *
1848 * The order of entries in the names array must match the order in which
1849 * "required-opps" are added in DT.
1787 */ 1850 */
1788struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names) 1851struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
1852 const char **names, struct device ***virt_devs)
1789{ 1853{
1790 struct opp_table *opp_table; 1854 struct opp_table *opp_table;
1791 struct device *virt_dev; 1855 struct device *virt_dev;
1792 int index, ret = -EINVAL; 1856 int index = 0, ret = -EINVAL;
1793 const char **name = names; 1857 const char **name = names;
1794 1858
1795 opp_table = dev_pm_opp_get_opp_table(dev); 1859 opp_table = dev_pm_opp_get_opp_table(dev);
@@ -1815,14 +1879,6 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names
1815 goto unlock; 1879 goto unlock;
1816 1880
1817 while (*name) { 1881 while (*name) {
1818 index = of_property_match_string(dev->of_node,
1819 "power-domain-names", *name);
1820 if (index < 0) {
1821 dev_err(dev, "Failed to find power domain: %s (%d)\n",
1822 *name, index);
1823 goto err;
1824 }
1825
1826 if (index >= opp_table->required_opp_count) { 1882 if (index >= opp_table->required_opp_count) {
1827 dev_err(dev, "Index can't be greater than required-opp-count - 1, %s (%d : %d)\n", 1883 dev_err(dev, "Index can't be greater than required-opp-count - 1, %s (%d : %d)\n",
1828 *name, opp_table->required_opp_count, index); 1884 *name, opp_table->required_opp_count, index);
@@ -1843,9 +1899,12 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names
1843 } 1899 }
1844 1900
1845 opp_table->genpd_virt_devs[index] = virt_dev; 1901 opp_table->genpd_virt_devs[index] = virt_dev;
1902 index++;
1846 name++; 1903 name++;
1847 } 1904 }
1848 1905
1906 if (virt_devs)
1907 *virt_devs = opp_table->genpd_virt_devs;
1849 mutex_unlock(&opp_table->genpd_virt_dev_lock); 1908 mutex_unlock(&opp_table->genpd_virt_dev_lock);
1850 1909
1851 return opp_table; 1910 return opp_table;
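/*
 * Hypothetical sketch of the extended dev_pm_opp_attach_genpd() call: the
 * names must follow the "required-opps" order in DT, and the new virt_devs
 * argument returns the per-domain virtual devices so the caller can, for
 * instance, runtime-resume them individually.
 */
#include <linux/err.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>

static const char *example_genpd_names[] = { "cx", "mx", NULL };

static struct opp_table *example_attach_domains(struct device *dev,
						struct device ***virt_devs)
{
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_attach_genpd(dev, example_genpd_names,
					    virt_devs);
	if (IS_ERR(opp_table))
		return opp_table;

	/* e.g. pm_runtime_get_sync((*virt_devs)[0]); */

	return opp_table;
}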
diff --git a/drivers/opp/of.c b/drivers/opp/of.c
index b313aca9894f..1813f5ad5fa2 100644
--- a/drivers/opp/of.c
+++ b/drivers/opp/of.c
@@ -617,9 +617,12 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
617 /* OPP to select on device suspend */ 617 /* OPP to select on device suspend */
618 if (of_property_read_bool(np, "opp-suspend")) { 618 if (of_property_read_bool(np, "opp-suspend")) {
619 if (opp_table->suspend_opp) { 619 if (opp_table->suspend_opp) {
620 dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n", 620 /* Pick the OPP with higher rate as suspend OPP */
621 __func__, opp_table->suspend_opp->rate, 621 if (new_opp->rate > opp_table->suspend_opp->rate) {
622 new_opp->rate); 622 opp_table->suspend_opp->suspend = false;
623 new_opp->suspend = true;
624 opp_table->suspend_opp = new_opp;
625 }
623 } else { 626 } else {
624 new_opp->suspend = true; 627 new_opp->suspend = true;
625 opp_table->suspend_opp = new_opp; 628 opp_table->suspend_opp = new_opp;
@@ -662,8 +665,6 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
662 return 0; 665 return 0;
663 } 666 }
664 667
665 kref_init(&opp_table->list_kref);
666
667 /* We have opp-table node now, iterate over it and add OPPs */ 668 /* We have opp-table node now, iterate over it and add OPPs */
668 for_each_available_child_of_node(opp_table->np, np) { 669 for_each_available_child_of_node(opp_table->np, np) {
669 opp = _opp_add_static_v2(opp_table, dev, np); 670 opp = _opp_add_static_v2(opp_table, dev, np);
@@ -672,17 +673,15 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
672 dev_err(dev, "%s: Failed to add OPP, %d\n", __func__, 673 dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
673 ret); 674 ret);
674 of_node_put(np); 675 of_node_put(np);
675 goto put_list_kref; 676 return ret;
676 } else if (opp) { 677 } else if (opp) {
677 count++; 678 count++;
678 } 679 }
679 } 680 }
680 681
 681 /* There should be one or more OPP defined */ 682 /* There should be one or more OPP defined */
682 if (WARN_ON(!count)) { 683 if (WARN_ON(!count))
683 ret = -ENOENT; 684 return -ENOENT;
684 goto put_list_kref;
685 }
686 685
687 list_for_each_entry(opp, &opp_table->opp_list, node) 686 list_for_each_entry(opp, &opp_table->opp_list, node)
688 pstate_count += !!opp->pstate; 687 pstate_count += !!opp->pstate;
@@ -691,8 +690,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
691 if (pstate_count && pstate_count != count) { 690 if (pstate_count && pstate_count != count) {
692 dev_err(dev, "Not all nodes have performance state set (%d: %d)\n", 691 dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
693 count, pstate_count); 692 count, pstate_count);
694 ret = -ENOENT; 693 return -ENOENT;
695 goto put_list_kref;
696 } 694 }
697 695
698 if (pstate_count) 696 if (pstate_count)
@@ -701,11 +699,6 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
701 opp_table->parsed_static_opps = true; 699 opp_table->parsed_static_opps = true;
702 700
703 return 0; 701 return 0;
704
705put_list_kref:
706 _put_opp_list_kref(opp_table);
707
708 return ret;
709} 702}
710 703
711/* Initializes OPP tables based on old-deprecated bindings */ 704/* Initializes OPP tables based on old-deprecated bindings */
@@ -731,8 +724,6 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
731 return -EINVAL; 724 return -EINVAL;
732 } 725 }
733 726
734 kref_init(&opp_table->list_kref);
735
736 val = prop->value; 727 val = prop->value;
737 while (nr) { 728 while (nr) {
738 unsigned long freq = be32_to_cpup(val++) * 1000; 729 unsigned long freq = be32_to_cpup(val++) * 1000;
@@ -742,7 +733,6 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
742 if (ret) { 733 if (ret) {
743 dev_err(dev, "%s: Failed to add OPP %ld (%d)\n", 734 dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
744 __func__, freq, ret); 735 __func__, freq, ret);
745 _put_opp_list_kref(opp_table);
746 return ret; 736 return ret;
747 } 737 }
748 nr -= 2; 738 nr -= 2;
diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
index bc0d55a59015..ef6d4bd77b1a 100644
--- a/drivers/platform/x86/intel-hid.c
+++ b/drivers/platform/x86/intel-hid.c
@@ -253,35 +253,45 @@ static void intel_button_array_enable(struct device *device, bool enable)
253 253
254static int intel_hid_pm_prepare(struct device *device) 254static int intel_hid_pm_prepare(struct device *device)
255{ 255{
256 struct intel_hid_priv *priv = dev_get_drvdata(device); 256 if (device_may_wakeup(device)) {
257 struct intel_hid_priv *priv = dev_get_drvdata(device);
257 258
258 priv->wakeup_mode = true; 259 priv->wakeup_mode = true;
260 }
259 return 0; 261 return 0;
260} 262}
261 263
264static void intel_hid_pm_complete(struct device *device)
265{
266 struct intel_hid_priv *priv = dev_get_drvdata(device);
267
268 priv->wakeup_mode = false;
269}
270
262static int intel_hid_pl_suspend_handler(struct device *device) 271static int intel_hid_pl_suspend_handler(struct device *device)
263{ 272{
264 if (pm_suspend_via_firmware()) { 273 intel_button_array_enable(device, false);
274
275 if (!pm_suspend_no_platform())
265 intel_hid_set_enable(device, false); 276 intel_hid_set_enable(device, false);
266 intel_button_array_enable(device, false); 277
267 }
268 return 0; 278 return 0;
269} 279}
270 280
271static int intel_hid_pl_resume_handler(struct device *device) 281static int intel_hid_pl_resume_handler(struct device *device)
272{ 282{
273 struct intel_hid_priv *priv = dev_get_drvdata(device); 283 intel_hid_pm_complete(device);
274 284
275 priv->wakeup_mode = false; 285 if (!pm_suspend_no_platform())
276 if (pm_resume_via_firmware()) {
277 intel_hid_set_enable(device, true); 286 intel_hid_set_enable(device, true);
278 intel_button_array_enable(device, true); 287
279 } 288 intel_button_array_enable(device, true);
280 return 0; 289 return 0;
281} 290}
282 291
283static const struct dev_pm_ops intel_hid_pl_pm_ops = { 292static const struct dev_pm_ops intel_hid_pl_pm_ops = {
284 .prepare = intel_hid_pm_prepare, 293 .prepare = intel_hid_pm_prepare,
294 .complete = intel_hid_pm_complete,
285 .freeze = intel_hid_pl_suspend_handler, 295 .freeze = intel_hid_pl_suspend_handler,
286 .thaw = intel_hid_pl_resume_handler, 296 .thaw = intel_hid_pl_resume_handler,
287 .restore = intel_hid_pl_resume_handler, 297 .restore = intel_hid_pl_resume_handler,
@@ -491,6 +501,12 @@ static int intel_hid_probe(struct platform_device *device)
491 } 501 }
492 502
493 device_init_wakeup(&device->dev, true); 503 device_init_wakeup(&device->dev, true);
504 /*
505 * In order for system wakeup to work, the EC GPE has to be marked as
506 * a wakeup one, so do that here (this setting will persist, but it has
507 * no effect until the wakeup mask is set for the EC GPE).
508 */
509 acpi_ec_mark_gpe_for_wake();
494 return 0; 510 return 0;
495 511
496err_remove_notify: 512err_remove_notify:
diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
index a0d0cecff55f..b74932307d69 100644
--- a/drivers/platform/x86/intel-vbtn.c
+++ b/drivers/platform/x86/intel-vbtn.c
@@ -176,6 +176,12 @@ static int intel_vbtn_probe(struct platform_device *device)
176 return -EBUSY; 176 return -EBUSY;
177 177
178 device_init_wakeup(&device->dev, true); 178 device_init_wakeup(&device->dev, true);
179 /*
180 * In order for system wakeup to work, the EC GPE has to be marked as
181 * a wakeup one, so do that here (this setting will persist, but it has
182 * no effect until the wakeup mask is set for the EC GPE).
183 */
184 acpi_ec_mark_gpe_for_wake();
179 return 0; 185 return 0;
180} 186}
181 187
@@ -195,22 +201,30 @@ static int intel_vbtn_remove(struct platform_device *device)
195 201
196static int intel_vbtn_pm_prepare(struct device *dev) 202static int intel_vbtn_pm_prepare(struct device *dev)
197{ 203{
198 struct intel_vbtn_priv *priv = dev_get_drvdata(dev); 204 if (device_may_wakeup(dev)) {
205 struct intel_vbtn_priv *priv = dev_get_drvdata(dev);
199 206
200 priv->wakeup_mode = true; 207 priv->wakeup_mode = true;
208 }
201 return 0; 209 return 0;
202} 210}
203 211
204static int intel_vbtn_pm_resume(struct device *dev) 212static void intel_vbtn_pm_complete(struct device *dev)
205{ 213{
206 struct intel_vbtn_priv *priv = dev_get_drvdata(dev); 214 struct intel_vbtn_priv *priv = dev_get_drvdata(dev);
207 215
208 priv->wakeup_mode = false; 216 priv->wakeup_mode = false;
217}
218
219static int intel_vbtn_pm_resume(struct device *dev)
220{
221 intel_vbtn_pm_complete(dev);
209 return 0; 222 return 0;
210} 223}
211 224
212static const struct dev_pm_ops intel_vbtn_pm_ops = { 225static const struct dev_pm_ops intel_vbtn_pm_ops = {
213 .prepare = intel_vbtn_pm_prepare, 226 .prepare = intel_vbtn_pm_prepare,
227 .complete = intel_vbtn_pm_complete,
214 .resume = intel_vbtn_pm_resume, 228 .resume = intel_vbtn_pm_resume,
215 .restore = intel_vbtn_pm_resume, 229 .restore = intel_vbtn_pm_resume,
216 .thaw = intel_vbtn_pm_resume, 230 .thaw = intel_vbtn_pm_resume,
diff --git a/drivers/powercap/idle_inject.c b/drivers/powercap/idle_inject.c
index 24ff2a068978..cd1270614cc6 100644
--- a/drivers/powercap/idle_inject.c
+++ b/drivers/powercap/idle_inject.c
@@ -59,14 +59,14 @@ struct idle_inject_thread {
59/** 59/**
60 * struct idle_inject_device - idle injection data 60 * struct idle_inject_device - idle injection data
61 * @timer: idle injection period timer 61 * @timer: idle injection period timer
62 * @idle_duration_ms: duration of CPU idle time to inject 62 * @idle_duration_us: duration of CPU idle time to inject
63 * @run_duration_ms: duration of CPU run time to allow 63 * @run_duration_us: duration of CPU run time to allow
64 * @cpumask: mask of CPUs affected by idle injection 64 * @cpumask: mask of CPUs affected by idle injection
65 */ 65 */
66struct idle_inject_device { 66struct idle_inject_device {
67 struct hrtimer timer; 67 struct hrtimer timer;
68 unsigned int idle_duration_ms; 68 unsigned int idle_duration_us;
69 unsigned int run_duration_ms; 69 unsigned int run_duration_us;
70 unsigned long int cpumask[0]; 70 unsigned long int cpumask[0];
71}; 71};
72 72
@@ -104,16 +104,16 @@ static void idle_inject_wakeup(struct idle_inject_device *ii_dev)
104 */ 104 */
105static enum hrtimer_restart idle_inject_timer_fn(struct hrtimer *timer) 105static enum hrtimer_restart idle_inject_timer_fn(struct hrtimer *timer)
106{ 106{
107 unsigned int duration_ms; 107 unsigned int duration_us;
108 struct idle_inject_device *ii_dev = 108 struct idle_inject_device *ii_dev =
109 container_of(timer, struct idle_inject_device, timer); 109 container_of(timer, struct idle_inject_device, timer);
110 110
111 duration_ms = READ_ONCE(ii_dev->run_duration_ms); 111 duration_us = READ_ONCE(ii_dev->run_duration_us);
112 duration_ms += READ_ONCE(ii_dev->idle_duration_ms); 112 duration_us += READ_ONCE(ii_dev->idle_duration_us);
113 113
114 idle_inject_wakeup(ii_dev); 114 idle_inject_wakeup(ii_dev);
115 115
116 hrtimer_forward_now(timer, ms_to_ktime(duration_ms)); 116 hrtimer_forward_now(timer, ns_to_ktime(duration_us * NSEC_PER_USEC));
117 117
118 return HRTIMER_RESTART; 118 return HRTIMER_RESTART;
119} 119}
@@ -138,35 +138,35 @@ static void idle_inject_fn(unsigned int cpu)
138 */ 138 */
139 iit->should_run = 0; 139 iit->should_run = 0;
140 140
141 play_idle(READ_ONCE(ii_dev->idle_duration_ms)); 141 play_idle(READ_ONCE(ii_dev->idle_duration_us));
142} 142}
143 143
144/** 144/**
145 * idle_inject_set_duration - idle and run duration update helper 145 * idle_inject_set_duration - idle and run duration update helper
146 * @run_duration_ms: CPU run time to allow in milliseconds 146 * @run_duration_us: CPU run time to allow in microseconds
147 * @idle_duration_ms: CPU idle time to inject in milliseconds 147 * @idle_duration_us: CPU idle time to inject in microseconds
148 */ 148 */
149void idle_inject_set_duration(struct idle_inject_device *ii_dev, 149void idle_inject_set_duration(struct idle_inject_device *ii_dev,
150 unsigned int run_duration_ms, 150 unsigned int run_duration_us,
151 unsigned int idle_duration_ms) 151 unsigned int idle_duration_us)
152{ 152{
153 if (run_duration_ms && idle_duration_ms) { 153 if (run_duration_us && idle_duration_us) {
154 WRITE_ONCE(ii_dev->run_duration_ms, run_duration_ms); 154 WRITE_ONCE(ii_dev->run_duration_us, run_duration_us);
155 WRITE_ONCE(ii_dev->idle_duration_ms, idle_duration_ms); 155 WRITE_ONCE(ii_dev->idle_duration_us, idle_duration_us);
156 } 156 }
157} 157}
158 158
159/** 159/**
160 * idle_inject_get_duration - idle and run duration retrieval helper 160 * idle_inject_get_duration - idle and run duration retrieval helper
161 * @run_duration_ms: memory location to store the current CPU run time 161 * @run_duration_us: memory location to store the current CPU run time
162 * @idle_duration_ms: memory location to store the current CPU idle time 162 * @idle_duration_us: memory location to store the current CPU idle time
163 */ 163 */
164void idle_inject_get_duration(struct idle_inject_device *ii_dev, 164void idle_inject_get_duration(struct idle_inject_device *ii_dev,
165 unsigned int *run_duration_ms, 165 unsigned int *run_duration_us,
166 unsigned int *idle_duration_ms) 166 unsigned int *idle_duration_us)
167{ 167{
168 *run_duration_ms = READ_ONCE(ii_dev->run_duration_ms); 168 *run_duration_us = READ_ONCE(ii_dev->run_duration_us);
169 *idle_duration_ms = READ_ONCE(ii_dev->idle_duration_ms); 169 *idle_duration_us = READ_ONCE(ii_dev->idle_duration_us);
170} 170}
171 171
172/** 172/**
@@ -181,10 +181,10 @@ void idle_inject_get_duration(struct idle_inject_device *ii_dev,
181 */ 181 */
182int idle_inject_start(struct idle_inject_device *ii_dev) 182int idle_inject_start(struct idle_inject_device *ii_dev)
183{ 183{
184 unsigned int idle_duration_ms = READ_ONCE(ii_dev->idle_duration_ms); 184 unsigned int idle_duration_us = READ_ONCE(ii_dev->idle_duration_us);
185 unsigned int run_duration_ms = READ_ONCE(ii_dev->run_duration_ms); 185 unsigned int run_duration_us = READ_ONCE(ii_dev->run_duration_us);
186 186
187 if (!idle_duration_ms || !run_duration_ms) 187 if (!idle_duration_us || !run_duration_us)
188 return -EINVAL; 188 return -EINVAL;
189 189
190 pr_debug("Starting injecting idle cycles on CPUs '%*pbl'\n", 190 pr_debug("Starting injecting idle cycles on CPUs '%*pbl'\n",
@@ -193,7 +193,8 @@ int idle_inject_start(struct idle_inject_device *ii_dev)
193 idle_inject_wakeup(ii_dev); 193 idle_inject_wakeup(ii_dev);
194 194
195 hrtimer_start(&ii_dev->timer, 195 hrtimer_start(&ii_dev->timer,
196 ms_to_ktime(idle_duration_ms + run_duration_ms), 196 ns_to_ktime((idle_duration_us + run_duration_us) *
197 NSEC_PER_USEC),
197 HRTIMER_MODE_REL); 198 HRTIMER_MODE_REL);
198 199
199 return 0; 200 return 0;
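/*
 * Sketch with hypothetical values of the idle-injection API after the
 * switch to microsecond durations above: the run/idle windows are now
 * passed in us, which the framework converts to a nanosecond hrtimer
 * period.
 */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/idle_inject.h>

static struct cpumask example_inject_mask;

static int example_start_idle_injection(void)
{
	struct idle_inject_device *ii_dev;

	cpumask_copy(&example_inject_mask, cpu_possible_mask);

	ii_dev = idle_inject_register(&example_inject_mask);
	if (!ii_dev)
		return -ENOMEM;

	/* Run for 10000 us, then inject 5000 us of idle, per period. */
	idle_inject_set_duration(ii_dev, 10000, 5000);

	return idle_inject_start(ii_dev);
}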
diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c
index 4c5db59a619b..391f39776c6a 100644
--- a/drivers/thermal/cpu_cooling.c
+++ b/drivers/thermal/cpu_cooling.c
@@ -16,6 +16,7 @@
16#include <linux/err.h> 16#include <linux/err.h>
17#include <linux/idr.h> 17#include <linux/idr.h>
18#include <linux/pm_opp.h> 18#include <linux/pm_opp.h>
19#include <linux/pm_qos.h>
19#include <linux/slab.h> 20#include <linux/slab.h>
20#include <linux/cpu.h> 21#include <linux/cpu.h>
21#include <linux/cpu_cooling.h> 22#include <linux/cpu_cooling.h>
@@ -66,8 +67,6 @@ struct time_in_idle {
66 * @last_load: load measured by the latest call to cpufreq_get_requested_power() 67 * @last_load: load measured by the latest call to cpufreq_get_requested_power()
67 * @cpufreq_state: integer value representing the current state of cpufreq 68 * @cpufreq_state: integer value representing the current state of cpufreq
68 * cooling devices. 69 * cooling devices.
69 * @clipped_freq: integer value representing the absolute value of the clipped
70 * frequency.
71 * @max_level: maximum cooling level. One less than total number of valid 70 * @max_level: maximum cooling level. One less than total number of valid
72 * cpufreq frequencies. 71 * cpufreq frequencies.
73 * @freq_table: Freq table in descending order of frequencies 72 * @freq_table: Freq table in descending order of frequencies
@@ -84,12 +83,12 @@ struct cpufreq_cooling_device {
84 int id; 83 int id;
85 u32 last_load; 84 u32 last_load;
86 unsigned int cpufreq_state; 85 unsigned int cpufreq_state;
87 unsigned int clipped_freq;
88 unsigned int max_level; 86 unsigned int max_level;
89 struct freq_table *freq_table; /* In descending order */ 87 struct freq_table *freq_table; /* In descending order */
90 struct cpufreq_policy *policy; 88 struct cpufreq_policy *policy;
91 struct list_head node; 89 struct list_head node;
92 struct time_in_idle *idle_time; 90 struct time_in_idle *idle_time;
91 struct dev_pm_qos_request qos_req;
93}; 92};
94 93
95static DEFINE_IDA(cpufreq_ida); 94static DEFINE_IDA(cpufreq_ida);
@@ -119,59 +118,6 @@ static unsigned long get_level(struct cpufreq_cooling_device *cpufreq_cdev,
119} 118}
120 119
121/** 120/**
122 * cpufreq_thermal_notifier - notifier callback for cpufreq policy change.
123 * @nb: struct notifier_block * with callback info.
124 * @event: value showing cpufreq event for which this function invoked.
125 * @data: callback-specific data
126 *
127 * Callback to hijack the notification on cpufreq policy transition.
128 * Every time there is a change in policy, we will intercept and
129 * update the cpufreq policy with thermal constraints.
130 *
131 * Return: 0 (success)
132 */
133static int cpufreq_thermal_notifier(struct notifier_block *nb,
134 unsigned long event, void *data)
135{
136 struct cpufreq_policy *policy = data;
137 unsigned long clipped_freq;
138 struct cpufreq_cooling_device *cpufreq_cdev;
139
140 if (event != CPUFREQ_ADJUST)
141 return NOTIFY_DONE;
142
143 mutex_lock(&cooling_list_lock);
144 list_for_each_entry(cpufreq_cdev, &cpufreq_cdev_list, node) {
145 /*
146 * A new copy of the policy is sent to the notifier and can't
147 * compare that directly.
148 */
149 if (policy->cpu != cpufreq_cdev->policy->cpu)
150 continue;
151
152 /*
153 * policy->max is the maximum allowed frequency defined by user
154 * and clipped_freq is the maximum that thermal constraints
155 * allow.
156 *
157 * If clipped_freq is lower than policy->max, then we need to
158 * readjust policy->max.
159 *
160 * But, if clipped_freq is greater than policy->max, we don't
161 * need to do anything.
162 */
163 clipped_freq = cpufreq_cdev->clipped_freq;
164
165 if (policy->max > clipped_freq)
166 cpufreq_verify_within_limits(policy, 0, clipped_freq);
167 break;
168 }
169 mutex_unlock(&cooling_list_lock);
170
171 return NOTIFY_OK;
172}
173
174/**
175 * update_freq_table() - Update the freq table with power numbers 121 * update_freq_table() - Update the freq table with power numbers
176 * @cpufreq_cdev: the cpufreq cooling device in which to update the table 122 * @cpufreq_cdev: the cpufreq cooling device in which to update the table
177 * @capacitance: dynamic power coefficient for these cpus 123 * @capacitance: dynamic power coefficient for these cpus
@@ -374,7 +320,6 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
374 unsigned long state) 320 unsigned long state)
375{ 321{
376 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata; 322 struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
377 unsigned int clip_freq;
378 323
379 /* Request state should be less than max_level */ 324 /* Request state should be less than max_level */
380 if (WARN_ON(state > cpufreq_cdev->max_level)) 325 if (WARN_ON(state > cpufreq_cdev->max_level))
@@ -384,13 +329,10 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
384 if (cpufreq_cdev->cpufreq_state == state) 329 if (cpufreq_cdev->cpufreq_state == state)
385 return 0; 330 return 0;
386 331
387 clip_freq = cpufreq_cdev->freq_table[state].frequency;
388 cpufreq_cdev->cpufreq_state = state; 332 cpufreq_cdev->cpufreq_state = state;
389 cpufreq_cdev->clipped_freq = clip_freq;
390
391 cpufreq_update_policy(cpufreq_cdev->policy->cpu);
392 333
393 return 0; 334 return dev_pm_qos_update_request(&cpufreq_cdev->qos_req,
335 cpufreq_cdev->freq_table[state].frequency);
394} 336}
395 337
396/** 338/**
@@ -554,11 +496,6 @@ static struct thermal_cooling_device_ops cpufreq_power_cooling_ops = {
554 .power2state = cpufreq_power2state, 496 .power2state = cpufreq_power2state,
555}; 497};
556 498
557/* Notifier for cpufreq policy change */
558static struct notifier_block thermal_cpufreq_notifier_block = {
559 .notifier_call = cpufreq_thermal_notifier,
560};
561
562static unsigned int find_next_max(struct cpufreq_frequency_table *table, 499static unsigned int find_next_max(struct cpufreq_frequency_table *table,
563 unsigned int prev_max) 500 unsigned int prev_max)
564{ 501{
@@ -596,9 +533,16 @@ __cpufreq_cooling_register(struct device_node *np,
596 struct cpufreq_cooling_device *cpufreq_cdev; 533 struct cpufreq_cooling_device *cpufreq_cdev;
597 char dev_name[THERMAL_NAME_LENGTH]; 534 char dev_name[THERMAL_NAME_LENGTH];
598 unsigned int freq, i, num_cpus; 535 unsigned int freq, i, num_cpus;
536 struct device *dev;
599 int ret; 537 int ret;
600 struct thermal_cooling_device_ops *cooling_ops; 538 struct thermal_cooling_device_ops *cooling_ops;
601 bool first; 539
540 dev = get_cpu_device(policy->cpu);
541 if (unlikely(!dev)) {
542 pr_warn("No cpu device for cpu %d\n", policy->cpu);
543 return ERR_PTR(-ENODEV);
544 }
545
602 546
603 if (IS_ERR_OR_NULL(policy)) { 547 if (IS_ERR_OR_NULL(policy)) {
604 pr_err("%s: cpufreq policy isn't valid: %p\n", __func__, policy); 548 pr_err("%s: cpufreq policy isn't valid: %p\n", __func__, policy);
@@ -671,25 +615,29 @@ __cpufreq_cooling_register(struct device_node *np,
671 cooling_ops = &cpufreq_cooling_ops; 615 cooling_ops = &cpufreq_cooling_ops;
672 } 616 }
673 617
618 ret = dev_pm_qos_add_request(dev, &cpufreq_cdev->qos_req,
619 DEV_PM_QOS_MAX_FREQUENCY,
620 cpufreq_cdev->freq_table[0].frequency);
621 if (ret < 0) {
622 pr_err("%s: Failed to add freq constraint (%d)\n", __func__,
623 ret);
624 cdev = ERR_PTR(ret);
625 goto remove_ida;
626 }
627
674 cdev = thermal_of_cooling_device_register(np, dev_name, cpufreq_cdev, 628 cdev = thermal_of_cooling_device_register(np, dev_name, cpufreq_cdev,
675 cooling_ops); 629 cooling_ops);
676 if (IS_ERR(cdev)) 630 if (IS_ERR(cdev))
677 goto remove_ida; 631 goto remove_qos_req;
678
679 cpufreq_cdev->clipped_freq = cpufreq_cdev->freq_table[0].frequency;
680 632
681 mutex_lock(&cooling_list_lock); 633 mutex_lock(&cooling_list_lock);
682 /* Register the notifier for first cpufreq cooling device */
683 first = list_empty(&cpufreq_cdev_list);
684 list_add(&cpufreq_cdev->node, &cpufreq_cdev_list); 634 list_add(&cpufreq_cdev->node, &cpufreq_cdev_list);
685 mutex_unlock(&cooling_list_lock); 635 mutex_unlock(&cooling_list_lock);
686 636
687 if (first)
688 cpufreq_register_notifier(&thermal_cpufreq_notifier_block,
689 CPUFREQ_POLICY_NOTIFIER);
690
691 return cdev; 637 return cdev;
692 638
639remove_qos_req:
640 dev_pm_qos_remove_request(&cpufreq_cdev->qos_req);
693remove_ida: 641remove_ida:
694 ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id); 642 ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
695free_table: 643free_table:
@@ -777,7 +725,6 @@ EXPORT_SYMBOL_GPL(of_cpufreq_cooling_register);
777void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev) 725void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
778{ 726{
779 struct cpufreq_cooling_device *cpufreq_cdev; 727 struct cpufreq_cooling_device *cpufreq_cdev;
780 bool last;
781 728
782 if (!cdev) 729 if (!cdev)
783 return; 730 return;
@@ -786,15 +733,10 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
786 733
787 mutex_lock(&cooling_list_lock); 734 mutex_lock(&cooling_list_lock);
788 list_del(&cpufreq_cdev->node); 735 list_del(&cpufreq_cdev->node);
789 /* Unregister the notifier for the last cpufreq cooling device */
790 last = list_empty(&cpufreq_cdev_list);
791 mutex_unlock(&cooling_list_lock); 736 mutex_unlock(&cooling_list_lock);
792 737
793 if (last)
794 cpufreq_unregister_notifier(&thermal_cpufreq_notifier_block,
795 CPUFREQ_POLICY_NOTIFIER);
796
797 thermal_cooling_device_unregister(cdev); 738 thermal_cooling_device_unregister(cdev);
739 dev_pm_qos_remove_request(&cpufreq_cdev->qos_req);
798 ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id); 740 ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
799 kfree(cpufreq_cdev->idle_time); 741 kfree(cpufreq_cdev->idle_time);
800 kfree(cpufreq_cdev->freq_table); 742 kfree(cpufreq_cdev->freq_table);
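/*
 * Hypothetical registration sketch: a cpufreq driver exposes its policy as
 * a cooling device; with the change above the cooling device throttles
 * through a dev_pm_qos MAX_FREQUENCY request instead of a CPUFREQ_ADJUST
 * notifier, so the caller-side API is unchanged.
 */
#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>
#include <linux/err.h>

static struct thermal_cooling_device *example_cdev;

static void example_cpufreq_ready(struct cpufreq_policy *policy)
{
	example_cdev = cpufreq_cooling_register(policy);
	if (IS_ERR(example_cdev))
		example_cdev = NULL;
}

static void example_cpufreq_exit(void)
{
	/* cpufreq_cooling_unregister() tolerates a NULL cooling device. */
	cpufreq_cooling_unregister(example_cdev);
}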
diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
index 5149a817456b..53216dcbe173 100644
--- a/drivers/thermal/intel/intel_powerclamp.c
+++ b/drivers/thermal/intel/intel_powerclamp.c
@@ -430,7 +430,7 @@ static void clamp_idle_injection_func(struct kthread_work *work)
430 if (should_skip) 430 if (should_skip)
431 goto balance; 431 goto balance;
432 432
433 play_idle(jiffies_to_msecs(w_data->duration_jiffies)); 433 play_idle(jiffies_to_usecs(w_data->duration_jiffies));
434 434
435balance: 435balance:
436 if (clamping && w_data->clamping && cpu_online(w_data->cpu)) 436 if (clamping && w_data->clamping && cpu_online(w_data->cpu))
diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
index 4282cb117b92..f70c9f79622e 100644
--- a/drivers/video/fbdev/pxafb.c
+++ b/drivers/video/fbdev/pxafb.c
@@ -1678,24 +1678,6 @@ pxafb_freq_transition(struct notifier_block *nb, unsigned long val, void *data)
1678 } 1678 }
1679 return 0; 1679 return 0;
1680} 1680}
1681
1682static int
1683pxafb_freq_policy(struct notifier_block *nb, unsigned long val, void *data)
1684{
1685 struct pxafb_info *fbi = TO_INF(nb, freq_policy);
1686 struct fb_var_screeninfo *var = &fbi->fb.var;
1687 struct cpufreq_policy *policy = data;
1688
1689 switch (val) {
1690 case CPUFREQ_ADJUST:
1691 pr_debug("min dma period: %d ps, "
1692 "new clock %d kHz\n", pxafb_display_dma_period(var),
1693 policy->max);
1694 /* TODO: fill in min/max values */
1695 break;
1696 }
1697 return 0;
1698}
1699#endif 1681#endif
1700 1682
1701#ifdef CONFIG_PM 1683#ifdef CONFIG_PM
@@ -2400,11 +2382,8 @@ static int pxafb_probe(struct platform_device *dev)
2400 2382
2401#ifdef CONFIG_CPU_FREQ 2383#ifdef CONFIG_CPU_FREQ
2402 fbi->freq_transition.notifier_call = pxafb_freq_transition; 2384 fbi->freq_transition.notifier_call = pxafb_freq_transition;
2403 fbi->freq_policy.notifier_call = pxafb_freq_policy;
2404 cpufreq_register_notifier(&fbi->freq_transition, 2385 cpufreq_register_notifier(&fbi->freq_transition,
2405 CPUFREQ_TRANSITION_NOTIFIER); 2386 CPUFREQ_TRANSITION_NOTIFIER);
2406 cpufreq_register_notifier(&fbi->freq_policy,
2407 CPUFREQ_POLICY_NOTIFIER);
2408#endif 2387#endif
2409 2388
2410 /* 2389 /*
diff --git a/drivers/video/fbdev/pxafb.h b/drivers/video/fbdev/pxafb.h
index b641289c8a99..86b1e9ab1a38 100644
--- a/drivers/video/fbdev/pxafb.h
+++ b/drivers/video/fbdev/pxafb.h
@@ -162,7 +162,6 @@ struct pxafb_info {
162 162
163#ifdef CONFIG_CPU_FREQ 163#ifdef CONFIG_CPU_FREQ
164 struct notifier_block freq_transition; 164 struct notifier_block freq_transition;
165 struct notifier_block freq_policy;
166#endif 165#endif
167 166
168 struct regulator *lcd_supply; 167 struct regulator *lcd_supply;
diff --git a/drivers/video/fbdev/sa1100fb.c b/drivers/video/fbdev/sa1100fb.c
index f7f8dee044b1..ae2bcfee338a 100644
--- a/drivers/video/fbdev/sa1100fb.c
+++ b/drivers/video/fbdev/sa1100fb.c
@@ -1005,31 +1005,6 @@ sa1100fb_freq_transition(struct notifier_block *nb, unsigned long val,
1005 } 1005 }
1006 return 0; 1006 return 0;
1007} 1007}
1008
1009static int
1010sa1100fb_freq_policy(struct notifier_block *nb, unsigned long val,
1011 void *data)
1012{
1013 struct sa1100fb_info *fbi = TO_INF(nb, freq_policy);
1014 struct cpufreq_policy *policy = data;
1015
1016 switch (val) {
1017 case CPUFREQ_ADJUST:
1018 dev_dbg(fbi->dev, "min dma period: %d ps, "
1019 "new clock %d kHz\n", sa1100fb_min_dma_period(fbi),
1020 policy->max);
1021 /* todo: fill in min/max values */
1022 break;
1023 case CPUFREQ_NOTIFY:
1024 do {} while(0);
1025 /* todo: panic if min/max values aren't fulfilled
1026 * [can't really happen unless there's a bug in the
1027 * CPU policy verififcation process *
1028 */
1029 break;
1030 }
1031 return 0;
1032}
1033#endif 1008#endif
1034 1009
1035#ifdef CONFIG_PM 1010#ifdef CONFIG_PM
@@ -1242,9 +1217,7 @@ static int sa1100fb_probe(struct platform_device *pdev)
1242 1217
1243#ifdef CONFIG_CPU_FREQ 1218#ifdef CONFIG_CPU_FREQ
1244 fbi->freq_transition.notifier_call = sa1100fb_freq_transition; 1219 fbi->freq_transition.notifier_call = sa1100fb_freq_transition;
1245 fbi->freq_policy.notifier_call = sa1100fb_freq_policy;
1246 cpufreq_register_notifier(&fbi->freq_transition, CPUFREQ_TRANSITION_NOTIFIER); 1220 cpufreq_register_notifier(&fbi->freq_transition, CPUFREQ_TRANSITION_NOTIFIER);
1247 cpufreq_register_notifier(&fbi->freq_policy, CPUFREQ_POLICY_NOTIFIER);
1248#endif 1221#endif
1249 1222
1250 /* This driver cannot be unloaded at the moment */ 1223 /* This driver cannot be unloaded at the moment */
diff --git a/drivers/video/fbdev/sa1100fb.h b/drivers/video/fbdev/sa1100fb.h
index 7a1a9ca33cec..d0aa33b0b88a 100644
--- a/drivers/video/fbdev/sa1100fb.h
+++ b/drivers/video/fbdev/sa1100fb.h
@@ -64,7 +64,6 @@ struct sa1100fb_info {
64 64
65#ifdef CONFIG_CPU_FREQ 65#ifdef CONFIG_CPU_FREQ
66 struct notifier_block freq_transition; 66 struct notifier_block freq_transition;
67 struct notifier_block freq_policy;
68#endif 67#endif
69 68
70 const struct sa1100fb_mach_info *inf; 69 const struct sa1100fb_mach_info *inf;
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index d7f1f5011fac..c4159bcc05d9 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1459,13 +1459,13 @@ static int ep_create_wakeup_source(struct epitem *epi)
1459 struct wakeup_source *ws; 1459 struct wakeup_source *ws;
1460 1460
1461 if (!epi->ep->ws) { 1461 if (!epi->ep->ws) {
1462 epi->ep->ws = wakeup_source_register("eventpoll"); 1462 epi->ep->ws = wakeup_source_register(NULL, "eventpoll");
1463 if (!epi->ep->ws) 1463 if (!epi->ep->ws)
1464 return -ENOMEM; 1464 return -ENOMEM;
1465 } 1465 }
1466 1466
1467 name = epi->ffd.file->f_path.dentry->d_name.name; 1467 name = epi->ffd.file->f_path.dentry->d_name.name;
1468 ws = wakeup_source_register(name); 1468 ws = wakeup_source_register(NULL, name);
1469 1469
1470 if (!ws) 1470 if (!ws)
1471 return -ENOMEM; 1471 return -ENOMEM;
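/*
 * Minimal sketch of the reworked wakeup_source_register() used above: the
 * first argument is now the device the source is attached to (NULL for an
 * anonymous source such as eventpoll's), which is what allows wakeup
 * sources to be exposed as device objects in sysfs.
 */
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/pm_wakeup.h>

static int example_add_wakeup_source(struct device *dev)
{
	struct wakeup_source *ws;

	ws = wakeup_source_register(dev, "example-events");
	if (!ws)
		return -ENOMEM;

	__pm_stay_awake(ws);	/* hold the system awake while busy */
	__pm_relax(ws);		/* release it again */

	wakeup_source_unregister(ws);
	return 0;
}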
diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
index 3845c8fcc94e..4ed603a3b448 100644
--- a/include/acpi/acpixf.h
+++ b/include/acpi/acpixf.h
@@ -297,6 +297,9 @@ ACPI_GLOBAL(u8, acpi_gbl_system_awake_and_running);
297#define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \ 297#define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \
298 ACPI_EXTERNAL_RETURN_OK(prototype) 298 ACPI_EXTERNAL_RETURN_OK(prototype)
299 299
300#define ACPI_HW_DEPENDENT_RETURN_UINT32(prototype) \
301 ACPI_EXTERNAL_RETURN_UINT32(prototype)
302
300#define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \ 303#define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \
301 ACPI_EXTERNAL_RETURN_VOID(prototype) 304 ACPI_EXTERNAL_RETURN_VOID(prototype)
302 305
@@ -307,6 +310,9 @@ ACPI_GLOBAL(u8, acpi_gbl_system_awake_and_running);
307#define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \ 310#define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \
308 static ACPI_INLINE prototype {return(AE_OK);} 311 static ACPI_INLINE prototype {return(AE_OK);}
309 312
313#define ACPI_HW_DEPENDENT_RETURN_UINT32(prototype) \
314 static ACPI_INLINE prototype {return(0);}
315
310#define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \ 316#define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \
311 static ACPI_INLINE prototype {return;} 317 static ACPI_INLINE prototype {return;}
312 318
@@ -738,7 +744,7 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
738 u32 gpe_number, 744 u32 gpe_number,
739 acpi_event_status 745 acpi_event_status
740 *event_status)) 746 *event_status))
741ACPI_HW_DEPENDENT_RETURN_VOID(void acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number)) 747ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number))
742ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void)) 748ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
743ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void)) 749ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
744ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void)) 750ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
diff --git a/include/acpi/processor.h b/include/acpi/processor.h
index 1194a4c78d55..f936033cb9e6 100644
--- a/include/acpi/processor.h
+++ b/include/acpi/processor.h
@@ -4,6 +4,8 @@
4 4
5#include <linux/kernel.h> 5#include <linux/kernel.h>
6#include <linux/cpu.h> 6#include <linux/cpu.h>
7#include <linux/cpufreq.h>
8#include <linux/pm_qos.h>
7#include <linux/thermal.h> 9#include <linux/thermal.h>
8#include <asm/acpi.h> 10#include <asm/acpi.h>
9 11
@@ -230,6 +232,8 @@ struct acpi_processor {
230 struct acpi_processor_limit limit; 232 struct acpi_processor_limit limit;
231 struct thermal_cooling_device *cdev; 233 struct thermal_cooling_device *cdev;
232 struct device *dev; /* Processor device. */ 234 struct device *dev; /* Processor device. */
235 struct dev_pm_qos_request perflib_req;
236 struct dev_pm_qos_request thermal_req;
233}; 237};
234 238
235struct acpi_processor_errata { 239struct acpi_processor_errata {
@@ -296,16 +300,22 @@ static inline void acpi_processor_ffh_cstate_enter(struct acpi_processor_cx
296/* in processor_perflib.c */ 300/* in processor_perflib.c */
297 301
298#ifdef CONFIG_CPU_FREQ 302#ifdef CONFIG_CPU_FREQ
299void acpi_processor_ppc_init(void); 303extern bool acpi_processor_cpufreq_init;
300void acpi_processor_ppc_exit(void); 304void acpi_processor_ignore_ppc_init(void);
305void acpi_processor_ppc_init(int cpu);
306void acpi_processor_ppc_exit(int cpu);
301void acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag); 307void acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag);
302extern int acpi_processor_get_bios_limit(int cpu, unsigned int *limit); 308extern int acpi_processor_get_bios_limit(int cpu, unsigned int *limit);
303#else 309#else
304static inline void acpi_processor_ppc_init(void) 310static inline void acpi_processor_ignore_ppc_init(void)
305{ 311{
306 return; 312 return;
307} 313}
308static inline void acpi_processor_ppc_exit(void) 314static inline void acpi_processor_ppc_init(int cpu)
315{
316 return;
317}
318static inline void acpi_processor_ppc_exit(int cpu)
309{ 319{
310 return; 320 return;
311} 321}
@@ -421,14 +431,14 @@ static inline int acpi_processor_hotplug(struct acpi_processor *pr)
421int acpi_processor_get_limit_info(struct acpi_processor *pr); 431int acpi_processor_get_limit_info(struct acpi_processor *pr);
422extern const struct thermal_cooling_device_ops processor_cooling_ops; 432extern const struct thermal_cooling_device_ops processor_cooling_ops;
423#if defined(CONFIG_ACPI_CPU_FREQ_PSS) & defined(CONFIG_CPU_FREQ) 433#if defined(CONFIG_ACPI_CPU_FREQ_PSS) & defined(CONFIG_CPU_FREQ)
424void acpi_thermal_cpufreq_init(void); 434void acpi_thermal_cpufreq_init(int cpu);
425void acpi_thermal_cpufreq_exit(void); 435void acpi_thermal_cpufreq_exit(int cpu);
426#else 436#else
427static inline void acpi_thermal_cpufreq_init(void) 437static inline void acpi_thermal_cpufreq_init(int cpu)
428{ 438{
429 return; 439 return;
430} 440}
431static inline void acpi_thermal_cpufreq_exit(void) 441static inline void acpi_thermal_cpufreq_exit(int cpu)
432{ 442{
433 return; 443 return;
434} 444}
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 0fecacca51e8..978cc239f23b 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -931,6 +931,8 @@ int acpi_subsys_suspend_noirq(struct device *dev);
931int acpi_subsys_suspend(struct device *dev); 931int acpi_subsys_suspend(struct device *dev);
932int acpi_subsys_freeze(struct device *dev); 932int acpi_subsys_freeze(struct device *dev);
933int acpi_subsys_poweroff(struct device *dev); 933int acpi_subsys_poweroff(struct device *dev);
934void acpi_ec_mark_gpe_for_wake(void);
935void acpi_ec_set_gpe_wake_mask(u8 action);
934#else 936#else
935static inline int acpi_subsys_prepare(struct device *dev) { return 0; } 937static inline int acpi_subsys_prepare(struct device *dev) { return 0; }
936static inline void acpi_subsys_complete(struct device *dev) {} 938static inline void acpi_subsys_complete(struct device *dev) {}
@@ -939,6 +941,8 @@ static inline int acpi_subsys_suspend_noirq(struct device *dev) { return 0; }
939static inline int acpi_subsys_suspend(struct device *dev) { return 0; } 941static inline int acpi_subsys_suspend(struct device *dev) { return 0; }
940static inline int acpi_subsys_freeze(struct device *dev) { return 0; } 942static inline int acpi_subsys_freeze(struct device *dev) { return 0; }
941static inline int acpi_subsys_poweroff(struct device *dev) { return 0; } 943static inline int acpi_subsys_poweroff(struct device *dev) { return 0; }
944static inline void acpi_ec_mark_gpe_for_wake(void) {}
945static inline void acpi_ec_set_gpe_wake_mask(u8 action) {}
942#endif 946#endif
943 947
944#ifdef CONFIG_ACPI 948#ifdef CONFIG_ACPI
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index fcb1386bb0d4..88dc0c653925 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -179,7 +179,7 @@ void arch_cpu_idle_dead(void);
179int cpu_report_state(int cpu); 179int cpu_report_state(int cpu);
180int cpu_check_up_prepare(int cpu); 180int cpu_check_up_prepare(int cpu);
181void cpu_set_state_online(int cpu); 181void cpu_set_state_online(int cpu);
182void play_idle(unsigned long duration_ms); 182void play_idle(unsigned long duration_us);
183 183
184#ifdef CONFIG_HOTPLUG_CPU 184#ifdef CONFIG_HOTPLUG_CPU
185bool cpu_wait_death(unsigned int cpu, int seconds); 185bool cpu_wait_death(unsigned int cpu, int seconds);
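
With this change play_idle() takes its argument in microseconds instead of milliseconds, matching the finer-grained idle injection mentioned in the pull message. A minimal sketch of an updated caller (the 10 ms value is illustrative, and it assumes the pinned idle-injection kthread context that play_idle() checks for):

#include <linux/cpu.h>

static void example_inject_idle(void)
{
	/* 10 ms of forced idle, now expressed in microseconds. */
	play_idle(10000);
}
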
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 536a049d7ecc..c57e88e85c41 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -456,8 +456,8 @@ static inline void cpufreq_resume(void) {}
456#define CPUFREQ_POSTCHANGE (1) 456#define CPUFREQ_POSTCHANGE (1)
457 457
458/* Policy Notifiers */ 458/* Policy Notifiers */
459#define CPUFREQ_ADJUST (0) 459#define CPUFREQ_CREATE_POLICY (0)
460#define CPUFREQ_NOTIFY (1) 460#define CPUFREQ_REMOVE_POLICY (1)
461 461
462#ifdef CONFIG_CPU_FREQ 462#ifdef CONFIG_CPU_FREQ
463int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list); 463int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
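
The CPUFREQ_ADJUST and CPUFREQ_NOTIFY policy events are gone; a policy notifier now only learns about policy creation and removal, and frequency limits are expected to be expressed through QoS requests rather than by editing the policy in the callback. A hedged sketch of a notifier written against the new events (the function names and the commented-out registration call are illustrative, not taken from any driver in this series):

#include <linux/cpufreq.h>
#include <linux/notifier.h>

static int example_policy_notifier(struct notifier_block *nb,
				   unsigned long event, void *data)
{
	/* @data points at the struct cpufreq_policy being created/removed. */
	switch (event) {
	case CPUFREQ_CREATE_POLICY:
		/* Set up per-policy state, e.g. a PM QoS frequency request,
		 * instead of adjusting policy->min/max here. */
		break;
	case CPUFREQ_REMOVE_POLICY:
		/* Tear the per-policy state down again. */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block example_policy_nb = {
	.notifier_call = example_policy_notifier,
};

/* registered with:
 *	cpufreq_register_notifier(&example_policy_nb, CPUFREQ_POLICY_NOTIFIER);
 */
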
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 12ae4b87494e..4b6b5bea8f79 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -85,7 +85,9 @@ struct cpuidle_device {
85 unsigned int cpu; 85 unsigned int cpu;
86 ktime_t next_hrtimer; 86 ktime_t next_hrtimer;
87 87
88 int last_state_idx;
88 int last_residency; 89 int last_residency;
90 u64 poll_limit_ns;
89 struct cpuidle_state_usage states_usage[CPUIDLE_STATE_MAX]; 91 struct cpuidle_state_usage states_usage[CPUIDLE_STATE_MAX];
90 struct cpuidle_state_kobj *kobjs[CPUIDLE_STATE_MAX]; 92 struct cpuidle_state_kobj *kobjs[CPUIDLE_STATE_MAX];
91 struct cpuidle_driver_kobj *kobj_driver; 93 struct cpuidle_driver_kobj *kobj_driver;
@@ -119,6 +121,9 @@ struct cpuidle_driver {
119 121
120 /* the driver handles the cpus in cpumask */ 122 /* the driver handles the cpus in cpumask */
121 struct cpumask *cpumask; 123 struct cpumask *cpumask;
124
125 /* preferred governor to switch at register time */
126 const char *governor;
122}; 127};
123 128
124#ifdef CONFIG_CPU_IDLE 129#ifdef CONFIG_CPU_IDLE
@@ -132,6 +137,8 @@ extern int cpuidle_select(struct cpuidle_driver *drv,
132extern int cpuidle_enter(struct cpuidle_driver *drv, 137extern int cpuidle_enter(struct cpuidle_driver *drv,
133 struct cpuidle_device *dev, int index); 138 struct cpuidle_device *dev, int index);
134extern void cpuidle_reflect(struct cpuidle_device *dev, int index); 139extern void cpuidle_reflect(struct cpuidle_device *dev, int index);
140extern u64 cpuidle_poll_time(struct cpuidle_driver *drv,
141 struct cpuidle_device *dev);
135 142
136extern int cpuidle_register_driver(struct cpuidle_driver *drv); 143extern int cpuidle_register_driver(struct cpuidle_driver *drv);
137extern struct cpuidle_driver *cpuidle_get_driver(void); 144extern struct cpuidle_driver *cpuidle_get_driver(void);
@@ -166,6 +173,9 @@ static inline int cpuidle_enter(struct cpuidle_driver *drv,
166 struct cpuidle_device *dev, int index) 173 struct cpuidle_device *dev, int index)
167{return -ENODEV; } 174{return -ENODEV; }
168static inline void cpuidle_reflect(struct cpuidle_device *dev, int index) { } 175static inline void cpuidle_reflect(struct cpuidle_device *dev, int index) { }
176static inline u64 cpuidle_poll_time(struct cpuidle_driver *drv,
177 struct cpuidle_device *dev)
178{return 0; }
169static inline int cpuidle_register_driver(struct cpuidle_driver *drv) 179static inline int cpuidle_register_driver(struct cpuidle_driver *drv)
170{return -ENODEV; } 180{return -ENODEV; }
171static inline struct cpuidle_driver *cpuidle_get_driver(void) {return NULL; } 181static inline struct cpuidle_driver *cpuidle_get_driver(void) {return NULL; }
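
The new ->governor field lets a cpuidle driver ask the core to switch to a particular governor when the driver registers; the haltpoll driver added in this series is the intended first user. A hedged sketch of a driver declaration using it (the driver name is illustrative and the idle-state table is omitted):

#include <linux/cpuidle.h>
#include <linux/module.h>

static struct cpuidle_driver example_idle_driver = {
	.name     = "example_idle",
	.owner    = THIS_MODULE,
	/* Preferred governor to switch to at register time;
	 * "haltpoll" is the governor added by this series. */
	.governor = "haltpoll",
	/* .states[] and .state_count omitted from this sketch */
};
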
diff --git a/include/linux/cpuidle_haltpoll.h b/include/linux/cpuidle_haltpoll.h
new file mode 100644
index 000000000000..d50c1e0411a2
--- /dev/null
+++ b/include/linux/cpuidle_haltpoll.h
@@ -0,0 +1,16 @@
1/* SPDX-License-Identifier: GPL-2.0 */
2#ifndef _CPUIDLE_HALTPOLL_H
3#define _CPUIDLE_HALTPOLL_H
4
5#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
6#include <asm/cpuidle_haltpoll.h>
7#else
8static inline void arch_haltpoll_enable(unsigned int cpu)
9{
10}
11
12static inline void arch_haltpoll_disable(unsigned int cpu)
13{
14}
15#endif
16#endif
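
The two arch hooks compile away to the stubs above unless CONFIG_ARCH_CPUIDLE_HALTPOLL is set, in which case the architecture (a KVM guest, for instance) can switch host-side behaviour while the guest polls in idle. A rough sketch of how a per-CPU online/offline pair might use them (the function names here are placeholders, not the actual driver code):

#include <linux/cpuidle_haltpoll.h>

static int example_haltpoll_cpu_online(unsigned int cpu)
{
	/* No-op on architectures without ARCH_CPUIDLE_HALTPOLL. */
	arch_haltpoll_enable(cpu);
	return 0;
}

static int example_haltpoll_cpu_offline(unsigned int cpu)
{
	arch_haltpoll_disable(cpu);
	return 0;
}
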
diff --git a/include/linux/devfreq-event.h b/include/linux/devfreq-event.h
index 29fc0dd735ae..f14f17f8cb7f 100644
--- a/include/linux/devfreq-event.h
+++ b/include/linux/devfreq-event.h
@@ -78,14 +78,20 @@ struct devfreq_event_ops {
78 * struct devfreq_event_desc - the descriptor of devfreq-event device 78 * struct devfreq_event_desc - the descriptor of devfreq-event device
79 * 79 *
80 * @name : the name of devfreq-event device. 80 * @name : the name of devfreq-event device.
81 * @event_type : the type of the event determined and used by driver
81 * @driver_data : the private data for devfreq-event driver. 82 * @driver_data : the private data for devfreq-event driver.
82 * @ops : the operation to control devfreq-event device. 83 * @ops : the operation to control devfreq-event device.
83 * 84 *
84 * Each devfreq-event device is described with a this structure. 85 * Each devfreq-event device is described with a this structure.
85 * This structure contains the various data for devfreq-event device. 86 * This structure contains the various data for devfreq-event device.
87 * The event_type describes what is going to be counted in the register.
88 * It might choose to count e.g. read requests, write data in bytes, etc.
89 * The full supported list of types is present in specyfic header in:
90 * include/dt-bindings/pmu/.
86 */ 91 */
87struct devfreq_event_desc { 92struct devfreq_event_desc {
88 const char *name; 93 const char *name;
94 u32 event_type;
89 void *driver_data; 95 void *driver_data;
90 96
91 const struct devfreq_event_ops *ops; 97 const struct devfreq_event_ops *ops;
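
Drivers that fill in the new event_type field are expected to take the value from the dt-bindings PMU headers referenced in the comment. A hedged sketch of a descriptor using it; EXAMPLE_PPMU_READ_EVENT stands in for a real constant from include/dt-bindings/pmu/ and is not an actual identifier:

#include <linux/devfreq-event.h>

#define EXAMPLE_PPMU_READ_EVENT	0x5	/* placeholder value */

static struct devfreq_event_desc example_event_desc = {
	.name       = "example-ppmu-event",
	.event_type = EXAMPLE_PPMU_READ_EVENT,
	/* .ops and .driver_data would be filled in by the driver */
};
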
diff --git a/include/linux/idle_inject.h b/include/linux/idle_inject.h
index bdc0293fb6cb..a445cd1a36c5 100644
--- a/include/linux/idle_inject.h
+++ b/include/linux/idle_inject.h
@@ -20,10 +20,10 @@ int idle_inject_start(struct idle_inject_device *ii_dev);
20void idle_inject_stop(struct idle_inject_device *ii_dev); 20void idle_inject_stop(struct idle_inject_device *ii_dev);
21 21
22void idle_inject_set_duration(struct idle_inject_device *ii_dev, 22void idle_inject_set_duration(struct idle_inject_device *ii_dev,
23 unsigned int run_duration_ms, 23 unsigned int run_duration_us,
24 unsigned int idle_duration_ms); 24 unsigned int idle_duration_us);
25 25
26void idle_inject_get_duration(struct idle_inject_device *ii_dev, 26void idle_inject_get_duration(struct idle_inject_device *ii_dev,
27 unsigned int *run_duration_ms, 27 unsigned int *run_duration_us,
28 unsigned int *idle_duration_ms); 28 unsigned int *idle_duration_us);
29#endif /* __IDLE_INJECT_H__ */ 29#endif /* __IDLE_INJECT_H__ */
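
As with play_idle(), the idle-injection framework now takes both durations in microseconds. A minimal sketch of a caller under that convention (the 40 ms run / 10 ms idle split is illustrative):

#include <linux/idle_inject.h>

static int example_start_injection(struct idle_inject_device *ii_dev)
{
	/* 40 ms of run time followed by 10 ms of forced idle per cycle,
	 * both arguments now given in microseconds. */
	idle_inject_set_duration(ii_dev, 40000, 10000);
	return idle_inject_start(ii_dev);
}
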
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 07b527dca996..89fc59dab57d 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -238,6 +238,7 @@ extern void teardown_percpu_nmi(unsigned int irq);
238/* The following three functions are for the core kernel use only. */ 238/* The following three functions are for the core kernel use only. */
239extern void suspend_device_irqs(void); 239extern void suspend_device_irqs(void);
240extern void resume_device_irqs(void); 240extern void resume_device_irqs(void);
241extern void rearm_wake_irq(unsigned int irq);
241 242
242/** 243/**
243 * struct irq_affinity_notify - context for notification of IRQ affinity changes 244 * struct irq_affinity_notify - context for notification of IRQ affinity changes
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 3619a870eaa4..4c441be03079 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -712,8 +712,6 @@ struct dev_pm_domain {
712extern void device_pm_lock(void); 712extern void device_pm_lock(void);
713extern void dpm_resume_start(pm_message_t state); 713extern void dpm_resume_start(pm_message_t state);
714extern void dpm_resume_end(pm_message_t state); 714extern void dpm_resume_end(pm_message_t state);
715extern void dpm_noirq_resume_devices(pm_message_t state);
716extern void dpm_noirq_end(void);
717extern void dpm_resume_noirq(pm_message_t state); 715extern void dpm_resume_noirq(pm_message_t state);
718extern void dpm_resume_early(pm_message_t state); 716extern void dpm_resume_early(pm_message_t state);
719extern void dpm_resume(pm_message_t state); 717extern void dpm_resume(pm_message_t state);
@@ -722,8 +720,6 @@ extern void dpm_complete(pm_message_t state);
722extern void device_pm_unlock(void); 720extern void device_pm_unlock(void);
723extern int dpm_suspend_end(pm_message_t state); 721extern int dpm_suspend_end(pm_message_t state);
724extern int dpm_suspend_start(pm_message_t state); 722extern int dpm_suspend_start(pm_message_t state);
725extern void dpm_noirq_begin(void);
726extern int dpm_noirq_suspend_devices(pm_message_t state);
727extern int dpm_suspend_noirq(pm_message_t state); 723extern int dpm_suspend_noirq(pm_message_t state);
728extern int dpm_suspend_late(pm_message_t state); 724extern int dpm_suspend_late(pm_message_t state);
729extern int dpm_suspend(pm_message_t state); 725extern int dpm_suspend(pm_message_t state);
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 91d9bf497071..baf02ff91a31 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -197,9 +197,9 @@ static inline struct generic_pm_domain_data *dev_gpd_data(struct device *dev)
197int pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev); 197int pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev);
198int pm_genpd_remove_device(struct device *dev); 198int pm_genpd_remove_device(struct device *dev);
199int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, 199int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
200 struct generic_pm_domain *new_subdomain); 200 struct generic_pm_domain *subdomain);
201int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, 201int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
202 struct generic_pm_domain *target); 202 struct generic_pm_domain *subdomain);
203int pm_genpd_init(struct generic_pm_domain *genpd, 203int pm_genpd_init(struct generic_pm_domain *genpd,
204 struct dev_power_governor *gov, bool is_off); 204 struct dev_power_governor *gov, bool is_off);
205int pm_genpd_remove(struct generic_pm_domain *genpd); 205int pm_genpd_remove(struct generic_pm_domain *genpd);
@@ -226,12 +226,12 @@ static inline int pm_genpd_remove_device(struct device *dev)
226 return -ENOSYS; 226 return -ENOSYS;
227} 227}
228static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, 228static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
229 struct generic_pm_domain *new_sd) 229 struct generic_pm_domain *subdomain)
230{ 230{
231 return -ENOSYS; 231 return -ENOSYS;
232} 232}
233static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, 233static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
234 struct generic_pm_domain *target) 234 struct generic_pm_domain *subdomain)
235{ 235{
236 return -ENOSYS; 236 return -ENOSYS;
237} 237}
@@ -282,8 +282,8 @@ int of_genpd_add_provider_onecell(struct device_node *np,
282 struct genpd_onecell_data *data); 282 struct genpd_onecell_data *data);
283void of_genpd_del_provider(struct device_node *np); 283void of_genpd_del_provider(struct device_node *np);
284int of_genpd_add_device(struct of_phandle_args *args, struct device *dev); 284int of_genpd_add_device(struct of_phandle_args *args, struct device *dev);
285int of_genpd_add_subdomain(struct of_phandle_args *parent, 285int of_genpd_add_subdomain(struct of_phandle_args *parent_spec,
286 struct of_phandle_args *new_subdomain); 286 struct of_phandle_args *subdomain_spec);
287struct generic_pm_domain *of_genpd_remove_last(struct device_node *np); 287struct generic_pm_domain *of_genpd_remove_last(struct device_node *np);
288int of_genpd_parse_idle_states(struct device_node *dn, 288int of_genpd_parse_idle_states(struct device_node *dn,
289 struct genpd_power_state **states, int *n); 289 struct genpd_power_state **states, int *n);
@@ -316,8 +316,8 @@ static inline int of_genpd_add_device(struct of_phandle_args *args,
316 return -ENODEV; 316 return -ENODEV;
317} 317}
318 318
319static inline int of_genpd_add_subdomain(struct of_phandle_args *parent, 319static inline int of_genpd_add_subdomain(struct of_phandle_args *parent_spec,
320 struct of_phandle_args *new_subdomain) 320 struct of_phandle_args *subdomain_spec)
321{ 321{
322 return -ENODEV; 322 return -ENODEV;
323} 323}
diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
index af5021f27cb7..b8197ab014f2 100644
--- a/include/linux/pm_opp.h
+++ b/include/linux/pm_opp.h
@@ -96,6 +96,8 @@ unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev);
96struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev, 96struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
97 unsigned long freq, 97 unsigned long freq,
98 bool available); 98 bool available);
99struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
100 unsigned int level);
99 101
100struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev, 102struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
101 unsigned long *freq); 103 unsigned long *freq);
@@ -128,7 +130,7 @@ struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name);
128void dev_pm_opp_put_clkname(struct opp_table *opp_table); 130void dev_pm_opp_put_clkname(struct opp_table *opp_table);
129struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data)); 131struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
130void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table); 132void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
131struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names); 133struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
132void dev_pm_opp_detach_genpd(struct opp_table *opp_table); 134void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
133int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate); 135int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
134int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq); 136int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
@@ -200,6 +202,12 @@ static inline struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
200 return ERR_PTR(-ENOTSUPP); 202 return ERR_PTR(-ENOTSUPP);
201} 203}
202 204
205static inline struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
206 unsigned int level)
207{
208 return ERR_PTR(-ENOTSUPP);
209}
210
203static inline struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev, 211static inline struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
204 unsigned long *freq) 212 unsigned long *freq)
205{ 213{
@@ -292,7 +300,7 @@ static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const
292 300
293static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {} 301static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {}
294 302
295static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names) 303static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs)
296{ 304{
297 return ERR_PTR(-ENOTSUPP); 305 return ERR_PTR(-ENOTSUPP);
298} 306}
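
dev_pm_opp_attach_genpd() can now hand back the virtual devices it creates for the named power domains, so a consumer can place performance-state votes on them directly. A hedged sketch assuming a device whose OPP table references two domains named "mx" and "cx" (the names are illustrative):

#include <linux/pm_opp.h>

static int example_attach_domains(struct device *dev)
{
	const char *names[] = { "mx", "cx", NULL };
	struct device **virt_devs;
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_attach_genpd(dev, names, &virt_devs);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	/* virt_devs[0] and virt_devs[1] can now carry genpd
	 * performance-state requests for "mx" and "cx". */
	return 0;
}
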
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 2aebbc5b9950..222c3e01397c 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -13,9 +13,6 @@
13enum { 13enum {
14 PM_QOS_RESERVED = 0, 14 PM_QOS_RESERVED = 0,
15 PM_QOS_CPU_DMA_LATENCY, 15 PM_QOS_CPU_DMA_LATENCY,
16 PM_QOS_NETWORK_LATENCY,
17 PM_QOS_NETWORK_THROUGHPUT,
18 PM_QOS_MEMORY_BANDWIDTH,
19 16
20 /* insert new class ID */ 17 /* insert new class ID */
21 PM_QOS_NUM_CLASSES, 18 PM_QOS_NUM_CLASSES,
@@ -33,9 +30,6 @@ enum pm_qos_flags_status {
33#define PM_QOS_LATENCY_ANY_NS ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC) 30#define PM_QOS_LATENCY_ANY_NS ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
34 31
35#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 32#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
36#define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
37#define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0
38#define PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE 0
39#define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE PM_QOS_LATENCY_ANY 33#define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE PM_QOS_LATENCY_ANY
40#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT PM_QOS_LATENCY_ANY 34#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT PM_QOS_LATENCY_ANY
41#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS PM_QOS_LATENCY_ANY_NS 35#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS PM_QOS_LATENCY_ANY_NS
diff --git a/include/linux/pm_wakeup.h b/include/linux/pm_wakeup.h
index 91027602d137..661efa029c96 100644
--- a/include/linux/pm_wakeup.h
+++ b/include/linux/pm_wakeup.h
@@ -21,6 +21,7 @@ struct wake_irq;
21 * struct wakeup_source - Representation of wakeup sources 21 * struct wakeup_source - Representation of wakeup sources
22 * 22 *
23 * @name: Name of the wakeup source 23 * @name: Name of the wakeup source
24 * @id: Wakeup source id
24 * @entry: Wakeup source list entry 25 * @entry: Wakeup source list entry
25 * @lock: Wakeup source lock 26 * @lock: Wakeup source lock
26 * @wakeirq: Optional device specific wakeirq 27 * @wakeirq: Optional device specific wakeirq
@@ -35,11 +36,13 @@ struct wake_irq;
35 * @relax_count: Number of times the wakeup source was deactivated. 36 * @relax_count: Number of times the wakeup source was deactivated.
36 * @expire_count: Number of times the wakeup source's timeout has expired. 37 * @expire_count: Number of times the wakeup source's timeout has expired.
37 * @wakeup_count: Number of times the wakeup source might abort suspend. 38 * @wakeup_count: Number of times the wakeup source might abort suspend.
39 * @dev: Struct device for sysfs statistics about the wakeup source.
38 * @active: Status of the wakeup source. 40 * @active: Status of the wakeup source.
39 * @autosleep_enabled: Autosleep is active, so update @prevent_sleep_time. 41 * @autosleep_enabled: Autosleep is active, so update @prevent_sleep_time.
40 */ 42 */
41struct wakeup_source { 43struct wakeup_source {
42 const char *name; 44 const char *name;
45 int id;
43 struct list_head entry; 46 struct list_head entry;
44 spinlock_t lock; 47 spinlock_t lock;
45 struct wake_irq *wakeirq; 48 struct wake_irq *wakeirq;
@@ -55,6 +58,7 @@ struct wakeup_source {
55 unsigned long relax_count; 58 unsigned long relax_count;
56 unsigned long expire_count; 59 unsigned long expire_count;
57 unsigned long wakeup_count; 60 unsigned long wakeup_count;
61 struct device *dev;
58 bool active:1; 62 bool active:1;
59 bool autosleep_enabled:1; 63 bool autosleep_enabled:1;
60}; 64};
@@ -81,12 +85,12 @@ static inline void device_set_wakeup_path(struct device *dev)
81} 85}
82 86
83/* drivers/base/power/wakeup.c */ 87/* drivers/base/power/wakeup.c */
84extern void wakeup_source_prepare(struct wakeup_source *ws, const char *name);
85extern struct wakeup_source *wakeup_source_create(const char *name); 88extern struct wakeup_source *wakeup_source_create(const char *name);
86extern void wakeup_source_destroy(struct wakeup_source *ws); 89extern void wakeup_source_destroy(struct wakeup_source *ws);
87extern void wakeup_source_add(struct wakeup_source *ws); 90extern void wakeup_source_add(struct wakeup_source *ws);
88extern void wakeup_source_remove(struct wakeup_source *ws); 91extern void wakeup_source_remove(struct wakeup_source *ws);
89extern struct wakeup_source *wakeup_source_register(const char *name); 92extern struct wakeup_source *wakeup_source_register(struct device *dev,
93 const char *name);
90extern void wakeup_source_unregister(struct wakeup_source *ws); 94extern void wakeup_source_unregister(struct wakeup_source *ws);
91extern int device_wakeup_enable(struct device *dev); 95extern int device_wakeup_enable(struct device *dev);
92extern int device_wakeup_disable(struct device *dev); 96extern int device_wakeup_disable(struct device *dev);
@@ -112,9 +116,6 @@ static inline bool device_can_wakeup(struct device *dev)
112 return dev->power.can_wakeup; 116 return dev->power.can_wakeup;
113} 117}
114 118
115static inline void wakeup_source_prepare(struct wakeup_source *ws,
116 const char *name) {}
117
118static inline struct wakeup_source *wakeup_source_create(const char *name) 119static inline struct wakeup_source *wakeup_source_create(const char *name)
119{ 120{
120 return NULL; 121 return NULL;
@@ -126,7 +127,8 @@ static inline void wakeup_source_add(struct wakeup_source *ws) {}
126 127
127static inline void wakeup_source_remove(struct wakeup_source *ws) {} 128static inline void wakeup_source_remove(struct wakeup_source *ws) {}
128 129
129static inline struct wakeup_source *wakeup_source_register(const char *name) 130static inline struct wakeup_source *wakeup_source_register(struct device *dev,
131 const char *name)
130{ 132{
131 return NULL; 133 return NULL;
132} 134}
@@ -181,13 +183,6 @@ static inline void pm_wakeup_dev_event(struct device *dev, unsigned int msec,
181 183
182#endif /* !CONFIG_PM_SLEEP */ 184#endif /* !CONFIG_PM_SLEEP */
183 185
184static inline void wakeup_source_init(struct wakeup_source *ws,
185 const char *name)
186{
187 wakeup_source_prepare(ws, name);
188 wakeup_source_add(ws);
189}
190
191static inline void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec) 186static inline void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec)
192{ 187{
193 return pm_wakeup_ws_event(ws, msec, false); 188 return pm_wakeup_ws_event(ws, msec, false);
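
wakeup_source_register() now takes the device the wakeup source belongs to, which is what lets the new sysfs code expose per-source statistics as device objects; passing NULL keeps the old standalone behaviour, as the eventpoll and autosleep call sites in this series show. A minimal sketch of a driver using the new signature (the source name is illustrative):

#include <linux/pm_wakeup.h>

static struct wakeup_source *example_register_ws(struct device *dev)
{
	struct wakeup_source *ws;

	/* Associate the source with @dev so its statistics appear under
	 * that device in sysfs; NULL would leave it unattached. */
	ws = wakeup_source_register(dev, "example-events");
	if (!ws)
		return NULL;

	__pm_stay_awake(ws);	/* illustrative use */
	return ws;
}
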
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 9c0ad1a3a727..6fc8843f1c9e 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -190,8 +190,9 @@ struct platform_suspend_ops {
190struct platform_s2idle_ops { 190struct platform_s2idle_ops {
191 int (*begin)(void); 191 int (*begin)(void);
192 int (*prepare)(void); 192 int (*prepare)(void);
193 int (*prepare_late)(void);
193 void (*wake)(void); 194 void (*wake)(void);
194 void (*sync)(void); 195 void (*restore_early)(void);
195 void (*restore)(void); 196 void (*restore)(void);
196 void (*end)(void); 197 void (*end)(void);
197}; 198};
@@ -336,6 +337,7 @@ static inline void pm_set_suspend_via_firmware(void) {}
336static inline void pm_set_resume_via_firmware(void) {} 337static inline void pm_set_resume_via_firmware(void) {}
337static inline bool pm_suspend_via_firmware(void) { return false; } 338static inline bool pm_suspend_via_firmware(void) { return false; }
338static inline bool pm_resume_via_firmware(void) { return false; } 339static inline bool pm_resume_via_firmware(void) { return false; }
340static inline bool pm_suspend_no_platform(void) { return false; }
339static inline bool pm_suspend_default_s2idle(void) { return false; } 341static inline bool pm_suspend_default_s2idle(void) { return false; }
340 342
341static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {} 343static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
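
With the suspend-to-idle rework, the platform hooks mirror the ordinary suspend callbacks more closely: ->prepare_late() runs after the "noirq" suspend of devices and ->restore_early() undoes it on wakeup, replacing the old ->sync() step. A hedged sketch of an ops table using the new callbacks (the function names are placeholders, not the ACPI implementation from this series):

#include <linux/suspend.h>

static int example_s2idle_prepare_late(void)
{
	/* e.g. arm platform wakeup sources just before entering idle */
	return 0;
}

static void example_s2idle_restore_early(void)
{
	/* undo whatever prepare_late() set up */
}

static const struct platform_s2idle_ops example_s2idle_ops = {
	.prepare_late  = example_s2idle_prepare_late,
	.restore_early = example_s2idle_restore_early,
};
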
diff --git a/include/trace/events/power.h b/include/trace/events/power.h
index f7aece721aed..7457e238e1b7 100644
--- a/include/trace/events/power.h
+++ b/include/trace/events/power.h
@@ -379,9 +379,7 @@ DECLARE_EVENT_CLASS(pm_qos_request,
379 379
380 TP_printk("pm_qos_class=%s value=%d", 380 TP_printk("pm_qos_class=%s value=%d",
381 __print_symbolic(__entry->pm_qos_class, 381 __print_symbolic(__entry->pm_qos_class,
382 { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }, 382 { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
383 { PM_QOS_NETWORK_LATENCY, "NETWORK_LATENCY" },
384 { PM_QOS_NETWORK_THROUGHPUT, "NETWORK_THROUGHPUT" }),
385 __entry->value) 383 __entry->value)
386); 384);
387 385
@@ -426,9 +424,7 @@ TRACE_EVENT(pm_qos_update_request_timeout,
426 424
427 TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld", 425 TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
428 __print_symbolic(__entry->pm_qos_class, 426 __print_symbolic(__entry->pm_qos_class,
429 { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }, 427 { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
430 { PM_QOS_NETWORK_LATENCY, "NETWORK_LATENCY" },
431 { PM_QOS_NETWORK_THROUGHPUT, "NETWORK_THROUGHPUT" }),
432 __entry->value, __entry->timeout_us) 428 __entry->value, __entry->timeout_us)
433); 429);
434 430
diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
index d6961d3c6f9e..8f557fa1f4fe 100644
--- a/kernel/irq/pm.c
+++ b/kernel/irq/pm.c
@@ -177,6 +177,26 @@ static void resume_irqs(bool want_early)
177} 177}
178 178
179/** 179/**
180 * rearm_wake_irq - rearm a wakeup interrupt line after signaling wakeup
181 * @irq: Interrupt to rearm
182 */
183void rearm_wake_irq(unsigned int irq)
184{
185 unsigned long flags;
186 struct irq_desc *desc = irq_get_desc_buslock(irq, &flags, IRQ_GET_DESC_CHECK_GLOBAL);
187
188 if (!desc || !(desc->istate & IRQS_SUSPENDED) ||
189 !irqd_is_wakeup_set(&desc->irq_data))
190 return;
191
192 desc->istate &= ~IRQS_SUSPENDED;
193 irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
194 __enable_irq(desc);
195
196 irq_put_desc_busunlock(desc, flags);
197}
198
199/**
180 * irq_pm_syscore_ops - enable interrupt lines early 200 * irq_pm_syscore_ops - enable interrupt lines early
181 * 201 *
182 * Enable all interrupt lines with %IRQF_EARLY_RESUME set. 202 * Enable all interrupt lines with %IRQF_EARLY_RESUME set.
diff --git a/kernel/power/autosleep.c b/kernel/power/autosleep.c
index 41e83a779e19..9af5a50d3489 100644
--- a/kernel/power/autosleep.c
+++ b/kernel/power/autosleep.c
@@ -116,7 +116,7 @@ int pm_autosleep_set_state(suspend_state_t state)
116 116
117int __init pm_autosleep_init(void) 117int __init pm_autosleep_init(void)
118{ 118{
119 autosleep_ws = wakeup_source_register("autosleep"); 119 autosleep_ws = wakeup_source_register(NULL, "autosleep");
120 if (!autosleep_ws) 120 if (!autosleep_ws)
121 return -ENOMEM; 121 return -ENOMEM;
122 122
diff --git a/kernel/power/main.c b/kernel/power/main.c
index bdbd605c4215..e8710d179b35 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -254,7 +254,6 @@ static ssize_t pm_test_store(struct kobject *kobj, struct kobj_attribute *attr,
254power_attr(pm_test); 254power_attr(pm_test);
255#endif /* CONFIG_PM_SLEEP_DEBUG */ 255#endif /* CONFIG_PM_SLEEP_DEBUG */
256 256
257#ifdef CONFIG_DEBUG_FS
258static char *suspend_step_name(enum suspend_stat_step step) 257static char *suspend_step_name(enum suspend_stat_step step)
259{ 258{
260 switch (step) { 259 switch (step) {
@@ -275,6 +274,92 @@ static char *suspend_step_name(enum suspend_stat_step step)
275 } 274 }
276} 275}
277 276
277#define suspend_attr(_name) \
278static ssize_t _name##_show(struct kobject *kobj, \
279 struct kobj_attribute *attr, char *buf) \
280{ \
281 return sprintf(buf, "%d\n", suspend_stats._name); \
282} \
283static struct kobj_attribute _name = __ATTR_RO(_name)
284
285suspend_attr(success);
286suspend_attr(fail);
287suspend_attr(failed_freeze);
288suspend_attr(failed_prepare);
289suspend_attr(failed_suspend);
290suspend_attr(failed_suspend_late);
291suspend_attr(failed_suspend_noirq);
292suspend_attr(failed_resume);
293suspend_attr(failed_resume_early);
294suspend_attr(failed_resume_noirq);
295
296static ssize_t last_failed_dev_show(struct kobject *kobj,
297 struct kobj_attribute *attr, char *buf)
298{
299 int index;
300 char *last_failed_dev = NULL;
301
302 index = suspend_stats.last_failed_dev + REC_FAILED_NUM - 1;
303 index %= REC_FAILED_NUM;
304 last_failed_dev = suspend_stats.failed_devs[index];
305
306 return sprintf(buf, "%s\n", last_failed_dev);
307}
308static struct kobj_attribute last_failed_dev = __ATTR_RO(last_failed_dev);
309
310static ssize_t last_failed_errno_show(struct kobject *kobj,
311 struct kobj_attribute *attr, char *buf)
312{
313 int index;
314 int last_failed_errno;
315
316 index = suspend_stats.last_failed_errno + REC_FAILED_NUM - 1;
317 index %= REC_FAILED_NUM;
318 last_failed_errno = suspend_stats.errno[index];
319
320 return sprintf(buf, "%d\n", last_failed_errno);
321}
322static struct kobj_attribute last_failed_errno = __ATTR_RO(last_failed_errno);
323
324static ssize_t last_failed_step_show(struct kobject *kobj,
325 struct kobj_attribute *attr, char *buf)
326{
327 int index;
328 enum suspend_stat_step step;
329 char *last_failed_step = NULL;
330
331 index = suspend_stats.last_failed_step + REC_FAILED_NUM - 1;
332 index %= REC_FAILED_NUM;
333 step = suspend_stats.failed_steps[index];
334 last_failed_step = suspend_step_name(step);
335
336 return sprintf(buf, "%s\n", last_failed_step);
337}
338static struct kobj_attribute last_failed_step = __ATTR_RO(last_failed_step);
339
340static struct attribute *suspend_attrs[] = {
341 &success.attr,
342 &fail.attr,
343 &failed_freeze.attr,
344 &failed_prepare.attr,
345 &failed_suspend.attr,
346 &failed_suspend_late.attr,
347 &failed_suspend_noirq.attr,
348 &failed_resume.attr,
349 &failed_resume_early.attr,
350 &failed_resume_noirq.attr,
351 &last_failed_dev.attr,
352 &last_failed_errno.attr,
353 &last_failed_step.attr,
354 NULL,
355};
356
357static struct attribute_group suspend_attr_group = {
358 .name = "suspend_stats",
359 .attrs = suspend_attrs,
360};
361
362#ifdef CONFIG_DEBUG_FS
278static int suspend_stats_show(struct seq_file *s, void *unused) 363static int suspend_stats_show(struct seq_file *s, void *unused)
279{ 364{
280 int i, index, last_dev, last_errno, last_step; 365 int i, index, last_dev, last_errno, last_step;
@@ -495,7 +580,7 @@ static suspend_state_t decode_state(const char *buf, size_t n)
495 len = p ? p - buf : n; 580 len = p ? p - buf : n;
496 581
497 /* Check hibernation first. */ 582 /* Check hibernation first. */
498 if (len == 4 && !strncmp(buf, "disk", len)) 583 if (len == 4 && str_has_prefix(buf, "disk"))
499 return PM_SUSPEND_MAX; 584 return PM_SUSPEND_MAX;
500 585
501#ifdef CONFIG_SUSPEND 586#ifdef CONFIG_SUSPEND
@@ -794,6 +879,14 @@ static const struct attribute_group attr_group = {
794 .attrs = g, 879 .attrs = g,
795}; 880};
796 881
882static const struct attribute_group *attr_groups[] = {
883 &attr_group,
884#ifdef CONFIG_PM_SLEEP
885 &suspend_attr_group,
886#endif
887 NULL,
888};
889
797struct workqueue_struct *pm_wq; 890struct workqueue_struct *pm_wq;
798EXPORT_SYMBOL_GPL(pm_wq); 891EXPORT_SYMBOL_GPL(pm_wq);
799 892
@@ -815,7 +908,7 @@ static int __init pm_init(void)
815 power_kobj = kobject_create_and_add("power", NULL); 908 power_kobj = kobject_create_and_add("power", NULL);
816 if (!power_kobj) 909 if (!power_kobj)
817 return -ENOMEM; 910 return -ENOMEM;
818 error = sysfs_create_group(power_kobj, &attr_group); 911 error = sysfs_create_groups(power_kobj, attr_groups);
819 if (error) 912 if (error)
820 return error; 913 return error;
821 pm_print_times_init(); 914 pm_print_times_init();
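
The new attribute group makes the suspend counters readable from user space under /sys/power/suspend_stats/. A small user-space sketch reading one of them (error handling trimmed; the attribute names follow the suspend_attr() definitions above):

#include <stdio.h>

int main(void)
{
	char buf[16];
	FILE *f = fopen("/sys/power/suspend_stats/success", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("successful suspends: %s", buf);
	fclose(f);
	return 0;
}
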
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 33e3febaba53..9568a2fe7c11 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -78,57 +78,9 @@ static struct pm_qos_object cpu_dma_pm_qos = {
78 .name = "cpu_dma_latency", 78 .name = "cpu_dma_latency",
79}; 79};
80 80
81static BLOCKING_NOTIFIER_HEAD(network_lat_notifier);
82static struct pm_qos_constraints network_lat_constraints = {
83 .list = PLIST_HEAD_INIT(network_lat_constraints.list),
84 .target_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
85 .default_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
86 .no_constraint_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
87 .type = PM_QOS_MIN,
88 .notifiers = &network_lat_notifier,
89};
90static struct pm_qos_object network_lat_pm_qos = {
91 .constraints = &network_lat_constraints,
92 .name = "network_latency",
93};
94
95
96static BLOCKING_NOTIFIER_HEAD(network_throughput_notifier);
97static struct pm_qos_constraints network_tput_constraints = {
98 .list = PLIST_HEAD_INIT(network_tput_constraints.list),
99 .target_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
100 .default_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
101 .no_constraint_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
102 .type = PM_QOS_MAX,
103 .notifiers = &network_throughput_notifier,
104};
105static struct pm_qos_object network_throughput_pm_qos = {
106 .constraints = &network_tput_constraints,
107 .name = "network_throughput",
108};
109
110
111static BLOCKING_NOTIFIER_HEAD(memory_bandwidth_notifier);
112static struct pm_qos_constraints memory_bw_constraints = {
113 .list = PLIST_HEAD_INIT(memory_bw_constraints.list),
114 .target_value = PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE,
115 .default_value = PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE,
116 .no_constraint_value = PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE,
117 .type = PM_QOS_SUM,
118 .notifiers = &memory_bandwidth_notifier,
119};
120static struct pm_qos_object memory_bandwidth_pm_qos = {
121 .constraints = &memory_bw_constraints,
122 .name = "memory_bandwidth",
123};
124
125
126static struct pm_qos_object *pm_qos_array[] = { 81static struct pm_qos_object *pm_qos_array[] = {
127 &null_pm_qos, 82 &null_pm_qos,
128 &cpu_dma_pm_qos, 83 &cpu_dma_pm_qos,
129 &network_lat_pm_qos,
130 &network_throughput_pm_qos,
131 &memory_bandwidth_pm_qos,
132}; 84};
133 85
134static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf, 86static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index c874a7026e24..f3b7239f1892 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -121,43 +121,25 @@ static void s2idle_loop(void)
121{ 121{
122 pm_pr_dbg("suspend-to-idle\n"); 122 pm_pr_dbg("suspend-to-idle\n");
123 123
124 /*
125 * Suspend-to-idle equals:
126 * frozen processes + suspended devices + idle processors.
127 * Thus s2idle_enter() should be called right after all devices have
128 * been suspended.
129 *
130 * Wakeups during the noirq suspend of devices may be spurious, so try
131 * to avoid them upfront.
132 */
124 for (;;) { 133 for (;;) {
125 int error; 134 if (s2idle_ops && s2idle_ops->wake)
126
127 dpm_noirq_begin();
128
129 /*
130 * Suspend-to-idle equals
131 * frozen processes + suspended devices + idle processors.
132 * Thus s2idle_enter() should be called right after
133 * all devices have been suspended.
134 *
135 * Wakeups during the noirq suspend of devices may be spurious,
136 * so prevent them from terminating the loop right away.
137 */
138 error = dpm_noirq_suspend_devices(PMSG_SUSPEND);
139 if (!error)
140 s2idle_enter();
141 else if (error == -EBUSY && pm_wakeup_pending())
142 error = 0;
143
144 if (!error && s2idle_ops && s2idle_ops->wake)
145 s2idle_ops->wake(); 135 s2idle_ops->wake();
146 136
147 dpm_noirq_resume_devices(PMSG_RESUME);
148
149 dpm_noirq_end();
150
151 if (error)
152 break;
153
154 if (s2idle_ops && s2idle_ops->sync)
155 s2idle_ops->sync();
156
157 if (pm_wakeup_pending()) 137 if (pm_wakeup_pending())
158 break; 138 break;
159 139
160 pm_wakeup_clear(false); 140 pm_wakeup_clear(false);
141
142 s2idle_enter();
161 } 143 }
162 144
163 pm_pr_dbg("resume from suspend-to-idle\n"); 145 pm_pr_dbg("resume from suspend-to-idle\n");
@@ -271,14 +253,21 @@ static int platform_suspend_prepare_late(suspend_state_t state)
271 253
272static int platform_suspend_prepare_noirq(suspend_state_t state) 254static int platform_suspend_prepare_noirq(suspend_state_t state)
273{ 255{
274 return state != PM_SUSPEND_TO_IDLE && suspend_ops->prepare_late ? 256 if (state == PM_SUSPEND_TO_IDLE)
275 suspend_ops->prepare_late() : 0; 257 return s2idle_ops && s2idle_ops->prepare_late ?
258 s2idle_ops->prepare_late() : 0;
259
260 return suspend_ops->prepare_late ? suspend_ops->prepare_late() : 0;
276} 261}
277 262
278static void platform_resume_noirq(suspend_state_t state) 263static void platform_resume_noirq(suspend_state_t state)
279{ 264{
280 if (state != PM_SUSPEND_TO_IDLE && suspend_ops->wake) 265 if (state == PM_SUSPEND_TO_IDLE) {
266 if (s2idle_ops && s2idle_ops->restore_early)
267 s2idle_ops->restore_early();
268 } else if (suspend_ops->wake) {
281 suspend_ops->wake(); 269 suspend_ops->wake();
270 }
282} 271}
283 272
284static void platform_resume_early(suspend_state_t state) 273static void platform_resume_early(suspend_state_t state)
@@ -415,11 +404,6 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
415 if (error) 404 if (error)
416 goto Devices_early_resume; 405 goto Devices_early_resume;
417 406
418 if (state == PM_SUSPEND_TO_IDLE && pm_test_level != TEST_PLATFORM) {
419 s2idle_loop();
420 goto Platform_early_resume;
421 }
422
423 error = dpm_suspend_noirq(PMSG_SUSPEND); 407 error = dpm_suspend_noirq(PMSG_SUSPEND);
424 if (error) { 408 if (error) {
425 pr_err("noirq suspend of devices failed\n"); 409 pr_err("noirq suspend of devices failed\n");
@@ -432,6 +416,11 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
432 if (suspend_test(TEST_PLATFORM)) 416 if (suspend_test(TEST_PLATFORM))
433 goto Platform_wake; 417 goto Platform_wake;
434 418
419 if (state == PM_SUSPEND_TO_IDLE) {
420 s2idle_loop();
421 goto Platform_wake;
422 }
423
435 error = suspend_disable_secondary_cpus(); 424 error = suspend_disable_secondary_cpus();
436 if (error || suspend_test(TEST_CPUS)) 425 if (error || suspend_test(TEST_CPUS))
437 goto Enable_cpus; 426 goto Enable_cpus;
diff --git a/kernel/power/wakelock.c b/kernel/power/wakelock.c
index 4210152e56f0..105df4dfc783 100644
--- a/kernel/power/wakelock.c
+++ b/kernel/power/wakelock.c
@@ -27,7 +27,7 @@ static DEFINE_MUTEX(wakelocks_lock);
27struct wakelock { 27struct wakelock {
28 char *name; 28 char *name;
29 struct rb_node node; 29 struct rb_node node;
30 struct wakeup_source ws; 30 struct wakeup_source *ws;
31#ifdef CONFIG_PM_WAKELOCKS_GC 31#ifdef CONFIG_PM_WAKELOCKS_GC
32 struct list_head lru; 32 struct list_head lru;
33#endif 33#endif
@@ -46,7 +46,7 @@ ssize_t pm_show_wakelocks(char *buf, bool show_active)
46 46
47 for (node = rb_first(&wakelocks_tree); node; node = rb_next(node)) { 47 for (node = rb_first(&wakelocks_tree); node; node = rb_next(node)) {
48 wl = rb_entry(node, struct wakelock, node); 48 wl = rb_entry(node, struct wakelock, node);
49 if (wl->ws.active == show_active) 49 if (wl->ws->active == show_active)
50 str += scnprintf(str, end - str, "%s ", wl->name); 50 str += scnprintf(str, end - str, "%s ", wl->name);
51 } 51 }
52 if (str > buf) 52 if (str > buf)
@@ -112,16 +112,16 @@ static void __wakelocks_gc(struct work_struct *work)
112 u64 idle_time_ns; 112 u64 idle_time_ns;
113 bool active; 113 bool active;
114 114
115 spin_lock_irq(&wl->ws.lock); 115 spin_lock_irq(&wl->ws->lock);
116 idle_time_ns = ktime_to_ns(ktime_sub(now, wl->ws.last_time)); 116 idle_time_ns = ktime_to_ns(ktime_sub(now, wl->ws->last_time));
117 active = wl->ws.active; 117 active = wl->ws->active;
118 spin_unlock_irq(&wl->ws.lock); 118 spin_unlock_irq(&wl->ws->lock);
119 119
120 if (idle_time_ns < ((u64)WL_GC_TIME_SEC * NSEC_PER_SEC)) 120 if (idle_time_ns < ((u64)WL_GC_TIME_SEC * NSEC_PER_SEC))
121 break; 121 break;
122 122
123 if (!active) { 123 if (!active) {
124 wakeup_source_remove(&wl->ws); 124 wakeup_source_unregister(wl->ws);
125 rb_erase(&wl->node, &wakelocks_tree); 125 rb_erase(&wl->node, &wakelocks_tree);
126 list_del(&wl->lru); 126 list_del(&wl->lru);
127 kfree(wl->name); 127 kfree(wl->name);
@@ -187,9 +187,15 @@ static struct wakelock *wakelock_lookup_add(const char *name, size_t len,
187 kfree(wl); 187 kfree(wl);
188 return ERR_PTR(-ENOMEM); 188 return ERR_PTR(-ENOMEM);
189 } 189 }
190 wl->ws.name = wl->name; 190
191 wl->ws.last_time = ktime_get(); 191 wl->ws = wakeup_source_register(NULL, wl->name);
192 wakeup_source_add(&wl->ws); 192 if (!wl->ws) {
193 kfree(wl->name);
194 kfree(wl);
195 return ERR_PTR(-ENOMEM);
196 }
197 wl->ws->last_time = ktime_get();
198
193 rb_link_node(&wl->node, parent, node); 199 rb_link_node(&wl->node, parent, node);
194 rb_insert_color(&wl->node, &wakelocks_tree); 200 rb_insert_color(&wl->node, &wakelocks_tree);
195 wakelocks_lru_add(wl); 201 wakelocks_lru_add(wl);
@@ -233,9 +239,9 @@ int pm_wake_lock(const char *buf)
233 u64 timeout_ms = timeout_ns + NSEC_PER_MSEC - 1; 239 u64 timeout_ms = timeout_ns + NSEC_PER_MSEC - 1;
234 240
235 do_div(timeout_ms, NSEC_PER_MSEC); 241 do_div(timeout_ms, NSEC_PER_MSEC);
236 __pm_wakeup_event(&wl->ws, timeout_ms); 242 __pm_wakeup_event(wl->ws, timeout_ms);
237 } else { 243 } else {
238 __pm_stay_awake(&wl->ws); 244 __pm_stay_awake(wl->ws);
239 } 245 }
240 246
241 wakelocks_lru_most_recent(wl); 247 wakelocks_lru_most_recent(wl);
@@ -271,7 +277,7 @@ int pm_wake_unlock(const char *buf)
271 ret = PTR_ERR(wl); 277 ret = PTR_ERR(wl);
272 goto out; 278 goto out;
273 } 279 }
274 __pm_relax(&wl->ws); 280 __pm_relax(wl->ws);
275 281
276 wakelocks_lru_most_recent(wl); 282 wakelocks_lru_most_recent(wl);
277 wakelocks_gc(); 283 wakelocks_gc();
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index fdce9cfaca05..86800b4d5453 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -117,6 +117,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
117 unsigned int next_freq) 117 unsigned int next_freq)
118{ 118{
119 struct cpufreq_policy *policy = sg_policy->policy; 119 struct cpufreq_policy *policy = sg_policy->policy;
120 int cpu;
120 121
121 if (!sugov_update_next_freq(sg_policy, time, next_freq)) 122 if (!sugov_update_next_freq(sg_policy, time, next_freq))
122 return; 123 return;
@@ -126,7 +127,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
126 return; 127 return;
127 128
128 policy->cur = next_freq; 129 policy->cur = next_freq;
129 trace_cpu_frequency(next_freq, smp_processor_id()); 130
131 if (trace_cpu_frequency_enabled()) {
132 for_each_cpu(cpu, policy->cpus)
133 trace_cpu_frequency(next_freq, cpu);
134 }
130} 135}
131 136
132static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time, 137static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 8bfeb6395bdd..c892c6280c9f 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -312,7 +312,7 @@ static enum hrtimer_restart idle_inject_timer_fn(struct hrtimer *timer)
312 return HRTIMER_NORESTART; 312 return HRTIMER_NORESTART;
313} 313}
314 314
315void play_idle(unsigned long duration_ms) 315void play_idle(unsigned long duration_us)
316{ 316{
317 struct idle_timer it; 317 struct idle_timer it;
318 318
@@ -324,7 +324,7 @@ void play_idle(unsigned long duration_ms)
324 WARN_ON_ONCE(current->nr_cpus_allowed != 1); 324 WARN_ON_ONCE(current->nr_cpus_allowed != 1);
325 WARN_ON_ONCE(!(current->flags & PF_KTHREAD)); 325 WARN_ON_ONCE(!(current->flags & PF_KTHREAD));
326 WARN_ON_ONCE(!(current->flags & PF_NO_SETAFFINITY)); 326 WARN_ON_ONCE(!(current->flags & PF_NO_SETAFFINITY));
327 WARN_ON_ONCE(!duration_ms); 327 WARN_ON_ONCE(!duration_us);
328 328
329 rcu_sleep_check(); 329 rcu_sleep_check();
330 preempt_disable(); 330 preempt_disable();
@@ -334,7 +334,8 @@ void play_idle(unsigned long duration_ms)
334 it.done = 0; 334 it.done = 0;
335 hrtimer_init_on_stack(&it.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 335 hrtimer_init_on_stack(&it.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
336 it.timer.function = idle_inject_timer_fn; 336 it.timer.function = idle_inject_timer_fn;
337 hrtimer_start(&it.timer, ms_to_ktime(duration_ms), HRTIMER_MODE_REL_PINNED); 337 hrtimer_start(&it.timer, ns_to_ktime(duration_us * NSEC_PER_USEC),
338 HRTIMER_MODE_REL_PINNED);
338 339
339 while (!READ_ONCE(it.done)) 340 while (!READ_ONCE(it.done))
340 do_idle(); 341 do_idle();
diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
index 271ce6c12907..451f9d05ccfe 100644
--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -97,7 +97,7 @@ static int alarmtimer_rtc_add_device(struct device *dev,
97 if (!device_may_wakeup(rtc->dev.parent)) 97 if (!device_may_wakeup(rtc->dev.parent))
98 return -1; 98 return -1;
99 99
100 __ws = wakeup_source_register("alarmtimer"); 100 __ws = wakeup_source_register(dev, "alarmtimer");
101 101
102 spin_lock_irqsave(&rtcdev_lock, flags); 102 spin_lock_irqsave(&rtcdev_lock, flags);
103 if (!rtcdev) { 103 if (!rtcdev) {
diff --git a/tools/power/cpupower/Makefile b/tools/power/cpupower/Makefile
index 9063fca480b3..c8622497ef23 100644
--- a/tools/power/cpupower/Makefile
+++ b/tools/power/cpupower/Makefile
@@ -18,7 +18,6 @@ OUTDIR := $(shell cd $(OUTPUT) && pwd)
18$(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist)) 18$(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist))
19endif 19endif
20 20
21include ../../scripts/Makefile.arch
22 21
23# --- CONFIGURATION BEGIN --- 22# --- CONFIGURATION BEGIN ---
24 23
@@ -69,11 +68,6 @@ bindir ?= /usr/bin
69sbindir ?= /usr/sbin 68sbindir ?= /usr/sbin
70mandir ?= /usr/man 69mandir ?= /usr/man
71includedir ?= /usr/include 70includedir ?= /usr/include
72ifeq ($(IS_64_BIT), 1)
73libdir ?= /usr/lib64
74else
75libdir ?= /usr/lib
76endif
77localedir ?= /usr/share/locale 71localedir ?= /usr/share/locale
78docdir ?= /usr/share/doc/packages/cpupower 72docdir ?= /usr/share/doc/packages/cpupower
79confdir ?= /etc/ 73confdir ?= /etc/
@@ -100,6 +94,14 @@ RANLIB = $(CROSS)ranlib
100HOSTCC = gcc 94HOSTCC = gcc
101MKDIR = mkdir 95MKDIR = mkdir
102 96
97# 64bit library detection
98include ../../scripts/Makefile.arch
99
100ifeq ($(IS_64_BIT), 1)
101libdir ?= /usr/lib64
102else
103libdir ?= /usr/lib
104endif
103 105
104# Now we set up the build system 106# Now we set up the build system
105# 107#
diff --git a/tools/power/cpupower/bench/cpufreq-bench_plot.sh b/tools/power/cpupower/bench/cpufreq-bench_plot.sh
index 9061b4f1244e..f5f8b3c8f062 100644
--- a/tools/power/cpupower/bench/cpufreq-bench_plot.sh
+++ b/tools/power/cpupower/bench/cpufreq-bench_plot.sh
@@ -88,4 +88,4 @@ done
88echo >> $dir/plot_script.gpl 88echo >> $dir/plot_script.gpl
89 89
90gnuplot $dir/plot_script.gpl 90gnuplot $dir/plot_script.gpl
91rm -r $dir \ No newline at end of file 91rm -r $dir
diff --git a/tools/power/cpupower/bench/cpufreq-bench_script.sh b/tools/power/cpupower/bench/cpufreq-bench_script.sh
index 4e9714b876d2..785a3679c704 100644
--- a/tools/power/cpupower/bench/cpufreq-bench_script.sh
+++ b/tools/power/cpupower/bench/cpufreq-bench_script.sh
@@ -85,4 +85,4 @@ function create_plots()
85} 85}
86 86
87measure 87measure
88create_plots \ No newline at end of file 88create_plots
diff --git a/tools/power/cpupower/po/de.po b/tools/power/cpupower/po/de.po
index 70887bb8ba95..9780a447bb55 100644
--- a/tools/power/cpupower/po/de.po
+++ b/tools/power/cpupower/po/de.po
@@ -8,66 +8,66 @@ msgstr ""
8"Project-Id-Version: cpufrequtils 006\n" 8"Project-Id-Version: cpufrequtils 006\n"
9"Report-Msgid-Bugs-To: \n" 9"Report-Msgid-Bugs-To: \n"
10"POT-Creation-Date: 2011-03-08 17:03+0100\n" 10"POT-Creation-Date: 2011-03-08 17:03+0100\n"
11"PO-Revision-Date: 2009-08-08 17:18+0100\n" 11"PO-Revision-Date: 2019-06-02 15:23+0200\n"
12"Last-Translator: <linux@dominikbrodowski.net>\n" 12"Last-Translator: Benjamin Weis <benjamin.weis@gmx.com>\n"
13"Language-Team: NONE\n" 13"Language-Team: NONE\n"
14"Language: \n" 14"Language: \n"
15"MIME-Version: 1.0\n" 15"MIME-Version: 1.0\n"
16"Content-Type: text/plain; charset=ISO-8859-1\n" 16"Content-Type: text/plain; charset=UTF-8\n"
17"Content-Transfer-Encoding: 8bit\n" 17"Content-Transfer-Encoding: 8bit\n"
18"Plural-Forms: nplurals=2; plural=(n != 1);\n" 18"Plural-Forms: nplurals=2; plural=(n != 1);\n"
19 19
20#: utils/idle_monitor/nhm_idle.c:36 20#: utils/idle_monitor/nhm_idle.c:36
21msgid "Processor Core C3" 21msgid "Processor Core C3"
22msgstr "" 22msgstr "Prozessorkern C3"
23 23
24#: utils/idle_monitor/nhm_idle.c:43 24#: utils/idle_monitor/nhm_idle.c:43
25msgid "Processor Core C6" 25msgid "Processor Core C6"
26msgstr "" 26msgstr "Prozessorkern C6"
27 27
28#: utils/idle_monitor/nhm_idle.c:51 28#: utils/idle_monitor/nhm_idle.c:51
29msgid "Processor Package C3" 29msgid "Processor Package C3"
30msgstr "" 30msgstr "Prozessorpaket C3"
31 31
32#: utils/idle_monitor/nhm_idle.c:58 utils/idle_monitor/amd_fam14h_idle.c:70 32#: utils/idle_monitor/nhm_idle.c:58 utils/idle_monitor/amd_fam14h_idle.c:70
33msgid "Processor Package C6" 33msgid "Processor Package C6"
34msgstr "" 34msgstr "Prozessorpaket C6"
35 35
36#: utils/idle_monitor/snb_idle.c:33 36#: utils/idle_monitor/snb_idle.c:33
37msgid "Processor Core C7" 37msgid "Processor Core C7"
38msgstr "" 38msgstr "Prozessorkern C7"
39 39
40#: utils/idle_monitor/snb_idle.c:40 40#: utils/idle_monitor/snb_idle.c:40
41msgid "Processor Package C2" 41msgid "Processor Package C2"
42msgstr "" 42msgstr "Prozessorpaket C2"
43 43
44#: utils/idle_monitor/snb_idle.c:47 44#: utils/idle_monitor/snb_idle.c:47
45msgid "Processor Package C7" 45msgid "Processor Package C7"
46msgstr "" 46msgstr "Prozessorpaket C7"
47 47
48#: utils/idle_monitor/amd_fam14h_idle.c:56 48#: utils/idle_monitor/amd_fam14h_idle.c:56
49msgid "Package in sleep state (PC1 or deeper)" 49msgid "Package in sleep state (PC1 or deeper)"
50msgstr "" 50msgstr "Paket in Schlafzustand (PC1 oder tiefer)"
51 51
52#: utils/idle_monitor/amd_fam14h_idle.c:63 52#: utils/idle_monitor/amd_fam14h_idle.c:63
53msgid "Processor Package C1" 53msgid "Processor Package C1"
54msgstr "" 54msgstr "Prozessorpaket C1"
55 55
56#: utils/idle_monitor/amd_fam14h_idle.c:77 56#: utils/idle_monitor/amd_fam14h_idle.c:77
57msgid "North Bridge P1 boolean counter (returns 0 or 1)" 57msgid "North Bridge P1 boolean counter (returns 0 or 1)"
58msgstr "" 58msgstr "North Bridge P1 boolescher Zähler (gibt 0 oder 1 zurück)"
59 59
60#: utils/idle_monitor/mperf_monitor.c:35 60#: utils/idle_monitor/mperf_monitor.c:35
61msgid "Processor Core not idle" 61msgid "Processor Core not idle"
62msgstr "" 62msgstr "Prozessorkern ist nicht im Leerlauf"
63 63
64#: utils/idle_monitor/mperf_monitor.c:42 64#: utils/idle_monitor/mperf_monitor.c:42
65msgid "Processor Core in an idle state" 65msgid "Processor Core in an idle state"
66msgstr "" 66msgstr "Prozessorkern ist in einem Ruhezustand"
67 67
68#: utils/idle_monitor/mperf_monitor.c:50 68#: utils/idle_monitor/mperf_monitor.c:50
69msgid "Average Frequency (including boost) in MHz" 69msgid "Average Frequency (including boost) in MHz"
70msgstr "" 70msgstr "Durchschnittliche Frequenz (einschließlich Boost) in MHz"
71 71
72#: utils/idle_monitor/cpupower-monitor.c:66 72#: utils/idle_monitor/cpupower-monitor.c:66
73#, c-format 73#, c-format
@@ -75,6 +75,8 @@ msgid ""
75"cpupower monitor: [-h] [ [-t] | [-l] | [-m <mon1>,[<mon2>] ] ] [-i " 75"cpupower monitor: [-h] [ [-t] | [-l] | [-m <mon1>,[<mon2>] ] ] [-i "
76"interval_sec | -c command ...]\n" 76"interval_sec | -c command ...]\n"
77msgstr "" 77msgstr ""
78"cpupower monitor: [-h] [ [-t] | [-l] | [-m <mon1>,[<mon2>] ] ] [-i "
79"interval_sec | -c Befehl ...]\n"
78 80
79#: utils/idle_monitor/cpupower-monitor.c:69 81#: utils/idle_monitor/cpupower-monitor.c:69
80#, c-format 82#, c-format
@@ -82,36 +84,40 @@ msgid ""
82"cpupower monitor: [-v] [-h] [ [-t] | [-l] | [-m <mon1>,[<mon2>] ] ] [-i " 84"cpupower monitor: [-v] [-h] [ [-t] | [-l] | [-m <mon1>,[<mon2>] ] ] [-i "
83"interval_sec | -c command ...]\n" 85"interval_sec | -c command ...]\n"
84msgstr "" 86msgstr ""
87"cpupower monitor: [-v] [-h] [ [-t] | [-l] | [-m <mon1>,[<mon2>] ] ] [-i "
88"interval_sec | -c Befehl ...]\n"
85 89
86#: utils/idle_monitor/cpupower-monitor.c:71 90#: utils/idle_monitor/cpupower-monitor.c:71
87#, c-format 91#, c-format
88msgid "\t -v: be more verbose\n" 92msgid "\t -v: be more verbose\n"
89msgstr "" 93msgstr "\t -v: ausführlicher\n"
90 94
91#: utils/idle_monitor/cpupower-monitor.c:73 95#: utils/idle_monitor/cpupower-monitor.c:73
92#, c-format 96#, c-format
93msgid "\t -h: print this help\n" 97msgid "\t -h: print this help\n"
94msgstr "" 98msgstr "\t -h: diese Hilfe ausgeben\n"
95 99
96#: utils/idle_monitor/cpupower-monitor.c:74 100#: utils/idle_monitor/cpupower-monitor.c:74
97#, c-format 101#, c-format
98msgid "\t -i: time interval to measure for in seconds (default 1)\n" 102msgid "\t -i: time interval to measure for in seconds (default 1)\n"
99msgstr "" 103msgstr "\t -i: Zeitintervall für die Messung in Sekunden (Standard 1)\n"
100 104
101#: utils/idle_monitor/cpupower-monitor.c:75 105#: utils/idle_monitor/cpupower-monitor.c:75
102#, c-format 106#, c-format
103msgid "\t -t: show CPU topology/hierarchy\n" 107msgid "\t -t: show CPU topology/hierarchy\n"
104msgstr "" 108msgstr "\t -t: CPU-Topologie/Hierarchie anzeigen\n"
105 109
106#: utils/idle_monitor/cpupower-monitor.c:76 110#: utils/idle_monitor/cpupower-monitor.c:76
107#, c-format 111#, c-format
108msgid "\t -l: list available CPU sleep monitors (for use with -m)\n" 112msgid "\t -l: list available CPU sleep monitors (for use with -m)\n"
109msgstr "" 113msgstr ""
114"\t -l: verfügbare CPU-Schlafwächter auflisten (für Verwendung mit -m)\n"
110 115
111#: utils/idle_monitor/cpupower-monitor.c:77 116#: utils/idle_monitor/cpupower-monitor.c:77
112#, c-format 117#, c-format
113msgid "\t -m: show specific CPU sleep monitors only (in same order)\n" 118msgid "\t -m: show specific CPU sleep monitors only (in same order)\n"
114msgstr "" 119msgstr ""
120"\t -m: spezifische CPU-Schlafwächter anzeigen (in gleicher Reihenfolge)\n"
115 121
116#: utils/idle_monitor/cpupower-monitor.c:79 122#: utils/idle_monitor/cpupower-monitor.c:79
117#, c-format 123#, c-format
@@ -119,71 +125,73 @@ msgid ""
119"only one of: -t, -l, -m are allowed\n" 125"only one of: -t, -l, -m are allowed\n"
120"If none of them is passed," 126"If none of them is passed,"
121msgstr "" 127msgstr ""
128"nur einer von: -t, -l, -m ist erlaubt\n"
129"Wenn keiner von ihnen übergeben wird,"
122 130
123#: utils/idle_monitor/cpupower-monitor.c:80 131#: utils/idle_monitor/cpupower-monitor.c:80
124#, c-format 132#, c-format
125msgid " all supported monitors are shown\n" 133msgid " all supported monitors are shown\n"
126msgstr "" 134msgstr " werden alle unterstützten Wächter angezeigt\n"
127 135
128#: utils/idle_monitor/cpupower-monitor.c:197 136#: utils/idle_monitor/cpupower-monitor.c:197
129#, c-format 137#, c-format
130msgid "Monitor %s, Counter %s has no count function. Implementation error\n" 138msgid "Monitor %s, Counter %s has no count function. Implementation error\n"
131msgstr "" 139msgstr "Wächter %s, Zähler %s hat keine Zählfunktion. Implementierungsfehler\n"
132 140
133#: utils/idle_monitor/cpupower-monitor.c:207 141#: utils/idle_monitor/cpupower-monitor.c:207
134#, c-format 142#, c-format
135msgid " *is offline\n" 143msgid " *is offline\n"
136msgstr "" 144msgstr " *ist offline\n"
137 145
138#: utils/idle_monitor/cpupower-monitor.c:236 146#: utils/idle_monitor/cpupower-monitor.c:236
139#, c-format 147#, c-format
140msgid "%s: max monitor name length (%d) exceeded\n" 148msgid "%s: max monitor name length (%d) exceeded\n"
141msgstr "" 149msgstr "%s: max. Wächternamenslänge (%d) überschritten\n"
142 150
143#: utils/idle_monitor/cpupower-monitor.c:250 151#: utils/idle_monitor/cpupower-monitor.c:250
144#, c-format 152#, c-format
145msgid "No matching monitor found in %s, try -l option\n" 153msgid "No matching monitor found in %s, try -l option\n"
146msgstr "" 154msgstr "Kein passender Wächter in %s gefunden, versuchen Sie die Option -l\n"
147 155
148#: utils/idle_monitor/cpupower-monitor.c:266 156#: utils/idle_monitor/cpupower-monitor.c:266
149#, c-format 157#, c-format
150msgid "Monitor \"%s\" (%d states) - Might overflow after %u s\n" 158msgid "Monitor \"%s\" (%d states) - Might overflow after %u s\n"
151msgstr "" 159msgstr "Wächter \"%s\" (%d Zustände) - Könnte nach %u s überlaufen\n"
152 160
153#: utils/idle_monitor/cpupower-monitor.c:319 161#: utils/idle_monitor/cpupower-monitor.c:319
154#, c-format 162#, c-format
155msgid "%s took %.5f seconds and exited with status %d\n" 163msgid "%s took %.5f seconds and exited with status %d\n"
156msgstr "" 164msgstr "%s hat %.5f Sekunden gedauert und hat sich mit Status %d beendet\n"
157 165
158#: utils/idle_monitor/cpupower-monitor.c:406 166#: utils/idle_monitor/cpupower-monitor.c:406
159#, c-format 167#, c-format
160msgid "Cannot read number of available processors\n" 168msgid "Cannot read number of available processors\n"
161msgstr "" 169msgstr "Anzahl der verfügbaren Prozessoren kann nicht gelesen werden\n"
162 170
163#: utils/idle_monitor/cpupower-monitor.c:417 171#: utils/idle_monitor/cpupower-monitor.c:417
164#, c-format 172#, c-format
165msgid "Available monitor %s needs root access\n" 173msgid "Available monitor %s needs root access\n"
166msgstr "" 174msgstr "Verfügbarer Wächter %s benötigt root-Zugriff\n"
167 175
168#: utils/idle_monitor/cpupower-monitor.c:428 176#: utils/idle_monitor/cpupower-monitor.c:428
169#, c-format 177#, c-format
170msgid "No HW Cstate monitors found\n" 178msgid "No HW Cstate monitors found\n"
171msgstr "" 179msgstr "Keine HW C-Zustandswächter gefunden\n"
172 180
173#: utils/cpupower.c:78 181#: utils/cpupower.c:78
174#, c-format 182#, c-format
175msgid "cpupower [ -c cpulist ] subcommand [ARGS]\n" 183msgid "cpupower [ -c cpulist ] subcommand [ARGS]\n"
176msgstr "" 184msgstr "cpupower [ -c cpulist ] Unterbefehl [ARGS]\n"
177 185
178#: utils/cpupower.c:79 186#: utils/cpupower.c:79
179#, c-format 187#, c-format
180msgid "cpupower --version\n" 188msgid "cpupower --version\n"
181msgstr "" 189msgstr "cpupower --version\n"
182 190
183#: utils/cpupower.c:80 191#: utils/cpupower.c:80
184#, c-format 192#, c-format
185msgid "Supported subcommands are:\n" 193msgid "Supported subcommands are:\n"
186msgstr "" 194msgstr "Unterstützte Unterbefehle sind:\n"
187 195
188#: utils/cpupower.c:83 196#: utils/cpupower.c:83
189#, c-format 197#, c-format
@@ -191,11 +199,15 @@ msgid ""
191"\n" 199"\n"
192"Some subcommands can make use of the -c cpulist option.\n" 200"Some subcommands can make use of the -c cpulist option.\n"
193msgstr "" 201msgstr ""
202"\n"
203"Einige Unterbefehle können die Option -c cpulist verwenden.\n"
194 204
195#: utils/cpupower.c:84 205#: utils/cpupower.c:84
196#, c-format 206#, c-format
197msgid "Look at the general cpupower manpage how to use it\n" 207msgid "Look at the general cpupower manpage how to use it\n"
198msgstr "" 208msgstr ""
209"Schauen Sie sich die allgemeine cpupower manpage an, um zu erfahren, wie man "
210"es benutzt\n"
199 211
200#: utils/cpupower.c:85 212#: utils/cpupower.c:85
201#, c-format 213#, c-format
@@ -217,30 +229,31 @@ msgstr "Bitte melden Sie Fehler an %s.\n"
217#: utils/cpupower.c:114 229#: utils/cpupower.c:114
218#, c-format 230#, c-format
219msgid "Error parsing cpu list\n" 231msgid "Error parsing cpu list\n"
220msgstr "" 232msgstr "Fehler beim Parsen der CPU-Liste\n"
221 233
222#: utils/cpupower.c:172 234#: utils/cpupower.c:172
223#, c-format 235#, c-format
224msgid "Subcommand %s needs root privileges\n" 236msgid "Subcommand %s needs root privileges\n"
225msgstr "" 237msgstr "Unterbefehl %s benötigt root-Rechte\n"
226 238
227#: utils/cpufreq-info.c:31 239#: utils/cpufreq-info.c:31
228#, c-format 240#, c-format
229msgid "Couldn't count the number of CPUs (%s: %s), assuming 1\n" 241msgid "Couldn't count the number of CPUs (%s: %s), assuming 1\n"
230msgstr "" 242msgstr ""
231"Konnte nicht die Anzahl der CPUs herausfinden (%s : %s), nehme daher 1 an.\n" 243"Anzahl der CPUs konnte nicht herausgefinden werden (%s: %s), es wird daher 1 "
244"angenommen\n"
232 245
233#: utils/cpufreq-info.c:63 246#: utils/cpufreq-info.c:63
234#, c-format 247#, c-format
235msgid "" 248msgid ""
236" minimum CPU frequency - maximum CPU frequency - governor\n" 249" minimum CPU frequency - maximum CPU frequency - governor\n"
237msgstr "" 250msgstr " minimale CPU-Frequenz - maximale CPU-Frequenz - Regler\n"
238" minimale CPU-Taktfreq. - maximale CPU-Taktfreq. - Regler \n"
239 251
240#: utils/cpufreq-info.c:151 252#: utils/cpufreq-info.c:151
241#, c-format 253#, c-format
242msgid "Error while evaluating Boost Capabilities on CPU %d -- are you root?\n" 254msgid "Error while evaluating Boost Capabilities on CPU %d -- are you root?\n"
243msgstr "" 255msgstr ""
256"Fehler beim Evaluieren der Boost-Fähigkeiten bei CPU %d -- sind Sie root?\n"
244 257
245#. P state changes via MSR are identified via cpuid 80000007 258#. P state changes via MSR are identified via cpuid 80000007
246#. on Intel and AMD, but we assume boost capable machines can do that 259#. on Intel and AMD, but we assume boost capable machines can do that
@@ -250,50 +263,50 @@ msgstr ""
250#: utils/cpufreq-info.c:161 263#: utils/cpufreq-info.c:161
251#, c-format 264#, c-format
252msgid " boost state support: \n" 265msgid " boost state support: \n"
253msgstr "" 266msgstr " Boost-Zustand-Unterstützung: \n"
254 267
255#: utils/cpufreq-info.c:163 268#: utils/cpufreq-info.c:163
256#, c-format 269#, c-format
257msgid " Supported: %s\n" 270msgid " Supported: %s\n"
258msgstr "" 271msgstr " Unterstützt: %s\n"
259 272
260#: utils/cpufreq-info.c:163 utils/cpufreq-info.c:164 273#: utils/cpufreq-info.c:163 utils/cpufreq-info.c:164
261msgid "yes" 274msgid "yes"
262msgstr "" 275msgstr "ja"
263 276
264#: utils/cpufreq-info.c:163 utils/cpufreq-info.c:164 277#: utils/cpufreq-info.c:163 utils/cpufreq-info.c:164
265msgid "no" 278msgid "no"
266msgstr "" 279msgstr "nein"
267 280
268#: utils/cpufreq-info.c:164 281#: utils/cpufreq-info.c:164
269#, fuzzy, c-format 282#, c-format
270msgid " Active: %s\n" 283msgid " Active: %s\n"
271msgstr " Treiber: %s\n" 284msgstr " Aktiv: %s\n"
272 285
273#: utils/cpufreq-info.c:177 286#: utils/cpufreq-info.c:177
274#, c-format 287#, c-format
275msgid " Boost States: %d\n" 288msgid " Boost States: %d\n"
276msgstr "" 289msgstr " Boost-Zustände: %d\n"
277 290
278#: utils/cpufreq-info.c:178 291#: utils/cpufreq-info.c:178
279#, c-format 292#, c-format
280msgid " Total States: %d\n" 293msgid " Total States: %d\n"
281msgstr "" 294msgstr " Gesamtzustände: %d\n"
282 295
283#: utils/cpufreq-info.c:181 296#: utils/cpufreq-info.c:181
284#, c-format 297#, c-format
285msgid " Pstate-Pb%d: %luMHz (boost state)\n" 298msgid " Pstate-Pb%d: %luMHz (boost state)\n"
286msgstr "" 299msgstr " Pstate-Pb%d: %luMHz (Boost-Zustand)\n"
287 300
288#: utils/cpufreq-info.c:184 301#: utils/cpufreq-info.c:184
289#, c-format 302#, c-format
290msgid " Pstate-P%d: %luMHz\n" 303msgid " Pstate-P%d: %luMHz\n"
291msgstr "" 304msgstr " Pstate-P%d: %luMHz\n"
292 305
293#: utils/cpufreq-info.c:211 306#: utils/cpufreq-info.c:211
294#, c-format 307#, c-format
295msgid " no or unknown cpufreq driver is active on this CPU\n" 308msgid " no or unknown cpufreq driver is active on this CPU\n"
296msgstr " kein oder nicht bestimmbarer cpufreq-Treiber aktiv\n" 309msgstr " kein oder ein unbekannter cpufreq-Treiber ist auf dieser CPU aktiv\n"
297 310
298#: utils/cpufreq-info.c:213 311#: utils/cpufreq-info.c:213
299#, c-format 312#, c-format
@@ -303,12 +316,12 @@ msgstr " Treiber: %s\n"
303#: utils/cpufreq-info.c:219 316#: utils/cpufreq-info.c:219
304#, c-format 317#, c-format
305msgid " CPUs which run at the same hardware frequency: " 318msgid " CPUs which run at the same hardware frequency: "
306msgstr " Folgende CPUs laufen mit der gleichen Hardware-Taktfrequenz: " 319msgstr " CPUs, die mit der gleichen Hardwarefrequenz laufen: "
307 320
308#: utils/cpufreq-info.c:230 321#: utils/cpufreq-info.c:230
309#, c-format 322#, c-format
310msgid " CPUs which need to have their frequency coordinated by software: " 323msgid " CPUs which need to have their frequency coordinated by software: "
311msgstr " Die Taktfrequenz folgender CPUs werden per Software koordiniert: " 324msgstr " CPUs, die ihre Frequenz mit Software koordinieren müssen: "
312 325
313#: utils/cpufreq-info.c:241 326#: utils/cpufreq-info.c:241
314#, c-format 327#, c-format
@@ -318,22 +331,22 @@ msgstr " Maximale Dauer eines Taktfrequenzwechsels: "
318#: utils/cpufreq-info.c:247 331#: utils/cpufreq-info.c:247
319#, c-format 332#, c-format
320msgid " hardware limits: " 333msgid " hardware limits: "
321msgstr " Hardwarebedingte Grenzen der Taktfrequenz: " 334msgstr " Hardwarebegrenzungen: "
322 335
323#: utils/cpufreq-info.c:256 336#: utils/cpufreq-info.c:256
324#, c-format 337#, c-format
325msgid " available frequency steps: " 338msgid " available frequency steps: "
326msgstr " mögliche Taktfrequenzen: " 339msgstr " verfügbare Frequenzschritte: "
327 340
328#: utils/cpufreq-info.c:269 341#: utils/cpufreq-info.c:269
329#, c-format 342#, c-format
330msgid " available cpufreq governors: " 343msgid " available cpufreq governors: "
331msgstr " mögliche Regler: " 344msgstr " verfügbare cpufreq-Regler: "
332 345
333#: utils/cpufreq-info.c:280 346#: utils/cpufreq-info.c:280
334#, c-format 347#, c-format
335msgid " current policy: frequency should be within " 348msgid " current policy: frequency should be within "
336msgstr " momentane Taktik: die Frequenz soll innerhalb " 349msgstr " momentane Richtlinie: Frequenz sollte innerhalb "
337 350
338#: utils/cpufreq-info.c:282 351#: utils/cpufreq-info.c:282
339#, c-format 352#, c-format
@@ -346,29 +359,28 @@ msgid ""
346"The governor \"%s\" may decide which speed to use\n" 359"The governor \"%s\" may decide which speed to use\n"
347" within this range.\n" 360" within this range.\n"
348msgstr "" 361msgstr ""
349" liegen. Der Regler \"%s\" kann frei entscheiden,\n" 362" sein. Der Regler \"%s\" kann frei entscheiden,\n"
350" welche Taktfrequenz innerhalb dieser Grenze verwendet " 363" welche Geschwindigkeit er in diesem Bereich verwendet.\n"
351"wird.\n"
352 364
353#: utils/cpufreq-info.c:293 365#: utils/cpufreq-info.c:293
354#, c-format 366#, c-format
355msgid " current CPU frequency is " 367msgid " current CPU frequency is "
356msgstr " momentane Taktfrequenz ist " 368msgstr " momentane CPU-Frequenz ist "
357 369
358#: utils/cpufreq-info.c:296 370#: utils/cpufreq-info.c:296
359#, c-format 371#, c-format
360msgid " (asserted by call to hardware)" 372msgid " (asserted by call to hardware)"
361msgstr " (verifiziert durch Nachfrage bei der Hardware)" 373msgstr " (durch Aufruf der Hardware sichergestellt)"
362 374
363#: utils/cpufreq-info.c:304 375#: utils/cpufreq-info.c:304
364#, c-format 376#, c-format
365msgid " cpufreq stats: " 377msgid " cpufreq stats: "
366msgstr " Statistik:" 378msgstr " cpufreq-Statistiken: "
367 379
368#: utils/cpufreq-info.c:472 380#: utils/cpufreq-info.c:472
369#, fuzzy, c-format 381#, c-format
370msgid "Usage: cpupower freqinfo [options]\n" 382msgid "Usage: cpupower freqinfo [options]\n"
371msgstr "Aufruf: cpufreq-info [Optionen]\n" 383msgstr "Aufruf: cpupower freqinfo [Optionen]\n"
372 384
373#: utils/cpufreq-info.c:473 utils/cpufreq-set.c:26 utils/cpupower-set.c:23 385#: utils/cpufreq-info.c:473 utils/cpufreq-set.c:26 utils/cpupower-set.c:23
374#: utils/cpupower-info.c:22 utils/cpuidle-info.c:148 386#: utils/cpupower-info.c:22 utils/cpuidle-info.c:148
@@ -377,11 +389,9 @@ msgid "Options:\n"
377msgstr "Optionen:\n" 389msgstr "Optionen:\n"
378 390
379#: utils/cpufreq-info.c:474 391#: utils/cpufreq-info.c:474
380#, fuzzy, c-format 392#, c-format
381msgid " -e, --debug Prints out debug information [default]\n" 393msgid " -e, --debug Prints out debug information [default]\n"
382msgstr "" 394msgstr " -e, --debug Gibt Debug-Informationen aus [Standard]\n"
383" -e, --debug Erzeugt detaillierte Informationen, hilfreich\n"
384" zum Aufspüren von Fehlern\n"
385 395
386#: utils/cpufreq-info.c:475 396#: utils/cpufreq-info.c:475
387#, c-format 397#, c-format
@@ -424,7 +434,7 @@ msgstr " -p, --policy Findet die momentane Taktik heraus *\n"
424#: utils/cpufreq-info.c:482 434#: utils/cpufreq-info.c:482
425#, c-format 435#, c-format
426msgid " -g, --governors Determines available cpufreq governors *\n" 436msgid " -g, --governors Determines available cpufreq governors *\n"
427msgstr " -g, --governors Erzeugt eine Liste mit verfügbaren Reglern *\n" 437msgstr " -g, --governors Ermittelt verfügbare cpufreq-Regler *\n"
428 438
429#: utils/cpufreq-info.c:483 439#: utils/cpufreq-info.c:483
430#, c-format 440#, c-format
@@ -449,8 +459,7 @@ msgstr ""
449#: utils/cpufreq-info.c:486 459#: utils/cpufreq-info.c:486
450#, c-format 460#, c-format
451msgid " -s, --stats Shows cpufreq statistics if available\n" 461msgid " -s, --stats Shows cpufreq statistics if available\n"
452msgstr "" 462msgstr " -s, --stats Zeigt cpufreq-Statistiken an, falls vorhanden\n"
453" -s, --stats Zeigt, sofern möglich, Statistiken über cpufreq an.\n"
454 463
455#: utils/cpufreq-info.c:487 464#: utils/cpufreq-info.c:487
456#, c-format 465#, c-format
@@ -464,13 +473,13 @@ msgstr ""
464#: utils/cpufreq-info.c:488 473#: utils/cpufreq-info.c:488
465#, c-format 474#, c-format
466msgid " -b, --boost Checks for turbo or boost modes *\n" 475msgid " -b, --boost Checks for turbo or boost modes *\n"
467msgstr "" 476msgstr " -b, --boost Prüft auf Turbo- oder Boost-Modi *\n"
468 477
469#: utils/cpufreq-info.c:489 478#: utils/cpufreq-info.c:489
470#, c-format 479#, c-format
471msgid "" 480msgid ""
472" -o, --proc Prints out information like provided by the /proc/" 481" -o, --proc Prints out information like provided by the "
473"cpufreq\n" 482"/proc/cpufreq\n"
474" interface in 2.4. and early 2.6. kernels\n" 483" interface in 2.4. and early 2.6. kernels\n"
475msgstr "" 484msgstr ""
476" -o, --proc Erzeugt Informationen in einem ähnlichem Format zu " 485" -o, --proc Erzeugt Informationen in einem ähnlichem Format zu "
@@ -509,8 +518,8 @@ msgid ""
509"For the arguments marked with *, omitting the -c or --cpu argument is\n" 518"For the arguments marked with *, omitting the -c or --cpu argument is\n"
510"equivalent to setting it to zero\n" 519"equivalent to setting it to zero\n"
511msgstr "" 520msgstr ""
512"Bei den mit * markierten Parametern wird '--cpu 0' angenommen, soweit nicht\n" 521"Für die mit * markierten Argumente ist das Weglassen des Arguments\n"
513"mittels -c oder --cpu etwas anderes angegeben wird\n" 522"-c oder --cpu gleichbedeutend mit der Einstellung auf Null\n"
514 523
515#: utils/cpufreq-info.c:580 524#: utils/cpufreq-info.c:580
516#, c-format 525#, c-format
@@ -525,8 +534,8 @@ msgid ""
525"You can't specify more than one --cpu parameter and/or\n" 534"You can't specify more than one --cpu parameter and/or\n"
526"more than one output-specific argument\n" 535"more than one output-specific argument\n"
527msgstr "" 536msgstr ""
528"Man kann nicht mehr als einen --cpu-Parameter und/oder mehr als einen\n" 537"Sie können nicht mehr als einen Parameter --cpu und/oder\n"
529"informationsspezifischen Parameter gleichzeitig angeben\n" 538"mehr als ein ausgabespezifisches Argument angeben\n"
530 539
531#: utils/cpufreq-info.c:600 utils/cpufreq-set.c:82 utils/cpupower-set.c:42 540#: utils/cpufreq-info.c:600 utils/cpufreq-set.c:82 utils/cpupower-set.c:42
532#: utils/cpupower-info.c:42 utils/cpuidle-info.c:213 541#: utils/cpupower-info.c:42 utils/cpuidle-info.c:213
@@ -538,17 +547,17 @@ msgstr "unbekannter oder falscher Parameter\n"
538#, c-format 547#, c-format
539msgid "couldn't analyze CPU %d as it doesn't seem to be present\n" 548msgid "couldn't analyze CPU %d as it doesn't seem to be present\n"
540msgstr "" 549msgstr ""
541"Konnte nicht die CPU %d analysieren, da sie (scheinbar?) nicht existiert.\n" 550"CPU %d konnte nicht analysiert werden, da sie scheinbar nicht existiert\n"
542 551
543#: utils/cpufreq-info.c:620 utils/cpupower-info.c:142 552#: utils/cpufreq-info.c:620 utils/cpupower-info.c:142
544#, c-format 553#, c-format
545msgid "analyzing CPU %d:\n" 554msgid "analyzing CPU %d:\n"
546msgstr "analysiere CPU %d:\n" 555msgstr "CPU %d wird analysiert:\n"
547 556
548#: utils/cpufreq-set.c:25 557#: utils/cpufreq-set.c:25
549#, fuzzy, c-format 558#, c-format
550msgid "Usage: cpupower frequency-set [options]\n" 559msgid "Usage: cpupower frequency-set [options]\n"
551msgstr "Aufruf: cpufreq-set [Optionen]\n" 560msgstr "Aufruf: cpupower frequency-set [Optionen]\n"
552 561
553#: utils/cpufreq-set.c:27 562#: utils/cpufreq-set.c:27
554#, c-format 563#, c-format
@@ -556,7 +565,7 @@ msgid ""
556" -d FREQ, --min FREQ new minimum CPU frequency the governor may " 565" -d FREQ, --min FREQ new minimum CPU frequency the governor may "
557"select\n" 566"select\n"
558msgstr "" 567msgstr ""
559" -d FREQ, --min FREQ neue minimale Taktfrequenz, die der Regler\n" 568" -d FREQ, --min FREQ neue minimale CPU-Frequenz, die der Regler\n"
560" auswählen darf\n" 569" auswählen darf\n"
561 570
562#: utils/cpufreq-set.c:28 571#: utils/cpufreq-set.c:28
@@ -571,7 +580,7 @@ msgstr ""
571#: utils/cpufreq-set.c:29 580#: utils/cpufreq-set.c:29
572#, c-format 581#, c-format
573msgid " -g GOV, --governor GOV new cpufreq governor\n" 582msgid " -g GOV, --governor GOV new cpufreq governor\n"
574msgstr " -g GOV, --governors GOV wechsle zu Regler GOV\n" 583msgstr " -g GOV, --governors GOV neuer cpufreq-Regler\n"
575 584
576#: utils/cpufreq-set.c:30 585#: utils/cpufreq-set.c:30
577#, c-format 586#, c-format
@@ -579,29 +588,29 @@ msgid ""
579" -f FREQ, --freq FREQ specific frequency to be set. Requires userspace\n" 588" -f FREQ, --freq FREQ specific frequency to be set. Requires userspace\n"
580" governor to be available and loaded\n" 589" governor to be available and loaded\n"
581msgstr "" 590msgstr ""
582" -f FREQ, --freq FREQ setze exakte Taktfrequenz. Benötigt den Regler\n" 591" -f FREQ, --freq FREQ bestimmte Frequenz, die eingestellt werden soll.\n"
583" 'userspace'.\n" 592" Erfordert einen verfügbaren und geladenen "
593"userspace-Regler\n"
584 594
585#: utils/cpufreq-set.c:32 595#: utils/cpufreq-set.c:32
586#, c-format 596#, c-format
587msgid " -r, --related Switches all hardware-related CPUs\n" 597msgid " -r, --related Switches all hardware-related CPUs\n"
588msgstr "" 598msgstr " -r, --related Schaltet alle hardwarebezogenen CPUs um\n"
589" -r, --related Setze Werte für alle CPUs, deren Taktfrequenz\n"
590" hardwarebedingt identisch ist.\n"
591 599
592#: utils/cpufreq-set.c:33 utils/cpupower-set.c:28 utils/cpupower-info.c:27 600#: utils/cpufreq-set.c:33 utils/cpupower-set.c:28 utils/cpupower-info.c:27
593#, c-format 601#, c-format
594msgid " -h, --help Prints out this screen\n" 602msgid " -h, --help Prints out this screen\n"
595msgstr " -h, --help Gibt diese Kurzübersicht aus\n" 603msgstr " -h, --help Gibt diesen Bildschirm aus\n"
596 604
597#: utils/cpufreq-set.c:35 605#: utils/cpufreq-set.c:35
598#, fuzzy, c-format 606#, c-format
599msgid "" 607msgid ""
600"Notes:\n" 608"Notes:\n"
601"1. Omitting the -c or --cpu argument is equivalent to setting it to \"all\"\n" 609"1. Omitting the -c or --cpu argument is equivalent to setting it to \"all\"\n"
602msgstr "" 610msgstr ""
603"Bei den mit * markierten Parametern wird '--cpu 0' angenommen, soweit nicht\n" 611"Hinweis:\n"
604"mittels -c oder --cpu etwas anderes angegeben wird\n" 612"1. Das Weglassen des Arguments -c oder --cpu ist gleichbedeutend mit der "
613"Einstellung auf \"all\"\n"
605 614
606#: utils/cpufreq-set.c:37 615#: utils/cpufreq-set.c:37
607#, fuzzy, c-format 616#, fuzzy, c-format
@@ -636,17 +645,21 @@ msgid ""
636"frequency\n" 645"frequency\n"
637" or because the userspace governor isn't loaded?\n" 646" or because the userspace governor isn't loaded?\n"
638msgstr "" 647msgstr ""
639"Beim Einstellen ist ein Fehler aufgetreten. Typische Fehlerquellen sind:\n" 648"Fehler beim Festlegen neuer Werte. Häufige Fehler:\n"
640"- nicht ausreichende Rechte (Administrator)\n" 649"- Verfügen Sie über die erforderlichen Administrationsrechte? (Superuser?)\n"
641"- der Regler ist nicht verfügbar bzw. nicht geladen\n" 650"- Ist der von Ihnen gewünschte Regler verfügbar und mittels modprobe "
642"- die angegebene Taktik ist inkorrekt\n" 651"geladen?\n"
643"- eine spezifische Frequenz wurde angegeben, aber der Regler 'userspace'\n" 652"- Versuchen Sie eine ungültige Richtlinie festzulegen?\n"
644" kann entweder hardwarebedingt nicht genutzt werden oder ist nicht geladen\n" 653"- Versuchen Sie eine bestimmte Frequenz festzulegen, aber der "
654"userspace-Regler ist nicht verfügbar,\n"
655" z.B. wegen Hardware, die nicht auf eine bestimmte Frequenz eingestellt "
656"werden kann\n"
657" oder weil der userspace-Regler nicht geladen ist?\n"
645 658
646#: utils/cpufreq-set.c:170 659#: utils/cpufreq-set.c:170
647#, c-format 660#, c-format
648msgid "wrong, unknown or unhandled CPU?\n" 661msgid "wrong, unknown or unhandled CPU?\n"
649msgstr "unbekannte oder nicht regelbare CPU\n" 662msgstr "falsche, unbekannte oder nicht regelbare CPU?\n"
650 663
651#: utils/cpufreq-set.c:302 664#: utils/cpufreq-set.c:302
652#, c-format 665#, c-format
@@ -654,8 +667,8 @@ msgid ""
654"the -f/--freq parameter cannot be combined with -d/--min, -u/--max or\n" 667"the -f/--freq parameter cannot be combined with -d/--min, -u/--max or\n"
655"-g/--governor parameters\n" 668"-g/--governor parameters\n"
656msgstr "" 669msgstr ""
657"Der -f bzw. --freq-Parameter kann nicht mit den Parametern -d/--min, -u/--" 670"Der -f bzw. --freq-Parameter kann nicht mit den Parametern -d/--min, "
658"max\n" 671"-u/--max\n"
659"oder -g/--governor kombiniert werden\n" 672"oder -g/--governor kombiniert werden\n"
660 673
661#: utils/cpufreq-set.c:308 674#: utils/cpufreq-set.c:308
@@ -664,18 +677,18 @@ msgid ""
664"At least one parameter out of -f/--freq, -d/--min, -u/--max, and\n" 677"At least one parameter out of -f/--freq, -d/--min, -u/--max, and\n"
665"-g/--governor must be passed\n" 678"-g/--governor must be passed\n"
666msgstr "" 679msgstr ""
667"Es muss mindestens ein Parameter aus -f/--freq, -d/--min, -u/--max oder\n" 680"Mindestens ein Parameter aus -f/--freq, -d/--min, -u/--max und\n"
668"-g/--governor angegeben werden.\n" 681"-g/--governor muss übergeben werden\n"
669 682
670#: utils/cpufreq-set.c:347 683#: utils/cpufreq-set.c:347
671#, c-format 684#, c-format
672msgid "Setting cpu: %d\n" 685msgid "Setting cpu: %d\n"
673msgstr "" 686msgstr "CPU einstellen: %d\n"
674 687
675#: utils/cpupower-set.c:22 688#: utils/cpupower-set.c:22
676#, c-format 689#, c-format
677msgid "Usage: cpupower set [ -b val ] [ -m val ] [ -s val ]\n" 690msgid "Usage: cpupower set [ -b val ] [ -m val ] [ -s val ]\n"
678msgstr "" 691msgstr "Aufruf: cpupower set [ -b val ] [ -m val ] [ -s val ]\n"
679 692
680#: utils/cpupower-set.c:24 693#: utils/cpupower-set.c:24
681#, c-format 694#, c-format
@@ -689,6 +702,8 @@ msgstr ""
689msgid "" 702msgid ""
690" -m, --sched-mc [VAL] Sets the kernel's multi core scheduler policy.\n" 703" -m, --sched-mc [VAL] Sets the kernel's multi core scheduler policy.\n"
691msgstr "" 704msgstr ""
705" -m, --sched-mc [VAL] Legt die Mehrkern-Scheduler-Richtlinie des "
706"Kernels fest.\n"
692 707
693#: utils/cpupower-set.c:27 708#: utils/cpupower-set.c:27
694#, c-format 709#, c-format
@@ -700,37 +715,37 @@ msgstr ""
700#: utils/cpupower-set.c:80 715#: utils/cpupower-set.c:80
701#, c-format 716#, c-format
702msgid "--perf-bias param out of range [0-%d]\n" 717msgid "--perf-bias param out of range [0-%d]\n"
703msgstr "" 718msgstr "--perf-bias-Parameter außerhalb des Bereichs [0-%d]\n"
704 719
705#: utils/cpupower-set.c:91 720#: utils/cpupower-set.c:91
706#, c-format 721#, c-format
707msgid "--sched-mc param out of range [0-%d]\n" 722msgid "--sched-mc param out of range [0-%d]\n"
708msgstr "" 723msgstr "Parameter --sched-mc außerhalb des Bereichs [0-%d]\n"
709 724
710#: utils/cpupower-set.c:102 725#: utils/cpupower-set.c:102
711#, c-format 726#, c-format
712msgid "--sched-smt param out of range [0-%d]\n" 727msgid "--sched-smt param out of range [0-%d]\n"
713msgstr "" 728msgstr "Parameter --sched-smt außerhalb des Bereichs [0-%d]\n"
714 729
715#: utils/cpupower-set.c:121 730#: utils/cpupower-set.c:121
716#, c-format 731#, c-format
717msgid "Error setting sched-mc %s\n" 732msgid "Error setting sched-mc %s\n"
718msgstr "" 733msgstr "Fehler beim Einstellen von sched-mc %s\n"
719 734
720#: utils/cpupower-set.c:127 735#: utils/cpupower-set.c:127
721#, c-format 736#, c-format
722msgid "Error setting sched-smt %s\n" 737msgid "Error setting sched-smt %s\n"
723msgstr "" 738msgstr "Fehler beim Einstellen von sched-smt %s\n"
724 739
725#: utils/cpupower-set.c:146 740#: utils/cpupower-set.c:146
726#, c-format 741#, c-format
727msgid "Error setting perf-bias value on CPU %d\n" 742msgid "Error setting perf-bias value on CPU %d\n"
728msgstr "" 743msgstr "Fehler beim Einstellen des perf-bias-Wertes auf der CPU %d\n"
729 744
730#: utils/cpupower-info.c:21 745#: utils/cpupower-info.c:21
731#, c-format 746#, c-format
732msgid "Usage: cpupower info [ -b ] [ -m ] [ -s ]\n" 747msgid "Usage: cpupower info [ -b ] [ -m ] [ -s ]\n"
733msgstr "" 748msgstr "Aufruf: cpupower info [ -b ] [ -m ] [ -s ]\n"
734 749
735#: utils/cpupower-info.c:23 750#: utils/cpupower-info.c:23
736#, c-format 751#, c-format
@@ -740,9 +755,10 @@ msgid ""
740msgstr "" 755msgstr ""
741 756
742#: utils/cpupower-info.c:25 757#: utils/cpupower-info.c:25
743#, fuzzy, c-format 758#, c-format
744msgid " -m, --sched-mc Gets the kernel's multi core scheduler policy.\n" 759msgid " -m, --sched-mc Gets the kernel's multi core scheduler policy.\n"
745msgstr " -p, --policy Findet die momentane Taktik heraus *\n" 760msgstr ""
761" -m, --sched-mc Ruft die Mehrkern-Scheduler-Richtlinie des Kernels ab.\n"
746 762
747#: utils/cpupower-info.c:26 763#: utils/cpupower-info.c:26
748#, c-format 764#, c-format
@@ -756,17 +772,20 @@ msgid ""
756"\n" 772"\n"
757"Passing no option will show all info, by default only on core 0\n" 773"Passing no option will show all info, by default only on core 0\n"
758msgstr "" 774msgstr ""
775"\n"
776"Wenn Sie keine Option übergeben, werden alle Informationen angezeigt, "
777"standardmäßig nur auf Kern 0\n"
759 778
760#: utils/cpupower-info.c:102 779#: utils/cpupower-info.c:102
761#, c-format 780#, c-format
762msgid "System's multi core scheduler setting: " 781msgid "System's multi core scheduler setting: "
763msgstr "" 782msgstr "Mehrkern-Scheduler-Einstellung des Systems: "
764 783
765#. if sysfs file is missing it's: errno == ENOENT 784#. if sysfs file is missing it's: errno == ENOENT
766#: utils/cpupower-info.c:105 utils/cpupower-info.c:114 785#: utils/cpupower-info.c:105 utils/cpupower-info.c:114
767#, c-format 786#, c-format
768msgid "not supported\n" 787msgid "not supported\n"
769msgstr "" 788msgstr "nicht unterstützt\n"
770 789
771#: utils/cpupower-info.c:111 790#: utils/cpupower-info.c:111
772#, c-format 791#, c-format
@@ -786,164 +805,161 @@ msgstr ""
786#: utils/cpupower-info.c:147 805#: utils/cpupower-info.c:147
787#, c-format 806#, c-format
788msgid "Could not read perf-bias value\n" 807msgid "Could not read perf-bias value\n"
789msgstr "" 808msgstr "perf-bias-Wert konnte nicht gelesen werden\n"
790 809
791#: utils/cpupower-info.c:150 810#: utils/cpupower-info.c:150
792#, c-format 811#, c-format
793msgid "perf-bias: %d\n" 812msgid "perf-bias: %d\n"
794msgstr "" 813msgstr "perf-bias: %d\n"
795 814
796#: utils/cpuidle-info.c:28 815#: utils/cpuidle-info.c:28
797#, fuzzy, c-format 816#, c-format
798msgid "Analyzing CPU %d:\n" 817msgid "Analyzing CPU %d:\n"
799msgstr "analysiere CPU %d:\n" 818msgstr "CPU %d wird analysiert:\n"
800 819
801#: utils/cpuidle-info.c:32 820#: utils/cpuidle-info.c:32
802#, c-format 821#, c-format
803msgid "CPU %u: No idle states\n" 822msgid "CPU %u: No idle states\n"
804msgstr "" 823msgstr "CPU %u: Keine Ruhezustände\n"
805 824
806#: utils/cpuidle-info.c:36 825#: utils/cpuidle-info.c:36
807#, c-format 826#, c-format
808msgid "CPU %u: Can't read idle state info\n" 827msgid "CPU %u: Can't read idle state info\n"
809msgstr "" 828msgstr "CPU %u: Ruhezustands-Informationen können nicht gelesen werden\n"
810 829
811#: utils/cpuidle-info.c:41 830#: utils/cpuidle-info.c:41
812#, c-format 831#, c-format
813msgid "Could not determine max idle state %u\n" 832msgid "Could not determine max idle state %u\n"
814msgstr "" 833msgstr "Max. Ruhezustand %u konnte nicht bestimmt werden\n"
815 834
816#: utils/cpuidle-info.c:46 835#: utils/cpuidle-info.c:46
817#, c-format 836#, c-format
818msgid "Number of idle states: %d\n" 837msgid "Number of idle states: %d\n"
819msgstr "" 838msgstr "Anzahl der Ruhezustände: %d\n"
820 839
821#: utils/cpuidle-info.c:48 840#: utils/cpuidle-info.c:48
822#, fuzzy, c-format 841#, c-format
823msgid "Available idle states:" 842msgid "Available idle states:"
824msgstr " mögliche Taktfrequenzen: " 843msgstr "Verfügbare Ruhezustände:"
825 844
826#: utils/cpuidle-info.c:71 845#: utils/cpuidle-info.c:71
827#, c-format 846#, c-format
828msgid "Flags/Description: %s\n" 847msgid "Flags/Description: %s\n"
829msgstr "" 848msgstr "Merker/Beschreibung: %s\n"
830 849
831#: utils/cpuidle-info.c:74 850#: utils/cpuidle-info.c:74
832#, c-format 851#, c-format
833msgid "Latency: %lu\n" 852msgid "Latency: %lu\n"
834msgstr "" 853msgstr "Latenz: %lu\n"
835 854
836#: utils/cpuidle-info.c:76 855#: utils/cpuidle-info.c:76
837#, c-format 856#, c-format
838msgid "Usage: %lu\n" 857msgid "Usage: %lu\n"
839msgstr "" 858msgstr "Aufruf: %lu\n"
840 859
841#: utils/cpuidle-info.c:78 860#: utils/cpuidle-info.c:78
842#, c-format 861#, c-format
843msgid "Duration: %llu\n" 862msgid "Duration: %llu\n"
844msgstr "" 863msgstr "Dauer: %llu\n"
845 864
846#: utils/cpuidle-info.c:90 865#: utils/cpuidle-info.c:90
847#, c-format 866#, c-format
848msgid "Could not determine cpuidle driver\n" 867msgid "Could not determine cpuidle driver\n"
849msgstr "" 868msgstr "cpuidle-Treiber konnte nicht bestimmt werden\n"
850 869
851#: utils/cpuidle-info.c:94 870#: utils/cpuidle-info.c:94
852#, fuzzy, c-format 871#, c-format
853msgid "CPUidle driver: %s\n" 872msgid "CPUidle driver: %s\n"
854msgstr " Treiber: %s\n" 873msgstr "CPUidle-Treiber: %s\n"
855 874
856#: utils/cpuidle-info.c:99 875#: utils/cpuidle-info.c:99
857#, c-format 876#, c-format
858msgid "Could not determine cpuidle governor\n" 877msgid "Could not determine cpuidle governor\n"
859msgstr "" 878msgstr "cpuidle-Regler konnte nicht bestimmt werden\n"
860 879
861#: utils/cpuidle-info.c:103 880#: utils/cpuidle-info.c:103
862#, c-format 881#, c-format
863msgid "CPUidle governor: %s\n" 882msgid "CPUidle governor: %s\n"
864msgstr "" 883msgstr "CPUidle-Regler: %s\n"
865 884
866#: utils/cpuidle-info.c:122 885#: utils/cpuidle-info.c:122
867#, c-format 886#, c-format
868msgid "CPU %u: Can't read C-state info\n" 887msgid "CPU %u: Can't read C-state info\n"
869msgstr "" 888msgstr "CPU %u: C-Zustands-Informationen können nicht gelesen werden\n"
870 889
871#. printf("Cstates: %d\n", cstates); 890#. printf("Cstates: %d\n", cstates);
872#: utils/cpuidle-info.c:127 891#: utils/cpuidle-info.c:127
873#, c-format 892#, c-format
874msgid "active state: C0\n" 893msgid "active state: C0\n"
875msgstr "" 894msgstr "aktiver Zustand: C0\n"
876 895
877#: utils/cpuidle-info.c:128 896#: utils/cpuidle-info.c:128
878#, c-format 897#, c-format
879msgid "max_cstate: C%u\n" 898msgid "max_cstate: C%u\n"
880msgstr "" 899msgstr "max_cstate: C%u\n"
881 900
882#: utils/cpuidle-info.c:129 901#: utils/cpuidle-info.c:129
883#, fuzzy, c-format 902#, c-format
884msgid "maximum allowed latency: %lu usec\n" 903msgid "maximum allowed latency: %lu usec\n"
885msgstr " Maximale Dauer eines Taktfrequenzwechsels: " 904msgstr "maximal erlaubte Latenz: %lu usec\n"
886 905
887#: utils/cpuidle-info.c:130 906#: utils/cpuidle-info.c:130
888#, c-format 907#, c-format
889msgid "states:\t\n" 908msgid "states:\t\n"
890msgstr "" 909msgstr "Zustände:\t\n"
891 910
892#: utils/cpuidle-info.c:132 911#: utils/cpuidle-info.c:132
893#, c-format 912#, c-format
894msgid " C%d: type[C%d] " 913msgid " C%d: type[C%d] "
895msgstr "" 914msgstr " C%d: Typ[C%d] "
896 915
897#: utils/cpuidle-info.c:134 916#: utils/cpuidle-info.c:134
898#, c-format 917#, c-format
899msgid "promotion[--] demotion[--] " 918msgid "promotion[--] demotion[--] "
900msgstr "" 919msgstr "promotion[--] demotion[--] "
901 920
902#: utils/cpuidle-info.c:135 921#: utils/cpuidle-info.c:135
903#, c-format 922#, c-format
904msgid "latency[%03lu] " 923msgid "latency[%03lu] "
905msgstr "" 924msgstr "Latenz[%03lu] "
906 925
907#: utils/cpuidle-info.c:137 926#: utils/cpuidle-info.c:137
908#, c-format 927#, c-format
909msgid "usage[%08lu] " 928msgid "usage[%08lu] "
910msgstr "" 929msgstr "Aufruf[%08lu] "
911 930
912#: utils/cpuidle-info.c:139 931#: utils/cpuidle-info.c:139
913#, c-format 932#, c-format
914msgid "duration[%020Lu] \n" 933msgid "duration[%020Lu] \n"
915msgstr "" 934msgstr "Dauer[%020Lu] \n"
916 935
917#: utils/cpuidle-info.c:147 936#: utils/cpuidle-info.c:147
918#, fuzzy, c-format 937#, c-format
919msgid "Usage: cpupower idleinfo [options]\n" 938msgid "Usage: cpupower idleinfo [options]\n"
920msgstr "Aufruf: cpufreq-info [Optionen]\n" 939msgstr "Aufruf: cpupower idleinfo [Optionen]\n"
921 940
922#: utils/cpuidle-info.c:149 941#: utils/cpuidle-info.c:149
923#, fuzzy, c-format 942#, c-format
924msgid " -s, --silent Only show general C-state information\n" 943msgid " -s, --silent Only show general C-state information\n"
925msgstr "" 944msgstr ""
926" -e, --debug Erzeugt detaillierte Informationen, hilfreich\n" 945" -s, --silent Nur allgemeine C-Zustands-Informationen anzeigen\n"
927" zum Aufspüren von Fehlern\n"
928 946
929#: utils/cpuidle-info.c:150 947#: utils/cpuidle-info.c:150
930#, fuzzy, c-format 948#, c-format
931msgid "" 949msgid ""
932" -o, --proc Prints out information like provided by the /proc/" 950" -o, --proc Prints out information like provided by the "
933"acpi/processor/*/power\n" 951"/proc/acpi/processor/*/power\n"
934" interface in older kernels\n" 952" interface in older kernels\n"
935msgstr "" 953msgstr ""
936" -o, --proc Erzeugt Informationen in einem ähnlichem Format zu " 954" -o, --proc Gibt Informationen so aus, wie sie von der "
937"dem\n" 955"Schnittstelle\n"
938" der /proc/cpufreq-Datei in 2.4. und frühen 2.6.\n" 956" /proc/acpi/processor/*/power in älteren Kerneln "
939" Kernel-Versionen\n" 957"bereitgestellt werden\n"
940 958
941#: utils/cpuidle-info.c:209 959#: utils/cpuidle-info.c:209
942#, fuzzy, c-format 960#, c-format
943msgid "You can't specify more than one output-specific argument\n" 961msgid "You can't specify more than one output-specific argument\n"
944msgstr "" 962msgstr "Sie können nicht mehr als ein ausgabenspezifisches Argument angeben\n"
945"Man kann nicht mehr als einen --cpu-Parameter und/oder mehr als einen\n"
946"informationsspezifischen Parameter gleichzeitig angeben\n"
947 963
948#~ msgid "" 964#~ msgid ""
949#~ " -c CPU, --cpu CPU CPU number which information shall be determined " 965#~ " -c CPU, --cpu CPU CPU number which information shall be determined "
@@ -956,6 +972,6 @@ msgstr ""
956#~ " -c CPU, --cpu CPU number of CPU where cpufreq settings shall be " 972#~ " -c CPU, --cpu CPU number of CPU where cpufreq settings shall be "
957#~ "modified\n" 973#~ "modified\n"
958#~ msgstr "" 974#~ msgstr ""
959#~ " -c CPU, --cpu CPU Nummer der CPU, deren Taktfrequenz-" 975#~ " -c CPU, --cpu CPU Nummer der CPU, deren "
960#~ "Einstellung\n" 976#~ "Taktfrequenz-Einstellung\n"
961#~ " werden soll\n" 977#~ " werden soll\n"
diff --git a/tools/power/pm-graph/README b/tools/power/pm-graph/README
index 58a5591e3951..96259f6e5715 100644
--- a/tools/power/pm-graph/README
+++ b/tools/power/pm-graph/README
@@ -1,7 +1,7 @@
1 p m - g r a p h 1 p m - g r a p h
2 2
3 pm-graph: suspend/resume/boot timing analysis tools 3 pm-graph: suspend/resume/boot timing analysis tools
4 Version: 5.4 4 Version: 5.5
5 Author: Todd Brandt <todd.e.brandt@intel.com> 5 Author: Todd Brandt <todd.e.brandt@intel.com>
6 Home Page: https://01.org/pm-graph 6 Home Page: https://01.org/pm-graph
7 7
@@ -18,6 +18,10 @@
18 - upstream version in git: 18 - upstream version in git:
19 https://github.com/intel/pm-graph/ 19 https://github.com/intel/pm-graph/
20 20
21 Requirements:
22 - runs with python2 or python3, choice is made by /usr/bin/python link
23 - python2 now requires python-configparser be installed
24
21 Table of Contents 25 Table of Contents
22 - Overview 26 - Overview
23 - Setup 27 - Setup
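
The new Requirements note is terse; the practical upshot (a sketch, not part of the patch) is that the scripts import the Python 3 module name, and on a system whose /usr/bin/python link still points at python2 the "configparser" backport package (python-configparser) supplies that name. A minimal compatibility shim along those lines:

    # Illustrative shim only -- the pm-graph scripts just import "configparser";
    # the python2 backport package makes that import succeed on 2.7 as well.
    try:
        import configparser                      # Python 3 stdlib, or python2 backport
    except ImportError:
        import ConfigParser as configparser      # plain Python 2 fallback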
diff --git a/tools/power/pm-graph/bootgraph.py b/tools/power/pm-graph/bootgraph.py
index 666bcbda648d..d3b99a1e92d6 100755
--- a/tools/power/pm-graph/bootgraph.py
+++ b/tools/power/pm-graph/bootgraph.py
@@ -1,9 +1,18 @@
1#!/usr/bin/python2 1#!/usr/bin/python
2# SPDX-License-Identifier: GPL-2.0-only 2# SPDX-License-Identifier: GPL-2.0-only
3# 3#
4# Tool for analyzing boot timing 4# Tool for analyzing boot timing
5# Copyright (c) 2013, Intel Corporation. 5# Copyright (c) 2013, Intel Corporation.
6# 6#
7# This program is free software; you can redistribute it and/or modify it
8# under the terms and conditions of the GNU General Public License,
9# version 2, as published by the Free Software Foundation.
10#
11# This program is distributed in the hope it will be useful, but WITHOUT
12# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
14# more details.
15#
7# Authors: 16# Authors:
8# Todd Brandt <todd.e.brandt@linux.intel.com> 17# Todd Brandt <todd.e.brandt@linux.intel.com>
9# 18#
@@ -81,7 +90,7 @@ class SystemValues(aslib.SystemValues):
81 cmdline = 'initcall_debug log_buf_len=32M' 90 cmdline = 'initcall_debug log_buf_len=32M'
82 if self.useftrace: 91 if self.useftrace:
83 if self.cpucount > 0: 92 if self.cpucount > 0:
84 bs = min(self.memtotal / 2, 2*1024*1024) / self.cpucount 93 bs = min(self.memtotal // 2, 2*1024*1024) // self.cpucount
85 else: 94 else:
86 bs = 131072 95 bs = 131072
87 cmdline += ' trace_buf_size=%dK trace_clock=global '\ 96 cmdline += ' trace_buf_size=%dK trace_clock=global '\
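
For context on the floor-division change above: on Python 3, / is true division and would turn the computed trace buffer size into a float, while // keeps it an integer on both interpreters. A small illustration with hypothetical values:

    # Hypothetical numbers, purely illustrative.
    memtotal_kb, cpucount = 8 * 1024 * 1024, 4
    bs_old = min(memtotal_kb / 2, 2*1024*1024) / cpucount    # float on Python 3
    bs_new = min(memtotal_kb // 2, 2*1024*1024) // cpucount  # int on 2 and 3
    print('trace_buf_size=%dK' % bs_new)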
@@ -137,13 +146,13 @@ class SystemValues(aslib.SystemValues):
137 if arg in ['-h', '-v', '-cronjob', '-reboot', '-verbose']: 146 if arg in ['-h', '-v', '-cronjob', '-reboot', '-verbose']:
138 continue 147 continue
139 elif arg in ['-o', '-dmesg', '-ftrace', '-func']: 148 elif arg in ['-o', '-dmesg', '-ftrace', '-func']:
140 args.next() 149 next(args)
141 continue 150 continue
142 elif arg == '-result': 151 elif arg == '-result':
143 cmdline += ' %s "%s"' % (arg, os.path.abspath(args.next())) 152 cmdline += ' %s "%s"' % (arg, os.path.abspath(next(args)))
144 continue 153 continue
145 elif arg == '-cgskip': 154 elif arg == '-cgskip':
146 file = self.configFile(args.next()) 155 file = self.configFile(next(args))
147 cmdline += ' %s "%s"' % (arg, os.path.abspath(file)) 156 cmdline += ' %s "%s"' % (arg, os.path.abspath(file))
148 continue 157 continue
149 cmdline += ' '+arg 158 cmdline += ' '+arg
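
The args.next() to next(args) replacements in this hunk (and in the option-parsing hunks below) are the standard Python 3 port: generator objects no longer expose a .next() method, while the next() builtin works on both interpreters. A trivial sketch:

    # Illustrative only.
    args = iter(['-o', 'outdir', '-reboot'])
    arg = next(args)     # replaces args.next(); works on Python 2.6+ and 3
    val = next(args)
    print(arg, val)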
@@ -292,11 +301,11 @@ def parseKernelLog():
292 tp = aslib.TestProps() 301 tp = aslib.TestProps()
293 devtemp = dict() 302 devtemp = dict()
294 if(sysvals.dmesgfile): 303 if(sysvals.dmesgfile):
295 lf = open(sysvals.dmesgfile, 'r') 304 lf = open(sysvals.dmesgfile, 'rb')
296 else: 305 else:
297 lf = Popen('dmesg', stdout=PIPE).stdout 306 lf = Popen('dmesg', stdout=PIPE).stdout
298 for line in lf: 307 for line in lf:
299 line = line.replace('\r\n', '') 308 line = aslib.ascii(line).replace('\r\n', '')
300 # grab the stamp and sysinfo 309 # grab the stamp and sysinfo
301 if re.match(tp.stampfmt, line): 310 if re.match(tp.stampfmt, line):
302 tp.stamp = line 311 tp.stamp = line
@@ -649,7 +658,7 @@ def createBootGraph(data):
649 statinfo += '\t"%s": [\n\t\t"%s",\n' % (n, devstats[n]['info']) 658 statinfo += '\t"%s": [\n\t\t"%s",\n' % (n, devstats[n]['info'])
650 if 'fstat' in devstats[n]: 659 if 'fstat' in devstats[n]:
651 funcs = devstats[n]['fstat'] 660 funcs = devstats[n]['fstat']
652 for f in sorted(funcs, key=funcs.get, reverse=True): 661 for f in sorted(funcs, key=lambda k:(funcs[k], k), reverse=True):
653 if funcs[f][0] < 0.01 and len(funcs) > 10: 662 if funcs[f][0] < 0.01 and len(funcs) > 10:
654 break 663 break
655 statinfo += '\t\t"%f|%s|%d",\n' % (funcs[f][0], f, funcs[f][1]) 664 statinfo += '\t\t"%f|%s|%d",\n' % (funcs[f][0], f, funcs[f][1])
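
The extra sort key in createBootGraph appears to make the ordering deterministic: functions with identical (time, count) statistics now come out in name order rather than whatever order the dict happens to iterate in. A sketch with made-up data:

    # Hypothetical per-function stats: (total time, call count).
    funcs = {'b_init': (0.5, 3), 'a_init': (0.5, 3), 'c_init': (1.2, 1)}
    ordered = sorted(funcs, key=lambda k: (funcs[k], k), reverse=True)
    print(ordered)   # ['c_init', 'b_init', 'a_init'] -- ties broken by name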
@@ -729,7 +738,7 @@ def updateCron(restore=False):
729 op.write('@reboot python %s\n' % sysvals.cronjobCmdString()) 738 op.write('@reboot python %s\n' % sysvals.cronjobCmdString())
730 op.close() 739 op.close()
731 res = call([cmd, cronfile]) 740 res = call([cmd, cronfile])
732 except Exception, e: 741 except Exception as e:
733 pprint('Exception: %s' % str(e)) 742 pprint('Exception: %s' % str(e))
734 shutil.move(backfile, cronfile) 743 shutil.move(backfile, cronfile)
735 res = -1 744 res = -1
@@ -745,7 +754,7 @@ def updateGrub(restore=False):
745 try: 754 try:
746 call(sysvals.blexec, stderr=PIPE, stdout=PIPE, 755 call(sysvals.blexec, stderr=PIPE, stdout=PIPE,
747 env={'PATH': '.:/sbin:/usr/sbin:/usr/bin:/sbin:/bin'}) 756 env={'PATH': '.:/sbin:/usr/sbin:/usr/bin:/sbin:/bin'})
748 except Exception, e: 757 except Exception as e:
749 pprint('Exception: %s\n' % str(e)) 758 pprint('Exception: %s\n' % str(e))
750 return 759 return
751 # extract the option and create a grub config without it 760 # extract the option and create a grub config without it
@@ -792,7 +801,7 @@ def updateGrub(restore=False):
792 op.close() 801 op.close()
793 res = call(sysvals.blexec) 802 res = call(sysvals.blexec)
794 os.remove(grubfile) 803 os.remove(grubfile)
795 except Exception, e: 804 except Exception as e:
796 pprint('Exception: %s' % str(e)) 805 pprint('Exception: %s' % str(e))
797 res = -1 806 res = -1
798 # cleanup 807 # cleanup
@@ -866,6 +875,7 @@ def printHelp():
866 'Other commands:\n'\ 875 'Other commands:\n'\
867 ' -flistall Print all functions capable of being captured in ftrace\n'\ 876 ' -flistall Print all functions capable of being captured in ftrace\n'\
868 ' -sysinfo Print out system info extracted from BIOS\n'\ 877 ' -sysinfo Print out system info extracted from BIOS\n'\
878 ' -which exec Print an executable path, should function even without PATH\n'\
869 ' [redo]\n'\ 879 ' [redo]\n'\
870 ' -dmesg file Create HTML output using dmesg input (used with -ftrace)\n'\ 880 ' -dmesg file Create HTML output using dmesg input (used with -ftrace)\n'\
871 ' -ftrace file Create HTML output using ftrace input (used with -dmesg)\n'\ 881 ' -ftrace file Create HTML output using ftrace input (used with -dmesg)\n'\
@@ -907,13 +917,13 @@ if __name__ == '__main__':
907 sysvals.mincglen = aslib.getArgFloat('-mincg', args, 0.0, 10000.0) 917 sysvals.mincglen = aslib.getArgFloat('-mincg', args, 0.0, 10000.0)
908 elif(arg == '-cgfilter'): 918 elif(arg == '-cgfilter'):
909 try: 919 try:
910 val = args.next() 920 val = next(args)
911 except: 921 except:
912 doError('No callgraph functions supplied', True) 922 doError('No callgraph functions supplied', True)
913 sysvals.setCallgraphFilter(val) 923 sysvals.setCallgraphFilter(val)
914 elif(arg == '-cgskip'): 924 elif(arg == '-cgskip'):
915 try: 925 try:
916 val = args.next() 926 val = next(args)
917 except: 927 except:
918 doError('No file supplied', True) 928 doError('No file supplied', True)
919 if val.lower() in switchoff: 929 if val.lower() in switchoff:
@@ -924,7 +934,7 @@ if __name__ == '__main__':
924 doError('%s does not exist' % cgskip) 934 doError('%s does not exist' % cgskip)
925 elif(arg == '-bl'): 935 elif(arg == '-bl'):
926 try: 936 try:
927 val = args.next() 937 val = next(args)
928 except: 938 except:
929 doError('No boot loader name supplied', True) 939 doError('No boot loader name supplied', True)
930 if val.lower() not in ['grub']: 940 if val.lower() not in ['grub']:
@@ -937,7 +947,7 @@ if __name__ == '__main__':
937 sysvals.max_graph_depth = aslib.getArgInt('-maxdepth', args, 0, 1000) 947 sysvals.max_graph_depth = aslib.getArgInt('-maxdepth', args, 0, 1000)
938 elif(arg == '-func'): 948 elif(arg == '-func'):
939 try: 949 try:
940 val = args.next() 950 val = next(args)
941 except: 951 except:
942 doError('No filter functions supplied', True) 952 doError('No filter functions supplied', True)
943 sysvals.useftrace = True 953 sysvals.useftrace = True
@@ -946,7 +956,7 @@ if __name__ == '__main__':
946 sysvals.setGraphFilter(val) 956 sysvals.setGraphFilter(val)
947 elif(arg == '-ftrace'): 957 elif(arg == '-ftrace'):
948 try: 958 try:
949 val = args.next() 959 val = next(args)
950 except: 960 except:
951 doError('No ftrace file supplied', True) 961 doError('No ftrace file supplied', True)
952 if(os.path.exists(val) == False): 962 if(os.path.exists(val) == False):
@@ -959,7 +969,7 @@ if __name__ == '__main__':
959 sysvals.cgexp = True 969 sysvals.cgexp = True
960 elif(arg == '-dmesg'): 970 elif(arg == '-dmesg'):
961 try: 971 try:
962 val = args.next() 972 val = next(args)
963 except: 973 except:
964 doError('No dmesg file supplied', True) 974 doError('No dmesg file supplied', True)
965 if(os.path.exists(val) == False): 975 if(os.path.exists(val) == False):
@@ -968,13 +978,13 @@ if __name__ == '__main__':
968 sysvals.dmesgfile = val 978 sysvals.dmesgfile = val
969 elif(arg == '-o'): 979 elif(arg == '-o'):
970 try: 980 try:
971 val = args.next() 981 val = next(args)
972 except: 982 except:
973 doError('No subdirectory name supplied', True) 983 doError('No subdirectory name supplied', True)
974 sysvals.testdir = sysvals.setOutputFolder(val) 984 sysvals.testdir = sysvals.setOutputFolder(val)
975 elif(arg == '-result'): 985 elif(arg == '-result'):
976 try: 986 try:
977 val = args.next() 987 val = next(args)
978 except: 988 except:
979 doError('No result file supplied', True) 989 doError('No result file supplied', True)
980 sysvals.result = val 990 sysvals.result = val
@@ -986,6 +996,17 @@ if __name__ == '__main__':
986 # remaining options are only for cron job use 996 # remaining options are only for cron job use
987 elif(arg == '-cronjob'): 997 elif(arg == '-cronjob'):
988 sysvals.iscronjob = True 998 sysvals.iscronjob = True
999 elif(arg == '-which'):
1000 try:
1001 val = next(args)
1002 except:
1003 doError('No executable supplied', True)
1004 out = sysvals.getExec(val)
1005 if not out:
1006 print('%s not found' % val)
1007 sys.exit(1)
1008 print(out)
1009 sys.exit(0)
989 else: 1010 else:
990 doError('Invalid argument: '+arg, True) 1011 doError('Invalid argument: '+arg, True)
991 1012
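
The new -which option simply resolves an executable through sysvals.getExec() and prints the result (or exits non-zero if nothing is found); a hypothetical invocation, with illustrative output:

    ./bootgraph.py -which dmesg
    /bin/dmesg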
diff --git a/tools/power/pm-graph/sleepgraph.8 b/tools/power/pm-graph/sleepgraph.8
index 9648be644d5f..43aee64316df 100644
--- a/tools/power/pm-graph/sleepgraph.8
+++ b/tools/power/pm-graph/sleepgraph.8
@@ -53,10 +53,10 @@ disable rtcwake and require a user keypress to resume.
53Add the dmesg and ftrace logs to the html output. They will be viewable by 53Add the dmesg and ftrace logs to the html output. They will be viewable by
54clicking buttons in the timeline. 54clicking buttons in the timeline.
55.TP 55.TP
56\fB-turbostat\fR 56\fB-noturbostat\fR
57Use turbostat to execute the command in freeze mode (default: disabled). This 57By default, if turbostat is found and the requested mode is freeze, sleepgraph
58will provide turbostat output in the log which will tell you which actual 58will execute the suspend via turbostat and collect data in the timeline log.
59power modes were entered. 59This option disables the use of turbostat.
60.TP 60.TP
61\fB-result \fIfile\fR 61\fB-result \fIfile\fR
62Export a results table to a text file for parsing. 62Export a results table to a text file for parsing.
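
A usage note on the option documented above: with the new default, a plain freeze run goes through turbostat whenever the tool is installed, and -noturbostat restores the direct suspend path. Hypothetical invocations:

    sudo ./sleepgraph.py -m freeze                 # uses turbostat if it is found
    sudo ./sleepgraph.py -m freeze -noturbostat    # suspend without turbostat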
diff --git a/tools/power/pm-graph/sleepgraph.py b/tools/power/pm-graph/sleepgraph.py
index 4f46a7a1feb6..f7d1c1f62f86 100755
--- a/tools/power/pm-graph/sleepgraph.py
+++ b/tools/power/pm-graph/sleepgraph.py
@@ -1,9 +1,18 @@
1#!/usr/bin/python2 1#!/usr/bin/python
2# SPDX-License-Identifier: GPL-2.0-only 2# SPDX-License-Identifier: GPL-2.0-only
3# 3#
4# Tool for analyzing suspend/resume timing 4# Tool for analyzing suspend/resume timing
5# Copyright (c) 2013, Intel Corporation. 5# Copyright (c) 2013, Intel Corporation.
6# 6#
7# This program is free software; you can redistribute it and/or modify it
8# under the terms and conditions of the GNU General Public License,
9# version 2, as published by the Free Software Foundation.
10#
11# This program is distributed in the hope it will be useful, but WITHOUT
12# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
14# more details.
15#
7# Authors: 16# Authors:
8# Todd Brandt <todd.e.brandt@linux.intel.com> 17# Todd Brandt <todd.e.brandt@linux.intel.com>
9# 18#
@@ -48,9 +57,10 @@ import string
48import re 57import re
49import platform 58import platform
50import signal 59import signal
60import codecs
51from datetime import datetime 61from datetime import datetime
52import struct 62import struct
53import ConfigParser 63import configparser
54import gzip 64import gzip
55from threading import Thread 65from threading import Thread
56from subprocess import call, Popen, PIPE 66from subprocess import call, Popen, PIPE
@@ -60,6 +70,9 @@ def pprint(msg):
60 print(msg) 70 print(msg)
61 sys.stdout.flush() 71 sys.stdout.flush()
62 72
73def ascii(text):
74 return text.decode('ascii', 'ignore')
75
63# ----------------- CLASSES -------------------- 76# ----------------- CLASSES --------------------
64 77
65# Class: SystemValues 78# Class: SystemValues
@@ -68,7 +81,7 @@ def pprint(msg):
68# store system values and test parameters 81# store system values and test parameters
69class SystemValues: 82class SystemValues:
70 title = 'SleepGraph' 83 title = 'SleepGraph'
71 version = '5.4' 84 version = '5.5'
72 ansi = False 85 ansi = False
73 rs = 0 86 rs = 0
74 display = '' 87 display = ''
@@ -78,7 +91,7 @@ class SystemValues:
78 testlog = True 91 testlog = True
79 dmesglog = True 92 dmesglog = True
80 ftracelog = False 93 ftracelog = False
81 tstat = False 94 tstat = True
82 mindevlen = 0.0 95 mindevlen = 0.0
83 mincglen = 0.0 96 mincglen = 0.0
84 cgphase = '' 97 cgphase = ''
@@ -147,6 +160,7 @@ class SystemValues:
147 devdump = False 160 devdump = False
148 mixedphaseheight = True 161 mixedphaseheight = True
149 devprops = dict() 162 devprops = dict()
163 platinfo = []
150 predelay = 0 164 predelay = 0
151 postdelay = 0 165 postdelay = 0
152 pmdebug = '' 166 pmdebug = ''
@@ -323,13 +337,20 @@ class SystemValues:
323 sys.exit(1) 337 sys.exit(1)
324 return False 338 return False
325 def getExec(self, cmd): 339 def getExec(self, cmd):
326 dirlist = ['/sbin', '/bin', '/usr/sbin', '/usr/bin', 340 try:
327 '/usr/local/sbin', '/usr/local/bin'] 341 fp = Popen(['which', cmd], stdout=PIPE, stderr=PIPE).stdout
328 for path in dirlist: 342 out = ascii(fp.read()).strip()
343 fp.close()
344 except:
345 out = ''
346 if out:
347 return out
348 for path in ['/sbin', '/bin', '/usr/sbin', '/usr/bin',
349 '/usr/local/sbin', '/usr/local/bin']:
329 cmdfull = os.path.join(path, cmd) 350 cmdfull = os.path.join(path, cmd)
330 if os.path.exists(cmdfull): 351 if os.path.exists(cmdfull):
331 return cmdfull 352 return cmdfull
332 return '' 353 return out
333 def setPrecision(self, num): 354 def setPrecision(self, num):
334 if num < 0 or num > 6: 355 if num < 0 or num > 6:
335 return 356 return
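getExec() now asks `which` for the command first and only falls back to scanning the fixed directory list. A self-contained sketch of the same lookup order (find_exec is a stand-in name, not part of sleepgraph):

import os
from subprocess import Popen, PIPE

def find_exec(cmd):
	# try `which` first; its stdout is bytes under Python 3
	try:
		fp = Popen(['which', cmd], stdout=PIPE, stderr=PIPE).stdout
		out = fp.read().decode('ascii', 'ignore').strip()
		fp.close()
	except Exception:
		out = ''
	if out:
		return out
	# otherwise scan the well-known binary directories
	for path in ['/sbin', '/bin', '/usr/sbin', '/usr/bin',
		'/usr/local/sbin', '/usr/local/bin']:
		cmdfull = os.path.join(path, cmd)
		if os.path.exists(cmdfull):
			return cmdfull
	return out

print(find_exec('turbostat') or 'turbostat not found')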
@@ -455,7 +476,7 @@ class SystemValues:
455 fp = Popen('dmesg', stdout=PIPE).stdout 476 fp = Popen('dmesg', stdout=PIPE).stdout
456 ktime = '0' 477 ktime = '0'
457 for line in fp: 478 for line in fp:
458 line = line.replace('\r\n', '') 479 line = ascii(line).replace('\r\n', '')
459 idx = line.find('[') 480 idx = line.find('[')
460 if idx > 1: 481 if idx > 1:
461 line = line[idx:] 482 line = line[idx:]
@@ -469,7 +490,7 @@ class SystemValues:
469 # store all new dmesg lines since initdmesg was called 490 # store all new dmesg lines since initdmesg was called
470 fp = Popen('dmesg', stdout=PIPE).stdout 491 fp = Popen('dmesg', stdout=PIPE).stdout
471 for line in fp: 492 for line in fp:
472 line = line.replace('\r\n', '') 493 line = ascii(line).replace('\r\n', '')
473 idx = line.find('[') 494 idx = line.find('[')
474 if idx > 1: 495 if idx > 1:
475 line = line[idx:] 496 line = line[idx:]
@@ -501,7 +522,7 @@ class SystemValues:
501 call('cat '+self.tpath+'available_filter_functions', shell=True) 522 call('cat '+self.tpath+'available_filter_functions', shell=True)
502 return 523 return
503 master = self.listFromFile(self.tpath+'available_filter_functions') 524 master = self.listFromFile(self.tpath+'available_filter_functions')
504 for i in self.tracefuncs: 525 for i in sorted(self.tracefuncs):
505 if 'func' in self.tracefuncs[i]: 526 if 'func' in self.tracefuncs[i]:
506 i = self.tracefuncs[i]['func'] 527 i = self.tracefuncs[i]['func']
507 if i in master: 528 if i in master:
@@ -628,7 +649,7 @@ class SystemValues:
628 self.fsetVal(kprobeevents, 'kprobe_events') 649 self.fsetVal(kprobeevents, 'kprobe_events')
629 if output: 650 if output:
630 check = self.fgetVal('kprobe_events') 651 check = self.fgetVal('kprobe_events')
631 linesack = (len(check.split('\n')) - 1) / 2 652 linesack = (len(check.split('\n')) - 1) // 2
632 pprint(' kprobe functions enabled: %d/%d' % (linesack, linesout)) 653 pprint(' kprobe functions enabled: %d/%d' % (linesack, linesout))
633 self.fsetVal('1', 'events/kprobes/enable') 654 self.fsetVal('1', 'events/kprobes/enable')
634 def testKprobe(self, kname, kprobe): 655 def testKprobe(self, kname, kprobe):
@@ -646,19 +667,19 @@ class SystemValues:
646 if linesack < linesout: 667 if linesack < linesout:
647 return False 668 return False
648 return True 669 return True
649 def setVal(self, val, file, mode='w'): 670 def setVal(self, val, file):
650 if not os.path.exists(file): 671 if not os.path.exists(file):
651 return False 672 return False
652 try: 673 try:
653 fp = open(file, mode, 0) 674 fp = open(file, 'wb', 0)
654 fp.write(val) 675 fp.write(val.encode())
655 fp.flush() 676 fp.flush()
656 fp.close() 677 fp.close()
657 except: 678 except:
658 return False 679 return False
659 return True 680 return True
660 def fsetVal(self, val, path, mode='w'): 681 def fsetVal(self, val, path):
661 return self.setVal(val, self.tpath+path, mode) 682 return self.setVal(val, self.tpath+path)
662 def getVal(self, file): 683 def getVal(self, file):
663 res = '' 684 res = ''
664 if not os.path.exists(file): 685 if not os.path.exists(file):
@@ -719,7 +740,7 @@ class SystemValues:
719 tgtsize = min(self.memfree, bmax) 740 tgtsize = min(self.memfree, bmax)
720 else: 741 else:
721 tgtsize = 65536 742 tgtsize = 65536
722 while not self.fsetVal('%d' % (tgtsize / cpus), 'buffer_size_kb'): 743 while not self.fsetVal('%d' % (tgtsize // cpus), 'buffer_size_kb'):
723 # if the size failed to set, lower it and keep trying 744 # if the size failed to set, lower it and keep trying
724 tgtsize -= 65536 745 tgtsize -= 65536
725 if tgtsize < 65536: 746 if tgtsize < 65536:
@@ -863,14 +884,23 @@ class SystemValues:
863 isgz = self.gzip 884 isgz = self.gzip
864 if mode == 'r': 885 if mode == 'r':
865 try: 886 try:
866 with gzip.open(filename, mode+'b') as fp: 887 with gzip.open(filename, mode+'t') as fp:
867 test = fp.read(64) 888 test = fp.read(64)
868 isgz = True 889 isgz = True
869 except: 890 except:
870 isgz = False 891 isgz = False
871 if isgz: 892 if isgz:
872 return gzip.open(filename, mode+'b') 893 return gzip.open(filename, mode+'t')
873 return open(filename, mode) 894 return open(filename, mode)
895 def b64unzip(self, data):
896 try:
897 out = codecs.decode(base64.b64decode(data), 'zlib').decode()
898 except:
899 out = data
900 return out
901 def b64zip(self, data):
902 out = base64.b64encode(codecs.encode(data.encode(), 'zlib')).decode()
903 return out
874 def mcelog(self, clear=False): 904 def mcelog(self, clear=False):
875 cmd = self.getExec('mcelog') 905 cmd = self.getExec('mcelog')
876 if not cmd: 906 if not cmd:
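The new b64zip()/b64unzip() pair replaces the Python 2 str.encode('zlib') idiom with the codecs module, so command output can be stored as a single base64 line in the ftrace log; openlog() likewise switches gzip to text mode ('t') since the parser works on str. A round-trip sketch of the two helpers with a made-up payload:

import base64, codecs

def b64zip(data):
	# str -> zlib-compressed bytes -> base64 str
	return base64.b64encode(codecs.encode(data.encode(), 'zlib')).decode()

def b64unzip(data):
	# inverse; fall back to the raw string if it is not valid zipped data
	try:
		return codecs.decode(base64.b64decode(data), 'zlib').decode()
	except Exception:
		return data

text = 'CPU0 CPU1 CPU2\n12 7 3\n'      # illustrative payload
assert b64unzip(b64zip(text)) == text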
@@ -878,12 +908,124 @@ class SystemValues:
878 if clear: 908 if clear:
879 call(cmd+' > /dev/null 2>&1', shell=True) 909 call(cmd+' > /dev/null 2>&1', shell=True)
880 return '' 910 return ''
881 fp = Popen([cmd], stdout=PIPE, stderr=PIPE).stdout 911 try:
882 out = fp.read().strip() 912 fp = Popen([cmd], stdout=PIPE, stderr=PIPE).stdout
883 fp.close() 913 out = ascii(fp.read()).strip()
914 fp.close()
915 except:
916 return ''
884 if not out: 917 if not out:
885 return '' 918 return ''
886 return base64.b64encode(out.encode('zlib')) 919 return self.b64zip(out)
920 def platforminfo(self):
921 # add platform info on to a completed ftrace file
922 if not os.path.exists(self.ftracefile):
923 return False
924 footer = '#\n'
925
926 # add test command string line if need be
927 if self.suspendmode == 'command' and self.testcommand:
928 footer += '# platform-testcmd: %s\n' % (self.testcommand)
929
930 # get a list of target devices from the ftrace file
931 props = dict()
932 tp = TestProps()
933 tf = self.openlog(self.ftracefile, 'r')
934 for line in tf:
935 # determine the trace data type (required for further parsing)
936 m = re.match(tp.tracertypefmt, line)
937 if(m):
938 tp.setTracerType(m.group('t'))
939 continue
940 # parse only valid lines, if this is not one move on
941 m = re.match(tp.ftrace_line_fmt, line)
942 if(not m or 'device_pm_callback_start' not in line):
943 continue
944 m = re.match('.*: (?P<drv>.*) (?P<d>.*), parent: *(?P<p>.*), .*', m.group('msg'));
945 if(not m):
946 continue
947 dev = m.group('d')
948 if dev not in props:
949 props[dev] = DevProps()
950 tf.close()
951
952 # now get the syspath for each target device
953 for dirname, dirnames, filenames in os.walk('/sys/devices'):
954 if(re.match('.*/power', dirname) and 'async' in filenames):
955 dev = dirname.split('/')[-2]
956 if dev in props and (not props[dev].syspath or len(dirname) < len(props[dev].syspath)):
957 props[dev].syspath = dirname[:-6]
958
959 # now fill in the properties for our target devices
960 for dev in sorted(props):
961 dirname = props[dev].syspath
962 if not dirname or not os.path.exists(dirname):
963 continue
964 with open(dirname+'/power/async') as fp:
965 text = fp.read()
966 props[dev].isasync = False
967 if 'enabled' in text:
968 props[dev].isasync = True
969 fields = os.listdir(dirname)
970 if 'product' in fields:
971 with open(dirname+'/product', 'rb') as fp:
972 props[dev].altname = ascii(fp.read())
973 elif 'name' in fields:
974 with open(dirname+'/name', 'rb') as fp:
975 props[dev].altname = ascii(fp.read())
976 elif 'model' in fields:
977 with open(dirname+'/model', 'rb') as fp:
978 props[dev].altname = ascii(fp.read())
979 elif 'description' in fields:
980 with open(dirname+'/description', 'rb') as fp:
981 props[dev].altname = ascii(fp.read())
982 elif 'id' in fields:
983 with open(dirname+'/id', 'rb') as fp:
984 props[dev].altname = ascii(fp.read())
985 elif 'idVendor' in fields and 'idProduct' in fields:
986 idv, idp = '', ''
987 with open(dirname+'/idVendor', 'rb') as fp:
988 idv = ascii(fp.read()).strip()
989 with open(dirname+'/idProduct', 'rb') as fp:
990 idp = ascii(fp.read()).strip()
991 props[dev].altname = '%s:%s' % (idv, idp)
992 if props[dev].altname:
993 out = props[dev].altname.strip().replace('\n', ' ')\
994 .replace(',', ' ').replace(';', ' ')
995 props[dev].altname = out
996
997 # add a devinfo line to the bottom of ftrace
998 out = ''
999 for dev in sorted(props):
1000 out += props[dev].out(dev)
1001 footer += '# platform-devinfo: %s\n' % self.b64zip(out)
1002
1003 # add a line for each of these commands with their outputs
1004 cmds = [
1005 ['pcidevices', 'lspci', '-tv'],
1006 ['interrupts', 'cat', '/proc/interrupts'],
1007 ['gpecounts', 'sh', '-c', 'grep -v invalid /sys/firmware/acpi/interrupts/gpe*'],
1008 ]
1009 for cargs in cmds:
1010 name = cargs[0]
1011 cmdline = ' '.join(cargs[1:])
1012 cmdpath = self.getExec(cargs[1])
1013 if not cmdpath:
1014 continue
1015 cmd = [cmdpath] + cargs[2:]
1016 try:
1017 fp = Popen(cmd, stdout=PIPE, stderr=PIPE).stdout
1018 info = ascii(fp.read()).strip()
1019 fp.close()
1020 except:
1021 continue
1022 if not info:
1023 continue
1024 footer += '# platform-%s: %s | %s\n' % (name, cmdline, self.b64zip(info))
1025
1026 with self.openlog(self.ftracefile, 'a') as fp:
1027 fp.write(footer)
1028 return True
887 def haveTurbostat(self): 1029 def haveTurbostat(self):
888 if not self.tstat: 1030 if not self.tstat:
889 return False 1031 return False
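Each footer line that platforminfo() appends has the form '# platform-<name>: <cmdline> | <compressed output>', with the output run through b64zip(); parsePlatformInfo() later matches it against pinfofmt and reverses the encoding. A self-contained sketch of that round trip (the command output here is made up):

import re, base64, codecs

zipit = lambda s: base64.b64encode(codecs.encode(s.encode(), 'zlib')).decode()
unzipit = lambda s: codecs.decode(base64.b64decode(s), 'zlib').decode()
pinfofmt = '# platform-(?P<val>[a-z,A-Z,0-9]*): (?P<info>.*)'

# build a footer line the way platforminfo() does
name, cmdline, output = 'interrupts', 'cat /proc/interrupts', 'CPU0\n  0:   43   IR-IO-APIC   timer\n'
line = '# platform-%s: %s | %s\n' % (name, cmdline, zipit(output))

# recover it the way parsePlatformInfo() does
m = re.match(pinfofmt, line)
cmd, payload = m.group('info').split('|')
print(m.group('val'), '-', cmd.strip())
print(unzipit(payload.strip()))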
@@ -891,31 +1033,40 @@ class SystemValues:
891 if not cmd: 1033 if not cmd:
892 return False 1034 return False
893 fp = Popen([cmd, '-v'], stdout=PIPE, stderr=PIPE).stderr 1035 fp = Popen([cmd, '-v'], stdout=PIPE, stderr=PIPE).stderr
894 out = fp.read().strip() 1036 out = ascii(fp.read()).strip()
895 fp.close() 1037 fp.close()
896 return re.match('turbostat version [0-9\.]* .*', out) 1038 if re.match('turbostat version [0-9\.]* .*', out):
1039 sysvals.vprint(out)
1040 return True
1041 return False
897 def turbostat(self): 1042 def turbostat(self):
898 cmd = self.getExec('turbostat') 1043 cmd = self.getExec('turbostat')
899 if not cmd: 1044 rawout = keyline = valline = ''
900 return 'missing turbostat executable'
901 text = []
902 fullcmd = '%s -q -S echo freeze > %s' % (cmd, self.powerfile) 1045 fullcmd = '%s -q -S echo freeze > %s' % (cmd, self.powerfile)
903 fp = Popen(['sh', '-c', fullcmd], stdout=PIPE, stderr=PIPE).stderr 1046 fp = Popen(['sh', '-c', fullcmd], stdout=PIPE, stderr=PIPE).stderr
904 for line in fp: 1047 for line in fp:
905 if re.match('[0-9.]* sec', line): 1048 line = ascii(line)
1049 rawout += line
1050 if keyline and valline:
906 continue 1051 continue
907 text.append(line.split()) 1052 if re.match('(?i)Avg_MHz.*', line):
1053 keyline = line.strip().split()
1054 elif keyline:
1055 valline = line.strip().split()
908 fp.close() 1056 fp.close()
909 if len(text) < 2: 1057 if not keyline or not valline or len(keyline) != len(valline):
910 return 'turbostat output format error' 1058 errmsg = 'unrecognized turbostat output:\n'+rawout.strip()
1059 sysvals.vprint(errmsg)
1060 if not sysvals.verbose:
1061 pprint(errmsg)
1062 return ''
1063 if sysvals.verbose:
1064 pprint(rawout.strip())
911 out = [] 1065 out = []
912 for key in text[0]: 1066 for key in keyline:
913 values = [] 1067 idx = keyline.index(key)
914 idx = text[0].index(key) 1068 val = valline[idx]
915 for line in text[1:]: 1069 out.append('%s=%s' % (key, val))
916 if len(line) > idx:
917 values.append(line[idx])
918 out.append('%s=%s' % (key, ','.join(values)))
919 return '|'.join(out) 1070 return '|'.join(out)
920 def checkWifi(self): 1071 def checkWifi(self):
921 out = dict() 1072 out = dict()
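turbostat() no longer collects every output row; it pairs the header line beginning with Avg_MHz with the single value line that follows and emits key=value pairs joined by '|'. A sketch of that pairing on illustrative output (the column names and numbers are invented, not a real turbostat capture):

import re

rawout = 'Avg_MHz Busy% Bzy_MHz TSC_MHz CPU%c1 PkgWatt\n36 1.12 3216 2904 12.40 0.58\n'
keyline = valline = None
for line in rawout.splitlines():
	if keyline and valline:
		break
	if re.match('(?i)Avg_MHz.*', line):
		keyline = line.strip().split()
	elif keyline:
		valline = line.strip().split()
out = []
if keyline and valline and len(keyline) == len(valline):
	for key in keyline:
		out.append('%s=%s' % (key, valline[keyline.index(key)]))
print('|'.join(out))
# Avg_MHz=36|Busy%=1.12|Bzy_MHz=3216|TSC_MHz=2904|CPU%c1=12.40|PkgWatt=0.58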
@@ -924,7 +1075,7 @@ class SystemValues:
924 return out 1075 return out
925 fp = Popen(iwcmd, stdout=PIPE, stderr=PIPE).stdout 1076 fp = Popen(iwcmd, stdout=PIPE, stderr=PIPE).stdout
926 for line in fp: 1077 for line in fp:
927 m = re.match('(?P<dev>\S*) .* ESSID:(?P<ess>\S*)', line) 1078 m = re.match('(?P<dev>\S*) .* ESSID:(?P<ess>\S*)', ascii(line))
928 if not m: 1079 if not m:
929 continue 1080 continue
930 out['device'] = m.group('dev') 1081 out['device'] = m.group('dev')
@@ -935,7 +1086,7 @@ class SystemValues:
935 if 'device' in out: 1086 if 'device' in out:
936 fp = Popen([ifcmd, out['device']], stdout=PIPE, stderr=PIPE).stdout 1087 fp = Popen([ifcmd, out['device']], stdout=PIPE, stderr=PIPE).stdout
937 for line in fp: 1088 for line in fp:
938 m = re.match('.* inet (?P<ip>[0-9\.]*)', line) 1089 m = re.match('.* inet (?P<ip>[0-9\.]*)', ascii(line))
939 if m: 1090 if m:
940 out['ip'] = m.group('ip') 1091 out['ip'] = m.group('ip')
941 break 1092 break
@@ -990,13 +1141,13 @@ class DevProps:
990 def __init__(self): 1141 def __init__(self):
991 self.syspath = '' 1142 self.syspath = ''
992 self.altname = '' 1143 self.altname = ''
993 self.async = True 1144 self.isasync = True
994 self.xtraclass = '' 1145 self.xtraclass = ''
995 self.xtrainfo = '' 1146 self.xtrainfo = ''
996 def out(self, dev): 1147 def out(self, dev):
997 return '%s,%s,%d;' % (dev, self.altname, self.async) 1148 return '%s,%s,%d;' % (dev, self.altname, self.isasync)
998 def debug(self, dev): 1149 def debug(self, dev):
999 pprint('%s:\n\taltname = %s\n\t async = %s' % (dev, self.altname, self.async)) 1150 pprint('%s:\n\taltname = %s\n\t async = %s' % (dev, self.altname, self.isasync))
1000 def altName(self, dev): 1151 def altName(self, dev):
1001 if not self.altname or self.altname == dev: 1152 if not self.altname or self.altname == dev:
1002 return dev 1153 return dev
@@ -1004,13 +1155,13 @@ class DevProps:
1004 def xtraClass(self): 1155 def xtraClass(self):
1005 if self.xtraclass: 1156 if self.xtraclass:
1006 return ' '+self.xtraclass 1157 return ' '+self.xtraclass
1007 if not self.async: 1158 if not self.isasync:
1008 return ' sync' 1159 return ' sync'
1009 return '' 1160 return ''
1010 def xtraInfo(self): 1161 def xtraInfo(self):
1011 if self.xtraclass: 1162 if self.xtraclass:
1012 return ' '+self.xtraclass 1163 return ' '+self.xtraclass
1013 if self.async: 1164 if self.isasync:
1014 return ' async_device' 1165 return ' async_device'
1015 return ' sync_device' 1166 return ' sync_device'
1016 1167
@@ -1108,7 +1259,7 @@ class Data:
1108 return sorted(self.dmesg, key=lambda k:self.dmesg[k]['order']) 1259 return sorted(self.dmesg, key=lambda k:self.dmesg[k]['order'])
1109 def initDevicegroups(self): 1260 def initDevicegroups(self):
1110 # called when phases are all finished being added 1261 # called when phases are all finished being added
1111 for phase in self.dmesg.keys(): 1262 for phase in sorted(self.dmesg.keys()):
1112 if '*' in phase: 1263 if '*' in phase:
1113 p = phase.split('*') 1264 p = phase.split('*')
1114 pnew = '%s%d' % (p[0], len(p)) 1265 pnew = '%s%d' % (p[0], len(p))
@@ -1430,16 +1581,7 @@ class Data:
1430 return phase 1581 return phase
1431 def sortedDevices(self, phase): 1582 def sortedDevices(self, phase):
1432 list = self.dmesg[phase]['list'] 1583 list = self.dmesg[phase]['list']
1433 slist = [] 1584 return sorted(list, key=lambda k:list[k]['start'])
1434 tmp = dict()
1435 for devname in list:
1436 dev = list[devname]
1437 if dev['length'] == 0:
1438 continue
1439 tmp[dev['start']] = devname
1440 for t in sorted(tmp):
1441 slist.append(tmp[t])
1442 return slist
1443 def fixupInitcalls(self, phase): 1585 def fixupInitcalls(self, phase):
1444 # if any calls never returned, clip them at system resume end 1586 # if any calls never returned, clip them at system resume end
1445 phaselist = self.dmesg[phase]['list'] 1587 phaselist = self.dmesg[phase]['list']
@@ -1576,7 +1718,7 @@ class Data:
1576 maxname = '%d' % self.maxDeviceNameSize(phase) 1718 maxname = '%d' % self.maxDeviceNameSize(phase)
1577 fmt = '%3d) %'+maxname+'s - %f - %f' 1719 fmt = '%3d) %'+maxname+'s - %f - %f'
1578 c = 1 1720 c = 1
1579 for name in devlist: 1721 for name in sorted(devlist):
1580 s = devlist[name]['start'] 1722 s = devlist[name]['start']
1581 e = devlist[name]['end'] 1723 e = devlist[name]['end']
1582 sysvals.vprint(fmt % (c, name, s, e)) 1724 sysvals.vprint(fmt % (c, name, s, e))
@@ -1588,7 +1730,7 @@ class Data:
1588 devlist = [] 1730 devlist = []
1589 for phase in self.sortedPhases(): 1731 for phase in self.sortedPhases():
1590 list = self.deviceChildren(devname, phase) 1732 list = self.deviceChildren(devname, phase)
1591 for dev in list: 1733 for dev in sorted(list):
1592 if dev not in devlist: 1734 if dev not in devlist:
1593 devlist.append(dev) 1735 devlist.append(dev)
1594 return devlist 1736 return devlist
@@ -1628,16 +1770,16 @@ class Data:
1628 def rootDeviceList(self): 1770 def rootDeviceList(self):
1629 # list of devices graphed 1771 # list of devices graphed
1630 real = [] 1772 real = []
1631 for phase in self.dmesg: 1773 for phase in self.sortedPhases():
1632 list = self.dmesg[phase]['list'] 1774 list = self.dmesg[phase]['list']
1633 for dev in list: 1775 for dev in sorted(list):
1634 if list[dev]['pid'] >= 0 and dev not in real: 1776 if list[dev]['pid'] >= 0 and dev not in real:
1635 real.append(dev) 1777 real.append(dev)
1636 # list of top-most root devices 1778 # list of top-most root devices
1637 rootlist = [] 1779 rootlist = []
1638 for phase in self.dmesg: 1780 for phase in self.sortedPhases():
1639 list = self.dmesg[phase]['list'] 1781 list = self.dmesg[phase]['list']
1640 for dev in list: 1782 for dev in sorted(list):
1641 pdev = list[dev]['par'] 1783 pdev = list[dev]['par']
1642 pid = list[dev]['pid'] 1784 pid = list[dev]['pid']
1643 if(pid < 0 or re.match('[0-9]*-[0-9]*\.[0-9]*[\.0-9]*\:[\.0-9]*$', pdev)): 1785 if(pid < 0 or re.match('[0-9]*-[0-9]*\.[0-9]*[\.0-9]*\:[\.0-9]*$', pdev)):
@@ -1718,9 +1860,9 @@ class Data:
1718 def createProcessUsageEvents(self): 1860 def createProcessUsageEvents(self):
1719 # get an array of process names 1861 # get an array of process names
1720 proclist = [] 1862 proclist = []
1721 for t in self.pstl: 1863 for t in sorted(self.pstl):
1722 pslist = self.pstl[t] 1864 pslist = self.pstl[t]
1723 for ps in pslist: 1865 for ps in sorted(pslist):
1724 if ps not in proclist: 1866 if ps not in proclist:
1725 proclist.append(ps) 1867 proclist.append(ps)
1726 # get a list of data points for suspend and resume 1868 # get a list of data points for suspend and resume
@@ -1765,7 +1907,7 @@ class Data:
1765 def debugPrint(self): 1907 def debugPrint(self):
1766 for p in self.sortedPhases(): 1908 for p in self.sortedPhases():
1767 list = self.dmesg[p]['list'] 1909 list = self.dmesg[p]['list']
1768 for devname in list: 1910 for devname in sorted(list):
1769 dev = list[devname] 1911 dev = list[devname]
1770 if 'ftrace' in dev: 1912 if 'ftrace' in dev:
1771 dev['ftrace'].debugPrint(' [%s]' % devname) 1913 dev['ftrace'].debugPrint(' [%s]' % devname)
@@ -2466,7 +2608,7 @@ class Timeline:
2466 # if there is 1 line per row, draw them the standard way 2608 # if there is 1 line per row, draw them the standard way
2467 for t, p in standardphases: 2609 for t, p in standardphases:
2468 for i in sorted(self.rowheight[t][p]): 2610 for i in sorted(self.rowheight[t][p]):
2469 self.rowheight[t][p][i] = self.bodyH/len(self.rowlines[t][p]) 2611 self.rowheight[t][p][i] = float(self.bodyH)/len(self.rowlines[t][p])
2470 def createZoomBox(self, mode='command', testcount=1): 2612 def createZoomBox(self, mode='command', testcount=1):
2471 # Create bounding box, add buttons 2613 # Create bounding box, add buttons
2472 html_zoombox = '<center><button id="zoomin">ZOOM IN +</button><button id="zoomout">ZOOM OUT -</button><button id="zoomdef">ZOOM 1:1</button></center>\n' 2614 html_zoombox = '<center><button id="zoomin">ZOOM IN +</button><button id="zoomout">ZOOM OUT -</button><button id="zoomdef">ZOOM 1:1</button></center>\n'
@@ -2537,6 +2679,7 @@ class TestProps:
2537 cmdlinefmt = '^# command \| (?P<cmd>.*)' 2679 cmdlinefmt = '^# command \| (?P<cmd>.*)'
2538 kparamsfmt = '^# kparams \| (?P<kp>.*)' 2680 kparamsfmt = '^# kparams \| (?P<kp>.*)'
2539 devpropfmt = '# Device Properties: .*' 2681 devpropfmt = '# Device Properties: .*'
2682 pinfofmt = '# platform-(?P<val>[a-z,A-Z,0-9]*): (?P<info>.*)'
2540 tracertypefmt = '# tracer: (?P<t>.*)' 2683 tracertypefmt = '# tracer: (?P<t>.*)'
2541 firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$' 2684 firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$'
2542 procexecfmt = 'ps - (?P<ps>.*)$' 2685 procexecfmt = 'ps - (?P<ps>.*)$'
@@ -2571,12 +2714,6 @@ class TestProps:
2571 self.ftrace_line_fmt = self.ftrace_line_fmt_nop 2714 self.ftrace_line_fmt = self.ftrace_line_fmt_nop
2572 else: 2715 else:
2573 doError('Invalid tracer format: [%s]' % tracer) 2716 doError('Invalid tracer format: [%s]' % tracer)
2574 def decode(self, data):
2575 try:
2576 out = base64.b64decode(data).decode('zlib')
2577 except:
2578 out = data
2579 return out
2580 def stampInfo(self, line): 2717 def stampInfo(self, line):
2581 if re.match(self.stampfmt, line): 2718 if re.match(self.stampfmt, line):
2582 self.stamp = line 2719 self.stamp = line
@@ -2660,7 +2797,7 @@ class TestProps:
2660 if len(self.mcelog) > data.testnumber: 2797 if len(self.mcelog) > data.testnumber:
2661 m = re.match(self.mcelogfmt, self.mcelog[data.testnumber]) 2798 m = re.match(self.mcelogfmt, self.mcelog[data.testnumber])
2662 if m: 2799 if m:
2663 data.mcelog = self.decode(m.group('m')) 2800 data.mcelog = sv.b64unzip(m.group('m'))
2664 # turbostat data 2801 # turbostat data
2665 if len(self.turbostat) > data.testnumber: 2802 if len(self.turbostat) > data.testnumber:
2666 m = re.match(self.tstatfmt, self.turbostat[data.testnumber]) 2803 m = re.match(self.tstatfmt, self.turbostat[data.testnumber])
@@ -2681,6 +2818,46 @@ class TestProps:
2681 m = re.match(self.testerrfmt, self.testerror[data.testnumber]) 2818 m = re.match(self.testerrfmt, self.testerror[data.testnumber])
2682 if m: 2819 if m:
2683 data.enterfail = m.group('e') 2820 data.enterfail = m.group('e')
2821 def devprops(self, data):
2822 props = dict()
2823 devlist = data.split(';')
2824 for dev in devlist:
2825 f = dev.split(',')
2826 if len(f) < 3:
2827 continue
2828 dev = f[0]
2829 props[dev] = DevProps()
2830 props[dev].altname = f[1]
2831 if int(f[2]):
2832 props[dev].isasync = True
2833 else:
2834 props[dev].isasync = False
2835 return props
2836 def parseDevprops(self, line, sv):
2837 idx = line.index(': ') + 2
2838 if idx >= len(line):
2839 return
2840 props = self.devprops(line[idx:])
2841 if sv.suspendmode == 'command' and 'testcommandstring' in props:
2842 sv.testcommand = props['testcommandstring'].altname
2843 sv.devprops = props
2844 def parsePlatformInfo(self, line, sv):
2845 m = re.match(self.pinfofmt, line)
2846 if not m:
2847 return
2848 name, info = m.group('val'), m.group('info')
2849 if name == 'devinfo':
2850 sv.devprops = self.devprops(sv.b64unzip(info))
2851 return
2852 elif name == 'testcmd':
2853 sv.testcommand = info
2854 return
2855 field = info.split('|')
2856 if len(field) < 2:
2857 return
2858 cmdline = field[0].strip()
2859 output = sv.b64unzip(field[1].strip())
2860 sv.platinfo.append([name, cmdline, output])
2684 2861
2685# Class: TestRun 2862# Class: TestRun
2686# Description: 2863# Description:
@@ -2701,7 +2878,7 @@ class ProcessMonitor:
2701 process = Popen(c, shell=True, stdout=PIPE) 2878 process = Popen(c, shell=True, stdout=PIPE)
2702 running = dict() 2879 running = dict()
2703 for line in process.stdout: 2880 for line in process.stdout:
2704 data = line.split() 2881 data = ascii(line).split()
2705 pid = data[0] 2882 pid = data[0]
2706 name = re.sub('[()]', '', data[1]) 2883 name = re.sub('[()]', '', data[1])
2707 user = int(data[13]) 2884 user = int(data[13])
@@ -2805,7 +2982,11 @@ def appendIncompleteTraceLog(testruns):
2805 continue 2982 continue
2806 # device properties line 2983 # device properties line
2807 if(re.match(tp.devpropfmt, line)): 2984 if(re.match(tp.devpropfmt, line)):
2808 devProps(line) 2985 tp.parseDevprops(line, sysvals)
2986 continue
2987 # platform info line
2988 if(re.match(tp.pinfofmt, line)):
2989 tp.parsePlatformInfo(line, sysvals)
2809 continue 2990 continue
2810 # parse only valid lines, if this is not one move on 2991 # parse only valid lines, if this is not one move on
2811 m = re.match(tp.ftrace_line_fmt, line) 2992 m = re.match(tp.ftrace_line_fmt, line)
@@ -2902,7 +3083,7 @@ def parseTraceLog(live=False):
2902 sysvals.setupAllKprobes() 3083 sysvals.setupAllKprobes()
2903 ksuscalls = ['pm_prepare_console'] 3084 ksuscalls = ['pm_prepare_console']
2904 krescalls = ['pm_restore_console'] 3085 krescalls = ['pm_restore_console']
2905 tracewatch = [] 3086 tracewatch = ['irq_wakeup']
2906 if sysvals.usekprobes: 3087 if sysvals.usekprobes:
2907 tracewatch += ['sync_filesystems', 'freeze_processes', 'syscore_suspend', 3088 tracewatch += ['sync_filesystems', 'freeze_processes', 'syscore_suspend',
2908 'syscore_resume', 'resume_console', 'thaw_processes', 'CPU_ON', 3089 'syscore_resume', 'resume_console', 'thaw_processes', 'CPU_ON',
@@ -2928,7 +3109,11 @@ def parseTraceLog(live=False):
2928 continue 3109 continue
2929 # device properties line 3110 # device properties line
2930 if(re.match(tp.devpropfmt, line)): 3111 if(re.match(tp.devpropfmt, line)):
2931 devProps(line) 3112 tp.parseDevprops(line, sysvals)
3113 continue
3114 # platform info line
3115 if(re.match(tp.pinfofmt, line)):
3116 tp.parsePlatformInfo(line, sysvals)
2932 continue 3117 continue
2933 # ignore all other commented lines 3118 # ignore all other commented lines
2934 if line[0] == '#': 3119 if line[0] == '#':
@@ -3001,16 +3186,11 @@ def parseTraceLog(live=False):
3001 isbegin = False 3186 isbegin = False
3002 else: 3187 else:
3003 continue 3188 continue
3004 m = re.match('(?P<name>.*)\[(?P<val>[0-9]*)\] .*', t.name) 3189 if '[' in t.name:
3005 if(m): 3190 m = re.match('(?P<name>.*)\[.*', t.name)
3006 val = m.group('val')
3007 if val == '0':
3008 name = m.group('name')
3009 else:
3010 name = m.group('name')+'['+val+']'
3011 else: 3191 else:
3012 m = re.match('(?P<name>.*) .*', t.name) 3192 m = re.match('(?P<name>.*) .*', t.name)
3013 name = m.group('name') 3193 name = m.group('name')
3014 # ignore these events 3194 # ignore these events
3015 if(name.split('[')[0] in tracewatch): 3195 if(name.split('[')[0] in tracewatch):
3016 continue 3196 continue
@@ -3045,6 +3225,8 @@ def parseTraceLog(live=False):
3045 elif(re.match('machine_suspend\[.*', t.name)): 3225 elif(re.match('machine_suspend\[.*', t.name)):
3046 if(isbegin): 3226 if(isbegin):
3047 lp = data.lastPhase() 3227 lp = data.lastPhase()
3228 if lp == 'resume_machine':
3229 data.dmesg[lp]['end'] = t.time
3048 phase = data.setPhase('suspend_machine', data.dmesg[lp]['end'], True) 3230 phase = data.setPhase('suspend_machine', data.dmesg[lp]['end'], True)
3049 data.setPhase(phase, t.time, False) 3231 data.setPhase(phase, t.time, False)
3050 if data.tSuspended == 0: 3232 if data.tSuspended == 0:
@@ -3213,11 +3395,11 @@ def parseTraceLog(live=False):
3213 # add the traceevent data to the device hierarchy 3395 # add the traceevent data to the device hierarchy
3214 if(sysvals.usetraceevents): 3396 if(sysvals.usetraceevents):
3215 # add actual trace funcs 3397 # add actual trace funcs
3216 for name in test.ttemp: 3398 for name in sorted(test.ttemp):
3217 for event in test.ttemp[name]: 3399 for event in test.ttemp[name]:
3218 data.newActionGlobal(name, event['begin'], event['end'], event['pid']) 3400 data.newActionGlobal(name, event['begin'], event['end'], event['pid'])
3219 # add the kprobe based virtual tracefuncs as actual devices 3401 # add the kprobe based virtual tracefuncs as actual devices
3220 for key in tp.ktemp: 3402 for key in sorted(tp.ktemp):
3221 name, pid = key 3403 name, pid = key
3222 if name not in sysvals.tracefuncs: 3404 if name not in sysvals.tracefuncs:
3223 continue 3405 continue
@@ -3231,7 +3413,7 @@ def parseTraceLog(live=False):
3231 data.newActionGlobal(e['name'], kb, ke, pid, color) 3413 data.newActionGlobal(e['name'], kb, ke, pid, color)
3232 # add config base kprobes and dev kprobes 3414 # add config base kprobes and dev kprobes
3233 if sysvals.usedevsrc: 3415 if sysvals.usedevsrc:
3234 for key in tp.ktemp: 3416 for key in sorted(tp.ktemp):
3235 name, pid = key 3417 name, pid = key
3236 if name in sysvals.tracefuncs or name not in sysvals.dev_tracefuncs: 3418 if name in sysvals.tracefuncs or name not in sysvals.dev_tracefuncs:
3237 continue 3419 continue
@@ -3244,7 +3426,7 @@ def parseTraceLog(live=False):
3244 if sysvals.usecallgraph: 3426 if sysvals.usecallgraph:
3245 # add the callgraph data to the device hierarchy 3427 # add the callgraph data to the device hierarchy
3246 sortlist = dict() 3428 sortlist = dict()
3247 for key in test.ftemp: 3429 for key in sorted(test.ftemp):
3248 proc, pid = key 3430 proc, pid = key
3249 for cg in test.ftemp[key]: 3431 for cg in test.ftemp[key]:
3250 if len(cg.list) < 1 or cg.invalid or (cg.end - cg.start == 0): 3432 if len(cg.list) < 1 or cg.invalid or (cg.end - cg.start == 0):
@@ -3582,7 +3764,7 @@ def parseKernelLog(data):
3582 # if trace events are not available, these are better than nothing 3764 # if trace events are not available, these are better than nothing
3583 if(not sysvals.usetraceevents): 3765 if(not sysvals.usetraceevents):
3584 # look for known actions 3766 # look for known actions
3585 for a in at: 3767 for a in sorted(at):
3586 if(re.match(at[a]['smsg'], msg)): 3768 if(re.match(at[a]['smsg'], msg)):
3587 if(a not in actions): 3769 if(a not in actions):
3588 actions[a] = [] 3770 actions[a] = []
@@ -3641,7 +3823,7 @@ def parseKernelLog(data):
3641 data.tResumed = data.tSuspended 3823 data.tResumed = data.tSuspended
3642 3824
3643 # fill in any actions we've found 3825 # fill in any actions we've found
3644 for name in actions: 3826 for name in sorted(actions):
3645 for event in actions[name]: 3827 for event in actions[name]:
3646 data.newActionGlobal(name, event['begin'], event['end']) 3828 data.newActionGlobal(name, event['begin'], event['end'])
3647 3829
@@ -3761,7 +3943,7 @@ def createHTMLSummarySimple(testruns, htmlfile, title):
3761 if lastmode and lastmode != mode and num > 0: 3943 if lastmode and lastmode != mode and num > 0:
3762 for i in range(2): 3944 for i in range(2):
3763 s = sorted(tMed[i]) 3945 s = sorted(tMed[i])
3764 list[lastmode]['med'][i] = s[int(len(s)/2)] 3946 list[lastmode]['med'][i] = s[int(len(s)//2)]
3765 iMed[i] = tMed[i][list[lastmode]['med'][i]] 3947 iMed[i] = tMed[i][list[lastmode]['med'][i]]
3766 list[lastmode]['avg'] = [tAvg[0] / num, tAvg[1] / num] 3948 list[lastmode]['avg'] = [tAvg[0] / num, tAvg[1] / num]
3767 list[lastmode]['min'] = tMin 3949 list[lastmode]['min'] = tMin
@@ -3803,7 +3985,7 @@ def createHTMLSummarySimple(testruns, htmlfile, title):
3803 if lastmode and num > 0: 3985 if lastmode and num > 0:
3804 for i in range(2): 3986 for i in range(2):
3805 s = sorted(tMed[i]) 3987 s = sorted(tMed[i])
3806 list[lastmode]['med'][i] = s[int(len(s)/2)] 3988 list[lastmode]['med'][i] = s[int(len(s)//2)]
3807 iMed[i] = tMed[i][list[lastmode]['med'][i]] 3989 iMed[i] = tMed[i][list[lastmode]['med'][i]]
3808 list[lastmode]['avg'] = [tAvg[0] / num, tAvg[1] / num] 3990 list[lastmode]['avg'] = [tAvg[0] / num, tAvg[1] / num]
3809 list[lastmode]['min'] = tMin 3991 list[lastmode]['min'] = tMin
@@ -3845,7 +4027,7 @@ def createHTMLSummarySimple(testruns, htmlfile, title):
3845 '</tr>\n' 4027 '</tr>\n'
3846 headnone = '<tr class="head"><td>{0}</td><td>{1}</td><td colspan='+\ 4028 headnone = '<tr class="head"><td>{0}</td><td>{1}</td><td colspan='+\
3847 colspan+'></td></tr>\n' 4029 colspan+'></td></tr>\n'
3848 for mode in list: 4030 for mode in sorted(list):
3849 # header line for each suspend mode 4031 # header line for each suspend mode
3850 num = 0 4032 num = 0
3851 tAvg, tMin, tMax, tMed = list[mode]['avg'], list[mode]['min'],\ 4033 tAvg, tMin, tMax, tMed = list[mode]['avg'], list[mode]['min'],\
@@ -3944,7 +4126,8 @@ def createHTMLDeviceSummary(testruns, htmlfile, title):
3944 th.format('Average Time') + th.format('Count') +\ 4126 th.format('Average Time') + th.format('Count') +\
3945 th.format('Worst Time') + th.format('Host (worst time)') +\ 4127 th.format('Worst Time') + th.format('Host (worst time)') +\
3946 th.format('Link (worst time)') + '</tr>\n' 4128 th.format('Link (worst time)') + '</tr>\n'
3947 for name in sorted(devlist, key=lambda k:devlist[k]['worst'], reverse=True): 4129 for name in sorted(devlist, key=lambda k:(devlist[k]['worst'], \
4130 devlist[k]['total'], devlist[k]['name']), reverse=True):
3948 data = devall[type][name] 4131 data = devall[type][name]
3949 data['average'] = data['total'] / data['count'] 4132 data['average'] = data['total'] / data['count']
3950 if data['average'] < limit: 4133 if data['average'] < limit:
@@ -4085,7 +4268,7 @@ def createHTML(testruns, testfail):
4085 if(tTotal == 0): 4268 if(tTotal == 0):
4086 doError('No timeline data') 4269 doError('No timeline data')
4087 if(len(data.tLow) > 0): 4270 if(len(data.tLow) > 0):
4088 low_time = '|'.join(data.tLow) 4271 low_time = '+'.join(data.tLow)
4089 if sysvals.suspendmode == 'command': 4272 if sysvals.suspendmode == 'command':
4090 run_time = '%.0f'%((data.end-data.start)*1000) 4273 run_time = '%.0f'%((data.end-data.start)*1000)
4091 if sysvals.testcommand: 4274 if sysvals.testcommand:
@@ -4151,7 +4334,7 @@ def createHTML(testruns, testfail):
4151 for group in data.devicegroups: 4334 for group in data.devicegroups:
4152 devlist = [] 4335 devlist = []
4153 for phase in group: 4336 for phase in group:
4154 for devname in data.tdevlist[phase]: 4337 for devname in sorted(data.tdevlist[phase]):
4155 d = DevItem(data.testnumber, phase, data.dmesg[phase]['list'][devname]) 4338 d = DevItem(data.testnumber, phase, data.dmesg[phase]['list'][devname])
4156 devlist.append(d) 4339 devlist.append(d)
4157 if d.isa('kth'): 4340 if d.isa('kth'):
@@ -4230,7 +4413,7 @@ def createHTML(testruns, testfail):
4230 for b in phases[dir]: 4413 for b in phases[dir]:
4231 # draw the devices for this phase 4414 # draw the devices for this phase
4232 phaselist = data.dmesg[b]['list'] 4415 phaselist = data.dmesg[b]['list']
4233 for d in data.tdevlist[b]: 4416 for d in sorted(data.tdevlist[b]):
4234 name = d 4417 name = d
4235 drv = '' 4418 drv = ''
4236 dev = phaselist[d] 4419 dev = phaselist[d]
@@ -4971,13 +5154,9 @@ def executeSuspend():
4971 if mode == 'freeze' and sysvals.haveTurbostat(): 5154 if mode == 'freeze' and sysvals.haveTurbostat():
4972 # execution will pause here 5155 # execution will pause here
4973 turbo = sysvals.turbostat() 5156 turbo = sysvals.turbostat()
4974 if '|' in turbo: 5157 if turbo:
4975 tdata['turbo'] = turbo 5158 tdata['turbo'] = turbo
4976 else:
4977 tdata['error'] = turbo
4978 else: 5159 else:
4979 if sysvals.haveTurbostat():
4980 sysvals.vprint('WARNING: ignoring turbostat in mode "%s"' % mode)
4981 pf = open(sysvals.powerfile, 'w') 5160 pf = open(sysvals.powerfile, 'w')
4982 pf.write(mode) 5161 pf.write(mode)
4983 # execution will pause here 5162 # execution will pause here
@@ -5024,7 +5203,7 @@ def executeSuspend():
5024 op.write(line) 5203 op.write(line)
5025 op.close() 5204 op.close()
5026 sysvals.fsetVal('', 'trace') 5205 sysvals.fsetVal('', 'trace')
5027 devProps() 5206 sysvals.platforminfo()
5028 return testdata 5207 return testdata
5029 5208
5030def readFile(file): 5209def readFile(file):
@@ -5040,9 +5219,9 @@ def readFile(file):
5040# The time string, e.g. "1901m16s" 5219# The time string, e.g. "1901m16s"
5041def ms2nice(val): 5220def ms2nice(val):
5042 val = int(val) 5221 val = int(val)
5043 h = val / 3600000 5222 h = val // 3600000
5044 m = (val / 60000) % 60 5223 m = (val // 60000) % 60
5045 s = (val / 1000) % 60 5224 s = (val // 1000) % 60
5046 if h > 0: 5225 if h > 0:
5047 return '%d:%02d:%02d' % (h, m, s) 5226 return '%d:%02d:%02d' % (h, m, s)
5048 if m > 0: 5227 if m > 0:
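The ms2nice() divisions change from '/' to '//' because Python 3's '/' on integers always yields a float; '//' keeps the truncated integer arithmetic the Python 2 code relied on. A quick check of the visible h > 0 path with a sample value:

val = 3723000             # 1h 2m 3s in milliseconds (sample value)
h = val // 3600000        # 1   (val / 3600000 would be 1.034...)
m = (val // 60000) % 60   # 2
s = (val // 1000) % 60    # 3
print('%d:%02d:%02d' % (h, m, s))   # 1:02:03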
@@ -5115,127 +5294,6 @@ def deviceInfo(output=''):
5115 print(lines[i]) 5294 print(lines[i])
5116 return res 5295 return res
5117 5296
5118# Function: devProps
5119# Description:
5120# Retrieve a list of properties for all devices in the trace log
5121def devProps(data=0):
5122 props = dict()
5123
5124 if data:
5125 idx = data.index(': ') + 2
5126 if idx >= len(data):
5127 return
5128 devlist = data[idx:].split(';')
5129 for dev in devlist:
5130 f = dev.split(',')
5131 if len(f) < 3:
5132 continue
5133 dev = f[0]
5134 props[dev] = DevProps()
5135 props[dev].altname = f[1]
5136 if int(f[2]):
5137 props[dev].async = True
5138 else:
5139 props[dev].async = False
5140 sysvals.devprops = props
5141 if sysvals.suspendmode == 'command' and 'testcommandstring' in props:
5142 sysvals.testcommand = props['testcommandstring'].altname
5143 return
5144
5145 if(os.path.exists(sysvals.ftracefile) == False):
5146 doError('%s does not exist' % sysvals.ftracefile)
5147
5148 # first get the list of devices we need properties for
5149 msghead = 'Additional data added by AnalyzeSuspend'
5150 alreadystamped = False
5151 tp = TestProps()
5152 tf = sysvals.openlog(sysvals.ftracefile, 'r')
5153 for line in tf:
5154 if msghead in line:
5155 alreadystamped = True
5156 continue
5157 # determine the trace data type (required for further parsing)
5158 m = re.match(tp.tracertypefmt, line)
5159 if(m):
5160 tp.setTracerType(m.group('t'))
5161 continue
5162 # parse only valid lines, if this is not one move on
5163 m = re.match(tp.ftrace_line_fmt, line)
5164 if(not m or 'device_pm_callback_start' not in line):
5165 continue
5166 m = re.match('.*: (?P<drv>.*) (?P<d>.*), parent: *(?P<p>.*), .*', m.group('msg'));
5167 if(not m):
5168 continue
5169 dev = m.group('d')
5170 if dev not in props:
5171 props[dev] = DevProps()
5172 tf.close()
5173
5174 if not alreadystamped and sysvals.suspendmode == 'command':
5175 out = '#\n# '+msghead+'\n# Device Properties: '
5176 out += 'testcommandstring,%s,0;' % (sysvals.testcommand)
5177 with sysvals.openlog(sysvals.ftracefile, 'a') as fp:
5178 fp.write(out+'\n')
5179 sysvals.devprops = props
5180 return
5181
5182 # now get the syspath for each of our target devices
5183 for dirname, dirnames, filenames in os.walk('/sys/devices'):
5184 if(re.match('.*/power', dirname) and 'async' in filenames):
5185 dev = dirname.split('/')[-2]
5186 if dev in props and (not props[dev].syspath or len(dirname) < len(props[dev].syspath)):
5187 props[dev].syspath = dirname[:-6]
5188
5189 # now fill in the properties for our target devices
5190 for dev in props:
5191 dirname = props[dev].syspath
5192 if not dirname or not os.path.exists(dirname):
5193 continue
5194 with open(dirname+'/power/async') as fp:
5195 text = fp.read()
5196 props[dev].async = False
5197 if 'enabled' in text:
5198 props[dev].async = True
5199 fields = os.listdir(dirname)
5200 if 'product' in fields:
5201 with open(dirname+'/product') as fp:
5202 props[dev].altname = fp.read()
5203 elif 'name' in fields:
5204 with open(dirname+'/name') as fp:
5205 props[dev].altname = fp.read()
5206 elif 'model' in fields:
5207 with open(dirname+'/model') as fp:
5208 props[dev].altname = fp.read()
5209 elif 'description' in fields:
5210 with open(dirname+'/description') as fp:
5211 props[dev].altname = fp.read()
5212 elif 'id' in fields:
5213 with open(dirname+'/id') as fp:
5214 props[dev].altname = fp.read()
5215 elif 'idVendor' in fields and 'idProduct' in fields:
5216 idv, idp = '', ''
5217 with open(dirname+'/idVendor') as fp:
5218 idv = fp.read().strip()
5219 with open(dirname+'/idProduct') as fp:
5220 idp = fp.read().strip()
5221 props[dev].altname = '%s:%s' % (idv, idp)
5222
5223 if props[dev].altname:
5224 out = props[dev].altname.strip().replace('\n', ' ')
5225 out = out.replace(',', ' ')
5226 out = out.replace(';', ' ')
5227 props[dev].altname = out
5228
5229 # and now write the data to the ftrace file
5230 if not alreadystamped:
5231 out = '#\n# '+msghead+'\n# Device Properties: '
5232 for dev in sorted(props):
5233 out += props[dev].out(dev)
5234 with sysvals.openlog(sysvals.ftracefile, 'a') as fp:
5235 fp.write(out+'\n')
5236
5237 sysvals.devprops = props
5238
5239# Function: getModes 5297# Function: getModes
5240# Description: 5298# Description:
5241# Determine the supported power modes on this system 5299# Determine the supported power modes on this system
@@ -5339,11 +5397,11 @@ def dmidecode(mempath, fatal=False):
5339 # search for either an SM table or DMI table 5397 # search for either an SM table or DMI table
5340 i = base = length = num = 0 5398 i = base = length = num = 0
5341 while(i < memsize): 5399 while(i < memsize):
5342 if buf[i:i+4] == '_SM_' and i < memsize - 16: 5400 if buf[i:i+4] == b'_SM_' and i < memsize - 16:
5343 length = struct.unpack('H', buf[i+22:i+24])[0] 5401 length = struct.unpack('H', buf[i+22:i+24])[0]
5344 base, num = struct.unpack('IH', buf[i+24:i+30]) 5402 base, num = struct.unpack('IH', buf[i+24:i+30])
5345 break 5403 break
5346 elif buf[i:i+5] == '_DMI_': 5404 elif buf[i:i+5] == b'_DMI_':
5347 length = struct.unpack('H', buf[i+6:i+8])[0] 5405 length = struct.unpack('H', buf[i+6:i+8])[0]
5348 base, num = struct.unpack('IH', buf[i+8:i+14]) 5406 base, num = struct.unpack('IH', buf[i+8:i+14])
5349 break 5407 break
@@ -5376,15 +5434,15 @@ def dmidecode(mempath, fatal=False):
5376 if 0 == struct.unpack('H', buf[n:n+2])[0]: 5434 if 0 == struct.unpack('H', buf[n:n+2])[0]:
5377 break 5435 break
5378 n += 1 5436 n += 1
5379 data = buf[i+size:n+2].split('\0') 5437 data = buf[i+size:n+2].split(b'\0')
5380 for name in info: 5438 for name in info:
5381 itype, idxadr = info[name] 5439 itype, idxadr = info[name]
5382 if itype == type: 5440 if itype == type:
5383 idx = struct.unpack('B', buf[i+idxadr])[0] 5441 idx = struct.unpack('B', buf[i+idxadr:i+idxadr+1])[0]
5384 if idx > 0 and idx < len(data) - 1: 5442 if idx > 0 and idx < len(data) - 1:
5385 s = data[idx-1].strip() 5443 s = data[idx-1].decode('utf-8')
5386 if s and s.lower() != 'to be filled by o.e.m.': 5444 if s.strip() and s.strip().lower() != 'to be filled by o.e.m.':
5387 out[name] = data[idx-1] 5445 out[name] = s
5388 i = n + 2 5446 i = n + 2
5389 count += 1 5447 count += 1
5390 return out 5448 return out
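The dmidecode() changes are all Python 3 bytes semantics: table signatures must be compared as bytes, indexing a bytes object yields an int (hence the one-byte slice for struct.unpack), and string-table entries need decoding before the 'to be filled by o.e.m.' check. A small illustration with a made-up buffer:

import struct

buf = b'_SM_\x00example string\x00to be filled by o.e.m.\x00\x00'

print(buf[0:4] == b'_SM_')               # True: bytes compared with bytes
print(buf[0])                            # 95, an int in Python 3, not b'_'
print(struct.unpack('B', buf[0:1])[0])   # 95, via a one-byte slice

data = buf[5:].split(b'\0')              # string-table entries are bytes
s = data[1].decode('utf-8')
print(s.strip().lower() != 'to be filled by o.e.m.')   # False, so this entry is skipped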
@@ -5409,7 +5467,7 @@ def getBattery():
5409 return (ac, charge) 5467 return (ac, charge)
5410 5468
5411def displayControl(cmd): 5469def displayControl(cmd):
5412 xset, ret = 'xset -d :0.0 {0}', 0 5470 xset, ret = 'timeout 10 xset -d :0.0 {0}', 0
5413 if sysvals.sudouser: 5471 if sysvals.sudouser:
5414 xset = 'sudo -u %s %s' % (sysvals.sudouser, xset) 5472 xset = 'sudo -u %s %s' % (sysvals.sudouser, xset)
5415 if cmd == 'init': 5473 if cmd == 'init':
@@ -5433,7 +5491,7 @@ def displayControl(cmd):
5433 fp = Popen(xset.format('q').split(' '), stdout=PIPE).stdout 5491 fp = Popen(xset.format('q').split(' '), stdout=PIPE).stdout
5434 ret = 'unknown' 5492 ret = 'unknown'
5435 for line in fp: 5493 for line in fp:
5436 m = re.match('[\s]*Monitor is (?P<m>.*)', line) 5494 m = re.match('[\s]*Monitor is (?P<m>.*)', ascii(line))
5437 if(m and len(m.group('m')) >= 2): 5495 if(m and len(m.group('m')) >= 2):
5438 out = m.group('m').lower() 5496 out = m.group('m').lower()
5439 ret = out[3:] if out[0:2] == 'in' else out 5497 ret = out[3:] if out[0:2] == 'in' else out
@@ -5495,10 +5553,11 @@ def getFPDT(output):
5495 ' OEM Revision : %u\n'\ 5553 ' OEM Revision : %u\n'\
5496 ' Creator ID : %s\n'\ 5554 ' Creator ID : %s\n'\
5497 ' Creator Revision : 0x%x\n'\ 5555 ' Creator Revision : 0x%x\n'\
5498 '' % (table[0], table[0], table[1], table[2], table[3], 5556 '' % (ascii(table[0]), ascii(table[0]), table[1], table[2],
5499 table[4], table[5], table[6], table[7], table[8])) 5557 table[3], ascii(table[4]), ascii(table[5]), table[6],
5558 ascii(table[7]), table[8]))
5500 5559
5501 if(table[0] != 'FPDT'): 5560 if(table[0] != b'FPDT'):
5502 if(output): 5561 if(output):
5503 doError('Invalid FPDT table') 5562 doError('Invalid FPDT table')
5504 return False 5563 return False
@@ -5530,8 +5589,8 @@ def getFPDT(output):
5530 return [0, 0] 5589 return [0, 0]
5531 rechead = struct.unpack('4sI', first) 5590 rechead = struct.unpack('4sI', first)
5532 recdata = fp.read(rechead[1]-8) 5591 recdata = fp.read(rechead[1]-8)
5533 if(rechead[0] == 'FBPT'): 5592 if(rechead[0] == b'FBPT'):
5534 record = struct.unpack('HBBIQQQQQ', recdata) 5593 record = struct.unpack('HBBIQQQQQ', recdata[:48])
5535 if(output): 5594 if(output):
5536 pprint('%s (%s)\n'\ 5595 pprint('%s (%s)\n'\
5537 ' Reset END : %u ns\n'\ 5596 ' Reset END : %u ns\n'\
@@ -5539,11 +5598,11 @@ def getFPDT(output):
5539 ' OS Loader StartImage Start : %u ns\n'\ 5598 ' OS Loader StartImage Start : %u ns\n'\
5540 ' ExitBootServices Entry : %u ns\n'\ 5599 ' ExitBootServices Entry : %u ns\n'\
5541 ' ExitBootServices Exit : %u ns'\ 5600 ' ExitBootServices Exit : %u ns'\
5542 '' % (rectype[header[0]], rechead[0], record[4], record[5], 5601 '' % (rectype[header[0]], ascii(rechead[0]), record[4], record[5],
5543 record[6], record[7], record[8])) 5602 record[6], record[7], record[8]))
5544 elif(rechead[0] == 'S3PT'): 5603 elif(rechead[0] == b'S3PT'):
5545 if(output): 5604 if(output):
5546 pprint('%s (%s)' % (rectype[header[0]], rechead[0])) 5605 pprint('%s (%s)' % (rectype[header[0]], ascii(rechead[0])))
5547 j = 0 5606 j = 0
5548 while(j < len(recdata)): 5607 while(j < len(recdata)):
5549 prechead = struct.unpack('HBB', recdata[j:j+4]) 5608 prechead = struct.unpack('HBB', recdata[j:j+4])
@@ -5689,7 +5748,7 @@ def doError(msg, help=False):
5689def getArgInt(name, args, min, max, main=True): 5748def getArgInt(name, args, min, max, main=True):
5690 if main: 5749 if main:
5691 try: 5750 try:
5692 arg = args.next() 5751 arg = next(args)
5693 except: 5752 except:
5694 doError(name+': no argument supplied', True) 5753 doError(name+': no argument supplied', True)
5695 else: 5754 else:
@@ -5708,7 +5767,7 @@ def getArgInt(name, args, min, max, main=True):
5708def getArgFloat(name, args, min, max, main=True): 5767def getArgFloat(name, args, min, max, main=True):
5709 if main: 5768 if main:
5710 try: 5769 try:
5711 arg = args.next() 5770 arg = next(args)
5712 except: 5771 except:
5713 doError(name+': no argument supplied', True) 5772 doError(name+': no argument supplied', True)
5714 else: 5773 else:
@@ -5737,9 +5796,12 @@ def processData(live=False):
5737 parseKernelLog(data) 5796 parseKernelLog(data)
5738 if(sysvals.ftracefile and (sysvals.usecallgraph or sysvals.usetraceevents)): 5797 if(sysvals.ftracefile and (sysvals.usecallgraph or sysvals.usetraceevents)):
5739 appendIncompleteTraceLog(testruns) 5798 appendIncompleteTraceLog(testruns)
5799 shown = ['bios', 'biosdate', 'cpu', 'host', 'kernel', 'man', 'memfr',
5800 'memsz', 'mode', 'numcpu', 'plat', 'time']
5740 sysvals.vprint('System Info:') 5801 sysvals.vprint('System Info:')
5741 for key in sorted(sysvals.stamp): 5802 for key in sorted(sysvals.stamp):
5742 sysvals.vprint(' %-8s : %s' % (key.upper(), sysvals.stamp[key])) 5803 if key in shown:
5804 sysvals.vprint(' %-8s : %s' % (key.upper(), sysvals.stamp[key]))
5743 if sysvals.kparams: 5805 if sysvals.kparams:
5744 sysvals.vprint('Kparams:\n %s' % sysvals.kparams) 5806 sysvals.vprint('Kparams:\n %s' % sysvals.kparams)
5745 sysvals.vprint('Command:\n %s' % sysvals.cmdline) 5807 sysvals.vprint('Command:\n %s' % sysvals.cmdline)
@@ -5768,6 +5830,12 @@ def processData(live=False):
5768 (w[0], w[1]) 5830 (w[0], w[1])
5769 sysvals.vprint(s) 5831 sysvals.vprint(s)
5770 data.printDetails() 5832 data.printDetails()
5833 if len(sysvals.platinfo) > 0:
5834 sysvals.vprint('\nPlatform Info:')
5835 for info in sysvals.platinfo:
5836 sysvals.vprint(info[0]+' - '+info[1])
5837 sysvals.vprint(info[2])
5838 sysvals.vprint('')
5771 if sysvals.cgdump: 5839 if sysvals.cgdump:
5772 for data in testruns: 5840 for data in testruns:
5773 data.debugPrint() 5841 data.debugPrint()
@@ -5951,7 +6019,7 @@ def data_from_html(file, outpath, issues, fulldetail=False):
5951 worst[d] = {'name':'', 'time': 0.0} 6019 worst[d] = {'name':'', 'time': 0.0}
5952 dev = devices[d] if d in devices else 0 6020 dev = devices[d] if d in devices else 0
5953 if dev and len(dev.keys()) > 0: 6021 if dev and len(dev.keys()) > 0:
5954 n = sorted(dev, key=dev.get, reverse=True)[0] 6022 n = sorted(dev, key=lambda k:(dev[k], k), reverse=True)[0]
5955 worst[d]['name'], worst[d]['time'] = n, dev[n] 6023 worst[d]['name'], worst[d]['time'] = n, dev[n]
5956 data = { 6024 data = {
5957 'mode': stmp[2], 6025 'mode': stmp[2],
@@ -5976,7 +6044,7 @@ def data_from_html(file, outpath, issues, fulldetail=False):
5976 data['funclist'] = find_in_html(html, '<div title="', '" class="traceevent"', False) 6044 data['funclist'] = find_in_html(html, '<div title="', '" class="traceevent"', False)
5977 return data 6045 return data
5978 6046
5979def genHtml(subdir): 6047def genHtml(subdir, force=False):
5980 for dirname, dirnames, filenames in os.walk(subdir): 6048 for dirname, dirnames, filenames in os.walk(subdir):
5981 sysvals.dmesgfile = sysvals.ftracefile = sysvals.htmlfile = '' 6049 sysvals.dmesgfile = sysvals.ftracefile = sysvals.htmlfile = ''
5982 for filename in filenames: 6050 for filename in filenames:
@@ -5986,7 +6054,7 @@ def genHtml(subdir):
5986 sysvals.ftracefile = os.path.join(dirname, filename) 6054 sysvals.ftracefile = os.path.join(dirname, filename)
5987 sysvals.setOutputFile() 6055 sysvals.setOutputFile()
5988 if sysvals.ftracefile and sysvals.htmlfile and \ 6056 if sysvals.ftracefile and sysvals.htmlfile and \
5989 not os.path.exists(sysvals.htmlfile): 6057 (force or not os.path.exists(sysvals.htmlfile)):
5990 pprint('FTRACE: %s' % sysvals.ftracefile) 6058 pprint('FTRACE: %s' % sysvals.ftracefile)
5991 if sysvals.dmesgfile: 6059 if sysvals.dmesgfile:
5992 pprint('DMESG : %s' % sysvals.dmesgfile) 6060 pprint('DMESG : %s' % sysvals.dmesgfile)
@@ -6042,7 +6110,7 @@ def checkArgBool(name, value):
6042# Description: 6110# Description:
6043# Configure the script via the info in a config file 6111# Configure the script via the info in a config file
6044def configFromFile(file): 6112def configFromFile(file):
6045 Config = ConfigParser.ConfigParser() 6113 Config = configparser.ConfigParser()
6046 6114
6047 Config.read(file) 6115 Config.read(file)
6048 sections = Config.sections() 6116 sections = Config.sections()
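configFromFile() only needs the Python 3 module rename (ConfigParser -> configparser); the read/sections/options API is unchanged. A minimal sketch reading a sleepgraph-style config file ('suspend.cfg' is a hypothetical name):

import configparser

Config = configparser.ConfigParser()
Config.read('suspend.cfg')               # hypothetical config file path
for section in Config.sections():
	for opt in Config.options(section):
		print(section, opt, Config.get(section, opt))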
@@ -6270,7 +6338,7 @@ def printHelp():
6270 ' default: suspend-{date}-{time}\n'\ 6338 ' default: suspend-{date}-{time}\n'\
6271 ' -rtcwake t Wakeup t seconds after suspend, set t to "off" to disable (default: 15)\n'\ 6339 ' -rtcwake t Wakeup t seconds after suspend, set t to "off" to disable (default: 15)\n'\
6272 ' -addlogs Add the dmesg and ftrace logs to the html output\n'\ 6340 ' -addlogs Add the dmesg and ftrace logs to the html output\n'\
6273 ' -turbostat Use turbostat to execute the command in freeze mode (default: disabled)\n'\ 6341 ' -noturbostat Do not use turbostat in freeze mode (default: disabled)\n'\
6274 ' -srgap Add a visible gap in the timeline between sus/res (default: disabled)\n'\ 6342 ' -srgap Add a visible gap in the timeline between sus/res (default: disabled)\n'\
6275 ' -skiphtml Run the test and capture the trace logs, but skip the timeline (default: disabled)\n'\ 6343 ' -skiphtml Run the test and capture the trace logs, but skip the timeline (default: disabled)\n'\
6276 ' -result fn Export a results table to a text file for parsing.\n'\ 6344 ' -result fn Export a results table to a text file for parsing.\n'\
@@ -6340,7 +6408,7 @@ if __name__ == '__main__':
6340 for arg in args: 6408 for arg in args:
6341 if(arg == '-m'): 6409 if(arg == '-m'):
6342 try: 6410 try:
6343 val = args.next() 6411 val = next(args)
6344 except: 6412 except:
6345 doError('No mode supplied', True) 6413 doError('No mode supplied', True)
6346 if val == 'command' and not sysvals.testcommand: 6414 if val == 'command' and not sysvals.testcommand:
@@ -6384,10 +6452,8 @@ if __name__ == '__main__':
6384 sysvals.dmesglog = True 6452 sysvals.dmesglog = True
6385 elif(arg == '-addlogftrace'): 6453 elif(arg == '-addlogftrace'):
6386 sysvals.ftracelog = True 6454 sysvals.ftracelog = True
6387 elif(arg == '-turbostat'): 6455 elif(arg == '-noturbostat'):
6388 sysvals.tstat = True 6456 sysvals.tstat = False
6389 if not sysvals.haveTurbostat():
6390 doError('Turbostat command not found')
6391 elif(arg == '-verbose'): 6457 elif(arg == '-verbose'):
6392 sysvals.verbose = True 6458 sysvals.verbose = True
6393 elif(arg == '-proc'): 6459 elif(arg == '-proc'):
@@ -6400,7 +6466,7 @@ if __name__ == '__main__':
6400 sysvals.gzip = True 6466 sysvals.gzip = True
6401 elif(arg == '-rs'): 6467 elif(arg == '-rs'):
6402 try: 6468 try:
6403 val = args.next() 6469 val = next(args)
6404 except: 6470 except:
6405 doError('-rs requires "enable" or "disable"', True) 6471 doError('-rs requires "enable" or "disable"', True)
6406 if val.lower() in switchvalues: 6472 if val.lower() in switchvalues:
@@ -6412,7 +6478,7 @@ if __name__ == '__main__':
6412 doError('invalid option: %s, use "enable/disable" or "on/off"' % val, True) 6478 doError('invalid option: %s, use "enable/disable" or "on/off"' % val, True)
6413 elif(arg == '-display'): 6479 elif(arg == '-display'):
6414 try: 6480 try:
6415 val = args.next() 6481 val = next(args)
6416 except: 6482 except:
6417 doError('-display requires an mode value', True) 6483 doError('-display requires an mode value', True)
6418 disopt = ['on', 'off', 'standby', 'suspend'] 6484 disopt = ['on', 'off', 'standby', 'suspend']
@@ -6423,7 +6489,7 @@ if __name__ == '__main__':
6423 sysvals.max_graph_depth = getArgInt('-maxdepth', args, 0, 1000) 6489 sysvals.max_graph_depth = getArgInt('-maxdepth', args, 0, 1000)
6424 elif(arg == '-rtcwake'): 6490 elif(arg == '-rtcwake'):
6425 try: 6491 try:
6426 val = args.next() 6492 val = next(args)
6427 except: 6493 except:
6428 doError('No rtcwake time supplied', True) 6494 doError('No rtcwake time supplied', True)
6429 if val.lower() in switchoff: 6495 if val.lower() in switchoff:
@@ -6443,7 +6509,7 @@ if __name__ == '__main__':
6443 sysvals.cgtest = getArgInt('-cgtest', args, 0, 1) 6509 sysvals.cgtest = getArgInt('-cgtest', args, 0, 1)
6444 elif(arg == '-cgphase'): 6510 elif(arg == '-cgphase'):
6445 try: 6511 try:
6446 val = args.next() 6512 val = next(args)
6447 except: 6513 except:
6448 doError('No phase name supplied', True) 6514 doError('No phase name supplied', True)
6449 d = Data(0) 6515 d = Data(0)
@@ -6453,19 +6519,19 @@ if __name__ == '__main__':
6453 sysvals.cgphase = val 6519 sysvals.cgphase = val
6454 elif(arg == '-cgfilter'): 6520 elif(arg == '-cgfilter'):
6455 try: 6521 try:
6456 val = args.next() 6522 val = next(args)
6457 except: 6523 except:
6458 doError('No callgraph functions supplied', True) 6524 doError('No callgraph functions supplied', True)
6459 sysvals.setCallgraphFilter(val) 6525 sysvals.setCallgraphFilter(val)
6460 elif(arg == '-skipkprobe'): 6526 elif(arg == '-skipkprobe'):
6461 try: 6527 try:
6462 val = args.next() 6528 val = next(args)
6463 except: 6529 except:
6464 doError('No kprobe functions supplied', True) 6530 doError('No kprobe functions supplied', True)
6465 sysvals.skipKprobes(val) 6531 sysvals.skipKprobes(val)
6466 elif(arg == '-cgskip'): 6532 elif(arg == '-cgskip'):
6467 try: 6533 try:
6468 val = args.next() 6534 val = next(args)
6469 except: 6535 except:
6470 doError('No file supplied', True) 6536 doError('No file supplied', True)
6471 if val.lower() in switchoff: 6537 if val.lower() in switchoff:
@@ -6480,7 +6546,7 @@ if __name__ == '__main__':
6480 sysvals.callloopmaxlen = getArgFloat('-callloop-maxlen', args, 0.0, 1.0) 6546 sysvals.callloopmaxlen = getArgFloat('-callloop-maxlen', args, 0.0, 1.0)
6481 elif(arg == '-cmd'): 6547 elif(arg == '-cmd'):
6482 try: 6548 try:
6483 val = args.next() 6549 val = next(args)
6484 except: 6550 except:
6485 doError('No command string supplied', True) 6551 doError('No command string supplied', True)
6486 sysvals.testcommand = val 6552 sysvals.testcommand = val
@@ -6495,13 +6561,13 @@ if __name__ == '__main__':
6495 sysvals.multitest['delay'] = getArgInt('-multi n d (delay between tests)', args, 0, 3600) 6561 sysvals.multitest['delay'] = getArgInt('-multi n d (delay between tests)', args, 0, 3600)
6496 elif(arg == '-o'): 6562 elif(arg == '-o'):
6497 try: 6563 try:
6498 val = args.next() 6564 val = next(args)
6499 except: 6565 except:
6500 doError('No subdirectory name supplied', True) 6566 doError('No subdirectory name supplied', True)
6501 sysvals.outdir = sysvals.setOutputFolder(val) 6567 sysvals.outdir = sysvals.setOutputFolder(val)
6502 elif(arg == '-config'): 6568 elif(arg == '-config'):
6503 try: 6569 try:
6504 val = args.next() 6570 val = next(args)
6505 except: 6571 except:
6506 doError('No text file supplied', True) 6572 doError('No text file supplied', True)
6507 file = sysvals.configFile(val) 6573 file = sysvals.configFile(val)
@@ -6510,7 +6576,7 @@ if __name__ == '__main__':
6510 configFromFile(file) 6576 configFromFile(file)
6511 elif(arg == '-fadd'): 6577 elif(arg == '-fadd'):
6512 try: 6578 try:
6513 val = args.next() 6579 val = next(args)
6514 except: 6580 except:
6515 doError('No text file supplied', True) 6581 doError('No text file supplied', True)
6516 file = sysvals.configFile(val) 6582 file = sysvals.configFile(val)
@@ -6519,7 +6585,7 @@ if __name__ == '__main__':
6519 sysvals.addFtraceFilterFunctions(file) 6585 sysvals.addFtraceFilterFunctions(file)
6520 elif(arg == '-dmesg'): 6586 elif(arg == '-dmesg'):
6521 try: 6587 try:
6522 val = args.next() 6588 val = next(args)
6523 except: 6589 except:
6524 doError('No dmesg file supplied', True) 6590 doError('No dmesg file supplied', True)
6525 sysvals.notestrun = True 6591 sysvals.notestrun = True
@@ -6528,7 +6594,7 @@ if __name__ == '__main__':
6528 doError('%s does not exist' % sysvals.dmesgfile) 6594 doError('%s does not exist' % sysvals.dmesgfile)
6529 elif(arg == '-ftrace'): 6595 elif(arg == '-ftrace'):
6530 try: 6596 try:
6531 val = args.next() 6597 val = next(args)
6532 except: 6598 except:
6533 doError('No ftrace file supplied', True) 6599 doError('No ftrace file supplied', True)
6534 sysvals.notestrun = True 6600 sysvals.notestrun = True
@@ -6537,7 +6603,7 @@ if __name__ == '__main__':
6537 doError('%s does not exist' % sysvals.ftracefile) 6603 doError('%s does not exist' % sysvals.ftracefile)
6538 elif(arg == '-summary'): 6604 elif(arg == '-summary'):
6539 try: 6605 try:
6540 val = args.next() 6606 val = next(args)
6541 except: 6607 except:
6542 doError('No directory supplied', True) 6608 doError('No directory supplied', True)
6543 cmd = 'summary' 6609 cmd = 'summary'
@@ -6547,13 +6613,13 @@ if __name__ == '__main__':
6547 doError('%s is not accessible' % val) 6613 doError('%s is not accessible' % val)
6548 elif(arg == '-filter'): 6614 elif(arg == '-filter'):
6549 try: 6615 try:
6550 val = args.next() 6616 val = next(args)
6551 except: 6617 except:
6552 doError('No devnames supplied', True) 6618 doError('No devnames supplied', True)
6553 sysvals.setDeviceFilter(val) 6619 sysvals.setDeviceFilter(val)
6554 elif(arg == '-result'): 6620 elif(arg == '-result'):
6555 try: 6621 try:
6556 val = args.next() 6622 val = next(args)
6557 except: 6623 except:
6558 doError('No result file supplied', True) 6624 doError('No result file supplied', True)
6559 sysvals.result = val 6625 sysvals.result = val