author    Linus Torvalds <torvalds@linux-foundation.org>  2017-09-05 15:19:08 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-09-05 15:19:08 -0400
commit    439644096c1a6afb9bd9953130f4444a856f76c5 (patch)
tree      cdf21533aa0ec95d084988f234186f8a5071d5df
parent    b42a362e6d10c342004b183defcb9940331b6737 (diff)
parent    d97561f461e4cb5b2e170bc33b466cffbddc8719 (diff)
Merge tag 'pm-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:

 "This time (again) cpufreq gets the majority of changes, which mostly are driver updates (including a major consolidation of intel_pstate), some schedutil governor modifications and core cleanups.

  There also are some changes in the system suspend area, mostly related to diagnostics and debug messages plus some renames of things related to suspend-to-idle. One major change here is that suspend-to-idle is now going to be preferred over S3 on systems where the ACPI tables indicate to do so and provide the requisite support (the Low Power Idle S0 _DSM in particular).

  The system sleep documentation and the tools related to it are updated too.

  The rest is a few cpuidle changes (nothing major), devfreq updates, generic power domains (genpd) framework updates and a few assorted modifications elsewhere.

  Specifics:

   - Drop the P-state selection algorithm based on a PID controller from intel_pstate and make it use the same P-state selection method (based on the CPU load) for all types of systems in the active mode (Rafael Wysocki, Srinivas Pandruvada).

   - Rework the cpufreq core and governors to make it possible to take cross-CPU utilization updates into account and modify the schedutil governor to actually do so (Viresh Kumar).

   - Clean up the handling of transition latency information in the cpufreq core and untangle it from the information on which drivers cannot do dynamic frequency switching (Viresh Kumar).

   - Add support for new SoCs (MT2701/MT7623 and MT7622) to the mediatek cpufreq driver and update its DT bindings (Sean Wang).

   - Modify the cpufreq dt-platdev driver to automatically create cpufreq devices for the new (v2) Operating Performance Points (OPP) DT bindings and update its whitelist of supported systems (Viresh Kumar, Shubhrajyoti Datta, Marc Gonzalez, Khiem Nguyen, Finley Xiao).

   - Add support for Ux500 to the cpufreq-dt driver and drop the obsolete dbx500 cpufreq driver (Linus Walleij, Arnd Bergmann).

   - Add new SoC (R8A7795) support to the cpufreq rcar driver (Khiem Nguyen).

   - Fix and clean up assorted issues in the cpufreq drivers and core (Arvind Yadav, Christophe Jaillet, Colin Ian King, Gustavo Silva, Julia Lawall, Leonard Crestez, Rob Herring, Sudeep Holla).

   - Update the IO-wait boost handling in the schedutil governor to make it less aggressive (Joel Fernandes).

   - Rework system suspend diagnostics to make it print fewer messages to the kernel log by default, add a sysfs knob to allow more suspend-related messages to be printed and add Low Power S0 Idle constraints checks to the ACPI suspend-to-idle code (Rafael Wysocki, Srinivas Pandruvada).

   - Prefer suspend-to-idle over S3 on ACPI-based systems with the ACPI_FADT_LOW_POWER_S0 flag set and the Low Power Idle S0 _DSM interface present in the ACPI tables (Rafael Wysocki).

   - Update documentation related to system sleep and rename a number of items in the code to make it clearer that they are related to suspend-to-idle (Rafael Wysocki).

   - Export a variable allowing device drivers to check the target system sleep state from the core system suspend code (Florian Fainelli).

   - Clean up the cpuidle subsystem to handle the polling state on x86 in a more straightforward way and to use %pOF instead of full_name (Rafael Wysocki, Rob Herring).

   - Update the devfreq framework to fix and clean up a few minor issues (Chanwoo Choi, Rob Herring).

   - Extend diagnostics in the generic power domains (genpd) framework and clean it up slightly (Thara Gopinath, Rob Herring).
   - Fix and clean up a couple of issues in the operating performance points (OPP) framework (Viresh Kumar, Waldemar Rymarkiewicz).

   - Add support for RV1108 to the rockchip-io Adaptive Voltage Scaling (AVS) driver (David Wu).

   - Fix the usage of notifiers in CPU power management on some platforms (Alex Shi).

   - Update the pm-graph system suspend/hibernation and boot profiling utility (Todd Brandt).

   - Make it possible to run the cpupower utility without CPU0 (Prarit Bhargava)"

* tag 'pm-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (87 commits)
  cpuidle: Make drivers initialize polling state
  cpuidle: Move polling state initialization code to separate file
  cpuidle: Eliminate the CPUIDLE_DRIVER_STATE_START symbol
  cpufreq: imx6q: Fix imx6sx low frequency support
  cpufreq: speedstep-lib: make several arrays static, makes code smaller
  PM: docs: Delete the obsolete states.txt document
  PM: docs: Describe high-level PM strategies and sleep states
  PM / devfreq: Fix memory leak when fail to register device
  PM / devfreq: Add dependency on PM_OPP
  PM / devfreq: Move private devfreq_update_stats() into devfreq
  PM / devfreq: Convert to using %pOF instead of full_name
  PM / AVS: rockchip-io: add io selectors and supplies for RV1108
  cpufreq: ti: Fix 'of_node_put' being called twice in error handling path
  cpufreq: dt-platdev: Drop few entries from whitelist
  cpufreq: dt-platdev: Automatically create cpufreq device with OPP v2
  ARM: ux500: don't select CPUFREQ_DT
  cpuidle: Convert to using %pOF instead of full_name
  cpufreq: Convert to using %pOF instead of full_name
  PM / Domains: Convert to using %pOF instead of full_name
  cpufreq: Cap the default transition delay value to 10 ms
  ...
-rw-r--r--  Documentation/ABI/testing/sysfs-power                            |  12
-rw-r--r--  Documentation/admin-guide/pm/cpufreq.rst                         |   8
-rw-r--r--  Documentation/admin-guide/pm/index.rst                           |  12
-rw-r--r--  Documentation/admin-guide/pm/intel_pstate.rst                    |  61
-rw-r--r--  Documentation/admin-guide/pm/sleep-states.rst                    | 245
-rw-r--r--  Documentation/admin-guide/pm/strategies.rst                      |  52
-rw-r--r--  Documentation/admin-guide/pm/system-wide.rst                     |   8
-rw-r--r--  Documentation/admin-guide/pm/working-state.rst                   |   9
-rw-r--r--  Documentation/devicetree/bindings/clock/mt8173-cpu-dvfs.txt      |  83
-rw-r--r--  Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt   | 247
-rw-r--r--  Documentation/devicetree/bindings/power/rockchip-io-domain.txt   |   2
-rw-r--r--  Documentation/power/states.txt                                   | 125
-rw-r--r--  arch/arm/boot/dts/tango4-smp8758.dtsi                            |   1
-rw-r--r--  arch/arm/mach-tegra/cpuidle-tegra114.c                           |   4
-rw-r--r--  drivers/acpi/processor_idle.c                                    |  23
-rw-r--r--  drivers/acpi/sleep.c                                             | 202
-rw-r--r--  drivers/base/power/domain.c                                      | 251
-rw-r--r--  drivers/base/power/main.c                                        | 103
-rw-r--r--  drivers/base/power/opp/of.c                                      |  37
-rw-r--r--  drivers/base/power/wakeup.c                                      |  10
-rw-r--r--  drivers/cpufreq/Kconfig.arm                                      |  21
-rw-r--r--  drivers/cpufreq/Makefile                                         |   4
-rw-r--r--  drivers/cpufreq/arm_big_little.c                                 |  10
-rw-r--r--  drivers/cpufreq/cppc_cpufreq.c                                   |   1
-rw-r--r--  drivers/cpufreq/cpufreq-dt-platdev.c                             |  65
-rw-r--r--  drivers/cpufreq/cpufreq-dt.c                                     |   1
-rw-r--r--  drivers/cpufreq/cpufreq-nforce2.c                                |   2
-rw-r--r--  drivers/cpufreq/cpufreq.c                                        |  41
-rw-r--r--  drivers/cpufreq/cpufreq_conservative.c                           |   6
-rw-r--r--  drivers/cpufreq/cpufreq_governor.c                               |  20
-rw-r--r--  drivers/cpufreq/cpufreq_governor.h                               |   3
-rw-r--r--  drivers/cpufreq/cpufreq_ondemand.c                               |  12
-rw-r--r--  drivers/cpufreq/dbx500-cpufreq.c                                 | 103
-rw-r--r--  drivers/cpufreq/elanfreq.c                                       |   4
-rw-r--r--  drivers/cpufreq/gx-suspmod.c                                     |   2
-rw-r--r--  drivers/cpufreq/imx6q-cpufreq.c                                  |   9
-rw-r--r--  drivers/cpufreq/intel_pstate.c                                   | 322
-rw-r--r--  drivers/cpufreq/longrun.c                                        |   1
-rw-r--r--  drivers/cpufreq/loongson2_cpufreq.c                              |   2
-rw-r--r--  drivers/cpufreq/mediatek-cpufreq.c (renamed from drivers/cpufreq/mt8173-cpufreq.c) | 29
-rw-r--r--  drivers/cpufreq/pmac32-cpufreq.c                                 |   7
-rw-r--r--  drivers/cpufreq/pmac64-cpufreq.c                                 |   2
-rw-r--r--  drivers/cpufreq/s5pv210-cpufreq.c                                |   3
-rw-r--r--  drivers/cpufreq/sa1100-cpufreq.c                                 |   5
-rw-r--r--  drivers/cpufreq/sa1110-cpufreq.c                                 |   5
-rw-r--r--  drivers/cpufreq/sh-cpufreq.c                                     |   3
-rw-r--r--  drivers/cpufreq/speedstep-ich.c                                  |   2
-rw-r--r--  drivers/cpufreq/speedstep-lib.c                                  |   4
-rw-r--r--  drivers/cpufreq/speedstep-smi.c                                  |   2
-rw-r--r--  drivers/cpufreq/sti-cpufreq.c                                    |   8
-rw-r--r--  drivers/cpufreq/tango-cpufreq.c                                  |  38
-rw-r--r--  drivers/cpufreq/ti-cpufreq.c                                     |   4
-rw-r--r--  drivers/cpufreq/unicore2-cpufreq.c                               |   3
-rw-r--r--  drivers/cpuidle/Makefile                                         |   1
-rw-r--r--  drivers/cpuidle/cpuidle.c                                        |  18
-rw-r--r--  drivers/cpuidle/driver.c                                         |  32
-rw-r--r--  drivers/cpuidle/dt_idle_states.c                                 |  24
-rw-r--r--  drivers/cpuidle/governors/ladder.c                               |  14
-rw-r--r--  drivers/cpuidle/governors/menu.c                                 |  13
-rw-r--r--  drivers/cpuidle/poll_state.c                                     |  37
-rw-r--r--  drivers/devfreq/Kconfig                                          |   1
-rw-r--r--  drivers/devfreq/devfreq-event.c                                  |   4
-rw-r--r--  drivers/devfreq/devfreq.c                                        |   5
-rw-r--r--  drivers/devfreq/governor.h                                       |   4
-rw-r--r--  drivers/idle/intel_idle.c                                        | 181
-rw-r--r--  drivers/mfd/db8500-prcmu.c                                       |  62
-rw-r--r--  drivers/platform/x86/intel-hid.c                                 |  17
-rw-r--r--  drivers/power/avs/rockchip-io-domain.c                           |  38
-rw-r--r--  drivers/regulator/of_regulator.c                                 |   2
-rw-r--r--  include/linux/cpufreq.h                                          |  38
-rw-r--r--  include/linux/cpuidle.h                                          |  21
-rw-r--r--  include/linux/devfreq.h                                          |  13
-rw-r--r--  include/linux/pm.h                                               |   4
-rw-r--r--  include/linux/pm_domain.h                                        |   3
-rw-r--r--  include/linux/suspend.h                                          |  48
-rw-r--r--  kernel/cpu_pm.c                                                  |  50
-rw-r--r--  kernel/power/hibernate.c                                         |  29
-rw-r--r--  kernel/power/main.c                                              |  64
-rw-r--r--  kernel/power/power.h                                             |   5
-rw-r--r--  kernel/power/suspend.c                                           | 184
-rw-r--r--  kernel/power/suspend_test.c                                      |   4
-rw-r--r--  kernel/sched/cpufreq_schedutil.c                                 |  98
-rw-r--r--  kernel/sched/deadline.c                                          |   2
-rw-r--r--  kernel/sched/fair.c                                              |   8
-rw-r--r--  kernel/sched/idle.c                                              |   8
-rw-r--r--  kernel/sched/rt.c                                                |   2
-rw-r--r--  kernel/sched/sched.h                                             |  10
-rw-r--r--  kernel/time/timekeeping_debug.c                                  |   5
-rw-r--r--  tools/power/cpupower/utils/cpupower.c                            |  15
-rw-r--r--  tools/power/cpupower/utils/helpers/cpuid.c                       |   4
-rw-r--r--  tools/power/cpupower/utils/helpers/helpers.h                     |   5
-rw-r--r--  tools/power/cpupower/utils/helpers/misc.c                        |   2
-rw-r--r--  tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c           |   4
-rw-r--r--  tools/power/cpupower/utils/idle_monitor/mperf_monitor.c          |   3
-rw-r--r--  tools/power/cpupower/utils/idle_monitor/nhm_idle.c               |   8
-rw-r--r--  tools/power/cpupower/utils/idle_monitor/snb_idle.c               |   4
-rw-r--r--  tools/power/pm-graph/Makefile                                    |  19
-rwxr-xr-x  tools/power/pm-graph/analyze_boot.py                             | 586
-rwxr-xr-x  tools/power/pm-graph/analyze_suspend.py                          | 534
-rw-r--r--  tools/power/pm-graph/bootgraph.8                                 |  61
-rw-r--r--  tools/power/pm-graph/sleepgraph.8                                |  48
101 files changed, 2835 insertions, 1746 deletions
diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index f523e5a3ac33..713cab1d5f12 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -273,3 +273,15 @@ Description:
273 273
274 This output is useful for system wakeup diagnostics of spurious 274 This output is useful for system wakeup diagnostics of spurious
275 wakeup interrupts. 275 wakeup interrupts.
276
277What: /sys/power/pm_debug_messages
278Date: July 2017
279Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
280Description:
281 The /sys/power/pm_debug_messages file controls the printing
282 of debug messages from the system suspend/hiberbation
283 infrastructure to the kernel log.
284
285 Writing a "1" to this file enables the debug messages and
286 writing a "0" (default) to it disables them. Reads from
287 this file return the current value.
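
To see the new knob in action, here is a minimal user-space sketch (not part of this patch; only the sysfs path and the "0"/"1" values come from the description above, everything else is illustrative) that enables the debug messages and reads the current setting back:

/*
 * Minimal user-space sketch (not part of this patch): enable the new
 * /sys/power/pm_debug_messages knob and read the current value back.
 * Assumes a kernel that already provides the file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4] = "";
    int fd;

    /* Writing "1" enables PM debug messages, "0" disables them. */
    fd = open("/sys/power/pm_debug_messages", O_WRONLY);
    if (fd < 0 || write(fd, "1", 1) != 1)
        perror("enable pm_debug_messages");
    if (fd >= 0)
        close(fd);

    /* Reads return the current value ("0" or "1"). */
    fd = open("/sys/power/pm_debug_messages", O_RDONLY);
    if (fd >= 0 && read(fd, buf, sizeof(buf) - 1) > 0)
        printf("pm_debug_messages: %s", buf);
    if (fd >= 0)
        close(fd);

    return 0;
}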
diff --git a/Documentation/admin-guide/pm/cpufreq.rst b/Documentation/admin-guide/pm/cpufreq.rst
index 7af83a92d2d6..47153e64dfb5 100644
--- a/Documentation/admin-guide/pm/cpufreq.rst
+++ b/Documentation/admin-guide/pm/cpufreq.rst
@@ -479,14 +479,6 @@ This governor exposes the following tunables:
479 479
480 # echo `$(($(cat cpuinfo_transition_latency) * 750 / 1000)) > ondemand/sampling_rate 480 # echo `$(($(cat cpuinfo_transition_latency) * 750 / 1000)) > ondemand/sampling_rate
481 481
482
483``min_sampling_rate``
484 The minimum value of ``sampling_rate``.
485
486 Equal to 10000 (10 ms) if :c:macro:`CONFIG_NO_HZ_COMMON` and
487 :c:data:`tick_nohz_active` are both set or to 20 times the value of
488 :c:data:`jiffies` in microseconds otherwise.
489
490``up_threshold`` 482``up_threshold``
491 If the estimated CPU load is above this value (in percent), the governor 483 If the estimated CPU load is above this value (in percent), the governor
492 will set the frequency to the maximum value allowed for the policy. 484 will set the frequency to the maximum value allowed for the policy.
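
The ``up_threshold`` tunable kept above encodes a simple rule: above the threshold the governor jumps to the policy maximum, otherwise it picks a lower frequency. Below is a rough stand-alone sketch of that rule; it is not the kernel's cpufreq_ondemand code, and the proportional fallback and all identifiers are invented for illustration (80 is the usual ondemand default for the threshold).

/*
 * Illustrative sketch of the ondemand "up_threshold" decision described
 * above -- not the kernel's cpufreq_ondemand implementation.  All names
 * and the proportional fallback are invented for the example.
 */
#include <stdio.h>

struct policy_limits {
    unsigned int min_freq;  /* kHz */
    unsigned int max_freq;  /* kHz */
};

static unsigned int ondemand_next_freq(const struct policy_limits *p,
                                       unsigned int load_pct,
                                       unsigned int up_threshold)
{
    /* Estimated load above the threshold: go straight to the maximum. */
    if (load_pct > up_threshold)
        return p->max_freq;

    /* Otherwise pick a frequency roughly proportional to the load. */
    return p->min_freq +
           (unsigned int)(((unsigned long long)(p->max_freq - p->min_freq) *
                           load_pct) / 100);
}

int main(void)
{
    struct policy_limits p = { .min_freq = 800000, .max_freq = 2400000 };

    /* 90% load with an up_threshold of 80 -> policy maximum. */
    printf("%u kHz\n", ondemand_next_freq(&p, 90, 80));
    /* 40% load -> proportional frequency between min and max. */
    printf("%u kHz\n", ondemand_next_freq(&p, 40, 80));
    return 0;
}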
diff --git a/Documentation/admin-guide/pm/index.rst b/Documentation/admin-guide/pm/index.rst
index 7f148f76f432..49237ac73442 100644
--- a/Documentation/admin-guide/pm/index.rst
+++ b/Documentation/admin-guide/pm/index.rst
@@ -5,12 +5,6 @@ Power Management
5.. toctree:: 5.. toctree::
6 :maxdepth: 2 6 :maxdepth: 2
7 7
8 cpufreq 8 strategies
9 intel_pstate 9 system-wide
10 10 working-state
11.. only:: subproject and html
12
13 Indices
14 =======
15
16 * :ref:`genindex`
diff --git a/Documentation/admin-guide/pm/intel_pstate.rst b/Documentation/admin-guide/pm/intel_pstate.rst
index 1d6249825efc..d2b6fda3d67b 100644
--- a/Documentation/admin-guide/pm/intel_pstate.rst
+++ b/Documentation/admin-guide/pm/intel_pstate.rst
@@ -167,35 +167,17 @@ is set.
167``powersave`` 167``powersave``
168............. 168.............
169 169
170Without HWP, this P-state selection algorithm generally depends on the 170Without HWP, this P-state selection algorithm is similar to the algorithm
171processor model and/or the system profile setting in the ACPI tables and there
172are two variants of it.
173
174One of them is used with processors from the Atom line and (regardless of the
175processor model) on platforms with the system profile in the ACPI tables set to
176"mobile" (laptops mostly), "tablet", "appliance PC", "desktop", or
177"workstation". It is also used with processors supporting the HWP feature if
178that feature has not been enabled (that is, with the ``intel_pstate=no_hwp``
179argument in the kernel command line). It is similar to the algorithm
180implemented by the generic ``schedutil`` scaling governor except that the 171implemented by the generic ``schedutil`` scaling governor except that the
181utilization metric used by it is based on numbers coming from feedback 172utilization metric used by it is based on numbers coming from feedback
182registers of the CPU. It generally selects P-states proportional to the 173registers of the CPU. It generally selects P-states proportional to the
183current CPU utilization, so it is referred to as the "proportional" algorithm. 174current CPU utilization.
184 175
185The second variant of the ``powersave`` P-state selection algorithm, used in all 176This algorithm is run by the driver's utilization update callback for the
186of the other cases (generally, on processors from the Core line, so it is 177given CPU when it is invoked by the CPU scheduler, but not more often than
187referred to as the "Core" algorithm), is based on the values read from the APERF 178every 10 ms. Like in the ``performance`` case, the hardware configuration
188and MPERF feedback registers and the previously requested target P-state. 179is not touched if the new P-state turns out to be the same as the current
189It does not really take CPU utilization into account explicitly, but as a rule 180one.
190it causes the CPU P-state to ramp up very quickly in response to increased
191utilization which is generally desirable in server environments.
192
193Regardless of the variant, this algorithm is run by the driver's utilization
194update callback for the given CPU when it is invoked by the CPU scheduler, but
195not more often than every 10 ms (that can be tweaked via ``debugfs`` in `this
196particular case <Tuning Interface in debugfs_>`_). Like in the ``performance``
197case, the hardware configuration is not touched if the new P-state turns out to
198be the same as the current one.
199 181
200This is the default P-state selection algorithm if the 182This is the default P-state selection algorithm if the
201:c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option 183:c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option
@@ -720,34 +702,7 @@ P-state is called, the ``ftrace`` filter can be set to to
720 gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func 702 gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func
721 <idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func 703 <idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func
722 704
723Tuning Interface in ``debugfs``
724-------------------------------
725
726The ``powersave`` algorithm provided by ``intel_pstate`` for `the Core line of
727processors in the active mode <powersave_>`_ is based on a `PID controller`_
728whose parameters were chosen to address a number of different use cases at the
729same time. However, it still is possible to fine-tune it to a specific workload
730and the ``debugfs`` interface under ``/sys/kernel/debug/pstate_snb/`` is
731provided for this purpose. [Note that the ``pstate_snb`` directory will be
732present only if the specific P-state selection algorithm matching the interface
733in it actually is in use.]
734
735The following files present in that directory can be used to modify the PID
736controller parameters at run time:
737
738| ``deadband``
739| ``d_gain_pct``
740| ``i_gain_pct``
741| ``p_gain_pct``
742| ``sample_rate_ms``
743| ``setpoint``
744
745Note, however, that achieving desirable results this way generally requires
746expert-level understanding of the power vs performance tradeoff, so extra care
747is recommended when attempting to do that.
748
749 705
750.. _LCEU2015: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf 706.. _LCEU2015: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf
751.. _SDM: http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html 707.. _SDM: http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html
752.. _ACPI specification: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf 708.. _ACPI specification: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf
753.. _PID controller: https://en.wikipedia.org/wiki/PID_controller
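
The first hunk in this file reduces the active-mode ``powersave`` description to a single load-based algorithm: utilization is estimated from the CPU's feedback counters, the requested P-state is roughly proportional to it, the selection is re-evaluated at most every 10 ms, and the hardware is left alone when the pick does not change. The following is a rough, self-contained sketch of that idea only; it is not the actual intel_pstate code, and every identifier and the exact scaling are invented.

/*
 * Rough sketch of a load-proportional P-state pick, modelled on the
 * updated "powersave" description above.  NOT the intel_pstate
 * implementation; all identifiers are invented for illustration.
 */
#include <stdint.h>
#include <stdio.h>

struct feedback_sample {
    uint64_t mperf;     /* non-idle cycles at the guaranteed frequency */
    uint64_t tsc;       /* free-running timestamp counter */
    uint64_t time_ns;   /* monotonic time of the sample */
};

struct cpu_state {
    struct feedback_sample last;
    int min_pstate;
    int max_pstate;
    int cur_pstate;
};

#define PSTATE_UPDATE_NS (10u * 1000 * 1000)   /* at most one pick per 10 ms */

static int clamp_pstate(const struct cpu_state *cpu, int p)
{
    if (p < cpu->min_pstate)
        return cpu->min_pstate;
    if (p > cpu->max_pstate)
        return cpu->max_pstate;
    return p;
}

/* Invoked from a scheduler utilization-update hook in the real driver. */
static void pstate_update(struct cpu_state *cpu,
                          const struct feedback_sample *now)
{
    uint64_t dmperf, dtsc;
    int target;

    if (now->time_ns - cpu->last.time_ns < PSTATE_UPDATE_NS)
        return;                         /* rate limit: 10 ms */

    dmperf = now->mperf - cpu->last.mperf;
    dtsc = now->tsc - cpu->last.tsc;
    cpu->last = *now;
    if (!dtsc)
        return;

    /* Busy fraction from the feedback counters, scaled to the P-state range. */
    target = (int)((cpu->max_pstate * dmperf) / dtsc);
    target = clamp_pstate(cpu, target);

    /* Leave the hardware alone if the pick did not change. */
    if (target != cpu->cur_pstate)
        cpu->cur_pstate = target;       /* the real driver writes an MSR here */
}

int main(void)
{
    struct cpu_state cpu = { .min_pstate = 8, .max_pstate = 36, .cur_pstate = 8 };
    struct feedback_sample s = { .mperf = 600000, .tsc = 1000000,
                                 .time_ns = 20u * 1000 * 1000 };

    pstate_update(&cpu, &s);            /* ~60% busy -> ~60% of max_pstate */
    printf("picked P-state %d\n", cpu.cur_pstate);
    return 0;
}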
diff --git a/Documentation/admin-guide/pm/sleep-states.rst b/Documentation/admin-guide/pm/sleep-states.rst
new file mode 100644
index 000000000000..1e5c0f00cb2f
--- /dev/null
+++ b/Documentation/admin-guide/pm/sleep-states.rst
@@ -0,0 +1,245 @@
1===================
2System Sleep States
3===================
4
5::
6
7 Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8
9Sleep states are global low-power states of the entire system in which user
10space code cannot be executed and the overall system activity is significantly
11reduced.
12
13
14Sleep States That Can Be Supported
15==================================
16
17Depending on its configuration and the capabilities of the platform it runs on,
18the Linux kernel can support up to four system sleep states, including
19hibernation and up to three variants of system suspend. The sleep states that
20can be supported by the kernel are listed below.
21
22.. _s2idle:
23
24Suspend-to-Idle
25---------------
26
27This is a generic, pure software, light-weight variant of system suspend (also
28referred to as S2I or S2Idle). It allows more energy to be saved relative to
29runtime idle by freezing user space, suspending the timekeeping and putting all
30I/O devices into low-power states (possibly lower-power than available in the
31working state), such that the processors can spend time in their deepest idle
32states while the system is suspended.
33
34The system is woken up from this state by in-band interrupts, so theoretically
35any devices that can cause interrupts to be generated in the working state can
36also be set up as wakeup devices for S2Idle.
37
38This state can be used on platforms without support for :ref:`standby <standby>`
39or :ref:`suspend-to-RAM <s2ram>`, or it can be used in addition to any of the
40deeper system suspend variants to provide reduced resume latency. It is always
41supported if the :c:macro:`CONFIG_SUSPEND` kernel configuration option is set.
42
43.. _standby:
44
45Standby
46-------
47
48This state, if supported, offers moderate, but real, energy savings, while
49providing a relatively straightforward transition back to the working state. No
50operating state is lost (the system core logic retains power), so the system can
51go back to where it left off easily enough.
52
53In addition to freezing user space, suspending the timekeeping and putting all
54I/O devices into low-power states, which is done for :ref:`suspend-to-idle
55<s2idle>` too, nonboot CPUs are taken offline and all low-level system functions
56are suspended during transitions into this state. For this reason, it should
57allow more energy to be saved relative to :ref:`suspend-to-idle <s2idle>`, but
58the resume latency will generally be greater than for that state.
59
60The set of devices that can wake up the system from this state usually is
61reduced relative to :ref:`suspend-to-idle <s2idle>` and it may be necessary to
62rely on the platform for setting up the wakeup functionality as appropriate.
63
64This state is supported if the :c:macro:`CONFIG_SUSPEND` kernel configuration
65option is set and the support for it is registered by the platform with the
66core system suspend subsystem. On ACPI-based systems this state is mapped to
67the S1 system state defined by ACPI.
68
69.. _s2ram:
70
71Suspend-to-RAM
72--------------
73
74This state (also referred to as STR or S2RAM), if supported, offers significant
75energy savings as everything in the system is put into a low-power state, except
76for memory, which should be placed into the self-refresh mode to retain its
77contents. All of the steps carried out when entering :ref:`standby <standby>`
78are also carried out during transitions to S2RAM. Additional operations may
79take place depending on the platform capabilities. In particular, on ACPI-based
80systems the kernel passes control to the platform firmware (BIOS) as the last
81step during S2RAM transitions and that usually results in powering down some
82more low-level components that are not directly controlled by the kernel.
83
84The state of devices and CPUs is saved and held in memory. All devices are
85suspended and put into low-power states. In many cases, all peripheral buses
86lose power when entering S2RAM, so devices must be able to handle the transition
87back to the "on" state.
88
89On ACPI-based systems S2RAM requires some minimal boot-strapping code in the
90platform firmware to resume the system from it. This may be the case on other
91platforms too.
92
93The set of devices that can wake up the system from S2RAM usually is reduced
94relative to :ref:`suspend-to-idle <s2idle>` and :ref:`standby <standby>` and it
95may be necessary to rely on the platform for setting up the wakeup functionality
96as appropriate.
97
98S2RAM is supported if the :c:macro:`CONFIG_SUSPEND` kernel configuration option
99is set and the support for it is registered by the platform with the core system
100suspend subsystem. On ACPI-based systems it is mapped to the S3 system state
101defined by ACPI.
102
103.. _hibernation:
104
105Hibernation
106-----------
107
108This state (also referred to as Suspend-to-Disk or STD) offers the greatest
109energy savings and can be used even in the absence of low-level platform support
110for system suspend. However, it requires some low-level code for resuming the
111system to be present for the underlying CPU architecture.
112
113Hibernation is significantly different from any of the system suspend variants.
114It takes three system state changes to put it into hibernation and two system
115state changes to resume it.
116
117First, when hibernation is triggered, the kernel stops all system activity and
118creates a snapshot image of memory to be written into persistent storage. Next,
119the system goes into a state in which the snapshot image can be saved, the image
120is written out and finally the system goes into the target low-power state in
121which power is cut from almost all of its hardware components, including memory,
122except for a limited set of wakeup devices.
123
124Once the snapshot image has been written out, the system may either enter a
125special low-power state (like ACPI S4), or it may simply power down itself.
126Powering down means minimum power draw and it allows this mechanism to work on
127any system. However, entering a special low-power state may allow additional
128means of system wakeup to be used (e.g. pressing a key on the keyboard or
129opening a laptop lid).
130
131After wakeup, control goes to the platform firmware that runs a boot loader
132which boots a fresh instance of the kernel (control may also go directly to
133the boot loader, depending on the system configuration, but anyway it causes
134a fresh instance of the kernel to be booted). That new instance of the kernel
135(referred to as the ``restore kernel``) looks for a hibernation image in
136persistent storage and if one is found, it is loaded into memory. Next, all
137activity in the system is stopped and the restore kernel overwrites itself with
138the image contents and jumps into a special trampoline area in the original
139kernel stored in the image (referred to as the ``image kernel``), which is where
140the special architecture-specific low-level code is needed. Finally, the
141image kernel restores the system to the pre-hibernation state and allows user
142space to run again.
143
144Hibernation is supported if the :c:macro:`CONFIG_HIBERNATION` kernel
145configuration option is set. However, this option can only be set if support
146for the given CPU architecture includes the low-level code for system resume.
147
148
149Basic ``sysfs`` Interfaces for System Suspend and Hibernation
150=============================================================
151
152The following files located in the :file:`/sys/power/` directory can be used by
153user space for sleep states control.
154
155``state``
156 This file contains a list of strings representing sleep states supported
157 by the kernel. Writing one of these strings into it causes the kernel
158 to start a transition of the system into the sleep state represented by
159 that string.
160
161 In particular, the strings "disk", "freeze" and "standby" represent the
162 :ref:`hibernation <hibernation>`, :ref:`suspend-to-idle <s2idle>` and
163 :ref:`standby <standby>` sleep states, respectively. The string "mem"
164 is interpreted in accordance with the contents of the ``mem_sleep`` file
165 described below.
166
167 If the kernel does not support any system sleep states, this file is
168 not present.
169
170``mem_sleep``
171 This file contains a list of strings representing supported system
172 suspend variants and allows user space to select the variant to be
173 associated with the "mem" string in the ``state`` file described above.
174
175 The strings that may be present in this file are "s2idle", "shallow"
176 and "deep". The string "s2idle" always represents :ref:`suspend-to-idle
177 <s2idle>` and, by convention, "shallow" and "deep" represent
178 :ref:`standby <standby>` and :ref:`suspend-to-RAM <s2ram>`,
179 respectively.
180
181 Writing one of the listed strings into this file causes the system
182 suspend variant represented by it to be associated with the "mem" string
183 in the ``state`` file. The string representing the suspend variant
184 currently associated with the "mem" string in the ``state`` file
185 is listed in square brackets.
186
187 If the kernel does not support system suspend, this file is not present.
188
189``disk``
190 This file contains a list of strings representing different operations
191 that can be carried out after the hibernation image has been saved. The
192 possible options are as follows:
193
194 ``platform``
195 Put the system into a special low-power state (e.g. ACPI S4) to
196 make additional wakeup options available and possibly allow the
197 platform firmware to take a simplified initialization path after
198 wakeup.
199
200 ``shutdown``
201 Power off the system.
202
203 ``reboot``
204 Reboot the system (useful for diagnostics mostly).
205
206 ``suspend``
207 Hybrid system suspend. Put the system into the suspend sleep
208 state selected through the ``mem_sleep`` file described above.
209 If the system is successfully woken up from that state, discard
210 the hibernation image and continue. Otherwise, use the image
211 to restore the previous state of the system.
212
213 ``test_resume``
214 Diagnostic operation. Load the image as though the system had
215 just woken up from hibernation and the currently running kernel
216 instance was a restore kernel and follow up with full system
217 resume.
218
219 Writing one of the listed strings into this file causes the option
220 represented by it to be selected.
221
222 The currently selected option is shown in square brackets which means
223 that the operation represented by it will be carried out after creating
224 and saving the image next time hibernation is triggered by writing
225 ``disk`` to :file:`/sys/power/state`.
226
227 If the kernel does not support hibernation, this file is not present.
228
229According to the above, there are two ways to make the system go into the
230:ref:`suspend-to-idle <s2idle>` state. The first one is to write "freeze"
231directly to :file:`/sys/power/state`. The second one is to write "s2idle" to
232:file:`/sys/power/mem_sleep` and then to write "mem" to
233:file:`/sys/power/state`. Likewise, there are two ways to make the system go
234into the :ref:`standby <standby>` state (the strings to write to the control
235files in that case are "standby" or "shallow" and "mem", respectively) if that
236state is supported by the platform. However, there is only one way to make the
237system go into the :ref:`suspend-to-RAM <s2ram>` state (write "deep" into
238:file:`/sys/power/mem_sleep` and "mem" into :file:`/sys/power/state`).
239
240The default suspend variant (ie. the one to be used without writing anything
241into :file:`/sys/power/mem_sleep`) is either "deep" (on the majority of systems
242supporting :ref:`suspend-to-RAM <s2ram>`) or "s2idle", but it can be overridden
243by the value of the "mem_sleep_default" parameter in the kernel command line.
244On some ACPI-based systems, depending on the information in the ACPI tables, the
245default may be "s2idle" even if :ref:`suspend-to-RAM <s2ram>` is supported.
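
As a usage illustration of the interfaces documented above, here is a minimal user-space sketch (not part of the patch; only the sysfs paths and the "s2idle"/"mem" strings come from the text, the rest is illustrative) that selects suspend-to-idle via ``mem_sleep`` and then triggers it via ``state``:

/*
 * Minimal user-space sketch (not part of this patch): associate "mem"
 * with suspend-to-idle via /sys/power/mem_sleep, then start the
 * transition by writing "mem" to /sys/power/state.  Error handling is
 * reduced to the bare minimum; both writes need sufficient privileges.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
    int fd = open(path, O_WRONLY);
    ssize_t n;

    if (fd < 0)
        return -1;
    n = write(fd, val, strlen(val));
    close(fd);
    return n == (ssize_t)strlen(val) ? 0 : -1;
}

int main(void)
{
    /* "s2idle" is always listed; "shallow"/"deep" only if supported. */
    if (write_str("/sys/power/mem_sleep", "s2idle"))
        perror("mem_sleep");

    /* This write blocks until the system is resumed. */
    if (write_str("/sys/power/state", "mem"))
        perror("state");

    return 0;
}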
diff --git a/Documentation/admin-guide/pm/strategies.rst b/Documentation/admin-guide/pm/strategies.rst
new file mode 100644
index 000000000000..afe4d3f831fe
--- /dev/null
+++ b/Documentation/admin-guide/pm/strategies.rst
@@ -0,0 +1,52 @@
1===========================
2Power Management Strategies
3===========================
4
5::
6
7 Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
8
9The Linux kernel supports two major high-level power management strategies.
10
11One of them is based on using global low-power states of the whole system in
12which user space code cannot be executed and the overall system activity is
13significantly reduced, referred to as :doc:`sleep states <sleep-states>`. The
14kernel puts the system into one of these states when requested by user space
15and the system stays in it until a special signal is received from one of
16designated devices, triggering a transition to the ``working state`` in which
17user space code can run. Because sleep states are global and the whole system
18is affected by the state changes, this strategy is referred to as the
19:doc:`system-wide power management <system-wide>`.
20
21The other strategy, referred to as the :doc:`working-state power management
22<working-state>`, is based on adjusting the power states of individual hardware
23components of the system, as needed, in the working state. In consequence, if
24this strategy is in use, the working state of the system usually does not
25correspond to any particular physical configuration of it, but can be treated as
26a metastate covering a range of different power states of the system in which
27the individual components of it can be either ``active`` (in use) or
28``inactive`` (idle). If they are active, they have to be in power states
29allowing them to process data and to be accessed by software. In turn, if they
30are inactive, ideally, they should be in low-power states in which they may not
31be accessible.
32
33If all of the system components are active, the system as a whole is regarded as
34"runtime active" and that situation typically corresponds to the maximum power
35draw (or maximum energy usage) of it. If all of them are inactive, the system
36as a whole is regarded as "runtime idle" which may be very close to a sleep
37state from the physical system configuration and power draw perspective, but
38then it takes much less time and effort to start executing user space code than
39for the same system in a sleep state. However, transitions from sleep states
40back to the working state can only be started by a limited set of devices, so
41typically the system can spend much more time in a sleep state than it can be
42runtime idle in one go. For this reason, systems usually use less energy in
43sleep states than when they are runtime idle most of the time.
44
45Moreover, the two power management strategies address different usage scenarios.
46Namely, if the user indicates that the system will not be in use going forward,
47for example by closing its lid (if the system is a laptop), it probably should
48go into a sleep state at that point. On the other hand, if the user simply goes
49away from the laptop keyboard, it probably should stay in the working state and
50use the working-state power management in case it becomes idle, because the user
51may come back to it at any time and then may want the system to be immediately
52accessible.
diff --git a/Documentation/admin-guide/pm/system-wide.rst b/Documentation/admin-guide/pm/system-wide.rst
new file mode 100644
index 000000000000..0c81e4c5de39
--- /dev/null
+++ b/Documentation/admin-guide/pm/system-wide.rst
@@ -0,0 +1,8 @@
1============================
2System-Wide Power Management
3============================
4
5.. toctree::
6 :maxdepth: 2
7
8 sleep-states
diff --git a/Documentation/admin-guide/pm/working-state.rst b/Documentation/admin-guide/pm/working-state.rst
new file mode 100644
index 000000000000..fa01bf083dfe
--- /dev/null
+++ b/Documentation/admin-guide/pm/working-state.rst
@@ -0,0 +1,9 @@
1==============================
2Working-State Power Management
3==============================
4
5.. toctree::
6 :maxdepth: 2
7
8 cpufreq
9 intel_pstate
diff --git a/Documentation/devicetree/bindings/clock/mt8173-cpu-dvfs.txt b/Documentation/devicetree/bindings/clock/mt8173-cpu-dvfs.txt
deleted file mode 100644
index 52b457c23eed..000000000000
--- a/Documentation/devicetree/bindings/clock/mt8173-cpu-dvfs.txt
+++ /dev/null
@@ -1,83 +0,0 @@
1Device Tree Clock bindins for CPU DVFS of Mediatek MT8173 SoC
2
3Required properties:
4- clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock names.
5- clock-names: Should contain the following:
6 "cpu" - The multiplexer for clock input of CPU cluster.
7 "intermediate" - A parent of "cpu" clock which is used as "intermediate" clock
8 source (usually MAINPLL) when the original CPU PLL is under
9 transition and not stable yet.
10 Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for
11 generic clock consumer properties.
12- proc-supply: Regulator for Vproc of CPU cluster.
13
14Optional properties:
15- sram-supply: Regulator for Vsram of CPU cluster. When present, the cpufreq driver
16 needs to do "voltage tracking" to step by step scale up/down Vproc and
17 Vsram to fit SoC specific needs. When absent, the voltage scaling
18 flow is handled by hardware, hence no software "voltage tracking" is
19 needed.
20
21Example:
22--------
23 cpu0: cpu@0 {
24 device_type = "cpu";
25 compatible = "arm,cortex-a53";
26 reg = <0x000>;
27 enable-method = "psci";
28 cpu-idle-states = <&CPU_SLEEP_0>;
29 clocks = <&infracfg CLK_INFRA_CA53SEL>,
30 <&apmixedsys CLK_APMIXED_MAINPLL>;
31 clock-names = "cpu", "intermediate";
32 };
33
34 cpu1: cpu@1 {
35 device_type = "cpu";
36 compatible = "arm,cortex-a53";
37 reg = <0x001>;
38 enable-method = "psci";
39 cpu-idle-states = <&CPU_SLEEP_0>;
40 clocks = <&infracfg CLK_INFRA_CA53SEL>,
41 <&apmixedsys CLK_APMIXED_MAINPLL>;
42 clock-names = "cpu", "intermediate";
43 };
44
45 cpu2: cpu@100 {
46 device_type = "cpu";
47 compatible = "arm,cortex-a57";
48 reg = <0x100>;
49 enable-method = "psci";
50 cpu-idle-states = <&CPU_SLEEP_0>;
51 clocks = <&infracfg CLK_INFRA_CA57SEL>,
52 <&apmixedsys CLK_APMIXED_MAINPLL>;
53 clock-names = "cpu", "intermediate";
54 };
55
56 cpu3: cpu@101 {
57 device_type = "cpu";
58 compatible = "arm,cortex-a57";
59 reg = <0x101>;
60 enable-method = "psci";
61 cpu-idle-states = <&CPU_SLEEP_0>;
62 clocks = <&infracfg CLK_INFRA_CA57SEL>,
63 <&apmixedsys CLK_APMIXED_MAINPLL>;
64 clock-names = "cpu", "intermediate";
65 };
66
67 &cpu0 {
68 proc-supply = <&mt6397_vpca15_reg>;
69 };
70
71 &cpu1 {
72 proc-supply = <&mt6397_vpca15_reg>;
73 };
74
75 &cpu2 {
76 proc-supply = <&da9211_vcpu_reg>;
77 sram-supply = <&mt6397_vsramca7_reg>;
78 };
79
80 &cpu3 {
81 proc-supply = <&da9211_vcpu_reg>;
82 sram-supply = <&mt6397_vsramca7_reg>;
83 };
diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt
new file mode 100644
index 000000000000..f6403089edcf
--- /dev/null
+++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt
@@ -0,0 +1,247 @@
1Binding for MediaTek's CPUFreq driver
2=====================================
3
4Required properties:
5- clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock names.
6- clock-names: Should contain the following:
7 "cpu" - The multiplexer for clock input of CPU cluster.
8 "intermediate" - A parent of "cpu" clock which is used as "intermediate" clock
9 source (usually MAINPLL) when the original CPU PLL is under
10 transition and not stable yet.
11 Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for
12 generic clock consumer properties.
13- operating-points-v2: Please refer to Documentation/devicetree/bindings/opp/opp.txt
14 for detail.
15- proc-supply: Regulator for Vproc of CPU cluster.
16
17Optional properties:
18- sram-supply: Regulator for Vsram of CPU cluster. When present, the cpufreq driver
19 needs to do "voltage tracking" to step by step scale up/down Vproc and
20 Vsram to fit SoC specific needs. When absent, the voltage scaling
21 flow is handled by hardware, hence no software "voltage tracking" is
22 needed.
23- #cooling-cells:
24- cooling-min-level:
25- cooling-max-level:
26 Please refer to Documentation/devicetree/bindings/thermal/thermal.txt
27 for detail.
28
29Example 1 (MT7623 SoC):
30
31 cpu_opp_table: opp_table {
32 compatible = "operating-points-v2";
33 opp-shared;
34
35 opp-598000000 {
36 opp-hz = /bits/ 64 <598000000>;
37 opp-microvolt = <1050000>;
38 };
39
40 opp-747500000 {
41 opp-hz = /bits/ 64 <747500000>;
42 opp-microvolt = <1050000>;
43 };
44
45 opp-1040000000 {
46 opp-hz = /bits/ 64 <1040000000>;
47 opp-microvolt = <1150000>;
48 };
49
50 opp-1196000000 {
51 opp-hz = /bits/ 64 <1196000000>;
52 opp-microvolt = <1200000>;
53 };
54
55 opp-1300000000 {
56 opp-hz = /bits/ 64 <1300000000>;
57 opp-microvolt = <1300000>;
58 };
59 };
60
61 cpu0: cpu@0 {
62 device_type = "cpu";
63 compatible = "arm,cortex-a7";
64 reg = <0x0>;
65 clocks = <&infracfg CLK_INFRA_CPUSEL>,
66 <&apmixedsys CLK_APMIXED_MAINPLL>;
67 clock-names = "cpu", "intermediate";
68 operating-points-v2 = <&cpu_opp_table>;
69 #cooling-cells = <2>;
70 cooling-min-level = <0>;
71 cooling-max-level = <7>;
72 };
73 cpu@1 {
74 device_type = "cpu";
75 compatible = "arm,cortex-a7";
76 reg = <0x1>;
77 operating-points-v2 = <&cpu_opp_table>;
78 };
79 cpu@2 {
80 device_type = "cpu";
81 compatible = "arm,cortex-a7";
82 reg = <0x2>;
83 operating-points-v2 = <&cpu_opp_table>;
84 };
85 cpu@3 {
86 device_type = "cpu";
87 compatible = "arm,cortex-a7";
88 reg = <0x3>;
89 operating-points-v2 = <&cpu_opp_table>;
90 };
91
92Example 2 (MT8173 SoC):
93 cpu_opp_table_a: opp_table_a {
94 compatible = "operating-points-v2";
95 opp-shared;
96
97 opp-507000000 {
98 opp-hz = /bits/ 64 <507000000>;
99 opp-microvolt = <859000>;
100 };
101
102 opp-702000000 {
103 opp-hz = /bits/ 64 <702000000>;
104 opp-microvolt = <908000>;
105 };
106
107 opp-1001000000 {
108 opp-hz = /bits/ 64 <1001000000>;
109 opp-microvolt = <983000>;
110 };
111
112 opp-1105000000 {
113 opp-hz = /bits/ 64 <1105000000>;
114 opp-microvolt = <1009000>;
115 };
116
117 opp-1183000000 {
118 opp-hz = /bits/ 64 <1183000000>;
119 opp-microvolt = <1028000>;
120 };
121
122 opp-1404000000 {
123 opp-hz = /bits/ 64 <1404000000>;
124 opp-microvolt = <1083000>;
125 };
126
127 opp-1508000000 {
128 opp-hz = /bits/ 64 <1508000000>;
129 opp-microvolt = <1109000>;
130 };
131
132 opp-1573000000 {
133 opp-hz = /bits/ 64 <1573000000>;
134 opp-microvolt = <1125000>;
135 };
136 };
137
138 cpu_opp_table_b: opp_table_b {
139 compatible = "operating-points-v2";
140 opp-shared;
141
142 opp-507000000 {
143 opp-hz = /bits/ 64 <507000000>;
144 opp-microvolt = <828000>;
145 };
146
147 opp-702000000 {
148 opp-hz = /bits/ 64 <702000000>;
149 opp-microvolt = <867000>;
150 };
151
152 opp-1001000000 {
153 opp-hz = /bits/ 64 <1001000000>;
154 opp-microvolt = <927000>;
155 };
156
157 opp-1209000000 {
158 opp-hz = /bits/ 64 <1209000000>;
159 opp-microvolt = <968000>;
160 };
161
162 opp-1404000000 {
163 opp-hz = /bits/ 64 <1404000000>;
164 opp-microvolt = <1028000>;
165 };
166
167 opp-1612000000 {
168 opp-hz = /bits/ 64 <1612000000>;
169 opp-microvolt = <1049000>;
170 };
171
172 opp-1807000000 {
173 opp-hz = /bits/ 64 <1807000000>;
174 opp-microvolt = <1089000>;
175 };
176
177 opp-1989000000 {
178 opp-hz = /bits/ 64 <1989000000>;
179 opp-microvolt = <1125000>;
180 };
181 };
182
183 cpu0: cpu@0 {
184 device_type = "cpu";
185 compatible = "arm,cortex-a53";
186 reg = <0x000>;
187 enable-method = "psci";
188 cpu-idle-states = <&CPU_SLEEP_0>;
189 clocks = <&infracfg CLK_INFRA_CA53SEL>,
190 <&apmixedsys CLK_APMIXED_MAINPLL>;
191 clock-names = "cpu", "intermediate";
192 operating-points-v2 = <&cpu_opp_table_a>;
193 };
194
195 cpu1: cpu@1 {
196 device_type = "cpu";
197 compatible = "arm,cortex-a53";
198 reg = <0x001>;
199 enable-method = "psci";
200 cpu-idle-states = <&CPU_SLEEP_0>;
201 clocks = <&infracfg CLK_INFRA_CA53SEL>,
202 <&apmixedsys CLK_APMIXED_MAINPLL>;
203 clock-names = "cpu", "intermediate";
204 operating-points-v2 = <&cpu_opp_table_a>;
205 };
206
207 cpu2: cpu@100 {
208 device_type = "cpu";
209 compatible = "arm,cortex-a57";
210 reg = <0x100>;
211 enable-method = "psci";
212 cpu-idle-states = <&CPU_SLEEP_0>;
213 clocks = <&infracfg CLK_INFRA_CA57SEL>,
214 <&apmixedsys CLK_APMIXED_MAINPLL>;
215 clock-names = "cpu", "intermediate";
216 operating-points-v2 = <&cpu_opp_table_b>;
217 };
218
219 cpu3: cpu@101 {
220 device_type = "cpu";
221 compatible = "arm,cortex-a57";
222 reg = <0x101>;
223 enable-method = "psci";
224 cpu-idle-states = <&CPU_SLEEP_0>;
225 clocks = <&infracfg CLK_INFRA_CA57SEL>,
226 <&apmixedsys CLK_APMIXED_MAINPLL>;
227 clock-names = "cpu", "intermediate";
228 operating-points-v2 = <&cpu_opp_table_b>;
229 };
230
231 &cpu0 {
232 proc-supply = <&mt6397_vpca15_reg>;
233 };
234
235 &cpu1 {
236 proc-supply = <&mt6397_vpca15_reg>;
237 };
238
239 &cpu2 {
240 proc-supply = <&da9211_vcpu_reg>;
241 sram-supply = <&mt6397_vsramca7_reg>;
242 };
243
244 &cpu3 {
245 proc-supply = <&da9211_vcpu_reg>;
246 sram-supply = <&mt6397_vsramca7_reg>;
247 };
diff --git a/Documentation/devicetree/bindings/power/rockchip-io-domain.txt b/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
index 43c21fb04564..4a4766e9c254 100644
--- a/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
+++ b/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
@@ -39,6 +39,8 @@ Required properties:
39 - "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains 39 - "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
40 - "rockchip,rk3399-io-voltage-domain" for rk3399 40 - "rockchip,rk3399-io-voltage-domain" for rk3399
41 - "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains 41 - "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains
42 - "rockchip,rv1108-io-voltage-domain" for rv1108
43 - "rockchip,rv1108-pmu-io-voltage-domain" for rv1108 pmu-domains
42 44
43Deprecated properties: 45Deprecated properties:
44- rockchip,grf: phandle to the syscon managing the "general register files" 46- rockchip,grf: phandle to the syscon managing the "general register files"
diff --git a/Documentation/power/states.txt b/Documentation/power/states.txt
deleted file mode 100644
index bc4548245a24..000000000000
--- a/Documentation/power/states.txt
+++ /dev/null
@@ -1,125 +0,0 @@
1System Power Management Sleep States
2
3(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
4
5The kernel supports up to four system sleep states generically, although three
6of them depend on the platform support code to implement the low-level details
7for each state.
8
9The states are represented by strings that can be read or written to the
10/sys/power/state file. Those strings may be "mem", "standby", "freeze" and
11"disk", where the last three always represent Power-On Suspend (if supported),
12Suspend-To-Idle and hibernation (Suspend-To-Disk), respectively.
13
14The meaning of the "mem" string is controlled by the /sys/power/mem_sleep file.
15It contains strings representing the available modes of system suspend that may
16be triggered by writing "mem" to /sys/power/state. These modes are "s2idle"
17(Suspend-To-Idle), "shallow" (Power-On Suspend) and "deep" (Suspend-To-RAM).
18The "s2idle" mode is always available, while the other ones are only available
19if supported by the platform (if not supported, the strings representing them
20are not present in /sys/power/mem_sleep). The string representing the suspend
21mode to be used subsequently is enclosed in square brackets. Writing one of
22the other strings present in /sys/power/mem_sleep to it causes the suspend mode
23to be used subsequently to change to the one represented by that string.
24
25Consequently, there are two ways to cause the system to go into the
26Suspend-To-Idle sleep state. The first one is to write "freeze" directly to
27/sys/power/state. The second one is to write "s2idle" to /sys/power/mem_sleep
28and then to write "mem" to /sys/power/state. Similarly, there are two ways
29to cause the system to go into the Power-On Suspend sleep state (the strings to
30write to the control files in that case are "standby" or "shallow" and "mem",
31respectively) if that state is supported by the platform. In turn, there is
32only one way to cause the system to go into the Suspend-To-RAM state (write
33"deep" into /sys/power/mem_sleep and "mem" into /sys/power/state).
34
35The default suspend mode (ie. the one to be used without writing anything into
36/sys/power/mem_sleep) is either "deep" (if Suspend-To-RAM is supported) or
37"s2idle", but it can be overridden by the value of the "mem_sleep_default"
38parameter in the kernel command line.
39
40The properties of all of the sleep states are described below.
41
42
43State: Suspend-To-Idle
44ACPI state: S0
45Label: "s2idle" ("freeze")
46
47This state is a generic, pure software, light-weight, system sleep state.
48It allows more energy to be saved relative to runtime idle by freezing user
49space and putting all I/O devices into low-power states (possibly
50lower-power than available at run time), such that the processors can
51spend more time in their idle states.
52
53This state can be used for platforms without Power-On Suspend/Suspend-to-RAM
54support, or it can be used in addition to Suspend-to-RAM to provide reduced
55resume latency. It is always supported.
56
57
58State: Standby / Power-On Suspend
59ACPI State: S1
60Label: "shallow" ("standby")
61
62This state, if supported, offers moderate, though real, power savings, while
63providing a relatively low-latency transition back to a working system. No
64operating state is lost (the CPU retains power), so the system easily starts up
65again where it left off.
66
67In addition to freezing user space and putting all I/O devices into low-power
68states, which is done for Suspend-To-Idle too, nonboot CPUs are taken offline
69and all low-level system functions are suspended during transitions into this
70state. For this reason, it should allow more energy to be saved relative to
71Suspend-To-Idle, but the resume latency will generally be greater than for that
72state.
73
74
75State: Suspend-to-RAM
76ACPI State: S3
77Label: "deep"
78
79This state, if supported, offers significant power savings as everything in the
80system is put into a low-power state, except for memory, which should be placed
81into the self-refresh mode to retain its contents. All of the steps carried out
82when entering Power-On Suspend are also carried out during transitions to STR.
83Additional operations may take place depending on the platform capabilities. In
84particular, on ACPI systems the kernel passes control to the BIOS (platform
85firmware) as the last step during STR transitions and that usually results in
86powering down some more low-level components that aren't directly controlled by
87the kernel.
88
89System and device state is saved and kept in memory. All devices are suspended
90and put into low-power states. In many cases, all peripheral buses lose power
91when entering STR, so devices must be able to handle the transition back to the
92"on" state.
93
94For at least ACPI, STR requires some minimal boot-strapping code to resume the
95system from it. This may be the case on other platforms too.
96
97
98State: Suspend-to-disk
99ACPI State: S4
100Label: "disk"
101
102This state offers the greatest power savings, and can be used even in
103the absence of low-level platform support for power management. This
104state operates similarly to Suspend-to-RAM, but includes a final step
105of writing memory contents to disk. On resume, this is read and memory
106is restored to its pre-suspend state.
107
108STD can be handled by the firmware or the kernel. If it is handled by
109the firmware, it usually requires a dedicated partition that must be
110setup via another operating system for it to use. Despite the
111inconvenience, this method requires minimal work by the kernel, since
112the firmware will also handle restoring memory contents on resume.
113
114For suspend-to-disk, a mechanism called 'swsusp' (Swap Suspend) is used
115to write memory contents to free swap space. swsusp has some restrictive
116requirements, but should work in most cases. Some, albeit outdated,
117documentation can be found in Documentation/power/swsusp.txt.
118Alternatively, userspace can do most of the actual suspend to disk work,
119see userland-swsusp.txt.
120
121Once memory state is written to disk, the system may either enter a
122low-power state (like ACPI S4), or it may simply power down. Powering
123down offers greater savings, and allows this mechanism to work on any
124system. However, entering a real low-power state allows the user to
125trigger wake up events (e.g. pressing a key or opening a laptop lid).
diff --git a/arch/arm/boot/dts/tango4-smp8758.dtsi b/arch/arm/boot/dts/tango4-smp8758.dtsi
index d2e65c46bcc7..eca33d568690 100644
--- a/arch/arm/boot/dts/tango4-smp8758.dtsi
+++ b/arch/arm/boot/dts/tango4-smp8758.dtsi
@@ -13,7 +13,6 @@
13 reg = <0>; 13 reg = <0>;
14 clocks = <&clkgen CPU_CLK>; 14 clocks = <&clkgen CPU_CLK>;
15 clock-latency = <1>; 15 clock-latency = <1>;
16 operating-points = <1215000 0 607500 0 405000 0 243000 0 135000 0>;
17 }; 16 };
18 17
19 cpu1: cpu@1 { 18 cpu1: cpu@1 {
diff --git a/arch/arm/mach-tegra/cpuidle-tegra114.c b/arch/arm/mach-tegra/cpuidle-tegra114.c
index d3aa9be16621..e3fbcfedf845 100644
--- a/arch/arm/mach-tegra/cpuidle-tegra114.c
+++ b/arch/arm/mach-tegra/cpuidle-tegra114.c
@@ -60,7 +60,7 @@ static int tegra114_idle_power_down(struct cpuidle_device *dev,
60 return index; 60 return index;
61} 61}
62 62
63static void tegra114_idle_enter_freeze(struct cpuidle_device *dev, 63static void tegra114_idle_enter_s2idle(struct cpuidle_device *dev,
64 struct cpuidle_driver *drv, 64 struct cpuidle_driver *drv,
65 int index) 65 int index)
66{ 66{
@@ -77,7 +77,7 @@ static struct cpuidle_driver tegra_idle_driver = {
77#ifdef CONFIG_PM_SLEEP 77#ifdef CONFIG_PM_SLEEP
78 [1] = { 78 [1] = {
79 .enter = tegra114_idle_power_down, 79 .enter = tegra114_idle_power_down,
80 .enter_freeze = tegra114_idle_enter_freeze, 80 .enter_s2idle = tegra114_idle_enter_s2idle,
81 .exit_latency = 500, 81 .exit_latency = 500,
82 .target_residency = 1000, 82 .target_residency = 1000,
83 .flags = CPUIDLE_FLAG_TIMER_STOP, 83 .flags = CPUIDLE_FLAG_TIMER_STOP,
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index fe3d2a40f311..2736e25e9dc6 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -48,6 +48,8 @@
48#define _COMPONENT ACPI_PROCESSOR_COMPONENT 48#define _COMPONENT ACPI_PROCESSOR_COMPONENT
49ACPI_MODULE_NAME("processor_idle"); 49ACPI_MODULE_NAME("processor_idle");
50 50
51#define ACPI_IDLE_STATE_START (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX) ? 1 : 0)
52
51static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER; 53static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER;
52module_param(max_cstate, uint, 0000); 54module_param(max_cstate, uint, 0000);
53static unsigned int nocst __read_mostly; 55static unsigned int nocst __read_mostly;
@@ -759,7 +761,7 @@ static int acpi_idle_enter(struct cpuidle_device *dev,
759 761
760 if (cx->type != ACPI_STATE_C1) { 762 if (cx->type != ACPI_STATE_C1) {
761 if (acpi_idle_fallback_to_c1(pr) && num_online_cpus() > 1) { 763 if (acpi_idle_fallback_to_c1(pr) && num_online_cpus() > 1) {
762 index = CPUIDLE_DRIVER_STATE_START; 764 index = ACPI_IDLE_STATE_START;
763 cx = per_cpu(acpi_cstate[index], dev->cpu); 765 cx = per_cpu(acpi_cstate[index], dev->cpu);
764 } else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) { 766 } else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) {
765 if (cx->bm_sts_skip || !acpi_idle_bm_check()) { 767 if (cx->bm_sts_skip || !acpi_idle_bm_check()) {
@@ -787,7 +789,7 @@ static int acpi_idle_enter(struct cpuidle_device *dev,
787 return index; 789 return index;
788} 790}
789 791
790static void acpi_idle_enter_freeze(struct cpuidle_device *dev, 792static void acpi_idle_enter_s2idle(struct cpuidle_device *dev,
791 struct cpuidle_driver *drv, int index) 793 struct cpuidle_driver *drv, int index)
792{ 794{
793 struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu); 795 struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
@@ -811,7 +813,7 @@ static void acpi_idle_enter_freeze(struct cpuidle_device *dev,
811static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr, 813static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr,
812 struct cpuidle_device *dev) 814 struct cpuidle_device *dev)
813{ 815{
814 int i, count = CPUIDLE_DRIVER_STATE_START; 816 int i, count = ACPI_IDLE_STATE_START;
815 struct acpi_processor_cx *cx; 817 struct acpi_processor_cx *cx;
816 818
817 if (max_cstate == 0) 819 if (max_cstate == 0)
@@ -838,7 +840,7 @@ static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr,
838 840
839static int acpi_processor_setup_cstates(struct acpi_processor *pr) 841static int acpi_processor_setup_cstates(struct acpi_processor *pr)
840{ 842{
841 int i, count = CPUIDLE_DRIVER_STATE_START; 843 int i, count;
842 struct acpi_processor_cx *cx; 844 struct acpi_processor_cx *cx;
843 struct cpuidle_state *state; 845 struct cpuidle_state *state;
844 struct cpuidle_driver *drv = &acpi_idle_driver; 846 struct cpuidle_driver *drv = &acpi_idle_driver;
@@ -846,6 +848,13 @@ static int acpi_processor_setup_cstates(struct acpi_processor *pr)
846 if (max_cstate == 0) 848 if (max_cstate == 0)
847 max_cstate = 1; 849 max_cstate = 1;
848 850
851 if (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX)) {
852 cpuidle_poll_state_init(drv);
853 count = 1;
854 } else {
855 count = 0;
856 }
857
849 for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) { 858 for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) {
850 cx = &pr->power.states[i]; 859 cx = &pr->power.states[i];
851 860
@@ -865,14 +874,14 @@ static int acpi_processor_setup_cstates(struct acpi_processor *pr)
865 drv->safe_state_index = count; 874 drv->safe_state_index = count;
866 } 875 }
867 /* 876 /*
868 * Halt-induced C1 is not good for ->enter_freeze, because it 877 * Halt-induced C1 is not good for ->enter_s2idle, because it
869 * re-enables interrupts on exit. Moreover, C1 is generally not 878 * re-enables interrupts on exit. Moreover, C1 is generally not
870 * particularly interesting from the suspend-to-idle angle, so 879 * particularly interesting from the suspend-to-idle angle, so
871 * avoid C1 and the situations in which we may need to fall back 880 * avoid C1 and the situations in which we may need to fall back
872 * to it altogether. 881 * to it altogether.
873 */ 882 */
874 if (cx->type != ACPI_STATE_C1 && !acpi_idle_fallback_to_c1(pr)) 883 if (cx->type != ACPI_STATE_C1 && !acpi_idle_fallback_to_c1(pr))
875 state->enter_freeze = acpi_idle_enter_freeze; 884 state->enter_s2idle = acpi_idle_enter_s2idle;
876 885
877 count++; 886 count++;
878 if (count == CPUIDLE_STATE_MAX) 887 if (count == CPUIDLE_STATE_MAX)
@@ -1289,7 +1298,7 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
1289 return -EINVAL; 1298 return -EINVAL;
1290 1299
1291 drv->safe_state_index = -1; 1300 drv->safe_state_index = -1;
1292 for (i = CPUIDLE_DRIVER_STATE_START; i < CPUIDLE_STATE_MAX; i++) { 1301 for (i = ACPI_IDLE_STATE_START; i < CPUIDLE_STATE_MAX; i++) {
1293 drv->states[i].name[0] = '\0'; 1302 drv->states[i].name[0] = '\0';
1294 drv->states[i].desc[0] = '\0'; 1303 drv->states[i].desc[0] = '\0';
1295 } 1304 }
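
The processor_idle.c changes above replace CPUIDLE_DRIVER_STATE_START with a local ACPI_IDLE_STATE_START, which is 1 only when CONFIG_ARCH_HAS_CPU_RELAX is set and the driver installs a polling state at index 0. The standalone C sketch below is not part of the patch; the ARCH_HAS_CPU_RELAX macro and the small state table are stand-ins for the kernel's Kconfig option and cpuidle_state array, used only to show how the index of the first real C-state shifts.

/* Illustrative userspace sketch: where real C-states land when a polling
 * state may occupy slot 0 (assumption: 3 C-states, poll state enabled). */
#include <stdio.h>

#define ARCH_HAS_CPU_RELAX 1                    /* pretend CONFIG_ARCH_HAS_CPU_RELAX=y */
#define ACPI_IDLE_STATE_START (ARCH_HAS_CPU_RELAX ? 1 : 0)

int main(void)
{
        const char *states[4] = { NULL, NULL, NULL, NULL };
        int count = ACPI_IDLE_STATE_START;      /* mirrors acpi_processor_setup_cpuidle_cx() */

        if (ARCH_HAS_CPU_RELAX)
                states[0] = "POLL";             /* the kernel's cpuidle_poll_state_init() fills this */

        states[count++] = "C1";
        states[count++] = "C2";
        states[count++] = "C3";

        for (int i = 0; i < count; i++)
                printf("state[%d] = %s\n", i, states[i]);
        return 0;
}
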
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index fa8243c5c062..09460d9f9208 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -669,6 +669,7 @@ static const struct acpi_device_id lps0_device_ids[] = {
669 669
670#define ACPI_LPS0_DSM_UUID "c4eb40a0-6cd2-11e2-bcfd-0800200c9a66" 670#define ACPI_LPS0_DSM_UUID "c4eb40a0-6cd2-11e2-bcfd-0800200c9a66"
671 671
672#define ACPI_LPS0_GET_DEVICE_CONSTRAINTS 1
672#define ACPI_LPS0_SCREEN_OFF 3 673#define ACPI_LPS0_SCREEN_OFF 3
673#define ACPI_LPS0_SCREEN_ON 4 674#define ACPI_LPS0_SCREEN_ON 4
674#define ACPI_LPS0_ENTRY 5 675#define ACPI_LPS0_ENTRY 5
@@ -680,6 +681,166 @@ static acpi_handle lps0_device_handle;
680static guid_t lps0_dsm_guid; 681static guid_t lps0_dsm_guid;
681static char lps0_dsm_func_mask; 682static char lps0_dsm_func_mask;
682 683
684/* Device constraint entry structure */
685struct lpi_device_info {
686 char *name;
687 int enabled;
688 union acpi_object *package;
689};
690
691/* Constraint package structure */
692struct lpi_device_constraint {
693 int uid;
694 int min_dstate;
695 int function_states;
696};
697
698struct lpi_constraints {
699 acpi_handle handle;
700 int min_dstate;
701};
702
703static struct lpi_constraints *lpi_constraints_table;
704static int lpi_constraints_table_size;
705
706static void lpi_device_get_constraints(void)
707{
708 union acpi_object *out_obj;
709 int i;
710
711 out_obj = acpi_evaluate_dsm_typed(lps0_device_handle, &lps0_dsm_guid,
712 1, ACPI_LPS0_GET_DEVICE_CONSTRAINTS,
713 NULL, ACPI_TYPE_PACKAGE);
714
715 acpi_handle_debug(lps0_device_handle, "_DSM function 1 eval %s\n",
716 out_obj ? "successful" : "failed");
717
718 if (!out_obj)
719 return;
720
721 lpi_constraints_table = kcalloc(out_obj->package.count,
722 sizeof(*lpi_constraints_table),
723 GFP_KERNEL);
724 if (!lpi_constraints_table)
725 goto free_acpi_buffer;
726
727 acpi_handle_debug(lps0_device_handle, "LPI: constraints list begin:\n");
728
729 for (i = 0; i < out_obj->package.count; i++) {
730 struct lpi_constraints *constraint;
731 acpi_status status;
732 union acpi_object *package = &out_obj->package.elements[i];
733 struct lpi_device_info info = { };
734 int package_count = 0, j;
735
736 if (!package)
737 continue;
738
739 for (j = 0; j < package->package.count; ++j) {
740 union acpi_object *element =
741 &(package->package.elements[j]);
742
743 switch (element->type) {
744 case ACPI_TYPE_INTEGER:
745 info.enabled = element->integer.value;
746 break;
747 case ACPI_TYPE_STRING:
748 info.name = element->string.pointer;
749 break;
750 case ACPI_TYPE_PACKAGE:
751 package_count = element->package.count;
752 info.package = element->package.elements;
753 break;
754 }
755 }
756
757 if (!info.enabled || !info.package || !info.name)
758 continue;
759
760 constraint = &lpi_constraints_table[lpi_constraints_table_size];
761
762 status = acpi_get_handle(NULL, info.name, &constraint->handle);
763 if (ACPI_FAILURE(status))
764 continue;
765
766 acpi_handle_debug(lps0_device_handle,
767 "index:%d Name:%s\n", i, info.name);
768
769 constraint->min_dstate = -1;
770
771 for (j = 0; j < package_count; ++j) {
772 union acpi_object *info_obj = &info.package[j];
773 union acpi_object *cnstr_pkg;
774 union acpi_object *obj;
775 struct lpi_device_constraint dev_info;
776
777 switch (info_obj->type) {
778 case ACPI_TYPE_INTEGER:
779 /* version */
780 break;
781 case ACPI_TYPE_PACKAGE:
782 if (info_obj->package.count < 2)
783 break;
784
785 cnstr_pkg = info_obj->package.elements;
786 obj = &cnstr_pkg[0];
787 dev_info.uid = obj->integer.value;
788 obj = &cnstr_pkg[1];
789 dev_info.min_dstate = obj->integer.value;
790
791 acpi_handle_debug(lps0_device_handle,
792 "uid:%d min_dstate:%s\n",
793 dev_info.uid,
794 acpi_power_state_string(dev_info.min_dstate));
795
796 constraint->min_dstate = dev_info.min_dstate;
797 break;
798 }
799 }
800
801 if (constraint->min_dstate < 0) {
802 acpi_handle_debug(lps0_device_handle,
803 "Incomplete constraint defined\n");
804 continue;
805 }
806
807 lpi_constraints_table_size++;
808 }
809
810 acpi_handle_debug(lps0_device_handle, "LPI: constraints list end\n");
811
812free_acpi_buffer:
813 ACPI_FREE(out_obj);
814}
815
816static void lpi_check_constraints(void)
817{
818 int i;
819
820 for (i = 0; i < lpi_constraints_table_size; ++i) {
821 struct acpi_device *adev;
822
823 if (acpi_bus_get_device(lpi_constraints_table[i].handle, &adev))
824 continue;
825
826 acpi_handle_debug(adev->handle,
827 "LPI: required min power state:%s current power state:%s\n",
828 acpi_power_state_string(lpi_constraints_table[i].min_dstate),
829 acpi_power_state_string(adev->power.state));
830
831 if (!adev->flags.power_manageable) {
 832 acpi_handle_info(adev->handle, "LPI: Device not power manageable\n");
833 continue;
834 }
835
836 if (adev->power.state < lpi_constraints_table[i].min_dstate)
837 acpi_handle_info(adev->handle,
838 "LPI: Constraint not met; min power state:%s current power state:%s\n",
839 acpi_power_state_string(lpi_constraints_table[i].min_dstate),
840 acpi_power_state_string(adev->power.state));
841 }
842}
843
683static void acpi_sleep_run_lps0_dsm(unsigned int func) 844static void acpi_sleep_run_lps0_dsm(unsigned int func)
684{ 845{
685 union acpi_object *out_obj; 846 union acpi_object *out_obj;
@@ -714,6 +875,12 @@ static int lps0_device_attach(struct acpi_device *adev,
714 if ((bitmask & ACPI_S2IDLE_FUNC_MASK) == ACPI_S2IDLE_FUNC_MASK) { 875 if ((bitmask & ACPI_S2IDLE_FUNC_MASK) == ACPI_S2IDLE_FUNC_MASK) {
715 lps0_dsm_func_mask = bitmask; 876 lps0_dsm_func_mask = bitmask;
716 lps0_device_handle = adev->handle; 877 lps0_device_handle = adev->handle;
878 /*
879 * Use suspend-to-idle by default if the default
880 * suspend mode was not set from the command line.
881 */
882 if (mem_sleep_default > PM_SUSPEND_MEM)
883 mem_sleep_current = PM_SUSPEND_TO_IDLE;
717 } 884 }
718 885
719 acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n", 886 acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
@@ -723,6 +890,9 @@ static int lps0_device_attach(struct acpi_device *adev,
723 "_DSM function 0 evaluation failed\n"); 890 "_DSM function 0 evaluation failed\n");
724 } 891 }
725 ACPI_FREE(out_obj); 892 ACPI_FREE(out_obj);
893
894 lpi_device_get_constraints();
895
726 return 0; 896 return 0;
727} 897}
728 898
@@ -731,14 +901,14 @@ static struct acpi_scan_handler lps0_handler = {
731 .attach = lps0_device_attach, 901 .attach = lps0_device_attach,
732}; 902};
733 903
734static int acpi_freeze_begin(void) 904static int acpi_s2idle_begin(void)
735{ 905{
736 acpi_scan_lock_acquire(); 906 acpi_scan_lock_acquire();
737 s2idle_in_progress = true; 907 s2idle_in_progress = true;
738 return 0; 908 return 0;
739} 909}
740 910
741static int acpi_freeze_prepare(void) 911static int acpi_s2idle_prepare(void)
742{ 912{
743 if (lps0_device_handle) { 913 if (lps0_device_handle) {
744 acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF); 914 acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
@@ -758,8 +928,12 @@ static int acpi_freeze_prepare(void)
758 return 0; 928 return 0;
759} 929}
760 930
761static void acpi_freeze_wake(void) 931static void acpi_s2idle_wake(void)
762{ 932{
933
934 if (pm_debug_messages_on)
935 lpi_check_constraints();
936
763 /* 937 /*
764 * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means 938 * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means
765 * that the SCI has triggered while suspended, so cancel the wakeup in 939 * that the SCI has triggered while suspended, so cancel the wakeup in
@@ -772,7 +946,7 @@ static void acpi_freeze_wake(void)
772 } 946 }
773} 947}
774 948
775static void acpi_freeze_sync(void) 949static void acpi_s2idle_sync(void)
776{ 950{
777 /* 951 /*
778 * Process all pending events in case there are any wakeup ones. 952 * Process all pending events in case there are any wakeup ones.
@@ -785,7 +959,7 @@ static void acpi_freeze_sync(void)
785 s2idle_wakeup = false; 959 s2idle_wakeup = false;
786} 960}
787 961
788static void acpi_freeze_restore(void) 962static void acpi_s2idle_restore(void)
789{ 963{
790 if (acpi_sci_irq_valid()) 964 if (acpi_sci_irq_valid())
791 disable_irq_wake(acpi_sci_irq); 965 disable_irq_wake(acpi_sci_irq);
@@ -798,19 +972,19 @@ static void acpi_freeze_restore(void)
798 } 972 }
799} 973}
800 974
801static void acpi_freeze_end(void) 975static void acpi_s2idle_end(void)
802{ 976{
803 s2idle_in_progress = false; 977 s2idle_in_progress = false;
804 acpi_scan_lock_release(); 978 acpi_scan_lock_release();
805} 979}
806 980
807static const struct platform_freeze_ops acpi_freeze_ops = { 981static const struct platform_s2idle_ops acpi_s2idle_ops = {
808 .begin = acpi_freeze_begin, 982 .begin = acpi_s2idle_begin,
809 .prepare = acpi_freeze_prepare, 983 .prepare = acpi_s2idle_prepare,
810 .wake = acpi_freeze_wake, 984 .wake = acpi_s2idle_wake,
811 .sync = acpi_freeze_sync, 985 .sync = acpi_s2idle_sync,
812 .restore = acpi_freeze_restore, 986 .restore = acpi_s2idle_restore,
813 .end = acpi_freeze_end, 987 .end = acpi_s2idle_end,
814}; 988};
815 989
816static void acpi_sleep_suspend_setup(void) 990static void acpi_sleep_suspend_setup(void)
@@ -825,7 +999,7 @@ static void acpi_sleep_suspend_setup(void)
825 &acpi_suspend_ops_old : &acpi_suspend_ops); 999 &acpi_suspend_ops_old : &acpi_suspend_ops);
826 1000
827 acpi_scan_add_handler(&lps0_handler); 1001 acpi_scan_add_handler(&lps0_handler);
828 freeze_set_ops(&acpi_freeze_ops); 1002 s2idle_set_ops(&acpi_s2idle_ops);
829} 1003}
830 1004
831#else /* !CONFIG_SUSPEND */ 1005#else /* !CONFIG_SUSPEND */
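
The sleep.c additions above build a table of Low Power S0 Idle (LPI) device constraints from _DSM function 1 and, when PM debug messages are enabled, compare each device's current power state against its required minimum D-state on wakeup. A minimal userspace sketch of that comparison follows; it is not part of the patch, and the device names, D-state values and table contents are invented purely for illustration.

/* Illustrative sketch of the lpi_check_constraints() comparison. */
#include <stdio.h>

struct lpi_constraint { const char *name; int min_dstate; int cur_dstate; };

int main(void)
{
        /* 0 = D0 ... 3 = D3; entries are made up for the example */
        struct lpi_constraint table[] = {
                { "GPU0", 3, 3 },
                { "XHCI", 3, 0 },       /* violates its constraint */
        };

        for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
                if (table[i].cur_dstate < table[i].min_dstate)
                        printf("%s: constraint not met (min D%d, current D%d)\n",
                               table[i].name, table[i].min_dstate, table[i].cur_dstate);
                else
                        printf("%s: ok (D%d >= D%d)\n",
                               table[i].name, table[i].cur_dstate, table[i].min_dstate);
        }
        return 0;
}
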
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 60303aa28587..e8ca5e2cf1e5 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -209,6 +209,34 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
209 smp_mb__after_atomic(); 209 smp_mb__after_atomic();
210} 210}
211 211
212#ifdef CONFIG_DEBUG_FS
213static void genpd_update_accounting(struct generic_pm_domain *genpd)
214{
215 ktime_t delta, now;
216
217 now = ktime_get();
218 delta = ktime_sub(now, genpd->accounting_time);
219
220 /*
221 * If genpd->status is active, it means we are just
222 * out of off and so update the idle time and vice
223 * versa.
224 */
225 if (genpd->status == GPD_STATE_ACTIVE) {
226 int state_idx = genpd->state_idx;
227
228 genpd->states[state_idx].idle_time =
229 ktime_add(genpd->states[state_idx].idle_time, delta);
230 } else {
231 genpd->on_time = ktime_add(genpd->on_time, delta);
232 }
233
234 genpd->accounting_time = now;
235}
236#else
237static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
238#endif
239
212static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed) 240static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
213{ 241{
214 unsigned int state_idx = genpd->state_idx; 242 unsigned int state_idx = genpd->state_idx;
@@ -361,6 +389,7 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
361 } 389 }
362 390
363 genpd->status = GPD_STATE_POWER_OFF; 391 genpd->status = GPD_STATE_POWER_OFF;
392 genpd_update_accounting(genpd);
364 393
365 list_for_each_entry(link, &genpd->slave_links, slave_node) { 394 list_for_each_entry(link, &genpd->slave_links, slave_node) {
366 genpd_sd_counter_dec(link->master); 395 genpd_sd_counter_dec(link->master);
@@ -413,6 +442,8 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
413 goto err; 442 goto err;
414 443
415 genpd->status = GPD_STATE_ACTIVE; 444 genpd->status = GPD_STATE_ACTIVE;
445 genpd_update_accounting(genpd);
446
416 return 0; 447 return 0;
417 448
418 err: 449 err:
@@ -1540,6 +1571,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
1540 genpd->max_off_time_changed = true; 1571 genpd->max_off_time_changed = true;
1541 genpd->provider = NULL; 1572 genpd->provider = NULL;
1542 genpd->has_provider = false; 1573 genpd->has_provider = false;
1574 genpd->accounting_time = ktime_get();
1543 genpd->domain.ops.runtime_suspend = genpd_runtime_suspend; 1575 genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
1544 genpd->domain.ops.runtime_resume = genpd_runtime_resume; 1576 genpd->domain.ops.runtime_resume = genpd_runtime_resume;
1545 genpd->domain.ops.prepare = pm_genpd_prepare; 1577 genpd->domain.ops.prepare = pm_genpd_prepare;
@@ -1743,7 +1775,7 @@ static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
1743 mutex_lock(&of_genpd_mutex); 1775 mutex_lock(&of_genpd_mutex);
1744 list_add(&cp->link, &of_genpd_providers); 1776 list_add(&cp->link, &of_genpd_providers);
1745 mutex_unlock(&of_genpd_mutex); 1777 mutex_unlock(&of_genpd_mutex);
1746 pr_debug("Added domain provider from %s\n", np->full_name); 1778 pr_debug("Added domain provider from %pOF\n", np);
1747 1779
1748 return 0; 1780 return 0;
1749} 1781}
@@ -2149,16 +2181,16 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
2149 err = of_property_read_u32(state_node, "entry-latency-us", 2181 err = of_property_read_u32(state_node, "entry-latency-us",
2150 &entry_latency); 2182 &entry_latency);
2151 if (err) { 2183 if (err) {
2152 pr_debug(" * %s missing entry-latency-us property\n", 2184 pr_debug(" * %pOF missing entry-latency-us property\n",
2153 state_node->full_name); 2185 state_node);
2154 return -EINVAL; 2186 return -EINVAL;
2155 } 2187 }
2156 2188
2157 err = of_property_read_u32(state_node, "exit-latency-us", 2189 err = of_property_read_u32(state_node, "exit-latency-us",
2158 &exit_latency); 2190 &exit_latency);
2159 if (err) { 2191 if (err) {
2160 pr_debug(" * %s missing exit-latency-us property\n", 2192 pr_debug(" * %pOF missing exit-latency-us property\n",
2161 state_node->full_name); 2193 state_node);
2162 return -EINVAL; 2194 return -EINVAL;
2163 } 2195 }
2164 2196
@@ -2212,8 +2244,8 @@ int of_genpd_parse_idle_states(struct device_node *dn,
2212 ret = genpd_parse_state(&st[i++], np); 2244 ret = genpd_parse_state(&st[i++], np);
2213 if (ret) { 2245 if (ret) {
2214 pr_err 2246 pr_err
2215 ("Parsing idle state node %s failed with err %d\n", 2247 ("Parsing idle state node %pOF failed with err %d\n",
2216 np->full_name, ret); 2248 np, ret);
2217 of_node_put(np); 2249 of_node_put(np);
2218 kfree(st); 2250 kfree(st);
2219 return ret; 2251 return ret;
@@ -2327,7 +2359,7 @@ exit:
2327 return 0; 2359 return 0;
2328} 2360}
2329 2361
2330static int pm_genpd_summary_show(struct seq_file *s, void *data) 2362static int genpd_summary_show(struct seq_file *s, void *data)
2331{ 2363{
2332 struct generic_pm_domain *genpd; 2364 struct generic_pm_domain *genpd;
2333 int ret = 0; 2365 int ret = 0;
@@ -2350,21 +2382,187 @@ static int pm_genpd_summary_show(struct seq_file *s, void *data)
2350 return ret; 2382 return ret;
2351} 2383}
2352 2384
2353static int pm_genpd_summary_open(struct inode *inode, struct file *file) 2385static int genpd_status_show(struct seq_file *s, void *data)
2354{ 2386{
2355 return single_open(file, pm_genpd_summary_show, NULL); 2387 static const char * const status_lookup[] = {
2388 [GPD_STATE_ACTIVE] = "on",
2389 [GPD_STATE_POWER_OFF] = "off"
2390 };
2391
2392 struct generic_pm_domain *genpd = s->private;
2393 int ret = 0;
2394
2395 ret = genpd_lock_interruptible(genpd);
2396 if (ret)
2397 return -ERESTARTSYS;
2398
2399 if (WARN_ON_ONCE(genpd->status >= ARRAY_SIZE(status_lookup)))
2400 goto exit;
2401
2402 if (genpd->status == GPD_STATE_POWER_OFF)
2403 seq_printf(s, "%s-%u\n", status_lookup[genpd->status],
2404 genpd->state_idx);
2405 else
2406 seq_printf(s, "%s\n", status_lookup[genpd->status]);
2407exit:
2408 genpd_unlock(genpd);
2409 return ret;
2356} 2410}
2357 2411
2358static const struct file_operations pm_genpd_summary_fops = { 2412static int genpd_sub_domains_show(struct seq_file *s, void *data)
2359 .open = pm_genpd_summary_open, 2413{
2360 .read = seq_read, 2414 struct generic_pm_domain *genpd = s->private;
2361 .llseek = seq_lseek, 2415 struct gpd_link *link;
2362 .release = single_release, 2416 int ret = 0;
2363}; 2417
2418 ret = genpd_lock_interruptible(genpd);
2419 if (ret)
2420 return -ERESTARTSYS;
2421
2422 list_for_each_entry(link, &genpd->master_links, master_node)
2423 seq_printf(s, "%s\n", link->slave->name);
2424
2425 genpd_unlock(genpd);
2426 return ret;
2427}
2428
2429static int genpd_idle_states_show(struct seq_file *s, void *data)
2430{
2431 struct generic_pm_domain *genpd = s->private;
2432 unsigned int i;
2433 int ret = 0;
2434
2435 ret = genpd_lock_interruptible(genpd);
2436 if (ret)
2437 return -ERESTARTSYS;
2438
2439 seq_puts(s, "State Time Spent(ms)\n");
2440
2441 for (i = 0; i < genpd->state_count; i++) {
2442 ktime_t delta = 0;
2443 s64 msecs;
2444
2445 if ((genpd->status == GPD_STATE_POWER_OFF) &&
2446 (genpd->state_idx == i))
2447 delta = ktime_sub(ktime_get(), genpd->accounting_time);
2448
2449 msecs = ktime_to_ms(
2450 ktime_add(genpd->states[i].idle_time, delta));
2451 seq_printf(s, "S%-13i %lld\n", i, msecs);
2452 }
2453
2454 genpd_unlock(genpd);
2455 return ret;
2456}
2457
2458static int genpd_active_time_show(struct seq_file *s, void *data)
2459{
2460 struct generic_pm_domain *genpd = s->private;
2461 ktime_t delta = 0;
2462 int ret = 0;
2463
2464 ret = genpd_lock_interruptible(genpd);
2465 if (ret)
2466 return -ERESTARTSYS;
2467
2468 if (genpd->status == GPD_STATE_ACTIVE)
2469 delta = ktime_sub(ktime_get(), genpd->accounting_time);
2470
2471 seq_printf(s, "%lld ms\n", ktime_to_ms(
2472 ktime_add(genpd->on_time, delta)));
2473
2474 genpd_unlock(genpd);
2475 return ret;
2476}
2477
2478static int genpd_total_idle_time_show(struct seq_file *s, void *data)
2479{
2480 struct generic_pm_domain *genpd = s->private;
2481 ktime_t delta = 0, total = 0;
2482 unsigned int i;
2483 int ret = 0;
2484
2485 ret = genpd_lock_interruptible(genpd);
2486 if (ret)
2487 return -ERESTARTSYS;
2488
2489 for (i = 0; i < genpd->state_count; i++) {
2490
2491 if ((genpd->status == GPD_STATE_POWER_OFF) &&
2492 (genpd->state_idx == i))
2493 delta = ktime_sub(ktime_get(), genpd->accounting_time);
2494
2495 total = ktime_add(total, genpd->states[i].idle_time);
2496 }
2497 total = ktime_add(total, delta);
2498
2499 seq_printf(s, "%lld ms\n", ktime_to_ms(total));
2500
2501 genpd_unlock(genpd);
2502 return ret;
2503}
2504
2505
2506static int genpd_devices_show(struct seq_file *s, void *data)
2507{
2508 struct generic_pm_domain *genpd = s->private;
2509 struct pm_domain_data *pm_data;
2510 const char *kobj_path;
2511 int ret = 0;
2512
2513 ret = genpd_lock_interruptible(genpd);
2514 if (ret)
2515 return -ERESTARTSYS;
2516
2517 list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
2518 kobj_path = kobject_get_path(&pm_data->dev->kobj,
2519 genpd_is_irq_safe(genpd) ?
2520 GFP_ATOMIC : GFP_KERNEL);
2521 if (kobj_path == NULL)
2522 continue;
2523
2524 seq_printf(s, "%s\n", kobj_path);
2525 kfree(kobj_path);
2526 }
2527
2528 genpd_unlock(genpd);
2529 return ret;
2530}
2531
2532#define define_genpd_open_function(name) \
2533static int genpd_##name##_open(struct inode *inode, struct file *file) \
2534{ \
2535 return single_open(file, genpd_##name##_show, inode->i_private); \
2536}
2537
2538define_genpd_open_function(summary);
2539define_genpd_open_function(status);
2540define_genpd_open_function(sub_domains);
2541define_genpd_open_function(idle_states);
2542define_genpd_open_function(active_time);
2543define_genpd_open_function(total_idle_time);
2544define_genpd_open_function(devices);
2545
2546#define define_genpd_debugfs_fops(name) \
2547static const struct file_operations genpd_##name##_fops = { \
2548 .open = genpd_##name##_open, \
2549 .read = seq_read, \
2550 .llseek = seq_lseek, \
2551 .release = single_release, \
2552}
2553
2554define_genpd_debugfs_fops(summary);
2555define_genpd_debugfs_fops(status);
2556define_genpd_debugfs_fops(sub_domains);
2557define_genpd_debugfs_fops(idle_states);
2558define_genpd_debugfs_fops(active_time);
2559define_genpd_debugfs_fops(total_idle_time);
2560define_genpd_debugfs_fops(devices);
2364 2561
2365static int __init pm_genpd_debug_init(void) 2562static int __init pm_genpd_debug_init(void)
2366{ 2563{
2367 struct dentry *d; 2564 struct dentry *d;
2565 struct generic_pm_domain *genpd;
2368 2566
2369 pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL); 2567 pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
2370 2568
@@ -2372,10 +2570,29 @@ static int __init pm_genpd_debug_init(void)
2372 return -ENOMEM; 2570 return -ENOMEM;
2373 2571
2374 d = debugfs_create_file("pm_genpd_summary", S_IRUGO, 2572 d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
2375 pm_genpd_debugfs_dir, NULL, &pm_genpd_summary_fops); 2573 pm_genpd_debugfs_dir, NULL, &genpd_summary_fops);
2376 if (!d) 2574 if (!d)
2377 return -ENOMEM; 2575 return -ENOMEM;
2378 2576
2577 list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
2578 d = debugfs_create_dir(genpd->name, pm_genpd_debugfs_dir);
2579 if (!d)
2580 return -ENOMEM;
2581
2582 debugfs_create_file("current_state", 0444,
2583 d, genpd, &genpd_status_fops);
2584 debugfs_create_file("sub_domains", 0444,
2585 d, genpd, &genpd_sub_domains_fops);
2586 debugfs_create_file("idle_states", 0444,
2587 d, genpd, &genpd_idle_states_fops);
2588 debugfs_create_file("active_time", 0444,
2589 d, genpd, &genpd_active_time_fops);
2590 debugfs_create_file("total_idle_time", 0444,
2591 d, genpd, &genpd_total_idle_time_fops);
2592 debugfs_create_file("devices", 0444,
2593 d, genpd, &genpd_devices_fops);
2594 }
2595
2379 return 0; 2596 return 0;
2380} 2597}
2381late_initcall(pm_genpd_debug_init); 2598late_initcall(pm_genpd_debug_init);
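
The domain.c changes above add per-domain accounting: on every on/off transition the time elapsed since the previous transition is credited either to the domain's on_time or to the idle_time of the current state, and the totals are exposed through new per-domain debugfs files. The standalone sketch below is not part of the patch; plain integer timestamps replace ktime_t and the struct is a stand-in for generic_pm_domain, but the accumulation scheme is the same.

/* Illustrative sketch of the genpd_update_accounting() idea. */
#include <stdio.h>

enum gpd_status { GPD_STATE_ACTIVE, GPD_STATE_POWER_OFF };

struct fake_genpd {
        enum gpd_status status;
        long long on_time;              /* total time spent powered on  */
        long long idle_time;            /* total time spent powered off */
        long long accounting_time;      /* timestamp of the last update */
};

/* Called on every on<->off transition, *after* status has been updated. */
static void update_accounting(struct fake_genpd *pd, long long now)
{
        long long delta = now - pd->accounting_time;

        if (pd->status == GPD_STATE_ACTIVE)
                pd->idle_time += delta; /* we just left the off state */
        else
                pd->on_time += delta;   /* we just left the on state  */

        pd->accounting_time = now;
}

int main(void)
{
        struct fake_genpd pd = { .status = GPD_STATE_POWER_OFF, .accounting_time = 0 };

        pd.status = GPD_STATE_ACTIVE;    update_accounting(&pd, 100);
        pd.status = GPD_STATE_POWER_OFF; update_accounting(&pd, 350);
        pd.status = GPD_STATE_ACTIVE;    update_accounting(&pd, 400);

        printf("on_time=%lld idle_time=%lld\n", pd.on_time, pd.idle_time);  /* 250 and 150 */
        return 0;
}

Updating the accounting only after the status field has been flipped is what lets a single delta be attributed to the state that just ended.
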
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index c99f8730de82..ea1732ed7a9d 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -418,8 +418,7 @@ static void pm_dev_err(struct device *dev, pm_message_t state, const char *info,
418 dev_name(dev), pm_verb(state.event), info, error); 418 dev_name(dev), pm_verb(state.event), info, error);
419} 419}
420 420
421#ifdef CONFIG_PM_DEBUG 421static void dpm_show_time(ktime_t starttime, pm_message_t state, int error,
422static void dpm_show_time(ktime_t starttime, pm_message_t state,
423 const char *info) 422 const char *info)
424{ 423{
425 ktime_t calltime; 424 ktime_t calltime;
@@ -432,14 +431,12 @@ static void dpm_show_time(ktime_t starttime, pm_message_t state,
432 usecs = usecs64; 431 usecs = usecs64;
433 if (usecs == 0) 432 if (usecs == 0)
434 usecs = 1; 433 usecs = 1;
435 pr_info("PM: %s%s%s of devices complete after %ld.%03ld msecs\n", 434
436 info ?: "", info ? " " : "", pm_verb(state.event), 435 pm_pr_dbg("%s%s%s of devices %s after %ld.%03ld msecs\n",
437 usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC); 436 info ?: "", info ? " " : "", pm_verb(state.event),
437 error ? "aborted" : "complete",
438 usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC);
438} 439}
439#else
440static inline void dpm_show_time(ktime_t starttime, pm_message_t state,
441 const char *info) {}
442#endif /* CONFIG_PM_DEBUG */
443 440
444static int dpm_run_callback(pm_callback_t cb, struct device *dev, 441static int dpm_run_callback(pm_callback_t cb, struct device *dev,
445 pm_message_t state, const char *info) 442 pm_message_t state, const char *info)
@@ -602,14 +599,7 @@ static void async_resume_noirq(void *data, async_cookie_t cookie)
602 put_device(dev); 599 put_device(dev);
603} 600}
604 601
605/** 602void dpm_noirq_resume_devices(pm_message_t state)
606 * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
607 * @state: PM transition of the system being carried out.
608 *
609 * Call the "noirq" resume handlers for all devices in dpm_noirq_list and
610 * enable device drivers to receive interrupts.
611 */
612void dpm_resume_noirq(pm_message_t state)
613{ 603{
614 struct device *dev; 604 struct device *dev;
615 ktime_t starttime = ktime_get(); 605 ktime_t starttime = ktime_get();
@@ -654,11 +644,28 @@ void dpm_resume_noirq(pm_message_t state)
654 } 644 }
655 mutex_unlock(&dpm_list_mtx); 645 mutex_unlock(&dpm_list_mtx);
656 async_synchronize_full(); 646 async_synchronize_full();
657 dpm_show_time(starttime, state, "noirq"); 647 dpm_show_time(starttime, state, 0, "noirq");
648 trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
649}
650
651void dpm_noirq_end(void)
652{
658 resume_device_irqs(); 653 resume_device_irqs();
659 device_wakeup_disarm_wake_irqs(); 654 device_wakeup_disarm_wake_irqs();
660 cpuidle_resume(); 655 cpuidle_resume();
661 trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false); 656}
657
658/**
659 * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
660 * @state: PM transition of the system being carried out.
661 *
662 * Invoke the "noirq" resume callbacks for all devices in dpm_noirq_list and
663 * allow device drivers' interrupt handlers to be called.
664 */
665void dpm_resume_noirq(pm_message_t state)
666{
667 dpm_noirq_resume_devices(state);
668 dpm_noirq_end();
662} 669}
663 670
664/** 671/**
@@ -776,7 +783,7 @@ void dpm_resume_early(pm_message_t state)
776 } 783 }
777 mutex_unlock(&dpm_list_mtx); 784 mutex_unlock(&dpm_list_mtx);
778 async_synchronize_full(); 785 async_synchronize_full();
779 dpm_show_time(starttime, state, "early"); 786 dpm_show_time(starttime, state, 0, "early");
780 trace_suspend_resume(TPS("dpm_resume_early"), state.event, false); 787 trace_suspend_resume(TPS("dpm_resume_early"), state.event, false);
781} 788}
782 789
@@ -948,7 +955,7 @@ void dpm_resume(pm_message_t state)
948 } 955 }
949 mutex_unlock(&dpm_list_mtx); 956 mutex_unlock(&dpm_list_mtx);
950 async_synchronize_full(); 957 async_synchronize_full();
951 dpm_show_time(starttime, state, NULL); 958 dpm_show_time(starttime, state, 0, NULL);
952 959
953 cpufreq_resume(); 960 cpufreq_resume();
954 trace_suspend_resume(TPS("dpm_resume"), state.event, false); 961 trace_suspend_resume(TPS("dpm_resume"), state.event, false);
@@ -1098,6 +1105,11 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
1098 if (async_error) 1105 if (async_error)
1099 goto Complete; 1106 goto Complete;
1100 1107
1108 if (pm_wakeup_pending()) {
1109 async_error = -EBUSY;
1110 goto Complete;
1111 }
1112
1101 if (dev->power.syscore || dev->power.direct_complete) 1113 if (dev->power.syscore || dev->power.direct_complete)
1102 goto Complete; 1114 goto Complete;
1103 1115
@@ -1158,22 +1170,19 @@ static int device_suspend_noirq(struct device *dev)
1158 return __device_suspend_noirq(dev, pm_transition, false); 1170 return __device_suspend_noirq(dev, pm_transition, false);
1159} 1171}
1160 1172
1161/** 1173void dpm_noirq_begin(void)
1162 * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices. 1174{
1163 * @state: PM transition of the system being carried out. 1175 cpuidle_pause();
1164 * 1176 device_wakeup_arm_wake_irqs();
1165 * Prevent device drivers from receiving interrupts and call the "noirq" suspend 1177 suspend_device_irqs();
1166 * handlers for all non-sysdev devices. 1178}
1167 */ 1179
1168int dpm_suspend_noirq(pm_message_t state) 1180int dpm_noirq_suspend_devices(pm_message_t state)
1169{ 1181{
1170 ktime_t starttime = ktime_get(); 1182 ktime_t starttime = ktime_get();
1171 int error = 0; 1183 int error = 0;
1172 1184
1173 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true); 1185 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true);
1174 cpuidle_pause();
1175 device_wakeup_arm_wake_irqs();
1176 suspend_device_irqs();
1177 mutex_lock(&dpm_list_mtx); 1186 mutex_lock(&dpm_list_mtx);
1178 pm_transition = state; 1187 pm_transition = state;
1179 async_error = 0; 1188 async_error = 0;
@@ -1208,15 +1217,32 @@ int dpm_suspend_noirq(pm_message_t state)
1208 if (error) { 1217 if (error) {
1209 suspend_stats.failed_suspend_noirq++; 1218 suspend_stats.failed_suspend_noirq++;
1210 dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ); 1219 dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
1211 dpm_resume_noirq(resume_event(state));
1212 } else {
1213 dpm_show_time(starttime, state, "noirq");
1214 } 1220 }
1221 dpm_show_time(starttime, state, error, "noirq");
1215 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, false); 1222 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, false);
1216 return error; 1223 return error;
1217} 1224}
1218 1225
1219/** 1226/**
1227 * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices.
1228 * @state: PM transition of the system being carried out.
1229 *
1230 * Prevent device drivers' interrupt handlers from being called and invoke
1231 * "noirq" suspend callbacks for all non-sysdev devices.
1232 */
1233int dpm_suspend_noirq(pm_message_t state)
1234{
1235 int ret;
1236
1237 dpm_noirq_begin();
1238 ret = dpm_noirq_suspend_devices(state);
1239 if (ret)
1240 dpm_resume_noirq(resume_event(state));
1241
1242 return ret;
1243}
1244
1245/**
1220 * device_suspend_late - Execute a "late suspend" callback for given device. 1246 * device_suspend_late - Execute a "late suspend" callback for given device.
1221 * @dev: Device to handle. 1247 * @dev: Device to handle.
1222 * @state: PM transition of the system being carried out. 1248 * @state: PM transition of the system being carried out.
@@ -1350,9 +1376,8 @@ int dpm_suspend_late(pm_message_t state)
1350 suspend_stats.failed_suspend_late++; 1376 suspend_stats.failed_suspend_late++;
1351 dpm_save_failed_step(SUSPEND_SUSPEND_LATE); 1377 dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
1352 dpm_resume_early(resume_event(state)); 1378 dpm_resume_early(resume_event(state));
1353 } else {
1354 dpm_show_time(starttime, state, "late");
1355 } 1379 }
1380 dpm_show_time(starttime, state, error, "late");
1356 trace_suspend_resume(TPS("dpm_suspend_late"), state.event, false); 1381 trace_suspend_resume(TPS("dpm_suspend_late"), state.event, false);
1357 return error; 1382 return error;
1358} 1383}
@@ -1618,8 +1643,8 @@ int dpm_suspend(pm_message_t state)
1618 if (error) { 1643 if (error) {
1619 suspend_stats.failed_suspend++; 1644 suspend_stats.failed_suspend++;
1620 dpm_save_failed_step(SUSPEND_SUSPEND); 1645 dpm_save_failed_step(SUSPEND_SUSPEND);
1621 } else 1646 }
1622 dpm_show_time(starttime, state, NULL); 1647 dpm_show_time(starttime, state, error, NULL);
1623 trace_suspend_resume(TPS("dpm_suspend"), state.event, false); 1648 trace_suspend_resume(TPS("dpm_suspend"), state.event, false);
1624 return error; 1649 return error;
1625} 1650}
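
In main.c, dpm_show_time() gains an error argument, loses its CONFIG_PM_DEBUG wrapper and is emitted through pm_pr_dbg(), so one call site can report both successful and aborted phases. The sketch below is not part of the patch; a plain printf stands in for pm_pr_dbg() and the timings are made up, but the message format matches the new code.

/* Illustrative sketch of the reworked dpm_show_time() message. */
#include <stdio.h>

static void show_time(const char *info, const char *verb, int error, long usecs)
{
        printf("%s%s%s of devices %s after %ld.%03ld msecs\n",
               info ? info : "", info ? " " : "", verb,
               error ? "aborted" : "complete",
               usecs / 1000, usecs % 1000);
}

int main(void)
{
        show_time("noirq", "suspend", 0, 15340);   /* noirq suspend ... complete after 15.340 msecs */
        show_time("late", "suspend", -16, 2100);   /* late suspend ... aborted after 2.100 msecs    */
        show_time(NULL, "resume", 0, 980);         /* resume ... complete after 0.980 msecs         */
        return 0;
}
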
diff --git a/drivers/base/power/opp/of.c b/drivers/base/power/opp/of.c
index 57eec1ca0569..0b718886479b 100644
--- a/drivers/base/power/opp/of.c
+++ b/drivers/base/power/opp/of.c
@@ -248,15 +248,22 @@ void dev_pm_opp_of_remove_table(struct device *dev)
248} 248}
249EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table); 249EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
250 250
251/* Returns opp descriptor node for a device, caller must do of_node_put() */ 251/* Returns opp descriptor node for a device node, caller must
252struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev) 252 * do of_node_put() */
253static struct device_node *_opp_of_get_opp_desc_node(struct device_node *np)
253{ 254{
254 /* 255 /*
255 * There should be only ONE phandle present in "operating-points-v2" 256 * There should be only ONE phandle present in "operating-points-v2"
256 * property. 257 * property.
257 */ 258 */
258 259
259 return of_parse_phandle(dev->of_node, "operating-points-v2", 0); 260 return of_parse_phandle(np, "operating-points-v2", 0);
261}
262
263/* Returns opp descriptor node for a device, caller must do of_node_put() */
264struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
265{
266 return _opp_of_get_opp_desc_node(dev->of_node);
260} 267}
261EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node); 268EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node);
262 269
@@ -539,8 +546,12 @@ int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
539 546
540 ret = dev_pm_opp_of_add_table(cpu_dev); 547 ret = dev_pm_opp_of_add_table(cpu_dev);
541 if (ret) { 548 if (ret) {
542 pr_err("%s: couldn't find opp table for cpu:%d, %d\n", 549 /*
543 __func__, cpu, ret); 550 * OPP may get registered dynamically, don't print error
551 * message here.
552 */
553 pr_debug("%s: couldn't find opp table for cpu:%d, %d\n",
554 __func__, cpu, ret);
544 555
545 /* Free all other OPPs */ 556 /* Free all other OPPs */
546 dev_pm_opp_of_cpumask_remove_table(cpumask); 557 dev_pm_opp_of_cpumask_remove_table(cpumask);
@@ -572,8 +583,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
572int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, 583int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
573 struct cpumask *cpumask) 584 struct cpumask *cpumask)
574{ 585{
575 struct device_node *np, *tmp_np; 586 struct device_node *np, *tmp_np, *cpu_np;
576 struct device *tcpu_dev;
577 int cpu, ret = 0; 587 int cpu, ret = 0;
578 588
579 /* Get OPP descriptor node */ 589 /* Get OPP descriptor node */
@@ -593,19 +603,18 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
593 if (cpu == cpu_dev->id) 603 if (cpu == cpu_dev->id)
594 continue; 604 continue;
595 605
596 tcpu_dev = get_cpu_device(cpu); 606 cpu_np = of_get_cpu_node(cpu, NULL);
597 if (!tcpu_dev) { 607 if (!cpu_np) {
598 dev_err(cpu_dev, "%s: failed to get cpu%d device\n", 608 dev_err(cpu_dev, "%s: failed to get cpu%d node\n",
599 __func__, cpu); 609 __func__, cpu);
600 ret = -ENODEV; 610 ret = -ENOENT;
601 goto put_cpu_node; 611 goto put_cpu_node;
602 } 612 }
603 613
604 /* Get OPP descriptor node */ 614 /* Get OPP descriptor node */
605 tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev); 615 tmp_np = _opp_of_get_opp_desc_node(cpu_np);
606 if (!tmp_np) { 616 if (!tmp_np) {
607 dev_err(tcpu_dev, "%s: Couldn't find opp node.\n", 617 pr_err("%pOF: Couldn't find opp node\n", cpu_np);
608 __func__);
609 ret = -ENOENT; 618 ret = -ENOENT;
610 goto put_cpu_node; 619 goto put_cpu_node;
611 } 620 }
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index 144e6d8fafc8..cdd6f256da59 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -412,15 +412,17 @@ void device_set_wakeup_capable(struct device *dev, bool capable)
412 if (!!dev->power.can_wakeup == !!capable) 412 if (!!dev->power.can_wakeup == !!capable)
413 return; 413 return;
414 414
415 dev->power.can_wakeup = capable;
415 if (device_is_registered(dev) && !list_empty(&dev->power.entry)) { 416 if (device_is_registered(dev) && !list_empty(&dev->power.entry)) {
416 if (capable) { 417 if (capable) {
417 if (wakeup_sysfs_add(dev)) 418 int ret = wakeup_sysfs_add(dev);
418 return; 419
420 if (ret)
421 dev_info(dev, "Wakeup sysfs attributes not added\n");
419 } else { 422 } else {
420 wakeup_sysfs_remove(dev); 423 wakeup_sysfs_remove(dev);
421 } 424 }
422 } 425 }
423 dev->power.can_wakeup = capable;
424} 426}
425EXPORT_SYMBOL_GPL(device_set_wakeup_capable); 427EXPORT_SYMBOL_GPL(device_set_wakeup_capable);
426 428
@@ -863,7 +865,7 @@ bool pm_wakeup_pending(void)
863void pm_system_wakeup(void) 865void pm_system_wakeup(void)
864{ 866{
865 atomic_inc(&pm_abort_suspend); 867 atomic_inc(&pm_abort_suspend);
866 freeze_wake(); 868 s2idle_wake();
867} 869}
868EXPORT_SYMBOL_GPL(pm_system_wakeup); 870EXPORT_SYMBOL_GPL(pm_system_wakeup);
869 871
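
The wakeup.c hunk above sets power.can_wakeup before the sysfs attribute is created and only logs a failure to add it, instead of returning early with the flag still unset. The userspace sketch below is not part of the patch; the callback merely stands in for wakeup_sysfs_add() and a concurrent sysfs reader, to illustrate why the flag is written first.

/* Illustrative sketch: flag is set before the attribute becomes visible. */
#include <stdbool.h>
#include <stdio.h>

struct fake_dev { bool can_wakeup; };

/* Simulates wakeup_sysfs_add(): once this runs, readers may look at the flag. */
static int expose_attribute(struct fake_dev *dev)
{
        printf("attribute visible, can_wakeup=%d\n", dev->can_wakeup);
        return 0;
}

static void set_wakeup_capable(struct fake_dev *dev, bool capable)
{
        dev->can_wakeup = capable;      /* flag first, as in the patched code */
        if (capable) {
                if (expose_attribute(dev))
                        printf("Wakeup sysfs attributes not added\n");
        }
        /* the real code removes the attribute in the !capable case */
}

int main(void)
{
        struct fake_dev dev = { .can_wakeup = false };

        set_wakeup_capable(&dev, true); /* reader sees can_wakeup=1 */
        return 0;
}
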
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index 2011fec2d6ad..bdce4488ded1 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -71,15 +71,6 @@ config ARM_HIGHBANK_CPUFREQ
71 71
72 If in doubt, say N. 72 If in doubt, say N.
73 73
74config ARM_DB8500_CPUFREQ
75 tristate "ST-Ericsson DB8500 cpufreq" if COMPILE_TEST && !ARCH_U8500
76 default ARCH_U8500
77 depends on HAS_IOMEM
78 depends on !CPU_THERMAL || THERMAL
79 help
80 This adds the CPUFreq driver for ST-Ericsson Ux500 (DB8500) SoC
81 series.
82
83config ARM_IMX6Q_CPUFREQ 74config ARM_IMX6Q_CPUFREQ
84 tristate "Freescale i.MX6 cpufreq support" 75 tristate "Freescale i.MX6 cpufreq support"
85 depends on ARCH_MXC 76 depends on ARCH_MXC
@@ -96,14 +87,13 @@ config ARM_KIRKWOOD_CPUFREQ
96 This adds the CPUFreq driver for Marvell Kirkwood 87 This adds the CPUFreq driver for Marvell Kirkwood
97 SoCs. 88 SoCs.
98 89
99config ARM_MT8173_CPUFREQ 90config ARM_MEDIATEK_CPUFREQ
100 tristate "Mediatek MT8173 CPUFreq support" 91 tristate "CPU Frequency scaling support for MediaTek SoCs"
101 depends on ARCH_MEDIATEK && REGULATOR 92 depends on ARCH_MEDIATEK && REGULATOR
102 depends on ARM64 || (ARM_CPU_TOPOLOGY && COMPILE_TEST)
103 depends on !CPU_THERMAL || THERMAL 93 depends on !CPU_THERMAL || THERMAL
104 select PM_OPP 94 select PM_OPP
105 help 95 help
106 This adds the CPUFreq driver support for Mediatek MT8173 SoC. 96 This adds the CPUFreq driver support for MediaTek SoCs.
107 97
108config ARM_OMAP2PLUS_CPUFREQ 98config ARM_OMAP2PLUS_CPUFREQ
109 bool "TI OMAP2+" 99 bool "TI OMAP2+"
@@ -242,6 +232,11 @@ config ARM_STI_CPUFREQ
242 this config option if you wish to add CPUFreq support for STi based 232 this config option if you wish to add CPUFreq support for STi based
243 SoCs. 233 SoCs.
244 234
235config ARM_TANGO_CPUFREQ
236 bool
237 depends on CPUFREQ_DT && ARCH_TANGO
238 default y
239
245config ARM_TEGRA20_CPUFREQ 240config ARM_TEGRA20_CPUFREQ
246 bool "Tegra20 CPUFreq support" 241 bool "Tegra20 CPUFreq support"
247 depends on ARCH_TEGRA 242 depends on ARCH_TEGRA
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
index ab3a42cd29ef..c7af9b2a255e 100644
--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -53,12 +53,11 @@ obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o
53 53
54obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o 54obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o
55obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 55obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
56obj-$(CONFIG_ARM_DB8500_CPUFREQ) += dbx500-cpufreq.o
57obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o 56obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o
58obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o 57obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o
59obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o 58obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
60obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o 59obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o
61obj-$(CONFIG_ARM_MT8173_CPUFREQ) += mt8173-cpufreq.o 60obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ) += mediatek-cpufreq.o
62obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o 61obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o
63obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o 62obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o
64obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o 63obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o
@@ -75,6 +74,7 @@ obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o
75obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o 74obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o
76obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o 75obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o
77obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o 76obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o
77obj-$(CONFIG_ARM_TANGO_CPUFREQ) += tango-cpufreq.o
78obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o 78obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o
79obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o 79obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
80obj-$(CONFIG_ARM_TEGRA186_CPUFREQ) += tegra186-cpufreq.o 80obj-$(CONFIG_ARM_TEGRA186_CPUFREQ) += tegra186-cpufreq.o
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c
index ea6d62547b10..17504129fd77 100644
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -483,11 +483,8 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
483 return ret; 483 return ret;
484 } 484 }
485 485
486 if (arm_bL_ops->get_transition_latency) 486 policy->cpuinfo.transition_latency =
487 policy->cpuinfo.transition_latency = 487 arm_bL_ops->get_transition_latency(cpu_dev);
488 arm_bL_ops->get_transition_latency(cpu_dev);
489 else
490 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
491 488
492 if (is_bL_switching_enabled()) 489 if (is_bL_switching_enabled())
493 per_cpu(cpu_last_req_freq, policy->cpu) = clk_get_cpu_rate(policy->cpu); 490 per_cpu(cpu_last_req_freq, policy->cpu) = clk_get_cpu_rate(policy->cpu);
@@ -622,7 +619,8 @@ int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
622 return -EBUSY; 619 return -EBUSY;
623 } 620 }
624 621
625 if (!ops || !strlen(ops->name) || !ops->init_opp_table) { 622 if (!ops || !strlen(ops->name) || !ops->init_opp_table ||
623 !ops->get_transition_latency) {
626 pr_err("%s: Invalid arm_bL_ops, exiting\n", __func__); 624 pr_err("%s: Invalid arm_bL_ops, exiting\n", __func__);
627 return -ENODEV; 625 return -ENODEV;
628 } 626 }
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 10be285c9055..a1c3025f9df7 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -172,7 +172,6 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
172 return -EFAULT; 172 return -EFAULT;
173 } 173 }
174 174
175 cpumask_set_cpu(policy->cpu, policy->cpus);
176 cpu->cur_policy = policy; 175 cpu->cur_policy = policy;
177 176
178 /* Set policy->cur to max now. The governors will adjust later. */ 177 /* Set policy->cur to max now. The governors will adjust later. */
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index 1c262923fe58..a020da7940d6 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -9,11 +9,16 @@
9 9
10#include <linux/err.h> 10#include <linux/err.h>
11#include <linux/of.h> 11#include <linux/of.h>
12#include <linux/of_device.h>
12#include <linux/platform_device.h> 13#include <linux/platform_device.h>
13 14
14#include "cpufreq-dt.h" 15#include "cpufreq-dt.h"
15 16
16static const struct of_device_id machines[] __initconst = { 17/*
18 * Machines for which the cpufreq device is *always* created, mostly used for
19 * platforms using "operating-points" (V1) property.
20 */
21static const struct of_device_id whitelist[] __initconst = {
17 { .compatible = "allwinner,sun4i-a10", }, 22 { .compatible = "allwinner,sun4i-a10", },
18 { .compatible = "allwinner,sun5i-a10s", }, 23 { .compatible = "allwinner,sun5i-a10s", },
19 { .compatible = "allwinner,sun5i-a13", }, 24 { .compatible = "allwinner,sun5i-a13", },
@@ -22,7 +27,6 @@ static const struct of_device_id machines[] __initconst = {
22 { .compatible = "allwinner,sun6i-a31s", }, 27 { .compatible = "allwinner,sun6i-a31s", },
23 { .compatible = "allwinner,sun7i-a20", }, 28 { .compatible = "allwinner,sun7i-a20", },
24 { .compatible = "allwinner,sun8i-a23", }, 29 { .compatible = "allwinner,sun8i-a23", },
25 { .compatible = "allwinner,sun8i-a33", },
26 { .compatible = "allwinner,sun8i-a83t", }, 30 { .compatible = "allwinner,sun8i-a83t", },
27 { .compatible = "allwinner,sun8i-h3", }, 31 { .compatible = "allwinner,sun8i-h3", },
28 32
@@ -32,7 +36,6 @@ static const struct of_device_id machines[] __initconst = {
32 { .compatible = "arm,integrator-cp", }, 36 { .compatible = "arm,integrator-cp", },
33 37
34 { .compatible = "hisilicon,hi3660", }, 38 { .compatible = "hisilicon,hi3660", },
35 { .compatible = "hisilicon,hi6220", },
36 39
37 { .compatible = "fsl,imx27", }, 40 { .compatible = "fsl,imx27", },
38 { .compatible = "fsl,imx51", }, 41 { .compatible = "fsl,imx51", },
@@ -46,11 +49,8 @@ static const struct of_device_id machines[] __initconst = {
46 { .compatible = "samsung,exynos3250", }, 49 { .compatible = "samsung,exynos3250", },
47 { .compatible = "samsung,exynos4210", }, 50 { .compatible = "samsung,exynos4210", },
48 { .compatible = "samsung,exynos4212", }, 51 { .compatible = "samsung,exynos4212", },
49 { .compatible = "samsung,exynos4412", },
50 { .compatible = "samsung,exynos5250", }, 52 { .compatible = "samsung,exynos5250", },
51#ifndef CONFIG_BL_SWITCHER 53#ifndef CONFIG_BL_SWITCHER
52 { .compatible = "samsung,exynos5420", },
53 { .compatible = "samsung,exynos5433", },
54 { .compatible = "samsung,exynos5800", }, 54 { .compatible = "samsung,exynos5800", },
55#endif 55#endif
56 56
@@ -67,6 +67,8 @@ static const struct of_device_id machines[] __initconst = {
67 { .compatible = "renesas,r8a7792", }, 67 { .compatible = "renesas,r8a7792", },
68 { .compatible = "renesas,r8a7793", }, 68 { .compatible = "renesas,r8a7793", },
69 { .compatible = "renesas,r8a7794", }, 69 { .compatible = "renesas,r8a7794", },
70 { .compatible = "renesas,r8a7795", },
71 { .compatible = "renesas,r8a7796", },
70 { .compatible = "renesas,sh73a0", }, 72 { .compatible = "renesas,sh73a0", },
71 73
72 { .compatible = "rockchip,rk2928", }, 74 { .compatible = "rockchip,rk2928", },
@@ -76,17 +78,17 @@ static const struct of_device_id machines[] __initconst = {
76 { .compatible = "rockchip,rk3188", }, 78 { .compatible = "rockchip,rk3188", },
77 { .compatible = "rockchip,rk3228", }, 79 { .compatible = "rockchip,rk3228", },
78 { .compatible = "rockchip,rk3288", }, 80 { .compatible = "rockchip,rk3288", },
81 { .compatible = "rockchip,rk3328", },
79 { .compatible = "rockchip,rk3366", }, 82 { .compatible = "rockchip,rk3366", },
80 { .compatible = "rockchip,rk3368", }, 83 { .compatible = "rockchip,rk3368", },
81 { .compatible = "rockchip,rk3399", }, 84 { .compatible = "rockchip,rk3399", },
82 85
83 { .compatible = "sigma,tango4" },
84
85 { .compatible = "socionext,uniphier-pro5", },
86 { .compatible = "socionext,uniphier-pxs2", },
87 { .compatible = "socionext,uniphier-ld6b", }, 86 { .compatible = "socionext,uniphier-ld6b", },
88 { .compatible = "socionext,uniphier-ld11", }, 87
89 { .compatible = "socionext,uniphier-ld20", }, 88 { .compatible = "st-ericsson,u8500", },
89 { .compatible = "st-ericsson,u8540", },
90 { .compatible = "st-ericsson,u9500", },
91 { .compatible = "st-ericsson,u9540", },
90 92
91 { .compatible = "ti,omap2", }, 93 { .compatible = "ti,omap2", },
92 { .compatible = "ti,omap3", }, 94 { .compatible = "ti,omap3", },
@@ -94,27 +96,56 @@ static const struct of_device_id machines[] __initconst = {
94 { .compatible = "ti,omap5", }, 96 { .compatible = "ti,omap5", },
95 97
96 { .compatible = "xlnx,zynq-7000", }, 98 { .compatible = "xlnx,zynq-7000", },
99 { .compatible = "xlnx,zynqmp", },
97 100
98 { .compatible = "zte,zx296718", }, 101 { }
102};
99 103
104/*
105 * Machines for which the cpufreq device is *not* created, mostly used for
106 * platforms using "operating-points-v2" property.
107 */
108static const struct of_device_id blacklist[] __initconst = {
100 { } 109 { }
101}; 110};
102 111
112static bool __init cpu0_node_has_opp_v2_prop(void)
113{
114 struct device_node *np = of_cpu_device_node_get(0);
115 bool ret = false;
116
117 if (of_get_property(np, "operating-points-v2", NULL))
118 ret = true;
119
120 of_node_put(np);
121 return ret;
122}
123
103static int __init cpufreq_dt_platdev_init(void) 124static int __init cpufreq_dt_platdev_init(void)
104{ 125{
105 struct device_node *np = of_find_node_by_path("/"); 126 struct device_node *np = of_find_node_by_path("/");
106 const struct of_device_id *match; 127 const struct of_device_id *match;
128 const void *data = NULL;
107 129
108 if (!np) 130 if (!np)
109 return -ENODEV; 131 return -ENODEV;
110 132
111 match = of_match_node(machines, np); 133 match = of_match_node(whitelist, np);
134 if (match) {
135 data = match->data;
136 goto create_pdev;
137 }
138
139 if (cpu0_node_has_opp_v2_prop() && !of_match_node(blacklist, np))
140 goto create_pdev;
141
112 of_node_put(np); 142 of_node_put(np);
113 if (!match) 143 return -ENODEV;
114 return -ENODEV;
115 144
145create_pdev:
146 of_node_put(np);
116 return PTR_ERR_OR_ZERO(platform_device_register_data(NULL, "cpufreq-dt", 147 return PTR_ERR_OR_ZERO(platform_device_register_data(NULL, "cpufreq-dt",
117 -1, match->data, 148 -1, data,
118 sizeof(struct cpufreq_dt_platform_data))); 149 sizeof(struct cpufreq_dt_platform_data)));
119} 150}
120device_initcall(cpufreq_dt_platdev_init); 151device_initcall(cpufreq_dt_platdev_init);
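
The cpufreq-dt-platdev.c rework above splits the old machines[] table into a whitelist (platforms with legacy "operating-points" tables) and a blacklist, and otherwise creates the cpufreq-dt device automatically whenever CPU0 carries an "operating-points-v2" property. The sketch below is not part of the patch; plain booleans replace the of_match_node() and of_get_property() checks, but the decision is the same.

/* Illustrative sketch of the cpufreq_dt_platdev_init() decision. */
#include <stdbool.h>
#include <stdio.h>

static bool should_create_cpufreq_dt_device(bool in_whitelist,
                                            bool cpu0_has_opp_v2,
                                            bool in_blacklist)
{
        if (in_whitelist)                       /* legacy "operating-points" platforms */
                return true;
        if (cpu0_has_opp_v2 && !in_blacklist)   /* OPP v2 platforms, unless opted out  */
                return true;
        return false;
}

int main(void)
{
        printf("%d\n", should_create_cpufreq_dt_device(true,  false, false)); /* 1 */
        printf("%d\n", should_create_cpufreq_dt_device(false, true,  false)); /* 1 */
        printf("%d\n", should_create_cpufreq_dt_device(false, true,  true));  /* 0 */
        printf("%d\n", should_create_cpufreq_dt_device(false, false, false)); /* 0 */
        return 0;
}
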
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index fef3c2160691..d83ab94d041a 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -274,6 +274,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
274 transition_latency = CPUFREQ_ETERNAL; 274 transition_latency = CPUFREQ_ETERNAL;
275 275
276 policy->cpuinfo.transition_latency = transition_latency; 276 policy->cpuinfo.transition_latency = transition_latency;
277 policy->dvfs_possible_from_any_cpu = true;
277 278
278 return 0; 279 return 0;
279 280
diff --git a/drivers/cpufreq/cpufreq-nforce2.c b/drivers/cpufreq/cpufreq-nforce2.c
index 5503d491b016..dbf82f36d270 100644
--- a/drivers/cpufreq/cpufreq-nforce2.c
+++ b/drivers/cpufreq/cpufreq-nforce2.c
@@ -357,7 +357,6 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
357 /* cpuinfo and default policy values */ 357 /* cpuinfo and default policy values */
358 policy->min = policy->cpuinfo.min_freq = min_fsb * fid * 100; 358 policy->min = policy->cpuinfo.min_freq = min_fsb * fid * 100;
359 policy->max = policy->cpuinfo.max_freq = max_fsb * fid * 100; 359 policy->max = policy->cpuinfo.max_freq = max_fsb * fid * 100;
360 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
361 360
362 return 0; 361 return 0;
363} 362}
@@ -369,6 +368,7 @@ static int nforce2_cpu_exit(struct cpufreq_policy *policy)
369 368
370static struct cpufreq_driver nforce2_driver = { 369static struct cpufreq_driver nforce2_driver = {
371 .name = "nforce2", 370 .name = "nforce2",
371 .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
372 .verify = nforce2_verify, 372 .verify = nforce2_verify,
373 .target = nforce2_target, 373 .target = nforce2_target,
374 .get = nforce2_get, 374 .get = nforce2_get,
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 9bf97a366029..ea43b147a7fe 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -524,6 +524,32 @@ unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
524} 524}
525EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq); 525EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq);
526 526
527unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
528{
529 unsigned int latency;
530
531 if (policy->transition_delay_us)
532 return policy->transition_delay_us;
533
534 latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
535 if (latency) {
536 /*
537 * For platforms that can change the frequency very fast (< 10
538 * us), the above formula gives a decent transition delay. But
539 * for platforms where transition_latency is in milliseconds, it
540 * ends up giving unrealistic values.
541 *
542 * Cap the default transition delay to 10 ms, which seems to be
543 * a reasonable amount of time after which we should reevaluate
544 * the frequency.
545 */
546 return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
547 }
548
549 return LATENCY_MULTIPLIER;
550}
551EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
552
527/********************************************************************* 553/*********************************************************************
528 * SYSFS INTERFACE * 554 * SYSFS INTERFACE *
529 *********************************************************************/ 555 *********************************************************************/
@@ -1817,9 +1843,10 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
1817 * twice in parallel for the same policy and that it will never be called in 1843 * twice in parallel for the same policy and that it will never be called in
1818 * parallel with either ->target() or ->target_index() for the same policy. 1844 * parallel with either ->target() or ->target_index() for the same policy.
1819 * 1845 *
1820 * If CPUFREQ_ENTRY_INVALID is returned by the driver's ->fast_switch() 1846 * Returns the actual frequency set for the CPU.
1821 * callback to indicate an error condition, the hardware configuration must be 1847 *
1822 * preserved. 1848 * If 0 is returned by the driver's ->fast_switch() callback to indicate an
1849 * error condition, the hardware configuration must be preserved.
1823 */ 1850 */
1824unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy, 1851unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
1825 unsigned int target_freq) 1852 unsigned int target_freq)
@@ -1988,13 +2015,13 @@ static int cpufreq_init_governor(struct cpufreq_policy *policy)
1988 if (!policy->governor) 2015 if (!policy->governor)
1989 return -EINVAL; 2016 return -EINVAL;
1990 2017
1991 if (policy->governor->max_transition_latency && 2018 /* Platform doesn't want dynamic frequency switching ? */
1992 policy->cpuinfo.transition_latency > 2019 if (policy->governor->dynamic_switching &&
1993 policy->governor->max_transition_latency) { 2020 cpufreq_driver->flags & CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING) {
1994 struct cpufreq_governor *gov = cpufreq_fallback_governor(); 2021 struct cpufreq_governor *gov = cpufreq_fallback_governor();
1995 2022
1996 if (gov) { 2023 if (gov) {
1997 pr_warn("%s governor failed, too long transition latency of HW, fallback to %s governor\n", 2024 pr_warn("Can't use %s governor as dynamic switching is disallowed. Fallback to %s governor\n",
1998 policy->governor->name, gov->name); 2025 policy->governor->name, gov->name);
1999 policy->governor = gov; 2026 policy->governor = gov;
2000 } else { 2027 } else {
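
The new cpufreq_policy_transition_delay_us() helper above derives a default sampling interval from the hardware transition latency and caps it at 10 ms, so millisecond-scale latencies no longer yield unrealistic values; the dbs governors now use it instead of the removed min_sampling_rate. The standalone sketch below is not part of the patch; LATENCY_MULTIPLIER and NSEC_PER_USEC mirror the kernel's values (1000) and the inputs are invented, but the arithmetic is the same.

/* Illustrative sketch of the transition-delay derivation. */
#include <stdio.h>

#define NSEC_PER_USEC       1000U
#define LATENCY_MULTIPLIER  1000U

static unsigned int transition_delay_us(unsigned int transition_latency_ns)
{
        unsigned int latency = transition_latency_ns / NSEC_PER_USEC;

        if (latency) {
                unsigned int delay = latency * LATENCY_MULTIPLIER;

                /* Cap the default delay at 10 ms for slow-switching platforms. */
                return delay < 10000 ? delay : 10000;
        }
        return LATENCY_MULTIPLIER;
}

int main(void)
{
        printf("%u\n", transition_delay_us(500));       /* sub-microsecond latency -> default 1000 us */
        printf("%u\n", transition_delay_us(5000));      /* 5 us latency -> 5000 us                    */
        printf("%u\n", transition_delay_us(2000000));   /* 2 ms latency -> capped at 10000 us         */
        printf("%u\n", transition_delay_us(0));         /* unknown latency -> 1000 us                 */
        return 0;
}
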
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 88220ff3e1c2..f20f20a77d4d 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -246,7 +246,6 @@ gov_show_one_common(sampling_rate);
246gov_show_one_common(sampling_down_factor); 246gov_show_one_common(sampling_down_factor);
247gov_show_one_common(up_threshold); 247gov_show_one_common(up_threshold);
248gov_show_one_common(ignore_nice_load); 248gov_show_one_common(ignore_nice_load);
249gov_show_one_common(min_sampling_rate);
250gov_show_one(cs, down_threshold); 249gov_show_one(cs, down_threshold);
251gov_show_one(cs, freq_step); 250gov_show_one(cs, freq_step);
252 251
@@ -254,12 +253,10 @@ gov_attr_rw(sampling_rate);
254gov_attr_rw(sampling_down_factor); 253gov_attr_rw(sampling_down_factor);
255gov_attr_rw(up_threshold); 254gov_attr_rw(up_threshold);
256gov_attr_rw(ignore_nice_load); 255gov_attr_rw(ignore_nice_load);
257gov_attr_ro(min_sampling_rate);
258gov_attr_rw(down_threshold); 256gov_attr_rw(down_threshold);
259gov_attr_rw(freq_step); 257gov_attr_rw(freq_step);
260 258
261static struct attribute *cs_attributes[] = { 259static struct attribute *cs_attributes[] = {
262 &min_sampling_rate.attr,
263 &sampling_rate.attr, 260 &sampling_rate.attr,
264 &sampling_down_factor.attr, 261 &sampling_down_factor.attr,
265 &up_threshold.attr, 262 &up_threshold.attr,
@@ -297,10 +294,7 @@ static int cs_init(struct dbs_data *dbs_data)
297 dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; 294 dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD;
298 dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR; 295 dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
299 dbs_data->ignore_nice_load = 0; 296 dbs_data->ignore_nice_load = 0;
300
301 dbs_data->tuners = tuners; 297 dbs_data->tuners = tuners;
302 dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO *
303 jiffies_to_usecs(10);
304 298
305 return 0; 299 return 0;
306} 300}
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 47e24b5384b3..58d4f4e1ad6a 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -47,14 +47,11 @@ ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
47{ 47{
48 struct dbs_data *dbs_data = to_dbs_data(attr_set); 48 struct dbs_data *dbs_data = to_dbs_data(attr_set);
49 struct policy_dbs_info *policy_dbs; 49 struct policy_dbs_info *policy_dbs;
50 unsigned int rate;
51 int ret; 50 int ret;
52 ret = sscanf(buf, "%u", &rate); 51 ret = sscanf(buf, "%u", &dbs_data->sampling_rate);
53 if (ret != 1) 52 if (ret != 1)
54 return -EINVAL; 53 return -EINVAL;
55 54
56 dbs_data->sampling_rate = max(rate, dbs_data->min_sampling_rate);
57
58 /* 55 /*
59 * We are operating under dbs_data->mutex and so the list and its 56 * We are operating under dbs_data->mutex and so the list and its
60 * entries can't be freed concurrently. 57 * entries can't be freed concurrently.
@@ -275,6 +272,9 @@ static void dbs_update_util_handler(struct update_util_data *data, u64 time,
275 struct policy_dbs_info *policy_dbs = cdbs->policy_dbs; 272 struct policy_dbs_info *policy_dbs = cdbs->policy_dbs;
276 u64 delta_ns, lst; 273 u64 delta_ns, lst;
277 274
275 if (!cpufreq_can_do_remote_dvfs(policy_dbs->policy))
276 return;
277
278 /* 278 /*
279 * The work may not be allowed to be queued up right now. 279 * The work may not be allowed to be queued up right now.
280 * Possible reasons: 280 * Possible reasons:
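The early return added above makes the shared governor code safe against cross-CPU ("remote") utilization updates: the handler only proceeds when the update can be acted upon from the CPU it is running on. The helper is roughly the following (a sketch of the 4.14-era cpufreq.h definition, not part of this hunk):

	static inline bool cpufreq_can_do_remote_dvfs(struct cpufreq_policy *policy)
	{
		/*
		 * Remote callbacks are fine if the driver can do DVFS for
		 * this policy from any CPU, or if the running CPU belongs
		 * to the policy anyway.
		 */
		return policy->dvfs_possible_from_any_cpu ||
			cpumask_test_cpu(smp_processor_id(), policy->cpus);
	}
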
@@ -392,7 +392,6 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
392 struct dbs_governor *gov = dbs_governor_of(policy); 392 struct dbs_governor *gov = dbs_governor_of(policy);
393 struct dbs_data *dbs_data; 393 struct dbs_data *dbs_data;
394 struct policy_dbs_info *policy_dbs; 394 struct policy_dbs_info *policy_dbs;
395 unsigned int latency;
396 int ret = 0; 395 int ret = 0;
397 396
398 /* State should be equivalent to EXIT */ 397 /* State should be equivalent to EXIT */
@@ -431,16 +430,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
431 if (ret) 430 if (ret)
432 goto free_policy_dbs_info; 431 goto free_policy_dbs_info;
433 432
434 /* policy latency is in ns. Convert it to us first */ 433 dbs_data->sampling_rate = cpufreq_policy_transition_delay_us(policy);
435 latency = policy->cpuinfo.transition_latency / 1000;
436 if (latency == 0)
437 latency = 1;
438
439 /* Bring kernel and HW constraints together */
440 dbs_data->min_sampling_rate = max(dbs_data->min_sampling_rate,
441 MIN_LATENCY_MULTIPLIER * latency);
442 dbs_data->sampling_rate = max(dbs_data->min_sampling_rate,
443 LATENCY_MULTIPLIER * latency);
444 434
445 if (!have_governor_per_policy()) 435 if (!have_governor_per_policy())
446 gov->gdbs_data = dbs_data; 436 gov->gdbs_data = dbs_data;
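With min_sampling_rate gone, the default sampling period comes straight from cpufreq_policy_transition_delay_us(), which folds the old LATENCY_MULTIPLIER arithmetic into the core. Roughly (a sketch; the exact helper lives in cpufreq.c and is not part of this hunk):

	unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
	{
		unsigned int latency;

		/* A driver-provided delay wins over any heuristic. */
		if (policy->transition_delay_us)
			return policy->transition_delay_us;

		latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
		if (latency)
			return latency * LATENCY_MULTIPLIER;

		return LATENCY_MULTIPLIER;
	}
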
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 0236ec2cd654..8463f5def0f5 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -41,7 +41,6 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
41struct dbs_data { 41struct dbs_data {
42 struct gov_attr_set attr_set; 42 struct gov_attr_set attr_set;
43 void *tuners; 43 void *tuners;
44 unsigned int min_sampling_rate;
45 unsigned int ignore_nice_load; 44 unsigned int ignore_nice_load;
46 unsigned int sampling_rate; 45 unsigned int sampling_rate;
47 unsigned int sampling_down_factor; 46 unsigned int sampling_down_factor;
@@ -160,7 +159,7 @@ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy);
160#define CPUFREQ_DBS_GOVERNOR_INITIALIZER(_name_) \ 159#define CPUFREQ_DBS_GOVERNOR_INITIALIZER(_name_) \
161 { \ 160 { \
162 .name = _name_, \ 161 .name = _name_, \
163 .max_transition_latency = TRANSITION_LATENCY_LIMIT, \ 162 .dynamic_switching = true, \
164 .owner = THIS_MODULE, \ 163 .owner = THIS_MODULE, \
165 .init = cpufreq_dbs_governor_init, \ 164 .init = cpufreq_dbs_governor_init, \
166 .exit = cpufreq_dbs_governor_exit, \ 165 .exit = cpufreq_dbs_governor_exit, \
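Every governor built on the common dbs code picks up the new field through this initializer, so the core's dynamic-switching check covers ondemand and conservative without per-governor changes. For illustration, a hypothetical dbs-based governor would still be declared the same way:

	static struct dbs_governor example_dbs_gov = {
		.gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("example"),
		/* .kobj_type, .gov_dbs_update, .alloc, .free, .init, .exit, .start */
	};
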
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 3937acf7e026..6b423eebfd5d 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -319,7 +319,6 @@ gov_show_one_common(sampling_rate);
319gov_show_one_common(up_threshold); 319gov_show_one_common(up_threshold);
320gov_show_one_common(sampling_down_factor); 320gov_show_one_common(sampling_down_factor);
321gov_show_one_common(ignore_nice_load); 321gov_show_one_common(ignore_nice_load);
322gov_show_one_common(min_sampling_rate);
323gov_show_one_common(io_is_busy); 322gov_show_one_common(io_is_busy);
324gov_show_one(od, powersave_bias); 323gov_show_one(od, powersave_bias);
325 324
@@ -329,10 +328,8 @@ gov_attr_rw(up_threshold);
329gov_attr_rw(sampling_down_factor); 328gov_attr_rw(sampling_down_factor);
330gov_attr_rw(ignore_nice_load); 329gov_attr_rw(ignore_nice_load);
331gov_attr_rw(powersave_bias); 330gov_attr_rw(powersave_bias);
332gov_attr_ro(min_sampling_rate);
333 331
334static struct attribute *od_attributes[] = { 332static struct attribute *od_attributes[] = {
335 &min_sampling_rate.attr,
336 &sampling_rate.attr, 333 &sampling_rate.attr,
337 &up_threshold.attr, 334 &up_threshold.attr,
338 &sampling_down_factor.attr, 335 &sampling_down_factor.attr,
@@ -373,17 +370,8 @@ static int od_init(struct dbs_data *dbs_data)
373 if (idle_time != -1ULL) { 370 if (idle_time != -1ULL) {
374 /* Idle micro accounting is supported. Use finer thresholds */ 371 /* Idle micro accounting is supported. Use finer thresholds */
375 dbs_data->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD; 372 dbs_data->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD;
376 /*
377 * In nohz/micro accounting case we set the minimum frequency
378 * not depending on HZ, but fixed (very low).
379 */
380 dbs_data->min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE;
381 } else { 373 } else {
382 dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; 374 dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD;
383
384 /* For correct statistics, we need 10 ticks for each measure */
385 dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO *
386 jiffies_to_usecs(10);
387 } 375 }
388 376
389 dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR; 377 dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
diff --git a/drivers/cpufreq/dbx500-cpufreq.c b/drivers/cpufreq/dbx500-cpufreq.c
deleted file mode 100644
index 4ee0431579c1..000000000000
--- a/drivers/cpufreq/dbx500-cpufreq.c
+++ /dev/null
@@ -1,103 +0,0 @@
1/*
2 * Copyright (C) STMicroelectronics 2009
3 * Copyright (C) ST-Ericsson SA 2010-2012
4 *
5 * License Terms: GNU General Public License v2
6 * Author: Sundar Iyer <sundar.iyer@stericsson.com>
7 * Author: Martin Persson <martin.persson@stericsson.com>
8 * Author: Jonas Aaberg <jonas.aberg@stericsson.com>
9 */
10
11#include <linux/module.h>
12#include <linux/kernel.h>
13#include <linux/cpufreq.h>
14#include <linux/cpu_cooling.h>
15#include <linux/delay.h>
16#include <linux/slab.h>
17#include <linux/platform_device.h>
18#include <linux/clk.h>
19
20static struct cpufreq_frequency_table *freq_table;
21static struct clk *armss_clk;
22static struct thermal_cooling_device *cdev;
23
24static int dbx500_cpufreq_target(struct cpufreq_policy *policy,
25 unsigned int index)
26{
27 /* update armss clk frequency */
28 return clk_set_rate(armss_clk, freq_table[index].frequency * 1000);
29}
30
31static int dbx500_cpufreq_init(struct cpufreq_policy *policy)
32{
33 policy->clk = armss_clk;
34 return cpufreq_generic_init(policy, freq_table, 20 * 1000);
35}
36
37static int dbx500_cpufreq_exit(struct cpufreq_policy *policy)
38{
39 if (!IS_ERR(cdev))
40 cpufreq_cooling_unregister(cdev);
41 return 0;
42}
43
44static void dbx500_cpufreq_ready(struct cpufreq_policy *policy)
45{
46 cdev = cpufreq_cooling_register(policy);
47 if (IS_ERR(cdev))
48 pr_err("Failed to register cooling device %ld\n", PTR_ERR(cdev));
49 else
50 pr_info("Cooling device registered: %s\n", cdev->type);
51}
52
53static struct cpufreq_driver dbx500_cpufreq_driver = {
54 .flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS |
55 CPUFREQ_NEED_INITIAL_FREQ_CHECK,
56 .verify = cpufreq_generic_frequency_table_verify,
57 .target_index = dbx500_cpufreq_target,
58 .get = cpufreq_generic_get,
59 .init = dbx500_cpufreq_init,
60 .exit = dbx500_cpufreq_exit,
61 .ready = dbx500_cpufreq_ready,
62 .name = "DBX500",
63 .attr = cpufreq_generic_attr,
64};
65
66static int dbx500_cpufreq_probe(struct platform_device *pdev)
67{
68 struct cpufreq_frequency_table *pos;
69
70 freq_table = dev_get_platdata(&pdev->dev);
71 if (!freq_table) {
72 pr_err("dbx500-cpufreq: Failed to fetch cpufreq table\n");
73 return -ENODEV;
74 }
75
76 armss_clk = clk_get(&pdev->dev, "armss");
77 if (IS_ERR(armss_clk)) {
78 pr_err("dbx500-cpufreq: Failed to get armss clk\n");
79 return PTR_ERR(armss_clk);
80 }
81
82 pr_info("dbx500-cpufreq: Available frequencies:\n");
83 cpufreq_for_each_entry(pos, freq_table)
84 pr_info(" %d Mhz\n", pos->frequency / 1000);
85
86 return cpufreq_register_driver(&dbx500_cpufreq_driver);
87}
88
89static struct platform_driver dbx500_cpufreq_plat_driver = {
90 .driver = {
91 .name = "cpufreq-ux500",
92 },
93 .probe = dbx500_cpufreq_probe,
94};
95
96static int __init dbx500_cpufreq_register(void)
97{
98 return platform_driver_register(&dbx500_cpufreq_plat_driver);
99}
100device_initcall(dbx500_cpufreq_register);
101
102MODULE_LICENSE("GPL v2");
103MODULE_DESCRIPTION("cpufreq driver for DBX500");
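The board-specific DBX500 driver goes away because Ux500 is now served by the generic cpufreq-dt driver, with the operating points coming from platform/DT code rather than a platform-data frequency table. A hypothetical sketch of the replacement wiring on the platform side (names are illustrative; the actual Ux500 change is not part of this diff):

	static int __init ux500_cpufreq_register(void)
	{
		struct platform_device *pdev;

		/* cpufreq-dt picks up the CPU clock and OPPs by itself. */
		pdev = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
		return PTR_ERR_OR_ZERO(pdev);
	}
	device_initcall(ux500_cpufreq_register);
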
diff --git a/drivers/cpufreq/elanfreq.c b/drivers/cpufreq/elanfreq.c
index bfce11cba1df..45e2ca62515e 100644
--- a/drivers/cpufreq/elanfreq.c
+++ b/drivers/cpufreq/elanfreq.c
@@ -165,9 +165,6 @@ static int elanfreq_cpu_init(struct cpufreq_policy *policy)
165 if (pos->frequency > max_freq) 165 if (pos->frequency > max_freq)
166 pos->frequency = CPUFREQ_ENTRY_INVALID; 166 pos->frequency = CPUFREQ_ENTRY_INVALID;
167 167
168 /* cpuinfo and default policy values */
169 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
170
171 return cpufreq_table_validate_and_show(policy, elanfreq_table); 168 return cpufreq_table_validate_and_show(policy, elanfreq_table);
172} 169}
173 170
@@ -196,6 +193,7 @@ __setup("elanfreq=", elanfreq_setup);
196 193
197static struct cpufreq_driver elanfreq_driver = { 194static struct cpufreq_driver elanfreq_driver = {
198 .get = elanfreq_get_cpu_frequency, 195 .get = elanfreq_get_cpu_frequency,
196 .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
199 .verify = cpufreq_generic_frequency_table_verify, 197 .verify = cpufreq_generic_frequency_table_verify,
200 .target_index = elanfreq_target, 198 .target_index = elanfreq_target,
201 .init = elanfreq_cpu_init, 199 .init = elanfreq_cpu_init,
diff --git a/drivers/cpufreq/gx-suspmod.c b/drivers/cpufreq/gx-suspmod.c
index 3488c9c175eb..8f52a06664e3 100644
--- a/drivers/cpufreq/gx-suspmod.c
+++ b/drivers/cpufreq/gx-suspmod.c
@@ -428,7 +428,6 @@ static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
428 policy->max = maxfreq; 428 policy->max = maxfreq;
429 policy->cpuinfo.min_freq = maxfreq / max_duration; 429 policy->cpuinfo.min_freq = maxfreq / max_duration;
430 policy->cpuinfo.max_freq = maxfreq; 430 policy->cpuinfo.max_freq = maxfreq;
431 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
432 431
433 return 0; 432 return 0;
434} 433}
@@ -438,6 +437,7 @@ static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
438 * MediaGX/Geode GX initialize cpufreq driver 437 * MediaGX/Geode GX initialize cpufreq driver
439 */ 438 */
440static struct cpufreq_driver gx_suspmod_driver = { 439static struct cpufreq_driver gx_suspmod_driver = {
440 .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
441 .get = gx_get_cpuspeed, 441 .get = gx_get_cpuspeed,
442 .verify = cpufreq_gx_verify, 442 .verify = cpufreq_gx_verify,
443 .target = cpufreq_gx_target, 443 .target = cpufreq_gx_target,
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
index b6edd3ccaa55..14466a9b01c0 100644
--- a/drivers/cpufreq/imx6q-cpufreq.c
+++ b/drivers/cpufreq/imx6q-cpufreq.c
@@ -47,6 +47,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
47 struct dev_pm_opp *opp; 47 struct dev_pm_opp *opp;
48 unsigned long freq_hz, volt, volt_old; 48 unsigned long freq_hz, volt, volt_old;
49 unsigned int old_freq, new_freq; 49 unsigned int old_freq, new_freq;
50 bool pll1_sys_temp_enabled = false;
50 int ret; 51 int ret;
51 52
52 new_freq = freq_table[index].frequency; 53 new_freq = freq_table[index].frequency;
@@ -124,6 +125,10 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
124 if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) { 125 if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) {
125 clk_set_rate(pll1_sys_clk, new_freq * 1000); 126 clk_set_rate(pll1_sys_clk, new_freq * 1000);
126 clk_set_parent(pll1_sw_clk, pll1_sys_clk); 127 clk_set_parent(pll1_sw_clk, pll1_sys_clk);
128 } else {
129 /* pll1_sys needs to be enabled for divider rate change to work. */
130 pll1_sys_temp_enabled = true;
131 clk_prepare_enable(pll1_sys_clk);
127 } 132 }
128 } 133 }
129 134
@@ -135,6 +140,10 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
135 return ret; 140 return ret;
136 } 141 }
137 142
143 /* PLL1 is only needed until after ARM-PODF is set. */
144 if (pll1_sys_temp_enabled)
145 clk_disable_unprepare(pll1_sys_clk);
146
138 /* scaling down? scale voltage after frequency */ 147 /* scaling down? scale voltage after frequency */
139 if (new_freq < old_freq) { 148 if (new_freq < old_freq) {
140 ret = regulator_set_voltage_tol(arm_reg, volt, 0); 149 ret = regulator_set_voltage_tol(arm_reg, volt, 0);
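The new pll1_sys_temp_enabled path keeps PLL1 clocked just long enough for the ARM divider (ARM-PODF) to be reprogrammed while the CPU is fed from the 396 MHz PFD. Stripped of the voltage scaling and error handling, the clk sequence is roughly (a condensed sketch of the function above, not a verbatim excerpt):

	bool pll1_tmp = false;

	if (freq_hz <= clk_get_rate(pll2_pfd2_396m_clk)) {
		/* Divider changes need PLL1 running even when it is unused. */
		pll1_tmp = true;
		clk_prepare_enable(pll1_sys_clk);
	}

	clk_set_rate(arm_clk, new_freq * 1000);		/* reprogram ARM-PODF */

	if (pll1_tmp)
		clk_disable_unprepare(pll1_sys_clk);	/* drop the temporary reference */
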
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 65ee4fcace1f..8f95265d5f52 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -37,8 +37,7 @@
37#include <asm/cpufeature.h> 37#include <asm/cpufeature.h>
38#include <asm/intel-family.h> 38#include <asm/intel-family.h>
39 39
40#define INTEL_PSTATE_DEFAULT_SAMPLING_INTERVAL (10 * NSEC_PER_MSEC) 40#define INTEL_PSTATE_SAMPLING_INTERVAL (10 * NSEC_PER_MSEC)
41#define INTEL_PSTATE_HWP_SAMPLING_INTERVAL (50 * NSEC_PER_MSEC)
42 41
43#define INTEL_CPUFREQ_TRANSITION_LATENCY 20000 42#define INTEL_CPUFREQ_TRANSITION_LATENCY 20000
44#define INTEL_CPUFREQ_TRANSITION_DELAY 500 43#define INTEL_CPUFREQ_TRANSITION_DELAY 500
@@ -173,28 +172,6 @@ struct vid_data {
173}; 172};
174 173
175/** 174/**
176 * struct _pid - Stores PID data
177 * @setpoint: Target set point for busyness or performance
178 * @integral: Storage for accumulated error values
179 * @p_gain: PID proportional gain
180 * @i_gain: PID integral gain
181 * @d_gain: PID derivative gain
182 * @deadband: PID deadband
183 * @last_err: Last error storage for integral part of PID calculation
184 *
185 * Stores PID coefficients and last error for PID controller.
186 */
187struct _pid {
188 int setpoint;
189 int32_t integral;
190 int32_t p_gain;
191 int32_t i_gain;
192 int32_t d_gain;
193 int deadband;
194 int32_t last_err;
195};
196
197/**
198 * struct global_params - Global parameters, mostly tunable via sysfs. 175 * struct global_params - Global parameters, mostly tunable via sysfs.
199 * @no_turbo: Whether or not to use turbo P-states. 176 * @no_turbo: Whether or not to use turbo P-states.
200 * @turbo_disabled: Whethet or not turbo P-states are available at all, 177 * @turbo_disabled: Whethet or not turbo P-states are available at all,
@@ -223,7 +200,6 @@ struct global_params {
223 * @last_update: Time of the last update. 200 * @last_update: Time of the last update.
224 * @pstate: Stores P state limits for this CPU 201 * @pstate: Stores P state limits for this CPU
225 * @vid: Stores VID limits for this CPU 202 * @vid: Stores VID limits for this CPU
226 * @pid: Stores PID parameters for this CPU
227 * @last_sample_time: Last Sample time 203 * @last_sample_time: Last Sample time
228 * @aperf_mperf_shift: Number of clock cycles after aperf, merf is incremented 204 * @aperf_mperf_shift: Number of clock cycles after aperf, merf is incremented
229 * This shift is a multiplier to mperf delta to 205 * This shift is a multiplier to mperf delta to
@@ -258,7 +234,6 @@ struct cpudata {
258 234
259 struct pstate_data pstate; 235 struct pstate_data pstate;
260 struct vid_data vid; 236 struct vid_data vid;
261 struct _pid pid;
262 237
263 u64 last_update; 238 u64 last_update;
264 u64 last_sample_time; 239 u64 last_sample_time;
@@ -284,28 +259,6 @@ struct cpudata {
284static struct cpudata **all_cpu_data; 259static struct cpudata **all_cpu_data;
285 260
286/** 261/**
287 * struct pstate_adjust_policy - Stores static PID configuration data
288 * @sample_rate_ms: PID calculation sample rate in ms
289 * @sample_rate_ns: Sample rate calculation in ns
290 * @deadband: PID deadband
291 * @setpoint: PID Setpoint
292 * @p_gain_pct: PID proportional gain
293 * @i_gain_pct: PID integral gain
294 * @d_gain_pct: PID derivative gain
295 *
296 * Stores per CPU model static PID configuration data.
297 */
298struct pstate_adjust_policy {
299 int sample_rate_ms;
300 s64 sample_rate_ns;
301 int deadband;
302 int setpoint;
303 int p_gain_pct;
304 int d_gain_pct;
305 int i_gain_pct;
306};
307
308/**
309 * struct pstate_funcs - Per CPU model specific callbacks 262 * struct pstate_funcs - Per CPU model specific callbacks
310 * @get_max: Callback to get maximum non turbo effective P state 263 * @get_max: Callback to get maximum non turbo effective P state
311 * @get_max_physical: Callback to get maximum non turbo physical P state 264 * @get_max_physical: Callback to get maximum non turbo physical P state
@@ -314,7 +267,6 @@ struct pstate_adjust_policy {
314 * @get_scaling: Callback to get frequency scaling factor 267 * @get_scaling: Callback to get frequency scaling factor
315 * @get_val: Callback to convert P state to actual MSR write value 268 * @get_val: Callback to convert P state to actual MSR write value
316 * @get_vid: Callback to get VID data for Atom platforms 269 * @get_vid: Callback to get VID data for Atom platforms
317 * @update_util: Active mode utilization update callback.
318 * 270 *
319 * Core and Atom CPU models have different way to get P State limits. This 271 * Core and Atom CPU models have different way to get P State limits. This
320 * structure is used to store those callbacks. 272 * structure is used to store those callbacks.
@@ -328,20 +280,9 @@ struct pstate_funcs {
328 int (*get_aperf_mperf_shift)(void); 280 int (*get_aperf_mperf_shift)(void);
329 u64 (*get_val)(struct cpudata*, int pstate); 281 u64 (*get_val)(struct cpudata*, int pstate);
330 void (*get_vid)(struct cpudata *); 282 void (*get_vid)(struct cpudata *);
331 void (*update_util)(struct update_util_data *data, u64 time,
332 unsigned int flags);
333}; 283};
334 284
335static struct pstate_funcs pstate_funcs __read_mostly; 285static struct pstate_funcs pstate_funcs __read_mostly;
336static struct pstate_adjust_policy pid_params __read_mostly = {
337 .sample_rate_ms = 10,
338 .sample_rate_ns = 10 * NSEC_PER_MSEC,
339 .deadband = 0,
340 .setpoint = 97,
341 .p_gain_pct = 20,
342 .d_gain_pct = 0,
343 .i_gain_pct = 0,
344};
345 286
346static int hwp_active __read_mostly; 287static int hwp_active __read_mostly;
347static bool per_cpu_limits __read_mostly; 288static bool per_cpu_limits __read_mostly;
@@ -509,56 +450,6 @@ static inline void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)
509} 450}
510#endif 451#endif
511 452
512static signed int pid_calc(struct _pid *pid, int32_t busy)
513{
514 signed int result;
515 int32_t pterm, dterm, fp_error;
516 int32_t integral_limit;
517
518 fp_error = pid->setpoint - busy;
519
520 if (abs(fp_error) <= pid->deadband)
521 return 0;
522
523 pterm = mul_fp(pid->p_gain, fp_error);
524
525 pid->integral += fp_error;
526
527 /*
528 * We limit the integral here so that it will never
529 * get higher than 30. This prevents it from becoming
530 * too large an input over long periods of time and allows
531 * it to get factored out sooner.
532 *
533 * The value of 30 was chosen through experimentation.
534 */
535 integral_limit = int_tofp(30);
536 if (pid->integral > integral_limit)
537 pid->integral = integral_limit;
538 if (pid->integral < -integral_limit)
539 pid->integral = -integral_limit;
540
541 dterm = mul_fp(pid->d_gain, fp_error - pid->last_err);
542 pid->last_err = fp_error;
543
544 result = pterm + mul_fp(pid->integral, pid->i_gain) + dterm;
545 result = result + (1 << (FRAC_BITS-1));
546 return (signed int)fp_toint(result);
547}
548
549static inline void intel_pstate_pid_reset(struct cpudata *cpu)
550{
551 struct _pid *pid = &cpu->pid;
552
553 pid->p_gain = percent_fp(pid_params.p_gain_pct);
554 pid->d_gain = percent_fp(pid_params.d_gain_pct);
555 pid->i_gain = percent_fp(pid_params.i_gain_pct);
556 pid->setpoint = int_tofp(pid_params.setpoint);
557 pid->last_err = pid->setpoint - int_tofp(100);
558 pid->deadband = int_tofp(pid_params.deadband);
559 pid->integral = 0;
560}
561
562static inline void update_turbo_state(void) 453static inline void update_turbo_state(void)
563{ 454{
564 u64 misc_en; 455 u64 misc_en;
@@ -911,82 +802,6 @@ static void intel_pstate_update_policies(void)
911 cpufreq_update_policy(cpu); 802 cpufreq_update_policy(cpu);
912} 803}
913 804
914/************************** debugfs begin ************************/
915static int pid_param_set(void *data, u64 val)
916{
917 unsigned int cpu;
918
919 *(u32 *)data = val;
920 pid_params.sample_rate_ns = pid_params.sample_rate_ms * NSEC_PER_MSEC;
921 for_each_possible_cpu(cpu)
922 if (all_cpu_data[cpu])
923 intel_pstate_pid_reset(all_cpu_data[cpu]);
924
925 return 0;
926}
927
928static int pid_param_get(void *data, u64 *val)
929{
930 *val = *(u32 *)data;
931 return 0;
932}
933DEFINE_SIMPLE_ATTRIBUTE(fops_pid_param, pid_param_get, pid_param_set, "%llu\n");
934
935static struct dentry *debugfs_parent;
936
937struct pid_param {
938 char *name;
939 void *value;
940 struct dentry *dentry;
941};
942
943static struct pid_param pid_files[] = {
944 {"sample_rate_ms", &pid_params.sample_rate_ms, },
945 {"d_gain_pct", &pid_params.d_gain_pct, },
946 {"i_gain_pct", &pid_params.i_gain_pct, },
947 {"deadband", &pid_params.deadband, },
948 {"setpoint", &pid_params.setpoint, },
949 {"p_gain_pct", &pid_params.p_gain_pct, },
950 {NULL, NULL, }
951};
952
953static void intel_pstate_debug_expose_params(void)
954{
955 int i;
956
957 debugfs_parent = debugfs_create_dir("pstate_snb", NULL);
958 if (IS_ERR_OR_NULL(debugfs_parent))
959 return;
960
961 for (i = 0; pid_files[i].name; i++) {
962 struct dentry *dentry;
963
964 dentry = debugfs_create_file(pid_files[i].name, 0660,
965 debugfs_parent, pid_files[i].value,
966 &fops_pid_param);
967 if (!IS_ERR(dentry))
968 pid_files[i].dentry = dentry;
969 }
970}
971
972static void intel_pstate_debug_hide_params(void)
973{
974 int i;
975
976 if (IS_ERR_OR_NULL(debugfs_parent))
977 return;
978
979 for (i = 0; pid_files[i].name; i++) {
980 debugfs_remove(pid_files[i].dentry);
981 pid_files[i].dentry = NULL;
982 }
983
984 debugfs_remove(debugfs_parent);
985 debugfs_parent = NULL;
986}
987
988/************************** debugfs end ************************/
989
990/************************** sysfs begin ************************/ 805/************************** sysfs begin ************************/
991#define show_one(file_name, object) \ 806#define show_one(file_name, object) \
992 static ssize_t show_##file_name \ 807 static ssize_t show_##file_name \
@@ -1622,7 +1437,7 @@ static inline int32_t get_avg_pstate(struct cpudata *cpu)
1622 cpu->sample.core_avg_perf); 1437 cpu->sample.core_avg_perf);
1623} 1438}
1624 1439
1625static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu) 1440static inline int32_t get_target_pstate(struct cpudata *cpu)
1626{ 1441{
1627 struct sample *sample = &cpu->sample; 1442 struct sample *sample = &cpu->sample;
1628 int32_t busy_frac, boost; 1443 int32_t busy_frac, boost;
@@ -1660,44 +1475,6 @@ static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
1660 return target; 1475 return target;
1661} 1476}
1662 1477
1663static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
1664{
1665 int32_t perf_scaled, max_pstate, current_pstate, sample_ratio;
1666 u64 duration_ns;
1667
1668 /*
1669 * perf_scaled is the ratio of the average P-state during the last
1670 * sampling period to the P-state requested last time (in percent).
1671 *
1672 * That measures the system's response to the previous P-state
1673 * selection.
1674 */
1675 max_pstate = cpu->pstate.max_pstate_physical;
1676 current_pstate = cpu->pstate.current_pstate;
1677 perf_scaled = mul_ext_fp(cpu->sample.core_avg_perf,
1678 div_fp(100 * max_pstate, current_pstate));
1679
1680 /*
1681 * Since our utilization update callback will not run unless we are
1682 * in C0, check if the actual elapsed time is significantly greater (3x)
1683 * than our sample interval. If it is, then we were idle for a long
1684 * enough period of time to adjust our performance metric.
1685 */
1686 duration_ns = cpu->sample.time - cpu->last_sample_time;
1687 if ((s64)duration_ns > pid_params.sample_rate_ns * 3) {
1688 sample_ratio = div_fp(pid_params.sample_rate_ns, duration_ns);
1689 perf_scaled = mul_fp(perf_scaled, sample_ratio);
1690 } else {
1691 sample_ratio = div_fp(100 * (cpu->sample.mperf << cpu->aperf_mperf_shift),
1692 cpu->sample.tsc);
1693 if (sample_ratio < int_tofp(1))
1694 perf_scaled = 0;
1695 }
1696
1697 cpu->sample.busy_scaled = perf_scaled;
1698 return cpu->pstate.current_pstate - pid_calc(&cpu->pid, perf_scaled);
1699}
1700
1701static int intel_pstate_prepare_request(struct cpudata *cpu, int pstate) 1478static int intel_pstate_prepare_request(struct cpudata *cpu, int pstate)
1702{ 1479{
1703 int max_pstate = intel_pstate_get_base_pstate(cpu); 1480 int max_pstate = intel_pstate_get_base_pstate(cpu);
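With the PID controller and get_target_pstate_use_performance() removed, every system running in the active mode now uses the former CPU-load algorithm, renamed to plain get_target_pstate(). Its core idea, reduced to a sketch (fixed-point helpers, clamping and the average-P-state check omitted; based on the surrounding code, not a verbatim copy):

	/* Busy fraction of the last sample: MPERF delta over TSC delta. */
	busy_frac = div_fp(sample->mperf << cpu->aperf_mperf_shift, sample->tsc);

	/* An I/O-wait boost temporarily raises the perceived load, then decays. */
	if (busy_frac < cpu->iowait_boost)
		busy_frac = cpu->iowait_boost;
	cpu->iowait_boost >>= 1;

	/* Scale the available P-state range (plus some headroom) by the load. */
	target = global.no_turbo || global.turbo_disabled ?
			cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
	target += target >> 2;
	target = mul_fp(target, busy_frac);
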
@@ -1717,13 +1494,15 @@ static void intel_pstate_update_pstate(struct cpudata *cpu, int pstate)
1717 wrmsrl(MSR_IA32_PERF_CTL, pstate_funcs.get_val(cpu, pstate)); 1494 wrmsrl(MSR_IA32_PERF_CTL, pstate_funcs.get_val(cpu, pstate));
1718} 1495}
1719 1496
1720static void intel_pstate_adjust_pstate(struct cpudata *cpu, int target_pstate) 1497static void intel_pstate_adjust_pstate(struct cpudata *cpu)
1721{ 1498{
1722 int from = cpu->pstate.current_pstate; 1499 int from = cpu->pstate.current_pstate;
1723 struct sample *sample; 1500 struct sample *sample;
1501 int target_pstate;
1724 1502
1725 update_turbo_state(); 1503 update_turbo_state();
1726 1504
1505 target_pstate = get_target_pstate(cpu);
1727 target_pstate = intel_pstate_prepare_request(cpu, target_pstate); 1506 target_pstate = intel_pstate_prepare_request(cpu, target_pstate);
1728 trace_cpu_frequency(target_pstate * cpu->pstate.scaling, cpu->cpu); 1507 trace_cpu_frequency(target_pstate * cpu->pstate.scaling, cpu->cpu);
1729 intel_pstate_update_pstate(cpu, target_pstate); 1508 intel_pstate_update_pstate(cpu, target_pstate);
@@ -1740,31 +1519,27 @@ static void intel_pstate_adjust_pstate(struct cpudata *cpu, int target_pstate)
1740 fp_toint(cpu->iowait_boost * 100)); 1519 fp_toint(cpu->iowait_boost * 100));
1741} 1520}
1742 1521
1743static void intel_pstate_update_util_pid(struct update_util_data *data,
1744 u64 time, unsigned int flags)
1745{
1746 struct cpudata *cpu = container_of(data, struct cpudata, update_util);
1747 u64 delta_ns = time - cpu->sample.time;
1748
1749 if ((s64)delta_ns < pid_params.sample_rate_ns)
1750 return;
1751
1752 if (intel_pstate_sample(cpu, time)) {
1753 int target_pstate;
1754
1755 target_pstate = get_target_pstate_use_performance(cpu);
1756 intel_pstate_adjust_pstate(cpu, target_pstate);
1757 }
1758}
1759
1760static void intel_pstate_update_util(struct update_util_data *data, u64 time, 1522static void intel_pstate_update_util(struct update_util_data *data, u64 time,
1761 unsigned int flags) 1523 unsigned int flags)
1762{ 1524{
1763 struct cpudata *cpu = container_of(data, struct cpudata, update_util); 1525 struct cpudata *cpu = container_of(data, struct cpudata, update_util);
1764 u64 delta_ns; 1526 u64 delta_ns;
1765 1527
1528 /* Don't allow remote callbacks */
1529 if (smp_processor_id() != cpu->cpu)
1530 return;
1531
1766 if (flags & SCHED_CPUFREQ_IOWAIT) { 1532 if (flags & SCHED_CPUFREQ_IOWAIT) {
1767 cpu->iowait_boost = int_tofp(1); 1533 cpu->iowait_boost = int_tofp(1);
1534 cpu->last_update = time;
1535 /*
1536 * The last time the busy was 100% so P-state was max anyway
1537 * so avoid overhead of computation.
1538 */
1539 if (fp_toint(cpu->sample.busy_scaled) == 100)
1540 return;
1541
1542 goto set_pstate;
1768 } else if (cpu->iowait_boost) { 1543 } else if (cpu->iowait_boost) {
1769 /* Clear iowait_boost if the CPU may have been idle. */ 1544 /* Clear iowait_boost if the CPU may have been idle. */
1770 delta_ns = time - cpu->last_update; 1545 delta_ns = time - cpu->last_update;
@@ -1773,15 +1548,12 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
1773 } 1548 }
1774 cpu->last_update = time; 1549 cpu->last_update = time;
1775 delta_ns = time - cpu->sample.time; 1550 delta_ns = time - cpu->sample.time;
1776 if ((s64)delta_ns < INTEL_PSTATE_DEFAULT_SAMPLING_INTERVAL) 1551 if ((s64)delta_ns < INTEL_PSTATE_SAMPLING_INTERVAL)
1777 return; 1552 return;
1778 1553
1779 if (intel_pstate_sample(cpu, time)) { 1554set_pstate:
1780 int target_pstate; 1555 if (intel_pstate_sample(cpu, time))
1781 1556 intel_pstate_adjust_pstate(cpu);
1782 target_pstate = get_target_pstate_use_cpu_load(cpu);
1783 intel_pstate_adjust_pstate(cpu, target_pstate);
1784 }
1785} 1557}
1786 1558
1787static struct pstate_funcs core_funcs = { 1559static struct pstate_funcs core_funcs = {
@@ -1791,7 +1563,6 @@ static struct pstate_funcs core_funcs = {
1791 .get_turbo = core_get_turbo_pstate, 1563 .get_turbo = core_get_turbo_pstate,
1792 .get_scaling = core_get_scaling, 1564 .get_scaling = core_get_scaling,
1793 .get_val = core_get_val, 1565 .get_val = core_get_val,
1794 .update_util = intel_pstate_update_util_pid,
1795}; 1566};
1796 1567
1797static const struct pstate_funcs silvermont_funcs = { 1568static const struct pstate_funcs silvermont_funcs = {
@@ -1802,7 +1573,6 @@ static const struct pstate_funcs silvermont_funcs = {
1802 .get_val = atom_get_val, 1573 .get_val = atom_get_val,
1803 .get_scaling = silvermont_get_scaling, 1574 .get_scaling = silvermont_get_scaling,
1804 .get_vid = atom_get_vid, 1575 .get_vid = atom_get_vid,
1805 .update_util = intel_pstate_update_util,
1806}; 1576};
1807 1577
1808static const struct pstate_funcs airmont_funcs = { 1578static const struct pstate_funcs airmont_funcs = {
@@ -1813,7 +1583,6 @@ static const struct pstate_funcs airmont_funcs = {
1813 .get_val = atom_get_val, 1583 .get_val = atom_get_val,
1814 .get_scaling = airmont_get_scaling, 1584 .get_scaling = airmont_get_scaling,
1815 .get_vid = atom_get_vid, 1585 .get_vid = atom_get_vid,
1816 .update_util = intel_pstate_update_util,
1817}; 1586};
1818 1587
1819static const struct pstate_funcs knl_funcs = { 1588static const struct pstate_funcs knl_funcs = {
@@ -1824,7 +1593,6 @@ static const struct pstate_funcs knl_funcs = {
1824 .get_aperf_mperf_shift = knl_get_aperf_mperf_shift, 1593 .get_aperf_mperf_shift = knl_get_aperf_mperf_shift,
1825 .get_scaling = core_get_scaling, 1594 .get_scaling = core_get_scaling,
1826 .get_val = core_get_val, 1595 .get_val = core_get_val,
1827 .update_util = intel_pstate_update_util_pid,
1828}; 1596};
1829 1597
1830static const struct pstate_funcs bxt_funcs = { 1598static const struct pstate_funcs bxt_funcs = {
@@ -1834,7 +1602,6 @@ static const struct pstate_funcs bxt_funcs = {
1834 .get_turbo = core_get_turbo_pstate, 1602 .get_turbo = core_get_turbo_pstate,
1835 .get_scaling = core_get_scaling, 1603 .get_scaling = core_get_scaling,
1836 .get_val = core_get_val, 1604 .get_val = core_get_val,
1837 .update_util = intel_pstate_update_util,
1838}; 1605};
1839 1606
1840#define ICPU(model, policy) \ 1607#define ICPU(model, policy) \
@@ -1878,8 +1645,6 @@ static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
1878 {} 1645 {}
1879}; 1646};
1880 1647
1881static bool pid_in_use(void);
1882
1883static int intel_pstate_init_cpu(unsigned int cpunum) 1648static int intel_pstate_init_cpu(unsigned int cpunum)
1884{ 1649{
1885 struct cpudata *cpu; 1650 struct cpudata *cpu;
@@ -1910,8 +1675,6 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
1910 intel_pstate_disable_ee(cpunum); 1675 intel_pstate_disable_ee(cpunum);
1911 1676
1912 intel_pstate_hwp_enable(cpu); 1677 intel_pstate_hwp_enable(cpu);
1913 } else if (pid_in_use()) {
1914 intel_pstate_pid_reset(cpu);
1915 } 1678 }
1916 1679
1917 intel_pstate_get_cpu_pstates(cpu); 1680 intel_pstate_get_cpu_pstates(cpu);
@@ -1934,7 +1697,7 @@ static void intel_pstate_set_update_util_hook(unsigned int cpu_num)
1934 /* Prevent intel_pstate_update_util() from using stale data. */ 1697 /* Prevent intel_pstate_update_util() from using stale data. */
1935 cpu->sample.time = 0; 1698 cpu->sample.time = 0;
1936 cpufreq_add_update_util_hook(cpu_num, &cpu->update_util, 1699 cpufreq_add_update_util_hook(cpu_num, &cpu->update_util,
1937 pstate_funcs.update_util); 1700 intel_pstate_update_util);
1938 cpu->update_util_set = true; 1701 cpu->update_util_set = true;
1939} 1702}
1940 1703
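Since the per-CPU-model ->update_util pointer is gone, the driver always registers intel_pstate_update_util() with the scheduler hook directly. For reference, the hook API used here has this shape (prototypes recalled from the scheduler's cpufreq glue, not part of this hunk):

	void cpufreq_add_update_util_hook(int cpu, struct update_util_data *data,
					  void (*func)(struct update_util_data *data,
						       u64 time, unsigned int flags));
	void cpufreq_remove_update_util_hook(int cpu);
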
@@ -2132,7 +1895,6 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
2132 policy->cpuinfo.max_freq *= cpu->pstate.scaling; 1895 policy->cpuinfo.max_freq *= cpu->pstate.scaling;
2133 1896
2134 intel_pstate_init_acpi_perf_limits(policy); 1897 intel_pstate_init_acpi_perf_limits(policy);
2135 cpumask_set_cpu(policy->cpu, policy->cpus);
2136 1898
2137 policy->fast_switch_possible = true; 1899 policy->fast_switch_possible = true;
2138 1900
@@ -2146,7 +1908,6 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
2146 if (ret) 1908 if (ret)
2147 return ret; 1909 return ret;
2148 1910
2149 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
2150 if (IS_ENABLED(CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE)) 1911 if (IS_ENABLED(CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE))
2151 policy->policy = CPUFREQ_POLICY_PERFORMANCE; 1912 policy->policy = CPUFREQ_POLICY_PERFORMANCE;
2152 else 1913 else
@@ -2261,12 +2022,6 @@ static struct cpufreq_driver intel_cpufreq = {
2261 2022
2262static struct cpufreq_driver *default_driver = &intel_pstate; 2023static struct cpufreq_driver *default_driver = &intel_pstate;
2263 2024
2264static bool pid_in_use(void)
2265{
2266 return intel_pstate_driver == &intel_pstate &&
2267 pstate_funcs.update_util == intel_pstate_update_util_pid;
2268}
2269
2270static void intel_pstate_driver_cleanup(void) 2025static void intel_pstate_driver_cleanup(void)
2271{ 2026{
2272 unsigned int cpu; 2027 unsigned int cpu;
@@ -2301,9 +2056,6 @@ static int intel_pstate_register_driver(struct cpufreq_driver *driver)
2301 2056
2302 global.min_perf_pct = min_perf_pct_min(); 2057 global.min_perf_pct = min_perf_pct_min();
2303 2058
2304 if (pid_in_use())
2305 intel_pstate_debug_expose_params();
2306
2307 return 0; 2059 return 0;
2308} 2060}
2309 2061
@@ -2312,9 +2064,6 @@ static int intel_pstate_unregister_driver(void)
2312 if (hwp_active) 2064 if (hwp_active)
2313 return -EBUSY; 2065 return -EBUSY;
2314 2066
2315 if (pid_in_use())
2316 intel_pstate_debug_hide_params();
2317
2318 cpufreq_unregister_driver(intel_pstate_driver); 2067 cpufreq_unregister_driver(intel_pstate_driver);
2319 intel_pstate_driver_cleanup(); 2068 intel_pstate_driver_cleanup();
2320 2069
@@ -2382,24 +2131,6 @@ static int __init intel_pstate_msrs_not_valid(void)
2382 return 0; 2131 return 0;
2383} 2132}
2384 2133
2385#ifdef CONFIG_ACPI
2386static void intel_pstate_use_acpi_profile(void)
2387{
2388 switch (acpi_gbl_FADT.preferred_profile) {
2389 case PM_MOBILE:
2390 case PM_TABLET:
2391 case PM_APPLIANCE_PC:
2392 case PM_DESKTOP:
2393 case PM_WORKSTATION:
2394 pstate_funcs.update_util = intel_pstate_update_util;
2395 }
2396}
2397#else
2398static void intel_pstate_use_acpi_profile(void)
2399{
2400}
2401#endif
2402
2403static void __init copy_cpu_funcs(struct pstate_funcs *funcs) 2134static void __init copy_cpu_funcs(struct pstate_funcs *funcs)
2404{ 2135{
2405 pstate_funcs.get_max = funcs->get_max; 2136 pstate_funcs.get_max = funcs->get_max;
@@ -2409,10 +2140,7 @@ static void __init copy_cpu_funcs(struct pstate_funcs *funcs)
2409 pstate_funcs.get_scaling = funcs->get_scaling; 2140 pstate_funcs.get_scaling = funcs->get_scaling;
2410 pstate_funcs.get_val = funcs->get_val; 2141 pstate_funcs.get_val = funcs->get_val;
2411 pstate_funcs.get_vid = funcs->get_vid; 2142 pstate_funcs.get_vid = funcs->get_vid;
2412 pstate_funcs.update_util = funcs->update_util;
2413 pstate_funcs.get_aperf_mperf_shift = funcs->get_aperf_mperf_shift; 2143 pstate_funcs.get_aperf_mperf_shift = funcs->get_aperf_mperf_shift;
2414
2415 intel_pstate_use_acpi_profile();
2416} 2144}
2417 2145
2418#ifdef CONFIG_ACPI 2146#ifdef CONFIG_ACPI
@@ -2556,9 +2284,7 @@ static int __init intel_pstate_init(void)
2556 2284
2557 if (x86_match_cpu(hwp_support_ids)) { 2285 if (x86_match_cpu(hwp_support_ids)) {
2558 copy_cpu_funcs(&core_funcs); 2286 copy_cpu_funcs(&core_funcs);
2559 if (no_hwp) { 2287 if (!no_hwp) {
2560 pstate_funcs.update_util = intel_pstate_update_util;
2561 } else {
2562 hwp_active++; 2288 hwp_active++;
2563 intel_pstate.attr = hwp_cpufreq_attrs; 2289 intel_pstate.attr = hwp_cpufreq_attrs;
2564 goto hwp_cpu_matched; 2290 goto hwp_cpu_matched;
diff --git a/drivers/cpufreq/longrun.c b/drivers/cpufreq/longrun.c
index 074971b12635..542aa9adba1a 100644
--- a/drivers/cpufreq/longrun.c
+++ b/drivers/cpufreq/longrun.c
@@ -270,7 +270,6 @@ static int longrun_cpu_init(struct cpufreq_policy *policy)
270 /* cpuinfo and default policy values */ 270 /* cpuinfo and default policy values */
271 policy->cpuinfo.min_freq = longrun_low_freq; 271 policy->cpuinfo.min_freq = longrun_low_freq;
272 policy->cpuinfo.max_freq = longrun_high_freq; 272 policy->cpuinfo.max_freq = longrun_high_freq;
273 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
274 longrun_get_policy(policy); 273 longrun_get_policy(policy);
275 274
276 return 0; 275 return 0;
diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
index 9ac27b22476c..da344696beed 100644
--- a/drivers/cpufreq/loongson2_cpufreq.c
+++ b/drivers/cpufreq/loongson2_cpufreq.c
@@ -114,7 +114,7 @@ static struct cpufreq_driver loongson2_cpufreq_driver = {
114 .attr = cpufreq_generic_attr, 114 .attr = cpufreq_generic_attr,
115}; 115};
116 116
117static struct platform_device_id platform_device_ids[] = { 117static const struct platform_device_id platform_device_ids[] = {
118 { 118 {
119 .name = "loongson2_cpufreq", 119 .name = "loongson2_cpufreq",
120 }, 120 },
diff --git a/drivers/cpufreq/mt8173-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
index f9f00fb4bc3a..18c4bd9a5c65 100644
--- a/drivers/cpufreq/mt8173-cpufreq.c
+++ b/drivers/cpufreq/mediatek-cpufreq.c
@@ -507,7 +507,7 @@ static int mtk_cpufreq_exit(struct cpufreq_policy *policy)
507 return 0; 507 return 0;
508} 508}
509 509
510static struct cpufreq_driver mt8173_cpufreq_driver = { 510static struct cpufreq_driver mtk_cpufreq_driver = {
511 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK | 511 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
512 CPUFREQ_HAVE_GOVERNOR_PER_POLICY, 512 CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
513 .verify = cpufreq_generic_frequency_table_verify, 513 .verify = cpufreq_generic_frequency_table_verify,
@@ -520,7 +520,7 @@ static struct cpufreq_driver mt8173_cpufreq_driver = {
520 .attr = cpufreq_generic_attr, 520 .attr = cpufreq_generic_attr,
521}; 521};
522 522
523static int mt8173_cpufreq_probe(struct platform_device *pdev) 523static int mtk_cpufreq_probe(struct platform_device *pdev)
524{ 524{
525 struct mtk_cpu_dvfs_info *info, *tmp; 525 struct mtk_cpu_dvfs_info *info, *tmp;
526 int cpu, ret; 526 int cpu, ret;
@@ -547,7 +547,7 @@ static int mt8173_cpufreq_probe(struct platform_device *pdev)
547 list_add(&info->list_head, &dvfs_info_list); 547 list_add(&info->list_head, &dvfs_info_list);
548 } 548 }
549 549
550 ret = cpufreq_register_driver(&mt8173_cpufreq_driver); 550 ret = cpufreq_register_driver(&mtk_cpufreq_driver);
551 if (ret) { 551 if (ret) {
552 dev_err(&pdev->dev, "failed to register mtk cpufreq driver\n"); 552 dev_err(&pdev->dev, "failed to register mtk cpufreq driver\n");
553 goto release_dvfs_info_list; 553 goto release_dvfs_info_list;
@@ -564,15 +564,18 @@ release_dvfs_info_list:
564 return ret; 564 return ret;
565} 565}
566 566
567static struct platform_driver mt8173_cpufreq_platdrv = { 567static struct platform_driver mtk_cpufreq_platdrv = {
568 .driver = { 568 .driver = {
569 .name = "mt8173-cpufreq", 569 .name = "mtk-cpufreq",
570 }, 570 },
571 .probe = mt8173_cpufreq_probe, 571 .probe = mtk_cpufreq_probe,
572}; 572};
573 573
574/* List of machines supported by this driver */ 574/* List of machines supported by this driver */
575static const struct of_device_id mt8173_cpufreq_machines[] __initconst = { 575static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
576 { .compatible = "mediatek,mt2701", },
577 { .compatible = "mediatek,mt7622", },
578 { .compatible = "mediatek,mt7623", },
576 { .compatible = "mediatek,mt817x", }, 579 { .compatible = "mediatek,mt817x", },
577 { .compatible = "mediatek,mt8173", }, 580 { .compatible = "mediatek,mt8173", },
578 { .compatible = "mediatek,mt8176", }, 581 { .compatible = "mediatek,mt8176", },
@@ -580,7 +583,7 @@ static const struct of_device_id mt8173_cpufreq_machines[] __initconst = {
580 { } 583 { }
581}; 584};
582 585
583static int __init mt8173_cpufreq_driver_init(void) 586static int __init mtk_cpufreq_driver_init(void)
584{ 587{
585 struct device_node *np; 588 struct device_node *np;
586 const struct of_device_id *match; 589 const struct of_device_id *match;
@@ -591,14 +594,14 @@ static int __init mt8173_cpufreq_driver_init(void)
591 if (!np) 594 if (!np)
592 return -ENODEV; 595 return -ENODEV;
593 596
594 match = of_match_node(mt8173_cpufreq_machines, np); 597 match = of_match_node(mtk_cpufreq_machines, np);
595 of_node_put(np); 598 of_node_put(np);
596 if (!match) { 599 if (!match) {
597 pr_warn("Machine is not compatible with mt8173-cpufreq\n"); 600 pr_warn("Machine is not compatible with mtk-cpufreq\n");
598 return -ENODEV; 601 return -ENODEV;
599 } 602 }
600 603
601 err = platform_driver_register(&mt8173_cpufreq_platdrv); 604 err = platform_driver_register(&mtk_cpufreq_platdrv);
602 if (err) 605 if (err)
603 return err; 606 return err;
604 607
@@ -608,7 +611,7 @@ static int __init mt8173_cpufreq_driver_init(void)
608 * and the device registration codes are put here to handle defer 611 * and the device registration codes are put here to handle defer
609 * probing. 612 * probing.
610 */ 613 */
611 pdev = platform_device_register_simple("mt8173-cpufreq", -1, NULL, 0); 614 pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
612 if (IS_ERR(pdev)) { 615 if (IS_ERR(pdev)) {
613 pr_err("failed to register mtk-cpufreq platform device\n"); 616 pr_err("failed to register mtk-cpufreq platform device\n");
614 return PTR_ERR(pdev); 617 return PTR_ERR(pdev);
@@ -616,4 +619,4 @@ static int __init mt8173_cpufreq_driver_init(void)
616 619
617 return 0; 620 return 0;
618} 621}
619device_initcall(mt8173_cpufreq_driver_init); 622device_initcall(mtk_cpufreq_driver_init);
diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c
index ff44016ea031..61ae06ca008e 100644
--- a/drivers/cpufreq/pmac32-cpufreq.c
+++ b/drivers/cpufreq/pmac32-cpufreq.c
@@ -442,7 +442,8 @@ static struct cpufreq_driver pmac_cpufreq_driver = {
442 .init = pmac_cpufreq_cpu_init, 442 .init = pmac_cpufreq_cpu_init,
443 .suspend = pmac_cpufreq_suspend, 443 .suspend = pmac_cpufreq_suspend,
444 .resume = pmac_cpufreq_resume, 444 .resume = pmac_cpufreq_resume,
445 .flags = CPUFREQ_PM_NO_WARN, 445 .flags = CPUFREQ_PM_NO_WARN |
446 CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
446 .attr = cpufreq_generic_attr, 447 .attr = cpufreq_generic_attr,
447 .name = "powermac", 448 .name = "powermac",
448}; 449};
@@ -626,14 +627,16 @@ static int __init pmac_cpufreq_setup(void)
626 if (!value) 627 if (!value)
627 goto out; 628 goto out;
628 cur_freq = (*value) / 1000; 629 cur_freq = (*value) / 1000;
629 transition_latency = CPUFREQ_ETERNAL;
630 630
631 /* Check for 7447A based MacRISC3 */ 631 /* Check for 7447A based MacRISC3 */
632 if (of_machine_is_compatible("MacRISC3") && 632 if (of_machine_is_compatible("MacRISC3") &&
633 of_get_property(cpunode, "dynamic-power-step", NULL) && 633 of_get_property(cpunode, "dynamic-power-step", NULL) &&
634 PVR_VER(mfspr(SPRN_PVR)) == 0x8003) { 634 PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
635 pmac_cpufreq_init_7447A(cpunode); 635 pmac_cpufreq_init_7447A(cpunode);
636
637 /* Allow dynamic switching */
636 transition_latency = 8000000; 638 transition_latency = 8000000;
639 pmac_cpufreq_driver.flags &= ~CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING;
637 /* Check for other MacRISC3 machines */ 640 /* Check for other MacRISC3 machines */
638 } else if (of_machine_is_compatible("PowerBook3,4") || 641 } else if (of_machine_is_compatible("PowerBook3,4") ||
639 of_machine_is_compatible("PowerBook3,5") || 642 of_machine_is_compatible("PowerBook3,5") ||
diff --git a/drivers/cpufreq/pmac64-cpufreq.c b/drivers/cpufreq/pmac64-cpufreq.c
index 267e0894c62d..be623dd7b9f2 100644
--- a/drivers/cpufreq/pmac64-cpufreq.c
+++ b/drivers/cpufreq/pmac64-cpufreq.c
@@ -516,7 +516,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
516 goto bail; 516 goto bail;
517 } 517 }
518 518
519 DBG("cpufreq: i2c clock chip found: %s\n", hwclock->full_name); 519 DBG("cpufreq: i2c clock chip found: %pOF\n", hwclock);
520 520
521 /* Now get all the platform functions */ 521 /* Now get all the platform functions */
522 pfunc_cpu_getfreq = 522 pfunc_cpu_getfreq =
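%pOF is the printk format for struct device_node pointers; it prints the node's path and removes the dependency on ->full_name, which is being phased out. A one-line comparison, for illustration:

	DBG("cpufreq: i2c clock chip found: %pOF\n", hwclock);		/* new: pass the node */
	DBG("cpufreq: i2c clock chip found: %s\n", hwclock->full_name);	/* old: cached path string */
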
diff --git a/drivers/cpufreq/s5pv210-cpufreq.c b/drivers/cpufreq/s5pv210-cpufreq.c
index f82074eea779..5d31c2db12a3 100644
--- a/drivers/cpufreq/s5pv210-cpufreq.c
+++ b/drivers/cpufreq/s5pv210-cpufreq.c
@@ -602,6 +602,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
602 } 602 }
603 603
604 clk_base = of_iomap(np, 0); 604 clk_base = of_iomap(np, 0);
605 of_node_put(np);
605 if (!clk_base) { 606 if (!clk_base) {
606 pr_err("%s: failed to map clock registers\n", __func__); 607 pr_err("%s: failed to map clock registers\n", __func__);
607 return -EFAULT; 608 return -EFAULT;
@@ -612,6 +613,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
612 if (id < 0 || id >= ARRAY_SIZE(dmc_base)) { 613 if (id < 0 || id >= ARRAY_SIZE(dmc_base)) {
613 pr_err("%s: failed to get alias of dmc node '%s'\n", 614 pr_err("%s: failed to get alias of dmc node '%s'\n",
614 __func__, np->name); 615 __func__, np->name);
616 of_node_put(np);
615 return id; 617 return id;
616 } 618 }
617 619
@@ -619,6 +621,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
619 if (!dmc_base[id]) { 621 if (!dmc_base[id]) {
620 pr_err("%s: failed to map dmc%d registers\n", 622 pr_err("%s: failed to map dmc%d registers\n",
621 __func__, id); 623 __func__, id);
624 of_node_put(np);
622 return -EFAULT; 625 return -EFAULT;
623 } 626 }
624 } 627 }
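of_find_compatible_node() and friends return a device_node with its reference count raised, so every exit path has to drop it; the added of_node_put() calls close the leaks on the error returns. The general pattern, sketched with an illustrative compatible string (needs <linux/of.h> and <linux/of_address.h>):

	static void __iomem *map_example_node(void)
	{
		struct device_node *np;
		void __iomem *base;

		np = of_find_compatible_node(NULL, NULL, "example,clock");
		if (!np)
			return NULL;

		base = of_iomap(np, 0);
		of_node_put(np);	/* drop the reference on every path out */
		return base;		/* may be NULL; the caller checks */
	}
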
diff --git a/drivers/cpufreq/sa1100-cpufreq.c b/drivers/cpufreq/sa1100-cpufreq.c
index 728eab77e8e0..e2d8a77c36d5 100644
--- a/drivers/cpufreq/sa1100-cpufreq.c
+++ b/drivers/cpufreq/sa1100-cpufreq.c
@@ -197,11 +197,12 @@ static int sa1100_target(struct cpufreq_policy *policy, unsigned int ppcr)
197 197
198static int __init sa1100_cpu_init(struct cpufreq_policy *policy) 198static int __init sa1100_cpu_init(struct cpufreq_policy *policy)
199{ 199{
200 return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL); 200 return cpufreq_generic_init(policy, sa11x0_freq_table, 0);
201} 201}
202 202
203static struct cpufreq_driver sa1100_driver __refdata = { 203static struct cpufreq_driver sa1100_driver __refdata = {
204 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 204 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
205 CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
205 .verify = cpufreq_generic_frequency_table_verify, 206 .verify = cpufreq_generic_frequency_table_verify,
206 .target_index = sa1100_target, 207 .target_index = sa1100_target,
207 .get = sa11x0_getspeed, 208 .get = sa11x0_getspeed,
diff --git a/drivers/cpufreq/sa1110-cpufreq.c b/drivers/cpufreq/sa1110-cpufreq.c
index 2bac9b6cfeea..66e5fb088ecc 100644
--- a/drivers/cpufreq/sa1110-cpufreq.c
+++ b/drivers/cpufreq/sa1110-cpufreq.c
@@ -306,13 +306,14 @@ static int sa1110_target(struct cpufreq_policy *policy, unsigned int ppcr)
306 306
307static int __init sa1110_cpu_init(struct cpufreq_policy *policy) 307static int __init sa1110_cpu_init(struct cpufreq_policy *policy)
308{ 308{
309 return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL); 309 return cpufreq_generic_init(policy, sa11x0_freq_table, 0);
310} 310}
311 311
312/* sa1110_driver needs __refdata because it must remain after init registers 312/* sa1110_driver needs __refdata because it must remain after init registers
313 * it with cpufreq_register_driver() */ 313 * it with cpufreq_register_driver() */
314static struct cpufreq_driver sa1110_driver __refdata = { 314static struct cpufreq_driver sa1110_driver __refdata = {
315 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 315 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
316 CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
316 .verify = cpufreq_generic_frequency_table_verify, 317 .verify = cpufreq_generic_frequency_table_verify,
317 .target_index = sa1110_target, 318 .target_index = sa1110_target,
318 .get = sa11x0_getspeed, 319 .get = sa11x0_getspeed,
diff --git a/drivers/cpufreq/sh-cpufreq.c b/drivers/cpufreq/sh-cpufreq.c
index 719c3d9f07fb..28893d435cf5 100644
--- a/drivers/cpufreq/sh-cpufreq.c
+++ b/drivers/cpufreq/sh-cpufreq.c
@@ -137,8 +137,6 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy)
137 (clk_round_rate(cpuclk, ~0UL) + 500) / 1000; 137 (clk_round_rate(cpuclk, ~0UL) + 500) / 1000;
138 } 138 }
139 139
140 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
141
142 dev_info(dev, "CPU Frequencies - Minimum %u.%03u MHz, " 140 dev_info(dev, "CPU Frequencies - Minimum %u.%03u MHz, "
143 "Maximum %u.%03u MHz.\n", 141 "Maximum %u.%03u MHz.\n",
144 policy->min / 1000, policy->min % 1000, 142 policy->min / 1000, policy->min % 1000,
@@ -159,6 +157,7 @@ static int sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
159 157
160static struct cpufreq_driver sh_cpufreq_driver = { 158static struct cpufreq_driver sh_cpufreq_driver = {
161 .name = "sh", 159 .name = "sh",
160 .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
162 .get = sh_cpufreq_get, 161 .get = sh_cpufreq_get,
163 .target = sh_cpufreq_target, 162 .target = sh_cpufreq_target,
164 .verify = sh_cpufreq_verify, 163 .verify = sh_cpufreq_verify,
diff --git a/drivers/cpufreq/speedstep-ich.c b/drivers/cpufreq/speedstep-ich.c
index b86953a3ddc4..0412a246a785 100644
--- a/drivers/cpufreq/speedstep-ich.c
+++ b/drivers/cpufreq/speedstep-ich.c
@@ -207,7 +207,7 @@ static unsigned int speedstep_detect_chipset(void)
207 * 8100 which use a pretty old revision of the 82815 207 * 8100 which use a pretty old revision of the 82815
208 * host bridge. Abort on these systems. 208 * host bridge. Abort on these systems.
209 */ 209 */
210 static struct pci_dev *hostbridge; 210 struct pci_dev *hostbridge;
211 211
212 hostbridge = pci_get_subsys(PCI_VENDOR_ID_INTEL, 212 hostbridge = pci_get_subsys(PCI_VENDOR_ID_INTEL,
213 PCI_DEVICE_ID_INTEL_82815_MC, 213 PCI_DEVICE_ID_INTEL_82815_MC,
diff --git a/drivers/cpufreq/speedstep-lib.c b/drivers/cpufreq/speedstep-lib.c
index 1b8062182c81..ccab452a4ef5 100644
--- a/drivers/cpufreq/speedstep-lib.c
+++ b/drivers/cpufreq/speedstep-lib.c
@@ -35,7 +35,7 @@ static int relaxed_check;
35static unsigned int pentium3_get_frequency(enum speedstep_processor processor) 35static unsigned int pentium3_get_frequency(enum speedstep_processor processor)
36{ 36{
37 /* See table 14 of p3_ds.pdf and table 22 of 29834003.pdf */ 37 /* See table 14 of p3_ds.pdf and table 22 of 29834003.pdf */
38 struct { 38 static const struct {
39 unsigned int ratio; /* Frequency Multiplier (x10) */ 39 unsigned int ratio; /* Frequency Multiplier (x10) */
40 u8 bitmap; /* power on configuration bits 40 u8 bitmap; /* power on configuration bits
41 [27, 25:22] (in MSR 0x2a) */ 41 [27, 25:22] (in MSR 0x2a) */
@@ -58,7 +58,7 @@ static unsigned int pentium3_get_frequency(enum speedstep_processor processor)
58 }; 58 };
59 59
60 /* PIII(-M) FSB settings: see table b1-b of 24547206.pdf */ 60 /* PIII(-M) FSB settings: see table b1-b of 24547206.pdf */
61 struct { 61 static const struct {
62 unsigned int value; /* Front Side Bus speed in MHz */ 62 unsigned int value; /* Front Side Bus speed in MHz */
63 u8 bitmap; /* power on configuration bits [18: 19] 63 u8 bitmap; /* power on configuration bits [18: 19]
64 (in MSR 0x2a) */ 64 (in MSR 0x2a) */
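Declaring the decode tables static const moves them into read-only data instead of rebuilding them on the stack on every call; the table contents themselves are unchanged. The idiom, with made-up entries rather than the real SpeedStep values:

	static const struct {
		unsigned int ratio;	/* frequency multiplier (x10) */
		u8 bitmap;		/* power-on configuration bits */
	} msr_decode_mult[] = {
		{ 30, 0x01 },
		{ 35, 0x05 },
		{ 40, 0x02 },
	};
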
diff --git a/drivers/cpufreq/speedstep-smi.c b/drivers/cpufreq/speedstep-smi.c
index 37b30071c220..d23f24ccff38 100644
--- a/drivers/cpufreq/speedstep-smi.c
+++ b/drivers/cpufreq/speedstep-smi.c
@@ -266,7 +266,6 @@ static int speedstep_cpu_init(struct cpufreq_policy *policy)
266 pr_debug("workaround worked.\n"); 266 pr_debug("workaround worked.\n");
267 } 267 }
268 268
269 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
270 return cpufreq_table_validate_and_show(policy, speedstep_freqs); 269 return cpufreq_table_validate_and_show(policy, speedstep_freqs);
271} 270}
272 271
@@ -290,6 +289,7 @@ static int speedstep_resume(struct cpufreq_policy *policy)
290 289
291static struct cpufreq_driver speedstep_driver = { 290static struct cpufreq_driver speedstep_driver = {
292 .name = "speedstep-smi", 291 .name = "speedstep-smi",
292 .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
293 .verify = cpufreq_generic_frequency_table_verify, 293 .verify = cpufreq_generic_frequency_table_verify,
294 .target_index = speedstep_target, 294 .target_index = speedstep_target,
295 .init = speedstep_cpu_init, 295 .init = speedstep_cpu_init,
diff --git a/drivers/cpufreq/sti-cpufreq.c b/drivers/cpufreq/sti-cpufreq.c
index d2d0430d09d4..47105735df12 100644
--- a/drivers/cpufreq/sti-cpufreq.c
+++ b/drivers/cpufreq/sti-cpufreq.c
@@ -65,8 +65,8 @@ static int sti_cpufreq_fetch_major(void) {
65 ret = of_property_read_u32_index(np, "st,syscfg", 65 ret = of_property_read_u32_index(np, "st,syscfg",
66 MAJOR_ID_INDEX, &major_offset); 66 MAJOR_ID_INDEX, &major_offset);
67 if (ret) { 67 if (ret) {
68 dev_err(dev, "No major number offset provided in %s [%d]\n", 68 dev_err(dev, "No major number offset provided in %pOF [%d]\n",
69 np->full_name, ret); 69 np, ret);
70 return ret; 70 return ret;
71 } 71 }
72 72
@@ -92,8 +92,8 @@ static int sti_cpufreq_fetch_minor(void)
92 MINOR_ID_INDEX, &minor_offset); 92 MINOR_ID_INDEX, &minor_offset);
93 if (ret) { 93 if (ret) {
94 dev_err(dev, 94 dev_err(dev,
95 "No minor number offset provided %s [%d]\n", 95 "No minor number offset provided %pOF [%d]\n",
96 np->full_name, ret); 96 np, ret);
97 return ret; 97 return ret;
98 } 98 }
99 99
diff --git a/drivers/cpufreq/tango-cpufreq.c b/drivers/cpufreq/tango-cpufreq.c
new file mode 100644
index 000000000000..89a7f860bfe8
--- /dev/null
+++ b/drivers/cpufreq/tango-cpufreq.c
@@ -0,0 +1,38 @@
1#include <linux/of.h>
2#include <linux/cpu.h>
3#include <linux/clk.h>
4#include <linux/pm_opp.h>
5#include <linux/platform_device.h>
6
7static const struct of_device_id machines[] __initconst = {
8 { .compatible = "sigma,tango4" },
9 { /* sentinel */ }
10};
11
12static int __init tango_cpufreq_init(void)
13{
14 struct device *cpu_dev = get_cpu_device(0);
15 unsigned long max_freq;
16 struct clk *cpu_clk;
17 void *res;
18
19 if (!of_match_node(machines, of_root))
20 return -ENODEV;
21
22 cpu_clk = clk_get(cpu_dev, NULL);
23 if (IS_ERR(cpu_clk))
24 return -ENODEV;
25
26 max_freq = clk_get_rate(cpu_clk);
27
28 dev_pm_opp_add(cpu_dev, max_freq / 1, 0);
29 dev_pm_opp_add(cpu_dev, max_freq / 2, 0);
30 dev_pm_opp_add(cpu_dev, max_freq / 3, 0);
31 dev_pm_opp_add(cpu_dev, max_freq / 5, 0);
32 dev_pm_opp_add(cpu_dev, max_freq / 9, 0);
33
34 res = platform_device_register_data(NULL, "cpufreq-dt", -1, NULL, 0);
35
36 return PTR_ERR_OR_ZERO(res);
37}
38device_initcall(tango_cpufreq_init);
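
For orientation, the five dev_pm_opp_add() calls above register a fixed frequency ladder derived from whatever rate the CPU clock reports at boot; assuming clk_get_rate() returns 1 GHz on a Tango 4 board (an assumption, not taken from this patch), the table handed to cpufreq-dt would be:

        /*
         * Assuming clk_get_rate(cpu_clk) == 1000000000 (1 GHz):
         *
         *   max_freq / 1  ->  1000000000 Hz
         *   max_freq / 2  ->   500000000 Hz
         *   max_freq / 3  ->   333333333 Hz
         *   max_freq / 5  ->   200000000 Hz
         *   max_freq / 9  ->   111111111 Hz
         *
         * No voltages are supplied (the second dev_pm_opp_add() argument is 0);
         * the "cpufreq-dt" platform device registered afterwards makes the
         * generic driver pick these OPPs up as CPU0's frequency table.
         */
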
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
index a7b5658c0460..b29cd3398463 100644
--- a/drivers/cpufreq/ti-cpufreq.c
+++ b/drivers/cpufreq/ti-cpufreq.c
@@ -245,8 +245,6 @@ static int ti_cpufreq_init(void)
245 if (ret) 245 if (ret)
246 goto fail_put_node; 246 goto fail_put_node;
247 247
248 of_node_put(opp_data->opp_node);
249
250 ret = PTR_ERR_OR_ZERO(dev_pm_opp_set_supported_hw(opp_data->cpu_dev, 248 ret = PTR_ERR_OR_ZERO(dev_pm_opp_set_supported_hw(opp_data->cpu_dev,
251 version, VERSION_COUNT)); 249 version, VERSION_COUNT));
252 if (ret) { 250 if (ret) {
@@ -255,6 +253,8 @@ static int ti_cpufreq_init(void)
255 goto fail_put_node; 253 goto fail_put_node;
256 } 254 }
257 255
256 of_node_put(opp_data->opp_node);
257
258register_cpufreq_dt: 258register_cpufreq_dt:
259 platform_device_register_simple("cpufreq-dt", -1, NULL, 0); 259 platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
260 260
diff --git a/drivers/cpufreq/unicore2-cpufreq.c b/drivers/cpufreq/unicore2-cpufreq.c
index 6f9dfa80563a..db62d9844751 100644
--- a/drivers/cpufreq/unicore2-cpufreq.c
+++ b/drivers/cpufreq/unicore2-cpufreq.c
@@ -58,13 +58,12 @@ static int __init ucv2_cpu_init(struct cpufreq_policy *policy)
58 58
59 policy->min = policy->cpuinfo.min_freq = 250000; 59 policy->min = policy->cpuinfo.min_freq = 250000;
60 policy->max = policy->cpuinfo.max_freq = 1000000; 60 policy->max = policy->cpuinfo.max_freq = 1000000;
61 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
62 policy->clk = clk_get(NULL, "MAIN_CLK"); 61 policy->clk = clk_get(NULL, "MAIN_CLK");
63 return PTR_ERR_OR_ZERO(policy->clk); 62 return PTR_ERR_OR_ZERO(policy->clk);
64} 63}
65 64
66static struct cpufreq_driver ucv2_driver = { 65static struct cpufreq_driver ucv2_driver = {
67 .flags = CPUFREQ_STICKY, 66 .flags = CPUFREQ_STICKY | CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
68 .verify = ucv2_verify_speed, 67 .verify = ucv2_verify_speed,
69 .target = ucv2_target, 68 .target = ucv2_target,
70 .get = cpufreq_generic_get, 69 .get = cpufreq_generic_get,
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 3ba81b1dffad..0b67a05a7aae 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -5,6 +5,7 @@
5obj-y += cpuidle.o driver.o governor.o sysfs.o governors/ 5obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
6obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o 6obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
7obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o 7obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o
8obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o
8 9
9################################################################################## 10##################################################################################
10# ARM SoC drivers 11# ARM SoC drivers
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 60bb64f4329d..484cc8909d5c 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -77,7 +77,7 @@ static int find_deepest_state(struct cpuidle_driver *drv,
77 struct cpuidle_device *dev, 77 struct cpuidle_device *dev,
78 unsigned int max_latency, 78 unsigned int max_latency,
79 unsigned int forbidden_flags, 79 unsigned int forbidden_flags,
80 bool freeze) 80 bool s2idle)
81{ 81{
82 unsigned int latency_req = 0; 82 unsigned int latency_req = 0;
83 int i, ret = 0; 83 int i, ret = 0;
@@ -89,7 +89,7 @@ static int find_deepest_state(struct cpuidle_driver *drv,
89 if (s->disabled || su->disable || s->exit_latency <= latency_req 89 if (s->disabled || su->disable || s->exit_latency <= latency_req
90 || s->exit_latency > max_latency 90 || s->exit_latency > max_latency
91 || (s->flags & forbidden_flags) 91 || (s->flags & forbidden_flags)
92 || (freeze && !s->enter_freeze)) 92 || (s2idle && !s->enter_s2idle))
93 continue; 93 continue;
94 94
95 latency_req = s->exit_latency; 95 latency_req = s->exit_latency;
@@ -128,7 +128,7 @@ int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
128} 128}
129 129
130#ifdef CONFIG_SUSPEND 130#ifdef CONFIG_SUSPEND
131static void enter_freeze_proper(struct cpuidle_driver *drv, 131static void enter_s2idle_proper(struct cpuidle_driver *drv,
132 struct cpuidle_device *dev, int index) 132 struct cpuidle_device *dev, int index)
133{ 133{
134 /* 134 /*
@@ -143,7 +143,7 @@ static void enter_freeze_proper(struct cpuidle_driver *drv,
143 * suspended is generally unsafe. 143 * suspended is generally unsafe.
144 */ 144 */
145 stop_critical_timings(); 145 stop_critical_timings();
146 drv->states[index].enter_freeze(dev, drv, index); 146 drv->states[index].enter_s2idle(dev, drv, index);
147 WARN_ON(!irqs_disabled()); 147 WARN_ON(!irqs_disabled());
148 /* 148 /*
149 * timekeeping_resume() that will be called by tick_unfreeze() for the 149 * timekeeping_resume() that will be called by tick_unfreeze() for the
@@ -155,25 +155,25 @@ static void enter_freeze_proper(struct cpuidle_driver *drv,
155} 155}
156 156
157/** 157/**
158 * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle. 158 * cpuidle_enter_s2idle - Enter an idle state suitable for suspend-to-idle.
159 * @drv: cpuidle driver for the given CPU. 159 * @drv: cpuidle driver for the given CPU.
160 * @dev: cpuidle device for the given CPU. 160 * @dev: cpuidle device for the given CPU.
161 * 161 *
162 * If there are states with the ->enter_freeze callback, find the deepest of 162 * If there are states with the ->enter_s2idle callback, find the deepest of
163 * them and enter it with frozen tick. 163 * them and enter it with frozen tick.
164 */ 164 */
165int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev) 165int cpuidle_enter_s2idle(struct cpuidle_driver *drv, struct cpuidle_device *dev)
166{ 166{
167 int index; 167 int index;
168 168
169 /* 169 /*
170 * Find the deepest state with ->enter_freeze present, which guarantees 170 * Find the deepest state with ->enter_s2idle present, which guarantees
171 * that interrupts won't be enabled when it exits and allows the tick to 171 * that interrupts won't be enabled when it exits and allows the tick to
172 * be frozen safely. 172 * be frozen safely.
173 */ 173 */
174 index = find_deepest_state(drv, dev, UINT_MAX, 0, true); 174 index = find_deepest_state(drv, dev, UINT_MAX, 0, true);
175 if (index > 0) 175 if (index > 0)
176 enter_freeze_proper(drv, dev, index); 176 enter_s2idle_proper(drv, dev, index);
177 177
178 return index; 178 return index;
179} 179}
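
Since every cpuidle driver carries the renamed callback, the shape of a state table entry after this change looks as below; the values and example_* handlers are placeholders, the field names are the renamed ones:

        /* Sketch: a state usable for normal idle and for suspend-to-idle. */
        static struct cpuidle_state example_cstates[] = {
                {
                        .name                   = "C1",
                        .desc                   = "example shallow state",
                        .exit_latency           = 2,    /* us, illustrative */
                        .target_residency       = 2,    /* us, illustrative */
                        .enter                  = example_enter,
                        .enter_s2idle           = example_enter_s2idle, },
                {
                        .enter = NULL }
        };
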
diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c
index e53fb861beb0..dc32f34e68d9 100644
--- a/drivers/cpuidle/driver.c
+++ b/drivers/cpuidle/driver.c
@@ -179,36 +179,6 @@ static void __cpuidle_driver_init(struct cpuidle_driver *drv)
179 } 179 }
180} 180}
181 181
182#ifdef CONFIG_ARCH_HAS_CPU_RELAX
183static int __cpuidle poll_idle(struct cpuidle_device *dev,
184 struct cpuidle_driver *drv, int index)
185{
186 local_irq_enable();
187 if (!current_set_polling_and_test()) {
188 while (!need_resched())
189 cpu_relax();
190 }
191 current_clr_polling();
192
193 return index;
194}
195
196static void poll_idle_init(struct cpuidle_driver *drv)
197{
198 struct cpuidle_state *state = &drv->states[0];
199
200 snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
201 snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
202 state->exit_latency = 0;
203 state->target_residency = 0;
204 state->power_usage = -1;
205 state->enter = poll_idle;
206 state->disabled = false;
207}
208#else
209static void poll_idle_init(struct cpuidle_driver *drv) {}
210#endif /* !CONFIG_ARCH_HAS_CPU_RELAX */
211
212/** 182/**
213 * __cpuidle_register_driver: register the driver 183 * __cpuidle_register_driver: register the driver
214 * @drv: a valid pointer to a struct cpuidle_driver 184 * @drv: a valid pointer to a struct cpuidle_driver
@@ -246,8 +216,6 @@ static int __cpuidle_register_driver(struct cpuidle_driver *drv)
246 on_each_cpu_mask(drv->cpumask, cpuidle_setup_broadcast_timer, 216 on_each_cpu_mask(drv->cpumask, cpuidle_setup_broadcast_timer,
247 (void *)1, 1); 217 (void *)1, 1);
248 218
249 poll_idle_init(drv);
250
251 return 0; 219 return 0;
252} 220}
253 221
diff --git a/drivers/cpuidle/dt_idle_states.c b/drivers/cpuidle/dt_idle_states.c
index ae8eb0359889..53342b7f1010 100644
--- a/drivers/cpuidle/dt_idle_states.c
+++ b/drivers/cpuidle/dt_idle_states.c
@@ -41,9 +41,9 @@ static int init_state_node(struct cpuidle_state *idle_state,
41 /* 41 /*
42 * Since this is not a "coupled" state, it's safe to assume interrupts 42 * Since this is not a "coupled" state, it's safe to assume interrupts
43 * won't be enabled when it exits allowing the tick to be frozen 43 * won't be enabled when it exits allowing the tick to be frozen
44 * safely. So enter() can be also enter_freeze() callback. 44 * safely. So enter() can be also enter_s2idle() callback.
45 */ 45 */
46 idle_state->enter_freeze = match_id->data; 46 idle_state->enter_s2idle = match_id->data;
47 47
48 err = of_property_read_u32(state_node, "wakeup-latency-us", 48 err = of_property_read_u32(state_node, "wakeup-latency-us",
49 &idle_state->exit_latency); 49 &idle_state->exit_latency);
@@ -53,16 +53,16 @@ static int init_state_node(struct cpuidle_state *idle_state,
53 err = of_property_read_u32(state_node, "entry-latency-us", 53 err = of_property_read_u32(state_node, "entry-latency-us",
54 &entry_latency); 54 &entry_latency);
55 if (err) { 55 if (err) {
56 pr_debug(" * %s missing entry-latency-us property\n", 56 pr_debug(" * %pOF missing entry-latency-us property\n",
57 state_node->full_name); 57 state_node);
58 return -EINVAL; 58 return -EINVAL;
59 } 59 }
60 60
61 err = of_property_read_u32(state_node, "exit-latency-us", 61 err = of_property_read_u32(state_node, "exit-latency-us",
62 &exit_latency); 62 &exit_latency);
63 if (err) { 63 if (err) {
64 pr_debug(" * %s missing exit-latency-us property\n", 64 pr_debug(" * %pOF missing exit-latency-us property\n",
65 state_node->full_name); 65 state_node);
66 return -EINVAL; 66 return -EINVAL;
67 } 67 }
68 /* 68 /*
@@ -75,8 +75,8 @@ static int init_state_node(struct cpuidle_state *idle_state,
75 err = of_property_read_u32(state_node, "min-residency-us", 75 err = of_property_read_u32(state_node, "min-residency-us",
76 &idle_state->target_residency); 76 &idle_state->target_residency);
77 if (err) { 77 if (err) {
78 pr_debug(" * %s missing min-residency-us property\n", 78 pr_debug(" * %pOF missing min-residency-us property\n",
79 state_node->full_name); 79 state_node);
80 return -EINVAL; 80 return -EINVAL;
81 } 81 }
82 82
@@ -186,8 +186,8 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
186 } 186 }
187 187
188 if (!idle_state_valid(state_node, i, cpumask)) { 188 if (!idle_state_valid(state_node, i, cpumask)) {
189 pr_warn("%s idle state not valid, bailing out\n", 189 pr_warn("%pOF idle state not valid, bailing out\n",
190 state_node->full_name); 190 state_node);
191 err = -EINVAL; 191 err = -EINVAL;
192 break; 192 break;
193 } 193 }
@@ -200,8 +200,8 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
200 idle_state = &drv->states[state_idx++]; 200 idle_state = &drv->states[state_idx++];
201 err = init_state_node(idle_state, matches, state_node); 201 err = init_state_node(idle_state, matches, state_node);
202 if (err) { 202 if (err) {
203 pr_err("Parsing idle state node %s failed with err %d\n", 203 pr_err("Parsing idle state node %pOF failed with err %d\n",
204 state_node->full_name, err); 204 state_node, err);
205 err = -EINVAL; 205 err = -EINVAL;
206 break; 206 break;
207 } 207 }
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index ac321f09e717..ce1a2ffffb2a 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -69,6 +69,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
69 struct ladder_device *ldev = this_cpu_ptr(&ladder_devices); 69 struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
70 struct ladder_device_state *last_state; 70 struct ladder_device_state *last_state;
71 int last_residency, last_idx = ldev->last_state_idx; 71 int last_residency, last_idx = ldev->last_state_idx;
72 int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
72 int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY); 73 int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
73 74
74 /* Special case when user has set very strict latency requirement */ 75 /* Special case when user has set very strict latency requirement */
@@ -96,13 +97,13 @@ static int ladder_select_state(struct cpuidle_driver *drv,
96 } 97 }
97 98
98 /* consider demotion */ 99 /* consider demotion */
99 if (last_idx > CPUIDLE_DRIVER_STATE_START && 100 if (last_idx > first_idx &&
100 (drv->states[last_idx].disabled || 101 (drv->states[last_idx].disabled ||
101 dev->states_usage[last_idx].disable || 102 dev->states_usage[last_idx].disable ||
102 drv->states[last_idx].exit_latency > latency_req)) { 103 drv->states[last_idx].exit_latency > latency_req)) {
103 int i; 104 int i;
104 105
105 for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) { 106 for (i = last_idx - 1; i > first_idx; i--) {
106 if (drv->states[i].exit_latency <= latency_req) 107 if (drv->states[i].exit_latency <= latency_req)
107 break; 108 break;
108 } 109 }
@@ -110,7 +111,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
110 return i; 111 return i;
111 } 112 }
112 113
113 if (last_idx > CPUIDLE_DRIVER_STATE_START && 114 if (last_idx > first_idx &&
114 last_residency < last_state->threshold.demotion_time) { 115 last_residency < last_state->threshold.demotion_time) {
115 last_state->stats.demotion_count++; 116 last_state->stats.demotion_count++;
116 last_state->stats.promotion_count = 0; 117 last_state->stats.promotion_count = 0;
@@ -133,13 +134,14 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
133 struct cpuidle_device *dev) 134 struct cpuidle_device *dev)
134{ 135{
135 int i; 136 int i;
137 int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
136 struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu); 138 struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu);
137 struct ladder_device_state *lstate; 139 struct ladder_device_state *lstate;
138 struct cpuidle_state *state; 140 struct cpuidle_state *state;
139 141
140 ldev->last_state_idx = CPUIDLE_DRIVER_STATE_START; 142 ldev->last_state_idx = first_idx;
141 143
142 for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) { 144 for (i = first_idx; i < drv->state_count; i++) {
143 state = &drv->states[i]; 145 state = &drv->states[i];
144 lstate = &ldev->states[i]; 146 lstate = &ldev->states[i];
145 147
@@ -151,7 +153,7 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
151 153
152 if (i < drv->state_count - 1) 154 if (i < drv->state_count - 1)
153 lstate->threshold.promotion_time = state->exit_latency; 155 lstate->threshold.promotion_time = state->exit_latency;
154 if (i > CPUIDLE_DRIVER_STATE_START) 156 if (i > first_idx)
155 lstate->threshold.demotion_time = state->exit_latency; 157 lstate->threshold.demotion_time = state->exit_latency;
156 } 158 }
157 159
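
Both the ladder changes above and the menu changes below replace the compile-time CPUIDLE_DRIVER_STATE_START assumption with a run-time test; the helper below is not in the tree, it just restates the open-coded check the governors now share:

        /* If the driver installed the poll state, it sits at index 0 and is
         * flagged, so the first hardware state starts at index 1. */
        static int first_real_state(struct cpuidle_driver *drv)
        {
                return (drv->states[0].flags & CPUIDLE_FLAG_POLLING) ? 1 : 0;
        }
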
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 61b64c2b2cb8..48eaf2879228 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -324,8 +324,9 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
324 expected_interval = get_typical_interval(data); 324 expected_interval = get_typical_interval(data);
325 expected_interval = min(expected_interval, data->next_timer_us); 325 expected_interval = min(expected_interval, data->next_timer_us);
326 326
327 if (CPUIDLE_DRIVER_STATE_START > 0) { 327 first_idx = 0;
328 struct cpuidle_state *s = &drv->states[CPUIDLE_DRIVER_STATE_START]; 328 if (drv->states[0].flags & CPUIDLE_FLAG_POLLING) {
329 struct cpuidle_state *s = &drv->states[1];
329 unsigned int polling_threshold; 330 unsigned int polling_threshold;
330 331
331 /* 332 /*
@@ -336,12 +337,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
336 polling_threshold = max_t(unsigned int, 20, s->target_residency); 337 polling_threshold = max_t(unsigned int, 20, s->target_residency);
337 if (data->next_timer_us > polling_threshold && 338 if (data->next_timer_us > polling_threshold &&
338 latency_req > s->exit_latency && !s->disabled && 339 latency_req > s->exit_latency && !s->disabled &&
339 !dev->states_usage[CPUIDLE_DRIVER_STATE_START].disable) 340 !dev->states_usage[1].disable)
340 first_idx = CPUIDLE_DRIVER_STATE_START; 341 first_idx = 1;
341 else
342 first_idx = CPUIDLE_DRIVER_STATE_START - 1;
343 } else {
344 first_idx = 0;
345 } 342 }
346 343
347 /* 344 /*
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
new file mode 100644
index 000000000000..7416b16287de
--- /dev/null
+++ b/drivers/cpuidle/poll_state.c
@@ -0,0 +1,37 @@
1/*
2 * poll_state.c - Polling idle state
3 *
4 * This file is released under the GPLv2.
5 */
6
7#include <linux/cpuidle.h>
8#include <linux/sched.h>
9#include <linux/sched/idle.h>
10
11static int __cpuidle poll_idle(struct cpuidle_device *dev,
12 struct cpuidle_driver *drv, int index)
13{
14 local_irq_enable();
15 if (!current_set_polling_and_test()) {
16 while (!need_resched())
17 cpu_relax();
18 }
19 current_clr_polling();
20
21 return index;
22}
23
24void cpuidle_poll_state_init(struct cpuidle_driver *drv)
25{
26 struct cpuidle_state *state = &drv->states[0];
27
28 snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
29 snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
30 state->exit_latency = 0;
31 state->target_residency = 0;
32 state->power_usage = -1;
33 state->enter = poll_idle;
34 state->disabled = false;
35 state->flags = CPUIDLE_FLAG_POLLING;
36}
37EXPORT_SYMBOL_GPL(cpuidle_poll_state_init);
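
With poll_idle() moved out of the driver core, the poll state is no longer injected behind a driver's back; a driver that wants it calls the exported helper itself, as intel_idle does further down. A condensed sketch of that flow with placeholder names:

        #include <linux/cpuidle.h>
        #include <linux/module.h>

        static struct cpuidle_driver example_idle_driver = {
                .name  = "example_idle",
                .owner = THIS_MODULE,
        };

        static int __init example_idle_init(void)
        {
                struct cpuidle_driver *drv = &example_idle_driver;

                cpuidle_poll_state_init(drv);   /* installs state 0: "POLL" */
                drv->state_count = 1;
                /* append hardware states at drv->states[1..] and bump state_count */

                return cpuidle_register_driver(drv);
        }
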
diff --git a/drivers/devfreq/Kconfig b/drivers/devfreq/Kconfig
index 41254e702f1e..6a172d338f6d 100644
--- a/drivers/devfreq/Kconfig
+++ b/drivers/devfreq/Kconfig
@@ -1,6 +1,7 @@
1menuconfig PM_DEVFREQ 1menuconfig PM_DEVFREQ
2 bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support" 2 bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support"
3 select SRCU 3 select SRCU
4 select PM_OPP
4 help 5 help
5 A device may have a list of frequencies and voltages available. 6 A device may have a list of frequencies and voltages available.
6 devfreq, a generic DVFS framework can be registered for a device 7 devfreq, a generic DVFS framework can be registered for a device
diff --git a/drivers/devfreq/devfreq-event.c b/drivers/devfreq/devfreq-event.c
index 8648b32ebc89..d67242d87744 100644
--- a/drivers/devfreq/devfreq-event.c
+++ b/drivers/devfreq/devfreq-event.c
@@ -277,8 +277,8 @@ int devfreq_event_get_edev_count(struct device *dev)
277 sizeof(u32)); 277 sizeof(u32));
278 if (count < 0) { 278 if (count < 0) {
279 dev_err(dev, 279 dev_err(dev,
280 "failed to get the count of devfreq-event in %s node\n", 280 "failed to get the count of devfreq-event in %pOF node\n",
281 dev->of_node->full_name); 281 dev->of_node);
282 return count; 282 return count;
283 } 283 }
284 284
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index dea04871b50d..a1c4ee818614 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -564,7 +564,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
564 err = device_register(&devfreq->dev); 564 err = device_register(&devfreq->dev);
565 if (err) { 565 if (err) {
566 mutex_unlock(&devfreq->lock); 566 mutex_unlock(&devfreq->lock);
567 goto err_out; 567 goto err_dev;
568 } 568 }
569 569
570 devfreq->trans_table = devm_kzalloc(&devfreq->dev, 570 devfreq->trans_table = devm_kzalloc(&devfreq->dev,
@@ -610,6 +610,9 @@ err_init:
610 mutex_unlock(&devfreq_list_lock); 610 mutex_unlock(&devfreq_list_lock);
611 611
612 device_unregister(&devfreq->dev); 612 device_unregister(&devfreq->dev);
613err_dev:
614 if (devfreq)
615 kfree(devfreq);
613err_out: 616err_out:
614 return ERR_PTR(err); 617 return ERR_PTR(err);
615} 618}
diff --git a/drivers/devfreq/governor.h b/drivers/devfreq/governor.h
index a4f2fa1091e4..cfc50a61a90d 100644
--- a/drivers/devfreq/governor.h
+++ b/drivers/devfreq/governor.h
@@ -69,4 +69,8 @@ extern int devfreq_remove_governor(struct devfreq_governor *governor);
69 69
70extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq); 70extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
71 71
72static inline int devfreq_update_stats(struct devfreq *df)
73{
74 return df->profile->get_dev_status(df->dev.parent, &df->last_status);
75}
72#endif /* _GOVERNOR_H */ 76#endif /* _GOVERNOR_H */
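
The devfreq_update_stats() inline added above is what governors call to refresh the device status before picking a frequency; a hypothetical load-proportional governor callback might use it like this (the helper and the devfreq_dev_status fields are real, the governor is not):

        #include <linux/devfreq.h>
        #include <linux/math64.h>

        #include "governor.h"   /* in-tree governors pick up the new inline from here */

        static int example_get_target_freq(struct devfreq *df, unsigned long *freq)
        {
                struct devfreq_dev_status *stat;
                int err;

                err = devfreq_update_stats(df);         /* refreshes df->last_status */
                if (err)
                        return err;

                stat = &df->last_status;
                if (!stat->total_time) {                /* no data yet, hold frequency */
                        *freq = stat->current_frequency;
                        return 0;
                }

                /* scale the current frequency by the measured load */
                *freq = div_u64((u64)stat->current_frequency * stat->busy_time,
                                stat->total_time);
                return 0;
        }
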
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index e87ffb3c31a9..5dc7ea4b6bc4 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -97,7 +97,7 @@ static const struct idle_cpu *icpu;
97static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; 97static struct cpuidle_device __percpu *intel_idle_cpuidle_devices;
98static int intel_idle(struct cpuidle_device *dev, 98static int intel_idle(struct cpuidle_device *dev,
99 struct cpuidle_driver *drv, int index); 99 struct cpuidle_driver *drv, int index);
100static void intel_idle_freeze(struct cpuidle_device *dev, 100static void intel_idle_s2idle(struct cpuidle_device *dev,
101 struct cpuidle_driver *drv, int index); 101 struct cpuidle_driver *drv, int index);
102static struct cpuidle_state *cpuidle_state_table; 102static struct cpuidle_state *cpuidle_state_table;
103 103
@@ -132,7 +132,7 @@ static struct cpuidle_state nehalem_cstates[] = {
132 .exit_latency = 3, 132 .exit_latency = 3,
133 .target_residency = 6, 133 .target_residency = 6,
134 .enter = &intel_idle, 134 .enter = &intel_idle,
135 .enter_freeze = intel_idle_freeze, }, 135 .enter_s2idle = intel_idle_s2idle, },
136 { 136 {
137 .name = "C1E", 137 .name = "C1E",
138 .desc = "MWAIT 0x01", 138 .desc = "MWAIT 0x01",
@@ -140,7 +140,7 @@ static struct cpuidle_state nehalem_cstates[] = {
140 .exit_latency = 10, 140 .exit_latency = 10,
141 .target_residency = 20, 141 .target_residency = 20,
142 .enter = &intel_idle, 142 .enter = &intel_idle,
143 .enter_freeze = intel_idle_freeze, }, 143 .enter_s2idle = intel_idle_s2idle, },
144 { 144 {
145 .name = "C3", 145 .name = "C3",
146 .desc = "MWAIT 0x10", 146 .desc = "MWAIT 0x10",
@@ -148,7 +148,7 @@ static struct cpuidle_state nehalem_cstates[] = {
148 .exit_latency = 20, 148 .exit_latency = 20,
149 .target_residency = 80, 149 .target_residency = 80,
150 .enter = &intel_idle, 150 .enter = &intel_idle,
151 .enter_freeze = intel_idle_freeze, }, 151 .enter_s2idle = intel_idle_s2idle, },
152 { 152 {
153 .name = "C6", 153 .name = "C6",
154 .desc = "MWAIT 0x20", 154 .desc = "MWAIT 0x20",
@@ -156,7 +156,7 @@ static struct cpuidle_state nehalem_cstates[] = {
156 .exit_latency = 200, 156 .exit_latency = 200,
157 .target_residency = 800, 157 .target_residency = 800,
158 .enter = &intel_idle, 158 .enter = &intel_idle,
159 .enter_freeze = intel_idle_freeze, }, 159 .enter_s2idle = intel_idle_s2idle, },
160 { 160 {
161 .enter = NULL } 161 .enter = NULL }
162}; 162};
@@ -169,7 +169,7 @@ static struct cpuidle_state snb_cstates[] = {
169 .exit_latency = 2, 169 .exit_latency = 2,
170 .target_residency = 2, 170 .target_residency = 2,
171 .enter = &intel_idle, 171 .enter = &intel_idle,
172 .enter_freeze = intel_idle_freeze, }, 172 .enter_s2idle = intel_idle_s2idle, },
173 { 173 {
174 .name = "C1E", 174 .name = "C1E",
175 .desc = "MWAIT 0x01", 175 .desc = "MWAIT 0x01",
@@ -177,7 +177,7 @@ static struct cpuidle_state snb_cstates[] = {
177 .exit_latency = 10, 177 .exit_latency = 10,
178 .target_residency = 20, 178 .target_residency = 20,
179 .enter = &intel_idle, 179 .enter = &intel_idle,
180 .enter_freeze = intel_idle_freeze, }, 180 .enter_s2idle = intel_idle_s2idle, },
181 { 181 {
182 .name = "C3", 182 .name = "C3",
183 .desc = "MWAIT 0x10", 183 .desc = "MWAIT 0x10",
@@ -185,7 +185,7 @@ static struct cpuidle_state snb_cstates[] = {
185 .exit_latency = 80, 185 .exit_latency = 80,
186 .target_residency = 211, 186 .target_residency = 211,
187 .enter = &intel_idle, 187 .enter = &intel_idle,
188 .enter_freeze = intel_idle_freeze, }, 188 .enter_s2idle = intel_idle_s2idle, },
189 { 189 {
190 .name = "C6", 190 .name = "C6",
191 .desc = "MWAIT 0x20", 191 .desc = "MWAIT 0x20",
@@ -193,7 +193,7 @@ static struct cpuidle_state snb_cstates[] = {
193 .exit_latency = 104, 193 .exit_latency = 104,
194 .target_residency = 345, 194 .target_residency = 345,
195 .enter = &intel_idle, 195 .enter = &intel_idle,
196 .enter_freeze = intel_idle_freeze, }, 196 .enter_s2idle = intel_idle_s2idle, },
197 { 197 {
198 .name = "C7", 198 .name = "C7",
199 .desc = "MWAIT 0x30", 199 .desc = "MWAIT 0x30",
@@ -201,7 +201,7 @@ static struct cpuidle_state snb_cstates[] = {
201 .exit_latency = 109, 201 .exit_latency = 109,
202 .target_residency = 345, 202 .target_residency = 345,
203 .enter = &intel_idle, 203 .enter = &intel_idle,
204 .enter_freeze = intel_idle_freeze, }, 204 .enter_s2idle = intel_idle_s2idle, },
205 { 205 {
206 .enter = NULL } 206 .enter = NULL }
207}; 207};
@@ -214,7 +214,7 @@ static struct cpuidle_state byt_cstates[] = {
214 .exit_latency = 1, 214 .exit_latency = 1,
215 .target_residency = 1, 215 .target_residency = 1,
216 .enter = &intel_idle, 216 .enter = &intel_idle,
217 .enter_freeze = intel_idle_freeze, }, 217 .enter_s2idle = intel_idle_s2idle, },
218 { 218 {
219 .name = "C6N", 219 .name = "C6N",
220 .desc = "MWAIT 0x58", 220 .desc = "MWAIT 0x58",
@@ -222,7 +222,7 @@ static struct cpuidle_state byt_cstates[] = {
222 .exit_latency = 300, 222 .exit_latency = 300,
223 .target_residency = 275, 223 .target_residency = 275,
224 .enter = &intel_idle, 224 .enter = &intel_idle,
225 .enter_freeze = intel_idle_freeze, }, 225 .enter_s2idle = intel_idle_s2idle, },
226 { 226 {
227 .name = "C6S", 227 .name = "C6S",
228 .desc = "MWAIT 0x52", 228 .desc = "MWAIT 0x52",
@@ -230,7 +230,7 @@ static struct cpuidle_state byt_cstates[] = {
230 .exit_latency = 500, 230 .exit_latency = 500,
231 .target_residency = 560, 231 .target_residency = 560,
232 .enter = &intel_idle, 232 .enter = &intel_idle,
233 .enter_freeze = intel_idle_freeze, }, 233 .enter_s2idle = intel_idle_s2idle, },
234 { 234 {
235 .name = "C7", 235 .name = "C7",
236 .desc = "MWAIT 0x60", 236 .desc = "MWAIT 0x60",
@@ -238,7 +238,7 @@ static struct cpuidle_state byt_cstates[] = {
238 .exit_latency = 1200, 238 .exit_latency = 1200,
239 .target_residency = 4000, 239 .target_residency = 4000,
240 .enter = &intel_idle, 240 .enter = &intel_idle,
241 .enter_freeze = intel_idle_freeze, }, 241 .enter_s2idle = intel_idle_s2idle, },
242 { 242 {
243 .name = "C7S", 243 .name = "C7S",
244 .desc = "MWAIT 0x64", 244 .desc = "MWAIT 0x64",
@@ -246,7 +246,7 @@ static struct cpuidle_state byt_cstates[] = {
246 .exit_latency = 10000, 246 .exit_latency = 10000,
247 .target_residency = 20000, 247 .target_residency = 20000,
248 .enter = &intel_idle, 248 .enter = &intel_idle,
249 .enter_freeze = intel_idle_freeze, }, 249 .enter_s2idle = intel_idle_s2idle, },
250 { 250 {
251 .enter = NULL } 251 .enter = NULL }
252}; 252};
@@ -259,7 +259,7 @@ static struct cpuidle_state cht_cstates[] = {
259 .exit_latency = 1, 259 .exit_latency = 1,
260 .target_residency = 1, 260 .target_residency = 1,
261 .enter = &intel_idle, 261 .enter = &intel_idle,
262 .enter_freeze = intel_idle_freeze, }, 262 .enter_s2idle = intel_idle_s2idle, },
263 { 263 {
264 .name = "C6N", 264 .name = "C6N",
265 .desc = "MWAIT 0x58", 265 .desc = "MWAIT 0x58",
@@ -267,7 +267,7 @@ static struct cpuidle_state cht_cstates[] = {
267 .exit_latency = 80, 267 .exit_latency = 80,
268 .target_residency = 275, 268 .target_residency = 275,
269 .enter = &intel_idle, 269 .enter = &intel_idle,
270 .enter_freeze = intel_idle_freeze, }, 270 .enter_s2idle = intel_idle_s2idle, },
271 { 271 {
272 .name = "C6S", 272 .name = "C6S",
273 .desc = "MWAIT 0x52", 273 .desc = "MWAIT 0x52",
@@ -275,7 +275,7 @@ static struct cpuidle_state cht_cstates[] = {
275 .exit_latency = 200, 275 .exit_latency = 200,
276 .target_residency = 560, 276 .target_residency = 560,
277 .enter = &intel_idle, 277 .enter = &intel_idle,
278 .enter_freeze = intel_idle_freeze, }, 278 .enter_s2idle = intel_idle_s2idle, },
279 { 279 {
280 .name = "C7", 280 .name = "C7",
281 .desc = "MWAIT 0x60", 281 .desc = "MWAIT 0x60",
@@ -283,7 +283,7 @@ static struct cpuidle_state cht_cstates[] = {
283 .exit_latency = 1200, 283 .exit_latency = 1200,
284 .target_residency = 4000, 284 .target_residency = 4000,
285 .enter = &intel_idle, 285 .enter = &intel_idle,
286 .enter_freeze = intel_idle_freeze, }, 286 .enter_s2idle = intel_idle_s2idle, },
287 { 287 {
288 .name = "C7S", 288 .name = "C7S",
289 .desc = "MWAIT 0x64", 289 .desc = "MWAIT 0x64",
@@ -291,7 +291,7 @@ static struct cpuidle_state cht_cstates[] = {
291 .exit_latency = 10000, 291 .exit_latency = 10000,
292 .target_residency = 20000, 292 .target_residency = 20000,
293 .enter = &intel_idle, 293 .enter = &intel_idle,
294 .enter_freeze = intel_idle_freeze, }, 294 .enter_s2idle = intel_idle_s2idle, },
295 { 295 {
296 .enter = NULL } 296 .enter = NULL }
297}; 297};
@@ -304,7 +304,7 @@ static struct cpuidle_state ivb_cstates[] = {
304 .exit_latency = 1, 304 .exit_latency = 1,
305 .target_residency = 1, 305 .target_residency = 1,
306 .enter = &intel_idle, 306 .enter = &intel_idle,
307 .enter_freeze = intel_idle_freeze, }, 307 .enter_s2idle = intel_idle_s2idle, },
308 { 308 {
309 .name = "C1E", 309 .name = "C1E",
310 .desc = "MWAIT 0x01", 310 .desc = "MWAIT 0x01",
@@ -312,7 +312,7 @@ static struct cpuidle_state ivb_cstates[] = {
312 .exit_latency = 10, 312 .exit_latency = 10,
313 .target_residency = 20, 313 .target_residency = 20,
314 .enter = &intel_idle, 314 .enter = &intel_idle,
315 .enter_freeze = intel_idle_freeze, }, 315 .enter_s2idle = intel_idle_s2idle, },
316 { 316 {
317 .name = "C3", 317 .name = "C3",
318 .desc = "MWAIT 0x10", 318 .desc = "MWAIT 0x10",
@@ -320,7 +320,7 @@ static struct cpuidle_state ivb_cstates[] = {
320 .exit_latency = 59, 320 .exit_latency = 59,
321 .target_residency = 156, 321 .target_residency = 156,
322 .enter = &intel_idle, 322 .enter = &intel_idle,
323 .enter_freeze = intel_idle_freeze, }, 323 .enter_s2idle = intel_idle_s2idle, },
324 { 324 {
325 .name = "C6", 325 .name = "C6",
326 .desc = "MWAIT 0x20", 326 .desc = "MWAIT 0x20",
@@ -328,7 +328,7 @@ static struct cpuidle_state ivb_cstates[] = {
328 .exit_latency = 80, 328 .exit_latency = 80,
329 .target_residency = 300, 329 .target_residency = 300,
330 .enter = &intel_idle, 330 .enter = &intel_idle,
331 .enter_freeze = intel_idle_freeze, }, 331 .enter_s2idle = intel_idle_s2idle, },
332 { 332 {
333 .name = "C7", 333 .name = "C7",
334 .desc = "MWAIT 0x30", 334 .desc = "MWAIT 0x30",
@@ -336,7 +336,7 @@ static struct cpuidle_state ivb_cstates[] = {
336 .exit_latency = 87, 336 .exit_latency = 87,
337 .target_residency = 300, 337 .target_residency = 300,
338 .enter = &intel_idle, 338 .enter = &intel_idle,
339 .enter_freeze = intel_idle_freeze, }, 339 .enter_s2idle = intel_idle_s2idle, },
340 { 340 {
341 .enter = NULL } 341 .enter = NULL }
342}; 342};
@@ -349,7 +349,7 @@ static struct cpuidle_state ivt_cstates[] = {
349 .exit_latency = 1, 349 .exit_latency = 1,
350 .target_residency = 1, 350 .target_residency = 1,
351 .enter = &intel_idle, 351 .enter = &intel_idle,
352 .enter_freeze = intel_idle_freeze, }, 352 .enter_s2idle = intel_idle_s2idle, },
353 { 353 {
354 .name = "C1E", 354 .name = "C1E",
355 .desc = "MWAIT 0x01", 355 .desc = "MWAIT 0x01",
@@ -357,7 +357,7 @@ static struct cpuidle_state ivt_cstates[] = {
357 .exit_latency = 10, 357 .exit_latency = 10,
358 .target_residency = 80, 358 .target_residency = 80,
359 .enter = &intel_idle, 359 .enter = &intel_idle,
360 .enter_freeze = intel_idle_freeze, }, 360 .enter_s2idle = intel_idle_s2idle, },
361 { 361 {
362 .name = "C3", 362 .name = "C3",
363 .desc = "MWAIT 0x10", 363 .desc = "MWAIT 0x10",
@@ -365,7 +365,7 @@ static struct cpuidle_state ivt_cstates[] = {
365 .exit_latency = 59, 365 .exit_latency = 59,
366 .target_residency = 156, 366 .target_residency = 156,
367 .enter = &intel_idle, 367 .enter = &intel_idle,
368 .enter_freeze = intel_idle_freeze, }, 368 .enter_s2idle = intel_idle_s2idle, },
369 { 369 {
370 .name = "C6", 370 .name = "C6",
371 .desc = "MWAIT 0x20", 371 .desc = "MWAIT 0x20",
@@ -373,7 +373,7 @@ static struct cpuidle_state ivt_cstates[] = {
373 .exit_latency = 82, 373 .exit_latency = 82,
374 .target_residency = 300, 374 .target_residency = 300,
375 .enter = &intel_idle, 375 .enter = &intel_idle,
376 .enter_freeze = intel_idle_freeze, }, 376 .enter_s2idle = intel_idle_s2idle, },
377 { 377 {
378 .enter = NULL } 378 .enter = NULL }
379}; 379};
@@ -386,7 +386,7 @@ static struct cpuidle_state ivt_cstates_4s[] = {
386 .exit_latency = 1, 386 .exit_latency = 1,
387 .target_residency = 1, 387 .target_residency = 1,
388 .enter = &intel_idle, 388 .enter = &intel_idle,
389 .enter_freeze = intel_idle_freeze, }, 389 .enter_s2idle = intel_idle_s2idle, },
390 { 390 {
391 .name = "C1E", 391 .name = "C1E",
392 .desc = "MWAIT 0x01", 392 .desc = "MWAIT 0x01",
@@ -394,7 +394,7 @@ static struct cpuidle_state ivt_cstates_4s[] = {
394 .exit_latency = 10, 394 .exit_latency = 10,
395 .target_residency = 250, 395 .target_residency = 250,
396 .enter = &intel_idle, 396 .enter = &intel_idle,
397 .enter_freeze = intel_idle_freeze, }, 397 .enter_s2idle = intel_idle_s2idle, },
398 { 398 {
399 .name = "C3", 399 .name = "C3",
400 .desc = "MWAIT 0x10", 400 .desc = "MWAIT 0x10",
@@ -402,7 +402,7 @@ static struct cpuidle_state ivt_cstates_4s[] = {
402 .exit_latency = 59, 402 .exit_latency = 59,
403 .target_residency = 300, 403 .target_residency = 300,
404 .enter = &intel_idle, 404 .enter = &intel_idle,
405 .enter_freeze = intel_idle_freeze, }, 405 .enter_s2idle = intel_idle_s2idle, },
406 { 406 {
407 .name = "C6", 407 .name = "C6",
408 .desc = "MWAIT 0x20", 408 .desc = "MWAIT 0x20",
@@ -410,7 +410,7 @@ static struct cpuidle_state ivt_cstates_4s[] = {
410 .exit_latency = 84, 410 .exit_latency = 84,
411 .target_residency = 400, 411 .target_residency = 400,
412 .enter = &intel_idle, 412 .enter = &intel_idle,
413 .enter_freeze = intel_idle_freeze, }, 413 .enter_s2idle = intel_idle_s2idle, },
414 { 414 {
415 .enter = NULL } 415 .enter = NULL }
416}; 416};
@@ -423,7 +423,7 @@ static struct cpuidle_state ivt_cstates_8s[] = {
423 .exit_latency = 1, 423 .exit_latency = 1,
424 .target_residency = 1, 424 .target_residency = 1,
425 .enter = &intel_idle, 425 .enter = &intel_idle,
426 .enter_freeze = intel_idle_freeze, }, 426 .enter_s2idle = intel_idle_s2idle, },
427 { 427 {
428 .name = "C1E", 428 .name = "C1E",
429 .desc = "MWAIT 0x01", 429 .desc = "MWAIT 0x01",
@@ -431,7 +431,7 @@ static struct cpuidle_state ivt_cstates_8s[] = {
431 .exit_latency = 10, 431 .exit_latency = 10,
432 .target_residency = 500, 432 .target_residency = 500,
433 .enter = &intel_idle, 433 .enter = &intel_idle,
434 .enter_freeze = intel_idle_freeze, }, 434 .enter_s2idle = intel_idle_s2idle, },
435 { 435 {
436 .name = "C3", 436 .name = "C3",
437 .desc = "MWAIT 0x10", 437 .desc = "MWAIT 0x10",
@@ -439,7 +439,7 @@ static struct cpuidle_state ivt_cstates_8s[] = {
439 .exit_latency = 59, 439 .exit_latency = 59,
440 .target_residency = 600, 440 .target_residency = 600,
441 .enter = &intel_idle, 441 .enter = &intel_idle,
442 .enter_freeze = intel_idle_freeze, }, 442 .enter_s2idle = intel_idle_s2idle, },
443 { 443 {
444 .name = "C6", 444 .name = "C6",
445 .desc = "MWAIT 0x20", 445 .desc = "MWAIT 0x20",
@@ -447,7 +447,7 @@ static struct cpuidle_state ivt_cstates_8s[] = {
447 .exit_latency = 88, 447 .exit_latency = 88,
448 .target_residency = 700, 448 .target_residency = 700,
449 .enter = &intel_idle, 449 .enter = &intel_idle,
450 .enter_freeze = intel_idle_freeze, }, 450 .enter_s2idle = intel_idle_s2idle, },
451 { 451 {
452 .enter = NULL } 452 .enter = NULL }
453}; 453};
@@ -460,7 +460,7 @@ static struct cpuidle_state hsw_cstates[] = {
460 .exit_latency = 2, 460 .exit_latency = 2,
461 .target_residency = 2, 461 .target_residency = 2,
462 .enter = &intel_idle, 462 .enter = &intel_idle,
463 .enter_freeze = intel_idle_freeze, }, 463 .enter_s2idle = intel_idle_s2idle, },
464 { 464 {
465 .name = "C1E", 465 .name = "C1E",
466 .desc = "MWAIT 0x01", 466 .desc = "MWAIT 0x01",
@@ -468,7 +468,7 @@ static struct cpuidle_state hsw_cstates[] = {
468 .exit_latency = 10, 468 .exit_latency = 10,
469 .target_residency = 20, 469 .target_residency = 20,
470 .enter = &intel_idle, 470 .enter = &intel_idle,
471 .enter_freeze = intel_idle_freeze, }, 471 .enter_s2idle = intel_idle_s2idle, },
472 { 472 {
473 .name = "C3", 473 .name = "C3",
474 .desc = "MWAIT 0x10", 474 .desc = "MWAIT 0x10",
@@ -476,7 +476,7 @@ static struct cpuidle_state hsw_cstates[] = {
476 .exit_latency = 33, 476 .exit_latency = 33,
477 .target_residency = 100, 477 .target_residency = 100,
478 .enter = &intel_idle, 478 .enter = &intel_idle,
479 .enter_freeze = intel_idle_freeze, }, 479 .enter_s2idle = intel_idle_s2idle, },
480 { 480 {
481 .name = "C6", 481 .name = "C6",
482 .desc = "MWAIT 0x20", 482 .desc = "MWAIT 0x20",
@@ -484,7 +484,7 @@ static struct cpuidle_state hsw_cstates[] = {
484 .exit_latency = 133, 484 .exit_latency = 133,
485 .target_residency = 400, 485 .target_residency = 400,
486 .enter = &intel_idle, 486 .enter = &intel_idle,
487 .enter_freeze = intel_idle_freeze, }, 487 .enter_s2idle = intel_idle_s2idle, },
488 { 488 {
489 .name = "C7s", 489 .name = "C7s",
490 .desc = "MWAIT 0x32", 490 .desc = "MWAIT 0x32",
@@ -492,7 +492,7 @@ static struct cpuidle_state hsw_cstates[] = {
492 .exit_latency = 166, 492 .exit_latency = 166,
493 .target_residency = 500, 493 .target_residency = 500,
494 .enter = &intel_idle, 494 .enter = &intel_idle,
495 .enter_freeze = intel_idle_freeze, }, 495 .enter_s2idle = intel_idle_s2idle, },
496 { 496 {
497 .name = "C8", 497 .name = "C8",
498 .desc = "MWAIT 0x40", 498 .desc = "MWAIT 0x40",
@@ -500,7 +500,7 @@ static struct cpuidle_state hsw_cstates[] = {
500 .exit_latency = 300, 500 .exit_latency = 300,
501 .target_residency = 900, 501 .target_residency = 900,
502 .enter = &intel_idle, 502 .enter = &intel_idle,
503 .enter_freeze = intel_idle_freeze, }, 503 .enter_s2idle = intel_idle_s2idle, },
504 { 504 {
505 .name = "C9", 505 .name = "C9",
506 .desc = "MWAIT 0x50", 506 .desc = "MWAIT 0x50",
@@ -508,7 +508,7 @@ static struct cpuidle_state hsw_cstates[] = {
508 .exit_latency = 600, 508 .exit_latency = 600,
509 .target_residency = 1800, 509 .target_residency = 1800,
510 .enter = &intel_idle, 510 .enter = &intel_idle,
511 .enter_freeze = intel_idle_freeze, }, 511 .enter_s2idle = intel_idle_s2idle, },
512 { 512 {
513 .name = "C10", 513 .name = "C10",
514 .desc = "MWAIT 0x60", 514 .desc = "MWAIT 0x60",
@@ -516,7 +516,7 @@ static struct cpuidle_state hsw_cstates[] = {
516 .exit_latency = 2600, 516 .exit_latency = 2600,
517 .target_residency = 7700, 517 .target_residency = 7700,
518 .enter = &intel_idle, 518 .enter = &intel_idle,
519 .enter_freeze = intel_idle_freeze, }, 519 .enter_s2idle = intel_idle_s2idle, },
520 { 520 {
521 .enter = NULL } 521 .enter = NULL }
522}; 522};
@@ -528,7 +528,7 @@ static struct cpuidle_state bdw_cstates[] = {
528 .exit_latency = 2, 528 .exit_latency = 2,
529 .target_residency = 2, 529 .target_residency = 2,
530 .enter = &intel_idle, 530 .enter = &intel_idle,
531 .enter_freeze = intel_idle_freeze, }, 531 .enter_s2idle = intel_idle_s2idle, },
532 { 532 {
533 .name = "C1E", 533 .name = "C1E",
534 .desc = "MWAIT 0x01", 534 .desc = "MWAIT 0x01",
@@ -536,7 +536,7 @@ static struct cpuidle_state bdw_cstates[] = {
536 .exit_latency = 10, 536 .exit_latency = 10,
537 .target_residency = 20, 537 .target_residency = 20,
538 .enter = &intel_idle, 538 .enter = &intel_idle,
539 .enter_freeze = intel_idle_freeze, }, 539 .enter_s2idle = intel_idle_s2idle, },
540 { 540 {
541 .name = "C3", 541 .name = "C3",
542 .desc = "MWAIT 0x10", 542 .desc = "MWAIT 0x10",
@@ -544,7 +544,7 @@ static struct cpuidle_state bdw_cstates[] = {
544 .exit_latency = 40, 544 .exit_latency = 40,
545 .target_residency = 100, 545 .target_residency = 100,
546 .enter = &intel_idle, 546 .enter = &intel_idle,
547 .enter_freeze = intel_idle_freeze, }, 547 .enter_s2idle = intel_idle_s2idle, },
548 { 548 {
549 .name = "C6", 549 .name = "C6",
550 .desc = "MWAIT 0x20", 550 .desc = "MWAIT 0x20",
@@ -552,7 +552,7 @@ static struct cpuidle_state bdw_cstates[] = {
552 .exit_latency = 133, 552 .exit_latency = 133,
553 .target_residency = 400, 553 .target_residency = 400,
554 .enter = &intel_idle, 554 .enter = &intel_idle,
555 .enter_freeze = intel_idle_freeze, }, 555 .enter_s2idle = intel_idle_s2idle, },
556 { 556 {
557 .name = "C7s", 557 .name = "C7s",
558 .desc = "MWAIT 0x32", 558 .desc = "MWAIT 0x32",
@@ -560,7 +560,7 @@ static struct cpuidle_state bdw_cstates[] = {
560 .exit_latency = 166, 560 .exit_latency = 166,
561 .target_residency = 500, 561 .target_residency = 500,
562 .enter = &intel_idle, 562 .enter = &intel_idle,
563 .enter_freeze = intel_idle_freeze, }, 563 .enter_s2idle = intel_idle_s2idle, },
564 { 564 {
565 .name = "C8", 565 .name = "C8",
566 .desc = "MWAIT 0x40", 566 .desc = "MWAIT 0x40",
@@ -568,7 +568,7 @@ static struct cpuidle_state bdw_cstates[] = {
568 .exit_latency = 300, 568 .exit_latency = 300,
569 .target_residency = 900, 569 .target_residency = 900,
570 .enter = &intel_idle, 570 .enter = &intel_idle,
571 .enter_freeze = intel_idle_freeze, }, 571 .enter_s2idle = intel_idle_s2idle, },
572 { 572 {
573 .name = "C9", 573 .name = "C9",
574 .desc = "MWAIT 0x50", 574 .desc = "MWAIT 0x50",
@@ -576,7 +576,7 @@ static struct cpuidle_state bdw_cstates[] = {
576 .exit_latency = 600, 576 .exit_latency = 600,
577 .target_residency = 1800, 577 .target_residency = 1800,
578 .enter = &intel_idle, 578 .enter = &intel_idle,
579 .enter_freeze = intel_idle_freeze, }, 579 .enter_s2idle = intel_idle_s2idle, },
580 { 580 {
581 .name = "C10", 581 .name = "C10",
582 .desc = "MWAIT 0x60", 582 .desc = "MWAIT 0x60",
@@ -584,7 +584,7 @@ static struct cpuidle_state bdw_cstates[] = {
584 .exit_latency = 2600, 584 .exit_latency = 2600,
585 .target_residency = 7700, 585 .target_residency = 7700,
586 .enter = &intel_idle, 586 .enter = &intel_idle,
587 .enter_freeze = intel_idle_freeze, }, 587 .enter_s2idle = intel_idle_s2idle, },
588 { 588 {
589 .enter = NULL } 589 .enter = NULL }
590}; 590};
@@ -597,7 +597,7 @@ static struct cpuidle_state skl_cstates[] = {
597 .exit_latency = 2, 597 .exit_latency = 2,
598 .target_residency = 2, 598 .target_residency = 2,
599 .enter = &intel_idle, 599 .enter = &intel_idle,
600 .enter_freeze = intel_idle_freeze, }, 600 .enter_s2idle = intel_idle_s2idle, },
601 { 601 {
602 .name = "C1E", 602 .name = "C1E",
603 .desc = "MWAIT 0x01", 603 .desc = "MWAIT 0x01",
@@ -605,7 +605,7 @@ static struct cpuidle_state skl_cstates[] = {
605 .exit_latency = 10, 605 .exit_latency = 10,
606 .target_residency = 20, 606 .target_residency = 20,
607 .enter = &intel_idle, 607 .enter = &intel_idle,
608 .enter_freeze = intel_idle_freeze, }, 608 .enter_s2idle = intel_idle_s2idle, },
609 { 609 {
610 .name = "C3", 610 .name = "C3",
611 .desc = "MWAIT 0x10", 611 .desc = "MWAIT 0x10",
@@ -613,7 +613,7 @@ static struct cpuidle_state skl_cstates[] = {
613 .exit_latency = 70, 613 .exit_latency = 70,
614 .target_residency = 100, 614 .target_residency = 100,
615 .enter = &intel_idle, 615 .enter = &intel_idle,
616 .enter_freeze = intel_idle_freeze, }, 616 .enter_s2idle = intel_idle_s2idle, },
617 { 617 {
618 .name = "C6", 618 .name = "C6",
619 .desc = "MWAIT 0x20", 619 .desc = "MWAIT 0x20",
@@ -621,7 +621,7 @@ static struct cpuidle_state skl_cstates[] = {
621 .exit_latency = 85, 621 .exit_latency = 85,
622 .target_residency = 200, 622 .target_residency = 200,
623 .enter = &intel_idle, 623 .enter = &intel_idle,
624 .enter_freeze = intel_idle_freeze, }, 624 .enter_s2idle = intel_idle_s2idle, },
625 { 625 {
626 .name = "C7s", 626 .name = "C7s",
627 .desc = "MWAIT 0x33", 627 .desc = "MWAIT 0x33",
@@ -629,7 +629,7 @@ static struct cpuidle_state skl_cstates[] = {
629 .exit_latency = 124, 629 .exit_latency = 124,
630 .target_residency = 800, 630 .target_residency = 800,
631 .enter = &intel_idle, 631 .enter = &intel_idle,
632 .enter_freeze = intel_idle_freeze, }, 632 .enter_s2idle = intel_idle_s2idle, },
633 { 633 {
634 .name = "C8", 634 .name = "C8",
635 .desc = "MWAIT 0x40", 635 .desc = "MWAIT 0x40",
@@ -637,7 +637,7 @@ static struct cpuidle_state skl_cstates[] = {
637 .exit_latency = 200, 637 .exit_latency = 200,
638 .target_residency = 800, 638 .target_residency = 800,
639 .enter = &intel_idle, 639 .enter = &intel_idle,
640 .enter_freeze = intel_idle_freeze, }, 640 .enter_s2idle = intel_idle_s2idle, },
641 { 641 {
642 .name = "C9", 642 .name = "C9",
643 .desc = "MWAIT 0x50", 643 .desc = "MWAIT 0x50",
@@ -645,7 +645,7 @@ static struct cpuidle_state skl_cstates[] = {
645 .exit_latency = 480, 645 .exit_latency = 480,
646 .target_residency = 5000, 646 .target_residency = 5000,
647 .enter = &intel_idle, 647 .enter = &intel_idle,
648 .enter_freeze = intel_idle_freeze, }, 648 .enter_s2idle = intel_idle_s2idle, },
649 { 649 {
650 .name = "C10", 650 .name = "C10",
651 .desc = "MWAIT 0x60", 651 .desc = "MWAIT 0x60",
@@ -653,7 +653,7 @@ static struct cpuidle_state skl_cstates[] = {
653 .exit_latency = 890, 653 .exit_latency = 890,
654 .target_residency = 5000, 654 .target_residency = 5000,
655 .enter = &intel_idle, 655 .enter = &intel_idle,
656 .enter_freeze = intel_idle_freeze, }, 656 .enter_s2idle = intel_idle_s2idle, },
657 { 657 {
658 .enter = NULL } 658 .enter = NULL }
659}; 659};
@@ -666,7 +666,7 @@ static struct cpuidle_state skx_cstates[] = {
666 .exit_latency = 2, 666 .exit_latency = 2,
667 .target_residency = 2, 667 .target_residency = 2,
668 .enter = &intel_idle, 668 .enter = &intel_idle,
669 .enter_freeze = intel_idle_freeze, }, 669 .enter_s2idle = intel_idle_s2idle, },
670 { 670 {
671 .name = "C1E", 671 .name = "C1E",
672 .desc = "MWAIT 0x01", 672 .desc = "MWAIT 0x01",
@@ -674,7 +674,7 @@ static struct cpuidle_state skx_cstates[] = {
674 .exit_latency = 10, 674 .exit_latency = 10,
675 .target_residency = 20, 675 .target_residency = 20,
676 .enter = &intel_idle, 676 .enter = &intel_idle,
677 .enter_freeze = intel_idle_freeze, }, 677 .enter_s2idle = intel_idle_s2idle, },
678 { 678 {
679 .name = "C6", 679 .name = "C6",
680 .desc = "MWAIT 0x20", 680 .desc = "MWAIT 0x20",
@@ -682,7 +682,7 @@ static struct cpuidle_state skx_cstates[] = {
682 .exit_latency = 133, 682 .exit_latency = 133,
683 .target_residency = 600, 683 .target_residency = 600,
684 .enter = &intel_idle, 684 .enter = &intel_idle,
685 .enter_freeze = intel_idle_freeze, }, 685 .enter_s2idle = intel_idle_s2idle, },
686 { 686 {
687 .enter = NULL } 687 .enter = NULL }
688}; 688};
@@ -695,7 +695,7 @@ static struct cpuidle_state atom_cstates[] = {
695 .exit_latency = 10, 695 .exit_latency = 10,
696 .target_residency = 20, 696 .target_residency = 20,
697 .enter = &intel_idle, 697 .enter = &intel_idle,
698 .enter_freeze = intel_idle_freeze, }, 698 .enter_s2idle = intel_idle_s2idle, },
699 { 699 {
700 .name = "C2", 700 .name = "C2",
701 .desc = "MWAIT 0x10", 701 .desc = "MWAIT 0x10",
@@ -703,7 +703,7 @@ static struct cpuidle_state atom_cstates[] = {
703 .exit_latency = 20, 703 .exit_latency = 20,
704 .target_residency = 80, 704 .target_residency = 80,
705 .enter = &intel_idle, 705 .enter = &intel_idle,
706 .enter_freeze = intel_idle_freeze, }, 706 .enter_s2idle = intel_idle_s2idle, },
707 { 707 {
708 .name = "C4", 708 .name = "C4",
709 .desc = "MWAIT 0x30", 709 .desc = "MWAIT 0x30",
@@ -711,7 +711,7 @@ static struct cpuidle_state atom_cstates[] = {
711 .exit_latency = 100, 711 .exit_latency = 100,
712 .target_residency = 400, 712 .target_residency = 400,
713 .enter = &intel_idle, 713 .enter = &intel_idle,
714 .enter_freeze = intel_idle_freeze, }, 714 .enter_s2idle = intel_idle_s2idle, },
715 { 715 {
716 .name = "C6", 716 .name = "C6",
717 .desc = "MWAIT 0x52", 717 .desc = "MWAIT 0x52",
@@ -719,7 +719,7 @@ static struct cpuidle_state atom_cstates[] = {
719 .exit_latency = 140, 719 .exit_latency = 140,
720 .target_residency = 560, 720 .target_residency = 560,
721 .enter = &intel_idle, 721 .enter = &intel_idle,
722 .enter_freeze = intel_idle_freeze, }, 722 .enter_s2idle = intel_idle_s2idle, },
723 { 723 {
724 .enter = NULL } 724 .enter = NULL }
725}; 725};
@@ -731,7 +731,7 @@ static struct cpuidle_state tangier_cstates[] = {
731 .exit_latency = 1, 731 .exit_latency = 1,
732 .target_residency = 4, 732 .target_residency = 4,
733 .enter = &intel_idle, 733 .enter = &intel_idle,
734 .enter_freeze = intel_idle_freeze, }, 734 .enter_s2idle = intel_idle_s2idle, },
735 { 735 {
736 .name = "C4", 736 .name = "C4",
737 .desc = "MWAIT 0x30", 737 .desc = "MWAIT 0x30",
@@ -739,7 +739,7 @@ static struct cpuidle_state tangier_cstates[] = {
739 .exit_latency = 100, 739 .exit_latency = 100,
740 .target_residency = 400, 740 .target_residency = 400,
741 .enter = &intel_idle, 741 .enter = &intel_idle,
742 .enter_freeze = intel_idle_freeze, }, 742 .enter_s2idle = intel_idle_s2idle, },
743 { 743 {
744 .name = "C6", 744 .name = "C6",
745 .desc = "MWAIT 0x52", 745 .desc = "MWAIT 0x52",
@@ -747,7 +747,7 @@ static struct cpuidle_state tangier_cstates[] = {
747 .exit_latency = 140, 747 .exit_latency = 140,
748 .target_residency = 560, 748 .target_residency = 560,
749 .enter = &intel_idle, 749 .enter = &intel_idle,
750 .enter_freeze = intel_idle_freeze, }, 750 .enter_s2idle = intel_idle_s2idle, },
751 { 751 {
752 .name = "C7", 752 .name = "C7",
753 .desc = "MWAIT 0x60", 753 .desc = "MWAIT 0x60",
@@ -755,7 +755,7 @@ static struct cpuidle_state tangier_cstates[] = {
755 .exit_latency = 1200, 755 .exit_latency = 1200,
756 .target_residency = 4000, 756 .target_residency = 4000,
757 .enter = &intel_idle, 757 .enter = &intel_idle,
758 .enter_freeze = intel_idle_freeze, }, 758 .enter_s2idle = intel_idle_s2idle, },
759 { 759 {
760 .name = "C9", 760 .name = "C9",
761 .desc = "MWAIT 0x64", 761 .desc = "MWAIT 0x64",
@@ -763,7 +763,7 @@ static struct cpuidle_state tangier_cstates[] = {
763 .exit_latency = 10000, 763 .exit_latency = 10000,
764 .target_residency = 20000, 764 .target_residency = 20000,
765 .enter = &intel_idle, 765 .enter = &intel_idle,
766 .enter_freeze = intel_idle_freeze, }, 766 .enter_s2idle = intel_idle_s2idle, },
767 { 767 {
768 .enter = NULL } 768 .enter = NULL }
769}; 769};
@@ -775,7 +775,7 @@ static struct cpuidle_state avn_cstates[] = {
775 .exit_latency = 2, 775 .exit_latency = 2,
776 .target_residency = 2, 776 .target_residency = 2,
777 .enter = &intel_idle, 777 .enter = &intel_idle,
778 .enter_freeze = intel_idle_freeze, }, 778 .enter_s2idle = intel_idle_s2idle, },
779 { 779 {
780 .name = "C6", 780 .name = "C6",
781 .desc = "MWAIT 0x51", 781 .desc = "MWAIT 0x51",
@@ -783,7 +783,7 @@ static struct cpuidle_state avn_cstates[] = {
783 .exit_latency = 15, 783 .exit_latency = 15,
784 .target_residency = 45, 784 .target_residency = 45,
785 .enter = &intel_idle, 785 .enter = &intel_idle,
786 .enter_freeze = intel_idle_freeze, }, 786 .enter_s2idle = intel_idle_s2idle, },
787 { 787 {
788 .enter = NULL } 788 .enter = NULL }
789}; 789};
@@ -795,7 +795,7 @@ static struct cpuidle_state knl_cstates[] = {
795 .exit_latency = 1, 795 .exit_latency = 1,
796 .target_residency = 2, 796 .target_residency = 2,
797 .enter = &intel_idle, 797 .enter = &intel_idle,
798 .enter_freeze = intel_idle_freeze }, 798 .enter_s2idle = intel_idle_s2idle },
799 { 799 {
800 .name = "C6", 800 .name = "C6",
801 .desc = "MWAIT 0x10", 801 .desc = "MWAIT 0x10",
@@ -803,7 +803,7 @@ static struct cpuidle_state knl_cstates[] = {
803 .exit_latency = 120, 803 .exit_latency = 120,
804 .target_residency = 500, 804 .target_residency = 500,
805 .enter = &intel_idle, 805 .enter = &intel_idle,
806 .enter_freeze = intel_idle_freeze }, 806 .enter_s2idle = intel_idle_s2idle },
807 { 807 {
808 .enter = NULL } 808 .enter = NULL }
809}; 809};
@@ -816,7 +816,7 @@ static struct cpuidle_state bxt_cstates[] = {
816 .exit_latency = 2, 816 .exit_latency = 2,
817 .target_residency = 2, 817 .target_residency = 2,
818 .enter = &intel_idle, 818 .enter = &intel_idle,
819 .enter_freeze = intel_idle_freeze, }, 819 .enter_s2idle = intel_idle_s2idle, },
820 { 820 {
821 .name = "C1E", 821 .name = "C1E",
822 .desc = "MWAIT 0x01", 822 .desc = "MWAIT 0x01",
@@ -824,7 +824,7 @@ static struct cpuidle_state bxt_cstates[] = {
824 .exit_latency = 10, 824 .exit_latency = 10,
825 .target_residency = 20, 825 .target_residency = 20,
826 .enter = &intel_idle, 826 .enter = &intel_idle,
827 .enter_freeze = intel_idle_freeze, }, 827 .enter_s2idle = intel_idle_s2idle, },
828 { 828 {
829 .name = "C6", 829 .name = "C6",
830 .desc = "MWAIT 0x20", 830 .desc = "MWAIT 0x20",
@@ -832,7 +832,7 @@ static struct cpuidle_state bxt_cstates[] = {
832 .exit_latency = 133, 832 .exit_latency = 133,
833 .target_residency = 133, 833 .target_residency = 133,
834 .enter = &intel_idle, 834 .enter = &intel_idle,
835 .enter_freeze = intel_idle_freeze, }, 835 .enter_s2idle = intel_idle_s2idle, },
836 { 836 {
837 .name = "C7s", 837 .name = "C7s",
838 .desc = "MWAIT 0x31", 838 .desc = "MWAIT 0x31",
@@ -840,7 +840,7 @@ static struct cpuidle_state bxt_cstates[] = {
840 .exit_latency = 155, 840 .exit_latency = 155,
841 .target_residency = 155, 841 .target_residency = 155,
842 .enter = &intel_idle, 842 .enter = &intel_idle,
843 .enter_freeze = intel_idle_freeze, }, 843 .enter_s2idle = intel_idle_s2idle, },
844 { 844 {
845 .name = "C8", 845 .name = "C8",
846 .desc = "MWAIT 0x40", 846 .desc = "MWAIT 0x40",
@@ -848,7 +848,7 @@ static struct cpuidle_state bxt_cstates[] = {
848 .exit_latency = 1000, 848 .exit_latency = 1000,
849 .target_residency = 1000, 849 .target_residency = 1000,
850 .enter = &intel_idle, 850 .enter = &intel_idle,
851 .enter_freeze = intel_idle_freeze, }, 851 .enter_s2idle = intel_idle_s2idle, },
852 { 852 {
853 .name = "C9", 853 .name = "C9",
854 .desc = "MWAIT 0x50", 854 .desc = "MWAIT 0x50",
@@ -856,7 +856,7 @@ static struct cpuidle_state bxt_cstates[] = {
856 .exit_latency = 2000, 856 .exit_latency = 2000,
857 .target_residency = 2000, 857 .target_residency = 2000,
858 .enter = &intel_idle, 858 .enter = &intel_idle,
859 .enter_freeze = intel_idle_freeze, }, 859 .enter_s2idle = intel_idle_s2idle, },
860 { 860 {
861 .name = "C10", 861 .name = "C10",
862 .desc = "MWAIT 0x60", 862 .desc = "MWAIT 0x60",
@@ -864,7 +864,7 @@ static struct cpuidle_state bxt_cstates[] = {
864 .exit_latency = 10000, 864 .exit_latency = 10000,
865 .target_residency = 10000, 865 .target_residency = 10000,
866 .enter = &intel_idle, 866 .enter = &intel_idle,
867 .enter_freeze = intel_idle_freeze, }, 867 .enter_s2idle = intel_idle_s2idle, },
868 { 868 {
869 .enter = NULL } 869 .enter = NULL }
870}; 870};
@@ -877,7 +877,7 @@ static struct cpuidle_state dnv_cstates[] = {
877 .exit_latency = 2, 877 .exit_latency = 2,
878 .target_residency = 2, 878 .target_residency = 2,
879 .enter = &intel_idle, 879 .enter = &intel_idle,
880 .enter_freeze = intel_idle_freeze, }, 880 .enter_s2idle = intel_idle_s2idle, },
881 { 881 {
882 .name = "C1E", 882 .name = "C1E",
883 .desc = "MWAIT 0x01", 883 .desc = "MWAIT 0x01",
@@ -885,7 +885,7 @@ static struct cpuidle_state dnv_cstates[] = {
885 .exit_latency = 10, 885 .exit_latency = 10,
886 .target_residency = 20, 886 .target_residency = 20,
887 .enter = &intel_idle, 887 .enter = &intel_idle,
888 .enter_freeze = intel_idle_freeze, }, 888 .enter_s2idle = intel_idle_s2idle, },
889 { 889 {
890 .name = "C6", 890 .name = "C6",
891 .desc = "MWAIT 0x20", 891 .desc = "MWAIT 0x20",
@@ -893,7 +893,7 @@ static struct cpuidle_state dnv_cstates[] = {
893 .exit_latency = 50, 893 .exit_latency = 50,
894 .target_residency = 500, 894 .target_residency = 500,
895 .enter = &intel_idle, 895 .enter = &intel_idle,
896 .enter_freeze = intel_idle_freeze, }, 896 .enter_s2idle = intel_idle_s2idle, },
897 { 897 {
898 .enter = NULL } 898 .enter = NULL }
899}; 899};
@@ -935,12 +935,12 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
935} 935}
936 936
937/** 937/**
938 * intel_idle_freeze - simplified "enter" callback routine for suspend-to-idle 938 * intel_idle_s2idle - simplified "enter" callback routine for suspend-to-idle
939 * @dev: cpuidle_device 939 * @dev: cpuidle_device
940 * @drv: cpuidle driver 940 * @drv: cpuidle driver
941 * @index: state index 941 * @index: state index
942 */ 942 */
943static void intel_idle_freeze(struct cpuidle_device *dev, 943static void intel_idle_s2idle(struct cpuidle_device *dev,
944 struct cpuidle_driver *drv, int index) 944 struct cpuidle_driver *drv, int index)
945{ 945{
946 unsigned long ecx = 1; /* break on interrupt flag */ 946 unsigned long ecx = 1; /* break on interrupt flag */
@@ -1330,13 +1330,14 @@ static void __init intel_idle_cpuidle_driver_init(void)
1330 1330
1331 intel_idle_state_table_update(); 1331 intel_idle_state_table_update();
1332 1332
1333 cpuidle_poll_state_init(drv);
1333 drv->state_count = 1; 1334 drv->state_count = 1;
1334 1335
1335 for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) { 1336 for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) {
1336 int num_substates, mwait_hint, mwait_cstate; 1337 int num_substates, mwait_hint, mwait_cstate;
1337 1338
1338 if ((cpuidle_state_table[cstate].enter == NULL) && 1339 if ((cpuidle_state_table[cstate].enter == NULL) &&
1339 (cpuidle_state_table[cstate].enter_freeze == NULL)) 1340 (cpuidle_state_table[cstate].enter_s2idle == NULL))
1340 break; 1341 break;
1341 1342
1342 if (cstate + 1 > max_cstate) { 1343 if (cstate + 1 > max_cstate) {
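The intel_idle hunks above do two things: they rename the suspend-to-idle callback from enter_freeze to enter_s2idle across the C-state tables, and they register a generic polling state via cpuidle_poll_state_init(drv) before the MWAIT states are enumerated, with the init loop walking the table until it hits an entry whose enter and enter_s2idle callbacks are both NULL. The stand-alone C sketch below mirrors only that table walk; the state names, latencies and helpers are illustrative and not taken from the driver.

#include <stdio.h>

struct demo_state {
        const char *name;
        unsigned int exit_latency_us;
        void (*enter)(int index);
        void (*enter_s2idle)(int index);
};

static void demo_enter(int index)        { printf("enter state %d\n", index); }
static void demo_enter_s2idle(int index) { printf("s2idle state %d\n", index); }

static const struct demo_state table[] = {
        { "C1",  2,     demo_enter, demo_enter_s2idle },
        { "C6",  133,   demo_enter, demo_enter_s2idle },
        { "C10", 10000, demo_enter, demo_enter_s2idle },
        { NULL,  0,     NULL,       NULL },    /* sentinel, like { .enter = NULL } */
};

int main(void)
{
        /* Stop at the first entry with neither callback, as the init loop does. */
        for (int i = 0; table[i].enter || table[i].enter_s2idle; i++)
                printf("state %d: %s, exit latency %u us\n",
                       i, table[i].name, table[i].exit_latency_us);
        return 0;
}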
diff --git a/drivers/mfd/db8500-prcmu.c b/drivers/mfd/db8500-prcmu.c
index 5c739ac752e8..5970b8def548 100644
--- a/drivers/mfd/db8500-prcmu.c
+++ b/drivers/mfd/db8500-prcmu.c
@@ -33,7 +33,6 @@
33#include <linux/mfd/abx500/ab8500.h> 33#include <linux/mfd/abx500/ab8500.h>
34#include <linux/regulator/db8500-prcmu.h> 34#include <linux/regulator/db8500-prcmu.h>
35#include <linux/regulator/machine.h> 35#include <linux/regulator/machine.h>
36#include <linux/cpufreq.h>
37#include <linux/platform_data/ux500_wdt.h> 36#include <linux/platform_data/ux500_wdt.h>
38#include <linux/platform_data/db8500_thermal.h> 37#include <linux/platform_data/db8500_thermal.h>
39#include "dbx500-prcmu-regs.h" 38#include "dbx500-prcmu-regs.h"
@@ -1692,32 +1691,27 @@ static long round_clock_rate(u8 clock, unsigned long rate)
1692 return rounded_rate; 1691 return rounded_rate;
1693} 1692}
1694 1693
1695/* CPU FREQ table, may be changed due to if MAX_OPP is supported. */ 1694static const unsigned long armss_freqs[] = {
1696static struct cpufreq_frequency_table db8500_cpufreq_table[] = { 1695 200000000,
1697 { .frequency = 200000, .driver_data = ARM_EXTCLK,}, 1696 400000000,
1698 { .frequency = 400000, .driver_data = ARM_50_OPP,}, 1697 800000000,
1699 { .frequency = 800000, .driver_data = ARM_100_OPP,}, 1698 998400000
1700 { .frequency = CPUFREQ_TABLE_END,}, /* To be used for MAX_OPP. */
1701 { .frequency = CPUFREQ_TABLE_END,},
1702}; 1699};
1703 1700
1704static long round_armss_rate(unsigned long rate) 1701static long round_armss_rate(unsigned long rate)
1705{ 1702{
1706 struct cpufreq_frequency_table *pos; 1703 unsigned long freq = 0;
1707 long freq = 0; 1704 int i;
1708
1709 /* cpufreq table frequencies is in KHz. */
1710 rate = rate / 1000;
1711 1705
1712 /* Find the corresponding arm opp from the cpufreq table. */ 1706 /* Find the corresponding arm opp from the cpufreq table. */
1713 cpufreq_for_each_entry(pos, db8500_cpufreq_table) { 1707 for (i = 0; i < ARRAY_SIZE(armss_freqs); i++) {
1714 freq = pos->frequency; 1708 freq = armss_freqs[i];
1715 if (freq == rate) 1709 if (rate <= freq)
1716 break; 1710 break;
1717 } 1711 }
1718 1712
1719 /* Return the last valid value, even if a match was not found. */ 1713 /* Return the last valid value, even if a match was not found. */
1720 return freq * 1000; 1714 return freq;
1721} 1715}
1722 1716
1723#define MIN_PLL_VCO_RATE 600000000ULL 1717#define MIN_PLL_VCO_RATE 600000000ULL
@@ -1854,21 +1848,23 @@ static void set_clock_rate(u8 clock, unsigned long rate)
1854 1848
1855static int set_armss_rate(unsigned long rate) 1849static int set_armss_rate(unsigned long rate)
1856{ 1850{
1857 struct cpufreq_frequency_table *pos; 1851 unsigned long freq;
1858 1852 u8 opps[] = { ARM_EXTCLK, ARM_50_OPP, ARM_100_OPP, ARM_MAX_OPP };
1859 /* cpufreq table frequencies is in KHz. */ 1853 int i;
1860 rate = rate / 1000;
1861 1854
1862 /* Find the corresponding arm opp from the cpufreq table. */ 1855 /* Find the corresponding arm opp from the cpufreq table. */
1863 cpufreq_for_each_entry(pos, db8500_cpufreq_table) 1856 for (i = 0; i < ARRAY_SIZE(armss_freqs); i++) {
1864 if (pos->frequency == rate) 1857 freq = armss_freqs[i];
1858 if (rate == freq)
1865 break; 1859 break;
1860 }
1866 1861
1867 if (pos->frequency != rate) 1862 if (rate != freq)
1868 return -EINVAL; 1863 return -EINVAL;
1869 1864
1870 /* Set the new arm opp. */ 1865 /* Set the new arm opp. */
1871 return db8500_prcmu_set_arm_opp(pos->driver_data); 1866 pr_debug("SET ARM OPP 0x%02x\n", opps[i]);
1867 return db8500_prcmu_set_arm_opp(opps[i]);
1872} 1868}
1873 1869
1874static int set_plldsi_rate(unsigned long rate) 1870static int set_plldsi_rate(unsigned long rate)
@@ -3049,12 +3045,6 @@ static const struct mfd_cell db8500_prcmu_devs[] = {
3049 .pdata_size = sizeof(db8500_regulators), 3045 .pdata_size = sizeof(db8500_regulators),
3050 }, 3046 },
3051 { 3047 {
3052 .name = "cpufreq-ux500",
3053 .of_compatible = "stericsson,cpufreq-ux500",
3054 .platform_data = &db8500_cpufreq_table,
3055 .pdata_size = sizeof(db8500_cpufreq_table),
3056 },
3057 {
3058 .name = "cpuidle-dbx500", 3048 .name = "cpuidle-dbx500",
3059 .of_compatible = "stericsson,cpuidle-dbx500", 3049 .of_compatible = "stericsson,cpuidle-dbx500",
3060 }, 3050 },
@@ -3067,14 +3057,6 @@ static const struct mfd_cell db8500_prcmu_devs[] = {
3067 }, 3057 },
3068}; 3058};
3069 3059
3070static void db8500_prcmu_update_cpufreq(void)
3071{
3072 if (prcmu_has_arm_maxopp()) {
3073 db8500_cpufreq_table[3].frequency = 1000000;
3074 db8500_cpufreq_table[3].driver_data = ARM_MAX_OPP;
3075 }
3076}
3077
3078static int db8500_prcmu_register_ab8500(struct device *parent) 3060static int db8500_prcmu_register_ab8500(struct device *parent)
3079{ 3061{
3080 struct device_node *np; 3062 struct device_node *np;
@@ -3160,8 +3142,6 @@ static int db8500_prcmu_probe(struct platform_device *pdev)
3160 3142
3161 prcmu_config_esram0_deep_sleep(ESRAM0_DEEP_SLEEP_STATE_RET); 3143 prcmu_config_esram0_deep_sleep(ESRAM0_DEEP_SLEEP_STATE_RET);
3162 3144
3163 db8500_prcmu_update_cpufreq();
3164
3165 err = mfd_add_devices(&pdev->dev, 0, common_prcmu_devs, 3145 err = mfd_add_devices(&pdev->dev, 0, common_prcmu_devs,
3166 ARRAY_SIZE(common_prcmu_devs), NULL, 0, db8500_irq_domain); 3146 ARRAY_SIZE(common_prcmu_devs), NULL, 0, db8500_irq_domain);
3167 if (err) { 3147 if (err) {
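With the cpufreq-ux500 table gone, the db8500 PRCMU code above keeps a plain array of ARMSS frequencies in Hz: round_armss_rate() returns the first entry greater than or equal to the requested rate (falling back to the highest one), while set_armss_rate() demands an exact match and maps the matching index to an OPP code. A user-space sketch of that lookup follows; the OPP byte values are placeholders rather than the real ARM_*_OPP constants.

#include <stdio.h>

static const unsigned long freqs[] = { 200000000, 400000000, 800000000, 998400000 };
static const unsigned char opps[]  = { 0x07, 0x02, 0x01, 0x00 };  /* placeholder codes */
#define NFREQ (sizeof(freqs) / sizeof(freqs[0]))

/* Round up to the first table frequency >= rate; the last entry acts as the cap. */
static unsigned long round_rate(unsigned long rate)
{
        unsigned long freq = 0;

        for (size_t i = 0; i < NFREQ; i++) {
                freq = freqs[i];
                if (rate <= freq)
                        break;
        }
        return freq;
}

/* Setting the OPP requires an exact match, as in set_armss_rate(). */
static int set_rate(unsigned long rate)
{
        for (size_t i = 0; i < NFREQ; i++) {
                if (rate == freqs[i]) {
                        printf("SET ARM OPP 0x%02x\n", opps[i]);
                        return 0;
                }
        }
        return -1;      /* -EINVAL in the driver */
}

int main(void)
{
        printf("round(300 MHz) -> %lu Hz\n", round_rate(300000000));
        printf("round(2 GHz)   -> %lu Hz\n", round_rate(2000000000UL));
        return set_rate(round_rate(300000000));
}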
diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
index 8519e0f97bdd..a782c78e7c63 100644
--- a/drivers/platform/x86/intel-hid.c
+++ b/drivers/platform/x86/intel-hid.c
@@ -203,15 +203,26 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
203 acpi_status status; 203 acpi_status status;
204 204
205 if (priv->wakeup_mode) { 205 if (priv->wakeup_mode) {
206 /*
207 * Needed for wakeup from suspend-to-idle to work on some
208 * platforms that don't expose the 5-button array, but still
209 * send notifies with the power button event code to this
210 * device object on power button actions while suspended.
211 */
212 if (event == 0xce)
213 goto wakeup;
214
206 /* Wake up on 5-button array events only. */ 215 /* Wake up on 5-button array events only. */
207 if (event == 0xc0 || !priv->array) 216 if (event == 0xc0 || !priv->array)
208 return; 217 return;
209 218
210 if (sparse_keymap_entry_from_scancode(priv->array, event)) 219 if (!sparse_keymap_entry_from_scancode(priv->array, event)) {
211 pm_wakeup_hard_event(&device->dev);
212 else
213 dev_info(&device->dev, "unknown event 0x%x\n", event); 220 dev_info(&device->dev, "unknown event 0x%x\n", event);
221 return;
222 }
214 223
224wakeup:
225 pm_wakeup_hard_event(&device->dev);
215 return; 226 return;
216 } 227 }
217 228
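The intel-hid change above reworks the wakeup-mode dispatch: notify 0xce (a power-button event that some platforms send even without exposing the 5-button array) always leads to pm_wakeup_hard_event(), unknown array scancodes are logged and dropped, and recognised array events fall through to the same wakeup call. A compact sketch of that decision logic, with a purely illustrative scancode range standing in for the sparse-keymap lookup:

#include <stdbool.h>
#include <stdio.h>

#define EV_POWER_BUTTON 0xce    /* the notify handled before the array check */
#define EV_ARRAY_FIRST  0xc1    /* illustrative range, not the real keymap */
#define EV_ARRAY_LAST   0xc5

static bool known_array_event(unsigned int event)
{
        return event >= EV_ARRAY_FIRST && event <= EV_ARRAY_LAST;
}

/* Returns true when a wakeup would be signalled, mirroring the wakeup_mode branch. */
static bool wakeup_dispatch(unsigned int event, bool have_array)
{
        if (event == EV_POWER_BUTTON)
                return true;                    /* "goto wakeup" */
        if (event == 0xc0 || !have_array)
                return false;                   /* ignored */
        if (!known_array_event(event)) {
                printf("unknown event 0x%x\n", event);
                return false;
        }
        return true;                            /* 5-button array wakeup */
}

int main(void)
{
        printf("power button, no array: %d\n", wakeup_dispatch(0xce, false));
        printf("array event 0xc2:       %d\n", wakeup_dispatch(0xc2, true));
        printf("event 0xc0:             %d\n", wakeup_dispatch(0xc0, true));
        return 0;
}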
diff --git a/drivers/power/avs/rockchip-io-domain.c b/drivers/power/avs/rockchip-io-domain.c
index 031a34372191..75f63e38a8d1 100644
--- a/drivers/power/avs/rockchip-io-domain.c
+++ b/drivers/power/avs/rockchip-io-domain.c
@@ -349,6 +349,36 @@ static const struct rockchip_iodomain_soc_data soc_data_rk3399_pmu = {
349 .init = rk3399_pmu_iodomain_init, 349 .init = rk3399_pmu_iodomain_init,
350}; 350};
351 351
352static const struct rockchip_iodomain_soc_data soc_data_rv1108 = {
353 .grf_offset = 0x404,
354 .supply_names = {
355 NULL,
356 NULL,
357 NULL,
358 NULL,
359 NULL,
360 NULL,
361 NULL,
362 NULL,
363 NULL,
364 NULL,
365 NULL,
366 "vccio1",
367 "vccio2",
368 "vccio3",
369 "vccio5",
370 "vccio6",
371 },
372
373};
374
375static const struct rockchip_iodomain_soc_data soc_data_rv1108_pmu = {
376 .grf_offset = 0x104,
377 .supply_names = {
378 "pmu",
379 },
380};
381
352static const struct of_device_id rockchip_iodomain_match[] = { 382static const struct of_device_id rockchip_iodomain_match[] = {
353 { 383 {
354 .compatible = "rockchip,rk3188-io-voltage-domain", 384 .compatible = "rockchip,rk3188-io-voltage-domain",
@@ -382,6 +412,14 @@ static const struct of_device_id rockchip_iodomain_match[] = {
382 .compatible = "rockchip,rk3399-pmu-io-voltage-domain", 412 .compatible = "rockchip,rk3399-pmu-io-voltage-domain",
383 .data = (void *)&soc_data_rk3399_pmu 413 .data = (void *)&soc_data_rk3399_pmu
384 }, 414 },
415 {
416 .compatible = "rockchip,rv1108-io-voltage-domain",
417 .data = (void *)&soc_data_rv1108
418 },
419 {
420 .compatible = "rockchip,rv1108-pmu-io-voltage-domain",
421 .data = (void *)&soc_data_rv1108_pmu
422 },
385 { /* sentinel */ }, 423 { /* sentinel */ },
386}; 424};
387MODULE_DEVICE_TABLE(of, rockchip_iodomain_match); 425MODULE_DEVICE_TABLE(of, rockchip_iodomain_match);
diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
index 9dd44dd4cdf6..14637a01ba2d 100644
--- a/drivers/regulator/of_regulator.c
+++ b/drivers/regulator/of_regulator.c
@@ -150,7 +150,7 @@ static void of_get_regulation_constraints(struct device_node *np,
150 suspend_state = &constraints->state_disk; 150 suspend_state = &constraints->state_disk;
151 break; 151 break;
152 case PM_SUSPEND_ON: 152 case PM_SUSPEND_ON:
153 case PM_SUSPEND_FREEZE: 153 case PM_SUSPEND_TO_IDLE:
154 case PM_SUSPEND_STANDBY: 154 case PM_SUSPEND_STANDBY:
155 default: 155 default:
156 continue; 156 continue;
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index f10a9b3761cd..537ff842ff73 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -127,6 +127,15 @@ struct cpufreq_policy {
127 */ 127 */
128 unsigned int transition_delay_us; 128 unsigned int transition_delay_us;
129 129
130 /*
131 * Remote DVFS flag (Not added to the driver structure as we don't want
132 * to access another structure from scheduler hotpath).
133 *
134 * Should be set if CPUs can do DVFS on behalf of other CPUs from
135 * different cpufreq policies.
136 */
137 bool dvfs_possible_from_any_cpu;
138
130 /* Cached frequency lookup from cpufreq_driver_resolve_freq. */ 139 /* Cached frequency lookup from cpufreq_driver_resolve_freq. */
131 unsigned int cached_target_freq; 140 unsigned int cached_target_freq;
132 int cached_resolved_idx; 141 int cached_resolved_idx;
@@ -370,6 +379,12 @@ struct cpufreq_driver {
370 */ 379 */
371#define CPUFREQ_NEED_INITIAL_FREQ_CHECK (1 << 5) 380#define CPUFREQ_NEED_INITIAL_FREQ_CHECK (1 << 5)
372 381
382/*
383 * Set by drivers to disallow use of governors with "dynamic_switching" flag
384 * set.
385 */
386#define CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING (1 << 6)
387
373int cpufreq_register_driver(struct cpufreq_driver *driver_data); 388int cpufreq_register_driver(struct cpufreq_driver *driver_data);
374int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); 389int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
375 390
@@ -487,14 +502,8 @@ static inline unsigned long cpufreq_scale(unsigned long old, u_int div,
487 * polling frequency is 1000 times the transition latency of the processor. The 502 * polling frequency is 1000 times the transition latency of the processor. The
488 * ondemand governor will work on any processor with transition latency <= 10ms, 503 * ondemand governor will work on any processor with transition latency <= 10ms,
489 * using appropriate sampling rate. 504 * using appropriate sampling rate.
490 *
491 * For CPUs with transition latency > 10ms (mostly drivers with CPUFREQ_ETERNAL)
492 * the ondemand governor will not work. All times here are in us (microseconds).
493 */ 505 */
494#define MIN_SAMPLING_RATE_RATIO (2)
495#define LATENCY_MULTIPLIER (1000) 506#define LATENCY_MULTIPLIER (1000)
496#define MIN_LATENCY_MULTIPLIER (20)
497#define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000)
498 507
499struct cpufreq_governor { 508struct cpufreq_governor {
500 char name[CPUFREQ_NAME_LEN]; 509 char name[CPUFREQ_NAME_LEN];
@@ -507,9 +516,8 @@ struct cpufreq_governor {
507 char *buf); 516 char *buf);
508 int (*store_setspeed) (struct cpufreq_policy *policy, 517 int (*store_setspeed) (struct cpufreq_policy *policy,
509 unsigned int freq); 518 unsigned int freq);
510 unsigned int max_transition_latency; /* HW must be able to switch to 519 /* For governors which change frequency dynamically by themselves */
511 next freq faster than this value in nano secs or we 520 bool dynamic_switching;
512 will fallback to performance governor */
513 struct list_head governor_list; 521 struct list_head governor_list;
514 struct module *owner; 522 struct module *owner;
515}; 523};
@@ -525,6 +533,7 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
525 unsigned int relation); 533 unsigned int relation);
526unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy, 534unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
527 unsigned int target_freq); 535 unsigned int target_freq);
536unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy);
528int cpufreq_register_governor(struct cpufreq_governor *governor); 537int cpufreq_register_governor(struct cpufreq_governor *governor);
529void cpufreq_unregister_governor(struct cpufreq_governor *governor); 538void cpufreq_unregister_governor(struct cpufreq_governor *governor);
530 539
@@ -562,6 +571,17 @@ struct governor_attr {
562 size_t count); 571 size_t count);
563}; 572};
564 573
574static inline bool cpufreq_can_do_remote_dvfs(struct cpufreq_policy *policy)
575{
576 /*
577 * Allow remote callbacks if:
578 * - dvfs_possible_from_any_cpu flag is set
579 * - the local and remote CPUs share cpufreq policy
580 */
581 return policy->dvfs_possible_from_any_cpu ||
582 cpumask_test_cpu(smp_processor_id(), policy->cpus);
583}
584
565/********************************************************************* 585/*********************************************************************
566 * FREQUENCY TABLE HELPERS * 586 * FREQUENCY TABLE HELPERS *
567 *********************************************************************/ 587 *********************************************************************/
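The new dvfs_possible_from_any_cpu flag and the cpufreq_can_do_remote_dvfs() helper above let a scheduler callback running on one CPU drive frequency updates for another: the update is allowed when the policy opts in explicitly or when the local CPU already belongs to the policy's cpumask. A toy version of that check, with a plain bitmask standing in for struct cpumask:

#include <stdbool.h>
#include <stdio.h>

struct toy_policy {
        unsigned long cpus;             /* bit n set => CPU n shares this policy */
        bool dvfs_possible_from_any_cpu;
};

/* Same decision as the inline helper, in user-space form. */
static bool can_do_remote_dvfs(const struct toy_policy *p, int this_cpu)
{
        return p->dvfs_possible_from_any_cpu || (p->cpus & (1UL << this_cpu));
}

int main(void)
{
        struct toy_policy shared = { .cpus = 0x3, .dvfs_possible_from_any_cpu = false };
        struct toy_policy remote = { .cpus = 0x3, .dvfs_possible_from_any_cpu = true  };

        printf("CPU1, shared policy: %d\n", can_do_remote_dvfs(&shared, 1)); /* 1 */
        printf("CPU4, shared policy: %d\n", can_do_remote_dvfs(&shared, 4)); /* 0 */
        printf("CPU4, opted in:      %d\n", can_do_remote_dvfs(&remote, 4)); /* 1 */
        return 0;
}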
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index fc1e5d7fc1c7..8f7788d23b57 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -52,17 +52,18 @@ struct cpuidle_state {
52 int (*enter_dead) (struct cpuidle_device *dev, int index); 52 int (*enter_dead) (struct cpuidle_device *dev, int index);
53 53
54 /* 54 /*
55 * CPUs execute ->enter_freeze with the local tick or entire timekeeping 55 * CPUs execute ->enter_s2idle with the local tick or entire timekeeping
56 * suspended, so it must not re-enable interrupts at any point (even 56 * suspended, so it must not re-enable interrupts at any point (even
57 * temporarily) or attempt to change states of clock event devices. 57 * temporarily) or attempt to change states of clock event devices.
58 */ 58 */
59 void (*enter_freeze) (struct cpuidle_device *dev, 59 void (*enter_s2idle) (struct cpuidle_device *dev,
60 struct cpuidle_driver *drv, 60 struct cpuidle_driver *drv,
61 int index); 61 int index);
62}; 62};
63 63
64/* Idle State Flags */ 64/* Idle State Flags */
65#define CPUIDLE_FLAG_NONE (0x00) 65#define CPUIDLE_FLAG_NONE (0x00)
66#define CPUIDLE_FLAG_POLLING (0x01) /* polling state */
66#define CPUIDLE_FLAG_COUPLED (0x02) /* state applies to multiple cpus */ 67#define CPUIDLE_FLAG_COUPLED (0x02) /* state applies to multiple cpus */
67#define CPUIDLE_FLAG_TIMER_STOP (0x04) /* timer is stopped on this state */ 68#define CPUIDLE_FLAG_TIMER_STOP (0x04) /* timer is stopped on this state */
68 69
@@ -197,14 +198,14 @@ static inline struct cpuidle_device *cpuidle_get_device(void) {return NULL; }
197#ifdef CONFIG_CPU_IDLE 198#ifdef CONFIG_CPU_IDLE
198extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 199extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
199 struct cpuidle_device *dev); 200 struct cpuidle_device *dev);
200extern int cpuidle_enter_freeze(struct cpuidle_driver *drv, 201extern int cpuidle_enter_s2idle(struct cpuidle_driver *drv,
201 struct cpuidle_device *dev); 202 struct cpuidle_device *dev);
202extern void cpuidle_use_deepest_state(bool enable); 203extern void cpuidle_use_deepest_state(bool enable);
203#else 204#else
204static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 205static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
205 struct cpuidle_device *dev) 206 struct cpuidle_device *dev)
206{return -ENODEV; } 207{return -ENODEV; }
207static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv, 208static inline int cpuidle_enter_s2idle(struct cpuidle_driver *drv,
208 struct cpuidle_device *dev) 209 struct cpuidle_device *dev)
209{return -ENODEV; } 210{return -ENODEV; }
210static inline void cpuidle_use_deepest_state(bool enable) 211static inline void cpuidle_use_deepest_state(bool enable)
@@ -224,6 +225,12 @@ static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev,
224} 225}
225#endif 226#endif
226 227
228#ifdef CONFIG_ARCH_HAS_CPU_RELAX
229void cpuidle_poll_state_init(struct cpuidle_driver *drv);
230#else
231static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
232#endif
233
227/****************************** 234/******************************
228 * CPUIDLE GOVERNOR INTERFACE * 235 * CPUIDLE GOVERNOR INTERFACE *
229 ******************************/ 236 ******************************/
@@ -250,12 +257,6 @@ static inline int cpuidle_register_governor(struct cpuidle_governor *gov)
250{return 0;} 257{return 0;}
251#endif 258#endif
252 259
253#ifdef CONFIG_ARCH_HAS_CPU_RELAX
254#define CPUIDLE_DRIVER_STATE_START 1
255#else
256#define CPUIDLE_DRIVER_STATE_START 0
257#endif
258
259#define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx) \ 260#define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx) \
260({ \ 261({ \
261 int __ret; \ 262 int __ret; \
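cpuidle_poll_state_init() replaces the old CPUIDLE_DRIVER_STATE_START offset: the header always provides the symbol, but it collapses to a static inline no-op when CONFIG_ARCH_HAS_CPU_RELAX is not set, so call sites need no #ifdef of their own. A minimal sketch of that conditional-stub pattern; the macro and structure names below are invented for the example.

#include <stdio.h>

struct toy_driver { int state_count; };

#ifdef HAS_POLL_STATE
static void poll_state_init(struct toy_driver *drv)
{
        /* Would install a polling state 0; here it only bumps the count. */
        drv->state_count = 1;
        printf("poll state installed\n");
}
#else
/* Compiled out: the call below costs nothing when the feature is absent. */
static inline void poll_state_init(struct toy_driver *drv) { (void)drv; }
#endif

int main(void)
{
        struct toy_driver drv = { 0 };

        poll_state_init(&drv);          /* unconditional call site */
        printf("state_count = %d\n", drv.state_count);
        return 0;
}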
diff --git a/include/linux/devfreq.h b/include/linux/devfreq.h
index 6c220e4ebb6b..597294e0cc40 100644
--- a/include/linux/devfreq.h
+++ b/include/linux/devfreq.h
@@ -214,19 +214,6 @@ extern void devm_devfreq_unregister_notifier(struct device *dev,
214extern struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, 214extern struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
215 int index); 215 int index);
216 216
217/**
218 * devfreq_update_stats() - update the last_status pointer in struct devfreq
219 * @df: the devfreq instance whose status needs updating
220 *
221 * Governors are recommended to use this function along with last_status,
222 * which allows other entities to reuse the last_status without affecting
223 * the values fetched later by governors.
224 */
225static inline int devfreq_update_stats(struct devfreq *df)
226{
227 return df->profile->get_dev_status(df->dev.parent, &df->last_status);
228}
229
230#if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND) 217#if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
231/** 218/**
232 * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq 219 * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq
diff --git a/include/linux/pm.h b/include/linux/pm.h
index b8b4df09fd8f..47ded8aa8a5d 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -689,6 +689,8 @@ struct dev_pm_domain {
689extern void device_pm_lock(void); 689extern void device_pm_lock(void);
690extern void dpm_resume_start(pm_message_t state); 690extern void dpm_resume_start(pm_message_t state);
691extern void dpm_resume_end(pm_message_t state); 691extern void dpm_resume_end(pm_message_t state);
692extern void dpm_noirq_resume_devices(pm_message_t state);
693extern void dpm_noirq_end(void);
692extern void dpm_resume_noirq(pm_message_t state); 694extern void dpm_resume_noirq(pm_message_t state);
693extern void dpm_resume_early(pm_message_t state); 695extern void dpm_resume_early(pm_message_t state);
694extern void dpm_resume(pm_message_t state); 696extern void dpm_resume(pm_message_t state);
@@ -697,6 +699,8 @@ extern void dpm_complete(pm_message_t state);
697extern void device_pm_unlock(void); 699extern void device_pm_unlock(void);
698extern int dpm_suspend_end(pm_message_t state); 700extern int dpm_suspend_end(pm_message_t state);
699extern int dpm_suspend_start(pm_message_t state); 701extern int dpm_suspend_start(pm_message_t state);
702extern void dpm_noirq_begin(void);
703extern int dpm_noirq_suspend_devices(pm_message_t state);
700extern int dpm_suspend_noirq(pm_message_t state); 704extern int dpm_suspend_noirq(pm_message_t state);
701extern int dpm_suspend_late(pm_message_t state); 705extern int dpm_suspend_late(pm_message_t state);
702extern int dpm_suspend(pm_message_t state); 706extern int dpm_suspend(pm_message_t state);
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 41004d97cefa..84f423d5633e 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -43,6 +43,7 @@ struct genpd_power_state {
43 s64 power_on_latency_ns; 43 s64 power_on_latency_ns;
44 s64 residency_ns; 44 s64 residency_ns;
45 struct fwnode_handle *fwnode; 45 struct fwnode_handle *fwnode;
46 ktime_t idle_time;
46}; 47};
47 48
48struct genpd_lock_ops; 49struct genpd_lock_ops;
@@ -78,6 +79,8 @@ struct generic_pm_domain {
78 unsigned int state_count; /* number of states */ 79 unsigned int state_count; /* number of states */
79 unsigned int state_idx; /* state that genpd will go to when off */ 80 unsigned int state_idx; /* state that genpd will go to when off */
80 void *free; /* Free the state that was allocated for default */ 81 void *free; /* Free the state that was allocated for default */
82 ktime_t on_time;
83 ktime_t accounting_time;
81 const struct genpd_lock_ops *lock_ops; 84 const struct genpd_lock_ops *lock_ops;
82 union { 85 union {
83 struct mutex mlock; 86 struct mutex mlock;
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 0b1cf32edfd7..d10b7980799d 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -33,10 +33,10 @@ static inline void pm_restore_console(void)
33typedef int __bitwise suspend_state_t; 33typedef int __bitwise suspend_state_t;
34 34
35#define PM_SUSPEND_ON ((__force suspend_state_t) 0) 35#define PM_SUSPEND_ON ((__force suspend_state_t) 0)
36#define PM_SUSPEND_FREEZE ((__force suspend_state_t) 1) 36#define PM_SUSPEND_TO_IDLE ((__force suspend_state_t) 1)
37#define PM_SUSPEND_STANDBY ((__force suspend_state_t) 2) 37#define PM_SUSPEND_STANDBY ((__force suspend_state_t) 2)
38#define PM_SUSPEND_MEM ((__force suspend_state_t) 3) 38#define PM_SUSPEND_MEM ((__force suspend_state_t) 3)
39#define PM_SUSPEND_MIN PM_SUSPEND_FREEZE 39#define PM_SUSPEND_MIN PM_SUSPEND_TO_IDLE
40#define PM_SUSPEND_MAX ((__force suspend_state_t) 4) 40#define PM_SUSPEND_MAX ((__force suspend_state_t) 4)
41 41
42enum suspend_stat_step { 42enum suspend_stat_step {
@@ -186,7 +186,7 @@ struct platform_suspend_ops {
186 void (*recover)(void); 186 void (*recover)(void);
187}; 187};
188 188
189struct platform_freeze_ops { 189struct platform_s2idle_ops {
190 int (*begin)(void); 190 int (*begin)(void);
191 int (*prepare)(void); 191 int (*prepare)(void);
192 void (*wake)(void); 192 void (*wake)(void);
@@ -196,6 +196,9 @@ struct platform_freeze_ops {
196}; 196};
197 197
198#ifdef CONFIG_SUSPEND 198#ifdef CONFIG_SUSPEND
199extern suspend_state_t mem_sleep_current;
200extern suspend_state_t mem_sleep_default;
201
199/** 202/**
200 * suspend_set_ops - set platform dependent suspend operations 203 * suspend_set_ops - set platform dependent suspend operations
201 * @ops: The new suspend operations to set. 204 * @ops: The new suspend operations to set.
@@ -234,22 +237,22 @@ static inline bool pm_resume_via_firmware(void)
234} 237}
235 238
236/* Suspend-to-idle state machnine. */ 239/* Suspend-to-idle state machnine. */
237enum freeze_state { 240enum s2idle_states {
238 FREEZE_STATE_NONE, /* Not suspended/suspending. */ 241 S2IDLE_STATE_NONE, /* Not suspended/suspending. */
239 FREEZE_STATE_ENTER, /* Enter suspend-to-idle. */ 242 S2IDLE_STATE_ENTER, /* Enter suspend-to-idle. */
240 FREEZE_STATE_WAKE, /* Wake up from suspend-to-idle. */ 243 S2IDLE_STATE_WAKE, /* Wake up from suspend-to-idle. */
241}; 244};
242 245
243extern enum freeze_state __read_mostly suspend_freeze_state; 246extern enum s2idle_states __read_mostly s2idle_state;
244 247
245static inline bool idle_should_freeze(void) 248static inline bool idle_should_enter_s2idle(void)
246{ 249{
247 return unlikely(suspend_freeze_state == FREEZE_STATE_ENTER); 250 return unlikely(s2idle_state == S2IDLE_STATE_ENTER);
248} 251}
249 252
250extern void __init pm_states_init(void); 253extern void __init pm_states_init(void);
251extern void freeze_set_ops(const struct platform_freeze_ops *ops); 254extern void s2idle_set_ops(const struct platform_s2idle_ops *ops);
252extern void freeze_wake(void); 255extern void s2idle_wake(void);
253 256
254/** 257/**
255 * arch_suspend_disable_irqs - disable IRQs for suspend 258 * arch_suspend_disable_irqs - disable IRQs for suspend
@@ -281,10 +284,10 @@ static inline bool pm_resume_via_firmware(void) { return false; }
281 284
282static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {} 285static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
283static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; } 286static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
284static inline bool idle_should_freeze(void) { return false; } 287static inline bool idle_should_enter_s2idle(void) { return false; }
285static inline void __init pm_states_init(void) {} 288static inline void __init pm_states_init(void) {}
286static inline void freeze_set_ops(const struct platform_freeze_ops *ops) {} 289static inline void s2idle_set_ops(const struct platform_s2idle_ops *ops) {}
287static inline void freeze_wake(void) {} 290static inline void s2idle_wake(void) {}
288#endif /* !CONFIG_SUSPEND */ 291#endif /* !CONFIG_SUSPEND */
289 292
290/* struct pbe is used for creating lists of pages that should be restored 293/* struct pbe is used for creating lists of pages that should be restored
@@ -427,6 +430,7 @@ extern int unregister_pm_notifier(struct notifier_block *nb);
427/* drivers/base/power/wakeup.c */ 430/* drivers/base/power/wakeup.c */
428extern bool events_check_enabled; 431extern bool events_check_enabled;
429extern unsigned int pm_wakeup_irq; 432extern unsigned int pm_wakeup_irq;
433extern suspend_state_t pm_suspend_target_state;
430 434
431extern bool pm_wakeup_pending(void); 435extern bool pm_wakeup_pending(void);
432extern void pm_system_wakeup(void); 436extern void pm_system_wakeup(void);
@@ -491,10 +495,24 @@ static inline void unlock_system_sleep(void) {}
491 495
492#ifdef CONFIG_PM_SLEEP_DEBUG 496#ifdef CONFIG_PM_SLEEP_DEBUG
493extern bool pm_print_times_enabled; 497extern bool pm_print_times_enabled;
498extern bool pm_debug_messages_on;
499extern __printf(2, 3) void __pm_pr_dbg(bool defer, const char *fmt, ...);
494#else 500#else
495#define pm_print_times_enabled (false) 501#define pm_print_times_enabled (false)
502#define pm_debug_messages_on (false)
503
504#include <linux/printk.h>
505
506#define __pm_pr_dbg(defer, fmt, ...) \
507 no_printk(KERN_DEBUG fmt, ##__VA_ARGS__)
496#endif 508#endif
497 509
510#define pm_pr_dbg(fmt, ...) \
511 __pm_pr_dbg(false, fmt, ##__VA_ARGS__)
512
513#define pm_deferred_pr_dbg(fmt, ...) \
514 __pm_pr_dbg(true, fmt, ##__VA_ARGS__)
515
498#ifdef CONFIG_PM_AUTOSLEEP 516#ifdef CONFIG_PM_AUTOSLEEP
499 517
500/* kernel/power/autosleep.c */ 518/* kernel/power/autosleep.c */
diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 009cc9a17d95..67b02e138a47 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -22,15 +22,21 @@
22#include <linux/spinlock.h> 22#include <linux/spinlock.h>
23#include <linux/syscore_ops.h> 23#include <linux/syscore_ops.h>
24 24
25static DEFINE_RWLOCK(cpu_pm_notifier_lock); 25static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
26static RAW_NOTIFIER_HEAD(cpu_pm_notifier_chain);
27 26
28static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls) 27static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
29{ 28{
30 int ret; 29 int ret;
31 30
32 ret = __raw_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL, 31 /*
32 * __atomic_notifier_call_chain has a RCU read critical section, which
33 * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
34 * RCU know this.
35 */
36 rcu_irq_enter_irqson();
37 ret = __atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
33 nr_to_call, nr_calls); 38 nr_to_call, nr_calls);
39 rcu_irq_exit_irqson();
34 40
35 return notifier_to_errno(ret); 41 return notifier_to_errno(ret);
36} 42}
@@ -47,14 +53,7 @@ static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
47 */ 53 */
48int cpu_pm_register_notifier(struct notifier_block *nb) 54int cpu_pm_register_notifier(struct notifier_block *nb)
49{ 55{
50 unsigned long flags; 56 return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
51 int ret;
52
53 write_lock_irqsave(&cpu_pm_notifier_lock, flags);
54 ret = raw_notifier_chain_register(&cpu_pm_notifier_chain, nb);
55 write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
56
57 return ret;
58} 57}
59EXPORT_SYMBOL_GPL(cpu_pm_register_notifier); 58EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
60 59
@@ -69,14 +68,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
69 */ 68 */
70int cpu_pm_unregister_notifier(struct notifier_block *nb) 69int cpu_pm_unregister_notifier(struct notifier_block *nb)
71{ 70{
72 unsigned long flags; 71 return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
73 int ret;
74
75 write_lock_irqsave(&cpu_pm_notifier_lock, flags);
76 ret = raw_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
77 write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
78
79 return ret;
80} 72}
81EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier); 73EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
82 74
@@ -100,7 +92,6 @@ int cpu_pm_enter(void)
100 int nr_calls; 92 int nr_calls;
101 int ret = 0; 93 int ret = 0;
102 94
103 read_lock(&cpu_pm_notifier_lock);
104 ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls); 95 ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
105 if (ret) 96 if (ret)
106 /* 97 /*
@@ -108,7 +99,6 @@ int cpu_pm_enter(void)
108 * PM entry who are notified earlier to prepare for it. 99 * PM entry who are notified earlier to prepare for it.
109 */ 100 */
110 cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL); 101 cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
111 read_unlock(&cpu_pm_notifier_lock);
112 102
113 return ret; 103 return ret;
114} 104}
@@ -128,13 +118,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
128 */ 118 */
129int cpu_pm_exit(void) 119int cpu_pm_exit(void)
130{ 120{
131 int ret; 121 return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
132
133 read_lock(&cpu_pm_notifier_lock);
134 ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
135 read_unlock(&cpu_pm_notifier_lock);
136
137 return ret;
138} 122}
139EXPORT_SYMBOL_GPL(cpu_pm_exit); 123EXPORT_SYMBOL_GPL(cpu_pm_exit);
140 124
@@ -159,7 +143,6 @@ int cpu_cluster_pm_enter(void)
159 int nr_calls; 143 int nr_calls;
160 int ret = 0; 144 int ret = 0;
161 145
162 read_lock(&cpu_pm_notifier_lock);
163 ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls); 146 ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
164 if (ret) 147 if (ret)
165 /* 148 /*
@@ -167,7 +150,6 @@ int cpu_cluster_pm_enter(void)
167 * PM entry who are notified earlier to prepare for it. 150 * PM entry who are notified earlier to prepare for it.
168 */ 151 */
169 cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL); 152 cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL);
170 read_unlock(&cpu_pm_notifier_lock);
171 153
172 return ret; 154 return ret;
173} 155}
@@ -190,13 +172,7 @@ EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter);
190 */ 172 */
191int cpu_cluster_pm_exit(void) 173int cpu_cluster_pm_exit(void)
192{ 174{
193 int ret; 175 return cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
194
195 read_lock(&cpu_pm_notifier_lock);
196 ret = cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
197 read_unlock(&cpu_pm_notifier_lock);
198
199 return ret;
200} 176}
201EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit); 177EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit);
202 178
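The cpu_pm rework above drops the rwlock around a raw notifier chain in favour of an atomic notifier chain, making registration a one-liner while keeping the partial-notification semantics of cpu_pm_notify(): callees are counted as they run, so a failure can be unwound by re-notifying only those that already saw CPU_PM_ENTER. A stand-alone sketch of that call-and-rollback pattern (no locking or RCU here, just the counting logic):

#include <stdio.h>

#define MAXCB 8

typedef int (*cb_t)(int event);
static cb_t chain[MAXCB];
static int nchain;

static void chain_register(cb_t cb)
{
        if (nchain < MAXCB)
                chain[nchain++] = cb;
}

/* Call up to nr_to_call callbacks (-1 means all); report how many ran. */
static int chain_notify(int event, int nr_to_call, int *nr_calls)
{
        for (int i = 0; i < nchain && nr_to_call--; i++) {
                if (nr_calls)
                        (*nr_calls)++;
                int ret = chain[i](event);
                if (ret)
                        return ret;
        }
        return 0;
}

static int ok_cb(int event)   { printf("cb ok,    event %d\n", event); return 0; }
static int fail_cb(int event) { printf("cb fails, event %d\n", event); return -1; }

int main(void)
{
        int nr_calls = 0;

        chain_register(ok_cb);
        chain_register(fail_cb);

        /* Like cpu_pm_enter(): on failure, notify only the ones that succeeded. */
        if (chain_notify(1 /* ENTER */, -1, &nr_calls))
                chain_notify(0 /* ENTER_FAILED */, nr_calls - 1, NULL);
        return 0;
}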
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index e1914c7b85b1..a5c36e9c56a6 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -651,7 +651,7 @@ static int load_image_and_restore(void)
651 int error; 651 int error;
652 unsigned int flags; 652 unsigned int flags;
653 653
654 pr_debug("Loading hibernation image.\n"); 654 pm_pr_dbg("Loading hibernation image.\n");
655 655
656 lock_device_hotplug(); 656 lock_device_hotplug();
657 error = create_basic_memory_bitmaps(); 657 error = create_basic_memory_bitmaps();
@@ -681,7 +681,7 @@ int hibernate(void)
681 bool snapshot_test = false; 681 bool snapshot_test = false;
682 682
683 if (!hibernation_available()) { 683 if (!hibernation_available()) {
684 pr_debug("Hibernation not available.\n"); 684 pm_pr_dbg("Hibernation not available.\n");
685 return -EPERM; 685 return -EPERM;
686 } 686 }
687 687
@@ -692,6 +692,7 @@ int hibernate(void)
692 goto Unlock; 692 goto Unlock;
693 } 693 }
694 694
695 pr_info("hibernation entry\n");
695 pm_prepare_console(); 696 pm_prepare_console();
696 error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); 697 error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
697 if (error) { 698 if (error) {
@@ -727,7 +728,7 @@ int hibernate(void)
727 else 728 else
728 flags |= SF_CRC32_MODE; 729 flags |= SF_CRC32_MODE;
729 730
730 pr_debug("Writing image.\n"); 731 pm_pr_dbg("Writing image.\n");
731 error = swsusp_write(flags); 732 error = swsusp_write(flags);
732 swsusp_free(); 733 swsusp_free();
733 if (!error) { 734 if (!error) {
@@ -739,7 +740,7 @@ int hibernate(void)
739 in_suspend = 0; 740 in_suspend = 0;
740 pm_restore_gfp_mask(); 741 pm_restore_gfp_mask();
741 } else { 742 } else {
742 pr_debug("Image restored successfully.\n"); 743 pm_pr_dbg("Image restored successfully.\n");
743 } 744 }
744 745
745 Free_bitmaps: 746 Free_bitmaps:
@@ -747,7 +748,7 @@ int hibernate(void)
747 Thaw: 748 Thaw:
748 unlock_device_hotplug(); 749 unlock_device_hotplug();
749 if (snapshot_test) { 750 if (snapshot_test) {
750 pr_debug("Checking hibernation image\n"); 751 pm_pr_dbg("Checking hibernation image\n");
751 error = swsusp_check(); 752 error = swsusp_check();
752 if (!error) 753 if (!error)
753 error = load_image_and_restore(); 754 error = load_image_and_restore();
@@ -762,6 +763,8 @@ int hibernate(void)
762 atomic_inc(&snapshot_device_available); 763 atomic_inc(&snapshot_device_available);
763 Unlock: 764 Unlock:
764 unlock_system_sleep(); 765 unlock_system_sleep();
766 pr_info("hibernation exit\n");
767
765 return error; 768 return error;
766} 769}
767 770
@@ -811,7 +814,7 @@ static int software_resume(void)
811 goto Unlock; 814 goto Unlock;
812 } 815 }
813 816
814 pr_debug("Checking hibernation image partition %s\n", resume_file); 817 pm_pr_dbg("Checking hibernation image partition %s\n", resume_file);
815 818
816 if (resume_delay) { 819 if (resume_delay) {
817 pr_info("Waiting %dsec before reading resume device ...\n", 820 pr_info("Waiting %dsec before reading resume device ...\n",
@@ -853,10 +856,10 @@ static int software_resume(void)
853 } 856 }
854 857
855 Check_image: 858 Check_image:
856 pr_debug("Hibernation image partition %d:%d present\n", 859 pm_pr_dbg("Hibernation image partition %d:%d present\n",
857 MAJOR(swsusp_resume_device), MINOR(swsusp_resume_device)); 860 MAJOR(swsusp_resume_device), MINOR(swsusp_resume_device));
858 861
859 pr_debug("Looking for hibernation image.\n"); 862 pm_pr_dbg("Looking for hibernation image.\n");
860 error = swsusp_check(); 863 error = swsusp_check();
861 if (error) 864 if (error)
862 goto Unlock; 865 goto Unlock;
@@ -868,6 +871,7 @@ static int software_resume(void)
868 goto Unlock; 871 goto Unlock;
869 } 872 }
870 873
874 pr_info("resume from hibernation\n");
871 pm_prepare_console(); 875 pm_prepare_console();
872 error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls); 876 error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
873 if (error) { 877 if (error) {
@@ -875,7 +879,7 @@ static int software_resume(void)
875 goto Close_Finish; 879 goto Close_Finish;
876 } 880 }
877 881
878 pr_debug("Preparing processes for restore.\n"); 882 pm_pr_dbg("Preparing processes for restore.\n");
879 error = freeze_processes(); 883 error = freeze_processes();
880 if (error) 884 if (error)
881 goto Close_Finish; 885 goto Close_Finish;
@@ -884,11 +888,12 @@ static int software_resume(void)
884 Finish: 888 Finish:
885 __pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL); 889 __pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
886 pm_restore_console(); 890 pm_restore_console();
891 pr_info("resume from hibernation failed (%d)\n", error);
887 atomic_inc(&snapshot_device_available); 892 atomic_inc(&snapshot_device_available);
888 /* For success case, the suspend path will release the lock */ 893 /* For success case, the suspend path will release the lock */
889 Unlock: 894 Unlock:
890 mutex_unlock(&pm_mutex); 895 mutex_unlock(&pm_mutex);
891 pr_debug("Hibernation image not present or could not be loaded.\n"); 896 pm_pr_dbg("Hibernation image not present or could not be loaded.\n");
892 return error; 897 return error;
893 Close_Finish: 898 Close_Finish:
894 swsusp_close(FMODE_READ); 899 swsusp_close(FMODE_READ);
@@ -1012,8 +1017,8 @@ static ssize_t disk_store(struct kobject *kobj, struct kobj_attribute *attr,
1012 error = -EINVAL; 1017 error = -EINVAL;
1013 1018
1014 if (!error) 1019 if (!error)
1015 pr_debug("Hibernation mode set to '%s'\n", 1020 pm_pr_dbg("Hibernation mode set to '%s'\n",
1016 hibernation_modes[mode]); 1021 hibernation_modes[mode]);
1017 unlock_system_sleep(); 1022 unlock_system_sleep();
1018 return error ? error : n; 1023 return error ? error : n;
1019} 1024}
diff --git a/kernel/power/main.c b/kernel/power/main.c
index 42bd800a6755..3a2ca9066583 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -150,7 +150,7 @@ static ssize_t mem_sleep_store(struct kobject *kobj, struct kobj_attribute *attr
150power_attr(mem_sleep); 150power_attr(mem_sleep);
151#endif /* CONFIG_SUSPEND */ 151#endif /* CONFIG_SUSPEND */
152 152
153#ifdef CONFIG_PM_DEBUG 153#ifdef CONFIG_PM_SLEEP_DEBUG
154int pm_test_level = TEST_NONE; 154int pm_test_level = TEST_NONE;
155 155
156static const char * const pm_tests[__TEST_AFTER_LAST] = { 156static const char * const pm_tests[__TEST_AFTER_LAST] = {
@@ -211,7 +211,7 @@ static ssize_t pm_test_store(struct kobject *kobj, struct kobj_attribute *attr,
211} 211}
212 212
213power_attr(pm_test); 213power_attr(pm_test);
214#endif /* CONFIG_PM_DEBUG */ 214#endif /* CONFIG_PM_SLEEP_DEBUG */
215 215
216#ifdef CONFIG_DEBUG_FS 216#ifdef CONFIG_DEBUG_FS
217static char *suspend_step_name(enum suspend_stat_step step) 217static char *suspend_step_name(enum suspend_stat_step step)
@@ -361,6 +361,61 @@ static ssize_t pm_wakeup_irq_show(struct kobject *kobj,
361 361
362power_attr_ro(pm_wakeup_irq); 362power_attr_ro(pm_wakeup_irq);
363 363
364bool pm_debug_messages_on __read_mostly;
365
366static ssize_t pm_debug_messages_show(struct kobject *kobj,
367 struct kobj_attribute *attr, char *buf)
368{
369 return sprintf(buf, "%d\n", pm_debug_messages_on);
370}
371
372static ssize_t pm_debug_messages_store(struct kobject *kobj,
373 struct kobj_attribute *attr,
374 const char *buf, size_t n)
375{
376 unsigned long val;
377
378 if (kstrtoul(buf, 10, &val))
379 return -EINVAL;
380
381 if (val > 1)
382 return -EINVAL;
383
384 pm_debug_messages_on = !!val;
385 return n;
386}
387
388power_attr(pm_debug_messages);
389
390/**
391 * __pm_pr_dbg - Print a suspend debug message to the kernel log.
392 * @defer: Whether or not to use printk_deferred() to print the message.
393 * @fmt: Message format.
394 *
395 * The message will be emitted if enabled through the pm_debug_messages
396 * sysfs attribute.
397 */
398void __pm_pr_dbg(bool defer, const char *fmt, ...)
399{
400 struct va_format vaf;
401 va_list args;
402
403 if (!pm_debug_messages_on)
404 return;
405
406 va_start(args, fmt);
407
408 vaf.fmt = fmt;
409 vaf.va = &args;
410
411 if (defer)
412 printk_deferred(KERN_DEBUG "PM: %pV", &vaf);
413 else
414 printk(KERN_DEBUG "PM: %pV", &vaf);
415
416 va_end(args);
417}
418
364#else /* !CONFIG_PM_SLEEP_DEBUG */ 419#else /* !CONFIG_PM_SLEEP_DEBUG */
365static inline void pm_print_times_init(void) {} 420static inline void pm_print_times_init(void) {}
366#endif /* CONFIG_PM_SLEEP_DEBUG */ 421#endif /* CONFIG_PM_SLEEP_DEBUG */
@@ -691,12 +746,11 @@ static struct attribute * g[] = {
691 &wake_lock_attr.attr, 746 &wake_lock_attr.attr,
692 &wake_unlock_attr.attr, 747 &wake_unlock_attr.attr,
693#endif 748#endif
694#ifdef CONFIG_PM_DEBUG
695 &pm_test_attr.attr,
696#endif
697#ifdef CONFIG_PM_SLEEP_DEBUG 749#ifdef CONFIG_PM_SLEEP_DEBUG
750 &pm_test_attr.attr,
698 &pm_print_times_attr.attr, 751 &pm_print_times_attr.attr,
699 &pm_wakeup_irq_attr.attr, 752 &pm_wakeup_irq_attr.attr,
753 &pm_debug_messages_attr.attr,
700#endif 754#endif
701#endif 755#endif
702#ifdef CONFIG_FREEZER 756#ifdef CONFIG_FREEZER
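pm_debug_messages is a runtime switch: __pm_pr_dbg() formats and prints only while the sysfs knob is set, optionally deferring the printk. A user-space approximation of the same gate, with a plain bool in place of the sysfs attribute and printf in place of printk:

#include <stdarg.h>
#include <stdbool.h>
#include <stdio.h>

static bool debug_messages_on;          /* stand-in for pm_debug_messages_on */

/* Gated debug print in the spirit of __pm_pr_dbg(); no deferred path here. */
static void pr_dbg(const char *fmt, ...)
{
        va_list args;

        if (!debug_messages_on)
                return;

        va_start(args, fmt);
        printf("PM: ");
        vprintf(fmt, args);
        va_end(args);
}

int main(void)
{
        pr_dbg("suppressed message %d\n", 1);   /* nothing printed */
        debug_messages_on = true;               /* like writing 1 to the knob */
        pr_dbg("printed message %d\n", 2);
        return 0;
}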
diff --git a/kernel/power/power.h b/kernel/power/power.h
index 7fdc40d31b7d..1d2d761e3c25 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -192,7 +192,6 @@ extern void swsusp_show_speed(ktime_t, ktime_t, unsigned int, char *);
192extern const char * const pm_labels[]; 192extern const char * const pm_labels[];
193extern const char *pm_states[]; 193extern const char *pm_states[];
194extern const char *mem_sleep_states[]; 194extern const char *mem_sleep_states[];
195extern suspend_state_t mem_sleep_current;
196 195
197extern int suspend_devices_and_enter(suspend_state_t state); 196extern int suspend_devices_and_enter(suspend_state_t state);
198#else /* !CONFIG_SUSPEND */ 197#else /* !CONFIG_SUSPEND */
@@ -245,7 +244,11 @@ enum {
245#define TEST_FIRST TEST_NONE 244#define TEST_FIRST TEST_NONE
246#define TEST_MAX (__TEST_AFTER_LAST - 1) 245#define TEST_MAX (__TEST_AFTER_LAST - 1)
247 246
247#ifdef CONFIG_PM_SLEEP_DEBUG
248extern int pm_test_level; 248extern int pm_test_level;
249#else
250#define pm_test_level (TEST_NONE)
251#endif
249 252
250#ifdef CONFIG_SUSPEND_FREEZER 253#ifdef CONFIG_SUSPEND_FREEZER
251static inline int suspend_freeze_processes(void) 254static inline int suspend_freeze_processes(void)
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index 3ecf275d7e44..3e2b4f519009 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -8,6 +8,8 @@
8 * This file is released under the GPLv2. 8 * This file is released under the GPLv2.
9 */ 9 */
10 10
11#define pr_fmt(fmt) "PM: " fmt
12
11#include <linux/string.h> 13#include <linux/string.h>
12#include <linux/delay.h> 14#include <linux/delay.h>
13#include <linux/errno.h> 15#include <linux/errno.h>
@@ -33,53 +35,55 @@
33#include "power.h" 35#include "power.h"
34 36
35const char * const pm_labels[] = { 37const char * const pm_labels[] = {
36 [PM_SUSPEND_FREEZE] = "freeze", 38 [PM_SUSPEND_TO_IDLE] = "freeze",
37 [PM_SUSPEND_STANDBY] = "standby", 39 [PM_SUSPEND_STANDBY] = "standby",
38 [PM_SUSPEND_MEM] = "mem", 40 [PM_SUSPEND_MEM] = "mem",
39}; 41};
40const char *pm_states[PM_SUSPEND_MAX]; 42const char *pm_states[PM_SUSPEND_MAX];
41static const char * const mem_sleep_labels[] = { 43static const char * const mem_sleep_labels[] = {
42 [PM_SUSPEND_FREEZE] = "s2idle", 44 [PM_SUSPEND_TO_IDLE] = "s2idle",
43 [PM_SUSPEND_STANDBY] = "shallow", 45 [PM_SUSPEND_STANDBY] = "shallow",
44 [PM_SUSPEND_MEM] = "deep", 46 [PM_SUSPEND_MEM] = "deep",
45}; 47};
46const char *mem_sleep_states[PM_SUSPEND_MAX]; 48const char *mem_sleep_states[PM_SUSPEND_MAX];
47 49
48suspend_state_t mem_sleep_current = PM_SUSPEND_FREEZE; 50suspend_state_t mem_sleep_current = PM_SUSPEND_TO_IDLE;
49static suspend_state_t mem_sleep_default = PM_SUSPEND_MEM; 51suspend_state_t mem_sleep_default = PM_SUSPEND_MAX;
52suspend_state_t pm_suspend_target_state;
53EXPORT_SYMBOL_GPL(pm_suspend_target_state);
50 54
51unsigned int pm_suspend_global_flags; 55unsigned int pm_suspend_global_flags;
52EXPORT_SYMBOL_GPL(pm_suspend_global_flags); 56EXPORT_SYMBOL_GPL(pm_suspend_global_flags);
53 57
54static const struct platform_suspend_ops *suspend_ops; 58static const struct platform_suspend_ops *suspend_ops;
55static const struct platform_freeze_ops *freeze_ops; 59static const struct platform_s2idle_ops *s2idle_ops;
56static DECLARE_WAIT_QUEUE_HEAD(suspend_freeze_wait_head); 60static DECLARE_WAIT_QUEUE_HEAD(s2idle_wait_head);
57 61
58enum freeze_state __read_mostly suspend_freeze_state; 62enum s2idle_states __read_mostly s2idle_state;
59static DEFINE_SPINLOCK(suspend_freeze_lock); 63static DEFINE_SPINLOCK(s2idle_lock);
60 64
61void freeze_set_ops(const struct platform_freeze_ops *ops) 65void s2idle_set_ops(const struct platform_s2idle_ops *ops)
62{ 66{
63 lock_system_sleep(); 67 lock_system_sleep();
64 freeze_ops = ops; 68 s2idle_ops = ops;
65 unlock_system_sleep(); 69 unlock_system_sleep();
66} 70}
67 71
68static void freeze_begin(void) 72static void s2idle_begin(void)
69{ 73{
70 suspend_freeze_state = FREEZE_STATE_NONE; 74 s2idle_state = S2IDLE_STATE_NONE;
71} 75}
72 76
73static void freeze_enter(void) 77static void s2idle_enter(void)
74{ 78{
75 trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_FREEZE, true); 79 trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, true);
76 80
77 spin_lock_irq(&suspend_freeze_lock); 81 spin_lock_irq(&s2idle_lock);
78 if (pm_wakeup_pending()) 82 if (pm_wakeup_pending())
79 goto out; 83 goto out;
80 84
81 suspend_freeze_state = FREEZE_STATE_ENTER; 85 s2idle_state = S2IDLE_STATE_ENTER;
82 spin_unlock_irq(&suspend_freeze_lock); 86 spin_unlock_irq(&s2idle_lock);
83 87
84 get_online_cpus(); 88 get_online_cpus();
85 cpuidle_resume(); 89 cpuidle_resume();
@@ -87,56 +91,75 @@ static void freeze_enter(void)
87 /* Push all the CPUs into the idle loop. */ 91 /* Push all the CPUs into the idle loop. */
88 wake_up_all_idle_cpus(); 92 wake_up_all_idle_cpus();
89 /* Make the current CPU wait so it can enter the idle loop too. */ 93 /* Make the current CPU wait so it can enter the idle loop too. */
90 wait_event(suspend_freeze_wait_head, 94 wait_event(s2idle_wait_head,
91 suspend_freeze_state == FREEZE_STATE_WAKE); 95 s2idle_state == S2IDLE_STATE_WAKE);
92 96
93 cpuidle_pause(); 97 cpuidle_pause();
94 put_online_cpus(); 98 put_online_cpus();
95 99
96 spin_lock_irq(&suspend_freeze_lock); 100 spin_lock_irq(&s2idle_lock);
97 101
98 out: 102 out:
99 suspend_freeze_state = FREEZE_STATE_NONE; 103 s2idle_state = S2IDLE_STATE_NONE;
100 spin_unlock_irq(&suspend_freeze_lock); 104 spin_unlock_irq(&s2idle_lock);
101 105
102 trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_FREEZE, false); 106 trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, false);
103} 107}
104 108
105static void s2idle_loop(void) 109static void s2idle_loop(void)
106{ 110{
107 pr_debug("PM: suspend-to-idle\n"); 111 pm_pr_dbg("suspend-to-idle\n");
112
113 for (;;) {
114 int error;
115
116 dpm_noirq_begin();
117
118 /*
119 * Suspend-to-idle equals
120 * frozen processes + suspended devices + idle processors.
121 * Thus s2idle_enter() should be called right after
122 * all devices have been suspended.
123 */
124 error = dpm_noirq_suspend_devices(PMSG_SUSPEND);
125 if (!error)
126 s2idle_enter();
127
128 dpm_noirq_resume_devices(PMSG_RESUME);
129 if (error && (error != -EBUSY || !pm_wakeup_pending())) {
130 dpm_noirq_end();
131 break;
132 }
108 133
109 do { 134 if (s2idle_ops && s2idle_ops->wake)
110 freeze_enter(); 135 s2idle_ops->wake();
111 136
112 if (freeze_ops && freeze_ops->wake) 137 dpm_noirq_end();
113 freeze_ops->wake();
114 138
115 dpm_resume_noirq(PMSG_RESUME); 139 if (s2idle_ops && s2idle_ops->sync)
116 if (freeze_ops && freeze_ops->sync) 140 s2idle_ops->sync();
117 freeze_ops->sync();
118 141
119 if (pm_wakeup_pending()) 142 if (pm_wakeup_pending())
120 break; 143 break;
121 144
122 pm_wakeup_clear(false); 145 pm_wakeup_clear(false);
123 } while (!dpm_suspend_noirq(PMSG_SUSPEND)); 146 }
124 147
125 pr_debug("PM: resume from suspend-to-idle\n"); 148 pm_pr_dbg("resume from suspend-to-idle\n");
126} 149}
127 150
128void freeze_wake(void) 151void s2idle_wake(void)
129{ 152{
130 unsigned long flags; 153 unsigned long flags;
131 154
132 spin_lock_irqsave(&suspend_freeze_lock, flags); 155 spin_lock_irqsave(&s2idle_lock, flags);
133 if (suspend_freeze_state > FREEZE_STATE_NONE) { 156 if (s2idle_state > S2IDLE_STATE_NONE) {
134 suspend_freeze_state = FREEZE_STATE_WAKE; 157 s2idle_state = S2IDLE_STATE_WAKE;
135 wake_up(&suspend_freeze_wait_head); 158 wake_up(&s2idle_wait_head);
136 } 159 }
137 spin_unlock_irqrestore(&suspend_freeze_lock, flags); 160 spin_unlock_irqrestore(&s2idle_lock, flags);
138} 161}
139EXPORT_SYMBOL_GPL(freeze_wake); 162EXPORT_SYMBOL_GPL(s2idle_wake);
140 163
141static bool valid_state(suspend_state_t state) 164static bool valid_state(suspend_state_t state)
142{ 165{
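The rewritten s2idle_loop() above folds the noirq device suspend into the loop itself: each pass suspends devices at the noirq level, idles the CPUs, resumes the devices, then leaves the loop either on a non-wakeup error or on a genuine wakeup event. The control flow can be simulated with stubbed results as below; every function and value here is a placeholder for the corresponding kernel call.

#include <stdbool.h>
#include <stdio.h>

/* Scripted outcomes for two loop passes: all suspends succeed, a wakeup
 * arrives on the second pass. */
static const int  suspend_result[] = { 0, 0 };
static const bool wakeup_result[]  = { false, true };
static int iteration;

static int  dpm_noirq_suspend(void) { return suspend_result[iteration]; }
static void dpm_noirq_resume(void)  { printf("pass %d: resume devices\n", iteration); }
static bool wakeup_pending(void)    { return wakeup_result[iteration]; }
static void enter_idle(void)        { printf("pass %d: CPUs idle\n", iteration); }

int main(void)
{
        for (;; iteration++) {
                int error = dpm_noirq_suspend();

                if (!error)
                        enter_idle();
                dpm_noirq_resume();

                /* A non-wakeup error ends the loop... */
                if (error && !(error == -16 /* -EBUSY */ && wakeup_pending()))
                        break;
                /* ...and so does a real wakeup event. */
                if (wakeup_pending())
                        break;
        }
        printf("left the loop on pass %d\n", iteration);
        return 0;
}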
@@ -152,19 +175,19 @@ void __init pm_states_init(void)
152{ 175{
153 /* "mem" and "freeze" are always present in /sys/power/state. */ 176 /* "mem" and "freeze" are always present in /sys/power/state. */
154 pm_states[PM_SUSPEND_MEM] = pm_labels[PM_SUSPEND_MEM]; 177 pm_states[PM_SUSPEND_MEM] = pm_labels[PM_SUSPEND_MEM];
155 pm_states[PM_SUSPEND_FREEZE] = pm_labels[PM_SUSPEND_FREEZE]; 178 pm_states[PM_SUSPEND_TO_IDLE] = pm_labels[PM_SUSPEND_TO_IDLE];
156 /* 179 /*
157 * Suspend-to-idle should be supported even without any suspend_ops, 180 * Suspend-to-idle should be supported even without any suspend_ops,
158 * initialize mem_sleep_states[] accordingly here. 181 * initialize mem_sleep_states[] accordingly here.
159 */ 182 */
160 mem_sleep_states[PM_SUSPEND_FREEZE] = mem_sleep_labels[PM_SUSPEND_FREEZE]; 183 mem_sleep_states[PM_SUSPEND_TO_IDLE] = mem_sleep_labels[PM_SUSPEND_TO_IDLE];
161} 184}
162 185
163static int __init mem_sleep_default_setup(char *str) 186static int __init mem_sleep_default_setup(char *str)
164{ 187{
165 suspend_state_t state; 188 suspend_state_t state;
166 189
167 for (state = PM_SUSPEND_FREEZE; state <= PM_SUSPEND_MEM; state++) 190 for (state = PM_SUSPEND_TO_IDLE; state <= PM_SUSPEND_MEM; state++)
168 if (mem_sleep_labels[state] && 191 if (mem_sleep_labels[state] &&
169 !strcmp(str, mem_sleep_labels[state])) { 192 !strcmp(str, mem_sleep_labels[state])) {
170 mem_sleep_default = state; 193 mem_sleep_default = state;
@@ -193,7 +216,7 @@ void suspend_set_ops(const struct platform_suspend_ops *ops)
193 } 216 }
194 if (valid_state(PM_SUSPEND_MEM)) { 217 if (valid_state(PM_SUSPEND_MEM)) {
195 mem_sleep_states[PM_SUSPEND_MEM] = mem_sleep_labels[PM_SUSPEND_MEM]; 218 mem_sleep_states[PM_SUSPEND_MEM] = mem_sleep_labels[PM_SUSPEND_MEM];
196 if (mem_sleep_default == PM_SUSPEND_MEM) 219 if (mem_sleep_default >= PM_SUSPEND_MEM)
197 mem_sleep_current = PM_SUSPEND_MEM; 220 mem_sleep_current = PM_SUSPEND_MEM;
198 } 221 }
199 222
@@ -216,49 +239,49 @@ EXPORT_SYMBOL_GPL(suspend_valid_only_mem);
216 239
217static bool sleep_state_supported(suspend_state_t state) 240static bool sleep_state_supported(suspend_state_t state)
218{ 241{
219 return state == PM_SUSPEND_FREEZE || (suspend_ops && suspend_ops->enter); 242 return state == PM_SUSPEND_TO_IDLE || (suspend_ops && suspend_ops->enter);
220} 243}
221 244
222static int platform_suspend_prepare(suspend_state_t state) 245static int platform_suspend_prepare(suspend_state_t state)
223{ 246{
224 return state != PM_SUSPEND_FREEZE && suspend_ops->prepare ? 247 return state != PM_SUSPEND_TO_IDLE && suspend_ops->prepare ?
225 suspend_ops->prepare() : 0; 248 suspend_ops->prepare() : 0;
226} 249}
227 250
228static int platform_suspend_prepare_late(suspend_state_t state) 251static int platform_suspend_prepare_late(suspend_state_t state)
229{ 252{
230 return state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->prepare ? 253 return state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->prepare ?
231 freeze_ops->prepare() : 0; 254 s2idle_ops->prepare() : 0;
232} 255}
233 256
234static int platform_suspend_prepare_noirq(suspend_state_t state) 257static int platform_suspend_prepare_noirq(suspend_state_t state)
235{ 258{
236 return state != PM_SUSPEND_FREEZE && suspend_ops->prepare_late ? 259 return state != PM_SUSPEND_TO_IDLE && suspend_ops->prepare_late ?
237 suspend_ops->prepare_late() : 0; 260 suspend_ops->prepare_late() : 0;
238} 261}
239 262
240static void platform_resume_noirq(suspend_state_t state) 263static void platform_resume_noirq(suspend_state_t state)
241{ 264{
242 if (state != PM_SUSPEND_FREEZE && suspend_ops->wake) 265 if (state != PM_SUSPEND_TO_IDLE && suspend_ops->wake)
243 suspend_ops->wake(); 266 suspend_ops->wake();
244} 267}
245 268
246static void platform_resume_early(suspend_state_t state) 269static void platform_resume_early(suspend_state_t state)
247{ 270{
248 if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->restore) 271 if (state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->restore)
249 freeze_ops->restore(); 272 s2idle_ops->restore();
250} 273}
251 274
252static void platform_resume_finish(suspend_state_t state) 275static void platform_resume_finish(suspend_state_t state)
253{ 276{
254 if (state != PM_SUSPEND_FREEZE && suspend_ops->finish) 277 if (state != PM_SUSPEND_TO_IDLE && suspend_ops->finish)
255 suspend_ops->finish(); 278 suspend_ops->finish();
256} 279}
257 280
258static int platform_suspend_begin(suspend_state_t state) 281static int platform_suspend_begin(suspend_state_t state)
259{ 282{
260 if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->begin) 283 if (state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->begin)
261 return freeze_ops->begin(); 284 return s2idle_ops->begin();
262 else if (suspend_ops && suspend_ops->begin) 285 else if (suspend_ops && suspend_ops->begin)
263 return suspend_ops->begin(state); 286 return suspend_ops->begin(state);
264 else 287 else
@@ -267,21 +290,21 @@ static int platform_suspend_begin(suspend_state_t state)
267 290
268static void platform_resume_end(suspend_state_t state) 291static void platform_resume_end(suspend_state_t state)
269{ 292{
270 if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end) 293 if (state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->end)
271 freeze_ops->end(); 294 s2idle_ops->end();
272 else if (suspend_ops && suspend_ops->end) 295 else if (suspend_ops && suspend_ops->end)
273 suspend_ops->end(); 296 suspend_ops->end();
274} 297}
275 298
276static void platform_recover(suspend_state_t state) 299static void platform_recover(suspend_state_t state)
277{ 300{
278 if (state != PM_SUSPEND_FREEZE && suspend_ops->recover) 301 if (state != PM_SUSPEND_TO_IDLE && suspend_ops->recover)
279 suspend_ops->recover(); 302 suspend_ops->recover();
280} 303}
281 304
282static bool platform_suspend_again(suspend_state_t state) 305static bool platform_suspend_again(suspend_state_t state)
283{ 306{
284 return state != PM_SUSPEND_FREEZE && suspend_ops->suspend_again ? 307 return state != PM_SUSPEND_TO_IDLE && suspend_ops->suspend_again ?
285 suspend_ops->suspend_again() : false; 308 suspend_ops->suspend_again() : false;
286} 309}
287 310
@@ -370,16 +393,21 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
370 393
371 error = dpm_suspend_late(PMSG_SUSPEND); 394 error = dpm_suspend_late(PMSG_SUSPEND);
372 if (error) { 395 if (error) {
373 pr_err("PM: late suspend of devices failed\n"); 396 pr_err("late suspend of devices failed\n");
374 goto Platform_finish; 397 goto Platform_finish;
375 } 398 }
376 error = platform_suspend_prepare_late(state); 399 error = platform_suspend_prepare_late(state);
377 if (error) 400 if (error)
378 goto Devices_early_resume; 401 goto Devices_early_resume;
379 402
403 if (state == PM_SUSPEND_TO_IDLE && pm_test_level != TEST_PLATFORM) {
404 s2idle_loop();
405 goto Platform_early_resume;
406 }
407
380 error = dpm_suspend_noirq(PMSG_SUSPEND); 408 error = dpm_suspend_noirq(PMSG_SUSPEND);
381 if (error) { 409 if (error) {
382 pr_err("PM: noirq suspend of devices failed\n"); 410 pr_err("noirq suspend of devices failed\n");
383 goto Platform_early_resume; 411 goto Platform_early_resume;
384 } 412 }
385 error = platform_suspend_prepare_noirq(state); 413 error = platform_suspend_prepare_noirq(state);
@@ -389,17 +417,6 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
389 if (suspend_test(TEST_PLATFORM)) 417 if (suspend_test(TEST_PLATFORM))
390 goto Platform_wake; 418 goto Platform_wake;
391 419
392 /*
393 * PM_SUSPEND_FREEZE equals
394 * frozen processes + suspended devices + idle processors.
395 * Thus we should invoke freeze_enter() soon after
396 * all the devices are suspended.
397 */
398 if (state == PM_SUSPEND_FREEZE) {
399 s2idle_loop();
400 goto Platform_early_resume;
401 }
402
403 error = disable_nonboot_cpus(); 420 error = disable_nonboot_cpus();
404 if (error || suspend_test(TEST_CPUS)) 421 if (error || suspend_test(TEST_CPUS))
405 goto Enable_cpus; 422 goto Enable_cpus;
@@ -456,6 +473,8 @@ int suspend_devices_and_enter(suspend_state_t state)
456 if (!sleep_state_supported(state)) 473 if (!sleep_state_supported(state))
457 return -ENOSYS; 474 return -ENOSYS;
458 475
476 pm_suspend_target_state = state;
477
459 error = platform_suspend_begin(state); 478 error = platform_suspend_begin(state);
460 if (error) 479 if (error)
461 goto Close; 480 goto Close;
@@ -464,7 +483,7 @@ int suspend_devices_and_enter(suspend_state_t state)
464 suspend_test_start(); 483 suspend_test_start();
465 error = dpm_suspend_start(PMSG_SUSPEND); 484 error = dpm_suspend_start(PMSG_SUSPEND);
466 if (error) { 485 if (error) {
467 pr_err("PM: Some devices failed to suspend, or early wake event detected\n"); 486 pr_err("Some devices failed to suspend, or early wake event detected\n");
468 goto Recover_platform; 487 goto Recover_platform;
469 } 488 }
470 suspend_test_finish("suspend devices"); 489 suspend_test_finish("suspend devices");
@@ -485,6 +504,7 @@ int suspend_devices_and_enter(suspend_state_t state)
485 504
486 Close: 505 Close:
487 platform_resume_end(state); 506 platform_resume_end(state);
507 pm_suspend_target_state = PM_SUSPEND_ON;
488 return error; 508 return error;
489 509
490 Recover_platform: 510 Recover_platform:
@@ -518,10 +538,10 @@ static int enter_state(suspend_state_t state)
518 int error; 538 int error;
519 539
520 trace_suspend_resume(TPS("suspend_enter"), state, true); 540 trace_suspend_resume(TPS("suspend_enter"), state, true);
521 if (state == PM_SUSPEND_FREEZE) { 541 if (state == PM_SUSPEND_TO_IDLE) {
522#ifdef CONFIG_PM_DEBUG 542#ifdef CONFIG_PM_DEBUG
523 if (pm_test_level != TEST_NONE && pm_test_level <= TEST_CPUS) { 543 if (pm_test_level != TEST_NONE && pm_test_level <= TEST_CPUS) {
524 pr_warn("PM: Unsupported test mode for suspend to idle, please choose none/freezer/devices/platform.\n"); 544 pr_warn("Unsupported test mode for suspend to idle, please choose none/freezer/devices/platform.\n");
525 return -EAGAIN; 545 return -EAGAIN;
526 } 546 }
527#endif 547#endif
@@ -531,18 +551,18 @@ static int enter_state(suspend_state_t state)
531 if (!mutex_trylock(&pm_mutex)) 551 if (!mutex_trylock(&pm_mutex))
532 return -EBUSY; 552 return -EBUSY;
533 553
534 if (state == PM_SUSPEND_FREEZE) 554 if (state == PM_SUSPEND_TO_IDLE)
535 freeze_begin(); 555 s2idle_begin();
536 556
537#ifndef CONFIG_SUSPEND_SKIP_SYNC 557#ifndef CONFIG_SUSPEND_SKIP_SYNC
538 trace_suspend_resume(TPS("sync_filesystems"), 0, true); 558 trace_suspend_resume(TPS("sync_filesystems"), 0, true);
539 pr_info("PM: Syncing filesystems ... "); 559 pr_info("Syncing filesystems ... ");
540 sys_sync(); 560 sys_sync();
541 pr_cont("done.\n"); 561 pr_cont("done.\n");
542 trace_suspend_resume(TPS("sync_filesystems"), 0, false); 562 trace_suspend_resume(TPS("sync_filesystems"), 0, false);
543#endif 563#endif
544 564
545 pr_debug("PM: Preparing system for sleep (%s)\n", pm_states[state]); 565 pm_pr_dbg("Preparing system for sleep (%s)\n", mem_sleep_labels[state]);
546 pm_suspend_clear_flags(); 566 pm_suspend_clear_flags();
547 error = suspend_prepare(state); 567 error = suspend_prepare(state);
548 if (error) 568 if (error)
@@ -552,13 +572,13 @@ static int enter_state(suspend_state_t state)
552 goto Finish; 572 goto Finish;
553 573
554 trace_suspend_resume(TPS("suspend_enter"), state, false); 574 trace_suspend_resume(TPS("suspend_enter"), state, false);
555 pr_debug("PM: Suspending system (%s)\n", pm_states[state]); 575 pm_pr_dbg("Suspending system (%s)\n", mem_sleep_labels[state]);
556 pm_restrict_gfp_mask(); 576 pm_restrict_gfp_mask();
557 error = suspend_devices_and_enter(state); 577 error = suspend_devices_and_enter(state);
558 pm_restore_gfp_mask(); 578 pm_restore_gfp_mask();
559 579
560 Finish: 580 Finish:
561 pr_debug("PM: Finishing wakeup.\n"); 581 pm_pr_dbg("Finishing wakeup.\n");
562 suspend_finish(); 582 suspend_finish();
563 Unlock: 583 Unlock:
564 mutex_unlock(&pm_mutex); 584 mutex_unlock(&pm_mutex);
@@ -579,6 +599,7 @@ int pm_suspend(suspend_state_t state)
579 if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX) 599 if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX)
580 return -EINVAL; 600 return -EINVAL;
581 601
602 pr_info("suspend entry (%s)\n", mem_sleep_labels[state]);
582 error = enter_state(state); 603 error = enter_state(state);
583 if (error) { 604 if (error) {
584 suspend_stats.fail++; 605 suspend_stats.fail++;
@@ -586,6 +607,7 @@ int pm_suspend(suspend_state_t state)
586 } else { 607 } else {
587 suspend_stats.success++; 608 suspend_stats.success++;
588 } 609 }
610 pr_info("suspend exit\n");
589 return error; 611 return error;
590} 612}
591EXPORT_SYMBOL(pm_suspend); 613EXPORT_SYMBOL(pm_suspend);
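
Two things in the suspend.c changes above are worth calling out. First, for suspend-to-idle the core now enters s2idle_loop() right after the late phase of device suspend and before dpm_suspend_noirq(), instead of after the noirq phase as the removed comment block described. Second, suspend_devices_and_enter() publishes the state being entered in pm_suspend_target_state for the whole transition and resets it to PM_SUSPEND_ON on the way out, so drivers can adapt their suspend callbacks to the target state. A minimal sketch of a driver using that variable follows; only pm_suspend_target_state and PM_SUSPEND_TO_IDLE come from the changes above, while the foo_* device, its wake_irq field and the chosen IRQ policy are hypothetical.

/*
 * Sketch: keep the wakeup interrupt armed for suspend-to-idle, where the
 * IRQ is what wakes the CPUs, and quiesce fully for S3 and deeper states.
 */
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/suspend.h>

struct foo_device {
	int wake_irq;
};

static int foo_suspend_noirq(struct device *dev)
{
	struct foo_device *foo = dev_get_drvdata(dev);

	if (pm_suspend_target_state == PM_SUSPEND_TO_IDLE)
		enable_irq_wake(foo->wake_irq);	/* s2idle: IRQ must stay live */
	else
		disable_irq(foo->wake_irq);	/* platform handles wakeup */

	return 0;
}
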
diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c
index 5db217051232..6a897e8b2a88 100644
--- a/kernel/power/suspend_test.c
+++ b/kernel/power/suspend_test.c
@@ -104,9 +104,9 @@ repeat:
104 printk(info_test, pm_states[state]); 104 printk(info_test, pm_states[state]);
105 status = pm_suspend(state); 105 status = pm_suspend(state);
106 if (status < 0) 106 if (status < 0)
107 state = PM_SUSPEND_FREEZE; 107 state = PM_SUSPEND_TO_IDLE;
108 } 108 }
109 if (state == PM_SUSPEND_FREEZE) { 109 if (state == PM_SUSPEND_TO_IDLE) {
110 printk(info_test, pm_states[state]); 110 printk(info_test, pm_states[state]);
111 status = pm_suspend(state); 111 status = pm_suspend(state);
112 } 112 }
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 29a397067ffa..9209d83ecdcf 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -52,9 +52,11 @@ struct sugov_policy {
52struct sugov_cpu { 52struct sugov_cpu {
53 struct update_util_data update_util; 53 struct update_util_data update_util;
54 struct sugov_policy *sg_policy; 54 struct sugov_policy *sg_policy;
55 unsigned int cpu;
55 56
56 unsigned long iowait_boost; 57 bool iowait_boost_pending;
57 unsigned long iowait_boost_max; 58 unsigned int iowait_boost;
59 unsigned int iowait_boost_max;
58 u64 last_update; 60 u64 last_update;
59 61
60 /* The fields below are only needed when sharing a policy. */ 62 /* The fields below are only needed when sharing a policy. */
@@ -76,6 +78,26 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
76{ 78{
77 s64 delta_ns; 79 s64 delta_ns;
78 80
81 /*
82 * Since cpufreq_update_util() is called with rq->lock held for
83 * the @target_cpu, our per-cpu data is fully serialized.
84 *
85 * However, drivers cannot in general deal with cross-cpu
86 * requests, so while get_next_freq() will work, our
87 * sugov_update_commit() call may not for the fast switching platforms.
88 *
89 * Hence stop here for remote requests if they aren't supported
90 * by the hardware, as calculating the frequency is pointless if
91 * we cannot in fact act on it.
92 *
93 * For the slow switching platforms, the kthread is always scheduled on
94 * the right set of CPUs and any CPU can find the next frequency and
95 * schedule the kthread.
96 */
97 if (sg_policy->policy->fast_switch_enabled &&
98 !cpufreq_can_do_remote_dvfs(sg_policy->policy))
99 return false;
100
79 if (sg_policy->work_in_progress) 101 if (sg_policy->work_in_progress)
80 return false; 102 return false;
81 103
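
The comment added above spells out the rule for remote utilization updates: with fast frequency switching the write must happen on a CPU that can program the hardware for this policy, so remote callbacks are ignored unless the driver declares that any CPU may perform DVFS, while slow-switching platforms are unaffected because the worker kthread runs on an appropriate CPU anyway. As a rough illustration of the condition being tested, the helper named in the check is expected to allow an update when either the driver's "any CPU" flag is set or the callback already runs on one of the policy's CPUs. The sketch below uses its own name and is not quoted from the patch; only dvfs_possible_from_any_cpu and policy->cpus are taken from the series.

/* Illustrative stand-in for the cpufreq_can_do_remote_dvfs() check above. */
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

static inline bool can_handle_remote_update(struct cpufreq_policy *policy)
{
	return policy->dvfs_possible_from_any_cpu ||
	       cpumask_test_cpu(smp_processor_id(), policy->cpus);
}
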
@@ -106,7 +128,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
106 128
107 if (policy->fast_switch_enabled) { 129 if (policy->fast_switch_enabled) {
108 next_freq = cpufreq_driver_fast_switch(policy, next_freq); 130 next_freq = cpufreq_driver_fast_switch(policy, next_freq);
109 if (next_freq == CPUFREQ_ENTRY_INVALID) 131 if (!next_freq)
110 return; 132 return;
111 133
112 policy->cur = next_freq; 134 policy->cur = next_freq;
@@ -154,12 +176,12 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
154 return cpufreq_driver_resolve_freq(policy, freq); 176 return cpufreq_driver_resolve_freq(policy, freq);
155} 177}
156 178
157static void sugov_get_util(unsigned long *util, unsigned long *max) 179static void sugov_get_util(unsigned long *util, unsigned long *max, int cpu)
158{ 180{
159 struct rq *rq = this_rq(); 181 struct rq *rq = cpu_rq(cpu);
160 unsigned long cfs_max; 182 unsigned long cfs_max;
161 183
162 cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id()); 184 cfs_max = arch_scale_cpu_capacity(NULL, cpu);
163 185
164 *util = min(rq->cfs.avg.util_avg, cfs_max); 186 *util = min(rq->cfs.avg.util_avg, cfs_max);
165 *max = cfs_max; 187 *max = cfs_max;
@@ -169,30 +191,54 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
169 unsigned int flags) 191 unsigned int flags)
170{ 192{
171 if (flags & SCHED_CPUFREQ_IOWAIT) { 193 if (flags & SCHED_CPUFREQ_IOWAIT) {
172 sg_cpu->iowait_boost = sg_cpu->iowait_boost_max; 194 if (sg_cpu->iowait_boost_pending)
195 return;
196
197 sg_cpu->iowait_boost_pending = true;
198
199 if (sg_cpu->iowait_boost) {
200 sg_cpu->iowait_boost <<= 1;
201 if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
202 sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
203 } else {
204 sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
205 }
173 } else if (sg_cpu->iowait_boost) { 206 } else if (sg_cpu->iowait_boost) {
174 s64 delta_ns = time - sg_cpu->last_update; 207 s64 delta_ns = time - sg_cpu->last_update;
175 208
176 /* Clear iowait_boost if the CPU appears to have been idle. */ 209 /* Clear iowait_boost if the CPU appears to have been idle. */
177 if (delta_ns > TICK_NSEC) 210 if (delta_ns > TICK_NSEC) {
178 sg_cpu->iowait_boost = 0; 211 sg_cpu->iowait_boost = 0;
212 sg_cpu->iowait_boost_pending = false;
213 }
179 } 214 }
180} 215}
181 216
182static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util, 217static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
183 unsigned long *max) 218 unsigned long *max)
184{ 219{
185 unsigned long boost_util = sg_cpu->iowait_boost; 220 unsigned int boost_util, boost_max;
186 unsigned long boost_max = sg_cpu->iowait_boost_max;
187 221
188 if (!boost_util) 222 if (!sg_cpu->iowait_boost)
189 return; 223 return;
190 224
225 if (sg_cpu->iowait_boost_pending) {
226 sg_cpu->iowait_boost_pending = false;
227 } else {
228 sg_cpu->iowait_boost >>= 1;
229 if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
230 sg_cpu->iowait_boost = 0;
231 return;
232 }
233 }
234
235 boost_util = sg_cpu->iowait_boost;
236 boost_max = sg_cpu->iowait_boost_max;
237
191 if (*util * boost_max < *max * boost_util) { 238 if (*util * boost_max < *max * boost_util) {
192 *util = boost_util; 239 *util = boost_util;
193 *max = boost_max; 240 *max = boost_max;
194 } 241 }
195 sg_cpu->iowait_boost >>= 1;
196} 242}
197 243
198#ifdef CONFIG_NO_HZ_COMMON 244#ifdef CONFIG_NO_HZ_COMMON
@@ -229,7 +275,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
229 if (flags & SCHED_CPUFREQ_RT_DL) { 275 if (flags & SCHED_CPUFREQ_RT_DL) {
230 next_f = policy->cpuinfo.max_freq; 276 next_f = policy->cpuinfo.max_freq;
231 } else { 277 } else {
232 sugov_get_util(&util, &max); 278 sugov_get_util(&util, &max, sg_cpu->cpu);
233 sugov_iowait_boost(sg_cpu, &util, &max); 279 sugov_iowait_boost(sg_cpu, &util, &max);
234 next_f = get_next_freq(sg_policy, util, max); 280 next_f = get_next_freq(sg_policy, util, max);
235 /* 281 /*
@@ -264,6 +310,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
264 delta_ns = time - j_sg_cpu->last_update; 310 delta_ns = time - j_sg_cpu->last_update;
265 if (delta_ns > TICK_NSEC) { 311 if (delta_ns > TICK_NSEC) {
266 j_sg_cpu->iowait_boost = 0; 312 j_sg_cpu->iowait_boost = 0;
313 j_sg_cpu->iowait_boost_pending = false;
267 continue; 314 continue;
268 } 315 }
269 if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL) 316 if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
@@ -290,7 +337,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
290 unsigned long util, max; 337 unsigned long util, max;
291 unsigned int next_f; 338 unsigned int next_f;
292 339
293 sugov_get_util(&util, &max); 340 sugov_get_util(&util, &max, sg_cpu->cpu);
294 341
295 raw_spin_lock(&sg_policy->update_lock); 342 raw_spin_lock(&sg_policy->update_lock);
296 343
@@ -445,7 +492,11 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
445 } 492 }
446 493
447 sg_policy->thread = thread; 494 sg_policy->thread = thread;
448 kthread_bind_mask(thread, policy->related_cpus); 495
496 /* Kthread is bound to all CPUs by default */
497 if (!policy->dvfs_possible_from_any_cpu)
498 kthread_bind_mask(thread, policy->related_cpus);
499
449 init_irq_work(&sg_policy->irq_work, sugov_irq_work); 500 init_irq_work(&sg_policy->irq_work, sugov_irq_work);
450 mutex_init(&sg_policy->work_lock); 501 mutex_init(&sg_policy->work_lock);
451 502
@@ -528,16 +579,7 @@ static int sugov_init(struct cpufreq_policy *policy)
528 goto stop_kthread; 579 goto stop_kthread;
529 } 580 }
530 581
531 if (policy->transition_delay_us) { 582 tunables->rate_limit_us = cpufreq_policy_transition_delay_us(policy);
532 tunables->rate_limit_us = policy->transition_delay_us;
533 } else {
534 unsigned int lat;
535
536 tunables->rate_limit_us = LATENCY_MULTIPLIER;
537 lat = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
538 if (lat)
539 tunables->rate_limit_us *= lat;
540 }
541 583
542 policy->governor_data = sg_policy; 584 policy->governor_data = sg_policy;
543 sg_policy->tunables = tunables; 585 sg_policy->tunables = tunables;
@@ -655,6 +697,7 @@ static void sugov_limits(struct cpufreq_policy *policy)
655static struct cpufreq_governor schedutil_gov = { 697static struct cpufreq_governor schedutil_gov = {
656 .name = "schedutil", 698 .name = "schedutil",
657 .owner = THIS_MODULE, 699 .owner = THIS_MODULE,
700 .dynamic_switching = true,
658 .init = sugov_init, 701 .init = sugov_init,
659 .exit = sugov_exit, 702 .exit = sugov_exit,
660 .start = sugov_start, 703 .start = sugov_start,
@@ -671,6 +714,11 @@ struct cpufreq_governor *cpufreq_default_governor(void)
671 714
672static int __init sugov_register(void) 715static int __init sugov_register(void)
673{ 716{
717 int cpu;
718
719 for_each_possible_cpu(cpu)
720 per_cpu(sugov_cpu, cpu).cpu = cpu;
721
674 return cpufreq_register_governor(&schedutil_gov); 722 return cpufreq_register_governor(&schedutil_gov);
675} 723}
676fs_initcall(sugov_register); 724fs_initcall(sugov_register);
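
The largest behavioral change in the schedutil diff above is the I/O-wait boost ramp. Instead of jumping straight to iowait_boost_max on every SCHED_CPUFREQ_IOWAIT update, the boost now starts at policy->min, doubles on each consecutive iowait wakeup up to the max, is halved each time it is consumed without a fresh wakeup, and is dropped once it falls below policy->min or the CPU has been idle for more than a tick. The standalone program below simulates that ramp; the 400 MHz / 2 GHz limits are made-up example values and the TICK_NSEC idle-timeout path is not modeled.

/*
 * Standalone simulation of the schedutil iowait-boost ramp shown above.
 * Build and run with: cc -o boost boost.c && ./boost
 */
#include <stdbool.h>
#include <stdio.h>

#define POLICY_MIN  400000u	/* kHz, example value for policy->min */
#define BOOST_MAX  2000000u	/* kHz, example value for iowait_boost_max */

static unsigned int iowait_boost;
static bool iowait_boost_pending;

/* An update arrives; iowait mirrors the SCHED_CPUFREQ_IOWAIT flag. */
static void set_iowait_boost(bool iowait)
{
	if (!iowait)
		return;		/* idle-timeout clearing not modeled */
	if (iowait_boost_pending)
		return;
	iowait_boost_pending = true;
	if (iowait_boost) {
		iowait_boost <<= 1;
		if (iowait_boost > BOOST_MAX)
			iowait_boost = BOOST_MAX;
	} else {
		iowait_boost = POLICY_MIN;
	}
}

/* The boost is consumed while picking the next frequency. */
static unsigned int use_iowait_boost(void)
{
	if (!iowait_boost)
		return 0;
	if (iowait_boost_pending) {
		iowait_boost_pending = false;
	} else {
		iowait_boost >>= 1;
		if (iowait_boost < POLICY_MIN)
			iowait_boost = 0;
	}
	return iowait_boost;
}

int main(void)
{
	/* Three back-to-back iowait wakeups, then plain updates: ramp, decay. */
	for (int i = 0; i < 6; i++) {
		set_iowait_boost(i < 3);
		printf("update %d: boost = %u kHz\n", i, use_iowait_boost());
	}
	return 0;
}

Running it prints a boost of 400000, 800000 and 1600000 kHz for the three iowait updates, then 800000, 400000 and 0 as the boost decays.
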
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d05bd9457a40..9e38df7649f4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1136,7 +1136,7 @@ static void update_curr_dl(struct rq *rq)
1136 } 1136 }
1137 1137
1138 /* kick cpufreq (see the comment in kernel/sched/sched.h). */ 1138 /* kick cpufreq (see the comment in kernel/sched/sched.h). */
1139 cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL); 1139 cpufreq_update_util(rq, SCHED_CPUFREQ_DL);
1140 1140
1141 schedstat_set(curr->se.statistics.exec_max, 1141 schedstat_set(curr->se.statistics.exec_max,
1142 max(curr->se.statistics.exec_max, delta_exec)); 1142 max(curr->se.statistics.exec_max, delta_exec));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8d5868771cb3..8bc0a883d190 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2803,7 +2803,9 @@ static inline void update_cfs_shares(struct sched_entity *se)
2803 2803
2804static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq) 2804static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
2805{ 2805{
2806 if (&this_rq()->cfs == cfs_rq) { 2806 struct rq *rq = rq_of(cfs_rq);
2807
2808 if (&rq->cfs == cfs_rq) {
2807 /* 2809 /*
2808 * There are a few boundary cases this might miss but it should 2810 * There are a few boundary cases this might miss but it should
2809 * get called often enough that that should (hopefully) not be 2811 * get called often enough that that should (hopefully) not be
@@ -2820,7 +2822,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
2820 * 2822 *
2821 * See cpu_util(). 2823 * See cpu_util().
2822 */ 2824 */
2823 cpufreq_update_util(rq_of(cfs_rq), 0); 2825 cpufreq_update_util(rq, 0);
2824 } 2826 }
2825} 2827}
2826 2828
@@ -4897,7 +4899,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
4897 * passed. 4899 * passed.
4898 */ 4900 */
4899 if (p->in_iowait) 4901 if (p->in_iowait)
4900 cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_IOWAIT); 4902 cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
4901 4903
4902 for_each_sched_entity(se) { 4904 for_each_sched_entity(se) {
4903 if (se->on_rq) 4905 if (se->on_rq)
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 6c23e30c0e5c..257f4f0b4532 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -158,7 +158,7 @@ static void cpuidle_idle_call(void)
158 } 158 }
159 159
160 /* 160 /*
161 * Suspend-to-idle ("freeze") is a system state in which all user space 161 * Suspend-to-idle ("s2idle") is a system state in which all user space
162 * has been frozen, all I/O devices have been suspended and the only 162 * has been frozen, all I/O devices have been suspended and the only
163 * activity happens here and in interrupts (if any). In that case bypass 163
164 * the cpuidle governor and go straight for the deepest idle state 164
@@ -167,9 +167,9 @@ static void cpuidle_idle_call(void)
167 * until a proper wakeup interrupt happens. 167 * until a proper wakeup interrupt happens.
168 */ 168 */
169 169
170 if (idle_should_freeze() || dev->use_deepest_state) { 170 if (idle_should_enter_s2idle() || dev->use_deepest_state) {
171 if (idle_should_freeze()) { 171 if (idle_should_enter_s2idle()) {
172 entered_state = cpuidle_enter_freeze(drv, dev); 172 entered_state = cpuidle_enter_s2idle(drv, dev);
173 if (entered_state > 0) { 173 if (entered_state > 0) {
174 local_irq_enable(); 174 local_irq_enable();
175 goto exit_idle; 175 goto exit_idle;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 45caf937ef90..0af5ca9e3e3f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -970,7 +970,7 @@ static void update_curr_rt(struct rq *rq)
970 return; 970 return;
971 971
972 /* Kick cpufreq (see the comment in kernel/sched/sched.h). */ 972 /* Kick cpufreq (see the comment in kernel/sched/sched.h). */
973 cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT); 973 cpufreq_update_util(rq, SCHED_CPUFREQ_RT);
974 974
975 schedstat_set(curr->se.statistics.exec_max, 975 schedstat_set(curr->se.statistics.exec_max,
976 max(curr->se.statistics.exec_max, delta_exec)); 976 max(curr->se.statistics.exec_max, delta_exec));
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ab1c7f5409a0..6ed7962dc896 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2074,19 +2074,13 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
2074{ 2074{
2075 struct update_util_data *data; 2075 struct update_util_data *data;
2076 2076
2077 data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data)); 2077 data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
2078 cpu_of(rq)));
2078 if (data) 2079 if (data)
2079 data->func(data, rq_clock(rq), flags); 2080 data->func(data, rq_clock(rq), flags);
2080} 2081}
2081
2082static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags)
2083{
2084 if (cpu_of(rq) == smp_processor_id())
2085 cpufreq_update_util(rq, flags);
2086}
2087#else 2082#else
2088static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {} 2083static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
2089static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) {}
2090#endif /* CONFIG_CPU_FREQ */ 2084#endif /* CONFIG_CPU_FREQ */
2091 2085
2092#ifdef arch_scale_freq_capacity 2086#ifdef arch_scale_freq_capacity
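
The sched.h hunk above is what makes the remote callbacks possible in the first place: cpufreq_update_util() now looks up the hook registered for cpu_of(rq) with per_cpu_ptr() instead of unconditionally using this_cpu_ptr(), so the deadline, RT and CFS call sites changed earlier in this diff can pass any runqueue, and the cpufreq_update_this_cpu() wrapper that silently dropped remote updates can go away. A toy user-space model of that dispatch follows; every name in it is made up and the flags value is just an example.

/*
 * Toy model of per-CPU hook dispatch: an update aimed at CPU n invokes
 * CPU n's registered hook regardless of which CPU generated the update.
 */
#include <stdio.h>

#define NR_CPUS 4

typedef void (*util_hook_t)(int cpu, unsigned int flags);
static util_hook_t hooks[NR_CPUS];

static void update_util(int target_cpu, unsigned int flags)
{
	util_hook_t hook = hooks[target_cpu];	/* per_cpu_ptr(..., cpu_of(rq)) */

	if (hook)
		hook(target_cpu, flags);	/* may run on another CPU */
}

static void print_hook(int cpu, unsigned int flags)
{
	printf("frequency update requested for cpu%d, flags=%#x\n", cpu, flags);
}

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		hooks[i] = print_hook;

	update_util(2, 0x1);	/* example flags value */
	return 0;
}
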
diff --git a/kernel/time/timekeeping_debug.c b/kernel/time/timekeeping_debug.c
index 38bc4d2208e8..0754cadfa9e6 100644
--- a/kernel/time/timekeeping_debug.c
+++ b/kernel/time/timekeeping_debug.c
@@ -19,6 +19,7 @@
19#include <linux/init.h> 19#include <linux/init.h>
20#include <linux/kernel.h> 20#include <linux/kernel.h>
21#include <linux/seq_file.h> 21#include <linux/seq_file.h>
22#include <linux/suspend.h>
22#include <linux/time.h> 23#include <linux/time.h>
23 24
24#include "timekeeping_internal.h" 25#include "timekeeping_internal.h"
@@ -75,7 +76,7 @@ void tk_debug_account_sleep_time(struct timespec64 *t)
75 int bin = min(fls(t->tv_sec), NUM_BINS-1); 76 int bin = min(fls(t->tv_sec), NUM_BINS-1);
76 77
77 sleep_time_bin[bin]++; 78 sleep_time_bin[bin]++;
78 printk_deferred(KERN_INFO "Suspended for %lld.%03lu seconds\n", 79 pm_deferred_pr_dbg("Timekeeping suspended for %lld.%03lu seconds\n",
79 (s64)t->tv_sec, t->tv_nsec / NSEC_PER_MSEC); 80 (s64)t->tv_sec, t->tv_nsec / NSEC_PER_MSEC);
80} 81}
81 82
diff --git a/tools/power/cpupower/utils/cpupower.c b/tools/power/cpupower/utils/cpupower.c
index 9ea914378985..2dccf4998599 100644
--- a/tools/power/cpupower/utils/cpupower.c
+++ b/tools/power/cpupower/utils/cpupower.c
@@ -12,6 +12,7 @@
12#include <string.h> 12#include <string.h>
13#include <unistd.h> 13#include <unistd.h>
14#include <errno.h> 14#include <errno.h>
15#include <sched.h>
15#include <sys/types.h> 16#include <sys/types.h>
16#include <sys/stat.h> 17#include <sys/stat.h>
17#include <sys/utsname.h> 18#include <sys/utsname.h>
@@ -31,6 +32,7 @@ static int cmd_help(int argc, const char **argv);
31 */ 32 */
32struct cpupower_cpu_info cpupower_cpu_info; 33struct cpupower_cpu_info cpupower_cpu_info;
33int run_as_root; 34int run_as_root;
35int base_cpu;
34/* Affected cpus chosen by -c/--cpu param */ 36/* Affected cpus chosen by -c/--cpu param */
35struct bitmask *cpus_chosen; 37struct bitmask *cpus_chosen;
36 38
@@ -174,6 +176,7 @@ int main(int argc, const char *argv[])
174 unsigned int i, ret; 176 unsigned int i, ret;
175 struct stat statbuf; 177 struct stat statbuf;
176 struct utsname uts; 178 struct utsname uts;
179 char pathname[32];
177 180
178 cpus_chosen = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF)); 181 cpus_chosen = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
179 182
@@ -198,17 +201,23 @@ int main(int argc, const char *argv[])
198 argv[0] = cmd = "help"; 201 argv[0] = cmd = "help";
199 } 202 }
200 203
201 get_cpu_info(0, &cpupower_cpu_info); 204 base_cpu = sched_getcpu();
205 if (base_cpu < 0) {
206 fprintf(stderr, _("No valid cpus found.\n"));
207 return EXIT_FAILURE;
208 }
209
210 get_cpu_info(&cpupower_cpu_info);
202 run_as_root = !geteuid(); 211 run_as_root = !geteuid();
203 if (run_as_root) { 212 if (run_as_root) {
204 ret = uname(&uts); 213 ret = uname(&uts);
214 sprintf(pathname, "/dev/cpu/%d/msr", base_cpu);
205 if (!ret && !strcmp(uts.machine, "x86_64") && 215 if (!ret && !strcmp(uts.machine, "x86_64") &&
206 stat("/dev/cpu/0/msr", &statbuf) != 0) { 216 stat(pathname, &statbuf) != 0) {
207 if (system("modprobe msr") == -1) 217 if (system("modprobe msr") == -1)
208 fprintf(stderr, _("MSR access not available.\n")); 218 fprintf(stderr, _("MSR access not available.\n"));
209 } 219 }
210 } 220 }
211
212 221
213 for (i = 0; i < ARRAY_SIZE(commands); i++) { 222 for (i = 0; i < ARRAY_SIZE(commands); i++) {
214 struct cmd_struct *p = commands + i; 223 struct cmd_struct *p = commands + i;
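
The cpupower change above stops assuming CPU 0 is usable: the tool records the CPU it is running on via sched_getcpu() as base_cpu, fails cleanly if that lookup does not work, and then uses base_cpu for the /dev/cpu/<n>/msr probe and, in the following hunks, for /proc/cpuinfo parsing and the idle-monitor MSR reads, which matters when CPU 0 is offline. The standalone program below demonstrates the same probe pattern with plain libc calls; it is not cpupower code.

/*
 * Demo of the base-CPU probe pattern: find the CPU we are running on and
 * check whether its MSR device node exists. Build: cc -o demo demo.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	char pathname[32];
	struct stat statbuf;
	int base_cpu = sched_getcpu();

	if (base_cpu < 0) {
		fprintf(stderr, "No valid cpus found.\n");
		return 1;
	}
	snprintf(pathname, sizeof(pathname), "/dev/cpu/%d/msr", base_cpu);
	if (stat(pathname, &statbuf) != 0)
		fprintf(stderr, "%s missing (is the msr module loaded?)\n", pathname);
	else
		printf("MSR node for CPU %d: %s\n", base_cpu, pathname);
	return 0;
}
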
diff --git a/tools/power/cpupower/utils/helpers/cpuid.c b/tools/power/cpupower/utils/helpers/cpuid.c
index 39c2c7d067bb..32d37c9be791 100644
--- a/tools/power/cpupower/utils/helpers/cpuid.c
+++ b/tools/power/cpupower/utils/helpers/cpuid.c
@@ -42,7 +42,7 @@ cpuid_func(edx);
42 * 42 *
43 * TBD: Should there be a cpuid alternative for this if /proc is not mounted? 43 * TBD: Should there be a cpuid alternative for this if /proc is not mounted?
44 */ 44 */
45int get_cpu_info(unsigned int cpu, struct cpupower_cpu_info *cpu_info) 45int get_cpu_info(struct cpupower_cpu_info *cpu_info)
46{ 46{
47 FILE *fp; 47 FILE *fp;
48 char value[64]; 48 char value[64];
@@ -70,7 +70,7 @@ int get_cpu_info(unsigned int cpu, struct cpupower_cpu_info *cpu_info)
70 if (!strncmp(value, "processor\t: ", 12)) 70 if (!strncmp(value, "processor\t: ", 12))
71 sscanf(value, "processor\t: %u", &proc); 71 sscanf(value, "processor\t: %u", &proc);
72 72
73 if (proc != cpu) 73 if (proc != (unsigned int)base_cpu)
74 continue; 74 continue;
75 75
76 /* Get CPU vendor */ 76 /* Get CPU vendor */
diff --git a/tools/power/cpupower/utils/helpers/helpers.h b/tools/power/cpupower/utils/helpers/helpers.h
index 799a18be60aa..41da392be448 100644
--- a/tools/power/cpupower/utils/helpers/helpers.h
+++ b/tools/power/cpupower/utils/helpers/helpers.h
@@ -34,6 +34,7 @@
34/* Internationalization ****************************/ 34/* Internationalization ****************************/
35 35
36extern int run_as_root; 36extern int run_as_root;
37extern int base_cpu;
37extern struct bitmask *cpus_chosen; 38extern struct bitmask *cpus_chosen;
38 39
39/* Global verbose (-d) stuff *********************************/ 40/* Global verbose (-d) stuff *********************************/
@@ -87,11 +88,11 @@ struct cpupower_cpu_info {
87 * 88 *
88 * Extract CPU vendor, family, model, stepping info from /proc/cpuinfo 89 * Extract CPU vendor, family, model, stepping info from /proc/cpuinfo
89 * 90 *
90 * Returns 0 on success or a negativ error code 91 * Returns 0 on success or a negative error code
91 * Only used on x86, below global's struct values are zero/unknown on 92 * Only used on x86, below global's struct values are zero/unknown on
92 * other archs 93 * other archs
93 */ 94 */
94extern int get_cpu_info(unsigned int cpu, struct cpupower_cpu_info *cpu_info); 95extern int get_cpu_info(struct cpupower_cpu_info *cpu_info);
95extern struct cpupower_cpu_info cpupower_cpu_info; 96extern struct cpupower_cpu_info cpupower_cpu_info;
96/* cpuid and cpuinfo helpers **************************/ 97/* cpuid and cpuinfo helpers **************************/
97 98
diff --git a/tools/power/cpupower/utils/helpers/misc.c b/tools/power/cpupower/utils/helpers/misc.c
index 601d719d4e60..a5e7ddf19dbd 100644
--- a/tools/power/cpupower/utils/helpers/misc.c
+++ b/tools/power/cpupower/utils/helpers/misc.c
@@ -13,7 +13,7 @@ int cpufreq_has_boost_support(unsigned int cpu, int *support, int *active,
13 13
14 *support = *active = *states = 0; 14 *support = *active = *states = 0;
15 15
16 ret = get_cpu_info(0, &cpu_info); 16 ret = get_cpu_info(&cpu_info);
17 if (ret) 17 if (ret)
18 return ret; 18 return ret;
19 19
diff --git a/tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c b/tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c
index ebeaba6571a3..f794d6bbb7e9 100644
--- a/tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c
+++ b/tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c
@@ -123,7 +123,7 @@ static int hsw_ext_start(void)
123 previous_count[num][cpu] = val; 123 previous_count[num][cpu] = val;
124 } 124 }
125 } 125 }
126 hsw_ext_get_count(TSC, &tsc_at_measure_start, 0); 126 hsw_ext_get_count(TSC, &tsc_at_measure_start, base_cpu);
127 return 0; 127 return 0;
128} 128}
129 129
@@ -132,7 +132,7 @@ static int hsw_ext_stop(void)
132 unsigned long long val; 132 unsigned long long val;
133 int num, cpu; 133 int num, cpu;
134 134
135 hsw_ext_get_count(TSC, &tsc_at_measure_end, 0); 135 hsw_ext_get_count(TSC, &tsc_at_measure_end, base_cpu);
136 136
137 for (num = 0; num < HSW_EXT_CSTATE_COUNT; num++) { 137 for (num = 0; num < HSW_EXT_CSTATE_COUNT; num++) {
138 for (cpu = 0; cpu < cpu_count; cpu++) { 138 for (cpu = 0; cpu < cpu_count; cpu++) {
diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
index c83f1606970b..d7c2a6d13dea 100644
--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
@@ -80,7 +80,8 @@ static int *is_valid;
80static int mperf_get_tsc(unsigned long long *tsc) 80static int mperf_get_tsc(unsigned long long *tsc)
81{ 81{
82 int ret; 82 int ret;
83 ret = read_msr(0, MSR_TSC, tsc); 83
84 ret = read_msr(base_cpu, MSR_TSC, tsc);
84 if (ret) 85 if (ret)
85 dprint("Reading TSC MSR failed, returning %llu\n", *tsc); 86 dprint("Reading TSC MSR failed, returning %llu\n", *tsc);
86 return ret; 87 return ret;
diff --git a/tools/power/cpupower/utils/idle_monitor/nhm_idle.c b/tools/power/cpupower/utils/idle_monitor/nhm_idle.c
index d2a91dd0d563..abf8cb5f7349 100644
--- a/tools/power/cpupower/utils/idle_monitor/nhm_idle.c
+++ b/tools/power/cpupower/utils/idle_monitor/nhm_idle.c
@@ -129,7 +129,7 @@ static int nhm_start(void)
129 int num, cpu; 129 int num, cpu;
130 unsigned long long dbg, val; 130 unsigned long long dbg, val;
131 131
132 nhm_get_count(TSC, &tsc_at_measure_start, 0); 132 nhm_get_count(TSC, &tsc_at_measure_start, base_cpu);
133 133
134 for (num = 0; num < NHM_CSTATE_COUNT; num++) { 134 for (num = 0; num < NHM_CSTATE_COUNT; num++) {
135 for (cpu = 0; cpu < cpu_count; cpu++) { 135 for (cpu = 0; cpu < cpu_count; cpu++) {
@@ -137,7 +137,7 @@ static int nhm_start(void)
137 previous_count[num][cpu] = val; 137 previous_count[num][cpu] = val;
138 } 138 }
139 } 139 }
140 nhm_get_count(TSC, &dbg, 0); 140 nhm_get_count(TSC, &dbg, base_cpu);
141 dprint("TSC diff: %llu\n", dbg - tsc_at_measure_start); 141 dprint("TSC diff: %llu\n", dbg - tsc_at_measure_start);
142 return 0; 142 return 0;
143} 143}
@@ -148,7 +148,7 @@ static int nhm_stop(void)
148 unsigned long long dbg; 148 unsigned long long dbg;
149 int num, cpu; 149 int num, cpu;
150 150
151 nhm_get_count(TSC, &tsc_at_measure_end, 0); 151 nhm_get_count(TSC, &tsc_at_measure_end, base_cpu);
152 152
153 for (num = 0; num < NHM_CSTATE_COUNT; num++) { 153 for (num = 0; num < NHM_CSTATE_COUNT; num++) {
154 for (cpu = 0; cpu < cpu_count; cpu++) { 154 for (cpu = 0; cpu < cpu_count; cpu++) {
@@ -156,7 +156,7 @@ static int nhm_stop(void)
156 current_count[num][cpu] = val; 156 current_count[num][cpu] = val;
157 } 157 }
158 } 158 }
159 nhm_get_count(TSC, &dbg, 0); 159 nhm_get_count(TSC, &dbg, base_cpu);
160 dprint("TSC diff: %llu\n", dbg - tsc_at_measure_end); 160 dprint("TSC diff: %llu\n", dbg - tsc_at_measure_end);
161 161
162 return 0; 162 return 0;
diff --git a/tools/power/cpupower/utils/idle_monitor/snb_idle.c b/tools/power/cpupower/utils/idle_monitor/snb_idle.c
index efc8a69c9aba..a2b45219648d 100644
--- a/tools/power/cpupower/utils/idle_monitor/snb_idle.c
+++ b/tools/power/cpupower/utils/idle_monitor/snb_idle.c
@@ -120,7 +120,7 @@ static int snb_start(void)
120 previous_count[num][cpu] = val; 120 previous_count[num][cpu] = val;
121 } 121 }
122 } 122 }
123 snb_get_count(TSC, &tsc_at_measure_start, 0); 123 snb_get_count(TSC, &tsc_at_measure_start, base_cpu);
124 return 0; 124 return 0;
125} 125}
126 126
@@ -129,7 +129,7 @@ static int snb_stop(void)
129 unsigned long long val; 129 unsigned long long val;
130 int num, cpu; 130 int num, cpu;
131 131
132 snb_get_count(TSC, &tsc_at_measure_end, 0); 132 snb_get_count(TSC, &tsc_at_measure_end, base_cpu);
133 133
134 for (num = 0; num < SNB_CSTATE_COUNT; num++) { 134 for (num = 0; num < SNB_CSTATE_COUNT; num++) {
135 for (cpu = 0; cpu < cpu_count; cpu++) { 135 for (cpu = 0; cpu < cpu_count; cpu++) {
diff --git a/tools/power/pm-graph/Makefile b/tools/power/pm-graph/Makefile
index 4d0ccc89e6c6..32f40eacdafe 100644
--- a/tools/power/pm-graph/Makefile
+++ b/tools/power/pm-graph/Makefile
@@ -4,7 +4,7 @@ DESTDIR ?=
4all: 4all:
5 @echo "Nothing to build" 5 @echo "Nothing to build"
6 6
7install : 7install : uninstall
8 install -d $(DESTDIR)$(PREFIX)/lib/pm-graph 8 install -d $(DESTDIR)$(PREFIX)/lib/pm-graph
9 install analyze_suspend.py $(DESTDIR)$(PREFIX)/lib/pm-graph 9 install analyze_suspend.py $(DESTDIR)$(PREFIX)/lib/pm-graph
10 install analyze_boot.py $(DESTDIR)$(PREFIX)/lib/pm-graph 10 install analyze_boot.py $(DESTDIR)$(PREFIX)/lib/pm-graph
@@ -17,12 +17,15 @@ install :
17 install sleepgraph.8 $(DESTDIR)$(PREFIX)/share/man/man8 17 install sleepgraph.8 $(DESTDIR)$(PREFIX)/share/man/man8
18 18
19uninstall : 19uninstall :
20 rm $(DESTDIR)$(PREFIX)/share/man/man8/bootgraph.8 20 rm -f $(DESTDIR)$(PREFIX)/share/man/man8/bootgraph.8
21 rm $(DESTDIR)$(PREFIX)/share/man/man8/sleepgraph.8 21 rm -f $(DESTDIR)$(PREFIX)/share/man/man8/sleepgraph.8
22 22
23 rm $(DESTDIR)$(PREFIX)/bin/bootgraph 23 rm -f $(DESTDIR)$(PREFIX)/bin/bootgraph
24 rm $(DESTDIR)$(PREFIX)/bin/sleepgraph 24 rm -f $(DESTDIR)$(PREFIX)/bin/sleepgraph
25 25
26 rm $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_boot.py 26 rm -f $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_boot.py
27 rm $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_suspend.py 27 rm -f $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_suspend.py
28 rmdir $(DESTDIR)$(PREFIX)/lib/pm-graph 28 rm -f $(DESTDIR)$(PREFIX)/lib/pm-graph/*.pyc
29 if [ -d $(DESTDIR)$(PREFIX)/lib/pm-graph ] ; then \
30 rmdir $(DESTDIR)$(PREFIX)/lib/pm-graph; \
31 fi;
diff --git a/tools/power/pm-graph/analyze_boot.py b/tools/power/pm-graph/analyze_boot.py
index 3e1dcbbf1adc..e83df141a597 100755
--- a/tools/power/pm-graph/analyze_boot.py
+++ b/tools/power/pm-graph/analyze_boot.py
@@ -42,7 +42,7 @@ import analyze_suspend as aslib
42# store system values and test parameters 42# store system values and test parameters
43class SystemValues(aslib.SystemValues): 43class SystemValues(aslib.SystemValues):
44 title = 'BootGraph' 44 title = 'BootGraph'
45 version = 2.0 45 version = '2.1'
46 hostname = 'localhost' 46 hostname = 'localhost'
47 testtime = '' 47 testtime = ''
48 kernel = '' 48 kernel = ''
@@ -50,9 +50,14 @@ class SystemValues(aslib.SystemValues):
50 ftracefile = '' 50 ftracefile = ''
51 htmlfile = 'bootgraph.html' 51 htmlfile = 'bootgraph.html'
52 outfile = '' 52 outfile = ''
53 phoronix = False 53 testdir = ''
54 addlogs = False 54 testdirprefix = 'boot'
55 embedded = False
56 testlog = False
57 dmesglog = False
58 ftracelog = False
55 useftrace = False 59 useftrace = False
60 usecallgraph = False
56 usedevsrc = True 61 usedevsrc = True
57 suspendmode = 'boot' 62 suspendmode = 'boot'
58 max_graph_depth = 2 63 max_graph_depth = 2
@@ -61,10 +66,12 @@ class SystemValues(aslib.SystemValues):
61 manual = False 66 manual = False
62 iscronjob = False 67 iscronjob = False
63 timeformat = '%.6f' 68 timeformat = '%.6f'
69 bootloader = 'grub'
70 blexec = []
64 def __init__(self): 71 def __init__(self):
65 if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ): 72 if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ):
66 self.phoronix = True 73 self.embedded = True
67 self.addlogs = True 74 self.dmesglog = True
68 self.outfile = os.environ['LOG_FILE'] 75 self.outfile = os.environ['LOG_FILE']
69 self.htmlfile = os.environ['LOG_FILE'] 76 self.htmlfile = os.environ['LOG_FILE']
70 self.hostname = platform.node() 77 self.hostname = platform.node()
@@ -76,42 +83,80 @@ class SystemValues(aslib.SystemValues):
76 self.kernel = self.kernelVersion(val) 83 self.kernel = self.kernelVersion(val)
77 else: 84 else:
78 self.kernel = 'unknown' 85 self.kernel = 'unknown'
86 self.testdir = datetime.now().strftime('boot-%y%m%d-%H%M%S')
79 def kernelVersion(self, msg): 87 def kernelVersion(self, msg):
80 return msg.split()[2] 88 return msg.split()[2]
89 def checkFtraceKernelVersion(self):
90 val = tuple(map(int, self.kernel.split('-')[0].split('.')))
91 if val >= (4, 10, 0):
92 return True
93 return False
81 def kernelParams(self): 94 def kernelParams(self):
82 cmdline = 'initcall_debug log_buf_len=32M' 95 cmdline = 'initcall_debug log_buf_len=32M'
83 if self.useftrace: 96 if self.useftrace:
84 cmdline += ' trace_buf_size=128M trace_clock=global '\ 97 if self.cpucount > 0:
98 bs = min(self.memtotal / 2, 2*1024*1024) / self.cpucount
99 else:
100 bs = 131072
101 cmdline += ' trace_buf_size=%dK trace_clock=global '\
85 'trace_options=nooverwrite,funcgraph-abstime,funcgraph-cpu,'\ 102 'trace_options=nooverwrite,funcgraph-abstime,funcgraph-cpu,'\
86 'funcgraph-duration,funcgraph-proc,funcgraph-tail,'\ 103 'funcgraph-duration,funcgraph-proc,funcgraph-tail,'\
87 'nofuncgraph-overhead,context-info,graph-time '\ 104 'nofuncgraph-overhead,context-info,graph-time '\
88 'ftrace=function_graph '\ 105 'ftrace=function_graph '\
89 'ftrace_graph_max_depth=%d '\ 106 'ftrace_graph_max_depth=%d '\
90 'ftrace_graph_filter=%s' % \ 107 'ftrace_graph_filter=%s' % \
91 (self.max_graph_depth, self.graph_filter) 108 (bs, self.max_graph_depth, self.graph_filter)
92 return cmdline 109 return cmdline
93 def setGraphFilter(self, val): 110 def setGraphFilter(self, val):
94 fp = open(self.tpath+'available_filter_functions') 111 master = self.getBootFtraceFilterFunctions()
95 master = fp.read().split('\n') 112 fs = ''
96 fp.close()
97 for i in val.split(','): 113 for i in val.split(','):
98 func = i.strip() 114 func = i.strip()
115 if func == '':
116 doError('badly formatted filter function string')
117 if '[' in func or ']' in func:
118 doError('loadable module functions not allowed - "%s"' % func)
119 if ' ' in func:
120 doError('spaces found in filter functions - "%s"' % func)
99 if func not in master: 121 if func not in master:
100 doError('function "%s" not available for ftrace' % func) 122 doError('function "%s" not available for ftrace' % func)
101 self.graph_filter = val 123 if not fs:
124 fs = func
125 else:
126 fs += ','+func
127 if not fs:
128 doError('badly formatted filter function string')
129 self.graph_filter = fs
130 def getBootFtraceFilterFunctions(self):
131 self.rootCheck(True)
132 fp = open(self.tpath+'available_filter_functions')
133 fulllist = fp.read().split('\n')
134 fp.close()
135 list = []
136 for i in fulllist:
137 if not i or ' ' in i or '[' in i or ']' in i:
138 continue
139 list.append(i)
140 return list
141 def myCronJob(self, line):
142 if '@reboot' not in line:
143 return False
144 if 'bootgraph' in line or 'analyze_boot.py' in line or '-cronjob' in line:
145 return True
146 return False
102 def cronjobCmdString(self): 147 def cronjobCmdString(self):
103 cmdline = '%s -cronjob' % os.path.abspath(sys.argv[0]) 148 cmdline = '%s -cronjob' % os.path.abspath(sys.argv[0])
104 args = iter(sys.argv[1:]) 149 args = iter(sys.argv[1:])
105 for arg in args: 150 for arg in args:
106 if arg in ['-h', '-v', '-cronjob', '-reboot']: 151 if arg in ['-h', '-v', '-cronjob', '-reboot']:
107 continue 152 continue
108 elif arg in ['-o', '-dmesg', '-ftrace', '-filter']: 153 elif arg in ['-o', '-dmesg', '-ftrace', '-func']:
109 args.next() 154 args.next()
110 continue 155 continue
111 cmdline += ' '+arg 156 cmdline += ' '+arg
112 if self.graph_filter != 'do_one_initcall': 157 if self.graph_filter != 'do_one_initcall':
113 cmdline += ' -filter "%s"' % self.graph_filter 158 cmdline += ' -func "%s"' % self.graph_filter
114 cmdline += ' -o "%s"' % os.path.abspath(self.htmlfile) 159 cmdline += ' -o "%s"' % os.path.abspath(self.testdir)
115 return cmdline 160 return cmdline
116 def manualRebootRequired(self): 161 def manualRebootRequired(self):
117 cmdline = self.kernelParams() 162 cmdline = self.kernelParams()
@@ -121,6 +166,39 @@ class SystemValues(aslib.SystemValues):
121 print '3. After reboot, re-run this tool with the same arguments but no command (w/o -reboot or -manual).\n' 166 print '3. After reboot, re-run this tool with the same arguments but no command (w/o -reboot or -manual).\n'
122 print 'CMDLINE="%s"' % cmdline 167 print 'CMDLINE="%s"' % cmdline
123 sys.exit() 168 sys.exit()
169 def getExec(self, cmd):
170 dirlist = ['/sbin', '/bin', '/usr/sbin', '/usr/bin',
171 '/usr/local/sbin', '/usr/local/bin']
172 for path in dirlist:
173 cmdfull = os.path.join(path, cmd)
174 if os.path.exists(cmdfull):
175 return cmdfull
176 return ''
177 def blGrub(self):
178 blcmd = ''
179 for cmd in ['update-grub', 'grub-mkconfig', 'grub2-mkconfig']:
180 if blcmd:
181 break
182 blcmd = self.getExec(cmd)
183 if not blcmd:
184 doError('[GRUB] missing update command')
185 if not os.path.exists('/etc/default/grub'):
186 doError('[GRUB] missing /etc/default/grub')
187 if 'grub2' in blcmd:
188 cfg = '/boot/grub2/grub.cfg'
189 else:
190 cfg = '/boot/grub/grub.cfg'
191 if not os.path.exists(cfg):
192 doError('[GRUB] missing %s' % cfg)
193 if 'update-grub' in blcmd:
194 self.blexec = [blcmd]
195 else:
196 self.blexec = [blcmd, '-o', cfg]
197 def getBootLoader(self):
198 if self.bootloader == 'grub':
199 self.blGrub()
200 else:
201 doError('unknown boot loader: %s' % self.bootloader)
124 202
125sysvals = SystemValues() 203sysvals = SystemValues()
126 204
@@ -136,20 +214,23 @@ class Data(aslib.Data):
136 idstr = '' 214 idstr = ''
137 html_device_id = 0 215 html_device_id = 0
138 valid = False 216 valid = False
139 initstart = 0.0 217 tUserMode = 0.0
140 boottime = '' 218 boottime = ''
141 phases = ['boot'] 219 phases = ['kernel', 'user']
142 do_one_initcall = False 220 do_one_initcall = False
143 def __init__(self, num): 221 def __init__(self, num):
144 self.testnumber = num 222 self.testnumber = num
145 self.idstr = 'a' 223 self.idstr = 'a'
146 self.dmesgtext = [] 224 self.dmesgtext = []
147 self.dmesg = { 225 self.dmesg = {
148 'boot': {'list': dict(), 'start': -1.0, 'end': -1.0, 'row': 0, 'color': '#dddddd'} 226 'kernel': {'list': dict(), 'start': -1.0, 'end': -1.0, 'row': 0,
227 'order': 0, 'color': 'linear-gradient(to bottom, #fff, #bcf)'},
228 'user': {'list': dict(), 'start': -1.0, 'end': -1.0, 'row': 0,
229 'order': 1, 'color': '#fff'}
149 } 230 }
150 def deviceTopology(self): 231 def deviceTopology(self):
151 return '' 232 return ''
152 def newAction(self, phase, name, start, end, ret, ulen): 233 def newAction(self, phase, name, pid, start, end, ret, ulen):
153 # new device callback for a specific phase 234 # new device callback for a specific phase
154 self.html_device_id += 1 235 self.html_device_id += 1
155 devid = '%s%d' % (self.idstr, self.html_device_id) 236 devid = '%s%d' % (self.idstr, self.html_device_id)
@@ -163,41 +244,46 @@ class Data(aslib.Data):
163 name = '%s[%d]' % (origname, i) 244 name = '%s[%d]' % (origname, i)
164 i += 1 245 i += 1
165 list[name] = {'name': name, 'start': start, 'end': end, 246 list[name] = {'name': name, 'start': start, 'end': end,
166 'pid': 0, 'length': length, 'row': 0, 'id': devid, 247 'pid': pid, 'length': length, 'row': 0, 'id': devid,
167 'ret': ret, 'ulen': ulen } 248 'ret': ret, 'ulen': ulen }
168 return name 249 return name
169 def deviceMatch(self, cg): 250 def deviceMatch(self, pid, cg):
170 if cg.end - cg.start == 0: 251 if cg.end - cg.start == 0:
171 return True 252 return True
172 list = self.dmesg['boot']['list'] 253 for p in data.phases:
173 for devname in list: 254 list = self.dmesg[p]['list']
174 dev = list[devname] 255 for devname in list:
175 if cg.name == 'do_one_initcall': 256 dev = list[devname]
176 if(cg.start <= dev['start'] and cg.end >= dev['end'] and dev['length'] > 0): 257 if pid != dev['pid']:
177 dev['ftrace'] = cg 258 continue
178 self.do_one_initcall = True 259 if cg.name == 'do_one_initcall':
179 return True 260 if(cg.start <= dev['start'] and cg.end >= dev['end'] and dev['length'] > 0):
180 else: 261 dev['ftrace'] = cg
181 if(cg.start > dev['start'] and cg.end < dev['end']): 262 self.do_one_initcall = True
182 if 'ftraces' not in dev: 263 return True
183 dev['ftraces'] = [] 264 else:
184 dev['ftraces'].append(cg) 265 if(cg.start > dev['start'] and cg.end < dev['end']):
185 return True 266 if 'ftraces' not in dev:
267 dev['ftraces'] = []
268 dev['ftraces'].append(cg)
269 return True
186 return False 270 return False
187 271
188# ----------------- FUNCTIONS -------------------- 272# ----------------- FUNCTIONS --------------------
189 273
190# Function: loadKernelLog 274# Function: parseKernelLog
191# Description: 275# Description:
192# Load a raw kernel log from dmesg 276# parse a kernel log for boot data
193def loadKernelLog(): 277def parseKernelLog():
278 phase = 'kernel'
194 data = Data(0) 279 data = Data(0)
195 data.dmesg['boot']['start'] = data.start = ktime = 0.0 280 data.dmesg['kernel']['start'] = data.start = ktime = 0.0
196 sysvals.stamp = { 281 sysvals.stamp = {
197 'time': datetime.now().strftime('%B %d %Y, %I:%M:%S %p'), 282 'time': datetime.now().strftime('%B %d %Y, %I:%M:%S %p'),
198 'host': sysvals.hostname, 283 'host': sysvals.hostname,
199 'mode': 'boot', 'kernel': ''} 284 'mode': 'boot', 'kernel': ''}
200 285
286 tp = aslib.TestProps()
201 devtemp = dict() 287 devtemp = dict()
202 if(sysvals.dmesgfile): 288 if(sysvals.dmesgfile):
203 lf = open(sysvals.dmesgfile, 'r') 289 lf = open(sysvals.dmesgfile, 'r')
@@ -205,6 +291,13 @@ def loadKernelLog():
205 lf = Popen('dmesg', stdout=PIPE).stdout 291 lf = Popen('dmesg', stdout=PIPE).stdout
206 for line in lf: 292 for line in lf:
207 line = line.replace('\r\n', '') 293 line = line.replace('\r\n', '')
294 # grab the stamp and sysinfo
295 if re.match(tp.stampfmt, line):
296 tp.stamp = line
297 continue
298 elif re.match(tp.sysinfofmt, line):
299 tp.sysinfo = line
300 continue
208 idx = line.find('[') 301 idx = line.find('[')
209 if idx > 1: 302 if idx > 1:
210 line = line[idx:] 303 line = line[idx:]
@@ -215,7 +308,6 @@ def loadKernelLog():
215 if(ktime > 120): 308 if(ktime > 120):
216 break 309 break
217 msg = m.group('msg') 310 msg = m.group('msg')
218 data.end = data.initstart = ktime
219 data.dmesgtext.append(line) 311 data.dmesgtext.append(line)
220 if(ktime == 0.0 and re.match('^Linux version .*', msg)): 312 if(ktime == 0.0 and re.match('^Linux version .*', msg)):
221 if(not sysvals.stamp['kernel']): 313 if(not sysvals.stamp['kernel']):
@@ -228,43 +320,39 @@ def loadKernelLog():
228 data.boottime = bt.strftime('%Y-%m-%d_%H:%M:%S') 320 data.boottime = bt.strftime('%Y-%m-%d_%H:%M:%S')
229 sysvals.stamp['time'] = bt.strftime('%B %d %Y, %I:%M:%S %p') 321 sysvals.stamp['time'] = bt.strftime('%B %d %Y, %I:%M:%S %p')
230 continue 322 continue
231 m = re.match('^calling *(?P<f>.*)\+.*', msg) 323 m = re.match('^calling *(?P<f>.*)\+.* @ (?P<p>[0-9]*)', msg)
232 if(m): 324 if(m):
233 devtemp[m.group('f')] = ktime 325 func = m.group('f')
326 pid = int(m.group('p'))
327 devtemp[func] = (ktime, pid)
234 continue 328 continue
235 m = re.match('^initcall *(?P<f>.*)\+.* returned (?P<r>.*) after (?P<t>.*) usecs', msg) 329 m = re.match('^initcall *(?P<f>.*)\+.* returned (?P<r>.*) after (?P<t>.*) usecs', msg)
236 if(m): 330 if(m):
237 data.valid = True 331 data.valid = True
332 data.end = ktime
238 f, r, t = m.group('f', 'r', 't') 333 f, r, t = m.group('f', 'r', 't')
239 if(f in devtemp): 334 if(f in devtemp):
240 data.newAction('boot', f, devtemp[f], ktime, int(r), int(t)) 335 start, pid = devtemp[f]
241 data.end = ktime 336 data.newAction(phase, f, pid, start, ktime, int(r), int(t))
242 del devtemp[f] 337 del devtemp[f]
243 continue 338 continue
244 if(re.match('^Freeing unused kernel memory.*', msg)): 339 if(re.match('^Freeing unused kernel memory.*', msg)):
245 break 340 data.tUserMode = ktime
246 341 data.dmesg['kernel']['end'] = ktime
247 data.dmesg['boot']['end'] = data.end 342 data.dmesg['user']['start'] = ktime
343 phase = 'user'
344
345 if tp.stamp:
346 sysvals.stamp = 0
347 tp.parseStamp(data, sysvals)
348 data.dmesg['user']['end'] = data.end
248 lf.close() 349 lf.close()
249 return data 350 return data
250 351
251# Function: loadTraceLog 352# Function: parseTraceLog
252# Description: 353# Description:
253# Check if trace is available and copy to a temp file 354# Check if trace is available and copy to a temp file
254def loadTraceLog(data): 355def parseTraceLog(data):
255 # load the data to a temp file if none given
256 if not sysvals.ftracefile:
257 lib = aslib.sysvals
258 aslib.rootCheck(True)
259 if not lib.verifyFtrace():
260 doError('ftrace not available')
261 if lib.fgetVal('current_tracer').strip() != 'function_graph':
262 doError('ftrace not configured for a boot callgraph')
263 sysvals.ftracefile = '/tmp/boot_ftrace.%s.txt' % os.getpid()
264 call('cat '+lib.tpath+'trace > '+sysvals.ftracefile, shell=True)
265 if not sysvals.ftracefile:
266 doError('No trace data available')
267
268 # parse the trace log 356 # parse the trace log
269 ftemp = dict() 357 ftemp = dict()
270 tp = aslib.TestProps() 358 tp = aslib.TestProps()
@@ -306,9 +394,29 @@ def loadTraceLog(data):
306 print('Sanity check failed for %s-%d' % (proc, pid)) 394 print('Sanity check failed for %s-%d' % (proc, pid))
307 continue 395 continue
308 # match cg data to devices 396 # match cg data to devices
309 if not data.deviceMatch(cg): 397 if not data.deviceMatch(pid, cg):
310 print ' BAD: %s %s-%d [%f - %f]' % (cg.name, proc, pid, cg.start, cg.end) 398 print ' BAD: %s %s-%d [%f - %f]' % (cg.name, proc, pid, cg.start, cg.end)
311 399
400# Function: retrieveLogs
401# Description:
402# Create copies of dmesg and/or ftrace for later processing
403def retrieveLogs():
404 # check ftrace is configured first
405 if sysvals.useftrace:
406 tracer = sysvals.fgetVal('current_tracer').strip()
407 if tracer != 'function_graph':
408 doError('ftrace not configured for a boot callgraph')
409 # create the folder and get dmesg
410 sysvals.systemInfo(aslib.dmidecode(sysvals.mempath))
411 sysvals.initTestOutput('boot')
412 sysvals.writeDatafileHeader(sysvals.dmesgfile)
413 call('dmesg >> '+sysvals.dmesgfile, shell=True)
414 if not sysvals.useftrace:
415 return
416 # get ftrace
417 sysvals.writeDatafileHeader(sysvals.ftracefile)
418 call('cat '+sysvals.tpath+'trace >> '+sysvals.ftracefile, shell=True)
419
312# Function: colorForName 420# Function: colorForName
313# Description: 421# Description:
314# Generate a repeatable color from a list for a given name 422# Generate a repeatable color from a list for a given name
@@ -353,18 +461,19 @@ def cgOverview(cg, minlen):
353# testruns: array of Data objects from parseKernelLog or parseTraceLog 461# testruns: array of Data objects from parseKernelLog or parseTraceLog
354# Output: 462# Output:
355# True if the html file was created, false if it failed 463# True if the html file was created, false if it failed
356def createBootGraph(data, embedded): 464def createBootGraph(data):
357 # html function templates 465 # html function templates
358 html_srccall = '<div id={6} title="{5}" class="srccall" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;">{0}</div>\n' 466 html_srccall = '<div id={6} title="{5}" class="srccall" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;">{0}</div>\n'
359 html_timetotal = '<table class="time1">\n<tr>'\ 467 html_timetotal = '<table class="time1">\n<tr>'\
360 '<td class="blue">Time from Kernel Boot to start of User Mode: <b>{0} ms</b></td>'\ 468 '<td class="blue">Init process starts @ <b>{0} ms</b></td>'\
469 '<td class="blue">Last initcall ends @ <b>{1} ms</b></td>'\
361 '</tr>\n</table>\n' 470 '</tr>\n</table>\n'
362 471
363 # device timeline 472 # device timeline
364 devtl = aslib.Timeline(100, 20) 473 devtl = aslib.Timeline(100, 20)
365 474
366 # write the test title and general info header 475 # write the test title and general info header
367 devtl.createHeader(sysvals, 'noftrace') 476 devtl.createHeader(sysvals)
368 477
369 # Generate the header for this timeline 478 # Generate the header for this timeline
370 t0 = data.start 479 t0 = data.start
@@ -373,84 +482,98 @@ def createBootGraph(data, embedded):
373 if(tTotal == 0): 482 if(tTotal == 0):
374 print('ERROR: No timeline data') 483 print('ERROR: No timeline data')
375 return False 484 return False
376 boot_time = '%.0f'%(tTotal*1000) 485 user_mode = '%.0f'%(data.tUserMode*1000)
377 devtl.html += html_timetotal.format(boot_time) 486 last_init = '%.0f'%(tTotal*1000)
487 devtl.html += html_timetotal.format(user_mode, last_init)
378 488
379 # determine the maximum number of rows we need to draw 489 # determine the maximum number of rows we need to draw
380 phase = 'boot'
381 list = data.dmesg[phase]['list']
382 devlist = [] 490 devlist = []
383 for devname in list: 491 for p in data.phases:
384 d = aslib.DevItem(0, phase, list[devname]) 492 list = data.dmesg[p]['list']
385 devlist.append(d) 493 for devname in list:
386 devtl.getPhaseRows(devlist) 494 d = aslib.DevItem(0, p, list[devname])
495 devlist.append(d)
496 devtl.getPhaseRows(devlist, 0, 'start')
387 devtl.calcTotalRows() 497 devtl.calcTotalRows()
388 498
389 # draw the timeline background 499 # draw the timeline background
 	devtl.createZoomBox()
-	boot = data.dmesg[phase]
-	length = boot['end']-boot['start']
-	left = '%.3f' % (((boot['start']-t0)*100.0)/tTotal)
-	width = '%.3f' % ((length*100.0)/tTotal)
-	devtl.html += devtl.html_tblock.format(phase, left, width, devtl.scaleH)
-	devtl.html += devtl.html_phase.format('0', '100', \
-		'%.3f'%devtl.scaleH, '%.3f'%devtl.bodyH, \
-		'white', '')
+	devtl.html += devtl.html_tblock.format('boot', '0', '100', devtl.scaleH)
+	for p in data.phases:
+		phase = data.dmesg[p]
+		length = phase['end']-phase['start']
+		left = '%.3f' % (((phase['start']-t0)*100.0)/tTotal)
+		width = '%.3f' % ((length*100.0)/tTotal)
+		devtl.html += devtl.html_phase.format(left, width, \
+			'%.3f'%devtl.scaleH, '%.3f'%devtl.bodyH, \
+			phase['color'], '')
 
 	# draw the device timeline
 	num = 0
 	devstats = dict()
-	for devname in sorted(list):
-		cls, color = colorForName(devname)
-		dev = list[devname]
-		info = '@|%.3f|%.3f|%.3f|%d' % (dev['start']*1000.0, dev['end']*1000.0,
-			dev['ulen']/1000.0, dev['ret'])
-		devstats[dev['id']] = {'info':info}
-		dev['color'] = color
-		height = devtl.phaseRowHeight(0, phase, dev['row'])
-		top = '%.6f' % ((dev['row']*height) + devtl.scaleH)
-		left = '%.6f' % (((dev['start']-t0)*100)/tTotal)
-		width = '%.6f' % (((dev['end']-dev['start'])*100)/tTotal)
-		length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000)
-		devtl.html += devtl.html_device.format(dev['id'],
-			devname+length+'kernel_mode', left, top, '%.3f'%height,
-			width, devname, ' '+cls, '')
-		rowtop = devtl.phaseRowTop(0, phase, dev['row'])
-		height = '%.6f' % (devtl.rowH / 2)
-		top = '%.6f' % (rowtop + devtl.scaleH + (devtl.rowH / 2))
-		if data.do_one_initcall:
-			if('ftrace' not in dev):
-				continue
-			cg = dev['ftrace']
-			large, stats = cgOverview(cg, 0.001)
-			devstats[dev['id']]['fstat'] = stats
-			for l in large:
-				left = '%f' % (((l.time-t0)*100)/tTotal)
-				width = '%f' % (l.length*100/tTotal)
-				title = '%s (%0.3fms)' % (l.name, l.length * 1000.0)
-				devtl.html += html_srccall.format(l.name, left,
-					top, height, width, title, 'x%d'%num)
-				num += 1
-			continue
-		if('ftraces' not in dev):
-			continue
-		for cg in dev['ftraces']:
-			left = '%f' % (((cg.start-t0)*100)/tTotal)
-			width = '%f' % ((cg.end-cg.start)*100/tTotal)
-			cglen = (cg.end - cg.start) * 1000.0
-			title = '%s (%0.3fms)' % (cg.name, cglen)
-			cg.id = 'x%d' % num
-			devtl.html += html_srccall.format(cg.name, left,
-				top, height, width, title, dev['id']+cg.id)
-			num += 1
+	for phase in data.phases:
+		list = data.dmesg[phase]['list']
+		for devname in sorted(list):
+			cls, color = colorForName(devname)
+			dev = list[devname]
+			info = '@|%.3f|%.3f|%.3f|%d' % (dev['start']*1000.0, dev['end']*1000.0,
+				dev['ulen']/1000.0, dev['ret'])
+			devstats[dev['id']] = {'info':info}
+			dev['color'] = color
+			height = devtl.phaseRowHeight(0, phase, dev['row'])
+			top = '%.6f' % ((dev['row']*height) + devtl.scaleH)
+			left = '%.6f' % (((dev['start']-t0)*100)/tTotal)
+			width = '%.6f' % (((dev['end']-dev['start'])*100)/tTotal)
+			length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000)
+			devtl.html += devtl.html_device.format(dev['id'],
+				devname+length+phase+'_mode', left, top, '%.3f'%height,
+				width, devname, ' '+cls, '')
+			rowtop = devtl.phaseRowTop(0, phase, dev['row'])
+			height = '%.6f' % (devtl.rowH / 2)
+			top = '%.6f' % (rowtop + devtl.scaleH + (devtl.rowH / 2))
+			if data.do_one_initcall:
+				if('ftrace' not in dev):
+					continue
+				cg = dev['ftrace']
+				large, stats = cgOverview(cg, 0.001)
+				devstats[dev['id']]['fstat'] = stats
+				for l in large:
+					left = '%f' % (((l.time-t0)*100)/tTotal)
+					width = '%f' % (l.length*100/tTotal)
+					title = '%s (%0.3fms)' % (l.name, l.length * 1000.0)
+					devtl.html += html_srccall.format(l.name, left,
+						top, height, width, title, 'x%d'%num)
+					num += 1
+				continue
+			if('ftraces' not in dev):
+				continue
+			for cg in dev['ftraces']:
+				left = '%f' % (((cg.start-t0)*100)/tTotal)
+				width = '%f' % ((cg.end-cg.start)*100/tTotal)
+				cglen = (cg.end - cg.start) * 1000.0
+				title = '%s (%0.3fms)' % (cg.name, cglen)
+				cg.id = 'x%d' % num
+				devtl.html += html_srccall.format(cg.name, left,
+					top, height, width, title, dev['id']+cg.id)
+				num += 1
 
 	# draw the time scale, try to make the number of labels readable
-	devtl.createTimeScale(t0, tMax, tTotal, phase)
+	devtl.createTimeScale(t0, tMax, tTotal, 'boot')
 	devtl.html += '</div>\n'
 
 	# timeline is finished
 	devtl.html += '</div>\n</div>\n'
 
+	# draw a legend which describes the phases by color
+	devtl.html += '<div class="legend">\n'
+	pdelta = 20.0
+	pmargin = 36.0
+	for phase in data.phases:
+		order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin)
+		devtl.html += devtl.html_legend.format(order, \
+			data.dmesg[phase]['color'], phase+'_mode', phase[0])
+	devtl.html += '</div>\n'
+
454 if(sysvals.outfile == sysvals.htmlfile): 577 if(sysvals.outfile == sysvals.htmlfile):
455 hf = open(sysvals.htmlfile, 'a') 578 hf = open(sysvals.htmlfile, 'a')
456 else: 579 else:
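The timeline blocks above are positioned with plain CSS percentages derived from the trace window: left = ((start - t0) * 100) / tTotal and width = ((end - start) * 100) / tTotal. A minimal standalone sketch of that placement math, with made-up timestamps (the variable names mirror the code above, the numbers are illustrative only):

    # standalone sketch of the percent-based placement used by the timeline HTML
    def place(start, end, t0, tTotal):
        # left edge and width as percentages of the full trace window
        left = '%.6f' % (((start - t0) * 100) / tTotal)
        width = '%.6f' % (((end - start) * 100) / tTotal)
        return left, width

    # hypothetical device that ran from 1.25s to 1.40s in a 5s boot trace
    print(place(1.25, 1.40, 0.0, 5.0))   # ('25.000000', '3.000000')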
@@ -474,7 +597,7 @@ def createBootGraph(data, embedded):
474 .fstat td {text-align:left;width:35px;}\n\ 597 .fstat td {text-align:left;width:35px;}\n\
475 .srccall {position:absolute;font-size:10px;z-index:7;overflow:hidden;color:black;text-align:center;white-space:nowrap;border-radius:5px;border:1px solid black;background:linear-gradient(to bottom right,#CCC,#969696);}\n\ 598 .srccall {position:absolute;font-size:10px;z-index:7;overflow:hidden;color:black;text-align:center;white-space:nowrap;border-radius:5px;border:1px solid black;background:linear-gradient(to bottom right,#CCC,#969696);}\n\
476 .srccall:hover {color:white;font-weight:bold;border:1px solid white;}\n' 599 .srccall:hover {color:white;font-weight:bold;border:1px solid white;}\n'
477 if(not embedded): 600 if(not sysvals.embedded):
478 aslib.addCSS(hf, sysvals, 1, False, extra) 601 aslib.addCSS(hf, sysvals, 1, False, extra)
479 602
480 # write the device timeline 603 # write the device timeline
@@ -495,9 +618,11 @@ def createBootGraph(data, embedded):
495 html = \ 618 html = \
496 '<div id="devicedetailtitle"></div>\n'\ 619 '<div id="devicedetailtitle"></div>\n'\
497 '<div id="devicedetail" style="display:none;">\n'\ 620 '<div id="devicedetail" style="display:none;">\n'\
-		'<div id="devicedetail0">\n'\
-		'<div id="kernel_mode" class="phaselet" style="left:0%;width:100%;background:#DDDDDD"></div>\n'\
-		'</div>\n</div>\n'\
+		'<div id="devicedetail0">\n'
+	for p in data.phases:
+		phase = data.dmesg[p]
+		html += devtl.html_phaselet.format(p+'_mode', '0', '100', phase['color'])
+	html += '</div>\n</div>\n'\
501 '<script type="text/javascript">\n'+statinfo+\ 626 '<script type="text/javascript">\n'+statinfo+\
502 '</script>\n' 627 '</script>\n'
503 hf.write(html) 628 hf.write(html)
@@ -507,21 +632,21 @@ def createBootGraph(data, embedded):
507 aslib.addCallgraphs(sysvals, hf, data) 632 aslib.addCallgraphs(sysvals, hf, data)
508 633
509 # add the dmesg log as a hidden div 634 # add the dmesg log as a hidden div
510 if sysvals.addlogs: 635 if sysvals.dmesglog:
511 hf.write('<div id="dmesglog" style="display:none;">\n') 636 hf.write('<div id="dmesglog" style="display:none;">\n')
512 for line in data.dmesgtext: 637 for line in data.dmesgtext:
513 line = line.replace('<', '&lt').replace('>', '&gt') 638 line = line.replace('<', '&lt').replace('>', '&gt')
514 hf.write(line) 639 hf.write(line)
515 hf.write('</div>\n') 640 hf.write('</div>\n')
516 641
517 if(not embedded): 642 if(not sysvals.embedded):
518 # write the footer and close 643 # write the footer and close
519 aslib.addScriptCode(hf, [data]) 644 aslib.addScriptCode(hf, [data])
520 hf.write('</body>\n</html>\n') 645 hf.write('</body>\n</html>\n')
521 else: 646 else:
522 # embedded out will be loaded in a page, skip the js 647 # embedded out will be loaded in a page, skip the js
523 hf.write('<div id=bounds style=display:none>%f,%f</div>' % \ 648 hf.write('<div id=bounds style=display:none>%f,%f</div>' % \
524 (data.start*1000, data.initstart*1000)) 649 (data.start*1000, data.end*1000))
525 hf.close() 650 hf.close()
526 return True 651 return True
527 652
@@ -533,17 +658,20 @@ def updateCron(restore=False):
533 if not restore: 658 if not restore:
534 sysvals.rootUser(True) 659 sysvals.rootUser(True)
535 crondir = '/var/spool/cron/crontabs/' 660 crondir = '/var/spool/cron/crontabs/'
-	cronfile = crondir+'root'
-	backfile = crondir+'root-analyze_boot-backup'
 	if not os.path.exists(crondir):
+		crondir = '/var/spool/cron/'
+	if not os.path.exists(crondir):
 		doError('%s not found' % crondir)
-	out = Popen(['which', 'crontab'], stdout=PIPE).stdout.read()
-	if not out:
+	cronfile = crondir+'root'
+	backfile = crondir+'root-analyze_boot-backup'
+	cmd = sysvals.getExec('crontab')
+	if not cmd:
 		doError('crontab not found')
 	# on restore: move the backup cron back into place
 	if restore:
 		if os.path.exists(backfile):
 			shutil.move(backfile, cronfile)
+			call([cmd, cronfile])
547 return 675 return
548 # backup current cron and install new one with reboot 676 # backup current cron and install new one with reboot
549 if os.path.exists(cronfile): 677 if os.path.exists(cronfile):
@@ -556,13 +684,13 @@ def updateCron(restore=False):
556 fp = open(backfile, 'r') 684 fp = open(backfile, 'r')
557 op = open(cronfile, 'w') 685 op = open(cronfile, 'w')
558 for line in fp: 686 for line in fp:
559 if '@reboot' not in line: 687 if not sysvals.myCronJob(line):
560 op.write(line) 688 op.write(line)
561 continue 689 continue
562 fp.close() 690 fp.close()
563 op.write('@reboot python %s\n' % sysvals.cronjobCmdString()) 691 op.write('@reboot python %s\n' % sysvals.cronjobCmdString())
564 op.close() 692 op.close()
565 res = call('crontab %s' % cronfile, shell=True) 693 res = call([cmd, cronfile])
566 except Exception, e: 694 except Exception, e:
567 print 'Exception: %s' % str(e) 695 print 'Exception: %s' % str(e)
568 shutil.move(backfile, cronfile) 696 shutil.move(backfile, cronfile)
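The cron handling above reduces to: back up root's crontab, drop any entry the tool itself installed, append one @reboot line, and feed the file back to crontab. A minimal sketch of that flow under stated assumptions (the 'bootgraph' marker check and the job_cmd argument are hypothetical stand-ins; the real code uses sysvals.myCronJob() and sysvals.cronjobCmdString()):

    import shutil
    from subprocess import call

    def install_reboot_job(cronfile, backfile, crontab_cmd, job_cmd):
        # keep the backup, then rewrite the crontab without our old entry
        shutil.copy(cronfile, backfile)
        with open(backfile, 'r') as fp, open(cronfile, 'w') as op:
            for line in fp:
                if '@reboot' in line and 'bootgraph' in line:
                    continue  # assumed marker; the real check is sysvals.myCronJob()
                op.write(line)
            op.write('@reboot python %s\n' % job_cmd)
        # hand the file back to crontab, as the updated code does with call([cmd, cronfile])
        return call([crontab_cmd, cronfile])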
@@ -577,25 +705,16 @@ def updateGrub(restore=False):
577 # call update-grub on restore 705 # call update-grub on restore
578 if restore: 706 if restore:
579 try: 707 try:
580 call(['update-grub'], stderr=PIPE, stdout=PIPE, 708 call(sysvals.blexec, stderr=PIPE, stdout=PIPE,
581 env={'PATH': '.:/sbin:/usr/sbin:/usr/bin:/sbin:/bin'}) 709 env={'PATH': '.:/sbin:/usr/sbin:/usr/bin:/sbin:/bin'})
582 except Exception, e: 710 except Exception, e:
583 print 'Exception: %s\n' % str(e) 711 print 'Exception: %s\n' % str(e)
584 return 712 return
-	# verify we can do this
-	sysvals.rootUser(True)
-	grubfile = '/etc/default/grub'
-	if not os.path.exists(grubfile):
-		print 'ERROR: Unable to set the kernel parameters via grub.\n'
-		sysvals.manualRebootRequired()
-	out = Popen(['which', 'update-grub'], stdout=PIPE).stdout.read()
-	if not out:
-		print 'ERROR: Unable to set the kernel parameters via grub.\n'
-		sysvals.manualRebootRequired()
-
 	# extract the option and create a grub config without it
+	sysvals.rootUser(True)
 	tgtopt = 'GRUB_CMDLINE_LINUX_DEFAULT'
 	cmdline = ''
+	grubfile = '/etc/default/grub'
599 tempfile = '/etc/default/grub.analyze_boot' 718 tempfile = '/etc/default/grub.analyze_boot'
600 shutil.move(grubfile, tempfile) 719 shutil.move(grubfile, tempfile)
601 res = -1 720 res = -1
@@ -622,7 +741,7 @@ def updateGrub(restore=False):
622 # if the target option value is in quotes, strip them 741 # if the target option value is in quotes, strip them
623 sp = '"' 742 sp = '"'
624 val = cmdline.strip() 743 val = cmdline.strip()
625 if val[0] == '\'' or val[0] == '"': 744 if val and (val[0] == '\'' or val[0] == '"'):
626 sp = val[0] 745 sp = val[0]
627 val = val.strip(sp) 746 val = val.strip(sp)
628 cmdline = val 747 cmdline = val
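The added "val and" guard matters when GRUB_CMDLINE_LINUX_DEFAULT exists but is empty: indexing val[0] on an empty string raises IndexError. A tiny illustration of the quote-stripping step with and without a value (illustrative only):

    def strip_quotes(cmdline):
        sp = '"'
        val = cmdline.strip()
        if val and (val[0] == '\'' or val[0] == '"'):
            sp = val[0]
            val = val.strip(sp)
        return sp, val

    print(strip_quotes('"quiet splash"'))  # ('"', 'quiet splash')
    print(strip_quotes(''))                # ('"', '') instead of an IndexError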
@@ -633,7 +752,7 @@ def updateGrub(restore=False):
633 # write out the updated target option 752 # write out the updated target option
634 op.write('\n%s=%s%s%s\n' % (tgtopt, sp, cmdline, sp)) 753 op.write('\n%s=%s%s%s\n' % (tgtopt, sp, cmdline, sp))
635 op.close() 754 op.close()
636 res = call('update-grub') 755 res = call(sysvals.blexec)
637 os.remove(grubfile) 756 os.remove(grubfile)
638 except Exception, e: 757 except Exception, e:
639 print 'Exception: %s' % str(e) 758 print 'Exception: %s' % str(e)
@@ -641,10 +760,18 @@ def updateGrub(restore=False):
641 # cleanup 760 # cleanup
642 shutil.move(tempfile, grubfile) 761 shutil.move(tempfile, grubfile)
643 if res != 0: 762 if res != 0:
644 doError('update-grub failed') 763 doError('update grub failed')
645 764
-# Function: doError
+# Function: updateKernelParams
 # Description:
+#	 update boot conf for all kernels with our parameters
+def updateKernelParams(restore=False):
+	# find the boot loader
+	sysvals.getBootLoader()
+	if sysvals.bootloader == 'grub':
+		updateGrub(restore)
+
+# Function: doError
+# Description:
648# generic error function for catastrphic failures 775# generic error function for catastrphic failures
649# Arguments: 776# Arguments:
650# msg: the error message to print 777# msg: the error message to print
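updateKernelParams() is a thin dispatcher: it asks sysvals to detect the boot loader and then calls the matching updater, so loaders other than grub can be slotted in later. getBootLoader() itself is defined elsewhere in the tool and is not shown in this hunk; a plausible sketch of such a probe (an assumption, not the actual implementation) might be:

    import os

    def find_bootloader(candidates=('update-grub', 'grub-mkconfig', 'grub2-mkconfig')):
        # walk PATH for a known grub config generator; 'grub' is the only
        # loader the -bl option accepts in this version of the tool
        for name in candidates:
            for d in os.environ.get('PATH', '').split(':'):
                exe = os.path.join(d, name)
                if os.access(exe, os.X_OK):
                    return 'grub', exe
        return '', ''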
@@ -660,7 +787,7 @@ def doError(msg, help=False):
660# print out the help text 787# print out the help text
661def printHelp(): 788def printHelp():
662 print('') 789 print('')
663 print('%s v%.1f' % (sysvals.title, sysvals.version)) 790 print('%s v%s' % (sysvals.title, sysvals.version))
664 print('Usage: bootgraph <options> <command>') 791 print('Usage: bootgraph <options> <command>')
665 print('') 792 print('')
666 print('Description:') 793 print('Description:')
@@ -669,13 +796,19 @@ def printHelp():
669 print(' the start of the init process.') 796 print(' the start of the init process.')
670 print('') 797 print('')
671 print(' If no specific command is given the tool reads the current dmesg') 798 print(' If no specific command is given the tool reads the current dmesg')
672 print(' and/or ftrace log and outputs bootgraph.html') 799 print(' and/or ftrace log and creates a timeline')
800 print('')
801 print(' Generates output files in subdirectory: boot-yymmdd-HHMMSS')
802 print(' HTML output: <hostname>_boot.html')
803 print(' raw dmesg output: <hostname>_boot_dmesg.txt')
804 print(' raw ftrace output: <hostname>_boot_ftrace.txt')
673 print('') 805 print('')
674 print('Options:') 806 print('Options:')
675 print(' -h Print this help text') 807 print(' -h Print this help text')
676 print(' -v Print the current tool version') 808 print(' -v Print the current tool version')
677 print(' -addlogs Add the dmesg log to the html output') 809 print(' -addlogs Add the dmesg log to the html output')
678 print(' -o file Html timeline name (default: bootgraph.html)') 810 print(' -o name Overrides the output subdirectory name when running a new test')
811 print(' default: boot-{date}-{time}')
679 print(' [advanced]') 812 print(' [advanced]')
680 print(' -f Use ftrace to add function detail (default: disabled)') 813 print(' -f Use ftrace to add function detail (default: disabled)')
681 print(' -callgraph Add callgraph detail, can be very large (default: disabled)') 814 print(' -callgraph Add callgraph detail, can be very large (default: disabled)')
@@ -683,13 +816,18 @@ def printHelp():
683 print(' -mincg ms Discard all callgraphs shorter than ms milliseconds (e.g. 0.001 for us)') 816 print(' -mincg ms Discard all callgraphs shorter than ms milliseconds (e.g. 0.001 for us)')
684 print(' -timeprec N Number of significant digits in timestamps (0:S, 3:ms, [6:us])') 817 print(' -timeprec N Number of significant digits in timestamps (0:S, 3:ms, [6:us])')
685 print(' -expandcg pre-expand the callgraph data in the html output (default: disabled)') 818 print(' -expandcg pre-expand the callgraph data in the html output (default: disabled)')
686 print(' -filter list Limit ftrace to comma-delimited list of functions (default: do_one_initcall)') 819 print(' -func list Limit ftrace to comma-delimited list of functions (default: do_one_initcall)')
687 print(' [commands]') 820 print(' -cgfilter S Filter the callgraph output in the timeline')
821 print(' -bl name Use the following boot loader for kernel params (default: grub)')
688 print(' -reboot Reboot the machine automatically and generate a new timeline') 822 print(' -reboot Reboot the machine automatically and generate a new timeline')
689 print(' -manual Show the requirements to generate a new timeline manually') 823 print(' -manual Show the steps to generate a new timeline manually (used with -reboot)')
-	print(' -dmesg file Load a stored dmesg file (used with -ftrace)')
-	print(' -ftrace file Load a stored ftrace file (used with -dmesg)')
-	print(' -flistall Print all functions capable of being captured in ftrace')
+	print('')
+	print('Other commands:')
+	print(' -flistall Print all functions capable of being captured in ftrace')
+	print(' -sysinfo Print out system info extracted from BIOS')
+	print(' [redo]')
+	print(' -dmesg file Create HTML output using dmesg input (used with -ftrace)')
+	print(' -ftrace file Create HTML output using ftrace input (used with -dmesg)')
693 print('') 831 print('')
694 return True 832 return True
695 833
@@ -698,14 +836,15 @@ def printHelp():
698if __name__ == '__main__': 836if __name__ == '__main__':
699 # loop through the command line arguments 837 # loop through the command line arguments
700 cmd = '' 838 cmd = ''
-	simplecmds = ['-updategrub', '-flistall']
+	testrun = True
+	simplecmds = ['-sysinfo', '-kpupdate', '-flistall', '-checkbl']
702 args = iter(sys.argv[1:]) 841 args = iter(sys.argv[1:])
703 for arg in args: 842 for arg in args:
704 if(arg == '-h'): 843 if(arg == '-h'):
705 printHelp() 844 printHelp()
706 sys.exit() 845 sys.exit()
707 elif(arg == '-v'): 846 elif(arg == '-v'):
708 print("Version %.1f" % sysvals.version) 847 print("Version %s" % sysvals.version)
709 sys.exit() 848 sys.exit()
710 elif(arg in simplecmds): 849 elif(arg in simplecmds):
711 cmd = arg[1:] 850 cmd = arg[1:]
@@ -716,16 +855,32 @@ if __name__ == '__main__':
716 sysvals.usecallgraph = True 855 sysvals.usecallgraph = True
717 elif(arg == '-mincg'): 856 elif(arg == '-mincg'):
718 sysvals.mincglen = aslib.getArgFloat('-mincg', args, 0.0, 10000.0) 857 sysvals.mincglen = aslib.getArgFloat('-mincg', args, 0.0, 10000.0)
858 elif(arg == '-cgfilter'):
859 try:
860 val = args.next()
861 except:
862 doError('No callgraph functions supplied', True)
863 sysvals.setDeviceFilter(val)
864 elif(arg == '-bl'):
865 try:
866 val = args.next()
867 except:
868 doError('No boot loader name supplied', True)
869 if val.lower() not in ['grub']:
870 doError('Unknown boot loader: %s' % val, True)
871 sysvals.bootloader = val.lower()
719 elif(arg == '-timeprec'): 872 elif(arg == '-timeprec'):
720 sysvals.setPrecision(aslib.getArgInt('-timeprec', args, 0, 6)) 873 sysvals.setPrecision(aslib.getArgInt('-timeprec', args, 0, 6))
721 elif(arg == '-maxdepth'): 874 elif(arg == '-maxdepth'):
722 sysvals.max_graph_depth = aslib.getArgInt('-maxdepth', args, 0, 1000) 875 sysvals.max_graph_depth = aslib.getArgInt('-maxdepth', args, 0, 1000)
723 elif(arg == '-filter'): 876 elif(arg == '-func'):
724 try: 877 try:
725 val = args.next() 878 val = args.next()
726 except: 879 except:
727 doError('No filter functions supplied', True) 880 doError('No filter functions supplied', True)
728 aslib.rootCheck(True) 881 sysvals.useftrace = True
882 sysvals.usecallgraph = True
883 sysvals.rootCheck(True)
729 sysvals.setGraphFilter(val) 884 sysvals.setGraphFilter(val)
730 elif(arg == '-ftrace'): 885 elif(arg == '-ftrace'):
731 try: 886 try:
@@ -734,9 +889,10 @@ if __name__ == '__main__':
734 doError('No ftrace file supplied', True) 889 doError('No ftrace file supplied', True)
735 if(os.path.exists(val) == False): 890 if(os.path.exists(val) == False):
736 doError('%s does not exist' % val) 891 doError('%s does not exist' % val)
892 testrun = False
737 sysvals.ftracefile = val 893 sysvals.ftracefile = val
738 elif(arg == '-addlogs'): 894 elif(arg == '-addlogs'):
739 sysvals.addlogs = True 895 sysvals.dmesglog = True
740 elif(arg == '-expandcg'): 896 elif(arg == '-expandcg'):
741 sysvals.cgexp = True 897 sysvals.cgexp = True
742 elif(arg == '-dmesg'): 898 elif(arg == '-dmesg'):
@@ -748,18 +904,15 @@ if __name__ == '__main__':
748 doError('%s does not exist' % val) 904 doError('%s does not exist' % val)
749 if(sysvals.htmlfile == val or sysvals.outfile == val): 905 if(sysvals.htmlfile == val or sysvals.outfile == val):
750 doError('Output filename collision') 906 doError('Output filename collision')
907 testrun = False
751 sysvals.dmesgfile = val 908 sysvals.dmesgfile = val
752 elif(arg == '-o'): 909 elif(arg == '-o'):
753 try: 910 try:
754 val = args.next() 911 val = args.next()
755 except: 912 except:
-			doError('No HTML filename supplied', True)
-		if(sysvals.dmesgfile == val or sysvals.ftracefile == val):
-			doError('Output filename collision')
-		sysvals.htmlfile = val
+			doError('No subdirectory name supplied', True)
+		sysvals.testdir = sysvals.setOutputFolder(val)
 	elif(arg == '-reboot'):
-		if sysvals.iscronjob:
-			doError('-reboot and -cronjob are incompatible')
 		sysvals.reboot = True
764 elif(arg == '-manual'): 917 elif(arg == '-manual'):
765 sysvals.reboot = True 918 sysvals.reboot = True
@@ -767,58 +920,93 @@ if __name__ == '__main__':
767 # remaining options are only for cron job use 920 # remaining options are only for cron job use
768 elif(arg == '-cronjob'): 921 elif(arg == '-cronjob'):
769 sysvals.iscronjob = True 922 sysvals.iscronjob = True
770 if sysvals.reboot:
771 doError('-reboot and -cronjob are incompatible')
772 else: 923 else:
773 doError('Invalid argument: '+arg, True) 924 doError('Invalid argument: '+arg, True)
774 925
926 # compatibility errors and access checks
927 if(sysvals.iscronjob and (sysvals.reboot or \
928 sysvals.dmesgfile or sysvals.ftracefile or cmd)):
929 doError('-cronjob is meant for batch purposes only')
930 if(sysvals.reboot and (sysvals.dmesgfile or sysvals.ftracefile)):
931 doError('-reboot and -dmesg/-ftrace are incompatible')
932 if cmd or sysvals.reboot or sysvals.iscronjob or testrun:
933 sysvals.rootCheck(True)
934 if (testrun and sysvals.useftrace) or cmd == 'flistall':
935 if not sysvals.verifyFtrace():
936 doError('Ftrace is not properly enabled')
937
938 # run utility commands
939 sysvals.cpuInfo()
 	if cmd != '':
-		if cmd == 'updategrub':
-			updateGrub()
+		if cmd == 'kpupdate':
+			updateKernelParams()
 		elif cmd == 'flistall':
-			sysvals.getFtraceFilterFunctions(False)
+			for f in sysvals.getBootFtraceFilterFunctions():
+				print f
+		elif cmd == 'checkbl':
+			sysvals.getBootLoader()
+			print 'Boot Loader: %s\n%s' % (sysvals.bootloader, sysvals.blexec)
+		elif(cmd == 'sysinfo'):
+			sysvals.printSystemInfo()
 		sys.exit()
781 952
782 # update grub, setup a cronjob, and reboot 953 # reboot: update grub, setup a cronjob, and reboot
783 if sysvals.reboot: 954 if sysvals.reboot:
955 if (sysvals.useftrace or sysvals.usecallgraph) and \
956 not sysvals.checkFtraceKernelVersion():
957 doError('Ftrace functionality requires kernel v4.10 or newer')
784 if not sysvals.manual: 958 if not sysvals.manual:
785 updateGrub() 959 updateKernelParams()
786 updateCron() 960 updateCron()
787 call('reboot') 961 call('reboot')
788 else: 962 else:
789 sysvals.manualRebootRequired() 963 sysvals.manualRebootRequired()
790 sys.exit() 964 sys.exit()
791 965
792 # disable the cronjob 966 # cronjob: remove the cronjob, grub changes, and disable ftrace
793 if sysvals.iscronjob: 967 if sysvals.iscronjob:
794 updateCron(True) 968 updateCron(True)
795 updateGrub(True) 969 updateKernelParams(True)
970 try:
971 sysvals.fsetVal('0', 'tracing_on')
972 except:
973 pass
796 974
-	data = loadKernelLog()
-	if sysvals.useftrace:
-		loadTraceLog(data)
-		if sysvals.iscronjob:
-			try:
-				sysvals.fsetVal('0', 'tracing_on')
-			except:
-				pass
-
-	if(sysvals.outfile and sysvals.phoronix):
-		fp = open(sysvals.outfile, 'w')
-		fp.write('pass %s initstart %.3f end %.3f boot %s\n' %
-			(data.valid, data.initstart*1000, data.end*1000, data.boottime))
-		fp.close()
-	if(not data.valid):
-		if sysvals.dmesgfile:
-			doError('No initcall data found in %s' % sysvals.dmesgfile)
-		else:
-			doError('No initcall data found, is initcall_debug enabled?')
+	# testrun: generate copies of the logs
+	if testrun:
+		retrieveLogs()
+	else:
+		sysvals.setOutputFile()
+
+	# process the log data
+	if sysvals.dmesgfile:
+		data = parseKernelLog()
+		if(not data.valid):
+			doError('No initcall data found in %s' % sysvals.dmesgfile)
+		if sysvals.useftrace and sysvals.ftracefile:
+			parseTraceLog(data)
+	else:
+		doError('dmesg file required')
816 990
817 print(' Host: %s' % sysvals.hostname) 991 print(' Host: %s' % sysvals.hostname)
818 print(' Test time: %s' % sysvals.testtime) 992 print(' Test time: %s' % sysvals.testtime)
819 print(' Boot time: %s' % data.boottime) 993 print(' Boot time: %s' % data.boottime)
820 print('Kernel Version: %s' % sysvals.kernel) 994 print('Kernel Version: %s' % sysvals.kernel)
821 print(' Kernel start: %.3f' % (data.start * 1000)) 995 print(' Kernel start: %.3f' % (data.start * 1000))
822 print(' init start: %.3f' % (data.initstart * 1000)) 996 print('Usermode start: %.3f' % (data.tUserMode * 1000))
997 print('Last Init Call: %.3f' % (data.end * 1000))
998
999 # handle embedded output logs
1000 if(sysvals.outfile and sysvals.embedded):
1001 fp = open(sysvals.outfile, 'w')
1002 fp.write('pass %s initstart %.3f end %.3f boot %s\n' %
1003 (data.valid, data.tUserMode*1000, data.end*1000, data.boottime))
1004 fp.close()
1005
+	createBootGraph(data)
 
-	createBootGraph(data, sysvals.phoronix)
+	# if running as root, change output dir owner to sudo_user
1009 if testrun and os.path.isdir(sysvals.testdir) and \
1010 os.getuid() == 0 and 'SUDO_USER' in os.environ:
1011 cmd = 'chown -R {0}:{0} {1} > /dev/null 2>&1'
1012 call(cmd.format(os.environ['SUDO_USER'], sysvals.testdir), shell=True)
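For the embedded (Phoronix) case the tool now writes a one-line summary, 'pass <bool> initstart <ms> end <ms> boot <datetime>', built from data.tUserMode and data.end. A small sketch of reading that line back (a hypothetical consumer; only the format string is taken from the code above, the sample timestamp is a placeholder):

    import re

    def parse_summary(line):
        # matches e.g.: pass True initstart 123.456 end 2345.678 boot 2017-09-05 15:19:08
        m = re.match(r'pass (?P<ok>\S+) initstart (?P<init>[0-9.]+)'
            r' end (?P<end>[0-9.]+) boot (?P<boot>.*)$', line)
        if not m:
            return None
        return {'valid': m.group('ok') == 'True',
            'initstart_ms': float(m.group('init')),
            'end_ms': float(m.group('end')),
            'boottime': m.group('boot')}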
diff --git a/tools/power/pm-graph/analyze_suspend.py b/tools/power/pm-graph/analyze_suspend.py
index a9206e67fc1f..1b60fe203741 100755
--- a/tools/power/pm-graph/analyze_suspend.py
+++ b/tools/power/pm-graph/analyze_suspend.py
@@ -68,10 +68,12 @@ from subprocess import call, Popen, PIPE
68# store system values and test parameters 68# store system values and test parameters
69class SystemValues: 69class SystemValues:
70 title = 'SleepGraph' 70 title = 'SleepGraph'
71 version = '4.6' 71 version = '4.7'
72 ansi = False 72 ansi = False
73 verbose = False 73 verbose = False
74 addlogs = False 74 testlog = True
75 dmesglog = False
76 ftracelog = False
75 mindevlen = 0.0 77 mindevlen = 0.0
76 mincglen = 0.0 78 mincglen = 0.0
77 cgphase = '' 79 cgphase = ''
@@ -79,10 +81,11 @@ class SystemValues:
79 max_graph_depth = 0 81 max_graph_depth = 0
80 callloopmaxgap = 0.0001 82 callloopmaxgap = 0.0001
81 callloopmaxlen = 0.005 83 callloopmaxlen = 0.005
84 cpucount = 0
85 memtotal = 204800
82 srgap = 0 86 srgap = 0
83 cgexp = False 87 cgexp = False
84 outdir = '' 88 testdir = ''
85 testdir = '.'
86 tpath = '/sys/kernel/debug/tracing/' 89 tpath = '/sys/kernel/debug/tracing/'
87 fpdtpath = '/sys/firmware/acpi/tables/FPDT' 90 fpdtpath = '/sys/firmware/acpi/tables/FPDT'
88 epath = '/sys/kernel/debug/tracing/events/power/' 91 epath = '/sys/kernel/debug/tracing/events/power/'
@@ -95,14 +98,17 @@ class SystemValues:
95 testcommand = '' 98 testcommand = ''
96 mempath = '/dev/mem' 99 mempath = '/dev/mem'
97 powerfile = '/sys/power/state' 100 powerfile = '/sys/power/state'
101 mempowerfile = '/sys/power/mem_sleep'
98 suspendmode = 'mem' 102 suspendmode = 'mem'
103 memmode = ''
99 hostname = 'localhost' 104 hostname = 'localhost'
100 prefix = 'test' 105 prefix = 'test'
101 teststamp = '' 106 teststamp = ''
107 sysstamp = ''
102 dmesgstart = 0.0 108 dmesgstart = 0.0
103 dmesgfile = '' 109 dmesgfile = ''
104 ftracefile = '' 110 ftracefile = ''
105 htmlfile = '' 111 htmlfile = 'output.html'
106 embedded = False 112 embedded = False
107 rtcwake = True 113 rtcwake = True
108 rtcwaketime = 15 114 rtcwaketime = 15
@@ -127,9 +133,6 @@ class SystemValues:
127 devpropfmt = '# Device Properties: .*' 133 devpropfmt = '# Device Properties: .*'
128 tracertypefmt = '# tracer: (?P<t>.*)' 134 tracertypefmt = '# tracer: (?P<t>.*)'
129 firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$' 135 firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$'
130 stampfmt = '# suspend-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\
131 '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\
132 ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$'
133 tracefuncs = { 136 tracefuncs = {
134 'sys_sync': dict(), 137 'sys_sync': dict(),
135 'pm_prepare_console': dict(), 138 'pm_prepare_console': dict(),
@@ -218,7 +221,7 @@ class SystemValues:
218 # if this is a phoronix test run, set some default options 221 # if this is a phoronix test run, set some default options
219 if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ): 222 if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ):
220 self.embedded = True 223 self.embedded = True
221 self.addlogs = True 224 self.dmesglog = self.ftracelog = True
222 self.htmlfile = os.environ['LOG_FILE'] 225 self.htmlfile = os.environ['LOG_FILE']
223 self.archargs = 'args_'+platform.machine() 226 self.archargs = 'args_'+platform.machine()
224 self.hostname = platform.node() 227 self.hostname = platform.node()
@@ -233,6 +236,13 @@ class SystemValues:
233 self.rtcpath = rtc 236 self.rtcpath = rtc
234 if (hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()): 237 if (hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()):
235 self.ansi = True 238 self.ansi = True
239 self.testdir = datetime.now().strftime('suspend-%y%m%d-%H%M%S')
240 def rootCheck(self, fatal=True):
241 if(os.access(self.powerfile, os.W_OK)):
242 return True
243 if fatal:
244 doError('This command requires sysfs mount and root access')
245 return False
236 def rootUser(self, fatal=False): 246 def rootUser(self, fatal=False):
237 if 'USER' in os.environ and os.environ['USER'] == 'root': 247 if 'USER' in os.environ and os.environ['USER'] == 'root':
238 return True 248 return True
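The new rootCheck() gates on write access to /sys/power/state rather than on the USER environment variable, which also covers capability-restricted setups where USER is misleading. A quick way to see what it tests (illustrative, mirroring the method added above):

    import os

    def can_run_suspend_test(powerfile='/sys/power/state'):
        # True only if the sysfs file exists and the current user may write it,
        # i.e. what sysvals.rootCheck(False) would report
        return os.access(powerfile, os.W_OK)

    print(can_run_suspend_test())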
@@ -249,30 +259,60 @@ class SystemValues:
249 args['date'] = n.strftime('%y%m%d') 259 args['date'] = n.strftime('%y%m%d')
250 args['time'] = n.strftime('%H%M%S') 260 args['time'] = n.strftime('%H%M%S')
251 args['hostname'] = self.hostname 261 args['hostname'] = self.hostname
252 self.outdir = value.format(**args) 262 return value.format(**args)
253 def setOutputFile(self): 263 def setOutputFile(self):
254 if((self.htmlfile == '') and (self.dmesgfile != '')): 264 if self.dmesgfile != '':
255 m = re.match('(?P<name>.*)_dmesg\.txt$', self.dmesgfile) 265 m = re.match('(?P<name>.*)_dmesg\.txt$', self.dmesgfile)
256 if(m): 266 if(m):
257 self.htmlfile = m.group('name')+'.html' 267 self.htmlfile = m.group('name')+'.html'
258 if((self.htmlfile == '') and (self.ftracefile != '')): 268 if self.ftracefile != '':
259 m = re.match('(?P<name>.*)_ftrace\.txt$', self.ftracefile) 269 m = re.match('(?P<name>.*)_ftrace\.txt$', self.ftracefile)
260 if(m): 270 if(m):
261 self.htmlfile = m.group('name')+'.html' 271 self.htmlfile = m.group('name')+'.html'
-		if(self.htmlfile == ''):
-			self.htmlfile = 'output.html'
-	def initTestOutput(self, subdir, testpath=''):
+	def systemInfo(self, info):
+		p = c = m = b = ''
+		if 'baseboard-manufacturer' in info:
275 m = info['baseboard-manufacturer']
276 elif 'system-manufacturer' in info:
277 m = info['system-manufacturer']
278 if 'baseboard-product-name' in info:
279 p = info['baseboard-product-name']
280 elif 'system-product-name' in info:
281 p = info['system-product-name']
282 if 'processor-version' in info:
283 c = info['processor-version']
284 if 'bios-version' in info:
285 b = info['bios-version']
286 self.sysstamp = '# sysinfo | man:%s | plat:%s | cpu:%s | bios:%s | numcpu:%d | memsz:%d' % \
287 (m, p, c, b, self.cpucount, self.memtotal)
288 def printSystemInfo(self):
289 self.rootCheck(True)
290 out = dmidecode(self.mempath, True)
291 fmt = '%-24s: %s'
292 for name in sorted(out):
293 print fmt % (name, out[name])
294 print fmt % ('cpucount', ('%d' % self.cpucount))
295 print fmt % ('memtotal', ('%d kB' % self.memtotal))
296 def cpuInfo(self):
297 self.cpucount = 0
298 fp = open('/proc/cpuinfo', 'r')
299 for line in fp:
300 if re.match('^processor[ \t]*:[ \t]*[0-9]*', line):
301 self.cpucount += 1
302 fp.close()
303 fp = open('/proc/meminfo', 'r')
304 for line in fp:
305 m = re.match('^MemTotal:[ \t]*(?P<sz>[0-9]*) *kB', line)
306 if m:
307 self.memtotal = int(m.group('sz'))
308 break
309 fp.close()
310 def initTestOutput(self, name):
265 self.prefix = self.hostname 311 self.prefix = self.hostname
266 v = open('/proc/version', 'r').read().strip() 312 v = open('/proc/version', 'r').read().strip()
267 kver = string.split(v)[2] 313 kver = string.split(v)[2]
-		n = datetime.now()
-		testtime = n.strftime('suspend-%m%d%y-%H%M%S')
-		if not testpath:
-			testpath = n.strftime('suspend-%y%m%d-%H%M%S')
-		if(subdir != "."):
-			self.testdir = subdir+"/"+testpath
-		else:
-			self.testdir = testpath
+		fmt = name+'-%m%d%y-%H%M%S'
+		testtime = datetime.now().strftime(fmt)
276 self.teststamp = \ 316 self.teststamp = \
277 '# '+testtime+' '+self.prefix+' '+self.suspendmode+' '+kver 317 '# '+testtime+' '+self.prefix+' '+self.suspendmode+' '+kver
278 if(self.embedded): 318 if(self.embedded):
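systemInfo() folds the dmidecode fields plus the /proc/cpuinfo and /proc/meminfo counts into a single '# sysinfo |' comment line that later lands in the log headers. With made-up hardware values (illustrative only; the real values come from dmidecode() and /proc), the emitted stamp looks like:

    # illustrative values only
    m, p, c, b = 'ExampleCorp', 'Example Board', 'Example CPU @ 2.00GHz', 'EX01'
    cpucount, memtotal = 4, 8388608
    sysstamp = '# sysinfo | man:%s | plat:%s | cpu:%s | bios:%s | numcpu:%d | memsz:%d' % \
        (m, p, c, b, cpucount, memtotal)
    print(sysstamp)
    # -> # sysinfo | man:ExampleCorp | plat:Example Board | cpu:Example CPU @ 2.00GHz | bios:EX01 | numcpu:4 | memsz:8388608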
@@ -355,7 +395,7 @@ class SystemValues:
355 continue 395 continue
356 self.tracefuncs[i] = dict() 396 self.tracefuncs[i] = dict()
357 def getFtraceFilterFunctions(self, current): 397 def getFtraceFilterFunctions(self, current):
358 rootCheck(True) 398 self.rootCheck(True)
359 if not current: 399 if not current:
360 call('cat '+self.tpath+'available_filter_functions', shell=True) 400 call('cat '+self.tpath+'available_filter_functions', shell=True)
361 return 401 return
@@ -453,7 +493,7 @@ class SystemValues:
453 val += '\nr:%s_ret %s $retval\n' % (name, func) 493 val += '\nr:%s_ret %s $retval\n' % (name, func)
454 return val 494 return val
455 def addKprobes(self, output=False): 495 def addKprobes(self, output=False):
456 if len(sysvals.kprobes) < 1: 496 if len(self.kprobes) < 1:
457 return 497 return
458 if output: 498 if output:
459 print(' kprobe functions in this kernel:') 499 print(' kprobe functions in this kernel:')
@@ -525,7 +565,7 @@ class SystemValues:
525 fp.flush() 565 fp.flush()
526 fp.close() 566 fp.close()
527 except: 567 except:
528 pass 568 return False
529 return True 569 return True
530 def fgetVal(self, path): 570 def fgetVal(self, path):
531 file = self.tpath+path 571 file = self.tpath+path
@@ -566,9 +606,15 @@ class SystemValues:
566 self.cleanupFtrace() 606 self.cleanupFtrace()
567 # set the trace clock to global 607 # set the trace clock to global
568 self.fsetVal('global', 'trace_clock') 608 self.fsetVal('global', 'trace_clock')
-		# set trace buffer to a huge value
 		self.fsetVal('nop', 'current_tracer')
-		self.fsetVal('131073', 'buffer_size_kb')
+		# set trace buffer to a huge value
611 if self.usecallgraph or self.usedevsrc:
612 tgtsize = min(self.memtotal / 2, 2*1024*1024)
613 maxbuf = '%d' % (tgtsize / max(1, self.cpucount))
614 if self.cpucount < 1 or not self.fsetVal(maxbuf, 'buffer_size_kb'):
615 self.fsetVal('131072', 'buffer_size_kb')
616 else:
617 self.fsetVal('16384', 'buffer_size_kb')
572 # go no further if this is just a status check 618 # go no further if this is just a status check
573 if testing: 619 if testing:
574 return 620 return
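The new buffer sizing only goes big when callgraph or devsrc data is being collected: the total target is the smaller of half of RAM and 2 GB, split evenly across CPUs, with a 128 MB-per-CPU fallback; otherwise the buffer drops to 16 MB. Worked example with assumed numbers (8 GB of RAM, 8 CPUs):

    memtotal = 8388608   # kB, i.e. 8 GB (assumed)
    cpucount = 8         # assumed

    tgtsize = min(memtotal / 2, 2*1024*1024)      # = 2097152 kB total
    maxbuf = '%d' % (tgtsize / max(1, cpucount))  # = '262144' kB per CPU
    print(maxbuf)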
@@ -641,6 +687,15 @@ class SystemValues:
641 if not self.ansi: 687 if not self.ansi:
642 return str 688 return str
643 return '\x1B[%d;40m%s\x1B[m' % (color, str) 689 return '\x1B[%d;40m%s\x1B[m' % (color, str)
690 def writeDatafileHeader(self, filename, fwdata=[]):
691 fp = open(filename, 'w')
692 fp.write(self.teststamp+'\n')
693 fp.write(self.sysstamp+'\n')
694 if(self.suspendmode == 'mem' or self.suspendmode == 'command'):
695 for fw in fwdata:
696 if(fw):
697 fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1]))
698 fp.close()
644 699
645sysvals = SystemValues() 700sysvals = SystemValues()
646suspendmodename = { 701suspendmodename = {
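With writeDatafileHeader() moved into SystemValues, every dmesg/ftrace capture now starts with the test stamp plus the sysinfo stamp, and optionally the firmware timing comment. A sketch of the resulting header block, using placeholder values in the formats shown above:

    # what the top of a captured dmesg/ftrace file would contain (placeholder values)
    teststamp = '# suspend-090517-151908 myhost mem 4.13.0-rc7'
    sysstamp = '# sysinfo | man:ExampleCorp | plat:Example Board | cpu:Example CPU | bios:EX01 | numcpu:4 | memsz:8388608'
    fwdata = [(184980, 129680)]  # firmware suspend/resume values from the FPDT, if present

    with open('example_dmesg.txt', 'w') as fp:
        fp.write(teststamp + '\n')
        fp.write(sysstamp + '\n')
        for fw in fwdata:
            if fw:
                fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1]))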
@@ -1008,6 +1063,12 @@ class Data:
1008 else: 1063 else:
1009 self.trimTime(self.tSuspended, \ 1064 self.trimTime(self.tSuspended, \
1010 self.tResumed-self.tSuspended, False) 1065 self.tResumed-self.tSuspended, False)
1066 def getTimeValues(self):
1067 sktime = (self.dmesg['suspend_machine']['end'] - \
1068 self.tKernSus) * 1000
1069 rktime = (self.dmesg['resume_complete']['end'] - \
1070 self.dmesg['resume_machine']['start']) * 1000
1071 return (sktime, rktime)
1011 def setPhase(self, phase, ktime, isbegin): 1072 def setPhase(self, phase, ktime, isbegin):
1012 if(isbegin): 1073 if(isbegin):
1013 self.dmesg[phase]['start'] = ktime 1074 self.dmesg[phase]['start'] = ktime
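getTimeValues() centralises the suspend/resume time math that createHTML used to do inline: kernel suspend time runs from tKernSus to the end of suspend_machine, kernel resume time from the start of resume_machine to the end of resume_complete, both in ms. With assumed phase boundaries:

    # assumed timestamps (seconds), mirroring data.tKernSus and data.dmesg phase entries
    tKernSus = 100.000
    dmesg = {
        'suspend_machine': {'end': 100.350},
        'resume_machine': {'start': 102.350},
        'resume_complete': {'end': 103.125},
    }
    sktime = (dmesg['suspend_machine']['end'] - tKernSus) * 1000                          # ~350 ms
    rktime = (dmesg['resume_complete']['end'] - dmesg['resume_machine']['start']) * 1000  # ~775 ms
    print('%.3f ms suspend, %.3f ms resume' % (sktime, rktime))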
@@ -1517,7 +1578,7 @@ class FTraceCallGraph:
1517 prelinedep += 1 1578 prelinedep += 1
1518 last = 0 1579 last = 0
1519 lasttime = line.time 1580 lasttime = line.time
1520 virtualfname = 'execution_misalignment' 1581 virtualfname = 'missing_function_name'
1521 if len(self.list) > 0: 1582 if len(self.list) > 0:
1522 last = self.list[-1] 1583 last = self.list[-1]
1523 lasttime = last.time 1584 lasttime = last.time
@@ -1773,24 +1834,30 @@ class Timeline:
1773 html_device = '<div id="{0}" title="{1}" class="thread{7}" style="left:{2}%;top:{3}px;height:{4}px;width:{5}%;{8}">{6}</div>\n' 1834 html_device = '<div id="{0}" title="{1}" class="thread{7}" style="left:{2}%;top:{3}px;height:{4}px;width:{5}%;{8}">{6}</div>\n'
1774 html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}px;height:{3}px;background:{4}">{5}</div>\n' 1835 html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}px;height:{3}px;background:{4}">{5}</div>\n'
1775 html_phaselet = '<div id="{0}" class="phaselet" style="left:{1}%;width:{2}%;background:{3}"></div>\n' 1836 html_phaselet = '<div id="{0}" class="phaselet" style="left:{1}%;width:{2}%;background:{3}"></div>\n'
1837 html_legend = '<div id="p{3}" class="square" style="left:{0}%;background:{1}">&nbsp;{2}</div>\n'
1776 def __init__(self, rowheight, scaleheight): 1838 def __init__(self, rowheight, scaleheight):
1777 self.rowH = rowheight 1839 self.rowH = rowheight
1778 self.scaleH = scaleheight 1840 self.scaleH = scaleheight
1779 self.html = '' 1841 self.html = ''
1780 def createHeader(self, sv, suppress=''): 1842 def createHeader(self, sv):
1781 if(not sv.stamp['time']): 1843 if(not sv.stamp['time']):
1782 return 1844 return
1783 self.html += '<div class="version"><a href="https://01.org/suspendresume">%s v%s</a></div>' \ 1845 self.html += '<div class="version"><a href="https://01.org/suspendresume">%s v%s</a></div>' \
1784 % (sv.title, sv.version) 1846 % (sv.title, sv.version)
1785 if sv.logmsg and 'log' not in suppress: 1847 if sv.logmsg and sv.testlog:
1786 self.html += '<button id="showtest" class="logbtn">log</button>' 1848 self.html += '<button id="showtest" class="logbtn btnfmt">log</button>'
1787 if sv.addlogs and 'dmesg' not in suppress: 1849 if sv.dmesglog:
1788 self.html += '<button id="showdmesg" class="logbtn">dmesg</button>' 1850 self.html += '<button id="showdmesg" class="logbtn btnfmt">dmesg</button>'
1789 if sv.addlogs and sv.ftracefile and 'ftrace' not in suppress: 1851 if sv.ftracelog:
1790 self.html += '<button id="showftrace" class="logbtn">ftrace</button>' 1852 self.html += '<button id="showftrace" class="logbtn btnfmt">ftrace</button>'
1791 headline_stamp = '<div class="stamp">{0} {1} {2} {3}</div>\n' 1853 headline_stamp = '<div class="stamp">{0} {1} {2} {3}</div>\n'
1792 self.html += headline_stamp.format(sv.stamp['host'], sv.stamp['kernel'], 1854 self.html += headline_stamp.format(sv.stamp['host'], sv.stamp['kernel'],
1793 sv.stamp['mode'], sv.stamp['time']) 1855 sv.stamp['mode'], sv.stamp['time'])
1856 if 'man' in sv.stamp and 'plat' in sv.stamp and 'cpu' in sv.stamp:
1857 headline_sysinfo = '<div class="stamp sysinfo">{0} {1} <i>with</i> {2}</div>\n'
1858 self.html += headline_sysinfo.format(sv.stamp['man'],
1859 sv.stamp['plat'], sv.stamp['cpu'])
1860
1794 # Function: getDeviceRows 1861 # Function: getDeviceRows
1795 # Description: 1862 # Description:
1796 # determine how may rows the device funcs will take 1863 # determine how may rows the device funcs will take
@@ -1839,7 +1906,7 @@ class Timeline:
1839 # devlist: the list of devices/actions in a group of contiguous phases 1906 # devlist: the list of devices/actions in a group of contiguous phases
1840 # Output: 1907 # Output:
1841 # The total number of rows needed to display this phase of the timeline 1908 # The total number of rows needed to display this phase of the timeline
1842 def getPhaseRows(self, devlist, row=0): 1909 def getPhaseRows(self, devlist, row=0, sortby='length'):
1843 # clear all rows and set them to undefined 1910 # clear all rows and set them to undefined
1844 remaining = len(devlist) 1911 remaining = len(devlist)
1845 rowdata = dict() 1912 rowdata = dict()
@@ -1852,8 +1919,12 @@ class Timeline:
1852 if tp not in myphases: 1919 if tp not in myphases:
1853 myphases.append(tp) 1920 myphases.append(tp)
1854 dev['row'] = -1 1921 dev['row'] = -1
-			# sort by length 1st, then name 2nd
-			sortdict[item] = (float(dev['end']) - float(dev['start']), item.dev['name'])
+			if sortby == 'start':
+				# sort by start 1st, then length 2nd
+				sortdict[item] = (-1*float(dev['start']), float(dev['end']) - float(dev['start']))
+			else:
+				# sort by length 1st, then name 2nd
+				sortdict[item] = (float(dev['end']) - float(dev['start']), item.dev['name'])
1857 if 'src' in dev: 1928 if 'src' in dev:
1858 dev['devrows'] = self.getDeviceRows(dev['src']) 1929 dev['devrows'] = self.getDeviceRows(dev['src'])
1859 # sort the devlist by length so that large items graph on top 1930 # sort the devlist by length so that large items graph on top
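The tuples stored in sortdict are what decide the row-packing order once sorted in reverse: either (-start, length) for the new 'start' mode or (length, name) for the default. A compact illustration with toy devices (names and times are made up):

    devs = {  # toy data: name -> (start, end)
        'usb1': (0.10, 0.30),
        'mmc0': (0.05, 0.12),
        'eth0': (0.20, 0.80),
    }

    def key(name, sortby):
        start, end = devs[name]
        if sortby == 'start':
            return (-1 * start, end - start)  # earliest start first when reverse-sorted
        return (end - start, name)            # longest first when reverse-sorted

    for mode in ('length', 'start'):
        order = sorted(devs, key=lambda n: key(n, mode), reverse=True)
        print('%s: %s' % (mode, order))
    # length: ['eth0', 'usb1', 'mmc0']   start: ['mmc0', 'usb1', 'eth0']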
@@ -1995,8 +2066,13 @@ class Timeline:
1995# A list of values describing the properties of these test runs 2066# A list of values describing the properties of these test runs
1996class TestProps: 2067class TestProps:
1997 stamp = '' 2068 stamp = ''
2069 sysinfo = ''
1998 S0i3 = False 2070 S0i3 = False
1999 fwdata = [] 2071 fwdata = []
2072 stampfmt = '# [a-z]*-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\
2073 '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\
2074 ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$'
2075 sysinfofmt = '^# sysinfo .*'
2000 ftrace_line_fmt_fg = \ 2076 ftrace_line_fmt_fg = \
2001 '^ *(?P<time>[0-9\.]*) *\| *(?P<cpu>[0-9]*)\)'+\ 2077 '^ *(?P<time>[0-9\.]*) *\| *(?P<cpu>[0-9]*)\)'+\
2002 ' *(?P<proc>.*)-(?P<pid>[0-9]*) *\|'+\ 2078 ' *(?P<proc>.*)-(?P<pid>[0-9]*) *\|'+\
@@ -2019,6 +2095,36 @@ class TestProps:
2019 self.ftrace_line_fmt = self.ftrace_line_fmt_nop 2095 self.ftrace_line_fmt = self.ftrace_line_fmt_nop
2020 else: 2096 else:
2021 doError('Invalid tracer format: [%s]' % tracer) 2097 doError('Invalid tracer format: [%s]' % tracer)
2098 def parseStamp(self, data, sv):
2099 m = re.match(self.stampfmt, self.stamp)
2100 data.stamp = {'time': '', 'host': '', 'mode': ''}
2101 dt = datetime(int(m.group('y'))+2000, int(m.group('m')),
2102 int(m.group('d')), int(m.group('H')), int(m.group('M')),
2103 int(m.group('S')))
2104 data.stamp['time'] = dt.strftime('%B %d %Y, %I:%M:%S %p')
2105 data.stamp['host'] = m.group('host')
2106 data.stamp['mode'] = m.group('mode')
2107 data.stamp['kernel'] = m.group('kernel')
2108 if re.match(self.sysinfofmt, self.sysinfo):
2109 for f in self.sysinfo.split('|'):
2110 if '#' in f:
2111 continue
2112 tmp = f.strip().split(':', 1)
2113 key = tmp[0]
2114 val = tmp[1]
2115 data.stamp[key] = val
2116 sv.hostname = data.stamp['host']
2117 sv.suspendmode = data.stamp['mode']
2118 if sv.suspendmode == 'command' and sv.ftracefile != '':
2119 modes = ['on', 'freeze', 'standby', 'mem']
2120 out = Popen(['grep', 'suspend_enter', sv.ftracefile],
2121 stderr=PIPE, stdout=PIPE).stdout.read()
2122 m = re.match('.* suspend_enter\[(?P<mode>.*)\]', out)
2123 if m and m.group('mode') in ['1', '2', '3']:
2124 sv.suspendmode = modes[int(m.group('mode'))]
2125 data.stamp['mode'] = sv.suspendmode
2126 if not sv.stamp:
2127 sv.stamp = data.stamp
2022 2128
2023# Class: TestRun 2129# Class: TestRun
2024# Description: 2130# Description:
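The relocated stamp regex now accepts any lowercase tool prefix ('# <tool>-MMDDYY-HHMMSS host mode kernel'), and the optional '# sysinfo' line is split on '|' into extra key:value fields on the stamp. A sketch of the two header lines this parses and the fields they yield (the header values are placeholders):

    import re

    stampfmt = '# [a-z]*-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\
        '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\
        ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$'

    # placeholder header lines of the kind the parser now expects
    stamp = '# suspend-090517-151908 myhost mem 4.13.0-rc7'
    sysinfo = '# sysinfo | man:ExampleCorp | plat:Example Board | numcpu:4 | memsz:8388608'

    m = re.match(stampfmt, stamp)
    fields = {'host': m.group('host'), 'mode': m.group('mode'), 'kernel': m.group('kernel')}
    for f in sysinfo.split('|'):
        if '#' in f:
            continue
        key, val = f.strip().split(':', 1)
        fields[key] = val
    print(fields)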
@@ -2090,35 +2196,6 @@ def vprint(msg):
2090 if(sysvals.verbose): 2196 if(sysvals.verbose):
2091 print(msg) 2197 print(msg)
2092 2198
2093# Function: parseStamp
2094# Description:
2095# Pull in the stamp comment line from the data file(s),
2096# create the stamp, and add it to the global sysvals object
2097# Arguments:
2098# m: the valid re.match output for the stamp line
2099def parseStamp(line, data):
2100 m = re.match(sysvals.stampfmt, line)
2101 data.stamp = {'time': '', 'host': '', 'mode': ''}
2102 dt = datetime(int(m.group('y'))+2000, int(m.group('m')),
2103 int(m.group('d')), int(m.group('H')), int(m.group('M')),
2104 int(m.group('S')))
2105 data.stamp['time'] = dt.strftime('%B %d %Y, %I:%M:%S %p')
2106 data.stamp['host'] = m.group('host')
2107 data.stamp['mode'] = m.group('mode')
2108 data.stamp['kernel'] = m.group('kernel')
2109 sysvals.hostname = data.stamp['host']
2110 sysvals.suspendmode = data.stamp['mode']
2111 if sysvals.suspendmode == 'command' and sysvals.ftracefile != '':
2112 modes = ['on', 'freeze', 'standby', 'mem']
2113 out = Popen(['grep', 'suspend_enter', sysvals.ftracefile],
2114 stderr=PIPE, stdout=PIPE).stdout.read()
2115 m = re.match('.* suspend_enter\[(?P<mode>.*)\]', out)
2116 if m and m.group('mode') in ['1', '2', '3']:
2117 sysvals.suspendmode = modes[int(m.group('mode'))]
2118 data.stamp['mode'] = sysvals.suspendmode
2119 if not sysvals.stamp:
2120 sysvals.stamp = data.stamp
2121
2122# Function: doesTraceLogHaveTraceEvents 2199# Function: doesTraceLogHaveTraceEvents
2123# Description: 2200# Description:
2124# Quickly determine if the ftrace log has some or all of the trace events 2201# Quickly determine if the ftrace log has some or all of the trace events
@@ -2136,11 +2213,6 @@ def doesTraceLogHaveTraceEvents():
2136 sysvals.usekprobes = True 2213 sysvals.usekprobes = True
2137 out = Popen(['head', '-1', sysvals.ftracefile], 2214 out = Popen(['head', '-1', sysvals.ftracefile],
2138 stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '') 2215 stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '')
2139 m = re.match(sysvals.stampfmt, out)
2140 if m and m.group('mode') == 'command':
2141 sysvals.usetraceeventsonly = True
2142 sysvals.usetraceevents = True
2143 return
2144 # figure out what level of trace events are supported 2216 # figure out what level of trace events are supported
2145 sysvals.usetraceeventsonly = True 2217 sysvals.usetraceeventsonly = True
2146 sysvals.usetraceevents = False 2218 sysvals.usetraceevents = False
@@ -2182,11 +2254,13 @@ def appendIncompleteTraceLog(testruns):
2182 for line in tf: 2254 for line in tf:
2183 # remove any latent carriage returns 2255 # remove any latent carriage returns
2184 line = line.replace('\r\n', '') 2256 line = line.replace('\r\n', '')
2185 # grab the time stamp 2257 # grab the stamp and sysinfo
2186 m = re.match(sysvals.stampfmt, line) 2258 if re.match(tp.stampfmt, line):
2187 if(m):
2188 tp.stamp = line 2259 tp.stamp = line
2189 continue 2260 continue
2261 elif re.match(tp.sysinfofmt, line):
2262 tp.sysinfo = line
2263 continue
2190 # determine the trace data type (required for further parsing) 2264 # determine the trace data type (required for further parsing)
2191 m = re.match(sysvals.tracertypefmt, line) 2265 m = re.match(sysvals.tracertypefmt, line)
2192 if(m): 2266 if(m):
@@ -2219,7 +2293,7 @@ def appendIncompleteTraceLog(testruns):
2219 # look for the suspend start marker 2293 # look for the suspend start marker
2220 if(t.startMarker()): 2294 if(t.startMarker()):
2221 data = testrun[testidx].data 2295 data = testrun[testidx].data
2222 parseStamp(tp.stamp, data) 2296 tp.parseStamp(data, sysvals)
2223 data.setStart(t.time) 2297 data.setStart(t.time)
2224 continue 2298 continue
2225 if(not data): 2299 if(not data):
@@ -2389,11 +2463,13 @@ def parseTraceLog():
2389 for line in tf: 2463 for line in tf:
2390 # remove any latent carriage returns 2464 # remove any latent carriage returns
2391 line = line.replace('\r\n', '') 2465 line = line.replace('\r\n', '')
2392 # stamp line: each stamp means a new test run 2466 # stamp and sysinfo lines
2393 m = re.match(sysvals.stampfmt, line) 2467 if re.match(tp.stampfmt, line):
2394 if(m):
2395 tp.stamp = line 2468 tp.stamp = line
2396 continue 2469 continue
2470 elif re.match(tp.sysinfofmt, line):
2471 tp.sysinfo = line
2472 continue
2397 # firmware line: pull out any firmware data 2473 # firmware line: pull out any firmware data
2398 m = re.match(sysvals.firmwarefmt, line) 2474 m = re.match(sysvals.firmwarefmt, line)
2399 if(m): 2475 if(m):
@@ -2439,7 +2515,7 @@ def parseTraceLog():
2439 testdata.append(data) 2515 testdata.append(data)
2440 testrun = TestRun(data) 2516 testrun = TestRun(data)
2441 testruns.append(testrun) 2517 testruns.append(testrun)
2442 parseStamp(tp.stamp, data) 2518 tp.parseStamp(data, sysvals)
2443 data.setStart(t.time) 2519 data.setStart(t.time)
2444 data.tKernSus = t.time 2520 data.tKernSus = t.time
2445 continue 2521 continue
@@ -2820,10 +2896,13 @@ def loadKernelLog(justtext=False):
2820 idx = line.find('[') 2896 idx = line.find('[')
2821 if idx > 1: 2897 if idx > 1:
2822 line = line[idx:] 2898 line = line[idx:]
2823 m = re.match(sysvals.stampfmt, line) 2899 # grab the stamp and sysinfo
2824 if(m): 2900 if re.match(tp.stampfmt, line):
2825 tp.stamp = line 2901 tp.stamp = line
2826 continue 2902 continue
2903 elif re.match(tp.sysinfofmt, line):
2904 tp.sysinfo = line
2905 continue
2827 m = re.match(sysvals.firmwarefmt, line) 2906 m = re.match(sysvals.firmwarefmt, line)
2828 if(m): 2907 if(m):
2829 tp.fwdata.append((int(m.group('s')), int(m.group('r')))) 2908 tp.fwdata.append((int(m.group('s')), int(m.group('r'))))
@@ -2839,7 +2918,7 @@ def loadKernelLog(justtext=False):
2839 if(data): 2918 if(data):
2840 testruns.append(data) 2919 testruns.append(data)
2841 data = Data(len(testruns)) 2920 data = Data(len(testruns))
2842 parseStamp(tp.stamp, data) 2921 tp.parseStamp(data, sysvals)
2843 if len(tp.fwdata) > data.testnumber: 2922 if len(tp.fwdata) > data.testnumber:
2844 data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber] 2923 data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber]
2845 if(data.fwSuspend > 0 or data.fwResume > 0): 2924 if(data.fwSuspend > 0 or data.fwResume > 0):
@@ -3170,6 +3249,8 @@ def addCallgraphs(sv, hf, data):
3170 continue 3249 continue
3171 list = data.dmesg[p]['list'] 3250 list = data.dmesg[p]['list']
3172 for devname in data.sortedDevices(p): 3251 for devname in data.sortedDevices(p):
3252 if len(sv.devicefilter) > 0 and devname not in sv.devicefilter:
3253 continue
3173 dev = list[devname] 3254 dev = list[devname]
3174 color = 'white' 3255 color = 'white'
3175 if 'color' in data.dmesg[p]: 3256 if 'color' in data.dmesg[p]:
@@ -3309,7 +3390,6 @@ def createHTML(testruns):
3309 html_error = '<div id="{1}" title="kernel error/warning" class="err" style="right:{0}%">ERROR&rarr;</div>\n' 3390 html_error = '<div id="{1}" title="kernel error/warning" class="err" style="right:{0}%">ERROR&rarr;</div>\n'
3310 html_traceevent = '<div title="{0}" class="traceevent{6}" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;{7}">{5}</div>\n' 3391 html_traceevent = '<div title="{0}" class="traceevent{6}" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;{7}">{5}</div>\n'
3311 html_cpuexec = '<div class="jiffie" style="left:{0}%;top:{1}px;height:{2}px;width:{3}%;background:{4};"></div>\n' 3392 html_cpuexec = '<div class="jiffie" style="left:{0}%;top:{1}px;height:{2}px;width:{3}%;background:{4};"></div>\n'
3312 html_legend = '<div id="p{3}" class="square" style="left:{0}%;background:{1}">&nbsp;{2}</div>\n'
3313 html_timetotal = '<table class="time1">\n<tr>'\ 3393 html_timetotal = '<table class="time1">\n<tr>'\
3314 '<td class="green" title="{3}">{2} Suspend Time: <b>{0} ms</b></td>'\ 3394 '<td class="green" title="{3}">{2} Suspend Time: <b>{0} ms</b></td>'\
3315 '<td class="yellow" title="{4}">{2} Resume Time: <b>{1} ms</b></td>'\ 3395 '<td class="yellow" title="{4}">{2} Resume Time: <b>{1} ms</b></td>'\
@@ -3346,10 +3426,7 @@ def createHTML(testruns):
3346 # Generate the header for this timeline 3426 # Generate the header for this timeline
3347 for data in testruns: 3427 for data in testruns:
3348 tTotal = data.end - data.start 3428 tTotal = data.end - data.start
3349 sktime = (data.dmesg['suspend_machine']['end'] - \ 3429 sktime, rktime = data.getTimeValues()
3350 data.tKernSus) * 1000
3351 rktime = (data.dmesg['resume_complete']['end'] - \
3352 data.dmesg['resume_machine']['start']) * 1000
3353 if(tTotal == 0): 3430 if(tTotal == 0):
3354 print('ERROR: No timeline data') 3431 print('ERROR: No timeline data')
3355 sys.exit() 3432 sys.exit()
@@ -3581,7 +3658,7 @@ def createHTML(testruns):
3581 id += tmp[1][0] 3658 id += tmp[1][0]
3582 order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin) 3659 order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin)
3583 name = string.replace(phase, '_', ' &nbsp;') 3660 name = string.replace(phase, '_', ' &nbsp;')
3584 devtl.html += html_legend.format(order, \ 3661 devtl.html += devtl.html_legend.format(order, \
3585 data.dmesg[phase]['color'], name, id) 3662 data.dmesg[phase]['color'], name, id)
3586 devtl.html += '</div>\n' 3663 devtl.html += '</div>\n'
3587 3664
@@ -3628,10 +3705,10 @@ def createHTML(testruns):
3628 addCallgraphs(sysvals, hf, data) 3705 addCallgraphs(sysvals, hf, data)
3629 3706
3630 # add the test log as a hidden div 3707 # add the test log as a hidden div
3631 if sysvals.logmsg: 3708 if sysvals.testlog and sysvals.logmsg:
3632 hf.write('<div id="testlog" style="display:none;">\n'+sysvals.logmsg+'</div>\n') 3709 hf.write('<div id="testlog" style="display:none;">\n'+sysvals.logmsg+'</div>\n')
3633 # add the dmesg log as a hidden div 3710 # add the dmesg log as a hidden div
3634 if sysvals.addlogs and sysvals.dmesgfile: 3711 if sysvals.dmesglog and sysvals.dmesgfile:
3635 hf.write('<div id="dmesglog" style="display:none;">\n') 3712 hf.write('<div id="dmesglog" style="display:none;">\n')
3636 lf = open(sysvals.dmesgfile, 'r') 3713 lf = open(sysvals.dmesgfile, 'r')
3637 for line in lf: 3714 for line in lf:
@@ -3640,7 +3717,7 @@ def createHTML(testruns):
3640 lf.close() 3717 lf.close()
3641 hf.write('</div>\n') 3718 hf.write('</div>\n')
3642 # add the ftrace log as a hidden div 3719 # add the ftrace log as a hidden div
3643 if sysvals.addlogs and sysvals.ftracefile: 3720 if sysvals.ftracelog and sysvals.ftracefile:
3644 hf.write('<div id="ftracelog" style="display:none;">\n') 3721 hf.write('<div id="ftracelog" style="display:none;">\n')
3645 lf = open(sysvals.ftracefile, 'r') 3722 lf = open(sysvals.ftracefile, 'r')
3646 for line in lf: 3723 for line in lf:
@@ -3701,6 +3778,7 @@ def addCSS(hf, sv, testcount=1, kerror=False, extra=''):
3701 <style type=\'text/css\'>\n\ 3778 <style type=\'text/css\'>\n\
3702 body {overflow-y:scroll;}\n\ 3779 body {overflow-y:scroll;}\n\
3703 .stamp {width:100%;text-align:center;background:gray;line-height:30px;color:white;font:25px Arial;}\n\ 3780 .stamp {width:100%;text-align:center;background:gray;line-height:30px;color:white;font:25px Arial;}\n\
3781 .stamp.sysinfo {font:10px Arial;}\n\
3704 .callgraph {margin-top:30px;box-shadow:5px 5px 20px black;}\n\ 3782 .callgraph {margin-top:30px;box-shadow:5px 5px 20px black;}\n\
3705 .callgraph article * {padding-left:28px;}\n\ 3783 .callgraph article * {padding-left:28px;}\n\
3706 h1 {color:black;font:bold 30px Times;}\n\ 3784 h1 {color:black;font:bold 30px Times;}\n\
@@ -3746,7 +3824,7 @@ def addCSS(hf, sv, testcount=1, kerror=False, extra=''):
3746 .legend {position:relative; width:100%; height:40px; text-align:center;margin-bottom:20px}\n\ 3824 .legend {position:relative; width:100%; height:40px; text-align:center;margin-bottom:20px}\n\
3747 .legend .square {position:absolute;cursor:pointer;top:10px; width:0px;height:20px;border:1px solid;padding-left:20px;}\n\ 3825 .legend .square {position:absolute;cursor:pointer;top:10px; width:0px;height:20px;border:1px solid;padding-left:20px;}\n\
3748 button {height:40px;width:200px;margin-bottom:20px;margin-top:20px;font-size:24px;}\n\ 3826 button {height:40px;width:200px;margin-bottom:20px;margin-top:20px;font-size:24px;}\n\
3749 .logbtn {position:relative;float:right;height:25px;width:50px;margin-top:3px;margin-bottom:0;font-size:10px;text-align:center;}\n\ 3827 .btnfmt {position:relative;float:right;height:25px;width:auto;margin-top:3px;margin-bottom:0;font-size:10px;text-align:center;}\n\
3750 .devlist {position:'+devlistpos+';width:190px;}\n\ 3828 .devlist {position:'+devlistpos+';width:190px;}\n\
3751 a:link {color:white;text-decoration:none;}\n\ 3829 a:link {color:white;text-decoration:none;}\n\
3752 a:visited {color:white;}\n\ 3830 a:visited {color:white;}\n\
@@ -4084,8 +4162,6 @@ def addScriptCode(hf, testruns):
4084 ' win.document.write(title+"<pre>"+log.innerHTML+"</pre>");\n'\ 4162 ' win.document.write(title+"<pre>"+log.innerHTML+"</pre>");\n'\
4085 ' win.document.close();\n'\ 4163 ' win.document.close();\n'\
4086 ' }\n'\ 4164 ' }\n'\
4087 ' function onClickPhase(e) {\n'\
4088 ' }\n'\
4089 ' function onMouseDown(e) {\n'\ 4165 ' function onMouseDown(e) {\n'\
4090 ' dragval[0] = e.clientX;\n'\ 4166 ' dragval[0] = e.clientX;\n'\
4091 ' dragval[1] = document.getElementById("dmesgzoombox").scrollLeft;\n'\ 4167 ' dragval[1] = document.getElementById("dmesgzoombox").scrollLeft;\n'\
@@ -4120,9 +4196,6 @@ def addScriptCode(hf, testruns):
4120 ' document.getElementById("zoomin").onclick = zoomTimeline;\n'\ 4196 ' document.getElementById("zoomin").onclick = zoomTimeline;\n'\
4121 ' document.getElementById("zoomout").onclick = zoomTimeline;\n'\ 4197 ' document.getElementById("zoomout").onclick = zoomTimeline;\n'\
4122 ' document.getElementById("zoomdef").onclick = zoomTimeline;\n'\ 4198 ' document.getElementById("zoomdef").onclick = zoomTimeline;\n'\
4123 ' var list = document.getElementsByClassName("square");\n'\
4124 ' for (var i = 0; i < list.length; i++)\n'\
4125 ' list[i].onclick = onClickPhase;\n'\
4126 ' var list = document.getElementsByClassName("err");\n'\ 4199 ' var list = document.getElementsByClassName("err");\n'\
4127 ' for (var i = 0; i < list.length; i++)\n'\ 4200 ' for (var i = 0; i < list.length; i++)\n'\
4128 ' list[i].onclick = errWindow;\n'\ 4201 ' list[i].onclick = errWindow;\n'\
@@ -4193,8 +4266,14 @@ def executeSuspend():
4193 if sysvals.testcommand != '': 4266 if sysvals.testcommand != '':
4194 call(sysvals.testcommand+' 2>&1', shell=True); 4267 call(sysvals.testcommand+' 2>&1', shell=True);
4195 else: 4268 else:
4269 mode = sysvals.suspendmode
4270 if sysvals.memmode and os.path.exists(sysvals.mempowerfile):
4271 mode = 'mem'
4272 pf = open(sysvals.mempowerfile, 'w')
4273 pf.write(sysvals.memmode)
4274 pf.close()
4196 pf = open(sysvals.powerfile, 'w') 4275 pf = open(sysvals.powerfile, 'w')
4197 pf.write(sysvals.suspendmode) 4276 pf.write(mode)
4198 # execution will pause here 4277 # execution will pause here
4199 try: 4278 try:
4200 pf.close() 4279 pf.close()
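The hunk above is the heart of the new mem-variant support: when sysvals.memmode is set, the chosen variant is written to /sys/power/mem_sleep first, and plain 'mem' is then written to /sys/power/state to start the suspend. A minimal standalone sketch of that sysfs sequence (assumes root and a kernel exposing both files; run_suspend() is an illustrative name, not part of the patch):

    import os

    POWER_FILE = '/sys/power/state'          # sysvals.powerfile default
    MEM_POWER_FILE = '/sys/power/mem_sleep'  # sysvals.mempowerfile default

    def run_suspend(suspendmode, memmode=''):
        # any mem-* variant is driven through the plain 'mem' state,
        # with the variant selected via mem_sleep beforehand
        mode = suspendmode
        if memmode and os.path.exists(MEM_POWER_FILE):
            mode = 'mem'
            with open(MEM_POWER_FILE, 'w') as pf:
                pf.write(memmode)            # e.g. 'deep', 's2idle', 'shallow'
        # the write is flushed when the file closes; the system suspends
        # at that point and execution resumes here after wakeup
        with open(POWER_FILE, 'w') as pf:
            pf.write(mode)

    # e.g. run_suspend('mem', 's2idle')      # suspend-to-idle via mem-s2idle

The tool itself wraps this write with ftrace markers and an optional rtcwake timer, which the sketch leaves out.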
@@ -4219,24 +4298,15 @@ def executeSuspend():
4219 pm.stop() 4298 pm.stop()
4220 sysvals.fsetVal('0', 'tracing_on') 4299 sysvals.fsetVal('0', 'tracing_on')
4221 print('CAPTURING TRACE') 4300 print('CAPTURING TRACE')
4222 writeDatafileHeader(sysvals.ftracefile, fwdata) 4301 sysvals.writeDatafileHeader(sysvals.ftracefile, fwdata)
4223 call('cat '+tp+'trace >> '+sysvals.ftracefile, shell=True) 4302 call('cat '+tp+'trace >> '+sysvals.ftracefile, shell=True)
4224 sysvals.fsetVal('', 'trace') 4303 sysvals.fsetVal('', 'trace')
4225 devProps() 4304 devProps()
4226 # grab a copy of the dmesg output 4305 # grab a copy of the dmesg output
4227 print('CAPTURING DMESG') 4306 print('CAPTURING DMESG')
4228 writeDatafileHeader(sysvals.dmesgfile, fwdata) 4307 sysvals.writeDatafileHeader(sysvals.dmesgfile, fwdata)
4229 sysvals.getdmesg() 4308 sysvals.getdmesg()
4230 4309
4231def writeDatafileHeader(filename, fwdata):
4232 fp = open(filename, 'a')
4233 fp.write(sysvals.teststamp+'\n')
4234 if(sysvals.suspendmode == 'mem' or sysvals.suspendmode == 'command'):
4235 for fw in fwdata:
4236 if(fw):
4237 fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1]))
4238 fp.close()
4239
4240# Function: setUSBDevicesAuto 4310# Function: setUSBDevicesAuto
4241# Description: 4311# Description:
4242# Set the autosuspend control parameter of all USB devices to auto 4312# Set the autosuspend control parameter of all USB devices to auto
@@ -4244,7 +4314,7 @@ def writeDatafileHeader(filename, fwdata):
4244# to always-on since the kernel can't determine if the device can 4314# to always-on since the kernel can't determine if the device can
4245# properly autosuspend 4315# properly autosuspend
4246def setUSBDevicesAuto(): 4316def setUSBDevicesAuto():
4247 rootCheck(True) 4317 sysvals.rootCheck(True)
4248 for dirname, dirnames, filenames in os.walk('/sys/devices'): 4318 for dirname, dirnames, filenames in os.walk('/sys/devices'):
4249 if(re.match('.*/usb[0-9]*.*', dirname) and 4319 if(re.match('.*/usb[0-9]*.*', dirname) and
4250 'idVendor' in filenames and 'idProduct' in filenames): 4320 'idVendor' in filenames and 'idProduct' in filenames):
@@ -4467,13 +4537,146 @@ def devProps(data=0):
4467# Output: 4537# Output:
4468# A string list of the available modes 4538# A string list of the available modes
4469def getModes(): 4539def getModes():
4470 modes = '' 4540 modes = []
4471 if(os.path.exists(sysvals.powerfile)): 4541 if(os.path.exists(sysvals.powerfile)):
4472 fp = open(sysvals.powerfile, 'r') 4542 fp = open(sysvals.powerfile, 'r')
4473 modes = string.split(fp.read()) 4543 modes = string.split(fp.read())
4474 fp.close() 4544 fp.close()
4545 if(os.path.exists(sysvals.mempowerfile)):
4546 deep = False
4547 fp = open(sysvals.mempowerfile, 'r')
4548 for m in string.split(fp.read()):
4549 memmode = m.strip('[]')
4550 if memmode == 'deep':
4551 deep = True
4552 else:
4553 modes.append('mem-%s' % memmode)
4554 fp.close()
4555 if 'mem' in modes and not deep:
4556 modes.remove('mem')
4475 return modes 4557 return modes
4476 4558
4559# Function: dmidecode
4560# Description:
4561# Read the bios tables and pull out system info
4562# Arguments:
4563# mempath: /dev/mem or custom mem path
4564# fatal: True to exit on error, False to return empty dict
4565# Output:
4566# A dict object with all available key/values
4567def dmidecode(mempath, fatal=False):
4568 out = dict()
4569
4570 # the list of values to retrieve, with hardcoded (type, idx)
4571 info = {
4572 'bios-vendor': (0, 4),
4573 'bios-version': (0, 5),
4574 'bios-release-date': (0, 8),
4575 'system-manufacturer': (1, 4),
4576 'system-product-name': (1, 5),
4577 'system-version': (1, 6),
4578 'system-serial-number': (1, 7),
4579 'baseboard-manufacturer': (2, 4),
4580 'baseboard-product-name': (2, 5),
4581 'baseboard-version': (2, 6),
4582 'baseboard-serial-number': (2, 7),
4583 'chassis-manufacturer': (3, 4),
4584 'chassis-type': (3, 5),
4585 'chassis-version': (3, 6),
4586 'chassis-serial-number': (3, 7),
4587 'processor-manufacturer': (4, 7),
4588 'processor-version': (4, 16),
4589 }
4590 if(not os.path.exists(mempath)):
4591 if(fatal):
4592 doError('file does not exist: %s' % mempath)
4593 return out
4594 if(not os.access(mempath, os.R_OK)):
4595 if(fatal):
4596 doError('file is not readable: %s' % mempath)
4597 return out
4598
4599 # by default use legacy scan, but try to use EFI first
4600 memaddr = 0xf0000
4601 memsize = 0x10000
4602 for ep in ['/sys/firmware/efi/systab', '/proc/efi/systab']:
4603 if not os.path.exists(ep) or not os.access(ep, os.R_OK):
4604 continue
4605 fp = open(ep, 'r')
4606 buf = fp.read()
4607 fp.close()
4608 i = buf.find('SMBIOS=')
4609 if i >= 0:
4610 try:
4611 memaddr = int(buf[i+7:], 16)
4612 memsize = 0x20
4613 except:
4614 continue
4615
4616 # read in the memory for scanning
4617 fp = open(mempath, 'rb')
4618 try:
4619 fp.seek(memaddr)
4620 buf = fp.read(memsize)
4621 except:
4622 if(fatal):
4623 doError('DMI table is unreachable, sorry')
4624 else:
4625 return out
4626 fp.close()
4627
4628 # search for either an SM table or DMI table
4629 i = base = length = num = 0
4630 while(i < memsize):
4631 if buf[i:i+4] == '_SM_' and i < memsize - 16:
4632 length = struct.unpack('H', buf[i+22:i+24])[0]
4633 base, num = struct.unpack('IH', buf[i+24:i+30])
4634 break
4635 elif buf[i:i+5] == '_DMI_':
4636 length = struct.unpack('H', buf[i+6:i+8])[0]
4637 base, num = struct.unpack('IH', buf[i+8:i+14])
4638 break
4639 i += 16
4640 if base == 0 and length == 0 and num == 0:
4641 if(fatal):
4642 doError('Neither SMBIOS nor DMI were found')
4643 else:
4644 return out
4645
4646 # read in the SM or DMI table
4647 fp = open(mempath, 'rb')
4648 try:
4649 fp.seek(base)
4650 buf = fp.read(length)
4651 except:
4652 if(fatal):
4653 doError('DMI table is unreachable, sorry')
4654 else:
4655 return out
4656 fp.close()
4657
4658 # scan the table for the values we want
4659 count = i = 0
4660 while(count < num and i <= len(buf) - 4):
4661 type, size, handle = struct.unpack('BBH', buf[i:i+4])
4662 n = i + size
4663 while n < len(buf) - 1:
4664 if 0 == struct.unpack('H', buf[n:n+2])[0]:
4665 break
4666 n += 1
4667 data = buf[i+size:n+2].split('\0')
4668 for name in info:
4669 itype, idxadr = info[name]
4670 if itype == type:
4671 idx = struct.unpack('B', buf[i+idxadr])[0]
4672 if idx > 0 and idx < len(data) - 1:
4673 s = data[idx-1].strip()
4674 if s and s.lower() != 'to be filled by o.e.m.':
4675 out[name] = data[idx-1]
4676 i = n + 2
4677 count += 1
4678 return out
4679
4477# Function: getFPDT 4680# Function: getFPDT
4478# Description: 4681# Description:
4479# Read the acpi bios tables and pull out FPDT, the firmware data 4682# Read the acpi bios tables and pull out FPDT, the firmware data
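With the reworked getModes() returning a list (including synthetic mem-<variant> entries read from /sys/power/mem_sleep) and the new dmidecode() helper parsing the SMBIOS/DMI tables straight out of /dev/mem, a caller can enumerate modes and basic platform identity without the external dmidecode(8) binary. A rough usage sketch, assuming the script is loaded as a library so both functions are in scope and the process runs as root (the sample output in the comments is illustrative):

    # enumerate suspend modes, including mem-* variants from mem_sleep
    modes = getModes()
    print('supported modes: %s' % ', '.join(modes))
    # e.g. freeze, standby, mem, mem-shallow, mem-s2idle, disk

    # pull BIOS/system/baseboard/chassis/processor strings from SMBIOS
    info = dmidecode('/dev/mem', fatal=False)   # returns {} on any failure
    for key in sorted(info):
        print('%-24s : %s' % (key, info[key]))
    # typical keys: bios-vendor, bios-version, system-manufacturer,
    #               system-product-name, processor-version, ...

Internally dmidecode() first tries the EFI systab for an SMBIOS= entry point, falls back to scanning the legacy 0xf0000 region for an _SM_ or _DMI_ anchor, and then walks the table decoding each structure's trailing string block.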
@@ -4487,7 +4690,7 @@ def getFPDT(output):
4487 prectype[0] = 'Basic S3 Resume Performance Record' 4690 prectype[0] = 'Basic S3 Resume Performance Record'
4488 prectype[1] = 'Basic S3 Suspend Performance Record' 4691 prectype[1] = 'Basic S3 Suspend Performance Record'
4489 4692
4490 rootCheck(True) 4693 sysvals.rootCheck(True)
4491 if(not os.path.exists(sysvals.fpdtpath)): 4694 if(not os.path.exists(sysvals.fpdtpath)):
4492 if(output): 4695 if(output):
4493 doError('file does not exist: %s' % sysvals.fpdtpath) 4696 doError('file does not exist: %s' % sysvals.fpdtpath)
@@ -4617,7 +4820,7 @@ def statusCheck(probecheck=False):
4617 4820
4618 # check we have root access 4821 # check we have root access
4619 res = sysvals.colorText('NO (No features of this tool will work!)') 4822 res = sysvals.colorText('NO (No features of this tool will work!)')
4620 if(rootCheck(False)): 4823 if(sysvals.rootCheck(False)):
4621 res = 'YES' 4824 res = 'YES'
4622 print(' have root access: %s' % res) 4825 print(' have root access: %s' % res)
4623 if(res != 'YES'): 4826 if(res != 'YES'):
@@ -4716,16 +4919,6 @@ def doError(msg, help=False):
4716 print('ERROR: %s\n') % msg 4919 print('ERROR: %s\n') % msg
4717 sys.exit() 4920 sys.exit()
4718 4921
4719# Function: rootCheck
4720# Description:
4721# quick check to see if we have root access
4722def rootCheck(fatal):
4723 if(os.access(sysvals.powerfile, os.W_OK)):
4724 return True
4725 if fatal:
4726 doError('This command requires sysfs mount and root access')
4727 return False
4728
4729# Function: getArgInt 4922# Function: getArgInt
4730# Description: 4923# Description:
4731# pull out an integer argument from the command line with checks 4924# pull out an integer argument from the command line with checks
@@ -4779,6 +4972,7 @@ def processData():
4779 if(sysvals.ftracefile and (sysvals.usecallgraph or sysvals.usetraceevents)): 4972 if(sysvals.ftracefile and (sysvals.usecallgraph or sysvals.usetraceevents)):
4780 appendIncompleteTraceLog(testruns) 4973 appendIncompleteTraceLog(testruns)
4781 createHTML(testruns) 4974 createHTML(testruns)
4975 return testruns
4782 4976
4783# Function: rerunTest 4977# Function: rerunTest
4784# Description: 4978# Description:
@@ -4790,17 +4984,20 @@ def rerunTest():
4790 doError('recreating this html output requires a dmesg file') 4984 doError('recreating this html output requires a dmesg file')
4791 sysvals.setOutputFile() 4985 sysvals.setOutputFile()
4792 vprint('Output file: %s' % sysvals.htmlfile) 4986 vprint('Output file: %s' % sysvals.htmlfile)
4793 if(os.path.exists(sysvals.htmlfile) and not os.access(sysvals.htmlfile, os.W_OK)): 4987 if os.path.exists(sysvals.htmlfile):
4794 doError('missing permission to write to %s' % sysvals.htmlfile) 4988 if not os.path.isfile(sysvals.htmlfile):
4795 processData() 4989 doError('a directory already exists with this name: %s' % sysvals.htmlfile)
4990 elif not os.access(sysvals.htmlfile, os.W_OK):
4991 doError('missing permission to write to %s' % sysvals.htmlfile)
4992 return processData()
4796 4993
4797# Function: runTest 4994# Function: runTest
4798# Description: 4995# Description:
4799# execute a suspend/resume, gather the logs, and generate the output 4996# execute a suspend/resume, gather the logs, and generate the output
4800def runTest(subdir, testpath=''): 4997def runTest():
4801 # prepare for the test 4998 # prepare for the test
4802 sysvals.initFtrace() 4999 sysvals.initFtrace()
4803 sysvals.initTestOutput(subdir, testpath) 5000 sysvals.initTestOutput('suspend')
4804 vprint('Output files:\n\t%s\n\t%s\n\t%s' % \ 5001 vprint('Output files:\n\t%s\n\t%s\n\t%s' % \
4805 (sysvals.dmesgfile, sysvals.ftracefile, sysvals.htmlfile)) 5002 (sysvals.dmesgfile, sysvals.ftracefile, sysvals.htmlfile))
4806 5003
@@ -4897,7 +5094,7 @@ def configFromFile(file):
4897 if(opt.lower() == 'verbose'): 5094 if(opt.lower() == 'verbose'):
4898 sysvals.verbose = checkArgBool(value) 5095 sysvals.verbose = checkArgBool(value)
4899 elif(opt.lower() == 'addlogs'): 5096 elif(opt.lower() == 'addlogs'):
4900 sysvals.addlogs = checkArgBool(value) 5097 sysvals.dmesglog = sysvals.ftracelog = checkArgBool(value)
4901 elif(opt.lower() == 'dev'): 5098 elif(opt.lower() == 'dev'):
4902 sysvals.usedevsrc = checkArgBool(value) 5099 sysvals.usedevsrc = checkArgBool(value)
4903 elif(opt.lower() == 'proc'): 5100 elif(opt.lower() == 'proc'):
@@ -4947,7 +5144,7 @@ def configFromFile(file):
4947 elif(opt.lower() == 'mincg'): 5144 elif(opt.lower() == 'mincg'):
4948 sysvals.mincglen = getArgFloat('-mincg', value, 0.0, 10000.0, False) 5145 sysvals.mincglen = getArgFloat('-mincg', value, 0.0, 10000.0, False)
4949 elif(opt.lower() == 'output-dir'): 5146 elif(opt.lower() == 'output-dir'):
4950 sysvals.setOutputFolder(value) 5147 sysvals.testdir = sysvals.setOutputFolder(value)
4951 5148
4952 if sysvals.suspendmode == 'command' and not sysvals.testcommand: 5149 if sysvals.suspendmode == 'command' and not sysvals.testcommand:
4953 doError('No command supplied for mode "command"') 5150 doError('No command supplied for mode "command"')
@@ -5030,8 +5227,6 @@ def configFromFile(file):
5030# Description: 5227# Description:
5031# print out the help text 5228# print out the help text
5032def printHelp(): 5229def printHelp():
5033 modes = getModes()
5034
5035 print('') 5230 print('')
5036 print('%s v%s' % (sysvals.title, sysvals.version)) 5231 print('%s v%s' % (sysvals.title, sysvals.version))
5037 print('Usage: sudo sleepgraph <options> <commands>') 5232 print('Usage: sudo sleepgraph <options> <commands>')
@@ -5048,7 +5243,7 @@ def printHelp():
5048 print(' If no specific command is given, the default behavior is to initiate') 5243 print(' If no specific command is given, the default behavior is to initiate')
5049 print(' a suspend/resume and capture the dmesg/ftrace output as an html timeline.') 5244 print(' a suspend/resume and capture the dmesg/ftrace output as an html timeline.')
5050 print('') 5245 print('')
5051 print(' Generates output files in subdirectory: suspend-mmddyy-HHMMSS') 5246 print(' Generates output files in subdirectory: suspend-yymmdd-HHMMSS')
5052 print(' HTML output: <hostname>_<mode>.html') 5247 print(' HTML output: <hostname>_<mode>.html')
5053 print(' raw dmesg output: <hostname>_<mode>_dmesg.txt') 5248 print(' raw dmesg output: <hostname>_<mode>_dmesg.txt')
5054 print(' raw ftrace output: <hostname>_<mode>_ftrace.txt') 5249 print(' raw ftrace output: <hostname>_<mode>_ftrace.txt')
@@ -5058,8 +5253,9 @@ def printHelp():
5058 print(' -v Print the current tool version') 5253 print(' -v Print the current tool version')
5059 print(' -config fn Pull arguments and config options from file fn') 5254 print(' -config fn Pull arguments and config options from file fn')
5060 print(' -verbose Print extra information during execution and analysis') 5255 print(' -verbose Print extra information during execution and analysis')
5061 print(' -m mode Mode to initiate for suspend %s (default: %s)') % (modes, sysvals.suspendmode) 5256 print(' -m mode Mode to initiate for suspend (default: %s)') % (sysvals.suspendmode)
5062 print(' -o subdir Override the output subdirectory') 5257 print(' -o name Overrides the output subdirectory name when running a new test')
5258 print(' default: suspend-{date}-{time}')
5063 print(' -rtcwake t Wakeup t seconds after suspend, set t to "off" to disable (default: 15)') 5259 print(' -rtcwake t Wakeup t seconds after suspend, set t to "off" to disable (default: 15)')
5064 print(' -addlogs Add the dmesg and ftrace logs to the html output') 5260 print(' -addlogs Add the dmesg and ftrace logs to the html output')
5065 print(' -srgap Add a visible gap in the timeline between sus/res (default: disabled)') 5261 print(' -srgap Add a visible gap in the timeline between sus/res (default: disabled)')
@@ -5084,17 +5280,20 @@ def printHelp():
5084 print(' -cgphase P Only show callgraph data for phase P (e.g. suspend_late)') 5280 print(' -cgphase P Only show callgraph data for phase P (e.g. suspend_late)')
5085 print(' -cgtest N Only show callgraph data for test N (e.g. 0 or 1 in an x2 run)') 5281 print(' -cgtest N Only show callgraph data for test N (e.g. 0 or 1 in an x2 run)')
5086 print(' -timeprec N Number of significant digits in timestamps (0:S, [3:ms], 6:us)') 5282 print(' -timeprec N Number of significant digits in timestamps (0:S, [3:ms], 6:us)')
5087 print(' [commands]') 5283 print('')
5088 print(' -ftrace ftracefile Create HTML output using ftrace input (used with -dmesg)') 5284 print('Other commands:')
5089 print(' -dmesg dmesgfile Create HTML output using dmesg (used with -ftrace)')
5090 print(' -summary directory Create a summary of all test in this dir')
5091 print(' -modes List available suspend modes') 5285 print(' -modes List available suspend modes')
5092 print(' -status Test to see if the system is enabled to run this tool') 5286 print(' -status Test to see if the system is enabled to run this tool')
5093 print(' -fpdt Print out the contents of the ACPI Firmware Performance Data Table') 5287 print(' -fpdt Print out the contents of the ACPI Firmware Performance Data Table')
5288 print(' -sysinfo Print out system info extracted from BIOS')
5094 print(' -usbtopo Print out the current USB topology with power info') 5289 print(' -usbtopo Print out the current USB topology with power info')
5095 print(' -usbauto Enable autosuspend for all connected USB devices') 5290 print(' -usbauto Enable autosuspend for all connected USB devices')
5096 print(' -flist Print the list of functions currently being captured in ftrace') 5291 print(' -flist Print the list of functions currently being captured in ftrace')
5097 print(' -flistall Print all functions capable of being captured in ftrace') 5292 print(' -flistall Print all functions capable of being captured in ftrace')
5293 print(' -summary directory Create a summary of all tests in this dir')
5294 print(' [redo]')
5295 print(' -ftrace ftracefile Create HTML output using ftrace input (used with -dmesg)')
5296 print(' -dmesg dmesgfile Create HTML output using dmesg (used with -ftrace)')
5098 print('') 5297 print('')
5099 return True 5298 return True
5100 5299
@@ -5102,9 +5301,9 @@ def printHelp():
5102# exec start (skipped if script is loaded as library) 5301# exec start (skipped if script is loaded as library)
5103if __name__ == '__main__': 5302if __name__ == '__main__':
5104 cmd = '' 5303 cmd = ''
5105 cmdarg = '' 5304 outdir = ''
5106 multitest = {'run': False, 'count': 0, 'delay': 0} 5305 multitest = {'run': False, 'count': 0, 'delay': 0}
5107 simplecmds = ['-modes', '-fpdt', '-flist', '-flistall', '-usbtopo', '-usbauto', '-status'] 5306 simplecmds = ['-sysinfo', '-modes', '-fpdt', '-flist', '-flistall', '-usbtopo', '-usbauto', '-status']
5108 # loop through the command line arguments 5307 # loop through the command line arguments
5109 args = iter(sys.argv[1:]) 5308 args = iter(sys.argv[1:])
5110 for arg in args: 5309 for arg in args:
@@ -5135,7 +5334,7 @@ if __name__ == '__main__':
5135 elif(arg == '-f'): 5334 elif(arg == '-f'):
5136 sysvals.usecallgraph = True 5335 sysvals.usecallgraph = True
5137 elif(arg == '-addlogs'): 5336 elif(arg == '-addlogs'):
5138 sysvals.addlogs = True 5337 sysvals.dmesglog = sysvals.ftracelog = True
5139 elif(arg == '-verbose'): 5338 elif(arg == '-verbose'):
5140 sysvals.verbose = True 5339 sysvals.verbose = True
5141 elif(arg == '-proc'): 5340 elif(arg == '-proc'):
@@ -5195,7 +5394,7 @@ if __name__ == '__main__':
5195 val = args.next() 5394 val = args.next()
5196 except: 5395 except:
5197 doError('No subdirectory name supplied', True) 5396 doError('No subdirectory name supplied', True)
5198 sysvals.setOutputFolder(val) 5397 outdir = sysvals.setOutputFolder(val)
5199 elif(arg == '-config'): 5398 elif(arg == '-config'):
5200 try: 5399 try:
5201 val = args.next() 5400 val = args.next()
@@ -5236,7 +5435,7 @@ if __name__ == '__main__':
5236 except: 5435 except:
5237 doError('No directory supplied', True) 5436 doError('No directory supplied', True)
5238 cmd = 'summary' 5437 cmd = 'summary'
5239 cmdarg = val 5438 outdir = val
5240 sysvals.notestrun = True 5439 sysvals.notestrun = True
5241 if(os.path.isdir(val) == False): 5440 if(os.path.isdir(val) == False):
5242 doError('%s is not accesible' % val) 5441 doError('%s is not accesible' % val)
@@ -5260,11 +5459,14 @@ if __name__ == '__main__':
5260 sysvals.mincglen = sysvals.mindevlen 5459 sysvals.mincglen = sysvals.mindevlen
5261 5460
5262 # just run a utility command and exit 5461 # just run a utility command and exit
5462 sysvals.cpuInfo()
5263 if(cmd != ''): 5463 if(cmd != ''):
5264 if(cmd == 'status'): 5464 if(cmd == 'status'):
5265 statusCheck(True) 5465 statusCheck(True)
5266 elif(cmd == 'fpdt'): 5466 elif(cmd == 'fpdt'):
5267 getFPDT(True) 5467 getFPDT(True)
5468 elif(cmd == 'sysinfo'):
5469 sysvals.printSystemInfo()
5268 elif(cmd == 'usbtopo'): 5470 elif(cmd == 'usbtopo'):
5269 detectUSB() 5471 detectUSB()
5270 elif(cmd == 'modes'): 5472 elif(cmd == 'modes'):
@@ -5276,7 +5478,7 @@ if __name__ == '__main__':
5276 elif(cmd == 'usbauto'): 5478 elif(cmd == 'usbauto'):
5277 setUSBDevicesAuto() 5479 setUSBDevicesAuto()
5278 elif(cmd == 'summary'): 5480 elif(cmd == 'summary'):
5279 runSummary(cmdarg, True) 5481 runSummary(outdir, True)
5280 sys.exit() 5482 sys.exit()
5281 5483
5282 # if instructed, re-analyze existing data files 5484 # if instructed, re-analyze existing data files
@@ -5289,21 +5491,43 @@ if __name__ == '__main__':
5289 print('Check FAILED, aborting the test run!') 5491 print('Check FAILED, aborting the test run!')
5290 sys.exit() 5492 sys.exit()
5291 5493
5494 # extract mem modes and convert
5495 mode = sysvals.suspendmode
5496 if 'mem' == mode[:3]:
5497 if '-' in mode:
5498 memmode = mode.split('-')[-1]
5499 else:
5500 memmode = 'deep'
5501 if memmode == 'shallow':
5502 mode = 'standby'
5503 elif memmode == 's2idle':
5504 mode = 'freeze'
5505 else:
5506 mode = 'mem'
5507 sysvals.memmode = memmode
5508 sysvals.suspendmode = mode
5509
5510 sysvals.systemInfo(dmidecode(sysvals.mempath))
5511
5292 if multitest['run']: 5512 if multitest['run']:
5293 # run multiple tests in a separate subdirectory 5513 # run multiple tests in a separate subdirectory
5294 s = 'x%d' % multitest['count'] 5514 if not outdir:
5295 if not sysvals.outdir: 5515 s = 'suspend-x%d' % multitest['count']
5296 sysvals.outdir = datetime.now().strftime('suspend-'+s+'-%m%d%y-%H%M%S') 5516 outdir = datetime.now().strftime(s+'-%y%m%d-%H%M%S')
5297 if not os.path.isdir(sysvals.outdir): 5517 if not os.path.isdir(outdir):
5298 os.mkdir(sysvals.outdir) 5518 os.mkdir(outdir)
5299 for i in range(multitest['count']): 5519 for i in range(multitest['count']):
5300 if(i != 0): 5520 if(i != 0):
5301 print('Waiting %d seconds...' % (multitest['delay'])) 5521 print('Waiting %d seconds...' % (multitest['delay']))
5302 time.sleep(multitest['delay']) 5522 time.sleep(multitest['delay'])
5303 print('TEST (%d/%d) START' % (i+1, multitest['count'])) 5523 print('TEST (%d/%d) START' % (i+1, multitest['count']))
5304 runTest(sysvals.outdir) 5524 fmt = 'suspend-%y%m%d-%H%M%S'
5525 sysvals.testdir = os.path.join(outdir, datetime.now().strftime(fmt))
5526 runTest()
5305 print('TEST (%d/%d) COMPLETE' % (i+1, multitest['count'])) 5527 print('TEST (%d/%d) COMPLETE' % (i+1, multitest['count']))
5306 runSummary(sysvals.outdir, False) 5528 runSummary(outdir, False)
5307 else: 5529 else:
5530 if outdir:
5531 sysvals.testdir = outdir
5308 # run the test in the current directory 5532 # run the test in the current directory
5309 runTest('.', sysvals.outdir) 5533 runTest()
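The final hunk maps the user-facing mem variants back onto the kernel's sysfs vocabulary before the test starts: mem-shallow runs as standby, mem-s2idle as freeze, and plain mem or mem-deep as mem, with sysvals.memmode carrying the variant that executeSuspend() later writes to mem_sleep. A small sketch of just that mapping, with spot checks (map_suspend_mode() is an illustrative name):

    def map_suspend_mode(mode):
        # return (sysfs mode, mem variant) for a user-supplied -m value
        if mode[:3] != 'mem':
            return (mode, '')
        variant = mode.split('-')[-1] if '-' in mode else 'deep'
        if variant == 'shallow':
            return ('standby', variant)
        if variant == 's2idle':
            return ('freeze', variant)
        return ('mem', variant)

    assert map_suspend_mode('freeze')      == ('freeze', '')
    assert map_suspend_mode('mem')         == ('mem', 'deep')
    assert map_suspend_mode('mem-shallow') == ('standby', 'shallow')
    assert map_suspend_mode('mem-s2idle')  == ('freeze', 's2idle')

The same hunk also replaces the old sysvals.outdir plumbing: -multi runs now land in a suspend-xN-yymmdd-HHMMSS folder with one suspend-yymmdd-HHMMSS subdirectory per iteration, and a single run uses -o (if given) as sysvals.testdir.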
diff --git a/tools/power/pm-graph/bootgraph.8 b/tools/power/pm-graph/bootgraph.8
index 55272a67b0e7..dbdafcf546df 100644
--- a/tools/power/pm-graph/bootgraph.8
+++ b/tools/power/pm-graph/bootgraph.8
@@ -8,14 +8,23 @@ bootgraph \- Kernel boot timing analysis
8.RB [ COMMAND ] 8.RB [ COMMAND ]
9.SH DESCRIPTION 9.SH DESCRIPTION
10\fBbootgraph \fP reads the dmesg log from kernel boot and 10\fBbootgraph \fP reads the dmesg log from kernel boot and
11creates an html representation of the initcall timeline up to the start 11creates an html representation of the initcall timeline. It graphs
12of the init process. 12every module init call found, through both kernel and user modes. The
13timeline is split into two phases: kernel mode & user mode. Kernel mode
14represents a single process run on a single cpu with serial init calls.
15Once user mode begins, the init process is called, and the init calls
16start working in parallel.
13.PP 17.PP
14If no specific command is given, the tool reads the current dmesg log and 18If no specific command is given, the tool reads the current dmesg log and
15outputs bootgraph.html. 19outputs a new timeline.
16.PP 20.PP
17The tool can also augment the timeline with ftrace data on custom target 21The tool can also augment the timeline with ftrace data on custom target
18functions as well as full trace callgraphs. 22functions as well as full trace callgraphs.
23.PP
24Generates output files in subdirectory: boot-yymmdd-HHMMSS
25 html timeline : <hostname>_boot.html
26 raw dmesg file : <hostname>_boot_dmesg.txt
27 raw ftrace file : <hostname>_boot_ftrace.txt
19.SH OPTIONS 28.SH OPTIONS
20.TP 29.TP
21\fB-h\fR 30\fB-h\fR
@@ -28,15 +37,18 @@ Print the current tool version
28Add the dmesg log to the html output. It will be viewable by 37Add the dmesg log to the html output. It will be viewable by
29clicking a button in the timeline. 38clicking a button in the timeline.
30.TP 39.TP
31\fB-o \fIfile\fR 40\fB-o \fIname\fR
32Override the HTML output filename (default: bootgraph.html) 41Overrides the output subdirectory name when running a new test.
33.SS "Ftrace Debug" 42Use {date}, {time}, {hostname} for current values.
43.sp
44e.g. boot-{hostname}-{date}-{time}
45.SS "advanced"
34.TP 46.TP
35\fB-f\fR 47\fB-f\fR
36Use ftrace to add function detail (default: disabled) 48Use ftrace to add function detail (default: disabled)
37.TP 49.TP
38\fB-callgraph\fR 50\fB-callgraph\fR
39Use ftrace to create initcall callgraphs (default: disabled). If -filter 51Use ftrace to create initcall callgraphs (default: disabled). If -func
40is not used there will be one callgraph per initcall. This can produce 52is not used there will be one callgraph per initcall. This can produce
41very large outputs, i.e. 10MB - 100MB. 53very large outputs, i.e. 10MB - 100MB.
42.TP 54.TP
@@ -50,16 +62,19 @@ This reduces the html file size as there can be many tiny callgraphs
50which are barely visible in the timeline. 62which are barely visible in the timeline.
51The value is a float: e.g. 0.001 represents 1 us. 63The value is a float: e.g. 0.001 represents 1 us.
52.TP 64.TP
65\fB-cgfilter \fI"func1,func2,..."\fR
66Reduce callgraph output in the timeline by limiting it to a list of calls. The
67argument can be a single function name or a comma delimited list.
68(default: none)
69.TP
53\fB-timeprec \fIn\fR 70\fB-timeprec \fIn\fR
54Number of significant digits in timestamps (0:S, 3:ms, [6:us]) 71Number of significant digits in timestamps (0:S, 3:ms, [6:us])
55.TP 72.TP
56\fB-expandcg\fR 73\fB-expandcg\fR
57pre-expand the callgraph data in the html output (default: disabled) 74pre-expand the callgraph data in the html output (default: disabled)
58.TP 75.TP
59\fB-filter \fI"func1,func2,..."\fR 76\fB-func \fI"func1,func2,..."\fR
60Instead of tracing each initcall, trace a custom list of functions (default: do_one_initcall) 77Instead of tracing each initcall, trace a custom list of functions (default: do_one_initcall)
61
62.SH COMMANDS
63.TP 78.TP
64\fB-reboot\fR 79\fB-reboot\fR
65Reboot the machine and generate a new timeline automatically. Works in 4 steps. 80Reboot the machine and generate a new timeline automatically. Works in 4 steps.
@@ -73,16 +88,23 @@ Show the requirements to generate a new timeline manually. Requires 3 steps.
73 1. append the string to the kernel command line via your native boot manager. 88 1. append the string to the kernel command line via your native boot manager.
74 2. reboot the system 89 2. reboot the system
75 3. after startup, re-run the tool with the same arguments and no command 90 3. after startup, re-run the tool with the same arguments and no command
91
92.SH COMMANDS
93.SS "rebuild"
76.TP 94.TP
77\fB-dmesg \fIfile\fR 95\fB-dmesg \fIfile\fR
78Create HTML output from an existing dmesg file. 96Create HTML output from an existing dmesg file.
79.TP 97.TP
80\fB-ftrace \fIfile\fR 98\fB-ftrace \fIfile\fR
81Create HTML output from an existing ftrace file (used with -dmesg). 99Create HTML output from an existing ftrace file (used with -dmesg).
100.SS "other"
82.TP 101.TP
83\fB-flistall\fR 102\fB-flistall\fR
84Print all ftrace functions capable of being captured. These are all the 103Print all ftrace functions capable of being captured. These are all the
85possible values you can add to trace via the -filter argument. 104possible values you can add to trace via the -func argument.
105.TP
106\fB-sysinfo\fR
107Print out system info extracted from BIOS. Reads /dev/mem directly instead of going through dmidecode.
86 108
87.SH EXAMPLES 109.SH EXAMPLES
88Create a timeline using the current dmesg log. 110Create a timeline using the current dmesg log.
@@ -93,13 +115,13 @@ Create a timeline using the current dmesg and ftrace log.
93.IP 115.IP
94\f(CW$ bootgraph -callgraph\fR 116\f(CW$ bootgraph -callgraph\fR
95.PP 117.PP
96Create a timeline using the current dmesg, add the log to the html and change the name. 118Create a timeline using the current dmesg, add the log to the html and change the folder.
97.IP 119.IP
98\f(CW$ bootgraph -addlogs -o myboot.html\fR 120\f(CW$ bootgraph -addlogs -o "myboot-{date}-{time}"\fR
99.PP 121.PP
100Capture a new boot timeline by automatically rebooting the machine. 122Capture a new boot timeline by automatically rebooting the machine.
101.IP 123.IP
102\f(CW$ sudo bootgraph -reboot -addlogs -o latestboot.html\fR 124\f(CW$ sudo bootgraph -reboot -addlogs -o "latest-{hostname}"\fR
103.PP 125.PP
104Capture a new boot timeline with function trace data. 126Capture a new boot timeline with function trace data.
105.IP 127.IP
@@ -111,7 +133,7 @@ Capture a new boot timeline with trace & callgraph data. Skip callgraphs smaller
111.PP 133.PP
112Capture a new boot timeline with callgraph data over custom functions. 134Capture a new boot timeline with callgraph data over custom functions.
113.IP 135.IP
114\f(CW$ sudo bootgraph -reboot -callgraph -filter "acpi_ps_parse_aml,msleep"\fR 136\f(CW$ sudo bootgraph -reboot -callgraph -func "acpi_ps_parse_aml,msleep"\fR
115.PP 137.PP
116Capture a brand new boot timeline with manual reboot. 138Capture a brand new boot timeline with manual reboot.
117.IP 139.IP
@@ -123,6 +145,15 @@ Capture a brand new boot timeline with manual reboot.
123.IP 145.IP
124\f(CW$ sudo bootgraph -callgraph # re-run the tool after restart\fR 146\f(CW$ sudo bootgraph -callgraph # re-run the tool after restart\fR
125.PP 147.PP
148.SS "rebuild timeline from logs"
149.PP
150Rebuild the html from a previous run's logs, using the same options.
151.IP
152\f(CW$ bootgraph -dmesg dmesg.txt -ftrace ftrace.txt -callgraph\fR
153.PP
154Rebuild the html with different options.
155.IP
156\f(CW$ bootgraph -dmesg dmesg.txt -ftrace ftrace.txt -addlogs\fR
126 157
127.SH "SEE ALSO" 158.SH "SEE ALSO"
128dmesg(1), update-grub(8), crontab(1), reboot(8) 159dmesg(1), update-grub(8), crontab(1), reboot(8)
diff --git a/tools/power/pm-graph/sleepgraph.8 b/tools/power/pm-graph/sleepgraph.8
index 610e72ebbc06..fbe7bd3eae8e 100644
--- a/tools/power/pm-graph/sleepgraph.8
+++ b/tools/power/pm-graph/sleepgraph.8
@@ -39,8 +39,9 @@ Pull arguments and config options from a file.
39\fB-m \fImode\fR 39\fB-m \fImode\fR
40Mode to initiate for suspend e.g. standby, freeze, mem (default: mem). 40Mode to initiate for suspend e.g. standby, freeze, mem (default: mem).
41.TP 41.TP
42\fB-o \fIsubdir\fR 42\fB-o \fIname\fR
43Override the output subdirectory. Use {date}, {time}, {hostname} for current values. 43Overrides the output subdirectory name when running a new test.
44Use {date}, {time}, {hostname} for current values.
44.sp 45.sp
45e.g. suspend-{hostname}-{date}-{time} 46e.g. suspend-{hostname}-{date}-{time}
46.TP 47.TP
@@ -52,7 +53,7 @@ disable rtcwake and require a user keypress to resume.
52Add the dmesg and ftrace logs to the html output. They will be viewable by 53Add the dmesg and ftrace logs to the html output. They will be viewable by
53clicking buttons in the timeline. 54clicking buttons in the timeline.
54 55
55.SS "Advanced" 56.SS "advanced"
56.TP 57.TP
57\fB-cmd \fIstr\fR 58\fB-cmd \fIstr\fR
58Run the timeline over a custom suspend command, e.g. pm-suspend. By default 59Run the timeline over a custom suspend command, e.g. pm-suspend. By default
@@ -91,7 +92,7 @@ Include \fIt\fR ms delay after last resume (default: 0 ms).
91Execute \fIn\fR consecutive tests at \fId\fR seconds intervals. The outputs will 92Execute \fIn\fR consecutive tests at \fId\fR seconds intervals. The outputs will
92be created in a new subdirectory with a summary page: suspend-xN-{date}-{time}. 93be created in a new subdirectory with a summary page: suspend-xN-{date}-{time}.
93 94
94.SS "Ftrace Debug" 95.SS "ftrace debug"
95.TP 96.TP
96\fB-f\fR 97\fB-f\fR
97Use ftrace to create device callgraphs (default: disabled). This can produce 98Use ftrace to create device callgraphs (default: disabled). This can produce
@@ -124,12 +125,6 @@ Number of significant digits in timestamps (0:S, [3:ms], 6:us).
124 125
125.SH COMMANDS 126.SH COMMANDS
126.TP 127.TP
127\fB-ftrace \fIfile\fR
128Create HTML output from an existing ftrace file.
129.TP
130\fB-dmesg \fIfile\fR
131Create HTML output from an existing dmesg file.
132.TP
133\fB-summary \fIindir\fR 128\fB-summary \fIindir\fR
134Create a summary page of all tests in \fIindir\fR. Creates summary.html 129Create a summary page of all tests in \fIindir\fR. Creates summary.html
135in the current folder. The output page is a table of tests with 130in the current folder. The output page is a table of tests with
@@ -146,6 +141,9 @@ with any options you intend to use to see if they will work.
146\fB-fpdt\fR 141\fB-fpdt\fR
147Print out the contents of the ACPI Firmware Performance Data Table. 142Print out the contents of the ACPI Firmware Performance Data Table.
148.TP 143.TP
144\fB-sysinfo\fR
145Print out system info extracted from BIOS. Reads /dev/mem directly instead of going through dmidecode.
146.TP
149\fB-usbtopo\fR 147\fB-usbtopo\fR
150Print out the current USB topology with power info. 148Print out the current USB topology with power info.
151.TP 149.TP
@@ -162,9 +160,16 @@ with -fadd they will also be checked.
162\fB-flistall\fR 160\fB-flistall\fR
163Print all ftrace functions capable of being captured. These are all the 161Print all ftrace functions capable of being captured. These are all the
164possible values you can add to trace via the -fadd argument. 162possible values you can add to trace via the -fadd argument.
163.SS "rebuild"
164.TP
165\fB-ftrace \fIfile\fR
166Create HTML output from an existing ftrace file.
167.TP
168\fB-dmesg \fIfile\fR
169Create HTML output from an existing dmesg file.
165 170
166.SH EXAMPLES 171.SH EXAMPLES
167.SS "Simple Commands" 172.SS "simple commands"
168Check which suspend modes are currently supported. 173Check which suspend modes are currently supported.
169.IP 174.IP
170\f(CW$ sleepgraph -modes\fR 175\f(CW$ sleepgraph -modes\fR
@@ -185,12 +190,8 @@ Generate a summary of all timelines in a particular folder.
185.IP 190.IP
186\f(CW$ sleepgraph -summary ~/workspace/myresults/\fR 191\f(CW$ sleepgraph -summary ~/workspace/myresults/\fR
187.PP 192.PP
188Re-generate the html output from a previous run's dmesg and ftrace log.
189.IP
190\f(CW$ sleepgraph -dmesg myhost_mem_dmesg.txt -ftrace myhost_mem_ftrace.txt\fR
191.PP
192 193
193.SS "Capturing Simple Timelines" 194.SS "capturing basic timelines"
194Execute a mem suspend with a 15 second wakeup. Include the logs in the html. 195Execute a mem suspend with a 15 second wakeup. Include the logs in the html.
195.IP 196.IP
196\f(CW$ sudo sleepgraph -rtcwake 15 -addlogs\fR 197\f(CW$ sudo sleepgraph -rtcwake 15 -addlogs\fR
@@ -204,7 +205,7 @@ Execute a freeze with no wakeup (require keypress). Change output folder name.
204\f(CW$ sudo sleepgraph -m freeze -rtcwake off -o "freeze-{hostname}-{date}-{time}"\fR 205\f(CW$ sudo sleepgraph -m freeze -rtcwake off -o "freeze-{hostname}-{date}-{time}"\fR
205.PP 206.PP
206 207
207.SS "Capturing Advanced Timelines" 208.SS "capturing advanced timelines"
208Execute a suspend & include dev mode source calls, limit callbacks to 5ms or larger. 209Execute a suspend & include dev mode source calls, limit callbacks to 5ms or larger.
209.IP 210.IP
210\f(CW$ sudo sleepgraph -m mem -rtcwake 15 -dev -mindev 5\fR 211\f(CW$ sudo sleepgraph -m mem -rtcwake 15 -dev -mindev 5\fR
@@ -222,8 +223,7 @@ Execute a suspend using a custom command.
222\f(CW$ sudo sleepgraph -cmd "echo mem > /sys/power/state" -rtcwake 15\fR 223\f(CW$ sudo sleepgraph -cmd "echo mem > /sys/power/state" -rtcwake 15\fR
223.PP 224.PP
224 225
225 226.SS "adding callgraph data"
226.SS "Capturing Timelines with Callgraph Data"
227Add device callgraphs. Limit the trace depth and only show callgraphs 10ms or larger. 227Add device callgraphs. Limit the trace depth and only show callgraphs 10ms or larger.
228.IP 228.IP
229\f(CW$ sudo sleepgraph -m mem -rtcwake 15 -f -maxdepth 5 -mincg 10\fR 229\f(CW$ sudo sleepgraph -m mem -rtcwake 15 -f -maxdepth 5 -mincg 10\fR
@@ -235,6 +235,16 @@ Capture a full callgraph across all suspend, then filter the html by a single ph
235\f(CW$ sleepgraph -dmesg host_mem_dmesg.txt -ftrace host_mem_ftrace.txt -f -cgphase resume 235\f(CW$ sleepgraph -dmesg host_mem_dmesg.txt -ftrace host_mem_ftrace.txt -f -cgphase resume
236.PP 236.PP
237 237
238.SS "rebuild timeline from logs"
239.PP
240Rebuild the html from a previous run's logs, using the same options.
241.IP
242\f(CW$ sleepgraph -dmesg dmesg.txt -ftrace ftrace.txt -f\fR
243.PP
244Rebuild the html with different options.
245.IP
246\f(CW$ sleepgraph -dmesg dmesg.txt -ftrace ftrace.txt -addlogs -srgap\fR
247
238.SH "SEE ALSO" 248.SH "SEE ALSO"
239dmesg(1) 249dmesg(1)
240.PP 250.PP